What's New in WebGPU (Chrome 132)

François Beaufort

Published: January 8, 2025

Texture view usage

GPU texture views currently inherit all usage flags from their source GPU texture. This can be problematic as some view formats are incompatible with certain usages. To address this issue, calling createView() with the optional usage member lets you explicitly specify a subset of the source texture's usage flags that are compatible with the chosen view format.

This change allows for upfront validation and more fine-grained control over how the view is used. It also aligns with other graphics APIs where usage flags are common parameters in view creation, offering optimization opportunities.

See the following snippet, the chromestatus entry, and issue 363903526.

const texture = myDevice.createTexture({
  size: [4, 4],
  format: "rgba8unorm",
  usage:
    GPUTextureUsage.RENDER_ATTACHMENT |
    GPUTextureUsage.TEXTURE_BINDING |
    GPUTextureUsage.STORAGE_BINDING,
  viewFormats: ["rgba8unorm-srgb"],
});

const view = texture.createView({
  format: "rgba8unorm-srgb",
  usage: GPUTextureUsage.RENDER_ATTACHMENT, // Restrict allowed usage.
});

32-bit float textures blending

32-bit floating-point textures are essential for HDR rendering, where they preserve a wide range of color values and prevent color banding artifacts, for example in scientific visualization.

The new "float32-blendable" GPU feature makes GPU textures with formats "r32float", "rg32float", and "rgba32float" blendable. Creating a render pipeline that uses blending with any float32-format attachment is now possible when requesting a GPU device with this feature.

See the following snippet, the chromestatus entry, and issue 369649348.

const adapter = await navigator.gpu.requestAdapter();
if (!adapter.features.has("float32-blendable")) {
  throw new Error("32-bit float textures blending support is not available");
}
// Explicitly request 32-bit float textures blending support.
const device = await adapter.requestDevice({
  requiredFeatures: ["float32-blendable"],
});

// ... Creation of shader modules is omitted for readability.

// Create a render pipeline that uses blending for the rgba32float format.
device.createRenderPipeline({
  vertex: { module: myVertexShaderModule },
  fragment: {
    module: myFragmentShaderModule,
    targets: [
      {
        format: "rgba32float",
        blend: { color: {}, alpha: {} },
      },
    ],
  },
  layout: "auto",
});

// Create the GPU texture with rgba32float format and
// send the appropriate commands to the GPU...

GPUDevice adapterInfo attribute

It's important for libraries that take user-provided GPUDevice objects to access information about the physical GPU, as they may need to optimize or implement workarounds based on the GPU architecture. While it is possible to access this information through the GPUAdapter object, there is no direct way to get it from a GPUDevice alone. This can be inconvenient, as it may require users to provide additional information alongside the GPUDevice.

To address this problem, GPUAdapterInfo is now exposed through the GPUDevice adapterInfo attribute, similar to the existing GPUAdapter info attribute.

See the following snippet, the chromestatus entry, and issue 376600838.

function optimizeForGpuDevice(device) {
  if (device.adapterInfo.vendor === "amd") {
    // Use AMD-specific optimizations.
  } else if (device.adapterInfo.architecture.includes("turing")) {
    // Optimize for NVIDIA Turing architecture.
  }
}

Configuring canvas context with invalid format throws JavaScript error

Previously, using an invalid texture format with the configure() method of the GPU canvas context resulted in a GPU validation error. This has been changed to throw a JavaScript TypeError. This prevents scenarios where getCurrentTexture() returns a valid GPU texture despite the GPU canvas context being configured incorrectly. More information can be found in issue 372837859.
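A minimal sketch of the new behavior; the canvas and device variable names are illustrative, and "depth24plus" stands in for any format that is not a valid canvas texture format:

```javascript
const context = myCanvas.getContext("webgpu");

try {
  // "depth24plus" is not a valid canvas texture format, so configure()
  // now throws a TypeError synchronously instead of generating a GPU
  // validation error later.
  context.configure({ device: myDevice, format: "depth24plus" });
} catch (error) {
  console.warn(error); // TypeError
}
```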

Filtering sampler restrictions on textures

Using "sint", "uint", and "depth" format textures with filtering samplers was previously allowed. WebGPU now correctly disallows using an "sint" or "uint" format texture with a filtering sampler. Note that it currently emits a warning if you use a "depth" texture with a filtering sampler, as this will be disallowed in the future. See issue 376497143.

These restrictions mean that using a depth texture with a non-filtering sampler requires manually creating bind group layouts, because the "auto" generated bind group layouts don't support this combination yet. Spec issue 4952 contains a proposal under consideration to address this limitation in the future.

Extended subgroups experimentation

The subgroups experimentation, initially set to end in Chrome 131, has been extended to Chrome 133, concluding on April 16, 2025. While the first origin trial focused on performance, it lacked crucial portability safeguards. These safeguards will now be added, potentially causing errors in existing code.

Improving developer experience

A warning is now visible in DevTools when the powerPreference option is used with requestAdapter() on Windows. This warning will be removed when Chrome knows how to use two different GPUs and composite the results between them. See issue 369219127.

The size of the GPU buffer is now present in the error message when creating a GPU buffer that is too large. See issue 374167798.

Experimental support for 16-bit normalized texture formats

16-bit signed normalized and unsigned normalized texture formats are now available experimentally behind the "chromium-experimental-snorm16-texture-formats" and "chromium-experimental-unorm16-texture-formats" GPU features respectively, while they're being discussed for standardization.

These features add support for 16-bit normalized texture formats with COPY_SRC, COPY_DST, TEXTURE_BINDING, RENDER_ATTACHMENT usages, multisampling, and resolving capabilities. The additional formats are "r16unorm", "rg16unorm", "rgba16unorm", "r16snorm", "rg16snorm", and "rgba16snorm".

Until these experimental features are standardized, enable the "Unsafe WebGPU Support" flag at chrome://flags/#enable-unsafe-webgpu to make them available in Chrome.

See the following snippet and issue 374790898.

const adapter = await navigator.gpu.requestAdapter();
if (!adapter.features.has("chromium-experimental-snorm16-texture-formats")) {
  throw new Error("16-bit signed normalized formats support is not available");
}
// Explicitly request 16-bit signed normalized formats support.
const device = await adapter.requestDevice({
  requiredFeatures: ["chromium-experimental-snorm16-texture-formats"],
});

// Create a texture with the rgba16snorm format which consists of four
// components, each of which is a 16-bit, normalized, signed integer value.
const texture = device.createTexture({
  size: [4, 4],
  format: "rgba16snorm",
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

// Send the appropriate commands to the GPU...

Dawn updates

The EnumerateFeatures(FeatureName * features) methods from wgpu::Adapter and wgpu::Device are deprecated in favor of using GetFeatures(SupportedFeatures * features). See issue 368672123.

The webgpu.h C API has changed all char const * to a WGPUStringView structure that defines a view into a UTF-8 encoded string. It acts like a pointer to the string's data, coupled with a length. This lets you work with parts of a string without needing to copy it. See issue 42241188.

This covers only some of the key highlights. Check out the exhaustive list of commits.
