

WebKit Coordinated Graphics System

This wiki page describes the coordinated graphics architecture currently used by Qt in WebKit2, parts of which are also used by other ports.


A good read before this page is Chromium's documentation on GPU accelerated compositing. It explains in detail the difference between standard rendering in WebCore and the accelerated compositing code path; without understanding that difference, the rest of this document might not be very useful.

Other blog posts that can help include the Qt scenegraph posts explaining the reasons for going with TextureMapper rather than QGraphicsView, which also help explain our progress on the subject over the last 2.5 years. See also the WebKit2 wiki page.

This document is here to serve as a snapshot of current architecture for hardware-acceleration in the Qt port. Though all of the components described here are used by the Qt port, most of them are not used exclusively by the Qt port. Every port has a different blend of components it uses.

The main components described in this document:

  • TextureMapper - a light-weight scenegraph implementation tuned specifically for WebKit accelerated compositing. Used by Qt, EFL and GTK+.

  • Coordinated Compositing - an architecture for WebKit2 that allows using accelerated compositing in a dual-process environment. EFL is, at the time of writing this document, in the process of switching to coordinated compositing.

  • Coordinated Backing Store - how the compositing environment deals with software-rendered buffers or WebGL buffers, e.g. from the point of view of memory and pixel formats. This includes several components, some used only together with coordinated compositing, some also used elsewhere.

  • QtScenegraph integration - how the WebKit way of rendering is glued to the QtQuick scenegraph way of rendering. This is naturally used by the Qt port only.

  • CSS Animation Support

  • WebGL Support

The following diagram shows the different components and classes, and how they all work together. More explanation follows below.

Texture Mapper

TextureMapper is a light-weight scenegraph implementation that is specially attuned for efficient GPU or software rendering of CSS3 composited content, such as 3D transforms, animations and canvas.

TextureMapper is pretty well-contained, and apart from the backing-store integration, it is thread safe.

TextureMapper includes/uses the following classes. Note that the last two (those without TextureMapper in the name) do not depend on TextureMapper, but are currently used only by it.

TextureMapper is an abstract class that provides the necessary drawing primitives for the scenegraph. Unlike GraphicsContext or QPainter, it does not try to provide all possible drawing primitives, nor does it try to be a public API. Its only purpose is to abstract different implementations of the drawing primitives from the scenegraph.

It is called TextureMapper because its main function is to figure out how to do texture-mapping, shading, transforming and blending a texture to a drawing buffer. TextureMapper also includes a class called BitmapTexture, which is a drawing buffer that can either be manipulated by a backing store, or become a drawing target for TextureMapper. BitmapTexture is roughly equivalent to an ImageBuffer, or to a combination of a texture and an FBO.

TextureMapperImageBuffer is a software implementation of the TextureMapper drawing primitives.

It is currently used by Qt in WebKit1 when using a QWebView, and also by the GTK+ port as a fallback when OpenGL is not available. TextureMapperImageBuffer uses normal GraphicsContext operations to draw, so it does not make effective use of the GPU. However, unlike regular drawing in WebCore, it does use backing stores for the layers, which can provide some "soft" acceleration.

TextureMapperGL is the GPU-accelerated implementation of the drawing primitives. It is currently used by Qt-WebKit2, GTK+ and EFL(?). TextureMapperGL uses shaders compatible with GL ES 2.0, though we’re in the process of converting it to use the WebGL infrastructure directly, which would help with shader compatibility.

TextureMapperGL uses scissors for clipping whenever it can; when the clip is not rectangular, it falls back to the stencil buffer.

Other things that are handled specially in TextureMapperGL are masks, which are rendered by multi-texturing the content texture with the mask texture, and filters, which are handled with special shaders. CSS shaders are not yet implemented in TextureMapperGL; that work is in the pipeline.

TextureMapperLayer is the class that represents a node in the GPU-renderable layer tree. It maintains its own tree hierarchy, equivalent to the GraphicsLayer tree hierarchy, but unlike the GraphicsLayer hierarchy it is thread safe.

The main function of TextureMapperLayer is to determine the render order and the use of intermediate surfaces. For example, opacity is to be applied on the rendering results of a subtree, and not separately for each layer.

The correct order of render phases for a subtree of layers is as follows:

  1. filters
  2. mask
  3. reflection
  4. opacity

However, rendering each of these phases into an intermediate surface would be costly on some GPUs. Therefore, TextureMapperLayer does its best to avoid using intermediate surfaces unless absolutely needed. For example, content that has reflections and no masks or opacity would not need intermediate surfaces at all, and the same goes for content with mask/opacity that does not have overlapping sub layers.
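The decision described above can be sketched as a predicate over the layer state. This is a simplified illustration with hypothetical names, not the actual TextureMapperLayer logic, which tracks more state:

```cpp
#include <cassert>

// Simplified sketch of the intermediate-surface decision. A real
// TextureMapperLayer considers more state; this captures only the
// cases described in the text above.
struct LayerState {
    bool hasFilters = false;
    bool hasMask = false;
    float opacity = 1.0f;
    bool hasOverlappingSublayers = false;
};

// An intermediate surface is needed when an effect must be applied to the
// rendering result of a whole subtree rather than to each layer separately.
bool needsIntermediateSurface(const LayerState& s)
{
    // Filters always operate on the flattened subtree.
    if (s.hasFilters)
        return true;
    // Masks/opacity force a surface only when sublayers overlap; otherwise
    // applying the effect per-layer gives the same visual result.
    if ((s.hasMask || s.opacity < 1.0f) && s.hasOverlappingSublayers)
        return true;
    // A reflection alone can be rendered by drawing the subtree twice.
    return false;
}
```

Avoiding the surface in the common cases matters because each intermediate surface costs an extra render target switch and a full-screen blend on the GPU.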

Another use of TextureMapperLayer is to tie together TextureMapper, GraphicsLayerTextureMapper, TextureMapperBackingStore, GraphicsLayerAnimation and GraphicsLayerTransform. It includes all the synchronization phases, which are responsible for timing when changes coming from different sources actually take effect. Without proper synchronization, flickering or rendering artifacts often occur.

GraphicsLayerTextureMapper is the glue between GraphicsLayer and TextureMapperLayer. Since TextureMapperLayer is thread-safe and GraphicsLayer is not, GraphicsLayerTextureMapper provides the necessary synchronization; it does not do anything else of particular interest.

TextureMapperBackingStore is what glues TextureMapperLayer with different backing stores. It allows the type of backing store to be abstracted away from TextureMapperLayer, allowing TextureMapperLayer to be used both with a standard tiled backing store for WebKit1 (TextureMapperTiledBackingStore), a GraphicsSurface-backed backing store for WebGL (TextureMapperSurfaceBackingStore) or a backing store for WebKit2 (WebKit::LayerBackingStore).

The implementation in TextureMapperTiledBackingStore is pretty simple, and is used only to work around the 2000x2000 texture size limitation in OpenGL. It is very different from the dynamic tiled backing store, discussed later.
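The texture-size workaround amounts to splitting a large layer into a grid of tiles, each no bigger than the maximum texture size. A minimal sketch (the constant and helper name are illustrative):

```cpp
#include <cassert>

// The OpenGL texture size limit mentioned above.
const int kMaxTextureSize = 2000;

// Number of tiles needed to cover a layer of the given size when each
// tile is at most kMaxTextureSize on a side (ceiling division per axis).
int tileCount(int width, int height)
{
    int cols = (width + kMaxTextureSize - 1) / kMaxTextureSize;
    int rows = (height + kMaxTextureSize - 1) / kMaxTextureSize;
    return cols * rows;
}
```

For example, a 4096x1000 layer needs three tiles side by side, while anything up to 2000x2000 fits in a single texture.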

GraphicsLayerAnimation is a class that replicates the AnimationController logic in WebCore, but does so without being coupled with particular WebCore primitives like the document.

GraphicsLayerAnimation can receive an accelerated animation “spec” (WebCore::Animation, KeyframeValueList), and process the animation at different points in time, interpolating the opacity, transforms (and filters in the future), sending them back to the animation client, in this case being TextureMapperLayer.

GraphicsLayerTransform is responsible for the transformation matrix calculations based on the rules defined by GraphicsLayer. Several properties determine a node's transformation matrix: its parent matrix, the anchor point, the local transformation, position, scrolling adjustment, perspective and the preserves-3d flag. These properties are kept separate because the transformation might be animated using GraphicsLayerAnimation, which makes it impossible to multiply them together in advance.

What’s missing from TextureMapper

  • CSS shaders
  • Support for mask/reflection edge cases
  • Better readability in TextureMapperShaderManager
  • Using GraphicsContext3D instead of OpenGL directly (this is underway)
  • Support for dynamic TiledBackingStore per layer in WebKit1
  • Make it better and faster :)

Coordinated Compositing

Coordinated compositing, guarded by USE(UI_SIDE_COMPOSITING), is a WebKit2 implementation of accelerated compositing. It synchronizes the layer tree provided by WebCore in the web process with a proxied layer tree in the UI process that is responsible for the actual GPU rendering to the screen.

The coordinated compositing code might be a bit tricky to read, because of its asynchronous nature on top of the WebKit2 IPC mechanism.

An important concept to understand is the roles of the UI and the web processes: The web process owns the content and the UI process owns the viewport. Thus, the UI process would know about viewport and user interactions before the web process, and the web process would know about content changes (e.g. a CSS layer getting added) before the UI process.

The web process owns the frame and the content. It decides when to render a new frame, which information to include in the layer tree, how the backing stores behave, and also how the animations are timed. The UI process is responsible for rendering the latest frame from the web process, compensating for user interaction not yet committed by the web process. An example of this is scrolling adjustment: the web process tells the UI process which layers are fixed to the viewport, and the UI process knows the exact scroll position for each paint; it is the UI process' job to account for the difference.

Another role of the UI process is to periodically notify the web process of viewport changes (e.g. panning), allowing the web process to decide whether to create or destroy tiles, and to move content into view when writing in text fields.
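The fixed-layer compensation boils down to offsetting fixed-to-viewport layers by the scroll delta that the web process has not yet seen. A sketch with hypothetical names:

```cpp
// Minimal sketch of UI-side scrolling compensation for fixed layers.
struct Point {
    double x, y;
};

// The web process committed its frame at committedScroll; the UI process
// is painting at currentScroll. Fixed layers must be shifted by the delta
// so they appear stationary relative to the viewport.
Point fixedLayerAdjustment(Point committedScroll, Point currentScroll)
{
    return { currentScroll.x - committedScroll.x,
             currentScroll.y - committedScroll.y };
}
```

When the web process eventually commits a frame at the new scroll position, the delta returns to zero and the adjustment disappears.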


WebGraphicsLayer glues together the composited content that WebCore creates as GraphicsLayers and the coordinated graphics system. WebGraphicsLayer does not deal with IPC directly. Instead, it saves the information gathered from the compositor and prepares it for serialization. It maintains an ID for each layer, which allows naming each layer, links a layer ID with its backing store, and deals with directly composited images and how they are serialized.

LayerTreeCoordinator is the “boss” for synchronizing the content changes and the viewport changes. It acts on a frame-by-frame basis. For each frame it retrieves the up-to-date layer information from the different WebGraphicsLayers, makes sure the backing-store content is ready to be serialized, prepares the information needed for scroll adjustment, and passes all that info to the UI process in a series of messages for each frame.

LayerTreeCoordinator maintains a map of “directly composited images” - images that are rendered once and used multiple times in the same texture. LayerTreeCoordinator maintains the ref-counted lifecycle of such an image, signaling to the UI process when an image is no longer used and its backing texture can be destroyed.
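The ref-counted lifecycle can be sketched as a small registry; the class and method names below are hypothetical, not the actual LayerTreeCoordinator API:

```cpp
#include <cstdint>
#include <map>

// Sketch of ref-counting for directly composited images. When the count
// drops to zero, the coordinator would message the UI process to destroy
// the image's backing texture.
class DirectImageRegistry {
public:
    void ref(int64_t imageID) { ++m_refCounts[imageID]; }

    // Returns true when the image became unused and should be released
    // in the UI process.
    bool deref(int64_t imageID)
    {
        if (--m_refCounts[imageID] > 0)
            return false;
        m_refCounts.erase(imageID);
        return true;
    }

    bool isInUse(int64_t imageID) const { return m_refCounts.count(imageID) > 0; }

private:
    std::map<int64_t, int> m_refCounts;
};
```

Sharing one texture across every layer that shows the same image is what makes this worthwhile: the pixels are uploaded once, no matter how many times the image appears.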

LayerTreeCoordinator is also responsible for two special layers that are not recognized as "composited" by WebCore: the layer for the main web content, known as the non-composited content layer, and the overlay layer, which paints things such as scrollbars and tap indicators.

Another important role of LayerTreeCoordinator is to receive the viewport information from the UI process, and to propagate that information to the different backing-stores so that they can prepare to create/destroy tiles.

LayerTreeCoordinatorProxy is what binds together the viewport, the GPU renderer and the contents. It does not have functionality of its own; instead, it acts as a message hub between those components.

WebLayerTreeRenderer is the class that knows how to turn serialized information from the web process and viewport information from the view classes into a GPU-renderable tree of TextureMapperLayers. It maintains a map of layer IDs to layers, so that incoming messages that reference a layer by ID can be applied to the correct TextureMapperLayer.
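The ID-to-layer map can be sketched as a create-on-first-mention lookup; the names below are hypothetical stand-ins for the WebLayerTreeRenderer internals:

```cpp
#include <cstdint>
#include <map>
#include <memory>

// Stand-in for TextureMapperLayer in this sketch.
struct StubLayer {
    float opacity = 1.0f;
};

// Messages from the web process reference layers only by ID, so the
// renderer creates a layer the first time an ID is mentioned and looks
// it up on every subsequent message.
class LayerMapSketch {
public:
    StubLayer& ensureLayer(int64_t layerID)
    {
        auto it = m_layers.find(layerID);
        if (it == m_layers.end())
            it = m_layers.emplace(layerID, std::make_unique<StubLayer>()).first;
        return *it->second;
    }

    size_t layerCount() const { return m_layers.size(); }

private:
    std::map<int64_t, std::unique_ptr<StubLayer>> m_layers;
};
```

Creating layers lazily keeps the protocol simple: the web process never has to wait for an acknowledgement that a layer exists before sending updates for it.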

What’s missing from Coordinated Compositing

  • Coordinating animations separately from normal rendering, e.g. in a thread. See CSS Animations.
  • Using ScrollingCoordinator instead of the home-brewed fixed-position adjustments, and using it to support UI-side overflow:scroll. There is still a lot to do around scrolling.
  • Serializing and coordinating CSS shaders, once they're ready in TextureMapper.
  • VSync support in the UI process, separately from QtScenegraph.

Coordinated Backing Store

Unlike coordinated compositing, which mainly includes lightweight rendering information about each layer, backing stores contain pixel data, and are thus both memory-hungry and expensive to copy and serialize. Backing stores are drawn into by software or by hardware, depending on the scenario, and thus require binding to platform/OS/GPU-specific code.

The current Qt architecture is based on a dynamic tiled backing-store. This means that for each layer in the layer tree, not all content is saved in GPU memory the whole time, but rather only the visible contents plus some cover area around it, depending on heuristics. The TiledBackingStore system is responsible for maintaining these heuristics, based on variables such as the scroll position, contents scale and panning trajectory.


TiledBackingStore is the class that makes the decisions about tiles, creating or destroying them, dealing with the cover-rect heuristics that are based on the scroll position, contents size and panning trajectory. TiledBackingStore relies on an abstract WebCore::Tile implementation, allowing for different decisions around drawing the tiles synchronously or asynchronously.
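The cover-rect heuristic can be illustrated as inflating the visible rect and biasing it in the panning direction. The multipliers and names below are illustrative, not the actual TiledBackingStore values:

```cpp
// Simplified sketch of a cover-rect heuristic: keep tiles for the visible
// rect plus extra coverage, skewed toward where the user is heading so
// tiles are ready before they scroll into view.
struct Rect {
    double x, y, w, h;
};

// trajectoryX/Y are in [-1, 1]: 0 means no panning on that axis, 1 means
// panning toward positive coordinates at full speed.
Rect computeCoverRect(const Rect& visible, double trajectoryX, double trajectoryY)
{
    Rect cover = visible;
    // Cover twice the visible area on each axis...
    cover.w = visible.w * 2;
    cover.h = visible.h * 2;
    // ...centered on the viewport, shifted along the panning trajectory.
    cover.x = visible.x - visible.w * 0.5 + trajectoryX * visible.w * 0.5;
    cover.y = visible.y - visible.h * 0.5 + trajectoryY * visible.h * 0.5;
    return cover;
}
```

Tiles inside the cover rect are created (or kept), and tiles outside it are destroyed, which bounds GPU memory per layer regardless of content size.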

TiledBackingStoreRemoteTile is a web-process backend for WebCore::Tile, allowing software rendering of tiles into the coordinated graphics system. It's the tile-equivalent of WebGraphicsLayer, maintaining a tile-ID map. It uses a special client (a superclass of LayerTreeCoordinator) to allocate content surfaces and send the messages to the UI process.

ShareableBitmap is used by Apple and other ports, but in this context it is used as a software backing-store for content updates. When there is no platform-specific GraphicsSurface implementation, ShareableBitmap acts as a fallback that uses standard shared memory as a backing store for the update, and then updates the TextureMapper GPU backing-stores (BitmapTextures) with the contents from that shared memory.

ShareableBitmaps are also used as a backing-store for uploading “directly composited images”, images that are rendered once in software and then reused as a texture multiple times.

GraphicsSurface is an abstraction for a platform-specific buffer that can share graphics data across processes. It is used in two ways, probably more in the future:

  1. “fast texture uploads”: by painting into a graphics surface in the web process, and then copying into a texture in the UI process, we can avoid expensive texture uploads with color conversions, and instead paint directly to a GPU-friendly (probably DMA) memory buffer.
  2. WebGL: to allow compositing WebGL content with the rest of the layers, we need to render WebGL content into a surface in the web process and then use the result in the UI process, all that without deep copying of the pixel data.

GraphicsSurfaces are currently supported only on Qt-Mac and GLX, and are currently enabled only for WebGL and not for fast texture uploads. This is work in progress.

ShareableSurface is a glue class that abstracts away the differences between GraphicsSurface and ShareableBitmap for the purposes of serialization. This is a convenience, to allow us to avoid adding #ifdefs or special code-paths for GraphicsSurface in too many places in WebKit2.

LayerBackingStore synchronizes remote tiles and ShareableSurfaces with TextureMapper in the UI process. It is responsible for copying or swapping the data in the BitmapTexture with new data or updates from ShareableSurface. It also knows how to paint the different tiles with relation to the parameters received from TextureMapperLayer.

UpdateAtlas is a graphics-memory optimization, designed to avoid fragmented allocations of small update buffers. Instead of allocating small buffers as it goes along, UpdateAtlas allocates large graphics buffers, and manages the update-patch allocations by itself, using square shapes inside the large buffers as sub-images to store temporary pixel data.

What’s missing from Coordinated Backing Stores

  • More platform support for GraphicsSurfaces – e.g. on some embedded/mobile systems.
  • Graceful handling of low memory situations, e.g. by visible incremental updates to tiles using a smaller UpdateAtlas.
  • Allow different pixel formats for tiles, e.g. 16-bit for opaque tiles and 8-bit for masks.

QtScenegraph integration

To integrate WebKit coordinated graphics with the QtScenegraph, the following main things were necessary:

  1. Making WebLayerTreeRenderer thread-safe, so it can run in QtScenegraph's rendering thread.
  2. Using a private API from QtScenegraph (QSGRenderNode) which allows us to render WebKit's layer tree without rendering into an intermediate FBO.
  3. Synchronizing the remote content coming from the UI process at a time that is right for QtScenegraph.
  4. Propagating the clipping, transform and opacity attributes from QtScenegraph to WebLayerTreeRenderer.
  5. Rendering the view's background.

The class that handles most of this is QtWebPageSGNode. Its main function, “render”, is used to translate QtScenegraph parameters to WebLayerTreeRenderer parameters. The SG node contains 3 nodes - a root node that controls the viewport transform, a background node, and a custom contents node which renders the actual web contents using WebLayerTreeRenderer. Some other functionality is in QQuickWebPage::updatePaintNode.

CSS Animation Support

Currently, in WebKit2, CSS animation frames are handled together with all other frames: the web process re-lays out content for each frame, and sends the new transform/opacity/filter information together with the rest of the frame. The requestAnimationFrame feature, which allows synchronizing animations with the display refresh, is synchronized with LayerTreeCoordinator to make sure those animation frames are interpolated as accurately and smoothly as possible, avoiding choppiness and throttling.

One thing that is missing is threaded animations: allowing animations to continue to produce frames while lengthy operations in the web process are taking place. This will allow animations to appear smooth while elements that are not related to that animation are being rendered into the backing store.

This is a somewhat tricky thing to achieve, mainly because animations still need to sometimes sync with the non-animated contents.

WebGL Support

There are two possible approaches to WebGL. The current approach uses GraphicsSurfaces, allowing the web process to render with GPU into a platform-specific buffer, later compositing it in the UI process. This approach is somewhat easier to implement, but might not be efficient on some GPU systems, and can also create security issues if the web process has direct access to the GPU. The other option is to serialize the WebGL display list to the UI process, making the actual GL calls there, into an FBO. This is a safer and potentially more cross-platform approach, however it's yet to be seen how much of an undertaking it is, and how well it scales.

