Introduce a render graph
A scene graph provides a way to organize logical items, typically in the form of a tree. It contains items such as surface items, decoration items, and so on.
A paint/render graph defines how the items in the scene graph are rendered.
In many game engines, the render graph consists of render passes and dependencies. However, given that we primarily want to map textures, a simpler graph design will suffice.
The main advantage of a render graph is that it contains everything that's needed to render the next frame. So one could update the render graph on the main thread, and move the recording of rendering commands and other rendering work to a worker thread.
The render graph must contain at least the following node types (the list may be extended in the future):
- a transform node: to transform the coordinate space, e.g. translate
- an opacity node: to change the opacity value
- a texture node: to map a texture with a given shape
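As a sketch, the minimal node set above could be modelled as a tree-shaped enum. All names here (RenderNode, example_graph, the fields) are illustrative, not an existing API:

```rust
// Hypothetical sketch of the minimal node set. A node owns its children,
// so the render graph forms a tree, mirroring the scene graph.
#[derive(Debug)]
enum RenderNode {
    // Transforms the coordinate space of its children, e.g. a translation.
    Transform { dx: f32, dy: f32, children: Vec<RenderNode> },
    // Multiplies the opacity of its children by the given factor.
    Opacity { factor: f32, children: Vec<RenderNode> },
    // Leaf node: maps the texture with the given id onto a rect (x, y, w, h).
    Texture { texture_id: u64, rect: [f32; 4] },
}

// Build the graph for a half-transparent window at (10, 20).
fn example_graph() -> RenderNode {
    RenderNode::Transform {
        dx: 10.0,
        dy: 20.0,
        children: vec![RenderNode::Opacity {
            factor: 0.5,
            children: vec![RenderNode::Texture {
                texture_id: 1,
                rect: [0.0, 0.0, 100.0, 50.0],
            }],
        }],
    }
}

fn main() {
    println!("{:#?}", example_graph());
}
```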
Render graph nodes should not attempt to abstract graphics APIs, so the renderer has more freedom when deciding how to implement a particular node. The nodes must be semantic, i.e. describe what to render rather than how.
Additional nodes that will be needed in the future:
- a cross-fade node: takes two textures and mixes them together based on a blend factor
- a backdrop filter node: applies a graphical effect such as blur to the area behind the child nodes
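The cross-fade node's mixing boils down to a linear interpolation of the two source texels by the blend factor. In practice this would run in a fragment shader; the plain-Rust sketch below only illustrates the math (cross_fade is a hypothetical name):

```rust
// Per-channel linear interpolation, as a cross-fade node would apply it.
// blend_factor is clamped to [0, 1]: 0.0 shows only `from`, 1.0 only `to`.
fn cross_fade(from: [f32; 4], to: [f32; 4], blend_factor: f32) -> [f32; 4] {
    let t = blend_factor.clamp(0.0, 1.0);
    let mut out = [0.0; 4];
    for i in 0..4 {
        out[i] = from[i] * (1.0 - t) + to[i] * t;
    }
    out
}

fn main() {
    let opaque_red = [1.0, 0.0, 0.0, 1.0];
    let opaque_blue = [0.0, 0.0, 1.0, 1.0];
    // Halfway through the animation the result is an even mix.
    println!("{:?}", cross_fade(opaque_red, opaque_blue, 0.5));
}
```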
In order to support layers, the item's render graph sub-tree can be rendered into an offscreen texture, and a texture node that maps the offscreen texture can be inserted into the render graph. If the item should stay offscreen, the texture node that maps the layer texture is simply omitted.
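A rough sketch of that rule, with hypothetical types (Node, layer_node, render_to_offscreen) standing in for the real renderer. The sub-tree is rasterized into an offscreen texture, and a texture node mapping it is inserted only when the item is actually on screen:

```rust
// Illustrative node type, reduced to the two cases the sketch needs.
#[derive(Debug, PartialEq)]
enum Node {
    Texture { texture_id: u64 },
    Subtree(Vec<Node>),
}

// Stand-in for the renderer: rasterize a sub-tree into an offscreen
// texture and return its id. A real implementation would record draw
// commands; the constant id here is purely illustrative.
fn render_to_offscreen(_subtree: &Node) -> u64 {
    42
}

// If the item is a layer, its sub-tree is replaced with a texture node
// that maps the offscreen texture; if the item should stay offscreen,
// nothing is inserted into the render graph at all.
fn layer_node(subtree: &Node, offscreen_only: bool) -> Option<Node> {
    let texture_id = render_to_offscreen(subtree);
    if offscreen_only {
        None
    } else {
        Some(Node::Texture { texture_id })
    }
}

fn main() {
    let subtree = Node::Subtree(vec![Node::Texture { texture_id: 7 }]);
    // A regular layer gets a texture node that maps the offscreen texture.
    println!("{:?}", layer_node(&subtree, false));
    // An offscreen-only item inserts nothing into the graph.
    println!("{:?}", layer_node(&subtree, true));
}
```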
I would like to explore the possibility of generating immutable render graphs (in the future). That way, the renderer would be able to compute the dirty region for us by diffing the previous and the current render graph, which should be less error-prone than manually specifying dirty regions.
Re-creating the render graph on every frame might be inefficient; there are several ways around it:
- provide interior mutability (sort of like Rust's RefCell), i.e. if only the texture has been damaged, it is fine not to re-create the texture node
- allocate render graph nodes from object pools. However, it would be nice to do some benchmarking before jumping to any conclusions
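The interior-mutability option could look roughly like this; TextureNode and TextureHandle are made-up names, and the RefCell stands in for whatever update mechanism the real node type would use:

```rust
use std::cell::RefCell;

// Illustrative texture handle; in a real renderer this would wrap a GPU object.
#[derive(Debug, PartialEq)]
struct TextureHandle(u64);

// The texture node keeps its handle behind a RefCell, so the compositor can
// swap the handle in place when only the window's buffer is damaged,
// instead of re-creating the node (and everything above it).
struct TextureNode {
    texture: RefCell<TextureHandle>,
}

fn main() {
    let node = TextureNode {
        texture: RefCell::new(TextureHandle(1)),
    };

    // The client attached a new buffer: update the handle, keep the node.
    *node.texture.borrow_mut() = TextureHandle(2);
    println!("current texture: {:?}", node.texture.borrow());
}
```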
In general, it's still all up in the air. For the time being, it's worth sticking with mutable render graphs.
Sometimes, a render graph node may need the contents of the frame rendered so far. For example, one such case arises when blurring the background of a window.
In order to allow implementing features such as the blur effect, the render graph must provide the ability to "read back" the framebuffer. This can be accomplished by creating a special-purpose texture, e.g. Readback, which the renderer updates automatically whenever the source rect is repainted.
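A minimal sketch of how such a Readback texture could behave, assuming hypothetical Rect/Readback types and using a generation counter in place of the actual framebuffer copy:

```rust
// Axis-aligned rect; purely illustrative.
#[derive(Debug, Clone, Copy)]
struct Rect { x: i32, y: i32, w: i32, h: i32 }

impl Rect {
    fn intersects(&self, other: &Rect) -> bool {
        self.x < other.x + other.w
            && other.x < self.x + self.w
            && self.y < other.y + other.h
            && other.y < self.y + self.h
    }
}

struct Readback {
    source_rect: Rect,
    // Generation counter standing in for the actual framebuffer copy.
    generation: u64,
}

impl Readback {
    // Called by the renderer after repainting part of the frame. The copy
    // is refreshed only if the repainted area overlaps the source rect;
    // otherwise the cached contents stay valid.
    fn notify_repainted(&mut self, damage: &Rect) {
        if self.source_rect.intersects(damage) {
            self.generation += 1; // a real renderer would re-copy the framebuffer here
        }
    }
}

fn main() {
    let mut readback = Readback {
        source_rect: Rect { x: 0, y: 0, w: 100, h: 100 },
        generation: 0,
    };
    readback.notify_repainted(&Rect { x: 50, y: 50, w: 10, h: 10 }); // overlaps: refreshed
    readback.notify_repainted(&Rect { x: 200, y: 0, w: 10, h: 10 }); // misses: cache reused
    println!("generation = {}", readback.generation);
}
```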