make multi gpu not suck

Xaver Hugl requested to merge work/zamundaaa/mgpu-experiments into master

Currently, our multi-GPU paths look like this:

  1. allocate a gbm_bo on the compositing GPU and import it directly into the displaying GPU's display controller. This is fast and, ignoring the edge case of a static image, also very efficient, but it only works on very few systems
  2. allocate a gbm_bo on the compositing GPU and copy it into a dumb buffer on the displaying GPU. The copy is done with the CPU, so it's pretty slow and inefficient, especially at high resolutions, but it works everywhere (sketched below, after this list)

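For concreteness, here is a minimal sketch of path (2) with plain libdrm and GBM (path (1) instead amounts to exporting the gbm_bo as a dma-buf and importing it directly with drmPrimeFDToHandle and drmModeAddFB2). All names here are placeholders, not KWin's actual code:

```cpp
// Rough sketch of path (2): map the rendered gbm_bo and memcpy its pixels
// into a dumb buffer created on the displaying GPU. Dumb buffers exist on
// every KMS driver, which is why this path works everywhere.
#include <cstdint>
#include <cstring>
#include <sys/mman.h>
#include <gbm.h>
#include <xf86drm.h>

void copyToDumbBuffer(gbm_bo *src, int displayFd, uint32_t width, uint32_t height)
{
    // Create and map a dumb buffer on the displaying GPU.
    drm_mode_create_dumb create = {};
    create.width = width;
    create.height = height;
    create.bpp = 32;
    drmIoctl(displayFd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

    drm_mode_map_dumb map = {};
    map.handle = create.handle;
    drmIoctl(displayFd, DRM_IOCTL_MODE_MAP_DUMB, &map);
    void *dst = mmap(nullptr, create.size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, displayFd, map.offset);

    // Map the rendered buffer for reading and copy it row by row. This
    // memcpy runs on the CPU every frame, which is what makes path (2) slow.
    uint32_t srcStride = 0;
    void *mapData = nullptr;
    const auto *srcPixels = static_cast<const uint8_t *>(
        gbm_bo_map(src, 0, 0, width, height, GBM_BO_TRANSFER_READ,
                   &srcStride, &mapData));
    for (uint32_t y = 0; y < height; y++) {
        std::memcpy(static_cast<uint8_t *>(dst) + y * create.pitch,
                    srcPixels + y * srcStride, width * 4);
    }
    gbm_bo_unmap(src, mapData);
    munmap(dst, create.size);
}
```
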
This MR introduces a path in the middle ground between these two: we allocate a gbm_bo on the compositing GPU, render to it, and then import it on the displaying GPU with EGL, where it is copied with OpenGL into a buffer that can be displayed. This should get us almost the same performance and efficiency as (1), and it is compatible with most, if not all, multi-GPU systems.
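
A minimal sketch of the import step, assuming the EGL_EXT_image_dma_buf_import extension and a single-plane linear buffer; the names (importDmabufAsTexture and so on) are placeholders, not the actual code in this MR:

```cpp
// Import the dma-buf exported from the compositing GPU into the displaying
// GPU's EGL context as a texture. Drawing a textured quad from it into a
// framebuffer backed by a scanout-capable local buffer completes the copy.
#include <cstdint>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

GLuint importDmabufAsTexture(EGLDisplay display, int dmabufFd,
                             uint32_t width, uint32_t height,
                             uint32_t drmFormat, uint32_t stride)
{
    const EGLint attribs[] = {
        EGL_WIDTH, EGLint(width),
        EGL_HEIGHT, EGLint(height),
        EGL_LINUX_DRM_FOURCC_EXT, EGLint(drmFormat),
        EGL_DMA_BUF_PLANE0_FD_EXT, dmabufFd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, EGLint(stride),
        EGL_NONE,
    };
    auto eglCreateImageKHR = reinterpret_cast<PFNEGLCREATEIMAGEKHRPROC>(
        eglGetProcAddress("eglCreateImageKHR"));
    EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT,
                                          nullptr, attribs);

    // Bind the imported image to a texture on the displaying GPU.
    auto glEGLImageTargetTexture2DOES =
        reinterpret_cast<PFNGLEGLIMAGETARGETTEXTURE2DOESPROC>(
            eglGetProcAddress("glEGLImageTargetTexture2DOES"));
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
    return tex;
}
```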

There are some caveats, however:

  • to import a buffer from a non-NVidia GPU to an NVidia GPU, only the linear modifier is valid. That modifier is marked as external only though, which KWin currently doesn't support (see the first sketch after this list)
  • it's possible that creating an EGLDisplay for DisplayLink "GPU"s works but falls back to software rendering. If so, this needs to be detected, as the fallback to CPU copy is preferable in that case (see the second sketch after this list)
  • on at least some AMD+AMD systems, this path currently doesn't work with explicit modifiers because of a bug in RadeonSI (https://gitlab.freedesktop.org/mesa/mesa/-/issues/8431). This includes my system, so the testing I could do is limited
  • forcing a linear modifier successfully works around the driver bug, but causes another problem: whenever my dedicated GPU is used for compositing, having anything blurred on a display connected to the integrated GPU causes very severe performance problems, making it a lot worse than the CPU copy
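
Regarding the external-only modifier: whether a given modifier is flagged external only can be queried through EGL_EXT_image_dma_buf_import_modifiers. A minimal sketch of such a check, with placeholder names rather than the code in this MR:

```cpp
// Query the displaying GPU's EGL implementation for the modifiers it can
// import for a format, and check whether the one we want to use is marked
// as external only. If it is, the imported image can only be sampled
// through GL_TEXTURE_EXTERNAL_OES, which KWin doesn't support yet.
#include <cstdint>
#include <vector>
#include <EGL/egl.h>
#include <EGL/eglext.h>

bool isModifierExternalOnly(EGLDisplay display, EGLint fourcc, uint64_t modifier)
{
    auto eglQueryDmaBufModifiersEXT =
        reinterpret_cast<PFNEGLQUERYDMABUFMODIFIERSEXTPROC>(
            eglGetProcAddress("eglQueryDmaBufModifiersEXT"));

    // First call gets the number of modifiers, second call fills the arrays.
    EGLint count = 0;
    eglQueryDmaBufModifiersEXT(display, fourcc, 0, nullptr, nullptr, &count);
    std::vector<EGLuint64KHR> modifiers(count);
    std::vector<EGLBoolean> externalOnly(count);
    eglQueryDmaBufModifiersEXT(display, fourcc, count, modifiers.data(),
                               externalOnly.data(), &count);

    for (EGLint i = 0; i < count; i++) {
        if (modifiers[i] == modifier) {
            return externalOnly[i] == EGL_TRUE;
        }
    }
    return false;
}
```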
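
For the DisplayLink caveat, detecting the software fallback could look roughly like this; checking the GL renderer string for llvmpipe/softpipe/swrast is just my assumption of a workable heuristic, not something this MR does:

```cpp
// Check whether the current GL context is backed by a software renderer.
// Must be called with a context for the DisplayLink EGLDisplay current;
// if this returns true, falling back to the CPU copy path is preferable.
#include <cstring>
#include <GLES2/gl2.h>

bool isSoftwareRenderer()
{
    const char *renderer =
        reinterpret_cast<const char *>(glGetString(GL_RENDERER));
    return renderer && (std::strstr(renderer, "llvmpipe") ||
                        std::strstr(renderer, "softpipe") ||
                        std::strstr(renderer, "swrast"));
}
```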