WIP: Count GPU time in render loop calculation

David Edmundson requested to merge work/d_ed/present_timings into master

WIP, as the code is totally unmergeable, but it is testable. In theory, with your PC's latency policy set to force lowest latency, we still shouldn't drop anywhere near as many frames when wobbling a window about as we do in the current state. Everything else should remain the same.

When we render a frame we have to do a bunch of work on the CPU and a bunch of work on the GPU before calling swapBuffers. All operations on the GPU are asynchronous relative to when we issue the calls from CPU space.
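
A minimal sketch (not KWin's actual code) of how GPU time can be measured without stalling: a GL_TIME_ELAPSED timer query brackets the frame's GL commands, and the result is polled later because the GPU runs asynchronously. The `GpuFrameTimer` name and the epoxy include are illustrative assumptions; an OpenGL 3.3+ context is assumed.

```cpp
#include <epoxy/gl.h>
#include <chrono>
#include <optional>

// Hypothetical helper: wraps one GL timer query around a frame's GL work.
class GpuFrameTimer
{
public:
    GpuFrameTimer() { glGenQueries(1, &m_query); }
    ~GpuFrameTimer() { glDeleteQueries(1, &m_query); }

    // Call right before issuing the frame's GL commands.
    void begin() { glBeginQuery(GL_TIME_ELAPSED, m_query); }

    // Call right after the last GL command of the frame, before swapBuffers.
    void end() { glEndQuery(GL_TIME_ELAPSED); }

    // The GPU executes asynchronously, so the result is typically only
    // available one or more frames later; poll without blocking.
    std::optional<std::chrono::nanoseconds> result() const
    {
        GLint available = 0;
        glGetQueryObjectiv(m_query, GL_QUERY_RESULT_AVAILABLE, &available);
        if (!available) {
            return std::nullopt;
        }
        GLuint64 elapsed = 0;
        glGetQueryObjectui64v(m_query, GL_QUERY_RESULT, &elapsed);
        return std::chrono::nanoseconds(elapsed);
    }

private:
    GLuint m_query = 0;
};
```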

Right now we only measure CPU time in the render journal when calculating how long it takes to render a frame. In local testing, the GPU time is orders of magnitude bigger.

This results in the renderJournal being effectively useless, and we end up relying only on the latency policy. If rendering takes longer than the latency policy allows, we drop a lot of frames.

As the CPU and GPU work potentially run in parallel, our total time has a best case of max(cpuTime, gpuTime) and a worst case of cpuTime + gpuTime. This code uses the worst case for the render journal.
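
A short sketch of the best-case/worst-case combination described above; the `FrameTiming` struct and function names are placeholders for illustration, not KWin's real API.

```cpp
#include <algorithm>
#include <chrono>

using namespace std::chrono;

struct FrameTiming {
    nanoseconds cpuTime; // measured on the CPU around the compositing pass
    nanoseconds gpuTime; // reported asynchronously by a GL timer query
};

// Best case: the CPU and GPU work fully overlap.
nanoseconds bestCase(const FrameTiming &t)
{
    return std::max(t.cpuTime, t.gpuTime);
}

// Worst case: the GPU only starts once the CPU has finished. This is the
// value fed into the render journal here, so the scheduler leaves enough
// headroom before the next vblank.
nanoseconds worstCase(const FrameTiming &t)
{
    return t.cpuTime + t.gpuTime;
}
```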

In theory this is all valid on X11 too, but that would mean clients using OpenGL 3.3, which they tend not to do.

BUG: that one about Intel being slow.
