[visionegg] Re: OpenGL latency

  • From: "Sol Simpson" <sol@xxxxxxxxxxxxxxx>
  • To: <visionegg@xxxxxxxxxxxxx>
  • Date: Mon, 29 Jan 2007 13:47:59 -0500

We have been doing some OpenGL testing and think we have found the cause of
this one retrace delay. We did the tests using a light key and TTL inputs to
the Blackbox toolkit (http://www.blackboxtoolkit.co.uk/) and have tested on
both ATI and NVIDIA cards.

Basically, the tests show that even when you tell the graphics system to
synchronize swaps to the monitor's vblank, the actual call to buffer_swap()
or flip() is asynchronous and returns as soon as the swap request has been
successfully registered.

If a swap has not already been scheduled, buffer_swap() returns almost
immediately. However, if buffer_swap() has already been called and you call
it again before the previously scheduled swap has actually occurred, the
second call blocks until the buffers have flipped.

So if your program calls buffer_swap() more than once per retrace interval,
buffer_swap() blocks until the start of a retrace and then returns, but the
actual swap will not occur until the next vblank. This produces the
appearance of an extra one-retrace-interval delay, since buffer_swap()
returns at the start of the retrace before the flip actually occurs.
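The blocking behavior described above can be illustrated with a toy model (this is not real driver code; the class and method names are made up, and time is simulated in units of one retrace interval, with vblanks at integer times):

```python
import math

class SwapModel:
    """Toy model of vsync'd double buffering as observed in the tests.

    A swap request registered at time t takes effect at the first
    vblank strictly after t. Time is in retrace-interval units.
    """

    def __init__(self):
        self.now = 0.0       # simulated clock, in retrace units
        self.pending = None  # vblank time of the currently scheduled flip

    def buffer_swap(self):
        if self.pending is not None:
            # A swap is already scheduled: this call blocks until the
            # buffers actually flip, i.e. until the start of a retrace.
            self.now = float(self.pending)
        # Register the new swap; it will flip at the next vblank.
        self.pending = math.floor(self.now) + 1
        return self.now      # the time at which buffer_swap() returns

    def flip_time(self):
        return self.pending  # when the requested flip actually occurs
```

Calling buffer_swap() twice in quick succession reproduces the measurements: the second call returns at t = 1.0 (the start of a retrace), but the flip it requested does not occur until t = 2.0, one full retrace later.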

If your program calls buffer_swap() less than once per retrace, then
buffer_swap() does not block and returns right away, not necessarily at the
start of a retrace. In this case you do not see a constant one-retrace-interval
delay, but instead a variable delay of between 0 and one retrace
interval, depending on when buffer_swap() was called.

This suggests that you cannot rely on when buffer_swap() returns to
determine when the flip actually occurs; instead you should follow
buffer_swap() with code that waits until, or detects, the start of the next
retrace.
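As an illustration of that recommendation, the flip time can be predicted from the swap request time rather than from when buffer_swap() returns. The sketch below uses made-up helper names (next_vblank, actual_flip_time) and assumes the retrace period and the time of one reference vblank are known:

```python
import math

def next_vblank(t, period, phase=0.0):
    """Time of the first vblank strictly after t, given the retrace
    period and the time of any one reference vblank (phase)."""
    n = math.floor((t - phase) / period) + 1
    return phase + n * period

def actual_flip_time(swap_return_time, period, phase=0.0):
    """The flip occurs at the next vblank after the swap request was
    registered, NOT at the moment buffer_swap() returns. This covers
    both cases above: an unblocked call returning mid-frame, and a
    blocked call returning exactly at the start of a retrace."""
    return next_vblank(swap_return_time, period, phase)
```

For example, with a 10 ms retrace, a blocked buffer_swap() that returns at exactly t = 10 ms (the start of a retrace) has its flip at t = 20 ms, which is precisely the extra one-frame delay seen in the measurements.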

We also ran the same tests with pure C OpenGL, and with SDL using the
DirectX backend instead of OpenGL; all show the same pattern of behavior,
so this is not Pygame- or PyOpenGL-specific.


Sol Simpson
SR Research Ltd.

-----Original Message-----
From: visionegg-bounce@xxxxxxxxxxxxx [mailto:visionegg-bounce@xxxxxxxxxxxxx]
On Behalf Of Martin Spacek
Sent: March 4, 2005 3:39 AM
To: visionegg@xxxxxxxxxxxxx
Subject: [visionegg] OpenGL latency

Hello all,

Armed with a photodiode, a scope, a Data Translations DT340 I/O board and
a single-frame target stimulus, we just noticed that the buffer_swap()
function swaps buffers not on the next vsync, but on the subsequent one.
This means that everything on the CRT is lagged by 1 frame with respect to
anything else we have happening in our code. This is the case for both our
Intel and AMD systems, with either an ATI or Nvidia card, in Windows 2000.
After some disbelief that this was happening, I came across a few
indications that OpenGL commands have an inherent latency. A posting by
Andrew over a year ago:

After plenty of trial-and-error with these issues myself, I'm also forced
to conclude that the photodiode technique is best for absolute timing
accuracy. I have some new (to me) knowledge about OpenGL cards that's not
on the website yet: as I understand it, the OpenGL "pipeline" has a
driver-dependent duration of a couple of frames, so commands sent to
OpenGL don't actually get to the display until a few frames have been
drawn. I think this explains the latency you're seeing. I'm still trying
to understand this issue, though, so I'd really like to get some feedback
from someone who knows (does anyone have any contacts with video card
driver experts?). I believe that such asynchronous operation and related
issues was on the agenda for improvements in OpenGL 2. However, a recent
scan of OpenGL 2 documents (particularly the OpenGL ARB meeting minutes)
seems to show diminished interest in this issue. Hopefully I'm wrong, but
it appears that all the ARB members are devoting most or all of their
resources to vertex and pixel shading.

The delay I see is only ever a single frame (5 ms), so we might just
subtract that constant off at our data collection end. Still, there's a
chance it could vary. I've searched through the recently released OpenGL 2
spec, and there's no mention of latency. Does anyone know if anything can
be done about this? This is quite different from AGP or PCI bus latency on
the system board, I gather?
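If the lag really is a constant single frame, the constant-subtraction workaround amounts to shifting each logged swap time by one retrace interval. A minimal sketch, using the 5 ms figure from the message above and hypothetical names (this assumes the lag never varies, which the measurements do not guarantee):

```python
RETRACE_S = 0.005  # one frame = 5 ms, as measured with the photodiode

def true_onset(swap_return_time, lag_frames=1, retrace=RETRACE_S):
    """Estimate the actual on-screen onset of a stimulus from the
    logged buffer_swap() return time, assuming a fixed lag."""
    return swap_return_time + lag_frames * retrace
```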


Martin Spacek
PhD student, Graduate Program in Neuroscience
Dept. of Ophthalmology and Visual Sciences
University of British Columbia, Vancouver, BC, Canada
+1-604-875-4555 ext. 66282
mspacek@xxxxxxxxxxxxxxx | http://swindale.ecc.ubc.ca
The Vision Egg mailing list
Archives: //www.freelists.org/archives/visionegg
Website: http://www.visionegg.org/mailinglist.html
