[visionegg] Re: Reaction time?

  • From: Dav Clark <davclark@xxxxxxxxxxxx>
  • To: visionegg@xxxxxxxxxxxxx
  • Date: Sat, 13 Dec 2008 19:30:55 -0800

Hi all,

Sorry to be responding late to this, but I've been neck deep in doing other stuff!

I did want to chime in, though, partly to provide a reference for myself in the future.

The way things are implemented currently in VisionEgg, there are two senses in which keyboard presses might be registered by the system:

1) In a Presentation go loop, if check_events is True (the default, unless you set it to False in the constructor), the pygame event queue is checked (and cleared) on every pass through the loop. Then, if Presentation.handle_event_callbacks contains an event_callback for, say, a pygame key event, it will be called with that event. Thus you are "guaranteed" to get all events that happened since the last pass through the loop, but the timestamp you get from the Presentation class will probably be somewhat after the time the event actually happened.


2) The "standard", or at least "only using VisionEgg code", way to get keypresses is (I guess) VisionEgg.ResponseControl.KeyboardResponseController. This uses pygame.key.get_pressed(), which reports the keys that are _currently_ down. This means the time reported by the Presentation class will not be perfectly in sync with the keypress, though it will be quite close: the timestamp should be (shortly) before the moment you actually checked whether the key was pressed. Note, however, that if you have very short-duration inputs (e.g. we have near-instantaneous USB keyboard signals as triggers from our MRI scanner), this polling approach can entirely miss some inputs.
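
The difference between the two approaches can be sketched without pygame at all. Here is a stdlib-only toy model (all numbers and function names are hypothetical, not VisionEgg API) of why polling the current key state can miss a brief input that an event queue would latch:

```python
# Simulated timeline: the display loop samples key state once per 16 ms
# "frame" (like pygame.key.get_pressed()), while a scanner trigger holds
# the key down for only 1 ms.

def poll_times(frame_ms=16, total_ms=100):
    """Times (ms) at which the go loop samples the current key state."""
    return list(range(0, total_ms, frame_ms))

def state_polling_sees(pulse_start, pulse_len, polls):
    """A state poll only sees the key if a poll lands inside the pulse."""
    return any(pulse_start <= t < pulse_start + pulse_len for t in polls)

def event_queue_sees(pulse_start, pulse_len, polls):
    """An event queue latches KEYDOWN when it happens, so any later pass
    through the loop still finds it waiting in the queue."""
    return any(t >= pulse_start for t in polls)

polls = poll_times()
# A 1 ms trigger pulse starting at t=5 ms falls between the polls at 0 and 16:
missed = not state_polling_sees(5, 1, polls)
caught = event_queue_sees(5, 1, polls)
print(missed, caught)  # → True True: the poll misses it, the queue catches it
```

This is exactly why approach (1) is "guaranteed" to see every event while approach (2) can drop short pulses, even though (2) is simpler to wire up.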

As time has gone on, I use less and less of the Presentation class (as many people seem to). In fact, I should probably just stop using it at this point... The examples include several cases of coding your own event loop. Another idea: you may want separate event loops for display and response collection, using either threads or processes. Here in the lab we have at least one experiment where a separate thread samples a measurement device at a very high frequency for logging, and the display side simply polls that object's state in order to give subjects feedback (feedback can usually be sloppier than your standards for data recording might dictate).
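
A minimal stdlib sketch of that split-loop idea (the Sampler class, _read_device stub, and all rates here are illustrative, not our actual lab code): a background thread logs every sample at full rate, while the display loop only peeks at the latest value at its own slower pace.

```python
import threading
import time

class Sampler:
    def __init__(self):
        self.latest = None          # last value, polled by the display loop
        self.log = []               # every sample, kept for data analysis
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _read_device(self):
        # Stand-in for real hardware I/O.
        return time.perf_counter()

    def _run(self):
        while not self._stop.is_set():
            value = self._read_device()
            self.log.append(value)  # high-rate logging for the record
            self.latest = value     # cheap snapshot for on-screen feedback

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

sampler = Sampler()
sampler.start()
for _ in range(5):                  # the "display loop": slow polling is fine
    time.sleep(0.01)
    feedback = sampler.latest       # good enough for subject feedback
sampler.stop()
print(len(sampler.log) > 5)        # → True: far more samples logged than polled
```

The design point is that the logging thread's timing requirements never depend on how often (or how sloppily) the display side asks for the latest value.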

Crucial to "good timing": Andrew claims you DON'T get good timestamps for when display buffer swapping happened from the Presentation go loop (or similar logic). The first thing it does after a buffer swap is check the time, but swap_buffers is non-blocking (this is all described in Andrew's paper, along with a mention of Win32_vretrace.pyx). I have to say that this runs a little counter to the fact that my inter-frame times are quite robust out to sub-millisecond resolution. I'm on OS X; maybe the blocking nature of swap_buffers is different there? If you are calling swap_buffers every frame, you might end up calling swap_buffers each time before the previous redraw has actually happened, and this _would_ lead to the blocking behavior Andrew discusses in the paper. In any case, if you're only concerned about relative RTs, and your inter-frame times are stable, it should be safe to assume that your timestamps are pretty close to when the actual swaps happened (though perhaps off by a constant as large as the length of one frame).
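
One cheap sanity check for that "stable inter-frame times" assumption is to log the time right after each (possibly non-blocking) swap and look at the spread of the intervals. A stdlib sketch, with time.sleep standing in for the draw-and-swap step (the 10 ms "frame" is illustrative):

```python
import time

def frame_intervals(n_frames=20, frame_s=0.010):
    """Timestamp each fake 'swap' and return the inter-frame intervals."""
    stamps = []
    for _ in range(n_frames):
        time.sleep(frame_s)                 # stand-in for draw + swap_buffers
        stamps.append(time.perf_counter())  # what the go loop records
    return [b - a for a, b in zip(stamps, stamps[1:])]

intervals = frame_intervals()
jitter = max(intervals) - min(intervals)
# If jitter is small relative to one frame, relative RTs computed from these
# stamps are trustworthy, even if every stamp is offset from the true swap
# time by up to one (constant) frame duration.
print(len(intervals), all(dt > 0 for dt in intervals))
```

With a real swap_buffers in place of the sleep, a large jitter here would tell you the non-blocking-swap problem is biting and your timestamps can't be trusted even for relative RTs.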

All of this should probably end up on the wiki... I'm going to try to get a lot of "maintenance" things like this done over my 2 weeks of holiday "break."

I wanted to add also - Andrew, you don't really speak to the KeyboardResponseController approach in your paper. You just suggest using pygame. Do you not recommend that approach?


On Nov 26, 2008, at 12:09 AM, Andrew Straw wrote:

Neil Halelamien wrote:
What's the "best practice" method for getting a reliable
mouse/keyboard reaction time in Vision Egg? I've been looking through
the mailing list archives and have seen some possibilities, but am not
sure if anyone's commented much on their pros/cons. For example, if
one were to use an event callback, would you be limited to the
granularity of the screen refresh rate? What about using something
like pygame.event.wait() versus repeatedly polling? (I don't need to
redraw anything while waiting for input)

Thanks, and my apologies if this has already been answered somewhere.

-- Neil

Hi Neil,

There is a discussion about these issues in the recent VE paper in
Frontiers in Neuroinformatics. See especially Figure 5 and the related text.

Straw, Andrew D. (2008) Vision Egg: An Open-Source Library for Realtime
Visual Stimulus Generation. /Frontiers in Neuroinformatics/. doi:


Dr. Andrew D. Straw
California Institute of Technology

The Vision Egg mailing list
Archives: //www.freelists.org/archives/visionegg
Website: http://www.visionegg.org/mailinglist.html

