[visionegg] Re: Hardware for Vision Egg

Lars van Ahrens wrote:

Hello!
Our electrophysiological setup currently runs simple white-light flicker
stimulation, and we are planning to upgrade it to complex visual stimuli
using Vision Egg. Because of the high flicker fusion frequency of mice, we
are looking for a CRT monitor with an appropriately high vertical refresh
frequency, and it should be bright, too. The graphics card should also work
well with the other hardware components. Perhaps somebody can suggest a
reasonable combination of hardware, or give us a short report of a
combination which already works reliably.

As you've probably read on the website, we use the LG Flatron 915 FT+, which goes up to 200 Hz (640x480 at that rate). Sadly, I don't think this model is available anymore. We were encouraged to read Markus Bongard's email about the Iiyamas that also do 200 Hz and should support 800x600 at that rate.


Generally speaking, standard video connectors ensure compatibility, so it's mostly a matter of choosing a display suited to your task.

Damian McCrossan wrote:

The stimuli are projected onto the
screen using a DLP beamer to ensure that the RGB signals all converge at the
same time. People also use DLP beamers to present stimuli to in vitro retinal
preparations, with success. The graphics card we use is a standard
GeForce4 card.

I recommend nVidia cards. My bias comes from being a Unix person, and ATI still doesn't provide hardware-accelerated OpenGL drivers for XFree86/Linux. (What's the situation with Intel's built-in chipsets?)


Also, with respect to DLPs, some labs have removed the color filter wheel from the single-chip models. With a 60 Hz update and 3 color channels, this would give a 180 Hz monochrome display. I believe some models may update at 85 Hz, and some have 4 color channels. This sounds very appealing for fast frame rates at high luminance, but I haven't tried it myself.
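
To make the arithmetic explicit, here is a tiny Python sketch (the helper name is just illustrative, and the 85 Hz and 4-channel figures are my guesses from above, not verified specs):

    # Effective monochrome frame rate of a single-chip DLP with the color
    # wheel removed: each color subframe becomes an independent grayscale frame.
    def monochrome_rate(panel_refresh_hz, color_channels):
        return panel_refresh_hz * color_channels

    print(monochrome_rate(60, 3))  # 180 Hz -- the case described above
    print(monochrome_rate(85, 3))  # 255 Hz -- if a model really updates at 85 Hz
    print(monochrome_rate(60, 4))  # 240 Hz -- assuming all 4 segments carry image data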

We trigger data
collection using a photodiode on the screen to avoid the delay introduced by
data transmission to the video card (40-90 ms). We also programmed a TTL pulse
through the parallel port, but this also appeared around 100 ms before the
actual arrival of the stimuli on the screen.

After plenty of trial and error with these issues myself, I'm also forced to conclude that the photodiode technique is best for absolute timing accuracy. I have some new (to me) knowledge about OpenGL cards that's not on the website yet: as I understand it, the OpenGL "pipeline" has a driver-dependent depth of a couple of frames, so commands sent to OpenGL don't actually reach the display until a few frames later. I think this explains the latency you're seeing. I'm still trying to understand this issue, though, so I'd really like to get some feedback from someone who knows (does anyone have contacts with video card driver experts?).

I believe such asynchronous operation and related issues were on the agenda for improvements in OpenGL 2. However, a recent scan of OpenGL 2 documents (particularly the OpenGL ARB meeting minutes) seems to show diminished interest in this issue. Hopefully I'm wrong, but it appears that all the ARB members are devoting most or all of their resources to vertex and pixel shading.
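
If anyone wants to get a feel for this on their own machine, one crude approach is to time consecutive buffer swaps with vsync (swap at vertical retrace) enabled in the driver. Here is a rough sketch using pygame and PyOpenGL directly rather than Vision Egg; the idea is that the first few swaps return almost immediately because the driver queues them, and the number of "free" swaps before the intervals settle to the frame period hints at how many frames deep the pipeline is:

    # Time consecutive buffer swaps. Requires vsync enabled in the driver,
    # otherwise all swaps return immediately and the test is meaningless.
    import time
    import pygame
    from pygame.locals import OPENGL, DOUBLEBUF
    from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT

    pygame.init()
    pygame.display.set_mode((640, 480), OPENGL | DOUBLEBUF)
    glClearColor(0.0, 0.0, 0.0, 1.0)

    last = time.time()
    for i in range(20):
        glClear(GL_COLOR_BUFFER_BIT)
        pygame.display.flip()  # queues the swap; may return before it happens
        now = time.time()
        print("swap %2d: %.1f ms since previous" % (i, (now - last) * 1000.0))
        last = now

    pygame.quit()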


Also, I did some preliminary tests with PortAudio (a cross-platform real-time audio API) as a trigger output, and it had 20-100 ms of latency on Windows XP with onboard motherboard audio. This is worse than parallel port performance, so I gave up. However, low-latency audio hardware is available (e.g. cards with ASIO drivers from Steinberg), which might make a huge difference. It is also possible that Mac OS X systems would have lower latency through the audio path. As an aside, it is somewhat worrying that parallel ports will disappear at some point in the future.
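
For what it's worth, my test looked roughly like the sketch below (written here against the PyAudio wrapper around PortAudio rather than my original code; the pulse length and sample rate are arbitrary). The latency in question is the time from write() returning to the click actually appearing at the line-out jack:

    # Emit a brief full-scale click on the audio output as a trigger pulse.
    import struct
    import pyaudio

    RATE = 44100
    PULSE_MS = 2  # duration of the trigger click

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, output=True)

    n_samples = int(RATE * PULSE_MS / 1000.0)
    pulse = struct.pack("<%dh" % n_samples, *([32767] * n_samples))

    stream.write(pulse)  # send the trigger click
    stream.stop_stream()
    stream.close()
    pa.terminate()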

Cheers!
Andrew
