[visionegg] Re: Questions before committing to VisionEgg

  • From: Christoph Lehmann <lehmann@xxxxxxxxxxxx>
  • To: visionegg@xxxxxxxxxxxxx
  • Date: Tue, 10 Aug 2004 13:34:12 +0200

Tim
please look at my code and let me know what you think about it.

cheers

christoph

Timothy Vickery wrote:
Dear Andrew,

I am seriously contemplating a switch to VisionEgg (from
Matlab/Psychtoolbox), based on the fact that it is free, relatively
platform-independent, open-source, and appears to be very powerful. I
require software for testing human subjects in visual cognition
experiments. However, before switching and investing time in learning
a new language and API, I have a couple of questions that I hope you
will have time to answer. If I go with this package I will be more
than happy to tirelessly promote your software.

1.) What is your (and your collaborators') long-term commitment to this
project? It would be disappointing to find the project completely
abandoned in a couple of years.
2.) Is it easy to extract screenshots/movies from experiments? People
love to see pretty pictures. In Psychtoolbox, where you directly
control every frame's composition, it is easy to do a screen grab, and
I wonder whether VisionEgg has a comparable capability.
3.) Do you have any idea how reliable the timing for keypresses and
mouse input is in Python?
I appreciate any response. Thanks for creating this software and
making it available to the community.
-Tim
======================================
The Vision Egg mailing list
Archives: http://www.freelists.org/archives/visionegg
Website: http://www.visionegg.org/mailinglist.html




--
Christoph Lehmann                            Phone:  ++41 31 930 93 83
Department of Psychiatric Neurophysiology    Mobile: ++41 76 570 28 00
University Hospital of Clinical Psychiatry   Fax:    ++41 31 930 99 61
Waldau                                            lehmann@xxxxxxxxxxxx
CH-3000 Bern 60         http://www.puk.unibe.ch/cl/pn_ni_cv_cl_03.html
#!/usr/bin/env python

# History
# 03: non-events included
# 04: compute timing less memory-intensive
# 05: timing log less detailed (loop faster)
# 06: get subject response on LPT (not via mouse): pin 10 (L, orange), pin 13 (R, red)
# 07: filenames of the stimuli now read in from a textfile (already in randomized order)

# Python code, using the excellent visionegg classes from Andrew Straw <andrew.straw@xxxxxxxxxxxxxxx>
# Homepage: www.visionegg.org; Mailing list: visionegg@xxxxxxxxxxxxx
# The Vision Egg is a very powerful, flexible, and free way to produce stimuli for vision research experiments.
#    * Perform experiments using an inexpensive PC and standard consumer graphics card
#    * Perform experiments using a graphics workstation if special features needed
#    * Data acquisition and other realtime hardware control capabilities useful in electrophysiology and fMRI experiments
#    * Dynamically generated stimuli can be changed in realtime via software or external hardware
#    * Produce traditional stimuli to replace legacy systems
#    * Produce stimuli not possible using other hardware
#    * Demo programs to get you started right away
#    * Free, open-source software
#
# Description of the code fmri_visionegg_07.py
#
# Summary:
# (i)   the presentation is synchronized with the scanner (using the TTL signal coming from the scanner)
# (ii)  the presentation is synchronized with the vertical retrace signal
# (iii) the response of the subject, using a button box, is logged via the parallel port
# (iv)  stimulus, onset time of the stimulus, response and reaction time of the subject, incl. potential imprecision, are logged into a text file for easy analysis afterward
# (v)   the app runs under LINUX with max_priority, so we have excellent timing
#
# Details:
# Siemens Sonata MR scanners send one TTL trigger signal for each volume (received on the parallel port). Make sure the TTL signal is long enough to be detected by the LPT (a circuit to stretch the pulse is available).
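#
# A minimal, hypothetical sketch (defined for illustration only, not called
# by the code below) of the "armed trigger" rule this script uses: a TTL
# pulse is counted exactly once, and the trigger only re-arms after the
# line has returned to 0. The names are mine, not the author's.
def ttl_rising_edge(status_byte, bit_mask, armed):
    # Returns (fired, armed): fired is true on the first poll of a pulse.
    if (status_byte & bit_mask) == bit_mask and armed:
        return True, False   # pulse detected; disarm until the line falls back to 0
    if (status_byte & bit_mask) == 0 and not armed:
        return False, True   # line back at 0; re-arm for the next pulse
    return False, armed
#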
# Siemens Sonata MR scanners start with x dummy volumes to eliminate T1 saturation effects. A loop for waiting and counting these volumes is inserted before the main loop for presentation of the stimuli.
# One trial consists of 1 volume of fixation and x volumes of a certain stimulus. Each fixation and stimulus presentation is synchronized with the trigger signal from the scanner. For testing in the lab, the synchronization can be disabled, and the presentation is controlled by a timer set to the repetition time of the scanner.
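#
# Illustrative sketch (an assumption, not the author's exact lab branch) of
# the timer-based fallback: one elapsed repetition time (TR) is treated as
# if a scanner trigger had arrived. Not called by the code below.
def simulated_trigger(now, last_trigger, repetition_time):
    # True once a full TR (in seconds) has elapsed since the last trigger.
    return (now - last_trigger) >= repetition_time
#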
# The subject has to respond either with a single or a double click on the right or left button on our optical button box.
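#
# The button box drives relays, which bounce; a button is therefore only
# re-armed after the contact has settled for bouncing_t ms. A hypothetical
# sketch of that rule (names are illustrative; not called by the code below):
def debounced_press(pressed, armed, now_ms, last_press_ms, bouncing_t_ms):
    # Returns (log_press, armed, last_press_ms).
    if pressed and armed:
        return True, False, now_ms           # log the press and disarm the button
    if (not pressed) and (not armed) and (now_ms - last_press_ms > bouncing_t_ms):
        return False, True, last_press_ms    # released and settled; re-arm
    return False, armed, last_press_ms
#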
# The stimuli and the fixation cross have to be in one directory, which is used exclusively for these stimuli.
# The order of the stimuli (including null-events, written as "null", but without fixation) has to be written into a file "stimuli.txt" which is located in the same directory as the fmri_visionegg_07.py code. Fixation will be added automatically to the beginning of every new trial.
# All stimuli (except the null-events) will be preloaded into memory before the main presentation loop starts. Null-events are just a swap of the empty buffer.
# The program consists of an outer while loop which clears the screen, draws the stimulus to the buffer, and swaps the buffer to screen, synchronized with the vertical retrace signal of the monitor.
# The inner loop runs until a trigger signal has been received on the LPT. It polls the LPT: right and left button of the button box, and the trigger signal from the scanner.
# Each trial is logged into a textfile: stimulus presented, onset time, responses of the subject (button, reaction time, imprecision). The imprecision measure is the time which has passed since the last LPT poll, revealing potential artifacts and permitting the exclusion of imprecise data from the analysis.
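#
# A sketch of the imprecision measure described above (illustrative only,
# not called by the code below): any event detected now can only be trusted
# to within the duration of the last polling iteration, so that duration is
# logged next to every response.
def poll_with_imprecision(last_poll_time, time_func):
    # Returns (now, imprecision_ms); imprecision_ms is the time since the
    # previous poll, i.e. the maximal timing error of any event seen now.
    now = time_func()
    return now, (now - last_poll_time) * 1000.0
#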
# Remark concerning the stimulus onset imprecision measure: if the onset is synchronized with the vertical retrace signal, it may take up to 16 ms (at 60 Hz, on TFT screens) until the buffer is swapped to screen. Here the onset imprecision measure in the log file makes no sense, since immediately after the swap, control goes back to our outer loop, where we log the swap time. Swapping without vertical retrace sync shows that the onset imprecision is always < 1 ms.
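#
# A sketch (mirroring, not replacing, the main loop below) of how the onset
# uncertainty is bracketed: the true stimulus onset lies between the time
# taken just before and just after the (possibly vsync-blocked) buffer swap.
def timed_swap(swap_func, time_func):
    # Returns (swap_time, onset_imprecision_ms) for one buffer swap.
    before = time_func()
    swap_func()              # blocks until the vertical retrace if vsync is on
    after = time_func()
    return after, (after - before) * 1000.0
#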
# 
# ... append further remarks here...
#
# (For more details and the constants set by the user, such as the pins on the LPT used, filenames, etc., see below in the code)
# 19.05.2003 Christoph Lehmann

#Here we bypass the VisionEgg.Core.Presentation class.  It may be easier
#to create simple experiments this way.


import os                                                    #needed by os.chdir() below
import string
import types                                                 #needed by the type checks in the logging section
import VisionEgg
from VisionEgg.Core import *
import pygame
from pygame.locals import *
from VisionEgg.Text import *
from math import *
from operator import mod
from operator import div
from random import *
import VisionEgg.Daq
from VisionEgg.DaqLPT import *
from VisionEgg.Textures import *

screen = get_default_screen()
my_debug_msg = []                                            #for debugging purpose, log debug infos to a file

# =========================================================================================================================================
# Constants to be set by the user
# =========================================================================================================================================
screen.parameters.bgcolor = (0.0,0.0,0.0,0.0)                #black (RGBA); set background screen-color
path = "/home/christoph/work/programming/my_ve_projects/visionegg_fmri/data_test/"  #the path the images are in
stimuli = "stimuli.txt"                                      #file with the filenames of the stimuli (must be in same dir as the python code)
fixation = "fixation.bmp"                                    #name of the file with the fixation cross (needs to be in the same dir as the stimuli)
nullevent = "null"                                           #"filename" in 'stimuli' for null-events (no file for this type of stimulus available)
ratio = 4                                                    #4 volumes equal one trial (one TR fixation, 3 TRs stimulus)
dummy_volumes = 0                                            #Siemens scanner starts with dummy volumes (to eliminate T1 saturation effects)
trigger_bit = 0x20                                           #get TTL signal on pin number 12 (0x20)
response_left_bit = 0x40                                     #subject response: left button, pin 10 (0x40), orange cable
response_right_bit = 0x10                                    #subject response: right button, pin 13 (0x10), red cable
bouncing_t = 1                                               #since our relays are bouncing, wait for some time (in ms)
repetition_time = 1                                          #for testing in the lab: presentation is controlled by a timer set to the
                                                             #repetition time of the scanner (in seconds)
# =========================================================================================================================================



# =========================================================================================================================================
# Prepare the stimuli (load into memory)
# =========================================================================================================================================
# Pre-allocate a texture object for each texture before entering time-critical code
photo = []
photoX = []

f = open(stimuli, 'r')
for filename in f.readlines():
    photoX.append(string.rstrip(filename))                   #remove \n or \r\n
f.close()

# Insert a fixation cross in front of the texture list
photoX.insert(0,fixation)

os.chdir(path)                                               #change to the path the stimuli are in

num_images = len(photoX)

# Create a list of TextureStimulus instances, one for each of your texture images.
preloaded_stimulus_list = []                                 #empty at first
for stim_index in range(num_images):                         #for all stimuli except the null-event create a texture object
    if not photoX[stim_index] == nullevent:
        # Read the texture file
        texture = Texture(photoX[stim_index])

        # Compute the lower-left position that centers the texture on the screen
        x = screen.size[0]/2 - texture.size[0]/2
        y = screen.size[1]/2 - texture.size[1]/2

        # Load the texture to OpenGL, prepare for display
        stimulus = TextureStimulus(texture = texture, size = texture.size, lowerleft = (x,y))
    else:                                                    #for the null-event, just append the null-event indicator
        stimulus = nullevent
    # Add to the list of stimuli
    preloaded_stimulus_list.append( stimulus )

viewport = Viewport( screen=screen)


num_images = len(preloaded_stimulus_list)
# =========================================================================================================================================

# =========================================================================================================================================
# Declare variables
# =========================================================================================================================================

response_log_list = []                                       #infos which will be logged to a file at the end (stimulus, onset time, response, RT, etc.)
response_log = []                                            #one log-entry

swap_time = 0                                                #time the buffers have been swapped (stimulus onset on screen)
stim_index = 0                                               #index into the texture list (0: fixation, 1-n: stimuli)
trial_index = 0                                              #count the trials
trial_onset = VisionEgg.time_func()                          #time a stimulus is drawn (not fixation)
start_time = -1                                              #time the experiment starts with the first fixation drawn (after first trigger received)
trigger_counter = -1                                         #count received triggers (TTL signals on the LPT)
loop_ctr = 0                                                 #to check whether the outer while loop is entered the first time (log start_time)
trigger_armed = 1                                            #set to 1 after the TTL has gone back to 0; wait for trigger
mouse_left_armed = 1
mouse_right_armed = 1
just_swapped = 1                                             #mouse-poll and trigger-poll loop entered the first time after the buffer has been swapped
timestamp = []                                               #log timing accuracy into a file
looptime = 0                                                 #for time logging only
last_time = VisionEgg.time_func()                            #store last time the mouse-poll and LPT-poll loop has started

last_trigger = 0                                             #used in lab only, where no TTL signal is available

#my_debug_msg = []                                           #for debugging purpose, log debug infos to a file
# =========================================================================================================================================


# =========================================================================================================================================
# Wait for and count the dummy volumes
# =========================================================================================================================================
dummy_ctr = -1
# Siemens scanner starts with dummy volumes (to eliminate T1 saturation effects)
while (not pygame.event.peek((QUIT,KEYDOWN)) and dummy_volumes > 0):  #poll the LPT for the TTL trigger sent by the MR scanner
    # wait for the trigger on the parallel port
    while (not pygame.event.peek((QUIT,KEYDOWN))):           #inner loop: mouse-polling and trigger counting, should run as fast as possible
        actualtime = VisionEgg.time_func()                   #get the time from the platform-independent VisionEgg.time_func() call (only for lab use)
        input_value = raw_lpt_module.inp(0x379) & trigger_bit  #pin nr 12
        # if TTL signal on LPT, increment the dummy counter
        if (input_value == trigger_bit and trigger_armed == 1):  # or (actualtime - last_trigger > repetition_time): in the lab don't wait for triggers, but wait one TR
            dummy_ctr = dummy_ctr + 1
            last_trigger = actualtime
            trigger_armed = 0                                #dummy counter incremented only once per received trigger pulse (TTL pulse ca. 50 ms)
            break                                            #leave the loop and update the screen
        elif input_value == 0 and trigger_armed == 0:        #first LPT poll-loop after the TTL signal has fallen from 1 to 0 (-> arm the trigger)
            trigger_armed = 1

    # draw fixation
    screen.clear()
    viewport.parameters.stimuli = [preloaded_stimulus_list[stim_index]]  #update the viewport (the outer '[' and ']' are very important) #stim_index = 0
    viewport.draw()
    swap_buffers()                                           #swap the buffers, synchronized to the vertical retrace signal, if requested (menu)

    #my_debug_msg.append('dummy <' + str(dummy_ctr) + '> <' + str(VisionEgg.time_func()) + '>')  #debug for dummy-loop above

    # quit if all dummy volumes are finished
    if dummy_ctr == dummy_volumes - 1:                       #don't wait after the last dummy; waiting is done in the experimental loop below
        break                                                #quit the outer loop and go on to the main experimental loop

#end of the dummy-volumes loop
# =========================================================================================================================================

trigger_counter = -1
last_trigger = 0    

# =========================================================================================================================================
# Main loop - stimulus presentation and mouse-response logging
# =========================================================================================================================================
while (not pygame.event.peek((QUIT,KEYDOWN))):               #outer loop entered only after a received trigger, if the screen has to be updated
    # wait for mouse response and trigger on the parallel port
    innerloopctr = -1                                        #for debugging only
    while (not pygame.event.peek((QUIT,KEYDOWN))):           #inner loop: mouse-polling and trigger counting, should run as fast as possible
        innerloopctr = innerloopctr + 1
        # check timing accuracy
        actualtime = VisionEgg.time_func()                   #get the time from the platform-independent VisionEgg.time_func() call
        looptime = (actualtime - last_time) * 1000           #log in ms
        last_time = actualtime
        #timestamp.append(looptime)                          #append to the timing log-file

        if (looptime > 2) and (just_swapped):
            my_debug_msg.append('-S >2ms <' + str(looptime) + '> <' + photoX[stim_index] + '> <' + str(innerloopctr) + '>')
        if (looptime > 2) and (not just_swapped):
            my_debug_msg.append(' ! >2ms <' + str(looptime) + '> <' + photoX[stim_index] + '> <' + str(innerloopctr) + '>')

        just_swapped = 0                                     #first inner loop after preceding buffer swap finished

        # poll the LPT for the TTL trigger, sent by the MR scanner
        if trigger_counter == 3:
            trigger_counter = -1                             #reset ctr, since 4 volumes equal one trial (one TR fixation, 3 TRs stimulus)

        input_value = raw_lpt_module.inp(0x379)              #read the full status byte; trigger (pin 12) and button bits are masked below
        trigger = input_value & trigger_bit
        mouse_left = input_value & response_left_bit
        mouse_right = input_value & response_right_bit

        # log the mouse response (which button, RT, precision)
        if (mouse_right == 0 and mouse_right_armed == 1):    #right
            rt_right = 1000 * (VisionEgg.time_func() - trial_onset)
            response_log.append(rt_right)                    #RT in ms, reference: stimulus onset (not fixation onset!)
            response_log.append('R')                         #right
            response_log.append(looptime)                    #how much time has passed since the last mouse-event poll (precision)
            mouse_right_armed = 0
        elif mouse_right == response_right_bit and mouse_right_armed == 0 and (1000 * (VisionEgg.time_func() - trial_onset) - rt_right > bouncing_t):
            mouse_right_armed = 1                            #since our relays are bouncing, wait for some time

        if (mouse_left == 0 and mouse_left_armed == 1):
            rt_left = 1000 * (VisionEgg.time_func() - trial_onset)
            response_log.append(rt_left)                     #RT in ms, reference: stimulus onset (not fixation onset!)
            response_log.append('L')                         #left
            response_log.append(looptime)                    #how much time has passed since the last mouse-event poll (precision)
            mouse_left_armed = 0
        elif mouse_left == response_left_bit and mouse_left_armed == 0 and (1000 * (VisionEgg.time_func() - trial_onset) - rt_left > bouncing_t):
            mouse_left_armed = 1                             #since our relays are bouncing, wait for some time


        # if TTL signal on LPT, increment the index for the texture-list
        if (trigger == trigger_bit and trigger_armed == 1):  # or (actualtime - last_trigger > repetition_time): in the lab don't wait for triggers, but wait one TR
            last_trigger = actualtime
            trigger_counter = trigger_counter + 1
            trigger_armed = 0                                #texture-index incremented only once per received trigger pulse (TTL pulse ca. 50 ms)
            if mod(trigger_counter,ratio) == 0:              #0: draw fixation (begin of a new fixation-stimulus compound (one trial))
                trial_index = trial_index + 1                #after the fixation cross is drawn, the next swap will draw a new stimulus
                stim_index = 0
                break                                        #leave the loop and update the screen
            if mod(trigger_counter,ratio) == 1:              #1: draw stimulus
                stim_index = trial_index                     #has been incremented during the previous fixation loop
                break                                        #leave the loop and update the screen with the new stimulus
        elif trigger == 0 and trigger_armed == 0:            #first LPT poll-loop after the TTL signal has fallen from 1 to 0 (-> arm the trigger)
            trigger_armed = 1


    # quit if all stimuli have been shown
    if trial_index == num_images:
        response_log_list.append(response_log)               #append last response log and quit
        break                                                #quit the outer loop and do the logging into textfiles

    # draw the stimulus (during first loop: fixation)
    screen.clear()
    if not preloaded_stimulus_list[stim_index] == nullevent:
        viewport.parameters.stimuli = [preloaded_stimulus_list[stim_index]]  #update the viewport (the outer '[' and ']' are very important)
        viewport.draw()
    before_swap_time = VisionEgg.time_func()
    swap_buffers()                                           #swap the buffers, synchronized to the vertical retrace signal, if requested (menu)

    #my_debug_msg.append('start main <' + str(VisionEgg.time_func()) + '>')  #debug for the main loop

    swap_time = VisionEgg.time_func()

    # define start-time of the experiment (after first received trigger)
    if loop_ctr == 0:                                        #first fixation shown
        start_time = swap_time

    just_swapped = 1

    # log stimulus and mouse-response info
    if stim_index > 0:                                       #no log for the fixation cross
        trial_onset = swap_time                              #confusing: trial_onset is the time the stimulus is drawn
        if loop_ctr > 0:                                     #1st loop: fixation; only after the first stimulus shown, a response makes sense
            response_log_list.append(response_log)           #append last response log (since we want only one response log per trial)
        response_log = []                                    #clear response_log
        response_log.append(stim_index)                      #index of stimulus
        response_log.append(photoX[stim_index])              #name of stimulus
        response_log.append(1000 * (swap_time - start_time)) #stimulus onset time
        response_log.append(1000 * (swap_time - before_swap_time))  #potential imprecision: stimulus onset must have occurred within this interval

    loop_ctr = loop_ctr + 1

#end of the main loop
# =========================================================================================================================================

# =========================================================================================================================================
# Logging
# =========================================================================================================================================
# write all log info to several files

f = open('/home/christoph/work/programming/my_ve_projects/visionegg_fmri/response_log.txt', 'w')
f.write('trial stim onset_t prec RT B prec RT B prec RT B prec RT B prec ' + '\n')
f.write('' + '\n')

x = 0
while x < len(response_log_list):
    y = 0
    while y < len(response_log_list[x]):
        if (type(response_log_list[x][y]) == types.FloatType):
            if (y == 0):
                f.write("%04.0f" % response_log_list[x][y])   #trial index
                f.write("   ")
            else:
                f.write("%08.2f" % (response_log_list[x][y]))  #RT, onset time, or precision
                f.write(" ")
        else:
            f.write(str(response_log_list[x][y]))             #stimulus name or mouse response (left/right)
            f.write(" ")
        y = y + 1
    f.write('\n')
    x = x + 1
f.write('' + '\n')
f.write('Remarks:' + '\n')
f.write('(i) RT: reaction-time of the subject, measured relative to the corresponding stimulus onset' + '\n')
f.close()


f = open('/home/christoph/work/programming/my_ve_projects/visionegg_fmri/my_debug_msg.txt', 'w')
f.write('' + '\n')
for x in my_debug_msg:
    if (type(x) == types.FloatType):
        f.write("%010.2f    " % x)
    else:
        f.write(str(x) + "  ")
    f.write('\n')
f.write('' + '\n')
f.close()
