Hi, Andrew:

Thanks so much for the detailed and helpful response! I will follow your recommendations re: using a single stimulus class and custom flow control.

You're in luck! Part of the present project does indeed involve implementing a task to determine a spatial frequency threshold for each participant using the QUEST procedure, so I'm happy to send that source code along for review and possible inclusion in the VE demos. Just FYI: while I hope to have that code written within a week or two, it might be up to a month or so before this particular task is fully coded.

On a related note, I'm wondering if there's any interest in creating a "VisionEgg Cookbook" (or some such) section of the VE website -- basically, a section where people can post VE/Python source code for stimuli, demos, and/or full experiments that they would like to share with the VE community. Allowing screenshots to be uploaded as well would probably be helpful. It seems to me that this would be a great way for VE'ers to help each other out and for new VE'ers to get up and running more quickly. Thoughts? Comments? Criticisms?

More soon...

Cheers,
Doug

On Fri, Feb 08, 2008 at 10:26:27AM -0800, Andrew Straw wrote:
> Doug Morse wrote:
> > Hello Vision Eggers,
> >
> > I have a few questions about how to "best" create an experiment in Vision
> > Egg (VE). Specifically, I'm implementing a motion object tracking task
> > (MOT) wherein participants will be presented a number of target and
> > distractor
> > ...
> ...

======================================
The Vision Egg mailing list
Archives: http://www.freelists.org/archives/visionegg
Website: http://www.visionegg.org/mailinglist.html
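For readers unfamiliar with the QUEST procedure mentioned above, the core idea can be sketched in plain Python: keep a Bayesian posterior over the threshold on a grid, place each trial at the current posterior mean, and update the posterior after each response. The sketch below is illustrative only, under assumed Weibull parameters -- it is not the Watson & Pelli implementation, Vision Egg code, or Doug's promised task; the class and parameter names are hypothetical.

```python
import math
import random


class SimpleQuest:
    """Minimal grid-based, QUEST-style Bayesian staircase (illustrative
    sketch only; not the canonical QUEST or any Vision Egg API)."""

    def __init__(self, t_guess=0.0, t_sd=1.0, beta=3.5, gamma=0.5, delta=0.01):
        # Grid of candidate thresholds (log10 units) with a Gaussian prior.
        n = 601
        lo, hi = t_guess - 3.0 * t_sd, t_guess + 3.0 * t_sd
        self.grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
        self.log_post = [-0.5 * ((t - t_guess) / t_sd) ** 2 for t in self.grid]
        self.beta, self.gamma, self.delta = beta, gamma, delta

    def p_correct(self, intensity, threshold):
        # Weibull psychometric function in log units:
        # gamma = guess rate, delta = lapse rate, beta = slope.
        e = math.exp(-(10.0 ** (self.beta * (intensity - threshold))))
        return self.delta * self.gamma + (1.0 - self.delta) * (
            1.0 - (1.0 - self.gamma) * e)

    def next_intensity(self):
        # Place the next trial at the posterior mean of the threshold.
        m = max(self.log_post)
        weights = [math.exp(l - m) for l in self.log_post]
        total = sum(weights)
        return sum(t * w for t, w in zip(self.grid, weights)) / total

    def update(self, intensity, correct):
        # Bayes update of the log-posterior after one trial.
        for i, t in enumerate(self.grid):
            p = self.p_correct(intensity, t)
            self.log_post[i] += math.log(p if correct else 1.0 - p)

    estimate = next_intensity  # current threshold estimate


if __name__ == "__main__":
    # Simulate an observer whose true threshold is 0.5 log units.
    random.seed(0)
    quest = SimpleQuest()
    true_threshold = 0.5
    for _ in range(100):
        x = quest.next_intensity()
        quest.update(x, random.random() < quest.p_correct(x, true_threshold))
    print("threshold estimate: %.3f" % quest.estimate())
```

In a real Vision Egg task, the simulated response would be replaced by the participant's keypress on each trial, and `next_intensity()` would set the spatial frequency of the next grating.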