[visionegg] Re: pygame v. visionegg

  • From: Andrew Straw <astraw@xxxxxxxxxxx>
  • To: visionegg@xxxxxxxxxxxxx
  • Date: Wed, 30 May 2007 01:04:35 -0700

Hi John,

John Christie wrote:
OK,
So, for those of you who are serious psychophysicists, the subject of this makes no sense. I mean, Vision Egg is massively more flexible for psychophysics.

That's kind. Thanks.

However, for just doing basic cognitive tasks, it's often a little simpler (and better documented) to whip something up in pygame.

I see your point.

As a very recent Vision Egg newbie, my question is twofold. What are the advantages of Vision Egg when you just need to draw simple shapes and then gather responses?

Well, if the rest of your experiments are in the Vision Egg, that's a natural reason to prefer the VE. Otherwise, there might be something about the graphics initialization or the use of extension modules (e.g. QuickTime movies or the win32_vretrace module). But, fundamentally, they're both using the same hardware, so it's more a matter of what software library is better suited to the task (and perhaps to your brain).

And, is it hard to work with the two together? What does one need to watch out for?

I haven't used them too much together, other than to implement support for using pygame surfaces as the source of textures. See the demo/pygame_texture.py script.

For example, what is the relationship between Vision Egg viewports and pygame surfaces? How can I draw with pygame and put the result in a Vision Egg viewport?

A VE viewport is two things: 1) a container for stimuli (stimulus instances) whose draw functions it calls, and 2) an OpenGL viewport, which is merely a region of pixels in the framebuffer. A pygame/SDL surface is conceptually also more-or-less like an OpenGL viewport. However, on modern PCs there's actually not a lot (if anything) going on beyond allocating a chunk of RAM for drawing into, so a surface could be the direct video surface or some other piece of memory. Basically, to get data from a pygame surface into OpenGL, you have to copy that RAM into something OpenGL knows about, such as texel data.
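To make that copy step concrete, here's a minimal sketch using only pygame (no Vision Egg or OpenGL context required; the surface size and drawing are just illustrative). It shows that a surface is plain pixel RAM, and that extracting it as a tightly packed RGB byte string is the kind of copy that the texture machinery performs behind the scenes:

```python
import pygame

# A plain pygame Surface is just a chunk of RAM holding pixels;
# no display mode or OpenGL context is needed to draw into it.
surface = pygame.Surface((64, 48))
surface.fill((0, 0, 0))
pygame.draw.circle(surface, (255, 255, 255), (32, 24), 10)

# To hand the pixels to OpenGL, copy that RAM into a raw byte
# buffer in a format OpenGL understands (here, packed RGB).
texel_data = pygame.image.tostring(surface, "RGB")

# One byte per channel, three channels per pixel.
assert len(texel_data) == 64 * 48 * 3
```

In Vision Egg this buffer-copying is hidden from you: you hand the surface itself to the texture code, as described below.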

All of that is going on behind the scenes in the pygame_texture.py demo, which simply uses a pygame surface as the texel source for a texture. You can extend this to do something like demo/image_sequence_fast.py, namely dynamically updating the texel data through calls to texture_object.put_sub_image(your_pygame_surface).

I hope that helps.

Cheers!
Andrew
======================================
The Vision Egg mailing list
Archives: //www.freelists.org/archives/visionegg
Website: http://www.visionegg.org/mailinglist.html
