Tim Blanche wrote:
> Hello,
>
> I need to present a series of 100 or so images, not a movie, but
> briefly flashed images separated by 1~3 seconds.
>
> The "image_sequence_fast.py" demo almost does the job, with the
> following two issues:
>
> i) The BMP images I have are paletted, but Textures.py raises an
> error "Paletted images are not supported". A quick work-around was
> to convert the images into 24-bit RGB colour, but I was wondering if
> there's a better way?

Not in the current VE version. I've just updated the svn version
(r1508) of VE to support paletted images, but if your own code already
does the conversion, this is hardly a reason to upgrade.

> ii) The images do not all have the same pixel dimensions, so the
> single-texture-with-a-fixed-size approach used by the demo code
> isn't suitable. A not-so-quick work-around would be to resize all
> the images to the dimensions of the largest image in the set,
> padding the borders with the background colour, but again, there
> must be a better way.

Yes. Make a texture big enough for your biggest image and then use
put_sub_image() to place your images in it. The tricky bit is that
this is a little beyond the scope of the abstractions in
VisionEgg.Textures. Basically, a TextureStimulus (the thing Vision
Egg draws) holds a Texture (abstracting the image data), which holds
a TextureObject (abstracting the image data on the video card). You
have to make sure they play nicely together. Since the
put_sub_image() call short-circuits everything but the TextureObject,
you have to manually update the relevant parts of the other objects:
in this case, the size parameter of the TextureStimulus and the
buf_rf and buf_tf (buffer right fraction and top fraction) attributes
of the Texture.

I'm attaching a modified image_sequence_fast.py demo to show this.

-Andrew

--
Dr. Andrew D. Straw
California Institute of Technology
http://www.its.caltech.edu/~astraw/
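[For reference, the work-around mentioned in (i) is a one-liner in PIL. A
minimal sketch; the Pillow-style import is shown, the classic PIL import
is just "import Image", and the in-memory image here stands in for a
paletted BMP loaded with Image.open():]

from PIL import Image  # classic PIL: "import Image"

# A small paletted ("P" mode) image, standing in for a paletted BMP.
im = Image.new('P', (4, 4))

# Convert to 24-bit RGB, which the current Textures.py accepts.
if im.mode == 'P':
    im = im.convert('RGB')

print(im.mode)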
#!/usr/bin/env python
"""Display a sequence of images using a pseudo-blit routine.

This is a fast way to switch images because the OpenGL texture object
is already created and the image data is already in system RAM.
Switching the image merely consists of putting the new data into
OpenGL.

Currently, there is no support for ensuring image sizes remain
constant, so if you get strange behavior, please ensure that all your
images are the same size.

"""

from VisionEgg import *
start_default_logging(); watch_exceptions()

from VisionEgg.Core import *
from VisionEgg.FlowControl import Presentation, FunctionController
from VisionEgg.Textures import *

import Image, ImageDraw
import OpenGL.GL as gl

num_images = 3
duration_per_image = .2
image_size = (256,256)

# Generate some images
image_list = []
for i in range(num_images):
    image = Image.new("RGB",image_size,(0,0,255)) # Blue background
    draw = ImageDraw.Draw(image)
    line_x = image_size[0]/float(num_images) * i
    draw.line((line_x, 0, line_x, image_size[1]), fill=(255,255,255))
    image_list.append(image)
image_list.append(Image.open('/home/astraw/src/motmot/fview/motmot/fview/fview.gif'))
num_images = len(image_list)

screen = get_default_screen()

# create a TextureStimulus to allocate memory in OpenGL
stimulus = TextureStimulus(mipmaps_enabled=0,
                           texture=Texture(image_list[0]),
                           size=image_size,
                           texture_min_filter=gl.GL_LINEAR,
                           position=(screen.size[0]/2.0,screen.size[1]/2.0),
                           anchor='center')

viewport = Viewport(screen=screen,
                    stimuli=[stimulus])

p = Presentation(go_duration=(num_images*duration_per_image,'seconds'),
                 viewports=[viewport])

# Use a controller to hook into go loop, but control texture buffer
# through direct manipulation.
texture_object = stimulus.parameters.texture.get_texture_object()

def put_image(t):
    i = int(t/duration_per_image) # choose image
    if i >= num_images:
        i = num_images - 1 # clamp: t can equal go_duration on the last frame
    im = image_list[i]
    texture_object.put_sub_image( im )
    # put_sub_image() bypasses the TextureStimulus and Texture, so
    # update their size-related attributes by hand.
    w,h = im.size
    stimulus.parameters.size = (w,h)
    tex = stimulus.parameters.texture
    tex.buf_rf = float(w)/tex.size[0] # buffer right fraction
    tex.buf_tf = float(h)/tex.size[1] # buffer top fraction

p.add_controller(None,None,FunctionController(during_go_func=put_image))
p.go()
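[Two details worth spelling out from the approach above: the texture
allocated up front must fit the largest image in the set, and buf_rf /
buf_tf are simply the fractions of that texture each sub-image covers.
A standalone sketch of the arithmetic; the example sizes are made up,
and in the real demo they would come from im.size of each PIL image:]

# (width, height) of each frame in the sequence (hypothetical values)
sizes = [(256, 256), (320, 200), (128, 128)]

# Allocate a texture big enough for the biggest image in each dimension.
tex_w = max(w for w, h in sizes)
tex_h = max(h for w, h in sizes)

# For each frame, buf_rf/buf_tf are the fractions of the texture that
# the sub-image actually covers (right fraction, top fraction).
fractions = [(float(w) / tex_w, float(h) / tex_h) for w, h in sizes]

print(tex_w, tex_h)
print(fractions)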