Dear Ram,

Ramapriyan Pratiwadi wrote:
> Hi -
>
> I'm developing a demo that consists of playing multiple quicktime
> movies. I have a preliminary version of the demo running, but would
> like to tweak it to run a little smoother... here are my main
> problems:
>
> 1. the movies are of high quality, and on a 12" powerbook, the
> playback is kind of jerky - is it possible to cache to memory, in
> advance, all of the movies i want to show ?

I'm guessing that the jerkiness is due to the decoding of the movie
being near the limits of your CPU power. So, basically, you want to
eliminate the decoding step. Unfortunately, laptops generally have
fairly slow hard drives, too, so I think you're right that the best
bet is to buffer to RAM.

It is possible to decode the raw frames and save them to RAM for
later playback, although the VE doesn't support that directly
(although it could be implemented...). You'd have to export the movie
to an image stack and then load the images into RAM. See
image_sequence_fast.py for an example.

> 2. in the quicktime demo, are these commands necessary ?

Without going back through and really thinking about each step:

> - screen.clear()

Not necessary, and skipping this could buy you a little time.

> - viewport.draw()

This is necessary, as it actually triggers the drawing of each of
your stimulus classes.

> - swap_buffers()

This is necessary, as the drawing happens on the "back buffer" and
you want to display that, which means making it the "front buffer".
This is not a computationally intensive job, however; it merely
involves telling the RAMDAC to look at a different place in memory.

> 3. is it possible to add noise to the movietexture, in order to
> degrade the quality of the image ?

Yes: you could overlay a texture (with varying alpha for
transparency), or you could manipulate the movie in advance. I'm sure
there are other ways.
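For point 1, image_sequence_fast.py is the real reference; purely as a sketch of the buffer-to-RAM idea (and of why RAM is the limit), here is a rough outline. The function names and the fake loader are illustrative, not Vision Egg API — in the actual demo the loader would decode each image of the exported stack:

```python
import numpy as np

# Rough RAM budget for a fully decoded movie: frames * width * height *
# bytes per pixel.  At 640x480 RGB, ten seconds at 30 fps is ~276 MB, so
# a laptop can hold a short clip in memory, not a long one.
def ram_needed(n_frames, width, height, bytes_per_pixel=3):
    return n_frames * width * height * bytes_per_pixel

# Decode every frame up front so playback only copies from RAM.  The
# loader argument stands in for real decoding of the exported image
# stack; here a fake loader just produces blank 8-bit RGB frames.
def preload_frames(frame_paths, loader):
    return [loader(path) for path in frame_paths]

def fake_loader(path):
    return np.zeros((240, 320, 3), dtype=np.uint8)

frames = preload_frames(["frame_%03d.png" % i for i in range(10)],
                        fake_loader)
```

The point of the budget function is just that you should check the total against physical RAM before preloading everything, or the OS will start paging and you are back to disk speeds.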
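For point 3, the overlay idea amounts to alpha blending: out = (1 - alpha) * frame + alpha * noise. A minimal numpy sketch of degrading frames in advance (the function name is made up for illustration; in the live-overlay version you would instead draw a noise texture stimulus over the movie):

```python
import numpy as np

def add_noise_overlay(frame, alpha=0.3, seed=0):
    """Blend uniform noise over an RGB uint8 frame.

    alpha=0 leaves the frame untouched; alpha=1 replaces it with noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, size=frame.shape, dtype=np.uint8)
    blended = ((1.0 - alpha) * frame.astype(np.float32)
               + alpha * noise.astype(np.float32))
    return blended.astype(np.uint8)
```

Doing this offline keeps the per-frame cost out of the playback loop, which matters given that decoding is already saturating the CPU.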
======================================
The Vision Egg mailing list
Archives: //www.freelists.org/archives/visionegg
Website: http://www.visionegg.org/mailinglist.html