[olofsonprojects] Alternative approach to compact song data

  • From: David Olofson <david@xxxxxxxxxxx>
  • To: olofsonprojects@xxxxxxxxxxxxx
  • Date: Tue, 12 Feb 2008 06:41:58 +0100

Hi!


Time for an abstract technology rant! Feel free to ignore if you're 
not into "4k demo" style stuff or 8 bit devices. :-)


Ever since I accidentally turned a test tone generator into a 
softsynth (the Kobo Deluxe sound engine; predecessor of Audiality), 
I've been having various ideas about how to combine the power and 
resolution of modern music solutions (MIDI + synths and similar) 
with the compact data and low CPU utilization of the music systems 
of the 8 bit era.

I just had this idea of essentially using a domain-specific form of 
compression to generate the target song data. The solution would look 
something like this:

        1) Use whatever tools you like for driving your
           device of choice; SID, AY/YM, OPL, Paula,
           custom hardware, custom softsynth, ...

        2) Intercept the raw data that drives the target
           device(s) while playing the song, along with
           useful hints for things like song looping.
           Take care to drop any excess resolution here!
           (A sketch of such an event stream follows
           after this list.)

        3) Analyze the data, looking for patterns.

        4) Generate output in the form of write commands,
           delays, loops and subroutine calls, based on
           the data from stage 3).
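
To make stage 2) a bit more concrete, the intercepted stream could
be as simple as a flat list of timestamped register writes plus
hints. A minimal sketch in C - the struct layout and names are my
assumptions, not a fixed format:

    typedef enum
    {
        EV_WRITE,   /* Write 'value' to register 'reg' */
        EV_DELAY,   /* Wait 'value' ticks */
        EV_LOOP     /* Hint: the song loops back to here */
    } EventType;

    typedef struct
    {
        EventType       type;
        unsigned char   reg;    /* Target register (EV_WRITE) */
        unsigned short  value;  /* Data byte, or tick count */
    } Event;

Stage 3) then operates on nothing but a flat Event array, which is
what keeps the packer chip agnostic.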

If there is complex software logic on the "instrument" level, it might 
be more efficient to intercept the "raw" data before this logic, and 
keep the original logic in the final player. That is, essentially 
treating instrument logic as part of the hardware from the "song 
packer's" POV.

Stage 3) would basically be the usual data compression stuff, only 
tuned for dealing with timed writes to a small set of registers. 
Unlike a general data compressor, this one might benefit a great
deal from specifically looking for ramps and patterns in the data 
written to certain registers.
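
For instance, a volume fade written as one register write per tick
is an obvious ramp. A minimal detector sketch, assuming all values
below were written to the same register at a fixed tick interval
(the command names are mine, for illustration only):

    #include <stdio.h>

    /* Turn runs of constant-delta values into single "ramp"
       commands; everything else stays a plain write. */
    static void find_ramps(const int *v, int n)
    {
        int start = 0;
        while(start < n - 1)
        {
            int delta = v[start + 1] - v[start];
            int end = start + 1;
            while((end + 1 < n) && (v[end + 1] - v[end] == delta))
                ++end;
            if(end - start >= 2)
            {
                /* Three or more points: worth a ramp command */
                printf("RAMP %d, delta %d, %d steps\n",
                        v[start], delta, end - start);
                start = end + 1;
            }
            else
            {
                printf("WRITE %d\n", v[start]);
                ++start;
            }
        }
        if(start < n)
            printf("WRITE %d\n", v[start]);
    }

    int main(void)
    {
        int vol[8] = { 15, 14, 13, 12, 11, 8, 8, 8 };
        find_ramps(vol, 8);
        return 0;
    }

On that example data, eight register writes come out as just two
ramp commands.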

A key component of stage 3) would be detecting and splitting 
out "threads" - what a tracker would implement explicitly as separate 
voices/tracks. By generalizing this, the compression algorithm 
doesn't have to know how many voices a particular chip has, or how 
the registers are related; it'll base its work on how the registers
are *actually* used in the song at hand.
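
One cheap heuristic for this (my assumption, not a finished
design): registers written together in the same zero-delay group
probably belong to the same voice, so joining them in a union-find
structure makes the "threads" fall out as connected components:

    #include <stdio.h>

    #define NREGS 32

    static int parent[NREGS];

    static int find(int r)
    {
        while(parent[r] != r)
        {
            parent[r] = parent[parent[r]];  /* path halving */
            r = parent[r];
        }
        return r;
    }

    static void join(int a, int b)
    {
        parent[find(a)] = find(b);
    }

    int main(void)
    {
        /* Example log: each row is one group write (registers
           written with no delay in between); -1 ends a row.
           This looks like two independent voices. */
        int groups[4][4] = {
            { 0, 1, 2, -1 },
            { 4, 5, -1, -1 },
            { 0, 2, -1, -1 },
            { 4, 5, 6, -1 }
        };
        int i, j;
        for(i = 0; i < NREGS; ++i)
            parent[i] = i;
        for(i = 0; i < 4; ++i)
            for(j = 1; (j < 4) && (groups[i][j] >= 0); ++j)
                join(groups[i][0], groups[i][j]);
        for(i = 0; i < 8; ++i)
            printf("reg %d -> thread %d\n", i, find(i));
        return 0;
    }

Here, registers 0/1/2 and 4/5/6 end up in separate threads without
the packer knowing anything about the chip.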

As to the output, for low-end hardware, one would preferably turn the 
smallest and most common "group writes" (i.e. writes to multiple 
registers without delays) into native code.
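
In C terms - on a real 8 bit target this would of course be
generated assembly - the emitted code could look like this; the
register numbers and names are made-up examples:

    #include <stdio.h>

    static void hw_write(unsigned char reg, unsigned char val)
    {
        printf("  reg %u <= %u\n", reg, val);
    }

    /* Generated "group write" subroutine: the analyzer found
       that registers 0, 1 and 4 are very often written
       together, so only the three data bytes remain in the
       song stream. */
    static const unsigned char *gw_freq_wave(
            const unsigned char *song)
    {
        hw_write(0, *song++);   /* frequency lo */
        hw_write(1, *song++);   /* frequency hi */
        hw_write(4, *song++);   /* waveform/gate */
        return song;
    }

    int main(void)
    {
        /* Three bytes instead of three (reg, value) pairs */
        static const unsigned char song[3] = { 0x25, 0x11, 0x41 };
        gw_freq_wave(song);
        return 0;
    }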


//David Olofson - Programmer, Composer, Open Source Advocate

.-------  http://olofson.net - Games, SDL examples  -------.
|        http://zeespace.net - 2.5D rendering engine       |
|       http://audiality.org - Music/audio engine          |
|     http://eel.olofson.net - Real time scripting         |
'--  http://www.reologica.se - Rheology instrumentation  --'
---------------------------------------------------------------------------
The Olofson Projects mailing list.
Home:              http://olofson.net
Archive:           http://www.freelists.org/archives/olofsonprojects
Unsubscribe: email olofsonprojects-request@xxxxxxxxxxxxx subj:"unsubscribe"
---------------------------------------------------------------------------
