[nama] Automation


I know I'm throwing another bone into a very busy stew here, but I
figured it was cool to start talking about it. One feature which is
ubiquitous in most DAWs but yet to be present in nama (believe it or
not, the list of major DAW features lacking in nama is getting pretty
small) is parameter automation. Now the automation itself would not be
too hard to achieve, since nama already uses -klg to control fades on
tracks. The main question is: how do we expose this to the end-user in
a coherent, intuitive way?
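For reference, a fade in nama ultimately becomes an Ecasound -klg
(generic linear envelope) option. Here is a minimal sketch, in Python,
of what building such a string looks like; the function name and
defaults are mine, purely for illustration, assuming an amplifier
effect (-ea) whose first parameter is the gain percentage:

```python
def klg_fade_in(param_index, start_sec, end_sec, low=0, high=100):
    """Build an Ecasound -klg option: a linear envelope with two
    breakpoints, ramping the given effect parameter from `low` to
    `high` between start_sec and end_sec.

    -klg format: -klg:param,low,high,point_count,pos1,val1,...
    where positions are seconds and each value is in [0,1],
    mapped onto the [low, high] range.
    """
    points = [(start_sec, 0.0), (end_sec, 1.0)]
    flat = ",".join(f"{pos:.2f},{val:g}" for pos, val in points)
    return f"-klg:{param_index},{low},{high},{len(points)},{flat}"

# A 3-second fade-in on -ea's gain parameter, starting at t=10:
print(klg_fade_in(1, 10, 13))  # -klg:1,0,100,2,10.00,0,13.00,1
```

Automation would be the same mechanism with more breakpoints and with
targets other than silence/full volume.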
Here is a mess of thoughts:

As I see it, there are two main ways in which we can expose this:
- The easy way (for nama):
  Once the user decides to automate a parameter, it becomes his
  responsibility to split the timeline with markers and control which
  areas have which values and how they transition. Nama would issue an
  error when trying to modify the parameter directly and basically just
  assist in creating a valid -klg. The advantages would be the relative
  simplicity of implementation (?) and that, by forcing the user to do
  it all by hand, we leave it up to him to keep track of what he's
  doing. The disadvantage, from the user's standpoint, is that simple
  automation of a 10-second area of the timeline would mean keeping
  track of three separate pieces, involving several extra steps to
  modify a simple parameter.
- The more involved way:
  We allow Mr. User to automate only certain pieces of the timeline
  (like those ten seconds I was talking about). From the user's
  standpoint, this is more intuitive and transparent, but it also
  offers more possibilities for confusion (especially in a text
  environment where we don't have pretty automation curves), placing
  the burden on nama to try and minimise errors. Nama would need to:
  - Proxy modify_effect so that it would affect "non-automated" parts
    of the track.
  - Remind the user that the parameter is automated.
  - Warn the user if he is modifying the parameter while the playhead
    is on an automated part.
  - Offer functionality to list, modify, and manage automated areas.
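To make that bookkeeping concrete, here is a rough Python sketch of the
kind of region registry a proxied modify_effect might consult. All the
names here are invented for the example; this is not nama's actual
code, just the guard logic described above:

```python
class AutomatedRegions:
    """Hypothetical registry of automated spans for one effect
    parameter; illustrates the warn/remind behaviour sketched above."""

    def __init__(self):
        self.regions = []  # list of (start_sec, end_sec) tuples

    def add(self, start, end):
        self.regions.append((start, end))
        self.regions.sort()

    def is_automated_at(self, pos):
        return any(s <= pos <= e for s, e in self.regions)

    def check_modify(self, playhead):
        """What a proxied modify_effect might report before changing
        the track-wide value of the parameter."""
        if not self.regions:
            return "ok"
        if self.is_automated_at(playhead):
            return "warning: playhead is inside an automated region"
        return "note: this parameter has automated regions"

regions = AutomatedRegions()
regions.add(0, 10)  # e.g. an automated intro
print(regions.check_modify(5))   # warning: playhead is inside an automated region
print(regions.check_modify(42))  # note: this parameter has automated regions
```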

Thoughts about possible syntax:
- The syntax for fades is so powerful that I'd like to see it mirrored
  on a possible automate command. I see something like this:
  > auto in <s_effect_id> <i_param_id> [<s_start_mark>|<f_seconds>] \
            [<s_end_mark>|<f_seconds>] <f_target_value>
        # Ramp up/down from track value to target value over specified
        # period. We can have several auto in in a row, ramping from
        # automated value to automated value.
  > auto out [<s_start_mark>|<f_seconds>] [<s_end_mark>|<f_seconds>]
        # Ramp up/down to track value over specified time. May not be
        # followed by another auto out, obviously.
- Do we use relative or absolute values for automation, or do we allow
  both? Each has some merits and drawbacks.
- What about automated mutes and effect bypass? The former can be done
  just as well with fades (as I have done many times), and the latter
  could be potentially useful in some (rare) circumstances.
- What about curves? We might need to select between linear and
  logarithmic curves, at the least.
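Whatever surface syntax we settle on, a chain of auto in segments would
presumably have to be collapsed into a single -klg per parameter, since
each Ecasound envelope controls the parameter on its own. A sketch of
that collapse (the segment tuple format, function name, and the
assumption that we ramp up from the low value are all mine):

```python
def segments_to_klg(param, low, high, segments):
    """Collapse ordered (start, end, target) automation segments into
    one Ecasound -klg option string. Targets are given in real units
    and scaled into the 0..1 range that -klg expects.

    Assumes segments are sorted and non-overlapping, that the value
    holds steady between one segment's end and the next one's start,
    and that the very first ramp starts from `low`.
    """
    scale = lambda v: (v - low) / (high - low)
    points = []
    for start, end, target in segments:
        if not points:
            points.append((start, 0.0))            # ramp up from `low`
        elif points[-1][0] < start:
            points.append((start, points[-1][1]))  # hold until next ramp
        points.append((end, scale(target)))
    flat = ",".join(f"{p:.2f},{v:g}" for p, v in points)
    return f"-klg:{param},{low},{high},{len(points)},{flat}"

# Ramp gain to 100 over 0-2s, hold, then down to 50 over 8-10s:
print(segments_to_klg(1, 0, 100, [(0, 2, 100), (8, 10, 50)]))
# -klg:1,0,100,4,0.00,0,2.00,1,8.00,1,10.00,0.5
```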

Finally, here is a little use-case I cooked up in my head to hopefully
help concretise some aspects of the feature and what it's used for.

Imagine a typical pop-rock ballad. We have the following tracks:
acoustic guitar, lead electric guitar, rhythm electric guitars double
tracked left and right, bass guitar, a bunch of drum tracks on a bus
simply referred to as drums from now on, and some lead vocals. The
structure of the song goes something like this: 1) acoustic guitar
intro with some lead guitar, 2) verse, 3) chorus, 4) verse, 5) chorus,
6) outro (what a horrid word) with acoustic guitar, lead guitar,
hi-hat, and vocals.

- The bass and rhythm guitars need no automation, so we don't talk
  about them.
- Acoustic guitar:
  - In the intro, it is alone and pretty much needs its full frequency
    range and is closer (louder).
    When the rest of the instrumentation kicks in, we back it up in the
    mix (lower the volume) and apply a bass cut (and maybe a touch of
    mid) so as to minimise frequency overlap. In this case, the intro
    would be the automated part and the bulk of the song left to general
    track settings (until the outro).
  - In the outro, we bring the acoustic guitar closer again and give it
    its bass back (but not necessarily the mid, because of the vocals).
- Lead guitar:
  - In the intro, our lead guitar fades in on the right but we then
    automate the pan so that it gradually drifts to the centre. At the
    end of the intro, the volume needs to be increased as the acoustic
    guitar's decreases, so that they effectively trade places for the bulk
    of the song. Here too, we probably want to automate the intro and go
    back to track defaults for the bulk of the song.
  - We reverse the process for the outro, decreasing the volume at the
    outset, then panning from centre to left and fading out.
- Drums
  - We're after a stylistic effect here, so when the chorus comes
    around, we want to ramp up the compression and increase the volume.
    For that purpose, we increase the Ratio parameter on the compressor
    and the volume during the drum fill introducing the chorus and set
    them back to normal when the second verse comes around, and then
    repeat the operation for the second chorus.
  - During the outro we want our hi-hat to move gently back until it is
    almost inaudible, but not quite.
- Vocals
  - The vocals are nearly perfect, but there is one word that isn't
    quite on pitch in the second verse, and we have to apply some
    pitch correction there for about three seconds.
  - The vocals were rather forward throughout the song, but they need
    to be quieter during the acoustic wind-down.
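As a quick sanity check on the proposed syntax, the drum-compression
trick above might read something like this (the marker names, the
effect id K, and the parameter index are all invented for the example):

```
# ramp the compressor's Ratio (say, parameter 2 of effect K) from the
# track value up to 6 over the drum fill before the first chorus
> auto in K 2 fill_1 chorus_1 6
# and back to the track value as the second verse starts
> auto out chorus_1_end verse_2
```

The volume would get its own auto in / auto out pair over the same
marks, which already hints at how much region bookkeeping nama would
need to do for the user.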

I hope this gives a bit of an idea of what context automation might be
used in.

What are your thoughts?


