[audacity4blind] Re: Getting started with Audacity

  • From: "Scott Berry" <sberry@xxxxxxxxxxx>
  • To: <audacity4blind@xxxxxxxxxxxxx>
  • Date: Sun, 18 Feb 2007 15:26:03 -0600

----- Original Message ----- From: "Nolan Darilek" <nolan@xxxxxxxxxxxxxxxx>
To: <audacity4blind@xxxxxxxxxxxxx>
Sent: Sunday, February 18, 2007 1:05 PM
Subject: [audacity4blind] Getting started with Audacity

See my answers below your questions.

Hello, folks.

Hi Nolan,

I was referred here from audacity-users. I'm trying to
get started with Audacity and, to some extent, with audio editing. As this is the first GUI tool I've used, other than Goldwave (which I've only used slightly), I know enough about audio editing to shoot myself in the foot but not quite enough to look flashy while doing it.

I hear ya there. I think that is where a lot of us start, unless we have some training through broadcasting and similar industries.

:) So what I've got, in addition to Audacity usability questions,
are a few basic audio editing issues as well.
I will do my best to answer them as well as I can. I am the moderator, but I don't use Audacity on a regular basis because it doesn't have all the features I want, and I don't use tracks too much for my work. But I do keep it around.

I've actually had quite a bit of trouble finding this list and its associated website, so I'd like to contribute any efforts to making it less difficult to discover, if possible.

Well thank you very much for your generosity.

To that end, I'm trying
to use Audacity to produce podcasts, and would like to do an accessibility-oriented tutorial for getting started with it, perhaps as a podcast so folks can actually hear how one goes about setting selections, contracting them, etc. Initially that was a bit confusing, though not so much once I got the hang of it.

Yes, Audacity tends to be a bit more fussy than most audio editors I have used on the PC.

But, anyway, onto the questions. :) It looks like there's quite an active community of plugin developers, or that there was at one time, and that there are several plugins for making inaccessible tasks less so. I've downloaded but not yet experimented with a collection whose name I don't recall for certain (Stereo Butterfly?) and it looks like there's a plugin to help with timeshifting.

There are some plug-ins which were made by our former co-moderator, David Sky. You may want to check out his plug-ins; those should be fairly accessible. Some of them are goofy because David did a lot of, I believe it was, techno music, so some of his plug-ins did specialized effects he needed.

Thus far I've been
inserting silence to timeshift segments. Are there any flaws with this approach? Will it make my projects larger as tracks contain various amounts of silence, or does Audacity simply note that there should be X amount of silence at certain points?

Boy, I am not sure how to answer this. Silence probably does make the file bigger if you are just inserting silence between segments, but if you are trying to quiet things down, it should not have any effect. One way to experiment with your question would be to add silence to a file, save it, find the file on your C drive, and then look at the size of the whole file under Properties. If you have another file to compare it to, you could compare the two and see which one is bigger. I am not quite sure I understood your question, so my apologies if I did not understand it correctly.
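For what it's worth, I can't speak to how Audacity's own project format stores silence internally, but for a plain exported uncompressed WAV file, silence is still real sample data: every "silent" frame is a zero sample that takes up the same space as any other sample. A little sketch with Python's standard `wave` module (the file path here is just a throwaway temp file) shows the size growing in step with the amount of silence:

```python
import os
import tempfile
import wave

def wav_size_with_silence(seconds_of_silence, rate=44100):
    """Write a mono 16-bit WAV containing only silence; return its size in bytes."""
    path = os.path.join(tempfile.gettempdir(), "silence_test.wav")
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        # Digital silence is still sample data: one 16-bit zero per frame.
        w.writeframes(b"\x00\x00" * int(rate * seconds_of_silence))
    return os.path.getsize(path)

# Each extra second of "silence" adds rate * 2 bytes (~86 KB at 44.1 kHz mono).
print(wav_size_with_silence(1))
print(wav_size_with_silence(10))
```

So for exported audio, ten seconds of inserted silence costs the same as ten seconds of speech; only a compressed format like MP3 or FLAC would squeeze the silence down.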

Is there any way to
use the timeshift tool accessibly, or is this plugin (whose name I can't specifically recall) the only way to go?

I have never used the Time Shift tool, so unfortunately I cannot answer this question.

I'm combining recordings made on my mic with field recordings from an Edirol R-09. This unit adds a good amount of preamp hiss which I'm trying to figure out how to remove. Recording silence and passing that to the noise filter does a reasonable job, but the recording sounds much more digitally processed afterwards, so I'm thinking that EQ is the best way to go. I can't figure out how to access the EQ, however, as JAWS reports a lot of percentage slider bars, which I assume to be the bands, but I don't know to which frequency range each bar is associated. I saw a simple-looking EQ in the collection of plugins I snagged, but are there any accessible parametric EQ plugins that might help with this? How else might I go about removing this hiss in post-production?

Maybe Sarah can answer this one. I am unfamiliar with the unit you are using to record in the field, so I will not take a stab at this.

I'm experimenting with things on the
recording side by disabling automatic gain control and setting a more conservative input level to minimize the processing added by the preamps.

Given a project where you have, say, one or two segments of speech recorded directly into the sound device, plus a number of other segments recorded in the field with slightly different levels, what is the best way of creating a recording with a uniform level that doesn't trash the ambience? I could normalize, but AFAIK this doesn't set an average level. I could compress slightly or use the leveler, but I'm not quite sure how these two differ. What I'd like is for sounds in my immediate environment to be clearly audible with background sounds adding flavor but not necessarily overwhelming my main focus. Is slight compression the key to achieving this? And what about the situation where you've got a number of different segments recorded in different environments and you don't want your listeners adjusting their speakers for every change of scenery?

I am not really sure here either. A slight compression might work, perhaps combined with a normalizing effect and some noise reduction.
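The difference between the two is worth spelling out, since you asked how they differ. Normalizing applies one fixed gain to everything, so the loud and quiet segments keep exactly the same ratio; compression reduces the peaks more than the quiet parts, which is what actually evens out levels. This toy sketch (not Audacity's actual Compressor or Normalize algorithms, just assumed simplified versions to show the shape of the idea) illustrates it on samples in the -1.0 to 1.0 range:

```python
import math

def peak_normalize(samples, target=0.9):
    """Scale all samples by one factor so the loudest peak hits `target`.
    The ratio between loud and quiet segments is unchanged."""
    peak = max(abs(s) for s in samples)
    gain = target / peak
    return [s * gain for s in samples]

def compress(samples, threshold=0.5, ratio=4.0):
    """Crude downward compressor: the part of each sample above `threshold`
    is divided by `ratio`, shrinking the gap between loud and quiet."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    return out

quiet_and_loud = [0.1, -0.1, 0.8, -0.8]  # two segments at different levels
print(peak_normalize(quiet_and_loud))     # still the same 8:1 loud/quiet ratio
print(compress(quiet_and_loud))           # loud segment pulled toward the quiet one
```

That is why normalizing alone won't fix segments recorded at different levels: it moves the whole recording up or down together, while compression (followed by make-up gain or a normalize pass) narrows the spread.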

Right now I'm
keeping the number of tracks reasonably low. Is it better to have each different segment on its own track, performing some sort of processing on the tracks as a group before mixing them down to, say, a music track and an ambiance track?

Well, in my opinion, yes, it would be, just because you can control each track's volume and so forth separately and then mix the tracks together. Now, since you are thinking of podcasts, I know a lot of people use two separate tracks for Skype calls: they have themselves on one track while the other person is on the other. But then again, having too many tracks can become confusing, so whatever fills the bill for you, I would say use it. Do some more experimenting here.

And, yes, I'm experimenting with all of this on my own as well. I've made a number of mundane recordings in different environments, trying to intuitively learn what levels work best where, and what I can expect to clip. I'm having a hard time getting a loud recording, though, and usually have to crank the volume to hear well but cut it so JAWS/VoiceOver isn't overwhelming. I'm not sure if this is something I need to address in the recording itself--not setting levels conservatively and risking the possibility of clipping--or if there's a certain series of processing steps I might apply and an order in which to perform them that might give a bit more volume without me worrying about whether or not I'm clipping. I'm getting closer, but I'm not quite there.

Well, someone who is a little more in depth with Audacity than I am would have to chime in here too. I am just using a lot of intuition, hopefully, grin! But I think, truthfully, everyone has a certain way they work, so your experimenting is good stuff.
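One concrete trick for the clipping worry: instead of guessing at a gain, you can measure the loudest peak first and cap the boost so nothing can exceed full scale. (As I understand it, Audacity's Amplify effect does something along these lines by default; the function below is just an assumed, simplified sketch of the idea, not its actual implementation.)

```python
def amplify_without_clipping(samples, gain_db):
    """Apply `gain_db` of gain, but cap it so no sample exceeds full scale (1.0).
    Samples are floats in the -1.0 to 1.0 range."""
    peak = max(abs(s) for s in samples)
    max_gain = 1.0 / peak              # largest linear gain before clipping
    requested = 10 ** (gain_db / 20)   # convert dB to a linear factor
    gain = min(requested, max_gain)    # never boost past full scale
    return [s * gain for s in samples]

# Asking for +12 dB on a recording that peaks at 0.5 would clip,
# so the gain is capped at 2x (+6 dB) and the peak lands exactly at 1.0.
print(amplify_without_clipping([0.5, -0.25], 12.0))
```

So rather than recording hot and risking clipping at the source, you can record conservatively and then apply the maximum safe boost afterwards; that gets you the loudest possible result without ever going over.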

Thanks for joining, and glad to see ya on the list.


The audacity4blind website is at
Subscribe and unsubscribe information, message archives,
Audacity keyboard commands, and more...

To unsubscribe from audacity4blind, send an email to
with subject line

