Registered Member
|
Hi,
New kdenlive user here. I've looked through the forums but couldn't really find an answer to this. I have created a kdenlive project comprising many (40 or so) screen-grab clips with audio. What is the best way to ensure that the audio volume level remains constant across clips? Listening to the project, I can detect some variation in audio level between clips where I have talked slightly louder or quieter. I'd like to remove that variation if possible. I see several audio effects are available within kdenlive that might be appropriate, including "Loudness" under Audio and "Normalise" under Audio Correction. My first thought was to use the Normalize effect on each clip with the same settings. I have tried the Loudness effect, but it doesn't actually seem to do much. Any advice would be very welcome. Thanks in advance. |
Registered Member
|
You can dock the audio level meter (or audio signal) from the View menu to any kdenlive GUI frame, e.g. next to the timeline. Using Volume effects (keyframeable if you want), adjust each clip to the same level, and make sure that the signal never exceeds 0 dB.
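To illustrate what "never exceeds 0 dB" means in numbers: digital full scale is 0 dBFS, and the peak level of a clip is just its loudest sample converted to decibels. A minimal Python sketch, assuming float samples where full scale is 1.0 (the function name is made up for illustration):

```python
import math

def peak_dbfs(samples):
    """Peak level of float samples (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20 * math.log10(peak)

# A clip whose loudest sample is half of full scale peaks at about -6 dBFS:
clip = [0.1, -0.5, 0.25, 0.4]
print(round(peak_dbfs(clip), 1))  # about -6.0
```

If every clip's peak stays below 0 dBFS on the meter, nothing will clip on render.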
|
Registered Member
|
If you decide to do the above and adjust the volume on each clip according to the audio monitor, you may want to apply some compression first to even things out a little.
Dyson Compressor is simple, while SC1 has a more traditional setup if you are used to compressors from other programs. Once you have a setting that you like, you can apply it to the track (so you don't have to apply it to every single clip) and then adjust the audio with Volume only on the clips that still need it.

Play around with the 'Normalize' effect as well. In Kdenlive it works a little differently to many other Normalize effects (e.g. in Audacity you just set your desired maximum volume in dB), but your situation is basically the classic use case for normalization, so it's worth trying out a few settings until you get it right. More info on Compression & Normalization here, for anyone that wants it.
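As a rough illustration of the compress-then-adjust idea above, here is a toy Python sketch (all names and parameter values are made up for illustration; a real compressor also has attack/release smoothing, which is omitted here):

```python
import math

def compress(samples, threshold=0.5, ratio=4.0):
    """Toy static compressor: above the threshold, extra level is
    reduced by the ratio, squeezing the dynamic range."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    return out

def peak_normalize(samples, target=0.9):
    """Scale so the loudest sample sits at `target` (0.9 is roughly -1 dBFS)."""
    peak = max(abs(s) for s in samples)
    return [s * target / peak for s in samples] if peak else samples

# Compress first to even things out, then bring the level up:
voice = [0.1, 0.9, -0.2, 0.6]
evened = peak_normalize(compress(voice))
```

The compressor narrows the gap between loud and quiet parts; the normalize step then restores a healthy overall level.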
http://www.cameralibre.cc
Free Culture videos made with Free/Libre/Open Source Software about Open Hardware, Open Data, Open Everything |
Registered Member
|
Wow... that's a good one. Why didn't anyone tell me before how to do these things?
Thank you very much for sharing this. Best regards Achim |
Registered Member
|
That's exactly my problem: I played with it in the past, but it didn't seem to be in a playing mood. So can you please elaborate a little more on how the MLT/Kdenlive Normalize effect works? Does it work only on a frame's length, or does it have some memory or lookahead? I would like to understand how it works. So far I do my voice-over post mainly in Audacity, but maybe I could also make use of Kdenlive's audio effects in the future. |
Registered Member
|
My answer was vague because my understanding is also vague...
I also only use Audacity's Normalize effect. I have tried Kdenlive's at various stages and got lucky with a good result once or twice, but I never really understood what was going on or felt in control of it.
http://www.cameralibre.cc
Free Culture videos made with Free/Libre/Open Source Software about Open Hardware, Open Data, Open Everything |
Registered Member
|
Kdenlive uses a temporal normaliser (in my opinion calling it a normaliser isn't useful - call it tempvolnorm or something), which just means that it turns the volume up or down based on the weighted average of the samples around it (I don't think it uses read-ahead). This is surprisingly useful for live streams that have huge dynamic ranges that you want to listen to at home, as it goes quieter when too loud and louder when too quiet. I use it via mplayer for a few select films. It isn't useful if a stream isn't live, because it will not sound as good as properly compressed audio.
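A toy Python sketch of what such a history-based temporal normaliser might look like (this is my illustrative reconstruction from the description above, not Kdenlive's actual code; window size and target are made-up values):

```python
def temporal_normalize(samples, window=100, target=0.5):
    """Toy temporal normaliser: each sample is scaled by
    target / (average magnitude of the last `window` samples).
    No lookahead, so a sudden loud spike still gets through before
    the moving average catches up."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        recent = samples[lo:i + 1]
        avg = sum(abs(s) for s in recent) / len(recent)
        gain = target / avg if avg > 1e-9 else 1.0
        out.append(samples[i] * gain)
    return out
```

Note how a quiet passage gets boosted and a loud one attenuated toward the same target, but the first samples of a sudden jump overshoot - which is exactly why spikes over 0 dB can slip past it.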
A compressor, on the other hand, takes a signal and can be told to reduce the dynamic range of a track (actually altering the audio to bring the louder and quieter parts closer in volume), behaving according to set parameters. You could use this for the same thing as the temporal normaliser, I guess, and you'd get nicer-sounding results, but I would never actually use it for a whole recording. For a vocal track or a podcast it is basically essential, as it evens out variance in spoken volume, which has the neato side-effect of making all talking pretty much the same level. Leveller in Audacity is useful for non-vocal material that needs less dynamic range (e.g. explosions hurt your ears, but turning them down to a listenable level means you can't hear dialog), and it will always sound better than a temporal normaliser.

A normaliser adjusts the highest volume to a given level - it does nothing to the dynamic range.

I make Let's Plays, so my process is to normalise (with Audacity, not kdenlive) the game audio to 0 dB, compress the vocals (quite heavily, with a very small attack), normalise the vocals to (almost always) -5 dB, then autoduck the game audio by 12 dB. I used to also noise-gate to make edits easier, but fortunately OBS now has a built-in noise gate. That results in something that sounds like this: https://youtu.be/OOxpBwxwB_o?list=PL_PG ... M1Zx&t=460

Good technique is important. That clip was recorded with two very cheap electret mics ($5), but sounds fine due to using filters effectively. Obviously a lot depends on your use case, but a combination of compression on vocals, normalising other tracks, and always trusting your ears will get you even sound. I'm not really sure what your use case actually is? |
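The autoducking step in a chain like the one described above can be sketched as follows (illustrative only; the names and the gate level are assumptions, and real duckers smooth the gain changes rather than switching instantly):

```python
def duck(background, voice, duck_db=-12.0, gate=0.05):
    """Toy autoducker: whenever the voice signal is active (above a small
    gate level), drop the background by duck_db (-12 dB here, matching the
    Let's Play chain described in the post)."""
    duck_gain = 10 ** (duck_db / 20.0)  # -12 dB is roughly a gain of 0.25
    return [b * duck_gain if abs(v) > gate else b
            for b, v in zip(background, voice)]
```

Whenever someone speaks, the game audio drops out of the way; when they stop, it comes back up.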
Registered Member
|
Thank you very much for your extensive explanation! I find it very useful.
Just my two cents/rant here, so please take with a lot of grains of salt... alas, I wouldn't exactly count cheap electret mics as good technique (= equipment); it seems that you got what you paid for. In your example, my impression is that the voices have little "volume", no "timbre", no "color". They sound thin and "airy" to me, with a strong emphasis on the mid and semi-high frequencies, but nothing in the lower frequencies, which are also important. Of course, they sound clear - that's a prime requirement fulfilled. I listened to your example using a good studio headset to make sure I'm actually hearing what's in the audio.

No, I'm not ditching electret mics as such. But you typically cannot get back the lower frequencies that a cheap mic never picked up, regardless of the amount of tricks one tries to play in the frequency domain. When I made the switch to a "cheap" studio microphone it was a flashing revelation: voices do have volume, beautiful timbre, color, a body - without any filters, just out of the box. Not some airy, body-less voice. But, as I said, just my two cents. |
Registered Member
|
Heavily compressed vocals are usually characterised as thick and heavy so I'll take that as a compliment. I stand by my results. |
Registered Member
|
Wow, thanks for all the responses.
Adjusting the volume level manually on every clip really wouldn't scale for my workflow - I have more than 40 clips I am splicing together, and it would be very laborious. I have played around with the Normalize plugin and it nearly does what I want, but it behaves very differently to the normalize features in Audacity and Ardour. I have played around with the number of frames used etc., but no matter what I do I always seem to end up with small bits of the audio peaking over 0 dB after normalization. I am thinking I might use Normalize and then a limiter filter, or perhaps compression. BTW, I should have added that I am using 0.9.10 rather than the development releases. |
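The normalize-then-limiter idea can be sketched like this (a toy Python example assuming float samples with full scale at 1.0; a real limiter applies smooth gain reduction rather than hard clipping, which distorts):

```python
def hard_limit(samples, ceiling=0.99):
    """Toy brick-wall limiter: clamp anything past the ceiling so the odd
    post-normalization spike can no longer go over 0 dBFS (1.0 here).
    A proper limiter would ramp the gain down smoothly instead of clipping."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```

Run after normalization, it guarantees a hard ceiling even when the normalizer's moving average lets a transient through.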
Registered Member
|
You can batch process with Audacity - http://manual.audacityteam.org/o/man/ch ... ation.html If not, there's always the stand-alone normalize or sox utility - http://normalize.nongnu.org/, http://sox.sourceforge.net/ Both of these work on files external to kdenlive, but I guess it depends on what you mean by clips? If they're sound clips, that's fine; movie clips are a touch more awkward, because you might lose sync when extracting sound from a video file and remuxing later. I can help with that too, though. |
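For anyone wanting to batch peak-normalisation outside kdenlive without installing extra tools, the same idea can even be sketched in plain Python with the standard library (assumes 16-bit PCM WAV files; the file paths and the -3 dB target are made-up examples):

```python
import glob
import struct
import wave

def peak_normalize_wav(src, dst, target_db=-3.0):
    """Peak-normalise one 16-bit PCM WAV file to target_db dBFS."""
    with wave.open(src, "rb") as w:
        params = w.getparams()
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    peak = max(abs(s) for s in samples) or 1
    gain = (10 ** (target_db / 20.0)) * 32767 / peak
    scaled = [max(-32768, min(32767, round(s * gain))) for s in samples]
    with wave.open(dst, "wb") as w:
        w.setparams(params)
        w.writeframes(struct.pack("<%dh" % len(scaled), *scaled))

# Hypothetical batch run over every extracted clip:
# for path in glob.glob("clips/*.wav"):
#     peak_normalize_wav(path, path.replace(".wav", "-norm.wav"))
```

After this, every clip peaks at the same level, which is essentially what the stand-alone normalize utility does for you in one command.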
Registered Member
|
Thanks... I should have been clearer about my use case - I am making screencasts, so my clips are 5- or 6-minute-long screen grabs with audio (recorded simultaneously). I had hoped to avoid using tools outside of kdenlive... but perhaps that's unrealistic. I'll give your suggestions a try, thanks. |
Registered Member
|
As corosivetruth already hinted at, the Normalize effect in Kdenlive doesn't use a lookahead buffer but instead works on a moving history horizon over a certain number of past samples (or was it frames?), so it cannot avoid spikes. A compressor may actually be a good choice for your purpose. Please note that Kdenlive's effects in fact stem from other sources, so Kdenlive can do little about them. Many, if not most, of the audio effects come from the LADSPA plugin collection and are maintained there. Some stuff comes from MLT or is maintained there. The implication for you is that these effect filters don't depend so much on the Kdenlive version but rather on other libraries. Upgrading Kdenlive then won't change effect behavior. |
Registered Member
|
Would this be a good and effective way of improving the audio of a (larger) project:
1. edit your clips
2. extract audio - is it possible to get one audio track for the whole project this way?
3. extract this audio track and improve it in Audacity
4. import the improved audio track, replace the original audio and render

Alternatively, how about doing a similar process after the rendering? Something along the lines of rendering without sound and adding the audio to the clip later with a suitable tool. Pretty much like the thread starter, I'm trying to get my head around a method by which it would be possible to edit the audio "all at once" for a ca. 90-minute-long project (shot with 2 or 3 cameras) without having to touch each segment individually. |
Registered Member
|
Results should be better if you follow steps 1-4. In step 2, render audio only, to a lossless format (e.g. WAV). That way you will have only one lossy compression pass on the audio in the final render (e.g. WAV -> AAC).
|