Ian Shepherd On Loudness & Dynamics

Practical Tips On LUFS Settings For Music Production

By Matt Houghton
Published April 2023

Ian Shepherd, mastering engineer. Photo: Mike Banks

It’s nearly a decade since the LUFS standard for loudness measurement was defined, yet many still seem confused about what it means for music production.

Back in SOS February 2014, we devoted 12 pages to explaining the then‑new ITU‑R BS.1770 audio loudness measurement/normalisation standards. At the time, it was clear they’d have a huge impact on broadcast audio, yet questions remained about what they would mean for music production. Since then, YouTube and most music streaming services have implemented loudness normalisation, and software tools to measure loudness have become readily available, even built into our DAWs. Plenty of confusion remains, though, about what loudness normalisation algorithms do, how loud is loud enough for music, and how and when to make good, practical use of LUFS meters. Furthermore, plenty of engineers remain sceptical, and some, for various reasons, continue to use clipping and heavy limiting to master loud.

With these thoughts in mind, I figured it was time SOS revisited the topic. We’ve written about the theory a number of times, but I wanted to find out how things have been evolving in practice, and to tease out some good, practical tips and advice for our readers. To that end, I enjoyed a long, detailed chat over Zoom with UK‑based professional mastering engineer Ian Shepherd. Ian has long been a fierce and (since his critique of Metallica’s Death Magnetic back in 2008) prominent critic of over‑limiting in pursuit of loudness, campaigning for people always to aim for musicality first. If you’re sceptical about these ideas, I’d urge you to hold your fire and approach Ian’s words with an open mind. It soon became abundantly clear to me that he’s not opposed to creating loud masters per se; his message is more nuanced. And whatever your view on the question of ‘how loud is too loud’, one thing’s certain: Ian knows a hell of a lot about how best to use those loudness meters, and how your mixes will fare when played over the various streaming services!

Hearts, Minds & Language

I wanted to know, with loudness normalisation now enabled on YouTube and so many streaming services, whether Ian believes that the ‘loudness war’ has genuinely been ‘won’, as we suggested it might have been back in 2014. “The funny answer,” he joked, “is to say yes, I think the war has been won but someone needs to tell the generals!” Setting humour aside, though, he struck a less triumphant tone: “The reality is that no, it hasn’t been won. It’s basically a done deal as far as the technology, the standards and the best practice are concerned — but it’s now a battle for the hearts and minds of artists and engineers. That’s definitely not been won.

“With hindsight, I used lots of language in the early days of the loudness wars debate and Dynamic Range Day [see box] that I’ve now come to regret. People felt I was blaming them, and they felt criticised, which was never my intention. While it achieved what I wanted in terms of raising the profile of the issue, there are more positive ways that I could have presented the information.”

It’s perhaps no surprise, then, that he takes a different tack today. “What I’ve found most effective is the plug‑ins and the Loudness Penalty site [see box]: rather than just present people with facts, or even try to win over their hearts and minds, I’m just giving them the tools. It’s much, much more impactful if people can hear for themselves the difference between their master at ‑4 and ‑10 LUFS when they preview it at ‑14 in Loudness Penalty. That’s the ‘Aha!’ lightbulb moment.”

Ian also offered an insight as to why people might feel the need to compete on loudness. “By my estimation, somewhere between 80 and 90 percent of people who are listening online or on devices are listening to normalised music. But when I surveyed engineers and musicians on social media, 75 percent said they turned normalisation off because they want to hear the music exactly as it was mastered. So there’s this really cruel irony that the people who care most about the audio are listening in a way that almost nobody else hears it, and because of that they feel they need to match [the loudness of] all the other stuff that’s out there — but nobody else cares!”

Meter M’aide!

Do you know what to make of the different readings on a loudness meter? Ian suggests that Short Term LUFS might be the most useful when mixing and mastering.

While we were on the subject of tools and communication, we discussed loudness meters. I’ve long thought that with so many readings, loudness meters can seem a bit confusing. The True Peak function might seem obvious, and most of us appreciate that Integrated LUFS indicates a track’s loudness and is used by streaming services to normalise playback loudness; but what of Momentary and Short Term LUFS? I asked Ian how he uses these, and whether either could be seen as a substitute for VU or RMS meters.

“LUFS is a remarkably effective measurement. It agrees with my ears a lot of the time and works well with something like 90 percent of material. But yes, it can also seem very confusing. They’re not intuitive names. I think lots of people understand RMS, though, and LUFS is basically a more sophisticated version of RMS. Short Term LUFS and RMS are often virtually identical, which I think is helpful to know. I still use a VU meter, too, because there are important things about them that aren’t replicated in most loudness meters. The scale goes from, what, ‑30 to +3 dB and it’s vastly more sensitive around the zero point. So if you push even slightly too high it pegs, and if you go 3‑4 dB lower than zero the needle really drops. It’s not a target (I don’t want people to aim for targets!) but in terms of using it as a reference you can see really easily where you are, whereas a lot of loudness meters aren’t so easy to read. If you’re happy with RMS and happy with VU, keep them, and just look at the Integrated loudness when you’re done. The biggest problem with VU meters is that they’re overly sensitive to bass, so if you’re working on anything that’s got huge bass in it... the meter will peg whenever you get to the drop. That’s a situation where you might find it more helpful to use LUFS Short Term.”
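Ian’s ‘LUFS is a more sophisticated version of RMS’ point is easy to see in code. Below is a minimal Python sketch of a Short Term measurement as BS.1770 defines the maths: a plain RMS calculation taken over a three‑second window, after a K‑weighting filter. It assumes a mono signal at 48kHz (the rate for which the standard’s published coefficients apply) and omits the sliding update and multichannel weighting a real meter would add.

```python
import numpy as np
from scipy.signal import lfilter

# BS.1770 K-weighting filter (coefficients as specified for fs = 48000):
# stage 1 is a high shelf, stage 2 a high-pass.
SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
HPF_B = [1.0, -2.0, 1.0]
HPF_A = [1.0, -1.99004745483398, 0.99007225036621]

def short_term_lufs(x, fs=48000):
    """Short Term loudness of the last 3 seconds of a mono signal."""
    w = x[-3 * fs:]                    # Short Term uses a 3-second window
    y = lfilter(SHELF_B, SHELF_A, w)   # K-weighting, stage 1
    y = lfilter(HPF_B, HPF_A, y)       # K-weighting, stage 2
    return -0.691 + 10 * np.log10(np.mean(y ** 2))

def rms_dbfs(x, fs=48000):
    """Plain, unweighted RMS over the same window, for comparison."""
    w = x[-3 * fs:]
    return 10 * np.log10(np.mean(w ** 2))

# On broadband material the two readings land within a dB or two of
# each other; heavy bass pushes unweighted RMS up relative to LUFS,
# which is the same caveat Ian raises about VU meters.
fs = 48000
noise = np.random.randn(3 * fs) * 0.1
print(short_term_lufs(noise, fs), rms_dbfs(noise, fs))
```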

Having said all that, Ian was keen to clarify: “I don’t watch the loudness meter when I work; I just check it afterwards. My simple advice to everyone when mastering is to make the loudest sections of your music consistent in terms of Short Term LUFS. Decide how loud you want the loudest bits to be, and get those to a consistent loudness. I would recommend no higher than ‑10 LUFS Short Term, and then simply balance everything else musically by ear. I’m talking about mastering, but it’s just a reference point — you can do the same thing when you’re mixing, making the loudest sections, say, ‑16 or ‑18, so you have plenty of peak headroom.”
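To put that advice to work, the sketch below scans a finished file for its maximum Short Term value so you can compare it with your chosen reference: ‑10 for a master, per Ian’s suggestion, or ‑16 to ‑18 for a mix. The file name is hypothetical, the file is assumed to be 48kHz to match the BS.1770 coefficients, and the common soundfile and scipy packages are assumed to be available.

```python
import numpy as np
import soundfile as sf
from scipy.signal import sosfilt

# BS.1770 K-weighting as second-order sections (defined for fs = 48000)
K_SOS = np.array([
    [1.53512485958697, -2.69169618940638, 1.19839281085285,
     1.0, -1.69065929318241, 0.73248077421585],   # high shelf
    [1.0, -2.0, 1.0,
     1.0, -1.99004745483398, 0.99007225036621],   # high-pass
])

x, fs = sf.read("master.wav")              # hypothetical file, 48kHz
if x.ndim == 1:
    x = x[:, None]                         # treat mono as one channel

y = sosfilt(K_SOS, x, axis=0)              # K-weight every channel
power = (y ** 2).sum(axis=1)               # L/R channel weights are 1.0

win, hop = 3 * fs, fs // 10                # 3s window, 100ms hop
csum = np.concatenate(([0.0], np.cumsum(power)))
starts = np.arange(0, len(power) - win + 1, hop)
st = -0.691 + 10 * np.log10((csum[starts + win] - csum[starts]) / win)

REFERENCE = -10.0                          # Ian's mastering guideline
print(f"Max Short Term: {st.max():.1f} LUFS (reference {REFERENCE})")
```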

But what of Momentary loudness? “I don’t use Momentary loudness at all: it’s very fast, and for music I find that it changes too quickly. But Jon Tidey, who mixes my podcast, uses it for dialogue and I think that’s interesting. Voices are very dynamic, and changes happen in the very short term. I think the intention with Momentary was that all bases would be covered. You’d have Integrated as an overall value, Short Term as a VU‑like meter — I don’t know if that was the intention, but it’s the way I see it — and Momentary if you need detailed information when things are suddenly leaping out. If you have a voice and things seem fine but suddenly the Momentary loudness spikes, that’s when you start thinking ‘Maybe I need to automate that,’ or ‘Maybe I need to lower the threshold on my compressor.’”

Compared with LUFS, VU meters are overly sensitive to bass, but their greater resolution around the zero point means they remain useful.

Targets Versus Musicality

“I don’t have a problem with people turning things up loud.” That’s a statement some might be surprised by! Yet it’s entirely consistent with Ian’s message. “Integrated LUFS is not a target loudness for mixing or mastering. Integrated is about distribution levels. Nothing else. It should be the result, not the goal... Some people recommend aiming for ‑14 LUFS, but it makes no sense to master a folk tune at ‑14 and a metal track at ‑14, because the folk tune will sound way too loud in comparison. I do have guidelines, though, like the idea of keeping the loudest moments consistent, and that the point where I tend to stop enjoying things is roughly ‑10 LUFS — I’ve talked to a lot of pro engineers who agree.

“For most genres I use the same maximum loudness, and the musical variations make everything just fall into line. So for example, EDM and thrash metal might be at ‑10 LUFS Short Term all the way through, resulting in an integrated loudness of ‑10 or ‑11. With a big folk ensemble, for the loudest section when everybody’s going for it the maximum Short Term LUFS could also be ‑10, but other sections of the song would be much less loud and you’ll get a lower Integrated loudness as a result.”

So what does Ian hear that sounds unappealing after that point — just the lack of dynamics, or does he start to hear artefacts? “There’s a nice analogy: if you take all the vowels out of written English, it’s still legible. If you jumble the letters, but keep the first and last letters of words in place, it’s still legible — most readers can figure out waht the wdros sohlud be! But the experience of reading that version is wildly more stressful and unpleasant than just reading the original text. That’s how I tend to think of all these mastering processes. You can do all this heavy processing and even add lossy compression on top of that, and it will still sound like music and still be a tune. It’s just harder to listen to!”

Some people tell me that there’s a certain ‘sound’ to such processing that they can find appealing, so I asked if Ian agreed. “There can be. But for me there’s a big difference between measuring loud, and sounding loud... Lots of people say ‘It has to be this loud to get the sound,’ and I quite strongly disagree. It’s completely possible to achieve those sounds at lower levels. For example, EDM usually has super slammed, heavily limited drums, so people assume it has to be loud. But you can do that at lower levels: you can still have something measuring ‑14 with that sound; you just set the limiter’s output ceiling lower.
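That distinction between measuring loud and sounding loud is simple to demonstrate. In the deliberately crude sketch below, np.tanh stands in for a real limiter’s gain reduction (it’s nothing like Spectre’s algorithm, and mix is whatever float audio array you supply): the drive sets how slammed the result sounds, while the ceiling alone decides how loud it measures.

```python
import numpy as np

def slam(mix, drive_db, ceiling_db):
    """Drive a crude limiter hard, then place the ceiling anywhere."""
    drive = 10 ** (drive_db / 20)
    limited = np.tanh(mix * drive)          # the 'slammed' character
    return limited * 10 ** (ceiling_db / 20)

# Identical limiting (the same sound), two very different readings:
# loud = slam(mix, drive_db=12, ceiling_db=-0.3)  # measures hot
# dyn  = slam(mix, drive_db=12, ceiling_db=-6.0)  # same sound, ~6dB down
```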

“For example, somebody gave me a ’90s techno‑style thing to master and I used WavesFactory Spectre — a really nice saturation plug‑in that lets you dial in saturation like an EQ boost without changing the level — to get more sizzle and more of the dense, saturated sound that fits the genre, without pushing the levels. It all comes back to what’s musically right. I could have done that by ramming the levels up, but you don’t have to, and from a mixing perspective, working this way means you can be selective about which elements you treat and which you don’t.”

Loud can sound good: at around ‑5 LUFS, Skrillex’s ‘Bangarang’ is Ian’s “guilty loudness pleasure”!

So has Ian ever mastered ‘hot’? Might he even enjoy any very loud tracks?

“My guilty loudness pleasure is ‘Bangarang’ by Skrillex. It’s like ‑5 LUFS or something, but it sounds amazing and hilarious. It’s just so extreme and so loud… but it works. I think people often use ‘oh it’s artistic intent’ and ‘it’s supposed to sound like that’ as a justification for very loud masters, and very often I disagree. But that song is a case where everything that’s been done contributes to the end result.

“Mastering engineers get a lot of flak over loudness, but a lot of the mixes are coming in super‑hot already, and it’s very hard to get a master approved if it’s quieter than the mix! Also, if it’s already that loud, there’s not much point in turning it down, because any negative consequences of achieving that loudness are already baked into the sound... The loudest thing I’ve mastered had an Integrated loudness of ‑7 LUFS for the whole album. But I didn’t make it any louder: I took the loudest song, applied some EQ so that it sounded artistically appropriate, and that became my reference. I adjusted my monitoring gain so that it felt comfortable, and balanced everything else in relation to that. But if a client comes to me with something that’s not already smashed and tells me to make it sound like one of the recent Miley Cyrus albums which had stuff up at ‑4 LUFS, I’ll say something like: ‘Sorry, I don’t think that works. I’m not going to enjoy doing that, and I’m not going to be a good fit for the project.’”

In fact, Ian seemed concerned that people’s misunderstanding of loudness normalisation could lead them to master some material too quiet. “If a song is intended to be loud and someone asked me to master it at ‑16, I’d be very clear with them, and say: ‘Look, right now this is going to sound a couple of dB quieter than this other thing that’s intended to sound really loud.’ I don’t have a problem with keeping quieter songs quieter, but I’d be very clear with a client about the implications of mastering a loud song quieter.”

Despite all this talk of numbers, Ian is firmly of the opinion that most listeners just don’t care about the loudness of one track relative to others. He reminded me of an article we published back in March 2011 (www.soundonsound.com/techniques/loud-music-better), then explained that “recently, as an experiment, I listened to an hour of a UK Top 50 playlist on SoundCloud [which doesn’t yet employ loudness normalisation] and measured it at the same time. The first song I heard was 6 or 8 dB louder than the second, and there was similar variation right across the playlist. The song at number two was mastered much quieter than that at number one, but that hadn’t had any effect on its success in terms of being listened to on SoundCloud, and SoundCloud users presumably don’t care much or they’d be complaining.”

Immersive Audio & Loudness

A significant development since our 2014 article is the emergence of immersive audio as a viable listening format. I asked Ian to comment on its relevance to the loudness debate. “Actually, there are lots of interesting things to say. Dolby specify that the loudest you go should be ‑18 LUFS Integrated, and I’ve been told by several UK mastering engineers that if they go even 0.1 dB over that spec it will be rejected. That’s because it’s an object‑based file: that single file has to play back on multiple speakers and on binaural earbuds, and they need a ton of headroom to do the processing necessary for each possible system.

“People often ask me why there isn’t a standard and I used to say you can’t have one: people have to have artistic freedom or they’ll rebel. But there is a standard in Atmos, people are getting on with it, and everybody’s happy! I’m concerned the loudness wars will kick in here too, but right now people are sticking to ‑18.

“What’s really interesting is that Atmos is giving people the chance to hear more dynamic versions of current hits on an iPhone. I heard someone say that Apple Music streaming in immersive has leapt up to 30 percent, so I think lots more people are listening to it. My theory is that one reason people love Atmos so much, even on a proper Atmos rig, is that it’s not compressed into the top 7‑8 dB. Engineers tell me it’s nice because they can do everything they want creatively and don’t have to worry about loudness.

“Take Taylor Swift. The previous two albums, the ones she did in lockdown in particular, I choose to listen to those in Atmos because the binaural mixes sound similar to the stereo mixes (there’s just a little ear candy here and there) but they were mastered at ‑18 instead of ‑7, and that’s the case for a huge range of other material. It’s still early days and some Atmos mixes don’t sound great; nothing like what’s on the original album. But if you find tracks where they’ve kept it close to the stereo version, it’s worth choosing to listen to the Atmos version... I’m excited about people hearing that, liking it and starting to ask questions, and about engineers working in that format, and the industry in general thinking, ‘OK, we can get that sound at the lower level and it still sounds musically satisfying and it still works, and people really like it. And maybe they even like it a bit better!’ Whether the labels notice that or not is the question. I’ve been optimistic before that normalisation would solve this issue and it hasn’t; it’s got more polarised. There are people putting stuff out that’s louder than ever, there are people mastering stuff at ‑14 and everything in between. But I choose to look on the bright side.”

If you’d like to learn more about loudness and dynamics, it’s well worth visiting Ian’s websites:

www.productionadvice.co.uk

www.themasteringshow.com.  

Dynamic Range Day: 21st April 2023

Dynamic Range Day, which Ian started in 2010, is a day of online activity that’s intended to raise awareness of the loudness wars and encourage people to focus instead on what he calls the ‘dynamics’ of music.

“We give an award to a fantastic dynamic popular mainstream album, and we have a shortlist as a way of celebrating people who are using dynamics successfully. I do live streams, usually get guests on to talk about their experiences with loudness and dynamics, and sometimes present some quick tutorials to help people get their head around this stuff. It’s a fun event: we usually post a lot of cheesy memes which people seem to enjoy! But it’s all about spreading the word that it’s safe to do what feels creatively right with loudness, and everything will be fine.”

This year’s event is on 21st April, and you can find out more about it at https://dynamicrangeday.co.uk.

Ian Shepherd’s Plug‑ins

Give them the tools! Ian has developed some very useful software in partnership with MeterPlugs.

Ian has developed, in partnership with MeterPlugs, the free Loudness Penalty website (www.loudnesspenalty.com) and a range of software to help engineers better understand and assess the effect of compression, limiting and other processing, and how streaming services will react to your masters.

  • Perception AB allows you to compare a signal before and after a processing chain, with both signals the same loudness so that the levels don’t skew your perception of which sounds better.
  • Dynameter is a plug‑in that measures dynamics — it’s a very easy, intuitive way to visualise the peak‑to‑loudness ratio of a signal.
  • Loudness Penalty will tell you by how much the various streaming services’ loudness normalisation will turn your music up or down.

You can find out more about these plug‑ins at www.meterplugs.com
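To make the Loudness Penalty idea concrete, here’s a back‑of‑envelope sketch rather than MeterPlugs’ actual algorithm: if a service simply applies a static gain to reach a ‑14 LUFS target, then the playback gain change is the target minus your master’s Integrated loudness. The pyloudnorm package does the measurement, and the file name is hypothetical.

```python
import soundfile as sf
import pyloudnorm as pyln  # pip install pyloudnorm

data, rate = sf.read("master.wav")                 # hypothetical file
integrated = pyln.Meter(rate).integrated_loudness(data)

TARGET = -14.0                     # typical big-platform reference
penalty = TARGET - integrated      # negative means 'turned down'
penalty = min(penalty, 0.0)        # YouTube-style: quiet isn't turned up

print(f"Integrated: {integrated:.1f} LUFS; playback gain: {penalty:+.1f} dB")
```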

Loudness Normalisation & Streaming Services

Ian kindly agreed to take me through which streaming services employ normalisation and which don’t, and how the different services approach normalisation.

“SoundCloud, Beatport and Bandcamp are platforms that I’m sure are important for lots of SOS readers, and these don’t use normalisation, though I know SoundCloud have been experimenting and I anticipate they’ll add it at some future point. Apple recently made it the default for normalisation to be enabled, though they’re not changing it for existing users who didn’t already have it switched on. So there are a lot of people on Apple Music still listening without normalisation.

“But for me it’s about user numbers. The best I can figure out is that YouTube video has about four times as many people listening to music on it as all the other music streaming services combined. Half the remaining 20 percent have loudness normalisation enabled by default, so something like 90 percent of tracks streamed are being normalised — and YouTube just dominates the space. Whilst the people who disable it or listen on Bandcamp, Beatport and SoundCloud are still important, what most people in the world hear is being normalised. I wouldn’t master anything louder for a specific platform, provided my client was happy, because my opinion is that they will all add loudness normalisation eventually, meaning the loud track you upload now might well be turned down in the future. But even currently the evidence is that most people just don’t care about loudness.

Like most other streaming services, Spotify normalises tracks and albums to ‑14 LUFS by default. There are different options, but few people seem to change them. With the Loud setting, a limiter is used to raise the level of quieter tracks whose peaks inhibit the amount of gain that can be applied.

“You used to have completely different results for YouTube, Spotify, Apple Music... all of them. But these days all the big players are normalising to ‑14 LUFS Integrated. Spotify has a preference you can change, but the default setting uses ‑14 and not many people change it. The devil is in the detail. For example, YouTube has no album mode [whereby the relative level of tracks on an album is preserved] but also doesn’t turn things up, so if you master your loudest track to ‑14 and judge the other tracks relative to that an album will sound fine. TIDAL always uses album normalisation. Spotify uses it if you’re listening to an album but not if you’re listening to a playlist or on shuffle — unless it’s a playlist with two or three songs from the same album played in sequence, at which point it will retain the relative album levels for those songs! That’s from memory; I’ve not checked recently, but it’s not really worth trying to document all of this because it’s too nitty‑gritty and keeps changing.

“I participated in some research by Eelco Grimm, who’d analysed 4.2 million albums and set up two playlists with songs of wildly different levels, and we did blind tests for track and album normalisation. My experience was that some quieter songs just sounded too loud with track normalisation, whereas loud songs could feel underwhelming. That happened much less with album normalisation. In fact, 80 percent of those taking part preferred listening with album normalisation on, even for shuffle and playlists.”
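The album‑versus‑track distinction Ian describes comes down to where the normalisation gain is calculated. Under the same simple static‑gain model as above, with illustrative loudness values, a sketch:

```python
TARGET = -14.0
tracks = {"quiet ballad": -16.5, "big single": -9.0, "closer": -12.0}
album_lufs = -11.0   # the album's overall Integrated loudness (illustrative)

# Track mode: every song is pulled to the target, so the ballad ends up
# as loud as the single and the artist's relative levels are lost.
track_gains = {t: TARGET - l for t, l in tracks.items()}

# Album mode: one gain for the whole record, preserving relative levels.
album_gains = {t: TARGET - album_lufs for t in tracks}

print(track_gains)   # ballad turned up 2.5dB, single turned down 5dB
print(album_gains)   # everything turned down 3dB together
```

That preserved spacing in album mode is presumably why 80 percent of Grimm’s listeners preferred it, even for shuffle and playlists.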

Dialogue

We talked mostly about music, but as Ian had mentioned that Momentary LUFS could be useful for dialogue, I asked if he could offer more advice about non‑musical content.

“‑18 LUFS seems right for dialogue. That’s actually specified in the latest version of the AES streaming recommendations: the overall Distribution Loudness is ‑16 and the recommendation for speech is ‑18. That’s where my podcast ended up — but if yours sounds right at ‑16 or somewhere in that ballpark it will be fine. If people are working on material with both dialogue and music, they should probably be looking at the music being 2‑3 dB louder than the voices, to prevent the music sounding underwhelming.”
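As a quick numeric sanity check on those figures (my arithmetic, not an AES formula):

```python
SPEECH_LUFS = -18.0              # AES speech recommendation, per Ian
DISTRIBUTION_LUFS = -16.0        # overall Distribution Loudness
music_bed = SPEECH_LUFS + 2.5    # music sitting 2-3dB above the dialogue

print(f"Dialogue around {SPEECH_LUFS} LUFS, music around {music_bed} LUFS")
```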