What is mixing?
Updated: Aug 15, 2021
My background in mixing
I've been mixing songs professionally for a diverse range of artists and producers since way back in 1996 when my studio boss at the time entrusted me to mix an album of Hindi pop songs for legendary producer Biddu. Despite the fact that I'd learned most of the key principles of mixing while at university, I quickly discovered that I learned much more about the process by actually mixing regularly, and even more by watching other professionals mix.
Before long I was mixing albums for an ever-increasing clientele of producers and artists in all different genres, sometimes using 16 or 24 track tape and an analogue mixing desk, and sometimes (more frequently as the 90s wore on) mixing with digital equipment, i.e. hard disk recorders and digital mixers. By the early 2000s, pretty much all my mixing work had moved 'in the box', meaning most of the processing and balancing was performed in a virtual environment using software such as Logic, Pro Tools or Cubase. I still mostly mix 'in the box' today, using Cubase Pro and an ever-increasing collection of third-party software plug-ins by companies such as Waves, Native Instruments and SoundToys. (OK, I may be addicted to buying new plug-ins!)
But what actually is it?
Most music lovers are aware that since the 1950s we've had the technology to record using multi-track devices. This means that individual timbres (instruments, voices, sound effects etc) can be recorded onto individual tracks of their own, either in parallel on a wide reel of tape, or in a list of tracks in computer software. The benefit of recording this way is that after recording has finished, each individual timbre can be treated in whatever way is uniquely appropriate. This creates plentiful opportunities to really shape the recording, applying more prominence or clarity to certain timbres than others, adding different effects to particular timbres in particular places, and spreading the timbres across the stereo field so some instruments sound more to the left or right, or sit in the centre (particularly noticeable when listening on headphones).
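For the technically curious, that stereo-field placement can be sketched in a few lines of Python. This is a simplified illustration (the function name and values are mine, not from any real mixer) of so-called 'constant-power' panning, one common way of positioning a mono signal between the left and right speakers:

```python
import math

def pan(sample, position):
    # position runs from -1.0 (hard left) through 0.0 (centre) to 1.0 (hard right).
    # The cosine/sine curves keep the overall loudness roughly constant as a
    # sound moves across the stereo field ('constant-power' panning).
    angle = (position + 1.0) * math.pi / 4.0
    return sample * math.cos(angle), sample * math.sin(angle)
```

At dead centre the signal is sent equally (about 70.7%) to each side, rather than 50/50, which is exactly why a centred sound doesn't dip in level compared to a hard-panned one.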
This process of balancing and shaping the recorded tracks is called mixing, and is traditionally carried out by a mix engineer, such as myself. Nowadays, the result of mixing is a finished stereo file - usually .wav or .aif - which can be passed on to the mastering engineer for the final stage of sonic improvement before the song is released to the general public to listen to. (More about mastering another time...)
The tools of mixing
The mixing console (or mixing desk, or mixer) is the primary centre of operations while mixing. The mixer, whether it's an 'actual' physical, hardware model or a software 'virtual' mixer, provides a range of tools to shape a mix. Probably the most obvious of these tools are the 'channel faders', the slidey controls usually found along the front part of the mixer. Faders allow the engineer to adjust the volumes of all the individual tracks in the recording, from the bass drum microphone signal right through to the 128th backing vocal part. Adjusting the faders to find the most appropriate blend of the tracks is called 'balancing'. It's often a good starting point when working on a mix, although the balance may be continually adjusted during the mix session as various other processing techniques will likely affect the apparent volume of each individual timbre. The faders on a hardware mixing desk also provide a handy place to put your fingers when you're having your photo taken in the studio and you want to look like you know what you're doing!
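Under the hood, balancing really is just multiplication and addition. Here's a minimal sketch in Python (the function names and the 'kick' and 'vocal' tracks are hypothetical examples of mine): each fader setting in decibels becomes a linear gain, and the scaled tracks are summed into one signal:

```python
def db_to_gain(db):
    # A fader setting in decibels becomes a linear gain multiplier.
    # 0 dB is unity gain; -6 dB roughly halves the level.
    return 10 ** (db / 20)

def balance(tracks, fader_db):
    # Scale each track by its fader gain, then sum them sample by sample.
    gains = [db_to_gain(db) for db in fader_db]
    return [sum(g * t[i] for g, t in zip(gains, tracks))
            for i in range(len(tracks[0]))]

# Hypothetical example: pull the kick down 6 dB against the vocal.
kick  = [0.8, 0.0, 0.8, 0.0]
vocal = [0.2, 0.4, 0.2, 0.4]
mix = balance([kick, vocal], fader_db=[-6.0, 0.0])
```

Every mixer, hardware or virtual, is doing essentially this for every sample of audio, across however many tracks you throw at it.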
Equalisation (often abbreviated to 'EQ') was originally invented to compensate for the lack of fidelity in early microphones and media such as tape. Recording technology back then often failed to capture high frequency detail (treble) or low frequency depth (bass). EQ was developed as a tool to allow engineers to add some of those missing frequencies back into a recording. In the very early days this usually consisted of simple treble or bass controls, not dissimilar to those found on home hi-fi systems. As mixing desks became larger, more versatile EQ controls were added to each channel, allowing more nuanced adjustments to individual timbres. Nowadays, digital audio workstations such as Cubase and Logic feature incredibly sophisticated EQ tools which facilitate remarkably specific adjustments and have given rise to the kind of sonic power we associate with contemporary recordings. Compare the mix of a song from the 1970s (the earliest era of large scale analogue mixing consoles) with a mix from the last few years. The frequency range of today's music, especially pop and dance, is noticeably different: In 2021 we can enjoy the entire frequency spectrum in all its glory, from the deepest sub bass through to the sparkliest top end.
Compare Elton from 1975 with Dua from 2020:
EQ facilitates sonic separation in a mix too. I often feel like a sculptor, meticulously carving out particular frequencies from one timbre to make space for another. Something so apparently technical can become an artform! With that in mind, it's interesting to note how some producers often use the filters that make up EQ controls in creative, sometimes extreme ways, such as the ever popular 'telephone effect' on a vocal, or making the whole track sound muffled, like it's coming from 'outside the club'.
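That 'outside the club' muffled sound comes from a low-pass filter, the simplest building block of EQ. Here's a deliberately crude one-pole version in Python (an illustrative sketch of mine, not how any particular plug-in does it):

```python
def one_pole_lowpass(samples, alpha=0.3):
    # Each output sample only leans part-way (alpha) towards the input,
    # smoothing away rapid high-frequency changes while letting slow
    # low-frequency movement through -- the crudest possible treble cut.
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out
```

A steady (low-frequency) signal passes through almost unchanged, while a rapidly alternating (high-frequency) one comes out much quieter. Flip the logic around and you remove the bass instead, which is the heart of that 'telephone effect'.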
EQ is one of the most feared tools of mixing, but really, like anything, getting to grips with it just comes down to practice and experimentation.
Probably the most mysterious tools of mixing are the lowly gates and compressors. However, the latter in particular are absolutely key to creating a strong mix.
The former, the noise gate, is actually a very simple processor to understand, and is usually employed to simply cut out unwanted noise from a track when the instrument or voice is not sounding. The gate 'opens' when the audio signal reaches a particular volume, then closes (mutes itself) when the signal drops down again. This is especially useful on vocal tracks to remove quiet headphone spill picked up when the singer is not actually singing, or when they breathe between the verse and the chorus (how dare they?!), or when the lyrics pages rustle, and so on. I was fortunate enough to come across an original multitrack reel of Chic's "Le Freak" in a studio once and, having fired up the tape machine and excitedly started flicking up the faders to listen to the individual instruments, was shocked to hear the string players tapping their feet along with the beat! I mean, I appreciate the Chic groove is hard NOT to tap along to, but thank goodness the taps were most likely gated in the final mix, leaving those glorious string stabs tidy and clean.
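The gate really is as simple as it sounds. A bare-bones version in Python (my own sketch; real gates add attack, hold and release times so the opening and closing isn't audibly abrupt):

```python
def noise_gate(samples, threshold=0.05):
    # Anything quieter than the threshold is muted (the gate is 'closed');
    # anything louder passes through untouched (the gate is 'open').
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```

Foot taps, page rustles and headphone spill sit well below the level of the actual performance, so a sensibly set threshold silences them without touching the notes you want to keep.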
Compressors are perhaps more fear-inducing to many engineers, possibly due to there being so many different types with different controls and a variety of circuit types leading to different sonic characteristics. The bottom line is that they 'squash' the dynamic range of a timbre, reducing the loudest parts and resulting in a voice or instrument sitting more consistently with everything else in the mix, and allowing for overall higher perceived volume without 'clipping' (aka distortion) occurring.
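Stripped of all those intimidating controls, the core of every compressor is one piece of maths: above a threshold, the output level rises at only a fraction (1/ratio) of the rate of the input. Here's a simplified per-sample sketch in Python (my own illustration, ignoring the attack and release timing that real compressors apply):

```python
import math

def compress(samples, threshold_db=-12.0, ratio=4.0):
    # Above the threshold, output level rises at only 1/ratio the rate of
    # the input level, squashing the loudest peaks towards the quieter parts.
    out = []
    for s in samples:
        if s == 0.0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))
        if level_db > threshold_db:
            level_db = threshold_db + (level_db - threshold_db) / ratio
        out.append(math.copysign(10 ** (level_db / 20), s))
    return out
```

With these settings, a full-scale peak (0 dB, 12 dB over the threshold) only comes out 3 dB over it, while anything below -12 dB is left completely alone. Turn the whole signal back up afterwards ('make-up gain') and the quiet parts are now louder relative to the peaks, which is where that higher perceived volume comes from.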
Whatever the type, compressors (and their close relatives, limiters) are used extensively on all professional music productions. They are often used 'live' on microphones while capturing a timbre during the recording phase, then they will often be used again on that same timbre during the mix process (nowadays, multiple compressors are often used on one sound to manage certain aspects of it), then maybe another compressor will be applied to groups of timbres (such as all the drum kit mics grouped together), then to the whole mix as it's exported, then maybe one or two more during the mastering phase, and then yet another if the finished song is broadcast on TV or radio! Suffice to say that we as the listening audience are VERY familiar with the sound of compression, even if we aren't consciously aware of it. Spooky, eh?
I personally love what compressors can do, from the gentle smoothing of a vocal in a mellow ballad, to the absolute trashing of a drum group in a punk song, to making the synth bass in a dance track that much more punchy and 'in your face'. An entire blog post could be written about compressors. Maybe one day...
I come across the term 'ear candy' an increasing amount these days. Although 'ear candy' could refer to many different aspects of music production, effects processors are some of the more popular tools used to create it. There are so many types of audio effects, ranging from the ubiquitous reverb, to the rocktastic sound of distortion, to the sweeping, psychedelic phaser, and right out into the darker corners of sonic space now occupied by digital glitch effects and other mind-bending processors made possible thanks to digital technology and the vivid imaginations of plug-in designers.
Reverb is probably the most familiar audio effect. It has been evident in recordings from the very earliest days, mostly due to the fact that many recording spaces have a natural acoustic, with sound reflecting from the walls, ceiling and floor. During the 1950s it became increasingly popular for studios to have a dedicated 'echo chamber', synthetically adding reverb by playing a pre-recorded song through a speaker in a long room and capturing the result with a microphone at the other end. Plate and spring reverbs began to appear at the end of that decade, then from the late 1970s digital technology gave rise to ever more sophisticated reverb processors, first as hardware units and later as software plug-ins. Digital reverbs use RAM to temporarily store a sound, then play it back in myriad, carefully calculated ways shortly after the original signal, emulating the chaotic reflections inherent in real acoustic environments.
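That store-and-replay idea can be shown in miniature. This Python sketch (my own, hugely simplified; real digital reverbs use thousands of reflections, feedback and filtering) mixes a few delayed, attenuated copies of the signal back in with the dry sound:

```python
def simple_reverb(dry, delays=(7, 11, 13), decay=0.4):
    # Add delayed, quieter copies of the signal alongside the dry sound --
    # a crude stand-in for a room's countless overlapping reflections.
    wet = list(dry)
    for d in delays:
        for i in range(d, len(dry)):
            wet[i] += decay * dry[i - d]
    return wet
```

Feed in a single click and you hear it followed by a little cluster of quieter echoes; stagger enough of these delays densely enough and the echoes blur into the smooth tail we recognise as reverb.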
There are so many other types of effects to describe, but the already plentiful length of this particular blog post suggests those descriptions should be saved for a future entry. However, one other increasingly popular effect which I think is worthy of specific mention here is 'saturation'. This type of processor was born in the digital age, in response to early criticisms of digital audio recording, which rose to dominance over analogue tape during the 1980s and 1990s. Early descriptions of digital audio implied that the sound was too 'clean' or sterile. Indeed, digital recording devices are incredibly accurate in how they capture sound, especially when compared to analogue tape and vinyl. But of course, many music lovers enjoy the slight distortion (aka saturation) caused by tape and records, and the colouring of the frequency spectrum those analogue media often cause. Plug-in developers saw an opportunity for some new tools, and now some of the most popular types of digital processors are ones which emulate analogue equipment, adding back some of that tape or vacuum tube saturation, overdriving the sound a little (or a lot!), adding some warmth in the frequency spectrum and even intentionally adding hiss or hum in some cases. Quite ridiculous really, when you consider how audio technology has so often been focussed on pushing forward, capturing and replaying more accurately. But apparently we just love a bit of fuzziness! I can't get enough of it, and am somewhat ashamed at how big the 'distortion' folder in my plug-ins collection has become... OK, I'm not really ashamed. They all sound beautiful!
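For the curious, the gentle squashing at the heart of saturation is often modelled with a soft-clipping curve such as tanh. A minimal Python sketch (my own illustration; commercial saturation plug-ins model far more than this one curve):

```python
import math

def saturate(samples, drive=2.0):
    # tanh flattens peaks the way overdriven tape or tubes do: quiet signals
    # pass through almost untouched, loud ones are progressively squashed.
    # Dividing by tanh(drive) keeps a full-scale input mapping to full scale.
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]
```

Notice that mid-level material comes out louder relative to the peaks, which is a big part of why saturated sounds feel warmer and more 'forward'; the flattened peaks also generate the extra harmonics we hear as that pleasing fuzziness.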
Back in the 1970s, if the mix engineer wanted the guitars to creep up in volume in the chorus, or the backing vocals to have a little more reverb in the middle 8, or that weird Minimoog synth lead to sail across the stereo field from right to left at one particular moment, the only way of achieving such real-time detail was to physically move a fader or adjust a pot (potentiometer, aka knob) 'live' as the mix was being finally transferred to the stereo tape ready to send to the mastering engineer. If a considerable number of these subtle (or dramatic) real-time tweaks were planned in a mix, as would regularly have been the case in the music of studio pioneers such as Queen or Pink Floyd, multiple hands would have been required during the mixdown, bringing in band members, tea-making staff, the drummer's other half, and anyone else who could be dragged into the control room at the crucial moment. The final mixdown often became an elaborate performance which required careful rehearsal.
Digital audio workstations have quickly become very well equipped to prevent this hands-on game of Twister. Even the most entry level music recording/producing software is likely to have 'automation' these days, allowing the user to pre-record fader and pot movements, and then edit them in minute detail. The mix engineer can even draw lines in the main arrangement page on a subtrack of a particular timbre to have the volume, or indeed any other setting on the virtual mixer or any processor applied to said timbre, change in an exact way at a precise time during playback.
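Those drawn lines boil down to simple interpolation between breakpoints. A sketch in Python (my own; the function name and the bar-numbered example are hypothetical, though DAWs apply essentially this maths for straight-line automation):

```python
def automation_value(points, t):
    # Linearly interpolate between (time, value) breakpoints, holding the
    # first value before the first point and the last value after the last.
    points = sorted(points)
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

# Hypothetical example: creep the guitars up from half to full volume
# across bars 8 to 9, then hold.
ramp = [(8.0, 0.5), (9.0, 1.0)]
```

At playback, the DAW evaluates this curve continuously and applies the result to the fader, pan pot or plug-in parameter in question, with no extra pairs of hands required.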
Apart from making the mixdown process many times easier, this has also given rise to some really exciting approaches to mixing, where sonically extravagant moments can happen in a song. Timbres gradually becoming swamped in an effect, or jumping around in the stereo field, or morphing into a completely different timbre, and infinite other creative tricks have all become readily available to digital music makers. More exotic ear candy! What fun!
For me, this last concept demonstrates what I love about digital technology. Yes, we did love the good old days: Those technological restrictions forced us to be extra creative, and of course we adored that analogue sound (when we weren't getting incredibly stressed about trying to hide all that tape noise!), but honestly, what could be more exciting than the infinite palette with which we have now been gifted in even the simplest piece of music production software? Endless possibilities, and all the tools we need to grab our listeners' ears and take them on a wild adventure with us...