How to Mix Keyboards for the Best Blend

In the studio, how you mix keyboards is as important as how you record them

It’s not enough to record a great keyboard part; when you mix, it needs to integrate smoothly with the rest of the music. In the early days of recording, integration within a song was a given—instruments leaked into each other’s mics, and were all subject to the same room acoustics. On mixdown, there was likely one reverb—either a concrete room with some speakers and mics, or a plate reverb—which ensured the same ambiance character for all sound sources. It’s no wonder that golden-ear types could often identify the studio where a recording had been made simply by hearing the reverb.

Today, your keyboard might be miked, or go direct into the board. It might be in a track with a bunch of acoustic instruments, electric instruments, or even virtual instruments that exist only inside a computer. Loops—which may be relatively unprocessed, or compressed and boosted to fit the mastering style of today’s pop music—might make up some tracks. How are you going to get your keyboard to fit in properly, so that it stands out when needed, and can also work in the background when appropriate?

There are four main ways to control how an instrument fits with a mix:

  • Timbre (frequency spectrum)
  • Stereo positioning
  • Level
  • Ambiance (both during recording and while mixing)

Let’s consider each option.

Timbre

I have a theory that the “analog synth revival” started because of digital recording. Before digital, musicians preferred bright synths, like the Yamaha DX7 and Roland D-50—all the better to cut through the dulling effects of analog tape. But later, that brightness clashed with digital recording, whereas the mellower sound of analog synths was a fine match.

A related phenomenon happens when recording electronic instruments in tracks that are mostly acoustic. Raw synthetic timbres usually have more high-frequency energy than natural sound sources (Fig. 1), and even with the customary lowpass filtering, tend to stand out in a mostly acoustic mix.

Figure 1: The top frequency analysis in Steinberg’s WaveLab shows a typical piano spectrum. The middle graph shows a sawtooth wave; there’s a lot of energy in the upper midrange and high frequencies. The lower graph shows a sawtooth wave with the highs rolled off -24 dB at 10 kHz (which means the rolloff actually begins around 500 Hz). Note how this graph’s distribution of energy more closely matches the piano.

But you can tame synths, and add a little warmth, with some high-frequency rolloff. The filter of choice for this application is usually a high-shelf filter, which provides a gentle, uniform drop in response above a particular frequency. (A lowpass filter can also work for a more exaggerated effect, because it gives a greater rate of high-frequency attenuation.)

A subtle amount of cut can still make a difference. For example, a -2 to -3 dB high-shelf filter cut, starting at 8 kHz (Fig. 2), works well for synth sounds that already have some lowpass filtering.

Figure 2: IK Multimedia’s T-RackS Linear-Phase EQ is adding a slight, subtle shelving cut at 8 kHz.
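If you want to experiment with this kind of gentle shelving cut outside a DAW, here’s a minimal Python sketch. It uses the well-known RBJ Audio EQ Cookbook high-shelf biquad formulas (SciPy doesn’t ship a shelf designer, so the coefficients are computed directly); the -3 dB at 8 kHz setting comes from the text, while the function names are my own.

```python
import numpy as np
from scipy.signal import lfilter

def high_shelf_coeffs(fs, f0, gain_db, slope=1.0):
    """Biquad coefficients for a high-shelf filter, per the RBJ
    Audio EQ Cookbook (shelf midpoint at f0, shelf slope S)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    cosw, sinw = np.cos(w0), np.sin(w0)
    alpha = sinw / 2.0 * np.sqrt((A + 1.0 / A) * (1.0 / slope - 1.0) + 2.0)
    k = 2.0 * np.sqrt(A) * alpha
    b = np.array([A * ((A + 1) + (A - 1) * cosw + k),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - k)])
    a = np.array([(A + 1) - (A - 1) * cosw + k,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - k])
    return b / a[0], a / a[0]

# The article's suggested setting: a subtle -3 dB shelf at 8 kHz.
fs = 44100
b, a = high_shelf_coeffs(fs, f0=8000.0, gain_db=-3.0)

def tame_synth(track):
    """Apply the shelving cut to a mono NumPy array."""
    return lfilter(b, a, track)
```

The response is flat well below the shelf frequency and settles near the full -3 dB cut above it, which is exactly the gentle, uniform drop described above.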

Cutting too much treble will muffle the sound, but even this can be used to your advantage. Dulling a sound further not only places it further back in the mix, but also gives your brain the cue that it’s farther away. (In the “real world,” high frequencies are absorbed and dissipate more readily than low frequencies, so a duller sound reads as more distant.)

With an acoustic instrument playing in a synthetic context, it’s common to boost the acoustic instrument’s treble so that it meshes better with the brasher synth timbres. For example, the ever-popular Korg M1 piano sound used in a lot of house music has an inherently brighter timbre than a regular piano. Rock pianos are often treble-boosted as well.

Stereo Positioning

Panning is an art and requires a careful analysis of what you want to do. Here are two examples.

Pad sound. The goal is for the pad to lie as a bed under the other instruments, staying constant but unobtrusive. Try a wide stereo spread, set to a relatively low level, with a little bit of high end shaved off. You can further “water down” the sound (and generate stereo from a mono source if needed) with some light chorusing, which diffuses the sound more in the stereo field.

Solo sound. I’m sure some people would disagree with this, but I often put solos in mono, panned to center. There may be some stereo ambiance generation, but the part itself stays mostly mono. The reason is that listening situations vary; on any given day, someone might hear the left or right channel more prominently. A mono signal will come through no matter what.

Furthermore, if there’s lots of stereo action going on, the center will seem less busy. This creates more of a wide open space for a centered, mono signal, thus emphasizing it even further.
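The light chorusing suggested for pads can be sketched in a few lines of Python. This is a minimal illustration rather than production code: two LFO-modulated delay lines, one per channel with the LFOs 90 degrees apart, turn a mono source into a wider stereo pair. All parameter values and names here are my own illustrative choices.

```python
import numpy as np

def stereo_chorus(mono, fs, base_ms=15.0, depth_ms=3.0, rate_hz=0.6, mix=0.3):
    """Widen a mono signal into stereo with two LFO-modulated delay
    lines, one per channel, with the LFOs 90 degrees out of phase."""
    mono = np.asarray(mono, dtype=float)
    n = np.arange(len(mono))
    out = np.zeros((len(mono), 2))
    for ch, phase in enumerate((0.0, np.pi / 2)):
        # Delay time in samples, swept slowly by a sine LFO.
        delay = (base_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / fs + phase)) * fs / 1000.0
        idx = n - delay
        # Fractional delay via linear interpolation (clipped at the start).
        i0 = np.clip(np.floor(idx).astype(int), 0, len(mono) - 1)
        i1 = np.clip(i0 + 1, 0, len(mono) - 1)
        frac = idx - np.floor(idx)
        wet = (1 - frac) * mono[i0] + frac * mono[i1]
        out[:, ch] = (1 - mix) * mono + mix * wet
    return out

# Hypothetical usage: widen a mono "pad" (a plain 220 Hz sine stands in here).
fs = 44100
pad = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)
wide = stereo_chorus(pad, fs)
```

Because the two channels are delayed by slightly different, slowly drifting amounts, they decorrelate, which is what produces the diffused, watered-down quality in the stereo field.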

Try this with piano: when you’re playing chords in the background, use the full stereo spread. When it’s solo time, bring the piano closer to center, or even center (Fig. 3). I think you’ll find the solo comes through better.

Figure 3: In addition to a standard pan/balance control in each console channel, PreSonus Studio One also includes a Dual Pan plug-in. This allows panning the left and right channels independently anywhere in the stereo field; both controls can be automated for dynamic changes, and the panning action can conform to different panning laws.
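Panning laws themselves are simple math. Here’s a sketch of the common constant-power (-3 dB center) law as one example; the function name is my own, and real consoles offer several law variants.

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan.
    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    mono = np.asarray(mono, dtype=float)
    theta = (position + 1.0) * np.pi / 4.0  # map position to 0..pi/2
    # cos/sin gains keep left^2 + right^2 constant at any position.
    return np.column_stack((np.cos(theta) * mono, np.sin(theta) * mono))
```

At center, both channels get a gain of about 0.707 (-3 dB), so perceived loudness stays roughly constant as a sound (like the soloing piano above) sweeps toward the middle.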

Level

This is an easy one, right? Just set the level so that the keyboard is mixed in perfectly with everything else…

Except for one thing. We mentioned EQ earlier, but also note that an instrument’s EQ is a factor in the level-setting process. For example, suppose a relatively bright synth sound needs to fit into a track. It can be mixed at a lower level and still have presence, because our ears are most sensitive in the upper midrange area (around 3.5 kHz). A similar situation occurs with strummed acoustic guitars, which cover a lot of bandwidth. Even at relatively low levels, acoustic guitars can take over a mix (sometimes a good thing, sometimes not). So, many engineers accentuate the high end, but turn the overall level way down. Thus, the guitars still add some percussive propulsion, but don’t interfere with the rest of what’s going on in the midrange. The same can be true of keyboards.

Here’s another example: 400 Hz is often considered the “mud” frequency, because a lot of instruments have energy in that range, and it all adds together into a bit of a sonic blob. A piano may sound perfect when mixed at a certain level by itself, yet interfere with the guitar and the upper harmonics of the bass. Rather than lower the piano’s level, try cutting it a bit around 400 Hz, because that will open up more space for the bass and guitar. Because the bass and guitar are now more prominent, you can likely increase the overall piano level a bit. As a result, the piano rules the middle of the midrange, while the guitar and bass hold down the lower midrange. Both should come through clearly in the mix as separate entities.
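As a sketch of that “mud” cut, here’s a peaking (bell) EQ using the standard RBJ Audio EQ Cookbook formulas. The 400 Hz center is from the text; the -3 dB depth, Q of 1.4, and names are my own illustrative choices.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking (bell) EQ,
    per the RBJ Audio EQ Cookbook."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# A gentle cut at the "mud" frequency to open space for bass and guitar.
fs = 44100
b_mud, a_mud = peaking_coeffs(fs, f0=400.0, gain_db=-3.0, q=1.4)

def cut_mud(piano_track):
    """Apply the 400 Hz dip to a mono NumPy array."""
    return lfilter(b_mud, a_mud, piano_track)
```

A nice property of this cookbook design is that the response hits the specified gain exactly at the center frequency, so the cut lands precisely where the mud lives.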

Bottom line: any EQ change you make will likely require re-evaluating the level.

Ambiance

My #1 pet peeve is when a keyboard synth is recorded direct among a sea of miked instruments, like drums, vocals, acoustic guitars, etc. The synth always sounds isolated rather than well-integrated.

An obvious answer: if you have an overall reverb effect on the mix, send some of the synth through it too. While a uniform ambiance helps provide cohesion, also consider adding some short (e.g., 30-60 ms), low-level delays to the dry (direct-recorded) synth sound. Even just two or three short delays give a feeling of ambiance. Try it—you’ll be amazed at how much more “real” the synth sounds.
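The short-delay trick is easy to sketch, assuming a NumPy mono track. The 31 and 47 ms tap times (inside the article’s 30-60 ms range) and the -12 dB tap level are my own illustrative choices, not prescriptions.

```python
import numpy as np

def early_reflections(dry, fs, taps_ms=(31.0, 47.0), tap_gain_db=-12.0):
    """Mix a few short, low-level delays into a dry direct-recorded
    track to suggest early room reflections."""
    dry = np.asarray(dry, dtype=float)
    g = 10.0 ** (tap_gain_db / 20.0)  # tap level as a linear gain
    out = dry.copy()
    for ms in taps_ms:
        d = int(round(ms * fs / 1000.0))  # tap time in samples
        out[d:] += g * dry[:len(dry) - d]
    return out
```

Odd, non-multiple tap times like these help avoid an obvious slapback or comb-filter flavor; the taps just smear the attack slightly, the way real room reflections would.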

Of course, there’s no law that says you have to record direct anyway. Stuffing a synth through an amp and miking it will automatically give a more ambient sound, and if you record it in the same room where you recorded the other instruments, so much the better.

Another trick for getting the synth to sit well in an acoustic mix, particularly if you’re synthesizing a “real” instrument like organ or piano, is to record just a tiny bit of your fingers hitting the keys. This should be set way back in the mix—you don’t want a caricature of playing keyboard, but just a little bit of background noise to lend authenticity. Do this and record through an amp, and your keyboards will fit into any acoustic-oriented, organic-sounding mix.


We’ll close with some general mixing advice: it’s easier to mix if you already have an idea about the music’s direction. Before you touch the faders, think about the effect you’re trying to create. Do you want a small, intimate ensemble? A bombastic arena-rock sound? Something for the dance floor? Once you know your intended destination, then as you mix, you can tweak each sound to contribute to the overall vibe. Mixing is like driving somewhere: it’s fun to just drive around, but if you really want to get someplace as efficiently as possible, it’s best to have a map.
