One of the most common controls audio engineers use is the channel fader. It is used to increase or decrease the amplitude of a signal. The relative amount the amplitude is changed, and the units of the fader, are based on the decibel (dB) scale.

Previously, we looked at changing the amplitude of a signal based on a linear scale. From a signal processing standpoint, we program the computer to change a signal's amplitude by multiplying each sample by a scalar number. When writing software for an audio engineer to use, it is necessary to know how to interpret a change in amplitude based on the dB scale. Therefore, we need to work with the relationship between the linear scale and the dB scale.
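As a minimal sketch of the linear-scale approach described above, the following Python function (the names `apply_linear_gain`, `signal`, and `gain` are illustrative, not from the original text) multiplies each sample of a signal by a scalar:

```python
import numpy as np

def apply_linear_gain(signal, gain):
    """Scale every sample of `signal` by the linear factor `gain`."""
    return np.asarray(signal) * gain

# Example: halve the amplitude of a short test signal.
x = np.array([0.5, -1.0, 0.25])
y = apply_linear_gain(x, 0.5)
# y is [0.25, -0.5, 0.125]
```

A gain greater than 1 increases the amplitude; a gain between 0 and 1 decreases it.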

An amplitude on the decibel scale, ${a}_{dB}$, can be determined from an amplitude on the linear scale, ${a}_{lin}$, using the relationship: ${a}_{dB} = 20 \cdot \log_{10} \left(\frac{{a}_{lin}}{1}\right)$, where the denominator of 1 is the reference amplitude.

In reverse, the decibel scale can be converted to the linear scale using the relationship: ${a}_{lin} = {10}^{\frac{{a}_{dB}}{20}}$.
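The two relationships above can be sketched directly in Python (the function names `lin_to_db` and `db_to_lin` are illustrative, not from the original text):

```python
import math

def lin_to_db(a_lin):
    """Linear amplitude to decibels: a_dB = 20 * log10(a_lin / 1)."""
    return 20.0 * math.log10(a_lin)

def db_to_lin(a_db):
    """Decibels back to linear amplitude: a_lin = 10 ** (a_dB / 20)."""
    return 10.0 ** (a_db / 20.0)

# The reference amplitude of 1 maps to 0 dB.
print(lin_to_db(1.0))               # 0.0
# The two conversions are inverses: round-tripping recovers the amplitude.
print(db_to_lin(lin_to_db(0.5)))    # approximately 0.5
```

Because the conversions are inverses of each other, software can store gain internally on the linear scale while displaying it to the engineer in dB.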

A general rule of thumb audio engineers should know is: “doubling a signal’s amplitude is a $\sim6$ dB increase, whereas halving a signal’s amplitude is a $\sim6$ dB decrease.”
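This rule of thumb can be checked numerically with the linear-to-dB relationship, $20 \cdot \log_{10}(a_{lin})$:

```python
import math

# Doubling the amplitude corresponds to 20 * log10(2) dB.
double_db = 20.0 * math.log10(2.0)
# Halving the amplitude corresponds to 20 * log10(0.5) dB.
half_db = 20.0 * math.log10(0.5)

print(round(double_db, 2))  # 6.02
print(round(half_db, 2))    # -6.02
```

The exact value is about $6.02$ dB, which is why the rule is stated as an approximation ($\sim6$ dB).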

Next, let’s analyze some characteristics of a signal’s amplitude. First, we will look at the peak amplitude, then the RMS amplitude.