This is part one of a three-part series that explains the meaning of $E_b/N_0$ in the context of digital communication.
Every student of communications knows that digital communication system performance—for example, bit error rate (BER) or frame error rate (FER)—depends on the signal-to-noise ratio (SNR), with higher SNRs leading to lower BERs and FERs. Sometimes less clear, however, is the very definition of SNR. With different code rates, spreading factors, and binary and non-binary modulation methods in one, two, or more signal-space dimensions, it rapidly becomes confusing to sort out chip energy $E_c$, symbol energy $E_s$, bit energy $E_b$, and the relationship between the Gaussian noise power spectral density $N_0$ and the per-sample noise variance encountered at the output of a matched filter. This brief note is my attempt to sort out some of the confusion.
I assume that the reader is (reasonably) familiar with the rudiments of digital communications theory, and that terms such as “additive white Gaussian noise” and “matched filter” are not wholly unfamiliar.
The Additive White Gaussian Noise Channel
Textbook models of digital communications systems usually start with the ideal additive white Gaussian noise (AWGN) channel, in which the transmitted signal $s(t)$ is corrupted additively by Gaussian noise $N(t)$, resulting in the received signal $r(t) = s(t) + N(t)$.
The noise signal is assumed to be a sample function drawn from a (wide-sense) stationary Gaussian random process. The term “white noise” arises from the assumption that the power spectral density of the noise is constant (flat) over all frequencies, just as “white light” is composed equally of colors of all wavelengths.
Intuitively, the power spectral density $S_X(f)$ of a random process $X(t)$ measures the distribution of the power (measured in Watts, say) of $X(t)$ as a function of frequency $f$ (measured in Hz); i.e., $S_X(f)$ has units of Watts per Hz (or, equivalently, energy in Joules). Most people distinguish between so-called “two-sided power spectral density” and “one-sided power spectral density.” “Two-sided” just means that the power spectral density is defined for both negative and non-negative frequencies, whereas “one-sided” power spectral densities are defined only for non-negative frequencies. In this note, all power spectral densities are two-sided. For real processes $X(t)$, the two-sided power spectral density is an even function of $f$, i.e., $S_X(-f) = S_X(f)$.
The power of process $X(t)$ in the frequency range $f_1 \le |f| \le f_2$ is given by
$$ P_{[f_1, f_2]} = \int_{-f_2}^{-f_1} S_X(f)\,df + \int_{f_1}^{f_2} S_X(f)\,df, $$
so the total power of process $X(t)$ is
$$ P = \int_{-\infty}^{\infty} S_X(f)\,df. $$
Notice that a white noise process, therefore, has infinite power! This mathematical idealization does not cause difficulties in practice, however, since noise that falls outside of an operating band of interest can be filtered out, and the in-band noise power is finite.
What happens when white noise is filtered? When a wide-sense stationary random process $X(t)$ with power spectral density $S_X(f)$ is applied to a linear time-invariant filter with frequency response $H(f)$, the output of the filter is a wide-sense stationary random process $Y(t)$ with power spectral density $S_Y(f) = |H(f)|^2 S_X(f)$.
Now, AWGN is almost invariably assigned a two-sided power spectral density whose value (or height) is denoted by the symbol $N_0/2$. Why the factor of 1/2? A real-valued unit-gain bandpass filter of bandwidth $W$ Hz, centered at frequency $f_0$, would pass a $W$ Hz “window” at positive frequencies, and a symmetric image at negative frequencies, for a total width of $2W$ Hz. The response of this filter to white noise of power spectral density $N_0/2$ would be a signal of total power $(N_0/2)(2W) = N_0 W$ Watts, as illustrated in Fig. 1. Conveniently, the factors of 2 cancel.
Figure 1: A white noise process of power spectral density $N_0/2$ W/Hz applied to an ideal unit-gain bandpass filter of bandwidth $W$ Hz results in an output signal with the power spectral density illustrated, having total power $N_0 W$ Watts.
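The power calculation of Fig. 1 is easy to reproduce numerically. The sketch below (all parameters—sampling rate, band edges, noise level—are illustrative assumptions, not values from this post) passes discrete-time white Gaussian noise through an ideal bandpass filter and measures the output power:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the post).
N0 = 2.0        # two-sided PSD height is N0/2 = 1.0 W/Hz
fs = 10_000.0   # sampling rate, Hz
n = 2**18       # number of noise samples
W = 100.0       # filter bandwidth, Hz
f0 = 1_000.0    # filter center frequency, Hz

# Discrete-time white noise with per-sample variance N0*fs/2 has a
# two-sided PSD of N0/2 W/Hz across the simulated band |f| < fs/2.
noise = rng.normal(scale=np.sqrt(N0 * fs / 2), size=n)

# Ideal unit-gain bandpass filter applied in the frequency domain:
# a width-W window at positive frequencies plus its mirror image at
# negative frequencies, for a total width of 2W.
f = np.fft.fftfreq(n, d=1 / fs)
H = ((np.abs(f) >= f0 - W / 2) & (np.abs(f) <= f0 + W / 2)).astype(float)
out = np.fft.ifft(np.fft.fft(noise) * H).real

power = float(np.mean(out**2))
print(power)   # close to N0 * W = 200 W
```

The per-sample variance $N_0 f_s/2$ is what makes the discrete sequence mimic continuous-time white noise of two-sided PSD $N_0/2$ over the simulated band, so the measured output power comes out near $N_0 W$, not $(N_0/2) W$.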
Recall the familiar concept of the “dot product” (also called inner product or scalar product) of vectors $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_n)$: the dot product is defined as
$$ \langle \mathbf{u}, \mathbf{v} \rangle = \sum_{i=1}^{n} u_i v_i. $$
The “norm” or “length” of a vector $\mathbf{u}$ is $\|\mathbf{u}\| = \sqrt{\langle \mathbf{u}, \mathbf{u} \rangle}$. The “angle” $\theta$ between two vectors $\mathbf{u}$ and $\mathbf{v}$ satisfies
$$ \cos\theta = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{u}\|\,\|\mathbf{v}\|}. $$
Vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal when $\langle \mathbf{u}, \mathbf{v} \rangle = 0$, i.e., when the angle $\theta$ is $90°$ (or $\pi/2$ radians). The Euclidean distance between vectors $\mathbf{u}$ and $\mathbf{v}$ is simply $\|\mathbf{u} - \mathbf{v}\|$, the norm of the difference vector.
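These definitions are easy to try out numerically; here is a minimal sketch with two made-up vectors chosen to be orthogonal:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, -2.0, 1.0])

dot = float(u @ v)                    # 1*2 + 2*(-2) + 2*1 = 0: orthogonal
norm_u = float(np.linalg.norm(u))     # sqrt(1 + 4 + 4) = 3
cos_theta = dot / (norm_u * float(np.linalg.norm(v)))   # cos 90° = 0
dist = float(np.linalg.norm(u - v))   # ||(-1, 4, 1)|| = sqrt(18)

print(dot, norm_u, cos_theta, dist)
```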
Likewise, let $x(t)$ and $y(t)$ be two real-valued signals. By their correlation or inner product, we mean the scalar value
$$ \langle x, y \rangle = \int_{-\infty}^{\infty} x(t)\,y(t)\,dt, $$
assuming this integral exists. The norm of signal $x(t)$ is $\|x\| = \sqrt{\langle x, x \rangle}$. Two signals $x(t)$ and $y(t)$ are orthogonal when $\langle x, y \rangle = 0$. The Euclidean distance between $x(t)$ and $y(t)$ is simply $\|x - y\|$, the norm of the difference signal.
Now, let $x(t)$ be a voltage or current signal measured in volts or amperes, and consider applying $x(t)$ to a 1 ohm load resistance. The instantaneous power developed over the load is then $x^2(t)$ Watts, and the total energy absorbed by the load is
$$ \int_{-\infty}^{\infty} x^2(t)\,dt. $$
This observation motivates the definition of the energy of a signal $x(t)$ as
$$ E_x = \|x\|^2 = \int_{-\infty}^{\infty} x^2(t)\,dt. $$
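As a concrete example, a rectangular pulse of amplitude $A$ and duration $T$ has energy $A^2 T$, which a Riemann-sum approximation of the energy integral recovers (the pulse parameters below are illustrative):

```python
import numpy as np

# Rectangular pulse: x(t) = A for 0 <= t < T, zero elsewhere.
A, T = 2.0, 0.5
n = 50_000            # samples covering [0, T)
dt = T / n
t = np.arange(n) * dt
x = A * np.ones(n)

# Riemann-sum approximation of the energy integral of x^2(t).
energy = float(np.sum(x**2) * dt)
print(energy)   # approximately A**2 * T = 2.0
```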
This abstract measure of energy can be related to physical energy via some constant conversion factor.
Let $s(t)$ be a signal with finite energy $E_s$, and let $N(t)$ be a zero-mean white Gaussian noise process with two-sided power spectral density $N_0/2$. What is the correlation, $\langle N, s \rangle$, between the random noise $N(t)$ and the deterministic signal $s(t)$? It turns out that the result is a random variable.
First Key Property: Let $N(t)$ be zero-mean white Gaussian noise with power spectral density $N_0/2$, and let $s(t)$ be a signal of finite energy $E_s$. If we define $Z$ as the inner product
$$ Z = \langle N, s \rangle = \int_{-\infty}^{\infty} N(t)\,s(t)\,dt $$
between $N(t)$ and $s(t)$, then $Z$ is a (scalar) Gaussian random variable with zero mean and variance given by
$$ \sigma_Z^2 = \frac{N_0}{2}\,\|s\|^2 = \frac{N_0}{2}\,E_s. $$
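The First Key Property can be checked by Monte Carlo simulation, approximating the integral by a Riemann sum; the signal choice and all parameters below are illustrative assumptions, not values from this post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not from the post).
N0 = 2.0
dt = 1e-3
t = np.arange(200) * dt            # 0.2 s observation window
s = np.sin(2 * np.pi * 5.0 * t)    # one full cycle of a 5 Hz sinusoid
Es = float(np.sum(s**2) * dt)      # signal energy, = 0.1 here

# Discrete-time surrogate for white noise of PSD N0/2: independent
# Gaussian samples with per-sample variance N0/(2*dt).
trials = 10_000
noise = rng.normal(scale=np.sqrt(N0 / (2 * dt)), size=(trials, t.size))

# Z = <N, s>, approximated by a Riemann sum, one value per trial.
Z = noise @ s * dt

print(np.mean(Z), np.var(Z), N0 / 2 * Es)   # var(Z) approaches (N0/2)*Es
```

With the per-sample variance scaled as $N_0/(2\,\Delta t)$, the sum $\sum_k N_k s_k\,\Delta t$ has variance $(N_0/2)\sum_k s_k^2\,\Delta t \approx (N_0/2)E_s$, matching the property.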
What happens if we simultaneously form the inner product of $N(t)$ with another deterministic signal $s_2(t)$ of finite energy? By the First Key Property, $Z_2 = \langle N, s_2 \rangle$ is a zero-mean Gaussian random variable with variance $(N_0/2)\,\|s_2\|^2$. How is $Z_2$ related to $Z_1 = \langle N, s_1 \rangle$, the inner product of $N(t)$ with a first signal $s_1(t)$? It turns out that $Z_1$ and $Z_2$ are jointly Gaussian.
Second Key Property: If $s_1(t)$ and $s_2(t)$ are signals of finite energy and we define $Z_1 = \langle N, s_1 \rangle$ and $Z_2 = \langle N, s_2 \rangle$, then $Z_1$ and $Z_2$ are jointly Gaussian random variables with zero mean, variances given by the First Key Property, and correlation
$$ E[Z_1 Z_2] = \frac{N_0}{2}\,\langle s_1, s_2 \rangle. $$
In particular, if $s_1(t)$ and $s_2(t)$ are orthogonal, then $Z_1$ and $Z_2$ are uncorrelated jointly Gaussian random variables, and hence are independent.
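A similar numerical experiment illustrates this consequence: projecting the same noise onto two orthogonal signals (a sine and a cosine over an integer number of cycles) yields a sample correlation near zero. Parameters are again illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions, not from the post).
N0 = 2.0
dt = 1e-3
t = np.arange(200) * dt
s1 = np.sin(2 * np.pi * 5.0 * t)   # orthogonal over an integer number
s2 = np.cos(2 * np.pi * 5.0 * t)   # of cycles: <s1, s2> = 0

trials = 10_000
noise = rng.normal(scale=np.sqrt(N0 / (2 * dt)), size=(trials, t.size))
Z1 = noise @ s1 * dt               # projections of the SAME noise
Z2 = noise @ s2 * dt               # onto the two orthogonal signals

# Sample correlation E[Z1*Z2] should be near (N0/2)*<s1,s2> = 0.
print(float(np.mean(Z1 * Z2)))
```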
The First Key Property is actually a special case of the Second Key Property, obtained by setting $s_1(t) = s_2(t) = s(t)$.
Recall that a wide-sense stationary (WSS) process $X(t)$ has a constant mean, and an autocorrelation function $E[X(t_1)\,X(t_2)]$ that is a function only of the difference $t_2 - t_1$. Usually one writes $R_X(\tau) = E[X(t)\,X(t + \tau)]$ for the autocorrelation function of a WSS process.
Bernard Sklar, Digital Communications: Fundamentals and Applications, 2nd ed., Prentice Hall, January 2001.
John G. Proakis and Masoud Salehi, Digital Communications, 5th ed., McGraw-Hill, November 2007.
John R. Barry, Edward A. Lee, and David G. Messerschmitt, Digital Communication, 3rd ed., Springer, September 2003.
If you have enjoyed reading this post, please subscribe to Minutify, so we can send you an email each time a new post is available.