# Know Your Eb/N0 - Part 1

This is part one of a three-part series that explains the meaning of $E_b/N_0$ in the context of digital communications.

### Introduction

Every student of communications knows that digital communication system performance—for example, bit error rate (BER) or frame error rate (FER)—depends on the signal-to-noise ratio (SNR), with higher SNRs leading to lower BERs and FERs. Sometimes less clear, however, is the very definition of SNR. With different code rates, spreading factors, binary and non-binary modulation methods in one, two, or more signal space dimensions, it rapidly becomes confusing to sort out chip energy $E_c$, symbol energy $E_s$, bit energy $E_b$, and the relationship between Gaussian noise power spectral density $N_0$ and the per-sample noise variance encountered at the output of a matched filter. This brief note is my attempt to sort out some of the confusion.
I assume that the reader is (reasonably) familiar with the rudiments of digital communications theory, and that terms such as “additive white Gaussian noise” and “matched filter” are not wholly unfamiliar.

### The Additive White Gaussian Noise Channel

Textbook models of digital communications systems usually start with the ideal additive white Gaussian noise (AWGN) channel, in which transmitted signal $s(t)$ is corrupted additively by Gaussian noise $n(t)$, resulting in the received signal $r(t) = s(t) + n(t)$.
The noise signal $n(t)$ is assumed to be a sample function drawn from a (wide-sense) stationary $^1$ Gaussian random process. The term “white noise” arises from the assumption that the power spectral density of the noise is constant (flat) over all frequencies, just as “white light” is composed equally of colors of all wavelengths.
Intuitively, the power spectral density $S_X(f)$ of a random process $X$ measures the distribution of the power of $X$ (measured in Watts, say) as a function of frequency $f$ (measured in Hz); i.e., $S_X(f)$ has units of Watts per Hz (equivalently, Joules, since one Watt per Hz is one Joule). Most people distinguish between the so-called “two-sided power spectral density” and the “one-sided power spectral density.” “Two-sided” just means that the power spectral density is defined for both negative and non-negative frequencies, whereas “one-sided” power spectral densities are defined only for non-negative frequencies. In this note, all power spectral densities are two-sided. For real processes $X$, the two-sided power spectral density is an even function of $f$, i.e., $S_X(f) = S_X(-f)$.
The power $P_{[a,b]}$ of process $X$ in the frequency range $[a, b]$ is given by

$P_{[a,b]}(X) = \int_a^b S_X(f) df ,$

so the total power $P(X)$ of process $X$ is

$P(X) = \int_{-\infty}^{\infty} S_X(f) df.$

Notice that a white noise process, therefore, has infinite power! This mathematical idealization does not cause difficulties in practice, however, since noise that falls outside of an operating band of interest can be filtered out, and the in-band noise power is finite.
What happens when white noise is filtered? When a wide-sense stationary random process with power spectral density $S_X(f)$ is applied to a linear time-invariant filter with frequency response $H(f)$, the output of the filter is a wide-sense stationary random process with power spectral density $S_X(f)|H(f)|^2$.
Now, AWGN is almost invariably assigned a two-sided power spectral density whose value (or height) is denoted by the symbol $N_0/2$. Why the factor of 1/2? A real-valued unit-gain bandpass filter of bandwidth $W$ Hz, centered at frequency $f_0 \ge W/2$ would pass a $W$ Hz “window” at positive frequencies, and a symmetric image at negative frequencies, for a total width of $2W$. The response of this filter to white noise of power spectral density $N_0/2$ would be a signal of total power $N_0W$ Watts, as illustrated in Fig. 1. Conveniently, the factors of 2 cancel.

Figure 1: A white noise process of power spectral density $N_0/2$ W/Hz applied to an ideal unit-gain bandpass filter of bandwidth $W$ Hz results in an output signal with the power spectral density illustrated, having total power $N_0 W$ Watts.
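
As a quick numerical sanity check of this bookkeeping, here is a small Monte Carlo sketch in Python/NumPy (all parameter values are arbitrary illustrative choices, not from the text): white noise of two-sided power spectral density $N_0/2$ is passed through an ideal unit-gain bandpass filter of bandwidth $W$ Hz, and the output power should come out to roughly $N_0 W$ Watts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (arbitrary choices, not from the text)
N0 = 2.0                    # two-sided PSD height is N0/2 = 1.0 W/Hz
fs = 1000.0                 # simulation sample rate in Hz
n = 2**16                   # number of noise samples
f0, W = 100.0, 20.0         # filter center frequency and bandwidth, Hz

# Discrete white noise whose two-sided PSD is N0/2 over [-fs/2, fs/2):
# per-sample variance is (N0/2) * fs
noise = rng.normal(0.0, np.sqrt(N0 / 2 * fs), size=n)

# Ideal unit-gain bandpass filter applied in the frequency domain;
# the mask keeps the band at +f0 and its symmetric image at -f0
f = np.fft.fftfreq(n, d=1.0 / fs)
H = (np.abs(np.abs(f) - f0) <= W / 2).astype(float)
out = np.fft.ifft(np.fft.fft(noise) * H).real

print(np.mean(out**2), N0 * W)   # output power ≈ N0 * W = 40 W
```

The factor-of-2 cancellation shows up here exactly as in Fig. 1: the PSD height is $N_0/2$, but the mask passes a total width of $2W$ (both the positive band and its negative-frequency image).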

### Signal Energy

Recall the familiar concept of “dot product” (also called inner product or scalar product) of vectors $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$: the dot product $x \cdot y$ is defined as

$x \cdot y := x_1y_1 + x_2y_2 + \cdots + x_ny_n$.

The “norm” or “length” of a vector $x$ is $\|x\| = (x \cdot x)^{1/2}$. The “angle” $\theta$ between two vectors $x$ and $y$ satisfies

$x \cdot y = \|x\| \|y\| \cos \theta.$

Vectors $x$ and $y$ are orthogonal when $x \cdot y = 0$, i.e., when the angle is $\pi/2$ (or $3\pi/2$). The Euclidean distance between vectors $x$ and $y$ is simply $d(x, y) = \|x - y\|$, the norm of the difference vector.
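
These vector-space definitions are easy to verify numerically; a minimal sketch with a pair of hypothetical example vectors:

```python
import numpy as np

x = np.array([3.0, 0.0])
y = np.array([0.0, 4.0])

dot = x @ y                                             # 0.0: x and y are orthogonal
norm_x = np.linalg.norm(x)                              # ||x|| = (x . x)^(1/2) = 3.0
theta = np.arccos(dot / (norm_x * np.linalg.norm(y)))   # angle = pi/2 for orthogonal vectors
dist = np.linalg.norm(x - y)                            # Euclidean distance d(x, y) = 5.0
```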
Likewise, let $x(t)$ and $y(t)$ be two real-valued signals. By their correlation or inner product, we mean the scalar value

$\langle x(t), y(t)\rangle := \int_{-\infty}^{\infty}x(t)y(t) dt,$

assuming this integral exists. The norm of signal $x(t)$ is $\|x(t)\| = (\langle x(t), x(t) \rangle)^{1/2}$. Two signals $x(t)$ and $y(t)$ are orthogonal when $\langle x(t), y(t)\rangle = 0$. The Euclidean distance between $x(t)$ and $y(t)$ is simply $d(x(t), y(t)) = \|x(t) - y(t)\|$, the norm of the difference signal.
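
The same quantities can be approximated for signals by replacing the integral with a Riemann sum over samples; a sketch with arbitrarily chosen example signals (harmonics of a common period, which are orthogonal, mirroring the vector case):

```python
import numpy as np

dt = 1e-4                       # sample spacing approximating continuous time
t = np.arange(0.0, 1.0, dt)

# Two finite-energy signals on [0, 1)
x = np.sin(2 * np.pi * t)
y = np.sin(4 * np.pi * t)

inner = np.sum(x * y) * dt                   # <x, y> ≈ 0 (orthogonal)
norm_x = np.sqrt(np.sum(x**2) * dt)          # ||x|| = sqrt(1/2)
dist = np.sqrt(np.sum((x - y)**2) * dt)      # d(x, y) = sqrt(||x||^2 + ||y||^2) = 1 here
```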
Now, let $x(t)$ be a voltage or current signal measured in volts or amperes, and consider applying $x(t)$ to a 1 ohm load resistance. The instantaneous power developed over the load is then $x^2(t)$ Watts, and the total energy absorbed by the load is

$E_{x(t)} = \int_{-\infty}^{\infty}x^2(t) dt = \langle x(t),x(t)\rangle = \|x(t)\|^2.$

This observation motivates the definition of the energy of a signal $x(t)$ as

$E_{x(t)} := \langle x(t),x(t)\rangle = \|x(t)\|^2.$

This abstract measure of energy can be related to physical energy via some constant conversion
factor.
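
As a concrete instance of this definition (pulse parameters are arbitrary choices): a rectangular pulse of amplitude $A$ volts and duration $T$ seconds driven into a 1 ohm load should deliver $A^2 T$ Joules.

```python
import numpy as np

# Rectangular pulse of amplitude A volts and duration T seconds
A, T, dt = 2.0, 0.5, 1e-5
n = int(round(T / dt))
x = np.full(n, A)

energy = np.sum(x**2) * dt       # <x, x> = ||x||^2, a Riemann-sum approximation
print(energy)                    # ≈ A^2 * T = 2.0 Joules into a 1 ohm load
```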

### Noise Correlations

Let $x(t)$ be a signal with finite energy, and let $W(t)$ be a zero-mean white Gaussian noise process with two-sided power spectral density $N_0/2$. What is the correlation, $\langle x(t),W(t)\rangle$, between the random noise $W(t)$ and the deterministic signal $x(t)$? It turns out that the result is a random variable.

First Key Property: Let $W(t)$ be zero-mean white Gaussian noise with power spectral density $N_0/2$, and let $x(t)$ be a signal of finite energy. If we define $X$ as the inner product

$X = \langle x(t),W(t)\rangle$

between $x(t)$ and $W(t)$, then $X$ is a (scalar) Gaussian random variable with zero mean and variance $\sigma^2$ given by

$\sigma^2 = \frac{N_0}{2}\|x(t)\|^2.$
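
This property can be checked by simulation. A sketch, using a discrete-time stand-in for white noise (per-sample variance $(N_0/2)/\Delta t$, so that Riemann sums against it behave like the continuous inner product; all parameter values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (arbitrary choices)
N0 = 2.0                         # PSD height N0/2 = 1.0
dt = 1.0 / 200                   # sample spacing approximating continuous time
t = np.arange(0.0, 1.0, dt)

x = np.sin(2 * np.pi * 3 * t)    # a finite-energy signal on [0, 1)
energy = np.sum(x**2) * dt       # Riemann sum for ||x||^2

# Discrete stand-in for white noise of PSD N0/2:
# per-sample variance (N0/2)/dt
trials = 20000
W = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))

# One realization of X = <x, W> per row, via a Riemann sum
X = (W * x).sum(axis=1) * dt

print(np.mean(X))                    # ≈ 0
print(np.var(X), N0 / 2 * energy)    # sample variance ≈ (N0/2) ||x||^2
```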

What happens if we simultaneously form the inner product of $W(t)$ with another deterministic signal of finite energy $y(t)$? By the first key property, $Y = \langle W(t), y(t)\rangle$ is a zero-mean Gaussian random variable with variance $\|y(t)\|^2N_0/2$. How is $Y$ related to $X = \langle W(t), x(t)\rangle$? It turns out that $X$ and $Y$ are jointly Gaussian.

Second Key Property: Let $W(t)$ be zero-mean white Gaussian noise with power spectral density $N_0/2$, and let $x(t)$ and $y(t)$ be signals of finite energy. If we simultaneously form the inner products

$X = \langle x(t),W(t)\rangle$ and $Y = \langle y(t),W(t)\rangle$,

then $X$ and $Y$ are jointly Gaussian random variables with zero mean, variances given by the First Key Property, and correlation

$E(XY) = \frac{N_0}{2}\langle x(t), y(t) \rangle$.

In particular, if $x(t)$ and $y(t)$ are orthogonal, then $X$ and $Y$ are uncorrelated jointly Gaussian random variables, and hence independent.

The First Key Property is actually a special case of the Second Key Property, obtained by
setting $y(t) = x(t)$.
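
The Second Key Property can be checked the same way as the first. A sketch with an orthogonal pair of example signals (sine and cosine at the same frequency over a whole number of periods; all parameter values are arbitrary illustrative choices), where the empirical $E(XY)$ should match $(N_0/2)\langle x(t), y(t)\rangle$, i.e., be approximately zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (arbitrary choices)
N0 = 2.0
dt = 1.0 / 200
t = np.arange(0.0, 1.0, dt)

# Orthogonal finite-energy signals: sine and cosine at the same
# integer frequency integrate to zero against each other over [0, 1)
x = np.sin(2 * np.pi * 3 * t)
y = np.cos(2 * np.pi * 3 * t)
inner_xy = np.sum(x * y) * dt            # <x, y> ≈ 0

# Discrete white-noise stand-in: per-sample variance (N0/2)/dt
trials = 20000
W = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))
X = (W * x).sum(axis=1) * dt             # X = <x, W>, one value per realization
Y = (W * y).sum(axis=1) * dt             # Y = <y, W>, from the SAME noise

# Empirical correlation vs. the predicted (N0/2) <x, y>
print(np.mean(X * Y), N0 / 2 * inner_xy)
```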

$^1$ Recall that a wide-sense stationary (WSS) process $X$ has a constant mean, and an autocorrelation function $R_X(t_1, t_2) := E[X(t_1)X(t_2)]$ that is a function only of the difference $\tau := t_2 - t_1$. Usually one writes $R_X(\tau) := E[X(t)X(t + \tau)]$ for the autocorrelation function of a WSS process.
