Fourier Series
Taylor series is used in countless areas of mathematics and the sciences. It is a handy little tool in the mathematician’s arsenal that allows us to decompose a well-behaved function into a series of polynomial terms, which are fairly easy to work with. Today, we are going to take a brief look at another type of series expansion, known as Fourier series. Note that these concepts are my annotations of Professor Gilbert Strang’s amazing lecture, available on YouTube.
The biggest difference between Taylor series and Fourier series lies in the building block: whereas the fundamental unit of a Taylor series is a polynomial term, the building block of a Fourier series is a trigonometric function, namely sine or cosine. Concretely, a generic Fourier expansion looks as follows:
\[f(x) = \sum_{n=0}^\infty a_n \cos nx + \sum_{n=1}^\infty b_n \sin nx \tag{1}\]Personally, I found this formula more difficult to intuit than the Taylor series. However, once you understand the underlying mechanics, it’s fascinating to see how periodic functions can be decomposed in this way.
First, let’s begin with an analysis of orthogonality. Commonly, we define two vectors $v_1$ and $v_2$ as being orthogonal if
\[v_1 \cdot v_2 = 0 \tag{2}\]That is, if their dot product yields zero. This follows from the geometric definition of the dot product, $v_1 \cdot v_2 = \lVert v_1 \rVert \lVert v_2 \rVert \cos \theta$: the product vanishes precisely when the angle $\theta$ between the vectors is a right angle.
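For the sake of concreteness, here is a minimal NumPy check; the particular vectors are just an illustration of mine:

```python
import numpy as np

# Two perpendicular vectors in the plane: their dot product is zero.
v1 = np.array([1.0, 2.0])
v2 = np.array([-2.0, 1.0])
print(np.dot(v1, v2))  # 0.0
```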
With a stretch of imagination, we can extend this definition of orthogonality to functions, not just vectors. For vectors, the dot product entails summing the element-wise products of the components. Functions don’t have discrete components to sum over; therefore, instead of simply adding, we integrate over a given domain. For example,
\[\int_{- \pi}^\pi (\cos nx)(\cos kx) \, dx = 0, \quad n \neq k \tag{3}\]The same applies to mixed pairs of sines and cosines:
\[\int_{- \pi}^\pi (\sin nx)(\cos kx) \, dx = 0 \tag{4}\]where (4) holds for any integers $n$ and $k$, since the integrand is odd, while (3) requires $n \neq k$. In other words, cosine functions of different frequencies are orthogonal to each other, and every sine is orthogonal to every cosine!
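We can sanity-check these orthogonality relations numerically. Below is a small sketch using SciPy’s `quad`; the particular values of $n$ and $k$ are arbitrary:

```python
import numpy as np
from scipy.integrate import quad

n, k = 3, 5  # any two distinct integers

# Relation (3): cosines of different frequencies are orthogonal.
cos_cos, _ = quad(lambda x: np.cos(n * x) * np.cos(k * x), -np.pi, np.pi)

# Relation (4): sines and cosines are orthogonal for any n and k.
sin_cos, _ = quad(lambda x: np.sin(n * x) * np.cos(k * x), -np.pi, np.pi)

print(cos_cos, sin_cos)  # both ~0 up to floating-point error
```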
Now, why is orthogonality relevant at all for understanding the Fourier series? It’s time to sit back and let the magic unfold: we multiply (1) by $\cos kx$ and integrate the entire expression over $[-\pi, \pi]$.
\[\begin{align}\int_{- \pi}^\pi f(x) \cos kx \, dx &= \int_{- \pi}^\pi \left(\sum_{n=0}^\infty a_n \cos nx\right) \cos kx + \left(\sum_{n=1}^\infty b_n \sin nx\right) \cos kx \, dx \\ &= \int_{- \pi}^\pi a_k (\cos kx)^2 \, dx \\ &= a_k \pi \end{align} \tag{5}\]Orthogonality knocks out every term in both sums except $a_k \cos kx$, and for $k \geq 1$ we have $\int_{- \pi}^\pi \cos^2 kx \, dx = \pi$. If we divide both sides of (5) by $\pi$, you will realize that we have derived an expression for the constant corresponding to the $\cos kx$ expansion term:
\[a_k = \frac{1}{\pi} \int_{- \pi}^\pi f(x) \cos kx \, dx \tag{6}\]The key takeaway here is this: by exploiting orthogonality, we can knock out every term but one, the very term whose frequency matches the function we multiplied in. By the same token, multiplying (1) by $\sin kx$ instead isolates the sine coefficients:
\[b_k = \frac{1}{\pi} \int_{- \pi}^\pi f(x) \sin kx \, dx \tag{7}\]The only small caveat concerns $a_0$. When $k = 0$, $\cos kx$ reduces to the constant one, so the integral of its square is $2 \pi$ instead of $\pi$. In other words,
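To see formulas (6) and (7) in action, here is a short numerical sketch. The square wave is my own test function, not from the lecture; its sine coefficients are known in closed form to be $4 / (k \pi)$ for odd $k$ and zero for even $k$:

```python
import numpy as np
from scipy.integrate import quad

f = np.sign  # square wave on [-pi, pi]: -1 for x < 0, +1 for x > 0

def a(k):
    # Cosine coefficient, formula (6); points=[0] flags the jump for quad
    val, _ = quad(lambda x: f(x) * np.cos(k * x), -np.pi, np.pi, points=[0.0])
    return val / np.pi

def b(k):
    # Sine coefficient, formula (7)
    val, _ = quad(lambda x: f(x) * np.sin(k * x), -np.pi, np.pi, points=[0.0])
    return val / np.pi

for k in range(1, 5):
    print(k, round(a(k), 6), round(b(k), 6))
# a_k ~ 0 since the square wave is odd; b_k = 4/(k*pi) for odd k, 0 for even
```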
\[\int_{- \pi}^\pi a_0 \cos^2 (0 \cdot x) \, dx = \int_{- \pi}^\pi a_0 \, dx = 2 a_0 \pi \tag{8}\]Hence, we end up with
\[a_0 = \frac{1}{2 \pi} \int_{- \pi}^\pi f(x) \, dx \tag{9}\]This exceptional term has a very intuitive interpretation: it is the average of the function $f(x)$ over the domain of integration. Indeed, if we are going to build a function up term by term, it makes intuitive sense to start from its average.
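As a quick check of (9), here is a one-off computation with $f(x) = x^2$ (my own example), whose average over $[-\pi, \pi]$ works out to $\pi^2 / 3$:

```python
import numpy as np
from scipy.integrate import quad

# a_0 from formula (9) should equal the average of f over [-pi, pi].
a0, _ = quad(lambda x: x**2, -np.pi, np.pi)
a0 /= 2 * np.pi
print(a0, np.pi**2 / 3)  # both ~3.2899
```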
One more observation about the Fourier expansion: it is a combination of sines and cosines, and we have seen those before with, lo and behold, Euler’s formula. Recall that Euler’s formula is a piece of magic that connects all the dots of mathematics. Here is the familiar equation:
\[e^{i \theta} = \cos \theta + i \sin \theta \tag{10}\]Using Euler’s formula, we can formulate an alternative representation of Fourier series:
\[f(x) = \sum_{k = - \infty}^\infty c_k e^{i k x} \tag{11}\]
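By the same orthogonality trick as before, now in the form $\int_{-\pi}^\pi e^{inx} e^{-ikx} \, dx = 2\pi$ when $n = k$ and zero otherwise, the coefficients come out to $c_k = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-ikx} \, dx$. Here is a small sketch, again with my square-wave example, confirming that for $k \geq 1$ the complex coefficients satisfy $c_k = (a_k - i b_k) / 2$:

```python
import numpy as np
from scipy.integrate import quad

f = np.sign  # same square wave as before

def c(k):
    # quad handles real integrands, so split e^{-ikx} into cos and -i*sin
    re, _ = quad(lambda x: f(x) * np.cos(k * x), -np.pi, np.pi, points=[0.0])
    im, _ = quad(lambda x: -f(x) * np.sin(k * x), -np.pi, np.pi, points=[0.0])
    return (re + 1j * im) / (2 * np.pi)

# For the square wave, a_1 = 0 and b_1 = 4/pi, so c_1 should be -2i/pi.
print(c(1))                      # ~ -0.6366j
print((0 - 1j * 4 / np.pi) / 2)  # -0.6366j
```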
Let’s unequivocally drive this concept home with a simple example involving the Dirac delta function, which you can picture as an infinitely tall, infinitely thin spike at $x = 0$. The delta function has two nice properties that make it great to work with. First, it integrates to one whenever the domain of integration includes $x = 0$, the point where the spike sits. Second, the delta function is even. This automatically tells us that when we perform a Fourier expansion, we will have no sine terms, since sine functions are by nature odd. With this understanding in mind, let’s derive the Fourier series of the Dirac delta, starting with $a_0$.
\[\begin{align}a_0 &= \frac{1}{2 \pi} \int_{- \pi}^\pi \delta(x) \cdot 1 \, dx \\ &= \frac{1}{2 \pi} \end{align} \tag{12}\]The equality is due to the first property of the delta function outlined in the previous paragraph.
The derivation of the rest of the constants can be done in a similar fashion.
\[\begin{align}a_k &= \frac{1}{\pi} \int_{- \pi}^\pi \delta(x) \cos kx \, dx \\ &= \frac{1}{\pi} \int_{- \pi}^\pi \delta(x) \, dx \\ &= \frac{1}{\pi}\end{align} \tag{13}\]The trick is to use the fact that the delta function is zero everywhere except at $x = 0$. The oscillations of $\cos kx$ are therefore nullified by the delta function everywhere but at that one point, where $\cos (k \cdot 0)$ is just one. Hence, (13) simply reduces to integrating the delta function itself, which gives one!
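One way to see (13) emerge numerically (my own sketch, not from the lecture) is to stand in a narrow Gaussian for the delta function and watch $a_k$ approach $1 / \pi \approx 0.3183$:

```python
import numpy as np
from scipy.integrate import quad

# Narrow Gaussian as a stand-in for the delta function; it integrates to ~1
# and concentrates all of its mass near x = 0.
eps = 1e-2
delta_approx = lambda x: np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

for k in (1, 2, 3):
    val, _ = quad(lambda x: delta_approx(x) * np.cos(k * x),
                  -np.pi, np.pi, points=[0.0])
    print(k, val / np.pi)  # all ~1/pi ~ 0.3183
```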
To sum up, we have the following:
\[\delta(x) = \frac{1}{2 \pi} + \frac{1}{\pi} \sum_{n = 1}^\infty \cos nx \tag{14}\]I find it fascinating to see how a function as singular and unusual as the Dirac delta can be reduced to a summation of cosines, which are curvy, oscillating harmonics. This is the beauty of expansion techniques like Fourier and Taylor series: as counterintuitive as it may seem, these tools tell us that a remarkably broad class of functions can be approximated through an infinite summation, even if the original function does not resemble the building blocks of the expansion at first glance.
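As a final sketch (my own illustration), we can evaluate partial sums of (14) and watch the spike form: the value at $x = 0$ grows without bound, while the values away from the origin stay bounded and oscillate around zero:

```python
import numpy as np

def delta_partial(x, N):
    # Partial sum of (14): 1/(2*pi) + (1/pi) * sum_{n=1}^{N} cos(n*x)
    n = np.arange(1, N + 1)
    return 1 / (2 * np.pi) + np.cos(np.outer(x, n)).sum(axis=1) / np.pi

x = np.array([0.0, 0.5, 1.0, 2.0])
for N in (10, 100, 1000):
    print(N, np.round(delta_partial(x, N), 3))
# The x = 0 entry grows like (2N + 1)/(2*pi); the other entries stay small.
```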