Series of Functions and Uniform Convergence: The Math Behind the Fourier Series and Data Compression

Series of Functions

Real analysis is an area of mathematics dealing with the set of real numbers and, in particular, the analytic properties of real functions and sequences, including their convergence and limits. Among the series studied in real analysis are series of functions, along with the closely related notion of uniform convergence. These are the two topics that will be covered in this paper.

A series of functions can be put into the form

(1)
\begin{align} \sum _{k=0}^{\infty} f_{k}(x) \end{align}

where each term $f_{k} : (a,b) \rightarrow \mathbb{R}$ is a function. One example of a series of functions is the power series, which takes the form:

(2)
\begin{align} \sum c_{k}x^{k} \end{align}

In the power series, the functions are monomials (meaning that the coefficients $c_k$ are constants and $x$ is a real variable). Expanded, our power series looks something like this:

(3)
\begin{align} f(x)=\sum _{k=0}^{\infty} c_{k}x^{k} = c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+... \end{align}

This is the simplest form of the power series and is also known as the Maclaurin series. The Maclaurin series is a type of Taylor series (which in turn is a type of power series) in which the derivatives are taken at zero. This concept becomes clearer when we look at the full Taylor series below:

(4)
\begin{align} f(x)=\sum _{n=0}^{\infty} a_{n}(x-c)^{n} = a_{0}+a_{1}(x-c)^{1} + a_{2}(x-c)^{2} +a_{3}(x-c)^{3} +... \end{align}

We can see that our Maclaurin Series is really just a Taylor Series with $c=0$.
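To make the Maclaurin series concrete, here is a short Python sketch (the function name `maclaurin_exp` is our own, not from any library) that sums the first few terms of $e^{x}=\sum x^{k}/k!$ and compares the result against `math.exp`:

```python
import math

def maclaurin_exp(x, terms):
    """Partial sum of the Maclaurin series for e^x: sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(terms))

# With only ten terms the partial sum is already very close to e.
approx = maclaurin_exp(1.0, 10)
print(approx, math.exp(1.0))
```

Because the remainder after ten terms is on the order of $1/10!$, the partial sum agrees with $e$ to roughly six decimal places, which illustrates how quickly this particular power series converges.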

Up until the introduction of series of functions, our series consisted solely of numbers. Series of numbers have certain properties that we can extend to series of functions. One of the most heavily discussed properties of a series is convergence. When discussing series of functions and the power series in particular, there is a theorem about convergence called the Radius of Convergence Theorem. It says:

Radius of Convergence Theorem. If $\sum c_{k}x^{k}$ is a power series, then there is a unique $R$, $0 \leq R \leq \infty$, called its radius of convergence, such that the series converges whenever $|x| < R$ and diverges whenever $|x|>R$. Moreover, $R$ is given by the formula:

(5)
\begin{align} R=\frac{1}{\limsup _{k \rightarrow \infty} \sqrt[k]{|c_{k}|}} \end{align}

To put this more simply, for a power series centered at $c$: if $|x-c|>R$ the series diverges, and if $|x-c|<R$ the series converges (the series may converge or diverge when $|x-c|=R$). The set of all $x$, including the endpoints if need be, for which the power series converges is called the interval of convergence of the series. Worth noting, also, is that we can guarantee convergence at the center $x=c$.

Here is an example of computing the radius of convergence and the interval of convergence, taken from Paul's Online Notes.

$\sum _{n=1}^{\infty} \frac{(-1)^{n} n}{4^{n}}(x+3)^{n}$

The first thing we can tell, using the note above that a power series converges at its center, is that this series converges at $x=-3$. To find the other points of convergence we will have to use the ratio test (which, due to space constraints, cannot be fully discussed here). We will have to accept on faith that the ratio test lets us set up the following limit:

$L=\lim_{n \rightarrow \infty}|\frac{(-1)^{n+1}(n+1)(x+3)^{n+1}}{4^{n+1}} \cdot \frac{4^n}{(-1)^{n}(n)(x+3)^{n}}|$
$=\lim_{n \rightarrow \infty}|\frac{-(n+1)(x+3)}{4n}|$

We can factor $|x+3|$ out of the limit because it does not depend on $n$. This allows us to simplify the limit to:

$L=|x+3| \lim_{n\rightarrow \infty}\frac{n+1}{4n}$
$=\frac{1}{4}|x+3|$

The ratio test tells us that if $L<1$ the series will converge, if $L>1$ the series will diverge, and if $L=1$ we don't know what will happen. So we have:

$\frac{1}{4}|x+3|<1 \rightarrow |x+3|<4$ (series converges)
$\frac{1}{4}|x+3|>1 \rightarrow |x+3|>4$ (series diverges)

We now have that the radius of convergence for this series is $4$. The interval of convergence is calculated by solving the inequality above using our radius.

$-4< x+3<4 \rightarrow -7<x<1$

We now have most of our interval except for the $x$-values at the endpoints. These are exactly the values of $x$ that give $L=1$. To determine the convergence at these points we plug them into the original power series and check its behavior using any applicable test.
$x=-7$
In this case the series is:
$\sum ^{\infty}_{n=1}\frac{(-1)^{n}n}{4^n}(-4)^{n}=\sum ^{\infty}_{n=1}\frac{(-1)^{n}n}{4^n}(-1)^{n}4^{n}$
$=\sum ^{\infty}_{n=1}(-1)^{n}(-1)^{n}n=\sum ^{\infty}_{n=1}n$, since $(-1)^{n}(-1)^{n}=(-1)^{2n}=1$
This series is divergent by the Divergence Test since $\lim _{n \rightarrow \infty}n=\infty \neq 0$
$x=1$
In this case the series is:
$\sum ^{\infty}_{n=1}\frac{(-1)^{n}n}{4^n}(4)^{n}=\sum ^{\infty}_{n=1} (-1)^{n}n$
This series is also divergent by the Divergence Test, since $\lim _{n \rightarrow \infty}(-1)^{n}n$ does not exist.
So in this case the power series converges at neither endpoint. The interval of convergence is then:

$-7<x<1$
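As a sanity check on the interval we just found, the following Python sketch (the helper name `term` is ours) computes the ratio of consecutive terms at a point inside the interval and a point outside it; by the ratio test calculation above, this ratio should approach $\frac{1}{4}|x+3|$:

```python
def term(n, x):
    """n-th term of the series: (-1)^n * n / 4^n * (x+3)^n."""
    return (-1) ** n * n / 4 ** n * (x + 3) ** n

# Inside the interval (-7, 1) consecutive terms shrink; outside they grow.
x_in, x_out = -1.0, 2.0            # |x+3| = 2 < 4  vs  |x+3| = 5 > 4
ratio_in = abs(term(50, x_in) / term(49, x_in))    # roughly |x+3|/4 = 0.5
ratio_out = abs(term(50, x_out) / term(49, x_out)) # roughly 5/4 = 1.25
print(ratio_in, ratio_out)
```

A ratio below $1$ signals convergence and a ratio above $1$ signals divergence, matching the interval $-7<x<1$.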

Uniform Convergence
Definition: Suppose $S$ is a set and $f_{n}: S \rightarrow \mathbb{R}$ are real-valued functions for every natural number $n$. We say that the sequence $(f_{n})$ is uniformly convergent with limit $f:S\rightarrow \mathbb{R}$ if for every $\epsilon > 0$, there exists a natural number $N$ such that for all $x$ in $S$ and all $n\geq N$, $|f_{n}(x)-f(x)|<\epsilon$.

1. Uniform convergence does not preserve differentiability (a uniform limit of differentiable functions need not be differentiable)
2. Uniform convergence preserves continuity (a uniform limit of continuous functions is continuous)

Examples:

Uniform Convergence: Another way to visualize uniform convergence is to look at a strip of width $\epsilon$ around the limit function. If all of the partial sums eventually fall within this strip, then the series (for instance, a Fourier series) converges uniformly to the function.
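The strip picture can be tested numerically. The sketch below (the helper `sup_gap` is our own name) measures the largest gap between $f_n$ and its limit over a grid: for $f_n(x)=x/n$ on $[0,1)$ the gap shrinks toward zero (uniform convergence), while for $f_n(x)=x^{n}$ it stays near $1$ no matter how large $n$ is (convergence that is only pointwise):

```python
def sup_gap(f, f_lim, grid):
    """Largest |f(x) - f_lim(x)| over a grid -- the 'width of the strip'."""
    return max(abs(f(x) - f_lim(x)) for x in grid)

grid = [i / 1000 for i in range(1000)]  # sample points in [0, 1)

# x/n -> 0 uniformly on [0, 1): the sup gap shrinks as n grows.
print([sup_gap(lambda x, n=n: x / n, lambda x: 0.0, grid) for n in (1, 10, 100)])

# x^n -> 0 only pointwise on [0, 1): the sup gap stays near 1 for every n.
print([sup_gap(lambda x, n=n: x**n, lambda x: 0.0, grid) for n in (1, 10, 100)])
```

No single $N$ makes $x^{n}$ fit inside a narrow strip around $0$ on all of $[0,1)$, which is exactly the failure of uniform convergence.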

Nonuniform Convergence: Fourier Series

A Fourier series is an expansion of a periodic function in terms of an infinite sum of sines and cosines. Shown below is an animated plot of the first five successive partial sums of the Fourier series for a sawtooth wave. To develop a Fourier series for a real-valued function $f(x)$, we first find the Fourier coefficients of $f$, $a_{n}$ and $b_{n}$. For this example, $f(x)$ denotes a periodic function on the interval $[-\pi , \pi]$.

(6)
\begin{align} a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x) \cos(nx)\, dx \end{align}
(7)
\begin{align} b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x) \sin(nx)\, dx \end{align}

We then use the Fourier coefficients to develop our Fourier series on the interval $[-\pi , \pi]$:

(8)
\begin{align} f(x) = \frac{a_0}{2}+\sum_{n=1}^{\infty}[a_n \cos(nx) + b_n \sin(nx)] \end{align}



The Fourier series does not always converge, and one of the main questions of harmonic analysis is to decide when it does. For the sawtooth function shown in the animation above, the series converges to the function everywhere except at the endpoints $x=\pm\pi$, where the sawtooth jumps; there the series converges to the average of the one-sided limits instead.
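We can check formulas (6) and (7) numerically for the sawtooth $f(x)=x$. The sketch below (the helper name `fourier_coeffs` and the sample count are our own choices) approximates the integrals with a midpoint Riemann sum; for this $f$ the exact coefficients are $a_n=0$ and $b_n=2(-1)^{n+1}/n$:

```python
import math

def fourier_coeffs(f, n, samples=20000):
    """Approximate a_n and b_n on [-pi, pi] with a midpoint Riemann sum."""
    h = 2 * math.pi / samples
    xs = [-math.pi + (j + 0.5) * h for j in range(samples)]
    a = sum(f(x) * math.cos(n * x) for x in xs) * h / math.pi
    b = sum(f(x) * math.sin(n * x) for x in xs) * h / math.pi
    return a, b

# Sawtooth f(x) = x: exact values are a_n = 0 and b_n = 2*(-1)^(n+1)/n.
for n in (1, 2, 3):
    a, b = fourier_coeffs(lambda x: x, n)
    print(n, round(a, 6), round(b, 6), "exact b_n:", 2 * (-1) ** (n + 1) / n)
```

The $a_n$ vanish because $x\cos(nx)$ is an odd function, and the computed $b_n$ match the closed form to several decimal places.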

As we have seen, the Fourier series is extremely useful in mathematics because it turns otherwise unwieldy expressions for functions into much more manageable infinite sums of sines, cosines, or both. This representation makes it easier to solve many differential equations that were previously hard or downright impossible to solve.

For any piecewise continuous function whose derivative is also piecewise continuous on an interval from $-L$ to $L$, there exists a Fourier series representing the function in the form

(9)
\begin{align} f(x) = \frac{a_0}{2}+\sum_{n=1}^{\infty}[a_n \cos(\frac{n\pi x}{L}) + b_n \sin(\frac{n\pi x}{L})] \end{align}



Where the coefficients are defined as

(10)
\begin{align} a_n = \frac{1}{L}\int_{-L}^{L} f(x) \cos(\frac{n\pi x}{L})\, dx \end{align}
(11)
\begin{align} b_n = \frac{1}{L}\int_{-L}^{L} f(x) \sin(\frac{n\pi x}{L})\, dx \end{align}



Although the function is expressed as an infinite sum, partial sums with only a few terms can already approximate it quite well. Because of these good approximations from limited terms, the Fourier series is quite practical in real-world calculations.
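To see how few terms are needed, consider the square wave, whose classical Fourier series is $\frac{4}{\pi}\sum_{n \text{ odd}} \frac{\sin(nx)}{n}$. The Python sketch below (the function name is ours) evaluates partial sums away from the jump discontinuities:

```python
import math

def square_partial(x, terms):
    """Partial Fourier sum of a square wave: (4/pi) * sum over odd n of sin(nx)/n."""
    return (4 / math.pi) * sum(math.sin(n * x) / n for n in range(1, 2 * terms, 2))

# Away from the jumps the square wave equals 1 (or -1); a handful of
# terms already lands close, and 50 terms come closer still.
print(square_partial(math.pi / 2, 5))
print(square_partial(math.pi / 2, 50))
```

Near the jumps themselves the partial sums overshoot (the Gibbs phenomenon), which is one reason this convergence is not uniform.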

The Fourier series is great in the strictly mathematical sense, but what makes it truly great is its relevance in any field that deals with functions or waves. Fourier's most widely used tool is the Fourier transform, which moves equations from the time domain to the frequency domain. The inverse Fourier transform takes functions in the frequency domain back into the time domain, and both are rooted in the Fourier series. Let $f(t)$ be the Fourier series in the time domain:

(12)
\begin{align} f(t)=\frac{1}{\pi}\sum _{m=0}^{\infty} \left[ F_m \cos(mt)+ F'_m \sin(mt) \right] \end{align}



To obtain the inverse Fourier transform, we change the summation into an integral covering negative to positive infinity and replace the discrete coefficients $F_m$ of the time-domain series with the frequency-domain function $F(\omega)$, yielding

(13)
\begin{align} f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{i\omega t}\, d\omega \end{align}



This is the inverse Fourier transform of $F(\omega)$; the (forward) Fourier transform is the corresponding way to go from the time domain into the frequency domain.
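In computation, the continuous transform pair is replaced by the discrete Fourier transform (DFT) and its inverse. The sketch below implements both naively with only the standard library (real code would use an FFT routine instead); applying the inverse after the forward transform recovers the original samples:

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform: time samples -> frequency bins."""
    N = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(samples)) for k in range(N)]

def idft(bins):
    """Inverse DFT: frequency bins -> time samples (note the 1/N factor)."""
    N = len(bins)
    return [sum(X * cmath.exp(2j * cmath.pi * k * n / N)
                for k, X in enumerate(bins)) / N for n in range(N)]

signal = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, -0.5, 0.0]
roundtrip = idft(dft(signal))
# The roundtrip error is at the level of floating-point noise.
print(max(abs(a - b) for a, b in zip(signal, roundtrip)))
```

The opposite signs in the exponentials mirror the forward/inverse pair in equations (12) and (13), with the $1/N$ factor playing the role of the $1/2\pi$ normalization.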

Electrical engineers use Fourier transforms quite often to determine the relationships between parts of a circuit. Solving a relationship directly in the time domain would be too complicated, but by transforming the time equations into the frequency domain, the frequency response can easily be calculated using algebra. Once that happens, the result can undergo an inverse Fourier transform to yield a useful answer in the time domain.

Dealing with much bigger equipment, mechanical engineers also rely on Fourier’s methods to calculate heat transfer in rods, material deflection with varying shapes and materials, and the complex vibrations of machines at work.

Signal processing needs Fourier transforms ever more as we move into a future of transmitting information that is just as good but takes up less space. With sound files, Fourier transforms can reduce file size by reducing noise and eliminating inaudible frequencies. Similar gains are possible when Fourier transforms are applied to compress images while retaining detail.
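A toy version of this compression idea: transform a signal, keep only its few largest-magnitude frequency coefficients, and invert. The sketch below (all names are ours; a real codec would use an FFT and perceptual rules rather than plain magnitude thresholding) reconstructs a smooth 16-sample signal from just 4 of its 16 DFT coefficients:

```python
import cmath
import math

def dft(samples, sign=-1):
    """Naive DFT (sign=-1) or unscaled inverse DFT (sign=+1)."""
    N = len(samples)
    return [sum(x * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n, x in enumerate(samples)) for k in range(N)]

# A smooth 16-sample signal: its energy sits in only four frequency bins.
signal = [math.sin(2 * math.pi * n / 16) + 0.3 * math.cos(4 * math.pi * n / 16)
          for n in range(16)]

spectrum = dft(signal)
# "Compress": keep the 4 largest-magnitude coefficients, zero out the rest.
keep = sorted(range(16), key=lambda k: -abs(spectrum[k]))[:4]
compressed = [spectrum[k] if k in keep else 0 for k in range(16)]
# Invert (divide by N and take the real part) to reconstruct the signal.
restored = [z.real / 16 for z in dft(compressed, sign=+1)]

print(max(abs(a - b) for a, b in zip(signal, restored)))
```

Because this particular signal is a sum of two pure tones, its spectrum is concentrated in exactly four bins, so discarding the other twelve loses essentially nothing; for real sound or images the discarded coefficients are merely small, so the reconstruction is approximate.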

It is amazing that nineteenth-century mathematics is still a prominent source of the cutting-edge technology of today. In the words of the eminent mathematician Joseph Fourier: “Mathematics compares the most diverse phenomena and discovers the secret analogies that unite them.” Amen.

Bibliography
1. Charles Chapman Pugh. "Real Mathematical Analysis." (New York: Springer, 2002).
2. Wikipedia. "Real Analysis." http://en.wikipedia.org/wiki/Real_analysis
3. Wikipedia. "Power Series." http://en.wikipedia.org/wiki/Power_series
4. Wikipedia. "Taylor Series." http://en.wikipedia.org/wiki/Taylor_series
5. Gerald P. Michon. "Analysis, Complex Analysis." http://home.att.net/~numericana/answer/powers.gif
6. University of Colorado at Boulder. "Uniform Convergence of Fourier Series." http://amath.colorado.edu/courses/4350/2002fall/Images/uniform_gr_21.gif
7. Wolfram MathWorld. "Fourier Series." http://mathworld.wolfram.com/FourierSeries.html
8. Wikipedia. "Fourier Series." http://en.wikipedia.org/wiki/Fourier_series
9. William E. Boyce. "Elementary Differential Equations and Boundary Value Problems." (United States: John Wiley & Sons, 2005).
10. Georgia Tech University. "Fourier Series & The Fourier Transform." http://www.physics.gatech.edu/gcuo/UltrafastOptics/3803/OpticsI14FourierTransformsI.ppt#270,18,The Fourier Transform and its Inverse