Notes Regarding Classical Fourier Series

Paul Bracken\(^\ast \)

February 27, 2023; accepted: May 29, 2023; published online: July 5, 2023.

A survey of some classical results from the theory of trigonometric series is presented, especially the case of Fourier series. Some new proofs are presented, and Riemann’s theory of trigonometric series is given special attention.

MSC. 42A20, 42A24

Keywords. convergence; infinite series; uniform bounded; periodic series; trigonometric series; Fourier series.

\(^\ast \)Department of Mathematics, University of Texas, Edinburg, TX, USA, 78540, e-mail: paul.bracken@utrgv.edu.

1 Introduction

The subject of trigonometric series [1, 2, 3, 4, 5] comes up in many areas of approximation theory, as well as in the study of infinite series, the theory of representation of functions, and in constructing solutions of partial differential equations and eigenvalue problems [6, 7]. The theorem concerning the expansion of a function of a real variable into a trigonometric series we refer to as Fourier’s Theorem. It is possible to state certain sufficient conditions under which a function admits such a representation by a trigonometric series [8, 9, 10, 11].

Let \(f (t)\) be defined arbitrarily when \(-\pi \leq t \leq \pi \) and defined for all other real values by means of the periodicity condition

\begin{equation} f (t + 2 \pi ) = f(t). \label{eqI.1} \end{equation}
1.1

This states that \(f (t)\) is a periodic function with period \(2 \pi \). Suppose the Riemann integral of \(f (t)\) over \([- \pi , \pi ]\) exists, and if it is improper, suppose it is absolutely convergent.

Theorem 1.1

Define constants \(a_n\) and \(b_n\) for \(n=0,1,2, \ldots \) by

\begin{equation} \pi \, a_n = \int _{- \pi }^{\pi } \, f (t) \cos (nt) \, dt, \qquad \pi \, b_n = \int _{-\pi }^{\pi } \, f (t) \sin (nt) \, dt. \label{eqI.2} \end{equation}
1.2

If \(x\) is an interior point of any interval \((a,b)\) in which \(f (t)\) has limited total variation, the trigonometric series

\begin{equation} \tfrac {1}{2} \, a_0 + \sum _{n=1}^{\infty } \, \big(a_n \, \cos (n x) + b_n \, \sin (n x) \big), \label{eqI.3} \end{equation}
1.3

is convergent and it has a sum

\begin{equation} \tfrac {1}{2} \, [ f(x +0) + f (x -0) ]. \label{eqI.4} \end{equation}
1.4

If \(f (t)\) is continuous at \(t=x\), this sum reduces to \(f (x)\). It is usual to call the series 1.3 the Fourier series associated with \(f (t)\).

The representation of a function by means of a Fourier series can be extended to more general intervals other than \((-\pi , \pi )\) as well.
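As a purely numerical illustration of theorem 1.1, the coefficients 1.2 can be approximated by quadrature and the partial sums of 1.3 evaluated both at a point of continuity and at a jump. The following is a minimal sketch added for illustration; the square-wave test function and the helper names are assumptions of the sketch, not anything taken from the argument above.

\begin{verbatim}
import numpy as np

# Square-wave test function with a jump at t = 0: f = -1 on (-pi, 0), f = +1 on (0, pi).
def f(t):
    return np.where(np.mod(t + np.pi, 2*np.pi) - np.pi >= 0, 1.0, -1.0)

# Approximate the Fourier coefficients (1.2) by a midpoint rule on (-pi, pi).
M = 20000
t = -np.pi + (np.arange(M) + 0.5)*(2*np.pi/M)
w = 2*np.pi/M
a = lambda n: np.sum(f(t)*np.cos(n*t))*w/np.pi
b = lambda n: np.sum(f(t)*np.sin(n*t))*w/np.pi

def partial_sum(x, N):
    # the series (1.3) truncated after N harmonics
    return a(0)/2 + sum(a(n)*np.cos(n*x) + b(n)*np.sin(n*x) for n in range(1, N + 1))

print(partial_sum(1.0, 200))   # near f(1.0) = 1, a point of continuity
print(partial_sum(0.0, 200))   # near [f(0+) + f(0-)]/2 = 0 at the jump, as in (1.4)
\end{verbatim}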

2 Fejér’s Theorem

There is a theorem due to Fejér concerning the summability of the Fourier series associated with the function \(f (t)\), which is presented here.

Theorem 2.1

Let \(f (t)\) be a function of the real variable \(t\), defined arbitrarily on \(- \pi \leq t \leq \pi \) and satisfying 1.1 for all other real values of \(t\). Suppose \(\int _{-\pi }^{\pi } \, f (t) \, dt\) exists, and if it is an improper integral, let it be absolutely convergent. The Fourier series associated with the function \(f (t)\) is \((C,1)\)-summable at all points \(x\) at which the two limits \(f(x+0)\) and \(f(x-0)\) exist, and its \((C,1)\) sum is

\begin{equation} \tfrac {1}{2} \, [ f (x + 0) + f (x - 0) ]. \label{eqII.1} \end{equation}
2.1
Proof.
Let \(a_n\), \(b_n\) denote the Fourier constants 1.2 of \(f (t)\) and define

\begin{equation} A_0 = \tfrac {a_0}{2}, \qquad A_n (x) = a_n \, \cos (n x) + b_n \, \sin (nx), \qquad S_n (x) = \sum _{j=0}^{n} \, A_j (x). \label{eqII.2} \end{equation}
2.2

It must be proved that

\begin{equation} \lim _{n \rightarrow \infty } \, \tfrac {1}{n} [ A_0 + S_1 (x) + \cdots + S_{n-1} (x) ] = \tfrac {1}{2} \, [ f (x +0) + f (x - 0) ], \label{eqII.3} \end{equation}
2.3

provided the two limits on the right exist.

Note first that

\begin{equation} \sum _{n=1}^{m-1} \, S_n (x) = \sum _{n=1}^{m-1} \, \Bigg(\sum _{j=0}^n \, A_j (x) \Bigg) = (m-1) A_0 (x) + \sum _{j=1}^{m-1} \, (m-j) \cdot A_j (x) \label{eqII.4} \end{equation}
2.4

or

\begin{equation} A_0 + \sum _{n=1}^{m-1} \, S_n (x) = m A_0 + (m-1) \, A_1 (x) + (m-2) A_2 (x) + \cdots + A_{m-1} (x). \label{eqII.5} \end{equation}
2.5

From 2.2 and definition 1.2,

\begin{align} A_n (x) & = \tfrac {1}{\pi } \, \int _{-\pi }^{\pi } \, \big(\cos (n t) \cos (n x) + \sin (nt) \sin (nx) \big) \, f (t) \, dt\nonumber \\ & = \tfrac {1}{\pi } \, \int _{-\pi }^{\pi } \, \cos (n (t-x) ) \, f (t) \, dt. \label{eqII.6} \end{align}

Using 2.6 in 2.5, we conclude

\begin{align} A_0 + \sum _{n=1}^{m-1} \, S_n (x) & =\tfrac {1}{\pi }\int _{-\pi }^{\pi } \, \big(\tfrac {m}{2} + (m-1) \cos (x-t) + \\ & \quad +(m-2) \cos 2 (x-t) + \cdots + \cos ((m-1) (x-t) ) \big) \, f (t) \, dt.\nonumber \label{eqII.7} \end{align}

The series in the brackets can be summed in closed form by substituting \(\mu = e^{i (x-t)}\),

\begin{align*} & m + (m-1) (\mu + \tfrac {1}{\mu }) + (m-2) (\mu ^2 + \tfrac {1}{\mu ^2}) + \cdots + (\mu ^{m-1} + \tfrac {1}{\mu ^{m-1}} )= \\ & = (1 - \mu )^{-2} \, \mu ^{1-m} (1 - \mu ^m)^2 = (1 - \mu )^{-2} (\mu ^{1-m} -2 \mu + \mu ^{m+1}) \\ & = \tfrac {(\mu ^{m/2} - \mu ^{- m/2})^2}{(\mu ^{1/2} - \mu ^{-1/2})^2} = \tfrac {\sin ^2 (\frac{m}{2} (x-t))} {\sin ^2 (\frac{1}{2} (x-t)) }. \end{align*}

Using this in 2.7, it is found that

\begin{equation} A_0 (x) + \sum _{n=1}^{m-1} \, S_n (x) = \tfrac {1}{2 \pi } \int _{-\pi }^{\pi } \, \tfrac {\sin ^2 \frac{m}{2} (x-t)} {\sin ^2 \frac{1}{2} (x-t)} \, f (t) \, dt. \label{eqII.8} \end{equation}
2.8
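Before continuing, note that the closed form just obtained can be checked numerically. The following minimal sketch is an added illustration, not part of the proof; it compares the cosine sum appearing in 2.7 with the kernel of 2.8 at a few sample values of \(u = x - t\).

\begin{verbatim}
import numpy as np

def cosine_sum(u, m):
    # the bracket in (2.7): m/2 + (m-1)cos(u) + (m-2)cos(2u) + ... + cos((m-1)u)
    j = np.arange(1, m)
    return m/2 + np.sum((m - j)*np.cos(j*u))

def fejer_kernel(u, m):
    # the closed form (1/2) sin^2(m u/2) / sin^2(u/2)
    return 0.5*np.sin(m*u/2)**2/np.sin(u/2)**2

m = 7
for u in (0.3, 1.1, 2.5):
    print(cosine_sum(u, m), fejer_kernel(u, m))   # the two columns agree
\end{verbatim}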

Bisect the path of integration, replacing \(t\) by \(x \mp 2 \theta \) in the two pieces that appear, respectively. Then using the transformation \(\theta \rightarrow - \theta \) along the way, 2.8 becomes

\begin{align} \label{eqII.9} & A_0 + \sum _{n=1}^{m-1} \, S_n (x) = \\ & =- \tfrac {1}{\pi } \int _0^{- \pi /2} \, \tfrac {\sin ^2 (m \theta )} { \sin ^2 \theta } \, f (x - 2 \theta ) \, d \theta \! +\! \tfrac {1}{\pi } \int _{-\pi /2}^0 \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \, \theta } \, f (x\! +\! 2 \theta ) \, d \theta \nonumber \\ & = \tfrac {1}{\pi } \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )} {\sin ^2 \, \theta } \, f (x + 2 \theta ) \, d \theta + \tfrac {1}{\pi } \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )} {\sin ^2 \, \theta } \, f (x - 2 \theta ) \, d \theta .\nonumber \end{align}

To finish the proof, it must be shown that as \(m\) approaches infinity,

\begin{align} \label{eqII.10} & \tfrac {1}{m} \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )} {\sin ^2 \, \theta } \, f (x + 2 \theta ) \, d \theta \rightarrow \tfrac {\pi }{2} \, f (x +0), \quad \\ & \tfrac {1}{m} \int _0^{\pi /2} \tfrac {\sin ^2 (m \theta )} {\sin ^2 \, \theta } \, f (x - 2 \theta ) \, d \theta \rightarrow \tfrac {\pi }{2} \, f (x - 0).\nonumber \end{align}

To do this, begin with the following expansion

\begin{equation*} \tfrac {1}{2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \, \theta } = \tfrac {1}{2} m + (m-1) \, \cos (2 \theta ) + \cdots + \cos (2 (m-1) \, \theta ) , \end{equation*}

and integrate this over \((0, \pi /2)\) and use the fact that the cosine terms integrate to zero to get

\begin{equation} \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \theta } \, d \theta = \tfrac {\pi }{2} \, m. \label{eqII.11} \end{equation}
2.11

It has to be shown that

\begin{equation} \tfrac {1}{m} \, \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \theta } \, \varphi _{\pm } (\theta ) \, d \theta \rightarrow 0 \label{eqII.12} \end{equation}
2.12

as \(m \rightarrow \infty \), where \(\varphi _{\pm } (\theta )= f (x \pm 2 \theta ) - f (x \pm 0)\), respectively.

Given an arbitrary positive number \(\epsilon \), a positive \(\delta \) can be chosen such that \(| \varphi _{\pm } (\theta )| {\lt} \epsilon \) holds whenever \(0 {\lt} \theta \leq \delta /2\). This choice of \(\delta \) just depends on \(f\) and is independent of \(m\), therefore,

\begin{align} & \tfrac {1}{m} \bigg| \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \theta } \varphi _{\pm } (\theta ) \, d \theta \bigg| \leq \\ & \leq \tfrac {1}{m} \int _0^{\delta /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \theta } \, | \varphi _{\pm } (\theta ) | \, d \theta + \tfrac {1}{m} \, \int _{\delta /2}^{\pi /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \, \theta } | \varphi _{\pm } (\theta )| \, d \theta \nonumber \\ & {\lt} \tfrac {\epsilon }{m} \, \int _0^{\delta /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \, \theta } \, d \theta + \tfrac {1}{m \sin ^2 (\delta /2)} \, \int _{\delta /2}^{\pi /2} \, | \varphi _{\pm } (\theta ) | \, d \theta \nonumber \\ & \leq \tfrac {\pi }{2} \epsilon + \tfrac {1}{m \sin ^2 (\delta /2)} \int _0^{\pi /2} \, | \varphi _{\pm } (\theta ) | \, d \theta . \label{eqII.13}\nonumber \end{align}

The convergence of the integral \(\int _{-\pi }^{\pi } \, | f (t) | \, dt\) implies the convergence of the integral \(\int _0^{\pi /2} \, | \varphi _{\pm } (\theta )| d \theta \). Given \(\epsilon {\gt}0\) and thus a \(\delta \), the following inequality can be enforced by taking \(m\) sufficiently large,

\begin{equation} \int _0^{\pi /2} \, | \varphi _{\pm } (\theta ) | \, d \theta < \epsilon \, \tfrac {\pi }{2} \cdot m \sin ^2 (\tfrac {\delta }{2}). \label{eqII.14} \end{equation}
2.14

Hence for \(\epsilon \) an arbitrary positive number, by choosing \(m\) sufficiently large, we can enforce the inequality

\begin{equation} \tfrac {1}{m} \, \bigg| \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \theta } \varphi _{\pm } (\theta ) \, d \theta \bigg| < \pi \, \epsilon . \end{equation}
2.15

By definition of limit, this leads to 2.10,

\begin{equation*} \lim _{m \rightarrow \infty } \, \tfrac {1}{m} \, \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )} {\sin ^2 \, \theta } \, \varphi _{\pm } (\theta ) \, d \theta =0. \label{eqII.15} \end{equation*}

Consequently, the limits 2.10 hold, and dividing 2.9 by \(m\) and letting \(m \rightarrow \infty \), the theorem follows.

□
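Fejér’s theorem lends itself to a direct numerical experiment. The sketch below is an added illustration; the step-function test case with a jump at \(t = c\), and the helper names, are assumptions of the sketch rather than anything from the text. The Cesàro means \(\tfrac{1}{m}[S_0(x) + \cdots + S_{m-1}(x)]\) are formed at the jump and approach the average 2.1.

\begin{verbatim}
import numpy as np

# Step function with a jump at t = c: f = 0 on (-pi, c], f = 1 on (c, pi).
c = 0.5
def f(t):
    return np.where(np.mod(t + np.pi, 2*np.pi) - np.pi > c, 1.0, 0.0)

# Fourier coefficients (1.2) by a midpoint rule, precomputed up to n = N.
M, N = 4000, 200
t = -np.pi + (np.arange(M) + 0.5)*(2*np.pi/M)
w = 2*np.pi/M
a = [np.sum(f(t)*np.cos(n*t))*w/np.pi for n in range(N + 1)]
b = [np.sum(f(t)*np.sin(n*t))*w/np.pi for n in range(N + 1)]

def S(x, n):        # partial sum S_n(x) of (2.2)
    return a[0]/2 + sum(a[k]*np.cos(k*x) + b[k]*np.sin(k*x) for k in range(1, n + 1))

def cesaro(x, m):   # (C,1) mean (1/m)[S_0(x) + ... + S_{m-1}(x)]
    return np.mean([S(x, n) for n in range(m)])

for m in (10, 40, 160):
    print(m, cesaro(c, m))   # approaches [f(c+0) + f(c-0)]/2 = 0.5
\end{verbatim}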

3 The Hurwitz-Liapounoff Theorem

The following Lemma which involves Fourier constants is very useful in what follows.

Lemma 3.1

Let \(A_n (x) = a_n \cos (n x) + b_n \sin (n x)\), then \((a)\) and \((b)\) hold.

\begin{equation} \begin{array}{cc} (a) & \displaystyle \int _{-\pi }^{\pi } \, f (x) \, \sum _{n=0}^{m-1} \, A_n (x) \, dx = \tfrac {\pi }{2} a_0^2 + \pi \sum _{n=1}^{m-1} \, (a_n^2 + b_n^2 ), \\ (b) & \displaystyle \int _{-\pi }^{\pi } \, \sum _{n=0}^{m-1} \, A_n (x) \displaystyle \sum _{l=0}^{m-1} \, A_l (x) \, dx = \displaystyle \tfrac {\pi }{2} a_0^2 + \pi \, \sum _{n=1}^{m-1} \, (a_n^2 + b_n^2). \\ \end{array} \label{eqIII.1} \end{equation}
3.1
Proof.
(a)
\begin{align*} & \int _{-\pi }^{\pi } \, f (x) \, \sum _{n=0}^{m-1} \, A_n (x) dx= \\ & = \tfrac {a_0}{2} \int _{-\pi }^{\pi } \, f(x) \, dx + \int _{-\pi }^{\pi } \, f(x) \sum _{n=1}^{m-1} \, A_n (x) \, dx \\ & = \tfrac {\pi }{2} a_0^2 + \sum _{n=1}^{m-1} \, \bigg(a_n \int _{-\pi }^{\pi } \, f (x) \cos (n x) \, dx + b_n \int _{-\pi }^{\pi } \, f (x) \sin (n x) \, dx\bigg) \\ & = \tfrac {\pi }{2} a_0^2 + \pi \, \sum _{n=1}^{m-1} \, (a_n^2 + b_n^2 ). \end{align*}

(b)

\begin{align*} & \int _{-\pi }^{\pi } \, \bigg(A_0 + \sum _{n=1}^{m-1} \, A_n (x)\bigg) \bigg(A_0 + \sum _{p=1}^{m-1} \, A_p (x) \bigg) \, dx= \\ & = \tfrac {\pi }{2} a_0^2 \! +\! \sum _{n=1}^{m-1} \, \sum _{p=1}^{m-1} \, \int _{-\pi }^{\pi } \, (a_n \cos (nx)\! +\! b_n \sin (nx)) (a_p \cos (px) \! +\! b_p \sin (px) ) \, dx \\ & = \tfrac {\pi }{2} a_0^2 + \sum _{n=1}^{m-1} \, \sum _{p=1}^{m-1} \, (a_n a_p \delta _{np} + b_n b_{p} \delta _{np} ) \pi = \tfrac {\pi }{2} a_0^2 + \pi \, \sum _{n=1}^{m-1} \, (a_n^2 + b_n^2 ). \end{align*}

□

Theorem 3.1

One has

\begin{equation} \lim _{m \rightarrow \infty } \, \int _{-\pi }^{\pi } \, \bigg\{ f (x) - \tfrac {1}{m} \sum _{n=0}^{m-1} \, S_n (x) \bigg\} ^2 \, dx =0. \label{eqIII.3} \end{equation}
3.2

Proof.
Partition the interval \((- \pi , \pi )\) into \(4 N\) subintervals, each of length \(\delta \), so that \(4 N \delta = 2 \pi \), or \(\delta = \pi / 2N\). The partition points are \(y_0 = - \pi , y_1 = - \pi + \delta , \ldots , y_k =- \pi + k \delta , \ldots , y_{4N} = -\pi + 2 \pi = \pi \). Consider the set of subintervals defined as \(I_0 = (-\pi , -\pi + \delta )\), \(I_k = ((2k-1) \delta - \pi , (2k+2) \delta - \pi )\), \(k=1, \ldots , 2N-1\). Let \(U_k, L_k\) be the upper and lower bounds of \(f (x)\) on \(I_k\), and let \(| f(x)|\) be bounded above by a constant \(C\) for all \(x \in (- \pi , \pi )\). Denote by \(x_k^*\) a sample point in the interval \(I_k\) to be used in the \(x\) integration. Choose \(\eta _k\) arbitrarily but such that \(J_k = (x - \eta _k, x + \eta _k) \subset (2 k \delta - \pi , (2 k+2) \delta - \pi )\). In fact, to carry out the sum, it suffices to fix \(\eta _k = \delta \) for each \(k\). As part of the integrand in the \(x\) integration,

\begin{equation} f (x) - \tfrac {1}{m} \, \sum _{n=0}^{m-1} \, S_n (x) = \tfrac {1}{2 \pi m} \, \int _{- \pi +x}^{\pi +x} \, \tfrac {\sin ^2 \, \frac{m}{2} (x-t)}{\sin ^2 \, \frac{1}{2} (x-t)} \, (f (x) - f (t) ) \, dt. \label{eqIII.4} \end{equation}
3.3

Take the absolute value on both sides of 3.3 and split up the integral over \((-\pi + x, \pi +x)\) so that

\begin{align} & \Bigg| f(x) - \tfrac {1}{m} \, \sum _{n=0}^{m-1} \, S_n (x) \Bigg|\leq \label{eqIII.5} \\ & \leq \tfrac {1}{2 \pi m} \, \bigg\{ \int _{-\pi +x}^{x - \delta } \, \tfrac {\sin ^2 \frac{m}{2} (x -t) }{\sin ^2 \frac{1}{2} (x-t)} | f (x) \! -\! f(t)| \, dt +\int _{x- \delta }^{x+ \delta } \, \tfrac {\sin ^2 \frac{m}{2} (x-t)}{\sin ^2 \frac{1}{2} (x-t)} | f(x) \! -\! f(t) | \, dt \nonumber \\ & \quad + \int _{x+\delta }^{x+\pi } \, \tfrac {\sin ^2 \frac{m}{2} (x-t)} {\sin ^2 \frac{1}{2} (x-t)} | f (x) \! -\! f (t)| \, dt \bigg\} \nonumber \\ & \leq \tfrac {1}{ 2 \pi m} \, \Big\{ 2 C \tfrac {\pi - \delta }{\sin ^2 \frac{\delta }{2}} + (U_k - L_k ) \tfrac {\pi m}{2} + 2 C \tfrac {\pi - \delta }{\sin ^2 \, \frac{\delta }{2}} \Big\} \leq 2 C \, \Big(1 + \tfrac {1}{m \sin ^2 \, \frac{\delta }{2}}\Big)\nonumber \end{align}

The square of 3.4 can be given in the following way

\[ \bigg| f (x) - \tfrac {1}{m} \, \sum _{n=0}^{m-1} \, S_n (x) \bigg|^2 \leq 2 C \Big(1 + \tfrac {1}{m \sin ^2 \, \frac{\delta }{2}}\Big) \Big(U_k - L_k + \tfrac {2 C}{m \pi } \tfrac {\pi - \delta } {\sin ^2 \frac{\delta }{2}} \Big). \]

The right-hand side corresponds to the sample point \(x_k^*\) when the Riemann sum is formed for the integration over \(x\). The right side is an upper bound for that integral. Consequently,

\begin{align} & \int _{-\pi }^{\pi } \, \bigg| f (x) - \tfrac {1}{m} \, \sum _{n=0}^{m-1} \, S_n (x) \bigg|^2 \, dx \leq \label{eqIII.6} \\ & \leq 2 C \left(1 + \tfrac {1}{m \sin ^2 (\delta /2)} \right) \left(\sum _{k=0}^{2N-2} \, (U_k - L_k) \cdot \delta + \tfrac {2 C}{\pi m} \cdot \tfrac {4 N}{\sin ^2 (\delta /2)} \right).\nonumber \end{align}

Since \(f(x)\) is Riemann integrable, both \(\sum _{p=0}^{N-1} \, (U_{2p} - L_{2p}) \cdot \delta \) and \(\sum _{p=0}^{N-1} \, (U_{2p-1} - L_{2p-1} ) \cdot \delta \) can be made arbitrarily small by taking \(N\) sufficiently large. Once such a value of \(N\), and hence \(\delta \), has been designated, choose \(m = m^*\) so that \(4N / m^* \pi \sin ^2 (\delta /2) {\lt} \epsilon /2\). The expression on the right side of 3.5 is made arbitrarily small by letting \(m\) have any value greater than \(m^*\). Hence the expression on the left side of the inequality approaches zero as \(m \rightarrow \infty \).

□

Theorem 3.2

Let \(f(t)\) be bounded in the interval \((-\pi , \pi )\) and let \(\int _{-\pi }^{\pi } \, f (t) \, dt\) exist so that the Fourier coefficients \(a_n\), \(b_n\) of \(f (t)\) exist. Then the series

\begin{equation} \tfrac {1}{2} \, a_0^2 + \sum _{n=1}^{\infty } \, (a_n^2 + b_n^2) \label{eqIII.2} \end{equation}
3.6

is convergent and its sum is \(\frac{1}{\pi } \, \int _{-\pi }^{\pi } \, (f (t))^2 \, dt\).

Proof.
Lemma 3.1 and theorem 3.1 prove theorem 3.2 by means of the following approach. Begin with the identities

\begin{equation} \sum _{n=0}^{m-1} \, S_n (x) = \sum _{n=0}^{m-1} \, (m-n) A_n (x), \qquad \tfrac {1}{m} \, \sum _{n=0}^{m-1} \, S_n (x) = \sum _{n=0}^{m-1} \, (1 - \tfrac {n}{m} ) \, A_n (x). \label{eqIII.7} \end{equation}
3.7

By means of 3.7, it is deduced that

\begin{align} & \int _{-\pi }^{\pi } \, \bigg(f(x) - \tfrac {1}{m} \sum _{n=0}^{m-1} \, S_n (x) \bigg)^2 \, dx = \int _{-\pi }^{\pi } \, \bigg(f(x) - \sum _{n=0}^{m-1} \, \tfrac {m-n}{m} \, A_n (x) \bigg)^2 \, dx\nonumber \\ & = \int _{-\pi }^{\pi } \, \bigg(f (x) - \sum _{n=0}^{m-1} A_n (x) + \sum _{n=0}^{m-1} \, \tfrac {n}{m} A_n (x) \bigg)^2 \, dx \nonumber \\ & = \int _{-\pi }^{\pi } \, \bigg(f(x) - \sum _{n=0}^{m-1} \, A_n (x) \bigg)^2 \, dx + \int _{-\pi }^{\pi } \, \bigg( \sum _{n=0}^{m-1} \, \tfrac {n}{m} \, A_n (x) \bigg)^2 \, dx\nonumber \\ & \quad + 2 \int _{-\pi }^{\pi } \, \bigg(f(x) - \sum _{n=0}^{m-1} A_n (x) \bigg) \sum _{n=0}^{m-1} \, \tfrac {n}{m} \, A_n (x) \, dx. \label{eqIII.8} \end{align}

The results in lemma 3.1 can be used now

\begin{align} & \int _{-\pi }^{\pi } \, \left(f (x) - \tfrac {1}{m} \sum _{n=0}^{m-1} \, S_n (x) \right)^2 \, dx=\nonumber \\ & = \int _{-\pi }^{\pi } \left(f(x) - \sum _{n=0}^{m-1} A_n (x)\right)^2 \, dx + \tfrac {\pi }{m^2} \sum _{n=0}^{m-1} \, n^2 (a_n^2 + b_n^2)\nonumber \\ & \quad + 2 \pi \sum _{n=0}^{m-1} \, \tfrac {n}{m} \, (a_n^2 + b_n^2) - 2 \pi \sum _{n=0}^{m-1} \, \tfrac {n}{m} \, (a_n^2 + b_n^2) \nonumber \\ & = \int _{-\pi }^{\pi } \, \left(f(x) - \sum _{n=0}^{m-1} \, A_n (x) \right)^2 \, dx + \tfrac {\pi }{m^2} \, \sum _{n=0}^{m-1} \, n^2 (a_n^2 + b_n^2). \label{eqIII9} \end{align}

Since the integral on the left approaches zero by theorem 3.1 as \(m \rightarrow \infty \), and since 3.9 shows it equals the sum of two nonnegative terms, it follows that each of these expressions must tend to zero as well. In particular,

\begin{equation*} \int _{-\pi }^{\pi } \, \left(f(x) - \sum _{n=0}^{m-1} \, A_n (x) \right)^2 \, dx \rightarrow 0. \end{equation*}

Expanding the bracket, the left side is equal to

\begin{align*} & \int _{-\pi }^{\pi } \, (f (x) )^2 \, dx \! \! -\! \! 2 \int _{-\pi }^{\pi } \, \left(f (x)\! \! -\! \! \sum _{n=0}^{m-1} \, A_n (x) \right) \sum _{l=0}^{m-1} \, A_l (x) \! \! -\! \! \int _{-\pi }^{\pi } \, \left(\sum _{n=0}^{m-1} \, A_n (x)\right)^2 \, dx \\ & = \int _{-\pi }^{\pi } \, | f(x)|^2 \, dx \! -\! \! \int _{-\pi }^{\pi } \left(\sum _{n=0}^{m-1} \, A_n (x) \right)^2 \, dx \\ & = \int _{-\pi }^{\pi } (f(x) )^2 \, dx \! -\! \pi \left(\tfrac {a_0^2}{2} \! +\! \sum _{n=1}^{m-1} \, (a_n^2 + b_n^2 )\right). \end{align*}

As this expression must go to zero as \(m \rightarrow \infty \), it follows that as \(m \rightarrow \infty \),

\begin{equation} \int _{-\pi }^{\pi } \, (f (x) )^2 \, dx - \pi \big( \tfrac {a_0^2}{2} + \sum _{n=1}^{m-1} \, (a_n^2 + b_n^2 ) \big) \rightarrow 0. \label{eqIII.10} \end{equation}
3.10

□
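Theorem 3.2 admits a simple numerical check. The following added sketch assumes the test function \(f(t) = t\) on \((-\pi, \pi)\), for which \(a_n = 0\) and \(b_n = 2(-1)^{n+1}/n\), so both sides of Parseval's relation equal \(2\pi^2/3\).

\begin{verbatim}
import numpy as np

# Test function f(t) = t on (-pi, pi): a_n = 0 and b_n = 2(-1)^(n+1)/n.
n = np.arange(1, 200001)
b = 2.0*(-1.0)**(n + 1)/n
lhs = np.sum(b**2)              # (1/2)a_0^2 + sum_n (a_n^2 + b_n^2)
rhs = 2*np.pi**2/3              # (1/pi) * integral_{-pi}^{pi} t^2 dt
print(lhs, rhs)                 # the partial sums increase toward rhs
\end{verbatim}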

4 The Dirichlet-Bonnet proof of Fourier’s Theorem

It is very useful to have a proof of Fourier’s theorem that does not make use of the theory of summability. The proof of the theorem that follows is on the same general lines as the proof established by Dirichlet and Bonnet.

Theorem 4.1

Let \( f(t)\) be a function defined arbitrarily for \(- \pi \leq t \leq \pi \), and defined by the condition \(f (t + 2 \pi ) = f (t)\) for all other real values of \(t\). Let \(\int _{- \pi }^{\pi } \, f(t) \, dt\) exist, and if it is improper, let it be absolutely convergent. For \(a_n\) and \(b_n\) defined by 1.2, if \(x\) is an interior point of any interval \((a,b)\) within which \(f (t)\) has limited total fluctuation, the series 1.3 is convergent and the sum is given by 1.4.

Proof.
The function \(S_m (x)\) can be expressed directly as an integral as
\begin{align} S_{m} (x)& = \tfrac {1}{\pi } \, \int _{-\pi }^{\pi } \, \big(\tfrac {1}{2} + \cos (x-t) + \cos (2 (x-t)) + \cdots + \cos (m (x-t)) \big) f(t) \, dt\nonumber \\ & = \tfrac {1}{ 2 \pi } \, \int _{-\pi }^{\pi } \, \tfrac {\sin (m + \frac{1}{2} )(x-t)} {\sin \frac{1}{2} (x-t)} \, f(t) \, dt\nonumber \\ & = \tfrac {1}{\pi } \int _0^{\pi /2} \, \tfrac {\sin (2 m+1) \theta }{\sin \, \theta } f (x + 2 \theta ) \, d \theta + \tfrac {1}{\pi } \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } f (x - 2 \theta ) \, d \theta . \label{eqIV.1} \end{align}

Integrating the equation

\begin{equation*} \tfrac {\sin (2 m+1) \, \theta }{\sin \theta } = 1 + 2 \cos 2 \theta + 2 \cos 4 \theta + \cdots + 2 \cos (2 m \theta ) \end{equation*}

with respect to \(\theta \in (0, \pi /2)\), we arrive at

\begin{equation} \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta } {\sin \, \theta } \, d \theta = \tfrac {\pi }{2}. \label{eqIV.2} \end{equation}
4.2

Using 4.2, we can form the difference

\begin{align} & S_m (x) - \tfrac {1}{2} [ f (x+0) + f (x-0)]= \\ & = \tfrac {1}{\pi } \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \, \theta } [ f (x + 2 \theta ) - f (x+0) ] \, d \theta \nonumber \\ & \quad + \tfrac {1}{\pi } \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } [ f (x - 2 \theta ) - f (x -0) ] \, d \theta .\nonumber \label{eqIV.3} \end{align}

In order to prove \(S_m (x)\) approaches 1.4 as \(m \rightarrow \infty \), it is sufficient to prove that

\begin{equation} \lim _{m \rightarrow \infty } \, \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } \varphi _{\pm } (\theta ) \, d \theta =0, \label{eqIV.4} \end{equation}
4.4

where \(\varphi _{\pm } (\theta ) = f (x \pm 2 \theta ) - f (x \pm 0)\). The function \(\varphi _{\pm } (\theta ) \cdot \theta \cdot \csc (\theta )\) is a function with limited total fluctuation on an interval for which \(\theta =0\) is an end point, so

\begin{equation} \varphi _{\pm } (\theta ) \cdot \theta \cdot \csc \, \theta = \chi _1 (\theta ) - \chi _2 (\theta ). \label{eqIV.5} \end{equation}
4.5

In 4.5 \(\chi _{1,2} (\theta )\) are bounded positive increasing functions of \(\theta \) such that \(\chi _1 (+0) = \chi _2 (+0) =0\). Given an arbitrary positive number \(\epsilon \), a positive number \(\delta \) can be chosen such that \(0 \leq \chi _1 (\theta ) {\lt} \epsilon \) and \(0 \leq \chi _2 (\theta ) {\lt} \epsilon \) whenever \(0 \leq \theta \leq \delta /2\). The integral in 4.4 can be split up

\begin{align} & \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta } {\sin \theta } \, \varphi _{\pm } (\theta ) \, d \theta = \label{eqIV.6} \\ & =\! \int _{\delta /2}^{\pi /2} \, \tfrac {\sin (2 m+1) \theta }{\sin \theta } \, \varphi _{\pm } (\theta ) \, d \theta +\int _0^{\delta /2} \, \tfrac {\sin (2m+1) \theta } {\theta } \, (\chi _1 (\theta ) - \chi _2 (\theta ) ) \, d \theta \nonumber \\ & =\! \int _{\delta /2}^{\pi /2} \! \! \tfrac {\sin ((2m+1) \theta )} {\sin \, \theta } {\varphi _{\pm } (\theta )} \, d \theta \! + \! \int _0^{\delta /2} \! \! \tfrac {\sin ((2m+1) \theta )}{\theta } \chi _1 (\theta ) \, d \theta \nonumber \! -\! \! \int _0^{\delta /2} \! \! \tfrac {\sin ((2m+1) \theta )} {\theta } \chi _2 (\theta ) \, d \theta .\nonumber \end{align}

The modulus of the first integral can be made less than \(\epsilon \) by taking \(m\) sufficiently large. This follows from the Riemann-Lebesgue lemma since \(\varphi _{\pm } (\theta ) \csc (\theta )\) has an integral which converges absolutely in \((\delta /2, \pi /2)\).

From the second mean value theorem, it follows that there is a number \(\zeta \) between \(0\) and \(\delta /2\) such that

\begin{align*} \bigg| \int _0^{\delta /2} \, \tfrac {\sin (2m+1) \theta } {\theta } \, \chi _1 (\theta ) \, d \theta \bigg| & \leq \bigg| \chi _1 (\tfrac {\delta }{2} ) \cdot \int _{\zeta }^{\delta /2} \, \tfrac {\sin (2m+1) \, \theta }{\theta } \, d \theta \bigg| \\ & \leq \chi _1 (\tfrac {\delta }{2}) \cdot \bigg| \int _{(2m+1) \zeta }^{(m+1/2) \delta } \, \tfrac {\sin \, t}{t} \, dt \bigg|. \end{align*}

Since it is known that \(\int _0^{\infty } \, (\sin \, t/t) \, dt\) converges, it follows that \(|\int _{\beta }^{\infty } \, \sin t/t \, dt|\) has an upper bound \(\gamma \) which is independent of \(\beta \geq 0\). Hence it is clear that

\begin{equation} \bigg| \int _0^{\delta /2} \, \tfrac {\sin (2m+1) \theta } {\theta } \, \chi _1 (\theta ) \, d \theta \bigg| \leq 2 \, \gamma \, \chi _1 (\tfrac {\delta }{2}) \leq 2 \gamma \epsilon . \label{eqIV.7} \end{equation}
4.6

The third integral can be treated in a similar way. By taking \(m\) sufficiently large

\begin{equation} \bigg| \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta } {\sin \theta } \, \varphi _{\pm } (\theta ) \, d \theta \bigg| \leq \epsilon + 2 \gamma \epsilon + 2 \gamma \epsilon = (4 \gamma +1) \, \epsilon . \label{eqIV.8} \end{equation}
4.7

By definition of limit, this implies that

\begin{equation} \lim _{m \rightarrow \infty } \, \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } \, \varphi _{\pm } (\theta ) \, d \theta =0. \label{eqIV.9} \end{equation}
4.8

However, it has been seen that this is a sufficient condition for the limit of \(S_m (x)\) to equal \([ f(x +0) + f (x-0)]/2\) as \(m\) approaches infinity. We have therefore established the convergence of a Fourier series under the conditions stated.

□

The condition that \(x\) should be an interior point of an interval in which \(f (t)\) has limited total fluctuation is merely a sufficient condition for convergence of the Fourier series. It could be replaced by any other condition which ensures that

\begin{equation} \lim _{m \rightarrow \infty } \, \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } \, \varphi _{\pm } (\theta ) \, d \theta =0. \end{equation}
4.9
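The limiting condition just stated can be observed numerically. The sketch below is an added illustration with the test choice \(\varphi(\theta) = \theta\) (continuous, of limited fluctuation, and vanishing at \(\theta = 0^+\)); the function name and the quadrature parameters are assumptions of the sketch.

\begin{verbatim}
import numpy as np

def dirichlet_integral(m, K=200000):
    # midpoint rule for the integral in (4.9) with phi(theta) = theta
    theta = (np.arange(K) + 0.5)*(np.pi/2)/K
    return np.sum(np.sin((2*m + 1)*theta)/np.sin(theta)*theta)*(np.pi/2)/K

for m in (1, 10, 100, 1000):
    print(m, dirichlet_integral(m))    # the values tend to zero as m grows
\end{verbatim}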

5 Theories of Trigonometric Series

The theory of Fourier series due to Dirichlet is directed towards series which represent given functions. Riemann made advances in this regard and considered properties of functions defined by series of the form 1.3, where it is assumed that \(\lim _{n \rightarrow \infty } \, (a_n \cos (nx) + b_n \sin (nx)) =0\). Some properties which lead up to Riemann’s theorem are introduced. That theorem states essentially that if two trigonometric series converge and are equal at all points of the interval \((- \pi , \pi )\) with the possible exception of a finite number of points, the corresponding coefficients of the two series are equal.

Let the sum of 1.3 at any point \(x\) where it converges be denoted \(f (x)\) and define a function \(F (x)\) to be

\begin{equation} F (x) = \tfrac {1}{2} A_0 \, x^2 - \sum _{n=1}^{\infty } \, \tfrac {A_n (x)}{n^2}. \label{eqV.1} \end{equation}
5.1

To prove the theorem here, two significant results are needed: a convergence theorem attributed to Hardy, and the following important lemma, which was first introduced by Cantor.

Lemma 5.1

If \(\lim _{n \rightarrow \infty } \, A_n (x) =0\) for all values of \(x\) such that \(a \leq x \leq b\), then \(a_n \rightarrow 0\) and \(b_n \rightarrow 0\) as \(n \rightarrow \infty \).

Theorem 5.1

If the series defining \(f (x)\) converges at all points of any finite interval, the series defining \(F (x)\) converges for all real values of \(x\).

Proof.
If it is assumed that the series which defines \(f (x)\) converges at all points of a certain interval of the real axis, it is the case by lemma 5.1 that \(a_n, b_n \rightarrow 0\). Then for all real values of \(x\), \(| a_n \cos (nx) + b_n \sin (nx)| \leq (a_n^2 + b_n ^2)^{1/2} \rightarrow 0\), so the terms of the series in 5.1 are \(o(1/n^2)\). Consequently the series 5.1 converges absolutely and uniformly for all real values of \(x\), and so \(F (x)\) is continuous for all real \(x\).
□

6 Properties of the Function \(F (x)\)

Lemma 6.1 Riemann

Define

\begin{equation} G (x,a) = \tfrac {F (x +2a) + F (x-2a) - 2 F(x)}{4 a^2}. \label{eqVI.1} \end{equation}
6.1

Then \(\lim _{a \rightarrow 0} \, G (x,a) = f (x)\) provided that \(\sum _{n=0}^{\infty } \, A_n (x)\) converges for the value of \(x\) under consideration.

Proof.
Since the series which define \(F (x)\) and \(F (x \pm 2 a)\) converge absolutely, terms may be rearranged by first noticing that

\begin{equation} \begin{array}{c} \cos \, n (x + 2 a) + \cos \, n (x -2a) - 2 \cos (nx) = - 4 \sin ^2 (na) \cdot \cos (nx), \\ \sin \, n (x + 2 a) + \sin \, n (x-2 a) - 2 \sin (nx) = - 4 \sin ^2 (na) \cdot \sin (nx). \\ \end{array} \label{eqVI.2} \end{equation}
6.2

Substituting into 6.1, the function \(G (x,a)\) can be calculated

\begin{align*} & F (x +2a) + F (x-2a) - 2 F(x)= \\ & = \tfrac {1}{2} A_0 (x + 2a)^2 \! -\! \! \sum _{n=1}^{\infty } \! \tfrac {A_n (x + 2a)}{n^2} \! +\! \tfrac {1}{2} A_0 (x-2a)^2 \! \! - \! \! \sum _{n=1}^{\infty } \! \tfrac {A_n (x - 2a)}{n^2} \! -\! A_0 x^2 \! +\! 2 \! \sum _{n=1}^{\infty } \, \tfrac {A_n (x)} {n^2} \\ & = 4 a^2 A_0 + 4 \sum _{n=1}^{\infty } \, \tfrac {1}{n^2} (a_n \sin ^2 (na) \cos (nx) + b_n \sin ^2 (na) \sin (nx)) \\ & = 4 a^2 A_0 + 4 \sum _{n=1}^{\infty } \, \tfrac {1}{n^2} \Big(a_n \cos (nx) + b_n \sin (nx)\Big) \cdot \sin ^2 (na). \end{align*}

Therefore, recalling 2.2, we arrive at

\begin{equation} G (x,a) = A_0 + \sum _{n=1}^{\infty } \, \Big(\tfrac {\sin (na)}{na} \Big)^2 \cdot A_n (x). \label{eqVI.3} \end{equation}
6.3

The series converges uniformly with respect to the variable \(a\) for all values of \(a\) provided that \(\sum _{n=1}^{\infty } \, A_n (x)\) converges. To exploit this, recall that if a series of continuous functions of a variable is uniformly convergent for all values of that variable in a closed interval, the sum is a continuous function there. Define, for \(a \neq 0\),

\begin{equation} f_n (a) = \big(\tfrac {\sin \, (n a)}{n a} \big)^2, \label{eqVI.4} \end{equation}
6.4

and \(f_n (0) =1\) when \(a=0\); then \(f_n (a)\) is a continuous function for all values of \(a\). Consequently, \(G (x,a)\) is a continuous function of \(a\), and the limit as \(a \rightarrow 0\) exists and equals the value at \(a=0\),

\begin{equation} G (x,0) = \lim _{a \rightarrow 0} \, G (x,a). \label{eqVI.5} \end{equation}
6.5

To prove that the series which defines \(G (x,a)\) converges uniformly, the following result due to Hardy is recalled:

Suppose \(a \leq x \leq b\). If \(| \omega _n (x) | {\lt}k\) and \(\sum _{n=1}^{\infty } \, | \omega _{n+1} (x) - \omega _n (x) | {\lt} k'\), where \(k,k'\) are independent of \(n\) and \(x\), and if \(\sum _{n=1}^{\infty } \, \alpha _n\) is a convergent series independent of \(x\), then \(\sum _{n=1}^{\infty } \, \alpha _n \omega _n (x)\) converges uniformly when \(a \leq x \leq b\).

In this instance the variable is \(a\) and \(\omega _n (a) = f_n (a)\) given in 6.4; clearly \(| f_n (a) | \leq 1\). It remains to note that \(\sum _{n=1}^{\infty } \, | f_{n+1} (a) - f_n (a) | {\lt} K\) with \(K\) independent of \(a\); this holds because \(f_n (a) = g (na)\) with \(g (u) = (\sin u / u)^2\), so the sum is at most the total variation of \(g\) on \((0, \infty )\), which is finite.

Hence if \(\sum _{n=0}^{\infty } \, A_n (x)\) converges, the series which defines \(G (x,a)\) converges uniformly with respect to \(a\) for all values of \(a\), and so the limit can be computed

\begin{equation} \lim _{a \rightarrow 0} \, G (x,a) = G (x,0) = A_0 + \sum _{n=1}^{\infty } \, A_n (x) = f(x). \label{eqVI.6} \end{equation}
6.6

For the proof of the following results, the well known sum is needed for \(0 {\lt} a \leq \pi \):

\begin{equation} \sum _{n=1}^{\infty } \, \tfrac {\sin ^2 (n \, a)}{n^2 \, a} = \tfrac {1}{2} (\pi -a). \label{eqVI.7} \end{equation}
6.7

□
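Both Riemann's lemma 6.1 and the sum 6.7 can be explored numerically. The sketch below is an added illustration; the choice of the convergent series \(\sum_{n\geq 1} \sin(nx)/n = \tfrac{1}{2}(\pi - x)\) on \((0, 2\pi)\) (so \(A_0 = 0\), \(a_n = 0\), \(b_n = 1/n\), with associated function 5.1 equal to \(F(x) = -\sum \sin(nx)/n^3\)) is an assumption of the sketch.

\begin{verbatim}
import numpy as np

# Truncate the series at N terms; the neglected tails are negligible here.
N = 200000
n = np.arange(1, N + 1)

def F(x):                            # Riemann's function (5.1) for this series
    return -np.sum(np.sin(n*x)/n**3)

def G(x, a):                         # second symmetric quotient (6.1)
    return (F(x + 2*a) + F(x - 2*a) - 2*F(x))/(4*a**2)

x = 1.0
print((np.pi - x)/2)                 # f(x)
for a in (0.5, 0.1, 0.02):
    print(a, G(x, a))                # approaches f(x) as a -> 0

# The auxiliary sum (6.7): sum sin^2(na)/(n^2 a) = (pi - a)/2 for 0 < a <= pi.
a = 0.7
print(np.sum(np.sin(n*a)**2/(n**2*a)), (np.pi - a)/2)
\end{verbatim}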

Lemma 6.2

It holds

\begin{equation} \sum _{n=1}^{\infty } \! \tfrac {\sin ^2 (n a)}{n^2 a} A_n (x) \! =\tfrac {1}{2} (\pi -a) A_1 (x) +\! \sum _{n=1}^{\infty } \! \Bigg(\tfrac {1}{2} (\pi - a) \! -\! \sum _{m=1}^n \! \tfrac {\sin ^2 (m a)}{m^2 a} \Bigg)\! \big(A_{n+1} (x) \! -\! A_n (x) \big). \label{eqVI.8} \end{equation}
6.8
Proof.
Since \(A_n (x) \rightarrow 0\) as \(n \rightarrow \infty \), the first series on the right telescopes

\begin{equation} \sum _{n=1}^{\infty } \, \tfrac {1}{2} (\pi -a) (A_{n+1} (x) - A_n (x) ) = - \tfrac {1}{2} (\pi -a) \, A_1 (x). \label{eqVI.9} \end{equation}
6.9

The second series can be written as

\begin{align} & - \sum _{n=1}^{\infty } \, \sum _{m=1}^n \, \Big( \tfrac {\sin ^2 (ma)}{m^2 a} \Big) \, (A_{n+1} (x) - A_n (x) )=\nonumber \\ & =- \sum _{n=2}^{\infty } \, \sum _{m=1}^{n-1} \, \Big(\tfrac {\sin ^2 (m a )}{m^2 a} \Big) A_n + \sum _{n=1}^{\infty } \, \sum _{m=1}^n \, \Big(\tfrac {\sin ^2 (ma)}{m^2 a} \Big) A_n\nonumber \\ & = \sum _{n=2}^{\infty } \, \Bigg(- \sum _{m=1}^{n-1} \, \tfrac {\sin ^2 (m a)}{m^2 a} + \sum _{m=1}^n \, \tfrac {\sin ^2 (ma)}{m^2 a} \Bigg) \, A_n (x) + \tfrac {\sin ^2 a}{a} \, A_1 (x)\nonumber \\ & = \sum _{n=2}^{\infty } \, \tfrac {\sin ^2 (n a)}{n^2 a} \, A_n (x) + \tfrac {\sin ^2 \, a}{a} \, A_1 (x) = \sum _{n=1}^{\infty } \, \tfrac {\sin ^2 (n a)}{n^2 a} \, A_n (x). \label{eqVI.10} \end{align}

Substituting 6.9 and 6.10 into the right side of 6.8, the left side is obtained.

□
Lemma 6.3

If \(a_n, b_n \rightarrow 0\) in \(A_n (x)\) then

\begin{equation} \lim _{a \rightarrow 0} \, \tfrac {F (x +2a) + F(x-2a) -2 F (x)}{ 4 a} =0 \label{eqVI11} \end{equation}
6.11

for all values of \(x\).

Proof.
It is the case by 5.1 that

\begin{equation} \tfrac {F (x + 2a) + F (x- 2 a) - 2 F(x)}{4 a} = A_0 \, a + \sum _{n=1}^{\infty } \, \tfrac {\sin ^2 (na)}{n^2 a} \, A_n (x). \label{eqVI.12} \end{equation}
6.12

Applying lemma 6.2 to the series on the right of 6.12 and using Hardy's theorem for uniform convergence, this series converges uniformly with respect to \(a\) for all \(a\) greater than or equal to zero. Moreover,

\begin{align} & \lim _{a \rightarrow 0^+} \, \displaystyle \tfrac {1}{4a} \, \big(F (x + 2a) + F (x -2a) -2 F (x) \big) = \\ & = \lim _{a \rightarrow 0^+} \, \Big[ A_0 \, a + \tfrac {1}{2} (\pi -a) A_1 (x) + \sum _{n=1}^{\infty } \, \, g_n (a) (A_{n+1} (x) - A_n (x) ) \Big],\nonumber \label{eqVI.13} \end{align}

where \(g_n (a) = \tfrac {1}{2} (\pi - a) - \sum _{m=1}^n \, \tfrac {\sin ^2 (m a)}{m^2 a}\) is the coefficient appearing in 6.8.

This limit is the value of the function when \(a=0\), and the value is zero since \(\lim _{n \rightarrow \infty } \, A_n (x) =0\). By symmetry, it can be seen that the right and left hand limits are the same, so the result is also zero when \(a \rightarrow 0^-\).

□

Suppose there are two trigonometric series satisfying the given conditions, and let the difference of these trigonometric series be

\begin{equation*} A_0 + \sum _{n=1}^{\infty } \, A_n (x) = f(x). \end{equation*}

Then \(f (x)=0\) at all points of the interval \((-\pi ,\pi )\) with a finite number of exceptions. Let \(\xi _1, \xi _2\) be a consecutive pair of these exceptional points, and let \( F(x)\) be the associated Riemann function 5.1.

Lemma 6.4

In the interval \(\xi _1 {\lt} x {\lt} \xi _2\), the function \(F (x)\) is a linear function of \(x\) if \(f (x)=0\) in this interval.

Proof.
With \(\theta =1\) or \(\theta =-1\), consider the function

\begin{equation} \phi (x) = \theta \big[ F(x) - F (\xi _1) -\tfrac {x - \xi _1} {\xi _2 - \xi _1} (F (\xi _2 ) - F (\xi _1)) \big] - \tfrac {1}{2} \, h^2 (x- \xi _1)(\xi _2 -x). \label{eq.14} \end{equation}
6.14

This is a continuous function of \(x\) on \(\xi _1 \leq x \leq \xi _2\) and it satisfies \(\phi (\xi _1) = \phi (\xi _2)=0\).

If the first term of \(\phi (x)\) is not identically zero on the interval, there will be some point \(x=c\) at which it is not zero. Pick the sign of \(\theta \) so that the first term is positive at \(c\), and then take \(h\) sufficiently small so that \(\phi (c)\) is still positive. As \(\phi (x)\) is continuous, it attains its upper bound, which must be positive since \(\phi (c) {\gt}0\). Let \(\phi (x)\) attain this upper bound at \(x= \beta \), so \(\beta \neq \xi _1\) and \(\beta \neq \xi _2\). By Riemann’s first lemma 6.1,

\begin{equation*} \lim _{a \rightarrow 0} \, \frac{\phi (\beta +a) + \phi (\beta -a) - 2 \phi (\beta )}{a^2} = h^2. \end{equation*}

However, \(\phi (\beta +a) \leq \phi (\beta )\), \(\phi (\beta -a) \leq \phi (\beta )\), so this limit must be negative or zero. Hence, by assuming the first term of \(\phi (x)\) is not everywhere zero in \((\xi _1, \xi _2)\), a contradiction has been reached, so it is zero. Consequently, \(F (x)\) is a linear function of \(x\) over \((\xi _1, \xi _2)\).

□

An immediate consequence of the next result is that a function of the type considered cannot be expressed as any trigonometric series in \((-\pi , \pi )\) other than its Fourier series.

Lemma 6.5 Riemann II

Two trigonometric series which converge and are equal at all points of the interval \((-\pi , \pi )\) with the possible exception of a finite number of points must have corresponding coefficients equal.

Proof.
Lemma 6.4 implies that the graph of \(y = F(x)\) consists of segments of straight lines under these circumstances, with the beginning and ending of each segment at an exceptional point. As stated, the series defining \(F (x)\) is uniformly convergent, hence \(F (x)\) is a continuous function of \(x\), and these segments must be connected. By Riemann’s lemma 6.3, even if \(\tau \) is an exceptional point

\begin{equation} \lim _{a \rightarrow 0} \, \tfrac {F (\tau +a) + F (\tau -a) - 2 F (\tau )}{a} =0. \label{eqVI.15} \end{equation}
6.15

This quotient in the limit is the difference of the slopes of the two segments meeting at a point whose \(x\) value is \(\tau \). Therefore, the two segments are continuous in direction, so the equation \(y = F(x)\) represents a single line, which we write as \(F (x) =mx +b\). Then it follows that \(m\) and \(b\) have the same values for all values of \(x\). Thus,

\begin{equation} \tfrac {1}{2} A_0 \, x^2 -m x -b = \sum _{n=1}^{\infty } \, \tfrac {1}{n^2} A_n (x). \label{eqVI.16} \end{equation}
6.16

The right-hand side of 6.16 is periodic with period \(2 \pi \). This means the left-hand side of this equation must be periodic with period \(2 \pi \) as well, and this implies these three results:

\begin{equation} A_0 =0, \qquad m = 0, \qquad -b = \sum _{n=1}^{\infty } \, \tfrac {A_n (x)}{n^2}. \label{eqVI.17} \end{equation}
6.17

The series 6.17 (iii) is uniformly convergent. Thus we can multiply by \(\cos (nx)\) or \(\sin (nx)\) and integrate on both sides to produce two more results

\begin{equation} \pi n^{-2} \, a_n = -b \, \int _{-\pi }^{\pi } \, \cos (nx) \, dx =0, \qquad \pi n^{-2} \, b_n = - b \int _{-\pi }^{\pi } \, \sin (nx) \, dx =0. \label{eqVI.18} \end{equation}
6.18

Therefore, all the coefficients vanish, so the two trigonometric series whose difference is \(A_0 + \sum _{n=1}^{\infty } \, A_n (x)\) have corresponding coefficients equal as required.

□

7 Uniform Convergence and Some Examples

Let \(f(t)\) be continuous in the interval \(a \leq t \leq b\). Since continuity implies uniform continuity there, the choice of \(\delta \) corresponding to any value of \(x\) in \((a,b)\) is independent of \(x\), and the upper bound of \(| f (x \pm 0)|\), that is, of \(|f(x)|\), is also independent of \(x\), so

\begin{equation} \int _0^{\pi /2} \, | \varphi _{\pm } (\theta )| \, d \theta = \int _0^{\pi /2} \, | f(x \pm 2 \theta ) - f (x \pm 0) | \, d \theta \leq \tfrac {1}{2} \, \int _{-\pi }^{\pi } \, | f(t) | \, dt + \tfrac {1}{2} \pi | f(x \pm 0) |, \label{eqVII.1} \end{equation}
7.1

and the upper bound of the last expression is independent of \(x\).

Moreover, the choice of \(m\) which makes

\begin{equation} \Bigg| \tfrac {1}{m} \int _0^{\pi /2} \, \tfrac {\sin ^2 (m \theta )}{\sin ^2 \theta } \varphi _{\pm } (\theta ) \, d \theta \Bigg| < \pi \epsilon \label{eqVII.2} \end{equation}
7.2

hold is independent of \(x\); consequently,

\begin{equation*} \tfrac {1}{m} \bigg(A_0 + \sum _{n=1}^{m-1} \, S_n (x) \bigg) \end{equation*}

approaches the limit \(f (x)\) as \(m \rightarrow \infty \) uniformly throughout \(a \leq x \leq b\).

Lemma 7.1

Let \(f (t)\) satisfy the conditions of the Riemann-Lebesgue lemma, and further let it be continuous as well as having limited total fluctuation over \((a,b)\). Then the Fourier series associated with \(f (t)\) converges uniformly to the sum \(f (x)\) at all points \(x\) for which \(a + \delta \leq x \leq b - \delta \) with \(\delta {\gt}0\).

Proof.
Let \( h(t)\) be a function defined to be equal to \(f (t)\) on \(a \leq t \leq b\) and equal to zero for \(t\) outside this interval but in \((-\pi , \pi )\). Suppose \(\alpha _n, \beta _n\) are the Fourier coefficients of \(h (t)\) and \(S_m^{(2)} (x)\) the sum of the first \(m+1\) terms of the Fourier series associated with \(h (t)\). It follows from the results above such as 7.2 that

\begin{equation} \tfrac {\alpha _0}{2} + \sum _{n=1}^{\infty } \, (\alpha _n \cos (nx) + \beta _n \sin (nx)) \label{eqVII.3} \end{equation}
7.3

is uniformly summable throughout \((a + \delta , b - \delta )\). Moreover, there is an \(x\)-independent upper bound

\begin{equation} | \alpha _n \cos (nx) + \beta _n \sin (nx) | \leq (\alpha _n^2 + \beta _n^2)^{1/2} \label{eqVII.4} \end{equation}
7.4

and, since \(h (t)\) has limited total fluctuation, this bound is \({\mathcal O} (1/n)\). It follows from Hardy’s convergence theorem that 7.3 converges uniformly to the sum \(h (x)\), which is equal to \(f (x)\) on \((a,b)\). Thus write

\begin{align*} & S_m (x) - S^{(2)}_m (x) = \\ & =\tfrac {1}{\pi } \, \int _{(b-x)/2}^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } f (x + 2 \theta ) \, d \theta + \tfrac {1}{\pi } \, \int _{(x-a)/2}^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \theta } f (x - 2 \theta ) \, d \theta . \end{align*}

Choose \(\epsilon {\gt}0\) arbitrarily and then enclose the points at which \(f (t)\) is unbounded in a set of intervals \(\delta _1, \ldots , \delta _p\) such that \(\sum _{i=1}^p \, \int _{\delta _i} \, | f (t) | \, dt {\lt} \epsilon \). Let \(C\) be the upper bound of \(|f (t)|\) outside these intervals so we have

\begin{equation} | S_m (x) - S_m^{(2)} (x) | < \big(\tfrac {2 nC} {2m+1} + 2 \epsilon \big) \, \csc (\delta ), \label{eqVII.5} \end{equation}
7.5

where the selection of \(n\) depends only on \(a,b\) and the form of \(f (t)\). By a choice of \(m\) independent of \(x\), the quantity \(|S_m (x) - S_m^{(2)} (x) |\) can be made arbitrarily small, so \(S_m (x) - S_m^{(2)} (x)\) tends to zero uniformly.

□
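As an added numerical illustration of lemma 7.1, the sup-norm error of the partial sums on a closed subinterval \([\delta, \pi - \delta]\) shrinks as \(m\) grows. The square-wave test function used here (continuous and of limited fluctuation on \((0,\pi)\)) is an assumption of this sketch, not anything from the proof.

\begin{verbatim}
import numpy as np

# Square wave: f = 1 on (0, pi), f = -1 on (-pi, 0); only odd harmonics appear,
# with Fourier series (4/pi) * sum over odd k of sin(kx)/k.
def S(x, m):
    k = np.arange(1, m + 1, 2)
    return (4/np.pi)*np.sin(np.outer(x, k)).dot(1.0/k)

delta = 0.3
x = np.linspace(delta, np.pi - delta, 2000)
for m in (10, 100, 1000):
    print(m, np.max(np.abs(S(x, m) - 1.0)))    # sup-norm error on [delta, pi - delta]
\end{verbatim}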

Let us now finish with a few examples to illustrate these ideas in a more applied form.

\({\bf 1.}\) Consider the following integral over \((0, \pi )\) which is broken up into a sum of two integrals where in the second a change of variables \(\theta = \pi - s\) is carried out

\begin{equation} \int _0^{\pi } \, \tfrac {\sin (2m+1) \theta } {\sin \, \theta } \phi (\theta ) \, d \theta = \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta } {\sin \, \theta } \phi (\theta ) \, d \theta + \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta } {\sin \theta } \phi (\pi - \theta ) \, d \theta \end{equation}
7.6

It follows by letting \(m \rightarrow \infty \) that

\begin{align} & \lim _{m \rightarrow \infty } \, \int _0^{\pi } \, \tfrac {\sin (2m+1) \theta }{\sin \, \theta } \phi (\theta ) \, d \theta = \label{eqVII.6} \\ & = \lim _{m \rightarrow \infty } \, \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta }{\sin \, \theta } \phi (\theta ) \, d \theta + \lim _{m \rightarrow \infty } \, \int _0^{\pi /2} \, \tfrac {\sin (2m+1) \theta } {\sin \theta } \phi (\pi - \theta ) \, d \theta \nonumber \\ & = \tfrac {\pi }{2} \, [ \phi (+0) + \phi (\pi -0) ]. \nonumber \end{align}

\({\bf 2.}\) Let us use the result in 7.7 to study a particular integral. Let us show that for \(a{\gt}0\)

\begin{equation} \lim _{m \rightarrow \infty } \, \int _0^{\infty } \, \tfrac {\sin (2m+1) \theta }{\sin \, \theta } \, e^{- a \theta } \, d \theta = \tfrac {\pi }{2} \coth (\tfrac {\pi }{2} a ). \label{eqVII.7} \end{equation}
7.8

Write the integral in 7.8 as an infinite sum of integrals over the subintervals \(((m-1) \pi , m \pi )\),

\begin{align*} \int _0^{\infty } \, \tfrac {\sin (2 n+1) \theta } {\sin \, \theta } e^{- a \theta } \, d \theta & = \sum _{m=1}^{\infty } \, \int _{(m-1) \pi }^{m \pi } \, \tfrac {\sin (2n+1) \theta }{\sin \theta } \, e^{-a \theta } \, d \theta \\ & = \sum _{m=1}^{\infty } \, \int _0^{\pi } \, \tfrac {\sin (2n+1) (s + (m-1) \pi )} {\sin (s + (m-1) \pi )} \, e^{-a (s + (m-1) \pi )} \, ds \\ & = \sum _{m=1}^{\infty } \, \int _0^{\pi } \, \tfrac {\sin ((2n+1) s) \cos ((2n+1) (m-1) \pi )} {\sin (s) \, \cos ((m-1) \pi )} \, e^{-a s} \, ds \, e^{-a (m-1) \pi } \\ & = \sum _{m=1}^{\infty } \, e^{- a (m-1) \pi } \int _0^{\pi } \, \tfrac {\sin ((2 n+1) s)}{\sin (s)} e^{-a s} \, ds . \end{align*}

Let \(n \rightarrow \infty \); the integral can now be evaluated by means of the result 7.7 of example 1:

\begin{align*} & \lim _{n \rightarrow \infty } \, \int _0^{\infty } \, \tfrac {\sin (2n+1) \theta }{\sin \theta } \, e^{-a \theta } \, d \theta = \\ & = \tfrac {\pi }{2} \sum _{m=1}^{\infty } \, e^{-a (m-1) \pi } \, (1 + e^{- a \pi } ) \\ & = \tfrac {\pi }{2} \sum _{m=1}^{\infty } \, (e^{-a (m-1) \pi } + e^{-a m \pi } ) = \tfrac {\pi }{2} (e^{- a \pi } +1) \cdot \sum _{m=0}^{\infty } \, e^{- a m \pi } \\ & = \tfrac {\pi }{2} \tfrac {e^{a \pi /2} + e^{- a \pi /2}} {e^{a \pi /2} - e^{-a \pi /2}} = \tfrac {\pi }{2} \, \coth (\tfrac {\pi }{2} a ). \end{align*}
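The limit 7.8 can also be confirmed numerically. The sketch below is an added check; the truncation of the infinite range at a point \(T\) where \(e^{-a\theta}\) is negligible, the midpoint rule, and the function name are all assumptions of the sketch.

\begin{verbatim}
import numpy as np

def kernel_laplace(m, a, T=40.0, K=1_000_000):
    # midpoint rule for the integral of sin((2m+1)t)/sin(t) * exp(-a t) over (0, T);
    # the neglected tail is of order exp(-a T)
    t = (np.arange(K) + 0.5)*T/K
    return np.sum(np.sin((2*m + 1)*t)/np.sin(t)*np.exp(-a*t))*T/K

a = 1.0
print(0.5*np.pi/np.tanh(0.5*np.pi*a))    # (pi/2) coth(pi a / 2), the claimed limit
for m in (5, 20, 80):
    print(m, kernel_laplace(m, a))       # approaches the value above
\end{verbatim}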

\({\bf 3.}\) Let \(s_{2n-1} (x)\) be the partial sum, for \(n=1,2,3, \ldots \),

\begin{equation} s_{2n-1} (x) = \tfrac {1}{2} + \tfrac {2}{\pi } \, \Big(\sin (\tfrac {\pi x}{b}) + \tfrac {1}{3} \sin (\tfrac {3 \pi x}{b} ) + \cdots + \tfrac {1}{2n-1} \sin \Big(\tfrac {(2n-1) \, \pi x}{b}\Big) \Big) \label{eqVII.8} \end{equation}
7.9

of the function \(f(x) =1\) when \(0 {\lt} x \leq b\) and \(f (x) =0\) when \(-b \leq x{\lt} 0\). Differentiate 7.9 with respect to \(x\) to get

\begin{equation} s_{2n-1} ' (x) = \tfrac {1}{b} \Big(2 \cos (\tfrac {\pi x}{b}) + 2 \cos (\tfrac {3 \pi x}{b}) + \cdots + 2 \cos (\tfrac {(2n-1) \pi x}{b} ) \Big). \label{eqVII.9} \end{equation}
7.10

Multiply both sides of 7.10 by \(\sin (\pi x/b)\) and apply the identity \(2 \sin (\alpha ) \cos (\beta ) \) \(= \sin (\alpha + \beta ) - \sin (\beta - \alpha )\). The sum then collapses to the form

\[ \sin (\tfrac {\pi x}{b} ) s_{2n-1}' (x) = \tfrac {1}{b} \Big(2 \sin (\tfrac {\pi x}{b}) \cos (\tfrac {\pi x}{b}) + \cdots + 2 \sin (\tfrac {\pi x}{b}) \cos (\tfrac {(2n-1) \pi x}{b} ) \Big)= \]

\begin{equation} = \! \tfrac {1}{b} \Big(\sin (\tfrac {2 \pi x}{b} ) \! +\! \sin (\tfrac {4 \pi x}{b}) \! -\! \sin (\tfrac {2 \pi x}{b}) \! +\! \sin (\tfrac {6 \pi x}{b}) \! -\! \sin (\tfrac {4 \pi x}{b} ) \! +\! \cdots \! +\! \sin (\tfrac {2 n \pi x}{b})\! -\! \sin (\tfrac {(2n-2) \pi x}{b}) \Big) \label{eqVII.10} \end{equation}
7.11
\[ = \tfrac {1}{b} \sin (\tfrac {2 \pi n x}{b} ). \]

The derivative implies the first positive value of \(x\) for which \(s_{2n-1} ' (x) =0\) is \(x_0 = b / 2n\). Hence setting \(x=x_0\) in \(s_{2n-1} (x)\) yields the following value for expression 7.9

\begin{equation} s_{2n-1} (\tfrac {b}{2n}) = \tfrac {1}{2} + \tfrac {2}{\pi } \, \Big(\sin (\tfrac {\pi }{2n}) + \tfrac {1}{3} \sin (\tfrac {3 \pi }{2n}) + \cdots + \tfrac {1}{2n-1} \sin (\tfrac {(2n-1) \pi }{2n} ) \Big). \label{eqVII.11} \end{equation}
7.12

This sum has the following interpretation. The sum in brackets equals \(\tfrac {1}{2}\) times the sum of the areas of rectangles under the graph of \(g(t)=\sin (t)/t\) with base length \(\pi /n\) and heights calculated by evaluating \(g\) at the points \((2k-1) \pi /2n\), \(k=1, \ldots , n\). Since \(g\) is a Riemann integrable function, this Riemann sum approaches the integral of \(g (t)\) from \(0\) to \(\pi \). In the limit, \(n \rightarrow \infty \),

\begin{equation} \lim _{n \rightarrow \infty } \, s_{2n-1} (\tfrac {b}{2n}) = \tfrac {1}{2} + \tfrac {1}{\pi } \, \int _0^{\pi } \, \tfrac {\sin (t)}{t} \, dt. \label{eqVII.12} \end{equation}
7.13

The right-hand side has the numerical value of about \(1.0895\).
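The value quoted in 7.13 can be reproduced with a short computation; the following is an added numerical check, in which the choice \(b = \pi\) is an immaterial assumption of the sketch.

\begin{verbatim}
import numpy as np

def s(x, n, b=np.pi):
    # partial sum (7.9) containing the odd harmonics up to 2n-1
    k = np.arange(1, 2*n, 2)
    return 0.5 + (2/np.pi)*np.sum(np.sin(k*np.pi*x/b)/k)

# Right-hand side of (7.13): 1/2 + (1/pi) * integral_0^pi (sin t / t) dt.
dt = np.pi/1_000_000
t = (np.arange(1_000_000) + 0.5)*dt
print(0.5 + np.sum(np.sin(t)/t)*dt/np.pi)     # about 1.0895

b = np.pi
for n in (10, 100, 1000):
    print(n, s(b/(2*n), n, b))                # the overshoot value approaches 1.0895
\end{verbatim}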

Bibliography

1

R.E. Edwards, Fourier Series: A Modern Introduction, 2nd ed., Springer-Verlag, New York, 1979.

2

W. Rudin, Principles of Mathematical Analysis, McGraw Hill, 3rd ed, NY, 1976.

3

W. Cheney, Analysis for Applied Mathematics, Springer-Verlag, NY, 2001.

4

E. Hewitt, K. Stromberg, Real and Abstract Analysis, Springer-Verlag, NY, 1965.

5

E.T. Whittaker, G.N. Watson, A Course of Modern Analysis, 4th ed., Cambridge University Press, Cambridge, 1973.

6

E.C. Titchmarsh, Eigenfunction Expansions Associated with Second-Order Differential Operators, Oxford, Clarendon Press, 1946.

7

D. Bleecker, G. Csordas, Basic Partial Differential Equations, International Press, Cambridge, 1995.

8

H. Weyl, Ramifications, old and new, of the eigenvalue problem, Bull. Amer. Math. Soc., 56 (1950), pp. 115–139. https://doi.org/10.1090/S0002-9904-1950-09369-0

9

S. Bochner, Summation of classical Fourier series: An application to Fourier expansions on compact Lie groups, Ann. Math., 17 (1936), pp. 345–356. https://doi.org/10.2307/1968447

10

O. Vejvoda, Partial Differential Equations: Time-Periodic Solutions, M. Nijhoff Publishers, The Hague, 1982.

11

J.W. Brown, R.V. Churchill, Fourier Series and Boundary Value Problems, 7th ed, McGraw Hill, Boston, 2008.