The zeros of orthogonal polynomials, part I

Let \mu be a Borel measure on the real line. Usually we impose two conditions on this measure:
(i) the support \textnormal{supp}(\mu) contains infinitely many points,
(ii) for all k \in \{0, 1, 2, \dots \} we have \int |x|^k d\mu(x) < \infty. (This is called the finite moment condition.)

We shall see in a minute why these restrictions are needed. The following theorem says that there is a sequence of mutually orthogonal polynomials. (No surprise there: they are called orthogonal polynomials, OPs for short, and they are our main object of study.)

Theorem 1. (Existence of orthonormal polynomials) There is a unique sequence of polynomials \{p_{n}(x,\mu)\}_{n=0}^{\infty} in L^{2}(\mu) such that
(i) p_n(x,\mu) = p_n(x) = \gamma_n x^n + \dots with \gamma_n > 0,
(ii) \int p_n(x) p_m(x) d\mu(x) = \delta_{n,m}, where \delta_{n,m} = 1 if n = m and 0 otherwise.

Proof. The finite moment condition implies that every polynomial belongs to L^2(\mu), so we can apply the Gram-Schmidt process to the system \{1, x, x^2, \dots \}. Since the support of \mu contains infinitely many points, no nonzero polynomial can vanish \mu-almost everywhere, hence the monomials are linearly independent in L^2(\mu) and the process never breaks down. It is easy to see that the output satisfies conditions (i) and (ii). \Box
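As a quick illustration of this construction (a minimal numerical sketch, not part of the argument), one can take \mu to be Lebesgue measure on [-1,1], which clearly satisfies conditions (i) and (ii), and run Gram-Schmidt on the monomials. The degree N and the quadrature rule below are arbitrary choices of mine.

```python
# A minimal sketch of the Gram-Schmidt construction in the proof of Theorem 1,
# for the measure d(mu) = dx on [-1, 1]; any measure with infinite support and
# finite moments would do.
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import leggauss

N = 5
nodes, weights = leggauss(2 * N + 2)   # exact for polynomials of degree <= 4N + 3

def inner(p, q):
    """<p, q> = int_{-1}^{1} p(x) q(x) dx via Gauss-Legendre quadrature."""
    return np.sum(weights * p(nodes) * q(nodes))

ps = []                                # will hold p_0, ..., p_N
for n in range(N + 1):
    p = Polynomial.basis(n)            # the monomial x^n
    for q in ps:                       # subtract projections onto earlier p_m
        p = p - inner(p, q) * q
    p = p / np.sqrt(inner(p, p))       # normalize; leading coefficient stays > 0
    ps.append(p)

# The Gram matrix of the output should be (numerically) the identity.
print(np.round([[inner(p, q) for q in ps] for p in ps], 10))
```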

Notice that if the support of \mu were a finite set, the Gram-Schmidt process would stop after finitely many steps; this is why condition (i) is imposed. The sequence \{ p_n(x,\mu)\}_{n=0}^{\infty} is called the sequence of orthonormal polynomials with respect to \mu. Sometimes I write orthogonal instead of orthonormal when the distinction is not important, and I indicate the dependence on the measure only when necessary. Probably the most famous examples of orthogonal polynomials are the Chebyshev polynomials. They are defined as

T_n(x) := \cos(n \arccos x), \quad x \in [-1,1] .

Although it is not obvious, they are indeed polynomials: the identity \cos((n+1)\theta) + \cos((n-1)\theta) = 2\cos\theta\cos(n\theta) with x = \cos\theta gives the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), and T_0(x) = 1, T_1(x) = x. They are also orthogonal with respect to the weight d\mu(x) = \frac{1}{\pi\sqrt{1-x^2}} dx supported on the interval [-1,1]. As the title of this post indicates, we are interested in the zeros of OPs. It is easy to see that the zeros of T_n are exactly

x_{k,n} := \cos(\frac{(2k+1)\pi}{2n}), \quad k \in \{0,1,\dots,n-1\}.

[Figure: zeros of the Chebyshev polynomial for n = 10. If we project the zeros up to the unit circle, the points obtained this way divide the circle into equal arcs.]
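The zero formula is easy to check numerically; here is a small sketch of mine using numpy's built-in Chebyshev class, with n = 10 as in the figure above.

```python
# Compare the closed-form zeros x_{k,n} = cos((2k+1)pi/(2n)) with the
# roots of T_n computed from its coefficient representation.
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

n = 10
T_n = Chebyshev.basis(n)                                          # the Chebyshev polynomial T_n
closed_form = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))    # x_{k,n}, k = 0, ..., n-1
print(np.allclose(np.sort(T_n.roots()), np.sort(closed_form)))   # True
```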

What happens if we construct a measure which puts a point mass at each zero of T_n? With this in mind, define \nu_n as

\nu_n := \frac{1}{n} \sum_{k=0}^{n-1} \delta_{x_{k,n}},

where \delta_\alpha denotes the Dirac measure at \alpha. For an arbitrary x \in (-1,1) we can count the zeros not exceeding x: since x_{k,n} \le x exactly when k \ge \frac{n}{\pi}\arccos(x) - \frac{1}{2}, we obtain

\nu_n((-\infty,x]) = \frac{|\{ k : x_{k,n} \le x \}|}{n} = 1 - \frac{1}{n}\left\lceil \frac{n}{\pi} \arccos(x) - \frac{1}{2} \right\rceil ,

from which it is immediate that \nu_n((-\infty,x]) \to 1 - \frac{1}{\pi} \arccos(x). This means that the measures \nu_n converge weakly to the measure with distribution function F(x) := 1 - \frac{1}{\pi} \arccos(x), that is, to d\nu(x) = \frac{1}{\pi\sqrt{1-x^2}} dx. But this is exactly the measure for which the Chebyshev polynomials are orthogonal! In other words, if we take the zeros of T_n and put weight 1/n at each zero, the resulting measures converge to the original measure!
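This convergence is easy to observe numerically. The following sketch (my own illustration, with arbitrary choices of n and of the evaluation points) compares \nu_n((-\infty,x]) with the limit 1 - \frac{1}{\pi}\arccos(x).

```python
# Empirical distribution function of the Chebyshev zeros versus the
# limiting distribution F(x) = 1 - arccos(x)/pi.
import numpy as np

n = 2000
zeros = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))    # zeros of T_n
xs = np.linspace(-0.99, 0.99, 7)                            # evaluation points
nu_n = np.array([np.mean(zeros <= x) for x in xs])          # nu_n((-inf, x])
F = 1.0 - np.arccos(xs) / np.pi
print(np.max(np.abs(nu_n - F)))                             # of order 1/n
```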

Question. Is there a similar phenomenon for measures other than d\mu(x) = \frac{1}{\pi\sqrt{1-x^2}}dx, x \in [-1,1]?

This question will be partially answered right now. This example with the Chebyshev polynomials is a little misleading, because the measure d\mu(x) = \frac{1}{\pi\sqrt{1-x^2}}dx is a special one. For more general measures, we have the following theorem.

Theorem 2. Let \mu be a Borel probability measure supported on [-1,1], and suppose that d\mu(x) = w(x) dx, where w(x) > 0 a.e. on [-1,1]. Then for the orthonormal polynomials p_n(x,\mu) = \gamma_n x^n + \dots , we have

\lim_{n \to \infty} \gamma_{n}^{-1/n} = \frac{1}{2} ,

and the measures \nu_n defined by

\nu_n := \frac{1}{n} \sum_{k=1}^{n} \delta_{x_{k,n}} ,

where x_{1,n}, \dots, x_{n,n} are the zeros of p_n(x,\mu), converge weakly to the measure d\nu(x) = \frac{1}{\pi\sqrt{1-x^2}} dx, x \in [-1,1].

A more general theorem can be found in [WVA, Theorem 1.2]. This theorem was rather surprising to me: under the above assumptions, no matter which measure we start from, the discrete measures built from the zeros of the orthogonal polynomials converge weakly to one special measure. That measure is called the equilibrium measure of the interval [-1,1], and you can read more about it, for example, in the book [TR].
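As a numerical illustration of Theorem 2 (my own sketch, not taken from the references), consider the Legendre case d\mu(x) = \frac{1}{2} dx on [-1,1], so w(x) = \frac{1}{2} > 0 everywhere. Here the orthonormal polynomials are p_n(x) = \sqrt{2n+1}\, P_n(x), where P_n is the classical Legendre polynomial with leading coefficient \binom{2n}{n}/2^n, so both conclusions of the theorem can be checked directly.

```python
# Numerical illustration of Theorem 2 for d(mu) = (1/2) dx on [-1, 1].
import math
import numpy as np
from numpy.polynomial.legendre import Legendre

n = 100

# (a) The zero-counting measure nu_n versus the arcsine distribution.
zeros = Legendre.basis(n).roots()                  # zeros of P_n (same as those of p_n)
xs = np.linspace(-0.99, 0.99, 7)
nu_n = np.array([np.mean(zeros <= x) for x in xs])
F = 1.0 - np.arccos(xs) / np.pi                    # distribution of (1/(pi*sqrt(1-x^2))) dx
print(np.max(np.abs(nu_n - F)))                    # small

# (b) gamma_n^(-1/n) -> 1/2.  For this measure p_n = sqrt(2n+1) * P_n, and the
# leading coefficient of P_n is C(2n, n) / 2^n.
log_gamma_n = (0.5 * math.log(2 * n + 1)
               + math.log(math.comb(2 * n, n))
               - n * math.log(2))
print(math.exp(-log_gamma_n / n))                  # close to 0.5
```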

I conclude this post with a few questions. Some of them will be answered next time, but for the others, I am yet to find the answer.

Questions.
1. Is the assumption w(x) > 0 really necessary in Theorem 2?
2. What about measures supported on more general sets? For example, is there a similar theorem for measures on the unit circle?
3. The discrete measure defined with \nu_n has uniform weights. What if we use different weights?
4. For exactly which measures can we prove the analogue of Theorem 2?

References.

[TR] Thomas Ransford, Potential Theory in the Complex Plane, Cambridge University Press, Cambridge, 1995.
[WVA] Walter Van Assche, Asymptotics for Orthogonal Polynomials, Lecture Notes in Mathematics, Springer-Verlag, 1987.
