Ultraspherical Harmonics - an Introduction
Author: Seppo Nurmi, Järfälla, Sweden
A. What are these orthogonal functions, anyway?

A.1 Frequency Analysis

Orthogonal functions are mathematical devices used in what is called frequency analysis. Among such functions, the simplest and most common are the trigonometric sine and cosine functions. They represent the purest mathematical form of 'waves', and they are the orthogonal functions of the simple one-dimensional case. Sums of sine and cosine functions with suitable amplitudes and frequencies can in fact be made to reproduce curves of almost any form. This is the common method for analyzing electromagnetic signals, to take one practical example. The pure mathematical method, however, is more general: it offers a way to take an arbitrarily shaped curve apart into a sum of basic sinusoidal curves ('waves') that are easier to handle and analyze.
(A.1)   $\sin(n\pi x)$

(A.2)   $\cos(n\pi x)$
Because n is supposed to be a whole number, we get π here in front of x, and the period of the functions becomes 2/n.
(A.3)   [graph: the sine and cosine curves around x = 0]
Note that the cosine is symmetric on both sides of x = 0, while the sine is symmetric in a more intricate way: on the negative x values it is 'upside down' compared with the positive side. It is said to be antisymmetric (antisymmetry is a kind of symmetry; a function that is not symmetric at all is called non-symmetric). These two symmetries are crucial because together they can take care of all non-symmetric irregularities. Any non-symmetric curve can in fact be written as the sum of one symmetric and one antisymmetric curve, namely $f(x) = \tfrac{1}{2}[f(x)+f(-x)] + \tfrac{1}{2}[f(x)-f(-x)]$, where the first part is symmetric and the second antisymmetric. (Often these symmetry properties are called 'even' and 'odd': cosine is an even function and sine is an odd function.)

Fourier Series

The major idea in frequency analysis, or Fourier analysis, is to use periodic functions of regular form, such as sine and cosine (though other periodic functions will do as well), to build up functions of irregular form. One lets a series sum of such functions, a Fourier series, approximate a given function. A quite wide range of curve forms is allowed, both smooth and non-smooth, including square waves, sawtooth waves, and so on. This is a very general mathematical device for making irregular forms out of regular constituents. The constituents are called 'basis functions' and have a property called 'orthogonality', which guarantees a certain independence: every orthogonal basis function represents an independent additive form factor. The criterion of orthogonality is that the 'inner product' between any two functions (here denoted u and v) of a set of orthogonal functions is zero. The inner product (u, v) of the two functions is defined by integrating (a continuous summation of) the product of the functions over a given interval. If the integral is zero, the functions do not interact in any summation, and they are entirely independent constituents of the irregular form.
(A.4)   $(u,v) = \frac{1}{L}\int_a^b u(x)\,v(x)\,dx$
Here L depends on the integration interval and serves to make the result behave in a mathematically similar way for all chosen intervals.
(A.5)   $L = \frac{b-a}{2}$
For the sine and cosine functions this becomes:
(A.6)   $\frac{1}{L}\int_a^b \sin\frac{m\pi x}{L}\,\sin\frac{n\pi x}{L}\,dx = \begin{cases} 1, & m = n \\ 0, & m \neq n \end{cases}$
(A.7)   $\frac{1}{L}\int_a^b \cos\frac{m\pi x}{L}\,\cos\frac{n\pi x}{L}\,dx = \begin{cases} 1, & m = n \neq 0 \\ 0, & m \neq n \end{cases}$

(A.8)   $\frac{1}{L}\int_a^b \sin\frac{m\pi x}{L}\,\cos\frac{n\pi x}{L}\,dx = 0$
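To make the relations above concrete, here is a minimal numerical sketch in Python (the interval [0, 3], the normalization L = (b - a)/2 and the use of SciPy's quad routine are illustrative assumptions, not part of the original text):

```python
# Numerical check of the orthogonality relations for the sine/cosine basis,
# using the inner product (u, v) = (1/L) * integral of u*v over [a, b].
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 3.0               # any interval will do
L = (b - a) / 2.0             # half-length of the interval

def inner(u, v):
    value, _ = quad(lambda x: u(x) * v(x), a, b)
    return value / L

sin_n = lambda n: (lambda x: np.sin(n * np.pi * x / L))
cos_n = lambda n: (lambda x: np.cos(n * np.pi * x / L))

print(inner(sin_n(2), sin_n(3)))   # ~0: different sines do not interact
print(inner(sin_n(2), sin_n(2)))   # ~1: normalized by the factor 1/L
print(inner(sin_n(2), cos_n(5)))   # ~0: sines and cosines are orthogonal
```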
Fourier series approximation for a given function f(x) defined in the interval x = a to x = b:
(A.9)   $f(x) \approx \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right]$

(A.10)   $a_n = \frac{1}{L}\int_a^b f(x)\cos\frac{n\pi x}{L}\,dx$

(A.11)   $b_n = \frac{1}{L}\int_a^b f(x)\sin\frac{n\pi x}{L}\,dx$
Note that the series coefficients can now be expressed as inner products between the given function and the basis functions:
(A.12)   $a_n = \left(f,\ \cos\frac{n\pi x}{L}\right)$

(A.13)   $b_n = \left(f,\ \sin\frac{n\pi x}{L}\right)$
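A small numerical sketch of this point: the coefficients of a test function computed directly as inner products with the basis functions (the test function f(x) = x², the interval [-1, 1] and the use of SciPy are my own illustrative choices):

```python
# Fourier coefficients a_n, b_n of a test function, computed as the inner
# products of the function with the cosine and sine basis functions.
import numpy as np
from scipy.integrate import quad

a, b = -1.0, 1.0
L = (b - a) / 2.0

def inner(u, v):
    value, _ = quad(lambda x: u(x) * v(x), a, b)
    return value / L

f = lambda x: x**2            # an arbitrary test function

for n in range(4):
    a_n = inner(f, lambda x: np.cos(n * np.pi * x / L))
    b_n = inner(f, lambda x: np.sin(n * np.pi * x / L))
    print(n, round(a_n, 4), round(b_n, 4))
# Expected: a_0 = 2/3, a_n = 4*(-1)**n / (n*pi)**2 for n >= 1,
# and every b_n = 0 because x**2 is a symmetric (even) function.
```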
Orthogonal functions have an amazing property called (generalized) Parseval's identity. Here g(x) is one more function, with Fourier coefficients c_n and d_n. The inner product of the two functions can be expressed using their Fourier coefficients:
(A.14)   $(f,g) = \frac{a_0 c_0}{2} + \sum_{n=1}^{\infty}\left(a_n c_n + b_n d_n\right)$
The expression above corresponds to the component-wise inner product of vectors; compare with three-dimensional vectors in the usual xyz-coordinates of physical space: $(u,v) = u_x v_x + u_y v_y + u_z v_z$.
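A small numerical check of the identity, under the same assumed normalization as above; the test functions and the truncation at 50 terms are illustrative choices. The inner product computed by direct integration agrees with the component-wise sum over the Fourier coefficients:

```python
# Parseval-type check: (f, g) computed by direct integration and again
# from the Fourier coefficients of f and g.
import numpy as np
from scipy.integrate import quad

a, b = -1.0, 1.0
L = (b - a) / 2.0

def inner(u, v):
    value, _ = quad(lambda x: u(x) * v(x), a, b)
    return value / L

f = lambda x: x**2
g = lambda x: x**4

def coefficients(func, n_max=50):
    a_n = [inner(func, lambda x, n=n: np.cos(n * np.pi * x / L)) for n in range(n_max)]
    b_n = [inner(func, lambda x, n=n: np.sin(n * np.pi * x / L)) for n in range(1, n_max)]
    return np.array(a_n), np.array(b_n)

af, bf = coefficients(f)
ag, bg = coefficients(g)

direct = inner(f, g)                # (1/L) * integral of f*g over [a, b]
series = af[0] * ag[0] / 2 + np.sum(af[1:] * ag[1:]) + np.sum(bf * bg)
print(direct, series)               # both ~ 2/7 = 0.2857...
```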
Functions as vectors

Because we can give functions an inner product, and with it the property of orthogonality, which was originally a geometrical concept, we can formally (and formal is all that mathematics is) regard the functions as vectors in a kind of mathematical space. (Such a formal space is called a Hilbert space.) The basis functions of the Fourier series are the basis vectors or 'coordinate vectors'. The number of basis functions may be infinite, so there may be an infinite number of dimensions, but that does not matter; the mathematics works well anyway. A little warning is in place here: it is quite meaningless to try to imagine what such a space would look like in real life. It is nothing but a mathematical 'interface' that makes it possible to use the powerful methods developed for vector analysis in function analysis as well.
Fourier series graphically

For a square wave function the Fourier coefficients of the sine components become 0, 1, 0, 1/3, 0, 1/5, ... The cosine components are all zero if we choose the wave so that it is antisymmetric (about x = 0). The graphic below shows three curves overlaid (in different colors): a square wave function, a Fourier series approximation to it of 3 summed series terms, and a somewhat better approximation of 30 terms.
(A.15)   [graph: the square wave with its 3-term and 30-term Fourier approximations]
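A sketch of how such a graphic can be produced: partial sums of the square-wave series, using the nonzero sine coefficients 1, 1/3, 1/5, ... quoted above (the amplitude π/4, the interval and the use of matplotlib are my own choices):

```python
# Partial sums of the Fourier series of a square wave: sine coefficients
# 1/n for odd n, zero for even n, and no cosine terms (antisymmetric wave).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 1000)
square = (np.pi / 4) * np.sign(np.sin(np.pi * x))    # the target square wave

def partial_sum(x, n_terms):
    odd = np.arange(1, 2 * n_terms, 2)               # 1, 3, 5, ...
    return sum(np.sin(n * np.pi * x) / n for n in odd)

plt.plot(x, square, label="square wave")
plt.plot(x, partial_sum(x, 3), label="3 terms")
plt.plot(x, partial_sum(x, 30), label="30 terms")
plt.legend()
plt.show()
```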
For the sawtooth function the sine (antisymmetric) coefficients become 1, -1/2, 1/3, -1/4, ..., and again, because this wave is antisymmetric, the (symmetric) cosine coefficients are all zero.
(A.16)   [graph: the sawtooth wave and its Fourier series approximations]
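As a worked check of these numbers, assume the sawtooth is the straight line f(x) = (π/2)x on -1 < x < 1, repeated periodically; the amplitude π/2 is my own choice, made so that the coefficients come out exactly as listed. With L = 1 the sine coefficients are

$$ b_n = \int_{-1}^{1} \frac{\pi}{2}\,x\,\sin(n\pi x)\,dx = \frac{(-1)^{n+1}}{n}, \qquad n = 1, 2, 3, \ldots $$

which reproduces 1, -1/2, 1/3, -1/4, ..., while every cosine coefficient vanishes because the integrand $f(x)\cos(n\pi x)$ is antisymmetric about x = 0.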
The Fourier series approximation is theoretically important but often impractical: it converges quite slowly, so many more terms must be added to reach a reasonable accuracy. Other orthogonal basis functions than sine and cosine are therefore often used instead; among them, the Chebyshev polynomials are the ones that give the fastest convergence. (Your scientific pocket calculator almost certainly uses Chebyshev approximations.) Here is a trickier case, a waveform that has both symmetric and antisymmetric components.
For this waveform the Fourier series terms for even values of n happen to be zero, and n runs through the odd values only:
(A.17)   [the series terms for this waveform, nonzero for odd n only]
Below is a Fourier series approximation for n = 1 to 3, and another one for n up to 15.
(A.18)   [graph: the waveform with its Fourier approximations for n up to 3 and n up to 15]
A.2 Closed Loops

When do we need closed loops? If our goal is to find methods for analyzing closed curves, then the basis functions (curves) needed for such an analysis should be closed too. Summed together as a series, such basic closed loops can generate a closer and closer approximation to an irregular loop: take a rectangular form, a half-circle form, or any other form for that matter. This exercise with loops is really a preparation for a more difficult case: analyzing solid bodies in three (or more) dimensions. Solid bodies are by definition closed; that is, they have a closed boundary (the surface). A closed loop is a 'solid body' of one dimension, a closed area is a 'solid body' of two dimensions, and so on.
(A.19)   $r(\varphi) = \cos(n\varphi)$

(A.20)   $r(\varphi) = \sin(n\varphi)$
These generate curves with an increasing number of loop-shaped lobes: n lobes for odd values of n and 2n lobes for even values of n.
(A.21), (A.22), (A.23), (A.24)   [the looping curves for increasing values of n]
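A minimal plotting sketch of such looping curves, taking r = cos(nφ) of (A.19) with r and φ the polar radius and angle (the plotting details are my own choices); it makes the lobe counts easy to verify by eye:

```python
# Plot the looping curves r = cos(n*phi) for n = 1..4 in the plane:
# odd n gives n lobes, even n gives 2n lobes.
import numpy as np
import matplotlib.pyplot as plt

phi = np.linspace(0, 2 * np.pi, 2000)
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for n, ax in zip(range(1, 5), axes):
    r = np.cos(n * phi)
    ax.plot(r * np.cos(phi), r * np.sin(phi))   # convert (r, phi) to (x, y)
    ax.set_title(f"n = {n}")
    ax.set_aspect("equal")
    ax.axis("off")
plt.show()
```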
The looping curves can be used as Fourier basis functions to approximate closed loops.
(A.25)   [the half-circle form to be approximated]
(A.26), (A.27)   [the Fourier coefficients for the half-circle form]
where n takes the odd values 1, 3, ...; for even values of n the coefficients are zero. The series expression is
(A.28)   [the series expression for the half-circle form]
Below, the half-circle is approximated first with 7, then with 49 of these looping basis functions, multiplied by the Fourier series coefficients and added together. Especially the latter quite closely mimics the half-circle form.
(A.29)   [graph: the half-circle approximated with 7 and with 49 looping basis functions]
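The same procedure can be sketched numerically for any closed loop given as a radius function r(φ) with period 2π. The target loop below (a square-shaped form) and the simple Riemann-sum quadrature are my own illustrative choices, not the half-circle example of the text:

```python
# Fourier analysis of a closed loop r(phi): compute the coefficients of the
# looping basis functions cos(n*phi), sin(n*phi) and rebuild the loop.
import numpy as np
import matplotlib.pyplot as plt

phi = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
dphi = phi[1] - phi[0]

# Target loop: the radius of a square-shaped form (any closed form would do).
r_target = 1.0 / np.maximum(np.abs(np.cos(phi)), np.abs(np.sin(phi)))

def coefficient(basis):
    return np.sum(r_target * basis) * dphi / np.pi    # (1/pi) * integral

N = 12
r_approx = np.full_like(phi, coefficient(np.ones_like(phi) / 2))   # constant term a_0/2
for n in range(1, N + 1):
    r_approx += coefficient(np.cos(n * phi)) * np.cos(n * phi)
    r_approx += coefficient(np.sin(n * phi)) * np.sin(n * phi)

for r, style in [(r_target, "k-"), (r_approx, "r--")]:
    plt.plot(r * np.cos(phi), r * np.sin(phi), style)
plt.gca().set_aspect("equal")
plt.show()
```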
A.3 Higher dimensions

The functions above are one-dimensional and can only generate closed curves, not areas or solid bodies. A two-dimensional case would be a black-and-white picture, or a landscape where the land height is the function to be analyzed. We need a two-dimensional coordinate system, and polar coordinates might then be a natural choice in many cases. This corresponds to Fourier analysis with periodic functions; such functions could be rings of waves spreading outward from the origin, corresponding to the basis functions of chapter A.1. If we wrap the two-dimensional surface function round into a closed surface, we get a case similar to chapter A.2, but instead of loops of curve we have lobes of surface. Note that the argument space is still two-dimensional; the third 'dimension' is the analyzed function, which now becomes the radius from the origin to the surface.

Spherical Harmonics, and Ultraspherical Harmonics

In a three-dimensional case we have three argument dimensions, and one extra 'dimension' for the function. The function to be analyzed here could be pictured as something like a mass distribution: typically a body or mass in three-dimensional space, a 'planetoid', an unevenly distributed mass of rocks inside an outer surface. Alternatively, it could be a cloudy, hazy distribution of gas or dust that has no distinct outer limit. We could also take the classical 'solid body', with even mass density up to the surface, where the mass density suddenly becomes zero (an idealized body corresponding to the square wave of chapter A.1). The orthogonal basis functions in (three-dimensional) spherical coordinates are called the Spherical Harmonics. They include basis functions for radially symmetric distributions of mass ('monopole'), functions that cope with the oblongness of the body ('dipole'), and higher degrees of symmetric deviation from the simple spherical symmetry ('quadrupoles' and so on). A multiplying constant, the 'amplitude', gives how much each function contributes to the result, and summing up the functions multiplied by their amplitudes gives the final mass distribution. An irregular body can thus be approximately pictured mathematically by such a series sum, taking first the perfectly spherically symmetric sphere and then adding suitable portions of the more advanced basic forms. These more advanced form functions are a kind of 'lobes' of density distribution. The Spherical Harmonics correspond to the chapter A.2 case: the basis functions are wrapped round one period in every direction. In four dimensions, the density function could be a (dynamic, non-static and changing) energy density, taking the time dimension as one of the argument dimensions. In dimensions four and beyond, let us call the basis functions 'Ultraspherical Harmonics'.

A.4 Remarks on Quantum Physics

In modern physics, a similar mathematical scheme forms the basic framework of quantum wave mechanics. In their mathematical form, the 'wave functions' or 'wave packets' of quantum wave mechanics are functions ('vectors') in a kind of frequency analysis. Frequency analysis can be used for solving a differential equation with boundary values. Typical boundary value conditions could demand that the function and its first derivative (the 'velocity') be zero at the outer limits, the 'boundary', inside which the solution is sought.
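Before going further, here is a minimal numerical illustration of the Spherical Harmonics introduced above: a check of their orthogonality over the sphere, in the same spirit as the one-dimensional checks earlier. SciPy's sph_harm routine and the simple quadrature grid are illustrative assumptions (newer SciPy versions provide sph_harm_y instead):

```python
# Numerical orthogonality check for Spherical Harmonics Y_l^m: integrate
# conj(Y_{l'}^{m'}) * Y_l^m over the full sphere.
import numpy as np
from scipy.special import sph_harm
from scipy.integrate import trapezoid

theta = np.linspace(0, 2 * np.pi, 400)     # azimuth (SciPy's first angle)
phi = np.linspace(0, np.pi, 200)           # colatitude (SciPy's second angle)
T, P = np.meshgrid(theta, phi)

def overlap(l1, m1, l2, m2):
    integrand = np.conj(sph_harm(m1, l1, T, P)) * sph_harm(m2, l2, T, P) * np.sin(P)
    return trapezoid(trapezoid(integrand, theta, axis=1), phi)

print(overlap(2, 1, 2, 1))   # ~1: each Y_l^m is normalized
print(overlap(2, 1, 3, 1))   # ~0: different harmonics are orthogonal
```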
If we express the differential equation as a differential operator operating on a function, which is a commonly used method, the boundary value problem becomes an eigenvalue problem, which can be solved by means of matrix algebra. This makes the connection to quantum matrix mechanics.

Is there 'quanta' in pure mathematics?

Yes, of a kind! Boundary value problems generally lead to solutions in which discrete sets of values appear. They are the eigenvalues of the boundary value problem. The concept of eigenvalues originates in finding a 'simplest expression' for the problem (diagonalizing a matrix), and general mathematical methods have been developed for that purpose. This is a common setup of a problem involving differential equations (a Sturm-Liouville system). Typically in such problems not all numeric values will do as eigenvalues; only certain values, typically whole numbers, give consistent solutions, implying that quantization is of mathematical origin. Of course, when applied to physics, the boundary value conditions express some kind of physical limitation, and the actual quantum numbers from the frequency analysis become the quantum numbers of the physical problem.

Physics is the science of the measurable.

Measurability sets certain conditions that any physical theory must obey. Generally, measurable quantities must obey a mathematically linear theory. That is because measuring requires simple rules of addition and multiplication between the measured values, and it is exactly this kind of mathematical rules that defines the concept of mathematical linearity. The algebra of measurable quantities is called a 'linear algebra'; vector and matrix algebras are special cases of linear algebras.

Solving differential equations using linear algebra

The theory of orthogonal functions is a method for solving differential equations using linear algebra. Linear methods are central to theoretical physics, because the measurability of the results is the main goal of physical theories. Now, boundary conditions are necessary to ensure that the theory only involves measurable quantities, that it has a reasonable and logically acceptable 'measurable' behavior over the whole range of the coordinates. Then not all thinkable mathematical functions are acceptable as physical solutions, and quantization of certain parameters turns out to be necessary to make the functions obey the physical conditions. This is not to say that the function as such represents the measurable quantity; in quantum mechanics this is not the case. However, the wave functions that are involved in producing the measurable values still get certain boundary conditions, and thus become quantized.

To make this point of view clear, let us take it once again: the 'particle-wave duality' in quantum mechanics comes from the mathematics rather than from the physics. Because the energy distribution (a particle) is analyzed using the mathematical methods of frequency analysis, the result is a series sum of a set of basis fields in the form of periodic ('wave') functions. The wave-theory versus matrix-theory duality has its origin in searching for a solution of a differential equation with boundary value conditions by solving it as an eigenvalue problem. There is nothing mystical about the wave functions and matrices of quantum mechanics.
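A minimal numerical sketch of this point, with an assumed model problem: discretizing -u'' = λu with u(0) = u(1) = 0 by finite differences turns the boundary value problem into a matrix eigenvalue problem, and only the discrete values λ_n ≈ (nπ)² appear:

```python
# A boundary value problem as a matrix eigenvalue problem:
# -u''(x) = lambda * u(x) with u(0) = u(1) = 0, discretized on a grid.
import numpy as np

N = 400
h = 1.0 / N
# Second-difference matrix on the interior grid points; the boundary
# conditions are built in by fixing u = 0 at both ends.
main = 2.0 * np.ones(N - 1)
off = -1.0 * np.ones(N - 2)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

eigenvalues = np.linalg.eigvalsh(A)         # real and sorted in ascending order
print(eigenvalues[:4])                      # ~ pi^2, (2 pi)^2, (3 pi)^2, (4 pi)^2
print([(n * np.pi) ** 2 for n in range(1, 5)])
```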
It all comes from the mathematical method: both the periodic basis functions of frequency analysis ('wave functions') and the quantized eigenvalues (which lead to the measurable quantized values of energy and so on, and which can be calculated using matrices).

Physics and Ultraspherical Harmonics

The reasoning above tries to describe, from a mathematical point of view, why quantization necessarily must turn up in physical theories. Therefore, orthogonal functions can be an interesting field of investigation for better understanding the underlying mathematical structure of physical theories. The orthogonal basis functions are a kind of basic 'wave functions' of some kind of basic physical things, like 'elementary particles'. On these grounds it can be suggested that a better understanding of elementary-particle group theory may be gained by investigating, say, rotation transformations of 'Ultraspherical Harmonics', that is, basis functions of frequency analysis in more than three dimensions. This idea is by no means new; a great deal of work has certainly been done in the area. These remarks of mine are, as the reader well understands, "wild" speculations, and I will not go further into that area of research here. This series of net-published articles concentrates only on the mathematical solutions for Ultraspherical Harmonics.