
[2] Basic Theory


[2.1] What is nonlinear?

In geometry, linearity refers to Euclidean objects: lines, planes, (flat) three-dimensional space, etc.--these objects appear the same no matter how we examine them. A nonlinear object, a sphere for example, looks different on different scales--when looked at closely enough it looks like a plane, and from a far enough distance it looks like a point.

In algebra, we define linearity in terms of functions that have the property f(x+y) = f(x)+f(y) and f(ax) = af(x). Nonlinear is defined as the negation of linear. This means that the result f may be out of proportion to the input x or y. The result may be more than linear, as when a diode begins to pass current; or less than linear, as when finite resources limit Malthusian population growth. Thus the fundamental simplifying tools of linear analysis are no longer available: for example, for a linear system, if we have two zeros, f(x) = 0 and f(y) = 0, then we automatically have a third zero f(x+y) = 0 (in fact there are infinitely many zeros as well, since linearity implies that f(ax+by) = 0 for any a and b). This is called the principle of superposition--it gives many solutions from a few. For nonlinear systems, each solution must be fought for (generally) with unvarying ardor!
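
A two-line check makes the contrast concrete. The following minimal sketch (assuming NumPy is available; the particular matrix and quadratic term are arbitrary illustrative choices) shows superposition holding exactly for a linear map but failing once a quadratic term is added:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    def f_linear(v):
        # linear map: f(x + y) = f(x) + f(y) holds
        return A @ v

    def f_nonlinear(v):
        # adding a quadratic term destroys superposition
        return A @ v + np.array([v[0] ** 2, 0.0])

    x = np.array([1.0, -2.0])
    y = np.array([0.5, 4.0])

    print(np.allclose(f_linear(x + y), f_linear(x) + f_linear(y)))          # True
    print(np.allclose(f_nonlinear(x + y), f_nonlinear(x) + f_nonlinear(y)))  # False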


[2.2] What is nonlinear science?

Stanislaw Ulam reportedly said (something like) "Calling a science 'nonlinear' is like calling zoology 'the study of non-human animals'." So why do we have a name that appears to be merely a negative?

Firstly, linearity is rather special: no model of a real system is truly linear. Some things are profitably studied as linear approximations to the real models--for example, the fact that Hooke's law, the linear law of elasticity (strain is proportional to stress), is approximately valid for a pendulum of small amplitude implies that its period is approximately independent of amplitude. However, as the amplitude gets large the period gets longer, a fundamental effect of the nonlinearity in the pendulum equations (see http://monet.physik.unibas.ch/~elmer/pendulum/upend.htm and [3.10]).
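
This amplitude dependence is easy to check numerically. The sketch below (assuming NumPy and SciPy are available; g and L are illustrative values) evaluates the exact pendulum period via the complete elliptic integral and compares it with the constant small-angle estimate:

    import numpy as np
    from scipy.special import ellipk   # complete elliptic integral of the first kind

    g, L = 9.81, 1.0                       # illustrative values
    T_linear = 2 * np.pi * np.sqrt(L / g)  # small-amplitude (linear) period

    for theta0 in (0.1, 1.0, 2.0, 3.0):    # amplitude in radians
        m = np.sin(theta0 / 2.0) ** 2      # SciPy's ellipk takes the parameter m = k^2
        T = 4.0 * np.sqrt(L / g) * ellipk(m)   # exact nonlinear period
        print(f"amplitude {theta0:3.1f} rad:  T = {T:6.3f} s   (linear: {T_linear:.3f} s)")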

(You might protest that quantum mechanics is the fundamental theory and that it is linear! However this is at the expense of infinite dimensionality which is just as bad or worse--and 'any' finite dimensional nonlinear model can be turned into an infinite dimensional linear one--e.g. a map x' = f(x) is equivalent to the linear integral equation often called the Perron-Frobenius equation

        p'(x) = Integral[ p(y) delta(x - f(y)) dy ]

Here p(x) is a density, which could be interpreted as the probability of finding oneself at the point x, and the Dirac-delta function effectively moves the points according to the map f to give the new density. So even a nonlinear map is equivalent to a linear operator.)
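
A crude numerical sketch of this equivalence is Ulam's method: discretize the density into bins and estimate the transfer operator as a matrix (NumPy assumed; the grid size, sample counts, and the choice of map below are arbitrary). The map f is nonlinear, yet the density update p' = P p is manifestly linear:

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: 4.0 * x * (1.0 - x)        # a nonlinear map on [0, 1]

    n_bins, n_samples = 50, 2000
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):                   # push sample points from bin i through f
        xs = rng.uniform(edges[i], edges[i + 1], n_samples)
        j = np.minimum((f(xs) * n_bins).astype(int), n_bins - 1)
        np.add.at(P, (j, i), 1.0 / n_samples)

    # Superposition holds for the density evolution, even though f is nonlinear:
    p1 = rng.random(n_bins); p1 /= p1.sum()
    p2 = rng.random(n_bins); p2 /= p2.sum()
    print(np.allclose(P @ (p1 + p2), P @ p1 + P @ p2))   # True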

Secondly, nonlinear systems have been shown to exhibit surprising and complex effects that would never be anticipated by a scientist trained only in linear techniques. Prominent examples of these include bifurcation, chaos, and solitons. Nonlinearity has its most profound effects on dynamical systems (see [2.3]).

Further, while we can enumerate the linear objects, the nonlinear ones are nondenumerable and as yet mostly unclassified. We currently have no general techniques (and very few special ones) for telling whether a particular nonlinear system will exhibit the complexity of chaos or the simplicity of order. Thus, since we cannot yet subdivide nonlinear science into proper subfields, it exists as a whole.

Nonlinear science has applications to a wide variety of fields, from mathematics, physics, biology, and chemistry, to engineering, economics, and medicine. This is one of its most exciting aspects--that it brings researchers from many disciplines together with a common language.


[2.3] What is a dynamical system?

A dynamical system consists of an abstract phase space or state space, whose coordinates describe the dynamical state at any instant; and a dynamical rule which specifies the immediate future trend of all state variables, given only the present values of those same state variables. Mathematically, a dynamical system is described by an initial value problem.

Dynamical systems are "deterministic" if there is a unique consequent to every state, and "stochastic" or "random" if there is more than one consequent chosen from some probability distribution (the "perfect" coin toss has two consequents with equal probability for each initial state). Most of nonlinear science--and everything in this FAQ--deals with deterministic systems.

A dynamical system can have discrete or continuous time. The discrete case is defined by a map, z_1 = f(z_0), that gives the state z_1 resulting from the initial state z_0 at the next time value. The continuous case is defined by a "flow", z(t) = \phi_t(z_0), which gives the state at time t, given that the state was z_0 at time 0. A smooth flow can be differentiated w.r.t. time to give a differential equation, dz/dt = F(z). In this case we call F(z) a "vector field," it gives a vector pointing in the direction of the velocity at every point in phase space.
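
A small sketch may make the two definitions concrete (SciPy's solve_ivp assumed; the damped oscillator is just an illustrative choice). The vector field F defines the flow phi_t, and sampling the flow at unit time intervals gives a discrete-time map:

    import numpy as np
    from scipy.integrate import solve_ivp

    def F(t, z):                      # vector field of a damped oscillator
        x, v = z
        return [v, -x - 0.1 * v]

    def phi(t, z0):                   # the flow: state at time t given state z0 at time 0
        sol = solve_ivp(F, (0.0, t), z0, rtol=1e-10, atol=1e-12)
        return sol.y[:, -1]

    def f(z0):                        # the associated discrete-time (time-one) map
        return phi(1.0, z0)

    z0 = [1.0, 0.0]
    print(phi(2.0, z0))               # continuous time: flow to t = 2
    print(f(f(z0)))                   # discrete time: same state, two applications of the map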


[2.4] What is phase space?

Phase space is the collection of possible states of a dynamical system. A phase space can be finite (e.g. for the ideal coin toss, we have two states heads and tails), countably infinite (e.g. state variables are integers), or uncountably infinite (e.g. state variables are real numbers). Implicit in the notion is that a particular state in phase space specifies the system completely; it is all we need to know about the system to have complete knowledge of the immediate future. Thus the phase space of the planar pendulum is two-dimensional, consisting of the position (angle) and velocity. According to Newton, specification of these two variables uniquely determines the subsequent motion of the pendulum.

Note that if we have a non-autonomous system, where the map or vector field depends explicitly on time (e.g. a model for plant growth depending on solar flux), then according to our definition of phase space, we must include time as a phase space coordinate--since one must specify a specific time (e.g. 3PM on Tuesday) to know the subsequent motion. Thus dz/dt = F(z,t) is a dynamical system on the phase space consisting of (z,t), with the addition of the new dynamics dt/dt = 1.
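
A sketch of this construction (SciPy assumed; the forced oscillator below is an arbitrary example): the non-autonomous system is rewritten on the extended phase space (x, v, s) with the extra equation ds/dt = 1, and gives the same answer as the original form:

    import numpy as np
    from scipy.integrate import solve_ivp

    omega = 2.0

    def F_nonautonomous(t, z):            # explicit time dependence through cos(omega t)
        x, v = z
        return [v, -x + np.cos(omega * t)]

    def F_extended(t, w):                 # autonomous version: time s is a state variable
        x, v, s = w
        return [v, -x + np.cos(omega * s), 1.0]

    sol1 = solve_ivp(F_nonautonomous, (0.0, 10.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    sol2 = solve_ivp(F_extended, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
    print(sol1.y[:, -1])      # (x, v) at t = 10
    print(sol2.y[:2, -1])     # the same, from the extended autonomous system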

The path in phase space traced out by a solution of an initial value problem is called an orbit or trajectory of the dynamical system. If the state variables take real values in a continuum, the orbit of a continuous-time system is a curve, while the orbit of a discrete-time system is a sequence of points.


[2.5] What is a degree of freedom?

The notion of "degrees of freedom" as it is used for Hamiltonian systems means one canonically conjugate pair: a configuration variable, q, and its conjugate momentum, p. Hamiltonian systems (sometimes mistakenly identified with the notion of conservative systems) always have such pairs of variables, and so the phase space is even-dimensional.

In the study of dissipative systems the term "degree of freedom" is often used differently, to mean a single coordinate dimension of the phase space. This can lead to confusion, and it is advisable to check which meaning of the term is intended in a particular context.

Those with a physics background generally prefer to stick with the Hamiltonian definition of the term "degree of freedom." For a more general system the proper term is "order" which is equal to the dimension of the phase space.

Note that a Hamiltonian dynamical system with N d.o.f. nominally moves in a 2N-dimensional phase space. However, if H(q,p) is time independent, then energy is conserved, and therefore the motion is really on a (2N-1)-dimensional energy surface, H(q,p) = E. Thus e.g. the planar, circular restricted 3-body problem is 2 d.o.f., and motion is on the 3D energy surface of constant "Jacobi constant." It can be reduced to a 2D area-preserving map by Poincaré section (see [2.6]).

If the Hamiltonian is time dependent, then we generally say it has an additional 1/2 degree of freedom, since this adds one dimension to the phase space. (i.e. 1 1/2 d.o.f. means three variables, q, p and t, and energy is no longer conserved).
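
For a time-independent Hamiltonian the conservation of H is easy to check numerically. A minimal sketch (SciPy assumed; the pendulum Hamiltonian H = p^2/2 - cos q has one degree of freedom, so its orbits lie on one-dimensional energy curves in the two-dimensional phase space):

    import numpy as np
    from scipy.integrate import solve_ivp

    H = lambda q, p: 0.5 * p ** 2 - np.cos(q)

    def hamilton(t, z):                 # dq/dt = dH/dp,  dp/dt = -dH/dq
        q, p = z
        return [p, -np.sin(q)]

    sol = solve_ivp(hamilton, (0.0, 50.0), [1.0, 0.0], rtol=1e-11, atol=1e-12)
    E = H(sol.y[0], sol.y[1])
    print(E.max() - E.min())            # tiny: the orbit stays on one energy surface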


[2.6] What is a map?

A map is simply a function, f, on the phase space that gives the next state, f(z) (the image), of the system given its current state, z. (Often you will find the notation z' = f(z), where the prime means the next point, not the derivative.)

Now a function must have a single value for each state, but there could be several different states that give rise to the same image. Maps that allow every state in the phase space to be accessed (onto) and which have precisely one pre-image for each state (one-to-one) are invertible. If in addition the map and its inverse are continuous (with respect to the phase space coordinate z), then it is called a homeomorphism. A homeomorphism that has at least one continuous derivative (w.r.t. z) and a continuously differentiable inverse is a diffeomorphism.

Iteration of a map means repeatedly applying the map to the consequents of the previous application. Thus we get a sequence

        z_n = f(z_{n-1}) = f(f(z_{n-2})) = ... = f^n(z_0)

This sequence is the orbit or trajectory of the dynamical system with initial condition z_0.
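
In code, iteration is just a loop. A trivial sketch (the logistic map and the values used are arbitrary):

    def iterate(f, z0, n):
        """Return the orbit z0, f(z0), f(f(z0)), ..., f^n(z0)."""
        orbit = [z0]
        for _ in range(n):
            orbit.append(f(orbit[-1]))
        return orbit

    f = lambda x: 3.8 * x * (1.0 - x)
    print(iterate(f, 0.4, 5))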


[2.7] How are maps related to flows (differential equations)?

Every differential equation gives rise to a map, the time one map, defined by advancing the flow one unit of time. This map may or may not be useful. If the differential equation contains a term or terms periodic in time, then the time T map (where T is the period) is very useful--it is an example of a Poincaré section. The time T map in a system with periodic terms is also called a stroboscopic map, since we are effectively looking at the location in phase space with a stroboscope tuned to the period T. This map is useful because it permits us to dispense with time as a phase space coordinate: the remaining coordinates describe the state completely so long as we agree to consider the same instant within every period.
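
A sketch of a stroboscopic map (SciPy assumed; the damped, periodically forced oscillator below is just an example with forcing period T = 2 pi): advancing the flow by exactly one forcing period gives the map, and its iterates are the stroboscopic samples of the orbit:

    import numpy as np
    from scipy.integrate import solve_ivp

    T = 2.0 * np.pi                      # forcing period

    def F(t, z):                         # damped oscillator with periodic forcing
        x, v = z
        return [v, -x - 0.2 * v + np.cos(t)]

    def strobe(z0):
        """The time-T (stroboscopic) map: advance the flow by one forcing period."""
        # integrating over (0, T) each time is valid because cos(t) has period T
        sol = solve_ivp(F, (0.0, T), z0, rtol=1e-10, atol=1e-12)
        return sol.y[:, -1]

    z = np.array([0.0, 0.0])
    for _ in range(5):
        z = strobe(z)
        print(z)                         # successive stroboscopic samples of the orbit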

In autonomous systems (no time-dependent terms in the equations), it may also be possible to define a Poincaré section and again reduce the phase space dimension by one. Here the Poincaré section is defined not by a fixed time interval, but by successive times when an orbit crosses a fixed surface in phase space. (Surface here means a manifold of dimension one less than the phase space dimension).

However, not every flow has a global Poincaré section, which would need to be transverse to every possible orbit (for example, any flow with an equilibrium point has no such section).

Maps arising from stroboscopic sampling or Poincaré section of a flow are necessarily invertible, because the flow has a unique solution through any point in phase space--the solution is unique both forward and backward in time. However, noninvertible maps can be relevant to differential equations: Poincaré maps are sometimes very well approximated by noninvertible maps. For example, the Henon map (x,y) -> (-y-a+x^2,bx) with small |b| is close to the logistic map, x -> -a+x^2.

It is often (though not always) possible to go backwards, from an invertible map to a differential equation having the map as its Poincaré map. This is called a suspension of the map. One can also do this procedure approximately for maps that are close to the identity, giving a flow that approximates the map to some order. This is extremely useful in bifurcation theory.

Note that any numerical solution procedure for a differential initial value problem which uses discrete time steps in the approximation is effectively a map. This is not a trivial observation; it helps explain for example why a continuous-time system which should not exhibit chaos may have numerical solutions which do--see [2.15].


[2.8] What is an attractor?

Informally, an attractor is simply a state into which a system settles (thus dissipation is needed): in the long term, a dissipative dynamical system may settle into an attractor.

Interestingly enough, there is still some controversy in the mathematics community as to an appropriate definition of this term. Most people adopt the definition

Attractor: A set in the phase space that has a neighborhood in which every point stays nearby and approaches the attractor as time goes to infinity.

Thus imagine a ball rolling inside a bowl. If we start the ball at a point in the bowl with a velocity too small to reach the edge of the bowl, then eventually the ball will settle down to the bottom of the bowl with zero velocity: thus this equilibrium point is an attractor. The set of points that eventually approach the attractor is the basin of attraction for the attractor. In our example the basin is the set of all configurations corresponding to the ball in the bowl, and for each such point all small enough velocities (it is a set in the four-dimensional phase space [2.4]).

Attractors can be simple, as in the previous example. Another example of an attractor is a limit cycle, which is a periodic orbit that is attracting (limit cycles can also be repelling). More surprisingly, attractors can be chaotic (see [2.9]) and/or strange (see [2.12]).
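
As a quick numerical sketch of an attracting limit cycle (SciPy assumed; the van der Pol oscillator is a standard textbook example, not one discussed above): two very different initial conditions settle onto the same closed orbit:

    import numpy as np
    from scipy.integrate import solve_ivp

    def vdp(t, z, mu=1.0):               # van der Pol oscillator
        x, v = z
        return [v, mu * (1.0 - x ** 2) * v - x]

    for z0 in ([0.01, 0.0], [4.0, 4.0]):
        sol = solve_ivp(vdp, (0.0, 100.0), z0, max_step=0.05)
        x_late = sol.y[0][sol.t > 80.0]      # discard the transient approach
        print(x_late.min(), x_late.max())    # both orbits end up oscillating with |x| up to about 2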

The boundary of a basin of attraction is often a very interesting object, since it distinguishes between different types of motion. Typically a basin boundary is a saddle orbit, or such an orbit and its stable manifold. A crisis is a sudden change in an attractor that occurs when the attractor collides with its basin boundary.

An alternative definition of attractor is sometimes used because there are systems that have sets that attract most, but not all, initial conditions in their neighborhood (such a phenomenon is sometimes called riddling of the basin). Thus, Milnor defines an attractor as a set for which a positive measure (probability, if you like) of initial conditions in a neighborhood are asymptotic to the set.


[2.9] What is chaos?

It has been said that "Chaos is a name for any order that produces confusion in our minds." (George Santayana, thanks to Fred Klingener for finding this). However, the mathematical definition is, roughly speaking,

Chaos: effectively unpredictable long time behavior arising in a deterministic dynamical system because of sensitivity to initial conditions.

It must be emphasized that a deterministic dynamical system is perfectly predictable given perfect knowledge of the initial condition, and is in practice always predictable in the short term. The key to long-term unpredictability is a property known as sensitivity to (or sensitive dependence on) initial conditions.

For a dynamical system to be chaotic it must have a 'large' set of initial conditions which are highly unstable. No matter how precisely you measure the initial condition in these systems, your prediction of its subsequent motion goes radically wrong after a short time. Typically (see [2.14] for one definition of 'typical'), the predictability horizon grows only logarithmically with the precision of measurement (for positive Lyapunov exponents, see [2.11]). Thus for each increase in precision by a factor of 10, say, you may only be able to predict two more time units (measured in units of the Lyapunov time, i.e. the inverse of the Lyapunov exponent).
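
A sketch of this logarithmic growth of the predictability horizon (pure Python; the logistic map at r = 4, whose Lyapunov exponent is ln 2, and the thresholds below are arbitrary choices): each hundredfold improvement in the initial precision buys only about seven more reliable iterates:

    f = lambda x: 4.0 * x * (1.0 - x)       # logistic map at r = 4

    def horizon(eps, x0=0.2, tol=0.1):
        """Iterations until two orbits started eps apart differ by more than tol."""
        x, y = x0, x0 + eps
        for n in range(1, 200):
            x, y = f(x), f(y)
            if abs(x - y) > tol:
                return n
        return None

    for eps in (1e-4, 1e-6, 1e-8, 1e-10, 1e-12):
        print(eps, horizon(eps))            # the horizon grows only like log(1/eps)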

More precisely: A map f is chaotic on a compact invariant set S if

    * f exhibits sensitive dependence on initial conditions on S (see [2.10]), and
    * f is topologically transitive on S (given any two open sets U and V that intersect S, some iterate of U intersects V).

To these two requirements Devaney adds the requirement that periodic points are dense in S, but this doesn't seem to be really in the spirit of the notion, and is probably better treated as a theorem (very difficult and very important), and not part of the definition.

Usually we would like the set S to be a large set. It is too much to hope for except in special examples that S be the entire phase space. If the dynamical system is dissipative then we hope that S is an attractor (see [2.8]) with a large basin. However, this need not be the case--we can have a chaotic saddle, an orbit that has some unstable directions as well as stable directions.

As a consequence of long-term unpredictability, time series from chaotic systems may appear irregular and disorderly. However, chaos is definitely not (as the name might suggest) complete disorder; it is disorder in a deterministic dynamical system, which is always predictable for short times.

The notion of chaos seems to conflict with the dictum attributed to Laplace: given precise knowledge of the initial conditions, it should be possible to predict the future of the universe. However, Laplace's dictum is certainly true for any deterministic system, recall [2.3]. The main consequence of chaotic motion is that given imperfect knowledge, the predictability horizon in a deterministic system is much shorter than one might expect, due to the exponential growth of errors. The belief that small errors should have small consequences was perhaps engendered by the success of Newton's mechanics applied to planetary motions. Though these happen to be regular on human historic time scales, they are chaotic on the 5 million year time scale (see e.g. "Newton's Clock" by Ivars Peterson (1993, W.H. Freeman)).


[2.10] What is sensitive dependence on initial conditions?

Consider a boulder precariously perched on the top of an ideal hill. The slightest push will cause the boulder to roll down one side of the hill or the other: the subsequent behavior depends sensitively on the direction of the push--and the push can be arbitrarily small. Of course, it is of great importance to you which direction the boulder will go if you are standing at the bottom of the hill on one side or the other!

Sensitive dependence is the equivalent behavior for every initial condition--every point in the phase space is effectively perched on the top of a hill.

More precisely, a set S exhibits sensitive dependence if there is an r such that for any epsilon > 0 and for each x in S, there is a y in S with |x - y| < epsilon but |x_n - y_n| > r for some n > 0. Thus there is a fixed distance r (say 1) such that no matter how precisely one specifies an initial state, there are nearby states that eventually get a distance r away.

Note: sensitive dependence does not require exponential growth of perturbations (positive Lyapunov exponent), but this is typical (see [2.14]) for chaotic systems. Note also that we most definitely do not require ALL nearby initial points diverge--generically [2.14] this does not happen--some nearby points may converge. (We may modify our hilltop analogy slightly and say that every point in phase space acts like a high mountain pass.) Finally, the words "initial conditions" are a bit misleading: a typical small disturbance introduced at any time will grow similarly. Think of "initial" as meaning "a time when a disturbance or error is introduced," not necessarily time zero.


[2.11] What are Lyapunov exponents?

(Thanks to Ronnie Mainieri & Fred Klingener for contributing to this answer)

The hardest thing to get right about Lyapunov exponents is the spelling of Lyapunov, which you will variously find as Liapunov, Lyapunof and even Liapunoff. Of course Lyapunov is really spelled in the Cyrillic alphabet: Ляпунов. Now that there is an ANSI standard of transliteration for Cyrillic, we expect all references to converge on the version Lyapunov.

Lyapunov was born in Russia on 6 June 1857. He was greatly influenced by Chebyshev and was a student with Markov. He was also a passionate man: Lyapunov shot himself the day his wife died, and died three days later, on 3 November 1918. According to the request in a note he left, Lyapunov was buried with his wife. [biographical data from a biography by A. T. Grigorian]

Lyapunov left us with more than just a simple note. He left a collection of papers on the equilibrium shape of rotating liquids, on probability, and on the stability of low-dimensional dynamical systems. It was from his dissertation that the notion of Lyapunov exponent emerged. Lyapunov was interested in showing how to discover if a solution to a dynamical system is stable or not for all times. The usual method of studying stability, i.e. linear stability, was not good enough, because if you waited long enough the small errors due to linearization would pile up and make the approximation invalid. Lyapunov developed concepts (now called Lyapunov Stability) to overcome these difficulties.

Lyapunov exponents measure the rate at which nearby orbits converge or diverge. There are as many Lyapunov exponents as there are dimensions in the state space of the system, but the largest is usually the most important. Roughly speaking the (maximal) Lyapunov exponent is the time constant, lambda, in the expression for the distance between two nearby orbits, exp(lambda * t). If lambda is negative, then the orbits converge in time, and the dynamical system is insensitive to initial conditions. However, if lambda is positive, then the distance between nearby orbits grows exponentially in time, and the system exhibits sensitive dependence on initial conditions.

There are basically two ways to compute Lyapunov exponents. In the first, one chooses two nearby points and evolves them in time, measuring the growth rate of the distance between them. This is useful when one has a time series, but has the disadvantage that the growth rate is really not a local effect once the points separate. A better way is to measure the growth rate of tangent vectors to a given orbit.

More precisely, consider a map f in an m dimensional phase space, and its derivative matrix Df(x). Let v be a tangent vector at the point x. Then we define a function

        L(x,v) = lim_{n -> oo} (1/n) ln | Df^n(x) v |

Now the Multiplicative Ergodic Theorem of Oseledec states that this limit exists for almost all points x and all tangent vectors v. There are at most m distinct values of L as we let v range over the tangent space. These are the Lyapunov exponents at x.
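
For a one-dimensional map the tangent-vector method reduces to averaging ln|f'(x)| along an orbit. A minimal sketch (the logistic map at r = 4 is a convenient test case, since its exponent is known to be ln 2; the transient and orbit lengths are arbitrary):

    import numpy as np

    r = 4.0
    f  = lambda x: r * x * (1.0 - x)
    df = lambda x: r * (1.0 - 2.0 * x)      # derivative of the map

    x = 0.3
    for _ in range(1000):                   # discard a transient
        x = f(x)

    n, total = 100000, 0.0
    for _ in range(n):
        total += np.log(abs(df(x)))         # growth rate of a tangent vector at x
        x = f(x)

    print(total / n, np.log(2.0))           # both approximately 0.693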

For more information on computing the exponents see


[2.12] What is a Strange Attractor?

Before Chaos (BC?), the only known attractors (see [2.8]) were fixed points, periodic orbits (limit cycles), and invariant tori (quasiperiodic orbits). In fact the famous Poincaré-Bendixson theorem implies that for a pair of first-order differential equations (a flow in the plane), the only possible attractors are fixed points and periodic orbits--there is no chaos in 2D flows.

In a famous paper in 1963, Ed Lorenz discovered that simple systems of three differential equations can have complicated attractors. The Lorenz attractor (with its butterfly wings reminding us of sensitive dependence (see [2.10])) is the "icon" of chaos http://kong.apmaths.uwo.ca/~bfraser/version1/lorenzintro.html. Lorenz showed that his attractor was chaotic, since it exhibited sensitive dependence. Moreover, his attractor is also "strange," which means that it is a fractal (see [3.2]).
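
A sketch of the Lorenz system with the classic parameters of the 1963 paper (SciPy assumed): two initial conditions differing by one part in 10^8 end up in completely different places on the attractor:

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = u
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    sol_a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], max_step=0.01)
    sol_b = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0 + 1e-8], max_step=0.01)
    print(sol_a.y[:, -1])
    print(sol_b.y[:, -1])     # very different after t = 40: sensitive dependence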

The term strange attractor was introduced by Ruelle and Takens in 1971 in their discussion of a scenario for the onset of turbulence in fluid flow. They noted that when periodic motion goes unstable (with three or more modes), the typical (see [2.14]) result will be a geometrically strange object.

Unfortunately, the term strange attractor is often used for any chaotic attractor. However, the term should be reserved for attractors that are "geometrically" strange, e.g. fractal. One can have chaotic attractors that are not strange (a trivial example would be to take a system like the cat map, which has the whole plane as a chaotic set, and add a third dimension which is simply contracting onto the plane). There are also strange, nonchaotic attractors (see Grebogi, C., et al. (1984). "Strange Attractors that are not Chaotic." Physica D 13: 261-268).


[2.13] Can computers simulate chaos?

Strictly speaking, chaos cannot occur on computers because they deal with finite sets of numbers. Thus the initial condition is always precisely known, and computer experiments are perfectly predictable, in principle. In particular, because of the finite size, every trajectory computed will eventually have to repeat (and thus be eventually periodic). On the other hand, computers can effectively simulate chaotic behavior for quite long times (just so long as the discreteness is not noticeable). In particular, if one uses floating point numbers in double precision to iterate a map on the unit square, then there are about 10^28 different points in the phase space, and one would expect the "typical" chaotic orbit to have a period of about 10^14 (this square-root-of-the-number-of-points estimate is given by Rannou for random diffeomorphisms and does not really apply to floating point operations, but nonetheless the period should be a big number). See, e.g.,
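
The eventual periodicity is easy to observe directly if the precision is made artificially coarse. A pure-Python sketch (the map, seed, and number of digits are arbitrary; rounding to four decimal digits leaves at most about 10^4 distinct states, so the recurrence comes quickly):

    def f(x, digits=4):
        # logistic-type map rounded to a few digits: a finite "phase space"
        return round(3.99 * x * (1.0 - x), digits)

    x, seen, step = 0.1234, {}, 0
    while x not in seen:
        seen[x] = step
        x = f(x)
        step += 1
    print(f"cycle of length {step - seen[x]} reached after {seen[x]} steps")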


[2.14] What is generic?

(Thanks to Hawley Rising for contributing to this answer)

Generic in dynamical systems is intended to convey "usual" or, more properly, "observable". Roughly speaking, a property is generic over a class if any system in the class can be modified ever so slightly (perturbed), into one with that property.

The formal definition is done in the language of topology: Consider the class to be a space of systems, and suppose it has a topology (some notion of a neighborhood, or an open set). A subset of this space is dense if its closure (the subset plus the limits of all sequences in the subset) is the whole space. It is open and dense if it is also an open set (union of neighborhoods). A set is countable if it can be put into 1-1 correspondence with the counting numbers. A countable intersection of open dense sets is the intersection of a countable number of open dense sets. If all such intersections in a space are also dense, then the space is called a Baire space, which basically means it is big enough. If we have such a Baire space of dynamical systems, and there is a property which is true on a countable intersection of open dense sets, then that property is generic.

If all this sounds too complicated, think of it as a precise way of defining a set which is near every system in the collection (dense), which isn't too big (need not have any "regions" where the property is true for every system). Generic is much weaker than "almost everywhere" (occurs with probability 1), in fact, it is possible to have generic properties which occur with probability zero. But it is as strong a property as one can define topologically, without having to have a property hold true in a region, or talking about measure (probability), which isn't a topological property (a property preserved by a continuous function).


[2.15] What is the minimum phase space dimension for chaos?

This is a slightly confusing topic, since the answer depends on the type of system considered. First consider a flow (or system of differential equations). In this case the Poincaré-Bendixson theorem tells us that there is no chaos in one or two-dimensional phase spaces. Chaos is possible in three-dimensional flows--standard examples such as the Lorenz equations are indeed three-dimensional, and there are mathematical 3D flows that are provably chaotic (e.g. the 'solenoid').

Note: if the flow is non-autonomous then time is a phase space coordinate, so a system with two physical variables + time becomes three-dimensional, and chaos is possible (for example, forced second-order oscillators do exhibit chaos).

For maps, it is possible to have chaos in one dimension, but only if the map is not invertible. A prominent example is the Logistic map

                    x' = f(x) = rx(1-x).

This is provably chaotic for r = 4, and for many other values of r as well (see e.g. Devaney). Note that every point below the maximum value f(1/2) has two preimages, so this map is not invertible.

For homeomorphisms, we must have at least two-dimensional phase space for chaos. This is equivalent to the flow result, since a three-dimensional flow gives rise to a two-dimensional homeomorphism by Poincaré section (see [2.7]).

Note that a numerical algorithm for a differential equation is a map, because time on the computer is necessarily discrete. Thus numerical solutions of two and even one dimensional systems of ordinary differential equations may exhibit chaos. Usually this results from choosing the size of the time step too large. For example Euler discretization of the Logistic differential equation, dx/dt = rx(1-x), is equivalent to the logistic map. See e.g. S. Ushiki, "Central difference scheme and chaos," Physica 4D (1982) 407-424.
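
A sketch of this effect (pure Python; the step sizes and parameter are arbitrary choices): the Euler scheme x_{n+1} = x_n + h r x_n (1 - x_n) converges nicely to the fixed point x = 1 for small h, but for h r = 3 it is conjugate to the logistic map at parameter 4 and the iterates become chaotic:

    def euler_orbit(h, r=3.0, x0=0.1, n=60):
        """Iterate the Euler discretization of dx/dt = r x (1 - x) with step h."""
        xs, x = [], x0
        for _ in range(n):
            x = x + h * r * x * (1.0 - x)
            xs.append(x)
        return xs

    print(euler_orbit(0.1)[-5:])   # small step: settles to the fixed point 1.0
    print(euler_orbit(1.0)[-5:])   # large step: bounded but erratic (chaotic) iterates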

