Dynamical Systems: An Outsider’s Glance Part I

I will leave the Bengtsson & Zyczkowski book for a while to look into dynamical systems and nonlinear dynamics, a topic I covered in my talk in India.

The subject of dynamical systems is very broad (whether approached from physics or mathematics), which makes it hard for a beginner (me) to survey. It is better for me to explain why I am interested in this area. The procedure of quantization often begins with a classical (dynamical) system, and a major interest is in quantizing (a particle system on) nonlinear configuration spaces. Most of the time we treat the nonlinearities intrinsically by adopting the appropriate canonical variables to be quantized (or by quantization with constraints). Some of these nonlinear systems can be chaotic classically, such as particles on hyperbolic surfaces (see the review article “Chaos on the Pseudosphere” by Balazs and Voros or “Some Geometrical Models of Chaos” by Caroline Series), and it is often pondered how such behaviour translates into the quantum regime. My original concern in this topic is mostly how the complex topologies of hyperbolic surfaces get encoded in quantum theory. We will defer such discussions to a later time.

What is a dynamical system? There are three ingredients to a dynamical system:

  • Evolution parameter (usually time) space T;
  • State space M;
  • Evolution rule F_t: M\rightarrow M,\quad t\in T.

Note that T can either be \mathbb{Z}^+,\mathbb{Z} or I\subseteq\mathbb{R}; the former cases give discrete dynamical systems, whose evolution rule is usually a difference equation, while the latter gives continuous dynamical systems, with the evolution rule described by an ordinary differential equation. One can also discretize the state space itself to give cellular automata with their update rules (see also graph dynamical systems). Perhaps the most famous cellular automaton (CA) is Conway’s Game of Life, built upon a two-dimensional (rectangular) lattice whose update rule is given by

  • Any live cell with fewer than two live neighbours dies (underpopulation).
  • Any live cell with two or three live neighbours lives on (survival).
  • Any live cell with more than three live neighbours dies (overpopulation).
  • Any dead cell with exactly three live neighbours becomes live (regeneration).
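The four rules above can be sketched in code. Here is a minimal pure-Python update on a sparse grid; the set-of-live-cells representation and the `blinker` example are my own choices for illustration, not from any particular source:

```python
# A minimal sketch of one Game of Life update; the grid is represented
# as a set of live-cell (row, col) coordinates.

def neighbours(cell):
    """The eight cells surrounding a given (row, col) cell."""
    r, c = cell
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)}

def step(live):
    """Apply the four rules above to a set of live cells."""
    candidates = live | {n for cell in live for n in neighbours(cell)}
    new_live = set()
    for cell in candidates:
        count = len(neighbours(cell) & live)
        if cell in live and count in (2, 3):      # survival
            new_live.add(cell)
        elif cell not in live and count == 3:     # regeneration
            new_live.add(cell)
    return new_live

# The 'blinker' pattern oscillates with period 2:
blinker = {(0, -1), (0, 0), (0, 1)}
```

Applying `step` twice to the blinker returns it to its original configuration, as expected for a period-2 oscillator.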

Fascinating configurations can be constructed from such simple rules. Even simpler are the elementary one-dimensional CAs, which Wolfram has classified according to their update rules: rule 0 to rule 255 (256=2^{2^3} rules). It was proven by Cook that rule 110 is capable of universal (Turing-machine) computation. Note that in the above CAs there are only two states (live or dead). One can generalise the number of states beyond two, e.g. a CA with a three-valued state (RGB) can be used in pattern and image recognition (see e.g. https://www.sciencedirect.com/science/article/pii/S0307904X14004983).
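Wolfram's numbering scheme can be sketched as follows: the three-cell neighbourhood is read as a 3-bit number, and that bit of the rule number gives the new centre state. The periodic boundary condition in this sketch is my own choice for simplicity:

```python
# Sketch: decoding a Wolfram rule number (0-255) into an update table
# for an elementary 1D cellular automaton.

def make_rule(number):
    # Bit (4l + 2c + r) of the rule number is the new state of a cell
    # whose (left, centre, right) neighbourhood reads (l, c, r).
    table = {(l, c, r): (number >> (4 * l + 2 * c + r)) & 1
             for l in (0, 1) for c in (0, 1) for r in (0, 1)}

    def update(row):
        n = len(row)  # periodic boundary conditions (a choice)
        return [table[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
                for i in range(n)]
    return update

rule110 = make_rule(110)
```

Iterating `rule110` from a single live cell generates the intricate triangular patterns associated with Cook's universality proof.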

One can further abstract the notion of a dynamical system, as done by Giunti & Mazzola in “Dynamical Systems on Monoids: Toward a General Theory of Deterministic Systems and Motion” (see also here). We will not pursue this but instead mention two other cases normally not classed as dynamical systems.

Let me add some dynamical systems encountered in (theoretical) physics that ought to be differentiated from the classes above. First, there are systems whose equations are partial differential equations, i.e. involving differential operators not only with respect to time but also with respect to space. The most notable example is the Navier-Stokes equation that governs fluids. One should even mention relativistic systems, whose evolution parameter might not be separably identified from the spatial coordinates. Relatively recently, techniques of (conventional) dynamical systems have found their way into cosmology, via the long-term behaviour of cosmological solutions and the reduction of the full Einstein equations (PDEs) to simpler ones. See the book of Alan Coley or the book of Wainwright & Ellis. See also the articles of Boehmer & Chan and Bahamonde et al. (published version here). It is interesting to note that the author of the former article, Dr. Nyein Chan, was at Swinburne University, Sarawak Campus before. He has probably returned to his home country Myanmar (his story can be read here).

To discuss the geometry of dynamical systems, I make extensive use of the notes by Berglund (arXiv: math/0111177). To start, a dynamical system is equipped with a first-order ODE describing the dynamical equation:

\dot{x}^i =\cfrac{dx^i}{dt}=f^i(x)\quad (i=1,\cdots, n)

where the x^i are coordinates on the state space M. For any function F on M, the chain rule gives its rate of change along solutions as

\cfrac{dF}{dt}=\cfrac{\partial F}{\partial x^j}\cfrac{d x^j}{dt}= f^j\cfrac{\partial}{\partial x^j}(F)=f^j\partial_j (F)   .
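The chain-rule identity can be checked numerically on a toy system of my own choosing: for the rotation field f = (-y, x) on the plane and F = x^2 + y^2, one finds f^j \partial_j F = -y(2x) + x(2y) = 0, so F should stay (approximately) constant along numerically integrated orbits:

```python
# Toy check that dF/dt along an orbit equals f^j ∂_j F.
# Rotation field f = (-y, x); F = x^2 + y^2 has f^j ∂_j F = 0,
# so F is conserved along the flow (up to integration error).

def f(x, y):
    return (-y, x)

def flow_step(x, y, dt):
    """One small explicit Euler step of x' = f(x)."""
    fx, fy = f(x, y)
    return x + dt * fx, y + dt * fy

def F(x, y):
    return x * x + y * y

x, y = 1.0, 0.0
values = [F(x, y)]
for _ in range(10000):        # integrate up to t = 1 with dt = 1e-4
    x, y = flow_step(x, y, 1e-4)
    values.append(F(x, y))
```

The small residual drift in `values` comes purely from the Euler discretization, not from the vector-field identity itself.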

Note that f^j\partial_j can now be treated as a vector field (as one does in the usual (local) coordinate-based differential geometry). A vector field (despite its local coordinatization, whose transformation law is known) encodes geometric information about the state space it lives on. The easiest way to see this is the hairy ball theorem, exemplified by the statement that one cannot comb the hair of a coconut flat. On the other hand, one can do so on the torus (the surface of a doughnut). Technically this is due to the nonvanishing Euler characteristic of the sphere (in the case of the torus, it vanishes).

The standard example of a dynamical system comes from mechanical systems (say, one particle obeying Newton’s laws). However, the Newtonian equations for such systems are second-order ODEs. This simply implies that the mechanical state should be a pair of variables, say positions and momenta (q^i,p_i), forming the phase space \mathbb{R}^{2n}. Formulating the mechanical system as Hamiltonian mechanics, one can rewrite the Newtonian equations of motion as two sets of first-order ODEs known as Hamilton’s equations:

\dot{q}^i=\cfrac{\partial H}{\partial p_i}\quad;\qquad\dot{p}_i=-\cfrac{\partial H}{\partial q^i}\quad,

where H is the Hamiltonian of the system. It is convenient to rewrite these equations using an algebraic structure known as the Poisson bracket, defined as

\{ f,g \}=\sum_i \left(\cfrac{\partial f}{\partial q^i}\cfrac{\partial g}{\partial p_i}-\cfrac{\partial g}{\partial q^i}\cfrac{\partial f}{\partial p_i}\right) .

Then one can rewrite Hamilton’s equations as

\dot{q}^i=\{q^i,H\}\quad;\qquad\dot{p}_i=\{p_i,H\} .
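As a quick numerical sanity check of this bracket form, here is a sketch with a central-difference Poisson bracket in one degree of freedom; the harmonic-oscillator Hamiltonian H = (p^2 + q^2)/2 and the evaluation point are my own choices:

```python
# Sketch: a finite-difference Poisson bracket in one degree of freedom,
# checked against Hamilton's equations for H = (p^2 + q^2)/2.

def poisson(f, g, q, p, h=1e-5):
    """Central-difference approximation of {f, g} at the point (q, p)."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - dg_dq * df_dp

H = lambda q, p: 0.5 * (p * p + q * q)
Q = lambda q, p: q          # the coordinate functions as observables
P = lambda q, p: p

# At (q, p) = (1, 2): qdot = {q, H} = p = 2 and pdot = {p, H} = -q = -1.
qdot = poisson(Q, H, 1.0, 2.0)
pdot = poisson(P, H, 1.0, 2.0)
```

Central differences are exact for quadratic functions up to rounding, so the check is tight here.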

The convenience is that the dynamics is contained in the algebraic form of the Poisson bracket. Thus, studying the Poisson bracket structure is equivalent to studying the dynamical structure. One can further ‘geometrize’ this algebraic structure by considering the vector fields on the phase space \xi_{q^i}=\partial / \partial q^i and \xi_{p_i}=\partial / \partial p_i and writing Hamilton’s equations as

\dot{q}^i = \omega(\xi_H,\xi_{p_i})\quad;\qquad\dot{p}_i=\omega(\xi_{q^i},\xi_H) ,

where \omega=\sum_i dq^i\wedge dp_i is a covariant antisymmetric tensor known as the symplectic form and

\xi_H= \cfrac{\partial H}{\partial p_i}\cfrac{\partial}{\partial q^i} - \cfrac{\partial H}{\partial q^i}\cfrac{\partial}{\partial p_i} .

A manifold (space) equipped with such a symplectic form is known as a symplectic manifold. With the Poisson bracket replaced by the symplectic form, one can simply study the properties of the symplectic form to learn about the dynamics. Finding symmetries preserving the symplectic form has become the basis of (some) quantization procedures.

The motivation to study dynamical systems here is to learn about chaotic dynamical systems. The word chaos conjures images like the ones below (a favourite picture from the Bender & Orszag book, and a billiard)

BenderOrszag

Source: Bender & Orszag, “Advanced Mathematical Methods for Scientists and Engineers” (McGraw Hill, 1978) Fig.4.23 on page 191.

 

However, the iconic diagram one associates with a chaotic dynamical system is the two-winged Lorenz butterfly diagram (more on this later), which I thought had structure. In such a system, it is the sensitivity to initial conditions of the orbits traversed that plays the characteristic role. The orbits above are perhaps closer to a different concept, that of the ergodic hypothesis. How sensitivity to initial conditions came to be called chaos is quite interesting. A more mundane name for the whole subject is nonlinear dynamics, which was in use before the term chaos became popular.

So how does one get a useful handle on systems with such complicated behaviour? One begins by looking for simple solutions, i.e. stationary solutions. Recall F_t:M\rightarrow M and the dynamical equation \dot{x}=f(x). (Note: at times I will not write out the indices; they should be understood contextually.) A stationary solution is one that obeys f=0, i.e. it does not change with time. Of related interest are fixed points x^\ast such that F_t(x^\ast)=x^\ast; these are also called equilibrium points. Points x^\ast for which f(x^\ast)=0 are called singular points of the vector field f; they are also called stationary orbits.

We can now explore solutions nearby the equilibrium point x = x^\ast + y for which

\dot{y}=f(x^\ast + y)\simeq A y + g(y)

where

A= \left. \cfrac{\partial f}{\partial x}\right\vert_{x^\ast}\quad;\qquad \lVert g(y)\rVert\leq M\lVert y\rVert^2 ,

i.e. we linearize the equation, with the higher-order terms assumed to be bounded quadratically. In the linear case (g(y)=0), one has

\dot{y}=Ay\qquad\Rightarrow\qquad y(t) = e^{At} y(0) .

Thus, one can see that the eigenvalues a_j of A will play an important role in the long-term behaviour of the solutions.

To take advantage of this fact, one can use the projectors P_j onto the eigenspaces of the a_j to study the behaviour of equilibrium points. Construct projectors onto the sectors of eigenvalues

P^+=\sum_{\textrm{Re} (a_j)>0} P_j\quad;\qquad P^-=\sum_{\textrm{Re} (a_j)<0} P_j\quad;\qquad P^0=\sum_{\textrm{Re} (a_j)=0} P_j .

and define subspaces

E^+=P^+ M=\left\{y:\lim_{t\rightarrow -\infty} e^{At} y=0\right\} ;

E^-=P^- M=\left\{y:\lim_{t\rightarrow \infty} e^{At} y=0\right\} ;

E^0=P^0 M ,

which are called, respectively, the unstable subspace, stable subspace, and centre subspace of x^\ast; they are invariant subspaces of e^{At}. With respect to these spaces, one can classify the equilibrium points. The equilibrium point is a sink if E^+=\{0\}=E^0; a source if E^-=\{0\}=E^0; a hyperbolic point if E^0=\{0\}; and an elliptic point if E^+=\{0\}=E^-. Note that one has a richer variety of equilibrium points than in the one-dimensional case simply because there are more ‘directions’ to consider in higher dimensions (characterised by the eigenvalues of A). To illustrate this, we consider the two-dimensional case with two eigenvalues a_1,\ a_2 (borrowing a diagram from Berglund):

EquilibriumPoints

Case (a) refers to a node, in which a_1a_2>0 with real eigenvalues (arrows either pointing in or out). Case (b) is a saddle point, in which a_1a_2<0. Cases (c) and (d) happen when a_1,a_2\in\mathbb{C}, hence giving rotational (spiralling or oscillatory) motion in phase space. Cases (e) and (f) are more complicated versions of nodes where there is a degeneracy of eigenvalues (please refer to Berglund for details). At this juncture, it is appropriate to mention the related concept of basins of attraction, which appears in the chaotic dynamics literature. In particular, one has the concept of a strange attractor, arising from the fact that while F is assumed continuous, the vector field may be singular at some points, thus giving rise to space-filling structures known as fractals (see this article).
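For the two-dimensional linear case, the classification can be sketched directly from the eigenvalues of A, computed here via the trace and determinant; the string labels are just my own shorthand for the cases above:

```python
# Sketch: classifying the equilibrium at the origin of y' = A y
# for a 2x2 matrix A = [[a, b], [c, d]] from its eigenvalues.

import cmath

def classify(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    a1, a2 = (tr + disc) / 2, (tr - disc) / 2   # the eigenvalues a_1, a_2
    re1, re2 = a1.real, a2.real
    if re1 > 0 and re2 > 0:
        return "source (unstable node/focus)"
    if re1 < 0 and re2 < 0:
        return "sink (stable node/focus)"
    if re1 * re2 < 0:
        return "saddle"
    return "centre / non-hyperbolic"
```

For example, a diagonal matrix with entries -1 and -2 is a sink, one with entries 1 and -1 is a saddle, and the rotation generator [[0, -1], [1, 0]] (eigenvalues ±i) is a centre.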

To proceed beyond the linear case, one needs extra tools, namely Lyapunov functions (often taken to be quadratic forms), whose level curves phase-space trajectories approach or cross, and which help characterise the stability of equilibrium points in general. Lyapunov functions are functions V(x) such that

  • V(x)>V(x^\ast) for x in a neighbourhood of x^\ast;
  • its derivative along orbits, \dot{V}(x)=\nabla V(x)\cdot f(x), is nonpositive, showing that x^\ast is stable (and strictly negative for asymptotic stability).
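A minimal numerical illustration, for a damped oscillator of my own choosing, \dot{x}=y,\ \dot{y}=-x-y, with candidate V(x,y)=x^2+y^2, for which \dot{V}=2xy+2y(-x-y)=-2y^2\le 0:

```python
# Sketch: a numerical Lyapunov-function check for the damped system
# x' = y, y' = -x - y, with candidate V(x, y) = x^2 + y^2.
# Along orbits, V' = -2y^2 <= 0, so V should decay toward 0.

def V(x, y):
    return x * x + y * y

x, y, dt = 1.0, 0.0, 1e-3
v0 = V(x, y)
for _ in range(20000):                     # explicit Euler up to t = 20
    x, y = x + dt * y, y + dt * (-x - y)   # simultaneous update
v_final = V(x, y)
```

The eigenvalues of the linearization have real part -1/2, so by t = 20 the orbit has spiralled deep into the level curves of V, confirming asymptotic stability of the origin.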

To illustrate this, we borrow again a diagram of Berglund to show how phase space orbits approach or cross the level curves of V(x).

StabilityLyapunov

Cases (a) and (b) are respectively stable and asymptotically stable equilibrium points, where trajectories cut the level curves in the direction opposite to their normals. Case (c) is that of an unstable equilibrium point generalizing the linear (saddle) case, where orbits may approach the point in one region and move away in another. Such orbits are called hyperbolic flows. This is essentially the case of interest. Note in particular that if one reverses the arrows (time reversal), the two separate regions of stable and unstable directions are exchanged, preserving the special status of hyperbolicity.

One can now state a result known in the literature (the stable manifold theorem): given a hyperbolic flow \varphi_t on some K\subset M, a neighbourhood of a hyperbolic equilibrium point x^\ast, there exist local stable and unstable manifolds

W^s_{\textrm{loc}}(x^\ast):=\{x\in K:\lim_{t\rightarrow\infty}\varphi_t(x)=x^\ast\ \textrm{and}\ \varphi_t(x)\in K,\ \forall t\ge 0\} ;

W^u_{\textrm{loc}}(x^\ast):=\{x\in K:\lim_{t\rightarrow -\infty}\varphi_t(x)=x^\ast\ \textrm{and}\ \varphi_t(x)\in K,\ \forall t\le 0\} .
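For a concrete (linear) example of my own choosing, take the saddle \dot{x}=x,\ \dot{y}=-y at the origin: the flow is \varphi_t(x,y)=(xe^t, ye^{-t}), so W^s is the y-axis and W^u is the x-axis. A small numerical check:

```python
# Sketch: local stable/unstable manifolds for the linear saddle
# x' = x, y' = -y at the origin. The flow is (x e^t, y e^{-t}),
# so W^s is the y-axis (x = 0) and W^u is the x-axis (y = 0).

import math

def flow(x, y, t):
    """Exact flow map of the linear saddle."""
    return x * math.exp(t), y * math.exp(-t)

# A point on the stable manifold converges to the origin as t -> +inf:
xs, ys = flow(0.0, 1.0, 10.0)
# A point slightly off W^s escapes along the unstable direction:
xu, yu = flow(1e-6, 1.0, 10.0)
```

The perturbed point's x-coordinate grows by a factor e^{10} ≈ 2.2×10^4, illustrating why only points exactly on W^s satisfy the forward-limit condition in the definition above.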

For further technical details, consult Araujo & Viana, “Hyperbolic Dynamical Systems” (arXiv:0804.3192 [math.DS]) and Dyatlov, “Notes on Hyperbolic Dynamics” (arXiv:1805.11660 [math.DS]).

Examples for which hyperbolic flows are known include geodesic flows on negatively curved (hyperbolic) surfaces and billiards in Euclidean domains with concave boundaries (see Dyatlov). Hyperbolicity then became a paradigm for structurally stable ergodic systems, as discussed by Smale in the 1960s (see Smale, “Differentiable Dynamical Systems”, Bull. Amer. Math. Soc. 73 (1967) 747-817). While this is so, unknown to the mathematicians then, E. Lorenz had discovered a dynamical system that is neither hyperbolic nor structurally stable (see Lorenz, “Deterministic Nonperiodic Flow”, J. Atmosph. Sci. 20 (1963) 130-141). A new paradigm is needed to account for such systems. However, we will defer this discussion to a future post.

 
