# Hierarchy of Discrete-Time Dynamical Systems, a Survey


Frédéric Geurts

Université catholique de Louvain, Department of Computing Science and Engineering, Place Sainte Barbe 2, B-1348 Louvain-la-Neuve (Belgium)

August 21, 1995

Abstract

This paper is an attempt to unify classical automata theory and dynamical systems theory. We present a notion of generalized dynamical systems, allowing us to compare properties of both types of systems. Then we establish a hierarchy of dynamical systems, including Turing machines, cellular automata and classical dynamical systems. We finish with some conclusions and motivations for future work.

1 Introduction

The theory of finite automata has always been an important field of computer science. Automata, languages and grammars have been extensively studied [16, 20, 43]. Automata recognize languages and grammars produce languages. Among others, the most studied are finite automata, pushdown automata and Turing machines. The Chomsky hierarchy for languages and grammars is also very well established: regular, context-free, context-sensitive and general grammars generate languages with the same names.

The theory of dynamical systems, chaos and attractors [8, 41] is an important field of mathematics and physics. Researchers have proposed symbolic dynamics [17] as a coarse-graining tool to study the complexity of patterns generated by dynamical systems. Subshifts are seen as simplified models of dynamical systems or as distinct dynamical systems. Some people have seen interesting relationships between dynamical systems and automata, languages and grammars [24, 23, 26, 44]. Cellular automata [42, 12] can be studied from both viewpoints, which gives important results in physics, mathematics and computer science [7, 11, 13, 14, 21, 23, 27, 40]. There are also many papers and works about universality of these different kinds of systems [4, 20, 22].

Recently, some papers have shown models strictly more powerful than Turing machines. This does not contradict the Church-Turing Thesis because these models are based on analog or real computations. Neural networks [14, 36], classical discrete-time low-dimensional maps [5], or continuous-time flows [2, 3] show interesting behaviors; they can simulate Turing machines, and they can solve more problems than the classical computational models.

To compare automata and dynamical systems more precisely, we need to unify their respective frameworks. As we have said before, several authors have proposed new bridges between them, but the

Supported by the Belgian "Fonds National de la Recherche Scientifique"

comparison has not been completely established yet. In this paper, we present a generalized version of dynamical systems, unifying in a same framework the studies of classical dynamical systems, finite automata, and cellular automata. We summarize results from different sources to establish a universality hierarchy for these systems. The paper is organized as follows: we first present the generalized definition of dynamical systems; we review some classical results in automata theory; we present the definitions of cellular automata, classical dynamical systems, and simulation; we show how to compare Turing machines, cellular automata, and dynamical systems; we establish a universality hierarchy; finally, we draw some conclusions.

2 Discrete-Time Dynamical Systems

Since we are going to talk about discrete-time dynamical systems (DTDS), it is useful to present what they are in their most abstract form. A system is an observable structured set of interconnected elements. The dynamics appears when we want to look at the evolution of the system. We choose to observe the system at separate moments in time (like a sampling). It is a discrete-time observation (opposed to continuous-time) [41]. Let us first give a generally admitted definition of discrete-time dynamical systems. To define a dynamical system, we need a state space where things happen, and a function between states describing the possible transitions, i.e. the dynamics of the system.

Definition 2.1 Discrete-time dynamical system

A DTDS is a structure (X, f), where X is a compact^1 metrizable^2 topological^3 state space and f : X → X is a continuous^4 function. System evolution is described by successive iterations of f, starting from an initial condition x_0 ∈ X:

    x_t = f^t(x_0), ∀t ≥ 0

where successive iterations are defined by:

    f^0(x) = x and f^{t+1}(x) = f(f^t(x)).

^1 Examples of compact spaces are: finite discrete spaces, cartesian products of compact spaces, closed bounded subsets of Euclidean spaces, etc.
^2 A topological space is metrizable if the topology is induced by a metric.
^3 A topological space is a couple (X, T) where X is a space and T is a family of subsets of X (called "open sets") satisfying: each union of open sets is an open set; each finite intersection of open sets is an open set; ∅ and X are open sets. If we consider the discrete topology T = P(X), every subset of X is an open set.
^4 This property is not very restrictive: a function f is continuous iff its inverse f^{-1} : P(X) → P(X) : B → f^{-1}(B) = {x | f(x) ∈ B} maps any open set to an open set. This inverse, extended to sets, always exists. In the discrete topology, every function is continuous.

If f has an inverse, it is also possible to define negative (i.e. backward) iterations. Thus, in general, we have x_t = f^t(x_0), ∀t ∈ ℤ, where successive iterations are defined by:

    f^0(x) = x; f^{t+1}(x) = f(f^t(x)) if t ≥ 0; f^{t-1}(x) = f^{-1}(f^t(x)) if t ≤ 0.

If a function is defined on X, from point to point, it is possible to extend its definition to a function on P(X), from set to set:

    ∀Y ∈ P(X), f(Y) = {y | ∃x ∈ Y : y = f(x)}.

Our goal is to relax some of these generally admitted assumptions, to get a simpler definition of dynamical system. A generalized version of the previous definition is based on relations instead of functions.
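As a quick illustration of the iteration scheme x_t = f^t(x_0) of Definition 2.1, here is a minimal Python sketch; the doubling map used below is only an illustrative choice, not one of the systems studied in this paper.

```python
# Sketch of Definition 2.1: iterate f from x0 (names are illustrative).
def iterate(f, x0, t):
    """Return x_t = f^t(x0), with f^0(x) = x."""
    x = x0
    for _ in range(t):
        x = f(x)
    return x

# Example: the doubling map on [0, 1) as state space X.
doubling = lambda x: (2 * x) % 1.0

orbit = [iterate(doubling, 0.1, t) for t in range(4)]
print(orbit)  # [0.1, 0.2, 0.4, 0.8]
```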

Definition 2.2 Generalized DTDS

A generalized DTDS is a structure (X, f, I, F, 𝒱) where X is a compact metrizable topological state space, f ⊆ X × X is a closed relation^5, I is a set of initial conditions (or a backward attractor), F is a set of final conditions (or a forward attractor), 𝒱 is a covering, and the evolution of the system is described by successive (forward or backward) iterations. With a relation instead of a function, we have the following extension to sets of points:

    ∀Y ∈ P(X), f(Y) = {y | ∃x ∈ Y : y ∈ f(x)}

and backward iterations are not a special case, since every relation is invertible, and the inverse of a closed relation is a closed relation in the product topology.
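The relational extension to sets, and the fact that every relation is invertible, can be sketched on a finite state space (the relation below is a toy example of our own choosing):

```python
# Sketch: f ⊆ X × X as a dict mapping each state to the set of its
# possible successors (nondeterminism).
def image(f, Y):
    """f(Y) = {y | exists x in Y : y in f(x)}."""
    return set().union(*(f.get(x, set()) for x in Y)) if Y else set()

def inverse(f):
    """Every relation is invertible: swap the pairs."""
    g = {}
    for x, ys in f.items():
        for y in ys:
            g.setdefault(y, set()).add(x)
    return g

f = {0: {0, 1}, 1: {2}, 2: {0}}
print(image(f, {0}))           # {0, 1}
print(image(inverse(f), {0}))  # {0, 2}
```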

Remarks.

- The assumption of compact spaces is not very strong. Many discrete spaces are compact.
- The notion of closed relation replaces the continuous function of the classical definition. See [1] for a nice treatment of this aspect.
- We have replaced the functional aspect of the dynamics by a relational one. To each state x of X, a set of states f(x) can correspond. There are two ways of interpreting the operational version of such a system, in case f(x) contains more than one point. We can consider f(x) as a whole, which makes f closer to a function. We can also see f as a nondeterministic choice between all states of f(x), which thus contains all potential images of x (class G relations are a good example [15]).
- A very useful hypothesis is monotonicity: ∀A ⊆ B ⊆ X, f(A) ⊆ f(B). We often need it when we have to compute invariants, or attractors.
- Sometimes, ∩-continuity of the relation is also useful: ∀(A_i)_{i∈ℕ}, (A_{i+1} ⊆ A_i ∀i) ⇒ (f(∩_i A_i) = ∩_i f(A_i)).

^5 This is a closed subset of X × X endowed with the product topology.

Let us notice that in the generalized DTDS, I is a set of states indicating potential initial conditions, and F is dually a set of states indicating which are the potential final states. Of course, "initial" and "final" have a sense only when speaking about finite iteration schemes. If we extend the iteration scheme to infinity, it is useful to replace these notions with infinite ones: forward and backward attractors (see [37] or [1] for precise definitions). There is a comparison between the two definitions in Table 1.

| DTDS | Generalization |
| --- | --- |
| (X, f) | (X, f, I, F, 𝒱) |
| X is a compact metrizable topological space | X is a compact metrizable topological space |
| f continuous function | f closed relation ⇒ nondeterminism |
| point-to-point | set-to-set |

Table 1: Classical vs Generalized DTDS

3 Observation and Languages

Observing a system can be done statically or dynamically. The first type of observation leads to the analysis of particular sets of states, after a certain number of evolution steps. The second type leads to the analysis of evolutions of states.

3.1 Static Observation

Our static observations are based on global sets of states. We call them languages because of the analogy we will develop later between these notions and their counterparts in classical automata theory. Before the definitions of the different languages, let us give a definition of a limit-set; this concept is very important in the following definitions. Intuitively, a state x belongs to the limit-set of a set A if and only if it belongs to infinitely many iterates of the system from A, i.e. x ∈ f^n(A) for infinitely many n.

Definition 3.1 Limit-set

The limit-set of a set A ⊆ X relative to a dynamical system (X, f) is

    ω_f[A] = ∩_{n∈ℕ} ∪_{k≥n} f^k(A).
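On a finite state space, the infinite intersection defining ω_f[A] stabilizes, so the limit-set can actually be computed. A minimal sketch, assuming a finite system given as a successor table (a toy example of ours):

```python
# For a finite state space, ω_f[A] can be computed by iterating the
# set-image f^k(A) until the sequence of sets cycles; the limit-set is
# then the union of the sets on that cycle (finite X only).
def omega(f, A):
    seen, seq, S = {}, [], frozenset(A)
    while S not in seen:
        seen[S] = len(seq)
        seq.append(S)
        S = frozenset(y for x in S for y in f[x])
    cycle = seq[seen[S]:]           # the periodic part of the set sequence
    return set().union(*cycle)

# Example: 0 -> 1 -> 2 -> 1 (transient state 0, cycle {1, 2}).
f = {0: [1], 1: [2], 2: [1]}
print(omega(f, {0}))  # {1, 2}
```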

Now, let us define the languages that are statically observable.

Definition 3.2 Generated language

The generated language L_g is the forward attractor of I:

    L_g = ω_f[I].

Definition 3.3 Recognized language

The recognized language L_r is the backward attractor of F:

    L_r = ω_{f^{-1}}[F].

Definition 3.4 Invariant language

The invariant language L_i is the intersection of the two previous languages:

    L_i = ω_f[I] ∩ ω_{f^{-1}}[F].

3.2 Dynamic Observation

When we want to observe the evolution of a system, it is sometimes useful not to consider the successive states themselves but, instead, regions of the state space X visited by successive iterations. It is thus interesting to define these regions of X independently. Therefore, the notion of covering is introduced. To each element of the covering, a symbol is associated. The set of these symbols is an alphabet.

Definition 3.5 Covering

A covering of X is a family of subsets of X whose union gives X:

    𝒱 = (V_a ⊆ X | a ∈ A) and ∪_{a∈A} V_a = X

where A is an alphabet.

We will work with finite closed almost disjoint coverings: finite families of closed subsets V_a of X such that dim(V_a) = dim(X), ∀a ∈ A, the union of which gives X, and such that dim(V_a ∩ V_b) < dim(X), ∀a ≠ b (dim is the dimension of the set/space considered) [19]. An important notion arising from the definition of coverings is the trace-language generated by a dynamical system.

Definition 3.6 Trace-language

The trace-language L_t of (X, f) with respect to the covering 𝒱 is:

    L_t = {u ∈ A* | V_u ≠ ∅}

where A* is the set of finite words over A, the sets V_u are defined by V_u = {x ∈ X | ∀i < |u|, f^i(x) ∈ V_{u_i}}, and, as u = u_0 u_1 u_2 …, u_i stands for an indexed symbol of u.

Remarks.

In the previous definition, every word in L_t is supposed to be finite. It is thus possible to add an initial condition and a final condition. The definition can also be extended to infinite strings (the forward attractor replaces the final condition) and bi-infinite strings (forward and backward attractors replace initial and final conditions).

Depending on the dimension of the covering, dim(𝒱) = max_{a∈A} dim(V_a), or on the number of its subsets, the trace-language contains different types of sequences, from sequences of symbols representing coarse-grained regions to sequences of precise states.
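For a finite system, the trace-language of Definition 3.6 can be enumerated directly from the covering. A small sketch; the rotation and the two-block covering are our own toy example:

```python
# Enumerate the words u with V_u nonempty, i.e. the itineraries
# x, f(x), f^2(x), ... read through a covering (V_a) indexed by A.
from itertools import product

def f_iter(f, x, i):
    for _ in range(i):
        x = f(x)
    return x

def trace_language(f, X, cover, max_len):
    words = set()
    for n in range(1, max_len + 1):
        for u in product(cover, repeat=n):
            # V_u = {x | f^i(x) in V_{u_i} for all i < |u|}
            if any(all(f_iter(f, x, i) in cover[a] for i, a in enumerate(u))
                   for x in X):
                words.add("".join(u))
    return words

# Rotation by 1 on {0,1,2,3}, covered by L = {0,1} and R = {2,3}.
X = range(4)
cover = {"L": {0, 1}, "R": {2, 3}}
f = lambda x: (x + 1) % 4
print(sorted(trace_language(f, X, cover, 2)))  # ['L', 'LL', 'LR', 'R', 'RL', 'RR']
```

Every length-2 word occurs here because the rotation crosses both blocks of the covering.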

4 Classical Automata as Dynamical Systems

4.1 Grammars and Chomsky Hierarchy

A grammar is a structure

    G = (V, Σ, R, S)

where V is a finite set of symbols, Σ ⊆ V is the set of terminal symbols, R ⊆ (V⁺ × V*) is a finite set of derivations or production rules, and S ⊆ V \ Σ is a set of axioms or starting symbols. From an axiom, derivations are made successively, and the language generated by the grammar is the set of words (made of terminal symbols) of finite length reachable in a finite number of derivations from any axiom:

    L(G) = {v ∈ Σ* | ∃s ∈ S : s ⇒* v}.

This language somehow corresponds to the previously defined "generated language" L_g = ω_f[I]: take I = S the set of axioms, f = R the grammar rules, and you get L(G) = L_g. Grammars and the languages obtained from them have been classified in a hierarchy by Chomsky, depending on restrictions on rules:

- Type 3, Regular: every rule has the form A → wB or A → w, with A, B ∈ V \ Σ and w ∈ Σ*.
- Type 2, Context-Free: every rule has the form A → α with A ∈ V \ Σ.
- Type 1, Context-Sensitive: every rule has the form α → β with |α| ≤ |β|, apart from S → ε.
- Type 0, General: no restriction.

Moreover, we have: Type 3 ⊆ Type 2 ⊆ Type 1 ⊆ Type 0.
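As an illustration of the generative view of grammars, here is a sketch of breadth-first derivation from an axiom; the Type-3 grammar S → aS | b (generating a*b) is our own toy example:

```python
# Breadth-first derivation: repeatedly rewrite the leftmost nonterminal
# (uppercase, by our convention) until only terminal words remain.
from collections import deque

def generate(rules, axiom, max_words):
    words, queue = [], deque([axiom])
    while queue and len(words) < max_words:
        form = queue.popleft()
        i = next((j for j, c in enumerate(form) if c.isupper()), None)
        if i is None:
            words.append(form)          # all terminals: a generated word
            continue
        for rhs in rules[form[i]]:
            queue.append(form[:i] + rhs + form[i + 1:])
    return words

rules = {"S": ["aS", "b"]}
print(generate(rules, "S", 3))  # ['b', 'ab', 'aab']
```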

Remark. It is easy to see that it is not possible to generate all languages with grammars. A very simple counting argument suffices. Each grammar generates one language. The number of different grammars (expressed in a clear well-defined meta-language) is countably infinite. Since the set of all languages is the set of all subsets of a countably infinite set (Σ*, if Σ is the alphabet), it is uncountably infinite.

4.2 Regular Languages and Finite Automata

The class of regular languages ℛ is given by the smallest set of languages such that:

- ∅, {ε} ∈ ℛ;
- {a} ∈ ℛ, ∀a ∈ Σ;
- A ∪ B, A·B, A* ∈ ℛ, for A, B ∈ ℛ.

We have already presented the notion of regular grammars: they produce languages. There is also the equivalent notion of regular expressions, denoting languages. Finally, finite automata recognize languages. Here is the definition:

    M = (Q, Σ, δ, s, F)

where Q is the finite state space, Σ is the input alphabet, δ : Q × Σ → Q is the transition function, s ∈ Q is the starting state, and F ⊆ Q is the set of accepting states. At each step, the automaton receives a symbol as input, and its internal state plus this input allow a transition to the next state. A word (or string of symbols) is accepted by the automaton if, starting from the initial state, it reaches a terminating or accepting state after consideration of the whole word. This defines the language recognized by the automaton:

    L(M) = {w ∈ Σ* | (s, w) ⊢* (q, ε) with q ∈ F}.

This kind of automaton is purely deterministic, but it can be shown that nondeterministic finite automata are equivalent, in the sense that they recognize the same class ℛ of languages.

Let us put all input symbols together in a potentially infinite string. For example, it is possible to imagine that all inputs are on a tape. If the input is supposed to be finite, we just fill the tape with white symbols B. We probably have to modify our transition function such that this white symbol does not change the state of the automaton: δ(q, B) = q. Our configuration space is thus Q × Σ^ℕ. Any transition δ(p, s) = q becomes:

    M(p, .s a_1 a_2 a_3 …) = (q, .a_1 a_2 a_3 …)

where q = δ(p, s) and .a_1 a_2 … = σ(.s a_1 a_2 …), σ being the shift map. When the transitions of the automaton are rewritten like this, it can be considered as a dynamical system acting on the configuration space Q × Σ^ℕ instead of a simple language recognizer. The resulting dynamics can be much richer than the classical version, which considers only internal states. The corresponding generalized DTDS is:

    (Q × Σ^ℕ, M, {s} × Σ^ℕ, F × {ε}, 𝒱).

Actually, if L̄(M) is the completion of L(M) into infinite strings, the smallest generalized DTDS is:

    (Q × Σ^ℕ, M, {s} × L̄(M), F × {ε}, 𝒱).
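The tape construction above can be sketched as follows; the parity automaton is a hypothetical example, and the tape is a finite string with explicit B padding:

```python
# A finite automaton acting on configurations (state, tape) in Q × Σ^N,
# the tape padded with a white symbol B that leaves the state unchanged.
def step(delta, config):
    q, tape = config
    head, rest = tape[0], tape[1:] + "B"     # consume one symbol, shift
    return (q if head == "B" else delta[(q, head)], rest)

def accepts(delta, s, F, word, pad=5):
    config = (s, word + "B" * pad)
    for _ in range(len(word)):
        config = step(delta, config)
    return config[0] in F

# Hypothetical DFA over {a, b} accepting words with an even number of a's.
delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}
print(accepts(delta, "even", {"even"}, "abab"))  # True
print(accepts(delta, "even", {"even"}, "ab"))    # False
```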

4.3 Nondeterministic Pushdown Automata

We have defined above the languages generated by context-free grammars. To recognize these languages, finite automata are not sufficient. We need to add a memory. This memory is a stack, accessible only at its top. The automaton can store information and retrieve it later, but always by using the standard stack operations: push and pop. Let us give the formal version of this kind of automaton:

    M = (Q, Σ, Γ, δ, Z, s, F)

where Q is the set of internal states, Σ is the input alphabet, Γ is the stack alphabet, Z ∈ Γ is the initial stack symbol, s ∈ Q is the initial internal state, F ⊆ Q is the set of accepting states, and δ ⊆ ((Q × Σ* × Γ*) × (Q × Γ*)) is the finite transition relation. A transition ((p, u, β), (q, γ)) ∈ δ means that the automaton can go from state p to state q if the input word begins with u and the word β is at the top of the stack. After this transition, the automaton replaces β with γ at the top of the stack. The language recognized by such an automaton is defined like this:

    L(M) = {w ∈ Σ* | (s, w, Z) ⊢* (p, ε, γ) with p ∈ F}.

This kind of automaton is nondeterministic, and here it is important to add that deterministic versions of pushdown automata are less powerful: they recognize a strict subset of the languages recognizable by nondeterministic pushdown automata.

As previously done for finite automata, we put all input symbols together and construct a configuration space containing the internal state, all input symbols, and the stack: Q × Σ^ℕ × Γ^ℕ. A transition ((p, w, β), (q, γ)) can be rewritten:

    M(p, .w a_1 a_2 …, .β b_1 b_2 …) = (q, .a_1 a_2 …, .γ b_1 b_2 …)

where q is obtained from p by δ, .a_1 a_2 … = σ^{|w|}(.w a_1 a_2 …), and the string γ replaces β in .β b_1 b_2 …. It is also possible to obtain the resulting sequence r = .γ b_1 b_2 … from s = .β b_1 b_2 … by s′ = σ^{|β|}(s), r_{i=0…|γ|−1} = γ_i, and r_{i=|γ|…} = s′_{i−|γ|}.

We have constructed a dynamical system M acting on the configuration space Q × Σ^ℕ × Γ^ℕ. The corresponding generalized DTDS is:

    (Q × Σ^ℕ × Γ^ℕ, M, {s} × Σ^ℕ × {Z}^ℕ, F × {ε} × Γ^ℕ, 𝒱).

Again, the smallest generalized DTDS is:

    (Q × Σ^ℕ × Γ^ℕ, M, {s} × L̄(M) × {Z}^ℕ, F × {ε} × Γ^ℕ, 𝒱).
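The stack dynamics can be illustrated on the classic context-free language a^n b^n; the two-phase machine below is our own toy sketch, with the transition relation folded into one function:

```python
# Pushdown sketch for a^n b^n: push one A per 'a', pop one A per 'b',
# accept when the stack is back to the initial symbol Z.
def run(word):
    state, stack = "push", ["Z"]
    for c in word:
        if state == "push" and c == "a":
            stack.append("A")                 # push one A per a
        elif c == "b" and stack[-1] == "A":
            state = "pop"
            stack.pop()                       # pop one A per b
        else:
            return False                      # no transition: reject
    return stack == ["Z"]

print([w for w in ["ab", "aabb", "aab", "ba"] if run(w)])  # ['ab', 'aabb']
```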

4.4 Turing Machines

We go directly to the languages generated by general grammars. Finite and pushdown automata are not sufficient to recognize these languages. We need a more general kind of memory, every part of which is accessible by the automaton. We consider a bi-infinite memory, or tape. Here follows the definition of a Turing machine.

Definition 4.1 Turing machine

A Turing machine is defined by a structure

    M = (Q, Γ, Σ, δ, s, B, F)

where Q is the set of internal states, Γ is the tape alphabet, Σ ⊆ Γ is the input alphabet, s ∈ Q is the initial state, F ⊆ Q is the set of accepting states, B ∈ Γ \ Σ is a white symbol for the tape, and δ : Q × Γ → Q × Γ × {−1, 0, 1} is the transition function.

A Turing machine is represented in Fig. 1.

Figure 1: Turing machine

At each transition, the internal state and the current tape symbol are considered: a new symbol is written on the tape, a new internal state is reached, and the current position of the automaton on the tape is moved to the left (−1) or to the right (1), or it does not move (0). A configuration of M includes its current internal state and the whole tape content:

    c ∈ Q × Γ^ℤ.

The language recognized by a Turing machine is defined as follows:

    L(M) = {w | (s, …BBB.wBBB…) ⊢* (q, …BBB w′ BBB…) with q ∈ F},

where the dot between B and w stands for the Turing machine's head. Here, the language is equivalent to the previously defined "recognized language" L_r. It is interesting to remark that the execution of the machine M on a word w can lead to three different behaviors: ending in an accepting state; ending in a configuration without future; infinite execution.

4.5 Turing Machines as Dynamical Systems

Let us extend the configuration space. We consider the internal state space and the tape content, which is a potentially bi-infinite string of symbols: Q × Γ^ℤ, endowed with the product topology. A transition δ(p, a) = (q, b, c) can then be rewritten:

    M(p, …g_{−3} g_{−2} g_{−1} .a g_1 g_2 g_3 …) = (q, σ^c(…g_{−3} g_{−2} g_{−1} .b g_1 g_2 g_3 …))

where σ is the shift map. Hence, M can be considered as a dynamical system acting on the configuration space Q × Γ^ℤ. The corresponding generalized DTDS is:

    (Q × Γ^ℤ, M, {s} × Γ^ℤ, F × Γ^ℤ, 𝒱).

Another generalized DTDS is thus:

    (Q × Γ^ℤ, M, {s} × L̄(M), F × Γ^ℤ, 𝒱).

The resulting dynamics can be much richer than the classical version, which considers only internal states. In Section 10, we mention the computational properties of Turing machines. In [29], the author proposes two ways to see a Turing machine as a dynamical system: with moving tape or with moving head. Our description is equivalent to his first one.
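The moving-head dynamics above can be sketched on a finite tape window; the one-rule machine below (overwrite each 0 with 1, halt on the first blank B) is a hypothetical toy, not one of the paper's machines:

```python
# One Turing machine transition rewritten as a map on configurations
# (state, tape, head position), on a finite window of the tape.
delta = {("go", "0"): ("go", "1", 1),       # write 1, move right
         ("go", "B"): ("halt", "B", 0)}     # blank reached: halt

def tm_step(config):
    p, tape, head = config
    q, b, move = delta[(p, tape[head])]
    tape = tape[:head] + b + tape[head + 1:]
    return (q, tape, head + move)

config = ("go", "B00B", 1)
while config[0] != "halt":
    config = tm_step(config)
print(config)  # ('halt', 'B11B', 3)
```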


Figure 2: Cellular automaton

5 Cellular Automata

Let us present the definition of cellular automata. We consider one-dimensional cellular automata, whose cells are arranged on a linear bi-infinite lattice. An example of cellular automaton is shown in Fig. 2.

Definition 5.1 Cellular automaton

Formally, any linear cellular automaton is a structure

    C = (G, r, g)

where

- G = {0, 1, …, k − 1} is the set of states;
- r ∈ ℕ is the radius of the neighborhood;
- g : G^{2r+1} → G is the local transition function.

A configuration of a CA is a function that specifies a state for each cell,

    x : ℤ → G,

and can be represented by a doubly-infinite sequence:

    x = (…, x_{i−n}, x_{i−n+1}, …, x_{i−1}, x_i, x_{i+1}, …, x_{i+n−1}, x_{i+n}, …).

So the set of configurations of the CA is G^ℤ. The neighborhood of a cell i ∈ ℤ is the vector of states

    (x_{i−r}, x_{i−r+1}, …, x_{i−1}, x_i, x_{i+1}, …, x_{i+r−1}, x_{i+r}) ∈ G^{2r+1}.

The global transition function of the CA,

    f : G^ℤ → G^ℤ,

specifies the next state of each cell as the local function applied to the states of the neighborhood:

    (∀i ∈ ℤ) f_i(x) = g(x_{i−r}, x_{i−r+1}, …, x_{i−1}, x_i, x_{i+1}, …, x_{i+r−1}, x_{i+r}).

The corresponding generalized DTDS is, for example:

    (G^ℤ, f, G^ℤ, G^ℤ, 𝒱).


Figure 3: Rule 129. From a random initial condition (bottom), evolution during 100 steps (top).

Remarks.

- Cellular automata are models of systems composed of many agents, where local actions lead to a global effect.
- Cellular automata are discrete in space and in time, but their definition shows that they are very close to the continuum: the configuration space itself is a continuum.

An example of rule is g : (x_{i−1}^t, x_i^t, x_{i+1}^t) → x_i^{t+1}:

| (x_{i−1}^t, x_i^t, x_{i+1}^t) | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| x_i^{t+1} | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |

which gives the binary string 10000001, equal to 129 in decimal. Its evolution (on a finite part of the configuration space) from a random initial configuration is represented in Fig. 3. There, each line is a configuration and time goes from bottom to top. Certain CA (like rule 129) exhibit a very interesting phenomenon, called "spatio-temporal chaos" [9].
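Rule 129 itself is easy to reproduce; a minimal sketch on a finite periodic window of the configuration space (the periodic boundary is our simplification of the bi-infinite lattice):

```python
# Rule 129: local rule g(l, c, r) = 1 iff the neighborhood is 000 or 111.
# The rule number's bit v gives the image of the neighborhood with value
# v = 4*l + 2*c + r, as in the table above.
RULE = 129  # binary 10000001

def ca_step(x):
    n = len(x)
    return [(RULE >> (x[(i - 1) % n] * 4 + x[i] * 2 + x[(i + 1) % n])) & 1
            for i in range(n)]

print(ca_step([0, 0, 0, 0]))  # [1, 1, 1, 1]  (000 -> 1 everywhere)
print(ca_step([0, 1, 0, 0]))  # [0, 0, 0, 1]
```

Note that one application of `ca_step` updates every cell at once, a point that matters in Section 8.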

6 Classical Dynamical Systems

As a model of classical dynamical systems, we consider here one-dimensional continuous real functions. These functions are often restricted to closed intervals, [0, 1] for example. The configuration space is thus [0, 1] ⊆ ℝ, and the functional space is C([0, 1]). Starting from an initial condition x_0 ∈ [0, 1], the successive iterations of any function f give its dynamics. Here the corresponding generalized DTDS is:

    ([0, 1], f, [0, 1], [0, 1], 𝒱).


Figure 4: Logistic map q_λ(x) = 4λx(1 − x) (λ = 1, 0.7, 0.5)

Remarks.

This kind of system is very important in nonlinear theory and the study of chaos. Three examples of such functions are represented in Fig. 4.
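The logistic maps of Fig. 4 can be iterated directly; a minimal sketch:

```python
# The logistic family of Fig. 4: q_lambda(x) = 4*lambda*x*(1 - x) on [0, 1].
def logistic(lam):
    return lambda x: 4 * lam * x * (1 - x)

def orbit(f, x0, t):
    """Return [x0, f(x0), ..., f^t(x0)]."""
    xs = [x0]
    for _ in range(t):
        xs.append(f(xs[-1]))
    return xs

print([round(v, 6) for v in orbit(logistic(1.0), 0.2, 3)])
# [0.2, 0.64, 0.9216, 0.289014]
```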

7 Comparing dynamical systems

7.1 Extrinsic vs Intrinsic Methods

In what follows, we will compare Turing machines, cellular automata and more classical dynamical systems (continuous real functions). To compare dynamical systems, two methods are possible: extrinsic and intrinsic.

Extrinsic method. This method is based on simulation. Informally speaking, a system simulates another one if its computations mimic the latter's ones. The number of steps required to realize the other one's computations can be fixed in advance or left undefined, in which case attraction to a "halting" state is preferred.

Intrinsic method. This method is based on intrinsic properties of the systems we want to compare: the languages they generate, their structure, their topological entropy, etc. For example, let us imagine we want to prove that a system A is strictly more powerful than another system B. Then, an intrinsic method is to exhibit a language or a function that A is able to compute but that B is not. The method is clearly based on the computational abilities of the systems involved.

In what follows, we chose the first method. The second one is left for future work.

7.2 Simulation

In the next sections, we will speak about simulation. This concept is important because it allows us to compare di erent dynamical systems.

A system A simulates another system B when every step in the computation (or evolution) of B can be done by one or several steps of A. In general, this number of steps is independent from the input considered. If A simulates B, then A is at least as powerful as B, but this is only a weak result. If one is able to show the converse, both systems are said to be equivalent. In this case, we have a stronger result. Several results exist on simulation (among others, [22, 23, 25, 26, 28, 27, 32, 33]). We consider here the simplest definition of simulation we have found in the literature.

Definition 7.1 Simulation

Let (X, f) and (Y, g) be two dynamical systems. Then f simulates g iff there exist an encoding function φ : Y → X and a decoding function ψ : X → Y, such that ∃m > 0 : ∀y ∈ Y, g(y) = ψ(f^m(φ(y))).

Pictorially, simulation is represented by the following diagram:

            g
       Y ------→ Y
       |         ↑
     φ |         | ψ
       ↓   f^m   |
       X ------→ X

Remarks.

- Examples of φ and ψ are respectively a one-to-many relation and a many-to-one function. If φ is a bijective function, we can consider ψ = φ^{−1}.
- The exponent m greater than 1 allows the simulation of g by f in more than one step.
- Sometimes, another definition is given for simulation, swapping ∃ and ∀: ∀y ∈ Y, ∃m > 0 : …. A stabilization method is used instead of a step-by-step simulation. This is closer to intrinsic methods. For more details about this method, see [14, 10].
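Definition 7.1 can be checked mechanically on a toy pair of systems; all of g, f, φ, ψ below are our own illustrative choices, with m = 1:

```python
# Toy check of Definition 7.1: g is +1 mod 4 on Y = {0,1,2,3};
# f is binary increment on pairs of bits X = {0,1}^2.
phi = lambda y: (y // 2, y % 2)                     # encoding φ : Y -> X
psi = lambda x: 2 * x[0] + x[1]                     # decoding ψ : X -> Y
g = lambda y: (y + 1) % 4
f = lambda x: (x[0], 1) if x[1] == 0 else (1 - x[0], 0)

# g(y) = psi(f^m(phi(y))) for every y, with m = 1: f simulates g.
assert all(psi(f(phi(y))) == g(y) for y in range(4))
print("f simulates g with m = 1")
```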

8 From Turing Machines to Cellular Automata

The first problem to solve is the following. Given a Turing machine TM = (Q × Γ^ℤ, M, {s} × Γ^ℤ, F × Γ^ℤ, 𝒱), we want to find a cellular automaton CA = (X, f, X, X, 𝒱), an encoding function φ and a decoding function ψ such that the CA simulates the Turing machine in one step (i.e. m = 1): TM(u) = ψ(CA^m(φ(u))). Equivalent results can be found in [38, 30, 27]. We present here a homomorphism between these two dynamical systems, allowing cellular automata to simulate any Turing machine.

Let us first give the general construction. Take a Turing machine M defined as a dynamical system on Q × Γ^ℤ. We want to translate any state (p, c) ∈ Q × Γ^ℤ into a state in the configuration space of a cellular automaton. Our construction is not optimal but easy to understand (see Fig. 5). We add to Q a special symbol we denote by E. Any cell of the resulting state contains a couple (symbol, state) from G = Γ × (Q ∪ {E}). The configuration space of the resulting cellular automaton is a subset of G^ℤ. More formally, the encoding function is given by:

    φ : Q × Γ^ℤ → G^ℤ : (p, c) → φ(p, c)_i = (c_i, p) if i = 0, (c_i, E) if i ≠ 0.

Figure 5: From Turing machine to cellular automaton configuration

Figure 6: CA transition equivalent to TM transition

Using this encoding, the resulting machine becomes a TM with moving head, according to [29]. One step of the Turing machine, namely a transition δ(p, u_0) = (q, a, −1), gives a change in CA configuration (Fig. 6). Let us now construct the local rule of the automaton simulating the behavior of the original Turing machine. We consider a one-dimensional CA of radius r = 1: in our CA, the cell containing a state different from E will be considered as the head; only its two neighbors have to know its current value to compute the next state of the machine. Each triple ((x, E), (y, E), (z, E)) thus remains unmodified. Triples with one couple (y, p), where p ≠ E, are modified according to the transition rule δ : Q × Γ → Q × Γ × {−1, 0, 1} of the Turing machine. The couple (y, p) is replaced by its new value (y′, q), and the shift occurs. The complete rule is represented in Table 2.


Table 2: CA-rule equivalent to Turing machine


Figure 7: From cellular automaton to Turing machine configuration

The CA dynamical system f : G^ℤ → G^ℤ resulting from the rule presented above is stable on a strict subset D of G^ℤ. The state-part of any configuration in its dynamics contains exactly one state different from E; all the others are equal to E. The only cell containing a state p ≠ E is the head of the Turing machine. The stable subset is given by:

    D = {c ∈ G^ℤ | ∃ a unique i ∈ ℤ : c_i = (s_i, p) and p ≠ E}.

Based on the following functions, with c = (c_i)_{i∈ℤ} ∈ D ⊆ G^ℤ:

    h : D → ℤ : c → h(c) = k, with ∀i, c_i = (s_i, p_i) and (i = k) ⇔ (p_i ≠ E)
    π_Q : G → Q : c_i → π_Q(c_i) = p_i, with c_i = (s_i, p_i)
    π_Γ : G → Γ : c_i → π_Γ(c_i) = s_i, with c_i = (s_i, p_i)
    π_{Γ^ℤ} : D → Γ^ℤ : c → π_{Γ^ℤ}(c) = s, with ∀i, s_i = π_Γ(c_i)

we can also give a definition of the decoding function (see Fig. 7):

    ψ : D → Q × Γ^ℤ : c → ψ(c) = (π_Q(c_{h(c)}), σ^{h(c)}(π_{Γ^ℤ}(c))).

Notice that each cell of the automaton contains a pair (tape symbol, head state). In the previous formula, h gives the position of the head, π_Q gives an internal state from a cell state, π_Γ gives a tape symbol from a cell state, and π_{Γ^ℤ} gives the whole tape, i.e. it removes the head symbols from a state of D. Then, ψ gives a Turing machine state (internal head state, tape state) from a global cellular state, shifting the head position to position 0. Hence, any Turing machine M can be simulated by a cellular automaton f, in one CA-step per TM-step:

    ∀y ∈ Q × Γ^ℤ, M(y) = ψ(f(φ(y))).
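The encoding φ and decoding ψ of this section can be sketched on a finite tape window. One deviation from the text, for simplicity: instead of shifting the head to position 0, this toy ψ returns the head index explicitly:

```python
# Each CA cell holds a couple (tape symbol, head state), with E marking
# "no head here"; the finite window and names are our own simplification.
E = "E"

def phi(p, tape, head):
    """Turing configuration -> CA configuration (list of couples)."""
    return [(s, p if i == head else E) for i, s in enumerate(tape)]

def psi(cells):
    """CA configuration -> Turing configuration (inverse of phi)."""
    head = next(i for i, (_, q) in enumerate(cells) if q != E)
    return (cells[head][1], "".join(s for s, _ in cells), head)

config = ("p", "B01B", 2)
print(phi(*config))  # [('B', 'E'), ('0', 'E'), ('1', 'p'), ('B', 'E')]
assert psi(phi(*config)) == config
```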

Are Turing machines and cellular automata equivalent? Let us show that they are not. We consider a CA rule such that every site is modified under the global function. It is impossible to simulate it in a finite amount of time with a Turing machine, which is only able to modify one symbol at a time and to shift the whole tape. For example, take a CA whose local rule gives a quiescent symbol for every input: g(x, y, z) = 0, ∀x, y, z ∈ G. There is no Turing machine able to behave like this CA, independently of any encoding and decoding functions. One iteration of a cellular automaton entails an infinity of local actions, while one iteration of a Turing machine is limited to one local action. In general, it is thus impossible to simulate any CA by a Turing machine, unless we consider:

- an infinite time for simulation; or
- a finite CA (then, we have an equivalence with Turing machines); or
- an approximation of the wanted result.

Actually, there is an intrinsic method proving that CA are strictly more powerful than TM. This is based on the fact that CA are able to solve the Halting problem, which TM are not able to solve. The proof appears in [14].

9 From Cellular Automata to Real Functions

The second problem to solve is the following. Given a cellular automaton CA = (G^ℤ, f, G^ℤ, G^ℤ, 𝒱), we want to find a continuous real function CRF = ([0, 1], F, [0, 1], [0, 1], 𝒱), an encoding function φ and a decoding function ψ such that the continuous real function simulates the cellular automaton in one step (i.e. m = 1): CA(u) = ψ(CRF^m(φ(u))). The results of this section mainly come from [33]. Another approach is described in [4]. Here we work with cellular automata spaces and reals from the interval [0, 1], or more precisely, from a subset of [0, 1] which is a Cantor set. Every configuration of the cellular space G^ℤ can be transformed into a state of G^ℕ:

    …c_{−2} c_{−1} .c_0 c_1 c_2 … → .c_0 c_{−1} c_1 c_{−2} c_2 …

where G = {0, 1, …, k − 1}. This transformation φ_1 is a homeomorphism between G^ℤ and G^ℕ. Hence, we consider infinite sequences in what follows. Every sequence of G^ℕ can be transformed into a real number, through the following encoding function:

    φ_2(c) = Σ_{i=0}^{∞} c_i γ^i

where c ∈ G^ℕ, each c_i ∈ G, and γ ≤ 1/k. In fact, if γ < 1/k, the resulting point belongs to a Cantor set D, the greatest gaps of which are of width 1 − kγ. This is important because it allows us to simulate any given cellular automaton f by a real function F which is only defined on the Cantor set. Of course, this function F can be extended to the whole interval, but it has to be stable on the Cantor set, i.e. F(D) ⊆ D, because the decoding function is only defined on it. It is possible to show that if φ = φ_2 ∘ φ_1 and ψ = φ^{−1} are continuous, as f is continuous, the function F defined on D as

    F : D → D : x → F(x) = φ(f(φ^{−1}(x)))

is also continuous on D. Each gap between points of D can be filled by simple interpolation, and we get a continuous function on the real interval [0, 1].
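A concrete instance of the encoding φ_2 can be sketched with the hypothetical constants k = 2, γ = 1/3 and a digit rescaling c_i → 2c_i, so that the image is the classical middle-thirds Cantor set; the paper's exact normalization may differ, this is only our illustrative choice:

```python
# Encode a binary sequence as a real in the middle-thirds Cantor set
# (digits 0 and 2 in base 3), and decode it back digit by digit.
def encode(c):
    return sum(2 * ci / 3 ** (i + 1) for i, ci in enumerate(c))

def decode(x, n):
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x + 1e-12)      # 0 or 2 for points of the Cantor set
        digits.append(d // 2)
        x -= d
    return digits

c = [1, 0, 1, 1]
x = encode(c)
assert decode(x, 4) == c
print(round(x, 6))  # 0.765432  (= 62/81)
```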

Are continuous real functions and CA equivalent? To answer this question, a trade-off can be made regarding whether we take initial conditions into account or not. If we do not consider initial conditions as part of systems, a simple counting argument leads us to the result. There is an uncountable number of continuous real functions (cardinality ℵ_1), whereas there is only a countable infinity of cellular automata (for a given neighborhood parameter r and a cellular space C, there are |C|^{|C|^{2r+1}} different local rules, which is a finite number; hence, by enumerating all possible values of r, the cardinality is ℵ_0). There are thus strictly fewer cellular automata than continuous real functions. However, if initial conditions are taken into account, the answer is not so clear. There are ℵ_1 possible initial conditions for cellular automata. In principle, an initial condition could encode a continuous function and its initial condition. It is thus possible that a cellular automaton simulates a continuous function if we accept:
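The counting step in this argument can be checked directly; the helper below is an illustration of the formula, not code from the text.

```python
def num_local_rules(k, r):
    """Number of local rules for a k-symbol alphabet and radius r:
    each of the k**(2r+1) possible neighborhoods is mapped to one of
    k states, so there are k ** (k ** (2r+1)) rules in total."""
    return k ** (k ** (2 * r + 1))

# k = 2, r = 1 gives the 256 elementary cellular automata. The count is
# finite for every fixed (k, r), so the union over all r is only
# countably infinite, as stated in the text.
```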

either infinite time; or simulation with bounded error: we encode the function and its initial condition and we let the system evolve until it reaches the solution up to a certain precision, possibly in finite time. This differs from a traditional view of simulation because we provide the simulator (here, the CA) with the state to be computed and a definition of the function to be computed; in general, this last component is not present. Moreover, CA are defined as shift-invariant continuous functions on bi-infinite sequences. In [18, 35], locally defined functions like CA are related to shift-invariant continuous functions on infinite symbol sequences. For the sake of clarity, let us consider a one-way CA (actually, they are computationally equivalent to classical ones [34]). This implies that if a real number from the unit interval [0, 1] is coded as .x_0 x_1 x_2 x_3 x_4 ..., its image by any CA has the form .g_0 g_1 g_2 g_3 g_4 ..., where g_i = h(x_i, x_{i+1}) and h is the CA local rule. Here, the domain of effect of h is always smaller than or equal to 2; that is, to compute digit i, one only needs to know a neighborhood containing 2 digits around i: i and i+1. It is clear that not all continuous functions defined on [0, 1] produce such images. For instance, take the usual Euclidean topology on [0, 1]: a function is continuous iff ∀x, ∀ε > 0, ∃δ, ∀y, d(x, y) < δ ⇒ d(f(x), f(y)) < ε. Define a function as follows: f(.x_0 x_1 x_2 x_3 x_4 ...) = .g_0 g_1 g_2 g_3 g_4 ... where g_i = x_{2i}. This function is of course continuous in the previous sense, but the locality principle is not verified anymore: to compute digit i, we need to look further and further away from i, up to 2i digits away. Of course, one could argue that a well-chosen encoding/decoding function can help in doing the same with a purely local CA, but it is impossible if we restrict ourselves to purely local encoding/decoding functions, that is, CA-like coding functions.
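The contrast between the local one-way CA image and the non-local digit map g_i = x_{2i} can be made concrete on finite digit prefixes. The XOR rule below is an arbitrary example of a radius-1 local rule, not one taken from the text.

```python
def one_way_ca_image(x, h):
    """Digit i of the image depends only on digits i and i+1 (local)."""
    return [h(x[i], x[i + 1]) for i in range(len(x) - 1)]

def decimation_image(x):
    """Digit i of the image is x[2i]: computing digit i requires looking
    2i positions away, so no fixed neighborhood radius suffices (non-local)."""
    return [x[2 * i] for i in range(len(x) // 2)]

h_xor = lambda a, b: a ^ b  # an arbitrary illustrative local rule
```

For example, one_way_ca_image([1, 0, 1, 1], h_xor) yields [1, 1, 0] using only adjacent digit pairs, whereas decimation_image([1, 0, 1, 1, 0, 0]) yields [1, 1, 0] by reading digits 0, 2 and 4, arbitrarily far from the output position.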

10 Universality Hierarchy

Let us first give some comments about the classes of systems previously presented. When we introduced classical automata, we remarked that the behavior of the "dynamical system" version can be richer than that of its "classical" version (see §4.5). For example, take a finite automaton whose transition function is given in Table 3, with Q = Σ = {0, 1, 2}. It can have a periodic behavior independently of its input: it is in fact 2-periodic until it receives a 2 as input symbol; when a 2 is received, a fixed point is reached.

Q  σ | δ(Q, σ)
0  0 | 1
0  1 | 1
1  0 | 0
1  1 | 0
0  2 | 2
1  2 | 2
2  0 | 2
2  1 | 2

Table 3: Transition function of a periodic finite automaton

If we add the tape symbols to the configuration, it is possible to get nonperiodic sequences of configurations in the dynamics. Imagine that Σ = {0, 1} and take as input the following infinite sequence: .0100011011000001010011100101110111...⁶; every configuration is aperiodic because the sequence itself is aperiodic. Of course, if the input sequence contains only a finite number of symbols followed by an infinite sequence of white symbols, we reach a fixed point from any configuration. Aperiodicity can also come from the automaton itself; it suffices to build an automaton copying its input to its internal state. We get the same results when considering pushdown automata or Turing machines.

It is easy to see that both finite automata and pushdown automata can be simulated by Turing machines. In fact, Turing machines are able to simulate any sequential program written in any language. In general, Turing machines are called computation-universal machines. Moreover, it has been shown years ago [31] that it is possible to build a universal Turing machine, i.e. a Turing machine U able to simulate any other Turing machine T by taking as arguments the description of T and its argument: U(T, A) = T(A). The smallest machine proposed in [31] was a 7-state, 4-symbol machine.

Let us now go one step further. We have shown that any Turing machine can be simulated by a cellular automaton, and that any cellular automaton can be simulated by a continuous function on the real interval. Putting it all together, we get the universality hierarchy of Fig. 8. From formal language theory, we know that pushdown automata can recognize more languages than finite automata, and that Turing machines can recognize more languages than both of them. Apart from our own considerations, it has been proved that cellular automata are also strictly more powerful than Turing machines, in the sense that they can solve the Halting problem. This strict relation is denoted by ≻.
Regarding the comparison with continuous real functions, the situation is less clear: we know that these functions can simulate cellular automata, but the converse depends on the point of view considered. This is the meaning of ⪰.

11 Conclusion

Let us first present a summary of action-effect pairs for the three models we have studied (see Table 4). How to relate the global action of a real function to the local action of the other models remains an open question. Moore has proposed a construction involving once-differentiable functions to simulate any dynamical system acting on sequences (like Turing machines or cellular automata) with a finite radius (every site is affected by a finite neighborhood). We have considered continuous functions on the interval [0, 1], using the same encoding/decoding function.

Continuous Real Functions ⪰ Cellular Automata ≻ Turing Machines ≻ Pushdown Automata ≻ Finite Automata

Figure 8: Universality hierarchy, where A ≻ B means "A is strictly more powerful than B" or "any member of B can be simulated by a member of A", and A ⪰ B is the weak version.

Dyn. Syst. | action ⇒ effect
TM         | local ⇒ local
CA         | local everywhere ⇒ global
CRF        | global ⇒ global

Table 4: Action-effect pairs for TM, CA, CRF

Based on this, we have shown a weak⁷ hierarchy of dynamical systems which includes many different kinds of systems and behaviors, from simple finite-state machines to continuous real functions. This universality hierarchy suggests the following remarks. The main drawback of all this is that CA and real functions, just as Turing machines, have many undecidable properties related to their long-term behavior: attraction basins, limit languages, etc. Actually, these properties are often equivalent to the halting problem of a corresponding Turing machine, which is undecidable. These undecidable problems can be seen as a much stronger form of chaos than classical chaos based on sensitive dependence on initial conditions. Nevertheless, the challenge is to isolate classes of dynamical systems, and an algebra of composition rules, allowing us to prove some results for specific systems. The main positive result is that, since any Turing machine can be simulated by a cellular automaton, which in turn can be simulated by a real function, and since Turing machines are computationally universal, both CA and continuous real functions are universal, too. But they are universal in the sense of "sequential" processes. CA and real functions are stronger in the sense that there exist such systems that cannot be simulated by any Turing machine.

6 This sequence is built on Champernowne's number: write down a dot, then all positive integers 0, 1, 2, 3, 4, ..., 100, 101, 102, ..., all these numbers being translated in binary.
We can conclude on this partially open question. It has been shown that it is possible to build a universal Turing machine, able to simulate any other Turing machine. Of course, there exist a cellular automaton and a continuous real function corresponding to this universal Turing machine. But is it possible to build a universal cellular automaton, able to simulate any other CA, and a universal continuous real function, able to simulate any other continuous real function, in one or more dimensions? The first question has been answered positively in [6], for CA of the same dimension (|C| = 14 is needed). The second answer is conjectured to be negative, unless only partial recursive functions are considered [39].

7 Because not all relations are strict.

Acknowledgements

The author gratefully acknowledges P. Kůrka, P. Dieu, C. Moore, and M. Sintzoff for useful comments and discussions. C. Calude is also warmly thanked for his careful reading of the paper and his interesting suggestions.

References

[1] E. Akin. The General Topology of Dynamical Systems. American Mathematical Society, 1993.
[2] E. Asarin and O. Maler. On some relations between dynamical systems and transition systems. In Proceedings of ICALP'94, LNCS 820. Springer-Verlag, 1994.
[3] E. Asarin, O. Maler, and A. Pnueli. Reachability analysis of dynamical systems having piecewise-constant derivatives. Theor. Comp. Sci., 138:35-65, 1995.
[4] R. Bartlett and M. Garzon. Computation universality of monotonic maps of the interval. Tech. rep., Department of Mathematical Sciences, Memphis State University, 1993.
[5] M. Cosnard, M. Garzon, and P. Koiran. Computability properties of low-dimensional dynamical systems. In P. Enjalbert, A. Finkel, and K.W. Wagner, Eds., Proc. of STACS 93, LNCS 665, pp. 365-373. Springer-Verlag, 1993.
[6] K. Culik II, L.P. Hurd, and S. Yu. Computation theoretic aspects of cellular automata. Physica D, 45:357-378, 1990.
[7] K. Culik II, L.P. Hurd, and S. Yu. Formal languages and global cellular automaton behavior. Physica D, 45:396-403, 1990.
[8] R.L. Devaney. An Introduction to Chaotic Dynamical Systems. Addison-Wesley, 2nd ed., 1989.
[9] P. Flocchini and F. Geurts. Searching for chaos in cellular automata: New tools for classification. Tech. Rep. RR-94-12, INGI, U.C.Louvain, 1994.
[10] S. Franklin and M. Garzon. On Stability and Solvability (Or, When Does a Neural Network Solve a Problem?), Vol. 2, pp. 71-83. Kluwer Academic Publishers, 1992.
[11] M. Garzon. Cellular automata and discrete neural networks. Physica D, 45:431-440, 1990.
[12] M. Garzon. Models of Massive Parallelism. Analysis of Cellular Automata and Neural Networks. Springer-Verlag, 1995.
[13] M. Garzon and F. Botelho. Real computation with cellular automata. In N. Boccara, E. Goles, S. Martinez, and P. Picco, Eds., Cellular Automata and Cooperative Systems, NATO ASI Ser. C: Math. Phys. Sci. 396, pp. 191-202. Kluwer Academic Publishers, 1993.
[14] M. Garzon and S. Franklin. Neural computability II. Tech. rep., Department of Mathematical Sciences, Memphis State University, 1992.
[15] F. Geurts and M. Sintzoff. Compositional analysis of dynamical systems using predicate transformers. Tech. Rep. RR-93-30, Unité d'Informatique, U.C.Louvain, 1993.
[16] S. Ginsburg. Algebraic and Automata-Theoretic Properties of Formal Languages, Fund. Studies in Computer Sci. 2. North-Holland/American Elsevier, 1975.
[17] B.L. Hao. Elementary Symbolic Dynamics and Chaos in Dissipative Systems. World Scientific, 1989.
[18] G.A. Hedlund. Endomorphisms and automorphisms of the shift dynamical system. Mathematical Systems Theory, 3:320-375, 1969.
[19] J.G. Hocking. Topology. Addison-Wesley, 1961.
[20] J.E. Hopcroft and J.D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 1990.
[21] L.P. Hurd. Formal language characterizations of cellular automaton limit sets. Complex Systems, 1:69-80, 1987.
[22] P. Koiran. Puissance de Calcul des Réseaux de Neurones Artificiels. PhD thesis, Laboratoire de l'Informatique du Parallélisme, E.N.S. Lyon, 1993.
[23] P. Kůrka. Simulation in dynamical systems and Turing machines. Tech. rep., Department of Mathematical Logic and Philosophy of Mathematics, Charles U., Prague, 1992.
[24] P. Kůrka. Universal computation in dynamical systems. Tech. rep., Department of Mathematical Logic and Philosophy of Mathematics, Charles U., Prague, 1992.
[25] P. Kůrka. Dynamical systems and factors of finite automata. Tech. rep., Department of Mathematical Logic and Philosophy of Mathematics, Charles U., Prague, 1993.
[26] P. Kůrka. One-dimensional dynamics and factors of finite automata. Acta Univ. Carolinae Math. Phys., 34(2):83-95, 1993.
[27] P. Kůrka. A comparison of finite and cellular automata. In I. Prívara, B. Rovan, and P. Ružička, Eds., Mathematical Foundations of Computer Science, LNCS 841, pp. 483-493. Springer-Verlag, 1994.
[28] P. Kůrka. Regular unimodal systems and factors of finite automata. Theoretical Computer Science, 133:49-64, 1994.
[29] P. Kůrka. On topological dynamics of Turing machines. Tech. rep., Department of Mathematical Logic and Philosophy of Mathematics, Charles U., Prague, 1995.
[30] K. Lindgren and M.G. Nordahl. Universal computation in simple one-dimensional cellular automata. Complex Systems, 4:299-318, 1990.
[31] M.L. Minsky. Computation: Finite and Infinite Machines. Prentice Hall, 1967.
[32] C. Moore. Generalized shifts: Unpredictability and undecidability in dynamical systems. Nonlinearity, 4:199-230, 1991.
[33] C. Moore. Smooth maps of the interval and the real line capable of universal computation. Tech. Rep. 93-01-001, Santa Fe Institute, 1993.
[34] K. Morita. Computation-universality of one-dimensional one-way reversible cellular automata. Inform. Process. Lett., 42:325-329, 1992.
[35] D. Richardson. Tessellations with local transformations. Journal of Computer and System Sciences, 6:373-388, 1972.
[36] H.T. Siegelmann and E.D. Sontag. Analog computation via neural networks. Theor. Comp. Sci., 131:331-360, 1994.
[37] M. Sintzoff and F. Geurts. Analysis of dynamical systems using predicate transformers: Attraction and composition. In S.I. Andersson, Ed., Analysis of Dynamical and Cognitive Systems, LNCS 888, pp. 227-260. Springer-Verlag, 1995.
[38] A.R. Smith III. Simple computation-universal cellular spaces. Journal of the ACM, 18(3):339-353, 1971.
[39] K. Svozil. Randomness and Undecidability in Physics. World Scientific, 1993.
[40] G. Troll. Formal languages in dynamical systems. Acta Univ. Carolinae, Math. et Phys., 34(2), 1994.
[41] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos, Texts Appl. Maths 2. Springer-Verlag, 1990.
[42] S. Wolfram. Theory and Applications of Cellular Automata. World Scientific, 1986.
[43] P. Wolper. Introduction à la Calculabilité. InterÉditions, 1991.
[44] H. Xie. On formal languages in one-dimensional dynamical systems. Nonlinearity, 6:997-1007, 1993.