Objects as mobile processes
Basic Research in Computer Science
BRICS
Hans Hüttel    Josva Kleist
BRICS Report Series ISSN 0909-0878
RS-96-38 October 1996
Copyright © 1996, BRICS, Department of Computer Science, University of Aarhus. All rights reserved. Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy.
See back inner page for a list of recent publications in the BRICS Report Series. Copies may be obtained by contacting: BRICS, Department of Computer Science, University of Aarhus, Ny Munkegade, building 540, DK-8000 Aarhus C, Denmark. Telephone: +45 8942 3360. Telefax: +45 8942 3255. Internet: BRICS@brics.dk. BRICS publications are in general accessible through the World Wide Web and anonymous FTP:
http://www.brics.dk/ ftp://ftp.brics.dk/pub/BRICS
Objects as mobile processes
Hans Hüttel and Josva Kleist

October 29, 1996
Abstract

The object calculus of Abadi and Cardelli [AC96, AC94b, AC94a] is intended as a model of central aspects of object-oriented programming languages. In this paper we encode the object calculus in the asynchronous π-calculus without matching and investigate the properties of our encoding.

1 Introduction
In [AC96, AC94b, AC94a] Abadi and Cardelli investigate several versions of an object-oriented calculus (the ς-calculus) with respect to its type system. The primary motivation behind the ς-calculus is to find a simple foundation for object-oriented languages, just as the λ-calculus forms a foundation for functional programming languages. In the ς-calculus the central concept is that of an object. Objects are built from object formation ([li = ς(xi)bi]), where we create an object with methods li = ς(xi)bi; method activation (a.l), where we activate the method named l in object a; and method override (a.l ⇐ ς(x)b), where the method named l in object a is exchanged with the new method l = ς(x)b. Despite its apparent simplicity, the ς-calculus has previously shown its ability to express several object-oriented features, and it is also possible to encode the λ-calculus [Abr89]. In this paper we shall describe an encoding of the simplest version of the ς-calculus into an asynchronous version of the π-calculus [PW92] and investigate the properties of this encoding. In particular, the encoding is sound under the operational semantics of the ς-calculus. We also show that
Address: Department of Computer Science, Aalborg University, Frederik Bajersvej 7, 9220 Aalborg, Denmark. Email: {hans,kleist}@cs.auc.dk
our encoding is not fully abstract with respect to weak bisimulation in the π-calculus, and we consider what constraints must be imposed on the π-calculus to obtain full abstraction. The work presented in this paper is thus related to the results achieved by Sangiorgi in [San96] but arose independently. A main difference lies in the choice of calculus: Sangiorgi employs the matching construct of the π-calculus, whereas the encoding presented in this paper does not, and restricts its attention to the asynchronous π-calculus.

An encoding of the ς-calculus into the asynchronous π-calculus is interesting from several points of view. Firstly, most object-oriented languages work with pointers, whereas the ς-calculus uses explicit substitution. In the π-calculus the basic entities are names, which can be thought of as pointers; thus an encoding of the ς-calculus into the π-calculus shows how to use pointers to represent substitution. Secondly, the encoding presented in this paper also hints at the possibility of using a model checker for the π-calculus to verify properties of ς-calculus expressions. Thirdly, there is the implementation issue. When implementing programming languages on distributed systems, asynchronous communication is usually considered easier to implement, so the encoding into an asynchronous calculus will also give an idea of how to implement the ς-calculus, or a language based on the ς-calculus, in a distributed setting. Finally, the π-calculus has also been put forward as a possible theoretical model of object-oriented programming languages, so an encoding will provide some basic insight into how one expresses object-oriented features in the π-calculus.

The structure of the rest of the paper is as follows: Section 2 introduces the ς-calculus and explains its semantics. Section 3 gives the syntax and semantics of the asynchronous π-calculus that we shall use as our target calculus.
In Section 4 we present our encoding of the ς-calculus into the asynchronous π-calculus and discuss alternative encodings. Sections 5 and 6 regard the operational correspondence of the encoding and the relation between equivalences of ς-calculus terms and their encodings. Section 7 summarizes our results and relates them to other existing work.
2 The ς-calculus
The version of the ς-calculus we use is essentially the simple untyped object calculus of [AC96]. Objects in the ς-calculus are given by:

    a ::= [li = ς(xi)bi]    objects
        | x                 self variables
        | a.l               method activation
        | a.l ⇐ ς(x)b       method override
Here xi ∈ SVar range over self variables and li ∈ MNames range over method names. We let m(a) denote the set of method names and fv(a) the set of free self variables in a. We give the semantics for objects as a reduction semantics. Let a = [li = ς(xi)bi] with li ∈ L; then:

    a.lk ⇝ bk{a/xk}                                 (lk ∈ L)
    a.l ⇐ ς(x)b ⇝ [li = ς(xi)bi, l = ς(x)b]         (li ∈ L \ {l})

The activation of the method lk results in the method body being activated with the self variable bound to the original object. Method override results in an object with the overridden method exchanged with the new method. In the original version of the ς-calculus it is not allowed to add methods to an object, a restriction introduced to keep the theory simple. Since the ability to add methods does not interfere with our encoding, we shall in the present paper allow addition of methods.

A context C[] is an `incomplete' ς-calculus term, and we write C[a] to denote that it has been completed using the term a. The syntax of contexts is given by:

    C[] ::= C[].l | C[].l ⇐ ς(x)b | []

Our final transition rule specifies the reduction strategy, which, given our syntax of contexts, implies leftmost reduction:

    a ⇝ a'  implies  C[a] ⇝ C[a']

Leftmost reduction implies that in a term a.l or a.l ⇐ ς(x)b we always reduce a to an object before activating or overriding a method. That is, objects are the values of this version of the ς-calculus. To give an intuition of how the ς-calculus works, we shall present a few simple examples (taken from [AC96]):
    let a = [l = ς(x)x.l],            then a.l ⇝ x.l{a/x} = a.l ⇝ ⋯
    let a' = [l = ς(x)x],             then a'.l ⇝ x{a'/x} = a'
    let a'' = [l = ς(y)y.l ⇐ ς(x)x],  then a''.l ⇝ a''.l ⇐ ς(x)x ⇝ [l = ς(x)x]
The object a shows how we can get infinite behaviour through the use of self variables. The object a'' shows how an object can modify itself by performing a method override on a self variable.
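The reduction semantics above is small enough to prototype directly. The following Python sketch (ours, not part of the paper; all names and the term representation are invented for illustration) represents objects as method tables and implements the leftmost reduction strategy:

```python
# An illustrative sketch of the untyped sigma-calculus. Terms are tuples:
#   ("obj", {label: (self_var, body)})    [l_i = sigma(x_i) b_i]
#   ("var", x)                            self variable
#   ("act", a, l)                         method activation a.l
#   ("ovr", a, l, self_var, body)         method override a.l <= sigma(x) b

def subst(term, x, val):
    """term{val/x}; binders shadow, so we stop below a binder named x."""
    kind = term[0]
    if kind == "var":
        return val if term[1] == x else term
    if kind == "obj":
        return ("obj", {l: (y, b if y == x else subst(b, x, val))
                        for l, (y, b) in term[1].items()})
    if kind == "act":
        return ("act", subst(term[1], x, val), term[2])
    _, a, l, y, b = term
    return ("ovr", subst(a, x, val), l, y, b if y == x else subst(b, x, val))

def step(term):
    """One leftmost reduction step; None if term is a value (an object)."""
    kind = term[0]
    if kind == "obj":
        return None
    if kind == "act":
        _, a, l = term
        inner = step(a)
        if inner is not None:              # reduce the receiver first
            return ("act", inner, l)
        y, b = a[1][l]                     # a.l ~> b{a/y}
        return subst(b, y, a)
    if kind == "ovr":
        _, a, l, y, b = term
        inner = step(a)
        if inner is not None:
            return ("ovr", inner, l, y, b)
        methods = dict(a[1])
        methods[l] = (y, b)                # replace (or add) method l
        return ("obj", methods)
    raise ValueError("stuck: free self variable")

# a' = [l = sigma(x) x]: a'.l reduces to a' itself
a1 = ("obj", {"l": ("x", ("var", "x"))})
assert step(("act", a1, "l")) == a1
```

The self-modifying object a'' from the examples can be run the same way: two `step`s turn its activation into the object with the overriding method installed.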
3 The asynchronous π-calculus
The π-calculus [PW92] has previously shown its ability to encode both the λ-calculus [Mil92] and certain object-oriented languages [Wal95, San96]. In [Wal95] Walker encoded a variant of the programming language POOL [Ame89] into the π-calculus, and in [San96] Sangiorgi investigates another encoding of the ς-calculus into the π-calculus. Sangiorgi shows how to encode the ς-calculus in a type-correct way into an extended version of the π-calculus with a special case construct, which basically amounts to an extended matching operator. As the target calculus for our translation we instead go for as simple a version of the π-calculus as possible. The version that we shall use is the asynchronous π-calculus [CS96, Bou92, HT91, HK95]. The syntax of the asynchronous π-calculus is in the present paper given by:

    P ::= āb̃ | P|P | (νa)P | G | !G
    G ::= 0 | a(b̃).P | τ.P | G + G

We let a, b, … ∈ Names range over an infinite countable set of names; b̃ denotes a tuple of names, and a tuple of the names a, b and c will be written as ⟨a, b, c⟩. P ∈ Proc ranges over processes and G ∈ GProc over guarded processes; the use of guards ensures that we only replicate and sum guarded expressions. We let bn(P), fn(P) and n(P), respectively, denote the sets of bound names, free names and names of the agent P.

The difference between the synchronous and the asynchronous π-calculus lies in the output construct. In the asynchronous calculus output is non-blocking; this is seen in the syntax as the absence of output prefixing (āb.P). Instead, output is modelled as the parallel composition of an output atom and the process (āb | P). The use of asynchronous communication has several advantages. Most importantly, several of the versions of bisimulation equivalence that are distinct in the synchronous setting coincide in an asynchronous setting. As a consequence, the algebraic theory becomes simpler [HK95, CS96].

For simplicity we shall only present the semantics of the monadic calculus, but it is easily extended to handle the polyadic case. The semantics is given by a labelled transition system with labels

    α ::= τ | āb | ā(b) | ab

where āb denotes the output of the free name b on the name a, the bound output ā(b) indicates that we transmit the bound name b over the channel a, and ab denotes the input of the name b over the name a. Table 1 shows the inference rules for the asynchronous π-calculus, with the symmetric versions of [sync], [sync-ex], [comp] and [sum] omitted. We shall identify alpha-convertible terms, i.e. work up to renaming of bound names.
    [tau]      τ.P --τ--> P

    [out]      āb --āb--> 0

    [in]       a(b).P --ac--> P{c/b}

    [sum]      G --α--> P  implies  G + G' --α--> P

    [rep]      G --α--> P  implies  !G --α--> P | !G

    [res]      P --α--> P' and a ∉ n(α)  implies  (νa)P --α--> (νa)P'

    [out-ex]   P --āb--> P' and a ≠ b  implies  (νb)P --ā(b)--> P'

    [comp]     P --α--> P' and bn(α) ∩ fn(Q) = ∅  implies  P|Q --α--> P'|Q

    [sync]     P --āb--> P' and Q --ab--> Q'  implies  P|Q --τ--> P'|Q'

    [sync-ex]  P --ā(b)--> P', Q --ab--> Q' and b ∉ fn(Q)  implies  P|Q --τ--> (νb)(P'|Q')
Table 1: The inference rules for the asynchronous π-calculus.

Just as in the synchronous π-calculus, several definitions of bisimulation exist (see [CS96] for an overview). In the present paper we shall adopt the following definition, due to Amadio, Castellani and Sangiorgi [CS96]:
Definition 1 (Asynchronous bisimulation) A symmetric relation R is an asynchronous bisimulation if for all P R Q, whenever:

  - P --α--> P' and α is not an input, then Q --α--> Q' and P' R Q'.
  - P --ab--> P', then either Q --ab--> Q' and P' R Q', or Q --τ--> Q' and P' R (Q' | āb).

We write P ∼ Q if P R Q for some asynchronous bisimulation R.

The definition of asynchronous bisimulation is as the standard definition of bisimulation except for the last part of the input case. This part expresses that if a process absorbs what it has just emitted, this can be "absorbed" in an internal action. An example of this (from [CS96]) is the following equational law:

    a(b).(āb | P) + τ.P ∼ τ.P    (b ∉ fn(P))
Weak bisimulation (≈) is defined in the usual way, by exchanging the transitions --α--> in the above definition with the corresponding weak transitions ==α==>. Our primary motivation for choosing the asynchronous π-calculus instead of the synchronous version is minimalism; we want to use as simple a version of the π-calculus as possible. But the choice gives us some important advantages. Most importantly, the notion of bisimilarity is unique and yields a congruence, a result that does not hold in the synchronous case or if we add matching. Also, the equational theory is simpler: in the synchronous case matching is needed to give an equational theory, whereas this is not necessary in the asynchronous calculus. We use the notation P --τ-->d Q to indicate that P --τ--> Q is the unique transition possible from P. To enhance readability we also omit restrictions of names that no longer appear in a process; for instance, (νo)(x̄y) will be written as x̄y (formally, the two expressions are bisimilar).
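To make the non-blocking reading of output concrete, here is a small executable model (our illustration, not part of the paper) of asynchronous name-passing, with names as message buffers. Real π-calculus messages are unordered; a FIFO buffer is close enough for illustration.

```python
import queue
import threading

# A rough model of asynchronous name-passing: a name is an unbounded
# buffer, an output atom a<b> is a non-blocking put, and an input
# prefix a(x).P is a blocking get.
class Name:
    def __init__(self):
        self._buf = queue.Queue()

    def send(self, value):       # output atom: never blocks
        self._buf.put(value)

    def receive(self):           # input prefix: blocks for a message
        return self._buf.get()

# Mobility: names are themselves the values transmitted, so a process
# can receive a channel and then communicate on it.
a, v = Name(), Name()

def server():                    # the process a(x).(x<42> | 0)
    x = a.receive()
    x.send(42)

t = threading.Thread(target=server)
t.start()
a.send(v)                        # emit the name v on a; does not block
result = v.receive()             # result == 42
t.join()
```

Note that `a.send(v)` returns immediately even if no receiver is ready, mirroring the absence of output prefixing in the syntax above.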
4 Coding up the ς-calculus
The intuition behind our encoding is that the encoding of an object can be used via its value channel. More precisely, as soon as an object reduces to a value, an object reference is emitted on the value channel. An object reference is then used to activate methods by sending a value name, an object reference, and a method name to the object; the value name tells where we expect the result to be returned, the method name is the name of the method that we want to activate, and the object reference that we pass is used if the called method wants to activate other methods. This is necessary in the case where a method has been overridden. The polyadic π-calculus allows a straightforward typing discipline, called sorting, which ensures that suitable names are always communicated. Our encoding obeys the sorting:

    Method names       l : Method → (ObjRef)
    Values             v : Value → (ObjRef)
    Object references  o : ObjRef → (Method, ObjRef, Value)
    Self variables     x : Var

where v : Value → (ObjRef) expresses that v ranges over the set of value channels and that we only transmit object references over value channels. We assume that the sets of method names and self variables in the above sorting coincide with the sets of method names and self variables in the ς-calculus.
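The wire protocol that this sorting suggests can be sketched executably. The following is our illustration, not the paper's formal encoding: an object reference is a channel carrying triples (method name, object reference, value channel), and an object is a process serving requests on its reference.

```python
import queue
import threading

# An object reference is a channel of request triples (l, o', v).
def spawn_object(methods):
    """Serve an object whose methods are functions body(self_ref, v);
    returns the object reference (a channel of request triples)."""
    ref = queue.Queue()                        # ref : ObjRef
    def serve():
        while True:
            label, self_ref, value = ref.get() # receive <l, o', v>
            methods[label](self_ref, value)
    threading.Thread(target=serve, daemon=True).start()
    return ref

# The object [l = sigma(x) x]: activating l returns the self reference.
o = spawn_object({"l": lambda self_ref, v: v.put(self_ref)})

v = queue.Queue()                              # v : Value
o.put(("l", o, v))                             # activate: o<l, o, v>
result = v.get()                               # result is o itself
```

The looping `serve` thread plays the role of the replicated input in the encoding below: the object can answer arbitrarily many activations.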
4.1 The encoding
The encoding ⟦·⟧v presented below is parametrized with a value name v denoting the reference of the encoded object. The notation m(?).P denotes m(x).P for some name x ∉ fn(P).
    ⟦a.l⟧v             = (νv')(⟦a⟧v' | v'(o).ō⟨l, o, v⟩)

    ⟦[li = ς(xi)bi]⟧v  = (νo)(v̄o | !o(l', o', v').(l̄'o' | Σi li(xi).⟦bi⟧v'))

    ⟦x⟧v               = v̄x

    ⟦a.l ⇐ ς(x)b⟧v     = (νv')(⟦a⟧v' | v'(o).(νo')(v̄o' | !o'(l'', o'', v'').
                           (l̄''o'' | l(x).⟦b⟧v'' + Σ_{m ∈ m(a)\{l}} m(?).ō⟨l'', o'', v''⟩)))
Observe that the above encoding prohibits free self variables, since the sorting prohibits the transmission of self variables over value names. In the following we shall usually omit the index sets when they are obvious from the context.

4.2 Notational conventions

To enhance the readability of encoded terms, we shall in the following use the abbreviation:

    o := [li = ς(xi)bi]  ≝  !o(l', o', v').(l̄'o' | Σi li(xi).⟦bi⟧v')

The intuition behind this `relay' construct o := [li = ς(xi)bi] is that the object [li = ς(xi)bi] resides at o. In other words, the object can have its methods activated by transmitting a method name, an object reference and a value name over the name o. With this construct we can write the encoding ⟦[li = ς(xi)bi]⟧v as (νo)(v̄o | o := [li = ς(xi)bi]). As can be seen from the encoding of method activation and override, we select the actual method to be invoked by means of communication over method names. This implies that the encoding might be messed up by π-calculus contexts that use method names incorrectly and cause erroneous transitions. For instance:
    l(?) | ⟦[l = ς(x)x].l⟧v
      = l(?) | (νv')((νo)(v̄'o | o := [l = ς(x)x]) | v'(o').ō'⟨l, o', v⟩)
      --τ--> l(?) | (νo)(o := [l = ς(x)x] | ō⟨l, o, v⟩)
      --τ--> l(?) | (νo)(o := [l = ς(x)x] | l̄o | l(x).v̄x)
      --τ--> (νo)(o := [l = ς(x)x] | l(x).v̄x)
Observe how the input l(?) consumes the message that was intended for method selection. To avoid this, we implicitly assume that the method names are restricted at the outermost level in the encoding. Furthermore, observe that the encoding itself does not mess things up, since we always wait for output on a restricted name in the encoding (except when choosing methods).

We have already introduced the notation --τ-->d; we shall extend this notation with transitions of the form P --τ-->dl Q, indicating that the internal move from P to Q is the only possible transition except for actions on method names. When we remember that the method names are restricted away at the outermost level and that our encoding does not create erroneous transitions, this implies that the transition is unique.

We use the notation a⇑ to indicate that a has an infinite reduction sequence, and we write a⇓ if a reduces to an object. In the π-calculus we use the same notation: P⇑ indicates that P has an infinite sequence of τ-moves, and P⇓ indicates that P after a sequence of τ-moves can perform an observable action.

4.3 Examples

To give an intuition of how the encoding works, we shall consider a few examples. Our first example is the encoding of the following simple object:

    a = [l = ς(x)x.l]
It only contains one method l, which when activated will activate itself indefinitely, resulting in the infinite reduction sequence a.l ⇝ a.l ⇝ ⋯. In our encoding this corresponds to the following behaviour:

    ⟦a.l⟧v = (νv')((νo)(v̄'o | o := [l = ς(x)x.l]) | v'(o').ō'⟨l, o', v⟩)
      --τ--> (νo)(o := [l = ς(x)x.l] | ō⟨l, o, v⟩)
      --τ--> (νo)(o := [l = ς(x)x.l] | l̄o | l(x).(νv')(v̄'x | v'(o').ō'⟨l, o', v⟩))
      --τ--> (νo)(o := [l = ς(x)x.l] | (νv')(v̄'o | v'(o').ō'⟨l, o', v⟩))
      --τ--> (νo)(o := [l = ς(x)x.l] | ō⟨l, o, v⟩)
      ⋯
[Figure 1: Method override and lookup. Activation of method l = l2 happens at the new object o'; lookup starts at the original receiver o, which holds l1 and the old l2.]

Our next example is somewhat more complicated and illustrates the encoding of method override. Consider the object

    a = [l1 = ς(x)x, l2 = ς(x)b]
If we override a method, then we get a new object with the overridden method exchanged with the overriding one. In our encoding this is simulated by generating a new "object" that handles activations of the overriding method itself and delegates all other method activations to the original object. This is illustrated in Figure 1. The following shows how method override and lookup are handled by our encoding:
    ⟦(a.l2 ⇐ ς(x)x.l1).l2⟧v
      = (νv')((νv'')((νo)(v̄''o | o := a) | v''(w).(νo')(v̄'o' | !o'(l'', o'', v'').
            (l̄''o'' | l2(x).⟦x.l1⟧v'' + l1(?).w̄⟨l'', o'', v''⟩))) | v'(o).ō⟨l2, o, v⟩)
      --τ²-->d (νo')((νo)(o := [l1 = ς(x)x, l2 = ς(x)b] | !o'(l'', o'', v'').
            (l̄''o'' | l2(x).⟦x.l1⟧v'' + l1(?).ō⟨l'', o'', v''⟩)) | ō'⟨l2, o', v⟩)
      = (νo')((νo)(o := [l1 = ς(x)x, l2 = ς(x)b] | !o'(l'', o'', v'').
            (l̄''o'' | l2(x).(νu)(ūx | u(w).w̄⟨l1, w, v''⟩) + l1(?).ō⟨l'', o'', v''⟩)) | ō'⟨l2, o', v⟩)
      --τ³-->dl (νo')((νo)(o := [l1 = ς(x)x, l2 = ς(x)b] | !o'(l'', o'', v'').
            (l̄''o'' | l2(x).(νu)(ūx | u(w).w̄⟨l1, w, v''⟩) + l1(?).ō⟨l'', o'', v''⟩)) | ō'⟨l1, o', v⟩)
      --τ²-->dl (νo', o)(o := [l1 = ς(x)x, l2 = ς(x)b] | !o'(l'', o'', v'').
            (l̄''o'' | l2(x).⟦x.l1⟧v'' + l1(?).ō⟨l'', o'', v''⟩) | ō⟨l1, o', v⟩)
      --τ²-->dl (νo')((νo)(o := [l1 = ς(x)x, l2 = ς(x)b] | !o'(l'', o'', v'').
            (l̄''o'' | l2(x).⟦x.l1⟧v'' + l1(?).ō⟨l'', o'', v''⟩)) | v̄o')
Observe how the object reference of the original receiver is passed on when the receiver does not have the requested method itself. This ensures that when the correct method is found, we know where to start looking for other methods.
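The delegation idea behind this derivation can be sketched executably. The following is our illustration (not the paper's formal encoding): a new reference answers the overriding method itself and forwards every other request to the original object, keeping the received self reference unchanged, so that a method found at the original object still uses the overriding object as "self".

```python
import queue
import threading

def spawn(handler):
    """Serve requests (label, self_ref, value_channel) on a fresh reference."""
    ref = queue.Queue()
    def serve():
        while True:
            handler(*ref.get())
    threading.Thread(target=serve, daemon=True).start()
    return ref

# Original object a = [l1 = sigma(x) x, l2 = sigma(x) "old"].
def original(label, self_ref, v):
    if label == "l1":
        v.put(self_ref)                 # body x: return the receiver
    elif label == "l2":
        v.put("old")

a = spawn(original)

# b = a.l2 <= sigma(x) "new": override l2, delegate everything else.
def overriding(label, self_ref, v):
    if label == "l2":
        v.put("new")
    else:
        a.put((label, self_ref, v))     # delegate with the same self_ref

b = spawn(overriding)

v = queue.Queue()
b.put(("l2", b, v))
assert v.get() == "new"                 # the overriding method answers
v = queue.Queue()
b.put(("l1", b, v))
assert v.get() is b                     # delegated l1 sees b, not a, as self
```

The final assertion is the point of the relay construct: because the original receiver's reference travels with the request, lookup that succeeds further down the chain still binds self to the overriding object.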
4.4 Alternative encodings
In [San96] (and a previous version of this paper) a somewhat different encoding of the untyped ς-calculus is presented. The difference is in the use of matching instead of choice and communication:

    ⟦a.l⟧v             = (νv')(⟦a⟧v' | v'(o).ō⟨l, o, v⟩)
    ⟦[li = ς(xi)bi]⟧v  = (νo)(v̄o | !o(l', o', v').(Πi [l' = li](νxi)(x̄io' | ⟦bi⟧v')))
    ⟦x⟧v               = x(o).v̄o
    ⟦a.l ⇐ ς(x)b⟧v     = (νv')(⟦a⟧v' | v'(o).(νo')(v̄o' | !o'(l'', o'', v'').
                           ([l'' = l](νx)(x̄o'' | ⟦b⟧v'') | [l'' ≠ l]ō⟨l'', o'', v''⟩)))

Other possibilities exist. In [San96] the typed ς-calculus is translated into an extended version of the π-calculus with a special case construct instead of matching. This encoding does not use the `relay' construct when dealing with method override. Instead, the original object is sent a special message with the name of the overriding method, which it then inserts in place of the original method. This implies that each method will need two method names, one for method activation and one for method override.
5 Operational Correspondence
In this section we present a number of results about our encoding, which together show the soundness of our encoding w.r.t. the operational semantics of the two calculi. Our first lemma is a substitution lemma. It states that the `relay' construct in parallel with an encoded object corresponds to binding an object reference to a self variable within the object.
Lemma 1 Let a = [li = ς(xi)bi]. Then

    (νo)(o := a | ⟦b⟧v{o/x}) ≈ ⟦b{a/x}⟧v
Proof. By induction on the structure of b.
b = y: We have two cases. Either y = x, and we have:

    (νo)(o := a | ⟦x⟧v{o/x}) = (νo)(o := a | v̄o) = ⟦a⟧v = ⟦x{a/x}⟧v

or y ≠ x, and:

    (νo)(o := a | ⟦y⟧v{o/x}) = (νo)(o := a | v̄y) ≈ ⟦y⟧v = ⟦y{a/x}⟧v
b = b'.l: Assume w.l.o.g. that v' ∉ fn(o := a); then

    (νo)(o := a | ⟦b'.l⟧v{o/x})
      = (νo)(o := a | (νv')(⟦b'⟧v'{o/x} | v'(o').ō'⟨l, o', v⟩))
      ≈ (νv')(⟦b'{a/x}⟧v' | v'(o').ō'⟨l, o', v⟩)        (by IH)
      = ⟦b'{a/x}.l⟧v = ⟦(b'.l){a/x}⟧v
b = [li = ς(xi)bi]: Assume w.l.o.g. that xi ≠ x; then

    (νo)(o := a | ⟦[li = ς(xi)bi]⟧v{o/x})
      = (νo)(o := a | (νo')(v̄o' | !o'(l'', o'', v'').(l̄''o'' | Σi li(xi).⟦bi⟧v''{o/x})))
      ≈ (νo')(v̄o' | !o'(l'', o'', v'').(l̄''o'' | Σi li(xi).(νo)(o := a | ⟦bi⟧v''{o/x})))   (1)
      ≈ (νo')(v̄o' | !o'(l'', o'', v'').(l̄''o'' | Σi li(xi).⟦bi{a/x}⟧v''))                  (by IH)
      = ⟦[li = ς(xi)bi{a/x}]⟧v = ⟦[li = ς(xi)bi]{a/x}⟧v
In (1) we distribute the replication over the sum. It can be shown that this is valid if every free occurrence of o in each component is a negative subject, that is, o is only used for output.

b = b'.l ⇐ ς(y)c: Assume w.l.o.g. that y ≠ x; then

    (νo)(o := a | ⟦b'.l ⇐ ς(y)c⟧v{o/x})
      = (νo)(o := a | (νv')(⟦b'⟧v'{o/x} | v'(w).(νo')(v̄o' | !o'(l'', o'', v'').
            (l̄''o'' | l(y).⟦c⟧v''{o/x} + Σ m(?).w̄⟨l'', o'', v''⟩))))
      ≈ (νv')((νo)(o := a | ⟦b'⟧v'{o/x}) | v'(w).(νo')(v̄o' | !o'(l'', o'', v'').
            (l̄''o'' | l(y).(νo)(o := a | ⟦c⟧v''{o/x}) + Σ m(?).w̄⟨l'', o'', v''⟩)))        (2)
      ≈ (νv')(⟦b'{a/x}⟧v' | v'(w).(νo')(v̄o' | !o'(l'', o'', v'').
            (l̄''o'' | l(y).⟦c{a/x}⟧v'' + Σ m(?).w̄⟨l'', o'', v''⟩)))                        (by IH)
      = ⟦b'{a/x}.l ⇐ ς(y)c{a/x}⟧v = ⟦(b'.l ⇐ ς(y)c){a/x}⟧v
The distribution of the replication in (2) is sound for the same reasons as in (1). □

As an immediate corollary we get that the encoding is sound w.r.t. the rule for method activation:

Corollary 1 Let a = [li = ς(xi)bi] with li ∈ L. Then for all lj ∈ L,

    ⟦a.lj⟧v ≈ ⟦bj{a/xj}⟧v
Proof. We have:

    ⟦a.lj⟧v = (νv')((νo)(v̄'o | o := a) | v'(o').ō'⟨lj, o', v⟩)
      --τ-->d  (νo)(o := a | ō⟨lj, o, v⟩)
      --τ-->d  (νo)(o := a | l̄jo | Σi li(xi).⟦bi⟧v)
      --τ-->dl (νo)(o := a | ⟦bj⟧v{o/xj})
      ≈ ⟦bj{a/xj}⟧v        (by Lemma 1)

□
Using algebraic techniques from the asynchronous π-calculus, we can also show that method override is sound:

Lemma 2 Let a = [li = ς(xi)bi] with li ∈ L. Then

    ⟦a.l ⇐ ς(x)b⟧v ≈ ⟦[li = ς(xi)bi, l = ς(x)b]⟧v

Proof. We have, with mi ranging over m(a) \ {l}:

    ⟦a.l ⇐ ς(x)b⟧v = (νv')(⟦a⟧v' | v'(w).(νo)(v̄o | !o(l'', o'', v'').
          (l̄''o'' | l(x).⟦b⟧v'' + Σ mi(?).w̄⟨l'', o'', v''⟩)))
      --τ-->d (νo')(o' := [li = ς(xi)bi] | (νo)(v̄o | !o(l'', o'', v'').
          (l̄''o'' | l(x).⟦b⟧v'' + Σ mi(?).ō'⟨l'', o'', v''⟩)))

Now we push the replication that represents the object a under the sum Σ mi(?) and get:

    (νo)(v̄o | !o(l'', o'', v'').(l̄''o'' | l(x).⟦b⟧v'' +
          Σ mi(?).(νo')(ō'⟨l'', o'', v''⟩ | o' := [li = ς(xi)bi])))

The above is valid due to the fact that whenever o' is a private name and the overriding method is activated, this cannot happen with o' as the self reference; that is, if the encoding of a is activated, it will be through o'. Since o' does not occur free in any of the ⟦bi⟧v', this implies that we can remove the replication at o' and replace the internal communication over o' with an internal move:

    (νo)(v̄o | !o(l'', o'', v'').(l̄''o'' | l(x).⟦b⟧v'' +
          Σ mi(?).τ.(l̄''o'' | Σi li(xi).⟦bi⟧v'')))

Finally we can remove the innermost sum (Σi li(xi).⟦bi⟧v'') and instead bind the self variables in the outermost sum:

    (νo)(v̄o | !o(l'', o'', v'').(l̄''o'' | l(x).⟦b⟧v'' + Σ mi(xi).τ.τ.⟦bi⟧v''))
      ≈ (νo)(v̄o | !o(l'', o'', v'').(l̄''o'' | l(x).⟦b⟧v'' + Σ mi(xi).⟦bi⟧v''))
      = ⟦[li = ς(xi)bi, l = ς(x)b]⟧v

□
With these lemmas we are now able to prove that reductions in the ς-calculus correspond to transition sequences in the π-calculus. More precisely, when an object a can do a reduction and become a', then our encoding can match that by doing a series of internal actions, resulting in a π-calculus process weakly bisimilar to ⟦a'⟧v.
Theorem 1 If a ⇝ b, then ⟦a⟧v ==> P for some P with P ≈ ⟦b⟧v.
Proof. By induction on the structure of a.

a = x: We have a ⇝̸.

a = [li = ς(xi)bi]: Again a ⇝̸.

a = a'.l: If a ⇝ b it can be for two reasons:

  i. a' ⇝ a'' and b = a''.l, where ⟦a⟧v = (νv')(⟦a'⟧v' | v'(o).ō⟨l, o, v⟩). By induction there exists a P such that ⟦a'⟧v' ==> P ≈ ⟦a''⟧v'. And then:

        (νv')(P | v'(o).ō⟨l, o, v⟩) ≈ (νv')(⟦a''⟧v' | v'(o).ō⟨l, o, v⟩) = ⟦a''.l⟧v

  ii. a' = [li = ς(xi)bi] with l = lj and b = bj{a'/xj}. We have

        ⟦a⟧v = (νv')((νo)(v̄'o | o := [li = ς(xi)bi]) | v'(o').ō'⟨l, o', v⟩)
          --τ-->d  (νo)(o := [li = ς(xi)bi] | ō⟨l, o, v⟩)
          --τ²-->dl (νo)(o := [li = ς(xi)bi] | ⟦bj⟧v{o/xj}) = P

     According to Lemma 1 we have P ≈ ⟦bj{[li = ς(xi)bi]/xj}⟧v.

a = a'.l ⇐ ς(x)c: Again we have two cases.

  i. a' ⇝ a'' is handled as in the previous case.

  ii. a' = [li = ς(xi)bi] and b = [li = ς(xi)bi, l = ς(x)c]. According to Lemma 2 we have:

        ⟦[li = ς(xi)bi].l ⇐ ς(x)c⟧v ≈ ⟦[li = ς(xi)bi, l = ς(x)c]⟧v

□
Our encoding signals the reduction of a ς-calculus term to an object as the output of an object identifier. This we express as:

Theorem 2 If a ⇝* [li = ς(xi)bi] = b, then ⟦a⟧v ==τ==>dl --v̄(o)--> P with P ≈ o := b.
Proof. We use induction on the length of the reduction sequence a ⇝ⁿ b.

Basis n = 0: We must have a = [li = ς(xi)bi], and for the encoding we have

    ⟦[li = ς(xi)bi]⟧v = (νo)(v̄o | o := [li = ς(xi)bi]) --v̄(o)--> o := [li = ς(xi)bi]

Induction step: Assume that the theorem holds for reduction sequences of length n. We now consider a reduction sequence of length n + 1, that is, we have

    a ⇝ a' ⇝ⁿ b = [li = ς(xi)bi]

According to the induction hypothesis there exists a P such that

    ⟦a'⟧v ==τ==>dl --v̄(o)--> P ≈ o := b

and according to Theorem 1 there exists a Q such that ⟦a⟧v ==> Q ≈ ⟦a'⟧v. We now have the following sequence:

    ⟦a⟧v ==> Q ≈ ⟦a'⟧v ==τ==>dl --v̄(o)--> P ≈ o := b

Since Q ≈ ⟦a'⟧v there must exist a Q' such that

    ⟦a⟧v ==> Q ==> --v̄(o)--> Q' ≈ P ≈ o := b

□
The relationship between transitions in the π-calculus encoding and reductions in the ς-calculus is somewhat more difficult to express, since the π-calculus encoding may need to do some internal computation before being ready to simulate an object. We relate reductions through the output of an object identifier: if we after a series of internal actions see an external action, then this is because the original ς-calculus term can reduce to an object.
Theorem 3 If fv(a) = ∅ and ⟦a⟧v ==> --α--> P, then a ⇝* [li = ς(xi)bi] = b, α = v̄(o), and P ≈ o := b.

Proof. We use induction on the number of τ-moves prior to an observable action.
Basis n = 0: The only process immediately capable of performing an observable action is

    ⟦[li = ς(xi)bi]⟧v --v̄(o)--> o := [li = ς(xi)bi]

and obviously the theorem holds. The other "possibility", ⟦x⟧v, is prohibited since fv(a) = ∅.

Induction step: Assume that the theorem holds for sequences of length at most n. We now consider a sequence of length n + 1 before an external action. For ⟦a⟧v to have any moves, a must be of one of the following forms:
a = a'.l: Here we have

    ⟦a'.l⟧v = (νv')(⟦a'⟧v' | v'(o).ō⟨l, o, v⟩) --τⁿ⁺¹--> --α--> P

Now ⟦a'⟧v' must have fewer than n + 1 moves before an observable action (remember that we disregard observable actions on method names), since the only possible action of the component v'(o).ō⟨l, o, v⟩ has v' as subject, resulting in one internal communication in ⟦a⟧v. Therefore, by induction,

    ⟦a'⟧v' --τᵐ-->dl --v̄'(o')--> Q ≈ o' := c    (m ≤ n)

for some object c, and combining things we get

    (νv')(⟦a'⟧v' | v'(o).ō⟨l, o, v⟩) --τᵐ⁺¹-->dl (νo')(o' := c | ō'⟨l, o', v⟩)

Applying the induction hypothesis once more,

    (νo')(o' := c | ō'⟨l, o', v⟩) --τᵏ--> --v̄(o)--> ≈ o := b    (k = n − m)

All in all,

    (νv')(⟦a'⟧v' | v'(o).ō⟨l, o, v⟩) --τᵐ--> --τ--> --τᵏ--> --v̄(o)--> ≈ o := b
a = a'.l ⇐ ς(x)c: In the encoding we have:

    ⟦a'.l ⇐ ς(x)c⟧v = (νv')(⟦a'⟧v' | v'(w).(νo')(v̄o' | !o'(l'', o'', v'').
          (l̄''o'' | l(x).⟦c⟧v'' + Σ m(?).w̄⟨l'', o'', v''⟩)))

Using the same line of reasoning as in the previous case we get

    (νv')(⟦a'⟧v' | v'(w).(νo')(v̄o' | !o'(l'', o'', v'').
          (l̄''o'' | l(x).⟦c⟧v'' + Σ m(?).w̄⟨l'', o'', v''⟩)))
      ==τ==>dl --v̄(o')--> (νo₀)(o₀ := c' | !o'(l'', o'', v'').
          (l̄''o'' | l(x).⟦c⟧v'' + Σ m(?).ō₀⟨l'', o'', v''⟩))        (3)

with c' = [li = ς(xi)bi]. Now, according to Lemma 2, (3) is weakly bisimilar to o' := [li = ς(xi)bi, l = ς(x)c]. □
6 Equivalences for the ς-calculus
As an operational equivalence for the ς-calculus we shall use context equivalence as defined in [GR95], except that we do not take the type of contexts into account.
Definition 2 (Context equivalence) A relation R is a context ς-equivalence if it is symmetric and a R b implies: if a⇓ then b⇓, and for all contexts C[] we have C[a] R C[b]. Two objects are context ς-equivalent (written a ≃ b) if a R b for some context ς-equivalence.
In [GR95] Gordon and Rees show that context equivalence corresponds to bisimulation in a labelled transition system for ς-calculus terms. The omission of type information has important implications. When we restrict our attention to well-typed ς-calculus terms, we prohibit terms such as [].l, that is, terms where we try to activate non-existing methods. It is quite easy to see that the only terms bisimilar to such a term are terms which also terminate with the attempt to activate a non-existing method.
The use of restriction of method names at the outermost level has the implication that the expected result, namely that weak bisimulation between encoded terms implies context equivalence, does not hold. The problem is that once we have restricted the method names away, we cannot activate any methods and see how the encodings of the objects behave. To illustrate the problem, consider the following two objects, which are definitely not context equivalent:
    a = [l = ς(x)x]        b = [l = ς(x)x.l]

With the restriction of method names at the outermost level we have:

    ⟦a⟧v = (νl)((νo)(v̄o | !o(l', o', v').(l̄'o' | l(x).v̄'x)))
      --v̄(o)--> (νl)(!o(l', o', v').(l̄'o' | l(x).v̄'x))
      ≈ !o(l', o', v').(l̄'o' | (νl)(l(x).v̄'x))
      ≈ !o(l', o', v').l̄'o'

This is also the case for ⟦b⟧v, so we have ⟦a⟧v ≈ ⟦b⟧v, but a.l⇓ and b.l⇑. On the other hand, if we drop the restriction of method names, then it is easily shown that weak bisimulation between encodings of terms implies context equivalence of the original terms.
Theorem 4 If ⟦a⟧v ≈ ⟦b⟧v then a ≃ b.
Proof. Assume that ⟦a⟧v ≈ ⟦b⟧v. Because ≈ is a congruence we have C[⟦a⟧v] ≈ C[⟦b⟧v] for all π-calculus contexts, and in particular for contexts that are encodings of some ς-calculus context. If C'[] is the encoding of the ς-calculus context C[], then we have C'[⟦a⟧v] = ⟦C[a]⟧v' and C'[⟦b⟧v] = ⟦C[b]⟧v' for some v'. Now we know from Theorem 3 that if ⟦C[a]⟧v' performs an observable action on v', then it is because C[a] terminates. Furthermore, if ⟦C[a]⟧v' ≈ ⟦C[b]⟧v', then ⟦C[b]⟧v' must also have an observable action on v', which implies that C[b] must also terminate, and vice versa. That is, if we have ⟦a⟧v ≈ ⟦b⟧v then we have a ≃ b. □
The reverse implication of Theorem 4 does not hold; for instance, consider

    a = [l = ς(x)x]        b = [l = ς(x)a]

If we do not allow addition of methods in method override, then we have a ≃ b, but their encodings are easily distinguished. For the first we have the following transition sequence:

    ⟦a⟧v --v̄(o)--> ≈ o := a --o⟨l,o,v⟩--> ==τ==>dl --v̄o--> o := a

And for the encoding of the second:

    ⟦b⟧v --v̄(o)--> ≈ o := b --o⟨l,o,v⟩--> ==τ==>dl --v̄(o')--> o := b | o' := a

If we allow addition of methods, then a and b are not context equivalent, since b after the activation of l will "lose" the added method. Nor is it enough to allow the addition of methods; to see why, consider:

    a = [l = ς(x)x]        b = [l = ς(x)x.l ⇐ ς(x)x]

These two objects are congruent, even if we allow addition of methods, but their encodings are not weakly bisimilar, since the override in b results in the creation of a new object reference. In fact, when we remove the restriction of method names we obtain a very fine-grained equivalence between ς-calculus terms. This is essentially due to the fact that two objects are weakly bisimilar in their encodings only when they activate the same methods at the same time. For instance, consider:
$$a = [l = \varsigma(x)[l_1 = \varsigma(y)x].l_1] \qquad b = [l = \varsigma(x)[l_2 = \varsigma(y)x].l_2]$$
These two objects are obviously equivalent with respect to their reductions, but since the innermost methods have different names, the encodings of $a$ and $b$ are not weakly bisimilar. To characterize ς-equivalence, we need to restrict ourselves to π-calculus contexts which are encodings of ς-calculus contexts:

Definition 3 A symmetric relation $R_v$ is a ς-bisimulation if $P \mathrel{R_v} Q$ implies:
- if $P \stackrel{\overline{d_l}}{\Longrightarrow} P'$, then $Q \stackrel{\overline{d_l}}{\Longrightarrow} Q'$;
- if $P \stackrel{\overline{v}(o)}{\Longrightarrow} P'$, then $Q \stackrel{\overline{v}(o)}{\Longrightarrow} Q'$, and
  - for all $l \in \mathit{Method}$:
  $$(\nu o)(P' \mid \overline{o}\langle l,o,v\rangle) \mathrel{R_v} (\nu o)(Q' \mid \overline{o}\langle l,o,v\rangle)$$
  - for all $l \in \mathit{Method}$, $x$ and $c$:
  $$(\nu o,o')(P' \mid v(o').\,!o'(l'',o'',v'').(\overline{l''}\langle o''\rangle \mid l(x).[\![c]\!]_{v''} + \textstyle\sum_{m \neq l} m(x).\overline{o}\langle l'',o'',v''\rangle))$$
  $$\mathrel{R_v}\ (\nu o,o')(Q' \mid v(o').\,!o'(l'',o'',v'').(\overline{l''}\langle o''\rangle \mid l(x).[\![c]\!]_{v''} + \textstyle\sum_{m \neq l} m(x).\overline{o}\langle l'',o'',v''\rangle))$$
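Before moving on, the override counterexample above, $a = [l = \varsigma(x)x]$ versus $b = [l = \varsigma(x)(x.l \Leftarrow \varsigma(x)x)]$, can be made concrete with a minimal Python sketch. This is our own illustration, not the paper's π-calculus encoding; the names `activate` and `override` are ours, and objects are simply dictionaries from method names to self-parameterised functions.

```python
# A toy model of sigma-calculus objects as dicts from method names to
# self-parameterised functions. This sketches reductions only; it is not
# the pi-calculus encoding studied in the paper.

def activate(obj, label):
    """a.l: invoke method l with the whole object bound to its self variable."""
    return obj[label](obj)

def override(obj, label, method):
    """a.l <= sigma(x)b: a copy of a in which method l is replaced."""
    updated = dict(obj)   # fresh dict: mirrors the fresh object reference
    updated[label] = method
    return updated

# a = [l = sigma(x) x]
a = {"l": lambda x: x}
# b = [l = sigma(x) (x.l <= sigma(x) x)]
b = {"l": lambda x: override(x, "l", lambda y: y)}

# Both activations return an object whose l-method yields itself, so no
# sigma-calculus context distinguishes a from b.
ra = activate(a, "l")
rb = activate(b, "l")
assert activate(ra, "l") is ra and activate(rb, "l") is rb
```

Note that the override inside $b$ builds a copy (a fresh reference), which is exactly the new object reference that weak bisimilarity observes in the encoding.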
Using the operational correspondence between ς-calculus terms and their encodings, we can prove that if two ς-calculus terms are ς-equivalent, then their encodings are contained in some ς-bisimulation.

Theorem 5 If $a \simeq b$, then there exists a ς-bisimulation $R_v$ such that $[\![a]\!]_v \mathrel{R_v} [\![b]\!]_v$.

Proof. Let $R_v = \{([\![c_1]\!]_v, [\![c_2]\!]_v) \mid c_1 \simeq c_2\}$. Obviously we have $[\![a]\!]_v \mathrel{R_v} [\![b]\!]_v$. We now claim that $R_v$ is a ς-bisimulation up to $\approx$. Let $[\![c_1]\!]_v \mathrel{R_v} [\![c_2]\!]_v$; we now consider what behaviour $[\![c_1]\!]_v$ can have.
$[\![c_1]\!]_v \Uparrow$: By the operational correspondence this must be because $c_1 \Uparrow$. Since $c_1 \simeq c_2$, $[\![c_2]\!]_v$ must also diverge.

$[\![c_1]\!]_v \Longrightarrow P \not\longrightarrow$: This implies that $c_1 \leadsto C[[l_i = \varsigma(x_i)b_i].l]$ with $l_i \in L_1$ and $l \notin L_1$. Since $c_1 \simeq c_2$ we also have $c_2 \leadsto C'[[l_j = \varsigma(x_j)b_j].l]$ with $l_j \in L_2$ and $l \notin L_2$, and again by the operational correspondence we have $[\![c_2]\!]_v \Longrightarrow Q \not\longrightarrow$.
$[\![c_1]\!]_v \stackrel{\overline{v}(o)}{\Longrightarrow} P$: This implies that $c_1 \leadsto [l_i = \varsigma(x_i)b_i]$ with $P \approx [\![o := [l_i = \varsigma(x_i)b_i]]\!]$. Since $c_1 \simeq c_2$, $c_2 \leadsto [l_j = \varsigma(x_j)b_j]$ with $[l_i = \varsigma(x_i)b_i] \simeq [l_j = \varsigma(x_j)b_j]$, and therefore $[\![c_2]\!]_v \stackrel{\overline{v}(o)}{\Longrightarrow} Q$ with $Q \approx [\![o := [l_j = \varsigma(x_j)b_j]]\!]$. By the definition of $\simeq$ we must have $[l_i = \varsigma(x_i)b_i].l \simeq [l_j = \varsigma(x_j)b_j].l$ and $[l_i = \varsigma(x_i)b_i].l \Leftarrow \varsigma(x)c \simeq [l_j = \varsigma(x_j)b_j].l \Leftarrow \varsigma(x)c$. Now, using Lemma 1, we have:
$$(\nu o)(P \mid \overline{o}\langle l,o,v\rangle) \approx [\![[l_i = \varsigma(x_i)b_i].l]\!]_v \mathrel{R_v} [\![[l_j = \varsigma(x_j)b_j].l]\!]_v \approx (\nu o)(Q \mid \overline{o}\langle l,o,v\rangle)$$
and, by Lemma 2,
$$(\nu o,o')(P \mid v(o').\,!o'(l'',o'',v'').(\overline{l''}\langle o''\rangle \mid l(x).[\![c]\!]_{v''} + \textstyle\sum_{m \neq l} m(x).\overline{o}\langle l'',o'',v''\rangle)) \approx [\![[l_i = \varsigma(x_i)b_i, l = \varsigma(x)c]]\!]_v$$
$$\mathrel{R_v}\ [\![[l_j = \varsigma(x_j)b_j, l = \varsigma(x)c]]\!]_v \approx (\nu o,o')(Q \mid v(o').\,!o'(l'',o'',v'').(\overline{l''}\langle o''\rangle \mid l(x).[\![c]\!]_{v''} + \textstyle\sum_{m \neq l} m(x).\overline{o}\langle l'',o'',v''\rangle))$$
$\Box$

The reverse implication also holds; that is, if we can find a ς-bisimulation relating the encodings of two ς-calculus terms, then they are ς-equivalent.

Theorem 6 If $[\![a]\!]_v \mathrel{R_v} [\![b]\!]_v$ for some ς-bisimulation $R_v$, then $a \simeq b$.
Proof. Let $[\![a]\!]_v \mathrel{R_v} [\![b]\!]_v$ for some ς-bisimulation $R_v$. We now claim that $a$ and $b$ have the same termination behaviour in all contexts. Clearly, by the operational correspondence, if $[\![a]\!]_v \Uparrow$ then also $a \Uparrow$, and the same holds for $b$. Also by the operational correspondence, if $[\![a]\!]_v \stackrel{\overline{v}(o)}{\Longrightarrow} P$ then $a \leadsto [l_i = \varsigma(x_i)b_i]$ with $P \approx [\![o := [l_i = \varsigma(x_i)b_i]]\!]$, and similarly for $b$. Therefore $a$ and $b$ must have the same termination behaviour. By the definition of $R_v$ this must also hold for all contexts that $a$ and $b$ can be put in. $\Box$
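The termination behaviour on which this proof pivots can be illustrated concretely. The sketch below is our own Python illustration, not part of the paper; `activate` and `converges` are hypothetical names. It models ς-objects as dictionaries and checks termination of an activation under a bounded recursion depth: $[l = \varsigma(x)x].l$ converges, while $[l = \varsigma(x)x.l].l$ diverges.

```python
import sys

def activate(obj, label):
    # a.l: run method l with the object itself bound to the self variable
    return obj[label](obj)

def converges(thunk, budget=5000):
    """Crude termination check: a diverging activation such as
    [l = sigma(x) x.l].l recurses forever and exhausts the budget."""
    old = sys.getrecursionlimit()
    sys.setrecursionlimit(budget)
    try:
        thunk()
        return True
    except RecursionError:
        return False
    finally:
        sys.setrecursionlimit(old)

a = {"l": lambda x: x}                   # [l = sigma(x) x]    -- terminates
w = {"l": lambda x: activate(x, "l")}    # [l = sigma(x) x.l]  -- diverges

assert converges(lambda: activate(a, "l"))
assert not converges(lambda: activate(w, "l"))
```

Of course a recursion budget only approximates divergence, but it captures the observable difference that the π-calculus encoding reports on the channel $v$.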
7 Conclusions and further work
In this paper we have described how to encode the simple untyped object calculus of Abadi and Cardelli into the asynchronous π-calculus without matching. We chose this target to see how simple a calculus we would need in order to encode the ς-calculus. As this paper shows, it is possible to encode the ς-calculus into our target calculus, but the price is somewhat high. The proofs of operational correspondence rely on specific assumptions about the use of method names in the encoding, and, as Section 6 shows, weak bisimilarity between encoded terms gives us a very fine-grained equivalence between ς-calculus terms, since it requires two objects to have the same method activation behaviour in order to be equivalent. In [San96] Sangiorgi investigates well-typed encodings of the ς-calculus into the π-calculus. His work is very similar to the work presented in this paper; however, Sangiorgi uses a version of the synchronous π-calculus extended with a case operator. The authors are currently working on an encoding of the imperative object calculus [AC95b, AC95a]. The imperative object calculus is interesting in that it incorporates references to objects, a phenomenon common to many object-oriented programming languages. Because of the presence of references, the semantics of the imperative object calculus is quite similar to our encoding.
References
[Abr89] S. Abramsky. The lazy lambda calculus. In D. Turner, editor, Research Topics in Functional Programming, pages 65-116. Addison-Wesley, 1989.

[AC94a] Martin Abadi and Luca Cardelli. A semantics of object types. In Proceedings of the 9th IEEE Symposium on Logic in Computer Science, pages 332-341. IEEE Computer Society Press, 1994.

[AC94b] Martin Abadi and Luca Cardelli. A theory of primitive objects: Second-order systems. In Proceedings of the European Symposium on Programming, volume 788 of Lecture Notes in Computer Science, pages 1-25. Springer-Verlag, 1994.

[AC95a] M. Abadi and L. Cardelli. An imperative object calculus: Basic typing and soundness. In SIPL '95: Proc. Second ACM SIGPLAN Workshop on State in Programming Languages. Technical Report UIUCDCS-R-95-1900, Department of Computer Science, University of Illinois at Urbana-Champaign, 1995.

[AC95b] Martin Abadi and Luca Cardelli. An imperative object calculus. Theory and Practice of Object Systems, 1(3):151-166, 1995.

[AC96] Martin Abadi and Luca Cardelli. A theory of primitive objects: Untyped and first-order systems. Information and Computation, 125(2):78-102, 1996.

[Ame89] P. America. Issues in the design of a parallel object-oriented language. Formal Aspects of Computing, 1(4):396-411, 1989.

[Bou92] Gerard Boudol. Asynchrony and the π-calculus. Technical report, INRIA Sophia-Antipolis, 1992.

[CS96] Roberto M. Amadio, Ilaria Castellani, and Davide Sangiorgi. On bisimulations for the asynchronous π-calculus. In Proceedings of CONCUR '96, Lecture Notes in Computer Science. Springer-Verlag, 1996.

[GR95] A. D. Gordon and G. D. Rees. Bisimilarity for a first-order calculus of objects with subtyping. In Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Programming Languages, St. Petersburg Beach, Florida, January 1996. ACM, 1995.

[HK95] Martin Hansen, Hans Hüttel, and Josva Kleist. Bisimulations for asynchronous mobile processes. In Proceedings of the Tbilisi Symposium on Language, Logic and Computation, 1995.

[HT91] K. Honda and M. Tokoro. An object calculus for asynchronous communication. In ECOOP '91, volume 512 of Lecture Notes in Computer Science, pages 133-147. Springer-Verlag, 1991.

[Mil92] Robin Milner. Functions as processes. Mathematical Structures in Computer Science, 2(2):119-141, 1992.

[PW92] Robin Milner, Joachim Parrow, and David Walker. A calculus of mobile processes, parts I and II. Information and Computation, 100:1-77, 1992.

[San96] Davide Sangiorgi. An interpretation of typed objects into typed π-calculus. Research Report RR-3000, INRIA Sophia-Antipolis, August 1996.

[Wal95] David Walker. Objects in the π-calculus. Information and Computation, 116:253-271, 1995.
Recent Publications in the BRICS Report Series

RS-96-38 Hans Hüttel and Josva Kleist. Objects as Mobile Processes. October 1996. 23 pp.