# Typing and Subtyping for Mobile Processes

Benjamin Pierce*   Davide Sangiorgi†

May 10, 1994

Abstract

The π-calculus is a process algebra that supports process mobility by focusing on the communication of channels. Milner's presentation of the π-calculus includes a type system assigning arities to channels and enforcing a corresponding discipline in their use. We extend Milner's language of types by distinguishing between the ability to read from a channel, the ability to write to a channel, and the ability both to read and to write. This refinement gives rise to a natural subtype relation similar to those studied in typed λ-calculi. The greater precision of our type discipline yields stronger versions of standard theorems about the π-calculus. These can be used, for example, to obtain the validity of β-reduction for the more efficient of Milner's encodings of the call-by-value λ-calculus, which fails in the ordinary π-calculus. We define the syntax, typing, subtyping, and operational semantics of our calculus, prove that the typing rules are sound, apply the system to Milner's λ-calculus encodings, and sketch extensions to higher-order process calculi and polymorphic typing.

1 Introduction
Milner, Parrow, and Walker's π-calculus [MPW92] achieved a remarkable simplification and generalization of its predecessors (including CCS [Mil89] and ECCS [EN86]) by focusing on naming and allowing the data values communicated along channels to be channels themselves. The calculus can be presented with a handful of rules, but is sufficiently expressive to describe concurrent systems in which the topology of communication may evolve dynamically. Moreover, the existence of natural embeddings of both lazy and call-by-value λ-calculi into the π-calculus [Mil90] suggests that it may form an appropriate foundation for the design of new programming languages. Milner extended the original π-calculus to a polyadic π-calculus, in which finite tuples of names, instead of single names, are the atomic unit of communication [Mil91]. The basic theory of the polyadic π-calculus straightforwardly generalizes that of the monadic fragment; furthermore, the fact that a tuple of names is exchanged at each communication step suggests a natural discipline of channel sorts.¹ For example, the expression

    Pt = b(t, f) . t̄⟨⟩ . Pt

denotes a process that accepts a pair of names, t and f, from a channel b, responds by sending a null tuple along t, and returns to its original state. Similarly,

    Pf = b(t, f) . f̄⟨⟩ . Pf

accepts t and f and sends a null tuple along f. These processes can be used as encodings of the boolean values true and false: to test whether a process waiting for input on b is Pt or Pf, it suffices to put it in parallel with the process

    test = (νx, y) (b̄⟨x, y⟩ . 0 | x() . Qt | y() . Qf)

which creates a pair of channels with the fresh names x and y, sends them along b, and waits for a response along x or y before continuing with Qt or Qf. (Here 0 is the inactive process and ν the restriction operator.) When test is run in parallel with Pt, the branch Qt is activated; when in parallel with Pf, on the other hand, Qf is activated. Milner observed that this trio of processes obeys a strict discipline in their use of names for communication: t, f, x, and y are all used to communicate tuples of arity zero, whereas b is always used to communicate pairs of names that are themselves used to communicate empty tuples. This situation can be described with the following sorting:

    t, f, x, y : Se        b : Sp
    Se ↦ ()                Sp ↦ (Se, Se)

A key idea here is that sort information is assigned only to channels; processes are either well-sorted under a particular set of assumptions for their bound and free names, or they are not. This stands in contrast to recent proposals that attempt to assign more informative types to processes, describing the sets of channels on which processes may communicate, their interaction protocols, freedom from deadlock, etc. Also, this sort information is completely static: it does not describe the sequencing of values communicated along a channel. (For example, it cannot describe a channel that is used to carry an alternating sequence of two- and three-element tuples.)

*Department of Computer Science, University of Edinburgh, The King's Buildings, Edinburgh, EH9 3JZ, U.K. Electronic mail: bcp@dcs.ed.ac.uk.
†Same postal address; electronic mail: sad@dcs.ed.ac.uk.
¹In the λ-calculus literature, the word "type" is standard; for process calculi, "sort" is more common. We use the two terms interchangeably.
Although it may be possible to develop sorting disciplines capturing more dynamic constraints on the behavior of processes, such constraints seem difficult to express while maintaining the essential characteristics of traditional type systems: the methodological perspective that types are best used to describe structural properties of the data manipulated by programs, as well as the more pragmatic requirement that well-typedness must be automatically and efficiently verifiable. Milner's sort discipline plays an essential role in later papers on properties of the π-calculus [Wal92, San92]; it has been further studied by Turner [Tur94], Gay [Gay93], and Vasconcelos and Honda [VH93], who consider the problem of inferring most general sortings. Our typing discipline retains the basic character of Milner's, while extending it in two dimensions. First, we replace by-name matching of sorts with the more straightforward notion of structural matching, a technical modification that allows a substantially more elegant presentation of our system of sorts.² Second, and more interestingly, we refine the language of sorts to include a notion of subsort allowing the use of a channel to be restricted to input-only or output-only in a given context. In Milner's π-calculus, two sorts with different names are considered to be different, so that the process ā⟨x⟩ . 0 is ill formed under the sorting

    a : S    x : S2
    S ↦ (S1)    S1 ↦ ()    S2 ↦ ()

even though it is well formed under the isomorphic sorting

    a : S    x : S1
    S ↦ (S1)    S1 ↦ ()    S2 ↦ ().

²A by-structure presentation of Milner's π-calculus sorting has been studied independently by Turner [Tur94]. His system is essentially identical to the fragment of ours in which subtyping is omitted.


By contrast, in our system only the recursive structure of sorts is significant. Although the change from by-name to by-structure matching entails some loss of precision in the sortings (we give up the ability to prevent mixing of "accidentally isomorphic" data structures), it substantially simplifies the technical aspects of the system by allowing sorts to be treated as regular trees [Cou83]. Our subsort relation can be motivated by the common situation in which two processes must cooperate in the use of a shared resource such as a printer. The printer provides a request channel p carrying values of some sort T, which represent data sent for printing by the client processes. If one client process has the form

    C1 = p̄⟨j1⟩ . p̄⟨j2⟩ . …,

then we expect that executing the program

    (νp:(T)) (P | C1 | C2)

should result in the print jobs represented by j1 and j2 eventually being received and processed, in that order, by the printer process P. But this is not necessarily the case: a misbehaving implementation of C2 can disrupt the protocol expected by P and C1 just by reading print requests from p and, for example, throwing them away:

    C2 = p(j:T) . C2.

We can prevent such bad behavior by distinguishing three kinds of access to a channel (the ability to send values, the ability to read values, and the ability to do both) and extending channel sorts with a tag declaring which operations are allowed: r for input, w for output, or b for both. Here, for example, the client processes should only be allowed to write to p; from their point of view, p's sort is (T)^w. The printer, on the other hand, should only read from p; from its point of view, p has sort (T)^r.

Such an approach is reminiscent of the restriction in the visibility of local state achieved in sequential languages using notions of data encapsulation: the appropriate behavior of each component of the system can be ensured statically by typechecking it in an environment where the sort of p allows only the appropriate form of access. More generally, we can define a relation ≤ on sorts and stipulate that an output operation x̄⟨y⟩ is well sorted only if Sy ≤ Sx, where y:Sy and x:(Sx)^I, with I either b or w. Similarly, an input operation x(y:T) is well sorted if Sx ≤ T, where x:(Sx)^I and I is either b or r. The subsort relation ≤ is formalized in Section 2 using a notion of simulation of regular trees and treated using techniques similar to those developed by Amadio and Cardelli for λ-calculi with subtyping and recursive types [AC91]. Section 3 proves the soundness of the resulting type system. The greater precision of our type discipline yields stronger versions of some standard theorems about the π-calculus. For example, in Sections 4 and 5 we obtain the validity of β-reduction for the more efficient of Milner's encodings of the call-by-value λ-calculus (which does not hold for the ordinary π-calculus). Section 6 sketches some possible extensions.
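The analogy with data encapsulation can be made concrete in a sequential language by exposing only one capability of a channel. The following sketch is ours (class names are illustrative): clients receive the write-only view of p, so the misbehaving C2 simply has no read operation to invoke.

```python
import queue

class Chan:
    # the underlying channel p
    def __init__(self):
        self.q = queue.Queue()

class WriteOnly:
    # the (T)^w view of p: clients may send print jobs but cannot read them
    def __init__(self, c): self._c = c
    def send(self, job): self._c.q.put(job)

class ReadOnly:
    # the (T)^r view of p: the printer may only receive
    def __init__(self, c): self._c = c
    def recv(self): return self._c.q.get()

p = Chan()
client, printer = WriteOnly(p), ReadOnly(p)
client.send("j1"); client.send("j2")
print(printer.recv(), printer.recv())   # j1 j2, received in order
print(hasattr(client, "recv"))          # False: C2's reading step p(j:T) is ruled out
```

Here the check is dynamic (an attribute lookup); the point of the sort system is that the same discipline is enforced statically, before the processes run.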

2 Basics
We now define the syntax, operational semantics, typing rules, and subsort relation of our calculus and develop some of its fundamental properties. Since our purpose here is to study basic theoretical properties of our type system rather than to propose a pragmatic notation for programming, we adopt an explicitly typed presentation in which every bound name is annotated with a sorting. The algorithmic problem of inferring these annotations is deferred to future investigation (cf. [Tur94, Gay93, VH93]).

2.1 Notational Preliminaries

Our basic syntactic categories are defined by the grammar in Figure 1. We use the metavariables S, T, and U for sorts; P, Q, and R for process expressions; Γ and Δ for sorting assumptions (or sortings); and a, b, c, etc. for channels (or names). For most of the paper, the only type of

    channel sorts    S ::= (S1, …, Sn)^I                tagged tuple
                         | μA. S                        recursive sort
                         | A                            sort variable

    I/O tags         I ::= r                            input only
                         | w                            output only
                         | b                            either

    process types    X ::= ◇                            ok process

    processes        P ::= 0                            nil process
                         | P | P                        parallel
                         | (νa:S) P                     restriction
                         | a(a1:S1, …, an:Sn) . P       input
                         | ā⟨a1, …, an⟩ . P             output
                         | !P                           replication
                         | wrong                        error

    sortings         Γ ::= ∅                            empty
                         | Γ, a:S                       sorted name

                          Figure 1: Syntax

well-formed processes is ◇; more interesting process types are considered in Section 6.2.³ S̃ stands for a sequence of sorts S1, …, Sn; similarly, ã stands for a sequence of names. We call a sort S guarded if it has the form S = (S̃)^I. We often write the process expression α . 0, where α is an input or output prefix, as just α. Also, following Milner [Mil91], we introduce an explicit replication operator ! instead of allowing systems of recursive definitions of process expressions; !P stands for the parallel composition of an infinite (countable) number of copies of P. To simplify the presentation, we omit the notational devices of abstraction and concretion [Mil91] and the basic operations of summation of processes and matching of names. We believe that our results can be extended straightforwardly to a calculus including summation, abstraction, and concretion, while the case of matching is more delicate; Section 6.1 discusses these extensions. The constant wrong stands for a process in which a run-time type error has occurred, i.e., one in which an attempted communication has involved an arity mismatch or, in the tagged reduction semantics of Section 3.2, a violation of an I/O restriction. The principal goal of the soundness theorem in Section 3 is to guarantee that a well-typed process expression cannot reduce to a process expression containing wrong. We restrict our attention to closed judgements Γ ⊢ P : ◇, i.e., those in which the free names of P are all bound in Γ, and we formally identify judgements up to renaming both of bound variables in P and of variables free in P and bound by Γ. This is equivalent to regarding alphabetic variable names (channel names and sort variables) as informal abbreviations for an underlying representation based on de Bruijn indices [dB72], and implies the usual conventions about name capture during substitution, alpha-conversion, side-conditions concerning freshness of names, etc.
It also follows from this point of view that the names bound by a context Γ are always taken to be pairwise distinct; this justifies an abuse of notation whereby Γ is regarded as a finite function from names to sorts: Γ(a) is the sort assigned to a by Γ. The order of bindings in Γ is ignored. Since we want to view sort expressions as abbreviations for regular trees, we require that the body of a recursive channel sort μA. S be contractive in the recursion variable A: either A does not appear at all in S, or else it appears inside at least one set of brackets. T{S/A} denotes the capture-avoiding substitution of S for A in T. Similarly, P{a/b} denotes the capture-avoiding substitution of a for b in P. If S = (T̃)^I, we call T̃ the object part of S, written obj(S). Thus:

    obj((S̃)^I) = S̃        obj(μA. S) = obj(S{μA. S/A}).

We write obj_i(S) for the ith component of obj(S).
³We use nesting of brackets to describe, recursively, the sorts of the channels carried by other channels. Other papers on the π-calculus or extensions of it (e.g. [Mil91, San92]) use a similar bracket-nesting notation for higher-order sorts, i.e. sorts of process abstractions. Our brief discussion of higher-order sorts in 6.2 uses arrow-sorts for this purpose.
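The substitution T{S/A} and the object-part function can be transcribed directly; the following sketch (representation ours) treats sorts as immutable Python values and relies on contractivity for obj to terminate:

```python
from dataclasses import dataclass

# Sketch of the sort syntax (class names are illustrative, not from the paper):
@dataclass(frozen=True)
class Tup:              # (S1, …, Sn)^I
    parts: tuple
    tag: str            # 'r', 'w', or 'b'

@dataclass(frozen=True)
class Mu:               # μA. S
    var: str
    body: object

@dataclass(frozen=True)
class Var:              # A
    name: str

def subst(t, s, a):
    """T{S/A}: substitute s for the free occurrences of variable a in t."""
    if isinstance(t, Var):
        return s if t.name == a else t
    if isinstance(t, Tup):
        return Tup(tuple(subst(p, s, a) for p in t.parts), t.tag)
    # μA'. U: the binder shadows a (bound names assumed pairwise distinct)
    if t.var == a:
        return t
    return Mu(t.var, subst(t.body, s, a))

def obj(s):
    """Object part: obj((S̃)^I) = S̃ and obj(μA. S) = obj(S{μA. S/A})."""
    while isinstance(s, Mu):        # contractivity guarantees termination
        s = subst(s.body, s, s.var)
    return s.parts

# Example: obj(μA. (A)^b) unrolls once, yielding the one-element tuple (μA. (A)^b,)
S = Mu("A", Tup((Var("A"),), "b"))
print(obj(S) == (S,))   # True
```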


Similarly, the function I extracts the top-level I/O annotation of a sort after unrolling as many outermost recursions as necessary to reach a guarded sort:

    I((S1, …, Sn)^I) = I        I(μA. S) = I(S{μA. S/A}).

I_i extracts the I/O annotation of the ith component of a sort:

    I_i((S1, …, Sn)^I) = I(Si)        I_i(μA. S) = I_i(S{μA. S/A}).

2.2 Reduction Semantics

Following Milner [Mil91], we present the operational semantics of the π-calculus using two relations: a structural equivalence on process terms that permits the rearrangement of parallel compositions, replications, and restrictions so that the participants in a potential communication can be brought into immediate proximity; and a reduction relation that describes the act of communication itself. Our semantics differs from the standard presentation only in the rules involving wrong.

2.2.1. Definition: The process-equivalence relation P ≡ Q is the least congruence closed under the following rules:

    P | Q ≡ Q | P                                              (E-Commut)
    (P | Q) | R ≡ P | (Q | R)                                  (E-Assoc)
    P | 0 ≡ P                                                  (E-Zero)
    !P ≡ P | !P                                                (E-Repl)
    ((νa:S) P) | Q ≡ (νa:S) (P | Q)  when a not free in Q      (E-Extrusion)
    (νa:S) (νb:T) P ≡ (νb:T) (νa:S) P                          (E-Switch)
    (νa:S) 0 ≡ 0                                               (E-Drop)
    P | wrong ≡ wrong                                          (E-Par-Wrong)
    !wrong ≡ wrong                                             (E-Repl-Wrong)
    (νa:S) wrong ≡ wrong                                       (E-Restr-Wrong)

(Note that the side condition on rule E-Extrusion can be viewed as a consequence of our convention of regarding names as de Bruijn indices.) By "least congruence" we mean that a process P is related to P′ "in a single step" if P′ can be obtained from P by replacing a subphrase of P by another subphrase that is equivalent according to one of these rules.

2.2.2. Definition: The one-step reduction P ⟶ Q is the least relation closed under the following rules:

    if m = n then a(b1:S1, …, bm:Sm) . P | ā⟨c1, …, cn⟩ . Q ⟶ P{c1, …, cn / b1, …, bm} | Q    (R-Comm)
    if m ≠ n then a(b1:S1, …, bm:Sm) . P | ā⟨c1, …, cn⟩ . Q ⟶ wrong                           (R-Comm-Wrong)
    if P ⟶ P′ then P | Q ⟶ P′ | Q                                                            (R-Par)
    if P ⟶ P′ then (νa:S) P ⟶ (νa:S) P′                                                      (R-Restr)
    if P ≡ P′, P′ ⟶ Q′, and Q′ ≡ Q, then P ⟶ Q                                               (R-Eqv)
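A single application of R-Comm (and its arity-mismatch variant R-Comm-Wrong) can be sketched over a flat parallel composition; the tuple encoding of processes below is ours, not the paper's concrete syntax:

```python
# ('in', a, binders, P)  ~  a(b1, …, bm) . P
# ('out', a, names, Q)   ~  ā⟨c1, …, cn⟩ . Q      (sort annotations omitted)
def step(procs):
    """One reduction step on a list of parallel components, or the list unchanged."""
    for i, p in enumerate(procs):
        for j, q in enumerate(procs):
            if i != j and p[0] == 'in' and q[0] == 'out' and p[1] == q[1]:
                (_, _, binders, P), (_, _, names, Q) = p, q
                if len(binders) != len(names):          # R-Comm-Wrong
                    return ['wrong']
                sigma = dict(zip(binders, names))       # the substitution {c̃/b̃}
                rest = [r for k, r in enumerate(procs) if k not in (i, j)]
                return rest + [subst_proc(P, sigma), Q] # R-Comm
    return procs                                        # no redex

def subst_proc(P, sigma):
    """Apply a name substitution to a process; input binders shadow it."""
    if P == ('nil',):
        return P
    kind, a, xs, body = P
    a2 = sigma.get(a, a)
    if kind == 'out':
        return ('out', a2, [sigma.get(x, x) for x in xs], subst_proc(body, sigma))
    sigma2 = {k: v for k, v in sigma.items() if k not in xs}
    return ('in', a2, xs, subst_proc(body, sigma2))

nil = ('nil',)
# a(b) . b̄⟨⟩  |  ā⟨c⟩   ⟶   c̄⟨⟩ | 0
P = ('in', 'a', ['b'], ('out', 'b', [], nil))
Q = ('out', 'a', ['c'], nil)
print(step([P, Q]))                               # the communicated name c replaces b
print(step([P, ('out', 'a', ['c', 'd'], nil)]))   # ['wrong']: arity mismatch
```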

2.3 Subsorting

2.3.1. Definition: The subsort relation ≤ is generated by a basic sub-tag relation I ≤ J, the least preorder containing b ≤ r and b ≤ w.

2.3.2. Definition: An I/O-tree is a finitely branching tree whose nodes are labelled with symbols from {r, w, b}. The metavariables 𝒮, 𝒯 and 𝒰 range over trees; Tr denotes the class of all trees. The tree whose root is labelled I and whose subtrees are 𝒯1, …, 𝒯n is written [𝒯1, …, 𝒯n]^I.

2.3.3. Definition: To each closed sort S we associate an I/O-tree called Tree(S); it is the unique tree satisfying the following equations:
1. if S = (S1, …, Sn)^I then Tree(S) = [Tree(S1), …, Tree(Sn)]^I;
2. if S = μA. S′ then Tree(S) = Tree(S′{μA. S′/A}).

2.3.4. Definition: A relation ℛ ⊆ Tr × Tr is a tree simulation if (𝒮, 𝒯) ∈ ℛ implies:
1. if 𝒯 = [𝒯1, …, 𝒯n]^b then 𝒮 = [𝒮1, …, 𝒮n]^b and, for all 1 ≤ i ≤ n, both (𝒮i, 𝒯i) ∈ ℛ and (𝒯i, 𝒮i) ∈ ℛ;
2. if 𝒯 = [𝒯1, …, 𝒯n]^r then 𝒮 = [𝒮1, …, 𝒮n]^I with I ≤ r and, for all 1 ≤ i ≤ n, (𝒮i, 𝒯i) ∈ ℛ;
3. if 𝒯 = [𝒯1, …, 𝒯n]^w then 𝒮 = [𝒮1, …, 𝒮n]^I with I ≤ w and, for all 1 ≤ i ≤ n, (𝒯i, 𝒮i) ∈ ℛ.

2.3.5. Definition: The tree subsort relation 𝒮 ≤tr 𝒯 holds iff (𝒮, 𝒯) ∈ ℛ for some tree simulation ℛ.

2.3.6. Definition: The subsort relation S ≤ T on sorts holds iff Tree(S) ≤tr Tree(T). We write S ≶ T when S ≤ T and T ≤ S. Note that the subsort relation is invariant under folding and unfolding of recursively defined sorts, since Tree(μA. S) = Tree(S{(μA. S)/A}).

There is a close analogy between channel sorts and the reference types found in some programming languages. In Reynolds's language Forsythe [Rey88], for example, the type of a mutable storage cell holding a value of type T can be written ref(T), an abbreviation for sink(T) ∧ source(T). (Reynolds's actual notation is slightly different.) That is, a mutable cell containing an element of T is viewed as a connected pair of locations (or one location with two different types): a "source" for elements of T and a "sink" into which elements of T can be placed. The source constructor behaves covariantly in the subtype relation, since a source for elements of T can be used as a source for elements of U if T ≤ U; whereas the sink constructor is contravariant: a sink for elements of U can be used in a context that expects a sink for elements of T, but not the other way around. The ref constructor, like our b tag, is constrained by both requirements and behaves non-variantly in the subtype relation.

A different analogy relates our definition of subsorting and the subtype relations found in some typed λ-calculi [Car86, CW85, Rey85]. A function f with type S → T can be thought of as a process that reads a value of type S from one location and writes a result of type T to another; to start f, we send it a pair of channels along a request channel a_f, telling it where to find its argument and where to put its result. The sort of a_f itself, from the point of view of a process invoking f, is ((S)^r, (T)^w)^w. Note that S occurs in a contravariant position (under both an r tag and a w tag) and T occurs in a covariant position (under two w tags). Thus, the intuitive "compilation" (S → T) ↦ ((S)^r, (T)^w)^w validates the λ-calculus subtyping rule for arrow types: S1 → T1 ≤ S2 → T2 if S2 ≤ S1 and T1 ≤ T2.

Defining the subsort relation on two sort expressions via a relation on their regular tree expansions has the advantages of being semantically natural and technically well suited to our needs in the proof of soundness
in Section 3. Furthermore, it supports straightforward verification of some important basic properties, for example, transitivity.

2.3.7. Proposition: The relation ≤tr is transitive.

Proof: Suppose 𝒮1 ≤tr 𝒮2 ≤tr 𝒮3. To prove that 𝒮1 ≤tr 𝒮3 we have to find a tree simulation ℛ containing the pair (𝒮1, 𝒮3). Let ℛ1 and ℛ2 be tree simulations containing the pairs (𝒮1, 𝒮2) and (𝒮2, 𝒮3), respectively, and define

    ℛ = ℛ1ℛ2 ∪ ℛ2ℛ1
      = {(𝒮, 𝒰) | for some 𝒯, (𝒮, 𝒯) ∈ ℛ1 and (𝒯, 𝒰) ∈ ℛ2}
        ∪ {(𝒰, 𝒮) | for some 𝒯, (𝒰, 𝒯) ∈ ℛ2 and (𝒯, 𝒮) ∈ ℛ1}.

The relation ℛ clearly includes the pair (𝒮1, 𝒮3), so it remains only to check that ℛ is a tree simulation. For each pair of trees in ℛ, there are six cases to consider, since ℛ is the union of two relations and the definition of tree simulation has three clauses.

Suppose (𝒮, 𝒰) ∈ ℛ1ℛ2, and 𝒰 = [𝒰1, …, 𝒰n]^b. Then there is some 𝒯 such that (𝒮, 𝒯) ∈ ℛ1 and (𝒯, 𝒰) ∈ ℛ2. By the definition of tree simulation, (𝒯, 𝒰) ∈ ℛ2 implies

    𝒯 = [𝒯1, …, 𝒯n]^b with (𝒯i, 𝒰i), (𝒰i, 𝒯i) ∈ ℛ2.

Similarly, (𝒮, 𝒯) ∈ ℛ1 implies

    𝒮 = [𝒮1, …, 𝒮n]^b with (𝒮i, 𝒯i), (𝒯i, 𝒮i) ∈ ℛ1.

Thus

    𝒮 = [𝒮1, …, 𝒮n]^b with (𝒮i, 𝒰i), (𝒰i, 𝒮i) ∈ ℛ,

as required.

Suppose (𝒮, 𝒰) ∈ ℛ1ℛ2, and 𝒰 = [𝒰1, …, 𝒰n]^r. Then there is some 𝒯 such that (𝒮, 𝒯) ∈ ℛ1 and (𝒯, 𝒰) ∈ ℛ2. The latter implies 𝒯 = [𝒯1, …, 𝒯n]^I with I ∈ {b, r} and, for all 1 ≤ i ≤ n, (𝒯i, 𝒰i) ∈ ℛ2. If I = r, then, since ℛ1 is a tree simulation, we have 𝒮 = [𝒮1, …, 𝒮n]^J with J ∈ {b, r} and, for all 1 ≤ i ≤ n, (𝒮i, 𝒯i) ∈ ℛ1. Thus, (𝒮i, 𝒰i) ∈ ℛ1ℛ2 for all 1 ≤ i ≤ n, as required. The case where I = b proceeds similarly.

The remaining four cases are similar. □

In practice, it is also useful to have a finitary characterization of subsorting in terms of a deterministic algorithm. Let Σ range over subsorting contexts:

    Σ ::= ∅ | Σ, S ≤ T

2.3.8. Definition: The algorithmic subsort relation Σ ⊢ S ≤ T is the least relation closed under the following rules (we write Σ ⊢ S ≶ T when Σ ⊢ S ≤ T and Σ ⊢ T ≤ S):

    if, for each i, Σ ⊢ Si ≶ Ti, then Σ ⊢ (S1, …, Sn)^b ≤ (T1, …, Tn)^b              (A-BB)
    if I ≤ r and, for each i, Σ ⊢ Si ≤ Ti, then Σ ⊢ (S1, …, Sn)^I ≤ (T1, …, Tn)^r    (A-XI)
    if I ≤ w and, for each i, Σ ⊢ Ti ≤ Si, then Σ ⊢ (S1, …, Sn)^I ≤ (T1, …, Tn)^w    (A-XO)
    if (S ≤ T) ∈ Σ, then Σ ⊢ S ≤ T                                                   (A-Ass)
    if Σ, μA. S ≤ T ⊢ S{(μA. S)/A} ≤ T, then Σ ⊢ μA. S ≤ T                           (A-Rec-L)
    if Σ, S ≤ μB. T ⊢ S ≤ T{(μB. T)/B}, then Σ ⊢ S ≤ μB. T                           (A-Rec-R)

These rules can be read as an algorithm by imposing an ordering on them. We attempt to construct a derivation of a subsorting judgement Σ ⊢ S ≤ T by applying the rules backwards in a goal-directed fashion, using A-Rec-L and A-Rec-R in either order when both apply, but preferring A-Ass over A-Rec-L and A-Rec-R whenever both apply. Derivations that obey these constraints are called algorithmic derivations. When Σ is empty we often omit it. Note that our notation for the subsort relation defined by tree simulation (S ≤ T) and the notation for the relation defined by this algorithm (∅ ⊢ S ≤ T) are nearly identical. This notational ambiguity will be justified by the fact that the two relations coincide when Σ is empty (Corollary 2.4.9). Note that when the subsort algorithm is started on a closed judgement, all the subgoals generated during its execution are also closed judgements. Also, note that the nondeterminism of the algorithm is inessential, in the sense that the rules can be ordered so that no backtracking is needed.

2.4 Properties of Subsorting

We now establish a number of properties of the subsorting algorithm, including proofs of its termination and its soundness and completeness for the definition of subsorting as a tree simulation. For the moment, we use the notation Σ ⊢ S ≤ T exclusively for the algorithmic subsort relation. The soundness and completeness of the algorithm will later justify our using the same notation to refer ambiguously to either definition of subsorting.

2.4.1. Lemma: The subsorting algorithm is terminating.

Proof: Given a sort U, let Sub[U] be the set obtained from the set of all the subterms of U by replacing each variable appearing free in a subterm with its definition; formally, Sub[U] is defined inductively as follows:

    Sub[(U1, …, Un)^I] = {(U1, …, Un)^I} ∪ Sub[U1] ∪ … ∪ Sub[Un]
    Sub[μA. U] = {μA. U} ∪ {S{(μA. U)/A} | S ∈ Sub[U]}
    Sub[A] = {A}.

It is easy to see that Sub[U] is finite for any U; indeed, Sub[U] can have no more elements than the number of distinct subterms of U.

Now, let S and T be two sorts to which the algorithm is applied, and let Sub[S, T] = Sub[S] ∪ Sub[T]. We begin by showing that each node in a (not necessarily finite) execution trace E of the algorithm starting from the input ∅ ⊢ S ≤ T satisfies three properties; the termination of the algorithm will then follow from these properties. Let Σ ⊢ S′ ≤ T′ be the goal at a node N in E. Then:
1. S′, T′ ∈ Sub[S, T];
2. for every assumption U1 ≤ U2 ∈ Σ, both U1, U2 ∈ Sub[S, T];
3. no assumption occurs in Σ more than once: if both U1 ≤ U2 ∈ Σ and U1′ ≤ U2′ ∈ Σ, then either U1 ≠ U1′ or U2 ≠ U2′.

These three properties clearly hold for the root ∅ ⊢ S ≤ T of the execution trace. Moreover, at each step of the algorithm, if the current goal node satisfies the three properties, then each immediate subgoal generated by the algorithm does too. For instance, take the rule A-Rec-L:

    if Σ, μA. S ≤ T ⊢ S{(μA. S)/A} ≤ T, then Σ ⊢ μA. S ≤ T

By the definition of the Sub function, if μA. S′ ∈ Sub[S], then S′{(μA. S′)/A} ∈ Sub[S]; thus property (1) holds. As for property (2), the pairs in Σ are in Sub[S, T] × Sub[S, T] by assumption, whereas the pair μA. S ≤ T is in Sub[S, T] × Sub[S, T] by property (1) of the main goal. Property (3) holds because the assumption μA. S ≤ T cannot already be in Σ. (If it were, the algorithm would have applied rule A-Ass rather than A-Rec-L.)

Thus, each node of the execution trace E satisfies (1)-(3). We now associate a measure to each node, and, using properties (1)-(3), give a well-founded ordering such that each step of the algorithm decreases the measure of any new goals with respect to this ordering. It follows that the algorithm terminates.
The measure is

    M[Σ ⊢ S ≤ T] = (n, m),

where n is the number of pairs in Σ and m is the maximum nesting of brackets in either S or T. Measures are ordered as follows: (n, m) > (n′, m′) if n < n′ or (n = n′ and m > m′). That is, M[Σ1 ⊢ S1 ≤ T1] > M[Σ2 ⊢ S2 ≤ T2] if Σ1 is shorter than Σ2, or the two Σ's have the same size and the maximum nesting of brackets is deeper in S1 and T1 than in S2 and T2. It is easy to check that the application of a rule by the algorithm generates new nodes with a smaller measure than the main goal. The rule A-Ass does not generate any subgoals. The rules A-Rec-L and A-Rec-R strictly increase the length of the assumptions. The rules A-BB, A-XI, and A-XO remove an outer layer of brackets on both sides and thus decrease the maximum bracket nesting, while leaving the set of assumptions unchanged. Finally, the measure cannot decrease forever, since the first component can never become larger than the cardinality of Sub[S, T] × Sub[S, T], which we have previously seen is finite, and the second component cannot become smaller than 1. □

2.4.2. Lemma [Semi-completeness of the algorithm]: If Tree(S) ≤tr Tree(T), then the algorithm does not return false when applied to ∅ ⊢ S ≤ T.

Proof: Call a goal Σ ⊢ S ≤ T sound if
1. Tree(S) ≤tr Tree(T);
2. Tree(U1) ≤tr Tree(U2) for each U1 ≤ U2 ∈ Σ.

Fact: If a goal is sound, then the conclusion of one of the rules in Definition 2.3.8 matches the goal, and the new subgoals corresponding to the premises of this rule are all sound goals.

Proof of the fact: Let Σ ⊢ S ≤ T be a sound goal. If S ≤ T ∈ Σ, then the rule A-Ass can be applied, generating no subgoals. On the other hand, suppose S ≤ T ∉ Σ. If S and T are both guarded, then one of rules A-BB, A-XI or A-XO must be applicable, for otherwise the hypothesis Tree(S) ≤tr Tree(T) would collapse; this follows straightforwardly from the fact that Tree(S) ≤tr Tree(T) implies the existence of a simulation containing the pair (Tree(S), Tree(T)). To see that the new goals produced are sound, take, for example, the case of rule A-XI. Each of its premises has the form Σ ⊢ Si ≤ Ti. Since the set of assumptions is not modified, we only have to check that Tree(Si) ≤tr Tree(Ti); this follows from Tree(S) ≤tr Tree(T) and the definition of tree simulation. Finally, if S or T is not guarded, then either rule A-Rec-L or A-Rec-R can be applied; in each case, the soundness of the new goal follows from the soundness of the old goal and the fact that unfolding a term does not change its tree.

Proof of the lemma: Now consider the derivation produced by the algorithm on input ∅ ⊢ S ≤ T with Tree(S) ≤tr Tree(T). The algorithm starts with the sound goal ∅ ⊢ S ≤ T. By the above fact, all the goals it generates are sound, and on a sound goal the algorithm can always either proceed or return success. Therefore the algorithm cannot fail when started with a sound input. □

2.4.3. Corollary [Completeness of the algorithm]: If Tree(S) ≤tr Tree(T), then ⊢ S ≤ T.

Proof: If Tree(S) ≤tr Tree(T), then by Lemma 2.4.2 the algorithm does not return false; but by Lemma 2.4.1 the algorithm is terminating; hence it must return true. □

2.4.4. Lemma: Σ ⊢ S ≤ T implies Σ, Σ′ ⊢ S ≤ T for any Σ′.

Proof: By inspection of the rules. □
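Read as a goal-directed procedure, the rules of Definition 2.3.8 fit in a few lines. This is a sketch in our own representation (the assumption set Σ as a Python frozenset; A-Ass checked before the A-Rec rules, as the algorithmic reading requires; closed sorts only):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tup:              # (S1, …, Sn)^I with I in {'r', 'w', 'b'}
    parts: tuple
    tag: str

@dataclass(frozen=True)
class Mu:               # μA. S
    var: str
    body: object

@dataclass(frozen=True)
class Var:              # A
    name: str

def subst(t, s, a):
    """T{S/A} (bound names assumed pairwise distinct)."""
    if isinstance(t, Var):
        return s if t.name == a else t
    if isinstance(t, Tup):
        return Tup(tuple(subst(p, s, a) for p in t.parts), t.tag)
    return t if t.var == a else Mu(t.var, subst(t.body, s, a))

def sub(S, T, assm=frozenset()):
    """Σ ⊢ S ≤ T, following A-Ass, A-Rec-L/R, A-BB, A-XI, A-XO."""
    if (S, T) in assm:                                  # A-Ass
        return True
    if isinstance(S, Mu):                               # A-Rec-L
        return sub(subst(S.body, S, S.var), T, assm | {(S, T)})
    if isinstance(T, Mu):                               # A-Rec-R
        return sub(S, subst(T.body, T, T.var), assm | {(S, T)})
    if len(S.parts) != len(T.parts):
        return False
    if T.tag == 'b':                                    # A-BB: tags b, components ≶
        return S.tag == 'b' and all(
            sub(s, t, assm) and sub(t, s, assm)
            for s, t in zip(S.parts, T.parts))
    if T.tag == 'r':                                    # A-XI: I ≤ r, covariant
        return S.tag in ('b', 'r') and all(
            sub(s, t, assm) for s, t in zip(S.parts, T.parts))
    return S.tag in ('b', 'w') and all(                 # A-XO: I ≤ w, contravariant
        sub(t, s, assm) for s, t in zip(S.parts, T.parts))

unit_b = Tup((), 'b')
print(sub(Tup((unit_b,), 'b'), Tup((unit_b,), 'r')))   # True: b may be used as r
print(sub(Tup((unit_b,), 'b'), Tup((unit_b,), 'w')))   # True: b may be used as w
print(sub(Tup((unit_b,), 'r'), Tup((unit_b,), 'w')))   # False: the one-sided sorts are incomparable
S = Mu("A", Tup((Var("A"),), "b")); T = Mu("A", Tup((Var("A"),), "r"))
print(sub(S, T))                                       # True: μA.(A)^b ≤ μA.(A)^r
```

The assumption set plays exactly the role of Σ in the termination argument: once a pair of (unfolded) sorts has been assumed, A-Ass closes the loop instead of unrolling again.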

2.4.5. Lemma:
1. Suppose Σ ⊢ μA. S ≤ T and (μA. S ≤ T) ∉ Σ. Then Σ, μA. S ≤ T ⊢ U1 ≤ U2 implies Σ ⊢ U1 ≤ U2.
2. Suppose Σ ⊢ S ≤ μA. T and (S ≤ μA. T) ∉ Σ. Then Σ, S ≤ μA. T ⊢ U1 ≤ U2 implies Σ ⊢ U1 ≤ U2.

Proof: We show just part (1), since part (2) is analogous. The proof proceeds by induction on the length of a derivation of Σ, μA. S ≤ T ⊢ U1 ≤ U2. When the last rule used in the derivation is A-Ass, we must distinguish two subcases:
1. U1 ≤ U2 ∈ Σ: In this case, the rule A-Ass can also be applied with the smaller premise Σ ⊢ U1 ≤ U2.
2. U1 = μA. S and U2 = T: By hypothesis, Σ ⊢ μA. S ≤ T holds, which is the required judgement.

When the last rule of the derivation is not A-Ass, the applicability of the rule does not depend on the contents of Σ; the conclusion follows by induction. □

2.4.6. Corollary:
1. If Σ ⊢ μA. S ≤ T, then Σ ⊢ S{μA. S/A} ≤ T.
2. If Σ ⊢ S ≤ μA. T, then Σ ⊢ S ≤ T{μA. T/A}.

Proof: Since we stipulated at the outset that the bodies of recursive sorts must be contractive in the recursion variable, each of S and T can be unrolled a finite number of times to obtain guarded sorts S′ and T′. Proceed by induction on the total number of unrollings needed to reach this state. In the induction step, when either S or T has the form of a recursion, the last rule applied in a derivation of Σ ⊢ S ≤ T must be either A-Rec-L or A-Rec-R; consider A-Rec-L (the other case is symmetric). Then S = μA. S1 and Σ, μA. S1 ≤ T ⊢ S1{μA. S1/A} ≤ T. Use Lemma 2.4.5 to obtain Σ ⊢ S1{μA. S1/A} ≤ T, as required for part (1). Part (2) then follows by induction. □

2.4.7. Corollary: Suppose ⊢ S ≤ T. Then for the guarded S′ obtained by unfolding all the outermost recursions of S and the guarded T′ formed by unfolding the outermost recursions of T, it holds that ⊢ S′ ≤ T′.

2.4.8. Theorem [Soundness of the algorithm]: If ⊢ S ≤ T then Tree(S) ≤tr Tree(T).

Proof: By Corollary 2.4.7 and the fact that the unfolding of a term does not change its tree, it suffices to prove the theorem for the case where S and T are guarded. We do this by showing that

    ℛ = {(Tree(S), Tree(T)) | ⊢ S ≤ T and S and T guarded}

is a tree simulation. We only consider clause (3) of Definition 2.3.4; clauses (1) and (2) are similar. Assume that (Tree(S), Tree(T)) ∈ ℛ and

    Tree(T) = [𝒯1, …, 𝒯n]^w.

Since T is guarded, we have T = (T1, …, Tn)^w with 𝒯i = Tree(Ti) for each i. Since S is also guarded, the final rule used in the derivation of ⊢ S ≤ T must be A-XO. Therefore, S = (S1, …, Sn)^I with I ≤ w,

    ⊢ Ti ≤ Si for each i,    (1)

and Tree(S) = [Tree(S1), …, Tree(Sn)]^I. To satisfy the definition of tree simulation, it remains to show that

    (Tree(Ti), Tree(Si)) ∈ ℛ    (2)

for each i. From (1) and Corollary 2.4.7 there are guarded terms Ti′, Si′ that are unfoldings of Ti, Si, respectively, and such that ⊢ Ti′ ≤ Si′. By the definition of ℛ, (Tree(Ti′), Tree(Si′)) ∈ ℛ; but the unfolding of a term does not change its tree, so Tree(Ti′) = Tree(Ti) and Tree(Si′) = Tree(Si). This proves (2). □

2.4.9. Corollary [Equivalence of the definitions of subtyping]: ⊢ S ≤ T iff S ≤ T (i.e. Tree(S) ≤tr Tree(T)), for any S and T.

Proof: By Corollaries 2.4.3 and 2.4.8. □

This equivalence justifies our use of the symbol ≤ for both the tree-simulation and the algorithmic subsort relations. Indeed, for the remainder of the paper we shall ignore the distinction between the two systems. We close the section on subsorting with some technical lemmas that we shall need later.

2.4.10. Lemma: If S ≤ (U1, …, Um)^r and S ≤ (T1, …, Tn)^w, then m = n and, for each i, Ti ≤ Ui.

Proof: By Corollary 2.4.7, we may assume without loss of generality that S is guarded, i.e. S = (S1, …, Sp)^I. The final rule in an (algorithmic) derivation of ⊢ S ≤ (U1, …, Um)^r can only be A-XI; this implies that p = m and, for each i, that ⊢ Si ≤ Ui. Similarly, the final rule of a derivation of ⊢ S ≤ (T1, …, Tn)^w must be A-XO, which yields p = n and ⊢ Ti ≤ Si for each i. So m = n and, by the transitivity of subtyping, Ti ≤ Ui for each i. □

2.4.11. Lemma:
1. Suppose U ≤ (T̃)^r and U ≤ (S̃)^w. Then I(U) = b with S̃ ≤ obj(U) ≤ T̃.
2. Suppose (T̃)^b ≤ U ≤ (S̃)^w; then I(U) ≤ w with S̃ ≤ obj(U) ≤ T̃.

Proof: Similar to the proof of Lemma 2.4.10. □

2.4.12. Lemma: If S ≤ T, then I(S) ≤ I(T).

Proof: By Corollary 2.4.7, it suffices to consider the case where S and T are guarded. The last rule in a derivation of ⊢ S ≤ T is therefore one of A-BB, A-XI, and A-XO. The result is immediate in each case. □

2.4.13. Lemma: If I(T) ≤ w and S ≤ T, then I_i(T) ≤ I_i(S) for each i.

Proof: Again, it suffices to consider the case where S and T are guarded. Since I(T) ≤ w, the final rule in a derivation of ⊢ S ≤ T is either A-BB or A-XO. In either case, the right-hand premise yields ⊢ Ti ≤ Si, from which the result follows by Lemma 2.4.12. □

2.5 Typing

The typing judgement Γ ⊢ P : ◇ asserts that "P is a well-behaved process under the assumptions Γ," i.e. if P is placed in an execution context whose free names support the protocol described by Γ, then the execution of P can never encounter an internal error in communication, and P's use of these names also obeys the protocol. There is one typing rule for each syntactic form except wrong, which does not behave correctly under any assumptions. The inert process 0 is well behaved in any context:

    Γ ⊢ 0 : ◇    (T-Nil)

The parallel composition of two processes is well behaved if each is well behaved in isolation. (The possibility of a bad communication between P and Q is detected as a failure by one or both to satisfy the requirements imposed by Γ.) Similarly, a replication of P is well behaved if a single copy is.

    if Γ ⊢ P : ◇ and Γ ⊢ Q : ◇, then Γ ⊢ P | Q : ◇    (T-Par)
?P : (T-Repl) ? !P : A process whose outermost constructor is a restriction is well behaved if its body observes the constraints imposed both by ? and by the declared sorting of the new local channel. ?; a:S  P : (T-Restr) ?  ( a:S) P : The interesting cases are the rules for input and output. In order to be sure that the input expression a(b1 :S1 ; : : :; bn:Sn ) : P is well behaved, we must rst show that the sorting of a in the current context has arity n and guarantees that a tuple of values read from a will have sorts smaller than S1 ; : : :; Sn. These two conditions can be combined by requiring that ?(a) (S1 ; : : :; Sn)r . (The use of the subsort relation 0 0 0 0 validates the cases where ?(a) = (S1 ; : : :; Sn )b or when ?(a) = (S1 ; : : :; Sn)r for some S1 S1 ; : : :; Sn Sn .) 11

Furthermore, we must check that the body P is well behaved assuming that the channels b1 ; : : :; bn behave consistently with the sorts S1 ; : : :; Sn .  ?(a) (S1 ; : : :; Sn )r ?; b1 :S1 ; : : :; bn:Sn  P : (T-In) ?  a(b1 :S1 ; : : :; bn:Sn ) : P : The case for output is analogous. To verify that the output expression ahb1 ; : : :; bni : P is well behaved, we must check that the sort of a in ? permits a to be used for outputting the n-tuples of values b1; : : :; bn. Using the subsort relation, we can express this constraint elegantly by stipulating that the sort of a should be a subtype of the tuple of the sorts of the bi 's. Finally, P itself must be well-behaved.  ?(a) (?(b1 ); : : :; ?(bn))w ?  P : (T-Out) ?  ahb1; : : :; bni : P : 2.5.1. Lemma: Syntax-directedness of typing] For each process constructor C, there is at most one typing rule R such that every valid derivation of ?  P : , where P's outer constructor is C, ends with an instance of R. Moreover, each typing rule has the property that all metavariables appearing in its premises also appear in its conclusion. Thus, given a derivation with conclusion ?  P : , we can uniquely determine the conclusions of its immediate subderivations by examining the form of P.
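The typing rules are syntax-directed, so they can be read off directly as a checker. The sketch below is our own illustration (not the authors' code), over non-recursive sorts represented as (tag, objects) pairs and processes represented as tagged tuples; `wrong` and any unknown form are rejected.

```python
def subtype(s, t):
    # Algorithmic subsorting on non-recursive sorts (A-BB, A-XI, A-XO).
    (i, ss), (j, ts) = s, t
    if len(ss) != len(ts) or not (i == j or i == 'b'):
        return False
    if j == 'r':
        return all(subtype(a, b) for a, b in zip(ss, ts))
    if j == 'w':
        return all(subtype(b, a) for a, b in zip(ss, ts))
    return all(subtype(a, b) and subtype(b, a) for a, b in zip(ss, ts))

def typecheck(gamma, p):
    kind = p[0]
    if kind == 'nil':                                  # T-Nil
        return True
    if kind == 'par':                                  # T-Par
        return typecheck(gamma, p[1]) and typecheck(gamma, p[2])
    if kind == 'repl':                                 # T-Repl
        return typecheck(gamma, p[1])
    if kind == 'res':                                  # T-Restr
        _, a, s, body = p
        return typecheck({**gamma, a: s}, body)
    if kind == 'in':                                   # T-In
        _, a, binders, body = p
        sorts = [s for (_, s) in binders]
        return (subtype(gamma[a], ('r', sorts)) and
                typecheck({**gamma, **dict(binders)}, body))
    if kind == 'out':                                  # T-Out
        _, a, args, body = p
        return (subtype(gamma[a], ('w', [gamma[b] for b in args])) and
                typecheck(gamma, body))
    return False                                       # wrong is never well typed
```

With this sketch, a channel declared read-only (tag r) accepts inputs but rejects any output on it, exactly the discipline T-In and T-Out enforce.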

2.6 Properties of Typing

2.6.1. Lemma: [Weakening] If Γ ⊢ P then Γ, a:S ⊢ P for any S.
Proof: Since P is well-formed in a context where a is not bound, a cannot occur free in P; so the sort of a plays no role in the typing of P. □

2.6.2. Lemma: [Narrowing] If Γ, a:S ⊢ P and T ≤ S then Γ, a:T ⊢ P.
Proof: By induction on the structure of P.
1. P = 0: Trivial.
2. P = P₁ | P₂: Use the induction hypothesis on the subcomponents P₁ and P₂.
3. P = (νb:U) Q: We know that Γ, a:S ⊢ P, which, by the syntax-directedness of typing, means that

    Γ, a:S, b:U ⊢ Q.    (3)

Now, Γ, a:T ⊢ P holds if Γ, b:U, a:T ⊢ Q, which can be inferred from (3) and the induction hypothesis.
4. P = b(x̃:S̃).Q: Let Γ′ = Γ, a:S. From the hypothesis Γ′ ⊢ P we get

    Γ′, x̃:S̃ ⊢ Q    (4)
    ⊢ Γ′(b) ≤ (S̃)^r.    (5)

Let Γ″ = Γ, a:T. To infer Γ″ ⊢ P we have to show

    Γ″, x̃:S̃ ⊢ Q
    ⊢ Γ″(b) ≤ (S̃)^r.

Now, the former follows from (4) and the induction hypothesis. The latter follows from (5); in the case b = a we need the hypothesis Γ″(a) = T ≤ S = Γ′(a).
5. P = b̄⟨ỹ⟩.Q: We reason as in the case for input. Let Γ′ = Γ, a:S and Γ″ = Γ, a:T. From Γ′ ⊢ P we have, for some Ũ,

    Γ′ ⊢ Q    (6)
    ⊢ Γ′(b) ≤ (Ũ)^w    (7)
    ⊢ Γ′(ỹ) ≤ Ũ.    (8)

On the other hand, Γ″ ⊢ P holds if we can show

    Γ″ ⊢ Q
    ⊢ Γ″(b) ≤ (Ũ)^w
    ⊢ Γ″(ỹ) ≤ Ũ.

These assertions can be inferred from (6)–(8) using the induction hypothesis and the hypothesis Γ″ ≤ Γ′.
6. P = !P₁: By induction. □

2.6.3. Lemma: [Substitution] If Γ, b₁:S₁, …, bₙ:Sₙ ⊢ P and, for each i, ⊢ Γ(cᵢ) ≤ Sᵢ, then Γ ⊢ P{c₁, …, cₙ/b₁, …, bₙ}.
Proof: By a straightforward induction, using narrowing in the T-In and T-Out cases. □

2.6.4. Lemma: If Γ ⊢ a(b₁:S₁, …, bₙ:Sₙ).P₁ | ā⟨c₁, …, cₙ⟩.P₂, then Γ(cᵢ) ≤ Sᵢ for all i.
Proof: By the definition of the typing relation we have

    Γ(a) ≤ (Γ(c₁), …, Γ(cₙ))^w  and  Γ(a) ≤ (S₁, …, Sₙ)^r.

By Lemma 2.4.10, Γ(cᵢ) ≤ Sᵢ. □

2.6.5. Definition: We say that Γ is an extended subcontext of Δ, written Γ ≤E Δ, if the restriction of Γ to the names defined in Δ is a subtype of Δ; that is, for each a ∈ dom(Γ), either Δ(a) is not defined or Γ(a) ≤ Δ(a).

2.6.6. Corollary: If Γ ≤E Δ and Δ ⊢ P then Γ ⊢ P.
Proof: By weakening and narrowing. □
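The extended-subcontext relation of Definition 2.6.5 is easy to state operationally. The following is a small sketch of ours (not from the paper), reusing the (tag, objects) sort representation and a subsort check restricted to non-recursive sorts.

```python
def subtype(s, t):
    # Algorithmic subsorting on non-recursive sorts (A-BB, A-XI, A-XO).
    (i, ss), (j, ts) = s, t
    if len(ss) != len(ts) or not (i == j or i == 'b'):
        return False
    if j == 'r':
        return all(subtype(a, b) for a, b in zip(ss, ts))
    if j == 'w':
        return all(subtype(b, a) for a, b in zip(ss, ts))
    return all(subtype(a, b) and subtype(b, a) for a, b in zip(ss, ts))

def ext_subcontext(gamma, delta):
    # Definition 2.6.5: for each a in dom(gamma), either delta leaves a
    # undefined or gamma(a) <= delta(a).  Corollary 2.6.6 then lets any
    # process typed under delta be re-typed under gamma.
    return all(a not in delta or subtype(gamma[a], delta[a]) for a in gamma)
```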

3 Soundness
In this section, we show that the I/O annotations on channel sorts can be trusted to restrict improper use of channels. This is accomplished by defining a refined reduction semantics in which each occurrence of a name is tagged to indicate whether that occurrence may validly be used for input and/or output. Attempts to misuse names during communication are caught by checking the consistency of the tags. On well-typed processes, this semantics is isomorphic to the one given above, in the sense that every process development in the tagged semantics can be mirrored in the original semantics, and vice versa. Thus, the soundness theorem for the tagged semantics yields the soundness of the original semantics as an easy corollary.

3.1 Tagged Processes

3.1.1. Definition: The syntax of tagged process expressions is defined as follows:

    E ::= 0                                    nil process
        | E | F                                parallel composition
        | (νa:S) E                             restriction
        | a^I(a₁:S₁, …, aₙ:Sₙ).E               input
        | a^I⟨a₁^{I₁}, …, aₙ^{Iₙ}⟩.E           output
        | !E                                   replication
        | wrong                                error

We use E, F, and G to range over process expressions with tagged names.

3.1.2. Definition: Erase is the function mapping tagged process expressions to ordinary process expressions by erasing tags.

3.1.3. Definition: The relation ok_Γ(E), pronounced "the tags of E are statically consistent with Γ," is the least relation such that:

    ok_Γ(0)
    ok_Γ(wrong)
    ok_Γ(E | F)                         if ok_Γ(E) and ok_Γ(F)
    ok_Γ((νa:S) E)                      if ok_{Γ, a:S}(E)
    ok_Γ(a^I(b₁:S₁, …, bₙ:Sₙ).E)        if I(Γ(a)) ≤ I ≤ r
                                        and ok_{Γ, b₁:S₁, …, bₙ:Sₙ}(E)
    ok_Γ(a^I⟨b₁^{I₁}, …, bₙ^{Iₙ}⟩.E)    if I(Γ(a)) ≤ I ≤ w
                                        and I(Γ(bᵢ)) ≤ Iᵢ ≤ I_i(Γ(a)) for each i
                                        and ok_Γ(E)
    ok_Γ(!E)                            if ok_Γ(E)

3.1.4. Definition: The compilation function Tag is defined as follows:

    Tag_Γ(0)                      = 0
    Tag_Γ(wrong)                  = wrong
    Tag_Γ(P | Q)                  = Tag_Γ(P) | Tag_Γ(Q)
    Tag_Γ((νa:S) P)               = (νa:S) Tag_{Γ, a:S}(P)
    Tag_Γ(a(b₁:S₁, …, bₙ:Sₙ).P)   = a^{I(Γ(a))}(b₁:S₁, …, bₙ:Sₙ).Tag_{Γ, b₁:S₁, …, bₙ:Sₙ}(P)
    Tag_Γ(a⟨b₁, …, bₙ⟩.P)         = a^{I(Γ(a))}⟨b₁^{I(Γ(b₁))}, …, bₙ^{I(Γ(bₙ))}⟩.Tag_Γ(P)
    Tag_Γ(!P)                     = !(Tag_Γ(P))
3.1.5. Fact: If Γ ⊢ P, then ok_Γ(Tag_Γ(P)).
Proof: Straightforward induction. □

Furthermore, it is easy to check that static consistency is invariant under tag-respecting substitutions:

3.1.6. Lemma: If ok_{Γ, b₁:S₁, …, bₙ:Sₙ}(E) and, for each i, Γ(cᵢ) ≤ Sᵢ, then ok_Γ(E{c₁, …, cₙ/b₁, …, bₙ}).
Proof: By induction on the structure of E. The most interesting case is when E = a^I⟨a₁^{I₁}, …, aₘ^{Iₘ}⟩.F. Let a′ = a{c₁, …, cₙ/b₁, …, bₙ}. If a ∉ {b₁, …, bₙ}, then a = a′ and the first condition, I(Γ(a′)) ≤ I, is satisfied by hypothesis; otherwise a′ = cᵢ with Γ(cᵢ) ≤ Sᵢ, and I(Γ(a′)) ≤ I holds by Lemma 2.4.12 and transitivity. In either case, I ≤ w by hypothesis. Moreover, by Lemma 2.4.13, I_i(Γ(a)) ≤ I_i(Γ(a′)) for each i, so Iᵢ ≤ I_i(Γ(a′)) for each i. Thus, if any aᵢ ∈ {b₁, …, bₙ}, the assumption Γ(cᵢ) ≤ Sᵢ plus transitivity gives I(Γ(cᵢ)) ≤ Iᵢ ≤ I_i(Γ(a′)), as required. □

3.2 Tagged Reduction

3.2.1. Definition: The tagged equivalence E ≡ E′ is identical to the original equivalence P ≡ P′ of Definition 2.2.1, with tagged channels replacing untagged ones.

3.2.2. Definition: The one-step tagged reduction relation E ⟶ F is the least relation closed under the following rules:

    m = n    I ≤ r    J ≤ w    for each i, Jᵢ ≤ I(Sᵢ)
    -------------------------------------------------------------------- (TR-Comm)
    a^I(b₁:S₁, …, bₘ:Sₘ).E | a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.F ⟶ E{c₁, …, cₙ/b₁, …, bₘ} | F

    m ≠ n or I ≰ r or J ≰ w or, for some i, Jᵢ ≰ I(Sᵢ)
    ----------------------------------------------------------- (TR-Comm-Wrong)
    a^I(b₁:S₁, …, bₘ:Sₘ).E | a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.F ⟶ wrong

    E ⟶ E′
    ---------------- (TR-Par)
    E | F ⟶ E′ | F

    E ⟶ E′
    ------------------------ (TR-Restr)
    (νa:S) E ⟶ (νa:S) E′

    E ≡ E′    E′ ⟶ F′    F′ ≡ F
    --------------------------- (TR-Eqv)
    E ⟶ F

3.3 Soundness of Typing

3.3.1. Lemma: [Correctness of Erase w.r.t. ≡]
1. If E ≡ E′ then Erase(E) ≡ Erase(E′).
2. If Erase(E) ≡ P, then there is some E′ ≡ E such that Erase(E′) = P.
Proof: Straightforward, since the tagged structural congruence rules neither use nor change tags. □

3.3.2. Lemma: [Preservation of types and tags under ≡]
1. If ok_Γ(E) and E ≡ E′, then ok_Γ(E′).
2. If Γ ⊢ Erase(E) and E ≡ E′, then Γ ⊢ Erase(E′).
Proof: Straightforward induction on the length of a derivation of E ≡ E′, with an inner induction at each step on the structure of E. □

3.3.3. Lemma: Let E = a^I(b₁:S₁, …, bₘ:Sₘ).F | a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.G. If Γ ⊢ Erase(E) and ok_Γ(E), then
1. I ≤ r, J ≤ w, m = n, and Jᵢ ≤ I(Sᵢ) for each i;
2. Γ(cᵢ) ≤ Sᵢ for each i.
Proof: By the syntax-directedness of typing, we have

    Γ ⊢ Erase(a^I(b₁:S₁, …, bₘ:Sₘ).F)
    ⊢ Γ(a) ≤ (S₁, …, Sₘ)^r
    Γ, b₁:S₁, …, bₘ:Sₘ ⊢ Erase(F)

and

    Γ ⊢ Erase(a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.G)
    ⊢ Γ(a) ≤ (T₁, …, Tₙ)^w, where for each i, ⊢ Γ(cᵢ) ≤ Tᵢ
    Γ ⊢ Erase(G).

By Lemma 2.4.10, m = n and ⊢ Tᵢ ≤ Sᵢ for each i. By the definition of ok_Γ, we have I ≤ r, J ≤ w, and Jᵢ ≤ I_i(Γ(a)) for each i. By the definition of I_i and transitivity, Jᵢ ≤ I(Sᵢ). Finally, by the transitivity of subtyping (2.3.7), ⊢ Γ(cᵢ) ≤ Sᵢ for each i. □

Using the properties of typing and subtyping established in this section and the previous one, we obtain a straightforward proof of the soundness of the original typing rules with respect to the tagged semantics.

3.3.4. Theorem: If Γ ⊢ Erase(E) and ok_Γ(E) and E ⟶ E′, then Γ ⊢ Erase(E′) and ok_Γ(E′).
Proof: By induction on the derivation of E ⟶ E′.

Case TR-Comm: E = F | G with

    F  = a^I(b₁:S₁, …, bₙ:Sₙ).F₁
    G  = a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.G₁
    E′ = F₁{c₁, …, cₙ/b₁, …, bₙ} | G₁
    m = n, I ≤ r, J ≤ w, and for each i, Jᵢ ≤ I(Sᵢ).

By Lemma 3.3.3(2), ⊢ Γ(cᵢ) ≤ Sᵢ for each i. By the syntax-directedness of typing and the substitution lemma (2.6.3), Γ ⊢ Erase(F₁){c₁, …, cₙ/b₁, …, bₙ}, i.e., Γ ⊢ Erase(F₁{c₁, …, cₙ/b₁, …, bₙ}). By T-Par, Γ ⊢ Erase(F₁{c₁, …, cₙ/b₁, …, bₙ}) | Erase(G₁), i.e., Γ ⊢ Erase(E′). Furthermore, Lemma 3.1.6 yields ok_Γ(F₁{c₁, …, cₙ/b₁, …, bₙ}), from which ok_Γ(E′) follows by definition.

Case TR-Comm-Wrong: E = F | G with

    F  = a^I(b₁:S₁, …, bₘ:Sₘ).F₁
    G  = a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.G₁
    E′ = wrong
    m ≠ n or I ≰ r or J ≰ w or, for some i, Jᵢ ≰ I(Sᵢ).

By Lemma 3.3.3(1), this case cannot occur.

Case TR-Eqv: E ≡ F, F ⟶ F′, F′ ≡ E′. By Lemmas 3.3.1 and 3.3.2, Γ ⊢ Erase(F) and ok_Γ(F). By the induction hypothesis, Γ ⊢ Erase(F′) and ok_Γ(F′). By 3.3.1 and 3.3.2 again, Γ ⊢ Erase(E′) and ok_Γ(E′).

Other cases: By straightforward use of the induction hypothesis. □

3.3.5. Corollary: If Γ ⊢ Erase(E) and ok_Γ(E), then E cannot reduce to wrong and Erase(E) cannot reduce to wrong.

Moreover, on well-typed processes there is an exact correspondence between the tagged and the ordinary semantics.

3.3.6. Theorem: Suppose that Γ ⊢ Erase(E) and ok_Γ(E). Then:
1. if E ⟶ E′, then Erase(E) ⟶ Erase(E′);
2. if Erase(E) ⟶ P, then there is some E′ such that E ⟶ E′ and Erase(E′) = P.
Proof: We consider only (2), since (1) is simpler and can be handled in a similar way. We proceed by induction on the size of the derivation of Erase(E) ⟶ P. Corollary 3.3.5 excludes the possibility that the last rule used in the derivation is R-Comm-Wrong. Suppose that the last rule was R-Comm. We then have

    Erase(E) = a(b₁:S₁, …, bₙ:Sₙ).P₁ | ā⟨c₁, …, cₙ⟩.P₂ ⟶ P₁{c₁, …, cₙ/b₁, …, bₙ} | P₂.

By the definition of Erase, E must be of the form

    a^I(b₁:S₁, …, bₙ:Sₙ).E₁ | a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.E₂

with Erase(E₁) = P₁ and Erase(E₂) = P₂. Since Γ ⊢ Erase(E) and ok_Γ(E), by Lemma 3.3.3 it follows that I ≤ r, J ≤ w, and Jᵢ ≤ I(Sᵢ) for each i. Therefore TR-Comm can be applied to E, yielding

    a^I(b₁:S₁, …, bₙ:Sₙ).E₁ | a^J⟨c₁^{J₁}, …, cₙ^{Jₙ}⟩.E₂ ⟶ E₁{c₁, …, cₙ/b₁, …, bₙ} | E₂

with Erase(E₁{c₁, …, cₙ/b₁, …, bₙ} | E₂) = P₁{c₁, …, cₙ/b₁, …, bₙ} | P₂.

For the case when the last rule used was R-Par, use the induction hypothesis. If the last rule used was R-Res, then E = (νa:S) E″ and Erase(E) = (νa:S) Erase(E″). By the definition of ok_Γ, we have ok_{Γ, a:S}(E″). The result follows by induction. Finally, for the case of the rule R-Eqv, use Lemmas 3.3.1 and 3.3.2 and the induction hypothesis. □
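The side conditions that separate TR-Comm from TR-Comm-Wrong can be isolated in a small predicate. This is an illustrative sketch of ours (not the paper's code): sorts are (tag, objects) pairs, the receiver's prefix carries tag `i_tag` and binder sorts, and the sender carries tag `j_tag` and one tag per transmitted name.

```python
def tag_leq(i, j):
    # b grants both capabilities, so b <= r and b <= w.
    return i == j or i == 'b'

def comm_ok(i_tag, binder_sorts, j_tag, arg_tags):
    # TR-Comm fires iff: arities match, I permits input, J permits output,
    # and each transmitted tag J_i is at least as generous as the tag I(S_i)
    # that the receiver's binder sort demands; otherwise TR-Comm-Wrong fires.
    return (len(binder_sorts) == len(arg_tags)
            and tag_leq(i_tag, 'r')
            and tag_leq(j_tag, 'w')
            and all(tag_leq(ji, s[0])
                    for ji, s in zip(arg_tags, binder_sorts)))
```

Theorem 3.3.4 guarantees that for well-typed, statically consistent processes this predicate is always true at every redex, so the `wrong` branch is unreachable.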

4 Lambda calculus encodings

The use of I/O tags arises naturally in many applications. As an example, we consider Milner's encodings [Mil90] of the lazy λ-calculus and the call-by-value λ-calculus [Abr89, Plo75]. For each encoding, we first present Milner's original version and then assign sorts to the channels in the translation.

We abbreviate the process expression (νm)(P | !m(x̃).R) as P{m(x̃) := R}. Intuitively, the process R represents a "local environment" for P, and m is a "pointer" that allows P to access this local environment; alternatively, we can think of R as a resource with owner P and m as a trigger with which a copy of the resource may be activated. We give {m(x̃) := R} tighter precedence than | ; thus Q | P{m(x̃) := R} stands for Q | (P{m(x̃) := R}).

4.1 The lazy λ-calculus

In Abramsky's lazy lambda calculus [Abr89], only leftmost redexes can be reduced. There are two reduction rules:

    (λx.M)N ⟹ M{N/x}    (β)

    M ⟹ M′
    ----------- (App)
    MN ⟹ M′N

The core of the encoding of lazy λ-terms into the π-calculus is the translation of function application. As usual when translating λ-calculus into a process calculus, application becomes a particular form of parallel composition, and β-reduction is modeled by interaction. Since the syntax of the first-order π-calculus only allows for the transmission of names along channels, the communication of a term is simulated by the communication of a trigger for it.

In the pure λ-calculus, every term denotes a function. When supplied with an argument, it yields another function (which in turn is waiting for an argument). Analogously, the translation of a λ-term M is a process with an argument port p. It rests dormant until it receives along p a trigger x for its argument and a new port q; given these, it evolves to a new process with argument port q. We may also think of p as the "location" of M, since p is the unique port along which M interacts with its environment.

    L[[λx.M]]⟨p⟩ = p(x, q).L[[M]]⟨q⟩
    L[[x]]⟨p⟩    = x̄⟨p⟩
    L[[MN]]⟨p⟩   = (νq)(L[[M]]⟨q⟩ | q̄⟨y, p⟩{y(r) := L[[N]]⟨r⟩})    (y fresh)

Using the sorts

    Sa = (St, Sa)^r
    St = (Sa)^w

we can annotate the translation so that the intended use of argument and trigger ports is enforced statically. Whenever a trigger is communicated along a channel, the receiver may only use it for output; similarly, when an argument port is communicated, the receiver may use it only for input. Using the sorted abbreviation P{m(x:S) := R} = (νm:(S)^b)(P | !(m(x:S).R)), the translation becomes

    L[[λx.M]]⟨p⟩ = p(x:St, q:Sa).L[[M]]⟨q⟩
    L[[x]]⟨p⟩    = x̄⟨p⟩
    L[[MN]]⟨p⟩   = (νq:(St, Sa)^b)(L[[M]]⟨q⟩ | q̄⟨y, p⟩{y(r:Sa) := L[[N]]⟨r⟩})

where the free names p and x have sorts Sa and St, respectively.
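The untyped translation L[[·]] is compositional enough to be sketched as a recursive function. The Python below is our own illustration (not Milner's or the authors' code): λ-terms are tagged tuples, the output process is rendered as a plain string with `a<b>` for output and `a(b)` for input, the local-environment abbreviation is expanded to its restriction-plus-replication form, and fresh names `u0, u1, …` come from a simple counter.

```python
import itertools

fresh = (f"u{i}" for i in itertools.count())   # fresh-name supply

def L(m, p):
    # m is ('var', x) | ('lam', x, body) | ('app', fun, arg); p is the port.
    kind = m[0]
    if kind == 'var':
        return f"{m[1]}<{p}>"                  # L[[x]]<p> = output p along x
    if kind == 'lam':
        _, x, body = m
        q = next(fresh)
        return f"{p}({x},{q}).{L(body, q)}"    # wait for a trigger and a port
    _, fun, arg = m
    q, y, r = next(fresh), next(fresh), next(fresh)
    # (nu q)(L[[M]]<q> | (nu y)(q<y,p> | !y(r).L[[N]]<r>))
    return (f"(nu {q})({L(fun, q)} | "
            f"(nu {y})({q}<{y},{p}> | !{y}({r}).{L(arg, r)}))")
```

For example, `L(('lam', 'x', ('var', 'x')), 'p')` yields the process for the identity: it reads a trigger x and a continuation port, then forwards the port along the trigger.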

4.2 The call-by-value λ-calculus

In the call-by-value λ-calculus, on the other hand, reductions may only occur when the argument is a value, i.e. an abstraction. The reduction relation used by Milner in [Mil90] is described by the rules βv, AppL, and AppR:

    (λx.M)(λy.N) ⟹ M{λy.N/x}    (βv)

    M ⟹ M′                  N ⟹ N′
    ----------- (AppL)      ----------- (AppR)
    MN ⟹ M′N                MN ⟹ MN′

By contrast with the lazy λ-calculus, in the application MN also N may evolve, until both M and N have become λ-abstractions. But at this point M's behavior deviates from N's: the former receives a value, whereas the latter is passed as a value. Let us see how all this is reflected in the π-calculus encoding. The two components V[[M]]⟨q⟩ and V[[N]]⟨r⟩ of an application are run in parallel. They signal to the "arbiter process" App when they have reduced to an abstraction by transmitting to App, along the ports q and r, a trigger for their value-body. Then, the dichotomy in the behavior of the two λ-abstractions is resolved by the arbiter, which forces the correct interaction between the two value-bodies. The core of the protocol executed by App is the action on an internal channel v, by which the abstraction M comes to know its arguments. We call v a pivot. (In the lazy λ-calculus encoding, the role of the pivot names was played by the argument port names.)

In the original version of his paper on Functions as Processes [Mil90], Milner presented two candidates for the encoding of a version of the call-by-value λ-calculus [Plo75]. They follow the same idea of translation, but with a technical difference in the rule for variables.

First candidate (V):

    V[[λx.M]]⟨p⟩ = p̄⟨y⟩{y(w) := w(x, q).V[[M]]⟨q⟩}
    V[[x]]⟨p⟩    = p̄⟨y⟩{y(w) := x̄⟨w⟩}
    V[[MN]]⟨p⟩   = (νq, r)(V[[M]]⟨q⟩ | V[[N]]⟨r⟩ | App⟨p, q, r⟩)

    where App(p, q, r) = q(x).(νv)(x̄⟨v⟩.r(y).v̄⟨y, p⟩)

Second candidate (V′): The rules for application and λ-abstraction are the same as for V; the rule for variables is:

    V′[[x]]⟨p⟩ = p̄⟨x⟩

The encoding V′ is more efficient than V. In V, a λ-calculus variable gives rise to a one-place buffer (the local environment {y(w) := x̄⟨w⟩}). As the computation proceeds, these buffers are chained together, gradually increasing the number of steps necessary to simulate a β-reduction. This phenomenon does not occur in V′, where a variable disappears after it is used. But the proofs in [Mil90] were conducted with V; the analysis of V′ seemed more difficult and was left open. Indeed, Sangiorgi showed [San92] that, in the standard π-calculus, β-reduction is not valid for V′, i.e. V′[[(λx.M) λy.N]]⟨p⟩ and V′[[M{λy.N/x}]]⟨p⟩ may not be behaviorally equivalent¹ (the same counterexample used in [San92] applies to the calculus used in this paper, which is a subcalculus of the one in [San92]). Nevertheless, V′ yields a precise operational correspondence between λ-terms and their process encodings, and intuitively one expects V′ to be correct.

The problems with V′ are caused by the communication of triggers introduced in the translation of variables: the external environment might obtain a trigger and then use it improperly, i.e. in input position. By adding I/O tags, we avoid these misbehaviors and V′ becomes correct. We shall prove this fact in Section 5.4, after we have developed the necessary technical machinery. For the moment, let us look at the sorts of the names appearing in V and V′ (the sorts are the same in both encodings). There are three kinds of names: besides argument ports like p, q, and r, and triggers like x and y, both of which were already present in the lazy λ-encoding, we also have pivots like v and w. An argument port is used to send a trigger; a trigger is used to activate a value-body with its appropriate pivot; the pivot of an abstraction λx.M is used to receive a trigger for the λ-input together with the argument port of M. Thus the sorts Sa, St, and Sp of argument ports, triggers, and pivots are:

    Sa = (St)^w
    St = (Sp)^w
    Sp = (St, Sa)^r

As for the lazy λ-calculus, the outermost tag of the sort of the names bound by a restriction is b; therefore such argument ports, triggers, and pivots have sorts (St)^b, (Sp)^b, and (St, Sa)^b, respectively.

¹ The final version of [Mil90], which appeared in the journal Mathematical Structures in Computer Science, was written after the results in [San92] were known and presents only the encoding V.

5 Behavioral equivalence

In this section we examine the effect of our sorting discipline on the semantics of processes.

5.1 Contexts

A process context (or just context) C is a process expression with a hole in it. We write C[P] for the expression obtained by filling the hole with P. The hole itself is written [·]. For the typing of contexts, we add the following rule to those in Section 2.5:

    ---------- (Hole)
    Γ ⊢ [·]

We shall employ contexts when considering the behavioral equivalence of processes. For this, we first have to understand when placing a process into a context is sensible. We start by looking at the combination of a context C[·] of sorting Γ and a process P of sorting Δ from a syntactic point of view. We want a condition on Δ that guarantees Γ ⊢ C[P]. We do not exactly want the bindings in Γ to be subtypes of those in Δ, i.e. Γ ≤E Δ, since there might be binders (restrictions, input prefixes) in C[·] whose scope includes the hole. For instance, if C[·] = a(c:Sc).(νb:Sb)[·] and Γ = {a:Sa}, then P must respect the sorts Sb and Sc for the names b and c, in addition to Sa for a. This suggests that the external sorting Γ of C[·] should be distinguished from an internal one Γ_C, which describes the sorts of all the names that are "in scope" at the point where C's hole occurs. The sorting Γ_C, called the extension of Γ by C, is obtained from Γ by scanning the context from the outside towards the hole and keeping track of the binders that are met along the way. So in the previous example, we have Γ_C = {a:Sa, b:Sb, c:Sc}.

5.1.1. Definition: C is said to be a (Γ, Δ)-context if Γ ⊢ C and Γ_C ≤E Δ.

5.1.2. Proposition: If C is a (Γ, Δ)-context and Δ ⊢ P, then Γ ⊢ C[P].
Proof: We know that Δ ⊢ P and, since C is a (Γ, Δ)-context, Γ_C ≤E Δ. This, by Corollary 2.6.6, gives Γ_C ⊢ P. Now a derivation for Γ ⊢ C[P] can be constructed from the derivations of Γ_C ⊢ P and Γ ⊢ C[·] by replacing the sole instance of the rule Hole by the derivation of Γ_C ⊢ P. □

We conjecture that Γ_C ≤E Δ is also a necessary condition for having Γ ⊢ C[P] whenever Δ ⊢ P.

Note that a name could be declared, for instance, as input-only in Δ and yet appear in C[·] in an output position, since, in Definition 5.1.1, the internal sorting Γ_C is an (extended) sub-sorting of Δ. This situation occurs in the printer example from the introduction: the printer uses the name p only for input, but the surrounding context may perform outputs on p. Indeed, the weaker requirement on the context is clearly essential, for otherwise no communications along p would ever take place.

The composition of a context of sorting Γ and a process of sorting Δ is syntactically meaningful when C is a (Γ, Δ)-context; now we want to check that this composition is also semantically meaningful. We know that P behaves correctly with respect to C[·] and Γ, because Γ ⊢ C[P] and we have seen that our typing system is sound. More delicate is the opposite implication, namely that the expected use of names emitted by P, as specified in Δ, is consistent with their actual use in C, which is constrained by Γ. (We suppose that the names emitted by P are free; the case in which they are bound is similar.) Formulating this claim in general terms seems difficult, but it can be substantiated in some important special cases. For example, suppose P = ā⟨b⟩.P′ and I₁(Δ(a)) = w. Then, according to Δ, the name b should be used by its recipient for output only. Suppose that a(x:T).Q is the subterm of C[·] with which P communicates. To respect P's expectations, the name x should be used by Q for output only, i.e. it should be the case that I(T) = w. More generally, we would like to guarantee that if I₁(Δ(a)) = J, then J ≤ I(T).

5.1.3. Proposition: Suppose Δ ⊢ ā⟨y⟩.P and Γ ⊢ a(x:T).Q, with Γ ≤E Δ. Then I(obj₁(Δ(a))) ≤ I(T).
Proof: We show obj₁(Δ(a)) ≤ T, from which the result follows by Lemma 2.4.12. First, from the typing rules, Γ ⊢ a(x:T).Q gives

    Γ(a) ≤ (T)^r.    (9)

Similarly, from Δ ⊢ ā⟨y⟩.P,

    Δ(a) ≤ (S)^w    (10)

for some S. Finally, from Γ ≤E Δ, we have

    Γ(a) ≤ Δ(a).    (11)

Now, using (10) and (11) and the transitivity of ≤, we obtain

    Γ(a) ≤ (S)^w.    (12)

By Lemma 2.4.11(1), (9) and (12) imply that I(Γ(a)) = b, i.e. Γ(a) = (obj(Γ(a)))^b, with

    S ≤ obj₁(Γ(a)) ≤ T.    (13)

Replacing Γ(a) with (obj₁(Γ(a)))^b in (11), and using (10), we get (obj₁(Γ(a)))^b ≤ Δ(a) ≤ (S)^w, which by Lemma 2.4.11(2) gives

    S ≤ obj₁(Δ(a)) ≤ obj₁(Γ(a)).    (14)

Finally, by (14) and (13) (ignoring S) and transitivity, obj₁(Δ(a)) ≤ T, which concludes the proof. □

5.2 Barbed bisimulation and congruence

The most popular way of defining behavioral equivalence on processes is via the notion of bisimulation. In [MS92, San92], Milner and Sangiorgi propose barbed bisimulation as a tool for uniformly defining bisimulation-based equivalences in different calculi. The portability of the definition is useful when studying a new calculus or a refinement of an existing one, as we are doing here: barbed bisimulation immediately suggests an interesting congruence for the newborn system. The definition of barbed bisimulation uses the reduction relation of the calculus along with an observation predicate ↓a for each port a, which detects the possibility of performing a communication with the external environment along a. Thus, in the π-calculus, P ↓a holds if P has a prefix a(x₁, …, xₙ) or ā⟨x₁, …, xₙ⟩ which is not underneath another prefix and not in the scope of a restriction on a. For example, if P = (νc)(c̄⟨b⟩ | a(x).d(y)), then P ↓a, but not P ↓c, P ↓b, or P ↓d.

By itself, barbed bisimulation is a rather coarse relation. Better discriminating power is achieved by considering the induced congruence, called barbed congruence. It can be shown [San92] that barbed congruence coincides in both CCS and the π-calculus with the standard bisimilarity congruences, at least if the grammar of these languages is sufficiently rich (for instance, in [San92] infinite recursive definitions of agents and agents with an infinite number of free names are allowed).

Of course, in a sorted calculus, the processes being compared must obey the same sorting, and the contexts employed must be compatible with this sorting.

5.2.1. Definition: A relation R ⊆ Pr × Pr is a barbed Δ-bisimulation if (P, Q) ∈ R implies:
1. if P ⟶ P′ then Q ⟶ Q′ and (P′, Q′) ∈ R;
2. if Q ⟶ Q′ then P ⟶ P′ and (P′, Q′) ∈ R;
3. for each channel a, P ↓a iff Q ↓a.
Two processes P and Q are barbed Δ-bisimilar, written P ∼̇_Δ Q, if (P, Q) ∈ R for some barbed Δ-bisimulation R. P and Q are barbed Δ-congruent, written P ∼_Δ Q, if, for each sorting Γ and (Γ, Δ)-context C, we have C[P] ∼̇_Γ C[Q].

Barbed congruence requires a quantification over all contexts. The following lemma shows a way to lighten this requirement: it is enough to test processes using parallel composition and substitution.

5.2.2. Lemma: Let P₁ and P₂ be two processes obeying the sorting Δ. Suppose that for every sorting Γ, substitution σ, and process Q such that
1. Γ ≤E Δ,
2. if σ = {c̃/b̃}, then Γ(c̃) ≤ Γ(b̃), and
3. Q obeys Γ,
it holds that

    Q | P₁σ ∼̇_Γ Q | P₂σ.

Then P₁ ∼_Δ P₂.
Proof: See Appendix A. □

The weak version of the equivalences, where one abstracts away from the length of the reductions in two matching actions, is obtained in the standard way. Let ⟹ be the reflexive and transitive closure of ⟶, and let ⇓a be the composition of the two relations ⟹ and ↓a. Then weak barbed Δ-bisimulation, written ≈̇_Δ, is defined by replacing in Definition 5.2.1 the transition Q ⟶ Q′ with Q ⟹ Q′ and the predicate Q ↓a with Q ⇓a. Weak barbed Δ-congruence, written ≈_Δ, is defined by replacing ∼̇_Γ with ≈̇_Γ. We conjecture that the counterpart of Lemma 5.2.2 for the weak case is also true.

In the basic π-calculus (i.e., without I/O tags), the only restriction on the choice of Γ and C in the definition of barbed congruence is that the arities of channels must be respected; in our calculus, I/O information is also taken into account. This places a tighter constraint on the set of contexts that are regarded as legal observers. In consequence, the equivalence itself becomes coarser. Processes of the basic π-calculus correspond, in our calculus, to processes whose I/O tags are all b. Below, we write Γ ⊢ P, Q if both Γ ⊢ P and Γ ⊢ Q.

5.2.3. Lemma: Suppose that the sorting Γ and the processes P and Q differ from the sorting Γ′ and the processes P′ and Q′, respectively, in the I/O tags only, and that both Γ ⊢ P, Q and Γ′ ⊢ P′, Q′. Then we have P ∼̇_Γ Q iff P′ ∼̇_{Γ′} Q′, and P ≈̇_Γ Q iff P′ ≈̇_{Γ′} Q′.
Proof: The I/O tags do not play a part in reductions of well-typed processes; hence bisimilarity (as opposed to congruence, which we consider next) is not affected by changing tags. □

5.2.4. Theorem: Suppose that Δ ⊢ P, Q and that the sorting Δb and the processes Pb and Qb are obtained from the sorting Δ and the processes P and Q by replacing all I/O tags in Δ, P, and Q with the tag b. Then
1. Δb ⊢ Pb, Qb;
2. Pb ∼_{Δb} Qb implies P ∼_Δ Q, and Pb ≈_{Δb} Qb implies P ≈_Δ Q.
Proof: Assertion (1) is straightforward. For (2), we only consider ∼; the case of ≈ is analogous. By definition, P ∼_Δ Q holds if, for all (Γ, Δ)-contexts C, we have C[P] ∼̇_Γ C[Q]. Let Cb and Γb be the context and sorting obtained from C and Γ by replacing all I/O tags with b. By assertion (1), Γb ⊢ Cb[Pb], Cb[Qb]. Since Pb ∼_{Δb} Qb and Cb is a (Γb, Δb)-context, we have Cb[Pb] ∼̇_{Γb} Cb[Qb]. Hence C[P] ∼̇_Γ C[Q] follows from Lemma 5.2.3. □

Thus, any equivalence that can be proved in the basic π-calculus, e.g., the equivalence between !P and !P | !P, can be lifted to our calculus. We shall see in the next section that the converse does not hold: inequivalent processes in the basic π-calculus may be made equivalent by replacing b tags with r or w.
5.3 The replicator theorems

In Section 4, we introduced the notation P{m(x̃) := R} for "local environments." Perhaps the most useful theorems about local environments are the ones showing that they distribute over parallel composition and replication. For instance, these theorems play a pivotal role in the proof of validity of β-reduction for Milner's encodings of λ-calculus [Mil90], as well as in the proof of representability of higher-order π-calculus in first-order π-calculus [San92]. However, in the basic π-calculus the theorems must be accompanied by a fairly heavy side condition on the use of names in P, Q, and R, namely that m may occur free only as the channel along which an output occurs. To see why, take:

    P1 = (b̄⟨m⟩ | Q){m := R}
    P2 = b̄⟨m⟩{m := R} | Q{m := R}

These processes are not equivalent in the basic π-calculus. Intuitively, the environment external to P1 can receive m along b and then use it in input position to interfere with an attempt by Q to trigger R. This is not possible in P2, where Q has its own private access to R.

Of course, the difference between P1 and P2 can only be observed using a context that ignores the intended role of m as a trigger; to validate the desired equivalence, such a context should be considered inadmissible. In our sorted calculus, this can be achieved by giving m a sort such that it can only be used for output, as we did in our encodings of λ-calculus in Section 4.

5.3.1. Definition: Let P be a subexpression of a process expression C[P], where Γ ⊢ C[P]. We say that m is used (only) as a trigger in P if Δ ⊢ P, where Δ is obtained from Γ_C, the internal sorting of P's position in C, by replacing the outermost tag of m's sort with w.

In the following, when using the above definition we sometimes omit the judgement Γ ⊢ C[P]; the context C in question will always be clear. Moreover, we maintain the notation P{m(x̃) := R} in our sorted calculus, eliding the sorts of m and x̃.

5.3.2. Theorem: [First replicator theorem] If Γ ⊢ (P | Q){m(x̃) := R} and m is used as a trigger in P, Q, and R, then

    (P | Q){m(x̃) := R}  ≅_Γ  P{m(x̃) := R} | Q{m(x̃) := R}.

5.3.3. Theorem: [Second replicator theorem] If Γ ⊢ (!P){m(x̃) := R} and m is used as a trigger in P and R, then

    (!P){m(x̃) := R}  ≅_Γ  !(P{m(x̃) := R}).
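To make the role of the I/O tags concrete, the following is a small illustrative checker, our own Python sketch rather than anything from the paper (the tuple representation and all names are invented): read-capable argument positions are treated covariantly, write-capable ones contravariantly, and the tag b sits below both r and w, so a w-tagged "trigger" sort can never be obtained from a read-only one.

```python
# A toy model of the subsort relation on I/O-tagged channel sorts.
# A sort is ('chan', tag, [argument sorts]) with tag in {'r', 'w', 'b'},
# or a base sort name such as 'unit'.

# (sub, super) pairs of tags: b (read AND write) may be used where only
# reading, or only writing, is required.
TAG_LE = {('b', 'b'), ('r', 'r'), ('w', 'w'), ('b', 'r'), ('b', 'w')}

def subsort(s, t):
    if s == t:
        return True
    if isinstance(s, tuple) and isinstance(t, tuple):
        _, i, ss = s
        _, j, tt = t
        if (i, j) not in TAG_LE or len(ss) != len(tt):
            return False
        # a read capability at the supertype demands covariant arguments...
        if j in ('r', 'b') and not all(subsort(a, b) for a, b in zip(ss, tt)):
            return False
        # ...and a write capability demands contravariant arguments,
        # so tag b forces invariance.
        if j in ('w', 'b') and not all(subsort(b, a) for a, b in zip(ss, tt)):
            return False
        return True
    return False

rw = ('chan', 'b', ['unit'])   # usable for both input and output
ro = ('chan', 'r', ['unit'])   # may only be read
wo = ('chan', 'w', ['unit'])   # may only be written: a "trigger"

print(subsort(rw, ro))   # True: forgetting the write capability
print(subsort(rw, wo))   # True: forgetting the read capability
print(subsort(ro, wo))   # False: a read-only name cannot serve as a trigger
```

Under this reading, giving m a w-tagged sort is exactly what makes a context that inputs on m untypable.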
These versions of the replicator theorems are substantially stronger than the original ones, since they allow the trigger m to be transmitted as a value, as long as each such transmission places an appropriate constraint on the possible use of m by the receiver. The proofs of both theorems can be found in Appendix B.

5.4 Correctness of the encoding V′

Using our improved replicator theorems, we can show that in our calculus β-reduction is valid for Milner's encoding V′ of the call-by-value λ-calculus, presented in Section 4.2. With the introduction of I/O tags, the output of a trigger in the translation of λ-variables is rendered safe, since the trigger is tagged w and so cannot be used by the recipient for input. Recall that Sa and St are the sorts for argument ports and triggers of the encoding V′, as defined in Section 4.2.

5.4.1. Theorem: [Validity of β-reduction in V′] Suppose x1, …, xn is the set of free variables in (λx.M)(λy.N), and let Δ be the sorting x1 : St, …, xn : St, p : Sa. Then

    V′[[(λx.M)(λy.N)]]⟨p⟩  ∼̇_Δ  V′[[M{λy.N/x}]]⟨p⟩.

Proof: See Appendix C. □
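The local environment notation P{m(x̃) := R} is read in Appendix B as (νm)(P | !m(x̃).R): a private channel m served by a replicated copy of R. As a loose operational illustration only, with Python threads and queues standing in for processes and channels (all names here are ours, not the paper's), a replicated trigger can serve two parallel clients independently:

```python
import queue
import threading

# A replicated trigger server: !m(reply).R, where the body R simply
# answers on the reply channel it received.  Sending None models closing
# the restriction on m so the demo terminates.

def replicated_R(m: queue.Queue) -> None:
    while True:                  # ! m(reply) . R
        reply = m.get()
        if reply is None:
            break
        reply.put("triggered")   # the body R

m = queue.Queue()
server = threading.Thread(target=replicated_R, args=(m,))
server.start()

# P = m<r1>.0 | m<r2>.0 : two parallel clients trigger R independently.
r1, r2 = queue.Queue(), queue.Queue()
m.put(r1)
m.put(r2)
v1, v2 = r1.get(), r2.get()
print(v1, v2)   # -> triggered triggered
m.put(None)
server.join()
```

The write-only discipline on m corresponds to the fact that the clients only ever call put() on it; an input on m by a client would be the interference the w tag rules out.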

6 Extensions

6.1 Matching and Summation

Many presentations of the π-calculus include a summation operator P + Q, which can be used to select one element from a set of pending communications and abort the others, and a matching operator [a = c]P, which allows P to proceed only if a and c are the same channel. (It seems that both operators may often be avoided in practice, although they play an important theoretical role, for instance, in axiomatisations of behavioral equivalences [San93, PS93].) Our basic results should extend straightforwardly to summation, whose behavior under typing is exactly like that of parallel composition. Matching, however, is more problematic, since adding it in unrestricted form would destroy some of the basic results presented in Section 5. An appropriate constraint can be obtained from the typing of the other constructs we have considered: a term a(x̃).P | c̄⟨ỹ⟩.Q may evolve to P{ỹ/x̃} | Q only if a and c are the same name, which thus appears in the term both in input and in output position. This analogy suggests that we should allow a match expression [a = c]P only if both a and c have b as I/O tag, i.e. formally to impose the rule:

    I(Γ(a)) = I(Γ(c)) = b        Γ ⊢ P
    ------------------------------------ (T-Match)
             Γ ⊢ [a = c]P

Another safe solution would be to extend our set of basic sort annotations so as to distinguish channels that may be compared for identity from those that may not.

6.2 Higher-Order π-calculus

A more significant extension involves adding higher-order communication: communication of processes and abstraction of processes on both channels and processes. Following Sangiorgi [San92], we can enrich our language of sorts to include descriptions of channels carrying processes (e.g. ( )^b), channels carrying processes abstracted on processes (e.g. ( → )^b), and so on. The subsort relation now generates a natural subtype relation using the standard subtyping rule for function types. This extension essentially amounts to adjoining to our system a standard functional type system along the lines of Cardelli's simply typed λ-calculi with subtyping [Car86].

6.3 Polymorphism

A general theme in this paper has been the observation that a type system for a process calculus can be presented using concepts familiar from the literature on typed λ-calculi. Carrying this program a step further leads us to wonder what role the polymorphism found in ML [Mil78] or the polymorphic λ-calculus [Gir72, Rey74] might play in typing for processes. One possibility is to extend channel sorts so that each channel is thought of as carrying a tuple of both types and values:

    S ::= (A1, …, Am; S1, …, Sn)^I    channel sorts: tagged tuple of type variables and sorts
        | A                           sort variable
        | μA. S                       recursive sort

In a channel sort of the form (A1, …, Am; S1, …, Sn)^I, the type parameters A1, …, Am may appear in the sorts S1, …, Sn; in other words, the tuples passed along the channel may be thought of as elements of a generalized existential type. Thus, for example, the sorting a : (A; (A)^w, A)^r precisely describes the constraints respected by the process a(A; x : (A)^w, y : A).x̄⟨y⟩.0, which reads a pair of channels from a and outputs the second along the first. A similar polymorphic extension of π-calculus sorting has been proposed independently by Turner; it will be described in his forthcoming Ph.D. thesis [Tur94].
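Although the polymorphic sorts are purely a typing construct, the behavioural content of the example process can be sketched in ordinary Python, with queues standing in for channels (this is our own illustration, not the paper's): the forwarder below works for any payload type precisely because, like a(A; x : (A)^w, y : A).x̄⟨y⟩.0, it never inspects the value it relays, and it only ever writes on the received channel.

```python
import queue
import threading

# A polymorphic forwarder: receive a (channel, value) pair on a and
# output the value along the received channel.  The write-only tag on x
# is reflected in the fact that the forwarder only calls put() on it.

def forwarder(a: queue.Queue) -> None:
    x, y = a.get()   # a(A; x:(A)^w, y:A)
    x.put(y)         # x<y> . 0

a = queue.Queue()
result = queue.Queue()

t = threading.Thread(target=forwarder, args=(a,))
t.start()
a.put((result, 42))   # instantiate the type parameter A at int
t.join()
v = result.get()
print(v)   # -> 42
```

The client chooses the instantiation of A at each use; the forwarder is well behaved for all of them, which is exactly the guarantee the existential reading of the sort is meant to capture.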

Acknowledgements
This research was begun while both authors were visitors at INRIA-Rocquencourt (in projects Formel and Para, respectively), and was partially supported by ESPRIT Basic Research Action Types and Confer and by the British Science and Engineering Research Council. Our ideas were influenced by many conversations with Robin Milner and David N. Turner.


A Appendix: Proof of Lemma 5.2.2

In this appendix and the ones that follow, we omit sorting annotations when they are irrelevant or can easily be recovered from the context.

The derivation proof of an interaction P → Q uniquely determines the two subterms a(x̃).P1 and ā⟨c̃⟩.P2 of P whose outermost prefixes are those consumed in the interaction; we call these the interacting subterms of the derivation. A useful fact for reasoning with reduction semantics is that two interactions whose derivations use the same interacting subterms yield processes identical modulo structural congruence. This is the content of Lemma A.5. The results preceding Lemma A.5 are needed for its proof. We start by noticing that the derivation of an interaction can be put into a standard form in which the rule R-Eqv is used at most once, at the end (this lemma also appears in [GR93]).

A.1. Lemma: Suppose Q → Q′. Then there is a derivation of Q → Q′ in which R-Eqv appears only as the outermost step, or not at all. In other words, a derivation Q → Q′ can be written as

    Q ≡ (νb̃)(a(x̃).P1 | ā⟨ỹ⟩.P2 | R) → (νb̃)(P1{ỹ/x̃} | P2 | R) ≡ Q′

Proof: By induction on the original derivation of Q → Q′, using the facts that the rule R-Eqv commutes with R-Par and R-Restr and that successive occurrences of R-Eqv can be merged into one. □

We can strengthen this result by requiring that a(x̃).P1 and ā⟨ỹ⟩.P2 be actual subterms of Q (i.e., they have not been modified with ≡) and that all names in b̃ actually occur free in these subterms.

A.2. Lemma: Suppose that a derivation of Q → Q′ has a(x̃).P1 and ā⟨ỹ⟩.P2 as interacting subterms. Then there is a derivation of Q → Q′ in which the last step uses the rule R-Eqv with premises of the form

    Q ≡ (νb̃)(a(x̃).P1 | ā⟨ỹ⟩.P2 | R) → (νb̃)(P1{ỹ/x̃} | P2 | R) ≡ Q′

and b̃ ⊆ fn(a(x̃).P1) ∪ fn(ā⟨ỹ⟩.P2).

Proof: By A.1 and the structural equivalence rules. □

A.3. Lemma: If (νb)P ≡ (νb)Q (note that our conventions about names also imply that this proof makes no use of alpha-conversion on b), then also P ≡ Q.

Proof: A derivation of P ≡ Q can be constructed from the derivation of (νb)P ≡ (νb)Q by simply stripping off those steps which manipulate the restriction on b. □

A.4. Lemma: Suppose α.P | Q ≡ α.P | R. Then Q ≡ R.

Proof: Since none of the structural equivalence rules mention prefixes, it is easy to transform a derivation of α.P | Q ≡ α.P | R into a derivation of 0 | Q ≡ 0 | R. Hence Q ≡ 0 | Q ≡ 0 | R ≡ R. □

A.5. Lemma: Suppose that the derivations of the interactions Q → Q1 and Q → Q2 use the same interacting subterms. Then Q1 ≡ Q2.

Proof: Let a(x̃).P1 and ā⟨ỹ⟩.P2 be the interacting subterms used in the two reductions. By Lemma A.2 the derivation of Q → Qi, i = 1, 2, can be written as

    Q ≡ (νb̃)(a(x̃).P1 | ā⟨ỹ⟩.P2 | Ri) → (νb̃)(P1{ỹ/x̃} | P2 | Ri) ≡ Qi

with b̃ ⊆ fn(a(x̃).P1, ā⟨ỹ⟩.P2). From (νb̃)(a(x̃).P1 | ā⟨ỹ⟩.P2 | R1) ≡ (νb̃)(a(x̃).P1 | ā⟨ỹ⟩.P2 | R2), Lemma A.3, and Lemma A.4, we get R1 ≡ R2, hence

    Q1 ≡ (νb̃)(P1{ỹ/x̃} | P2 | R1) ≡ (νb̃)(P1{ỹ/x̃} | P2 | R2) ≡ Q2. □

A.6. Lemma: Suppose that α.P is a subterm of Q that is not underneath a prefix. Then there are b̃ and R such that Q ≡ (νb̃)(α.P | R).

Proof: We prove the lemma by induction on the structure of Q. In the base case, Q = α.P and the result is trivial, for R = 0 and b̃ = {}. Now the induction cases:

Case Q = Q1 | Q2. Suppose that α.P is in Q1; the case in which α.P is in Q2 is analogous. By induction, Q1 ≡ (νb̃)(R′ | α.P). Therefore we have

    Q = Q1 | Q2 ≡ (νb̃)(R′ | α.P) | Q2 ≡ (νb̃)((R′ | Q2) | α.P).

Case Q = !Q′. Q ≡ Q′ | !Q′ and by induction Q′ ≡ (νb̃)(R′ | α.P). Proceed as in the previous case.

Case Q = (νc)Q′. Easy. □

We now need two simple lemmas about barbed bisimulation, before tackling the proof of Lemma 5.2.2. Barbed bisimulation is not preserved by parallel composition. For instance, a.b̄ ∼̇ a.0, while a.b̄ | ā ̸∼̇ a.0 | ā. However, ∼̇ is preserved by restriction.

A.7. Lemma: P ∼̇_{Γ, a:S} Q implies (νa : S)P ∼̇_Γ (νa : S)Q.

Proof: Straightforward. □

A very useful technique for reducing the amount of work needed for proving a bisimilarity is to consider the closure of the bisimulation by some auxiliary relation; typically the auxiliary relation is taken to be the bisimilarity itself. We write P ℛ Q to mean that (P, Q) ∈ ℛ, and P ∼̇ ℛ ∼̇ Q to mean that there are R1 and R2 such that P ∼̇ R1 ℛ R2 ∼̇ Q.

A.8. Definition: [Barbed Γ-bisimulation up to ∼̇] A relation ℛ is a barbed Γ-bisimulation up to ∼̇ if (P, Q) ∈ ℛ implies
1. if P → P′, then there exists Q′ such that Q → Q′ and P′ ∼̇ ℛ ∼̇ Q′;
2. if Q → Q′, then there exists P′ such that P → P′ and P′ ∼̇ ℛ ∼̇ Q′;
3. for each name a, P ↓a iff Q ↓a.

A.9. Lemma: Suppose ℛ is a barbed Γ-bisimulation up to ∼̇; then ℛ ⊆ ∼̇.

Proof: Standard technique for bisimulation up to a strong bisimulation [Mil89]. □

We are now ready to give the proof of Lemma 5.2.2. We first recall its assertion:

Lemma 5.2.2: Let P1 and P2 be two processes obeying the sorting Δ. Suppose that for every sorting Γ, substitution σ, and process Q such that
1. Γ ≤ Δ,
2. if σ = {c̃/b̃}, then Γ(c̃) ≤ Γ(b̃), and
3. Q obeys Γ,
it holds that

    Qσ | P1σ ∼̇_Γ Qσ | P2σ.

Then P1 ≅_Δ P2.

Proof: By Definition 5.2.1, P1 ≅_Δ P2 holds if

    C[P1] ∼̇_Γ C[P2]    (15)

for each sorting Γ and (Γ, Δ)-context C[·]. To obtain a sufficiently strong induction hypothesis, we prove something slightly different, adding a substitution and a parallel composition with a process Q. That is, we prove that for every sorting Γ, every (Γ, Δ)-context Q | C[·], and every substitution σ = {c̃/b̃} with Γ(c̃) ≤ Γ(b̃), it holds that

    Qσ | C[P1]σ ∼̇_Γ Qσ | C[P2]σ.    (16)
Taking Q = 0, (15) is an instance of (16). The depth of a context is the number of operators encountered to reach the hole; thus the depth of ] is one and the depths of P j C ], : C ] and !C ] equal the depth of C ] plus one. We prove (16) by induction on the depth of the context C ]. In the proof, we use the process Q in (16) to be able to cope with the case in which the outermost operator of the context is a replication or a parallel composition, and the substitution in (16) for the case in which the outermost operator is an input pre x. In the base case, C ] = ], and the fact that Q j C ] is a (?, )-context implies conditions (1) and (3) from the statement of the lemma; condition (2) is satis ed by assumption, and (16) follows. For the induction part, we proceed by a case analysis on the outermost operator of C ]. In the barbed bisimulations we de ne, it is assumed that P1; P2; Q; ?; , and satisfy the requirements in assertion (16). In each case, we shall only examine clause (1) of the de nition of barbed bisimulation (5.2.1), since clause (3) is always trivial and (2) is similar to (1). Moreover, we shall explicitly write sort annotations only in the pre x case, since sorting does not play a central role in the other cases; we shall also feel free to write instead of ? . Pre x: C ] is of the form : D ]. Let n o < = Q j ( : D P1]) ; Q j ( : D P2]) ? : The relation < is a barbed ?-bisimulation. Consider the possible reductions of Q j ( : D P1]) ; there are two cases to look at: The interaction occurs inside Q: Q j ( : D P1]) ?!Q0 j ( : D P1]) This is matched by Q j ( : D P2]) ?!Q0 j ( : D P2]) ; since (Q0 j ( : D P1]) ; Q0 j ( : D P2]) ) 2 < . The interaction is between Q and ( : D P1]) . There are two cases: is an output or an input; the former is simpler, so we only look at the latter. For simplicity we assume that the object part of has length one, say = a(b : S) and that does not modify , i.e. ( :D ]) = :(D ] ). 
We denote by ahci : Q0 the subterm of Q which interacts with ( : D P1]) . By Lemma A.6, Q ( y : T) (Q00 j ahci : Q0). Then, assuming y fresh, we have: ~ ~ ~ Q j a(b : S) : (D P1 ] ) ( y : T) (Q00 j ahci : Q0 j a(b : S) : (D P1] )) ?! ( y : T) (Q00 j Q0 j D P1] fc=bg) ~ ~ ~ ~ Similarly, from the process Q j a(b : S) : (D P2 ] ) we can infer: Q j a(b : S) : (D P2] ) ( y : T) (Q00 j ahci : Q0 j a(b : S) : (D P2 ] )) ?! ( y ) (Q00 j Q0 j D P2] fc=bg) ~ ~ ~ Now, by Lemma 2.6.4, (?; y : T)(c) S and therefore, using the induction assumption, we have ~ ~ (Q00 j Q0) j D P1] fc=bg ?;y:T (Q00 j Q0) j D P2] fc=bg: ~~ From this, apply Lemma A.7 to get ( y : T) (Q00 j Q0 j D P1] fc=bg) ? ( y : T) (Q00 j Q0 j D P2] fc=bg); ~ ~ ~ ~ which closes up the bisimulation, since ? < . Restriction: C ] is of the form ( a) D ]. We want to show that Q j (( a) D P1]) Q j (( a) D P2]) : We can assume that a does not occur in Q and that does not modify a; therefore we have Q j (( a) D P1]) ( a) (Q j D P1] ) ( a) (Q j D P2] ) Q j (( a) D P2]) ; where the use of is derived from the induction assumption and Lemma A.7. 26

Parallel composition: C ] is of the form Q0 j D ]. We have to show that Q j (Q0 j D P1]) ? Q j (Q0 j D P2]) :
We have

(17)

Q j (Q0 j D P1]) (Q j Q0) j D P1] and Q j (Q0 j D P2]) (Q j Q0) j D P2] ; from which (17) follows from the induction assumption. Replication: C ] is of the form !D ]. We show that < = f(Q j !(D P1] ); Q j !(D P2] ))g is a barbed bisimulation up to . We abbreviate D P1] as DA and D P2] as DB . We show that if Q j !DA?!A, then for some B, we have Q j !DB ?!B and A < B: (18) The processes which might cause an interaction in Q j !DA are Q and DA . Therefore, if HA = Q j D A ; then by Lemma A.5, modulo , any interaction in Q j !DA is of the form1 0 Q j !DA?!HA j !DA; (19) 0 where HA is a derivative of HA , i.e. 0 HA ?!HA: (20) Let us now turn to Q j !DB . By the induction assumption of the lemma, for any process K, K j DA K j DB : Using this fact, we get Q j !DB Q j DB j !DB Q j DA j !DB = HA j !DB : Now, by (20), 0 HA j !DB ?!HA j !DB ; 0 0 and by de nition of < it holds that HA j !DA < HA j !DB . 0 0 0 Summarizing, we have: Q j !DB HA j !DB ?!HA j !DB and HA j !DA < HA j !DB . From the former, by de nition of , for some B we also have 0 Q j !DB ?!B HA j !DB : 0 0 This proves that HA j !DA < B, which for A = HA j !DA is precisely (18). 2
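The barbed-bisimulation reasoning used throughout this proof can be illustrated on finite reduction systems. The sketch below is our own finite-state approximation (the paper works with full π-calculus terms and contexts, not with an enumerated state space): states have silent reductions and sets of observable barbs, and we compute the largest relation satisfying clauses (1)-(3) of the bisimulation definition by iterated refinement. The example encodes the remark before Lemma A.7 that a.b̄ and a.0 are barbed bisimilar, while composing each with ā separates them.

```python
# Naive greatest-fixpoint computation of barbed bisimilarity on a finite
# reduction system.  red[s] is the set of one-step reducts of s; barbs[s]
# is the set of names at which s is immediately observable.

def barbed_bisim(states, red, barbs):
    # Start from all pairs agreeing on barbs (clause 3), then refine
    # against the reduction clauses (1) and (2) until stable.
    rel = {(p, q) for p in states for q in states if barbs[p] == barbs[q]}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = (all(any((p1, q1) in rel for q1 in red[q]) for p1 in red[p])
                  and all(any((p1, q1) in rel for p1 in red[p]) for q1 in red[q]))
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel

# p0 = a.b-bar and q0 = a.0: no reductions in isolation, same barb {a}.
# P = a.b-bar | a-bar reduces to B = b-bar; Q = a.0 | a-bar reduces to N = 0.
states = ['P', 'Q', 'B', 'N', 'p0', 'q0']
red = {'P': {'B'}, 'Q': {'N'}, 'B': set(), 'N': set(),
       'p0': set(), 'q0': set()}
barbs = {'P': {'a'}, 'Q': {'a'}, 'B': {'b'}, 'N': set(),
         'p0': {'a'}, 'q0': {'a'}}

rel = barbed_bisim(states, red, barbs)
print(('p0', 'q0') in rel)   # -> True: a.b-bar and a.0 are barbed bisimilar
print(('P', 'Q') in rel)     # -> False: parallel composition separates them
```

This is exactly why the lemma quantifies over contexts and carries the process Q through the induction: bisimilarity of the bare processes does not survive composition.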

1 Had we included summation in the syntax of our π-calculus language, then two copies of DA in the definition of HA would be needed in order to capture all possible interactions in !DA. For instance, if DA = ā⟨y⟩.P + a(x).Q, then !DA → (P | Q{y/x}) | !DA, but DA alone has no interaction.

B Proofs of the replicator theorems

B.1 Some results on names used only as triggers

Recall from Definition 5.3.1 that the expression "m is used only as a trigger in P" is understood with respect to a judgement of the form Γ ⊢ C[P]. Sometimes we shall omit this judgement if it is clear from the context. We write T↑w for the sort obtained from T by replacing its outermost tag with w. Also, (T)^≤w denotes a sort whose outermost tag is less than or equal to w.


follows by inspection of this derivation. 2 B.1.2. Lemma: Suppose ?; m : (S)r  P : and ?; m : (S)w  P : . Then m is not free in P. Proof: By induction on the structure of P. The interesting case is when P = ahci : Q (for notational convenience, we suppose that the tuple of arguments has length one). By the induction assumption, m is not free in Q. Therefore, we only have to show that m 62 fa; cg. Clearly, a 6= m, for otherwise ?; m : (S)r 6  P : . Now, suppose a = c. Then, from the hypothesis of the lemma and the de nition of the typing system, it must be that ?(a) ((S)r )w and ?(a) ((S)w )w : From this and the subtyping algorithm, we deduce that for some T, ?(a) 7 (T) w and (S)r T (21) (S)w T: (22) But this is impossible because, by Lemma 2.4.12, from (21) we get that the outermost tag of T is r, and from (22) that the outermost tag of T is w. 2 B.1.3. Lemma: Suppose that ?; a : T  P : , that m is used only as a trigger in P, and that S T. Then ?; a : S  P : , and m is used only as a trigger in P with respect to the narrower sorting. Proof: For a 6= m, the result follows from the hypothesis S T by narrowing (2.6.2). Now, suppose a = m. We know that ?; a : T "w  P : (23) and we want to show that also ?; a : S "w  P : (24) holds. If I (T) 2 fw; bg, then (24) again follows by narrowing. If I (T) = r, then from ?; a : T  P : , (23), and Lemma B.1.2, we infer that m is not free in P; therefore (24) is trivially true. 2 B.1.4. Corollary: Suppose that  P : , that m is used only as a trigger in P, and that ? <E . Then ?  P : and m is used only as a trigger in P with respect to ?. Proof: By the previous lemma and weakening. 2 B.1.5. Lemma: Suppose that ?; a : S; b : S  P : , and that a and b are used only as triggers in P. Then ?; a : S  P fa=bg : and a is used only as a trigger in P fa=bg. Proof: By induction on the structure of P and the de nitions of typing and subtyping. 2 B.1.6. 
Corollary: Suppose that ?; a : S; b : T  P : with S T, and that a; b are used only as triggers in P. Then ?; a : S  P fa=bg : and a is used only as a trigger in P fa=bg. Proof: By B.1.3 and B.1.5. 2 B.1.7. Lemma: Suppose ?  a(b1 : S1 ; : : :; bn : Sn) : P1 j ahc1 ; : : :; cni : P2 : and ci is used only as a trigger in ahc1 ; : : :; cni : P2. Then I (Si ) = w. Proof: If a 6= ci , then by the de nitions of trigger and of the typing relation we have: ?(a) (?(c1); : : :; ?(ci )"w ; : : :; ?(cn))w and ?(a) (S1 ; : : :; Sn)r : By Lemma 2.4.10, we derive ?(ci )"w Si . Now, I (Si ) = w follows from Lemma 2.4.12. The only di erence in the case a = ci is that the proof of ?(a) (?(c1 ); : : :; ?(ci)"w ; : : :; ?(cn))w requires two steps, namely ?(a) ?(a)"w and ?(a)"w (?(c1 ); : : :; ?(ci)"w ; : : :; ?(cn))w . These are derived from the de nitions of subtyping and typing and the hypothesis of the lemma. 2 B.1.8. Lemma: Suppose ?  P : for P = a(b1 : S1 ; : : :; bn : Sn) : P1 j ahc1; : : :; cni : P2, and that, for some i, ci is used only as a trigger in P1 and ahc1 ; : : :; cni : P2. Then ci is used only as a trigger in P1fc1 ; : : : ; cn =b1 ; : : : ; bn g j P2. 28

B.1.1. Lemma: Suppose I (T) w and S T. Then S "w T "w . Proof: By Corollary 2.4.9, we may assume that we are given an algorithmic derivation of S T. The result

Note, by the way, that Lemma B.1.8 is not a corollary of the preservation of sorting under reduction. It would be so if the hypothesis were that ci is used only as a trigger in P: But this does not hold when a is equal to ci , since a occurs in P in input position. Proof: For notational convenience, take n = i = 1 and abbreviate c1 , b1, and S1 as c, b, and S, respectively. By hypothesis, we know that c is used only as a trigger in P1 and P2. Therefore, by Corollary B.1.6, c is used only as a trigger in P1fc=bg j P2 if ?(c) S and b is used only as a trigger in P1. The former holds by Lemma 2.6.4; the latter holds because, by Lemma B.1.7, we have I (S) = w. 2 B.1.9. Lemma: Suppose that ?  C P] : and m is used only as a trigger in P. Moreover, let = fb=ag with ?(b) ?(a) and a; b 6= m. Then ?  C P] : and m is used only as a trigger in P with respect to this judgement. Proof: Straightforward using Lemma 2.6.3. 2 In the two lemmas that follow, we suppose that m and m0 are names of the same sort and write P m and P m for P fm=m0g and P fm0=mg, respectively. B.2.1. Lemma: Suppose that m; m0 are used only as triggers in P m and P. 1. If P m ?!Q1 , then there exits Q such that P ?!Q and Q1 = Qm . 2. If P ?!Q, then P m ?!Qm . Moreover, in both cases, m and m0 are used only as triggers in Qm and Q. Proof: Straightforward transition induction. Since m and m00 are used only as triggers, they cannot appear in input position and consequently no interaction along m or m is possible in P or P m . Hence, the substitution of m0 with m does not add new derivatives. Moreover, the names m and m0 are used only as triggers in the derivatives of P and P m , because sorting is preserved under reduction. 2 In the following, when talking about barbed ?-congruence, we shall often omit the sorting ? when it is not needed or is clear from the context. 
Moreover, when proving barbed bisimilarities, we shall only examine clause (1) of De nition 5.2.1, since clause (3) is always trivial and (2) is similar to (1). B.2.2. Lemma: Suppose that m is used only as a trigger in P1, P2, and R. Then
0

B.2 The core of the proofs

Q j (P1 j P2)fm(~) := Rg ? Q j P1fm(~) := Rg j P2fm(~) := Rg: x x x Proof: We shall show below that if m and m0 are used only as triggers in P and R, and m0 62 fn(R), then (25) ( m) (P m j !m(~) : R) ( m; m0 ) (P j !m(~) : R j !m0(~) : Rm ): x x x Using this, the assertion of the lemma is proved as follows, assuming that m; m0 are not free in Q and that m0 does not appear in Q j (P1 j P2)fm(~) := Rg. x Q j (P1 j P2)fm(~) := Rg = x Q j ( m) (P1 j P2 j !m(~) : R) x ( m) (Q j P1 j P2 j !m(~) : R) x (by (25)) 0 ) (Q j P1 j P m j !m(~) : R j !m0(~) : Rm ) ( m; m x x () 2 Q j ( m) (P1 j !m(~) : R) j ( m0 ) (P2m j !m0(~) : Rm ) x x (by -conversion) Q j ( m) (P1 j !m(~) : R) j ( m) (P2 j !m(~) : R) = x x Q j P1fm(~) := Rg j P2fm(~) := Rg x x The transformation ( ) is obtained using the structural laws for parallel composition and restriction, in particular the fact that m is only free in P1 and m(x) : R, whereas m0 is only free in P2m and m0 (x) : Rm . The transformation (25) can be applied because the names m and m0 are used only as triggers in P1, P2m and R (by the hypothesis of the lemma and Lemma B.1.5) and m0 is not free in R. Thus, we are left with only (25) to prove. We show that
0 0 0 0 0 0 0 0


< = (( m) (P m j !m(~) : R); ( m; m0 ) (P j !m(~) : R j !m0(~) : Rm )) x x x
0

n

such that m; m0 are used only as triggers in P and R o and m0 62 fn(R)

is a barbed bisimulation. Let (A; B) 2 < , and consider the possible moves by A; the case in which B moves rst can be handled in a similar way. Suppose A?!A0. By Lemma A.5 it is enough to consider only once each pair of possible interacting subterms in A. There are two cases to examine, according to whether the interacting subterms are both in P m or one is in P m and the other is m(x) : R. For the former case, use Lemma B.2.1. Let us consider now the latter. In order to see better how B matches the move by A it is convenient to write a subterm of P m as P1m, where P1 is the corresponding subterm of P. Thus, let
m HP = (m00 hci : KP )m ~

for m00 2 fm; m0 g

be the subterm in P m which interacts with m(x) : R. Using Lemma A.6, for some y and Lm , ~ P P m ( y ) (Lm j (m00hci : KP )m ) ~ P ~ and therefore, where the last use of is derived from Lemma A.5. Now we can show that B can match the move A?!A0. Suppose m00 = m; the case m00 = m0 is analogous. We have P from which we get B = ( m; m0 ) (P j !m(~) : R j !m0(~) : Rm ) x x 0 ) (( y ) (LP j KP j Rfc=xg) j !m(~) : R j !m0(~) : Rm ) ~~ ?! ( 0 m; m ~ x x = B:
0 0

A = ( m) (P m j !m(~) : R)?! ( m) (( y ) (Lm j KP j (Rfc=~g)m ) j !m(~) : R) A0 ; x ~ P m ~x x ~ ( y ) (LP j mhci : KP ) ~

To conclude that A0 < B 0 , we also have to guarantee that the side condition in the de nition of < is ~~ satis ed, namely m and m0 are used only as triggers in ( y ) (LP j KP j Rfc=xg) and R. This follows from ~ the hypothesis A < B, which guarantees that m and m0 are used only as triggers in R and LP , and from ~~ Lemma B.1.8, which guarantees that they are used only as triggers in KP j Rfc=xg. 2 B.2.3. Lemma: If m is used only as a trigger in P and R, then Q j !(P fm(~) := Rg) x
?

Q j (!P)fm(~) := Rg: x

Proof: We show that < = f(Q j !(P fm(~) := Rg); Q j (!P)fm(~) := Rg) j m is used only as a trigger in P; Rg x x is a barbed bisimulation up to . Let (A; B) 2 < and consider a possible interaction in A. The terms that might participate in the interaction are Q and P fm(~) := Rg. Therefore, if x HA = Q j P fm(~) := Rg; x
then by Lemma A.5, modulo , any interaction in A can be written in the form
0 A?!HA j !(P fm(~) := Rg) = A0 ; x

(26)


0 HA ?!HA: Let us see how B can match the move (26), up to . We have

0 where HA is a derivative of HA , i.e.

(27)

B Q j (P j !P)fm(~) := Rg; x which, using Lemma B.2.2 is barbed bisimilar with Q j P fm(~) := Rg j (!P)fm(~) := Rg = B1 : x x
0 0 B1 ?!HA j (!P)fm(~) := Rg = B1 ; x 0 0 0 and from (26) we get (A0 ; B1 ) 2 < . Summarising, we have B B1 ?!B1 and (A0 ; B1) 2 < . From the 0 we also have latter, by de nition of , for some B

Now, by (27), we have

B ?!B 0

This proves that A′ ∼̇ ℛ ∼̇ B′, which is enough because ℛ is a bisimulation up to ∼̇. □

Now we are ready to prove the two replicator theorems.

Theorem 5.3.2: If Γ ⊢ (P | Q){m(x̃) := R} and m is used as a trigger in P, Q, and R, then

    (P | Q){m(x̃) := R}  ≅_Γ  P{m(x̃) := R} | Q{m(x̃) := R}.

Proof: By Lemma 5.2.2 it is enough to show that for every Γ, σ, and Q such that
1. Γ ≤ Δ,
2. if σ = {c̃/b̃}, then Γ(c̃) ≤ Γ(b̃),
3. Γ ⊢ Q,
it holds that

    Qσ | ((P1 | P2){m(x̃) := R})σ ∼̇_Γ Qσ | (P1{m(x̃) := R} | P2{m(x̃) := R})σ.    (28)

We may assume that σ does not modify m or x̃, so we have

    Qσ | ((P1 | P2){m(x̃) := R})σ = Qσ | (P1σ | P2σ){m(x̃) := Rσ}

and

    Qσ | (P1{m(x̃) := R} | P2{m(x̃) := R})σ = Qσ | (P1σ){m(x̃) := Rσ} | (P2σ){m(x̃) := Rσ}.

Using Corollary B.1.4 and Lemma B.1.9, we derive that m is used only as a trigger in P1σ, P2σ, and Rσ. Then (28) follows from Lemma B.2.2. □

Theorem 5.3.3: If Γ ⊢ (!P){m(x̃) := R} and m is used as a trigger in P and R, then

    (!P){m(x̃) := R}  ≅_Γ  !(P{m(x̃) := R}).

Proof: Similar to the proof of the first replicator theorem, using Lemma B.2.3 in place of Lemma B.2.2. □


C Validity of β-reduction for the encoding V′
The following are some obvious laws:

C.1. Lemma:
1. (νa)(ā⟨c̃⟩.P | a(b̃).Q) ∼̇ (νa)(P | Q{c̃/b̃}).
2. Suppose a is not free in P. Then P{a(x̃) := R} ∼̇ P.
3. Suppose a does not occur in the prefix α. Then (α.P){a(x̃) := R} ∼̇ α.(P{a(x̃) := R}).

The schema of the proof of validity of β-reduction for V′ is the same used by Milner to show the analogous result for the lazy λ-calculus encoding [Mil91]. The sorts of the names for V′ have been described in Section 4; for simplicity, explicit sort annotations and Γ subscripts (e.g. on bisimulations) are omitted.

C.2. Theorem: V′[[(λx.M)(λy.N)]]⟨p⟩ ∼̇ V′[[M{λy.N/x}]]⟨p⟩.

Proof: We can reduce the process

    V′[[(λx.M)(λy.N)]]⟨p⟩ = (νq, r)(V′[[λx.M]]⟨q⟩ | V′[[λy.N]]⟨r⟩ | App⟨p, q, r⟩)

to

    V′[[M]]⟨p⟩{x(w) := w(y, q).V′[[N]]⟨q⟩}

by repeated expansions (Lemma C.1(1)) which consume the actions of the arbiter App, plus Lemma C.1(2) to garbage-collect the environment entries which cannot be accessed any more.

In the following, let R abbreviate w(y, q).V′[[N]]⟨q⟩. We show, by induction on M, that for any p,

    A = V′[[M]]⟨p⟩{x(w) := R} ∼̇ V′[[M{λy.N/x}]]⟨p⟩ = B.

We have to distinguish the three cases in which M is a variable, an abstraction or an application. But first, observe that the name x is used only as a trigger in V′[[M]]⟨p⟩ and R. This is trivially true for R because x does not occur free in R. As regards V′[[M]]⟨p⟩, the name x may only occur free in V′[[M]]⟨p⟩ in subterms of the form q̄⟨x⟩. This agrees with the hypothesis that x is used only as a trigger because, following the terminology in Section 4, q is a location name; hence, the sort assigned to the name carried by q is (Sp)^w, which has w as outermost tag. The fact that x is used only as a trigger in V′[[M]]⟨p⟩ and R will authorize us to use both the replicator theorems 5.3.2 and 5.3.3 in the transformations below.

Variable: M = z. Both the case z = x and the case z ≠ x are easy.

Abstraction: M = λz.M′. We have

    A = ((νy)(p̄⟨y⟩ | !y(w).w(z, q).V′[[M′]]⟨q⟩)){x(w) := R}
      ∼̇ (νy)(p̄⟨y⟩ | (!y(w).w(z, q).V′[[M′]]⟨q⟩){x(w) := R})        (29)
      ∼̇ (νy)(p̄⟨y⟩ | !((y(w).w(z, q).V′[[M′]]⟨q⟩){x(w) := R}))      (30)
      ∼̇ (νy)(p̄⟨y⟩ | !y(w).w(z, q).(V′[[M′]]⟨q⟩{x(w) := R}))        (31)
      ∼̇ (νy)(p̄⟨y⟩ | !y(w).w(z, q).V′[[M′{λy.N/x}]]⟨q⟩) = B,        (32)

where, in (29), the scope of the local environment {x(w) := R} has been reduced, since x does not occur in p̄⟨y⟩; the equality (30) pushes the local environment inside the replication and is derived from the second replicator theorem; the equality (31) pushes the local environment inside the two prefixes, and is derived from Lemma C.1(3); finally, the equality (32) is obtained from the induction assumption.

Application: M = M1 M2. We have

    A = (νq, r)(V′[[M1]]⟨q⟩ | V′[[M2]]⟨r⟩ | App⟨p, q, r⟩){x(w) := R}
      ∼̇ (νq, r)(V′[[M1]]⟨q⟩{x(w) := R} | V′[[M2]]⟨r⟩{x(w) := R} | App⟨p, q, r⟩{x(w) := R})   (33)
      ∼̇ (νq, r)(V′[[M1]]⟨q⟩{x(w) := R} | V′[[M2]]⟨r⟩{x(w) := R} | App⟨p, q, r⟩)              (34)
      ∼̇ (νq, r)(V′[[M1{λy.N/x}]]⟨q⟩ | V′[[M2{λy.N/x}]]⟨r⟩ | App⟨p, q, r⟩) = B.               (35)

The equality (33) is obtained by first enlarging the scope of the restrictions on q and r and then applying the first replicator theorem to distribute the local environment {x(w) := R} over the three processes of the parallel composition. The equality (34) is obtained from Lemma C.1(2), since x is not free in App⟨p, q, r⟩; finally equality (35) is obtained from the induction assumption. □


References
[Abr89] S. Abramsky. The lazy lambda calculus. In D. Turner, editor, Research Topics in Functional Programming, pages 65-116. Addison-Wesley, 1989.
[AC91] Roberto M. Amadio and Luca Cardelli. Subtyping recursive types. In Proceedings of the Eighteenth ACM Symposium on Principles of Programming Languages, pages 104-118, Orlando, FL, January 1991. Also available as DEC Systems Research Center Research Report number 62, August 1990. To appear in TOPLAS.
[Car86] Luca Cardelli. Amber. In Guy Cousineau, Pierre-Louis Curien, and Bernard Robinet, editors, Combinators and Functional Programming Languages, pages 21-47. Springer-Verlag, 1986. Lecture Notes in Computer Science No. 242.
[Cou83] B. Courcelle. Fundamental properties of infinite trees. Theoretical Computer Science, 25:95-169, 1983.
[CW85] Luca Cardelli and Peter Wegner. On understanding types, data abstraction, and polymorphism. Computing Surveys, 17(4), December 1985.
[dB72] Nicolas G. de Bruijn. Lambda-calculus notation with nameless dummies: a tool for automatic formula manipulation with application to the Church-Rosser theorem. Indag. Math., 34(5):381-392, 1972.
[EN86] U. Engberg and M. Nielsen. A calculus of communicating systems with label-passing. Report DAIMI PB-208, Computer Science Department, University of Aarhus, Denmark, 1986.
[Gay93] Simon J. Gay. A sort inference algorithm for the polyadic π-calculus. In Proceedings of the Twentieth ACM Symposium on Principles of Programming Languages, January 1993.
[Gir72] Jean-Yves Girard. Interprétation fonctionelle et élimination des coupures de l'arithmétique d'ordre supérieur. PhD thesis, Université Paris VII, 1972.
[GR93] P. Glavan and D. Rosenzweig. Communicating evolving algebras. In E. Börger, S. Martini, G. Jäger, H. Kleine Büning, and M. M. Richter, editors, Computer Science Logic, Selected Papers from CSL'92, volume 702 of Lecture Notes in Computer Science, pages 186-215. Springer-Verlag, 1993.
[Mil78] Robin Milner. A theory of type polymorphism in programming. Journal of Computer and System Sciences, 17:348-375, August 1978.
[Mil89] R. Milner. Communication and Concurrency. Prentice Hall, 1989.
[Mil90] R. Milner. Functions as processes. Research Report 1154, INRIA, Sophia Antipolis, 1990. Final version in Journal of Mathematical Structures in Computer Science, 2(2):119-141, 1992.
[Mil91] R. Milner. The polyadic π-calculus: a tutorial. Technical Report ECS-LFCS-91-180, LFCS, Dept. of Comp. Sci., Edinburgh Univ., October 1991. Also in Logic and Algebra of Specification, ed. F. L. Bauer, W. Brauer and H. Schwichtenberg, Springer-Verlag, 1993.
[MPW92] R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes (Parts I and II). Information and Computation, 100:1-77, 1992.
[MS92] R. Milner and D. Sangiorgi. Barbed bisimulation. In W. Kuich, editor, 19th ICALP, volume 623 of Lecture Notes in Computer Science, pages 685-695. Springer-Verlag, 1992.
[Plo75] G. D. Plotkin. Call-by-name, call-by-value and the λ-calculus. Theoretical Computer Science, 1:125-159, 1975.
[PS93] J. Parrow and D. Sangiorgi. Algebraic theories for name-passing calculi. Technical Report ECS-LFCS-93-262, LFCS, Dept. of Comp. Sci., Edinburgh Univ., 1993. To appear in Information and Computation. Short version to appear in the Proc. REX Summer School/Symposium 1993, LNCS, Springer-Verlag.
[Rey74] John Reynolds. Towards a theory of type structure. In Proc. Colloque sur la Programmation, pages 408-425, New York, 1974. Springer-Verlag LNCS 19.
[Rey85] John Reynolds. Three approaches to type structure. In Mathematical Foundations of Software Development. Springer-Verlag, 1985. Lecture Notes in Computer Science No. 185.
[Rey88] John C. Reynolds. Preliminary design of the programming language Forsythe. Technical Report CMU-CS-88-159, Carnegie Mellon University, June 1988.
[San92] D. Sangiorgi. Expressing Mobility in Process Algebras: First-Order and Higher-Order Paradigms. PhD thesis CST-99-93, Department of Computer Science, University of Edinburgh, 1992.
[San93] D. Sangiorgi. A theory of bisimulation for the π-calculus. Technical Report ECS-LFCS-93-270, LFCS, Dept. of Comp. Sci., Edinburgh Univ., 1993. Extended abstract in Proc. CONCUR '93, volume 715 of Lecture Notes in Computer Science, Springer-Verlag.
[Tur94] David N. Turner. Ph.D. thesis, LFCS, University of Edinburgh, 1994. In preparation.
[VH93] Vasco T. Vasconcelos and Kohei Honda. Principal typing schemes in a polyadic pi-calculus. In Proceedings of CONCUR '93, July 1993. Also available as Keio University Report CS-92-004.
[Wal92] David Walker. Objects in the pi-calculus. To appear in Information and Computation, 1992.

