## Exploring a Communicating Sequential Processes EDSL

#### Translating Communicating Sequential Processes notation into program code

In my last post I showed the Communicating Sequential Processes (CSP) description of a pipeline of processes; here it is in full:

```
FULL(prev, next) = FULL_WILLING(prev, next) |~| FULL_STATIONARY(prev, next)

FULL_WILLING(prev, next)
  = prev.empty /\ next.move -> EMPTY(prev, next)
  [] prev.move /\ next.move -> FULL(prev, next)
  [] prev.empty /\ next.canstay -> FULL(prev, next)
  [] prev.canstay /\ next.canstay -> FULL(prev, next)
  [] prev.muststay /\ next.muststay -> FULL(prev, next)

FULL_STATIONARY(prev, next)
  = prev.empty /\ next.muststay -> FULL(prev, next)
  [] prev.muststay /\ next.muststay -> FULL(prev, next)
  [] prev.canstay /\ next.muststay -> FULL(prev, next)

EMPTY(prev, next)
  = prev.empty /\ next.empty -> EMPTY(prev, next)
  [] prev.muststay /\ next.empty -> EMPTY(prev, next)
  [] prev.move /\ next.empty -> FULL(prev, next)

GENERATOR(next) = ON(next) ; OFF(next) ; OFF(next) ; GENERATOR(next)
ON(next)  = next.move -> SKIP
         [] next.canstay -> SKIP
OFF(next) = next.empty -> SKIP

END(prev) = prev.empty -> END(prev)
         [] prev.muststay -> END(prev)
         [] prev.move -> END(prev)
```

I can translate that into Haskell code that uses my CHP library fairly mechanically; here is a snippet:

```haskell
site :: (Chanin Platelet, EnrolledBarrier, EnrolledBarrier, EnrolledBarrier)
     -> (Chanout Platelet, EnrolledBarrier, EnrolledBarrier, EnrolledBarrier)
     -> CHP ()
site (prevMove, prevEmpty, prevCanStay, prevMustStay)
     (nextMove, nextEmpty, nextCanStay, nextMustStay) = empty
  where
    fullWilling platelet = alt
      [ (syncBarrier prevEmpty <&> writeChannel nextMove platelet) >> empty
      , (readChannel prevMove <&> writeChannel nextMove platelet) >>= full . fst
      , (syncBarrier prevEmpty <&> syncBarrier nextCanStay) >> full platelet
      , (syncBarrier prevCanStay <&> syncBarrier nextCanStay) >> full platelet
      , (syncBarrier prevMustStay <&> syncBarrier nextMustStay) >> full platelet
      ]
```

Thankfully, CHP already allows us to get quite close to the original CSP, but it is still more verbose than the original. I wanted to see if I could get the code looking as much like the original CSP as possible, and this post details how I went about it.

#### Enter EDSLs

Many people have used Haskell as a host language for Embedded Domain Specific Languages. As far as I understand the term, an EDSL is where you craftily warp the host language to look like a completely different language with all sorts of frighteningly clever hacks. (I’m being very disingenuous to all the designers of very sensible and useful EDSLs, but for me the one that always springs to mind is Lennart Augustsson’s BASIC EDSL.) This post is my small foray into EDSLs, to get the CHP implementation looking as much like CSP as possible. I’m not sure that what I have is actually an EDSL, as it’s fairly specific to this particular piece of CSP, but these are at least EDSL-like techniques.

#### Cobbling together a CSP EDSL

I have a semantic hitch from the start. CSP’s “prev.move” can be viewed either as a single compound event name, or as a communication of the “move” value on the “prev” channel. If you take the latter view, the CSP code above is choosing between different messages on the same channel; something that is not possible in CHP (CML has it, but CML doesn’t have conjunction). So instead, we will have to implement the code as a group of events, where “prev.move” is a separate event from “prev.canStay”. There are four possible “sub-events” of prev and next (move, canStay, mustStay, empty), so we need a tuple of four events, with a different direction on the move channel for incoming platelets compared to outgoing platelets (CSP doesn’t have this notion of delineating direction); the remainder of the events can be symmetric two-party barriers:

```haskell
type In  = (Chanin Platelet,  EnrolledBarrier, EnrolledBarrier, EnrolledBarrier)
type Out = (Chanout Platelet, EnrolledBarrier, EnrolledBarrier, EnrolledBarrier)
```

Our implementation is going to add important features to the original CSP: we are going to keep track of information about the platelets (so if we are full, we know what we are full with) and we are going to report our status each time frame to a visualisation process. We will come back to that later, but this is why we have channels carrying the Platelet type. To access the latter three events of our quadruples, we use special accessor methods. We pick out the barriers and synchronise on them; CSP does not syntactically separate events and the synchronisation on them (we do in CHP to allow for poisoning, which CSP does not feature), so we shouldn’t here either:

```haskell
canStay, mustStay, empty
  :: (a, EnrolledBarrier, EnrolledBarrier, EnrolledBarrier) -> CHP ()
empty    (_, x, _, _) = syncBarrier x
canStay  (_, _, x, _) = syncBarrier x
mustStay (_, _, _, x) = syncBarrier x
```

We want to be able to refer to the CSP event “next.canStay” using the identical CHP code “next.canStay”. If we look at the types, the first part, next, is our tuple of events, and canStay is an accessor on it. So we want to translate “next.canStay” into “canStay next”, effectively. So we hide the Prelude’s definition of the dot operator and redefine it to do just that:

```haskell
(.) :: a -> (a -> b) -> b
a . b = b a
```
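As a standalone illustration (outside CHP), here is the same flipped-application trick in a self-contained sketch; `first` and the tuple are hypothetical stand-ins for the event quadruple and its accessors:

```haskell
-- With the Prelude's composition operator hidden, (.) becomes flipped
-- function application, so "value.accessor" reads like CSP's dotted events.
import Prelude hiding ((.))

(.) :: a -> (a -> b) -> b
a . b = b a

-- a hypothetical accessor, standing in for move/empty/canStay/mustStay
first :: (a, b, c, d) -> a
first (w, _, _, _) = w

main :: IO ()
main = print ((10, 20, 30, 40) . first)  -- prints 10
```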

These mechanisms work fine for the barriers; we will also need a way to use our channels. The difficulty here is that we want to refer to the channel (which will have a different type at each end) and, in the receiving instance, access it as an implicitly-synchronising event (like the barriers above), but in the sending instance leave it as some sort of value that we can then send a value on (borrowing CSP’s “!” operator for sending). So our accessor must be polymorphic both in its argument and its return. Hence we need type-classes, with multiple parameters and functional dependencies:

```haskell
class MoveConnector c r | c -> r where
  move :: (c, EnrolledBarrier, EnrolledBarrier, EnrolledBarrier) -> r

instance MoveConnector (Chanin a) (CHP a) where
  move (c, _, _, _) = readChannel c

instance MoveConnector (Chanout a) (Chanout a) where
  move (c, _, _, _) = c
```

This takes care of all that we need. “prev.move” will be of type `CHP Platelet` and “next.move” will be of type `Chanout Platelet`. Our ! operator is straightforward:

```haskell
(!) :: Chanout a -> a -> CHP ()
(!) = writeChannel
```

The pieces of our jigsaw puzzle are almost there. Haskell’s >> sequence operator is already a close enough stand-in for CSP’s -> prefix operator, and the syntactic difficulties of using [] as the choice operator mean that CHP’s <-> will do instead. We will, however, use the CSPc /\ operator to mean the conjunction of the two events. We will return the value from the first of the two, because we only ever want to save the value we receive from the platelet before us (which always appears on the left-hand side of the conjunction in the CSP):

```haskell
(/\) :: CHP a -> CHP b -> CHP a
(/\) p q = fst <$> (p <&> q)
```

Now we can write our site process:

```haskell
site :: Chanout (Maybe (Platelet, Bool)) -> In -> Out -> CHP ()
site obsOut prev next = emptySite
  where
    emptySite :: CHP ()
    emptySite = obsOut!Nothing >>
            ((prev.empty /\ next.empty) >> emptySite)
        <-> ((prev.mustStay /\ next.empty) >> emptySite)
        <-> ((prev.move /\ next.empty) >>= full)

    full :: Platelet -> CHP ()
    full platelet = do
      r <- liftIO $ randomRIO (0, 1 :: Double)
      if r > 0.05
        then obsOut!(Just (platelet, True)) >>
                ((prev.empty /\ next.move!platelet) >> emptySite)
            <-> ((prev.move /\ next.move!platelet) >>= full)
            <-> ((prev.empty /\ next.canStay) >> full platelet)
            <-> ((prev.canStay /\ next.canStay) >> full platelet)
            <-> ((prev.mustStay /\ next.mustStay) >> full platelet)
        else obsOut!(Just (platelet, False)) >>
                ((prev.empty /\ next.mustStay) >> full platelet)
            <-> ((prev.mustStay /\ next.mustStay) >> full platelet)
            <-> ((prev.canStay /\ next.mustStay) >> full platelet)
```

The generator and end are also short:

```haskell
plateletGenerator :: Out -> CHP ()
plateletGenerator out = forever $ on >> off >> off
  where
    on  = do platelet <- Platelet <$> (liftIO $ randomRIO (0.5, 1))
             (out.move!platelet) <-> (out.canStay)
    off = out.empty

plateletEnd :: In -> CHP ()
plateletEnd prev = forever $
  (prev.empty) <-> (prev.mustStay) <-> (prev.move >> return ())
```

So we have implemented our CSP model, and made it look quite like the CSP itself. The behaviour is just as seen in previous videos, but now our sticky platelets are implemented without a tick event. Fred Barnes has done previous work on compiling CSP that supports many more features of CSP than the quick hacks shown here; this was just a fun exercise to get my code looking more like the CSP that it came from.

## Sticky Platelet Pipeline — Finally Tickless

#### Tick and Tickless

A little while ago I posted an example of a simulation, with platelets moving down a pipeline together. When platelets bumped into each other, they stayed stuck together forever in a *clot*, and moved (or stayed put) together. The original code used a tick barrier, as many of my simulation examples do, to keep the simulation in lock-step. An alternative way to keep the simulation in lock-step is to make sure that every process in the pipeline communicates *exactly once* with its neighbours each timestep, which makes the tick barrier redundant. In this post I will show how to make my previous platelet example tickless, using this alternative method. I will be working in the formal Communicating Sequential Processes algebra, CSP (with a small extension); a Haskell implementation will follow in my next post.

#### The Ticking Version, in CSPc

We will start by looking at the ticking (original) version of the simulation in CSP. In fact, I will be using CSPc (CSP with conjunction); CSP doesn’t have the idea of conjunction in it, so I am using /\ as an added conjunction operator (akin to the logical conjunction operator) that conjoins two events into a single event that will occur when, and only when, both of its constituent events occur. All of the processes in this post are site processes: they represent a piece of space in the pipeline that does not move, and may or may not be occupied by a single platelet. A site that is occupied by a platelet is said to be full; otherwise it is empty. Let’s begin with the full site process:

```
FULL(prev, next) = FULL_WILLING(prev, next) |~| FULL_STATIONARY(prev, next)
```

The full process is making an internal choice between being willing to move this time-step and being stationary for the time-step. The internal choice means that FULL will make the decision itself and no other outside process (collectively referred to in CSP as its environment) can influence the decision. In our original Haskell implementation we made the choice randomly; a site would be stationary in 5% of the time-steps, and willing in the other 95% of time-steps. The stationary process refuses to do anything until the end of the time-step at which point it loops round to become the FULL process again:

```
FULL_STATIONARY(prev, next) = tick -> FULL(prev, next)
```

In contrast, the willing process offers two choices. It will move forwards if the process before it in the pipeline signals that it is empty, or if the process before it in the pipeline is willing to move too. (So it will move if the process before it is empty, or full and willing, but not if the process before it is full and stationary.) We specify this using our conjunction operator:

```
FULL_WILLING(prev, next)
  = prev.move /\ next.move -> tick -> FULL(prev, next)
  [] prev.empty /\ next.move -> tick -> EMPTY(prev, next)
  [] tick -> FULL(prev, next)
```

(/\ binds most tightly above, then ->, and [] binds least tightly.) This model is an accurate rendering of my original CHP program, but it contains a race hazard. It is possible that a site that is willing to move in a time-step does not do so and ticks instead; if all the sites in a clot (a contiguous group of full cells) were willing, they could just tick repeatedly and never move at all. The details of the CHP library’s implementation prevented this occurring in my original CHP program (and hence I did not realise there was a race hazard), but it is a sin to rely on a library’s implementation of synchronisation for your concurrent program to be correct. (If I had been able to model-check this code, I could have discovered this problem; see the summary at the end.) The problem could be removed if we gave priority to the movement event; see the previous discussion of priority on this blog and Gavin Lowe’s paper on implementing priority with a small extension to CSP.

The empty site is as long as the full site, but that is because it is repetitive:

```
EMPTY(prev, next)
  = prev.move -> (  (next.empty -> tick -> FULL(prev, next))
                 [] (tick -> FULL(prev, next)))
  [] next.empty -> (  (prev.move -> tick -> FULL(prev, next))
                   [] (tick -> EMPTY(prev, next)))
  [] tick -> EMPTY(prev, next)
```

The empty site is willing to optionally accept a movement from behind it in the pipeline and/or signal to the process ahead of it that it is empty, before ticking. Like the full site, this again has the race hazard that it could tick without accepting the movement of a willing platelet from behind it.

To wire up our pipeline, we start with *N* EMPTY sites in a row (with the next event of each connected to the prev event of the following process as you would expect) synchronising on the tick event together with a GENERATOR at the beginning, but the END process does not need to synchronise on the tick event — the latter two processes being defined as follows:

```
GENERATOR(next) = ON(next) ; OFF(next) ; OFF(next) ; GENERATOR(next)
ON(next)  = next.move -> tick -> SKIP
         [] tick -> SKIP
OFF(next) = tick -> SKIP

END(prev) = prev.move -> END(prev)
```

The generator sends out a platelet every three steps (and again has the aforementioned problem that EMPTY and FULL have). The END process doesn’t need to synchronise on the tick event because all it does is synchronise on move as frequently as possible; the FULL process already rate-limits itself to one move event per time-step, so this is acceptable behaviour. In this ticking CSP version, we don’t really need this END process at all, but it’s instructive to include it because our tickless version will need one. The CSP up to this point makes up the full model of our original clotting program (minus the wiring, which isn’t very interesting).

#### The Tickless Version, in CSPc

The design of the tickless version is more complicated than the original version. In a simulation with a tick event, we can use implicit signalling. Each process will offer to perform some actions, then eventually it must agree to synchronise on the tick event (if all the processes didn’t eventually tick, we’d get deadlock!). So you can gain information if you offer a choice of event A with your neighbour, or tick with everyone, and the tick happens. This means that your neighbour did not choose to offer event A to you before it offered tick. We often use this implicit information in the simulation. In the previous platelets code, a full site not willing to move would not offer a movement, and would wait for a tick. A full site willing to move would offer to move with its neighbours, but if tick happened instead, it knew that one of its neighbours was not willing to move, and they had implicitly agreed to stay put by synchronising on tick instead. (The idea is good, but my use of it above led to the problem where willing platelets may not end up moving — this can be solved in the tickless version, though.)

If we want to remove the tick event from our pipeline, we therefore have to add more events between the processes, to allow them to explicitly communicate what they used to implicitly communicate. Peter Welch suggested a possible solution — but he saw that his suggestion had the problem now revealed in the original version, mentioned above. I was able to improve on his idea to remove the problem, and I describe my improvement to his solution here. The tickless version involves introducing two new events (besides move and empty) that indicate further information: canstay and muststay.

If there was only one stay value, then that is all that full stationary sites would offer. But willing full sites would also have to offer to stay, in order to synchronise with stationary neighbours (if one platelet in the clot stays they all must). So all willing sites would offer to stay, and this could allow a clot of willing platelets to agree to stay even though they were all willing to move. This is the same problem as we had in our ticking version. To remove the problem, we differentiate the events offered by a full stationary site (who will offer muststay) and a full willing site (who will offer canstay). Here is how they are used in the new FULL_WILLING process:

```
FULL_WILLING(prev, next)
  = prev.empty /\ next.move -> EMPTY(prev, next)
  [] prev.move /\ next.move -> FULL(prev, next)
  [] prev.empty /\ next.canstay -> FULL(prev, next)
  [] prev.canstay /\ next.canstay -> FULL(prev, next)
  [] prev.muststay /\ next.muststay -> FULL(prev, next)
```

The first two cases are just as before, in the original version. The middle case is for when the process is at the beginning of the clot; it synchronises on empty with the process behind it, and canstay with the process ahead of it in the clot. The last two cases can be thought of as perpetuating the canstay/muststay event through the pipeline.

The new FULL_STATIONARY process is as follows:

```
FULL_STATIONARY(prev, next)
  = prev.empty /\ next.muststay -> FULL(prev, next)
  [] prev.muststay /\ next.muststay -> FULL(prev, next)
  [] prev.canstay /\ next.muststay -> FULL(prev, next)
```

The first case is for if this process is at the beginning of the clot; it synchronises on empty with the process behind it, and muststay with the process ahead of it. Looking up at the FULL_WILLING process, we can see that any FULL_WILLING process (from the last case) and any FULL_STATIONARY process (from the middle case immediately above) that synchronises on muststay with the process behind it will also synchronise on muststay with the process ahead of it. So if the process at the start of the clot synchronises on muststay, all processes ahead of it in the clot will also synchronise on muststay (by induction).

The third case of the FULL_STATIONARY process indicates that the processes behind the stationary one may offer canstay, and it will then offer muststay to all the processes ahead of it. The canstay event will only be offered from the previous process if it is in the FULL_WILLING state (FULL_STATIONARY only offers muststay to the process ahead of it, and we will see shortly that EMPTY only offers empty to the process ahead of it), which must then synchronise either on canstay with the process behind that (which, again, must be a FULL_WILLING process) or empty (which means it’s the start of the clot). All the full processes after FULL_STATIONARY, following the logic in the previous paragraph, will synchronise on muststay regardless of their state.

The new EMPTY process is as follows:

```
EMPTY(prev, next)
  = prev.empty /\ next.empty -> EMPTY(prev, next)
  [] prev.muststay /\ next.empty -> EMPTY(prev, next)
  [] prev.move /\ next.empty -> FULL(prev, next)
```

All cases offer the empty event to the process ahead of it. It will accept from behind: the empty case (when the previous site is empty), the move case (when the previous site is full and able to move forward) and the muststay event (when the previous site is part of a clot that cannot move). It does *not* accept the canstay event, which is crucial, for reasons explained in the next section.

The new ON, OFF and END processes are:

```
ON(next)  = next.move -> SKIP
         [] next.canstay -> SKIP
OFF(next) = next.empty -> SKIP

END(prev) = prev.empty -> END(prev)
         [] prev.muststay -> END(prev)
         [] prev.move -> END(prev)
```

You can think of ON as being the “forward half” of a FULL_WILLING site that is receiving empty from behind it; similarly, OFF is the forward half of an EMPTY site and END is the “backward half” of an EMPTY site.

#### Proofs

Since conjunction is an extra feature in CSP, there is no direct model-checking support for it. (We have designed a mapping from CSPc to CSP, but that causes a state space explosion and does not yet have automated tool support.) I will instead offer inductive proofs about clots. By proving a statement for the beginning site, and for optional middle/end sites based on the neighbour behind them, this should inductively cover all non-empty clots. This can be done by considering the pairings of prev and next events, to see, when offering a set of events from the previous site, what might be offered to its next neighbour.

So, let us consider a clot of willing platelets. The site at the beginning of the clot can only synchronise on prev.empty (as that is all the empty site before it will offer). Therefore the site at the beginning of a clot will only offer move or canstay to the next site. Any middle site that synchronises on move or canstay with the previous site will offer the same thing to the next site. So inductively the last site of the clot will only offer move or canstay. We can see that the empty site following the clot will only accept move, not canstay, so a clot of willing processes may only move and may not stay put. This solves the problem that we had with the ticking version, and is why the EMPTY process does not offer to synchronise on canstay. (This result also shows that any line of sites at the beginning of a clot will only offer move or canstay to the sites ahead of it.)

Now let us consider a clot with one or more stationary platelets somewhere along its length (but not the beginning). We have seen in the previous paragraph that the willing sites at the beginning of the clot will offer move or canstay to the first stationary site in the clot. This stationary site appearing after these willing sites will only accept canstay, and will then offer muststay ahead of it. We can see that all full sites, stationary and willing, will only synchronise on prev.muststay with next.muststay, so regardless of the stationary/willing state of sites ahead of the first stationary site, muststay will be the event offered at the end of the clot. The empty site will accept this, and so a clot with one or more stationary sites after the beginning will all agree to stay. If a stationary site is at the beginning, it will synchronise on prev.empty and next.muststay, and then the rest of the clot will also synchronise on muststay, so again the clot will stay put. Thus any clot with one or more stationary sites will stay put.

#### Summary

So we have a system, expressed in a few lines of CSPc, of a sticky platelet simulation without a tick event. I will show a translation to CHP in the next post, which works the same as the original version (minus the potential problem). This work is interesting from a formal perspective because we have no direct support to model check this CSP, due to the conjunction extension. We have devised a mapping from CSPc to CSP, but it generates a large number of events; I believe it would be on the order of 4^N for this problem. We don’t have a tool to generate the CSP just yet, and even if we could, I suspect the model checker may choke on that size of problem. However, by taking advantage of conceptual features of the simulation, namely clots, I was able to perform some inductive reasoning about the problem. The reasoning was aided by the neat symmetry of the problem; each site in the pipeline offered a pair of (prev, next) events in conjunction, which could be thought of as a sort of normal form.

From a practical perspective, it can be seen that this is not really a very concurrent simulation. The chained conjunction along the pipeline means that all processes must resolve their actions for the time-step together, and really the power of the simulation is in the resolution of all the choices (which could be written as a single sequential/functional piece of code: transforming the list of all platelets at time-step T to the list of all platelets at time-step T+1), not in the concurrency of the sites. The advantage of the way we constructed our solution is that we have encoded the behaviour of each site by only referring to the two neighbours of a site. These are local rules, that when resolved for all sites, produce the emergent behaviour of sticky platelets bumping, forming clots, and advancing down the pipeline together. There is no global visibility of the system in our code (only in the run-time system) to complicate things. This investigation of emergent behaviour is part of the ongoing CoSMoS research project that uses process-oriented technologies to produce this kind of simulation, and which builds on the original work of the TUNA project from which this blood clotting example is taken.

## The Operators and Monoids of CHP

When we create binary operators, in mathematics or in programming, they often have certain common identifiable properties:

- If you can re-order the arguments, e.g. 1 + 2 is the same as 2 + 1, we say that it is **commutative** — in contrast, division is not commutative.
- If you have two applications of the operator and the order of evaluation/bracketing doesn’t matter, e.g. (1 + 2) + 3 is the same as 1 + (2 + 3), we say that it is **associative** — in contrast, subtraction is not associative.
- If one particular operand always leaves the other side unchanged, we can say that this is the **unit** of an operator, e.g. 1 * x is the same as x, so 1 is the unit of multiplication.
- If one particular operand always ignores/overrides the other, we can say that this is the **zero** of an operator, e.g. 0 * x is the same as 0, so 0 is the zero of multiplication.
- If an operator has a unit or zero that only works on one side of the operator, we name it accordingly. For example, we say that division has a right-unit of 1 (because x / 1 is the same as x), but it does not have a left-unit; there is no value k such that for all x, k / x is the same as x.

We can find these properties all over maths and programming. Set union is commutative, associative, and has a unit of the empty set, but no zero. The boolean AND operator is commutative, associative, has the unit “true” and the zero “false”. STM’s orElse combinator is associative, with retry as its unit and any return statement as a left-zero. Any operator that is associative and has a unit forms a monoid, which can be put into Haskell as an instance of the Monoid type-class (more on that below).
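The boolean AND example above has a standard-library rendering: Data.Monoid’s `All` newtype, whose `mempty` is true. A quick self-contained check of the unit and zero:

```haskell
import Data.Monoid (All(..))

main :: IO ()
main = do
  print (getAll (mempty <> All True))     -- True: the unit "true" leaves the other side unchanged
  print (getAll (All False <> All True))  -- False: the zero "false" overrides the other side
```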

The operators in CHP also have some of the aforementioned properties. A full list is buried at the back of the tutorial, but I should probably pull them into the API documentation. (Note that the laws I discuss here are concerned with the behavioural semantics of the operators; the types of the expressions may differ trivially.) The parallel operator `<||>` is commutative and associative, with a unit of `skip`, the process that does nothing and returns successfully. The unprioritised choice operator `<->` is commutative and associative, with a unit of `stop`, the process that is never ready in a choice. The implication of choice and parallelism being associative and commutative is that the order of the items in a call to `alt` or `runParallel` doesn’t make any difference to the behaviour. The operators for wiring up a pipeline in the Utils module are associative but lack the other properties.
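To illustrate what those laws buy us in practice, here is a sketch (my reading of the laws, not code from the library documentation) in which reordering the branches of a choice cannot change behaviour; `a` and `b` are assumed to be enrolled barriers already in scope:

```haskell
-- Because <-> is commutative and associative with unit stop, all three of
-- these processes are behaviourally equivalent:
p1, p2, p3 :: CHP ()
p1 = alt [syncBarrier a, syncBarrier b]
p2 = alt [syncBarrier b, syncBarrier a]          -- reordered branches
p3 = syncBarrier a <-> (syncBarrier b <-> stop)  -- stop is the unit of choice
```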

#### Poison Handler Properties

We can view the poison handlers ``onPoisonTrap`` and ``onPoisonRethrow`` as binary operators. To recap: ``onPoisonTrap`` runs the left-hand side, but if a poison exception occurs then the right-hand side is run. ``onPoisonRethrow`` does the same, but after the right-hand side has finished, the poison exception is rethrown. They are not commutative — in exception terminology, the first argument is the try and the second the catch; they cannot be swapped freely!

To my surprise, ``onPoisonTrap`` is associative. Abbreviating it to ``oPT``, consider `` p `oPT` q `oPT` r ``. If you bracket the first two items, `` (p `oPT` q) `oPT` r ``, q will only execute if p throws poison, and r will only execute if q then throws poison (because p’s poison is trapped, so the only poison that can escape the first bracket is from q). If you bracket the latter two, `` p `oPT` (q `oPT` r) ``, the bracketed part will only execute if p throws poison, which will pass control to q, which will only pass control to r if poison is thrown by q. So the semantics are associative.

In contrast, ``onPoisonRethrow`` is not associative. Abbreviating it to ``oPR``, consider `` p `oPR` skip `oPR` r ``. If bracketed `` (p `oPR` skip) `oPR` r ``, r will be executed if p poisons, but if bracketed `` p `oPR` (skip `oPR` r) ``, r will never be executed (because skip won’t throw poison).

``onPoisonTrap`` has a left-unit of `throwPoison` (because throwing poison automatically transfers control to the other side, the handler), and a right-unit of `throwPoison` (because trapping poison then throwing poison has a null effect on the original code). ``onPoisonRethrow`` has no left-unit but has two right-units: `throwPoison` and the return statement. Any code that cannot throw poison (e.g. a return statement) is a left-zero of both ``onPoisonTrap`` and ``onPoisonRethrow`` because it will never trigger the handler. Neither operator has a right-zero; there is no handler that can cause the original code to always be ignored.

#### Monoids

The fact that some of the operators mentioned here are associative and have units means that they could form a monoid. In fact, CHP blocks of code could form several monoids. In Haskell, there is the problem that the monoid instance must be uniquely identified by its type, even though it is really its operator that is distinctive. All the standard number types can form a `Monoid` in addition (unit: 0, operator: +) or multiplication (unit: 1, operator: *). Defining a Monoid instance for, say, `Int` would thus be ambiguous: when you say `` 4 `mappend` 3 ``, would you expect 7 or 12? To solve this, the Data.Monoid module defines newtype-wrappers around types to identify the monoid. `Sum Int` is a monoid in addition, whereas `Product Int` is a monoid in multiplication.
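Those wrappers resolve the ambiguity directly; both answers to the `` 4 `mappend` 3 `` question are available, each under its own type:

```haskell
import Data.Monoid (Sum(..), Product(..))

main :: IO ()
main = do
  print (getSum     (Sum 4     `mappend` Sum 3))      -- 7: the addition monoid
  print (getProduct (Product 4 `mappend` Product 3))  -- 12: the multiplication monoid
```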

I could use the same trick for CHP; I could define several monoid instances. Here is a monoid that allows blocks of code (with no useful return) to be joined in parallel:

```haskell
newtype Par = Par {runPar :: CHP ()}

instance Monoid Par where
  mempty = Par skip
  mappend p q = Par (runPar p <|*|> runPar q)
  mconcat = Par . runParallel_ . map runPar
```

This could be made a little more useful by making a parallel monoid out of blocks of code that return a type that is itself a monoid; when the parallel blocks of code have all finished, their results are combined using the monoid instance:

```haskell
newtype ParMonoid a = ParMonoid {runParMonoid :: CHP a}

instance Monoid a => Monoid (ParMonoid a) where
  mempty = ParMonoid (return mempty)
  mappend p q = ParMonoid
    (liftM (uncurry mappend) $ runParMonoid p <||> runParMonoid q)
  mconcat = ParMonoid . liftM mconcat . runParallel . map runParMonoid
```

There is also a straightforward monoid instance for choice between blocks:

```haskell
newtype Alt a = Alt {runAlt :: CHP a}

instance Monoid (Alt a) where
  mempty = Alt stop
  mappend a b = Alt (runAlt a <-> runAlt b)
  mconcat = Alt . alt . map runAlt
```

Finally, there is a monoid built around ``onPoisonTrap``:

```haskell
newtype PoisonTrap a = PoisonTrap {runPoisonTrap :: CHP a}

instance Monoid (PoisonTrap a) where
  mempty = PoisonTrap throwPoison
  mappend a b = PoisonTrap (runPoisonTrap a `onPoisonTrap` runPoisonTrap b)
```

Consider the meaning of `mconcat (map PoisonTrap [p,q,r,s])`. It says run p; if no poison is thrown, that’s done. If poison is thrown, run q. If q throws poison, run r, and if that throws poison, run s. Obviously this is quite excessive, but I had never thought of constructing such a function until I realised that ``onPoisonTrap`` was associative and thus could form a monoid.
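That pattern can be packaged up in one line; a sketch, assuming the `PoisonTrap` newtype above is in scope (`fallbacks` is a name of my own invention, not a library function):

```haskell
-- Try each alternative in turn, moving on to the next one only if the
-- previous one throws poison; an empty list degenerates to throwPoison
-- (the mempty of the PoisonTrap monoid).
fallbacks :: [CHP a] -> CHP a
fallbacks = runPoisonTrap . mconcat . map PoisonTrap
```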

I can’t recall seeing monoid instances like these (involving monadic actions), so perhaps these sorts of monoid instances on monads don’t end up being very useful (if you know of a particular use, please add a comment below). I find it interesting to see how CHP code can form several different monoids just as an exercise.

## An Introduction to Communicating Sequential Processes

In my last post I touched upon the Communicating Sequential Processes calculus that inspired both my Communicating Haskell Processes library (including the name, of course) and the new Go language. This post serves as a brief introduction to CSP, and shows how it relates to CHP. For those interested in learning more about CSP, Hoare’s original book is available for free online, and is very readable, especially considering its formal subject matter. Hoare’s book is largely still accurate, but is superseded by Roscoe’s book, which is also freely available online. I’m going to quote from the two texts in this post; “quote”(H123) is a quote from page 123 of Hoare, and “quote”(R123) is from Roscoe.

#### Events

CSP is built on the idea of events. An event is a synchronisation that several concurrent processes can engage in. Events in CSP do not need to be pre-declared, and which processes synchronise together on events is determined by a notion of alphabets that I’m not going to cover here — because CHP doesn’t have alphabets, or events *per se*. CHP has barriers, which are a synchronisation that several concurrent processes can engage in, and thus they serve as a useful stand-in for events. The main differences are that barriers have a notion of membership, support dynamically enrolling on (and resigning from) the barrier at run-time, and need to be allocated before use.

Events are written with lower-case names in CSP; Hoare’s book has lots of vending machine examples with events such as *coin* and *choc*.

#### Processes

CSP separates event synchronisations (the act of engaging in a single event) from processes (which can be composed of many synchronisations); in Haskell terms they should have different types. However, separating the types of an event synchronisation from a process would bring a whole load of pain — not least that I could not easily define a CHP monad — so I discard this in favour of everything being a process; an event synchronisation and a process are both of type `CHP a`, and thus there is no difference at the type-level between engaging in one event and engaging in many events.

#### Sequencing

The simplest form of sequencing in CSP is the prefix operator. “Given an event *a* and a process *P*, *a -> P* is the process which is initially willing to communicate *a* and will wait indefinitely for this *a* to happen. After *a* it behaves like *P*.”(R14). So in my CHP monad, this prefix operator is the standard monadic sequence operator, `>>`. CSP also has a semi-colon operator for sequencing two processes (which relies on notions of termination — a complex issue in formal systems of computation!), which also maps to `>>` in CHP.

As an example, here is Hoare’s vending machine (H6) that continually waits for a coin before dispensing a chocolate:

*VMS = coin -> choc -> VMS*

In CHP I might write this as:

```haskell
vms = syncBarrier coin >> syncBarrier choc >> vms
```
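As a pure sketch (no CHP involved), the trace of *VMS*, meaning the sequence of events it performs, is just the infinite alternation of *coin* and *choc*; events are modelled here as plain strings:

```haskell
-- A pure sketch of VMS's trace: the infinite sequence of events the
-- recursive process is willing to perform, one after the other.
vmsTrace :: [String]
vmsTrace = "coin" : "choc" : vmsTrace

main :: IO ()
main = print (take 5 vmsTrace)
```

Taking the first five events gives `["coin","choc","coin","choc","coin"]`, matching the recursion in the CSP definition.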

#### Communication

Communication is central to CHP, and CSP. To perform a communication, we need a mechanism for performing an output to a channel, and a corresponding mechanism to input from the other end of the channel. We’ll start with output: “A process which first outputs *v* on the channel *c* and then behaves like *P* is defined *(c!v -> P)*.”(H113). This translates to a call to `writeChannel` and monadic sequencing in CHP: `writeChannel c x >> p`.

Where I find the correspondence of CSP and CHP to be fascinating is in the definition of performing an input from a channel. “A process which is initially prepared to input any value *x* communicable on the channel *c*, and then behave like *P(x)*, is defined *(c?x -> P(x))*.”(H114). To put it differently, this performs the action of reading a value from channel *c*, and binds the return value to *x* in the right-hand side of the arrow. This should sound familiar to Haskell programmers — this is monadic bind! Indeed, here is the CHP rendering: `readChannel c >>= \x -> p(x)`, or more simply: `readChannel c >>= p`.

As an example, here is a CSP process (H115) that copies values from its *left* channel to its *right* channel:

*COPY(left, right) = left?x -> right!x -> COPY(left, right)*

This can be converted to CHP as follows:

```haskell
copy :: Chanin a -> Chanout a -> CHP ()
copy left right = (readChannel left >>= writeChannel right) >> copy left right
```

Note that in CSP, a channel can be thought of as a (potentially infinite) set of events: one event for each value that could be communicated down the channel (so for some integer channel *c*, *c.0* would be one event, *c.1* another, and so on). Thus channels in CSP are a sort of syntactic sugar on top of events — but in CHP we deal with actual channels that can communicate values, and they are distinct from barriers.

CSP is declarative, so it does not support the idea of assignment, or altering the value of a variable. Values are only introduced through the aforementioned binding of inputs, through constants, or by parameterisation of processes. Which means that slotting CSP into Haskell comes fairly naturally. We have seen that the sequencing of communications maps well to a monad (the `CHP` monad). The main other aspects of CSP are parallel composition and choice, which are process combinators in CHP (i.e. they have a type like `CHP a -> CHP b -> CHP c`).

#### Parallel Composition

CSP allows for parallel composition of processes. Two processes composed in parallel run in parallel, and “a parallel combination terminates when all of the combined processes terminate… the best way of thinking about [this] is that all of the processes are allowed to terminate when they want to, and that the overall combination terminates when the last one does.”(R143). These semantics also apply to CHP’s parallel composition. CSP’s notation for composing *P* and *Q* in parallel, *P || Q*, is near-identical to the CHP version, `P <||> Q` (they only differ because `||` is already used in the Prelude).
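As a rough illustration of these termination semantics outside CHP, here is a sketch of a parallel combinator in plain IO using `forkIO` and `MVar`s (CHP’s real operator does considerably more, such as poison handling):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- A rough IO analogue of <||>: run both actions concurrently, and
-- return only once both have terminated, matching the CSP semantics
-- quoted above ("the overall combination terminates when the last
-- one does").
par :: IO a -> IO b -> IO (a, b)
par p q = do
  va <- newEmptyMVar
  vb <- newEmptyMVar
  _ <- forkIO (p >>= putMVar va)
  _ <- forkIO (q >>= putMVar vb)
  (,) <$> takeMVar va <*> takeMVar vb

main :: IO ()
main = par (return 1) (return (2 :: Int)) >>= print
```

Running `main` prints `(1,2)`: both branches run to completion and the combination yields both results.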

#### Choice

CSP has the notion of external choice. “If *x* and *y* are distinct events then *(x -> P [] y -> Q)* describes an object which initially engages in either of the events *x* or *y*. After the first event has occurred, the subsequent behaviour of the object is described by *P* if the first event was *x*, or by *Q* if the first event was *y*.”(H7). CSP has rules for what happens if the first events are not distinct; in CHP this is considered to be a programmer error. Note that the choice is only of the first events, not of later events in *P* and *Q*; this is also the case in CHP, although it requires a certain amount of wizardry underneath to pick out the leading event of a `CHP` code block. In CHP we use `<->` as the choice operator.

As an example, Roscoe defines a counting process: “COUNT(n) is the process which will communicate any sequence of *up*’s and *down*’s, as long as there have never been n + 1 more *down*’s than *up*’s.”(R17):

*COUNT(0) = up -> COUNT(1)
COUNT(n) = (up -> COUNT(n+1)) [] (down -> COUNT(n−1))*

We can translate this into CHP fairly directly (using barriers for the up and down events):

```haskell
count :: EnrolledBarrier -> EnrolledBarrier -> Int -> CHP ()
count up down 0 = syncBarrier up >> count up down 1
count up down n = (syncBarrier up >> count up down (n+1))
                  <-> (syncBarrier down >> count up down (n-1))
```
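To see the behaviour without running any processes, here is a pure sketch (invented names, no CHP) that folds COUNT’s state over a list of events, with `Nothing` marking the refused extra *down*:

```haskell
-- A pure sketch of COUNT's state machine: walk a sequence of events,
-- refusing a down when the counter is 0, mirroring the choice in the
-- CSP definition (COUNT(0) offers only up).
data Ev = Up | Down deriving (Eq, Show)

countSim :: Int -> [Ev] -> Maybe Int
countSim n []          = Just n
countSim 0 (Down : _)  = Nothing   -- the (n+1)-th extra down is refused
countSim n (Up : es)   = countSim (n + 1) es
countSim n (Down : es) = countSim (n - 1) es

main :: IO ()
main = print (countSim 0 [Up, Up, Down, Down], countSim 0 [Down])
```

Running `main` prints `(Just 0,Nothing)`: a balanced sequence is accepted, while an initial *down* from COUNT(0) is refused.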

CSP also deals with nondeterministic choice, notated *P |~| Q*. “It is important to appreciate the difference between *P [] Q* and *P |~| Q*. The process *(a -> STOP) [] (b -> STOP)* is obliged to communicate *a* or *b* if offered only one of them, whereas *(a -> STOP) |~| (b -> STOP)* may reject either. It is only obliged to communicate if the environment offers both *a* and *b*. In the first case, the choice of what happens is in the hands of the environment, in the second it is in the hands of the process.”(R24). I do not offer nondeterministic choice in CHP; “even though [nondeterministic choices] are not constructs one would be likely to use in any program written for execution in the usual sense, CSP contains… ways of presenting the nondeterministic choice of processes.”(R23). That is, nondeterministic choice is a modelling construct useful for capturing and reasoning about the behaviour of processes, but it is not something you would mean to write in a program (a bit like bottom in Haskell, perhaps).

#### Primitive Processes

CSP has two useful primitive processes: *SKIP* and *STOP* (`skip` and `stop` in CHP). *SKIP* is the process which is always ready in a choice, and always immediately terminates successfully. So in CHP it has the behaviour of `return ()`, but with the special property that it can be used in a choice. *STOP* is the process which is never ready in a choice, and never terminates. In CHP it acts the same — it is primarily useful for not being ready in a choice (and thus for dummy items in a choice); running a process that has the sole purpose of doing nothing and not terminating is rarely useful!

#### Summary

CSP is a formal calculus underlying CHP, in much the same way that the lambda calculus underlies Haskell. It is useful for formal reasoning about programs, and expressing semantics, but just as you do not need to know the lambda calculus to program Haskell, you do not need to know CSP in order to program CHP. CSP does have a model checker, FDR, which is freely available for academic use, and I have done some work on turning simple CHP programs into CSP (and feeding them to FDR) automatically (which I will tidy up and release when I get time). CSP also provided the idea of traces, which feature in CHP, and were briefly described in a previous post on using traces for testing.

*Note:* I could use operators (including GHC’s postfix operators) to provide an interface for the channel operations. I could define:

```haskell
(!) = writeChannel
(?) = readChannel
```

Then I could take this CSP process:

*COPY(input, output) = input?x -> output!x -> COPY(input, output)*

and write it in CHP as:

```haskell
copy input output = (input?) >>= \x -> output!x >> copy input output
```

I didn’t want the code to end up too much like ASCII spaghetti so I decided against these operators, but feel free to define these in your own code if you prefer the operator version. Unfortunately you always need those brackets around the input, so the do version becomes:

```haskell
copy input output = do x <- (input?)
                       output!x
                       copy input output
```

## Go Sieve

The newly announced Go language set the Internet abuzz this week. I’m not going to write extensively about it, partly because I haven’t had much time to examine it in detail, and partly because other people have already written a lot about it. It is interesting to note that CHP and Go share a common heritage in Hoare and Roscoe’s CSP (Communicating Sequential Processes). I will be mentioning CSP in a couple of posts next week; CSP is a process calculus used to describe message-passing concurrent systems.

CSP inspired occam, which in turn became occam-pi (using ideas from the pi-calculus), which is still under active development. occam and occam-pi also inspired various libraries for CSP-style concurrency, such as JCSP, C++CSP, PyCSP, and many more including CHP. But CSP also inspired Newsqueak, and Rob Pike went on from that to work on Limbo and now Go which retain the CSP influence. It’s certainly nice to see other people looking at message-passing concurrency.

I did get time to glance at Go’s tutorial, and saw that they have an example for generating prime numbers using a process pipeline which uses the exact same design as my CHP primes example. So you can compare the CHP version to the Go version if you wish. (I took the design from the JCSP version; I wonder if the Go developers did too.)