Transposing Sequential and Parallel Composition

June 22, 2010

Recently, I spent some time discussing traces with Marc Smith (we previously wrote two papers on the matter). Something that came up is the idea of representing a sequential composition of parallel processes as a parallel composition of sequential processes. Let me explain…

We, as programmers, often start off reasoning sequentially and then add on reasoning about parallel processes later — usually with much difficulty. Think of producing a control-flow graph for an imperative program, for example. If you have a sequential program, it is usually quite straightforward to form a flow graph for the program. When you try to add parallel processes, you need multiple flow graphs side by side. And they will interact, so really you need to link them together at the points of interaction. It quickly gets quite involved, and leaves you wishing that you didn’t have to involve concurrency.

What Marc and I found is the opposite: our reasoning was straightforward for parallel processes, but sequential composition was a headache. We were trying to represent the trace of a program as a tree, with each node representing either a parallel composition or a process without parallelism. In fact, due to the associativity and commutativity of parallel composition, this type of tree can always be flattened into an unstructured bag of parallel processes:

We then wanted to link processes in the tree that communicated with each other, to work out information about dependence and to look at neighbourhoods of connected processes. That was all fine, but it quickly becomes difficult when you need to consider sequential composition, specifically the sequential composition of parallel compositions.

Imagine that one of the tree nodes is first a program without parallel composition, and later becomes the parallel composition of two sub-processes, i.e. P = Q;(R || S). We don’t have a way to represent sequence in our neat parallel trees above, and this causes our representation to become awkward. We can’t have a simple ordered sequential list of trees, because if two processes have sequence information, they can proceed in either order, so really we would need a sequential lattice of trees, which is getting nasty.

At this point we wished we didn’t have to deal with sequential composition — at least, not sequential composition that can contain parallel composition. And via a small transformation, it turns out that we don’t. Consider the process: a; ((b; b’) || c); d, where everything in there is assumed to be a communication. We can refactor this into a set of parallel processes (P1 || P2 || P3):

P1 = a ; par1start ; par1end ; d
P2 = par1start ; b ; b’ ; par1end
P3 = par1start ; c ; par1end

The key insight is really this: if you are going to run several processes in parallel, starting them all at the outset of the program and having them synchronise before doing their work is equivalent to setting them going only at the point where they are needed. So even though the process doing c cannot act until a has happened, we don’t wait until partway through the program to set that process off. We set that process (P3 above) off straight away, then send it a message (par1start) when we want to really set it going. The par1end message makes sure that P1 doesn’t do anything more until P2 and P3 have completed: all three processes synchronise together on the par1start and par1end messages.
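To make that concrete in CHP terms, here is a rough sketch of the refactored process — my own illustration, not something from the original discussion — with par1start and par1end represented as barriers that all three branches synchronise on. The a, b, b’, c and d communications are left abstract, and I’m assuming CHP’s newBarrier/enroll/syncBarrier/runParallel_ primitives, so treat the exact names and signatures as approximate:

import Control.Concurrent.CHP

-- Sketch only: the original a; ((b; b') || c); d refactored into P1 || P2 || P3,
-- with the par1start/par1end events as barriers.  All six enrolled ends are
-- created up front, one pair per branch, so every branch takes part in both
-- synchronisations.
transformed :: CHP () -> CHP () -> CHP () -> CHP () -> CHP () -> CHP ()
transformed a b b' c d =
  newBarrier >>= \parStart -> newBarrier >>= \parEnd ->
  enroll parStart $ \s1 -> enroll parStart $ \s2 -> enroll parStart $ \s3 ->
  enroll parEnd   $ \e1 -> enroll parEnd   $ \e2 -> enroll parEnd   $ \e3 ->
  runParallel_
    [ a >> syncBarrier s1 >> syncBarrier e1 >> d    -- P1
    , syncBarrier s2 >> b >> b' >> syncBarrier e2   -- P2
    , syncBarrier s3 >> c >> syncBarrier e3         -- P3
    ]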

With this transformation, we never have any parallel composition inside sequential composition in our traces. All we have is one giant parallel composition of every process in the system, where each process is a sequential composition of events. There is no more nesting than that; we have flattened the process network into an unstructured bag of lists of events. In fact, we can now represent the processes as a graph (with links formed by the communications: the dotted lines in the diagram above). The semantics of the process are unchanged, and all the information about the ordering dependencies is still there if you want it (embodied in the unique parstart/parend communications: the italicised events in the graph above). This form of representation actually makes it really nice to track dependencies and relations throughout the process network, by following the communication links.

Addendum

You could even go one step further, and transform the system into a collection of parallel processes where each process has at most three events in sequence, at most one of which is an event from the original program; i.e. the above would become:

P1A = a ; seq1
P1B = seq1 ; par1start ; seq2
P1C = seq2 ; par1end ; seq3
P1D = seq3 ; d
P2A = par1start ; b ; seq4
P2B = seq4 ; b’ ; par1end
P3 = par1start ; c ; par1end

I’m not sure how useful this is, though.


Conjoined Events: Paper and Slides

June 16, 2010

I’m just back from a trip to America which began by attending Advances in Message Passing 2010 to present a paper on conjunction, something I’ve posted about on this blog before. I’ve put the paper and slides (with notes) up on my website, for those that are interested. It’s focused entirely on conjunction; CHP doesn’t really appear in the paper, but all the examples in the slides were in Haskell.

I think it’s accurate to say that most of the other researchers at the workshop were interested in a slightly different aspect of message-passing to me. For most of them, message-passing was about using things like MPI to communicate between workers and farmers for parallel speed-up. You can use CHP for that, but most of the examples on this blog (and the examples in the paper) are about the communications driving the programs, and being part of the program: communications as part of the computation process in concurrent programs, not just a way to facilitate parallel programming. I find there is an elegance in some of CHP’s process-oriented style that makes it more fun to use, but it may well be that their style of parallel programming is the more pragmatic one, given the increasing availability of parallel processors.


Visualising Your Process Network

May 11, 2010

One advantage of CHP’s process-oriented style of programming is that it lends itself well to visualisation. I don’t make as much use of it on this site as I should, but some previous posts show diagrams with processes (as nodes) joined by channels (as edges). I already have some tracing support in CHP, and I had wondered if this could provide a quick and dirty way to graph a process network. This post shows the results of adding a little support to CHP for process visualisation, which I’m now debating whether or not to keep.

It turned out to be slightly better to add a new trace type (the process-graph trace) to CHP than to build a graph from the existing structural traces. So from the user’s point of view, all that is added to the library is a new trace type; no changes to an existing program are required other than switching tracing on and using the process-graph as the trace type. (I also added a labelling function for processes, that works with both structural and process-graph traces.)

Hooking the process-graph trace into the existing tracing infrastructure was so straightforward that it’s not really worth writing about. Unexpectedly, the most difficult part of making this functionality useful was finding something that could draw the process networks well. Process networks tend to be hierarchical, in that each process (node) may contain a sub-network, composed to any depth; these processes are connected via edges. My first inclination for graph visualisation was to reach for graphviz, so I wrote some code to spit out a graphviz file. Graphviz’s dot tool draws graphs in ranked layers. On a simple benchmark called numbers, it does quite well:
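In case you are wondering what “spitting out the graphviz file” amounts to, here is a minimal sketch of the kind of code involved — my own illustration rather than the code in CHP, with an invented flat ProcessGraph type (CHP’s process-graph trace is richer and hierarchical):

-- A flat graph: named processes as nodes, channels as writer-to-reader edges.
data ProcessGraph = ProcessGraph
  { pgNodes :: [(Int, String)]  -- node identifier and process label
  , pgEdges :: [(Int, Int)]     -- a channel from writer to reader
  }

toDot :: ProcessGraph -> String
toDot (ProcessGraph ns es) = unlines $
     ["digraph processes {"]
  ++ ["  n" ++ show i ++ " [label=" ++ show lbl ++ "];" | (i, lbl) <- ns]
  ++ ["  n" ++ show a ++ " -> n" ++ show b ++ ";" | (a, b) <- es]
  ++ ["}"]

main :: IO ()
main = putStr $ toDot $ ProcessGraph [(0, "producer"), (1, "consumer")] [(0, 1)]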

For larger examples, or those with cycles, its rank-based layout can be awkward. Here’s the dining philosophers, rendered in a slightly unusual manner:

Most of the other graphviz tools can’t deal with hierarchical graphs — only the fdp tool can. This seemed promising: a force-directed approach that supports sub-graphs. Here’s the numbers example from above:

Clumsy. And here’s the dining philosophers:

Eugh! This is a tiny bit better when you liberate it from its subgraph (at least, it might be halfway decent if not for the node labels):

But there is an obvious good layout for the dining philosophers (a star topology with the security guard at its centre), so if fdp makes a hash of this graph, I don’t hold out much hope for more complex process networks. I don’t want to write my own tool for this, but I may if I have to (I’d actually quite like an interactive viewer so that you can drag things around like you can in prefuse, and perhaps animate the network). Any suggestions for beating fdp into shape, or for alternative tools, would be most welcome!

Observed Occurrences Only

I said that this technique was quick and dirty. The main flaw (or is it a feature?) is that only the observationally determinable process network is graphed. If you pass a channel to two processes that never communicate on it, that link won’t show up in the graph. The graph is a trace: a record of what happened, not a blueprint of what might have happened. This is not an invalid approach, but I expect it is not what most people want: most would like to see their process network drawn as it is in their program design, with all channels shown, not just those that were used. But that tool is more complicated, and will have to wait for another day. (You can extend this hack to do something more like that if you use an annotation and generic programming to capture all processes’ parameters and look through them for channel ends…)

Change and Movement Over Time

The flaw in drawing the process network as a single image is that the network is not static for the duration of the program. Processes can start and end, come and go. Not only that, but CHP channels can be freely communicated down other channels, meaning that programs can dynamically rewire their topology at run-time (I should write some posts about that at some point). So really a proper tool should include support for visualising the process network as it changes over time. But again, that will have to wait for another day.


Choose Anything: adding an Alternative instance

May 5, 2010

The alt function in CHP allows you to make a choice between performing several actions. You can choose, for example, between reading on one channel and writing on another. Agents can choose to move left or right, buffers can choose between reading or writing, servers can choose between reading from several clients. The alt function takes blocks of code and chooses between the leading actions — some people find this slightly surprising, but I think it simplifies code, and it is also entirely in line with the original CSP calculus.
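As a small usage sketch (the process and channel names here are mine, not from the post), here is a one-place cell that repeatedly offers a choice between accepting a new value on its input and writing its current value to its output:

import Control.Concurrent.CHP

cell :: Chanin a -> Chanout a -> a -> CHP ()
cell input output x =
  alt [ readChannel input                    -- take a replacement value, or
      , writeChannel output x >> return x ]  -- offer the current value
  >>= cell input output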

The Problem with alt

The alt function has always had a slight ugliness though. Some leading actions do not support choice, for example: parallel compositions, lifted IO actions, or channel creation. That is, what should something like this code do:

alt [newChannel, liftIO (print 6), writeChannel c 6 <||> writeChannel d 7]

My solution to dealing with the items that didn’t support choice was to issue a run-time error if you tried to include them in a choice (as above). I don’t like run-time errors caused by bad code — but in this instance I couldn’t use the type system to pick up the error at compile-time without making all CHP code a lot more verbose. It’s always irked me to have this possibility of a run-time error.

One related feature is that the skip process is somewhat magic. The skip process has type CHP () and does nothing. It’s very tempting to define skip = return (). But skip has another property in CHP, inherited from CSP’s SKIP: it is always ready in a choice. Including a solitary return statement in a choice, e.g. alt [return (), writeChannel c 0], would trigger a run-time error, so skip had to be defined differently. (Note that alt [skip, return 0 >>= writeChannel c] is fine; CHP obeys the monad laws, so the leading action of the second item is actually writeChannel — the return melts away when an action follows it.)

There is one easy solution to getting rid of the run-time error in alt: make any CHP action valid in a choice. All the existing choice-supporting actions work as before, but all of the others — creating channels, enrolling, parallel compositions, poisoning, claiming shared channels, lifted IO actions and solitary return statements (and probably more I’ve forgotten) — are now valid in a choice, and are considered always-ready. In CSP terms, they are all prefixed by the process SKIP. This change has been included in CHP 2.2.0.
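For instance, the following sketch (the names are mine) uses a solitary return as an always-ready default branch — something that would previously have been a run-time error:

import Control.Concurrent.CHP

-- priAlt prefers the read if it is ready; otherwise the always-ready return is taken.
readOrDefault :: Chanin Int -> Int -> CHP Int
readOrDefault c def = priAlt [readChannel c, return def]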

This doesn’t break any existing CHP code; it just makes more code technically valid (and simplifies some of the implementation a little). Now skip is simply defined as return (), and is not at all special. A little while back I explained the poll function, which I wrote at the time as:

poll :: CHP a -> CHP (Maybe a)
poll c = (Just <$> c) </> (skip >> return Nothing)

This can now be written even more straightforwardly:

poll :: CHP a -> CHP (Maybe a)
poll c = (Just <$> c) </> (return Nothing)
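As a usage sketch (mine, not from the post), poll makes it easy to grab values from any writers currently offering on a channel, without ever blocking:

import Control.Applicative ((<$>))
import Control.Concurrent.CHP

drainAvailable :: Chanin a -> CHP [a]
drainAvailable c = do
  mx <- poll (readChannel c)             -- Nothing if no writer is ready
  case mx of
    Nothing -> return []
    Just x  -> (x :) <$> drainAvailable c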

Alternative

Since I wrote the first version of CHP I’ve come into contact with a lot more Haskell — including type-classes such as Applicative and Alternative. Alternative defines a choice operator, <|>, and some associated items: empty, the unit of choice, and the functions some and many, which produce one-or-more and zero-or-more items respectively. The default definition of many is revealing:

many v = some v <|> pure []

This suggests several things. Firstly, a right-biased choice operator is going to be pretty useless here. It’s not clear whether Alternative instances should display left-bias or no bias (the documentation only states that it should be associative, which any bias would satisfy), but left-bias looks most useful, and is probably most expected. Secondly, it is expected that an item constructed with pure should be a valid choice: this wasn’t the case previously (pure = return, of course), but it is with the changes described above.

Our instance is trivially constructed (and is included in CHP 2.2.0 onwards):

instance Alternative CHP where
  empty = stop
  (<|>) = (</>)

It seems silly not to provide it. This does leave me with three choice operators in CHP: <->, </> and <|>, which all do exactly the same thing — my original intention in providing the first two was that the first wasn’t guaranteed to have bias, but the middle one was guaranteed to have some bias; they actually share the same definition, though. If I was designing CHP all over again I might dump both my choice operators and just use Alternative. That would break pretty much all existing CHP code, though, so it’s too big a change to make suddenly. I may deprecate my choice operators in favour of Alternative, though, and maybe remove them in the future.

Our earlier poll operation is actually already featured in the Control.Applicative module, based on the Alternative type-class, and is called optional. As a final note, I went looking for an equivalent to CHP’s alt/priAlt for Alternative. It turns out to be the asum function from Data.Foldable (thanks, Hoogle!):

asum = foldr (<|>) empty

I might stick with the alt/priAlt synonyms for that one though, for clarity (and stop for empty, too, due to its CSP origin). Besides which, asum is defined in terms of <|> — in fact, in CHP the operator </> is defined in terms of priAlt rather than the other way around, because priAlt is more efficient when you have lots of guards.


New chp-spec library released

I recently published a multi-part guide on how to provide a mirror implementation of (most of the API of) CHP that spits out a CSP specification of the program rather than executing it. You can go back and read the earlier parts of that guide if you missed them.

This is the final post I plan to make on the matter (at least for a while)! This post is to announce the release of the work in the above guide as a new chp-spec-1.0.0 library. If you aren’t interested in formal specification, and just want to use CHP, you can ignore the chp-spec library. (And if you don’t use CHP, you can definitely ignore it!) It’s only useful if you want to try to generate a CSP model of your CHP program — and even then, the technique comes with a lot of caveats. But I’d rather release it than leave it sitting on my hard drive.

Instructions

To use the chp-spec library in place of chp, you must do the following:

  • Change the library dependency from chp to chp-spec when you want to generate a specification. You may be able, in cabal, to have both dependencies listed all the time. Currently there is no chp-spec-plus; if you want this, let me know.
  • Change the imports in all your modules from Control.Concurrent.CHP* to Control.Concurrent.CHPSpec* when you want to generate a specification; this can be automated somewhat using preprocessor macros if you want to switch between the two regularly (see the sketch just after this list).
  • Make sure that all uses of the forever function in your processes are changed to CHP’s foreverP (this is included in CHP 2.2.0 onwards, so you can make this change permanent, even when using plain chp).
  • Add the process annotation to all directly recursive processes (again, this is included in CHP 2.2.0 onwards, so you can make this change permanent, even when using plain CHP).
  • When you want to generate a specification, change the top-level function from runCHP to specify.
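For the import switching in the second point, the preprocessor approach could look something like this (a sketch; the CHPSPEC macro name is my own, to be set via your build options):

{-# LANGUAGE CPP #-}
#ifdef CHPSPEC
import Control.Concurrent.CHPSpec
#else
import Control.Concurrent.CHP
#endif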

Specify

The specify function mentioned in the last point is the top-level call that generates and post-processes the CSP specification. The generation has mostly been covered in previous posts, so the generation part of specify is trivial: it annotates the top-level process to be called main, then runs the state transformer to get the altered state with all the processes recorded in it (among other things), which it passes to specify', the function that does all the post-processing:

specify :: Bool -> CHP () -> IO String
specify showIO main
  = specify' showIO <$> execStateT (finSpecT $ process "main" main) emptyCHPState

That showIO parameter will be explained later in the post. The code for specify' is long-winded but not very exciting. It can be split into two aspects:

specify' :: Bool -> CHPState -> String
specify' showIO st = render specs ++ declMain
  where ...

We’ll start with the first part of that concatenation. The render call uses the pretty-printer to print the specifications. The specs item is generated from the recorded processes in two steps: first, it pulls up any Repeated processes to be top-level recursive processes — modern CSP has no iteration operator — and second, it transforms all the process-ids and communications into strings by uniquely numbering the processes and channel values.

The second part of the concatenation, declMain, declares a main process, named “main”. We know, because it is the first process encountered, that our top-level process will end up named “main_1”, so at its most basic we end up with a line “main = main_1”, which wraps up all the numbering in a simpler name. But it can have different behaviour depending on that mysterious showIO parameter.

CSP’s semantics revolve around traces, which are records of the visible actions that a process takes. That visible qualifier is significant: CSP allows events to be hidden. This is particularly relevant in the case of our dummy IO events. If we leave them as-is, they will show up in the trace of the process even though they are not really part of its behaviour — but this does allow us to read traces produced by the FDR tool more easily, as we can see which IO events occurred. So the setting is useful for “debugging” any deadlocks, but these extra events can ruin refinement checks because the process appears to be performing extra events. If you’re doing a deadlock check, pass True to keep the IO events visible; otherwise pass False to hide them. That is done by the following code for declMain in the where clause of specify’:

declMain :: String
declMain = "\nmain = main_1 " ++ (if showIO then "" else
  "\\ {" ++ intercalate "," (map getName $ Set.toList $ chpIOEvents st) ++ "}"
  ) ++ "\n"
  where
    getName = fromJust . flip Map.lookup events

If showIO is False, the main process will hide (backslash is the hide operator in CSP) all the IO events, to stop them being visible to the outside or appearing in traces of the main process.
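So, with hiding enabled, the generated line ends up looking something like this (illustrative only — the actual event names depend on the numbering generated for your program):

main = main_1 \ {io_2,io_5,io_9}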


Bumper Release May Day

Today is the May Day holiday here, which has given me some time to put together lots of library releases. Today sees the release of chp-2.2.0, chp-plus-1.3.0, chp-mtl-1.0.0 and chp-transformers-1.0.0 (and chp-spec-1.0.0). The main changes to the chp and chp-plus libraries are that the dependency on the mtl library has been removed. Thanks to all who commented on my original article about getting rid of mtl — Anders Kaseorg pointed out that if everyone defines their own MonadIO instances for CHP this could cause chaos, so I’ve adopted Dino Morelli’s suggestion and created the chp-mtl and chp-transformers libraries to contain the appropriate instances. So if you want a MonadIO instance use those — and if you don’t want that, just use the new liftIO_CHP :: IO a -> CHP a function that does the same thing. I’ll be discussing the only other major change to CHP — an instance of Alternative for the CHP monad — in a post later in the week.
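A trivial sketch of the new function in use (the process and channel names are mine):

import Control.Concurrent.CHP

logOnce :: Chanin String -> CHP ()
logOnce c = readChannel c >>= liftIO_CHP . putStrLn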


Automatic Model Generation, part 5: Iteration

May 1, 2010

Previous parts of this guide have shown how to represent and print CSP models, and how to redefine the CHP monad and the primitives for choice, IO and communication in order to generate CSP models from CHP programs. This part deals with the final difficult item: iteration.

Most processes in CHP run in an infinite loop, terminated only by poison. Many are written using the helper function forever, which is defined: forever p = p >> forever p. Consider what happens when we try to generate the model for such a process, e.g. forever (syncBarrier b). We first encounter the syncBarrier and store in the model that the process would engage in event “b”. Then in the forever function we loop round and reach the syncBarrier again, and add to the model that the process would next engage in “b” again. And so on and so on. An infinite loop in our program would cause an infinite loop in our model generation, rendering it useless. We cannot observationally determine that the program is looping forever; consider how our approach could possibly distinguish the above from replicateM_ 1000000 (syncBarrier b) >> syncBarrier c.

Forever in an Instant

To solve this particular problem, we supply a foreverP function in the CHP library, and redefine it in the mirror library. In the original CHP library, foreverP acts as forever. In the forthcoming chp-spec library, it is defined differently: it runs the code once to record the model, then makes a note in the model that this code should have run forever. It also stops further modelling; any code after a call to forever is never reached in the normal library, and so it should be ignored here, too:

foreverP :: CHP a -> CHP b
foreverP = stopSpecT . liftM (Repeat . snd) . finSpecT

stopSpecT :: Monad m => m SpecItem -> CHPSpecT m a
stopSpecT m = CHPSpecT $ const $ liftM (\sp -> (error "stopSpecT", (sp :))) m

The stopSpecT makes sure that no further specification is performed after this point. The crucial part is the const which ignores the continuation function — this is something that is easily done with our continuation-passing monad (and is like a short-circuiting error monad).

The above takes care of forever, which is used when the process carries no state around. But other processes have direct recursion and cannot use this function.

Process Annotations

To solve the recursion problem, we introduce a process annotation. The process annotation surrounds a process and captures the value of its arguments. The assumption is made (or rather, a condition is placed on the user) that the process will have the same behaviour (excluding any external input from channels and liftIO functions) when given the same arguments. In a pure language like Haskell, this is reasonable and will commonly be the case. The annotation should be added at the beginning of any process that recurses — when the recursion is performed, the process is modelled iff it has never been run before with these arguments; if it has been run before, its behaviour has already been modelled and recorded, so it is returned directly. Here is the annotation in action, on the security guard from the dining philosophers:

security :: [(Chanin (), Chanin ())] -> CHP ()
security chans = security' chans 0
 where
  security' :: [(Chanin (), Chanin ())] -> Int -> CHP ()
  security' = process "security" $ \chans satDown ->
    let (ups, downs) = unzip chans
        maxSatDown = length chans - 1 in
    (alt $
      [readChannel c >> return (satDown - 1) | c <- ups, satDown > 0] ++
      [readChannel c >> return (satDown + 1) | c <- downs, satDown < maxSatDown]
    ) >>= security' chans

It takes as its first parameter the name of the process — it is user-supplied, but should be unique in the program. The second parameter is the process itself. Here, the process has two arguments — but it could be any number. The process annotation is designed using type-classes so that it can be used with processes that take any number of arguments.

We need to store the arguments that each process took when it was modelled. So we need to store the models in a data structure like Map String (Map Args Model). But Args needs to be a set of differently-typed arguments for each process — we can’t statically assign a type to it. If we only supported self-recursion we could probably solve this with phantom type parameters in the monad and so forth, but sometimes, even in Haskell, it is appropriate to use dynamic typing. The Data.Dynamic module supports safe dynamic typing (in that casts from the Dynamic type have a run-time check).

In fact, we don’t actually need to store the arguments themselves. What we need to store with a previous model is a function like a -> Bool that says whether the latest parameter is the same as was used for generating the previous model:

type CheckArg = Dynamic -> Bool

checkArgs :: [Dynamic] -> [CheckArg] -> Bool
checkArgs ds fs
  | length ds /= length fs = False
  | otherwise = and $ zipWith ($) fs ds

We create a Process type-class to implement our process annotation:

process :: Process p => String -> p -> p
process s = process' True s []

class Process p where
  process' :: Bool -> String -> [(Dynamic, CheckArg)] -> p -> p

instance (Eq a, Typeable a, Process b) => Process (a -> b) where
 process' topLevel name args f x
  = process' topLevel name (args ++ [(toDyn x,(== Just x) . fromDynamic)]) (f x)

The process' function takes a Bool (ignore that for now), the process name, a list of arguments so far (each pair is the argument itself, and its function to check against a future value), and then wraps a process “p”. The instance shown above is the one that captures all a process’s parameters. Each parameter is appended to the list. toDyn turns a value into a Dynamic, and fromDynamic returns a Maybe value (Nothing if the type-cast is unsuccessful, Just if it was successful). Comparing the result of fromDynamic to Just x checks both the type and the value at once. To support dynamic typing, parameters must have a Typeable instance (which GHC can derive for most types — and is supplied for all CHP library types) and an Eq instance to check for equality. The most notable types that cannot be used for a parameter are functions. This is a limitation of the approach — and indeed, CSP itself does not support any notion of functions being passed around. Any processes that take functions as parameters would have to be made first-order (Neil Mitchell’s Firstify comes to mind).

The base-case instance for Process is the one that actually uses the parameter list:

data CHPState = CHPState { ... , 
 chpProcessMap :: Map.Map String (Map.Map Integer ([CheckArg], (Spec,Dynamic))),
 chpFreeNames :: [(Dynamic, CheckArg)],
 chpNextProcess :: !Integer }

instance (Typeable a, Eq a) => Process (CHP a) where
  process' topLevel name immArgs p = addSpecT1 $ do
    st <- get
    let possibles = Map.toList <$> (Map.lookup name $ chpProcessMap st)
        args = if topLevel then immArgs
                 else chpFreeNames st ++ immArgs
    case possibles >>= L.find (checkArgs (map fst args) . fst . snd) of
      Just (n, (_, (_, r)))
        -> return (flip fromDyn (error "process-lookup") r, Call n)
      Nothing -> 
        do let n = chpNextProcess st
           put $ st { chpProcessMap = insertMapMap name n
                        (map snd args, (error "process", toDyn ()))
                          $ chpProcessMap st
                    , chpFreeNames = args
                    , chpNextProcess = succ n
                    }
           (r, f) <- finSpecT p
           modify $ \st' -> st' { chpProcessMap = insertMapMap name n
                                    (map snd args, (f, toDyn r))
                                      $ chpProcessMap st'
                                  -- Restore original free names:
                                , chpFreeNames = chpFreeNames st }
           return (r, Call n)

The case statement checks if any previously-modelled processes match. If one does (the Just case), its model and return value are returned. If Nothing is found, the process is modelled. It is crucial that the state is first updated with an entry for the process — that way, when the process recurses, it can find itself in the collection of recorded processes (if the recursion uses the same argument values). The dummy entry has the right parameters but an invalid model (that will never be accessed before it is later updated) and an incorrect return type; processes that recurse and then examine the value of the recursive call are not supported here. However, almost every recursive CHP process is tail-recursive, which can be modelled just fine (if they weren’t tail recursive, they would probably feature a space-leak). After the process has been modelled, its entry is updated with the real return value and real specification.

The model returned by the process annotation is always simply a Call item. Therefore, any process that recurses will simply have a Call item added to the end, stopping the model from extending forever — provided that at some point the process is called again with the same parameters. A process whose parameters continually change and never repeat — for example, one that outputs ascending integers — cannot be modelled here. This is yet another limitation of the approach.
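For example, a counter process like the following sketch (an invented example, written in the same style as the security guard above) recurses with a different argument every time, so the annotation never finds a matching previous model and the generation cannot terminate:

ascending :: Chanout Int -> CHP ()
ascending out = go 0
  where
    go :: Int -> CHP ()
    go = process "ascending" $ \n ->
      writeChannel out n >> go (n + 1)  -- n never repeats, so no Call is ever reused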

Example

As an example, we’ll use the security guard shown earlier in this post, and see how it is modelled with three philosophers (i.e. three sets of channels). Our approach produces a model for each different set of arguments to the process — the models for different arguments can potentially be completely different. Some effort could be put into collapsing them back down during post-processing, but here are the three models:

security_10=
  (((security.up.phil0?x_14 -> security_9)
    []
    (security.up.phil1?x_15 -> security_9)
    []
    (security.up.phil2?x_16 -> security_9)
    []
    (security.down.phil0?x_17 -> security_11)
    []
    (security.down.phil1?x_21 -> security_11)
    []
    (security.down.phil2?x_22 -> security_11)))
security_11=
  (((security.up.phil0?x_18 -> security_10)
    []
    (security.up.phil1?x_19 -> security_10)
    []
    (security.up.phil2?x_20 -> security_10)))
security_9=
  (((security.down.phil0?x_13 -> security_10)
    []
    (security.down.phil1?x_23 -> security_10)
    []
    (security.down.phil2?x_24 -> security_10)))

The top process, security_10, is the state where one philosopher is currently seated and thus all up and down events are offered. If an up event occurs, the next process is security_9, the state where no philosophers are seated, and thus only down events are offered. If a down event occurs in security_10, the next process is security_11, where two philosophers are seated and only up events are offered: this prevents a third philosopher sitting down (which could potentially lead to the classic deadlock in the dining philosophers).

That concludes all the in-depth technical parts of this guide. There’ll be one more post explaining the top-level specify method, which should also include the announcement of the release of the chp-spec library based on this guide.
