Posts Tagged ‘traces’

Transposing Sequential and Parallel Composition

June 22, 2010

Recently, I spent some time discussing traces with Marc Smith (we previously wrote two papers on the matter). Something that came up is the idea of representing a sequential composition of parallel processes as a parallel composition of sequential processes. Let me explain…

We, as programmers, often start off reasoning sequentially and then add on reasoning about parallel processes later — usually with much difficulty. Think of producing a control-flow graph for an imperative program, for example. If you have a sequential program, it is usually quite straightforward to form a flow graph for the program. When you try to add parallel processes, you need multiple flow graphs side by side. And they will interact, so really you need to link them together at the points of interaction. It quickly gets quite involved, and leaves you wishing that you didn't have to deal with concurrency at all.

What Marc and I found is the opposite: our reasoning was straightforward for parallel processes, but sequential composition was a headache. We were trying to represent the trace of a program as a tree, with each node representing either a parallel composition or a process without parallelism. In fact, due to the associativity and commutativity of parallel composition, this type of tree can always be flattened into an unstructured bag of parallel processes:

We then wanted to link processes in the tree that communicated with each other, to work out information about dependence and to look at neighbourhoods of connected processes. That was all fine, but it quickly becomes difficult when you need to consider sequential composition, specifically the sequential composition of parallel compositions.

Imagine that one of the tree nodes is first a program without parallel composition, and later becomes the parallel composition of two sub-processes, i.e. P = Q;(R || S). We don’t have a way to represent sequence in our neat parallel trees above, and this causes our representation to become awkward. We can’t have a simple ordered sequential list of trees, because if two processes have sequence information, they can proceed in either order, so really we would need a sequential lattice of trees, which is getting nasty.

At this point we wished we didn’t have to deal with sequential composition — at least, not sequential composition that can contain parallel composition. And via a small transformation, it turns out that we don’t. Consider the process: a; ((b; b’) || c); d, where everything in there is assumed to be a communication. We can refactor this into a set of parallel processes (P1 || P2 || P3):

P1 = a; par1start ; par1end ; d
P2 = par1start ; b; b’; par1end
P3 = par1start ; c ; par1end

The key insight is really this: if you are running multiple processes in parallel, it is the same if you start them at the outset of the program, then synchronise with them to start them, as if you set them going when you need them. So even though the process doing c won’t happen until a has, we don’t wait to set that process off partway through the program. We set that process (P3 above) off straight away, then send it a message (par1start) when we want to really set it going. The par1end message makes sure that P1 doesn’t do anything more until P2 and P3 have completed: all three processes synchronise together on the par1start and par1end messages.

With this transformation, we never have any parallel composition inside sequential composition in our traces. All we have is one giant parallel composition of every process in the system, where each process is a sequential composition of events. There is no more nesting than that; we have flattened the process network into an unstructured bag of lists of events. In fact, we can now represent the processes as a graph (with links formed by the communications: the dotted lines in the diagram above). The semantics of the process are unchanged, and all the information about the ordering dependencies is still there if you want it (embodied in the unique parstart/parend communications: the italicised events in the graph above). This form of representation actually makes it really nice to track dependencies and relations throughout the process network, by following the communication links.
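To make the transformation concrete, here is a small stand-alone sketch of it. The Proc type and flatten function are inventions for this post (they are not part of CHP); fresh parN_start/parN_end names play the role of the synchronisation messages described above.

```haskell
-- An illustrative process language: a single event, a sequential
-- composition, or a parallel composition.
data Proc = Ev String | Seq [Proc] | Par [Proc]

-- Flatten a process into an unstructured bag of purely sequential
-- event lists. Each Par node is replaced by synchronisations on fresh
-- start/end events shared between the parent and its children.
flatten :: Proc -> [[String]]
flatten p = main : spawned
  where (main, spawned, _) = go 0 p

-- Returns the parent's event list, the spawned processes, and the
-- next fresh counter.
go :: Int -> Proc -> ([String], [[String]], Int)
go n (Ev e)   = ([e], [], n)
go n (Seq ps) = foldl step ([], [], n) ps
  where step (evs, sp, k) q =
          let (evs', sp', k') = go k q
          in (evs ++ evs', sp ++ sp', k')
go n (Par ps) = ([start, end], spawned, n')
  where start = "par" ++ show n ++ "_start"
        end   = "par" ++ show n ++ "_end"
        (spawned, n') = foldl step ([], n + 1) ps
        step (sp, k) q =
          let (evs, sp', k') = go k q
          in (sp ++ [start : evs ++ [end]] ++ sp', k')
```

For a; ((b; b') || c); d, evaluating flatten (Seq [Ev "a", Par [Seq [Ev "b", Ev "b'"], Ev "c"], Ev "d"]) yields the three sequential processes P1, P2 and P3 above (with par0_start/par0_end in place of par1start/par1end).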


You could even go one step further, and transform the system into a collection of parallel processes where each process has at most three events in sequence, at most one of which is an event from the original program, i.e. the above would become:

P1A = a ; seq1
P1B = seq1; par1start ; seq2
P1C = seq2; par1end; seq3
P1D = seq3; d
P2A = par1start; b; seq4
P2B = seq4; b’ ; par1end
P3 = par1start ; c ; par1end

I’m not sure how useful this is, though.

Categories: Uncategorized Tags:

Visualising Your Process Network

May 11, 2010 4 comments

One advantage of CHP’s process-oriented style of programming is that it lends itself well to visualisation. I don’t make as much use of it on this site as I should, but some previous posts show diagrams with processes (as nodes) joined by channels (as edges). I already have some tracing support in CHP, and I had wondered if this could provide a quick and dirty way to graph a process network. This post shows the results of adding a little support to CHP for process visualisation, which I’m now debating whether or not to keep.

It turned out to be slightly better to add a new trace type (the process-graph trace) to CHP than to build a graph from the existing structural traces. So from the user’s point of view, all that would be added to the library was a new trace type; no changes to an existing program would be required other than switching tracing on and using the process-graph as the trace type. (I also added a labelling function for processes, that works with both structural and process-graph traces.)

Hooking the process-graph trace into the existing tracing infrastructure was so straightforward that it's not really worth writing about. Unexpectedly, the most difficult part of making this functionality useful was finding something that can draw the process networks well. Process networks tend to be hierarchical, in that each process (node) may contain a sub-network, composed to any depth. These processes are connected via edges. My first inclination for graph visualisation was to reach for Graphviz, so I wrote some code to spit out a Graphviz file. Graphviz's dot tool lays out graphs in ranked rows. On a simple benchmark called numbers, it does quite well:

For larger examples, or those with cycles, its rank-basis can be awkward. Here’s the dining philosophers, rendered in a slightly unusual manner:

Most of the other graphviz tools can’t deal with hierarchical graphs — only the fdp tool can. This seemed promising; a force-directed approach that supported sub-graphs. Here’s the numbers example from above:

Clumsy. And here’s the dining philosophers:

Eugh! This is a tiny bit better when you liberate it from its subgraph (at least, it might be halfway decent if not for the node labels):

But there is a clear solution for the dining philosophers (a star topology with the security guard at its centre), so if fdp makes a hash of this graph, I don’t hold out much hope for more complex process networks. I don’t want to write my own tool for this, but I may if I have to (I’d actually quite like an interactive viewer so that you can drag things around like you can in prefuse, and perhaps involve animation of the network). Any suggestions for beating fdp into shape, or for alternative tools would be most welcome!
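Whatever tool ends up drawing the network, the emission side is the easy bit. Here is a minimal sketch of producing a dot file from a recorded process graph; the (id, label) node list and (from, to) edge list are assumed inputs for illustration, not CHP's actual trace representation, and it ignores sub-networks (which would need dot's subgraph clusters):

```haskell
-- Emit a Graphviz dot file for a flat process graph: nodes are
-- (identifier, label) pairs, edges are (from, to) channel links.
toDot :: [(String, String)] -> [(String, String)] -> String
toDot nodes edges = unlines $
     ["digraph network {"]
  ++ ["  " ++ nid ++ " [label=" ++ show lbl ++ "];" | (nid, lbl) <- nodes]
  ++ ["  " ++ a ++ " -> " ++ b ++ ";" | (a, b) <- edges]
  ++ ["}"]
```

For example, putStr (toDot [("p0", "producer"), ("p1", "consumer")] [("p0", "p1")]) prints a file that dot or fdp will accept (the process names here are made up).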

Observed Occurrences Only

I said that this technique was quick and dirty. The main flaw (or is it a feature?) is that only the observationally determinable process network is graphed. If you pass a channel to two processes who never communicate on it, that link won’t show up in the graph. The graph is a trace: a record of what happened, not a blueprint of what might have happened. This is not an invalid approach, but I expect it is not what most people want: most would like to see their process network drawn as it is in their program design with all channels shown, not just those that were used. But that tool is more complicated, and will have to wait for another day. (You can extend this hack to do something more like that if you use an annotation and generic programming to capture all processes’ parameters and look through them for channel ends…)

Change and Movement Over Time

The fallacy of drawing the process network in a single image is that it is not static for the duration of the program. Processes can start and end, come and go. Not only that, but CHP channels can be freely communicated down other channels, meaning that programs can dynamically rewire their topology at run-time (I should write some posts about that at some point). So really a proper tool should include support for visualising the process network as it changes over time. But again, that will have to wait for another day.


Solving the Santa Claus Problem with Conjunction

December 10, 2009 1 comment

The Santa Claus problem is a concurrency problem that, like the Dining Philosophers problem, can be used to demonstrate your particular concurrency framework of choice. December seems like the right time of year to post a CHP solution that I hacked up at the CPA 2008 conference last year. It’s short but rather primitive. Here’s the statement of the problem, via Simon Peyton Jones (who made an STM solution in a chapter of Beautiful Code):

Santa repeatedly sleeps until wakened by either all of his nine reindeer, back from their holidays, or by a group of three of his ten elves. If awakened by the reindeer, he harnesses each of them to his sleigh, delivers toys with them and finally unharnesses them (allowing them to go off on holiday). If awakened by a group of elves, he shows each of the group into his study, consults with them on toy R&D and finally shows them each out (allowing them to go back to work). Santa should give priority to the reindeer in the case that there is both a group of elves and a group of reindeer waiting.

The particularly tricky aspects of this problem are:

  1. Santa must make a choice between the reindeer and elves.
  2. Santa must give priority to the reindeer.
  3. The elves must come in groups of exactly three, even though there are ten of them.

Choice is easy in CHP, so the first item is not a problem. We saw in the last post how to emulate priority, so we can use that to solve the second point. The third item is difficult. Our barriers require all enrolled participants to synchronise, so we cannot use one barrier for all ten elves. We could introduce some intermediary agents to coordinate the elves, but that is a bit long-winded. Instead we can take a slightly brute-force approach combined with a CHP feature called conjunction.


The principle behind conjunction is straightforward. Usually you offer a choice such as read from channel c or read from channel d: readChannel c <-> readChannel d. This waits for the first of the two events, and executes it (it can never execute both options). A conjunction is when you want to read from channel c and read from channel d: readChannel c <&> readChannel d. Crucially, this will only read from both when both are available. If c is not ready, it will not read from d, and vice versa. It is both or none: an atomic item, if you like. Conjunction also has a list form, every.

One example of where this is useful is the aforementioned Dining Philosophers problem: a philosopher wishing to pick up his forks should execute syncBarrier leftFork <&> syncBarrier rightFork. That way he will either pick up both his forks or neither, thus eliminating the deadlock wherein each philosopher ends up holding one fork.

Conjunction Specifics

It is valid to offer conjunction as part of a choice; for example, (readChannel c <&> readChannel d) <-> readChannel e will wait to read from channels (c AND d) OR e. These choices can be overlapping, e.g. (readChannel c <&> readChannel d) <-> (readChannel d <&> readChannel e) waits for (c AND d) OR (d AND e).

The choices should be in disjunctive normal form (DNF): OR on the outside, AND inside the brackets, so c AND (d OR e) is not valid. But AND does distribute over OR here, so this is equivalent to the DNF (c AND d) OR (c AND e). (The operators do not distribute in the opposite direction: c OR (d AND e) is not equivalent to (c OR d) AND (c OR e), because the latter allows c and e to both happen, whereas the former does not.)
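The reason event-offers do not behave like plain boolean formulas can be checked with a tiny model, where a choice in DNF is a list of alternatives and each alternative is the set of events that must all occur together (the Offer type is an illustration for this post, not CHP's implementation):

```haskell
import Data.List (nub, sort)

-- A choice in DNF: a list of alternatives, each the set of events
-- that must all happen together if that alternative is chosen.
type Offer = [[String]]

ev :: String -> Offer
ev e = [[e]]

-- OR just adds alternatives.
orO :: Offer -> Offer -> Offer
orO = (++)

-- AND combines every alternative on the left with every alternative
-- on the right.
andO :: Offer -> Offer -> Offer
andO xs ys = nub [sort (nub (x ++ y)) | x <- xs, y <- ys]
```

In this model, ev "c" `orO` (ev "d" `andO` ev "e") gives the two alternatives [["c"], ["d","e"]], while (ev "c" `orO` ev "d") `andO` (ev "c" `orO` ev "e") additionally contains the alternative ["c","e"], which is exactly why the two are not equivalent.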

Conjunction and Barriers

Importantly for today's post, there is a correspondence between barriers (N participants, all N must synchronise together) and a conjunction of two-party barriers. Let's imagine for a moment that Santa wanted to meet with all ten of his elves. We could create a single barrier, enroll Santa and the ten elves, and get them all to synchronise on the barrier. The barrier would only complete when all eleven participants synchronised. But alternatively, we could create ten two-party barriers. We would enroll each elf on a different barrier, and enroll Santa on all of them. When they want to meet, the elves would all synchronise on their barriers (meaning their code is unchanged), while Santa would wait for the conjunction of all ten barriers. Each barrier would only complete when all of them complete, because Santa waits for the conjunction of all of them, so we have the same semantics as before. The ten barriers are dynamically fused into one barrier.

The Solution

Santa does not want to wait for all ten elves; he only wants to wait for three. We can implement this with a little brute-force. Labelling the elves A through J, we can make Santa wait for (A AND B AND C) OR (A AND B AND D) OR… (H AND I AND J). That’s 120 possibilities. Not scalable, but the problem said ten elves so that’s all we have to manage.
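The count is easy to sanity-check with the same filterM trick that the solution's allNFrom uses to enumerate the groups:

```haskell
import Control.Monad (filterM)

-- All sublists of exactly length n: filterM (const [True, False])
-- enumerates every sublist, and we keep those of the right size.
allNFrom :: Int -> [a] -> [[a]]
allNFrom n = filter ((== n) . length) . filterM (const [True, False])
```

Evaluating length (allNFrom 3 "ABCDEFGHIJ") gives 120, matching 10 choose 3.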

Let’s finally get to some code. A reindeer waits for a random time, then synchronises on its barrier (i.e. tries to meet with Santa). An elf waits for a random time, then synchronises on its barrier (i.e. tries to meet with Santa). So, perhaps unexpectedly, a reindeer has exactly the same code as an elf. (This is also the case in Peyton Jones’ solution). Here it is:

syncDelay :: EnrolledBarrier -> CHP ()
syncDelay b = forever (randomDelay >> syncBarrier b)
  where
    randomDelay = (liftIO $ randomRIO (500000, 1000000)) >>= waitFor

reindeer :: EnrolledBarrier -> CHP ()
reindeer = syncDelay

elf :: EnrolledBarrier -> CHP ()
elf = syncDelay

Our next job is to write the Santa process. Santa will take one barrier for the reindeer, and multiple barriers for the elves (one each). He will loop forever, first polling the reindeer (to give them priority) and otherwise choosing between the reindeer and the elves:

santa :: [EnrolledBarrier] -> EnrolledBarrier -> CHP ()
santa elfBars reindeerBar
  = forever (deliverToys </> (skip >> (deliverToys <-> meetThreeElves)))

Now we need to define the helper processes. Delivering toys is straightforward; we just synchronise with the reindeer. The rest of the code deals with picking all groups of three elves, and making a choice between synchronising with each group:

    deliverToys = syncBarrier reindeerBar

    meetThreeElves = alt [meetGroup g | g <- allGroupsThreeElves]
    meetGroup bars = every_ [syncBarrier bar | bar <- bars]

    allGroupsThreeElves = allNFrom 3 elfBars
    allNFrom n = filter ((== n) . length) . filterM (const [True, False])

The allNFrom is not important here, so I’ve used a concise (but probably inefficient) definition. Now that we have santa, our elves and our reindeer, all that remains is to put them together. To do this we use two wiring helper functions, enrollAll (that enrolls all of the processes on the given barrier) and enrollOneMany (that enrolls one process on all the barriers, and each of the other processes on one):

main :: IO ()
main = runCHP_VCRTraceAndPrint $ enrollOneMany
    (\elfBars ->
        enrollAll (newBarrierWithLabel "reindeer")
                  (santa elfBars : replicate 9 reindeer))
    [(newBarrierWithLabel ("elf" ++ show n), elf) | n <- [0..9]]

That’s it. Part of the reason that this code is quite short is that I’ve omitted all the print statements detailing what is going on (for example in Peyton Jones’ version). These print statements were only there to observe the behaviour of the computation, and we can do that with CHP’s built-in traces mechanism, just by using the runCHP_VCRTraceAndPrint function.

View-Centric Reasoning (VCR) traces, developed by Marc L. Smith and subsequently tweaked a little bit by me, display an ordered list of multisets, where each multiset holds independent events (events that did not have a sequential dependency on each other). This style makes it very easy to see from the output that our Santa Claus solution has the right sort of behaviour, viz:

< {elf3, elf4, elf5}, {elf0, elf6, elf7}, {reindeer}, {elf1, elf8, elf9}, {elf2, elf4, elf7}, {elf0, elf1, elf3}, {elf5, elf8, elf9}, {reindeer}, {elf3, elf6, elf7}, {elf0, elf1, elf9}, {elf2, elf4, elf8}, {reindeer}, {elf3, elf5, elf6}, {elf0, elf4, elf9}, {elf1, elf2, elf7}, {elf5, elf6, elf8}, {reindeer}, {elf0, elf1, elf3}, {elf4, elf7, elf9}, {elf2, elf5, elf8}, {elf0, elf1, elf6}, {reindeer}, {elf3, elf7, elf9}, {elf0, elf1, elf4}, {elf2, elf6, elf8}, {elf5, elf7, elf9}, {elf0, elf3, elf4}, {reindeer}, {elf1, elf2, elf6}, {elf5, elf7, elf8}, ...
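You can even check that shape mechanically. A throwaway predicate (assuming the event names shown above) accepts a step only if it is the lone reindeer synchronisation or a meeting of exactly three distinct elves:

```haskell
import Data.List (isPrefixOf, nub)

-- A valid step in the trace: either the reindeer event on its own,
-- or exactly three distinct elf events.
validStep :: [String] -> Bool
validStep ["reindeer"] = True
validStep evs = length evs == 3
             && all ("elf" `isPrefixOf`) evs
             && length (nub evs) == 3
```

Every multiset in the output above satisfies this predicate.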

Concise Solution

For those who like a bit of code golf (finding the shortest version of a program), I came up with this concise version of the whole solution:

import Control.Concurrent.CHP
import Control.Concurrent.CHP.Traces
import Control.Monad
import Control.Monad.Trans
import System.Random

santa elves reindeer = forever $
  syncBarrier reindeer </> (alt $ map (every_ . map syncBarrier) groups)
  where groups = filter ((== 3) . length) $ filterM (const [True, False]) elves

main = runCHP_VCRTraceAndPrint $ enrollOneMany (enrollAll
  (newBarrierWithLabel "reindeer") . (: replicate 9 syncDelay) . santa)
  [(newBarrierWithLabel ("elf" ++ show n), syncDelay) | n <- [0..9]]
  where syncDelay = forever . (randomDelay >>) . syncBarrier
        randomDelay = (liftIO $ randomRIO (500000, 1000000)) >>= waitFor

Excluding the import statements, that’s a solution to the Santa Claus problem in eight lines of Haskell. Rather difficult-to-follow Haskell, but that’s usually the result of code golf.

Concurrent Testing and Tracing: Useful Output for Test Failures

November 2, 2009 1 comment

Programmers write testcases for programs to check that the programs work correctly. When a test-case fails, a programmer wants to know two things: first, why specifically did the test fail (which assertion did not hold?), and secondly, what is wrong with the program (or the test case!)?

Last Friday I released CHP 1.5.0, which adds better support for testing, tracing, and the combination of the two. Tracing refers in theoretical terms to recording a computation’s history: practically, it means keeping a log of all the CHP synchronisation events (e.g. channel communications) that happened in your program. CHP supports several types of tracing, but for this post I will use the simplest type: a CSP trace, which is just a linear sequence of events that occurred.

Let’s explore the combination of testing and tracing in practice. Previously I used an example involving a four-item pipeline to find the mode (most common element) of a list. Let’s use this example again, but since this is a post about testing, we’ll introduce an error: we will forget to include the sort process at the front of the pipeline.

To use the tracing framework, you have to augment your code slightly to label parts of it. This involves using arrLabel instead of arr, and *>>>* instead of >>> (see the remarks at the end of this post for why, and for a way to shorten the tedious labelling):

badMode = (arrLabel "group" group) *>>>* (arrLabel "maxByLength" maxByLength)
            *>>>* (arrLabel "head" head)
  where maxByLength = maximumBy (comparing length)

We can then join this pipeline onto one that takes a string, and splits it into words — making a pipeline that looks for the most frequent word:

mostFreqWord = (arrLabel "makeLower" makeLower) *>>>* (arrLabel "words" words) *>>>* badMode
  where makeLower = map toLower

Finally, we can provide a simple test sentence for the pipeline that should pick out the most frequent word, “the”:

testFreqWord = testCHPInOut (const (== "the")) (runPipelineLabel mostFreqWord)
  "The quick brown fox jumps over the lazy dog"

main :: IO ()
main = runTestTT testFreqWord >> return ()

If we run this output, what do we get? Well, the test case fails, so we are told that, but we are also automatically given the trace of the failing test case:

### Failure:                              
testCHP Failure; trace: < _c4,
  makeLower->words."the quick brown fox jumps over the lazy dog",
  _c5 >
Cases: 1  Tried: 1  Errors: 0  Failures: 1

So how do we read this trace? The trace is delimited by angle brackets, and is a comma-separated sequence of events (I'll look to tighten up the grammar in future to allow for automated parsing). The _c4 is an unnamed event at the start (this is the channel used in the test framework to inject the data). The following events are channel communications of the form channel-name.value: so makeLower->words is the channel name, and "the quick brown fox jumps over the lazy dog" is the data value. The channel names have been automatically generated based on the two processes that the channel connects, and the data value has been printed out using show. We can use this trace to immediately see which values were passed between each process pair, and thus track down the problem. You can see from looking at the trace that group is not grouping the two "the"s together, which is because the list is not sorted. We can easily fix this by putting in the missing sort process. With sort added, the test passes and the trace is as we would expect:

< _c5,
  makeLower->words."the quick brown fox jumps over the lazy dog",
  _c6 >
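Until the grammar is tightened up, a rough-and-ready parser can pull the event list out of such a trace. This is a sketch only: it assumes a comma never appears inside an event, which a shown string value could violate.

```haskell
import Data.Char (isSpace)

-- Split a trace of the form "< e1, e2, ... >" into its events.
-- Naive: assumes ',' never occurs inside an event itself.
parseTrace :: String -> [String]
parseTrace s = map trim (splitOn ',' (body s))
  where
    -- Strip everything up to the opening '<' and back from the
    -- final '>'.
    body = reverse . drop 1 . dropWhile (/= '>') . reverse
         . drop 1 . dropWhile (/= '<')
    trim = dropWhile isSpace . reverse . dropWhile isSpace . reverse
    splitOn c str = case break (== c) str of
                      (a, [])       -> [a]
                      (a, _ : rest) -> a : splitOn c rest
```

Applied to the trace above, this yields the three events _c5, the makeLower->words communication, and _c6.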

Tracing is useful for tracking down problems and strange behaviours in your system, so integrating it with the test framework made a lot of sense. You can trace any program, not just test-cases, but I will explore that at a future date. You may observe that tracing (at least a CSP trace) is not too far different from adding print (or Debug.Trace.trace) statements around your program. The main differences are:

  • Channels are named when you join together processes, so you don’t have to modify the definition of the original processes to get intelligible trace output (unless you want to trace the internal channels).
  • Tracing can be turned on or off at run-time (by changing the top-level call to runCHP), rather than at compile-time with pre-processor macros — but if tracing is switched off, you do not incur any overhead.
  • The order in the trace is guaranteed to reflect the real ordering of the channel communications, so you can't end up with the unrepresentative order that often results from putting print statements in a concurrent program.

Remarks: There are several side notes to add to this post. Many readers will probably recoil at my introduction of a *>>>* combinator. This has type:

(*>>>*) :: Show b => ProcessPipelineLabel a b -> ProcessPipelineLabel b c
  -> ProcessPipelineLabel a c

The Show constraint is the reason that I could not declare an Arrow instance — so any fix to this problem had to involve declaring a similar combinator outside the arrow, unless I've missed another approach.

The naming of the processes can be simplified with the use of a pre-processor. Here are the same process declarations, named using the C pre-processor:

#define PR(p) (arrLabel #p p)

badMode = PR(group) *>>>* PR(maxByLength) *>>>* PR(head)
  where maxByLength = maximumBy (comparing length)

mostFreqWord = PR(makeLower) *>>>* PR(words) *>>>* badMode
  where makeLower = map toLower

Unfortunately, GHC with the -cpp flag doesn’t support this stringize operator so you have to run the C pre-processor on the file first as part of your build process before feeding it to GHC, or use the -F flag with a wrapper script.

Showing the data being passed down the channel obviously forces the data, which may potentially affect the behaviour. Whether you show the data in the trace can be turned on and off — and you can also supply a custom function to show your data if, for example, you are passing around infinite lists (you could just print the first three items). I’ve tried to give the gist here rather than all the details, but the CHP documentation explains the different ways you can record a channel communication in a trace.

I will present this example at the fringe session at the CPA 2009 conference this evening here in Eindhoven. If anyone is interested, last year’s paper on tracing is available in my list of publications, with this year’s publication to be added shortly.

Categories: Testing Tags:
