Pieces of Yesod: Inverting a Haskell Function

June 17, 2011

I’ve recently been working with web frameworks, and decided to have a play with what Haskell has to offer for web resources. One of the several frameworks available is Yesod. Web frameworks need some sort of mapping between URLs and internal logic, usually called routing. In Yesod, routing is done with a “pieces” mechanism. You might declare in your routes:

/resources/#SortBy ResourcesR GET

And this will mean that if you access example.com/resources/d, that will translate into a call that turns the string "d" into a value of type SortBy, which is then passed to getResourcesR. It also means that if you use the syntax “@{ResourcesR Date}” in your template files, this will be translated into example.com/resources/d. This translation is done with the “pieces” classes, for example the single piece class:

class SinglePiece a where
  fromSinglePiece :: Text -> Maybe a
  toSinglePiece :: a -> Text

So a value should always be translatable into a part of a URL (i.e. Text), and an arbitrary URL part may be a valid value. Here’s a simple sorting-order type:

data SortBy = Date | Popularity | Title

And here’s the trivial SinglePiece instance:

instance SinglePiece SortBy where
  fromSinglePiece "d" = Just Date
  fromSinglePiece "p" = Just Popularity
  fromSinglePiece "t" = Just Title
  fromSinglePiece _ = Nothing

  toSinglePiece Date = "d"
  toSinglePiece Popularity = "p"
  toSinglePiece Title = "t"

There are several drawbacks with this code. The major drawback is obvious: the code is repetitive! It expresses the same simple mapping from sort order to text twice, once in each direction. This is tedious. Another drawback is that it is a potential source of bugs if I make a typo. SinglePiece does have an obvious property for testing: fromSinglePiece . toSinglePiece == Just. I also need to make sure I cover all the possible sorting values. The nice thing about toSinglePiece (rather than using a lookup table) is that if I use the GHC flags -fwarn-incomplete-patterns -Werror, it will warn me if I miss out a case. So if I add a new sort order, say Author, and forget to change toSinglePiece, I’ll get a compile-time error as soon as I try to compile (which is quicker even than a test). But fromSinglePiece doesn’t have the same advantage: if I forget a new case for Author, I’ll only know about it because of my test.
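
As an aside, that round-trip property is easy to state with QuickCheck. Here is a minimal sketch of my own (the Arbitrary instance and property name are not from the post, and it assumes an Eq instance for SortBy, which we derive below anyway):

import Test.QuickCheck

-- hand-rolled generator: just pick one of the three constructors
instance Arbitrary SortBy where
  arbitrary = elements [Date, Popularity, Title]

prop_roundTrip :: SortBy -> Bool
prop_roundTrip x = fromSinglePiece (toSinglePiece x) == Just x

Running quickCheck prop_roundTrip then exercises the property on randomly chosen sort orders.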

The way to fix my problems is to stop repeating myself. I can change the code to this:

instance SinglePiece SortBy where
  fromSinglePiece = invert toSinglePiece
  toSinglePiece Date = "d"
  toSinglePiece Popularity = "p"
  toSinglePiece Title = "t"

That invert function looks a bit magical! Broadly, its type is:

invert :: (a -> b) -> (b -> Maybe a)

Now, as it stands, the invert function is not implementable; there’s no way to arbitrarily invert a Haskell function. But we can brute force an inversion if we know the full range of inputs. That’s possible with the help of some automatically derivable Haskell type-classes:

import Control.Arrow ((&&&)) -- for the &&& operator used below

data SortBy = Date | Popularity | Title
  deriving (Eq, Enum, Bounded)

invert :: (Enum a, Bounded a, Eq b) => (a -> b) -> (b -> Maybe a)
invert f = flip lookup $ map (f &&& id) [minBound..maxBound]

We use Enum and Bounded to get the list of all possible inputs, then form a lookup table which we can reference to find the inverse of a value. So now we’re able to write just toSinglePiece, and automatically derive fromSinglePiece. No repetition, and fewer possible sources of mistakes. The only problem is that invert might be a bit slow. Instinctively, it feels like the lookup table can’t be as fast as Haskell’s built-in case structure. To find out, let’s whip up a test using the Criterion benchmarking library.

I’m testing the performance of fromSinglePiece on the three valid URLs and one invalid, i.e.

main = defaultMain
  [bench "benchName" $ nf (map fromSinglePiece :: [Text] -> [Maybe SortBy])
     ["d", "p", "t", "x"]]
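
(For the record, the snippet above needs a little boilerplate before it compiles; the following is my guess at the minimum required, and is not shown in the post.)

{-# LANGUAGE OverloadedStrings #-} -- for the Text literals below

import Control.DeepSeq (NFData (..))
import Criterion.Main (bench, defaultMain, nf)
import Data.Text (Text)

-- nf fully evaluates the [Maybe SortBy] results, so SortBy needs an
-- NFData instance; for a plain enumeration, WHNF is already normal form.
instance NFData SortBy where
  rnf x = x `seq` ()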

I ran this once on the original fromSinglePiece that used a case statement (calling that “cases”), and once on the new fromSinglePiece that uses the lookup table (calling that “invert”), compiling with -O2 each time on GHC 7.0.3. Here are the results:

benchmarking cases
collecting 100 samples, 23951 iterations each, in estimated 1.011024 s
bootstrapping with 100000 resamples
mean: 423.0417 ns, lb 421.8167 ns, ub 426.2270 ns, ci 0.950
std dev: 9.136906 ns, lb 1.879618 ns, ub 16.88352 ns, ci 0.950
found 2 outliers among 100 samples (2.0%)
  2 (2.0%) high severe
variance introduced by outliers: 0.997%
variance is unaffected by outliers

benchmarking invert
collecting 100 samples, 21664 iterations each, in estimated 1.011000 s
bootstrapping with 100000 resamples
mean: 466.8043 ns, lb 466.7298 ns, ub 466.9089 ns, ci 0.950
std dev: 448.0250 ps, lb 340.4309 ps, ub 723.9564 ps, ci 0.950
found 2 outliers among 100 samples (2.0%)
  1 (1.0%) high severe
variance introduced by outliers: 0.990%
variance is unaffected by outliers

So the difference in speed is only around 10%. I can accept that cost in order to not repeat myself everywhere. Obviously, many types used with SinglePiece are not a trivial conversion between a type constructor and a text value, but for those that are, I found this trick useful.

Testing a Side-Effecting Haskell Monad

In this post I’m going to continue discussing testing CHP, this time focusing on the CHP monad. The monad is a bit complex (and indeed, there is a subtle bug in the current version of CHP to do with poison-handling) because it has several semantic aspects:

  • it is an error-handling/short-circuiting monad (because it must obey the poison semantics),
  • it has some extra semantics to do with choice that treats the leading action differently,
  • underneath it all, there is the IO monad, which must be lifted correctly.

CHP is a monad with side-effects: quite aside from the communication with parallel processes, the presence of lifted IO means that it can have arbitrary side-effects. This means, if my terminology is correct, that we need a check for operational equality rather than denotational. Let’s explore this by considering testing an error-handling monad.

Testing An Error-Handling Monad

The Either type can be used as an error-handling monad by using the Left constructor to hold an error, and the Right constructor to hold a successful return. One property that should hold is that Left e >>= k == Left e. That’s an actual Haskell equality; we can test that property for any function k, and use actual equality on the result.
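
That property can be checked directly with QuickCheck; here is a minimal sketch (the property name and the concrete Int/String types are my own choices):

import Test.QuickCheck (quickCheck)

prop_leftShortCircuits :: Int -> Bool
prop_leftShortCircuits e = (Left e >>= k) == Left e
  where
    k :: Bool -> Either Int String
    k _ = Right "never reached" -- any continuation at all; Left must skip it

Running quickCheck prop_leftShortCircuits checks it for a spread of error values.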

It’s also possible to use a similar error-handling monad transformer. The transformers library offers an ErrorT e m a type that allows you to wrap a monad “m”, using error type “e”, with successful returns of type “a” (with a function throwError :: e -> ErrorT e m a for introducing errors). We can test this using the Identity monad as type “m”, which would make sure the code was pure. But the semantics of this error transformer involve what actions occur in the error monad. For example, runErrorT (lift m >> throwError e >> n) should be equivalent to m >> return (Left e), but usually m (which is an action in the underlying monad) cannot be directly compared for equality.

Setting this in the IO monad makes it all much clearer: runErrorT (lift (putStrLn "Hello") >> throwError e >> lift (putStrLn "Goodbye")) should print Hello on the screen, and then finish with an error — and not print Goodbye. Testing this is difficult because we need to have a harness that can check for which side-effects occurred, making sure the right ones (e.g. printing Hello) did happen, and the wrong ones (e.g. printing Goodbye) didn’t happen.
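
To make that concrete, here is the same example written against the ExceptT transformer that has since replaced ErrorT (a sketch of mine, not the post's code); running demo prints Hello, does not print Goodbye, and returns Left "e":

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (runExceptT, throwE)

demo :: IO (Either String ())
demo = runExceptT $
  lift (putStrLn "Hello") >> throwE "e" >> lift (putStrLn "Goodbye")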

This is exactly the situation that the CHP monad is in; we always have IO beneath the CHP monad (which we can’t swap out for a more easily testable monad!), so we must test equality of code by checking which side effects occurred. We need to check that liftIO_CHP (putStrLn "Hello") >> throwPoison >> liftIO_CHP (putStrLn "Goodbye") prints Hello then throws poison, and doesn’t print Goodbye. Printing text is a difficult side-effect to track in a test harness, but the nice thing about IO is that it has a whole host of side-effects to choose from! So I test by observing alterations to a Software Transactional Memory (STM) variable (TVar) — therefore my test inputs are of type TVar String -> CHP a.

Testing CHP’s Error-Handling

I use these two functions as a harness to execute the CHP code:

runCHPEx :: CHP a -> IO (Either SomeException a)
runCHPEx m = (Right <$> runCHP m) `C.catches`
    [Handler (\(e::UncaughtPoisonException) -> return $ Left $ toException e)
    ,Handler (\(e::ErrorCall) -> return $ Left $ toException e)]


runCode :: (TVar String -> CHP a) -> IO (String, Either SomeException a)
runCode m = do
  tv <- newTVarIO ""
  r <- runCHPEx (m tv)
  s <- atomically (readTVar tv)
  return (s, r)

The runCode function gives back (in the IO monad) the String that was the final value of the TVar, and the result of running the CHP computation — either an uncaught poison/error exception or an actual return value. (One planned change to CHP semantics in version 3.0.0 is that uncaught poison gets translated into an exception, rather than giving runCHP a Maybe-wrapped return.) Here are some example HUnit tests (with some useful helper functions):

obsTestPoison :: Test
obsTestPoison = TestList
  [("ab", Right 'b') ==* \tv -> tv!'a' >> tv!'b'
  ,("", Left UncaughtPoisonException) ==* const throwPoison
  ,("a", Left UncaughtPoisonException) ==* \tv -> tv!'a' >> throwPoison
  ]
  where
    (!) :: TVar String -> Char -> CHP Char
    (!) tv c = c <$ (liftIO_CHP $ atomically $
                       readTVar tv >>= writeTVar tv . (++[c]))

    (==*) :: (String, Either UncaughtPoisonException Char)
          -> (TVar String -> CHP Char) -> Test
    (==*) x m = TestCase $ runCode m >>= assertEqual "" (onLeftSnd Just x)
                  . onLeftSnd fromException
      where
        onLeftSnd :: (b -> c) -> (a, Either b d) -> (a, Either c d)
        onLeftSnd f (x, Left y) = (x, Left $ f y)
        onLeftSnd f (x, Right y) = (x, Right y)

The left-hand side of the starred equality is the expected result. The right-hand side is the code (which is of type TVar String -> CHP Char) which should produce that result.

So by observing deliberately-placed side-effects in the code, we can check for equality in a side-effecting monad. These unit tests aren’t the only way I’m testing my monad though — in my next post, I’ll build on this to form a more advanced property-based testing technique.

Testing a Haskell Library

June 6, 2011

I’m currently gearing up for a CHP-3.0.0 release. Version 3 will be (internally) simpler in some places, faster in others, and has a slightly simplified API. The content of the library is mostly there — but I’m not yet ready to release because I want to write more documentation, and more tests.

Testing concurrent programs is notoriously tricky — the exact sequence of actions is non-deterministic, and sometimes bugs only occur when actions end up occurring at specific times. There are various aspects of CHP that particularly warrant testing:

  • The central event-synchronisation mechanism (which underlies channels, barriers, choice and conjunction).
  • The surrounding communication mechanisms (i.e. the exchange of data) for channels.
  • The monad, poison, and all the associated semantics.
  • Support for tracing.
  • Wiring combinators (especially the four-way and eight-way grid wiring).

In this post, I’ll talk about the first item: testing the central event-synchronisation mechanism. This is absolutely crucial to the correct functioning of CHP — if there’s a problem in this algorithm, it will affect any code that uses CHP. The algorithm itself is effectively a search algorithm, which looks for events that can take place. The input to the search algorithm is scattered across several Software Transactional Memory (STM) TVars (Transactional Variables) throughout the system, and the search takes place in a single transaction.

Testing STM-Based Algorithms

The really nice thing about testing an STM-based search algorithm is that we don’t actually have to worry about the concurrent aspects at all. An STM transaction is guaranteed to be free of interference from other transactions. In our test harness, we set up all the TVars to our liking, run the search transaction and look at the results — and this is a perfectly suitable test, even if the real system will have several competing transactions running at different times.

The Search Algorithm to be Tested

Our search algorithm looks at all of the processes and finds which events (if any) can currently synchronise. Each process makes a set of offers (or offer-set) — essentially saying that it wants to complete one of the offers. Each offer is a conjunction of events that need to be completed together. An event can complete if, and only if, the number of processes making an offer on it is equal to the event’s enroll count, and all the appropriate offers can complete. Time for an example! Let’s say we have processes p, q, and r which have the code:

p = readChannel c <&> readChannel d
q = writeChannel c 0
r = writeChannel d 0 <|> (syncBarrier a <&> syncBarrier b)

Barrier “a” has an enrollment count of 1, barrier “b” has an enrollment count of 3, and the channels are all one-to-one, so they have a fixed enrollment count of 2. We say that p has an offer-set with two offers: one offer contains just “c”, the other offer contains just “d”. Process q has an offer-set with one offer, which contains just “c”. Process “r” has an offer-set with two offers: one contains just “d” and the other contains “a” and “b”. The answers to the search algorithm in this situation are that p and q can communicate together on channel c, or that p and r can communicate on channel d.

Writing Individual Test Cases

So we have an example, but now we need to code that up. This involves creating events for a, b, c and d, setting their enrollment counts, recording all the offer-sets — and then running the search that we actually want to test. If we had to spell that out long-hand for every test, we would soon be bored stiff. So of course we make some helper functions to allow us to specify the test as simply and clearly as possible — in effect, we can create a testing EDSL (Embedded Domain Specific Language). Here’s my code that’s equivalent to testing our previous example:

       testD "Blog example" $ do
         [a,b,c,d] <- mapM evt [1,3,2,2]
         p <- offer [c, d]
         q <- offer [c]
         r <- offer [d, a&b]
         return $ (p ~> 0 & q ~> 0) `or` (p ~> 1 & r ~> 0)

The first line in the do-block creates our events with associated counts. The next lines make all the offers that we discussed. Note that the distinction between channels and barriers has vanished — that was useful for relating the example to actual CHP code, but underneath this, both channels and barriers use the event mechanism, which is what we’re actually testing here. The final line declares the possible results: either p will choose item 0 in its list of offers (that’s “c”) and q will choose item 0 (also “c”), or p will choose item 1 and r will choose item 0 (both “d”).

The nice thing here is that the test is easy to read, and corresponds very closely to what I wanted to express. Writing lots more tests is easy — which encourages me to do so, and thus my function ends up better tested. I’ve written about 25 of these tests, each aiming at testing different cases.

Implementation of my Testing EDSL

I’ll show some of the details here of how I put together my testing EDSL. The monad it uses is just a wrapper around the state monad, building up a list of events, and offers on those events:

newtype EventDSL a = EventDSL (State ([EventInfo], [CProcess])  a)
  deriving (Monad)

data EventInfo = EventInfo {eventEnrolled :: Int}
  deriving (Eq, Show)
type CProcess = [CEvent] -- A process's offer-set: one CEvent per offer
newtype CEvent = CEvent {cEvent :: [Int]} -- the indices of the events conjoined in one offer

Events and offers are represented simply by a wrapper around an integer (an index into the lists stored in our state) — we use newtypes to stop all the Ints getting confused with each other, and to stop accidental manipulation of them in the DSL. Our evt and offer functions are then trivial manipulations of the state:

evt :: Int -> EventDSL CEvent
evt n = EventDSL $ do (evts, x) <- get
                      put (evts ++ [EventInfo n], x)
                      return $ CEvent [length evts]

newtype COffer = COffer {cOffer :: Int} deriving (Eq)

offer :: [CEvent] -> EventDSL COffer
offer o = EventDSL $
  do (x, ps) <- get
     put (x, ps ++ [o])
     return $ COffer (length ps)

To allow use of the ampersand operator in two different places (the input and the result), we add a type-class:

class Andable c where
  (&) :: c -> c -> c

instance Andable CEvent where
  (&) (CEvent a) (CEvent b) = CEvent (a ++ b)

For the expected result, we use this data type:

data Outcome = Many [[(Int, Int)]]

Each item in the list is a different possible outcome of the test (when there are multiple results to be found, we want to allow the algorithm to pick any of them, while still passing the test). Each outcome is a list of (process index, chosen-offer index) pairs. We have some combinators for building this up:

(~>) :: COffer -> Int -> Outcome
(~>) (COffer p) i = Many [[(p, i)]]

instance Andable Outcome where
  (&) (Many [a]) (Many [b]) = Many [a++b]

or :: Outcome -> Outcome -> Outcome
or (Many a) (Many b) = Many (a ++ b)

Ultimately our test has type EventDSL Outcome, and when we run the inner state monad we get back (([EventInfo], [CProcess]), Outcome). This forms the input and expected result for the test, which we feed to our helper functions (too long and boring to be worth showing in full here), which use HUnit for running the tests.
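
For what it's worth, here is a sketch of roughly how such a helper might look; this is my own reconstruction rather than the real code, and the search argument stands in for whatever function is under test (a real helper would also cope with the search finding no answer at all):

import Control.Monad.State (runState)
import Data.List (sort)
import Test.HUnit

-- Build an HUnit test from a DSL script, given the search function under
-- test; note that runState returns (result, state). The answer must be one
-- of the acceptable outcomes (we sort the pairs so that the order in which
-- they were listed doesn't matter).
testDWith :: ([EventInfo] -> [CProcess] -> [(Int, Int)])
          -> String -> EventDSL Outcome -> Test
testDWith search name (EventDSL body) = TestCase $
    assertBool name (sort answer `elem` map sort acceptable)
  where
    (Many acceptable, (evts, procs)) = runState body ([], [])
    answer = search evts procs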

Property-based Testing

Probably Haskell’s most famous testing framework is QuickCheck (its cleverer cousin, Lazy SmallCheck, deserves to be just as well-known). This uses property-based testing: you specify a property and it generates random input, and checks that the property holds. This diagram is how I believe property-based testing is intended to work:

The size of the box is meant to indicate effort; the biggest bit of code is what you’re testing, you may need some effort to generate random input data, and then you check that some simple properties hold on the output.

So let’s think about some simple properties of our search algorithm. I can think of some:

  • Any events that complete must have the same number of processes choosing them as the enroll count.
  • Any events that don’t have enough processes offering definitely can’t complete.
  • Processes can’t select an offer index that’s invalid (e.g. if p has 2 offers, it can’t select offer 3).

All of these properties seem to me to be pussyfooting around the main property of the search algorithm that I want to test:

  • The search algorithm finds a correct answer when one is available.

That certainly sounds simple. But how do we know what the possible correct answers to the search algorithm are? Well, we need to write a search algorithm! Often when I come to use QuickCheck, how it ends up working is this:

It feels a lot like double-entry validation; to calculate the expected result, I must code the same algorithm again! In this instance, I wrote a brute-force naive search that is checked against my optimised clever search. I’m not sure if this is still technically property-based testing, but QuickCheck is still handy here for generating the random input. (I guess a search algorithm is a bad choice for using property-based testing?) Regardless: I wrote my brute force search and tested my algorithm. It found no bugs, but now I have two sets of tests supporting me: one set of hand-crafted interesting tests, and a second version of the algorithm used with randomly generated input to pick up the cases I might have missed.
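
To give a flavour of what that looks like, here is a minimal sketch of such a brute-force model over the EDSL's representation. This is my reconstruction from the completion rule described earlier, not CHP's actual test code:

import Data.List (nub)

-- Enumerate every way of giving each process either no chosen offer or
-- exactly one of its offers, and keep the non-empty assignments in which
-- every event mentioned by a chosen offer is chosen by exactly its
-- enroll count of processes.
bruteForceSearch :: [EventInfo] -> [CProcess] -> [[(Int, Int)]]
bruteForceSearch events procs =
    filter ok (filter (not . null) (go (zip [0 ..] procs)))
  where
    -- all assignments of at most one chosen offer per process
    go [] = [[]]
    go ((p, offers) : rest) =
      [ choice ++ more
      | choice <- [] : [[(p, i)] | i <- [0 .. length offers - 1]]
      , more <- go rest ]

    -- the indices of the events conjoined by process p's chosen offer i
    chosenEvents (p, i) = cEvent ((procs !! p) !! i)

    ok assignment = all complete involved
      where
        involved = nub (concatMap chosenEvents assignment)
        complete e =
          length [() | ch <- assignment, e `elem` chosenEvents ch]
            == eventEnrolled (events !! e)

The property being checked is then essentially that the optimised search's answer appears somewhere in this list, for randomly generated events and offer-sets.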

Summary

This post has been a bit light on technical detail, mainly because testing the algorithm involves a lot of dealing with the internals of my library that would take too long to explain here, and which would not be very interesting. My summary is this: constructing testing EDSLs makes your unit tests clearer to read and easier to write, which in turn encourages you to write more tests. I haven’t found property-based testing as useful as unit tests, but if you have a simple dumb solution and you want to check your optimised clever solution maintains the same behaviour, QuickCheck (or Lazy SmallCheck) is a handy way to test equality of functions.

Parallel Composition in Haskell

June 2, 2011

CHP, my Haskell concurrency library, allows you to run processes in parallel. One way of doing so is this binary operator:

(<||>) :: CHP a -> CHP b -> CHP (a, b)

Informally, the behaviour is as follows: this starts both processes running (using Haskell’s forkIO under the hood) and waits for them both to terminate and return their results. They can terminate at different times, but the parent process (the one running that composition) will wait for both to terminate before it returns. This parallel composition is not uncommon — for example, the parallel-io package provides a similar operation. In this post I’ll talk about the properties we would expect from this composition, and how to make them hold in CHP — but this easily generalises to making them hold for a parallel composition operator in the IO monad (CHP being a thin layer on top of IO).

Properties of Parallel Composition

I intuitively expect that parallel composition should be commutative: p <||> q should be the same as q <||> p. I’d also expect associativity; this:

do (x, (y, z)) <- p <||> (q <||> r)

should be equivalent to:

do ((x, y), z) <- (p <||> q) <||> r

Another property (which I’ll call the unit law) is that composing something in parallel with a null (or more generally, a short-running) computation should have no effect, that is p should be equivalent to:

p <||> return ()

Finally, I’d expect independence; if I compose p in parallel with q, I would not expect p to behave any differently (or have its environment behave any differently) than if it was not part of a parallel composition. So there are our four properties: commutativity, associativity, the unit law and independence. Let’s consider the behaviour of our parallel composition, to try to make sure it satisfies all of these.

Normal Execution

For those, like me, who like concrete code to look at, a basic implementation of parallel composition that works with normal execution (we’ll add to it as we go along) is:

(<||>) :: CHP a -> CHP b -> CHP (a, b)
(<||>) p q = liftIO_CHP $ do
  pv <- newEmptyMVar
  qv <- newEmptyMVar
  forkIO $ executeCHP p >>= putMVar pv
  forkIO $ executeCHP q >>= putMVar qv
  (,) <$> takeMVar pv <*> takeMVar qv

(We’re using liftIO_CHP :: IO a -> CHP a and executeCHP :: CHP a -> IO a to go between the monads.)
If all the processes being composed execute and terminate successfully, all the properties easily hold. All the processes start up, they all run, and when they all complete we get the results back. If one of the processes doesn't terminate, the properties also all hold — the composition waits for them all to terminate, so if p doesn't terminate, the whole composition will not terminate, no matter which order the processes are in, or how we've associated the composition.

Poison

(This is the only CHP-specific aspect of our parallel composition.) CHP processes don't have to terminate successfully; they can also terminate with poison. The semantics with respect to poison are simple, and preserve all our properties; if either process terminates with poison, the composition will terminate with poison once both child processes have finished. We can update our implementation, based around the adjusted type of executeCHP that now indicates whether a process terminated with poison:

data WithPoison a = Poison | NoPoison a
executeCHP :: CHP a -> IO (WithPoison a)

(<||>) :: CHP a -> CHP b -> CHP (a, b)
(<||>) p q = merge =<< liftIO_CHP (do
  pv <- newEmptyMVar
  qv <- newEmptyMVar
  forkIO $ executeCHP p >>= putMVar pv
  forkIO $ executeCHP q >>= putMVar qv
  (,) <$> takeMVar pv <*> takeMVar qv)
  where
    merge (NoPoison x, NoPoison y) = return (x, y)
    merge _ = throwPoison

This is very similar to our first definition, but it merges the values afterwards; they are only returned if neither side threw poison (otherwise poison is thrown). This again preserves all the properties, although associativity may require a little explanation. Consider p <||> (q <||> r). If p terminates with poison, the outer composition will wait for the inner composition to finish (which means waiting for q and r), then throw poison. If r terminates with poison, the inner composition will wait for q, then throw poison; the outer composition will wait for p to also finish, then throw poison. So the behaviour is the same no matter which part of the nested composition throws poison.

(Synchronous) Exceptions

The termination rules given for poison extend easily to exceptions (poison is really just a kind of exception anyway). If either process terminates with an exception (because it wasn’t trapped, or it was rethrown), the parent process will rethrow that exception. If both processes throw an exception, an arbitrary one of the two is thrown by the parent process. We make exceptions “beat” poison: if one process throws an exception and the other exits with poison, the exception is rethrown and the poison is dropped. Our slightly adjusted implementation is now:

(<||>) :: CHP a -> CHP b -> CHP (a, b)
(<||>) p q = merge =<< liftIO_CHP (do
  pv <- newEmptyMVar
  qv <- newEmptyMVar
  forkIO $ wrap p >>= putMVar pv
  forkIO $ wrap q >>= putMVar qv
  (,) <$> takeMVar pv <*> takeMVar qv)
  where
    wrap proc = (Right <$> executeCHP proc) `catch`
                         (\(e :: SomeException) -> return (Left e))
    
    merge (Left e, _) = throw e
    merge (_, Left e) = throw e
    merge (Right Poison, _) = throwPoison
    merge (_, Right Poison) = throwPoison
    merge (Right (NoPoison x), Right (NoPoison y)) = return (x, y)

The wrap function catches any untrapped exceptions from the process, and sends them back to the parent process instead of a result. The merge function prefers exceptions to poison, and only returns a result when neither side had an exception or poison.

(Another possibility for exceptions would have been ignoring the exceptions from the child processes; this would maintain commutativity, associativity and independence, but would have broken the unit law, because when p threw an exception, it would not be propagated if p was wrapped inside a dummy parallel composition.)

Asynchronous Exceptions

Asynchronous exceptions can be received by a thread at any time (unless you mask them), without that thread doing anything to cause them. CHP is a concurrency library with its own mechanisms for communicating between threads and terminating, so I don't usually use asynchronous exceptions in CHP programs. But that doesn't stop them existing, and even if you don't use them yourself (via killThread or throwTo) it doesn't stop them occurring; asynchronous exceptions include stack overflow and user interrupts (i.e. Ctrl-C). So we need to take account of them, and preferably do it in such a way that all our expected properties of parallel composition are preserved.

When we execute a nested parallel composition, such as p <||> (q <||> r), we actually have five threads running: one each for p, q, and r, one for the outer composition and one for the inner composition:

(Each circle is a thread.) We must assume, in the general case, that any one of these threads could potentially receive an asynchronous exception (e.g. the system can kill any thread it likes with an async exception, according to the docs).

Child Process Receives an Asynchronous Exception

Let’s start by discussing what happens if a child process such as q gets an asynchronous exception directly. One possibility is that q may trap the exception and deal with it. In that case, we're fine — the parallel composition never sees the exception, and since it's been trapped by q, it shouldn't escape from q anyway. But q may not handle it (or may handle and rethrow, which is the same from the point of view of our operator). So what should happen when q terminates with this exception? One possibility would be to immediately kill its sibling (or throw the exception to them). This would completely break independence; r would now be vulnerable to being killed or receiving an asynchronous exception initially targeted at another process, just because it happens to be composed with q. The only way to preserve independence is to treat the asynchronous exception in a child process as we do synchronous exceptions; we wait for both processes to terminate, and then the parent exits by rethrowing that asynchronous exception. This preserves commutativity, associativity (no matter which of p, q and r get the exception, all will be allowed to finish, and then the outer composition will end up rethrowing it, potentially via the inner composition doing the same) and independence. Our unit law is preserved if p receives the exception (see the next section for more discussion on the unit law).

Parent Process Receives an Asynchronous Exception

Now we move on to consider what happens if the parent process in a composition receives an asynchronous exception. This is actually one of the most likely cases, because your program may well consist of just:

main = runCHP (p <||> q)

If the user hits Ctrl-C, it’s the parent process that will get the exception, not p or q. We mustn't terminate only this thread and leave p and q dangling. The whole point of the parallel composition is that you scope the lifetime of the processes, and we don't want to break that. We could catch the exception, wait for p and q to terminate and then rethrow it. But if the exception is something like a user-interrupt then it seems wrong to ignore it while we wait for the oblivious child processes to terminate. So the semantics are this: when the parent process receives an asynchronous exception, it rethrows it to all the child processes. This trivially preserves commutativity. Independence is also satisfied; if the processes weren’t in the parallel composition, they would receive the exception directly, so rethrowing it to them is the same effect as if they weren’t composed together.

The unit law and associativity are in danger, though. Consider our unit law example: p <||> return (). If p does not trap the asynchronous exception that the parent throws to it, the unit law is preserved; p will get the exception, and then exit with it (regardless of the other side), which will cause the parent to exit with the same exception, so it is as if the composition was not there. If p does trap the exception, the behaviour becomes a race hazard with one of two behaviours (the difference hinges on whether the return () process has already terminated):

  1. The parent receives the exception, and throws it to its children. The return () process has not yet terminated, so it then terminates with an uncaught exception. p traps the exception and deals with it, but when the parallel composition exits, the exception is rethrown from the return () branch; this exception would not have been thrown if p was executing alone, because p would have caught it.
  2. The parent receives the exception, and throws it to its children. The return () process has already terminated successfully. The exception is thus only thrown to p. p traps the exception and deals with it, so when the parallel composition exits, the exception is not visible, just as if p was executing alone.

There are two ways to “fix” this, by adjusting our unit law: one is to say that the unit law only holds if child processes do not trap exceptions. The other is to say that the unit of parallel composition is not return (), but rather:

return () `catch` (\(e :: SomeException) -> return ())

The problem with that is that catch is in the IO monad, and to be part of the parallel composition it needs to be in the CHP monad, which requires a lot of type-fiddling. I’m still thinking about this one for CHP, but if you were dealing with IO, changing the unit to the above would probably make most sense.

Associativity also has some caveats. Here’s our diagram again, with the outer composition at the top, and the inner composition its right-hand child:

If the outer composition receives an exception, all is well; the exception is thrown to the children, which includes the inner composition — and the inner composition throws to its children, so all the processes get it. If any of them don’t trap the exception, the exception will be the result of the whole composition no matter which way it was associated — and if all trap it, associativity is still preserved. However, if the inner composition receives an exception, associativity is not preserved; the inner composition will throw the exception to its children but the outer composition will not know about the exception immediately, so only the processes in the inner composition will see the exception. Now it matters which processes are in the inner composition and which is in the outer composition. But this seems morally fair: if an exception is thrown to an inner part of the whole composition, that already depends on the associativity, so it’s unreasonable to expect that the composition can function the same regardless of associativity.

Masking Asynchronous Exceptions

When defining our parallel composition operator, we also need to be careful about precisely when asynchronous exceptions might occur. One major difference between poison and asynchronous exceptions is that poison can only occur when you try to use a channel or barrier, whereas asynchronous exceptions can occur any time. There are pluses and minuses on both sides: poison is easier to reason about, but asynchronous exceptions can interrupt processes which are not using channels or barriers (e.g. that are blocked on an external call). To make sure we don’t receive an asynchronous exception at an awkward moment, such as in between forking off p and forking off q (which would really mess with our semantics!), we must mask against asynchronous exceptions, and only restore them inside the forked processes. You can read more about masking in the asynchronous exception docs. So, the final adjusted definition of parallel composition that I will give here is as follows (we don’t need a restore call around takeMVar, because takeMVar is an interruptible operation and can still receive asynchronous exceptions while it blocks):

(<||>) :: CHP a -> CHP b -> CHP (a, b)
(<||>) p q = merge =<< liftIO_CHP (mask $ \restore -> do
  pv <- newEmptyMVar
  qv <- newEmptyMVar
  let wrap proc = restore (Right <$> executeCHP proc) `catch`
                    (\(e :: SomeException) -> return (Left e))
  pid <- forkIO $ wrap p >>= putMVar pv
  qid <- forkIO $ wrap q >>= putMVar qv
  let waitMVar v = takeMVar v `catch` (\(e :: SomeException) ->
                    mapM_ (flip throwTo e) [pid, qid] >> waitMVar v)
  (,) <$> waitMVar pv <*> waitMVar qv)
  where
    merge (Left e, _) = throw e
    merge (_, Left e) = throw e
    merge (Right Poison, _) = throwPoison
    merge (_, Right Poison) = throwPoison
    merge (Right (NoPoison x), Right (NoPoison y)) = return (x, y)

(In fact, this isn’t quite enough. I’m currently adding a finallyCHP function that acts like its IO equivalent, and to support that I must push the restore call inside executeCHP to avoid asynchronous exceptions being acknowledged before the finallyCHP call is executed, but that’s a bit further into CHP’s internals than I want to go in this post.)

Hopefully that was a useful tour of parallel composition semantics in Haskell. Synchronous exceptions are easily dealt with, but asynchronous exceptions (which were perhaps designed more for the forking model of thread creation than this style of parallel composition) make things a fair bit trickier.

Communicating Haskell Processes: The Long And The Short Of It

May 31, 2011

The Long

It’s been a long time coming, but I have completed the final corrected version of my PhD thesis. It’s entitled, rather unimaginatively (and against received wisdom), “Communicating Haskell Processes”. (Communicating Haskell Processes, or CHP for short, is my concurrent message-passing library for Haskell.) A lot of what’s in there has been on this blog before. The new bits are the slightly revised CHP design (which will be in the forthcoming 3.0.0 release) and operational semantics in chapter 3. It’s freely available online for you to read, all 250+ pages.

The Short

Meanwhile, I’ve also put together a condensed version of the new material — including mobile (suspendable) processes, and automatic poison, neither of which I’ve written about anywhere else — into a paper for the Haskell symposium, which is a more palatable 12 pages. I’m not going to copy out the whole paper into this post, but here’s a quick summary of automatic poison.

Poison is a way to shut down a process network: processes should catch poison (which can occur if a channel you are using becomes poisoned) and poison all their own channels, spreading the poison throughout the network until all of the processes have shut down. The standard idiom is to surround your whole code with an onPoisonRethrow block that poisons the channels:

deltaOrig :: Chanin a -> [Chanout a] -> CHP ()
deltaOrig input outputs
  = (forever $ do x <- readChannel input
                  parallel_ [writeChannel c x | c <- outputs]
    ) `onPoisonRethrow` (poison input >> mapM_ poison outputs)

Automatic poison puts all this poison handling into an autoPoison function that wraps your process with the poison handling, so for the above we can substitute this instead:

deltaAuto, deltaAuto' :: Chanin a -> [Chanout a] -> CHP ()
deltaAuto = autoPoison deltaAuto'
deltaAuto' input outputs
  = forever $ do x <- readChannel input
                 parallel_ [writeChannel c x | c <- outputs]

I haven’t saved masses of code there, but I think the latter version is more readable. Read the paper for more details.

Feedback

Comments are welcome on the paper, which is due on Monday (6th June). Comments are slightly less welcome on the thesis, because it’s already been sent for binding! If there is a mistake, I’m happy to correct it in the online version.

Programming Language Fiction

March 18, 2011

The latest issue of The Monad.Reader (an electronic magazine vaguely themed around Haskell) was released this week — but rather than featuring technical articles, it’s a special “poetry and fiction” edition. I decided to have a go at contributing, so I wrote a story about programming languages: in particular, what’s important in a programming language. It’s explored indirectly, via a discussion of one of my other interests: films. If you’re interested, take a look. I’m open to feedback on my contribution, and I imagine Brent (the editor) is also interested as to whether the special edition was a worthwhile experiment.

Interleaving

January 10, 2011

I spent some time late last year working further on what I originally called behaviours. It grew much longer than a blog post, so it’s ended up in the Monad.Reader electronic magazine, in the just-published Issue 17. Thanks are due to Brent Yorgey for his careful proof-reading and editing of the article. The accompanying code is already up on Hackage as the interleave package for those who want to play.

This week I’m also starting to prepare some slides; in two weeks’ time I will be presenting at PADL 2011 in Texas. That’s a talk that’s on the automated wiring that I’ve covered on the blog before. Once that trip is out of the way, I’m hoping to have a bit more time for Haskell coding again.
