
Concurrency Can Be Deterministic (But The Type System Doesn’t Know It)

There was an interesting article posted last night on the GHC blog on the matter of concurrency and parallelism. Several commenters seem to have taken issue with the definitions of concurrency and parallelism used in the article. I think they are absolutely fine (and standard), and I found little in the article to disagree with.

One thing I do disagree with is the idea that “if all we want to do is make programs run faster on a multicore, concurrency should be a last resort” — I aim in this blog to show that concurrent programming can be straightforward and elegant. Concurrency is not something to be afraid of. Along similar lines, I also think that concurrency is useful for more than the applications listed there. My boids example does not seem to fit the applications for concurrency mentioned in the article, but I think that reasoning about the boids (or indeed, any simulation agents) as self-contained stateful processes with an outside interface is easier than thinking about them in an iterative-functional way (e.g. as a list of boid states that need an update function applied to them). Your opinion may differ.

Regardless of such disagreements, the article also made a good point about types which I would like to expand on, and relate to CHP. One of the great advantages of Haskell is its type system. The difference between a -> IO a and a -> a is huge and vital. A large part of Haskell’s strength comes from showing this difference in the type, using the type system to reason about it — which allows the type-checker to prevent misuse. The article’s author points out that pure functions run in parallel are guaranteed to be deterministic — which means they retain their pure type. So bigExpensiveComputation has the type a -> a, regardless of whether it’s calculated sequentially or in parallel (this difference becomes a run-time detail). So the determinism of the algorithm remains captured at the type level, even when you use parallel annotations.
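To make this concrete, here is a small sketch using the par and pseq primitives from base’s GHC.Conc (parFib is a hypothetical name of my own, not from the article): the sparked version may run its two halves on separate cores, yet its type remains as pure as the sequential one.

```haskell
import GHC.Conc (par, pseq)

-- A pure, expensive computation; nothing in its type mentions evaluation order.
fib :: Int -> Integer
fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

-- Sparks the two recursive calls in parallel. Compiled with -threaded and run
-- with +RTS -N2, the sparks may execute on separate cores, but the result is
-- deterministic either way, and the type is still Int -> Integer.
parFib :: Int -> Integer
parFib n
  | n < 15    = fib n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = parFib (n - 1)
    y = parFib (n - 2)
```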

So how does this relate to CHP? With semi-explicit parallelism annotations (par, pseq, strategies and the rest), you can create a parMap function, with the same type as map, that does all the evaluations in parallel (say, to RNF using the NFData type-class):

parMap :: NFData b => (a -> b) -> [a] -> [b]
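For comparison, here is a base-only sketch of the same idea using par and pseq directly (parMapWHNF is a hypothetical name of mine; unlike the NFData version it only forces each result to weak head normal form, but the type is equally pure):

```haskell
import GHC.Conc (par, pseq)

-- Spark the evaluation of each element in parallel. The strategies-based
-- parMap additionally forces results to full normal form via NFData; this
-- sketch stops at weak head normal form, but the type says "pure" either way.
parMapWHNF :: (a -> b) -> [a] -> [b]
parMapWHNF _ []       = []
parMapWHNF f (x : xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x
    ys = parMapWHNF f xs
```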

Anything that can be evaluated in parallel using Haskell’s parallelism can also be evaluated in parallel using Haskell’s concurrency. Here’s a complete CHP implementation of parMap:

parMap :: NFData b => (a -> b) -> [a] -> CHP [b]
parMap f xs = do
  -- One channel per element of the list:
  cs <- replicateM (length xs) oneToOneChannel
  liftM snd $
    -- Writers: one process per element, each forcing f x to normal form
    -- (writeChannelStrict) and sending it down its own channel...
    (runParallel $ map (uncurry writeChannelStrict) (zip (writers cs) (map f xs)))
    -- ...in parallel with a single process reading the results back in order:
    <||> mapM readChannel (readers cs)

It may have slightly higher run-time overheads, but semantically and operationally the effect is exactly as you’d expect (map, but in parallel), and it is just as deterministic as the deterministic parallel version based on strategies (if there are no bottoms involved). So what’s different? The type. My version acts in the CHP monad, whereas the original was pure. They are both deterministic, but the CHP version no longer reveals that in the type — it is indistinguishable from any non-deterministic or deadlocking process that can exist in the full message-passing system of CHP.
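The same observation holds with nothing but base’s concurrency primitives. A fork-and-collect version (parMapIO is a hypothetical name of mine, and this is plain Control.Concurrent rather than CHP) is just as deterministic, because each result comes back over its own MVar and is collected in list order, yet its type is stamped with IO:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

-- Fork one thread per element; each evaluates its result (to WHNF here, via
-- $!) and sends it back over a dedicated MVar. Reading the MVars in order
-- makes the output deterministic, but the type no longer says so.
parMapIO :: (a -> b) -> [a] -> IO [b]
parMapIO f xs = do
  vars <- forM xs $ \x -> do
    v <- newEmptyMVar
    _ <- forkIO (putMVar v $! f x)
    return v
  mapM takeMVar vars
```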

So it seems that is the real strength of Haskell’s parallelism over concurrency: the type system shows that the parallel annotations produce deterministic code, whereas the concurrent version enjoys no such guarantee in its type. And without a model checker in the type system (!), I suspect it may not be possible to fix that.

  1. Robert Lee
    October 7, 2009 at 7:47 pm

    It seems that much of this is a problem of ambiguous and incomplete naming conventions that mean different things to different people. Perhaps also the kinds of problems that lend themselves to “doing more than one thing at a time” are not yet fully understood or categorized. We may therefore have solutions without understanding the kinds of problems they are best at solving.

    “Concurrent” is a good word to describe in English the idea of “doing more than one thing at a time”. Parallel is borrowed from geometry, and to me seems like it should be complemented with other descriptors, like askew, orthogonal, etc. All of which could be subsumed under the moniker of “concurrent”. Perhaps if we can give better mathematical metaphors for the kinds of problems we wish to solve we can discuss them with less ambiguity and conflict.

