
SVGutils, or: Doomed to Program

August 11, 2010

Recently, I’ve been taking a bit of a break from programming in my spare time. In this vein, I’ve been playing some board/card games — and, having played a few, I thought it might be fun to try my hand at designing my own card game. I think some people prototype card games using index cards, but that seems to me inefficient if you need duplicate cards: if you need twenty of a particular card, that obviously calls for mass production by printing rather than scribbling by hand. So I decided to at least use a computer to help produce a prototype — but the computer use would only be for doing a bit of text on some cards and then printing them out. No programming involved.

Input

Card games tend to consist of different text/images, often with duplicates of each card, laid out on only a handful of templates. I could knock up a template, copy it many times and then change the text on each copy. But what if I then want to change the template? This is a maintainability issue: clearly we should separate the templates from the text and images to be used on them. But that means we’ll need a way to substitute all the text and images into the template when it comes time to print. Hmmm. I tend to use Inkscape for vector graphics editing, and since Inkscape stores SVG files, which are XML-based, we can actually do simple text-based substitution on the SVG files to put in our custom text (and image file names) for each card. A fairly simple scripting task that involves only a modicum of programming.

We will need a format to store the bits of text (e.g. card title, card type, main text) to be substituted for the various cards. We could store the data for cards in a CSV-like format, with a different column for each bit of text, and a row per card. But many cards will probably share the same substitution values: for example, they may have the same card-type, like “instant” or “attack”. It seems a shame to have to type that out many times, and it is also bad practice that again impacts maintainability. So some sort of grouping via a hierarchical format would help here, to share values among groups of cards. YAML is a nice lightweight markup language, which is easier to write by hand than XML, so that’s a good candidate. YAML libraries exist already, so that’s not adding too much burden. And in the long run it will save effort, so that excuses a bit more programming.

Since we have this substitution mechanism, we can save ourselves even more repetition. Sometimes you have cards that are fairly similar to each other in content, perhaps only changing a particular word. For example, you might have bronze, silver and gold cards of the same variety. Or you might want to have something like classic playing cards: each combination of rank and suit, but you wouldn’t want to write out 52 cards. Much better to write the 4 suits and 13 ranks and then express that you want each possible combination. So we can add some basic “for-each” functionality to our YAML markup (don’t worry, it’s not going to end up Turing-complete — I think!). A satisfying addition that shrinks the card definitions further, which makes the added programming worthwhile.

A template plus a concise definition (the complete YAML file is shown) allows easy generation of card groups.
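To give a flavour of this, the expansion that the for-each performs is easy to picture as a Haskell list comprehension. This is just an illustration of the expansion, not the YAML syntax itself, and Card is a made-up type:

-- Every rank/suit combination: 52 cards from 4 suits and 13 ranks.
data Card = Card { rank :: String, suit :: String } deriving Show

allCards :: [Card]
allCards = [ Card r s | s <- ["hearts", "clubs", "diamonds", "spades"]
                      , r <- "A" : map show [2..10::Int] ++ ["J", "Q", "K"] ]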

Note: This isn’t the software I’m releasing today (SVGutils just does the tiling described in the next section) but in time I will probably release the card creation software too.

Output

Once the substitution into the templates is done we’ll need to print the cards (I’m only printing on a standard desktop printer) — but all the cards will be in separate post-substitution files. It would be fiddly to print them all, and would be a terrible waste to have one card per sheet. What would be really useful is a program to tile the cards together onto a handful of printable A4-size pages so that it’s easy for me to print them out and cut them up. And extra nice would be the ability to print them double-sided to give the cards backs automatically. More programming, but a big saving in effort for physically creating the cards.

How Did It Come To This?

So having started with the intention of designing a non-computer-related card game, I ended up a few hours later knee-deep in YAML and SVG processing. Once you’re a programmer, I think you just can’t help yourself when you see tasks like these that call for automation and tools. Doomed to program! Haskell was an obvious choice of language to make the process enjoyable. In the next section I’ll explain which Haskell libraries I ended up using (for those that might be interested), and then I’ll finally talk about the library I wrote and released to do the tiling for printing: SVGutils.

A Tour of Haskell Libraries

Haskell has Hackage, a centralised repository of lots of Haskell libraries and programs. I was pleased to find that most of the libraries I needed were already on Hackage.

YAML

The first thing I wanted was a library to read in YAML. I’d come across YAML during some Ruby on Rails work, and it seemed like a nice way to store some structured information about the cards. YAML is nicer than XML to write in a text editor, which is what I was going to be doing. Hackage had several libraries relating to YAML, which I quickly whittled down to:

  • yst: The yst library was actually in the same vein as several parts of the project, seeing as it provided the ability to read from YAML files and substitute into templates. From a read of its documentation, it’s centred around HTML, and I decided it looked like it would be easier to start from scratch than adjust a sizeable existing application (I may have been wrong, though).
  • data-object-yaml: A library with a pleasingly short API, and a quick click on its main data type for representing YAML data looked good:
    data Object key val =
        Mapping [(key, Object key val)]
        | Sequence [Object key val]
        | Scalar val

    That’s nice and clear. You can use the richer YamlObject type as key and value, or you can just use String or Text if you don’t need the anchors and references that YAML supports (I don’t). (See the small sketch just after this list.)

  • HsSyck: Another good candidate, but its types always use the richer YAML format, and with an abstract text type (which is actually ByteString), which isn’t as handy as being able to use String or Text.

So I chose data-object-yaml, based on looking at the API.
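As a tiny illustration of working with that Object type, here’s a helper of my own (not part of the data-object-yaml API) that looks up a key at the top level of a mapping:

-- (Object repeated from above so this sketch stands alone.)
data Object key val = Mapping [(key, Object key val)]
                    | Sequence [Object key val]
                    | Scalar val

-- Look up a key at the top level of a YAML mapping; Nothing otherwise.
lookupKey :: Eq key => key -> Object key val -> Maybe (Object key val)
lookupKey k (Mapping kvs) = lookup k kvs
lookupKey _ _             = Nothing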

Templating

The next thing I needed was a library for performing string substitution in the SVG files (aka templating). There are many template libraries on Hackage. Some of them are specifically targeted at HTML, so I skipped over those. Some can work on XML, which is a possibility given that I’m working with SVG files (SVG is an XML-based format). But that requires me to put comments or custom tags into the XML file, which is slightly awkward to do from Inkscape. The simplest solution was to use the small template library. This can do simple substitution, substituting $foo in the document for whatever value you map from the key “foo”. This allowed me to construct my SVG files almost exactly as I wanted them, but put “$Name” as the content of the Inkscape text object where I wanted the name (which then showed on my screen as “$Name” while I fiddled with the font, centring, etc). Then I could use the template library to substitute for “$Name”, and my name text would end up exactly where I wanted it in the image. So the solution that knew nothing about the structure of the SVG file turned out to be the easiest. The template library lacked one or two functions that I needed, so I downloaded the source and tweaked it, then submitted some patches which ended up contributing to the latest version. Open source in action!
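For the curious, here’s a minimal sketch of that substitution step using the template library’s Data.Text.Template module (the file name and keys are made up for illustration):

import qualified Data.Text as T
import qualified Data.Text.Lazy.IO as LTIO
import Data.Text.Template (Context, substitute)

-- The Context function must be total, so unknown keys error out.
cardContext :: [(T.Text, T.Text)] -> Context
cardContext assocs k =
  maybe (error ("missing key: " ++ T.unpack k)) id (lookup k assocs)

main :: IO ()
main = do svg <- fmap T.pack (readFile "card-template.svg")
          LTIO.putStr (substitute svg (cardContext [(T.pack "Name", T.pack "Example Card")]))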

SVG/XML

Finally, I needed a library that could tile my resulting SVG cards together for printing. A quick search on Hackage for SVG revealed a dearth of SVG tools, and certainly nothing like what I needed. So it looked like I was going to have to write that one myself. Thankfully there were many libraries on Hackage for dealing with XML, so I could at least get the SVG documents parsed as a generic XML document ready for me to do some SVG-specific bits. Since there are many XML libraries, I needed to narrow down the choice a bit. I remembered that Don Stewart had recently posted about popular libraries on Hackage in various categories (including XML), so I decided to follow the crowd and pick from the ones that were popular: xml, HaXml and hexpat.

HaXml can be used in a totally different way to xml and hexpat. HaXml can generate a Haskell source file with a data-type that matches the DTD or XSD schema of an XML file (in this case, the SVG spec), to allow you to manipulate the document in a type-safe manner. I’m all for type-safety, and I had a quick try with HaXml. The file it generated with the complete Haskell types for the SVG spec was certainly large — lots of types, and many of the attribute types had close to 100 members. The thing that put me off this approach is that XML documents often have cross-cutting concerns. For example, many items in SVG have the color attribute. In the DtdToHaskell output, each color attribute is a differently-named member of a different attributes item (and all have the type Maybe String, which doesn’t help to mark them out). If I want to write a program to manipulate the colours in an SVG file, then with the DtdToHaskell approach I have to write a lot of code to pick out each color. With the standard approach that treats XML files very uniformly it’s actually much easier, as I just skim down the tree looking for any color attributes. I glanced at the API for xml and hexpat — there didn’t seem much to distinguish them, but the former had simpler central types so I went with that.
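To illustrate that uniform approach, here’s a sketch using the xml package (Text.XML.Light) that skims the whole tree for color attributes (colours is my own helper, not part of the package):

import Text.XML.Light

-- Collect every "color" attribute anywhere in the tree, regardless of
-- which SVG element carries it.
colours :: Element -> [String]
colours el = [ attrVal a | a <- elAttribs el, qName (attrKey a) == "color" ]
             ++ concatMap colours (elChildren el)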

SVGutils

So I had most of the bits I needed except for code dealing with combining many SVG files together into one for printing. That bit I wrote myself, and have bundled up the code into a package I’ve called SVGutils. It’s a light wrapper around the XML library that has a few functions for dealing with SVG files: primarily, tiling several into one. It has both a library and a command-line utility for doing this tiling. The tiling works by finding out the size of each SVG file and then placing the contents appropriately in a larger file with a paper size of your choosing, which can then be converted into PDF using Inkscape and stuck together using PDFtk (is it worth integrating this into the tool, I wonder?). It supports having “backs” for each file, which will also be tiled correspondingly (but mirrored) so that you can print out both the front and back then stick them together before cutting them up, or just print out the whole document double-sided. I’ve no idea if anyone other than me will find this useful, but it seems worth sharing, so it’s now on Hackage. In time I may expand the SVGutils library with more functionality — patches also welcome. I don’t expect it to end up as a library doing very complex SVG manipulations or rendering, but time will tell.

An example output page from tiling the card SVG files together.
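The placement arithmetic at the heart of the tiling can be sketched in a few lines (a simplified illustration rather than the actual SVGutils code, which also has to handle margins and the SVG output itself):

-- Positions of as many cw-by-ch cards as will fit on a pw-by-ph page,
-- working in whatever units the page size is given in.
positions :: (Double, Double) -> (Double, Double) -> [(Double, Double)]
positions (pw, ph) (cw, ch) =
  [ (x, y) | y <- takeWhile (\v -> v + ch <= ph) [0, ch ..]
           , x <- takeWhile (\v -> v + cw <= pw) [0, cw ..] ]

-- e.g. positions (210, 297) (63, 88) for poker-sized cards on A4 in millimetres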

The End Result

So I’ve ended up releasing an SVG library, and have also got my complete program for taking in a YAML file and SVG templates and producing a PDF of the resulting images. What I had actually set out to do was to design a card game. I did actually finish the first version, and tested it last weekend. It turns out that, like most things, game design is harder than it first appears. More work needed, when I can tear myself away from programming!


Beginners’ Programming Environments, and Haskell

July 15, 2010

Greenfoot is a beginner’s programming environment for ages 13 upwards. It is a programming framework and IDE for Java targeted towards games and simulations: anything where the main output is 2D graphics and sound. Here are some screenshots:

Greenfoot has several nice aspects (disclaimer: I’m a developer on the Greenfoot project). One is that the visual setting allows instant feedback on what your code is doing. You write a line of code to get your crab moving across the screen, and two clicks later your code is running and the crab is moving around. This sort of instant and obvious feedback is very useful when you are beginning programming. (Actually, it’s always useful — but we have to learn to do without as we progress to programming more complicated systems!) You can invoke methods on objects by right-clicking on the object and selecting the method from a pop-up menu, which is a little bit like a graphical version of a REPL. So if you are given a scenario with some pre-coded behaviour, you can interact with the objects before even getting to the code.

Another nice aspect of Greenfoot is that it embodies object-orientation well. Not everyone (and probably especially not Haskell programmers) is sold on OOP, but set that aside for this post. The object/class divide is immediately present: Wombat is a class, you can create many Wombat objects in the world that have the same fields and same behaviour, but different state. This is obvious from them running around separately: the concept comes across implicitly rather than having to be taught too explicitly, because of the design of the system.

A while ago people used to scoff at Java, declaring “public static void main(String[] args)” as a reason to stay away: too much is revealed upfront before the students can start programming. Incidentally, one could equally apply the same criticism to Haskell: the monadic type of the main function forces beginners to confront monads immediately. Greenfoot and other frameworks have shown this is easily dealt with. You will never see the main method in Greenfoot, and care is taken to steer beginners away from blank screens or from having to learn too much before they can get going. Writing Hello World starting with a blank file takes at least seven lines of Java code to print out a line of text, but adding one line of Java code in your first part-filled Greenfoot scenario makes a crab run across the screen. It isn’t the language that makes the difference for how hard it is to get started, it’s the framework.

The original designers of Greenfoot wanted to teach OOP to beginners, so they found a way to get across the key concepts of OOP easily, put it in a setting (interactive games) that can attract young beginners and produced the software (and supporting teaching material). Idea times execution.

What about Haskell?

This leads to my question: if a beginner’s environment can be produced for OOP and Java, surely it can be done for functional programming: specifically, Haskell. (The same question applies to the process-oriented programming model that CHP espouses — I believe the answer there is a clear yes, with a relatively close correspondence to the OOP design of Greenfoot, so it’s a bit less interesting to discuss.) Let’s toy with a potential design for a Greenfoot-like system in Haskell: that is, an interactive, graphics-based environment for beginner Haskell programmers. I’ll give our hypothetical system the name StartHaskell, just to make referring to it easier for the rest of the post.

First of all, what are the key concepts we would like people to learn in Haskell at the beginning? Function calls are central: without understanding functions and function calls, you can’t do anything in Haskell. Types are quite important, too — especially since they are bound to be the second class of error you encounter (your first will be a syntax error). These are very obvious: functions are as fundamental to functional programming as variables are to imperative programming, and types are very important in functional programming.

A key and distinguishing feature of Haskell is the notion of a monad. Different people have different opinions on monads. Is it best to introduce them via a simple monad like Maybe or State, or is it better to use the IO monad? A myriad of monad tutorials on the Internet provide all the different answers you can imagine. The IO monad is the one that does tend to push monads in the programmer’s path sooner rather than later: the main function has a monadic IO type. Should this be hidden by providing a main function that calls the programmer’s pure functions, or should it just be exposed straight away? Whether or not you expose monads is closely tied to the API design of StartHaskell.

API Designs

The notion of API in a programming environment like Greenfoot or StartHaskell actually swings both ways. There is the API that the StartHaskell system provides to the programmer, which will include things like functions dealing with images. But there is also the API that the programmer implements for the StartHaskell system — which is mainly what I want to discuss (and which has a heavy influence on the first kind, the system-provided API). In Greenfoot, every active thing in the world inherits from an Actor super-class, which has an act() method that the programmer implements to provide the functionality of that actor. These act() methods are called once each frame by the Greenfoot system: this is the API that the programmer implements. Since our hypothetical StartHaskell system will be functional rather than object-oriented, we will likely have one function to provide the functionality for the whole system (called once per frame) rather than bundling the functionality with individual objects. So let’s explore some different designs.

Option #1: Avoid monads entirely

We could focus entirely on pure functions. Conal Elliott’s picture work comes to my mind — and Simon Thompson’s Haskell book uses pure picture-based examples from the outset (using lists of lists of pixels, rather than Conal’s more abstract approach). You could conceive of a graphical framework that perhaps used a function from input-set to output-picture, i.e. step :: (KeyboardInput, MouseInput) -> Picture that would then be called each frame to produce an output picture given the input. Persistent state — that is, state that persists between frames, such as the position of the player in the world — is needed, so you might allow state to be returned as a result and passed to the next invocation, i.e. step :: (KeyboardInput, MouseInput) -> a -> (Picture, a). You could then provide various combinators for composing pictures that the programmer can use. (An approach based on Functional Reactive Programming, aka FRP, might also be possible — I don’t know enough FRP to talk about this, but feel free to write your own blog post explaining how FRP is the way to go!)
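As a sketch of how such a system might drive that pure step function, assuming stand-in types for the inputs and picture (none of this is a real API):

-- Stubs standing in for whatever the framework would really provide.
data KeyboardInput = NoKeys
data MouseInput = NoMouse
data Picture = Blank

readInputs :: IO (KeyboardInput, MouseInput)
readInputs = return (NoKeys, NoMouse)

drawPicture :: Picture -> IO ()
drawPicture _ = return ()

-- Call the pure step function once per frame, threading the state through.
runPure :: ((KeyboardInput, MouseInput) -> s -> (Picture, s)) -> s -> IO ()
runPure step = loop
  where loop s = do ins <- readInputs
                    let (pic, s') = step ins s
                    drawPicture pic
                    loop s'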

Option #2: Have a pure/monad split

Kodu is a new, very polished system for young programmers that uses a visual approach based around “when/do” rules. The “when” part specifies a condition that constrains when the rule is run, and the “do” aspect specifies the actions to take when the condition is satisfied. This easily translates into an API such as: rule :: (Environment -> Bool, SomeMonad ()). Different rules could be combined via combinators, or by changing that API into a list of rules. (One problem I have with lists of these rule-based interfaces is precedence: when one rule fires, does that preclude all other eligible rules firing? Or do they all fire together — and if so, in what order?) The system could then provide functions for accessing the environment data-type, and various monadic actions for affecting the system, such as drawing. Persistent state would probably be handled by making the SomeMonad a state monad.
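Here’s a minimal sketch of one answer to that precedence question, assuming a list-of-rules API: fire only the first rule whose condition holds (left bias):

-- Run the first rule whose condition is satisfied by the environment;
-- do nothing if no rule applies.
runRules :: Monad m => env -> [(env -> Bool, m ())] -> m ()
runRules env rules = case [ act | (cond, act) <- rules, cond env ] of
                       (act:_) -> act
                       []      -> return ()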

Option #3: Introduce monads immediately, using do-notation

A lot of people try to understand monads by looking at the type signatures and going from there. If you completely ignore how monads are implemented, and just look at do-notation, many monads (especially the IO monad) are very close to imperative programming, and thus this is one way to introduce them. Haskell is a very nice imperative language! In this case we might require the user to provide an API like step :: SomeMonad (), or to make persistent state more apparent, step :: a -> SomeMonad a. The system would then provide an API with monadic functions to ask about the world, and to affect the world.

Option #4: Don’t Call Me, I’ll Call You

In all my options above I have assumed that the StartHaskell system will call the programmer’s function each frame. This design is taken from Greenfoot, and has various benefits. One is that if the programmer’s function is empty, the surrounding system will still be rendering frames and receiving input. The interact function in Haskell has this same flavour: here’s a function that contains my desired behaviour, you handle the heavy lifting of doing the IO and call me at the appropriate point. One problem with being called by the system is state. A graphical interactive program such as those in Greenfoot (e.g. games) will tend to maintain state: the positions of things in the world and the state of the player. I mainly dealt with that above by allowing the programmer to return some state that will be passed back in the next frame. I’ve waved my hand slightly over the type of the state: each program will have a different type for the state, but that shouldn’t be hard to deal with in the environment.

An alternative approach is to take control: I’ll write my own code to drive the program and will call you at the appropriate points. There could be a single call into the framework, like nextFrame :: SomeMonad (), that does everything necessary: waits until the next frame is due, renders it, reads all the input devices and so on. So the most basic program would be forever nextFrame. Not too heavy to get started. Then you could pass state around yourself more easily instead of having to return it to the StartHaskell environment to be passed back to you later.
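A minimal sketch of this inverted design, with nextFrame as a stand-in for the real framework call:

import Control.Monad (forever)

-- Stand-in: wait for the next frame, render, read inputs, and so on.
nextFrame :: IO ()
nextFrame = return ()

-- The most basic program:
main :: IO ()
main = forever nextFrame

-- With control inverted, persistent state is just an ordinary argument
-- threaded through your own loop:
mainLoop :: Int -> IO ()
mainLoop score = do nextFrame
                    mainLoop (score + 1)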

API Summary

I don’t know which of the above options is best — there may be other better options that I haven’t considered. What I do know is that it is surprisingly important. What kind of API you ask the programmer to implement will have a big effect on the style of programming that they learn, and on the concepts that they will have to master. This is particularly true in languages where multiple styles of programming are supported, like Haskell or C++, whereas Java only really has one main style of programming.

Environment

Greenfoot is not just a software library/framework for graphics. It is a complete programming environment, including interactive invocation, debugging, buttons for compiling and running the program (with no build scripts in sight), and a source code editor. Having a complete environment helps beginners a lot. Whatever programming language you are teaching, you do not want beginners to learn the compiler command and flags on day one: that can come later, the point is to teach them programming! It is also important to offer a simple environment: giving Eclipse to someone who hasn’t programmed before is just unnecessarily complicating their learning process. A beginner’s environment should be simpler than that of a professional programmer.

Editor/Compiler

I think one major obstacle in Haskell is likely to be its error messages, particularly those for types. (Is every error message in Haskell one of: a syntax error, a type error, or an unknown identifier?) There has been work done before, for example in Helium, to try to improve this for Haskell. A better approach to teaching functional programming as a whole might be to use a language with a less powerful type system that allows for simpler errors, but I’m constraining myself to use Haskell for this post. So if you do have to teach Haskell, how can you best help with type errors?

Here is a mock-up of something that I always thought would be useful for type errors:

The idea being that those call-outs appear one-by-one as you move through the source. I like it because it visualises the process that I go through in my head when I encounter a type error, and the computer can doubtless do it faster than me. However, it is always worth bearing in mind that what is helpful for an expert (as I guess I am in Haskell) is not necessarily helpful for a beginner. You need some level of understanding of type inference to work with that representation, and if a beginner doesn’t have that understanding then the interface may only serve to confuse.

Thoughts on Teaching Programming

I am a beginner when it comes to teaching and educational research, but here’s a few thoughts I’ve picked up so far:

  • Consider the following as a guiding principle. There are only two reasons that things are hard to teach: what you are trying to teach needs to be improved (e.g. it needs more research), or you haven’t found a good way to present it. (Warning: this is an unfalsifiable statement!) Recursion is meant to be hard, but there are many who believe recursion (particularly structural recursion) is easy when taught right. Objects were meant to be hard, but I think objects-first works quite well at getting OO concepts across. Concurrency can be taught to beginners when it’s a good model (occam in combination with robotics has proved easy to teach to beginners). And so on and so on.
  • Beginners can benefit from tools that are specialised to beginners. It is helpful if both beginners and experts realise this! Beginners shouldn’t start with Eclipse because they’ve heard that’s what Real Developers(TM) use, and experts should not pour scorn on beginners’ tools because they are too simple or hide intricate detail — that’s the point!

None of what I have said here about teaching is new, and much of the advice here is already followed on many courses. Most teachers give out partially-complete code to avoid students beginning with a blank screen, and it’s well-known that the key to teaching is to find good examples and present the concepts in the right way: it’s much easier said than done! My main intention was to think about what an equivalent to Greenfoot for functional programming would be like. It may be that an interactive graphics-based setting is not a good fit for a first functional programming environment. Perhaps there are other alternative settings that could be used — but I think it should be something with a more widely appealing end result than calculating the Fibonacci numbers!


Haskell API Design: Scoped Resources

June 30, 2010

A common idiom in any programming language is that of acquiring a resource, using it, and then releasing (de-acquiring?) it. The resource may be a file, a network socket or all sorts of other things. The main challenges with supporting this programming idiom in an API are three-fold:

  1. Making sure that the programmer cannot use the resource before acquisition or after release.
  2. Making sure that the programmer remembers to release the resource once they are finished with it.
  3. Making sure that the resource is released if a problem occurs while it is acquired.

In Java you might implement this as a try/finally block (takes care of 3; not so good on 1 or 2). In C++ you might use the RAII pattern: acquiring in a constructor and releasing in a destructor (takes care of all three reasonably well). In Haskell, you can use things like the bracket function (written about in detail elsewhere) to write what we might call “with..” APIs. Some classic examples:

withMVar :: MVar a -> (a -> IO b) -> IO b
withFile :: FilePath -> IOMode -> (Handle -> IO r) -> IO r

These functions take as an argument the block to execute while the resource is acquired. If only this API is offered, it is impossible to forget to release the resource: release happens as soon as the block terminates. And if an exception occurs, that is also taken care of. There is a slight hole with respect to challenge 1 above: see if you can figure it out (we’ll return to it at the end of the post).
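For reference, a “with..” function is typically just a thin wrapper around bracket; here is a minimal sketch (withFile' is my own name, to avoid clashing with the real withFile):

import Control.Exception (bracket)
import System.IO (IOMode, Handle, openFile, hClose)

-- Acquire the handle, run the block, and close the handle on exit,
-- whether the block finishes normally or throws an exception.
withFile' :: FilePath -> IOMode -> (Handle -> IO r) -> IO r
withFile' path mode = bracket (openFile path mode) hClose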

In this post, I thought I’d explore a few uses of this “scoped resource” pattern in my CHP library which use Haskell’s types to indicate the acquisition status.

Sharing

CHP features communication channels that allow you to send values between different threads. For example, something of type Chanout Integer allows you to send integer values. CHP’s channels are designed to be used by a single reader and a single writer. Sometimes, however, you want multiple writers and/or multiple readers to be able to use the same channel. For example, you may want to have many worker processes communicating results to a single harvester process. This can be achieved using shared “any-to-one” channels, which can be constructed simply by adding a mutex to the sending channel end. This is another case of the scoped resource pattern: we want to allow claiming of the shared channel (acquisition), writing to the channel while it is claimed, and to make sure the channel is released afterwards.

We can define a Shared wrapper for any type that simply holds a mutex alongside it:

data Shared a = Shared Mutex a

When we export this type we will not export its constructor, meaning that users of our shared API will not have access to the mutex, and will not be able to take the channel out of its shared wrapper, except using our API function, namely:

claim :: Shared a -> (a -> CHP b) -> CHP b

(I perhaps should have called it withClaimed to fit in with the other Haskell APIs, but never mind.) This specialises in the case of our Chanout Integer into:

claim :: Shared (Chanout Integer) -> (Chanout Integer -> CHP b) -> CHP b

Note that Chanout is a channel that can be used for sending, with functions such as:

writeChannel :: Chanout a -> a -> CHP ()

This sharing mechanism uses an opaque Shared wrapper type (i.e. one that has hidden internals) to stop the channel being written to unless the claim function is used to acquire/unwrap it into a normal channel for the duration of the block.
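A usage sketch, assuming the CHP library’s top-level module exports the pieces above (worker is hypothetical code): many worker processes can claim the same shared channel to send results to a single harvester.

import Control.Concurrent.CHP

-- Claim the shared channel just long enough to send one result.
worker :: Shared (Chanout Integer) -> Integer -> CHP ()
worker out result = claim out (\c -> writeChannel c result)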

Enrolling

A barrier in CHP is a communication primitive that supports a single primitive operation: synchronising on the barrier. The rule for using them is straightforward: when you want to synchronise on a barrier, you must wait until all other processes that are members of the barrier also synchronise, at which point all the members synchronise and continue onwards. CHP supports dynamic membership of barriers: processes can enroll on the barrier (become members) and resign (leave the barrier) at any time. This is again an instance of the scoped resource pattern: enrollment is acquisition, you can synchronise while you are a member, and you should leave again at the end.

With barriers, we wrap them when you are enrolled on them, and only allow synchronisation on enrolled barriers:

data Enrolled a = Enrolled a

syncBarrier :: Enrolled Barrier -> CHP ()

class Enrollable a where
  enroll :: a -> (Enrolled a -> CHP b) -> CHP b

As before, we don’t expose the details of the Enrolled type, so you can’t just wrap something up yourself and pretend that you Enrolled. The enroll function does the acquisition and release (aka enrolling and resigning), and during the block in the middle you can synchronise on the enrolled barrier.
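A usage sketch along the same lines (again assuming the CHP imports; syncThrice is hypothetical code):

import Control.Monad (replicateM_)
import Control.Concurrent.CHP

-- Enroll on a barrier, synchronise three times while a member, and
-- resign automatically when the block ends.
syncThrice :: Barrier -> CHP ()
syncThrice b = enroll b (\eb -> replicateM_ 3 (syncBarrier eb))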

Different Ways to Wrap

We’ve seen two examples of scoped resources. In the Shared example, we used our wrapper to block access to the key functionality: we might call this negative wrapping. With negative wrapping you need to acquire the resource to remove the wrapper and use the object inside. In contrast, in the Enrolled example, we used our wrapper to enable access to the key functionality: we could correspondingly call this positive wrapping. With positive wrapping you need to acquire the resource to add the wrapper and use the object inside.

So which is better: positive or negative wrapping? With our negative wrapping, we were able to make use of the existing functions for channels once the value was unwrapped. Consider for a moment the possibility of adding a barrier type that had a fixed membership: no need to enroll. To fit with our existing API for synchronising on barriers, we would have to use the Enrolled Barrier type for our fixed-membership barrier, but that seems a little misleading. It might have been better to have a (negative) Unenrolled wrapper on barriers that is removed by enrolling, which would allow our API to allow synchronisation on a plain Barrier. Negative wrappers also allow for resources to be acquired in different fashions. You could have an Unenrolled (Chanout Integer) that can only be used by enrolling, which is orthogonal to a Shared (Chanout Integer); with positive wrappers this is not the case. So overall I think it is probably better to use negative wrappers.

(The other way to do it is to have totally separate types, e.g. FilePath and Handle in withFile, above. This is fine if you only have one specific scoped resource to deal with, but loses out on the chance to have a simple type-class like Enrollable, above, that encompasses the same operation (here: enroll) on multiple different types being wrapped.)

The Hole

I said earlier in the post that these scoped APIs have a problem (which exists for both positive and negative wrappers). Consider again the claim function:

claim :: Shared a -> (a -> CHP b) -> CHP b

Now consider this code:

do escapedChan <- claim sharedChan return
   writeChannel escapedChan "uh-oh!"

If you look at the function above, you’ll see it type-checks: types “a” and “b” are both Chanout String. What’s happened is that we have claimed the channel, and then passed it back out of the claim block. So now the mutex has been released, but we have the inner channel which we then use on the second line, outside the claim block. In general, I trust the users not to make this mistake. But there are ways to fix it. The technique known as lightweight monadic regions allows just that: paper here, subsequent library here. I’m going to try to convey the key idea here (as best I understand it).

The idea behind improving the scoped resource API is to parameterise the acquired resource by the monad in which they are allowed to be used. Here’s a simple example intended to illustrate the idea:

{-# LANGUAGE RankNTypes, EmptyDataDecls, KindSignatures #-}

data UnacquiredResource
makeUnacquiredResource :: IO UnacquiredResource

data AcquiredResource (m :: * -> *)
useResource :: Monad m => AcquiredResource m -> m ()

class MonadIO m where liftIO :: IO a -> m a
instance MonadIO IO where liftIO = id

withResource :: UnacquiredResource ->
  (forall m. (Monad m, MonadIO m) => AcquiredResource m -> m b)
  -> IO b

main :: IO ()
main = do u <- makeUnacquiredResource
          withResource u $ \r -> do useResource r
                                    liftIO $ putStrLn "middle"
                                    useResource r

What’s happening here is that the acquired resource has an extra type parameter that indicates which monad it can be used in. The type signature of useResource makes sure you can only use the resource in its corresponding monad. Our withResource function uses a forall to make sure that the block you pass must work with any appropriate monad it’s given: we can actually use the plain IO monad! In fact, we have ended up with two levels of protection in this API: the forall makes it hard to return the resource from the block (see the rank 2 types section of this post for more information), and even if the user returns the resource from the block, the type signature of useResource means it can’t be used afterwards outside the monad it originated from.

This is only a very basic version of lightweight regions: the actual implementation supports nesting of these scopes, and even releasing in a different order than the scoping would usually enforce (e.g. acquire A, acquire B, release A, release B). The only problem with lightweight regions is that they do create a formidable API, e.g. glance at the types and see if you can follow them. As often in Haskell, the cleverer API that makes extensive use of the type system ends up being quite hard to understand: CHP is also guilty of this in places. There is sometimes a trade-off between type-safety and usability — and I don’t think either should always win.


Haskell API Design: Having your own Monad atop IO

June 28, 2010

I was recently writing my PhD thesis (which is about CHP), and writing the conclusions got me thinking about weaknesses of CHP. (For newcomers: CHP is an imperative message-passing concurrency library for Haskell.) When you’ve been working on some software for several years, you want to believe it’s fantastic, wonderful, flawless stuff and that everyone should use it. But why might CHP not be the best thing ever? (Other suggestions are welcome, honest!) Some aspects of the API design might fit into this category, and in this post I’ll discuss one of the API choices behind CHP, namely: having a special CHP monad instead of using the IO monad that sits underneath. This may have wider relevance to other libraries that similarly build on top of IO.

I’m going to invert the flow of this post: later on I’ll explain why I created a CHP monad, but first I’ll discuss what problems it causes.

IO vs CHP

The CHP monad is a layer on top of the IO monad (think of it as a few monad transformers on top of IO). What difference does it make to the user to have the CHP monad instead of the IO monad that underlies it?

Lifting IO

CHP being a layer on top of IO means that you can “lift” IO actions into the CHP monad to use IO actions in your CHP processes. But this lifting cannot be done automatically: in Haskell, all actions in a monadic block must be in the same monad. So if you have an IO action like touchFile :: String -> IO () and the standard CHP function readChannel :: Chanin a -> CHP a, you cannot write:

do fileName <- readChannel fileNameChan
   touchFile fileName

The first action is CHP and the second is IO — Haskell can’t reconcile the two. There are two fixes for this. One is to use a package like monadIO, which defines several standard IO actions to have types like: touchFile :: MonadIO m => String -> m (), i.e. to be able to fit into any monad built on top of IO (some argue all IO-based functions should have a type like this). Then you can write the code above. But the monadIO package only supports a few core IO actions, and any time you use another IO-based library (like, say, network) you are stuck. The other fix is to use the liftIO_CHP :: IO a -> CHP a function that exists exactly for this purpose. That works, but it gets rather verbose:

do fileName <- readChannel fileNameChan
   liftIO_CHP $ touchFile fileName

If you have lots of IO and CHP actions interspersed, it clutters up the code and makes it less readable. What’s galling is that liftIO_CHP has no particular semantic effect; it’s only really needed (to my mind) to placate the type-checker.

This lifting problem is not specific to CHP, incidentally: it is a problem for all monad transformer libraries. Adding any monad transformer on top of IO requires lifting functions such as liftIO_CHP.
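To make the general shape concrete, here is a sketch of the usual class-based fix, with primed names to avoid clashing with the real MonadIO class:

-- A MonadIO-style class lets the same code run in IO or any monad
-- built on top of it, so the lifting happens once, in the instance.
class Monad m => MonadIO' m where
  liftIO' :: IO a -> m a

instance MonadIO' IO where
  liftIO' = id

-- The CHP instance would simply be:
-- instance MonadIO' CHP where liftIO' = liftIO_CHP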

Just a little CHP

Imagine that you have written a program that doesn’t use CHP. For all the imperative I/O parts it probably uses the ubiquitous IO monad. Now you want to add some concurrency. If you use MVars, you just need to create them and use them — writing and reading with MVars is done in the IO monad. The STM library has its own monad, but this is a monad in which you write a small transaction and then execute it in the IO monad, so the rest of your program can happily remain in the IO monad. But if you want to add a little CHP, suddenly you need to be in the CHP monad! You can’t just use runCHP :: CHP a -> IO (Maybe a) at the points where you want to use CHP. Firstly, the library is not built to be used like that (and the API doesn’t support being used like that). Secondly, if you have to tag all the CHP actions in the IO monad with runCHP, you’re not much better off than you were when you had to tag all the IO actions in the CHP monad with liftIO_CHP.

The “correct” way to add some CHP to your program is to adjust the whole program to be inside the CHP monad. You need to either change the types of your functions that used to be in the IO monad to be in the CHP monad (with varying amounts of liftIO_CHP) or you need to wrap them in CHP wrappers. That’s quite an overhead, especially if you would prefer to gradually introduce some CHP into your program. I don’t like forcing people to put their entire program inside CHP to get anywhere. (This reminds me a bit of emacs, where questions involving loading and quitting emacs to do other tasks often elicit the response: don’t quit emacs, just do everything from inside emacs.)

So what is the CHP monad for?

So far we’ve explored all the problems of the CHP monad, which may have convinced you that it was a bad design decision. Now I want to explain what the CHP monad does (over and above IO) and where it might be difficult to replace CHP with plain IO.

Poison

CHP has the notion of poison. Channels can be set into a poisoned state, and any attempt to use them thereafter results in a poison exception being thrown. The CHP monad incorporates this using a very standard error-monad(-transformer) mechanism. This could easily be implemented using the standard Haskell exception mechanisms that exist in the IO monad. I guess in general, an error monad transformer on top of IO is somewhat redundant.

Traces

A nice feature of CHP is that tracing support is built in, and can be turned on or off at run-time. This used to be done using a state-monad(-transformer), but the problem with implementing it that way is that if a process deadlocks, the state is lost. Since the main time you want the trace is when a process deadlocks, this was quite a limitation! So now it works by having a mutable (IORef/TVar) variable for recording the trace, with a reader-monad(-transformer) containing the address of that mutable variable along with an identifier for the current process.

At first, I thought to replicate this functionality I would need some sort of thread-local storage, which is discussed a little on an old Haskell wiki page. Now thinking about it, I could hold information on whether tracing is turned on in some sort of singleton variable (that is initialised once near the beginning of the program and read-only thereafter), and use ThreadId in lieu of a process identifier. Process identifiers in CHP at the moment have information on their ancestry, but I could easily record (during the runParallel function) a map from ThreadId to ancestry information in the trace.

Choice actions

The other functionality that the CHP monad supports is choosing between the leading action of two code blocks. That is, this code:

(readChannel c >>= writeChannel x) <|> (readChannel d >>= writeChannel y)

chooses between reading from channels “c” and “d”. Once a value arrives on one of those channels, the choice is made for definite. After that, the value is sent on either the channel x or the channel y respectively. This is slightly unusual for Haskell, but I’ve found it helps to make writing CHP programs a lot easier.
(I discussed this in more detail in the appendix of a recent draft paper.) There is no way to replicate the same functionality in the IO monad.

One alternative to this style of API is to use a style more like CML. In fact I’ve previously released a “sync” library that is a cut-down version of CHP with a CML-style API in the IO monad. So something similar to that may be acceptable; one possibility might be:

choose :: [forall a. (Event a, a -> IO b)] -> IO b

(Hoping I have my forall in the right place there! Is that technically an ImpredicativeType?) Which would mean I could write the above as:

choose [(readChannel c, writeChannel x), (readChannel d, writeChannel y)]

This could be done without the nasty types if I use a special bind-like operator (the CML library calls this exact same function wrap):

(|>=) :: Event a -> (a -> IO b) -> Event b

That, along with sync :: Event a -> IO a would allow me to write something like:

sync $ (readChannel c |>= writeChannel x) <|> (readChannel d |>= writeChannel y)

That might not be too terrible, although maybe a different combination of symbols for the bind-like operator would be better.

Summary

The more I think about (and write about) this issue, the more I begin to think that I would be better off removing the CHP monad. Slipping back to the IO monad would allow easier integration with existing Haskell code, and hence easier uptake of the library. But that’s quite a shift in the API — almost every single function in the CHP library would have to change its type. Is that a jump that can be made by shifting from CHP 2.x to 3.x, or is it perhaps better to start again with a new library and build from there? (This brings to mind several recent discussions on haskell-cafe and the resulting wiki page on whether to rename/re-version — although CHP doesn’t have as many users as FGL, so far as I know!) There may actually be a relatively easy way to transition with such a major change; defining:

type CHP a = IO a
liftIO_CHP = id
run_CHP p = (Just <$> p) `onPoisonTrap` return Nothing

This should probably allow most existing code to keep working without modification, except for choice.

All opinions are welcome on whether to dump the CHP monad for IO, especially if you use the library.


Transposing Sequential and Parallel Composition

June 22, 2010

Recently, I spent some time discussing traces with Marc Smith (we previously wrote two papers on the matter). Something that came up is the idea of representing a sequential composition of parallel processes as a parallel composition of sequential processes. Let me explain…

We, as programmers, often start off reasoning sequentially and then add on reasoning about parallel processes later — usually with much difficulty. Think of producing a control-flow graph for an imperative program, for example. If you have a sequential program, it is usually quite straightforward to form a flow graph for the program. When you try to add parallel processes, you need multiple flow graphs side by side. And they will interact, so really you need to link them together at the points of interaction. It quickly gets quite involved, and leaves you wishing that you didn’t have to involve concurrency.

What Marc and I found is the opposite: our reasoning was straightforward for parallel processes, but sequential composition was a headache. We were trying to represent the trace of a program as a tree, with each node representing either a parallel composition or a process without parallelism. In fact, due to the associativity and commutativity of parallel composition, this type of tree can always be flattened into an unstructured bag of parallel processes:

We then wanted to link processes in the tree that communicated with each other, to work out information about dependence and to look at neighbourhoods of connected processes. That was all fine, but things quickly get difficult when you need to consider sequential composition, specifically the sequential composition of parallel compositions.

Imagine that one of the tree nodes is first a program without parallel composition, and later becomes the parallel composition of two sub-processes, i.e. P = Q;(R || S). We don’t have a way to represent sequence in our neat parallel trees above, and this causes our representation to become awkward. We can’t have a simple ordered sequential list of trees, because if two processes have sequence information, they can proceed in either order, so really we would need a sequential lattice of trees, which is getting nasty.

At this point we wished we didn’t have to deal with sequential composition — at least, not sequential composition that can contain parallel composition. And via a small transformation, it turns out that we don’t. Consider the process: a; ((b; b’) || c); d, where everything in there is assumed to be a communication. We can refactor this into a set of parallel processes (P1 || P2 || P3):

P1 = a; par1start ; par1end ; d
P2 = par1start ; b; b’; par1end
P3 = par1start ; c ; par1end

The key insight is really this: if you are running multiple processes in parallel, it makes no difference whether you start them all at the outset of the program and synchronise with them when they are needed, or set them going at the point where they are needed. So even though the process doing c won’t happen until a has, we don’t wait to set that process off partway through the program. We set that process (P3 above) off straight away, then send it a message (par1start) when we want to really set it going. The par1end message makes sure that P1 doesn’t do anything more until P2 and P3 have completed: all three processes synchronise together on the par1start and par1end messages.

With this transformation, we never have any parallel composition inside sequential composition in our traces. All we have is one giant parallel composition of every process in the system, where each process is a sequential composition of events. There is no more nesting than that; we have flattened the process network into an unstructured bag of lists of events. In fact, we can now represent the processes as a graph (with links formed by the communications: the dotted lines in the diagram above). The semantics of the process are unchanged, and all the information about the ordering dependencies is still there if you want it (embodied in the unique parstart/parend communications: the italicised events in the graph above). This form of representation actually makes it really nice to track dependencies and relations throughout the process network, by following the communication links.
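To make that concrete, the example above transcribes directly into this flattened representation (just lists standing in for the bag of sequential processes):

-- The flattened form: a bag of processes, each a plain sequence of events.
type Event   = String
type Process = [Event]
type System  = [Process]

example :: System
example = [ ["a", "par1start", "par1end", "d"]    -- P1
          , ["par1start", "b", "b'", "par1end"]   -- P2
          , ["par1start", "c", "par1end"] ]       -- P3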

Addendum

You could even go one step further, and transform the system into a collection of parallel processes, where each process has at most three events in sequence, at most one of which is an event from the original program, i.e. the above would become:

P1A = a ; seq1
P1B = seq1; par1start ; seq2
P1C = seq2; par1end; seq3
P1D = seq3; d
P2A = par1start; b; seq4
P2B = seq4; b’ ; par1end
P3 = par1start ; c ; par1end

I’m not sure how useful this is, though.


Conjoined Events: Paper and Slides

June 16, 2010

I’m just back from a trip to America which began by attending Advances in Message Passing 2010 to present a paper on conjunction, something I’ve posted about on this blog before. I’ve put the paper and slides (with notes) up on my website, for those that are interested. It’s focused entirely on conjunction; CHP doesn’t really appear in the paper, but all the examples in the slides were in Haskell.

I think it’s accurate to say that most of the other researchers at the workshop were interested in a slightly different aspect of message-passing to me. For most of them, message-passing was about using things like MPI to communicate between workers and farmers for parallel speed-up. You can use CHP for that, but most of the examples on this blog (and the examples in the paper) are about the communications driving the programs, and being part of the program: communications as part of the computation process in concurrent programs, not just a way to facilitate parallel programming. I find there is an elegance in some of CHP’s process-oriented style that makes it more fun to use, but it may well be that their style of parallel programming is a more pragmatic use given the increasing availability of parallel processors.


Visualising Your Process Network

May 11, 2010

One advantage of CHP’s process-oriented style of programming is that it lends itself well to visualisation. I don’t make as much use of it on this site as I should, but some previous posts show diagrams with processes (as nodes) joined by channels (as edges). I already have some tracing support in CHP, and I had wondered if this could provide a quick and dirty way to graph a process network. This post shows the results of adding a little support to CHP for process visualisation, which I’m now debating whether or not to keep.

It turned out to be slightly better to add a new trace type (the process-graph trace) to CHP than to build a graph from the existing structural traces. So from the user’s point of view, all that would be added to the library was a new trace type; no changes to an existing program would be required other than switching tracing on and using the process-graph as the trace type. (I also added a labelling function for processes, that works with both structural and process-graph traces.)

Hooking the process-graph trace into the existing tracing infrastructure was so straightforward that it’s not really worth writing about. Unexpectedly, the most difficult part of making this functionality useful was finding something that can draw the process networks well. Process networks tend to be hierarchical, in that each process (node) may contain a sub-network, composed to any depth. These processes are connected via edges. My first inclination for graph visualisation was to reach for graphviz. So I wrote some code to spit out the graphviz file. Graphviz’s dot tool draws graphs arranged into ranked layers. On a simple benchmark called numbers, it does quite well:

For larger examples, or those with cycles, its rank-basis can be awkward. Here’s the dining philosophers, rendered in a slightly unusual manner:

Most of the other graphviz tools can’t deal with hierarchical graphs — only the fdp tool can. This seemed promising; a force-directed approach that supported sub-graphs. Here’s the numbers example from above:

Clumsy. And here’s the dining philosophers:

Eugh! This is a tiny bit better when you liberate it from its subgraph (at least, it might be halfway decent if not for the node labels):

But there is a clear solution for the dining philosophers (a star topology with the security guard at its centre), so if fdp makes a hash of this graph, I don’t hold out much hope for more complex process networks. I don’t want to write my own tool for this, but I may if I have to (I’d actually quite like an interactive viewer so that you can drag things around like you can in prefuse, and perhaps involve animation of the network). Any suggestions for beating fdp into shape, or for alternative tools would be most welcome!

Observed Occurrences Only

I said that this technique was quick and dirty. The main flaw (or is it a feature?) is that only the observationally determinable process network is graphed. If you pass a channel to two processes who never communicate on it, that link won’t show up in the graph. The graph is a trace: a record of what happened, not a blueprint of what might have happened. This is not an invalid approach, but I expect it is not what most people want: most would like to see their process network drawn as it is in their program design with all channels shown, not just those that were used. But that tool is more complicated, and will have to wait for another day. (You can extend this hack to do something more like that if you use an annotation and generic programming to capture all processes’ parameters and look through them for channel ends…)

Change and Movement Over Time

The fallacy of drawing the process network in a single image is that it is not static for the duration of the program. Processes can start and end, come and go. Not only that, but CHP channels can be freely communicated down other channels, meaning that programs can dynamically rewire their topology at run-time (I should write some posts about that at some point). So really a proper tool should include support for visualising the process network as it changes over time. But again, that will have to wait for another day.


Choose Anything: adding an Alternative instance

May 5, 2010

The alt function in CHP allows you to make a choice between performing several actions. You can choose, for example, between reading on one channel and writing on another. Agents can choose to move left or right, buffers can choose between reading or writing, servers can choose between reading from several clients. The alt function takes blocks of code and chooses between the leading actions — some people find this slightly surprising, but I think it simplifies code, and it is also entirely in line with the original CSP calculus.

The Problem with alt

The alt function has always had a slight ugliness though. Some leading actions do not support choice, for example: parallel compositions, lifted IO actions, or channel creation. That is, what should something like this code do:

alt [newChannel, liftIO (print 6), writeChannel c 6 <||> writeChannel d 7]

My solution to dealing with the items that didn’t support choice was to issue a run-time error if you tried to include them in a choice (as above). I don’t like run-time errors caused by bad code — but in this instance I couldn’t use the type system to pick up the error at compile-time without making all CHP code a lot more verbose. It’s always irked me to have this possibility of a run-time error.

One related wrinkle was that the skip process was somewhat magic. The skip process has type CHP () and does nothing, so it’s very tempting to define skip = return (). But SKIP has another property in CHP: it’s always ready in a choice. Choosing on a return statement on its own, e.g. alt [return (), writeChannel c 0], would trigger a run-time error, so skip had to be defined differently. (Note that alt [skip, return 0 >>= writeChannel c] is fine; CHP obeys the monad laws, so the leading action of the second item is actually writeChannel — the return melts away when an action follows it.)

There is one easy solution to getting rid of the run-time error in alt: make choosing between any CHP action valid. All the existing choice-supporting actions work as before. But all of the others: creating channels, enrolling, parallel compositions, poisoning, claiming shared channels, lifted IO actions and solitary return statements (and probably more I’ve forgotten) are now valid in a choice, and are considered always-ready. In CSP terms, they are all prefixed by the process SKIP. This change has been included in CHP 2.2.0.
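For example (a sketch with invented names), this is now valid, and the lifted IO branch is simply treated as always-ready, i.e. prefixed by SKIP:

readOrGiveUp :: Chanin Int -> CHP Int
readOrGiveUp c = alt
  [ readChannel c                                 -- ready only when a writer is
  , liftIO (putStrLn "no writer") >> return (-1)  -- always ready as of 2.2.0
  ]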

This doesn’t break any existing CHP code; it just makes more code technically valid (and simplifies some of the implementation a little). Now skip is simply defined as return (), and is not at all special. A little while back I explained the poll function, which I wrote at the time as:

poll :: CHP a -> CHP (Maybe a)
poll c = (Just <$> c) </> (skip >> return Nothing)

This can now be written even more straightforwardly:

poll :: CHP a -> CHP (Maybe a)
poll c = (Just <$> c) </> (return Nothing)
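As a usage sketch (drain is an invented name), poll makes it easy to grab everything currently pending on a channel without blocking:

drain :: Chanin a -> CHP [a]
drain c = do
  mx <- poll (readChannel c)
  case mx of
    Just x  -> fmap (x :) (drain c)
    Nothing -> return []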

Alternative

Since I wrote the first version of CHP I’ve come into contact with a lot more Haskell — including type-classes such as Applicative and Alternative. Alternative defines a choice operator, <|>, and some associated items: empty, the unit of choice, and the functions some and many, which produce one-or-more and zero-or-more items respectively. The default definition of many is revealing:

many v = some v <|> pure []

This suggests several things. Firstly, a right-biased choice operator is going to be pretty useless here. It’s not clear whether Alternative instances should display left-bias or no bias (the documentation only states that the operator should be associative, which any bias would satisfy), but left-bias looks most useful, and is probably most expected. Secondly, it is expected that an item constructed with pure should be a valid choice: this wasn’t the case previously (pure = return, of course), but it is with the changes described above.

Our instance is trivially constructed (and is included in CHP 2.2.0 onwards):

instance Alternative CHP where
  empty = stop
  (<|>) = (</>)

It seems silly not to provide it. This does leave me with three choice operators in CHP: <->, </> and <|>, which all do exactly the same thing. My original intention in providing the first two was that the first wasn’t guaranteed to have bias but the middle one was guaranteed to have some bias — they actually share the same definition, though. If I were designing CHP all over again I might dump both my choice operators and just use Alternative. That would break pretty much all existing CHP code, though, so it’s too big a change to make suddenly. I may deprecate my choice operators in favour of Alternative, though, and maybe remove them in the future.

Our earlier poll operation is actually already featured in the Control.Applicative module, based on the Alternative type-class, and is called optional. As a final note, I went looking for an equivalent to CHP’s alt/priAlt for Alternative. It turns out to be the asum function from Data.Foldable (thanks, Hoogle!):

asum = foldr (<|>) empty

I might stick with the alt/priAlt synonyms for that one though, for clarity (and stop for empty, too, due to its CSP origin). Besides which, asum is defined in terms of <|> — in fact, in CHP the operator </> is defined in terms of priAlt rather than the other way around, because priAlt is more efficient when you have lots of guards.
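As a final usage sketch (firstOf is an invented name), waiting on whichever of several channels is ready first can be written with either spelling:

import Data.Foldable (asum)

firstOf :: [Chanin a] -> CHP a
firstOf cs = asum (map readChannel cs)  -- identical to: alt (map readChannel cs)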


New chp-spec library released

I recently published a multi-part guide on how to provide a mirror implementation of (most of the API of) CHP that spits out a CSP specification of the program rather than executing it. You can go back and read those earlier posts for the full details.

This is the final post I plan to make on the matter (at least for a while)! This post is to announce the release of the work in the above guide as a new chp-spec-1.0.0 library. If you aren’t interested in formal specification, and just want to use CHP, you can ignore the chp-spec library. (And if you don’t use CHP, you can definitely ignore it!) It’s only useful if you want to try to generate a CSP model of your CHP program — and even then, the technique comes with a lot of caveats. But I’d rather release it than leave it sitting on my hard drive.

Instructions

To use the chp-spec library in place of chp, you must do the following:

  • Change the library dependency from chp to chp-spec when you want to generate a specification. You may be able, in cabal, to have both dependencies listed all the time. Currently there is no chp-spec-plus; if you want this, let me know.
  • Change the imports in all your modules from Control.Concurrent.CHP* to Control.Concurrent.CHPSpec* when you want to generate a specification; this can be automated somewhat using preprocessor macros if you want to switch between the two regularly (see the sketch after this list).
  • Make sure that all uses of the forever function in your process are changed to CHP’s foreverP (this is included in CHP 2.2.0 onwards so you can make this change permanent, even when using plain chp).
  • Add the process annotation to all directly recursive processes (again, this is included in CHP 2.2.0 onwards, so you can make this change permanent, even when using plain CHP).
  • When you want to generate a specification, change the top-level function from runCHP to specify.
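As mentioned in the second point, here is a sketch of switching the imports with the C preprocessor — the CHP_SPEC macro is an invented name that you would define yourself (e.g. by passing -DCHP_SPEC via a cabal flag):

{-# LANGUAGE CPP #-}
#ifdef CHP_SPEC
import Control.Concurrent.CHPSpec
#else
import Control.Concurrent.CHP
#endif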

Specify

The specify function mentioned in the last point is the top-level call that generates and post-processes the CSP specification. The generation has mostly been covered in previous posts, and the generation part of specify is trivial: it annotates the top-level process to be called main, then runs the state transformer to get the altered state with all the processes recorded in it (among other things), which it passes to specify', the function that does all the post-processing:

specify :: Bool -> CHP () -> IO String
specify showIO main
  = specify' showIO <$> execStateT (finSpecT $ process "main" main) emptyCHPState

That showIO parameter will be explained later in the post. The code for specify' is long-winded but not very exciting. It can be split into two aspects:

specify' :: Bool -> CHPState -> String
specify' showIO st = render specs ++ declMain
  where ...

We’ll start with the first part of that concatenation. The render call pretty-prints the specifications. The specs item is generated from the recorded process in two steps: first, it pulls up any Repeated processes to be top-level recursive processes — modern CSP has no iteration operator — and second, it transforms all the process-ids and communications into strings by uniquely numbering the processes and channel values.

The second part of the concatenation, declMain, declares a main process, named “main”. We know, because it’s the first process that is encountered, that our top-level process will end up named “main_1”, so at its most basic we end up with the line “main = main_1”, which wraps up all the numbering in a simpler name. But it can have different behaviour depending on that mysterious showIO parameter.

CSP’s semantics revolve around traces, which are records of the visible actions that a process takes. That visible qualifier is significant: CSP allows events to be hidden. This is particularly relevant in the case of our dummy IO events. If we leave them as-is, they will show up in the trace of the process even though they are not really a part of its behaviour — which does allow us to read traces produced by the FDR tool more easily, as we can see which IO events occurred. So keeping them visible is useful for “debugging” any deadlocks, but the extra events can ruin refinement checks, because the process appears to be performing events it shouldn’t. If you’re doing a deadlock check, pass True to keep the IO events visible; otherwise pass False to hide them. This is done by the following code for declMain, in the where clause of specify':

declMain :: String
declMain = "\nmain = main_1 " ++ (if showIO then "" else
  "\\ {" ++ intercalate "," (map getName $ Set.toList $ chpIOEvents st) ++ "}"
  ) ++ "\n"
  where
    getName = fromJust . flip Map.lookup events

If showIO is False, the main process will hide (backslash is the hide operator in CSP) all the IO events, to stop them being visible to the outside, and in traces of the main process.
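To illustrate (the event names here are invented), the generated line then looks something like:

main = main_1 \ {io_event_1,io_event_2,io_event_3}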


Bumper Release May Day

Today is the May Day holiday here, which has given me some time to put together lots of library releases. Today sees the release of chp-2.2.0, chp-plus-1.3.0, chp-mtl-1.0.0 and chp-transformers-1.0.0 (and chp-spec-1.0.0). The main change to the chp and chp-plus libraries is that the dependency on the mtl library has been removed. Thanks to all who commented on my original article about getting rid of mtl — Anders Kaseorg pointed out that if everyone defines their own MonadIO instance for CHP this could cause chaos, so I’ve adopted Dino Morelli’s suggestion and created the chp-mtl and chp-transformers libraries to contain the appropriate instances. If you want a MonadIO instance, use those; if you don’t, just use the new liftIO_CHP :: IO a -> CHP a function, which does the same thing. I’ll be discussing the only other major change to CHP — an instance of Alternative for the CHP monad — in a post later in the week.
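As a quick sketch of that last point (hello is an invented name), code that previously went through the mtl MonadIO instance, such as liftIO (putStrLn "hello"), becomes:

hello :: CHP ()
hello = liftIO_CHP (putStrLn "hello")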
