Comparing ClojureScript and Elm

Last year I spent some time experimenting with Elm, a Haskell-like language that compiles to JavaScript. After a bit of a break from web development, I started to look at Elm again. I was curious to see how it compared to ClojureScript, the other compile-to-JavaScript language that I have used.

As I have taken the summer off and have a bit of time on my hands, I thought an interesting way to compare the two would be to write the same program in each, so I wrote a simulation of the K-means clustering algorithm in both Elm and ClojureScript.


You can play around with the Elm version here and the ClojureScript version here.

Getting started

It is quick to get started with Elm as it has a Windows/Mac/NPM installer that installs everything you need. ClojureScript is more work, as you need to install Java first and then have a coffee while Maven downloads the universe.

Next, you need to pick your libraries; in this scenario, one for rendering SVG. This is straightforward in Elm as there is a standard SVG library, and the other core Elm libraries needed are included by default. For ClojureScript, I went with reagent (which seems to be the most popular of the React wrapper libraries). Happily, reagent can render SVG tags directly without requiring an SVG library. I also ended up using core.async to simulate a ticker (more on that below).

Development experience

The big change moving from ClojureScript to Elm is getting your head around using the strong types and the compiler. It takes a bit of a mental shift to get used to working with the (friendly) compiler versus the more dynamic REPL-and-experimentation workflow typical of ClojureScript.

One handy shortcut I used was elm-make --warn Main.elm, which suggests the missing type signatures as you go.

Happily, once you get your code to compile it does tend to work as expected in the browser. With ClojureScript, on the other hand, I did hit a pretty hard-to-grok JavaScript error which took a while to figure out.

Elm does come with a REPL, but I don’t find myself using it frequently. The ClojureScript REPL works pretty well, although I don’t use it quite as intensively as when doing pure Clojure development. Setting up the ClojureScript REPL can be a bit tricky with vim.

Elm has an integrated time-travelling debugger that you can use in the browser by appending ?Debug to the URL: localhost:8000/Main.elm?Debug. A similar tool, re-frisk, is available for ClojureScript.


For both implementations, I first defined a model (in Elm) and an app-state (in ClojureScript). In Elm this is codified by the Elm Architecture with its Model-Update-View pattern. Having a global app-state is a common convention in ClojureScript React-style development, and the re-frame framework is available if you want an Elm Architecture-like app structure for ClojureScript. The Elm Architecture can feel a little verbose for a small project but pays off on bigger projects as an aid to understandability and structure.

One initial hurdle was how to generate a set of random points to cluster. In ClojureScript I did this by calling rand-int.

As Elm is a pure functional language, generating random numbers is a little less conventional. First, you define a generator specifying the type of random data you need. You then ask the Elm runtime (via a command, a managed side effect) to generate the random data for you.

To animate the progress of the algorithm, I needed a tick event every second to advance the algorithm to its next iteration. In Elm there is a Time library with a straightforward API, and a standard mechanism, subscriptions, for configuring this.

In ClojureScript, this can be done by combining a call to the native JavaScript setTimeout function with a core.async channel. This is trivial to do but needs some wider knowledge of the Clojure ecosystem.
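The tick-and-advance pattern itself is language-agnostic: a timer pushes a tick onto a channel, and the consumer advances one iteration per tick. Here is a rough, illustrative analogue in Python's asyncio (standing in for setTimeout plus a core.async channel; none of this is the post's actual code):

```python
import asyncio

async def ticker(queue, interval, ticks):
    # Push a tick onto the queue every `interval` seconds,
    # playing the role of setTimeout feeding a core.async channel.
    for _ in range(ticks):
        await asyncio.sleep(interval)
        await queue.put("tick")

async def run():
    queue = asyncio.Queue()
    asyncio.create_task(ticker(queue, interval=0.01, ticks=3))
    steps = []
    for i in range(3):
        await queue.get()    # block until the next tick arrives
        steps.append(i + 1)  # advance the simulation one iteration
    return steps

print(asyncio.run(run()))  # [1, 2, 3]
```

In the real apps the interval is one second and each tick drives one iteration of the clustering algorithm.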

One area where Clojure does excel is its fantastic standard library. I enjoyed implementing the actual algorithm using the clojure.core functions.
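To make the comparison concrete, here is a minimal, illustrative sketch of one k-means step in Python (deliberately neither the Elm nor the ClojureScript source; representing points and centroids as plain (x, y) tuples is my own choice):

```python
def assign(points, centroids):
    # Group each point under its nearest centroid (by squared distance).
    clusters = {c: [] for c in centroids}
    for p in points:
        nearest = min(centroids,
                      key=lambda c: (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2)
        clusters[nearest].append(p)
    return clusters

def recompute(clusters):
    # Move each centroid to the mean of the points assigned to it.
    return [(sum(p[0] for p in ps) / len(ps), sum(p[1] for p in ps) / len(ps))
            for ps in clusters.values() if ps]

points = [(0, 0), (1, 1), (10, 10), (11, 11)]
centroids = [(0.0, 0.0), (10.0, 10.0)]
for _ in range(5):  # one assign/recompute step per animation tick
    centroids = recompute(assign(points, centroids))
print(sorted(centroids))  # [(0.5, 0.5), (10.5, 10.5)]
```

In the animated versions, each tick runs one such assign/recompute step until the centroids stop moving.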

What to use?

Elm’s strong point is that it tries to be easy to use in its tooling and developer experience. Clojure makes different trade-offs (see Rich Hickey’s Simple Made Easy talk), and because ClojureScript evolved from Clojure, the ClojureScript stack is inherently more complex. Don’t get me wrong, the ClojureScript team have done an amazing job and are continually improving ClojureScript. Once you are familiar with the ClojureScript ecosystem you can be very productive; as a newbie or occasional user, it can be a bit daunting. ClojureScript does not have a prescriptive architecture like Elm. This makes it slower to get started with but allows you more flexibility to evolve an architecture that better fits the problem.

So what to use? As usual, this depends on social and external factors, not only the technology. If I were on a team with existing Clojure skills and systems, ClojureScript would be an obvious choice. On a team interested in functional programming but without any existing Clojure skills and systems, Elm may be a better fit. There are also situations where React/Redux and an ES6/TypeScript transpiler is a good choice, depending on the team’s skill set, the existing libraries you may want to leverage, and the problem you are solving.


Quickly create pages for your Elm projects

I’ve been learning Elm recently and writing a few small web apps to learn it. The code is on GitHub, but it’s nice to publish the actual page for people to play with. This is pretty easy to do with GitHub Pages and elm-make.

Just go to your repository and type the following commands to get a published page:

git checkout --orphan gh-pages
elm-make Main.elm --output=index.html
git add index.html
git commit -m "Creating github page"
git push --set-upstream origin gh-pages

If you are interacting with JavaScript code via ports you can create your own index.html page (copying a generated index.html file is a quick way to do this), run elm-make to build the elm.js file and then reference the generated elm.js script from your index.html file.

git checkout --orphan gh-pages
git add index.html
elm-make Main.elm --output=elm.js # this will output elm.js
git add elm.js
git commit -m "Creating github page"
git push --set-upstream origin gh-pages

I followed this approach to create the pages for my own Elm projects.

Adding Clojure unit tests results to Bamboo Continuous Integration server

We have started to use Bamboo for building our Clojure project. It requires a little bit of tweaking to your project and to the Bamboo project setup to fully integrate unit test results.

First, add the test2junit plugin to your Leiningen project.clj file:

:plugins [[test2junit "1.1.0"]]

Then configure Bamboo to run the test2junit task:

Configuring the tests to run


Finally, you need to set up a JUnit Parser task to pick up the results.

Parsing the test results


By default this path should work:


Rails 4 FlashHash Upgrade Gotcha

We upgraded an application from Rails 3 to Rails 4 this week and came across an interesting gotcha which I haven’t seen documented anywhere.

As we rolled out the Rails 4 version of the app we split the traffic between the upgraded Rails 4 app and our existing Rails 3 app. Some of the requests to Rails 3 app failed with the following exception:

Production NoMethodError: undefined method `sweep' for {"discard"=>[], "flashes"=>{"just_switched"=>true}}:Hash


[GEM_ROOT]/gems/actionpack-3.2.19/lib/action_dispatch/middleware/flash.rb, line 239

It turns out that Rails 4 serialises the flash to the cookie differently from Rails 3. When Rails 3 attempts to deserialise it, you get the above error. This error can also occur between Rails 3 and Sinatra apps.

To get around the problem during the migration period, we patched the Hash class. This meant, of course, that the flash methods wouldn’t work, but as they are only used for minor presentation tweaks that seemed a good compromise.

How local are party candidates for the Lambeth council elections?

Getting annoyed by timewasters

During the Brixton Hill by-election campaign a few years ago I attended a hustings organised by the Brixton Blog. I got pretty annoyed by how candidates from some parties used the event (and the campaign) to parrot their parties’ national and international policies.

The Socialist Party called for full communism as the solution to pretty much every local issue. For most issues, UKIP demanded we leave the EU. When the UKIP candidate did address local issues, her positions (such as promoting private car use) seemed very strange for a Brixton local. It turned out she lived in Clapham. I wrote a blog post about it at the time in which I plotted the candidates’ addresses on a map relative to the ward and council boundaries.

Ranking parties on how local their candidates are

This time around the whole borough has council elections, and I was wondering if a similar trend exists. Visualising each ward doesn’t seem as useful as there are 289 candidates standing, so instead I ranked the parties using a simple scheme: the percentage of candidates who live in the same ward they are standing in.

Parties ranked by % of candidates who live in the ward in which they are standing

Rank  Party                                      % living in ward  Local candidates  Total candidates
1     The Pirate Party                           100%              1                 1
2     The Green Party                            67%               42                62
3     Independent                                50%               2                 4
4     Liberal Democrats                          49%               31                63
5     Labour Party                               47%               30                63
6     Conservative Party                         46%               29                63
7     UK Independence Party (UKIP)               35%               6                 17
8     Trade Unionists and Socialists Coalition   30%               4                 13
9     The Socialist Party (GB)                   0%                0                 3

The Pirate Party only having a single candidate obviously helps them come first. I suspect the Green Party’s high ranking reflects their decentralised local nature (disclaimer: I’m going to vote for them). At the bottom of the list is the Socialist Party (GB). None of the Socialist Party’s candidates live locally (in fact none of them live in Lambeth at all; they hail from Kingston, Bromley and Richmond).

UKIP and the Trade Unionists and Socialists Coalition are predictably towards the bottom of the table. The big three traditional parties sit in the middle, possibly reflecting their tendency to ‘parachute’ candidates into wards.
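The ranking scheme itself boils down to a truncated percentage. A quick Python sketch (the party figures come from the table above; the helper function is my own, not part of the original analysis):

```python
def local_share(local_candidates, total_candidates):
    # Percentage of a party's candidates living in the ward they stand in,
    # truncated to a whole percent as in the table above.
    return 100 * local_candidates // total_candidates

print(local_share(42, 62))  # The Green Party: 67
print(local_share(30, 63))  # Labour Party: 47
print(local_share(0, 3))    # The Socialist Party (GB): 0
```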

The gory technical details

There should be open data on who is standing in elections, right? Well, if there is, I couldn’t find it. Instead I downloaded the PDFs that list the candidates from the Lambeth website. I used the command line version of Tabula to extract the data, then geocoded it using MySociety’s fantastic MapIt API. All the data and code is on GitHub.

Does it matter?

Obviously this isn’t a perfect way to rank parties. Although candidates stand in a particular ward, they are elected to serve on the council for the entire borough. Many will live in neighbouring wards of the borough, have a reasonable idea of the local issues, and be capable of doing a decent job. The results do reflect my experience at hustings though: candidates who don’t live locally are often just parroting party policy and don’t address local concerns.

How about a real debate on climate change on the Today show?

I got so irritated by Nigel Lawson’s appearance on the BBC Radio 4’s Today show this morning that I sent an email to the show.


Your selection of Nigel Lawson in this morning’s discussion about the link between climate change and the ongoing flooding was awful. You gave Lawson a platform to repeat a selection of discredited and misleading arguments regarding climate change. You did not mention the fact that he is intimately associated with the coal industry, and you let him use the show to promote its agenda unchallenged. Having Lawson on the show does not lead to a balanced or informative discussion for the public.
The actual debate on climate should be between those who believe our 2 degree commitments are sufficient to avoid ‘danger’ and those who point to the absolutely terrifying risks implied by the science. I would urge you to have a climate scientist such as Kevin Anderson on the show to have a real debate on this vital issue.
Jason Neylon
If you feel similarly annoyed by it, you can contact the show too.

How to do ‘SEO’ for the website of a new offline business

A friend of a friend asked for some advice on how to do ‘SEO’ for the website of a new (offline) business he was launching.

Here are the broad pointers I gave to him:

There are two types of search engine marketing: paid (PPC) and organic (SEO). They are very different, but both are ways to get people to see your website in Google. For both, the first step is to identify what terms or phrases people who search for services like yours are using. You can do this with Google’s Keyword Planner.

Paid search (PPC)

Google has a service called AdWords for paying for ads on Google. This can be expensive, but it has the huge advantage over other forms of advertising that it can be measured very accurately. Essentially, you associate an ad with keywords and Google then shows it to people based on how much you bid and several other factors. There is a significant amount of work in refining your list of keywords, and it is also important to limit it geographically and to exclude unrelated keywords. The big advantage is that you get immediate results and customers to your website (which is very helpful for validating how the website performs).

Organic search (SEO)

SEO has three elements: ensuring Google can read your site correctly, ensuring that your content is relevant to your audience, and getting others to link to your website. Making your site readable to Google is basically a matter of following good web development standards. Ensuring your content is relevant means writing quality content related to what people search for. Getting others to link to your website involves adding the website to any online directories for your sector, writing guest blog posts on other people’s websites, and so on.

There is a cost in doing organic search properly: you have to spend time writing relevant content. Frequently businesses outsource the content writing to external people. Many web businesses also pay to have links to their website added to other people’s websites; this is a bit of a black art and I would avoid it for now.

Organic search is a long-term investment. It will take you months or years to get anywhere in the search results, as many of the incumbents are well established and have good content.

Which to do first

To start, I would recommend doing some PPC to learn which keywords are effective, then consider tailoring the content on your website, or adding new content targeting those keywords, to attract organic traffic.

Detecting web applications that aren’t converting with Riemann

We have been playing around with Riemann at uSwitch to do some of our monitoring. One of the core metrics we track is the number of customers who are currently converting on our website. If this number drops to zero it usually indicates something is broken. Each time a customer converts an event is sent to our Riemann server.

I added a stream to our Riemann config to count recent conversions, create a new summary event, and notify us whenever that count drops to zero.

    (where (and (service "app") (tagged "conversion"))
           (moving-time-window 30
             (smap (fn [events]
                     (let [conversion-count (count events)]
                       {:service "app-conversions"
                        :time (unix-time)
                        :metric conversion-count
                        :state (if (> conversion-count 0) "ok" "warning")
                        :description "Conversions in the last 30 seconds"
                        :ttl 30}))
                   (changed :state
                            (fn [event]
                              (warn "Conversion state has changed to: " event))))))

Sadly, this didn’t work as I expected! If there are no conversions then no events are sent to Riemann and the moving-time-window code block is not executed. This is discussed further here.

You can, however, work around this by using expired events. Events in Riemann have a time to live (TTL) associated with them. If an updated event is not received within the TTL, the event is expired, indicating in this case that no further conversions have taken place since the event was last fired. You can add a stream to catch this event expiration and notify whoever is interested.

Just like above we add a stream to sum recent conversions and create a summary event:

   (where (and (service "app") (tagged "conversion"))
           (smap (fn [events]
                   (let [conversion-count (count events)]
                     {:service "app-conversions"
                      :time (unix-time)
                      :metric conversion-count
                      :state "ok"
                      :description "Conversions in the last 30 seconds"
                      :ttl 30}))
                 (changed :state
                          (fn [event]
                            (info "notify that conversions are happening again" event)))))

Then we add a stream that waits for the summary event to expire:

    (expired
      (where (service "app-conversions")
             (with {:state "warning"
                    :metric 0
                    :ttl 30
                    :description "No conversions in the last 30 seconds"}
                   (fn [event]
                     (index event)
                     (warn "Notify about no conversions" event)))))

The above config is on GitHub.

Playing with a RaspberryPi NoIR camera

We got some Raspberry Pis at work to power our monitoring dashboards. For a bit of fun we also got some peripherals, including the new NoIR infrared camera.

You need a source of infrared light to see anything so I used a remote control I had lying around. Here is some video of me waving at the camera.

And shining the remote control at the camera.

To record the video I used the raspivid command with the night exposure option.

raspivid -t 20000 -rot 180 -ex night -o hand.h264

The omxplayer command is handy to playback the results.

omxplayer hand.h264

Tip: List non replication operations when using db.currentOp() in the MongoDb shell

We run ad-hoc queries against our MongoDB hidden slaves pretty frequently at work. Some of these queries are long running, so it is nice to get a filtered view of what operations are running without all the replication operations that are constantly present.

The mongo shell is a JavaScript interpreter, so that is easy to do:

db.currentOp()["inprog"].filter(function(x) { return !x.desc.match(/repl.*/); });