WTF is currying?

As a developer who spends most of my time at work writing JavaScript or TypeScript, I’ve heard references to ‘currying’ all over the place: in blogs, in Stack Overflow answers, from my colleagues, and more recently in quiz-style technical interviews.

Whenever it gets brought up, I do the same thing.

I google the term ‘currying’ and figure out that it basically means taking a function that accepts multiple arguments and converting it into a function that takes a single argument and returns another function, which takes the next single argument, and so on.

That is to say:

const unCurried = (arg1, arg2, arg3) => {
    console.log(`First argument is ${arg1}, second is ${arg2}, third is ${arg3}`);
};

// Called with all three arguments at once
unCurried('this', 'that', 'another');

const curried = (arg1) => {
    return (arg2) => {
        return (arg3) => {
            console.log(`First argument is ${arg1}, second is ${arg2}, third is ${arg3}`);
        };
    };
};

// Called as a chain of single-argument functions
curried('this')('that')('another');

At which point I say to myself

‘Oh cool yes I remember this. Neat idea. Not sure when I’d use it, but at least I understand it. What a clever JavaScript developer I am’

So maybe, given that currying is a relatively simple concept to implement in code, a better question might be

WTF is the point of currying?

If I stumble onto a concept in mathematics or programming that I don’t understand, I generally try to figure out where it came from, what problem it was/is trying to solve and/or which real world relationship it is trying to model.

So first of all, why is it called ‘currying’? Is there some significance to the name that will make its intention clear?

Currying… maybe it means to preserve something or to add ‘flavor’ to a function in some way?

Nope!!!

Turns out it’s because a logician called Haskell Curry was heavily involved in developing the idea. So that’s a dead end.

It also looks like Haskell Curry was developing his ideas based on the earlier ideas of Gottlob Frege (died in 1925) and Moses Schönfinkel (died in 1942), which suggests that maybe the ideas behind currying did not originally come about in response to a programming problem…

In fact, currying originated as a mathematical technique for transforming maths style functions, rather than programming style functions.

Mathematical functions and programming functions are related, but slightly different.

A mathematical function basically maps elements of one set to elements of another set. That is, for every input value to the function, there is exactly one corresponding output value.

Functions in programming also take inputs, but they can do whatever they like with those inputs (or arguments), and they are under no obligation to return a single specific output value.
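
To make that distinction concrete, here’s a rough sketch (my own illustration, not a formal definition):

// A 'mathematical style' function: exactly one output for every input,
// and nothing else happens
const square = (x: number): number => x * x;

// A 'programming style' function: it takes an input, but also performs
// a side effect (logging), which has no mathematical equivalent
const logAndSquare = (x: number): number => {
    console.log(`squaring ${x}`);
    return x * x;
};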

Currying, as it is defined, seems to relate only to the inputs to functions (arguments), and so is presumably equally applicable to both mathematical and programming style functions. OK cool. What’s currying again?

‘Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument.’ – Wikipedia

OK, yup, I remember. And why do mathematicians curry?

(Based on Wikipedia’s summary)

Some mathematical techniques can only be performed on functions that accept single arguments, but there are lots of examples where relationships that can be modelled as functions need to take multiple inputs.

So currying allows you to use mathematical techniques that only work on functions with single arguments, while still tackling problems that involve multiple inputs.
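
Translated into code, that mechanical transformation looks something like this (the curry2 helper below is my own sketch, not a standard library function):

// A minimal helper that curries any two-argument function
const curry2 = <A, B, R>(fn: (a: A, b: B) => R) =>
    (a: A) =>
        (b: B): R =>
            fn(a, b);

const add = (x: number, y: number): number => x + y;
const addCurried = curry2(add);

add(2, 3);        // 5
addCurried(2)(3); // 5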

This is all well and good, and we’ve already seen how we can implement currying in JavaScript, but… I still don’t get what the practical benefit of it is in a programming sense, especially in JavaScript!

Eureka! (A case of accidental currying)

I basically got to the point above, and then went back to ignoring currying as I didn’t really get what practical application it had for me, as a predominantly front end JavaScript developer.

A few months later, I found myself in the privileged position of having a project at work that was entirely my own. I got to write an automated testing solution in TypeScript, using Cypress, and was basically given free rein to organise the code and repository as I pleased.

I’ve been gradually moving towards (and playing with) a more functional style of programming, and one of the things I found myself wanting to do was write functions that create functions with different ‘flavors’:

const pricePrinter = (currencySymbol) => {
    return (priceInNumbers) => {
        console.log(`${currencySymbol}${priceInNumbers}`);
    };
};

// Each call to pricePrinter produces a printer preloaded with one currency
const dollarPricePrinter = pricePrinter('$');

const poundPricePrinter = pricePrinter('£');

dollarPricePrinter(15); // $15

poundPricePrinter(15); // £15

Ignoring the wildly impractical nature of the example above, this pattern is quite useful. It allows you to compose functions neatly and semantically, with little ‘function factories’, and to work with code at a comfortable level of abstraction.

I was very happy with this pattern and proudly showed my colleague what I’d discovered.

His response was to glance over briefly and go ‘Oh yeah that’s currying. Cool’, and then go back to work.

So there you go. One practical application of currying in JavaScript is to compose functions into little function factories that are passed the context they will be operating in as an argument (the currency symbol in the example above). Kind of like constructors for functions. Neat.
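
For a slightly more realistic (but still entirely hypothetical) sketch of the same pattern, imagine baking the base URL into an API client up front (the URLs below are made up):

// A hypothetical function factory: the base URL is supplied once,
// and each resulting client only needs a path
const makeApiClient = (baseUrl: string) => (path: string): Promise<unknown> =>
    fetch(`${baseUrl}${path}`).then((response) => response.json());

const stagingApi = makeApiClient('https://staging.example.com');
const productionApi = makeApiClient('https://api.example.com');

stagingApi('/users');    // GET https://staging.example.com/users
productionApi('/users'); // GET https://api.example.com/users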

Disclaimer

I may be wrong. I am happy to be proven wrong. I am even happier for someone to give me additional examples of currying in the wild.

Why TF should I stub dependencies for an E2E test?

E2E (or ‘end to end’) tests, when written for web applications, basically allow you to write scripts that interact with your application via a browser, and then assert that it responds correctly.

Common libraries that allow you to do this for Angular apps are Selenium/Protractor and Cypress (you should check out Cypress in particular).

The browser can be ‘headless’, meaning that you can run it from a command line, without a GUI (see headless Chrome for an example). This means that these tests can be run by your build server to support a continuous integration/continuous deployment (CI/CD) pipeline.

You can then develop scripts to click about, and test your most important user flows quickly and consistently, on your real application, in a very close approximation of how your users would click around. This is an attractive alternative to manually running these tests yourself by spinning up your application and clicking about. Humans are good at lots of things, but computers trump them every time when it comes to performing repeated actions consistently.
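
As a taste of what such a script looks like, here is a minimal Cypress sketch (the URL and selectors are made up for illustration):

describe('checkout flow', () => {
    it('lets a user add an item to the basket', () => {
        cy.visit('https://staging.example.com'); // hypothetical test environment
        cy.get('[data-test="add-to-basket"]').click();
        cy.contains('1 item in your basket'); // assert that the UI responded
    });
});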

E2E tests are an example of black box testing.

If the tests fail, your application is broken; if they pass, your application may still be broken, but it is not terribly broken (as we have tests ensuring that key user flows are still accessible).

This process relies on the assumption that these tests are very robust and not ‘flaky’ (that is, they only fail when something is actually broken, not just randomly).

There are a few things that can cause E2E tests to randomly fail, but by far the most common in my experience is some form of network latency. Basically any time your code is interacting with something which can take an indeterminate amount of time to finish, you are going to experience some pain when writing automated tests for it.

The most recent (and most evil) version of this I have experienced is an application that is plugged into a blockchain and involves waiting for transactions to be mined. This can take anywhere from a few seconds, to forever.

Unless you are a lunatic, if your application involves any sort of transactions that touch user data (purchases, bets, swiping right), your automated tests will already be running against some sort of test environment, meaning that when you mutate data, you are mutating a copy rather than production data. This comes with its own little pain points.

When talking to a truly stateful dependency (like an API plugged into a database), you will have to account for the state of the database when performing any actions in your browser test.

As one example, if you are logging in, you might have to first check whether the user has an account, then log in if they do, or create an account if not. This adds complexity to your tests and makes them harder to maintain/potentially flakier.

Another option is to control the test environment, and write (even more) scripts to populate the test database with clean data that can be relied on to power your tests (make sure that a specific user is present for example).

This removes complexity from the browser tests, but adds it to the scripts responsible for the setup and teardown of data. It also requires that your test data be kept synchronised with any schema changes to your production database. Otherwise you run the risk of having E2E tests which pass even while your real application is broken, because they are testing against an out-of-date version of the backend.
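
A seed script along those lines might look something like this (the endpoints and user details are entirely made up):

// Hypothetical setup script: reset the test database to a known state
// via a test-only admin endpoint before the E2E run
const seedTestData = async (): Promise<void> => {
    await fetch('https://staging.example.com/test-admin/reset', { method: 'POST' });
    await fetch('https://staging.example.com/test-admin/users', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ email: 'e2e-user@example.com', password: 'known-password' }),
    });
};

seedTestData();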

(To clarify, these are all valid options, and there are solutions to many of the problems above)

However, while this approach solves the problem of accidentally munging user data, a backend plugged into a staging database can still be forking slow, as again you are at the mercy of network speeds and many other things out of your control.

Taken together, this is why I favour ‘stubbing’ the slow bit of your application (in many cases a backend service). This involves providing a very simple approximation of your Slow Ass Service (SAS), which follows the same API but returns predetermined fixture data without doing any slow networking.
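
To sketch what a stub might look like (using Express purely for illustration; the route and fixture data are made up):

import express from 'express';

const app = express();

// Canned fixture data standing in for whatever the real SAS would return
const fixtureUsers = [{ id: 1, name: 'Test User' }];

app.get('/api/users', (_req, res) => {
    res.json(fixtureUsers); // no database, no slow networking: instant response
});

app.listen(3000, () => console.log('Stub SAS listening on port 3000'));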

There is one major caveat to this approach.

It only works in a meaningful way if you have some sort of contract between the SAS and your frontend.

That is to say, your frontend and the SAS (Slow Ass Service) should both honour a shared contract describing how the API should behave and what data types it should accept and return (for a REST API, Swagger/OpenAPI provides a good specification for describing API behaviour). Because this shared contract is written in a way that can be parsed, automatic validations can be run on both the frontend and the backend code to ensure that they meet the requirements.
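
One lightweight way to express such a contract is a set of shared TypeScript definitions (a sketch with made-up names; in practice you might generate these types from an OpenAPI spec):

// contract.ts: shared by the frontend, the real SAS, and any stub
export interface User {
    id: number;
    name: string;
}

export interface UserApi {
    getUsers(): Promise<User[]>;
}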

If you were reading closely, you will have noticed that I mentioned data ‘types’, so most likely you will require some sort of typed language on both the frontend and within the SAS (I would recommend TypeScript 🙂).

With this in place, as long as your stub implementation of the SAS also honours this contract (which can be checked in with the SAS source code), then you can feel happy(ish) that your stub represents a realistic interaction with the real SAS. If the contract changes, and your code doesn’t, it should not compile, and your build should fail (which we want!).
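
Continuing the hypothetical contract above, a stub that honours it might look like this; if UserApi changes and the stub doesn’t, the build fails:

import { User, UserApi } from './contract'; // the hypothetical shared contract above

export class StubUserApi implements UserApi {
    async getUsers(): Promise<User[]> {
        // Fixture data returned instantly, with no slow networking
        return [{ id: 1, name: 'Test User' }];
    }
}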

Now that you have this in place, your E2E tests can be far less flaky (or not flaky at all), and as an added benefit they will run super fast on your build server, meaning that the dev workflow is slightly nicer too.

Again, there are gazillions of different ways of approaching this problem, and obviously you need to figure out what it is that you care about.

In my experience though, this presents a good compromise and has meant that I (finally) have managed to get E2E tests running as part of CI/CD without wasting huge amounts of time and causing headaches.

In a subsequent post I demo how to actually get this working in an Angular application that depends on a strongly typed third party library, which happens to be slow as hell 🙂