WTF is a walking skeleton (iterative design/development)

skeleton cartoon next to bare bones website

I’m going to argue that when developing software, it can be a good idea to be skeletal.

What does that mean?

It means starting with an ugly or clunky version of your piece of software.

It’s not enough for it to be ugly, raw and unsuitable for public consumption; to be skeletal, it must also be functional.

That is to say, the skeleton must walk.

Your job, once you have this skeletal form, is to go back and attach all of the things that make a skeleton less horrifying (skin and the like).

I’ll give an example and some background to hopefully clarify this idea further, as I think it is an important one that often gets overlooked.

It’s time to learn React (again)

I have a hard time convincing myself to learn stuff for the sake of it. I like to learn things that increase my understanding of the world, make my life easier, or otherwise help me to solve a problem that I have.

For a long time, this has pushed ‘learning React’ to the bottom of the pile of things I want to do with my spare time.

I am (I think) a solid JavaScript developer, and I actually used React professionally for a brief stint years ago. At that point I had no problem picking up the framework, and I was productive pretty quickly. I like React. I also like Angular.

That said, I am definitely not as productive using React as I would be using Angular, and React has moved on since I last used it, so I tend to stick with what I know.

In an ideal world I’d probably ignore React. I don’t have religious feelings about software tools or frameworks, and I don’t like learning things for the sake of it. I like building things.

I am also currently working as a contractor. This means I need to be very good at the thing I do, in order for people to justify paying a higher rate for my services.

However… the demand for Angular developers in London, where I live, is pretty low at the moment. It seems to be mostly large, slow financial organisations that are using it, and those are not typically qualities I look for in a workplace.

React on the other hand is booming.

So, TL;DR it’s time to get familiar with React, even though I don’t want to.

Rob’s super simple tips for how to learn technologies really fast

  • Read the quickstart
  • Start building things
  • Get stuck
  • Read the relevant part of the manual
  • Get unstuck
  • Repeat

These are the steps I try to follow when learning a new technology. I basically like to get enough knowledge to get moving, then jump in at the deep end and start building stuff.

Once I (inevitably) get stuck, I will go back and actually read the documentation, filling in the blanks in my knowledge, and answering all the many questions I have generated by trying to build things unsuccessfully.

I find that this keeps my learning tight and focussed (and interesting), and means that I don’t spend hours reading about theoretical stuff which I might not even need yet.

So, in order to carry out my learning steps, I needed something to build.

I settled on a video game curation tool, which allows users to sign in, and record their top 5 favourite games in a nice list view, along with some text saying why.

This data can then be used to show the top 5 games per platform (Switch, Xbox, PS4, PC etc.), determined by how often they appear on users’ lists.

I also wanted the ability to see other users’ top 5 lists, via shareable links.

I don’t think this is a website that is going to make me my millions, but it is complex enough to allow me to use React to actually build something.

OK, so what does this have to do with Skeletons?

Well, when I build things, I like to make skeletons first.

So in this case, a walking skeleton of my application should be able to do all of the things I outlined above.

In order for it to be a true functional skeleton, that can be iteratively improved upon, it needs to be as close to the final product as possible, and solid enough to support iterative improvements.

So it can’t be one big blob of spaghetti code which only works on my machine on Tuesdays.

I am building a web application which will persist user preferences, so it has to be:

  • deployed to the web
  • connected to a database
  • able to authenticate a user

Regardless of what I said above about diving straight in, you shouldn’t just dive straight in.

Firstly, figure out, without getting bogged down in what tech to use, what it is you want your stuff to do.

For a web app, you probably want to have some idea about the data structures/entities you are likely to use and what they will represent, and the user flow through the front end.

In this case, I knew I wanted to do something with user generated lists of favourite games, and that I wanted to store and update them.

This meant that a cheap way to get started was to come up with some data structures, and play around with them. So that’s what I did:

 * We should optimise the data structures around the most common/massive/searched entities.
 * I think the reviews are likely to be the chunkiest data set as each user can make muchos reviews.
 * Platforms doesn't fucking matter as there are so few
 * Games are also potentially quite large and need to be searchable
 * Reviews need to be searchable/filterable by: [ platform, game-name, username, tags, star-rating ]
 * Games need to be searchable/filterable by: [ platform, name, tags, star-rating ]

const platforms = {
  "uuid-222": { name: "PS4" },
  "uuid-223": { name: "switch" },
  "uuid-224": { name: "X Box One" },
  "uuid-225": { name: "PC" },
};

// shape: { [platformId]: { [gameId]: { [userId]: "user comment" } } }
const includedBy = {
  "uuid-222": {
    "uuid-312": {
      robt1019: "I loved throwing coconut at ppls hedz",
    },
  },
};

let rankedPs4Games = {};

Object.keys(includedBy["uuid-222"]).forEach((gameId) => {
  const includedCount = Object.keys(includedBy["uuid-222"][gameId]).length;
  if (rankedPs4Games[includedCount]) {
    rankedPs4Games[includedCount].push(gameId);
  } else {
    rankedPs4Games[includedCount] = [gameId];
  }
});

const games = {
  "uuid-313": {
    name: "Zelda Breath of the Wild",
    platforms: ["uuid-223"],
  },
  "uuid-312": {
    name: "Hitman 2",
    platforms: ["uuid-222", "uuid-223", "uuid-224", "uuid-225"],
  },
};

const users = {
  robt1019: {
    name: "Rob Taylor",
    top5: [{ gameId: "uuid-312", platformId: "uuid-222" }],
  },
  didarina: {
    name: "Didar Ekmekci",
    top5: [{ gameId: "uuid-313", platformId: "uuid-223" }],
  },
};

 * Use includedBy count for aggregate views. Only viewable by platform. No aggregate view.

It is important to note that these data structures have since turned out to be slightly wrong for what I want, and I have changed them… iteratively. But playing with them in this form before writing any code allowed me to iron out some nasty kinks, and ensured that I wasn’t trying to do anything that would be truly horrible from a data perspective later on.

I also spent a good hour scribbling in a notepad, with some terrible drawings of different screens, to mentally go through what a user would have to do to navigate the site.

At all times we’re trying to make a solid, rough and ready skeleton that will stand up on its own, not a beautifully formed fleshy ankle that is incapable of working with any other body parts!

Be a Scientist

The main benefit of this approach is that you are continually gathering extremely useful information, and you can very quickly prove or disprove your hypotheses about how the application should be structured and how it should perform.

By emphasising getting a fully fledged application up and running, deployed to the web and with all of the key functionality present, you are forced to spend your time wisely, and you take away a lot of the risk of working with a new set of tools.

What did I learn/produce in four days thanks to the Skeleton:

  • React can be deployed with one command to the web using Heroku and a community build pack.
  • How to deploy a React application to Heroku at a custom domain.
  • How to do client side routing in a modern React application.
  • The basics of React hooks for local state management.
  • How to protect specific endpoints on an API using Auth0 with Express.
  • The IGDB (Internet Games Database) is free to use for hobby projects, and is really powerful.
  • How to set up collections on Postman to make testing various APIs nice and easy.
  • A full set of skeleton React components, ready for filling in with functionality.
  • A more thought-through Entity Relationship model for the different entities, and a production-ready, managed MongoDB database.

If you want to see just how Skeletal the first iteration is, see here:

I didn’t get the user search view working, so that is the first thing to do this week.

After that, my product is functionally complete, so I’ll probably start on layout/styling and maybe some automated testing.

I can already breathe pretty easy though, as I know that nothing I’m trying to do is impossible, as I have already done it.

Anything from here on out is improvements, rather than core functionality.



Adventures in Node town (hacking Slack’s standard export with Node.js)

One benefit of changing jobs quite a lot is that I have built up an increasingly wide network of people that I like, who I have worked with previously.

A really nice thing about staying in contact with these people is that we are able to help each other out, sharing skills, jobs, jokes etc.

Recently a designer I used to work with asked whether somebody would be able to help with writing a script to process the exported contents of his ‘question of the week’ Slack channel, which by default gets spat out as a folder filled with JSON files, keyed by date:

My response was rapid and decisive:

Data munging and a chance to use my favourite JavaScript runtime Node.js. Sign me up!!!

First, WTF is data munging

Data munging, or wrangling, is the process of taking raw data in one form, and mapping it to another, more useful form (for whatever analysis you’re doing).

Personally, I find data wrangling/munging to be pretty enjoyable.

So, as London is currently practicing social distancing because of covid-19, and I have nothing better going on, I decided to spend my Saturday applying my amateur data munging skills to Slack’s data export files.

Steps for data munging

1) Figure out the structure of the data you are investigating. If it is not structured, you are going to have trouble telling a computer how to read it. This is your chance to be a detective. What are the rules of your data? How can you exploit them to categorise your data differently?

2) Import the data into a program, using a language and runtime which allows you to manipulate it in ways which are useful.

3) Do some stuff to the data to transform it into a format that is useful to you. Use programming to do this, you programming whizz you.

4) Output the newly manipulated data into a place where it can be further processed, or analysed.

In my case, the input data was in a series of JSON files, keyed by date (see below), and the output I ideally wanted was another JSON file with an array of questions, along with all of the responses to those questions.

Shiny tools!!!

Given that the data was in a JSON file, and I am primarily a JavaScript developer, I thought Node.js would be a good choice of tool. Why?

  • It has loads of methods for interacting with file systems in an OS agnostic way.

  • I already have some experience with it.

  • It’s lightweight and I can get a script up and hacked together and running quickly. I once had to use C# to do some heavy JSON parsing and mapping and it was a big clunky Object Oriented nightmare. Granted I’m sure I was doing lots of things wrong but it was a huge ball-ache.

  • From Wikipedia, I know that ‘Node.js is an open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a web browser. Node.js lets developers use JavaScript to write command line tools‘.

  • JavaScript all of the things.

So, Node.js is pretty much it for tools…

So, on to the data detective work. I knew I very likely needed to do a few things:

1) Tell the program where my import files are.

2) Gather all the data together, from all the different files, and organise it by date.

3) Identify all the questions.

4) Identify answers, and link them to the relevant question.

The first one was the easiest, so I started there:

Tell the program where my import files are

const filePath = `./${process.argv[2]}`;

// check the raw argument: the template string above is always truthy
if (!process.argv[2]) {
  console.error(
    "You must provide a path to the slack export folder! (unzipped)"
  );
  process.exit(1);
} else {
  console.log(
    `Let's have a look at \n${filePath}\nshall we.\nTry and find some tasty questions of the week...`
  );
}

To run my program, I will have to tell it where the folder I’m importing from is. To do that I will type this into a terminal:

node questions-of-the-week.js Triangles\ Slack\ export\ Jan\ 11\ 2017\ -\ Apr\ 3\ 2020

In this tasty little snippet, questions-of-the-week.js is the name of my script, and Triangles\ Slack\ export\ Jan\ 11\ 2017\ -\ Apr\ 3\ 2020 is the path to the folder I’m importing from.

Those weird-looking backslashes are ‘escape characters’, which are needed to type spaces in file names etc. when entering them on the command line on Unix systems. The terminal emulator I use autocompletes this stuff, and I think most do now, so hopefully you won’t have to worry too much about it.

This is also the reason that many programmers habitually name files with-hyphens-or_underscores_in_them.

But basically this command is saying:

‘Use node to run the program “questions-of-the-week.js”, and pass it this filename as an argument’

What are we to do with that file name though?

Node comes with a global object called process which has a bunch of useful data and methods on it.

This means that in any Node program you can always do certain things, such as investigating arguments passed into the program, and terminating the program.

In the code sample above, we do both of those things.

For clarity, process.argv, is an array of command line arguments passed to the program. In the case of the command we put into our terminal, it looks like this:

[
  '/path/to/node',                      // location of the node binary (placeholder path)
  '/path/to/questions-of-the-week.js',  // location of our program file (placeholder path)
  'Triangles Slack export Jan 11 2017 - Apr 3 2020'
]

As you can see, the first two elements of the array are the location of the node binary, and the location of the file that contains our program. These will be present any time you run a node program in this way.

The third element of the array is the filename that we passed in, and in our program we stick it in a variable called filePath.



Gather all the data together, from all the different files, and organise it by date

const fs = require("fs");
const path = require("path");

const slackExportFolders = fs.readdirSync(filePath);

const questionOfTheWeek = slackExportFolders.find(
  (f) => f === "question-of-the-week"
);

if (!questionOfTheWeek) {
  console.error("could not find a question-of-the-week folder");
  process.exit(1);
}

const jsons = fs.readdirSync(path.join(filePath, questionOfTheWeek));

let entries = [];

jsons.forEach((file) => {
  const jsonLocation = path.join(__dirname, filePath, questionOfTheWeek, file);
  entries = [
    ...require(jsonLocation).map((i) => ({ ...i, date: file.slice(0, -5) })),
  ];
});

The Slack channel I am looking at munging is the ‘question of the week’ channel.

When this is exported, it gets exported to a ‘question-of-the-week’ folder.

So first of all I check that there is a question-of-the-week folder. If there is not, I exit the program, and log an error to the console.

If the program can find it, then it gets to work gathering all of the data together.

Here we start to see the benefit of using Node.js with JSON. We are writing JavaScript to parse a file format which originally came from JavaScript!

This means that pulling all of this data together is as simple as getting a list of file names with fs.readdirSync.

This gets all of the names of the files under the question-of-the-week folder in an array, which is, you know, pretty useful.

Once we have those file names, we iterate through them using forEach, and pull all of the data from each file into a big array called entries. We can use require to do this, which is very cool. Again, this is because Node and JavaScript like JSON, they like it very much.

We know we are likely to need the date that the slack data is associated with, but it is in the file name, not in the data itself.

To solve this, we take the file name and put it into a ‘date’ field, which we insert into each data item using map.

The file.slice stuff is just taking a file name like 2018-06-29.json and chopping the end off it, so it becomes 2018-06-29, without the .json bit.
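In isolation, the slice looks like this (negative indices count back from the end of the string):

```javascript
// Chop the last five characters (".json") off a file name
const fileName = "2018-06-29.json";
const date = fileName.slice(0, -5);

console.log(date); // → "2018-06-29"
```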

Coooool we done got some slack data by date. Munging step 2 complete.

Identify all the questions

This is trickier. We need our detective hats for this bit.

I won’t lie, I fucked around with this a lot, and I re-learnt something that I have learned previously, which is that it is really hard to take data that has been created by messy, illogical humans, and devise rules to figure out what is what.

What I ended up with is this. The process of figuring it out involved lots of trial and error, and I know for a fact that it misses a bunch of questions and answers. However, it probably finds 80% to 90% of the data that is needed. This would take a human a long time to do, so it is better than nothing. The remaining 10% to 20% would need to be mapped manually somehow.

const questions = entries
  .filter((e) => e.topic && e.topic.toLowerCase().includes("qotw"))
  .map((q) => ({
    question: q.text,
    reactions: q.reactions ? => : [],
  }));

‘qotw’ is ‘question of the week’ by the way, in case you missed it.

I find them by looking for Slack data entries that have a topic including ‘qotw’. I then map these entries so they just include the text and date, and I also pull in the reactions (thumbs up, emojis etc.) for the lols.

Now we have an array of questions with information about when they were asked. We’re getting somewhere.

Identify answers, and link them to the relevant question

const questionsWithAnswers =, key) => {
  // Find the date of this question and the next one.
  // We use these to figure out which messages were sent after
  // a question was asked, and before the next one
  const questionDate = new Date(;
  const nextQuestionDate = questions[key + 1]
    ? new Date(questions[key + 1].date)
    : new Date();

  return {
    responses: entries
      .filter(
        (e) =>
          new Date( > questionDate &&
          new Date( < nextQuestionDate &&
          e.type === "message"
      .map((r) => ({
        answer: r.text,
        user: r.user_profile ? : undefined,
  };
});

// put them in a file. the null, 4 bit basically pretty prints the whole thing.
  "questions-with-answers.json",
  JSON.stringify(questionsWithAnswers, null, 4)

  'questions with answers (hopefully...) saved to "questions-with-answers.json"'

This bit is a bit more complex… but it’s not doing anything non-standard from a JavaScript point of view.

Basically just search all the entries for messages which fall after a question being asked, and before the next one, and put them in an array of answers, with the user profile and the message text. Then save to a new JSON file and pretty print it.

We are done! We now have a new JSON file, with an array of questions, and all the answers to each question.

It is worth noting that this approach is far from optimal from an ‘algorithmic’ point of view, as I am repeatedly checking the entire data set.

Thing is, I don’t give a shit, because my dataset is small, and the program runs instantly as it is.

If it started to choke and that became a problem I would obviously improve this, but until that point, this code is simpler to understand and maintain.

More efficient algorithms normally mean nastier code for humans, and until it’s needed, as a nice developer you should prioritise humans over computers.

(sorry, computers)

What did we learn?

Slack’s data is quite nicely structured, and is very parseable.

JavaScript is great for manipulating JSON data thanks to its plethora of array manipulation utilities.

You can write a script to automatically categorise Slack export data and put it into a semi-useful state in fewer than 80 lines of code, including witty console output and formatting to a nice narrow width.

This confirms my suspicion that for quick and dirty data munging, Node.js is a serious contender.

If paired with TypeScript and some nice types for your data models, it could be even nicer.

Here is the result of my labours

WTF is two’s complement

Two’s complement is a way of representing signed (positive and negative) integers using bits.

The leftmost bit is reserved for dictating the sign of the number, 0 being positive, 1 being negative.

The magnitude of the integer is a bit weirder.

0001 is two’s complement binary for the decimal 1

1111 is two’s complement binary for the decimal -1

How in the name of all that is good and sensible did we get from 0001 to 1111 you might ask?

Repeat after me:

‘Flip the bits and add one’

0001 with all its bits flipped around is 1110, then we add 0001 to it and we get 1111.

Why on earth would you do something so wilfully perverse as this? Two reasons:


  • A number system with two zeros is a pain in the arse.
  • We should really simplify the lives of those long suffering hardware engineers, who have to take these representations of numbers, and make the computers do maths with them.

Two’s complement solves both of these problems.

To understand why, let’s try and reinvent signed 4-bit integers.


Our first attempt will be as follows:

The leftmost bit is used to tell us the sign (1 for negative, 0 for positive), and the remaining three bits are used to represent the magnitude of the integer.

Looks simple enough. 1 is 0001, -1 is 1001.

Let’s kick our new system in the balls a bit.

What does 0000 represent? Easy, zero.

What about 1000? Easy, negative zero.

AHHHHhhhhhhh…. that’s a problem. I bet those hardware people aren’t going to like that. They’re always complaining about stuff like this.

I think we are wasting a slot in our binary integer, as we’ve got a redundant representation of zero, and our hardware will have to somehow account for this positive and negative zero when implementing any maths functionality. Sigh. OK.

After checking, the hardware engineers inform us that yes, this is a pain and no, they won’t have it (fussy fussy hardware engineers).

Also, it turns out that this representation is somewhat fiddly to do subtraction with.

0010 + 0001 in our naive signed binary is 2 + 1 in decimal. Which is 3 in decimal, or 0011 in binary.

0010 + 1001 in our naive signed binary is 2 - 1 in decimal. Which is 1 in decimal, and should be 0001 in binary.

However, if we add our binary representations in the simplest way we get 1011, which in our system is -3. Balls.

I assume there are ways around this, but those damned hardware people won’t do it because reasons. Grrr.

So to recap the ‘sign and magnitude’ system: two representations of zero, and painful to do basic maths with.

Fine we’ll throw that out.


Okieeee so what if we flip all the bits when we turn something negative? So 1 is 0001, -1 is 1110.

Now we can try our previous subtraction problem and just add them in the dumbest way possible and we’ll get the right answer I think? Let’s try it:

2-1 is represented as 0010 + 1110, which gives us 0000 with a one carried over.

Which is… still not right.

‘You should take the carry and stick it on the end’

What’s that Gwyneth?

‘Take. The carry. Stick it on the end’

Fine. At this point what do I have to lose.

0000 with the carry stuck on the end is 0001. Which is correct! Well done Gwyneth.

Still. I bet they won’t have that double zero stuff. Probably they’re going to kick up a fuss about this moving carries around as well.


Tabatha, what are you guys are chanting in the corner there? By the candles and the… wait is that blood?

‘Flip the bits and add one’

‘Flip the bits and add one’

‘Flip the bits and add one’

‘Flip the bits and add one’

‘Flip the bits and add one’

Not totally happy with this blood situation, but that’s not a bad idea Tabatha!

Let’s give it a go:

2 - 1 = 1

In two’s complement:

0010 + 1111 = 0001 with a one carried

The chanting has changed…

‘Discard the carry’

‘Discard the carry’

‘Discard the carry’

‘Discard the carry’

‘Discard the carry’

You know what I think you’re right, we don’t need it. We’ve got the right answer. Finally 2 - 1 = 1 without any messy carries!
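That subtraction is easy to replay in JavaScript: do the dumbest possible addition, then mask the result down to 4 bits, which is exactly ‘discard the carry’:

```javascript
// 2 + (-1) in 4-bit two's complement: 0010 + 1111, then discard the carry.
// Masking with & 0b1111 keeps only the lowest 4 bits.
const result = (0b0010 + 0b1111) & 0b1111;

console.log(result); // → 1
```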

Also, I think I’ve just spotted something even neater. This gets rid of our duplicate zero right?

Two’s complement of 0000:

flip the bits:

1111

add one:

0000 with a carry of 1

discard the carry:

0000
The circle is complete.

YES!!! How do you like that hardware people!?!
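The whole mantra fits in one line of JavaScript if you want to check it yourself (the helper name is my own):

```javascript
// 'Flip the bits and add one', masked down to 4 bits
const twosComplement4 = (n) => (~n + 1) & 0b1111;

console.log(twosComplement4(0b0001).toString(2).padStart(4, "0")); // → "1111", i.e. -1
console.log(twosComplement4(0b0000).toString(2).padStart(4, "0")); // → "0000", no duplicate zero
```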

Recursion – an epic tale of one family of functions’ desperate struggle for survival

I am trying to develop my intuition for recursion.

To this end I spent some time trying to solve mazes with recursive functions.

Along the way I discovered a hidden and dramatic universe, where collections of function calls band together, sacrificing whole family trees in the pursuit of a secure future for their species.

First, let’s solve some mazes:

Specifically, let’s start with this one, and use it as an example to develop a general strategy for solving mazes:

a badly drawn but simple maze

Right, tell me how to get from the entrance (always top left) to the exit (always bottom right).

Seems simple enough, just walk down until you hit the bottom, then go right. Piece of piss.

Ah but computers don’t work that way, they don’t like ambiguity. Try and be a bit more precise. Also what if the hedges in the maze are arranged differently? Does that plan still always work?


1) We already know where the exit is, so start there and work backwards, keeping track of where we’ve been, then we’ll have a nice ordered list of grid locations we need to traverse in order to get out of the maze.

2) Let’s try moving up first. Ah but then we hit a wall at the top.

3) No bother: move left

No no hold on this won’t do. This is all far too vague for a computer. At every step, it’s going to need very clear instructions to allow it to figure out what direction to go in.

SIGH… fine

Steps for the computer:

1) Go up if you can, and you haven’t come from that direction

2) Otherwise go right if you can, and you haven’t come from that direction

3) Otherwise go left if you can, and you haven’t come from that direction

4) Otherwise go down if you can, and you haven’t come from that direction

There! Happy!?!

Oh much better yes. How does it know how to stop though?

Ok shit. Before you do anything, check if you are at the entrance to the maze.


This looks reasonably hopeful now. I’m pretty sure we’ve missed a few edge cases, but this looks like something we can work with.

(see below for my actual real notes I wrote while trying to figure out this strategy)

diagram with various mazes on it

Bring on the Recursion

This process has the smell of something that could be implemented using recursion.

We have a base case (‘am i at the entrance’), and some logic for deciding how to call the next iteration of the solution.
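The steps above can be sketched as a recursive function. This is my own toy version, not the code linked at the end of this post; the grid format (0 for path, 1 for hedge) and the names are assumptions:

```javascript
// Solve a grid maze recursively: 0 = path, 1 = hedge.
// Returns the route from (row, col) back to the entrance at (0, 0),
// or null if this branch is a dead end.
const solve = (grid, row, col, visited = new Set()) => {
  const here = `${row},${col}`;
  const offGrid =
    row < 0 || col < 0 || row >= grid.length || col >= grid[0].length;
  // Walls, hedges, and places we've already been are dead ends
  if (offGrid || grid[row][col] === 1 || visited.has(here)) {
    return null;
  }
  // Base case: am I at the entrance?
  if (row === 0 && col === 0) {
    return [here];
  }
  visited.add(here);
  // Otherwise try up, right, left, down, as per the steps above
  const directions = [[-1, 0], [0, 1], [0, -1], [1, 0]];
  for (const [dr, dc] of directions) {
    const route = solve(grid, row + dr, col + dc, visited);
    if (route) {
      return [...route, here];
    }
  }
  return null; // every direction failed
};

// Start at the exit (bottom right) and work backwards, as in step 1
const maze = [
  [0, 1],
  [0, 0],
];
console.log(solve(maze, 1, 1)); // → [ '0,0', '1,0', '1,1' ]
```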

Oh blah blah blah recursion so what. Tell them about the Glarpols!

Oh fine… This is a Glarpol:

fluffy creature

Don’t be put off by its cute and fuzzy appearance. It has a statistically tough challenge ahead of it.

Glarpols exist across infinite different parallel universes, with each universe looking suspiciously similar to our scrappily drawn mazes above.

Much like the Salmon has to struggle and swim upstream in order to spawn, and ultimately survive as a species, so too do the Glarpols.

In each universe, a solitary Glarpol is spawned at the exit to a randomly generated grid shaped maze. The exit is always on the bottom right of the grid, and the entrance is always on the top left of the grid.

If the Glarpol and its offspring manage to find a route to the entrance to the maze, they will survive as a species in this universe. If they do not, there will be no Glarpols in this particular universe.

In order to maintain their population, Glarpols require sufficient oxygen. Every time a Glarpol is created, they use a bit more oxygen.

If too many Glarpols are alive at the same time, then they will not have enough oxygen, and they will all die.

A Glarpol can move once, and breed once, and in contrast to many species, will stay alive until all of its children die, at which point it will also die.

Glarpols don’t know about the grid, but each Glarpol is born with the following information available to them:

1) Whether the maze has been solved by another Glarpol

2) Their ancestors’ journey up until this point

3) What the entrance to the maze looks like

4) Whether they are in a hedge

5) Whether they are in a wall

With this information, each solitary and freshly born Glarpol must decide what to do with their life. The choices are somewhat bleak…

  • Has someone else found a path? If so kill myself, I can’t help my species and I’m wasting oxygen.

  • Am I at the Entrance? If so tell the rest of my species that we are saved, and tell my parent how to get to the entrance. The effort of transmitting this message causes me to die.

  • Am I in a Wall? If so kill myself, I can’t help my species and I’m wasting oxygen.

  • Am I in a Hedge? If so kill myself, I can’t help my species and I’m wasting oxygen.

If a Glarpol has managed to live long enough to get this point, they will give birth to three children, and immediately afterwards will send them out into the world, in all directions (left, right, up, down) other than the one which their parent (who they will never meet) came from. They will also give each of these children a copy of their ancestors’ journey up until this point, along with their current position.

The Glarpol will then patiently wait alone to hear back from their offspring about their fate.

If they hear back that one of their descendants has found the entrance to the maze, they happily forward on the message to their parent. As before, the effort required to do this causes them to die.

Otherwise, if their children die, they also die.

OK… weird story. Do you have some code I can see? This is recursion, right?

That’s right. In this tenuous and painful analogy/metaphor, each spawned Glarpol is a recursive function call, oxygen is memory, and the path to the entrance of the maze is the solution to the maze.

If your program runs out of memory by calling too many recursive functions, it will crash. If it finds a path through the maze it terminates successfully.

If you want to see and/or play around with the Glarpols’ universe here is the code I developed when messing around with this idea:

In defence of reinventing the wheel (WTF is Recursion)

Something which is said often in software teams (especially in good software teams) is don’t reinvent the wheel.

Generally, this is great advice.

Software is increasingly complex, and if you can save yourself some time and effort, and make your final product more robust by using something somebody else has written, that is a good thing. Especially in complex applications which matter.

A notable example of this is time/date manipulation in JavaScript.

If you ever find yourself doing ANYTHING complicated with dates in a production JavaScript application, stop, just stop, and go and install this library.

‘If I have seen further it is by standing on ye sholders of giants’ – Isaac Newton

‘I don’t know what I would do without NPM’ – Also probably Isaac Newton

Basically, using other people’s stuff for problems that have already been solved frees your team up to focus on solving the more difficult and interesting problems that your specific app needs to solve.

Also no though

If you take this to the extreme, you can end up with a very shallow understanding of things.

When you are learning a new concept, or deciding whether to use a trendy framework or library, you should probably first have a go at reinventing the wheel. Or, to put it another way, you should try and expose yourself to the raw problem that is being solved by the shiny new thing first. If you understand the problem it is trying to solve, you will have a much better ability to use and learn the tool, and will be able to properly assess whether it is something you actually need.

As a stupid example, imagine explaining wheels to a newly discovered group of humanoids that have developed the ability to fly.

Without understanding that wheels make it easier to transport heavy things over distance, ON THE GROUND, they would have no intuitive understanding of the benefits of wheels.

How would you explain wheels to flying creatures?

Flying creature: ‘Why do we need wheels?’

Non-flying creature: ‘Carrying heavy things. Friction is easier to overcome than gravity. We have been using them for ages. Can definitely recommend.’

Flying creature: ‘Oooooh I get it now. Give me a wheel plz.’

Cool cool cool, reinvent all the things, got it. Do you have a better example?

Why yes, yes I do.

I’ve recently been bashing my head against recursion (again) as I struggle (again) to refresh my rusting knowledge of data structures and algorithms.

I did a computer science masters, I have worked in the industry for four years, and I have read about and implemented countless recursive functions.

I still have no f**king intuition whatsoever for recursion though.

Every time I have to solve a problem using recursion (usually in coding interviews…) I diligently code up a solution and run it, and it does NOTHING that I am expecting.

Normally it runs out of memory and I basically tweak variables and return values until it (sort of) behaves.

I have decided that enough is enough. So in order to build up a better mental model of and intuition for recursion, I’m going to (sort of) reinvent it, and then try and explain it back to myself in a way that makes sense.

Hopefully along the way things will start to click, click, click into place.

(What follows is a loosely edited version of handwritten notes that I put together over the course of about six painful hours. They bounce around a bit. I have deliberately left them pretty much as they were, to remind myself how this process looked.)

Why recurse at all?

Some problems are a bastard to solve in one go. A better approach is to split the problem up into smaller and smaller sub problems, until you have the smallest and simplest meaningful problem.

Once you have that tiny little nubbin of a problem that is laughably simple, solve it, and then recombine all of the solutions to these teeny tiny easy problems to give you a solution to the big scary problem.

That, I think, is the essence of what makes recursion useful in programming.
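A tiny, concrete instance of that shape (my example): summing an array by solving the laughably simple empty case, then recombining upwards:

```javascript
// Summing an array recursively: split off one element, solve the smaller
// problem, and combine the two answers with a single addition.
function sum(numbers) {
  // Base case: the tiny nubbin. The sum of no numbers is 0.
  if (numbers.length === 0) {
    return 0;
  }
  // Recursive case: one easy step plus the answer to a smaller problem.
  return numbers[0] + sum(numbers.slice(1));
}

console.log(sum([1, 2, 3, 4])); // 10
```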

Recursion, a life in quotes

Recursive – “periodically recurring”

“Pertaining to or using a rule or procedure that can be applied repeatedly”

“Recursion occurs when a thing is defined in terms of itself or its type”

It is kind of like backwards induction then?

Induction… WTF is that?

Mathematical induction is a method for proving that statements are true.

An example of a use for mathematical induction is proving that you can go as high as you like on an infinite ladder. Here’s how you do it:

1) Prove we can get to the first rung (this is called the ‘base case’)

2) Prove we can move from n rungs, to n+1 rungs

QWOD ERRAT DEMONSTRAAANNTEM (we done proved a thing)

Mathematical induction also turns out to have been around for a long time. Like the Greeks used it. Much longer than recursion in fact.

For now, it is enough to say that induction and recursion seem to be somehow linked.

They both involve boiling a problem or a process down to its simplest form, and then detailing a simple mechanism for moving from one iteration to the next.
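In fact, the ladder proof can be written out as a recursive function almost word for word (this is just the analogy in code, nothing rigorous, and it assumes a positive whole number of rungs):

```javascript
// 'Can I reach rung n?' written as a recursive function that mirrors
// the induction proof: the base case is rung one, and the recursive
// call is the inductive step.
function canReachRung(n) {
  // Base case: we proved we can get to the first rung.
  if (n === 1) {
    return true;
  }
  // Inductive step: we can reach rung n if we can reach rung n - 1.
  return canReachRung(n - 1);
}

console.log(canReachRung(10)); // true
```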

Enough yammering. Show me some recursion!!!

Okie dokie:

function recursive() {
  recursive();
}

A recursive function is nothing more than a function that calls itself as part of its declaration.

If you write a function like the one above, and call it in a program, you will meet some variety of this charming individual:

RangeError: Maximum call stack size exceeded

*’Bastard f**king recursion. Why does this keep happening!?!’*

‘Oh mate you’ve blown up the stack. It’s overflowing all over the place. Tragic’

Stack!?! WTF is a stack, and why would it overflow?

Good point, let’s see. Going back a few steps:

What happens in a program when a function calls itself?

When a function is called at runtime, a stack frame is created for it and is put on the call stack.

Stack frames require memory. If you call a function that calls itself unconditionally, it will keep calling itself forever, creating endless stack frames. Memory is a finite resource, so eventually the program runs out of it and crashes.

Sounds half plausible. What is a ‘stack’/’the stack’, or ‘call stack’? Is that what it’s called? I forget

I’m pretty sure it’s just a stack, that is used for keeping track of function calls, and their evaluation/results.

function funcA() {
  return 3;
}

function funcB() {
  return funcA();
}

funcB();



When this is run, I believe somewhere, something like this is created:

demo of call stack

OK, so because stack frames are dynamically pushed onto the call stack when functions are called, if we call a function inside a function and never have a termination case, the executing program will attempt to put infinite numbers of stack frames onto the call stack, and will run out of space:

stack overflow drawing

That’s probably enough on call stacks for now. I’m absolutely certain some of the stuff above is wrong, but I think it’s enough to understand why my code keeps crashing.

In order to avoid toppling call stacks, our recursive function must have some sort of exit clause, which allows it to stop calling itself.

Because functions are evaluated and run in their own sandboxed execution context, the piece of state which tells a recursive function it is done will probably have to be passed to it I guess.

Let’s make a for loop without variable assignment

function loop(loopCount) {
  console.log("in a loop");
  if (loopCount === 1) {
    return;
  }
  loop(loopCount - 1);
}

loop(5);


I’m relatively certain that as this runs, it will push 5 stack frames onto the call stack (as in the shoddy diagram below), and then pop them off once the ‘base case’ is evaluated as true (loopCount === 1).

a looping stack

All of the TypeScript!!!!

At this point I went down a bit of a rabbit hole and implemented a bunch of looping methods in TypeScript for fun.

Here they are:

function loop(loopCount: number) {
  console.log("In a loop");

  if (loopCount === 1) {
    console.log(`loop called with ${loopCount}`);
    return;
  }

  loop(loopCount - 1);

  console.log(`loop called with ${loopCount}`);
}

loop(5);


function customLoop(loopCount: number, callback: (key: number) => void) {
  callback(loopCount);

  if (loopCount === 1) {
    return;
  }

  customLoop(loopCount - 1, callback);
}

customLoop(20, (key: number) => console.log(key));

function iterator(array: any[], callback: (item: any, key: number) => any) {
  function iteratorRecursive(key: number) {
    if (key === array.length) {
      return;
    }

    callback(array[key], key);

    iteratorRecursive(key + 1);
  }

  iteratorRecursive(0);
}

const testArray = ["five", "silly", "frogs", "are", "bathing"];

iterator(testArray, (item, key) =>
  console.log(`'${item}' is found at element ${key} in the array`)
);

// The built-in equivalent, for comparison:
testArray.forEach((item, key) =>
  console.log(`'${item}' is found at element ${key} in the array`)
);

function baseCaseLess() {
  console.log(new Date().getSeconds());
  if (new Date().getSeconds() % 2 === 0) {
    return;
  }
  baseCaseLess();
}

What have we learnt?

  • Recursive functions call themselves

  • Recursive functions must be given the means to stop themselves. The information telling them to stop doesn’t necessarily have to be passed to them (see baseCaseLess function above), but probably will be.

  • The state that tells a recursive function to end or not has to change, either progressively per function call, or as a function of something else that is changing. E.G. time.

Loops are cool and all, but what is an actual useful example of a recursive function?

Let’s have a go at solving some mazes 🙂

If you’re intrigued, check out this post, where I draw up some dubious metaphors and analogies, and play around with solving mazes using recursive functions.

Why are you doing all this again?

I am trying to develop increasingly useful mental models for programming concepts that I am less fluent with than I’d like to be.

I believe a useful model should allow me to understand a concept well enough to solve problems with it at the level I need to.

If I find things not making sense, or behaving differently to how my current model behaves, I know I probably need to refine my model, and clean up the fuzzy bits (as is/was the case with recursion).

Models can be as deep as you want. Ultimately I see them as just a way to understand a complex thing and work with it.

Without effective mental models I feel like I’m kind of just guessing and stumbling around in the dark.

WTF is currying?

As a developer who spends most of my time at work writing JavaScript or TypeScript, I’ve heard references to ‘currying’ all over the place in blogs, Stack Overflow answers, from my colleagues, and more recently in quiz style technical interviews.

Whenever it gets brought up I do the same thing.

I google the term ‘currying’, figure out that it is basically taking a function that accepts multiple arguments, and converting it into a function that takes one argument, and then returns another function which can take another single argument, etc. etc.

That is to say:

const unCurried = (arg1, arg2, arg3) => {
    console.log(`First argument is ${arg1}, second is ${arg2}, third is ${arg3}`);
};

unCurried('this', 'that', 'another');

const curried = (arg1) => {
    return (arg2) => {
        return (arg3) => {
            console.log(`First argument is ${arg1}, second is ${arg2}, third is ${arg3}`);
        };
    };
};

curried('this')('that')('another');


At which point I say to myself

‘Oh cool yes I remember this. Neat idea. Not sure when I’d use it, but at least I understand it. What a clever JavaScript developer I am’

So maybe, given that currying is a relatively simple concept to implement in code, a better question might be

WTF is the point of currying?

If I stumble onto a concept in mathematics or programming that I don’t understand, I generally try to figure out where it came from, what problem it was/is trying to solve and/or which real world relationship it is trying to model.

So first of all, why is it called ‘currying’? Is there some significance to the name that will make its intention clear?

Currying… maybe it means to preserve something or to add ‘flavor’ to a function in some way?


Turns out it’s because a logician called Haskell Curry was heavily involved in developing the idea. So that’s a dead end.

It also looks like Haskell Curry was developing his ideas based on the previous ideas of some people called Gottlob Frege (died in 1925), and Moses Schonfinkel (died in 1942). Which suggests that maybe the ideas behind currying did not originally come about in response to a programming problem…

In fact, currying originated as a mathematical technique for transforming maths style functions, rather than programming style functions.

Mathematical functions and programming functions are related, but slightly different.

A mathematical function basically maps one set of data points to another set of data points. That is, for every input value to the function, there is a corresponding, specific output value.

Functions in programming also take inputs, but they can do whatever they like with those inputs (or arguments), and they are under no obligation to return a single specific output value.
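The difference is easier to see in code (my illustration, with deliberately trivial functions):

```javascript
// A 'mathematical' style function: for every input there is exactly one
// corresponding output, and nothing else happens.
const square = (x) => x * x;

// A 'programming' style function: it takes an input, but is under no
// obligation to return a specific value — it can just go and do things.
let timesCalled = 0;
const logIt = (x) => {
  timesCalled += 1;
  console.log(`called with ${x}`);
};

console.log(square(4)); // 16
logIt(4);
```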

Currying, as it is defined, seems to only relate to the inputs to functions (arguments), and so is presumably equally applicable to both mathematical and programming style functions. OK cool. What’s currying again?

‘Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument.’ – Wikipedia

OK yup I remember, and why do mathematicians curry?

(Based on Wikipedia’s summary)

Some mathematical techniques can only be performed on functions that accept single arguments, but there are lots of examples where relationships that can be modelled as functions need to take multiple inputs.

So currying allows you to use mathematical techniques that only work on functions with single arguments to tackle problems that involve multiple inputs.

This is all well and good, and we’ve already seen how we can implement currying in JavaScript, but… I still don’t get what the practical benefit of it is in a programming sense, especially in JavaScript!

Eureka! (A case of accidental currying)

I basically got to the point above, and then went back to ignoring currying as I didn’t really get what practical application it had for me, as a predominantly front end JavaScript developer.

A few months later, I found myself in the privileged position of having a project at work that was entirely my own. I got to write an automated testing solution in TypeScript, using Cypress, and was basically given free rein to organise the code and repository as I pleased.

I’ve been gradually moving towards and playing with more functional style programming, and one of the things I found myself wanting to do, was writing functions to create functions with different ‘flavors’:

const pricePrinter = (currencySymbol) => {
    return (priceInNumbers) => {
        console.log(`${currencySymbol}${priceInNumbers}`);
    };
};

const dollarPricePrinter = pricePrinter('$');

const poundPricePrinter = pricePrinter('£');

dollarPricePrinter(15);

poundPricePrinter(15);



Ignoring the wildly impractical nature of the example above, this pattern is quite useful. It allows you to compose functions neatly and semantically, with little ‘function factories’, and talk to code that is sufficiently abstracted.

I was very happy with this pattern and proudly showed my colleague what I’d discovered.

His response was to glance over briefly and go ‘Oh yeah that’s currying. Cool’, and then go back to work.

So there you go. One practical application in JavaScript of currying, is to compose functions together to make little function factories, that are passed the context they will be operating in as an argument (the currency symbol in the example above). Kind of like constructors for functions. Neat.


I may be wrong. I am happy to be proven wrong. I am equally even happier for someone to give me additional examples of currying in the wild.
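While I wait for those, here is one more small one I’m fairly confident about: some built-in array methods expect a function of a single element, and a curried function slots in neatly:

```javascript
// add takes its two arguments one at a time, i.e. it is already curried.
const add = (a) => (b) => a + b;

// Array.prototype.map wants a function of a single element, so we
// pre-load add with its first argument and hand over the result.
const result = [1, 2, 3].map(add(10));

console.log(result); // [ 11, 12, 13 ]
```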

WTF is wrong with software!?!

I recently watched the excellent series Chernobyl.

It tells the story of a catastrophic nuclear meltdown, caused by concerted human incompetence and blind adherence to bureaucratic demands, even in the face of blatant evidence that what was being done was stupid, negligent, and dangerous.

After the accident, it took a long time to understand what had gone wrong, blame was thrown around wildly, and there were official efforts to disguise the causes of the accident.

Very little was done to try and learn from systemic failures, and to address the problems demonstrated by the incident.

As a software professional, it made me very sad how many similarities I can see between most places I have worked, and the Chernobyl accident and cover up.

How I feel most of the time at work:

dog drinking coffee in a burning building

The way software is written is fucked.

It is complex enough to be understood by very few people, and the process of writing it is consistently mismanaged, despite the fact that most of the pitfalls and mistakes made by literally every single team, have been pointed out, along with detailed steps for how to avoid them, since the 1970s.

Here is a pretend conversation that sums up the level of ignorance I have found in my professional life, as applied to another industry. (it is a metaphor you see):

  • Bob: Hello, I’m the new pilot for this here commercial aircraft.
  • Geoff (Bob’s employer): Hello Bob! Glad to have you on board. I hope you’re worth the enormous sum of money we’re paying you
  • Bob: Well of course I am! I passed your test didn’t I 🙂
  • Geoff: Well yes, you are right, you were very good at identifying the key differences between a jumbo jet and a bicycle, so I guess we’re all OK on that front

Some time later…

  • Emma(Bob’s copilot): Bob could you land the plane for me please. I need to visit the bathroom
  • Bob: Sure thing. One question, where do you keep the landing ropes?
  • Emma: What was that Bob? The landing ropes… What are they?
  • Bob: The ropes you use to loop around the lorry on the landing bit on the floor so it can pull you in and you can land
  • Emma: Bob, that is insane. What are you talking about
  • Bob: Oh, do you use a different system. I could use roller skates instead
  • Emma: …
  • Geoff: Hey guys, are you going to land this plane soon
  • Bob: Sure thing, just debating whether to use the ropes or the roller skates. I think we’re leaning towards the roller skates given the weather
  • Emma: …
  • Geoff: OK great, let me know how you get…
  • Bob: Hold up there Emma, I’m not sure there’s any need to get angry like this
  • Geoff: Emma, what’s your problem? We really need to land this plane soon so can you please sort this out between you. What Bob said about the roller skates sounds sensible to me
  • Bob: I think that’s a bit extreme Emma, when I worked at Initrode we always used roller skates in inclement weather and we hardly ever crashed. We never took off either though… but I’m pretty sure the landing stuff would be fine


For another excellent summary of the type of madness you are likely to find in a software team, try this:

‘Ah but fortunately your example was unrealistic. That involved an aircraft, which could crash into the floor and kill people. Also nuclear reactors are totally different to coding. Software is just computers. It can’t do anything really nasty… right?’


Not only is software fucked, it is increasingly responsible for scary things.

Defective software has recently:

  • Crashed two planes.
  • Exposed countless people’s private information to criminals. (This happens so often now that it’s barely even news)
  • Stopped customers from accessing their online banking for months.
  • Prevented doctors from accessing test results.
  • Grounded flights out of major airports.
  • Thrown people off electric scooters by ‘braking heavily’ under certain circumstances.

I found these examples from a ten minute Google search. There are probably many more cases, many of which are inherently difficult to reason about because… software is fucked!

To recap, software is often:

  1. Responsible for scary things
  2. Written by incompetent people
  3. Managed by less competent people
  4. Very difficult to understand for ‘non technical’ people
  5. Shit

Which all combines to make me increasingly cynical and scared about my profession…

I have spent quite a bit of time reading about and discussing this phenomenon, and there appears to be a growing consensus, among both my peers and the wider community, that this cannot go on.

One of the potential solutions which is floated, in various forms, is ‘REGULATION’.

Regulation!?! Yuck…

I’m not sure how I feel about it either… or what it should look like.

The cynic in me thinks that any attempt to regulate software development would add unnecessary overhead to an already slow and inefficient process, and would likely be mismanaged.

I also know that certain elements of writing software are very difficult to reason about, and that ensuring a program does what you intend it to do is hard.

That is to say, it is possible that even after doing your due diligence, becoming excellent at your craft, and testing all foreseeable outcomes, your code could still cause something horrible to happen.

In these cases I suspect assigning blame would be counter productive, whereas learning from mistakes could be very useful.

That said, the level of incompetence and negligence I have encountered in some workplaces makes me feel there should be some legal protection in place, to make sure that development teams take responsibility for their code.

It seems totally unacceptable that testing and design is frequently shunted to the back of the priorities list, behind ‘hitting milestone 3’ or ‘drop 2’ or whatever that particular company is choosing to call their next client delivery.

If a shoddy system of development results in software which causes a loss of data, or money, or lives, then maybe someone should be held accountable.

That person should probably be paid a fairly large sum of money, for taking on the responsibility of ensuring that the software works (as many people in the software industry already are…), but equally, that reward should reflect the very real responsibility that has been taken on.

By which I mean that if it is found that the highly paid person that put their professional seal of approval on a product, did not adequately ensure that a system was put in place to develop robust and safe software, then they should be punished.

I don’t have a particular problem if your marketing website or your blog has some bugs or UX issues, but almost all software I have worked on has involved user accounts and financial transactions, and is therefore responsible for people’s personal information and money.

That this type of software is developed within a system that rewards cutting corners, and punishes slow deliberation and robustness is something that I find deeply worrying.

Systems of development

One thing to emphasise, is that I don’t think the state of software is a reflection of any specific individual failings.

It is not a case of ‘shit developers’ or ‘shit managers’ or ‘shit clients’.

You might have noticed that I keep writing the word ‘system’.

That is because I recently read an excellent book on systems thinking and now frequently regurgitate snippets from it:

One of the things that really stuck with me from this book is the idea that the actual outcomes of a system always reflect its true goals, or motivations, but that those true goals might differ (sometimes wildly) from any officially stated goals.

My theory is that in the majority of cases, software teams’ true goals and motivators are not robust software or a reworked and improved product, or a solved customer problem, and that is why these are rarely the outcome.

A team responsible for a piece of software is made up of a number of actors, or elements, all with competing goals.

You may have an external client, who you work with to define what the software should do.

They probably have a set of internal stakeholders they have to please, they may have a certain deliverable linked to their bonus that year, they may have inherited the project from someone else, and maybe they are trying to leave the company anyway, so they don’t particularly care about the long term success of the product.

The budget for the piece of software might have come about from a high level executive joining the client company and wanting to throw their weight around. They might not still be at the company in six months.

The team developing the software might have, intentionally or otherwise, oversold what they can deliver and offered it for an unrealistically low price.

All of this means that individual milestones or deliverables within a project become very important, as many of the actors within the system are depending on the outcome of them.

In a setup like the one above, the date the software is delivered on is more important than whether it is shit or not, because more of the people within the system are depending on the date being hit than on the software working well.

Individual developers within the team have a variety of different motivations.

Some are there just to take advantage of the elevated wages within the industry, and are content to turn up, do their job within the confines set out in front of them and go home.

They may or may not understand what they are doing, and they may make things so complicated from a code perspective, that reasoning about whether the product has bugs or not becomes almost impossible, but to be honest it doesn’t really matter.

As long as something is produced which looks and feels like something that was signed off and paid for, these guys have done their job.

These are the majority of developers I have encountered, and I don’t have a problem with them.

I do have a problem with the fact that the system they are working inside rewards this type of behaviour, but that is not their fault.

Others might want to actually deliver on the stated goals of the system, that is, roughly:

‘make a piece of robust software in the least complex way possible, such that it solves a clearly defined problem for the user of the software’.

This is quite a bit harder, requires constant reassessment of tools, techniques and design, and a commitment to mastering the fundamentals of your craft, across all disciplines involved, but in the medium to long term is definitely the way to go.

Amidst the background of all the other competing goals and motivations, these latter elements are likely to be frustrated.

Calls to slow down the pace of development in order to revisit the design of the product, or even to reassess whether the product is viable as a solution are generally ignored, as they will impede ‘deliverable alpha’.

In such a system, cutting corners is rewarded at all levels, and, again, this is not particularly anyone’s fault.

This would all be fine if it was happening in a bubble, but it’s not. These pieces of software are responsible for stuff that matters, and the real costs of these decisions are not felt by the people responsible for making the software, but by the users themselves.

In the above scenario, there might be a few raised voices and tense meetings, but it is very likely that everyone involved in making the software will be rewarded, as long as something is delivered on or around the date in the project plan.

Users may be lumbered with a buggy, slow, insecure program, or web app, but that is not really factored into the cost of development.

So what’s your point?

People responsible for making software are often motivated by many things other than making reliable, secure software that solves a defined user problem.

The negative effects of bad software are generally felt by users, not by those who put the software together.

In order to improve this situation, and to get better software that isn’t shit, it seems like there needs to be some sort of change to make the people responsible for creating software feel the effects of their shitty software themselves more keenly.

I don’t know what that would look like, and to be honest I don’t really have a point… All I really know is that I find my profession increasingly scary and insane, and that I’m not alone.

WTF is a splash screen? (Angular edition)

A splash screen is something which renders before the rest of an application is ready, and blocks the rest of the UI.

It avoids a user having to sit staring at a semi-functional product, and having the UI judder as more content is loaded.

OK I know what a splash screen is, how do I do one in Angular!?!

Add an element that you want to be your ‘splash’ to your index.html. Below we add a div, and then an image inside it:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>World's angriest cats</title>
    <base href="/" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="icon" type="image/x-icon" href="favicon.ico" />
    <link rel="stylesheet" type="text/css" href="assets/styles/splash.css" />
  </head>
  <body class="mat-typography">
    <app-root></app-root>
    <div class="splash">
      <!-- image path is illustrative -->
      <img
        class="splash__image"
        src="assets/images/angry-cat.jpg"
        alt="Angry cat splash page"
      />
    </div>
  </body>
</html>

And add some CSS similar to this to make sure it fills the whole screen and bleeds in/out etc.

.splash {
  opacity: 1;
  visibility: visible;
  transition: opacity 1.4s ease-in-out, visibility 1.4s;
  position: fixed;
  height: 100%;
  width: 100%;
  top: 0;
  left: 0;
  background: red;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
}

/* This little piece of trickery hides the splash image once the app-root is populated (i.e. the app is ready) */
app-root:not(:empty) + .splash {
  opacity: 0;
  visibility: hidden;
}

.splash__image {
  width: 100%;
  max-width: 200px;
}

OK cool, but what if I want to wait for a specific API call to finish (e.g. CMS) before rendering my site?

No worries. Angular has you covered here too 🙂

Assuming that you have a provider called ContentProvider, which is doing some asynchronous task to get content for your site, and then exposes a stream to tell you when the data is ready, you can tell Angular that it needs to wait until this is done using the slightly confusingly named APP_INITIALIZER injection token.

This, as far as my puny brain can approximate, adds the function returned from your useFactory method to an array of methods which are called before the app initialises and starts rendering.

If the function you return returns a promise, as below, then the app will not initialise until that promise resolves.

    {
      provide: APP_INITIALIZER,
      useFactory: (content: ContentProvider) => {
        return async () => {
          return new Promise<void>(resolve => {
            content.loaded.subscribe(loaded => {
              if (loaded) {
                resolve();
              }
            });
          });
        };
      },
      multi: true,
      deps: [ContentProvider],
    },

WTF are feature flags? (Angular example included)

Feature flags are a fairly rudimentary (but I think powerful) alternative to long lived feature branches, and the associated merge hell that goes with them.

You basically merge unfinished or experimental features to your master branch, and even deploy them, but hide the functionality behind ‘flags’, only allowing them to be viewed if certain switches are activated. The exact mechanism behind the switches doesn’t really matter, and I will detail some potential ways of implementing these switches in the context of an Angular application below.

Feature flags do not mean merging broken code or sloppy code.

The code should still be tested and peer reviewed, and your automated suite of unit, integration and e2e tests, linting and other build scripts that you have written to protect your master branch should still run (you do have those things… right?).

The feature is merely unfinished.

Perhaps it is missing some UI polish, or requires further QA before being released into the wild proper, or perhaps you want to test it out on a smaller subset of your user base to gather analytics feedback on the effectiveness of the feature.

There are many reasons why feature flags can make sense, but fundamentally they are a very simple concept.

They just give you the ability to ‘switch’ on or off different parts of your application, either at run-time or build time.

For me, the main benefit is that your feature is regularly integrated back into master, without having to wait for all of the functionality to be complete. This ensures that at all times you can be sure that your feature plays nicely with the rest of the code base, and with any other features that other people are working on (also hidden behind feature flags).

Also they are a great tool if you want to demo experimental features to stakeholders early on in the development process, in order to gather feedback.

I’ve managed to use them successfully before on a site serving thousands of daily visitors, without causing any disruption to the deployed product, and I was able to demo the feature in the live site.

Where to store feature state?

To use feature flags, you need some state somewhere that your app is aware of, that tells it whether certain features are enabled or not.

For example, say we have an experimental kitten generator feature that we only want to show sometimes, we might have a piece of data somewhere called kittenGeneratorEnabled, which our app checks to see if it should allow a user access to the feature.
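At its core, the check is just a lookup against that piece of data. As a minimal sketch (the `FeatureFlags` type and `isFeatureEnabled` helper are illustrative names I’ve invented, not from any library):

```typescript
// Illustrative only: a flag map and a helper to query it.
type FeatureFlags = Record<string, boolean>;

function isFeatureEnabled(flags: FeatureFlags, feature: string): boolean {
  // Treat missing flags as disabled, so new features default to 'off'.
  return flags[feature] === true;
}

const flags: FeatureFlags = { kittenGeneratorEnabled: true };
console.log(isFeatureEnabled(flags, 'kittenGeneratorEnabled')); // true
console.log(isFeatureEnabled(flags, 'dogGeneratorEnabled')); // false
```

Everything that follows is really about where that flag data lives and who is allowed to flip it.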

There are many places this data could live, and your particular use case will dictate the most sensible course of action:

  • Build configuration files (in Angular’s case we would probably use environment files for this). We can set the switch on or off at build time and our deployed app will reflect this. The downside to this approach is that it is not super flexible, as you can’t switch the feature on/off at run-time. If you have a slick and speedy build pipeline, you can still turn off features in subsequent builds quickly and easily if there are any problems with them. It is also a pretty lightweight solution; it just costs you a tiny bit of configuration. You can also have builds for different contexts, for example you could disable all experimental features in production and turn them on for QA or something similar (not saying this is a good idea, just an example of the level of control this can give you).

  • A server. We can delegate the responsibility for setting feature flags to a backend service. If you have more complex requirements, for example only showing features to a specific subset of users via traffic splitting etc., this might be worth looking into, as the complex logic/management portal for this sort of granular control can be moved out of the frontend. In our kitten generator case we would query an endpoint like GET /kittenGeneratorEnabled to figure out whether to show the user the feature. If the features are potentially insecure and you want to make sure they are only viewed by people you trust, rather than nasty malicious types, then you could make users authenticate and control access to the features via the backend too. The main downside of this is that it is pretty heavyweight, and requires more communication between any frontend and backend developers.

  • Browser storage. Cookies, local storage, session storage etc. can be used to keep the state and can then very easily be queried in your frontend code to see whether to allow access to features or not. In the case of the server example above, you could have the endpoint set and unset cookies, then have your app read these cookies. These browser storage items can also be set and unset via the browser address bar with query parameters. This has the advantage that you can send people links with certain flags enabled/disabled for demos/testing, rather than having them manually mess around with browser storage. These links will also work equally well on mobile devices, where setting custom cookies etc. is much harder.

Obviously you can, and probably should, use a combination of the above approaches to meet your needs.
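For the build-configuration option, here is a sketch of what the Angular environment file might contain. Note that the `featureFlags` property is a convention invented for this post, not an Angular built-in:

```typescript
// src/environments/environment.ts — development build
export const environment = {
  production: false,
  featureFlags: true, // experimental features may be switched on
};
```

The corresponding `environment.prod.ts` would set `featureFlags: false`, so production builds never expose experimental features regardless of any other state.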

How to control access to features

Most features in a modern web application will be routed. In our case there would probably be a /kitten-generator route in the frontend.

If this is the case then the simplest way to control access to the feature in Angular is to use a conditional route guard.

Other features/functionality can be controlled with simple if/else statements (as in if(featureEnabled){ do a thing }).

Basically this stuff shouldn’t be complex in a well architected application. If your application consists of one app.component.ts with thousands of lines of code in it, then you will have a more difficult time. In that case feature flags are probably the least of your problems!

Generally though, you will be able to switch a feature off by simply blocking the route with a route guard.

What would a route guard for something like that look like?


Let’s say we have decided to use the browser’s local storage to set individual feature flag state (in our case the kitten-generator-enabled flag).

We have also decided to add an additional safety value in our Angular environment file called featureFlags. When this is set to true, access to experimental features is allowed if the correct local storage value is set; when it is set to false, no experimental features are accessible, even if the user has the correct local storage value set. This means that if we are being super paranoid, we can produce builds for production which do not expose any experimental or half-finished features.

import { Injectable } from '@angular/core';
import {
  ActivatedRouteSnapshot,
  CanActivate,
  Router,
  RouterStateSnapshot,
} from '@angular/router';
import { environment } from '../../environments/environment';

@Injectable({
  providedIn: 'root',
})
export class FeatureRouteGuardService implements CanActivate {
  constructor(public router: Router) {}

  canActivate(
    route: ActivatedRouteSnapshot,
    state: RouterStateSnapshot,
  ): boolean {
    // Build-level kill switch: no experimental features in this build
    if (!environment.featureFlags) {
      this.router.navigateByUrl(route.data['featureFlagDisabledRoute'] || '/');
      return false;
    }
    // Allow access if the feature's local storage flag is set
    if (localStorage.getItem(route.data['featureFlag'])) {
      return true;
    }
    this.router.navigateByUrl(route.data['featureFlagDisabledRoute'] || '/');
    return false;
  }
}

And we would use it like this:

      {
        path: 'kitten-generator',
        component: KittenGeneratorComponent,
        canActivate: [FeatureRouteGuardService],
        data: { featureFlag: 'kitten-generator-enabled', featureFlagDisabledRoute: '/' },
      },

Now, if a user has a kitten-generator-enabled:true value in their browser’s local storage, and they are accessing a build with the featureFlags value set to true in the environment file, then they will be able to access this route, otherwise, they will be redirected to /.
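To restate the guard’s decision logic as a pure function (a sketch for clarity, not part of the actual guard code):

```typescript
// buildFlagsEnabled mirrors environment.featureFlags;
// storedValue mirrors localStorage.getItem('kitten-generator-enabled').
function canActivateFeature(
  buildFlagsEnabled: boolean,
  storedValue: string | null,
): boolean {
  if (!buildFlagsEnabled) {
    return false; // the build-level kill switch always wins
  }
  return storedValue !== null; // per-feature local storage flag
}

console.log(canActivateFeature(true, 'true')); // true: route allowed
console.log(canActivateFeature(false, 'true')); // false: redirected to '/'
console.log(canActivateFeature(true, null)); // false: redirected to '/'
```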

Going one step further, if we wanted to implement the logic to set or unset features via the browser URL, we might use something like this inside the app’s app.component.ts:

  /**
   * parse the navigation bar query parameters, and use them to set
   * local storage values to drive feature flag activation
   * @param queryParams
   */
  private processFeatureFlagQueryParams(queryParams: Params) {
    const featuresToTurnOn =
      (queryParams['features-on'] && queryParams['features-on'].split(',')) ||
      [];
    const featuresToTurnOff =
      (queryParams['features-off'] && queryParams['features-off'].split(',')) ||
      [];
    featuresToTurnOn.forEach(item => {
      localStorage.setItem(item, 'true');
    });
    featuresToTurnOff.forEach(item => {
      if (localStorage.getItem(item)) {
        localStorage.removeItem(item);
      }
    });
  }

  ngOnInit() {
    if (this.environment.featureFlags) {
      this.router.events.subscribe(event => {
        if (event instanceof ActivationStart) {
          this.processFeatureFlagQueryParams(event.snapshot.queryParams);
        }
      });
    }
  }

This then would mean that you could give someone a link to your site like this: /kitten-generator?features-on=kitten-generator,another-experimental-feature&features-off=yet-another-experimental-feature.

This would set local storage values to true for kitten-generator and another-experimental-feature, and would delete any existing local storage value for yet-another-experimental-feature.
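The query-parameter handling described above can be sketched as a standalone function, outside of Angular (the `parseFeatureParams` name is my own, for illustration):

```typescript
// Parse 'features-on'/'features-off' query parameters into flag name lists.
function parseFeatureParams(params: Record<string, string>): {
  on: string[];
  off: string[];
} {
  const on = params['features-on'] ? params['features-on'].split(',') : [];
  const off = params['features-off'] ? params['features-off'].split(',') : [];
  return { on, off };
}

const parsed = parseFeatureParams({
  'features-on': 'kitten-generator,another-experimental-feature',
  'features-off': 'yet-another-experimental-feature',
});
console.log(parsed.on.length); // 2
console.log(parsed.off[0]); // 'yet-another-experimental-feature'
```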


Feature flags in web apps and Angular can be as simple, or as complex as you’d like them to be.

I have generally had a better time with the slightly simpler approaches shown above than with all-singing, all-dancing server config options. What are your thoughts?

Why TF should I stub dependencies for an E2E test?

E2E (or ‘end to end’) tests, when written for web applications, basically allow you to write scripts to interact with your application via a browser, and then assert that it responds correctly.

Common libraries that allow you to do this for Angular apps are Selenium/Protractor and Cypress (you should check out Cypress in particular).

The browser can be ‘headless’, meaning that you can run it from the command line, without a GUI (see headless Chrome for an example). This means that these tests can be run by your build server to support a continuous integration/continuous deployment pipeline.

You can then develop scripts to click about, and test your most important user flows quickly and consistently, on your real application, in a very close approximation of how your users would click around. This is an attractive alternative to manually running these tests yourself by spinning up your application and clicking about. Humans are good at lots of things, but computers trump them every time when it comes to performing repeated actions consistently.

E2E tests are an example of black box testing.

If the tests fail, your application is broken; if they pass, your application may still be broken, but it is not terribly broken (as we have tests ensuring that key user flows are still accessible).

This process relies on an assumption that these tests are very robust and not ‘flaky’ (they only fail when something is actually broken, not just randomly).

There are a few things that can cause E2E tests to randomly fail, but by far the most common in my experience is some form of network latency. Basically any time your code is interacting with something which can take an indeterminate amount of time to finish, you are going to experience some pain when writing automated tests for it.

The most recent (and most evil) version of this I have experienced is an application that is plugged into a blockchain and involves waiting for transactions to be mined. This can take anywhere from a few seconds, to forever.

Unless you are a lunatic, if your application involves any sort of transactions (purchases, bets, swiping-right), which involve user data, your automated tests will already be running on some sort of test environment, meaning that when you mutate data, it is not done on production data, but a copy. This comes with its own little pain points.

When talking to a truly stateful dependency (like an API plugged into a database), you will have to account for the state of the database when performing any actions in your browser test.

As one example, if you are logging in, you might have to first check whether the user has an account, then log in if they do, or create an account if not. This adds complexity to your tests and makes them harder to maintain/potentially flakier.

Another option is to control the test environment, and write (even more) scripts to populate the test database with clean data that can be relied on to power your tests (make sure that a specific user is present for example).

This removes complexity from the browser tests, but adds it to the scripts responsible for set up and tear down of data. This also requires that your test data is kept synchronised with any schema changes to your production database. Otherwise you run the risk of having E2E tests which can pass, even while your real application is broken, as they are testing on an out of date version of the backend.

(To clarify, these are all valid options, and there are solutions to many of the problems above)

However, while this approach solves the problem of accidentally munging user data, a backend plugged into a staging database can still be forking slow, as again you are at the mercy of network speeds and many other things out of your control.

Taken together, this is why I favour ‘stubbing’ the slow bit of your application (in many cases a backend service). This involves providing a very simple approximation of your Slow Ass Service (SAS), which follows the same API, but returns pre-determined fixture data, without doing any slow networking.

There is one major caveat to this approach.

It only works in a meaningful way if you have some sort of contract between the SAS and your frontend.

That is to say, your frontend and the SAS (Slow Ass Service) should both honour a shared contract, describing how the API should behave and what data types it should accept and return (for a REST API, Swagger/OpenAPI provides a good specification for describing API behaviour). Because this shared contract is written in a way that can be parsed, automatic validations can be made on both the frontend and the backend code to ensure that they meet the requirements.

If you were reading closely, you will notice I mentioned data ‘types’, so most likely you will require some sort of typed language on both the frontend and within the SAS (I would recommend TypeScript 🙂 ).

With this in place, as long as your stub implementation of the SAS also honours this contract (which can be checked in with the SAS source code), then you can feel happy(ish) that your stub represents a realistic interaction with the real SAS. If the contract changes, and your code doesn’t, it should not compile, and your build should fail (which we want!).
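A minimal sketch of what this could look like in TypeScript (the `KittenService` interface and fixture data are invented here for illustration, not from the post’s actual codebase):

```typescript
interface Kitten {
  id: number;
  name: string;
}

// The shared contract: both the real service and the stub implement it,
// so a contract change breaks the build for both.
interface KittenService {
  generateKitten(): Promise<Kitten>;
}

// The real implementation would make a slow network call; the stub
// returns pre-determined fixture data immediately, with no networking.
class StubKittenService implements KittenService {
  async generateKitten(): Promise<Kitten> {
    return { id: 1, name: 'Whiskers' };
  }
}

new StubKittenService()
  .generateKitten()
  .then(kitten => console.log(kitten.name)); // Whiskers
```

Your E2E tests then run the app against `StubKittenService`, so assertions never wait on the real, slow service.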

Now you have this in place, your E2E tests can be far less flaky (or not flaky at all), and as an added benefit, will run super fast on your build server, meaning that the dev workflow is slightly nicer too.

Again, there are gazillions of different ways of approaching this problem, and obviously you need to figure out what it is that you care about.

In my experience though, this presents a good compromise and has meant that I (finally) have managed to get E2E tests running as part of CI/CD without wasting huge amounts of time and causing headaches.

In a subsequent post I demo how to actually get this working in an Angular application that depends on a strongly typed third party library, which happens to be slow as hell 🙂