Adventures in Node town (hacking Slack’s standard export with Node.js)

One benefit of changing jobs quite a lot is that I have built up an increasingly wide network of people that I like and have worked with previously.

A really nice thing about staying in contact with these people is that we are able to help each other out, sharing skills, jobs, jokes etc.

Recently a designer I used to work with asked whether somebody would be able to help with writing a script to process the exported contents of his ‘question of the week’ slack channel, which by default gets spat out as a folder filled with JSON files, keyed by date:

My response was rapid and decisive:

Data munging and a chance to use my favourite JavaScript runtime, Node.js. Sign me up!!!

First, WTF is data munging

Data munging, or wrangling, is the process of taking raw data in one form, and mapping it to another, more useful form (for whatever analysis you’re doing).

Personally, I find data wrangling/munging to be pretty enjoyable.

So, as London is currently practising social distancing because of Covid-19, and I have nothing better going on, I decided to spend my Saturday applying my amateur data munging skills to Slack’s data export files.

Steps for data munging

1) Figure out the structure of the data you are investigating. If it is not structured, you are going to have trouble telling a computer how to read it. This is your chance to be a detective. What are the rules of your data? How can you exploit them to categorise your data differently?

2) Import the data into a program, using a language and runtime which allows you to manipulate it in ways which are useful.

3) Do some stuff to the data to transform it into a format that is useful to you. Use programming to do this, you programming whizz you.

4) Output the newly manipulated data into a place where it can be further processed, or analysed.

In my case, the input data was in a series of JSON files keyed by date (see below), and the output I ideally wanted was another JSON file with an array of questions, along with all of the responses to those questions.

Shiny tools!!!

Given that the data was in a JSON file, and I am primarily a JavaScript developer, I thought Node.js would be a good choice of tool. Why?

  • It has loads of methods for interacting with file systems in an OS agnostic way.

  • I already have some experience with it.

  • It’s lightweight and I can get a script up and hacked together and running quickly. I once had to use C# to do some heavy JSON parsing and mapping and it was a big clunky Object Oriented nightmare. Granted I’m sure I was doing lots of things wrong but it was a huge ball-ache.

  • From Wikipedia, I know that ‘Node.js is an open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a web browser. Node.js lets developers use JavaScript to write command line tools‘.

  • JavaScript all of the things.

So, Node.js is pretty much it for tools…

So, on to the data detective work. I knew I very likely needed to do a few things:

1) Tell the program where my import files are.

2) Gather all the data together, from all the different files, and organise it by date.

3) Identify all the questions.

4) Identify answers, and link them to the relevant question.

The first one was the easiest, so I started there:

Tell the program where my import files are

const filePath = process.argv[2] ? `./${process.argv[2]}` : undefined;

if (!filePath) {
    console.error("You must provide a path to the slack export folder! (unzipped)");
    process.exit(1);
} else {
    console.log(`Let's have a look at \n${filePath}\nshall we.\nTry and find some tasty questions of the week...`);
}

To run my program, I will have to tell it where the file I’m importing is. To do that I will type this into a terminal:

node questions-of-the-week.js Triangles\ Slack\ export\ Jan\ 11\ 2017\ -\ Apr\ 3\ 2020

In this tasty little snippet, questions-of-the-week.js is the name of my script, and Triangles\ Slack\ export\ Jan\ 11\ 2017\ -\ Apr\ 3\ 2020 is the path to the file I’m importing from.

Those weird-looking backslashes are ‘escape characters’, which are needed to type spaces in file names etc. when inputting them on the command line on Unix systems. The terminal emulator I use autocompletes this stuff, and I think most do now, so hopefully you won’t have to worry too much about it.

This is also the reason that many programmers habitually name files with-hyphens-or_underscores_in_them.

But basically this command is saying:

‘Use node to run the program “questions-of-the-week.js”, and pass it this filename as an argument’

What are we to do with that file name though?

Node comes with a global object called process which has a bunch of useful data and methods on it.

This means that in any Node program you can always do certain things, such as investigating arguments passed into the program, and terminating the program.

In the code sample above, we do both of those things.

For clarity, process.argv is an array of command line arguments passed to the program. In the case of the command we put into our terminal, it looks something like this (the first two paths will vary from machine to machine):

[
  '/path/to/node',
  '/path/to/questions-of-the-week.js',
  'Triangles Slack export Jan 11 2017 - Apr 3 2020'
]

As you can see, the first two elements of the array are the location of the node binary, and the location of the file that contains our program. These will be present any time you run a node program in this way.

The third element of the array is the filename that we passed in, and in our program we stick it in a variable called filePath.



Gather all the data together, from all the different files, and organise it by date

const fs = require("fs");
const path = require("path");

const slackExportFolders = fs.readdirSync(filePath);

const questionOfTheWeek = slackExportFolders.find(
  (f) => f === "question-of-the-week"
);

if (!questionOfTheWeek) {
  console.error("could not find a question-of-the-week folder");
  process.exit(1);
}

const jsons = fs.readdirSync(path.join(filePath, questionOfTheWeek));

let entries = [];

jsons.forEach((file) => {
  const jsonLocation = path.join(__dirname, filePath, questionOfTheWeek, file);
  entries = [
    ...entries,
    ...require(jsonLocation).map((i) => ({ ...i, date: file.slice(0, -5) })),
  ];
});

The Slack channel I am looking at munging is the ‘question of the week’ channel.

When this is exported, it gets exported to a ‘question-of-the-week’ folder.

So first of all I check that there is a question-of-the-week folder. If there is not, I exit the program, and log an error to the console.

If the program can find it, then it gets to work gathering all of the data together.

Here we start to see the benefit of using Node.js with JSON. We are writing JavaScript, to parse a file which uses a file format which originally came from JavaScript!

This means that pulling all of this data together is as simple as getting a list of file names with fs.readdirSync.

This gets all of the names of the files under the question-of-the-week folder in an array, which is, you know, pretty useful.

Once we have those file names, we iterate through them using forEach, and pull all of the data from each file into a big array called entries. We can use require to do this, which is very cool. Again, this is because Node and JavaScript like JSON, they like it very much.

We know we are likely to need the date that the slack data is associated with, but it is in the file name, not in the data itself.

To solve this, we take the file name and put it into a ‘date’ field, which we insert into each data item, using map

The file.slice(0, -5) bit just takes a file name like 2018-06-29.json and chops the last five characters (the .json extension) off the end, leaving 2018-06-29.

Coooool we done got some slack data by date. Munging step 2 complete.

Identify all the questions

This is trickier. We need our detective hats for this bit.

I won’t lie, I fucked around with this a lot, and I re-learnt something that I have learned previously, which is that it is really hard to take data that has been created by messy, illogical humans, and devise rules to figure out what is what.

What I ended up with is this. The process of figuring it out involved lots of trial and error, and I know for a fact that it misses a bunch of questions, and answers. However, it probably finds 80% to 90% of the data that is needed. This would take a human a long time to do, so is better than nothing. The remaining 10% to 20% would need to be mapped manually somehow.

const questionsWithReactions = entries
  .filter((e) => e.topic && e.topic.toLowerCase().includes("qotw"))
  .map((q) => ({
    question: q.text,
    date: q.date,
    reactions: q.reactions ? q.reactions.map((r) => r.name) : [],
  }));

‘qotw’ is ‘question of the week’ by the way, in case you missed it.

I find them by looking for Slack data entries that have a topic including ‘qotw’. I then map these entries so they just include the text and date, and I also pull in the reactions (thumbs up, emojis etc.) for the lols.

Now we have an array of questions with information about when they were asked. We’re getting somewhere.

Identify answers, and link them to the relevant question

const questionsWithAnswers = questionsWithReactions.map((q, key) => {

  // Find the date of this question and the next one.
  // We use these to figure out which messages were sent after
  // a question was asked, and before the next one
  const questionDate = new Date(q.date);
  const nextQuestionDate = questionsWithReactions[key + 1]
    ? new Date(questionsWithReactions[key + 1].date)
    : new Date();

  return {
    ...q,
    responses: entries
      .filter(
        (e) =>
          new Date(e.date) > questionDate &&
          new Date(e.date) < nextQuestionDate &&
          e.type === "message" &&
          e.text
      )
      .map((r) => ({
        answer: r.text,
        user: r.user_profile ? r.user_profile.real_name : undefined,
      })),
  };
});

// put them in a file. the null, 4 bit basically pretty prints the whole thing.
fs.writeFileSync(
  "questions-with-answers.json",
  JSON.stringify(questionsWithAnswers, null, 4)
);

console.log('questions with answers (hopefully...) saved to "questions-with-answers.json"');

This bit is a bit more complex… but it’s not doing anything non-standard from a JavaScript point of view.

Basically we just search all the entries for messages which fall after a question being asked and before the next one, put them in an array of answers with the user profile and the message text, then save the lot to a new, pretty printed JSON file.

We are done! We now have a new JSON file, with an array of questions, and all the answers to each question.

It is worth noting that this approach is far from optimal from an ‘algorithmic’ point of view, as I am repeatedly checking the entire data set.

Thing is, I don’t give a shit, because my dataset is small, and the program runs instantly as it is.

If it started to choke and that became a problem I would obviously improve this, but until that point, this code is simpler to understand and maintain.

More efficient algorithms normally mean nastier code for humans, and until it’s needed, as a nice developer you should prioritise humans over computers.

(sorry, computers)

What did we learn?

Slack’s data is quite nicely structured, and is very parseable.

JavaScript is great for manipulating JSON data thanks to its plethora of array manipulation utilities.

You can write a script to automatically categorise Slack export data and put it into a semi-useful state with less than 80 lines of code, including witty console output and formatting to a nice narrow width.

This confirms my suspicion that for quick and dirty data munging, Node.js is a serious contender.

If paired with TypeScript and some nice types for your data models, it could be even nicer.

Here is the result of my labours

HTF do I find an element efficiently in a rotated sorted array

Also why on earth would you care about this?


a) You are trying to search a randomly rotated sorted array which is enormous

b) You are trying to prepare for technical whiteboard tests

c) You like puzzles

b) is probably the most likely.

In my case it’s a combination of b) and c).

I almost solved this on my own. I didn’t get it quite right, but my thought processes were not totally wrong, which is a nice change.


Search in Rotated Array: Given a sorted array of n integers that has been rotated an unknown number of times, write code to find an element in the array. You may assume that the array was originally sorted in increasing order.


Input: find 5 in [15, 16, 19, 20, 25, 1, 3, 4, 5, 7, 10, 14]

Output: 8 (the index of 5 in the array)


Firstly, you can just loop through the whole array, checking for the value.

That gets you O(n) time and O(1) space.
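That loop-through-everything approach might look something like this (a plain JavaScript sketch, function name my own invention):

```javascript
// Naive scan: check every element until we find the target.
// O(n) time, O(1) space.
const linearSearch = (array, target) => {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) {
      return i;
    }
  }
  return -1; // not found
};
```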

Drop that whiteboard marker, and stare at your interviewer with a look of pure intimidation.

Did it work?

No, they say they would like it to scale better than O(n).

Urgh. Who do they think they are.

Let’s mine the problem statement for clues:

  • Array is sorted
  • Array is rotated

Sorted you say… I think I smell a binary search coming on.

Demon/Gargoyle interviewer: ‘Excellent. Proceed minion’

If we can use binary search, we can cut the problem in half with each check, giving us a run time of O(log n)

Demon/Gargoyle interviewer: ‘CODE IT!!! MINION’

Hold up there, I haven’t got my tactics in place yet.

For binary search to work we have to be able to work on a sorted array.

Because our array has been rotated, we essentially have two sorted arrays:

[15, 16, 19, 20, 25]

[1, 3, 4, 5, 7, 10, 14]

We can do binary search on each of these individually just fine.
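As a refresher, plain binary search on one of those sorted halves looks something like this (a JavaScript sketch):

```javascript
// Classic binary search on a sorted array: O(log n) time.
// Throw away the half that can't contain the target, and recurse.
const binarySearch = (array, target, left = 0, right = array.length - 1) => {
  if (left > right) {
    return -1; // not found
  }
  const middle = Math.floor((left + right) / 2);
  if (array[middle] === target) {
    return middle;
  }
  return array[middle] < target
    ? binarySearch(array, target, middle + 1, right)
    : binarySearch(array, target, left, middle - 1);
};
```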

So if we find the ‘rotation point’ or whatever you want to call it, we can split the array and RECURSE LIKE OUR LIFE DEPENDS ON IT.

How we gonna find that rotation point?

Rotation point = ‘point in the array where the previous element is bigger than the next element’, i.e. the one place where the array is not sorted.

We could just loop through the array and check each element, but that erodes all of our good work; we are just searching for an item in the array again, so we are back to O(n).


Fuck this industry.

Maybe we can use recursion again/a version of binary search?

Then we can get back to cutting that problem in half. O(log n) for the win!

I have an idea. Look.

[15, 16, 19, 20, 25, 1, 3, 4, 5, 7, 10, 14]

The middle index is 6, with a value of 3 in it.

One half of our array when split by this middle element is not sorted:

[15, 16, 19, 20, 25, 1, 3]

The other half is sorted:

[3, 4, 5, 7, 10, 14]

We can figure out which half is sorted by comparing the middle element (3) with the left and right most elements respectively.

15 is not smaller than 3, so that half is not sorted.

3 is smaller than 14, so that half is sorted.

Now we know that our ‘rotation point’ is in the left half of our array, so forget about the right half, and RECURSE.

[15, 16, 19, 20, 25, 1, 3]

Middle is 20, left half is sorted, right half is not sorted, so rotation point is in right half. RECURSE.

[20, 25, 1, 3]

Middle is 25, the left half is sorted, the right half is not, so the rotation point is in the right half. One more recursion lands us on the 1, whose previous element (25) is bigger than it: rotation point found.



The recipe:

1) Find the rotation point recursively in O(log(n)) time.

2) Do binary search on both halves in O(2 * log(n)) time.

3) Profit/dance with joy at your new O(3 * log(n)) => O(log n) running time.

There are definitely optimisations that can be made here.

But I’m going to move on with my life for now. Behold:

const findRotationPoint = (array: number[], left: number, right: number): number => {
  const middle = Math.floor((left + right) / 2);

  // This element is out of order
  if (array[middle - 1] > array[middle]) {
    return middle;
  }

  // If the left half is sorted
  if (array[left] <= array[middle]) {
    // Check the right half - as left half is OK
    return findRotationPoint(array, middle + 1, right);
  } else {
    // Check the left half because it is not sorted so must contain the rotation point
    return findRotationPoint(array, left, middle - 1);
  }
};

const searchRecursive = (
  array: number[],
  left: number,
  right: number,
  target: number
): number => {

  // Target not in array
  if (left > right) {
    return -1;
  }

  const middle = Math.floor((left + right) / 2);

  if (array[middle] === target) {
    // We found the target!
    return middle;
  }

  if (array[middle] < target) {
    return searchRecursive(array, middle + 1, right, target);
  } else {
    return searchRecursive(array, left, middle - 1, target);
  }
};

const search = (array: number[], target: number) => {

  // Find rotation point
  const rotationPoint = findRotationPoint(array, 0, array.length - 1);

  // Check to left of rotation point (the rotation point itself starts the right half)
  const leftResult = searchRecursive(array, 0, rotationPoint - 1, target);

  if (leftResult !== -1) {
    // We found it!
    return leftResult;
  } else {
    // Check from the rotation point to the end
    return searchRecursive(array, rotationPoint, array.length - 1, target);
  }
};

WTF is ‘base case and build’ (CTCI power set problem)

This is a problem from Cracking the Coding Interview that I did not manage to solve nicely on my own.

It is an example of a problem that can be solved using the ‘base case and build’ method.

I have implemented it in TypeScript, based on the given solution in CTCI (so I cheated basically).

I’m going to try and explain the solution back to myself in a way I understand so I hopefully can make it stick and solve similar problems in the future.

Power Set: Write a method to return all subsets of a set.

SPOILER: This is how you can do it recursively:

const powerSet = (set: number[], index: number): number[][] => {
  let allSubsets: number[][];

  if (set.length === index) {
    // Base case: the power set of the empty set contains just the empty set
    allSubsets = [];
    allSubsets.push([]);
    return allSubsets;
  } else {
    allSubsets = powerSet(set, index + 1);
    const moreSubsets: number[][] = [];
    for (let i = 0; i < allSubsets.length; i++) {
      const newSubset: number[] = [];
      newSubset.push(...allSubsets[i]);
      newSubset.push(set[index]);
      moreSubsets.push(newSubset);
    }
    allSubsets.push(...moreSubsets);
    return allSubsets;
  }
};

What’s going on here then?

This is an example of using the ‘base case and build’ method of solving problems, where you take the simplest meaningful case of the problem and solve it, then figure out how to move from that simple case to the next iteration, and build a (probably recursive) implementation to take you from n-1 to n.

For simplicity, I’m using arrays to represent sets.

If we need to find all subsets of an empty set ([]), i.e. n=0, we get one subset: []. Our method should return [[]] in this case.

Now n=1. Let’s try with the set [1]. i.e. a set with cardinality of one, with a single element, the integer 1.

This gives us two subsets, the empty set [], and the set [1]. So our method here should return [[], [1]].

Now n=2. We’ll try the set [1,2]. A set with cardinality 2 with two elements, the integers 1 and 2.

This gives us four subsets:

[[], [1], [2], [1,2]]

At this point, believe it or not, we have enough information to solve this problem for any n.

As our n increases, we just have to clone all subsets from n-1, then append the nth element to each of the new subsets.

In our case:

n=3, let’s use the set [1,2,3].

Clone [[], [1], [2], [1,2]] and put it into the array:

[[], [1], [2], [1,2], [], [1], [2], [1,2]]

Now append the nth element (3) to all of the cloned subsets:

[[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]

Now we have a base case (below is shitty pseudocode btw, not TypeScript):

if (n===0) { subsets.push([]) }

and we can use our newly discovered transformation to construct the remaining subsets recursively.

else {
  cloned = subsets.clone();
  append set[n] to each cloned subset;
  add the cloned subsets to subsets;
}
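Putting the base case and the clone-then-append transformation together, a compact iterative version might look like this (a JavaScript sketch, not the CTCI solution, function name my own):

```javascript
// Start with the power set of the empty set ([[]]), then for each
// element, clone every existing subset and append the element to the
// clones. Each element doubles the number of subsets.
const powerSetIterative = (set) => {
  let subsets = [[]];
  for (const item of set) {
    subsets = [...subsets, ...subsets.map((s) => [...s, item])];
  }
  return subsets;
};
```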

In terms of time and space complexity, this solution uses O(n * 2^n) space and O(n * 2^n) time, which is the same as the total number of elements across all subsets.

I need to think a bit more about why exactly this is the case, but visually you can see the rate at which it grows by drawing out n from 0 up to 4


[[]]

[[], [1]]

[[], [1], [2], [1,2]]

[[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3]]

[[], [1], [2], [1,2], [3], [1,3], [2,3], [1,2,3], [4], [1,4], [2,4], [1,2,4], [3,4], [1,3,4], [2,3,4], [1,2,3,4]]

WTF is two’s complement

Two’s complement is a way of representing signed (positive and negative) integers using bits.

The left most bit is reserved for dictating the sign of the number, 0 being positive, 1 being negative.

The magnitude of the integer is a bit weirder.

0001 is two’s complement binary for the decimal 1

1111 is two’s complement binary for the decimal -1

How in the name of all that is good and sensible did we get from 0001 to 1111 you might ask?

Repeat after me:

‘Flip the bits and add one’

0001 with all its bits flipped around is 1110, then we add 0001 to it and we get 1111.
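In code, ‘flip the bits and add one’ for a 4-bit number might look like this (a JavaScript sketch, function name my own; the mask plays the role of discarding any carry out of the fourth bit):

```javascript
// Two's complement negation for a 4-bit number:
// flip the bits (~), add one, then mask to the lowest 4 bits.
const negate4 = (bits) => (~bits + 1) & 0b1111;
```

So negate4(0b0001) gives 0b1111, the 4-bit pattern for -1, and negate4(0b0000) gives 0b0000 straight back.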

Why on earth would you do something so wilfully perverse as this?


  • A number system with two zeros is a pain in the arse.
  • We should really simplify the lives of those long suffering hardware engineers, who have to take these representations of numbers, and make the computers do maths with them.

Two’s complement solves both of these problems.

To understand why, let’s try and reinvent signed 4 bit integers.


Our first attempt will be as follows:

The left most bit is used to tell us the sign (1 for negative, 0 for positive), and the remaining three bits are used to represent the magnitude of the integer.

Looks simple enough. 1 is 0001, -1 is 1001.

Let’s kick our new system in the balls a bit.

What does 0000 represent? Easy, zero.

What about 1000? Easy, negative zero.

AHHHHhhhhhhh…. that’s a problem. I bet those hardware people aren’t going to like that. They’re always complaining about stuff like this.

I think we are wasting a slot in our binary integer, as we’ve got a redundant representation of zero, and our hardware will have to somehow account for this positive and negative zero when implementing any maths functionality. Sigh. OK.

After checking, the hardware engineers inform us that yes, this is a pain and no, they won’t have it (fussy fussy hardware engineers).

Also, it turns out that this representation is somewhat fiddly to do subtraction with.

0010 + 0001 in our naive signed binary is 2 + 1 in decimal. Which is 3 in decimal, or 0011 in binary.

0010 + 1001 in our naive signed binary is 2 - 1 in decimal. Which is 1 in decimal, and should be 0001 in binary.

However, if we add our binary representations in the simplest way we get 1001, or -1. Balls.

I assume there are ways around this, but those damned hardware people won’t do it because reasons. Grrr.

So to recap, ‘sign and magnitude system’; two representations of zero; painful to do basic maths with.

Fine we’ll throw that out.


Okieeee so what if we flip all the bits when we turn something negative? So 1 is 0001, -1 is 1110.

Now we can try our previous subtraction problem and just add them in the dumbest way possible and we’ll get the right answer I think? Let’s try it:

2-1 is represented as 0010 + 1110, which gives us 0000 with a one carried over.

Which is… still not right.

‘You should take the carry and stick it on the end’

What’s that Gwyneth?

‘Take. The carry. Stick it on the end’

Fine. At this point what do I have to lose.

0000 with the carry stuck on the end is 0001. Which is correct! Well done Gwyneth.

Still. I bet they won’t have that double zero stuff. Probably they’re going to kick up a fuss about this moving carries around as well.


Tabatha, what are you guys chanting in the corner there? By the candles and the… wait is that blood?

‘Flip the bits and add one’

‘Flip the bits and add one’

‘Flip the bits and add one’

‘Flip the bits and add one’

‘Flip the bits and add one’

Not totally happy with this blood situation, but that’s not a bad idea Tabatha!

Let’s give it a go:

2 - 1 = 1

In two’s complement:

0010 + 1111 = 0001 with a one carried

The chanting has changed…

‘Discard the carry’

‘Discard the carry’

‘Discard the carry’

‘Discard the carry’

‘Discard the carry’

You know what I think you’re right, we don’t need it. We’ve got the right answer. Finally 2 - 1 = 1 without any messy carries!

Also, I think I’ve just spotted something even neater. This gets rid of our duplicate zero right?

Two’s complement of 0000:

flip the bits:

1111

add one:

0000 with a carry of 1

discard the carry:

0000
The circle is complete.

YES!!! How do you like that hardware people!?!

Recursion – an epic tale of one family of functions’ desperate struggle for survival

I am trying to develop my intuition for recursion.

To this end I spent some time trying to solve mazes with recursive functions.

Along the way I discovered a hidden and dramatic universe, where collections of function calls band together, sacrificing whole family trees in the pursuit of a secure future for their species.

First, let’s solve some mazes:

Specifically, let’s start with this one, and use it as an example to develop a general strategy for solving mazes:

a badly drawn but simple maze

Right, tell me how to get from the entrance (always top left) to the exit (always bottom right).

Seems simple enough, just walk down until you hit the bottom, then go right. Piece of piss.

Ah but computers don’t work that way, they don’t like ambiguity. Try and be a bit more precise. Also what if the hedges in the maze are arranged differently? Does that plan still always work?


1) We already know where the exit is, so start there and work backwards, keeping track of where we’ve been, then we’ll have a nice ordered list of grid locations we need to traverse in order to get out of the maze.

2) Let’s try moving up first. Ah but then we hit a wall at the top.

3) No bother: move left

No no hold on this won’t do. This is all far too vague for a computer. At every step, it’s going to need very clear instructions to allow it to figure out what direction to go in.

SIGH… fine

Steps for the computer:

1) Go up if you can, and you haven’t come from that direction

2) Otherwise go right if you can, and you haven’t come from that direction

3) Otherwise go left if you can, and you haven’t come from that direction

4) Otherwise go down if you can, and you haven’t come from that direction

There! Happy!?!

Oh much better yes. How does it know how to stop though?

Ok shit. Before you do anything, check if you are at the entrance to the maze


This looks reasonably hopeful now. I’m pretty sure we’ve missed a few edge cases, but this looks like something we can work with.

(see below for my actual real notes I wrote while trying to figure out this strategy)

diagram with various mazes on it

Bring on the Recursion

This process has the smell of something that could be implemented using recursion.

We have a base case (‘am I at the entrance?’), and some logic for deciding how to call the next iteration of the solution.
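The strategy above can be sketched as a recursive function (a JavaScript sketch under my own assumptions: the maze is a 2D grid where 0 is open and 1 is a hedge, we start at the exit, and the entrance is the top left; I’ve used a visited set rather than tracking the direction we came from, which dodges a couple of edge cases):

```javascript
// Recursively solve a grid maze from (row, col) back to the entrance
// at (0, 0). Returns the path as a list of "row,col" strings, or null.
const solve = (grid, row, col, visited = new Set()) => {
  const key = `${row},${col}`;
  if (
    row < 0 || row >= grid.length ||   // in a wall
    col < 0 || col >= grid[0].length ||
    grid[row][col] === 1 ||            // in a hedge
    visited.has(key)                   // already been here
  ) {
    return null;
  }
  if (row === 0 && col === 0) {
    return [key]; // at the entrance: path found
  }
  visited.add(key);
  // Try up, right, left, down, in that order
  for (const [dr, dc] of [[-1, 0], [0, 1], [0, -1], [1, 0]]) {
    const path = solve(grid, row + dr, col + dc, visited);
    if (path) {
      return [...path, key]; // pass the good news back up
    }
  }
  return null; // dead end
};
```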

Oh blah blah blah recursion so what. Tell them about the Glarpols!

Oh fine… This is a Glarpol:

fluffy creature

Don’t be put off by its cute and fuzzy appearance. It has a statistically tough challenge ahead of it.

Glarpols exist across infinite different parallel universes, with each universe looking suspiciously similar to our scrappily drawn mazes above.

Much like the Salmon has to struggle and swim upstream in order to spawn, and ultimately survive as a species, so too do the Glarpols.

In each universe, a solitary Glarpol is spawned at the exit to a randomly generated grid shaped maze. The exit is always on the bottom right of the grid, and the entrance is always on the top left of the grid.

If the Glarpol and its offspring manage to find a route to the entrance to the maze, they will survive as a species in this universe. If they do not, there will be no Glarpols in this particular universe.

In order to maintain their population, Glarpols require sufficient oxygen. Every time a Glarpol is created, they use a bit more oxygen.

If too many Glarpols are alive at the same time, then they will not have enough oxygen, and they will all die.

A Glarpol can move once, and breed once, and in contrast to many species, will stay alive until all of its children die, at which point it will also die.

Glarpols don’t know about the grid, but each Glarpol is born with the following information available to them:

1) Whether the maze has been solved by another Glarpol

2) Their ancestors’ journey up until this point

3) What the entrance to the maze looks like

4) Whether they are in a hedge

5) Whether they are in a wall

With this information, each solitary and freshly born Glarpol must decide what to do with their life. The choices are somewhat bleak…

  • Has someone else found a path? If so kill myself, I can’t help my species and I’m wasting oxygen.

  • Am I at the Entrance? If so tell the rest of my species that we are saved, and tell my parent how to get to the entrance. The effort of transmitting this message causes me to die.

  • Am I in a Wall? If so kill myself, I can’t help my species and I’m wasting oxygen.

  • Am I in a Hedge? If so kill myself, I can’t help my species and I’m wasting oxygen.

If a Glarpol has managed to live long enough to get this point, they will give birth to three children, and immediately afterwards will send them out into the world, in all directions (left, right, up, down) other than the one which their parent (who they will never meet) came from. They will also give each of these children a copy of their ancestors’ journey up until this point, along with their current position.

The Glarpol will then patiently wait alone to hear back from their offspring about their fate.

If they hear back that one of their descendants has found the entrance to the maze, they happily forward on the message to their parent. As before, the effort required to do this causes them to die.

Otherwise, if their children die, they also die.

OK… weird story. Do you have some code I can see. This is recursion right?

That’s right. In this tenuous and painful analogy/metaphor, each spawned Glarpol is a recursive function call, oxygen is memory, and the path to the entrance of the maze is the solution to the maze.

If your program runs out of memory by calling too many recursive functions, it will crash. If it finds a path through the maze it terminates successfully.

If you want to see and/or play around with the Glarpols’ universe here is the code I developed when messing around with this idea:

In defence of reinventing the wheel (WTF is Recursion)

Something which is said often in software teams (especially in good software teams) is don’t reinvent the wheel.

Generally, this is great advice.

Software is increasingly complex, and if you can save yourself some time and effort, and make your final product more robust by using something somebody else has written, that is a good thing. Especially in complex applications which matter.

A notable example of this is time/date manipulation in JavaScript.

If you ever find yourself doing ANYTHING complicated with dates in a production JavaScript application, stop, just stop, and go and install this library.

‘If I have seen further it is by standing on ye sholders of giants’ – Isaac Newton

‘I don’t know what I would do without NPM’ – Also probably Isaac Newton

Basically, using other people’s stuff for problems that have already been solved frees your team up to focus on solving the more difficult and interesting problems that your specific app needs to solve.

Also no though

If you take this to the extreme, you can end up with a very shallow understanding of things.

When you are learning a new concept, or deciding whether to use a trendy framework or library, you should probably first have a go at reinventing the wheel. Or, to put it another way, you should try and expose yourself to the raw problem that is being solved by the shiny new thing first. If you understand the problem it is trying to solve, you will have a much better ability to use and learn the tool, and will be able to properly assess whether it is something you actually need.

As a stupid example, imagine explaining wheels to a newly discovered group of humanoids that have developed the ability to fly.

Without understanding that wheels make it easier to transport heavy things over distance, ON THE GROUND, they would have no intuitive understanding of the benefits of wheels.

How would you explain wheels to flying creatures?

Flying creature: ‘Why do we need wheels?’

Non-flying creature: ‘Carrying heavy things. Friction is easier to overcome than gravity. We have been using them for ages. Can definitely recommend.’

Flying creature: ‘Oooooh I get it now. Give me a wheel plz.’

Cool cool cool, reinvent all the things, got it. Do you have a better example?

Why yes, yes I do.

I’ve recently been bashing my head against recursion (again) as I struggle (again) to refresh my rusting knowledge of data structures and algorithms.

I did a computer science masters, I have worked in the industry for four years, and I have read about and implemented countless recursive functions.

I still have no f**king intuition whatsoever for recursion though.

Every time I have to solve a problem using recursion (usually in coding interviews…) I diligently code up a solution and run it, and it does NOTHING that I am expecting.

Normally it runs out of memory and I basically tweak variables and return values until it (sort of) behaves.

I have decided that enough is enough. So in order to build up a better mental model of and intuition for recursion, I’m going to (sort of) reinvent it, and then try and explain it back to myself in a way that makes sense.

Hopefully along the way things will start to click, click, click into place.

(What follows is a loosely edited version of hand written notes that I put together over the course of about 6 painful hours. They bounce around a bit. I have deliberately left them pretty much as they were to remind myself how this process looked).

Why recurse at all?

Some problems are a bastard to solve in one go. A better approach is to split the problem up into smaller and smaller sub problems, until you have the smallest and simplest meaningful problem.

Once you have that tiny little nubbin of a problem that is laughably simple, solve it, and then recombine all of the solutions to these teeny tiny easy problems to give you a solution to the big scary problem.

That, I think, is the essence of what makes recursion useful in programming.

Recursion, a life in quotes

Recursive – “periodically recurring”

“Pertaining to or using a rule or procedure that can be applied repeatedly”

“Recursion occurs when a thing is defined in terms of itself or its type”

It is kind of like backwards induction then?

Induction… WTF is that?

Mathematical induction is a method for proving that statements are true.

An example of a use for mathematical induction is proving that you can go as high as you like on an infinite ladder. Here’s how you do it:

1) Prove we can get to the first rung (this is called the ‘base case’)

2) Prove we can move from n rungs, to n+1 rungs

QWOD ERRAT DEMONSTRAAANNTEM (we done proved a thing)

Mathematical induction also turns out to have been around for a long time. Like the Greeks used it. Much longer than recursion in fact.

For now, it is enough to say that induction and recursion seem to be somehow linked.

They both involve boiling a problem or a process down to its simplest form, and then detailing a simple mechanism for moving from one iteration to the next.
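
As a sketch of that link (my example, not from my original notes): summing the numbers 1 to n recursively reads almost exactly like an inductive proof.

```javascript
// Base case: sum(1) is 1 (we can reach the first rung).
// Inductive step: if sum(n - 1) is solved, sum(n) is just n + sum(n - 1).
function sum(n) {
  if (n === 1) {
    return 1;
  }
  return n + sum(n - 1);
}

console.log(sum(5)); // 15
```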

Enough yammering. Show me some recursion!!!

Okie dokie:

function recursive() {
  recursive(); // calls itself, forever
}

A recursive function, is nothing more than a function that calls itself as part of its declaration.

If you write a function like the one above, and call it in a program, you will meet some variety of this charming individual:

RangeError: Maximum call stack size exceeded

*’Bastard f**king recursion. Why does this keep happening!?!’*

‘Oh mate you’ve blown up the stack. It’s overflowing all over the place. Tragic’

Stack!?! WTF is a stack, and why would it overflow?

Good point, let’s see. Going back a few steps:

What happens in a program when a function calls itself?

When a function is called at runtime, a stack frame is created for it and is put on the call stack.

Stack frames require memory, so if you call a function that calls itself, it will try and keep doing it forever, and will try and create endless stack frames. These stack frames require memory and that is a finite resource, so eventually the program runs out of memory and crashes.

Sounds half plausible. What is a ‘stack’/’the stack’, or ‘call stack’? Is that what it’s called? I forget

I’m pretty sure it’s just a stack, that is used for keeping track of function calls, and their evaluation/results.

function funcA() {
  return 3;
}

function funcB() {
  return funcA();
}

funcB();



When this is run, I believe somewhere, something like this is created:

demo of call stack
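
Roughly (this is my reconstruction of the hand drawn diagram), the stack grows and shrinks like this:

```text
call funcB()          | funcB frame |   <- pushed first
funcB calls funcA()   | funcA frame |
                      | funcB frame |
funcA returns 3       | funcB frame |   <- funcA frame popped
funcB returns 3       |   (empty)   |   <- funcB frame popped
```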

OK, so because stack frames are dynamically pushed onto the call stack when functions are called, if we call a function inside a function and never have a termination case, the executing program will attempt to put infinite numbers of stack frames onto the call stack, and will run out of space:

stack overflow drawing

That’s probably enough on call stacks for now. I’m absolutely certain some of the stuff above is wrong, but I think it’s enough to understand why my code keeps crashing.

In order to avoid toppling call stacks, our recursive function must have some sort of exit clause, which allows it to stop calling itself.

Because functions are evaluated and run in their own sandboxed execution context, the piece of state which tells a recursive function it is done will probably have to be passed to it I guess.

Let’s make a for loop without variable assignment

function loop(loopCount) {
  console.log("in a loop");
  if (loopCount === 1) {
    return;
  }
  loop(loopCount - 1);
}

loop(5);


I’m relatively certain that as this runs, it will push 5 stack frames onto the call stack (as in the shoddy diagram below), and then pop them off once the ‘base case’ is evaluated as true (loopCount === 1).

a looping stack
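
In place of the drawing, here is my rough trace of calling loop(5):

```text
loop(5) pushed -> loop(4) pushed -> loop(3) pushed -> loop(2) pushed -> loop(1) pushed
loop(1) hits the base case (loopCount === 1) and returns
loop(1) popped -> loop(2) popped -> loop(3) popped -> loop(4) popped -> loop(5) popped
```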

All of the TypeScript!!!!

At this point I went down a bit of a rabbit hole and implemented a bunch of looping methods in TypeScript for fun.

Here they are:

function loop(loopCount: number) {
  console.log("In a loop");

  if (loopCount === 1) {
    console.log(`loop called with ${loopCount}`);
    return;
  }

  loop(loopCount - 1);

  console.log(`loop called with ${loopCount}`);
}

loop(5);


function customLoop(loopCount: number, callback: (key: number) => void) {
  callback(loopCount); // assumption: fire the callback once per 'iteration'

  if (loopCount === 1) {
    return;
  }

  customLoop(loopCount - 1, callback);
}

customLoop(20, (key: number) => console.log(key));

function iterator(array: any[], callback: (item: any, key: number) => any) {
  function iteratorRecursive(key: number) {
    if (key === array.length) {
      return;
    }

    callback(array[key], key);

    iteratorRecursive(key + 1);
  }

  iteratorRecursive(0); // kick off the recursion from the first element
}


const testArray = ["five", "silly", "frogs", "are", "bathing"];

iterator(testArray, (item, key) =>
  console.log(`'${item}' is found at element ${key} in the array`)
);

// The built-in equivalent, for comparison:
testArray.forEach((item, key) =>
  console.log(`'${item}' is found at element ${key} in the array`)
);

function baseCaseLess() {
  console.log(new Date().getSeconds());
  if (new Date().getSeconds() % 2 === 0) {
    // assumption: keep recursing while the clock reads an even second;
    // the 'exit clause' comes from time, not from a passed-in argument
    baseCaseLess();
  }
}



What have we learnt?

  • Recursive functions call themselves

  • Recursive functions must be given the means to stop themselves. The information telling them to stop doesn’t necessarily have to be passed to them (see baseCaseLess function above), but probably will be.

  • The state that tells a recursive function to end or not has to change, either progressively per function call, or as a function of something else that is changing. E.G. time.

Loops are cool and all, but what is an actual useful example of a recursive function?

Let’s have a go at solving some mazes 🙂

If you’re intrigued, check out this post, where I draw up some dubious metaphors and analogies, and play around with solving mazes using recursive functions.

Why are you doing all this again?

I am trying to develop increasingly useful mental models for programming concepts that I am less fluent with than I’d like to be.

I believe a useful model should allow me to understand a concept well enough to solve problems with it at the level I need to.

If I find things not making sense, or behaving differently to how my current model behaves, I know I probably need to refine my model, and clean up the fuzzy bits (as is/was the case with recursion).

Models can be as deep as you want. Ultimately I see them as just a way to understand a complex thing and work with it.

Without effective mental models I feel like I’m kind of just guessing and stumbling around in the dark.

HTF do i learn things in my spare time without melting my brain!?!

I am going to make the assumption that you are a programmer.

If so, then you probably spend a lot of your day doing intense mental gymnastics, and wrestling with obtuse and painful logic puzzles, or ‘programming’ as it is referred to by some people.

You also probably enjoy it.

You sadist you.

The problem solving side of the job is for many of us a large part of what makes it enjoyable.

It sure is tiring though.

What’s your point? Programming is fun but tiring?

My point is that although this application of mental effort is satisfying and challenging, it comes at a price.

We only have a certain amount of focused attention we can spend in any single day, and if you spend all day hammering your brain at work, it will be pretty useless by the time you get home.

This is fine, unless you want to spend your time outside of work also tackling problems or learning things which require sustained focus.

Why do you care about this? Surely you can just spend your time outside of work playing PlayStation or watching the Apprentice? Problem solved.

That is true…

Let’s assume for now though that you have some side project or learning goal that you want to pursue in your spare time, which requires sustained mental focus.

In my case I am trying to consolidate my wobbly maths skills, and learn some physics.

To this end I’ve been bashing my head against A level and university level maths and physics text books in my spare time, and attempting to teach myself enough of these things to scratch my curiosity itch.

To learn and understand the concepts covered in these subjects definitely requires focus, and I’ve managed through trial and error to get to a point where I can make progress on my learning goals, without impacting my productivity at work, or suffering a brain meltdown.

OK smarty pants, how?

My approach has been influenced heavily by Scott Young, who challenged himself to learn and pass the exams for the entire MIT computer science undergraduate course in one year:

His writing focuses heavily on how to optimise the time you spend studying, to achieve maximum understanding in the minimum time.

He calls these kind of intense learning projects ‘Ultralearning’ projects, and he even has a book on it which is worth a peek:

Another key influence was the book ‘Deep Work’ by Cal Newport:

This book forwards the idea that in the modern, and highly technical world, the ability to focus on and solve hard problems, and to learn difficult things, is at an absolute premium.

Meaning that the rewards for getting good at learning and solving difficult problems are very high currently.

Additionally, he lays out a series of techniques for achieving this focused work.

I recommend you consult both of these sources as they are very interesting, and they probably have a wider application than my own interpretation of these ideas.

That said, this is my blog, so here’s my approach.

My super special technique for learning hard things during spare time:

Every morning, before work, try and do two 45 minute sessions of extremely focused work, on whatever problem you are currently tackling. Then as many sessions as you can fit in to the weekend in a sustainable way (probably Saturday or Sunday morning).

For me at the moment the problem might be a physics problem, a mathematical technique, or a new concept I’m trying to understand.

The activity itself during these sessions varies quite a bit, and is not really important. The important thing is that this should be very very focused work with a clear goal (for me generally this means understanding something well enough to solve problems related to it, and to explain it to someone else).

Focused means no phone, no social media, no distractions.

In my case I work better with noise cancelling headphones, and music. I quite often just play the same playlist or song on repeat for the entire session.

Focusing like this will be hard at first. If you are learning difficult new things, you will feel stupid, and your fragile ego will start to panic.

My early attempts went something like this:

‘Ok focus time. Trigonometric identities. Let’s go’

‘I don’t want to just remember these, lets find a proof for them so I can understand them better’

‘Ouch! This proof is hard. I don’t understand how they got from there to there. Maybe I’m too stupid for this. I probably should get some food or coffee first anyway. Urgh this is horrible. I’ll just have a look at this test for dyscalculia (maths disability), maybe I have it and that’s why I can’t do this.’

And so on.

For me, the key thing was to commit to doing the whole 45 minutes. I would tell myself that regardless of how badly it is going, I have to focus for this time, and after that I can stop and do whatever else I want.

This is difficult at first, but over time becomes habitual.

In fact, developing habits that support your sustained focus sessions is key to being successful in this area, and both of the resources above outline techniques for achieving this.

The general idea though is that willpower is finite, and deliberately choosing to do hard things is tiring.

Habits on the other hand, are automatic, and painless.

Think about other good or bad habits, such as checking social media, smoking, or cleaning your teeth. You probably don’t think too much about these things, they just happen automatically, after certain cues.

The basic pattern of a habit is cue => action => reward.

This applies to bad habits and good habits.

For me, the habit loops I have been successful in drilling into myself to drive this morning routine are as follows:

up at 6 => go downstairs and start making coffee => browse smart phone while coffee is brewing (sweet sweet internetz)

take coffee upstairs and sit at desk => focus hard for 45 minutes => relax for ten minutes, get a snack, more coffee, do some stretches etc.

and repeat the last loop over and over again until I’ve had enough.

The reason this works, is that over time, consistency is more important than just about everything when it comes to making progress on difficult long term goals.

If you can consistently hit a set number of these sustained focus sessions during the week, you will make solid progress towards your goal. If you don’t track things this explicitly, it is easy to become demoralised, not see your progress, and give up.

If I get through half as many focus sessions in a week as I normally do, I know something is up, and I can go rooting about for causes.

Maybe staying in the pub till closing time on Tuesday evening had something to do with it? OK, next week let’s try not to do that.

But doesn’t this mean that you’re taking the valuable focus time that you should be spending at work, and spending it on yourself instead!?! What about your poor employer!?!

Firstly, outside of working hours, I will always prioritise my own goals over those of my employer, and I would suggest you do the same.

That said, I also don’t think it works that way.

The difference between starting my day by:

a) Rolling out of bed as late as possible, dragging myself to work and spending the first hour waking up and inhaling coffee

b) Achieving two hours of calm and sustained focus in pursuit of a goal I am personally interested in

is huge.

The second one results in my arriving at work awake and ready to tackle problems, the first one… not so much.

Cal Newport, as part of his research for the above book, also found that engaging in deep focused work over time actually increases your ability to tackle difficult problems in the future, and to do more deeply focused work.

Getting better at the meta skill of focusing on tough problems, improves your ability to do this in other settings (like at work).

So although it is true that you only have a set number of hours you can focus hard on any problem during the day, deliberate practice and consistently pushing yourself to improve at solving hard problems, improves your ability to do your job.

It’s a win win! You can be happy and pursue your own goals, and also be more effective at work!

Based on my sample of one, I definitely have found this to be the case.

So there you have it, my totally biased and personalised approach to learning hard stuff outside of work, when your day job involves brain melting activities. What are your thoughts?

WTF is currying?

As a developer who spends most of my time at work writing JavaScript or TypeScript, I’ve heard references to ‘currying’ all over the place in blogs, Stack Overflow answers, from my colleagues, and more recently in quiz style technical interviews.

Whenever it gets brought up I do the same thing.

I google the term ‘currying’, figure out that it is basically taking a function that accepts multiple arguments, and converting it into a function that takes one argument, and then returns another function which can take another single argument, etc. etc.

That is to say:

const unCurried = (arg1, arg2, arg3) => {
    console.log(`First argument is ${arg1}, second is ${arg2}, third is ${arg3}`);
};

unCurried('this', 'that', 'another');

const curried = (arg1) => {
    return (arg2) => {
        return (arg3) => {
            console.log(`First argument is ${arg1}, second is ${arg2}, third is ${arg3}`);
        };
    };
};

curried('this')('that')('another');


At which point I say to myself

‘Oh cool yes I remember this. Neat idea. Not sure when I’d use it, but at least I understand it. What a clever JavaScript developer I am’

So maybe, given that currying is a relatively simple concept to implement in code, a better question might be

WTF is the point of currying?

If I stumble onto a concept in mathematics or programming that I don’t understand, I generally try to figure out where it came from, what problem it was/is trying to solve and/or which real world relationship it is trying to model.

So first of all, why is it called ‘currying’? Is there some significance to the name that will make its intention clear?

Currying… maybe it means to preserve something or to add ‘flavor’ to a function in some way?


Turns out it’s because a logician called Haskell Curry was heavily involved in developing the idea. So that’s a dead end.

It also looks like Haskell Curry was developing his ideas based on the previous ideas of some people called Gottlob Frege (died in 1925), and Moses Schönfinkel (died in 1942). Which suggests that maybe the ideas behind currying did not originally come about in response to a programming problem…

In fact, currying originated as a mathematical technique for transforming maths style functions, rather than programming style functions.

Mathematical functions and programming functions are related, but slightly different.

A mathematical function basically maps one set of data points to another set of data points. That is, for every input value to the function, there is a corresponding, specific output value.

Functions in programming also take inputs, but they can do whatever they like with those inputs (or arguments), and they are under no obligation to return a single specific output value.

Currying, as it is defined, seems to only relate to inputs to functions (arguments), and so is presumably equally applicable to both mathematical and programming style functions. OK cool. What’s currying again?:

‘Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument.’ – Wikipedia

OK yup I remember, and why do mathematicians curry?

(Based on Wikipedia’s summary)

Some mathematical techniques can only be performed on functions that accept single arguments, but there are lots of examples where relationships that can be modelled as functions need to take multiple inputs.

So currying allows you to use mathematical techniques that only work on functions with single arguments, to tackle problems that involve multiple inputs.

This is all well and good, and we’ve already seen how we can implement currying in JavaScript, but… I still don’t get what the practical benefit of it is in a programming sense, especially in JavaScript!

Eureka! (A case of accidental currying)

I basically got to the point above, and then went back to ignoring currying as I didn’t really get what practical application it had for me, as a predominantly front end JavaScript developer.

A few months later, I found myself in the privileged position of having a project at work that was entirely my own. I got to write an automated testing solution in TypeScript, using Cypress, and was basically given free rein to organise the code and repository as I pleased.

I’ve been gradually moving towards and playing with more functional style programming, and one of the things I found myself wanting to do, was writing functions to create functions with different ‘flavors’:

const pricePrinter = (currencySymbol) => {
    return (priceInNumbers) => {
        // assumed body: print the price with its currency symbol
        console.log(`${currencySymbol}${priceInNumbers}`);
    };
};

const dollarPricePrinter = pricePrinter('$');

const poundPricePrinter = pricePrinter('£');

dollarPricePrinter(15);
poundPricePrinter(15);



Ignoring the wildly impractical nature of the example above, this pattern is quite useful. It allows you to compose functions neatly and semantically, with little ‘function factories’, and talk to code that is sufficiently abstracted.

I was very happy with this pattern and proudly showed my colleague what I’d discovered.

His response was to glance over briefly and go ‘Oh yeah that’s currying. Cool’, and then go back to work.

So there you go. One practical application in JavaScript of currying, is to compose functions together to make little function factories, that are passed the context they will be operating in as an argument (the currency symbol in the example above). Kind of like constructors for functions. Neat.
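
For completeness, here is a sketch of a generic curry helper (my own illustration, the names are made up), which keeps collecting arguments until the original function has enough of them to run:

```javascript
const curry = (fn) => {
  // Collect arguments across calls; run fn once we have enough
  // (fn.length is the function's declared parameter count).
  const collect = (...args) =>
    args.length >= fn.length
      ? fn(...args)
      : (...more) => collect(...args, ...more);
  return collect;
};

const add3 = (a, b, c) => a + b + c;
const curriedAdd3 = curry(add3);

console.log(curriedAdd3(1)(2)(3)); // 6
console.log(curriedAdd3(1, 2)(3)); // 6
```

A helper like this lets you write your ‘function factories’ without hand-rolling the nested arrow functions every time.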


I may be wrong. I am happy to be proven wrong. I am equally even happier for someone to give me additional examples of currying in the wild.

WTF is wrong with software!?!

I recently watched the excellent series Chernobyl.

It tells the story of a catastrophic nuclear meltdown, caused by concerted human incompetence and blind adherence to bureaucratic demands, even in the face of blatant evidence that what was being done was stupid, negligent, and dangerous.

After the accident, it took a long time to understand what had gone wrong, blame was thrown around wildly, and there were official efforts to disguise the causes of the accident.

Very little was done to try and learn from systematic failures, and to address the problems demonstrated by the incident.

As a software professional, it made me very sad how many similarities I can see between most places I have worked, and the Chernobyl accident and cover up.

How I feel most of the time at work:

dog drinking coffee in a burning building

The way software is written is fucked.

It is complex enough to be understood by very few people, and the process of writing it is consistently mismanaged. This is despite the fact that most of the pitfalls and mistakes made by literally every single team have been pointed out, along with detailed steps for how to avoid them, since the 1970s.

Here is a pretend conversation that sums up the level of ignorance I have found in my professional life, as applied to another industry. (it is a metaphor you see):

  • Bob: Hello, I’m the new pilot for this here commercial aircraft.
  • Geoff (Bob’s employer): Hello Bob! Glad to have you on board. I hope you’re worth the enormous sum of money we’re paying you
  • Bob: Well of course I am! I passed your test didn’t I 🙂
  • Geoff: Well yes, you are right, you were very good at identifying the key differences between a jumbo jet and a bicycle, so I guess we’re all OK on that front

Some time later…

  • Emma(Bob’s copilot): Bob could you land the plane for me please. I need to visit the bathroom
  • Bob: Sure thing. One question, where do you keep the landing ropes?
  • Emma: What was that Bob? The landing ropes… What are they?
  • Bob: The ropes you use to loop around the lorry on the landing bit on the floor so it can pull you in and you can land
  • Emma: Bob, that is insane. What are you talking about
  • Bob: Oh, do you use a different system. I could use roller skates instead
  • Emma: …
  • Geoff: Hey guys, are you going to land this plane soon
  • Bob: Sure thing, just debating whether to use the ropes or the roller skates. I think we’re leaning towards the roller skates given the weather
  • Emma: …
  • Geoff: OK great, let me know how you get…
  • Bob: Hold up there Emma, I’m not sure there’s any need to get angry like this
  • Geoff: Emma, what’s your problem? We really need to land this plane soon so can you please sort this out between you. What Bob said about the roller skates sounds sensible to me
  • Bob: I think that’s a bit extreme Emma, when I worked at Initrode we always used roller skates in inclement weather and we hardly ever crashed. We never took off either though… but I’m pretty sure the landing stuff would be fine


For another excellent summary of the type of madness you are likely to find in a software team, try this:

‘Ah but fortunately your example was unrealistic. That involved an aircraft, which could crash into the floor and kill people. Also nuclear reactors are totally different to coding. Software is just computers. It can’t do anything really nasty… right?’


Not only is software fucked, it is increasingly responsible for scary things.

Defective software has recently:

  • Crashed two planes.
  • Exposed countless peoples’ private information to criminals. (This happens so often now that it’s barely even news)
  • Stopped customers from accessing their online banking for months.
  • Prevented doctors from accessing test results.
  • Grounded flights out of major airports.
  • Thrown people off electric scooters by ‘braking heavily’ under certain circumstances.

I found these examples in a ten-minute Google search. There are probably many more cases, many of which are inherently difficult to reason about because… software is fucked!

To recap, software is often:

  1. Responsible for scary things
  2. Written by incompetent people
  3. Managed by less competent people
  4. Very difficult to understand for ‘non technical’ people
  5. Shit

Which all combines to make me increasingly cynical and scared about my profession…

I have spent quite a bit of time reading about and discussing this phenomenon, and there appears to be a growing consensus, among both my peers and the wider community, that this cannot go on.

One of the potential solutions which is floated, in various forms is ‘REGULATION’.

Regulation!?! Yuck…

I’m not sure how I feel about it either… or what it should look like.

The cynic in me thinks that any attempt to regulate software development would add unnecessary overhead to an already slow and inefficient process, and would likely be mismanaged.

I also know that certain elements of writing software are very difficult to reason about, and that ensuring a program does what you intend it to do is hard.

That is to say, it is possible that even after doing your due diligence, becoming excellent at your craft, and testing all foreseeable outcomes, your code could still cause something horrible to happen.

In these cases I suspect assigning blame would be counter productive, whereas learning from mistakes could be very useful.

That said, the level of incompetence, and negligence I have encountered in some work places feels like there should be some legal protection in place, to make sure that development teams take responsibility for their code.

It seems totally unacceptable that testing and design is frequently shunted to the back of the priorities list, behind ‘hitting milestone 3’ or ‘drop 2’ or whatever that particular company is choosing to call their next client delivery.

If a shoddy system of development results in software which causes a loss of data, or money, or lives, then maybe someone should be held accountable.

That person should probably be paid a fairly large sum of money, for taking on the responsibility of ensuring that the software works (as many people in the software industry already are…), but equally, that reward should reflect the very real responsibility that has been taken on.

By which I mean that if it is found that the highly paid person that put their professional seal of approval on a product, did not adequately ensure that a system was put in place to develop robust and safe software, then they should be punished.

I don’t have a particular problem if your marketing website, or your blog has some bugs or UX issues, but almost all software I have worked on has involved user accounts and financial transactions, and is therefore responsible for peoples’ personal information and money.

That this type of software is developed within a system that rewards cutting corners, and punishes slow deliberation and robustness is something that I find deeply worrying.

Systems of development

One thing to emphasise, is that I don’t think the state of software is a reflection of any specific individual failings.

It is not a case of ‘shit developers’ or ‘shit managers’ or ‘shit clients’.

You might have noticed that I keep writing the word ‘system’.

That is because I recently read an excellent book on systems thinking and now frequently regurgitate snippets from it:

One of the things that really stuck with me from this book is the idea that the actual outcomes of a system always reflect its true goals, or motivations, but that those true goals might differ (sometimes wildly) from any officially stated goals.

My theory is that in the majority of cases, software teams’ true goals and motivators are not robust software or a reworked and improved product, or a solved customer problem, and that is why these are rarely the outcome.

A team responsible for a piece of software is made up of a number of actors, or elements, all with competing goals.

You may have an external client, who you work with to define what the software should do.

They probably have a set of internal stakeholders they have to please, they may have a certain deliverable linked to their bonus that year, they may have inherited the project from someone else, and maybe they are trying to leave the company anyway, so they don’t particularly care about the long term success of the product.

The budget for the piece of software might have come about from a high level executive joining the client company and wanting to throw their weight around. They might not still be at the company in six months.

The team developing the software might have, intentionally or otherwise oversold what they can deliver and offered it for an unrealistically low price.

All of this means that individual milestones or deliverables within a project are very important, as many of the actors within the system are depending on the outcome of them.

In a setup like the one above, the date the software is delivered on is more important than whether it is shit or not, because more of the people within the system are depending on the date being hit than on the software working well.

Individual developers within the team have a variety of different motivations.

Some are there just to take advantage of the elevated wages within the industry, and are content to turn up, do their job within the confines set out in front of them and go home.

They may or may not understand what they are doing, and they may make things so complicated from a code perspective, that reasoning about whether the product has bugs or not becomes almost impossible, but to be honest it doesn’t really matter.

As long as something is produced which looks and feels like something that was signed off and paid for, these guys have done their job.

These are the majority of developers I have encountered, and I don’t have a problem with them.

I do have a problem with the fact that the system they are working inside rewards this type of behaviour, but that is not their fault.

Others might want to actually deliver on the stated goals of the system, that is, roughly:

‘make a piece of robust software in the least complex way possible, such that it solves a clearly defined problem for the user of the software’.

This is quite a bit harder, requires constant reassessment of tools, techniques and design, and a commitment to mastering the fundamentals of your craft, across all disciplines involved, but in the medium to long term is definitely the way to go.

Against the backdrop of all the other competing goals and motivations, these latter elements are likely to be frustrated.

Calls to slow down the pace of development in order to revisit the design of the product, or even to reassess whether the product is viable as a solution are generally ignored, as they will impede ‘deliverable alpha’.

In such a system, cutting corners is rewarded at all levels, and, again, this is not particularly anyone’s fault.

This would all be fine if it was happening in a bubble, but it’s not. These pieces of software are responsible for stuff that matters, and the real costs of these decisions are not felt by the people responsible for making the software, but by the users themselves.

In the above scenario, there might be a few raised voices and tense meetings, but it is very likely that everyone involved in making the software will be rewarded, as long as something is delivered on or around the date in the project plan.

Users may be lumbered with a buggy, slow, insecure program, or web app, but that is not really factored into the cost of development.

So what’s your point?

People responsible for making software are often motivated by many things other than making reliable, secure software that solves a defined user problem.

The negative effects of bad software are generally felt by users, not by those who put the software together.

In order to improve this situation, and to get better software that isn’t shit, it seems like there needs to be some sort of change to make the people responsible for creating software feel the effects of their shitty software themselves more keenly.

I don’t know what that would look like, and to be honest I don’t really have a point… All I really know is that I find my profession increasingly scary and insane, and that I’m not alone.

In defence of job hopping

(don’t be like Tim)

As a primer for why I think you should consider becoming a feckless, shifty, good-for-nothin’ job hopper, I offer the following cautionary tale.

I retrained as a software developer after a fledgling career in market research. Read more about it here.

My first job after my computer science masters was on a graduate scheme at a medium-sized software consultancy, in a small university town, far away from the hustle and bustle of London.

I was excited to get started on my journey as a developer, and make up for lost time.

One of the reasons I opted to work for this consultancy was the shimmering promise of being able to work on many different projects, across different sectors and tech stacks, and to learn from the best.

The first few weeks were great. We were given in depth training in modern software engineering practices: automated testing, git, project management, scrum and agile, architecture and programming fundamentals. It was a dream.

Unfortunately, directly after this, I was tasked, along with a cohort of a dozen or so grads, with manually testing a true waterfall, death march disaster project.

There was an entire floor of staff dedicated to servicing this heaving, under-budgeted and over-sold behemoth, and after a few weeks of genuinely useful learning about the domain of the project, and gaining an understanding of the application, it became clear that any skills we learnt were not going to transfer into other programming roles.

At this point (and this will become a recurring theme), I began to panic.

I had abandoned a career in which I was starting to gain traction in order to become a software developer, and now I was faced with the prospect of leaving the job after a few years, with nothing but an in depth knowledge of a poorly written application to my name.

I was going to end up having spent 12 months at university and all of my money, as well as taking a sizeable pay cut, to gain the professional experience that a high school leaver could get as a manual QA tester.

Three months later, I was still doing manual testing, and rapidly losing my mind.

Meet Tim

Around this time, while I was pulling my hair out and questioning all of my life decisions, I noticed Tim.

Tim did not share my concerns.

He joined as a grad and had already been at the company for a few years.

He had a PhD in a whizzy-sounding hard science, and was clearly a clever guy.

However, he did zero programming; absolutely none.

What is more, he didn’t seem to want to, despite his job title being ‘Software Engineer’.

He had been working on the death march project from the start, and was quite content to puzzle over spreadsheets and Word documents, doing the job of an entry level QA.

In some ways, this slightly zen attitude seemed admirable.

He also really did know how to do the job of a manual QA well.

However, there was a problem.

At the consultancy, Software Engineers (and Tim was a Software Engineer) were graded into bands.

As grads, we were band 1 Software Engineers.

As he had been at the company for a while, Tim had moved up a few bands.

These bands were partly used to decide on salary, but were also, crucially, used to dictate the price at which we were hired out to other companies.

If a new project was being commissioned, the bid might include something like ‘two level 1 SEs and a level 3 SE’.

The point being, that a client expected a return for their money in the form of hard technical skills.

Tim, as you may recall, did not have these technical skills.

After a few more months, Tim ended up being hired out to another project, as a Software Engineer level 4 or whatever he was at that stage.

Hurray! I was happy for him and also very jealous.

He came in with a big book on Java and sat poring over it at his desk, clearly determined to make sure he was up to the challenge.

Plot twist: Tim got fired.

That’s right, it turns out if you don’t acquire valuable skills that people want to pay for, the market will not reward you, and might even punish you, regardless of how good you are at your job.

Tim’s Java book remained on his old desk after he was gone, and made me shudder every time I walked past it.

What I did next

I ended up leaving the company very shortly afterwards, for a job at a tiny startup, with an even lower salary, in London, but where I would be able to get as much experience as possible in as short a period of time as possible.

I was at the consultancy for only six months, and was seriously worried about the effect that leaving so soon would have on my long term career prospects, but the gnawing horror about what had happened to Tim was enough to motivate me to hit the ejector switch.

This move turned out to be a good decision, and I stayed at the startup for a year and a half, inhaling knowledge and responsibility at a rapid rate, until I ultimately left for another job with almost double the salary, in order to be able to afford both rent and Tesco meal deals.

Since then I have essentially framed every career decision in terms of the experience it will give me, and whether that experience is something that the market will reward.

This greedy accumulation of experience gives me options, and has allowed me to be far more open at work when things aren’t working, and ultimately to not take any unnecessary shit, as I know that I can fall back on my skill set to find more work.

This feels great.

This tactic works particularly well in this industry because there is both a rampant demand for software professionals, and an undersupply of good ones.

This means it is comparatively easy to stand out from the pack, assuming that you dedicate conscious effort to improving yourself in areas that the market rewards.

But this is immoral!?! What about the companies you’ve left in your wake?

Somewhat… but I don’t think as much as you might think.

As I see it, software development is highly transactional in nature.

You pay me money, I build you a thing. You pay me more money, I maintain the thing I built, or the thing someone else built.

This lends itself very well to project based work, and has parallels in the building trade.

Building and engineering rely heavily on contracting work out to individuals and companies, who then build, or design the thing, take their money and move onto another project.

Framed in these terms, the idea that you would spend x number of years getting embroiled in a company mission, and supporting their long term goals at the expense of your own development seems misguided, and actually not beneficial to either of you.

I have always endeavoured to leave projects and companies in a better state than I found them.

As I specialise in one area of development currently (frontend), and one specific technology (Angular), I can join a company and be up and running in around a week. The training cost to them is very limited.

I always aim to deliver defined pieces of work to any company I’ve worked at, and have made sure to hand over what I’ve done to the rest of the team when I leave, including documenting anything I’ve done.

Recruitment costs aside (which have more than once not ended up being paid as I have left during my probation period), I don’t believe that my impact on any of the companies I’ve worked at has been negative.

I could be tragically misguided, but I believe that the way I have left companies means that I could go back to any of them if I really wanted to.

Why not just become a contractor then ya bum!

Fair point.

That’s actually what I’m going to do when my current job is finished in August.

Feel free to check back in a few months to see how that goes for me 🙂

If I’m honest, I am once again shitting myself.