HTF do I reverse a linked list (video)

I previously made a post with some shittily drawn diagrams, trying to make sense of how linked list reversal works in JavaScript.

However, on second look some of my diagrams were slightly inaccurate, and also, for me, properly visualising something like this means seeing it in motion.

To that end I’ve coded up a new solution, inspired by this video:

https://www.youtube.com/watch?time_continue=119&v=O0By4Zq0OFc&feature=emb_title

and filmed a shaky video in order to animate my terrible drawings. This is the code for the linked list, complete with reversal method:

class LinkedList {
  constructor(val) {
    this.head = {
      value: val,
      next: null,
    };
    this.tail = this.head;
  }

  append(val) {
    this.tail.next = {
      value: val,
      next: null,
    };
    this.tail = this.tail.next;
  }

  print() {
    const displayArray = [];
    let node = this.head;
    while (node) {
      displayArray.push(node.value);
      node = node.next;
    }
    console.log(displayArray);
  }

  reverse() {
    let current = this.head;
    let previous = null;
    let next = null;

    this.tail = this.head; // the old head will end up as the new tail

    while (current) {
      next = current.next; // remember the rest of the list
      current.next = previous; // flip this node's arrow backwards
      previous = current; // step previous forward
      current = next; // step current forward
    }

    this.head = previous; // previous ends up on the old tail: the new head
  }
}

const list = new LinkedList(1);
list.append(2);
list.append(3);
list.print();

list.reverse();
list.print();

And here is the reverse method visualised with the magic of video (I would watch at 2X speed if you value your time in any way):

sweet sweet animation video

Also, if you really want to figure out how this code works, I highly recommend making your own cut outs and moving them around. Beats staring at a screen.

HTF do I reverse a linked list (JavaScript edition)

This is one of those algorithms questions that makes my stomach hurt.

If you haven’t seen this before prepare to put your brain through a blender.

How do you reverse a linked list in JavaScript?

Firstly, you have to have a linked list. Luckily I have one I prepared earlier. JavaScript doesn’t have a built-in linked list implementation, so we have to make our own:

class LinkedList {
  constructor(val) {
    this.head = {
      value: val,
      next: null,
    };
    this.tail = this.head;
  }

  append(val) {
    this.tail.next = {
      value: val,
      next: null,
    };
    this.tail = this.tail.next;
  }

  print() {
    const displayArray = [];
    let node = this.head;
    while (node) {
      displayArray.push(node.value);
      node = node.next;
    }
    console.log(displayArray);
  }
}

const list = new LinkedList(1);
list.append(2);
list.append(3);
list.append(4);
list.append(5);

list.print();

When this is run it will print [1,2,3,4,5], which under the hood looks like 1=>2=>3=>4=>5, with a reference to the start (head), and the end (tail) of the chain, or snake, or whatever we decide we want to think of our list as.
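To make the ‘chain’ idea concrete, here is (roughly) what the underlying objects look like after those appends. This isn’t extra code, just a picture of the data:

// list.head:
// {
//   value: 1,
//   next: {
//     value: 2,
//     next: { value: 3, next: { value: 4, next: { value: 5, next: null } } },
//   },
// }
// list.tail is a reference to the innermost { value: 5, next: null } node.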

We can easily get to the first element, and we can easily get to the last element.

Cooooool. Now reverse it, turn those arrows the other way.

My first attempt at this worked quite nicely, but it’s wasteful with memory (a sin in computer science). Here it is:

  reverse() {
    const buffer = [];

    let node = this.head;

    // first pass: copy every value into an array
    while (node) {
      buffer.push(node.value);
      node = node.next;
    }

    // rebuild the list from scratch, popping values off the end of the array
    this.head = {
      value: buffer.pop(),
      next: null,
    };

    node = this.head;

    while (buffer.length) {
      node.next = {
        value: buffer.pop(),
        next: null,
      };
      node = node.next;
    }

    this.tail = node; // don't forget to point tail at the new last node
  }

So just stick all the values from the linked list into an array, then pull items off the array and rewrite our linked list from scratch!

I know, I’m a genius.

The runtime is O(n), which is good.

The space complexity is also O(n), which is less good. At this point if you are interviewing, I imagine the interviewer (if they even let you code this basic solution), will be tutting and saying things like ‘but can we do better?’, scratching their chin and gesticulating with whiteboard markers.

And it turns out that yes, there is a better solution. The bad news is that it is utterly brain melting to actually understand. If you get what this is doing straight off the bat, all power to you.

  reverse() {
    let first = this.head;
    this.tail = this.head;
    let second = first.next;

    while (second) {
      const temp = second.next;
      second.next = first;
      first = second;
      second = temp;
    }

    this.head.next = null;
    this.head = first;
  }

In my case it just made me want to cry and shout and stomp my feet and yell ‘BUT WHY DO I HAVE TO DO THIS STUPID SHIT. I’M A FRONTEND DEVELOPER, LET ME DO SOME CSS OR SOMETHING. I’LL JUST GOOGLE IT. THIS IS BULLSHIT!!!?!!’.

Obviously, we are onto a good problem. The ones that expand your brain and make you better at problem solving normally start with this sort of reaction. The key to getting gud is to gently soothe and comfort your troubled brain, and trust that given enough sustained concentration on the problem, some youtube videos, and a bit of sleep, this will start to make sense.

Let’s draw some pictures and match them to code (DISCLAIMER: these diagrams are not totally accurate… I’ve left them in though, as this is a useful record for me of how my thinking progressed. For a better illustration of how this algorithm works, see this post):

We’re going to use a smaller list for this run through, because I can’t be arsed to draw endless diagrams.

We want to turn 1=>2=>3 into 3=>2=>1.

What

a time

to be alive.

let first = this.head;
this.tail = this.head;
let second = first.next;

Just assigning things for now, seems OK. See brain, nothing to be scared about. Right?

‘THIS IS BULLSHIT. WHAT THE FUCK ARE WE EVEN BOTHERING WITH THIS FOR. LET’S PLAY ROCKET LEAGUE’

Our diagram is in this state now:

OK. Next bit.

      while(second) {
        const temp = second.next;
        second.next = first;
        first = second;
        second = temp;
      }

This appears to be a loop. Let’s try one iteration first, and update our diagram.

second is pointing to the element with the value 2 for us currently, so it is truthy. We enter the while loop, filled with trepidation.

const temp = second.next

second.next = first

first = second

second = temp

Oooh wow, this kind of looks like progress: we swapped 2 and 1, and our variables are all pointing at sensible looking things. Apart from second, which is not pointing at the second element, and head, which is just floating in the middle of the list. I think this is one of the tricky parts of linked list questions: the variable names stop making sense midway through the problem.
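To pin down exactly where everything is, here is the state after that first iteration, written out as comments (nodes named by their values):

// after iteration one:
// temp   -> node 3
// first  -> node 2
// second -> node 3
// node 2's next -> node 1 (the first reversed arrow)
// node 1's next -> node 2 (head still points forwards, so 1 and 2 briefly form a cycle)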

Let’s continue.

second is pointing to the element with the value 3, so it is truthy. We enter the while loop again, brimming with new found confidence.

const temp = second.next

temp gets set to second.next, which is now null.

second.next = first

first = second

second = temp

second is null now, so we skip the while loop this time round.

this.head.next = null

So we sever that endless loop (1 and 2 were pointing at each other) by setting head.next to null.
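Written out, the state is now:

// 3 -> 2 -> 1 -> null
// first points at node 3, which is about to become the new head
// this.tail already points at node 1 (we set it right at the start)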

this.head = first

We’ve set up our newly ordered linked list; now we just need to make it official by updating the head reference.

Anddddd done.

I’m going to need to revisit this, but this scribbling exercise has already helped. I hope it helps you too.

Alternatively, go watch this video https://www.youtube.com/watch?time_continue=119&v=O0By4Zq0OFc&feature=emb_title as it explains the approach waaaaay better than these scribbles do. Wish I’d watched that first…

Adventures in Node town (hacking Slack’s standard export with Node.js)

One benefit of changing jobs quite a lot is that I have built up an increasingly wide network of people that I like, who I have worked with previously.

A really nice thing about staying in contact with these people is that we are able to help each other out, sharing skills, jobs, jokes etc.

Recently a designer I used to work with asked whether somebody would be able to help with writing a script to process the exported contents of his ‘question of the week’ Slack channel, which by default gets spat out as a folder filled with JSON files, keyed by date:

https://slack.com/intl/en-gb/help/articles/201658943-Export-your-workspace-data

https://slack.com/intl/en-gb/help/articles/220556107-How-to-read-Slack-data-exports

My response was rapid and decisive:

Data munging and a chance to use my favourite JavaScript runtime, Node.js. Sign me up!!!

First, WTF is data munging

Data munging, or wrangling, is the process of taking raw data in one form, and mapping it to another, more useful form (for whatever analysis you’re doing).

https://en.wikipedia.org/wiki/Data_wrangling

Personally, I find data wrangling/munging to be pretty enjoyable.

So, as London is currently practising social distancing because of Covid-19, and I have nothing better going on, I decided to spend my Saturday applying my amateur data munging skills to Slack’s data export files.

Steps for data munging

1) Figure out the structure of the data you are investigating. If it is not structured, you are going to have trouble telling a computer how to read it. This is your chance to be a detective. What are the rules of your data? How can you exploit them to categorise your data differently?

2) Import the data into a program, using a language and runtime which allows you to manipulate it in ways which are useful.

3) Do some stuff to the data to transform it into a format that is useful to you. Use programming to do this, you programming whizz you.

4) Output the newly manipulated data into a place where it can be further processed, or analysed.

In my case, the input data was in a series of JSON files, keyed by date (see below), and the output I ideally wanted, was another JSON file with an array of questions, along with all of the responses to those questions.
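For a flavour of the shapes involved, here is roughly what one day’s export file contains, and what I want out the other end (fields trimmed and values invented; the exact shape varies by message type):

// 2018-06-29.json - one file per day, each an array of message objects:
// [
//   {
//     "type": "message",
//     "subtype": "channel_topic",
//     "text": "set the channel topic: QOTW: what is your favourite sandwich?",
//     "topic": "QOTW: what is your favourite sandwich?"
//   },
//   { "type": "message", "text": "Cheese and pickle, obviously", "user": "U123ABC" }
// ]
//
// desired output - one big array of questions with their answers:
// [
//   {
//     "date": "2018-06-29",
//     "question": "QOTW: what is your favourite sandwich?",
//     "reactions": [],
//     "responses": [{ "answer": "Cheese and pickle, obviously", "user": "Rob" }]
//   }
// ]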

Shiny tools!!!

Given that the data was in a JSON file, and I am primarily a JavaScript developer, I thought Node.js would be a good choice of tool. Why?

  • It has loads of methods for interacting with file systems in an OS agnostic way.

  • I already have some experience with it.

  • It’s lightweight and I can get a script up and hacked together and running quickly. I once had to use C# to do some heavy JSON parsing and mapping and it was a big clunky Object Oriented nightmare. Granted I’m sure I was doing lots of things wrong but it was a huge ball-ache.

  • From Wikipedia, I know that ‘Node.js is an open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a web browser. Node.js lets developers use JavaScript to write command line tools’.

  • JavaScript all of the things.

So, Node.js is pretty much it for tools…

https://nodejs.org/en/

So, on to the data detective work. I knew I very likely needed to do a few things:

1) Tell the program where my import files are.

2) Gather all the data together, from all the different files, and organise it by date.

3) Identify all the questions.

4) Identify answers, and link them to the relevant question.

The first one was the easiest, so I started there:

Tell the program where my import files are



const filePath = process.argv[2] ? `./${process.argv[2]}` : null;

if (!filePath) {
  // no argument passed - bail out with a helpful message
  console.error(
    "You must provide a path to the slack export folder! (unzipped)"
  );
  process.exit(1);
} else {
  console.log(
    `Let's have a look at \n${filePath}\nshall we.\nTry and find some tasty questions of the week...`
  );
}

To run my program, I will have to tell it where the folder I’m importing from is. To do that I will type this into a terminal:

node questions-of-the-week.js Triangles\ Slack\ export\ Jan\ 11\ 2017\ -\ Apr\ 3\ 2020

In this tasty little snippet, questions-of-the-week.js is the name of my script, and Triangles\ Slack\ export\ Jan\ 11\ 2017\ -\ Apr\ 3\ 2020 is the path to the folder I’m importing from.

Those weird looking backslashes are ‘escape characters’, which are needed to type spaces in file names etc. when entering them on the command line on Unix systems. The terminal emulator I use autocompletes this stuff, and I think most do now, so hopefully you won’t have to worry too much about it.
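You can also sidestep the backslashes entirely by wrapping the whole path in quotes, which does exactly the same job:

node questions-of-the-week.js "Triangles Slack export Jan 11 2017 - Apr 3 2020"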

This is also the reason that many programmers habitually name files with-hyphens-or_underscores_in_them.

But basically this command is saying:

‘Use node to run the program “questions-of-the-week.js”, and pass it this filename as an argument’

What are we to do with that file name though?

Node comes with a global object called process which has a bunch of useful data and methods on it.

This means that in any Node program you can always do certain things, such as investigating arguments passed into the program, and terminating the program.

In the code sample above, we do both of those things.

For clarity, process.argv is an array of command line arguments passed to the program. In the case of the command we put into our terminal, it looks like this:

[
  '/Users/roberttaylor/.nvm/versions/node/v12.16.1/bin/node',
  '/Users/roberttaylor/slack-export-parser/questions-of-the-week.js',
  'Triangles Slack export Jan 11 2017 - Apr 3 2020'
]

As you can see, the first two elements of the array are the location of the node binary, and the location of the file that contains our program. These will be present any time you run a node program in this way.

The third element of the array is the filename that we passed in, and in our program we stick it in a variable called filePath.

WE HAVE SUCCEEDED IN OUR FIRST TASK. CELEBRATE THIS MINOR VICTORY

Now…

Gather all the data together, from all the different files, and organise it by date

const fs = require("fs");
const path = require("path");

const slackExportFolders = fs.readdirSync(filePath);

const questionOfTheWeek = slackExportFolders.find(
  (f) => f === "question-of-the-week"
);

if (!questionOfTheWeek) {
  console.error("could not find a question-of-the-week folder");
  process.exit(1);
}

const jsons = fs.readdirSync(path.join(filePath, questionOfTheWeek));

let entries = [];

jsons.forEach((file) => {
  const jsonLocation = path.join(__dirname, filePath, questionOfTheWeek, file);
  entries = [
    ...entries,
    ...require(jsonLocation).map((i) => ({ ...i, date: file.slice(0, -5) })),
  ];
});

The Slack channel I am looking at munging is the ‘question of the week’ channel.

When this is exported, it gets exported to a ‘question-of-the-week’ folder.

So first of all I check that there is a question-of-the-week folder. If there is not, I exit the program, and log an error to the console.

If the program can find it, then it gets to work gathering all of the data together.

Here we start to see the benefit of using Node.js with JSON. We are writing JavaScript to parse a file format which originally came from JavaScript!

This means that pulling all of this data together is as simple as getting a list of file names with fs.readdirSync.

This gets all of the names of the files under the question-of-the-week folder in an array, which is, you know, pretty useful.

Once we have those file names, we iterate through them using forEach, and pull all of the data from each file into a big array called entries. We can use require to do this, which is very cool. Again, this is because Node and JavaScript like JSON, they like it very much.
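In other words, loading a whole day of messages is a one-liner; require reads the file and parses the JSON in one go:

// require can load .json files directly - no fs.readFileSync + JSON.parse dance
const day = require("./question-of-the-week/2018-06-29.json");
// day is now a plain JavaScript array of message objects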

We know we are likely to need the date that the slack data is associated with, but it is in the file name, not in the data itself.

To solve this, we take the file name and put it into a ‘date’ field, which we insert into each data item using map.

The file.slice stuff is just taking a file name like 2018-06-29.json and chopping the end off it, so it becomes 2018-06-29, without the .json bit.
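For example:

"2018-06-29.json".slice(0, -5); // => "2018-06-29"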

Coooool we done got some slack data by date. Munging step 2 complete.

Identify all the questions

This is trickier. We need our detective hats for this bit.

I won’t lie, I fucked around with this a lot, and I re-learnt something that I have learned previously, which is that it is really hard to take data that has been created by messy, illogical humans, and devise rules to figure out what is what.

What I ended up with is this. The process of figuring it out involved lots of trial and error, and I know for a fact that it misses a bunch of questions and answers. However, it probably finds 80% to 90% of the data that is needed. This would take a human a long time to do, so it is better than nothing. The remaining 10% to 20% would need to be mapped manually somehow.

const questions = entries.filter(
  (e) => e.topic && e.topic.toLowerCase().includes("qotw")
).map((q) => ({
  date: q.date,
  question: q.text,
  reactions: q.reactions ? q.reactions.map((r) => r.name) : [],
}));

‘qotw’ is ‘question of the week’ by the way, in case you missed it.

I find them by looking for Slack data entries that have a topic including ‘qotw’. I then map these entries so they just include the text and date, and I also pull in the reactions (thumbs up, emojis etc.) for the lols.
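So an entry like this (data invented, shape as per the code above):

// { date: '2019-03-08', topic: 'QOTW: tabs or spaces?', text: 'QOTW: tabs or spaces?',
//   reactions: [{ name: 'fire', count: 3 }] }
//
// gets mapped to:
// { date: '2019-03-08', question: 'QOTW: tabs or spaces?', reactions: ['fire'] }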

Now we have an array of questions with information about when they were asked. We’re getting somewhere.

Identify answers, and link them to the relevant question

const questionsWithAnswers = questions.map((question, key) => {

  // Find the date of this question and the next one.
  // We use these to figure out which messages were sent after
  // a question was asked, and before the next one
  const questionDate = new Date(question.date);
  const nextQuestionDate = questions[key + 1]
    ? new Date(questions[key + 1].date)
    : new Date();

  return {
    ...question,
    responses: entries
      .filter(
        (e) =>
          new Date(e.date) > questionDate &&
          new Date(e.date) < nextQuestionDate &&
          e.type === "message" &&
          !e.subtype
      )
      .map((r) => ({
        answer: r.text,
        user: r.user_profile ? r.user_profile.name : undefined,
      })),
  };
});

// put them in a file. the null, 4 bit basically pretty prints the whole thing.
fs.writeFileSync(
  "questions-with-answers.json",
  JSON.stringify(questionsWithAnswers, null, 4)
);

console.log('questions with answers (hopefully...) saved to "questions-with-answers.json"');

This part is a bit more complex… but it’s not doing anything non-standard from a JavaScript point of view.

Basically just search all the entries for messages which fall after a question being asked, and before the next one, and put them in an array of answers, with the user profile and the message text. Then save to a new JSON file and pretty print it.

We are done! We now have a new JSON file, with an array of questions, and all the answers to each question.

It is worth noting that this approach is far from optimal from an ‘algorithmic’ point of view, as I am repeatedly checking the entire data set.

Thing is, I don’t give a shit, because my dataset is small, and the program runs instantly as it is.

If it started to choke and that became a problem I would obviously improve this, but until that point, this code is simpler to understand and maintain.

More efficient algorithms normally mean nastier code for humans, and until it’s needed, as a nice developer you should prioritise humans over computers.

(sorry, computers)
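That said, for the record, if the dataset ever did get big, one option would be a single pass: sort the entries by date once, then walk through them attaching each message to the most recent question. A rough sketch of the idea, not what the script actually does:

// single-pass sketch: sort once, then bucket each message under the latest question
const sorted = [...entries].sort(
  (a, b) => new Date(a.date).getTime() - new Date(b.date).getTime()
);
const grouped = [];
let current = null;
sorted.forEach((e) => {
  if (e.topic && e.topic.toLowerCase().includes("qotw")) {
    current = { date: e.date, question: e.text, responses: [] };
    grouped.push(current);
  } else if (current && e.type === "message" && !e.subtype) {
    current.responses.push({
      answer: e.text,
      user: e.user_profile ? e.user_profile.name : undefined,
    });
  }
});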

What did we learn?

Slack’s data is quite nicely structured, and is very parseable.

JavaScript is great for manipulating JSON data thanks to its plethora of array manipulation utilities.

You can write a script to automatically categorise Slack export data and put it into a semi-useful state with less than 80 lines of code, including witty console output and formatting to a nice narrow width.

This confirms my suspicion that for quick and dirty data munging, Node.js is a serious contender.

If paired with TypeScript and some nice types for your data models, it could be even nicer.

Here is the result of my labours https://github.com/robt1019/slack-export-parser/blob/master/questions-of-the-week.js

HTF do I learn things in my spare time without melting my brain!?!

I am going to make the assumption that you are a programmer.

If so, then you probably spend a lot of your day doing intense mental gymnastics, and wrestling with obtuse and painful logic puzzles, or ‘programming’ as it is referred to by some people.

You also probably enjoy it.

You masochist you.

The problem solving side of the job is for many of us a large part of what makes it enjoyable.

It sure is tiring though.

What’s your point? Programming is fun but tiring?

My point is that although this application of mental effort is satisfying and challenging, it comes at a price.

We only have a certain amount of focused attention we can spend in any single day, and if you spend all day hammering your brain at work, it will be pretty useless by the time you get home.

This is fine, unless you want to spend your time outside of work also tackling problems or learning things which require sustained focus.

Why do you care about this? Surely you can just spend your time outside of work playing PlayStation or watching the Apprentice? Problem solved.

That is true…

Let’s assume for now though that you have some side project or learning goal that you want to pursue in your spare time, which requires sustained mental focus.

In my case I am trying to consolidate my wobbly maths skills, and learn some physics.

To this end I’ve been bashing my head against A level and university level maths and physics text books in my spare time, and attempting to teach myself enough of these things to scratch my curiosity itch.

To learn and understand the concepts covered in these subjects definitely requires focus, and I’ve managed through trial and error to get to a point where I can make progress on my learning goals, without impacting my productivity at work, or suffering a brain meltdown.

OK smarty pants, how?

My approach has been influenced heavily by Scott Young, who challenged himself to learn and pass the exams for the entire MIT computer science undergraduate course in one year:

https://www.scotthyoung.com/blog/myprojects/mit-challenge-2/

His writing focuses heavily on how to optimise the time you spend studying, to achieve maximum understanding in the minimum time.

He calls these kinds of intense learning projects ‘Ultralearning’ projects, and he even has a book on it which is worth a peek:

https://www.amazon.co.uk/Ultralearning-Strategies-Mastering-Skills-Getting/dp/0008305706

Another key influence was the book ‘Deep Work’ by Cal Newport:

This book forwards the idea that in the modern, highly technical world, the ability to focus on and solve hard problems, and to learn difficult things, is at an absolute premium.

This means that the rewards for getting good at learning and solving difficult problems are currently very high.

Additionally, he lays out a series of techniques for achieving this focused work.

I recommend you consult both of these sources as they are very interesting, and they probably have a wider application than my own interpretation of these ideas.

That said, this is my blog, so here’s my approach.

My super special technique for learning hard things during spare time:

Every morning, before work, try and do two 45 minute sessions of extremely focused work on whatever problem you are currently tackling. Then do as many sessions as you can fit into the weekend in a sustainable way (probably Saturday or Sunday morning).

For me at the moment the problem might be a physics problem, a mathematical technique, or a new concept I’m trying to understand.

The activity itself during these sessions varies quite a bit, and is not really important. The important thing is that this should be very very focused work with a clear goal (for me generally this means understanding something well enough to solve problems related to it, and to explain it to someone else).

Focused means no phone, no social media, no distractions.

In my case I work better with noise cancelling headphones, and music. I quite often just play the same playlist or song on repeat for the entire session.

Focusing like this will be hard at first. If you are learning difficult new things, you will feel stupid, and your fragile ego will start to panic.

My early attempts went something like this:

‘Ok focus time. Trigonometric identities. Let’s go’

‘I don’t want to just remember these, let’s find a proof for them so I can understand them better’

‘Ouch! This proof is hard. I don’t understand how they got from there to there. Maybe I’m too stupid for this. I probably should get some food or coffee first anyway. Urgh this is horrible. I’ll just have a look at this test for dyscalculia (maths disability), maybe I have it and that’s why I can’t do this.’

And so on.

For me, the key thing was to commit to doing the whole 45 minutes. I would tell myself that regardless of how badly it is going, I have to focus for this time, and after that I can stop and do whatever else I want.

This is difficult at first, but over time becomes habitual.

In fact, developing habits that support your sustained focus sessions is key to being successful in this area, and both of the resources above outline techniques for achieving this.

The general idea though is that willpower is finite, and deliberately choosing to do hard things is tiring.

Habits on the other hand, are automatic, and painless.

Think about other good or bad habits, such as checking social media, smoking, or cleaning your teeth. You probably don’t think too much about these things, they just happen automatically, after certain cues.

The basic pattern of a habit is cue => action => reward.

This applies to bad habits and good habits.

For me, the habit loops I have been successful in drilling into myself to drive this morning routine are as follows:

up at 6 => go downstairs and start making coffee => browse smart phone while coffee is brewing (sweet sweet internetz)

take coffee upstairs and sit at desk => focus hard for 45 minutes => relax for ten minutes, get a snack, more coffee, do some stretches etc.

and repeat the last loop over and over again until I’ve had enough.

The reason this works, is that over time, consistency is more important than just about everything when it comes to making progress on difficult long term goals.

If you can consistently hit a set number of these sustained focus sessions during the week, you will make solid progress towards your goal. If you don’t track things this explicitly, it is easy to become demoralised, not see your progress, and give up.

If I get through half as many focus sessions in a week as I normally do, I know something is up, and I can go rooting about for causes.

Maybe staying in the pub till closing time on Tuesday evening had something to do with it? OK, next week let’s try not to do that.

But doesn’t this mean that you’re taking the valuable focus time you should be spending at work, and spending it on yourself instead!?! What about your poor employer?

Firstly, outside of working hours, I will always prioritise my own goals over those of my employer, and I would suggest you do the same.

That said, I also don’t think it works that way.

The difference between starting my day by:

a) Rolling out of bed as late as possible, dragging myself to work and spending the first hour waking up and inhaling coffee

b) Achieving two hours of calm and sustained focus in pursuit of a goal I am personally interested in

is huge.

The second one results in my arriving at work awake and ready to tackle problems, the first one… not so much.

Cal Newport, as part of his research for the above book, also found that engaging in deep focused work over time actually increases your ability to tackle difficult problems in the future, and to do more deeply focused work.

Getting better at the meta skill of focusing on tough problems, improves your ability to do this in other settings (like at work).

So although it is true that you only have a set number of hours you can focus hard on any problem during the day, deliberate practice and consistently pushing yourself to improve at solving hard problems improves your ability to do your job.

It’s a win win! You can be happy and pursue your own goals, and also be more effective at work!

Based on my sample of one, I definitely have found this to be the case.

So there you have it, my totally biased and personalised approach to learning hard stuff outside of work, when your day job involves brain melting activities. What are your thoughts?

HTF do I write E2E tests with a stubbed dependency? (Angular/Nrwl Nx edition)

In an earlier post, I went over some of the reasons you might want your E2E tests to run against a stubbed version of any really slow or unpredictable dependencies.

Let’s see how this might look in an Angular context, making use of the framework’s inbuilt dependency injection, and TypeScript.

NB I’m going to use Nrwl Nx to spin up these example bits of code inside a shared ‘workspace’ or ‘monorepo’ (the terminology seems to change).

If you are unfamiliar with Nrwl Nx you should check it out! I’ve used their tooling extensively at work, and after a few hiccups can strongly recommend it. See here for a slightly outdated explanation of why/how I’ve used their stuff in the past.

Basically though, they provide a way of easily building multiple applications/libraries from within one repository, supporting code reuse between applications. You get convenience tooling, built on top of the Angular CLI, for handling unit testing, e2e testing, builds etc., as well as a bunch of generators for creating opinionated bits of ‘best practice’ Angular/ngrx code. (And you get all of this for freeeee!)

The method I’m using for stubbing dependencies would be equally applicable to a ‘vanilla’ Angular application however.

Meet the players

1) Our main Angular application. It has a button which when clicked will call our Slow Ass Api’s getInformation endpoint, and when finished will render the result (of type SlowInformation).

2) Our Slow Ass Api. This is a separate TypeScript library that exposes one method, getInformation, which returns an object of type SlowInformation. The (not so) clever bit is that this call simulates unpredictable slowness by returning the data after a random amount of time, from 0ms up to 10000ms (see the sketch just after this list).

3) Our E2E tests. I am going to use Cypress for these because I happen to really like it. If you want to use Protractor, this method will still work, you will just have to use the executeScript method to talk to the window object, instead. Also, if you create your application with Nrwl Nx they will set up all of the code scaffolding and configuration for you to support either Protractor or Cypress (you can choose as part of their interactive setup script).
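The random-delay trick in the Slow Ass Api boils down to something like this (a sketch of the idea rather than the real library code; the fixture fields here are invented):

// slow-ass-api sketch: resolve fixture data after a random delay of 0ms up to 10s
export function getInformation(): Promise<SlowInformation> {
  const delayMs = Math.random() * 10000;
  const fixture = { fact: 'Bees can fly', interestLevel: 'mildlyInteresting' } as SlowInformation;
  return new Promise((resolve) => setTimeout(() => resolve(fixture), delayMs));
}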

The action

Below are a bunch of links to specific commits. If you want to play along you can clone the project and checkout individual commits to see how the tests behave at various stages of development.

I will pick up from after the initial project skeleton is put together. As I mentioned, this was done using Nrwl Nx’s extensions to the Angular CLI. If you are curious, you can see what these tools do for you in this commit

Generate slow-information-renderer Angular app (commit):

This uses another of the Nx scripts to generate a fully fledged Angular app. I chose to use Jest and Cypress, so it also sets up an e2e testing project, which can be run with npm run e2e

Generate slow information renderer component and slow information service (commit)

These will be responsible for asking the user to click a button, which will ultimately call our service, which in turn calls the slow ass api.

Again, this makes use of generator commands to create a boilerplate component with best practices.

Generate and write slow ass api library (commit)

Due to an over-eager interactive rebase I have accidentally lumped my changes in with the generator changes.

Basically though, here I create a library for returning strongly typed data of type SlowInformation after a random delay (I even unit tested it! I got carried away…)

The main file to concentrate on is slow-ass-api.ts

Plug the library into our Angular Application, and do some wiring (commit)

After this, our app will render data from the slow-ass-api after a delay.

Add a naive e2e test (commit)

This commit adds a basic e2e test which is, unfortunately, quite flaky. Because of the random delay in the data coming back, the test sometimes passes, but sometimes doesn’t. This can be demonstrated by making the API resolve more quickly (commit)

With this change, our tests behave properly. Unfortunately, in the real world you can’t just tell a slow API to be quicker, so we will need a different solution.

Make the test work less flakily (commit)

Here, we experiment with making our test wait 10 seconds to make sure all the data is there. This works pretty well!

However, we have made our tests take longer than they need to, and, crucially if the api ever takes longer than 10 seconds to resolve, our test will fail again. Perhaps this isn’t the best solution after all…

Complexify the app (commit)

We receive some requirements from our Product Owner, and it seems that a new feature is required: if a fact is at the holyCrudThatIsInteresting level of interest, we should show a button allowing the user to request more information about the fact.

We add the feature and look to update our e2e tests.

Now that our app logic is more complicated, we need to account for this complexity in our e2e tests.

Test the new complexified app (commit)

We add a new test for the feature, but unfortunately it only passes very rarely: if the random fact that comes back is not of the right interest level, our button is not shown, and the test fails. Disaster!

We could start to experiment with making our tests clever, and having them check for the interest level of the fact before progressing, but it is easy to see how an approach like that could blow up and become messy very quickly.

Enter the stub slow information service (commit)

We generate a new service, that implements the existing InformationService in the Angular application. Because it is strongly typed, it must follow the same interface, so we can be relatively confident that we shouldn’t be able to make any methods in our stub return nonsensical or invalid data.

This commit is the meaty part of this post. Here we use Angular’s environment files, as well as their dependency injection, to run our e2e tests with the StubSlowInformationService instead of the SlowInformationService.
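The wiring looks roughly like this (a sketch, assuming an e2e flag on the environment object and these file paths; the linked commit has the real thing):

// app.module.ts sketch: swap in the stub via Angular's DI for e2e builds
import { NgModule } from '@angular/core';
import { environment } from '../environments/environment';
import { SlowInformationService } from './slow-information.service';
import { StubSlowInformationService } from './stub-slow-information.service';

@NgModule({
  // ...the usual declarations, imports, bootstrap etc...
  providers: [
    {
      provide: SlowInformationService,
      useClass: environment.e2e ? StubSlowInformationService : SlowInformationService,
    },
  ],
})
export class AppModule {}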

Now our tests run, and they run quickly.

If the slow-ass-api implementation changes, our stub will stop compiling, and we know that we need to update our code. So this approach is relatively safe, assuming that you have a sufficiently well defined contract for how the api should behave (in this case our TypeScript Type definitions).

Hooray!

Even more control (commit)

Going one step further, in this commit, we expose methods on the window object, meaning that we can change the behaviour of our stub at run time, during our e2e tests.

Again, this is relatively safe if steps are taken to make sure the API contract is adhered to in the Angular application (by respecting the types defined in the slow-ass-api library).
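The window-hook pattern looks something like this (a sketch; names like setStubFact are invented for illustration):

// in the stub service: hang a hook off window so tests can steer the data
constructor() {
  (window as any).setStubFact = (fact: SlowInformation) => (this.fact = fact);
}

// in the Cypress spec: reach through the window and change the stub's behaviour
cy.window().then((win) => {
  (win as any).setStubFact({ fact: 'Wow!', interestLevel: 'holyCrudThatIsInteresting' });
});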

Conclusion

We have managed to write an e2e test which is quite robust, fast and easy to manage.

Due to the fact that both our Angular application and the slow ass api conform to a common contract about the behaviour of the API, we can be relatively confident that our tests are meaningful, and represent a realistic interaction with our API.

I think this is a pretty neat approach, and it has proved successful for me at work also. I’d be very keen to hear other peoples’ opinions though as e2e testing in general is something that I’ve found to be a seriously grey area in the frontend world.

Why TF should I stub dependencies for an E2E test?

E2E (or ‘end to end’) tests, when written for web applications, basically allow you to write scripts to interact with your application via a browser, and then assert that it responds correctly.

Common libraries that allow you to do this for Angular apps are Selenium/Protractor and Cypress (you should check out Cypress in particular).

The browser can be ‘headless’, meaning that you can run it in a command line, without a GUI (see headless chrome for an example). This means that these tests can be run by your build server to support a continuous integration/continuous deployment pipeline.

You can then develop scripts to click about, and test your most important user flows quickly and consistently, on your real application, in a very close approximation of how your users would click around. This is an attractive alternative to manually running these tests yourself by spinning up your application and clicking about. Humans are good at lots of things, but computers trump them every time when it comes to performing repeated actions consistently.

E2E tests are an example of black box testing.

If the tests fail, your application is broken. If they pass, your application may still be broken, but it is not terribly broken (as we have tests ensuring that key user flows are still accessible).

This process relies on an assumption that these tests are very robust and not ‘flaky’ (they only fail when something is actually broken, not just randomly).

There are a few things that can cause E2E tests to randomly fail, but by far the most common in my experience is some form of network latency. Basically any time your code is interacting with something which can take an indeterminate amount of time to finish, you are going to experience some pain when writing automated tests for it.

The most recent (and most evil) version of this I have experienced is an application that is plugged into a blockchain and involves waiting for transactions to be mined. This can take anywhere from a few seconds, to forever.

Unless you are a lunatic, if your application involves any sort of transactions involving user data (purchases, bets, swiping-right), your automated tests will already be running on some sort of test environment, meaning that when you mutate data, it is not done on production data, but a copy. This comes with its own little pain points.

When talking to a truly stateful dependency (like an API plugged into a database), you will have to account for the state of the database when performing any actions in your browser test.

As one example, if you are logging in, you might have to first check whether the user has an account, then log in if they do, or create an account if not. This adds complexity to your tests and makes them harder to maintain/potentially flakier.

Another option is to control the test environment, and write (even more) scripts to populate the test database with clean data that can be relied on to power your tests (make sure that a specific user is present for example).

This removes complexity from the browser tests, but adds it to the scripts responsible for set up and tear down of data. This also requires that your test data is kept synchronised with any schema changes to your production database. Otherwise you run the risk of having E2E tests which can pass, even while your real application is broken, as they are testing on an out of date version of the backend.

(To clarify, these are all valid options, and there are solutions to many of the problems above)

However, while this approach solves the problem of accidentally munging user data, a backend plugged into a staging database can still be forking slow, as again you are at the mercy of network speeds and many other things out of your control.

Taken together, this is why I favour ‘stubbing’ the slow bit of your application (in many cases a backend service). This involves providing a very simple approximation of your Slow Ass Service (SAS), which follows the same API, but returns pre-determined fixture data, without doing any slow networking.

There is one major caveat to this approach.

It only works in a meaningful way if you have some sort of contract between the SAS and your frontend.

That is to say, your frontend and the SAS (Slow Ass Service) should both honour a shared contract, describing how the API should behave, and what data types it should accept and return (for a REST API, Swagger/OpenAPI provides a good specification for describing API behaviour). Because this shared contract is written in a way that can be parsed, automatic validations can be made on both the frontend and the backend code to ensure that they meet the requirements.

If you were reading closely, you will notice I mentioned data ‘types’, so most likely you will require some sort of typed language on both the frontend and within the SAS (I would recommend TypeScript 🙂 ).

With this in place, as long as your stub implementation of the SAS also honours this contract (which can be checked in with the SAS source code), then you can feel happy(ish) that your stub represents a realistic interaction with the real SAS. If the contract changes, and your code doesn’t, it should not compile, and your build should fail (which we want!).
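Put together, the contract and the stub might look something like this (TypeScript, with invented field names):

// the shared contract, checked in somewhere both the frontend and the SAS depend on
export interface SlowInformation {
  fact: string;
  interestLevel: 'mildlyInteresting' | 'holyCrudThatIsInteresting';
}

export interface InformationService {
  getInformation(): Promise<SlowInformation>;
}

// the stub: same contract, canned data, zero networking
export class StubInformationService implements InformationService {
  getInformation(): Promise<SlowInformation> {
    return Promise.resolve({ fact: 'stub fact', interestLevel: 'mildlyInteresting' });
  }
}

If the contract changes and the stub doesn’t, the stub stops compiling, which is exactly what we want.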

Now that you have this in place, your E2E tests can be much less (or not at all) flaky, and as an added benefit, will run super fast in your build server, meaning that the dev workflow is slightly nicer too.

Again, there are gazillions of different ways of approaching this problem, and obviously you need to figure out what it is that you care about.

In my experience though, this presents a good compromise and has meant that I (finally) have managed to get E2E tests running as part of CI/CD without wasting huge amounts of time and causing headaches.

In a subsequent post I demo how to actually get this working in an Angular application that depends on a strongly typed third party library, which happens to be slow as hell 🙂

HTF do I Angular CLI-ify a React app

If you have been doing any sort of serious Angular dev work you have probably come across the Angular CLI; the one stop shop for generating Angular applications, components, routes, services and pipes, as well as scripts for linting, testing, serving and formatting your app.

If you haven’t used it, shame on you! (not really, but seriously you should check it out).

Why do I love the Angular CLI?

  • Saves time
  • Improves standardisation (There’s nothing I hate more than endless arguments about spaces vs. tabs or any other non-important chin stroking bullshit)
  • Makes it easy to do things that you really should be doing as a responsible person, like testing your code, linting your code etc.
  • Supports aggressive refactoring, as it becomes very painless to spin up new components etc. in no time.
  • You can write your own generators for it now!

Does the React community have any equivalent functionality?

YES! (sort of…), enter create-react-app

create-react-app is a command line tool for spinning up a new React project.

Let’s compare:

Angular

ng new angular-app

React

create-react-app react-app

What’s the difference?

They both whirred away and did some magical things and created a project for me, complete with scripts for building, testing and serving it.

It looks like I will have to do some more investigations into tooling for generating React components etc., but while I’m learning I’ll probably be hand coding these anyway so not the end of the world.

Initial impressions are that the React scaffolding project is a bit less fully featured than the Angular one, but maybe that’s a good thing. One of the complaints levelled at Angular is that it is too bloated. I guess we will see over the next few weeks!

HTF do I move from Angular components to React components

Angular has web components.

By that I mean a modular piece of front end UI code with the following:

  • HTML template
  • TypeScript class to handle UI logic
  • Encapsulated CSS

Angular’s web components also support one way data flow via the @Output and @Input property decorators, and dependency injection.

So how do I Reactify these concepts?

Let’s start with a basic dumb Angular Component, with a form, an input value and an output emitter, and see if we can’t shiny it up with some React:

useful-form.component.ts

import { Component, EventEmitter, Input, Output } from '@angular/core';
import { FormControl, FormGroup, Validators } from '@angular/forms';

export interface UsefulInformation {
  number: number;
  crucialInformation: string;
}

@Component({
  selector: 'app-useful-form',
  templateUrl: './useful-form.component.html',
  styleUrls: ['./useful-form.component.css']
})
export class UsefulFormComponent {
  @Input() name: string;
  @Output()
  usefulUserInformation: EventEmitter<UsefulInformation> = new EventEmitter<
    UsefulInformation
  >();

  public usefulForm: FormGroup;
  public crucialOptions = ['important', 'necessary', 'crucial'];

  constructor() {
    this.usefulForm = new FormGroup({
      // inputs aren't populated yet when the constructor runs, so start both controls empty
      number: new FormControl('', Validators.required),
      crucialInformation: new FormControl('', Validators.required)
    });
  }

  public submit() {
    this.usefulUserInformation.emit(this.usefulForm.value);
  }
}

useful-form.component.html

<form [formGroup]="usefulForm">
  <h2>Hello <span *ngIf="!name">person</span>{{ name }}</h2>
  <label for="number">Number</label>
  <input
    id="number"
    type="number"
    formControlName="number"
    placeholder="pick a number!"
  >
  <label for="crucialInformation">Crucial information</label>
  <select
    id="crucialInformation"
    formControlName="crucialInformation"
  >
    <option
      *ngFor="let option of crucialOptions;"
      [value]="option"
    >
      {{ option }}
    </option>
  </select>
  <button
    (click)="submit()"
    [disabled]="usefulForm.invalid"
  >Submit</button>
</form>

So here we have a component that can be embedded in other HTML, with an input passed in via a name field that we’re rendering in the template, an output event that can be listened to, and a reactive form. There is also an *ngFor and an *ngIf.

The component can be used like below in a parent component:

<app-useful-form
  name='Rob'
  (usefulUserInformation)="handleUserInformation($event)"
></app-useful-form>

All pretty standard stuff. Let’s try and replicate this behaviour in React.

Reactified useful component

First of all I want to roughly map some Angular concepts related to components, to their React equivalents:

useful-form.js

import React from 'react';

export default class UsefulForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      number: '',
      crucialInformation: ''
    };

    this.crucialOptions = ['important', 'necessary', 'crucial'];

    this.handleNumberChange = this.handleNumberChange.bind(this);
    this.handleCrucialInformationChange = this.handleCrucialInformationChange.bind(
      this
    );
  }

  handleNumberChange(event) {
    this.setState({ ...this.state, number: event.target.value });
  }

  handleCrucialInformationChange(event) {
    this.setState({ ...this.state, crucialInformation: event.target.value });
  }

  render() {
    return (
      // preventDefault stops the native form submit from reloading the page
      <form
        onSubmit={(event) => {
          event.preventDefault();
          this.props.handleSubmit(this.state);
        }}
      >
        <h1>Hello {this.props.name || 'person'}</h1>
        <label>
          Number:
          <input
            type="number"
            required
            placeholder="pick a number!"
            value={this.state.number}
            onChange={this.handleNumberChange}
          />
        </label>
        <label>
          Crucial information:
          <select
            required
            value={this.state.crucialInformation}
            onChange={this.handleCrucialInformationChange}
          >
            <option value=''>Pick option</option>
            {this.crucialOptions.map(option => <option value={option} key={option}>{option}</option>)}
          </select>
        </label>
        <input
          type="submit"
          value="Submit"
          disabled={!(this.state.number && this.state.crucialInformation)}
        />
      </form>
    );
  }
}

The component is used as follows in a parent component:

import React, { Component } from 'react';
import UsefulForm from './useful-form';

class App extends Component {

  handleUsefulFormSubmit(event) {
    window.alert(JSON.stringify(event));
  }

  render() {
    return (
      <UsefulForm
        name="Rob"
        handleSubmit={this.handleUsefulFormSubmit}
      />
    );
  }
}

export default App;

What are the key differences/learnings?

  1. React seems to be much less opinionated about how you do things, and more flexible. It is also just JavaScript, whereas Angular is in many ways its own thing.
  2. Forms in React seem to require more work to achieve the same as in Angular (validation, disabled buttons, updates to form values etc.). I suspect as I mess around with this I will find some nicer patterns for handling programmatically driven forms.

Overall the differences are not as great as I had feared. Both allow for controlling child components from a parent component, and flowing data into and out of the component without mutating things. Also the difference between *ngIf and *ngFor and just using interpolated JavaScript is very minimal.

I am pleasantly surprised by how much I like React so far…

HTF do I move to React from Angular(2+)

Are you an Angular dev who needs to learn React fast?

Starting to get fed up with recruiters and tech teams passing you over because you are not using the trendy new front end framework and therefore must be some sort of technical Neanderthal?

I am!

So without any more preamble, here is my attempt to map the concepts and ideas I have come to know and (mostly) love from the Angular world, to the shiny new land of React/the billion libraries that are necessary to make React useful… (no YOU’RE bitter).

In no particular order, I will be covering the following over the next few posts:

  • Web components (specifically *ngFor, *ngIf, Inputs, Outputs etc.)
  • Content projection/transclusion
  • Routing
  • Angular Material
  • Internationalisation/translation
  • Services
  • Data mapping
  • Styling/style encapsulation
  • Unit testing
  • Static typing
  • Asynchronous code (RxJS)
  • AJAX requests
  • State management
  • Automated testing
  • Dependency injection
  • Angular CLI

These are all things that Angular does ‘out of the box’, and that I have come to professionally rely on. So let’s try and replicate them.