WTF is wrong with software!?!

I recently watched the excellent series Chernobyl.

It tells the story of a catastrophic nuclear meltdown, caused by concerted human incompetence and blind adherence to bureaucratic demands, even in the face of blatant evidence that what was being done was stupid, negligent, and dangerous.

After the accident, it took a long time to understand what had gone wrong, blame was thrown around wildly, and there were official efforts to disguise the causes of the accident.

Very little was done to try to learn from the systemic failures, or to address the problems the incident exposed.

As a software professional, it made me very sad how many similarities I can see between most places I have worked, and the Chernobyl accident and cover up.

How I feel most of the time at work:

(image: dog drinking coffee in a burning building)

The way software is written is fucked.

It is complex enough that very few people understand it, and the process of writing it is consistently mismanaged, despite the fact that most of the pitfalls and mistakes made by literally every single team have been pointed out, along with detailed steps for avoiding them, since the 1970s.

Here is a pretend conversation that sums up the level of ignorance I have found in my professional life, as applied to another industry (it is a metaphor, you see):

  • Bob: Hello, I’m the new pilot for this here commercial aircraft.
  • Geoff (Bob’s employer): Hello Bob! Glad to have you on board. I hope you’re worth the enormous sum of money we’re paying you
  • Bob: Well of course I am! I passed your test didn’t I 🙂
  • Geoff: Well yes, you are right, you were very good at identifying the key differences between a jumbo jet and a bicycle, so I guess we’re all OK on that front

Some time later…

  • Emma(Bob’s copilot): Bob could you land the plane for me please. I need to visit the bathroom
  • Bob: Sure thing. One question, where do you keep the landing ropes?
  • Emma: What was that Bob? The landing ropes… What are they?
  • Bob: The ropes you use to loop around the lorry on the landing bit on the floor so it can pull you in and you can land
  • Emma: Bob, that is insane. What are you talking about
  • Bob: Oh, do you use a different system. I could use roller skates instead
  • Emma: …
  • Geoff: Hey guys, are you going to land this plane soon
  • Bob: Sure thing, just debating whether to use the ropes or the roller skates. I think we’re leaning towards the roller skates given the weather
  • Emma: …
  • Geoff: OK great, let me know how you get…
  • Emma: WHAT THE FUCK ARE YOU BOTH TALKING ABOUT!?!
  • Bob: Hold up there Emma, I’m not sure there’s any need to get angry like this
  • Geoff: Emma, what’s your problem? We really need to land this plane soon so can you please sort this out between you. What Bob said about the roller skates sounds sensible to me
  • Emma: BUT WE’LL CRASH INTO THE FLOOR AND DIE
  • Bob: I think that’s a bit extreme Emma, when I worked at Initrode we always used roller skates in inclement weather and we hardly ever crashed. We never took off either though… but I’m pretty sure the landing stuff would be fine

etc….

For another excellent summary of the type of madness you are likely to find in a software team, try this:

https://gizmodo.com/programming-sucks-why-a-job-in-coding-is-absolute-hell-1570227192

‘Ah but fortunately your example was unrealistic. That involved an aircraft, which could crash into the floor and kill people. Also nuclear reactors are totally different to coding. Software is just computers. It can’t do anything really nasty… right?’

No!

Not only is software fucked, it is increasingly responsible for scary things.

Defective software has recently:

  • Crashed two planes.
  • Exposed countless people’s private information to criminals. (This happens so often now that it’s barely even news)
  • Stopped customers from accessing their online banking for months.
  • Prevented doctors from accessing test results.
  • Grounded flights out of major airports.
  • Thrown people off electric scooters by ‘braking heavily’ under certain circumstances.

I found these examples in a ten-minute Google search. There are probably many more cases, many of which are inherently difficult to reason about because… software is fucked!

To recap, software is often:

  1. Responsible for scary things
  2. Written by incompetent people
  3. Managed by less competent people
  4. Very difficult to understand for ‘non technical’ people
  5. Shit

Which all combines to make me increasingly cynical and scared about my profession…

I have spent quite a bit of time reading about and discussing this phenomenon, and there appears to be a growing consensus, among both my peers and the wider community, that this cannot go on.

One of the potential solutions which is floated, in various forms is ‘REGULATION’.

Regulation!?! Yuck…

I’m not sure how I feel about it either… or what it should look like.

The cynic in me thinks that any attempt to regulate software development would add unnecessary overhead to an already slow and inefficient process, and would likely be mismanaged.

I also know that certain elements of writing software are very difficult to reason about, and that ensuring a program does what you intend it to do is hard.

That is to say, it is possible that even after doing your due diligence, becoming excellent at your craft, and testing all foreseeable outcomes, your code could still cause something horrible to happen.

In these cases I suspect assigning blame would be counter productive, whereas learning from mistakes could be very useful.

That said, the level of incompetence, and negligence I have encountered in some work places feels like there should be some legal protection in place, to make sure that development teams take responsibility for their code.

It seems totally unacceptable that testing and design are frequently shunted to the back of the priorities list, behind ‘hitting milestone 3’ or ‘drop 2’ or whatever that particular company is choosing to call their next client delivery.

If a shoddy system of development results in software which causes a loss of data, or money, or lives, then maybe someone should be held accountable.

That person should probably be paid a fairly large sum of money, for taking on the responsibility of ensuring that the software works (as many people in the software industry already are…), but equally, that reward should reflect the very real responsibility that has been taken on.

By which I mean that if it is found that the highly paid person who put their professional seal of approval on a product did not adequately ensure that a system was put in place to develop robust and safe software, then they should be punished.

I don’t have a particular problem if your marketing website, or your blog has some bugs or UX issues, but almost all software I have worked on has involved user accounts and financial transactions, and is therefore responsible for people’s personal information and money.

That this type of software is developed within a system that rewards cutting corners and punishes slow deliberation and robustness is something I find deeply worrying.

Systems of development

One thing to emphasise is that I don’t think the state of software is a reflection of any specific individual failings.

It is not a case of ‘shit developers’ or ‘shit managers’ or ‘shit clients’.

You might have noticed that I keep writing the word ‘system’.

That is because I recently read an excellent book on systems thinking and now frequently regurgitate snippets from it:

https://www.amazon.com/Thinking-Systems-Donella-H-Meadows-ebook/dp/B005VSRFEA

One of the things that really stuck with me from this book is the idea that the actual outcomes of a system always reflect its true goals, or motivations, but that those true goals might differ (sometimes wildly) from any officially stated goals.

My theory is that, in the majority of cases, software teams’ true goals and motivators are not robust software, a reworked and improved product, or a solved customer problem, and that is why these are rarely the outcome.

A team responsible for a piece of software is made up of a number of actors, or elements, all with competing goals.

You may have an external client, who you work with to define what the software should do.

They probably have a set of internal stakeholders they have to please, they may have a certain deliverable linked to their bonus that year, they may have inherited the project from someone else, and maybe they are trying to leave the company anyway, so they don’t particularly care about the long term success of the product.

The budget for the piece of software might have come about from a high level executive joining the client company and wanting to throw their weight around. They might not still be at the company in six months.

The team developing the software might have, intentionally or otherwise, oversold what they can deliver and offered it at an unrealistically low price.

All of this means that individual milestones or deliverables within a project become very important, as many of the actors within the system are depending on the outcome of them.

In a setup like the one above, the date the software is delivered is more important than whether it is shit or not, because more of the people within the system depend on the date being hit than on the software working well.

Individual developers within the team have a variety of different motivations.

Some are there just to take advantage of the elevated wages within the industry, and are content to turn up, do their job within the confines set out in front of them and go home.

They may or may not understand what they are doing, and they may make things so complicated from a code perspective that reasoning about whether the product has bugs or not becomes almost impossible, but to be honest it doesn’t really matter.

As long as something is produced which looks and feels like something that was signed off and paid for, these guys have done their job.

These are the majority of developers I have encountered, and I don’t have a problem with them.

I do have a problem with the fact that the system they are working inside rewards this type of behaviour, but that is not their fault.

Others might want to actually deliver on the stated goals of the system, that is, roughly:

‘make a piece of robust software in the least complex way possible, such that it solves a clearly defined problem for the user of the software’.

This is quite a bit harder, requires constant reassessment of tools, techniques and design, and a commitment to mastering the fundamentals of your craft, across all disciplines involved, but in the medium to long term is definitely the way to go.

Against the background of all the other competing goals and motivations, these latter elements are likely to be frustrated.

Calls to slow down the pace of development in order to revisit the design of the product, or even to reassess whether the product is viable as a solution are generally ignored, as they will impede ‘deliverable alpha’.

In such a system, cutting corners is rewarded at all levels, and, again, this is not particularly anyone’s fault.

This would all be fine if it was happening in a bubble, but it’s not. These pieces of software are responsible for stuff that matters, and the real costs of these decisions are not felt by the people responsible for making the software, but by the users themselves.

In the above scenario, there might be a few raised voices and tense meetings, but it is very likely that everyone involved in making the software will be rewarded, as long as something is delivered on or around the date in the project plan.

Users may be lumbered with a buggy, slow, insecure program, or web app, but that is not really factored into the cost of development.

So what’s your point?

People responsible for making software are often motivated by many things other than making reliable, secure software that solves a defined user problem.

The negative effects of bad software are generally felt by users, not by those who put the software together.

In order to improve this situation, and to get better software that isn’t shit, it seems like there needs to be some sort of change so that the people responsible for creating software feel the effects of their shitty software more keenly themselves.

I don’t know what that would look like, and to be honest I don’t really have a point… All I really know is that I find my profession increasingly scary and insane, and that I’m not alone.

HTF do I write E2E tests with a stubbed dependency? (Angular/Nrwl Nx edition)

In an earlier post, I went over some of the reasons you might want your E2E tests to run against a stubbed version of any really slow or unpredictable dependencies.

Let’s see how this might look in an Angular context, making use of the framework’s inbuilt dependency injection, and TypeScript.

NB I’m going to use Nrwl Nx to spin up these example bits of code inside a shared ‘workspace’ or ‘monorepo’ (the terminology seems to change).

If you are unfamiliar with Nrwl Nx you should check it out! I’ve used their tooling extensively at work, and after a few hiccups can strongly recommend it. See here for a slightly outdated explanation of why/how I’ve used their stuff in the past.

Basically though, they provide a way of easily building multiple applications/libraries from within one repository, supporting code reuse between applications, with convenience tooling, built on top of the Angular CLI, for handling unit testing, e2e testing, builds etc. as well as a bunch of generators for generating opinionated bits of ‘best practice’ Angular/ngrx code. (And you get all of this for freeeee!)

The method I’m using for stubbing dependencies would be equally applicable to a ‘vanilla’ Angular application however.

Meet the players

1) Our main Angular application. It has a button which, when clicked, will call our Slow Ass Api’s getInformation endpoint, and when finished will render the result (of type SlowInformation).

2) Our Slow Ass Api. This is a separate TypeScript library that exposes one method, getInformation, which returns an object of type SlowInformation. The (not so) clever bit is that this call simulates unpredictable slowness by returning the data after a random delay of anywhere from 0ms up to 10000ms (a rough sketch of its shape follows after this list).

3) Our E2E tests. I am going to use Cypress for these because I happen to really like it. If you want to use Protractor, this method will still work; you will just have to use the executeScript method to talk to the window object instead. Also, if you create your application with Nrwl Nx they will set up all of the code scaffolding and configuration for you to support either Protractor or Cypress (you can choose as part of their interactive setup script).
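
For orientation, here is a rough sketch of what the slow-ass-api library exposes. This is not the exact code from the commits (the field names are illustrative), but it shows the important bits: a strongly typed SlowInformation result, and a random delay of up to 10 seconds.

// Illustrative sketch of the slow-ass-api library (field names are made up)
export interface SlowInformation {
  fact: string;
  interestLevel: string; // e.g. 'holyCrudThatIsInteresting'
}

export function getInformation(): Promise<SlowInformation> {
  // Simulate an unpredictable dependency: resolve after anywhere from 0ms to 10000ms
  const delay = Math.random() * 10000;
  return new Promise(resolve =>
    setTimeout(
      () => resolve({ fact: 'An interesting fact', interestLevel: 'holyCrudThatIsInteresting' }),
      delay
    )
  );
}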

The action

Below are a bunch of links to specific commits. If you want to play along you can clone the project and checkout individual commits to see how the tests behave at various stages of development.

I will pick up from after the initial project skeleton is put together. As I mentioned, this was done using Nrwl Nx’s extensions to the Angular CLI. If you are curious, you can see what these tools do for you in this commit

Generate slow-information-renderer Angular app (commit):

This uses another of the Nx scripts to generate a fully fledged Angular app. I chose to use Jest and Cypress, so it also sets up an e2e testing project, which can be run with npm run e2e

Generate slow information renderer component and slow information service (commit)

These will be responsible for asking the user to click a button, which will ultimately call our service to call the slow ass api.

Again, this makes use of generator commands to create a boilerplate component with best practices.

Generate and write slow ass api library (commit)

Due to an over eager interactive rebase I have accidentally lumped my changes in with the generator changes.

Basically though, here I create a library for returning strongly typed data of type SlowInformation after a random delay (I even unit tested it! I got carried away…)

The main file to concentrate on is slow-ass-api.ts

Plug the library into our Angular Application, and do some wiring (commit)

After this, our app will render data from the slow-ass-api after a delay

Add a naive e2e test (commit)

This commit adds a basic e2e test which is, unfortunately, quite flaky. Because of the random delay in the data coming back, the test sometimes passes and sometimes doesn’t. This can be demonstrated by making the API resolve more quickly (commit)
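
For reference, the naive test looks roughly like this (the selectors and assertion text here are illustrative, not copied from the commit):

// Naive Cypress test: click the button and immediately expect the data.
// Because the API answers after a random delay of up to 10 seconds, the
// assertion often runs out of time before the data arrives, so the test is flaky.
describe('slow-information-renderer', () => {
  it('renders the slow information after clicking the button', () => {
    cy.visit('/');
    cy.get('button').click();
    // Cypress retries this for its default timeout (4 seconds), which is
    // frequently shorter than the API's random delay
    cy.contains('interesting').should('exist');
  });
});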

With the API resolving more quickly, our tests behave properly. Unfortunately, in the real world you can’t just tell a slow API to be quicker, so we will need a different solution.

Make the test work less flakily (commit)

Here, we experiment with making our test wait 10 seconds to make sure all the data is there. This works pretty well!

However, we have made our tests take longer than they need to, and, crucially, if the API ever takes longer than 10 seconds to resolve, our test will fail again. Perhaps this isn’t the best solution after all…

Complexify the app (commit)

We receive some requirements from our Product Owner, and it seems that a new feature is required: if a fact is at the holyCrudThatIsInteresting level of interest, we should show a button allowing the user to request more information about the fact.

We add the feature and look to update our e2e tests.

Now that our app logic is more complicated, we need to account for this complexity in our e2e tests.

Test the new complexified app (commit)

We add a new test for the feature, but unfortunately it only passes very, very rarely: if the random fact that comes back is not of the right interest level, our button is not shown and the test fails. Disaster!

We could start to experiment with making our tests clever, and having them check for the interest level of the fact before progressing, but it is easy to see how an approach like that could blow up and become messy very quickly.

Enter the stub slow information service (commit)

We generate a new service that implements the existing InformationService in the Angular application. Because it is strongly typed, it must follow the same interface, so we can be relatively confident that our stub’s methods cannot return nonsensical or invalid data.

This commit is the meaty part of this post. Here we use Angular’s environment files, as well as its dependency injection, to run our e2e tests with the StubSlowInformationService instead of the SlowInformationService.
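
Conceptually, the wiring looks something like the sketch below. It is not the exact code from the commit (names and the environment flag are approximations), but the idea is: a shared token/interface, a stub that implements it with instant fixture data, and an environment file that decides which implementation the module provides.

import { Injectable } from '@angular/core';
import { Observable, of } from 'rxjs';

// Assumed shape of the data; the real type lives in the slow-ass-api library
export interface SlowInformation {
  fact: string;
  interestLevel: string;
}

// An abstract class can act as both the DI token and the shared interface
export abstract class InformationService {
  abstract getInformation(): Observable<SlowInformation>;
}

// Stub used only for e2e runs: same interface, but instant, predictable data
@Injectable()
export class StubSlowInformationService implements InformationService {
  getInformation(): Observable<SlowInformation> {
    return of({ fact: 'a stubbed fact', interestLevel: 'mildlyInteresting' });
  }
}

// In the AppModule, an environment flag (e.g. environment.e2e) picks the implementation:
//
//   providers: [
//     {
//       provide: InformationService,
//       useClass: environment.e2e ? StubSlowInformationService : SlowInformationService,
//     },
//   ],

The e2e build then runs against the stub, so the application the Cypress tests click around in never touches the slow API.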

Now our tests run, and they run quickly.

If the slow-ass-api implementation changes, our stub will stop compiling, and we know that we need to update our code. So this approach is relatively safe, assuming that you have a sufficiently well defined contract for how the api should behave (in this case our TypeScript Type definitions).

Hooray!

Even more control (commit)

Going one step further, in this commit, we expose methods on the window object, meaning that we can change the behaviour of our stub at run time, during our e2e tests.

Again, this is relatively safe if steps are taken to make sure the API contract is adhered to in the Angular application (by respecting the Types defined in the slow-ass-api library).
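
The run-time control can be as simple as the stub registering a hook on the window object, which the Cypress test then calls via cy.window(). Again, this is a sketch with made-up names rather than the exact code from the commit:

import { Injectable } from '@angular/core';
import { Observable, of } from 'rxjs';
// SlowInformation and InformationService as in the earlier sketch (illustrative import path)
import { InformationService, SlowInformation } from './information.service';

@Injectable()
export class ControllableStubSlowInformationService implements InformationService {
  private nextResponse: SlowInformation = { fact: 'a stubbed fact', interestLevel: 'mildlyInteresting' };

  constructor() {
    // Only present in e2e builds, so tests can drive the stub at run time
    (window as any).setNextSlowInformation = (info: SlowInformation) => {
      this.nextResponse = info;
    };
  }

  getInformation(): Observable<SlowInformation> {
    return of(this.nextResponse);
  }
}

// In a Cypress test:
//   cy.window().then((win: any) =>
//     win.setNextSlowInformation({ fact: 'very interesting', interestLevel: 'holyCrudThatIsInteresting' })
//   );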

Conclusion

We have managed to write an e2e test which is quite robust, fast and easy to manage.

Due to the fact that both our Angular application and the slow ass api conform to a common contract about the behaviour of the API, we can be relatively confident that our tests are meaningful, and represent a realistic interaction with our API.

I think this is a pretty neat approach, and it has proved successful for me at work also. I’d be very keen to hear other people’s opinions though, as e2e testing in general is something that I’ve found to be a seriously grey area in the frontend world.

Why TF should I stub dependencies for an E2E test?

E2E (or ‘end to end’) tests, when written for web applications, basically allow you to write scripts to interact with your application via a browser, and then assert that it responds correctly.

Common libraries that allow you to do this for Angular apps are Selenium/Protractor and Cypress (you should check out Cypress in particular).

The browser can be ‘headless’, meaning that you can run it in a command line, without a GUI (see headless chrome for an example). This means that these tests can be run by your build server to support a continuous integration/continuous deployment pipeline.

You can then develop scripts to click about, and test your most important user flows quickly and consistently, on your real application, in a very close approximation of how your users would click around. This is an attractive alternative to manually running these tests yourself by spinning up your application and clicking about. Humans are good at lots of things, but computers trump them every time when it comes to performing repeated actions consistently.

E2E tests are an example of black box testing.

If the tests fail, your application is broken; if they pass, your application may still be broken, but it is not terribly broken (as we have tests ensuring that key user flows are still accessible).

This process relies on an assumption that these tests are very robust and not ‘flaky’ (they only fail when something is actually broken, not just randomly).

There are a few things that can cause E2E tests to randomly fail, but by far the most common in my experience is some form of network latency. Basically any time your code is interacting with something which can take an indeterminate amount of time to finish, you are going to experience some pain when writing automated tests for it.

The most recent (and most evil) version of this I have experienced is an application that is plugged into a blockchain and involves waiting for transactions to be mined. This can take anywhere from a few seconds, to forever.

Unless you are a lunatic, if your application involves any sort of transactions (purchases, bets, swiping-right), which involve user data, your automated tests will already be running on some sort of test environment, meaning that when you mutate data, it is not done on production data, but a copy. This comes with its own little pain points.

When talking to a truly stateful dependency (like an API plugged into a database), you will have to account for the state of the database when performing any actions in your browser test.

As one example, if you are logging in, you might have to first check whether the user has an account, then log in if they do, or create an account if not. This adds complexity to your tests and makes them harder to maintain/potentially flakier.
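
As a rough illustration (the endpoint, selectors and data here are entirely made up), the branching can end up looking something like this in a Cypress test:

// Against a truly stateful backend, the test has to cope with whatever state
// the database happens to be in, which pushes branching logic into the test itself
describe('login against a stateful test environment', () => {
  it('logs in, creating the account first if it does not exist', () => {
    // Hypothetical endpoint used to check whether the user already exists
    cy.request({ url: '/api/users/testuser01', failOnStatusCode: false }).then(response => {
      if (response.status === 404) {
        // No account yet: go through the sign-up flow first
        cy.visit('/signup');
        cy.get('input[name=username]').type('testuser01');
        cy.get('input[name=password]').type('aTestPassword!');
        cy.get('button[type=submit]').click();
      }

      // Either way, we should now be able to log in
      cy.visit('/login');
      cy.get('input[name=username]').type('testuser01');
      cy.get('input[name=password]').type('aTestPassword!');
      cy.get('button[type=submit]').click();
      cy.contains('Welcome').should('exist');
    });
  });
});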

Another option is to control the test environment, and write (even more) scripts to populate the test database with clean data that can be relied on to power your tests (make sure that a specific user is present for example).

This removes complexity from the browser tests, but adds it to the scripts responsible for set up and tear down of data. This also requires that your test data is kept synchronised with any schema changes to your production database. Otherwise you run the risk of having E2E tests which can pass, even while your real application is broken, as they are testing on an out of date version of the backend.

(To clarify, these are all valid options, and there are solutions to many of the problems above)

However, while this approach solves the problem of accidentally munging user data, a backend plugged into a staging database can still be forking slow, as again you are at the mercy of network speeds and many other things out of your control.

Taken together, this is why I favour ‘stubbing’ the slow bit of your application (in many cases a backend service). This involves providing a very simple approximation of your Slow Ass Service (SAS), which follows the same API, but returns pre-determined fixture data, without doing any slow networking.

There is one major caveat to this approach.

It only works in a meaningful way if you have some sort of contract between the SAS and your frontend.

That is to say, your frontend and the SAS (Slow Ass Service) should both honour a shared contract describing how the API should behave, and what data types it should accept and return. (For a REST API, Swagger/OpenAPI provides a good specification for describing API behaviour.) Because this shared contract is written in a way that can be parsed, automatic validations can be made on both the frontend and the backend code to ensure that they meet the requirements.
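
If both sides happen to be written in TypeScript, one lightweight version of such a contract is a small shared types library that the frontend, the SAS and the stub all depend on. The names below are purely illustrative:

// shared-contract.ts: a tiny contract consumed by both the frontend and the SAS.
// If either side drifts away from it, that side stops compiling.
export interface FactRequest {
  topic: string;
}

export interface FactResponse {
  fact: string;
  source: string;
}

export interface FactApi {
  getFact(request: FactRequest): Promise<FactResponse>;
}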

If you were reading closely, you will notice I mentioned data ‘types’, so most likely you will require some sort of typed language on both the frontend and within the SAS (I would recommend TypeScript 🙂 ).

With this in place, as long as your stub implementation of the SAS also honours this contract (which can be checked in with the SAS source code), then you can feel happy(ish) that your stub represents a realistic interaction with the real SAS. If the contract changes, and your code doesn’t, it should not compile, and your build should fail (which we want!).
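
Continuing the illustrative contract above, the stub is then just another implementation of it, and any drift between the contract and the stub becomes a compile error:

// A stub implementation of the (made-up) contract: same shape, no networking.
// If the contract changes and this file is not updated, the build fails,
// which is exactly what we want.
import { FactApi, FactRequest, FactResponse } from './shared-contract';

export class StubFactApi implements FactApi {
  getFact(request: FactRequest): Promise<FactResponse> {
    return Promise.resolve({
      fact: `A canned fact about ${request.topic}`,
      source: 'stub fixture data',
    });
  }
}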

Now that you have this in place, your E2E tests can be much less flaky (or not flaky at all), and, as an added benefit, will run super fast on your build server, meaning that the dev workflow is slightly nicer too.

Again, there are gazillions of different ways of approaching this problem, and obviously you need to figure out what it is that you care about.

In my experience though, this presents a good compromise and has meant that I (finally) have managed to get E2E tests running as part of CI/CD without wasting huge amounts of time and causing headaches.

In a subsequent post I demo how to actually get this working in an Angular application that depends on a strongly typed third party library, which happens to be slow as hell 🙂

WTF is express-bed

If you’ve worked with Angular, and written unit tests, you’ve probably used TestBed.

I recently had to write an Express app using TypeScript at work, which is something relatively new for me.

As I am a testing zealot and get scared if I don’t have good unit tests on my projects, I wanted to have a tool similar to Angular TestBed for configuring the tests in a predictable and speedy way.

Because I am foolish I decided to try and write my own.

This is the result of that :

https://www.npmjs.com/package/express-bed

If you implement your express routes as classes, and add a public create method to them which is passed an Express app instance and does the actual app.get('/swankyUrl', (req, res) => ..., then this tool should work for you too, if you’re interested 🙂
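
Here is a sketch of the route-as-a-class pattern described above. The names are made up, and the exact shape express-bed expects is documented in its README; this is just to show the idea:

import * as express from 'express';

// A route implemented as a class with a public create method, which receives
// the Express app instance and does the actual route registration
export class SwankyRoute {
  public create(app: express.Application): void {
    app.get('/swankyUrl', (req: express.Request, res: express.Response) => {
      res.json({ message: 'so swanky' });
    });
  }
}

// Wiring it up in a normal application:
const app = express();
new SwankyRoute().create(app);
app.listen(3000);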

WTF is Angular TestBed

Angular ships with a unit testing API called TestBed, which is specifically designed for writing unit tests for the Angular framework. It allows for simple testing of Angular web components, that is to say a TypeScript class, paired with some HTML.

For a brief intro to unit testing concepts see here

If you use the Angular CLI for generating your app and your components (which you should), then it will by default set up a <component-name>.component.spec.ts file, with some boilerplate setup code for TestBed testing already populated.

Let’s have a look at how we might write a simple login component that is properly tested, using Test Driven Development, the Angular CLI and TestBed.

Login Component

Before writing any code, we should think about what the component is supposed to do.

This step is important, because if the behaviour of the component is not properly defined, we cannot write tests for it, and we can’t properly define the interface. This behaviour may need to change later on, but for now we need a design contract to code against. For a basic login component, I’m going to say I want:

  • login form
  • input box of type text for username
  • input box of type password for password
  • button which emits a login event with a payload of username and password

This is enough for me to write a suite of tests, and to develop my component against those criteria.

I’m going to set up a quick angular project using the Angular CLI in order to demonstrate all of the default test tooling that comes with Angular:

I will create one called ‘test-app’

ng new test-app

Once this command has run you will have a default hello world application. To view it, move into the directory of the new application and run

ng serve

If you open your browser and type localhost:4200 into the url box you should now see the default Angular application.

For now we will just use the app.component component as a sandbox for testing our new login component. For the first step, delete all the default code from app.component.html, as this is where we will be putting our new component eventually.

Also delete app.component.spec.ts as these tests will now fail because we have changed the underlying component.

Now we are ready to create our login component:

ng generate component login

This will create a new Login component with a selector called app-login, and add it as a declaration in app.module.ts. These files will be created:

login.component.html, login.component.css, login.component.ts, login.component.spec.ts

This component can now be used in the app.component.html template file, so let’s add it now. app.component.html should now have only the following code in it:

<app-login></app-login>

The app at localhost:4200 in your browser should now say something similar to below:

login works!

TestBed setup

Before we start writing tests, let’s take a look at the generated login.component.spec.ts file to get a better understanding of how TestBed is set up:

    
    import { async, ComponentFixture, TestBed } from '@angular/core/testing';

    import { LoginComponent } from './login.component';

    describe('LoginComponent', () => {
      let component: LoginComponent;
      let fixture: ComponentFixture<LoginComponent>;

      beforeEach(async(() => {
        TestBed.configureTestingModule({
          declarations: [ LoginComponent ]
        })
        .compileComponents();
      }));

      beforeEach(() => {
        fixture = TestBed.createComponent(LoginComponent);
        component = fixture.componentInstance;
        fixture.detectChanges();
      });

      it('should create', () => {
        expect(component).toBeTruthy();
      });
    });

This sets up the TestBed environment, and one basic test (‘should create’). Let’s run the tests now and see what happens:

ng test

A Chrome window should open, showing the test output. If everything is working, the single default test will pass.

So now we have a test that ensures our component can be instantiated correctly. At this point it is worth quickly digressing and explaining what TestBed.configureTestingModule does.

It basically allows you to set up a miniature Angular application, with only the components, providers, modules etc. that are needed for the specific piece of code you are testing. In our example it currently just has a declarations array, with our LoginComponent in it.

Specifically, the block below sets up the testing module, then compiles all the components. Again, in our case this is just the LoginComponent.

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [ LoginComponent ]
    })
    .compileComponents();
  }));

The compileComponents call is asynchronous, so to make the tests easier to understand and avoid dealing with promises explicitly, we use the Angular testing module’s async helper to ensure that all of the asynchronous code has finished before the next block of code is run. Essentially this means that the asynchronous code can be read synchronously.

Once this is done we get references to the ComponentFixture of type LoginComponent and the LoginComponent itself:

  let component: LoginComponent;
  let fixture: ComponentFixture<LoginComponent>;

  beforeEach(() => {
    fixture = TestBed.createComponent(LoginComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

When we start writing tests we will see how these two variables can be used to interact with our component and to assert that it is behaving correctly.

Jasmine provides the syntax for writing our tests in a behaviour driven style. A detailed analysis of Jasmine is beyond the scope of this post. For more information read the docs. For now all you need to know is that Jasmine is what we are using when we call the beforeEach, describe, it and expect functions.

Time to add some tests!

Now we can begin adding tests to cover our design contract, i.e. the things that our component will do when interacted with in specific ways.

I normally just set up a bunch of empty it blocks as below with the things I’m expecting the component to do. Notice that this reflects our original design contract:

it('should have a form named loginForm', () => {

});

it('should have an input box of type text with the name username', () => {

});

it('should have an input box of type password with the name password', () => {

});

it('should have a button with the text login that emits an event with a payload with username and password, based on form inputs', () => {

});

At this stage we are just looking to organise our thoughts. Once we are relatively happy that we know what we want to test, we can start adding the code to actually test the component. These new tests will fail, as we haven’t written any code yet. Then we can write code to make them pass and in theory our component will work.

Let’s start with the first test, ‘should have a form named loginForm’:

it('should have a form named loginForm', () => {
  expect(fixture.debugElement.query(By.css('form[name=loginForm]'))).toBeTruthy();
});

Here, the fixture variable allows us to query our component and check its DOM for whether certain elements are present. In this case we are checking that a form with the attribute name=loginForm is present on the component. Obviously this is not currently true, so if your tests are still running, they should fail now.

For a more detailed look at how to query your components for specific DOM elements etc. it is best to look at the Angular documentation on testing

OK so now we have a failing test, let’s fix it. Adding the following code to your login.component.html ought to do it:

<form name="loginForm"></form>

Now let’s fill in the rest of login.component.spec.ts:

it('should create', () => {
  expect(component).toBeTruthy();
});

it('should have a form named loginForm', () => {
  expect(fixture.debugElement.query(By.css('form[name=loginForm]'))).toBeTruthy();
});

it('should have an input box of type text with the name username', () => {
  expect(fixture.debugElement.query(By.css('input[type="text"][name="username"]'))).toBeTruthy();
});

it('should have an input box of type password with the name password', () => {
  expect(fixture.debugElement.query(By.css('input[type="password"][name="password"]'))).toBeTruthy();
});

describe('loginButton', () => {

  it('should have a button with id loginButton and text "Login"', () => {
    const loginButton = fixture.debugElement.query(By.css('button[id=loginButton]'));
    expect(loginButton).toBeTruthy();
    expect(loginButton.nativeElement.innerText).toEqual('Login');
  });
});

If you run your tests again now you should have some failing specs. To fix them, add the following code to your login.component.html file:

<form name="loginForm">
  <input type="text" name="username">
  <input type="password" name="password">
  <button id="loginButton">Login</button>
</form>

Run the tests again and the new specs should pass.

So now we have a tested component, designed to a contract, which has a spec file documenting what HTML elements are present on it. Pretty cool! Now if anyone accidentally breaks that contract, a big red message will tell them off 🙂

Notice that in the Karma chrome test output, our component is being rendered to the screen. This is important to understand. We are actually testing our component with a browser to ensure that it is rendered and behaves correctly. This is very powerful!

You might have noticed that our test specification has changed slightly from the first round of tests. This is fine and in fact a good thing. As you start to develop you may find that your assumptions were wrong, and you should adjust your tests and assumptions as you go to reflect any new information or changed design decisions.

Clearly at the moment our component is a bit limited in functionality. Let’s add a test to check that when the button is clicked, it emits an event with username and password based on the form field values:

describe('loginButton', () => {

  let loginButton: DebugElement;

  beforeEach(() => {
    loginButton = fixture.debugElement.query(By.css('button[id=loginButton][type="submit"]'));
  });

  it('should have a button of type submit with id loginButton and text "Login"', () => {
    expect(loginButton).toBeTruthy();
    expect(loginButton.nativeElement.innerText).toEqual('Login');
  });

  it('should have a button that emits an event with a payload of username and password, based on form inputs', (done) => {

    // set up test data for use in our test
    const testUserDetails = {
      username: 'user01',
      password: 'superSweetPassword01!'
    };

    // subscribe to the emitted login event and ensure that it fires and has correct data
    component.login.subscribe((data) => {
      expect(data).toEqual(testUserDetails);
      done();
    });

    // find the input fields and the login button in the DOM and interact with them
    const usernameInput = fixture.debugElement.query(By.css('input[name=username]')).nativeElement;
    const passwordInput = fixture.debugElement.query(By.css('input[name=password]')).nativeElement;

    usernameInput.value = testUserDetails.username;
    usernameInput.dispatchEvent(new Event('input'));
    passwordInput.value = testUserDetails.password;
    passwordInput.dispatchEvent(new Event('input'));

    loginButton.nativeElement.click();

    // trigger change detection so our code actually updates
    fixture.detectChanges();

  });
});

I will be honest, this code is a bit more complex than what we have seen so far… let’s break it down:

We need to test that our component emits a login event which has the correct data when the login button is clicked. This is something we would probably achieve in Angular by using an EventEmitter, which emits a stream from the component that can be subscribed to. As it is a stream it is asynchronous, so here we make use of Jasmine’s mechanism for testing asynchronous code: the done callback. By default, if done() is not called within 5000ms then the test will fail.

it('should have a button that emits an event with a payload of username and password, based on form inputs', (done) => {

    const testUserDetails = {
      username: 'user01',
      password: 'superSweetPassword01!'
    };

    component.login.subscribe((data) => {
      expect(data).toEqual(testUserDetails);
      done();
    });

The remainder of the spec file is code dedicated to interacting with the DOM elements by sending click events etc. After we have made these changes we have to call fixture.detectChanges() in order to trigger Angular’s change detection and actually update our test environment. Essentially what we are testing here is that after populating the form inputs via the DOM, and clicking the login button, our component emits an event. Crucially, we want to ensure that, wherever possible, we are interacting with the DOM directly in our tests, as this is the closest simulation to how a user might interact with our component.

Again, our tests will fail now, as we haven’t written an event emitter yet, or tied it to the form inputs. To fix this we add the following code to login.component.ts so it now looks like below:

import {Component, EventEmitter, OnInit, Output} from '@angular/core';

export interface LoginDetails {
  username?: string;
  password?: string;
}

@Component({
  selector: 'app-login',
  templateUrl: './login.component.html',
  styleUrls: ['./login.component.css']
})
export class LoginComponent implements OnInit {

  @Output() login = new EventEmitter<LoginDetails>();

  userDetails: LoginDetails = {};

  onLogin() {
    this.login.emit(this.userDetails);
  }

  constructor() { }

  ngOnInit() {
  }

}

And these changes to our login.component.html:

<form #loginForm name="loginForm" (ngSubmit)="onLogin()">
  <input type="text" name="username" [(ngModel)]="userDetails.username">
  <input type="password" name="password" [(ngModel)]="userDetails.password">
  <button type="submit" id="loginButton">Login</button>
</form>

Notice that we are now using ngSubmit and ngModel. These require the FormsModule to be imported into the app.module.ts:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';

import { AppComponent } from './app.component';
import { LoginComponent } from './login/login.component';

@NgModule({
  declarations: [
    AppComponent,
    LoginComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Because TestBed is just a way of setting up a mini app, we will also have to add this FormsModule import to our login.component.spec.ts:

beforeEach(async(() => {
  TestBed.configureTestingModule({
    imports: [FormsModule],
    declarations: [LoginComponent]
  })
    .compileComponents();
}));

In order to properly test NgModel-based code via native inputs, we also have to add the following block to our spec file, because NgModel is updated asynchronously:

beforeEach(async(() => {
  fixture.detectChanges();
  fixture.whenStable();
}));

With those changes, the tests should now pass.

NB While I have written the tests first for this post, quite often you will end up writing bits of functionality first and then testing them. As with most things, TDD is a great tool when not used dogmatically. If you are messing around with different implementation options, it can make sense to hack about a bit first, then to add the tests. Basically, don’t let the perfect be the enemy of the good 🙂 Try writing the tests first and then implementing the feature, but don’t be too rigid if you find you need to try some other stuff first. The important thing is that eventually your code is properly covered by meaningful unit tests which clarify how it is to be used and what it does, and protect it from regression bugs.

For the full source code look here

WTF is unit testing?

What are unit tests?

Unit tests are designed to test the smallest sensible ‘units’ of code in a software system in isolation. This isolation is achieved by using mock versions of any dependencies, which can be ‘spied’ on to ensure they are called correctly. This is one of the reasons why dependency injection is so good, as it allows you to pass in pretend versions of other code.

Essentially, unit tests allow a developer to ensure that their unit of code behaves in a predictable way, including any interactions with dependencies, given specific inputs.

They are most effective when they are fast to run, and can be run automatically during development or before a build.

They are not integration tests (https://stackoverflow.com/questions/5357601/whats-the-difference-between-unit-tests-and-integration-tests)

Well written unit tests also have the benefit of providing documentation for the interface the unit of code exposes to the rest of the system.

This can be very powerful, as any developer who has to work on something you have tested can refer to the tests and immediately get a clear definition of what the unit of code does, and how they should use it.

Why write unit tests?

Writing unit tests the right way will make your code better!

Writing unit tests for the sake of it, or just to hit coverage numbers, may make your code better, but won’t necessarily. It is important to understand why unit testing is useful and what benefits it can offer before you get started.

Testing is a tool, just like anything else. It will not magically fix everything if it is not done properly.

Here are some of the potential benefits of maintaining a comprehensive set of unit tests:

  • Helps to document your code for other developers and yourself.
  • Flushes out bugs.
  • Improves design decisions.
  • Acts as a buffer against low-level regression bugs.


How to unit test:

The pattern for writing tests will basically be the same regardless of what the unit of code being tested is:

1) Create an instance of the unit of code that you wish to test.
2) Mock and spy on any dependencies that your unit of code interacts with.
3) Interact with the unit under test via its public interface.
4) Ensure that the unit under test behaves as expected after it is interacted with, including calling any mocked dependent code correctly.
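
To make those four steps concrete, here is a minimal sketch in TypeScript using Jasmine (the test framework used elsewhere in these posts). The GreetingService and Logger names are made up for illustration:

// A dependency we want to mock in the test
interface Logger {
  log(message: string): void;
}

// The unit under test: it depends on a Logger, passed in via the constructor
class GreetingService {
  constructor(private logger: Logger) {}

  greet(name: string): string {
    const greeting = `Hello, ${name}!`;
    this.logger.log(greeting); // interaction with the dependency
    return greeting;
  }
}

describe('GreetingService', () => {
  it('builds a greeting and logs it', () => {
    // 2) Mock and spy on the dependency
    const logger = jasmine.createSpyObj<Logger>('Logger', ['log']);

    // 1) Create an instance of the unit under test
    const service = new GreetingService(logger);

    // 3) Interact with it via its public interface
    const result = service.greet('Emma');

    // 4) Assert on the result and on the interaction with the mocked dependency
    expect(result).toEqual('Hello, Emma!');
    expect(logger.log).toHaveBeenCalledWith('Hello, Emma!');
  });
});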

If for any reason any of these steps is difficult to do, then you may have to rethink how you have written your code. This is why writing tests can help with design decisions. As a general rule if something is difficult to test, it may not be sufficiently modular and/or it may be too tightly coupled to other areas of the system.

For example, if you directly import and construct a dependency inside a class, rather than passing it in via dependency injection, your tests will be forced to use the real dependency. This may be what you want, but it may not be. Unit testing will force you to think about things like this.
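
To illustrate that last point, compare two hypothetical versions of the same class; only the second lets a test hand in a pretend dependency:

// Hard to unit test: the dependency is imported and constructed inside the class,
// so a test has no way to substitute a fake version of it
import { EmailClient } from './email-client'; // hypothetical module

class WelcomeMailerHardToTest {
  private emailClient = new EmailClient();

  sendWelcome(address: string): void {
    this.emailClient.send(address, 'Welcome!');
  }
}

// Easier to unit test: the dependency is passed in (dependency injection),
// so a test can pass a spy object instead of the real client
interface MailSender {
  send(address: string, body: string): void;
}

class WelcomeMailer {
  constructor(private mailSender: MailSender) {}

  sendWelcome(address: string): void {
    this.mailSender.send(address, 'Welcome!');
  }
}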