HTF do I move to React from Angular(2+)

Are you an Angular dev who needs to learn React fast?

Starting to get fed up with recruiters and tech teams passing you over because you are not using the trendy new front end framework and therefore must be some sort of technical Neanderthal?

I am!

So without any more preamble, here is my attempt to map the concepts and ideas I have come to know and (mostly) love from the Angular world, to the shiny new land of React/the billion libraries that are necessary to make React useful… (no YOU’RE bitter).

In no particular order, I will be covering the following over the next few posts:

  • Components (specifically *ngFor, *ngIf, Inputs, Outputs etc.)
  • Content projection/transclusion
  • Routing
  • Angular Material
  • Internationalisation/translation
  • Services
  • Data mapping
  • Styling/style encapsulation
  • Unit testing
  • Static typing
  • Asynchronous code (RxJS)
  • AJAX requests
  • State management
  • Automated testing
  • Dependency injection
  • Angular CLI

These are all things that Angular does ‘out of the box’, and that I have come to professionally rely on. So let’s try and replicate them.

WTF is dependency injection

A dependency can be thought of as anything your code depends on in order to work.

As an example, say we have a class like below:

import { dependency } from 'dependency-library';

class ExampleClass {
  public doSomething() {
    // Uses the imported library directly.
    dependency.doSomething();
  }
}
We can say that ExampleClass depends on dependency-library, or dependency-library is a dependency of ExampleClass.

In this example we could say that ExampleClass is tightly coupled to dependency-library.

Any time we create a new ExampleClass(), we will also directly pull in dependency-library.

If for some reason we don’t want these two to be directly coupled, we could use dependency injection:

Dependency injection version:

class ExampleClass {
  constructor(private dependency: DependencyLibrary) {}

  public doSomething() {
    // The injected dependency is used instead of a direct import.
    this.dependency.doSomething();
  }
}

const instantiatedExampleClass =
  new ExampleClass(new DependencyLibrary());

Why bother?

  • Generally decoupled code is easier to modify and less fragile.
  • It makes unit testing very simple as spying on injected dependencies and passing in simpler implementations is easy.
  • It can allow for highly configurable code where different implementations that use the same public interface can be subbed in where needed.
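The unit testing point is worth seeing concretely. Here is a minimal sketch (the interface and method names are illustrative, not from a real library) of swapping a trivial fake in for the real dependency:

```typescript
// An interface describes the public contract the class depends on.
interface Dependency {
  doSomething(): string;
}

// The real implementation, which we don't want to drag into our unit tests.
class DependencyLibrary implements Dependency {
  public doSomething(): string {
    return 'real work';
  }
}

class ExampleClass {
  // The dependency is injected, so the class only knows about the interface.
  constructor(private dependency: Dependency) {}

  public run(): string {
    return this.dependency.doSomething();
  }
}

// In a unit test we can pass in a trivial fake instead of the real library:
class FakeDependency implements Dependency {
  public doSomething(): string {
    return 'fake work';
  }
}

const testSubject = new ExampleClass(new FakeDependency());
```

Because `ExampleClass` only depends on the interface, the test never touches `DependencyLibrary` at all.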

WTF is a technical interview

A technical interview is an interview to assess whether a candidate is technically strong enough to join your team. A good technical interview should also give you a good impression of whether you could work with the person on a personal level.

Technical interviews are f**king difficult to get right.

As a candidate they can be stressful, bewildering affairs, involving whiteboards, theoretical questions, hands on coding and sometimes even trivia questions. There is very little standardisation and many companies do a bad job.

As an assessor it can be difficult to feel confident that you have properly assessed the candidate and ensured that they meet the technical bar.

Perhaps equally frustrating is a candidate making it all the way to a face to face interview, only for the assessor to discover that they can’t code their way through a basic FizzBuzz challenge. To avoid being a horrible person this results in having to sit through a painful half an hour to an hour of the candidate fumbling even the most basic tasks before an awkward ‘you’ll hear from us…’.

While looking for jobs I have been subject to a dazzling array of different technical interviews, the most heinous of which is the non-existent technical interview.

NEVER work for a company as a software developer if they don’t make any attempt to properly assess you technically before you join.

If they are happy to hire you off the back of an enthusiastic buzz-word laden conversation, then they will do the same for others.

That means it is a total crap shoot as to whether you will be working with talented, passionate developers, or brainless monkeys who read some blogs about ‘Agile’. Do not take that risk.

What does a good technical interview look like?

My favourite interviews as a candidate have involved writing and analysing code, in an editor, on a computer.

My most successful interviews as an assessor have involved writing and analysing code, in an editor, on a computer.

See a theme?

So my completely biased answer is that a good technical interview involves a candidate writing and analysing code, in an editor, on a computer (you know, like they would do if they worked for you for instance…)

More specifically, for a mid to senior weight, hands on Angular developer, I have found the following interview pipeline/format invaluable.

1) Screen CVs. Exclude any without links to work they have done, either live websites, or GitHub projects. This is the number one indicator of whether someone is worth talking to. Then remove poorly formatted CVs or ones with obvious grammatical errors. Our discipline requires a lot of attention to detail, and a sloppy CV is a clear demonstration that this quality is lacking. Also exclude any with obviously poorly written code on their GitHub. Flags to look out for are:

  • Badly named functions
  • No unit tests
  • Poor documentation (no readme or one that is difficult to understand)
  • Code that doesn’t compile

2) Once you have screened CVs, put the ones you are interested in through to a phone screening.

3) Phone screening – Very quick, no more than half an hour. This should involve a series of conceptual and practical questions around your domain. In my case this is Object Orientation (specifically TypeScript), Unit testing, Angular core concepts (this is basically to weed out people who haven’t really used the framework before) and Git knowledge (this is a surprisingly accurate indicator of how much code someone has written in a professional setting).

4) If you like them and think it is worth taking two hours out of yours and up to two other members of staff’s time, schedule a face to face technical interview.

5) The face to face technical interview involves a series of short, hands on technical challenges, each no more than ten minutes long:

  • Debugging exercise: Hand the candidate a PC/laptop with an Angular application that is broken. Ask them to debug why it is broken and fix it, then offer suggestions for how to prevent the bug happening in the future.

  • Pair programming exercise: Give the candidate a PC/laptop with a partially implemented FizzBuzz service and some unit tests. Give them a brief and have them use TDD to fill in the rest of the functionality. Cut this off after ten minutes. By then you will have a good understanding of how the candidate writes code, how they explain their thinking, how they ask questions, whether they have written unit tests before, and how they approach problems.

  • Refactoring exercise: Hand the candidate a PC/laptop with a deliberately badly written piece of code on it. In my case I use an Angular component class which is directly making http calls, is filled with cryptic comments and typos, as well as buggy code and ask the candidate to assess the quality of the code, and suggest any improvements they might like to make. This will give you an idea of how committed a candidate is to writing good quality code. I’m ideally looking for people who are visibly upset by sloppy coding practices.

  • RxJS/reactive programming and streams: This is a conceptually complex area of the current coding landscape. To understand it properly a candidate must have a good appreciation of asynchronous code, and be comfortable with abstraction. Both are ubiquitous in our work. This is not a trivia quiz: we allow the candidate to ask any questions about the syntax we are having them use, and allow them to Google anything they want. Here we present a piece of code with some console logs in it and ask the candidate to predict what will happen when it is executed. We deliberately ask at least one seriously devious question, to see how they handle really difficult problems, and how they behave under pressure.

  • Responsive design: Hand the candidate a piece of non responsive sloppy CSS and ask them how they might make it responsive. The answer we are looking for is some variation of ‘use flex and do this…’.
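For reference, a completed version of the FizzBuzz service from the pairing exercise might look something like this (a sketch, not the actual interview material):

```typescript
// A minimal FizzBuzz service of the kind the pairing exercise starts from.
// The candidate would be driving out these branches one unit test at a time.
export class FizzBuzzService {
  public fizzBuzz(n: number): string {
    if (n % 15 === 0) { return 'FizzBuzz'; } // divisible by both 3 and 5
    if (n % 3 === 0) { return 'Fizz'; }
    if (n % 5 === 0) { return 'Buzz'; }
    return String(n);
  }
}
```

The point of the exercise is not the algorithm itself, but watching how the candidate gets there.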

After this we all breathe, and we move into an open discussion where we answer any of the candidates questions, and more thoroughly explain the role, the team etc.

The key thing that has made this process successful for us is that you gain a real appreciation of how the person will be to work with, because you essentially work with them in the interview.

It also has the added benefit that you are able to sell yourself and your team to the candidate, as most are seriously impressed that you have gone to such lengths to properly assess them. The tests are also all quite fun, and even candidates that don’t perform well have said that they enjoy the experience and will go away with a good impression of your company.

Having had extensive experience of interviewing from the point of view of the candidate, I believe that this is a pretty effective and relatively painless way of assessing a candidate.

What are your thoughts?

WTF is express-bed

If you’ve worked with Angular, and written unit tests, you’ve probably used TestBed.

I recently had to write an Express app using TypeScript at work, which is something relatively new for me.

As I am a testing zealot and get scared if I don’t have good unit tests on my projects, I wanted to have a tool similar to Angular TestBed for configuring the tests in a predictable and speedy way.

Because I am foolish I decided to try and write my own.

This is the result of that:

If you implement your express routes as classes, and add a public create method that is passed an Express app instance and does the actual app.get('/swankyUrl', (req, res) => ...) call, then this tool should work for you too if you’re interested 🙂
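To make the convention concrete, here is a hedged sketch of such a route class. The Express types are stubbed minimally so the snippet is self-contained; in a real app you would import them from 'express', and the route name is just illustrative:

```typescript
// Minimal stand-ins for the Express types used below (assumption: real
// code would import Request, Response and the app type from 'express').
interface Request {}
interface Response {
  send(body: string): void;
}
interface App {
  get(url: string, handler: (req: Request, res: Response) => void): void;
}

export class SwankyRoute {
  // The create method receives the app instance and registers the route.
  // Keeping registration in a method (rather than at import time) is what
  // makes the class easy to instantiate and test in isolation.
  public create(app: App): void {
    app.get('/swankyUrl', (req: Request, res: Response) => {
      res.send('hello');
    });
  }
}
```

In a test you can pass in a fake app object, capture the registered handler, and call it directly.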

WTF is Nrwl nx?

What is Nrwl nx?

An opinionated set of tooling written by some clever people to help with writing enterprise level Angular applications.

It involves working within a monorepo, or ‘workspace’ as they refer to it, which has multiple apps in it, and extracting functionality out into reusable libraries to improve code reusability and decoupling.

You should seriously check it out.

Why should I care?

My team has a need to support development of a front end layer for a white label product, with multiple, localized, themed, customisable instances of the product to be sold to different clients.

A typical Angular feature architecture does not optimally support this requirement.

We need to minimise code duplication, yet retain the ability to support apps with different functionality and theming.

We took the decision to use a workflow whereby almost all code is extracted into reusable libs inside a monorepo.

Our apps then become nothing more than configuration: a place to assemble the libs and theming information needed to personalise and customise the application for a specific client.

This allows for massively faster development on a per-client basis, and any new functionality can be shared among all client apps, meaning we are continually improving our core product, while supporting multiple applications.

We are using Nrwl nx tooling to achieve this and so far it has been extremely nice to work with.

What is a monorepo and why use one?

There is some disagreement about the merits of monorepos, and many other people can explain the nuances better than me, so it is worth Googling if you are new to the concept.

However, on a basic level we took the decision to use a monorepo because:

  • It supports a CI/CD pipeline. All code has to go through the same set of automated tests before it is merged.
  • We can ensure code standards are clear and enforceable by putting all code through the same linting and formatting rules.
  • Code is easily shareable between applications.

What are libs?

Nrwl nx introduces the concept of libs. These are standalone, decoupled pieces of functionality for an Angular application. They can be thought of as the building blocks that we use to create our applications.

There are a number of different types of lib that we have defined, and put in separate directories, to make it easier for developers to know where to put a particular piece of code:

  • features – a standard Angular feature module. It may be composed of other features. It may have services, state management, routing, container/smart components and dumb/presentational components.

  • api – an Angular module with a service for calling a REST endpoint associated with a specific resource. For example a ‘cars’ api module would handle all http verbs associated with cars (GET, PUT etc.). These are deliberately granular and are imported by feature modules for making api calls.

  • components – any shared dumb components that don’t belong in a feature module. These should not interact with a sandbox (see ‘what is a sandbox?’), and should be purely presentational.

  • utils – any heavy lifting code that isn’t already in a feature level service. This is pure business logic and should not deal with rendering.

  • core – functionality that every application needs. For example top level error handling.

  • scss – sass utilities. For example mixins, variables etc.
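As an illustration of the api lib type, here is a hedged sketch of what a granular ‘cars’ api service might look like. The service and model names are hypothetical, and HttpClient is stubbed with a synchronous signature to keep the sketch self-contained (the real Angular HttpClient returns an Observable):

```typescript
// Stand-in for Angular's HttpClient, simplified to a synchronous call
// so the sketch runs on its own (assumption: real code returns Observables).
interface HttpClient {
  get<T>(url: string): T;
}

interface Car {
  id: string;
  model: string;
}

export class ApiCarsService {
  constructor(private http: HttpClient) {}

  // Each resource gets its own narrowly scoped service. Feature modules
  // import it when they need to call the endpoint, keeping the api
  // surface granular and easy to mock in tests.
  public getCars(): Car[] {
    return this.http.get<Car[]>('/api/cars');
  }
}
```

Because the service owns exactly one resource, swapping a fake http layer in for tests is trivial.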

What are apps?

Apps are just Angular applications. However they are very slim. They shouldn’t have any more in them than the following:

  • Top level lazy loaded routing.
  • Custom theming information (fonts, color palette, icons etc.).
  • Assets.
  • App specific environment variables.
  • E2E tests.

Everything else should be in a standalone lib somewhere.

Feature level architecture

Of the libs folders, the features one is probably the most complex.

As an example, a user-registration feature might look like below (NB spec files have been excluded for clarity but all files with logic in them will have an equivalent spec file):

        + +state
          - ...
        + containers
          - ...
        + components
          - ...
        + models
          - ...
        + services
          - ...
        - user-registration.module.ts
        - user-registration.sandbox.ts
        - index.ts

Here are the interesting bits in order:

  • +state folder is filled with ngrx redux stuff (actions, reducers, effects).

  • containers folder is filled with ‘container’ or ‘smart’ components. These are components that interact with the sandbox (don’t worry, sandbox will be explained below… ).

  • components folder is filled with ‘dumb’ or ‘presentational’ components. These simply have data flowed into and out of them. They don’t know about anything else, hence they are dumb.

  • models folder is filled with typescript interfaces describing any data types associated with the feature module.

  • services folder is filled with services. Any pure business logic associated with the feature should live in a service here.

  • user-registration.sandbox.ts is a sandbox.

What is a sandbox?

Good question.

In the most basic terms, it is an abstraction layer between smart container components and state.

It allows our smart container components to deal with a clean and descriptive interface, rather than talking directly to redux.

It exposes methods for updating state, and streams of data for flowing state out of the sandbox.

The thinking is that smart components don’t really care that their data is coming from an ngrx store, so perhaps they shouldn’t know.

The reason it is called a sandbox is that we stole the name and the concept from an article we found online.

We have tweaked the original concept slightly, in that we still make any http calls via effects, but other than that it is pretty similar.

The name ‘sandbox’ apparently is used because in a sandbox, you can only play with the toys you are allowed access to.

Here is an example to hopefully make the concept clearer:


import { Injectable } from '@angular/core';
import { select, Store } from '@ngrx/store';
import { Observable } from 'rxjs/Observable';

import { SecurityQuestion } from '@ten-platform-app/api/api-security-questions';

import { UserRegistrationState, getSecurityQuestions } from './+state';
import { LoadSecurityQuestionsAction } from './+state/security-questions.actions';

@Injectable()
export class UserRegistrationSandbox {
  // A publicly exposed stream of security questions that can be hooked into by a smart component.
  public securityQuestions: Observable<SecurityQuestion[]> = this.store.pipe(
    select(getSecurityQuestions),
  );

  // The sandbox has our feature level redux store passed into it.
  constructor(private store: Store<UserRegistrationState>) {}

  // Our smart component uses this method to request security questions,
  // and subscribes to the public stream above.
  public getSecurityQuestions(): void {
    this.store.dispatch(new LoadSecurityQuestionsAction());
  }
}

How do we structure our state management code?

We use the redux pattern to handle state in our libs:

We use the ngrx implementation of redux:

This is what the +state folder looks like for a typical module:

    - user.actions.ts
    - user.actions.spec.ts
    - user.effects.ts
    - user.effects.spec.ts
    - user.reducer.ts
    - user.reducer.spec.ts

There are three types of file: actions, effects and reducers.

In our project these files are generated using custom Angular schematics.

These schematics work the same way the Angular CLI does, and will generate a basic store that works. This cuts down on boilerplate code and helps us to standardise how we write our state management code.

Actions and reducers are very much like any other redux implementation, with the slight difference that everything is strongly typed, using TypeScript.
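As a hedged sketch of what that strong typing looks like (the action and state names here are illustrative, not our real feature code):

```typescript
// An enum gives every action a unique, readable type string.
export enum UserActionTypes {
  LOAD_USER = '[User] Load User',
}

// Actions are classes, so their payloads are type-checked at compile time.
export class LoadUserAction {
  public readonly type = UserActionTypes.LOAD_USER;
  constructor(public payload: { id: string }) {}
}

export type UserActions = LoadUserAction;

export interface UserState {
  loading: boolean;
}

export const initialUserState: UserState = {
  loading: false,
};

// The reducer is a pure function: given the current state and a typed
// action, it returns the next state without mutating the old one.
export function userReducer(
  state: UserState = initialUserState,
  action: UserActions,
): UserState {
  switch (action.type) {
    case UserActionTypes.LOAD_USER:
      return { ...state, loading: true };
    default:
      return state;
  }
}
```

The compiler now catches misspelled action types and malformed payloads, which plain-JavaScript redux cannot do.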


Effects are used to handle asynchronous code. For example, any http call will be done inside an effect.

Our effects have some specific bits of code in them that allow them to work with our sandbox layer and should be explained:

import { Injectable } from '@angular/core';
import { Effect, Actions } from '@ngrx/effects';
import { DataPersistence } from '@nrwl/nx';
import { of } from 'rxjs/observable/of';
import { map } from 'rxjs/operators';

import { ApiSecurityQuestionsService } from '@ten-platform-app/api/api-security-questions';

import { UserRegistrationFeatureStore } from './index';
import {
  SecurityQuestionsActionTypes,
  LoadSecurityQuestionsAction,
  LoadSecurityQuestionsSuccessAction,
  LoadSecurityQuestionsFailureAction,
} from './security-questions.actions';

@Injectable()
export class SecurityQuestionsEffects {
  @Effect()
  public loadSecurityQuestions = this.dataPersistence.fetch<
    LoadSecurityQuestionsAction
  >(SecurityQuestionsActionTypes.LOAD_SECURITY_QUESTIONS, {
    run: (
      action: LoadSecurityQuestionsAction,
      state: UserRegistrationFeatureStore,
    ) => {
      // Check state for existing security questions.
      // Only make a call to get them from the apiService if needed.
      if (state.userRegistration.securityQuestions.loaded) {
        return of(
          new LoadSecurityQuestionsSuccessAction({
            questions: state.userRegistration.securityQuestions.questions,
          }),
        );
      } else {
        // Delegate the api call to the relevant service.
        return this.apiService.getSecurityQuestions().pipe(
          map(questions => new LoadSecurityQuestionsSuccessAction({ questions })),
        );
      }
    },

    onError: (action: LoadSecurityQuestionsAction, error: any) => {
      return new LoadSecurityQuestionsFailureAction({ error });
    },
  });

  constructor(
    private actions: Actions,
    // This is a utility provided by Nrwl/nx.
    // It has convenience methods that eliminate race conditions
    // and ensure consistency when fetching data.
    private dataPersistence: DataPersistence<UserRegistrationFeatureStore>,
    // A granular api service is imported for actually making REST calls.
    // This follows the principle of single responsibility.
    private apiService: ApiSecurityQuestionsService,
  ) {}
}

Note that the effects class does not directly interact with the HttpClient. This is delegated to a granular api service, which handles the api call and any data transformation before returning a stream of appropriately typed data.

Effects work by watching a global stream of actions.

If an action of the right type is seen (in the example above a LOAD_SECURITY_QUESTIONS action), then some piece of asynchronous code is run.

Once that code has finished, a new stream is returned, with a new action.

So basically effects just map action streams to other action streams, and do a bit of work in the middle.

Redux alone is not able to smoothly handle asynchronous code, so these side effects are handled inside effects.

In addition, effects are where we handle optimistic and pessimistic updates and caching of data.

Again, Nrwl nx provides convenience methods for optimistic and pessimistic updates.


Architecting Angular applications in a way that allows multiple developers to work efficiently together with minimal friction is a tough job.

This is my team’s attempt to solve some of those problems. Hopefully it may help other teams too!

WTF are Angular Schematics

  • Hate writing lots of repetitive boilerplate code?

  • Work with a large team and want to standardise/document common architectural patterns?

  • Want to get new developers up to speed quickly on best practices within your project?


If so, code generators might be for you. Code generators allow you to create code templates and folder structures for common tasks. For example, in an Angular application a code generator might have templates for creating components and services.

These are tasks that have to be carried out many many times in the implementation of a feature, and can be susceptible to different developers implementing them in different ways, resulting in time lost debating the best way to do things and/or confusion when reading other developers’ work.

With the addition of a generator, that has been agreed upon by the whole dev team, best practices within the project can be documented in the template, and another annoying decision/source of contention at the code review phase of a feature can be avoided. Hooray!

Generators can save a lot of time, and massively improve cohesion within a team. Obviously, as with anything, they can also make your life harder if they are not used properly.

If you use Angular, you may have used the Angular CLI (stands for Command Line Interface), which allows you to write things like

ng generate component my-component

inside a terminal in your project, and very rapidly have a skeleton my-component component made for you, with code and file names in line with Angular best practices, complete with working tests.

Very cool stuff. If you write Angular code and you don’t use it currently, I would STRONGLY recommend giving it a try.

Even if you do use the Angular CLI, what you might not know is that you can now write your own generators, using the same engine that the Angular CLI is written in, namely Angular Schematics.

Angular Schematics are fairly new, so the documentation is sparse and the API is a bit up in the air, but they are already quite nice to work with, with convenience methods for common Angular specific tasks, like adding module imports etc.

I personally have used them at work to streamline my team’s development experience when generating the vast amount of set up code that ngrx state management requires, and they have already saved a bunch of time and effort.

The schematics engine was designed with Angular in mind, but there is no reason you can’t use it to write generators for any code you feel like.

If these sound like things you might be interested in, then check out my schematics code here.

WTF is tech debt

Tech debt, or technical debt, is pretty much exactly what it sounds like.

When building software, there is a constant trade off between implementing features quickly, and implementing them right.

Ideally, you would always build them right, and in the long term this will also be the quicker route.

When features are implemented with care they are tested and reliable, flexible and well architected. Future developers working on the code will probably not have a lot of trouble understanding the work, and will be able to build on it and modify it with ease.

However, there is an upfront time cost to doing things the right way, and sometimes it can be worth taking on short term ‘tech debt’ in order to get a feature out in time for a specific deadline.

This means that the implementation may cut some corners, and be designed in a less flexible way. Testing coverage may be lower, and the architecture will be less thoroughly planned out.

This ‘debt’ in a healthy system will be paid off as soon as possible after the immediate deadline.

If it is not paid off, and more technical debt is taken on, then the product can begin to suffer.

Changes become harder as poorly architected code, often tightly coupled to other areas of the system, introduces bugs. Fixing these bugs can cause other bugs, and little by little it becomes impossible to add new features, or to amend existing features to match changing business requirements. The development team finds that most of their time is taken up fire fighting bugs, and the product growth grinds to a halt.

This is a bad situation.

The solution is to only take on tech debt when absolutely necessary, and to pay it off as soon as possible, via constant diligent refactoring of the code base.

An analogy I like is that coding is in many ways like gardening. The job of a developer is to grow and modify various features, such that a product can adapt to meet the changing needs of its users.

Like gardening, there are many factors that affect the ecosystem that the features are growing in, and they must be constantly monitored in order for the product to grow in a healthy and sustainable way.

Technical debt can be thought of as the soil quality. If too many wacky things are done to the soil, it will lose its nutrients, and be unable to support the growth that is needed. If this goes on for too long, it will be unable to support any growth, and the garden will die. Conversely, if the soil is well cared for and ordered, the garden can flourish, and support a far greater amount of growth and change.

Being a good developer is about being a professional, and often this will mean saying no to taking on technical debt. Normally you will be under pressure from managers and clients to turn things around to tight deadlines. Agreeing to cut corners in order to meet these deadlines should be done very sparingly. Another good analogy, courtesy of Uncle Bob, is the case of doctors and hand washing.

If a patient or a manager told a surgeon to skip washing their hands, because it was taking too long, and using too much water etc., the doctor would refuse. This is because the doctor is a professional, knows their craft, and knows the cost of not washing their hands.

Likewise, if a developer is a professional, and knows the true cost of incurring tech debt, then they have an obligation to refuse to take it on unnecessarily, even if that means saying no to a manager, or a client.

WTF is a class?

A class is a construct found in Object Oriented Programming. It is a way of grouping data, and the methods for interacting with that data in one place. It also exposes a public interface that can be used by other code to interact with the class.

Classes have to be instantiated. That is to say, at some point if you’ve defined a lovely class MyLovelyClass, in order to use it you will have to make a call to new MyLovelyClass().

To repeat what every guide to classes I’ve ever read says: Classes can be thought of as blueprints for how to make something.

It will often but not always model a real world object. To make this clearer let’s take a look at an example, using TypeScript:

class Bag {

    private contents: string[];

    constructor() {
        this.contents = [];
    }
}


Here we’ve modelled a bag, which can have things in it, in the form of an array of strings. It’s a bit useless at the moment as we can’t actually put anything in it, because the contents array is private, so can’t be accessed from outside the class. To clarify:

const bag = new Bag();
bag.contents.push('apple');
This will not work, as bag.contents is not available to us. One solution would be to set the contents array to public:

public contents: string[];

But this would be a bad thing for a number of reasons. Firstly, if at any point we decide we want contents to be a different data type. For example lets say we decide to store our items in a dictionary instead:

public contents: { [key: string]: string; };

Now, if we have existing code that does something like below:

const bag = new Bag();
bag.contents.push('apple');

It will need to be changed. This example is trivial, but in a big project, this could take a long time to refactor.

Alternatively, if we did something like below, we can change our implementation of storing things in the bag, and any existing code will still work:

class Bag {

    private contents: { [key: string]: string; };

    constructor() {
        this.contents = {};
    }

    public addItem(item: string) {
        this.contents[item] = item;
    }
}

const bag = new Bag();
bag.addItem('apple');


This is a pretty contrived example, but things similar to this happen all the time when coding.

So there is a very simple example of a class, which hopefully gives a basic understanding of what a class is, and a few of the benefits it offers.

WTF is Angular TestBed

Angular ships with a unit testing API called TestBed, which is specifically designed for writing unit tests for the Angular framework. It allows for simple testing of Angular components, that is to say a TypeScript class paired with some HTML.

For a brief intro to unit testing concepts see here

If you use the Angular CLI for generating your app and your components (which you should), then it will by default set up a <component-name>.component.spec.ts file, with some boilerplate setup code for TestBed testing already populated.

Let’s have a look at how we might write a simple login component that is properly tested, using Test Driven Development, the Angular CLI and TestBed.

Login Component

Before writing any code, we should think about what the component is supposed to do.

This step is important, because if the behaviour of the component is not properly defined, we cannot write tests for it, and we can’t properly define the interface. This behaviour may need to change later on, but for now we need a design contract to code against. For a basic login component, I’m going to say I want:

  • login form
  • input box of type text for username
  • input box of type password for password
  • button which emits a login event with a payload of username and password

This is enough for me to write a suite of tests, and to develop my component against those criteria.
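Before touching the CLI, here is a hedged sketch of the class those criteria point towards. EventEmitter is stubbed minimally so the snippet is self-contained; the real component would use EventEmitter, @Component and @Output from @angular/core, and the field names are just my guesses at the interface:

```typescript
// Minimal stand-in for Angular's EventEmitter (assumption: the real
// component imports it from '@angular/core' and marks login with @Output()).
class EventEmitter<T> {
  private listeners: Array<(value: T) => void> = [];
  public subscribe(fn: (value: T) => void): void {
    this.listeners.push(fn);
  }
  public emit(value: T): void {
    this.listeners.forEach(fn => fn(value));
  }
}

interface LoginPayload {
  username: string;
  password: string;
}

export class LoginComponent {
  // Bound to the username and password input boxes via ngModel.
  public username = '';
  public password = '';

  // The login event the button emits, carrying both credentials.
  public login = new EventEmitter<LoginPayload>();

  // Bound to the button's (click) handler in the template.
  public onLoginClick(): void {
    this.login.emit({ username: this.username, password: this.password });
  }
}
```

Each bullet above maps to one test: the form and inputs are template assertions, and the button maps to a test that subscribes to login and checks the payload.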

I’m going to set up a quick Angular project using the Angular CLI in order to demonstrate all of the default test tooling that comes with Angular.

I will create one called ‘test-app’

ng new test-app

Once this command has run you will have a default hello world application. To view it, move into the directory of the new application and run

ng serve

If you open your browser and type localhost:4200 into the url box you should now see the default Angular application.

For now we will just use the app.component component as a sandbox for testing our new login component. For the first step, delete all the default code from app.component.html, as this is where we will be putting our new component eventually.

Also delete app.component.spec.ts as these tests will now fail because we have changed the underlying component.

Now we are ready to create our login component:

ng generate component login

This will create a new Login component with a selector called app-login, adding it as a declaration in app.module.ts. These files will be created:

login.component.html
login.component.css
login.component.ts
login.component.spec.ts

This component can now be used in the app.component.html template file, so let’s add it now. app.component.html should now have only the following code in it:

<app-login></app-login>

The app at localhost:4200 in your browser should now say something similar to below:

login works!

TestBed setup

Before we start writing tests, let’s take a look at the generated login.component.spec.ts file to get a better understanding of how TestBed is set up:

import { async, ComponentFixture, TestBed } from '@angular/core/testing';

import { LoginComponent } from './login.component';

describe('LoginComponent', () => {
  let component: LoginComponent;
  let fixture: ComponentFixture<LoginComponent>;

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [ LoginComponent ]
    })
    .compileComponents();
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(LoginComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});

This sets up the TestBed environment, and one basic test ‘it should create’. Let’s run the tests now and see what happens:

ng test

A chrome window should open, showing the test output. If everything is working, it will look something like below:

So now we have a test that ensures our component can be instantiated correctly. At this point it is worth quickly digressing to explain what TestBed.configureTestingModule does.

It basically allows you to set up a miniature Angular application, with only the components, providers, modules etc. that are needed for the specific piece of code you are testing. In our example it currently just has a declarations array, with our LoginComponent in it.

Specifically, the block below sets up the testing module, then compiles all the components. Again, in our case this is just the LoginComponent.

beforeEach(async(() => {
  TestBed.configureTestingModule({
    declarations: [ LoginComponent ]
  })
  .compileComponents();
}));

The compileComponents call is asynchronous, so to make the tests easier to understand and avoid dealing with promises explicitly, we use the Angular testing module’s async helper to ensure that all of the asynchronous code has finished before the next block of code is run. Essentially this means the asynchronous code can be read as if it were synchronous.
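The idea can be sketched in plain TypeScript with no Angular at all. Here compileComponents and setupThenRun are stand-in names invented for this sketch, not real TestBed APIs:

```typescript
// Plain-TypeScript sketch of what async() + compileComponents() buy us:
// run asynchronous setup, and only run the test body once it has finished.
let compiled = false;

// stands in for TestBed's asynchronous compileComponents()
function compileComponents(): Promise<void> {
  return Promise.resolve().then(() => {
    compiled = true;
  });
}

// stands in for wrapping a beforeEach in async(): wait for setup, then continue
async function setupThenRun(body: () => void): Promise<void> {
  await compileComponents();
  body(); // by the time this runs, setup is guaranteed to be complete
}
```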

Once this is done we get references to the ComponentFixture of type LoginComponent and the LoginComponent itself:

  let component: LoginComponent;
  let fixture: ComponentFixture<LoginComponent>;

  beforeEach(() => {
    fixture = TestBed.createComponent(LoginComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

When we start writing tests we will see how these two variables can be used to interact with our component and to assert that it is behaving correctly.

Jasmine provides the syntax for writing our tests in a behaviour driven style. A detailed analysis of Jasmine is beyond the scope of this post. For more information read the docs. For now all you need to know is that Jasmine is what we are using when we call the beforeEach, describe, it and expect functions.

Time to add some tests!

Now we can begin adding tests to cover our design contract, i.e. the things that our component will do when interacted with in specific ways.

I normally just set up a bunch of empty it blocks as below with the things I’m expecting the component to do. Notice that this reflects our original design contract:

it('should have a form named loginForm', () => {
});

it('should have an input box of type text with the name username', () => {
});

it('should have an input box of type password with the name password', () => {
});

it('should have a button with the text login that emits an event with a payload with username and password, based on form inputs', () => {
});

At this stage we are just looking to organise our thoughts. Once we are relatively happy that we know what we want to test, we can start adding the code to actually test the component. These new tests will fail, as we haven’t written any code yet. Then we can write code to make them pass and in theory our component will work.

Let’s start with the first test, ‘should have a form named loginForm’:

it('should have a form named loginForm', () => {
  // requires: import { By } from '@angular/platform-browser';
  const loginForm = fixture.debugElement.query(By.css('form[name=loginForm]'));
  expect(loginForm).toBeTruthy();
});

Here, the fixture variable allows us to query our component and check its DOM for whether certain elements are present. In this case we are checking that a form with the attribute name=loginForm is present on the component. Obviously this is not yet true, so if your tests are still running, they should fail now:

For a more detailed look at how to query your components for specific DOM elements, it is best to look at the Angular documentation on testing.

OK, so now we have a failing test; let’s fix it. Adding the following code to your login.component.html ought to do it:

<form name="loginForm"></form>

Now let’s fill in the rest of login.component.spec.ts:

it('should create', () => {
  expect(component).toBeTruthy();
});

it('should have a form named loginForm', () => {
  expect(fixture.debugElement.query(By.css('form[name=loginForm]'))).toBeTruthy();
});

it('should have an input box of type text with the name username', () => {
  expect(fixture.debugElement.query(By.css('input[type=text][name=username]'))).toBeTruthy();
});

it('should have an input box of type password with the name password', () => {
  expect(fixture.debugElement.query(By.css('input[type=password][name=password]'))).toBeTruthy();
});

describe('loginButton', () => {
  it('should have a button with id loginButton and text "Login"', () => {
    const loginButton = fixture.debugElement.query(By.css('button[id=loginButton]'));
    expect(loginButton.nativeElement.textContent).toBe('Login');
  });
});

If you run your tests again now you should have some failing specs. To fix them, add the following code to your login.component.html file:

<form name="loginForm">
  <input type="text" name="username">
  <input type="password" name="password">
  <button id="loginButton">Login</button>
</form>

Run the tests again:

So now we have a tested component, designed to a contract, which has a spec file documenting what HTML elements are present on it. Pretty cool! Now if anyone accidentally breaks that contract, a big red message will tell them off 🙂

Notice that in the Karma chrome test output, our component is being rendered to the screen. This is important to understand. We are actually testing our component with a browser to ensure that it is rendered and behaves correctly. This is very powerful!

You might have noticed that our test specification has changed slightly from the first round of tests. This is fine and in fact a good thing. As you start to develop you may find that your assumptions were wrong, and you should adjust your tests and assumptions as you go to reflect any new information or changed design decisions.

Clearly at the moment our component is a bit limited in functionality. Let’s add a test to check that when the button is clicked, it emits an event with username and password based on the form field values:

describe('loginButton', () => {

  let loginButton: DebugElement;

  beforeEach(() => {
    loginButton = fixture.debugElement.query(By.css('button[id=loginButton][type="submit"]'));
  });

  it('should have a button of type submit with id loginButton and text "Login"', () => {
    expect(loginButton.nativeElement.textContent).toBe('Login');
  });

  it('should have a button that emits an event with a payload of username and password, based on form inputs', (done) => {

    // set up test data for use in our test
    const testUserDetails = {
      username: 'user01',
      password: 'superSweetPassword01!'
    };

    // subscribe to the emitted login event and ensure that it fires and has correct data
    component.login.subscribe((data) => {
      expect(data).toEqual(testUserDetails);
      done();
    });

    // find the input fields and the login button in the DOM and interact with them
    const usernameInput = fixture.debugElement.query(By.css('input[name=username]')).nativeElement;
    const passwordInput = fixture.debugElement.query(By.css('input[name=password]')).nativeElement;

    usernameInput.value = testUserDetails.username;
    usernameInput.dispatchEvent(new Event('input'));
    passwordInput.value = testUserDetails.password;
    passwordInput.dispatchEvent(new Event('input'));

    // trigger change detection so our code actually updates
    fixture.detectChanges();
    loginButton.nativeElement.click();
  });
});

I will be honest, this code is a bit more complex than what we have seen so far… let’s break it down:

We need to test that our component emits a login event with the correct data when the login button is clicked. In Angular we achieve this with an EventEmitter, which exposes a stream from the component that can be subscribed to. As it is a stream it is asynchronous, so here we make use of Jasmine’s mechanism for testing asynchronous code: the done callback. By default, if done() is not called within 5000 ms then the test will fail.
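The done pattern is not Angular-specific. Here is a minimal plain-TypeScript sketch of the same idea, where SimpleEmitter stands in for Angular’s EventEmitter and runAsyncTest stands in for Jasmine’s handling of done (both names are hypothetical):

```typescript
// The test only passes once the asynchronous event has actually fired.
type Listener<T> = (value: T) => void;

class SimpleEmitter<T> {
  private listeners: Listener<T>[] = [];
  subscribe(fn: Listener<T>): void {
    this.listeners.push(fn);
  }
  emit(value: T): void {
    this.listeners.forEach(fn => fn(value));
  }
}

// Rejects with a timeout error if `done` is never called,
// analogous to Jasmine's 5000 ms default.
function runAsyncTest(test: (done: () => void) => void, timeoutMs = 5000): Promise<void> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('timed out')), timeoutMs);
    test(() => {
      clearTimeout(timer);
      resolve();
    });
  });
}
```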

it('should have a button that emits an event with a payload of username and password, based on form inputs', (done) => {

    const testUserDetails = {
      username: 'user01',
      password: 'superSweetPassword01!'
    };

    component.login.subscribe((data) => {
      expect(data).toEqual(testUserDetails);
      done();
    });
    // ...
The remainder of the spec file is code dedicated to interacting with the DOM elements by sending input and click events. After making these changes we have to call fixture.detectChanges() in order to trigger Angular’s change detection, and actually update our test environment. Essentially what we are testing here is that after populating the form inputs via the DOM, and clicking the login button, our component emits an event. Crucially, we want to ensure that wherever possible we interact with the DOM directly in our tests, as this is the closest simulation to how a user would interact with our component.

Again, our tests will fail now, as we haven’t written an event emitter yet, or tied it to the form inputs. To fix this we add the following code to login.component.ts so it now looks like below:

import { Component, EventEmitter, OnInit, Output } from '@angular/core';

export interface LoginDetails {
  username?: string;
  password?: string;
}

@Component({
  selector: 'app-login',
  templateUrl: './login.component.html',
  styleUrls: ['./login.component.css']
})
export class LoginComponent implements OnInit {

  @Output() login = new EventEmitter<LoginDetails>();

  userDetails: LoginDetails = {};

  constructor() { }

  ngOnInit() {
  }

  onLogin() {
    this.login.emit(this.userDetails);
  }
}
And these changes to our login.component.html:

<form #loginForm name="loginForm" (ngSubmit)="onLogin()">
  <input type="text" name="username" [(ngModel)]="userDetails.username">
  <input type="password" name="password" [(ngModel)]="userDetails.password">
  <button type="submit" id="loginButton">Login</button>
</form>

Notice that we are now using ngSubmit and ngModel. These require the FormsModule to be imported into the app.module.ts:

@NgModule({
  declarations: [AppComponent, LoginComponent],
  imports: [BrowserModule, FormsModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Because TestBed is just a way of setting up a mini app, we will also have to add this FormsModule import to our login.component.spec.ts:

beforeEach(async(() => {
  TestBed.configureTestingModule({
    imports: [FormsModule],
    declarations: [LoginComponent]
  })
  .compileComponents();
}));

In order to properly test NgModel-based code via native inputs, we also have to add the following block to our spec file, because NgModel is updated asynchronously:

beforeEach(async(() => {
  fixture.detectChanges();
  fixture.whenStable();
}));

With those changes, the tests should now pass:

NB While I have written the tests first for this post, quite often you will end up writing bits of functionality first and then testing them. As with most things, TDD is a great tool when not used dogmatically. If you are messing around with different implementation options, it can make sense to hack about a bit first, then add the tests. Basically, don’t let the perfect be the enemy of the good 🙂 Try writing the tests first and then implementing the feature, but don’t be too rigid if you find you need to try some other stuff first. The important thing is that eventually your code is properly covered by meaningful unit tests which clarify how it is to be used and what it does, and protect it from regression bugs.

For the full source code look here

WTF is unit testing?

What are unit tests?

Unit tests are designed to test the smallest sensible ‘units’ of code in a software system in isolation. This isolation is achieved by using mock versions of any dependencies, which can be ‘spied’ on to ensure they are called correctly. This is one of the reasons why dependency injection is so good, as it allows you to pass in pretend versions of other code.
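A hand-rolled plain-TypeScript sketch of mocking and spying, with no test framework. AuthService, LoginHandler and MockAuthService are hypothetical names invented for this example:

```typescript
interface AuthService {
  login(username: string, password: string): boolean;
}

// The unit under test: it depends on AuthService, injected via the constructor.
class LoginHandler {
  constructor(private auth: AuthService) {}
  attempt(username: string, password: string): string {
    return this.auth.login(username, password) ? 'ok' : 'denied';
  }
}

// The mock records every call so a test can 'spy' on how it was used,
// and lets the test control what the dependency returns.
class MockAuthService implements AuthService {
  calls: Array<[string, string]> = [];
  result = true;
  login(username: string, password: string): boolean {
    this.calls.push([username, password]);
    return this.result;
  }
}
```

Frameworks like Jasmine automate this with spyOn and jasmine.createSpy, but the principle is exactly this.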

Essentially, unit tests allow a developer to ensure that their unit of code behaves in a predictable way, including any interactions with dependencies, given specific inputs.

They are most effective when they are fast to run, and can be run automatically during development or before a build.

They are not integration tests, which test that multiple units of code work together correctly.

Well written unit tests also have the benefit of providing documentation for the interface the unit of code exposes to the rest of the system.

This can be very powerful, as any developer who has to work on something you have tested can refer to the tests and immediately get a clear definition of what the unit of code does, and how they should use it.

Why write unit tests?

Writing unit tests the right way will make your code better!

Writing unit tests for the sake of it, or just to hit coverage numbers, may make your code better, but won’t necessarily. It is important to understand why unit testing is useful and what benefits it can offer before you get started.

Testing is a tool, just like anything else. It will not magically fix everything if it is not done properly.

Here are some of the potential benefits of maintaining a comprehensive set of unit tests:

  • Helps to document your code for other developers and yourself.
  • Flushes out bugs.
  • Improves design decisions.
  • Acts as a buffer against low-level regression bugs.

Read these for more information on the benefits/reasons for unit testing:

How to unit test:

The pattern for writing tests will basically be the same regardless of what the unit of code being tested is:

1) Create an instance of the unit of code that you wish to test.
2) Mock and spy on any dependencies that your unit of code interacts with.
3) Interact with the unit under test via its public interface.
4) Ensure that the unit under test behaves as expected after it is interacted with, including calling any mocked dependent code correctly.
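The four steps can be sketched in plain TypeScript. Counter and Notifier are hypothetical names invented for this example:

```typescript
interface Notifier {
  notify(msg: string): void;
}

class Counter {
  private count = 0;
  constructor(private notifier: Notifier) {}
  increment(): number {
    this.count += 1;
    if (this.count === 3) {
      this.notifier.notify('reached 3');
    }
    return this.count;
  }
}

// 1) + 2) create the unit under test, handing it a mocked, spied-on dependency
const messages: string[] = [];
const mockNotifier: Notifier = { notify: (msg) => { messages.push(msg); } };
const counter = new Counter(mockNotifier);

// 3) interact with it only via its public interface
counter.increment();
counter.increment();
counter.increment();

// 4) check the behaviour, including how the mocked dependency was called
console.assert(messages.length === 1 && messages[0] === 'reached 3');
```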

If for any reason any of these steps is difficult to do, then you may have to rethink how you have written your code. This is why writing tests can help with design decisions. As a general rule if something is difficult to test, it may not be sufficiently modular and/or it may be too tightly coupled to other areas of the system.

For example, if you directly import a dependency inside a class, rather than passing it in via dependency injection, your tests will be forced to exercise the real dependency. This may be what you want, but it may not be. Unit testing will force you to think about things like this.
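A minimal sketch of the difference, where Logger is a hypothetical dependency:

```typescript
interface Logger {
  log(msg: string): void;
}

const consoleLogger: Logger = { log: (msg) => console.log(msg) };

// Tightly coupled: the dependency is baked in, so a test can neither
// replace it nor observe how it was called.
class TightlyCoupled {
  doSomething(): void {
    consoleLogger.log('did something');
  }
}

// Injected: a test can hand in a recording fake instead of the real logger.
class LooselyCoupled {
  constructor(private logger: Logger) {}
  doSomething(): void {
    this.logger.log('did something');
  }
}
```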