How a recovering Bootstrapper learned to (sort of) love CSS

TL;DR: CSS Grid and Flexbox are the nuts.

I have been a frontend developer for around 4 years.

I fell into it by accident and was initially a bit disgruntled that I wasn’t doing ‘proper’ software development (you know, databases, servers and all that).

That soon changed after I realised how batshit complex and interesting the frontend layer of modern applications has become.

But until recently I still had a bit of a mental block about/pure hatred of CSS.

I told myself I was better suited to the lovely logical land of frontend data architectures (one way data flow, redux), unit testing, integration testing, TypeScript, RxJS, e2e testing, frontend ‘devops’ (messing around with build servers… definitely not an expert) and so on. Basically anything and everything that my job involved other than CSS.

I was on board with CSS related to styling and colouring/shaping/animating things, but the layout stuff felt fundamentally broken and hacky, and really pissed me off.

As a result I greedily grabbed any weird abstraction on top of CSS, ideally one that let me write JavaScript, and used that instead (styled-components in React, Bootstrap for everything else).

And I was pretty happy.

Every now and again I would be unable to do something, or something would break and I wouldn’t understand why, but I told myself it was just that CSS was stupid and broken and I could leave that pixel pushing stuff to someone else in the team.

Obviously I was wrong, and fortunately enough of my colleagues told me I was wrong that I finally ripped off the band-aid and started using CSS proper (well, Sass, but that’s another story…).

While I do think that Bootstrap in particular filled a very real need for a while (centering divs for one), it turned out that CSS has quietly moved on and seen some improvements which make it far more suitable for building modern UIs.

Enter CSS Grid and Flexbox

These two additions to CSS basically solve all of the things related to layout that previously drove me mad.

You can now write fluid layouts that flow about the page in a sensible way at different sizes, based on a simple set of rules.

You can also, using just CSS, totally change the order in which things are rendered on the screen and the layout, without changing the order of your semantic markup (and also without adding a gazillion divs).
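As a tiny sketch of what that reordering looks like (the class names and breakpoint here are invented for illustration), `grid-template-areas` lets the same markup render in a completely different arrangement per screen size:

```css
/* Hypothetical sketch: same markup, different visual order per screen size */
.page {
  display: grid;
  grid-template-areas:
    "header"
    "content"
    "nav";
}

@media (min-width: 600px) {
  .page {
    grid-template-columns: 1fr 3fr;
    grid-template-areas:
      "header header"
      "nav    content";
  }
}

.page > header { grid-area: header; }
.page > nav    { grid-area: nav; }
.page > main   { grid-area: content; }
```

The nav comes after the main content in the markup, but on wider screens it renders to the left of it, all without touching the HTML.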

So now, finally, your CSS is purely responsible for styling your content (including laying it out on the page) and your HTML is purely responsible for the content itself.

Miraculous!

All I will say is that, as with most new things, there is a bunch of weird terminology you have to learn first. It is slightly painful but worth doing. I’m not going to go into any more detail on these two features, as honestly a bunch of other people have done a far better job:

Grid resources

https://cssgridgarden.com/

https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout

https://css-tricks.com/snippets/css/complete-guide-grid/

Flexbox resources

https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout/Flexbox

https://css-tricks.com/snippets/css/a-guide-to-flexbox/

https://flexboxfroggy.com/

Enough of your gabbing, what does it look like!?!

To play around more heavily with grid in particular, I put together a crappy little project to try and replicate this slice of 90s joy https://www.spacejam.com/archive/spacejam/movie/jam.htm for the modern mobile-first age.

I’ve inventively called it spice jam to avoid being sued https://github.com/robt1019/space-jam-2.0 and used a bunch of free-for-reuse SVGs from the internet to play around with the layout.

I’ve basically treated it as a proof of concept for myself, and it is far from functional, but for anyone interested in seeing how powerful grid can be in entirely separating your markup from the way it is rendered, I think it could be a helpful example/playground to mess around in.

WTF is a splash screen? (Angular edition)

A splash screen is something that renders before the rest of an application is ready, and blocks the rest of the UI.

It saves the user from sitting and staring at a semi-functional product, and from watching the UI judder as more content is loaded.

OK I know what a splash screen is, how do I do one in Angular!?!

Add an element that you want to be your ‘splash’ to your index.html. Below we add a div with an image inside it:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>World's angriest cats</title>
    <base href="/" />

    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="icon" type="image/x-icon" href="favicon.ico" />
    <link rel="stylesheet" type="text/css" href="assets/styles/splash.css" />
  </head>
  <body class="mat-typography">
    <app-root></app-root>
    <div class="splash">
      <img
        class="splash__image"
        src="my-angry-cat.jpg"
        alt="Angry cat splash page"
      />
    </div>
  </body>
</html>

And add some CSS similar to this to make sure it fills the whole screen and bleeds in/out etc.

.splash {
  opacity: 1;
  visibility: visible;
  transition: opacity 1.4s ease-in-out, visibility 1.4s;
  position: fixed;
  height: 100%;
  width: 100%;
  top: 0;
  left: 0;
  background: red;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
}
/* This little piece of trickery hides the splash image once the app-root is populated (i.e. the app is ready) */
app-root:not(:empty) + .splash {
  opacity: 0;
  visibility: hidden;
}

.splash__image {
  width: 100%;
  max-width: 200px;
}

OK cool, but what if I want to wait for a specific API call to finish (e.g. CMS) before rendering my site?

No worries. Angular has you covered here too 🙂

Assuming you have a provider called ContentProvider, which performs some asynchronous task to fetch content for your site and then exposes a stream telling you when the data is ready, you can tell Angular to wait until this is done using the slightly confusingly named APP_INITIALIZER injection token.

This, as far as my puny brain can approximate, adds the function returned from your useFactory method to an array of methods which are called before the app initialises and starts rendering.

If the function you return returns a promise, as below, then the app will not initialise until that promise resolves.

    {
      provide: APP_INITIALIZER,
      useFactory: (content: ContentProvider) => {
        return () =>
          new Promise<void>(resolve => {
            content.loadContent();
            content.loaded.subscribe(loaded => {
              if (loaded) {
                resolve();
              }
            });
          });
      },
      multi: true,
      deps: [ContentProvider],
    },

WTF are feature flags? (Angular example included)

Feature flags are a fairly rudimentary (but I think powerful) alternative to long lived feature branches, and the associated merge hell that goes with them.

You basically merge unfinished or experimental features to your master branch, and even deploy them, but hide the functionality behind ‘flags’, only allowing them to be viewed if certain switches are activated. The exact mechanism behind the switches doesn’t really matter, and I will detail some potential ways of implementing these switches in the context of an Angular application below.

Feature flags do not mean merging broken code or sloppy code.

The code should still be tested and peer reviewed, and your automated suite of unit, integration and e2e tests, linting and other build scripts that you have written to protect your master branch should still run (you do have those things… right?).

The feature is merely unfinished.

Perhaps it is missing some UI polish, or requires further QA before being released into the wild proper, or perhaps you want to test it out on a smaller subset of your user base to gather analytics feedback on the effectiveness of the feature.

There are many reasons why feature flags can make sense, but fundamentally they are a very simple concept.

They just give you the ability to ‘switch’ on or off different parts of your application, either at run-time or build time.

For me, the main benefit is that your feature is regularly integrated back into master, without having to wait for all of the functionality to be complete. This ensures that at all times you can be sure that your feature plays nicely with the rest of the code base, and with any other features that other people are working on (also hidden behind feature flags).

Also they are a great tool if you want to demo experimental features to stakeholders early on in the development process, in order to gather feedback.

I’ve managed to use them successfully before on a site serving thousands of daily visitors, without causing any disruption to the deployed product, and I was able to demo the feature in the live site.

Where to store feature state?

To use feature flags, you need some state somewhere that your app is aware of, that tells it whether certain features are enabled or not.

For example, say we have an experimental kitten generator feature that we only want to show sometimes, we might have a piece of data somewhere called kittenGeneratorEnabled, which our app checks to see if it should allow a user access to the feature.
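As a minimal sketch of that check (the `flags` object here is a stand-in for wherever your flag state really lives):

```typescript
// Minimal sketch of a feature flag check. The `flags` object is a stand-in
// for wherever the flag state actually lives (config file, server, storage).
const flags: Record<string, boolean> = {
  kittenGeneratorEnabled: true,
};

function canAccess(feature: string): boolean {
  // Unknown features are treated as disabled by default
  return flags[feature] === true;
}
```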

There are many places this data could live, and your particular use case will dictate the most sensible course of action:

  • Build configuration files (in Angular’s case we would probably use environment files for this). We can set the switch to on/off at build time and our deployed app will reflect this. The downside to this approach is that it is not super flexible, as you can’t switch the feature on/off at run-time. If you have a slick and speedy build pipeline, you can still turn off features in subsequent builds quickly and easily if there are any problems with them. It is also a pretty lightweight solution; it just costs you a tiny bit of configuration. You can also have builds for different contexts: for example, you could disable all experimental features in production and turn them on for QA or something similar (not saying this is a good idea, just an example of the level of control this can give you).

  • A server. We can delegate the responsibility for setting feature flags to a backend service. If you have more complex requirements, for example only showing features to a specific subset of users via traffic splitting, this might be worth looking into, as the complex logic/management portal for this sort of granular control can be moved out of the frontend. In our kitten generator case we would query an endpoint like GET /kittenGeneratorEnabled to figure out whether to show the user the feature. If the features are potentially insecure and you want to make sure they are only viewed by people you trust, rather than nasty malicious types, then you could make users authenticate and control access to the features via the backend too. The main downside of this is that it is pretty heavyweight, and requires more communication between frontend and backend developers.

  • Browser storage. Cookies, local storage, session storage etc. can be used to keep the state and can then very easily be queried in your frontend code to see whether to allow access to features or not. In the case of the server example above, you could have the endpoint set and unset cookies, then have your app read these cookies. These browser storage items can also be set and unset via the browser address bar with query parameters. This has the advantage that you can send people links with certain flags enabled/disabled for demos/testing, rather than having them manually mess around with browser storage. These links will also work equally well on mobile devices, where setting custom cookies etc. is much harder.

Obviously you can, and probably should, use a combination of the above approaches to meet your needs.

How to control access to features

Most features in a modern web application will be routed. In our case there would probably be a /kitten-generator route in the frontend.

If this is the case then the simplest way to control access to the feature in Angular is to use a conditional route guard.

Other features/functionality can be controlled with simple if/else statements (as in if(featureEnabled){ do a thing }).

Basically this stuff shouldn’t be complex in a well architected application. If your application consists of one app.component.ts with thousands of lines of code in it, then you will have a more difficult time. In that case feature flags are probably the least of your problems!

Generally though, you will be able to switch a feature off by simply blocking the route with a route guard.

What would a route guard for something like that look like?


Let’s say we have decided to use the browser’s local storage to set individual feature flag state (in our case the kitten-generator-enabled flag).

We have also decided to add an additional safety valve in our Angular environment file called featureFlags. When set to true, it allows access to any experimental feature if the correct local storage value is set; when set to false, it blocks access to all experimental features, even if the user has the correct local storage value set. This means that if we are being super paranoid we can have builds for production which do not expose any experimental or half finished features.

import { Injectable } from '@angular/core';
import {
  ActivatedRouteSnapshot,
  CanActivate,
  Router,
  RouterStateSnapshot,
} from '@angular/router';
import { environment } from '../../environments/environment';

@Injectable({
  providedIn: 'root',
})
export class FeatureRouteGuardService implements CanActivate {
  constructor(public router: Router) {}
  canActivate(
    route: ActivatedRouteSnapshot,
    state: RouterStateSnapshot,
  ): boolean {
    if (!environment.featureFlags) {
      this.router.navigateByUrl(route.data.featureFlagDisabledRoute || '/');
      return false;
    }
    if (localStorage.getItem(route.data.featureFlag)) {
      return true;
    }
    this.router.navigateByUrl(route.data.featureFlagDisabledRoute || '/');
    return false;
  }
}

And we would use it like this:

    RouterModule.forRoot([
      {
        path: 'kitten-generator',
        component: KittenGeneratorComponent,
        canActivate: [FeatureRouteGuardService],
        data: {
          featureFlag: 'kitten-generator-enabled',
          featureFlagDisabledRoute: '/',
        },
      },
    ]),

Now, if a user has a kitten-generator-enabled:true value in their browser’s local storage, and they are accessing a build with the featureFlags value set to true in the environment file, then they will be able to access this route, otherwise, they will be redirected to /.

Going one step further, if we wanted to implement the logic to set or unset features via the browser url, we might use something like this inside the app’s app.component.ts:

  /**
   * parse the navigation bar query parameters, and use them to set
   * local storage values to drive feature flag activation
   * @param queryParams
   */
  private processFeatureFlagQueryParams(queryParams: Params) {
    const featuresToTurnOn =
      (queryParams['features-on'] && queryParams['features-on'].split(',')) ||
      [];
    const featuresToTurnOff =
      (queryParams['features-off'] && queryParams['features-off'].split(',')) ||
      [];
    featuresToTurnOn.forEach(item => {
      localStorage.setItem(item, 'true');
    });
    featuresToTurnOff.forEach(item => {
      if (localStorage.getItem(item)) {
        localStorage.removeItem(item);
      }
    });
  }

  ngOnInit() {
    if (this.environment.featureFlags) {
      this.router.events.subscribe(event => {
        if (event instanceof ActivationStart) {
          this.processFeatureFlagQueryParams(event.snapshot.queryParams);
        }
      });
    }
  }

This then would mean that you could give someone a link to your site like this /kitten-generator?features-on=kitten-generator,another-experimental-feature&features-off=yet-another-experimental-feature.

This would set local storage values to true for kitten-generator and another-experimental-feature, and would delete any existing local storage value for yet-another-experimental-feature.

Conclusion

Feature flags in web apps and Angular can be as simple, or as complex as you’d like them to be.

I generally have had a better time with the slightly simpler ones shown above, than all singing all dancing server config options. What are your thoughts?

HTF do I write E2E tests with a stubbed dependency? (Angular/Nrwl Nx edition)

In an earlier post, I went over some of the reasons you might want your E2E tests to run against a stubbed version of any really slow or unpredictable dependencies.

Let’s see how this might look in an Angular context, making use of the framework’s inbuilt dependency injection, and TypeScript.

NB I’m going to use Nrwl Nx to spin up these example bits of code inside a shared ‘workspace’ or ‘monorepo’ (the terminology seems to change).

If you are unfamiliar with Nrwl Nx you should check it out! I’ve used their tooling extensively at work, and after a few hiccups can strongly recommend it. See here for a slightly outdated explanation of why/how I’ve used their stuff in the past.

Basically though, they provide a way of easily building multiple applications/libraries from within one repository, supporting code reuse between applications, with convenience tooling, built on top of the Angular CLI, for handling unit testing, e2e testing, builds etc. as well as a bunch of generators for generating opinionated bits of ‘best practice’ Angular/ngrx code. (And you get all of this for freeeee!)

The method I’m using for stubbing dependencies would be equally applicable to a ‘vanilla’ Angular application however.

Meet the players

1) Our main Angular application. It has a button which when clicked will call our Slow Ass Api’s getInformation endpoint, and when finished will render the result (of type SlowInformation).

2) Our Slow Ass Api. This is a separate TypeScript library that exposes one method, getInformation, which returns an object of type SlowInformation. The (not so) clever bit is that this call will simulate unpredictable slowness, by returning the data after a random amount of time from 0ms up to 10000ms.

3) Our E2E tests. I am going to use Cypress for these because I happen to really like it. If you want to use Protractor, this method will still work; you will just have to use the executeScript method to talk to the window object instead. Also, if you create your application with Nrwl Nx they will set up all of the code scaffolding and configuration for you to support either Protractor or Cypress (you can choose as part of their interactive setup script).
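The Slow Ass Api described in (2) might look roughly like this sketch (the names and fixture data here are invented for illustration, not taken from the real library):

```typescript
// Illustrative sketch of a slow, unpredictable API. Names and data are
// made up for this example.
type InterestLevel = 'mildlyDiverting' | 'holyCrudThatIsInteresting';

interface SlowInformation {
  fact: string;
  interestLevel: InterestLevel;
}

// Resolves after a random delay between 0 and maxDelayMs milliseconds,
// simulating an unpredictably slow backend
function getInformation(maxDelayMs = 10000): Promise<SlowInformation> {
  const delay = Math.random() * maxDelayMs;
  return new Promise(resolve =>
    setTimeout(
      () =>
        resolve({
          fact: 'Cats spend roughly two thirds of their lives asleep',
          interestLevel: 'mildlyDiverting',
        }),
      delay,
    ),
  );
}
```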

The action

Below are a bunch of links to specific commits. If you want to play along you can clone the project and checkout individual commits to see how the tests behave at various stages of development.

I will pick up from after the initial project skeleton is put together. As I mentioned, this was done using Nrwl Nx’s extensions to the Angular CLI. If you are curious, you can see what these tools do for you in this commit.

Generate slow-information-renderer Angular app (commit):

This uses another of the Nx scripts to generate a fully fledged Angular app. I chose to use Jest and Cypress, so it also sets up an e2e testing project, which can be run with npm run e2e.

Generate slow information renderer component and slow information service (commit)

These will be responsible for asking the user to click a button, which will ultimately call our service to call the slow ass api.

Again, this makes use of generator commands to create a boilerplate component with best practices.

Generate and write slow ass api library (commit)

Due to an over eager interactive rebase I have accidentally lumped my changes in with the generator changes.

Basically though, here I create a library for returning strongly typed data of type SlowInformation after a random delay (I even unit tested it! I got carried away…)

The main file to concentrate on is slow-ass-api.ts

Plug the library into our Angular Application, and do some wiring (commit)

After this, our app will render data from the slow-ass-api after a delay.

Add a naive e2e test (commit)

This commit adds a basic e2e test which is, unfortunately, quite flaky. Because of the random delay in the data coming back, the test sometimes passes, but sometimes doesn’t. This can be demonstrated by making the API resolve more quickly (commit)

With this change, our tests behave properly; unfortunately, in the real world you can’t just tell a slow API to be quicker, so we will need a different solution.

Make the test work less flakily (commit)

Here, we experiment with making our test wait 10 seconds to make sure all the data is there. This works pretty well!

However, we have made our tests take longer than they need to, and, crucially, if the api ever takes longer than 10 seconds to resolve, our test will fail again. Perhaps this isn’t the best solution after all…

Complexify the app (commit)

We receive some requirements from our Product Owner, and it seems that a new feature is required: if a fact is at the holyCrudThatIsInteresting level of interest, we should show a button allowing the user to request more information about the fact.

We add the feature and look to update our e2e tests.

Now that our app logic is more complicated, we need to account for this complexity in our e2e tests.

Test the new complexified app (commit)

We add a new test for the feature, but unfortunately it passes only very rarely: if the random fact that comes back is not of the right interest level, our button is not shown, and the test fails. Disaster!

We could start to experiment with making our tests clever, and having them check for the interest level of the fact before progressing, but it is easy to see how an approach like that could blow up and become messy very quickly.

Enter the stub slow information service (commit)

We generate a new service that implements the existing InformationService in the Angular application. Because it is strongly typed, it must follow the same interface, so we can be relatively confident that the methods in our stub can’t return nonsensical or invalid data.

This commit is the meaty part of this post. Here we use Angular’s environment files, as well as their dependency injection, to run our e2e tests with the StubSlowInformationService instead of the SlowInformationService.
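The core of that wiring might look roughly like this sketch of an NgModule providers array (the `environment.e2e` flag name and exact provider shape are my assumptions here, not necessarily what the commit uses):

```typescript
// Sketch: choose the service implementation based on an environment flag.
// `environment.e2e` is an invented name for this illustration.
providers: [
  {
    provide: SlowInformationService,
    useClass: environment.e2e
      ? StubSlowInformationService
      : SlowInformationService,
  },
],
```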

Now our tests run, and they run quickly.

If the slow-ass-api implementation changes, our stub will stop compiling, and we know that we need to update our code. So this approach is relatively safe, assuming that you have a sufficiently well defined contract for how the api should behave (in this case our TypeScript Type definitions).

Hooray!

Even more control (commit)

Going one step further, in this commit, we expose methods on the window object, meaning that we can change the behaviour of our stub at run time, during our e2e tests.
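The idea, sketched with an invented hook name (in a browser, `globalThis` is the window object):

```typescript
// Sketch: let e2e tests reconfigure the stub at run time via a hook on the
// global object (window, in a browser). The hook name is invented here.
type InterestLevel = 'mildlyDiverting' | 'holyCrudThatIsInteresting';

let stubInterestLevel: InterestLevel = 'mildlyDiverting';

// Exposed so a test runner driving the browser can flip the stub's behaviour
(globalThis as any).__setStubInterestLevel = (level: InterestLevel) => {
  stubInterestLevel = level;
};

function currentInterestLevel(): InterestLevel {
  return stubInterestLevel;
}
```

A Cypress test could then call `cy.window().then(win => (win as any).__setStubInterestLevel('holyCrudThatIsInteresting'))` before interacting with the page.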

Again, this is relatively safe if steps are taken to make sure the api contract is adhered to in the Angular application (by respecting the Types defined in the slow-ass-api library).

Conclusion

We have managed to write an e2e test which is quite robust, fast and easy to manage.

Because both our Angular application and the slow ass api conform to a common contract about the behaviour of the API, we can be relatively confident that our tests are meaningful, and represent a realistic interaction with our API.

I think this is a pretty neat approach, and it has proved successful for me at work too. I’d be very keen to hear other people’s opinions though, as e2e testing in general is something that I’ve found to be a seriously grey area in the frontend world.

Why TF should I stub dependencies for an E2E test?

E2E (or ‘end to end’) tests, when written for web applications, basically allow you to write scripts to interact with your application via a browser, and then assert that it responds correctly.

Common libraries that allow you to do this for Angular apps are Selenium/Protractor and Cypress (you should check out Cypress in particular).

The browser can be ‘headless’, meaning that you can run it in a command line, without a GUI (see headless chrome for an example). This means that these tests can be run by your build server to support a continuous integration/continuous deployment pipeline.

You can then develop scripts to click about, and test your most important user flows quickly and consistently, on your real application, in a very close approximation of how your users would click around. This is an attractive alternative to manually running these tests yourself by spinning up your application and clicking about. Humans are good at lots of things, but computers trump them every time when it comes to performing repeated actions consistently.

E2E tests are an example of black box testing.

If the tests fail, your application is broken; if they pass, your application may still be broken, but it is not terribly broken (as we have tests ensuring that key user flows are still accessible).

This process relies on an assumption that these tests are very robust and not ‘flaky’ (they only fail when something is actually broken, not just randomly).

There are a few things that can cause E2E tests to randomly fail, but by far the most common in my experience is some form of network latency. Basically any time your code is interacting with something which can take an indeterminate amount of time to finish, you are going to experience some pain when writing automated tests for it.

The most recent (and most evil) version of this I have experienced is an application that is plugged into a blockchain and involves waiting for transactions to be mined. This can take anywhere from a few seconds, to forever.

Unless you are a lunatic, if your application involves any sort of transactions (purchases, bets, swiping-right), which involve user data, your automated tests will already be running on some sort of test environment, meaning that when you mutate data, it is not done on production data, but a copy. This comes with its own little pain points.

When talking to a truly stateful dependency (like an API plugged into a database), you will have to account for the state of the database when performing any actions in your browser test.

As one example, if you are logging in, you might have to first check whether the user has an account, then log in if they do, or create an account if not. This adds complexity to your tests and makes them harder to maintain/potentially flakier.

Another option is to control the test environment, and write (even more) scripts to populate the test database with clean data that can be relied on to power your tests (make sure that a specific user is present for example).

This removes complexity from the browser tests, but adds it to the scripts responsible for set up and tear down of data. This also requires that your test data is kept synchronised with any schema changes to your production database. Otherwise you run the risk of having E2E tests which can pass, even while your real application is broken, as they are testing on an out of date version of the backend.

(To clarify, these are all valid options, and there are solutions to many of the problems above)

However, while this approach solves the problem of accidentally munging user data, a backend plugged into a staging database can still be forking slow, as again you are at the mercy of network speeds and many other things out of your control.

Taken together, this is why I favour ‘stubbing’ the slow bit of your application (in many cases a backend service). This involves providing a very simple approximation of your Slow Ass Service (SAS), which follows the same API, but returns pre-determined fixture data, without doing any slow networking.

There is one major caveat to this approach.

It only works in a meaningful way if you have some sort of contract between the SAS and your frontend.

That is to say, your frontend and the SAS (Slow Ass Service) should both honour a shared contract describing how the API should behave, and what data types it should accept and return. (For a REST API, Swagger/OpenAPI provides a good specification for describing API behaviour.) Because this shared contract is written in a way that can be parsed, automatic validations can be made on both the frontend and the backend code to ensure that they meet the requirements.
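In TypeScript terms, a minimal sketch of such a contract and a stub that honours it (all names invented for illustration):

```typescript
// Invented names, purely to illustrate the shared-contract idea.
interface SlowInformation {
  fact: string;
}

interface InformationApi {
  getInformation(): Promise<SlowInformation>;
}

// Because the stub is declared as an InformationApi, the compiler forces it
// to keep matching the contract: if the contract changes and the stub
// doesn't, the build fails (which is exactly what we want).
const stubApi: InformationApi = {
  getInformation: () => Promise.resolve({ fact: 'a predictable fixture fact' }),
};
```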

If you were reading closely, you will have noticed I mentioned data ‘types’, so most likely you will require some sort of typed language on both the frontend and within the SAS (I would recommend TypeScript 🙂 ).

With this in place, as long as your stub implementation of the SAS also honours this contract (which can be checked in with the SAS source code), then you can feel happy(ish) that your stub represents a realistic interaction with the real SAS. If the contract changes, and your code doesn’t, it should not compile, and your build should fail (which we want!).

Now you have this in place, your E2E tests can be much less/not at all flaky, and as an added benefit, will run super fast in your build server, meaning that the dev workflow is slightly nicer too.

Again, there are gazillions of different ways of approaching this problem, and obviously you need to figure out what it is that you care about.

In my experience though, this presents a good compromise and has meant that I (finally) have managed to get E2E tests running as part of CI/CD without wasting huge amounts of time and causing headaches.

In a subsequent post I demo how to actually get this working in an Angular application that depends on a strongly typed third party library, which happens to be slow as hell 🙂

WTF is Mastery Based Learning?

I recently tried an online program called Launch School, which focuses on improving students’ Software Engineering competence via something called ‘Mastery Based Learning’.

Although I have decided not to continue with Launch School, I would definitely recommend it to people looking to move into Software Engineering from another background.

I learnt quite a bit during the two months I spent on the program, but could not justify the time commitment required.

I did, however really like the concept of ‘Mastery’ and have continued to use this approach to improve my Maths skills in my spare time.

“Mastery learning maintains that students must achieve a level of mastery (e.g., 90% on a knowledge test) in prerequisite knowledge before moving forward to learn subsequent information.”

This short book explains the concept more fully, and is worth a read if you want to learn more.

Mastery learning means getting to a point where you know any required foundational topics to an insane degree, before moving up to the next level of complexity.

Mastery based learning works especially well when applied to subjects that require a student to sequentially build on previous knowledge, to gain a progressively more complex understanding about something.

Good examples of subjects like this are Maths, Science, and Engineering.

These, coincidentally, are the subjects I sucked at most at school!

Typically, after the first couple of weeks in a new term, I failed to understand something crucial, got annoyed and glossed over it because it was boring to learn and made me feel stupid, and moved on to the next piece of work, which then made even less sense.

Unsurprisingly this did not result in huge amounts of academic success and led to the (I now think false) belief that I ‘wasn’t a maths person’.

I have been working gradually through this book, and by far the most important thing I’ve learnt so far is that it is healthy and necessary to embrace the suck.

By that I mean, if you are really learning, it will be hard.

Mastery based learning means spending almost all of your time on material you are bad at, and trusting that through consistent effort and practice, you will improve.

In order to improve, you must get used to this feeling of being stupid and clumsy, and embrace it.

After a few months of this approach, I’ve found that I genuinely enjoy the feeling of fumbling around in a new topic, safe in the knowledge that if I continue to fumble, and ‘deliberately practice’, it will eventually make sense.

I’d be very interested to hear anyone else’s thoughts or experiences in this area, especially any teachers, as I really believe that this approach would have worked much better for me at school.

WTF is Interview Cake

Interview Cake is a service that aims to prepare poor little coders for the stress and humiliation they are likely to encounter during a traditional technical interview, where candidates must solve problems in code on a whiteboard, often implementing common data structures and algorithms to do so.

For various reasons, I have never had to endure one of these interviews.

However, during my most recent/current job search I decided I would try and talk to some companies with a more rigorous approach to interviewing candidates.

I also have a niggling (occasionally crippling) fear of being found out as a fraud because I only became interested in computers later in life, and therefore must be a big fat faker who has only managed to get where he has due to luck.

And so, to assuage my feelings of inadequacy, I set about refreshing my very rusty knowledge of DATA STRUCTURES AND ALGORITHMS (the caps are because that is how I read it in my head whenever I see it on a job description).

After a bit of research I came across Interview Cake and decided this was the service for me.

Here is my unbiased and very limited review.

What is an Interview Cake?

Interview Cake is an online platform designed to offer an interactive way to familiarise yourself with common algorithm and data structure problems.

It does so via a series of short theoretical articles, and a decent number of practice questions, complete with hints and solutions.

The idea is to give the user exposure to the sorts of problems they will encounter during their interview, and to build up the mindset for breaking down complex problems and coming up with a working solution under time pressure.

For me it has felt a bit like exposure therapy.

I recently had a phone interview where I was asked to write a function in an online REPL, involving n and recursion and stack overflows and combinations and time and space complexity, and many other things that make me internally scream and want to run away.

Thanks to having spent the weeks/months before immersing myself in interview cakey goodness, this was significantly less scary.
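
To give a flavour of the kind of question I mean (this is an illustrative stand-in, not the actual interview question), here is the classic shape of it: a naive recursive solution that is exponentially slow, and a memoised version that fixes the time complexity:

```javascript
// Illustrative only -- not the actual interview question.
// Naive recursion: O(2^n) time, O(n) stack depth. Deeper variants of
// this pattern are exactly where stack overflows come up in interviews.
function fibNaive(n) {
  if (n < 2) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

// Memoised version: O(n) time, O(n) space, by caching subproblem results.
function fibMemo(n, cache = new Map()) {
  if (n < 2) return n;
  if (cache.has(n)) return cache.get(n);
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
}
```

Being able to talk through that naive-to-memoised jump, and the time/space trade-off it makes, is pretty much the core skill these interviews test.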

Would I recommend Interview Cake?

Yes.

It costs money, enough that it is a bit painful, but that has historically proved to be a good thing for me.

I am a cheapskate and the idea that I’m paying for something and not using it provides additional motivation to do the thing.

It is also expensive because it is good.

How good? Pretty good.

Why should you try it?

You get three free questions to see if you like the system, and if you pay for it and decide you don’t love it, you can get your money back.

Will it solve all your interview needs?

Probably not. I had to supplement it with videos, articles, blog posts etc., but for the practice side of things, it’s hard to beat.

HTF do I Angular CLI-ify a React app

If you have been doing any sort of serious Angular dev work, you have probably come across the Angular CLI: the one-stop shop for generating Angular applications, components, routes, services and pipes, as well as scripts for linting, testing, serving and formatting your app.

If you haven’t used it, shame on you! (not really, but seriously you should check it out).

Why do I love the Angular CLI?

  • Saves time
  • Improves standardisation (There’s nothing I hate more than endless arguments about spaces vs. tabs or any other non-important chin stroking bullshit)
  • Makes it easy to do things that you really should be doing as a responsible person, like testing your code, linting your code etc.
  • Supports aggressive refactoring, as spinning up new components etc. becomes painless.
  • You can write your own generators for it now!

Does the React community have any equivalent functionality?

YES! (sort of…), enter create-react-app

create-react-app is a command line tool for spinning up a new React project.

Let’s compare:

Angular

ng new angular-app

React

create-react-app react-app

What’s the difference?

They both whirred away and did some magical things and created a project for me, complete with scripts for building, testing and serving it.
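
For reference, the scripts create-react-app wires into the generated package.json look roughly like this (quoted from memory, so treat it as indicative rather than exact):

```json
{
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  }
}
```

Everything is delegated to the react-scripts package, which is how create-react-app keeps the generated project itself almost configuration-free.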

It looks like I will have to investigate tooling for generating React components etc. further, but while I’m learning I’ll probably be hand coding these anyway, so it’s not the end of the world.

Initial impressions are that the React scaffolding project is a bit less fully featured than the Angular one, but maybe that’s a good thing. One of the complaints levelled at Angular is that it is too bloated. I guess we will see over the next few weeks!

HTF do I move from Angular components to React components

Angular has components.

By that I mean a modular piece of frontend UI code (not quite Web Components in the standards sense, but close in spirit) with the following:

  • HTML template
  • TypeScript class to handle UI logic
  • Encapsulated CSS

Angular’s components also support one-way data flow via the @Output and @Input property decorators, and dependency injection.

So how do I Reactify these concepts?

Let’s start with a basic dumb Angular Component, with a form, an input value and an output emitter, and see if we can’t shiny it up with some React:

useful-form.component.ts

import { Component, EventEmitter, Input, Output } from '@angular/core';
import { FormControl, FormGroup, Validators } from '@angular/forms';

export interface UsefulInformation {
  number: number;
  crucialInformation: string;
}

@Component({
  selector: 'app-useful-form',
  templateUrl: './useful-form.component.html',
  styleUrls: ['./useful-form.component.css']
})
export class UsefulFormComponent {
  @Input() name: string;
  @Output()
  usefulUserInformation: EventEmitter<UsefulInformation> = new EventEmitter<
    UsefulInformation
  >();

  public usefulForm: FormGroup;
  public crucialOptions = ['important', 'necessary', 'crucial'];

  constructor() {
    this.usefulForm = new FormGroup({
      // @Input values are not yet bound in the constructor, so start empty
      number: new FormControl(null, Validators.required),
      crucialInformation: new FormControl('', Validators.required)
    });
  }

  public submit() {
    this.usefulUserInformation.emit(this.usefulForm.value);
  }
}

useful-form.component.html

<form [formGroup]="usefulForm">
  <h2>Hello <span *ngIf="!name">person</span>{{ name }}</h2>
  <label for="number">Number</label>
  <input
    id="number"
    type="number"
    formControlName="number"
    placeholder="pick a number!"
  >
  <label for="crucialInformation">Crucial information</label>
  <select
    id="crucialInformation"
    formControlName='crucialInformation'
  >
    <option
      *ngFor="let option of crucialOptions;"
      [value]="option"
    >
      {{ option }}
    </option>
  </select>
  <button
    (click)="submit()"
    [disabled]="usefulForm.invalid"
  >Submit</button>
</form>

So here we have a component that can be embedded in other templates, with an input passed in via a name field that we render in the template, an output event that can be listened to, and a reactive form. There is also an *ngFor and an *ngIf.

The component can be used like below in a parent component:

<app-useful-form
  name='Rob'
  (usefulUserInformation)="handleUserInformation($event)"
></app-useful-form>

All pretty standard stuff. Let’s try and replicate this behaviour in React.

Reactified useful component

First of all I want to roughly map some Angular concepts related to components to their React equivalents:

  • @Input() properties → props passed in by the parent
  • @Output() EventEmitter → a callback function passed in via props
  • HTML template → JSX returned from the render() method
  • Reactive form state → this.state, updated via setState()

useful-form.js

import React from 'react';

export default class UsefulForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      number: '',
      crucialInformation: ''
    };

    this.crucialOptions = ['important', 'necessary', 'crucial'];

    this.handleNumberChange = this.handleNumberChange.bind(this);
    this.handleCrucialInformationChange = this.handleCrucialInformationChange.bind(
      this
    );
  }

  handleNumberChange(event) {
    // setState merges into the existing state, so no need to spread it
    this.setState({ number: event.target.value });
  }

  handleCrucialInformationChange(event) {
    this.setState({ crucialInformation: event.target.value });
  }

  render() {
    return (
      <form
        onSubmit={event => {
          // prevent the browser's default full-page form submission
          event.preventDefault();
          this.props.handleSubmit(this.state);
        }}
      >
        <h1>Hello {this.props.name || 'person'}</h1>
        <label>
          Number:
          <input
            type="number"
            required
            placeholder="pick a number!"
            value={this.state.number}
            onChange={this.handleNumberChange}
          />
        </label>
        <label>
          Crucial information:
          <select
            required
            value={this.state.crucialInformation}
            onChange={this.handleCrucialInformationChange}
          >
            <option value=''>Pick option</option>
            {this.crucialOptions.map(option => (
              <option value={option} key={option}>
                {option}
              </option>
            ))}
          </select>
        </label>
        <input
          type="submit"
          value="Submit"
          disabled={!(this.state.number && this.state.crucialInformation)}
        />
      </form>
    );
  }
}

The component is used as follows in a parent component:

import React, { Component } from 'react';
import UsefulForm from './useful-form';

class App extends Component {

  handleUsefulFormSubmit(event) {
    window.alert(JSON.stringify(event));
  }

  render() {
    return (
      <UsefulForm
        name="Rob"
        handleSubmit={this.handleUsefulFormSubmit}
      />
    );
  }
}

export default App;

What are the key differences/learnings?

  1. React seems to be much less opinionated about how you do things, and more flexible. It is also just JavaScript, whereas Angular is in many ways its own thing.
  2. Forms in React seem to require more work to achieve the same as in Angular (validation, disabled buttons, updates to form values etc.). I suspect that as I mess around with this I will find some nicer patterns for handling programmatically driven forms.
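
One small boilerplate-reducing pattern I’ve since come across (my own sketch, not an official React idiom) is a single change handler driven by the input’s name attribute, using a computed property name, instead of one bound handler per field. The helper is a plain function here so it works without a React runtime:

```javascript
// Sketch: derive a setState-shaped update object from any input's
// `name` attribute, so one handler can serve every field in a form.
function fieldUpdate(event) {
  return { [event.target.name]: event.target.value };
}

// Inside the component you'd then have a single bound handler:
//   handleChange(event) { this.setState(fieldUpdate(event)); }
// with name="number" / name="crucialInformation" set on the inputs.
```

That would replace both handleNumberChange and handleCrucialInformationChange in the component above.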

Overall the differences are not as great as I had feared. Both allow for controlling child components from a parent component, and for flowing data into and out of the component without mutating things. And the gap between *ngIf/*ngFor and plain interpolated JavaScript is minimal.
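
To make that *ngIf/*ngFor point concrete, here is a rough sketch of how the template directives map to ordinary JavaScript expressions (using strings rather than JSX, so no React runtime is assumed):

```javascript
const crucialOptions = ['important', 'necessary', 'crucial'];

// *ngFor becomes Array.prototype.map over the data:
const optionTags = crucialOptions.map(
  option => `<option value="${option}">${option}</option>`
);

// *ngIf becomes short-circuiting or a ternary:
const name = '';
const greeting = `Hello ${name || 'person'}`;
```

In JSX the same expressions just sit inside curly braces, which is exactly what the useful-form.js render() method above does.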

I am pleasantly surprised by how much I like React so far…