Tiny project: where to live in London

I had an idea this afternoon for a little app to do something that I have wanted done at various times.

Namely, something to help me find a good place to live in London based on commute times to a number of locations.

For example, given your workplace, a family member’s house, a friend’s house, your housemate’s workplace etc. what is the best place to live based on average commute time.

I’m trying to get better at doing tiny projects which I can build quickly, rather than bigger projects I never finish, so I set myself a challenge to deploy something to the web in an afternoon, which at least partially solved the problem I had posed.

I had a rough idea about how it might work, and so I scribbled some notes and got going trying to prove it. Here are the notes I came up with, just to give an idea of the process (I apologise for my writing…):

Ugly UI:

Use the TFL journey planner API, split London into a grid of latitude/longitude locations, and try testing out how long it takes to get from the middle of each grid tile to the locations the user has specified:

Hopefully this will be the data we end up with, a series of tile locations in a dictionary, along with the journey times to each of our user’s locations:

And lastly, to avoid going off on a tangent, these are the things I wanted to prove I could do in some form:

In case that one is too illegible, the things I wanted to prove were:

  • Split London into a grid of tiles, defined by the latitude and longitude of the centre of the tile.
  • Calculate travel times from the centre of each grid tile to the user specified locations.
  • Sort the results based on the best average commute time and fairest split of times, and report back with the best ones.

It ended up working quite well.

First of all I played about with the Transport for London (TFL) API, and figured out that if I registered and got an API key, I could make 500 requests a minute for free. More than enough for my purposes for now.

Then I used Postman to test the API, pass it some locations, and see what shape the data came back in. From there I wrote a simple JavaScript script, run with Node.js, which found the time taken for the shortest journey from point A to point B.

I had to remind myself how to make HTTP requests from inside Node.js, and ended up using axios, which appears to be quite a nice promise-based HTTP client.

At this point, I had proved that I could pass the API a postcode, and a latitude/longitude coordinate, and get the shortest journey time between the postcode and the lat/long coordinate.
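For reference, the core of that script looked something like this (a sketch rather than the exact code: fastestJourneyTime is the helper my later code calls, the journeys[].duration field, in minutes, is what the journey planner gave back in my Postman tests, and appId and appKey are the TFL credentials):

const axios = require("axios");

// pick the quickest of the suggested journeys (duration is in minutes)
const fastestJourneyTime = (journeyResults) =>
  Math.min(...journeyResults.journeys.map((journey) => journey.duration));

// ask TFL for journeys from A to B and resolve with the shortest time
const shortestJourneyTime = (from, to) =>
  axios
    .get(
      `https://api.tfl.gov.uk/journey/journeyresults/${from}/to/${to}?app_id=${appId}&app_key=${appKey}`
    )
    .then((res) => fastestJourneyTime(res.data));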

Next, I had to figure out how to split London up into a grid of lat/long coordinates. This was fairly hacky. I ended up clicking on Google Maps roughly where I wanted to define the north, south, east and west limits of ‘London’, and copying the values it returned into my program as hard-coded values, in order to structure my grid of coordinates.

This highly fuzzy methodology eventually gave me the following piece of code:

const generateGrid = () => {
  // rough bounding box for 'London', clicked out of Google Maps
  const bottom = 51.362;
  const top = 51.616;
  const left = -0.3687;
  const right = 0.1722;

  const gridHeight = 6;
  const gridWidth = 10;

  const heightIncrement = (top - bottom) / gridHeight;
  const widthIncrement = (right - left) / gridWidth;

  const grid = [];

  let centeredPoint = [bottom, left];

  // walk up each column of tiles, then move one column east
  for (let i = 0; i < gridWidth; i++) {
    for (let j = 0; j < gridHeight; j++) {
      grid.push([...centeredPoint]);
      centeredPoint[0] += heightIncrement;
    }
    centeredPoint[1] += widthIncrement;
    centeredPoint[0] = bottom;
  }

  return grid;
};
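For scale: a 10 x 6 grid gives 60 tiles, so even with five user locations that is only 300 journey requests, comfortably inside the 500 requests a minute the free TFL tier allows.

// 10 x 6 grid = 60 tile points
const grid = generateGrid();
console.log(grid.length); // 60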

The grid is wider than it is tall, because London is wider than it is tall, which I think makes sense…

So now I had an array of coordinates, representing the centres of my grid tiles spanning London, and the ability to figure out how long it takes to get from one of these coordinates to a given postcode.

I wrote some quite ugly code to loop through all the tiles, calculate the commute times to each of the user locations, and work out both the average of those times and the spread of commute times (the difference between the largest and smallest time):

const getGridJourneys = async (grid, locationsToCheck) => {
  const gridJourneys = {};

  console.log("requesting");

  let withAverageAndSpread = {};

  await axios
    .all(
      grid.map(([gridLat, gridLong]) => {
        return axios.all(
          locationsToCheck.map((location) => {
            return axios
              .get(
                `https://api.tfl.gov.uk/journey/journeyresults/${gridLat},${gridLong}/to/${location}?app_id=${appId}&app_key=${appKey}`
              )
              .then((res) => {
                const key = `${gridLat},${gridLong}`;
                if (gridJourneys[key]) {
                  gridJourneys[key].push(fastestJourneyTime(res.data));
                } else {
                  gridJourneys[key] = [fastestJourneyTime(res.data)];
                }
              })
              .catch((err) => {
                console.error(err.response.config.url);
                console.error(err.response.status);
                return err;
              });
          })
        );
      })
    )
    .then(() => {
      console.log("request done");
      withAverageAndSpread = Object.keys(gridJourneys).reduce(
        (results, gridSquare) => {
          return {
            ...results,
            [gridSquare]: {
              journeys: gridJourneys[gridSquare],
              spread: gridJourneys[gridSquare].reduce(
                (prev, curr) => {
                  const newHighest = Math.max(curr, prev.highest);
                  const newLowest = Math.min(curr, prev.lowest);

                  return {
                    highest: newHighest,
                    lowest: newLowest,
                    diff: newHighest - newLowest,
                  };
                },
                {
                  lowest: Number.MAX_SAFE_INTEGER,
                  highest: 0,
                  diff: 0,
                }
              ),
              average:
                gridJourneys[gridSquare].reduce((prev, curr) => {
                  return prev + curr;
                }, 0) / gridJourneys[gridSquare].length,
            },
          };
        },
        {}
      );
    });
  return {
    gridJourneys,
    withAverageAndSpread,
  };
};

And with a bit more fiddling (and flushing out a lot of bugs), I had my program delivering results in the following format:

[
  {
    location: '51.404,-0.044',
    averageJourneyTime: 60,
    spread: 4,
    journeys: [ 58, 62 ]
  },
  {
    location: '51.446,-0.206',
    averageJourneyTime: 61,
    spread: 5,
    journeys: [ 58, 63 ]
  }
]

So now, in theory I had proved that my program worked, and after putting in some values, the answers I got from it seemed to make sense.

I was pretty happy given this had only taken an hour or so, and considered leaving it there, but I decided to spend another few hours and deploy it to the web.

I started a new Express.js project using their scaffolding tool, and pasted all my semi-working code into it. This took a bit of wrangling to make everything work as it did before, but wasn’t too bad.

Then I spent half an hour or so reminding myself how to use Jade/Pug to put together HTML templates, and how routing works in Express.

Eventually I ended up with two views:

A simple form

extends layout

block content
  h1 Where should I live in London
  p Add postcodes of up to 5 locations you want to be able to travel to
  form(name="submit-locations" method="get" action="get-results") 
    div.input
      span.label Location 1
      input(type="text" name="location-1")
    div.input
      span.label Location 2
      input(type="text" name="location-2")
    div.input
      span.label Location 3
      input(type="text" name="location-3")
    div.input
      span.label Location 4
      input(type="text" name="location-4")
    div.input
      span.label Location 5
      input(type="text" name="location-5")
    div.actions
      input(type="submit" value="Where should I live?")

and a results page

extends layout

block content
  h1 Results
  ul
    each item in data[0]
      li
        div Location: #{item.location}
        div Average journey time: #{item.averageJourneyTime}
        div Journey times: #{item.journeys}
        div Journey time spread: #{item.spread}
        a(href="https://duckduckgo.com/?q=#{item.location}&va=b&t=hc&ia=web&iaxm=maps" target="_blank") Find out more

along with the routing for the results page

router.get("/get-results", async function (req, res, next) {
  const locations = Object.values(req.query)
    .filter((v) => !!v)
    .map((l) => l.replace(" ", ""));

  const results = await getResults(locations);

  if (results[0]?.length === 0) {
    res.render("error", {
      message: "Sorry that did not work. Please try again in a minute!",
      error: {},
    });
  } else {
    res.render("results", { data: results });
  }
});

For context, here is the getResults method:

const getResults = async (locationsToCheck) => {
  const { gridJourneys, withAverageAndSpread } = await getGridJourneys(
    generateGrid(),
    locationsToCheck
  );

  const sorted = Object.keys(gridJourneys).sort(
    (a, b) => withAverageAndSpread[a].average - withAverageAndSpread[b].average
  );

  const sortedListWithDetails = sorted.map((key) => {
    return {
      location: key
        .split(",")
        .map((i) => i.slice(0, 6))
        .join(","),
      averageJourneyTime: Math.round(withAverageAndSpread[key].average),
      spread: Math.round(withAverageAndSpread[key].spread.diff),
      journeys: withAverageAndSpread[key].journeys,
    };
  });

  return [sortedListWithDetails.slice(0, 5)];
};
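One thing the code above doesn’t do yet is use the spread when ranking, even though a ‘fair split’ of commute times was part of the original goal. A hypothetical way to fold it in (not in the deployed code; the 0.5 weight is arbitrary) would be a combined score:

// rank by average journey time plus a penalty for unfair splits
const score = (key) =>
  withAverageAndSpread[key].average +
  0.5 * withAverageAndSpread[key].spread.diff;

const sorted = Object.keys(gridJourneys).sort((a, b) => score(a) - score(b));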

I added the generic error message, because the API is rate limited, and if the rate is exceeded, the app doesn’t handle it very well…

I then used Heroku to deploy the app for free, which was, as ever, a dream.

Here is a video of the app in action.

And here is the deployed app (assuming it is still up when you are reading this!)

https://where-should-i-live-london.herokuapp.com/

And here is the code.

Overall, I really enjoyed this little exercise, and while there are obviously huge improvements that could be made, it is already a better option (for me at least) than trying individual areas of London one at a time and checking CityMapper to see how long it takes to get to the places I care about.

If I come back to it, I might look into displaying the results on a map, which would make it clearer that the results represent tiles rather than specific granular locations.

I love how easy it is to quickly prototype and build things using free tooling these days, and it was really refreshing to write a server-side web app instead of another single page application. I really believe that for quick proof-of-concept work and prototypes, Express.js and Heroku are a powerful combination. The code is nothing special, but it is enough to prove the idea, and to get something running which can be improved upon if I want to later.

WTF is a walking skeleton (iterative design/development)

I’m going to argue that when developing software, it can be a good idea to be skeletal.

What does that mean?

It means starting with an ugly, or clunky version of your piece of software.

Not only must it be ugly, raw and unsuitable for public consumption to be skeletal, it must also be functional.

That is to say, the skeleton must walk.

Your job once you have this skeletal form, is to go back and attach all of the things that make a skeleton less horrifying (skin and the like).

I’ll give an example and some background to hopefully clarify this idea further, as I think it is an important one that often gets overlooked.

It’s time to learn React (again)

I have a hard time convincing myself to learn stuff for the sake of it. I like to learn things that increase my understanding of the world, make my life easier, or otherwise help me to solve a problem that I have.

For a long time, this has pushed ‘learning React’ to the bottom of the pile of things I want to do with my spare time.

I am (I think) a solid JavaScript developer, and I actually used React professionally for a brief stint years ago. At that point I had no problem picking up the framework, and I was productive pretty quickly. I like React. I also like Angular.

That said, I am definitely not as productive using React as I would be using Angular, and React has moved on since I last used it, so I tend to stick with what I know.

In an ideal world I’d probably ignore React. I don’t have religious feelings about software tools or frameworks, and I don’t like learning things for the sake of it. I like building things.

I am also currently working as a contractor. This means I need to be very good at the thing I do, in order for people to justify paying a higher rate for my services.

However… the demand for Angular developers in London, where I live, is pretty low at the moment. It seems to largely be large, slow financial organisations that are using it. These are not typically qualities I look for in a workplace.

React on the other hand is booming.

So, TL;DR it’s time to get familiar with React, even though I don’t want to.

Rob’s super simple tips for how to learn technologies really fast

  • DON’T READ THE MANUAL (yet)
  • Read the quickstart
  • Start building things
  • Get stuck
  • Read the relevant part of the manual
  • Get unstuck
  • Repeat

These are the steps I try to follow when learning a new technology. I basically like to get enough knowledge to get moving, then jump in at the deep end and start building stuff.

Once I (inevitably) get stuck, I will go back and actually read the documentation, filling in the blanks in my knowledge, and answering all the many questions I have generated by trying to build things unsuccessfully.

I find that this keeps my learning tight and focussed (and interesting), and means that I don’t spend hours reading about theoretical stuff which I might not even need yet.

So, in order to carry out my learning steps, I needed something to build.

I settled on a video game curation tool, which allows users to sign in, and record their top 5 favourite games in a nice list view, along with some text saying why.

This data can then be used to show the top 5 games per platform (Switch, Xbox, PS4, PC etc.), determined by how often they appear on users’ lists.

I also wanted the ability to see other users’ top 5 lists, via shareable links.

I don’t think this is a website that is going to make me my millions, but it is complex enough to allow me to use React to actually build something.

OK, so what does this have to do with Skeletons?

Well, when I build things, I like to make skeletons first.

So in this case, a walking skeleton of my application should be able to do all of the things I outlined above.

In order for it to be a true functional skeleton, that can be iteratively improved upon, it needs to be as close to the final product as possible, and solid enough to support iterative improvements.

So it can’t be one big blob of spaghetti code which only works on my machine on Tuesdays.

I am building a web application which will persist user preferences, so it has to be:

  • deployed to the web
  • connected to a database
  • able to authenticate a user

Regardless of what I said above about diving straight in, you shouldn’t just dive straight in.

Firstly, figure out, without getting bogged down in what tech to use, what it is you want your stuff to do.

For a web app, you probably want to have some idea about the data structures/entities you are likely to use and what they will represent, and the user flow through the front end.

In this case, I knew I wanted to do something with user generated lists of favourite games, and that I wanted to store and update them.

This meant that a cheap way to get started was to come up with some data structures, and play around with them. So that’s what I did:

/**
 * We should optimise the data structures around the most common/massive/searched entities.
 * I think the reviews are likely to be the chunkiest data set as each user can make muchos reviews.
 * Platforms doesn't fucking matter as there are so few
 * Games are also potentially quite large and need to be searchable
 *
 * Reviews need to be searchable/filterable by: [ platform, game-name, username, tags, star-rating ]
 *
 * Games need to be searchable/filterable by: [ platform, name, tags, star-rating ]
 */

const platforms = {
  "uuid-222": { name: "PS4" },
  "uuid-223": { name: "switch" },
  "uuid-224": { name: "X Box One" },
  "uuid-225": { name: "PC" },
};

// shape: { [platformId]: { [gameId]: { [userId]: "user comment" } } }
const includedBy = {
  "uuid-222": {
    "uuid-312": {
      robt1019: "I loved throwing coconut at ppls hedz",
    },
  },
};

let rankedPs4Games = {};

Object.keys(includedBy["uuid-222"]).forEach((gameId) => {
  // count how many users included this game in their top 5
  const includedCount = Object.keys(includedBy["uuid-222"][gameId]).length;
  if (rankedPs4Games[includedCount]) {
    rankedPs4Games[includedCount].push(gameId);
  } else {
    rankedPs4Games[includedCount] = [gameId];
  }
});
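// (added for illustration) reading the top 5 off the count-keyed map,
// highest inclusion counts first:
const top5Ps4 = Object.keys(rankedPs4Games)
  .map(Number)
  .sort((a, b) => b - a)
  .flatMap((count) => rankedPs4Games[count])
  .slice(0, 5);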

const games = {
  "uuid-313": {
    name: "Zelda Breath of the Wild",
    platforms: ["uuid-223"],
  },
  "uuid-312": {
    name: "Hitman 2",
    platforms: ["uuid-222", "uuid-223", "uuid-224", "uuid-225"],
  },
};

const users = {
  robt1019: {
    name: "Rob Taylor",
    top5: [{ gameId: "uuid-312", platformId: "uuid-222" }],
  },
  didarina: {
    name: "Didar Ekmekci",
    top5: [{ gameId: "uuid-313", platformId: "uuid-223" }],
  },
};

/**
 * Use includedBy count for aggregate views. Only viewable by platform. No aggregate view.
 */

It is important to note that these data structures have since turned out to be slightly wrong for what I want, and I have changed them… iteratively. But playing with them in this form before writing any code allowed me to iron out some nasty kinks, and to make sure I wasn’t trying to do anything that would be truly horrible from a data perspective later on.

I also spent a good hour scribbling in a note pad with some terrible drawings of different screens, to mentally go through what a user would have to do to navigate the site.

At all times we’re trying to make a solid, rough and ready skeleton that will stand up on its own, not a beautifully formed fleshy ankle that is incapable of working with any other body parts!

Be a Scientist

The main benefit of this approach is that you are continually gathering extremely useful information, and you can very quickly prove or disprove your hypotheses about how the application should be structured and how it should perform.

By emphasising getting a fully fledged application up and running, deployed to the web and with all of the key functionality present, you are forced to spend your time wisely, and you take away a lot of the risk of working with a new set of tools.

What did I learn/produce in four days thanks to the Skeleton:

  • React can be deployed with one command to the web using Heroku and a community buildpack.
  • How to deploy a React application to Heroku at a custom domain.
  • How to do client side routing in a modern React application.
  • The basics of React hooks for local state management.
  • How to protect specific endpoints on an API using Auth0 with Express (see the sketch after this list).
  • The IGDB (Internet Games Database) is free to use for hobby projects, and is really powerful.
  • How to set up collections on Postman to make testing various APIs nice and easy.
  • A full set of skeleton React components, ready for filling in with functionality.
  • A more thought-through Entity Relationship model for the different entities, and a production-ready, managed MongoDB database.
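For the Auth0 point, the gist (following the pattern in Auth0’s Express quickstart at the time; the domain, audience and route name are placeholders, not my real values) is a JWT-checking middleware that you drop in front of any route you want to protect:

const jwt = require("express-jwt");
const jwksRsa = require("jwks-rsa");

// validate incoming bearer tokens against Auth0's public signing keys
const checkJwt = jwt({
  secret: jwksRsa.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 5,
    jwksUri: "https://YOUR_DOMAIN.auth0.com/.well-known/jwks.json",
  }),
  audience: "YOUR_API_IDENTIFIER",
  issuer: "https://YOUR_DOMAIN.auth0.com/",
  algorithms: ["RS256"],
});

// only requests with a valid token reach the handler
router.get("/top5", checkJwt, (req, res) => {
  res.json({ top5: [] });
});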

If you want to see just how Skeletal the first iteration is, see here:

https://www.youtube.com/watch?v=k2uMrrVkzDk&feature=youtu.be

I didn’t get the users search view working, so that is the first thing to do this week.

After that, my product is functionally complete, so I’ll probably start on layout/styling and maybe some automated testing.

I can already breathe pretty easy though, as I know that nothing I’m trying to do is impossible, as I have already done it.

Anything from here on out is improvements, rather than core functionality.

Frontend:

https://github.com/robt1019/My-Fave-Games

Backend:

https://github.com/robt1019/My-Fave-Games-Express

Noated (Noted) Debrief

I am done with my latest project.

“Noated” (Noted was taken…) is live and available to install, and I’m feeling reflective. (Note: I have since removed it from the App Store, as I didn’t have time to maintain the project 🙁)

I started out, a month and a half ago, trying to make:

A cross platform notes application, a la Mac notes app, but one that works on Mac, Windows, Android and IOS

I ended up with:

A cross platform notes application, a la Mac notes app, but one that works on Mac, Windows, and IOS

So no Android… yet.

Given that I personally use Windows, IOS and Mac, I’m pretty happy though.

See it in action here!

https://youtu.be/zfC4clJYuw4

I was feeling a bit mopey this morning as the realisation sank in that my distracting/all-consuming side project is over, and I need to start thinking about paid work again.

So I decided to get a bit of closure and do a debrief of the project.

What were my goals?

To try something new.

To expose myself to problems I haven’t encountered professionally, and hopefully get a more rounded view of the development process.

What did I achieve?

Most, but not all, of my goals. I got a note taking app which largely fulfils my personal requirements for a note taking app, and I did it in a reasonable amount of time.

RIP Android client…

What did I learn?

A big part of why I did this project was to expose myself to problem spaces outside of my usual domain (frontend, specifically single page applications).

In this respect it was very successful.

I gained a working knowledge of Swift, SwiftUI, real time streaming protocols (via Socket.IO) and OAuth 2.0.

I learnt that making networked apps which can work offline as well is hard.

I was also forced to figure out how to actually deploy/distribute desktop and mobile applications. In this case I ended up using the Apple App Store for the mobile app, and I hosted the installer for the MacOS desktop client on GitHub, using their Large File Storage solution. Because I wanted to make it possible for other people to install my app, I had to notarise the Mac installer, which I managed with the help of this excellent tutorial:

https://kilianvalkhof.com/2019/electron/notarizing-your-electron-application/

As I got closer to deploying the app, and was faced with the reality of random people being able to install my app, I realised I would need multiple environments for my database, server and Identity Provider stuff.

This forced me to figure out things I would never have thought about such as how to use environment variables in an IOS application via Xcode.

The thing I found most enjoyable about the whole process was that I didn’t start with a list of technologies I wanted to use; rather, I started with what I wanted to build, namely a note taking application, accessible on multiple platforms, that could be used offline and online, and synced between devices.

This meant that at every step, my technology choices were driven by what would be the best option for building the solution quickly and stably, not by what was the hot new thing.

It felt like I was exploring uncharted new territories, and relying on my prior knowledge and Google-fu, to make it through the project unscathed.

So while I did get exposed to problems which will probably be useful to have grappled with in my professional life (mainly Auth/Identity stuff, and Electron), the main thing I gained from this project was a new confidence that I can tackle things from (sort of) first principles, and that I can build things to satisfy a user requirement, using whichever technology makes sense to use.

Skip the tests!

Controversially, one of the things I learnt was that for personal projects, especially when you are still designing the system, it can make sense to skip the automated/unit testing…

This isn’t something I expected, as I am fully sold on the benefits of automated, unit and integration tests, and am normally the guy at work that goes above and beyond to prove that his code works, and is robust.

However, I think given the speed at which I was iterating and changing major system design choices, a heavy and rigid set of unit tests would have slowed me down too much, and potentially caused me to run out of steam.

A big source of motivation during this project was the visible progress that was being made on the application. Anything which got in the way of that would be problematic for me.

I have a bunch of projects I have started where I have spent hours/days setting up the perfect set of integration and unit tests, and agonising over how to incorporate CI/CD into my workflow.

These projects never went anywhere… so yeah, one of the big findings was to be a bit sloppy about unit testing. Who knew!?!

Put your ugly baby out into the world as soon as you can

Over the course of the project, I got quite attached to my baby, and nervous about showing it to other people, lest they not understand, or worse, break it.

Fortunately I was able to overcome that fear, and as soon as I had something I could put on the App Store, I did it.

I knew there were bugs in the product, and things I wanted to change, but I released it anyway.

This turned out to be a good decision for a number of reasons.

Firstly, getting apps accepted onto the App Store is not straightforward… I spent a large part of that week doing boring admin-ey things like setting up privacy policies, adding login options (Sign in with Apple, obviously), and a bunch of other really dull things that take time.

Each fix meant another half a day delay, and had I tried to get the app locally perfect, I wouldn’t have known any of this.

Secondly, my friends broke my baby, in ways I couldn’t have anticipated.

These breakages again gave me new insights into what was worth spending time on, and made my product more robust.

It still hurt, but it was worth letting my fledgling and ugly app out into the world to fend for itself.

Iterations are good!

While I didn’t go the whole hog and set up a CD pipeline or a monorepo, or any of the other things which I think are worth doing in a larger team but take a lot of time to set up, I did use Heroku.

Heroku makes it very easy to quickly make changes to your deployed backend, and roll them back if they break things. This allowed me to move very quickly, and to iterate.

I wrote some really shit code to get the initial clients up and running.

A lot of this code got thrown away, and I actually started both the desktop and mobile applications from scratch halfway through the development process.

This, again was a good idea.

It meant I proved my ideas would work very quickly, with some very bad code, and then could build a more robust version, taking the bits which were good from the prototype.

Know your limits

After getting one client working (IOS), I went client mad, building clients for Mac, Windows, Linux and Android in one hectic week.

This was fantastic fun, and made me feel like a wizard.

Unfortunately, the following week I realised that I needed to make some pretty wide reaching changes to the underlying notes protocol…

The clients themselves were also pretty bug infested, as I had really rushed to hack them together.

This meant that I spent a few days scrabbling between clients, patching holes and trying to update them to the new protocol.

Thankfully, I realised that in order to actually build a proper solution, I needed to scale back a bit.

So I killed the Android and Desktop apps, and spent a week just focused on getting the IOS application working as well as possible, and ironing out issues in the server/notes protocol.

While this did hurt, as I was admitting I couldn’t do something, it meant that the final product was much better, and also meant that when it came time to re-animate the desktop application, I was much clearer on how a client for this new multi-notes protocol should work, and it was actually very simple.

Conclusion

Trying to make things you don’t know how to make is a lot of fun.

I really have loved this project, and I actually finished it.

For me this is huge. Generally I start things, lots of things, but I do not finish them.

I hope that this will be the start of my life as a prolific finisher of things. I guess we will see.

If you want to see the (now defunct 🙁 ) code, look here:

https://github.com/robt1019/Noted-Electron

https://github.com/robt1019/Noted-IOS

https://github.com/robt1019/Noted-Express

Noted: Let’s make an app: part 5

Ohhh man this is getting close now.

I am starting to really want to put this project to bed, and I think I will get there soon.

No time to waste so let’s recap.

If you haven’t followed along, and want to know what this project is, start here.

Where did I start?

At the start of the week I had a fairly bug free IOS client, connected to an increasingly stable server and db, with a relatively solid set of instructions for creating, updating and deleting a set of notes related to a specific user.

Authentication was still working nicely, but my app shat itself as soon as it lost internet connection.

The focus of this week was:

  • Getting IOS app into the App Store

  • Offline mode

  • Client for desktop

How’d I do?

I added createNote and noteCreated actions to my notes protocol, in order to make things more explicit, which worked nicely.

Offline mode

I came up with an initial solution which appears to work pretty well. I may change this going forward (see more below), but this is at least a start.

I keep track of the offline/online status in the IOS client. At the point a user does a note action (create, update, delete): if they are online, the update is pushed straight to the server; if they are offline, the locally stored copy of the notes is updated and the action is stored locally on the device, then all the queued-up actions are processed when the device comes back online.

This looks like this for the update action:

NotesService.swift:

    public func updateNote(id: String, title: String, body: String, prevNote: Note, context: NSManagedObjectContext) {
        // Figure out any diffs
        let titleDiff = NotesDiffer.shared.diff(notes1: prevNote.title!, notes2: title)
        let bodyDiff = NotesDiffer.shared.diff(notes1: prevNote.body!, notes2: body)
        let payload: [String: Any] = [
            "id": id,
            "title": titleDiff,
            "body": bodyDiff,
        ]
        if (self.online) {
            // we are online, push action straight to server
            self.socket?.emit("updateNote", payload)
        } else {
            // no internet :(
            // find the existing note stored in CoreData locally
            let note = Note.noteById(id: id, in: context)
            // update existing local copy of note
            Note.updateTitle(note: note!, title: title, in: context)
            Note.updateBody(note: note!, body: body, in: context)
            // delegate update responsibility to OfflineChanges service
            OfflineChanges.updateNote(payload: payload)
        }
    }

OfflineChanges.swift

    private static let key: String = "offlineUpdates"
    private static let defaults = UserDefaults.standard

    public static func updateNote(payload: Any) {
        var offlineUpdates = defaults.array(forKey: key)
        // put action and payload in an array
        let action = ["updateNote", payload]
        if (offlineUpdates != nil) {
            offlineUpdates!.append(action)
        } else {
            offlineUpdates = [action]
        }
        // store updated offline updates to user defaults
        defaults.set(offlineUpdates, forKey: key)
    }

    // loop through all stored offline updates, and push them up to server
    public static func processOfflineUpdates(socket: SocketIOClient?, done: @escaping () -> Void) {
        let offlineUpdates: [[Any]]? = defaults.array(forKey: key) as? [[Any]]
        if (offlineUpdates != nil && offlineUpdates?.count ?? 0 > 0) {
            socket?.emit("offlineUpdates", offlineUpdates!)
            socket?.once("offlineUpdatesProcessed") { data, ack in
                done()
            }
        } else {
            done()
        }

        defaults.set([], forKey: key)
    }

Then, back in the NotesService, when we reconnect, after authentication, process all the stored updates:

self.socket?.once("authenticated", callback: { _, _ in

    OfflineChanges.processOfflineUpdates(socket: self.socket) {
        self.socket?.emit("getInitialNotes")
    }

    self.socket?.once("initialNotes") {data, ack in
        let stringifiedJson = data[0] as? String
        if (stringifiedJson != nil) {
            self._onInitialNotes!(NotesToJsonService.jsonToNotesDictionary(jsonString: stringifiedJson!))
        } else {
            self._onInitialNotes!([:])
        }
    }
});

I’m overall happy with this approach.

The one thing I think I might end up changing is the explicit online/offline detection.

I think it might be more reliable to instead check that the server received the action within a set amount of time.

If it doesn’t respond with a ‘yes I got that message’, assume we are offline and queue up the action for later as detailed above.

Let’s distribute this thing! (to a tiny set of initial users)

Now that I had a client working to a level I was happy with, it was time to get it in front of people.

I dutifully signed up to Apple’s developer program, paid my fee and carried out the steps to push one of my builds to the ‘App Store connect’ dashboard.

It was quite a nice process, which after setup could be managed from within Xcode.

Apple offers a beta testing product called ‘TestFlight’, which allows you to send email invitations to people, allowing them to install your app via the ‘TestFlight’ app.

I was able to convince five people to install the app and report any issues they found.

So far, no major issues, but I don’t think that means it is bug free, alas.

Based on this extremely limited testing, I’m now pretty happy to push on and actually submit something to the App Store, and that will be the focus of next week.

Desktop

A large part of why I’m making this app is that it is something I want to use.

In order for me to actually find it useful, it needs to have a desktop client, at least on Mac.

After some brief tinkering with native MacOS tooling, I once again said “fuck it I’ll just do Electron”.

I started a new project, and pulled in the parts from my initial Electron prototype that were good.

A benefit of spending so long with the IOS client finessing the notes protocol and online/offline functionality is that it made implementing the Electron client something of a dream.

I already had the Auth stuff done, so it was a case of implementing the new master/detail views for handling multiple notes (the previous Electron app only supported one page of notes per user), coming up with a way of storing notes locally on the user’s machine, and implementing the same set of actions as on the IOS client.

Desktop local storage

I used SQLite, via the sqlite3 npm package, and put together a service for handling CRUD operations:

note-storage.service.js

const { app } = require("electron");
const path = require("path");
var sqlite3 = require("sqlite3").verbose();

const db = new sqlite3.Database(path.join(app.getPath("userData"), "notes"));

db.serialize(() => {
  db.run(`
    CREATE TABLE IF NOT EXISTS notes (
        id TEXT NOT NULL UNIQUE,
        title TEXT NOT NULL,
        body TEXT NOT NULL
    )`);
});

app.on("quit", () => {
  db.close();
});

const getNotes = (done) => {
  db.serialize(() => {
    db.all(
      `
    SELECT * FROM notes
    `,
      (err, results) => done(err, results)
    );
  });
};

const getNoteById = (id, done) => {
  db.serialize(() => {
    // parameterised query, so quotes in an id can't break the SQL
    db.get(`SELECT * FROM notes WHERE id = ?`, [id], (err, result) =>
      done(err, result)
    );
  });
};

const createNote = (note) => {
  db.serialize(() => {
    db.run(`INSERT INTO notes (id, title, body) VALUES (?, ?, ?)`, [
      note.id,
      note.title,
      note.body,
    ]);
  });
};

const updateNote = (note) => {
  db.serialize(() => {
    db.run(`UPDATE notes SET title = ?, body = ? WHERE id = ?`, [
      note.title,
      note.body,
      note.id,
    ]);
  });
};

const deleteNote = (id) => {
  db.serialize(() => {
    db.run(`DELETE FROM notes WHERE id = ?`, [id]);
  });
};

const deleteAll = () => {
  db.serialize(() => {
    db.run("DROP TABLE IF EXISTS notes");
  });
};

module.exports = {
  getNotes,
  getNoteById,
  createNote,
  updateNote,
  deleteNote,
  deleteAll,
};
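Usage from the Electron main process then looks something like this (illustrative values):

const noteStorage = require("./note-storage.service");

noteStorage.createNote({ id: "note-1", title: "Shopping", body: "milk, eggs" });
noteStorage.getNotes((err, notes) => console.log(err || notes));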

Compared to the higher level abstraction of IOS’s CoreData framework, it was really nice just writing SQL queries.

Also as a general note, going back to untyped JavaScript was lovely.

I really enjoy TypeScript at work, and typed languages generally I think are a great way of cutting down on bugs, communicating design decisions with other developers, and generally making more robust, predictable software.

That said, for prototyping/individual projects where it is just me, I love the freedom that comes with raw untyped JavaScript.

Sure, I get runtime bugs, but I can fix them quickly.

Desktop Online/Offline

This was less smooth, and actually resulted in me starting to rethink my design of the online/offline stuff generally.

First up I needed the equivalent of IOS’s UserDefaults storage, for storing any notes actions for later.

Because I’m back in my comfort zone with JavaScript/Node, I wrote my own way of storing JSON to a file locally in the place that Electron stores userData by default:

offline-updates.service.js

const { app } = require("electron");
const path = require("path");
const fs = require("fs");
const offlineUpdatesPath = path.join(
  app.getPath("userData"),
  "offline-updates.json"
);

const setUpdates = (updates) => {
  console.log(`offline updates: ${updates}`);
  fs.writeFileSync(offlineUpdatesPath, JSON.stringify(updates));
};

const getUpdates = () => {
  // read the file fresh each time (require() would cache the first result)
  if (fs.existsSync(offlineUpdatesPath)) {
    return JSON.parse(fs.readFileSync(offlineUpdatesPath, "utf8"));
  } else {
    setUpdates([]);
    return [];
  }
};

const createNote = (note) => {
  const updates = getUpdates();
  updates.push(["createNote", note]);
  setUpdates(updates);
};
const updateNote = (noteUpdate) => {
  const updates = getUpdates();
  updates.push(["updateNote", noteUpdate]);
  setUpdates(updates);
};
const deleteNote = (noteId) => {
  const updates = getUpdates();
  updates.push(["deleteNote", noteId]);
  setUpdates(updates);
};

const processOfflineUpdates = (socket) => {
  getUpdates().forEach((update) => {
    const action = update[0];
    const payload = update[1];
    console.log(
      `processing offline update ${action}, with payload: ${payload}`
    );
    socket.emit(action, payload);
  });
  setUpdates([]);
};

module.exports = {
  createNote,
  updateNote,
  deleteNote,
  processOfflineUpdates,
};

So far so good.

Next step, how to figure out whether the user is online or not.

This is grosser 🙁

network-detector.service.js

const net = require("net");

let lastEmitted = false;

let _onChange;

const checkConnection = (onChange) => {
  _onChange = onChange;
  const connection = net.connect(
    {
      port: 80,
      host: "google.com",
    },
    () => {
      // connected, so we are online; close the probe socket straight away
      connection.end();
      if (lastEmitted === false) {
        lastEmitted = true;
        _onChange(true);
      }
    }
  );
  connection.on("error", () => {
    connection.destroy();
    if (lastEmitted === true) {
      lastEmitted = false;
      _onChange(false);
    }
  });
};

const onNetworkChange = (onChange) => {
  checkConnection(onChange);
  setInterval(() => {
    checkConnection(onChange);
  }, 5000);
};

module.exports = {
  onNetworkChange,
};

Basically, every 5 seconds, try to open a TCP connection; if it succeeds, you are online, otherwise you are not. In my notes service, I can subscribe to the events emitted from this service, and do the same if (online) style checks as in the IOS app.
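The wiring into the notes service looks roughly like this (illustrative; socket and the offline-updates service are as above):

const { onNetworkChange } = require("./network-detector.service");
const offlineUpdates = require("./offline-updates.service");

let online = false;

onNetworkChange((isOnline) => {
  online = isOnline;
  if (online) {
    // back online: replay anything queued while we were offline
    offlineUpdates.processOfflineUpdates(socket);
  }
});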

The problem is the potential 5 second delay between being offline, and me knowing about it. This kind of breaks my solution as I could very easily try and send a bunch of stuff up to the server when I’m offline, and then just lose those actions completely.

At the end of last week, my thinking was that something like this might be the solution.

On the client side:

const updateNote = (prevNote, updatedNote) => {
  let serverGotTheMessage = false;
  const noteUpdate = {
    id: updatedNote.id,
    title: dmp.diff_main(prevNote.title, updatedNote.title),
    body: dmp.diff_main(prevNote.body, updatedNote.body),
  };
  // if the server has not acknowledged within a second, assume we are
  // offline and queue the update locally instead
  setTimeout(() => {
    if (!serverGotTheMessage) {
      noteStorage.updateNote(updatedNote);
      offlineUpdates.updateNote(noteUpdate);
    }
  }, 1000);
  socket.emit("updateNote", noteUpdate, () => {
    serverGotTheMessage = true;
  });
};

On the server side:

socket.on("updateNote", (payload, ack) => {
  if (ack) {
    ack();
  }
  debug(`updating ${userId} note ${payload.id}`);
  updateNote(userId, payload, io);
});

I guess check back next week to see if that’s a good idea or not…

What next?

Figure out a more robust solution for offline/online updates.

As has been the case for the last 2 weeks, I really need to get the IOS app submitted to the App Store. Until I do that I can’t really move on from this project!

As part of that, I will need to set up separate production environments for my db, server and Auth0 stuff. Currently everything is running in a single environment.

If I have time, continue working on the Electron app, and figure out as soon as possible how best to distribute it/make installers etc. Focus on Mac OS for now.

The main priority is getting the IOS app done though. Wish me luck.

Noted: Let’s make an app: part 4

Where did I start?

Some slightly broken clients with installers for Mac, IOS, Android and Windows.

None of the clients worked offline, or even failed gracefully offline.

Each client, and the database, supported a single string value of notes per user.

Where did I end up?

Single IOS client, capable of saving multiple notes per user and, when connected to the internet, silently syncing changes back to the server, from where they are pushed out to any connected and authenticated clients. On first loading the app/after re-authenticating, the latest server version of the notes is pushed out to the client.

Wait, but that’s less than you started with!?!

Yes… this week was a bittersweet experience.

In return for a much improved user experience, I had to severely limit my ambitions client-wise, and focus on getting clear in my head the general steps for editing and syncing a set of notes per user between devices.

Before I started, I knew I wanted to focus on getting one client polished and distribution ready.

I also knew that offline functionality was a priority, as was reducing the amount of data I was sending around.

Previously, I was sending the user’s entire notes string every time they made a change. This was unsubtle when it came to resolving conflicts (multiple clients logged in with simultaneous updates), whereby whichever update came in last completely overwrote the previous one.

What a difference a week makes

First up, diffing.

I make heavy use of git at work, and so I had been thinking for a while that there must be a way of just sending round ‘diffs’ between a user’s notes, and then patching the existing notes with any incoming diffs. I hadn’t thought it through very much, but I was pretty sure I wanted to be sending round diffs, rather than the user’s entire notes.

I started by playing around locally with the ‘diff’ and ‘patch’ utilities included with Unix, and so accessible via my terminal emulator.

This was pretty promising, and I was able to reduce a series of changes to a file to a series of line numbers, with additions and deletions etc. So far so good.

These utils are not easily accessible in the various environments I am programming in, however, so I kept looking.

After a bunch of dead ends, I came across Google’s ‘diff match patch‘ library, which was originally written to power Google Docs and, very kindly, has been open sourced.

Google Docs kind of represents an idealised version of the kind of synchronising between clients that I am looking for, so I was pretty excited by the prospect of using the same diffing engine as they did.

After some experimentation, it seemed like this would suit my needs very well. Getting it installed on the server was very simple (npm). The package had a lot of weekly installs, the linked GitHub repo had very few unresolved issues, and everything generally seemed pretty stable and reliable.

These were my initial thoughts about how this might start to look:

const { diff_match_patch } = require("diff-match-patch");

const dmp = new diff_match_patch();

let text = "Poodles can play piano";
const text2 = "Oodles can play potties\n\n\n\nwhich not a lot of people know";
const text3 = "Poodles can fully retract their eyelids";

// Both text2 and text3 clients have initial text value

let diff1 = dmp.diff_main(text, text2);
dmp.diff_cleanupSemantic(diff1);
console.log(diff1);

let diff2 = dmp.diff_main(text, text3);
dmp.diff_cleanupSemantic(diff2);
console.log(diff2);

// They are both offline so queue up the change for when they are online again,
// keeping track of the diff between their latest known server value, and their
// current value

// text2 client comes back online, and sends up its diff
const patches1 = dmp.patch_make(text, diff1);
console.log(patches1);

text = dmp.patch_apply(patches1, text)[0];

// text is updated to reflect first diff
console.log(text);

// text3 client comes back online, and sends up its diff
const patches2 = dmp.patch_make(text, diff2);
console.log(patches2);

// text is updated to reflect second diff, applied to text
text = dmp.patch_apply(patches2, text)[0];

console.log(text);

This all worked perfectly in JavaScript land.

Getting it working on the Swift (IOS) side was less pleasant however…

There were some community maintained packages for Swift, Objective-C etc. which could be installed via CocoaPods, but they were out of date, poorly maintained and riddled with issues. I couldn’t get any of them to compile, or even install in some cases (one in particular seemed to require getting the code from a private GitHub repository, which I didn’t have access to…)

It was very frustrating, and is exactly the kind of stuff which makes me start to question whether software development is right for me.

My initial solution was to do all diffing on the server, and have the client send the previous notes, and the updated notes in each update.

This had the benefit of allowing multiple clients to simultaneously update notes, and having Google’s magic diffing take care of resolving conflicts and patching together its best guess of the end results. But it also meant that instead of sending all the user’s notes, I was sending all the user’s notes twice.

Not ideal.

After a bunch more research, and a lot of annoyance (this coincided with a mid-thirties centigrade London day, which is hell), I discovered that you can run JavaScript from within Swift projects via a natively supported module called JavaScriptCore. This is my current Swift class which exposes the bits of diff match patch I need:

import UIKit
import JavaScriptCore

class NotesDiffer: NSObject {

    static let shared = NotesDiffer()
    private let vm = JSVirtualMachine()
    private let context: JSContext

    override init() {
        let jsCode = try? String.init(contentsOf: Bundle.main.url(forResource: "Noted.bundle", withExtension: "js")!)
        self.context = JSContext(virtualMachine: self.vm)
        self.context.evaluateScript(jsCode)
    }

    func diff(notes1: String, notes2: String) -> [Any] {
        let jsModule = self.context.objectForKeyedSubscript("Noted")
        let diffMatchPatch = jsModule?.objectForKeyedSubscript("diffMatchPatch")
        let result = diffMatchPatch!.objectForKeyedSubscript("diff_main").call(withArguments: [notes1, notes2])
        return (result!.toArray())
    }

    func patch(notes1: String, diff: Any) -> String {
        let jsModule = self.context.objectForKeyedSubscript("Noted")
        let diffMatchPatch = jsModule?.objectForKeyedSubscript("diffMatchPatch")
        let patch = diffMatchPatch!.objectForKeyedSubscript("patch_make").call(withArguments: [notes1, diff])
        let patched = diffMatchPatch!.objectForKeyedSubscript("patch_apply").call(withArguments: [patch, notes1])
        return (patched?.toArray()[0])! as! String
    }
}

I won’t go into how I did it, as this guy has a much better article, but I am using npm and webpack to pull the same package I am using on the server into the Swift client. Nifty stuff.
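For a rough idea of the shape: the bundle just needs to expose diff-match-patch on a global the Swift side can grab, so the webpack entry can be as small as this (a guess at the essentials; the NotesDiffer class above expects a Noted global with a diffMatchPatch property, which corresponds to webpack’s output.library being set to "Noted"):

// webpack entry point, bundled with output.library: "Noted"
import { diff_match_patch } from "diff-match-patch";

// exposed to Swift as Noted.diffMatchPatch
export const diffMatchPatch = new diff_match_patch();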

Diffing done (for now).

What do you mean you don’t have any internet!?!

After diffing, the next big issue was handling patchy network/offline mode.

Virgin Media decided to completely shit the bed at the end of the previous week, one of the results being that I was painfully confronted with how useless my app is without a reliable internet connection.

I had an idea that what would work from the client’s perspective is this:

Updating notes:

1) Update notes

2) Save

3) Am I online? If yes, push to server, if no, store the update locally

Coming back online

1) Back online, Joy

2) Do I have any pending offline changes? If yes, shoot them up to the server, otherwise do nothing

I used IOS’ ‘User Defaults’ to store the changes as a dictionary with previous notes, and updated notes, and checked it when coming back online.

It worked pretty nicely.

Unfortunately, two days into the week, I faced up to the reality that in order for this app to be in any way useful, it needs to support multiple notes per user, which necessitated some pretty wide reaching changes.

As part of these changes, the offline functionality got removed, and hasn’t been added back yet.

Let’s get into those changes now

All of the data modelling

As mentioned, I realised I wanted/needed to support multiple notes per user.

I played around with different options, and settled on the idea that the underlying database would store something like this for a given user:

    {
      "order": ["id1", "id2", "id5", "id3"],
      "details": {
        "id1": {
          "title": "notes 1",
          "body": "first notes here"
        },
        "id2": {
          "title": "notes 2",
          "body": "second notes here"
        },
        "id3": {
          "title": "notes 3",
          "body": "third notes here"
        },
        "id5": {
          "title": "notes 5",
          "body": "fifth notes here"
        }
      }
    }

and clients would be responsible for maintaining a local copy of the structure, in whatever format makes sense to them, and then pushing updates up to the server, so it can update its underlying model of the user’s notes, and push the changes out to all connected clients.
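To make that concrete, here is a sketch (not the real server code) of the server’s side of an update, reusing diff-match-patch in the same way as the playground snippet above. notesByUser stands in for the database, and io.to(userId) assumes one Socket.IO room per user:

const { diff_match_patch } = require("diff-match-patch");
const dmp = new diff_match_patch();

// patch `text` with an incoming diff match patch diff
const applyDiff = (text, diff) =>
  dmp.patch_apply(dmp.patch_make(text, diff), text)[0];

const updateNote = (userId, payload, io) => {
  const note = notesByUser[userId].details[payload.id];
  note.title = applyDiff(note.title, payload.title);
  note.body = applyDiff(note.body, payload.body);
  // broadcast the patched note to every client connected for this user
  io.to(userId).emit("noteUpdated", payload);
};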

First problem, I had no idea how to store structured data locally to an IOS device. I had used User Defaults to store simple string data with some success, but it was a blunt instrument, and would not be suitable for storing a potentially large JSON object.

After some digging, I decided to go with what looked like the most IOS-ey, Apple recommended approach, and use the CoreData framework:

“Core Data is an object graph and persistence framework provided by Apple in the macOS and iOS operating systems”

Which seemed good because, hopefully, it would be well documented and widely used, and also I might be able to reuse the models in any upcoming macOS client work.

It is an abstraction over some sort of persistent device storage. I don’t actually know what form the data is saved in, whether it uses SQLite or not, and I don’t really care at the moment. The main benefit from my perspective is that I hoped I would be able to define some sort of model, corresponding to the JSON data structure above, and keep it in sync with the server.

This has been largely successful, but was quite painful to get started.

I didn’t find CoreData particularly intuitive, possibly because my professional interactions with data persistence have been limited largely to Redux stores and cookies/local storage etc. on the front end.

What I ended up with was a very simple CoreData model: a single Note entity with id, title and body fields. Which doesn’t look too impressive!

I then added a bunch of static methods to the generated Note class (which is a subclass of NSManagedObject, provided by the CoreData framework, meaning it can get persisted and stuff). These methods support the custom read/write operations I needed for my application. Currently they look like this:

extension Note {

    public static func noteById(id: String, in context: NSManagedObjectContext) -> Note? {
        let serverNotesFetch = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")
        serverNotesFetch.predicate = NSPredicate(format: "id = %@", id)

        do {
            let fetchedNotes = try context.fetch(serverNotesFetch) as! [Note]
            print(fetchedNotes)
            if(fetchedNotes.count > 0) {
                print("found a note")
                return fetchedNotes[0]
            } else {
                print("no note found")
                return nil
            }
        } catch {
            fatalError("Failed to fetch note by id: \(error)")
        }
    }

    static func create(in managedObjectContext: NSManagedObjectContext, noteId: String? = nil, title: String? = nil, body: String? = nil){
        let newNote = self.init(context: managedObjectContext)
        newNote.id = noteId ?? UUID().uuidString
        newNote.title = title ?? ""
        newNote.body = body ?? ""

        do {
            try  managedObjectContext.save()
        } catch {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    static func updateTitle(note: Note, title: String, in managedObjectContext: NSManagedObjectContext) {
        note.title = title

        do {
            try managedObjectContext.save()
        } catch {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    static func updateBody(note: Note, body: String, in managedObjectContext: NSManagedObjectContext) {
        note.body = body

        do {
            try managedObjectContext.save()
        } catch {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    public static func deleteAllNotes(in managedObjectContext: NSManagedObjectContext) {
        // Create Fetch Request
        let fetchRequest = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")

        // Create Batch Delete Request
        let batchDeleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)

        do {
            try managedObjectContext.execute(batchDeleteRequest)

        } catch {
            // Error Handling
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    public static func deleteAllNotesApartFrom(ids: [String], in managedObjectContext: NSManagedObjectContext) {
        print("deleting all notes apart from \(ids)")
        let notesFetch = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")
        notesFetch.predicate = NSPredicate(format: "NOT id IN %@", ids)
        do {
            let fetchedNotes = try managedObjectContext.fetch(notesFetch) as! [Note]
            fetchedNotes.forEach { note in
                managedObjectContext.delete(note)
            }
            try managedObjectContext.save()
        } catch {
            fatalError("Failed to fetch note by id: \(error)")
        }
    }

    public static func deleteNote(note: Note, in managedObjectContext: NSManagedObjectContext) {
        managedObjectContext.delete(note)
        do {
            try managedObjectContext.save()
        } catch {
            // Error Handling
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }
}

extension Collection where Element == Note, Index == Int {
    func delete(at indices: IndexSet) {
        indices.forEach {
            NotesService.shared.deleteNote(id: self[$0].id!)
        }
    }
}

CoreData provides a query language, via NSPredicate objects, for filtering collections.

In my app, I maintain a local collection of Note objects, which I can make changes to and save to the device at key points.

Data flow

At this point, things started to click a bit, and to feel very familiar. I refactored my big monolithic SwiftUI view into a bunch of smaller views, managed the application flow and state from the main ContentView, and registered callbacks with the services responsible for auth and socket connections, as well as with child views. Once a child view had an update, it sent an event letting the ContentView know, and the ContentView passed the update on to the relevant place.

Because I haven’t tackled offline functionality yet, there is currently a one-way data flow: the client sends update actions up to the server, the server updates the database and, if successful, sends the updates out to all connected clients, which then apply the changes to their local copy of the notes.

It works really nicely!

The ever changing notes protocol

Because I am now supporting multiple notes, and I have the ability to apply diffs on both the client and the server, my protocol for communicating notes updates has changed somewhat.
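The diffs themselves come from the diff-match-patch library. As a rough illustration of the idea (using the diff-match-patch npm package; the example strings are made up), one side makes a patch and the other applies it:

const DiffMatchPatch = require("diff-match-patch");
const dmp = new DiffMatchPatch();

// Sender: work out how the old title became the new title,
// and serialise the patch so it can go in a socket event.
const patches = dmp.patch_make("my note", "my updated note");
const patchText = dmp.patch_toText(patches);

// Receiver: apply the patch to its own copy of the title.
const [patched] = dmp.patch_apply(dmp.patch_fromText(patchText), "my note");
console.log(patched); // "my updated note"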

I have ended up with the following events:

"updateNote", {
  "id": "noteId1",
  "title: "a diff match patch diff",
  "body: "a diff match patch diff"
}


"noteUpdated", {
  "id": "noteId1",
  "title: "a diff match patch diff",
  "body: "a diff match patch diff"
}


"deleteNote", "noteId1"


"noteDeleted", "noteId1"


"initialNotes",
"{
  "details": {
    "id1": {
      "title": "notes 1",
      "body": "first notes here"
    },
    "id2": {
      "title": "notes 2",
      "body": "second notes here"
    },
    "id3": {
      "title": "notes 3",
      "body": "third notes here"
    },
    "id5": {
      "title": "notes 5",
      "body": "fifth notes here"
    }

}"

I think I probably also want a “createNote” and “noteCreated” action for increased clarity, but even as things stand, these actions have allowed me to keep the server and client(s) in sync very nicely, assuming there is an internet connection.
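On the server side, handling these events is essentially a relay. A minimal sketch (the saveNote/removeNote helpers are stand-ins for my real database calls):

io.on("connection", (socket) => {
  socket.on("updateNote", (update) => {
    // Persist the change, then fan it out to every connected client.
    saveNote(update.id, update.title, update.body).then(() => {
      io.emit("noteUpdated", update);
    });
  });

  socket.on("deleteNote", (noteId) => {
    removeNote(noteId).then(() => io.emit("noteDeleted", noteId));
  });
});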

What next?

Same as every week… App stores! I really want to actually get a beta/testing app that I can send out to people by the end of this week. Beyond that:

  • Offline mode.
  • A client for desktop.
  • Actually test my code… figure out how to unit/UI test SwiftUI projects.

Last week was exhausting, and I imagine this week will be the same.

Noted: Let’s make an app: part 3

Week three was… tough

Where did I start?

  • Electron app, with installers for Windows and Mac OS
  • Android app
  • IOS app
  • Hosted Node.JS backend, using socket.io to manage socket/long-polling connections with all the clients above
  • Authentication/Authorization handled by connecting to Auth0’s identity as a service stuff

What were my goals?

  • Actually understand what I’ve implemented for Authentication/Authorization
  • Flush out any bugs in the system
  • Start moving towards app stores/distributing installers for the desktop apps

What did I do?

  • I spent half the week buried in the OAuth 2.0 and OpenId connect specifications, which was painful, but worth doing.

  • I made a sweet logo.

  • I started sniffing around app stores, and realised that in order to distribute something to app stores that isn’t buggy, I need to scale back my ambitions a bit. Rather than supporting lots of different clients, I need to get one or two clients working really reliably, both online and offline.

What did I learn?

The first half of the week was dedicated to OAuth 2.0.

This is what I know now that I didn’t know before:

OAuth came about because of a need to avoid the old antipattern where one application asks for your username and password for another application, and then logs in as you to grab your data.

It’s a way of allowing users to consent to giving access to some of their data held by one provider, to another provider, without giving access to everything. It allows Application B to direct users to Application A, where they can sign in to Application A and give consent to share some of the data held by Application A with Application B.

So in an OAuth flow, there are typically three actors. In our case, we’re going to call them Bob, Application A, and Application B.

Bob – uses Application A and Application B. He has an account with Application A, and at some point added his contact details to it: his name and email address, as well as some personal information.

Application A – has some stored information on Bob:

{
  Users: [
    {
      Bob: {
        name: 'Bob Cratchet',
        email: 'bobster666@aol.com',
        faveColor: 'Vermillion',
        faveFood: 'Tacos'
      }
    }
  ]
}

Application B wants to know Bob’s email address, full name, and favourite colour, but they want to get it from Application A, rather than asking Bob for it directly.

Application A knows who Bob is, and is able to confirm, based on some information Bob holds (username and password for instance), that Bob is Bob.

Application A also has some data about Bob, namely his full name, email address, favourite colour and favourite food.

Application A should not just share this data with anyone, they should only do so with Bob’s consent.

So, Bob goes to Application B’s website/app/client of some sort, and starts using it. At some point, Application B says to Bob,

Hey you have an account with Application A, would you like to share some information from Application A with us? We’d like to know who you are, and what your favourite colour is, and Application A knows that already. If you do then we’ll be able to reflect your preferences in our site/app/client of some sort by turning it your favourite colour!

Bob is like

shit yeah I really want to see Application B in my favourite colour.

Application B sends Bob to a page that Application A has prepared on their domain for just this purpose. Bob signs in to this page, by providing his username and password, so that Application A knows who he is. Application A then says something along the lines of

Hey Bob, Application B says they want to see your name, email address and favourite colour, are you cool with that? We won’t let them see anything else, and you can revoke their access whenever you want

Bob again is like

shit yeah I really want to see Application B in my favourite colour

Because Bob agreed to give access to his data, Application A then sends a special code to Application B.

Application B can’t use this code to get Bob’s favourite colour yet, because Application A can’t be sure that the code they sent hasn’t been nabbed by some other nefarious entity in transit.

In order to access Bob’s favourite colour, Application B has to confirm who they are with Application A, normally via some sort of shared secret.

However it is implemented, Application B has to be able to prove to Application A that they are in fact Application B. Once they have done that, they can exchange their special code which says ‘Bob says Application B can see his favourite colour and stuff’, for a special token.

This token can then be used by application B (who have proved who they are), to get Bob’s data from Application A.

Which is pretty neat.

In order for this system to work, there need to be reliable ways of proving that each of these actors is who they say they are, before passing anything sensitive to them.

These ways of figuring out who everyone is are where a lot of the technical complexity comes in.

Depending on where Application B is accessed from, the steps vary quite a bit.

In order to avoid this post becoming monstrously long, I’m just going to detail what happens when you have loud mouthed native clients that can’t keep a secret.

In my case, I will have a native mobile client, and a native desktop client, and I want to control access to an API on a separate domain, by ensuring that the user that authenticates with Application A via any of the clients above, is only allowed access to their own notes.

The tricky bit with these clients is knowing how to trust that they are who they say they are.

It is easy to give them the special code (the one which can be exchanged for a special token which actually gives them access to whatever resource we are interested in), but how do we prove that they are who they say they are?

If Application B is a server side web application, this is easy. When Application B registers with Application A, they agree on a secret, which only they know. Application B can keep this secret safely on the server, where it can’t be accessed by other people, and just send it along with its special code. Then Application A will be like

ah yeah I know this guy, here have your token
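At the HTTP level, that exchange is just a POST to Application A’s token endpoint. A hedged sketch of the standard OAuth 2.0 authorization code grant (URLs, ids and secrets are all made up):

const axios = require("axios");

// Confidential client exchange: the client_secret lives safely on
// Application B's server and proves who is asking.
axios
  .post("https://application-a.example.com/oauth/token", {
    grant_type: "authorization_code",
    code: "the-special-code-from-the-redirect",
    client_id: "application-b",
    client_secret: "shared-secret-only-a-and-b-know",
    redirect_uri: "https://application-b.example.com/callback",
  })
  .then((response) => {
    // response.data.access_token is the special token.
  });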

Native and mobile applications on the other hand are deeply untrustworthy. Any secret they are given access to can be pulled out of the code, making it kind of pointless to give them a secret at all.

Luckily there is a solution:

PKCE (pixie) to the rescue!

https://tools.ietf.org/html/rfc7636

Each of the different ways of managing the OAuth process has a different name. The one you should use for mobile/native clients is called ‘Authorization Code Flow with Proof Key for Code Exchange’.

PKCE stands for Proof Key for Code Exchange, and it is a way for a public, untrustworthy client to authenticate itself with Application A.

To implement it, your leaky client has to provide a code_verifier and a code_challenge.

In a JavaScript application, you can do this like so:

const crypto = require("crypto");

// Make a buffer URL-safe base64 (the base64url encoding PKCE requires).
function base64URLEncode(buffer) {
  return buffer
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=/g, "");
}

function sha256(buffer) {
  return crypto.createHash("sha256").update(buffer).digest();
}

// The verifier is a high-entropy random value; the challenge is its SHA-256 hash.
const verifier = base64URLEncode(crypto.randomBytes(32));
const challenge = base64URLEncode(sha256(verifier));

The verifier is a base64 encoded, randomly generated value which is difficult to guess.

The challenge is a hash of the verifier.

These are dynamically created, and so there is no static value that can be pulled out of the code, unlike with a static secret.

When Application B sends the user off to Application A to log in etc, they also send the code_challenge, which is a hash of the randomly generated verifier value.

Then, when Application B needs to authenticate themselves with Application A, they send the code_verifier, which is the original randomly generated value.

Application A can then hash the code_verifier value, using the same hashing algorithm that Application B did when they created the challenge, and check that they get the same result.

This then means that Application A can be pretty sure that Application B is in fact Application B, and that they are the same instance of Application B that asked for access to Application A’s data earlier.
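In other words, the check on Application A’s side boils down to recomputing a hash. A minimal sketch in Node.js (assuming the challenge was stored when the authorization request arrived):

const crypto = require("crypto");

// Hash the incoming code_verifier exactly as the client did when it
// created the challenge, then compare against the stored challenge.
function verifierMatchesChallenge(codeVerifier, storedChallenge) {
  const recomputed = crypto
    .createHash("sha256")
    .update(codeVerifier)
    .digest("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=/g, "");
  return recomputed === storedChallenge;
}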

Again, very neat.

Securing access to notes:

The above section kind of explains how my various clients prove that they are who they say they are; however, it doesn’t cover how my backend notes API uses that information.

I needed to make sure that only Bob can access his notes, which are delivered to him via a socket.io connection.

In order to do that I added my API to the API section of my Auth0 dashboard, and my clients to the Apps section of the dashboard.

I think this means that in the example above, my Auth0 instance/’tenant’ (their words) becomes a layer on top of Google/email authentication, and essentially acts as my gateway for controlling access to my API. It is responsible for issuing access tokens that can be used to access my API.

Auth0 handles authenticating the user, either via their own hosted email/password database that you get when you use their service, or via a 3rd party (Google in my case). Once they are authenticated, Auth0 creates a JWT access token, signed with their private key, and sends it along with any information you have requested, using the flow detailed above.

Once the client has its access token, it sends it to my notes server to prove to my server that Bob is Bob.

In my case, so long as I can be sure that the user is who they say they are, they can access their notes. I don’t require any more information than that.

Because I am using sockets to deliver notes, rather than just requesting data via REST endpoints, my steps look like this:

CLIENT:

1) Authenticate Bob, via Google, or email login, in return for an access token (handled by Auth0).

2) Establish socket connection with notes server

3) Send access token via custom socket event:

    socket.emit("authenticate", { token });

4) If the socket connection is closed for any reason, start at step 1 again.

SERVER

1) On a connection event, wait 15 seconds for a follow-up ‘authenticate’ event with a JWT access token. If no ‘authenticate’ event arrives, terminate the connection.

2) On receiving an ‘authenticate’ event, with a JWT access token, verify that it was signed by Auth0 and is valid.

3) If it is valid, give the user access to the socket.io room corresponding to their username, as verified by Auth0, otherwise close the connection.

4) Set a timeout and close the connection once the expiry time in the JWT access token is reached.
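Step 4 is just a timer derived from the token’s exp claim. A sketch, assuming the decoded JWT is to hand after verification:

// exp is the standard JWT expiry claim, in seconds since the epoch.
const msUntilExpiry = decodedToken.exp * 1000 - Date.now();
const expiryTimer = setTimeout(() => socket.disconnect(true), msUntilExpiry);

// Don't leave the timer dangling if the client disconnects first.
socket.on("disconnect", () => clearTimeout(expiryTimer));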

Refresh tokens

The other thing I had to wrap my head around was refresh tokens.

These are long lived tokens that can be exchanged, without redoing the initial OAuth steps, for another access token.

Because they are pretty powerful tokens, they have to be stored securely on the client device in question.

I have ended up using rotating refresh tokens, which means that after the initial OAuth steps (Authorization Code Flow with Proof Key for Code Exchange), the client just requests a new access token when the old one runs out, using its securely stored refresh token. When the new access token is returned, the client also gets a new refresh token, and the old one is invalidated.
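The exchange itself is another POST to the token endpoint, this time with the refresh_token grant. A rough sketch (tenant URL, client id and storage helpers are all hypothetical):

const axios = require("axios");

axios
  .post("https://my-tenant.auth0.com/oauth/token", {
    grant_type: "refresh_token",
    client_id: "my-client-id",
    refresh_token: getStoredRefreshToken(), // hypothetical secure storage helper
  })
  .then(({ data }) => {
    // With rotation enabled, a new refresh token comes back too.
    storeTokens(data.access_token, data.refresh_token); // hypothetical
  });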

The access tokens themselves don’t last very long.

The reason for using this system is to let users benefit from not having to log in all the time, while maintaining a reasonable level of safety: it minimises the potential nasty effects of someone somehow getting hold of an access token or a refresh token, because each is only valid for a small window.

What now?

I really want a finished product, so I need to descope some stuff. I’m out of proof of concept mode, and into minimal viable product mode.

Android is out, Windows is out.

IOS is in, as is, potentially, a native Mac OS client (as opposed to Electron).

To get the product to a stage where it is actually useful, it needs to handle situations where there is no/patchy network access better, and also needs a lot of polish.

Next week the focus is navigating Apple’s distribution channels (App Store), coming up with a strategy for offline vs online notes, getting the IOS app polished and ready for distribution, and potentially starting the Mac OS native client.

This week was tough but productive, I hope next week will be similar.

Noted: Let’s make an app: part 2

Another week has passed, and my baby is taking shape.

It has been a week of dizzying highs, crushing lows, and a general feeling of drowning in a sea of choice.

The end result is that I have a prototype/MVP installable app, across Windows, Mac OS, IOS and Android, with login via email address or Google account.

Which I’m overall pretty ecstatic about!

Behold, my (probably highly bug ridden) product:

The eagle eyed among you may notice that I am using my notes app to write a todo list for things I need to do to improve the notes app.

INCEPTION STYLE

Where did we start?

At the beginning of the week, I had a hosted API, connected to a database, and an installable IOS app, which could be installed to multiple devices, and could sync notes between them.

It did its syncing by repeatedly calling the notes API via a GET request, polling for changes, which was gross. It also had a save button to update the notes via a PUT request.

The protocol was a GET/PUT on a single notes endpoint.

It had no authentication, so anyone using the app got the same notes, and could edit them.

So what’s the plan

I wanted to continue with the ‘get shit working fast end to end‘ approach, in order to properly test that what I was trying to do was possible with the tech choices I have made. To meet my MVP requirements, I still needed:

  • IOS client
  • Android client
  • Windows client
  • Mac OS client
  • API which allows syncing between devices (ideally by pushing changes)
  • Authentication/login so you only get to edit your own notes
  • Figure out how to distribute the installable bits… (app stores and the like)

Let’s fix this broken notes protocol

As you can see from the list above, a fairly key requirement was to sort out a way of syncing changes to all clients, ideally without resorting to the basic, heavyweight polling solution I had in place.

A major theme of this week was one of bewilderment at the sheer dazzling array of technology choices available, each with their own subtle pros and cons.

This began with the choice of technology for syncing notes with the server.

During the previous week I played around with Server Sent Events, which would have suited my requirements pretty well, but didn’t play nicely with the IOS client, and didn’t seem to be widely used (meaning I couldn’t find many good tutorials/resources on how to use them with mobile clients).

I also tried web sockets, which, again, would have been pretty nice to work with in a web app, but didn’t integrate well with IOS.

I then lucked out (I think), and gave socket.io a go.

https://socket.io/

In their own words:

Socket.IO is a library that enables real-time, bidirectional and event-based communication between the browser and the server. It consists of:

  • a Node.js server
  • a Javascript client library for the browser (which can be also run from Node.js)

The client will try to establish a WebSocket connection if possible, and will fall back on HTTP long polling if not.

So it is a really nicely written wrapper around web sockets, with a fallback.

It has great documentation, and they actively maintain clients for Swift, Java and Javascript (among others), which means I can use it easily in all of the places I need to (more on this later).
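Getting a connection going is pleasingly small. A minimal sketch (port and event names made up):

// Server (Node.js): listen on a port and echo events back to all clients.
const io = require("socket.io")(3000);

io.on("connection", (socket) => {
  socket.on("ping", (msg) => io.emit("pong", msg));
});

// Client (Node.js or browser): connect and round-trip a message.
const socket = require("socket.io-client")("http://localhost:3000");
socket.on("connect", () => socket.emit("ping", "hello"));
socket.on("pong", (msg) => console.log(msg)); // "hello"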

After integrating this with the IOS application, my process now looked like:

1) Establish a socket connection with the notes server

2) On updates from the server, update the local client ‘notes’ variable

3) On the client saving notes, push them up to the server via the socket connection

4) On the server receiving an update, push the changes out to all connected clients

Desktop clients

I was pretty sure I could smash together an Android client if needed, and that would use Java/Kotlin, so I knew it would work with socket.io.

I was less sure about what to do about desktop clients though.

I have worked at companies using Electron, and I have a decent amount of experience with web technologies, so first of all I hammered together a crappy Electron application, just to have another client running. It was pretty simple to get going.

My previous gut feeling about Electron was that it can be slow, resource hungry, and generally a bit hacky to develop with. I wanted to explore other options.

My preference given unlimited time would (I think) be to write desktop applications from scratch for each platform.

Given my simple UI (just a text input screen), and the fact that I develop on a Mac, I think I could have made a Mac OS application without too much hassle.

Windows, on the other hand, was far less obvious to get started with, and would likely have involved virtual machines/dual booting and other annoyances.

That is not to say it wouldn’t be possible, it absolutely would, but the developer experience would probably be painful. I did briefly consider actually finding a used Windows laptop and using that for the Windows client…

However, I want to move quickly, and minimise frustration, so that seemed like a no.

I then read a bunch of opinionated blog posts, which said, variously, some form of:

  • ‘Electron is trash, write native apps instead’
  • ‘Electron is trash, use this other framework instead’
  • ‘Electron is trash, but it’s the best option we have, and who has time for the annoying, costly alternatives anyway’
  • ‘Electron is great’

I wasted half a day trying to get JavaFX to work and gave up.

Then I revisited my Electron app, cleaned it up a bit, integrated it with socket.io (which was super easy), implemented their suggestions for making the app a bit more secure, and, after some experimentation with other libraries, used Electron Builder to produce installers for Mac and Windows.

It was about as seamless as any development I have ever done, and so after dicking around trying other solutions, I did what many companies/individuals seem to have done, and said ‘Fuck it I’ll just use Electron‘, a choice that I’m pretty happy with on balance.

If I get curious in the future, I can always try my hand at moving to native apps, or some other framework, but for now it’s just so convenient.

Cool, two clients up and running, very satisfying. No horrific things have appeared yet, and it seems like socket.io is a semi sensible choice.

I still had one connection for everyone, meaning there was just one set of notes, and it was editable by everyone who installed the app.

There was no getting around it, it was time for authentication to rear its ugly head…

Authentication/Identity/My brain is melting

TL;DR After thoroughly confusing myself about Auth0 and OpenId Connect, I have ended up using Auth0, and their universal login, to authenticate users on both mobile and desktop clients.

The net result of this is that clients can easily get a JWT access token, which the server can validate to ensure they are who they say they are, and then grant them access to a socket connection with only their own notes.

This is achieved via socket.io’s rooms functionality, where each user is assigned to a room named after their userId (socket.join(userId)), along with this handy library for validating JWTs for use with socket.io: https://www.npmjs.com/package/socketio-jwt

const socketioJwt = require("socketio-jwt");
const jwks = require("jwks-rsa");
const debug = require("debug")("notes");
const Notes = require("./notes.model"); // illustrative path to my Mongoose model

// server is the underlying http server instance.
const io = require("socket.io")(server);

io.sockets
  .on(
    "connection",
    socketioJwt.authorize({
      secret: jwks.expressJwtSecret({
        cache: true,
        rateLimit: true,
        jwksRequestsPerMinute: 5,
        jwksUri: process.env.JWKS_URI,
      }),
      timeout: 15000, // 15 seconds to send the authentication message
    })
  )
  .on("authenticated", (socket) => {
    const userId = socket.decoded_token.sub;

    socket.join(userId);

    Notes.findOne({ username: userId }).then((notes) => {
      if (notes) {
        io.to(userId).emit("notesUpdated", notes);
      }
    });

    socket.on("updateNotes", (payload) => {
      debug(`updating ${userId} notes`);
      Notes.find({ username: userId }).then((notes) => {
        if (notes && notes.length) {
          Notes.updateOne(
            {
              username: userId,
            },
            {
              username: userId,
              content: payload.content,
            }
          ).then(() =>
            io.to(userId).emit("notesUpdated", {
              username: userId,
              content: payload.content,
            })
          );
        } else {
          Notes.create({
            username: userId,
            content: payload.content,
          }).then(() => {
            io.to(userId).emit("notesUpdated", {
              username: userId,
              content: payload.content,
            });
          });
        }
      });
    });

    // Registered once per connection, not nested inside the updateNotes
    // handler (where it would re-register on every update).
    socket.on("disconnect", () => {
      debug("user disconnected");
    });
  });

From the Electron/any JavaScript client’s perspective, this looks like this:

  connectToNotesStream: () => {
    socket = io(apiIdentifier);

    const token = getAccessToken();
    socket.on("connect", () => {
      socket.emit("authenticate", { token });

      socket.on("authenticated", () => {
        socket.on("notesUpdated", (data, ack) => {
          _updateNotes(data.content);
        });
      });
    });
  }

Mobile clients

Similar to my decision anxiety around desktop technologies, I flirted heavily with the idea of using React Native, Ionic, or Flutter etc. before eventually deciding that my UI needs were so minimal that I may as well just develop native apps.

This project is largely a learning endeavour, and I like the idea of getting a shallow understanding of the IOS and Android development ecosystems, rather than Facebook’s wrapper around them.

I can easily develop IOS and Android applications from my Mac, socket.io has well supported Swift and Java clients, and Auth0 integrates (kind of) well with Java and Swift.

How I picked things

Reading this through, it seems like I made decisions quite easily, but I’ll be honest this week has been overwhelming.

There are so many different ways to build mobile and desktop apps these days.

In order to make decisions, I had to come up with some criteria for how to pick one technology over another. The criteria ended up being pretty simple:

  • Does it work well with socket.IO and Auth0?
  • Will it give me knowledge of an underlying protocol/technology that could be useful?

As an example, React Native fulfilled number one (good Auth0 support), but not number two (I don’t want to learn React Native, I’d rather understand the native platforms a bit).

Electron also only fulfilled number one, but there wasn’t a clear alternative given the time constraints I have (at some point I’ll have to get a job again!).

I’d still have preferred to have a go at making a native windows application, and I still might…

Although Electron also makes it super easy to make Linux apps, which is also kind of appealing.

I’m basically making this tool for myself, and I regularly work on Windows, Mac and Linux, as well as switch between IOS and Android phones…

Conclusion

Another great week.

Next I want to design a logo, come up with a more unified UI/UX flow for the different platforms, to make them look more similar.

Also try and actually get some stuff in various app store(s).

Wish me luck…

Noted: Let’s make an app: part 1

I’ve gone and got all inspired and decided I’ll make and deploy an application, because that’s something I’ve never done.

The idea:

A cross platform notes application, a la Mac notes app, but one that works on Mac, Windows, Android and IOS

The approach:

I’ve always talked a lot of shit at work about how the best way to make products is to quickly deploy something that is ugly and only minimally functional, and then improve it.

The thinking behind this is that you prove very quickly to yourself whether what you’re trying to do is feasible, and flush out any major issues.

Quite often, this has not been feasible in paid employment.

To avoid being one of those annoying neck beard types who always has a better way of doing things, but never seems to have actually built anything, I’m going to try actually doing things the way I think they should be done, and see if I am full of shit or not.

Dear diary (week one):

I’ve been thinking vaguely about what I want this app to do, and how I might do it for a while.

Much though I love Angular and front end development, I’m also really really really really bored of writing the same kind of code over and over again.

I’m curious about the back end, and other UI bits, so one of the constraints for this project was that it should have as little to do with single page applications as possible, and as much to do with the other parts of making an application as I can stomach.

My initial MVP specification for the app was:

  • An IOS app which can edit and save notes to a database via an API

My initial shopping list of things I thought I might need was:

  • An API (maybe Node.js?)

  • Some sort of database

  • An IOS app

  • Some way of deploying the app

At the end of the week, somewhat impossibly, I have:

  • An Express API, deployed to Heroku for getting notes from a database, and pushing notes to a database
  • An IOS app which I can install on multiple phones (not in the app store), which syncs across multiple clients

Not bad!!!

Video here

How did I get here?

I was pretty sure that I wanted to start with the API and the database, so that’s what I did. I knew a tiny bit of express, but not much more than that.

I found a few guides, but this was by far the best:

https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs

The MDN docs continue to amaze me with how good they are.

If you take nothing else from this post, remember the following:

*If you want to find out how to use a web technology, start with the MDN docs.*

That guide basically chose three of the main components of the app for me, which I was truly fine with.

  • Heroku for deployment (cloud)
  • Express for the API
  • MongoDB Atlas for the database (cloud)

My API has one route ‘/notes/:username’, with GET and PUT methods on it:

const router = require("express").Router();
const Notes = require("./notes.model"); // illustrative path to the Mongoose model

router
  .route("/:username")
  .get((req, res) => {
    Notes.findOne({ username: req.params.username })
      .then((notes) => {
        res.json(notes);
      })
      .catch((err) => res.status(400).json("Error: " + err));
  })
  .put((req, res) => {
    Notes.find({ username: req.params.username }).then((notes) => {
      if (notes && notes.length) {
        Notes.update(
          {
            username: req.params.username,
          },
          {
            username: req.params.username,
            content: req.body.content,
          }
        )
          .then((doc) => res.json(doc))
          .catch((err) => res.status(400).json("Error " + err));
      } else {
        Notes.create({
          username: req.params.username,
          content: req.body.content,
        })
          .then((doc) => res.json(doc))
          .catch((err) => res.status(400).json("Error " + err));
      }
    });
  });
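To sanity check the route, I can hit it with a couple of requests. A quick sketch using axios (the local URL and username are made up):

const axios = require("axios");
const base = "http://localhost:5000/notes"; // hypothetical local server

// PUT creates or updates the notes for a user, GET reads them back.
axios
  .put(`${base}/testuser`, { content: "hello notes" })
  .then(() => axios.get(`${base}/testuser`))
  .then((res) => console.log(res.data.content)); // "hello notes"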

I got it working locally first, and convinced myself that I was happy enough that my very basic database schema sort of worked (here it is below in Mongoose, the library I am using to talk to the MongoDB database).

const mongoose = require("mongoose");
const Schema = mongoose.Schema;

const notesSchema = new Schema({
  username: {
    type: String,
    required: true,
    unique: true,
    trim: true,
  },
  content: {
    type: String,
    required: true,
  },
});

const Notes = mongoose.model("Notes", notesSchema);

module.exports = Notes;

Deploying the API to Heroku was a dream, and means I can now make lots of tiny incremental changes to my back end, which is nice!

Once that was done, I cobbled together a native IOS application, and wrote some truly horrible code to make it poll the API for changes.

So far, the approach is working, and it’s very motivating getting something visibly working so quickly.

What’s next?

Next I want to sort out the communication between the server and the client(s) better. Polling is gross, and I want a better solution, which allows the server to push any changes out to all clients. After initial digging, socket.io looks promising.

I did attempt to get server sent events working (see below), but I couldn’t find a way of getting them to work nicely on IOS, as the API for consuming events currently only exists natively in browsers. There are solutions out there, but I wasn’t comfortable dumping somebody else’s massive lump of Objective-C into my project as a dependency, as I honestly don’t understand the IOS side of things anywhere near well enough for that.

https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events

I also want another client up and running, probably a desktop application for Mac. I’m considering using Electron for this.

Further down the line, I’ll need some sort of auth/identity stuff.

So far this project has been great, and I’m looking forward to another week of it.

HTF do I reverse a linked list (video)

I previously made a post with some shittily drawn diagrams, trying to make sense of how linked list reversal works in JavaScript.

However, some of my diagrams were slightly inaccurate on second look, and also, in order to visualise something like this properly, it helps me to see it in motion.

To that end I’ve coded up a new solution, inspired by this video:

https://www.youtube.com/watch?time_continue=119&v=O0By4Zq0OFc&feature=emb_title

and filmed a shaky video in order to animate my terrible drawings. This is the code for the linked list, complete with reversal method:

class LinkedList {
  constructor(val) {
    this.head = {
      value: val,
      next: null,
    };
    this.tail = this.head;
  }

  append(val) {
    this.tail.next = {
      value: val,
      next: null,
    };
    this.tail = this.tail.next;
  }

  print() {
    const displayArray = [];
    let node = this.head;
    while (node) {
      displayArray.push(node.value);
      node = node.next;
    }
    console.log(displayArray);
  }

  reverse() {
    let current = this.head;
    let previous = null;
    let next = null;

    while (current) {
      next = current.next;
      current.next = previous;
      previous = current;
      current = next;
    }

    this.head = previous;
  }
}

const list = new LinkedList(1);
list.append(2);
list.append(3);
list.print();

list.reverse();
list.print();

And here is the reverse method visualised with the magic of video (I would watch at 2X speed if you value your time in any way):

sweet sweet animation video

Also, if you really want to figure out how this code works, I highly recommend making your own cut outs and moving them around. Beats staring at a screen.

HTF do I reverse a linked list (JavaScript edition)

This is one of those algorithms questions that makes my stomach hurt.

If you haven’t seen this before prepare to put your brain through a blender.

How do you reverse a linked list in JavaScript?

Firstly, you have to have a linked list. Luckily I have one I prepared earlier. JavaScript doesn’t have a linked list implementation, so we have to make our own:

class LinkedList {
  constructor(val) {
    this.head = {
      value: val,
      next: null,
    };
    this.tail = this.head;
  }

  append(val) {
    this.tail.next = {
      value: val,
      next: null,
    };
    this.tail = this.tail.next;
  }

  print() {
    const displayArray = [];
    let node = this.head;
    while (node) {
      displayArray.push(node.value);
      node = node.next;
    }
    console.log(displayArray);
  }
}

const list = new LinkedList(1);
list.append(2);
list.append(3);
list.append(4);
list.append(5);

list.print();

When this is run it will print [1,2,3,4,5], which under the hood looks like 1=>2=>3=>4=>5, with a reference to the start (head), and the end (tail) of the chain, or snake, or whatever we decide we want to think of our list as.

We can easily get to the first element, and we can easily get to the last element.

Cooooool. Now reverse it, turn those arrows the other way.

My first attempt at this worked quite nicely, but is wasteful with memory (a sin in computer science). Here it is:

  reverse() {
    const buffer = [];

    let node = this.head;

    while(node) {
      buffer.push(node.value);
      node = node.next;
    }

    this.head = {
      value: buffer.pop(),
      next: null,
    };

    node = this.head;

    while(buffer.length) {
      node.next = {
        value: buffer.pop(),
        next: null,
      }
      node = node.next;
    }
  }

So just stick all the values from the linked list into an array, then pull items off the array and rewrite our linked list from scratch!

I know, I’m a genius.

The runtime is O(n), which is good.

The space complexity is also O(n), which is less good. At this point, if you are interviewing, I imagine the interviewer (if they even let you code this basic solution) will be tutting and saying things like ‘but can we do better?’, scratching their chin and gesticulating with whiteboard markers.

And it turns out that yes, there is a better solution. The bad news is that it is utterly brain melting to actually understand. If you get what this is doing straight off the bat, all power to you.

    reverse() {
      // first tracks the node in front, second the node behind it;
      // the old head will become the new tail.
      let first = this.head;
      this.tail = this.head;
      let second = first.next;

      while(second) {
        // Remember where we were going, then flip the arrow backwards.
        const temp = second.next;
        second.next = first;
        first = second;
        second = temp;
      }

      // The old head is now the tail, so cut off its forward pointer,
      // and the last node we visited (first) is the new head.
      this.head.next = null;
      this.head = first;
    }

In my case it just made me want to cry and shout and stomp my feet and yell ‘BUT WHY DO I HAVE TO DO THIS STUPID SHIT. I’M A FRONTEND DEVELOPER, LET ME DO SOME CSS OR SOMETHING. I’LL JUST GOOGLE IT. THIS IS BULLSHIT!!!?!!’.

Obviously, we are onto a good problem. The ones that expand your brain and make you better at problem solving normally start with this sort of reaction. The key to getting gud is to gently soothe and comfort your troubled brain, and trust that given enough sustained concentration on the problem, some youtube videos, and a bit of sleep, this will start to make sense.

Let’s draw some pictures and match them to code (DISCLAIMER: These diagrams are not totally accurate… I’ve left them now though as this is useful as a record for me of how my thinking progressed. For a better illustration of how this algorithm works, see this post) :

We’re going to use a smaller list for this run through, because I can’t be arsed to draw endless diagrams.

We want to turn 1=>2=>3 into 3=>2=>1.

What

a time

to be alive.

let first = this.head;
this.tail = this.head;
let second = first.next;

Just assigning things for now, seems OK. See brain, nothing to be scared about. Right?

‘THIS IS BULLSHIT. WHAT THE FUCK ARE WE EVEN BOTHERING WITH THIS FOR. LET’S PLAY ROCKET LEAGUE’

Our diagram is in this state now:

OK. Next bit.

      while(second) {
        const temp = second.next;
        second.next = first;
        first = second;
        second = temp;
      }

This appears to be a loop. Let’s try one iteration first, and update our diagram.

second is pointing to the element with the value 2 for us currently, so it is truthy. We enter the while loop, filled with trepidation.

const temp = second.next

second.next = first

first = second

second = temp

Oooh wow, this kind of looks like progress, we swapped 2 and 1, and our variables are all pointing at sensible looking things. Apart from second, which is not pointing at the second element, and head which is just floating in the middle of the list. I think this is one of the tricky parts of linked list questions, the variable names end up not making sense mid way through the problem.

Let’s continue.

second is pointing to the element with the value 3, so it is truthy. We enter the while loop again, brimming with new found confidence.

const temp = second.next

temp gets set to second.next, which is now null.

second.next = first

first = second

second = temp

Second is null now, so we avoid the while loop this time round.

this.head.next = null

So we sever that endless loop setting head.next to null

this.head = first

We’ve set up our newly ordered linked list, now we just need to make it official by updating the head reference.


Anddddd done.

I’m going to need to revisit this, but this scribbling exercise has already helped. I hope it helps you too.

Alternatively, go watch this video https://www.youtube.com/watch?time_continue=119&v=O0By4Zq0OFc&feature=emb_title as it explains the approach waaaaay better than these scribbles do. Wish I’d watched that first…