Noated (Noted) Debrief

I am done with my latest project.

“Noated” (Noted was taken…) is live and available to install, and I’m feeling reflective. (Note: I have since removed it from the App Store, as I didn’t have time to maintain the project 🙁)

I started out a month and a half ago trying to make:

A cross platform notes application, a la Mac notes app, but one that works on Mac, Windows, Android and IOS

I ended up with:

A cross platform notes application, a la Mac notes app, but one that works on Mac, Windows, and IOS

So no Android… yet.

Given that I personally use Windows, IOS and Mac, I’m pretty happy though.

See it in action here!

https://youtu.be/zfC4clJYuw4

I was feeling a bit mopey this morning as the realisation hit that my distracting/all-consuming side project is over and I need to start thinking about paid work again.

So I decided to get a bit of closure and do a debrief of the project.

What were my goals?

To try something new.

To expose myself to problems I haven’t encountered professionally, and hopefully get a more rounded view of the development process.

What did I achieve?

Most, but not all, of my goals. I got a note taking app which largely fulfils my personal requirements for a note taking app, and I did it in a reasonable amount of time.

RIP Android client…

What did I learn?

A big part of why I did this project was to expose myself to problem spaces outside of my usual domain (frontend, specifically single page applications).

In this respect it was very successful.

I gained a working knowledge of Swift, SwiftUI, real time streaming protocols (via Socket.IO) and OAuth 2.0.

I learnt that making networked apps which can work offline as well is hard.

I was also forced to figure out how to actually deploy/distribute desktop and mobile applications. In this case I ended up using the Apple App Store for the mobile app, and I hosted the installer for the MacOS desktop client on GitHub, using their Large File Storage solution. Because I wanted to make it possible for other people to install my app, I had to notarise the Mac installer, which I managed with the help of this excellent tutorial:

https://kilianvalkhof.com/2019/electron/notarizing-your-electron-application/
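For anyone curious, the gist of that approach is an electron-builder afterSign hook that calls electron-notarize once the app is signed. A rough sketch (the bundle id and environment variable names here are placeholders, not my real values):

// scripts/notarize.js, wired up via the "afterSign" option in the electron-builder config
const { notarize } = require("electron-notarize");

exports.default = async function notarizing(context) {
  const { electronPlatformName, appOutDir } = context;
  // only the Mac build needs notarising
  if (electronPlatformName !== "darwin") {
    return;
  }
  const appName = context.packager.appInfo.productFilename;
  await notarize({
    appBundleId: "com.example.noated", // placeholder, not my real bundle id
    appPath: `${appOutDir}/${appName}.app`,
    appleId: process.env.APPLE_ID,
    appleIdPassword: process.env.APPLE_ID_PASSWORD, // an app-specific password
  });
};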

As I got closer to deploying the app, and was faced with the reality of random people being able to install my app, I realised I would need multiple environments for my database, server and Identity Provider stuff.

This forced me to figure out things I would never have thought about, such as how to use environment variables in an IOS application via Xcode.

The thing I found most enjoyable about the whole process was that I didn’t start with a list of technologies I wanted to use. Rather, I started with what I wanted to build: a note taking application, accessible on multiple platforms, that could be used offline and online, and synced between devices.

This meant that at every step, my technology choices were driven by what would be the best option for building the solution quickly and stably, not by what was the hot new thing.

It felt like I was exploring uncharted territory, relying on my prior knowledge and Google-fu to make it through the project unscathed.

So while I did get exposed to problems which will probably be useful to have grappled with in my professional life (mainly Auth/Identity stuff, and Electron), the main thing I gained from this project was a new confidence that I can tackle things from (sort of) first principles, and that I can build things to satisfy a user requirement, using whichever technology makes sense to use.

Skip the tests!

Controversially, one of the things I learnt was that for personal projects, especially when you are still designing the system, it can make sense to skip the automated/unit testing…

This isn’t something I expected, as I am fully sold on the benefits of automated unit and integration tests, and am normally the guy at work who goes above and beyond to prove that his code works, and is robust.

However, I think given the speed at which I was iterating and changing major system design choices, a heavy and rigid set of unit tests would have slowed me down too much, and potentially caused me to run out of steam.

A big source of motivation during this project was the visible progress that was being made on the application. Anything which got in the way of that would be problematic for me.

I have a bunch of projects I have started where I have spent hours/days setting up the perfect set of integration and unit tests, and agonising over how to incorporate CI/CD into my workflow.

These projects never went anywhere… so yeah, one of the big findings was to be a bit sloppy about unit testing. Who knew!?!

Put your ugly baby out into the world as soon as you can

Over the course of the project, I got quite attached to my baby, and nervous about showing it to other people, lest they not understand, or worse, break it.

Fortunately I was able to overcome that fear, and as soon as I had something I could put on the App Store, I did it.

I knew there were bugs in the product, and things I wanted to change, but I released it anyway.

This turned out to be a good decision for a number of reasons.

Firstly, getting apps accepted onto the App Store is not straightforward… I spent a large part of this week doing boring admin-ey things like setting up privacy policies, adding login options (Sign in with Apple, obviously), and a bunch of other really dull things that take time.

Each fix meant another half-day delay, and had I tried to get the app locally perfect, I wouldn’t have known about any of this.

Secondly, my friends broke my baby, in ways I couldn’t have anticipated.

These breakages again gave me new insights into what was worth spending time on, and made my product more robust.

It still hurt, but it was worth letting my fledgling and ugly app out into the world to fend for itself.

Iterations are good!

While I didn’t go the whole hog and set up a CD pipeline or a monorepo, or any of the other things which I think are worth doing in a larger team, but take a lot of time to set up, I did use Heroku.

Heroku makes it very easy to quickly make changes to your deployed backend, and roll them back if they break things. This allowed me to move very quickly, and to iterate.

I wrote some really shit code to get the initial clients up and running.

A lot of this code got thrown away, and I actually started both the desktop and mobile applications from scratch halfway through the development process.

This, again, was a good idea.

It meant I proved my ideas would work very quickly, with some very bad code, and then could build a more robust version, taking the bits which were good from the prototype.

Know your limits

After getting one client working (IOS), I went client mad, building clients for Mac, Windows, Linux and Android in one hectic week.

This was fantastic fun, and made me feel like a wizard.

Unfortunately, the following week I realised that I needed to make some pretty wide reaching changes to the underlying notes protocol…

The clients themselves were also pretty bug infested, as I had really rushed to hack them together.

This meant that I spent a few days scrabbling between clients, patching holes and trying to update them to the new protocol.

Thankfully, I realised that in order to actually build a proper solution, I needed to scale back a bit.

So I killed the Android and Desktop apps, and spent a week just focused on getting the IOS application working as well as possible, and ironing out issues in the server/notes protocol.

While this did hurt, as I was admitting I couldn’t do something, it meant that the final product was much better. It also meant that when it came time to re-animate the desktop application, I was much clearer on how a client for this new multi-notes protocol should work, and it ended up being very simple.

Conclusion

Trying to make things you don’t know how to make is a lot of fun.

I really have loved this project, and I actually finished it.

For me this is huge. Generally I start things, lots of things, but I do not finish them.

I hope that this will be the start of my life as a prolific finisher of things. I guess we will see.

If you want to see the (now defunct 🙁 ) code, look here:

https://github.com/robt1019/Noted-Electron

https://github.com/robt1019/Noted-IOS

https://github.com/robt1019/Noted-Express

Noted: Let’s make an app: part 5

Ohhh man this is getting close now.

I am starting to really want to put this project to bed, and I think I will get there soon.

No time to waste so let’s recap.

If you haven’t followed along, and want to know what this project is, start here.

Where did I start?

At the start of the week I had a fairly bug-free IOS client, connected to an increasingly stable server and db, with a relatively solid set of instructions for creating, updating and deleting a set of notes related to a specific user.

Authentication was still working nicely, but my app shat itself as soon as it lost internet connection.

The focus of this week was:

  • Getting IOS app into the App Store

  • Offline mode

  • Client for desktop

How’d I do?

I added createNote and noteCreated actions to my notes protocol, in order to make things more explicit, which worked nicely.

Offline mode

I came up with an initial solution which appears to work pretty well. I may change this going forward (see more below), but this is at least a start.

I keep track of the offline/online status in the IOS client. At the point a user performs a note action (create, update, delete): if they are online, the update is pushed straight to the server; if they are offline, the app updates the locally stored copy of the notes and stores the action locally on the device, then processes all the queued-up actions when the device comes back online.

For the update action, it looks like this:

NotesService.swift:

    public func updateNote(id: String, title: String, body: String, prevNote: Note, context: NSManagedObjectContext) {
        // Figure out any diffs
        let titleDiff = NotesDiffer.shared.diff(notes1: prevNote.title!, notes2: title)
        let bodyDiff = NotesDiffer.shared.diff(notes1: prevNote.body!, notes2: body)
        let payload: [String: Any] = [
            "id": id,
            "title": titleDiff,
            "body": bodyDiff,
        ]
        if (self.online) {
            // we are online, push action straight to server
            self.socket?.emit("updateNote", payload)
        } else {
            // no internet :(
            // find the existing note stored in CoreData locally
            let note = Note.noteById(id: id, in: context)
            // update existing local copy of note
            Note.updateTitle(note: note!, title: title, in: context)
            Note.updateBody(note: note!, body: body, in: context)
            // delegate update responsibility to OfflineChanges service
            OfflineChanges.updateNote(payload: payload)
        }
    }

OfflineChanges.swift:

    private static let key: String = "offlineUpdates"
    private static let defaults = UserDefaults.standard

    public static func updateNote(payload: Any) {
        var offlineUpdates = defaults.array(forKey: key)
        // put action and payload in an array
        let action = ["updateNote", payload]
        if (offlineUpdates != nil) {
            offlineUpdates!.append(action)
        } else {
            offlineUpdates = [action]
        }
        // store updated offline updates to user defaults
        defaults.set(offlineUpdates, forKey: key)
    }

    // loop through all stored offline updates, and push them up to server
    public static func processOfflineUpdates(socket: SocketIOClient?, done: @escaping () -> Void) {
        let offlineUpdates: [[Any]]? = defaults.array(forKey: key) as? [[Any]]
        if (offlineUpdates != nil && offlineUpdates?.count ?? 0 > 0) {
            socket?.emit("offlineUpdates", offlineUpdates!)
            socket?.once("offlineUpdatesProcessed") { data, ack in
                done()
            }
        } else {
            done()
        }

        defaults.set([], forKey: key)
    }

Then, back in the NotesService, when we reconnect, after authentication, process all the stored updates:

self.socket?.once("authenticated", callback: { _, _ in

    OfflineChanges.processOfflineUpdates(socket: self.socket) {
        self.socket?.emit("getInitialNotes")
    }

    self.socket?.once("initialNotes") {data, ack in
        let stringifiedJson = data[0] as? String
        if (stringifiedJson != nil) {
            self._onInitialNotes!(NotesToJsonService.jsonToNotesDictionary(jsonString: stringifiedJson!))
        } else {
            self._onInitialNotes!([:])
        }
    }
});

I’m overall happy with this approach.

The one thing I think I might end up changing is the explicit online/offline detection.

I think it might be more reliable to instead check that the server received the action within a set amount of time.

If it doesn’t respond with a ‘yes I got that message’, assume we are offline and queue up the action for later as detailed above.

Let’s distribute this thing! (to a tiny set of initial users)

Now that I had a client working to a level I was happy with, it was time to get it in front of people.

I dutifully signed up to Apple’s developer program, paid my fee and carried out the steps to push one of my builds to the ‘App Store Connect’ dashboard.

It was quite a nice process, which after setup could be managed from within Xcode.

Apple offers a beta testing product called ‘TestFlight’, which lets you send email invitations to people so they can install your app via the ‘TestFlight’ app.

I was able to convince five people to install the app and report any issues they found.

So far, no major issues, but I don’t think that means it is bug-free, alas.

Based on this extremely limited testing, I’m now pretty comfortable actually submitting something to the App Store, and that will be the focus of next week.

Desktop

A large part of why I’m making this app is that it is something I want to use.

In order for me to actually find it useful, it needs to have a desktop client, at least on Mac.

After some brief tinkering with native MacOS tooling, I once again said “fuck it I’ll just do Electron”.

I started a new project, and pulled in the parts from my initial Electron prototype that were good.

A benefit of spending so long with the IOS client finessing the notes protocol and online/offline functionality is that it made implementing the Electron client something of a dream.

I already had the Auth stuff done, so it was a case of implementing the new master/detail views for handling multiple notes (the previous Electron app only supported one page of notes per user), coming up with a way of storing notes locally on the user’s machine, and implementing the same set of actions as on the IOS client.

Desktop local storage

I used SQLite, via the sqlite3 npm package, and put together a service for handling CRUD operations:

note-storage.service.js

const { app } = require("electron");
const path = require("path");
var sqlite3 = require("sqlite3").verbose();

const db = new sqlite3.Database(path.join(app.getPath("userData"), "notes"));

db.serialize(() => {
  db.run(`
    CREATE TABLE IF NOT EXISTS notes (
        id TEXT NOT NULL UNIQUE,
        title TEXT NOT NULL,
        body TEXT NOT NULL
    )`);
});

app.on("quit", () => {
  db.close();
});

const getNotes = (done) => {
  db.serialize(() => {
    db.all(
      `
    SELECT * FROM notes
    `,
      (err, results) => done(err, results)
    );
  });
};

const getNoteById = (id, done) => {
  db.serialize(() => {
    db.get(
      `
    SELECT * FROM notes
    WHERE id = ?
    `,
      [id],
      (err, result) => done(err, result)
    );
  });
};

const createNote = (note) => {
  db.serialize(() => {
    // parameterised queries avoid the quoting/injection bugs you get from
    // interpolating note content straight into the SQL string
    db.run(
      `
    INSERT INTO notes (id, title, body)
    VALUES(?, ?, ?)
    `,
      [note.id, note.title, note.body]
    );
  });
};

const updateNote = (note) => {
  db.serialize(() => {
    db.run(
      `
        UPDATE notes
        SET title = ?,
            body = ?
        WHERE id = ?
        `,
      [note.title, note.body, note.id]
    );
  });
};

const deleteNote = (id) => {
  db.serialize(() => {
    db.run(
      `
        DELETE FROM notes
        WHERE id = ?
        `,
      [id]
    );
  });
};

const deleteAll = () => {
  db.serialize(() => {
    db.run("DROP TABLE IF EXISTS notes");
  });
};

module.exports = {
  getNotes,
  getNoteById,
  createNote,
  updateNote,
  deleteNote,
  deleteAll,
};
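To give a flavour of how this service gets used from the Electron main process, here’s a hypothetical snippet (not lifted from the real app):

const noteStorage = require("./note-storage.service");

// create a note, then read everything back
noteStorage.createNote({ id: "id1", title: "notes 1", body: "first notes here" });
noteStorage.getNotes((err, notes) => {
  if (err) {
    return console.error(err);
  }
  console.log(notes); // [{ id: "id1", title: "notes 1", body: "first notes here" }]
});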

Compared to the higher level abstraction of IOS’s CoreData framework, it was really nice just writing SQL queries.

Also, as a general note, going back to untyped JavaScript was lovely.

I really enjoy TypeScript at work, and I think typed languages generally are a great way of cutting down on bugs, communicating design decisions to other developers, and making more robust, predictable software.

That said, for prototyping/individual projects where it is just me, I love the freedom that comes with raw untyped JavaScript.

Sure, I get runtime bugs, but I can fix them quickly.

Desktop Online/Offline

This was less smooth, and actually resulted in me starting to rethink my design of the online/offline stuff generally.

First up I needed the equivalent of IOS’s UserDefaults storage module, for storing any notes actions for later.

Because I’m back in my comfort zone with JavaScript/Node, I wrote my own way of storing JSON to a file locally in the place that Electron stores userData by default:

offline-updates.service.js

const { app } = require("electron");
const path = require("path");
const fs = require("fs");
const offlineUpdatesPath = path.join(
  app.getPath("userData"),
  "offline-updates.json"
);

const setUpdates = (updates) => {
  console.log(`offline updates: ${updates}`);
  fs.writeFileSync(offlineUpdatesPath, JSON.stringify(updates));
};

const getUpdates = () => {
  if (fs.existsSync(offlineUpdatesPath)) {
    // read the file fresh each time (require() would cache the first result,
    // so we would keep seeing stale updates after clearing the queue)
    return JSON.parse(fs.readFileSync(offlineUpdatesPath, "utf8"));
  } else {
    setUpdates([]);
    return [];
  }
};

const createNote = (note) => {
  const updates = getUpdates();
  updates.push(["createNote", note]);
  setUpdates(updates);
};
const updateNote = (noteUpdate) => {
  const updates = getUpdates();
  updates.push(["updateNote", noteUpdate]);
  setUpdates(updates);
};
const deleteNote = (noteId) => {
  const updates = getUpdates();
  updates.push(["deleteNote", noteId]);
  setUpdates(updates);
};

const processOfflineUpdates = (socket) => {
  getUpdates().forEach((update) => {
    const action = update[0];
    const payload = update[1];
    console.log(
      `processing offline update ${action}, with payload: ${payload}`
    );
    socket.emit(action, payload);
  });
  setUpdates([]);
};

module.exports = {
  createNote,
  updateNote,
  deleteNote,
  processOfflineUpdates,
};

So far so good.

Next step, how to figure out whether the user is online or not.

This is grosser 🙁

network-detector.service.js

const net = require("net");

let lastEmitted = false;

let _onChange;

const checkConnection = (onChange) => {
  _onChange = onChange;
  const connection = net.connect(
    {
      port: 80,
      host: "google.com",
    },
    () => {
      // the connect succeeded, so we are online; close the probe socket
      connection.end();
      if (lastEmitted === false) {
        lastEmitted = true;
        _onChange(true);
      }
    }
  );
  connection.on("error", () => {
    if (lastEmitted === true) {
      lastEmitted = false;
      _onChange(false);
    }
  });
};

const onNetworkChange = (onChange) => {
  checkConnection(onChange);
  setInterval(() => {
    checkConnection(onChange);
  }, 5000);
};

module.exports = {
  onNetworkChange,
};

Basically, every 5 seconds, try to open a connection; if it succeeds, you are online, otherwise you are not. In my notes service, I can subscribe to the events emitted from this service, and do the same if (online) style checks as in the IOS app.

The problem is the potential 5 second delay between going offline and me knowing about it. This kind of breaks my solution, as I could very easily try to send a bunch of stuff up to the server when I’m offline, and then just lose those actions completely.

At the end of last week, my thinking was that something like this might be the solution.

On the client side:

const updateNote = (prevNote, updatedNote) => {
  let serverGotTheMessage = false;
  const noteUpdate = {
    id: updatedNote.id,
    title: dmp.diff_main(prevNote.title, updatedNote.title),
    body: dmp.diff_main(prevNote.body, updatedNote.body),
  };
  setTimeout(() => {
    if (!serverGotTheMessage) {
      noteStorage.updateNote(updatedNote);
      offlineUpdates.updateNote(noteUpdate);
    }
  }, 1000);
  socket.emit("updateNote", noteUpdate, () => {
    serverGotTheMessage = true;
  });
};

On the server side:

socket.on("updateNote", (payload, ack) => {
  if (ack) {
    ack();
  }
  debug(`updating ${userId} note ${payload.id}`);
  updateNote(userId, payload, io);
});

I guess check back next week to see if that’s a good idea or not…

What next?

Figure out a more robust solution for offline/online updates.

As has been the case for the last 2 weeks, I really need to get the IOS app submitted to the App Store. Until I do that I can’t really move on from this project!

As part of that, I will need to set up separate production environments for my db, server and Auth0 stuff. Currently everything has been done in one environment.

If I have time, continue working on the Electron app, and figure out as soon as possible how best to distribute it/make installers etc. Focus on Mac OS for now.

The main priority is getting the IOS app done though. Wish me luck.

Noted: Let’s make an app: part 4

Where did I start?

Some slightly broken clients with installers for Mac, IOS, Android and Windows.

None of the clients worked offline, or even failed gracefully offline.

Each client, and the database, supported a single string value of notes per user.

Where did I end up?

Single IOS client, capable of saving multiple notes per user and, when connected to the internet, silently syncing the changes back to the server, where they are pushed out to any connected and authenticated clients. On first loading the app/after re-authenticating, the latest server version of the notes is pushed out to the client.

Wait, but that’s less than you started with!?!

Yes… this week was a bittersweet experience.

In return for a much improved user experience, I had to severely limit my ambitions client-wise, and focus on getting clear in my head the general steps for editing and syncing a set of notes per user between devices.

Before I started, I knew I wanted to focus on getting one client polished and distribution ready.

I also knew that offline functionality was a priority, as was reducing the amount of data I was sending around.

Previously, I was sending the user’s entire notes string every time they made a change. This was unsubtle when it came to resolving conflicts (multiple clients logged in with simultaneous updates), whereby whichever update came in last completely overwrote the previous one.

What a difference a week makes

First up, diffing.

I make heavy use of git at work, and so I had been thinking for a while that there must be a way of just sending round ‘diffs’ between a user’s notes, and then patching the existing notes with any incoming diffs. I hadn’t thought it through very much, but I was pretty sure I wanted to be sending round diffs, rather than the user’s entire notes.

I started by playing around locally with the ‘diff’ and ‘patch’ utilities included with Unix, and so accessible via my terminal emulator.

This was pretty promising, and I was able to reduce a series of changes to a file to a series of line numbers with additions and deletions etc. So far so good.

These utils are not easily accessible in the various environments I am programming in, however, and so I kept looking.

After a bunch of dead ends, I came across Google’s ‘diff match patch’ library, which was originally written to power Google Docs and, very kindly, has been open sourced.

Google Docs kind of represents an idealised version of the kind of synchronising between clients that I am looking for, so I was pretty excited by the prospect of using the same diffing engine as they did.

After some experimentation, it seemed like this would suit my needs very well. Getting it installed on the server was very simple (npm). The package had a lot of weekly installs, the linked GitHub repo had very few unresolved issues, and everything generally seemed pretty stable and reliable.

These were my initial thoughts about how this might start to look:

const { diff_match_patch } = require("diff-match-patch");

const dmp = new diff_match_patch();

let text = "Poodles can play piano";
const text2 = "Oodles can play potties\n\n\n\nwhich not a lot of people know";
const text3 = "Poodles can fully retract their eyelids";

// Both text2 and text3 clients have initial text value

let diff1 = dmp.diff_main(text, text2);
dmp.diff_cleanupSemantic(diff1);
console.log(diff1);

let diff2 = dmp.diff_main(text, text3);
dmp.diff_cleanupSemantic(diff2);
console.log(diff2);

// They are both offline so queue up the change for when they are online again,
// keeping track of the diff between their latest known server value, and their
// current value

// text2 client comes back online, and sends up its diff
const patches1 = dmp.patch_make(text, diff1);
console.log(patches1);

text = dmp.patch_apply(patches1, text)[0];

// text is updated to reflect first diff
console.log(text);

// text3 client comes back online, and sends up its diff
const patches2 = dmp.patch_make(text, diff2);
console.log(patches2);

// text is updated to reflect second diff, applied to text
text = dmp.patch_apply(patches2, text)[0];

console.log(text);

This all worked perfectly in JavaScript land.

Getting it working on the Swift (IOS) side was less pleasant however…

There were some community maintained packages for Swift, Objective-C etc. which could be installed via CocoaPods, but they were out of date, poorly maintained and riddled with issues. I couldn’t get any of them to compile, or even install in some cases (one in particular seemed to require getting the code from a private GitHub repository, which I didn’t have access to…).

It was very frustrating, and is exactly the kind of stuff which makes me start to question whether software development is right for me.

My initial solution was to do all diffing on the server, and have the client send the previous notes, and the updated notes in each update.

This had the benefit of allowing multiple clients to simultaneously update notes and have Google’s magic diffing take care of resolving conflicts and patching together its best guess of the end result, but it also meant that instead of sending all the user’s notes, I was sending all the user’s notes twice.

Not ideal.

After a bunch more research, and a lot of annoyance (this coincided with a mid-30s centigrade London day, which is hell), I discovered that you can run JavaScript from within Swift projects via a natively supported module called JavaScriptCore. This is my current Swift class, which exposes the bits of diff match patch I need:

import UIKit
import JavaScriptCore

class NotesDiffer: NSObject {

    static let shared = NotesDiffer()
    private let vm = JSVirtualMachine()
    private let context: JSContext

    override init() {
        let jsCode = try? String.init(contentsOf: Bundle.main.url(forResource: "Noted.bundle", withExtension: "js")!)
        self.context = JSContext(virtualMachine: self.vm)
        self.context.evaluateScript(jsCode)
    }

    func diff(notes1: String, notes2: String) -> [Any] {
        let jsModule = self.context.objectForKeyedSubscript("Noted")
        let diffMatchPatch = jsModule?.objectForKeyedSubscript("diffMatchPatch")
        let result = diffMatchPatch!.objectForKeyedSubscript("diff_main").call(withArguments: [notes1, notes2])
        return (result!.toArray())
    }

    func patch(notes1: String, diff: Any) -> String {
        let jsModule = self.context.objectForKeyedSubscript("Noted")
        let diffMatchPatch = jsModule?.objectForKeyedSubscript("diffMatchPatch")
        let patch = diffMatchPatch!.objectForKeyedSubscript("patch_make").call(withArguments: [notes1, diff])
        let patched = diffMatchPatch!.objectForKeyedSubscript("patch_apply").call(withArguments: [patch, notes1])
        return (patched?.toArray()[0])! as! String
    }
}

I won’t go into how I did it, as this guy has a much better article, but I am using npm and webpack to pull the same package I am using on the server into the Swift client. Nifty stuff.

Diffing done (for now).

What do you mean you don’t have any internet!?!

After diffing, the next big issue was handling patchy network/offline mode.

Virgin Media decided to completely shit the bed at the end of the previous week, one of the results being that I was painfully confronted with how useless my app is without a reliable internet connection.

I had an idea that what would work from the client’s perspective is this:

Updating notes:

1) Update notes

2) Save

3) Am I online? If yes, push to server, if no, store the update locally

Coming back online

1) Back online, Joy

2) Do I have any pending offline changes? If yes, shoot them up to the server, otherwise do nothing

I used IOS’s ‘User Defaults’ to store the changes as a dictionary with previous notes and updated notes, and checked it when coming back online.

It worked pretty nicely.

Unfortunately, two days into the week, I faced up to the reality that in order for this app to be in any way useful, it needs to support multiple notes per user, which necessitated some pretty wide reaching changes.

As part of these changes, the offline functionality got removed, and hasn’t been added back yet.

Let’s get into those changes now

All of the data modelling

As mentioned, I realised I wanted/needed to support multiple notes per user.

I played around with different options, and settled on the idea that the underlying database would store something like this for a given user:

    {
      "order": ["id1", "id2", "id5", "id3"],
      "details": {
        "id1": {
          "title": "notes 1",
          "body": "first notes here"
        },
        "id2": {
          "title": "notes 2",
          "body": "second notes here"
        },
        "id3": {
          "title": "notes 3",
          "body": "third notes here"
        },
        "id5": {
          "title": "notes 5",
          "body": "fifth notes here"
        }
      }
    }

and clients would be responsible for maintaining a local copy of the structure, in whatever format makes sense to them, and then pushing updates up to the server, so it can update its underlying model of the user’s notes, and push the changes out to all connected clients.

First problem: I had no idea how to store structured data locally on an IOS device. I had used User Defaults to store simple string data with some success, but it was a blunt instrument, and would not be suitable for storing a potentially large JSON object.

After some digging, I decided to go with what looked like the most IOS-ey, Apple recommended approach, and use the CoreData framework:

“Core Data is an object graph and persistence framework provided by Apple in the macOS and iOS operating systems”

Which seemed good because it would hopefully be well documented and widely used, and also because I might be able to reuse the models in any upcoming macOS client work.

It is an abstraction over some sort of persistent device storage. I don’t actually know what form the data is saved in, whether it uses SQLite or not, and I don’t really care at the moment. The main benefit from my perspective was that I hoped I would be able to define some sort of model corresponding to the JSON data structure above, and keep it in sync with the server.

This has been largely successful, but was quite painful to get started.

I didn’t find CoreData particularly intuitive, possibly because my professional interactions with data persistence have been limited largely to Redux stores and Cookies/local storage etc. on the front end.

What I ended up with was a very simple CoreData model, a single Note entity with id, title and body attributes, which doesn’t look too impressive!

I then added a bunch of static methods to the generated Note class (which is a subclass of NSManagedObject, provided by the CoreData framework, meaning it can get persisted and stuff). These methods support the custom read/write operations I needed for my application. Currently they look like this:

extension Note {

    public static func noteById(id: String, in context: NSManagedObjectContext) -> Note? {
        let serverNotesFetch = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")
        serverNotesFetch.predicate = NSPredicate(format: "id = %@", id)

        do {
            let fetchedNotes = try context.fetch(serverNotesFetch) as! [Note]
            print(fetchedNotes)
            if(fetchedNotes.count > 0) {
                print("found a note")
                return fetchedNotes[0]
            } else {
                print("no note found")
                return nil
            }
        } catch {
            fatalError("Failed to fetch note by id: \(error)")
        }
    }

    static func create(in managedObjectContext: NSManagedObjectContext, noteId: String? = nil, title: String? = nil, body: String? = nil){
        let newNote = self.init(context: managedObjectContext)
        newNote.id = noteId ?? UUID().uuidString
        newNote.title = title ?? ""
        newNote.body = body ?? ""

        do {
            try  managedObjectContext.save()
        } catch {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    static func updateTitle(note: Note, title: String, in managedObjectContext: NSManagedObjectContext) {
        note.title = title

        do {
            try managedObjectContext.save()
        } catch {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    static func updateBody(note: Note, body: String, in managedObjectContext: NSManagedObjectContext) {
        note.body = body

        do {
            try managedObjectContext.save()
        } catch {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    public static func deleteAllNotes(in managedObjectContext: NSManagedObjectContext) {
        // Create Fetch Request
        let fetchRequest = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")

        // Create Batch Delete Request
        let batchDeleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)

        do {
            try managedObjectContext.execute(batchDeleteRequest)

        } catch {
            // Error Handling
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }

    public static func deleteAllNotesApartFrom(ids: [String], in managedObjectContext: NSManagedObjectContext) {
        print("deleting all notes apart from \(ids)")
        let notesFetch = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")
        notesFetch.predicate = NSPredicate(format: "NOT id IN %@", ids)
        do {
            let fetchedNotes = try managedObjectContext.fetch(notesFetch) as! [Note]
            fetchedNotes.forEach { note in
                managedObjectContext.delete(note)
            }
            try managedObjectContext.save()
        } catch {
            fatalError("Failed to fetch note by id: \(error)")
        }
    }

    public static func deleteNote(note: Note, in managedObjectContext: NSManagedObjectContext) {
        managedObjectContext.delete(note)
        do {
            try managedObjectContext.save()
        } catch {
            // Error Handling
            let nserror = error as NSError
            fatalError("Unresolved error \(nserror), \(nserror.userInfo)")
        }
    }
}

extension Collection where Element == Note, Index == Int {
    func delete(at indices: IndexSet) {
        indices.forEach {
            NotesService.shared.deleteNote(id: self[$0].id!)
        }
    }
}

CoreData provides a query language via NSPredicate objects, to filter collections.

I maintain a local collection of Note objects, which I can make changes to, and save to the device at key points.

Data flow

At this point, things started to click a bit, and to feel very familiar. I refactored my big monolithic SwiftUI view into a bunch of smaller views, managed the application flow and state from the main ContentView, and registered callbacks with the services responsible for Auth and socket connections etc., as well as with child views, which, once they were ready, sent events letting the ContentView know that they had an update, at which point it sent the update to the relevant place.

Because I haven’t tackled offline functionality yet, currently there is a one way data flow, where the client sends update actions up to the server, which updates the database, and if successful, sends the updates out to all connected clients, which then update their local copy of the notes with the changes.

It works really nicely!

The ever changing notes protocol

Because I am now supporting multiple notes, and I have the ability to apply diffs on both the client and the server, my protocol for communicating notes updates changed somewhat.

I have ended up with the following events:

"updateNote", {
  "id": "noteId1",
  "title: "a diff match patch diff",
  "body: "a diff match patch diff"
}


"noteUpdated", {
  "id": "noteId1",
  "title: "a diff match patch diff",
  "body: "a diff match patch diff"
}


"deleteNote", "noteId1"


"noteDeleted", "noteId1"


"initialNotes",
"{
  "details": {
    "id1": {
      "title": "notes 1",
      "body": "first notes here"
    },
    "id2": {
      "title": "notes 2",
      "body": "second notes here"
    },
    "id3": {
      "title": "notes 3",
      "body": "third notes here"
    },
    "id5": {
      "title": "notes 5",
      "body": "fifth notes here"
    }

}"

I think I probably also want a “createNote” and “noteCreated” action for increased clarity, but even as things stand, these actions have allowed me to keep the server and client(s) in sync very nicely, assuming there is an internet connection.
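To make that concrete, the server-side handling of an "updateNote" event looks roughly like this. This is a sketch rather than my actual code: getNote and saveNote are stand-ins for the real persistence layer, and socket, io and userId come from the authenticated socket setup described in part 3:

const { diff_match_patch } = require("diff-match-patch");
const dmp = new diff_match_patch();

// payload is { id, title, body }, where title and body are diff match patch diffs
socket.on("updateNote", (payload) => {
  // getNote/saveNote are stand-ins for the real database layer
  const note = getNote(userId, payload.id);
  // turn each diff into a patch against the stored text, and apply it
  note.title = dmp.patch_apply(dmp.patch_make(note.title, payload.title), note.title)[0];
  note.body = dmp.patch_apply(dmp.patch_make(note.body, payload.body), note.body)[0];
  saveNote(userId, note);
  // push the same diffs out to every client in this user's room
  io.to(userId).emit("noteUpdated", payload);
});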

What next?

Same as every week… App stores! I really want to actually get a beta/testing app that I can send out to people by the end of this week.

Offline mode.

Client for desktop.

Actually test my code… figure out how to unit/ui test SwiftUI projects.

Last week was exhausting, I imagine this week will be the same.

Noted: Let’s make an app: part 3

Week three was… tough

Where did I start?

  • Electron app, with installers for Windows and Mac OS
  • Android app
  • IOS app
  • Hosted Node.js backend, using socket.io to manage socket/long-polling connections with all the clients above
  • Authentication/Authorization handled by connecting to Auth0’s identity as a service stuff

What were my goals?

  • Actually understand what I’ve implemented for Authentication/Authorization
  • Flush out any bugs in the system
  • Start moving towards app stores/distributing installers for the desktop apps

What did I do?

  • I spent half the week buried in the OAuth 2.0 and OpenID Connect specifications, which was painful, but worth doing.

  • I made a sweet logo.

  • I started sniffing around app stores, and realised that in order to actually get something distributed to app stores, that isn’t buggy, I need to scale back a bit on my ambitions. Rather than supporting lots of different clients, I need to get one/two clients working really reliably, both online and offline.

What did I learn?

The first half of the week was dedicated to OAuth 2.0.

This is what I know now that I didn’t know before:

OAuth came about because of a need to avoid the old anti-pattern where one application asks for your username and password to another application, just so it can get at some of your data.

It’s a way of allowing users to consent to sharing some of their data held by one provider with another provider, without giving access to everything. It allows application A to direct users to application B, where they can sign in to application B and give consent to share some of the data held by application B with application A.

So in an OAuth flow, there are typically three actors. In our case, we’re going to call them Bob, Application A, and Application B.

Bob – uses Application A and Application B. He has an account with Application A, and at some point added his contact details to it, that is, his name and email address, as well as some personal information.

Application A – has some stored information on Bob:

{
  Users: [
    {
      Bob: {
        name: 'Bob Cratchet',
        email: 'bobster666@aol.com',
        faveColor: 'Vermillion',
        faveFood: 'Tacos'
      }
    }
  ]
}

Application B wants to know Bob’s email address, full name, and favourite colour, but they want to get it from Application A, rather than asking Bob for it directly.

Application A knows who Bob is, and is able to confirm, based on some information Bob holds (username and password for instance), that Bob is Bob.

Application A also has some data about Bob, namely his full name, email address, favourite colour and favourite food.

Application A should not just share this data with anyone, they should only do so with Bob’s consent.

So, Bob goes to Application B’s website/app/client of some sort, and starts using it. At some point, Application B says to Bob,

Hey you have an account with Application A, would you like to share some information from Application A with us? We’d like to know who you are, and what your favourite colour is, and Application A knows that already. If you do then we’ll be able to reflect your preferences in our site/app/client of some sort by turning it your favourite colour!

Bob is like

shit yeah I really want to see Application B in my favourite colour.

Application B sends Bob to a page that Application A has prepared on their domain for just this purpose. Bob signs in to this page, by providing his username and password, so that Application A knows who he is. Application A then says something along the lines of

Hey Bob, Application B says they want to see your name, email address and favourite colour, are you cool with that? We won’t let them see anything else, and you can revoke their access whenever you want

Bob again is like

shit yeah I really want to see Application B in my favourite colour

Because Bob agreed to give access to his data, Application A then sends a special code to Application B.

Application B can’t use this code to get Bob’s favourite colour yet, because Application A can’t be sure that the code they sent hasn’t been nabbed by some other nefarious entity in transit.

In order to access Bob’s favourite colour, Application B has to confirm who they are with Application A, normally via some sort of shared secret.

However it is implemented, Application B has to be able to prove to Application A that they are in fact Application B. Once they have done that, they can exchange their special code, which says ‘Bob says Application B can see his favourite colour and stuff’, for a special token.

This token can then be used by Application B (who have proved who they are) to get Bob’s data from Application A.

Which is pretty neat.

In order for this system to work, there need to be reliable ways of proving that each of these actors is who they say they are, before passing anything sensitive to them.

These ways of figuring out who everyone is are where a lot of the technical complexity comes in.

Depending on where Application B is accessed from, the steps vary quite a bit.

In order to avoid this post becoming monstrously long, I’m just going to detail what happens when you have loud-mouthed native clients that can’t keep a secret.

In my case, I will have a native mobile client, and a native desktop client, and I want to control access to an API on a separate domain, by ensuring that the user that authenticates with Application A via any of the clients above, is only allowed access to their own notes.

The tricky bit with these clients is knowing how to trust that they are who they say they are.

It is easy to give them the special code (the one which can be exchanged for a special token which actually gives them access to whatever resource we are interested in). However, how do we prove that they are who they say they are?

If Application B is a server side web application, this is easy. When Application B registers with Application A, they agree on a secret which only they know. Application B can keep this secret safely on the server, as it can’t be accessed by other people, and then just send it along with its special code. Then Application A will be like

ah yeah I know this guy, here have your token

Native and mobile applications on the other hand are deeply untrustworthy. Any secret they are given access to can be pulled out of the code, making it kind of pointless to give them a secret at all.

Luckily there is a solution:

PKCE (pixie) to the rescue!

https://tools.ietf.org/html/rfc7636

Each of the different ways of managing the OAuth process has a different name. The one you should use for mobile/native clients is called ‘Authorization Code Flow with Proof Key for Code Exchange’.

PKCE stands for Proof Key for Code Exchange, and it is a way for a public, untrustworthy client to authenticate itself with Application A.

To implement it, your leaky client has to provide a code_verifier, and a code_challenge.

In a JavaScript application, you can do this like so:

const crypto = require("crypto")

function base64URLEncode(str) {
  return str
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=/g, "");
}

function sha256(buffer) {
  return crypto.createHash("sha256").update(buffer).digest();
}

var verifier = base64URLEncode(crypto.randomBytes(32));

var challenge = base64URLEncode(sha256(verifier));

The verifier is a base64 encoded randomly generated value which is difficult to guess.

The challenge is a hash of the verifier.

These are dynamically created, and so there is no static value that can be pulled out of the code, unlike with a static secret.

When Application B sends the user off to Application A to log in etc., they also send the code_challenge, which is a hash of the randomly generated verifier value.

Then, when Application B needs to authenticate themselves with Application A, they send the code_verifier, which is the original randomly generated value.

Application A can then hash the code_verifier value, using the same hashing algorithm that Application B did when they created the challenge, and check that they get the same result.

This then means that Application A can be pretty sure that Application B is in fact Application B, and that they are the same instance of Application B that asked for access to Application A’s data earlier.
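In other words, the check on Application A’s side boils down to something like this (a minimal sketch, assuming the S256 challenge method from the spec):

const crypto = require("crypto");

const base64URLEncode = (buffer) =>
  buffer
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=/g, "");

// Application A stored the code_challenge at the start of the flow; now it
// hashes the incoming code_verifier and checks that the two match
const verifierMatchesChallenge = (codeVerifier, storedChallenge) => {
  const hashed = base64URLEncode(
    crypto.createHash("sha256").update(codeVerifier).digest()
  );
  return hashed === storedChallenge;
};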

Again, very neat.

Securing access to notes:

The above section kind of explains how my various clients prove that they are who they say they are; however, it doesn’t cover how my backend notes API uses that information.

I needed to make sure that only Bob can access his notes, which are delivered to him via a socket.io connection.

In order to do that I added my API to the API section of my Auth0 dashboard, and my clients to the Apps section of the dashboard.

I think this means that in the example above, my Auth0 instance/’tenant’ (their words) becomes a layer on top of Google/Email authentication, and essentially becomes my gateway to control access to my API. They are responsible for issuing access tokens that can be used to access my API.

Auth0 handles authenticating the user, either via their own hosted email/password database that you get when you use their service, or via a 3rd party (Google in my case). Once they are authenticated, Auth0 creates a JWT access token, signed with their private key, and sends it along with any information you have requested, using the flow detailed above.

Once the client has its access token, it sends it to my notes server to prove to my server that Bob is Bob.

In my case, so long as I can be sure that the user is who they say they are, they can access their notes. I don’t require any more information than that.

Because I am using sockets to deliver notes, rather than just requesting data via REST endpoints, my steps look like this:

CLIENT:

1) Authenticate Bob, via Google, or email login, in return for an access token (handled by Auth0).

2) Establish socket connection with notes server

3) Send access token via custom socket event:

    socket.emit("authenticate", { token });

4) If the socket connection is closed for any reason, start at step 1 again.

SERVER

1) On the connection event, wait 15 seconds for a follow-up ‘authenticate’ event carrying a JWT access token. If no ‘authenticate’ event arrives, terminate the connection.

2) On receiving an ‘authenticate’ event, with a JWT access token, verify that it was signed by Auth0 and is valid.

3) If it is valid, give the user access to the socket.io room corresponding to their username, as verified by Auth0, otherwise close the connection.

4) Set a timeout and close the connection once the expiry time in the JWT access token is reached.
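A rough sketch of what those server steps might look like with socket.io. I’m assuming the jsonwebtoken package here, and getSigningKey is a stand-in for fetching Auth0’s public key via JWKS:

const jwt = require("jsonwebtoken");

io.on("connection", (socket) => {
  // 1) give the client 15 seconds to send an 'authenticate' event
  const authTimeout = setTimeout(() => socket.disconnect(true), 15000);

  socket.on("authenticate", ({ token }) => {
    // 2) verify the JWT was signed by Auth0 and is still valid
    // (getSigningKey is a stand-in for a JWKS lookup)
    jwt.verify(token, getSigningKey(), (err, decoded) => {
      if (err) {
        return socket.disconnect(true);
      }
      clearTimeout(authTimeout);
      // 3) only give the user access to the room for their verified id
      socket.join(decoded.sub);
      // 4) close the connection when the access token expires
      setTimeout(() => socket.disconnect(true), decoded.exp * 1000 - Date.now());
    });
  });
});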

Refresh tokens

The other thing I had to wrap my head around was refresh tokens.

These are long lived tokens that can be exchanged, without redoing the initial OAuth steps, for another access token.

Because they are pretty powerful tokens, they have to be stored securely on the client device in question.

I have ended up using rotating refresh tokens, which means that after the initial OAuth steps (Authorization Code Flow with Proof Key for Code Exchange), the client just requests new access tokens when the old one runs out, with their securely stored refresh token. When the access token is returned to them, they also get a new refresh token, and the old one is invalidated.

The access tokens themselves don’t last very long.

The reason for using this system is to allow users to benefit from not having to log in all the time, while maintaining a reasonable level of safety: it minimises the potential nasty effects of someone somehow getting hold of an access token or a refresh token, because each one is only valid for a small window.
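The refresh itself is just a POST to the token endpoint. Something along these lines, assuming Auth0-style endpoints (the domain and client id are placeholders, and I’m using node-fetch for the request):

const fetch = require("node-fetch");

const refreshAccessToken = async (refreshToken) => {
  const response = await fetch("https://YOUR_TENANT.auth0.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      grant_type: "refresh_token",
      client_id: "YOUR_CLIENT_ID", // placeholder
      refresh_token: refreshToken,
    }),
  });
  // with rotation enabled, the response contains a new access token AND a
  // new refresh token, and the old refresh token is invalidated
  const { access_token, refresh_token } = await response.json();
  return { accessToken: access_token, refreshToken: refresh_token };
};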

What now?

I really want a finished product, so I need to descope some stuff. I’m out of proof of concept mode, and into minimal viable product mode.

Android is out, Windows is out.

IOS is in, as is, potentially, a native Mac OS client (as opposed to Electron).

To get the product to a stage where it is actually useful, it needs to handle situations where there is no/patchy network access better, and also needs a lot of polish.

Next week the focus is navigating Apple’s distribution channels (App Store), coming up with a strategy for offline vs online notes, getting the IOS app polished and ready for distribution, and potentially starting the Mac OS native client.

This week was tough but productive, I hope next week will be similar.

Noted: Let’s make an app: part 2

Another week has passed, and my baby is taking shape.

It has been a week of dizzying highs, crushing lows, and a general feeling of drowning in a sea of choice.

The end result is that I have a prototype/MVP installable app across Windows, Mac OS, IOS and Android, with login via email address or Google account.

Which I’m overall pretty ecstatic about!

Behold, my (probably highly bug-ridden) product:

The eagle-eyed among you may notice that I am using my notes app to write a todo list of things I need to do to improve the notes app.

INCEPTION STYLE

Where did we start?

At the beginning of the week, I had a hosted API, connected to a database, and an installable IOS app, which could be installed to multiple devices, and could sync notes between them.

It did its syncing by repeatedly calling the notes API via a GET request, polling for changes, which was gross. It also had a save button to update the notes via a PUT request.

The protocol was a GET/PUT on a single notes endpoint.

It had no authentication, so anyone using the app got the same notes, and could edit them.

So what’s the plan?

I wanted to continue with the ‘get shit working fast end to end‘ approach, in order to properly test that what I was trying to do was possible with the tech choices I had made. To meet my MVP requirements, I still needed:

  • IOS client
  • Android client
  • Windows client
  • Mac OS client
  • API which allows syncing between devices (ideally by pushing changes)
  • Authentication/login so you only get to edit your own notes
  • Figure out how to distribute the installable bits… (app stores and the like)

Let’s fix this broken notes protocol

As you can see from the list above, a fairly key requirement was to sort out a way of syncing changes to all clients, ideally without resorting to the basic, heavyweight polling solution I had in place.

A major theme of this week was one of bewilderment at the sheer dazzling array of technology choices available, each with their own subtle pros and cons.

This began with the choice of technology for syncing notes with the server.

During the previous week I had played around with Server-Sent Events, which would have suited my requirements pretty well, but didn’t play nicely with the IOS client, and didn’t seem to be widely used (meaning I couldn’t find many good tutorials/resources on how to use them with mobile clients).

I also tried web sockets, which, again, would have been pretty nice to work with in a web app, but didn’t integrate well with IOS.

I then lucked out (I think), and gave socket.io a go.

https://socket.io/

In their own words:

Socket.IO is a library that enables real-time, bidirectional and event-based communication between the browser and the server. It consists of:

  • a Node.js server
  • a Javascript client library for the browser (which can be also run from Node.js)

The client will try to establish a WebSocket connection if possible, and will fall back on HTTP long polling if not.

So it is a really nicely written wrapper around web sockets, with a fallback.

It has great documentation, and they actively maintain clients for Swift, Java and Javascript (among others), which means I can use it easily in all of the places I need to (more on this later).

After integrating this with the IOS application, my process now looked like:

1) Establish a socket connection with the notes server

2) On updates from the server, update the local client ‘notes’ variable

3) On the client saving notes, push them up to the server via the socket connection

4) On the server receiving an update, push the changes out to all connected clients
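In the JavaScript clients, those steps map onto socket.io almost one to one. A minimal sketch (the server URL is a placeholder):

const io = require("socket.io-client");

// 1) establish a socket connection with the notes server
const socket = io("https://my-noted-server.example.com"); // placeholder URL

// 2) on updates from the server, update the local 'notes' variable
let notes = "";
socket.on("notesUpdated", (payload) => {
  notes = payload.content;
});

// 3) on save, push the full notes string up via the socket connection
const saveNotes = (updatedNotes) => {
  notes = updatedNotes;
  socket.emit("updateNotes", { content: updatedNotes });
};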

Desktop clients

I was pretty sure I could smash together an Android client if needed, and that it would use Java/Kotlin, so I knew it would work with socket.io.

I was less sure about what to do about desktop clients though.

I have worked at companies using Electron, and I have a decent amount of experience with web technologies, so first of all I hammered together a crappy Electron application, just to have another client running. It was pretty simple to get going.

My previous gut feel about Electron is that it can be slow, resource hungry, and generally it feels a bit hacky to develop. I wanted to explore other options.

My preference given unlimited time would (I think) be to write desktop applications from scratch for each platform.

Given my simple UI (just a text input screen), and the fact that I develop on a Mac, I think I could have made a Mac OS application without too much hassle.

Windows, on the other hand, was far less obvious to get started with, and would likely have involved virtual machines/dual booting and other annoyances.

That is not to say it wouldn’t be possible, it absolutely would, but the developer experience would probably be painful. I did briefly consider actually finding a used Windows laptop and using that for the Windows client…

However, I want to move quickly, and minimise frustration, so that seemed like a no.

I then read a bunch of opinionated blog posts, which alternately said some version of:

  • ‘Electron is trash, write native apps instead’
  • ‘Electron is trash, use this other framework instead’
  • ‘Electron is trash, but it’s the best option we have, and who has time for the annoying, costly alternatives anyway’
  • ‘Electron is great’

I wasted half a day trying to get JavaFX to work and gave up.

Then I revisited my Electron app, cleaned it up a bit, integrated it with socket.io (which was super easy), implemented their suggestions for making the app a bit more secure, and, after some experimentation with other libraries, used Electron Builder to produce installers for Mac and Windows.

It was about as seamless as any development I have ever done, and so after dicking around trying other solutions, I did what many companies/individuals seem to have done and said ‘Fuck it, I’ll just use Electron’, a choice that I’m pretty happy with on balance.

If I get curious in the future, I can always try my hand at moving to native apps, or some other framework, but for now it’s just so convenient.
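
For a sense of why it felt so seamless: the entire Electron ‘shell’ for an app this simple is tiny. A minimal sketch of the main process, with the security settings the Electron docs recommend (context isolation on, Node integration off), and index.html standing in for the notes UI:

const { app, BrowserWindow } = require("electron");

const createWindow = () => {
  const win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      // The security settings Electron's docs recommend
      contextIsolation: true,
      nodeIntegration: false,
    },
  });

  win.loadFile("index.html"); // the notes UI
};

app.whenReady().then(createWindow);

Everything interesting lives in the renderer page, which is just web code.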

Cool, two clients up and running, very satisfying. No horrific things have appeared yet, and it seems like socket.io is a semi-sensible choice.

I still had one connection for everyone, meaning there was just one set of notes, and it was editable by everyone who installed the app.

There was no getting around it, it was time for authentication to rear its ugly head…

Authentication/Identity/My brain is melting

TL;DR After thoroughly confusing myself about Auth0 and OpenID Connect, I have ended up using Auth0, and their universal login, to authenticate users on both mobile and desktop clients.

The net result of this is that clients can easily get a JWT access token, which the server can validate to ensure they are who they say they are, and then grant them access to a socket connection with only their own notes.
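
For the JavaScript clients, the shape of that flow with Auth0’s @auth0/auth0-spa-js library is roughly the following. Treat it as a sketch: the desktop and mobile apps use the same universal login but different plumbing, and every config value here is a placeholder:

// A sketch using @auth0/auth0-spa-js in a plain web client;
// createAuth0Client comes from the auth0-spa-js script/module,
// and all config values are placeholders
async function getAccessToken() {
  const auth0 = await createAuth0Client({
    domain: "YOUR_TENANT.auth0.com",
    client_id: "YOUR_CLIENT_ID",
    audience: "https://your-notes-api", // the API identifier
  });

  if (!(await auth0.isAuthenticated())) {
    // Hand off to Auth0's hosted universal login page
    await auth0.loginWithRedirect();
  }

  // Once logged in, this yields the JWT access token the server validates
  return auth0.getTokenSilently();
}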

The per-user part is achieved via socket.io’s rooms functionality, where each user is assigned to a room named after their userId (socket.join(userId)), together with this handy library for validating JWTs for use with socket.io: https://www.npmjs.com/package/socketio-jwt

// Assumed setup: `server` is the app's HTTP server, and the require
// paths for the Notes model and the debug namespace are illustrative
const socketioJwt = require("socketio-jwt");
const jwks = require("jwks-rsa");
const debug = require("debug")("noated:server");
const Notes = require("./models/notes");

const io = require("socket.io")(server);

io.sockets
  .on(
    "connection",
    socketioJwt.authorize({
      // Validate incoming JWTs against the Identity Provider's JWKS endpoint
      secret: jwks.expressJwtSecret({
        cache: true,
        rateLimit: true,
        jwksRequestsPerMinute: 5,
        jwksUri: process.env.JWKS_URI,
      }),
      timeout: 15000, // 15 seconds to send the authentication message
    })
  )
  .on("authenticated", (socket) => {
    // The JWT's `sub` claim identifies the user, and doubles as the room name
    const userId = socket.decoded_token.sub;

    socket.join(userId);

    // Push the user's notes down as soon as they connect
    Notes.findOne({ username: userId }).then((notes) => {
      if (notes) {
        io.to(userId).emit("notesUpdated", notes);
      }
    });

    socket.on("updateNotes", (payload) => {
      debug(`updating ${userId} notes`);
      Notes.find({ username: userId }).then((notes) => {
        if (notes && notes.length) {
          // Existing notes: overwrite, then fan the change out to
          // every device in the user's room
          Notes.updateOne(
            {
              username: userId,
            },
            {
              username: userId,
              content: payload.content,
            }
          ).then(() =>
            io.to(userId).emit("notesUpdated", {
              username: userId,
              content: payload.content,
            })
          );
        } else {
          // First save for this user: create the document instead
          Notes.create({
            username: userId,
            content: payload.content,
          }).then(() => {
            io.to(userId).emit("notesUpdated", {
              username: userId,
              content: payload.content,
            });
          });
        }
      });
    });

    socket.on("disconnect", () => {
      debug("user disconnected");
    });
  });

From the Electron (or any JavaScript) client’s perspective, it looks like this:

  // Part of the client's notes service; `socket`, `apiIdentifier`,
  // `getAccessToken` and `_updateNotes` are the app's own state and helpers
  connectToNotesStream: () => {
    socket = io(apiIdentifier);

    const token = getAccessToken();
    socket.on("connect", () => {
      // socketio-jwt expects an "authenticate" message carrying the JWT
      socket.emit("authenticate", { token });

      socket.on("authenticated", () => {
        // The server pushes notes on connect and after every save
        socket.on("notesUpdated", (data) => {
          _updateNotes(data.content);
        });
      });
    });
  }

Mobile clients

Similar to my decision anxiety around desktop technologies, I flirted heavily with the idea of using React Native, Ionic, Flutter, etc., before eventually deciding that my UI needs were so minimal that I may as well just develop native apps.

This project is largely a learning endeavour, and I like the idea of getting a shallow understanding of the IOS and Android development ecosystems, rather than of Facebook’s wrapper around them.

I can easily develop IOS and Android applications from my Mac, socket.io has well supported Swift and Java clients, and Auth0 integrates (kind of) well with Java and Swift.

How I picked things

Reading this through, it seems like I made decisions quite easily, but I’ll be honest: this week has been overwhelming.

There are so many different ways to build mobile and desktop apps these days.

In order to make decisions, I had to come up with some criteria for how to pick one technology over another. The criteria ended up being pretty simple:

  • Does it work well with Socket.IO and Auth0?
  • Will it give me knowledge of an underlying protocol/technology that could be useful?

As an example, React Native fulfilled number one (good Auth0 support), but not number two (I don’t want to learn React Native; I’d rather understand the native platforms a bit).

Electron also only fulfilled number one, but there wasn’t a clear alternative given the time constraints I have (at some point I’ll have to get a job again!).

I’d still have preferred to have a go at making a native Windows application, and I still might…

Although Electron also makes it super easy to make Linux apps, which is also kind of appealing.

I’m basically making this tool for myself, and I regularly work on Windows, Mac and Linux, as well as switch between IOS and Android phones…

Conclusion

Another great week.

Next I want to design a logo, come up with a more unified UI/UX flow for the different platforms, to make them look more similar.

Also try and actually get some stuff in various app store(s).

Wish me luck…

Noted: Let’s make an app: part 1

I’ve gone and got all inspired and decided I’ll make and deploy an application, because that’s something I’ve never done.

The idea:

A cross platform notes application, a la Mac notes app, but one that works on Mac, Windows, Android and IOS

The approach:

I’ve always talked a lot of shit at work about how the best way to make products is to quickly deploy something that is ugly and only minimally functional, and then improve it.

The thinking behind this is that you prove very quickly to yourself whether what you’re trying to do is feasible, and flush out any major issues.

Quite often, this has not been feasible in paid employment.

To avoid being one of those annoying neck beard types who always has a better way of doing things, but never seems to have actually built anything, I’m going to try actually doing things the way I think they should be done, and see if I am full of shit or not.

Dear diary (week one):

I’ve been thinking vaguely about what I want this app to do, and how I might do it for a while.

Much though I love Angular and front end development, I’m also really really really really bored of writing the same kind of code over and over again.

I’m curious about the back end, and other UI bits, so one of the constraints for this project was that it should have as little to do with single page applications as possible, and as much to do with the other parts of making an application as I can stomach.

My initial MVP specification for the app was:

  • An IOS app which can edit and save notes to a database via an API

My initial shopping list of things I thought I might need was:

  • An API (maybe Node.js?)
  • Some sort of database
  • An IOS app
  • Some way of deploying the app

At the end of the week, somewhat impossibly, I have:

  • An Express API, deployed to Heroku for getting notes from a database, and pushing notes to a database
  • An IOS app which I can install on multiple phones (not in the app store), which syncs across multiple clients

Not bad!!!

Video here

How did I get here?

I was pretty sure that I wanted to start with the API and the database, so that’s what I did. I knew a tiny bit of Express, but not much more than that.

I found a few guides, but this was by far the best:

https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs

The MDN docs continue to amaze me with how good they are.

If you take nothing else from this post, remember the following:

If you want to find out how to use a web technology, start with the MDN docs.

That guide basically chose three of the main components of the app for me, which I was truly fine with.

  • Heroku for deployment (cloud)
  • Express for the API
  • MongoDB Atlas for the database (cloud)

My API has one route ‘/notes/:username’, with GET and PUT methods on it:

// Assumed setup: an Express router, with the Notes model from below
// (require path is illustrative)
const router = require("express").Router();
const Notes = require("../models/notes");

router
  .route("/:username")
  .get((req, res) => {
    // Return the single notes document for this user
    Notes.findOne({ username: req.params.username })
      .then((notes) => {
        res.json(notes);
      })
      .catch((err) => res.status(400).json("Error: " + err));
  })
  .put((req, res) => {
    Notes.find({ username: req.params.username }).then((notes) => {
      if (notes && notes.length) {
        // User already has notes: overwrite the content
        Notes.updateOne(
          {
            username: req.params.username,
          },
          {
            username: req.params.username,
            content: req.body.content,
          }
        )
          .then((doc) => res.json(doc))
          .catch((err) => res.status(400).json("Error " + err));
      } else {
        // First save for this user: create the document and respond
        Notes.create({
          username: req.params.username,
          content: req.body.content,
        })
          .then((doc) => res.json(doc))
          .catch((err) => res.status(400).json("Error " + err));
      }
    });
  });

module.exports = router;
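
Exercising the route from any HTTP client looks something like this (a sketch, assuming the router is mounted at /notes and the server parses JSON bodies):

// Hypothetical usage from a modern Node REPL or a browser console
const base = "http://localhost:5000/notes";

// Save (or create) notes for a user
await fetch(`${base}/some-username`, {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ content: "my first note" }),
});

// Read them back
const saved = await (await fetch(`${base}/some-username`)).json();
console.log(saved.content); // "my first note"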

I got it working locally first, and convinced myself that my very basic database schema sort of worked (here it is below in Mongoose, the library I am using to talk to the MongoDB database).

const mongoose = require("mongoose");
const Schema = mongoose.Schema;

// One document per user: a username plus a single blob of note content
const notesSchema = new Schema({
  username: {
    type: String,
    required: true,
    unique: true, // one notes document per user
    trim: true,
  },
  content: {
    type: String,
    required: true,
  },
});

const Notes = mongoose.model("Notes", notesSchema);

module.exports = Notes;

Deploying the API to Heroku was a dream, and means I can now make lots of tiny incremental changes to my back end, which is nice!
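
The one Heroku-specific wrinkle worth knowing: Heroku assigns your app a port at runtime via the PORT environment variable, so the server has to read it rather than hard-code one. A minimal sketch, assuming a standard Express setup:

const express = require("express");
const app = express();

// ...routes and middleware go here...

// Heroku injects the port at runtime; fall back to 5000 locally
const port = process.env.PORT || 5000;

app.listen(port, () => console.log(`Notes API listening on ${port}`));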

Once that was done, I cobbled together a native IOS application, and wrote some truly horrible code to make it poll the API for changes.
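
The real polling code was Swift, but sketched in JavaScript the shape was roughly this (the interval, endpoint and helpers are all stand-ins):

// A sketch of the polling approach; `username` and `renderNotes`
// are stand-ins for the app's own state and update logic
const POLL_INTERVAL_MS = 2000; // arbitrary, for illustration

setInterval(async () => {
  const response = await fetch(`/notes/${username}`);
  const notes = await response.json();
  renderNotes(notes.content);
}, POLL_INTERVAL_MS);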

So far, the approach is working, and it’s very motivating getting something visibly working so quickly.

What’s next?

Next I want to sort out the communication between the server and the client(s) better. Polling is gross, and I want a better solution, which allows the server to push any changes out to all clients. After initial digging, socket.io looks promising.

I did attempt to get server sent events working (see below), but I couldn’t find a way of getting them to work nicely on IOS, as the EventSource API for consuming events is currently only available natively in browsers. Third-party solutions do exist, but I wasn’t comfortable dumping somebody else’s massive lump of Objective-C into my project as a dependency, as I honestly don’t understand the IOS side of things anywhere near well enough for that.

https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
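
For reference, consuming server sent events in a browser looks like this, and it’s exactly this EventSource API that was missing natively on IOS (the endpoint is hypothetical):

// Browser-only: the EventSource API makes SSE trivial on the web
// ("/notes/stream" is a hypothetical endpoint)
const source = new EventSource("/notes/stream");

source.onmessage = (event) => {
  const notes = JSON.parse(event.data);
  console.log("notes updated:", notes.content);
};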

I also want another client up and running, probably a desktop application for Mac. I’m considering using Electron for this.

Further down the line, I’ll need some sort of auth/identity stuff.

So far this project has been great, and I’m looking forward to another week of it.