I recently watched the excellent series Chernobyl.
It tells the story of a catastrophic nuclear meltdown, caused by concerted human incompetence and blind adherence to bureaucratic demands, even in the face of blatant evidence that what was being done was stupid, negligent, and dangerous.
After the accident, it took a long time to understand what had gone wrong, blame was thrown around wildly, and there were official efforts to disguise the causes of the accident.
Very little was done to try to learn from systemic failures, or to address the problems demonstrated by the incident.
As a software professional, it saddens me how many similarities I can see between most places I have worked and the Chernobyl accident and cover-up.
How I feel most of the time at work:
The way software is written is fucked.
It is so complex that very few people understand it, and the process of writing it is consistently mismanaged, despite the fact that most of the pitfalls and mistakes made by literally every single team have been pointed out, along with detailed steps for avoiding them, since the 1970s.
Here is a pretend conversation that sums up the level of ignorance I have encountered in my professional life, applied to another industry (it is a metaphor, you see):
- Bob: Hello, I’m the new pilot for this here commercial aircraft.
- Geoff (Bob’s employer): Hello Bob! Glad to have you on board. I hope you’re worth the enormous sum of money we’re paying you
- Bob: Well of course I am! I passed your test, didn’t I 🙂
- Geoff: Well yes, you are right. You were very good at identifying the key differences between a jumbo jet and a bicycle, so I guess we’re all OK on that front
Some time later…
- Emma (Bob’s copilot): Bob, could you land the plane for me please? I need to visit the bathroom
- Bob: Sure thing. One question: where do you keep the landing ropes?
- Emma: What was that, Bob? The landing ropes… What are they?
- Bob: The ropes you use to loop around the lorry on the landing bit on the floor, so it can pull you in and you can land
- Emma: Bob, that is insane. What are you talking about?
- Bob: Oh, do you use a different system? I could use roller skates instead
- Emma: …
- Geoff: Hey guys, are you going to land this plane soon?
- Bob: Sure thing, just debating whether to use the ropes or the roller skates. I think we’re leaning towards the roller skates, given the weather
- Emma: …
- Geoff: OK great, let me know how you get…
- Emma: WHAT THE FUCK ARE YOU BOTH TALKING ABOUT!?!
- Bob: Hold up there, Emma, I’m not sure there’s any need to get angry like this
- Geoff: Emma, what’s your problem? We really need to land this plane soon, so can you please sort this out between you? What Bob said about the roller skates sounds sensible to me
- Emma: BUT WE’LL CRASH INTO THE FLOOR AND DIE
- Bob: I think that’s a bit extreme Emma, when I worked at Initrode we always used roller skates in inclement weather and we hardly ever crashed. We never took off either though… but I’m pretty sure the landing stuff would be fine
For another excellent summary of the type of madness you are likely to find in a software team, try this:
‘Ah but fortunately your example was unrealistic. That involved an aircraft, which could crash into the floor and kill people. Also nuclear reactors are totally different to coding. Software is just computers. It can’t do anything really nasty… right?’
Not only is software fucked, it is increasingly responsible for scary things.
Defective software has recently:
- Crashed two planes.
- Exposed countless people’s private information to criminals. (This happens so often now that it’s barely even news.)
- Stopped customers from accessing their online banking for months.
- Prevented doctors from accessing test results.
- Grounded flights out of major airports.
- Thrown people off electric scooters by ‘braking heavily’ under certain circumstances.
I found these examples with a ten-minute Google search. There are probably many more cases, many of which are inherently difficult to reason about because… software is fucked!
To recap, software is often:
- Responsible for scary things
- Written by incompetent people
- Managed by less competent people
- Very difficult to understand for ‘non technical’ people
Which all combines to make me increasingly cynical and scared about my profession…
I have spent quite a bit of time reading about and discussing this phenomenon, and there appears to be a growing consensus, among both my peers and the wider community, that this cannot go on.
One of the potential solutions floated, in various forms, is ‘REGULATION’.
I’m not sure how I feel about it either… or what it should look like.
The cynic in me thinks that any attempt to regulate software development would add unnecessary overhead to an already slow and inefficient process, and would likely be mismanaged.
I also know that certain elements of writing software are very difficult to reason about, and that ensuring a program does what you intend it to do is hard.
That is to say, it is possible that even after doing your due diligence, becoming excellent at your craft, and testing all foreseeable outcomes, your code could still cause something horrible to happen.
In these cases I suspect assigning blame would be counter productive, whereas learning from mistakes could be very useful.
That said, the level of incompetence and negligence I have encountered in some workplaces makes me feel there should be some legal protection in place to ensure that development teams take responsibility for their code.
It seems totally unacceptable that testing and design are frequently shunted to the back of the priority list, behind ‘hitting milestone 3’ or ‘drop 2’ or whatever that particular company is choosing to call their next client delivery.
If a shoddy system of development results in software which causes a loss of data, or money, or lives, then maybe someone should be held accountable.
That person should probably be paid a fairly large sum of money for taking on the responsibility of ensuring that the software works (as many people in the software industry already are…), but equally, that reward should reflect the very real responsibility that has been taken on.
By which I mean that if it is found that the highly paid person who put their professional seal of approval on a product did not adequately ensure that a system was in place to develop robust and safe software, then they should be punished.
I don’t have a particular problem if your marketing website or your blog has some bugs or UX issues, but almost all software I have worked on has involved user accounts and financial transactions, and is therefore responsible for people’s personal information and money.
That this type of software is developed within a system that rewards cutting corners and punishes slow deliberation and robustness is something I find deeply worrying.
Systems of development
One thing to emphasise, is that I don’t think the state of software is a reflection of any specific individual failings.
It is not a case of ‘shit developers’ or ‘shit managers’ or ‘shit clients’.
You might have noticed that I keep writing the word ‘system’.
That is because I recently read an excellent book on systems thinking and now frequently regurgitate snippets from it.
One of the things that really stuck with me from this book is the idea that the actual outcomes of a system always reflect its true goals or motivations, but that those true goals might differ (sometimes wildly) from any officially stated goals.
My theory is that in the majority of cases, a software team’s true goals and motivators are not robust software, a reworked and improved product, or a solved customer problem, and that is why these are rarely the outcome.
A team responsible for a piece of software is made up of a number of actors, or elements, all with competing goals.
You may have an external client, who you work with to define what the software should do.
They probably have a set of internal stakeholders they have to please, they may have a certain deliverable linked to their bonus that year, they may have inherited the project from someone else, and maybe they are trying to leave the company anyway, so they don’t particularly care about the long term success of the product.
The budget for the piece of software might have come about from a high level executive joining the client company and wanting to throw their weight around. They might not still be at the company in six months.
The team developing the software might have, intentionally or otherwise, oversold what they can deliver and offered it for an unrealistically low price.
All of this means that individual milestones or deliverables within a project become very important, as many of the actors within the system are depending on their outcome.
In a setup like the one above, the date the software is delivered is more important than whether it is shit or not, because more of the people within the system are depending on the date being hit than on the software working well.
Individual developers within the team have a variety of different motivations.
Some are there just to take advantage of the elevated wages within the industry, and are content to turn up, do their job within the confines set out in front of them and go home.
They may or may not understand what they are doing, and they may make things so complicated from a code perspective that reasoning about whether the product has bugs becomes almost impossible, but to be honest it doesn’t really matter.
As long as something is produced which looks and feels like something that was signed off and paid for, these guys have done their job.
These are the majority of developers I have encountered, and I don’t have a problem with them.
I do have a problem with the fact that the system they are working inside rewards this type of behaviour, but that is not their fault.
Others might want to actually deliver on the stated goals of the system, that is, roughly:
‘make a piece of robust software in the least complex way possible, such that it solves a clearly defined problem for the user of the software’.
This is quite a bit harder, requires constant reassessment of tools, techniques and design, and a commitment to mastering the fundamentals of your craft, across all disciplines involved, but in the medium to long term is definitely the way to go.
Against the background of all the other competing goals and motivations, these latter elements are likely to be frustrated.
Calls to slow down the pace of development in order to revisit the design of the product, or even to reassess whether the product is viable as a solution are generally ignored, as they will impede ‘deliverable alpha’.
In such a system, cutting corners is rewarded at all levels, and, again, this is not particularly anyone’s fault.
This would all be fine if it was happening in a bubble, but it’s not. These pieces of software are responsible for stuff that matters, and the real costs of these decisions are not felt by the people responsible for making the software, but by the users themselves.
In the above scenario, there might be a few raised voices and tense meetings, but it is very likely that everyone involved in making the software will be rewarded, as long as something is delivered on or around the date in the project plan.
Users may be lumbered with a buggy, slow, insecure program, or web app, but that is not really factored into the cost of development.
So what’s your point?
People responsible for making software are often motivated by many things other than making reliable, secure software that solves a defined user problem.
The negative effects of bad software are generally felt by users, not by those who put the software together.
In order to improve this situation, and to get better software that isn’t shit, it seems there needs to be some sort of change that makes the people responsible for creating software feel the effects of their shitty software more keenly themselves.
I don’t know what that would look like, and to be honest I don’t really have a point… All I really know is that I find my profession increasingly scary and insane, and that I’m not alone.