The hardest part of building software is not coding, it’s requirements

With all the articles about just how amazing the developments in AI have been, there’s plenty of hand-wringing about the possibility that we, as software developers, could soon be out of a job, replaced by artificial intelligence. The worriers imagine business execs and product researchers bypassing most or all of their software developers and asking AI directly to build exactly what they think they want or need. As someone who’s spent 15 years creating software from the specs these folks create, I find it hard to take that worry seriously.


Coding can be a challenge, but I’ve never spent more than two weeks trying to figure out what is wrong with a piece of code. Once you get the hang of the syntax, logic, and techniques, it’s a pretty straightforward process, most of the time. The real problems usually center on what the software is supposed to do. The hardest part of creating software is not writing code; it’s creating the requirements, and those requirements are still defined by humans.

This article will talk about the relationship between requirements and software, as well as what an AI needs to produce good results.

It’s not a bug, it’s a feature…no wait, it’s a bug
Early in my software career, I was placed on a project midstream in order to help increase the velocity of the team. The main purpose of the software was to configure custom products on ecommerce sites.

I was tasked with generating dynamic terms and conditions. There was conditional verbiage that depended on the type of product being purchased, as well as which US state the customer was located in due to legal requirements.

At some point, I thought I had found a potential defect. A user would pick one product type, which would generate the appropriate terms and conditions, but further along in the workflow the software allowed the user to pick a different product type with its own predefined terms and conditions. This would violate one of the features explicitly agreed upon in the business requirements document that bore the client’s signature.

I naively asked the client, “Should I remove the input that allows a user to override the correct terms and conditions?” The response I got has been seared into my brain ever since. His exact words were spoken with complete and total confidence:

“That will never happen.”

This was a senior executive who had been at the company for years, knew the company’s business processes, and was chosen to oversee the software for a reason. The ability to override the default terms and conditions was explicitly requested by the same person. Who the heck was I to question anyone, much less a senior executive of a company that was paying us money to build this product? I shrugged it off and promptly forgot about it.

Months later, just a few weeks before the software was to go live, a tester on the client side found a defect, and it was assigned to me. When I saw the details of the defect, I laughed out loud.

That concern I had about overriding default terms and conditions, the thing I was told would never happen? Guess what was happening? Guess who was blamed for it, and who was asked to fix it?

The fix was relatively easy, and the consequences of the bug were low, but this experience has been a recurring theme in my career building software. I’ve talked to enough fellow software engineers to know I’m not alone. The problems have become bigger, harder to fix, and more costly, but the source of the problem is usually the same: the requirements were unclear, inconsistent, or wrong.

AI right now: Chess versus self-driving cars
The concept of artificial intelligence has been around for quite some time, although the recent high-profile advances have raised concerns in the media as well as in Congress. Artificial intelligence has already been very successful in certain areas. The first one that comes to mind is chess.

AI has been applied to chess as far back as the 1980s. It is widely accepted that AI has exceeded humans’ ability to win at chess. That’s not surprising, as the parameters of chess are FINITE (even though the game has not yet been solved).

Chess always starts with 32 pieces on 64 squares, has well-documented, officially agreed-upon rules, and, most importantly, has a clearly defined objective. On each turn, there is a finite number of possible moves. Playing chess is just following a rules engine. AI systems can calculate the repercussions of every move and select the one most likely to capture an opponent’s piece, gain position, and ultimately win.
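
To make that concrete, here is a minimal sketch, in Python, of the kind of exhaustive look-ahead a rules-based chess engine performs. The Board interface here, with legal_moves(), apply(), and score() methods, is a hypothetical stand-in, and real engines add far deeper search, pruning, and carefully tuned evaluation functions; the point is only that every input the engine needs is fully specified up front.

    # Minimal sketch of rules-based move selection. The Board class and its
    # legal_moves(), apply(), and score() methods are hypothetical stand-ins;
    # score() is assumed to evaluate the position from the side to move's view.

    def negamax(board, depth):
        """Evaluate a position, assuming both sides always pick their best reply."""
        moves = board.legal_moves()        # finite, well-defined options each turn
        if depth == 0 or not moves:
            return board.score()           # the clearly defined objective to optimize
        return max(-negamax(board.apply(m), depth - 1) for m in moves)

    def best_move(board, depth=3):
        """Pick the legal move that leads to the best position `depth` plies ahead."""
        return max(board.legal_moves(),
                   key=lambda m: -negamax(board.apply(m), depth - 1))

Everything the search relies on, which moves are legal and what counts as a good outcome, is defined by the rules of the game, not by a human guessing at intent.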

Self-driving cars are another front where AI has been very active. Manufacturers have been promising self-driving cars for quite some time. Some cars have the capacity to self-drive, but there are caveats. In many situations the car requires active supervision: the driver may need to keep their hands on the wheel, and the self-driving feature is not truly autonomous.

Like chess-playing AI programs, self-driving cars largely use rules-based engines to make decisions. Unlike chess programs, though, the rules for navigating every possible situation are not clearly defined. There are thousands of little judgments drivers make in a given trip: avoiding pedestrians, navigating around double-parked cars, turning in busy intersections. Getting those judgments right means the difference between arriving at the mall safely and arriving at the hospital.

In technology, the standard is five or even six 9s for availability: a website or service is available 99.999% (or 99.9999%) of the time. The cost to achieve the first 99% isn’t that high, even though it means your website or service can be down for more than three days (87.6 hours) a year. However, for each 9 you add, the cost to get there grows exponentially. By the time you reach 99.9999%, you can only allow 31.5 seconds of downtime a year, which requires significantly more planning and effort and is, of course, more expensive. Getting the first 99% may not be easy, but proportionally it’s a lot easier and cheaper than that last tiny fraction.
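
The arithmetic behind those downtime budgets is easy to check. Here is a quick Python sketch that converts an availability percentage into the downtime it allows per year, using 8,760 hours in a year:

    # How much downtime per year does each availability target allow?
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    for target in ("99%", "99.9%", "99.99%", "99.999%", "99.9999%"):
        availability = float(target.rstrip("%")) / 100
        downtime_hours = HOURS_PER_YEAR * (1 - availability)
        print(f"{target:>8}: {downtime_hours:9.4f} hours/year "
              f"({downtime_hours * 3600:,.1f} seconds)")

    # 99%      -> 87.6 hours a year (more than three days)
    # 99.9999% -> about 31.5 seconds a year

Each additional 9 shrinks the downtime budget by a factor of ten, while the engineering effort needed to stay inside it keeps climbing.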