To hear some people tell it, we’re one semicolon away from the entire software development industry collapsing around our ears. And we can’t even blame it on Y2K. This crumbling infrastructure isn’t our roads and bridges, but the creaky structure built on decades of quick-and-dirty computer programs that weren’t intended to be used that long. Not only are they still in use, but new programs have been written on top of them—and now depend on them.

“Think of it as needing more space in your house, so you decide you want to build a second story,” writes Zeynep Tufekci in Medium. “But the house was never built right to begin with, with no proper architectural planning, and you don’t really know which are the weight-bearing walls. You make your best guess, go up a floor and… cross your fingers. And then you do it again. That is how a lot of our older software systems that control crucial parts of infrastructure are run.”

Ken Thompson, co-creator of the Unix operating system, tells Ritika Trikha in TechCrunch that he’s “downright fearful of modern programming because it’s made up of layers upon layers upon layers. It confuses me to read a program which you must read top-down. It says ‘do something,’ and you go find ‘something’ and it says ‘do something else’ and it goes back to the top maybe. And nothing gets done. I can’t keep it in my mind — I can’t understand it.”
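To see what Thompson means, here is a toy sketch in Python (all names invented for the example) of code that is nothing but layers: every function says “do something else,” and the real work hides several hops down.

```python
# A toy illustration of layered indirection, with invented names: each
# function only delegates, so a reader chasing "what actually happens?"
# must hop through every layer before anything gets done.

def handle_request(request):
    return dispatch(request)        # "do something"

def dispatch(request):
    return run_middleware(request)  # ...which says "do something else"

def run_middleware(request):
    return invoke_handler(request)  # ...which delegates yet again

def invoke_handler(request):
    return request.upper()          # the real work, four layers down

print(handle_request("hello"))      # prints HELLO
```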

And that’s assuming everyone’s done it right. “Programmers are making mistakes all the time and constantly,” John Carmack, the primary programmer behind games such as Doom, said in a speech, while pointing out that today’s games are more complex than the software that sent us to the moon. “The problem is that the best of intentions really don’t matter. If something can syntactically be entered incorrectly, it eventually will be.”
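A minimal Python illustration of Carmack’s point (a hypothetical example, not one from his talk): a one-character typo that is syntactically legal, so the language accepts it and the bug ships.

```python
# Syntactically valid, semantically wrong: a one-character slip.
# The bare comparison on the marked line is a legal Python statement;
# the interpreter evaluates it and throws the result away, so the
# "reset" silently does nothing.

class Counter:
    def __init__(self):
        self.count = 0

    def reset(self):
        self.count == 0   # BUG: compares; the intent was `self.count = 0`

c = Counter()
c.count = 41
c.reset()
print(c.count)  # still 41 -- no error, just a wrong answer
```

Linters and static analyzers exist largely to catch slips like this one, and Carmack has argued for leaning on them heavily for exactly this reason.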

Programs are also getting bigger and bigger. Google, for example, is said to consist of some two billion lines of code, writes Cade Metz in Wired.

“The software needed to run all of Google’s Internet services—from Google Search to Gmail to Google Maps—spans some 2 billion lines of code,” Metz writes. “By comparison, Microsoft’s Windows operating system—one of the most complex software tools ever built for a single computer, a project under development since the 1980s—is likely in the realm of 50 million lines. So, building Google is roughly the equivalent of building the Windows operating system 40 times over.”

Keeping all that code in a single repository offers Google’s 25,000 engineers a major advantage: the code is available to everyone, which gives them the freedom to use and combine code from across myriad projects, Metz writes. “What’s more, engineers can make a single code change and instantly deploy it across all Google services. In updating one thing, they can update everything.”

While Google is an extreme case, it’s not the only company with a gargantuan code base. Facebook, for example, is estimated by some to comprise 20 million lines of code, Metz writes. And we’ll be seeing more such numbers as cars become more automated. “New high-end cars are among the most sophisticated machines on the planet, containing 100 million or more lines of code,” writes the New York Times. “Compare that with about 60 million lines of code in all of Facebook or 50 million in the Large Hadron Collider.”

(See how complex it is? People can’t even agree on how big Facebook is, by a factor of three.)

And if programs are bad, the Internet is worse.

“You can’t restart the Internet,” writes Peter Welch in one of the many “Programming Sucks” pieces by frustrated programmers. “Trillions of dollars depend on a rickety cobweb of unofficial agreements and ‘good enough for now’ code with comments like ‘TODO: FIX THIS IT’S A REALLY DANGEROUS HACK BUT I DON’T KNOW WHAT’S WRONG’ that were written ten years ago. On the Internet, it’s okay to say, ‘You know, this kind of works some of the time if you’re using the right technology,’ and BAM! It’s part of the Internet now.”

What causes this problem? In a word, complexity. It’s easy to think that with enough testing a program won’t fail (but who ever thinks their software has been tested enough?). But “normal accident” theory, developed by sociologist Charles Perrow, holds that as a system gets more complex, its chances of failure increase, no matter how carefully each component is built, because of unexpected interactions between those components. Even adding checks and balances to look for failure introduces more complexity, and makes the system still more prone to failure.
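As a rough back-of-the-envelope sketch of that argument (assuming, unrealistically, that components fail independently), even highly reliable parts add up to an unreliable whole, and the number of pairwise interactions where surprises can hide grows quadratically:

```python
# Back-of-the-envelope "normal accident" arithmetic. Assume a system of n
# components, each working with probability p, and (unrealistically) that
# failures are independent. The chance of at least one failure is
# 1 - p**n, and the pairwise interactions where surprises can hide
# number n*(n-1)/2.

p = 0.9999  # each component is 99.99% reliable

for n in (10, 100, 1_000, 10_000):
    failure = 1 - p ** n
    interactions = n * (n - 1) // 2
    print(f"{n:>6} parts: {failure:5.1%} chance of some failure, "
          f"{interactions:,} pairwise interactions")
```

At 10,000 components, this toy model gives roughly a 63 percent chance that something, somewhere, is failing at any given moment, even though each individual part works 99.99 percent of the time.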

So now that we’re aware of the problem, what can we do about it? At the risk of throwing more complexity into the situation, the answer sometimes appears to be more software.

  • Version control. Google and Facebook are working on an automated open source version control system that is intended to help other organizations juggle such massive code bases without dropping balls along the way, Metz writes.
  • Better debugging tools. “As far as I know there is no language or tool that, given a large complex program, can show a programmer ‘how it works,’” blogs programmer Ben Jones, in one of the other pieces on the web entitled “Programming Sucks.” “The path from ‘this doesn’t work’ to ‘this doesn’t work because …’ is tough to travel. Why can’t the debugger help point out where things are going wrong? ‘The first 99 times this loop ran, the value of x was between 1 and 10, but now it’s -342341. You might want to look at that.’” (A minimal sketch of such a watchpoint follows this list.)
  • More testing. Cars, for example, are becoming too complex for regulators to test properly, writes the Times. The National Highway Traffic Safety Administration (NHTSA) doesn’t examine automotive software nearly as closely as the Federal Aviation Administration does for airplanes, but if it did, it would require many more testers.
  • Keeping expertise in-house. When the New York Stock Exchange crashed earlier this year, some blamed the failure on staff cuts made to save money. Without that expertise in-house, the system was harder to support and maintain.
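For the curious, here is a minimal sketch of the anomaly-flagging watchpoint Jones imagines. No mainstream debugger offers exactly this; the API here is invented for illustration, in Python:

```python
# A toy version of the watchpoint Jones describes (invented API, not a
# real debugger feature): remember the range of values seen so far and
# complain when a new value falls outside it.

class Watch:
    def __init__(self, name, warmup=99):
        self.name = name
        self.warmup = warmup      # samples to observe before complaining
        self.lo = float("inf")
        self.hi = float("-inf")
        self.seen = 0

    def __call__(self, value):
        if self.seen >= self.warmup and not (self.lo <= value <= self.hi):
            print(f"The first {self.seen} times, {self.name} was between "
                  f"{self.lo} and {self.hi}, but now it's {value}. "
                  f"You might want to look at that.")
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)
        self.seen += 1
        return value

watch_x = Watch("x")
for i in range(99):
    x = watch_x(1 + i % 10)   # x stays between 1 and 10
x = watch_x(-342341)          # trips the watchpoint, echoing Jones's example
```

A real tool would need smarter statistics than a simple min/max range, but the behavior Jones describes, flagging -342341 after 99 in-range values, falls out of even this toy version.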

This problem is actually more critical to the industry than security, warns Tufekci. “From our infrastructure to our privacy, our software suffers from ‘software sucks’ syndrome, which doesn’t sound as important as a Big Mean Attack of Cyberterrorists,” she writes. “But it is probably worse in the danger it poses. And nobody is likely going to get appointed the Czar of How to Make Software Suck Less.”


