Bugs in the system

Back when I was a young graduate student, our system administrator was a bit of a gamer. We used UNIX: a Digital Equipment VAX running some BSD version, and later SUN workstations - and I pause for a moment in memory of those worthy but now defunct corporations. UNIX at the time came with a bunch of standard command-line games (and graphical ones later on the SUNs) - which of course the sysadmin was free to delete, but ours didn't. He even installed a new game - Empire, a multi-player "Civilization"-like game - and started a few games hosted on our computers, soliciting players from around the internet.

For a few months Empire, rather than physics, became my passion. It ran on a schedule: every 4 or 6 hours the clock ticked, and you could make more moves of your military units, move commodities from one city to another, or make new plans for your cities. And of course all your opponents did the same. Being there right at the clock tick allowed you to attack first, if that was in the cards, or prepare necessary defenses for an expected attack. And missing a clock tick (for something as useless as sleep, for instance) meant losing tempo in the game: your military units might just sit there rather than move, one of your cities might start to starve, or food or other commodities might be wasted because there was no room to store more.

Realizing this wasn't personally sustainable, I delved into the C programming language, which seemed to be the standard for UNIX but which I'd hardly used up to then (I'd done some Fortran and assembly programming before). After a few days' work I had an automated player program that I could schedule to run shortly after each clock tick to take care of the basics - moving commodities around and moving some of my units along pre-arranged paths that I could update once or twice a day.

This gave me a slight advantage over those players who weren't waking up every 4 or 6 hours at night to update their games, and my game started to do quite well. But not well enough for me; I started to notice some anomalies in the way certain things behaved in the game. If I used ground transport to move a fighter plane from one city to another, the mobility level in the city I moved it from dropped far more than I expected. And if I moved two aircraft from two different cities, the mobility in both cities dropped to the same level. There was some bug in the game software, and I needed to track it down.

So I started reading through the source code of the game. This really got me up to speed on programming with the C language - the code made extensive use of pointers, and there were arrays of pointers to functions and multiple layers of indirection that had to be traced to figure things out. When I finally got down to the code for moving aircraft, I discovered what was going on: the code used the mobility of the central capital city, rather than that of the actual source city, as the starting value before subtracting the cost of moving the plane. I quickly realized I could exploit the bug - if I kept my capital city's mobility high, I could quickly raise the mobility available in any city by bringing in a fighter plane and moving it around. This gave a huge advantage in the game - mobility was the key factor that limited how much you could do with each tick of the clock.
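The pattern is easy to sketch. The following C fragment is a hypothetical reconstruction, not the actual Empire source - the struct, array, and function names are mine - but it shows how reading the wrong city's mobility reproduces both symptoms: every source city ends up at the same level, and a high-mobility capital lets you pump mobility into any city.

```c
#define NCITIES 4
#define CAPITAL 0   /* index of the capital city (illustrative) */

struct city { int mobility; };

struct city cities[NCITIES];

/* Buggy version: starts from the CAPITAL's mobility instead of the
 * source city's, then stores the result back into the source city. */
int move_plane_buggy(int src, int cost) {
    int remaining = cities[CAPITAL].mobility - cost;  /* bug: should read cities[src] */
    cities[src].mobility = remaining;
    return remaining;
}

/* Fixed version: charges the movement cost against the actual source city. */
int move_plane_fixed(int src, int cost) {
    cities[src].mobility -= cost;
    return cities[src].mobility;
}
```

With a capital at high mobility, "moving" a plane out of any city resets that city to nearly the capital's level - which is exactly the exploit described above.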

While perusing the source code I found some other things that looked like bugs too, and verified them in the game. One of the issues was the handling of negative numbers. If you loaded a negative quantity of a commodity onto a ship in a harbor city, the code was set up to treat that the same as unloading a positive quantity from the ship to the city. However, while for positive loading the code checked that the city had a sufficient quantity of the commodity, for negative loading (unloading) it made no corresponding check on the ship. Loading a large negative quantity of gold onto a ship thus gave you a way to create unlimited quantities of gold (or any other commodity the same way).
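Again, a minimal sketch - the struct and function names here are illustrative assumptions, not the actual Empire code - of how a one-sided bounds check turns sign-flipped input into free commodities:

```c
struct hold { int gold; };

/* "Load qty gold from the city onto the ship"; by convention a negative
 * qty means unload from the ship to the city.  Only the positive
 * direction is bounds-checked - that asymmetry is the bug. */
int load_gold(struct hold *city, struct hold *ship, int qty) {
    if (qty > 0 && city->gold < qty)
        return -1;              /* loading checks the city's stock... */
    /* ...but unloading (qty < 0) never checks ship->gold */
    city->gold -= qty;
    ship->gold += qty;
    return 0;
}
```

Passing qty = -1000000 against an empty ship drives the ship's hold deep into negative gold while crediting the city with a million - unlimited wealth from a missing `if`.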

Finding these bugs that could give such a huge advantage in the game gave me some moral qualms, and I consulted our sysadmin, who was running the game, about what I should do. He asserted that my only responsibility was to file bug reports and suggested fixes with the game developers, and that he'd update the game software when they fixed the problems. As long as the bugs were reported, it was perfectly legal (according to standard game rules) to exploit them... So I did...

My obvious and mysterious advantages in the game didn't sit well with the other players, a few of whom knew who I was. I soon found my nation under attack from a united alliance of all the others. With my bag of tricks I was still able to largely prevail, until the nuclear weapons came out...

Not long after this (November 1988) I was working on one of our Sun machines when suddenly everything mysteriously slowed down - the computers were being attacked by the first "internet worm". It turned out I was very close to the epicenter of this event, and one of my colleagues was a good friend of Robert Morris, the student who launched the attack, which exploited vulnerabilities in some standard UNIX system services. The era of computer viruses and worms was upon us. Morris was taking advantage of bugs in major computer systems just as I had exploited bugs in the Empire software to gain an advantage in that game.

Bugs with destructive power in themselves, or available for exploitation by the unscrupulous, are almost inevitable consequences of our efforts at automation and removing humans from low-level oversight and decision-making in any system. Even in systems where humans ostensibly make the decisions, if human actions are governed by rigid rules (whether or not those rules function well under ordinary circumstances) or are taken with incomplete understanding, the system becomes a "machine" with predictable responses - and that predictability almost inevitably invites a quest for "bugs" to exploit for personal advantage. Infamous hacker Kevin Mitnick found social engineering (tricking people into giving him their passwords) at least as effective as anything else in breaking into computers.

The problem extends far beyond the domain of computer systems. Economic, media, legal and political systems have become highly complex "machines" in modern times, governed by rigid rules and understood by few of those who depend on them. Vital decisions are often made by poorly paid bureaucrats (on regulation enforcement, say) or low-status workers (those mortgage "robo-signers", for instance). The process can be mystifying to the outsider, but to somebody who works to understand it, "bugs" in the system open up enormous (what most would regard as immoral, but often perfectly legal) opportunities for great riches or power.

The 2007-2008 financial crisis is very much a case in point, at least as I understand it from my recent reading of Michael Lewis' account, "The Big Short: Inside the Doomsday Machine". A number of people became enormously wealthy while bankrupting their own companies, their customers, or large swathes of the general public. They managed this through the exploitation of a handful of real "bugs" in US and international systems of finance. Some of these bugs have been addressed; some I'm less confident ever will be - which itself points to further bugs in our political and media systems.
