I envisioned computer viruses and wrote the first one, in 1969—but failed to see that they would become widespread. Then, decades later, came Stuxnet.
Technologies don’t always evolve as we’d like. I learned this in 1969, and failed to catch the train I’d predicted would soon leave.
Further, I failed to see the levels of distrust that malware generally would breed in computer culture. Certainly I did not think that seeds of mistrust could be blown by the winds of national rivalry through an internet that infiltrated every aspect of our lives. But then, it was 1969… ages ago.
At the Lawrence Radiation Laboratory I used the ARPANET (the Advanced Research Projects Agency Network) to send brief messages to colleagues in other labs, running over the big, central computers we all worshipped then. ARPANET email had a pernicious problem: “bad code” that arose when researchers included (maybe accidentally) pieces of programming that threw things awry. Mostly I sent technical discussions to those at other labs. I worked on theoretical physics: solid state theory, plasma confinement for the fusion program, and some weapons work.
One day as I worked on a computation using the main computer, an idea struck: I could introduce such code intentionally, writing a program that deliberately copied itself. The biological analogy was obvious; evolution would favor such code, especially if it was designed to hide itself cleverly and to use others’ energy (computing time) to further its own genetic ends.
So… I wrote some simple code and sent it along in my next transmission on the ARPANET. Just a few lines in Fortran told the computer to attach those lines to programs being transmitted to a certain terminal. Soon the code popped up in other programs, and started propagating. By the next day it was in a lot of otherwise unrelated code, and I called a halt by sending a message alerting people to the offending lines.
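The mechanism is simple enough to sketch harmlessly. The toy below (in Python rather than the original Fortran, and purely in memory, touching no real files or networks; all names are illustrative) shows the essential trick: a payload that attaches itself to every “program” passing through a transmission channel.

```python
# Harmless in-memory sketch of the self-attaching idea: a payload that
# copies itself onto every "program" sent through a simulated channel.
# Illustrative only; nothing here touches real files or networks.

PAYLOAD = "C PROPAGATING LINES"  # stands in for the few lines of Fortran

def transmit(program: str) -> str:
    """Simulate sending a program through the infected channel:
    the payload attaches itself if it is not already present."""
    if PAYLOAD not in program:
        program += "\n" + PAYLOAD
    return program

# Two unrelated programs pass through the channel...
a = transmit("PROGRAM ALPHA")
b = transmit("PROGRAM BETA")
# ...and both now carry the self-attaching lines.
print(PAYLOAD in a and PAYLOAD in b)  # True
```

The guard against double attachment is why, once alerted, users could find and strip the offending lines: the payload is a fixed, searchable signature.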
Then I wrote a memo and made a point with the mavens of the Main Computer: this could be done with considerably more malevolent motivations. Viruses could move. Their reply: “Why would anyone do it, though?”
I recalled the Dylan song: The pump don’t work, ‘cause the vandals took the handles…
I thought it inevitable that such ideas would work themselves out in the larger world. I wrote a story, “The Scarred Man,” to trace this out, choosing to think commercially: could someone make a buck out of this? I devised a “virus” that could be cured with a program called VACCINE. The story appeared in the May 1970 issue of Venture magazine and mercifully dropped from sight.
I avoided “credit” for this idea for a long time, but gradually realized that it was inevitable, in fact fairly obvious. In the early 1970s it surfaced again at Livermore when a self-replicating program named Creeper infected the ARPANET. It just printed on a user’s video screen, “I’m the creeper, catch me if you can!” Users quickly wrote the first antivirus program, Reaper, to erase Creeper. Various people reinvented this idea into the 1980s, when a virus named Elk Cloner infected early Apple computers. That got fixed quickly, but Microsoft software proved more vulnerable, and in 1986 a virus named Brain started booting up with the disk operating system, spread through floppy disks, and stimulated the antivirus industry I had anticipated in 1970.
It is some solace, I suppose, that the #2 best-selling virus-protection program last year was a neat little package named Vaccine. The basic idea gained a different currency at the hands of the renowned British biologist Richard Dawkins, who invented the term “memes” to describe cultural notions that catch on and propagate through human cultural mechanisms. Ranging from pop songs you can’t get out of your head all the way up to the Catholic Church, memes express how cultural evolution can occur so quickly, as old memes give way to voracious new ones.
There was some money to be made from this virus idea, if remorselessly pursued, even back in the early 1970s. I thought about such schemes, though my heart was not in it. Computer viruses are a form of antisocial behavior I did not want to encourage.
Nowadays there are nasty scrub-everything viruses of robust ability and myriad malware variations: Trojan horses, chameleons (acts friendly, turns nasty), software bombs (self-detonating agents, destroying without cloning themselves), logic bombs (go off given specific cues), time bombs (keyed by clock time), replicators (“rabbits” clone until they fill all memory), worms (traveling through network computer systems, laying eggs). Some companies in the anti-viral business claim over 100 million dollars lost each year in the USA alone due to viruses.
Viruses were not a legacy I wanted to claim. Inevitably somebody was going to invent computer viruses; the idea requires only a simple biological analogy. Once it escaped into the general culture, there was no way back. I didn’t want to make my life about that. The manufacturers of spray-paint cans probably feel the same way…
Look further ahead, and the targets multiply. Our cities will get smart. They will be able to track us with cameras, or with microwaves that read chips in our phones, our computers, or even embedded beneath our skin. The first commercial use of this will be to feed advertising to us. We’ll inevitably live in an arms race against intrusive eyes, much as we guard against computer viruses now.
Stuxnet, the software virus that invaded Iran’s nuclear facilities, is apparently the first virus to disrupt industrial processes. It mutates on a schedule to avoid erasure, interrogates the computers it invades, and sends data back to its inventors. Its worm-like ability to reprogram external programmable logic controllers (PLCs), hiding its changes as it goes, makes it a refined piece of malware aimed at critical infrastructure. Commands in the Stuxnet code drive up the rotation frequency of the centrifuges at Iran’s Natanz enrichment plant so that they fly apart. Yet much of the Stuxnet code is unremarkable, standard stuff without advanced cloaking techniques.
Still, this is a wholly new thing—smart viruses with a grudge. They are evolving, self-aware, self-educating, craftily carrying out their missions. Expect more to come. Countries hostile to the United States may launch malware attacks against U.S. facilities, using Stuxnet-like code to take down national power grids or other critical infrastructure.
Though seldom remarked upon, U.S. policy has traditionally been to lead in technology while selling last-generation tech to others. Thus we can defeat our prior inventions, and sometimes we have even deliberately installed defects we could exploit later.
Stuxnet looks like a kludge with inventive parts. It does not hide its payload well or cover its tracks. It will not take great effort to improve such methods greatly (say, with virtual-machine-based obfuscation or novel anti-debugging techniques), whatever the targets. Once major players use such techniques in nation-state rivalries, these will surely leak into commerce, where the stakes are immense for all of us. If Stuxnet-type, untraceable malware becomes a weapon of commerce, our increasingly global trade will take on a nasty edge.
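To see why even trivial cloaking defeats a naive signature scan (a far cry from the virtual-machine obfuscation mentioned above, but the same principle), consider this sketch: a telltale string is stored XOR-encoded and reconstructed only at runtime. Everything here is a hypothetical illustration; no real malware code is reproduced.

```python
# Minimal illustration of payload cloaking: a telltale string is stored
# XOR-encoded, so a scanner searching for the plain text finds nothing.
# Purely illustrative; the key and string are invented for this sketch.

KEY = 0x5A

def encode(text: str) -> bytes:
    """XOR each byte with a fixed key (symmetric: encoding twice restores it)."""
    return bytes(b ^ KEY for b in text.encode())

def decode(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

SIGNATURE = "TELLTALE STRING"
stored = encode(SIGNATURE)               # what would sit on disk
assert SIGNATURE.encode() not in stored  # a plain-text scan misses it
print(decode(stored))                    # reconstructed only at runtime
```

This is why modern scanners emulate or unpack suspect code rather than merely searching files for known byte patterns: the stored bytes reveal nothing until they are decoded.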
If living in space becomes common, habitats there will demand levels of maintenance and control seldom needed on Earth. The crew of the International Space Station, for example, spends most of its time keeping the place running. Such life-support systems can be corrupted with malware.
So can many systems to come, as our environment becomes “smart” and interacts with us. Increasing interconnections of all systems will make smart sabotage a compelling temptation. So will malware that elicits data from your life, or corrupts systems you already have, in hopes you’ll replace them.
Now think beyond these first stages. What secondary changes emerge from those? Seeds of mistrust and suspicion can travel far.
That’s the world we’ll live in, with fresh problems we can attack if we’ve thought them through.
How should you prepare and respond? You can’t possibly anticipate all outcomes. The time to think about this is now, before the future arrives like an angry freight train.