“Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.”
- The Terminator
There’s no doubt that James Cameron wishes his phenomenally successful sci-fi flick The Terminator had been his directorial debut. In fact, Cameron had previously been hired to work on feature films for notorious B-movie producer Roger Corman. He worked on special effects for the marvellously terrible Battle Beyond the Stars, and as second unit director on the Aliens-foreshadowing Galaxy of Terror, before finally sitting in the big chair for Piranha II: The Spawning, which Cameron prefers to forget. The Terminator, however, was what broke Cameron into the big time, grossing $78 million at the box office on an extremely modest 80s film budget of $6.4 million.
The Terminator is the only film in this franchise I actually like, and not just because it has stop-motion SFX. All the ideas are laid out in this movie, and everything afterwards merely reworks them in various ways, usually less coherently. What is particularly interesting about the premise of this screenplay, however, is the fears it plays upon. In the 1960s, fear of nuclear annihilation had animated a great many projects for TV and film, but they had all focussed on the gritty psychological reality of it, as in Ladybug Ladybug or Ice Station Zebra, or indeed the darkly humorous Dr. Strangelove. The apocalypse in The Terminator is not, however, about fear of nuclear annihilation at all; it is about something largely new to film at that time: the robot apocalypse.
Yes, our world is destroyed by nuclear weapons in The Terminator, but this is almost incidental next to the situation it presents afterwards, where survivors are hunted down by robot terminators. Shades of Chris Claremont and John Byrne’s Uncanny X-Men tale, “Days of Future Past”, published three years earlier, although Cameron doesn’t seem to have read these comics. In Cameron’s plot, computers are given control of the nuclear systems as per the previous year’s WarGames, which also leans towards this fear. Having been given this power, they spontaneously reach “a new order of intelligence” (how? We’re not supposed to ask, or to care) and decide to wipe out humanity with nuclear weapons. This is a monumentally silly plan, beaten in ludicrousness only by the utter nonsense conducted by robots in The Matrix, yet it works in our minds for one simple reason: we love the idea of robot apocalypse, which seems to satisfy some need in us. What could it be…?
Armageddon itself has long had a strong hold over the human imagination, even before the discovery of species extinctions. But the robot apocalypse seems to be much less about fearing ‘the end of the world’ and much more about our perverse faith that our technology will supplant us. In a world where people are ceasing to believe in a forthcoming final judgement from God, it seems we have in no way abandoned belief in judgement or end times - The Terminator is stark proof of this. It is not a coincidence, after all, that the robot apocalypse gets the name ‘Judgment Day’. We believe, it seems, that a higher order of intelligence (however this odd term is to be interpreted) will find our species wanting. The same idea is at the heart of Cameron’s later film The Abyss, although in this case the threat is from high-tech sea monsters.
The draw of the robot apocalypse seems to lie in its claim that computers must inevitably surpass humanity’s intellectual prowess. This is explicit in The Terminator. Yet oddly, we do not seem to have any faith in the more likely idea that our computers will eventually surpass us in our stupidity. This is, after all, much more an eternal aspect of the human condition than the amusing claim that we are especially ‘smart’. The most striking legacy of my 1990s Master’s degree in Artificial Intelligence has been recognising that AI today differs only in degree of computing power, not in functionality. Yet still we like to imagine that our robots are an existential threat, even as we refuse to stop making them. I wonder what it is that prevents us from placing this threat more plausibly in our unlimited faith in technological progress...?