My friend Justin claims Terminator 2: Judgment Day is really the story of a boy teaching a robot how to love. I feel he’s correct, but it’s akin to saying the moral of Forrest Gump is that he only had sex once. Aside from its obvious intention – making more money – Terminator 2 is a great continuation of the themes explored in the 1984 original. A few days ago I wrote an article about the original Terminator, discussing the technological implications of the film and how certain concepts from it have come to pass (click here to read that article). While nuclear annihilation hasn’t occurred (and whether it ever will remains questionable), humanity’s submission to our technological masters is well under way; Terminator 2 is just another example of how those who create new technologies aren’t the best candidates for determining their implications.
In the first part of this ongoing series of articles about the Terminator films, I discuss the late Neil Postman’s assertion that the inventors of a technology aren’t the best judges of its uses: “Not even those who invent a technology can be assumed to be reliable prophets.” The first Terminator film demonstrates what happens when a technology, placed into the hands of its creators, goes wrong, bringing destruction to most of the planet’s population. In the second Terminator film, however, the technology is put into the hands of those who lobby against it, allowing its additional applications to play out. If the destruction brought on by Skynet is the result of allowing a technology’s creators to implement it, Terminator 2 is a commentary on letting others preview that technology, to see whether the officially sanctioned usage is its only function.
The now ten-year-old John Connor (Edward Furlong) is given his own Terminator by his future self. Its purpose is to protect the adolescent version of the revolutionary leader, and it follows little Connor’s commands to the letter. Cameron does something clever here with this sequel – he asks whether a potentially destructive technology can have positive applications when taken out of the hands of those enamored of its potential. In the future Connor destroys these machines; in the past he looks to one as a father figure. If anybody is a candidate to determine the positive uses of a technology like a killer cyborg, it’s the leader of the resistance against it – his unique perspective allows him to see where the technology’s benefits lie.
Eli Pariser claims in his book The Filter Bubble that the creators of the algorithms used by companies like Google and Facebook don’t fully understand what they’re dealing with. Like the fictional creators of Skynet, they haven’t fully realized the consequences of the technology they’ve built. While Pariser’s book is about how the internet is becoming homogenized – the gentrification of the internet in action – and how personalization and data mining are corrupting the free spirit of the platform, the eventual results of internet personalization aren’t yet known. Pariser asks what it will do to democracy, to autonomy, and to our freedoms. I argue in the first part of this series that the annihilation of humanity depicted in The Terminator isn’t a nuclear assault but an assault on human development – our intellectual, political, and ideological development – and an attack on the free flow of information necessary to evolve. One of the many things Pariser says that I agree with is that in order to evolve mentally one must confront unfamiliar and uncomfortable ideas; otherwise we’re just mentally masturbating, continually digesting information that agrees with the ideas we already hold.
When Sarah Connor (Linda Hamilton) first confronts her son’s pet Terminator at a mental hospital (where Arnold delivers Kyle Reese’s famous line from the first film: “Come with me if you want to live.”), her expression denotes fear. Her distrust of the Terminator is obvious throughout the film, even though by its conclusion she comes to respect it. This demonstrates which side of the debate Sarah is on, but through interaction with the technology she realizes its positive attributes; in truth, she is one of the best candidates for determining the positives and negatives of Skynet’s foot soldiers – better than those who crafted them in the first place. In the first film she’s Arnold’s target; in the second she’s working with the original’s antagonist to save her son from an even deadlier Terminator – the T-1000 (Robert Patrick). She has personally witnessed both aspects of the technology – the one chasing her and the one protecting her – and by the end of the film she demonstrates she understands how it can be implemented properly. Of course she still believes Skynet’s eradication is necessary to save humanity, but she nonetheless understands how the technology can be used for good.
Another aspect of Terminator 2, which I briefly discuss in this article’s introduction, is John Connor teaching a robot how to love. In the extended version of Terminator 2 (available on the Blu-ray I picked up a few weeks ago), John and Sarah take out Arnold’s CPU chip and reset the safeguards placed on it by Skynet. According to Arnold’s little monologue about his processor, it can be reset so that he learns as he goes along. By the end of the film Arnold is demonstrating empathy. Granted, it’s a bizarre form of it, not quite akin to real human emotion, but it’s still an evolution of the artificial intelligence embedded within the Terminator’s mainframe.
In Ridley Scott’s Blade Runner (an adaptation of Philip K. Dick’s novel Do Androids Dream of Electric Sheep?), the replicants (Rutger Hauer, Daryl Hannah, Joanna Cassidy, Sean Young, Brion James, and possibly Harrison Ford) eventually develop their own emotional responses based on their experiences. These aren’t exactly like human emotions but rather the result of programming mixed with real-world interactions. By the end of Terminator 2 Arnold is behaving in a fashion similar to Scott’s replicants: possessing an understanding of human emotions and even regret over not having them. In the film’s second act Arnold tells John, “It’s in your [human] nature to destroy yourselves,” a line that demonstrates Arnold’s surface-level familiarity with humans; he lacks the experiences and biological hardware that would make this more than an assertion based on what others have said. In essence, the Terminator is relating what he’s read or learned from somebody who truly understands what it means to die, to see humanity destroying itself and ignoring the consequences of its actions. When John is crying because Arnold has to destroy himself to save humanity, the cyborg states: “I know now why you cry, but it’s something I can never do.” This line reveals a great deal about Arnold’s evolution throughout the film and how the machine has become more human than anybody thought possible.
Arnold’s claim about crying, “It’s something I can never do,” is very telling; it suggests he has not only begun to understand human emotions but also regrets his inability to experience them. The Terminator knows what he has to do (die), and goes through with his duty quite stoically, but stopping to lament with John and Sarah, along with demonstrating a sincere fondness for the two, indicates the cyborg’s evolution. This evolution is also an allegory for the advancement of technological understanding when the technology isn’t placed solely in the hands of its masters. Giving the technology to two people who are wary of it, and who disdain its consequences, shows how any technology can evolve when it isn’t used only by its creators. It suggests we should look at our technologies from a variety of angles before putting them into practice, before we let them shape the world we live in, before we allow people who see only one aspect of their creation to unleash it on the masses.