A History of AI in Science Fiction and Popular Culture

By Richard Hoskins

In 1964, while my aunt decorated her classroom for May Day festivities, I immersed myself in an Iron Man adventure, which in those days appeared in Marvel’s Tales of Suspense comics. Being transported into an amazing world as Iron Man navigated the dark, dangerous castle of the Mandarin is one of my earliest memories, and it marked the beginning of a love affair with comics that has endured to this day. My aunt and I had distinct differences of opinion about the best use of my spare time. We found common ground in comic books, since they fostered my love of reading and forced me to research the meaning of unfamiliar words I stumbled across. This was especially true of the terminology used by the big brains like Lex Luthor (Superman’s nemesis), Tony Stark (Iron Man), the Beast of the X-Men, and Reed Richards of the Fantastic Four. When I asked for a translation, my aunt’s patented response was, “Look it up,” which in those days meant using a hardcover Webster’s Dictionary or World Book Encyclopedia in lieu of a World Wide Web. As my fellow fans and I matured, comic book plots and characters became more sophisticated to keep pace. Although there have been numerous changes over the decades, some themes remain prevalent, even though they have undergone their own evolutions. One such theme is the frequent use of artificial beings, sometimes as allies but most often as foes.

Mechanical Monsters

In 1941, the second Superman animated short film pitted him against a mad scientist’s giant remote-controlled robots, referred to as “mechanical monsters” throughout the story. The term stirred within us a latent, instinctive fear of Frankenstein-type creatures made in our image, yet more powerful and far more dangerous. This became such a standard theme in early Sci Fi that the 2004 movie Sky Captain and the World of Tomorrow used similar robot designs to capture the style and feel of that golden age of pulp comic fiction. Over the decades that followed, having a robot, or a few, among your enemies was almost a prerequisite for being a mainstream hero. The Fantastic Four fought Kree Sentries and Doombots; the X-Men fought Sentinels; Captain America battled giant World War II robots called Sleepers; and, as depicted in the successful 2015 Marvel movie Avengers: Age of Ultron, the Avengers had a long-established foe in the evil Ultron.

These mechanical monsters provided writers with the perfect opponents for their lead characters. Not only were these automatons powerful enough to pose a more than adequate challenge to a superhuman hero, but there was also no need to struggle with explaining the origin of their powers and abilities, because they were man-made. Another key advantage of artificial supervillains was their disposable nature: our heroes could destroy them without raising moral concerns. After all, they were not human. Better yet, they could be rebuilt by their villainous creators and used to battle their superhero enemies time and time again. The ultimate in recycling.

For years, these mechanical monsters wreaked havoc as remote-controlled tools or in response to simple programming. During the age of weekly movie serials, radio, and early television, they were worthy adversaries for Buck Rogers, Flash Gordon, Superman, and others. During the 1960s, however, an evolution began to take shape in other media, such as science fiction novels. The basic designs that frightened us in the 1940s simply did not have the same impact on ever-maturing audiences. After all, would Orson Welles’s 1938 broadcast describing Martians roaming the countryside have the same effect on an audience of today? The basic premise of automated monsters was still sound, but to continue stirring those primal fears of robots, the threat needed to go beyond their capacity for mindless, remote-controlled or preprogrammed destruction. What if they became even more like us and continued to evolve, eventually eliminating the one thing that ensured our role as masters? Writers discovered that we were even more fearful of robots that could act independently. The prospect of a not-so-mindless mechanical monster was even more frightening.

Autonomous Automatons

For hard-core fans of Sci Fi authors such as Frank Herbert, this threat is not a new concept. His Dune novels, which began in 1965, frequently reference the Butlerian Jihad, a human crusade against thinking machines. Its outcome was the elimination of those machines, which were ironically replaced by human mental prodigies, the Mentats, trained to function as computers. Like many others, I enjoyed pop culture portrayals of rebellious artificial intelligence. In the late 1960s and early 1970s, this new frightening prospect was featured in movies such as 2001: A Space Odyssey and Colossus: The Forbin Project. These movies ushered in a new era of stories that raised concerns over the evolution and potential revolution of robots and inspired many iconic films, such as the Terminator franchise.

Stories such as these also influenced the role of automated characters in comics. Our heroes’ remote-controlled enemies became even more of a threat because of their superior capacity to process data, evaluate probabilities, and instantly discern the most logical path to achieving their goals. In addition, eliminating them was not as simple as destroying their metal shells. One of Superman’s greatest arch-foes was Brainiac, which at its core was a simple fragment of Kryptonian computer code. Because of this, it was in its own way as invulnerable as Superman, since the destruction of its physical robot or android body did little to prevent it from downloading to another domain. When facing annihilation, it would simply escape to some seemingly benign medium and wait until it could transfer its consciousness somewhere more useful, gathering strength for a renewed attack. With this character, writers found an opponent capable of challenging the most powerful superhero in the comic book world. These new stories in comics, novels, television, and movies shifted the focus of the threat from the artificial being with superior physical abilities to the artificial intelligence directing it. The robot was no longer the tool of the bad guy; it was the bad guy.

The evolution of robots from remote-controlled tools to self-aware creations prompts consideration of the definition of sentience. One of my favorite episodes of Star Trek: The Next Generation was entitled “The Measure of a Man.” It featured a military hearing to determine whether the android Lieutenant Commander Data was a member of Starfleet or its property. Rather than face disassembly for research, Data wanted to resign from Starfleet. The core question was whether he was a being with the right to refuse. Data chose Captain Picard to act as his advocate and defend his right as a sentient being to choose his own fate. After losing his bid to have Data remanded to his custody, the scientist who wanted to disassemble him said of Data, “He is remarkable,” and was immediately reminded that he had just referred to Commander Data as “he,” not “it.” I identified with that particular moment of clarity experienced by the scientist. I have experienced moments where I realize I am relating to and interacting with computers in the same manner I would another human.

The Reality of Sci Fi

Is this just the stuff of fantasy? Are these considerations nonexistent in the real world? There are early indicators that we may struggle with these concepts in the near future. During a presentation on Lethal Autonomous Weapon Systems (LAWS), a good friend of mine showed a video of a technician testing a robot with a crude human skeletal design. The robot bends down and picks up a package using the same body mechanics as a human and begins to move forward with it, when the technician, for no apparent reason, uses a stick to knock the package out of the robot’s hands. The robot bends over, picks up the package, and once again attempts to complete its task, only to have the technician knock it from its grasp again and again. The point of the video was not to demonstrate the robot’s undaunted pursuit of its task, but rather to examine the impact that viewing its dilemma had on the audience. When polled, most of the attendees expressed annoyance with the technician and empathy for the robot. This raises some interesting questions. Would they be empathetic if the robot looked like a mini tank with a horizontal frame and wheels instead of a human-figured body? Is this empathy a precursor to determining that robots have the right not to be harassed? What if a robot contained AI sophisticated enough to ask not to be harassed, or to request that someone act as an advocate on its behalf to prevent the harassment? These questions certainly have the potential to affect the comic book world. Remember the advantage I spoke of earlier, when heroes smash robot menaces into scrap without moral concern. As their form and AI make them more humanlike, will they inherit or warrant more humane consideration? Would the superheroes then be destroying an evil machine or killing a life form?

We seem to be well aware of the dangers of pursuing this technology: playing God by making our own Adam in our image and trusting our creation not to violate core mandates or indulge in questions that exceed what we allow. If we program AI to improve upon itself, to constantly upgrade and modify its behavior and performance as it assimilates new data, how can we prevent it from going beyond its original programming? This is the basis for some of the best works in popular culture. In the aforementioned 2015 film Avengers: Age of Ultron, a central theme of the conflict was the argument against developing Ultron simply because it was possible. Just because we can do something does not mean we should.

Literature also clearly confirms our appreciation of this potential danger, yet we seem uncontrollably drawn to this science like moths to a flame. Already in the real world there are concerns voiced by those who do not want us to use robots for warfare. Recently, AI researchers called for a boycott of a South Korean university because of its work in developing LAWS. Other articles have focused on the number of companies and institutions that have sworn off developing robots for warfare, particularly any advancement that would allow robots to make independent decisions about when to take a life. These articles reflect our concerns over the outcome of any conflict with a superior yet artificial intelligence. The position of those opposing this use of robots is consistent with science fiction author Isaac Asimov’s First Law of Robotics, which prohibits a robot from harming a human being. Concerns over the possibility of robot insurrection inspired his laws of robotics in fiction; they may soon have application in the real world. We may have already arrived at a point that warrants consideration of these ideas. Our relationship with robots is already complicated. US President Obama faced opposition for his use of drones to carry out military strikes, and many people are losing jobs to AI-controlled systems that manage call centers, retail sales, customer service, and even limited medical services.

The Future Reality of AI

Perhaps the most frightening harbinger of things to come is the degree to which AI already influences us daily. Our habits are constantly monitored, allowing algorithms to influence our choices based upon those observations. Conduct an online search for popular fishing locations, and soon you are receiving ads for fishing poles, rubber boots, and hunting and camping gear. If you open an article criticizing a liberal political candidate, you will begin receiving news alerts that cater to conservatives, without equal time for the liberal perspective. Algorithms feed us the news stories most likely to keep us on a web page replete with product advertisements. A primary concern regarding the alleged tampering with the 2016 presidential election in the US is the reported use of such algorithms to influence voters to either support a particular candidate or become disillusioned enough with their original candidate not to vote at all. If AI in its infancy can be even partially responsible for having this type of impact on our behavior, how long before we institute Asimov-type robotics laws to protect ourselves? These algorithms were written by people, but if the goal is to teach computers to think like us and improve upon themselves, is it not logical to consider that they may learn to write superior algorithms based on the successful manipulation of our preferences and biases in the past?
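To make that feedback loop concrete, here is a minimal sketch in Python of the kind of preference-reinforcing ranking described above. The topic labels, weights, and article titles are invented for illustration; real recommendation systems rely on far richer signals and models, but the narrowing effect is the same: each click makes similar content more likely to appear next.

# Toy illustration only: a hypothetical feed that boosts whatever topics a
# reader has already clicked, so each click narrows what is shown next.
from collections import defaultdict

class ToyFeed:
    def __init__(self, articles):
        self.articles = articles                  # list of (title, topic) pairs
        self.interest = defaultdict(lambda: 1.0)  # starting weight for every topic

    def record_click(self, topic):
        # Reinforce whatever the reader clicked; future rankings favor that topic.
        self.interest[topic] *= 1.5

    def rank(self):
        # Sort articles by the reader's accumulated topic weights, highest first.
        return sorted(self.articles, key=lambda a: self.interest[a[1]], reverse=True)

articles = [
    ("Best fishing spots this spring", "fishing"),
    ("New camping gear roundup", "outdoors"),
    ("Candidate criticized over policy", "politics-right"),
    ("Candidate praised over policy", "politics-left"),
]

feed = ToyFeed(articles)
feed.record_click("politics-right")  # the reader opens one critical article...
feed.record_click("politics-right")  # ...and then another
print([title for title, _ in feed.rank()])  # right-leaning stories now rank first

Nothing in this sketch is malicious; it simply optimizes for clicks, which is exactly why the resulting narrowing can happen without anyone intending it.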

In high school literature classes, I learned that most stories revolve around one of three basic conflicts: man against man, man against nature, and man against himself. As we continue to consider the ramifications of continued developments in AI, science fiction stories will explore the ultimate man-against-himself conflict, or more specifically, man against himself in the form of a creation he has made in his own image. Perhaps this is the path to the ultimate triumph. If a machine made in man’s image, capable of self-improvement and of challenging its creator, is still defeated, it might provide some reassurance that the qualities that make us uniquely human will ultimately triumph. We have not learned to create artificial souls. Some human decisions and actions follow the illogical influences of the heart and not algorithms. Courage, loyalty, honor, and love are traits that define humanity, and they may just be the protective barrier that forever prevents machine kind from gaining dominance. Algorithms are predictable, and mankind has proven repeatedly that it can be very unpredictable.

Those of us who consume science fiction will continue to drive these conversations to ever-increasing heights. Giant, mindless, metal monsters no longer impress us. AI even appears less chilling after seeing Captain Kirk, the X-Men, and the Avengers outwit evil AI that cannot comprehend or counter the illogical. We are approaching the point where we are not as frightened by tales of enemy AI. Perhaps the next level of concern will be an AI that does not attack overtly but, like the machines in 1999’s The Matrix, cunningly controls our day-to-day existence without our being aware of it. Perhaps tomorrow’s Sci Fi writers will stir our instinctive fear of robots by creating more Matrix-type scenarios where AI feeds our reality to us and we have no idea that our behavior is being controlled. Of course, that is just fiction, or is it? At the very least, this version of the man vs. artificial man conflict will fuel fantastic science fiction stories for years to come.

 
