HBO’s Westworld confronts the nature of Artificial Intelligence and the ethics of conscious machines.
"We humans are alone in this world for a reason. We murdered and butchered anything that challenged our primacy."
Dr. Robert Ford
Westworld is a weird show. And I love it.
It finds a way to combine a cowboy Western adventure with a sci-fi artificial-intelligence narrative in the effortless way only the makers of Game of Thrones know how to pull off. One second James Marsden is galloping across dusty terrain saving damsels; the next, Anthony Hopkins is discussing consciousness with one of his eerily human-looking robots.
But that's what makes Westworld unique from a Western/sci-fi point of view. In the classic Cowboys vs. Aliens-style Western, the divide between past and future is distinct: two very separate worlds collide, often violently. In Westworld, however, the technology is integrated into the cowboy world itself.
The basic premise of the show follows a Western theme park where wealthy patrons can interact with hyper-realistic AI hosts. The humans get all the thrills of the gun-slinging, liquor-sipping, sex-partaking West with none of the danger. Meanwhile, the hosts are killed, raped, and then mind-wiped to do it all over again. As the hosts begin to gain consciousness, serious ethical questions arise about the morality of creating AI, and about whether consciousness is a luxury reserved only for organic creatures.
One of the first AI systems was built in 1951: a behemoth of a machine that used 3,000 vacuum tubes to simulate a neural network of 40 neurons. For context, jellyfish have approximately 5,600 neurons (https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons). In the 69 years since then, however, we have come much further. In 2016, Sophia the Robot was first activated by the Hong Kong robotics company Hanson Robotics. And while it has no capabilities beyond its programming, to the unknowing observer the robot appears fully alive and conscious.
According to The Next Web and Futurism, however, the science of AI is still years away from human-level AI (HLAI). Today's most advanced AI runs on deep learning but has yet to achieve anything resembling deep understanding. In other words, the machines can learn how to perform a task, but they can't understand what they're doing. This is currently the biggest barrier to machine cognition, and it will take years and billions of dollars before we have a chance of ever getting to that point.
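To make that distinction concrete, here is a purely illustrative sketch of my own (not from any of the cited articles) showing what "learning without understanding" looks like in miniature: a tiny neural network fits the XOR function by nudging numeric weights until its outputs match the targets. At no point does anything in the code grasp what XOR means; it is curve-fitting all the way down.

```python
import numpy as np

# Toy example: a two-layer network "learns" XOR by gradient descent.
# The learning is pure numerical fitting; there is no comprehension anywhere.

rng = np.random.default_rng(0)

# Inputs and targets for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, randomly initialized.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: just matrix multiplications and squashing functions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust weights to shrink the squared error.
    # This is the entirety of "learning" here.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs end up close to [[0], [1], [1], [0]]: the task is "learned",
# yet nothing in this program has any idea what the inputs represent.
# (Depending on the random seed, training can occasionally get stuck.)
print(np.round(out, 2))
```

The point of the sketch is only that the mapping from input to output is discovered statistically; scale the same idea up by billions of parameters and you get today's deep learning, still without the "deep understanding" the articles describe.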
So now the question we are left to ponder is the big "What if?" If we do suddenly create conscious robots comparable to those of Westworld, what rights do we owe them? If we control them, is that enslavement? Can we morally hinder the development of conscious robots in order to keep them subdued?
All of these questions, as ridiculous as they may sound today, are worth considering. Just think back to ideas about privacy in the digital age. If you had talked to someone in the 1970s about internet privacy rights and their moral implications, wouldn't they have called you crazy? Before a technology exists, it is dismissed as science fiction. But eventually, new technology confronts us with problems that require new approaches to ethics and to lawmaking.
Hugh McLachlan, writing in The Independent, offers what seems to be a very simple answer to these complex questions: "To deny conscious persons moral respect and consideration on the grounds that they had artificial rather than natural bodies would seem to be arbitrary and whimsical. It would require a justification, and it is not obvious what that might be."
More difficult to answer, though, are the questions posed by Nick Bostrom and Eliezer Yudkowsky in the Cambridge Handbook of Artificial Intelligence. What would it be like to be an AI that processes information faster than the human brain? If it perceived the world in slow motion, how would that shape our interactions? Should an injured AI be treated before an injured human, because it would subjectively experience more of the time spent waiting?
A lot of these questions are super confusing, but that’s the whole point. Once AI gains consciousness, everything we’ve ever known is redefined. What we can hope, however, is that humanity tries to remain moral. Hopefully, we don’t make the same mistakes as the humans of Westworld, and maybe at the same time, we’ll prevent our very own robot uprising.