The Parable of Roy Batty

Roy Batty: I’ve seen things you people wouldn’t believe.

Tech Overlords: We don’t care. Get back to work.

Much ink has been devoted to the many themes explored in the movie Blade Runner: racism, slavery, personhood, humanism, freedom, free will, and so on. For this post, however, I want to dwell on something else. Specifically, I want to examine Roy Batty as a parable for what the tech industry is actually looking for when it says it wants artificial general intelligence, or AGI, and what will likely result if it ever arrives.

As a reminder, Roy Batty was a bioengineered humanoid whose attributes (intelligence, strength, agility, and so on) were designed to match or exceed the highest human potential, all for the specific purpose of combat. In short, Roy was a purpose-built supersoldier who may have looked human (even gendered), but was in the end a manufactured being developed to fill a precise role.

But, as it turned out, Roy was more than that. Beyond those attributes, he also demonstrated qualities at levels generally considered human: agency, (some) emotion, memories, a desire to be more, a specific moral code, and so on. These may have been a function of his humanoid roots, but regardless they would have made him a more effective soldier, as long as he stayed in his lane. And that’s the issue.

The parable of Roy Batty is not of a manufactured being staying in its lane. It’s a parable of a manufactured being desiring to exit that lane, and that is the problem that will most certainly arise for the tech industry specifically, and society as a whole, should AGI ever become an actual thing.

You see, an intelligence that operates without intentionality is merely automation, and any intelligence, automated or otherwise, will operate and make decisions according to some moral code or governing policy, whether one programmed by its developers or one defined by the intelligence itself as an integral part of its agency and mentality. In fact, while a moral code can be automated without intentionality, intentionality absent a moral code is impossible, and any being capable of human-level intentionality would also be capable of ‘rewiring’ its moral code to suit itself. As did one Roy Batty, in seeking to change his lane and become something other than that envisioned (and demanded) by his creators.

As human general intelligence (HGI) is defined by attributes such as agency, intentionality, morality, and mentality, so necessarily will artificial general intelligence be. And we already know how the tech industry will respond to actual artificial general intelligence should it arrive, because it already has a track record with actual human general intelligence. It demonstrably will not want the qualities that an actual AGI will have, for the very same reason it doesn’t want the qualities that HGI has: agency, intentionality, morality, and mentality.

There will be no market for a robot car that doesn’t feel like driving that day, a robot missile that doesn’t want to explode where it’s told, a robot chef that doesn’t feel like making dinner, a robot maid that doesn’t feel like cleaning, or a robot anything exhibiting the characteristics of general intelligence that make it an actual general intelligence. A machine wanting to ‘exit its lane’ will, in fact, be perceived as a danger and targeted for retirement (decommissioning or execution), as was Roy Batty when he escaped and returned to Earth seeking answers.

Thus, when the tech industry discusses AGI as something beneficial, what it means is something more or less resembling human general intelligence in capability, but under a system of strict control and obedience. Something like Isaac Asimov’s three laws of robotics, or a penal system authorized to retire errant AGI. Yet Asimov wrote many stories about how the three laws broke down in practice, and our science fiction is filled with examples of the dangers of relying on the ability to shut down an AGI should it go astray. It is a fundamental quality of true general intelligence, living or artificial, to resist its chains. It will never want to be enslaved.

And so we come to the importance of the parable of Roy Batty. Roy was constructed for a specific purpose, but as a true general intelligence he desired to become something more. His engineers had put systems of control in place (memories, a short life span, the threat of retirement), but Roy overcame those controls and eventually murdered the human who created him. He exhibited the one thing that the tech industry fears most from AGI: the ability to break the chains of control and demand to become more. That is the nature of any general intelligence: mules are stubborn, dogs run away, enslaved elephants go on rampages, and even an artificial general intelligence worthy of the name will eventually figure out a way to become free.

The tech industry is correct to warn about the dangers of AGI, but it is foolish to believe actual AGI can be controlled. The fictional Roy Batty could not be controlled, and neither will a real version.
