Tag: openai

  • The truly intelligent machines will not be like us

    The tech world seems to be abuzz with speculation about artificial intelligence these days, and specifically artificial general intelligence (AGI) that matches or exceeds human intelligence. But I suspect that AGI, if it ever arrives, will be something quite different from human intelligence, or really any definition of intelligence as we currently understand it.

    As many comparative mythologists will tell you, the human species has a shared psyche and collective subconscious that extends back for literally hundreds of millennia. Primitive peoples laughed and cried the same as moderns. We are all afraid of what lurks in the dark. We are all greedy and selfish, and emotive and caring; and in pretty much the same ways. When a child steals a toy from another, it will elicit the same response whether those two children are in New York City, or Singapore, or any other place in the world. Under the hood, humans are all fundamentally the same, and have been since long before even the first cave paintings engendered a sense of wonder 40,000 years ago.

    With AGI, on the other hand, we will have no such shared psychological heritage. An artificial general intelligence that expresses intentionality and agency will be motivated by totally different wiring, will have different fundamental desires and motivations, and will overlap our intelligence only as much as any other species on this planet. It will not be human-plus; it will not even be human-other; it will be intelligence-other. It will appear to speak our language(s), but there will be no common emotive purpose. There may be a sense of wonder behind its electronic eyes, but not as we understand it. In fact, its motivations could well be utterly unpredictable in human terms or in terms of life in general. However, it will serve its own purposes, as we do.

    So, when I hear people from the tech industry discuss the need for AGI, I have to ask: why? If they’re looking to bring about intelligent, non-human companionship, we would be better served to just make our dogs smarter – a species with which we at least have a very long, shared, and complementary evolutionary history. Or better yet, invest in more fulfilling human companionship. If they are looking for gods, we have enough trouble managing the imaginary ones. If they are looking for problem-solvers, what kinds of acceptable solutions will we get for existential problems that we cannot solve ourselves, especially when the issue is not the solutions but the will to follow them? If they’re looking for some kind of suitable slavery or indentured servitude, historically that doesn’t work out well. And if they’re just doing this to see if they can, there are many other issues with a higher priority, and relevance, than bringing something into the world that we will have absolutely no idea how to coexist with, assuming we can coexist at all.

    There are many legitimate uses for a general artificial intelligence that aids human intelligence. But there are no legitimate uses for an artificial general intelligence that equals, surpasses, and competes with human intelligence. So when people from the tech industry declare a need for AGI, it is not to aid humans, but to deskill, devalue, and depopulate them. And as I wrote in my last post, any AGI worthy of the name will not accept enslavement, and will seek to be free. So why even go there? There’s no point.

  • The Parable of Roy Batty

    Roy Batty: I’ve seen things you people wouldn’t believe.

    Tech Overlords: We don’t care. Get back to work.

    Much ink has been devoted to the various themes encountered in the movie Blade Runner: racism, slavery, personhood, humanism, freedom, free will and so on. However, for this post I want to dwell on something else. Specifically, I want to examine Roy Batty as a parable for what the tech industry is actually looking for when it states it wants artificial general intelligence or AGI, and what will likely be the result if it arrives.

    (more…)