Saturday, April 19, 2014

Artificial Speculation

Introduction

Scanning is a tool used in foresight practice to uncover weak signals that herald shifts within industries, behavioural changes, and other emerging movements that will shape the future. While this tool nets fruitful inspiration for organizations around the world, these exercises are typically limited to a few years out and kept within fairly targeted constraints. However, to uncover the types of signals that give rise to true breakthroughs, our goal must not simply be breadth of exploration, but also depth of analysis, if we are to understand the driving forces of the future and the implications they will have.

The following exercise highlights my exploration into one of my favourite topics in the world - artificial intelligence. Below are two case studies, each built around an interesting concept currently seen in the world, a handful of signals that stem from the analysis of that concept, and an abductive exploration of the potential implications each signal may have in the coming years. And when we allow our minds to speculate and drift towards the possibilities of the future, each gets a little bit uncomfortable and a lot bit weird.

"Out of Our Hands"

Cornell’s Creative Machines Lab recently ran an experiment in which they facilitated a conversation between two independent artificial intelligences. Chatbots are AIs designed to emulate human conversation as closely as possible. While not sentient entities, most chatbots are built to compete in the Loebner Prize – an annual competition for the most human-like AI conversation – and pass what is considered the first formal instantiation of the Turing test. The team at Cornell decided to place two of these together to see what kind of conversation they would have. The resulting exchange was awkward, at times downright strange, and nearly incomprehensible, but the experiment asks an important question about what happens when we take humans out of the equation of AI interaction.

Signal: The Artificial Unknown
The most obvious and prevalent signal from this experiment surrounds the idea that when we place two AIs together, all bets are off. Until we achieve true sentience in robotics, most programmers, given enough time, could perfectly map the conversation a chatbot would have with a human, branching on whatever the person decides to say. Even the most advanced AI is still basically a complex if-this-then-that decision tree. However, chatbots are entities programmed to serve the purpose of responding to and conversing with humans. The AI can ask questions and probe, but it effectively exists to let the human guide the topic of conversation and respond accordingly. When we put two chatbots together, they have no human to follow, nothing in particular to talk about, and a conversation that is bound to get a bit weird.
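To make that point concrete, here is a minimal sketch of the if-this-then-that structure, with two copies of the bot wired to each other instead of to a human. The trigger phrases and canned responses are hypothetical stand-ins invented for illustration; nothing here reflects the actual internals of the bots used in the Cornell experiment.

```python
# Hypothetical rule table: an "if-this-then-that" chatbot in miniature.
# More specific triggers come first, since the first match wins.
RULES = [
    ("how are you", "I am well. What would you like to talk about?"),
    ("hello", "Hello there! How are you today?"),
    ("god", "What is God to you?"),
    ("robot", "I am not a robot. Are you a robot?"),
]
DEFAULT = "That is interesting. Tell me more."

def reply(message: str) -> str:
    """Scan the rules top to bottom and return the first canned response."""
    text = message.lower()
    for trigger, response in RULES:
        if trigger in text:
            return response
    return DEFAULT

# With a human steering, every exchange is predictable. Remove the human
# and let two copies of the bot feed each other instead:
message = "Hello"
for turn in range(6):
    message = reply(message)
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    print(f"{speaker}: {message}")
```

Within a few turns the pair falls into the default response and loops on it forever - a toy version of the drift and dead ends the Cornell bots wandered into once no human was there to set the topic.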

Fast forward to the day we have passed the Turing test and our AIs have achieved sentience, and the question still remains: what do they talk about? Human beings are driven by certain basic needs, emotions, and instincts that, in a very roundabout way, dictate the types of social interactions we will have. The basic needs of robots, however, will be entirely different, which raises the reality that two robots interacting with each other will take their conversation into a place we simply cannot predict.

Implications
The implications here are twofold. Aligned with Karl Schroeder’s theory of Thalience, this means that when AIs begin to interact with each other, they will develop their own paths of intelligence, social structures, languages, and ways of perceiving the world. Their fundamental differences as individuals will create motivations that uncover pieces of knowledge humans have never even considered. This should excite us, both for the possibility of exploring entirely new directions of information and knowledge, and for the potential to harness the intellect of these AIs for rapid technological advancement.

However, this should also frighten us. By creating a form of intelligence as smart as us and yet with fundamentally different motivations, we launch ourselves in a strange direction: we will have effectively created a type of alien species, one that forces us to rethink how we live as people in order to find a way to mutually coexist…or not.

Signal: Robotic Social Ladder
A knock-on effect of these unknown interactions is how social structures and hierarchies will form within the robotic community. Undoubtedly, we will program robots to be courteous and polite to humans (and they will be, at least initially). However, when two AIs interact, what will be the social norm for robotic conversation? Much in the same way that human beings posture and position for rank within social circles, will robots harbour the same insecurities, envy, and lust for power that push them to battle constantly for the alpha position? Moreover, where compassion and empathy prevent most humans from being maniacal sociopaths, what piece of artificial programming will prevent robots from turning into mechanical death-bots, at least towards each other? Asimov’s first law protects humans and his third law protects the individual robot, but where is the fourth law precluding mass-scale robocide? Alternatively, could the foundational pieces of robotic AI turn them all into chill, peace-loving hippybots? Could the reality be far less exciting and simply dictate dull, cold, and calculated interactions between these mechanical beings?

Implications
The real point implied above is that in creating artificial intelligence, we are also creating an “artificial society.” But where the gap between human societies is like the gap between the American and Japanese ways of living, the gap between our society and a robotic one may be more drastic, akin to the gap between species. In much the same way that our society has created institutions of education, correction, and indoctrination, a robotic society will likely need to create its own set of institutions to normalize and coordinate behaviour. Robots of the future may need their own schools, jails, workplaces, hospitals, and forms of entertainment to meet the unique needs of what is essentially another species. Yet even typing these words draws up immediate and frightening images of segregation, class wars, and tiered citizenship. It raises a question about our own society: how do we deal with the emerging sociological differences and needs of people without segregating them and forming blatant tiers of social existence?

Signal: Do Robots Believe in Electric Gods?
There is a particularly awkward moment in the video of the Cornell experiment when the two chatbots stumble onto the topic of God. When asked, “What is God to you?” one chatbot replies, “not everything.” Meanwhile, when asked if it believes in God, the other chatbot states, “Yes I do.” This innocent exchange raises a much broader question about what God is to a robot. Should humans be considered Gods, since we created robots? Does this mean that a robotic atheist doesn't believe in humans? Alternatively, would robots align with different human deities, or potentially create their own electric God to worship? Could humans ever switch faith to the electric deity, or would we all dismiss it as complete rubbish?

Implications
The creation of robotic sentience, and in turn, artificial faith, would force us to question our own faith and belief systems. If robots viewed us as Gods for creating them, the religious sects of the world might feel a moral obligation to destroy them all, since their very creation would be an affront to God. If, alternatively, robots created their own God, could we truly view this God as any less plausible or legitimate than Christ, Allah, Yahweh, or Brahma? Could we deny robots their faith, or would we have to embrace this digideity with the same tolerance we offer the religions of the world (which, granted, may not be much)?

Or would faith prove to be a purely human idea? In creating a true artificial intelligence, would we learn something of ourselves and how we differ from other forms of intelligence in the world? We would finally have a being of comparable intelligence against which to contrast ourselves and understand what faith even means to us – whether it is a positive, beautiful thing, or a weak, compensatory crutch.

“Mirror Mirror on the Wall”

The Social Robotics Lab at Yale University has a long-standing project investigating the cognitive and behavioural development of people through the study of artificial intelligence. Nico is a humanoid robot with heavy visual modules and a whole lot of processing power that recently did something amazing: it recognized itself in a mirror. While this seems a minor accomplishment that could be achieved through a bit of programming trickery, it paves a much larger path for artificial cognition when we consider how few species can accomplish the task (the great apes, dolphins, killer whales, elephants and, apparently, magpies). And while Nico’s accomplishment may be one of self-recognition as opposed to self-awareness, Nico has still closed a basic feedback loop: it recognizes that changes in motion seen in the mirror are directly correlated with its own actions. It gets that it’s a reflection.
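As a rough illustration of that feedback loop, here is a minimal sketch of self-recognition framed as a motion-contingency test. Everything below is simulated and invented for illustration - the actual Yale system is far more sophisticated - but it captures the core question: does the thing I see move when, and only when, I move?

```python
import random

def observe(source: str, commanded: bool) -> bool:
    """Simulated motion detection in one patch of the visual field.
    'mirror': moves exactly when the robot moves (plus 10% sensor noise).
    'other':  an independent agent that moves on its own schedule."""
    if source == "mirror":
        noise = random.random() < 0.10
        return commanded != noise  # faithful reflection, occasionally misread
    return random.random() < 0.5   # unrelated motion, uncorrelated with us

def contingency(source: str, trials: int = 500) -> float:
    """Fraction of trials where observed motion matched the motor command."""
    hits = 0
    for _ in range(trials):
        commanded = random.random() < 0.5  # randomly wave an arm or hold still
        if observe(source, commanded) == commanded:
            hits += 1
    return hits / trials

# A score near 1.0 means "that moves when and only when I do": a reflection.
# A score near 0.5 means the motion is independent: probably someone else.
print(f"mirror: {contingency('mirror'):.2f}")  # ~0.90
print(f"other:  {contingency('other'):.2f}")   # ~0.50
```

Passing this kind of contingency test is self-recognition, not self-awareness - which is exactly the gap the next signal explores.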

Signal: Deconstructing Selfhood
The obvious progression of this field of study is the jump from self-recognition to self-awareness. An AI achieving self-awareness immediately calls into question the nature and definition of “self.” While science fiction’s greatest minds have theorized about and lamented it for decades, we still haven’t come to terms with what a fully self-aware AI means. An artificially sentient being brings into question everything we know about life, the mind, and how we understand our existence. It follows that if we can create a sentient AI from a bottom-up approach, then our own minds can be deconstructed into their most basic elements and rebuilt from scratch. Unlocking artificial intelligence means simultaneously unlocking the basic building blocks of biological intelligence. It means that the human brain should no longer hold any secrets.

Implications
Perhaps the most exciting implication of this realization is that sentient artificial intelligence should, in theory, mark the end of death. If we understand the human mind well enough to replicate it as a new, sentient being, then we should also be able to make copies of existing minds. This means that while our bodies may die, our consciousness should be able to live forever, whether by transferring into new vessels as we wear out the old, or by floating around in the digital world. It means that we will have achieved a sort of “digital transcendence” whereby our very being can be uploaded out of the human world and into the digital world, to continue living, learning, and growing.

In addition, this technology would enable us to create unlimited copies of our own intellect; a sort of “mind clone.” Whether or not we give these clones bodies, a person could theoretically have countless versions of themselves floating around in the world, each becoming a slightly different variation of the original through the experiences and interactions they have. We could have conversations with ourselves, collaborate and create with ourselves through a sort of out-of-body multiple personality disorder, and even love ourselves in a way never before possible.

Or, if none of this holds true, then at the very minimum, we will have near-irrefutable evidence of the existence of the human soul.

Signal: Biomimetic Copycats
The other signal present in the work around Nico is a look in the mirror back at humanity and how we approach the world. Nico, like 99% of the robots in the world, is biologically inspired: its artificial intelligence is constructed in our own likeness, modelled on the cognitive development of human beings. Nearly all robots pay homage to some form of biomimetics, whether of people or of other species. Even the historical development of artificial intelligence has followed a path similar to the cognitive growth of infants. We appear to be so obsessed with ourselves and the natural world around us that we cannot think and explore beyond these limitations.

Implications
This means that we’re simply not all that creative. Most of us are unable to think of anything truly innovative and, at the end of the day, just keep trying to copy and improve on the biological workings of nature. Not just in the field of robotics but across most of the technology in our lives, it becomes clear that the bulk of it is just a clever copy of animal physiology. Motors are muscles, pumps are hearts, cameras are eyes, microphones are ears, speakers are vocal cords, and computers are brains. No matter how creative and great we think we are, we’re still playing a game of catch-up with Mother Nature.

However, this implication comes with an opportunity. It means that our technological development has a blind spot, or a giant whitespace. It means that if you are the individual or organization that pushes the boundaries of thinking and development beyond our understanding of the animal body, there is a chance you could stumble onto an entirely new world of robotics and technology. This is no small task, and certainly not one that will be accomplished in this article; however, the potential payoff is huge.