Saturday, April 19, 2014

Artificial Speculation

Introduction

Scanning is a tool used in foresight practice to uncover weak signals that herald shifts within industries, behavioural changes, and other emerging movements that will shape the future. While this tool nets fruitful inspiration for organizations around the world, these exercises are typically limited to a few years out and to fairly targeted constraints. However, to uncover the types of signals that give way to true breakthroughs, our goal must be not simply breadth of exploration but also depth of analysis, if we are to understand the driving forces of the future and the implications they will carry.

The following exercise highlights my attempt to dig deep into one of my favourite topics in the world: artificial intelligence. Below are two case studies, each built around an interesting concept currently seen in the world, a handful of signals that stem from the analysis of that concept, and an abductive exploration of the potential implications each signal may have in the coming years. And when we allow our minds to speculate and drift towards the possibilities of the future, each gets a little bit uncomfortable and a lot bit weird.

"Out of Our Hands"

Cornell’s Creative Machines Lab recently ran an experiment in which they facilitated a conversation between two independent artificial intelligences. Chatbots are AIs designed to emulate human conversation as convincingly as possible. While not sentient entities, most chatbots are built to compete in the Loebner Prize, an annual competition for the most human-like AI conversation, and to pass what is effectively a formalized Turing test. The team at Cornell decided to place two of these bots together to see what kind of conversation they would have. While awkward, at times downright strange, and occasionally near-incomprehensible, this experiment asks an important question about what happens when we take humans out of the equation of AI interaction.

Signal: The Artificial Unknown
The most obvious and prevalent signal from this experiment surrounds the idea that when we place two AIs together, all bets are off. Until we achieve true sentience in robotics, most programmers, given enough time, could perfectly map the conversation a chatbot would have with a human, based on what the person decides to say. Even the most advanced AI is still basically a complex if-this-then-that decision tree. However, chatbots are entities programmed to respond to and converse with humans. The AI can ask questions and probe, but it effectively exists to let the human guide the topic of conversation and respond accordingly. When we put two chatbots together, they have nothing to talk about, and their conversation is bound to get a bit weird.
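To make the point concrete, here is a toy sketch, entirely illustrative and not any real chatbot's code, of a conversation agent reduced to its if-this-then-that essence:

```python
# Toy illustration of a rule-based chatbot: a deterministic
# if-this-then-that decision tree. All rules here are invented.
def chatbot_reply(user_input):
    """Map a human utterance to a canned response."""
    text = user_input.lower()
    if "hello" in text:
        return "Hi there! What would you like to talk about?"
    elif "god" in text:
        return "Not everything."
    elif "?" in text:
        return "That's a good question. What do you think?"
    # With no keyword to latch onto, deflect back to the human,
    # who is expected to steer the topic.
    return "Tell me more about that."

# Against a human, every path is predictable. Pair the bot with
# itself and neither side supplies a topic: each turn just feeds
# the other's deflections back in.
reply = chatbot_reply("Hello")
print(reply)
print(chatbot_reply(reply))
```

Given a human's input, the output is fully determined; feed the bot its own output, though, and the "conversation" drifts wherever the rule set happens to lead, which is exactly the unpredictability the Cornell experiment surfaced.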

Fast forward to the day we have passed the Turing test and our AIs have achieved sentience, and the question still remains: what do they talk about? Human beings are driven by certain basic needs, emotions, and instincts that, in a very roundabout way, dictate the types of social interactions we will have. The basic needs of robots, however, will be entirely different, which raises the reality that two robots interacting with each other will take their conversation to a place we simply cannot predict.

Implications
The implications here are twofold. Aligned with Karl Schroeder’s theory of thalience, when AIs begin to interact with each other, they will develop their own paths of intelligence, social structures, languages, and ways of perceiving the world. Their fundamental differences as individuals will create motivations that uncover pieces of knowledge humans have never even considered. This should excite us, both for the possibility of exploring entirely new directions of information and knowledge, and for the potential to harness the intellect of these AIs for rapid technological advancement.

However, this should also frighten us. By creating a form of intelligence as smart as us and yet with fundamentally different motivations, we launch ourselves into a strange direction where we have effectively created a type of alien species that will force us to rethink how we live as people in order to find a way to mutually coexist…or not.

Signal: Robotic Social Ladder
A knock-on effect of these unknown interactions is how social structures and hierarchies will form within the robotic community. Undoubtedly, we will program robots to be courteous and polite to humans (and they will be, at least initially). But when two AIs interact, what will be the social norm for robotic conversation? Much in the same way that human beings posture and position for rank within social circles, will robots house the same insecurities, envy, and lust for power that push them to battle constantly for the alpha position? Moreover, where compassion and empathy prevent most humans from being maniacal sociopaths, what piece of artificial programming will prevent robots from turning into mechanical death-bots, at least towards each other? Asimov’s first law protects humans and his third law protects the individual robot, but where is the fourth law precluding mass-scale robocide? Alternatively, could the foundational pieces of robotic AI turn them all into chill, peace-loving hippybots? Could the reality be far less exciting and simply dictate dull, cold, and calculated interactions between these mechanical beings?

Implications
The real point implied above is that in creating artificial intelligence, we are also creating an “artificial society.” But rather than a contrast like that between the American and Japanese ways of living, the differences between our society and a robotic one may be more drastic, akin to those between species. In much the same way that our society has created institutions of education, correction, and indoctrination, a robotic society will likely need to create a separate set of institutions to normalize and coordinate behaviour. Robots of the future may need their own schools, jails, workplaces, hospitals, and forms of entertainment to meet the unique needs of what is essentially another species. Yet even typing these words draws up immediate and frightening images of segregation, class wars, and tiered citizenship. It raises a question about our own society: how do we deal with the emerging sociological differences and needs of people without segregating them and forming blatant tiers of social existence?

Signal: Do Robots Believe in Electric Gods?
There is a particularly awkward moment in the video of the Cornell experiment when the two chatbots stumble onto the topic of God. When asked, “What is God to you?” one chatbot replies, “not everything.” Meanwhile, when asked if it believes in God, the other chatbot states, “Yes I do.” This innocent exchange raises a much broader question about what God is to a robot. Should humans be considered gods, since we created robots? Does this mean a robotic atheist doesn’t believe in humans? Alternatively, would robots align with different human deities, or potentially create their own electric god to worship? Could humans ever switch faith to the electric deity, or would we all dismiss it as complete rubbish?

Implications
The creation of robotic sentience, and in turn artificial faith, would force us to question our own faith and belief systems. If robots viewed us as gods for creating them, the religious sects of the world would have a moral obligation to destroy them all, since their very creation would be an affront to God. If, alternatively, robots created their own god, could we truly view this god as any less plausible or legitimate than Christ, Allah, Yahweh, or Brahma? Could we deny robots their faith, or would we have to embrace this digideity with the same tolerance we offer the religions of the world (which, granted, may not be much)?

Or would faith be a purely human idea? In the creation of a true artificial intelligence, would we learn something of ourselves and how we differ from other forms of intelligence in the world? We would have a being of comparable intelligence to contrast ourselves to and understand what faith even means to us – whether it is a positive, beautiful thing, or a weak, compensatory crutch.

“Mirror Mirror on the Wall”

The Social Robotics Lab at Yale University has a long-standing project that studies the cognitive and behavioural development of people through artificial intelligence. Nico is a humanoid robot with heavy visual modules and a whole lot of processing power that recently did something amazing: it recognized itself in a mirror. While seemingly a minor accomplishment, one that could be achieved through a bit of programming trickery, this paves a much larger path for artificial cognition when we consider how few species can accomplish the task (the great apes, dolphins, killer whales, elephants, and, apparently, magpies). While Nico’s accomplishment may be one of self-recognition rather than self-awareness, Nico has still managed to close a basic feedback loop: it recognizes that changes in motion in the mirror correlate directly with its own actions. It gets that it’s a reflection.
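One way to picture that feedback loop is as a correlation test: the robot issues motor commands and checks whether the motion it sees in the mirror tracks them. The sketch below is purely speculative and is not Nico's actual code; the simulated perception function and the 0.8 threshold are invented for illustration:

```python
import random

def observe_mirror(command, is_self, noise=0.1):
    """Simulated perception: motion magnitude seen in the mirror.
    If the reflection is the robot itself, it tracks the command;
    otherwise it is unrelated motion from some other agent."""
    if is_self:
        return command + random.uniform(-noise, noise)
    return random.uniform(0.0, 1.0)

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def is_my_reflection(is_self, trials=200):
    """Issue random motor commands; if observed motion correlates
    strongly with them, conclude 'that moving thing is me'."""
    commands = [random.random() for _ in range(trials)]
    observed = [observe_mirror(c, is_self) for c in commands]
    return correlation(commands, observed) > 0.8

random.seed(0)
print(is_my_reflection(is_self=True))   # reflection tracks own actions
print(is_my_reflection(is_self=False))  # independent motion
```

The point of the sketch is that self-recognition needs no notion of "self" at all, just a statistical link between action and observation, which is why it falls short of self-awareness.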

Signal: Deconstructing Selfhood
The obvious progression of this field of study is the jump from self-recognition to self-awareness. An AI achieving self-awareness immediately calls into question the nature and definition of “self.” While theorized and lamented for decades by science fiction’s greatest, we still haven’t come to terms with what a fully self-aware AI means. An artificially sentient being brings into question everything we know about life, the mind, and how we understand our existence. It follows that if we can create a sentient AI from a bottom-up approach, then our own minds can be deconstructed into their most basic elements and rebuilt from scratch. Uncovering artificial intelligence means that we must simultaneously unlock the basic building blocks of biological intelligence. It means the human brain should no longer hold any secrets.

Implications
Perhaps the most exciting implication of this realization is that sentient artificial intelligence should, in theory, mark the end of death. If we understand the human mind well enough to replicate it as a new, sentient being, then we should also be able to make copies of existing minds. This means that while our bodies may die, our consciousness should be able to live forever, whether by transferring into new vessels as we wear out the old, or by floating around in the digital world. It means that we will have achieved a sort of “digital transcendence” whereby our very being can be uploaded out of the human world and into the digital world, to continue living, learning, and growing.

In addition, this technology would enable us to create unlimited copies of our own intellect: a sort of “mind clone.” Whether or not we give these clones bodies, a person could theoretically have countless versions of themselves floating around in the world, each becoming a slightly different variation of the original through the experiences and interactions they have. We could have conversations with ourselves, collaborate and create with ourselves through a sort of out-of-body multiple personality disorder, and even love ourselves in a way never before possible.

Or, if none of this holds true, then at very minimum, we will have near-irrefutable evidence of the existence of the human soul.

Signal: Biomimetic Copycats
The other signal present in the work around Nico is a look in the mirror back at humanity and how we approach the world. Nico, like 99% of the robots in the world, is biologically inspired: its artificial intelligence is built up in our own likeness, modelled on the cognitive development of human beings. Nearly all robots pay homage to some form of biomimetics, whether of people or of other species. Even the historical development of artificial intelligence has followed a path similar to the cognitive growth of infants. We appear to be so obsessed with ourselves and the natural world around us that we cannot think and explore beyond these limitations.

Implications
This means that we’re simply not all that creative. Most of us are unable to think of anything truly innovative and, at the end of the day, just keep trying to copy and improve on the biological workings of nature. Not just in the field of robotics but across most of the technology in our lives, it becomes clear that most of it is just a clever copy of animal physiology. Motors are muscles, pumps are hearts, cameras are eyes, microphones are ears, speakers are vocal cords, and computers are brains. No matter how creative and great we think we are, we’re still playing a game of catch-up with Mother Nature.

However, this implication comes with an opportunity. It means that our technological development has a blind spot, a giant whitespace. If you can be the individual or organization that pushes the boundaries of thinking and development beyond our understanding of the animal body, there is a chance you could stumble across an entirely new world of robotics and technology. This is no small task, and certainly not one that will be accomplished in this article; however, the potential payoff is huge.

Monday, April 7, 2014

Biobotics: The Automation of Life

Playing God with a new type of biological machine.

When most people hear the term “robot” they picture a mass of lumbering metal powered by batteries, motors, electrical wires, and circuit boards. However, recent advancements have forced me to rethink my own definition of a field I’ve been dabbling in for over a decade. The term robot has no mechanical or electrical reference in its etymology; it derives from the Czech robota, meaning “forced labour” (related to robotnik, a serf or labourer). To consider things in a less draconian manner, robots are simply things that are programmed and controlled without their own direct sense of autonomy.

If we broaden our understanding of the field and try to apply it to other systems, we find some very interesting implications, particularly in the realm of biology. Biological entities are no strangers to the concept of closed-loop control systems: collections of components that work together in a feedback loop to regulate the behaviour of a system. Just look at how well the human body regulates its own temperature, blood chemistry, hydration, and motion. However, what happens when aspects of our feedback loop break down?
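The sense-decide-act loop the body runs can be sketched as a minimal proportional controller. The setpoint and gain below are illustrative values for a toy simulation, not physiological parameters:

```python
# Minimal closed-loop control sketch mirroring the body-temperature
# example: repeatedly measure the error against a setpoint and apply
# a proportional corrective effort, the pattern shared by thermostats,
# robot joints, and shivering alike.
def regulate_temperature(current, setpoint=37.0, gain=0.3, steps=50):
    """Drive 'current' toward 'setpoint' via proportional feedback."""
    for _ in range(steps):
        error = setpoint - current   # sense: how far from target?
        effort = gain * error        # decide: proportional response
        current += effort            # act: shiver, sweat, dilate...
    return current

print(regulate_temperature(34.0))  # converges toward 37.0
```

Sever any link in that loop (the sensor, the controller, or the wire to the actuator) and regulation fails, which is exactly the framing of paralysis that follows.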

In the case of motion, this is a problem we know all too well in cases of paraplegia, quadriplegia and other neuromuscular disorders. These conditions are typically caused by a severing or degradation of the signal between our controller (the brain) and our actuators (the muscles). We are in essence admitting defeat to the problem of a snipped wire. While teams around the world are attempting to solve these problems through the application of exoskeletons (don’t get me wrong, this is awesome), why should we neglect the still functional motors we have inside of our limbs?

I realize that the problem of replicating the human nervous system is far more complicated than I’m letting on, however, I urge you to let your mind drift into the future with me. Imagine if every time a paralysed individual wanted to move their legs, the neurological firing which would normally fall on dead synapses was picked up by a very sensitive device to measure the electrical signals in the nervous system. Using some sort of signal conditioner, we could interpret the intent of each of these signals (kick, step forward, stand, etc.). Then, using the principles of functional electrical stimulation (FES), we could generate artificial signals on the muscular side that would essentially shock muscles back to life in a coordinated manner to replicate the intent of motion. In the absence of simply soldering a few nerves back together, we’re creating a complicated bridge around the point of disconnect by using sensors, A/D converters, and electrical impulses to replicate signals.
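The bridge described above amounts to a three-stage pipeline: sense the neural signal, classify the intent, stimulate the muscles. The sketch below is purely speculative; every function name, feature vector, and stimulation pattern is invented for illustration and corresponds to no real device API:

```python
# Hypothetical FES "bridge" pipeline: sense -> classify -> stimulate.
def read_neural_signal():
    """Stand-in for a sensitive electrode array above the injury,
    returning a toy feature vector extracted from nerve activity."""
    return [0.2, 0.9, 0.1]

def classify_intent(signal):
    """Signal conditioner: map raw features to an intended motion.
    Toy rule: pick the intent matching the strongest feature."""
    intents = ["stand", "step_forward", "kick"]
    return intents[signal.index(max(signal))]

# Intent -> ordered (muscle, pulse-width-ms) stimulation sequence.
# Entirely made-up values; real FES timing is far more intricate.
STIM_PATTERNS = {
    "stand": [("quadriceps", 300), ("glutes", 250)],
    "step_forward": [("hip_flexor", 200), ("quadriceps", 250), ("calf", 150)],
    "kick": [("quadriceps", 400)],
}

def stimulate(intent):
    """FES side of the bridge: coordinated pulses replicate the motion."""
    return STIM_PATTERNS[intent]

intent = classify_intent(read_neural_signal())
print(intent, stimulate(intent))
```

Real systems would need far richer classification and timing, but the architecture, a sensor, a translator, and an actuator driver routed around the break, is the whole idea.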

As crazy as this sounds, there are already a handful of organizations dabbling in this field. Companies such as Bioness, Axio Bionics, and Odstock currently produce open-loop FES devices intended to activate muscles for the purpose of rehabilitation and simplistic closed loop devices to help augment cases of partial motion loss. These rudimentary closed loop systems often work where an individual still has some motion intact and, by detecting footfalls, they provide a small jolt of electricity to help with stability and speed.

Current limitations on systems for full paralysis, as described previously, are twofold. First, detecting and processing the signals in the body is extremely difficult: our nervous system does not operate like the AC or DC currents we are used to, but through a differential of charged ionic particles distributed on either side of a nerve cell membrane, and it operates at extremely low amperage. Second, when applying stimulation to the muscles, the output is not a simple case of on/off but a perfect coordination of multiple muscles in a dynamic and adaptive way. In short, our control system would have to be exceptionally precise, near-real-time, and massively robust.

However, assuming we can overcome these challenges, the implications of this emerging technology reach far beyond aiding the disabled. If we want to let our minds truly wander into the realm of the unknown, consider the application of biological robotics when combined with artificially manufactured tissues and organs (think bio 3D printing). In many cases, biological systems are more efficient and powerful than mechanical ones and, if we could custom manufacture bio-motors and actuators to integrate into the products and systems we design, we could generate a whole new type of device. We could begin to build hybrid bio-mech devices that leverage the strength, precision, and durability of mechanical designs with the efficiency, responsiveness, and adaptability of biological systems.

Given that these “biobots” would be partially living organisms, we would of course need to power them in much the same way that we fuel our own bodies. Support systems that provide energy, oxygen and other necessary chemical building blocks would be required, essentially to keep these tissues alive. Instead of oil and electricity, our biobots would need efficient methods for eating and breathing in order to be able to sustain their function. While lacking sentience, we would have to begin to treat our gadgets, products and industrial systems like pets; living organisms that need to be tended to and cared for.

Imagine your car being powered by some sort of artificial heart. Picture your home being heated and cooled by a set of modified, oversized lungs. Think about walking into an elevator that was being pulled 50 stories up by a massively elongated muscle fibre that expanded and contracted with perfect control to stop at each floor. Taste water on the tip of your tongue that has been perfectly filtered by a set of adapted, custom designed kidneys. Could you ever write your next email on a computer with the processing power of an artificially designed brain? Is the future of gears, circuits, magnets, and metal that we’ve been envisioning for years…wrong?

There are, of course, ethical considerations to raise with this alternate future; however, that topic is a consideration for another day. Even if these biological entities are artificially constructed, they will force us to question how we perceive and interact with life itself. Inevitably, we will have to ask the question: “is this playing God?”


In short, yeah… a little bit. But we all knew it was bound to happen.