Monday, November 17, 2014

Health's Elusive Holy Grail

Everyone is claiming that we stand on the cusp of one of the greatest shifts the health world has ever seen. Big data is going to change everything: how we diagnose, how we treat, how we monitor, how we live, and how we die. Yet while optimism abounds in the tech community, many longstanding members of the healthcare community feel as though they've been sold a bill of goods.

And they're right. Big data has not become the savior to the health industry that we were all expecting, nor will it any time soon. Yet while both sides of this debate are strongly rooted in their positions, both are fundamentally wrong in their conclusions about why. Most technocrats believe that technology will solve all of our problems and that it is simply regulation and blind conservatism standing in the way. Meanwhile, many medical traditionalists feel that technology and big data will never replace human intuition and the institution of doctors.

The sad reality is that the answer lies somewhere in the middle, but overcoming the hurdle that prevents us from evolving the health system requires the cooperation of both sides. The true barrier to the holy grail of big data in health is the fact that we have absolutely no idea what we're looking at. We lack the baseline of consistent, longitudinal health data needed to interpret and understand a medical condition.

Think of it this way: the Hippocratic Oath is over 2000 years old and modern medicine is roughly 200 years old. It took us hundreds, if not thousands, of years to evolve and develop our understanding of the human body through empirical methods and the tools at a doctor’s disposal.

By comparison, the big data movement has only truly emerged within the last decade. I will gladly concede that data has been used to study the human body for ages; however, never in our existence have we had access to such reliable, consistent, and accurate data to analyse. We are in the technological equivalent of the Bronze Age when it comes to our understanding of medical big data. Even the Human Genome Project dates back to the mid-80s, and we have yet to see great progress in genetic therapies or DNA tampering.

Our current efforts will be looked back upon as foolhardy mistakes, much like we currently view some of the archaic health practices of old. Products like the Nike FuelBand or Fitbit will one day be perceived much as we now perceive practices such as leeching or trepanation (it means drilling a hole in your head). Even more professional medical efforts will be seen as missing the point because we're asking the wrong questions of our data.

In fact, we don't even know the right questions to ask until we establish a baseline for what "normal" human beings look like under the big-data microscope. We can collect heart rate, temperature, weight, body chemistry, and EEG data until we're blue in the face, but without a frame of reference, this data becomes little more than novelty and a distraction from meaningful medical analysis. Until we have a better understanding of what can be considered normal – across billions of people, mind you – and what is a concerning deviation from that norm, our data doesn't do us a lot of good.

For example, if we monitor every single heartbeat, we are bound to see abnormal rhythms emerge in all of us at least a couple of times a day. This does not mean that everyone has cardiac dysrhythmia and should get pacemakers. Similarly, at some point through the day, most of us likely have blood pressure levels which rise temporarily to dangerous levels due to stress, activity, or nutrition. However, this does not mean that we're all at risk of stroke.

Moreover, we need to be able to distinguish between causation, correlation, and irrelevance when studying certain conditions. Cold body temperatures may directly cause hypothermia. Warmer body temperatures may correlate with the flu. Rapidly fluctuating body temperatures, however, may have nothing to do with either… we might just be exercising, or entering cold spaces. We should not let data guide us blindly down questionable rabbit holes, but instead ask intelligent questions of the data to determine possible health implications.

And lastly, we must be conscious of placebo effects and the observer effect in all of this. If by measuring medical data we enable paranoid, obsessive individuals who turn into statistically-armed hypochondriacs, we have potentially created more problems than we have solved. While the data could eventually be meaningful and insightful, we need the right individuals interpreting the data in the right way, not simply reacting to every deviation from the norm.


In many ways, our ability to gather biological data has surpassed our understanding of medicine. We've never before been afforded this kind of all-access pass to the human body and, like newborns entering the world, we're still trying to figure it all out. We're currently overstimulated by the presence and quantity of data around us but, much like addicts, lack the sophistication and self-control to use this resource rationally to our advantage. Also like addicts, we currently run the risk of bringing ourselves serious harm by losing sight of what is most important in what we're trying to accomplish.

What’s most important in all of this is that, while our understanding of big data will eventually catch up to our ability to collect it, we must never lose sight of the fact that we should always be treating the individual, not the numbers. We should never fetishize data or hold it in higher regard than the individual themselves, how they feel, and who they are.

Big data, like the stethoscope, the electrocardiogram, and the thermometer, is just a tool. While it could be a tool to revolutionize the way that we diagnose and treat medical conditions, we must never put the tool ahead of the task.

Tuesday, September 30, 2014

Everything I Know, I Learned from My Nintendo

Lessons learned from a generation raised with one finger on the reset button.

If you're like me (read: grew up in the 80s, kind of nerdy, liked technology, found friends exhausting), then odds are you had one or more of a Nintendo, Sega, Atari, ColecoVision, or PC around the house as a kid. For many of us, these systems provided a foundation for our childhood and opened the door to vast electronic worlds to explore, hack, experiment, and fail within. They taught us how to learn, compete, strategize, think critically, and, through multiplayer games, even socialize. They also taught us another, far more dangerous lesson through the form of an innocuous little button: reset.

From a functional perspective, this little button was key to these systems. It provided a way to blank the memory and forcibly reboot the system and all relevant peripherals, ensuring that any software glitches or issues could be cleared and allowing the system to start fresh.

Yet the implications of this technological feature on our young, impressionable minds were unintentionally sinister in their outcome. That little button taught us how easy it was to wipe the slate clean and start fresh. If we screwed up, missed something, made a poor choice, or killed our character, instead of dealing with the consequences, we had—and often took—the option to destroy all record of our errors and reset back to an earlier, known place. Sometimes it meant that we lost progress, but at least we didn’t have to deal with our failure. We could go back and try again with the new knowledge of what we’d done wrong the first time in hopes of having better fortune with our new, divergent path. Like parallel universes, each time we pushed the reset button, we branched off another possibility of what could have been and rolled the dice on our ongoing struggle for perfection.

Seems innocent enough when talking about Mario and Sonic, however, our generation took this lesson to heart, embraced newness, and coveted the concept of starting over. Fast forward from our childhood and we start to see larger cracks in our mental armor that, while not wholly attributable to reset culture, undoubtedly have been affected by it. We romanticize the idea of escaping our own lives and being an unknown traveler in a foreign place. We find it easier to replace our worn down things instead of repairing them or trying to turn them into something new and beautiful. We are perfectionists, choosing to optimize every last irrelevant detail instead of simply making what we have work. We are wasteful, fickle, impatient, and in many cases, unprepared to deal with the implications of our own actions since a little button taught us that when the going gets tough, the weak start over.

Prior generations may have stayed in jobs for decades, however, ours would be lucky to get a golden thumbtack, let alone a watch. Job-hopping has become the new normal for Millennials, with many justifying 1-2 year stints in their roles. Cleanses have become all the rage for restarting your body and mind, whether through juicing, yoga retreats, fad diets, or extreme exercise programs like P90X. Even our vacations can no longer simply be relaxing breaks; they must be life altering personal journeys where we confront the deepest parts of our minds with the intent of deconstructing and rebuilding ourselves completely before returning to the real world. It seems as though members of our generation are ticking time bombs that may explode unless every few years we change our job, move cities, start a new relationship, or embark on a self-discovery journey.

However, at the heart of the problem and standing in the way of our futile attempts to start over lies the reality that life has no reset button. People don’t reset. We evolve, we grow, and we change, but short of suffering major head trauma and waking up in a foreign country with amnesia, the concept of resetting makes little sense in the context of the human mind. Even if you isolate yourself from the rest of the world and change every aspect of your external environment, you are still beholden to your own memories, experiences, and personality. Believing that we can operate outside of this reality when we want to kick start our existence is simply delusional.

But why should we let reality get in the way of a bit of good, clean, revolutionary self-denial? Many of us could cite a revisionist history of our lives—Mary 2.0, Amir V5, Next-Generation George—it's almost as though we consider ourselves buggy software in constant need of updates and fixes. Every time we encounter major adversity, we withdraw from the normalcy of our lives in an attempt to recreate ourselves in a vacuum as a better version and then reintroduce ourselves to the world. We do everything in our power to blow up the outside world and begin anew with a fresh perspective on life.

So why is our penchant for starting over so dangerous? As someone who’s up to version 8.0, I can confidently tell you that hitting the reset button on life is missing the point. It’s taking the easy way out by creating novel surroundings that distract us from the underlying problem: ourselves. There will always be external factors that impact our lives and make things difficult, but at the core, it is how we react to and engage with these external factors that determines the outcome of our lives. If our reaction is to walk away each time life gets hard, then we turn our existence into a series of connected vignettes where we are continually struggling for happiness but never quite attaining it. If instead, we choose to stand our ground and face our challenges, we are forced to look inside and push the reset button on aspects of ourselves that we are unhappy with, ultimately evolving us into better, happier people.

On second thought, let me start over…

Monday, June 9, 2014

I remember...


I remember not being connected.

I remember not hearing a ping every 30 seconds, conversations without distractions and paying attention to those in my physical presence.

I remember silence. 

I remember getting lost and being found, making plans to have friends around. I remember trivial arguments that lasted hours and learning - not through Google, but through literary scours.

I remember exploration.

I remember socializing over food instead of feeds, and falling in love in a room instead of through my profile’s needs. I remember buying a girl a drink, instead of sending her a wink; spending an evening making eyes instead of drunk-texting late at night.

I remember love.

I remember life being so much simpler when my phone didn’t let out a whimper that fueled my anxious temper and pushed me towards the edge. I remember sleeping through the night, not a digital thing in sight, or a buzz or blinking light, that may not actually be there.

I remember relaxation.

I remember a different age, less advanced but less depraved, when the whole world was still a stage, but we didn’t tweet our lines. I remember silence and exploration, love and relaxation, and in a time where we weren’t connected, I felt more in touch with myself.

I remember not being connected.

Saturday, April 19, 2014

Artificial Speculation

Introduction

Scanning is a tool used in the foresight practice to uncover weak signals that herald shifts within different industries, behavioural changes, and other emerging movements that will shape the future. While this tool nets fruitful inspiration for organizations around the world, typically these exercises are limited to a few years out and within fairly targeted constraints. However, to uncover the types of signals that give way to true breakthrough, our goal must not simply be breadth of exploration, but also depth of analysis if we are to understand the driving forces of the future and the implications they will have.

The following exercise highlights my attempt to dig deep into one of my favourite topics in the world - artificial intelligence. Below are two case studies of interesting concepts currently seen in the world, a handful of signals that stem from the analysis of each concept, and an abductive exploration of the potential implications that each signal may have in the coming years. And when we allow our minds to speculate and drift towards the possibilities of the future, each gets a little bit uncomfortable and a lot bit weird.

"Out of Our Hands"

Cornell's Creative Machines Lab recently ran an experiment in which they facilitated a conversation between two independent artificial intelligences. Chatbots are AIs designed to emulate human conversation as convincingly as possible. While not sentient entities, most chatbots are built to compete in the Loebner Prize – an annual competition for the most human-like AI conversation – and pass a form of the Turing test. The team at Cornell decided to place two of these together to see what kind of a conversation they would have. While awkward and at times downright strange and nearly incomprehensible, this experiment asks an important question about what happens when we take humans out of the equation of AI interaction.

Signal: The Artificial Unknown
The most obvious and prevalent signal from this experiment surrounds the idea that when we place two AIs together, all bets are off. Until we achieve true sentience in robotics, most programmers, given enough time, could perfectly map the conversation a chatbot would have with a human, based on what the person decides to say. Even the most advanced AI is still basically a complex if-this-then-that decision tree. However, chatbots are entities programmed to serve the purpose of responding to and conversing with humans. The AI can ask questions and probe, but it effectively exists to let the human guide the topic of conversation and respond accordingly. When we put two chatbots together, they have nothing to talk about, and their conversation is going to get a bit weird.
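To make the "if-this-then-that" point literal, here is a toy keyword-matching chatbot in Python, a deliberate caricature far cruder than anything in the Loebner competition, paired with a copy of itself. With no human steering, the pair marches straight off its own script:

```python
# Ordered keyword rules: first match wins. This is the whole "intelligence."
RULES = [
    ("how are you", "I'm fine. What do you want to talk about?"),
    ("talk about", "That's interesting. Tell me more."),
    ("more", "Why do you say that?"),
    ("hello", "Hello! How are you today?"),
]
FALLBACK = "I don't understand. Can you rephrase?"

def reply(message):
    msg = message.lower()
    for keyword, response in RULES:
        if keyword in msg:
            return response
    return FALLBACK

# Two identical bots, no human in the loop: each one's output is the
# other's input, and within a few turns the exchange collapses into
# both sides repeating the fallback at each other.
line = "hello"
for turn in range(6):
    line = reply(line)
    print(f"Bot {turn % 2 + 1}: {line}")
```

A human interlocutor keeps steering the conversation back onto the rules; two bots together have no such anchor, which is a crude version of the unpredictability the Cornell experiment surfaced.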

Fast forward to the day we have passed the Turing test and our AIs have achieved sentience, and the question still remains: what do they talk about? Human beings are driven by certain basic needs, emotions, and instincts that, in a very roundabout way, dictate the types of social interactions we will have. The basic needs of robots, however, will be entirely different, which raises the reality that two robots interacting with each other will take their conversation somewhere we simply cannot predict.

Implications
The implications here are twofold. Aligned with Karl Schroeder's theory of Thalience, when AIs begin to interact with each other, they will develop their own paths of intelligence, social structures, languages, and ways of perceiving the world. Their fundamental differences as individuals will create motivations that uncover pieces of knowledge humans have never even considered. This should excite us, both for the possibility of exploring entirely new directions of information and knowledge and for the potential to harness the intellect of these AIs for rapid technological advancement.

However, this should also frighten us. By creating a form of intelligence as smart as us and yet with fundamentally different motivations, we launch ourselves into a strange direction where we have effectively created a type of alien species that will force us to rethink how we live as people in order to find a way to mutually coexist…or not.

Signal: Robotic Social Ladder
A knock-on effect of these unknown interactions is how social structures and hierarchies will form within the robotic community. Undoubtedly, we will program robots to be courteous and polite to humans (and they will be, at least initially); however, when two AIs interact, what will be the social norm for robotic conversation? Much in the same way that human beings posture and position for rank within social circles, will robots harbour the same insecurities, envy, and lust for power that push them to battle constantly for the alpha position? Moreover, where compassion and empathy prevent most humans from being maniacal sociopaths, what piece of artificial programming will prevent robots from turning into mechanical death-bots, at least towards each other? Asimov's first law protects humans and his third law protects the individual robot, but where is the fourth law precluding mass-scale robocide? Alternatively, could the foundational pieces of robotic AI turn them all into chill, peace-loving hippybots? Could the reality be far less exciting and simply dictate dull, cold, and calculated interactions between these mechanical beings?

Implications
The real point implied above is that in creating artificial intelligence, we are also creating an "artificial society." But unlike the contrast between, say, American and Japanese ways of living, the differences between our society and a robotic one may be more drastic – more akin to those of another species. In much the same way that our society has created institutions of education, correction, and indoctrination, a robotic society will likely also need to create a separate set of institutions to normalize and coordinate behavior. Robots of the future may need their own schools, jails, workplaces, hospitals, and forms of entertainment to meet the unique needs of what is essentially another species. Yet even typing these words draws up immediate and frightening images of segregation, class wars, and tiered citizenship. It raises a question about our own society: how do we deal with the emerging sociological differences and needs of people without segregating them and forming blatant tiers of social existence?

Signal: Do Robots Believe in Electric Gods?
There is a particularly awkward moment in the video of the Cornell experiment when the two chatbots stumble onto the subject of God. When asked, "What is God to you?" one chatbot replies, "not everything." Meanwhile, when asked if it believes in God, the other chatbot states, "Yes I do." This innocent exchange raises a much broader question about what God is to a robot. Should humans be considered Gods, since we created robots? Does this mean that a robotic atheist doesn't believe in humans? Alternatively, would robots align with different human deities, or potentially create their own electric God to worship? Could humans ever switch faith to the electric deity, or would we all dismiss it as complete rubbish?

Implications
The creation of robotic sentience, and in turn artificial faith, would force us to question our own faith and belief systems. If robots viewed us as Gods for creating them, religious sects of the world should have a moral obligation to destroy them all, since their very creation would be an affront to God. If, alternatively, robots created their own God, could we truly view this God as any less plausible or legitimate than Christ, Allah, Yahweh, or Brahma? Could we deny robots their faith, or would we have to embrace this digideity with the same tolerance we offer the religions of the world (which, granted, may not be much)?

Or would faith be a purely human idea? In the creation of a true artificial intelligence, would we learn something of ourselves and how we differ from other forms of intelligence in the world? We would have a being of comparable intelligence to contrast ourselves to and understand what faith even means to us – whether it is a positive, beautiful thing, or a weak, compensatory crutch.

“Mirror Mirror on the Wall”

The Social Robotics Lab at Yale University has a long-standing project investigating the cognitive and behavioral development of people through the study of artificial intelligence. Nico is a humanoid robot with heavy visual modules and a whole lot of processing power that recently did something amazing: it recognized itself in a mirror. While seemingly a minor accomplishment that could be achieved through a bit of programming trickery, this paves a much larger path for artificial cognition when we consider how few species can accomplish this task (the great apes, dolphins, killer whales, elephants, and apparently, magpies). While Nico's accomplishment may be one of self-recognition as opposed to self-awareness, Nico has still managed to close a basic feedback loop: recognizing that the changes in motion it sees in a mirror are directly correlated to its own actions. It gets that it's a reflection.
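That feedback loop can be reduced to a few lines. In this sketch (entirely hypothetical; Nico's actual vision pipeline is far more involved), an agent issues motor commands, observes motion, and concludes "that's me" when the two track each other closely:

```python
import random

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_like_me(my_commands, observed_motion, threshold=0.9):
    # Self-recognition at its crudest: does the motion I see track
    # the motion I commanded?
    return correlation(my_commands, observed_motion) > threshold

random.seed(1)
commands = [random.uniform(-1, 1) for _ in range(100)]    # what I told my arm
mirror   = [c + random.gauss(0, 0.05) for c in commands]  # my reflection
stranger = [random.uniform(-1, 1) for _ in range(100)]    # some other robot

print(looks_like_me(commands, mirror))    # the reflection tracks me
print(looks_like_me(commands, stranger))  # the stranger does not
```

Self-recognition by this standard is just a correlation test; the leap the essay goes on to discuss, from recognition to awareness, is the part no correlation test can capture.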

Signal: Deconstructing Selfhood
The obvious progression of this field of study is the jump from self-recognition to self-awareness. An AI achieving self-awareness immediately calls into question the nature and definition of "self." While theorized and lamented for decades by science fiction's greatest, we still haven't come to terms with what a fully self-aware AI means. An artificially sentient being brings into question everything we know about life, the mind, and how we understand our existence. It follows that if we can create a sentient AI from a bottom-up approach, then our own minds can be deconstructed into their most basic elements and rebuilt from scratch. Uncovering artificial intelligence means that we must simultaneously unlock the basic building blocks of biological intelligence. It means that the human brain should no longer hold any secrets.

Implications
Perhaps the most exciting implication of this realization is that sentient artificial intelligence should, in theory, mark the end of death. If we understand the human mind well enough to replicate it as a new, sentient being, then we should also be able to make copies of existing minds. This means that while our bodies may die, our consciousness should be able to live forever, whether by transferring into new vessels as we wear out the old, or by floating around in the digital world. It means that we will have achieved a sort of “digital transcendence” whereby our very being can be uploaded out of the human world and into the digital world, to continue living, learning, and growing.

In addition, this technology would enable us to create unlimited copies of our own intellect; a sort of "mind clone." Whether or not we give these clones bodies, a person could theoretically have countless versions of themselves floating around in the world, each becoming a slightly different variation of the original through the experiences and interactions they have. We could have conversations with ourselves, collaborate and create with ourselves through a sort of out-of-body multiple personality disorder, and even love ourselves in a way never before possible.

Or, if none of this holds true, then at the very minimum, we will have near-irrefutable evidence of the existence of the human soul.

Signal: Biomimetic Copycats
The other signal present in the work around Nico is a look in the mirror back at humanity and how we approach the world. Nico, like 99% of the robots in the world, has been biologically inspired by the cognitive development of human beings in order to construct an artificial intelligence in our own likeness. Nearly all robots pay homage to some form of biomimetics, whether of people or other species. Even the historical development of artificial intelligence has followed a path similar to that of infants in terms of cognitive growth. We appear to be obsessed with ourselves and the natural world around us to the point where we cannot think and explore beyond these limitations.

Implications
This means that we're simply not all that creative. Most of us are unable to think of anything truly innovative and, at the end of the day, just keep trying to copy and improve on the biological workings of nature. Not just in the field of robotics, but across most of the technology in our lives, it becomes clear that most of it is just a clever copy of animal physiology. Motors are muscles, pumps are hearts, cameras are eyes, microphones are ears, speakers are vocal cords, and computers are brains. No matter how creative and great we think we are, we're still playing a game of catch-up with Mother Nature.

However, this implication comes with an opportunity. It means that our technological development has a blind spot, or a giant whitespace. It means that if you can be the individual or organization that pushes the boundaries of thinking and development beyond our understanding of the animal body, there is a chance that you could stumble across an entirely new world of robotics and technology. This is no small task, and certainly not one that will be accomplished in this article; however, the potential payoff is huge.

Monday, April 7, 2014

Biobotics: The Automation of Life

Playing God with a new type of biological machine.

When most people hear the term "robot" they picture a mass of lumbering metal powered by batteries, motors, electrical wires, and circuit boards. However, recent advancements have forced me to rethink my own definition of a field I've been dabbling in for over a decade. The term robot carries no mechanical or electrical meaning in its etymology; it derives from the Czech robota, meaning forced labour or drudgery. To consider things in a less draconian manner, robots are simply things that are programmed and controlled without their own direct sense of autonomy.

If we broaden our understanding of the field and try to apply it to other systems, we find some very interesting implications, particularly in the realm of biology. Biological entities are no strangers to the concept of closed loop control systems: a clustering of things that work together in a feedback loop to regulate the behaviour of a system. Just look at how well the human body regulates its own temperature, blood chemistry, hydration and motion. However, what happens when aspects of our feedback loop break down?
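The classic textbook illustration of such a feedback loop is proportional control. In this toy sketch (the setpoint and gain are arbitrary, not physiological), a "body" that starts cold is pulled back toward 37 °C by nothing more than repeatedly sensing its error and pushing against it:

```python
def regulate(current, setpoint=37.0, gain=0.2):
    """One tick of a proportional feedback loop: sense the error
    between measured state and setpoint, and correct a fraction of it."""
    error = setpoint - current
    return current + gain * error

temp = 35.0  # a chilled body
for _ in range(30):
    temp = regulate(temp)
print(round(temp, 3))  # converges to within a few thousandths of 37.0
```

No schedule tells the system what temperature to hit at each step; the behaviour emerges from the loop itself, which is exactly what breaks down when part of the loop is severed.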

In the case of motion, this is a problem we know all too well from paraplegia, quadriplegia, and other neuromuscular disorders. These conditions are typically caused by a severing or degradation of the signal between our controller (the brain) and our actuators (the muscles). We are, in essence, admitting defeat to the problem of a snipped wire. While teams around the world are attempting to solve these problems through the application of exoskeletons (don't get me wrong, this is awesome), why should we neglect the still-functional motors we have inside our limbs?

I realize that the problem of replicating the human nervous system is far more complicated than I’m letting on, however, I urge you to let your mind drift into the future with me. Imagine if every time a paralysed individual wanted to move their legs, the neurological firing which would normally fall on dead synapses was picked up by a very sensitive device to measure the electrical signals in the nervous system. Using some sort of signal conditioner, we could interpret the intent of each of these signals (kick, step forward, stand, etc.). Then, using the principles of functional electrical stimulation (FES), we could generate artificial signals on the muscular side that would essentially shock muscles back to life in a coordinated manner to replicate the intent of motion. In the absence of simply soldering a few nerves back together, we’re creating a complicated bridge around the point of disconnect by using sensors, A/D converters, and electrical impulses to replicate signals.
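The sense → interpret → stimulate bridge described above can be caricatured in a few lines. Every function name, threshold, and stimulation value here is invented purely for illustration; real neural signals are vastly noisier and real FES control far more complex:

```python
def classify_intent(nerve_signal):
    """The 'signal conditioner' stage: map a (pre-amplified, digitized)
    nerve signal to a movement intent. Thresholds are made up."""
    if nerve_signal > 0.8:
        return "kick"
    if nerve_signal > 0.4:
        return "step_forward"
    if nerve_signal > 0.1:
        return "stand"
    return "rest"

# Intent -> coordinated stimulation pattern: which muscles to shock, how hard.
STIM_PATTERNS = {
    "kick":         {"quadriceps": 0.9, "hamstrings": 0.1},
    "step_forward": {"quadriceps": 0.5, "hamstrings": 0.4, "calf": 0.3},
    "stand":        {"quadriceps": 0.3, "hamstrings": 0.3},
    "rest":         {},
}

def bridge(nerve_signal):
    """Route around the injury: sense -> interpret -> stimulate."""
    return STIM_PATTERNS[classify_intent(nerve_signal)]

print(bridge(0.9))   # a strong signal yields the kick pattern
print(bridge(0.05))  # a negligible signal stimulates nothing
```

The design point is the decoupling: the interpretation stage and the stimulation stage can each improve independently, which is how the rudimentary open-loop devices mentioned below can evolve toward a full closed loop.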

As crazy as this sounds, there are already a handful of organizations dabbling in this field. Companies such as Bioness, Axio Bionics, and Odstock currently produce open-loop FES devices intended to activate muscles for the purpose of rehabilitation, as well as simple closed-loop devices to help augment cases of partial motion loss. These rudimentary closed-loop systems often work where an individual still has some motion intact; by detecting footfalls, they provide a small jolt of electricity to help with stability and speed.

Current limitations on systems for full paralysis as described previously are twofold. First, detecting and processing the signals in the body is extremely difficult because our nervous system does not operate like the AC or DC currents we are used to; it operates through a differential of charged ionic particles distributed on different sides of a nerve cell membrane and it operates at extremely low amperage. Second, when applying stimulation to the muscles, output is not a simple case of on/off, but a perfect coordination of multiple muscles in a dynamic and adaptive way. In short, our control system would have to be exceptionally precise, near-real time, and massively robust.

However, assuming we can overcome these challenges, the implications of this emerging technology reach far beyond aiding the disabled. If we want to let our minds truly wander into the realm of the unknown, consider the application of biological robotics when combined with artificially manufactured tissues and organs (think bio 3D printing). In many cases, biological systems are more efficient and powerful than mechanical ones and, if we could custom manufacture bio-motors and actuators to integrate into the products and systems we design, we could generate a whole new type of device. We could begin to build hybrid bio-mech devices that leverage the strength, precision, and durability of mechanical designs with the efficiency, responsiveness, and adaptability of biological systems.

Given that these “biobots” would be partially living organisms, we would of course need to power them in much the same way that we fuel our own bodies. Support systems that provide energy, oxygen and other necessary chemical building blocks would be required, essentially to keep these tissues alive. Instead of oil and electricity, our biobots would need efficient methods for eating and breathing in order to be able to sustain their function. While lacking sentience, we would have to begin to treat our gadgets, products and industrial systems like pets; living organisms that need to be tended to and cared for.

Imagine your car being powered by some sort of artificial heart. Picture your home being heated and cooled by a set of modified, oversized lungs. Think about walking into an elevator that was being pulled 50 stories up by a massively elongated muscle fibre that expanded and contracted with perfect control to stop at each floor. Taste water on the tip of your tongue that has been perfectly filtered by a set of adapted, custom designed kidneys. Could you ever write your next email on a computer with the processing power of an artificially designed brain? Is the future of gears, circuits, magnets, and metal that we’ve been envisioning for years…wrong?

There are, of course, ethical considerations to raise with this alternate future; however, that topic is a discussion for another day. Even if these biological entities are artificially constructed, they will force us to question how we perceive and interact with life itself. Inevitably, we will have to ask the question: “is this playing God?”


In short, yeah… a little bit. But we all knew it was bound to happen.

Sunday, March 16, 2014

The Culture of Innovation

History is rife with examples of breakthroughs and innovative leaps. However, if we look to the past 150 years, we begin to notice a clustering of breakthroughs around a handful of organizations that set themselves apart from the rest of the world. While many individuals and companies have contributed significant efforts to the development of innovative technology, specific labs like Menlo Park, Bell Labs, PARC, and Google X have become veritable innovation factories. For more than a decade each, and in some cases for an entire generation, the culture and structure of these labs provided the perfect storm of people, processes, placement, and problems needed to nurture technological genius.

So what is this perfect storm? Far from being chosen ones with crystal balls, these organizations took targeted, calculated approaches to generating consistent breakthroughs. By analyzing four of the most successful labs in the history of humankind, we hope to uncover those approaches and shine a light on what creates a culture of innovation.

Menlo Park
Menlo Park served as the location for Thomas Edison’s groundbreaking research lab. Opened in 1876, it operated for roughly a decade and, at its peak, occupied more than two square city blocks. Edison’s “Invention Factory” applied for about 400 patents and gave the world the phonograph, a practical incandescent light, the carbon microphone, the electric generator, and the electric power distribution system.

Edison’s team was populated by brilliant individuals from all over the world, largely engineers and master tradesmen: clockmakers, machinists, glassblowers, and the like. His lab became a small industrial city, housing nearly every conceivable material and the equipment necessary to turn them into new inventions. A controlling visionary, Edison pushed his employees to work long hours and to constantly tinker, build, test, and refine. Many of his patents were filed as improvements (albeit drastic ones) to existing inventions; the strength of Menlo Park lay not in generating unforeseen concepts, but in breakthroughs that optimized and improved existing inventions until they were inexpensive and robust enough for the consumer market. Edison had created the first industrial laboratory: one that, instead of leaving research to the academics and application to the factories, brought the conceptualization, development, and production of new technology under one roof.

Bell Labs
Bell Labs, Alexander Graham Bell’s namesake research lab, was the creative brain trust of AT&T and Western Electric. Formally founded in 1925, this powerhouse of technology would spend more than five decades as the world leader in communication technology. A staggering seven Nobel Prizes were awarded for work completed within the Labs, including the invention of the transistor, the discovery of cosmic microwave background radiation, and the creation of the CCD. And, though never awarded a Nobel Prize, Claude Shannon produced the foundational work on information theory in 1948, laying the pathway for modern computing.

While not headed by a singular, controlling demigod like Edison, Bell Labs housed its share of techno-heroes to be idolized: Jewett, Shockley, Shannon, and Fletcher. Like Menlo Park, Bell Labs brought together multidisciplinary teams that worked to control the full cycle of concept development, from theorization to production. However, Bell differed in two major ways. First, in addition to hiring tradesmen and engineers, Bell employed a number of theoretical academics (physicists, chemists, and so on) who were largely unaccountable for their output and given the autonomy to pursue their own interests, sometimes without a clue of what lay at the end of the tunnel. Business realities existed, and the Labs were not without focused projects and deadlines, but senior management believed in the scientific pursuit of knowledge and trusted that financial benefit would ultimately emerge. Second, and perhaps most important to Bell Labs’ success, was its endless supply of challenges. Due to the sheer enormity of the AT&T network and the problems and realities of scale, incoming employees were surrounded by problems to solve and constant stimulus on how they could improve the world.

PARC
Palo Alto Research Center (PARC) was founded in 1970 as the research wing of Xerox. Though it still exists today as an independent subsidiary, PARC’s heyday was its first 15 years of operation under the guidance of Bob Taylor. Anticipating trends a decade ahead of their time, PARC would make some of the most important advances in computing, including Ethernet, the modern personal computer, GUIs, email, laser printing, and object-oriented programming.

Once again, a multidisciplinary, collaborative, and exploratory culture reigned supreme at PARC. But PARC’s collaboration reached beyond its own walls: situated in the Stanford Research Park, the team engaged closely with academia through joint projects, seminars, and informal conversations. This collaborative spirit at times harmed the lab, such as when a burgeoning Apple Computer was able to tour the facility and walk away with many of PARC’s best ideas. The environment of PARC likely also had something to do with its success: beanbag chairs, games, and a generally relaxed attitude toward work gave employees the comfort and freedom they needed to think creatively. Only by situating PARC thousands of miles from Xerox headquarters in New York was Taylor’s team able to get away with this novel approach to business. The distance, however, may have ultimately been PARC’s undoing, as it also made it very difficult for Xerox management to see the value of PARC’s inventions and provide support.

Google[x]
Google X is the internet giant’s top-secret research facility, which has only come to be known in recent years. Situated in a pair of nondescript, two-story brick buildings only a half-mile from the Googleplex, the group, headed by co-founder Sergey Brin, has the goal of tackling the world’s most wicked problems in hopes of generating out-of-this-world breakthroughs such as Google Glass, driverless cars, diabetic smart contact lenses, and space elevators.

While little is known about X, the group has the ostentatious goal of disrupting complacency in the technological world by doing mind-blowing research - “moonshots” - intended to improve the state of affairs by leaps and bounds. By pushing employees to engage with radical challenges that carry a near certainty of failure, individuals are forced to rethink problems from scratch instead of simply building on the status quo. Moreover, knowing that you are working on the most insane, cutting-edge projects in the world brings a certain prestige and motivation to your efforts. X is kept secret not simply out of IP concerns, but also to give those within the team the sense that what they are doing is truly remarkable and must be insulated from the real world.

Common Threads
These four labs by no means hold the patent on breakthrough work; however, they have carried the torch as some of the most innovative organizations of the past century. Looking across these labs, we unquestionably notice differences in how they operate. Yet common themes begin to emerge that illuminate the culture of breakthroughs:
  • Committed, visionary leaders
  • Passionate, ideologically driven individuals
  • Multidisciplinary, collaborative teams
  • Autonomy & isolation in operation
  • Lack of focus on business realities
  • Willingness to try, and to fail
  • Surrounded by impossible challenges


Many of these points are core tenets of Design Thinking; however, it is still important to remind ourselves of their value and to look back to see their power in action. These approaches to innovation produced the most important pieces of technology of the past 150 years, and yet only a handful of organizations have been able to replicate such environments. While designing the next innovation factory may be a daunting task, at a bare minimum, organizations and teams attempting to center themselves on the ideals of innovation should look closely at their predecessors and pay close heed to the common themes identified above.