Thomas Koulopoulos, CEO of Delphi Group, recently travelled to Dubai to participate in the 2017 World Government Summit. While there he presented on Generation Z and the six forces shaping the future of how we live, work, and play.
I’m about to seriously challenge your ability to envision the future. If you’re up to it, read on.
If you’ve been following the work of Boston Dynamics (currently owned by SoftBank) you’ve probably seen some of their four-legged and wheeled robots, which are able to navigate all sorts of obstacles and remain standing after being kicked, shoved, and pushed. Some of these robots, such as BigDog, WildCat, and Spot, have an amazing ability to mimic an animal’s gait. Last year, however, they introduced a two-legged anthropomorphic robot called Atlas, which was based on a more primitive biped called Petman.
When I first saw Atlas I was impressed by its (his?) ability to perform some basic human-like tasks, such as picking up objects and resisting a human’s attempts to knock it over. Still, it most often looked as though it would have a tough time passing a field sobriety test when it attempted to traverse even moderately rough terrain.
Things are changing fast.
Boston Dynamics just released another video of Atlas in which it navigates elevated objects put in its way. If you don’t feel just a bit creeped out watching this, then you might at least feel somewhat inadequate–at least until the 50-second mark in the video; that will make you feel much better.
The first thing I thought of after seeing Atlas running and jumping was a Star Wars-like image of these things in droves on a battlefield. It’s no surprise that the MIT spinoff, which was first acquired by Google X and then by SoftBank, received much of its early funding from DARPA.
Slaughterbots: The New Arms Race
With the acceleration of AI, autonomous devices, and robots, we’re obviously entering a new arms race, and this one has no visible finish line, which creates a sometimes frightening view of the future. So, what can we do? What should we do? It’s a question that many people figure will sort itself out. I seriously doubt that, because of the pace of innovation in these areas, the degree to which these technologies can do harm in ways that utterly ignore borders and perimeters of any sort, and the degree to which even a very small group or individual can do massive harm.
In 1942 Isaac Asimov introduced us to his three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Since that time these laws have been referenced in thousands of works and have become a mantra of robot proponents. They seem simple enough on the surface. So, is this the answer? Most definitely not. In a post for the Brookings Institution, Peter Singer points out the already absurd fiction behind these laws: “You don’t arm a Reaper drone with a Hellfire missile or put a machine gun on a MAARS (Modular Advanced Armed Robotic System) not to cause humans to come to harm. That is the very point!”
Consider how you would enforce Asimov’s three laws today. If you built a robot with the intention to harm another human, is there some sort of fail-safe built into every silicon chip that will prevent the robot from causing harm? Do programming languages such as Python have a standard piece of code that every software application must run to ensure it isn’t harming people? Of course not!
This is a ridiculous approach. It may soothe our conscience or calm our anxiety to believe that a simple set of three, four, or five rules can govern the evolution of AI, but it would be an illusion. Human nature isn’t going to change. The challenge isn’t just finding ways to eliminate the harm AI and robots can do, but to leverage these same technologies to the greatest degree possible so that we do significantly more good than harm. Is that too fatalistic a viewpoint? I don’t think so. In fact, it may be the only viable viewpoint. Anything else is just pacifying us into inaction.
That doesn’t mean we should sit idly by and allow the development of AI and killer robots any more than we condone the use of chemical warfare and nuclear weapons. Earlier this week a group convened in Geneva to discuss banning fully autonomous weapons. The group is part of a larger coalition of 125 nations, which in 1980 formed the Convention on Conventional Weapons (CCW), “a framework treaty that prohibits or restricts certain weapons considered to cause unnecessary or unjustifiable suffering.” I first found out about the group after watching the video below, “Slaughterbots.”
Are slaughterbots part of the inevitable future of AI and robots? It would be naive to say “absolutely not.” There’s little doubt that any sufficiently advanced technology will be used to do harm. Our track record on that last point has been pretty consistent throughout history.
However, this is where I’d like to challenge your view of the future for just a moment.
Assume that AI and robots will result in injury and death for humans. In addition, let’s assume that the human toll is one that could have been totally avoided without the advent of AI. Still with me? Good. Now, here’s my question to you. Can you envision a degree of benefit and enough human value to offset that cost? Your initial reaction will be “of course not, every life has value and is worth saving.” Agreed! So, why do we allow automobiles to kill nearly 1.5 million people and injure another 50 million each year globally? Why do we put up with electricity, which kills about 400 people per year in the US alone? Why do we allow planes to fly when 30,000 people have been killed in nearly 2,000 aircraft incidents since 1959?
The answer is an easy one from where we stand today: because these are each essential technologies that have contributed to, saved, and improved the lives of billions more people. The math always makes sense in retrospect. However, I would venture to say that if you’d recited these same statistics to someone in 1917, they just wouldn’t have added up; people would have found more than enough reasons to ban cars, planes, and electricity.
And that’s precisely the challenge of envisioning the future. We constantly look at the future through the lens of the past, seeing only the reasons not to move forward. In short, it’s much easier to project the threat of any technology over its benefits. As economist Paul Romer once said, “Every generation has perceived the limits to growth that finite resources and undesirable side effects would pose if no new recipes or ideas were discovered. And every generation has underestimated the potential for finding new recipes and ideas. We consistently fail to grasp how many ideas remain to be discovered. Possibilities do not add up. They multiply.”
And it’s in that multiplication, the strange and wonderful mathematics of progress, that the future always brings far greater benefits than we can possibly imagine.
Hard to envision, isn’t it? Only if we fail to do the math correctly.
There’s a viral video that makes its way around this time of year in which Star Trek’s Captain Picard repeats his trademark command, “Make it so.” (A play on the holiday jingle “Make It Snow.”)
We’d all like to lead like Picard, but in the real world it’s not that easy, especially when it comes to innovation. Still, I’ve heard founders repeatedly say, “I want to build a culture of innovation. Can you come in next week and put one in place?” Inevitably that points to a classic problem that I call “The Founder’s Dilemma.”
Here’s how it works.
The Founder’s Dilemma
You’re an entrepreneur. Innovation is what you do. It’s who you are. It’s why your business exists. So, naturally, you end up being the one who comes up with the really good ideas. After all, it is your business. Good luck with that! Having built three successful businesses and worked with hundreds of others, I’ve learned one thing about innovation: an innovation culture may stem from the founder, but to scale, it has to be sustained throughout the organization. Yet it’s often the founder’s zeal for innovation that acts as its greatest barrier.
I recall one founder I worked with who was very concerned that his company was not innovative enough. He had built the company from an early innovation, proudly displayed in his office: a cosmetics makeup press fashioned out of a small hydraulic car jack! Now he wanted to jump-start innovation and wondered why his company wasn’t as innovative as he’d like. I interviewed 30 people across the company. Everyone told me the same thing: “The founder is the innovator. His ideas built this company–he’s brilliant and I have enormous respect for him. I don’t want to let him down with a bad idea. I just do my job really well.” It didn’t take long to figure out that just about everyone was far more concerned about stepping into the large shadow cast by the founder than they were about being innovative. There goes your culture of innovation!
Letting go of Innovation
Cultures need rituals and a process to reinforce innovation. They need leaders who back off and pass innovation on to others, and who then recognize them when they succeed and support them when they fail at something worth trying. Otherwise people suffer from the same fear of failure that the founder I just mentioned had unwittingly created.
Here’s the reality of being a great innovator: you have to let go of the innovation baton and pass it on to others. It’s what the quintessential innovator, Steve Jobs, did with Tony Fadell and the iPod. Be courageous and challenge people to come up with the next idea–incremental or great, it doesn’t matter. What does matter is signaling clearly that innovation can come from anyone, and then putting in place and nurturing a culture to make it so!
4 Ways Founders Create (or Crush) a Culture of Innovation
- Get Culturally Creative–Think in terms of what’s valuable in your culture. In my second company I had a policy that no office, even my own, would have a door. Why? I wanted to signal that in our culture everyone had the license and the responsibility to work in interrupt mode. The result was that ideas flowed freely, constantly, unfettered. Close the door to collaboration and crush innovation.
- Set the tone. You are a role model for innovation but you cannot be its only source. Advertise the success of others and their ideas. Talk about how the seeds of innovation are taking root throughout the company. Be sure to applaud and recognize innovations, no matter how small. Ignore recognition and crush innovation.
- Establish a company-wide budget for new ideas. This isn’t an R&D budget. Instead it’s for any idea that is worth exploring. It’s a hedge against outside innovation. Every now and then one idea will knock it out of the park. It only takes a few of those to illustrate how innovation is part of your culture. Don’t set aside budget and crush innovation.
- Share the story of innovation with new employees. Make sure the story is not a story about YOU and the one great idea that launched your business. Instead make it a story about the culture of innovation and the many people who sustain it. Make the story just about you and crush innovation.