We love metaphors. Our art and daily dialogue are full of them. They help us understand new and complicated ideas. Love is a battlefield. The truth is a hard pill to swallow. Life is a roller coaster.
But what is an apt metaphor for artificial intelligence (AI)? How can we help ourselves better understand it? Is it a garden that you must water? Is it a malicious bot set on taking over the world?
The truth is, AI is a widely used term that has lost nearly all its meaning. People participating in the same conference (on the same panel) will use it to refer to the Terminator and to the alarm on their phone that goes off at sunrise. One example is a fictional cyborg bent on eliminating human existence, the other is a simple look-up rule that checks the projected sunrise time from any number of websites and sets your phone accordingly. Arguably, neither contributes in a meaningful way to the current conversation about AI: one is imaginary and the other is just old if-then logic.
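That sunrise alarm really is just conditional logic, not intelligence. A minimal sketch makes the point (the function name and types here are illustrative assumptions, not any real phone API; the "lookup" of the projected sunrise from a website is left out):

```python
from datetime import time
from typing import Optional

def update_sunrise_alarm(projected_sunrise: time,
                         current_alarm: Optional[time]) -> time:
    """Plain if-then logic: if the projected sunrise differs from the
    current alarm (or no alarm is set), move the alarm to match it."""
    if current_alarm is None or current_alarm != projected_sunrise:
        return projected_sunrise
    return current_alarm

# The lookup step would fetch projected_sunrise from a weather or
# astronomy website; here we just pass a value in.
alarm = update_sunrise_alarm(time(6, 42), time(6, 45))
```

No training, no learning, no adaptation: one rule, applied on a schedule. Calling this "AI" in the same breath as the Terminator is exactly the confusion the term now carries.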
The misconceptions about AI stem from our collective inability to understand cognitive technology in general – a category of technologies fundamentally different from those we’ve grown accustomed to. The “desktop” was a great metaphor for personal computers when they came to market because we needed some point of reference to base our understanding of both how this new technology would allow us to work differently and what kind of work we would be doing. The metaphor worked well for a long time, and arguably still does today, but this way of thinking also made new operating systems for phones and tablets harder to adapt to when they came along. Though the work we could do was largely the same, how we were able to do it was changing – and this was a hard concept to grasp.
AI faces the same problem today. The examples we commonly use to discuss it in public settings do not apply directly to many of the business problems we face. A machine that can win at Go and a car that can drive itself are both remarkable and inspiring, but when you’re responsible for P&L in a line of business, who gives a shit? Your business probably doesn’t play board games. So far, we lack a breadth of tangible examples of successful AI in business, and many of us are forced to think of it through metaphor.
We commonly say that deploying AI is like planting a garden that a company must water if it wants it to yield the expected crop. What we mean is that this technology is not something you can set and forget. AI models need examples and training, and they need ongoing investment – perhaps indefinitely. What the metaphor does not do is convey the sense of urgency that must come with maintaining AI. If you fail to water an actual garden, it will wither, but it will not start producing poisonous cucumbers or spontaneously catch fire and burn your house down.
“Unwatered” AI could poison you or burn your house down. These are tremendously powerful technologies in their infancy, and their implications are significant. Imagine an AI turned loose on the world to consume social media, “teach” itself based on the posts it reads, and interact directly with humans without being corrected – it would quickly become a racist, homophobic, suicide-endorsing machine, much like Microsoft's AI chatbot Tay.
An untended garden can’t run amok, but an untended AI deployment can. These software solutions are young and full of aptitude and promise. More than a garden, they are like children we choose to birth or adopt and must now raise. Why did my four-year-old hit her sister? I don’t know, and she doesn’t either, but I don’t want her to grow up to be a violent sociopath, so it’s my job to correct her behavior – now, while there is still time to positively influence her.
AI is much the same. It requires more than configuration. It needs exposure to an ever-expanding collection of experiences to try out its own conclusions. Some of those conclusions will bring us to our own epiphanies (“through the eyes of a child”) like an AI that beats a Go world champion because it plays differently than what we, as humans, “know” to be best. Some will be mistakes that need to be corrected, like an AI that makes contraindicated recommendations for a heart patient’s medication. If you do not correct AI early and often, it will become as entrenched in its bad behavior – and bad conclusions – as children can.
For companies to realize the full potential of AI, we need to commit to our AI investments like we commit to our kids. I have to train my AI, give it the opportunity to build trust among my human workforce, and then slowly expose it to a wider collection of relevant business problems so it can become more accurate at different types of work and a powerful business enabler for my company. Just like I have to take my daughter to karate, make another contribution to her 529 plan, and help her understand how she might do something differently when she makes a mistake, I have to make my AI deployment a positive contributor to my organization. Personally, I have high hopes for my children and AI, and I know well that both depend on me.