Written by Juan Suarez

Artificial Intelligence: looking ahead

When it comes to artificial intelligence (AI), the question isn’t if it will arrive. It’s what we can expect when it does.

Many modern-day thought leaders are debating the answer to this question. Will AI bring about a utopia that cures all that ails us and challenges us to become stronger? Or will it bring a dystopian, Terminator-esque future where Skynet becomes self-aware and wipes us all out? Opinions divide sharply on the path AI will take: some see the glass as half-full, and some see it as half-empty.

Half-empty

According to Sam Harris, host of the podcast Waking Up, we may not be able to marshal an appropriate response to the potential dangers of AI. In fact, we may find ourselves intrigued by these dangers rather than afraid of them.

We’ll build machines that are smarter than we are, and they, in turn, will build machines that are smarter than them. The mathematician I.J. Good argued that this would be the starting point of an intelligence explosion, a runaway effect now known as the singularity.

The singularity, Harris argues, could produce machines far more intelligent than humans. At that point, even the slightest divergence between their goals and ours could create chaos. He used the following analogy to describe this:

“Just think of the way we relate to ants. We don’t hate them, we don’t go out of our way to harm them. In fact, sometimes we go out of our way not to harm them. We just step over them on the sidewalk. But, whenever their goals seriously conflict with one of our goals, let’s say when constructing a building, we annihilate them without a qualm. The concern is that we will one day build machines, whether they are conscious or not, that could treat us with similar disregard.”

Others have raised concerns about the economic and political consequences of AI. If the U.S. discovers that another country has created a new, potentially threatening AI, should we worry? What is the appropriate response? And on the economic side, will AI “take over” many of the jobs in the global economy, causing high unemployment and widening wealth inequality?

Half-full

The problem with the half-empty view of AI is that it overlooks the fact that humans are constantly learning, growing, and advancing. Returning to the ant analogy: as humans have become more intelligent, we have also become more moral. Just 300 years ago, we had no laws protecting animals, the environment, or human rights. Why shouldn’t the same hold for AI? As AI becomes more intelligent, it could also become more moral, sympathetic, and benevolent.

And when we talk about AI and the economy, why can’t it be a positive thing? For centuries, technology has made us more productive, freeing us to pursue new opportunities and challenging us to adapt, evolve, and create. If technology can take over the more mundane and repetitive tasks in our lives, we will have more time for leisure and for family and friends. Poverty could be reduced significantly as machines work the fields, produce food and homes, care for children, and do chores at little to no expense. AI could create wealth, not take it away.

Understanding what’s to come

Some of this may sound like science fiction, but these are the questions thought leaders are asking about the technology we have now and may have in the future. Mastering digital agility is essential to creating results and staying relevant in an age of constant disruption.
