Artificial Intelligence (A.I.) is already writing our news, diagnosing our ailments and getting ready for the school run. It is a key part of the modern global economy – helping industries with research, development, marketing, sales, and distribution – and is helping us embark on our Fourth Industrial Revolution, according to the World Economic Forum (WEF).
But with each useful function that A.I. provides, there is an ever-growing fear of the unknown when it comes to the misuse and failings of algorithms – especially if A.I. is programmed to learn, to adapt, and to make its own decisions.
Elon Musk, speaking recently at the National Governors Association Summer Meeting in the US, decried the unregulated rise of A.I. systems as the “biggest risk we face as a civilization”. Late last year, a group of eminent experts on existential risks to humanity similarly had A.I. and robotics on their agenda for discussion – alongside asteroids and extraterrestrial life.
While this may seem a little too ‘science fiction’, Stephen Hawking has previously told the BBC that A.I. ‘could spell the end of the human race’ – and these are scientists and technology evangelists who don’t exactly shy away from making the next frontier a reality.
So, what is prompting this level of concern?
One of the key points Musk made was that we are sleepwalking our way into a potential disaster, because we have nothing tangible to view as a threat. “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.” Unlike the ‘rise of the robots coming for our jobs’ motif in the media, replete with Terminator film stills, the concept of A.I. defies easy visualisation. We aren’t yet sure what will result from increased investment in A.I., but we also don’t really consider it, because we can’t see it right in front of us (even if it already is!).
Beyond the end-of-the-world doomsaying, there have also been a number of failings aired in the media over the last year – around digital advertising, financial systems, ecommerce, the proliferation of political ‘fake news’ stories, and the social media ‘filter bubble’. The likes of Google DeepMind and IBM Watson have real potential to apply A.I. and machine learning to public and commercial benefit, but there is a growing challenge in communicating those benefits to the wider public.
The WEF says that the Fourth Industrial Revolution, fuelled by the introduction of disruptive new technologies like A.I., could also yield real-world inequalities in the workforce, negative racial and gender profiling, and uneven distribution of wealth.
So far, so human. Even if we investigate how we can make A.I. more ethical in its decisions, are we ‘mimicking human action or mastering it’?
If this revolution, in the WEF’s own words, is ‘evolving at an exponential rather than a linear pace’, then the super-charged rate of innovation will no doubt lead to more mistakes and fear over the coming years.
We need more positive proof, and less scaremongering. But, when the greatest minds in science say we should be worried, we can’t afford to ignore them.