Apocalypse Imminent: Will Artificial Intelligence eventually destroy humanity?

By Suchetana
May 12, 2018

When the acclaimed theoretical physicist Stephen Hawking died back in March, his many achievements and convictions were revisited by the scientific community. Far less attention, though, was paid to his conclusion that Artificial Intelligence (AI) poses the greatest threat to mankind’s continued existence.


Stephen Hawking predicted AI would eventually destroy humanity

During one of his many unequivocal statements on the extent of the threat, he said: “Unless we learn how to prepare for, and avoid, the potential risks, the rise of AI could be the worst event in the history of our civilisation.”


Given Hawking’s intellect and reputation, it would be foolhardy to ignore his warnings. Just what is Artificial Intelligence, and why did it make the eminent cosmologist quite so alarmed?


According to John McCarthy, the American computer scientist who coined the term ‘artificial intelligence’, the answer is straightforward and innocuous-sounding: “It’s the science and engineering of making intelligent machines, especially intelligent computer programmes”. In other words, it’s creating computers that have a human-like capacity to think.


For much of its history, this has seemed a relatively harmless aspiration on the part of the scientific community. In 1914, for instance, El Ajedrecista, the world’s first autonomous chess-playing machine, made its public debut. Although rudimentary, it could play out a king-and-rook endgame against a human opponent and flag any illegal moves its adversary made.


In 1961, things became a little more practical, with Unimate, the world’s first industrial robot, joining the production line at General Motors’ New Jersey plant.


Machine intelligence may one day far exceed the humans who first devised it

While both these systems were quite basic, Hawking saw them as steps towards something far more sinister – the development of machine consciousness. Computer scientists call this prospect the Singularity: the widespread belief that mankind will eventually create an Artificial Superintelligence, a self-aware entity whose technological capabilities and constant self-improvement would, one day, carry it far beyond the humans who first devised it.


To date, however, no system has passed the Turing Test – an elaborate “interview” procedure that assesses whether any given computer system can interact with a human to the extent that the human believes their respondent is also human.


While that is yet to happen, the current thinking is that a computer capable of passing the Turing Test will have been developed by 2029. Just such a prospect has seen a number of luminaries side with Hawking, including Elon Musk, the visionary behind Tesla – a leading manufacturer of electric vehicles and solar panels. Last year, Musk was one of more than 100 technology leaders who petitioned the United Nations to ban AI-enabled autonomous weapons, having earlier signed an open letter on their dangers alongside Hawking.


Already, many of AI’s potential military applications have caused worldwide concern. According to figures from Action on Armed Violence, a London-based charity, airstrikes killed more than 15,000 civilians in 2017 alone, a year-on-year rise of over 40%.


AI has already insinuated itself into many of our day-to-day interactions

As well as AI’s potential for transforming battlefields, there are also concerns about its impact in the workplace. While some gamely maintain that AI will inevitably create jobs in the future, many forecasters expect it to do exactly the opposite.


According to a 2017 report by McKinsey, the global management consultancy, around half of the work activities currently performed by humans could be automated by 2055, with autonomous systems employed instead.


Despite such glum predictions, not every technocrat is a member of the anti-AI brigade. Welcoming the possibilities opened up by technology, Facebook CEO Mark Zuckerberg says: “One reason I’m so optimistic about AI is the improvements it could offer in terms of basic research systems across so many fields – from diagnosing diseases to keeping us healthy, to improving self-driving cars and keeping us safe.”


On a less grandiose level, AI has also insinuated itself into many of our day-to-day interactions. Almost entirely without ceremony, it has come to play a key role in road and traffic safety, credit card fraud detection, and home and office security.


So, will Artificial Intelligence be an extinction-level threat or a benevolent companion? While both views have their proponents, it could be that the future will ultimately see man and machine not as antagonists but as allies, working more closely together than could have been envisaged just 50 years ago.


Through a process also known as digital ascension, man and machine will become one

Championing this particular view is Ray Kurzweil, a Director of Engineering at Google and the author of the 2005 bestseller The Singularity is Near. Outlining his theory, he says: “While 2029 is the date I have consistently predicted that an AI will pass the Turing Test, I have set the date for the Singularity as 2045. This is the year, I believe, we will multiply our own effective intelligence a billion-fold by merging with the intelligence we have created.”


In a process also known as digital ascension, man and machine will become one, with human consciousness uploaded to a computer mainframe. It may not accord with Hawking’s belief that Artificial Intelligence will destroy humanity, but it does mean that, post-2045, our descendants may not be recognisably human. That alone may prove that the much-missed physicist was right to warn us.


Text: Tenzing Thondup
Photos: AFP