Is There A Threat Of Existential Risk From Artificial Super Intelligence?

Two scientific papers that really impressed me early this year were both remarkable for their views on Artificial Intelligence, or AI.

The first paper was a joint effort by the Max Planck Institute in Germany and the Autonomous University of Madrid, Spain. Drawing on computability theory, the authors argued that a superintelligent AI would pose a threat to mankind because, as they put it, it cannot be contained.
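
Why “cannot be contained”? The argument reduces containment to the halting problem: a program that could reliably decide whether an arbitrary AI will ever cause harm would also have to decide whether arbitrary programs halt, which Turing proved impossible. Below is a minimal Python sketch of that classic contradiction; the function would_halt is hypothetical and, by this very argument, can never actually be written.

```python
# Sketch of the halting-problem contradiction behind the
# "containment is undecidable" argument. 'would_halt' is a
# hypothetical oracle; the point is that no such function can exist.

def would_halt(program, data) -> bool:
    """Hypothetical: returns True if program(data) eventually halts."""
    raise NotImplementedError("No algorithm can decide this in general.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if would_halt(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return               # oracle said "loops" -> halt immediately

# Feeding 'paradox' to itself makes any answer from would_halt wrong:
# if would_halt(paradox, paradox) is True, paradox loops; if False, it
# halts. A perfect "harm detector" for AI programs hits the same barrier.
```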

The second paper came from the University of Vermont in the U.S., where a team led by Sam Kriegman focused on the creation of ‘living machines’: a method for designing ‘bio machines’, with benefits to mankind ranging from cleaning up our environment to delivering drugs inside the human body.

The Vermont team calls its bio machines “Xenobots”. Each consists of up to one thousand living cells, can move in different directions and, combined en masse, can move small objects. The researchers designed these bio machines by combining biological matter and using sophisticated algorithms to forecast which candidate designs would perform useful tasks best, as sketched below. One of the worthier tasks proposed is cleaning up our environment, more specifically, removing micro-plastics from our oceans.
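
To make “algorithms that forecast which designs perform best” concrete, here is a toy sketch of an evolutionary design loop of the kind reported for this work: candidate body plans are scored in a stand-in “simulation”, the best survive, and mutated copies refill the population. The bit-string encoding, fitness function and parameters here are illustrative assumptions, not the Vermont team’s actual pipeline.

```python
import random

GENOME_LEN = 64        # hypothetical design encoding: 1 = active cell
POP_SIZE = 20
GENERATIONS = 50

def random_design():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def simulated_fitness(design):
    # Stand-in for a physics simulation scoring how well a design
    # performs a task; here we simply reward a target density of cells.
    active = sum(design)
    return -abs(active - GENOME_LEN // 2)

def mutate(design, rate=0.05):
    # Flip each cell on/off with a small probability.
    return [1 - g if random.random() < rate else g for g in design]

population = [random_design() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    scored = sorted(population, key=simulated_fitness, reverse=True)
    survivors = scored[: POP_SIZE // 2]          # keep the best half
    children = [mutate(d) for d in survivors]    # mutated copies refill
    population = survivors + children

best = max(population, key=simulated_fitness)
print("best score:", simulated_fitness(best))
```

In the published accounts, the scoring step is a full physics simulation of each candidate body plan; the overall search loop, though, has this same shape.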

Now, this raises many concerns, chief among them the question of ethics. Although Xenobots are not conventional organisms, they are entirely alive and, furthermore, have the ability to repair themselves. Follow the consequences of that a little further: what if these bio machines develop on their own and start to interact with our environment in ways that we humans never predicted? Would that evolution be harmful? How would we be able to control them, or even stop them?

In 2017, shortly before his death, Stephen Hawking stated:

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

Hawking did not specify exactly when AI could pose such a threat, but many scientists agree that it would be when AI reaches a state called Artificial Super Intelligence, or ASI: the point at which the cognitive abilities of computers surpass the human brain across all disciplines.

Along with Hawking, Elon Musk, head of Tesla and SpaceX, has sounded alarm bells, calling for much greater regulation and warning that the unchecked development of ASI would be globally disastrous, describing it as ‘even more dangerous than North Korea with nuclear warheads’.

What’s more, he’s not alone in predicting such disastrous consequences. Nick Bostrom, a leading philosopher and Oxford University professor, published a book called ‘Superintelligence: Paths, Dangers, Strategies’, sharing similar concerns to Hawking and Musk. In his book, he discusses a number of scenarios in which superior thinking machines could threaten humanity.

Although ASI has not been realised just yet, many believe that, if not stringently regulated, it will arrive in the ‘near future’, outperforming our own intelligence with consequences that will not necessarily benefit us. The general thesis is that such ASI may well be disastrous for the human race, perhaps even leading to extinction.

If Hollywood were the only measure of the probability of such an event, this scenario would seem quite realistic. There have been many major movie productions, including The Matrix trilogy and The Terminator franchise, whose storylines depict a doomsday or apocalyptic scenario brought about, in principle, because man-made machines have surpassed human intelligence. Okay, they are just movies in a make-believe world that rarely reflects real life but, quite notably, a large number of world thought leaders are growing more and more concerned at the rate of ASI development and, of equal concern, at the lack of thought given to the long-term consequences for our planet and the human race.

Of course, not every forecast is doom and gloom starved of optimism. Others point out that there is no clear evidence that ASI will wipe out humanity, and argue that the frighteningly disastrous futures portrayed in movies and books, and linked directly to AI, are sheer nonsense.

Stanford University published the AI100 Standing Committee report, “Artificial Intelligence and Life in 2030”. It states that AI has already changed our daily lives in ways that benefit our safety, our health and human productivity. It foresees even greater benefits in school learning, safer driving for motorists, home automation and hospitals, all areas showing a steep curve in development pace.

Whilst there is a huge window for abusing the development of AI, that risk is something regulatory bodies must acknowledge and address.

MAX CHOHAN

Max Chohan is a professional literary communicator & visual content designer specialising in audience outreach through static & dynamic digital media. A passive member of MENSA, he has written thousands of mostly fact-based journal pieces, articles and items of digital content covering a very broad subject base.