99% of the population have no concept of what AI actually is or what kind of threat it poses to humanity. When true experts in the industry issue stark warnings to stop AI development, the rest of the world has no frame of reference for why it should listen to them, and so development continues unabated and unchecked. ⁃ TN Editor
A TOP AI expert has issued a stark warning that super-smart AI technology could bring about the extinction of humanity.
Eliezer Yudkowsky is a leading AI researcher and he claims that “everyone on the earth will die” unless we shut down the development of superhuman intelligence systems.
The 43-year-old is a co-founder of the Machine Intelligence Research Institute (MIRI) and claims to know exactly how “horrifically dangerous this technology” is.
Writing in TIME, he said he fears that a contest between humans and smarter-than-human intelligence would end in a “total loss” for humanity.
As a metaphor, he says, this would be like “the 11th century trying to fight the 21st century”.
In short, humans would lose dramatically.
On March 29, leading AI experts published an open letter called “Pause Giant AI Experiments” that demanded an immediate six-month pause on the training of powerful AI systems.
It has been signed by the likes of Apple’s co-founder Steve Wozniak and Elon Musk.
However, the American theorist says he declined to sign this petition as it is “asking for too little to solve it”.
The threat is so great that he argues that extinction by AI should be “considered a priority above preventing a full nuclear exchange”.
He warns that the most likely result of building superhuman AI is that we will create “AI that does not do what we want, and does not care for us nor for sentient life in general.”
We are not ready, Yudkowsky admits, to teach AI how to be caring as we “do not currently know how”.
Instead, the stark reality is that in the mind of such a machine “you are made of atoms that it can use for something else”.
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
Yudkowsky is keen to point out that presently “we have no idea how to determine whether AI systems are aware of themselves”.
What this means is that scientists could accidentally create “digital minds which are truly conscious”, raising all kinds of moral dilemmas, since conscious beings should have rights and not be owned.
Our ignorance, he implores, will be our downfall.
POSTED BY: IONA CLEAVE
The opinions expressed by contributors and/or content partners are their own and do not necessarily reflect the views of AC.NEWS