Eric Schmidt is a top global technocrat and heir apparent to Henry Kissinger, both members of the elitist Trilateral Commission that is the headwaters of modern Technocracy. Comparing AI to nuclear bombs is a false analogy: nuclear bombs actually level cities, while AI is still struggling to drive a car in a straight line. Is Schmidt signaling that AI will become a weapon of mass destruction? ⁃ TN Editor
Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually-assured destruction that keeps the world’s most powerful countries from destroying each other.
Schmidt talked about the dangers of AI at the Aspen Security Forum at a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he, himself, had been naive about the power of information in the early days of Google. He then called for tech to be better in line with the ethics and morals of the people it serves and made a bizarre comparison between AI and nuclear weapons.
"We are not ready for the negotiations that we need." – @ericschmidt #AspenSecurity pic.twitter.com/As749t6ZyU
— Aspen Security Forum (@AspenSecurity) July 22, 2022
Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. “In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests and eventually they were banned,” Schmidt said. “It’s an example of a balance of trust, or lack of trust, it’s a ‘no surprises’ rule. I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing…will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum. Begin some kind of thing where, because you’re arming or getting ready, you then trigger the other side. We don’t have anyone working on that and yet AI is that powerful.”
AI and machine learning are impressive and frequently misunderstood technologies. They are, largely, not as smart as people think. AI can churn out masterpiece-level artwork, beat humans at Starcraft II, and make rudimentary phone calls for users. Attempts to get it to do more complicated tasks, however, like drive a car through a major city, haven't been going so well.
Schmidt imagined a near future where both China and the U.S. would have security concerns that force a kind of deterrence treaty between them around AI. He spoke of the 1950s and '60s, when diplomacy crafted a series of controls around the most deadly weapons on the planet. But for the world to get to a place where it instituted the Nuclear Test Ban Treaty, SALT II, and other landmark arms-control agreements, it took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki.
America's destruction of those two Japanese cities at the end of World War II killed more than a hundred thousand people and proved to the world the everlasting horror of nuclear weapons. The governments of the Soviet Union and China then rushed to acquire the weapons. The way we live with the possibility these weapons will be used is through something called mutual assured destruction (MAD), a theory of deterrence holding that if one country launches a nuke, every other nuclear power may launch in return. We don't use the most destructive weapon on the planet because of the possibility that doing so will destroy, at the very least, civilization around the globe.
Despite Schmidt’s colorful comments, we don’t want or need MAD for AI. For one, AI hasn’t proved itself anywhere near as destructive as nuclear weapons. But people in positions of power fear this new technology, typically for all the wrong reasons. Some have even suggested handing control of nuclear weapons over to AI, theorizing that machines would be better arbiters of their use than humans.
The problem with AI is not that it has the potentially world-destroying force of a nuclear weapon. It’s that AI is only as good as the people who designed it, and that it reflects the values of its creators. AI suffers from the classic “garbage in, garbage out” problem: Racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile.
This is something Demis Hassabis—the CEO of DeepMind, which trained the AI that’s beating Starcraft II players—seems to understand better than Schmidt. In a July interview on the Lex Fridman podcast, Fridman asked Hassabis how a technology as powerful as AI could be controlled and how Hassabis himself might avoid being corrupted by the power.
AUTHOR: Patrick Wood
POSTED BY: JANUS ROSE