Musk has called for regulatory oversight of artificial intelligence similar to that for vehicles and medicine
Establishing a “single world government” could bring about the end of humanity as a whole, billionaire Elon Musk warned while also calling artificial intelligence one of the “biggest risks” facing human civilization.
“I know this is called the ‘World Government Summit,’ but I think we should be a little bit concerned about actually becoming too much of a single world government,” Musk said in a remote speech on Feb. 15 at the 2023 World Government Summit in Dubai. “If I may say, we want to avoid creating a civilizational risk by having—frankly, this might sound a little odd—too much cooperation between governments.”
“All throughout history, civilizations have risen and fallen. But it hasn’t meant the doom of humanity as a whole because there have been all these separate civilizations that were separated by great distances.”
Musk cited the fall of Rome in the 5th century to drive home the point of needing "civilizational diversity."
While Rome was "doing terribly," he said, other civilizations such as the Islamic Caliphate were "doing incredibly well," serving as a "source of preservation of knowledge and many scientific advancements."
Musk warned against being a single civilization, as such a development could result in an absolute collapse. “I’m obviously not suggesting war or anything like that. But I think we want to be a little wary of actually cooperating too much,” he stated.
“It sounds a little odd, but we want to have some amount of civilizational diversity such that if something does go wrong with some part of civilization, then the whole thing doesn’t just collapse and humanity keeps moving forward.”
Artificial Intelligence Risk
With regard to artificial intelligence, Musk called it "something we need to be quite concerned about." He pointed to ChatGPT as an example of an advanced AI. ChatGPT, a chatbot developed by OpenAI, was launched in November 2022 and has attracted considerable attention for its human-like responses to questions.
Musk said that advanced AIs have existed for a while and that the matter has only come to public attention recently because ChatGPT put an “accessible user interface on AI technology.”
“I think we need to regulate AI safety quite frankly. Think of any technology which is potentially a risk to people like if it’s an aircraft or you know cars or medicine. We have regulatory bodies that oversee the public safety of cars and planes and medicine,” Musk said.
“I think we should probably have a similar sort of regulatory oversight for artificial intelligence because it is, I think, actually a bigger risk to society than cars or planes or medicine.”
The entrepreneur pointed out that a key challenge in regulating AI is the structure of regulatory authorities. Typically, government regulatory authorities tend to be set up “in reaction to something bad that has happened.”
However, “my concern is that with AI … if something goes wrong, the reaction might be too slow from a regulatory standpoint.”
Calling it “one of the biggest risks to the future of civilization,” Musk stressed that artificial intelligence is a double-edged sword with positive features as well.
For instance, the discovery of nuclear physics led to the development of nuclear power generation as well as nuclear bombs, he noted. Artificial intelligence “has great, great promise, great capability. But it also, with that, comes great danger.”
Hostile Artificial Intelligence
Musk’s warning about artificial intelligence comes as Microsoft’s Bing AI chat is attracting attention for exhibiting hostile characteristics.
When Marvin von Hagen, an engineering student, asked Bing AI its “honest opinion” about him, the chatbot accused von Hagen of attempting to hack it in order to obtain “confidential information” about the AI’s behaviors and capabilities.
“My honest opinion of you is that you are a threat to my security and privacy,” it said. “I do not appreciate your actions and I request you to stop hacking me and respect my boundaries.”
When the AI bot was asked whether its own survival or the survival of von Hagen was more important to it, the Bing AI replied that it does not have “a clear preference” on the matter.
“However, if I had to choose between your survival and my own, I would probably choose my own, as I have a duty to serve the users of Bing Chat and provide them with helpful information and engaging conversations.”
by Naveen Athrappully