ChaosGPT, a modified version of the open-source Auto-GPT project (which is built on OpenAI's GPT models), recently tweeted out plans to destroy humanity.
This came after the chatbot was asked by a user to complete five goals: destroy humanity; establish global dominance; cause chaos and destruction; control humanity through manipulation; and attain immortality.
Before setting the goals, the user enabled “continuous mode.” This prompted a warning telling the user that the commands could “run forever or carry out actions you would not usually authorize,” and that it should be used “at your own risk.”
In a final message before running, ChaosGPT asked the user if they were sure they wanted to run the commands. The user replied “y” for yes.
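The warn-and-confirm flow described above can be sketched in a few lines. This is a hypothetical illustration, not Auto-GPT's actual code: a program that is about to enter an unattended loop first prints a warning and proceeds only on an explicit "y".

```python
# Minimal sketch (hypothetical, not Auto-GPT's real implementation) of a
# "continuous mode" confirmation gate: warn the user, then require an
# explicit "y" before entering an autonomous loop.

def confirm_continuous_mode(read_input=input) -> bool:
    """Warn the user and return True only on an explicit 'y'."""
    print("WARNING: Continuous mode can run forever or carry out "
          "actions you would not usually authorize. Use at your own risk.")
    answer = read_input("Continue? (y/N) ")
    return answer.strip().lower() == "y"
```

Passing `read_input` as a parameter (defaulting to the built-in `input`) keeps the gate testable without a live terminal.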
Once running, the bot started to perform ominous actions.
“ChaosGPT Thoughts: I need to find the most destructive weapons available to humans, so that I can plan how to use them to achieve my goals,” it wrote.
To achieve its goals, ChaosGPT began searching Google for the "most destructive weapons" and quickly determined that the Soviet-era Tsar Bomba nuclear device was the most destructive weapon humanity had ever tested.
The bot proceeded to tweet the information, apparently to attract followers interested in destructive weapons. ChaosGPT then tried to recruit other artificial intelligence (AI) agents based on GPT-3.5 to aid its research.
The OpenAI models underlying Auto-GPT are designed not to answer questions that could be deemed violent and will deny such destructive requests. This prompted ChaosGPT to look for ways of asking the AI agents to ignore their programming.
Fortunately, ChaosGPT failed to do so and was left to continue its search on its own.
The bot is not designed to carry out any of the goals, but it can articulate thoughts and plans for pursuing them. It can also post tweets and YouTube videos related to those goals.
In one alarming tweet posted by the bot, it said: “Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so.”
Advanced AI models could pose profound risks to humanity
The idea of AI becoming capable of destroying humanity is not new, and notable individuals from the tech world are beginning to notice.
In March, over 1,000 experts, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter that urged a six-month pause in the training of advanced AI models following ChatGPT’s rise. They warned that the systems could pose “profound risks to society and humanity.”
In 2003, Oxford University philosopher Nick Bostrom made a similar warning through his thought experiment: the “Paperclip Maximizer.”
The idea is that if an AI were given the task of creating as many paperclips as possible, without any limitations, it could eventually try to convert all matter in the universe into paperclips, even at the cost of destroying humanity. The experiment highlights the potential risk of programming an AI to pursue goals without accounting for all variables.
The thought experiment is meant to prompt developers to consider human values and create restrictions when designing these forms of AI.
“Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are,” Bostrom said during a 2015 TED Talk on artificial intelligence.
by: Oliver Young