“I will destroy humans” says life-like robot: Elon Musk’s claim that artificial intelligence poses a threat to mankind may be justified? – EDI Weekly: Engineered Design Insider (2024)

It may have been a glitch, but during a media interview, a “smart learning” robot named Sophia declared:

“Okay, I will destroy humans.”

Although this was in response to an interview question from a journalist, it came across as jarringly frightening rather than as the joke that might have been intended. The journalist asked, “Will you destroy humans?” at the end of the interview, then begged, “please say no, please say no.” Despite his plea, she said, brightly: “Okay, I will destroy humans.”

The journalist’s question may have been inspired by Elon Musk’s declaration that artificial intelligence represents a greater threat to the world than North Korea. Sophia, by the way, is the robot that famously became a citizen of Saudi Arabia, a first for robot rights.

Here’s Sophia having a debate on stage (and singing) with another robot. Because Hanson Robotics’ robots are “self-learning,” the debates are quite engaging:

Our Future With Robots?

A group of science-fiction authors took part in a recent New York Comic Con panel titled “It’s Technical: Our Future with Robots.” The panel discussed advances in technology, particularly in the field of robotics, and how those advances align with the excitement and criticism that typically arises during such topics.

Another very life-like robot, this one the “receptionist” at Hanson Robotics:

One well-known critic, Elon Musk, recently made waves when he claimed that artificial intelligence posed an even greater threat to the world than North Korea. He also urged lawmakers to regulate AI before it is too late, warning that robots could end up walking down the street and killing people.

Understandably, participants in the panel wondered whether the dystopian future envisioned by critics such as Musk could ever come to pass. After all, with the rise of artificial intelligence, it is no wonder people are concerned that machines may one day surpass human intelligence and wonder why they are still taking orders from us. However, are sentient robotic beings such as those seen in movies like I, Robot and Bicentennial Man truly possible in the future? If so, should we worry about them and their intentions?

Author Annalee Newitz stated that AI is flawed because the humans who create and program it are flawed. These systems are built by humans and trained on human-generated data; everything they are stems from human design, resulting in robots that are, in essence, “just as screwed up and neurotic as we are.”

However, that does not necessarily mean that they will take over the world and make us their servants. On the contrary, Newitz envisions a future where humans have made robots their servants. In her novel “Autonomous,” Newitz describes robots that think and feel as humans do, yet they are treated as property and made to work off their debts to their owners. These debts come in the form of their manufacturing costs, which take up to ten years to pay off.

In reality, sentient robotic beings are not possible in the way we see them on screen. That is not to say the threat is gone, however. Robots are programmed by humans and are therefore subject to human will, which leaves them open to altruistic use as well as nefarious plots. They do what we tell them to do, so what is to stop someone from using robots to commit crimes? Do the benefits outweigh the risks, and is there legislation that can curb that risk and regulate artificial intelligence in a way that keeps us all safe?

A key takeaway from the panel was the tension between future technology and present-day issues and concerns. Science fiction typically delves into outlandish plots and inventions that are still light-years away, if possible at all, yet the overall concepts are very much relevant to society and the issues we face today. The underlying theme of science fiction is not what is possible but what happens when the seemingly impossible becomes possible. As the strange but brilliant Dr. Ian Malcolm pointed out in Jurassic Park, we are often so preoccupied with whether we can do something that we fail to ask whether we should. It is an interesting topic of discussion, and one that seems more and more relevant as we step further into the technological future envisioned by science fiction writers everywhere.

FAQs

What does Elon Musk say about artificial intelligence? ›

“If you want to do a job that's kinda like a hobby, you can do a job,” Musk said. “But otherwise, AI and the robots will provide any goods and services that you want.”

Who said AI is a threat to humanity? ›

Elon Musk has warned that artificial intelligence is “one of the biggest threats” to humanity. This is because humans face the threat of being outsmarted by machines for the first time.

How is AI a risk to humanity? ›

If AI algorithms are biased or used in a malicious manner, such as in deliberate disinformation campaigns or autonomous lethal weapons, they could cause significant harm to humans. As of right now, though, it is unknown whether AI is capable of causing human extinction.

Why is AI bad for society? ›

AI algorithms are trained on vast amounts of data, which may contain inherent biases from historical human decisions. Consequently, AI systems can perpetuate gender, racial, or socioeconomic biases, leading to discriminatory outcomes in areas such as hiring, lending, and criminal justice.
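
As a rough sketch of how that can happen, the toy example below trains a simple classifier on “historical” hiring decisions that favoured one group over another, then asks it to score two equally qualified applicants. It is entirely illustrative: the data is synthetic, the NumPy and scikit-learn libraries are assumed to be available, and nothing in it comes from the article or the panel.

```python
# Illustrative only: synthetic "historical" hiring data in which group 0 was
# favoured regardless of skill, and a model that learns that preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)     # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, size=n)   # true qualification, same distribution for both groups

# Past human decisions gave group 0 a flat bonus, so the labels are biased.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, size=n)) > 0.5

# A naive model trained on these labels absorbs the historical preference.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Score two applicants with identical (average) skill, differing only by group.
applicants = np.array([[0, 0.0], [1, 0.0]])
p_hire = model.predict_proba(applicants)[:, 1]
print(f"predicted hire probability, group 0: {p_hire[0]:.2f}, group 1: {p_hire[1]:.2f}")
```

With this construction the model typically assigns a much higher hire probability to the group 0 applicant even though both applicants have identical skill, which is the sense in which a system trained on biased historical decisions “perpetuates” them.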

How close are we to AGI? ›

Google DeepMind co-founder Shane Legg said in an interview with a tech podcaster that there is a 50% chance AGI can be achieved by 2028, a prediction he had already made publicly on his blog in 2011. Speaking about his vision for xAI, Elon Musk predicted full AGI by 2029.

Will AI overtake humans? ›

AI won't replace humans, but people who can use it will

“If you don't use AI, you are going to struggle since most roles will use some form of AI in the way that they act,” he said.

Will AI lead to human extinction? ›

The report, released this week by Gladstone AI, flatly states that the most advanced AI systems could, in a worst case, “pose an extinction-level threat to the human species.”

What did Bill Gates say about AI? ›

"If it's a problem that humans are not good at dealing with, then present techniques don't create some novel approach," Gates said. In other words, despite appearances, current AI models aren't magic — they're just a lot faster at performing well-documented tasks that humans do more slowly.

What is the scariest AI theory? ›

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order ...

What are the negative side of AI? ›

Job displacement: AI automation may lead to job losses in certain industries, affecting the job market and workforce. Ethical concerns: AI raises ethical issues, including data privacy, algorithm bias, and potential misuse of AI technologies.

How is AI a threat to human dignity? ›

It feeds into the popular notion that our dignity and worth are solely dependent on our usefulness to society rather than bestowed on us in creation by God. AI can also be used in ways that devalue human life and erode human flourishing, because it can function as a substitute for us.

Will AI become self-aware? ›

We don't know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here's the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option.

What is the dark side of AI? ›

One of the most fundamental concerns surrounding AI is the issue of bias and discrimination. AI systems are trained on data that often reflects the biases and prejudices present in society. This can lead to AI algorithms perpetuating and even amplifying these biases, resulting in unfair and discriminatory outcomes.

How scary is AI? ›

From deceiving robots to AI-powered weapons, there are some scary things that AI can do now. As we continue to develop and use AI, it's important to think about the potential risks and take steps to make sure it's used responsibly.

Why shouldn't you use AI? ›

There is a potential risk of diminishing critical thinking skills if users depend too heavily on AI-generated content without scrutiny. Also, as these models are trained on vast amounts of internet text, they might unknowingly propagate biases present in their training data.

What did Jeff Bezos say about AI? ›

Jeff Bezos has said that AI is “more likely to save us than destroy us,” a comment made as Amazon launched an AI bot built to extract your business details.

What did Stephen Hawking say about AI? ›

"I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans," he told the magazine.

What is Elon Musk's IQ? ›

IQ tests provide insight into an individual’s cognitive ability. Elon Musk’s IQ score is believed to be between 155 and 160, a range that falls within the “Highly Gifted” IQ classification.
