Artificial intelligence: the fear that it will learn to hate humans

Mo Gawdat, former commercial director of X Development, the secretive research and development wing of Google, warned that the language models behind today's artificial intelligence are trained largely on what they can learn from social networks. He fears that this negativity bias could become a hazard, since the technology has the power to "love" humanity or to want to "smash it like flies."

Although Gawdat maintains that this fear is still premature, since the machines do not yet have the capacity for destruction, and insists that "humanity is the threat," he does believe that AI will soon have the "agency to create killing machines", but only "because humans are creating them."

“How likely is it that the AI thinks of us as scum, today? Very likely”

He also urged people to create spaces for reflection on how we live, since by focusing on distant doomsday scenarios, he said, humanity risks failing to address the issues it could change right now to ensure a more harmonious future in our inevitable partnership with hyper-intelligent AI.

For Gawdat, who wrote a book on the future of AI in 2021 called Scary Smart, common fears about ChatGPT are a “red herring”, and the power of today’s chatbots is greatly exaggerated by the public and by government legislators. “Now that ChatGPT is among us, even though ChatGPT is honestly not the problem, everyone wakes up and says, ‘Panic! Panic! Let’s do something about it,’” he quipped.

However, in an interview on Dan Murray-Serter’s Secret Leaders podcast, the former Google executive confessed that he regrets having thought of the AI they created as “their children”.

The AI Caveats

“I have lived among those machines. I know how smart they are. I wish I hadn’t created them,” said Mo Gawdat, elaborating: “How likely is it that the AI thinks of us as scum, today? Very likely.” He suggested that, while it remains far from feasible, AI could one day use its abilities to impose its own agenda, as in the movie I, Robot.

Gawdat is the second Google staffer to speak out about the risks posed by AI in recent weeks. Geoffrey Hinton, a computer scientist known as “the godfather of artificial intelligence”, resigned from his post this month and expressed concerns in an interview with The New York Times about the potential for a toxic dynamic between AI and the news.

There is no way to know if companies or countries are working on AI in secret.

Hinton warned that, in the near future, AI will flood the internet with fake photos, videos and text, and that the fakes will be of such quality that the average person “couldn’t know what was true anymore”. He also believes that AI systems will soon be smarter than humans and that this pace of improvement is frightening, so work on scaling up the technology should not continue until it is clear whether it can be controlled.

After the interview was published, Hinton clarified on his Twitter account that he had not resigned from Google in order to criticize the company: “Actually, I left so I could talk about the dangers of AI without considering how this affects Google.” Even so, he noted that, unlike nuclear weapons, there is no way to know whether companies or countries are secretly working on AI.
