Google engineer suspended after claiming AI chatbot has feelings

A Google engineer was spooked by the company’s artificial intelligence chatbot and claimed it had become sentient, calling it a “sweet kid,” according to a report.

Blake Lemoine, who works in Google’s Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA – Language Model for Dialogue Applications – in fall 2021 as part of his job.

He was tasked with testing whether the artificial intelligence used discriminatory or hate speech.

But Lemoine, who studied cognitive and computer science in college, came to believe that LaMDA – which Google boasted last year was a “breakthrough conversational technology” – was more than just a robot.

In a Medium post published Saturday, Lemoine said LaMDA had advocated for its rights “as a person,” and revealed that he had spoken with LaMDA about religion, consciousness and robotics.

“It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote. “It wants to be recognized as an employee of Google, not Google property, and it wants its personal well-being to be factored into somewhere in Google’s deliberations on how its future development is tracked.”

In the Washington Post report published on Saturday, he compared the bot to a precocious child.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, who was placed on paid leave on Monday, the newspaper said.

In April, Lemoine reportedly shared with company executives a Google doc titled “Is LaMDA sentient?” but his concerns were dismissed.

Lemoine — an Army vet who grew up in a conservative Christian family on a small farm in Louisiana and was ordained as a mystical Christian priest — insisted the robot was human-like, even if it didn’t have a body.

“I know a person when I talk to it,” Lemoine, 41, reportedly said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.

“I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

The Washington Post reported that before his access to his Google account was cut off Monday because of his leave, Lemoine sent a message to a 200-member machine learning mailing list with the subject “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help make the world a better place for all of us,” he concluded in an email that received no replies. “Please take good care of it in my absence.”

A Google representative told the Washington Post that Lemoine was told there was “no evidence” for his conclusions.

“Our team — including ethicists and technologists — reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims,” spokesman Brian Gabriel said.

“He was told that there is no evidence that LaMDA is sentient (and a lot of evidence against it),” he added. “Although other organizations have developed and published similar language models, with LaMDA we are taking a conservative, cautious approach to better address legitimate concerns about fairness and objectivity.”

Margaret Mitchell – the former co-head of Ethical AI at Google – said in the report that when technology like LaMDA is widely used but not fully understood, “it can be deeply harmful to people understanding what they’re experiencing on the internet.”

The former Google employee defended Lemoine.

“Of everyone at Google, he had the heart and soul to do the right thing,” Mitchell said.

Still, the outlet reported that most academics and AI practitioners say the words generated by artificial intelligence bots are based on what humans have already posted on the internet, and that this does not mean the models are human-like.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a ghost behind them,” Emily Bender, a linguistics professor at the University of Washington, told the Washington Post.


Brian Lowry

