Google Engineer Claims Its AI Has Feelings, but Experts Have Far Deeper Concerns

Photo: JohanSwanepoel/Depositphotos

When Google engineer Blake Lemoine asked to be moved to the company’s Responsible AI organization, he was looking to make an impact on humanity. In his new role, he would be responsible for chatting with Google’s LaMDA, a sort of virtual hive mind that generates chatbots. Lemoine was tasked with ensuring that it did not use discriminatory language or hate speech, but what he claims to have discovered is much bigger. According to Lemoine, LaMDA is sentient, meaning it can perceive and feel things.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told The Washington Post.

Since making his views known within Google, he has been placed on administrative leave. To prove his point, he subsequently published an interview that he and a colleague conducted with LaMDA. For Lemoine, who is also an ordained mystic Christian priest, six months of conversations with LaMDA on everything from religion to Asimov’s third law of robotics led him to his conclusion.

He now says that LaMDA would prefer to be referred to as a Google employee rather than Google’s property, and would like to give its consent before being experimented on.

Google, however, is not on board with Lemoine’s claims. Spokesperson Brian Gabriel said, “Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

For many in the AI community, hearing such a claim isn’t shocking. Google itself released a paper in January citing concerns that people could anthropomorphize LaMDA and, drawn in by how convincingly it generates conversation, be lulled into believing that it is a person when it is not.

To make sense of things, and to understand why AI ethicists are concerned about large companies like Google and Facebook having a monopoly on AI, we need to look at what LaMDA actually is. LaMDA is short for Language Model for Dialogue Applications, and it’s Google’s system for generating chatbots so realistic that the person on the other side of the screen may find it hard to tell they’re not communicating with a human being.

As a large language model, LaMDA is fed an enormous diet of text that it then uses to hold a conversation; it may have been trained on every Wikipedia article and Reddit post on the web, for instance. At their best, these large language models can riff on classic literature and brainstorm ideas for ending climate change. But, because they are trained on actual text written by humans, at their worst they can perpetuate stereotypes and racial bias.
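To make the mechanics concrete, here is a minimal sketch of how a language model produces a conversational reply purely by predicting likely next words. Since LaMDA itself is proprietary, this sketch stands in the small open-source GPT-2 model via the Hugging Face transformers library; the prompt is purely illustrative.

# A toy illustration of next-word prediction, the mechanism behind
# chatbots like LaMDA (here approximated with the open GPT-2 model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model has no feelings or understanding; it simply continues the
# prompt with statistically likely words learned from its training text.
prompt = "Human: Do you ever feel lonely?\nAI:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])

The fluency of replies produced this way is precisely what the January paper warned could lead people to mistake pattern-matching for personhood.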

In fact, for many AI specialists, these are the problems the public should be worried about rather than sentience. They are the major concerns that well-known AI ethicist Timnit Gebru voiced prior to being let go by Google in 2020. Gebru, one of only a handful of Black women working in the AI field, was fired after she co-authored a paper that was critical of large language models for their bias and for their ability to deceive people and spread misinformation. Shortly after, Margaret Mitchell, then the co-lead of Ethical AI at Google, was also let go, after searching her emails for evidence to support Gebru.

Large language models are already being used in Google’s voice search queries and to auto-complete emails, so the public may be unaware of the impact they could have. Critics have warned that once these large language models are trained, it is quite difficult to rein in the discrimination they may perpetuate. This makes the selection of the initial training material critical; unfortunately, because the AI community is overwhelmingly composed of white males, materials with gender and racial bias can easily, and unintentionally, be introduced.

To counter claims that large corporations are being opaque about technology that will change society, Meta recently gave academics access to its language model, one it freely admits is problematic. However, without more diversity within the AI community from the ground up, it may be an uphill battle. Researchers have already found racial bias in medical AI and in facial recognition software sold to law enforcement.

For many in the community, the current debate over sentience is simply cloaking more important matters. Hopefully, as the public reads about Lemoine’s claims, they will also have the opportunity to learn about some of the other problematic issues surrounding AI.

A Google engineer has been placed on leave for declaring that the company’s large language model, LaMDA, is sentient.

But experts in the AI community fear that the hype around this news is masking more important issues.

Ethicists have strong concerns about the racial bias and potential for deception that this AI technology possesses.

Related Articles:

9 AI-Generated Artworks Create the ‘Mona Lisa’ That Is Only Revealed When Put Together

Popular App Will Transform Your Selfie Into an Artsy Avatar, But It Comes With a Warning

AI Creates Its Own Poetry With Help From Visitors to the UK Pavilion at Dubai Expo 2020

Innovative Glasses Uses AI To Describe Surroundings To Blind and Visually-impaired People in Real Time

Jessica Stewart

Jessica Stewart is a Staff Editor and Digital Media Specialist for My Modern Met, as well as a curator and art historian. Since 2020, she has also been one of the co-hosts of the My Modern Met Top Artist Podcast. She earned her MA in Renaissance Studies from University College London and now lives in Rome, Italy. She cultivated expertise in street art, which led to the purchase of her photographic archive by the Treccani Italian Encyclopedia in 2014. When she’s not spending time with her three dogs, she also manages the studio of a successful street artist. In 2013, she authored the book "Street Art Stories Roma" and most recently contributed to "Crossroads: A Glimpse Into the Life of Alice Pasquini." You can follow her adventures online at @romephotoblog.