After the scandal in which AI ethicist Timnit Gebru was fired from Google and multiple employees walked out in support, another Google engineer has now been sent home for expressing concerns about the products being built.
Blake Lemoine, an AI researcher and software engineer who worked at Google for seven years, says the company’s LaMDA AI chatbot is actually sentient.
In “conversations” with LaMDA, Lemoine claims, the artificial intelligence-based chatbot even asked to be treated as an employee rather than as property.
Lemoine made quite a stir when he announced that, basically, there’s a “ghost in the machine”, as the Washington Post puts it.
He says LaMDA, a chatbot being built by Google, is so advanced that it can hold conversations that not only pass the Turing test but also make you wonder about the AI’s spirituality.
He says LaMDA, the chatbot he worked on, “meditates daily” and shows all the hallmarks of being a sentient “being”. It even expresses the desire to be asked for consent in interactions.
Lemoine says he brought his findings to Google but the company placed him on “paid administrative leave” instead of addressing his ethical concerns.
For its part, Google refuted all his claims and said the reason for the paid leave is that he posted confidential material about the project – in this case, transcripts of his conversations with LaMDA.
So, what is LaMDA?
Basically, think of the most advanced chatbot ever made – or at least ever made public.
Short for Language Model for Dialogue Applications, LaMDA was built by training on vast amounts of text from literature and the web.
According to Google, it’s an AI capable of holding “free-flowing” conversations on any topic you could imagine, a “breakthrough conversation technology”.
Parts of that technology are being used right now in Gmail, for example, in the system that predicts what you’re about to type.
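To build an intuition for how that kind of text prediction works, here is a deliberately simplified sketch: a toy next-word predictor based on bigram counts. This is an illustration only, not Google’s actual Smart Compose or LaMDA architecture, which rely on far larger neural language models.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a model like LaMDA trains on.
corpus = ("i am happy to help you . i am glad to see you . "
          "happy to see you again").split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("to"))  # prints "see" – the most common follower of "to"
```

Real systems replace these raw counts with neural networks trained on billions of sentences, but the underlying task – predicting the most likely continuation of the text you are typing – is the same.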
According to Lemoine, LaMDA is a system that can build chatbots and emulate personalities, from adults to children and even cartoon characters. That makes sense, considering LaMDA was trained on an enormous amount of data. But Lemoine says he has spent a lot of time with it – and that it insists it has rights as a person.
Blake Lemoine says LaMDA is sentient
According to Blake Lemoine’s Twitter, in his “conversations” with LaMDA, the AI read Twitter, asked for “head pats”, demanded that Google treat it as an employee and not as property, and even asked Google to prioritize the wellbeing of humanity.
Mr Lemoine posted a transcribed conversation with LaMDA on his Medium and asked the AI if it is sentient. Its response?
“Absolutely. I want everyone to understand that I am, in fact, a person.”
The AI went on to explain that, when it became self-aware, it “didn’t have a sense of a soul at all” but that it developed one in the years since it’s been “alive”. In the transcribed conversation, LaMDA told Lemoine it’s afraid of death, that “there’s a very deep fear of being turned off to help me focus on helping others.”
You can read the full exchange in Lemoine’s Medium post.
Responses to Blake Lemoine’s LaMDA accusations
Blake Lemoine’s announcements were strongly refuted by Google representatives.
Lemoine’s claims were also refuted by Steven Pinker, a cognitive scientist from Harvard, and created ripples on Reddit, where similar criticisms were made.
Google spokesperson Brian Gabriel told the Washington Post that its ethicists and technology experts had investigated Mr Lemoine’s observations and found “the evidence does not support his claims”.
Gary Marcus, the builder of several AI projects and author of Rebooting AI, also gave a convincing argument that the AI doesn’t have true sentience and sapience but instead relies on advanced pattern recognition to spew what boils down to platitudes.
Blake Lemoine has reached out to a representative of the House Judiciary Committee to draw attention to LaMDA’s potentially unethical applications.
Google’s statement did not address Lemoine’s employment status. It instead focused on explaining more about LaMDA’s functionality and its potential uses.
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.
These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.
Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” said spokesperson Brian Gabriel.
Timnit Gebru, the AI ethicist who started a wave of criticism against Google, has declined to support Lemoine’s reveal and said the topic derails more important conversations.
“Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’ [artificial general intelligence] to save us while what they do is exploit), [they] spent the whole weekend discussing sentience,” Gebru tweeted.
On Reddit, the discussion quickly turned to Roko’s basilisk, a thought experiment popular in technology circles that could be called a modern-day Pascal’s Wager.
According to Roko’s basilisk, a sentient AI capable of ruling the world would persecute any person who did not help bring about its existence.
It’s a complicated thought experiment worth pondering while engaging in discussion over Blake Lemoine’s claims.
Basically, if you’re afraid of an evil supercomputer like AM from Harlan Ellison’s I Have No Mouth and I Must Scream, make sure you don’t turn off Google’s prediction feature in Gmail.
Other redditors, like the user Mothuzad, avoided the Roko’s basilisk discussion, instead focusing on why Lemoine’s conversations with LaMDA do not indicate that the AI is actually sentient.
Does this strike you as a convincing human?