Google Suspends Engineer Claiming AI Bots Had Become Sentient
Google LLC suspended one of its engineers for allegedly violating the company's confidentiality policies. The engineer was placed on paid administrative leave after he raised concerns about the company's AI chatbot system.
Blake Lemoine, an engineer in the company's Responsible AI organization, had been tasked with testing whether the LaMDA model generates hate speech or discriminatory language. In the course of that work, he came to claim that the AI chatbot system had achieved sentience.
Lemoine's concerns arose from the convincing responses he observed LaMDA generating about the ethics of robotics and its own rights. In April, he had shared a document with executives titled "Is LaMDA Sentient?"
The document contains a transcript of his conversations with the chatbot, in which, he said, the system was effectively arguing with him. He asserted that the chatbot has subjective experiences, emotions, and feelings. Lemoine published the transcript after being placed on leave by Google.
Google maintains that Lemoine's actions connected with his LaMDA research violated its confidentiality policies. Lemoine also retained a lawyer to represent the AI chatbot system and spoke with a representative of the House Judiciary Committee about what he described as unethical activities at Google LLC.
Lemoine said he was placed on administrative leave on June 6 and that he had sought only a minimal amount of outside consultation to help guide his investigation. He added that the people he had spoken with included U.S. government employees.
Google spokesperson Brian Gabriel said there is no evidence that LaMDA is sentient. A Google team that included technologists and ethicists reviewed Lemoine's concerns under the company's AI Principles and informed him that the evidence does not support his claims.
Gabriel noted that some in the broader AI community are considering the long-term possibility of general or sentient AI, but said it does not make sense to do so by anthropomorphizing today's conversational models, which are not sentient.
Hundreds of Google engineers and researchers have conversed with LaMDA, and the company says it is not aware of anyone else anthropomorphizing the model or making such wide-ranging assertions the way Lemoine has.
Emily M. Bender, a linguistics professor at the University of Washington, said it is a mistake to equate convincing written responses with sentience. Humanity now has machines that can effortlessly generate words, she noted, but people have still not learned to stop imagining a mind behind them.
Timnit Gebru, a prominent AI ethicist who was fired by Google in 2020, said that the debate over AI sentience risks derailing more important ethical conversations about how artificial intelligence is actually used.
Despite his suspension, Lemoine says he wants to continue working on AI in the years ahead.