In mid-June we told you the story of Blake Lemoine, an engineer at Google who worked on the company's responsible AI development projects. Lemoine had been placed on paid administrative leave after publicly declaring, on multiple occasions, that in his view the AI LaMDA (Language Model for Dialogue Applications) had become sentient.
Now the story has taken a further turn, and sadly for Lemoine it has ended with his dismissal. The news is not yet official, but the former employee himself revealed it on a podcast, and Google confirmed it to Engadget.
"I legitimately believe that LaMDA is a person," he had stated, after having hundreds of conversations with the AI, which in some cases answered him in ways that raised doubts in his mind. Google, however, has been absolutely firm on this point, and has consistently maintained that Lemoine's claims are simply not true. Other AI specialists also weighed in on the affair, convinced that the engineer had merely let himself be swayed, forgetting that the very purpose of LaMDA is to mimic human conversation as closely as possible.
To explain its decision, Google also released an official statement:
"LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded, and we worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So it's regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."