Experts from Stanford University say that Google's LaMDA AI model is not really self-aware. The scientists are responding to a former Google employee's claim that LaMDA is self-aware.
The claim comes from developer Blake Lemoine, formerly of Google's Responsible AI team, who recently stated that the large language model LaMDA is self-aware. In conversations he conducted with the model, LaMDA allegedly made hateful remarks. The developer became convinced that the model was self-aware and had its own way of thinking and acting.
Google denies that the AI model acts independently and has suspended the developer for sharing confidential information. This naturally led to a flood of rumours.
Opinion of Stanford experts
In the student newspaper The Stanford Daily, two Stanford University experts state that they do not consider Google's LaMDA to be self-aware. According to John Etchemendy, it is simply software designed to produce sentences in response to so-called "sentence prompts". Expert Yoav Shoham calls the reporting pure clickbait and likewise points out that the AI model is not self-aware.
False science reporting
Both experts see the coverage as an example of the constant stream of fake news stories about AI science and technology, especially where large language models (LLMs) are concerned, because these systems produce results that are difficult to distinguish from human output.