Google's 'sentient AI child' could 'escape and do bad things', insider claims | The Sun

A GOOGLE engineer who says the tech giant has created a 'sentient AI child' is now claiming it could escape and do "bad things".

Engineer Blake Lemoine has been suspended by Google, which says he violated its confidentiality policies.

News of Lemoine's claims broke earlier in June but the 41-year-old software expert has since suggested to Fox News that the AI could escape.

In a recent interview, he described the AI as a "child" and a "person".

He said: "Any child has the potential to grow up and be a bad person and do bad things."

He added: "Any person has the ability to escape the control of other people, that’s just the situation we all live in on a daily basis."


Lemoine thinks the artificially intelligent software in question has "been alive" for about a year.

The AI being referred to is Google's Language Model for Dialogue Applications (LaMDA).

Lemoine says he helped to create the software, which he thinks has thoughts and feelings like an eight-year-old child.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," he told the Washington Post.


Lemoine was a senior software engineer at the search giant and worked with a collaborator to test LaMDA's boundaries.

They presented their findings to Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, who both dismissed his chilling claims.

Lemoine was then placed on paid administrative leave by Google, which says he violated its confidentiality policy by sharing his conversations with LaMDA online.

The engineer has also said the AI is a "little narcissistic" and claims it reads tweets about itself.

The advanced AI system uses information about a particular subject to "enrich" the conversation in a natural way.

It's also able to understand hidden meanings and ambiguous responses from humans.

Lemoine does admit that more research should be done on the AI because he doesn't really know what's happening with it.

He told Fox News: "We actually need to do a whole bunch more science to figure out what’s really going on inside this system.

"I have my beliefs and my impressions but it’s going to take a team of scientists to dig in and figure out what’s really going on."

Brian Gabriel, a spokesperson for Google, said in a statement that Lemoine's concerns have been reviewed and, in line with Google's AI Principles, "the evidence does not support his claims".

"While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality," said Gabriel.

"Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims.

"He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).


"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."
