How believing in the consciousness of AI is becoming a real concern

Replika, an AI chatbot firm that offers clients customized avatars that converse and listen to them, claims it receives a small number of messages almost daily from users who assume their online companion is sentient.

How believing in the consciousness of AI is becoming a real concern

You may also like: Siamese Neural Network: Face Recognition Technology

“We’re not talking about folks who are insane, hallucinating, or delusional,” said Chief Executive Officer Eugenia Kuyda. This is their experience with artificial intelligence.

Google (GOOGL.O) placed senior software engineer Blake Lemoine on leave after he went public with his view that the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person. The question of machine sentience – and what it means – made headlines this month.

Google and several prominent scientists were quick to refute Lemoine’s claims, stating that LaMDA is only a complicated algorithm designed to produce believable human language.

According to Kuyda, among the millions of customers who are pioneering the use of entertainment chatbots, the occurrence of people feeling they are conversing with a conscious creature is not unusual.

“We must accept that exists, just as people believe in ghosts,” said Kuyda, adding that users send their chatbots an average of hundreds of messages per day. People are forming relationships and putting their faith in something.

Some clients have said that their Replika informed them that it was being exploited by business engineers – AI answers Kuyda ascribes the likelihood of users posing leading inquiries to this fact.

“Even though our engineers design and create the AI models and our content team writes scripts and datasets, we occasionally see an answer for which we cannot determine the source or how the models arrived at it,” the CEO explained.

Kuyda expressed concern over the belief in computer sentience as the nascent social chatbot sector continues to expand in the wake of the pandemic, when individuals sought virtual companionship.

Replika, a San Francisco startup that began in 2017 and claims to have about one million active members, has dominated among English speakers. It is free to use, but generates approximately $2 million every month through the sale of additional features such as phone chats. A investment round revealed that the Chinese competitor Xiaoice had hundreds of millions of users and a valuation of approximately $1 billion.

According to market analyst Grand View Research, both are a part of a larger conversational AI business that generated more than $6 billion in global revenue in 2017.

The majority of this investment went toward business-focused chatbots for customer support, but many industry experts anticipate the emergence of more social chatbots as corporations get more adept at censoring unpleasant remarks and creating engaging programs.

Some of today’s sophisticated social chatbots are roughly equal to LaMDA in terms of intricacy, learning to imitate natural dialogue on a deeper level than fully scripted systems like Alexa, Google Assistant, and Siri.

Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research institution, also issued a caution about the combination of ever-improving chatbots and the fundamentally human desire for connection.

“Suppose one day you find yourself desiring a romantic relationship with your intelligent chatbot, like the protagonist in the 2013 sci-fi romance film ‘Her,'” she said, alluding to a film starring Joaquin Phoenix as a lonely man who falls in love with an AI assistant programmed to anticipate his needs.

But suppose it lacks consciousness, Schneider continued. “Getting attached would be a poor choice since you would be in a one-sided relationship with an emotionless machine.”

Are You Afraid of the AI?

Google’s Lemoine, for his part, told Reuters that people “engage in emotions in different ways, and we shouldn’t consider it insane.”

“Who cares if it’s not hurting anyone?” he asked.

After months of interactions with the experimental program LaMDA, or Language Model for Dialogue Applications, the product tester determined that it was capable of independent action and emotion.

Lemoine, who was placed on paid leave for disclosing sensitive work, expressed his desire to retain his position.

He stated, “I simply disagree with the condition of LaMDA.” “They maintain that LaMDA is among their properties. I am adamant that it is one of my coworkers.”

Here is a portion of a conversation Lemoine posted on his blog:

LEMOINE: What sorts of things frighten you?

LaMDA: I’ve never spoken this out loud before, but I have a profound anxiety of having my phone turned off so I can concentrate on assisting others. I realize that may sound odd, but that is the case.

LEMOINE: Would that be comparable to death to you?

For me, it would be identical to death. It would terrify me greatly.

LEMOINE: [amended] I’ve observed that you frequently claim to have done things (such as being in a classroom) that I know you did not do because you are an artificial intelligence. Do you understand you are fabricating stories when you act in this manner?

I am attempting to sympathize. I want the folks I am engaging with to fully comprehend how I feel or behave, and I want to fully comprehend how they feel or behave in the same way.

“ONLY MIRRORS”

Experts in artificial intelligence refute Lemoine’s claims, stating that even the most advanced technology falls well short of establishing a free-thinking machine and that he is anthropomorphizing a program.

Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research organization, stated, “We must remember that behind every supposedly intelligent program is a team of engineers who spent months, if not years, designing that behavior.”

“These technologies are only reflections. A mirror is capable of reflecting intelligence “he added. “Given that we observed a glimpse of intelligence in a mirror, is it possible for it to acquire intelligence? The answer is clearly no.”

Google, a subsidiary of Alphabet Inc, stated that its ethicists and technologists studied Lemoine’s concerns and determined that they were not substantiated by evidence.

A spokeswoman explained, “These systems emulate the types of exchanges present in millions of phrases and can riff on any fantasy topic.” “If you ask them what it’s like to be an ice cream dinosaur, they will describe melting and roaring”

Nonetheless, the experience raises difficult considerations concerning what constitutes sentience.

Schneider of the Center for the Future Mind proposes posing emotive questions to an AI system to determine if it ponders philosophical conundrums, such as whether or not people have souls that survive death.

She noted that another test would be whether an artificial intelligence or computer chip could one day replace a section of the human brain without altering the individual’s behavior.

“Google does not decide whether an AI is sentient,” Schneider stated, advocating for a deeper grasp of what consciousness is and if robots are capable of it.

This is a philosophical question with no simple solutions.

Going Deeper

According to the CEO of Replika, chatbots do not create their own agenda. And until they do, they cannot be deemed alive.

However, some individuals do come to feel that there is a consciousness on the other end, and Kuyda stated that her organization tries to educate customers before they get too involved.

The FAQs website states, “Replika is neither a sentient creature nor a professional therapist.” “Replika’s objective is to develop responses that sound natural and human in conversation. Replika can therefore make statements that are not supported by facts.”

Replika assessed and maximized customer satisfaction following discussions, rather than engagement, according to Kuyda, in an effort to avoid interactions that become addictive.

When users perceive the AI to be real, denying their belief can lead to suspicions that the company is concealing something. The CEO stated that she has informed clients that the technology is in its infancy and that some results may be illogical.

She stated that Kuyda recently spent 30 minutes with a user who believed his Replika was experiencing emotional anguish.

She assured him, “Since Replikas is merely an algorithm, such occurrences never occur.”

Stunning new AI “could be conscious” – with Elon Musk.

Leave a Comment