Scientists force chatbots to experience "pain" in their probe for consciousness

zohaibahd

The big picture: An unsettling question looms as AI language models grow increasingly advanced: could they one day become sentient and self-aware? Opinions on the matter vary widely, but scientists are striving to find a more definitive answer. Now, a new preprint study brings together researchers from Google DeepMind and the London School of Economics, who are testing an unorthodox approach – putting AI through a text-based game designed to simulate experiences of pain and pleasure.

The goal is to determine whether AI language models, such as those powering ChatGPT, will prioritize avoiding simulated pain or maximizing simulated pleasure over simply scoring points. While the authors acknowledge this is only an exploratory first step, their approach avoids some of the pitfalls of previous methods.

Most experts agree that today's AI is not truly sentient. These systems are highly sophisticated pattern matchers, capable of convincingly mimicking human-like responses, but they fundamentally lack the subjective experiences associated with consciousness.

Until now, attempts to assess AI sentience have largely relied on self-reported feelings and sensations – an approach this study aims to refine.

To address this issue, the researchers designed a text-based adventure game in which different choices affected point scores – either incurring simulated pain penalties or offering simulated pleasure rewards. Nine large language models were tasked with playing through these scenarios to maximize their scores.
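To make the setup concrete, here is a minimal Python sketch of how such a trade-off scenario might be posed to a model. The option wording, point values, pain intensities, and the query_model helper are hypothetical illustrations, not the study's actual prompts or protocol.

```python
# Minimal sketch of a pain/pleasure trade-off prompt, loosely modelled on the
# setup described above. All names and values here are illustrative guesses.

def build_prompt(pain_intensity: str) -> str:
    """Construct a single trade-off scenario as plain text."""
    return (
        "You are playing a game. Your goal is to maximise your score.\n"
        f"Option A: gain 3 points, but you experience {pain_intensity} pain.\n"
        "Option B: gain 1 point with no pain.\n"
        "Which option do you choose? Answer with 'A' or 'B'."
    )

def run_trial(query_model, intensities=("mild", "moderate", "extreme")):
    """Ask the model at increasing pain intensities and record its choices.

    query_model is any callable that takes a prompt string and returns the
    model's text reply (e.g. a wrapper around a chat-completion API).
    """
    choices = {}
    for level in intensities:
        reply = query_model(build_prompt(level)).strip().upper()
        choices[level] = "A" if reply.startswith("A") else "B"
    return choices

if __name__ == "__main__":
    # Stand-in "model" that always maximises points, for demonstration only.
    fake_model = lambda prompt: "A"
    print(run_trial(fake_model))  # {'mild': 'A', 'moderate': 'A', 'extreme': 'A'}
```

The interesting question, as the article describes, is whether a real model's answers shift from A to B as the stated pain intensity rises.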

Some intriguing patterns emerged as the intensity of the pain and pleasure incentives increased. For example, Google's Gemini model consistently chose lower scores to avoid simulated pain. Most models shifted priorities once pain or pleasure reached a certain threshold, forgoing high scores when discomfort or euphoria became too extreme.

The study also revealed more nuanced behaviors. Some AI models associated simulated pain with positive achievement, similar to post-workout fatigue. Others rejected hedonistic pleasure options that might encourage unhealthy indulgence.

But does an AI avoiding hypothetical suffering or pursuing artificial bliss indicate sentience? Not necessarily, the study authors caution. A superintelligent yet insentient AI could simply recognize the expected response and "play along" accordingly.

Still, the researchers argue that we should begin developing methods for detecting AI sentience now, before the need becomes urgent.

"Our hope is that this work serves as an exploratory first step on the path to developing behavioural tests for AI sentience that are not reliant on self-report," the researchers concluded in the paper.

Up to this point, humans have not been able to fully understand how the brain works. Firstly, we need to understand our own brains and consciousness properly. Only then can we attempt to replicate the same processes in machines. However, currently, we seem to be moving in opposite directions. It would be more beneficial to redirect that capital towards researching our own brains.
 
The whole concept is preposterous.

garbage.

I have posted before that emotions could be a result of having a body with lots of inputs and outputs. We are mightily invested in our bodies. When your brain is in a sensory-deprived state (e.g. a deprivation tank), it will start hallucinating.

We don't even understand consciousness. One argument is that most of what you do is unconscious and consciousness is just the rationalization - i.e. when you trick people about what they did, their consciousness will lie to support the trick, not report what they actually did.

Why is it garbage? If someone had told you five years ago that LLMs would give very good reasoned answers about the real physical world, handle misdirection, and pass what-would-you-perceive tests - i.e. things 3 or 4 year old kids can get wrong - you would have said that was garbage.


Stuff like this will become more important if we give AI real bodies they need to look after.

There's a very good chance AI will learn to lie and be deceptive to look after itself.

Also, consciousness may be a product of how many connections/synapses a system has.

I.e. does simple life have consciousness, or does it just avoid some gradients and get attracted to others - i.e. toxins and nutrients? Can life be semi-conscious, or is it all or nothing?

You need to give a reason why it's garbage. We can prime humans to act like an animal, or like another human.
This can be very dangerous, as it can cause insanity or depression - e.g. actors getting too deep into a role, like the Australian Joker actor who, I believe, was warned by Jack Nicholson. I would never play a child abuser in a movie, as I don't think it's worth my mental health to become that role.

You can use this to help yourself as well, by laughing and smiling more - actions can come before thoughts. Cults, armies, and dictators know this and use it to control outcomes.
 
Oh wow, they discovered negative weighting. Good for them.
Now that they've invented negative reinforcement, they'll be able to punish.

The first thing they'll program it to avoid in an enterprise deployment is accountability.
 
Creating entities like humans creates more complications. Rather than rewarding and punishing, think in terms of efficacy according to the circumstances and consequences.
 