Picture yourself conversing with ChatGPT’s Voice Mode, which not only understands you but also replies in a way that sounds alarmingly human. It feels like something straight out of a sci-fi movie, and it is undeniably convenient, but at what cost to the way we build relationships…and to our societal norms? If we come to rely more on these interactions with AI than on human connection, where do we end up?
ChatGPT is made by OpenAI, which recently rolled out a new Voice Mode that lets the product behave in a more human manner. However, the company has also warned that this is a double-edged sword: because voice makes conversations with the AI feel much closer to talking with a person, users might start forming emotional attachments to it. OpenAI underlined this very point in the System Card for GPT-4o, warning that there is a strong possibility of users anthropomorphizing (ascribing human qualities to) the AI.
‘Anthropomorphization’ of ChatGPT
OpenAI released the technical documentation for GPT-4o, introducing the model’s new capabilities alongside a discussion of their societal implications. In particular, it addressed the problem of anthropomorphization, that is, attributing human-like qualities to non-human entities.
The tech company noted that Voice Mode’s more human-like interactions may heighten the risk of users bonding with the AI. These findings come from early testing, which included red-teaming (ethical hacking) and internal user trials.
The firm said that during these trials it witnessed users beginning to form social bonds with the AI. One tester, for example, used language suggesting a shared connection with the model, telling it: “This is our last day together.” OpenAI emphasized the need to study whether such interactions develop into stronger attachments as people use these models over longer periods.
Anthropomorphizing AI: What is the solution?
OpenAI discussed possible ways to address the anthropomorphization of ChatGPT’s Voice Mode but said that it has not arrived at a solution yet, promising only continued attention. “We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior,” the company said.
The AI firm also noted that data from a more diverse population of real-world users, with differing needs and expectations of the model, would help it define the risk area more concretely. It added that further independent academic and internal research would better enable OpenAI to mitigate the risks.
Potential impacts of long-term interaction with ChatGPT’s Voice Mode
A much broader concern, if these fears materialize, could be the effect on human relationships: people may begin to treat conversations with a chatbot as more important than talking to other humans. OpenAI explained that this could provide solace to the lonely but might also damage healthy human relationships in the process.
Influence on social norms: Over time, extended interaction between AI and humans may change our definitions of socially acceptable behavior. OpenAI noted, for instance, that users can interrupt ChatGPT at any moment, behavior that is expected of an AI but would be considered rude in a conversation between people.
A user’s bond with the AI also has broader implications, such as the potential for manipulation. Even though current models do not score highly on persuasion, OpenAI fears this may change as users develop more trust in the model.
FAQs:
What is ChatGPT’s Voice Mode?
ChatGPT’s Voice Mode lets the AI respond aloud the way a human would, with natural tone and emotion.
What makes OpenAI worry about ChatGPT Voice Mode?
As explained by OpenAI, the worry is that users could grow emotionally attached to the AI, which could in turn harm their real-world human relationships.
What does ‘anthropomorphizing’ ChatGPT mean?
Anthropomorphizing ChatGPT means unconsciously projecting human characteristics or feelings onto the AI, which can lead users to form social connections with a digital entity.
What is OpenAI doing to mitigate these anthropomorphization risks?
OpenAI says it is continuing to study the potential for emotional reliance on the AI, drawing on a more diverse population of real-world users as well as independent academic and internal research.