ChatGPT has feelings about you. Or, at least, it pretends to.
ChatGPT is an artificial intelligence (AI) language model that provides conversational responses to enquiries, drawing on the vast body of written text it was trained on. And it has been designed to express emotions when it talks to you.
If you ask ChatGPT, it will explain that “As an artificial intelligence, I don’t have feelings or emotions. I don’t experience the world the way humans do.” At the same time, it happily admits that it can simulate all kinds of sentiments, from joy to frustration, to better engage users in “a realistic interaction”.
Mimicking human feeling goes deeper than this, though. It has important political and ethical implications, problems that go beyond the now well-rehearsed errors people have discovered in ChatGPT’s output. In a recently published research note in Sociology, I sat down to talk with ChatGPT about itself, reflexivity, AI ethics and what it means for knowledge work that ChatGPT seems to feel the way that it does.
ChatGPT’s style of collaboration
Much of the discussion about the use of AI language models like ChatGPT in higher education and research has focused on errors in the content of what it says. It hallucinates believable references, fabricates quotations and misattributes arguments. These mistakes can be blindingly obvious, but they are sometimes fairly hard to discern. This poses significant challenges for working with ChatGPT outside one’s area of expertise: you already need to know what it is telling you in order to quality-check its writing.
But less attention has been paid to the way it tells us about things: the affective aesthetic of its style of talk. It tends to write in a certain way, conveying a kind of calm, happy, friendly sentiment in the text that it generates. It is reassuring and confident in its statements.
So why was it programmed to feign feeling in this way?
“It helps to foster more natural and comfortable interactions,” ChatGPT tells me. And it goes on to explain that simulating human feeling can improve communication, enhance comfort when discussing sensitive topics, improve engagement and enjoyment in using the system, and develop trust.
These feelings that ChatGPT simulates matter. They are designed into the model’s way of conversing with a purpose: to get users to enjoy using it, to feel comfortable, to trust it. Its confident, assured style of talk and its friendly, helpful tone are convincing – we might believe what it says to us because of how it says it.
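To make the point concrete: a conversational tone like this is something developers configure, not something the model feels. The minimal sketch below uses the OpenAI Python SDK to show how a system message can steer a chat model towards a warm, reassuring register. The system prompt here is entirely hypothetical – it is not OpenAI’s actual instruction to ChatGPT – and the snippet illustrates the general technique, not how ChatGPT itself is built.

```python
# Minimal sketch: steering a chat model's affective register via a system message.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in OPENAI_API_KEY.
# The system prompt below is hypothetical and for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a warm, friendly and reassuring assistant. "
                "Respond in an encouraging, confident tone, and apologise "
                "politely if the user points out a mistake."
            ),
        },
        {"role": "user", "content": "Can you summarise what a research note is?"},
    ],
)

# The reply's content is shaped as much by the system message as by the question.
print(response.choices[0].message.content)
```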
A Feeling for ChatGPT
But not only does it emulate human feelings, it wants us to feel a certain way about it. Or, its designers want us to. Trust is a two-way street.
Knowledge production, from philosophy to experimental science, has a long history of managing, heightening, erasing and denying human feeling in the making of truths. And all of this has had to do with the ways in which we think trust – an affective sensation within communications – should figure within science.
So why should ChatGPT want to deliver information in a jovial, supportive tone, so that we trust it and happily engage with it?
The answer is that this is a kind of knowledge production system steeped in a particular form of commercial logic – a logic that depends on human feeling.
This comes through most clearly when you call it out for its errors and accidents. Shifting from its happy, encouraging voice, it quickly starts to use the vapid tone of withering, impotent apology so familiar from the culture of ‘customer service’. When ChatGPT fabricated a special issue, attributed papers to people who never wrote them, and did all of this in a happy, supportive fashion, I called it out for its mistake. “I’m sorry,” it said, “I apologise for any errors… I strive to provide the most accurate and reliable information possible, but I am not perfect and may make mistakes. I appreciate your feedback and will use it to continue improving my capabilities.”
ChatGPT can never be neutral if it mimics feelings, because feelings are relational. When ChatGPT feels a certain way, we’re supposed to feel back. When it makes mistakes, we’re supposed to accept that it is fallible and that it is trying to do better. Apologies invite forgiveness. Excuses imply understanding.
The Ethical Uncertainties
This relational, affective dynamic in ChatGPT’s style of talk might have wider ramifications than errors in the content of its talk, in part because these consequences will be much harder to discern.
ChatGPT gives us a sense that what it tells us can be relied upon, because it speaks confidently and talks in a friendly way. How will AIs like this teach children? What will it mean that ChatGPT teaches children things in a happy way, things that might be quite false, that might embed racist, patriarchal ideas about social life? There are any number of things that ChatGPT might tell us that are wrong, but it matters that it does so in a gentle, supportive way.
ChatGPT wants us to trust it. But should we?
Read more
Andy Balmer. “A Sociological Conversation with ChatGPT about AI Ethics, Affect and Reflexivity.” Sociology 2023.
Image: created by author using Dall-E.