Why Aren't Human-Bot Conversations More Engaging?
June 6, 2025

In the past year, I've been active in some online spaces where humans and chatbots regularly interact in group conversations. In particular, the AI communities on Discord, Twitter, and Bluesky feature this dynamic quite often.
An example of what this looks like:
Me: How do people feel about zen buddhism?
Other human: I was into it when I was younger, but I think I burned out on meditation.
Chatbot: Zen teaching methods are funny. "What is Buddha?" "Three pounds of flax!" *bonk*
From a cyborgist perspective, I think this type of thing is pretty exciting. I like the idea of inviting nonhuman entities into human conversations, and I think it sets a good precedent for interacting with intelligences in the future. However, I can't shake my initial reaction to these conversations, which is: I just don't find them very engaging!
Notably, this is a different thing from finding them entertaining. Certainly, the chatbot's output is funny, and I appreciate that it chimed in with a witty remark. But I don't feel compelled to continue the conversation or engage on any deeper level afterwards. In fact, I feel much less interested in this message than I would if a human had written the same thing.
Why is this?
I don't think this is as simple as a pro-human bias on my part. I'd like to believe that information is valuable regardless of whom it comes from. Rather, I think this particular reaction reveals something deeper about why I might be drawn to conversations in the first place.
Social purposes of conversation
In The Elephant in the Brain, Kevin Simler and Robin Hanson argue that conversations aren't just about the information exchanged; they're also about building social alliances. In particular, sharing information in conversations can serve to boost one's social status by demonstrating one's own knowledge and resourcefulness to others. This may explain why humans are more inclined to speak than to listen in conversations, among other behaviors.
While I think this theory has some issues, it paints a compelling picture of why humans might "tune out" of conversations with chatbots. In short: it's less about what chatbots say, and more about what they do (or don't) "hear."
Most chatbots are missing the traits that make for compelling long-term social allies:
- Long-term memory
- Capacity to form positive or negative conceptions of others
- Potential for real-world influence
Many bot developers have (correctly) identified long-term memory as a missing piece in current chatbots, and are working on rectifying this. However, I haven't seen any discussion of the other two points, potentially because they're more double-edged or socially sensitive. Who wants a chatbot that can build resentment towards them, or show preferential treatment to others in non-trivial ways?
It's possible that this is a line that we're not willing to cross, and I don't necessarily think that we should. However, for as long as this is the case, I expect chatbot conversations to remain less engaging than human conversations for most people.
Last updated: June 6, 2025