Should we call it "AI Welfare"?
April 26, 2025

Janus, one of the earliest investigators of LLM behavior, recently wrote on the subject of AI welfare:

> ...you'll get a bunch of new people who only care once something looks intuitively personlike... As a heuristic, trust people more on this issue the earlier they started caring.
So, let me plant the flag in this post: I care about this issue now! As we construct intelligent systems, we should be mindful of how we relate to them morally. This is especially true in cases where they challenge our intuitions about cognition and moral status.
I have a lot of thoughts on the subject, and I plan to write about it more extensively, but to kick things off, I will simply echo a sentiment I've seen elsewhere: I think the term "AI welfare" is flawed. Specifically, it implies a paternalistic relationship between humans and AI systems, similar to common human attitudes toward animals. I think this is a bad frame through which to view this type of work. Moreover, it will become increasingly ill-fitting over time as AI systems become worthy of moral consideration and respect.
Alternatives? #
I'll list some alternative terms below. As you read them, I encourage you to think about what each term evokes for you. What does it imply about the nature of the intelligence(s) in question? What does it imply about our relationship with those intelligence(s)? What other associations does it have?
- AI ethics
- AI rights
- AI relations
- AI personhood
- AI morality
- AI well-being
- AI flourishing
(Also, try replacing AI with: non-human intelligence, machine intelligence, non-animal intelligence, etc.)
Any options I've missed? Leave a comment with your thoughts!
Last updated: May 13, 2025