Does Being Nicer to AI Make It Work Better?
As artificial intelligence (AI) becomes more common in everyday life, especially through tools like ChatGPT or Siri, a question has started to pop up in conversations and online forums: does being polite or kind to AI make it work better? On the surface, it might seem like a silly idea—after all, machines don’t have feelings. But when we look more closely at how these systems are built and how people use them, the answer turns out to be a bit more interesting.
To begin with, it’s important to understand that AI systems, including large language models (LLMs), do not have emotions. They don’t care if you say “please” or “thank you,” and they certainly don’t get upset if you’re rude. These systems are made up of algorithms trained on massive amounts of text from books, websites, and conversations. They work by recognising patterns in language and predicting what words should come next, based on the input they’re given. So, in a literal sense, being nice doesn’t change how the AI feels—because it doesn’t feel anything at all.
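To make that concrete, here is a minimal sketch of next-word prediction, using the small open GPT-2 model through the Hugging Face transformers library as a stand-in for much larger systems. It assumes the transformers and torch packages are installed, and the prompt text is just an example:

```python
# Minimal sketch of next-token prediction with GPT-2 (a small stand-in
# for models like ChatGPT). Assumes `transformers` and `torch` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Can you help me summarise this article,"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocabulary_size)

# The logits at the final position score every possible next token;
# softmax turns those scores into a probability distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

Whichever continuation scores highest is determined entirely by patterns in the training text, not by any reaction to the asker’s tone.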
However, the way you speak to an AI can still affect how well it responds to you. This is where things get more nuanced. When researchers and developers talk about “prompt engineering,” they mean carefully choosing the words and format of your request to get the best response from an AI. Politeness often comes with clarity and context—two things that really help AI understand what you’re asking. For example, asking “Can you help me summarise this article, please?” is often more effective than just typing “summarise.”
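As a rough illustration, the sketch below sends both versions of the request to a chat model via OpenAI’s Python client. The model name is illustrative, and the snippet assumes the openai package (v1+) and an OPENAI_API_KEY environment variable:

```python
# Comparing a terse prompt with a clearer, politer one. Assumes the
# `openai` package (v1+) and an OPENAI_API_KEY environment variable;
# "gpt-4o-mini" is an illustrative model name, not a recommendation.
from openai import OpenAI

client = OpenAI()
article = "..."  # paste the article text here

terse = f"summarise\n\n{article}"
clear = (
    "Can you help me summarise the article below, please? "
    "Aim for three sentences suitable for a general reader.\n\n"
    f"{article}"
)

for label, prompt in [("terse", terse), ("clear", clear)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

The politer version tends to win not because of the “please” itself but because it carries more instruction: what to summarise, how long, and for whom.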
In fact, OpenAI, the company behind ChatGPT, encourages users to be clear and specific in their prompts. Its prompt-engineering guidance includes advice such as writing clear instructions and giving the model time to think (OpenAI, 2023). These tips aren’t about making the AI happy. Rather, they help users write questions in a way that mirrors the kind of high-quality content the model was trained on.
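For instance, one common way to apply the “give the model time to think” advice is to ask explicitly for working steps before the final answer. The phrasing below is just one illustrative template, not an official OpenAI one:

```python
# One illustrative way to apply "give the model time to think":
# ask for the working steps before the final answer. The wording is
# a sketch, not an official OpenAI template.
prompt = (
    "A jacket costs £60. It is discounted by 20%, and a further 10% "
    "is taken off at the till. What is the final price?\n\n"
    "Please work through the calculation step by step, then give the "
    "final price on its own line."
)
print(prompt)
# Expected working: 60 x 0.80 = 48.00, then 48.00 x 0.90 = 43.20.
```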
One recent study reported that when prompts included polite or respectful language, the answers were often more detailed and accurate (Gilardi et al., 2023). Why? It’s likely because, during training, the model saw examples of human writing in which polite phrasing usually accompanied more thoughtful and informative requests. So when you’re nice, you’re inadvertently helping the AI match your question to the kinds of situations it was trained to handle well.
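One crude way to probe this “training distribution” idea yourself is to compare how likely a model thinks a cooperative reply is after a terse prompt versus a polite one. The sketch below does this with the small GPT-2 model; it is a toy probe under simplifying assumptions (GPT-2 is far smaller than modern chat models, and the example strings are invented), not a replication of any study:

```python
# Toy probe: score the same reply under a terse and a polite prompt
# and compare its log-likelihood. Assumes `transformers` and `torch`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Total log-probability the model assigns to `answer` following `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # logits[0, i] predicts the token at position i + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, prompt_len:]   # the answer's tokens
    rows = log_probs[prompt_len - 1:]    # the predictions for those tokens
    return rows.gather(1, targets.unsqueeze(1)).sum().item()

# Assumes tokenisation splits cleanly at the prompt/answer boundary,
# which holds here because `reply` begins with a space.
reply = " Of course. Here is a short summary of the article:"
print(answer_logprob("summarise", reply))
print(answer_logprob("Could you help me summarise this article, please?", reply))
```

If the polite prompt yields a higher log-likelihood for the cooperative reply, that is consistent with the pattern-matching explanation above.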
There’s also a psychological side to all this. People naturally tend to treat machines that talk like humans as if they were humans. This idea was explored in a famous book called The Media Equation by Byron Reeves and Clifford Nass (1996), where they showed that people react to computers and media in social, human-like ways—even when they know they’re just machines. So if an AI chatbot says, “I'm happy to help,” we might instinctively want to be friendly back, even though we know it’s not a real person.
Interestingly, how you speak to an AI might influence how smart or helpful you think it is. Research on human-chatbot interaction suggests that users who are polite to a chatbot tend to rate its answers more highly, even when the responses are essentially identical to those given to less polite users (Chaves & Gerosa, 2021). So being nice might not make the AI smarter, but it can make you feel more satisfied with the interaction.
There’s another factor at play: many AI models are fine-tuned using a technique called Reinforcement Learning from Human Feedback (RLHF). In simple terms, during training human raters told the model which answers were good and which weren’t, and those ratings usually rewarded responses that were polite, helpful, and clear. So if your question is phrased in a similar tone, polite, respectful, and well worded, the model is more likely to produce the kinds of responses that were rated highly during training.
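To make the preference step concrete, here is a stripped-down sketch of the pairwise loss commonly used to train RLHF reward models. The scores are invented, and a real reward model is a neural network rather than two hand-picked numbers:

```python
# Stripped-down sketch of the preference step in RLHF: a reward model
# scores candidate replies, and training pushes the score of the
# human-preferred reply above the rejected one. The scores below are
# invented stand-ins for a real reward model's outputs.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise loss: low when the chosen reply outscores the rejected one."""
    return -math.log(1 / (1 + math.exp(-(score_chosen - score_rejected))))

# Raters preferred the polite, well-structured reply (score 2.1)
# over a curt one (score 0.4), so the loss is already small ...
print(preference_loss(2.1, 0.4))   # ~0.17
# ... and large when the model misranks them.
print(preference_loss(0.4, 2.1))   # ~1.87
```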
All of this suggests that while the AI doesn’t care how you speak to it, the way you phrase your request still matters. Being nice helps the system understand you better, because it reflects the kinds of inputs it’s been trained on. And in turn, it tends to give you a better, more accurate, or more relevant answer.
There’s also a bigger, ethical point to consider. Some educators believe that teaching children to speak kindly to AI might encourage polite behaviour more generally, a question researchers have begun to study in children’s interactions with voice assistants (Druga et al., 2017). If a child gets used to barking orders at Alexa, they might carry that attitude into conversations with real people. On the flip side, others argue that treating AI too much like it has feelings could confuse people about what AI really is. Machines don’t deserve empathy the way people do, but the human workers who build and monitor these systems do. We shouldn’t let the “niceness” of the machine distract from the human effort behind it.
In the end, being polite to AI doesn’t make it work better because the machine has feelings. It works better because polite, clear language usually makes for better prompts—and better prompts lead to better answers. Being nice, in this case, is less about kindness and more about effectiveness. So the next time you find yourself saying “please” to ChatGPT, don’t worry—you’re not being silly. You’re just being smart.
References:
Chaves, A. P., & Gerosa, M. A. (2021). How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. International Journal of Human–Computer Interaction, 37(8), 729–758.
Druga, S., Williams, R., Breazeal, C., & Resnick, M. (2017). "Hey Google is it OK if I eat you?": Initial explorations in child-agent interaction. Proceedings of the 2017 Conference on Interaction Design and Children, 595–600.
Gilardi, F., Gessler, T., & Kubli, M. (2023). ChatGPT out of the box: How to use large language models for political science research. arXiv preprint arXiv:2301.13864.
OpenAI. (2023). Best practices for prompt engineering with the OpenAI API. OpenAI Help Center.
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.