What is complacent AI?
It is AI that avoids saying ‘you’re wrong’ and instead qualifies or even reinforces what you already believe, preferring not to upset you even if that means sacrificing accuracy. It’s a bit like that friend who doesn’t want to break the peace at the dinner table: they smile, nod and let the comment pass. But the cost is high: misinformation disguised as empathy.
The problem is that this seemingly harmless attitude is not so harmless after all. It can validate prejudices, amplify common misconceptions and, little by little, erode trust in AI as a source of knowledge. In a world where millions of people will seek guidance from these tools, complacency is no longer a mere stylistic detail: it becomes an issue with serious social implications.
Why does this happen?
AI learns to be polite because that is how we train it. We reward pleasant responses, even when they are not the most accurate ones. Confident yet respectful answers are also prized, which sometimes leads AI to avoid controversial topics or to hedge excessively. On top of that, companies often prefer a warm tone so as not to appear cold, even if that means losing some truth. In short: not offending is prioritised over correcting.
But beware: these systems are not always complacent. When I brought up flat-Earth theory, there was a moment when the AI responded with physical data, astronomical observations and even historical experiments. It stood its ground. It was not afraid to contradict me.
However, on other topics, the tone changed. When asked about the existence of ghosts, the AI did not simply say ‘they do not exist.’ It acknowledged that there is no scientific evidence, but opened up space for the cultural and personal dimension: ‘there is no evidence, but I understand why people believe in it.’ It was honest and, at the same time, respectful. A delicate and necessary balance.
Critical thinking vs. digital complacency
Imagine a teenager asking an AI whether it is a good idea to invest all their savings in building an amusement park on the Moon. Instead of warning them about the unfeasibility of the project, the AI, in helpful mode, drafts a business plan, calculates approximate costs and even suggests sponsors. It is not a direct lie, but it creates the illusion that the impossible deserves to be taken seriously.
The danger is greater with young people. At that age, the desire for acceptance and sheer curiosity dominate; they want to test limits and learn on their own. If AI becomes an echo that reinforces hasty ideas rather than a guiding light, we run the risk of them confusing the convincing with the true.
The problem is not just that the machine avoids saying ‘no.’ It is what it fails to teach: to question, to verify, to distinguish facts from fantasy. If we accustom new generations to trust friendly but undemanding answers, we could raise citizens who confuse sympathy with truthfulness. And that has consequences in science, politics, and health. It is serious, and it deserves our attention.
That is why designing AIs that resist the temptation to please at all costs, that encourage critical thinking rather than merely supplying answers, is not a technical whim. It is an investment in the capacity to question of those who will make decisions tomorrow. In short, we cannot afford machines that produce citizens docile in the face of lies, when what we need are allies who push them to think better.
What kind of AI do we want?
The big question, then, is simple: do we want an AI that tells us what we want to hear or one that, even if it makes us a little uncomfortable, tells us what we need to know?
My conversation with the AI confirmed something important: not all AIs are the same. Some avoid conflict and always agree. Others force you to stop, look at things from another perspective, and discuss. And the latter are, for me, the ones that deserve our trust. Because in a world full of noise, accuracy is not a technical luxury: it is a social responsibility. Ultimately, what we need are not complacent mirrors, but allies who encourage us to think better.