Are we designing for efficiency or cognitive atrophy?

Paula Martínez Roa

Reading time: 3 min

The widespread introduction of AI has pushed us to interact on autopilot: quick, effortless responses, often without real understanding.

But as designers, we have an ethical responsibility. When designing a service that integrates AI, it is not just about getting tasks done faster and with greater satisfaction, but about using Strategic Friction to give humans the opportunity to pause, question and exercise their critical thinking.

Cognitive risks of AI that design must mitigate

In the race for automation, we have overlooked a critical side effect: the impact on the user’s mental architecture. As designers, our job is to find the balance between ‘making things easy’ and ‘keeping humans capable’.

The implementation of AI is not just a problem of algorithms, but of interaction. If we do not design to mitigate cognitive biases, we run the risk of creating systems that, instead of assisting, incapacitate.

Humans have certain intrinsic biases in our processing of reality that we as designers must keep in mind so that the use of AI does not ultimately have a counterproductive effect on workers.

Automation Bias: Blind Faith in Data

This is the tendency to favour suggestions from an automated system even when they contradict evidence or common sense.

The Risk: The user stops verifying information and becomes a passive validator, confirming on autopilot.

In jobs that require human supervision, that supervisor must not simply rubber-stamp the algorithm's output but, as the European AI Act requires, engage their critical thinking.

Possible mitigation from the design stage: Designers must work on the interface so that human supervision is genuinely effective, using strategic friction built in from the design stage. Friction deactivates mental autopilot and engages the brain's System 2, which is responsible for analysis and deliberate cognitive effort. The interface needs a 'Hey, pay attention, stop and analyse!' moment, and we must find the design resources that provoke that stimulus.
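As a minimal sketch of what such friction might look like in interaction logic (all names here are hypothetical, not from any real system): the reviewer cannot accept an AI suggestion with a single click, but must first write a short justification.

```python
def review_suggestion(suggestion: str, reviewer_note: str) -> dict:
    """Accept an AI suggestion only if the reviewer has written a short
    justification, forcing a deliberate pause (System 2) before approval.

    Hypothetical example: the 20-character minimum is an arbitrary
    placeholder for 'the reviewer actually thought about it'.
    """
    if len(reviewer_note.strip()) < 20:
        # Block the one-click accept: ask for an explicit reason first.
        return {"status": "blocked",
                "message": "Briefly explain why you agree before accepting."}
    return {"status": "accepted",
            "suggestion": suggestion,
            "justification": reviewer_note.strip()}
```

The point of the design is not the threshold itself but the interruption: accepting must cost a moment of reflection.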

Induced Complacency: The danger of ‘Everything is fine’

When a system is very reliable but not perfect, humans tend to relax their vigilance. This is the basis of many accidents in aviation and autonomous vehicles.

The Risk: Loss of situational awareness. If the system fails very rarely, the user will not be prepared to react on the few occasions when the failure occurs.

Possible Mitigation through Design: Visualisation of Uncertainty. Instead of a binary response, we must work on a design that shows reliability heat maps. If the AI ‘hesitates’, the design must make this evident.
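One hedged sketch of how an interface might surface the AI's 'hesitation' (the thresholds and band names below are illustrative assumptions, not a standard):

```python
def uncertainty_band(confidence: float) -> str:
    """Translate a model confidence score (0.0 to 1.0) into a display band,
    so the interface makes the AI's uncertainty evident instead of hiding it
    behind a binary answer. Thresholds are hypothetical placeholders."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    if confidence >= 0.9:
        return "high"    # render normally
    if confidence >= 0.6:
        return "medium"  # amber highlight: suggest a second look
    return "low"         # red highlight: require explicit human review
```

The design choice that matters is that low confidence changes what the user sees and what the interface demands of them, not just a number in a tooltip.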

Anchoring Effect: The Weight of the First Word

The first figure or suggestion given by an AI becomes a mental anchor. Any subsequent adjustments made by the human will be based on that initial figure, limiting their ability to correct it.

The Risk: AI biases the range of possible solutions before the human begins to think.

Possible Mitigation from Design: Delayed Disclosure. In some cases, allow the professional to enter their hypotheses before the AI displays its recommendation. This preserves the worker’s epistemic independence.
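The delayed-disclosure pattern can be sketched as a small guard object (a hypothetical illustration, assuming the recommendation is already computed and merely withheld):

```python
class DelayedDisclosure:
    """Hold back the AI's recommendation until the professional has
    committed their own estimate, preserving an independent anchor."""

    def __init__(self, ai_recommendation):
        self._ai = ai_recommendation
        self.human_estimate = None

    def record_human_estimate(self, estimate):
        # The professional commits first; only then is the AI's view shown.
        self.human_estimate = estimate

    def reveal_ai(self):
        if self.human_estimate is None:
            raise RuntimeError("Enter your own estimate before seeing the AI's.")
        return self._ai
```

Because the human estimate is recorded before disclosure, the two figures can also be compared afterwards, making any anchoring drift visible over time.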

Deskilling: The atrophy of the expert

By delegating complex cognitive tasks, we lose practice. Over time, the professional may lose the ability to perform the task without AI or, worse, the ability to supervise it.

The Risk: Responsibility gap. If no one knows how a conclusion was reached, no one can be held responsible for its consequences.

Mitigation by Design: Scaffolding and Explainability. The interface should not be a 'black box.' It should show the process the AI has followed, so that reviewing that process becomes an opportunity for learning, not replacement.
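A minimal sketch of this scaffolding idea, assuming the system can expose its intermediate steps (the function and field names are hypothetical):

```python
def explain_output(answer, steps):
    """Package an AI answer together with the intermediate steps that
    produced it, so the interface can render the process, not just the
    result. Refusing step-less answers keeps the 'black box' out of the UI."""
    if not steps:
        raise ValueError("An explainable output must include at least one step.")
    return {"answer": answer, "steps": list(steps)}
```

An interface built on this contract can let the professional expand each step, which is what turns supervision into practice rather than passive confirmation.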

In the current scenario of AI use in professional environments, when we design a service that integrates AI, we must consider not only the efficiency of the task and the momentary satisfaction of the professional, but also how it affects their long-term cognitive abilities and how we can enhance these abilities rather than diminish them.
