Technology is part of teenagers’ everyday lives. They use it constantly, consume it, and interact with it naturally. However, understanding how it is built, who designs it, and what human decisions lie behind algorithms remains a challenge, especially with technologies such as artificial intelligence (AI), which can amplify social impacts at scale.
In this context, and within the framework of initiatives promoted by Fundación Telefónica in collaboration with the 11 de Febrero Association, we visited the school where we grew up in Segovia to share our professional experience with 4th-year secondary school students, on the occasion of the International Day of Women and Girls in Science. Two different profiles, two different paths, and one shared idea: technology needs both technical knowledge and a humanistic perspective in order to generate real, positive impact.
Two different paths, one shared purpose
Paula holds a PhD in telecommunications engineering and works in AI research in Telefónica’s Innovation team. Her career is linked to the most technical side of technology: data, algorithms that learn from the world around them, and the evaluation of their results in practical applications. From this perspective, she shared with the students how AI systems work, what bias is, and why algorithms are not neutral, but rather a reflection of the data — and human decisions — used to train them.
Beatriz is a psychologist and expert in research with people. Her work at Telefónica’s Experience Design Lab focuses on understanding behaviours, needs, and contexts in order to design products and services that truly work for people. From a more humanistic approach, she explained how observing, listening to, and understanding users is key to creating usable, inclusive, and responsible technology, and how there are many ways to work in technology that do not necessarily involve knowing how to code.
Two perspectives that do not compete with each other, but rather complement one another.
Understanding artificial intelligence beyond the myth
One of the main objectives of the meeting was to demystify artificial intelligence. We explained that AI is not magic, nor something that “decides on its own”, but code written by people that learns from the data we give it. And that, therefore, it can reproduce — and even amplify — prejudices and inequalities that already exist in society if it is not designed carefully.
Through relatable examples, such as facial recognition systems or language models, a dialogue was opened with students about bias, algorithmic discrimination, information bubbles, and misinformation. The message was clear and direct: technology is not neutral, but it can be designed responsibly.
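The idea that an algorithm simply mirrors its training data can be made concrete with a small sketch. The following is a deliberately simplified toy, not any real system discussed in the session: a “model” that learns pronoun–profession associations by counting co-occurrences in invented, skewed example sentences, and then reproduces that skew in its predictions.

```python
# Toy illustration of data bias: a "model" that learns word
# associations purely by counting. All data here is invented
# and intentionally skewed for demonstration purposes.
from collections import Counter

# Hypothetical biased training data: "doctor" appears mostly
# with "he", "nurse" mostly with "she".
training_sentences = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

def learn_associations(sentences):
    """Count which pronoun co-occurs with each profession."""
    counts = {"doctor": Counter(), "nurse": Counter()}
    for sentence in sentences:
        words = sentence.split()
        pronoun, profession = words[0], words[-1]
        counts[profession][pronoun] += 1
    return counts

def predict_pronoun(counts, profession):
    """The 'model' predicts the pronoun it has seen most often."""
    return counts[profession].most_common(1)[0][0]

counts = learn_associations(training_sentences)
print(predict_pronoun(counts, "doctor"))  # "he"  — learned from skewed data
print(predict_pronoun(counts, "nurse"))   # "she" — the bias is reproduced
```

Real language models are vastly more complex, but the underlying point from the session is the same: the system did not “decide” anything on its own; it faithfully reproduced the imbalance in the data it was given.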
From discourse to practice: creating with them
The experience was not just an explanation. To close the session, we worked with students in a hands-on workshop in which they identified real needs and generated ideas to address them. Starting from a specific problem, they analysed its context, shared perspectives, and proposed solutions, applying basic principles of human-centred design.
This exercise helped convey a key idea: technology is built by listening, understanding those who will use it, and working collaboratively across diverse profiles. Engineering, psychology, design, business… every perspective adds value.
Close role models, possible futures
Beyond the technical content, the value of these types of encounters lies in showcasing real role models and non-linear career paths. Sharing that it is not necessary to have a perfect plan, that interests change, and that there are many ways to contribute to the technological world helps broaden young people’s professional imagination.
The technology of the future is being designed today. And doing it well means educating new generations not only in technical skills, but also in critical thinking, ethics, and responsibility. Because only then can we build a digital environment that is fairer, more inclusive, and, above all, more human.