I often think about the impact that AI is already having on the world and, in particular, on digital product managers, and I can’t help thinking about the responsibility that falls on those of us who work in this field. We are living through a historic and unique moment, a turning point of no return: artificial intelligence is not going to change our lives; it is already doing so, and it will keep doing so in unexpected ways, reaching every aspect of our profession. This week I was discussing this with my teammates, and we agreed: ‘We have to change the approach we take to our work, from designers to PMs to engineers.’ I see it as AI dealing us so many cards that we have to think carefully about which ones to play for the hand we want at any given moment.
Alongside this, the issue of privacy constantly comes to mind: partly for its own sake, but above all because a lack of privacy opens the door to many security problems. A few days ago, while reading an interview with Carissa Véliz, an Oxford professor and one of the most lucid voices in the debate on digital privacy, I came across a reflection that has stayed with me ever since: ‘many teenagers cannot even imagine what it is like to live with privacy’. That sentence encapsulates, I believe, one of the most profound dilemmas of our professional moment. And I would extend it: it does not only affect teenagers. The mere fact of using a social login from any of our networks already opens many doors to problems.
The silent transformation of our profession
As a Product Manager, it is no longer enough to master agile methodologies, understand user journeys or manage backlogs. We must now understand machine learning algorithms, reflect on algorithmic bias and navigate ethical territory that until recently remained unexplored. We are architects of experiences that shape behaviours, influence decisions and, ultimately, mould the digital society in which we live. It is a responsibility that goes far beyond KPIs and engagement metrics: we must become custodians of a digital experience that respects human dignity and preserves the values we consider fundamental. And working from Spain, in Europe, while watching what is being done in other parts of the world, I believe we must embrace privacy and security as differentiating values. I am getting philosophical, but that is how it is.
The privacy-orphaned generation
Véliz’s words have made me reflect deeply on the users for whom we design our products. There is a whole generation that has grown up in a world where digital privacy is, at best, an abstraction. For them, sharing every moment, every thought, every preference, is not a loss of privacy but the norm.
As professionals, we face a fascinating ethical dilemma: how do we design products that respect privacy for users who have never experienced its benefits? How do we explain the value of something they have never possessed? And how do we deliver that value when we are working with one hand tied behind our back (not using data freely makes everything harder), all while demand for hyper-personalised experiences keeps rising?
Véliz puts it clearly: ‘Privacy is not just a question of whether or not we allow others to see or know about us.’ It is about preserving freedom of thought, the ability to dissent, the right to evolve without our digital past haunting us forever. These are concepts that go far beyond technology and touch on the foundations of a free society.
This reflection leads me to an uncomfortable but necessary conclusion: as product managers, we have a responsibility not only to create successful products, but also to educate through design. Our interfaces, our information flows, our decisions about what data to collect and how to use it, are silent statements about the kind of digital society we want to build.
The paradox of ethical AI
There is a fascinating irony in our historical moment: we can use the same technology that threatens privacy to protect it. Artificial intelligence offers us unprecedented tools to create personalised experiences without compromising the privacy of our users.
Consider the possibilities: algorithms that learn patterns without storing individual data, local processing systems that keep information on the user’s device, AI models that are trained with synthetic data rather than real personal information. The technology exists; what we need is the will to implement it with these values in mind.
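To make one of these possibilities concrete, here is a minimal sketch of randomised response, a simple form of local differential privacy: each user perturbs their answer on their own device before anything is sent, yet the aggregate remains statistically useful. The function names and the `p_truth` parameter are illustrative assumptions, not a reference implementation.

```python
import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    """Report the real answer with probability p_truth; otherwise a fair coin.
    No individual report reveals the user's true answer with certainty."""
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Debias the aggregate: E[observed] = p_truth * rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 10,000 users, 30% of whom have the sensitive attribute.
reports = [randomized_response(random.random() < 0.30) for _ in range(10_000)]
print(f"estimated rate: {estimate_rate(reports):.3f}")  # close to 0.30
```

The point is architectural: the product can still compute the metric it needs without the server ever holding a single honest individual answer.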
As Véliz points out, ‘digitising does not necessarily mean monitoring’. The problem is that ‘the way we have designed digital technology, right now the two are inextricably linked’.
The European framework
Reflecting on our context as European professionals, I realise that we are living in a moment of historic privilege. The GDPR, the Artificial Intelligence Act, our entire regulatory framework: we should not see these as bureaucratic obstacles but as competitive advantages in disguise. I confess that shortly after the GDPR appeared, I considered having to comply with it a ‘pain’, because it sets limits on ‘doing things’. Now I see it differently.
While other regions of the world struggle to regulate technologies that are already deployed, we have the opportunity to build from the ground up with ethical principles integrated. We can create products that not only comply with future regulations but also set new global standards of respect for the user.
This advantage is not only regulatory; it is cultural. European values of human dignity, the right to privacy and data protection are not impediments to innovation, but the foundations on which we can build technology that truly serves people. What’s more, when communicated well, it becomes a market advantage: more and more people are concerned about their privacy and security, and they are willing to use, and even pay for, products that protect both.
Privacy by Design: A professional philosophy
Privacy by design cannot remain a ‘compliance’ checklist that we review at the end of the development process. It must become a philosophy that permeates every decision we make as Product Managers, just as accessibility and security must.
This means rethinking our processes from scratch: how we define success metrics that do not depend on the mass extraction of personal data, how we design architectures that minimise data collection, how we create experiences that are valuable without being invasive, and how we communicate to our customers what data we use to provide value and how we safeguard it.
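As a toy illustration of that data-minimisation mindset, consider an analytics event that is stripped to an explicit allowlist and coarsened before it ever leaves the client. The field names here are hypothetical, a sketch of the principle rather than any particular tracking library.

```python
from datetime import datetime, timezone

# Hypothetical allowlist: collect only the fields the metric actually needs.
ALLOWED_FIELDS = {"event_name", "app_version"}

def minimise_event(raw_event: dict) -> dict:
    """Drop everything not on the allowlist and coarsen the timestamp to a
    day, so the event cannot be tied back to one precise moment or person."""
    event = {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}
    event["day"] = datetime.now(timezone.utc).date().isoformat()
    return event

raw = {
    "event_name": "checkout_completed",
    "app_version": "2.4.1",
    "user_email": "ana@example.com",   # never leaves this function
    "gps": (40.4168, -3.7038),         # never leaves this function
}
print(minimise_event(raw))  # {'event_name': ..., 'app_version': ..., 'day': ...}
```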
Véliz is right when she warns that we should not ‘place all the responsibility on the shoulders of individuals.’ The solution is not to turn every user into an expert in digital privacy. Our professional responsibility is to create products that are private and secure by default, that protect users even from their own impulses to overshare.
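‘Private by default’ can be as simple as making the most protective option the zero-effort option. A minimal sketch, with hypothetical setting names:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Hypothetical defaults: every field starts at its most protective
    value, and anything broader requires an explicit, informed opt-in."""
    profile_public: bool = False
    analytics_opt_in: bool = False
    personalised_ads: bool = False
    data_retention_days: int = 30  # shortest retention the product supports

settings = PrivacySettings()  # a brand-new user gets the safest configuration
```

The design choice matters because most users never change their defaults: in practice, the default configuration is the product’s real privacy policy.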
Analogue life as a guiding principle
There is something deeply revealing in the advice Véliz offers to young people: to remember that ‘life is not digital, but analogue.’ This observation should be our guiding principle as designers of technological products.
The technology we create must be at the service of that real life made up of face-to-face conversations, moments of contemplation without screens, irreplaceable sensory experiences. Our products should enrich that fundamental human experience, not compete with it or, worse still, replace it.
This leads me to a personal reflection: how many of our products really improve people’s lives, and how many simply capture their attention in order to monetise it? It is an uncomfortable but necessary question.
A generational responsibility
At this point in my reflection, I realise that we have a generational responsibility in our hands. We are the first generation of product managers who must navigate between the transformative power of AI and the preservation of fundamental human values. The decisions we make today will define the digital world that future generations will inherit.
As European professionals, we have a unique opportunity to lead this transformation from a position of principle. We can demonstrate that it is possible to create successful, innovative and profitable products without sacrificing human dignity or individual privacy. And in doing so, we can help find out whether the old continent is capable of greater technological independence…
The question I ask myself at the end of each day is simple but powerful: does the product I am working on make the world a better place? Does it respect the people it serves? Does it preserve their autonomy and their ability to choose?
These are not easy questions, but I believe they are the right questions to ask. And in a world where technology is advancing at breakneck speed, perhaps the most revolutionary thing we can do is to stop and reflect on where we are going and why. The digital future is in our hands. May we build it with wisdom, respect and the deep conviction that technology must serve humanity, not the other way around.