Technology advances like a supernova: it illuminates industries, revolutionises science, transforms art and, of course, changes our daily lives. But that brightness also casts shadows. As everything accelerates, we need regulation that does not stifle innovation, but does set clear limits on its risks.
During the meeting, three voices with very different approaches — Victoria Camps, Paloma Llaneza and Nacho Vigalondo — addressed this dilemma. And although it is often presented as a tension between ‘regulating or innovating’, the discussion made it clear that it is not a question of choosing one or the other: well-designed regulation can be the engine of purposeful innovation.
The dilemma that isn’t really a dilemma
Paloma Llaneza was blunt: ‘the current legal system has fallen short.’ We continue to legislate as if the world were analogue, while large platforms operate on a global scale, dodging jurisdictions. Victoria Camps added a fundamental ethical layer: without civic culture and institutional responsibility, laws alone are not enough.
As a professional in the digital sector, I am concerned that we continue to talk about regulation and innovation as if they were enemies. They are not. Real innovation needs clear rules to grow meaningfully and protect what makes us human.
Is technology neutral?
Victoria Camps debunked one of the great myths: ‘Technology is not neutral. Like Prometheus’ fire, it can illuminate or destroy.’ It all depends on who uses it and with what intention. Llaneza, from a legal perspective, pointed out that many rules simply cannot be applied. ‘The grandmother is dead,’ she said bluntly, emphasising the urgency of creating new mechanisms. And Nacho Vigalondo, from an artistic perspective, was categorical: ‘AI does not create, it repeats. There is no pain, no love, no experience. Only patterns, and that, in art, is insufficient.’
Do we really choose?
Victoria Camps posed a disturbing question: we believe we decide how to use technology, but often the framework is imposed by institutional or corporate design.
Llaneza explained it with a clear example: most of the data we generate does not come from conscious decisions, but from default settings. And that leaves us with virtually no real control over our privacy. Nacho Vigalondo, in a provocative tone, added: the promise of AI is efficiency, not creativity. What space is left for those who want to create something authentic when everything already seems replicated?
Creativity in the age of algorithms
Nacho Vigalondo’s intervention was a powerful reminder of what is at stake when we talk about innovation. AI can combine patterns, yes, but it cannot create from experience, pain or love. ‘The way we metabolise our influences and our lives is a mystery and an unrepeatable legacy that AI cannot produce.’
Creative homogenisation threatens a generation of young artists who may become discouraged, believing that everything has already been done. At this point, we need to reflect on what place we are reserving for authenticity, for genuine expression, in an environment where efficiency seems to outweigh humanity.
Europe, ethics and the future
Europe is trying to balance competitiveness and rights with initiatives such as the Digital Rights Charter. But without effective enforcement, it all remains on paper. Llaneza warned that poor regulation can be worse than no regulation, but not regulating opens the door to arbitrariness and erodes rights. Victoria Camps summed it up clearly: ‘Democracy needs trust. And trust is built on responsibility, transparency and purpose.’
And now what?
I left that colloquium with one certainty: the digital future is not defined solely in innovation laboratories, but in the spaces where we decide which rights we protect and which values we defend.
Regulation should not be seen as a brake, but as a compass. And that compass must always point towards the human.