The future of Artificial Intelligence regulation

On 19 March 1880, my fellow countryman, Don Rodrigo Sánchez Arjona, a doctor of law and, it seems, passionate about technology, had a telephone line installed between his home in Fregenal de la Sierra and his estate ‘Las Mimbres’, 8 kilometres away. A few months later, on 27 and 28 December of that year, Don Rodrigo succeeded in establishing telephone communication with Seville and Cádiz. Our beloved Extremadurans from Fregenal, Badajoz, broke the long-distance record held until then by the Americans in Boston, Massachusetts. The achievement, already epic in itself, had another peculiarity: Don Rodrigo, having obtained the necessary powers of representation, had previously travelled to Madrid to obtain the required authorisation for the installation of a telephone line from the sector’s supervisor, the Director General of Telegraphs. This is an example of deploying innovative technology with due respect for the law and the regulatory regime (granted, that of the telegraph, not the telephone). Although Don Rodrigo found the trip and the paperwork burdensome, the formality was undoubtedly necessary.


Antonio Muñoz Marcos


Since then, and most certainly even before, technology has always been several steps ahead of the law. Since technology must nevertheless comply with the law, this mismatch creates a false dilemma between strict regulation and disaffected innovation, mediated by an institutional framework responsible for producing and supervising regulation.

This time, many believe that the magnitude of the challenge is different given the nature of Artificial Intelligence. It may seem so. As was also the case with the telescope, ocean navigation and mobile telephony, technology not only changes processes or tools, but also redefines the way we make decisions, how we relate to other people or to the world, how we assert ourselves as individuals or how we define ourselves as a society. I think the difference is probably that this time we are all more aware of it and the debate takes on an air of institutionalised responsibility based on an understanding of what we have in our hands, a responsibility they call “ethical AI”.

What will the regulation of Artificial Intelligence look like in the future? Well, it will depend on what Artificial Intelligence is like in the future. Otherwise, we would be asking how future regulation will govern the Artificial Intelligence of the present, which, absurd as it may seem, is the premise underlying many regulatory strategies.

What will the Artificial Intelligence of the future be like?

In the absence of the necessary gift of vision, we could foresee the following:

  • AI will bear a greater resemblance to human intelligence and agency: it will be truly multi-domain and will take context, emotions and culture into account.
  • AI will become an invisible, ubiquitous infrastructure integrated into our daily lives, without us even noticing it.
  • AI will be personalised, taking the form of an artificial cognitive companion that accompanies us: a personal agent and advisor with the ability to make decisions on our behalf.
  • AI will provide the basis for augmented consciousness with new interfaces and human-machine interactions.
  • From all of the above, we can conclude that Agentic AI will communicate with other agents to deploy and carry out the instructions given to it. It will change the web and, in general, our online interactions.

Regulatory responses will likely need to be more complex, dynamic, and comprehensive than those based on current product compliance models, which are closer to the administrative seal that Don Rodrigo sought in the 19th century for the installation of his domestic line.

Regulation will have to articulate these responses through other types of mechanisms based on algorithmic governance and automated supervision. In other sectors too, not just Artificial Intelligence, supervision and auditing will be continuous, based on the permanent availability of data. Regulation must therefore guarantee this transparency: access to operating records and to all the information necessary for proper supervision. This will intensify the transparency and traceability requirements placed on algorithms. This information, made available to supervisory algorithms or artificial intelligences, will serve to generate ‘natural’ explainability and to run algorithmic impact assessments continuously, dynamically and automatically.
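To make the idea of traceable operating records concrete, here is a minimal sketch of what one such record might look like. The schema, field names and model identifier are entirely hypothetical, chosen for illustration; no actual regulation prescribes this format.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class DecisionRecord:
    """One traceable entry in an AI system's operating log (hypothetical schema)."""
    model_id: str
    input_summary: dict   # summarised or redacted features, not raw personal data
    output: str
    explanation: str      # human-readable rationale attached at decision time
    timestamp: float = field(default_factory=time.time)
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def emit(record: DecisionRecord) -> str:
    """Serialise the record so a supervisory system can consume it."""
    return json.dumps(asdict(record))

log_line = emit(DecisionRecord(
    model_id="credit-scorer-v2",
    input_summary={"income_band": "B", "region": "ES"},
    output="approved",
    explanation="score 0.82 above threshold 0.75",
))
```

A stream of such records, emitted at decision time rather than reconstructed afterwards, is what would give supervisors the permanent availability of data the paragraph above describes.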

The only way to achieve this supervision is through the use of Artificial Intelligence to evaluate and explain Artificial Intelligence: AI systems that audit and supervise AI using metrics of fairness, robustness and sustainability. Future AI regulation must ensure that: a) this information is available to supervisory algorithms that audit and monitor its operation in real time, and b) indicators are established to measure its correct operation within the ‘ethical’ limits defined by that same regulation.
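As an illustration of the kind of indicator such a supervisory system might compute continuously, here is a minimal sketch of a demographic-parity check over a stream of decisions. The function name, groups and sample data are hypothetical; real audits would use richer fairness metrics.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest difference in positive-outcome rate between groups.

    `decisions` is an iterable of (group, approved) pairs; a supervisory
    system could evaluate this over a live stream of decision records.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

stream = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(stream)  # 2/3 - 1/3 = 1/3
```

A regulator-defined threshold on such an indicator is one way the ‘ethical’ limits mentioned above could be made machine-checkable.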

Ubiquitous and supervised Artificial Intelligence will also require the development of semantic and ethical interoperability protocols between algorithms, between intelligent agents, and between them and their supervisors, to facilitate their audit and certification. In this context, the definition of responsibilities in the chain will continue to be relevant, even more so for the purposes of accountability and responding to the consequences of algorithms.

The deployment of personal intelligent agents will require revisiting privacy issues, particularly with regard to the automation of context-related privacy rules that the agent will have to incorporate. New cybersecurity risks will emerge in this agent-based world, as well as questions about responsibility for actions taken on behalf of human beings.

This regulation must also be accompanied by the development or reworking of subjective rights:

  • The right to algorithmic and agent portability: the ability to transfer personal AI profiles between platforms without losing control or personalisation.
  • The right not to be automatically profiled. This already exists, but it will undoubtedly need to be adapted.
  • The right to an understandable explanation, not only technical but also natural and human.
  • The right to challenge AI-based decisions, perhaps through alternative AI systems. Human supervision, the ‘man in the middle’, would not be sufficient as an alternative.
  • The right to some form of cyber-cognitive or agentic disconnection, allowing people to live and make decisions in certain areas without the constant mediation of algorithms.

In short, we will need regulation that favours transparency and accountability requirements, giving priority to people and respect for their fundamental rights and freedoms. We must also observe the social and cultural impacts of technology and assess the safeguards that we must impose as a society.
