Artificial Intelligence (AI) is an essential technology for the qualitative advancement of our productive, scientific, educational, environmental, and social systems. However, its design and use are not risk-free. It is therefore necessary to assess its impact and, where appropriate, establish formal limits and ethical principles to safeguard human rights and the rule of law.
One of our objectives is to generate a framework of certainty that provides added value to our companies so that, in addition to being competitive, they are recognised in the world as safe and trustworthy.
Telefónica is an active player in the Artificial Intelligence ecosystem, with a wide range of services and activities such as LUCA, our Big Data unit, and Aura, our AI-based virtual assistant, as well as the use of AI to improve the efficiency of internal processes.
Telefónica’s Artificial Intelligence Principles and Governance
Telefónica has taken the lead, within the private sector, in defining ethical principles for AI to be applied in the company. These principles apply not only to company employees in the design, development and use of products and services, but also to suppliers and third parties.
The difficulty lies in putting these principles into practice in the business and product development process. To address this issue, among other initiatives, we have developed a three-tier AI governance model within the company.
Telefónica’s approach to AI regulation
The first step in approaching the regulation of any new technology or innovation, such as AI, is to analyse whether the existing legal framework can address the potential risks it presents. New regulation should be limited to addressing the problems identified, without extending beyond what is strictly necessary. On the one hand, regulation of Artificial Intelligence should follow a tiered, risk-based approach, in which high-risk uses of AI that may cause serious harm or have a relevant impact on human rights are subject to ex-ante safeguards. These safeguards could take the form of a compliance test that the AI system would have to pass before the product is placed on the market.
On the other hand, non-high-risk uses should not be subject to additional obligations or regulations, and a voluntary labelling system could be implemented to foster users' confidence in such applications. In addition, the adoption by companies of ethical principles and self-regulatory schemes can further strengthen an ecosystem of trust in AI.