Is it possible to achieve ethical Artificial Intelligence?


Artificial Intelligence (AI) is developing by leaps and bounds, and its applications are expected to multiply in the short term. In 2019 the global AI market grew by 154%, and it is forecast to reach a value of close to 31,236 million dollars by 2025, according to Statista's estimates. These figures reveal that AI will be one of the technologies with the greatest impact on society in the coming decades, and they lead us to ask how we should manage the changes it will bring.

Science fiction, with titles like The Matrix, Terminator and Transformers, has helped create a general perception of robots and autonomous AI systems as a threat, machines that rebel against humanity. Setting cinema aside and focusing on the opportunities that AI offers for the progress of society, it is time to agree on which ethical principles should govern the use of this technology and what its limits are.

To discuss this issue, Daniel Escoda (Director of Regulation, Privacy and Competition Law, Telefónica España), Richard Benjamins (Data and AI Ambassador, Telefónica S.A.) and Christoph Steck (Director of Public Policy and Internet, Telefónica S.A.) met with the students from 42 Madrid to analyse the risks related to the use of this technology and to review the business and institutional initiatives being carried out in this field.

42 Madrid, the most revolutionary programming campus in Spain, with no teachers and no fixed schedules, became the perfect meeting point for law, science and public policy to respond to the questions and concerns of the students currently taking part in the programme, who will work in digital professions in the near future.




Which values or ethical principles should govern AI?

The ethical dilemmas posed by technology in general are not new. As early as 1985, Judith Jarvis Thomson examined, in an article published in The Yale Law Journal, issues as complex as the “trolley problem”. Imagine a person driving a tram whose brakes have failed, approaching a fork with two tracks: on the left track there are five people, and on the right track there is only one. What decision should the driver make? Would it be lawful to kill one person to save the other five?

The driver would have to rely on ethical criteria to resolve this dilemma. What would happen if an AI-based machine had to make the decision? On what values or principles would it base its choice? It is essential to equip AI with principles, values or ethical codes that condition the decisions these machines make on behalf of human beings.


"Technology is already here. Now it’s time for values and we must choose which ones will govern the digital world," Christoph Steck.



Christoph Steck during his speech in 42 Madrid.


The debate on this issue is becoming increasingly important because of the consequences it will have for the economy, for democratic systems and even for the military field.

According to the World Economic Forum, 71% of total working hours are currently performed by humans, compared with 29% performed by machines or algorithms. Within just two years, that split is expected to shift to 58% of working hours performed by people and 42% by machines or algorithms. In quantitative terms, 75 million jobs are expected to be displaced, although a further 133 million new jobs with different job functions may emerge.

Beyond the impact on the economy, fake news created and distributed worldwide at high speed is directly affecting the formation of public opinion and the normal functioning of democracy. This is shown by the latest Eurobarometer (2018), in which 83% of Europeans say that fake news represents a danger to democracy.

In the military sphere, so-called "killer robots" are revolutionising the world of weapons: they are able to select and attack targets without any interaction from human operators.


"The ethical principles we need to apply to Artificial Intelligence will never be enough if they are not respected", Richard Benjamins.



Richard Benjamins during his speech in 42 Madrid.


Different companies, international institutions and governments have started to develop ethical principles for the application of AI to avoid these possible negative effects and to enhance the opportunities it presents for the development of society.

Along these lines, a recent Harvard study reveals a proliferation of principles and guidelines on how AI should be built and used by companies, governments and other stakeholder groups. Among the thirty-two initiatives the study compiles, the case of Telefónica stands out as the first telecommunications company to adopt Artificial Intelligence Principles to guarantee a positive impact on society. This is our commitment to design, develop and use Artificial Intelligence with integrity and transparency, ensuring that we embrace the benefits and challenges of the technology in an ethical and responsible manner throughout our organisation.



In the international sphere, the Ethics Guidelines for Trustworthy AI, published by the European Commission in April 2019 following a public consultation that gathered contributions from more than 500 stakeholders, including Telefónica, are also worthy of note. Even the Vatican is moving in this direction, with initiatives such as "Common Good in the Digital Age".

All these initiatives share a common goal: to design, develop and apply human-centred Artificial Intelligence. This is the premise we put forward in our Digital Manifesto, where we set out the need to develop an ethical, responsible and transparent AI.

This is only the beginning of the journey. It is necessary to keep on working to improve and address any new potential challenges that arise from AI and its application.



Raquel Carretero Juárez
Public Policy, Telefónica