From AI Principles to Responsible AI: challenges

The adoption of AI principles by companies such as Telefónica is not enough. We need to make this technology responsible in all areas.

Reading time: 7 min

Richard Benjamins

Data & AI Ambassador at Telefónica

The adoption of Artificial Intelligence principles by companies such as Telefónica is a very relevant step for the future of this technology. As risks have been identified, several “AI-active” companies have published their AI principles to ensure that their AI-related activities remain on the “good” side. However, making AI sustainable in our societies requires more, and much of it lies beyond the scope of individual organizations. It is in those areas that governments and international institutions need to act. At Telefónica, we consider that this should include challenges such as the following:

1. Autonomy of AI systems and liability

When systems become autonomous and self-learning, accountability for the behaviour and actions of those systems becomes less obvious. In the pre-AI world, the user is accountable for the incorrect use of a device, while the device itself falls under the responsibility of the manufacturer. When systems become autonomous and learn over time, some behaviour might not be foreseen by the manufacturer, so it becomes less clear who would be liable if something goes wrong. A clear example of this is driverless cars. Discussions are ongoing about whether a new legal personality needs to be introduced for self-learning, autonomous systems, such as a legal status for robots[i], but this is generating some controversy[ii]. While the debate continues, advocates who argue that current law is sufficient seem, at the moment, to be in the majority.

2. AI and the future of work

AI can take over many boring, repetitive or dangerous tasks. But if this happens on a massive scale, will many jobs disappear and unemployment skyrocket[iii]? If fewer and fewer people work, governments will receive less income tax, while the cost of social benefits will increase due to rising unemployment. How can this be made sustainable? Should there be a “robot tax”[iv],[v],[vi]? How can pensions be paid when ever fewer people work? Is there a need for a universal basic income (UBI) for everybody[vii]? If AI takes most of the current jobs, what will all the unemployed people live on?

3. The relation between people and robots

How should people relate to robots? If robots become more autonomous and learn during their “lifetime”, what should the (allowed) relationship between robots and people be? Could one’s boss be a robot, or an AI system[viii]? In Asia, robots are already taking care of elderly people, accompanying them in their loneliness[ix],[x],[xi]. And could people get married to a robot[xii]?

4. Concentration of data, power and wealth

A further challenge is the concentration of power and wealth in a few very large companies[xiii],[xiv]. Currently, AI is dominated by a few large digital companies, including GAFAM[xv] and some Chinese mega-companies (Baidu, Alibaba, Tencent). This is mostly due to those companies having access to massive amounts of proprietary data, which might lead to an oligopoly[xvi]. Apart from the lack of competition, there is a danger that those companies keep AI as proprietary knowledge, not sharing anything with the larger society other than at the highest price possible[xvii]. Another concern about this concentration is that those companies can offer high-quality AI as a service, based on their data and proprietary algorithms (a black box). When those AI services are used for public services, the fact that they are black boxes (no information on bias, undesired attributes, performance, etc.) raises serious concerns, as when the LA Police Department announced that it uses Amazon’s face recognition solution (Rekognition) for policing[xviii],[xix]. The Open Data Institute in London has started an interesting debate on whether AI algorithms and data should be closed, shared or open[xx].

5. Malicious use of AI

All the points mentioned above are issues even when AI and data are applied with the intention of improving or optimizing our lives. However, like any technology, AI and data can also be used with bad intentions[xxi]. Think of AI-based cyberattacks[xxii], terrorism, influencing important events with fake news[xxiii], etc.

6. Lethal Autonomous Weapon Systems

AI can also be applied to warfare and weapons, and lethal autonomous weapon systems (LAWS) in particular are a controversial topic. Whether governments will allow LAWS or not is an explicit political decision. Some will consider this a good use of AI, while others might call it a bad one. Several organizations are working on an international treaty to ban “killer robots”[xxiv]. The issue recently attracted attention when Google employees published a letter to their CEO questioning Google’s participation in defense projects[xxv]. This has made Google reconsider its position, and it has stated that it will not participate in such projects in the future.


To obtain answers to these important concerns, many national governments and international institutions have set up expert groups to come up with proposals[xxvi],[xxvii],[xxviii], in which they discuss the concerns mentioned above as well as other relevant topics such as investments and skills. Moreover, the European Commission has set up a High-Level Expert Group on AI[xxix] to help develop policies for AI related to investments, skills and a proper legal and ethical framework.

In June 2018, Telefónica published its Digital Manifesto, which formed the starting point for our AI Principles. The Digital Manifesto is, however, much broader than AI and calls for a New Digital Deal to renew our social and economic policies and modernise our democracies for the digital age. The main pillars of the Manifesto are inclusiveness, transparency, accountability, responsibility and fairness in areas such as connectivity to the Internet, education, employment, data & AI, and global digital platforms.

We are living in an exciting time, and while there are risks associated with the massive uptake of data and AI, we believe that there are more opportunities for doing “good” than risks of doing “bad”, but we need to be prepared.

[i] (2017, 01). Motion for a European Parliament Resolution. European Parliament. Retrieved May 2018.

[iii] (2017, 05). Technology, jobs, and the future of work. McKinsey & Company. Retrieved May 2018.

[iv] (2017, 03). Robots won’t just take our jobs – they’ll make the rich even richer. The Guardian. Retrieved May 2018.

[v] (2017, 02). The robot that takes your job should pay taxes, says Bill Gates. Quartz. Retrieved May 2018.

[vi] (2017, 02). European parliament calls for robot law, rejects robot tax. Reuters. Retrieved May 2018.

[vii] (2017, 12). We could fund a universal basic income with the data we give away to Facebook and Google. The Next Web. Retrieved May 2018.

[viii] (2016, 09). Why a robot could be the best boss you’ve ever had. The Guardian. Retrieved May 2018.

[ix] (2017, 08). Robot caregivers are saving the elderly from lives of loneliness. Engadget. Retrieved May 2018.

[x] (2018, 02). Japan lays groundwork for boom in robot carers. The Guardian. Retrieved May 2018.

[xii] (2017, 04). Chinese man ‘marries’ robot he built himself. The Guardian. Retrieved May 2018.

[xiii] (2018, 01). Report: AI will increase the wealth inequality between the rich and poor. AI News. Retrieved May 2018.

[xiv] (2016, 12). AI could boost productivity but increase wealth inequality, the White House says. CNBC. Retrieved May 2018.

[xv] Google, Amazon, Facebook, Apple, Microsoft.

[xvi] (2016, 11). AI could boost productivity but increase wealth inequality, the White House says. CNBC. Retrieved January 2018.

[xvii] (2018, 02). Google must be stopped before it becomes an AI monopoly. Wired. Retrieved May 2018.

[xviii] (2018, 05). Amazon is selling police departments a real-time facial recognition system. The Verge. Retrieved May 2018.

[xix] (2018, 05). Amazon Pushes Facial Recognition to Police. Critics See Surveillance Risk. The New York Times. Retrieved May 2018.

[xx] (2018, 04). The role of data in AI business models (report). Open Data Institute. Retrieved May 2018.

[xxi] (2018, 02). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute, University of Oxford, et al. Retrieved May 2018.

[xxii] (2018, 02). Artificial intelligence poses risks of misuse by hackers, researchers say. Reuters. Retrieved May 2018.

[xxiii] (2018, 04). Artificial intelligence is making fake news worse. Business Insider. Retrieved May 2018.

[xxiv] (2018, 04). France, Germany under fire for failing to back ‘killer robots’ ban. Politico. Retrieved May 2018.

[xxv] (2018, 04). ‘The Business of War’: Google Employees Protest Work for the Pentagon. The New York Times. Retrieved May 2018.

