The adoption of Artificial Intelligence principles by companies such as Telefónica is a highly relevant step for the future of this technology. As risks have been identified, several “AI-active” companies have published their AI principles to ensure that their AI-related activities remain on the “good” side. However, making AI sustainable in our societies requires more than individual organizations can deliver, and it is in those areas that governments and international institutions need to act. At Telefónica, we consider that this should include challenges such as:
1. Autonomy of AI systems and liability
When systems become autonomous and self-learning, accountability for their behaviour and actions becomes less obvious. In the pre-AI world, the user is accountable for the incorrect use of a device, while the device itself falls under the responsibility of the manufacturer. When systems become autonomous and learn over time, some behaviour may not have been foreseen by the manufacturer, so it becomes less clear who would be liable if something goes wrong. A clear example of this is driverless cars. Discussions are ongoing about whether a new legal personality needs to be introduced for self-learning, autonomous systems, such as a legal status for robots[i], but this is generating controversy[ii]. While the debate continues, advocates who argue that current law is sufficient seem, for now, to be in the majority.
2. AI and the future of work
AI can take over many boring, repetitive or dangerous tasks. But if this happens at a massive scale, many jobs might disappear and unemployment could skyrocket[iii]. If fewer and fewer people work, governments will receive less income tax, while the cost of social benefits will increase due to rising unemployment. How can this be made sustainable? Should there be a “robot tax”[iv],[v],[vi]? How can pensions be paid when ever fewer people work? Is there a need for a universal basic income (UBI) for everybody[vii]? If AI takes over most current jobs, what will unemployed people live on?
3. The relation between people and robots
How should people relate to robots? If robots become more autonomous and learn during their “lifetime”, what should be the (allowed) relationship between robots and people? Could one’s boss be a robot, or an AI system[viii]? In Asia, robots are already taking care of elderly people, keeping them company in their loneliness[ix],[x],[xi]. And could people get married to a robot[xii]?
4. Concentration of data, power and wealth
AI is currently dominated by a few very large digital companies, raising concerns about the concentration of power and wealth[xiii],[xiv]: GAFAM[xv] and some Chinese mega-companies (Baidu, Alibaba, Tencent). This dominance is mostly due to these companies’ access to massive amounts of proprietary data, which might lead to an oligopoly[xvi]. Apart from the lack of competition, there is a danger that these companies keep AI as proprietary knowledge, sharing nothing with the larger society other than at the highest possible price[xvii]. Another concern is that these companies can offer high-quality AI as a service, based on their data and proprietary algorithms (a black box). When such AI services are used for public services, the fact that they are black boxes (with no information on bias, undesired attributes, performance, etc.) raises serious concerns, as when the LA Police Department announced that it uses Amazon’s face-recognition solution (Rekognition) for policing[xviii],[xix]. The Open Data Institute in London has started an interesting debate on whether AI algorithms and data should be closed, shared or open[xx].
5. Malicious use of AI
All the points mentioned above are issues that arise even when AI and data are applied with the intention to improve or optimize our lives. However, like any technology, AI and data can also be used with bad intentions[xxi]. Think of AI-based cyberattacks[xxii], terrorism, influencing important events with fake news[xxiii], etc.
6. Lethal Autonomous Weapon Systems
AI can also be applied to warfare and weapons, and lethal autonomous weapon systems (LAWS) in particular are a controversial topic. Whether governments will allow LAWS or not is an explicitly political decision: some will consider this a good use of AI, while others might call it a bad use. Several organizations are working on an international treaty to ban “killer robots”[xxiv]. The issue recently attracted attention when Google employees published a letter to their CEO questioning Google’s participation in defense projects[xxv]. This made Google reconsider its position, and the company has stated that it will not participate in such projects in the future.
To address these important concerns, many national governments and international institutions have set up expert groups to come up with proposals[xxvi],[xxvii],[xxviii], discussing the concerns mentioned above as well as other relevant topics such as investment and skills. Moreover, the European Commission has set up a High-Level Expert Group on AI[xxix] to help develop policies for AI related to investment, skills and a proper legal and ethical framework.
In June 2018, Telefónica published its Digital Manifesto, which formed the starting point for our AI Principles. The Digital Manifesto is, however, much broader than AI and calls for a New Digital Deal to renew our social and economic policies and modernise our democracies for the digital age. The main pillars of the Manifesto are inclusiveness, transparency, accountability, responsibility and fairness in areas such as connectivity to the Internet, education, employment, data & AI, and global digital platforms.
We are living in an exciting time, and while there are risks associated with the massive uptake of data and AI, we believe that the opportunities for doing “good” outweigh the risks of doing “bad”. But we need to be prepared.
[xv] GAFAM: Google, Amazon, Facebook, Apple, Microsoft