The use of AI may give rise to a number of risks that must be assessed and mitigated. Therefore, when developing or deploying an AI system, the following risks should be taken into consideration:
1. Fundamental Rights risk
AI systems may impact human dignity, privacy, freedom of information, non-discrimination, the right to education, consumer protection, gender equality, and other fundamental rights. Even when unintended, they may perpetuate biases and discrimination present in training data, undermine individual autonomy by interfering with decision-making, or produce discriminatory outcomes.
Mitigation measures: ensure human oversight, establish metrics to detect and correct bias in the data used, enhance transparency in AI operations, design accessible interfaces, and inform users when they interact with an AI system.
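To make the bias-detection measure concrete, the sketch below shows one common check, the disparate impact ratio between two groups’ favourable-outcome rates. It is a minimal illustration only: the sample decisions are invented, and the 0.8 (“four-fifths”) threshold is a widely used rule of thumb rather than a legal standard.

```python
# Illustrative sketch: a demographic-parity check on model decisions,
# assuming binary outcomes and a single protected attribute.

def selection_rate(decisions):
    """Share of positive (favourable) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates between two groups (lower / higher)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decisions (1 = favourable) split by a protected attribute.
decisions_group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
decisions_group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25

ratio = disparate_impact_ratio(decisions_group_a, decisions_group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; actual thresholds vary
    print("Potential bias detected: review data and model before deployment.")
```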

2. Privacy risk
AI systems may expose or improperly handle personal data processed or generated during training or operation. They can collect, process, and store large volumes of personal information, such as biometric or health data. This may lead to intrusions into individuals’ private lives and to the unauthorised disclosure, manipulation, or misuse of personal information.
Mitigation measures: ensure that processing is adequate, necessary, and proportionate; specify the types of personal data involved and the processing activities carried out; embed privacy-by-design principles; and conduct risk assessments.
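As a minimal illustration of privacy-by-design at the data-ingestion stage, the sketch below combines data minimisation (keeping only fields needed for the stated purpose) with pseudonymisation of a direct identifier via a keyed hash. The field names, secret key, and allowed-field list are assumptions for demonstration; a real system needs managed keys and a documented legal basis for each retained field.

```python
import hmac
import hashlib

# Assumption: in practice the key lives in a secrets manager, not in code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash: the key holder can
    re-derive it for record linkage, but outsiders cannot reverse it."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, allowed_fields: set) -> dict:
    """Keep only fields that are adequate, necessary, and proportionate
    for the stated purpose; drop everything else at ingestion."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# Hypothetical raw record arriving at the pipeline.
raw = {"name": "Jane Doe", "email": "jane@example.com",
       "age": 41, "postcode": "28001", "notes": "unrelated free text"}

clean = minimise(raw, allowed_fields={"email", "age"})
clean["email"] = pseudonymise(clean["email"])
print(clean)  # only 'age' and a pseudonymised 'email' remain
```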
3. Personal Safety risk
AI systems may pose risks to individuals’ safety by causing physical harm or violence. They may misinterpret data or make decisions that lead to harmful actions, such as accidents involving autonomous vehicles or errors in control systems.
Mitigation measures: periodically assess and identify vulnerabilities, establish up-to-date safety protocols, and maintain continuous monitoring to protect individuals’ well-being.
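The sketch below illustrates one form such continuous monitoring can take in a control setting: a watchdog that compares each telemetry reading against a validated safe envelope and requests a fail-safe stop when a reading falls outside it. The speed limits and sample readings are hypothetical placeholders.

```python
# Illustrative safety watchdog: check each reading against a validated
# envelope and escalate to a fail-safe action on any violation.
SAFE_SPEED_RANGE = (0.0, 30.0)  # assumed validated limits, in m/s

def watchdog_step(speed_reading: float) -> str:
    """Return the action the controller should take for one reading."""
    low, high = SAFE_SPEED_RANGE
    if not (low <= speed_reading <= high):
        return "FAIL_SAFE_STOP"  # hand control to a safe fallback state
    return "CONTINUE"

for reading in [12.0, 28.5, 34.2]:  # hypothetical telemetry samples
    print(f"{reading:5.1f} m/s -> {watchdog_step(reading)}")
```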

4. Mental Health risk
AI systems may cause psychological harm or affect individuals’ mental health. This may occur when users interact with automated systems that ignore or exploit human emotional needs, or when such systems generate content that promotes negative stereotypes, harassment, or dependency.
Mitigation measures: evaluate and monitor the system to ensure it does not cause psychological harm; ensure transparency in its operation; and promote user education for responsible use.

5. Social risk
AI systems can negatively impact individual, community, and societal well-being. This may occur through the amplification of bias, concentration of power, erosion of privacy, manipulation of public opinion, labour displacement, or automated decision-making that adversely affects certain groups.
Mitigation measures: assess social impact before and during system development; promote fair and equitable use; and adopt an inclusive approach to ensure that the benefits of AI are available and accessible across different age groups, cultures, and communities.

6. Environmental risk
AI systems may have environmental impacts due to energy consumption and carbon emissions throughout their lifecycle. This includes data centre energy use, the carbon footprint of data traffic and cloud processing, and electronic waste, among others.
Mitigation measures: implement systems with computational capacity aligned to actual needs to avoid excessive resource consumption; measure and control carbon emissions; and responsibly manage the disposal of obsolete hardware.
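To illustrate how emissions can be measured and controlled, the sketch below estimates training-related CO2e from energy consumption using the common formula emissions = energy × grid intensity, adjusted by the data centre’s power usage effectiveness (PUE). All figures are invented placeholders; real estimates should use metered consumption and the local grid’s published emission factors.

```python
# Illustrative estimate of training emissions. All values are assumptions.
gpu_power_kw = 0.3       # assumed average draw per GPU, in kW
num_gpus = 8
training_hours = 120
pue = 1.4                # power usage effectiveness of the data centre
grid_intensity = 0.25    # kg CO2e per kWh; varies widely by region

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_intensity

print(f"Estimated energy use: {energy_kwh:.0f} kWh")      # ~403 kWh
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2e")  # ~101 kg
```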
7. Intellectual Property risk
AI systems may infringe the intellectual property rights of content owners in both input and output data. This includes using data without proper authorisation, generating derivative content, or lacking documentation and traceability for the use of pre-existing works.
Mitigation measures: train the system only with data and information for which sufficient rights exist; and verify that the system does not infringe intellectual property rights in its generated output.
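One way to support the output-verification measure is an automated screen for near-verbatim reproduction of protected works, for instance via word n-gram overlap, as sketched below. The reference text, output, and review threshold are invented; a real pipeline would compare against an index of licensed works and route matches to human review.

```python
# Illustrative near-duplicate check between model output and a protected
# reference text, using word 8-gram overlap. Texts and threshold are invented.
def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output: str, reference: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear in the reference."""
    out = ngrams(output, n)
    return len(out & ngrams(reference, n)) / len(out) if out else 0.0

reference_work = "the quick brown fox jumps over the lazy dog every single morning"
model_output = "the quick brown fox jumps over the lazy dog every single day"

ratio = overlap_ratio(model_output, reference_work)
print(f"8-gram overlap with reference: {ratio:.0%}")  # 80% here
if ratio > 0.5:  # assumed review threshold
    print("Flag for human review: possible verbatim reuse of a protected work.")
```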

8. Cybersecurity risk
The infrastructures supporting AI systems may have security vulnerabilities and be targeted by cyberattacks. Unauthorised access could result in manipulation of training data, extraction of sensitive information from the model or its users, or service unavailability.
Mitigation measures: embed security-by-design principles from the outset, considering potential cybersecurity vulnerabilities; establish access and usage policies; conduct security audits and assessments; protect both physical and digital infrastructure; train personnel on cybersecurity; and ensure continuous monitoring to prevent and mitigate cyberattacks.
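As one small example of what continuous monitoring can look like in practice, the sketch below tracks per-client request rates to a model API over a sliding window and flags bursts that exceed a baseline, a crude signal of credential abuse or model-extraction attempts. The window size, threshold, and client identifier are assumptions.

```python
# Illustrative rate-anomaly monitor for a model API. Window and threshold
# are assumed baselines, not recommended values.
from collections import defaultdict, deque

WINDOW_SECONDS = 60.0
MAX_REQUESTS_PER_WINDOW = 100  # assumed per-client baseline

windows = defaultdict(deque)   # client_id -> recent request timestamps

def record_request(client_id: str, timestamp: float) -> bool:
    """Record one request; return True if the client's rate looks anomalous."""
    q = windows[client_id]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop events that fell out of the sliding window
    return len(q) > MAX_REQUESTS_PER_WINDOW

# Hypothetical burst: one client sends 150 requests within a few seconds.
alerts = sum(record_request("client-42", t * 0.05) for t in range(150))
print(f"Anomalous requests flagged: {alerts}")  # 50 over the threshold
```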