MINISTERS’ DEPUTIES
CM Documents
CM(2025)154-add
17 November 2025[1]
1546th meeting, 10 December 2025
10 Legal questions
10.2 Committee on Artificial Intelligence (CAI)
b. HUDERIA Model: context-based risk analysis (COBRA) resources
Item to be considered by the GR-J at its meeting on 9 December 2025
Table of Contents
Relationship to the Framework Convention
Overview of COBRA Resources A, B and C
COBRA Resources A (List of Risk Factors Arising in the AI System’s Application Context)
COBRA Resources B (List of Risk Factors Arising in the AI System’s Design and Development Context)
COBRA Resources C (List of Risk Factors Arising in the AI System’s Deployment Context)
COBRA Resources E (List of Areas of Potential Concern from the Point of View of Human Rights, Democracy and the Rule of Law)
COBRA Resources F (Illustrative List of Sectors/Domains and Areas of Potential Concern)
The risk and impact assessment of artificial intelligence (AI) systems from the point of view of human rights, democracy and the rule of law (“the HUDERIA”) is a guidance which provides a structured approach to risk and impact assessment for AI systems specifically tailored to the protection and promotion of human rights, democracy and the rule of law. It is intended to play a unique and critical role at the intersection of international human rights standards and existing technical frameworks on risk management in the AI context.
The HUDERIA can be used by both public and private actors to aid in identifying and addressing risks and impacts to human rights, democracy and the rule of law throughout the lifecycle of AI systems.
The HUDERIA is a stand-alone, non-legally binding guidance that does not have legal effect. It is not mandatory, nor intended as an interpretive aid for the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, hereinafter referred to as “the Framework Convention”. In addition, whilst the HUDERIA has a facilitative role, it is not a means to implement the Framework Convention. Many existing or future frameworks, policies, guidance, standards or tools may be used to assist in conducting AI risk and impact management, including the HUDERIA.
Parties to the Framework Convention have the flexibility to use or adapt the guidance, in whole or in part, to develop new approaches to risk assessment or to use or adapt existing approaches in keeping with their applicable laws, provided that Parties fully meet their obligations under the Framework Convention, including the baseline for risk and impact management set out in its Chapter V.
The HUDERIA comprises both the HUDERIA Methodology and the HUDERIA Model.
At the general level, the HUDERIA Methodology describes high-level concepts, processes and elements guiding risk and impact assessment activities of AI systems that could have impacts on human rights, democracy and the rule of law.
At the specific level, the HUDERIA Model provides supporting materials and resources (such as flexible tools relevant for different elements of the HUDERIA process, resources of illustrative nature, and scalable recommendations) that can aid in the implementation of the HUDERIA Methodology. The present document forms part of the HUDERIA Model and contains the COBRA resources, which help operationalise the COBRA process through structured questions, explanations, and examples designed to support systematic risk identification and assessment.
These resources provide a library of knowledge that can facilitate considering and addressing risks and impacts related to human rights, democracy and the rule of law, including in other approaches to risk management. For greater certainty, the resources and guidelines in the HUDERIA Model are not provided as examples of best practices, nor do they set forth minimum standards; rather, they can be used to inspire customised approaches by both public and private actors to risk management that may be adopted by States.
This document is intended to provide illustrative guidance for the Context-Based Risk Analysis, the first element of the HUDERIA Methodology, and contains different elements, including material that can be used to guide reflection in the assessment of the potential risks and impacts of AI systems. In line with the flexible approach of the Methodology, these elements can be adjusted as necessary. This material may also be used in future to build interactive tools (e.g. online platform, interactive workflows) to facilitate the conduct of assessments of risks and impacts.
Elements in this section can be used to assist in identifying and assessing risk factors affecting the scope, scale, probability and reversibility of adverse impacts on human rights, democracy, and the rule of law arising in the AI system’s application context (COBRA Resources A), the AI system’s design and development contexts (COBRA Resources B) and the AI system’s deployment context (COBRA Resources C). These might need to be reviewed and considered iteratively throughout the lifecycle of an AI system.
In order to facilitate the mapping of potential impacts on human rights, democracy and the rule of law, COBRA Resources E lists areas of potential concern from the point of view of human rights, democracy and the rule of law, provides illustrative descriptions of these areas and gives examples of potential AI-related risks for each of them. The list can be used in conjunction with COBRA Resources F, which lists and briefly describes potentially relevant sectors and domains and matches them back to the areas of potential concern in COBRA Resources E.
This section provides illustrative examples of potential risk factors affecting the probability of adverse impacts on human rights, democracy and the rule of law arising in:
- an AI system’s application context (COBRA Resources A), such as the legal and regulatory environments in which the system is being developed and used, the system’s intended purpose, the categories of intended users and affected persons and other potential details pertaining to the system’s application context, such as any known legacies of bias and discrimination;
- the AI system’s design and development context (COBRA Resources B), in particular, its technical characteristics, such as known limitations of the system, considerations related to data collection, enrichment, storage, use and retirement; and considerations related to the algorithm or model itself, technical characteristics related to privacy and personal data protection, bias and discrimination, and explainability and interpretability;
- and the AI system’s deployment context (COBRA Resources C), such as the characteristics of its intended users and affected groups, the measures taken to ensure overall security, prevent harm and uphold human dignity, protect privacy and personal data, mitigate harmful bias, ensure proper training, guard against unintended uses, and ensure accountability and legal compliance.
The resources are illustrative, non-exhaustive, and open-ended, meaning they will need to be adapted over time to account for developments, for example, in the areas of technology and applicable legal and regulatory regimes.
COBRA Resources A (List of Risk Factors Arising in the AI System’s Application Context)
1. Sector or domain in which the AI system is being built
Examples of Potential Risk Factors

A.1.1
In light of its intended purpose, will the AI system play a significant role in a high-impact context — such as transport, social care, healthcare, or education — where its output could influence important decisions or actions? What is the context in which the system is operating? In such contexts, any malfunction or performance degradation throughout the system’s lifecycle can lead to harmful decisions, service interruptions, or safety incidents. Consider whether the system might affect areas where human rights, democracy or the rule of law could be at stake. COBRA Resources E outlines examples of potential harms and protected interests that may be relevant, and COBRA Resources F contains the list of sectors of potential relevance from the perspective of human rights, democracy and the rule of law. Example: a hybrid AI system used to control the kinematics of a surgical robot that assists doctors who are performing an emergency medical procedure.
A.1.2
Will the AI system perform a high-impact function independent of the sector in which it operates? What is the function of the system? Even outside high-impact sectors, functions of this kind can lead to harmful decisions, service interruptions, or other adverse outcomes that may affect the human rights of potentially affected persons or have effects on democracy and the rule of law (see COBRA Resources E for reference). Example: the system being developed is an AI-assisted HR recruitment tool for a civil service function. The tool is not situated within a safety-critical sector; however, the system could have a negative impact on groups of individuals in situations of vulnerability.
2. Existing Law and Regulatory Environment
A.2.1
Is the sector or domain in which the AI system will be deployed subject to extensive legal or regulatory oversight? If so, what are the specific obligations or standards (e.g. compliance, certification, supervision, ethical) that may apply to the AI system or its outputs? Legal or regulatory oversight in itself may be a signal of high-impact contexts. When such oversight exists, it is important to identify relevant obligations and standards that may interact with, or inform, HUDERIA considerations. The HUDERIA assessment focuses specifically on potential impacts on human rights, democracy and the rule of law. Other sectoral obligations (e.g. on financial stability, medical safety or cybersecurity) remain subject to their respective regulatory frameworks. Where relevant, existing assessments or certifications (compulsory or voluntary) may be referenced as supporting documentation to strengthen the evidentiary basis of the HUDERIA analysis — but they do not replace or merge with it. Example: a bank using a risk assessment tool to predict borrower default operates within the financial sector, where extensive regulation pertaining to market abuse, risk management, equity law, and competition law has historically been in place.
3. Scope of the deployment
Examples of Potential Risk Factors

A.3.1
In a scenario where the AI system is fully deployed as designed, at what scale will the AI system directly and/or indirectly affect persons and groups (local populations, national populations, global populations)? This question focuses on the spatial and jurisdictional reach of the AI system’s potential impact. Scaling can materially alter the risk profile of the system, with potential effects on local populations and on the communities and groups that comprise them, as well as on social and political processes. These impacts may be significant in relation to human rights, democracy and the rule of law (see COBRA Resources E, specifically areas (16) Democracy and (17) Rule of Law). Where systems are introduced gradually or tested in pilot or trial stages on a limited group, such phased deployment can also serve as a mitigation measure, allowing potential impacts to be identified, assessed and managed before broader roll-out. Example (local): a predictive policing system used by a municipal police department and operating only within the boundaries of the municipality. Example (national): a classification system used by a governmental organisation to determine individuals’ eligibility for social benefits affects individuals and groups within national populations. Example (global): a recommender system used by an international social media platform to personalise news delivery affects individuals and groups within the global population.
A.3.2
In a scenario where the AI system scales optimally, how many potentially affected persons will the AI system directly and/or indirectly affect? This question focuses on the numerical and demographic magnitude of potential impact. The use of the AI system at scale or mass level may generate significant effects on social and political processes and institutions, with particular implications for human rights, democracy and the rule of law (see COBRA Resources E, specifically areas (16) Democracy and (17) Rule of Law). Example (small): an AI chatbot is used by a local NGO with 60 employees to support survivors of domestic violence by providing information on legal rights and available shelters. Example (moderate): a regional employment agency uses a machine learning classifier to filter and prioritise job applicants for short-term public sector contracts, serving around 10,000 applicants per year. Example (large): a national health insurer deploys a predictive system to flag “at-risk” patients for early intervention programmes, affecting approximately 1.2 million policyholders. Example (very large): a government-run digital identity platform integrates AI-powered identity verification (e.g. facial recognition and document scanning) to control access to social protection schemes, affecting over 50 million people.
A.3.3
Considering the potential direct and indirect impacts of the AI system on persons, communities, and the environment, what is the widest timescale within which the AI system could affect persons and groups? The use of the AI system may give rise to longer-term effects extending beyond the period of its application, impacting the human rights of potentially affected persons, as well as social and political processes and institutions, particularly in relation to democracy and the rule of law (see COBRA Resources E, specifically areas (16) Democracy and (17) Rule of Law). Example (short term): a conversational AI system is used by a municipal housing office to respond to tenant queries about rent arrears, eviction notices, and complaint procedures. Example (medium term): a regional education authority uses an AI-driven profiling system to predict students’ likelihood of academic failure, triggering targeted interventions or redirection to vocational tracks. Example (long term): an AI classification system is deployed by a national migration agency to assess immigration, long-term visa, or citizenship applications as low-, medium-, or high-risk.
4. Existing Legacies of Bias
Examples of Potential Risk Factors

A.4.1
Do the sector(s) or domain(s) in which the AI system will operate, and from which the data used to train it are drawn, contain patterns of discrimination, inequality, bias, or inaccuracies that systematically lead to the unfair treatment of groups of individuals in situations of vulnerability (or groups of individuals with protected characteristics)? If so, how likely are these patterns to be replicated or augmented in the functioning of the system or in its outputs and short-, medium- and long-term impacts? Consider focusing upon equality and non-discrimination considerations surrounding the potential impacts of the AI system on the affected persons. If such patterns exist, the system may replicate or amplify them in development and deployment, resulting in group-differentiated outcomes and indirect discrimination. At scale and over short-, medium- and long-term horizons, this can entrench disparities and undermine equality and non-discrimination. Example: the use of a job application screening system in a science and technology industry based on historic hiring data replicates patterns of hiring discrimination, delivering unfavourable outcomes for groups of individuals in situations of vulnerability.
A.4.2
In view of the sector or domain in which the AI system operates, will the potentially affected persons include those who may be significantly or disproportionately impacted by the design and use of the system, particularly groups of individuals in situations of vulnerability (or groups of individuals with protected characteristics)? Vulnerability can be situational, arising from economic, social, institutional, technological, or other contextual conditions. Consider focusing upon equality and non-discrimination considerations surrounding the potential impacts of the AI system on the potentially affected persons. Pay close attention to risk factors B.5.1-B.5.4 in COBRA Resources B. Example: a risk assessment tool that is used by social workers to determine access to social benefits operates within the welfare administration domain, where persons in situations of economic or social vulnerability are significantly impacted.
A.4.3
Is the AI system likely to be accessed by or impact upon children? Special attention should be paid to children because their evolving capacities, dependency relationships, and limited ability to provide informed consent or seek redress create specific risk profiles that general equality or vulnerability assessments may not adequately capture. Impacts on children can be disproportionate and long-lasting due to developmental vulnerabilities, power asymmetries, and heightened data sensitivity. These risks raise particular concerns regarding mental well-being, equality and non-discrimination, as well as the rights of the child (see COBRA Resources E, specifically area (14) Children). Additional risks may also arise from inadequate stakeholder engagement with children or with those responsible for their welfare (e.g. parents, guardians, civil society organisations, child protection advocates) throughout the AI system lifecycle (see the SEP element in the HUDERIA Methodology).
5. Environment context
A.5.1
What are the AI system’s potential direct and indirect impacts on the environment (e.g. energy consumption, carbon emissions, resource use, or electronic waste)? To what extent are its development, deployment, and use subject to existing environmental standards, regulations, or sustainability commitments, and are these frameworks adequate to address the system’s potential impacts? The AI system may generate direct and indirect impacts on the environment, with potential consequences for energy efficiency, carbon emissions, resource use, and waste management. These environmental impacts may in turn affect the enjoyment of human rights, particularly for individuals in situations of vulnerability or groups with protected characteristics (see COBRA Resources E). Example: An AI model used to generate daily analytic dashboards continues to reprocess an unchanging dataset of over 100 TB every night, consuming significant computing power and energy without producing any new or meaningful outputs. This design choice results in unnecessary emissions and resource consumption with no corresponding public or user benefit.
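By way of illustration only, the following Python sketch shows one way the wasteful pattern in the example above might be avoided: the nightly job fingerprints its input and skips reprocessing when nothing has changed. The file paths and the regenerate_dashboards function are hypothetical assumptions, not part of the HUDERIA guidance.

```python
import hashlib
from pathlib import Path

STATE_FILE = Path("last_input.sha256")  # hypothetical location of the previous fingerprint


def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of the input file without loading it fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def regenerate_dashboards(data_path: Path) -> None:
    """Placeholder for the actual (energy-intensive) analytic job."""
    print(f"Reprocessing {data_path} and rebuilding dashboards...")


def run_nightly_job(data_path: Path) -> None:
    current = fingerprint(data_path)
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    if current == previous:
        # Input unchanged since the last run: skip reprocessing and the
        # associated energy and emissions cost.
        print("Input unchanged; reusing cached dashboards.")
        return
    regenerate_dashboards(data_path)
    STATE_FILE.write_text(current)
```

Such a check is only one possible mitigation; incremental processing or less frequent scheduling may be equally appropriate depending on the context.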
COBRA Resources B (List of Risk Factors Arising in the AI System’s Design and Development Context)
1. Decision to Design and Definition of Problem and Outcome
Examples of Potential Risk Factors

B.1.1
What other approaches besides building the AI system could feasibly address the intended need(s)? When compared against the AI system, what benefits and risks would the other approaches present—particularly in terms of impacts on human rights, democracy, and the rule of law? The choice to build and deploy an AI system should be critically assessed against alternative approaches that could better serve the intended needs with fewer negative impacts. Risks include marginal or new harms associated with introducing AI into an existing context, especially regarding human rights, democracy, and the rule of law. Assessing alternatives requires considering (a) potential adverse effects on human rights, democracy, and the rule of law (including marginal risks of AI introduction); (b) whether existing technologies and processes, including mitigation and governance techniques, could address the need; (c) the sufficiency of available resources and data; (d) the complexity of the use-contexts; (e) the potential benefits of using an AI-enabled solution, including opportunity costs of not proceeding; (f) the nature of the problem being addressed, in particular whether it is a policy or social problem. Failure to evaluate these alternatives may result in adopting a solution that unnecessarily increases risk or complexity.
2. Technological maturity
B.2.1
Is the AI system’s design based on well-understood and widely recognised validation techniques for a similar intended purpose in the same sector? If not, what risks may arise from limited technological maturity for the quality of the system’s performance, with resulting effects on the human rights of potentially affected persons or on democracy and the rule of law? If the AI system’s design and development is not based on well-understood techniques that have been previously validated for similar purposes in the same sector, including any applicable industry standards and best practices and taking into account the system’s intended use and acceptable error threshold, there is a risk that its technological immaturity will undermine the quality, reliability, and robustness of its performance. Such shortcomings can increase the likelihood of errors, reduce trustworthiness, and heighten the risk of adverse impacts on potentially affected persons. However, innovation often entails novel or experimental approaches that diverge from established practice, which may increase uncertainty and the risk of performance failures or unforeseen impacts. In such cases, the absence of sectoral precedent should be recognised as a heightened risk factor requiring compensating control measures — such as enhanced human oversight, independent expert review, or carefully supervised pilot deployments. These mechanisms are essential to manage uncertainty, verify the system’s reliability, and ensure that technological advancement proceeds in a safe and responsible manner.
3. Existing system
Examples of Potential Risk Factors

B.3.1
If the AI system is to replace an existing system, have the flaws and risks of the system being replaced been identified and mitigated? When an AI system replaces a human, technical, or hybrid system serving a similar function, or if its deployment otherwise leads to human operators over-relying on AI outputs, risks may arise if the limitations, flaws, or documented harms of the existing system are not properly understood and addressed. Failure to take these into account may lead to replication or amplification of existing risks, rather than improvement. The quality and impact of the AI system’s performance will depend in part on how well it learns from and improves upon the shortcomings of the replaced system.
B.3.2
Is the AI system replacing a human, technical, or hybrid system that is critical infrastructure or serves a high-impact function? If so, what risks (e.g. outages or disruptions) may arise from the process of updating or replacing the system, and how could these impact the human rights of potentially affected persons? If the AI system replaces a human, technical, or hybrid system that constitutes critical infrastructure or serves a high-impact function, the process of updating or replacing the system may pose significant risks. These may be risks arising from outages, disruptions of essential services, or failures that could directly affect the human rights of potentially affected persons, democracy or the rule of law. The critical nature of the replaced system magnifies the potential scale of such impacts.
4. Cybersecurity context
B.4.1
What motivations and opportunities does the AI system present for malicious actors to breach, corrupt, or misuse it? What risks are posed by attempts to compromise its safety, security, and robustness (e.g. through adversarial attacks, data poisoning, model inversion, or data breaches)? What risks could such malicious exploitation pose for human rights of potentially affected persons, democracy or the rule of law? The AI system may create motivations and opportunities for malicious actors to breach, corrupt, or otherwise manipulate it. Such actions could be driven by financial gain, political objectives, or other perceived benefits, and may result in the system being misused to facilitate human rights abuses, undermine trust, or cause harm to potentially affected persons.
5. Data Quality and Personal Data Protection
B.5.1
Are the data used in designing and developing the AI system sufficiently representative of the persons and groups affected, sufficiently accurate, complete, reliable, relevant, appropriate, up-to-date, and of adequate quantity and quality for its intended use case, domain, function, and purpose? If not, what risks may arise for affected persons from any shortcomings identified? Which techniques were used to perform this assessment? If the data used in designing and developing the AI system are not sufficiently representative, sufficiently accurate, complete, reliable, relevant, appropriate, up-to-date, and of adequate quantity and quality for the intended use case, domain, function, and purpose, significant risks may arise. There may be risks arising from (a) consequences of using inaccurate, inconsistent, or incomplete data; (b) lack of proper recording, traceability, and auditability of data provenance and lineage across the system lifecycle; (c) measurement errors or biases introduced during data collection (through human involvement or otherwise); (d) missing or unusable data in collected or procured datasets; (e) lack of transparency and accessibility of labelling/annotation processes for audit, oversight, and review; (f) biases introduced by human labellers and annotators, including via proxies, without adequate safeguards; (g) biases introduced by automated labelling or annotation, if not subject to sufficient human oversight and transparent processes. Such shortcomings can undermine the reliability, trustworthiness, and fairness of the AI system and may negatively affect human rights of potentially affected persons.
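As a purely illustrative aid to the representativeness question above, the following Python sketch compares group shares in a training dataset against externally sourced reference proportions. The column name, figures and tolerance are invented assumptions; a real assessment would rely on validated reference statistics and more robust statistical techniques.

```python
import pandas as pd


def representation_gaps(df: pd.DataFrame, column: str,
                        reference: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share in the data deviates from the reference
    population share by more than the chosen tolerance."""
    observed = df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference.items():
        gap = float(observed.get(group, 0.0)) - expected
        if abs(gap) > tolerance:
            gaps[group] = round(gap, 3)
    return gaps


# Illustrative usage with made-up figures: the "north" region is
# over-represented relative to a hypothetical census baseline.
train = pd.DataFrame({"region": ["north"] * 700 + ["south"] * 200 + ["east"] * 100})
census_shares = {"north": 0.40, "south": 0.35, "east": 0.25}
print(representation_gaps(train, "region", census_shares))
# {'north': 0.3, 'south': -0.15, 'east': -0.15}
```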
B.5.2
Does the design and development of the AI system involve the use of personal data? If so, what risks to privacy and personal data protection may arise? If personal data are used in the design and development of the AI system, there is a risk that personal data protection requirements may not be respected. As applicable, there may be risks associated, among other things, with (a) insufficient information provided to affected persons and stakeholders about the consent or other legitimate basis for processing; (b) reliance on implied consent or unclear legal bases without consultation of affected persons and stakeholders regarding acceptability of data use; (c) insufficient safeguards for individuals regarding the use of their data once shared or used for training, including where applicable, the ability to retract or limit further use; (d) situations with potential divergence between applicable personal data-protection rules such as purpose limitation and the AI practices of large-scale or iterative data reuse; (e) the possibility of deanonymisation or re-identification through data linkage with existing, publicly available, or easily obtainable datasets, if not properly managed. Such shortcomings can undermine, among other things, privacy and personal data protection.
B.5.3
Will the AI system use dynamic data collected and processed in real time (or near real time) for continuous learning, adaptation, or performance optimisation? If so, what risks may arise concerning safety, security, reliability, robustness, data quality, data integrity, and – where appropriate – non-discrimination and bias mitigation? If the AI system uses dynamic data collected and processed in real time (or near real time) for continuous learning, adaptation, or performance optimisation, new risks may arise. There may be risks associated with threats to safety, security, reliability, robustness, data quality, and data integrity. Real-time adaptation may also introduce or amplify risks of bias and discrimination if not properly managed. Such risks can undermine trust and may negatively impact human rights of affected persons.
B.5.4
To what extent do the domain and type of data collected or procured pose risks of rapid or unexpected distributional shifts or drifts that could adversely impact the accuracy and performance of the AI system? The domain in which data are collected or procured, and the type of data used, may pose risks of rapid or unexpected distributional shifts or drifts. Such shifts can undermine the accuracy, reliability, and performance of the AI system, leading to errors or harmful outcomes for potentially affected persons if not anticipated and addressed. The absence of dynamic assessment, re-assessment, validation, and monitoring increases these risks.
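As an illustrative sketch of how such shifts might be detected, the following Python fragment applies a two-sample Kolmogorov-Smirnov test to a single numeric feature. The synthetic data and the significance threshold are assumptions for demonstration; production monitoring would typically cover many features and use tests appropriate to the data type.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # stand-in for development data
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)      # stand-in for shifted live traffic

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # assumed alert threshold; to be set per use case
    print(f"Possible distributional shift (KS={statistic:.3f}, p={p_value:.1e}): "
          "trigger re-assessment, validation and, if needed, retraining.")
else:
    print("No significant shift detected in this feature.")
```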
6. Model Development and Model Implementation Context
Examples of Potential Risk Factors

B.6.1
What potential risks of bias and indirect discrimination may arise in the context of model design, development and implementation? Model design, development and implementation may introduce risks of bias and indirect discrimination. These risks may arise (a) during feature engineering (manual or automated), through grouping, disaggregation, or exclusion of input features related to protected or sensitive characteristics – or their proxies – resulting in emergent bias; (b) through inferences generated by the model’s learning mechanisms that are unreasonable, unfair, disparate in impact, or influenced by hidden proxies for discriminatory features, thereby shaping outputs in discriminatory ways. If unaddressed, such risks can lead to unequal treatment and systemic discrimination, undermining the human rights of potentially affected persons.
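To illustrate the proxy problem in point (a), the following Python sketch flags numeric features that correlate strongly with a protected characteristic and may therefore act as hidden proxies. The dataset, column names and threshold are invented assumptions; correlation screening is only one of several complementary techniques and does not by itself establish or rule out indirect discrimination.

```python
import pandas as pd


def flag_proxy_features(df: pd.DataFrame, protected: str,
                        threshold: float = 0.5) -> pd.Series:
    """Return features whose absolute correlation with the (encoded)
    protected attribute exceeds the threshold."""
    encoded = df.copy()
    encoded[protected] = encoded[protected].astype("category").cat.codes
    corr = encoded.corr()[protected].drop(protected)
    return corr[corr.abs() > threshold]


# Illustrative usage: a postcode band that closely tracks group membership
# would be flagged as a candidate proxy even if "group" itself is excluded
# from the model's inputs.
data = pd.DataFrame({
    "group": ["a", "a", "b", "b", "a", "b"],
    "postcode_band": [1, 1, 9, 8, 2, 9],
    "years_experience": [3, 5, 4, 2, 6, 3],
})
print(flag_proxy_features(data, "group"))
```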
B.6.2
Is the AI system built on techniques that are inherently hard to fully explain, predict or verify, such as non-deterministic systems, probabilistic models, or evolving/dynamic models? If so, what are the potential risks that it could negatively impact the human rights of potentially affected persons, especially when the AI system is interacting with individuals? If the AI system is built on techniques that are inherently difficult to fully predict, verify or explain — such as non-deterministic systems (different outputs for the same input), probabilistic models (likelihood-based outputs, e.g. Bayesian models or LLMs), or evolving/dynamic models (continuously learning or adapting) — specific risks may arise. There may be risks associated with: (a) reduced transparency, limiting the ability of affected persons to understand, challenge, or seek redress for outputs; (b) unpredictable or unstable behaviour, where outputs may vary in ways that are difficult to foresee or audit; and (c) misalignment between the model’s intelligibility/accessibility and sector-specific requirements (legal or otherwise) or expectations for its intended function. These risks may be amplified where appropriate mitigation measures are lacking — such as when organisations have insufficient capacity to provide complementary explanation mechanisms (e.g. surrogate models, feature-importance analyses), or when no safeguards exist to ensure reversibility or human intervention. Such risks can negatively impact human rights, democracy and the rule of law by reducing accountability, transparency, and trust in the system’s outputs.
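As one illustration of the complementary explanation mechanisms mentioned above, the following Python sketch fits a shallow decision tree as a global surrogate for an opaque model, approximating its behaviour with human-readable rules. The synthetic dataset and model choices are assumptions for demonstration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)

# Stand-in for the hard-to-explain model under assessment.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns the black box's predictions, not the ground truth,
# so its rules approximate how the opaque model behaves.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))  # how faithfully the tree mimics the model
print(f"Surrogate fidelity to the opaque model: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The fidelity score matters: if the surrogate mimics the opaque model poorly, the explanation itself is unreliable and should not be presented to affected persons as an account of how decisions are made.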
B.6.3
To what extent has appropriate evaluation, verification, and validation of the AI model been ensured throughout its lifecycle, and how might any gaps identified affect the level of risk associated with its deployment or use? While evaluation, verification, and validation mechanisms are themselves mitigation and control measures, their absence or inadequacy constitutes a key risk factor affecting the reliability and safety of the AI model. If the AI model is not subject to sufficient monitoring, evaluation, verification, and validation, there is a risk that errors, flaws, or biases in its design and functioning will go undetected. Without transparent processes, including external peer review and independent expert evaluation, the reliability, fairness, and safety of the system may be compromised, leading to harmful impacts on potentially affected persons and their human rights, in addition to impacts on democracy and the rule of law.
B.6.4
To what extent does the AI system include processes of monitoring and regular re-evaluation to keep pace with real-world changes that may cause concept drifts or shifts in underlying data distributions? What risks may arise for affected persons if such processes are absent or insufficient? Monitoring and re-evaluation operate as control measures designed to maintain system performance over time; it is their absence or insufficiency that creates the risk of degradation or unfairness in practice. If the AI system is not monitored and regularly re-evaluated to keep pace with real-world changes, there is a risk that it may gradually diverge from its intended purpose or no longer reflect the conditions under which it was designed and validated (concept drift or shifts in underlying data distributions). These changes can degrade the accuracy, fairness, and reliability of the system over time, leading to harmful impacts on potentially affected persons.
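A minimal sketch of such monitoring, assuming a baseline metric established during validation and periodic evaluation windows, is given below in Python; the baseline, margin and monthly figures are invented for illustration.

```python
BASELINE_ACCURACY = 0.91  # accuracy established during validation (assumed)
ALERT_MARGIN = 0.05       # agreed tolerable degradation before re-evaluation (assumed)

# Accuracy measured on labelled samples from successive deployment windows.
monthly_accuracy = {
    "2025-01": 0.90,
    "2025-02": 0.89,
    "2025-03": 0.84,  # degradation consistent with concept drift
}

for window, score in monthly_accuracy.items():
    if BASELINE_ACCURACY - score > ALERT_MARGIN:
        print(f"{window}: accuracy {score:.2f} breaches the agreed margin; "
              "trigger re-evaluation and, if needed, retraining.")
    else:
        print(f"{window}: accuracy {score:.2f} within tolerance.")
```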
B.6.5
To what extent are performance metrics for the AI system — beyond accuracy (e.g. sensitivity, precision, specificity) — appropriately selected, monitored, and communicated in ways that reflect the specific context of the use case, minimise the risk of misinterpretation or misuse by users and stakeholders, and help ensure transparency and accountability throughout the system’s lifecycle? What risks may arise for affected persons if they are not? The selection and communication of performance metrics function as safeguards that support transparency and accountability; where these mechanisms are weak or incomplete, they become risk points in themselves. If the performance metrics of the AI system are not carefully selected, contextually defined, monitored, and transparently reported, several risks may arise. These may be risks associated with: (a) prioritising certain error types (e.g. false positives or false negatives) without considering the context of use and the potential disproportionate impacts of differential error rates on individuals in situations of vulnerability or on groups with protected characteristics; (b) limiting performance assessment to accuracy while omitting other relevant measures (e.g. sensitivity, precision, specificity), which can obscure biases or weaknesses affecting reliability and fairness; (c) presenting metrics in formats that are overly technical, inaccessible, or lacking contextual explanation, preventing users and stakeholders from properly interpreting system performance; and (d) irregular reporting or insufficient monitoring that hinder identification of performance variations across population groups or over time.
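As an arithmetic illustration of point (b), the following Python sketch derives sensitivity, precision and specificity alongside accuracy from a binary confusion matrix. The counts are invented; they show how a system can report 96% accuracy while finding only 60% of the true positive cases.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Derive standard binary classification metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall: share of actual positives detected
        "precision": tp / (tp + fp),    # share of positive flags that are correct
        "specificity": tn / (tn + fp),  # share of actual negatives correctly cleared
    }


# Hypothetical screening tool with a rare positive class.
for name, value in classification_metrics(tp=30, fp=20, tn=930, fn=20).items():
    print(f"{name}: {value:.2%}")
# accuracy: 96.00%, sensitivity: 60.00%, precision: 60.00%, specificity: 97.89%
```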
B.6.6
If the system substantially informs or takes decisions that may impact human rights in ways that require mechanisms for human oversight, review, or intervention, have such features been integrated at the model design and implementation stages to ensure that the system can support procedural and substantive fairness in its future use? Where an AI system is capable of substantially informing or taking decisions affecting individuals’ rights or access to services, its design and implementation should include features that support procedural and substantive fairness. Such features may include: (a) the capacity for human review or override (both ex ante and ex post); (b) built-in operational constraints preventing the system from autonomously exceeding its intended scope; (c) traceability, auditability, and explainability mechanisms; and (d) clear assignment of oversight responsibilities to individuals with the necessary competence, training, and authority. These elements help ensure that, once deployed, the AI system functions within a framework that enables fairness, accountability, and effective human control.
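By way of illustration of feature (a), the following Python sketch shows a simple ex ante review gate in which outputs below an assumed confidence threshold are routed to a human reviewer rather than applied automatically. All names, thresholds and the queue mechanism are hypothetical.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.80  # assumed minimum confidence for automated handling


@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float


def route(decision: Decision, human_queue: list) -> str:
    """Apply the outcome automatically only above the threshold; otherwise
    hand the case to a competent human reviewer with a traceable record."""
    if decision.confidence < REVIEW_THRESHOLD:
        human_queue.append(decision)
        return f"{decision.subject_id}: escalated to human reviewer"
    return f"{decision.subject_id}: automated outcome applied ({decision.outcome})"


queue: list = []
print(route(Decision("case-001", "eligible", 0.95), queue))
print(route(Decision("case-002", "ineligible", 0.55), queue))
print(f"{len(queue)} case(s) pending human review")
```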
B.6.7
What potential is there for the AI system to be repurposed or used in ways that could raise additional ethical, legal, or regulatory concerns? Repurposing (scope creep, dual-use, capability escalation) can materially change the system’s risk profile (including from the personal data protection perspective) by shifting context, data uses, or decision pathways. The potential for such repurposing often originates from design and development decisions (e.g. modular architectures, transferable models, or open interfaces). HUDERIA remains relevant when a system is repurposed or its function evolves, as such changes may generate new or amplified impacts on human rights, democracy, and the rule of law. In these cases, the assessment should be updated or repeated to account for the new context. Example: a computer vision system used to recognise individuals’ faces as a form of identity authentication is repurposed for real-time remote biometric identification of petty criminals at a concert venue.
COBRA Resources C (List of Risk Factors Arising in the AI System’s Deployment Context)
1. Privacy and Personal Data Protection
Examples of Potential Risk Factors

C.1.1
Does the AI system process personal data, including sensitive categories of data (as, for instance, set out in Article 6 of Convention 108+ or other relevant frameworks)? If so, what risks may arise concerning the privacy and data protection rights of potentially affected persons, and the responsibilities of those developing, deploying, or using the system in relation to data protection and information governance? If the AI system processes personal data, there is a risk that such processing may not comply with applicable personal data protection and privacy laws or standards. Risks include unlawful or unfair collection and use, inadequate safeguards, insufficient accountability in handling personal data, and disclosure of personal data in outputs.
C.1.2
Is the AI system designed for individual-targeted curation, profiling, prediction, or behavioural steering? If so, what risks may arise, including if potentially affected persons cannot access sufficiently accessible information in this connection? If the AI system is designed for individual-targeted curation, profiling, prediction, or behavioural steering, there is a risk that affected persons may not benefit from the highest level of transparency and information rights available under the applicable legal framework (particularly where sensitive data are involved, as defined in instruments such as Convention 108+). In particular, they may lack (a) clear information on the collection and use of their personal data; (b) the rationale behind the system’s outputs, explained in plain, non-technical language; (c) the purpose of the curation, profiling, prediction, or behavioural steering; (d) the categories of persons or bodies to whom their data, profile, or the results of processing may be communicated.
2. Non-discrimination and bias
C.2.1
How might the modalities of deployment — including where, how, and by whom the AI system is implemented — influence the emergence or amplification of bias or discriminatory effects identified earlier in the lifecycle? Bias and discriminatory patterns identified during design or development may be reinforced or mitigated depending on how the AI system is deployed.
3. System operators
C.3.1
What potential risks to the human rights of those operating the system may arise from the deployment of the AI system? The deployment of the AI system may adversely affect the human rights of those operating it (e.g. by creating excessive surveillance, undue stress, unsafe working conditions, or erosion of professional autonomy). Such impacts could compromise not only their human rights but also the integrity and accountability of the system’s operation.
4. Training
C.4.1
Are system operators sufficiently trained to fully understand the system’s limitations and to intervene effectively in situations of complexity, uncertainty, anomaly, or failure? If not, what risks to human rights, democracy and the rule of law and to system accountability may arise in this connection? If operators of the AI system are not sufficiently trained, they may fail to understand the system’s intended uses and limitations, or to recognise conditions of complexity, uncertainty, anomalies, or failures. This could prevent them from exercising appropriate human judgment and intervention, increasing the risk of harm to potentially affected persons and undermining the accountability, transparency, and reliability of the system.
5. Human in the loop
Examples of Potential Risk Factors

C.5.1
Does the AI system have a high level of automation or operational 'autonomy'? If so, what risks might arise from interactions with affected persons that are not fully mediated by human control? AI systems with a high level of automation or operational “autonomy” may interact with potentially affected persons in ways not fully mediated by human control. This creates risks of undermining human dignity and individual autonomy, as well as eroding transparency, oversight, accountability, and responsibility. While full replacement of human operators may occur only in certain contexts, there are many scenarios in which human operators remain involved but increasingly over-rely on AI outputs. This can result in a false or “illusory” sense of meaningful human oversight, where the task is largely executed by the system even though nominal human supervision exists. Without appropriate safeguards, such interactions could adversely affect human rights of potentially affected persons.
6. Out-of-scope uses
C.6.1
How could malfunctions, misuse, or malicious application of the AI system affect the human rights of potentially affected persons, democracy and the rule of law? To what extent are these risks identified? There is a risk that the AI system may malfunction, be misused, or abused – for example, through breakdown or technical failure, use beyond its intended scope, or deliberate malicious misapplication. Such events could have adverse effects on the human rights of potentially affected persons, in particular by undermining the transparency, oversight, accountability, and responsibility of the system. This risk category could also affect individual autonomy and thereby impact democracy and the rule of law when occurring at scale, for example through the dissemination of misinformation and disinformation.
7. Proximity to decision-making or action
C.7.1
To what extent does the AI system directly take, review, or substantially inform decisions or actions that may affect human rights, democracy, or the rule of law? How does this degree of involvement influence the potential level of risk? The degree of proximity of the AI system to the relevant activity (decision-making or action) can amplify or reduce its potential impact on human rights. Where the system directly takes or reviews decisions, or substantially informs them, the risk of adverse human rights impacts may be higher compared to systems that only provide peripheral support or no meaningful influence.
COBRA Resources E (List of Areas of Potential Concern from the Point of View of Human Rights, Democracy and the Rule of Law)
This section provides a tool which could be used to perform and/or inform the COBRA assessment. It outlines areas of potential concern with, for each of them, an indicative description, examples of potential AI-related risks, potentially relevant domains and references to potentially relevant human rights instruments and, as appropriate, other legal instruments.
The list may be understood as an illustrative array of elements that can be used to assist the assessment of AI-related risks in different contexts. The resource is indicative, non-exhaustive, and open-ended, meaning it is bound to evolve in the light of developments in the areas of technology and relevant legal regimes. Seventeen areas of potential concern have been selected. These areas represent fundamental aspects enshrined in various international human rights instruments, underlining the protection of human dignity and individual autonomy.
(1) Physical and mental integrity and human dignity
Illustrative description
Human rights provisions provide a number of protections regarding the physical and mental integrity of individuals, reinforcing the importance of personal and individual autonomy and self-ownership. Many of these protections may apply, for example, in situations involving the use of force by authorities exercising law enforcement powers and where individuals are held in custody. In some countries these protections may include investigations into the conduct of public authorities in health or life-threatening situations. Human rights protections may extend to various other situations, where physical and mental integrity of individuals may be at stake.
Examples of potential AI-related risks
AI risks to personal, physical, and mental integrity, as well as human dignity, may stem from issues such as lack of transparency in decision-making, faults in the design (including where the design is not age-appropriate), development or use of AI systems, and the potential for AI-driven systems to dehumanise and objectify individuals. Risks may also arise from AI systems that manipulate individuals.
Potentially relevant human rights obligations
The following legal provisions may be of relevance:
ICCPR[3]: Articles 6, 7, 9 and 10
ICESCR[4]: Articles 7 (decent work) and 12 (health)
ECHR[5]: Articles 2 (Right to life), 3 (Prohibition of torture), 13 (Right to an effective remedy)
ESC[6]: Articles 7 (protection of children and young persons), 8 (protection of maternity), 11 (protection of health), 13 (social and medical assistance), 14 (social welfare systems), 16 (protection of families), 17 (protection of children), 18 (protection of migrant workers and their families), 23 (social protection of the elderly), 26 (dignity at work), 30 (protection against poverty and social exclusion), 31 (housing)
EU Charter[7]: Articles 1 (Human dignity), 3 (Right to the integrity of the person), 4 (Prohibition of torture or degrading treatment or punishment)
Pact of San Jose[8]: Articles 3 (Right to juridical personality), 4 (Right to life), 5 (Right to humane treatment); Article 12 (Right to food) of the Protocol of San Salvador
(2) Physical liberty and security, movement and residence
Illustrative description
Human rights and rule of law concerns may arise in regard to the physical treatment of individuals in situations of arrest, detention, punishment, or human trafficking. Protections exist against arbitrary arrest or detention, torture, cruel, inhuman or degrading treatment or punishment, unjustified restrictions on freedom of movement, and human trafficking, and to ensure access to remedy and justice.
Examples of potential AI-related risks
The collection of personal data and/or use of AI tools may enable or result in the inference of behaviours or activities that are used to justify or facilitate activities such as arrest or detention, or restrictions of movement and residence, even if these inferences are incorrect; such incorrect decisions made or informed by AI tools which are acted upon without sufficient further examination can lead to wrongful treatment or undermine the ability of an accused person to challenge the lawfulness of their detention. AI systems may be used to facilitate or enable human trafficking, for instance by generating content that is used to groom or lure individuals into situations of exploitation. As is the case with human trafficking in general, women and children are at greatest risk. AI may also be used to perform mass surveillance and to identify individuals for no justifiable or reasonably expected purpose.
Potentially relevant human rights obligations
The following legal provisions may be of relevance:
ICCPR: Articles 7, 8, 9, 10, 11, 12, 13 and 14
ILO Convention 105 (prohibition of forced labour)
ECHR: Articles 3 (Prohibition of torture), 4 (Prohibition of slavery and forced labour), 5 (Right to liberty and security), 13 (Right to an effective remedy); Article 2 (Freedom of movement) of Protocol No. 4
ESC: Article 1(2) (The right to work)
EU Charter: Articles 3 (Right to the integrity of the person), 4 (Prohibition of torture or degrading treatment or punishment), 5 (Prohibition of slavery and forced labour), 6 (Right to liberty and security), 45 (Right to freedom of movement and residence)
Pact of San Jose: Articles 5 (Right to humane treatment), 6 (Freedom from slavery), 7 (Right to personal liberty), 22 (Freedom of movement and residence)
(3) Justice and administration of justice
Illustrative description
This area covers access to justice and the complex set of minimal rules regulating access to justice in the broader field of justice and public administration. Human rights protections in this area cover various rights and guarantees with respect to access to remedies, the quality of examination of cases by courts (such as, for example, providing for a public pronouncement of a judgment), the quality of participation in court proceedings (such as, for example, providing for a fair and public hearing) and the requirements in respect of the composition of courts (such as, for example, requiring a competent, independent and impartial tribunal). However, in many countries a wide scope of matters is ruled by public administration bodies, with citizens having access to courts at a later stage of proceedings with respect to certain matters. Public administration bodies are equipped with decision-making powers that allow them to significantly influence individuals and their important life interests.
Examples of potential AI-related risks
The use of AI in the justice system introduces risks such as harmful bias, lack of transparency or insufficient human oversight, the use of flawed data (which is particularly problematic in the context of the administration of justice), the erosion of judicial independence and the loss of public confidence in justice institutions. It also raises concerns about due process infringements which could result in violations of human rights.
Potentially relevant human rights obligations
The following legal provisions may be of relevance:
ICCPR: Articles 2.3, 9, 14 and 26
ECHR: Articles 5 (Right to liberty and security), 6 (Right to a fair trial), 8 (Right to respect for private and family life), 13 (Right to an effective remedy); Article 3 of Protocol No. 7 (Compensation for wrongful conviction)
EU Charter: Articles 41 (Right to good administration), 42 (Right of access to documents), 47 (Right to an effective remedy and to a fair trial), 48 (Presumption of innocence and right of defence)
Pact of San Jose: Articles 8 (Right to a fair trial), 10 (Right to compensation), 24 (Right to equal protection) and 25 (Right to judicial protection)
(4) Privacy and data protection
Illustrative description
Privacy and data protection rights ensure limits to outside influence over private affairs and that people can keep their personal data, behaviours, and decisions from being disclosed or monitored without their consent, safeguarding personal autonomy and dignity. Specifically in the public sector context, privacy and data protection laws protect individuals’ personal information by ensuring that public authorities or entities acting on their behalf only process personal information that they are authorised to process. Privacy protections also allow individuals to maintain control over how they grow, define and present their personal identity. These protections guard against unauthorised use or misrepresentation of an individual’s personal attributes, such as name, likeness, or other distinguishing characteristics. There are also aspects of privacy and individual autonomy which relate to interaction with others. Human rights protections in this area cover various rights and guarantees with respect to family life, marriage and, more generally, relationships with others, both privately and in a work setting.
Examples of potential AI-related risks
The collection of vast amounts of personal data (even when such data is collected transparently), as well as the use of AI techniques to infer personal information from it, can create serious risks to both privacy and identity. From invasive surveillance and re-identification of anonymised data to misuse of biometrics, creation of deepfakes, identity theft and behavioural manipulation, AI technologies have the ability to undermine personal autonomy and expose sensitive information.
Potentially relevant human rights obligations
The following legal provisions may be of relevance:
ICCPR: Article 17
UNCRC[9]: Article 16
ECHR: Articles 8 (Right to respect for private and family life), 10 (Freedom of expression), 12 (Right to marry)
EU Charter: Articles 7 (Respect for private and family life), 8 (Protection of personal data) and 9 (Right to marry and right to found a family)
Convention 108+[10]: Articles 3, 5, 6, 9
Pact of San Jose: Articles 11 (Right to privacy), 17 (Rights of the Family), 18 (Right to a name), 20 (Right to nationality)
(5) Equality and non-discrimination[11]
Illustrative description
Rights to equal protection under the law and to not be discriminated against in the enjoyment of rights and freedoms complement the other substantive provisions of various international human rights instruments. Discrimination may be direct or indirect, the result of association, based on one or more grounds, or based on an action or a failure to act. For general reference, protected characteristics referred to in the body of existing human rights law, consisting of international (at both global and regional levels) and domestic legal instruments, as applicable in each State, may include, but are not limited to: (1) sex; (2) “race”[12] and colour; (3) language; (4) religion; (5) political or other opinion; (6) national or social origin; (7) association with a national minority; (8) property; (9) birth; (10) age; (11) gender identity and expression; (12) sexual orientation; (13) sex characteristics; (14) health and disability; (15) parental and marital status; (16) immigration status; (17) status related to employment. Respect for equality may go beyond prohibiting less favourable treatment without justification based on one or more protected characteristics.
Examples of potential AI-related risks
Discriminatory trends can be created or exacerbated by:
- the purpose of a system, if that purpose is itself discriminatory;
- datasets that are not sufficiently representative or that encode historical inequalities;
- algorithmic systems that do not sufficiently account for patterns of bias[13] in datasets or that otherwise generate biased outputs;
- inadequate testing and evaluation related to bias;
- a lack of representation or consideration (irrespective of whether this takes place negligently or intentionally) of the perspectives of diverse groups during AI development or other AI activities; or
- insufficient training and other governance measures (such as inadequate oversight processes) for developers or users of AI systems in relation to potential discrimination.
At the same time, a system trained with theoretically completely unbiased data may still result in unfair downstream impacts for reasons relating to how it is deployed.
Potentially relevant human rights obligations
The following legal provisions may be of relevance:
ICCPR: Articles 2, 3, 23, 24 and 26
ICESCR: Articles 2.2 and 3
ILO Conventions No. 111 (Discrimination) and No. 190 (Violence and Harassment at the workplace)
ECHR: Article 14 (Prohibition of discrimination); Article 1 of Protocol No. 12
ESC: Articles 1(2), 4(3), 19(4-5), 20
EU Charter: various provisions within Chapter III on Equality (Articles 20-26), Article 14 (Right to education), Article 15 (Freedom to choose an occupation and right to engage in work), Article 16 (Freedom to conduct a business)
Pact of San Jose: Article 24 (Right to equal protection); and dedicated legal instruments, such as ICERD[14], CEDAW[15], UNCRC and UNCRPD[16]
|
(6) Thought, conscience, religion and belief |
Illustrative description |
|
Human rights provisions protect an individual's ability to hold, practice, and express their religious beliefs or the lack thereof without interference or discrimination. These include protections for the ability to worship, to change one’s religion or beliefs, and to observe religious practices (such as dress, dietary restrictions, or religious holidays) publicly or privately. They also protect against coercion to adopt or renounce any religion and ensure equal treatment regardless of religious affiliation, whether real or supposed. Freedom of conscience protects the ability to hold and act upon closely-held beliefs of right and wrong, not necessarily based in religious systems. Examples could include atheism, vegetarianism, or conscientious objection to military service. Although there is some debate over the content of freedom of thought, it could include not having to reveal one’s thoughts, not being punished for one’s thoughts, and protection from impermissible alteration of thought. |
|
|
Examples of potential AI-related risks |
|
|
AI-related risks to religious rights can stem from algorithms that create or perpetuate harmful bias affecting people on the basis of their actual or perceived religious beliefs, through privacy violations, or by facilitating religious profiling, suppression of religious expression, and manipulation of beliefs. These risks threaten individuals’ ability to practice and express their religion without fear of discrimination, surveillance, or coercion. Additionally, AI systems, such as emotion recognition systems and systems that perpetuate disinformation, could interfere with freedom of thought or conscience and could impose a chilling effect on the exercise of those freedoms. The use of AI systems can compromise individual autonomy in matters of belief by subtly shaping perceptions, influencing thought processes, and potentially restricting the freedom to form and express religious or spiritual convictions without undue external interference. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICCPR: Articles 18, 19 and 20 ECHR: Article 9 (Freedom of thought, conscience and religion), Article 10 (Freedom of expression), Article 14 (Prohibition of discrimination) EU Charter: Article 10 (Freedom of thought, conscience and religion), Article 11 (Freedom of expression and information), Chapter III on Equality Pact of San Jose: Article 12 (Freedom of conscience and religion), Article 13 (Freedom of thought and expression) UNCRC: Article 14 |
|
|
(7) Opinions, expression and information |
Illustrative description |
|
Protections for opinions and free expression apply to holding, seeking, imparting and receiving information and ideas through any media. Forms of expression may be political, artistic (including the production of plays and performances), personal or commercial, and may include news reporting, photographs and forms of conduct such as boycotts or campaigns, clothing, symbols, etc. These protections may also apply in certain relations governed by private law (e.g. labour relations) and to statements made in private correspondence or in meetings behind closed doors. |
|
|
Examples of potential AI-related risks |
|
|
AI-related risks to opinions, expression and access to information may include potential censorship, bias in algorithmic systems used for content selection, and the creation and promulgation of inaccurate information. Additionally, AI-facilitated surveillance can stifle free speech, while the use of algorithms can reduce access to the full range of viewpoints. The use of AI systems can undermine individual autonomy in expression by subtly influencing opinions, shaping discourse, and limiting the ability to freely form, articulate, and share ideas without covert manipulation or constraint. Based on how an individual expresses themselves publicly, an AI system may also draw incorrect inferences, which could alter the information or access from which the individual would otherwise benefit (for example, an individual wearing certain clothing could be categorised as belonging to a group to which they do not belong). |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICCPR: Articles 19 and 20 UNCRC: Article 13 ECHR: Article 10 (Freedom of expression) EU Charter: Articles 10 (Freedom of thought, conscience and religion), 11 (Freedom of expression and information) and 42 (Right of access to documents) Pact of San Jose: Articles 13 (Freedom of thought and expression) and 14 (Right to reply) |
|
(8) Peaceful assembly and association |
Illustrative description |
|
Human rights protections exist in respect of non-violent gatherings and walkabouts, including meetings in private and public places. There are also protections for voluntary groupings for a common goal, covering the possibility of forming or being affiliated with a group or organisation pursuing particular aims. Prominent examples of such associations are political parties, minority and religious associations and trade unions. |
|
|
Examples of potential AI-related risks |
|
|
Potential AI-related risks in this area may include arbitrary or discriminatory surveillance of members of associations and meeting participants, predictive policing, social media censorship, misinformation, and targeted harassment, all of which can deter individuals from exercising their rights to gather peacefully, to associate freely and to participate in government. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICCPR: Article 21 UNCRC: Article 15 ECHR: Article 11 (Freedom of assembly and association) ESC: Articles 5, 6.1-2, 21, 28 and 29 EU Charter: Articles 12 (Freedom of assembly and of association) and 28 (Right of collective bargaining and action) Pact of San Jose: Articles 15 (Right of assembly), 16 (Freedom of association) |
|
|
(9) Property |
Illustrative description |
|
Human rights protections in this area are generally not limited to covering the ownership of physical goods, but may extend to other rights and interests, including intellectual property rights, constituting assets that can also be regarded as “property rights”. |
|
|
Examples of potential AI-related risks |
|
|
Property rights may be adversely affected by AI systems through such issues as: - the use of flawed algorithms in finance and real estate to make automated decisions regarding property values, loans or credit; - AI-generated content, such as artwork, music or text, and the development of AI models, which may raise complex issues related to intellectual property rights; - rapid advancements in AI technology that, without human oversight, could outpace existing legal frameworks, leaving gaps in property rights protection; - the use of intellectual property to train models where laws are unclear or inconsistently applied as to whether such use is permitted. These risks can undermine individuals' control over their possessions and intellectual creations, leading to potential exploitation and discrimination. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ECHR: Article 1 of Protocol No. 1 (Protection of property) EU Charter: Article 17 (Right to property) Pact of San Jose: Article 21 (Right to property) |
|
|
(10) Education |
Illustrative description |
|
Human rights protections in this area relate to individuals’ access to and the provision of quality education without discrimination. Protections may encompass not only access to primary education but also secondary, higher, vocational, and lifelong learning opportunities. |
|
|
Examples of potential AI-related risks |
|
|
AI-related risks include harmful bias produced by algorithms in areas such as admissions, grading, and personalised learning. Other AI-related risks (including those resulting from a lack of digital literacy) can undermine the effectiveness and fairness of educational, vocational and training systems, impacting the overall learning experience of students or their chances on the labour market. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICESCR: Article 13 UNCRC: Article 28 ECHR: Article 2 of Protocol No. 1 (Right to education) ESC: Articles 1(4), 7(6), 9, 10, 15, 17, 19 (11-12), 20(b) and 30 EU Charter: Article 14 (Right to education) Pact of San Jose: Article 13 (Right to education) of the Protocol of San Salvador |
|
(11) Arts, sciences, culture and language |
Illustrative description |
|
Human rights protections in this sphere allow access for all to a variety of cultural resources, the enjoyment of the benefits of scientific progress and its applications, and the protection of the moral and material interests resulting from any scientific, literary or artistic production of which the person concerned is the author. Likewise, these protections cover the freedom of scientific research and creative activity. They also protect the ability of individuals, including members of ethnic, religious or linguistic minorities, to enjoy their culture, practice their religion and use their language. |
|
|
Examples of potential AI-related risks |
|
|
AI-related considerations for culture, arts, and sciences may include complex issues related to intellectual property, data accuracy and quality control, unreliable sources of data, and risks to critical thinking and ethical research practices. These risks can undermine the richness and diversity of human expression, creativity, and knowledge advancement. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICCPR: Article 27 ICESCR: Article 15 ESC: Articles 15(3), 22(c), 23 and 30. EU Charter: Articles 13 (Freedom of the arts and sciences) and 22 (Cultural, religious and linguistic diversity) Pact of San Jose: Article 14 (Right to the benefits of culture) of the Protocol of San Salvador |
|
|
(12) Labour and employment |
Illustrative description |
|
Labour rights protect workers in the workplace in relation to fair and equitable treatment, safe, healthy and decent working conditions, reasonable wages and decent standards of living, privacy at the workplace, freedom from discrimination, and the ability to form and join trade unions. Labour rights may include, for example, minimum standards on wages, limits on working hours, protection from forced labour, protections relating to child labour, freedom of association and the effective recognition of the right to collective bargaining, and the elimination of discrimination. |
|
|
Examples of potential AI-related risks |
|
|
AI may have a variety of transformative impacts on labour and employment. AI-related risks include the use of AI to monitor and manage employees, leading to concerns over privacy and creating stressful environments due to constant tracking of performance, behaviour and productivity, as well as potential AI-based job displacement. AI systems used to inform or make decisions regarding hiring, firing, or promotions risk biased, opaque, or unfair labour practices, including those affecting people working on AI. AI can be used to automate and optimise supply chains, which might inadvertently increase the demand for cheap labour, including child labour in certain sectors. The use of AI in monitoring and managing labour can reduce human oversight, potentially allowing exploitative practices to go unnoticed, particularly regarding child labour. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICESCR: Articles 6, 7, 8 and 10 UNCRC: Article 32 ESC: Articles 1-10, 21, 22, 26, 28 and 29 EU Charter: Articles 8 (Personal data protection), 12 (Freedom of assembly and association), 15 (Freedom to choose an occupation and right to engage in work), 27 (Workers’ right to information and consultation within the undertaking), 28 (Right of collective bargaining and action), 29 (Right to access to placement services), 30 (Protection in the event of unjustified dismissal), 31 (Fair and just working conditions), 32 (Prohibition of child labour and protection of young people at work) and 33 (Family and professional life) Pact of San Jose: Articles 7 (Just, equitable, and satisfactory conditions of work), 8 (Trade union rights) of the Protocol of San Salvador Additionally, the following ILO Conventions are of potential relevance in this context: 1. Convention No. 87 – Freedom of Association and Protection of the Right to Organise (1948) 2. Convention No. 98 – Right to Organise and Collective Bargaining (1949) 3. Convention No. 111 – Discrimination (Employment and Occupation) (1958) 4. Convention No. 138 – Minimum Age Convention (1973) 5. Convention No. 155 – Occupational Safety and Health (1981) 6. Convention No. 190 – Violence and Harassment (2019) |
|
(13) Health, healthcare and social security / social protection |
Illustrative description |
|
The health sector encompasses access to medical services, public health initiatives, and healthcare infrastructure to ensure the physical and mental well-being of individuals. Protections in this area generally relate to the ability of individuals to access healthcare in order to enjoy the highest attainable standard of physical and mental health. The social security sector provides access to financial support and protection to individuals facing unemployment, disability, old age, or economic hardship through benefits and welfare programmes. Protections in this area generally relate to the ability of individuals to receive such assistance. Social protection, social assistance and protection against poverty and social exclusion aim to prevent or reduce poverty and inequality and to promote human dignity. |
|
|
Examples of potential AI-related risks |
|
|
The use of AI systems may result in harmful bias, leading to unequal access to healthcare or social benefits, particularly for marginalised groups. The use of AI in processing sensitive personal health and social data raises significant privacy and security risks (including issues relating to consent and individual autonomy), such as data breaches or misuse. Over-reliance on AI for medical diagnoses or social benefit decisions could lead to errors, reducing human oversight and potentially harming patient outcomes or wrongly denying benefits. Regarding social protection and social security, the use of AI may present risks when it is used to determine the eligibility of persons for social benefits or to detect cases where social benefits were paid out incorrectly and the authority has repayment claims: the consequences of mistakes (possibly caused by bias) can be severe and often particularly affect persons in vulnerable situations. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICESCR: Articles 9, 10 and 12 ESC: Articles 2, 3, 7, 8, 11, 12, 13, 14, 15, 17(1), 19(2), 22(b), 23, 27(1)(b) and 30 European Code of Social Security (ECSS): Articles 7-12 (Medical care), 13-18 (Sickness benefit), 19-24 (Unemployment benefit), 25-30 (Old-age benefit), 31-38 (Employment injury benefit), 46-52 (Maternity benefit), 53-58 (Invalidity benefit), 59-64 (Survivor’s benefit) UNCRC: Article 24 EU Charter: Articles 8 (Personal data protection), 34 (Social security and social assistance) and 35 (Healthcare) Pact of San Jose: Articles 9 (Right to social security), 10 (Right to health) of the Protocol of San Salvador |
|
|
(14) Children |
Illustrative description |
|
Both general human rights protections and child-specific protections exist in respect of this group. They are aimed at ensuring that children can grow, learn, play, develop and flourish with dignity and that children’s specific vulnerabilities and needs are properly considered. In all actions concerning children, their best interests must be a primary consideration. With the above considerations in mind, many of the areas of potential concern presented in this table will be relevant for children. |
|
|
Examples of potential AI-related risks |
|
|
AI systems could pose risks to children if used in making decisions that may not take into account their needs or rights, and by creating fake but realistic images or videos that can confuse, mislead, or harm children. Content that is generated, selected, or recommended by AI systems, for instance on social media or video platforms, can expose children to content that is illegal, harmful or not age-appropriate. Targeted advertising and AI-based content can exploit children's vulnerabilities, influencing their behaviour and preferences, including for commercial gain. AI-driven social media platforms can contribute to anxiety, depression and negative self-esteem in children through cyberbullying, addiction and harmful comparisons. Children are especially susceptible to harms linked to generative AI. Children are less capable of discerning synthetic content from genuine content, of identifying inaccurate information, of recognising when they are being manipulated by dark patterns, and of understanding that they are interacting with a machine rather than with a human being. AI is increasingly used to generate child sexual abuse material (CSAM) and exploit children. Generative AI can enable the creation of synthetic CSAM and cause harm to children and families, as well as to society. Predators can also use generative AI to groom, extort, and exploit victims. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICESCR: Articles 10(3) and 12 European Social Charter (R/ESC): Articles 7 and 17 European Code of Social Security: Articles 39-45 (Family benefit) EU Charter: Articles 7, 8, 24 and 32 Pact of San Jose: Article 19 (Rights of the Child) In addition to human rights provisions contained in international human rights treaties, the following specialised human rights instruments may be of relevance: 1) the 1989 United Nations Convention on the Rights of the Child (UNCRC) and its Optional Protocols; 2) General Comment No. 25 on children’s rights in relation to the digital environment; 3) the 2007 Council of Europe Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse (CETS No. 201, the Lanzarote Convention). |
|
(15) Environment |
Illustrative description |
|
In some jurisdictions, human rights provisions may recognise some level of responsibility of public authorities for protecting individuals from environmental harms. |
|
|
Examples of potential AI-related risks |
|
|
AI systems, particularly those that use significant computational resources for training and inference, consume large amounts of energy. This contributes to carbon emissions, especially when powered by non-renewable energy sources. Likewise, the production of the hardware required for training and running AI systems requires many rare earth materials, the extraction of some of which has wider pollution effects. AI applications in agriculture, the economy, resource management, or wildlife monitoring, while offering many potential environmental benefits, may also have unforeseen environmental impacts if not carefully managed, such as disrupting ecosystems, over-exploiting natural resources, causing pollution or harming biodiversity. |
|
|
Potentially relevant human rights obligations |
|
|
If seen from a human rights angle, issues regarding environmental protection could be addressed under the following provisions: ICESCR: Article 12 ECHR: Articles 2 (Right to life), 3 (Prohibition of torture), 6 (Right to a fair trial), 8 (Right to respect for private and family life), 10 (Freedom of expression) and 11 (Freedom of assembly and association) EU Charter: Article 37 (Environmental protection) Pact of San Jose: Article 11 (Right to a healthy environment) of the Protocol of San Salvador Aarhus Convention[17] |
|
|
(16) Democracy |
Illustrative description |
|
Human rights provisions protect the ability of individuals to participate in the conduct of public affairs, to vote in free and fair elections, to run for public office, and to have access to government services. An important feature of democratic systems of government is political pluralism, which is ensured in large part by: the protection of human rights whose respect is essential for a thriving democracy, such as freedom of expression, freedom of association and freedom of peaceful assembly; the existence of pluralist and independent media and of a range of political parties representing different interests and views; and fair access to and meaningful participation in public debate and public decision-making, together with access to accurate and trustworthy information. Rights regarding meaningful participation in public decision-making processes and in government ensure that individuals are able to engage in the political process, including by voting, running for office, and expressing opinions on public policies, thereby fostering democratic governance, accountability and representation, and allowing citizens to influence decisions which affect their lives and communities. |
|
|
Examples of potential AI-related risks |
|
|
AI systems playing a role in influencing or informing democratic processes could adversely affect fair access of individuals to, and participation in, public debate and free and fair elections, as well as their ability to freely form opinions, through the following non-exhaustive list: (a) deception, misinformation or disinformation at local, national or global levels caused by the deployment of the system; (b) manipulation at local, national or global levels enabled by the deployment of the system; (c) intimidation or behavioural control at local, national or global levels enabled by the deployment of the system. AI could also be used in ways that interfere with participation in government, for example through the use of algorithms to screen political candidates or participants in public processes, or to suppress speech on matters relating to public policy. |
|
|
Potentially relevant human rights obligations |
|
|
The following legal provisions may be of relevance: ICCPR: Articles 19, 20, 21 and 25 ECHR: Articles 10 (Freedom of expression), 11 (Freedom of assembly and association) and Article 3 of Protocol No. 1 (Right to free elections) ESC: Articles 5 and 6.1-2 EU Charter: Articles 11 (Freedom of expression and information), 12 (Freedom of assembly and of association), 39 (Right to vote and to stand as a candidate at elections to the European Parliament), 40 (Right to vote and to stand as a candidate at municipal elections) and 42 (Right of access to documents) Pact of San Jose: Articles 13 (Freedom of thought and expression), 14 (Right to reply), 15 (Right of assembly), 16 (Freedom of association) and 23 (Right to participate in government) |
|
(17) Rule of law |
Illustrative description |
|
Rule of law is a principle of governance in which all persons, institutions and entities, public and private, including the State itself, are accountable to laws that are publicly promulgated, equally enforced, and independently adjudicated, and which are consistent with respect for international human rights. Judicial independence is foundational to democracy and to ensuring public confidence in the administration of justice. Rule of law institutions are organisations (the judiciary, legislature and executive, the legal profession, and semi-independent bodies such as anti-corruption bodies, data protection authorities and ombudspersons) and systems (legal systems) that ensure laws are applied fairly, consistently, and transparently, protecting individuals' rights and maintaining order in society. Together, these institutions maintain the rule of law by ensuring that laws, not arbitrary decisions, govern society and that everyone is treated equally under the law. |
|
|
Examples of potential AI-related risks |
|
|
The use of AI presents risks to the rule of law, procedural fairness, transparency and accountability (through a possible lack of transparency and sufficient human oversight, the use of systems resulting in unintended consequences, harmful bias and cybersecurity threats, among others) in such contexts as, for instance: (a) the integrity of democratic institutions and processes, including the principle of the separation of powers and respect for judicial independence; (b) access to justice; (c) accountability mechanisms (oversight of the executive branch) and anti-corruption bodies and policies. |
|
|
Potentially relevant human rights obligations |
|
|
The effective exercise and protection of these rights are essential for ensuring respect for the rule of law. The following legal provisions may be of relevance: ICCPR: Articles 2.3, 4 and 14 ECHR: Article 6 (Right to a fair trial), Article 13 (Right to an effective remedy) EU Charter: Articles 41 (Right to good administration), 42 (Right of access to documents), 47 (Right to an effective remedy and to a fair trial), 48 (Presumption of innocence and right of defence) Pact of San Jose: Articles 8 (Right to a fair trial), 10 (Right to compensation) and 25 (Right to judicial protection) |
This resource lists (not in order of priority) sectors and domains of potential concern from the perspective of human rights, democracy and the rule of law (first and second columns) and is intended to be used in conjunction with COBRA Resources E in order to carry out the mapping of impacts as part of COBRA. Similarly, it is non-exhaustive and should be regularly reviewed.
|
1. Public administration |
Domains |
Areas of potential concern based on COBRA Resources E |
|
(a) Health care, including, but not limited to, such issues as access to healthcare services, diagnostics, prognostics and preventative care, the provision of life-sustaining treatments, treatment of life-threatening conditions, emergency care services, mental health counselling and treatment, end of life decisions; |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (13) Health, healthcare and social security, (14) Children, (15) Environment |
|
|
(b) Family life and social care, including, but not limited to, such issues as mutual enjoyment of parents with children, custody, access, contact rights, State care, foster families, adoption and reproductive services, access to and provision of public benefits; |
(1) Physical and mental integrity and human dignity, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (9) Property, (13) Health, healthcare and social security, (14) Children |
|
|
(c) Migration and border control, including, but not limited to, such issues as expulsion, extradition, deportation, adjustments of status, denial of right to entry, notification of rights, translation/interpretation services, production of transcripts, collection and assessment of evidence, conditions and modalities of entrance to and removal from the territory of the State; |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (12) Labour and employment, (13) Health, healthcare and social security / social protection, (14) Children, (15) Environment, (17) Rule of law |
|
|
(d) Infrastructure development and maintenance, including, but not limited to, such issues as health security and enjoyment of public space, public transportation and mobility, management of environmental hazards, land and urban planning, housing, digital infrastructure, energy management and energy consumption; |
(1) Physical and mental integrity and human dignity, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (9) Property, (13) Health, healthcare and social security / social protection, (14) Children, (15) Environment |
|
|
(e) Emergency services, including, but not limited to, such issues as management of rescue operations, emergency communications infrastructures, and management of the aftermath of disasters; |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (14) Children, (15) Environment |
|
|
(f) Public education, including, but not limited to, such issues as access to educational institutions and educational assessments, and official recognition of studies; |
(5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (10) Education, (11) Arts, sciences, culture and language, (14) Children |
|
|
(g) Employment, including, but not limited to, such issues as recruitment, access to employment, performance management and worker policies, and accessibility and reasonable accommodations for persons with disabilities. |
(1) Physical and mental integrity and human dignity, (4) Privacy and data protection, (5) Equality and non-discrimination, (8) Peaceful assembly and association, (12) Labour and employment, (13) Health, healthcare and social security/social protection |
|
2. Law enforcement and security |
Domains |
Areas of potential concern based on COBRA Resources E |
|
(a) Police and assimilated services, including, but not limited to, such issues as the use of lethal force, administration of physical force during arrests, ID checks and identification of individuals for law enforcement purposes, programmes regarding protection of persons in danger (e.g. victims of domestic violence or protected witnesses), arrests and detentions, management of programmes regarding vetting of officials, management of rescue and hostage rescue operations, crowd management during public events, predictive policing, emotion and sentiment analysis, surveillance and restrictions by police and other law enforcement agencies. |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (7) Opinions, expression and information, (8) Peaceful assembly and association, (9) Property, (14) Children, (15) Environment, (17) Rule of law |
|
|
(b) Prosecutions, including, but not limited to, such issues as collection and assessment of evidence. |
(1) Physical and mental integrity and human dignity, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (7) Opinions, expression and information, (14) Children, (15) Environment, (17) Rule of law |
|
|
3. Administration of justice |
(a) Courts and justice, including, but not limited to, such issues as arrests, detentions, decisions regarding bail, release on parole, conditional release and wearing of electronic bracelets, notification of rights and decisions, translation/interpretation services, production of transcripts, collection and assessment of evidence (including assessment of trustworthiness of witnesses and evidence), granting of legal aid, determination of any criminal charge, determination of civil rights and obligations, decisions regarding challenges of judges or jury members, decisions regarding access to review level of proceedings, criminal sentencing, automated proceedings. |
(2) Physical liberty and security, movement and residence, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (7) Opinions, expression and information, (9) Property, (14) Children, (17) Rule of law |
|
(b) Institutional aspects of organisation of the judiciary, namely respect for judicial independence and including, but not limited to, such issues as the management of the process of vetting, appointments and dismissal of judges/judicial officers, attribution of cases for processing to specific judges/judicial officers, case management in legal proceedings. |
(3) Justice and administration of justice, (5) Equality and non-discrimination, (9) Property, (12) Labour and employment, (17) Rule of law |
|
|
4. Democratic processes |
(a) Electoral system, including, but not limited to, such issues as conditions and modalities of the exercise of the right to vote, eligibility age, exclusion rules, conditions and modalities of voting, voting methods and procedures, conditions and modalities of counting, the right to stand in elections, the organisation of elections and referenda, redistricting, the management of electoral disputes and effective remedies in this connection, distribution of electoral information; |
(4) Privacy and data protection, (5) Equality and non-discrimination, (7) Opinions, expression and information, (8) Peaceful assembly and association, (16) Democracy |
|
(b) Institutions and political processes, including, but not limited to, such issues as the supremacy of the constitution, the role of the judiciary in the balancing of powers, judicial independence, delegation of the legislative function; |
(3) Justice and administration of justice, (5) Equality and non-discrimination, (7) Opinions, expression and information, (16) Democracy, (17) Rule of law |
|
|
(c) Opinions and public discourse, including, but not limited to, such issues as expression of protected speech in various forms, protection of journalistic sources, information-gathering activities, access to, collection of and automated processing of data and property, research and investigation activities, disclosure regimes concerning information received in confidence, protection of whistle-blowers, and participatory democracy and public consultations, including issues of organisation of committee meetings; |
(3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (8) Peaceful assembly and association, (10) Education, (11) Arts, sciences, culture and language, (14) Children, (16) Democracy, (17) Rule of law |
|
|
(d) Peaceful assembly and association, including, but not limited to, such issues as time, place and manner of conduct of assemblies, conditions and modalities of forming or being affiliated with a group or organisation pursuing particular aims, surveillance of assemblies and identification of participants, participation of citizens in public life; |
(5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (8) Peaceful assembly and association, (9) Property, (11) Arts, sciences, culture and language, (14) Children, (16) Democracy |
|
|
(e) Access to information, including, but not limited to, such issues as access to personal information, financial information and information about business dealings of individuals, duty to provide reliable and precise information, responsibilities with regard to verification and transmission of information, access to State-held information; |
(3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (9) Property, (10) Education, (11) Arts, sciences, culture and language, (14) Children, (16) Democracy, (17) Rule of law |
|
|
(f) Media, including but not limited to, such issues as transparency with regard to media ownership, media pluralism, freedom of expression during elections (offline and online), duties and responsibilities of internet news portals, automated news generation, media platforms, mis/disinformation, online content moderation. |
(5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (8) Peaceful assembly and association, (10) Education, (11) Arts, sciences, culture and language, (14) Children, (16) Democracy |
|
|
5. Prison and probation |
(a) Management of prisons and detention facilities, including, but not limited to, such issues as prisoner profiling, psychological screening of potentially vulnerable inmates, management of dangerous prisoners, management of prison population, searches of visitors and inmates, surveillance of communications; |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (13) Health, healthcare and social security / social protection, (15) Environment |
|
(b) Parole and probation services, including, but not limited to, such issues as release on parole, conditional release, monitoring of individuals and any electronic wearable devices. |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination |
|
|
6. Essential services offered by private sector |
(a) Communication services |
(4) Privacy and data protection, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (9) Property, (10) Education, (11) Arts, sciences, culture and language, (14) Children |
|
(b) Education and vocational training |
(5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (7) Opinions, expression and information, (10) Education, (11) Arts, sciences, culture and language, (12) Labour and employment, (14) Children |
|
|
(c) Biomedical applications, life sciences, epidemiology and health care |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (4) Privacy and data protection, (5) Equality and non-discrimination, (9) Property, (13) Health, healthcare and social security / social protection, (14) Children, (15) Environment |
|
|
(d) Environmental and waste management |
(1) Physical and mental integrity and human dignity, (4) Privacy and data protection, (5) Equality and non-discrimination, (13) Health, healthcare and social security / social protection, (15) Environment |
|
|
(e) Energy management |
(5) Equality and non-discrimination |
|
|
(f) Urban infrastructure and planning |
(4) Privacy and data protection, (5) Equality and non-discrimination, (9) Property, (14) Children, (15) Environment |
|
|
(g) Manufacturing and industrial automation |
(1) Physical and mental integrity and human dignity, (12) Labour and employment, (15) Environment |
|
|
(h) Construction and building |
(4) Privacy and data protection, (5) Equality and non-discrimination, (12) Labour and employment, (15) Environment |
|
|
(i) Security and public safety |
(1) Physical and mental integrity and human dignity, (2) Physical liberty and security, movement and residence, (3) Justice and administration of justice, (4) Privacy and data protection, (5) Equality and non-discrimination, (8) Peaceful assembly and association, (15) Environment |
|
|
(j) Domotics (smart home technologies) |
(4) Privacy and data protection, (5) Equality and non-discrimination, (14) Children |
|
|
(k) Housing and social accommodation provision |
(1) Physical and mental integrity and human dignity, (5) Equality and non-discrimination, (13) Health, healthcare and social security /social protection |
|
|
(l) Employment, human resources and labour management (including but not limited to issues such as recruitment, access to employment, performance management and worker policies, accessibility and reasonable accommodations for persons with disability) |
(2) Physical liberty and security, movement and residence, (4) Privacy and data protection, (5) Equality and non-discrimination, (10) Education, (12) Labour and employment |
|
|
(m) Financial services |
(4) Privacy and data protection, (5) Equality and non-discrimination, (9) Property |
|
|
(n) Information technology and networks |
(4) Privacy and data protection, (5) Equality and non-discrimination, (7) Opinions, expression and information, (9) Property, (10) Education, (11) Arts, sciences, culture and language, (14) Children |
|
|
(o) Vehicle manufacturing and transportation infrastructure |
(2) Physical liberty and security, movement and residence, (5) Equality and non-discrimination, (15) Environment |
|
|
(p) Agriculture and food supply |
(1) Physical and mental integrity and human dignity, (5) Equality and non-discrimination, (6) Thought, conscience, religion and belief, (12) Labour and employment, (13) Health, healthcare and social security / social protection, (15) Environment |
[1] This document has been classified restricted until examination by the Committee of Ministers.
[2] References to international human rights instruments in this table are included for illustrative purposes. Those references only apply to States which are Parties to those instruments. Each State is expected to apply its own applicable laws in accordance with its international legal obligations, which could include encouraging the private sector to respect and support human rights, including as set out in the United Nations Guiding Principles on Business and Human Rights.
[3] The United Nations (UN) International Covenant on Civil and Political Rights and its Optional Protocols
[4] The UN International Covenant on Economic, Social and Cultural Rights and its Optional Protocol
[5] The Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms (ETS No. 5) and its additional Protocols
[6] The European Social Charter (ETS No. 35) and its Protocols and the Revised European Social Charter (ETS No. 163)
[7] The Charter of Fundamental Rights of the European Union
[8] The American Convention on Human Rights and its additional Protocol
[9] The United Nations Convention on the Rights of the Child and its Optional Protocols
[10] The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, as amended (ETS No. 108, CETS No. 223) and its Protocols
[11] Equality and non-discrimination provisions complement other substantive provisions of various international human rights instruments and these issues are therefore relevant in respect of all areas of potential concern in this table.
[12] Since all human beings belong to the same species, theories based on the existence of different “races” are rejected. However, the term “race” is used in order to ensure that those persons who are generally and erroneously perceived as “belonging to another race” are not excluded from the protection provided.
[13] In the AI context, bias refers to an output that is skewed in a way that is unfair. Bias is likely to lead to discriminatory outcomes provided that (a) it is related to protected characteristics, and (b) some kind of action is taken based on the biased outputs of an AI system (whether that bias has entered the system via training or fine-tuning data, the model itself, or the way the outputs of the model are interpreted by a human).
[14] The United Nations International Convention on the Elimination of All Forms of Racial Discrimination
[15] The United Nations Convention on the Elimination of All Forms of Discrimination Against Women and its Optional Protocol
[16] The United Nations Convention on the Rights of Persons with Disabilities and its Optional Protocol
[17] The UNECE Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters
[18] References to international human rights instruments in this table are included for illustrative purposes. Those references only apply to States which are Parties to those instruments. Each State is expected to apply its own applicable laws in accordance with its international legal obligations, which could include encouraging the private sector to respect and support human rights, including as set out in the United Nations Guiding Principles on Business and Human Rights.