
Strasbourg, 24 November 2023                                                             CDBIO(2023)7REV2

STEERING COMMITTEE FOR HUMAN RIGHTS

IN THE FIELDS OF BIOMEDICINE AND HEALTH (CDBIO)

IMPACT OF AI ON THE PATIENT-DOCTOR RELATIONSHIP

DRAFT REPORT FOR CONSULTATION


EXECUTIVE SUMMARY

The Council of Europe aims to protect human dignity and the fundamental rights and freedoms of the individual regarding the application of biology and medicine. Major technological breakthroughs in artificial intelligence have the potential to advance biomedicine and benefit healthcare, yet uncertainty exists about their impact and the direction of developments.

The report addresses AI in healthcare, including applications used by healthcare professionals (e.g. Clinical Decision Support Systems) as well as applications used by patients themselves (apps prescribed by a doctor, but also independently used apps such as symptom checkers or health data trackers). Given its focus on the patient-doctor relationship, the report does not address AI development or AI-related research involving human subjects.

The report focuses on selected human rights principles, as referred to in the Oviedo Convention, of particular relevance to the therapeutic relationship, namely autonomy, professional standards, self-determination regarding health data, and equitable access to health care.

AI systems have the potential to bring about an enormous transformation of the patient-doctor relationship, holding great promise for the delivery of better and more equitable access to healthcare. That said, doctors’ expertise might be challenged, but also significantly enhanced, by high-performing decision support in various domains of healthcare. Patients who decide to use AI systems by themselves might rely less on the advice of health professionals. Challenges lie in misplaced trust, overestimation of technological performance and testimonial (in)justice as to the question of whom to trust in the patient-doctor encounter.

A sustainable approach to providing access to health care using AI systems should be one which includes a human rights perspective in order to safeguard well-being and protect the dignity of everyone. This human rights perspective should be ‘end-to-end’ throughout a patient’s healthcare journey.

There must be trust in the professional standards which scrutinise the safety, quality and efficacy of AI systems; should this trust falter (e.g. when AI systems are considered inscrutable, inconclusive or even misguided), patient autonomy will be weakened.

A major challenge lies in ensuring that AI systems (their data and models) are empirically sound and robust, accurate, and their results consistent and reproducible (e.g., based on independent standards or expertise, such as independently testing algorithms). Standards for clinical trials involving AI systems are necessary to ensure the safety, quality and reliability of trial results. This will allow AI systems to be more easily appraised by investigators and others, such as regulators.

The trustworthiness of AI systems in healthcare depends on human oversight and the ‘explainability’ of AI outputs. The “black box” character of some AI methods has been criticised as having a bearing on the risk of bias and discrimination without good options for detecting such failures in performance. The responsible use of AI in healthcare relies at least on a basic understanding of the strengths of AI recommendations.

In their design, development and training phases, action is necessary to address biases in AI systems to mitigate the potential for discriminatory access to healthcare affecting people and groups (based on e.g., race, gender, age or disability). There is an opportunity for AI systems to mitigate pre-existing bias found in modern medicine. In the future, AI might provide options to redress differences in access to healthcare for those to whom fairness is owed (e.g. older people, lower socio-economic groups, ethnic minorities).

Patients may find it difficult to understand what AI systems are, why they are being relied upon, and how they are being used. While some might argue that doctors have discretion to decide whether to inform their patients about their reliance on AI systems, this should not preclude patients from seeking information and explanations in order to consent to health interventions.

Patient autonomy calls for more information, explanation and transparency, not less. This includes patients knowing when they are interacting with an AI system, and knowing how to consent, especially in cases where the deployment of AI systems leads to health care administered with limited or no recourse to the therapeutic hand of the doctor. When the risks to the patient are high, there should be the possibility to distance human consent from AI system outputs.

Where required, patients’ consent to the use of their data by AI systems should be free, express and informed. They should know what is collected and how it may be shared. They should be provided with assurances and, possibly, different types of consent options.

Healthcare providers should ensure that safeguards are in place to protect the privacy and confidentiality of patients throughout their healthcare journey, especially at the point where data are collected. To this end, there should be ever greater vigilance with patient data, mitigating any inadvertent or otherwise ambiguous data sharing with third parties.

AI-enabled care should never be a substitute for people (in vulnerable situations) who need human professional contact and guidance. Careful attention should be paid to not putting the patient in a worse position if AI systems are not used or are otherwise denied.

Doctors and other healthcare professionals will require support in adapting to AI systems which guide their actions. They will need to be informed, given explanations and trained accordingly, with considerable emphasis on their critical role in protecting and safeguarding patient well-being and quality of care.

Above all, AI systems should never undermine the therapeutic relationship however good the intentions are. They must be made transparent to patients and doctors so that they are aware of what is running in the background. Patient autonomy and agency, coupled with the ‘human warranty’ of health professionals, are the path forward to strengthening the therapeutic relationship impacted by AI systems.


I. INTRODUCTION

Purpose

Scope

Understanding the patient-doctor relationship with regard to AI systems

What is AI and why is it important in the patient-doctor relationship?

AI trends and examples which have a bearing on the patient-doctor relationship

II. HUMAN RIGHTS IMPLICATIONS OF AI IN THE PATIENT-DOCTOR RELATIONSHIP

Autonomy

Context

Challenges

Recommended Action

Professional standards

Context

Challenges

Recommended Action

Self-determination regarding health data

Context

Challenges

Recommended action

Equitable Access to Health Care

Context

Challenges

Recommended action

III. LOOKING FORWARD


I. INTRODUCTION

1.    The Council of Europe aims to protect human dignity and the fundamental rights and freedoms of the individual regarding the application of biology and medicine. Major technological breakthroughs, like those involving artificial intelligence, have the potential to advance biomedicine and benefit healthcare, yet uncertainty exists about their impact and direction of developments.

2.    The governance of these developments is more than just facilitating their application and containing their risks; it concerns the way their technological pathways are managed (and sometimes become irreversible). Governance is about embedding human rights in AI technologies which have an application in the field of biomedicine. This implies that developments are, from the outset, oriented towards protecting human rights. For that reason, governance arrangements need to be considered which seek to steer the innovation process in a way that connects innovation and technologies with social goals and values.

3.    In the framework of its Strategic Action Plan on Human Rights and Technologies in Biomedicine (2020-2025), the Committee on Bioethics[1] set up a drafting group to prepare a report on the application of AI (hereinafter referred to as ‘AI systems’ or ‘AI enabled’ (…)) in healthcare and its impact on the doctor-patient relationship, highlighting the role of healthcare professionals in respecting the autonomy, and right to information, of the patient, and in maintaining transparency and patient trust as critical components of the therapeutic relationship.

4.    The drafting group comprised the following members: Dunja PEJOVIĆ (Bosnia and Herzegovina); Emmanuel DIDIER (France); Joni KOMULAINEN as Chair (Finland); Sabine SALLOCH (Germany); Evaristo CISBANI (Italy); Patricio SANTILLAN-DOHERTY (Mexico); and Andreas REIS (WHO).

5.    The drafting group met on six occasions from October 2022 to December 2023, comprising two ‘in-person’ meetings and four online meetings. The drafting group held exchanges with experts from the Netherlands and France (Paris, 8-9 February 2022), and took into account the views and proposals of young people who participated in the CDBIO pilot youth forum (Strasbourg, 6 June 2023).

Purpose

6.    The report is intended for decision makers, health providers, health professionals and patients (including patient associations), to:

·         Consider how AI systems are used in healthcare, having regard to their human rights implications.

·         Develop and strengthen the therapeutic relationship, especially in supporting doctors and, where appropriate, other healthcare professionals in promoting the agency and autonomy of patients, patient welfare, and equitable access to health care.

Scope

7.    The report focuses on selected human rights principles of particular relevance to the therapeutic relationship, namely autonomy (Article 5, Oviedo Convention), professional standards (Article 4, Oviedo Convention), self-determination regarding health data (Article 10, Oviedo Convention) and equitable access to health care (Article 3, Oviedo Convention).

8.    The report addresses AI in healthcare, including applications used by healthcare professionals (e.g. Clinical Decision Support Systems) as well as applications used by patients themselves (apps prescribed by a doctor, but also independently used apps such as symptom checkers or health data trackers). Given its focus on the patient-doctor relationship, the report does not address AI development or AI-related research involving human subjects.

Understanding the patient-doctor relationship with regard to AI systems

9.    The therapeutic relationship is a critical component of good patient care, which AI systems have the potential to improve or adversely affect. It must be acknowledged that AI systems are already becoming an essential tool for modern medicine. This requires attention to the design, development and application of AI systems used in health so that “interests and welfare of the human being prevail over the sole interest of society or science”[2]. There should be synergy in the progress and protections advanced by AI systems.

10.  Most importantly, the patient-doctor relationship is one of trust which, in turn, is based on the trustworthiness of the standards and ethics of healthcare professionals in helping and supporting people facing illness and disease. This relationship is inherently human and takes different forms. It is a very special and historically valued type of relationship that has been evolving constantly from an outdated traditional paternalistic model to a more desirable deliberative model. To assert the character of this relationship, one which puts patients first, the drafting group decided to invert the reference from ‘doctor-patient’ to ‘patient-doctor’ for the remainder of this report.

11.  AI systems bear the potential for an enormous transformation of the patient-doctor relationship. Doctors’ expertise might be challenged, but also significantly enhanced, by high-performing decision support in various domains of healthcare. Patients who decide to use AI systems by themselves might rely less on the advice of health professionals. Challenges lie in misplaced trust, overestimation of technological performance and testimonial (in)justice as to the question of whom to trust in the patient-doctor encounter.

What is AI and why is it important in the patient-doctor relationship?

12.  AI systems require at least three concurrent components: data; computational hardware; and computational software (algorithms).

a.    Data can be collected by a range of heterogeneous protocols and devices, such as wearable personal sensors, medical exams and clinical equipment. Data are exploited by AI systems for training, testing, validation and, potentially, continuous updating and performance verification. Data should be representative, diversified and heterogeneous to avoid collection bias; the accuracy and precision of the data should be known and considered by the AI system. Quality assurance and standardisation are key to the proper exploitation of data by AI system algorithms. Most of the data collected and processed by AI system devices are personal data of a sensitive nature which are subject to specific regulation (Convention 108; GDPR) and require anonymisation and/or pseudonymisation (or other safeguards such as data aggregation or the generation of synthetic data) to avoid undue traceability and identification of the patient. It is noteworthy that the protection of patient privacy may affect the availability of statistically diversified, heterogeneous and adequate amounts of data; it is often considered an obstacle to data exploitation and sharing, and therefore to the development of AI system applications[3].

b.    Computational hardware is needed to store the large volumes of data required for AI systems. Algorithms run on computational nodes whose performance depends on the tasks to be accomplished (e.g. training a model, using a trained model, etc.). Among the different types of computational hardware, hybrid cloud computing (where different storage and computing environments co-exist, such as in-house and remote) currently represents one of the most effective compromises between economic affordability, scalability, and shared exploitation among clinical centres[4][5]. Security, privacy and compliance represent the most challenging aspects of deploying an information technology infrastructure in healthcare.

c.    Computational algorithms (software) perform tasks that are generally associated with human intelligence. The capability of AI systems to mimic (part of) human intelligence is, on one side, the main challenge of these technologies and, on the other, the main concern for their applications. Deep neural networks are one of the most discussed and fastest-expanding subclasses of AI systems (within machine learning algorithms), owing to their intrinsically “black box” structure and their capacity to learn during computation; they can be trained on (large) classified datasets to calibrate their millions (or billions) of internal parameters and then be applied to new input data to identify new correlations and findings[6].
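As a purely illustrative sketch of the workflow just described, the snippet below pseudonymises direct identifiers with a salted one-way hash (cf. point a. above), trains a model on a small labelled dataset and applies it to a new input (cf. point c. above). For brevity it uses a small random forest from scikit-learn rather than a deep neural network; the records, features and values are hypothetical and are not drawn from any system cited in this report.

```python
# Illustrative sketch only: a toy version of the pipeline in paragraph 12
# (pseudonymised, labelled data -> trained model -> output on new inputs).
# All records, features and labels below are hypothetical.
import hashlib

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def pseudonymise(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash (cf. point a.)."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:12]

# Hypothetical labelled records: (identifier, [systolic BP, glucose], diagnosis).
records = [
    ("patient-001", [132.0, 5.4], 0),
    ("patient-002", [168.0, 9.1], 1),
    ("patient-003", [121.0, 4.8], 0),
    ("patient-004", [175.0, 11.3], 1),
    ("patient-005", [140.0, 6.2], 0),
    ("patient-006", [160.0, 10.2], 1),
]

SALT = "local-secret"  # would be held under strict access control in practice
keys = [pseudonymise(pid, SALT) for pid, _, _ in records]  # identifiers removed
X = [features for _, features, _ in records]
y = [label for _, _, label in records]

# Train on part of the labelled dataset and test on the held-out part.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Apply the trained model to new input data to obtain a probabilistic output.
print("risk estimate for a new patient:", model.predict_proba([[150.0, 7.5]])[0][1])
```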

13.  AI systems are a significant driver for progress in healthcare. They are being used throughout health care, from diagnostics through to prediction, prevention, therapy (including triage) and rehabilitation. In Europe, there is increasing reliance on AI systems used in healthcare. In the US, the FDA has cleared or approved more than 500 AI algorithms for medical use[7][8][9]. The vast majority of AI systems are being developed in the field of medical imaging, but other areas are evolving.

Ø  In practice, AI systems must be reliable, convenient to use and easy to integrate into health workflows. This may not be the case where AI systems require expensive infrastructure investment, computing and storage capabilities. Such constraints raise the question as to whether viable cost-efficient alternatives to AI systems exist. Early health technology assessments of AI systems are a way forward (e.g. as undertaken in the Netherlands in the field of multiple sclerosis[10]).

14.  AI systems have the potential, inter alia, to support doctors with diagnostics and health personnel with administrative workflows. Yet AI systems also have the potential to disrupt care responsibilities. For example, radiologists faced with increasingly accurate AI system outputs could be displaced or otherwise substituted by other health professionals, and/or feel compelled to use AI systems as a supporting tool, perhaps to gain a competitive advantage, notwithstanding the possibility of error. Doctors will need guidance in dealing with incidental findings that AI systems produce, especially on whether and how to communicate these to patients[11][12].

AI trends and examples which have a bearing on the patient-doctor relationship

15.  AI systems have the potential to bring about an enormous transformation of the patient-doctor relationship, holding great promise for the delivery of better and more equitable access to healthcare. The following examples demonstrate the breadth of development of AI systems used in healthcare.  

16.  AI imaging systems use machine learning algorithms to help improve image quality (and/or to produce images of similar quality with a lower radiation dose to the patient). This is not diagnosis per se, but it contributes to later (human) decisions by helping to identify patterns and anomalies used, for example, to detect tumours, diagnose heart conditions, and identify other abnormalities.

Ø  For example, AI systems are being evaluated for use in radiological diagnosis in oncology (e.g., thoracic imaging, abdominal and pelvic imaging, mammography, brain imaging and dose optimisation for radiological treatment),[13] in non-radiological applications (dermatology), in diagnosis of diabetic retinopathy and for RNA and DNA sequencing to guide immunotherapy[14].

17.  AI-enabled diagnostic systems use machine learning algorithms to help medical professionals identify symptoms, compare them with medical records, and recommend treatments based on the data. Used in hospitals and clinics to help diagnose and treat a variety of diseases and conditions, AI systems can help to identify disease progression (assisting doctors in filling gaps in key insights so that they can diagnose with confidence).

Ø  For example, developmental dysplasia of the hip has a significantly better prognosis when caught early; however, there are often little to no symptoms during early stages[15].

18.  AI-enabled clinical decision support systems use machine learning algorithms to assist medical professionals in making decisions related to patient care. These systems are being used to provide personalised health recommendations, flag potential drug interactions, and alert medical professionals to potential errors (a minimal sketch of interaction flagging follows the example below). As AI systems evolve, social determinants and lifestyle factors (e.g., social media data) may also be incorporated into AI systems. This means that an AI system evaluating an individual's combined genetic and behavioural/social data could greatly improve a doctor’s ability to choose the best treatment path/medication.

Ø  For example, mathematical representations of patients derived by AI systems from deep analysis of Electronic Health and Medical Records (EHR and EMR) are becoming a relevant field of application of deep neural network algorithms[16]. AI systems trained on those data may support clinical professionals in personalised patient risk stratification, diagnosis, prognosis and triage[17].
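To make the interaction flagging mentioned in paragraph 18 concrete, the short sketch below shows the rule-based core of a drug-interaction alert. It is purely illustrative: the two interactions listed are well-known clinical examples, but the table, function names and output format are hypothetical and not taken from any system cited in this report; real clinical decision support systems rely on curated, regularly updated knowledge bases.

```python
# Illustrative sketch only: the rule-based core of a drug-interaction alert
# of the kind a clinical decision support system might raise.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension risk",
}

def interaction_alerts(prescriptions: list[str]) -> list[str]:
    """Return an alert for every known interacting pair in a prescription list."""
    alerts = []
    drugs = [d.lower() for d in prescriptions]
    for i, first in enumerate(drugs):
        for second in drugs[i + 1:]:
            note = INTERACTIONS.get(frozenset({first, second}))
            if note:
                alerts.append(f"{first} + {second}: {note}")
    return alerts

print(interaction_alerts(["Warfarin", "Aspirin", "Metformin"]))
# -> ['warfarin + aspirin: increased bleeding risk']
```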

19.  Surgery using AI-enabled tools can support health professionals in making health interventions both safer and more effective. For instance, brain surgery is precise and painstaking work, in which it is vital to avoid damaging critical structures[18].

Ø  For example, in treatment support to clinicians[19], AI systems have the potential to assist surgeons in removing malignant tumour tissue more effectively, fusing biopsies with full-scale interventions[20].

20.  Virtual nursing assistants use AI systems to automate certain tasks such as monitoring vital signs, providing medication reminders, and managing patient records. These systems are designed to reduce the workload of nurses and other medical personnel while providing more accurate results[21].

Ø  For example, clinical decision support systems (CDSSs) can assist clinicians in everyday problem solving for systemic inflammatory response syndrome, sepsis and associated organ dysfunctions in paediatric intensive care, as they summarise, analyse, and present clinically relevant data at the point of care[22].

21.  AI-enabled wearable medical devices use machine learning algorithms to track and monitor health data. These devices are being used to monitor heart rate, blood pressure and other vital signs, as well as to provide individuals, including patients, with actionable insights into their health.

Ø  In the detection of disease, AI-enabled medical devices have given the electrocardiogram (ECG), and the clinicians reading it, new diagnostic abilities. This transforms the ECG, a ubiquitous, non-invasive cardiac test integrated into practice workflows, into a screening tool and predictor of cardiac and non-cardiac diseases, often in asymptomatic individuals[23].

22.  AI-enabled telemedicine platforms use machine learning algorithms to provide remote[24] medical care (in line with the UN Sustainable Development Goals concerning equal possibilities for treatment and rehabilitation, including for rural/remote areas). These systems are being used to provide virtual consultations, provide medication reminders and adherence tracking, and monitor patient vital signs. Eventually, AI systems will likely assist patients in self-managing their medical conditions, especially chronic diseases such as cardiovascular disease, diabetes and mental health problems. AI systems already assist in self-care, including through conversational agents (e.g., “chatbots”), health monitoring and risk prediction tools, and technologies designed specifically for individuals with different problems[25] and disorders[26].

Ø  AI-enabled apps are assisting patients in the real-time monitoring of biometrics. For instance, AI-enabled wearables for people with diabetes help to monitor and maintain glucose levels via an automated, body-worn insulin delivery system. Using a self-learning algorithm, such medical devices embed AI-enabled treatment mechanisms to assist people in managing their daily insulin levels.
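The sketch below illustrates, in heavily simplified form, the closed-loop idea behind such automated insulin delivery: a controller compares a sensor reading with a target and adjusts the delivery rate accordingly. The target, basal rate and gain are hypothetical placeholders; real devices are certified, safety-engineered medical systems, and a “self-learning” device would additionally tune such parameters over time.

```python
# Illustrative sketch only: the control-loop idea behind automated insulin
# delivery (paragraph 22 example). All constants are hypothetical and the
# logic is deliberately simplified; it is not a medical algorithm.
TARGET_MMOL_L = 6.0      # hypothetical target glucose level
BASE_RATE_U_PER_H = 1.0  # hypothetical basal insulin rate
GAIN = 0.2               # hypothetical proportional gain

def next_insulin_rate(glucose_mmol_l: float, gain: float = GAIN) -> float:
    """Increase delivery above target, decrease below, never go negative."""
    correction = gain * (glucose_mmol_l - TARGET_MMOL_L)
    return max(0.0, BASE_RATE_U_PER_H + correction)

for reading in [5.2, 6.0, 8.5, 11.0]:  # simulated sensor readings
    print(f"glucose {reading:4.1f} mmol/L -> {next_insulin_rate(reading):.2f} U/h")
```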

23.  AI-enabled drug discovery systems use machine learning algorithms to more rapidly identify (novel) potential treatments for diseases and conditions. These systems are helping to expedite the drug discovery process and can be used to identify potential treatments for conditions that were previously thought to be untreatable.

Ø  The development of AI systems to predict the three-dimensional shape of proteins, such as RoseTTAfold and AlphaFold[27], is helping to speed up the development of new medicines and to improve the repurposing of existing medicines for use against new viruses and new diseases.

II. HUMAN RIGHTS IMPLICATIONS OF AI IN THE PATIENT-DOCTOR RELATIONSHIP

Autonomy

Context

24.  According to Article 5 of the Oviedo Convention, “[A]n intervention in the health field may only be carried out after the person concerned has given free and informed consent to it. This person shall beforehand be given appropriate information as to the purpose and nature of the intervention as well as on its consequences and risks. The person concerned may freely withdraw consent at any time.”

25.  Consent empowers a patient’s decision-making process by weighing her or his personal preferences against a proposed medical intervention. Paragraphs 35 and 36 of the Explanatory Report to the Oviedo Convention provide further details on the specific requirements for consent, including requirements concerning the quality, breadth and clarity of the information provided:

“35. The patient's consent is considered to be free and informed if it is given on the basis of objective information from the responsible health care professional as to the nature and the potential consequences of the planned intervention or of its alternatives, in the absence of any pressure from anyone. (…). In order for their consent to be valid the persons in question must have been informed about the relevant facts regarding the intervention being contemplated. This information must include the purpose, nature and consequences of the intervention and the risks involved. Information on the risks involved in the intervention or in alternative courses of action must cover not only the risks inherent in the type of intervention contemplated, but also any risks related to the individual characteristics of each patient, such as age or the existence of other pathologies. Requests for additional information made by patients must be adequately answered.

36. Moreover, this information must be sufficiently clear and suitably worded for the person who is to undergo the intervention. The patient must be put in a position, through the use of terms he or she can understand, to weigh up the necessity or usefulness of the aim and methods of the intervention against its risks and the discomfort or pain it will cause.”

26.  Yet, autonomy is more than consent. It engenders a more active role for patients in shared decision-making, one which is not restricted to being informed and agreeing to options presented to them. It encompasses, for example, the choice to take preventive measures, to ask for a second opinion, and to exercise the “right not to know” as provided in paragraph 67 of the Explanatory Report to the Oviedo Convention (below). Most importantly, autonomy is the ability for patients to introduce their own values, preferences and perspectives in patient-doctor communications.

“67. The right to know goes hand in hand with the "right not to know", which is provided for in the second sentence of the second paragraph. Patients may have their own reasons for not wishing to know about certain aspects of their health. A wish of this kind must be observed. The patient's exercise of the right not to know this or that fact concerning his health is not regarded as an impediment to the validity of his consent to an intervention; for example, he can validly consent to the removal of a cyst despite not wishing to know its nature.”

27.  In research settings, autonomy to consent to diagnostic and treatment interventions (as characterised above) differs from consent by the patient (or healthy proband) to become a research subject. Everyone involved needs to consider the different “logics” of treatment, which is directed at the individual’s needs, and research, which primarily aims at producing knowledge serving the interests of future patients. Not least because study participants do not always stand to benefit from their participation, the ethical and legal requirements for informed consent are usually higher than in the therapeutic setting. Notably, in data-intensive research, for example in the development of clinical decision support systems, the borderline between clinical care and medical research becomes increasingly blurred, generating a need for new legal and ethical frameworks for ensuring informed consent (dynamic consent models are commonly used in such circumstances)[28]. In addition, there is more and more online research, for which obtaining appropriate informed consent poses additional difficulties[29][30].

Challenges

28.  Patients may find it difficult to understand what AI systems are, why they are being relied upon, and how they are being used. While some might argue that doctors have discretion to decide whether to inform their patients about their reliance on AI systems[31], this should not preclude patients from seeking information and explanations in order to consent to health interventions.

29.  As AI systems merge into medical practice, the patient-doctor relationship could be undermined or increasingly frustrated as patients become unable to withhold consent to the use of AI systems in their treatment or care when alternatives that do not rely upon them are not readily available, or when the clinician, having handed over responsibility for such functions to an AI system, is unable to provide care without the use of an AI system.

Recommended Action

30.  Patient autonomy calls for more information, explanation and transparency, not less. This includes patients knowing when they are interacting with an AI system, and knowing how to consent, especially in cases where the deployment of AI systems leads to health care administered with limited or no recourse to the therapeutic hand of the doctor. Action should be taken to determine when and how to support and empower patients in consenting to interventions recommended by health professionals supported by AI systems.

31.  When the risks to the patient are high, there should be the possibility to distance human consent from AI system outputs. Patients should be given the opportunity to seek a second opinion without reliance on an AI system and/or oppose treatment decisions and care which depend on (or go beyond mere support of) AI system outputs, should they prefer.

32.  Efforts should also be made to encourage patients to become more active and critical in decisions regarding their health. Patient organisations can play a useful role in sharing knowledge and good practice on health literacy, which should include AI literacy.

Patients need:

·         To be more aware and critical of AI systems used in their health care

·         To know when they are interacting with an AI system

·         To be clearly informed, and given explanations, about the use of AI in cases which matter to them

·         To be able to accept or refuse AI in their care or treatment, and to this end seek a second opinion without the use of an AI system

Health professionals and/or healthcare providers have responsibility to:

·         Provide guidance and training on what and how to inform patients about the use of AI systems in their treatment and care[32]

·         Inform and explain to patients in clear and simple language what AI systems are, and why and how they are being used (i.e., benefits and risks)

·         Review procedures to facilitate informed consent when AI systems are relied upon

·         Promote health literacy, including AI literacy, and facilitate public dialogues where appropriate

Professional standards

Context

33.  The patient-doctor relationship is founded upon several pre-requisites in healthcare (e.g., safety, efficacy, quality, etc) encapsulated in key policy and legal documents, such as the Declaration of Geneva, and the Oviedo Convention of which Article 4 states: “Any intervention in the health field, including research, must be carried out in accordance with relevant professional obligations and standards”.

34.  Professional obligations, whatever form they take (codes of conduct, legal obligations, etc.), are necessary to ensure standards and quality of care. They create a commitment for doctors over time. They are a bulwark of protection for patients because they compel doctors to pay careful attention to the needs of each patient, as underlined in paragraphs 32 and 33 of the Explanatory Report to the Oviedo Convention:

“32. It is the essential task of the doctor not only to heal patients but also to take the proper steps to promote health and relieve pain, taking into account the psychological well-being of the patient. Competence must be determined primarily in relation to the scientific knowledge and clinical experience appropriate to a profession or speciality at a given time. The current state of the art determines the professional standard and skill to be expected of health care professionals in the performance of their work. In following the progress of medicine, it changes with new developments and eliminates methods which do not reflect the state of the art.”

“33. Further, a particular course of action must be judged in the light of the specific health problem raised by a given patient. (…). Another important factor in the success of medical treatment is the patient's confidence in his or her doctor. This confidence also determines the duties of the doctor towards the patient. An important element of these duties is the respect of the rights of the patient. The latter creates and increases mutual trust. The therapeutic alliance will be strengthened if the rights of the patient are fully respected.”

35.  These paragraphs reinforce the duty of doctors to take care of patients and to determine the proper treatment within a joint decision-making framework with the patient. Action taken is based on competence, scientific knowledge and clinical experience; this includes the assessment of risk (of AI systems)[33].

Ø  In the European Union, it is noteworthy that any AI system intended to be used for a medical purpose must comply with the EU Medical Device Regulation[34], which determines different levels of risk-based compliance, taking into account the vulnerability of the patient and the risks associated with the device[35]. The EU regulation (and, similarly, the FDA) classifies medical devices according to their intended use and risk into four classes: I (lowest risk), IIa[36], IIb[37] and III[38] (highest risk).

36.  In assessing such risks, further thinking is needed about the attribution of responsibilities for AI systems. In reality, responsibility is shared. AI system developers are responsible for ensuring that systems are designed in a responsible and ethical manner. Health professionals and/or healthcare providers have a responsibility to use them in a way that aligns with ethical and legal guidelines.

Challenges

37.  A major challenge lies in ensuring that AI systems (their data and models) are empirically sound and robust, accurate, and their results consistent and reproducible (e.g., based on independent standards or expertise, such as independently testing algorithms). Standards for clinical trials involving AI systems are necessary to ensure the safety, quality and reliability of trial results. This will allow AI systems to be more easily appraised by investigators and others, such as regulators[39].

38.  Care and standards should not waver when AI systems are introduced. The challenge lies in understanding how these standards are applied and should adapt to safeguard the therapeutic relationship. This includes how to address the benefits and risks of AI systems considering their opacity (i.e. ‘black box’[40] algorithms) and other shortcomings in transparency and reproducibility.

39.  Professional standards should strive to protect and enable doctors and other healthcare professionals to use AI systems with discretion in the best interests of patients, to understand essential elements of AI systems, and to help explain and support patients. Notwithstanding the potential for AI systems to be effective supporting tools, the critical thinking and expertise of health professionals should not be underestimated[41].

40.  Challenges also lie in assessing the risks of AI systems. By integrating ethics and responsibility into AI use and, in turn, into the strategic implementation and organisational planning processes of AI systems[42], risks can be mitigated and trustworthiness maintained. A responsible approach to AI should place humans (i.e. patients) at the centre and should align with stakeholder expectations, as well as applicable regulations and laws.[43]

Recommended Action

41.  It is necessary to establish clear guidelines and regulations for the development and use of AI systems. This includes ensuring that developers are held accountable for any negative impacts that the system may have, and that users are educated on the potential risks and ethical considerations when using AI systems.

42.  AI systems in healthcare should be governed on the basis of ‘meaningful human control’[44] and the assertion of human values, with recourse to verified information, evidence, approval and guidance by regulatory bodies, based on transparent information and adequate quality controls (i.e., validating, certifying and regulating medical devices). This includes ways to question AI system outputs (e.g., which AI system is being used and what is its intended use).

43.  The development of AI systems at all stages, from problem selection to deployment, should consider potential ethical implications. This requires multidisciplinary teams of experts (e.g. sociologists, psychologists, philosophers) and representative groups (e.g. patients, clinicians) who can detect and mitigate potential problems in AI system applications (e.g., developing user interfaces that encourage critical thinking and assessment)[45].

Ø  For example, in the Netherlands, guidelines for high-quality diagnostic and prognostic applications of AI systems in healthcare[46] are an example of good professional conduct in the development, testing and implementation of AI systems.

44.  Health professionals should be able to assess AI system risks (e.g. sources of possible bias when interpreting the plausibility of AI system outputs). Guidance, training and capacity building are needed to support them in addressing whether, what and how AI system information and outputs are communicated to patients. This comprises the standardisation of training and the certification of ‘high risk’ AI systems, including, where appropriate, collaboration with AI developers to foster understanding, confidence and transition towards AI-enabled healthcare. In doing so, skills are developed, anxiety about “deskilling” is reduced and, importantly, an equilibrium can be struck between human care and AI-enabled support systems.

Patients need:

·         Support from doctors and other health professionals in understanding and/or being involved in the decision-making process when AI systems make important (probabilistic) determinations about their health

Health professionals and/or healthcare providers have responsibility to:

·         Support and empower health professionals in the transition towards AI-enabled healthcare, reinforcing their place, purpose and accountability vis-à-vis AI systems. This includes:

o   maintaining and adapting professional standards and training which help to understand and overcome shortcomings in AI system outputs

o   providing guidance regarding ‘essential elements’ of AI systems

o   encouraging discretion to distance themselves and/or otherwise oppose AI system outputs should there be a sufficient degree of uncertainty (e.g. ‘false positives’ in AI-assisted radiology, AI-driven incidental findings in genomics)

·         Foster multidisciplinary collaboration with AI developers towards the establishment of shared responsibility (and liability) for AI-enabled healthcare

·         Promote minimum standards of information and explainability for AI systems (i.e. regarding the acquisition, authorisation, practical use and iterative learnings of AI systems)       

Self-determination regarding health data

Context

45.  Article 10 of the Oviedo Convention states that everyone has the right to respect for private life in relation to information about his or her health. Furthermore, everyone is entitled to know any information collected about her or his health, which is likely to include its collection and processing by AI systems. This is underlined in paragraphs 66 et seq. of the Explanatory Report to the Oviedo Convention:

“66. A person's "right to know" encompasses all information collected about his or her health, whether it be a diagnosis, prognosis or any other relevant fact.

(…)

70. Furthermore, it may be of vital importance for patients to know certain facts about their health, even though they have expressed the wish not to know them. For example, the knowledge that they have a predisposition to a disease might be the only way to enable them to take potentially effective (preventive) measures. In this case, a doctor's duty to provide care, as laid down in Article 4, might conflict with the patient's right not to know. It could also be appropriate to inform an individual that he or she has a particular condition when there is a risk not only to that person but also to others. Here too it will be for domestic law to indicate whether the doctor, in the light of the circumstances of the particular case, may make an exception to the right not to know.”

46.  Health data are personal data that should be afforded greater protection, especially when healthcare providers pioneer the use of new technologies, like AI systems. Healthcare providers and healthcare professionals act as gatekeepers in what health data are collected and therefore have responsibilities in safeguarding patient data and preserving the medical confidentiality of the patient-doctor relationship.

47.  Health data are becoming more dynamic and complex. As the health data space expands, larger amounts of data from patient records and other sources (e.g., signal data, medical notes and speech, lab results, and data collected from prescribed wearables) will likely be aggregated into data platforms and shared with and between other AI systems and healthcare settings across borders. To this end, patients will likely play a more altruistic and active role in managing their health data, freely determining whether and how they are used by AI systems.

Challenges

48.  The way AI systems use health data is a challenge requiring further reflection. As AI systems train, share and infer new patterns and findings from these data, greater implications for privacy are likely to emerge. At its root, the role played by health professionals and healthcare providers in collecting, generating and enriching, as well as safeguarding health data, becomes ever more important.

Ø  For example, the protection of and respect for privacy in and out of healthcare settings is a cause for concern. One arrangement between a company and a healthcare institution resulted in the company’s access to over 1 million pseudonymised patient data files. Following a review of the deal, a court found that there had been a violation of the right to privacy: the patients concerned had not been properly informed by the healthcare institution that their data had been shared. Made without patients’ knowledge or consent, the deal resulted in non-compliance with data protection law.

49.  While the protection of data is crucial, a degree of openness about the data collected from databases and aggregated into the platforms used to pre-train, train and validate AI systems is an important challenge to be met. Such openness would help to evaluate potential biases in the patterns and findings generated by AI systems, thereby mitigating any unfair treatment (discrimination) of different populations.

Recommended action

50.  Subject to any requirements in national legislation, patients’ consent to the use of their data by AI systems should be free, express and informed[47][48]. They should know what is collected and how it may be shared. They should be provided with assurances and, possibly, different types of consent options[49] to facilitate the collection and processing of data for AI systems. They must be able to decide not to be subject to automated decision-making by AI systems in decisions that matter to them in a meaningful way.
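As one illustration of what “different types of consent options” might mean in practice, the sketch below models a granular, withdrawable consent record. The structure and field names are hypothetical, chosen for illustration only, and are not drawn from Convention 108, the GDPR or this report.

```python
# Illustrative sketch only: one way a provider's system might record granular,
# withdrawable consent for AI-related data use. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDataConsent:
    patient_pseudonym: str
    collection: bool = False           # consent to data collection for AI use
    secondary_sharing: bool = False    # consent to sharing with third parties
    automated_decisions: bool = False  # consent to automated decision-making
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def withdraw_all(self) -> None:
        """Consent must remain freely withdrawable at any time."""
        self.collection = self.secondary_sharing = self.automated_decisions = False

# Express, option-by-option consent: collection only, no sharing, no automation.
consent = AIDataConsent("a1b2c3", collection=True)
print(consent)
```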

51.  At the same time, healthcare providers should ensure that safeguards are in place to protect the privacy and confidentiality of patients throughout their healthcare journey, especially at the point where data are collected. To this end, there should be ever greater vigilance with patient data, mitigating any inadvertent or otherwise ambiguous data sharing with third parties.

Patients need:

·         To decide what personal health data are being collected by AI systems[50]

·         Assurances that privacy and personal data are being safeguarded, including not being subject to automated decision-making by AI systems[51]

Health professionals and/or healthcare providers have responsibility to:

·         Provide patients with options for consent in the collection and processing of their data used by AI systems[52]

·         Review what, how and when patient data are safeguarded, in accordance with data protection standards

·         Train health professionals in understanding and managing the benefits and risks associated with AI systems, including the safeguarding of patient data

Equitable Access to Health Care

Context

52.  Article 3 of the Oviedo Convention refers to the provision of equitable access to health care of appropriate quality. Subject to health needs and available resources, this likely comprises equitable access to the benefits of AI systems used in health care. It is foreseeable that, as AI systems develop, the principle of equitable access to AI systems will become more important. This can be inferred from paragraph 24 of the Explanatory Report to the Oviedo Convention which refers to “fitting standards in the light of scientific progress”:

“24. The aim is to ensure equitable access to health care in accordance with the person's medical needs. "Health care" means the services offering diagnostic, preventive, therapeutic and rehabilitative interventions, designed to maintain or improve a person's state of health or alleviate a person's suffering. This care must be of a fitting standard in the light of scientific progress and be subject to a continuous quality assessment.

25. Access to health care must be equitable. In this context, "equitable" means first and foremost the absence of unjustified discrimination. Although not synonymous with absolute equality, equitable access implies effectively obtaining a satisfactory degree of care.”

53.  It is expected that AI systems will help to redress and resolve issues concerning equitable access to health care, especially in countries with stretched health care services.

Ø  For example, breast cancer is an increasing problem in low- and middle-income countries, where screening programmes for early detection are rare. In some countries, this has resulted in the development of cheaper, non-invasive alternative tests that use thermal imaging and AI systems. Although considered less reliable than mammography, it is hoped that such tests might help spot some early cancers in people who might not otherwise have access to mammography screening[53].

54.  Measures taken by states to ensure equitable access may take many different forms, and a wide variety of methods may be employed. Assessing the risks and benefits of deploying AI systems may involve other factors, such as comparing the cost-benefit ratio of adopting AI systems and the availability of affordable alternatives to expensive AI systems. At this stage, key questions include whether AI systems tie up fewer resources than the traditional process, whether they generate more health benefits for the patient than the traditional method, and whether the organisation is using outdated technology without having noticed. Targeting limited resources as effectively as possible to produce health and well-being, and to increase equality, is also essential when discussing the use of AI systems.

55.  There is an opportunity for AI systems to mitigate pre-existing bias found in modern medicine. In the future, AI might provide options to redress differences in access to healthcare for those to whom fairness is owed (e.g. older people, lower socio-economic groups, ethnic minorities).

Ø  Sex and gender are important determinants of health. Women and men can have different symptoms and react differently to treatments (it is well known that the pharmacokinetics and pharmacodynamics of pharmaceutical agents differ between the sexes, resulting in differential adverse event profiles and further impacting treatment outcomes). This can be attributed in part to the under-representation of women in clinical trials, leading to bias favouring male subjects.

56.  Considering the therapeutic nature of the patient-doctor relationship, it is foreseeable that, for reasons of cost and access, medical consultations could shift from face-to-face encounters to online (telemedicine) meetings, some of which may substitute and/or complement a human doctor with an AI-enabled chatbot. It remains to be seen whether such a shift will result in a satisfactory degree of care because, arguably, the therapeutic relationship is one which is inherently human. In other words, the distance created by AI-enabled virtual assistants should not be overlooked. For example, AI systems (e.g. health checker apps) are unlikely to discern a patient’s symptoms where underlying (hidden, unquantifiable) causes are at play (e.g. psychological, social, cultural) which require a greater understanding and trust-building process to manifest.

Challenges

57.  Equitable access to AI-enabled care will be challenged where its deployment is geographically uneven across health settings in a given country, leading to inequities in access to health care. Other inequities may result where such access depends on the financial means of individuals. ‘Two-tier’ access to care could occur: the wealthier having access to human doctors and the less wealthy having access to AI-enabled chatbots or, conversely, the less wealthy not having access to the latest AI developments such as robotic surgery. These differences in access could be compounded further should AI systems act as gatekeepers in determining care needs and treatments (e.g., medically prescribed AI-enabled apps).

58.  The challenge for different healthcare organisations will be to define responsibilities for the management of AI systems as a whole and to assess whether adjustments to healthcare culture and practice are likely to be significant. It will take time for AI systems to become an integrated feature of ‘back office’ infrastructure and ‘front office’ patient care.

59.  Computational infrastructures, where data are stored and/or algorithms run, may require additional investment, which could represent an inequity factor in the deployment and exploitation of AI systems and possibly sustain further inequalities: wealthy territories may benefit more from the potential of AI systems, collected data could be biased towards wealthier social classes, and poorer groups could be penalised and marginalised by AI-enabled devices that have not been adequately trained on representative populations. Herein lies a significant cause for concern and a challenge to be addressed: the design, training and validation of AI systems, using data.

60.  There is a problem of bias throughout the design, development and training of AI systems. There may be bias in algorithms powering the system. There may also be bias in the data used to train, test and validate the AI system. Many other types of bias, such as contextual bias, should also be considered. The concern is that such (upstream) biases could foster (downstream) discrimination which adversely affects equitable access to health care, especially for underrepresented people and groups.

Ø  AI systems entail risks to individuals as well as to groups, communities and wider populations due to various biases (e.g., ‘automation bias’, and racial bias in training datasets[54], as asserted in various academic research). Bias found in AI systems deployed by healthcare providers may increase the propensity for accentuated and disproportionate risks to health for population groups[55].

Ø  For example, in a study published in Science in October 2019, researchers found significant racial bias in an algorithm used widely in the US health-care system to guide health decisions. The algorithm was based on cost (rather than illness) as a proxy for needs; however, the US health-care system spent less money on black patients than on white patients with the same level of need. Thus, the algorithm incorrectly assumed that white patients were sicker than equally sick black patients. The researchers estimated that the racial bias reduced the number of black patients receiving extra care by more than half[56].
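The mechanism behind this finding can be reproduced with a few lines of simulation. The sketch below uses entirely hypothetical numbers (not the study’s data): two groups are equally ill, but less is spent on one of them, so even a model that predicts cost perfectly still flags far fewer patients from that group for extra care.

```python
# Illustrative sketch only (hypothetical numbers, not the study's data): why
# training on cost as a proxy for illness disadvantages a group on which the
# health system spends less at the same level of need.
import random

random.seed(0)

patients = []
for _ in range(10_000):
    group = random.choice(["A", "B"])               # two equally sick groups
    illness = random.uniform(0.0, 1.0)              # true level of need
    spending_factor = 1.0 if group == "A" else 0.6  # less is spent on group B
    cost = illness * spending_factor                # the label the model learns
    patients.append((group, illness, cost))

# A "perfect" cost model predicts cost exactly, yet still encodes the bias:
# select the top 3% by predicted cost for an extra-care programme.
threshold = sorted(p[2] for p in patients)[int(0.97 * len(patients))]
flagged = [p for p in patients if p[2] >= threshold]

share_b = sum(1 for p in flagged if p[0] == "B") / len(flagged)
print(f"share of group B among patients flagged for extra care: {share_b:.1%}")
# Prints far below the ~50% share that equal need would warrant.
```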

Recommended action

61.  A sustainable approach to providing access to health care using AI systems should be one which includes a human rights perspective to safeguard well-being and protect the dignity of everyone. This human rights perspective should be ‘end-to-end’ throughout a patient’s healthcare journey, from initial consultation through to treatment and care at home. This includes, in certain situations, such as AI-based diagnostics, the possibility for patients to oppose the offer of AI-enabled care.

62.  In their design, development and training phases, action is necessary to address biases in AI systems to mitigate the potential for discriminatory access to healthcare affecting people and groups (based on e.g., race, gender, age or disability). ‘Ethics by design’ in the early stages of AI system development, together with human evaluations of such systems (impact assessments), can help to mitigate the effects of bias. More representative training datasets, bias benchmarking frameworks, and diversity among those tasked with assessing data quality should be considered.

Patients need:

·         Access to a human doctor

·         Equitable access to the benefits of AI-enabled health services with assurances that they promote patient well-being

·         The choice of opting for a blend of access to a human doctor and/or AI-enabled health services as well as ‘in-person’ support when using them.

Health professionals and/or healthcare providers have responsibility to:

·         Ensure that AI systems are clearly identifiable and distinguishable from human care

·         Ensure that any tiered or blended options for access to AI health services are transparently explained

·         Mitigate bias in all its forms (e.g. human impact assessments of AI systems, efforts to ensure the representativeness of training datasets, bias benchmarking frameworks, and diversity in those tasked with assessing data quality)

·         Ensure that medically prescribed AI-enabled apps do not act as gatekeepers in determinations about access to healthcare

·         Maintain patient well-being, not dilute it with AI system offerings devoid of human interaction

III. LOOKING FORWARD

63.  Considering the trends and examples in AI systems being developed and deployed in healthcare, there appear to be many opportunities to improve the sector in a manner that could transform the therapeutic relationship between patients and doctors.

64.  That said, there must be trust in the professional standards which scrutinise the safety, quality and efficacy of AI systems; should this trust falter (e.g. when AI systems are considered inscrutable, inconclusive or even misguided[57]), patient autonomy will be weakened.

65.  The trustworthiness of AI systems in healthcare depends on human oversight and the explainability of AI outputs. The “black box” character of AI systems has been criticised as having a bearing on the risk of bias and discrimination without good options for detecting such failures in performance. The responsible use of AI in healthcare relies at least on a basic understanding of the strengths of AI recommendations, the factors designed into the AI system and the technology’s limitations with respect to specific groups of patients.

66.  The place for AI systems in the therapeutic relationship will require a shared approach to their governance and application, including ‘bottom-up’ public engagement and dialogues on their design, development and application, with an active role to be played by patient associations.

67.  Yet, AI systems should never be considered solely (and thereby not overused) as a means of improving the provision of cost-efficient health care, to the detriment of patient-centred care. The therapeutic relationship is a human construct, involving people making decisions based on cognition and social behaviour[58]. AI-enabled care should never be a substitute for people (in vulnerable situations) who need human professional contact and guidance. Careful attention should be paid to not putting the patient in a worse position if AI systems are not used or are otherwise denied.

68.  Doctors and other healthcare professionals will require support in adapting to AI systems which guide their actions. They will need to be informed, given explanations and trained accordingly, with considerable emphasis on their critical role in protecting and safeguarding patient well-being and quality of care. Introducing this in undergraduate education and in specialised training for health professionals (e.g. to enable specialised medical teams to embrace new AI systems in their work) will be important.

69.  Above all, AI systems should never undermine the therapeutic relationship, however good the intentions. They must be made transparent to patients and doctors so that both are aware of what is running in the background. Patient autonomy and agency, coupled with the ‘human warranty’ of health professionals, are the path forward to strengthening a therapeutic relationship impacted by AI systems.



[1] Since replaced by the Steering Committee for Human Rights in the fields of Biomedicine and Health (CDBIO).

[2]  European Convention on Human Rights and Biomedicine of 1997, otherwise known as the “Oviedo Convention”, Article 2.

[3] Italian Committee for Bioethics and Italian Committee for Biosecurity, Biotechnologies, Sciences of Life, “Artificial Intelligence and Medicine: ethical aspects”, May 2020 - https://bioetica.governo.it/media/4261/p6_r_2020_gm_artificial-intelligence-and-medicine_en.pdf

[4] M.F. Bulut et al. “Technical Health Check For Cloud Service Providers”, arXiv:1906.11607, 2019

[5] For instance, training an AI system on 1 million genomes, even as synthetic data, would require 8 petabytes of storage and processing capabilities.
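(A plausible reconstruction of the arithmetic behind this figure, assuming roughly 8 GB of compressed sequence data per genome, a per-genome size not stated in the report: 1,000,000 genomes × 8 GB ≈ 8 × 10^15 bytes = 8 petabytes.)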

[6] Image recognition technologies, for example, can decide what types of objects appear in a picture. The algorithm ‘learns’ by defining rules to determine how new inputs will be classified. The model can be taught to the algorithm via hand-labelled inputs (supervised learning); in other cases, the algorithm itself defines best-fit models to make sense of a set of inputs (unsupervised learning). In both cases, the algorithm defines decision-making rules to handle new inputs. Critically, a human user will typically not be able to understand the rationale of decision-making rules produced by the algorithm. “What is machine learning?” (trendmicro.com)
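As an illustration of the two learning modes described in this footnote (a minimal sketch, not drawn from the report; the toy data and the use of scikit-learn are assumptions):

    # Supervised vs. unsupervised learning on toy data (illustrative only).
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]     # input feature vectors
    y = [0, 0, 1, 1]                         # hand-assigned labels

    # Supervised: the model is taught from hand-labelled inputs.
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[1, 0.5]]))           # classifies a new input

    # Unsupervised: no labels; the algorithm defines its own best-fit grouping.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)                        # cluster assignments it produced

In both cases, the fitted model encodes decision rules whose rationale the user does not directly see, which is the concern this footnote raises.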

[8] U. Muehlematter et al., “Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015-20): a comparative analysis”, The Lancet Digital Health, vol. 3, issue 3, e195-e203, 2021.

[9] US AI medical algorithms break down across specialties as follows: Radiology 396; Cardiology 58; Hematology 14; Neurology 10; Clinical chemistry 7; Ophthalmic 7; Gastroenterology and urology 5; General and plastic surgery 5; Pathology 4; Microbiology 4; Anesthesiology 4; General hospital 3; Orthopedic 1; Dental 1.

[11] A recent study shows that chest x-ray AI systems can predict age, self-reported gender, self-reported ethnicity and insurance status: https://www.jacr.org/article/S1546-1440(22)00544-0/fulltext?fbclid=IwAR0c8jDKDkLpGkyXCDr9FA_VBHsXpFAAJteQX9i8hsBpzq8RLCHrk-zYW1Q&mibextid=Zxz2cZ

[16] Yuqi Si et al, “Deep representation learning of patient data from Electronic Health Records (EHR): A systematic review”, Journal of Biomedical Informatics 115 (2021) 103671.

[19] Other examples include European start-ups such as Cerenion, a company which introduced a device which monitors the brain function of intensive care patients, and Omnidermal, which uses AI algorithms in dermatology. Global tech companies such as Apple are developing a device that detects tremors afflicting sufferers of Parkinson’s disease, and Samsung has introduced AI breast screening solutions.

[20] Project Classica uses AI-based algorithms to differentiate between cancerous and non-cancerous tissues in real time. This helps to clinically validate a novel AI-guided intraoperative decision support technology in the surgical care of cancer patients.

[28] See guidance from the Central Ethics Commission at the Federal Chamber of Physicians in Germany: https://www.zentrale-ethikkommission.de/fileadmin/user_upload/zentrale-ethikkommission/BAEK_SN_Behandlungsdaten.pdf (German language only; translation to be made available).

[30] It is noteworthy that more and more national or Europe-wide regulations referring to the processing of health data and the training of algorithms are based on legislation, not consent.

[32] The European Data Protection Board and the European Data Protection Supervisor have released useful guidelines on transparency and consent, covering how patients should be informed about AI and how they can give their consent.

[33] For example, the EU draft AI Act and the Council of Europe’s work on a framework convention on AI, human rights, democracy and the rule of law.

[34] EU Regulation of medical devices: For the purposes of this Regulation, the following definitions apply: (1) ‘medical device’ means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: — diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease, — diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability (…).

[36] For example, catheters, hearing aids, short-term contact lenses.

[37] For example, incubators, insulin pens, long-term contact lenses, and ventilators.

[38] For example, pacemakers, prosthetic heart valves, surgical mesh, breast implants and other devices that require permanent monitoring throughout their lifetimes.

[39] AI algorithms might undergo updates or changes, so it will be necessary to determine which version of the AI system was deployed in the clinical trial.

[40] In computing, a ‘black box’ is a device, system or program that allows you to see the input and output but gives no view of the processes and workings in between.

[41] Babylon Triage App : https://youtu.be/FQm-wnUJNrU?t=74

[42] Wang et al., 2023

[45] npj Digital Medicine (2022) 5:197 ; https://doi.org/10.1038/s41746-022-00737-z

[47] See Recommendation CM/Rec(2016)8 of the Committee of Ministers to the member States on the processing of personal health-related data for insurance purposes, including data resulting from genetic tests.

[48] When consent is not required by legislation, the patient (data subject) should have the right, as an additional safeguard, to refuse the use of AI, to restrict or object to the processing of their health data by algorithms, or a combination of both.

[49] The most common types of consent are informed consent for a treatment and/or intervention, research consent (biomedical consent), consent for the processing of personal data, and safeguard consent for the processing of health data. There are also other types of consent, as follows: Explicit Consent (required by the GDPR for the processing of special categories of personal data), Implied Consent (used in some countries also in healthcare), Granular Consent, General Consent (used for various purposes at the same time), Conditional Consent, Ongoing Consent (which can also be considered a wide consent, used in some countries in healthcare and secondary use), Presumed Consent (used in some countries for the processing of health data when providing health care), Revocable Consent and Dynamic Consent (the most preferred consent model for various changing situations).

[50] Subject to any requirements in national/European legislation regarding consent.

[51] Idem.

[52] Idem.

[55] A study shows that chest x-ray AI systems can predict age, self-reported gender, self-reported ethnicity and insurance status: https://www.jacr.org/article/S1546-1440(22)00544-0/fulltext?fbclid=IwAR0c8jDKDkLpGkyXCDr9FA_VBHsXpFAAJteQX9i8hsBpzq8RLCHrk-zYW1Q&mibextid=Zxz2cZ

[57] Report on the impact of artificial intelligence on the doctor-patient relationship, by Brent Mittelstadt, Senior research fellow and Director of research at the Oxford Internet Institute, University of Oxford, United Kingdom: the impact of artificial intelligence on the doctor-patient relationship - human rights and biomedicine (coe.int) (page 13).