Artificial intelligence is becoming more commonplace, although what constitutes artificial intelligence is itself a matter of debate. One area of application is in clinical settings.
For artificial intelligence to progress and to gain societal acceptance, clear ethical standards and guidance need to be developed and agreed, as well as widely understood by researchers and the general populace. This is particularly so in healthcare settings.
The ethical issue of greatest relevance is the need to maintain the relationship of trust between doctors and patients. There is also a wider consideration: safeguarding human rights.
These important issues are indicated within a Council of Europe report, from the Steering Committee for Human Rights in the fields of Biomedicine and Health. The report is authored by Dr Brent Mittelstadt, the Director of Research at the Oxford Internet Institute and a leading data ethicist.
Examples of where artificial intelligence can be used in healthcare include direct communication with patients, diagnostic tools and care agents.
Central to the ethics debate is the doctor-patient relationship. The report cautions that the suitability of artificial intelligence remains ‘unproven’ and this could, if implemented poorly or too widely, undermine the ‘healing relationship’.
According to the report: “A radical reconfiguration of the doctor-patient relationship of the type imagined by some commentators, in which artificial systems diagnose and treat patients directly with minimal interference from human clinicians, continues to seem far in the distance.”
The main cautionary note from Dr Mittelstadt is: “The doctor-patient relationship is a keystone of ‘good’ medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship. The challenge…is to set robust standards and requirements for this new type of ‘healing relationship’ to ensure patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of AI.”
In other words, the relationship between doctor and patient is subject to digital transformation, but the patient and their needs remain unaltered. The patient remains as vulnerable as before. A question also arises as to whether that vulnerability is worsened by the disruptive nature of the technology.
The primary bioethics issues that stem from this, and which all healthcare systems need to consider carefully, are:
- Inequality in access to high quality healthcare.
- Transparency to health professionals and patients.
- Risk of social bias in AI systems.
- Dilution of the patient’s account of well-being.
- Risk of automation bias, de-skilling, and displaced liability.
- Impact on the right to privacy.
One stand-out issue for resolution is the impact of artificial intelligence on transparency and informed consent. A second is bias, and how social biases inherent in artificial intelligence systems are acknowledged. The third is data handling and the individual patient's right to privacy.