AI in the Doctor-Patient Relationship Raises Bioethics Issues as Digital Transformation Continues

Artificial Intelligence (AI) is becoming more prevalent in the healthcare industry. However, AI shouldn't be deployed in healthcare settings without consideration of the ethical standards needed to maintain a strong doctor-patient relationship.

A health worker prepares a dose of the Pfizer-BioNTech vaccine against COVID-19 for children aged 5 to 11 in Mexico City, on June 27, 2022. (Photo: PEDRO PARDO/AFP via Getty Images)

It is essential to maintain a relationship of trust between doctors and patients, as well as to protect human rights. According to a report from the Council of Europe's Steering Committee for Human Rights in the fields of Biomedicine and Health, AI can be used in healthcare for direct communication with patients and as a diagnostic tool.

Also Read: Top 10 AI Healthcare Startups For 2021

The Ongoing Ethics Debate

Despite the advancements in technology and the introduction of AI in clinical settings, there remains an ongoing debate about the doctor-patient relationship. According to the report, there is concern that the suitability of AI remains unproven and that it could be implemented poorly, or too widely, in ways that compromise the relationship between doctors and patients.

The report also notes that artificial systems' ability to diagnose and treat patients with minimal involvement from human clinicians remains far off.

A cautionary note in the report states, "the doctor-patient relationship is a keystone of 'good' medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship. The challenge... is to set robust standards and requirements for this new type of healing relationship."

Doing so will help ensure that the interests of patients and the moral integrity of medicine remain intact despite the use of AI.

Bioethics Issues

With this, bioethics issues resurface, such as inequality in access to high-quality healthcare, lack of transparency, social bias, automation bias, de-skilling, displaced liability, and the right to privacy.

Ironically, the "intelligent" component of AI, the reasoning behind its outputs, is hidden from view. These concerns exemplify why it is important to regulate the use of AI in healthcare. As healthcare organizations continue to embrace AI, these issues should be documented to ensure greater transparency.

As such, it is essential that governments and healthcare systems, as well as the scientific community and the public, be given the opportunity to have a say in the development of artificial intelligence and its use in healthcare.

To ensure that the best interests of patients and the human rights of everyone remain intact, the healthcare system must develop and adopt a comprehensive and coherent policy for the use of AI in health, based on the principles of maximum benefit, proportionality, fairness, transparency, and accountability.

As AI continues to be introduced into the medical field, these challenges will persist. For the technology to be implemented effectively, AI must be integrated into a "human" clinical setting rather than a "procedural" one, which can help address some of the ethical challenges.

In sum, the researchers found that ethical standards and guidelines for AI in healthcare should take into account the value of autonomy and privacy, as well as the need for accountability and transparency, as AI becomes more prevalent in the medical field. Government regulation and oversight are also important to prevent discrimination and unethical practices in the use of AI in healthcare.

Related Article: How Does AI Help Modern Healthcare, Exactly?

This article is owned by TechTimes

Written by April Fowell

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.