Vatican's AI Expert on a Mission to Ensure Ethical Tech Use

The profound impact of AI on humanity must be examined.

Friar Paolo Benanti, a key figure in the Vatican's ethical considerations of artificial intelligence (AI), is playing a pivotal role in shaping the Roman Catholic Church's stance on technology.

Wearing the humble brown robes of his Franciscan order, the 50-year-old Italian priest advises Pope Francis and engages with top engineers in Silicon Valley. Benanti, with a background in engineering and a doctorate in moral theology, aligns with Pope Francis's call for an international treaty to ensure ethical AI use.

In an Associated Press story, Benanti raises a fundamental question: "What is the difference between a man who exists and a machine that functions?" Teaching courses in moral theology and bioethics at the Pontifical Gregorian University, he emphasizes the profound impact of AI on humanity.

Pope Francis (C) presides over the funeral of Italian Cardinal Sergio Sebastiani at the altar of the Chair in St. Peter's Basilica in the Vatican, on January 17, 2024. FILIPPO MONTEFORTE/AFP via Getty Images

"It is a problem not of using (AI) but it is a problem of governance. And here is where ethics come in - finding the right level of use inside a social context," he said, as quoted in the AP News report.

The friar's unique perspective, combining engineering, ethics, and theology, positions him as a critical voice in the global dialogue on regulating AI. With the European Union leading the way on comprehensive AI regulation, Benanti's efforts align with broader initiatives to ensure responsible and ethical AI development worldwide.

Tech Companies Failing to Address AI Ethics Gaps

A Stanford University investigation found that prominent tech corporations are failing to prioritize ethical AI development despite their public pledges. The university's Institute for Human-Centered Artificial Intelligence notes that firms publish AI principles and fund AI ethics research, but implementation lags behind, Al Jazeera reported.

WHO Issues Ethics Guidance on Healthcare AI

Amid these concerns about the use of AI, the World Health Organization (WHO) has issued detailed recommendations on the ethical governance of large multi-modal models (LMMs), a rapidly developing kind of generative AI increasingly used in healthcare.

The guidance, posted on the UN health agency's website, includes over 40 recommendations for governments, technology firms, and healthcare practitioners on using LMMs ethically and responsibly to improve population health. In 2023, LMMs such as ChatGPT, Bard, and Bert gained popularity for their capacity to take in varied data inputs and mimic human communication.

The WHO stresses the need for transparent information and policies governing LMM design, development, and deployment in healthcare to guard against misinformation and bias. According to the guidance, LMMs are used in diagnosis and clinical care, patient-guided use, administrative tasks, medical education, and scientific research.

Key considerations include government investment in public infrastructure, adherence to ethical standards, regulatory evaluation, post-release audits, and stakeholder participation in development. The WHO emphasizes that addressing ethical issues and building public confidence in medical AI applications is essential to the safe and effective use of LMMs in healthcare.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.