Researchers at the University of Pennsylvania have launched an AI-powered tool called the Media Bias Detector, designed to provide detailed insights into how various news outlets report on different topics. 

The project, led by Duncan Watts, founder of the Computational Social Science Lab (CSSLab) at Penn, aims to offer a new level of understanding regarding media coverage across the ideological spectrum. 

(Photo: Gerd Altmann from Pixabay)

The Media Bias Detector

The Media Bias Detector allows users to select topics and publication names from simple drop-down menus to see how different news outlets have covered particular subjects during specific periods.

For instance, it enables comparisons of how often The New York Times has published stories about Joe Biden's age versus Donald Trump's or how frequently Fox News has reported on climate change compared to CNN during a heatwave.

Watts explained that the goal of this tool is not to determine the truth or to identify which outlets are more biased but to quantify how various publishers cover different topics and events. 

The challenge of tracking this data has been significant due to the sheer volume of daily news stories and the resources required to read and classify them.

However, the CSSLab recognized that artificial intelligence could enhance the efforts of human researchers. With AI, the team can classify text at very granular levels and measure a wide range of aspects that would have been impossible to track just a few years ago.

Amir Tohidi, a postdoctoral researcher at the CSSLab, noted that the Media Bias Detector processes the top publicly available articles from major online news publications daily.

These articles are then fed into OpenAI's GPT-4, which classifies each one by topic and analyzes its tone at the sentence level, labeling every sentence as positive, negative, or neutral.

Additionally, the AI classifies each article's overall political leaning on a Democrat/Republican spectrum, mapping the media landscape in close to real time. 
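The pipeline described above — split each article into sentences, classify the tone of each, then tally the results — can be sketched in a few lines. This is a minimal illustration, not the CSSLab's actual code: the keyword-based `classify_tone` function below is a stand-in for the GPT-4 call, which would label each sentence far more robustly.

```python
import re

def classify_tone(sentence: str) -> str:
    """Return 'positive', 'negative', or 'neutral' for one sentence.

    Stand-in for the GPT-4 call the Media Bias Detector uses; a real
    implementation would send the sentence to the model and parse its label.
    """
    positive = {"gain", "success", "improve", "strong"}
    negative = {"crisis", "fail", "decline", "weak"}
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    if words & positive:
        return "positive"
    if words & negative:
        return "negative"
    return "neutral"

def analyze_article(text: str) -> dict:
    """Split an article into sentences, classify each, and tally the tones."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    tones = [classify_tone(s) for s in sentences]
    counts = {t: tones.count(t) for t in ("positive", "negative", "neutral")}
    return {"sentences": len(sentences), "tone_counts": counts}

report = analyze_article(
    "The economy showed strong growth. Critics warn of a looming crisis. "
    "The bill passed on Tuesday."
)
print(report)
```

Aggregating these per-sentence labels across a publisher's daily output is what lets the tool compare, say, tone of Biden coverage against tone of Trump coverage over the same window.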


Human Feedback

Human feedback is incorporated into the system to ensure the accuracy of the Media Bias Detector. Jenny Wang, a predoctoral researcher at Microsoft and a member of the CSSLab, explained that they use a verification process where research assistants review the AI's summary of articles and make necessary adjustments.

This human-in-the-loop approach helps validate the AI's performance. The research team also compared the AI system's outputs with those of expert human evaluators, such as doctoral students specializing in media and politics.

According to Yuxuan Zhang, a data scientist at the CSSLab, the correlation between the AI's results and human evaluations was very high. In some tasks, GPT-4 was reported to have outperformed human counterparts, giving the researchers confidence in the AI's reliability.

The Media Bias Detector ultimately offers a new opportunity for understanding the subtle ways bias manifests in the media. Wang noted that while everyone has their own sense of different publishers' leanings, this tool is the first to analyze the data comprehensively, making large-scale analysis possible.

"Our goal is not to adjudicate what is true or even who is more biased," Watts said in a press release. "Our goal is to quantify how different topics and events are covered by different publishers and what that reveals about their priorities."


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.