AI-Generated Research Takes Over Google Scholar: Is the Science World Being Flooded With Fake Studies?

Some researchers worry that AI-generated scientific papers are flooding Google Scholar.

Recent studies suggest that artificial intelligence (AI) is increasingly being used to produce scientific papers, many of which surface on Google Scholar. This flood of AI-generated content into the academic ecosystem poses a threat to the credibility of scientific knowledge.

Published in the Harvard Kennedy School's Misinformation Review, a recent study shows how AI-generated research is flooding platforms like Google Scholar, muddying the waters of scholarly communication and potentially compromising the integrity of the scientific record.

AI-Generated Research Papers Are Everywhere

Generative pre-trained transformers, such as the models behind OpenAI's ChatGPT, are large language models that can produce text quickly. In doing so, they provide a novel means of both interpreting and producing scholarly content. According to Gizmodo, AI tools can generate vast quantities of material, from figures and images to entire research papers, making it easier than ever to create academic work that appears credible on its face.

But that ease has a dark side. The study analyzed how often GPT-generated content appears in scientific papers indexed on Google Scholar, and its conclusions are disturbing.

Two-thirds of the papers analyzed showed evidence of AI-generated text, with some appearing in academic journals known for their excellence.
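Detection studies of this kind typically flag papers by searching for leftover chatbot boilerplate, phrases such as "as of my last knowledge update" that an attentive author would have removed. The phrase list and function below are an illustrative sketch, not the study's actual code or full methodology:

```python
# Minimal sketch: flag text containing telltale chatbot phrases.
# The phrase list is illustrative; real detection pipelines use
# broader queries and manual verification of each hit.

TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "i don't have access to real-time data",
    "as an ai language model",
]

def looks_gpt_fabricated(text: str) -> bool:
    """Return True if the text contains a known chatbot disclaimer."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

sample = "As of my last knowledge update in 2023, sea levels have risen."
print(looks_gpt_fabricated(sample))  # True
```

A filter like this produces false negatives by design: it only catches papers whose authors forgot to delete the chatbot's disclaimers, so the true share of AI-generated papers is likely higher than any phrase search reveals.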

GPT-Fabricated Papers Across Multiple Fields

The study found AI-generated research across multiple academic disciplines, including health, the environment, and computing. Specifically:

  • 14.5% of the GPT-fabricated papers concerned health.
  • 19.5% focused on environmental issues.
  • 23% related to computing research.

These findings show how widespread the problem is: GPT-generated content is no longer confined to obscure or non-peer-reviewed outlets but is seeping into the mainstream scientific literature.

The paper also noted that most of these papers appeared in non-indexed journals, as working papers, and even in established conference proceedings.

Dangers to Academic Integrity and Public Trust

The increase in AI-generated research poses two primary risks to the academic community and the broader public interest:

First, fake studies risk drowning out legitimate, peer-reviewed research, flooding academic databases with bogus information or outright nonsense.

Second, as AI-generated text grows more sophisticated, it becomes increasingly difficult for scientists and ordinary citizens alike to distinguish genuine research from fabricated work. This erodes confidence in the scientific literature, leaving readers unsure which information can be trusted and which may be misleading or even harmful.

"The risk of what we call 'evidence hacking' increases significantly when AI-generated research is spread in search engines," said Björn Ekström, a researcher at the Swedish School of Library and Information Science and co-author of the paper, in a University of Borås press release.

Because Google Scholar aggregates papers from various sources without the rigorous screening of more formal academic databases, anyone can access and retrieve these potentially deceptive papers.

The ease with which these papers can be found further complicates the issue, particularly for the general public who may struggle to distinguish peer-reviewed research from less credible sources.

Case Studies About AI's Growing Impact on Academic Publishing

This is not a new problem: publishers have already had to retract many poorly written and irrelevant papers in recent years. In 2021, Springer Nature retracted more than 40 papers that had appeared in the Arabian Journal of Geosciences; they had nothing to do with the journal's focus and made little sense.

AI's role in these incidents is becoming more evident. A notable example occurred in 2023, when the publisher Frontiers faced backlash for a paper in Frontiers in Cell and Developmental Biology that included AI-generated images depicting anatomically incorrect details. The paper was retracted after public criticism.

Cases like these show how quickly AI-powered misinformation can spread.

AI in Science: A Double-Edged Sword

Despite these challenges, AI holds vast potential for advancing scientific discovery. AI tools can help decode ancient texts, uncover new archaeological discoveries, and even improve fossil analysis.

Realizing that potential safely requires stronger safeguards at peer-reviewed journals and academic platforms: stricter checks on what gets published and more rigorous content screening, so that misleading AI-generated papers cannot creep into the scientific record and deceive the scientific community and the public.

Meanwhile, a Microsoft study examines how GPT-3 affects a writer's creativity and voice, concluding that it can undermine the authenticity of a writer's work.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.