Meta's Yann LeCun Criticizes AI Hype Around Bigger Models: 'It's Not Just About Scaling Anymore'

The AI expert explains why scaling laws alone won't create smarter machines.

The AI community has lived by a simple creed: bigger is better. OpenAI's influential 2020 paper, "Scaling Laws for Neural Language Models," led the charge by showing that model performance improves predictably with more parameters, bigger datasets, and more compute. This idea fueled years of enormous investment in AI infrastructure and ever-larger models.

But cracks are beginning to form in that philosophy, and Meta's AI chief is now challenging it head-on.

Yann LeCun Challenges the Scaling Doctrine


Meta's top AI researcher, Yann LeCun, recently challenged this fundamental tenet in a speech at the National University of Singapore. Increasing data and processing power will not automatically lead to genuinely smart AI systems, the 64-year-old computer scientist argued, as reported by Business Insider.

"Most interesting problems scale extremely badly," LeCun said. "You cannot just assume that more data and more compute means smarter AI."

He added that success on "very simple systems" creates the false hope that scaling up will lead to higher intelligence. Essentially, he argued, the AI community's "religion of scaling" could be taking it down the wrong road.

AI Progress Slows as Scaling Hits Limits

AI breakthroughs have begun to plateau because high-quality public training data is in short supply. LeCun contends that today's biggest AI models, even after ingesting an amount of data comparable to what a four-year-old's visual cortex has processed, still come nowhere near human-like intelligence.

Other experts concur. Scale AI CEO Alexandr Wang has referred to blind trust in scaling as "the biggest question in the industry," while Cohere CEO Aidan Gomez has termed it the "dumbest" approach to pushing AI forward.

LeCun is convinced the next generation of AI will have to do more than predict text and ingest massive amounts of data. Instead, he calls for AI systems that can learn new tasks quickly, understand the physical world rather than merely text, exercise common sense, reason and plan, and build enduring memory.

This "world model" approach aims to build AI that can predict how the real world changes in response to its actions, a major leap beyond today's pattern-matching software.

As LeCun explained to Lex Fridman a year ago, true innovation will come from machines that do not just respond to information but reason about cause and effect within a dynamic environment.

As for whether AI will replace humans, our previous report noted that LeCun does not believe it will.

ⓒ 2025 TECHTIMES.com All rights reserved. Do not reproduce without permission.
