In Support of Yann LeCun: Defending Scientific Integrity in AI Leadership
On the importance of researchers who refuse to sell fantasies as technology
A month ago, Meta announced major layoffs in FAIR, the research unit that Yann LeCun founded. FAIR has long been one of the biggest sources of NLP and AI innovation. Its posters at ACL used to attract large crowds, and breakthroughs such as the first Llama model came out of FAIR under LeCun’s supervision. LeCun himself is a renowned researcher, one of the pioneers of convolutional neural networks, the architecture that was state of the art for many computer vision and NLP applications for decades.
Yann LeCun was the last man standing among sane AI leaders.
When ChatGPT appeared, AI started to attract an enormous amount of money. Being labelled a leader in AI suddenly meant massive investments, stock prices shooting to the moon, government contracts and a lot of fame.
At first this visibility felt exciting. Very quickly, however, the space became crowded with salespeople, grifters and a whole cohort of victims of the Dunning–Kruger effect who decided that prompt engineering had turned them into AI experts.
I am writing this article to express support for Yann LeCun and all the other scientists and researchers who stayed true to their principles and refused to jump on the AI monetization train by overpromising and selling the impossible.
The Holy Trinity of Hype: Amodei, Altman and Hinton
The noise arrived almost immediately. Many people helped to spread what is most likely misinformation and to inflate the AI bubble we see today. The champions of all this hype are of course Dario Amodei, Sam Altman and Geoffrey Hinton.
In August 2023, Amodei gave an interview to Dwarkesh Patel saying that in 2-3 years Claude will very likely be conscious and reach human level intelligence. Since then he has: 1) told the US Senate in 2023 that in two to three years (that is, now) AI systems could “fill in all the missing pieces” needed to plan large scale biological attacks, 2) said it is “plausible” that in two to three years AI will become “better than almost all humans at almost everything”, 3) proposed that AI could possibly double the human lifespan, and 4) warned that AI could eliminate up to half of all entry level white collar jobs within one to five years, pushing unemployment toward 10–20 percent.
While Amodei is definitely the champion of spitting out [possibly] hallucinated claims about AI, his main competitor, Sam Altman, is also doing very well on this front. In his January 2025 blog post “Reflections”, Altman wrote that “we are now confident we know how to build AGI as we have traditionally understood it” and predicted that the first AI agents would “join the workforce” this year. He has also repeatedly said that superintelligent AI, smarter than humans in all aspects, is likely to arrive before the end of this decade, and described a future in which “You will be thrilled that the AI has invented all of the science for you and cured disease and you know, made fusion work and just impossible triumphs we can’t imagine.” Like Amodei, he warned that AI will cause entire job categories to disappear and handle something like 30–40 percent of existing jobs. And of course, there is the promise of supportive and empathetic AI that can be your life companion.
I suspect that in a couple of years on almost any topic, the most interesting, maybe the most empathetic conversation that you could have will be with an AI.
Sam Altman, January 2025
While Amodei and Altman mostly hype up the big beautiful tomorrow filled with superintelligence curing diseases and discovering new laws of nature, Geoffrey Hinton chose a different type of AI hype, increasing p(doom) and fearmongering. It is essentially the same as what Altman and Amodei do and might even have a better effect on fundraising, as it overpromises the power that a government, a company, or an individual might acquire within a few years if they invest in AI.
Over the last two years he has repeatedly said that AI may be a “more urgent” threat to humanity than climate change and talked about a 10–20 percent chance that AI could lead to human extinction or seize control from humans within a few decades.
Honourable mentions go to Elon Musk and Bill Gates, the New York Times for repeatedly publishing insane articles, Gartner, McKinsey and similar consulting agencies, and an army of random people who suddenly discovered a passion for AI and the money it might generate for them.
By 2026, GenAI will cut customer service and support agents by 20–30%
VS.
By 2027, 50% of organizations that expected to significantly reduce their customer service workforce will abandon these plans, according to Gartner, Inc. This shift comes as many companies struggle to achieve their “agent-less” staffing goals, highlighting the complexities and challenges of transitioning to AI-driven customer service models.
The damage of AI hype
The impact of this noise was unprecedented. OpenAI and Anthropic now have massive valuations that are far above their current revenue. OpenAI is valued at around 500 billion dollars, with annualised revenue in the low tens of billions. Anthropic sits at roughly 183 billion dollars while its run rate is in the single digit billions.
Nvidia has become a 5 trillion dollar company, roughly comparable to the nominal GDP of Germany. A lot of startups that have barely delivered anything are getting funded with billions, simply for having an AI story. Thinking Machines, for example, raised about two billion dollars in a seed round at a twelve billion dollar valuation and is already in talks for funding at a fifty to sixty billion dollar valuation, long before it has proved any breakthrough beyond a fine-tuning API.
European Union authorities got so worried that they started piling on regulation, thinking and rethinking every possible AI risk. They ended up creating the first horizontal AI law in history, the EU AI Act, which regulates AI across sectors with a detailed risk classification and heavy requirements for high risk and foundation models. In practice this often means that only a masochist would want to build high impact AI in the EU, as critics warn that the complexity and cumulative burden of the AI Act, GDPR and related rules can slow down innovation and push investment elsewhere.
The Voices of Sanity
Amid all this, there were few voices of reason, and hardly any from people who were truly close to the creation of proprietary LLMs.
Yann LeCun was probably the only AI leader who was both directly involved in a large corporation training widely used general purpose models and consistently critical of the hype. Of course, there is Andrej Karpathy and a number of other researchers who have repeatedly called out the absurdity of overpromising and highlighted the limits of LLMs.
Many of them, even if they were once directly involved in building such models, have since stepped away from those roles to start their own companies or pursue independent research. LeCun was probably the last man standing here.
The narrative that
LLMs are not a path to AGI,
the widely hyped agentic AI will not automate most tasks for many years to come, and
current models are simply not cognitively there
does not fit the sales pitch needed to justify multi billion dollar funding.
Yann LeCun has even pushed back on this in a public debate with Elon Musk. And indeed, because he is a scientist and not a business person, the world had at least one authoritative voice speaking on behalf of researchers against the sales pitches.
On multiple occasions, Yann LeCun has explained that LLMs will not lead to AGI and that the promises of tech bros are ungrounded:
We are not going to reach human-level AI by just scaling the LLMs. It is just not going to happen.
At this point some might scream, “GenAI hater! What does he know? LLMs are good!”
And that is exactly the difference between LeCun and those who simply build their brand on dismissing the usefulness of LLMs. He has never claimed that LLMs are useless. He has repeatedly acknowledged that they are powerful and useful tools, while arguing that they are not a path to AGI and should not be oversold as such:
LLMs are useful. There are many potentially good applications, and those applications do justify, to a certain extent, the investment in infrastructure.
The main problem, and what I believe Yann LeCun has been counterbalancing, is the constant overpromising of human level intelligence, cancer cures, agentic autonomy and similar miracles. This hype pushes companies to invest heavily in use cases that have no realistic path to ROI.
A recent MIT analysis reported that only a small minority of GenAI projects show measurable business impact, on the order of a few percent of use cases actually delivering clear ROI.
This is not surprising. I see it myself everywhere: managers launching “agentic” projects under the false assumption that full autonomy is within reach. These systems are sold to them as autonomous agents that can make decisions and execute tasks end to end. You cannot really blame them for believing it. When market leaders, consulting firms and assorted “visionaries” keep promising AGI in two years, how hard can it be to generate a budgeting report for Q3?
And while one might try to push back against this hype, it is an uneven fight. When one hundred salespeople tell a manager that AI will solve all his problems and you say “it is not smart enough,” the logical conclusion for many managers is that YOU are the one who is not smart enough. After all, if so many people repeat the same story, how can they all be wrong?
Having outspoken people like Yann LeCun gave credibility to the voices of sanity. AI providers, Meta included, do not want to hear these voices, because they conflict with the pitch. So if you want to be an AI leader at a company that ships AI, your choices are limited: stay quiet, join in and overpromise, or eventually look for a new job.
During LeCun’s tenure, Meta shipped Llama, an open weight model that had a major impact on LLM research. Without LeCun, Meta is on track to ship useless cringey character bots, subpar chatbots, and buggy demos.
In the long run, counteracting misinformation and overpromising is essential. As newer generations of models disappoint relative to the marketing slogans, we are already seeing a shift in how people perceive AI. More and more people are getting suspicious. A few more high profile flops, a few more billions written off, and the wind will change. Governments will become less willing to subsidise AI companies and more comfortable telling providers that they are not the centre of the universe. Managers who were ready yesterday to replace whole departments with “agents” will start doubting whether AI can even handle a simple classification task reliably.
If the AI bubble pops and wipes out a lot of investments that never see a return, we may well end up in a phase of great defunding. In that scenario, the term “AI expert” risks turning into a synonym for “scammer.”
The only reason this is not inevitable is that people like LeCun stayed true to scientific ethics and refused to sell neural networks as AGI. Let us be honest: if LeCun had decided to shill LLMs as AGI, he would not have ended up reporting to a twenty-seven-year-old programmer with outstanding marketing skills. Precisely because some researchers refused to bend the science to fit the pitch, AI as a research field still has a chance to keep public trust and to survive a potential defunding wave.
All in all, LeCun believes in world models and joint embedding architectures rather than LLM scaling, and is about to start his own company. Let us hope that he will succeed with it and perhaps bring together others whose ethics did not allow them to comply with the sales narrative, and who kept advocating for evidence based AI applications instead of selling promises they could not keep. After all, these are the people who made LLMs possible in the first place, and they will be the ones who deliver the next breakthrough, not the tech bros burning their investors’ money.
If you liked this newsletter, consider upgrading your subscription or visiting the shop
Paid subscribers get:
Priority answers to your messages within 48 hours
Access to deep dives on the latest state-of-the-art in AI
Founding members receive even more:
A 45-minute one-on-one call with me
High-priority personal chat where I quickly reply to your questions within 24 hours
Support independent research and AI opinions that don’t follow the hype