The Last Experiment
How AI replaced the scientific method with press releases
For centuries, science rested on a simple, almost moral rule: if an experiment can't be reproduced, it isn't science. It was a kind of secular commandment: anyone should be able to verify what you claimed. Truth was earned through transparency, method, and patience.
Today, that principle sounds as quaint as a rotary phone. In the age of artificial intelligence, reproducing an experiment can cost tens or even hundreds of millions of dollars and require the power supply of a small city. Curiosity alone is no longer enough. Now you need a data center.
OpenAI, Google DeepMind, Anthropic, and Meta are the new monasteries of modern science, places where the faithful still believe in progress but can no longer see the rituals behind it. Their servers are cathedrals of computation: vast, sealed temples where new intelligences are trained, used by everyone and understood by almost no one. Academia, meanwhile, is left with the prayers but not the resources.
A university could likely reproduce an experiment from the 1950s, but replicating the training of a model like GPT-4 would be as realistic as building a particle accelerator in the basement. The result is a profound structural imbalance: roughly 70% of new AI PhDs now go directly into the private sector, compared with just 20% two decades ago. Universities still train scientists; they just have less and less science to do and less capacity to retain talent. Even immigration policy tilts the field: President Trump announced a new $100,000 visa fee for skilled workers, a modest cost for companies but an unaffordable one for academia.
The Black Box and the Press Release
Artificial intelligence is becoming the first field of science without genuine external scrutiny. Companies publish results no one can verify, compare their models using benchmarks they design themselves, and craft tests they always manage to pass. Peer review—that ritual of humility where colleagues could dismantle your argument—has been replaced by well-coordinated press releases.
This issue isn’t entirely new; other fields like psychology have been grappling with a “replication crisis” for years, where a significant portion of published findings cannot be reproduced by other researchers. But there is a crucial difference: in those disciplines, the failure to replicate is seen as a systemic flaw to be corrected. In AI, it is becoming the system itself. There’s no inherent malice in this, just economics. And wherever business dictates the pace of discovery, truth becomes a luxury, not a duty.
This week, Retraction Watch reported the case of an anesthesiologist who has had to retract more than 220 scientific papers (so far). It's an absurd, almost operatic number. His downfall was public, painful, and, crucially, possible because someone checked, someone verified, someone found the fraud. In older scientific scandals, there was at least a network of scrutiny.
Today's large language models are black boxes: no one outside the company knows how they were trained, what data they used, or what biases they absorbed. And the most unsettling part: even with the best of intentions, no one could replicate the experiment. AI research is no longer shared; it's licensed. Knowledge has become intellectual property, bound by NDAs and trade secrets, and transparency, once an ethical principle, is now a competitive risk. Instead of reproducing results, researchers settle for reproducing headlines: "OpenAI announces." "Google releases." "Anthropic improves."
If a university ever commands a million GPUs and can verify these claims, more than one Big Tech scientist may find themselves embarrassed.
Science by Subscription
Even the benchmarks have gone corporate. Each company defines its own standard, sets its own test, and grades itself. It’s as if every student brought their own exam—and marked it with a gold star.
Science used to be public: full of errors, revisions, and retractions. Now it's private: hosted on remote servers, protected by terms of service. The question isn't just who owns the data; it's who owns the right to be wrong. Perhaps the future of science won't depend on reproduction, but on belief: belief in the press release, in the benchmark, in the visionary founder who swears that this time the machine truly understands. The problem isn't that truth has died; it's that it has been outsourced.
The next step of the scientific method may not be the experiment at all, but the API key (an API is an interface that lets two pieces of software talk to each other and share functionality). And maybe, years from now, the most radical act of intellectual rebellion will be doing the impossible again: reproducing something with our own hands.
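To make that concrete, here is a minimal sketch of what an "experiment" increasingly looks like, assuming an OpenAI-style hosted chat endpoint; the model name, prompt, and environment variable are illustrative, not any vendor's guaranteed interface:

```python
import os
import requests

# "Reproducing" a result in the API era: no training run, no weights,
# just a metered call to a model you cannot inspect.
API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI-style endpoint (illustrative)
headers = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",  # the new lab credential
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4",  # illustrative name; the weights behind it can change silently
    "messages": [{"role": "user", "content": "What is 17 * 24?"}],
    "temperature": 0,  # determinism is requested, not guaranteed
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
# Run this again next month and the "same" experiment may give a different
# answer: the provider can swap the model behind the endpoint at any time.
```

The point of the sketch is what's missing: no training data, no weights, no procedure to inspect. If the provider changes the model behind the endpoint, yesterday's result may simply stop reproducing.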
We’ll see.