I once read about the business strategy that OpenAI was following in 2024:
Claim that AI can do everything.
Raise tons of money from investors.
Tell governments that AI is very dangerous and that open source AI should be regulated out of existence.
....
Profits!
AI panic is not only about the fear of new technologies that has recurred throughout history, with the radio, the printing press, books, electricity, cars, or lifts, as we have documented so many times on this blog. I am suggesting that AI panic may also be pushed by particular people who simply want to create a false narrative about artificial intelligence, one that distracts scientific and popular opinion from other aspects of this revolution.
What is this kind of narrative useful for? Probably the best answer is: follow the money. If governments are convinced by apparently independent foundations that AI could provoke an apocalypse for society and the economy, they would probably regulate the AI sector so strictly that no new competitors could enter the market. Just as happens today with electricity. I am only speculating about why this panic-as-a-business is expanding.
At the beginning of 2024, CNBC reported that AI lobbying in the US Congress had spiked 185%, while calls for regulation were increasing. However, the story was not yet worthy of a Netflix series. According to the Substack author behind the excellent AI Panic blog, the AI panic movement is well oiled and fed with large amounts of money. In particular, she talks about the Effective Altruism Center, which, according to her, is one of the main funders of Existential Risk. She has done a great piece of research and argues that:

On November 17, 2023, Sam Altman was fired by OpenAI’s Board of Directors: Ilya Sutskever, Adam D’Angelo, and two members with clear Effective-Altruism ties, Tasha McCauley and Helen Toner. Their vague letter left everyone with more questions than answers. It sparked speculation, interim CEOs (e.g., Emmett Shear), and an employee revolt (of more than 700 OpenAI employees). The board’s reaction was total silence.
A week later, on November 24, Steven Pinker linked to a Wall Street Journal article on how the OpenAI drama “showed the influence of effective altruism.”
The events that led to the coup saga remain unexplained. Nonetheless, it became a wake-up call to the power of the Effective Altruism movement, which is “supercharged by hundreds of millions of dollars” and focuses on how advanced artificial intelligence (AI) “could destroy mankind.” It became clearer that “Effective Altruism degenerated into extinction alarmism.”
In a nutshell, according to the Effective Altruism movement, the most pressing problem in the world is preventing an apocalypse where an Artificial General Intelligence (AGI) exterminates humanity.
With billionaires' backing, this movement funded numerous institutes, research groups, think tanks, grants, and scholarships under the brand of AI Safety. Effective Altruists tend to brag about their “field-building”:
They “Founded the field of AI Safety, and incubated it from nothing” up to this point.
They “created the field of AI alignment research” (“aligning future AI systems with our interests”/“human values”).
The overlap between Effective Altruism, Existential Risk, and AI Safety formed an influential subculture. While its borders are blurry, the overlapping communities were defined as the AI Safety Epistemic Community (due to their shared values). Despite being a fringe group, its members have successfully moved the “human extinction from AI” scenario from science fiction into the mainstream. The question is: How?
And the answer to this question is the follow-the-money strategy she covers in her blog, tracing Bitcoin transfers and other funding flows among some well-known celebrities, such as the omnipresent Elon Musk, and some other cryptobros and shadowy figures. Let me introduce them:
Ethereum co-founder Vitalik Buterin stated that “Existential risk is a big deal” and that “it stands a serious chance” of Artificial Intelligence (AI) “becoming the new apex species on the planet.”
“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction,” Buterin wrote. “A superintelligent AI, if it decides to turn against us, may well leave no survivors, and end humanity for good.”
According to a TechCrunch article from January 2015, when Elon Musk donated $10M to the Future of Life Institute (FLI), it was “to Make Sure AI doesn’t go the way of Skynet.”
It made sense to Musk to fund FLI, as it was (and still is) devoted to existential risks. Its description in the Effective Altruism Forum is literally: “FLI is a non-profit that works to reduce existential risk from powerful technologies, particularly artificial intelligence.”
FLI’s co-founder, Max Tegmark, frequently shares AI doom scenarios. “There’s a pretty large chance we’re not gonna make it as humans. There won’t be any humans on the planet in a not-too-distant future,” Tegmark said in an interview. He also referred to AI as “the kind of cancer which kills all of humanity.”
Jaan Tallinn is a tech billionaire from Estonia. Whenever Tallinn is asked what led him to co-found FLI (2014) and the Centre for the Study of Existential Risk (2012), his response is always the same: Eliezer Yudkowsky. Upon reading Yudkowsky’s writing, he became intrigued by the Singularity and existential risks. He then decided to invest a portion of his fortune in this cause.
AI is not the only technological niche where this kind of narrative lobbying happens. Nuclear energy, climate change, electric vehicles, and cryptocurrencies are some other examples.
Indeed, I suggest you read the full details of the funding story and the money movements in the AI Panic blog. Now it is worth a Netflix series.