On the possible banning of LLMs
After three years, the AI Act, the EU's sweeping new AI law, was approved this summer. But the reality is that the hard work starts now. Although the act entered into force on 1 August 2024, people living in the EU won't start seeing changes until the end of the year. Regulators will need to get set up in order to enforce the law properly, and companies will have up to three years to comply. There are lots of summaries on the web about the implications of this law, but I'll give my own summary here to set the context for this post. Here is the complete text of the law and a helpful guide to navigating it, and here's what will (and won't) change with the AI Act:
1. Some AI uses will get banned later this year
The Act places restrictions on AI use cases that pose a high risk to people's fundamental rights, such as in healthcare, education, and policing. Some of these uses will be outlawed by the end of the year.
It also bans some uses that are deemed to pose an “unacceptable risk.” They include some pretty out-there and ambiguous use cases, such as AI systems that deploy “subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making,” or exploit vulnerable people. The AI Act also bans systems that infer sensitive characteristics such as someone’s political opinions or sexual orientation, and the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping the internet à la Clearview AI will also be outlawed.
There are some pretty huge caveats, however. For instance, the AI Act did not ban controversial AI use cases such as facial recognition outright. And while companies and schools are not allowed to use software that claims to recognize people's emotions, they can use it for medical or safety reasons.
2. It will be more obvious when you’re interacting with an AI system
Tech companies will be required to label deepfakes and AI-generated content and notify people when they are interacting with a chatbot or other AI system. The AI Act will also require companies to design AI-generated media so that it is possible to detect. This is promising news in the fight against misinformation, and will give research on watermarking and content provenance a big boost.
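To give an intuition for how AI-generated text could be made detectable, here is a toy sketch of a "green list" watermark in the spirit of recent watermarking research: at each step, the previous token pseudo-randomly splits the vocabulary in half, and the generator is nudged toward the "green" half. A detector that knows the scheme counts green tokens; machine text scores near 1.0, ordinary text near 0.5. Everything here (the vocabulary, the secret key, the fully biased sampler) is made up for illustration, not how any real system is implemented.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(100)]  # toy vocabulary
SECRET_KEY = "demo-key"                  # hypothetical shared secret

def green_list(prev_token: str) -> set[str]:
    """Pseudo-randomly pick half the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(n_tokens: int, start: str = "tok0") -> list[str]:
    """Always sample from the green list -- a maximally biased toy generator."""
    out, prev = [], start
    rng = random.Random(42)
    for _ in range(n_tokens):
        tok = rng.choice(sorted(green_list(prev)))
        out.append(tok)
        prev = tok
    return out

def green_fraction(tokens: list[str], start: str = "tok0") -> float:
    """Detector: fraction of tokens that fall in their step's green list."""
    hits, prev = 0, start
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return hits / len(tokens)

text = generate_watermarked(50)
print(green_fraction(text))  # 1.0 for this watermarked text; ~0.5 for random text
```

A real deployment would only softly bias the sampling so the text stays natural, but the detection logic is the same statistical test.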
3. Citizens can complain if they have been harmed by an AI
The AI Act will set up a new European AI Office to coordinate compliance, implementation, and enforcement (and they are hiring). Thanks to the AI Act, citizens in the EU can submit complaints about AI systems when they suspect they have been harmed by one, and can receive explanations on why the AI systems made decisions they did. It’s an important first step toward giving people more agency in an increasingly automated world. However, this will require citizens to have a decent level of AI literacy, and to be aware of how algorithmic harms happen. For most people, these are still very foreign and abstract concepts.
4. AI companies will need to be more transparent
Most AI uses will not require compliance with the AI Act. It’s only AI companies developing technologies in “high risk” sectors, such as critical infrastructure or healthcare, that will have new obligations when the Act fully comes into force in three years. These include better data governance, ensuring human oversight and assessing how these systems will affect people’s rights.
AI companies that are developing "general purpose AI models," such as language models, will also need to create and keep technical documentation showing how they built the model and how they respect copyright law, and publish a publicly available summary of the training data that went into the model. Other companies, rather than comply, may simply choose not to launch their products in Europe.
According to the European Commission, "currently, general purpose AI models that were trained using a total computing power of more than 10^25 FLOPs are considered to pose systemic risks." (Biden's Executive Order on AI previously set its reporting threshold at 10^26 FLOPs.)
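To see what these thresholds mean in practice, here is a back-of-the-envelope check using the common 6·N·D approximation for training compute (6 FLOPs per parameter per training token). The parameter and token counts below are the publicly reported figures for Llama 3.1 405B, but treat them as illustrative:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

EU_AI_ACT_THRESHOLD = 1e25  # presumption of systemic risk under the AI Act
US_EO_THRESHOLD = 1e26      # reporting threshold in the Biden Executive Order

# Illustrative figures: ~405B parameters, ~15T training tokens
flops = training_flops(405e9, 15e12)
print(f"{flops:.2e}")                 # ~3.6e25
print(flops > EU_AI_ACT_THRESHOLD)    # True: systemic risk under the AI Act
print(flops > US_EO_THRESHOLD)        # False: below the US EO threshold
```

So a frontier model of that scale would fall under the EU's systemic-risk regime while staying an order of magnitude below the US reporting line.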
It’s also worth noting that free open-source AI models that share every detail of how the model was built, including the model’s architecture, parameters, and weights, are exempt from many of the obligations of the AI Act. The law also exempts AI systems “specifically developed and put into service for the sole purpose of scientific research and development” from its rules. While this exemption is useful for enabling scientific research, AI models produced for academic purposes under an open-source license can later be repurposed for commercial use. This provision effectively creates a loophole in which AI models produced for scientific purposes evade the safety regulations the EU has created in the belief that such rules are necessary to prevent harm from AI.
Can governments control the future of AI? Looks like they’re going to try
In the 18 months since OpenAI's ChatGPT burst onto the scene, followed by a wave of competing AI chatbots, the world has been awash in conflicting visions of our AI-powered future. These predictions range from utopian dreams to dystopian nightmares, leaving one big question unanswered: Will AI usher in a new era of unprecedented progress or lead us down a path of societal destruction?
This realization has sparked intense discussions about how to harness AI's benefits while mitigating its risks. Recent weeks have seen a flurry of activity in this arena:
Could governments prohibit LLM tools usage?
A U.S. government-commissioned report warns of significant national security risks posed by AI and suggests, among other things, banning the publication of open-source models - with jail time if necessary.
The report, titled "An Action Plan to Increase the Safety and Security of Advanced AI," took its three authors more than a year to prepare. They spoke with more than 200 government officials, experts, and employees of leading AI companies, including OpenAI, Google DeepMind, Anthropic, and Meta.
Not only that, but the US is also considering restricting the "export" of both open-source and proprietary LLMs. I still haven't found out how they intend to implement this measure. And you can find more and more initiatives to ban open-source models everywhere.
Open-source and LLMs
As mentioned earlier, the AI Act apparently exempts open-source models from many of its rules, but this creates enormous complexity around deciding what counts as open source. The term "open source" by itself means little, and specialized institutions are continuously revising official definitions of it. Regulating the concept may therefore prove genuinely difficult, or unfair. I imagine legislators are trying to avoid the following scenario:
The fear is that such models will be uncontrolled, which implies one of four sub-beliefs:
They begin life as open-source;
They will be controlled poorly by their closed-source creators;
Open-source developers like Meta and Mistral will eagerly follow the closed-source providers and release models with catastrophic harm capabilities as open source;
A malicious actor will themselves make a model capable of catastrophic harms.
I cannot imagine how this measure could be enforced, or even how any AI algorithm could be finely audited. If you asked Sam Altman today how GPT o1 works, he would say nothing, despite knowing the code. As I said in my last post, code interpretability is a multifaceted idea.
However, I’m still somehow convinced that governments won’t let this technology be totally free. Did you know that a former director of the NSA sits on the OpenAI board? Could the government decide that you have to ask for special permission to use a certain amount of GPU power?
This doesn’t mean that an individual freelancer who develops a ChatGPT-based application for guiding customers toward buying a car will be fined. In its text, the European Commission clearly distinguishes between a system of artificial intelligence and a model of artificial intelligence, and thus this banning is aimed at someone like Sam Altman, who could achieve the long-wished-for AGI.
Remember that some weeks ago, Twitter was banned in Brazil.