In a significant shift, the European Commission has decided to remove the AI Liability Directive from its 2025 work program. The decision comes amid stalled negotiations and suggests that agreement on how to handle AI-related liability remains far off. Despite the setback, members of the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) voted on Tuesday to keep working on AI liability rules, signaling that many lawmakers want the discussion to continue.
European Commission Shelves the Directive
The European Commission, which had originally planned to introduce the AI Liability Directive as part of its 2025 work program, cited the lack of progress in negotiations as the main reason for shelving the proposal. The directive, which was first introduced in 2022 alongside the AI Act, aims to establish clear rules on how liability should be assigned when AI systems cause harm to consumers or businesses.
In its statement, the Commission acknowledged that the directive could still be revived if the European Parliament and the EU Council dedicate significant time and effort to revising it over the next year. However, for now, the Commission has signaled that it intends to remove the directive from its immediate legislative agenda.
The AI Act, which regulates artificial intelligence according to its level of risk, was introduced to modernize the EU’s legal framework for AI. While the AI Act began to take effect this year, the AI Liability Directive has faced challenges, particularly in aligning diverse political opinions and addressing concerns from various stakeholders.
IMCO Votes to Keep the Discussion Alive
Despite the Commission’s decision to withdraw the directive from its 2025 work program, members of the European Parliament’s IMCO committee remain determined to continue working on AI liability rules. The IMCO members voted to keep the directive on the legislative agenda, making it clear that the issue of AI liability remains a priority for many lawmakers.
A spokesperson for the European Parliament confirmed that coordinators from various political groups will push for keeping the directive active in the legislative process. While the Commission’s decision is a blow to those advocating for immediate action, the IMCO committee’s vote indicates a commitment to keeping the discussion alive.
The Legal Affairs Committee (JURI), which has been leading Parliament’s work on AI liability, has not yet made a final decision on how to proceed with the directive. This leaves the issue in a state of uncertainty, as different political factions within the European Parliament continue to debate the best course of action.
Lawmakers Divided on the Future of the Directive
The decision to delay the AI Liability Directive has drawn strong reactions from members of the European Parliament. Some lawmakers have expressed disappointment with the Commission’s move, while others have defended it as necessary.
Axel Voss, a German MEP who has been responsible for guiding the AI Liability Directive through Parliament, strongly criticized the Commission’s decision. He called it a “strategic mistake” and argued that the need for clear liability rules in the context of AI is more urgent than ever.
Other MEPs, however, have backed the delay. Andreas Schwab, a German MEP who, like Voss, belongs to the center-right European People’s Party, argues that before addressing AI liability, lawmakers should focus on implementing the AI Act, which is already starting to take effect this year.
“The legislation needs to be watertight first,” Schwab said. “We should assess its impact before revisiting separate liability rules in two years.”
On the other hand, center-left MEPs have expressed frustration over the delay. Luxembourg MEP Marc Angel, speaking on behalf of Italian MEP Brando Benifei, co-rapporteur of the AI Act, criticized the withdrawal as a “disappointment.” Benifei emphasized that harmonized rules on AI liability would have provided legal clarity and fairness for consumers in cases where AI systems cause harm.
Kim van Sparrentak, a Dutch MEP from the Greens, also voiced concerns, arguing that the withdrawal of the directive reflects a “lack of understanding” of its original purpose. She stressed that the directive was not meant to punish companies but to protect small and medium-sized enterprises (SMEs) and individual consumers.
Industry and Consumer Advocacy Groups Weigh In
The European Commission’s decision to delay the AI Liability Directive has sparked mixed reactions from both the tech industry and consumer advocacy groups. Representatives from the Brussels tech lobby argue that concerns related to AI liability are already covered under the updated Product Liability Directive (PLD), which provides rules for determining liability when products cause harm.
However, consumer advocacy groups strongly support the AI Liability Directive, arguing that the existing legal framework does not offer sufficient protection when it comes to AI technologies. They insist that additional rules are necessary to ensure consumer safety and legal certainty in the rapidly evolving field of artificial intelligence.
A study by the European Parliament’s research service, presented in January to the Legal Affairs Committee (JURI), highlighted potential gaps in the PLD. The study specifically pointed out that large language models, such as OpenAI’s ChatGPT and Anthropic’s Claude.ai, may not be adequately covered under the existing liability framework.
According to the study, these new AI systems present unique challenges in determining liability when harm is caused. The study’s authors warned that without updated rules, consumers may not have proper recourse if they suffer damages caused by AI-powered systems.
The Future of AI Liability in the EU
The decision to shelve the AI Liability Directive is a significant moment in the ongoing debate about how to regulate artificial intelligence in the EU. While the European Commission has removed the directive from its 2025 work program, the European Parliament’s IMCO committee and other lawmakers remain determined to keep the conversation alive.
With ongoing disagreements about the scope of the directive and its potential impact, it remains to be seen when and how the issue of AI liability will ultimately be addressed. As AI technologies continue to evolve, lawmakers in the EU will face increasing pressure to create clear and effective rules for assigning liability in cases where AI systems cause harm.