A recent investigation has revealed that major tech companies and the US government have pushed the European Commission to water down the upcoming AI Code of Practice. The Code, meant to guide the safe development and use of General Purpose AI (GPAI), is still in draft form, yet it already appears to have been heavily shaped by powerful industry players.
According to a joint report by Corporate Europe Observatory (CEO) and LobbyControl, leading AI developers like Google, Meta, Amazon, Microsoft, and OpenAI were given unique access to the Code’s development process. These companies attended private workshops and held meetings directly with key members of the European Commission’s working groups.
In contrast, civil society groups, media organizations, and small businesses had to settle for limited engagement. Often their only means of responding was emoji reactions in virtual meetings, which offered little opportunity for real input.
Privileged Access Raises Red Flags
The report highlights how the European Commission invited thirteen experts to lead the drafting process. While the process was billed as inclusive, the reality shows an imbalance. Over 1,000 stakeholders were involved, but not all voices carried the same weight.
Fifteen major companies were invited to closed-door workshops that shaped the Code in key areas. These sessions were not open to the public or to most other stakeholders. The chairs of the drafting groups also attended, lending the meetings additional influence.
This form of participation is a major concern. It shows how tech companies can gain more control over how rules are shaped—especially rules meant to regulate their own work.
Copyright and Deregulation Concerns
One key concern raised by rights holders and media groups is that the current draft of the Code may not align with existing copyright laws. General Purpose AI systems often rely on massive amounts of training data, which can include copyrighted content. The draft lacks strong language to protect such rights.
The CEO and LobbyControl report says this is no accident. Their research points to how industry lobbying has led to softer language in the draft. They warn that this weakens efforts to hold tech companies accountable.
The US Enters the Picture
It’s not just Big Tech pushing for softer rules. The US Mission to the EU also sent a formal letter asking the Commission to ease up on regulations. According to EU sources, the US argued that strong AI rules would hurt innovation and create barriers for American companies doing business in Europe.
Interestingly, this pressure revives a familiar argument from the Trump administration, which claimed that the EU’s digital rules were too strict and unfairly targeted US firms. That line of thinking has now resurfaced in the AI debate.
Delays and Transparency Issues
Originally, the European Commission aimed to publish the final AI Code of Practice by early May 2025. But now, due to mounting pressure and internal debate, the release has been pushed to June.
A spokesperson confirmed that both the final Code and the accompanying guidelines would be published by mid-2025. In the meantime, consultations are ongoing. The Commission recently opened another round of feedback on its AI guidelines.
However, critics worry that the delay gives corporate lobbyists even more time to shape the final rules.
Public Interest Voices Sidelined
Groups advocating for transparency and fair digital practices say the drafting process needs reform. Bram Vranken, a researcher at CEO, stated: “The EU’s obsession with simplifying procedures opens the door to lobbying by dominant companies. This puts public interest at risk.”
He added that the AI Code could become “the first victim of deregulation” if the current trend continues.
Other civil society groups echoed these concerns. They pointed out that AI systems have far-reaching impacts on people’s lives. Decisions about these systems shouldn’t be made in secret, nor should they favor corporate interests over public safety, privacy, and rights.
What Happens Next?
With the final AI Code of Practice due in June 2025, the spotlight is now on the European Commission. Will it take the path of transparency and strong oversight, or give in to growing pressure from Big Tech and global powers?
The coming months will be critical. The choices made now will shape how AI operates in Europe for years to come. Citizens, small businesses, and civil society all deserve a seat at the table—something that has so far been lacking.