A recent report by Corporate Europe Observatory (CEO) claims that big tech companies have disproportionate influence over EU standards for artificial intelligence tools.
According to the report, over half (55%) of the 143 members of the joint technical committee on AI (JTC21)—established by European standardisation bodies CEN and CENELEC—represent companies or consultancies: 54 members come from corporations and 24 from consultancies.
Nearly 25% of these corporate representatives come from US tech giants, including Microsoft, IBM, Amazon, and Google. In contrast, civil society groups account for only 9% of JTC21 members, raising concerns about the inclusivity of the standard-setting process.
The AI Act, a landmark regulation built on a risk-based approach, entered into force last August. Its provisions take effect in stages and rely on standards developed by these organisations.
Business-Friendly AI Standards Raise Concerns
In May 2023, the European Commission tasked CEN, CENELEC, and ETSI with developing standards for the AI Act. Harmonised standards are already used to ensure that products such as medical devices and toys comply with EU safety rules; by following them, companies can certify that their products meet essential EU requirements.
However, CEO criticises this approach, arguing that the AI Act delegates complex public policymaking on fairness, bias, and fundamental rights to private standard-setting bodies. “For the first time, standard setting is being used to address fundamental rights issues,” said Bram Vranken, a CEO researcher and campaigner.
JTC21 Chair Sebastian Hallensleben noted that these organisations focus on processes rather than specific outcomes, which, he said, makes it harder to enforce results such as the prevention of bias or discrimination in AI systems. A CE mark obtained through harmonised standards, for example, does not guarantee that an AI system is free of such problems.
National standard-setting bodies in France, the UK, and the Netherlands show similar trends. The report found corporate representatives comprise 56%, 50%, and 58% of their memberships, respectively.
Calls for Faster AI Standardisation
In response to CEO’s concerns, the European Commission stated that standards developed by CEN-CENELEC will undergo assessment to ensure they meet the AI Act’s objectives and adequately address high-risk AI systems. Additional safeguards, such as the ability for Member States and the European Parliament to object to these standards, are in place to ensure accountability.
However, there is growing urgency to expedite the standardisation process. A senior official from the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP) warned that time is running out. “Standardisation processes typically take many years, but they must be accelerated for AI,” the official said.
Jan Ellsberger, chair of ETSI, acknowledged the challenge, stating that standardisation depends on voluntary industry commitment. “The more industry involvement, the faster it progresses,” he said. While some standards may take only months to finalise, others could require several years, highlighting the need for streamlined processes to keep up with rapidly evolving AI technologies.