Ursula von der Leyen has said her next Commission will simplify the EU’s complex regulatory environment to boost growth, especially in high-tech sectors. How the EU implements its Artificial Intelligence Act will be its first big test.
Better regulation is the mantra of newly re-elected Commission president Ursula von der Leyen. Spurred by criticisms in influential reports by
Enrico Letta and
Mario Draghi about the quality of recent EU regulations and their impact on economic growth – Draghi’s report stated that “we are killing our companies” – she has
tasked each of her commissioners with cutting red tape and making the EU rulebook easier for businesses to comply with.
The tech sector is a particular concern for three reasons. First, it is critical to Europe’s economic growth: the gap in productivity between the EU and the US is largely explained by the size of the latter’s high-productivity tech sector alone. Second, tech is the best example of Europe’s overly complex regulatory environment. The EU passed many wide-ranging tech laws in the last political cycle – on issues ranging from
cybersecurity,
digital competition and
artificial intelligence to
online safety and
data sharing. Many of these laws have significant inconsistencies and overlaps, as business groups across Europe have
repeatedly noted. Regulatory complexity is probably not the main reason for Europe’s poor record in tech investment, but it is certainly a contributing factor – and is likely to become a bigger problem if Trump cuts regulation in the US even further. Third, the EU may struggle to apply and enforce its tech laws dispassionately in the coming years, given the risk that Trump might be less tolerant than Biden was of Europe’s regulation of foreign companies, and more willing to retaliate. The Commission is currently
investigating X/Twitter for suspected breaches of the Digital Services Act, for example, a case that is likely to put Brussels on a collision course with Washington, given the close relationship between X’s owner Elon Musk and Donald Trump.
All this means that the EU must be cautious about pushing forward with
more tech laws, and instead focus first on making its existing rulebook clear, objective and non-discriminatory. One of its first tests will be implementing the EU’s Artificial Intelligence Act (AI Act), which aims to help manage the technology’s risks. The AI Act gets some of the basics of better regulation right. But its institutional framework risks creating fragmentation and inconsistency, and the process of translating the law’s requirements into practical obligations risks becoming unwieldy and disproportionate. The Commission and member-states can still make choices that help ensure the law supports EU innovation and economic growth.
Making the AI Act’s complex institutional framework work in practice
One of the issues with the AI Act is its institutional framework. Enforcement of the regulation will fall to a bewildering combination of authorities. The AI Office in the European Commission will enforce the law’s provisions for general-purpose AI models, like ChatGPT. Each of the EU member-states will also have one or more ‘notifying authorities’, which will play a role in the conformity assessment process for AI systems, and ‘market surveillance authorities’, which will enforce the law for AI systems already on the market. A range of other institutions, like the ‘AI Board’, ‘advisory forum’ and ‘scientific panel’, have advisory or co-ordinating roles.
The law therefore risks being implemented in inconsistent ways across Europe – repeating the problems which have
plagued Europe’s General Data Protection Regulation (GDPR). For example, member-states can appoint
different types of market surveillance authorities – some might appoint data protection agencies to enforce the AI Act, whereas others might see this as a job for product safety regulators. Different types of authority are likely to have different priorities and different interpretations of the AI Act’s provisions. To avoid making life difficult for companies trying to roll out a service across the bloc, EU member-states need to work together to ensure a consistent approach to allocating the AI Act’s responsibilities.
‘Turf wars’ between regulators are also a risk. The Act assumes that countries will have multiple market surveillance authorities. In sectors where the EU already requires a country to have such an authority (for example, for toys or financial services), the Act presumes member-states should give that authority responsibility for enforcing the AI Act. Integrating AI supervision with existing product safety requirements should make life easier for companies that want to use AI. However, innovative uses of AI may cut across industries, so this approach risks creating conflicts or inconsistencies among regulators. Furthermore, the EU has a well-known digital skills gap, which means that even the private sector is struggling to find AI-savvy workers. Requiring existing regulators to develop an understanding of AI will prove challenging – and may lead to poor-quality regulatory decisions. Member-states need to consider either centralising enforcement of the AI Act, as
Spain plans to do, or providing a quick mechanism to resolve inconsistencies between the decisions of different regulators.
A final problem is that EU law-makers have handed significant power to the Commission to change important parts of the AI Act. For example, the Commission may determine that new AI uses should be treated as ‘high-risk’, and therefore subject to additional regulatory requirements. The Commission may also change the rules about which types of general-purpose AI models pose ‘systemic risk’ and therefore need more onerous safety safeguards. Neither power creates much certainty for companies that want to develop or use AI in Europe – they may design an AI product that is not deemed risky today, only to see it become tightly supervised tomorrow. The Commission should mitigate this problem by quickly issuing guidelines explaining whether and how it intends to expand the scope of the law.
The AI Act’s potential to deliver better regulation in practice
Substantively, the AI Act reflects the EU’s
precautionary approach to risk – consciously making it harder for firms to operate certain types of business model and to be experimental and adaptive. However, law-makers gave the AI Act several characteristics which reflect principles of better regulation.
Key parts of the AI Act, for instance, require companies deploying AI to comply with broad outcomes, which is a far better approach than prescribing inflexible, black-and-white rules. For example:
- Providers of general-purpose AI models which pose systemic risks must “mitigate”, rather than “eliminate”, risks.
- Providers of high-risk AI systems only need to manage risks that can be “reasonably mitigated or eliminated”.
- These providers only need to mitigate risks to the point that any residual risks are “judged to be acceptable”.
- The accuracy and security of high-risk systems must be “appropriate” rather than providers being required to meet standards that might not be technically achievable.
This type of outcomes-based approach can deliver a reasonable balance between safety and innovation. For example, terms like “appropriate” may automatically require AI firms to meet higher standards as these standards become more technologically and commercially viable to achieve.
A second positive feature is that a range of provisions in the AI Act allow ‘co-regulation’, where there is a dialogue between industry and regulators to help translate the law’s vague outcomes into practical steps. For example, the Act relies heavily on standards developed by Europe’s standard-setting organisations.
This has two benefits. For companies, it provides certainty: if a standard is accepted by the Commission, a company which complies with it will be presumed to comply with the AI Act itself. Companies therefore have an incentive to participate in co-regulation in good faith, as if they propose standards which lack credibility, the Commission will reject them, and AI firms will have less certainty about how to comply with the law. For regulators, a co-operative process helps overcome information asymmetry: AI firms have much more information about how AI works and the risks involved than public authorities do, which can make it hard for regulators to determine alone whether an AI provider is appropriately mitigating and managing risk.
To help ensure co-regulation works well, the EU needs to ensure standard-setting bodies are open to all firms that want to participate – including non-European ones, which are likely to be most affected by many parts of the AI Act given that most of the largest AI models have emerged outside Europe. Doing so should mean that European standards will be more credible and objective, and make it harder for foreign firms to complain to their governments if they dislike the EU’s approach. An approach that secures the willing co-operation of global tech firms will also have much greater global influence. For example,
81 per cent of standards set by the EU standard-setting body CENELEC are identical to global standards.
Designing a Code of Practice: The first big test in delivering better regulation
Because it is a participatory process, standard-setting takes time – and that is likely to be especially true for AI, which is evolving rapidly. For example, while the Commission has asked European standard-setting bodies to develop new AI standards by April 2025, this timeframe seems
unrealistic. In particular, the rules for general-purpose AI models take effect in August 2025, before any standards are likely to be ready. To bridge this gap, the AI Act introduces an additional and temporary co-regulatory tool, in the form of a ‘code of practice’.
The code will be a world first: translating principles for responsible AI into concrete, specific practices. It will cover issues like how much information providers must disclose about their AI models, how they will identify and mitigate risks, and how they will ensure their models comply with cybersecurity requirements. Providers of general-purpose AI models may choose to follow a code of practice, and if they do so they can rely on their compliance with the code to demonstrate that they conform to the AI Act. The code needs to be drafted by May 1st 2025 – a mere nine months after the AI Act entered into force – which makes it the AI Office’s first opportunity to demonstrate its commitment to better regulation.
However, the process of drafting the code so far risks falling short of good regulatory practice.
After initially proposing a much-criticised closed-door process for drafting the code of practice, mostly involving providers of AI models and the AI Office, the AI Office quickly – and sensibly – reversed course. It has now swung to the opposite extreme, setting up a massive consultation process involving nearly a thousand stakeholders. Given the very short timeframe available to draft the code, such a complex process raises questions about whether different views can realistically be expressed and thoroughly assessed before they are incorporated. This risk is significant given that academics rather than technical experts are
leading the drafting process. If the code of practice incorporates ideas whose consequences have not been fully tested, which go beyond what the AI Act requires, or which try to add provisions that the AI Act’s law-makers did not agree on, it would compromise the objective of delivering a practical and proportionate roadmap for complying with the AI Act. That would be a further example of the uncertainties and inconsistencies that characterise many of the EU’s existing tech laws, and would reduce the AI Act’s legitimacy in the eyes of industry.
The code of practice is only meant to be a temporary measure. The AI Office should therefore focus on keeping the code’s scope targeted while getting the details it does cover right: for example, by sticking closely to translating the outcomes required by the AI Act into specific measures, and producing corresponding performance metrics, rather than supplementing the Act or adding ‘nice to haves’. The current draft of the code does a good job of clearly linking each part of the code to the provisions of the AI Act. The drafters should maintain that approach so that the exercise delivers what AI firms expect, minimises the risk of unintended consequences, and helps build trust and co-operation between the AI Office and providers of the biggest AI models.
Securing the co-operation of large tech firms will be particularly important given the return of Trump to the White House. While his administration may remain aligned with Europe’s in taking
tough antitrust action against tech firms, Trump will not follow Europe in areas like the regulation of AI and online safety. European efforts to regulate US AI firms may therefore trigger transatlantic disputes, especially if US firms claim that Europe’s AI authorities are treating them unfairly. The EU would benefit from ensuring that AI model providers – many of whom already publicly accept the need for sensible regulation – engage constructively and accept the legitimacy of the AI Act and its code of practice.
Conclusion
The poor design of EU laws is not the only problem holding back European economic growth: Draghi also cites the lack of a single capital market and of a properly functioning single market as barriers to growth. And firms in the US are
more likely to report business regulations as a barrier to investment than EU firms are – calling into question the common narrative that Europe overregulates while the US takes a more laissez-faire approach. But with Donald Trump tasking Elon Musk with overseeing a
deregulation drive in the US, and the US poised to retaliate against any perception that Europe’s enforcement of its tech laws is ‘unfair’, Brussels needs to ensure that its own laws are as clear and as easy to follow as possible, and that they are perceived as legitimate by the firms it regulates.
The AI Act will be the most urgent test of whether the EU can deliver better regulation, as Mario Draghi and Enrico Letta have recently urged. EU leaders and the Commission still need to show they are willing to put their commitment to better regulation into practice.
Zach Meyers is assistant director at the Centre for European Reform.