The Debate Over Regulation of Powerful Foundation Models in the EU’s AI Act

Republished with full copyright permissions from The Washington Daily Chronicle.

The European Union’s AI Act, landmark legislation aimed at regulating artificial intelligence (AI), faces a crucial discussion over the treatment of foundation models. Foundation models, such as OpenAI’s influential GPT-3.5, underpin a wide range of AI applications. However, the French, German, and Italian governments are advocating for limited regulation of these powerful models, setting the stage for a debate over self-regulation and potential future sanctions.

The Proposed Approach:

According to a document shared with officials from the European Parliament and the European Commission, the Franco-German-Italian proposal suggests that AI companies working on foundation models should self-regulate by publishing certain information about their models and signing up to codes of conduct. There would initially be no penalties for non-compliance, but repeat violations could result in future sanctions.

Understanding the Two-Tier Approach:

The European Commission had initially proposed a two-tier approach to foundation model regulation, imposing lighter obligations on most models while applying stricter rules to the most capable and impactful ones. The French, German, and Italian governments oppose this approach, however, advocating instead for uniform regulation based on how AI is used.

Divergent Opinions and Lobbying Efforts:

The push to water down certain aspects of the EU’s proposed AI regulation is not limited to the three governments. Big tech companies, primarily headquartered in the United States, have been lobbying against the legislation from the start. The French and German governments, in particular, are concerned that excessive regulation could hinder innovation in their domestic AI industries.

Promoting Innovation while Ensuring Regulation:

France and Germany, given their commitment to fostering innovation and their own AI champions, argue for a balanced and innovation-friendly approach to AI regulation. They believe that strict regulation of foundation models could impede their ability to compete globally in this rapidly advancing field. Their focus is on regulating AI applications rather than stifling the development of the underlying models.

The Road Ahead:

As negotiations enter their final stage, EU lawmakers aim to finalize the AI Act before February 2024. The European Parliament and member states are engaging in trilogue discussions to reach a compromise on the regulation of foundation models. However, differences of opinion remain, with some policymakers advocating stricter regulation while others support a more flexible approach.