Jordan G. Heiber
Vice President, International Digital Economy Policy, U.S. Chamber of Commerce
Abel Torres
Executive Director, Center for Global Regulatory Cooperation, U.S. Chamber of Commerce

Published December 11, 2024

The European Union recently released its draft General-Purpose AI Code of Practice, a key component of the EU AI Act aimed at regulating artificial intelligence. The goal of ensuring safe AI deployment is commendable. Yet, the draft Code is deeply flawed. 

Our Take 

The draft Code holds the potential for regulatory overreach that could disrupt the global AI landscape, disproportionately burden U.S. companies, and raise serious concerns about information security and intellectual property (IP). It goes beyond the EU AI Act, creating unnecessary compliance hurdles and risking regulatory fragmentation. We urge U.S. policymakers to engage with European counterparts to address these issues and advocate for a balanced framework that promotes innovation, protects sensitive information, and aligns with the broader international approach the EU itself agreed to via the G7.

Let’s dive into the details.  

Disclosure and Intellectual Property Risks 

The draft mandates extensive disclosure of sensitive business information to Europe's AI Office, downstream providers, and potentially the public. This raises two major risks: first, releasing capability-unlocking IP could enable misuse of powerful AI systems; second, forcing disclosure of valuable IP undermines investment incentives and weakens U.S. companies’ global competitiveness. 

Security and Confidentiality Concerns 

The draft fails to establish robust security measures for the AI Office, which would store highly sensitive data, creating vulnerabilities that could be exploited by malicious actors. Additionally, the transparency requirements exceed those in the AI Act, potentially forcing companies to disclose trade secrets to competitors, including those from China. 

Excessive Compliance Burden 

The draft adds significant obligations, such as extensive publication mandates and testing by the AI Office and undefined third parties, that go beyond the AI Act.

Disparate Treatment 

The draft Code imposes lighter requirements based on the size of the firm. Public safety and ethical AI deployment should depend not on the size of the company but on the specific risks associated with the application.

Lack of International Alignment 

Despite claims of global cooperation, the draft Code diverges from the G7 Code of Conduct for Advanced AI Systems, which reflects an internationally agreed-upon approach. This misalignment risks fragmenting global standards, further complicating compliance for businesses operating across borders.

Without significant changes, the EU’s draft Code risks stifling innovation, undermining security, and creating unnecessary barriers for U.S. companies. A balanced, globally aligned framework is critical for the future of AI.

About the authors

Jordan G. Heiber

Jordan Heiber leads the Chamber’s international privacy and data flow policy portfolio and manages a team responsible for the full suite of digital policy issues, including cybersecurity, artificial intelligence, and more.


Abel Torres

Abel Torres serves as Executive Director in the Center for Global Regulatory Cooperation (GRC) at the U.S. Chamber of Commerce.
