1. Introduction

Meta Platforms recently declined to sign the EU's voluntary AI Code of Practice. This high-profile move has placed the EU's regulatory ambitions, and the industry's response to them, firmly in the international spotlight.

This blog critically analyses the incident, the EU's AI Code of Practice and its underlying legal context, the direct and far-reaching impacts on industry players, and, ultimately, the strategic choices facing AI companies from a legal-commercial perspective.

2. What Happened?

Meta Platforms, the parent company of Facebook, WhatsApp and Instagram, publicly announced its refusal to sign the European Union’s AI Code of Practice, a newly launched, voluntary regulatory framework designed to guide companies toward compliance with the soon-to-be-mandatory EU AI Act. Joel Kaplan, Meta’s Chief Global Affairs Officer, characterised the Code as “overreach” and warned that the requirements would stifle AI innovation across Europe.

This stance puts Meta at odds with both EU policymakers and several industry peers. In contrast, OpenAI and Google have indicated willingness to sign the Code, emphasising their commitment to responsible AI development for European users. The Code itself was published in July 2025 and is viewed as a blueprint for early compliance before the AI Act's provisions for general-purpose AI take legal effect on August 2, 2025, with the most stringent obligations reserved for models deemed to carry systemic risk.

3. What is the EU's AI Code of Practice?

The EU’s AI Code of Practice is a voluntary, non-binding framework created to bridge the regulatory gap before the compulsory AI Act comes into force. Its principal aim is to “operationalise” ethical and legal best practices specifically for providers of general-purpose AI (GPAI) models. The Code is structured around several central clauses:

  • Transparency: Companies must disclose how models function, document intended use-cases, and summarise datasets and collection methods. AI-generated or AI-modified content must be clearly disclosed to users.
  • Copyright and Data Use: Training on pirated or unauthorised data is strictly prohibited. Providers must implement opt-out mechanisms for rights holders and robust procedures for handling copyright complaints.
  • Safety and Systemic Risk Management: Companies must perform pre- and post-launch risk assessments focusing on issues such as misuse, discrimination, or disinformation. Ongoing oversight and rapid mitigation of new risks are mandated.
  • Risk Classification: Mirroring the AI Act, the framework categorises AI systems by risk level, banning certain high-risk uses outright (e.g., social scoring and certain biometric applications).
  • Regulatory Monitoring: Signatories can anticipate lighter inspections, greater legal clarity, and cooperative engagement with authorities, while building best-practice documentation and a compliance culture within the company.

4. Why is Meta Refusing to Sign?

Meta’s objections rest primarily on two arguments:

  1. Regulatory Overreach: The company argues the Code imposes obligations exceeding those of the AI Act, potentially creating legal uncertainty.
  2. Commercial Impact: Broad and ambiguously defined compliance obligations could increase liability and deter AI innovation in the EU.

Meta’s stance has received support from some European industrial stakeholders, who warn that overly onerous or unclear compliance burdens may drive investment and R&D to less regulated jurisdictions, undermining Europe’s competitive position.

5. Who Is Affected, and How?

The Code and the forthcoming AI Act have global implications for a diverse range of stakeholders:

  • General-Purpose AI Providers: Companies like Meta, OpenAI, Google, and Anthropic must adopt heightened transparency and systemic risk management protocols.
  • SMEs and Supply Chain Partners: Compliance requirements may indirectly affect smaller firms through procurement, contracting, and partnership obligations.
  • Non-EU Companies: The Code has an extraterritorial dimension; any company providing AI services within the EU may be required to comply.
  • Content Owners: Copyright provisions strengthen rights holders’ leverage over how their works are used in AI model training and deployment.
  • Global Markets: Through the “Brussels Effect,” the EU’s approach is likely to influence AI regulatory norms worldwide, shaping product development and compliance strategies across jurisdictions.

From a commercial awareness viewpoint, these measures drive up compliance, operational, and legal costs, alter market access strategies, and shift the competitive landscape not just within Europe, but wherever compliance with the EU’s regulatory norm becomes necessary for cross-border business.

6. Lawsplained

The EU’s AI Code illustrates a growing convergence between regulatory policy and commercial strategy. Its implications for AI companies vary depending on participation:

For Signatories:

Signing up for the EU AI Code provides a significant degree of legal certainty and risk reduction on the road toward compliance with the AI Act. The voluntary framework brings:

  • Presumption of Good Faith: Regulatory authorities are more likely to view signatories as acting diligently, with a structured compliance process in place.
  • Early Adaptation: Participating companies gain experience with documentation, contractual controls, and risk management required under the binding law.
  • Reduced Regulatory Burden: Smoother, less frequent inspections and active dialogue with authorities provide a streamlined route to market for compliant products.
  • Commercial Advantage: Demonstrated commitment to responsible AI is an emerging selling point both for consumers and institutional clients, facilitating partnerships and procurement.

For Non-Signatories:

  • Increased Scrutiny: Non-signatories are subject to more intensive regulatory oversight due to the absence of transitional alignment with the Code.
  • Reporting Obligations: Providers that do not adhere to the Code must demonstrate compliance through alternative means and report the measures they have implemented directly to the EU AI Office. They are also expected to conduct a gap analysis comparing their internal measures against the Code’s provisions and explain how these ensure compliance with their obligations under the AI Act.
  • Enhanced Information Requests: Non-signatories should anticipate a higher volume of requests for information and access from the AI Office to conduct model evaluations across the entire model lifecycle. This includes detailed disclosures about modifications to general-purpose AI models, as regulators will lack the baseline compliance understanding that Code adherence provides.
  • Legal and Operational Risk: Without the Code’s transitional guidance, companies face increased exposure to ambiguity under the AI Act and may struggle with last-minute compliance crises once mandatory obligations take effect.
  • Commercial Disadvantage: Some tenders, partnerships, and procurement processes may require Code adherence as a precondition, potentially excluding non-signatories from strategic opportunities.
  • Brand and Investment Risk: Failing to engage with the Code may place companies in higher-risk categories for investors and regulators, affecting valuation, insurance, and reputation.

Given these risks and rewards, the advisable practice for any entity with a European commercial footprint, or aspirations to one, is to evaluate, and in most cases pursue, participation in the Code. Embedding robust compliance and documentation practices now not only reduces legal exposure but also ensures smoother business operations as the regulatory environment tightens. Reviewing data procurement, content sourcing agreements, user notification processes, and post-market monitoring systems is crucial for companies seeking to align with both the Code and the binding AI Act.

7. Conclusion

The EU’s AI Code of Practice, alongside the imminent AI Act, marks a fundamental shift in global AI regulation. While Meta’s refusal highlights legitimate concerns regarding regulatory burden and competitiveness, early engagement with the Code provides companies with legal predictability, market credibility, and strategic advantage.

For AI companies and their counsel, the key takeaway is clear: proactive compliance is no longer optional but a critical business strategy. In an increasingly regulated digital economy, transparent and well-documented governance is emerging as a primary driver of market access, investor confidence, and sustainable innovation.
