Meta’s AI Strategy Faces Scrutiny After EU Code Rejection
Meta has made it clear that it won’t be joining the European Union’s voluntary AI Code of Conduct, saying the initiative imposes demands that go beyond reasonable compliance (via Bloomberg).

The company argues that the European Commission's approach amounts to overreach, particularly its effort to impose broad oversight frameworks on technologies still in rapid development. The voluntary AI code was introduced last year as a collaborative effort between the European Commission and major tech firms.
The Code’s purpose was to set transparent standards and share best practices before the EU’s official AI Act comes into effect. It was signed by companies like Google, Microsoft, and OpenAI, and encouraged developers of powerful models to adhere to safety protocols, explainability standards, and risk-mitigation measures for high-impact AI systems.
Meta, however, says it supports international cooperation on AI governance but disagrees with the scope and depth of the EU's code. A company spokesperson emphasized that certain reporting requirements outlined in the code go far beyond what is feasible or proportionate for companies working with open-source models like Meta's Llama.
While Llama has received significant attention for its performance and accessibility, Meta is keen to avoid binding itself to frameworks that might hinder open development or impose regulatory burdens that competitors don’t face elsewhere.
The EU's AI Act was approved earlier this year and is expected to take full effect by 2026. Until then, the Code of Conduct acts as a temporary oversight mechanism. EU Commissioner Thierry Breton responded to Meta's decision by urging the company to reconsider, suggesting that public trust in AI depends heavily on developer transparency and responsible deployment.

Industry observers say the implications of Meta’s decision could ripple beyond Europe, potentially affecting how other regions structure their AI policies. The broader question now is whether voluntary regulation can keep up with the pace of AI innovation, or if governments will need to fast-track more enforceable rules.