With the first major compliance deadline for the European Union’s landmark Artificial Intelligence Act (AI Act) just weeks away on August 2nd, a clear division is emerging among global tech giants over the accompanying voluntary Code of Practice. While some industry leaders are signaling their intent to sign on, others are rejecting the guidelines outright, citing concerns about regulatory overreach and the risk of stifled innovation.
The EU AI Act, which entered into force in August 2024 with staggered compliance dates, aims to ensure AI systems used within the bloc are safe, transparent, and respect fundamental rights. The Code of Practice, published on July 10th by the European Commission, is designed to provide legal clarity and assist companies in meeting their obligations, particularly for providers of general-purpose AI (GPAI) models.
Microsoft has indicated it is likely to sign the voluntary code. Brad Smith, Microsoft’s President, stated that the company’s goal is to be supportive and welcomed the direct engagement from the EU’s AI Office. The stance aligns Microsoft with OpenAI and Mistral AI, both of which have already committed to the code, positioning themselves as early adopters of the framework. These companies emphasize their commitment to providing secure and accessible AI models for European users.
However, Meta Platforms has taken a firm stand against the code. Joel Kaplan, Meta’s Chief Global Affairs Officer, publicly stated that the company “won’t be signing it,” arguing that it introduces “legal uncertainties” and includes measures that “go far beyond the scope of the AI Act.” Kaplan echoed concerns raised by a consortium of 45 European companies, warning that such perceived over-regulation could “throttle the development and deployment of frontier AI models in Europe.”
The voluntary code, developed by 13 independent experts with input from more than a thousand stakeholders, addresses key areas such as transparency, copyright compliance, and safety for GPAI models that pose systemic risk. Signatories are expected to publish summaries of training data and implement policies that comply with EU copyright law. While the code is not legally binding, companies that sign on stand to benefit from reduced administrative burdens and greater legal certainty under the AI Act. Conversely, those that opt out may face closer scrutiny from EU regulators once the Act takes full effect.
As the August 2nd deadline approaches for general-purpose AI model providers to meet key obligations, the divergent responses from major tech players underscore the complex balancing act between fostering innovation and ensuring responsible AI development within a robust regulatory framework. The EU, for its part, appears committed to its vision of “trustworthy AI,” banking on long-term public trust outweighing short-term industry friction.