The European Union has unequivocally rejected widespread calls from major tech companies and industry groups to delay the implementation of its landmark Artificial Intelligence Act, affirming its commitment to the established timeline for the world’s first comprehensive AI regulation.
The European Commission remains steadfast despite intense lobbying from more than a hundred tech firms, including giants like Alphabet and Meta as well as European players such as Mistral and ASML, which argued for a postponement on the grounds that compliance burdens would hinder innovation. Commission spokesperson Thomas Regnier was emphatic in a recent press briefing, stating, “Let me be as clear as possible, there is no stop the clock. There is no grace period. There is no pause.”
The AI Act, which entered into force in August 2024, is being implemented in phases. Key deadlines are rapidly approaching: obligations for general-purpose AI models take effect in August 2025, and the most extensive requirements, those covering “high-risk” AI systems, become fully applicable in August 2026. The high-risk categories encompass AI used in critical areas such as healthcare, law enforcement, and employment.
Industry stakeholders had voiced concerns about the complexity of the new rules and the significant resources required to ensure compliance, especially for smaller businesses and startups. Some had also highlighted the lack of fully developed technical standards and guidelines as a reason for needing more time to prepare.
However, the EU’s message is clear: businesses operating within the bloc must adapt to the new regulatory landscape without further delay. While the Commission has acknowledged that simplification measures for broader digital rules may be explored later this year, these will not affect the AI Act’s timeline.
The AI Act introduces a risk-based framework, banning certain “unacceptable risk” AI applications like social scoring and cognitive behavioral manipulation, and imposing stringent requirements on “high-risk” systems. “Limited risk” AI tools, such as chatbots, face lighter transparency obligations.
The EU’s resolve underscores its ambition to lead the global conversation on responsible AI development and deployment, ensuring the technology aligns with European values and fundamental rights. Companies are now urged to prioritize their compliance strategies to meet the impending deadlines and avoid significant penalties, which range from €7.5 million or 1% of global annual turnover to €35 million or 7% of global annual turnover, depending on the severity of the infringement. The message from Brussels is firm: the future of AI in Europe is here, and it’s on schedule.