Tech giants are preparing for sizeable changes to their operations and strategies as the EU brings in wide-ranging Artificial Intelligence (AI) regulation.
Touted as the ‘world’s first comprehensive legal framework on AI’, the much-anticipated AI Act will impose extensive obligations on companies providing AI-based goods or services in the EU, irrespective of where they are established.
The EU’s AI Act employs a risk-based approach, tiering AI systems into four categories: unacceptable, high, limited and minimal risk. Systems considered to entail an “unacceptable risk” — such as those intended for social scoring or real-time identification using biometric data in public areas (with some exceptions for law enforcement) — will be banned outright.
High-risk AI systems – such as those deployed in critical infrastructure and fields such as education, employment, or essential services – will have to adhere to strict data and performance requirements, as well as ensure transparency and human oversight.
Similarly, general-purpose AI (GPAI) systems such as large language models will be subject to certain transparency requirements: disclosing that content was generated by AI and providing summaries of the copyright-protected data the model was trained on.
Stricter rules will apply to more powerful GPAI models where there is potential for systemic risk, including rigorous testing and incident reporting.
The repercussions for global tech companies are significant. Many will need to invest heavily in compliance, potentially re-architect their AI development and deployment pipelines, and even redesign products and services to meet EU criteria.
Failure to do so could result in heavy fines of up to 7% of a company’s total worldwide annual turnover or €35 million, whichever is greater.
Some tech giants have raised concerns about the complexity of the regulations and the risk of stifling innovation, but others have said they are open to working with the EU to ensure that AI is developed and deployed responsibly.
The Act is being phased in: some prohibitions took effect in February 2025, with further requirements introduced over the next few years, leaving companies a narrow window to adjust.
Experts speculate that the “Brussels Effect” (whereby EU rules become de facto global standards) will once again apply, this time to AI governance frameworks around the world. Global corporations are watching closely how the AI Act plays out, aware that it could shape the future of AI regulation far beyond Europe.
“We’re not out of the woods yet, and the next few months will be critical as the sector digests the detail of the finalised regulations and tries to steer a course through to compliance in this new era for AI governance.”