Implementation Timeline: Already Rolling Out
The regulation entered into force on August 1, 2024, with obligations phasing in over the following years:
- February 2, 2025: Official ban on certain high-risk AI practices takes effect
- August 2, 2025: Rules apply to General-Purpose AI (GPAI) models — the large-scale models behind services like ChatGPT and Gemini — with additional obligations for models deemed to pose systemic risks
However, providers of GPAI models already on the market before August 2, 2025 — a group that includes OpenAI, Google, and Anthropic — have until August 2, 2027 to reach full compliance with the GPAI provisions.
Tough Penalties: Fines Up to €35 Million
The Act includes strong enforcement mechanisms with significant penalties:
- Violations involving banned AI use: up to €35 million or 7% of global annual turnover, whichever is higher
- Violations by GPAI providers: up to €15 million or 3% of global annual turnover, whichever is higher
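The fine ceilings above follow a simple "higher of a fixed amount or a share of turnover" rule. As a rough illustration (not legal advice), the two headline tiers can be sketched as:

```python
# Illustrative sketch of the EU AI Act's two headline fine ceilings:
# the cap is the HIGHER of a fixed euro amount and a percentage of
# global annual turnover. Function name and structure are our own.

def fine_cap(turnover_eur: float, prohibited_practice: bool) -> float:
    """Return the maximum possible fine for the given violation tier."""
    if prohibited_practice:
        # Banned AI practices: €35M or 7% of global annual turnover
        return max(35_000_000, 0.07 * turnover_eur)
    # GPAI provider violations: €15M or 3% of global annual turnover
    return max(15_000_000, 0.03 * turnover_eur)

# For a firm with €1 billion in turnover, 7% (€70M) exceeds the €35M floor:
print(fine_cap(1_000_000_000, prohibited_practice=True))  # 70000000.0
```

For smaller companies the fixed amount dominates: at €100 million turnover, 7% is only €7 million, so the €35 million figure becomes the ceiling.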
Each EU country will implement detailed enforcement procedures aligned with the main regulation.
Industry Response: Mixed Reactions
Major tech firms including Google, Microsoft, Amazon, and IBM have signed the EU's voluntary GPAI Code of Practice, signaling early support. But not everyone agrees.
- Meta refused to sign, calling the Act a “step backward” for innovation in Europe
- The CEO of France’s Mistral AI urged a two-year delay in the regulation’s application
Despite such pushback, the EU rejected calls for postponement, maintaining the August 2, 2025 deadline.
What It Means for Indonesia and Southeast Asia
For Indonesian AI companies targeting European markets, the EU AI Act is not just a regional rule — it’s a new global benchmark. Developers must now ensure that their systems meet stringent standards for privacy, security, and transparency from the design stage.
On the flip side, the Act offers opportunity. By aligning with the EU’s unified framework, Southeast Asian companies can gain broader access to European markets and build stronger trust among users in the region.