4. The timeline of implementation of the AI Act

On 10 June 2025, the European Parliament published an “At a Glance” factsheet detailing the implementation timeline of the EU Artificial Intelligence Act (AI Act), the world’s first comprehensive AI regulatory framework. The document sets out the Act’s history, purpose, core provisions and, above all, the staggered schedule by which its requirements will enter into force across the European Union.

Background and Purpose

The AI Act was formally adopted by the European Parliament on 13 March 2024 and published in the Official Journal on 12 July 2024, entering into force twenty days later (1 August 2024). It establishes a risk-based regulatory regime for AI systems, dividing them into “unacceptable,” “high,” “limited” and “minimal” risk categories. Its overarching goal is to ensure that AI deployed in the EU is safe, transparent and respectful of fundamental rights, while fostering innovation across the single market.
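For readers who track compliance in code, the four risk tiers can be sketched as a simple enumeration. This is purely illustrative; the obligation summaries attached to each tier are paraphrases, not the Act's wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (obligation labels paraphrased for illustration)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment plus ongoing obligations"
    LIMITED = "transparency duties"
    MINIMAL = "no specific obligations"
```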

What the Factsheet Includes

The factsheet serves as a concise guide to:

  • History & Purpose: tracing the AI Act from proposal (April 2021) through political agreement (December 2023) to adoption and entry into force.
  • Key Provisions: outlining definitions (Chapter I), prohibited practices (Chapter II), high-risk system rules (Chapter III), general-purpose AI obligations (Chapter V) and enforcement mechanisms (Chapter IX).
  • Staggered Implementation Steps: detailing when each set of rules becomes applicable, culminating in full effectiveness by 2027.

Timeline of Implementation

  • 2 February 2025
    • Chapters I (general provisions) and II (prohibited AI practices) become applicable.
    • The European Commission publishes non-binding guidelines on prohibited practices, clarifying, for instance, the bans on social scoring, predictive policing based solely on profiling, and real-time remote biometric identification in publicly accessible spaces.
  • 2 May 2025
    • Deadline for completion of the Code of Practice for general-purpose AI (GPAI) models, covering large language models and other foundation models, to be drawn up with industry or, failing that, established through Commission intervention.
  • 2 August 2025
    • The following become applicable:
      • Designation of notified bodies and authorities for high-risk AI systems;
      • Obligations for GPAI model providers (governance, transparency, technical documentation, incident reporting);
      • Penalties regime (excluding fines specific to GPAI models);
      • Confidentiality requirements in post-market monitoring.
  • 2 February 2026
    • Commission issues guidelines on the classification rules for high-risk AI systems (Article 6), providing practical examples and clarifying Annex III criteria.
  • 2 August 2026
    • General application date for the remainder of the AI Act, including fines for GPAI model breaches and the requirements for high-risk AI systems listed in Annex III; the classification rules of Article 6(1), which concern high-risk systems embedded in regulated products, follow a year later.
    • By this date, national authorities must be fully empowered to enforce the Act across Member States.
  • 2 August 2027
    • Application of Article 6(1) (classification rules for high-risk AI systems embedded in products covered by Annex I) and the associated obligations, marking the completion of the Act's phased rollout.
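The staggered schedule above lends itself to a small lookup: given a date, which milestones already apply? A minimal Python sketch, using the dates from the factsheet (the milestone labels are our paraphrases, not official wording):

```python
from datetime import date

# Key application dates from the Parliament's factsheet; labels are
# paraphrased summaries for illustration, not the Act's own text.
MILESTONES = {
    date(2025, 2, 2): "Chapters I and II apply (general provisions, prohibited practices)",
    date(2025, 8, 2): "GPAI obligations, notified bodies, penalties (except GPAI fines)",
    date(2026, 8, 2): "General application, incl. GPAI fines and Annex III high-risk rules",
    date(2027, 8, 2): "Article 6(1) / Annex I rules for AI embedded in regulated products",
}

def applicable_milestones(on: date) -> list[str]:
    """Return the milestone descriptions already applicable on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= on]
```

For example, querying 1 January 2026 returns only the first two milestones, matching the factsheet's sequence.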

Significance and Next Steps

This staggered approach balances the need for rapid prohibition of the most dangerous uses of AI with practical lead times for industry, regulators and Member States to adapt. Over the next two years, stakeholders must:

  1. Audit and classify all AI systems and GPAI models in use (obligations applicable since 2 February 2025).
  2. Develop compliance frameworks: risk management, documentation, human-oversight mechanisms and AI literacy programs for personnel (likewise applicable since 2 February 2025).
  3. Engage with guidelines: monitor Commission publications and standardization efforts (e.g., CEN-CENELEC harmonized standards due end-2025).

By 2 August 2027, the EU will have achieved a fully operational, risk-based AI regulatory regime, setting a global precedent for trustworthy AI governance.

Implementation timeline factsheet (PDF, European Parliament):
“The timeline of implementation of the AI Act” (At a Glance series, June 2025)
https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/772906/EPRS_ATA%282025%29772906_EN.pdf

Commission Publishes AI Literacy FAQs Under the EU AI Act: What You Need to Know

As of 2 February 2025, the EU Artificial Intelligence Act requires providers and deployers of AI systems to ensure that everyone involved—from in-house staff to external contractors and even end-users—possesses an appropriate level of AI literacy (Article 4 of the AI Act). On 7 May 2025, the European Commission published a set of Frequently Asked Questions to clarify exactly what that means in practice.

What does article 4 of the AI Act provide?

Providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. In doing so, they should take into account the technical knowledge, experience, education and training of those staff and other persons, as well as the context in which the AI systems are to be used and the persons on whom they are to be used.

Why AI Literacy Matters

Ensuring that people understand the AI tools they work with isn’t just about ticking a compliance box. Solid AI literacy underpins:

  • Transparency (Article 13): Staff need to grasp how AI systems arrive at decisions so they can explain outputs to regulators and stakeholders.
  • Human Oversight (Article 14): Users must be able to identify when something goes wrong, interpret AI recommendations responsibly, and intervene when necessary.

Who’s Covered?

The Commission makes clear that “AI literacy” isn’t limited to AI engineers. Obligations extend to:

  • Employees across all functions
  • Contractors and service providers supporting AI projects
  • Clients and end-users who operate the systems

The key requirement is that literacy be commensurate with each individual’s existing technical knowledge, experience, training and the context in which they engage with AI.

What’s Required—And What Isn’t

  • No mandatory exams or certifications. The Commission does not prescribe formal tests or assessment tools, but organizations must “take proactive measures” to verify that their people understand AI sufficiently.
  • Tailored training. Especially in high-risk sectors such as finance and healthcare, the FAQs explicitly encourage bespoke modules on domain-specific AI use cases and tools (think ChatGPT-style assistants or medical-diagnostic algorithms).
  • Ongoing support. It’s not a one-and-done webinar. Firms should embed AI literacy into regular upskilling, performance reviews and tool-specific guides.

The EU AI Office’s Role

The new FAQs confirm that the forthcoming clarifications on Articles 8–25 of the AI Act will cover literacy obligations in depth. Meanwhile, the Commission itself has set an example by:

  1. Creating an internal AI portal, a “one-stop shop” for guidelines, training resources, events and news. (For more details, see the Commission’s guidelines on the definition of an AI system, published alongside its guidelines on prohibited AI practices.) A dedicated webpage on AI literacy and skills is under preparation.
  2. Segmenting learning packages for generalists, managers and specialist developers, with resources tagged as “essential,” “highly recommended,” or “recommended.”
  3. Curating tool-specific libraries, so every staff member can quickly find the right tutorial or best-practice guide for each AI application they use.

Next Steps for Your Organization

  1. Audit your people and roles. Map who interacts with AI systems and gauge their current proficiency.
  2. Design tiered learning pathways. Align content to both people’s functions (e.g. finance vs. operations) and risk levels in your industry.
  3. Build verification into processes. Even without formal tests, include quick self-assessments or manager-led check-ins to confirm comprehension.
  4. Stay tuned for the EU AI Office’s detailed rule-book on Articles 8–25, due later this year.
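The tiered-pathway idea in steps 1 and 2 can be sketched as a small role-to-tier mapping. The tier names mirror the Commission's "essential" / "highly recommended" / "recommended" tagging mentioned above; the role mapping and helper function are hypothetical, and an organization would define its own.

```python
from dataclasses import dataclass, field

# Tier names taken from the Commission's resource tagging; everything
# else in this sketch is an assumed example, not an official scheme.
TIERS = ("essential", "highly recommended", "recommended")

@dataclass
class StaffMember:
    name: str
    role: str                              # e.g. "generalist", "manager", "developer"
    completed: set = field(default_factory=set)

# Hypothetical role-to-tier requirements.
REQUIRED = {
    "generalist": {"essential"},
    "manager": {"essential", "highly recommended"},
    "developer": set(TIERS),
}

def gaps(member: StaffMember) -> set:
    """Training tiers the member still needs to complete (unknown roles default to 'essential')."""
    return REQUIRED.get(member.role, {"essential"}) - member.completed
```

A mapping like this makes step 3 (building verification into processes) straightforward: a manager-led check-in simply reviews each person's remaining gaps.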

🔗 You can read the full set of AI literacy FAQs on the European Commission’s website: AI Literacy – Questions & Answers

New GDPR Guidance Alert! IMY’s 10-Step Guide to Data Protection Impact Assessments (DPIAs)

Struggling with GDPR compliance? Sweden’s Data Protection Authority (IMY – Integritetsskyddsmyndigheten) just released a practical 10-step guide to conducting effective Data Protection Impact Assessments (DPIAs) for high-risk data processing. Here’s why it matters:

What is IMY?

IMY is Sweden’s Data Protection Authority, responsible for enforcing GDPR compliance and ensuring that personal data is handled securely and lawfully. They provide guidance, conduct audits, and investigate breaches to protect individuals’ privacy rights.

Why are DPIAs Important?

A DPIA is a process to identify and minimize risks to individuals’ privacy when processing their personal data. It is a legal requirement under the GDPR for high-risk activities, such as large-scale data processing, profiling, or handling sensitive data.

✅ Key Highlights from the Guide:
1️⃣ 10-Step Process: A clear, structured approach to conducting DPIAs, from assessing the need to continuous monitoring.
2️⃣ Risk Identification: Learn how to spot risks to individuals’ rights, such as identity theft, discrimination, or loss of control over personal data.
3️⃣ Legal Compliance: Ensure your processing has a lawful basis, respects data minimization, and upholds data subject rights.
4️⃣ Stakeholder Engagement: Tips on consulting Data Protection Officers (DPOs), employees, and even data subjects to gather valuable insights.
5️⃣ Documentation: A strong emphasis on documenting every step, from risk assessments to mitigation measures.

💡 3 Quick Tips from the Guide:

  • Start early: Assess if a DPIA is needed before launching new projects.
  • Document everything: From risk matrices to stakeholder feedback.
  • Review regularly: DPIAs aren’t one-time tasks—update them as risks evolve.
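To make the "risk matrix" tip concrete, here is a hypothetical scoring helper. The 1–3 likelihood and severity scales, the thresholds, and the example risks below are our assumptions for illustration; neither IMY nor the GDPR prescribes a specific scoring formula.

```python
# Hypothetical DPIA risk matrix: risk = likelihood x severity, each on
# an assumed 1-3 scale. Thresholds are illustrative, not prescribed.
def risk_score(likelihood: int, severity: int) -> str:
    assert 1 <= likelihood <= 3 and 1 <= severity <= 3
    score = likelihood * severity
    if score >= 6:
        return "high"      # mitigate, or consult the supervisory authority
    if score >= 3:
        return "medium"
    return "low"

# Example risks from the guide's own list, with assumed ratings.
risks = {
    "identity theft": (2, 3),
    "loss of control over personal data": (3, 2),
    "discrimination": (1, 3),
}
assessment = {name: risk_score(*ls) for name, ls in risks.items()}
```

Documenting the inputs and the resulting scores, and revisiting them as processing changes, is exactly the "document everything" and "review regularly" advice in code form.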

⚠️ Miss this at your peril: Failure to conduct DPIAs for high-risk processing can lead to hefty fines under GDPR.

📖 Dive deeper: Grab the full guide here ➡️ IMY’s Practical Guide to DPIAs

Perfect for DPOs, compliance teams, and anyone handling sensitive data. Share with your network to spread the knowledge!

#GDPR #DataProtection #Compliance #Privacy #RiskManagement

Let’s stay compliant and protect personal data together! 💪

📌 P.S.: What’s your biggest challenge with DPIAs? Share in the comments! 👇