Sorting through the noise: Preparing for the EU AI Act's AI literacy requirements coming into effect February 2025

For those working in AI governance, the whirlwind of 2024 left little time to catch a breath, and 2025 is already off to a fast-paced start. On February 2, 2025, the first major requirements of the EU AI Act come into effect, and AI literacy is grabbing the most attention. It is one of the few obligations in the Act that applies to all AI systems regardless of risk level: all providers and deployers of AI systems must ensure their teams are adequately trained and informed about the systems they work with, though the Act does not dictate precisely how organizations should comply with the principle.

The Act defines ‘AI literacy’ as the skills, knowledge and understanding needed to use and operate AI systems in an informed way, and to be aware of the opportunities, risks and possible harm that AI systems may present. The ultimate purpose is to ensure that staff can make informed decisions in relation to AI, such as how to interpret AI output and decision-making processes and their impact on natural persons.

As the deadline looms, inboxes are overflowing with AI compliance webinars, whitepapers, consulting offers, and literacy training materials. Sound familiar? For privacy professionals, this rush might bring back memories of the GDPR rollout. Once again, everyone seems to have advice, but the real question is: what actions do you actually need to take?

Let’s cut through the noise and focus on the three questions that matter:

  1. Do these requirements apply to you?
  2. If yes, how should you start and what kind of training do you need?
  3. How can you implement it effectively?

1. Who needs to comply?

The scope may be broader than anticipated. You're affected if you're:

  • Developing AI solutions that interact with EU customers' data
  • Using AI for significant decisions (e.g., hiring or credit approvals)
  • Building custom applications on top of foundation models

Similar to the GDPR, the territorial scope extends beyond the EU: the Act can apply to organizations outside the EU whose AI systems are used in, or affect people in, the EU.

Understanding your risk level

Even routine business applications may be classified as "high-risk" under the Act. Common examples include:

  • Recruitment tools screening candidates
  • Systems monitoring employee performance
  • Customer service automation impacting service access
  • Voice recognition for authentication

The AI Act demands extra attention for high-risk systems, similar to GDPR's treatment of sensitive personal data. Fortunately, privacy impact assessment frameworks can be adapted for AI risk evaluation.
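
If you already track AI use cases through a structured intake process, a lightweight triage step can flag candidates for full high-risk review. The sketch below is a minimal illustration, not a legal determination: the area labels and the AIUseCase fields are assumptions invented for this example, and any real classification should follow Annex III of the Act with legal review.

```python
# Minimal sketch: flag AI use cases for high-risk review.
# The categories below are illustrative assumptions loosely based on the
# examples above; real classification must follow Annex III of the AI Act
# and be confirmed by legal counsel.

from dataclasses import dataclass

# Illustrative high-risk areas (hypothetical labels, not Annex III text)
HIGH_RISK_AREAS = {
    "recruitment",          # candidate screening tools
    "employee_monitoring",  # performance monitoring systems
    "service_access",       # automation that gates access to services
    "biometric_auth",       # voice/face recognition for authentication
}

@dataclass
class AIUseCase:
    name: str
    area: str          # free-text tag assigned during intake
    affects_eu: bool   # output used in, or affecting people in, the EU

def needs_high_risk_review(use_case: AIUseCase) -> bool:
    """Return True when the use case should go to a full risk assessment."""
    return use_case.affects_eu and use_case.area in HIGH_RISK_AREAS

if __name__ == "__main__":
    screening = AIUseCase("CV screening bot", "recruitment", affects_eu=True)
    print(needs_high_risk_review(screening))  # True -> route to assessment
```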

2. Building your Training Program

To create an effective AI literacy program, leverage your privacy program experience and take cues from established AI literacy training providers. Tailor content to different employee roles, incorporate practical exercises and case studies, and update training materials regularly to reflect AI developments and regulatory changes.

AI literacy training audiences fall into four groups:

  1. Those responsible for complying with the AI Act: Privacy leaders, legal experts, AI officers, and similar roles need to understand specific compliance requirements such as conformity assessments.
    • Example: The IAPP offers the Artificial Intelligence Governance Professional (AIGP) certification, which demonstrates that an individual can ensure safety and trust in the development, deployment, and ongoing management of ethical AI systems.
  2. Leadership roles: Leaders require a mix of "how-to" inspiration, use case examples, hands-on training, risk and harm insights, and market strategy.
    • Example: The European AI Alliance offers free online courses on AI ethics and governance.
  3. Users: Training for users should balance "how-to" content with an emphasis on risk and harm.
    • Example: Coursera offers an "AI For Everyone" course, suitable for non-technical professionals.
  4. Technologists, project managers, builders, data analysts, and procurement teams: These roles demand a more technical understanding.
    • Example: The Linux Foundation offers a professional certificate in "AI Ethics and Governance".

2.1 Planning your Training Program

Start with critical roles NOW (Q1 2025):

  • Decision-makers approving AI systems.
  • Technical teams building solutions.
  • Procurement teams evaluating vendors.

Use GDPR training approaches as a template, adapting content for AI-specific risks.

Add oversight functions (Q2 2025):

  • Risk and compliance teams.
  • Privacy officers and DPOs.
  • Audit committees.
  • Project managers.

Many of these teams already understand compliance requirements—build on that foundation.

Expand to users (Q3 2025):

  • Teams using AI tools daily.
  • Support functions.

Apply lessons learned from privacy awareness training.
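
One way to keep this phased plan actionable is to encode it as a simple register that a spreadsheet export or compliance tooling can read. The sketch below mirrors the roles and quarters from the plan above; the TRAINING_PLAN structure, module names, and roles_due helper are illustrative assumptions, not a standard format.

```python
# Minimal sketch: the phased rollout above encoded as a training register.
# Roles and quarters mirror the plan; module names are illustrative assumptions.

TRAINING_PLAN = {
    "Q1 2025": {
        "roles": ["decision-makers", "technical teams", "procurement"],
        "modules": ["AI Act obligations", "risk and harm", "vendor evaluation"],
    },
    "Q2 2025": {
        "roles": ["risk & compliance", "privacy officers/DPOs",
                  "audit committees", "project managers"],
        "modules": ["conformity assessments", "oversight duties"],
    },
    "Q3 2025": {
        "roles": ["daily AI tool users", "support functions"],
        "modules": ["responsible use", "interpreting AI output"],
    },
}

def roles_due(quarter: str) -> list[str]:
    """Roles that must complete training in the given quarter."""
    return TRAINING_PLAN.get(quarter, {}).get("roles", [])

print(roles_due("Q1 2025"))  # who needs training first
```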

3. Making it work

Clear ownership is crucial

  • The Data Protection Officer (DPO) is well positioned to champion AI governance, at least initially. Their expertise in data protection, risk assessment, and compliance makes them a natural fit to lead EU AI Act compliance. (In the longer term, AI governance leaders will need deeper technical expertise in AI, plus the ability to address the broad ethical, strategic, and multidisciplinary challenges of AI systems that go beyond the DPO's focus.)
  • Appoint a senior leader (typically the CRO, CCO, or CPO) as the program's main stakeholder and sponsor.
  • Ensure they have real authority and cross-functional support.

Leverage existing privacy expertise

Privacy professionals bring valuable skills to AI governance:

  • Building and documenting compliance programs
  • Risk assessment methodologies
  • Training and awareness best practices
  • Regulatory engagement strategies
  • Impact assessment frameworks

These existing competencies can be adapted to address AI-specific challenges, streamlining the implementation of AI governance processes.

External support

When seeking external advisors, prioritize teams with:

  • Privacy/data protection expertise
  • AI governance experience
  • Industry-specific knowledge
  • Proven training frameworks
  • EU regulatory experience

This combination of skills ensures advisors can provide comprehensive support tailored to your organization's needs and the specific requirements of the EU AI Act.

Let’s not overcomplicate

Here’s an uncomfortable question: In the rush to check all the AI literacy boxes and launch training programs, are we overcomplicating AI governance? What if we need fewer consultants, fewer frameworks, and more focus on the fundamentals that always matter: clear ownership, practical risk assessment, and targeted training for the people who actually need it?

By building on what we’ve learned from GDPR, we can approach AI literacy requirements confidently and effectively—without reinventing the wheel.

Next steps on AI Governance

  1. Map your AI systems as you mapped data flows for GDPR, and identify high-risk applications using impact assessment frameworks (a register sketch follows this list).
  2. Assign controls based on risk classification.
  3. Manage conformity assessments for high-risk systems.
  4. Meet reporting requirements by documenting and monitoring compliance status.
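
For step 1, an AI system register can mirror the records of processing many privacy teams already maintain under GDPR, and the same record can support steps 2 through 4. The sketch below is a minimal illustration: the AISystemRecord fields and the example entry are assumptions for this example, not a prescribed schema.

```python
# Minimal sketch: an AI system register mirroring a GDPR record of processing.
# Field names and the example entry are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable business owner
    purpose: str               # what decisions the system supports
    risk_level: str            # e.g. "minimal", "limited", "high"
    conformity_assessed: bool = False
    controls: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)  # links to docs/audits

REGISTER: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening bot",
        owner="Head of Talent",
        purpose="Shortlist candidates for interviews",
        risk_level="high",
        controls=["human review of rejections", "bias testing"],
    ),
]

# Step 4: reporting -- list high-risk systems still awaiting assessment.
pending = [r.name for r in REGISTER
           if r.risk_level == "high" and not r.conformity_assessed]
print(pending)
```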

Learn more about how TrustWorks streamlines the discovery of AI use cases and ensures AI Act compliance while reducing manual effort and compliance risks.

Author

Roberta Kowalishin

AI & IT Strategy Expert. AIGP Certified by IAPP