EU AI Act Compliance Guide: Classifying and Managing AI System Risks
A practical framework for classifying AI systems under the EU AI Act risk pyramid, with decision trees, documentation templates, and technical compliance checklists for the February 2025 prohibited practices deadline.
Who This Guide Is For
- Audience: AI product managers, compliance officers, enterprise architects, and developers deploying AI systems in EU markets or serving EU customers
- Prerequisites: Basic understanding of AI systems and familiarity with regulatory compliance concepts
- Estimated Time: 45-60 minutes for complete classification and initial compliance planning
Overview
This guide provides a step-by-step framework for classifying AI systems under the EU AI Act's risk pyramid and implementing technical compliance measures. You will learn:
- How to use a decision tree to classify any AI system into one of four risk categories
- Which AI practices are already prohibited (enforceable since February 2, 2025)
- Technical documentation requirements for high-risk systems
- Human oversight implementation including mandatory "stop button" requirements
- Conformity assessment pathways and timeline-based compliance planning
The EU AI Act establishes a risk-based regulatory framework with enforcement deadlines spanning from February 2025 to August 2027. Organizations deploying AI systems in EU markets face penalties of up to 35 million EUR or 7% of global annual turnover, whichever is higher, for prohibited practice violations.
Key Facts
- Who: EU member states, organizations deploying AI systems in EU markets, AI providers and deployers globally
- What: Regulation (EU) 2024/1689 establishes 4-tier risk classification with enforcement penalties up to 35M EUR or 7% turnover
- When: Prohibited practices enforceable since February 2, 2025; high-risk systems deadline August 2, 2026
- Impact: HR tech, EdTech, facial recognition, medical devices, vehicles, employment screening, law enforcement AI
Step 1: Determine Your Risk Classification Using the Decision Tree
The EU AI Act uses a 4-tier risk pyramid. Classification determines your compliance obligations, from complete bans to voluntary best practices.
The Four Risk Tiers
| Risk Tier | Enforcement Status | Key Requirement | Deadline |
|---|---|---|---|
| Prohibited | Criminal/Administrative penalties | Complete ban | Feb 2, 2025 (ENFORCEABLE) |
| High-Risk | Conformity assessment required | Full compliance with Articles 9-15 | Aug 2, 2026 |
| Transparency | Disclosure obligations | User notification requirements | Aug 2, 2026 |
| Minimal | Voluntary codes of conduct | Best practices encouraged | No deadline |
Classification Decision Tree
Use this decision flow to classify your AI system:
START: What is your AI system's primary function?
STEP 1: PROHIBITED PRACTICES CHECK
=========================================================
Does your system perform ANY of the following?
- Infer emotions in workplace or educational settings
- Create facial recognition databases via untargeted scraping
- Implement social scoring with detrimental treatment
- Predict criminal risk solely from profiling
- Categorize persons by biometrics for race/politics/religion/orientation
- Use subliminal techniques beyond a person's consciousness to materially distort behavior
- Exploit vulnerabilities (age, disability, socio-economic status)
- Real-time biometric ID in public spaces (limited exceptions)
YES -> STOP. Classification: PROHIBITED
System must not be placed on market or put into service.
NO -> Proceed to Step 2
STEP 2: HIGH-RISK ANNEX III CHECK
=========================================================
Is your system listed in Annex III?
- Biometric identification and categorization
- Critical infrastructure management
- Education and vocational training
- Employment, worker management, self-employment
- Access to essential services (credit, insurance, benefits)
- Law enforcement
- Migration, asylum, border control
- Administration of justice and democratic processes
NO -> Proceed to Step 3
YES -> Check DEROGATION CONDITIONS:
Does your system:
A) Perform narrow procedural tasks?
B) Improve the result of human activity?
C) Detect decision-making patterns without replacing humans?
D) Perform preparatory assessment tasks?
AND: Does NOT perform profiling?
ANY CONDITION (A-D) MET AND NO PROFILING -> Classification: NON-HIGH-RISK
Document derogation assessment.
NO CONDITION MET, OR PROFILING PERFORMED -> Classification: HIGH-RISK
Proceed to conformity assessment.
STEP 3: ANNEX I PRODUCT SAFETY CHECK
=========================================================
Is your AI system a safety component of products covered by:
- Machinery Regulation
- Medical Devices Regulation
- Radio Equipment Directive
- Toy Safety Directive
- Lifts Directive
- Other sectoral legislation listed in Annex I
NO -> Proceed to Step 4
YES -> Does the product require third-party conformity assessment?
YES -> Classification: HIGH-RISK (Product-related)
Conformity assessment via sectoral legislation.
NO -> Proceed to Step 4
STEP 4: TRANSPARENCY RISK CHECK
=========================================================
Does your system:
- Interact directly with persons (chatbots, voice assistants)?
- Generate synthetic content (images, audio, video, text)?
- Perform emotion recognition (outside workplace/education)?
- Perform biometric categorization?
- Create deep fakes?
YES -> Classification: TRANSPARENCY-RISK
Disclosure requirements under Article 50 apply.
NO -> Classification: MINIMAL-RISK
Voluntary codes of conduct available.
END
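The flow above translates directly into code. Below is a minimal Python sketch of the same logic, offered as an illustration only; the field names and question set are simplified assumptions, not a substitute for legal review of Articles 5 and 6 and Annexes I and III.

```python
# A minimal sketch of the classification decision flow above.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    prohibited_practices: bool = False       # any Article 5 practice applies
    annex_iii_listed: bool = False           # listed in an Annex III category
    performs_profiling: bool = False         # automated evaluation of personal aspects
    derogation_conditions_met: bool = False  # any of conditions A-D applies
    annex_i_safety_component: bool = False   # safety component of an Annex I product
    third_party_assessment_required: bool = False
    transparency_trigger: bool = False       # chatbot, synthetic content, deep fake, ...

def classify(p: SystemProfile) -> str:
    if p.prohibited_practices:
        return "PROHIBITED"
    if p.annex_iii_listed:
        # Derogation (Article 6(3)) is unavailable when the system profiles.
        if not p.performs_profiling and p.derogation_conditions_met:
            return "NON-HIGH-RISK (document derogation assessment)"
        return "HIGH-RISK"
    if p.annex_i_safety_component and p.third_party_assessment_required:
        return "HIGH-RISK (product-related)"
    if p.transparency_trigger:
        return "TRANSPARENCY-RISK"
    return "MINIMAL-RISK"

# Example: a CV-screening tool that profiles candidates.
print(classify(SystemProfile(annex_iii_listed=True, performs_profiling=True)))
# -> HIGH-RISK
```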
Common Classification Mistakes to Avoid
| Mistake | Correction |
|---|---|
| Assuming all biometric systems are prohibited | Only specific practices are prohibited (untargeted facial scraping, emotion recognition in workplace/education). Many biometric systems are high-risk, not prohibited. |
| Over-classifying as high-risk without checking derogation | Annex III systems may claim non-high-risk status if they meet derogation conditions. Document your assessment. |
| Missing the February 2025 deadline | Article 5 prohibited practices are already enforceable. Organizations with affected systems must cease operations immediately. |
Step 2: Identify Prohibited Practices (Already Enforceable)
The EU AI Act's prohibited practices took effect on February 2, 2025. Organizations currently using these systems face immediate enforcement risk.
Complete List of Prohibited AI Practices
1. Emotion Recognition in Workplace and Education
What is banned: AI systems inferring emotions from facial expressions, voice patterns, or other biometric signals in employment and educational contexts.
Technical scope:
- Candidate screening based on emotional responses
- Employee engagement monitoring through affect analysis
- Student attention or emotion tracking in classrooms
- Performance evaluation based on emotional indicators
Exception: Medical or safety purposes (e.g., detecting driver fatigue, therapeutic applications with consent).
Affected industries: HR tech platforms, EdTech applications, workplace analytics tools.
2. Untargeted Facial Scraping
What is banned: Automated collection of facial images from the internet or CCTV footage without a specific target, for the purpose of creating or expanding facial recognition databases.
Technical scope:
- Web scraping of social media profile images
- CCTV footage harvesting without specific investigation
- Bulk collection of biometric data from public sources
Affected industries: Facial recognition service providers, security technology vendors, identity verification platforms.
3. Social Scoring Systems
What is banned: AI systems that classify persons based on social behavior or personality traits over time, leading to detrimental treatment in unrelated contexts.
Technical scope:
- Scoring systems that aggregate behavior across contexts
- Treatment decisions based on scores from unrelated data
- Systems that create trustworthiness ratings from social media activity
Affected industries: Credit scoring extensions, insurance risk assessment, tenant screening.
4. Predictive Policing for Criminal Risk
What is banned: AI systems predicting criminal risk solely from profiling or personality traits, without supporting factual evidence.
Technical scope:
- Risk assessment based solely on demographic or behavioral profiles
- Predictive models without concrete criminal indicators
- Profiling-based threat scoring without judicial oversight
5. Biometric Categorization for Protected Characteristics
What is banned: AI systems categorizing persons by biometric data to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation.
Exception: Law enforcement filtering of lawfully acquired datasets.
6. Real-Time Remote Biometric Identification in Public Spaces
What is banned: Real-time biometric identification in public spaces for law enforcement purposes.
Limited exceptions require:
- Judicial authorization or equivalent
- Strict necessity for: missing persons search, terrorism threat prevention, serious crime investigation (4+ year custodial sentence)
- Fundamental rights impact assessment
- EU database registration
Immediate Action Checklist for Prohibited Practices
- Audit all AI systems for emotion recognition capabilities in HR/education contexts
- Review data collection practices for facial scraping activities
- Assess social scoring mechanisms in customer/employee evaluation systems
- Document any biometric categorization based on protected characteristics
- Discontinue prohibited systems or modify for compliant use cases (see the audit sketch after this list)
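As referenced in the checklist, the audit step can start from a simple inventory scan. The Python sketch below flags systems whose capability tags map to Article 5 prohibitions; the tag names and inventory format are illustrative assumptions, not a standard taxonomy.

```python
# A minimal sketch of a prohibited-practices inventory scan.
PROHIBITED_TAGS = {
    "emotion_recognition_workplace",
    "emotion_recognition_education",
    "untargeted_facial_scraping",
    "social_scoring",
    "profiling_only_crime_prediction",
    "biometric_protected_categorization",
}

# Assumed inventory format: one record per deployed or in-pipeline system.
inventory = [
    {"name": "InterviewInsights", "tags": {"emotion_recognition_workplace"}},
    {"name": "DocSummarizer", "tags": {"text_generation"}},
]

for system in inventory:
    hits = system["tags"] & PROHIBITED_TAGS
    if hits:
        print(f"FLAG {system['name']}: prohibited capability {sorted(hits)}")
```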
Step 3: Assess High-Risk Classification and Derogation Options
If your system is not prohibited but falls under Annex III categories, you must determine if high-risk requirements apply or if derogation conditions are met.
Annex III High-Risk Categories
| Category | Systems Included | Derogation Possible? |
|---|---|---|
| Biometric ID | Remote biometric identification, biometric categorization | Limited - profiling always high-risk |
| Critical Infrastructure | Energy, transport, water supply management systems | Yes - if narrow procedural task |
| Education | Student admission, learning outcomes assessment, proctoring | Yes - if improves human results |
| Employment | Recruitment screening, task allocation, performance evaluation | Yes - if pattern detection only |
| Essential Services | Creditworthiness, insurance pricing, benefit eligibility | Limited - profiling always high-risk |
| Law Enforcement | Lie detection, emotion assessment, risk assessment, DNA analysis | No |
| Migration | Border control, visa processing, asylum assessment | Limited |
| Justice | Court rulings, case law analysis, evidence evaluation | No - judicial independence |
Derogation Assessment Framework
For Annex III systems, document this assessment before claiming non-high-risk status:
DEROGATION ASSESSMENT RECORD
================================
System Name: [Your AI System]
Annex III Category: [e.g., Employment - Annex III, point 4]
Assessment Date: [YYYY-MM-DD]
Assessor: [Name, Role]
DEROGATION CONDITION CHECK:
[ ] Condition A: Narrow Procedural Task
Does the system perform narrow procedural tasks without
substantially influencing decision outcomes?
Evidence: [Describe task scope, decision impact level]
[ ] Condition B: Improves Human Activity Results
Does the system merely improve the result of a human
activity previously carried out without AI?
Evidence: [Describe human baseline, improvement metrics]
[ ] Condition C: Detects Patterns Without Replacing Decisions
Does the system detect decision-making patterns or provide
auxiliary information without replacing human decision-making?
Evidence: [Describe decision flow, human role in final decision]
[ ] Condition D: Preparatory Assessment Tasks
Does the system perform preparatory tasks for assessments
relevant to Annex III use cases?
Evidence: [Describe preparatory vs. final assessment role]
CRITICAL CHECK:
[ ] Profiling Status: Does the system perform profiling?
YES -> Derogation NOT available. System is HIGH-RISK.
NO -> Derogation may apply if any condition A-D is met.
CONCLUSION:
[ ] NON-HIGH-RISK: Derogation conditions met
Document and retain assessment record.
[ ] HIGH-RISK: Derogation not applicable
Proceed to conformity assessment requirements.
When Profiling Overrides Derogation
Profiling is defined as the automated processing of personal data to evaluate certain personal aspects of a person. If your Annex III system performs profiling, derogation is not available regardless of the other conditions; the sketch after the list below shows how this override can be encoded.
Systems that profile:
- Behavioral scoring for hiring decisions
- Learning style categorization for student placement
- Risk assessment based on personal characteristics
- Creditworthiness evaluation from behavioral data
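The following Python sketch records a derogation assessment as a retained, timestamped artifact, with the profiling override applied before any A-D condition is considered. Field names mirror the template above and are assumptions to adapt to your documentation system.

```python
# A minimal sketch of producing a retained derogation assessment record.
import json
from datetime import date

def assess_derogation(conditions: dict[str, bool], performs_profiling: bool) -> dict:
    # Profiling overrides every other condition (Article 6(3)).
    eligible = (not performs_profiling) and any(conditions.values())
    return {
        "assessment_date": date.today().isoformat(),
        "conditions": conditions,
        "performs_profiling": performs_profiling,
        "classification": "NON-HIGH-RISK" if eligible else "HIGH-RISK",
    }

record = assess_derogation(
    conditions={
        "A_narrow_procedural_task": False,
        "B_improves_human_result": True,
        "C_pattern_detection_only": False,
        "D_preparatory_task": False,
    },
    performs_profiling=True,
)
print(json.dumps(record, indent=2))  # -> classification: HIGH-RISK despite condition B
```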
Step 4: Implement Technical Documentation for High-Risk Systems
High-risk AI systems require comprehensive technical documentation before market placement. This documentation must be maintained throughout the system lifecycle.
Annex IV Documentation Template
Create a technical documentation file containing these elements:
1. General System Description
## System Overview
### Provider Information
- Company Name: [Legal entity name]
- Address: [Registered address]
- Contact: [Compliance contact]
### System Identity
- System Name: [Product/service name]
- Version: [Current version number]
- Intended Purpose: [Specific use case description]
- Target Users: [Who will operate the system]
- End Users: [Who will be affected by outputs]
### System Architecture
- Components: [List major components]
- Integration Points: [How system connects to other systems]
- Data Flow Diagram: [Attach or reference]
### Hardware Requirements
- Compute: [GPU, CPU specifications]
- Memory: [RAM requirements]
- Storage: [Data storage needs]
- Network: [Connectivity requirements]
### Expected Lifetime
- Planned operational period: [Years]
- Update frequency: [Quarterly, annual, etc.]
- End-of-life plan: [Decommissioning approach]
2. Development Process Documentation
## Development Process
### Development Team
- Project Lead: [Name, qualifications]
- Technical Leads: [Names, roles]
- Compliance Responsible: [Name, contact]
### Methodology
- Development Framework: [Agile, waterfall, etc.]
- Quality Management System: [ISO 9001, etc.]
- AI-specific methodology: [MLOps pipeline details]
### Version History
| Version | Date | Changes | Validation Status |
|---------|------|---------|-------------------|
| 1.0.0 | YYYY-MM-DD | Initial release | Validated |
| 1.1.0 | YYYY-MM-DD | [Changes] | [Status] |
### Third-Party Components
| Component | Version | Supplier | License |
|-----------|---------|----------|---------|
| [Name] | [Version] | [Supplier] | [License type] |
3. Risk Management System Documentation
## Risk Management (Article 9)
### Risk Identification Process
- Methodology: [How risks are identified]
- Frequency: [Continuous, periodic, event-triggered]
- Stakeholders involved: [Roles participating]
### Risk Estimation
| Risk ID | Description | Likelihood | Severity | Risk Score |
|---------|-------------|------------|----------|------------|
| R001 | [Risk description] | [1-5] | [1-5] | [L x S] |
### Risk Evaluation Criteria
- Acceptable risk threshold: [Definition]
- Risk tolerance: [Organizational tolerance]
### Mitigation Measures
| Risk ID | Mitigation | Residual Risk | Verification |
|---------|------------|---------------|--------------|
| R001 | [Measure] | [Score] | [Test method] |
### Continuous Monitoring
- Metrics tracked: [List metrics]
- Alert thresholds: [Threshold values]
- Response procedures: [Actions on alert]
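To make the risk-scoring template concrete, here is a minimal Python sketch computing risk score = likelihood x severity on the 1-5 scales above. The acceptance threshold and example entries are assumed values for illustration, not regulatory figures.

```python
# A minimal sketch of the risk register above (score = likelihood x severity).
from dataclasses import dataclass

ACCEPTABLE_RISK_SCORE = 8  # assumed organizational threshold

@dataclass
class Risk:
    risk_id: str
    description: str
    likelihood: int  # 1-5
    severity: int    # 1-5

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    def needs_mitigation(self) -> bool:
        return self.score > ACCEPTABLE_RISK_SCORE

register = [
    Risk("R001", "Discriminatory output for under-represented group", 3, 5),
    Risk("R002", "Degraded accuracy after upstream schema change", 2, 3),
]
for r in register:
    print(f"{r.risk_id}: score={r.score} mitigate={r.needs_mitigation()}")
```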
4. Data Governance Documentation
## Data Governance (Article 10)
### Training Data
- Source: [Data origin]
- Collection method: [How data was gathered]
- Size: [Volume, number of records]
- Time period: [Date range]
- Bias analysis: [Known biases and mitigation]
### Data Quality Measures
| Criterion | Method | Result |
|-----------|--------|--------|
| Relevance | [Method] | [Pass/Fail] |
| Completeness | [Method] | [Pass/Fail] |
| Representativeness | [Method] | [Pass/Fail] |
### Personal Data Processing
- Lawful basis: [GDPR Article 6 basis]
- Data Protection Impact Assessment: [Reference or N/A]
- Data subject rights procedures: [Process description]
### Validation and Test Data
- Separation from training: [How separated]
- Size: [Volume]
- Representativeness: [Coverage assessment]
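One practical check behind the "separation from training" item is verifying that no test record also appears in the training set. A minimal Python sketch using exact-match hashing follows; it catches exact duplicates only, and near-duplicate detection would need fuzzier methods.

```python
# A minimal sketch of a train/test overlap check (Article 10 data governance).
import hashlib

def record_hash(record: str) -> str:
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

def overlap(train: list[str], test: list[str]) -> set[str]:
    train_hashes = {record_hash(r) for r in train}
    return {r for r in test if record_hash(r) in train_hashes}

leaked = overlap(
    train=["alice,34,approved"],
    test=["alice,34,approved", "bob,29,denied"],
)
print(f"{len(leaked)} test record(s) also appear in training data")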
5. Performance and Accuracy Documentation
## Performance Metrics (Article 15)
### Accuracy Metrics
| Metric | Training Set | Validation Set | Test Set |
|--------|--------------|----------------|----------|
| Accuracy | [Value] | [Value] | [Value] |
| Precision | [Value] | [Value] | [Value] |
| Recall | [Value] | [Value] | [Value] |
| F1-Score | [Value] | [Value] | [Value] |
### Performance Across Demographic Groups
| Group | Accuracy | False Positive Rate | False Negative Rate |
|-------|----------|---------------------|---------------------|
| [Group A] | [Value] | [Value] | [Value] |
| [Group B] | [Value] | [Value] | [Value] |
### Robustness Testing
- Adversarial test results: [Summary]
- Error handling tests: [Summary]
- Edge case coverage: [Percentage]
### Cybersecurity Measures
- Data poisoning prevention: [Controls implemented]
- Model extraction protection: [Controls implemented]
- Access controls: [Authentication, authorization]
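The per-group performance table above can be produced programmatically. A minimal Python sketch using scikit-learn's confusion_matrix follows; the group labels and predictions are toy data for illustration.

```python
# A minimal sketch of per-group accuracy / FPR / FNR computation.
import numpy as np
from sklearn.metrics import confusion_matrix

def group_rates(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
        "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
    }

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(groups):
    mask = groups == g
    print(g, group_rates(y_true[mask], y_pred[mask]))
```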
SME Documentation Simplification
Small and medium enterprises (fewer than 250 employees, and either annual turnover below 50M EUR or a balance sheet total below 43M EUR) may use the simplified technical documentation form provided by the European Commission. The simplified form reduces documentation burden while maintaining essential compliance information.
Step 5: Implement Human Oversight Measures
Human oversight is mandatory for all high-risk AI systems. Article 14 requires technical measures enabling natural persons to understand, monitor, and control the system.
Technical Human Oversight Requirements
## Human Oversight Implementation Checklist
### Understanding Capabilities (Article 14(4)(a))
[ ] System capabilities documentation provided to deployers
[ ] Known limitations clearly documented
[ ] Performance characteristics on different populations documented
[ ] Operating conditions specified
### Anomaly Detection (Article 14(4)(b))
[ ] Dysfunction alerts implemented
[ ] Unexpected performance warnings configured
[ ] Data drift detection enabled
[ ] Model degradation monitoring active
### Automation Bias Prevention (Article 14(4)(c))
[ ] Confidence scores displayed for all outputs
[ ] Uncertainty indicators visible
[ ] Clear distinction between recommendations and decisions
[ ] Training materials address automation bias risks
### Output Interpretation (Article 14(4)(d))
[ ] Interpretation tools provided
[ ] Feature importance or explanation methods available
[ ] Output confidence intervals or uncertainty ranges shown
[ ] Human-readable explanations for critical decisions
### Override and Stop Capabilities (Article 14(4)(e))
[ ] Override capability implemented
[ ] Ability to reverse or modify outputs
[ ] DECISION NOT TO USE option available
[ ] STOP BUTTON IMPLEMENTED - MANDATORY
### Dual Verification (Article 14(5))
[ ] Biometric identification systems: Two competent persons verification
[ ] Exception documented for law enforcement where disproportionate
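The anomaly-detection items above (Article 14(4)(b)) typically rest on statistical drift tests. A minimal Python sketch using a two-sample Kolmogorov-Smirnov test on one input feature follows; the p-value threshold is an assumed alerting policy, not a regulatory figure.

```python
# A minimal sketch of data-drift alerting for human oversight.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time distribution
live = rng.normal(loc=0.4, scale=1.0, size=1_000)       # recent production inputs

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:  # assumed alert threshold
    print(f"DRIFT ALERT: KS={stat:.3f}, p={p_value:.2e} - notify the human overseer")
```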
Stop Button Implementation Requirements
The "stop button" or equivalent procedure is explicitly mandated by Article 14(4)(e). This technical measure must:
- Halt the system safely: Stop operations without causing harm or data loss
- Be accessible: Available to human operators at all times during operation
- Preserve state: Maintain system state for investigation if needed
- Trigger notifications: Alert relevant personnel when activated
Example implementation approach:
STOP BUTTON TECHNICAL SPECIFICATION
===================================
1. ACCESSIBILITY
- Physical button in control interface OR
- Keyboard shortcut (documented to operators) OR
- Voice command for hands-free operation
2. BEHAVIOR ON ACTIVATION
- Immediate inference halt (within 100ms)
- Current input preservation for audit
- Log entry with timestamp and operator ID
- Notification to monitoring dashboard
3. STATE PRESERVATION
- Last valid output cached
- Input data preserved for 24 hours minimum
- Audit trail entry created
4. RECOVERY PROCEDURE
- Documented restart process
- Safety verification before resumption
- Incident report requirement
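A minimal Python sketch of this behavior follows: a shared stop event checked before each inference, with logging and input preservation on halt. The function and store names are illustrative assumptions; a production system would use a durable audit store rather than an in-memory list.

```python
# A minimal sketch of the stop-button behavior specified above.
import logging
import threading
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

stop_event = threading.Event()
preserved_inputs: list[dict] = []  # stand-in for a durable audit store

def press_stop(operator_id: str) -> None:
    stop_event.set()
    log.info("STOP activated by %s at %s", operator_id,
             datetime.now(timezone.utc).isoformat())

def guarded_inference(model_fn, payload: dict):
    if stop_event.is_set():
        preserved_inputs.append(payload)  # retain input for investigation
        raise RuntimeError("System halted by human overseer; input preserved")
    return model_fn(payload)

# Example: operator halts the system; the next request is refused and preserved.
press_stop("operator-17")
try:
    guarded_inference(lambda p: {"score": 0.9}, {"applicant_id": "A-42"})
except RuntimeError as e:
    print(e)
```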
Step 6: Satisfy Transparency Obligations
Article 50 establishes transparency obligations for AI systems interacting with persons or generating content. These requirements apply regardless of risk classification.
Transparency Requirements by System Type
| System Type | Transparency Requirement |
|---|---|
| AI interacting with persons | Disclose AI nature to users (unless obvious) |
| Synthetic content generators | Mark content as AI-generated in machine-readable format |
| Emotion recognition systems | Notify users that emotion recognition is operating |
| Biometric categorization | Notify users of categorization activity |
| Deep fakes | Disclose that content is manipulated or generated |
Synthetic Content Marking Implementation
For systems generating images, audio, or video:
## Synthetic Content Disclosure
### Machine-Readable Metadata
- Standard: [e.g., IPTC, XMP, C2PA]
- Field: [AI-generated flag]
- Value: [TRUE / confidence score]
### Visible Disclosure
- Overlay text for images/video
- Audio watermark for speech
- Metadata embedding for files
### Implementation Options
Option A: C2PA Content Credentials
- Industry standard for provenance
- Cryptographic attestation
- Browser/plugin verification
Option B: IPTC Photo Metadata
- Existing photo metadata standard
- "AI Generated" field
- Wide tool support
Option C: Custom Watermarking
- Visible or invisible watermark
- Proprietary or standard algorithm
- Detection tools required
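As one concrete illustration of machine-readable marking, the Python sketch below embeds an AI-generation flag in PNG text metadata via Pillow. The key names are assumptions, not a published standard; production systems would more likely adopt C2PA Content Credentials through a dedicated SDK.

```python
# A minimal sketch of machine-readable AI-generation marking in PNG metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), color="gray")  # stand-in for generated output

meta = PngInfo()
meta.add_text("ai_generated", "true")           # assumed field name
meta.add_text("generator", "example-model-v1")  # assumed field name
img.save("output.png", pnginfo=meta)

# Verification: reload the file and read the marking back.
print(Image.open("output.png").text.get("ai_generated"))  # -> "true"
```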
AI Interaction Disclosure
For chatbots, voice assistants, and interactive systems:
## AI Disclosure Implementation
### Disclosure Timing
- Before first interaction: Initial greeting
- Ongoing: Periodic reminders (every N interactions)
- On request: Clear response to "Are you AI?"
### Disclosure Methods
- Text: "I am an AI assistant..."
- Voice: Spoken disclosure at session start
- Visual: AI indicator in interface
### Exception Handling
When AI nature is obvious from context:
- Example: Gaming AI characters
- Example: Search result ranking
- Document rationale for non-disclosure
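A minimal Python sketch of these timing rules follows: disclose on the first turn, remind every N turns, and answer a direct "are you an AI?" question plainly. The reminder interval and phrasing are assumed policy values.

```python
# A minimal sketch of the AI-disclosure timing rules above.
DISCLOSURE = "I am an AI assistant, not a human."
REMINDER_EVERY = 20  # assumed interval

class DisclosingChatbot:
    def __init__(self, generate_reply):
        self.generate_reply = generate_reply
        self.turns = 0

    def respond(self, user_message: str) -> str:
        self.turns += 1
        if "are you an ai" in user_message.lower():
            return DISCLOSURE  # direct question gets a direct answer
        reply = self.generate_reply(user_message)
        if self.turns == 1 or self.turns % REMINDER_EVERY == 0:
            reply = f"{DISCLOSURE} {reply}"  # prepend periodic disclosure
        return reply

bot = DisclosingChatbot(lambda msg: "Here is some help with that.")
print(bot.respond("Hi"))              # first turn includes the disclosure
print(bot.respond("Are you an AI?"))  # explicit question answered plainly
```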
Step 7: Choose Your Conformity Assessment Path
High-risk AI systems must undergo conformity assessment before market placement. Two pathways are available.
Conformity Assessment Options
| Pathway | When to Use | Procedure | Cost | Timeline |
|---|---|---|---|---|
| Internal Control (Annex VI) | Annex III points 2-8; biometric systems applying harmonised standards | Self-assessment + declaration | Low | 2-4 weeks |
| Notified Body (Annex VII) | Biometric systems (Annex III, point 1) without fully applied harmonised standards | Third-party audit | High | 2-6 months |
Internal Control Procedure (Annex VI)
The default procedure for Annex III points 2-8; for biometric systems (Annex III, point 1), available only when your system applies harmonised standards published in the Official Journal:
- Verify harmonised standard coverage: Confirm published standards cover your system's functions
- Complete technical documentation: Annex IV requirements
- Implement quality management system: Ongoing compliance processes
- Draft EU declaration of conformity: Legal attestation of compliance
- Affix CE marking: Physical or digital conformity mark
- Register in EU database: For high-risk systems
Notified Body Procedure (Annex VII)
Required when:
- Your system is a biometric system under Annex III, point 1, and harmonised standards do not exist or are not fully applied
- Your system falls under Annex I product legislation mandating third-party assessment (sectoral procedures apply)
Note: for Annex III points 2-8, Article 43(2) prescribes the internal control procedure without notified body involvement.
Process:
- Select notified body: From EU database of accredited organizations
- Submit technical documentation: Annex IV package
- Undergo audit: Quality management system review
- Receive certificate: Conformity certificate from notified body
- Affix CE marking with body number: Include notified body identification
Timeline for Conformity Assessment
| Milestone | Recommended Timeline | Deadline |
|---|---|---|
| Risk classification complete | Now | - |
| Gap analysis of requirements | 4-6 weeks | - |
| Technical documentation draft | 8-12 weeks | - |
| Quality management implementation | 12-16 weeks | - |
| Conformity assessment initiation | 16-20 weeks | - |
| Assessment completion | 20-24 weeks | August 2, 2026 |
| EU registration | Before market placement | August 2, 2026 |
Step 8: Align with Existing Governance Frameworks
Organizations with existing AI governance frameworks can leverage them for EU AI Act compliance, but must understand the limitations.
Framework Alignment Matrix
| Dimension | EU AI Act | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| Legal Status | Mandatory in EU | Voluntary | Voluntary certification |
| Geographic Scope | EU market (including non-EU providers placing systems on it) | US (international adoption) | Global |
| Risk Classification | 4-tier pyramid | GOVERN/MAP/MEASURE/MANAGE | PDCA cycle |
| Prohibited Practices | Yes - specific list | No categories | No specific list |
| Conformity Assessment | Internal or notified body | Self-assessment | Certification audit |
| Penalties | Up to 35M EUR / 7% turnover | None | Market-based |
| Presumption of Conformity | Harmonised standards only | N/A | Supports but does not confer |
Strategic Framework Integration
RECOMMENDED APPROACH:
======================
1. USE ISO 42001 FOR:
- Organizational governance structure
- AI management system establishment
- Continuous improvement processes
- Audit readiness documentation
2. USE NIST AI RMF FOR:
- Risk documentation methodology
- Stakeholder engagement patterns
- Cross-functional governance
- Risk communication frameworks
3. SUPPLEMENT WITH EU-SPECIFIC:
- Annex IV technical documentation
- Article 14 human oversight measures
- Article 50 transparency requirements
- Conformity assessment procedures
4. MONITOR FOR:
- Harmonised standards publication
- Presumption of conformity pathway
- Sector-specific guidance
What Existing Frameworks Do NOT Provide
- Prohibited practice categories
- Mandatory compliance deadlines
- EU conformity assessment
- Legal presumption of conformity
Only harmonised standards published in the Official Journal provide presumption of conformity with EU AI Act requirements.
Common Mistakes & Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| "Our ISO 42001 certification means we're compliant" | Misunderstanding of presumption of conformity | ISO 42001 supports compliance but does not automatically satisfy the EU AI Act. Supplement with Annex IV documentation. |
| "We don't need to worry until August 2026" | Missing prohibited practices deadline | Article 5 has been enforceable since February 2, 2025. Audit immediately for prohibited uses. |
| "Our system doesn't interact with humans so no transparency needed" | Overlooking synthetic content marking | Content generation systems require marking even without human interaction. |
| "We're an SME so requirements don't apply" | Misunderstanding SME provisions | SMEs get simplified documentation forms and lower penalty caps, but all high-risk requirements still apply. |
| "Our system just detects patterns, not high-risk" | Missing profiling exception | Pattern detection that involves profiling is always high-risk regardless of other conditions. |
| "We'll just use the internal control pathway" | No harmonised standards available | Check whether harmonised standards covering your system are published. Biometric systems without them require a notified body; other Annex III systems must self-assess against each requirement. |
Case Studies: Industries Affected by Prohibited Practices
Case Study 1: HR Tech Platform with Emotion Recognition
Company: Mid-sized recruitment technology provider serving EU enterprise clients
System: Video interview analysis platform using facial expression analysis to assess candidate emotions during interviews
Issue: Emotion recognition in employment context prohibited since February 2, 2025
Actions Taken:
- Immediately disabled emotion inference module for EU clients
- Retained facial recognition for identity verification only (with consent)
- Documented system modification with compliance rationale
- Notified affected clients of feature removal
- Retained emotion analysis feature for non-EU markets with user consent
Compliance Status: Now compliant; emotion recognition removed from EU deployment
Lessons:
- Geographic feature gating may be necessary
- Document all system modifications with compliance rationale
- Client communication is essential for trust maintenance
Case Study 2: EdTech Student Engagement Monitoring
Company: Educational technology startup providing classroom analytics
System: AI-powered student attention tracking using webcam feeds to measure engagement
Issue: Emotion recognition in educational institutions prohibited
Actions Taken:
- Pivoted to privacy-preserving engagement metrics
- Replaced emotion inference with voluntary attention indicators (student clicks, responses)
- Added transparency overlays showing when monitoring is active
- Implemented consent mechanisms for all biometric data collection
- Retained academic performance analytics (non-prohibited)
Compliance Status: Transformed to transparency-risk system with consent mechanisms
Lessons:
- Business model pivots may be necessary
- Consent mechanisms become critical for remaining biometric features
- Transparency requirements still apply
Case Study 3: Facial Recognition Service Provider
Company: Security technology vendor offering facial recognition databases
System: Facial image collection from public web sources and CCTV for identity verification services
Issue: Untargeted facial scraping for database creation prohibited
Actions Taken:
- Ceased all untargeted web scraping activities
- Shifted to opt-in database model with explicit consent
- Implemented target-specific collection with documented justification
- Added data governance controls for collection provenance
- Established deletion procedures for previously scraped data
Compliance Status: Operating under consent-based model with documented data provenance
Lessons:
- Data collection practices may require fundamental restructuring
- Provenance documentation becomes essential
- Legacy data may need deletion or consent retrofits
Compliance Timeline and Action Plan
Key Deadlines
| Date | Requirement | Action Needed |
|---|---|---|
| February 2, 2025 | Prohibited practices enforceable | Audit and discontinue prohibited systems |
| August 2, 2025 | GPAI model obligations, governance structures, penalties regime | Stand up governance; GPAI providers begin compliance |
| February 2, 2026 | Commission high-risk classification guidelines | Review guidance for classification support |
| August 2, 2026 | High-risk system requirements, transparency obligations, conformity assessment | Complete documentation, disclosure mechanisms, and assessment |
| August 2, 2027 | Annex I product-related systems; pre-existing GPAI models | Annex I providers comply; earlier GPAI models complete compliance |
Prioritized Action Plan
IMMEDIATE (Weeks 1-4):
======================
[ ] Complete prohibited practices audit
[ ] Identify all AI systems in deployment/pipeline
[ ] Classify each system using decision tree
[ ] Document classification rationale
[ ] Cease prohibited practices immediately
SHORT-TERM (Months 1-6):
=======================
[ ] Implement transparency mechanisms
[ ] Draft technical documentation for high-risk systems
[ ] Establish AI governance committee
[ ] Begin conformity assessment preparation
[ ] Monitor harmonised standards publications
MEDIUM-TERM (Months 6-12):
=========================
[ ] Complete high-risk documentation
[ ] Implement human oversight measures
[ ] Establish continuous risk monitoring
[ ] Initiate conformity assessment (if high-risk)
[ ] Train personnel on compliance requirements
LONG-TERM (Months 12-18):
========================
[ ] Complete conformity assessment
[ ] Register in EU database
[ ] Establish compliance monitoring program
[ ] Plan for ongoing documentation updates
[ ] Prepare for regulatory audits
What Others Missed: Technical Implementation Insights
While existing policy summaries focus on legal interpretation, this guide highlights three technical implementation insights that compliance officers and AI developers often overlook. First, the derogation conditions for Annex III high-risk classification have a profiling carve-out that overrides all other conditions - systems performing profiling remain high-risk regardless of narrow procedural task designations. Second, the "stop button" requirement (Article 14(4)(e)) is not optional documentation but a mandatory technical capability that must halt the system safely and promptly. Third, harmonised standards conferring presumption of conformity are still pending publication, so internal control self-assessments must demonstrate compliance requirement by requirement until EU standardisation bodies complete their work - an expected timeline extending into late 2026.
Key Implication: Organizations currently using ISO 42001 or NIST AI RMF as their primary compliance framework must supplement these with Annex IV technical documentation and cannot claim presumption of conformity until harmonised standards appear in the Official Journal - biometric systems under Annex III, point 1 should plan for notified body assessment in the interim.
Summary & Next Steps
This guide has provided a comprehensive framework for EU AI Act compliance:
Key Takeaways:
- Prohibited practices are already enforceable - immediate action required
- Use the decision tree to classify all AI systems systematically
- Derogation conditions exist for Annex III systems - document your assessment
- Technical documentation must address all Annex IV elements
- Human oversight requires a functional "stop button" - non-negotiable
- Existing frameworks (ISO 42001, NIST RMF) support but do not satisfy EU requirements
- Conformity assessment pathway depends on harmonised standard availability
Recommended Next Steps:
- Conduct immediate audit for prohibited practices
- Complete classification assessment for all AI systems
- Identify high-risk systems requiring conformity assessment
- Begin technical documentation drafting
- Establish governance structure for ongoing compliance
Related Resources:
- EU AI Act Official Explorer - Full regulation text with article analysis
- ISO/IEC 42001:2023 - AI Management System standard
- NIST AI Risk Management Framework - US framework for risk governance
Sources
- EU AI Act Official Explorer - Official EU AI Act reference site
- EUR-Lex Official Journal - Regulation (EU) 2024/1689 definitive text
- Article 5: Prohibited AI Practices - Complete prohibited practices text
- Article 6: Classification Rules - High-risk classification criteria
- Article 9: Risk Management System - Continuous risk management requirements
- Article 11: Technical Documentation - Pre-market documentation requirements
- Article 13: Transparency - Information provision requirements
- Article 14: Human Oversight - Oversight design requirements
- Article 15: Accuracy, Robustness, Cybersecurity - Performance requirements
- Article 43: Conformity Assessment - Assessment procedures
- Article 50: Transparency Obligations - User disclosure requirements
- Article 99: Penalties - Administrative fine structure
- Article 113: Implementation Timeline - Entry into force dates
- ISO/IEC 42001:2023 - AI Management Systems standard
- NIST AI Risk Management Framework - US AI risk governance framework
EU AI Act Compliance Guide: Classifying and Managing AI System Risks
A practical framework for classifying AI systems under the EU AI Act risk pyramid, with decision trees, documentation templates, and technical compliance checklists for the February 2025 prohibited practices deadline.
Who This Guide Is For
- Audience: AI product managers, compliance officers, enterprise architects, and developers deploying AI systems in EU markets or serving EU customers
- Prerequisites: Basic understanding of AI systems and familiarity with regulatory compliance concepts
- Estimated Time: 45-60 minutes for complete classification and initial compliance planning
Overview
This guide provides a step-by-step framework for classifying AI systems under the EU AI Actβs risk pyramid and implementing technical compliance measures. You will learn:
- How to use a decision tree to classify any AI system into one of four risk categories
- Which AI practices are already prohibited (enforceable since February 2, 2025)
- Technical documentation requirements for high-risk systems
- Human oversight implementation including mandatory βstop buttonβ requirements
- Conformity assessment pathways and timeline-based compliance planning
The EU AI Act establishes a risk-based regulatory framework with enforcement deadlines spanning from February 2025 to August 2027. Organizations deploying AI systems in EU markets face penalties up to 35 million EUR or 7% of global annual turnover for prohibited practice violations.
Key Facts
- Who: EU member states, organizations deploying AI systems in EU markets, AI providers and deployers globally
- What: Regulation (EU) 2024/1689 establishes 4-tier risk classification with enforcement penalties up to 35M EUR or 7% turnover
- When: Prohibited practices enforceable since February 2, 2025; high-risk systems deadline August 2, 2026
- Impact: HR tech, EdTech, facial recognition, medical devices, vehicles, employment screening, law enforcement AI
Step 1: Determine Your Risk Classification Using the Decision Tree
The EU AI Act uses a 4-tier risk pyramid. Classification determines your compliance obligations, from complete bans to voluntary best practices.
The Four Risk Tiers
| Risk Tier | Enforcement Status | Key Requirement | Deadline |
|---|---|---|---|
| Prohibited | Criminal/Administrative penalties | Complete ban | Feb 2, 2025 (ENFORCEABLE) |
| High-Risk | Conformity assessment required | Full compliance with Articles 9-15 | Aug 2, 2026 |
| Transparency | Disclosure obligations | User notification requirements | Aug 2, 2025 |
| Minimal | Voluntary codes of conduct | Best practices encouraged | No deadline |
Classification Decision Tree
Use this decision flow to classify your AI system:
START: What is your AI system's primary function?
STEP 1: PROHIBITED PRACTICES CHECK
=========================================================
Does your system perform ANY of the following?
- Infer emotions in workplace or educational settings
- Create facial recognition databases via untargeted scraping
- Implement social scoring with detrimental treatment
- Predict criminal risk solely from profiling
- Categorize persons by biometrics for race/politics/religion/orientation
- Use subliminal techniques to distort behavior beyond consciousness
- Exploit vulnerabilities (age, disability, socio-economic status)
- Real-time biometric ID in public spaces (limited exceptions)
YES -> STOP. Classification: PROHIBITED
System must not be placed on market or put into service.
NO -> Proceed to Step 2
STEP 2: HIGH-RISK ANNEX III CHECK
=========================================================
Is your system listed in Annex III?
- Biometric identification and categorization
- Critical infrastructure management
- Education and vocational training
- Employment, worker management, self-employment
- Access to essential services (credit, insurance, benefits)
- Law enforcement
- Migration, asylum, border control
- Administration of justice and democratic processes
NO -> Proceed to Step 3
YES -> Check DEROGATION CONDITIONS:
Does your system:
A) Perform narrow procedural tasks?
B) Improve the result of human activity?
C) Detect decision-making patterns without replacing humans?
D) Perform preparatory assessment tasks?
AND: Does NOT perform profiling?
ALL CONDITIONS MET -> Classification: NON-HIGH-RISK
Document derogation assessment.
ANY CONDITION NOT MET -> Classification: HIGH-RISK
Proceed to conformity assessment.
STEP 3: ANNEX I PRODUCT SAFETY CHECK
=========================================================
Is your AI system a safety component of products covered by:
- Machinery Regulation
- Medical Devices Regulation
- Radio Equipment Directive
- Toy Safety Directive
- Lifts Directive
- Other sectoral legislation listed in Annex I
NO -> Proceed to Step 4
YES -> Does the product require third-party conformity assessment?
YES -> Classification: HIGH-RISK (Product-related)
Conformity assessment via sectoral legislation.
NO -> Proceed to Step 4
STEP 4: TRANSPARENCY RISK CHECK
=========================================================
Does your system:
- Interact directly with persons (chatbots, voice assistants)?
- Generate synthetic content (images, audio, video, text)?
- Perform emotion recognition (outside workplace/education)?
- Perform biometric categorization?
- Create deep fakes?
YES -> Classification: TRANSPARENCY-RISK
Disclosure requirements under Article 50 apply.
NO -> Classification: MINIMAL-RISK
Voluntary codes of conduct available.
END
Common Classification Mistakes to Avoid
| Mistake | Correction |
|---|---|
| Assuming all biometric systems are prohibited | Only specific practices are prohibited (untargeted facial scraping, emotion recognition in workplace/education). Many biometric systems are high-risk, not prohibited. |
| Over-classifying as high-risk without checking derogation | Annex III systems may claim non-high-risk status if they meet derogation conditions. Document your assessment. |
| Missing the February 2025 deadline | Article 5 prohibited practices are already enforceable. Organizations with affected systems must cease operations immediately. |
Step 2: Identify Prohibited Practices (Already Enforceable)
The EU AI Actβs prohibited practices took effect on February 2, 2025. Organizations currently using these systems face immediate enforcement risk.
Complete List of Prohibited AI Practices
1. Emotion Recognition in Workplace and Education
What is banned: AI systems inferring emotions from facial expressions, voice patterns, or other biometric signals in employment and educational contexts.
Technical scope:
- Candidate screening based on emotional responses
- Employee engagement monitoring through affect analysis
- Student attention or emotion tracking in classrooms
- Performance evaluation based on emotional indicators
Exception: Medical or safety purposes (e.g., detecting driver fatigue, therapeutic applications with consent).
Affected industries: HR tech platforms, EdTech applications, workplace analytics tools.
2. Untargeted Facial Scraping
What is banned: Automated collection of facial images from the internet or CCTV footage without a specific target, for the purpose of creating or expanding facial recognition databases.
Technical scope:
- Web scraping of social media profile images
- CCTV footage harvesting without specific investigation
- Bulk collection of biometric data from public sources
Affected industries: Facial recognition service providers, security technology vendors, identity verification platforms.
3. Social Scoring Systems
What is banned: AI systems that classify persons based on social behavior or personality traits over time, leading to detrimental treatment in unrelated contexts.
Technical scope:
- Scoring systems that aggregate behavior across contexts
- Treatment decisions based on scores from unrelated data
- Systems that create trustworthiness ratings from social media activity
Affected industries: Credit scoring extensions, insurance risk assessment, tenant screening.
4. Predictive Policing for Criminal Risk
What is banned: AI systems predicting criminal risk solely from profiling or personality traits, without supporting factual evidence.
Technical scope:
- Risk assessment based solely on demographic or behavioral profiles
- Predictive models without concrete criminal indicators
- Profiling-based threat scoring without judicial oversight
5. Biometric Categorization for Protected Characteristics
What is banned: AI systems categorizing persons by biometric data to infer race, political opinions, trade union membership, religious beliefs, sexual orientation.
Exception: Law enforcement filtering of lawfully acquired datasets.
6. Real-Time Remote Biometric Identification in Public Spaces
What is banned: Real-time biometric identification in public spaces for law enforcement purposes.
Limited exceptions require:
- Judicial authorization or equivalent
- Strict necessity for: missing persons search, terrorism threat prevention, serious crime investigation (4+ year custodial sentence)
- Fundamental rights impact assessment
- EU database registration
Immediate Action Checklist for Prohibited Practices
- Audit all AI systems for emotion recognition capabilities in HR/education contexts
- Review data collection practices for facial scraping activities
- Assess social scoring mechanisms in customer/employee evaluation systems
- Document any biometric categorization based on protected characteristics
- Discontinue prohibited systems or modify for compliant use cases
Step 3: Assess High-Risk Classification and Derogation Options
If your system is not prohibited but falls under Annex III categories, you must determine if high-risk requirements apply or if derogation conditions are met.
Annex III High-Risk Categories
| Category | Systems Included | Derogation Possible? |
|---|---|---|
| Biometric ID | Remote biometric identification, biometric categorization | Limited - profiling always high-risk |
| Critical Infrastructure | Energy, transport, water supply management systems | Yes - if narrow procedural task |
| Education | Student admission, learning outcomes assessment, proctoring | Yes - if improves human results |
| Employment | Recruitment screening, task allocation, performance evaluation | Yes - if pattern detection only |
| Essential Services | Creditworthiness, insurance pricing, benefit eligibility | Limited - profiling always high-risk |
| Law Enforcement | Lie detection, emotion assessment, risk assessment, DNA analysis | No |
| Migration | Border control, visa processing, asylum assessment | Limited |
| Justice | Court rulings, case law analysis, evidence evaluation | No - judicial independence |
Derogation Assessment Framework
For Annex III systems, document this assessment before claiming non-high-risk status:
DEROGATION ASSESSMENT RECORD
================================
System Name: [Your AI System]
Annex III Category: [e.g., Employment - Article 6(2)]
Assessment Date: [YYYY-MM-DD]
Assessor: [Name, Role]
DEROGATION CONDITION CHECK:
[ ] Condition A: Narrow Procedural Task
Does the system perform narrow procedural tasks without
substantially influencing decision outcomes?
Evidence: [Describe task scope, decision impact level]
[ ] Condition B: Improves Human Activity Results
Does the system merely improve the result of a human
activity previously carried out without AI?
Evidence: [Describe human baseline, improvement metrics]
[ ] Condition C: Detects Patterns Without Replacing Decisions
Does the system detect decision-making patterns or provide
auxiliary information without replacing human decision-making?
Evidence: [Describe decision flow, human role in final decision]
[ ] Condition D: Preparatory Assessment Tasks
Does the system perform preparatory tasks for assessments
relevant to Annex III use cases?
Evidence: [Describe preparatory vs. final assessment role]
CRITICAL CHECK:
[ ] Profiling Status: Does the system perform profiling?
YES -> Derogation NOT available. System is HIGH-RISK.
NO -> Derogation may apply if any condition A-D is met.
CONCLUSION:
[ ] NON-HIGH-RISK: Derogation conditions met
Document and retain assessment record.
[ ] HIGH-RISK: Derogation not applicable
Proceed to conformity assessment requirements.
When Profiling Overrides Derogation
Profiling is defined as automated processing of personal data to evaluate certain personal aspects. If your Annex III system performs profiling, derogation is not available regardless of other conditions.
Systems that profile:
- Behavioral scoring for hiring decisions
- Learning style categorization for student placement
- Risk assessment based on personal characteristics
- Creditworthiness evaluation from behavioral data
Step 4: Implement Technical Documentation for High-Risk Systems
High-risk AI systems require comprehensive technical documentation before market placement. This documentation must be maintained throughout the system lifecycle.
Annex IV Documentation Template
Create a technical documentation file containing these elements:
1. General System Description
## System Overview
### Provider Information
- Company Name: [Legal entity name]
- Address: [Registered address]
- Contact: [Compliance contact]
### System Identity
- System Name: [Product/service name]
- Version: [Current version number]
- Intended Purpose: [Specific use case description]
- Target Users: [Who will operate the system]
- End Users: [Who will be affected by outputs]
### System Architecture
- Components: [List major components]
- Integration Points: [How system connects to other systems]
- Data Flow Diagram: [Attach or reference]
### Hardware Requirements
- Compute: [GPU, CPU specifications]
- Memory: [RAM requirements]
- Storage: [Data storage needs]
- Network: [Connectivity requirements]
### Expected Lifetime
- Planned operational period: [Years]
- Update frequency: [Quarterly, annual, etc.]
- End-of-life plan: [Decommissioning approach]
2. Development Process Documentation
## Development Process
### Development Team
- Project Lead: [Name, qualifications]
- Technical Leads: [Names, roles]
- Compliance Responsible: [Name, contact]
### Methodology
- Development Framework: [Agile, waterfall, etc.]
- Quality Management System: [ISO 9001, etc.]
- AI-specific methodology: [MLOps pipeline details]
### Version History
| Version | Date | Changes | Validation Status |
|---------|------|---------|-------------------|
| 1.0.0 | YYYY-MM-DD | Initial release | Validated |
| 1.1.0 | YYYY-MM-DD | [Changes] | [Status] |
### Third-Party Components
| Component | Version | Supplier | License |
|-----------|---------|----------|---------|
| [Name] | [Version] | [Supplier] | [License type] |
3. Risk Management System Documentation
## Risk Management (Article 9)
### Risk Identification Process
- Methodology: [How risks are identified]
- Frequency: [Continuous, periodic, event-triggered]
- Stakeholders involved: [Roles participating]
### Risk Estimation
| Risk ID | Description | Likelihood | Severity | Risk Score |
|---------|-------------|------------|----------|------------|
| R001 | [Risk description] | [1-5] | [1-5] | [L x S] |
### Risk Evaluation Criteria
- Acceptable risk threshold: [Definition]
- Risk tolerance: [Organizational tolerance]
### Mitigation Measures
| Risk ID | Mitigation | Residual Risk | Verification |
|---------|------------|---------------|--------------|
| R001 | [Measure] | [Score] | [Test method] |
### Continuous Monitoring
- Metrics tracked: [List metrics]
- Alert thresholds: [Threshold values]
- Response procedures: [Actions on alert]
4. Data Governance Documentation
## Data Governance (Article 10)
### Training Data
- Source: [Data origin]
- Collection method: [How data was gathered]
- Size: [Volume, number of records]
- Time period: [Date range]
- Bias analysis: [Known biases and mitigation]
### Data Quality Measures
| Criterion | Method | Result |
|-----------|--------|--------|
| Relevance | [Method] | [Pass/Fail] |
| Completeness | [Method] | [Pass/Fail] |
| Representativeness | [Method] | [Pass/Fail] |
### Personal Data Processing
- Lawful basis: [GDPR Article 6 basis]
- Data Protection Impact Assessment: [Reference or N/A]
- Data subject rights procedures: [Process description]
### Validation and Test Data
- Separation from training: [How separated]
- Size: [Volume]
- Representativeness: [Coverage assessment]
5. Performance and Accuracy Documentation
## Performance Metrics (Article 15)
### Accuracy Metrics
| Metric | Training Set | Validation Set | Test Set |
|--------|--------------|----------------|----------|
| Accuracy | [Value] | [Value] | [Value] |
| Precision | [Value] | [Value] | [Value] |
| Recall | [Value] | [Value] | [Value] |
| F1-Score | [Value] | [Value] | [Value] |
### Performance Across Demographic Groups
| Group | Accuracy | False Positive Rate | False Negative Rate |
|-------|----------|---------------------|---------------------|
| [Group A] | [Value] | [Value] | [Value] |
| [Group B] | [Value] | [Value] | [Value] |
### Robustness Testing
- Adversarial test results: [Summary]
- Error handling tests: [Summary]
- Edge case coverage: [Percentage]
### Cybersecurity Measures
- Data poisoning prevention: [Controls implemented]
- Model extraction protection: [Controls implemented]
- Access controls: [Authentication, authorization]
SME Documentation Simplification
Small and medium enterprises (fewer than 250 employees and annual turnover below 50M EUR or balance sheet below 43M EUR) may use the simplified technical documentation form provided by the European Commission. The simplified form reduces documentation burden while maintaining essential compliance information.
Step 5: Implement Human Oversight Measures
Human oversight is mandatory for all high-risk AI systems. Article 14 requires technical measures enabling natural persons to understand, monitor, and control the system.
Technical Human Oversight Requirements
## Human Oversight Implementation Checklist
### Understanding Capabilities (Article 14(4)(a))
[ ] System capabilities documentation provided to deployers
[ ] Known limitations clearly documented
[ ] Performance characteristics on different populations documented
[ ] Operating conditions specified
### Anomaly Detection (Article 14(4)(b))
[ ] Dysfunction alerts implemented
[ ] Unexpected performance warnings configured
[ ] Data drift detection enabled
[ ] Model degradation monitoring active
### Automation Bias Prevention (Article 14(4)(c))
[ ] Confidence scores displayed for all outputs
[ ] Uncertainty indicators visible
[ ] Clear distinction between recommendations and decisions
[ ] Training materials address automation bias risks
### Output Interpretation (Article 14(4)(d))
[ ] Interpretation tools provided
[ ] Feature importance or explanation methods available
[ ] Output confidence intervals or uncertainty ranges shown
[ ] Human-readable explanations for critical decisions
### Override and Stop Capabilities (Article 14(4)(e))
[ ] Override capability implemented
[ ] Ability to reverse or modify outputs
[ ] DECISION NOT TO USE option available
[ ] STOP BUTTON IMPLEMENTED - MANDATORY
### Dual Verification (Article 14(5))
[ ] Biometric identification systems: Two competent persons verification
[ ] Exception documented for law enforcement where disproportionate
Stop Button Implementation Requirements
The βstop buttonβ or equivalent procedure is explicitly mandated by Article 14(4)(e). This technical measure must:
- Halt the system safely: Stop operations without causing harm or data loss
- Be accessible: Available to human operators at all times during operation
- Preserve state: Maintain system state for investigation if needed
- Trigger notifications: Alert relevant personnel when activated
Example implementation approach:
STOP BUTTON TECHNICAL SPECIFICATION
===================================
1. ACCESSIBILITY
- Physical button in control interface OR
- Keyboard shortcut (documented to operators) OR
- Voice command for hands-free operation
2. BEHAVIOR ON ACTIVATION
- Immediate inference halt (within 100ms)
- Current input preservation for audit
- Log entry with timestamp and operator ID
- Notification to monitoring dashboard
3. STATE PRESERVATION
- Last valid output cached
- Input data preserved for 24 hours minimum
- Audit trail entry created
4. RECOVERY PROCEDURE
- Documented restart process
- Safety verification before resumption
- Incident report requirement
Step 6: Satisfy Transparency Obligations
Article 50 establishes transparency obligations for AI systems interacting with persons or generating content. These requirements apply regardless of risk classification.
Transparency Requirements by System Type
| System Type | Transparency Requirement |
|---|---|
| AI interacting with persons | Disclose AI nature to users (unless obvious) |
| Synthetic content generators | Mark content as AI-generated in machine-readable format |
| Emotion recognition systems | Notify users that emotion recognition is operating |
| Biometric categorization | Notify users of categorization activity |
| Deep fakes | Disclose that content is manipulated or generated |
Synthetic Content Marking Implementation
For systems generating images, audio, or video:
```text
## Synthetic Content Disclosure

### Machine-Readable Metadata
- Standard: [e.g., IPTC, XMP, C2PA]
- Field: [AI-generated flag]
- Value: [TRUE / confidence score]

### Visible Disclosure
- Overlay text for images/video
- Audio watermark for speech
- Metadata embedding for files

### Implementation Options
Option A: C2PA Content Credentials
- Industry standard for provenance
- Cryptographic attestation
- Browser/plugin verification

Option B: IPTC Photo Metadata
- Existing photo metadata standard
- "AI Generated" field
- Wide tool support

Option C: Custom Watermarking
- Visible or invisible watermark
- Proprietary or standard algorithm
- Detection tools required
```
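As a minimal sketch of machine-readable marking, the example below uses Pillow's PNG text chunks as a simple stand-in for a full C2PA or IPTC pipeline; the `ai_generated` key is a hypothetical field name chosen for illustration, not taken from any published standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(in_path: str, out_path: str, model_id: str) -> None:
    """Embed a machine-readable AI-generated flag in a PNG's text metadata.

    Illustrative only: production systems should prefer C2PA Content
    Credentials or IPTC fields over ad hoc keys like these.
    """
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")       # hypothetical flag key
    meta.add_text("generator_model", model_id)  # provenance hint for audits
    image.save(out_path, pnginfo=meta)          # out_path should end in .png

def is_marked(path: str) -> bool:
    """Read the flag back (PNG text chunks surface via the .text mapping)."""
    return getattr(Image.open(path), "text", {}).get("ai_generated") == "true"
```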
### AI Interaction Disclosure
For chatbots, voice assistants, and interactive systems:
```text
## AI Disclosure Implementation

### Disclosure Timing
- Before first interaction: Initial greeting
- Ongoing: Periodic reminders (every N interactions)
- On request: Clear response to "Are you AI?"

### Disclosure Methods
- Text: "I am an AI assistant..."
- Voice: Spoken disclosure at session start
- Visual: AI indicator in interface

### Exception Handling
When AI nature is obvious from context:
- Example: Gaming AI characters
- Example: Search result ranking
- Document rationale for non-disclosure
```
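A minimal sketch of the disclosure timing rules, assuming a text chatbot whose backend is any `str -> str` callable; the reminder cadence and the regex are illustrative choices, since Article 50 does not prescribe a specific interval.

```python
import re

AI_DISCLOSURE = "I am an AI assistant; you are interacting with an automated system."
REMINDER_EVERY = 20  # hypothetical cadence: the Act sets no specific number

class DisclosingAssistant:
    """Wrap a chat backend so Article 50-style disclosure is never skipped."""

    def __init__(self, backend):
        self.backend = backend  # assumed: callable taking and returning str
        self.turns = 0

    def reply(self, user_message: str) -> str:
        self.turns += 1
        # A direct question about AI nature gets a clear, unambiguous answer.
        if re.search(r"are you (an? )?(ai|bot|robot|human)", user_message, re.I):
            return AI_DISCLOSURE
        answer = self.backend(user_message)
        # Disclose at session start and periodically thereafter.
        if self.turns == 1 or self.turns % REMINDER_EVERY == 0:
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

# Usage with a trivial echo backend:
bot = DisclosingAssistant(lambda msg: f"You said: {msg}")
print(bot.reply("Hello"))        # includes disclosure (first interaction)
print(bot.reply("Are you AI?"))  # clear affirmative disclosure
```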
## Step 7: Choose Your Conformity Assessment Path
High-risk AI systems must undergo conformity assessment before market placement. Two pathways are available.
### Conformity Assessment Options
| Pathway | When to Use | Procedure | Cost | Timeline |
|---|---|---|---|---|
| Internal Control (Annex VI) | System complies with harmonised standards | Self-assessment + declaration | Low | 2-4 weeks |
| Notified Body (Annex VII) | No harmonised standard or specific cases | Third-party audit | High | 2-6 months |
### Internal Control Procedure (Annex VI)
Available when your system complies with harmonised standards published in the Official Journal:
1. Verify harmonised standard coverage: Confirm published standards cover your system's functions
2. Complete technical documentation: Meet Annex IV requirements
3. Implement quality management system: Establish ongoing compliance processes
4. Draft EU declaration of conformity: Provide legal attestation of compliance
5. Affix CE marking: Apply the physical or digital conformity mark
6. Register in EU database: Required for high-risk systems
### Notified Body Procedure (Annex VII)
Required when:
- No harmonised standard covers your system
- You choose not to apply available harmonised standards
- The system is a law enforcement biometric system (third-party assessment is mandatory)
Process:
1. Select notified body: Choose from the EU database of accredited organizations
2. Submit technical documentation: Provide the Annex IV package
3. Undergo audit: Quality management system review
4. Receive certificate: Conformity certificate issued by the notified body
5. Affix CE marking with body number: Include the notified body identification number
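The pathway choice reduces to a small decision function. The sketch below condenses the conditions listed above; the parameter names are illustrative, and borderline cases should still go to legal review.

```python
from enum import Enum

class Pathway(Enum):
    INTERNAL_CONTROL = "Annex VI: internal control (self-assessment)"
    NOTIFIED_BODY = "Annex VII: third-party notified body assessment"

def choose_pathway(harmonised_standards_cover_system: bool,
                   provider_applies_standards: bool,
                   law_enforcement_biometric: bool) -> Pathway:
    """Mirror the Annex VI / Annex VII selection logic described above."""
    # Law enforcement biometric systems: notified body assessment is mandatory.
    if law_enforcement_biometric:
        return Pathway.NOTIFIED_BODY
    # Internal control is only open when published harmonised standards
    # cover the system AND the provider actually applies them.
    if harmonised_standards_cover_system and provider_applies_standards:
        return Pathway.INTERNAL_CONTROL
    return Pathway.NOTIFIED_BODY

# Usage: a recruitment-screening system with no applicable harmonised standard.
print(choose_pathway(False, False, False).value)  # -> Annex VII
```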
### Timeline for Conformity Assessment
| Milestone | Recommended Timeline | Deadline |
|---|---|---|
| Risk classification complete | Now | - |
| Gap analysis of requirements | 4-6 weeks | - |
| Technical documentation draft | 8-12 weeks | - |
| Quality management implementation | 12-16 weeks | - |
| Conformity assessment initiation | 16-20 weeks | - |
| Assessment completion | 20-24 weeks | August 2, 2026 |
| EU registration | Before market placement | August 2, 2026 |
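As a small sketch, the schedule above can be turned into a runway check against the August 2, 2026 deadline; the 24-week figure is the upper bound of the recommended plan, and `PLAN_WEEKS` should be adjusted to your own gap analysis.

```python
from datetime import date

DEADLINE = date(2026, 8, 2)  # high-risk conformity deadline
PLAN_WEEKS = 24              # upper bound of the recommended schedule above

def runway(today: date) -> str:
    """Report remaining slack against the recommended 24-week plan."""
    weeks_left = (DEADLINE - today).days / 7
    slack = weeks_left - PLAN_WEEKS
    if slack >= 0:
        return f"{weeks_left:.0f} weeks to deadline: {slack:.0f} weeks of slack"
    return f"{weeks_left:.0f} weeks to deadline: plan compressed by {-slack:.0f} weeks"

print(runway(date.today()))
```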
## Step 8: Align with Existing Governance Frameworks
Organizations with existing AI governance frameworks can leverage them for EU AI Act compliance, but must understand the limitations.
### Framework Alignment Matrix
| Dimension | EU AI Act | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| Legal Status | Mandatory in EU | Voluntary | Voluntary certification |
| Geographic Scope | EU Member States | US (international adoption) | Global |
| Risk Classification | 4-tier pyramid | GOVERN/MAP/MEASURE/MANAGE | PDCA cycle |
| Prohibited Practices | Yes - specific list | No categories | No specific list |
| Conformity Assessment | Internal or notified body | Self-assessment | Certification audit |
| Penalties | Up to 35M EUR / 7% turnover | None | Market-based |
| Presumption of Conformity | Harmonised standards only | N/A | Supports but does not confer |
### Strategic Framework Integration

```text
RECOMMENDED APPROACH:
=====================
1. USE ISO 42001 FOR:
   - Organizational governance structure
   - AI management system establishment
   - Continuous improvement processes
   - Audit readiness documentation

2. USE NIST AI RMF FOR:
   - Risk documentation methodology
   - Stakeholder engagement patterns
   - Cross-functional governance
   - Risk communication frameworks

3. SUPPLEMENT WITH EU-SPECIFIC MEASURES:
   - Annex IV technical documentation
   - Article 14 human oversight measures
   - Article 50 transparency requirements
   - Conformity assessment procedures

4. MONITOR FOR:
   - Harmonised standards publication
   - Presumption of conformity pathway
   - Sector-specific guidance
```
### What Existing Frameworks Do NOT Provide
- Prohibited practice categories
- Mandatory compliance deadlines
- EU conformity assessment
- Legal presumption of conformity
Only harmonised standards published in the Official Journal provide presumption of conformity with EU AI Act requirements.
## Common Mistakes & Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| "Our ISO 42001 certification means we're compliant" | Misunderstanding of presumption of conformity | ISO 42001 supports compliance but does not automatically satisfy the EU AI Act. Supplement with Annex IV documentation. |
| "We don't need to worry until August 2026" | Missing the prohibited practices deadline | Article 5 has been enforceable since February 2, 2025. Audit immediately for prohibited uses. |
| "Our system doesn't interact with humans, so no transparency needed" | Overlooking synthetic content marking | Content generation systems require machine-readable marking even without human interaction. |
| "We're an SME, so the requirements don't apply" | Misunderstanding SME provisions | SMEs get simplified documentation forms and lower penalty caps, but all high-risk requirements still apply. |
| "Our system just detects patterns, so it's not high-risk" | Missing the profiling carve-out | Pattern detection involving profiling is always high-risk, regardless of the other derogation conditions. |
| "We'll just use the internal control pathway" | No harmonised standards available | Check whether harmonised standards for your system type have been published. If not, a notified body assessment may be required. |
## Case Studies: Industries Affected by Prohibited Practices
### Case Study 1: HR Tech Platform with Emotion Recognition
Company: Mid-sized recruitment technology provider serving EU enterprise clients
System: Video interview analysis platform using facial expression analysis to assess candidate emotions during interviews
Issue: Emotion recognition in employment context prohibited since February 2, 2025
Actions Taken:
- Immediately disabled emotion inference module for EU clients
- Retained facial recognition for identity verification only (with consent)
- Documented system modification with compliance rationale
- Notified affected clients of feature removal
- Retained emotion analysis feature for non-EU markets with user consent
Compliance Status: Now compliant; emotion recognition removed from EU deployment
Lessons:
- Geographic feature gating may be necessary
- Document all system modifications with compliance rationale
- Client communication is essential for trust maintenance
### Case Study 2: EdTech Student Engagement Monitoring
Company: Educational technology startup providing classroom analytics
System: AI-powered student attention tracking using webcam feeds to measure engagement
Issue: Emotion recognition in educational institutions prohibited
Actions Taken:
- Pivoted to privacy-preserving engagement metrics
- Replaced emotion inference with voluntary attention indicators (student clicks, responses)
- Added transparency overlays showing when monitoring is active
- Implemented consent mechanisms for all biometric data collection
- Retained academic performance analytics (non-prohibited)
Compliance Status: Transformed to transparency-risk system with consent mechanisms
Lessons:
- Business model pivots may be necessary
- Consent mechanisms become critical for remaining biometric features
- Transparency requirements still apply
### Case Study 3: Facial Recognition Service Provider
Company: Security technology vendor offering facial recognition databases
System: Facial image collection from public web sources and CCTV for identity verification services
Issue: Untargeted facial scraping for database creation prohibited
Actions Taken:
- Ceased all untargeted web scraping activities
- Shifted to opt-in database model with explicit consent
- Implemented target-specific collection with documented justification
- Added data governance controls for collection provenance
- Established deletion procedures for previously scraped data
Compliance Status: Operating under consent-based model with documented data provenance
Lessons:
- Data collection practices may require fundamental restructuring
- Provenance documentation becomes essential
- Legacy data may need deletion or consent retrofits
## Compliance Timeline and Action Plan
### Key Deadlines
| Date | Requirement | Action Needed |
|---|---|---|
| February 2, 2025 | Prohibited practices enforceable | Audit and discontinue prohibited systems |
| August 2, 2025 | Transparency obligations, governance structures | Implement disclosure mechanisms |
| February 2, 2026 | Commission high-risk classification guidelines | Review guidance for classification support |
| August 2, 2026 | High-risk system requirements, conformity assessment | Complete documentation and assessment |
| August 2, 2027 | Obligations for GPAI models placed on the market before August 2, 2025; Annex I product-embedded high-risk systems | Complete GPAI and embedded-system compliance |
### Prioritized Action Plan

```text
IMMEDIATE (Weeks 1-4):
======================
[ ] Complete prohibited practices audit
[ ] Identify all AI systems in deployment/pipeline
[ ] Classify each system using decision tree
[ ] Document classification rationale
[ ] Cease prohibited practices immediately

SHORT-TERM (Months 1-6):
========================
[ ] Implement transparency mechanisms
[ ] Draft technical documentation for high-risk systems
[ ] Establish AI governance committee
[ ] Begin conformity assessment preparation
[ ] Monitor harmonised standards publications

MEDIUM-TERM (Months 6-12):
==========================
[ ] Complete high-risk documentation
[ ] Implement human oversight measures
[ ] Establish continuous risk monitoring
[ ] Initiate conformity assessment (if high-risk)
[ ] Train personnel on compliance requirements

LONG-TERM (Months 12-18):
=========================
[ ] Complete conformity assessment
[ ] Register in EU database
[ ] Establish compliance monitoring program
[ ] Plan for ongoing documentation updates
[ ] Prepare for regulatory audits
```
## Scout Intel: What Others Missed
While existing policy summaries focus on legal interpretation, this guide highlights three technical implementation insights that compliance officers and AI developers often overlook. First, the derogation conditions for Annex III high-risk classification include a profiling carve-out that overrides all other conditions: systems that perform profiling remain high-risk regardless of narrow procedural task designations. Second, the "stop button" requirement (Article 14(4)(e)) is not an optional documentation exercise but a mandatory technical capability that must halt the system safely. Third, harmonised standards conferring presumption of conformity are still pending publication, so organizations cannot rely on the internal control procedure alone until EU standardisation bodies complete their work; the expected timeline extends into late 2026.
Key Implication: Organizations currently using ISO 42001 or NIST AI RMF as their primary compliance framework must supplement them with Annex IV technical documentation and cannot claim presumption of conformity until harmonised standards appear in the Official Journal. Plan for notified body assessment in the interim.
## Summary & Next Steps
This guide has provided a comprehensive framework for EU AI Act compliance:
Key Takeaways:
- Prohibited practices are already enforceable - immediate action required
- Use the decision tree to classify all AI systems systematically
- Derogation conditions exist for Annex III systems - document your assessment
- Technical documentation must address all Annex IV elements
- Human oversight requires a functional "stop button" - non-negotiable
- Existing frameworks (ISO 42001, NIST RMF) support but do not satisfy EU requirements
- Conformity assessment pathway depends on harmonised standard availability
Recommended Next Steps:
- Conduct immediate audit for prohibited practices
- Complete classification assessment for all AI systems
- Identify high-risk systems requiring conformity assessment
- Begin technical documentation drafting
- Establish governance structure for ongoing compliance
Related Resources:
- EU AI Act Official Explorer - Full regulation text with article analysis
- ISO/IEC 42001:2023 - AI Management System standard
- NIST AI Risk Management Framework - US framework for risk governance
## Sources
- EU AI Act Official Explorer - Official EU AI Act reference site
- EUR-Lex Official Journal - Regulation (EU) 2024/1689 definitive text
- Article 5: Prohibited AI Practices - Complete prohibited practices text
- Article 6: Classification Rules - High-risk classification criteria
- Article 9: Risk Management System - Continuous risk management requirements
- Article 11: Technical Documentation - Pre-market documentation requirements
- Article 13: Transparency - Information provision requirements
- Article 14: Human Oversight - Oversight design requirements
- Article 15: Accuracy, Robustness, Cybersecurity - Performance requirements
- Article 43: Conformity Assessment - Assessment procedures
- Article 50: Transparency Obligations - User disclosure requirements
- Article 99: Penalties - Administrative fine structure
- Article 113: Implementation Timeline - Entry into force dates
- ISO/IEC 42001:2023 - AI Management Systems standard
- NIST AI Risk Management Framework - US AI risk governance framework