The Algorithm
AI Governance

AI Bias Auditing Requirements Under EU AI Act and State Laws

Systematic AI bias auditing is now a legal obligation for high-risk AI systems in employment, credit, and housing under EU and U.S. state requirements.

What You Need to Know

AI bias auditing — systematic measurement of differential performance or outcomes across protected demographic groups — is required under multiple converging legal frameworks. The EU AI Act Article 9 requires bias detection and correction as part of the risk management system for high-risk AI, and Article 10(2)(f) requires training, validation, and testing data to be "examined in view of possible biases" that could affect fundamental rights. Annex III lists high-risk AI applications including employment, education, credit, essential services, law enforcement, migration, and administration of justice. In the U.S., NYC Local Law 144 (automated employment decision tools, AEDTs), Illinois's Artificial Intelligence Video Interview Act, and proposed federal legislation create audit obligations, while EEOC guidance on AI-based selection tools and CFPB Circular 2022-03 extend anti-discrimination enforcement to AI-mediated decisions.

Bias auditing methodology requires defining four elements up front: the bias metric, the demographic groups to be evaluated, the data split to be used, and the threshold that triggers remediation or disclosure. Common bias metrics include demographic parity (equal selection rates across groups), equalized odds (equal true positive and false positive rates across groups), calibration (predicted probabilities match observed frequencies within each group), and counterfactual fairness (similar individuals with different demographic attributes receive similar predictions). No single metric captures all dimensions of fairness: demographic parity and equalized odds can be simultaneously satisfied only when base rates are equal across groups (the "impossibility theorem"), so practitioners must decide explicitly which fairness criterion is most appropriate for the specific use case and legal context.
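The group-level metrics above reduce to simple counting over a confusion matrix per group. As a dependency-free sketch (toy data and group labels are illustrative, not from any real audit), the following computes per-group selection rates for demographic parity and per-group TPR/FPR for equalized odds:

```python
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate, TPR, and FPR for binary predictions."""
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        key = ("tp" if yt else "fp") if yp else ("fn" if yt else "tn")
        stats[g][key] += 1
    report = {}
    for g, s in stats.items():
        n = sum(s.values())
        pos = s["tp"] + s["fn"]  # actual positives in group g
        neg = s["fp"] + s["tn"]  # actual negatives in group g
        report[g] = {
            "selection_rate": (s["tp"] + s["fp"]) / n,
            "tpr": s["tp"] / pos if pos else float("nan"),
            "fpr": s["fp"] / neg if neg else float("nan"),
        }
    return report

# Toy audit: group B is selected less often than group A.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
grp    = ["A", "A", "A", "A", "B", "B", "B", "B"]
r = group_rates(y_true, y_pred, grp)
dp_gap = abs(r["A"]["selection_rate"] - r["B"]["selection_rate"])
# dp_gap == 0.5: a demographic parity gap a real audit would flag
```

A production audit would replace the selection-rate difference with whatever gap or ratio the applicable rule specifies (NYC Local Law 144, for example, uses an impact ratio of selection rates).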

EU AI Act Article 10(3) requires training data to be "relevant, representative, free of errors, and complete" — an engineering standard that demands documented data quality processes, not mere assertions. For bias auditing, representativeness means that training and evaluation data reflect the deployment population distribution, including protected class composition. Representation gaps can be addressed with data-level techniques such as augmentation, resampling (e.g. SMOTE), and reweighting by inverse propensity scores, or with in-processing techniques such as adversarial debiasing; each intervention must be documented and its effect on calibration and the accuracy-fairness tradeoff evaluated. Audit reports must be stored as part of the technical documentation under EU AI Act Article 11 and produced to market surveillance authorities on request.

How We Handle It

We implement AI bias auditing pipelines using the Fairlearn and IBM AI Fairness 360 toolkits, generating disaggregated metric reports across protected classes for all high-risk model deployments. Our EU AI Act Article 10 data examination workflow includes representativeness analysis, bias scanning of training sets, and documented reweighting decisions. Audit artifacts are versioned alongside model releases and stored in audit-ready repositories.
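Fairlearn's MetricFrame is the standard tool for the disaggregated reports described above. As a dependency-free illustration of the same pattern (the function and variable names here are our own sketch, not a Fairlearn API), a per-group metric report looks like:

```python
def disaggregated_report(metric_fns, y_true, y_pred, sensitive):
    """Compute each named metric overall and per sensitive-feature group,
    mirroring the disaggregation pattern of Fairlearn's MetricFrame."""
    def subset(mask):
        yt = [y for y, m in zip(y_true, mask) if m]
        yp = [y for y, m in zip(y_pred, mask) if m]
        return yt, yp

    report = {"overall": {name: fn(y_true, y_pred)
                          for name, fn in metric_fns.items()}}
    for g in sorted(set(sensitive)):
        yt, yp = subset([s == g for s in sensitive])
        report[g] = {name: fn(yt, yp) for name, fn in metric_fns.items()}
    return report

accuracy = lambda yt, yp: sum(a == b for a, b in zip(yt, yp)) / len(yt)
rep = disaggregated_report({"accuracy": accuracy},
                           [1, 0, 1, 0], [1, 0, 0, 0], ["A", "A", "B", "B"])
# rep["A"]["accuracy"] == 1.0 while rep["B"]["accuracy"] == 0.5:
# exactly the per-group gap an audit artifact must surface.
```

Versioning the resulting report dictionary alongside each model release is what makes the artifact audit-ready under Article 11.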

Services
Service
AI Platform Engineering
Service
Compliance Infrastructure
Service
Regulatory Intelligence
Related Frameworks
EU AI Act Art. 9-10
NYC Local Law 144
EEOC Uniform Guidelines
NIST AI RMF
DECISION GUIDE

Compliance-Native Architecture Guide

Design principles and a structured checklist for building software that is compliant by default — not compliant by retrofit. Covers data architecture, access controls, audit trails, and vendor due diligence.
