Algorithmic Accountability Act Proposals and Requirements

Federal and state algorithmic accountability proposals requiring impact assessments, audit rights, and explainability for automated decision systems affecting individuals.

What You Need to Know

Algorithmic accountability proposals at the federal level — most prominently the Algorithmic Accountability Act of 2022 (S. 3572) and its 2023 reintroduction — would require companies deploying "automated decision systems" (ADS) in consequential decisions (employment, housing, credit, healthcare, education, government services) to conduct impact assessments evaluating accuracy, fairness, bias, privacy, and security. While federal legislation has not passed as of early 2026, several state and local laws impose operative requirements: New York City Local Law 144 (automated employment decision tools, enforced since July 2023) requires annual bias audits by independent auditors and public disclosure of audit results; Illinois's Artificial Intelligence Video Interview Act requires disclosure of AI use in video interviews; and Maryland's HB 1202 requires applicant consent before facial recognition is used in job interviews.

New York City Local Law 144 is the most operationally mature algorithmic accountability law in force. It applies to employers and employment agencies using Automated Employment Decision Tools (AEDTs) in hiring or promotion decisions affecting NYC residents. Covered employers must: (1) annually commission a bias audit by an independent auditor — defined as an impartial entity with no financial interest in the employer or the AEDT developer; (2) publish the audit summary on their public website, including the date, the distribution of candidates by sex and race/ethnicity, and impact ratio calculations; (3) notify candidates 10 business days before assessment that an AEDT will be used; and (4) provide candidates the option to request an alternative selection process or accommodation. Bias audit methodology requires calculating the selection rate and impact ratio for each sex and race/ethnicity category with sufficient representation.
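The 10-business-day notice requirement translates directly into pipeline logic. A minimal sketch of the timing check, assuming "business days" means weekdays only (public holidays are ignored here, which a production implementation would need to revisit):

```python
from datetime import date, timedelta

def earliest_assessment_date(notice_date: date, business_days: int = 10) -> date:
    """Earliest date an AEDT assessment may occur, counting full business
    days after the candidate notice. Weekends are skipped; holidays are
    not modeled (an assumption for this sketch)."""
    d = notice_date
    counted = 0
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return d

# Notice sent Monday 2024-01-01 -> assessment no earlier than 2024-01-15
deadline = earliest_assessment_date(date(2024, 1, 1))
```

An application pipeline can gate AEDT invocation on `date.today() >= earliest_assessment_date(notice_date)` and log both dates to the audit trail.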

The impact ratio calculation methodology under LL 144 — and analogous methods in other proposals — uses the 4/5ths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures (29 CFR Part 1607): a selection rate for a protected group that is less than 80% of the highest selection rate among all groups indicates potential adverse impact. However, LL 144 requires only disclosure of the impact ratio, not remediation — remediation is handled under existing EEOC/NYCCHR enforcement frameworks. Engineering teams must implement selection rate tracking by protected class in their ATS and AEDT pipelines, calculating and retaining disaggregated outcome data in a form suitable for annual independent audit. Audit trails for all AEDT-influenced decisions must be retained for the statute of limitations period under applicable employment discrimination laws.
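The selection rate and impact ratio arithmetic described above can be sketched as follows. The category labels and counts are hypothetical, and a real audit must also apply the DCWP rules on minimum category representation before reporting:

```python
def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, dict]:
    """Compute per-category selection rates and impact ratios.

    selections maps a sex or race/ethnicity category to
    (number selected, number of applicants). The impact ratio is each
    category's selection rate divided by the highest selection rate;
    a ratio below 0.8 indicates potential adverse impact under the
    4/5ths rule (29 CFR Part 1607).
    """
    rates = {cat: sel / tot for cat, (sel, tot) in selections.items() if tot > 0}
    top_rate = max(rates.values())
    return {
        cat: {
            "selection_rate": round(rate, 4),
            "impact_ratio": round(rate / top_rate, 4),
            "below_four_fifths": rate / top_rate < 0.8,
        }
        for cat, rate in rates.items()
    }

# Hypothetical disaggregated outcomes from an ATS export
report = impact_ratios({"group_a": (40, 100), "group_b": (24, 100)})
```

Here `group_b`'s selection rate (0.24) is 60% of `group_a`'s (0.40), so it is flagged under the 4/5ths rule — disclosure-ready output for the annual audit, with remediation left to the EEOC/NYCCHR frameworks as noted above.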

How We Handle It

We implement NYC LL 144 compliance for clients using AEDTs in New York hiring processes: building disaggregated selection rate tracking into ATS integrations, preparing audit-ready data packages for independent auditors, and embedding candidate notification workflows into application pipelines. For clients monitoring federal algorithmic accountability developments, we maintain mapping templates between proposed ADS impact assessment requirements and our existing AI risk assessment methodology.
