NYC Bias Audit Law (Local Law 144)
Description
- Purpose: Prevent discriminatory bias in automated hiring systems
- Jurisdiction: New York City, USA
- Effective Date: July 5, 2023
- Detection Tools:
- TBD
- Related Risks:
- Related Regulations:
- CA SB 1047 - AI Risk Management
- EU AI Act - AI System Requirements
Local Law 144, commonly known as the NYC Bias Audit Law, regulates the use of Automated Employment Decision Tools (AEDTs) to prevent discriminatory practices in hiring and promotion. Enacted to protect candidates and employees, it mandates regular independent bias audits and transparency measures for AI-driven tools used in employment decisions within New York City, making it a pioneering effort to enforce fairness in algorithmic decision-making in the employment sector.
Scope & Applicability
The law applies to employers and employment agencies within New York City that utilize AI-based tools in hiring or promotion decisions.
- Covered Entities: NYC-based employers and recruitment agencies using automated decision tools.
- Data Types: Employment-related data including applicant resumes, demographic details, and evaluation scores.
- Key Exemptions: Tools used purely for internal assessments that do not feed into employment decisions, and systems that do not substantially assist or replace discretionary decision-making in hiring outcomes.
Key Requirements
Organizations must perform independent bias audits and ensure transparency in their automated employment tools:
- Conduct an independent bias audit of each AEDT no more than one year before its use, and repeat the audit annually while the tool remains in use.
- Notify candidates and employees, at least 10 business days before use, that an automated tool will be used in their evaluation, and inform them of their rights.
- Special Focus Areas:
- Bias Auditing: Evaluate selection rates and impact ratios across demographic groups.
- Transparency Reporting: Publicly post a summary of the most recent audit results, including selection rates and impact ratios, on the employer's or agency's website.
- Additional Focus: Include instructions for how candidates may request an alternative selection process or a reasonable accommodation.
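The core audit metrics named above, selection rates and impact ratios across demographic groups, can be sketched in a few lines. This is a minimal illustration, not an official audit implementation: the function names and sample data are assumptions, and the impact-ratio convention used here (each group's selection rate divided by the rate of the most-selected group) follows the approach described in the DCWP's rules for Local Law 144.

```python
# Illustrative bias-audit metric calculation (not an official implementation).
# selection rate = candidates selected / candidates evaluated, per group
# impact ratio   = group selection rate / selection rate of most-selected group
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's rate to the highest group rate."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical sample data: (demographic group, was the candidate selected?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(records)   # group_a: 0.75, group_b: 0.25
ratios = impact_ratios(rates)      # group_a: 1.0,  group_b: ~0.33
```

An impact ratio well below 1.0 for a group (for example, below the four-fifths threshold often used as a rule of thumb in employment-discrimination analysis) would flag that group for closer review in the published audit summary.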
Impact on LLM/AI Deployments
For AI systems used in employment, compliance requires integrating fairness and transparency from the outset:
- Algorithmic Fairness: Design AI systems to minimize bias and ensure equitable outcomes.
- Data Transparency: Provide comprehensive logs and audit trails of automated decisions.
- Candidate Communication: Implement mechanisms to notify candidates about the use of AI in their evaluation.
- Security and Observability Considerations:
- Data Protection: Secure sensitive demographic data used in audits.
- Audit Logging: Keep detailed records of AI decisions for transparency reviews.
- Periodic Bias Checks: Regularly assess and adjust algorithms to meet fairness benchmarks.
- Access Controls: Limit administrative access to AI systems processing sensitive employment data.
- Compliance Reporting: Prepare and publish transparency reports on system performance.
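As a rough illustration of the audit-logging point above, the sketch below appends each AEDT decision to an append-only JSON-lines log so it can be reviewed later. The file name and record fields are assumptions for the example, not anything the law prescribes; in practice the candidate identifier should be pseudonymized and the log access-controlled, per the data-protection and access-control points above.

```python
# Hypothetical append-only decision log for an AEDT (illustrative field names).
import json
import time
import uuid

def log_decision(log_path, candidate_id, model_version, score, outcome):
    """Append one AEDT decision record to a JSON-lines audit log."""
    record = {
        "event_id": str(uuid.uuid4()),       # unique id for this log entry
        "timestamp": time.time(),            # when the decision was made
        "candidate_id": candidate_id,        # pseudonymized identifier
        "model_version": model_version,      # which model produced the score
        "score": score,                      # model output used in the decision
        "outcome": outcome,                  # e.g. "advance" or "reject"
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_decision("aedt_decisions.jsonl", "cand-0042", "v1.3", 0.82, "advance")
```

Keeping one immutable record per decision, with the model version attached, makes it possible to reconstruct exactly which system evaluated a candidate when an auditor or regulator asks.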
Enforcement & Penalties
The NYC Bias Law is enforced by the NYC Department of Consumer and Worker Protection.
- Enforcement Body: NYC Department of Consumer and Worker Protection.
- Fines and Penalties:
- Initial Violations: Fines up to $500 per violation.
- Subsequent Violations: Fines ranging from $500 to $1,500 per violation.
- Additional Enforcement Mechanisms: Daily fines for ongoing non-compliance and public disclosure of audit failures.
- Operational Impacts: Violations can disrupt hiring processes and damage employer reputation, necessitating costly remediation.