Employment & HR
High-risk sector
The use of AI in hiring, monitoring, and promotion decisions is an active area of civil-rights enforcement and new AI-specific legislation.
Overview
AI is used to screen resumes, rank candidates, assess video interviews, and monitor employees. Because employment decisions significantly affect people, these systems are heavily regulated under civil-rights and emerging AI laws.
What this means for you
You may be screened by AI before a human ever sees your application. Several laws now require notice, bias audits, and a right to an alternative process.
Relevant laws & frameworks
- EU AI Act: The world's first comprehensive horizontal AI law, imposing risk-based obligations across the EU.
- Colorado AI Act: The first comprehensive US state AI law targeting consequential decisions; effective in 2026.
- NIST AI RMF: A voluntary US framework for managing AI risks across the life cycle (Govern, Map, Measure, Manage).
- UK GDPR Art. 22: Rights regarding solely automated decisions with legal or similarly significant effects.
- NYC AEDT Law: Requires independent bias audits and candidate notice for automated employment decision tools used in NYC.
- CCPA/CPRA ADMT: California is issuing regulations on automated decision-making technology under the CCPA/CPRA.
- Illinois AIVIA: Notice, consent, and deletion requirements when AI analyzes job-interview videos in Illinois.
Business examples
- NYC bias-audit publications: Employers and vendors publish independent bias audits of automated employment decision tools used in NYC.
Related industries
- Finance & Banking (high risk): AI in credit decisions, fraud detection, trading, and customer service.
- Insurance (high risk): AI in underwriting, pricing, and claims, regulated through state insurance law.
- Education (high risk): AI in grading, proctoring, admissions, and tutoring.
- Legal Services (high risk): AI in legal research, drafting, and e-discovery.