This bill, known as the Artificial Intelligence Civil Rights Act of 2025, aims to establish comprehensive protections for individual rights against discriminatory impacts from computational algorithms. It broadly defines consequential actions to include critical areas such as employment, education, housing, healthcare, credit, and the justice system, ensuring wide-ranging applicability. The legislation targets covered algorithms, which encompass machine learning, AI, and similar computational processes that influence or make decisions in these consequential areas. A central provision prohibits developers and deployers from using covered algorithms in ways that cause or contribute to disparate impact or otherwise discriminate based on a wide array of protected characteristics.

To ensure compliance, the bill mandates rigorous pre-deployment evaluations and annual post-deployment impact assessments, conducted by independent auditors, whenever a potential for harm is identified. These assessments require detailed reviews of algorithm design, training data, potential for harm, and mitigation strategies, with summaries made publicly available. The Act also establishes specific standards for algorithm use, requiring reasonable measures to prevent harm, ensure auditor access, and consult affected stakeholders.

Crucially, it introduces a right to a human alternative for consequential actions, allowing individuals to opt out of algorithmic decision-making, and a right to appeal algorithmic decisions to a human reviewer. The bill also includes strong protections against retaliation for individuals who exercise their rights or report violations. Transparency is a key focus: developers and deployers must provide clear, conspicuous, and accessible public disclosures about their algorithm practices.
These disclosures must detail data collection, processing, and transfer practices and explain how individuals can exercise their rights, including a specific disclaimer about the audit's purpose. The Federal Trade Commission (FTC) is tasked with studying the feasibility of requiring explanations for algorithmic decisions and with establishing a publicly accessible repository for all evaluation and assessment reports.

Enforcement mechanisms are robust and multi-layered: violations are treated as unfair or deceptive acts under the FTC Act, and the FTC's jurisdiction is expanded to cover entities typically exempt. States are empowered to bring civil actions seeking injunctions, civil penalties, and damages. Significantly, the bill grants a private right of action to individuals and classes, allowing them to sue for treble damages, punitive damages, and other relief, while invalidating pre-dispute arbitration agreements and joint-action waivers for disputes arising under the Act.