The "AI LEAD Act" aims to establish a federal product liability framework for advanced artificial intelligence systems, defining them as software capable of making predictions or decisions using machine learning. It seeks to address harms caused by these systems, such as damage to property, physical injury, financial loss, or psychological anguish, by incentivizing safety and innovation. The legislation intends to provide predictable legal outcomes and ensure U.S. competitiveness in the AI sector.

Under the Act, developers can be held liable if they fail to exercise reasonable care in a product's design or in providing adequate instructions and warnings. Liability also arises if the product breaches an express warranty or is in a defective condition that makes it unreasonably dangerous when used in a foreseeable way. For defective-design claims, claimants must generally prove that a reasonable alternative design existed, unless the design is found to be manifestly unreasonable.

Deployers, who use or operate AI products, are liable as developers if they substantially modify the product or intentionally misuse it contrary to its intended use, causing harm. However, a deployer may be dismissed from a lawsuit if the developer is a party to the action, subject to jurisdiction, and able to satisfy a judgment, and the deployer is not otherwise liable. The bill also prohibits developers and deployers from including unconscionable language in contracts or terms that waive rights or limit liability under the Act.

The Act establishes a federal cause of action, allowing the Attorney General, State attorneys general, individuals, or classes to bring civil actions for violations. Remedies include injunctive relief, civil penalties, damages, restitution, and attorney fees. The legislation supersedes conflicting state laws but explicitly permits states to enact or enforce stronger protections for harm prevention, accountability, and transparency regarding AI products.
To ensure accountability, a foreign developer of an AI system must designate a permanent U.S. resident as an agent for service of process before making its products available in the United States. Developers that fail to do so are prohibited from deploying their products in the U.S., and the Attorney General is empowered to seek injunctive relief against them. The Attorney General is also mandated to maintain a public registry of these designated agents. The Act applies to liability actions commenced on or after its enactment date, regardless of when the harm or conduct occurred.
Read twice and referred to the Committee on the Judiciary. (text: CR S6836-6838)
Commerce
AI LEAD Act
USA | 119th Congress | S-2937 | Senate | Updated: 9/29/2025