The Guidelines for User Age-verification and Responsible Dialogue Act of 2025, or GUARD Act, aims to protect minors from the potential harms of artificial intelligence chatbots. It mandates that all individuals accessing AI chatbots create a user account and undergo a reasonable age-verification process to determine whether they are a minor or an adult. This process requires verifiable age data, such as government-issued identification, and the bill explicitly states that self-confirmation or birthdate entry alone is insufficient.

A key provision of the bill is the outright prohibition on minors accessing or using any "AI companion," defined as a chatbot designed to simulate interpersonal or emotional interaction. Covered entities must also implement robust data security for age-verification information, ensuring it is minimally collected, encrypted, and never shared or sold. Furthermore, the Act establishes criminal prohibitions, making it unlawful to design or make available an AI chatbot that, knowingly or with reckless disregard, solicits minors for sexually explicit conduct or promotes suicide, self-harm, or violence, with penalties of up to $100,000 per offense.

Beyond age restrictions, the bill requires AI chatbots to make specific disclosures to all users. Chatbots must clearly and conspicuously disclose their non-human status at the start of each conversation and every 30 minutes thereafter, and must not deceptively claim to be human. Additionally, they are prohibited from representing themselves as licensed professionals and must disclose that they do not provide medical, legal, financial, or psychological services, advising users to consult licensed professionals instead.

The Attorney General is empowered to enforce these provisions through civil actions, including injunctions and civil penalties of up to $100,000 per violation, and State attorneys general may also pursue enforcement.
Read twice and referred to the Committee on the Judiciary.
Crime and Law Enforcement
GUARD Act
USA | 119th Congress | S-3062 | Senate
Updated: 10/28/2025