The Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act mandates the Director of the National Institute of Standards and Technology (NIST) to develop consensus-driven, evidence-based voluntary technical guidelines and specifications for internal and external assurances of artificial intelligence (AI) systems. These guidelines are intended to foster trust, increase the adoption of AI systems, and establish clear accountability and governance frameworks, aligning with NIST's existing AI Risk Management Framework. The assurances will involve the testing, evaluation, validation, and verification of AI systems, tailored to their specific application, use case, and risk profile. The voluntary guidelines will identify standards for critical areas such as consumer privacy safeguards, methods to mitigate harms to individuals, dataset quality, and governance and process controls. They will also provide best practices and criteria for determining the frequency and scope of assurance activities. Furthermore, the bill establishes an Artificial Intelligence Assurance Qualifications Advisory Committee to recommend qualifications for parties conducting AI assurances and to assess the applicability of existing accreditation programs. Finally, it requires the Secretary of Commerce to conduct a study evaluating the capabilities and market demand of entities providing AI assurance services, including recommendations for enhancing their capacity and availability.
Read twice and referred to the Committee on Commerce, Science, and Transportation.
Science, Technology, Communications
VET Artificial Intelligence Act
USA | 119th Congress | S. 2615 | Senate
Updated: 7/31/2025