Virginia has officially stepped into the AI regulation space. On February 20, 2025, the state’s legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (House Bill 2094), also known as the VA AI Act.
If Governor Glenn Youngkin signs the bill, or takes no action by March 24, 2025, the law will take effect on July 1, 2026.
That would make Virginia the second U.S. state to enact a comprehensive AI governance law, following Colorado's Artificial Intelligence Act (CO AI Act), which addresses consumer protections in interactions with AI systems and will be enforced starting February 1, 2026.
Here’s a breakdown of the two AI acts, highlighting their key similarities and differences.
Key Terms
Both acts regulate developers and deployers of high-risk AI systems (HRAI systems), which influence major life decisions. Key definitions include:
- High-Risk AI Systems: AI systems that play a significant role in making important decisions affecting consumers, such as employment, loans, healthcare, or housing.
- Developers: Businesses that create an HRAI system, or modify one substantially enough to introduce new risks of algorithmic discrimination.
- Deployers: Businesses that use or implement an HRAI system.
- Consequential Decisions: Decisions with a substantial legal or personal impact, such as education opportunities, insurance coverage, or criminal justice outcomes.
For more on how AI regulations define risk levels, check out NIST’s AI Risk Management Framework.
Scope: Virginia’s Narrower Definition
While both laws regulate AI systems making critical decisions, the Virginia law has a more limited scope than Colorado’s.
Virginia defines HRAI systems as those specifically intended to autonomously make consequential decisions. If an AI system is used for high-risk purposes outside the developer’s intended purpose, it might not be classified as an HRAI system under Virginia law.
In contrast, Colorado’s law applies to AI systems regardless of the developer’s intent if they are used in high-risk scenarios. Additionally, Virginia’s law excludes AI systems used in employment decisions, whereas Colorado’s does not.
Obligations for AI Developers
Developers of HRAI systems must follow governance rules to prevent algorithmic discrimination and ensure transparency about their AI models.
Both Virginia and Colorado require developers to provide model cards, which detail the system’s purpose, data sources, risks, and performance limitations.
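As a rough illustration only, such documentation could be captured as structured data along the following lines; the field names and sample values are hypothetical and are not prescribed by either statute.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical model card for a high-risk AI system (illustrative fields only)."""
    system_name: str
    intended_use: str                     # the consequential decision the system supports
    training_data_sources: list[str]      # high-level description of data provenance
    known_limitations: list[str]          # performance caveats and failure modes
    discrimination_risks: list[str]       # reasonably foreseeable risks of algorithmic discrimination
    evaluation_summary: dict[str, float] = field(default_factory=dict)  # e.g. accuracy by subgroup

card = ModelCard(
    system_name="loan-underwriting-scorer-v2",
    intended_use="Recommend approval or denial of consumer loan applications",
    training_data_sources=["historical loan outcomes", "credit bureau features"],
    known_limitations=["reduced accuracy for applicants with limited credit history"],
    discrimination_risks=["proxy variables correlated with protected characteristics"],
    evaluation_summary={"auc_overall": 0.83, "auc_limited_history": 0.71},
)
```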
However, Colorado goes a step further by mandating a public AI inventory, listing all HRAI systems available for deployment, something Virginia does not require.
When it comes to risk reporting, Colorado developers must notify the state’s Attorney General (AG) if their AI system is found to cause discrimination, whereas Virginia has no similar requirement.
Additionally, Virginia enforces synthetic content detection, requiring AI-generated material to be identifiable using industry-standard tools, a provision that Colorado lacks.
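In practice, developers would likely rely on established provenance or watermarking standards; the toy sketch below only illustrates the general idea of attaching machine-readable provenance to generated output, and the metadata fields are assumptions, not anything the bill specifies.

```python
import hashlib
from datetime import datetime, timezone

def label_synthetic_content(text: str, model_id: str) -> dict:
    """Attach minimal provenance metadata to AI-generated text (toy example, not an industry standard)."""
    return {
        "content": text,
        "provenance": {
            "synthetic": True,                      # flags the content as AI-generated
            "generated_by": model_id,               # identifies the generating system
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),  # ties metadata to the exact output
        },
    }

labeled = label_synthetic_content("Your claim has been pre-approved.", "demo-text-model-1")
```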
Obligations for AI Deployers
AI deployers have a crucial responsibility to ensure their systems do not engage in algorithmic discrimination while maintaining transparency with consumers. Both Virginia and Colorado require AI deployers to implement risk management policies aligned with established frameworks such as NIST and ISO/IEC 42001.
However, Colorado mandates annual reviews of deployed AI systems, a requirement Virginia lacks. Colorado also requires impact assessments to be completed annually and within 90 days of any intentional and substantial modification to the system.
Virginia, by contrast, mandates these assessments only before deployment and before significant updates, though it does not define what qualifies as a “significant update.”
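To make the timing difference concrete, here is a minimal sketch of when each duty would next arise, assuming the rules reduce to simple date arithmetic; it is illustrative only, not legal guidance.

```python
from datetime import date, timedelta
from typing import Optional

def colorado_next_assessment(last_assessment: date, modified_on: Optional[date] = None) -> date:
    """Colorado: reassess at least annually, and within 90 days of an intentional and substantial modification."""
    deadline = last_assessment + timedelta(days=365)
    if modified_on is not None:
        deadline = min(deadline, modified_on + timedelta(days=90))
    return deadline

def virginia_assessment_required(already_deployed: bool, significant_update_planned: bool) -> bool:
    """Virginia: assess before initial deployment and before a 'significant update' (undefined in the bill)."""
    return (not already_deployed) or significant_update_planned

print(colorado_next_assessment(date(2026, 2, 1), modified_on=date(2026, 6, 15)))  # 2026-09-13
```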
Both states require consumer notifications when an HRAI system is used to make consequential decisions. Additionally, deployers must provide adverse action notices, explaining any negative decisions and offering consumers an opportunity to appeal.
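A deployer's adverse action notice might carry fields like the following; this is a hedged sketch with hypothetical field names and wording, not a template from either statute.

```python
from dataclasses import dataclass

@dataclass
class AdverseActionNotice:
    """Hypothetical adverse-action notice; fields are illustrative, not statutory."""
    decision: str                 # the negative outcome being communicated
    principal_reasons: list[str]  # plain-language reasons for the decision
    ai_disclosure: str            # statement that an HRAI system contributed to the decision
    data_reviewed: str            # description of the personal data the system considered
    appeal_instructions: str      # how the consumer can appeal or request human review

notice = AdverseActionNotice(
    decision="Rental application declined",
    principal_reasons=["Reported income below the screening threshold"],
    ai_disclosure="An automated screening system was a substantial factor in this decision.",
    data_reviewed="Credit report and income information submitted with your application.",
    appeal_instructions="Contact us to correct inaccurate data or request human review of the decision.",
)
```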
A key difference between the two laws is the requirement for maintaining an AI inventory. Colorado mandates that deployers keep a public record of the AI systems they use, while Virginia does not impose such a requirement.
“Talking To A Bot” Disclosure
Colorado requires companies to inform consumers when they are interacting with an AI system unless it’s already obvious. Virginia does not have this requirement.
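As a trivial sketch of how a deployer might satisfy a Colorado-style disclosure duty in a chat interface, the helper below prepends a notice unless the interaction is already obviously automated; both the wording and the "obviousness" flag are assumptions made for illustration.

```python
def with_bot_disclosure(reply: str, obviously_automated: bool = False) -> str:
    """Prepend an AI-interaction disclosure unless the context already makes it obvious (illustrative only)."""
    if obviously_automated:
        return reply
    return "You are chatting with an automated AI assistant.\n\n" + reply

print(with_bot_disclosure("Your order ships tomorrow."))
```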
Exemptions for Healthcare and Financial Entities
Both laws exempt AI systems used in HIPAA-compliant healthcare recommendations, banking, and insurance under certain conditions.
Enforcement & Penalties
- Colorado: The AG can enforce violations with fines up to $20,000 per offense.
- Virginia: The AG can fine violators $1,000 per violation, increasing up to $10,000 for willful misconduct.
- Investigations: Colorado’s AG can request information at any time, while Virginia’s AG needs reasonable cause to investigate.
- Affirmative Defense: Both states offer legal protections if a business detects and fixes violations before an investigation.
- Rulemaking Authority: Colorado’s Attorney General has the authority to create new rules under the AI Act, while Virginia’s Attorney General does not have explicit rulemaking power.
For a deeper dive into legal implications, see Virginia Attorney General’s Office and Colorado Attorney General’s AI Regulations.
Final Thoughts
With Virginia and Colorado leading the charge, other states may soon follow suit in AI regulation. While both laws focus on transparency and fairness, Colorado’s AI Act is broader and stricter than Virginia’s, particularly in requiring public AI inventories, frequent assessments, and AG reporting.
As AI legislation continues evolving, companies using AI should stay informed and prepare for compliance with these new laws.