As AI tools continue to reshape the workplace, regulators are racing to catch up. For HR professionals, this creates both opportunity and risk. While AI can help streamline hiring, improve decision-making, and reduce administrative overhead, the use of AI—especially in employment decisions—is increasingly subject to legal scrutiny at the state and local level.
So far, a handful of jurisdictions have taken the lead, setting the tone for how artificial intelligence may be regulated across the country.
“We’re still in the early innings of state legislatures addressing AI, but the few states that have started to take some action could be a bellwether for the types of bills and restrictions we could see serious momentum around over the next few years.”
– Ryan Parker, Chief Legal Officer at SixFifty
In this post, we’ll walk through the key states with AI-specific laws impacting HR and highlight broader trends to keep an eye on, no matter which state your employees are in.
New York City: The first jurisdiction to regulate AI in hiring in the U.S.
New York City’s Local Law 144 marked the first major legislative effort to regulate AI in employment. Effective as of July 2023, the law applies to automated employment decision tools (AEDTs) used to assess candidates for hiring or promotion—whether the job is located in NYC or fully remote but tied to a NYC-based office.
Key requirements include:
- Independent bias audits conducted annually
- Public disclosure of audit results
- Notice to candidates and employees when AEDTs are used
This law serves as a model for transparency and fairness in AI usage, and while it’s localized to NYC, its influence could spread nationally. Employers using AI-powered tools like resume screeners or hiring algorithms should pay close attention, especially if they hire in NYC or expect similar laws in their own jurisdictions.
Colorado: The most comprehensive state law (so far)
In May 2024, Colorado passed the Artificial Intelligence Act, which takes effect February 1, 2026. This sweeping legislation regulates both developers and “deployers” of AI systems—meaning any business using AI tools to make decisions with a substantial effect on employment, housing, education, or financial services.
Employers generally fall under the “deployer” category. While the law does not make the following steps strictly mandatory, it establishes them as best practices, making it much easier for deployers that follow them to demonstrate compliance and avoid penalties:
- Implement a risk management policy for high-risk AI tools
- Conduct impact assessments
- Notify individuals affected by AI-driven decisions
- Publish a summary of deployed AI systems
- Report any discovered algorithmic discrimination to the Attorney General within 90 days
Notably, the law creates a rebuttable presumption of compliance for organizations that follow these steps, which provides some legal protection. However, it does not include a private right of action—enforcement rests with the state AG’s office.
Illinois: Anti-discrimination protections in the workplace
Illinois’ approach is rooted in employment discrimination law. In August 2024, the state amended the Illinois Human Rights Act to include new AI protections, which take effect January 1, 2026.
Under the law, employers:
- Must notify employees when AI is used to make decisions related to hiring, promotion, training, termination, or discipline, and
- Are prohibited from using AI in ways that result in discrimination based on protected characteristics, or from using zip codes as a proxy for those characteristics
This law is enforced by the Illinois Department of Human Rights and applies to employers with at least one Illinois employee for 20 or more weeks in a calendar year. It’s a good example of how existing human rights and anti-discrimination frameworks are being adapted to address AI risks in the workplace.
Other trends to watch: California, biometric data & federal activity
California nearly enacted statewide AI regulations in 2024, and lawmakers will likely revisit them. Although many states already address “automated decision-making technologies” (ADMT) in their privacy laws, California had lagged behind. A new package of regulations covering ADMT was recently approved, but it won’t take effect until October 1.
Once in place, they will apply particularly when ADMT is used to make significant decisions affecting employment or compensation. Seventeen other states also include ADMT provisions in their privacy laws, though only one explicitly applies to employment contexts…for now.
Meanwhile, biometric AI regulation is heating up. Six states currently regulate biometric data, with others likely to follow. Some laws are narrow, like Maryland’s restriction on using facial recognition in job interviews without consent, while others are broader, banning facial recognition outright in businesses open to the public.
Looking at the broader national picture, over 30 states have created AI task forces or working groups, and many are expected to propose legislation in the coming years. The focus is shifting toward transparency, bias testing, and giving individuals the right to opt out of AI-driven decisions.
What HR teams should do to prepare for a swell of AI legislation
While AI legislation and the compliance landscape around it are still evolving, employers can take proactive steps now:
- Create an AI use policy tailored to your organization’s needs
- Audit your AI tools for bias, especially those impacting hiring or promotions
- Stay informed about changes at the state and local level
- Educate your team—AI should support, not replace, good judgment
AI isn’t going away. But by using it responsibly—and staying ahead of the legal curve—HR teams can harness its benefits while minimizing risk.
