
The odd thing about AI is that different uses of the same technology can fall under different regulations. In the workplace, a candidate's privacy rights around an AI tool can evaporate. On one side of the line, you're a "consumer" with recognizable privacy rights; on the other, you're an "applicant" or "employee," and those rights change or disappear. Colorado and California offer insight into how these rights ebb and flow.
Start with Colorado. The state's privacy law defines "consumer" in a way that largely stops at HR's doorstep. Employees, applicants, and commercial actors are out, which means the profiling opt-out under Colorado's privacy law stays outside the interview room. Colorado's AI Act, meanwhile, calls employment and opportunity decisions "consequential," but it does not tie that label to the opt-out rights in the privacy law. The Colorado Privacy Act's opt-out simply does not govern employers making hiring decisions.
The AI Act still tells HR teams how to behave. Before a decision, they owe people a plain notice that explains the system's purpose and role. After an adverse outcome, employers owe applicants specifics: the reason for the decision, a path to correct inaccurate data, and an appeal that includes human review when feasible. The law layers on a reasonable-care duty to prevent algorithmic discrimination and requires a risk program with impact assessments at launch, every year, and after material changes. Every AI tool used for hiring decisions must therefore support four core rights: notice, reasons, correction, and appeal.
If you're building to this regulation, think like a rail operator laying a dedicated HR track. Map your controller/processor roles to the Act's developer/deployer duties so you know who does what. Demand model documentation, testing results, and incident reports from vendors. Give clear notices that don't imply an opt-out you are not required to offer applicants. Log the reasons you gave, the corrections you made, and the appeals you heard. And align the program to industry standards such as NIST's AI RMF to support a showing of reasonable care when processing candidates' applications.
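To make that record-keeping concrete, here is a minimal sketch of what the Colorado-facing log could look like in code. It is illustrative only: the class and field names (AdverseActionRecord, ImpactAssessmentLog, and so on) are hypothetical, not statutory terms, and the cadence check simply mirrors the launch / annual / material-change rhythm described above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AdverseActionRecord:
    """One applicant's trip through the notice / reasons / correction / appeal sequence."""
    candidate_id: str                      # internal pseudonymous ID (hypothetical field)
    notice_sent: date                      # plain-language pre-decision notice delivered
    decision_date: date
    reasons: list[str]                     # specific reasons given after an adverse outcome
    correction_requested: bool = False
    correction_resolved: Optional[date] = None
    appeal_filed: bool = False
    human_reviewer: Optional[str] = None   # reviewer with authority to change the outcome
    appeal_outcome: Optional[str] = None   # e.g. "upheld", "overturned"

@dataclass
class ImpactAssessmentLog:
    """Tracks the launch / annual / material-change assessment cadence."""
    system_name: str
    assessments: list[date] = field(default_factory=list)

    def assessment_due(self, today: date, material_change: bool = False) -> bool:
        # Due at launch (no prior assessment), after a material change,
        # or once a year has passed since the most recent assessment.
        if material_change or not self.assessments:
            return True
        return (today - max(self.assessments)).days >= 365
```

The point of a structure like this is auditability: when a regulator or an applicant asks what notice went out and who heard the appeal, the answer is a query, not an archaeology project.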
California heads down a different road. The state’s privacy regulator treats automated systems that make employment and contractor decisions as “significant.” That triggers a pre-use Automated Decision Making Technology (ADMT) notice, an opt-out for those significant decisions, and response timelines for access and appeals. If someone opts out, you must stop using the ADMT for that person within 15 business days. California’s civil-rights rules also cover automated tools used for hiring, promotion, and other personnel actions, and they center the employer’s duties on discrimination risk, testing, and documentation.
In Colorado, build to the sequence: notice, reasons, correction, appeal. In California, stack two regimes. First, the ADMT rules kick in when automation makes (or substantially makes) a significant employment decision; that means pre-use notice, opt-out, access and explanations, plus risk assessments. Second, the civil-rights framework runs in parallel, policing discrimination inside the same hiring flow. For opt-out rights, watch the carve-outs: certain admission, acceptance, hiring, or allocation decisions can proceed without an opt-out when they come with a robust human-appeal process and other guardrails. Design the workflow and notices to fit within those contours.
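The 15-business-day window for honoring an opt-out is easy to miscount when it spans weekends, so a small helper can keep the deadline honest. This is a sketch under stated assumptions: it starts counting the day after the request is received, skips weekends only, and does not model holidays or any regulator-specific counting rule.

```python
from datetime import date, timedelta

def optout_deadline(received: date, business_days: int = 15) -> date:
    """Last day to stop using the ADMT for a person after an opt-out request.

    Assumptions: counting starts the day after receipt, weekends are skipped,
    and holidays are ignored.
    """
    day, remaining = received, business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday through Friday
            remaining -= 1
    return day

# Example: a request received on Friday, March 7, 2025
print(optout_deadline(date(2025, 3, 7)))  # 2025-03-28
```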
Zooming out, the hardest part isn’t just the rules; it’s the labels. People’s rights roll in and out not with actual risk, but as the label on the person changes. You start as a consumer and become an applicant, and suddenly different statutes toggle on and off. That creates gaps for people and headaches for compliance teams.
Workarounds creep in too. Feed pseudonymous data to a model and say privacy rules don’t apply even though the decision still lands on a real person. Transparency shrinks. Error-correction tools dry up. Then comes “human-review theater”: a perfunctory click by someone without authority to change the outcome, carried out to check a box rather than fix a decision.
Design the process to avoid these pitfalls. Set a plain-English rights baseline that follows a person end-to-end, regardless of labels. Map every legal term in your notices and records back to that baseline so people know what to expect and auditors can see the through-line. Give reviewers real authority and measure override rates so "human in the loop" means something. And log status changes, for instance consumer to applicant to employee, so the record shows which regime applied when.
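A minimal sketch of what that baseline might look like as data: a status-change log plus an override-rate metric for human review. The names (StatusChange, ReviewEvent, override_rate) are hypothetical; the point is that both the label transitions and the reviewers' actual influence end up in a queryable record.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StatusChange:
    """One label transition for one person, e.g. consumer -> applicant."""
    person_id: str
    old_status: str        # "consumer", "applicant", "employee", ...
    new_status: str
    changed_at: datetime

@dataclass
class ReviewEvent:
    """One automated decision that went through human review."""
    decision_id: str
    model_outcome: str     # e.g. "reject"
    final_outcome: str     # outcome after the human reviewer acted
    reviewer_id: str

def override_rate(events: list[ReviewEvent]) -> float:
    """Share of reviewed decisions where the human changed the outcome.

    A rate stuck near zero is one signal of "human-review theater."
    """
    if not events:
        return 0.0
    overridden = sum(1 for e in events if e.final_outcome != e.model_outcome)
    return overridden / len(events)
```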
For lawmakers and regulators, two durable paths beat today's patchwork. One: build sector-specific AI rules with clear exemptions from conflicting privacy duties so HR teams do not have to play regulatory Twister. Two: harmonize definitions so people keep the same core rights throughout an automated decision, no matter where they stand on the org chart. Either route gives workers and employers a steadier map. That may be the only way to keep the on-ramps open and the guardrails real.
