
The odd thing about AI regulation is that the same tool can face different rules depending on how it is used. In the workplace, a candidate's privacy rights under an AI tool can evaporate. On one side, you're a "consumer" with recognizable privacy rights; on the other side you're an "applicant" or "employee," and those rights change, or vanish altogether. Colorado and California offer insight into how these rights ebb and flow.
Start with Colorado. The state's privacy law defines "consumer" in a way that largely stops at HR's doorstep. Employees, applicants, and commercial actors are out, which means the profiling opt-out under Colorado's privacy law stays outside the interview room. But Colorado's AI Act calls employment and opportunity decisions "consequential" without linking that label to the opt-out rights under the privacy law. The Colorado Privacy Act's opt-out simply does not govern employers making hiring decisions.
The AI Act still tells HR teams how to behave. Before a decision, they owe people a plain notice that explains the system's purpose and role. After an adverse outcome, employers owe applicants specifics: the reason for the decision, a path to correct inaccurate data, and an appeal that includes human review when feasible. The law layers on a reasonable-care duty to prevent algorithmic discrimination and requires a risk program with impact assessments at launch, every year, and after material changes. Every AI tool used for hiring decisions must therefore support four core rights: notice, reasons, correction, appeal.
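One way to keep those duties honest is to track them per decision. The following is a minimal sketch in Python; the record fields and helper names are assumptions for illustration, not anything the Colorado AI Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class HiringDecisionRecord:
    """Illustrative record of the four core rights for one AI-assisted decision.

    The schema is hypothetical; the statute does not prescribe one.
    """
    candidate_id: str
    pre_decision_notice_sent: Optional[datetime] = None  # plain notice of the system's purpose and role
    adverse_outcome: bool = False
    reasons_given: list = field(default_factory=list)        # specific reasons for an adverse decision
    correction_requests: list = field(default_factory=list)  # candidate requests to fix inaccurate data
    appeal_opened: bool = False
    human_reviewer: Optional[str] = None                 # human review of the appeal, when feasible

    def unmet_obligations(self) -> list:
        """List the obligations that still look unmet for this decision."""
        gaps = []
        if self.pre_decision_notice_sent is None:
            gaps.append("pre-decision notice")
        if self.adverse_outcome and not self.reasons_given:
            gaps.append("statement of reasons")
        if self.adverse_outcome and not self.appeal_opened:
            gaps.append("appeal with human review")
        return gaps
```

A record like this doubles as evidence: if `unmet_obligations()` comes back empty at the end of the process, the four rights were at least offered, and logged.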
If you're building to this regulation, think like a rail operator laying a dedicated HR track. Map your controller/processor roles to the Act's developer/deployer duties so you know who does what. Demand model documentation, testing results, and incident reports from vendors. Give clear notices that don't suggest an opt-out you do not have to deliver to an applicant. Log the reasons you gave, the corrections you made, and the appeals you heard. And align the program to industry standards (NIST's AI RMF) to evidence reasonable care when processing candidates' applications.
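The vendor-facing piece of that program can be as simple as a checklist that blocks go-live until every artifact is in hand. This is a sketch under assumed item names, not a list drawn from the Act.

```python
# Hypothetical vendor-intake checklist for a deployer. Item names are
# assumptions chosen for the sketch, not statutory language.
VENDOR_INTAKE = {
    "model_documentation": False,   # intended use, known limitations
    "testing_results": False,       # bias and performance testing from the developer
    "incident_reports": False,      # known incidents of algorithmic discrimination
    "impact_assessment": False,     # deployer assessment at launch, annually, and after material changes
}


def ready_to_deploy(checklist: dict) -> bool:
    """True only when every artifact has been collected and reviewed."""
    return all(checklist.values())
```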
California heads down a different road. The state's privacy regulator treats automated systems that make employment and contractor decisions as "significant." That triggers a pre-use Automated Decision-Making Technology (ADMT) notice, an opt-out for those significant decisions, and response timelines for access and appeals. If someone opts out, you must stop using the ADMT for that person within 15 business days. California's civil-rights rules also cover automated tools used for hiring, promotion, and other personnel actions, and they center the employer's duties on discrimination risk, testing, and documentation.
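That 15-business-day cutoff is the kind of deadline worth computing rather than eyeballing. A minimal sketch, assuming the clock starts the day after the request is received and ignoring holidays, which a real implementation would have to handle:

```python
from datetime import date, timedelta


def admt_opt_out_deadline(received: date, business_days: int = 15) -> date:
    """Latest date to stop using the ADMT for a person after an opt-out request.

    Counts Monday through Friday only; holiday calendars are out of scope here.
    """
    deadline = received
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # 0-4 are Monday through Friday
            remaining -= 1
    return deadline


# Example: an opt-out received on Friday, March 7, 2025 must be honored by March 28, 2025.
print(admt_opt_out_deadline(date(2025, 3, 7)))  # 2025-03-28
```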
In Colorado, build to the sequence: notice, reasons, correction, appeal. In California, stack two regimes. First, the ADMT rules kick in when automation makes or substantially makes a significant employment decision; that means pre-use notice, opt-out, access and explanations, plus risk assessments. Second, the civil-rights framework runs in parallel, policing discrimination inside the same hiring flow. For opt-out rights, keep in mind that processes with robust human appeals, and certain admission, acceptance, hiring, or allocation decisions, come with their own guardrails. Design the workflow and notices to fit within those contours.
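Stacking the regimes is easier when the obligations are enumerated somewhere explicit. The mapping below is illustrative only; the obligation names and groupings are assumptions meant to show the structure, not a restatement of either regime.

```python
# Illustrative obligation sets; names and groupings are assumptions for the sketch.
COLORADO_AI_ACT = {"pre_decision_notice", "reasons_on_adverse_outcome",
                   "data_correction", "appeal_with_human_review", "impact_assessment"}
CALIFORNIA_ADMT = {"pre_use_notice", "opt_out", "access_and_explanation", "risk_assessment"}
CALIFORNIA_CIVIL_RIGHTS = {"anti_discrimination_testing", "documentation"}


def applicable_obligations(state: str) -> set:
    """Union of every regime that stacks in the given state (lowercased name)."""
    if state == "colorado":
        return set(COLORADO_AI_ACT)
    if state == "california":
        # The ADMT rules and the civil-rights framework apply to the same hiring flow.
        return CALIFORNIA_ADMT | CALIFORNIA_CIVIL_RIGHTS
    return set()
```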
Zooming out, the hardest part isn't just the rules; it's the labels. People's rights roll in and out not with actual risk but with the label attached to the person. You start as a consumer and become an applicant, and suddenly different statutes toggle on and off. That creates gaps for people and headaches for compliance teams.
Workarounds creep in too. Feed pseudonymous data to a model and say privacy rules don't apply, even though the decision still lands on a real person. Transparency shrinks. Error-correction tools dry up. Then comes "human-review theater": a perfunctory click by someone without authority to change the outcome, carried out to check a box rather than fix a decision.
Design the deployment to avoid these pitfalls. Set a plain-English rights baseline that follows a person end-to-end, regardless of labels. Map every legal term in your notices and records back to that baseline so people know what to expect and auditors can see the through-line. Give reviewers real authority and measure override rates so "human in the loop" means something. Log status changes, for instance consumer to applicant to employee.
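Concretely, that baseline can live in a single per-person ledger that survives label changes. Another illustrative sketch, with assumed field and status names; the point is the shape, not the schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PersonRightsLedger:
    """One rights ledger per person, carried across consumer/applicant/employee labels.

    Field and status names are assumptions for the sketch, not statutory terms.
    """
    person_id: str
    status: str = "consumer"
    status_history: list = field(default_factory=list)  # (timestamp, old_status, new_status)
    reviews: int = 0     # human reviews performed
    overrides: int = 0   # reviews that actually changed the outcome

    def change_status(self, new_status: str) -> None:
        """Log the label change instead of silently swapping rights regimes."""
        self.status_history.append((datetime.now(timezone.utc), self.status, new_status))
        self.status = new_status

    def record_review(self, outcome_changed: bool) -> None:
        self.reviews += 1
        if outcome_changed:
            self.overrides += 1

    @property
    def override_rate(self) -> float:
        """An override rate stuck at zero across many reviews hints at human-review theater."""
        return self.overrides / self.reviews if self.reviews else 0.0
```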
For lawmakers and regulators, two durable paths beat today's patchwork. One: build sector-specific AI rules with clear exemptions from conflicting privacy duties so HR teams do not have to play regulatory Twister. Two: harmonize definitions so people keep the same core rights across an automated decision, no matter where they stand on the org chart. Either route gives workers and employers a steadier map. That may be the only way to keep the on-ramps open and the guardrails real.


