
As U.S. organizations accelerate AI transformation initiatives, executives face mounting compliance risks across operational AI systems. Adoption continues to surge, and McKinsey’s 2025 AI survey reports that 88 percent of organizations now use AI in at least one business function, up from 78 percent in 2024. As AI moves deeper into core functions, it introduces new exposure across established compliance domains, raising the stakes for data protection, governance, and risk management.
Rising AI Compliance Risks
AI’s foundational technologies prioritize speed and automation. These systems are trained on massive datasets that often lack transparency, traceability, and explainability at the output level, the very controls regulators typically require for high-risk use cases. As a result, internal audit teams must evaluate AI reporting and auditing requirements at every stage of technology deployment. Examples include:
- Black Box Opaqueness: Generative AI models cannot trace their outputs back to specific sources of information and cannot provide a clear “answer path” for any output. Companies in highly regulated industries like financial services, healthcare/insurance, life sciences, and more are unable to fully explain the data sources, weighting logic, and confidence scores behind every AI-generated decision, statement, or prediction. Without a clearly defined answer path for every output, organizations face increasing risk of noncompliance with SOX, HIPAA, FDA 21 CFR, state data privacy laws, and other standards.
- Hallucinations: AI models often present false or fabricated information with confidence. AI “hallucinations” occur when large language models predict answers from patterns in their training data rather than retrieving them from databases or the Internet. Real-world examples of hallucinations include citing fake studies or researchers, listing software features that don’t exist, and referencing fabricated court cases with fictitious judges and litigants.
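Where outputs must be explainable, one practical control is to ground AI answers in a known document set and flag any cited source that falls outside it. The sketch below is illustrative only; the function name and document names are hypothetical, not part of any vendor’s API:

```python
# Illustrative sketch: flag model outputs whose cited sources are not in the
# set of documents actually retrieved for the prompt. Names are hypothetical.

def unverified_citations(answer_sources: list[str],
                         retrieved_docs: set[str]) -> list[str]:
    """Return cited sources that do not appear in the retrieval set."""
    return [s for s in answer_sources if s not in retrieved_docs]

retrieved = {"2024-10K.pdf", "risk-report-Q3.pdf"}
cited = ["2024-10K.pdf", "analyst-note-2025.pdf"]  # second source never retrieved

flagged = unverified_citations(cited, retrieved)
# flagged == ["analyst-note-2025.pdf"] -> route the answer for human review
```

Any answer with unverified citations can then be withheld from regulated uses until a human has reviewed it, giving auditors a documented answer path.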
Lack of data transparency and hallucinations can cause a wide range of problems, including:
Financial or Investor Information – Failure to prevent inaccurate data from being used in investor disclosures, risk reporting, marketing materials, or sales could trigger enforcement actions from regulators like the SEC. And these regulatory groups have their eyes on AI. In a January 2026 speech, the Securities and Exchange Commission’s Director of the Office of Municipal Securities (OMS), David Sanchez, said, “If you are using or plan to use AI in drafting official statements or other investor-facing documents, you should be thinking about what you are doing to ensure that the AI-drafted disclosures are accurate.”
Data Privacy Violations – Companies could trigger violations of the CCPA/CPRA (California), the Colorado Privacy Act, Virginia’s CDPA, or other laws if an AI model improperly accesses or uses personal data. On January 5, 2026, two job seekers filed a proposed class-action lawsuit against Eightfold AI in California district court. Companies like Microsoft and Salesforce use Eightfold AI for vetting potential hires. The plaintiffs allege that Eightfold AI’s hidden Match Score (ranging from 0–5), along with the automated persona summary compiled from LinkedIn, GitHub, Reddit, Twitter, and other sources, constitutes a consumer report under the Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act (ICRAA). They argue they should have been given access to their individual assessment scores and the opportunity to review and correct the underlying information.
False Advertising / Consumer Protection Violations – Marketing materials could include false claims due to hallucinations. A widely cited case occurred during the 2023 launch of Google’s chatbot Bard (now Gemini), which incorrectly stated that NASA’s James Webb Space Telescope had captured the first-ever picture of an exoplanet. In fact, that photo was taken in 2004 by the European Southern Observatory’s Very Large Telescope in Chile. Following the error, Google’s parent company, Alphabet, reportedly lost approximately $100 billion in market value as investors questioned the reliability and readiness of its AI technology.
Privacy and Security Risks – AI applications typically require companies to process extremely large datasets, including personal and sensitive information. Risk areas include model memorization (regurgitating exact phrases from training data), prompt leakage (employees copying confidential data into third-party tools), and shadow AI (AI used without organizational knowledge).
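Prompt leakage in particular can be reduced with a simple pre-send check before text reaches a third-party tool. The sketch below is a simplified illustration (production DLP rulesets are far more extensive); the patterns and names are hypothetical:

```python
import re

# Illustrative "prompt leakage" check: detect prompts that appear to contain
# personal identifiers before they leave the organization. These patterns are
# simplified examples, not a production DLP ruleset.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of PII patterns found in the prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

hits = scan_prompt("Summarize the claim for John, SSN 123-45-6789")
# hits == ["ssn"] -> block or redact before the prompt is sent
```

A matched prompt can be blocked outright, redacted automatically, or logged for security review, depending on the organization’s risk tolerance.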
AI Regulation in the United States
There is currently no comprehensive federal law governing artificial intelligence. As a result, regulation remains fragmented at the state level, and this patchwork approach is likely to continue until uniform federal laws are enacted.
On December 11, 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.” The order directs the Department of Justice (DOJ) to establish a new AI Litigation Task Force tasked with challenging states’ AI laws and policies deemed “unjustified” or overly burdensome. It also instructs federal agencies to work towards a single, uniform federal regulatory framework for AI.
The Legal AI Landscape Across the States
California:
Automated Decision-Making Technology (ADMT) Regulations. Organizations that use ADMT to make significant decisions about consumers (such as employment, housing, credit, education, and healthcare) must disclose this to consumers before collecting their data, allow consumers to opt out of ADMT decisions (with some exceptions), and provide consumers with access to information used to make these decisions. The compliance deadline for existing systems is January 1, 2027; new systems must comply at deployment.
Transparency in Frontier Artificial Intelligence Act (TFAIA / SB 53). Large AI developers that create “frontier models” must complete a safety framework, red-teaming exercises, and report certain safety incidents to state regulators, including incidents caused by harmful hallucinations.
Colorado:
Colorado Artificial Intelligence Act (SB 24-205). Effective June 30, 2026, Colorado’s comprehensive AI law imposes a duty of reasonable care on developers and deployers of high-risk AI systems to prevent algorithmic discrimination. Companies must complete annual impact assessments and risk-management processes, as well as provide consumers with notice and documentation about AI systems.
New York City:
Local Law 144, also known as the “Bias Audit Law,” applies to companies using “automated employment decision tools” to make employment-related decisions. These companies are required to conduct independent bias audits annually, publish summaries of those audits, and notify applicants of their use.
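The calculation at the heart of such a bias audit is the impact ratio: each category’s selection rate divided by the selection rate of the most-selected category. A minimal sketch, using hypothetical applicant data:

```python
# Illustrative impact-ratio calculation of the kind used in bias audits:
# each category's selection rate divided by the rate of the most-selected
# category. The input data here is hypothetical.

def impact_ratios(selected: dict[str, int],
                  applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per category, normalized to the highest rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: round(rates[g] / top, 2) for g in rates}

ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 24},
    applicants={"group_a": 100, "group_b": 100},
)
# ratios == {"group_a": 1.0, "group_b": 0.6}
```

Ratios well below 1.0 signal potential disparate impact; the long-standing four-fifths guideline treats anything under 0.8 as a warning sign worth investigating.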
Texas:
Texas Responsible AI Governance Act. Effective January 1, 2026, Texas’s AI law focuses on transparency requirements and risk management for high-risk applications.
Other states:
Many other states, including Illinois, Utah, Connecticut, and Minnesota, have passed industry-specific regulations. In 2025, 38 states passed roughly 100 AI bills.
What companies should do for compliant AI use:
- Assign executive accountability for AI compliance and auditing. Provide routine reports to the board and conduct role-based training for employees who design, build, procure, or use AI systems.
- Hold third-party vendors to your compliance, privacy, and security standards. Regularly audit contracted tools for risk exposure.
- Ensure informed consent and transparency. Obtain proper consent and provide users with clear, simple ways to opt out of any AI-driven application at any time.
- Establish continuous audit practices. Conduct annual system audits and ad-hoc reviews for high-risk deployments. Even after technology has moved into production, ongoing testing, monitoring, and improvement should be mandatory.
- Practice data minimization. Limit data collection to what is strictly necessary and consider privacy-enhancing technologies (PETs) at the outset of AI implementation.
- Implement strong data security controls. Encrypt personal data used for AI systems, restrict access to sensitive environments, continuously monitor activity, and test incident response plans at least annually.
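Data minimization can start with something as simple as an allow-list of fields plus a pseudonymized identifier, so direct identifiers never reach the AI workflow. The sketch below is illustrative; the field names and salt handling are hypothetical:

```python
import hashlib

# Illustrative data-minimization sketch: keep only the fields an AI workflow
# actually needs and replace the direct identifier with a one-way pseudonym.
# Field names and the salt value are hypothetical examples.
ALLOWED_FIELDS = {"tenure_months", "region", "product_tier"}
SALT = b"rotate-me-per-deployment"  # in practice, kept in a secrets manager

def minimize(record: dict) -> dict:
    """Strip a record down to allow-listed fields plus a pseudonymous ID."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["subject_id"] = hashlib.sha256(
        SALT + record["email"].encode()).hexdigest()[:16]
    return out

raw = {"email": "jane@example.com", "ssn": "123-45-6789",
       "tenure_months": 18, "region": "US-West", "product_tier": "pro"}
slim = minimize(raw)
# slim contains no email or SSN, only allow-listed fields plus a pseudonym
```

Note that salted hashing is pseudonymization, not anonymization: the data remains personal data under most privacy laws, but its exposure if leaked or memorized by a model is sharply reduced.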
Conclusion
Organizations that prioritize responsible AI from the outset are better positioned to navigate evolving regulation. By embedding strong governance processes and internal controls into their AI strategy, companies can enable innovation at scale while minimizing the risk of regulatory penalties, class actions, reputational harm, and market impacts.
Zbyněk Sopuch is the CTO of Safetica, a global leader in data loss prevention and insider risk management, protecting close to 1 million devices in 120 countries. www.safetica.com



