
AI Governance and Ethics in the EU and USA

By Richard Sheinis, partner at Hall Booth Smith

The development of the AI regulatory landscape in the EU versus the U.S. has followed a path similar to the development of data privacy laws. While the EU has passed comprehensive data privacy legislation, the GDPR, and more recently the EU AI Act, the U.S. has failed to enact comprehensive federal legislation in either area. The vacuum left by the lack of federal AI legislation in the U.S. is being filled by the states, many of which have proposed AI legislation, and several of which have signed AI legislation into law. Even more so than state data privacy laws, state AI laws can vary significantly, creating a compliance quagmire.

The EU AI Act, which became law in 2024 but is enforceable in stages, takes a risk-based approach. The greater the risk an AI system poses to the rights or treatment of individuals, the stricter the requirements for developing and using it. Certain types of AI systems present such a high risk to individuals that they are strictly prohibited. These include social credit scoring, emotion recognition systems in the workplace and in education, behavioral manipulation, and untargeted scraping of facial images for facial recognition, among others.

The next level is “High Risk AI.” High Risk AI includes medical devices, vehicles, recruitment, HR and worker management, education, critical infrastructure, biometric identification, and the administration of justice. High Risk AI systems must meet strict requirements before they can be deployed, including a fundamental rights impact assessment and conformity assessment, registration in a public EU database, risk management and quality management, data governance, transparency, human oversight, and testing and monitoring for accuracy and cybersecurity. Requirements also differ depending on whether a company is an AI developer, deployer, importer, distributor, or authorized representative.

Below High Risk AI is “General Purpose AI,” which includes systems such as customer service chatbots and generative AI. These are generally AI systems in which the user interacts with a machine. The main requirement for such systems is transparency, so that individuals are informed that they are interacting with an AI system.

Depending on the nature of the violation, penalties under the EU AI Act can reach €35 million or 7% of global annual turnover, whichever is higher.

In the U.S., despite much chatter about AI regulation and the issuance of an AI Executive Order by President Biden, the regulation of AI has thus far been left to the states. While many states have pending AI legislation, only Colorado, California, and Utah have AI laws that have been passed and signed. Each state takes a different approach to AI regulation.

The Colorado AI law, which does not become effective until February 1, 2026, applies only to high-risk AI. Unlike the EU AI Act, Colorado defines high-risk AI as a system used to make a consequential decision within specific industries. These industries include education, employment, healthcare, financing, housing, insurance, government services, and legal services.

The Utah AI Policy Act, which became effective May 1, 2024, applies only to generative AI used in connection with certain occupations that are regulated by the government. California, for its part, has several AI laws, each of which is very narrow. Generally, these laws apply to generative AI and require transparency, AI governance, and risk management practices.

Although the requirements of AI laws and regulations vary, a disparity that may grow as more states and countries adopt AI laws, we can still identify governance and ethical practices that will allow developers and users of AI to comply with most legal requirements, reduce risk, and act ethically. AI governance and ethical best practices include the following:

  1. Know Who You Are: Requirements can differ under AI regulations, especially the EU AI Act, depending on whether a company is a developer, deployer, importer, or distributor. It is important for a company to know where it fits in the regulatory scheme in order to determine which requirements apply to it.
  2. Transparency: Most AI regulations, including those currently in the legislative process, include some form of transparency requirement. Transparency can mean not only disclosing that an AI system is in use, but also explaining the data the system uses and its inner workings.
  3. Bias and Discrimination: There are several ways an AI system can produce biased or discriminatory results. Bias can be embedded in the AI algorithm or in the data to which the algorithm is applied, and AI can produce discriminatory results even absent intentional bias. Bias testing, impact assessments, and ethics policies can help eliminate bias and discrimination.
  4. Policies: A wide variety of AI policies may be appropriate for an organization. Policies can address how AI systems are developed or purchased, the data used in an AI system, the proper use of AI systems by company employees, and the security and privacy of AI systems.
  5. Quality Control: Don’t fall into the trap of AI nirvana. AI systems can make mistakes. Quality control for errors and potential bias should be an ongoing process.
  6. Impact Assessments: Impact assessments are a common requirement of AI laws. Preparing an impact assessment is a good tool for addressing problems that may arise, including privacy, security, bias, and transparency issues.
  7. Education: Educate employees on the proper and ethical use of AI systems. Employees should know what data may be used in an AI system and how any deliverables should be handled.
  8. Know Your Industry: Remember that AI laws are not the only laws that might apply to an AI system. Other laws, such as the GDPR and industry-specific statutes like HIPAA, FERPA, and the Gramm-Leach-Bliley Act, may also apply to the privacy and security aspects of an AI system.
  9. Third-Party Contracts: When contracting with third parties such as developers or distributors of AI systems, contractual provisions such as representations and warranties and indemnification are necessary to protect against the risks created by the use of AI.

The adoption of AI laws in the US and globally is sure to increase at a rapid pace. The proactive development of an AI governance and ethics plan or framework will give companies a head start on complying with AI laws and regulations that exist today as well as those that are on the way.
