Building Responsible AI: Staying Ahead in a Regulated World

By Paul Davis, Field CISO at JFrog

AI and ML are having a profound impact on the technology landscape, and enterprises are racing to adapt. Gartner predicts that by 2027, over 90% of new software applications will include machine learning models. While this evolution unlocks innovation, it also attracts scrutiny from governments and regulators worldwide. Development teams now face unprecedented expectations for transparency, accountability, and governance when deploying AI and ML models.

This article will explore how global regulators are shaping the future of model development, and how organisations can stay one step ahead, reducing risk and enabling innovation.

AI/ML Under the Regulatory Microscope

AI/ML development happens at breakneck speed and inherently requires vast datasets, complex algorithms, and continuous model training, significantly widening the scope of risk assessment in software development. Given the sheer power of AI/ML, it’s no surprise regulators are intensifying their scrutiny.

Beyond traditional concerns like code quality and data security, regulators are now assessing intelligent agents for their potential harmful impacts, ethical implications, and societal consequences. For example, regulators have data privacy and intellectual property concerns around how AI/ML models are trained, and are worried about the dangers they create, such as discriminatory outcomes (algorithmic bias) or misinformation (AI hallucinations).

The Regulation Wave Arrives

Regulators in Europe have formalised comprehensive regulations requiring AI-based solutions to be thoroughly assessed and certified before deployment. There are serious ramifications for failing to comply with the European Union Artificial Intelligence Act, which classifies AI systems based on risk levels and imposes strict requirements accordingly. Financial penalties for the most severe violations can reach up to 7% of global annual turnover – figures that could translate to billions of dollars for larger enterprises.

In the US, AI regulations are being defined at the state level (California, Colorado, New York, Texas, and Washington) and the federal level (Executive Order 14110).

Other countries are similarly concerned and are introducing their own requirements, building on these existing laws as well as ISO/IEC 42001. With all these demands to demonstrate compliance, the prospect of managing a whole new supply chain model can be daunting.

Embedding Compliance in MLOps

Imagine a world where the system would not let you promote a software release into production unless it had passed all the required tests. Exceptions could be recorded and tracked, with approvals documented in a single system.

Continuous Compliance Automation offers the solution. By applying controls and enforcing regulations from beginning to end, from initial design to production release, we can eliminate the need for point-in-time checks (i.e., audits). Integrating evidence from all your tools and procedures into a single source of truth enables everyone (developers, AppSec teams, security personnel, auditors, and business owners) to see how the software complies with their specific regulatory requirements. This also benefits ML engineers and data scientists, who can have peace of mind knowing that they’re working with trusted models that have been vetted and conform to policy.
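To make the idea concrete, here is a minimal sketch in Python of what such a promotion gate might look like. The Evidence record, the check names, and the REQUIRED_CHECKS policy are illustrative assumptions, not any particular product’s schema; in practice the evidence would flow in from your CI/CD, scanning, and approval tooling.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Evidence:
        """One piece of compliance evidence, e.g. a test run or scan result."""
        check: str        # e.g. "unit-tests", "sbom-scan", "bias-review" (illustrative names)
        passed: bool
        source: str       # the tool or person that produced the evidence
        recorded_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    # Illustrative policy: the checks a release must satisfy before promotion.
    REQUIRED_CHECKS = {"unit-tests", "sbom-scan", "pii-review", "bias-review"}

    def can_promote(evidence: list[Evidence],
                    approved_exceptions: frozenset[str] = frozenset()) -> bool:
        """Allow promotion only when every required check has passing
        evidence or a documented, approved exception."""
        passing = {e.check for e in evidence if e.passed}
        unresolved = REQUIRED_CHECKS - passing - approved_exceptions
        if unresolved:
            print(f"Promotion blocked; unresolved checks: {sorted(unresolved)}")
            return False
        return True

    # Example: pii-review failed and carries no exception, so promotion is blocked.
    evidence = [
        Evidence("unit-tests", True, "ci-pipeline"),
        Evidence("sbom-scan", True, "dependency-scanner"),
        Evidence("pii-review", False, "appsec-team"),
    ]
    can_promote(evidence, approved_exceptions=frozenset({"bias-review"}))

Because the decision is a pure function of the collected evidence, the same check can run in a CI job, a promotion API, or an auditor’s review, which is what turns point-in-time audits into continuous ones.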

Implementing an approach that combines consistent tools and processes with a reliable path to production creates a trusted environment that automatically generates information demonstrating adherence to regulations and compliance obligations. As some of my fellow CISOs are beginning to recognise, the most effective way to integrate security is to automate it through a methodical process.Ā 

Leveraging Trusted AI/ML Development

AI, ML, LLM, and GenAI are here to stay. With the upcoming challenge of agentic AI, we need to build the foundations of a secure approach to managing risk. This includes addressing traditional concerns like vulnerabilities, personally identifiable information (PII), and business risks, while also developing the flexibility to adjust to new demands as the world confronts emerging threats generated by this new intelligence landscape. Start today, since tomorrow won’t wait.

A comprehensive AI/ML model lifecycle management approach creates a trusted environment for developers to build and fine-tune models. Effective governance requires end-to-end visibility and control over AI model deployment, including usage and permissions. By incorporating evidence collection throughout the process, teams can document every step taken as models progress into production, supporting compliance, transparency, and trust.
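As a simplified illustration of such evidence collection, the sketch below hash-chains each lifecycle event so that any later tampering with the trail is detectable by auditors. The stage names and fields are assumptions made for the example, not a standard schema or a specific vendor’s API.

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_stage(trail: list[dict], model_id: str, stage: str, details: dict) -> None:
        """Append one lifecycle event, chained to the previous entry's hash
        so any later modification of the trail is detectable."""
        prev_hash = trail[-1]["entry_hash"] if trail else "genesis"
        entry = {
            "model_id": model_id,
            "stage": stage,            # e.g. "trained", "validated", "deployed"
            "details": details,        # e.g. dataset digest, review outcome, approver
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        trail.append(entry)

    # Illustrative journey of one hypothetical model version into production.
    trail: list[dict] = []
    record_stage(trail, "credit-model-v3", "trained",
                 {"dataset_digest": "example-digest", "framework": "scikit-learn"})
    record_stage(trail, "credit-model-v3", "validated",
                 {"bias_review": "passed", "approver": "appsec-team"})
    record_stage(trail, "credit-model-v3", "deployed",
                 {"environment": "production"})

Verifying the trail is the mirror image: recompute each entry’s hash and confirm it matches the next entry’s prev_hash, all the way back to the first record.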
