The best way for financial services firms to speed up their adoption of artificial intelligence is to slow down just long enough to ensure that their AI governance policies encourage safe, value-creating innovation. A strong, well-thought-out governance program is what ultimately accelerates the uptake of AI throughout the organization and across the business.
When it comes to adopting the AI tools that are revolutionizing so much of business and society, banking and other heavily regulated financial services industries are lagging. Financial services firms are seen as moving slowly or stuck in perpetual pilots because the stakes are so high. Hidden biases, hallucinations and other flaws in AI models put firms at risk of running afoul of regulations and, even more critically, of undermining client trust in the institution itself. A well-crafted AI governance policy, consistently enforced, is the first and best line of defense against these risks.
Despite widespread misperceptions, a good governance policy is not just a list of things you can’t do. Rather, it’s a compilation of rules, processes and dedicated channels of communication that together create an environment in which it’s safe to experiment and create. An effective governance policy acts as a foundation that allows firms to build and employ next-generation technology while minimizing risk to the enterprise and its customers.
What should an effective policy cover?
Based on our experience helping drive innovation and integrate new technology at Broadridge, we believe a comprehensive AI governance policy should answer six key questions:
- How does the firm define AI and what technologies are covered by the policy? This can and will change over time.
- Which data owned or maintained by the firm can be used in AI applications, and which are expressly prohibited?
- What AI tools, functions and features are approved and appropriate for use in the organization?
- What constitutes a violation of the governance policy, and how should violations be handled?
- How can new AI tools, functions and features be approved for use? Remember, the goal of the governance policy is to foster innovation by creating a safe environment for the adoption of new applications and technology.
- How can the policy remain relevant as technology, client expectations, and regulatory needs evolve?
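As a purely illustrative sketch, the six questions above could map onto a machine-readable policy record that governance and engineering teams share. Every field name and value below is hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIGovernancePolicy:
    # Q1: what counts as "AI" under this policy (expected to change over time)
    covered_technologies: list[str]
    # Q2: data classes approved for AI use vs. expressly prohibited
    approved_data_classes: list[str]
    prohibited_data_classes: list[str]
    # Q3: AI tools, functions and features approved for use
    approved_tools: list[str]
    # Q4: how violations are defined and escalated
    violation_handling: str
    # Q5: how new tools get approved
    approval_process: str
    # Q6: review cadence that keeps the policy current as rules evolve
    review_cadence_days: int = 90

policy = AIGovernancePolicy(
    covered_technologies=["generative LLMs", "ML scoring models"],
    approved_data_classes=["public filings", "anonymized usage metrics"],
    prohibited_data_classes=["client PII", "material non-public information"],
    approved_tools=["internal-copilot"],
    violation_handling="report to the AI governance team; escalate by severity",
    approval_process="submit request; risk review; governance sign-off",
)
```

Encoding the policy in a structured form like this is one way to make the answers auditable and easy to wire into downstream enforcement tooling.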
What goes into a good AI governance policy?
Creating an effective AI governance policy is complex because it touches on many important elements and functions within the organization. Of course, developing a good policy starts with tapping the right expertise from various disciplines, including technical, information security, legal, product and privacy. Senior management must be intentional about bringing the right experts into the process and making sure the people who really understand the technology and its operating and regulatory environments have a prominent voice.
Next, the policy must reflect the firm’s business strategy. AI governance policies cannot be generic. To be effective, the policy must be organic to the firm. It must be grounded in the firm’s business model, goals for growth, efficiency, service, target client base, and other key planks of the strategy.
Finally, the policy must take into account all relevant legal standards. In today’s environment, that can be a tall order. The global legal framework for AI is taking shape in real time. From the many individual articles of the EU AI Act to federal legislation, a spate of executive orders and various laws from states like California, Colorado and Tennessee in the US, the regulatory environment for AI is constantly evolving. As a result, an effective AI governance policy must not only comply with existing rules, it must also include a process for monitoring changes and ensuring ongoing compliance.
Of course, mapping out the global legal landscape is just a start. The firm must next ensure it is complying with each and every provision across jurisdictions. For example, the EU requires financial services firms to document, catalogue and assess risk levels for all AI use cases. More broadly, financial services firms must be able to demonstrate to regulators around the world that they are taking appropriate measures to comply with anti-bias provisions and other rules. A comprehensive governance plan provides a framework from which firms can construct the specific procedures for tracking and meeting all those requirements. The best policies are the ones that, almost paradoxically, stand the test of time while also reflecting technologies, risks and opportunities that are changing rapidly.
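The documentation requirement described above could, for illustration only, take the form of a simple use-case register with risk tiers loosely modeled on the EU AI Act's tiered approach. The use-case names and helper function here are hypothetical, and this sketch is not a compliance tool:

```python
# Tiers loosely echo the EU AI Act's risk-based approach (illustrative only).
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

def register_use_case(register: dict, name: str, description: str, risk_tier: str) -> dict:
    """Record an AI use case with its assessed risk tier."""
    if risk_tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {risk_tier!r}")
    register[name] = {"description": description, "risk_tier": risk_tier}
    return register

register: dict = {}
register_use_case(register, "chatbot-support",
                  "Client-facing FAQ assistant", "limited")
register_use_case(register, "credit-scoring",
                  "Model informing lending decisions", "high")

# High-risk entries can then be surfaced for the deeper assessments
# and documentation regulators expect.
high_risk = [name for name, rec in register.items() if rec["risk_tier"] == "high"]
```

Even a register this simple gives the governance team one place to demonstrate that every use case has been catalogued and risk-assessed.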
From Policy to Practice
If we had one message for financial services firms thinking about how to speed AI integration, it would be this: Creating the governance policy is only the first step. Implementation is equally important.
In our experience, the best way to translate a governance policy into corporate behavior is to partner with internal and external experts, including the AI Governance Team, engineers and tech developers, to create systems and guardrails that enforce the policy automatically in day-to-day operations, pursuant to compliance guidance from the AI governing body. Organizations should pair education and culture initiatives that teach employees about the policy and incentivize compliance with technology solutions designed to keep the company safe from AI-related risks.
Senior leadership should work with the IT team to create security measures, referred to internally as “gates,” that prevent the use of certain AI tools, prevent certain documents and data from being loaded into AI models, and otherwise protect in-house data. For example, some organizations have not yet approved DeepSeek for use. DeepSeek might well be an effective and safe AI model. However, some companies are not yet comfortable with the model, its capabilities, or how its makers might handle user data. In those cases, internal systems should prevent employees from downloading and using DeepSeek on company computers and networks.
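A minimal sketch of such a "gate", assuming a hypothetical proxy that checks outbound AI requests against an approved-tool list and restricted-data markers before anything leaves the network. The tool names and markers are invented for illustration:

```python
# Hypothetical allowlist of approved AI tools and markers for restricted data.
APPROVED_TOOLS = {"internal-copilot", "vendor-llm-a"}
BLOCKED_MARKERS = ("CONFIDENTIAL", "SSN:", "ACCOUNT#")

def gate_request(tool: str, payload: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an outbound AI request."""
    if tool not in APPROVED_TOOLS:
        return False, f"tool '{tool}' is not on the approved list"
    for marker in BLOCKED_MARKERS:
        if marker in payload:
            return False, f"payload contains restricted marker '{marker}'"
    return True, "allowed"

# A request to an unapproved model (e.g. one awaiting review) is blocked
# before any data reaches it.
allowed, reason = gate_request("deepseek", "summarize this memo")
```

The design point is that the check runs automatically in the request path, so the policy is enforced even when an individual employee forgets or ignores it.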
These solutions, which embed the governance policy into workflows, help firms balance the desire for rapid change and AI adoption against the imperative of risk management. Even if it seems like a slow and cumbersome process up front, firms that create and implement governance policies and processes that strike the right mix between these two goals will actually be pushing the accelerator and speeding their rate of AI adoption.