The AI revolution continues to inspire unprecedented business transformation, yet a crucial competitive factor remains overlooked – meaningful inclusion.
While critics may dismiss DEI initiatives as mere ‘woke capitalism’, companies that fail to prioritise socioeconomic diversity in their AI development are effectively sabotaging their own success in the market.
Corporate boardrooms readily acknowledge that their workforces, stakeholders and customers span diverse gender identities, ethnicities, cultural backgrounds and protected characteristics.
However, a profound disconnect exists between this recognition and the composition of teams building the AI systems intended to serve this diverse reality. This disconnect represents not just a moral oversight, but a significant business vulnerability.
The case for inclusive AI beyond ethics
The business imperative for inclusive AI goes far beyond ethical considerations.
An AI model trained on non-inclusive data, or on data skewed towards particular demographic groups, will fail to serve different customer demographics effectively. This makes AI-based decision-making less effective, less trustworthy and potentially open to legal liability.
Ultimately, this can undercut the fundamental goal of AI solutions, which is to promote economic efficiency and growth.
Consider the case of SafeRent, an AI-powered tenant screening tool used by a US letting company. It assigned a low score to a female applicant from an ethnic minority background and, on that basis, recommended that her tenancy application be denied.
There are numerous other examples where people, especially from marginalised sections of society, have been negatively affected by algorithmic bias. Amazon, for instance, stopped using a hiring algorithm after finding it favoured applicants based on words such as ‘executed’ or ‘captured’, which appeared more commonly on men’s CVs. The algorithm was skewing results and delivering unjustified outcomes that limited the company’s ability to identify the best talent.
These examples are not just socially problematic but also commercially damaging.
When AI systems fail certain demographic groups, companies face reputational damage, legal challenges and, most crucially, missed opportunities to serve broad market segments effectively. This is not a matter of political correctness, but of commercial viability.
The competitive advantage of diverse AI teams
Indeed, the positive case for AI diversity is even stronger. Research consistently demonstrates that organisations with more diverse AI development teams outperform their less diverse counterparts.
According to McKinsey’s State of AI report, organisations where at least 25% of AI development employees identify as women are 3.2 times more likely to be AI high performers. Similarly, those where at least one-quarter of AI development employees are racial or ethnic minorities are more than twice as likely to be AI high performers.
Despite this clear competitive advantage, the AI field remains stubbornly homogeneous. The same McKinsey report found that:
- Only 27% of employees developing AI solutions identify as women.
- Only 25% identify as racial or ethnic minorities.
- 29% of respondents say their organisations have no minority employees working on AI solutions.
The World Economic Forum reports even more concerning trends. The percentage of female AI and computer science PhDs has remained stagnant at 20% over the past decade. Meanwhile, women make up only 22% of AI professionals globally, only 13.83% of AI paper authors are women, and a mere 2% of venture capital was directed towards start-ups founded by women in 2019.
These statistics present a clear picture – organisations with homogeneous AI teams are at a significant competitive disadvantage. Without diverse perspectives, both in the development process and in how companies customise off-the-shelf models for their specific needs, AI systems inevitably develop blind spots that mirror the limitations of their creators. These blind spots translate directly into market failures and missed opportunities.
The risk of losing the AI race
Companies with AI teams uninformed about diversity needs face numerous risks that could lead to falling behind in the AI competition.
First, AI systems that only work well for dominant demographic groups miss vast market opportunities among underrepresented populations, severely limiting their market reach.
Second, as governments worldwide develop frameworks for AI governance, systems that demonstrate bias are increasingly likely to face regulatory scrutiny and potential penalties.
Third, organisations known for non-inclusive AI practices will struggle to attract top talent. This is especially true as younger generations of technologists prioritise diversity and ethical considerations in their career choices.
Fourth, as awareness of AI bias grows among consumers, trust in AI systems that fail to serve diverse populations will erode. This will limit adoption and customer loyalty.
Finally, the longer an organisation relies on non-inclusive AI, the more its competitors with inclusive approaches will pull ahead, creating a performance gap that becomes increasingly difficult to close over time.
However, while the risks are clear, there is also a huge opportunity for forward-thinking enterprises.
The reality is that, broadly, progress has been extremely slow. Recent research from the European Commission on diversity at AI conferences shows that while the proportion of female authors has increased over time, the increase is negligible, with minimal movement in representation between 2007 and 2023.
Building more inclusive AI
The implications are clear – this slow pace of change means organisations that proactively address inclusion now can gain significant advantages over those that continue with business as usual.
For those serious about winning the AI competition, inclusion cannot be an afterthought.
And while there is no one-size-fits-all formula for creating more diverse AI operations, there are steps which, taken properly, will help ensure diversity is built into AI development. Here are five to consider…
#1 – Diversifying AI development teams
Organisations should implement targeted recruitment and professional development strategies to increase representation of underrepresented groups within AI teams. Creating inclusive workplace cultures that retain diverse talent is equally important, as is establishing clear career progression paths for all team members to ensure long-term diversity at all levels of seniority.
McKinsey’s report indicates that 46% of organisations have active programmes to increase female participation in developing AI solutions, while only one in three have programmes to increase racial and ethnic minority participation. This represents a significant opportunity for organisations to gain competitive advantage by moving ahead of their peers in this area.
#2 – Implementing robust data practices
Before developing models, companies should audit training data for representational biases that could skew AI performance, whether by excluding certain data or by over-including other types. Either way, such a model would generate skewed results. When necessary, supplementing datasets with additional data from underrepresented groups can help address historical imbalances. Establishing clear data governance processes that include diversity considerations as core metrics ensures that inclusion remains a priority throughout the development lifecycle.
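A representation audit of this kind can be done before any model training begins. The sketch below is a minimal illustration, not a production tool: it assumes tabular records with a hypothetical `demographic_group` field, and flags any group whose share of the data falls well below what a uniform split would give.

```python
from collections import Counter

def audit_representation(records, group_key, tolerance=0.5):
    """Flag demographic groups that are under-represented in a dataset.

    A group is flagged when its share of the data falls below
    `tolerance` times the share it would have under a uniform split.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    expected_share = 1 / len(counts)
    flagged = {}
    for group, n in counts.items():
        share = n / total
        if share < tolerance * expected_share:
            flagged[group] = share
    return counts, flagged

# Hypothetical example: records labelled with an illustrative group field.
records = (
    [{"demographic_group": "A"}] * 80
    + [{"demographic_group": "B"}] * 15
    + [{"demographic_group": "C"}] * 5
)
counts, flagged = audit_representation(records, "demographic_group")
# Groups B and C fall below half the uniform share and are flagged
# for supplementation or re-sampling.
```

In practice the threshold would come from the data governance process described above, and the comparison baseline might be the customer population rather than a uniform split.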
#3 – Testing and validating across demographics
Testing AI systems with diverse user groups before deployment reveals blind spots that might otherwise go undetected until after launch. Companies should establish performance metrics that specifically measure effectiveness across different demographic segments rather than relying solely on aggregate performance data. Creating feedback mechanisms that capture problems experienced by diverse user groups ensures continuous improvement toward more inclusive outcomes.
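The gap between aggregate and per-segment performance is easy to demonstrate. Below is a minimal sketch, assuming labelled test data tagged with a hypothetical group identifier; the point is that a respectable aggregate accuracy can completely hide a failure for a smaller group.

```python
def per_group_accuracy(y_true, y_pred, groups):
    """Compute accuracy per demographic segment alongside the
    aggregate figure, which can mask group-level failures."""
    by_group = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = by_group.get(g, (0, 0))
        by_group[g] = (correct + (yt == yp), total + 1)
    overall = sum(c for c, _ in by_group.values()) / len(y_true)
    return overall, {g: c / t for g, (c, t) in by_group.items()}

# Hypothetical predictions: the model is perfect for the larger
# group A but wrong on every example from the smaller group B.
y_true = [1, 1, 0, 0, 1, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]
overall, by_group = per_group_accuracy(y_true, y_pred, groups)
# Aggregate accuracy is 62.5%, yet accuracy for group B is 0%.
```

Reporting only `overall` would pass a naive launch review; the per-group breakdown is what surfaces the blind spot before deployment.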
#4 – Establishing governance and accountability
Setting clear expectations that inclusive AI is a business imperative, and not a secondary consideration, helps align organisational priorities. Building accountability for inclusive outcomes into development processes ensures that inclusion isn’t sacrificed under pressure to deliver quickly. Regular reporting on inclusion metrics as part of AI performance evaluation makes inclusion visible at the highest levels of the organisation.
#5 – Cultivating AI literacy throughout the organisation
Educating business leaders about the competitive advantages of inclusive AI helps secure ongoing commitment to wider DEI initiatives. Training product managers to recognise and address potential bias issues creates an additional layer of oversight beyond the technical team. Developing cross-functional collaboration between technical and non-technical teams brings diverse perspectives to AI development at every stage of the process.
The path forward
The AI competition will not be won solely through technical innovation. As AI becomes increasingly integrated into business processes and customer experiences, the winners will be those who create systems that work effectively for all potential users.
The research data is clear – diverse teams build better AI systems. Organisations that recognise this reality and take concrete steps to ensure inclusivity in their AI development will gain significant competitive advantages. Those that dismiss diversity as a political or box-ticking matter, rather than a business concern, risk creating AI systems with fundamental limitations that will hinder their performance.
On the other hand, organisations that invest in diverse AI teams, implement inclusive data practices and establish governance frameworks that prioritise inclusion can build AI systems that deliver superior performance across diverse user bases.
In the AI competition, inclusion isn’t just the right thing to do – it’s a key part of how to win.