Future of AI

Bridging the AI divide: why business needs civil voices

By Zoe Colosimo, Chief Operating Officer at Neighbourly

As AI reshapes industries at breakneck speed, a critical voice is being left behind. A recent survey of over 1,500 UK small charities and community groups reveals that 58% aren’t using AI at all, despite its potential to streamline services, unlock funding and improve community engagement. While the private sector accelerates adoption, the third sector risks being excluded from this digital revolution.

This growing divide should concern us all. Charities are often closest to those most at risk from AI misuse or exclusion; if their voices are absent from AI development, both innovation and ethical integrity will ultimately suffer.

Business can bridge the gap

Through our research at Neighbourly and with other industry partners, I’ve seen firsthand how vital it is for businesses to step up. Companies have the reach, expertise and resources to act as a bridge – raising civil society voices as they lead on AI innovation.

The World Economic Forum’s Future of Jobs report shows 60% of employers expect AI to transform their operations by 2030. But many charities face significant barriers, including a lack of technical skills (56%), limited funding (44%) and low awareness of AI’s capabilities (49%), according to Neighbourly’s survey.

Charities are not just beneficiaries in this equation—they have a crucial role to play in helping businesses develop AI systems that are ethical, inclusive and grounded in real-world impact. They offer lived insight into the needs of vulnerable communities and can act as independent stakeholders, advocating for fair and transparent systems.

What is responsible AI?

Responsible AI is about more than compliance. It’s about designing, developing and deploying AI systems that are safe, fair and aligned with human values. Microsoft’s Responsible AI Standard outlines six key principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency and Accountability.

Yet, one of the most persistent issues in AI development is a lack of stakeholder engagement, particularly from civil society. For instance, charities have no formal role in shaping the UK’s AI Opportunities Action Plan. If policymakers and businesses are truly serious about embedding responsible AI principles, that must change.

Training charities in responsible AI, and why it matters for business

Many smaller charities lack the time, resources or confidence to explore how AI could help them. But businesses can help plug this gap, and stand to benefit in the process.

Around one in five UK companies offer paid volunteer days, according to the Royal Voluntary Service. These programmes offer an ideal opportunity to deliver AI-focused workshops and training, covering essentials like AI safety, ethics, data literacy, and prompt engineering.

In one recent example, Currys volunteers delivered a training session for Age UK, helping the charity explore AI fundamentals and build a roadmap for using AI in its operations. This kind of upskilling empowers charities to make informed decisions, adopt tools that improve efficiency, and deliver greater impact in their communities.

It’s a win-win. Volunteers also gain valuable, future-proofed skills. A report by Pro Bono Economics and the Royal Voluntary Service found that volunteers can boost their earnings by an average of £2,300 thanks to the skills gained through volunteering. And Microsoft’s Work Trend Index reveals 79% of leaders see AI as a career accelerator, making this a powerful growth opportunity for businesses, too.

Embedding communities in AI development

Charities also serve another vital role: they are trusted connectors to communities. Involving them early in AI development helps businesses test assumptions, avoid bias and design more inclusive products.

Imagine a company building an AI-powered financial services tool. A charity working with migrant communities can provide cultural insights that help ensure the tool is accessible and appropriate. But first, these charities need a baseline understanding of how AI works, which is where business-led support becomes critical.

This isn’t just about ethics—it’s also about effectiveness. When charities are equipped to contribute meaningfully, it leads to better AI, more equitable outcomes and stronger public trust in business.

A business case for AI for Good

Supporting charities in their own AI adoption enables businesses to demonstrate a commitment to ethical innovation. It positions them as values-driven leaders, something that resonates with regulators, customers, and investors alike.

In a world where trust is currency, helping civil society keep pace with AI is not charity – it’s strategy.

Looking ahead

With 78% of companies now using AI in at least one function, according to McKinsey, we are rapidly approaching an inflection point. If we want AI to benefit everyone—not just the well-resourced—then the third sector must be part of the conversation.

By supporting civil society in understanding, shaping and applying AI, businesses can lead with purpose, build trust and ensure that responsible AI is more than a principle – it’s a practice.
