
AI for all: How technology can be an enabler for inclusivity

The impact of technology on our world is profound, and Artificial Intelligence (AI) has been at the forefront of this transformation for several years. However, this influence can be either beneficial or detrimental, depending on how AI is designed, developed, and deployed.

The development of Generative AI has opened even more possibilities for nearly every sector across the globe, and since 2018 it has become clear that AI can play a significant role not only in promoting productivity and economic growth but also in advancing social good.

Real-world examples of AI for social good

The not-for-profit sector has become a front-runner in using AI to drive positive change and advance its digital transformation. Last year’s Charity Digital Skills report found that 35% of charities were already using AI for certain tasks.

Generative AI, for example, has been vital in developing innovative solutions, and forward-thinking charities have welcomed this evolving technology to improve inclusivity for people with specific accessibility needs.

Below are three real-world examples of how AI is used for social good.

1. Providing highly personalised, inclusive and engaging experiences

Housing the world’s largest collection of functional historic computers and World War II machines, including Enigma and Colossus, The National Museum of Computing welcomes visitors from all over the world. Jacqui Garrad, Museum Director, explains, “Developing TNMOC Mate with Version 1’s AI Labs has enabled us to support those for whom English isn’t a first language, neurodivergent visitors and children, to navigate the museum at their own pace. The AI-developed app transforms the original exhibit descriptions into easy-to-understand text, which can either be read or listened to through tailored audio narration in a preferred language, meaning visitors can also interact and engage with the exhibits in a more personalised manner. This has been hugely beneficial in enabling us to provide an experience which is enjoyable for all.”

2. Levelling the playing field

Similarly, Dyslexia Association Ireland worked with Version 1 to create a Generative AI model, called SimpliText, to help people living with dyslexia, and other learning disabilities, consume content in ways that meet their needs. The tool empowers dyslexic individuals to navigate educational and professional transitions with greater confidence.

Donald Ewing, Head of Education and Policy at Dyslexia Association Ireland, explains, “What’s great about SimpliText is that users can have it in their pocket, built into their phone. They photograph a text or input the text into the tool, and it simplifies it for them.”

“For example, when students need to read scientific papers – which can be very technical and dense in terms of complex information – this can be difficult for someone with dyslexia to understand and communicate. With SimpliText, the content is simplified without losing intellectual power or key technical terminology. This tool levels the playing field for people with dyslexia at school or university, or in the workplace,” he adds.

3. Enhancing global access to critical knowledge

Encephalitis is an inflammation of the brain caused by an infection or autoimmune response, and it affects around 1.5 million people a year. The charity Encephalitis International is exploring AI to expand its global outreach and keep people informed about encephalitis, a potentially life-threatening condition.

“Depending on which part of the brain is inflamed, this can affect cognitive ability, memory and the ability to read or write,” explained Calum Goodwin, Head of Partnerships and Giving Development at Encephalitis International. “We created an AI-based tool to help patients suffering from encephalitis, and individuals who need critical information on the condition, to access accurate, up-to-date material in their own language, matching not just linguistic requirements but differing complexity levels too.”

Calum continued, “We’ve also been working with Version 1 to understand how AI could reduce costs for the charity and free up staff to hold support meetings with people living with this condition. In this sector, many people work overtime to complete routine tasks – tasks which AI could do instead. This would free staff to use their expertise and guidance to advise people.”

Challenges in using AI for social good

As AI continues to evolve rapidly and more organisations learn to deploy this technology at scale, it is crucial to consider the ethical implications of its use.

Below are six key considerations for the ethical use of AI for social good in the not-for-profit sector.

1. Use of personal data

AI systems rely heavily on vast amounts of personal data to analyse patterns and make decisions that affect everything from personalised recommendations to healthcare diagnoses. This reliance has raised public concern about the misuse of personal data, so it is imperative to implement robust privacy safeguards, maintain transparency about data usage, and protect against unauthorised access.

Simon Baxter, Principal Analyst at TechMarketView, advises, “People shouldn’t be so wary of the AI sitting on top of an application; they should be more concerned about how people’s data is being applied. How is data being used to generate your insurance premium, for example? How is your sensitive medical data being controlled? These are areas of more concern than the AI tool itself.”

Jacqui adds, “With TNMOC Mate, the app doesn’t retain any personal data about the user, only their language of choice and age category. Most of the data comes from the museum’s exhibitions.”

2. Avoiding human-induced bias

AI can be a powerful tool for mitigating human-induced bias because it takes a data-driven approach to decision making, but this becomes a problem when the data it uses is itself biased. In the past, AI models have systematically left minorities out of the picture, resulting in data that lacks diverse representation of gender identities, sexual orientations and race, for example.

If we feed algorithms biased data or base their design on flawed values, the outcomes will mirror and amplify these issues. So, when we think about reducing bias in AI models, this must start with the datasets we use to teach them.

Feedback plays a vital role in improving the accuracy of AI models, especially when it comes to potential biases. By collecting user feedback, organisations can identify any errors or inaccuracies and then correct them. This is especially important when working with large datasets, as it can be easy for AI models to overlook or misinterpret certain data.

It is important to implement robust accountability measures and establish comprehensive ethical frameworks that guide the deployment of AI, ensuring the technology contributes positively to societal well-being rather than reinforcing systemic injustices.

3. Measuring accessibility

Feedback also plays a key role in measuring accessibility; without it, it is difficult to ensure that the tool being created is going to be accessible or improve inclusivity. Working with end users during the testing and evaluation stages is important to guarantee that the tool will be inclusive and adapted to their needs.

Donald explains, “Engaging with partners early and remaining engaged is critical. Collecting customer feedback and using this data improves the usability and functionality of the tool too.”

4. Regulations and compliance

Donald explains, “When GDPR regulations were introduced, this was a big change for the third sector, but because many are smaller organisations, they can often be agile and reactive to the regulations. This is the same for AI regulations. It is an iterative process. We can’t expect governments to be ahead of the technology, so all we can do is be responsive and prepared to react to changes and regulations in order to comply ethically.”

It is important that technology vendors take accountability. They need to think about how the design of a product or solution will keep data safe, while remaining agile enough to react to the pace of innovation and to changing regulations across different regions.

5. Retaining the human approach

“AI cannot replace the people who work for charities, as they are at the heart of these organisations, but what it does is enable us to be more impactful,” Calum continues. “At the beginning of our relationship with Version 1, we were thinking about implementing an AI chatbot, but we didn’t want to remove our helpline and support team providing person-to-person support. This is what makes the charity sector different to other sectors. Implementing solutions that help those with specialist requirements could not be achieved without a human understanding of their needs to begin with.”

6. Lacking resources

While it is certainly getting easier for organisations to explore and benefit from AI, fully integrating it and reaping the rewards is still relatively expensive, time-consuming and often dependent on the availability of highly skilled specialists – which many not-for-profit organisations simply don’t have. An effective way to tackle this challenge is to partner with technology vendors who can share the skills and knowledge needed to support an organisation’s investment in AI.

Despite the challenges the third sector can face when it comes to using AI, this technology needs to be seen as an enabler rather than a threat, as it can create positive user experiences and outcomes when implemented correctly. By collaborating with the right partners, following the relevant regulations and ethical guidelines, and working with end users during the creation phase of an AI product or solution, organisations can build a better, more equitable future for all.
