Education

AI as a Bridge: Fostering Safe and Inclusive Dialogue in Higher Education

By Shaun Carver, Executive Director, International House at UC Berkeley

From my experience across both industry and academia, I’ve observed that many interpersonal conflicts are sparked not by a deliberate action or statement, but by an oversight or a clumsily worded sentence that, although said without malice, nevertheless manages to offend.

Even in the healthiest of cultures, these inadvertent transgressions can act as the careless spark that ignites a larger (and much more damaging) fire. If the person who made the remark doesn’t understand why it caused offense, or if the remark lingers in the offended party’s mind, it sets the stage for further conflict down the line.

Mediating and resolving these conflicts is, as any leader can attest, a nightmare. Part of the challenge is that, as teams become more diverse and more representative of society at large, the scope for these missteps grows. And, because they’re usually unintentional, they’re nearly impossible to guard against.

In academia, where we tackle difficult subjects as a matter of daily routine, that risk is even higher. When broaching politics, foreign relations, religion, or ethics in the classroom, the road from polite disagreement to a full-blown argument is shorter than you think.

That is why I’m particularly excited about the potential role of artificial intelligence in helping smooth the rougher edges of academic debate and interpersonal discourse. As someone passionately dedicated to free expression and open academic inquiry, I see AI as a potential solution: one that preserves the substance of a statement while ensuring it’s packaged in a way that’s respectful, inclusive, and considerate.

AI as a Mediator

The idea of using artificial intelligence to mediate conversation doesn’t require the creation of any new technology, nor is it a particularly novel AI use-case. As social media platforms like Twitter, Facebook, and Reddit grew into behemoths with hundreds of millions (if not several billion) users, it was no longer possible to rely entirely on human content moderation.

These platforms turned to complex AI models to identify content they deemed harmful: posts and images that crossed legal red lines, messages that constituted misinformation or bullying, or simply posts that infringed upon the terms of service. Although imperfect, and prone to both false positives and false negatives, these models provided a solution that could address the challenges of scale.

Similarly, I could point to products like Grammarly, which offers a plug-in for your browser and Microsoft Word and makes helpful suggestions to improve your tone and readability. In many respects, it’s an example of content moderation that a person voluntarily chooses to use.

Grammarly, which many of my friends swear by, provides a safety net for stressed-out workers, and can warn you if your email could be perceived as abrupt or rude. I say “safety net” for a reason: it’s far too easy to write or say something that another person may take umbrage at, even if that wasn’t the intent.

The idea that AI could intercede in human conversation isn’t new, nor is it particularly strange. Instead, I’m proposing that we actively employ it within our institutions, both when crafting materials and communications and in our personal exchanges, with the aim of eliminating bias and removing language that, even if said with the best of intentions, may cause offense.

Avoiding Innocent Mistakes

If you lead any institution, it is incumbent upon you to ensure that anyone, irrespective of their race, gender, sexuality, or national origin, can bring their best. This obligation is especially pressing within academia, where your colleagues and students inevitably hail from diverse backgrounds.

The problem is that you don’t have to actively try to discriminate in order to create an environment where people feel unwelcome — or that inadvertently discourages people from participating. Sometimes, it can be as little as the words you choose.

Language matters. It shapes how we perceive the world around us. We see this in languages where nouns are gendered. In German, the word for “bridge” is feminine, and so German speakers often use stereotypically feminine adjectives (like “beautiful” and “elegant”) when describing them, according to one study. Conversely, in Spanish, bridges are masculine, and so people often skew towards masculine adjectives (like “big” and “strong”).

What does this have to do with AI and inclusivity? Well, if your computer science classes use primarily male examples (“Dave is writing a C++ program” or “Hakim is building a network”), it sends a message to the female students in the class.

Again, using this type of language doesn’t inherently mean that the person is deliberately trying to exclude female students. It doesn’t mean that the teacher or the TA writing the course materials is, in fact, explicitly and consciously sexist. It might simply mean that they — like all of us — have a blind spot.

This challenge represents an obvious use-case for how AI can help improve inclusivity in the classroom. A teacher could upload their work into an LLM (large language model) chatbot like ChatGPT or Anthropic’s Claude and ask for ways to ensure that the examples are representative of a diverse audience, or to identify possible aspects of the text that might, in some way, exhibit bias or fail to represent a broad set of demographics and viewpoints.

One of the key advantages of LLMs is that they have been trained on vast quantities (often terabytes) of written text, spanning hundreds of millions of documents and web pages, so they have a large corpus of source material to draw from. That breadth means they can also play a role during the earliest stages of the course-material creation process. Someone creating a reading list might, for example, ask for suggestions from diverse authors, or for examples of how a certain topic is viewed across cultures and faiths.
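As a rough illustration of that kind of review, here is a minimal sketch using Anthropic’s Python SDK, assuming an API key is available in the environment; the model name, prompt wording, and example material are placeholders of mine rather than a prescribed workflow.

```python
# Minimal sketch: asking an LLM to review course material for unrepresentative
# examples. Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY
# is set in the environment; the model name and prompt are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

course_material = """
Week 3 exercises:
1. Dave is writing a C++ program to sort customer records.
2. Hakim is building a network for his startup's office.
"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; any capable model works
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Review the following course material. Point out examples that "
                "may not feel representative to a diverse class (names, roles, "
                "cultural references) and suggest alternatives, without "
                "changing the technical content.\n\n" + course_material
            ),
        }
    ],
)

print(response.content[0].text)  # the model's suggested revisions
```

The same prompt pattern works with any chatbot interface; the point is simply to ask for a review pass focused on representation rather than on correctness.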

Using AI to Set a Positive Tone

We’re all fallible, and we’ve all said or written something that upset another person, even if that wasn’t the original intent. In academia, where our jobs are literally to interrogate ideas, and often the most controversial ones, that risk is especially heightened; and although Berkeley I-House makes a point of training our students to engage with contentious topics in a positive and constructive manner, mistakes can still happen.

AI can be a useful proofreader here — identifying phrases that may, from another person’s perspective, come across as abrupt or rude, or that may be culturally insensitive.

Universities are, naturally, split about generative AI: over its impact on academic integrity, over whether it allows students to outsource the process of learning to an algorithm (and learning is, fundamentally, the main function of a university), and over its intellectual-property implications. The fact that these models are built using material taken from human beings and institutions that haven’t been compensated remains an unresolved issue, and one with challenging ethical considerations.

And, to be clear, we don’t encourage our students to use ChatGPT to write their essays. However, we should also recognize that these tools can play a productive and healthy role when it comes to fostering academic engagement: either by acting, as mentioned, as a kind of proofreader, or by leveling the playing field for those who may not speak English as their first language and may therefore be reluctant to engage in debate.

At Berkeley International House, our members hail from all over the world. While many come from anglophone countries — not just the UK, or Canada, or Australia, but also parts of Africa and Asia — even more do not. From my experience, the latter group often requires deliberate encouragement to participate fully in classroom debates or university-sanctioned online discussions.

Generative AI can help these students find their voice by translating their ideas into text that they feel fully encapsulates their beliefs, opinions, and observations, something they might not be able to produce on their own, or might lack the confidence to attempt.

It creates an equality of opportunity that, before this point, simply wasn’t possible. And for all the sins and ethical quandaries of generative AI and its usage, it’s hard not to feel excited about this.

Creating an Inclusive Future

Although I’ve talked extensively about generative AI, I think it’s worth emphasizing that there are other potential facets of AI that could play a role in fostering open, healthy discussion within higher education.

AI-driven analytics, for example, could be used to identify polarizing topics on social media, allowing academics to craft their materials in a way that addresses those topics with all due sensitivity. Again, this isn’t a particularly novel use-case for AI: marketers have been using AI-driven sentiment analysis for years to see how their companies and products are perceived online.
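To make that concrete, here is a minimal sketch of the idea, assuming the Hugging Face `transformers` library and its default sentiment model; the posts and topic labels are invented, and the spread of sentiment scores is used only as a crude proxy for how divided reactions to a topic are.

```python
# Rough sketch: using an off-the-shelf sentiment model to surface topics that
# draw strongly divided reactions. Assumes the Hugging Face `transformers`
# library; the posts and topic labels are invented for illustration.
from collections import defaultdict
from statistics import pstdev
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

posts = [
    ("immigration policy", "This debate has become completely toxic."),
    ("immigration policy", "Honestly, the new proposal is a huge step forward."),
    ("campus housing", "The plan seems reasonable to me."),
    ("campus housing", "Fine by me, nothing controversial here."),
]

scores = defaultdict(list)
for topic, text in posts:
    result = sentiment(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[topic].append(signed)

# A wide spread of sentiment within a topic is a crude proxy for polarization:
# people feel strongly, in opposite directions.
for topic, vals in sorted(scores.items(), key=lambda kv: pstdev(kv[1]), reverse=True):
    print(f"{topic}: sentiment spread {pstdev(vals):.2f}")
```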

Similarly, educators could use AI analytics to identify trends within classroom discussions, flagging students who are less likely to participate and may therefore need more encouragement. Such analytics could also identify those who are more likely to spark conflict, and who would therefore benefit from guidance from university staff on participating in classroom debates in productive, healthy ways.
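The simplest version of that participation analysis is barely AI at all, just counting, but it illustrates the kind of signal such analytics would surface. In this hypothetical sketch, the discussion-log format and the “half the median” threshold are assumptions of mine, not features of any real system.

```python
# Hypothetical sketch: flag students who contribute far less than their peers
# in a class discussion. The log format and threshold are illustrative only.
from collections import Counter
from statistics import median

# (student, message) pairs pulled from a discussion forum or transcript
discussion_log = [
    ("Aisha", "I think the policy overlooks rural communities."),
    ("Ben", "Agreed, and the funding model seems unclear."),
    ("Aisha", "There's also a question of enforcement."),
    ("Chen", "Could we compare this with the 2019 reform?"),
    ("Ben", "Good point."),
]
enrolled = ["Aisha", "Ben", "Chen", "Dana"]  # Dana has not posted at all

counts = Counter(student for student, message in discussion_log if message.strip())
typical = median(counts.values()) if counts else 0

# Students posting less than half as often as the median poster may simply
# need a direct invitation to contribute.
quiet = [s for s in enrolled if counts.get(s, 0) < typical / 2]
print("May need extra encouragement:", quiet)
```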

Nothing about this is inherently new. The things I’ve discussed have been around for some time, in one form or another. The only variable is how we use these technologies. And while it’s natural to regard AI with some degree of trepidation, I personally find the idea of unsafe, exclusionary classrooms even more terrifying.

As academics, we have an obligation to explore any avenue that could help. And AI is perhaps the most powerful — and obvious — tool at our disposal.
