From my experience across both industry and academia, I’ve observed that many interpersonal conflicts are sparked not by a deliberate action or statement, but by an oversight or a clumsily worded sentence that, although said without malice, nevertheless manages to offend.
Even in the healthiest of cultures, these inadvertent transgressions can act as the careless spark that ignites a larger (and much more damaging) fire. If the person who made the offending remark doesn’t understand why it caused offense, or if the remark lingers in the offended party’s mind, it sets the stage for further conflict down the line.
Mediating (and resolving) these conflicts is, as any leader can attest, a nightmare. Part of the challenge is that, as teams become more diverse and more representative of society at large, the scope for these missteps grows exponentially. And, because they’re often unintentional, they’re almost impossible to guard against.
In academia, where we tackle difficult subjects as a matter of daily routine, that risk is even higher. When broaching politics, foreign relations, religion, or ethics in the classroom, the road from polite disagreement to a full-blown argument is shorter than you think.
That is why I’m particularly excited about the potential role of artificial intelligence in helping smooth the rougher edges of academic debate and interpersonal discourse. As someone passionately dedicated to free expression and open academic inquiry, I see AI as a potential solution: one that preserves the substance of a statement while ensuring it’s packaged in a way that’s respectful, inclusive, and considerate.
AI as a Mediator
The idea of using artificial intelligence to mediate conversation doesn’t require the creation of any new technology, nor is it a particularly novel AI use-case. As social media platforms like Twitter, Facebook, and Reddit grew into behemoths with hundreds of millions (if not several billion) users, it was no longer possible to rely on entirely human content moderation.
These platforms turned to complex AI models to identify content they deemed harmful: from posts and images that crossed legal red lines, to messages that constituted misinformation or bullying, to posts that merely infringed upon the terms of service. Although imperfect, and prone to both false positives and false negatives, these models provided a solution that could address the challenges of scale.
Similarly, I could point to products like Grammarly, which offers plug-ins for your browser and Microsoft Word and suggests ways to improve your tone and readability. In many respects, it’s an example of content moderation that a person voluntarily chooses to use.
Grammarly, which many of my friends swear by, provides a safety net for stressed-out workers, and can warn you if your email could be perceived as abrupt or rude. I say “safety net” for a reason: it’s far too easy to write or say something that another person may take umbrage at, even if that wasn’t the intent.
The idea that AI could intercede in human conversation isn’t new, nor is it particularly strange. What I’m proposing is that we actively employ it within our institutions, both when crafting course materials and communications and in our personal exchanges, with the aim of eliminating bias and removing language that, even if said with the best of intentions, may cause offense.
Avoiding Innocent Mistakes
If you lead any institution, it is incumbent upon you to ensure that anyone, irrespective of their race, gender, sexuality, or national origin, can bring their best. This obligation is especially pressing within academia, where your colleagues and students inevitably hail from diverse backgrounds.
The problem is that you don’t have to actively try to discriminate in order to create an environment where people feel unwelcome, or one that inadvertently discourages them from participating. Sometimes, it can be as little as the words you choose.
Language matters. It shapes how we perceive the world around us. We see this in languages where nouns are gendered. In German, the word for “bridge” is feminine, and, according to one study, German speakers often use stereotypically feminine adjectives (like “beautiful” and “elegant”) when describing bridges. Conversely, in Spanish, bridges are masculine, and speakers often skew towards masculine adjectives (like “big” and “strong”).
What does this have to do with AI and inclusivity? Well, if your computer science classes use primarily male examples (“Dave is writing a C++ program” or “Hakim is building a network”), it sends a subtle message to the female students in the class about who is assumed to belong there.
Again, using this type of language doesn’t inherently mean that the person is deliberately trying to exclude female students. It doesn’t mean that the teacher or the TA writing the course materials is, in fact, explicitly and consciously sexist. It might simply mean that they, like all of us, have a blind spot.
This challenge represents an obvious use-case for how AI can help improve inclusivity in the classroom. A teacher could upload their materials into an LLM (large language model) chatbot like ChatGPT or Anthropic’s Claude and ask for ways to ensure that the examples reflect a diverse audience, or to flag aspects of the text that might exhibit bias or fail to represent a broad set of demographics and viewpoints.
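To make that concrete, here is a minimal sketch of what such a review step could look like in code, using the OpenAI Python client. The model name, the prompt wording, and the file name are illustrative assumptions rather than a prescribed workflow; the same idea works just as well through a chat interface, or with Claude or any other capable model.

```python
# A minimal sketch of the "LLM as inclusivity reviewer" idea, using the
# OpenAI Python client. Model name, prompt, and file name are assumptions
# for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_course_material(text: str) -> str:
    """Ask an LLM to flag examples or phrasing that may exclude some students."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are reviewing university course material. "
                    "Point out examples, names, or phrasing that may exclude or "
                    "stereotype any group, and suggest inclusive alternatives. "
                    "Do not change the technical content."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("lecture_notes.md", encoding="utf-8") as f:
        print(review_course_material(f.read()))
```

The important part is the explicit instruction that the technical content must not change: the model is being asked to review the framing of the examples, not to rewrite the lesson itself.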
One of the key advantages of LLMs is that they have been trained on vast quantities (often terabytes) of written text, spanning hundreds of millions of documents and web pages, so they have a large corpus of source material to draw from. That breadth means they can play a role during the earliest stages of the course-material creation process. Someone creating a reading list might, for example, ask for suggestions from diverse authors, or ask for examples of how a certain topic is viewed across cultures and faiths.
Using AI to Set a Positive Tone
We’re all fallible, and we’re all guilty of saying or writing something that upset another person, even if that wasn’t the original intent. In academia, where our jobs are literally to interrogate ideas, and often the most controversial ones, that risk is especially heightened. Although Berkeley I-House makes a point of training our students to engage with contentious topics in a positive and constructive manner, mistakes can still happen.
AI can be a useful proofreader here, identifying phrases that may, from another person’s perspective, come across as abrupt or rude, or that may be culturally insensitive.
Universities are, naturally, split about generative AI: its impact on academic integrity, the question of whether it allows students to outsource the process of learning to an algorithm (and learning is, fundamentally, the main function of a university), and its implications for intellectual property. The fact that these models are built using material taken from people and institutions who haven’t been compensated remains an unresolved issue, and one with challenging ethical considerations.
And, to be clear, we don’t encourage our students to use ChatGPT to write their essays. However, we should also recognize that these tools can play a productive and healthy role in fostering academic engagement: either by acting, as mentioned, as a kind of proofreader, or by leveling the playing field for those who may not speak English as their first language, and who may therefore be reluctant to engage in debate.
At Berkeley International House, our members hail from all over the world. While many come from anglophone countries (not just the UK, Canada, or Australia, but also parts of Africa and Asia), even more do not. In my experience, the latter group often requires deliberate encouragement to participate fully in classroom debates or university-sanctioned online discussions.
Generative AI can help these students find their voice by translating their ideas into text that they feel fully encapsulates their beliefs, opinions, and observations, but that they might not be able to produce themselves, or might lack the confidence to.
It creates an equality of opportunity that simply wasn’t possible before. And for all the sins and ethical quandaries of generative AI and its usage, it’s hard not to feel excited about this.
Creating an Inclusive Future
Although I’ve talked extensively about generative AI, it’s worth emphasizing that other facets of AI could also play a role in fostering open, healthy discussion within higher education.
AI-driven analytics, for example, could be used to identify polarizing topics on social media, allowing academics to craft their materials in a way that addresses those topics with all due sensitivity. Again, this isn’t a particularly novel use-case of AI: marketers have long used AI-driven sentiment analysis to gauge how their companies and products are perceived online.
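As a rough illustration of the underlying technique, the sketch below scores a handful of sample posts with an off-the-shelf sentiment model and looks at how mixed the labels are. The model name, the sample posts, and the "mixed labels means polarizing" heuristic are all assumptions made purely for illustration; real data would have to come from a platform’s API, with the privacy caveats that implies.

```python
# A rough sketch of sentiment analysis over public posts on a topic.
# Model name, posts, and the heuristic are illustrative assumptions.
from collections import Counter
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # assumed off-the-shelf model
)

posts = [
    "This policy is long overdue and will help a lot of students.",
    "Absolutely outraged that anyone would even suggest this.",
    "I can see both sides, but the rollout was handled badly.",
]

labels = Counter(result["label"] for result in classifier(posts))

# A topic where POSITIVE and NEGATIVE labels are both common is a hint
# (not proof) that it is polarizing and may need careful framing in class.
print(labels)
```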
Educators could apply the same kind of analytics to classroom discussions: identifying trends, spotting students who are less likely to participate and who may therefore need more encouragement, and flagging those more likely to spark conflict, who may need guidance from university staff on how to take part in classroom debates in productive, healthy ways.
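Here, too, the mechanics can be very simple. The following sketch, with hypothetical column names and an arbitrary threshold, counts posts per student in a discussion-forum export and flags those posting far less than their peers; anything beyond that, including how the instructor responds, is a human decision rather than a technical one.

```python
# A simple, hypothetical sketch of participation analytics over a
# discussion-forum export. Column names and the threshold are assumptions.
import pandas as pd

# Each row is one post in a course discussion forum.
posts = pd.DataFrame(
    {
        "student": ["ana", "ben", "ana", "chen", "ana", "ben"],
        "words": [120, 45, 80, 15, 60, 30],
    }
)

per_student = posts.groupby("student").agg(
    post_count=("student", "size"),
    avg_length=("words", "mean"),
)

# Flag students well below the median post count as candidates for a
# gentle nudge from the instructor, not for any kind of penalty.
threshold = per_student["post_count"].median() * 0.75
quiet_students = per_student[per_student["post_count"] < threshold]

print(quiet_students)
```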
Nothing about this is inherently new. The things I’ve discussed have been around for some time, in one form or another. The only variable is how we use these technologies. And while it’s natural to view AI with some degree of trepidation, I personally find the idea of unsafe, exclusionary classrooms even more terrifying.
As academics, we have an obligation to explore any avenue that could help. And AI is perhaps the most powerful (and obvious) tool at our disposal.