Oddmund Braaten, CEO at Interprefy
Advancements in communication technology are helping bring the world closer together without ever having to leave the house.
As a remote-first business, we have colleagues who virtually travel five continents before lunch.
Everyone from large corporations and NGOs to SMEs and freelancers can gain access to live translation services, making it extremely easy to cater to audiences across the globe. It also reduces the need for interpreters to travel, thereby minimizing costs and the impact on the environment.
But as events went digital during the pandemic, the need for language interpreting skyrocketed. Demand is only expected to grow as the world continues to connect and conduct business in hybrid participation setups, and businesses are having to find new solutions to cater to international, dispersed audiences.
Not enough interpreters to meet demand
The benefits of technologies such as remote simultaneous interpretation can be seen not only in the proliferation of online events but in any kind of event setup. From traditional conferences to town halls, training sessions, and press conferences, events have moved away from on-site support from interpreters and conferencing hardware towards cloud-based interpreting done remotely.
The change in how events are being delivered has opened up previously unexplored opportunities to invite and connect with attendees. Many enjoy being able to connect from the other end of the world, but for those who prefer the buzz and excitement of being on the ground, hybrid events are becoming increasingly popular as a way to cater to both sides.
However, the globalization of events could soon lead to a shortfall of skilled interpreters able to meet rising demand.
Can we automate live translation?
To help close this gap, advancements in technology are making it possible to accurately capture, transcribe, and translate speech from one language into another. More specifically, a combination of two different types of artificial intelligence technology is making language solutions more accessible, right at our fingertips and at short notice.
Together, automated speech recognition (ASR) and machine translation (MT) technologies can transcribe and translate live speech. Attendees are provided with real-time closed captions, which they can turn on and off depending on their preferred language.
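To make the two-stage flow concrete, here is a minimal sketch of an ASR-to-MT captioning pipeline. The `recognize` and `translate` functions are stand-ins (assumptions for illustration, not any vendor's API): a real system would call a streaming speech recognizer and a neural translation service.

```python
# Sketch of an ASR -> MT closed-captioning pipeline.
# recognize() and translate() are stand-ins for real streaming
# ASR and neural MT services; the phrase table is illustrative.

def recognize(audio_chunks):
    """Stand-in ASR: each timed 'audio chunk' yields a transcript segment."""
    for start, end, text in audio_chunks:
        yield {"start": start, "end": end, "text": text}

# Toy phrase table standing in for a neural MT model (English -> French).
PHRASE_TABLE = {
    "hello everyone": "bonjour à tous",
    "welcome to the event": "bienvenue à l'événement",
}

def translate(segment):
    """Stand-in MT: dictionary lookup instead of a neural model."""
    text = segment["text"].lower()
    return PHRASE_TABLE.get(text, text)  # fall back to the source text

def captions(audio_chunks):
    """Chain ASR and MT into timed closed captions for attendees."""
    for seg in recognize(audio_chunks):
        yield (seg["start"], seg["end"], translate(seg))

chunks = [(0.0, 1.2, "Hello everyone"), (1.2, 3.0, "Welcome to the event")]
for start, end, text in captions(chunks):
    print(f"[{start:.1f}-{end:.1f}] {text}")
```

In production the two stages run as a stream, so captions appear with only a short delay behind the speaker; the structure, though, is the same hand-off from recognizer to translator shown here.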
It means conferences can now support any kind of participant: not just people who don’t understand the host’s language, but also the deaf and hard of hearing, those who simply prefer to have subtitles or captions, and anyone joining from a loud environment such as a coffee shop or while traveling.
Machine translations are a significant step towards demolishing language barriers and making events inclusive for all. But where do the interpreters come into this?
Taking the robot out of the human
It’s important to remember that, while the technology is perfectly capable of working on its own, there will always be times when interpreters are needed. Just as computer-aided-translation systems led to more work for translators, demand for interpreters will only grow as automation advances.
At large-scale global events, for example, one-to-many translation makes more sense, while technical medical or legal conferences call for the added context and expertise that only a human can provide.
Plus, as we know, technology isn’t always perfect. Sometimes how someone said something matters more than what they said, and it’s hard for a machine to pick up these nuances and translate them. Language also changes fast, and new words, phrases, or abbreviations can appear before the systems have learned them.
There are also technical reasons why a human touch will always be required. Machine translation relies on large volumes of language data to quickly interpret what’s being said. So if the data isn’t there in the first place, or if there isn’t enough of it for less commonly spoken languages, things can quickly become muddled.
Which is why, despite the rapid and significant improvements we’ve seen in machine translations, there continues to be a place for human beings at the table. What machine translations do is help take out the robotic, repetitive elements for interpreters – and for good reason.
Conference interpreting is the third most stressful job in the world according to the World Health Organization, right behind being a fighter pilot and an air traffic controller. Being able to listen, understand, translate, and then talk while constantly switching between languages takes extreme levels of concentration.
The technology is simply there for when interpreters can’t be used. As this BBC article puts it, “The world’s most powerful computers can’t perform accurate real-time interpreting of one language to another. Yet human interpreters do it with ease.”
Interpreters and translators can also help prepare machines for higher accuracy, for instance by creating glossaries of context-specific terms, names, and abbreviations.
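One common way such glossaries steer a machine translation engine is to lock protected terms behind placeholders before translation and restore the approved target-language terms afterwards. The sketch below assumes this placeholder approach; the glossary entries and function names are illustrative, not a description of any specific product.

```python
# Sketch of glossary-based term control for machine translation:
# protected terms are masked before MT and restored afterwards.
# Glossary entries here are illustrative examples.

GLOSSARY = {
    "RSI": "RSI",                              # keep the abbreviation as-is
    "Interprefy": "Interprefy",                # never translate product names
    "closed captions": "sous-titres codés",    # approved French term
}

def apply_glossary(text, glossary):
    """Replace glossary terms with placeholders so MT can't mangle them."""
    protected = {}
    # Longest terms first, so multi-word terms win over their substrings.
    for i, term in enumerate(sorted(glossary, key=len, reverse=True)):
        if term in text:
            token = f"__TERM{i}__"
            text = text.replace(term, token)
            protected[token] = glossary[term]
    return text, protected

def restore_glossary(text, protected):
    """Swap each placeholder for the approved target-language term."""
    for token, translation in protected.items():
        text = text.replace(token, translation)
    return text

masked, protected = apply_glossary("Interprefy streams closed captions", GLOSSARY)
# ...the masked text would pass through the MT engine here...
print(restore_glossary(masked, protected))
```

The point of the placeholder step is that the curated terms never reach the statistical model at all, which is exactly where an interpreter’s domain knowledge feeds back into the automated pipeline.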
Machine translations taking center stage
Machine translations are moving full steam ahead, with automated transcription technology now hooking directly into video conferencing software such as Zoom or Teams.
Organizers can provide live translation to whoever wants to attend, from any country and at rapid speed. Attendees get to enjoy speech in their native language while fully engaging in an event that caters to them.
With always-on translated speech, language barriers could soon become a thing of the past. If we can continue improving on the technology and make the lives of interpreters a little bit easier, then it’s a win-win situation for everyone involved.