News and culture through the lens of Southern California.
Hosted by A Martínez
Airs Weekdays 2 to 3 p.m.

Why computerized translators still have a long way to go




Imagine a world where you could speak to anyone, regardless of their native tongue. 

Picture a reality in which you could walk up to a person and have your words translated in real time. 

A new tech company raising money on Indiegogo says it’s made the holy grail of electronic translators:

https://www.youtube.com/watch?v=NxZfZPNMgBk

But the project’s not without its skeptics. 

Words are just one part of the way we communicate, and even those can mean different things depending on how they're said. 

Spenser Mestel wrote about the project for The Atlantic and joined Take Two to discuss it. 

Why is it so hard to make the perfect machine translator?

I think we’ve come to expect that computers can handle anything that’s data. Computers can handle so much more than they could a year ago, and especially ten years ago. But language isn’t really like data; it’s not a good data set. So trying to work with it becomes very difficult very quickly.

You write that most of what humans learn about people comes not from what they say, but from how they say it. What challenge does that pose to translating machines?

It’s really difficult. The example that I use is the word ‘partner.’ If I tell you I’ve been dating my partner for the past five years, the word is very easy to translate; every language has a correlate. But we understand that the word partner is a deliberate choice. It doesn’t have a gender. It’s communicating a lot of extra information beyond the literal definition of the word partner. Machines don’t understand that, so all of that subtext is lost.
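
To make the "partner" example concrete, here is a minimal sketch using the Hugging Face transformers library and an open-source Helsinki-NLP English-to-Spanish model (both are assumptions for illustration, not tools named in the interview). The literal translation comes out easily, but nothing in the output records that the speaker deliberately chose a gender-neutral word, which is exactly the subtext Mestel says gets lost.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# open-source Helsinki-NLP English-to-Spanish model are installed locally.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

sentence = "I have been dating my partner for the past five years."
result = translator(sentence)

# The literal rendering is easy to produce...
print(result[0]["translation_text"])

# ...but the output is just a string. The fact that the speaker chose the
# gender-neutral word "partner" on purpose, and what that choice signals,
# is not represented anywhere in what the model returns.
```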

Press the blue play button above to hear the full interview.