
Accents in latent spaces: How AI hears accent strength in English
by ilyausorov
We work with accents a lot at
BoldVoice, the AI-powered accent coaching app for non-native English
speakers. Accents are subtle patterns in speech—vowel shape, timing,
pitch, and more. Usually, you need a linguist to make sense of these
qualities. However, our goal at BoldVoice is to get machines to
understand accents, and machines don’t think like linguists. So we ask: how does a machine learning model understand an accent, and, specifically, how does it judge an accent’s strength?
To begin this journey, we first introduce the “accent fingerprint,” an embedding generated by running an English speech recording through BoldVoice’s large-scale accented speech model. The fingerprint is a tensor of shape:
torch.Size([1, 768, 12])
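
For concreteness, here is a minimal sketch of how such a fingerprint could be extracted with PyTorch. The checkpoint name, expected sample rate, and axis interpretation are assumptions for illustration; BoldVoice’s actual model and API are not public.

    import torch
    import torchaudio

    # Hypothetical stand-in for BoldVoice's accented speech model.
    model = torch.jit.load("accent_encoder.pt")
    model.eval()

    # Load a recording and resample to the rate the model expects (assumed 16 kHz).
    waveform, sample_rate = torchaudio.load("victor.wav")
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

    with torch.no_grad():
        fingerprint = model(waveform)

    print(fingerprint.shape)  # torch.Size([1, 768, 12]); axes assumed (batch, embedding dim, frames)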
In this post we’ll show where the accent fingerprint lives in a
latent space, how distances and directions in that space correspond
to accent similarity and language background, and how we use it to
coach our product management intern Victor, a non-native English
speaker, toward the American English accent of our expert accent
coach Eliza.
The Original Recordings
First off, here’s how Victor sounds when speaking English:
Now have a listen to Eliza reading the same passage. Eliza is
demonstrating our “target” American accent.
Compared to Eliza, who is an American English native speaker, Victor
has a noticeably strong Chinese accent when speaking English.
The Latent Space
To make sense of how the machine learning model understands both of these recordings, we populate a latent space with 1,000 speech recordings sourced from our internal data, representing varied levels of accent strength. Feel free to inspect the 2D visualization of the latent space [1] and hover over the points to see details about each recording.
The full-dimensional latent space contains information about speaker identity, accent, intelligibility, emotion, and other characteristics. This visualization has been pruned to show only the information relevant to “accent strength”, that is, “how strong is the speaker’s accent relative to native speakers of English?”
More specifically, we apply PLS regression to identify the latent-space directions that correlate most with human accent-strength ratings, and, for the purpose of this visualization only, we project each recording onto the first two of those components.
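
As a rough sketch of that reduction, assuming one pooled fingerprint vector and one human rating per recording (the file names, shapes, and pooling are illustrative, not BoldVoice’s actual pipeline):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # X: pooled accent fingerprints; y: human accent-strength ratings.
    X = np.load("fingerprints.npy")    # assumed shape (1000, 768), mean-pooled over frames
    y = np.load("accent_ratings.npy")  # assumed shape (1000,)

    # Find the two latent directions most predictive of accent strength
    # and project every recording onto them for the 2D plot.
    pls = PLSRegression(n_components=2)
    coords_2d, _ = pls.fit_transform(X, y)  # coords_2d: (1000, 2)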
Comments
treetalker
This is cool and one of the applications of LLMs that I'm actually looking forward to: accent training when acquiring a new language, particularly hearing what you would sound like without an accent!
That said, I found the recording of Victor's speech after practicing with the recording of his own unaccented voice to be far less intelligible than his original recording.
Looking forward to seeing the developments in this particular application.
georgewsinger
This is so cool. Real-time accent feedback is something language learners have never had throughout all of human history, until now.
Along similar lines, it would be useful to map a speaker's vowels in vowel-space (and likewise for consonants?) to compare native to non-native speakers.
I can't wait until something like this is available for Japanese.
mckirk
Is it just me, or did the sound files get hugged-to-death?
pjc50
What the vector-space data gets right, and what the human commentary tends not to, is the idea that accents are a complex statistical distribution. You should be careful about the concept of a "default" or "neutral" accent. Telecommunications has spent the 20th century flattening accents together, as has accent discrimination. There's always the tendency for people to say "my accent is the neutral standard against which all others should be measured".
fxtentacle
What a great AI use-case! At first, I felt excited …
But then I read their privacy policy. They want permission to save all of my audio interactions for all eternity. It's so sad that I will never try out their (admittedly super cool) AI tech.
joshjhargreaves
Damn, this is really cool.
vessenes
This is super cool.
A suggestion and some surprise: I’m surprised by your assertion that there’s no clustering. I see the representation shows no clustering, and believe you that there is therefore no broad high-dimensional clustering. I also agree that the demo where Victor’s voice moves closer to Eliza’s sounds more native.
But, how can it be that you can show directionality toward “native” without clustering? I would read this as a problem with my embedding, not a feature. Perhaps there are some smaller-dimensional sub-axes that do encode what sort of accent someone has? (See the toy sketch after this comment.)
Suggestion for the BoldVoice team: if you’d like to go viral, I suggest you dig into American idiolects — two that are hard not to talk about / opine on / retweet are AAVE and Gay male speech (not sure if there’s a more formal name for this, it’s what Wikipedia uses).
I’m in a mixed race family, and we spent a lot of time playing with ChatGPT’s AAVE abilities which have, I think sadly, been completely nerfed over the releases. Chat seems to have no sense of shame when it says speaking like one of my kids is harmful; I imagine the well intentioned OpenAI folks were sort of thinking the opposite when they cut it out. It seems to have a list of “okay” and “bad” idiolects baked in – for instance, it will give you a thick Irish accent, a Boston accent, a NY/Bronx accent, but no Asian/SE Asian accents.
I like the idea of an idiolect-manager, something that could help me move my speech more or less toward a given idiolect. Similarly, England is a rich minefield of idiolects, from Scouse to highly posh.
I’m guessing you guys are aimed at the call center market based on your demo, but there could be a lot more applications! Voice coaches in Hollywood (the good ones) charge hundreds of dollars per hour, so there’s a valuable if small market out there for much of this. Thanks for the demo and write up. Very cool.
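
As a toy illustration of the clustering question raised above: a point cloud can be a single structureless blob, with no clusters at all, and still contain a direction along which a property varies smoothly. This sketch uses purely synthetic data, not BoldVoice’s:

    import numpy as np

    rng = np.random.default_rng(0)

    # Accent strength varies smoothly along one hidden direction;
    # every other dimension is isotropic noise, so nothing clusters.
    strength = rng.uniform(0, 1, size=1000)
    direction = rng.standard_normal(64)
    direction /= np.linalg.norm(direction)
    points = np.outer(strength, direction) + 0.1 * rng.standard_normal((1000, 64))

    # A linear probe still recovers the hidden direction almost exactly.
    recovered, *_ = np.linalg.lstsq(points, strength, rcond=None)
    recovered /= np.linalg.norm(recovered)
    print(abs(recovered @ direction))  # close to 1.0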
adhsu01
Super cool work, congrats BoldVoice team! I've always thought that one of the non-obvious applications of voice cloning/matching is the ability to show a language learner what they would sound like with a more native accent.
asveikau
Victor's problem isn't really the vowels or pacing. The final consonants are soft or not really audible; the most marked example is the /ŋ/ of "long", which I'm not hearing. It sounds closer to "law". In his "improved" recording he hasn't fixed this.
I sometimes see content on social media encouraging people to sound more native or improve their accent. But IMO it's perfectly ok to have an accent, as long as the speech meets some baseline of intelligibility. (So Victor needs to work on "long" but not "days".) I've even come across people who are trying to mimic a native accent but lose intelligibility, where they'd sound better with their foreign accent. (An example I've seen is a native Spanish speaker trying to imitate the American accent's intervocalic T and D, and I don't understand them. A Spanish /t/ or /d/ would be different from most English language accents, but would be way more understandable.)
wbroo
Very interesting! Have you tested for other factors like speaking speed, emotional tone, or microphone quality to see what else is (or isn’t) influencing model perception?
ccppurcell
Oh pssh. There's no such thing as accent strength. There's only accent distance. Accent strength is just an artefact of distance from the accent of a socially dominant group.
Goofy_Coyote
Glad to see BoldVoice here.
I’ve been using it for a few months, and I can confirm it’s working.
sonny3690
This is some insanely cool work. It's going to help so many people.
childintime
I didn't find International English; that would have been interesting.
Also, the USA writing convention falls short, like "who put the dot inside the string." Crazy. Rationals "put the dot after the string". No spelling corrector should change that.
Unearned5161
I'm always very entertained when I'm talking with someone and pick up on some very slight deviation from the "norm" in their accent. I think it shows two things: that it's near impossible to totally wipe that fingerprint of a past tongue, and that our ears are incredibly adept pieces of tooling.
SamBam
Like others recently, I've been extremely impressed by LLMs' ability to play GeoGuessr, or, more generally, to geo-locate random snapshots that you give them, with what seem (to me) to be almost no context clues. (I gave ChatGPT loads of holiday snapshots, screenshotted to remove metadata, and it did amazingly.)
I assume that, with enough training, we could get similarly accurate guesses of a person's linguistic history from their voice data.
Obviously it would be extremely tricky for lots of people. For instance, many people think I sound English or Irish. I grew up in France to American parents who both went to Oxford and spent 15 years in England. I wouldn't be surprised, though, if a well-trained model could do much better on my accent than "you sound kinda Irish."
dgan
Wow, I always wanted an objective measure of my Russian accent in French. I've been living here for a long, long time and some people tell me it's impossible to recognise where I come from. I'd like to put that to the test.
oezi
Did you publish that accent dataset somewhere?
ccheever
This is really cool.
Just had an employee at our company start expensing BoldVoice. Being able to be understood more easily is a big deal for global remote employees.
(Note – I am a small investor in BoldVoice)
runelohrhauge
This is fascinating work. Love seeing how you’re combining machine learning with practical coaching to support real accent improvement. The concept of an “accent fingerprint” is especially clever, and the visualization of progress in latent space really brings it to life. Excited to see where you take this next!