
Researcher hopes to teach infants with cochlear implants to speak – with an app

Kids who've never heard need 'habilitation' – they've never had a skill to rehabilitate

By Richard Chirgwin, 26 Mar 2017

Getting an AI to understand speech is already a tough nut to crack. A group of Australian researchers wants to take on something much harder: teaching once-deaf babies to talk.

Why so tough?

Think about what happens when you talk to Siri or Cortana or Google on a phone: the speech recognition system has to distinguish your “OK Google” (for example) from background noise; it has to react to “OK Google” rather than “OK something else”; and it has to parse your speech to act on the command.

And you already know how to talk.
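
To make those stages concrete, here's a toy sketch of the classic wake-word flow – hypothetical code, not any vendor's actual pipeline – with the three jobs from the paragraph above marked in comments:

```python
# Toy wake-word pipeline. Everything here is hypothetical illustration,
# not a real assistant's API: (1) separate speech from background noise,
# (2) react to the wake phrase and nothing else, (3) parse the command.

BACKGROUND_RMS = 0.02            # assumed noise floor for stage (1)
WAKE_PHRASE = ("ok", "google")   # the phrase stage (2) reacts to

def pipeline(frames):
    # Stage 1: keep only frames clearly louder than background noise.
    speech = [f for f in frames if f["rms"] > BACKGROUND_RMS * 3]
    words = tuple(f["word"] for f in speech)  # stand-in for an acoustic model
    # Stage 2: react to "OK Google", not "OK something else".
    if words[:2] != WAKE_PHRASE:
        return None
    # Stage 3: parse the remaining speech into a command to act on.
    verb, *args = words[2:]
    return {"command": verb, "args": args}

frames = [{"word": w, "rms": 0.3} for w in ("ok", "google", "set", "alarm")]
print(pipeline(frames))  # {'command': 'set', 'args': ['alarm']}
```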

The Swinburne University team working on an app called GetTalking can't make even that single assumption, because they're trying to solve a different problem. When a baby receives a cochlear implant to take over the work of their malfunctioning inner ear, he or she needs to learn something brand new: how to associate the sounds they can now hear with the sounds their own mouths make.

Getting those kids started in the world of conversation is a matter of “habilitation” – no “rehabilitation” here, because there isn't a capability to recover.

GetTalking is the brainchild of Swinburne senior lecturer Belinda Barnet, and the genesis of the idea was her own experience as mother to a child with a cochlear implant.

Children interact well with apps. Can one teach children to talk? Image: Belinda Barnet

As she explained to The Register: “With my own daughter – she had an implant at 11 months old – I could afford to take a year off to teach her to talk. This involves lots of repetitive exercises.”

That time and attention, she explained, is the big predictor of success.

In the roughly 10 years since it became standard practice to provide implants to babies at or before 12 months of age (fully funded by Australia's national health insurance scheme Medicare since 2011), 80 per cent of recipients achieve speech within the normal range.

Belinda Barnet, Swinburne University

What defines the 20 per cent that don't get to that point? Inability, either because of family income or distance from the city, to “spend a year sitting on the carpet with flash-cards”.

That makes it hard for parents in rural or regional locations, or for low-income families, Barnet said.

The idea for which Barnet and associate professor Rachael McDonald sought funding looks simple: an app to run on something like an iPad that gives the baby a bright visual reward for speaking.

However, it does test the boundaries of AI and speech recognition, because of a very difficult starting point: how can an app respond to speech when the baby has never learned to speak?

Speech recognition: ongoing quest

Apple never revealed the price it paid to acquire the team that developed Siri, but rumours of US$150 million don't sound unreasonable – and Siri takes its input from someone who knows how to speak.

For all the effort that's gone into speech recognition and AI, the task remains hard enough that it's been automated for only a couple of per cent of the world's languages.

Leon Sterling, a Swinburne computer science researcher, had his interest piqued as a member of the university panel assessing the project, and is now bringing long experience of AI research to it.

He explained the hidden complexities behind what needs to present itself as a simple app.

“You've got to get the signal, you have to extract the signals, separate them from the background noise, the parents speaking, et cetera.”

Swinburne's Leon Sterling

Most of those problems have precedent, but GetTalking needs yet more machine learning – like trying to measure the child's engagement with the app. “You've got to look at the ability to observe, to tag video strings together with audio strings.”
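
A hedged sketch of what “tagging video strings together with audio strings” could look like: pairing each detected vocalisation with whatever the video says the child was doing at that moment. The event format here is invented for illustration.

```python
# Pair each audio event with the nearest video annotation in time, so a
# vocalisation can be credited only when the child was engaged with the
# app. The event tuples are invented for illustration.

def tag_events(audio_events, video_events, window=0.5):
    """audio_events / video_events: lists of (time_seconds, label).
    Returns (time, sound, video_label), with video_label=None when no
    annotation falls within `window` seconds of the sound."""
    tagged = []
    for a_time, sound in audio_events:
        nearby = [(abs(v_time - a_time), v_label)
                  for v_time, v_label in video_events
                  if abs(v_time - a_time) <= window]
        v_label = min(nearby)[1] if nearby else None
        tagged.append((a_time, sound, v_label))
    return tagged

audio = [(3.2, "da"), (7.8, "ahh")]
video = [(3.1, "looking_at_screen"), (7.6, "looking_away")]
print(tag_events(audio, video))
# [(3.2, 'da', 'looking_at_screen'), (7.8, 'ahh', 'looking_away')]
```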

The team understands that an app can't replace a speech therapist or parent, but only support them – and that adds new complexities like “building in the knowledge of how children interact with physiotherapists. You need to understand the developmental stages of children when they're interacting with the app.”

Smashing pumpkins

Barnet elaborated on other ways child development interplays with what the app and the AI need.

“When a child has not heard any sound, they don't understand that a noise has an effect on the environment. So the first thing has to be a visual reward for an articulation.”

At 12 months, she continued, children respond well to visual rewards – and even an “ahhh” or “ohhh” should get a response from the app, if (a big if even for machine learning) it's a deliberate articulation.

There "has to be a visual reward for an articulation"

So after distinguishing between speech and “the kid threw a bit of pumpkin at the screen”, the app has to respond at a second stage, called “word approximation”. Here, the system has to at once recognise that “da” might be an approximation for “daddy” (with reward), and support the child's development from approximation to whole words.

“That's quite difficult. That needs to be cross-matched with thousands of articulations from normally-speaking babies,” Barnet explained.
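
As a toy illustration of that matching-and-reward step – the prefix rule below is a text-level stand-in for models trained on those thousands of cross-matched articulations:

```python
# Toy word-approximation matcher: treat a babbled sound as an
# approximation of a target word if it is a prefix of that word, then
# reward it, say the full word back, and show a matching picture. A
# real system would match acoustically, not on text.

TARGET_WORDS = ["daddy", "mummy", "ball", "dog"]

def match_approximation(articulation, targets=TARGET_WORDS):
    candidates = [w for w in targets if w.startswith(articulation)]
    return min(candidates, key=len) if candidates else None

def respond(articulation):
    word = match_approximation(articulation)
    if word:
        # Reward the approximation, re-articulate the whole word, and
        # show the baby a picture of what they're saying.
        return {"reward": True, "say": word, "show_picture_of": word}
    # Any other deliberate articulation still earns the basic visual reward.
    return {"reward": True, "say": None, "show_picture_of": None}

print(respond("da"))  # {'reward': True, 'say': 'daddy', 'show_picture_of': 'daddy'}
```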

Sterling added another layer the system has to learn: “Is 'da' today the same 'da' as the same child said the other day?”
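
One way to frame that question is as similarity between fixed-length acoustic embeddings of the two utterances – an assumed approach, not the project's published method:

```python
# Compare two utterances by the angle between their acoustic embeddings.
# The embeddings would come from a model trained on infant speech; the
# four-dimensional vectors below are toy stand-ins.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_articulation(emb_a, emb_b, threshold=0.85):
    """Treat two utterances as the 'same' sound when their embeddings
    point in nearly the same direction."""
    return cosine(emb_a, emb_b) >= threshold

da_today = np.array([0.9, 0.1, 0.3, 0.0])
da_last_week = np.array([0.85, 0.15, 0.28, 0.05])
print(same_articulation(da_today, da_last_week))  # True
```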

Swinburne's BabyLab will help here, by supporting the collection of speech samples the GetTalking team needs.

Those samples will help GetTalking respond to the word-approximation by re-articulating the correct word, “and show the baby a picture of what they're saying”.

AI not ready to replace people

As both Barnet and Sterling emphasised, it's impossible to replace the role of the speech therapist or parent.

“I've been working in AI research for 35 years,” Sterling said. “People have consistently overestimated what they expect.”

Rather than outright automation, Sterling says, most of the time what matters is to provide AI as an aid for people – “how to make a richer experience for people, to help people with their environment”.

In the case of GetTalking, one thing he reckons the AI behind the app will do well is diagnose whether or not the child is making progress.

“It's a co-design problem; you work with speech therapists, parents, kids – and see what works”, Sterling said.

GetTalking is in its early stages, with support from the National Acoustic Laboratories (the research arm of Hearing Australia). After the app development stages, GetTalking will need a clinical trial to demonstrate its effectiveness. Those aren't cheap, but Barnet said she hopes to secure federal funding at that point.

Since disadvantage is so strongly associated with holding back children who receive the implants, Barnet's hope is that GetTalking could be free to those who need it.

The full team is Swinburne's associate professor Rachael McDonald; Dr Belinda Barnet; professor Leon Sterling; associate professor Jordy Kaufman; associate professor Simone Taffe and Dr Carolyn Barnes; and National Acoustic Laboratories' Dr Teresa Ching and Dr Laura Button. ®
