
Weak supervision to isolate sign language communicators in crowded news videos

58 points | matroid | 1 year ago | vrroom.github.io | reply

48 comments

[+] akira2501|1 year ago|reply
> I believe that we can solve continuous sign language translation convincingly

American Sign Language is not English, in fact, it's not even particularly close to English. Much of the language is conveyed with body movements outside of the hands and fingers, particularly with facial expressions and "named placeholders."

> All this is to say, that we need to build a 5000 hour scale dataset for Sign Language Translation and we are good to go. But where can we find this data? Luckily news broadcasters often include special news segments for the hearing-impaired.

You need _way_ more than just 5000 hours of video. People who are deaf or hard of hearing, in my experience, dislike the interpreters in news broadcasts. It's very difficult, as an interpreter, to provide _worthwhile_ translations of what is being spoken _as_ it is being spoken.

It's more of a bad and broken transliteration that if you struggle to think about you can parse out and understand.

The other issue is most interpreters are hearing and so use the language slightly differently from actual deaf persons, and training on this on news topics will make it very weak when it comes to understanding and interpreting anything outside of this context. ASL has "dialects" and "slang."

Hearing people always presume this will be simple. They should really just take an ASL class and work with deaf and hearing impaired people first.

[+] al_borland|1 year ago|reply
I know an interpreter who is a CODA. Her first language was sign language, which I think helps a lot. I once asked her if she thought in English or ASL and she said ASL.

During the pandemic she’d get very frustrated by the ASL she saw on the news. Her mom and deaf friends couldn’t understand them. It wasn’t long before she was on the news regularly to make sure better information was going out. She kept getting COVID, because she refused to wear a mask while working, because covering up the face would make it more difficult to convey the message. I had to respect the dedication.

[+] matroid|1 year ago|reply
Thanks for the feedback. You raise great points, and this is exactly why we wrote this post: so that we can hear from people where the actual problem lies.

On a related note, this sort of explains why our model is struggling to fit on 500 hours of our current dataset (even on the training set). Even so, the current state of automatic translation for Indian Sign Language is that, in-the-wild, even individual words cannot be detected very well. We hope that what we are building might at least improve the state-of-the-art there.

> It's more of a bad and broken transliteration that if you struggle to think about you can parse out and understand.

Can you elaborate a bit more on this? Do you think that if we make a system for bad/broken transliteration and funnel its output through ChatGPT, it might give meaningful results? That is, ChatGPT might be able to correct for errors, as it is a strong language model.
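To make the idea concrete, here's a minimal sketch of what that two-stage pipeline could look like. Everything here is hypothetical: `build_cleanup_prompt` and the gloss/confidence data are invented for illustration, and the actual LLM call is left as a placeholder rather than assuming any particular API.

```python
# Sketch of the "noisy transliteration -> LLM cleanup" idea from the thread.
# Stage 1 (not shown): a sign recognizer emits (gloss, confidence) pairs.
# Stage 2: wrap the noisy sequence in a prompt and ask a strong language
# model to reconstruct fluent English, flagging low-confidence tokens.

def build_cleanup_prompt(glosses):
    """Turn (gloss, confidence) pairs into a cleanup prompt.
    Tokens below the confidence threshold are bracketed so the
    model knows which words it is allowed to second-guess."""
    tokens = " ".join(g if c >= 0.5 else f"[{g}?]" for g, c in glosses)
    return (
        "The following is a noisy, word-by-word transliteration of a "
        "sign language utterance. Uncertain words are bracketed. "
        f"Rewrite it as fluent English:\n{tokens}"
    )

# Invented example output of a recognizer: (gloss, confidence) pairs.
recognized_glosses = [("YESTERDAY", 0.9), ("STORE", 0.4), ("IX-1", 0.8), ("GO", 0.7)]
prompt = build_cleanup_prompt(recognized_glosses)

# A real system would now send `prompt` to an LLM endpoint, e.g.:
# cleaned = llm(prompt)  # placeholder, not a real API call
```

Whether the LLM can actually recover the intended meaning depends on how much signal survives the recognizer; surfacing the recognizer's confidence in the prompt at least gives the model something to work with.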

[+] umanwizard|1 year ago|reply
> American Sign Language is not English

I'm not sure I understand your point. Chinese is also not English but machine translation of Chinese to English can be done.

You're right that laypeople often assume, wrongly, that a given country's sign language is an encoding of the local spoken language. In reality it's usually a totally different language in its own right. But that shouldn't mean that translation is fundamentally impossible.

[+] WesternWind|1 year ago|reply
Just to note this is for ISL, Indian Sign Language, not ASL, American Sign Language.
[+] bluGill|1 year ago|reply
Lifeprint.org has plenty of free ASL courses taught by a deaf person. Highly recommended for everyone, but as with any language it takes a lot of study to be useful.
[+] kobalsky|1 year ago|reply
> It's more of a bad and broken transliteration that if you struggle to think about you can parse out and understand.

It seems to be more common to see sign language interpreters now. Is it just virtue signaling to have that instead of just closed captions?

[+] voidingw|1 year ago|reply
The blog post references translating between English and Indian Sign Language (ISL). I interpreted that to mean translating between spoken English and ISL, not ASL and ISL.

Regardless, I’m curious how (dis)similar ISL is to ASL.

[+] egberts1|1 year ago|reply
Using news broadcasts as training data to populate an LLM sets a poor precedent.

Repetition of a sign usually indicates an additional emphasis.

All dialects need to be covered, with each sign multiply mapped to its word.

Furthermore, YouTube has an excellent collection of really bad or fake ASL interpreters in many news broadcasts; so bad, really really bad, worse than the Al Gore hanging-chad news broadcasts or the "hard-of-hearing" inset box during Saturday Night Live's news segment.

You still need an RID-certified or CDI-certified ASL interpreter to vet the source.

https://m.youtube.com/watch?v=GwSh0dAaqIA

https://rid.org/certification/available-certifications/

[+] zie|1 year ago|reply
First: I sign ASL, not the ISL the OP is talking about.

In the ASL world, most news translations into ASL are delayed or sped up from the person talking and/or the captions if they happen to also be available.

You are going to have sync problems.

Secondly, it's not just moving the hands: body movements, facial expressions, etc. all count in ASL, and I'm betting they count in ISL as well.

Thirdly, the quality of interpretation can be really bad. Horrendous. It's not so common these days, but it was fairly common that speakers would hire an interpreter and mistakenly hire someone willing to just move their arms randomly. I had it happen once at a doctor's office. The "interpreter" was just lost in space. The doctor and I started writing things down and the interpreter seemed a little embarrassed at least.

Sometimes they hire sign language students; you can imagine hiring a first-year French student to interpret for you, it's no different really. Sometimes they mean well, sometimes they are just there for the paycheck.

I bet it's a lot worse with ISL, because ISL is still very new: most students are not taught in ISL, and there are only about 300 registered interpreters for millions of deaf people in India. https://islrtc.nic.in/history-0

We are still very much struggling with vocal-to-English transcription using AI, despite loads of work from lots of companies and researchers. Transcriptions are getting better, and in ideal scenarios are actually quite useful. Unfortunately the world is far from ideal.

The other day I was on a meeting where 2 people were using the same phone. The AI transcription was highly confused and it went very, very wrong.

I'm not trying to discourage you, and it's great to see people trying. I wish you lots of success; just know it's not an easy thing, and I imagine lots of lifetimes of work will be needed to generate useful signed-language-to-written-language services that are on par with the best of the voice-to-text systems we have today.

[+] matroid|1 year ago|reply
Thanks Zie for the message. I'm sorry to hear about your "interpreter" encounter :(

I do think these problems are much, much worse for ISL as you rightly noted.

I think I should have been careful when I said "solve" in my post. But that really came from a place of optimism/excitement.

[+] hi-v-rocknroll|1 year ago|reply
I'm wondering how long it will take for LLMs to be able to generate complete (one of many) sign language(s) on-the-fly and put the various sign language(s) translators out of a job. The crux seems to be that sign language differs significantly from spoken language and includes facial movements and nonverbal emotional tonality.
[+] KeepFlying|1 year ago|reply
The fact that hearing impaired people prefer ASL interpreters to closed captioning tells me automated translation will never be enough.

It's the same reason we prefer interpreters to Google Translate when the message is important.

Interpretation adds the nuance of a human that all the automatic tools miss.

I'm sure it could make a small dent in the market for interpreters but only a small one.

[+] agarsev|1 year ago|reply
Sign language researcher here! I would recommend you look a bit at the scientific literature on the topic. I know it can be a bit overwhelming and hard to separate the actual info from the garbage, so I can try to select a few hand-picked papers for you. IMO, trying to understand sign language oneself, or at least getting basic notions of it, is fundamental to understanding where the real problems lie.

Unfortunately there's no getting away from that. While the scarcity of data is indeed an issue, and your idea is nice (congratulations!), the actual problem is the scarcity of useful data. Since sign language doesn't correspond to the oral language, there are many problems with alignment and with deciding just what to translate to. Glosses (oral-language words used as representations for signs) are not enough at all, since they don't capture the morphology and grammar of the language, which among other things relies heavily on space and movement. Video plus audio or audio captions is nearly useless.
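A toy illustration of the gloss problem (the gloss string and its readings are invented for this example, not taken from a real corpus): one gloss sequence can map to several different sentences, because the grammar carried by space and non-manual markers is simply not written down.

```python
# Glosses record *which* signs were made, but not the non-manual
# markers (brows, mouth, head position) that carry much of the grammar.
# So a single gloss sequence under-specifies the utterance.
gloss = "IX-3 BOOK GIVE-1"  # roughly: third-person point, BOOK, give-to-me

# Several plausible readings of the same gloss string:
possible_readings = [
    "He gives me the book.",         # neutral face: plain statement
    "Does he give me the book?",     # raised brows: yes/no question
    "He finally gave me the book!",  # head/mouth movement: emphasis
]
```

A translation model trained only on gloss-level supervision has no way to distinguish these readings, which is one reason gloss-annotated data alone is not enough.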

Good luck with your efforts, this is a fascinating area where we get to combine the best of CS, AI, linguistics... but it's hard! As I said, let me know if you want some literature, by PM/email if you want, and I'll get back to you later.

[+] jallmann|1 year ago|reply
Sign languages have such enormous variability that I have always thought having fluent sign language recognition / translation probably means we have solved AGI.

Detecting the presence of sign language in a video is an interesting subset of the problem and is important for building out more diverse corpora. I would also try to find more conversational sources of data, since news broadcasts can be clinical, as others have mentioned. Good luck.