(no title)
derf_ | 14 days ago
[0] Which I do agree with, particularly if you need it to be higher quality or labeled in a particular way: the Fisher database mentioned is narrowband and 8-bit mu-law quantized, and while there are timestamps, they are not accurate enough for millisecond-level active speech determination. It also comprises fewer than 6,000 conversations totaling less than 1,000 hours (x2 speakers, but each is silent over half the time, a fact that can also throw a wrench in some standard algorithms, like volume normalization). And it is English-only.
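A quick sketch of why long silences trip up volume normalization (my own illustration, not from the corpus documentation): if a speaker is silent half the time, the file's overall RMS is well below the RMS of the speech itself, so normalizing the whole file to a target level over-amplifies the actual speech. A silence-aware approach measures level only over active frames. The 0.01 energy threshold and 10 ms frame size here are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 8000                                    # Fisher audio is narrowband (8 kHz)
speech = 0.2 * rng.standard_normal(sr * 5)   # 5 s of synthetic "speech"
silence = np.zeros(sr * 5)                   # 5 s of silence
signal = np.concatenate([speech, silence])

target_rms = 0.1

# Naive: gain computed from RMS over the whole file, silence included.
naive_gain = target_rms / np.sqrt(np.mean(signal ** 2))

# Silence-aware: gain computed only over frames above an energy threshold.
frame = sr // 100                            # 10 ms frames
frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
energy = np.sqrt(np.mean(frames ** 2, axis=1))
active = frames[energy > 0.01]
aware_gain = target_rms / np.sqrt(np.mean(active ** 2))

print(naive_gain, aware_gain)  # naive gain is ~sqrt(2)x larger: speech gets boosted too much
```

With 50% silence the whole-file RMS drops by a factor of sqrt(2), so the naive gain overshoots by exactly that factor; real conversational audio with >50% silence per channel overshoots even more.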
friendzis | 14 days ago
If one asks ~~nicely~~ expensively enough, they can even get isolated multitracks or teleprompter feeds together with the audiovisual tracks. Heck, if they wanted, they could set up dedicated transcription teams for the plethora of podcasts, with the costs somewhere in the rounding-error range. But you can't siphon that off of torrents, and paying for training material goes against the core ethics of the big players.
Too bad you can't really scrape tiktok/instagram reels with subtitles... Oh no, oh no, oh no no no no
tl2do | 14 days ago
[deleted]