Recently, I was working on a similar project and found that grabbing transcripts too quickly leads to your IP being blocked from fetching them.
I ended up doing the same as this person: downloading the MP4s and then transcribing them myself. I assumed it was some sort of anti-LLM-scraper feature they put in place.
Has anyone used this --write-auto-subs flag and not been flagged after doing 20 or so videos?
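One hedged way to lower the block risk is simple pacing: yt-dlp itself offers `--sleep-requests` and `--sleep-subtitles` flags for this, and the same idea is easy to sketch in plain Python. The 30-second interval in the usage comment is a guess, not a documented YouTube limit, and `fetch_transcript` is a hypothetical stand-in for whatever fetch step you use:

```python
import time

def paced(iterable, min_interval_s):
    """Yield items, sleeping so consecutive yields are at least
    min_interval_s seconds apart (a crude client-side rate limit)."""
    last = None
    for item in iterable:
        now = time.monotonic()
        if last is not None:
            wait = min_interval_s - (now - last)
            if wait > 0:
                time.sleep(wait)
        last = time.monotonic()
        yield item

# Hypothetical usage:
# for url in paced(video_urls, min_interval_s=30):
#     fetch_transcript(url)
```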
It's a good callout. I leverage yt-dlp as a library for downstream tooling (archival of media to long-term storage repositories), and always recommend folks rely on yt-dlp whenever possible due to the ecosystem of folks grinding to keep its extractors current. Its maintainers are both helpful and responsive.
(with that said, I do not want to diminish OP's work in any way; great job! "What I cannot create, I do not understand" - Feynman)
I've found the YT transcripts to be severely lacking sometimes, in both accuracy and features. Speaker identification in particular is really useful if you want to, e.g., summarize podcasts or interviews, so if this project delivers on that, it's definitely better than the YT transcripts.
YouTube already offers AI transcriptions on its site. As another commenter points out, you can grab them with yt-dlp.
And unlike your tool, whose future support is an open question, thousands of users make sure yt-dlp keeps working as Google keeps changing the site (currently 1459 contributors).
If you'd used this in earnest, you'd know YT's default transcripts are not good enough: YouTube fails to transcribe videos fairly often (ok, say 5% of the time), particularly livestreams and videos shortly after release.
The volunteer open-source effort behind youtube-dl and its forks/descendants is so impressive in large part because of how many features they provide and thus have to maintain:
https://github.com/yt-dlp/yt-dlp#usage-and-options
this tool won't provide the list of available thumbnails or settings for HTTP buffer size, but I think that's a pretty reasonable tradeoff.
I tried it on an M1 Pro MBP using Docker. It's quite slow (no MPS), and there are no timestamps in the resulting transcript. But the basics are there. Truncated output:
Fetching video metadata...
Downloading from YouTube...
Generating transcript using medium model...
=== System Information ===
CPU Cores: 10
CPU Threads: 10
Memory: 15.8GB
PyTorch version: 2.7.1+cpu
PyTorch CUDA available: False
MPS available: False
MPS built: False
Falling back to CPU only
Model stored in: /home/app/.cache/whisper
Loading medium model into CPU...
100%|| 1.42G/1.42G [02:05<00:00, 12.2MiB/s]
Model loaded, transcribing...
Model size: 1457.2MB
Transcription completed in 468.70 seconds
=== Video Metadata ===
Title: 厨师长教你:“酱油炒饭”的家常做法,里面满满的小技巧,包你学会炒饭的最香做法,粒粒分明!
Channel: Chef Wang 美食作家王刚
Upload Date: 20190918
Duration: 5:41
URL: https://www.youtube.com/watch?v=1Q-5eIBfBDQ
=== Transcript ===
哈喽大家好我是王刚本期视频我跟大家分享... ["Hello everyone, I'm Wang Gang; in this video I'll share with you..."]
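On the missing timestamps: openai-whisper's `transcribe()` result includes a `segments` list with `start`/`end`/`text` per segment, so SRT output is a small post-processing step. A sketch (the segment dict shape matches whisper's documented output; the sample data below is made up):

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 75.3 -> '00:01:15,300'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """segments: iterable of dicts with 'start', 'end', 'text' keys,
    as found in result['segments'] from whisper.transcribe()."""
    blocks = []
    for i, seg in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_timestamp(seg['start'])} --> "
                      f"{srt_timestamp(seg['end'])}\n{seg['text'].strip()}\n")
    return "\n".join(blocks)
```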
Thanks for sharing. This is exactly the type of utility that vibecoding is for. It takes 5 seconds to ask GPT to write a script to do this, tailored to your specific use case. It's way faster than trying to get someone else's repo up and running.
Many channels I follow, such as Vlad Vexler's, have taken measures so you can't download the transcript with yt-dlp. Furthermore, they don't provide a transcript option on their videos. I assume this is to prevent people from just reading AI summaries, which is annoying in Vexler's case because he talks slowly and meanders around. If I really want to hear his point but don't want to listen to all that, I download the video with yt-dlp and use Whisper to transcribe it.
Curious, if you don't find this "annoying", why are you still following the channel? There must be other YouTube channels that offer similar content but deliver it in a better way.
I'd be really curious to see some sort of benchmark/evaluation of these context resources against the same coding tasks. Right now, the instructions all sound so prescriptive and authoritative, yet it is really hard to evaluate their effectiveness.
"The court held that merely clicking on a download button does not show consent with license terms, if those terms were not conspicuous and if it was not explicit to the consumer that clicking meant agreeing to the license."
paulirish | 7 months ago:
yt-dlp --write-auto-subs --skip-download "https://www.youtube.com/watch?v=7xTGNNLPyMI"
adamgordonbell | 7 months ago:
toomuchtodo | 7 months ago:
mckirk | 7 months ago:

rpastuszak | 7 months ago:
(I'm using it in https://butter.sonnet.io)
Jerry2 | 7 months ago:

entelechy0 | 7 months ago: [deleted]

MysticOracle | 7 months ago:
https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2
For Apple Silicon (MLX) https://huggingface.co/senstella/parakeet-tdt-0.6b-v2-mlx
driscoll42 | 7 months ago:

0points | 7 months ago:

swyx | 7 months ago:
YouTube also blocks transcript exports for some tools, like https://youtubetranscript.com/
Retranscribing is a necessary and important part of the creator toolset.
passivegains | 7 months ago:

totallynotryan | 7 months ago:
dudeWithAMood | 7 months ago:

93po | 7 months ago:

eigenvalue | 7 months ago:
https://github.com/Dicklesworthstone/bulk_transcribe_youtube...
I ended up turning it into a beefed-up version which makes polished written documents from the raw transcript; you can try it at
https://youtubetranscriptoptimizer.com/
Leftium | 7 months ago:
- This python one is more amenable to modding into your own custom tool: https://hw.leftium.com/#/item/44353447
- Another bash script: https://hw.leftium.com/#/item/41473379
---
They all seem to be built on top of:
- yt-dlp to download video
- whisper for transcription
- ffmpeg for audio/video extraction/processing
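That three-tool stack can be sketched as a command pipeline. The flags below are the tools' standard CLI flags, but the filenames and model choice are placeholders, and actually running it requires yt-dlp, ffmpeg, and openai-whisper installed on PATH:

```python
import subprocess

def build_pipeline(url, audio="audio.m4a"):
    """Return the three commands of the common download/convert/transcribe flow."""
    return [
        # 1. yt-dlp: download the best audio-only stream
        ["yt-dlp", "-f", "bestaudio", "-o", audio, url],
        # 2. ffmpeg: resample to 16 kHz mono, whisper's native input rate
        ["ffmpeg", "-i", audio, "-ar", "16000", "-ac", "1", "audio16k.wav"],
        # 3. whisper CLI: transcribe
        ["whisper", "audio16k.wav", "--model", "medium"],
    ]

def run_pipeline(url):
    for cmd in build_pipeline(url):
        subprocess.run(cmd, check=True)
```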
yunusabd | 7 months ago:

pstoll | 7 months ago:
Patient: “Doctor, it hurts when I do this.”
Doctor: “don’t do that”
isubkhankulov | 7 months ago:

yunusabd | 7 months ago:

cmaury | 7 months ago:

sannysanoff | 7 months ago:
https://old.reddit.com/r/ChatGPTCoding/comments/1lusr07/self...
Gonna be lots of posts of selfware like that soon.
Bluestein | 7 months ago:
And, yes, indeed, AI coding is having an order-of-magnitude effect along the lines that "low-code" was treading ...
... also, for less-capable or "borderline" coders, the effort/benefit equation has radically shifted.
labrador | 7 months ago:

rs186 | 7 months ago:

Bluestein | 7 months ago:

dudeWithAMood | 7 months ago:

mikeve | 7 months ago:
I must say, speaker diarization is surprisingly tricky to do. The most common approach seems to be to use pyannote, but the quality is not amazing...
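One practical wrinkle behind that: pyannote gives you speaker turns with timestamps, and whisper gives you text segments with timestamps, so you still have to align the two yourself. A minimal overlap-based alignment sketch (the tuples are simplified stand-ins for the two libraries' actual output objects):

```python
def assign_speakers(segments, turns):
    """segments: [(start, end, text)] from the transcriber;
    turns: [(start, end, speaker)] from the diarizer.
    Label each text segment with the speaker whose turn overlaps it most."""
    labeled = []
    for s0, s1, text in segments:
        best, best_overlap = "UNKNOWN", 0.0
        for t0, t1, speaker in turns:
            overlap = min(s1, t1) - max(s0, t0)  # negative if disjoint
            if overlap > best_overlap:
                best, best_overlap = speaker, overlap
        labeled.append((best, text))
    return labeled
```

In practice this gets messy exactly where diarization itself is hard: overlapping speech and segment boundaries that straddle a speaker change.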
toddmorey | 7 months ago:

lpeancovschi | 7 months ago:

nadermx | 7 months ago:
https://en.m.wikipedia.org/wiki/Specht_v._Netscape_Communica...
MysticOracle | 7 months ago:

arkaic | 7 months ago:

manishsharan | 7 months ago:

senko | 7 months ago:
Uses yt-dlp, whisper, and an LLM as the summarizer (Gemini, hardcoded because it handles long contexts well, but easy to switch).
I dislike podcasts as a format (the S/N ratio is way too low for my taste), so I use this whenever I want a tl;dr of some episode.
I should check out the SOTA models and improve the summarization prompt, but I'm not in a hurry, as this works pretty well for my needs already.
yufhg | 7 months ago: [deleted]