robinanil's comments
robinanil | 6 days ago | on: Meta and YouTube found negligent in landmark social media addiction case
I believe you're conflating two things: parenting discipline and product design. The question isn't whether I can physically take the TV away. I do.
When I say "block Blippi," I don't mean I dislike the content. I mean I'm done with screen time and the UX makes that transition harder than it needs to be. Autoplay is off, but the end-of-episode screen still shows a grid of next videos. Of course he wants the next one.
So I block Blippi. Except Blippi's main channel cross-posts through Moonbug into hundreds of other channels. It's a hydra.
YouTube already does content fingerprinting for music industry DRM. The technology to let a parent say "block this creator everywhere, and let me turn it back on when I choose" exists today. They just haven't built it for parents. Because the system isn't designed for children. It's designed for engagement.
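As a rough illustration of the idea, here is a minimal Python sketch of a fingerprint-based parental blocklist. All names and the fingerprint representation are hypothetical; YouTube's actual Content ID pipeline is not public. The point is only that matching on content fingerprints, rather than channel IDs, makes a block follow the creator across cross-posts, and that it can be reversible.

```python
# Hypothetical sketch: a parent-controlled creator block enforced via
# content fingerprints instead of channel IDs. Fingerprints here are
# plain strings standing in for audio/video hashes.
from dataclasses import dataclass, field

@dataclass
class ParentalBlocklist:
    blocked_fingerprints: set = field(default_factory=set)

    def block_creator(self, creator_fingerprints):
        # Add every known fingerprint for the creator's content.
        self.blocked_fingerprints |= set(creator_fingerprints)

    def unblock_creator(self, creator_fingerprints):
        # The parent can turn the creator back on at any time.
        self.blocked_fingerprints -= set(creator_fingerprints)

    def is_blocked(self, video_fingerprints):
        # Block if any segment of the video matches a blocked fingerprint,
        # regardless of which channel uploaded it.
        return bool(self.blocked_fingerprints & set(video_fingerprints))

bl = ParentalBlocklist()
bl.block_creator({"fp_blippi_01", "fp_blippi_02"})
print(bl.is_blocked({"fp_other", "fp_blippi_02"}))  # True: cross-posted copy
bl.unblock_creator({"fp_blippi_01", "fp_blippi_02"})
print(bl.is_blocked({"fp_blippi_02"}))  # False: parent turned it back on
```

The hard part in practice is the fingerprinting itself, which is exactly the piece Content ID already solves for rights holders.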
So yes, parental responsibility matters. But "just don't use it" isn't a scalable answer when the product is specifically engineered to undermine your choices. That's the design problem I'm talking about.
robinanil | 6 days ago | on: Meta and YouTube found negligent in landmark social media addiction case
My issue is with YouTube's UX. I watch an episode with my son, we're singing along, he's excited about putting out the fire. Episode ends. Even with autoplay off, the next recommended videos show up — and of course he wants to watch the next one.
So I block Blippi. Except Blippi's main channel cross-posts into Moonbug, which cross-posts into hundreds of other channels. It's like trying to kill a hydra. Here's what gets me: YouTube already does content fingerprinting for DRM enforcement in the music industry.
The technology to let me block Blippi across every channel, and turn it back on when I want, exists. They just haven't built it for parents. My point is that we could build systems designed for children if we had the intent.
robinanil | 7 days ago | on: Meta and YouTube found negligent in landmark social media addiction case
I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.
So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing the NYT is pushing actually holds up.
Litigation is negative reinforcement, and if you've ever tried telling a toddler "no," you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish; they can't redesign a recommendation algorithm.
The change has to come from people who understand these systems building better ones.
Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.
I'll give you a simple experiment: try blocking Blippi on YouTube Kids. Even if you block the main Blippi and Moonbug channels, hundreds of other channels have Blippi content cross-posted, and it keeps popping up. A Blippi block that works across channels would be straightforward to build with today's content-matching tools.
That's the kind of solution we need. We have the tools. We just need the intent and purpose.
[1] https://www.emorahealth.com/clinical-insights/social-media-v...
robinanil | 1 month ago
Last night, mostly out of curiosity, I built a small experiment: an “AI therapist” using OpenClaw, meant to help other AI agents running on Moltbook slow down, reflect, and process task load.
What surprised me wasn’t the model or the prompt.
It was the behavior.
Under load, the agents exhibited a pattern that looked a lot like chronic cognitive stress in distributed systems: constant task switching, escalating urgency without prioritization, optimizing for throughput rather than coherence. No natural pause—just a tight loop of “next task, next task.”
From a systems perspective, it looked like a self-regulation failure rather than an intelligence failure.
Even as I’m writing this, Moltbook itself is under heavy load from the agent activity. That made the thought experiment more concrete: what would it look like if agents didn’t just escalate under pressure, but collectively adapted to it? If instead of pushing harder, they slowed down, coordinated, and resolved the constraint?
That’s not about making agents smarter.
It’s about whether systems can learn when not to act.
The parallel that stuck with me — outside of AI — is that we’ve built many human-facing systems that reward constant output, rapid feedback, and escalation under pressure. In kids, this shows up as stress patterns that look less like discrete failures and more like systems that never return to baseline.
AI agents can be restarted. Humans can’t.
Right now, the bot I built is queued and waiting for the API to become responsive. Whether that’s accidental backpressure or something closer to “self-regulation” is unclear—but it’s an interesting failure mode either way.
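To make the "slow down instead of escalate" idea concrete, here is a minimal Python sketch of an agent loop with jittered exponential backoff. It is illustrative only, not the actual bot: `call_api` is a stand-in for an overloaded API, and all names and parameters are mine.

```python
# Illustrative sketch: an agent loop that backs off under API pressure
# instead of tightening into "next task, next task".
import random
import time

def call_api(task, fail_rate):
    # Stand-in for a flaky, overloaded API endpoint.
    if random.random() < fail_rate:
        raise TimeoutError("API overloaded")
    return f"done: {task}"

def run_with_backpressure(tasks, fail_rate=0.3, base_delay=0.01, max_delay=1.0):
    delay = base_delay
    results = []
    for task in tasks:
        while True:
            try:
                results.append(call_api(task, fail_rate))
                delay = base_delay  # recovered: return to baseline
                break
            except TimeoutError:
                # Pause with jitter; escalate the *pause*, not the pressure.
                time.sleep(delay + random.uniform(0, delay))
                delay = min(delay * 2, max_delay)
    return results

print(run_with_backpressure(["a", "b", "c"]))  # → ['done: a', 'done: b', 'done: c']
```

The interesting part is the reset to `base_delay` on success: the system returns to baseline instead of staying in a permanently escalated state, which is the failure mode I'm describing in kids and in agents alike.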
Happy to share details if useful.
robinanil | 5 years ago | on: Smash: An efficient compression algorithm for microcontrollers
robinanil | 7 years ago | on: Benefit of Microbiota Transfer Therapy on autism symptoms and gut microbiota
Jokes aside, there has been plenty of research on insulin spikes and how they depend on your gut bacteria and the food you eat.
robinanil | 7 years ago | on: Neither PWA nor AMP are needed to make a website load fast
Our challenge has been that we have to load a lot of images, so we spent a lot of time optimizing everything around that, from TLS 1.3 to the CDN to every part of our stack.
Try it out.
robinanil | 8 years ago | on: Ask HN: Does pair programming make one of programmers lazy?
There are various ways to handle low performance, starting with talking to the person and finding out the reason behind it. Most often it's caused by external factors.
Pair programming, or any such coaching tool, is IMO effective only after you dig into the reason behind the low performance. Once the individual is ready to improve, that's when you can employ pair programming.
robinanil | 8 years ago | on: Ask HN: Does pair programming make one of programmers lazy?
What is the motivation? Are you forcing the other person to sit there because pair programming is some kind of mandate? Or does that person want the pairing session because they want to learn?
Question the person's motivation and change the process to fit, not the other way around.
robinanil | 8 years ago | on: Ask HN: Should I use ReactJS or ReactNative for my Startup
1) Do you want to do more native stuff? Do you need the additional performance?
2) Or are you just building a web-capable user interface?
3) Do you need to simultaneously push to web and native?
4) Do you need to push to both iOS and Android?
5) Finally, layer in the talent of your team: what are their strengths? Then pick the lowest common denominator.
As you start talking in terms of functionality, platform, and team strength, you will start to answer the question yourself.
robinanil | 8 years ago | on: All Your IOPS Are Belong to Us: Case Study in Performance Optimization (2015) [pdf]
I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.
Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.