digiown | 13 days ago

> don't see much possibility of actual widespread bans

Why do you think there would be regulation to honor the "underage signal", but not explicitly ban social media sites for "unverified" users?

> seems pointless to avoid using that capability

It's not pointless, because relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18. Everyone will lose general purpose computing, along with adblocking and other mitigations that protect you from various harms. It also leads to widespread surveillance being possible as parents will want to be able to "audit" their teen's usage.

> put an additional system of control front and center

The problem should be controlled at the source, not the destination, if feasible.

mindslight | 13 days ago

> Why do you think there would be regulation to honor the "underage signal"

Our ancestor comment still has the direction backwards. This is the specific dynamic that makes sense to me: https://news.ycombinator.com/item?id=47027738

This means any legislation should be aimed at directing device manufacturers to implement software that can respect content assertions sent by websites.
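
To make that direction concrete, here is a minimal sketch of what "respecting content assertions sent by websites" could look like on the device side. The RTA label is a real self-labeling scheme that sites can send in a header or meta tag; the parental-control check itself is a hypothetical illustration, not any vendor's actual implementation.

```python
# Hypothetical sketch: client-side software honoring a content
# assertion sent by a website. The RTA label string is a real
# self-labeling convention; should_block() and the profile flag
# are invented for illustration.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def should_block(response_headers: dict, profile_is_minor: bool) -> bool:
    """Block the page if the site asserts adult content and the
    active device profile belongs to a minor."""
    rating = response_headers.get("Rating", "")
    return profile_is_minor and RTA_LABEL in rating

# A minor's profile blocks a self-labeled site; an adult profile
# and unlabeled sites are unaffected.
print(should_block({"Rating": RTA_LABEL}, profile_is_minor=True))   # True
print(should_block({"Rating": RTA_LABEL}, profile_is_minor=False))  # False
print(should_block({}, profile_is_minor=True))                      # False
```

The point of this shape is that the site only asserts what it is; all enforcement stays on the device the parent controls, so no identity ever flows to the server.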

> relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18

Okay, but in 2026 we're basically at this point. Show me a mobile phone that doesn't have a bootloader locked down with "secure boot." For this particular threat that we had worried about for a long time, we've already lost. Not in the total, sweeping way that analysis from first principles leads you to, but in the day-to-day practical way. It's everywhere.

The next control we're staring down is remote attestation, which is already being implemented for niches like banking. The scaffolding is there for it to be implemented on every website - "verifying your device's security" interstitials, which I see basically everywhere these days. As soon as 80% of browsers can be assumed to have remote attestation capabilities, we can be sure sites will start demanding these signals and slowly clamping down on libre browsers (as has been done with browser/IP fingerprinting over the past decade).
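
The clamping-down dynamic is easy to sketch. This is a hypothetical server-side gate, not any real scheme: the token format, `verify_attestation()`, and the vendor allowlist are all invented for illustration (real systems such as Android's Play Integrity work differently in detail, but the gatekeeping shape is the same).

```python
# Hypothetical sketch of attestation gating: a server that refuses
# service unless the client presents a token from an allowlisted
# device vendor. All names and formats here are invented.

from typing import Optional

TRUSTED_ATTESTERS = {"vendor-a", "vendor-b"}  # hypothetical allowlist

def verify_attestation(token: str) -> bool:
    """Stand-in for cryptographic verification: accept only tokens
    issued by an allowlisted vendor and carrying some signature."""
    issuer, _, signature = token.partition(":")
    return issuer in TRUSTED_ATTESTERS and bool(signature)

def handle_request(token: Optional[str]) -> int:
    # No token, or a token from an unrecognized (e.g. libre) stack:
    # the server returns 403, locking those clients out entirely.
    if token is None or not verify_attestation(token):
        return 403
    return 200

print(handle_request("vendor-a:sig"))   # 200 - blessed device
print(handle_request("mybrowser:sig"))  # 403 - libre browser locked out
print(handle_request(None))             # 403 - no attestation at all
```

Note that the allowlist is controlled entirely by the server side, which is why "client represents the user" freedoms like ad-blocking can't survive it: a client that served the user instead of the attester simply never gets on the list.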

Any talk of getting the server involved intrinsically relies on shoring up "device security" through remote attestation. That is exactly what can end ad-blocking and every other client-represents-the-user freedom.

> The problem should be controlled at the source, not the destination, if feasible.

You've already acknowledged VPNs and foreign jurisdictions, which means "at the source" implies a national firewall, right?

Unless your goal is to undermine any solution on this topic? I'm sympathetic to that, but I just don't see it being realistic in today's environment!

digiown | 13 days ago

I agree with controls on addictive/exploitative platforms like Facebook or Instagram. These can be feasibly controlled at the source.

In principle I agree with keeping some content away from children, but I don't think any of the implementations will work without causing worse problems, so I disagree with implementing those.

> in the day to day practical way

There's a world of difference between something being practically required and it being illegal to use anything else, even if the ban initially applies only to a small segment of the population. You still have a choice to avoid those devices now. Moreover, there is a fairly large subculture of gamers and other enthusiasts opposed to these measures, and open computing platforms will take a long time to fizzle out without intervention.

If you mandate locked-down devices for kids, it will very quickly become locked-down devices for everyone except "licensed developers", because no one gets a fresh set of computers upon becoming an adult, and a new campaign from big tech will try to associate open computers with criminals.