This is pretty much the inevitable end-game of the web, in no small part funded by ad-based business models (the analog gap pretty much destroys most attempts to use this stuff for copy protection) and enabled by developers who have insisted we shove as much difficult-to-implement functionality into the browser as possible (by which I mean complex CSS stuff, not powerful-but-easy-to-code APIs for OS-level access).
The result: there is now effectively one dominant web browser, run by an ad company that nigh unto controls the spec for the web itself, and which is finally putting its foot down to decide that we will all be forced either to use fully locked-down devices, or to prove that we are using some locked-down component of our otherwise unlocked device, to see anyone's content. And they get to frame it as fighting for the user in the spec draft, as users have a "need" to prove their authenticity to websites to get their free stuff.
(BTW, Brave is in the same boat: they are also an ad company--despite building ad blocking stuff themselves--and their product managers routinely discuss and even quote Brendan Eich talking about this same kind of "run the browser inside of trusted computing" as their long-term solution for preventing people blocking their ads. The vicious irony: the very tech they want to use to protect them is what will be used to protect the status quo from them! The entire premise of monetizing with ads is eventually either self-defeating or the problem itself.)
> who is finally putting its foot down and deciding that we will all be forced either to use fully locked-down devices
The person who wrote the proposal[0] is from Google. All the authors of the proposal are from Google[1].
I've been thinking carefully about this comment, but I really don't know what to say. It's absolutely heartbreaking watching something I really care about die by a thousand cuts; how do we protest this? Google will just strong-arm their implementation through Chromium and, when banks, Netflix & co. start using it, they've effectively cornered other engines into implementing it.
This isn't new for them. They did it with FLoC, which most people were opposed to[2]. The most they did with FLoC was deprecate it and re-release it under a different name.
The saving grace here might be that Firefox won't implement the proposal.
Brave is an advertising company, but we’re quite different from Google and others in this space. Brave's ad notifications are opt-in and engineered in such a way to protect and preserve user privacy. I'm not sure where you saw Brave engineers talking about ways to prevent users from blocking our ads—we don’t try to prevent users from blocking Brave Ads.
If you wish not to see Brave’s ad notifications, you can easily avoid them (by not opting in in the first place, or by throttling them or disabling them entirely). There are no special hoops to jump through, or technical incantations to utter. We believe digital advertising is better when it is built on user-first principles and consent.
If a user opts in to Brave’s ad notifications, their device routinely downloads and maintains a regional catalog of available inventory. The user's device then evaluates the catalog entries for relevance. User data is NOT sent off-device in Brave’s model. If a relevant ad entry is found, it is displayed to the user at a time and in a manner chosen to minimize distraction. When an ad notification is shown, the user receives 70% of the associated ad revenue for their attention (no clicks required).
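The on-device matching flow described above can be sketched roughly as follows. This is a hypothetical illustration only, assuming a simple keyword-overlap relevance test; the `AdEntry` and `Device` names and the catalog format are invented for the example and are not Brave's actual code.

```python
# Hypothetical sketch of on-device ad matching: the whole regional catalog
# is downloaded, and relevance is evaluated locally, so no user data
# leaves the device. Names and catalog format are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AdEntry:
    ad_id: str
    keywords: set

@dataclass
class Device:
    interests: set                 # derived locally; never uploaded
    catalog: list = field(default_factory=list)

    def refresh_catalog(self, regional_catalog):
        # Everyone in the region fetches the same catalog, so the server
        # learns nothing about which entries this user might match.
        self.catalog = list(regional_catalog)

    def pick_relevant_ad(self):
        # Relevance is evaluated entirely on-device.
        for entry in self.catalog:
            if entry.keywords & self.interests:
                return entry
        return None

device = Device(interests={"cycling", "privacy"})
device.refresh_catalog([
    AdEntry("ad-1", {"cooking"}),
    AdEntry("ad-2", {"privacy", "security"}),
])
match = device.pick_relevant_ad()
assert match.ad_id == "ad-2"   # matched locally; nothing sent off-device
```

The key design point is that targeting data flows one way, from server to device, which is what makes the "user data is NOT sent off-device" claim possible.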
Again, if the user wishes not to see ad notifications, they can simply choose not to opt in to viewing them. If the user wishes not to see the occasional sponsored image on the New Tab Page, they can turn those off from the New Tab Page itself with two clicks (Customize › Show Sponsored Images). Importantly, the user is always in control. They decide whether ads will be displayed, and to what degree (e.g., the user can set a limit on ad notifications per hour).
Brave isn't interested in coercing users to view advertisements.
While the 'Web Environment Integrity API Proposal' is portrayed as a measure to enhance web security and prevent fraud, it poses potential threats to competition, especially for open-source browsers like ours. It may seem to protect the ad business model, but what it could lead to is the monopoly of Google Chrome, curbing the emergence of new competitors.
We are an open-source browser developer and these concerns deeply resonate with us. We understand the paradox Alphabet faces, yet we firmly believe the solution isn't about exerting "DRM" level control over a ubiquitous means of access.
We're committed to standing up for the future of the web. We don't just see ourselves as a browser company but as advocates for an open, fair, and free web. We invite you to join us in this endeavor. Visit https://github.com/dosyago/BrowserBoxPro today. Stand with us for an open, free, and fair web.
> and enabled by developers who have insisted we shove as much difficult-to-implement functionality (by which I am talking about CSS complex stuff, not powerful-but-easy-to-code APIs for OS-level access)
Interesting that fixing "how to center a div" is considered harmful, but WebSerialPort is actually very good?
> The result: there is now effectively one dominating web browser run by an ad company who nigh unto controls the spec for the web itself
I don't think this is the reality. Google proposes a bunch of APIs that go nowhere because the other browser vendors consider them harmful. Google's previous attempts at driving more adtech into the browser have failed due to a lack of support from other browser vendors.
I think "who drives the web specs" is probably in the best situation possible. It's largely Google, Mozilla, and Apple who all have slightly different interests in what makes a good web platform, and the web ends up better for it.
It's important to note that a browser that implements something like this is simply not a User Agent, in the clearest sense: it's not there to serve the User, it's there to serve the website. When you consider this, it's clear that this goes against the core principles of the WWW, making it an anti-WWW feature, or better put, a regression.
Hopefully this will not be implemented, but still it's a good wake up call for those who still think that Chrome is more than an ads-delivery app with some browser functionality.
Yeah, this is really the endgame. I think the issue is systemic though; this is about more than just ad money. Bots and the automatability of the web were always an anomaly and a flaw, as the web was and is designed for humans. Strict human verification was always a need. One could say we achieved this with 2FA and such, but what is technology all about? Convenience. If it's more convenient, people will prefer remote assertion every day of the week: https://gabrielsieben.tech/2022/07/29/remote-assertion-is-co...
> we shove as much difficult-to-implement functionality (by which I am talking about CSS complex stuff, not powerful-but-easy-to-code APIs for OS-level access) into the browser as possible.
"powerful-but-easy-to-code APIs for OS-level access" are actual hard-to-implement-right functionality that is often pushed to browsers with very little discussion or considerations.
It feels like this can't fly in the EU as things stand. And if someone found a way around the regulations, there would be amendments to shoot it down.
The entire premise of 'people want expensive-to-make websites, but don't want to pay for them' is already a bit flawed. I do pay for YouTube to not see ads, and I wish I could pay Google (and Meta) to not serve me ads on any site they put ads on, including Google Search. That would make life a lot nicer. And I personally know no one who would not sign up for that. But that doesn't happen; I guess because ads make more money (not from me, but overall)?
To begin with, pretty much every government in the world runs some proprietary software developed domestically for security reasons. Old, even obsolete, machines. Out-of-date software, unlicensed/unregistered software, etc., etc. Much of this is also true of banks.
This means if this is put in place as in the spec, it will affect banks and governments negatively. And as powerful as Google is, I don't think it will win over governments + banks.
But again, all the above could be nonsense, and Google will gatekeep the web. It found itself the loser in the AI race, and it knows that amid the ongoing arguments about privacy and who owns the data AI is trained on, the next best thing is to own the playground where the AI trains. That may not be an entirely bad thing either; sad, perhaps, but as this goes on and browsing becomes a pain, maybe people will just spend less time online? That's a good outcome in my books.
What's strange to me is that the main author of the spec -- Ben Wiser -- seems to be against closed, walled-garden paradigms, as he has written in a blog post titled "I just spent £700 to have my own app on my iPhone" [1]. In the post, he laments the App Store monopoly on iOS and ponders returning to Android for the app-installation freedom.
How can he reconcile these views with this spec, which he is the main author of? Surely Ben sees the parallels?
He writes: "Apple’s strategy with this is obvious, and it clearly works, but it still greatly upsets me that I couldn’t just build an app with my linux laptop. If I want the app to persist for longer than a month, and to make it easy for friends to install, I had to pay $99 for a developer account. Come on Apple, I know you want people to use the app story but this is just a little cruel. I basically have to pay $99 a year now just to keep using my little app."
Speaking as someone who worked in adtech and managed to spend almost a year getting paid to build an adblocker:
I can tell you that the machine is so big and the responsibilities diluted to such extent that no one really feels like they're making a morally dubious decision, it just sort of happens on its own, magically.
The intent may genuinely be to help decrease bot activities versus human activities.
Even the ad example is about not charging advertisers for bot views, which is a huge problem right now.
The problem is that a tool can often be used for evil as easily as for good, and the more this standard is used to block ad blockers rather than simply filtering out user-agent-spoofing bots, the more it ends up evil.
And even if the limited scope in the proposal was the true intent, there's nothing preventing scope creep.
Though reading over it all, I do think the assumptions about motivation in most of the comments here are misaligned. This does seem to be primarily focused on the growth in bot activity and on making it harder for bots to pass as human to servers.
Still, the spirit of who controls the client is very much at stake, and the comments here are ostensibly right that this is a measure that should not happen.
(And frankly, given the bubbling attitudes about enshittification, coupled with the coming lowered barrier to entry for competition against software firms and content production, I think this is very much the kind of thing that may backfire horribly if forced through.)
> How can he reconcile these views with this spec, which he is the main author of? Surely Ben sees the parallels?
It's easy: he works for Google. Every single public-ish web developer and/or devrel from Google will spend inordinate amounts of time lambasting Apple, writing essays on how Apple cripples the web, etc.
Meanwhile, Google has broken the web so badly that Apple would need several decades to come anywhere close.
Note: the moment they leave Google, they may slightly change their tune and criticise Google a bit. For an example, see Alex Russell of web-components fame when he went to work at Microsoft after spending a decade making sure that web browsers are truly unimplementable: https://infrequently.org/2021/07/hobsons-browser/
Pretty much the entire premise in the title of his blog post is false for dramatic effect, and you wonder how this man could stoop so low as to be duplicitous?
The underhanded way this is being proposed is really something else. It's hosted on a non-Google GitHub to provide distance, and it's worded in a way that makes it seem like something that benefits users, when it's the absolute opposite. It subverts the whole concept of a user agent. This is a huge threat to our industry and we cannot allow it to happen.
It's not a "threat to" the industry... It literally _comes from_ the industry... Unless the tech industry is willing to lose one of its biggest sources of revenue, this is exactly what the industry wants...
Add "integrity" to the list of adjectives used for obfuscating the rise of authoritarian dystopia...
It all started with "trusted computing", where "trusted" means "not under the owner's control". Then they tried to spin it as a "security" thing with TPMs, and created the impression that those speaking out against them were either malicious actors or insane conspiracy theorists.
Now it is actually happening. They want to control exactly what hardware and software you use, and they're doing it by ostracisation, which makes this even more sinister: you're still technically allowed to use software and hardware of your choosing, but you'll be blocked from participating.
I still remember when Intel was forced to revert adding a unique serial number to its processors because of widespread outrage, so it is possible for the public to make a difference; they just need to be educated about the coming dystopia and agitated enough to care and act upon it.
Perhaps we can start by spreading instructions on how to disable TPMs and "secure" boot along with all the advantages that come with doing so (custom drivers, running whatever OS you want, hardware you actually own, etc.) Of course the corporate-owned "security" lobby is going to start screaming that it's "insecure", but we need to make it clear that this is not the "security" we want because it is inherently hostile to freedom.
"Those who give up freedom for security deserve neither."
This would be the method of last resort. I think secure boot as a technology actually has security advantages, if you can freely set the keys. That was how the tech was advertised to placate the critics, but of course it would run counter to the goal of controlling hardware if this were actually implemented consistently. I think regulation to force vendors to provide this option (and in a frictionless, actually usable manner) could do a lot here.
Second is more focus on nag screens, "nudges", and other deliberately degraded UX. E.g., with the Surface tablets, you're technically able to disable secure boot, but you'll then be greeted with an ugly bright-red boot screen every time you turn the device on. This stuff can have a significant psychological impact, especially on "casual" users.
Whether you like it or not (and I certainly don't), you've gotta sort of admire the sheer vision of a fifteen-year project to build a browser so good it comes to monopolize the industry, all because you've had the foresight to realize that monopoly will be crucial to securing your position as the adtech hegemon. An underrated masterpiece of evil genius.
And tech people fell for it hook, line, and sinker.
It's completely and utterly irrelevant that Chromium is open source, because the web is a protocol, and having the source for an implementation of the protocol doesn't matter in the least when you don't control the protocol. You can't just fork Chromium and remove a feature, because websites expect the feature, and your browser won't work on them. You can't just fork Chromium and add a feature, because websites don't care about your tiny fork and won't use your feature. You can't fork Chromium, you have to fork the entire web.
And I believe this strategy is how Sundar Pichai became CEO of Google. He oversaw the Chrome project in its early days, and its incredible success catapulted him up the management ladder at Google.
I wouldn't necessarily view it as malice from the beginning. It's entirely likely that early Chrome was genuinely trying to solve usability problems in hosting complex applications like Gmail, a goal that has been attempted throughout history, as seen from the days of ActiveX, Java applets, Flash, etc.
But capitalism does what it does best, and will happily take advantage of (and try to prolong) a natural monopoly situation even if the origins were genuine.
In fact, this is why there are regulations around "utilities": they are areas where a natural monopoly is optimal, so they shouldn't be treated as a free market.
(Food for thought: Perhaps the Internet infrastructure should be a utility too? Browser makers could be forced to be non-profit, which would mean companies need to divest themselves of the "Internet business" if they want to do "business _over_ the Internet")
This is literally an attempt to shut Linux and BSD desktops, other FOSS clients, custom Android ROMs, etc. out of the web, with the openly stated reasoning of "to sell you ads".
Yeah I mean the first of their examples is literally:
> Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
I find it quite cute that they start with "users" as if it's a user demand but in the next sentence switch to "advertisers" --- the real target population.
I'm worried about this too, as we run a company that invests heavily in developing browsing technology powered by these browsers (like Chromium) but liberating them in various ways, such as running headless in the cloud with users connecting remotely, or running in a "semi-automated" mode. Both of these would likely be flagged by these attestation guards, as they are not environments that "preserve the integrity of the ad business model and the dominant browser market". If you want to get involved in doing something about it, come check out our open-source browser work at https://github.com/dosyago/BrowserBoxPro and get involved.
I mean, to be fair, that's their entire modus operandi.
You don't berate a kitchen for serving food, why would you look at any Google contraption from HTTP/3 to Chrome as anything but a vehicle for selling ads and/or mining data?
The largest subsection of the document is spent discussing how to prevent specifically this situation, and this is called out explicitly as a non-goal.
It's the ad-tech sector of the web declaring a secession from the internet, for ads can't live under the law of the open web. The new AdWeb is going to look like appstores: websites will need to pay to the adweb owners, and users will need to use smartphones or locked down browsers. As for the open web, it will stay and continue evolving free from money making concerns.
It's time to break Google up. They're the AT&T and Standard Oil of our generation. Make Ads, YouTube, Search, Cloud, Chrome, etc. all independent companies. Demand that antitrust regulators do their damn jobs for a change.
* The US would never kill its golden goose except as a last resort.
* The US standard for antitrust is consumer harm. If Google implements a thing that other companies have been asking for, where any company can join and send its own attestation signals, and those other companies in unrelated markets then use the thing to stop supporting unapproved stacks (which could reasonably include Android/Chrome), the blame won't fall on Google.
Google Cloud becomes a VC-driven organization that slowly eats margin dirt against its competitors until insolvency. There was no way for it to recover enough resources from the mothership before being split out.
Search trundles along OK, assuming it took search ads and a ton of core infra with it, but it never makes enough money to ship a decent product extension. It hopefully drops some products it can no longer afford margin on, which have long distorted results (albeit with good intentions). It suffers slow brain drain, and users end up using multiple search engines for every search again because no one has good search quality. The monopoly breaks, but so does this part of the internet, bolstering the positions of the app and information-site ecosystems. Wikipedia is the only winner we really want in this space.
Display Ads takes off like it just discovered faster-than-light travel, no longer held down by the ol' ball and chain that is the entire rest of the company. They go much darker, as they no longer have tons of goodwill organizing from the rest of the company, and increasingly join the bad actors. In 20 years they reach LexisNexis-level evil in terms of multi-directional sharing of user data.
YouTube heads off into the stratosphere along with Display Ads. They try to maintain a better public face, but having to spin up their own ad-market solutions drops ad quality even further and margins suffer, though their position remains ossified and they slowly recover. They get a bit more agile: no longer disrupted every other year by some mandate from the mothership, they're better able to keep up with new markets and more rapidly crush new competition.
Workspace decays very slowly. All the AI stuff halts and gets ripped out, as there's no one left to work on it. The Drive product has to scramble to figure out how to rebuild without all the internal commodity infrastructure support. Gmail gets unstable for a while due to the weight of the infrastructure sitting on many fewer shoulders. Global instability results from the rapid de-distribution of the system, as the production infrastructure was sliced apart in a rush to meet the forced division. The economy takes a big dive as a result, as half the world loses email access regularly, bills don't get paid, etc.
Photos spins out into its own thing, and dies rapidly, as selling the odd photo frame here and there just can't meet margin.
Chrome tries to get funding from Microsoft and is eventually purchased wholesale, but the core team gets ripped up and largely discarded. Who knows how the OSS products fare; it depends on which executives in Microsoft win this purchase. Eventually the main product gets shuttered, with Edge as the only replacement.
The telco products all shutter immediately, with no recourse. Same with R&D.
AI tries to split out into its own thing, but fails to find a business and suffers constant reputation problems. After 10 years of trying it eventually shuts down; the acquiring company, however, immediately spins up multiple successful products and makes a big dent in the now well-established market.
Android spins out into its own organization. For the first decade, the heat of internal politics in newfound vacuums crushes them; eventually they find their footing, head back to their open-core roots, get scrappy, and do some new things. Along the way their size fluctuates as the market forks and fractures, but Android manages to hold its position as the western center of its universe.
Chromecast, ChromeOS, and Nest all suffer badly, having no core ecosystem to ship into anymore. They attempt to buddy up with Android, which pushes them around trying to Androidify everything, resulting in poor UX and/or poor margins across the board. Eventually all but ChromeOS shutter, and then the ChromeOS business closes too, leaving behind an OSS gift that a core group of passionate individuals try to limp forward as best they can under the new Microsoft Edge overlords.
Users find their data fractured across a dozen companies, with poor SSO integrations. Security mistakes abound and lots of people are affected. Online crime goes through the roof; it feels like the 90s again, but on a much, much larger scale. Lots of people lose their accounts, and are affected by service outages and the ongoing economic effects of those. ISPs jump at the chance to step in, and lots of users start trying alternative email services again. They experience poor discoverability, lots more security problems, and constant space pressure. Vultures make off like bandits, and Amazon, Apple, Microsoft, and Cloudflare are the biggest winners in the fallout.
Counterargument: let's say the US gets into a real war with China. A massive conglomerate like Google would probably make massive contributions to cyber/technological warfare that the individual pieces would have a hard time matching.
I agree they should be broken up, but it might be the wrong time for it.
By the HN guidelines this is a repost, but it would be a mistake IMO to delete it. This would mark the end of the open web, but for whatever reason this issue has never really bubbled to the surface here before. It feels like something is different this time.
The chess pieces for the end-to-end unblockable ad machine are in place.
You'll have the cynically named "Privacy sandbox" that builds tracking directly into the browser. You curtail ad blockers by capping browser extensions. And then you allow access only to "attested" clients. Inescapable tracking and unblockable ads. And you'll get to see ever more of them over time.
If this isn't evil enough in itself, the way Google presents these initiatives in grossly misleading ways makes my blood boil.
Fuck "Be as evil as possible" Google. Absolutely pathetic company. I'm so done with them.
I see one more dangerous development this move would impose: limiting access to web content for rival search engines. I'm sure that Googlebot will pass all the "high security standards" and web integrity checks, while others won't be able to.
I think "don't use Chrome" is really not the best way to fight this. Instead, make it known. Get the word out to as many people as possible that this thing exists, spread awareness, explain the consequences, make a stink.
Google is absolutely in a position to implement this, and I figure a good number of sites would immediately join. However, the image of "tech" is tarnished enough already, and the general population is more aware of the importance of having control over their online experience.
So I'm kinda optimistic that more public awareness might lead to a larger backlash and make Google think twice about continuing this, lest it risk a PR disaster.
This is the most disgusting thing I have ever read. My blood is boiling to the point where I genuinely don't see a bright future.
Ben Wiser (Google), Borbala Benko (Google), Philipp Pfeiffenberger (Google), and Sergey Kataev (Google) have got to be the most repugnant people on the planet for pretending this is anything but a scheme to destroy all privacy and freedom on the web all so fucking Google can sell more ads.
I am not a hopeful romantic, but the EU has been investing in vendor-neutral web browsers like Nyxt [0] and the UR Browser [1] through the Horizon Europe program. I doubt that legislators (at least in the EU) will view this as a positive development, assuming EU legislators know what they are doing. On the other hand, lobbying by big tech is still very much a threat.
The big problem with many of the alternative browsers, like the ones you mentioned, is that they are powered by the Blink engine (it's one of the two options for Nyxt). The overwhelming market share of Blink and the institutional monopoly on its development are the biggest drivers of anti-features like these. WEI, for example, is being prototyped in it [1]. These anti-features make it into every browser that uses Blink. While some browsers like UR Browser and Brave disable many of them, they still lend credibility to the Blink engine.
We need to promote alternative web engines like Servo and LibWeb, and browsers based on them. Many of these engines need a major push to be competent enough for daily use. Gecko is also fine, but building a new browser with it is said to be hard.
This proposal is attempted theft. The web does not belong to Google, it belongs to everybody. Who are they to suggest that users with “non-attestable” (read: not controlled by Google) user agents or operating systems should be excluded or punished?
If Google wants a war, let’s give them one. Tell everyone who will listen. Give Google hell.
Oh by "Web Environment" you mean "my machine" lol!
I already got caught by this kind of thing - a https://github.com/nativefier/nativefier app wrapping Youtube Music doesn't work, because Google detects somehow that you are not using a trusted browser and refuses to serve.
This is sort of moving in the "zero trust" direction (as in, let's use ML etc. to detect whether we trust something; username/password is not enough), which I fear because it will break a bunch of stuff for genuine users and make things less reliable.
> Users often depend on websites trusting the client environment they run in.
is already a lie. Users don't depend on websites trusting the client environment. Users expect the client to limit the way in which they have to trust websites.
Sure website owners would love to be able to trust user input, but that has little to do with the interest of the users.
If something starts with that kind of framing already you certainly know that this is not going to benefit the user.
Proposals like this demonstrate the utter failure of our ethics education in computer science.
In a field facing increasingly harder ethical questions every day, it’s important to start empowering our engineers to say “no” to ethically bankrupt things like this.
I don't think any amount of ethics education will matter in the end in the face of the incentive structure that appeared in the industry.
Strong cultural norms (e.g. hacker culture) might help for a while. But incentive structures eternally erode opposition.
It could make it easier for developers to band together and try to collectively veto things like this. But corporations with money can always buy the expertise of people, have them undermine the community, create their own parallel communities and influence public opinion and legislators.
FAANG salaries supercharge people's cognitive dissonance. They will find ways to excuse, minimize and ignore their contribution to the current situation.
Even Hacker News developed a sub-subculture of people who were constantly going into threads and dismissing remote attestation worries as "FUD".
It's unclear how to preserve cultural norms that stand in the way of market dominance. The only thing I can think of is having competing interests in the market. But whenever these align -- hell breaks loose.
You might be disappointed. Ethics training can't force people with different political viewpoints to conform to yours; in fact it gives them better tools to explain their views.
I hate to say it, but if you used Chrome to read this, then you're part of the problem.
Awful stuff like this wouldn't stand a chance if Google didn't have such a monopoly position.
For the sake of the open internet, please switch to a different browser. IMO, Firefox is best, but even something chromium based is probably fine. Just not Google Chrome.
I am not optimistic that the de-facto end of general computation can be prevented, or that there will even be noteworthy opposition.
There are so many powerful interests that stand to gain from preventing e.g. ad-blocking and content capture.
Thanks to Windows 11 requiring TPM, it is just a matter of time until hardware support for remote attestation is ubiquitous even on desktop computers.
Meanwhile, our (including myself) attention is (perhaps justifiably to some extent) on the latest news about $EXISTENTIAL_THREAT and how $THE_OTHER_SIDE did $EVIL_THING fed to us by the algorithm.
Organizations that used to effectively fight threats to freedom like this (FSF, pirate parties, CCC, EFF, etc) have lost a lot of their support/influence and clarity of purpose over the last decade.
>The attestation is a low entropy description of the device the web page is running on.
>The attester will then sign a token containing the attestation and content binding (referred to as the payload) with a private key.
>The attester then returns the token and signature to the web page.
>The attester’s public key is available to everyone to request.
I'm assuming "attester" here means "hardware authenticator." How is the attestation low entropy if it's presumably signed by a key that is unique & resident to my device? There is nothing higher entropy than a signature w/ "my" private key. That is literally saying "I [the single universal holder of the corresponding private key] signed this attestation." These days that key is realistically burned into my device at manufacturing time, and generally even if I can enroll keys on "my" device (big if), there is a very limited number of keyslots on hardware authenticators. Certainly not enough slots to present a random throwaway identity to each webpage.
I don't understand how you can have public/private key crypto as the basis for attestation and also have privacy? The two seem mutually exclusive. Is the private key supposed to be shared among a large cohort? (Which seems rather unwise, as it would make the blast radius of a compromised key disastrously huge.)
> I'm assuming "attester" here means "hardware authenticator." How is the attestation low entropy if it's presumably signed by a key that is unique & resident to my device?
From what I understood, the "attester" is a remote server, which signs the attestation with its own key after somehow verifying that the browser, operating system, drivers, and machine are not running any code that this remote server does not completely trust. That key can at most be used to identify the remote server, and it is supposedly shared across a wide number of devices.
Yes, this means that your browser depends on having a working connection to that remote server for every attestation it makes, and that if that remote server colludes with the web page (or is compromised), it can leak your identity.
Maybe your device sends a signed attestation to the OS vendor and they generate a more generic attestation (basically "this is a legit Chrome browser running on Android but I won't tell you anything else").
There's some quite complex cryptographic machinery called Direct Anonymous Attestation that would make this possible. I don't know if they plan on using this though.
> Attesters will be required to offer their service under the same conditions to any browser who wishes to use it and meets certain baseline requirements. This leads to any browser running on the given OS platform having the same access to the technology, but we still have the risks that 1) some websites might exclude some operating systems, and 2) if the platform identity of the application that requested the attestation is included, some websites might exclude some browsers.
I feel this is the bit that's going to be hand waved away for the sake of convenience.
I also wonder what those certain baseline requirements are going to be? Weird that they're left ambiguous.
It's probably nothing to worry about. We have a ton of precedent with Widevine that "it's okay, we'll license to anyone who meets requirements" wouldn't ever be abused[0]. It's fine, you just meet the baseline requirements that aren't spelled out yet and that might be subject to change and that certainly won't include headless or highly scriptable or experimental browsers. Nothing to worry about.
This isn't extreme enough. If they're going to put out a very controversial proposal like this, they may as well go all in. The pushback against this is going to fizzle out, and it will be shoved through regardless of anyone's opinions.
Governments will love this due to the protection and security it provides, among other things. I wish I could say I was surprised, but Google has once again failed to deliver, even when trying a power-grab play like this.
Feature requests:
- Add a distributed bad-actors list similar to DNS.
- Start the process of introducing this functionality at the hardware level.
- Require photo personal identification to prove humanity.
The only way around the dystopia this will lead to is to constantly and relentlessly shame and harass all those involved in helping create it. The scolding in the issue tracker of that wretched "project" shall flow like a river, until the spirits of those pursuing it breaks and it is disbanded.
And once the corporate hydra has regrown its head, repeat. Hopefully, enough practise makes those fighting the dystopia effective enough to one day topple the sponsoring and enabling organisations as a whole, instead of only their little initiatives leading down that path.
I commented similarly elsewhere (https://news.ycombinator.com/item?id=36815276) but shoutout to all the people during the Web Video DRM debate who said that DRM wasn't going to be proposed for HTML or Javascript.
How can the “attesters” verify the integrity of the user agent? Sure the attestation is signed, but why can’t we mess with the data sent to the attester and just nullify the entire point of the proposal? The “browser acceptance criteria” in the spec, that would presumably contain this info, is just “TODO”. Thanks Google for conveniently omitting that key detail.
Also interesting that it's implied in the explainer that attesters are just HTTP endpoints dealing with "billion-qps" traffic. As per the point above, how can we trust any attester not to use the (completely unobfuscated) information the user agent is sending them?
I guarantee that big websites will host their own attesters, only allow use of their attester, and require attestation for every request, allowing them to fingerprint every single user.
You don't send any data to the attester. It runs locally on your device, or rather is part of its core functionality. Building a chain of trust from the TPM hardware module, validating secure boot is enabled, validating the kernel and drivers have not been tampered with, eventually validating the browser has not been tampered with.
You can't run your own attester - these are implemented by the companies who provide the hardware, such as Microsoft or Apple.
First I wanted to say client trust is one of the two things I'd really like to see improved from a security standpoint, but I think this is the wrong way around. Browsers should establish whether they feel they operate in a trustworthy enough environment and decide not to work at all if they don't. Having the website initiate this check is a bit strange to me. (The other thing being more MitM and DNS-hijacking protection.)
It's an Orwellian name, but makes a certain amount of sense. That's the most effective kind of Orwellian name.
Even still, I think that it is wrong to give something a convenient name that espouses some virtue. They should have chosen something like Web Environment Verification API.
I think it's spyware, and I don't like it. It reminds me of the Stripe API, where you have to run some JavaScript on your site that snoops on your interactions and reports stuff to Stripe that it uses to detect fraud. https://news.ycombinator.com/item?id=22937303
They've probably deluded themselves into believing that this is not evil. They are saving the internet from small businesses going under because their ads are being blocked!
The idea behind this proposal is what I feared the moment remote attestation(-ability) started to gain traction on clients.
Google will arguably kill legacy SafetyNet (which is circumventable, as it's not rooted in hardware) soon. Microsoft pushes extremely hard for remote attestation-ability by requiring TPMs. Very soon, only an insignificant number of client devices will be unable to perform remote attestation for the major vendors based on hardware trust modules.
Soon there will be a Plaza Web, for which you'll need an approved device, like a Chromecast with Google TV, and the Old Web of communities, enthusiasts, and the like.
I highly doubt it, but I wonder if this will be the straw that breaks the camel's back, where general perception of Google flips to where they are viewed in the same circle as Comcast or EA.
Google is heading in that direction and their velocity is accelerating.
This isn't just Google. The whole hardware industry is moving towards the Digital Lockdown. This idea has been around, at least, since the early 2000s, but the people who were talking about it, of course, got shouted down as conspiracy theorists.
And, as is far too often the case, the "conspiracy theorists" were right, but nobody ever cares to reflect on that, because nobody seems to be able to actually think about things anymore, unless the thoughts are breast-fed.
We're heading towards a reality where copy-pasting from a website will cost you money if the license demands it. Considering the status quo of technology, almost everything required for a "trusted" environment is already present in consumer hardware.
We have hypervisors, virtualization, containerization. Real-time encryption and decryption of data in RAM and the CPU is coming eventually. Blockchain technology makes verification of digital ownership secure and easy. AI will make it stupidly easy for corporations to make sure that everyone complies, and it will be everywhere within the next few years.
A glimpse of this reality can be seen in NovaQuark's "Dual Universe", where everything is behind DRM. A "metaverse" company for a reason, I guess.
The empire strikes again, driven by insatiable greed. Just wait till its minions fill up this thread with classic astroturfing and comments in the vein of "We were waiting for this feature since forever!" and "It's for better security". I can also easily see them massively downvoting everyone who disagrees with the righteous direction of The Corporation. This is Orwell's 1984.
HTTP/3, HTTP/2, and many useless JS APIs are pushed by Google.
Is there any real alternative to the multimedia Web? Or do We need to make one now?
What we need:
- hypertext, links
- raster and vector images
- videos
- responsive layout system of said hypertext (cassowary)
- programs that can control the page content fully
At DOSYAGO, we're definitely concerned about this. In Alphabet's Web Environment Integrity API proposal, we see a potential threat to the very democracy of the web. The danger isn't merely about preserving the ad business model, but the potential for market monopolization by Google Chrome. Yet the beauty of open source presents us with hope and solutions.
As creators of a competing open-source browser, we're stirred by this. We're concerned about the future integrity of browsing - whether run remotely, headlessly, or semi-automated, we see all these threatened by such attestations. But we believe in the power of the collective, and the spirit of innovation that thrives in the open-source community.
The conundrum is real for Alphabet, but leveraging control over such a global, ubiquitous means of access cannot be the answer. However, we don't advocate a future where Google cannot derive value from its creations. The economic balance may be hard to find, but technically, solutions will emerge. We're committed to standing up for the future of the web, because we believe in its open, democratic potential.
Now, more than ever, we need you to join us in safeguarding the web's future. Come, contribute, and be part of the change. Visit https://github.com/dosyago/BrowserBoxPro today. Stand up for an open, fair, and free web.
> ... This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
No I do not? This sounds incredibly condescending as a user – I don't need to prove anything.
Their example of Play Integrity API is alarming because that essentially means either use this OS and this browser which has been verified only by us or we will not allow you to use the internet (SafetyNet vibes)
I’m hoping to get back to everyone as soon as possible. I hope you can all appreciate that I’m a human being and this has been a lot!
In the mean time, I wanted to repost my last comment on the GitHub issue thread [1]:
Hey all, we plan to respond to your feedback but I want to be thorough which will take time and it’s the end of a Friday for me. We wanted to give a quick TL;DR:
- This is an early proposal that is subject to change based on feedback.
- The primary goal is to combat user tracking by giving websites a way to maintain anti-abuse protections for their sites without resorting to invasive fingerprinting.
- It’s also an explicit goal to ensure that user agents can browse the web without this proposal [2]
- The proposal doesn’t involve detecting or blocking extensions, so ad-blockers and accessibility tools are out of scope.
- This is not DRM - WEI does not lock down content
- I’m giving everyone a heads up that I’m limiting comments to contributors over the weekend so that I can try to take a breath away from GitHub. I will reopen them after the weekend
> This is not DRM - WEI does not lock down content
Right, but there is a severe risk that you give the means to block non-mainstream clients, be it browsers, operating systems or devices, correct?
Yes, it's nice to know you may want to allow user agents to browse the web without WEI and I'm sure you have best intentions, but we are already in a world where banks and even stuff like Zoom just look at the user agent string and say "Ah, I don't know this browser, please install Chrome or Edge!". Why shouldn't they just similarly halt in the future if the WEI API does not exist? I (and the browser vendor) can spoof a user agent, but you can't spoof attestation, i.e. cannot fix it if websites don't allow my browser based on the (missing) WEI API. So, how will you prevent this?
How can you make sure that users of e.g. Asahi Linux will be able to use the web in the future? Who will attest their browser, and based on what? How will e.g. Gentoo users use the web with their built-from-source browser and OS? Will e.g. Netflix continue to work reliably on a user agent without WEI (but with Widevine) - and will the holdback population (if holdback is implemented at all - no offense intended, but you didn't sound too confident about this on the blink-dev mailing list, tbh) be large and significant enough for them to not just say "eh, can't verify, use the app please or wait a bit"?
> It’s also an explicit goal to ensure that user agents can browse the web without this proposal
How, in an information theory sense, can you stop website operators from using this attestation information to block subsets of users? The "holdback" mentioned in your reference link seems like an optional thing, as if we're concerned about good faith actors rather than the opposite.
It would be nice if the spec included examples of how a hypothetical bad actor couldn't abuse the spec to block non-attestors. i.e. How do we stop "this website only works in Chrome on Windows" but for attestation? Right now, it's trivial to "fix" because we can lie about our environment (it's likely just reading our User-Agent) and it's unlikely that the website will actually not work in other OS/browser contexts.
Some websites really do only work in certain contexts, but I think critics' concern is what happens when the website would work perfectly fine, but it refuses to. I think this is largely the same concern people have with mobile app permissions, but those can be gatekept by mobile app stores, which can enforce political goals such as "You can't ask for permissions you don't need and refuse to work when you don't get them"; websites have no such constraints.
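The holdback idea only constrains sites if withholding is mandatory and unpredictable: a site that hard-fails on missing attestation then turns away real users. A toy simulation (the 10% rate and both server behaviors are assumptions, not from the explainer):

```go
package main

import (
	"fmt"
	"math/rand"
)

// serve models two possible site policies when a request arrives without
// an attestation token: hard-fail, or fall back to classic defenses.
func serve(attested bool, hardFail bool) string {
	if attested {
		return "served"
	}
	if hardFail {
		return "blocked"
	}
	return "served-with-challenge" // fallback path: captcha, rate limit, ...
}

func main() {
	const holdback = 0.1 // assume 10% of attestable clients withhold the token
	rng := rand.New(rand.NewSource(1))
	blocked := 0
	for i := 0; i < 10000; i++ {
		attested := rng.Float64() >= holdback
		if serve(attested, true) == "blocked" {
			blocked++
		}
	}
	// A hard-failing site loses roughly the holdback fraction of real users,
	// which is the only economic pressure keeping the fallback path alive.
	fmt.Printf("blocked %d of 10000 legitimate requests\n", blocked)
}
```

If holdback is optional or its rate is negligible, that pressure disappears, which is the worry in this subthread.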
What's to stop websites from blocking random users now? Nothing, really. But we don't have to bypass any cryptographic attestations in order to try to work around those blocks. This spec seeks to stop that.
I suspect you didn’t just forget. It would look good to at least explain why you’re not following through on this, as it’s now Thursday in parts of the world.
In much the same vein as something clearly profoundly hurt you and you want to ruin the web out of spite, I root for global warming because it will destroy all the infrastructure on which you wish to take a giant dump.
> giving websites a way to maintain anti-abuse protections for their sites without resorting to invasive fingerprinting.
What prevents a website from using invasive fingerprinting _AND_ WEI together? I strongly suspect websites will end up using both WEI and invasive fingerprinting because:
1. Websites will want to use invasive fingerprinting on old browsers and it would work within browsers that deliberately don't implement WEI.
2. Websites will want to get as much invasive fingerprinting information as they can get their hands on.
3. It is another layer of fingerprinting in the likely event that WEI is ineffective due to TPM exploits[1], operating system/driver exploits, web browser exploits, determined actors using rooms of computer display recording devices and robotic arm mouse movers, etc. Invasive fingerprinting further increases the cost and complexity to actors the website is trying to block.
> This is not DRM - WEI does not lock down content
It is absolutely 100% DRM. Your proposal states that devices would need to attest their configuration to the website. The website can then block the user because it doesn't want to show the news article to a Linux device where the user can block annoying pop-up ad videos, copy and paste the text or save the web page. The website can instead only allow devices which are factory-configured to block copy+paste, block saving web pages, block screenshots, etc. It's still DRM even with the proposed holdback mechanism because in the best case, a user will still be blocked 9/10 times (or whatever the holdback mechanism is set to). The more likely scenario is a website owner will just refuse to serve content until the client has attested itself. "The requested page can not be provided due to an unexpected problem. Try again in a few minutes."
There are so many flaws with the scheme as currently proposed I feel I could write for days:
Will websites be expected to block and ban users of AMD-SP now that it is broken[1]? Or will whoever conducts ad fraud just buy all the AMD-SP devices they can get their hands on?
As another author replied, are Gentoo users that compile their web browsers and operating systems from scratch just ignored, and the proposal pretends it won't impact these users?
How does the proposal allow users with specialist accessibility software to browse the web without being blocked for being a minority group that is not economically worth website owner's time to support? What prevents abuse of said specialist accessibility software for other purposes?
How would a new start-up developing a competing browser or phone from scratch, very much unknown and in a minority position, be able to convince millions of website owners to unblock/allow their new browser or phone? Cloudflare's Friendly Bots program refuses to respond to open source projects, so why would Cloudflare as an implementer of WEI care about new start-ups or small open source software projects?
This is a level or two below where my knowledge of the browser trails off, so I'll ask generally: how would this interact with things like the WebKit Content Blocker API?
Step 1: Sites require a "secure" (read proprietary) browser like "Google Chrome", "Microsoft Edge", "Safari" or refuse to operate.
Step 2: "Secure" browsers change the behavior of their implementation of the Content Blocker API so an industry-accepted "secure" site like Google Ads can opt out of being blocked ("You wouldn't want a misconfigured content blocker to accidentally break a verified secure site, right?")
Step 3: ???
(Force the users into a take it or leave it choice for whether they want to be part of the internet or not)
Most likely all extensions and content blockers would be disabled for DRMed sites. Or maybe they'd be enabled but the browser would tell the site you have a blocker enabled and the site would refuse to load.
Either Apple will make their devices refuse to sign the attestation if you're using it, or Google will remove Apple from its list of trusted attesters.
There have been some wishy-washy claims that maybe, perhaps, user-scripting & debugging will be left intact, that the intent here is about other levels.
But there's basically no real meat to this specification. It's abstract: it doesn't really say what Web Environment Integrity is; that's up to the browser to determine, and the rules could keep getting more and more specific at the browser's leisure.
What a weird dystopian world we find ourselves in. And, sadly, the despair in the comments reflecting utter defeat is very troubling. Times like this make me really miss the 90s, when the tech culture embraced open source and always found a way to outsmart the "googles" of that day. It is certainly a different time these days, however the game being played has always been the same. I wish people would completely re-envision the internet. Because, in reality, google has only captured one protocol. The web is much bigger than you all think. If you build it they will come sounds like a good philosophical statement to end this with.
Okay, the proposal is what it is but it doesn’t explain how the attestation is generated. So this would look into the underlying OS and decide if my computer is a real computer? And when it has doubts it displays some pictures and asks me which ones show bicycles?
This already exists on Android in the form of "SafetyNet", which apps can use to detect if they are running on a device that isn't "secure", like a device with a custom ROM or a rooted device
Can someone explain to me what's so fundamentally bad with this proposal?
My understanding is that websites can essentially confirm whether the user is likely to be a human because he/she accesses the website from a certified device.
Won't this mean there is less need for Captchas, logins and pay walls?
The doc also mentions that this will remove the need for some use-cases of fingerprinting.
I imagine from a user perspective this will be an improvement.
I could imagine governments getting behind this, there are a few proposed laws that require age verification, like the online safety bill in the UK. You could easily see them adding age verification on top of this proposal.
Would moving control of web standards under governmental control help? The FTC and similar government orgs could take ownership and enforce standards, labeling browsers a commercial utility.
I don't care if the web is relevant if it's not the web anymore. Ruining a platform to keep it relevant isn't in my interest as a user.
If we shoot this down and every bank requires me to download a mobile app, then fine. What this is proposing is basically to turn websites into mobile apps: device controlled, unmodifiable, broken on any non-approved hardware. If that's going to be the case regardless, I'd rather just download the app, at least that would be more honest about what's actually going on, and at least I'd still be able to use my adblocker when I browse the web.
Why aren't banking sites gone already? Because users expect to be able to use their desktop to do their banking. But if they can simply require you to use Chrome, suddenly they can kill both birds with one stone! This is a bad thing for the web.
It looks very similar to the “secure boot” mechanisms in Windows and other commercial client OS.
Strikes me as very dangerous though on the web where there are so many paths for malware to get in and this could get in the way of plugging the holes.
Sure you can fake the results of an attestation in your fork, but your fork would be using your own key to sign the response, a key that the site can reject.
the TPM does the attestation of the entire running environment, starting with firmware, through the OS, through the browser all the way down to the website
saurik|2 years ago
The result: there is now effectively one dominating web browser run by an ad company who nigh unto controls the spec for the web itself and who is finally putting its foot down to decide that we are all going to be forced to either use fully-locked down devices or to prove that we are using some locked-down component of our otherwise unlocked device to see anyone's content, and they get to frame it as fighting for the user in the spec draft as users have a "need" to prove their authenticity to websites to get their free stuff.
(BTW, Brave is in the same boat: they are also an ad company--despite building ad blocking stuff themselves--and their product managers routinely discuss and even quote Brendan Eich talking about this same kind of "run the browser inside of trusted computing" as their long-term solution for preventing people blocking their ads. The vicious irony: the very tech they want to use to protect them is what will be used to protect the status quo from them! The entire premise of monetizing with ads is eventually either self-defeating or the problem itself.)
tentacleuno|2 years ago
The person who wrote the proposal[0] is from Google. All the authors of the proposal are from Google[1].
I've been thinking carefully about this comment, but I really don't know what to say. It's absolutely heartbreaking watching something I really care about die by a thousand cuts; how do we protest this? Google will just strong-arm their implementation through Chromium and, when banks, Netflix & co. start using it, they've effectively cornered other engines into implementing it.
This isn't new to them. They did it with FLoC, which most people were opposed to[2]. The most they did with FLoC was deprecate it and re-release it under a different name.
The saving grace here might be that Firefox won't implement the proposal.
[0]: https://github.com/RupertBenWiser [1]: https://github.com/RupertBenWiser/Web-Environment-Integrity/... [2]: https://news.ycombinator.com/item?id=26344013
jonathansampson|2 years ago
Brave is an advertising company, but we’re quite different from Google and others in this space. Brave's ad notifications are opt-in and engineered in such a way to protect and preserve user privacy. I'm not sure where you saw Brave engineers talking about ways to prevent users from blocking our ads—we don’t try to prevent users from blocking Brave Ads.
If you wish not to see Brave’s ad notifications, you can easily avoid them (by not opting-in in the first place, or by throttling/disabling-entirely). There are no special hoops to hop through, or technical incantations to utter. We believe digital advertising is better when it is built on user-first principles and consent.
If a user opts-in to Brave’s ad notifications, their device proceeds to routinely download-and-maintain a regional catalog of available inventory. The user's device then evaluates the catalog entries for relevance. User data is NOT sent off-device in Brave’s model. If a relevant ad entry is found, it is then displayed to the user in such a time and manner for minimal distraction. When an ad notification is shown, the user receives 70% of the associated ad revenue for their attention (no clicks required).
Again, if the user wishes to not see ad notifications, they can simply choose not to opt-in to viewing them. If the user wishes to not see the occasional sponsored image on the New Tab Page, they can turn those off from the New Tab Page itself with 2 clicks ( Customize › Show Sponsored Images). Importantly, the user is always in control. They decide whether ads will be displayed, and to what degree (e.g., the user can set a limit on ad notifications per hour).
Brave isn't interested in coercing users to view advertisements.
keepamovin|2 years ago
We are an open-source browser developer and these concerns deeply resonate with us. We understand the paradox Alphabet faces, yet we firmly believe the solution isn't about exerting "DRM" level control over a ubiquitous means of access.
We're committed to standing up for the future of the web. We don't just see ourselves as a browser company but as advocates for an open, fair, and free web. We invite you to join us in this endeavor. Visit https://github.com/dosyago/BrowserBoxPro today. Stand with us for an open, free, and fair web.
madeofpalk|2 years ago
Interesting that fixing "how to center a div" is considered harmful, but WebSerialPort is actually very good?
> The result: there is now effectively one dominating web browser run by an ad company who nigh unto controls the spec for the web itself
I don't think this is reality. Google proposes a bunch of APIs that go nowhere because the other browser vendors consider them harmful. Google's previous attempts to drive more adtech into the browser have failed due to a lack of support from other browser vendors.
I think "who drives the web specs" is probably in the best situation possible. It's largely Google, Mozilla, and Apple who all have slightly different interests in what makes a good web platform, and the web ends up better for it.
yoavm|2 years ago
Hopefully this will not be implemented, but still it's a good wake up call for those who still think that Chrome is more than an ads-delivery app with some browser functionality.
Aerbil313|2 years ago
troupo|2 years ago
"powerful-but-easy-to-code APIs for OS-level access" are actually hard-to-implement-right functionality that is often pushed into browsers with very little discussion or consideration.
anonzzzies|2 years ago
The entire premise of 'people want expensive-to-make websites, but don't want to pay for them' is already a bit flawed. I do pay for YouTube to not see ads, and I wish I could pay Google (and Meta) to not serve me ads on any site they have ads on, including Google search. That would make life a lot nicer. And I personally know no one who would not sign up for that. But that doesn't happen, I guess because ads make more money (not from me, but hey)?
asistla|2 years ago
To begin with, pretty much every government employee in the world has some proprietary software developed within the country for security reasons. Old, even obsolete machines. Out of date software, unlicensed/unregistered software, etc, etc. Much of this is also true of banks.
This means if this is put in place as in the spec, it will affect banks and governments negatively. And as powerful as Google is, I don't think it will win over governments + banks.
But again, all the above could be nonsense, and Google will gatekeep the web. It found itself the loser in the AI race, and given the ongoing arguments about privacy and who owns the data AI is trained on, it knows the next best thing is to own the playground where the AI trains. That may not be an entirely bad thing either; sad, perhaps, but if this goes on and browsing becomes a pain, maybe people will just spend less time online? That's a good outcome in my books.
1vuio0pswjnm7|2 years ago
https://news.ycombinator.com/item?id=36823871
Got flagged and killed. :)
leshenka|2 years ago
kinda abusing if you ask me
chrisco255|2 years ago
quenix|2 years ago
How can he reconcile these views with this spec, which he is the main author of? Surely Ben sees the parallels?
He writes: "Apple’s strategy with this is obvious, and it clearly works, but it still greatly upsets me that I couldn’t just build an app with my linux laptop. If I want the app to persist for longer than a month, and to make it easy for friends to install, I had to pay $99 for a developer account. Come on Apple, I know you want people to use the App Store but this is just a little cruel. I basically have to pay $99 a year now just to keep using my little app."
It's honestly comical and a little sad.
[1]: http://benwiser.com/blog/I-just-spent-%C2%A3700-to-have-my-o...
jbk|2 years ago
It can be reconciled with love for money and total lack of moral fiber.
Aka « I don’t give a shit about my actions destroying everyone, as long as I get paid »
rpastuszak|2 years ago
I can tell you that the machine is so big and the responsibilities diluted to such extent that no one really feels like they're making a morally dubious decision, it just sort of happens on its own, magically.
kromem|2 years ago
Even the ad example is about not charging advertisers for bot views, which is a huge problem right now.
The problem is that a tool can often be used for evil as easily as for good, and the more the standard is used to block ad blockers rather than simply to filter out User-Agent-spoofing bots, the more this tool ends up evil.
And even if the limited scope in the proposal was the true intent, there's nothing preventing scope creep.
Though reading over it all, I do think the assumptions about motivation in most of the comments here are misaligned. This does seem to be primarily focused on the growth in bot activity, and on making it harder for bots to present themselves to servers as human.
Still, the spirit of who controls the client is very much at stake, and the comments here are ostensibly right that this is a measure that should not happen.
(And frankly, given the bubbling attitudes about enshittification, coupled with the coming lowered barrier of entry for competition against software firms and content producers, I think this is very much the kind of thing that may backfire horribly if forced through.)
troupo|2 years ago
It's easy: he works for Google. Every single public-ish web developer and/or devrel from Google will spend inordinate amounts of time lambasting Apple, writing essays on how Apple cripples the web, etc.
While Google has broken the web so badly that Apple would need several decades to come anywhere close.
Note: the moment they leave Google, they may slightly change their tune and criticise Google a bit. For an example, see Alex Russell of web components fame, who went to work at Microsoft after spending a decade making sure that web browsers are truly unimplementable: https://infrequently.org/2021/07/hobsons-browser/
ryukafalz|2 years ago
> Apple’s strategy with this is obvious, and it clearly works, but it still greatly upsets me that I couldn’t just build an app with my linux laptop.
Ben, you've thought about the impact your proposal would have on Linux laptop users, right? Surely you sometimes use your laptop for banking, right?
M2Ys4U|2 years ago
― Upton Sinclair
jefftk|2 years ago
turquoisevar|2 years ago
suction|2 years ago
[deleted]
phpnode|2 years ago
jabbany|2 years ago
It's not a "threat to" the industry... It literally _comes from_ the industry... Unless the tech industry is willing to lose one of its biggest sources of revenue, this is exactly what the industry wants...
userbinator|2 years ago
It all started with "trusted computing", where "trusted" means "not under the owner's control". Then they tried to spin it as a "security" thing with TPMs, and created the impression that those speaking out against them were either malicious actors or insane conspiracy theorists.
Now it is actually happening. They want to control exactly what hardware and software you use, and they're doing it by ostracisation, which makes this even more sinister: you're still technically allowed to use software and hardware of your choosing, but you'll be blocked from participating.
I still remember when Intel was forced to revert adding a unique serial number to its processors because of widespread outrage, so it is possible for the public to make a difference; they just need to be educated about the coming dystopia and agitated enough to care and act upon it.
Perhaps we can start by spreading instructions on how to disable TPMs and "secure" boot along with all the advantages that come with doing so (custom drivers, running whatever OS you want, hardware you actually own, etc.) Of course the corporate-owned "security" lobby is going to start screaming that it's "insecure", but we need to make it clear that this is not the "security" we want because it is inherently hostile to freedom.
"Those who give up freedom for security deserve neither."
https://www.gnu.org/philosophy/right-to-read.html
xg15|2 years ago
Second is more focus on nag screens, "nudges" and other deliberately degraded UX. E.g., on the Surface tablets you're technically able to disable secure boot, but you'll then be greeted with an ugly bright red boot screen every time you turn the device on. This stuff can have significant psychological impact, especially on "casual" users.
caesil|2 years ago
kibwen|2 years ago
It's completely and utterly irrelevant that Chromium is open source, because the web is a protocol, and having the source for an implementation of the protocol doesn't matter in the least when you don't control the protocol. You can't just fork Chromium and remove a feature, because websites expect the feature, and your browser won't work on them. You can't just fork Chromium and add a feature, because websites don't care about your tiny fork and won't use your feature. You can't fork Chromium, you have to fork the entire web.
netvarun|2 years ago
jabbany|2 years ago
But capitalism does what it does best, and will happily take advantage of (and try to prolong) a natural monopoly situation even if the origins were genuine.
In fact, this is why there are regulations around "utilities". They are also an area where a natural monopoly is the optimal arrangement, so they shouldn't be treated as a free market.
(Food for thought: Perhaps the Internet infrastructure should be a utility too? Browser makers could be forced to be non-profit, which would mean companies need to divest themselves of the "Internet business" if they want to do "business _over_ the Internet")
quickthrower2|2 years ago
We could be here saying "Google was genius releasing Google Plus - that stopped Facebook etc. in their tracks and now they own social media"
danielvaughn|2 years ago
oars|2 years ago
dmantis|2 years ago
They don't even try to disguise it.
jabbany|2 years ago
> Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
I find it quite cute that they start with "users" as if it's a user demand but in the next sentence switch to "advertisers" --- the real target population.
keepamovin|2 years ago
intelVISA|2 years ago
You don't berate a kitchen for serving food, why would you look at any Google contraption from HTTP/3 to Chrome as anything but a vehicle for selling ads and/or mining data?
joshuamorton|2 years ago
TheAceOfHearts|2 years ago
"Sorry, you can only access this website using this specific device with a browser compiled by Big Tech, it's for your own good."
Not surprising that this is all coming from Google, the world's biggest adtech company.
akomtu|2 years ago
Aeolun|2 years ago
kibwen|2 years ago
chrisco255|2 years ago
Spivak|2 years ago
* The US would never kill its golden calf except as a last resort.
* The US standard for antitrust is consumer harm. Google is implementing a thing that other companies have been asking for, and any company can join and send their own attestation signals. If those other companies in unrelated markets then use it to drop support for unapproved stacks (which could reasonably include Android/Chrome), that won't fall on Google.
raggi|2 years ago
Google Cloud becomes a VC-driven organization that slowly eats margin dirt against its competitors until insolvency. There was no way for it to recover enough resources from the mothership before being split out.
Search trundles along ok, assuming it took search ads and a ton of core infra with it, but it never makes enough money to ship a decent product extension. It hopefully removes some products it can no longer afford margin on, which have long produced distorted results (albeit with good intention). It suffers slow brain drain, and users end up using multiple search engines for every search again because no one has good search quality. The monopoly breaks, but so does this part of the internet, bolstering apps and information sites ecosystems positions. Wikipedia is the only real winner we want in this space.
Display Ads goes like it just discovered faster-than-light travel, no longer held down by the ol' ball and chain that is the entire rest of the company. They go much darker as they no longer have the internal goodwill pressure from the rest of the company, and increasingly join the bad actors. In 20 years they eventually reach LexisNexis-level evil in terms of multi-directional user-data sharing.
YouTube heads off into the stratosphere along with Display Ads. They try to maintain a better public face, but having to spin up their own ad market solutions drops ad quality even further; margins suffer, but their position remains ossified and they slowly recover. They get a bit more agile, too: no longer disrupted every other year by some mandate from the mothership, they're better able to keep up with new markets and more rapidly crush new competition.
Workspace decays very slowly. All the AI stuff halts and gets ripped out as there's no one there to work on it. The Drive product has to scramble to figure out how to rebuild without all the internal commodity infrastructure support. Gmail gets unstable for a while due to the weight of the infrastructure sitting on many fewer shoulders. Global instability results from the rapid de-distribution of the system, as the production infrastructure is sliced apart in a rush to meet the forced division. The economy takes a big dive as a result, as half the world loses email access regularly, bills don't get paid, etc.
Photos spins out into its own thing, and dies rapidly, as selling the odd photo frame here and there just can't meet margin.
Chrome tries to get funding from Microsoft, and eventually it gets purchased wholesale, but the core team gets ripped up and largely discarded. Who knows how the OSS products fare; it depends on which executives at Microsoft win this purchase. Eventually the main product gets shuttered, with Edge being the only replacement.
The telco products all shutter immediately, with no recourse. Same with R&D.
AI tries to split out into its own thing, but fails to find a business and suffers constant reputation problems. After 10 years of trying it eventually shuts down; the acquiring company, however, immediately spins up multiple successful products and makes a big dent in the now well-established market.
Android spins out into its own organization. The first decade the heat of internal politics in new found vacuums crushes them, eventually they find footing and head back to their open core roots, get scrappy and do some new things. Along the way their size fluctuates as the market forks and fractures as it does, but Android manages to hold its position as the western center of its universe.
Chromecast, ChromeOS, Nest all suffer badly, having no core ecosystem to ship into anymore. They attempt to buddy up with Android, which pushes them around trying to androidify everything, resulting in poor UX and/or poor margins across the board. Eventually all but ChromeOS shutter, and then the ChromeOS business also closes, but it leaves behind an OSS gift that a core group of passionate individuals try to limp forward as best they can under the new Microsoft Edge overlords.
Users find their data fractured across a dozen companies, with poor SSO integrations. Security mistakes abound, and lots of people are affected. Online crime goes through the roof; it feels like the 90s again, but on a much, much larger scale. Lots of people lose their accounts, and are affected by service outages and the ongoing economic effects from those. ISPs jump at the chance to step in, and lots of users start trying to use alternative email services again. They experience poor discoverability, lots more security problems, and constant space pressure. Vultures make off like bandits, and Amazon, Apple, Microsoft, and Cloudflare are the biggest winners in the fallout.
stainablesteel|2 years ago
I agree they should be broken up, but it might be the wrong time for it.
JeremyNT|2 years ago
https://news.ycombinator.com/item?id=36800789
https://news.ycombinator.com/item?id=36785516
https://news.ycombinator.com/item?id=36800744
https://news.ycombinator.com/item?id=36808231
https://news.ycombinator.com/item?id=36791711
https://news.ycombinator.com/item?id=36789691
https://news.ycombinator.com/item?id=36816208
https://news.ycombinator.com/item?id=35862886
By the HN guidelines this is a repost, but it would be a mistake IMO to delete it. This would mark the end of the open web, but for whatever reason this issue has never really bubbled to the surface here before. It feels like something is different this time.
dahwolf|2 years ago
You'll have the cynically named "Privacy sandbox" that builds tracking directly into the browser. You curtail ad blockers by capping browser extensions. And then you allow access only to "attested" clients. Inescapable tracking and unblockable ads. And you'll get to see ever more of them over time.
If this isn't evil enough in itself, the way Google presents these initiatives in grossly misleading ways makes my blood boil.
Fuck "Be as evil as possible" Google. Absolutely pathetic company. I'm so done with them.
dleeftink|2 years ago
garganzol|2 years ago
fireant|2 years ago
maxloh|2 years ago
xg15|2 years ago
Google is absolutely in a position to implement this, and I figure a good number of sites would immediately join. However, the image of "tech" is tarnished enough already, and the general population is more aware of the importance of having control over their online experience.
So I'm kinda optimistic that more public awareness of this might lead to a larger backlash and might make Google think twice about continuing this, lest they risk a PR disaster.
doubt_me|2 years ago
[deleted]
tsujp|2 years ago
Ben Wiser (Google), Borbala Benko (Google), Philipp Pfeiffenberger (Google), and Sergey Kataev (Google) have got to be the most repugnant people on the planet for pretending this is anything but a scheme to destroy all privacy and freedom on the web all so fucking Google can sell more ads.
kykeonaut|2 years ago
[0] https://nyxt.atlas.engineer/
[1] https://www.ur-browser.com/
goku12|2 years ago
We need to promote alternative web engines like Servo and libweb and browsers based on them. Many of these engines need a major push to be competent enough for daily use. Gecko is also fine - but building a new browser with it is said to be hard.
[1] https://chromium.googlesource.com/chromium/src.git/+/refs/he...
maxloh|2 years ago
zarzavat|2 years ago
If Google wants a war, let’s give them one. Tell everyone who will listen. Give Google hell.
joepie91_|2 years ago
quickthrower2|2 years ago
I already got caught by this kind of thing: a https://github.com/nativefier/nativefier app wrapping YouTube Music doesn't work, because Google somehow detects that you are not using a trusted browser and refuses to serve it.
This is sort of moving in the "zero trust" direction (as in: let's use ML etc. to decide whether we trust something; username/password is not enough), which I fear because it will break a bunch of stuff for genuine users and make things less reliable.
wraptile|2 years ago
maxloh|2 years ago
atoav|2 years ago
> Users often depend on websites trusting the client environment they run in.
is already a lie. Users don't depend on websites trusting the client environment. Users expect the client to limit the way in which they have to trust websites.
Sure website owners would love to be able to trust user input, but that has little to do with the interest of the users.
If something starts with that kind of framing, you already know for certain that it is not going to benefit the user.
sergiomattei|2 years ago
In a field facing increasingly harder ethical questions every day, it’s important to start empowering our engineers to say “no” to ethically bankrupt things like this.
Anvoker|2 years ago
Strong cultural norms (e.g. hacker culture) might help for a while. But incentive structures eternally erode opposition.
It could make it easier for developers to band together and try to collectively veto things like this. But corporations with money can always buy the expertise of people, have them undermine the community, create their own parallel communities and influence public opinion and legislators.
FAANG salaries supercharge people's cognitive dissonance. They will find ways to excuse, minimize and ignore their contribution to the current situation.
Even HackerNews developed a sub-subculture of people who were constantly going into threads and dismissing remote attestation worries as "FUD".
It's unclear how to preserve cultural norms that stand in the way of market dominance. The only thing I can think of is having competing interests in the market. But whenever these align -- hell breaks loose.
wmf|2 years ago
nfriedly|2 years ago
Awful stuff like this wouldn't stand a chance if Google didn't have such a monopoly position.
For the sake of the open internet, please switch to a different browser. IMO, Firefox is best, but even something chromium based is probably fine. Just not Google Chrome.
66fm472tjy7|2 years ago
There are so many powerful interests that stand to gain from preventing e.g. ad-blocking and content capture. Thanks to Windows 11 requiring TPM, it is just a matter of time until hardware support for remote attestation is ubiquitous even on desktop computers.
Meanwhile, our (including myself) attention is (perhaps justifiably to some extent) on the latest news about $EXISTENTIAL_THREAT and how $THE_OTHER_SIDE did $EVIL_THING fed to us by the algorithm. Organizations that used to effectively fight threats to freedom like this (FSF, pirate parties, CCC, EFF, etc) have lost a lot of their support/influence and clarity of purpose over the last decade.
tshaddox|2 years ago
jabbany|2 years ago
In fact, their first example (!) outlines how this would be appealing to advertisers because they can attest a real human is viewing the content.
charcircuit|2 years ago
drbawb|2 years ago
I don't understand how you can have public/private key crypto as the basis for attestation and also have privacy? The two seem mutually exclusive. Is the private key supposed to be shared among a large cohort? (Which seems rather unwise, as it would make the blast radius of a compromised key disastrously huge.)
cesarb|2 years ago
From what I understood, the "attester" is a remote server, which signs the attestation with its own key, after somehow verifying that the browser, operating system, drivers, and machine are not running any code that this remote server does not completely trust. That key can at most be used to identify the remote server, which is supposedly shared by a wide number of devices.
Yes, this means that your browser depends on having a working connection to that remote server for every attestation it makes, and that if that remote server colludes with the web page (or is compromised), it can leak your identity.
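A minimal sketch of that split in Python, with an HMAC standing in for the attester's asymmetric signature (in the real scheme the attester would sign with a private key and publish the public key; the key name and token fields here are invented for illustration):

```python
import hashlib
import hmac
import json

# Stand-in for the attester's signing key. A real attester would use an
# asymmetric key pair so websites never hold the signing secret.
ATTESTER_KEY = b"attester-signing-key"

def attester_sign(verdict):
    """The remote attester signs a verdict about the client environment."""
    payload = json.dumps(verdict, sort_keys=True).encode()
    sig = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def website_verify(token):
    """The website checks the attester's signature; it never inspects the
    client directly, so all it learns is what the attester asserts."""
    expected = hmac.new(ATTESTER_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return None  # forged or tampered token
    return json.loads(token["payload"])

token = attester_sign({"environment_trusted": True})
print(website_verify(token))  # the verdict round-trips intact
```

The parent's two observations fall out of this structure: the website only ever sees the attester's signature (so the key identifies the attester, not the user), but the attester must be reachable for every attestation and is free to log who asked.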
wmf|2 years ago
shiftingleft|2 years ago
politelemon|2 years ago
I feel this is the bit that's going to be hand waved away for the sake of convenience.
danShumway|2 years ago
I also wonder what those certain baseline requirements are going to be? Weird that they're left ambiguous.
It's probably nothing to worry about. We have a ton of precedent with Widevine that "it's okay, we'll license to anyone who meets requirements" wouldn't ever be abused[0]. It's fine, you just meet the baseline requirements that aren't spelled out yet and that might be subject to change and that certainly won't include headless or highly scriptable or experimental browsers. Nothing to worry about.
[0]: https://blog.samuelmaddock.com/posts/google-widevine-blocked...
snowc0de|2 years ago
Governments will love this for the protection and security it provides, among other things. I wish I could say I was surprised, but Google has continued to fail to deliver even when it tries a power-grab play like this.
Feature requests:
- Add a distributed bad-actors list, similar to DNS.
- Start the process of introducing this functionality at the hardware level.
- Require photo personal identification to prove humanity.
supriyo-biswas|2 years ago
I’d have a field day grilling the CEOs of Big Tech companies over stuff like this that only serves to kneecap their current and future competitors.
c0l0|2 years ago
The only way around the dystopia this will lead to is to constantly and relentlessly shame and harass all those involved in helping create it. The scolding in the issue tracker of that wretched "project" shall flow like a river, until the spirit of those pursuing it breaks and it is disbanded.
And once the corporate hydra has regrown its head, repeat. Hopefully, enough practice makes those fighting the dystopia effective enough to one day topple the sponsoring and enabling organisations as a whole, instead of only their little initiatives leading down that path.
Not a pretty thing, but necessary.
rpastuszak|2 years ago
danShumway|2 years ago
sjatkins|2 years ago
hamishwhc|2 years ago
Also interesting that it's implied in the explainer that attesters are just HTTP endpoints dealing with “billion-qps” traffic. Again, see the point above, but also: how can we trust any attester not to use the (completely unobfuscated) information the user agent is sending them?
I guarantee that big websites will host their own attesters, only allow use of their attester, and require attestation for every request, allowing them to fingerprint every single user.
andersa|2 years ago
You can't run your own attester - these are implemented by the companies who provide the hardware, such as Microsoft or Apple.
lucideer|2 years ago
leodriesch|2 years ago
> An owner of this repository has limited the ability to comment to users that have contributed to this repository in the past.
userbinator|2 years ago
I never thought I'd see a CoC being used as ammo against this, but it's excellent.
traspler|2 years ago
benatkin|2 years ago
Even still, I think that it is wrong to give something a convenient name that espouses some virtue. They should have chosen something like Web Environment Verification API.
I think it's spyware, and I don't like it. It reminds me of the Stripe API, where you have to run some JavaScript on your site that snoops on your interactions and reports stuff to Stripe that it uses to detect fraud. https://news.ycombinator.com/item?id=22937303
cwales95|2 years ago
wraptile|2 years ago
rpastuszak|2 years ago
schroeding|2 years ago
Google will arguably kill legacy SafetyNet (which is circumventable, as it's not rooted in hardware) soon. Microsoft pushes extremely hard for remote attestation capability by requiring TPMs. Very soon, only an insignificant number of client devices will be unable to perform remote attestation by the major vendors based on hardware trust modules.
Hard to stay optimistic for the open web. :/
pmlnr|2 years ago
teddyh|2 years ago
<https://news.ycombinator.com/item?id=31835121>
<https://news.ycombinator.com/item?id=33210846>
mellosouls|2 years ago
"Google to explore alternatives to robots.txt".
[1] https://blog.google/technology/ai/ai-web-publisher-controls-...
[2] https://news.ycombinator.com/item?id=36641607
Zamicol|2 years ago
Giving more control to corporations and less control to individuals.
hightrix|2 years ago
Google is heading in that direction, and accelerating.
MrYellowP|2 years ago
And, as all too often, the "conspiracy theorists" were right, but nobody cares to ever reflect on that, because nobody seems to be able to actually think about things anymore unless the thoughts are breast-fed to them.
We're heading towards a reality where copy-pasting from a website is going to cost you money if the license says so. Looking at the status quo of technology, almost everything required for a "trusted" environment is already present in consumer hardware.
We have hypervisors, virtualization, containerization. Real-time encryption/decryption of data in RAM/CPU is coming eventually. Blockchain technology makes verification of digital ownership secure and easy. AI will make it stupidly easy for corporations to make sure that everyone complies, and it will be everywhere within the next few years.
A glimpse of this reality can be seen in NovaQuark's "Dual Universe", where everything is behind DRM. A "metaverse" company for a reason, I guess.
signed_keys|2 years ago
GrinningFool|2 years ago
zzo38computer|2 years ago
garganzol|2 years ago
dgb23|2 years ago
Google needs to stop this bullshit and start innovating again. First AMP, now this? Leave the web alone!
Where's the Google that makes great web applications with simple, great UX, like Maps, Gmail, Drive and Search (which has severely degraded)?
Or great tools like Go, Lighthouse and Devtools?
Disappointing!
It's like they're trying really hard to be the villain.
thrown1212|2 years ago
jchw|2 years ago
locriacyber|2 years ago
Is there any real alternative to the multimedia Web? Or do We need to make one now?
What we need:
- hypertext, links
- raster and vector images
- videos
- responsive layout system of said hypertext (cassowary)
- programs that can control the page content fully
joelthelion|2 years ago
miniBill|2 years ago
keepamovin|2 years ago
As creators of a competing open-source browser, we're stirred by this. We're concerned about the future integrity of browsing - whether run remotely, headlessly, or semi-automated, we see all these threatened by such attestations. But we believe in the power of the collective, and the spirit of innovation that thrives in the open-source community.
The conundrum is real for Alphabet, but leveraging control over such a global, ubiquitous means of access cannot be the answer. However, we don't advocate a future where Google cannot derive value from its creations. The economic balance may be hard to find, but technically, solutions will emerge. We're committed to standing up for the future of the web, because we believe in its open, democratic potential.
Now, more than ever, we need you to join us in safeguarding the web's future. Come, contribute, and be part of the change. Visit https://github.com/dosyago/BrowserBoxPro today. Stand up for an open, fair, and free web.
sadn1ck|2 years ago
No I do not? This sounds incredibly condescending as a user – I don't need to prove anything.
Their example of the Play Integrity API is alarming, because it essentially means "either use this OS and this browser, which only we have verified, or we will not allow you to use the internet" (SafetyNet vibes).
ccheney|2 years ago
wmf|2 years ago
RupertWiser|2 years ago
I’m hoping to get back to everyone as soon as possible. I hope you can all appreciate that I’m a human being and this has been a lot!
In the mean time, I wanted to repost my last comment on the GitHub issue thread [1]:
Hey all, we plan to respond to your feedback but I want to be thorough which will take time and it’s the end of a Friday for me. We wanted to give a quick TL;DR:
- This is an early proposal that is subject to change based on feedback.
- The primary goal is to combat user tracking by giving websites a way to maintain anti-abuse protections for their sites without resorting to invasive fingerprinting.
- It’s also an explicit goal to ensure that user agents can browse the web without this proposal [2]
- The proposal doesn’t involve detecting or blocking extensions, so ad-blockers and accessibility tools are out of scope.
- This is not DRM - WEI does not lock down content
- I’m giving everyone a heads up that I’m limiting comments to contributors over the weekend so that I can try to take a breath away from GitHub. I will reopen them after the weekend
[1] https://github.com/RupertBenWiser/Web-Environment-Integrity/...
[2] https://github.com/RupertBenWiser/Web-Environment-Integrity/...
schroeding|2 years ago
Right, but there is a severe risk that you give websites the means to block non-mainstream clients, be they browsers, operating systems or devices, correct?
Yes, it's nice to know you may want to allow user agents to browse the web without WEI, and I'm sure you have the best intentions, but we are already in a world where banks and even stuff like Zoom just look at the user agent string and say "Ah, I don't know this browser, please install Chrome or Edge!". Why shouldn't they similarly halt in the future if the WEI API does not exist? I (and the browser vendor) can spoof a user agent, but you can't spoof attestation, i.e. I cannot fix it if websites don't allow my browser based on the (missing) WEI API. So, how will you prevent this?
How can you make sure that users of e.g. Asahi Linux will be able to use the web in the future? Who will attestate their browser based on what? How will e.g. Gentoo users use the web with their build-from-source browser and OS? Will e.g. Netflix continue to work reliably on a user agent without WEI (but with Widevine) - and will the holdback population (if holdback is implemented at all - no offense intended, but you didn't sound too confident about this on the blink-dev mailing list, tbh) be large and significant enough for them to not just say "eh, can't verify, use the app please or wait a bit"?
tetrep|2 years ago
How, in an information theory sense, can you stop website operators from using this attestation information to block subsets of users? The "holdback" mentioned in your reference link seems like an optional thing, as if we're concerned about good faith actors rather than the opposite.
It would be nice if the spec included examples of how a hypothetical bad actor couldn't abuse the spec to block non-attestors. i.e. How do we stop "this website only works in Chrome on Windows" but for attestation? Right now, it's trivial to "fix" because we can lie about our environment (it's likely just reading our User-Agent) and it's unlikely that the website will actually not work in other OS/browser contexts.
Some websites really do only work in certain contexts, but I think critics' concern is what happens when the website would work perfectly fine, but refuses to. These are largely the same concerns people have with mobile app permissions, but those can be gatekept by mobile app stores, which can enforce political goals such as "You can't ask for permissions you don't need and refuse to work when you don't get them"; websites have no such constraints.
What's to stop websites from blocking random users now? Nothing, really. But we don't have to bypass any cryptographic attestations in order to try to work around those blocks. This spec seeks to stop that.
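To make that concern concrete, here is a hypothetical server-side gate in Python. The header name and verification helper are invented, not from the spec; the point is that nothing in the protocol itself prevents an operator from deploying exactly this:

```python
def verify_with_attester(token):
    # Stand-in for validating the attester's signature on the token.
    return token == "valid-token"

def handle_request(headers):
    """Hypothetical gate: refuse any client without a verified attestation.

    The spec's optional holdback is the only counterweight to this
    pattern; a bad-faith operator can simply not serve the no-token case.
    """
    token = headers.get("Sec-Web-Environment-Integrity")  # invented header
    if token is None or not verify_with_attester(token):
        return 403, "Unverified client environments are not supported."
    return 200, "content"

print(handle_request({}))  # an unattested browser is simply turned away
```

Unlike a User-Agent check, this cannot be worked around by lying about the environment, which is exactly the asymmetry the parent comment describes.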
tobr|2 years ago
I suspect you didn’t just forget. It would look good to at least explain why you’re not following through on this, as it’s now Thursday in parts of the world.
CatWChainsaw|2 years ago
dhx|2 years ago
What prevents a website from using invasive fingerprinting _AND_ WEI together? I strongly suspect websites will end up using both WEI and invasive fingerprinting because:
1. Websites will want to use invasive fingerprinting on old browsers and it would work within browsers that deliberately don't implement WEI.
2. Websites will want to get as much invasive fingerprinting information as they can get their hands on.
3. It is another layer of fingerprinting in the likely event that WEI is ineffective due to TPM exploits[1], operating system/driver exploits, web browser exploits, determined actors using rooms of computer display recording devices and robotic arm mouse movers, etc. Invasive fingerprinting further increases the cost and complexity to actors the website is trying to block.
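The layering worry above can be sketched as a policy function. This is a hypothetical site-side policy, not anything from the proposal, showing that attestation and fingerprinting compose rather than substitute for one another:

```python
def classify_client(attestation_ok, fingerprint):
    """Hypothetical policy layering WEI on top of invasive fingerprinting.

    WEI gives sites no reason to drop fingerprinting: they can keep both
    signals and fall back to fingerprints when attestation is absent.
    """
    # Toy fingerprint strength: count how many invasive signals resolved.
    signals = ("canvas_hash", "font_list", "gpu_model", "timezone")
    score = sum(1 for s in signals if fingerprint.get(s))
    if attestation_ok and score >= 2:
        return "serve"           # attested and richly fingerprinted
    if attestation_ok:
        return "serve-limited"   # attested, but thin fingerprint
    if score == len(signals):
        return "challenge"       # unattested yet fully fingerprinted
    return "block"               # neither signal: turned away

print(classify_client(False, {"canvas_hash": "x"}))
```

Under this kind of policy the fingerprinting machinery never goes away; attestation just becomes one more input, which is the stated worry in points 1-3.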
> This is not DRM - WEI does not lock down content
It is absolutely 100% DRM. Your proposal states that devices would need to attest their configuration to the website. The website can then block the user because it doesn't want to show the news article to a Linux device on which the user can block annoying pop-up ad videos, copy and paste the text, or save the web page. The website can instead allow only devices which are factory-configured to block copy and paste, block saving web pages, block screenshots, etc. It's still DRM even with the proposed holdback mechanism, because in the best case a user will still be blocked 9/10 times (or whatever the holdback rate is set to). The more likely scenario is that a website owner will simply refuse to serve content until the client has attested itself: "The requested page cannot be provided due to an unexpected problem. Try again in a few minutes."
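The holdback argument can be sketched numerically. Assuming a hypothetical 10% holdback rate (the actual rate is unspecified in the proposal), a site that hard-requires a token behaves roughly like this:

```python
# Hypothetical numbers: the attester withholds a token on 10% of requests
# even from fully "attestable" clients, to discourage sites from hard-
# requiring one. A site that blocks every tokenless request then behaves:

HOLDBACK_RATE = 0.10  # fraction of requests where the token is withheld

def served_fraction(client_can_attest: bool, site_requires_token: bool) -> float:
    """Fraction of this client's requests the site serves."""
    if not site_requires_token:
        return 1.0
    if not client_can_attest:
        return 0.0  # e.g. a modified browser: blocked every time
    return 1.0 - HOLDBACK_RATE  # "approved" clients still lose 10% of requests

# A strict site locks out non-attesting clients entirely...
assert served_fraction(client_can_attest=False, site_requires_token=True) == 0.0
# ...at the cost of only 10% of its "approved" traffic.
assert abs(served_fraction(client_can_attest=True, site_requires_token=True) - 0.9) < 1e-9
```

This illustrates the commenter's point: the holdback imposes only a modest cost on a site that chooses to block all tokenless requests, so it is a weak deterrent against exactly the behavior it is meant to prevent.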
There are so many flaws with the scheme as currently proposed I feel I could write for days:
Will websites be expected to block and ban users of AMD-SP now that it is broken[1]? Or will whoever conducts ad fraud just buy all the AMD-SP devices they can get their hands on?
As another commenter replied, are Gentoo users who compile their web browsers and operating systems from scratch just ignored, with the proposal pretending it won't impact them?
How does the proposal allow users with specialist accessibility software to browse the web without being blocked for being a minority group that is not economically worth website owners' time to support? What prevents abuse of said specialist accessibility software for other purposes?
How would a new start-up developing a competing browser or phone from scratch, very much unknown and in a minority position, be able to convince millions of website owners to unblock/allow its new browser or phone? Cloudflare's Friendly Bots program refuses to respond to open source projects, so why would Cloudflare as an implementer of WEI care about new start-ups or small open source software projects?
[1] https://arxiv.org/abs/2304.14717
eropple|2 years ago
jabbany|2 years ago
Step 2: "Secure" browsers change the behavior of their Content Blocker API implementation so an industry-accepted "secure" site like Google Ads can opt out of being blocked ("You wouldn't want a misconfigured content blocker to accidentally break a verified secure site, right?")
Step 3: ??? (Force the users into a take it or leave it choice for whether they want to be part of the internet or not)
Step 4: Profit
wmf|2 years ago
josephcsible|2 years ago
ktosobcy|2 years ago
jauntywundrkind|2 years ago
But there's basically no real meat to this specification. It's abstract: it doesn't actually say what Web Environment Integrity is; that's left for the browser to determine, and the rules could keep getting more and more specific at the browser's leisure.
toshaexists|2 years ago
rad_gruchalski|2 years ago
spacebanana7|2 years ago
The more bandwidth and OS features we use the more dependent we become on the cloud/ISP vendors and device/OS makers.
jwally|2 years ago
Google watches everything I do because of Chrome, and has a good idea of whether I'm a bot or not.
Through clever cryptography, Google tells each website I visit its assessment of me?
Does it also give them the same ID for me each time I visit? (But unique to them)
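The explainer's answer to this question is roughly the following flow; here is an illustrative sketch (field names invented, HMAC standing in for a real signature scheme). The site supplies a fresh nonce, and the attester returns a signed, low-entropy verdict bound to that nonce rather than a stable per-user identifier:

```python
import hashlib
import hmac
import json

ATTESTER_KEY = b"attester-private-key"  # stands in for a real signing key

def attest(nonce: bytes) -> dict:
    payload = json.dumps({
        "nonce": nonce.hex(),          # binds the token to this one request
        "verdict": "meets-integrity",  # low-entropy: identical for many users
    }).encode()
    sig = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

# Two visits with different nonces produce different tokens, and nothing in
# the payload distinguishes this user from any other passing user.
t1 = attest(b"\x01" * 16)
t2 = attest(b"\x02" * 16)
assert t1["payload"] != t2["payload"]
assert b"meets-integrity" in t1["payload"] and b"meets-integrity" in t2["payload"]
```

So as described, the token is not itself a cross-site ID, though critics note the attester still learns which sites request attestation and when.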
yonatan8070|2 years ago
pptr|2 years ago
My understanding is that websites can essentially confirm whether the user is likely to be a human because he/she accesses the website from a certified device.
Won't this mean there is less need for Captchas, logins and pay walls? The doc also mentions that this will remove the need for some use-cases of fingerprinting.
I imagine from a user perspective this will be an improvement.
Disclaimer: Googler, but not working on Chrome
economyballoon|2 years ago
Time to free the web again. And we thought Web3 was nonsense :(
croes|2 years ago
https://news.ycombinator.com/item?id=36785516
renegat0x0|2 years ago
jwally|2 years ago
Cynical outlook, because that's where my mind wanders, I guess...
In the last year Puppeteer became a lot harder to detect, which creates a problem.
THIS would provide a solution, no?
Probably a coincidence, but a fortuitous one if creating demand for THIS feature was your goal.
/tinhat off
muteor|2 years ago
badrabbit|2 years ago
landsman|2 years ago
Slimemaster|2 years ago
Slimemaster|2 years ago
charcircuit|2 years ago
danShumway|2 years ago
If we shoot this down and every bank requires me to download a mobile app, then fine. What this is proposing is basically to turn websites into mobile apps: device controlled, unmodifiable, broken on any non-approved hardware. If that's going to be the case regardless, I'd rather just download the app, at least that would be more honest about what's actually going on, and at least I'd still be able to use my adblocker when I browse the web.
rezonant|2 years ago
jauntywundrkind|2 years ago
vbezhenar|2 years ago
unknown|2 years ago
[deleted]
rezonant|2 years ago
pc2g4d|2 years ago
renegat0x0|2 years ago
WaffleIronMaker|2 years ago
dynamorando|2 years ago
reactormonk|2 years ago
PaulHoule|2 years ago
Strikes me as very dangerous on the web, though, where there are so many paths for malware to get in; this could get in the way of plugging the holes.
interjectionne|2 years ago
[deleted]
freeone3000|2 years ago
jabbany|2 years ago
Sure you can fake the results of an attestation in your fork, but your fork would be using your own key to sign the response, a key that the site can reject.
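A minimal sketch of that point (names invented, HMAC substituting for the real public-key scheme): the fork can produce any verdict it likes, but the site only trusts signatures it can verify against an allowlist of attester keys.

```python
import hashlib
import hmac

# Hypothetical allowlist of attester keys the site trusts.
TRUSTED_ATTESTER_KEYS = [b"platform-attester-key"]

def site_accepts(payload: bytes, sig: bytes) -> bool:
    # Accept only if some trusted attester produced this signature.
    return any(
        hmac.compare_digest(hmac.new(k, payload, hashlib.sha256).digest(), sig)
        for k in TRUSTED_ATTESTER_KEYS
    )

payload = b'{"verdict": "meets-integrity"}'

# The fork happily "passes" its own attestation with its own key...
fork_sig = hmac.new(b"my-forks-key", payload, hashlib.sha256).digest()
assert not site_accepts(payload, fork_sig)  # ...but the site rejects that key.

# Only a token signed by an allowlisted attester gets through.
trusted_sig = hmac.new(TRUSTED_ATTESTER_KEYS[0], payload, hashlib.sha256).digest()
assert site_accepts(payload, trusted_sig)
```

This is why forking the browser doesn't help: the trust decision lives server-side, in the list of keys the site will accept.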
progbits|2 years ago
blibble|2 years ago
the TPM does the attestation of the entire running environment, starting with firmware, through the OS, through the browser all the way down to the website
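The measurement chain described above can be sketched in a few lines, in the style of a TPM PCR extend (new value = SHA-256 of the old value concatenated with the measurement); component names here are purely illustrative:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: fold the component's hash into the running value.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_chain(components: list[bytes]) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at boot
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# Firmware measures the OS, the OS measures the browser, and so on.
stock = measure_chain([b"firmware-v1", b"os-kernel-v1", b"browser-v1"])
patched = measure_chain([b"firmware-v1", b"os-kernel-v1", b"browser-v1-adblock-fork"])

# Any modification anywhere in the chain changes the final attested value.
assert stock != patched
```

Because each stage measures the next before handing off control, there is no point in the chain where a modified component can substitute itself without altering the value the attestation ultimately reports.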