captainmuon|9 years ago
There are web sites, and there are web apps. Most websites (think news sites, Wikipedia, etc.) just deliver static content and a whole bunch of ads, as someone noted here. Web applications are the ones that need all this functionality.
Why don't we do the following for HTML6: introduce one profile for sites, and one for apps.
- Sites: HTML + CSS, with JavaScript, if at all, only for presentation purposes (like DHTML over a decade ago). Can be viewed with a radically stripped-down web browser. All you need is the layout engine and components for display and networking. No WebGL, no sound API, and no shenanigans like ambient light sensors or vibration (wtf!). Think of Google's AMP.
- Apps: The whole package that is offered nowadays. We can even go past this and rethink the division between web and native apps. Why can't a web app use sockets? Why can't a native app use the HTML layout engine or live in a tab? Google is planning to blur the gap between web and native with their new "instant apps".
Razengan|9 years ago
This just keeps coming up on HN time and again: [1] [2] [3]
Whichever side you're on, browser vs. native, the sheer frequency of this discussion proves that at least a clear distinction IS needed. Continuing the status quo of web/browser/standards bloat cannot be good.
Somebody really needs to set down some global rules of thumb. I think that when your webpage starts needing sidebars and subwindows and popups and notifications (yes, looking at you, Facebook) then at that point it should just be a native app.
Let the web and its browsers focus on "sites" and "pages" and let the OS do "apps." After that it's up to the operating systems to make discovering and accessing apps as easy as typing in a website's address.
As a user, I want to sign in just once on each of my devices (iCloud/Apple ID lets me do that), and just type in an app's name (say Cmd+Space and "facebook") and then start using it right away as the OS begins downloading it incrementally, just as a browser does a website, except with full access to the OS's features, efficient use of my hardware and battery, and instant access to all my data without a separate login.
[1] https://news.ycombinator.com/item?id=11735770
[2] https://news.ycombinator.com/item?id=11552162
[3] https://news.ycombinator.com/item?id=11658873
nitrogen|9 years ago
One problem is that some interactive news articles can make very impressive use of WebGL, like the interactive climbing map that accompanied an article about the Dawn Wall freeclimbing record: http://www.nytimes.com/interactive/2015/01/09/sports/the-daw...
alchemical|9 years ago
> Can be viewed with a radically stripped-down web browser.
Like Lynx?
In terms of sites versus apps, I think the browser vendors are responsible for making this happen. Adobe AIR was the closest effort I saw of marrying web technologies with apps.
Sadly, AIR never took off, and whilst I appreciate the intention, it left a huge looming question: what do we actually do with all this new web technology?
One answer I came up with in recent years was what you suggested: partitioning off the people who want to work with those technologies, and keeping them separate from the text+image+CSS based web we've all grown to love.
Similar to how the demo-scene was an offshoot of game development...
Kind of a silly article. Most top websites just serve static content with tons of ad-network crap; they would never need 83% of the features. This is like saying "the most sold vehicles in the world don't use 90% of the horsepower they have." No shit, the most sold vehicles in the world are Corollas and Civics, i.e. "sit in traffic and commute" type cars.
jbob2000|9 years ago
It doesn't cost anything to keep these features around, so why kill them off? Code is cheap; it doesn't cost you, the user, anything to have the features in your browser...
This is why the Linux desktop failed to penetrate the market. You can't say, "Well, it does 80% of what most people need." Most people just surf the web, view photos, and use other basic functionality. That 20% it can't do is a deal breaker. Not to mention the things most people never do but businesses rely on, like legacy/proprietary software support.
drzaiusapelord|9 years ago
> It doesn't cost anything to keep these features around, why kill them off?
This is also why the feature list of a basic Windows or Office install is miles long. Once developed, there's no cost other than maintenance of those features (updates, security, etc.).
I think it's hard to argue against complexity in software. The ultra-complex usually win for rational market reasons.
vonklaus|9 years ago
I don't know this to be true, but my assumption, as someone not building the interpreters/VMs for JS, is that if you eliminated all of that shit you could probably get much better optimization from your engine. Also, security was touched on above.
pcwalton|9 years ago
This is something I've noticed a lot when developing Servo. The vast majority of the time, when a site is broken in Servo, it's due to some CSS 2.1 bug or another (CSS2 has existed since 1998), or a broken DOM API that's been in the platform for years and years. Attention is disproportionately focused on the new stuff when the reality is that old standards still rule.
I wouldn't necessarily agree that the conclusion is to just rip stuff out of the Web platform, though (although there is plenty of stuff I'd love to drop). Rather, we need to implement the features in a secure way. This isn't rocket science. Notice, as usual, that the majority of these security issues are straightforward memory safety issues† in C++: e.g. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=firefox+svg
† Food for thought for those claiming that "modern C++" solves these problems.
CM30|9 years ago
Around a billion.
1% would mean 10 million websites using each possible feature. Okay, it's not quite that even, and a lot of popular sites (like many news sites) stick to the most basic features, but all these browser features are there for the various other cases: the aforementioned web apps, various tech demos, sites with very specific use cases.
In other words, it's because the internet is massive and varied.
alchemical|9 years ago
It reminds me of Sturgeon's law: https://en.wikipedia.org/wiki/Sturgeon's_law
If 99% of everything is crap, you can bet a sizeable wager that the 1%, the wheat among the chaff, is using HTML5 and JavaScript APIs.
Also, if a content silo counts as a 'top website', then it is an outlier and should not be included. Facebook is a walled garden and not indexable. Facebook is parasitical to the web. Twitter and Google are not the web either.
To propose removing these features instead of fixing them fundamentally misunderstands the modern purpose of the web. It's not just for document distribution anymore. It is and has been for many years an application deployment system. To get rid of these features would kill whatever chance we have of getting out of walled garden app stores.
moron4hire|9 years ago
I mean, a similar study would probably find "top 10,000 apps don't use 80% of OS features." Just because not a lot of people use a feature doesn't mean it shouldn't exist.
Right. And then when those features aren't available in the browser, that's used as a justification going back to proprietary (aka native) platforms. It's a circular argument made by people determined to return to the days before we had an Open Web, for whatever reasons. If you don't want to use the features, nobody is forcing you to do so.
pavel_lishin|9 years ago
Can someone give me a good, non-gimmicky example of what this would be used for?
gene-h|9 years ago
Provided one can access sensor data fast enough: cross-device tracking[0]. Display an ad that lights up the room in such a way that you can read it with the sensor. Communication across an air gap: if you have two devices with screens and said sensors in a dark room, they might be able to communicate by turning their screens on and off.
Most speculatively, imaging the environment with compressive imaging[1]. One might be able to flash some patterns on the screen and look at light sensor output to take a picture.
Giving web browsers access to sensors on our devices is sort of scary.
[0] http://arstechnica.com/tech-policy/2015/11/beware-of-ads-tha... [1] http://arxiv.org/pdf/1305.7181.pdf
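The air-gap idea above can be made concrete with a small decoding sketch: the sender flashes the screen, and the receiver thresholds its light readings into bits. In a browser the samples would come from a permission-gated ambient-light sensor API; here they are plain lux numbers so the logic stands alone, and all names and values are mine, purely illustrative:

```javascript
// Decode a hypothetical screen-flash channel: bright sample = 1,
// dark sample = 0, eight samples per byte, most significant bit first.
function decodeLightSamples(luxSamples, threshold = 50) {
  const bits = luxSamples.map((lux) => (lux > threshold ? 1 : 0));
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    bytes.push(parseInt(bits.slice(i, i + 8).join(""), 2));
  }
  return String.fromCharCode(...bytes);
}

// "H" is 0b01001000, so eight dark/bright readings in a dark room
// recover it:
decodeLightSamples([5, 100, 5, 5, 100, 5, 5, 5]); // "H"
```

The scary part is that nothing here is exotic: thresholding and bit-packing is all the receiver needs once a page can poll the sensor fast enough.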
This seems pretty obvious to me. And at the same time, entirely not a problem.
soneil|9 years ago
A great example that comes to mind is the DRM component that Netflix requires to deliver HTML5 video instead of that Silverlight... thing. I cannot think of any other site that has needed it, or at least, any other site I visited before it was available that suffered for it.
And still, I consider it a requirement: a feature I require for exactly one site.
reicher89|9 years ago
This was very interesting: "SVG, for example, has a problem however you look at it: on one hand more than 15 per cent of the sites use it, on the other hand, nearly 87 per cent of blockers block it, but it's had 14 security warnings (CVEs, Common Vulnerabilities and Exposures) in the last three years."
matthewmacleod|9 years ago
I am deeply suspicious of that number. I've never encountered a plugin or anything of the sort that provides for "blocking SVG", and it's fully supported in all recent browsers.
snyderp|9 years ago
Just a note on this (I'm the lead author on the paper): the blocking rate has to do with the reduction in JS usage when you install popular blocking extensions.
So it's not that extensions block SVG directly; it's that AdBlock Plus and Ghostery block a bunch of libraries, and those libraries use the SVG methods to fingerprint (and do other stuff).
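For a sense of what "use the SVG methods to fingerprint" can mean in practice, here is a hedged sketch of one probing technique: recording which SVG DOM methods an engine exposes, since that set varies across browsers and versions. The function name and probe list are illustrative, not taken from any particular library:

```javascript
// Illustrative fingerprint probe: one bit per SVG DOM method the
// environment exposes. `svgElementLike` stands in for a real <svg>
// element (or its prototype) in a browser.
function svgFeatureFingerprint(svgElementLike) {
  const probes = [
    "getBBox",
    "getCTM",
    "getScreenCTM",
    "createSVGPoint",
    "getIntersectionList",
  ];
  return probes
    .map((m) => (typeof svgElementLike[m] === "function" ? "1" : "0"))
    .join("");
}

// Two engines exposing different method sets yield different strings:
svgFeatureFingerprint({ getBBox() {}, getCTM() {} }); // "11000"
```

Blocking the libraries that run probes like this is why the measured "blocking rate" for SVG is so high even though no extension blocks SVG rendering itself.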
vonklaus|9 years ago
OK, this seems interesting. However, unless I am misunderstanding this, of the large table they used for the bulk of the article's content, they only showed 6 features as underutilized.
Obviously, we could prune the execution of some of the JS which is backwards compatible to the early 90s, and some of the HTML which is based on IBM's 1960s work.
My super high-level first-pass optimization rec for the W3C:
If a feature has existed for 3 years and a random sampling of 100,000 websites shows usage of less than 1%, it is automatically deprecated. If usage is more than 1% but less than 5%, it is automatically phased out of the spec in 2 years.
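The proposed rule is mechanical enough to state as code. A minimal sketch, assuming the commenter's thresholds; the function and status names are mine, not any actual W3C process:

```javascript
// Sketch of the proposed pruning rule: after a 3-year grace period,
// features under 1% usage in the sample are deprecated, and features
// between 1% and 5% are phased out of the spec in 2 years.
function featureStatus(sitesUsingFeature, sampleSize, yearsSinceShipped) {
  if (yearsSinceShipped < 3) return "grace-period";
  const usage = sitesUsingFeature / sampleSize;
  if (usage < 0.01) return "deprecated";
  if (usage < 0.05) return "phase-out-in-2-years";
  return "keep";
}

console.log(featureStatus(400, 100000, 3));   // 0.4% usage
console.log(featureStatus(20000, 100000, 5)); // 20% usage
```

Stating it this way also makes the objection below concrete: any feature's fate hinges entirely on where the two thresholds are set.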
pilom|9 years ago
Unfortunately, this would create a huge disincentive for developers to use any new features, no matter how compelling. Would you learn any new features if there was a chance that in 2 years all of your users would drop support for them?
vonklaus|9 years ago
Brave Software. This was created by Brendan Eich, the ex-CEO of Mozilla. Eich does seem to have some problems with equality[0], which may have contributed to his ouster at Mozilla, but Brave is a top-shelf browser. Super great team, pretty security conscious, and given that Eich developed JS, well, they have a pretty good working knowledge of it. Brian Bondy is a super awesome guy (thanks for adding DuckDuckGo BTW), Yan Zhu is a pretty well known dev & security blogger, and they had the woman who wrote a couple of encrypted chat clients working there (apologies, her name escapes me) but she isn't on the site. I assume the rest of the team is talented. So in essence, I love Brave. I still wish they would make a goddamn search engine, but it is an awesome browser.
[0] NaN === NaN returns false? 0.0001 + 0.0002 !== 0.0003? Weird.
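Both of the footnote's surprises are standard IEEE 754 floating-point behavior rather than JS-specific bugs. A quick illustration (the variable names are mine):

```javascript
// NaN is defined by IEEE 754 to compare unequal to everything,
// including itself; Number.isNaN is the reliable check.
const nanSelfEqual = (NaN === NaN);     // false
const isReallyNaN = Number.isNaN(NaN);  // true

// 0.0001 and 0.0002 have no exact binary representation, so their
// sum picks up rounding error, just like the famous 0.1 + 0.2.
const sum = 0.0001 + 0.0002;
const naiveEqual = (sum === 0.0003);    // false

// The usual workaround: compare within a small tolerance.
const closeEnough = Math.abs(sum - 0.0003) < Number.EPSILON;

console.log(nanSelfEqual, isReallyNaN, naiveEqual, closeEnough);
```

The same behavior appears in any language with IEEE 754 doubles, which is presumably the point of the joke.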
superkuh|9 years ago
I use Pale Moon, a Firefox fork. I currently have ~200 tabs loaded (in a session with ~500 tabs) and it's only using 2.7 GB of RAM.
> ... even though fairly close to Gecko-based browsers like Mozilla Firefox in the way it works, is based on a different layout engine and offers a different set of features. It aims to provide close adherence to official web standards and specifications in its implementation (with minimal compromise), and purposefully excludes a number of features to strike a balance between general use, performance, and technical advancements on the Web.
the8472|9 years ago
So we're mostly lacking the site-only aspect. To some extent that can be achieved with addons that strip or block certain APIs.
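A minimal sketch of that addon idea: given a window-like object, remove the app-profile APIs so only the document-viewing surface remains. In a real WebExtension this would run as a content script against the page's window (an assumption on my part); here it takes any object, so the logic is testable anywhere, and the function name and API list are illustrative:

```javascript
// Strip "app profile" globals from a window-like object, returning
// the names actually removed. The API list mirrors the site/app
// split suggested upthread (WebGL, audio, sensors).
function stripAppApis(globalLike, apiNames) {
  const removed = [];
  for (const name of apiNames) {
    if (name in globalLike) {
      delete globalLike[name];
      removed.push(name);
    }
  }
  return removed;
}

const fakeWindow = {
  WebGLRenderingContext: function () {},
  AudioContext: function () {},
  fetch: function () {},
};
stripAppApis(fakeWindow, [
  "WebGLRenderingContext",
  "AudioContext",
  "AmbientLightSensor",
]);
// fakeWindow keeps fetch but loses WebGL and audio.
```

Real blockers are subtler (scripts can cache references before the content script runs), but this is the basic shape of a "site-only" profile retrofitted via extension.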
bcheung|9 years ago
99% of the world doesn't use Calculus. Maybe we should stop teaching it in school.
Lame article.
Retric|9 years ago
The hard part of any design is knowing what features can be removed, not added. HTML5 and JavaScript are fluff for most websites.
PS: HN even uses <table id="hnmain"... oh, the horror.
benologist|9 years ago
If we stopped adding new capabilities to JavaScript even as new sensors and w/e go mainstream we'd start needing Flash all over again.
iambateman|9 years ago
Not saying it's the most important case ever, but it would be important if you needed it.
0xfeba|9 years ago
[0] https://thestack.com/security/2016/02/03/chromodo-browser-di...
unknown|9 years ago
[deleted]