HTML is the solution to walled-garden lock-in? What? Those walled gardens already use HTML, including some of the semantic elements mentioned (plus ARIA semantic attributes, which are much more sophisticated).
> ChatGPT-like interfaces are likely the future of human data access.
And the whole point of artificial intelligence systems is that they don't require specialized "machine-readable" annotations in order to process input. ChatGPT (and its future offspring) can navigate regular websites the same way humans do. They don't need us to hold their hand. They know when a sequence of paragraphs constitutes a "list", without it having to be explicitly marked as such, etc.
What the author appears to be describing is simply an API mediated through HTML semantic elements. But if you have an API, you don't need a Large Language Model for automatic data access – a good old Python script using Beautiful Soup will do just fine. And it has the added benefit that it runs entirely locally.
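As a sketch of that point: even without Beautiful Soup, a few lines of stdlib Python can pull structured data out of a page precisely because the semantic tags are there. The page snippet below is invented for illustration.

```python
from html.parser import HTMLParser

# Invented sample page: the extractor needs only the semantic tags,
# not a browser and not an LLM. (html.parser is stdlib; Beautiful Soup
# would make this shorter but is a third-party dependency.)
PAGE = """
<main>
  <article><h2>First post</h2><p>Hello.</p></article>
  <aside>Sidebar noise</aside>
  <article><h2>Second post</h2><p>World.</p></article>
</main>
"""

class ArticleTitles(HTMLParser):
    """Collect the text of <h2> headings that appear inside <article>."""
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.in_h2 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True
        elif tag == "h2" and self.in_article:
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False
        elif tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2:
            self.titles.append(data)

parser = ArticleTitles()
parser.feed(PAGE)
print(parser.titles)  # ['First post', 'Second post']
```

Note how the `<aside>` content is skipped without any heuristics: the markup itself says it is not an article.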
Article's first sentence: "With the advent of large language model-based artificial intelligence, semantic HTML is more important now than ever."
I think the sentence "With the advent of large language model-based artificial intelligence, semantic HTML is less important now than ever." is far more defensible. The semantic web has failed and what replaced it was Google spending a crap ton of money writing a variety of heuristics equipped with best-of-breed-at-the-time AI behind it. As AI improves, it improves its ability to extract information from any ol' slop, and if "any ol' slop" is enough, it's all the effort people are going to put out. Eventually in principle both the semantic web and that pile of heuristics are entirely replaced by AI.
(Note also my replacement of LLM with the general term AI after my first usage of LLM. LLMs are not the whole of "AI". They are merely a hot branch right now, but they are not the last hot branch. It is not a good idea to project out the next several decades on the assumption that LLMs will be the last word in AI.)
Are you suggesting that AI will solve web accessibility, which is based on semantic HTML and ARIA? Because if not, humans will still be required to ensure that web content is accessible, and in that case semantic HTML remains important.
I am curious about what post-LLM SEO is going to look like.
> The semantic web has failed and what replaced it was Google spending a crap ton of money writing a variety of heuristics equipped with best-of-breed-at-the-time AI behind it.
Arguably, there were insufficient incentives to fully adopt semantic HTML, if your goal was just to have the most relevant parts of your content indexed well enough to get ranked.
> As AI improves, it improves its ability to extract information from any ol' slop, and if "any ol' slop" is enough, it's all the effort people are going to put out.
If the goalpost shifts from “getting ranked” to “enabling LLMs to maximally extract the nuance and texture of your content”, perhaps there will be greater incentive to use elements like <details> or <progress>. Websites which do so will have more influence over the outputs of LLMs.
Feels like the difference between being loud enough to be heard vs. being clear enough to be understood.
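A sketch of what that incentive might look like in markup, using `<details>` to separate secondary nuance from the main claim (the page content here is invented):

```html
<article>
  <p>Our benchmark improved by 12%.</p>
  <!-- The caveats are explicitly marked as collapsible, secondary
       detail, so a crawler or model pipeline can tell digression
       from main argument without heuristics. -->
  <details>
    <summary>Methodology caveats</summary>
    <p>Measured on a single machine; variance was not controlled.</p>
  </details>
</article>
```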
> The semantic web has failed and what replaced it was Google spending a crap ton of money
Aren't schema.org and Wikidata/Wikipedia still powering most of Google's rich search results?
I heard them announce the new results page with Bard, but I probably haven't seen it because of ad blindness, or it's not yet released in my location; I'll have to look this up...
AIs are magic to me. I've always thought the pattern-recognition ability of humans was pretty unique and hard to replicate. We use it when scanning the slop on websites to do some kind of data extraction. In my head I was part of the semantic web camp, but you are right: if machines can seemingly make sense of the slop, then why bother?
agree so much. Projects that aim to build a data resource and then let AI use that resource are missing the point. The AI is the data resource.
Some projects claim that knowledge graphs or other data assets can help the AI retrieve 'true' knowledge. Personally, I believe the better approach is to develop methods that allow AIs to create their own data assets; the weights in their networks are one such asset.
The question of truth is still a very hard one. How do you tell an AI that some knowledge is more trustworthy than other knowledge? People have this issue too though.
literally by no metric is this true other than tech bros saying it on HN. The entire internet is powered by websites using semantic markup and clients querying it.
I had heard of almost none of these HTML elements, and that's such a shame, because they could seriously help put the "we need JavaScript for every gosh darn thing" ecosystem to an end (or at least return JS to what it was originally meant to be: a way to add some flair, some interactivity, some whatever, but not necessarily a replacement for all of your markup and a full-DOM manager).
I'm starting to think my dream browser might be something like visurf https://sr.ht/~sircmpwn/visurf/ but with the underlying Netsurf engine updated to support various modern HTML+CSS, such as these elements. I bet you could have a nearly JS-free smolweb through that browser that:
1. is more accessible (in the "doesn't break screenreaders, system theming, keybinds, etc" way)
2. could be made to use way fewer resources than these heavy JS contraptions these elements can replace
3. would still be able to do most things we expect the median web app of today to do (sure, fire up Firefox for WebGL or whatever still, but I could see, say, a Matrix client potentially needing only a smidge of JS (largely for WebSockets and E2EE stuff) over top of very-modern HTML)
In general, the issue with these built-in components is that you can't theme them. And they stick out like a sore thumb when you get a Windows 7-style component in the middle of a modern-looking app.
They also have basically no extensibility, so when you inevitably need to do something half complex, you have to scrap it and start again with JS. So you may as well have just started with JS, which just works, gives you full power, works identically on all systems, etc.
In the end all these extra components just end up as bloat all browsers have to implement but no one uses.
I just ran into "datalist" and my first impression was "wow, game changer". The behavior is the same across browsers but the appearance is strictly browser-specific. You can't style it with CSS.
Sometimes the list displays the text of the data, sometimes the text plus the "value" attribute. So you are not selecting "Atlanta" - you are selecting "234290780 Atlanta" (the ID and the value).
And with the on-click action, you can't just get the ID - you have to get the whole thing and parse the ID out.
It just seems... abandoned.
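For context, a minimal `<datalist>` roughly matching the situation described (the IDs and cities are invented):

```html
<label for="city">City</label>
<input id="city" list="cities" name="city">
<datalist id="cities">
  <!-- Browsers differ in what the dropdown shows: some render only the
       label, some show value and label together. Either way, the input
       receives the value attribute (the ID), not the label, so the
       application has to map it back. -->
  <option value="234290780" label="Atlanta">
  <option value="234290781" label="Boston">
</datalist>
```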
> I had heard of almost none of these HTML elements
I'm not disagreeing with the gist of your post, but come on, these elements have been around for ages. It's definitely on you to become acquainted with the basics before your HTML critique can be taken seriously ;)
The post links to MDN (arguably the most useful short reference) but there is of course also WHATWG's HTML spec or, if that's too voluminous, SGML DTD formal grammars for WHATWG HTML 2021 and 2023 snapshots [1], as well as for the older HTML 5.x series.
[1]: https://sgmljs.net/docs/html200129.html
I guess that's on you – if you're a web developer, you definitely 100% need to know these elements. They're not new.
Extensibility is the problem here. Either you force everyone to use a limited set of UI controls (won't happen) or you need to allow some way to create custom UI controls, which leads to JS (or some other programmable system).
I really want to believe in the semantic web, I really want to believe in the ability of the browser to provide me with good default modules with a good default styling, but for now I just have to accept this is not the case.
The fact that I have to think about labeling an input (why is this not an attribute?), not being sure if I should use the label as a wrapper or as a sibling with the `for=` attribute... and this is just the tip of the iceberg. For each tag, I have to learn the whole history of its development and make an inquiry into what's the right way to do it nowadays.
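For reference, both labeling forms are conforming HTML; the two snippets below are alternatives, not meant to coexist on one page:

```html
<!-- Alternative 1: the label wraps the input; no attributes needed. -->
<label>
  Email
  <input type="email" name="email">
</label>

<!-- Alternative 2: a sibling label, associated via for/id. Useful when
     layout constraints prevent wrapping. -->
<label for="email">Email</label>
<input id="email" type="email" name="email">
```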
After a few years of dabbling with Flutter I just came back to the same conclusion: bet on HTML.
Astro / Tailwind / Daisy UI / Alpine.js makes it lovely to build an HTML site with a lot of simple SSR and a little bit of client side reactivity peppered about.
The result is simple, sane HTML files that look and work great in a desktop web browser and in a mobile wrapped web view.
My app is basically static so it caches in a CDN, works offline, and view source makes it easy to debug.
The problem I see with the semantic web is that no matter how easy it gets, developers refuse to use it properly. I have been looking closely at the <main> tag, since a browser extension of mine uses it, and although it is extremely clear from the MDN documentation what it should do (the documentation itself is a good example usage of <main>, <aside>, etc.), very few sites use it properly. Even fancy professional sites wrap all the page content, including navigation, footers and the like, inside the <main> tag, which should only contain the main content of the page.
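The structure MDN describes looks roughly like this (page content invented): navigation and footer live outside `<main>`, which holds only the content unique to the page.

```html
<body>
  <header>
    <nav>Site navigation goes here, outside of main.</nav>
  </header>
  <main>
    <article>The actual unique page content.</article>
    <aside>Content indirectly related to the article.</aside>
  </main>
  <footer>Copyright, site-wide links.</footer>
</body>
```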
If such a simple element can't be used properly, I have no hope for all the others.
We shouldn't pretend that these things are contracts.
literally every developer who has ever used a `<p>` tag for a paragraph has used it properly. An `class="card"`. A `<link>`.
How are so many tech literate people in this thread bemoaning the "uselessness" or "failure" of semantic web? It's like if you told me "Man I wish motorvehicles were successful, it'd sure be nice to travel long distances!"
> simple element can't be used properly, I have no hope for all the others.
The first solution that comes to mind is stricter validation, where the browser would just refuse to render a <footer> unless its structure is correct.
But we had that. Anything before HTML4 really. And it sucked even more.
So maybe browser dev-tooling that throws warnings or errors when devs are Doing It Wrong?
There simply is no incentive whatsoever for using them correctly. I try to use them correctly, but aside from the time spent deciding what each element should be, it made no difference at all.
I find it interesting that many of the comments here seem to view this as an anti JS article. To me a lot of these are going to be immensely useful tools in combination with the TS heavy frameworks we use for modern Enterprise App frontends, exactly because they are HTML native tools that aren’t going to require a bunch of stuff.
Like the meter tag, which I assume will replace every loading module we currently use in React when I get to work today because that is soooooooo much better than what we currently use.
But maybe I’m just misunderstanding people, or the article in some way. But to me this is very interesting even if your entire frontend is basically all TypeScript like many larger applications are today.
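One small caveat to the comment above: per the HTML spec, `<progress>` (not `<meter>`) is the element for loading and task-completion states; `<meter>` represents a scalar measurement within a known range. A sketch of both:

```html
<!-- Task completion / loading: -->
<progress max="100" value="70">70%</progress>
<!-- Indeterminate ("spinner") state: omit the value attribute. -->
<progress max="100">Loading…</progress>

<!-- A gauge for a scalar in a known range (disk usage, a score): -->
<meter min="0" max="100" low="20" high="80" optimum="50" value="91">91/100</meter>
```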
I don't want to totally neg on web dev. But it does suck donkey balls. I've been doing it for years, from writing raw HTML to using scripting langs, frameworks and whatnot. And the amount of time it takes to do not a lot is, I figure, just a colossal brain drain. We just went on holiday, and all I wanted to do was look up places to go eat and drink, or visit for the day, and most of the sites sucked. Or were out of date, not updated or whatnot. There are still lots of fire-once-and-forget sites, probably because budgets are tight and people can't afford updates. Or the updates are just technically too difficult for people to grok.
The complexity of sites is paralysing. What could be a few simple pages of texts and images is totally over-engineered for no good reason and is burning a stupid amount of CPU cycles. Probably built on a hacked off-the-shelf CMS that could do with security updates.
CMS and frameworks are being used because there wasn't a good alternative to something as neat as frames.
A site I'm working on at the moment has quite a pretty design, but pull the CSS and it's just a mess.
I was looking at going to the cinema recently and the local picture house made it practically impossible to just scan the handful of films that were playing that week. I realised you could pretty much shove it all in a spreadsheet and it would read better. Heck, I downloaded the JSON from their API, and it was easier to read.
Most of it is all tiresome lipstick on a pig.
Facebook was a success for a few reasons, one was the easy on-boarding (which uses nefarious privacy trade-offs), the other is that you could actually share photos easily. Also see: Whatsapp and Instagram. Publishing needs to be easy. And despite a simple FTP being easy, there's a weird disconnect in the usability process that makes this tricky for mere mortals. People want to drag and drop, or upload, fire and forget and edit easily. And those wanting to consume data, really just want the bare essentials: The data.
ugh I was so excited to see pure HTML modals were a thing with <dialog> only to find out there's no way of triggering them without JavaScript. Using pure HTML you can only dismiss them, not trigger them.
https://github.com/whatwg/html/issues/3567
> dialog elements are a great addition and I'm glad they're getting implemented, but a key part of their functionality relies on JavaScript: to open a <dialog> you need to use JavaScript to set the open attribute.
> It'd be great if there was a way to make <a> or <button> elements capable of opening dialogs.
> Precedent already exists for page interactivity baked into HTML - for example, <a href="#.."> can already scroll the page, and <details> elements are capable of hiding elements behind interactivity, so I think it stands to reason <dialog> elements could be opened by other page elements.
I don't see the higher purpose of "HTML only" if that means we need to extend HTML with scripting capabilities. In that case I rather just use JavaScript.
For an example, click on '?' in the top right corner here: https://aavi.xyz/proj/colors/
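To make the trade-off above concrete, here is a minimal `<dialog>` sketch: closing works without any script via `method="dialog"`, but opening still needs one line of JS.

```html
<dialog id="confirm">
  <p>Delete everything?</p>
  <!-- A form with method="dialog" closes the dialog, no script needed;
       the clicked button's value becomes dialog.returnValue. -->
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="ok">OK</button>
  </form>
</dialog>

<!-- Opening requires JS. showModal() also provides focus trapping and
     the ::backdrop pseudo-element, which merely setting the open
     attribute does not. -->
<button onclick="document.getElementById('confirm').showModal()">
  Delete…
</button>
```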
I didn't realize some of these elements existed. Neat!
However, I think if we want an open/federated system to win, we need to make it compelling to normal people. That means making it fun and entertaining. I've found that no argument about freedom, privacy, or anything actually important will work.
The point of walled gardens is that they are very tempting for users to stay inside of. Their power comes from most users not resisting that temptation.
Becoming the equivalent of a digital hermit living in a hole in the ground outside these walls, which is what I would characterize this as, is not going to result in a lot of people stepping outside those walls. If you've ever watched Life of Brian, you'll have a good mental picture here.
It's a variation of brutalist web development (let's not call it design) that just doesn't really appeal to the masses. It never has. The history of the web is endless attempts to pimp it with Applets, Flash, Shockwave, Silverlight, etc. The latest incarnation of that is web assembly. This basically allows you to use anything native that works elsewhere (desktop, mobile, game consoles, AR/VR, etc.) in a browser as well.
Of course there's a severe risk of this disappearing into more walled gardens. But I don't think sitting in a dusty old hole in the middle of nowhere while shaking your fists at progress does much to change that.
I would care more about markup if the browser were actually able to do anything interesting with it, but for the most part markup is just a default style for an element; the semantic meaning is ignored. The <time> tag is about the last one I can think of that actually made a difference from the user's perspective.
There is also a lot of markup for really common tasks still missing. I'd like to see an <advertisement> element, and something to handle pagination at the browser level (rel="next" has been around for decades, but browsers don't care). And more broadly, I'd really like to see much better support for long-form documents in HTML, or at the very least native ePub support in the browser.
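For reference, the two existing mechanisms mentioned, which browsers mostly leave unexploited:

```html
<!-- <time> gives user agents a machine-readable date alongside the
     human-readable text. -->
<p>Published <time datetime="2023-06-01">June 1st</time>.</p>

<!-- rel="next"/"prev" describe pagination at the document level;
     search engines historically read these, but browsers do not
     surface them in the UI. -->
<link rel="prev" href="/posts?page=1">
<link rel="next" href="/posts?page=3">
```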
What is the author suggesting? That instead of "social" networks, people will rush to building their own online presence from first principles?
I can't see that working, for so many reasons:
- most people are passive consumers of content, maybe interact a little, enough to tweak their feeds
- a small minority creates content on the large networks / aggregators, and (I think) a large portion of that is spurred on by monetization
- the "internet" and all the devices that access it have become so "user friendly" that the people who have come online in the last decade or so are effectively technically illiterate; you cannot count on them building anything from scratch, only to arrange the big duplo pieces already provided
You have just highlighted the main fact about the www that so many technically skilled people fail to see. Most consumers have little to no knowledge about anything technical, which is not bad at all, since there is no reason why they should care. Now imagine telling those people that everyone should have their own website instead of a social media account. What a horrible and illogical thought, in my opinion. Digital consumers only change behavior if something is 10x better. So from a consumer perspective, who cares about things like decentralized networks? Duckduckgo.com, for example, probably looks to 99% of www users like some Chinese-Russian inbred virus. People started using ChatGPT alongside Google because it is simply 10x better for many cases, not because it promised more privacy features and fewer ads than Google.
Hey OP, I notice you're using Berkeley Mono, which is beautiful. But your website's CSS appears to be applying boldface on an already boldface font in the headers (despite the typeface name claiming to be 'Regular', it is actually bold; see the datasheet[1]), which is causing bad hinting. Consider changing your typeface file!
Ian 'Hixie' Hickson gave his view on the future of the web in January this year in a public Google doc titled "Towards a modern Web stack" [0]. On its HN submission (referencing the wrong URL, so I resubmitted [1]), he defends against criticism [2].
Quoting from the doc here's the stack:
- WebAssembly (also known as Wasm) provides a portable compilation target for programming languages beyond JavaScript; it is being actively extended with features such as WasmGC.
- WebGPU provides an API (to JavaScript) that exposes a modern computation and rendering pipeline.
- Accessible Rich Internet Applications (ARIA) provides an ontology for enabling accessibility of arbitrary content.
- WebHID provides an API (to JavaScript) that exposes the underlying input devices on the device.
This document proposes to enable browsers to render web pages that are served not as HTML files, but as Wasm files, skipping the need for HTML, JS, and CSS parsing in the rendering of the page, by having the WebGPU, ARIA, and WebHID APIs exposed directly to Wasm.
> On its HN submission (referencing the wrong URL) he defends against criticism [1]
And it's a very weak defence. There are great rebuttals to whatever he writes in there.
I mean, he rants that HTML failed, and then literally proposes "By providing low-level primitives instead, applications could ship with their own implementations of high-level concepts like layout, widgets, and gestures, enabling a much richer set of interactions and custom experiences with much less effort on the part of the developer."
Has he tried to implement any of those from scratch using only low-level primitives? How is it "much less effort on the developer"?
In general it just reads like a justification for Flutter:
"This API alone is not useful for application developers, since it provides no high-level primitives. However, as with other platforms, powerful frameworks will be developed to target this set of APIs. A proof of concept already exists in Flutter"
Why not provide high-level powerful primitives out of the box? Oh, then Flutter wouldn't have a reason to exist.
> I wrote this post and then GPT-4 fixed my grammer and spelling
I wrote an AutoHotkey + Go script that I constantly use for fixing grammar via ChatGPT's API. You can select the text, press F8, wait a bit, and your input will be replaced by grammatically correct text. The only catch is that it "fixes" the tone and makes it professional, which is kinda annoying.
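A minimal sketch of the same idea in pure Python, against an OpenAI-style chat completions endpoint. The endpoint and model name are assumptions; substitute whatever your provider actually exposes. The system prompt tries to suppress the tone-rewriting complained about above.

```python
import json
import urllib.request

# Assumed endpoint and model; adjust to your provider/account.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(text: str) -> dict:
    """Build the chat request, asking for corrections only, not a
    professional-tone rewrite."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "Fix spelling and grammar only. Do not change "
                        "tone, wording, or meaning."},
            {"role": "user", "content": text},
        ],
    }

def fix_grammar(text: str, api_key: str) -> str:
    """Send the selected text and return the corrected version."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(text)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The hotkey side (grab selection, call `fix_grammar`, paste the result back) stays in AutoHotkey or whatever desktop glue you prefer.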
Alternatively, you could set up LanguageTool[1], which runs much faster, is more reliable, is open source, and, crucially, doesn't require sending what you wrote to a server on the Internet. Plus, it already has high-quality integrations with standard software like LibreOffice, so you don't even need to write anything yourself.
> The only catch is that it "fixes" the tone and makes it professional, which is kinda annoying.
Then why send your text to a slow third-party in the first place? There are craptons of spelling and grammar checkers available which will work offline, be significantly faster, consume less resources, and not change the meaning of your text. We solved this problem decades ago, we don’t need to shove AI in everything.
It’s not like the ChatGPT solution is flawless anyway, there are still basic mistakes in the text:
In a world where pages take 10x more time to load than it takes some Unreal Engine 5 AAA game to render hyperrealistic scenes at 140 fps?
Yup, I’m betting too on front-end stacks that change every 6-12 months but only lead to more and more poorly optimized websites.
Half the time, even mobile apps are useless: they can’t download a freaking JSON response in areas with poor network coverage.
> Pretty much you can highlight text. By default Safari shows a yellow highlight. I like it!
Chrome also shows a yellow highlight by default. But since I don't have Safari installed on my machine, I don't know if it's exactly the same color. Also, I'm not sure if other browsers have the same default color. Isn't it a good use case for CSS?
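Both comments presumably refer to the `<mark>` element, whose default rendering is left to the browser (yellow in the major engines) but is ordinary CSS territory, which seems to be exactly the use case asked about:

```html
<p>Search results often <mark>highlight</mark> the matched term.</p>

<style>
  /* Override the UA default: */
  mark { background-color: #ffe08a; color: inherit; }
</style>
```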
[+] [-] p-e-w|2 years ago|reply
> ChatGPT-like interfaces are likely the future of human data access.
And the whole point of artificial intelligence systems is that they don't require specialized "machine-readable" annotations in order to process input. ChatGPT (and its future offspring) can navigate regular websites the same way humans do. They don't need us to hold their hand. They know when a sequence of paragraphs constitutes a "list", without it having to be explicitly marked as such, etc.
What the author appears to be describing is simply an API mediated through HTML semantic elements. But if you have an API, you don't need a Large Language Model for automatic data access – a good old Python script using Beautiful Soup will do just fine. And it has the added benefit that it runs entirely locally.
[+] [-] jerf|2 years ago|reply
I think the sentence "With the advent of large language model-based artificial intelligence, semantic HTML is less important now than ever." is far more defensible. The semantic web has failed and what replaced it was Google spending a crap ton of money writing a variety of heuristics equipped with best-of-breed-at-the-time AI behind it. As AI improves, it improves its ability to extract information from any ol' slop, and if "any ol' slop" is enough, it's all the effort people are going to put out. Eventually in principle both the semantic web and that pile of heuristics are entirely replaced by AI.
(Note also my replacement of LLM with the general term AI after my first usage of LLM. LLMs are not the whole of "AI". They are merely a hot branch right now, but they are not the last hot branch. It is not a good idea to project out the next several decades on the assumption that LLMs will be the last word in AI.)
[+] [-] zagrebian|2 years ago|reply
[+] [-] asdfgeoff|2 years ago|reply
> The semantic web has failed and what replaced it was Google spending a crap ton of money writing a variety of heuristics equipped with best-of-breed-at-the-time AI behind it.
Arguably, there were insufficient incentives to fully adopt semantic HTML, if your goal was just to have the most relevant parts of your content indexed well enough to get ranked.
> As AI improves, it improves its ability to extract information from any ol' slop, and if "any ol' slop" is enough, it's all the effort people are going to put out.
If the goalpost shifts from “getting ranked” to “enabling LLMs to maximally extract the nuance and texture of your content”, perhaps there will be greater incentive to use elements like <details> or <progress>. Websites which do so, will have more influence over the outputs of LLMs.
Feels like the difference between being loud enough to be heard vs. being clear enough to be understood.
[+] [-] moritzwarhier|2 years ago|reply
Aren't schema.org and Wikidata/Wikipedia still powering most of Google's rich search results?
I heard them announce the new result page with bard but I probably didn't see it because of ad-blindness or it's not yet releases in my location, have to look this up...
[+] [-] chickenfeed|2 years ago|reply
[+] [-] tomlue|2 years ago|reply
Some projects claim that knowledge graphs or other data assets can help the AI retrieve 'true' knowledge. Personally, I believe that the better approach is to develop methods that allow AIs to create their own data assets, the weights in their networks is one of those assets.
The question of truth is still a very hard one. How do you tell an AI that some knowledge is more trustworthy than other knowledge? People have this issue too though.
[+] [-] krainboltgreene|2 years ago|reply
literally by no metric is this true other than tech bros saying it on HN. The entire internet is powered by websites using semantic markup and clients querying it.
[+] [-] klardotsh|2 years ago|reply
I'm starting to think my dream browser might be something like visurf https://sr.ht/~sircmpwn/visurf/ but with the underlying Netsurf engine updated to support various modern HTML+CSS, such as these elements. I bet you could have a nearly JS-free smolweb through that browser that:
1. is more accessible (in the "doesn't break screenreaders, system theming, keybinds, etc" way)
2. could be made to use way fewer resources than these heavy JS contraptions these elements can replace
3. would still be able to do most things we expect the median web app of today to do (sure, fire up Firefox for WebGL or whatever still, but I could see, say, a Matrix client potentially needing only a smidge of JS (largely for WebSockets and E2EE stuff) over top of very-modern HTML)
[+] [-] Gigachad|2 years ago|reply
They also have basically no extensibility so when you inevitably need to do something half complex, you have to scrap it and start again with JS. So you may as well have just started with JS which just works, gives you full power, works identical on all systems, etc.
In the end all these extra components just end up as bloat all browsers have to implement but no one uses.
[+] [-] renegade-otter|2 years ago|reply
Sometimes the list displays the text of the data, sometimes, the text and the "value" attribute. So you are not selecting "Atlanta" - you are selecting "234290780 Atlanta" (the ID and the value).
And with the on-click action, you can't just get the ID - you have to get the whole thing and parse the ID out.
It just seems... abandoned.
[+] [-] jamamp|2 years ago|reply
[+] [-] tannhaeuser|2 years ago|reply
I'm not disagreeing with the gist of your post, but come on, these elements have been around for ages. It's definitely on you to become acquainted with the basics before your HTML critic can be taken seriously ;)
The post links to MDN (arguably the most useful short reference) but there is of course also WHATWG's HTML spec or, if that's too voluminous, SGML DTD formal grammars for WHATWG HTML 2021 and 2023 snapshots [1], as well as for the older HTML 5.x series.
[1]: https://sgmljs.net/docs/html200129.html
[+] [-] vanarok|2 years ago|reply
[+] [-] noelwelsh|2 years ago|reply
[+] [-] EspressoGPT|2 years ago|reply
I guess, that's on you – if you're a web developer, you definitely 100% need to know these elements. They're not new.
[+] [-] cassepipe|2 years ago|reply
We could have had nice things.
Ssssshhh, calm down, let go.
[+] [-] marcus_holmes|2 years ago|reply
[+] [-] birracerveza|2 years ago|reply
[+] [-] dgb23|2 years ago|reply
Github markdown lets you use it too. People use it for examples and inline explanations.
[+] [-] unsignednoop|2 years ago|reply
[deleted]
[+] [-] nzoschke|2 years ago|reply
After a few years of dabbling with Flutter I just came back to the same conclusion: bet on HTML.
Astro / Tailwind / Daisy UI / Alpine.js makes it lovely to build an HTML site with a lot of simple SSR and a little bit of client side reactivity peppered about.
The result is simple sane HTML files that look and work great on desktop web browser and and mobile wrapped web view.
My app is basically static so it caches in a CDN, works offline, and view source makes it easy to debug.
[+] [-] merdaverse|2 years ago|reply
If such a simple element can't be used properly, I have no hope for all the others.
[+] [-] mkoubaa|2 years ago|reply
We shouldn't pretend that these things are contracts
[+] [-] krainboltgreene|2 years ago|reply
literally every developer who has ever used a `<p>` tag for paragraph has used it properly. An `class="card"`. A `<link>`.
How are so many tech literate people in this thread bemoaning the "uselessness" or "failure" of semantic web? It's like if you told me "Man I wish motorvehicles were successful, it'd sure be nice to travel long distances!"
[+] [-] berkes|2 years ago|reply
The first solution that comes to mind, is stricter validation. Where the browser would just refuse to render a <footer> properly unless it's structure is correct.
But we had that. Anything before HTML4 really. And it sucked even more.
So maybe browser dev-tooling that throws warning or errors when devs are Doing It Wrong?
[+] [-] gbalduzzi|2 years ago|reply
[+] [-] devjab|2 years ago|reply
Like the meter tag, which I assume will replace every loading module we currently use in React when I get to work today because that is soooooooo much better than what we currently use.
But maybe I’m just misunderstanding people, or the article in some way. But to me this is very interesting even if your entire frontend is basically all TypeScript like many larger applications are today.
[+] [-] chickenfeed|2 years ago|reply
The complexity of sites is paralysing. What could be a few simple pages of texts and images is totally over-engineered for no good reason and is burning a stupid amount of CPU cycles. Probably built on a hacked off-the-shelf CMS that could do with security updates.
CMS and frameworks are being used, because there wasn't a good alternative to something as neat as frames.
A site I'm working on at the moment has quite a pretty design, but pull the CSS and it's just a mess.
I was looking at going to the cinema recently and the local picture house made it practically impossible to just scan the handful of films that were playing that week. I realised you could pretty much shove it all in a spreadsheet and it would read better. Heck, I downloaded the JSON from their API, and it was easier to read.
Most of it is all tiresome lipstick on a pig.
Facebook was a success for a few reasons, one was the easy on-boarding (which uses nefarious privacy trade-offs), the other is that you could actually share photos easily. Also see: Whatsapp and Instagram. Publishing needs to be easy. And despite a simple FTP being easy, there's a weird disconnect in the usability process that makes this tricky for mere mortals. People want to drag and drop, or upload, fire and forget and edit easily. And those wanting to consume data, really just want the bare essentials: The data.
[+] [-] SPBS|2 years ago|reply
https://github.com/whatwg/html/issues/3567
> dialog elements are a great addition and I'm glad they're getting implemented, but a key part of their functionality relies on JavaScript: to open a <dialog> you need to use JavaScript to set the open attribute.
> It'd be great if there was a way to make <a> or <button> elements capable of opening dialogs.
> Precedent already exists for page interactivity baked into HTML - for example <a href="#.."> can already scroll the page, and <details> elements are capable of hiding elements behind interactivity, so I think it stands to reason <dialog> elements could be opened by other page elements.
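A minimal sketch of what that JavaScript requirement looks like in practice. Note that `showModal()` is preferred over setting the `open` attribute directly, since it also provides the `::backdrop` and focus trapping; closing, by contrast, already works without script:

```html
<button id="open-btn">Open dialog</button>

<dialog id="my-dialog">
  <p>Hello from a dialog.</p>
  <!-- a form with method="dialog" closes the dialog with no script at all -->
  <form method="dialog"><button>Close</button></form>
</dialog>

<script>
  // Opening still requires script: showModal() opens it as a modal,
  // while show() would open it non-modally.
  document.getElementById('open-btn').addEventListener('click', () => {
    document.getElementById('my-dialog').showModal();
  });
</script>
```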
[+] [-] Kiro|2 years ago|reply
[+] [-] aaviator42|2 years ago|reply
For an example, click on '?' in the top right corner here: https://aavi.xyz/proj/colors/
[+] [-] clairity|2 years ago|reply
[+] [-] contravariant|2 years ago|reply
[+] [-] pbohun|2 years ago|reply
[+] [-] jillesvangurp|2 years ago|reply
Becoming the equivalent of a digital hermit and living in a hole in the ground outside these walls, which is what I would characterize this as, is not going to result in a lot of people stepping outside those walls. If you've ever watched Life of Brian, you'll have a good mental picture here.
It's a variation of brutalist web development (let's not call it design) that just doesn't really appeal to the masses. It never has. The history of the web is endless attempts to pimp it with Applets, Flash, Shockwave, Silverlight, etc. The latest incarnation of that is web assembly. This basically allows you to use anything native that works elsewhere (desktop, mobile, game consoles, AR/VR, etc.) in a browser as well.
Of course there's a severe risk of this disappearing into more walled gardens. But I don't think sitting in a dusty old hole in the middle of nowhere while shaking your fists at progress does much to change that.
[+] [-] grumbel|2 years ago|reply
There is also a lot of markup for really common tasks still missing. I'd like to see an <advertisement> element, and something to handle pagination at the browser level (rel="next" has been around for decades, but browsers don't care). And more broadly, I'd really like to see much better support for long-form documents in HTML, or at the very least native ePub support in the browser.
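The sequential-navigation hints in question go in the document head; browsers could in principle use them for prefetching or a native pager UI, but mainstream browsers largely ignore them. A sketch (the URLs are illustrative):

```html
<head>
  <!-- hints that this page belongs to a sequence -->
  <link rel="prev" href="/articles?page=2">
  <link rel="next" href="/articles?page=4">
</head>
```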
[+] [-] btbuildem|2 years ago|reply
I can't see that working, for so many reasons:
- most people are passive consumers of content, maybe interact a little, enough to tweak their feeds
- a small minority creates content on the large networks / aggregators, and (I think) a large portion of that is spurred on by monetization
- the "internet" and all the devices that access it have become so "user friendly" that the people who have come online in the last decade or so are effectively technically illiterate; you cannot count on them building anything from scratch, only to arrange the big duplo pieces already provided
[+] [-] multicast|2 years ago|reply
[+] [-] delta_p_delta_x|2 years ago|reply
[1]: https://cdn.berkeleygraphics.com/static/typefaces/berkeley-m...
[+] [-] rapnie|2 years ago|reply
Quoting from the doc, here's the stack:
[0] https://docs.google.com/document/d/1peUSMsvFGvqD5yKh3GprskLC...
[1] https://news.ycombinator.com/item?id=36968263
[2] https://news.ycombinator.com/item?id=34612696
[+] [-] troupo|2 years ago|reply
And it's a very weak defence. There are great rebuttals to whatever he writes in there.
I mean, he rants that HTML failed, and then literally proposes "By providing low-level primitives instead, applications could ship with their own implementations of high-level concepts like layout, widgets, and gestures, enabling a much richer set of interactions and custom experiences with much less effort on the part of the developer."
Has he tried to implement any of those from scratch using only low-level primitives? How is it "much less effort on the developer"?
In general it just reads like a justification for Flutter:
"This API alone is not useful for application developers, since it provides no high-level primitives. However, as with other platforms, powerful frameworks will be developed to target this set of APIs. A proof of concept already exists in Flutter"
Why not provide high-level powerful primitives out of the box? Oh, then Flutter wouldn't have a reason to exist.
---
As a side note: WebHID is a Chrome-only non-standard. They literally dumped a non-spec onto other browser vendors, shipped it to prod, and then "updated" the spec later: https://github.com/mozilla/standards-positions/issues/459
[+] [-] anyfactor|2 years ago|reply
> I wrote this post and then GPT-4 fixed my grammer and spelling
I wrote an Autohotkey + Go script that I constantly use for fixing grammar via ChatGPT's API. You can select the text, press F8, wait a bit, and your input will be replaced by grammatically correct text. The only catch is that it "fixes" the tone and makes it professional, which is kinda annoying.
Feel free to try it out: https://github.com/anyfactor/Chatgpt_grammar_checker
[+] [-] p-e-w|2 years ago|reply
[1] https://github.com/languagetool-org/languagetool
[+] [-] billforsternz|2 years ago|reply
[+] [-] latexr|2 years ago|reply
Then why send your text to a slow third party in the first place? There are craptons of spelling and grammar checkers available which will work offline, be significantly faster, consume fewer resources, and not change the meaning of your text. We solved this problem decades ago; we don't need to shove AI into everything.
It’s not like the ChatGPT solution is flawless anyway, there are still basic mistakes in the text:
> Can by styled quite aggressively.
[+] [-] gyudin|2 years ago|reply
[+] [-] mkl95|2 years ago|reply
Chrome also shows a yellow highlight by default. But since I don't have Safari installed on my machine, I don't know if it's exactly the same color. Also, I'm not sure if other browsers have the same default color. Isn't it a good use case for CSS?
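Assuming this is about the `<mark>` element: the HTML spec's rendering section suggests a UA default of roughly `background: yellow; color: black;`, and overriding it is indeed a one-line CSS rule:

```html
<style>
  /* UA default is roughly: mark { background: yellow; color: black; } */
  mark { background-color: #ffd6e7; color: inherit; }
</style>

<p>Browsers give <mark>highlighted text</mark> a yellow background by
default, but any site can restyle it.</p>
```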
[+] [-] waihtis|2 years ago|reply
https://htmx.org/