To me, XSLT came with the flood of web complexity that left us with effectively only two possible web browsers. It seems a bit funny, because the website looks straight out of the '90s, when "everything was better".
Ironically, that text is all you get if you load the site from a text browser (Lynx, etc.). It doesn't feel too different from <noscript>This website requires JavaScript</noscript>...
I now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).
I'm strongly against the removal of XSLT support from browsers—I use both the JavaScript "XSLTProcessor" functions [0] and "<?xml-stylesheet …?>" [1] on my personal website, I commented on the original GitHub thread [2], and I use XSLT for non-web purposes [3].
But I think that this website is being hyperbolic: I believe that Google's stated security/maintenance justifications are genuine (but wildly misguided), and I certainly don't believe that Google is paying Mozilla/Apple to drop XSLT support. I'm all in favour of trying to preserve XSLT support, but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
Can’t you just do the XSLT transformation server-side? Then you can use the newest and best XSLT tools, and the output will work in any browser, even browsers that never had any built-in XSLT support.
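For example, the whole transform can run at build time. A rough sketch of that pipeline (plain Python stands in here for a real XSLT processor such as xsltproc or Saxon, and the tiny feed is made up):

```python
# Server-side "XSLT" sketch: render an Atom-ish feed to HTML before serving.
# In practice you'd invoke a real XSLT processor at the transform step,
# but the shape of the pipeline is the same.
import xml.etree.ElementTree as ET

FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example</title>
  <entry><title>First post</title><id>urn:1</id></entry>
  <entry><title>Second post</title><id>urn:2</id></entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}

def feed_to_html(feed_xml: str) -> str:
    """Transform the feed into a minimal HTML listing, server-side."""
    root = ET.fromstring(feed_xml)
    title = root.findtext("atom:title", namespaces=NS)
    items = "".join(
        f"<li>{e.findtext('atom:title', namespaces=NS)}</li>"
        for e in root.findall("atom:entry", NS)
    )
    return f"<h1>{title}</h1><ul>{items}</ul>"

print(feed_to_html(FEED))
```

The output is ordinary HTML, so even Lynx renders it fine.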
> but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
You cannot “convince decision-makers” with a webpage anyway. The goal of this one is to raise awareness on the topic, which is pretty much the only thing you can do with a mere webpage.
I'm aware I'm in a minority, but I find it sad that XSLT stalled and is mostly dead in the market. The amount of effort put into replicating most of the XML+XPath+XSLT ecosystem we had as open standards 25 years ago, using ever-changing libraries with their own host of incompatible limitations rather than improving what we already had, has been a colossal waste of talent.
Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the reality of existing programming languages? No. But was JSON's early implementation of 'you can just eval() it into memory' ever good engineering? No, and by the time you've written a JSON parser that beats it, you could just as well have produced a similarly improved XML system while retaining the much greater functionality it already had.
RIP: a good tech killed by committees overembellishing it and engineers chasing the high of building something else instead of recognising what they already had.
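To make the eval() point concrete (Python here for brevity, though the original sin was JavaScript's; the payload string is invented):

```python
# "Just eval() it" treats untrusted data as code. A real parser doesn't.
import json

doc = '{"user": "alice", "admin": false}'
print(json.loads(doc))  # a parser only ever yields data

# Feed eval() a "document" instead and it happily executes it:
payload = '__import__("os").getcwd()'  # could just as easily delete files
print(eval(payload))                   # attacker-controlled code runs
```

Every early JSON consumer had to either accept that risk or write a proper parser anyway, which was the commenter's point.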
I don't really need or use XSLT (I think), so I am not really affected either way. But I am also growing mightily tired of Google thinking "I am the web" now. This is really annoying to no end. I really don't want Google to dictate to mankind what the web is or should be. Them killing off uBlock Origin also shows this corporate mindset at work.
This is also why I dislike AI browsers in general. They generate a view for the user that may not be real. They act like a proxy-gate, intercepting things willy-nilly. I may be oldschool, but I don't want governments or corporations to jump in as middlemen and deny me information and opportunities of my own choosing. (Also Google Suck, I mean Google Search, has sucked for at least 5 years now. That was not accidental - that was deliberate by Google.)
That sums up pretty much how I think about it. I don't have any opinion about XSLT either way... I'm just so tired. If Google decided to kill HTML tomorrow, who could stop them?
Google needs to be broken up into three or more companies. Search, Android, Chrome, and AdSense should not live together.
Lina Khan had the right idea and mandate, but she was too fucking slow.
When the Dems swing back into power, the gutting of big tech needs to be swift and thorough. The backbone needs to be severed. I'm screaming at my representatives to do this.
Google took over web tech, turned the URL bar into their Search product. They force brands to buy ads for their name brands - think about how much money they make by selling ads on the keywords "Airpods" or "Nintendo Switch". They forced removal of ad blocking tech unilaterally. They buy up all the panes of glass they don't already own. They don't allow you to install your own software on mobile anymore. And you have to buy ads for your app too, otherwise your competitor gets installed. If you develop software, you're perpetually taxed and have to do things their way. They're increasingly severing the customer relationship. They're putting themselves in as middle men in the payments industry, the automotive industry, the entertainment industry...
Look at how many products they've built and thrown away in the game of trying to broker your daily life.
I could go on and on and on... They're leeches. Giant, Galactus-sized leeches.
The bulk of the money they make is from installing themselves as middlemen.
And anyone thinking they're your friends - they conspired to suppress wages, and they're actively cutting jobs and rebuilding the teams in India. Congrats, they love you. They're gutting America and are 100% anti-American. I love India and have nothing against its people, I'm just furious that this domestic company - this giant built on the backs of American labor and its population - hates its own country so much. (You know they hate us because they're still stuffing Corporate Memphis down our throat.)
Edit: I have to say one thing positively because Google makes me so negative. This website is beautiful. I was instantly transported back in time. But it's also a nice modern reinterpretation of retro web design. I love it so much.
One of the things that startled me when working for Google is how much of their decisionmaking actually looks like "This sucks and we don't want to be responsible for it... But there isn't anyone else who can be, so I guess it's us."
I'm not saying this is optimal or that it should be the way it is, but I am saying there are problems with alternative approaches that need to be addressed.
To give a comparison: OpenGL tried a collaborative and semi-open approach to governance for years, and what happened was they got more-or-less curb-stomped by DirectX, so much so that it drove Windows adoption for years as "the architecture for playing videogames." The mechanism was simple: while OpenGL's committee tried to find common ground among disparate teams with disparate needs, Microsoft went
1) we control this standard; here are the requirements you must adhere to
2) we control the "DirectX" trademark, if you fail to adhere to the standards we decertify your product.
As a result, you could buy a card with "DirectX" stamped on it, slap it into your Windows machine, and it would work. You couldn't do anything like that with OpenGL hardware; the standard was so loose (and enforcement so nonexistent) that companies could, via the "gestalt" feature-detection layer, claim a feature was supported if they had polyfilled a CPU-side software renderer for it. Useless for games (or basically any practical application), but who's gonna stop them from lying?
Browsers aren't immune to market forces; a standard that is too inflexible or fails to reflect the actual implementation pressures and user needs will be undercut by alternative approaches.
I'm not saying current governance of the web is that bad, but I bring up the history of OpenGL as an example of why an open, cooperative approach can fail and the pitfalls to watch out for. In the case of this specific decision regarding XSLT, it appears from the outside looking in that the decision is being made in consensus by the three largest browser engine developers and maintainers. What voice is missing from that table, and who should speak for them?
(Quick side-note: Apple managed to dodge a lot of the OpenGL issues by owning the hardware stack and playing a similar card to Microsoft's with different carrots and sticks: "This is the kernel-level protocol you must implement in hardware. We will implement OpenGL in software. And if your stuff doesn't work we just won't sell laptops with your card in them; nobody in this ecosystem replaces their graphics hardware anyway").
With browsers being as complicated as they are, I kind of support this decision.
That said, I never used XSLT for anything, and I don’t see how its support in browsers is tied to RSS. (Sure, you could render your page from your RSS feed, but that seems like a marginal use case to me.)
This site is a bit of a Rorschach test as it plays both sides of this argument: bad Google for killing XSLT, and the silliness of pushing for XSLT adoption in 2025.
"Tell your friends and family about XSLT. Keep XSLT alive! Add XSLT to your website and weblog today before it is too late!"
I already have XSLT in my website because I have an Atom feed and XSLT is the only way to serve formatted Atom/RSS feeds in a static site. Perhaps you have never considered the idea that someone might want to purchase some cheap static hosting to serve their personal website, but it is a fine way to do things. This change pries the web ever further out of the hands of common people and into the big websites that just want the browser to serve their apps.
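For the record, the setup is just two small static files (the names here are illustrative). The feed points at a stylesheet with a processing instruction:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>My Weblog</title>
  <entry>
    <title>Hello, world</title>
    <link href="/posts/hello.html"/>
  </entry>
</feed>
```

and the stylesheet maps the feed to HTML entirely in the browser:

```xml
<!-- feed.xsl: turns the raw feed into a readable page, client-side -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:template match="/atom:feed">
    <html><body>
      <h1><xsl:value-of select="atom:title"/></h1>
      <ul>
        <xsl:for-each select="atom:entry">
          <li><a href="{atom:link/@href}">
            <xsl:value-of select="atom:title"/></a></li>
        </xsl:for-each>
      </ul>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```

No build step, no JavaScript, no server logic - which is exactly what cheap static hosting can't give you any other way.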
The google graveyard is for products Google has made. It's not for features that were unshipped. XSLT will not enter the Google graveyard for that reason.
> We must conclude Google hates XML & RSS!
Google Reader was shut down due to declining usage and Google's unwillingness to continue investing resources into the product. It's not that Google hates XML and RSS; it's that end users and developers don't use XSLT and RSS enough to warrant investing in them.
> by killing [RSS] Google can control the media
The vast majority of people in the world do not get their news by RSS; it never would have taken over the media complex. And there are other surfaces for news, like X, that Google cannot control - Google is not the only place where news surfaces.
> Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
It is quite a reach to say that Google removing XSLT will give them control over government legislation. They are completely unrelated.
> How much did Google pay for this support?
Google is not paying for support. These browsers essentially have revenue-sharing agreements with Google: the payments are for the search traffic they send to Google.
End of an era! I remember going through XSLT tutorials many decades ago and learning everything there was to learn about this curious technology that could make boring XML documents come 'alive'. I still use it to style my RSS feeds, for example, <https://susam.net/feed.xml>. It always felt satisfying that an XML file with a stylesheet could serve as both data and presentation.
Keeping links to the original announcements for future reference:
1) <https://groups.google.com/a/chromium.org/g/blink-dev/c/CxL4g...>
2) <https://developer.chrome.com/docs/web-platform/deprecating-x...>
I know that every such feature adds significant complexity and maintenance burden, and most people probably don't even know that many browsers can render XSLT. Nevertheless, it feels like yet another interesting and niche part of the web, still used by us old-timers, is going away.
IMHO, Google has become the most powerful tech company out there! It has a strong monopoly over almost every aspect of our online lives, and it is becoming extremely difficult to decouple from it completely. My problem with this is that it now dictates and influences what can be done, what is allowed and what is not, and, with its latest Android saga (https://news.ycombinator.com/item?id=45017028), it has become worrying.
If Google cured cancer tomorrow, there's someone that would be complaining about it and adding "cancer" to the "killed by Google" list. I would be very surprised if smaller browser vendors were happy about having to maintain ancient XSLT code, and I doubt new vendors were planning on ever adding support. Good riddance.
Worth noting that XSLT is actually based on DSSSL, the Scheme-based document transformation and styling language of SGML. Core SGML already has "link processes" as a means to associate simple transforms/renames, reusing other markup machinery concepts such as attributes, but it also introduces a rather low-level automaton construct to describe context-dependent and stateful transformations (the kind of which would've been used for recto/verso rendering on even/odd print pages).
I think it's interesting because XSLT, based on DSSSL, is already Turing-complete and thus the XML world lacked a "simple" sub-Turing transformation, templating, and mapping macro language that could be put in the hands of power users without going all the way to introduce a programming language requiring proper development cycles, unit testing, test harnesses, etc. to not inevitably explode in the hands of users. The idea of SGML is very much that you define your own little markup vocabulary for the kind of document you want to create at hand, including powerful features for ad-hoc custom Wiki markup such as markdown, and then create a canonical mapping to a rendering language such as HTML; a perspective completely lost in web development with nonsensical "semantic HTML" postulates and delivery of absurd amounts of CSS microsyntax.
As a youngster entering the IT professional circles, I was enamoured with SGML: creating my own DTDs for humane entry for my static site generator, editing my SGML source document with Emacs sgml-mode. I worked on TEI and DocBook documents too (and was there something related to Dewey coding system for libraries?).
However, processing fully compliant SGML, before you even introduce DSSSL into the picture, was a nightmare. With only one parser that was both open source and fully compliant (nsgmls), and which was hard to build on contemporary systems, let alone run, really using SGML for anything was an exercise in frustration.
As an engineering mind, I loved the fact you could create documents that are concise yet meaningful, and really express the semantics of your application as efficiently as possible. But I created my own parsers for my subset, and did not really support all of the features.
HTML was also redefined to be an SGML application with 4.0.
I originally frowned on XML as a simplification that made it work for computers rather than for humans, but even that, with the XML, XSLT, XPath... specs, was too complex for most. And I heavily used libxml2 and libxslt to develop some open-source tooling for documentation, and it was full of landmines.
All this to say that SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity. And going for "semantic HTML" in lieu of SGML + DSSSL or XML + XSLT was really an attempt to find that balance of meaning and simplicity.
It's the common cycle as old as software engineering itself.
Completely correct and the operative phrase here is “absurd amounts” which actually captures our entire contemporary computing stack in almost every dimension that matters.
But did it ever actually work in practice? As I remember it, XSLT-backed websites still needed "absurd amounts of CSS microsyntax". You could not do everything you needed with XSLT, so you had to use both XSLT and CSS. Also, coding in XSLT was generally painful, even more so than writing CSS (which I think is another poorly designed language).
It is all well and good to talk about theoretical alternatives that would have been better but we are talking here about a concrete attempt which never worked beyond trivial examples. Why should we keep that alive because of something theoretical which in my opinion never existed?
While I agree with the sentiment, I loathe these "retro" websites that don't actually look like how most websites looked back then. It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.
It's interesting that we don't have a replacement for this use case. For me, XSLT hits a sweet spot where I can send a machine-parsable XML document and a small XSLT sheet from dirt cheap static web hosting (where I cannot perform server-side transforms, or control HTTP headers). This is fairly minimal and avoids needing to keep multiple files in sync.
I could add a polyfill, but that adds multiple MB, making this approach heavyweight.
XSLT was the only convenient way to create a static website without JS. Other ways either require a build step or server-side applications. With XSLT, you could write your data into XML files and your templates into XSL files, and it would just work.
Of course you can achieve similar effects with JS, by downloading data files and rendering them into whatever HTML you want. But that cuts users without enabled JS.
Not a huge loss, I guess, given the lack of popularity of these technologies. But loss nonetheless. One more step to bloated overengineered web.
Users who disable JS are insane and hypocritical if they don't also disable XSLT, which is even worse. So I wouldn't bend over too far backwards to support insane hypocrites. There aren't enough of them to matter, they enjoy having something to complain about, and they're much louder and more performative than the overwhelming majority of users. Not a huge loss cutting them out at all.
I haven't been too chatty about it but the furor over this being removed has, I suspect, everything to do with there being no real plan to replace what it does. No I don't just mean styling RSS feeds. I mean writing websites as semantic documents!! The whole thing the web is (was) about!
Don’t… you’re forgetting the Christmas of ’02, when cousin Marvin brought up the issue of Tabs vs Spaces!! Uncle Frank still holds a grudge, and he’s still not on speaking terms with Adam.
XSLT has a life outside the browser and remains valuable where XML is the way data is exchanged. And RSS does not demand XSLT in the browser so far as I know. I think RIP is a bit excessive.
The website is overly dramatic. Google doesn't hate XSLT; it is simply that no one wants to maintain libxslt, and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.
> it is simply no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money.
As for money: Remind me what was Google's profit last year?
As for usage: XSLT is used on about 10x more sites [1] than Chrome-only non-standards like USB, WebTransport and others that Google has no trouble shoving into the browser
> Google doesn't hate XSLT, it is simply no one wants to maintain libxslt and it is full of security issues. Given how rarely it is used, it is just not worth the time + money. If the author wants to raise money to pay a developer willing to maintain libxslt, Google might revise the decision.
Counterpoint: Google hates XML and XSLT. I've been working on a hobby site using XML and XSLT for the last five years, and Google refused to crawl and index anything on it. I have a working sitemap, a permissive robots.txt, a googlebot HTML file proving that I'm the owner of the site, and I've jumped through every hoop I can find, and they still refused to crawl or index anything except a snippet of the main index.xml page - and they won't crawl any links on that.
I switched everything over to a static site generator a few weeks ago, and Google immediately crawled the whole thing and started showing snippets of the entire site in less than a day.
My guess is that their usage stats are skewed because they've designed their entire search apparatus to ignore it.
The author is frontend designer and has a nice website, too: https://dbushell.com/
I like the personal, individual style of both pages.
[0]: https://www.maxchernoff.ca/tools/Stardew-Valley-Item-Finder/
[1]: https://www.maxchernoff.ca/atom.xml
[2]: https://github.com/whatwg/html/pull/11563#issuecomment-31909...
[3]: https://github.com/gucci-on-fleek/lua-widow-control/blob/852...
You are in some very, very small elite team of web-standards users, then.
Intentionally, in a humorous way, yes.
Why? Last time this came up, the consensus was that libxslt was barely maintained, never intended to be used in a secure context, and full of bugs.
I'm fully in favour of removing such insecure features that barely anyone uses.
I think if the XSLT people really wanted to save it, the best thing to do would have been to write a replacement in Rust. But good luck with that.
I strongly encourage building a website, entitled something like keepXSLTAlive.tld, to advocate for XSLT, as was done with https://keepandroidopen.org/ for Android (https://news.ycombinator.com/item?id=45742488) - or keep the current site (https://xslt.rip/) but update the UI a little to better reflect the protest vibe.
But that does not mean XSLT should be kept alive just because of that. It should be judged on its own merits.
[+] [-] necovek|4 months ago|reply
However, processing fully compliant SGML, before you even introduce DSSSL into the picture, was a nightmare. With only one open source and at the same time the only fully compliant parser (nsgml), which was hard to build on contemporary systems, let alone run, really using SGML for anything was an exercise in frustration.
As an engineering mind, I loved the fact you could create documents that are concise yet meaningful, and really express the semantics of your application as efficiently as possible. But I created my own parsers for my subset, and did not really support all of the features.
HTML was also redefined to be an SGML application with 4.0.
I originally frowned on XML as a simplification that made things work for computers rather than for humans, but even XML, with its XSLT, XPath, etc. specs, was too complex for most. And I heavily used libxml2 and libxslt to develop some open source tooling for documentation, and it was full of landmines.
All this to say that SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity. And going for "semantic HTML" in lieu of SGML + DSSSL or XML + XSLT was really an attempt to find that balance of meaning and simplicity.
It's the common cycle as old as software engineering itself.
[+] [-] jeltz|4 months ago|reply
It is all well and good to talk about theoretical alternatives that would have been better, but we are talking here about a concrete attempt that never worked beyond trivial examples. Why should we keep it alive for the sake of something theoretical which, in my opinion, never existed?
[+] [-] 8organicbits|4 months ago|reply
I could add a polyfill, but that adds multiple MB, making this approach heavyweight.
[+] [-] crazygringo|4 months ago|reply
I'm looking at: https://github.com/mfreed7/xslt_polyfill
Which uses: https://github.com/DesignLiquido/xslt-processor/tree/main
But they don't look that heavy. Am I missing something? Megabytes of JS would be enormous.
[+] [-] vbezhenar|4 months ago|reply
Of course you can achieve similar effects with JS, by downloading data files and rendering them into whatever HTML you want. But that cuts off users who don't have JS enabled.
Not a huge loss, I guess, given the lack of popularity of these technologies. But a loss nonetheless. One more step toward a bloated, overengineered web.
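A middle ground raised elsewhere in the thread is doing the transformation before serving, so neither client-side XSLT nor JS is needed. A rough sketch with Python's standard library (the feed format here is hypothetical, and a real build step would more likely invoke an actual XSLT engine such as xsltproc/libxslt):

```python
# Server-side alternative sketch: instead of shipping raw XML plus a
# client-side transform (XSLT or JS), render the XML to HTML at build time.
# The <feed> format below is made up for illustration.
import xml.etree.ElementTree as ET
from html import escape

FEED = """<feed>
  <item><title>First post</title></item>
  <item><title>Second &amp; last</title></item>
</feed>"""

def render(feed_xml):
    # Parse the data file and emit a plain HTML fragment any browser
    # (or text browser) can display without XSLT or JS support.
    root = ET.fromstring(feed_xml)
    items = "".join(
        f"<li>{escape(item.findtext('title'))}</li>" for item in root.iter("item")
    )
    return f"<ul>{items}</ul>"

print(render(FEED))  # <ul><li>First post</li><li>Second &amp; last</li></ul>
```

The output is static HTML, so it degrades gracefully for users without JS, at the cost of losing the "one XML file, styled client-side" model.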
[+] [-] DonHopkins|4 months ago|reply
Users who disable JS are insane and hypocritical if they don't also disable XSLT, which is even worse. So I wouldn't bend over too far backwards to support insane hypocrites. There aren't enough of them to matter, they enjoy having something to complain about, and they're much louder and more performative than the overwhelming majority of users. Not a huge loss cutting them out at all.
[+] [-] NoboruWataya|4 months ago|reply
I had a good chuckle at the idea of sitting around the dinner table at Christmas telling my parents and in-laws all about XSLT.
[+] [-] troupo|4 months ago|reply
As for money: remind me, what was Google's profit last year?
As for usage: XSLT is used on roughly 10x more sites [1] than Chrome-only non-standards like USB, WebTransport and others that Google has no trouble shoving into the browser.
[1] Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firefox) https://chromestatus.com/metrics/feature/timeline/popularity...
[+] [-] themafia|4 months ago|reply
For $0? Probably not. For $40m/year, I bet you could create an entire company that just maintains and supports all these "abandoned" projects.
[+] [-] basscomm|4 months ago|reply
Counterpoint: Google hates XML and XSLT. I've been working on a hobby site using XML and XSLT for the last five years, and Google refused to crawl and index anything on it. I have a working sitemap, a permissive robots.txt, and a googlebot HTML file proving that I own the site; I've jumped through every hoop I can find, and they still refused to crawl or index anything except a snippet of the main index.xml page, and they won't crawl any links on it.
I switched everything over to a static site generator a few weeks ago, and Google immediately crawled the whole thing and started showing snippets of the entire site in less than a day.
My guess is that their usage stats are skewed because they've designed their entire search apparatus to ignore it.