item 45874399

eftpotrm|3 months ago

I'm aware I'm in a minority, but I find it sad that XSLT stalled and is mostly dead in the market. The amount of effort put into replicating most of the XML+XPath+XSLT ecosystem we had as open standards 25 years ago using ever-changing libraries with their own host of incompatible limitations, rather than improving what we already had, has been a colossal waste of talent.

Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the realities of existing programming languages? No. But was JSON's early 'you can just eval() it into memory' approach ever good engineering? No, and by the time you've written a JSON parser that improves on that, you could have produced an equally improved XML system while retaining the much greater functionality it already had.

RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.

jeltz|3 months ago

There are still virtually zero good XML parsers but plenty of good JSON parsers so I do not buy your assertion. Writing a good JSON parser can be done by most good engineers, but I have yet to use a good XML parser.

This is based on my personal experience of having to parse XML in Ruby, Perl, Python, Java and Kotlin. It is a pain every time, and I have run into parser bugs at least twice in my career, while I have never experienced a bug in a JSON parser. Implementing a JSON parser correctly is far simpler, and the results are generally more user-friendly.

gwbas1c|3 months ago

Take a look at C# / dotnet. The XML parser that's been around since the early 2000s is awesome, but the JSON libraries are just okay. The official JSON library leaves so much to be desired that the older, 3rd party library is often better.

taeric|3 months ago

JSON parsing is pretty much guaranteed to be a nightmare if you try to use the numeric types, or if you repeat keys. Neither of which is an uncommon thing to do.

My favorite is when people start reimplementing schema ideas in json. Or, worse, namespaces. Good luck with that.
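Both failure modes are easy to reproduce. A minimal sketch in Python (the duplicate-key behavior shown is what CPython's `json` module happens to do, not something the spec guarantees):

```python
import json

# Duplicate keys: RFC 8259 leaves the behavior unspecified; Python's
# parser silently keeps the last value, so data vanishes without error.
doc = json.loads('{"id": 1, "id": 2}')
assert doc == {"id": 2}

# Numbers: Python keeps big integers exact, but any parser that maps
# every JSON number to a 64-bit float (as JavaScript does) rounds
# anything past 2**53.
n = json.loads('{"n": 9007199254740993}')["n"]
assert n == 9007199254740993            # exact as a Python int
assert float(n) == 9007199254740992.0   # what a float64-only parser sees
```

Two conforming parsers can disagree on both of these documents without either one violating the spec.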

VMG|3 months ago

> by the time you've written a JSON parser that improves on that, you could have produced an equally improved XML system while retaining the much greater functionality it already had.

Here is where you lose me

The JSON spec fits on two screen pages https://www.json.org/json-en.html

The XML spec is a book https://www.w3.org/TR/xml/

geocar|3 months ago

> The JSON spec fits on two screen pages https://www.json.org/json-en.html

It absolutely does not. From the very first paragraph:

It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.

which is absolutely a book you can download and read here: https://ecma-international.org/publications-and-standards/st...

Furthermore, JSON has so many dangerously incompatible implementations that the errata for them would fill multiple books: advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones, and so on.

> The XML spec is a book https://www.w3.org/TR/xml/

Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously correct, because anyone making a serious XML implementation has demonstrated the attention span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.

eftpotrm|3 months ago

Aside from the other commenter's point about this being a misleading comparison, you didn't need to reinvent the whole XML ecosystem from scratch; it was already there and functional. One of the big claims I've seen for JSON, though, is that it has array support, which XML doesn't. That's correct as far as it goes, but it would have been far from impossible to code up a serializer/deserializer that lets you treat a collection of identically typed XML nodes as an array. Heck, for all I know it exists; it's not conceptually difficult.
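It does exist, at least in rudimentary form: mainstream XML APIs already hand back repeated sibling elements as a list. A sketch with Python's standard `xml.etree` (the document shape and element names here are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical document: a run of identically named sibling elements.
doc = ET.fromstring("""
<users>
  <user>alice</user>
  <user>bob</user>
  <user>carol</user>
</users>
""")

# findall() already deserializes the repeated siblings as an array.
names = [u.text for u in doc.findall("user")]
assert names == ["alice", "bob", "carol"]
```

Mapping that list onto a typed collection in a serializer is a small step from here, not a research project.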

Mikhail_Edoshin|3 months ago

But the part of XML that is equivalent to JSON is basically five special symbols: the two angle brackets, the two kinds of quote, and the ampersand. Syntax-wise this is less than JSON requires. All the rest are extras: grammar, inclusion of external files (with name- and position-based addressing), things like element IDs and references, or a way to formally indicate that the contents of an element are written in some other notation (e.g. "markdown").
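Concretely, handling those special symbols is all the escaping layer has to do; a sketch with Python's stdlib `xml.sax.saxutils` (quotes only need escaping inside attribute values, which is why `escape` leaves them alone by default):

```python
from xml.sax.saxutils import escape, unescape

# Text content: only &, < and > are mandatory escapes.
assert escape("a < b & c") == "a &lt; b &amp; c"

# The mapping is a trivial round trip.
assert unescape("a &lt; b &amp; c") == "a < b & c"
```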

klodolph|3 months ago

Having used XSLT, I remember hating it with the passion of a thousand suns. Maybe we could have improved what we had, but anything I wanted to do was better done somehow else.

I'm glad to have all sorts of specialists on our team, like DBAs, security engineers, and QA. But we had XSLT specialists, and I thought it was just a waste of effort.

altmind|3 months ago

You can do some cool stuff, like serving an RSS file that is also styled/rendered in the browser. A great loss for the 2010-era idea of the semantic web. One corporation is unhappy because it does not cover their use cases.
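For reference, that in-browser styling took a single `xml-stylesheet` processing instruction; a hypothetical feed (file names invented) would look like:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
  </channel>
</rss>
```

A browser with XSLT support renders the feed through `feed.xsl` as a normal page, while feed readers consume the same URL as raw RSS.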

immibis|3 months ago

Not such a minority: people can be sad that XSLT failed and also recognize that removing it from browsers is quite sensible, given the current situation.

theoryaway|3 months ago

> RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.

Hope I can quote this about the Transformer architecture one day.

gwbas1c|3 months ago

IMO, XSLT seems like something that should be handled on the server, not in the browser.