
XML is the future

681 points | tannhaeuser | 2 years ago | bitecode.dev | reply

397 comments

[+] gnulinux|2 years ago|reply
I've been working in the tech industry in the US for about 5 years. Ever since I knew myself I've been coding. From middle school to high school, given any problem, like Sudoku, or keeping up daily chores, my solution was Programming! Programming was my home base. Then I studied it in uni, thought I was kinda good at it, and loved it.

But when I started working in the industry, I realized that it's absolutely exhausting. Hype after hype, fad after fad, modern after modern, refactor after refactor. I have a workflow, I know how to build apps. Then one day director of Ops comes and completely and utterly changes the workflow. Ok fine, I'm young, will learn this. Month passes, it is now Terraform. Ok fine I'm young, will learn this. Now we're serverless. Ok fine, will learn. Now everything is container. Ok. Now everything microservice. K. Now turns out lambdas aren't good, so everything is ECS. OK will rewrite everything...

Look I'm not even complaining. But it feels like I'm stuck in a Franz Kafka novel. We just keep changing and changing the same things again and again because that's the new way to do it. Big distraction. Destroys your workflow. And all the util scripts you wrote over the last 6 months? Now useless.

I don't even know how I would do it. Maybe I would do this the same way if I had any power. But that doesn't change the fact that it's a bit ridiculous. Fun but tiring. Entertaining but exhausting. Cute but frustrating.

[+] sickcodebruh|2 years ago|reply
Doesn’t this strike anyone as overly cynical and just… incurious? Yes, hype and trends are obnoxious, there are individuals and organizations that reflexively seek sexy tech and apply it wrong, but isn’t this also part of how we find things that work and things that don’t?

It’s easy to run through decades of tech trends and present them as the only things that dominated the industry, just like it’s easy to rattle off one-hit-wonders or movies that flopped and claim that the arts are dead. But I remember Apache fading in favor of nginx. MySQL being shouted down by Postgres. Frameworks like Rails and Django becoming popular over the LAMP stack (with the aforementioned A and M…). Docker over Vagrant for dev environments. TypeScript, unidirectional data flow, the decline of OOP. I can go on and on and these are just the ones that spring to mind immediately!

I’m still very much a “choose boring tech” guy. I also sometimes feel frustrated by how fast technology changes and how the winners of hearts and minds are often the result of marketing effort, not good technology. But unless you’re a technology blogger, you probably don’t need to be bothered by this. New tech emerges for a reason. Some of it will be overhyped, some of it will be unfairly ignored, some things will win and some will lose. I feel fortunate to live in a time when there’s so much enthusiasm and creativity for new ways to solve old problems.

[+] devjab|2 years ago|reply
I worked a decade in public sector digitalisation in Denmark, where for some reason they still use a lot of SOAP and thus XML. I’m not so against XML in theory, but I hate it in practice.

You’d have these completely over-engineered solutions where you’d basically need to call a separate microservice for every “field” of anything. So if you wanted a name, an SSN, and an address for a citizen, you’d need to make several calls. We ended up buying an API for the APIs, but that is beside the point I’m trying to make. The point is that it was all very deliberately designed, but the XML… well, you’d have things like:

<ssn ssn="123"/><id>123</id> and worse. And it’s like, why would they spend all that time designing micro-service and data architectures and then output really inconsistent and often even standards-breaking XML? But I know why: because out of the maybe 300 different systems outputting XML that I’ve worked with over the years, none of them did it “right”.

If the future is XML then we better step up our game!

[+] nologic01|2 years ago|reply
Some thoughts on why software is so hype-prone, and will likely remain so, if not accelerate:

* It's intrinsically easy to come up with new approaches. Thinking and writing software is a mental process; it is not limited by physical constraints and messy manufacturing.

* The scope of use contexts in society exploded. You only needed the formula-translation language when you had five whitecoats in a research lab punching instructions on a single computer. Now you have multi-billions of devices.

* The process is self-feeding. With the internet came the development of online interaction tools, techies use them more than any other segment and this creates a large coherent mass of lemmings and network effects.

* The role of bigtech (oligopolies solving their own problems to maintain / expand their dominance and abnormal profitability) which creates a collective osmosis to imitate whatever they are doing.

* There is also intrinsic redundancy and near equivalence of solutions. You only have one way to roll a concrete something on a surface: it must be round. But one can have an infinity of ways to split a workload between a client and a server machine (let alone more servers).

Most of those factors will continue for the foreseeable future so there will be no respite. E.g., Now we are firmly into AI hysteria and this again already shakes things up as it deprecates/de-emphasizes patterns that are not part of the bandwagon and puts emphasis on obscure niches that are deemed critical.

[+] preommr|2 years ago|reply
It's easy to just mark XML off as one of the many fads in tech given how many there are.

But I think it's much more worthwhile to dive into the particulars of why it failed. While they're all fads, some of them share commonalities that, in hindsight, I think we can say were major catalysts of their downfall.

With XML, it failed because something like JSON was much simpler. Time and time again I see people saying that json+comments would be the ideal format. Time and time again we see that people prefer simple solutions that they can jump into easily. Maybe certain problems don't require solutions with those traits, but anything involving sharing data among a diverse group of consumers does.

The XML ecosystem, with things like XSLT, XQuery, etc., was elegant, but also overengineered and clunky. I would even say that the markup itself, with its closing tags, was more annoying than just closing brackets.

Software is still so young... I think it's good that we tried and learned something from it in the short term. Maybe one day we'll get a proper standard with all the right data types that becomes the defacto solution that people learn it even if it's more complicated.

[+] yread|2 years ago|reply
I still can't believe that we (as a profession) kinda traded away schemas, namespaces, comments, sanity and well defined dates basically because some kids were too cool to write out closing tags. My early 20s teammate now closes blocks with } //for in his JS. That's even worse than XML!
[+] eduction|2 years ago|reply
>but I think it's much more worthwile to dive into the particulars of why it failed

If you’re going to deep dive, define your terms: what did it fail at?

At being the future? Kind of a straw man, even if some people did argue that.

As a document interchange format? Between RSS — including podcasts! — Microsoft Office, and LibreOffice it seems to have done reasonably well.

As a data interchange format? Undoubtedly JSON is now preferred for this, but keep in mind that for several crucial years, as the “X” in “AJAX,” it paved the way for modern web apps via Google Maps, Gmail, and before that Outlook for the Web (literally the original AJAX app).

I think it’s not really interesting to examine how and where JSON displaced XML, given this is not hard to grasp and well covered. Much more interesting to extract lessons that also apply to JSON and predict what might displace it, and where, and how. For example it’s interesting that JSON is simpler than XML. Might it have discarded something valuable? The “X” is for extensibility - if we could extend JSON, which we can’t, how would we? First-class date-times might be nice, mmm? Or to be able to add arbitrary other self-describing data types? UUIDs?

What is XML still ideal for? Etc
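Since JSON has no extension mechanism, any such extension has to be a convention layered on top. A minimal Python sketch, assuming a hypothetical `$type`/`$value` envelope (the key names are made up for illustration, not a standard):

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical convention: wrap non-JSON types in a {"$type": ..., "$value": ...}
# envelope, since JSON itself offers no way to add new primitive types.
def encode_extended(obj):
    if isinstance(obj, datetime):
        return {"$type": "datetime", "$value": obj.isoformat()}
    if isinstance(obj, uuid.UUID):
        return {"$type": "uuid", "$value": str(obj)}
    raise TypeError(f"not serializable: {obj!r}")

def decode_extended(d):
    if d.get("$type") == "datetime":
        return datetime.fromisoformat(d["$value"])
    if d.get("$type") == "uuid":
        return uuid.UUID(d["$value"])
    return d

record = {
    "id": uuid.UUID("12345678-1234-5678-1234-567812345678"),
    "created": datetime(2023, 6, 25, tzinfo=timezone.utc),
}
text = json.dumps(record, default=encode_extended)
restored = json.loads(text, object_hook=decode_extended)
```

The catch, of course, is that both ends must agree on the envelope out of band, which is exactly the kind of thing XML namespaces and schemas standardized.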

[+] defanor|2 years ago|reply
To offer a different perspective: I find XML simpler in that it tends to correspond more closely to most arbitrary data models. From ADTs to C structures, there are product types and sum types to encode, along with more or less arbitrary basic types, which have text representation. With XML you get a straightforward way to encode all those, while in JSON there is no single agreed upon method to encode sum types, but you get something like string-indexed arrays (or hash tables) instead, which are normally implemented on top, using simpler types (and are not restricted to strings). And a few arbitrary built-in types, but you have to use strings for others anyway. And no built-in extensibility, so ad hoc hacks are used when it is needed. It would be particularly awkward to use for documents, too.
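The sum-type point can be made concrete. A small Python sketch showing two common ad hoc JSON encodings of a tagged union (both are conventions, neither is a standard; the `tag` key name is arbitrary):

```python
# A sum type: Shape = Circle(radius) | Rect(w, h).
# JSON has no native sum types, so every codebase picks its own encoding:

circle = {"tag": "circle", "radius": 2.0}   # internally tagged variant
rect   = {"rect": {"w": 3.0, "h": 4.0}}     # externally tagged variant

def area(shape):
    # A consumer must know which convention was used to dispatch correctly.
    if shape.get("tag") == "circle":
        return 3.14159 * shape["radius"] ** 2
    if "rect" in shape:
        r = shape["rect"]
        return r["w"] * r["h"]
    raise ValueError("unknown variant")
```

In XML the variant is simply the element name (`<circle radius="2"/>` vs `<rect w="3" h="4"/>`), which is part of why it maps more directly onto ADTs.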

I preferred JSON initially myself, since it seemed a little less verbose, I did not care about extensibility, did not consider its usage for documents, validation, did not care about using it for different data models, and it just seemed simple (FSVO) and straightforward. But then more of XML made sense. It is not perfect for everything, either, but the decisions behind it seem more justified to me than those behind JSON (though JSON still fits JS, at least).

[+] IshKebab|2 years ago|reply
I disagree. XML isn't that complex. I think it failed because:

* It's overly verbose. All those closing tags, ugh.

* The data model is not actually what people want most of the time. People are transferring objects, not documents. The fact that XML doesn't have a proper way to represent maps, and you have the redundancy of attributes, inline text, etc... There's a huge mismatch between the data model of XML and the data model that people usually want.

* SAX is obviously a shit way to do parsing. There were DOM parsers and pull parsers but for some reason SAX was stupidly common despite being stupidly awkward.
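The SAX-vs-tree contrast can be sketched with Python's stdlib parsers; note how much state the SAX handler has to carry just to collect two strings:

```python
import xml.sax
import xml.etree.ElementTree as ET

doc = "<order><item>apple</item><item>pear</item></order>"

# SAX: callback-driven, so all parsing state lives in the handler
# (this bookkeeping is the awkward part).
class ItemCollector(xml.sax.ContentHandler):
    def __init__(self):
        self.items, self._in_item, self._buf = [], False, []
    def startElement(self, name, attrs):
        if name == "item":
            self._in_item, self._buf = True, []
    def characters(self, content):
        if self._in_item:
            self._buf.append(content)
    def endElement(self, name):
        if name == "item":
            self.items.append("".join(self._buf))
            self._in_item = False

handler = ItemCollector()
xml.sax.parseString(doc.encode("utf-8"), handler)

# Tree-style: one call, then walk the parsed tree.
items = [e.text for e in ET.fromstring(doc).iter("item")]
```

SAX earns its keep on multi-gigabyte documents that won't fit in memory; the complaint above is about it being the default even for small payloads.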

[+] wvenable|2 years ago|reply
> With XML, it failed because something like JSON was much simpler.

I remember when XML was pretty new and I used this new-fangled technology called XML-RPC. XML-RPC was amazing and I was using it to connect desktop applications to web applications. If you go look it up, you'll notice that it bears a striking resemblance to JSON.

But what technology actually took off for RPC in XML? SOAP. And SOAP is a nightmare of complexity and hardly works right the first time between heterogeneous systems.

It's funny how much people want to add complexity to the JSON ecosystems with the same over-engineering that killed XML in the same space. Luckily the design of JSON is such that it resists that kind of complexity and also because XML exists it takes a bit of that load.

[+] quickthrower2|2 years ago|reply
XML can be simpler. In C# you could generate an XSD from a good XML example, then fix it up and generate typed, nested C# code. You now have a builder and validator for the stuff you send across the wire.

You can sort of do the same with JSON, to be fair, in theory, and there are probably tools, but it wouldn't be as tight, and for the "XSD" part I am not sure there is a single spec to go with.

Overall though, I prefer JSONland!

[+] vbezhenar|2 years ago|reply
IMO XML failed because of mismatch between programming language structures and storage format. JSON is perfect because it maps 1:1 to arrays and objects. XML does not. There were whole ORM projects to map XML to data structures.

That's a fundamental issue and replacing XML was not that hard.
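A quick sketch of that 1:1 mapping, versus the mapping decision XML forces (the XML layout shown is just one possible convention among many):

```python
import json
import xml.etree.ElementTree as ET

# JSON round-trips native arrays/objects directly, no mapping layer needed:
data = {"user": {"name": "Ada", "tags": ["admin", "ops"]}}
roundtripped = json.loads(json.dumps(data))

# XML forces a decision first: are "tags" repeated elements, attributes,
# or delimited text? Below is one of many possible conventions.
xml_doc = "<user name='Ada'><tag>admin</tag><tag>ops</tag></user>"
root = ET.fromstring(xml_doc)
name = root.get("name")
tags = [t.text for t in root.iter("tag")]
```

Every XML data-binding library answers that mapping question differently, which is where the "ORM for XML" projects came from.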

My prediction is that SQL will fail too. There are infinite attempts to dethrone it. And the reason is the same: tabular data does not map well to our data structures.

Dethroning SQL is much harder, though. It'll take decades.

We introduced fundamental roadblocks into IT and river of time will either break those roadblocks or smooth them into shapes hardly recognizable (SQL + JSON, for example).

[+] bawolff|2 years ago|reply
> The XML ecosystem with things like XSLT, XQuery, etc, were elegant, but also overengineered and clunky

Hell those were the better XML standards. Just consider SOAP, SAML (XMLSignature is insane), etc

XML is just another example that unnecessary complexity is the worst sin in software development.

[+] tannhaeuser|2 years ago|reply
> With XML, it failed because something like JSON was much simpler.

No. XML was introduced as a simplified SGML subset/profile to become the base for new markup vocabularies on the web; both SVG and MathML were specified using XML (and later integrated into HTML 5). The intent was also to replace HTML by XHTML.

The idea to introduce service payloads as custom, non-UI XML and then transform those payloads into XHTML was induced by XML, but distinct from it.

However, W3C went crazy with SOAP and XML (and RDF), attempting to establish entirely new and unproven paradigms such as XForms with XHTML2 rather than merely simplifying syntax, which was bound to fail, and laid ground for browser vendors to evolve HTML outside W3C.

Then XML heads somehow felt insulted and refused to change course or learn new things. Most didn't even realize XML is just an SGML subset, and that everything that was possible using XML is by definition also possible using SGML (plus handling HTML and Markdown and a couple of other things that make SGML more complex compared to XML). To this day, we're hearing XML heads dogmatically advertising their overly strict and verbose red-headed stepchild of a markup language, and wondering why nobody wanted to bow to XML. The article is about this kind of people, who want to use their tool under all circumstances, project requirements be damned.

[+] seanp2k2|2 years ago|reply
Stuff like GRPC / GQL feels cool, but JSON still feels "closer to the wire" and easier to work with using normal text-based tools / curl / etc. The internet felt like magic growing up once I learned about SMTP and HTTP using telnet and played with quite a few serial terminal devices. Now with many APIs behind OAuth or worse, it's getting much less trivial to hack around with in simple ways.
[+] pwdisswordfishc|2 years ago|reply
And specifically XHTML failed for two reasons: (0) Internet Explorer did not support it, and (1) there was no good migration path out of tooling that generated HTML by unrigorous string concatenation.

So instead WHATWG created HTML5, which bastardized well-designed (if clunky to use) XML features like namespaces, paving the way for vulnerabilities like CVE-2020-26870.

[+] Mikhail_Edoshin|2 years ago|reply
XML is very much misunderstood. The hype that surrounded it was for a good reason, because XML is somewhat unique as a concept. There was nothing like it and still isn’t. It is not a data format or something like that. It is a notation tool. Normally you invent some syntax and parse it to get what is called “abstract syntax tree” (AST). With XML you work directly with an AST. Parsing from text is convenient because you can get a rather concise and elegant result. XML is normally way more verbose, although not that much, if well done. Yet the expressiveness is exactly the same.

Notation is what you need when you manually compose some data for machine processing. XML as a data interchange format is a misuse. Yet XML as a data description format is what it is very good at. The difference is that data interchange goes from one machine to another, but data description is what goes from a human to a machine. Data input, in other words. Complex data input. Language-like data input.

This is why XML is widely used, for example, in user interface frameworks where you need to describe very elaborate data structures. Markup is another obvious example; here you also have complex data structures tied to a piece of text. Yet markup is just a special case.

So if we are to add something to this article’s sentiment it could be an observation that we are also prone to misunderstanding things and jumping too quickly to conclusions.

[+] tannhaeuser|2 years ago|reply
> XML is somewhat unique as a concept. There was nothing like it and still isn’t.

That's just incorrect. XML is a proper SGML subset, nothing more. Why do intelligent people like you come here to lecture about markup languages but don't even bother to read the XML specification which clearly states (as in chapter 1, sentence 1):

> The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML.

[+] madsbuch|2 years ago|reply
You could say the same about JSON. It is arguably closer to how an AST is represented in software, as it does not possess the two-dimensional notation that XML does.

The unique thing about XML is that you can have both children and attributes. My guess is that this is to model OOP-based systems: attributes are for the constructor of a certain class, while the children represent dependency injections.

This is IMHO the weak point of XML: it gives too many levers. When we don't know how to assign meaning to the levers, we arrive at garbage like the example from another comment: <ssn ssn="123"/><id>123</id>

[+] rekado|2 years ago|reply
> Normally you invent some syntax and parse it to get what is called “abstract syntax tree” (AST). With XML you work directly with an AST.

This is also why the Scheme syntax for XML, SXML, feels right at home in the land of S-expressions:

https://en.wikipedia.org/wiki/SXML
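For the curious, a rough Python analogue of the SXML shape, with a tiny illustrative (not production-grade) converter to ElementTree:

```python
import xml.etree.ElementTree as ET

# SXML writes <a href="...">link</a> as (a (@ (href "...")) "link").
# Nested tuples give a similar "AST-first" representation in Python:
node = ("a", ("@", ("href", "https://example.com/")), "link")

def to_element(sx):
    tag, rest = sx[0], list(sx[1:])
    el = ET.Element(tag)
    # An "@" node, if present, holds the attribute list.
    if rest and isinstance(rest[0], tuple) and rest[0][0] == "@":
        for name, value in rest.pop(0)[1:]:
            el.set(name, value)
    for child in rest:
        if isinstance(child, tuple):
            el.append(to_element(child))
        else:
            el.text = (el.text or "") + child
    return el

rendered = ET.tostring(to_element(node), encoding="unicode")
```

The point is that the tree is the primary artifact and the angle-bracket syntax is just one serialization of it.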

[+] qayxc|2 years ago|reply
> So if we are to add something to this article’s sentiment it could be an observation that we are also prone to misunderstanding things and jumping too quickly to conclusions.

I'd add another observation: most of the cynical views expressed in the article are simply the result of "hammer syndrome": hand a person a hammer and everything starts looking like a nail. Overuse of tools and trying to apply them to problems they weren't intended to solve, is a big issue.

Ignoring lessons from the past is another. I love how the author makes it sound as if NodeJS and trying to use the same ecosystem for backend and frontend was something new. "Write Once, Run Everywhere!" was a slogan that predates NodeJS by over a decade :)

[+] sacado2|2 years ago|reply
Yes, one of XML's killer features is that it can model rich text documents as well as ASTs. There's a reason why HTML never ever became JSON-ML, or why LibreOffice uses XML rather than JSON to save files.

In a way, XML can be seen as a generalization of simpler formats such as Markdown (for text) and JSON (for structured data). Yes, I'm oversimplifying it.

[+] espe|2 years ago|reply
mostly agree. not only a notation tool though - you can see some familiarity to scheme, that data is also an actionable description, and literate programming, that this actionable description is also a human readable description. together with its tooling XML still has its very own space. XSLT feels quite elegant once you get the hang of it.
[+] senttoschool|2 years ago|reply
In my opinion, one of the worst things was going from servers to serverless for web apps. Vercel (formerly Zeit) made a complete switch from servers to serverless for hosting and Next.js. Everyone jumped in without realizing just how much more complicated serverless architectures are compared to servers.

1. Having your APIs as lambdas now means you can't simply connect to a Postgres/MySQL without setting up a dedicated server to serve as a connection pool. So you're now "serverless" but you have to add a server as a connection pool. /facepalm

2. Putting your APIs on the edge means absolutely nothing if your Postgres/MySQL/Redis/3rd party services are still in one location.

3. Serverless edge databases started popping up but they were significantly more expensive (because you have to replicate data everywhere) and more complicated with limited upside.

4. Cold starts on lambda APIs meant that your web app/web site often load slower than a centralized server.

5. You can't reliably share APIs between your web app and mobile app anymore. Previously, a single server could easily be made to serve both your web app and mobile app.

6. Serverless is often more expensive and unpredictable when it comes to billing. Previously, if you paid $15 for a server, you're going to be billed for $15.

7. Serverless is significantly more complicated when you're just trying to build an MVP. There's no need to try to scale like you're Google when you're just trying to create a proof of concept.

8. You're far more likely to be vendor-locked doing serverless.

Serverless should not have been the default option for webapps. Serverless should have been the exotic option for companies that had a special scaling/edge need. Instead, servers should still be the default. The problem is that Next.js is so popular and so intertwined with Vercel that serverless is the default now.

If Vercel/Next.js were honest, they would tell everyone to use serverless only if they're making a static website. Everything else should start out as a server.
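The usual workaround for the connection-pool problem in point 1 is module-level connection reuse, so warm invocations on the same instance share one connection. A hypothetical sketch (`sqlite3` stands in for a real Postgres driver here, and `handler` is a made-up lambda-style entry point):

```python
import sqlite3  # illustrative stand-in for a Postgres driver

_conn = None  # lives across warm invocations of the same instance

def get_conn():
    # First (cold) invocation on this instance pays the connection cost;
    # subsequent warm invocations reuse it. Note this does nothing about
    # N concurrent instances each holding their own connection, which is
    # why an external pooler is still needed at scale.
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(":memory:")
    return _conn

def handler(event):
    return get_conn().execute("SELECT 1").fetchone()[0]
```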

[+] tannhaeuser|2 years ago|reply
Of course, but this is just an instance of TFA's key observation:

> Geeks think they're rational beings, while they're completely influenced by buzz, marketing, and their emotions.

To which I want to add 1. that geeks young and old also try to pad their resume and e.g. use React under all circumstances, whether it's warranted or not (which seems at least rational from a personal career development PoV), and 2. that geeks think the web is about them, completely and utterly failing to understand that its entire point is easy self-publishing.

As to next.js specifically, it obviously doesn't make sense to tie serverless to React. Also, it doesn't make sense, like at all, to use React for smallish trivial internal web UIs when the main app is written in another backend language and your team has no js coding experience which just invites security nightmares and endless updates, and breaks agility and job rotation for no reason.

As to XML, there's a minor factual misunderstanding here in that DTDs, like XML itself, aren't a genuine invention, but rather a simplification and proper subset of SGML intended for the web, where it has failed (even though it sees plenty of use outside).

But that shouldn't take away from the main point: that you don't go around advertising your fscking format without checking requirements. In this case, using XML (or SGML and HTML) for something that isn't document data, or conversely, using e.g. JSON for text documents. It just makes you look like a one-trick pony.

[+] tacone|2 years ago|reply
All true (I rented a bare metal server myself for my pet projects), but it's still quite tempting.

- scale to zero, or scale to one (no cold starts)

- not having to manage disk space

- not having to manage quite a bit of security (ssh login, fail2ban, opened ports, unprivileged user)

- in some cases, an out-of-the-box (or easier) deploy pipeline

Would you feel more or less comfortable taking vacations while having a server or a serverless platform?

Would you feel more or less comfortable undergoing a security assessment while using a server or a serverless platform?

I quite agree with what you say, but I can't deny there are lots of non-negligible advantages.

[+] phendrenad2|2 years ago|reply
Right, I've come to believe that the software development world is led by the nose by snake oil salesmen. Most software developers don't have a range of experience outside of a CS degree, so they easily fall into "this is the new best practice" traps, especially if they look flashy and professional (examples: https://12factor.net https://machalliance.org)
[+] hn_throwaway_99|2 years ago|reply
FWIW, after using serverless technologies in both AWS and GCP for years, I think the cloud vendors have largely fixed most of the issues you're complaining about. Few examples:

1. Amazon provides things like RDS Proxy, which is essentially a hosted version of pgbouncer, and Aurora, essentially a "serverless postgres" - no need to manage your own server for connection pooling.

2. Cloud vendors have largely fixed the cold start times through a variety of means like min scaling of 1, warming requests, and functions that can process many requests simultaneously.

3. I don't get your point about not being able to share an API between web and mobile. I do this all the time.

4. When it comes to vendor lock-in, GCP for example offers Cloud Run, which will essentially just spin up any Docker container you give to it.

Managing your own servers can have a huge cost, and this can be an especially large cost for a small team. For example, does your business have any SOC 2 or other compliance requirements? Being on serverless as opposed to being on, say, your own EC2 instances, means a huge amount of work (e.g. ensuring servers are always patched, ensuring they are configured securely, ensuring you have permissions configured correctly, etc.) can be offloaded to the cloud provider, who is frankly much more likely to do it correctly than a small startup team.

If anything, I think GCP's CloudRun is awesome and the real future of serverless. In my mind it has most of the benefits of something like kubernetes with about 1/50th of the complexity.

[+] pjmlp|2 years ago|reply
The problem is the business agreements that they are putting in place, like with Sitecore, where Vercel/Next.js is now the blessed way to do CMS frontends, with .NET being left behind.

So it is either Vercel/Next.js with the out-of-the-box development experience, or DIY integration with other stacks that are also "supported".

And they aren't the only ones following such hype cycle.

[+] douglasmoore|2 years ago|reply
Just want to comment on the first two points as a vercel fan

(1) pooling becomes the solution very early. You get massive speed benefits by keeping a connection open. Hitting connection limits is too easy without it. I feel like a push towards pooling is fine because it has great benefits for scale, speed and reliability. Services like Supabase have connection pooled postgres by default even in their free tier.

(2) Next.js server-side data fetching is an amazing feature that deserves its place in the future of web development (the React team agrees, since they've added server components). Grabbing data before reaching the client is amazing. If you have a centralized db, all the more reason to grab as much as you can in one trip.

Other than that I can agree. I think the advice should generally be to not ever recommend a framework too early or without including its use case.

Current recommendations for the different tiers:

Static: GitHub/cloudflare pages

Server-lite: render.com

Serverless: vercel

[+] lijok|2 years ago|reply
Your points do nothing to back up your claim that "serverless architectures are much more complicated compared to servers"

1. You still have to do pooling with servers

2. Not a serverless problem

3. Not a serverless problem

4. This is a tradeoff, not complexity

5. Why not? What does serverless have to do with this?

6. Appears you're confusing serverless and FaaS. Regardless, FaaS has incredibly predictable pricing. Previously, if you paid $15 for a server, you're going to be billed for however many servers you had running on average throughout the month.

7. It's not. Your points here are trying to back it up, but have so far not done so.

8. Why would that be? And how does that factor into complexity?

Your claim is flawed for two reasons:

1. Serverless != FaaS. FaaS is serverless; serverless is not FaaS.

2. You've not taken auto-scaling into consideration on the server side.

[+] chunkyks|2 years ago|reply
Time to link grug brain again! https://grugbrain.dev/
[+] isoprophlex|2 years ago|reply
Thanks to chatGPT there's an infinite supply of Grug wisdom on this specific topic:

Grug see tribe use XML for talk between cave wall. But XML talk too loud. XML say <message>Hello</message>. Why not just say Hello? JSON talk quiet. JSON just say "message": "Hello". Grug like quiet talk.

Grug also see XML not consistent. Sometimes XML use attribute, sometimes use element. Make Grug confused. JSON always use key-value. Grug like consistency.

Grug think XML like big, heavy rock. Hard to carry, hard to use. JSON like small, sharp tool. Easy to carry, easy to use. Grug choose JSON.

[+] tuyiown|2 years ago|reply
> But above all, I learned that geeks think they are rational beings, while they are completely influenced by buzz, marketing, and their emotions. Even more so than the average person, because they believe they are less susceptible to it than normies, so they have a blind spot.

This is so spot on, so much so that 9 hours after publication, it looks like I'm the first to pick up on it and start a thread. The author wrote the essay to illustrate that point; what I see is strong denial from commenters.

[+] ChicagoDave|2 years ago|reply
I absolutely love this rant. It’s why as an application architect we get “it depends” drilled into every decision we make.

But I’d also add one of my own observations.

I believe U.S. based companies tend to look for scapegoats when building software, so they lean into packages and products.

Outside the U.S., you see much more raw architecture and principle-based design, believing that if you craft a solution based on the actual needs of the business, your outcomes will be significantly better.

[+] hooby|2 years ago|reply
In my previous company we had - back in the day - a newly hired junior developer that fully subscribed to that particular XML hype.

We had our uses for XML - we liked it especially for API endpoints and import/export files we exchanged with third party companies. Reason being, you could create a Schema Definition, send it to the other company, and tell them not to bother you, until their data validated against that XSD. We used XML for stuff like that extensively.

But that one junior was like "You are using databases? How silly of you! Just use an XML file and x-path. There's nothing a database can do, that XML can't! You really should switch over your production frontend to xml immediately!"

We gave him a little side project and allowed him to write that using xml instead of a database, which he did. When his results turned out to be pretty much unusable, and did not convince us to switch everything to xml, he left the company to find a more "modern" workplace...

To this day I'm not sure if he actually truly believed that xml/xpath would soon replace all databases - or if he just didn't want to learn SQL (and hide the fact that he was pretty bad at it).

[+] HelloNurse|2 years ago|reply
"At this stage, so much time and money were obliterated the cloud felt like a savior: they will do all that for you, for a fee."

I think "the cloud" and its cohort of friends did a lot to increase the mindshare of the gratuitously complicated ways to do things the article reviews.

For example, there is scale-shaming: if you don't want to invest in allowing your todo list or your ERP for small businesses to scale to infinite users, you are 1) nobody on a business plane, not even ambitious; 2) ignorant, lazy and behind the times on a technical plane.

[+] eliasmacpherson|2 years ago|reply
Reminds me of this old skit article about Docker, "It's the future":

https://web.archive.org/web/20160817120102/https://circleci....

Discussion here: https://news.ycombinator.com/item?id=12303075

The single greatest thing about minimalism is the ease of replacement and maintainability compared with maximalism, in my humble experience.

"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." ― Antoine de Saint-Exupéry

[+] paulddraper|2 years ago|reply
> XML is the future

That kinda happened, just not in the way everyone thought.

In 2023, JavaScript XML is the most popular way to develop web applications, and a common choice for mobile and desktop applications.

---

EDIT: JSX/React/React Native, in case that wasn't clear.

[+] pyeri|2 years ago|reply
Even the AJAX technology was initially sold as "Asynchronous JavaScript and XML". Today it's common to send AJAX requests from frontends, and most of them have nothing to do with XML data, but funny how the acronym sticks around to this day!
[+] NoboruWataya|2 years ago|reply
XML underpins a lot of standards that are ubiquitous in their areas, e.g., RSS, XMPP, GPX/TCX. Tons of government APIs still use XML. In the EU, in the financial sector at least, pretty much all regulatory reporting uses XML. And when the regulators decide to make some of that data available to the public, it typically uses XML as well. Companies in many countries are required to tag their financial statements using XBRL, which is based on XML.

XML has not failed. It has just achieved the status, coveted amongst serious technologies, of being boring.

[+] bugbuddy|2 years ago|reply
> The cloud is often just as complicated as running things yourself, and it's usually ridiculously more expensive.

This was obvious to anyone who was willing to do the maths from the beginning. One big reason for the success of the cloud that no one has talked about is that developers often have a disdainful view of the IT department and the operations people. Pushing DevOps let them go around those people and get the instant gratification of all the computing resources they want, when they want them, without going through the IT department and operations people.

Ironically, some of those developers are starting to get sick of managing the operations and want to just go back to focusing on development. Thus, the pendulum swings back.

[+] noduerme|2 years ago|reply
I really love this review of the past 20 years. It gets to the heart of the hype/FOMO cycle that's driven so many of these stacks to short-lived superstardom, and then quickly into irreversible technical debt. I have never really understood this. My favorite comment I ever got on a coding board when I was debating the pros and cons of building my next project in the trendy platform of the day was

>> STFU, go out and make web.

[+] dathinab|2 years ago|reply
While there are much better parsers by now, XML has some major problems (in no particular order):

- it's quite complicated with a lot of niche issues many parsers don't handle nicely

- it has serious issues when it comes to semantic vs. non-semantic white space; this happens not to matter for some applications but matters a lot for others

- it is in an uncanny valley between being good for human writing and good for machine consumption; for most use-cases there are objectively better formats

- people ran into endless problems with it. Some were its fault, many more were not but still got associated with it. This gave it a negative image outlasting hype cycles.

- it needs to compete with JSON, against which it loses by default in many use-cases, whether or not it's the better choice

- compared to some alternatives there are often way too many ways to encode the same thing; this is also an issue for e.g. JSON, but way, way worse for XML

- non-US-ASCII string encoding: it's old enough for there to be a bunch of legacy messiness; often you can ignore it, but not always
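The "too many ways to encode the same thing" point is easy to see in a tiny example: the fragments below (names invented) all carry exactly the same information, and a consumer has to be prepared for any of them:

```xml
<!-- Three equivalent encodings of one record -->
<user id="7" name="alice"/>

<user>
  <id>7</id>
  <name>alice</name>
</user>

<user id="7">
  <name>alice</name>
</user>
```

JSON has its own ambiguities (string vs. number, null vs. missing key), but the attribute-vs-child-element choice multiplies the variants for every single field.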

Now, there are use-cases where XML was and is used successfully, and I don't see that changing, but a resurgence of XML in other areas would IMHO be a failure of the IT industry to learn from past mistakes. For data serialization (even if human-readable) you want native support for maps and fully encoded strings (e.g. JSON); for configuration you most times want something simpler (sadly YAML fails this due to some subtle issues). For huge datasets you probably want something more compact than XML. Still, there are some good use-cases for XML: data that is mostly, but not exclusively, machine-processed; neither a tiny nor a huge amount of it; a need for a lot of annotations; a need to coordinate changing schemas of "cold stored" files of that format between different companies; mainly encoded in files rather than sent between live communicating servers. I.e. UI encodings without a custom language, some but not all cases of scientific data, dumps of complicated configurations normally never written by humans but sometimes inspected by them.

But please never again tightly couple it with security schemas; especially the way strings are handled in XML makes it a terrible choice for such use-cases. And it's also not a great choice for any live communication between programs.

[+] Tade0|2 years ago|reply
Akin's laws of spacecraft design apply to software engineering with only minimal adjustments:

https://spacecraft.ssl.umd.edu/akins_laws.html

For 39. replace "launch vehicle" with "technology".

In any case, after over a decade of "web development" (as it was still called back when I started out), my position is that to an extent we need this sort of circus, because it's the only way for businesses to invest in moving this field forward.

Take for example the titular XML: largely hot garbage in applications where we learned to use JSON now, pretty damn solid as a text representation of file formats, 5024 pages of OOXML's specification notwithstanding. The author points that out as well.

If you want less of that, go work in a financial institution, where you'll find unironically rock-solid Java 8 (or perhaps even 11 nowadays) and associated frameworks.

[+] somsak2|2 years ago|reply
> So we watched beginners put their data with no schema, no consistency, and broken validation in a big bag of blobs. The projects fail in mass.

Source? Although I was also a Mongo hater at the time it was being hyped, I've yet to see anything concrete showing that this technology choice made a company more likely to fail.

over time, I've actually come to believe that tech choice is one of the least important decisions in terms of impact on a company's market success. which makes me not care so much about fads either way.

[+] mcluck|2 years ago|reply
I worked on two different projects that were going fine until someone decided we needed Mongo. The bugs piled up, and both projects were dying when I left.