top | item 47176204

(no title)

jaunt7632 | 3 days ago

[dead]

discuss

order

CSSer|3 days ago

Tailwind didn't win for either of these reasons (setting aside any personal positive/negative feelings I have about it). It won (in LLMs) because of how the ML model works. The training data places the HTML and the styling info together, so there's an extremely high signal-to-noise ratio. You get far fewer tokens with random styles in them, and it takes several fewer (or maybe even no) thought loops to get working styles.

The surface API of CSS selectors is also large and complex. Tailwind utility classes are not: they're either present on an element or not, and it's often the case that supporting classnames for the UI goal are in close proximity on sibling, parent, or child elements. Even with vast amounts and multiple decades more of plain CSS to compare against in the training data, I suspect this holds. In a stylesheet, the information is spread more thinly and organized more flexibly, so you get lots of extra style rules you didn't need or want, and it's harder to one-shot or even few-shot any style implementation.

If I'm even remotely right about this, it's worth considering this impact in many other languages and applications. I've found the adverse effect to be reduced slightly as models/agents have improved, but I feel it's still very much present. It's totally possible to structure data in a way that makes it easier to train on.
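A toy way to see the locality argument (both snippets below are invented for illustration, not real training data): measure how far an element sits from the text that styles it in Tailwind-style markup versus a classic stylesheet.

```python
# Toy illustration of the "style locality" claim: in Tailwind markup the
# styling tokens sit inside the element; with a stylesheet they live an
# arbitrary distance away. Both snippets are made-up examples.

tailwind = '<button class="px-4 py-2 rounded bg-blue-600 text-white">Save</button>'

classic = """<button class="save-button">Save</button>
<!-- ... in practice, many unrelated lines or a separate file ... -->
<style>
.save-button { padding: 0.5rem 1rem; border-radius: 0.25rem;
               background: #2563eb; color: #fff; }
</style>"""

def distance(doc: str, a: str, b: str) -> int:
    """Character distance between two substrings -- a crude stand-in
    for token distance in a training document."""
    return abs(doc.index(b) - doc.index(a))

# How far is the button from the text that sets its background colour?
d_tailwind = distance(tailwind, "<button", "bg-blue-600")
d_classic = distance(classic, "<button", "background:")

print(d_tailwind < d_classic)  # True: the Tailwind styling is far closer
```

In a real corpus the gap is much larger, since the stylesheet is usually a separate file entirely.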

btown|3 days ago

There's also a reasonable alignment between Tailwind's original goal (if not an explicit one) of minimizing characters typed, and a goal held by subscription-model coding agents to minimize the number of generated tokens to reach a working solution.

But as much as this makes sense, I miss the days of meaningful class names and standalone (S)CSS. Done well, with BEM and the like, it creates a semantically meaningful "plugin infrastructure" on the frontend, where you write simple CSS scripts to play with tweaks, and those overrides can eventually become code, without needing to target "the second x within the third y of the z."

Not to mention that components become more easily scriptable as well. A component running on a production website becomes hackable in the same vein of why this is called Hacker News. And in trying to minimize tokens on greenfield code generation, we've lost that hackability, in a real way.

I'd recommend: tell your AGENTS.md to include meaningful classnames, even if not relevant to styling, in generated code. If you have a configurability system that lets you plug in CSS overrides or custom scripts, make the data from those configurations searchable by the LLM as well. Now you have all the tools you need to make your site deeply customizable, particularly when delivering private-labeled solutions to partners. It's far easier to build this in early, when you have context on the business meaning of every div, rather than later on. Somewhere, a GPU may sigh at generating a few extra tokens, but it's worthwhile.
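A hypothetical sketch of what that AGENTS.md instruction might look like (the wording and classnames here are invented, not a prescribed format):

```markdown
## Styling conventions

- Give every generated component root a meaningful, stable classname
  describing its business role (e.g. `invoice-summary`, `partner-logo`),
  even when Tailwind utilities handle all the actual styling.
- In overrides, never target structural selectors ("the second x within
  the third y"); target these semantic classnames instead.
- Keep existing semantic classnames when refactoring markup.
```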

kabes|3 days ago

I'm not sure the creators of Tailwind share your definition of winning, though. They recently had to let go of most of their staff since revenue has plummeted due to LLMs.

karel-3d|3 days ago

Any information about that? What did they sell? I don't even see a sales link on the Tailwind page.

jaunt7632|3 days ago

[deleted]

NSPG911|3 days ago

That is the issue. It's why Xcode development is really bad with AI models[0] -- because there are barely any text-based tutorials for it, so the models have to make a lot of assumptions and whatnot. Hence, they are really good at Python, JavaScript, and increasingly, Rust.

[0]: https://www.youtube.com/watch?v=J8-CdK4215Y

nindalf|3 days ago

How did you come to the conclusion that it was blogs that made it change behaviour? Look at the examples where Claude shifted behaviour dramatically between Sonnet 4.5 and Opus 4.6. Drizzle ORM went from 21% to 100%. Was there an avalanche of Drizzle related blog posts that we all missed? Celery went from 100% to 0%. Was there a massive but invisible hate campaign against Celery?

Blog posts almost certainly helped. But dramatic shifts like these in favour of newer tech indicate that there's some other factor at play.

steve_adams_86|3 days ago

But what if tailwind has the most tutorials in the training set because it's worth learning, which led to it being fairly ubiquitous and easy to add to the training set?

I'm not expressing an opinion about that; it's a real question.

dotancohen|3 days ago

But what if Tailwind has the most tutorials because it's tricky and difficult? What if the intuitive, maintainable solution simply does not need so many tutorials?

I'm not expressing an opinion about that; I don't do front-end dev, so I have no opinion. It's a real question.

jascha_eng|3 days ago

@dang this account's comments smell like LLM slop. They are mostly on topic, and it's more Claude than ChatGPT, but it's slop nonetheless.

The "didn't win... It won ..." phrasing is telling.

Look at their other comments they are also fishy

I know you guys don't want us to call it out because of negativity. But there needs to be awareness in the community; this is somehow the top comment right now. It feels like this happens in every other thread. Please do something more rigorous than manually deleting accounts.

dang|2 days ago

Thanks - yes, this is an area that's in rapid flux right now.

All: if you see an account mostly posting what look like generated comments, it's super helpful to email hn@ycombinator.com with the username. We're relying heavily on user reports right now, while we're working on building better software defences. Hopefully soon, for example, we'll extend the flagging system to allow this type of report.

Edit: oops I didn't notice that tomhow had already replied!

jascha_eng|3 days ago

Note: I might be wrong on this one, but it's just extremely annoying that I even have to consider whether I'm being manipulated by an AI while reading HN comments.

If I want to read AI stuff, I'll go to Clawdbook or OpenAI's Sora app.

skybrian|3 days ago

I'm using Hono JSX and it has no trouble, though to be fair it's rather similar to React and it occasionally gets confused.

trimethylpurine|3 days ago

Interesting. I'm using Go + htmx + AdminLTE. Not once has Claude recommended or tried to use Tailwind. I sometimes have to remind it to use less JS and use htmx instead, but otherwise it feels pretty coherent.

I recommend starting projects by first writing down your way of doing things (playing architect) and then letting Claude work. It's pretty good at pretending to be you if you give it a good sample of how you do things.

Note: this applies to Opus 4.6. I have not had a useful experience in other models for actual dev work.

piokoch|3 days ago

"Tailwind didn't win because it's the best CSS solution. It won because it has the most tutorials per capita in the training set."

Obviously. People keep forgetting that "Artificial Intelligence" does not think and is not intelligent. It just statistically predicts the next token in a sequence. It's all statistics.
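That mechanism can be caricatured in a few lines. This is a toy bigram counter over a made-up corpus, nothing like a real transformer, but the output step has the same shape: pick a continuation from observed statistics.

```python
from collections import defaultdict

# Toy caricature of "statistically predicts the next token":
# count which word follows which in a tiny invented corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev: str) -> str:
    # Greedy decoding: take the most frequent continuation.
    # A real LLM samples from a learned distribution instead.
    return max(counts[prev], key=counts[prev].get)

print(predict("the"))  # "cat" -- it follows "the" more often than anything else
```

Swap in a corpus where Celery outnumbers every alternative and the same mechanism will keep "recommending" Celery, which is the point being made about Django 6.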

So Django 6 has a new task framework, but the LLM doesn't care, since Celery has better stats.

Side note: it's not only an LLM thing. Companies have for years chosen tech stacks out of fashion or popularity, regardless of technical fit for a given solution. So we have companies adopting Kafka even though it sucks for their use case, and companies switching from Jenkins to GitHub Actions even though Jenkins was cheaper and more performant.

PeterStuer|3 days ago

"does not think and is not intelligent. It just statistically predict next token in a sequence. It is all statistics"

Technically correct, but pretty useless as a working model. It's like saying humans are not intelligent: it's just biochemical and bioelectric reactions; it's all physics.

How would you, from a Searlian perspective, argue against "humans are just statistical next-token predictors"?