_zagj's comments

_zagj | 6 days ago | on: Digg is gone again

Did you not see this: https://news.ycombinator.com/item?id=47282777

Look at how many updoots it has. Look at how many vacuous, enthusiastic replies it got. That post is especially egregious, but you see stuff like that on a lesser scale every day here, now. My favorite bit is when they go out of their way to shill specific plans/pricing, e.g.:

> You really NEED the $200 Claude MAX plan.

_zagj | 6 days ago | on: What happens when US economic data becomes unreliable

> The chip machines Taiwan uses come from Europe, for example.

Yeah, ASML's EUV photolithography machines, but not much else. American companies like Lam Research and Applied Materials are the leaders in thin film deposition and etch, KLA is the leader in metrology, and Synopsys and Cadence are the leaders in EDA (though Germany's Siemens also competes there via its acquisition of Mentor Graphics, now Siemens EDA).

_zagj | 7 days ago | on: John Carmack about open source and anti-AI activists

> The MIT license

I still can't believe that developers got memed into making this the default license. Twenty years ago, you'd default to the GPL and only opt for something else if it was a complete non-starter; then you'd turn to the LGPL (e.g., for a C library), and failing that, some BSD variant. Developers were careful to prefer the GPL wherever they could to prevent exploitation and maximize user freedom.

It's crazy that even in compiled languages like Rust, MIT is now the default, though I think that's probably due to the lack of a stable ABI complicating dynamic linking enough to make LGPL less viable.

_zagj | 7 days ago | on: John Carmack about open source and anti-AI activists

> submitted a fix and told them how to and I received a barrage of comments about working for free for a corporation that's making money off me

After it became obvious that 1) these LLMs were trained heavily on OSS, 2) they (arguably) wantonly violated the licenses of the OSS they were trained on (even the most permissive of which mandate attribution), 3) LLMs could be used to rewrite code whose license terms (e.g., copyleft) were deemed unsuitable for certain commercial purposes, effectively nullifying those terms, and 4) these LLMs would ultimately be used to reduce demand for developers and suppress developer wages (even as the cost of living keeps rising, and the cost of compute, once deflationary, now rises quickly as well, ironically thanks to LLMs), the culture of unbounded enthusiasm for open source amongst devs ought to have been quickly supplanted by one of peer pressure bordering on public shaming against open source participation.

Yet people still go out of their way to open-source projects, or to work, uncompensated, on open source beyond the "good citizen" stuff of reporting bugs (possibly with fixes) in things they use.

It really boggles the mind. Even if you can't starve the beast, why willingly feed it, and for free?

_zagj | 8 days ago | on: Are LLM merge rates not getting better?

> I feel like anyone used AI coding tools before 11/25 and after 1/26 (with frontier models) will say there has been a massive jump in, there is a difference between whether LLM can do a specific task or pass some arguably arbitrary checks by maintainers vs. what the are capable of.

How much of that is the model and how much of that is the tooling built around it? Also why is the tooling, specifically Claude Code, so buggy?

_zagj | 8 days ago | on: Are LLM merge rates not getting better?

> Well, on one hand they lack new data. Lot's of new code came out of an LLM, so it feeds back.

Supposedly data curation is a Big Deal at Big AI, and they're especially concerned about Ouroboros effects and poisoned data. Also, people are still contributing to open source and open sourcing new projects, something that should have slowed to a trickle by 2023, once it became clear that from then on you'd just be providing fuel for the machines that will ultimately render you unemployable (or less employable), that those machines will completely disregard your license terms (including those of the most permissive licenses, which seek only attribution), and that you're doing all of this for free.

_zagj | 9 days ago | on: Searching for the Agentic IDE

> Among many, many other things, he invented the term "vibecoding".

Yeah, that's a great reason to hate him, but the person you're responding to asked why his Twitter braindroppings belong on the front page.

It should be stated, again, that Karpathy completely missed the boat on LLMs, leaving OpenAI before they developed ChatGPT, and that he convinced Tesla to pursue a vision-only, no-LiDAR approach to FSD that doesn't work and probably won't until after LiDAR-based systems have already solved FSD.

Karpathy is the AI-equivalent of Sam Altman, who, for whatever reason, only fails upward. I think many HNers like him because he reminds them of themselves. Look at this bullshit and tell me it doesn't read like something the average HNer would write: http://karpathy.github.io/2020/06/11/biohacking-lite/

_zagj | 10 days ago | on: Agents that run while I sleep

Somewhat off topic, but any theories as to why the shilling for Claude (not insinuating that's what the OP is doing) is so transparent? For example, the bots/shills often go out of their way to insist you get the $200 plan, in particular. If Anthropic's product is so good: 1) why must it be shilled so hard, and 2) why is the shilling (which is likely partially a result of the product) so obvious? Is this an OpenAI reverse psychology dirty trick, the equivalent of using robocalls to inundate voters with messages telling them to vote for your opponent so as to annoy and negatively dispose them towards your opponent?

_zagj | 10 days ago | on: I built a programming language using Claude Code

Is it strange that I don't accept unverified anecdotes at face value, especially when they contradict the best evidence available? Also:

> calling this person a liar

"Liar" implies a deliberate attempt to deceive, but I specifically mentioned the possibility that these tools just make you feel much more productive than you actually are, as at least one study found. But I'm sure a lot of these anecdotes are, in fact, lies from liars (bots/shills). The fact that Anthropic has to resort to stuff like this: https://news.ycombinator.com/item?id=47282777

should make everyone suspicious of the extravagant claims being made about Claude.

_zagj | 10 days ago | on: I built a programming language using Claude Code

> Odd choice of a comment to post this reply to.

How? They claimed LLMs somehow enabled them to write more code in the span of 3.5 years (assuming they started with ChatGPT's introduction) than they could have written in decades. No studies have shown anything like this. But at least one study did show that devs using LLMs overestimate how much more productive these tools make them.

_zagj | 10 days ago | on: Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

Lots of projects have had requirements like this for years, usually to prevent infection by (A)GPL's virality, or in the case of the FSF, so they can sue on your behalf, or less scrupulously, so the project can re-license itself or dual license itself in the future should the maintainers opt to. (This last part was traditionally the only part that elicited objections to CLAs.)

> it's like a high tech pinky swear

So is your attesting that you didn't contribute any GPL'd code (which, incidentally, you arguably can't do if you're using LLMs trained on GPL'd code), and no one seemed to have issues with that; yet when it's extended to LLMs, the concern trolling starts in earnest. It's also legally binding.

_zagj | 10 days ago | on: I built a programming language using Claude Code

> Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

I have yet to see a study showing something like a 2x or better boost in programmer productivity through LLMs. Usually it's something like 10-30%, depending on the metrics used (figures I don't doubt). Maybe it's 50% with frontier models, but seeing these comments on HN where people act like they're 10x more productive with these tools is strange.

_zagj | 11 days ago | on: Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

> I always assumed this was the case anyway; MIT is, if I'm not mistaken, one of the mostly used licenses

No, it wasn't that way in the 2000s, e.g., on platforms like SourceForge, where OSS devs would go out of their way to learn the terms and conditions of the popular licenses, respected each other's license choices, and usually defaulted to the GPL (or LGPL) unless there was a compelling reason not to: https://web.archive.org/web/20160326002305/https://redmonk.c...

Now the corporate-backed "MIT-EVERYTHING" mindvirus has ruined all of that: https://opensource.org/blog/top-open-source-licenses-in-2025

_zagj | 11 days ago | on: Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

> The LLM ban is unenforceable

Just require that the CLA/Certificate of Origin statement be printed out, signed, and mailed in a stamped envelope, and that besides attesting that they appropriately license their contributions ((A)GPL, BSD, MIT, or whatever) and have the authority to do so, contributors also attest that they haven't used any LLMs for their contributions. This would strongly deter direct LLM usage. Indirect usage, where people whip up LLM-generated PoCs that they then rewrite, would probably continue undetected, but that's less objectionable morally (and legally) than directly committing LLM code.

As an aside, I've noticed a huge drop off in license literacy amongst developers, as well as respect for the license choices of other developers/projects. I can't tell if LLMs caused this, but there's a noticeable difference from the way things were 10 years ago.
