> An open pull request represents a commitment from maintainers: that the contribution will be reviewed carefully and considered seriously for inclusion.
This has always been the problem with github culture.
On the Linux and GCC mailing lists, a posted patch does not represent any kind of commitment whatsoever from the maintainers. That's how it should be.
The fact that github puts the number of open pull requests at the very top of every single page related to a project, in an extremely prominent position, is the sort of manipulative "driving engagement" nonsense you'd expect from social media, not from serious engineering tools.
The fact that you have to pay github money in order to permanently turn off pull requests or issues (I mean actually turn off, not automatically close with a bot) is another one of these. Codeberg, by the way, lets any project disable these things.
I have an old open-source project that I archived on GitHub (because I no longer maintain it). A user once opened an issue on a completely unrelated project of mine (under the same account as the archived one), posting AI slop with step-by-step click instructions on how to unarchive the project, re-enable issues, and so on. He spammed the same text to two different email addresses he had found on my GitHub page and in the git history. I immediately banned that user from opening issues on that project, closed the issue, and ignored him. Only to receive another outrageous email asking why I had not complied with his request, and how I dared ban him from opening further issues. I swear, the entitlement on GitHub is sometimes unbearable.
We've enjoyed a certain period (at least a couple of decades) of global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life, from open-source to journalism and job interviews.
I've been trying to manifest a return of the Web of Trust to help people navigate towards content that's created by humans.
A system where I can mark other people as trusted and see who they trust, so when I navigate to a web page or in this case, a Github pull request, my WoT would tell me if this is a trusted person according to my network.
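At its core, the lookup such a system needs is transitive reachability in a directed trust graph: am I connected to this author through people I trust, within a few hops? A minimal sketch in Python — the graph shape, names, and hop limit are all invented for illustration:

```python
from collections import deque

def trust_distance(trust_edges, me, target, max_hops=3):
    """Breadth-first search over a directed trust graph.

    trust_edges maps a user to the set of users they trust.
    Returns the hop distance from `me` to `target` if `target`
    is reachable within `max_hops`, else None (untrusted).
    """
    if target == me:
        return 0
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops == max_hops:
            continue  # don't expand past the trust horizon
        for friend in trust_edges.get(user, set()):
            if friend == target:
                return hops + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None

edges = {
    "me": {"alice"},
    "alice": {"bob"},
    "bob": {"carol"},
}
print(trust_distance(edges, "me", "carol"))    # reachable in 3 hops
print(trust_distance(edges, "me", "mallory"))  # None: outside the network
```

Real webs of trust (PGP's, for example) additionally weigh endorsements and cap path lengths aggressively, since trust dilutes quickly with each hop.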
> global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life
I don't think it's coming to an end. It's getting more difficult, yes, but not impossible. I'm currently working on a game, and since I'm not an artist, I pay artists to create the art. Of the person I work closest with, I know basically nothing except their name, email, and the country they live in. Otherwise it's essentially "they send me a draft > I review and provide feedback > we iterate until done > I send them money", and each of us knows basically nothing about the other.
I agree that trust in the individual is becoming more important, but it has always been one of the most important things for collaboration, or anything that involves other human beings. We've tried to move that trust to other systems, but it seems we're only able to move the trust to the people building and maintaining those systems, not to get rid of it completely.
Maybe "trust" is just here to stay, and we'd all be better off as soon as we realize this, reconnect with the people around us, and connect with the people on the other side of the world.
Some projects, like Linux (the kernel), have always been developed that way. Linus has described the kernel's trust model as very much a "web of trust". You don't submit patches directly to Linus; you submit them to module maintainers, who are trusted by subsystem maintainers, who are all ultimately, indirectly, trusted by the branch maintainer (Linus).
The web brought instant, infinite "data". We used to have limits, limits that would kind of ensure the reality of what was communicated. We should go back to that; it's efficient.
Seems like reading the code is now the real work. AI writes PRs instantly, but reviewing them still takes time. Everything has flipped. Expect more projects to follow: maintainers can just use AI themselves without needing external contributions.
Understanding (not necessarily reading) was always the real work. AI makes people less productive because it speeds up the thing that wasn't hard (generating code) while generating an additional burden on the thing that was hard (understanding the code).
Reviewing code is much less of a burden if I can trust the author to also be invested in the output and to have all the context they need to make it correct. That's true for my team / tldraw's core contributors, but not for external contributors or drive-by accounts. This is nothing new, and up to now it has been worth the hassle for the benefits of contribution: new perspectives, other motivations, relationships with new programmers. What's new is the scale of the problem and the risk that the repo gets overwhelmed by "claude fix this issue that I haven't even read" PRs.
This is probably true, and while I expect productivity to go up, I also expect "FOSS maintainer burnout" to skyrocket in the coming years.
Everyone knows reading code is one-hundredth as fun as writing it, and while we have to accept some amount of reading as the "eating your vegetables" part of the job, FOSS project maintainers are often in a precarious enough position as it is re: job satisfaction. I think having to dramatically increase the proportion of reading to writing, while knowing full well that a bunch of what they are reading was created by some bozo with a CC subscription and little understanding of what they were doing, will lead to a bunch of them walking away.
In the civic tech hacknight community I'm part of, it's hard to collaborate the same way now, at least when people are using AI. Mostly because code now often feels so disposable and fast. It's like the pace layers have changed.
It's been proposed that we start collaborating on specs, and just keep regenerating the code like it's CI, to get back to the feeling of collaboration without holding back the energy and speed of agent coding.
this is precisely why i refuse to use AI to generate code at all. i'd have to not only read it but internalize it and understand it in a way as if i had written it myself. at that point it is easier to actually write the code myself.
for prototypes and throwaway stuff where only the results count, it may be ok. but not for code that goes into a larger project. especially not FOSS projects where the review depends on volunteers.
I actually think Ada has good potential as an AI-adjacent language, because the syntax is optimised for readability (I personally find it very readable too).
I've been using a coding agent for days on a personal project. It has made me think:
1. These LLMs are smart and dumb at the same time. They make a phenomenal contribution in a remarkably short time, then make a really dumb change that no one asked for. They break working code in irrational ways. I've been asking them to add tests for all the functions I care about; this acts as a first guard rail when they trip over themselves. Excessive tests.
2. Having a compiler like Rust's helps catch all sorts of mines that the LLMs are happy to leave behind.
3. The LLMs don't have a proper working memory. Their context is often cluttered. I find that curating that context (what is being done, what was tried, what the technical goal is, specific requests, etc.) in a concise yet "relevant for the moment" manner helps keep them from messing up.
Perhaps important open source projects that choose to accept AI-generated PRs can maintain such excessive test suites, and run the PRs through them first as a crude filter before manually reviewing what the change does.
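One cheap form such a guard-rail suite can take is a table of pinned input/output pairs ("characterization tests") that any agent rewrite must keep passing. A hypothetical sketch — the function and its cases are invented for illustration:

```python
import re

def slugify(title: str) -> str:
    """Hypothetical function under guard: lowercase, with runs of
    non-alphanumeric characters collapsed into single hyphens."""
    s = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return s.strip("-")

# Characterization tests: pin down every behavior we care about, so an
# agent's "helpful" rewrite trips an assertion instead of shipping.
CASES = {
    "Hello, World!": "hello-world",
    "  spaces   everywhere  ": "spaces-everywhere",
    "already-a-slug": "already-a-slug",
    "": "",
}

def test_slugify():
    for given, expected in CASES.items():
        got = slugify(given)
        assert got == expected, f"{given!r}: {got!r} != {expected!r}"

test_slugify()
print("all cases pass")
```

The point is less coverage than friction: each pinned case is one more way for an unwanted "improvement" to fail loudly in CI before a human ever reads the diff.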
Generally speaking, the value of contributions was determined by "proof of work". Time and effort are precious to a human, hence it's a somewhat self-regulating system that prevents huge amounts of low-quality contributions from being generated. That is now gone. Isn't that an interesting problem to fix?
> and little to no follow-up engagement from their authors.
A strategy I sometimes use for external contributions is to immediately ask a question about the pull request. Ignoring PRs where I don't get a reply or the reply doesn't make sense potentially eliminates a lot of low quality contributions.
I wonder if a "no AI" rule is an overly blunt instrument. I can sympathise with it but babies and bathwater etc.
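The ask-a-question strategy above is easy to turn into a mechanical triage rule: question every external PR, then close the ones whose authors never engage. A sketch of the decision logic in Python — the PR record shape and grace period are invented; real data would come from the GitHub API:

```python
from datetime import datetime, timedelta, timezone

def prs_to_close(prs, now=None, grace_days=14):
    """Return numbers of PRs whose authors never answered our
    triage question within the grace period.

    Each PR is a dict: {"number": int, "asked_at": datetime,
    "author_replied": bool}. (Hypothetical shape.)
    """
    now = now or datetime.now(timezone.utc)
    stale = []
    for pr in prs:
        if pr["author_replied"]:
            continue  # engaged author: review normally
        if now - pr["asked_at"] > timedelta(days=grace_days):
            stale.append(pr["number"])
    return stale

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
prs = [
    {"number": 101, "asked_at": now - timedelta(days=30), "author_replied": False},
    {"number": 102, "asked_at": now - timedelta(days=30), "author_replied": True},
    {"number": 103, "asked_at": now - timedelta(days=3), "author_replied": False},
]
print(prs_to_close(prs, now=now))  # [101]
```

The filter is deliberately about engagement rather than AI use: a thoughtful AI-assisted author passes, a drive-by human fails.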
They invited AI in by creating a comprehensive list of instructions for AI agents - in the README, in a context.md, and even as yarn scripts. What did they expect?
Hey, Steve from tldraw here. We use AI tools to develop tldraw. The tools are not the problem; they're just changing the fundamentals (e.g. a well-formed PR is no longer a sign of thoughtful engagement, a large PR no longer shows more effort than a small one, etc.) and accelerating other latent issues in contribution.
About the README etc: we ship an SDK, and a lot of people use our source code as docs or a prototyping environment. I think a lot about agents as consumers of the codebase, and I want to help them navigate the monorepo quickly. That said, I'm not sure the CONTEXT.md system I made for tldraw is actually that useful... new models are good at finding their way around, and I also worry that we don't update the files enough. I've found that, over time, bad directions are worse than no directions.
The CONTEXT.md file was created 5 months ago, and the contribution policy changed today. I would interpret that as a good-faith attempt to work with AI agents, which with some experience, didn't work as well as they hoped.
I still find it useful to vibe code in a private fork. For example, with yt-dlp it's now super easy to add a website with Claude for personal use, but that doesn't mean it's appropriate to open a PR.
> If the job market is unfavourable to juniors, become senior.
That requires a network deep enough that other professionals are willing to critique your work.
So... open-source contributions, I guess?
This increases pressure on senior developers who are the current maintainers of open-source packages at the same time that AI is stealing the attention economy that previously rewarded open-source work.
Seems like we need something like blockchain gas on open-source PRs to reduce spam, incentivize open-source maintainers, and enable others to signal their support for suggestions while also putting money where their mouth is.
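The "gas" idea above amounts to a refundable review deposit: staking makes spam costly while good-faith contributions get their money back. A toy Python model — the class, amounts, and rules are all invented to illustrate the mechanism, not any real system:

```python
class PRStake:
    """Toy escrow for pull-request deposits (hypothetical mechanism)."""

    def __init__(self, deposit=5.0):
        self.deposit = deposit
        self.escrow = {}          # pr_number -> staked amount
        self.maintainer_pool = 0.0

    def open_pr(self, pr_number):
        # Contributor stakes the deposit up front; spam now has a price.
        self.escrow[pr_number] = self.deposit

    def merge(self, pr_number):
        # Good-faith contribution: the stake is refunded in full.
        return self.escrow.pop(pr_number)

    def reject_as_spam(self, pr_number):
        # Low-effort slop: the stake is forfeited to the maintainers,
        # paying for the review time it wasted.
        self.maintainer_pool += self.escrow.pop(pr_number)

s = PRStake()
s.open_pr(1)
s.open_pr(2)
refund = s.merge(1)      # contributor 1 gets their 5.0 back
s.reject_as_spam(2)      # contributor 2's 5.0 goes to the pool
print(refund, s.maintainer_pool)
```

Nothing here requires a blockchain; the same economics work with any payment rail, and the hard part (as with all such schemes) is deciding fairly what counts as spam.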
> If the job market is unfavourable to juniors, become senior.
That’s just the regular LinkedIn nonsense. Very few people have the time and other resources to become seniors while unemployed. On top of that, it’s still unlikely that they’ll pass the HR filter without senior positions on their resumes, regardless of their actual knowledge.
Didn't take long before the quality went downhill.
Skynet was evil and impressive in The Terminator. Skynet 3.0 in real life just sucks: the AI slop annoys the hell out of me. I now need a browser extension that filters away ALL AI.
jruohonen|1 month ago
I'd add science here too.
pella|1 month ago
Mitchell Hashimoto (2025-12-30): "Slop drives me crazy and it feels like 95+% of bug reports, but man, AI code analysis is getting really good. There are users out there reporting bugs that don't know ANYTHING about our stack, but are great AI drivers and producing some high quality issue reports.
This person (linked below) was experiencing Ghostty crashes and took it upon themselves to use AI to write a python script that can decode our crash files, match them up with our dsym files, and analyze the codebase for attempting to find the root cause, and extracted that into an Agent Skill.
They then came into Discord, warned us they don't know Zig at all, don't know macOS dev at all, don't know terminals at all, and that they used AI, but that they thought critically about the issues and believed they were real and asked if we'd accept them. I took a look at one, was impressed, and said send them all.
This fixed 4 real crashing cases that I was able to manually verify and write a fix for from someone who -- on paper -- had no fucking clue what they were talking about. And yet, they drove an AI with expert skill.
I want to call out that in addition to driving AI with expert skill, they navigated the terrain with expert skill as well. They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process.
People like this give me hope for what is possible. But it really, really depends on high quality people like this. Most today -- to continue the analogy -- are unfortunately driving like a teenager who has only driven toy go-karts. Examples: https://github.com/ghostty-org/ghostty/discussions?discussio... " ( https://x.com/mitchellh/status/2006114026191769924 )
blibble|1 month ago
I wouldn't bet on it
SlopHub
kristopolous|1 month ago
You need a literary agent for just about all of them
oguz-ismail2|1 month ago
Don't love your job, job your love.
lifetimerubyist|1 month ago
Then I just took my hosting private. I can’t be arsed to put in the effort when they don’t.