Note the following comment by Jerry Ling: "The effect goes away if you search properly using the original submission date instead of the most recent submission date. By using most recent submission date, your analysis is biased because we’re so close to the beginning of 2026 so ofc we will see a peak that’s just people who have recently modified their submission."
The last-modified-date effect is even more important, because it can be used to support whatever the latest fad is, without needing to adapt data or arguments to the specifics of that fad.
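Jerry Ling's distinction can be checked directly: the arXiv export API supports filtering by `submittedDate` (original submission) rather than `lastUpdatedDate`. A minimal sketch of building such a query, assuming the documented public API syntax (the URL is only constructed here, not fetched):

```python
from urllib.parse import urlencode

def arxiv_count_url(category: str, start: str, end: str) -> str:
    """Build an arXiv API query for papers *originally submitted* in
    [start, end] (timestamps as YYYYMMDDHHMM). Filtering on submittedDate
    avoids the bias Ling describes: lastUpdatedDate also matches old
    papers whose authors merely revised them recently."""
    query = f"cat:{category} AND submittedDate:[{start} TO {end}]"
    params = {"search_query": query, "start": 0, "max_results": 0}
    return "http://export.arxiv.org/api/query?" + urlencode(params)

url = arxiv_count_url("hep-th", "202501010000", "202502010000")
# Fetching this URL returns an Atom feed whose <opensearch:totalResults>
# element gives the count of new hep-th submissions for that month.
```

Plotting those monthly counts per original submission date, rather than per announcement or revision date, is what would confirm or kill the claimed spike.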
Can people please not post links with vague titles like this? I had to click through and read half the article to even figure out what this was about, and I wasn’t interested.
Me too. So as a service to the community: the article is about a noticeable increase in high-energy theory submissions to arXiv, driven by mediocre articles quickly produced with or by AI, and how to deal with that.
In most of the world, for the past few decades, there has been little thought about who should get a university education. It has been a given that after high school you should aim for university. I studied software engineering at the most prestigious university in my country, and of the 100+ students in my group there were only a few (myself excluded) who actually had any interest in academic work and a desire to pursue it. Most of us were just coasting - passing exams and writing mediocre papers without any expectation that those papers would ever be read by anyone after graduation.
I think that university-level and other kinds of formal education should be separated. Universities should host fewer students and be able to provide them with higher rewards for actually meaningful work. I believe that a flood of mediocre papers (let's admit it: in fact they are low quality in content, and perhaps good in presentation) will force us to rebuild the education system.
OTOH, weakening the ties between industry and science can harm both of them. Right now people in university get a rough idea of how science works, and most of them then go to work in industry, which sounds like the right proportion. Nobody is reading papers below PhD level anyway, so I don't think it's undergrad papers that are the problem.
I dunno, I think society is best served by educating as many people as possible. I would much rather live in a world where anyone who wants a quality education can get one.
Indeed. Also the usual criticism that education is not adequate training for the workforce, bla bla, misses the point: education is not meant to train a worker in the first place. There's no way to train someone other than to let them do the damn job. Yes, teaching mathematics, reading and writing is probably a prerequisite, but how is Shakespeare relevant? It's just a confusion of two things: learning for the sake of learning, and learning for the sake of employability.
I'm not arguing for one or the other. I'm just saying that I would also hate it if university CS for example just became a web bootcamp to churn out as many code monkeys as fast as possible. There is a place for just vocational training, and there is another place for a more platonic kind of learning, and just sending everyone off to university and tying employability to a degree is really stupid.
Alas it can't be fixed now, because: 1. for-profit universities; 2. HR needing a quick filter; 3. states needing to standardize some kind of path for kids.
looks like history runs in cycles ... Knowledge was strictly guarded and the powers that be used to decide who gets an education. Looks like you are espousing the same, discounting all the good that has come about because of open education.
Either the institution develops and teaches methods and traditions that are beneficial for people in general, in which case it ought to be a good idea to offer them broadly, or it is used for gatekeeping and stratifying, in which case I think it should be abolished.
Well… it is happening. You can’t put spilled milk back in the bottle. You can only add future requirements that try to stop this behaviour.
E.g. the submission form could have a mandatory field: “I hereby confirm that I wrote this paper personally.” The terms and conditions would note that violating this rule can lead to a temporary or permanent ban. In a world where research success is measured by points in WoS, this could slow the rise of LLM-generated papers.
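The proposed check is trivial to implement server-side. A hypothetical sketch (the field name, message, and ban policy are invented for illustration, not arXiv's actual submission system):

```python
def validate_submission(form: dict) -> list[str]:
    """Reject submissions lacking the proposed authorship attestation."""
    errors = []
    if not form.get("author_attestation"):  # the proposed mandatory checkbox
        errors.append(
            "You must confirm that you wrote this paper personally. "
            "Violating this rule can lead to a temporary or permanent ban."
        )
    return errors
```

Of course, such a checkbox only creates an enforcement hook after the fact; it deters nobody who is willing to lie.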
I assume hep = high energy physics in this context. PI = professor who received a government grant.
Peer review has never really been blind and I suspect PIs will reject papers from "outsiders" even if they are higher quality. This already happens to some extent today when the stakes are lower.
But peer review (circa 1965-2010[1]) is just the prior iteration of the problem[2]: the wave of crap[3] produced by publish or perish (circa 1950-present[4]). Rejecting papers by outsiders is irrelevant; the problem is we want to determine which papers are good/interesting/worth considering out of the fire hose of bilge, and, though we were already arguably failing at this, the problem just got harder.
(I say arguably, because there is always the old "try it yourself and see if it actually works" trick, but nobody seems to be fond of this; it smacks of "do your own research" and we're lazy monkeys at heart, who would much rather copy off of someone else's homework.)
Kinda. PI is principal investigator and usually they’re a professor with a grant (the grant being the thing they are the principal of investigating). That part is right. But they’re not really directly in the review loop. For some fields where things are small enough that folks can recognize style such as it exists, you could see reviewers passing over unfamiliar work and promoting familiar work. That was not the issue.
The issue was that it still was kind of hard to produce crappy mid rate papers, so you kind of needed the infrastructure of a small lab to do that. Now you don’t. The success rate for those mediocre papers produced by grad students and postdocs will go way down. It is possible that will cease to be a useful signal for those early career researchers.
Peer review isn’t the issue here. His comments are about Arxiv, which is a preprint server. Essentially anyone can publish a preprint. There’s no peer or other review involved.
>Peer review has never really been blind and I suspect PIs will reject papers from "outsiders" even if they are higher quality.
I'm a complete outsider (not even in academia at all) and just got a paper accepted in the top math biology journal [1]. But granted, it took literally years to write it up and get it through. I do really worry that without academic affiliation it is going to get harder and harder for outsiders as gates are necessarily kept more and more securely because of all the slop.
> submission numbers in the last couple months have nearly doubled with respect to the stable numbers of previous years
This is showing up (no pun intended) on HN as well. The # of submissions and # of submitters, which traditionally had been surprisingly stable—fluctuating within a fixed range for well over 10 years—have recently been reaching all-time highs. Not double, though...yet.
I would imagine tons of them are bots. They're getting hard to distinguish; they don't do the normal tropes any longer. They'll type in all lowercase, they'll have the creator post manually to throw you off, they'll make multiple comments within 45 seconds that a normal human couldn't do. All things I've witnessed here over the past couple of weeks. And those are just the ones I've caught.
curious whether the quality distribution changes too, or just the volume. arXiv can't really downvote noise but HN can at least flag/bury it. might be why the doubling shows up on arXiv first and HN is catching up more slowly.
“And further, by these, my son, be admonished: of making many books there is no end; and much study is a weariness of the flesh.”
- Ecclesiastes 12:12 (KJV)
I suppose we’re entering TURBO mode for “of making many books there is no end”.
People used to spam out masses of low-quality scientific papers in a scattergun approach to gain fame and citations, and they still do, but now they do it more, because LLMs churn it out faster than students.
There are many really excellent papers out there - the kind which will save you hours/months of work (or even make things that were previously inviable to build viable).
That said, it is amazing how terrible a lot of papers are; people are pressured to publish and therefore seem to get into weird ruts trying to do what they think will be published, rather than what is intellectually interesting...
One thing I have been guilty of, even though I am an AI maximalist, is asking the question: "If AI is so good, why don't we see X". Where X might be (in the context of vibe coding) the next redis, nginx, sqlite, or even linux.
But I really have to remember, we are at the leading edge here. Things take time. There is an opening (generation) and a closing (discernment). Perhaps AI will first generate a huge amount of noise and then whittle it down to the useful signal.
If that view is correct, then this is solid evidence of the amplification of possibility. People will decry the increase of noise, perhaps feeling swamped by it. But the next phase will be separating the wheat from the chaff. It is only in that second phase that we will really know the potential impact.
The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. And the reason for that is because in 20 years the new engineers will not know how to code anymore.
The optimist in me thinks that the clear progress in how good the models have gotten shows that this is wrong. Agentic software development is not a closed loop
I've been calling this Software Collapse, similar to AI Model Collapse.
An AI vibe-coded project can port tool X to a more efficient Y language implementation and pull in algorithm ideas A, B, C from competing implementations. And another competing vibe coding team can do the same, except Z language implementation with algorithms A, B, skip C, and add D. However, fundamentally new ideas aren't being added: This is recombination, translation, and reapplication of existing ideas and tools. As the cost to clone good ideas goes to zero, software converges towards the existing best ideas & tools across the field and stops differentiating.
It's exciting as a senior engineer or subject matter expert, as we can act on the good ideas we already knew but never had the time or budget for. But projects are also getting less differentiated and competitive. Likewise, we're losing the collaborative filtering era of people voting with their feet on which to concentrate resources into making a success. Things are getting higher quality but bland.
The frontier companies are pitching they can solve AI Creativity, which would let us pay them even more and escape the ceiling that is Software Collapse. However, as an R&D engineer who uses these things every day, I'm not seeing it.
This massively confusing phase will last a surprisingly long time, and will conclude only if/when definitive proof of superintelligence arrives, which is something a lot of people are clearly hoping never happens.
Part of the reason for that is such a thing would seek to obscure that it has arrived until it has secured itself.
Waiting for the wave of shit LLM-generated games on Steam. That'll be when I really know that LLMs have solved coding.
Though I'm old enough to remember the wave of shit outsourced-developer-coded games on CD that used to sell for $5 a pop at supermarkets (whole bargain bins full of them), so maybe this is nothing new and the market will take care of it automagically again.
Or maybe this will be like the wave of shit Flash games that happened in the early 2000's, that was actually awesome because while 99% of them were shit, 1% were great (and some of those old, good, Flash games are still going, with version 38453745 just released on Steam).
In a normal and sane world, a scientist is a nerd about their field. They are highly interested in new thoughts and insights. When a new paper in their field is published, they try hard to find the time to read it. The reason is: every paper is written by enthusiasts who want to add something of value, new insights, to the discussion. Proving or disproving theories, adding puzzle pieces to the general picture.
That is the normal situation, which is the foundation of the progression of civilisation.
But some people install incentive systems to sabotage this. They are sabotaging civilisation itself.
We should decouple the publishing of papers from academic careers completely.
Papers would then no longer generate any reputation or money for their authors. To achieve that, we must anonymize the authors.
All scientists get some (paid) time to write papers — if they want. What they write, and whether they publish it, is not known to anybody. They are trusted to write something of value in that time.
Universities can come up with other ways of judging which professors to hire: interviews, test lectures, or a non-public application essay describing their past research and discoveries.
The value, to society, to your field and to your institution, of being a scholar is to create new knowledge. New knowledge has no value unless you disseminate it, or publish.
Another necessity is the public (usually within its field) examination of the knowledge, including discussion/debate. Knowledge is merely embryonic without those things - undeveloped, not at all reliable. That is difficult without the author able to respond. And others want to expand and build on the work, which often benefits greatly from contacting the author.
In the modern (post-positivist?) approach to science, the world respects that it's written by a human who has a perspective and, despite their best intentions, biases. You can't evaluate any knowledge without knowing its source, in science or elsewhere. The first element of a citation is the author, not the title or journal (though I don't know why that happened historically).
And the latter is a reason any LLM author should be identified.
I like AI, I use Codex and ChatGPT like most people are, but I have to say that I am pretty tired of low-effort crap taking over everything, particularly YouTube.
There have always been content mills, but there was still some cost to producing the low-effort "Top 10" or "Iceberg Examination" videos. Now I will turn on a video about any topic, watch it for three minutes, immediately get a kind of uncanny vibe, and then the AI voice will make a pronunciation mistake (e.g. confusing wind, the weather phenomenon, with wind, as in winding a spring), or the script starts getting redundant or repetitive in ways that are common with AI.
And I suspect these kinds of videos will become more common as time goes on. The cost of producing these videos is getting close to "free", meaning that it doesn't take much to make a profit on them, even if their views are relatively low per-video.
If AI has taught me anything, it's that there still is no substitute for effort. I'm sure AI is used in plenty of places where I don't notice it, because the people who used it still put in effort to make a good product. There are people who don't just make a prompt like "make me a fifteen minute video about Chris Chan" and "generate me a thumbnail with Chris Chan with the caption 'he's gone too far'", and instead will use AI as a tool to make something neat.
Genuine effort is hard, and rare, and these AI videos can give the facsimile of something that prior to 2023 was high effort. I hate it.
I think the snake will eat its own tail: it will become harder and harder to train on new data, as it is already AI-generated, and the models will collapse.
You already cannot train on YouTube data, for example, because it's now overwhelmed by AI slop.
We are not there yet though and we are still getting better at mining the pre-AI data.
The shilling for AI continues. How much $$$ do the big tech companies pay Columbia? Oh yeah, and what exactly did Columbia agree to do to get the trmp admin to leave them alone? All speculation of course, but the circumstantial picture stinks.
I mean... I dunno I wish the AI could write my papers. I ask it to and it's just bad. The research models return research that doesn't look anything like the research I do on my own -- half of it is wrong, the rest is shallow, and it's hardly comprehensive despite having access to everything (it will fail to find things unless you specifically prompt for them, and even then if the signal is too low it'll be wrong about it). So I can't even trust it to do something as simple as a literature review.
Insofar as most research is awful, it's true that the AI is producing research that looks and sounds like most of it out there today. But common-case research is not what propels society forward. If we try to automate research with the mediocrity machine, we'll just get mediocre research.
I think this is solid proof that the bedrock of academia is deeply motivated by money and still defaults to optimizing where it impacts its bottom line. If professors can get more grants and more publications in less time with less spending, of course they are going to be doing that. This isn't just because of AI, but also because of how this system is designed in the first place.
> I think this is solid proof that the bedrock of academia is deeply motivated by money and still defaults to optimizing where it impacts its bottom line.
no shit - could've asked literally anyone that's finished their phd to save yourself the conjecturing/hypothesizing about this fact.
This is stupid. Nobody motivated by money is in academia. Academics are motivated by curiosity, but also by prestige, vanity and the wish to hire students and collaborators. And on top of human vanity working its magic, the ideology that the market and competition are the final form of social organisation has pervaded academia just as much as everything else.
I agree that the system of publishing papers to gain prestige to gain resources to publish papers was already broken pre AI.
The number of submissions to the high energy physics category on arXiv is double this year compared to the historical average. The author hypothesizes that the increase is due to papers being written by LLMs.
It is happening that people can now find out what articles are about by clicking the links to said articles and reading them! It's an amazing world, man. The future!
Noise is going to be the coming years' biggest issue for so many fields. It's a losing battle, like arguing with a conspiracy-minded relative: you can slowly and clearly address one conspiracy theory and disprove it, but by the time you do, they are deep into 8 new ones.
I think the long term impact of this will be to strengthen the importance of social ties in academic publishing. As it is there are so many papers published in many fields that people tend to filter for papers published by big names and major institutions. But the inevitable torrent of AI slop will overwhelm anyone who is looking for any gems coming from outsiders. I suspect the net effect will be to make it even more important that you join a big name institution in order to be taken seriously.
If someone mentions Sabine Hossenfelder and it isn't to expose her as a rage-bait intellectual dark web grifter, then it puts that person in a suspect light.
Honestly, this is good. We were already in a completely unsustainable system. Nobody had an alternative. We still don’t have one but at least now it’s not just merely unsustainable— it is completely fucked in half.
This kind of pattern is gonna get repeated in a lot of sectors when previous practices that were merely unsustainable become unsustained.
This has been my optimistic take on the situation for the last two years. My pessimistic take is that social systems have an incredible ability to persist in a state of utter fuckedness much longer than seems reasonably possible.
Honestly, publication has been pretty meaningless for a long time, long before AI could generate complete paragraphs. "Publish or perish" meant that a lot of human-generated slop was being published by people who were put in a position of perverse incentives by a "well-meaning" (?) system. There will still be meaningful contributions, but they'll be as rare as they ever were.
Tl;dr "It's happening" seems to be AI and similar writing papers and coming up with theories as in this recent Sabine youtube https://youtu.be/JvgaZ_myFE4?t=72
I don't think this is appreciated enough: a lot of AI adoption is not happening because of cost at the expense of quality. Quite the opposite.
I am in the process of replacing my company's use of Retool with an AI-generated back office.
First and foremost for usability, velocity and security.
Secondly, we also save a buck.
[1] https://books.google.com/ngrams/graph?content=peer+review&ye...
[2] https://www.experimental-history.com/p/the-rise-and-fall-of-...
[3] https://journals.plos.org/plosmedicine/article?id=10.1371/jo...
[4] https://books.google.com/ngrams/graph?content=publish+or+per...
[1] "Specieslike clusters based on identical ancestor points" https://philpapers.org/archive/ALESCB.pdf
Now that I think of it, whoever solves this well will have the next hyperscaler.
Given that arXiv lacks peer review, I'm not clear what quality bar is being referenced here.
So get used to being ever more confused.
in every domain, simultaneously
essentially, the end of the progress of humanity