It's fun to read some of these historic comments! A while back I wrote a replay system to better capture how discussions evolved at the time of these historic threads. Here's Karpathy's list from his graded articles, in the replay visualizer:
I'd love to see sentiment analysis done based on time of day. I'm sure it's largely time zone differences, but I see a large variance in the types of opinions posted to hn in the morning versus the evening and I'd be curious to see it quantified.
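The bucketing itself is simple enough to sketch. A minimal version, assuming some hypothetical `sentiment()` function that maps a comment's text to a score (say in [-1, 1]); only UTC hours are available, which is exactly why time zone differences would confound it:

```python
from collections import defaultdict
from datetime import datetime, timezone

def sentiment_by_hour(comments, sentiment):
    """comments: iterable of (unix_timestamp, text) pairs.
    sentiment: any callable mapping text to a numeric score.
    Returns {utc_hour: mean sentiment} so morning vs. evening can be compared."""
    buckets = defaultdict(list)
    for ts, text in comments:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        buckets[hour].append(sentiment(text))
    return {h: sum(v) / len(v) for h, v in sorted(buckets.items())}
```

Quantifying the variance would then just be comparing the per-hour means, though separating "different people are awake" from "the same people are grumpier" would need per-user controls.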
This is a cool idea. I would install a Chrome extension that shows a score by every username on this site grading how well their expressed opinions match what subsequently happened in reality, or the accuracy of any specific predictions they've made. Some people's opinions are closer to reality than others and it's not always correlated with upvotes.
An extension of this would be to grade people on the accuracy of the comments they upvote, and use that to weight their upvotes more in ranking. I would love to read a version of HN where the only upvotes that matter are from people who agree with opinions that turn out to be correct. Of course, only HN could implement this since upvotes are private.
The RES (Reddit Enhancement Suite) browser extension indirectly does this for me since it tracks the lifetime number of upvotes I give other users. So when I stumble upon a thread with a user with like +40 I know "This is someone whom I've repeatedly found to have good takes" (depending on the context).
It's subjective of course but at least it's transparently so.
I just think it's neat that it's kinda sorta a loose proxy for what you're talking about but done in arguably the simplest way possible.
>This is a cool idea. I would install a Chrome extension that shows a score by every username on this site grading how well their expressed opinions match what subsequently happened in reality, or the accuracy of any specific predictions they've made.
Why stop there?
If you can do that you can score them on all sorts of things. You could make a "this person has no moral convictions and says whatever makes the number go up" score. Or some other kind of score.
Stuff like this makes the community "smaller" in a way. Like back in the old days on forums and IRC you knew who the jerks were.
That’s what Elon’s vision was before he ended up buying Twitter. Keep a digital track record for journalists. He wanted to call it Pravda.
(And we do have that in real life. Just as, among friends, we do keep track of who is in whose debt, we also keep a mental map of whose voice we listen to. Old school journalism still had that, where people would be reading someone’s column over the course of decades. On the internet, we don’t have that, or we have it rarely.)
I long had a similar idea for stocks. Analyze posts of people giving stock tips on WSB, Twitter, etc and rank by accuracy. I would be very surprised if this had not been done a thousand times by various trading firms and enterprising individuals.
Of course in the above example of stocks there are clear predictions (HNWS will go up) and an oracle who resolves it (stock market). This seems to be a way harder problem for generic free form comments. Who resolves what prediction a particular comment has made and whether it actually happened?
I like the idea and would certainly try it. Although I feel that in a way this would be an antithesis to HN. HN tries to foster curiosity, but if you're (only) ranked by the accuracy of your predictions, this could give an incentive to always fall back to a safe and boring position.

The problem seems underspecified; what does it mean for a comment to be accurate? It would seem that comments like "the sun will rise tomorrow" would rank highest, but they aren't surprising.
Kidding aside, the comments it picks out for us are a little random. For instance, this was an A+ predictive thread (it appears to be rating threads and not individual comments):
I noticed from reviewing my own entry (which honestly I'm surprised exists) that the idea of what it thinks constitutes a "prediction" is fairly open to interpretation, or at least that adding some nuance to a small aspect of someone else's prediction in a thread counts quite heavily. I don't really view how I've participated here over the years in any way as making predictions. I actually thought I had done a fairly good job of not making predictions, by design.
It's a good comment, but "prescient" isn't a word I'd apply to it. This is more like a list of solid takes. To be fair there probably aren't even that many explicit, correct predictions in one month of comments in 2015.
I've spent a weekend making something similar for my gmail account (which google keeps nagging me about being 90% full). It's fascinating to be able to classify 65k+ emails (surprise: more than half are garbage), as well as summarize and trace the nature of communication between specific senders/recipients. It took about 50 hours on dual RTX 3090s running Qwen 3.
My original goal was to prune the account deleting all the useless things and keeping just the unique, personal, valuable communications -- but the other day, an insight has me convinced that the safer / smarter thing to do in the current landscape is the opposite: remove any personal, valuable, memorable items, and leave google (and whomever else is scraping these repositories) with useless flotsam of newsletters, updates, subscription receipts, etc.
One thing this really highlights to me is how often the "boring" takes end up being the most accurate. The provocative, high-energy threads are usually the ones that age the worst.
If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.
Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.
On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.
I bet there are many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.
"Boring but right" generally means that this prediction is already priced in to our current understanding of the world though. Anyone can reliably predict "the sun will rise tomorrow", but I'm not giving them high marks for that.
The one about LLMs and mental health is not a prediction but a current news report, the way you phrased it.
Also, the boring consistent progress case for AI plays out in the end of humans as viable economic agents requiring a complete reordering of our economic and political systems in the near future. So the “boring but right” prediction today is completely terrifying.
This suggests that the best way to grade predictions is some sort of weighting of how unlikely they were at the time. Like, if you were to open a prediction market for statement X, some sort of grade of the delta between your confidence of the event and the “expected” value, summed over all your predictions.
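That delta idea can be written down as a Brier-style rule. A minimal sketch, assuming (hypothetically) that each prediction comes with the commenter's implied confidence, a baseline "market" probability at the time, and the eventual outcome:

```python
def surprise_weighted_score(predictions):
    """predictions: list of (own_prob, market_prob, outcome) triples,
    where outcome is 1 if the event happened, 0 otherwise.
    A positive total means the commenter beat the consensus."""
    total = 0.0
    for own, market, outcome in predictions:
        own_err = (own - outcome) ** 2        # commenter's Brier penalty
        market_err = (market - outcome) ** 2  # consensus baseline penalty
        total += market_err - own_err         # reward for beating the market
    return total

# A confident contrarian call that came true scores highly;
# "the sun will rise tomorrow" adds essentially nothing.
score = surprise_weighted_score([
    (0.9, 0.3, 1),     # contrarian and right: large positive delta
    (0.999, 0.999, 1), # already priced in: delta ~ 0
])
```

The hard part, as noted elsewhere in the thread, is that free-form comments rarely come with explicit probabilities or a clean resolution oracle.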
It's because algorithmic feeds based on "user engagement" reward antagonism. If your goal is to get eyes on content, being boring, predictable and nuanced is a sure way to get lost in the ever increasing noise.
> One thing this really highlights to me is how often the "boring" takes end up being the most accurate.
Would the commenter above mind sharing the method behind their generalization? Many people would spot check maybe five items -- which is enough for our brains to start to guess at potential patterns -- and stop there.
On HN, when I see a generalization, one of my mental checklist items is to ask "what is this generalization based on?" and "If I were to look at the problem with fresh eyes, what would I conclude?".
Is this why depressed people often end up making the best predictions?
In personal situations there's clearly a self fulfilling prophecy going on, but when it comes to the external world, the predictions come out pretty accurate.
A majority don't seem to be predictions about the future, and it seems to mostly like comments that give extended air to what was then and now the consensus viewpoint, e.g. the top comment from pcwalton the highest scored user: https://news.ycombinator.com/item?id=10657401
> (Copying my comment here from Reddit /r/rust:)
Just to repeat, because this was somewhat buried in the article: Servo is now a multiprocess browser, using the gaol crate for sandboxing. This adds (a) an extra layer of defense against remote code execution vulnerabilities beyond that which the Rust safety features provide; (b) a safety net in case Servo code is tricked into performing insecure actions.
There are still plenty of bugs to shake out, but this is a major milestone in the project.
I noticed the Hall of Fame grading of predictive comments has a quirk. It grades some comments on whether they came true or not, but consider its grading of a comment on the article
The Cannons on the B-29 Bomber
"accurate account of LeMay stripping turrets and shifting to incendiary area bombing; matches mainstream history"
It gave a good grade to user cstross, but to my reading of the comment, cstross just recounted a bit of old history. Did the evaluation reward cstross simply for giving a history lesson, or not?
Yes I noticed a few of these around. The LLM is a little too willing to give out grades for comments that were good/bad in a bit more general sense, even if they weren't making strong predictions specifically. Another thing I noticed is that the LLM has a very impressive recognition of the various usernames and who they belong to, and I think shows a little bit of a bias in its evaluations based on the identity of the person. I tuned the prompt a little bit based on some low-hanging fruit mistakes but I think one can most likely iterate it quite a bit further.
I am surprised the author thought the project passed quality control. The LLM reviews seem mostly false.
Looking at the comment reviews on the actual website, the LLM seems to have mostly judged whether it agreed with the takes, not whether they came true, and it seems to have an incredibly poor grasp of its actual task of assessing whether the comments were predictive or not.
The LLM's comment reviews are often statements like "correctly characterized [programming language] as [opinion]."
This dynamic means the website mostly grades people on having the most conformist take (the take most likely to dominate the training data, and to be selected for in the LLM RL tuning process of pleasing the average user).
Examples: tptacek gets an 'A' for his comment on DF, with the LLM claiming that the user
"captured DF's unforgiving nature, where 'can't do x or it crashes is just another feature to learn' which remained true until it was fixed on ..."
So the LLM is praising a comment for describing DF as unforgiving (a characterization of the present at the time, not a statement about the future). And worse, tptacek may in fact have been implying the opposite about the future (e.g., that x would continue to crash, when it was eventually fixed).
Here is the original comment: "
tptacek on Dec 2, 2015:
If you're not the kind of person who can take flaws like crashes or game-stopping frame-rate issues and work them into your gameplay, DF is not the game for you. It isn't a friendly game. It can take hours just to figure out how to do core game tasks. "Don't do this thing that crashes the game" is just another task to learn."
Note: I am paraphrasing the LLM review, as the website is also poorly designed, with one unable to select the text of the LLM review!
N.b., this choice of comment review is not overly cherry picked. I just scanned the "best commentators" and tptacek was number two, with this particular egregiously unrelated-to-prediction LLM summary given as justifying his #2 rating.
Are you sure? The third section of each review lists the “Most prescient” and “Most wrong” comments. That sounds exactly like what you're looking for. For example, on the "Kickstarter is Debt" article, here is the LLM's analysis of the most prescient comment. The analysis seems accurate and helpful to me.
phire
> “Oculus might end up being the most successful product/company to be kickstarted…
> Product wise, Pebble is the most successful so far… Right now they are up to major version 4 of their product. Long term, I don't think they will be more successful than Oculus.”
With hindsight:
Oculus became the backbone of Meta’s VR push, spawning the Rift/Quest series and a multi‑billion‑dollar strategic bet.
Pebble, despite early success, was shut down and absorbed by Fitbit barely a year after this thread.
That’s an excellent call on the relative trajectories of the two flagship Kickstarter hardware companies.
I haven’t looked at the output yet, but came here to say: LLM grading is crap. They miss things, they ignore instructions, bring in their own views, have no calibration, and in general are extremely poorly suited to this task. “Good” LLM-as-a-judge type products (and none are great) use LLMs to make binary decisions - “do these atomic facts match, yes/no” type stuff - and aggregate them to get a score.
I understand this is just a fun exercise so it’s basically what LLMs are good at - generating plausible sounding stuff without regard for correctness. I would not extrapolate this to their utility on real evaluation tasks.
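The binary-then-aggregate pattern described above can be sketched roughly like this, where `ask_llm` is a hypothetical yes/no judge call rather than any real API:

```python
def grade_comment(claims, ask_llm):
    """claims: atomic, falsifiable claims extracted from a comment.
    ask_llm(prompt) -> "yes" or "no". Returns the fraction judged true,
    or None when there is nothing falsifiable to grade."""
    if not claims:
        return None
    hits = sum(
        1 for c in claims
        if ask_llm(f"Did this come true? Answer yes or no: {c}") == "yes"
    )
    return hits / len(claims)
```

Constraining the model to yes/no on atomic facts sidesteps the calibration problem; the letter grade (if you want one) comes from the aggregation, not from the model's own holistic judgment.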
Predictions are only valuable when they're actually made ahead of the knowledge becoming available. A man will walk on mars by 2030 is falsifiable, a man will walk on mars is not. A lot of these entries have very low to no predictive value or were already known at the time, but just related. Would be nice if future 'judges' put in more work to ensure quality judgments.
I would grade this article B-, but then again, nobody wrote it... ;)
It would be very interesting to see this applied year after year to see if people get better or worse over time in the accuracy of their judgments.
It would also be interesting to correlate accuracy to scores, but I kind of doubt that can be done. Between just expressing popular sentiment and the first to the post people getting more votes for the same comment than people who come later it probably wouldn’t be very useful data.
#250, but then I wasn't trying to make predictions for a future AI. Or anyone else, really. Got a high score mostly for status quo bias, e.g. visual languages going nowhere and FPGAs remaining niche.
I'd love to see an "Annie Hall" analysis of hn posts, for incidents where somebody says something about some piece of software or whatever, and the person who created it replies, like Marshall McLuhan stepping out from behind a sign in Annie Hall.
Notable how this is only possible because the website is a good "web citizen." It has urls that maintain their state over a decade. They contain a whole conversation. You don't have to log in to see anything. The value of old proper websites increases with our ability to process them.
> because the website is a good "web citizen." It has urls that maintain their state over a decade.
It's a shame that maintaining the web is so hard that only a few websites are "good citizens". I wish the web was a -bit- way more like git. It should be easier to crawl the web and serve it.
Say, you browse and get things cached and shared, but only your "local bookmarks" persist. I guess it's like pinning in IPFS.
There are things that you have to log in to see, and the mods sometimes move conversations from one place to another, and also, for some reason, whole conversations get reset to a single timestamp.
Never call a man happy until he is dead. Also I don’t think your argument generalizes well - there are plenty of private research investment bubbles that have popped and not reached their original peaks (e.g. VR).
Anyone have a branch that I can run to target my own comments? I'd love to see where I was right and where I was off base. Seems like a genuinely great way to learn about my own biases.
I appreciate your intent, but this tool needs a lot of work -- maybe an entire redesign -- before it would be suitable for the purpose you seek. See discussion at [1].
Besides, in my experience, only a tiny fraction of HN comments can be interpreted as falsifiable predictions.
Instead I would recommend learning about calibration [2] and ways to improve one's calibration, which will likely lead you into literature reviews of cognitive biases and what we can do about them. Also, jumping into some prediction markets (as long as they don't become too much of a distraction) is good practice.
Many people are impressed by this, and I can see why. Still, this much isn't surprising: the Karpathy + LLM combo can deliver quickly. But there are downsides of blazing speed.
If you dig in, there are substantial flaws in the project's analysis and framing, such as the definition of a prediction, assessing comments, data quality overall, and more. Go spelunking through the comments here and notice people asking about methodology and checking the results.
Social science research isn't easy; it requires training, effort, and patience. I would be very happy if Karpathy added a Big Flashing Red Sign to this effect. It would raise awareness and focus community attention on what I think are the hardest and most important aspects of this kind of project: methodology, rigor, criticism, feedback, and correction.
It’s interesting, if you go down near the bottom you see some people with both A’s and D’s.
According to the ratings for example, one person both had extremely racist ideas but also made a couple of accurate points about how some tech concepts would evolve.
Of all the people on the entire internet, I would hope HN posters understand best that anything and everything posted online already has and also will at some point be used in such ways.
It doesn't look like the code anonymizes usernames when sending the thread for grading. This likely induces bias in the grades based on past/current prevailing opinions of certain users. It would be interesting to see the whole thing done again but this time randomly re-assigning usernames, to assess bias, and also with procedurally generated pseudonyms, to see whether the bias can be removed that way.
I'd expect de-biasing would deflate grades for well known users.
It might also be interesting to use a search-grounded model that provides citations for its grading claims. Gemini models have access to this via their API, for example.
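The re-assignment idea is cheap to try. A sketch of how the pipeline might be patched (my assumption, not code from the project): swap every username for a procedural pseudonym before the thread goes to the grader, keeping a mapping to restore names in the report.

```python
import hashlib

def pseudonymize(thread_text, usernames, salt=""):
    """Replace usernames with stable pseudonyms like "user_3fa2c1".
    A fresh random salt per run makes pseudonyms unlinkable across runs."""
    mapping = {}
    # Longest names first, so a username that is a substring
    # of another username isn't mangled by an earlier replacement.
    for name in sorted(usernames, key=len, reverse=True):
        digest = hashlib.sha256((salt + name).encode()).hexdigest()[:6]
        mapping[name] = f"user_{digest}"
        thread_text = thread_text.replace(name, mapping[name])
    return thread_text, mapping
```

Grading the same threads with and without this step would give a direct measure of how much the model's knowledge of who tptacek or patio11 is moves the grades.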
What a human-like criticism of human-like behavior.
I [as a human] also do the same thing when observing others in IRL and forum interactions. Reputation matters™
----
A further question is whether a bespoke username could influence the bias of a particular comment (e.g. A username of something like HatesPython might influence the interpretation of that commenter's particular perception of the Python coding language, which might actually be expressing positivity — the username's irony lost to the AI?).
I understand the exercise, but I think it should have a disclaimer: some of the LLM reviews show a bias, and when I read the comments they turned out not to be as bad as the LLM made them out to be. As this hits the front page, some people will only read the title and not the accompanying blog post, losing all of the nuance.
That said, I understand the concept and love what you did here. By this being exposed to the best disinfectant, I hope it will raise awareness and show how people and corporations should be careful about its usage. Now this tech is accessible to anyone, not only big techs, in a couple of hours.
It also shows how we should take with a grain of salt the result of any analysis of such scale by a LLM. Our private channels now and messages on software like Teams and Slack can be analyzed to hell by our AI overlords. I'm probably going to remove a lot of things from cloud drives just in case. Perhaps online discourse will deteriorate to more inane / LinkedIn style content.
Also, I like that your prompt itself has some purposefully leaked bias, which shows other risks (for instance, "fsflover: F", which may align the LLM to grade worse the handles that are related to free software and open source).
As a meta concept of this, I wonder how I'll be graded by our AI overlords in the future now that I have posted something dismissive of it.
Agreed. I feel like it's more just a collection of good comments. It doesn't surprise me to see tptacek, patio11, etc there. I think the "prediction" aspect is under weighted.
But it reminds me that I miss Manishearth's comments! What ever happened to him? I recall him being a big rust contributor. I'd think he'd be all over the place, with rust's adoption since then. I also liked tokenadult. interesting blast from the past.
Good points. To summarize: for a given comment, one presumably must downselect to the ones that can reasonably be interpreted as forecasts. I see some indicators that the creator of the project (despite his amazing reputation) skated over this part.
My guess is that it’s because there’s a lot of very negative comments about Brazil in that article. Trying to grade people for their opinions on a topic like that gets into dangerous territory.
As moultano suggests, this is likely because most other websites make it completely impossible to navel-gaze. We can't possibly give the HN admins too much praise and credit for their commitment to open and stable availability of legacy data.
Yes very funny to see their own model betray them like this:
> The original “non‑profit, open, patents shared” promise now reads almost like an alternate timeline. Today OpenAI is a capped‑profit entity with a massive corporate partner, closed frontier models, and an aggressive product roadmap.
I have never felt less confident in the future than I do in 2025... and it's such a stark contrast. I guess if you split things down the middle, AI probably continues to change the world in dramatic ways but not in the all or nothing way people expect.
A non-trivial number of people get laid off, likely due to a financial crisis which is used as an excuse for companies to scale up their use of AI. Good chance the financial crisis was partly caused by AI companies, which ironically makes AI cheaper as infra is bought up on the cheap (so there is a consolidation, but the bountiful infra keeps things cheap). That results in increased usage (over a longer period of time), and even when the economy starts coming back, the jobs numbers stay abysmal.
Politics is divided into 2 main groups: those who are employed, and those who are retired. The retired group is VERY large, and has a lot of power. They mostly care about entitlements. The employed-age people focus on AI, which is making the job market quite tough. There are 3 large political forces (but 2 parties): the Left, the Right, and the Tech Elite. The left and the right both hate AI, but the tech elite, though a minority, has outsized power in their tie-breaker role. The age distributions would surprise most. Most older people are now on the left, and most younger people are split by gender. The right focuses on limiting entitlements, and the left focuses on growing them by taxing the tech elite. The right maintains power by not threatening the tech elite.
Unlike the 20th century, America has a more focused global agenda. We're not policing everyone, just those core trading powers. We have not gone to war with China, and China has not taken over Taiwan.
Physical robotics is becoming a pretty big thing, and space travel is becoming cheaper. We have at least one robot on an asteroid mining it. The yield is trivial, but we all thought it was neat.
Energy is much, much greener, and you wouldn't have guessed it... but it was the data centers that got us there. The Tech Elite needed it quickly, and used their political connections to cut red tape and build really quickly.
We do not currently have the political apparatus in place to stop the dystopian nightmares depicted in movies and media. They were supposed to be cautionary tales. Maybe they still can be, but there are basically zero guardrails in non-progressive forms of government to prevent massive accumulations of power being wielded in ways most of the population disapproves of.
What's interesting is that the hindsight it has now is not going to be what it has in 10 years either. Some of the most wrong and most prescient comments could switch as stuff unfolds. In a way some could both still be wrong and right just at different points in time.
Nice! Something must be in the air – last week I built a very similar project using the historical archive of all-in podcast episodes: https://allin-predictions.pages.dev/
I was reading the Anki article on 2015-12-13, and the best prediction was by markm248 saying: "Remember that you read it here first, there will be a unicorn built on the concept of SRS"
Gotta auto grade every HN comment for how good it is at predicting stock market movement then check what the "most frequently correct" user is saying about the next 6 months.
> I spent a few hours browsing around and found it to be very interesting.
This seems to be the result of the exercise? No evaluation?
My concern is that, even if the exercise is only an amusing curiosity, many people will take the results more seriously than they should, and be inspired to apply the same methods to products and initiatives that adversely affect people's lives in real ways.
> My concern is that, even if the exercise is only an amusing curiosity, many people will take the results more seriously than they should, and be inspired to apply the same methods to products and initiatives that adversely affect people's lives in real ways.
That will most definitely happen. We have already known for a while that algorithmic methods have been applied "to products and initiatives that adversely affect people's lives in real ways": https://www.scientificamerican.com/blog/roots-of-unity/revie...
I guess the question is if LLMs for some reason will reinvigorate public sentiment / pressure for governing bodies to sincerely take up the ongoing responsibility of trying to lessen the unique harms that can be amplified by reckless implementation of algorithms.
This is great! Now I want to run this to analyze my own comments and see how I score and whether my rhetoric has improved in quality/accuracy over time!
> I was reminded again of my tweets that said "Be good, future LLMs are watching". You can take that in many directions, but here I want to focus on the idea that future LLMs are watching. Everything we do today might be scrutinized in great detail in the future because doing so will be "free". A lot of the ways people behave currently I think make an implicit "security by obscurity" assumption. But if intelligence really does become too cheap to meter, it will become possible to do a perfect reconstruction and synthesis of everything. LLMs are watching (or humans using them might be). Best to be good.
Can we take a second and talk about how dystopian this is? Such an outcome is not inevitable; it relies on us making it. The future is not deterministic, the future is determined by us. Moreover, Karpathy has significantly more influence on that future than your average HN user.
We are doing something very *very* wrong if we are operating under the belief that this future is unavoidable. That future is simply unacceptable.
Given the quality of the judgment I'm not worried, there is no value here.
Tossing an idea off without putting in the work to make it valuable, rather than properly executing it, is exactly what irritates me about a lot of AI work. You can be 900 times as productive at producing mental popcorn, but if there was value to be had here we're not getting it, just a whiff of it. Sure, fun project. But I don't feel particularly judged here. The funniest bit is the judgment on things that clearly could not yet have come to pass (for instance because there is an exact date mentioned that we have not yet reached). QA could be better.
I call this the "judgement day" scenario. I would be interested if there is some science fiction based on this premise.
If you believe in God of a certain kind, you don't think that being judged for your sins is unacceptable or even good or bad in itself, you consider it inevitable. We have already talked it over for 2000 years, people like the idea.
> I realized that this task is actually a really good fit for LLMs
I've found the opposite, since these models still fail pretty wildly at nuance. I think it's a conceptual "needle in the haystack" sort of problem.
A good test is to find some thread where there's a disagreement and have it try to analyze the discussion. It will usually strongly misrepresent what was being said, by each side, and strongly align with one user, missing the actual divide that's causing the disagreement (a needle).
It would be great to run this on a collection of interesting threads over different periods and not just one snapshot. For example, the thread from the day Trump got elected in 2016, the thread from the day of brexit and so on. Those are the times when people make many passionate predictions about how the future will play out, be good to see them retroactively scored.
I'm delighted to see that one of the users who makes the same negative comments on every Google-related post gets a "D" for saying Waymo was smoke and mirrors. Never change, I guess.
> And then when you navigate over to the Hall of Fame, you can find the top commenters of Hacker News in December 2015, sorted by imdb-style score of their grade point average.
Now let's make a Chrome extension that subtly highlights these users' comments when browsing HN.
> Everything we do today might be scrutinized in great detail in the future because it will be "free".
s/"free"/stolen/
The bit about college courses for future prediction was just silly, I'm afraid: reminds me of how Conan Doyle has Sherlock not knowing Earth revolves around the Sun. Almost all serious study concerns itself with predicting, modelling and influence over the future behaviour of some system; the problem is only that people don't fucking listen to the predictions of experts. They aren't going to value refined, academic general-purpose futurology any more than they have in the past; it's not even a new area of study.
It's great that this was produced in an hour for $60. This is amazing for creating small utilities, exploring your curiosity, etc.
But the site is also quite confusing and messy. OK for a vibe-coded experiment, sure, but it wouldn't be for a final product. And I fear we're going to see more and more of this: big companies downsizing their tech departments and embracing vibe coding. By analogy with inflation, shrinkflation, and skimpflation/enshittification, will we soon adopt some word for this? AIflation? LLMflation?
And how will this comment score in a couple of years? :)
This is a perfect example of the power and problems with LLMs.
I took the narcissistic approach of searching for myself. Here's a grade of one of my comments[1]:
>slg: B- (accurate characterization of PH’s “networking & facade” feel, but implicitly underestimates how long that model can persist)
And here's the actual comment I made[2]:
>And maybe it is the cynical contrarian in me, but I think the "real world" aspect of Product Hunt it what turned me off of the site before these issues even came to the forefront. It always seemed like an echo chamber were everyone was putting up a facade. Users seemed more concerned with the people behind products and networking with them than actually offering opinions of what was posted.
>I find the more internet-like communities more natural. Sure, the top comment on a Show HN is often a critique. However I find that more interesting than the usual "Wow, another great product from John Developer. Signing up now." or the "Wow, great product. Here is why you should use the competing product that I work on." that you usually see on Product Hunt.
I did not say nor imply anything about "how long that model can persist", I just said I personally don't like using the site. It's a total hallucination to claim I was implying doom for "that model" and you would only know that if you actually took the time to dig into the details of what was actually said, but the summary seems plausible enough that most people never would.
The LLM processed and analyzed a huge amount of data in a way that no human could, but the single in-depth look I took at that analysis was somewhere between misleading and flat out wrong. As I said, a perfect example of what LLMs do.
And yes, I do recognize the funny coincidence that I'm now doing the exact thing I described as the typical HN comment a decade ago. I guess there is a reason old me said "I find that more interesting".
I'm not so sure; that may not have been what you meant, but that doesn't mean it's not what others read into it. The broader context is HN is a startup forum and one of the most common discussion patterns is 'I don't like it' that is often a stand-in for 'I don't think it's viable as-is'. Startups are default dead, after all.
With that context, if someone were to read your comment and be asked 'does this person think the product's model is viable in the long run' I think a lot of people would respond 'no'.
One of the few use cases for LLMs that I have high hopes for and feel is still under appreciated is grading qualitative things. LLMs are the first tech (afaik) that can do top-down analysis of phenomena in a manner similar to humans, which means a lot of important human use cases that are judgement-oriented can become more standardized, faster, and more readily available.
For instance, one of the unfortunate aspects of social media that has become so unsustainable and destructive to modern society is how it exposes us to so many more people and hot takes than we have ability to adequately judge. We're overwhelmed. This has led to conversation being dominated by really shitty takes and really shitty people, who rarely if ever suffer reputational consequence.
If we build our mediums of discourse with more reputational awareness using approaches like this, we can better explore the frontier of sustainable positive-sum conversation at scale.
Implementation-wise, the key question is how do we grade the grader and ensure it is predictable and accurate?
> But if intelligence really does become too cheap to meter, it will become possible to do a perfect reconstruction and synthesis of everything. LLMs are watching (or humans using them might be). Best to be good.
I cannot believe this is just put out there unexamined of any level of "maybe we shouldn't help this happen". This is complete moral abdication. And to be clear, being "good" is no defense. Being good often means being unaligned with the powerful, so being good is often the very thing that puts you in danger.
I've had the same though as Karpathy over the past couple of months/years. I don't think it's good, exciting, or something to celebrate, but I also have no idea how to prevent it.
I would read his "Best to be good." as a warning or reminder that everything you do or say online will be collected and analyzed by an "intelligence". You can't count on hiding amongst the mass of online noise. Imagine if someone were to collect everything you've written or uploaded to the internet and compiled it into a long document. What sort of story would that tell about who you are? What would a clever person (or LLM) be able to do with that document?
If you have any ideas on how to stop everyone from building the torment nexus, I am willing to listen.
It's nice that the LLM-enabled panopticon still cannot find this very recent related media, [0] but my silly mind can. It is actually an interesting commentary from a non-tech point of view. This is how the rest of the world feels:
Anyway, back to work trying to make my millions using Opus and such.
Well the companies that facilitate this have found themselves in a position where if they go down they take the US economy with them, so the maybe this shouldn't happen thing is a moot point. At least we know this stuff is in stable, secure hands though, like how the palantir ceo does recorded interviews while obviously blasted out of his mind on drugs.
To be clear...prior to this recent explosive interest in LLMs, this was already true. Snowden was over 10 years ago.
We can't start clutching our pearls now as if programmatic mass surveillance hasn't been running on all cylinders for over 20 years.
Don't get me wrong, we should absolutely care about this, everyone should. I'm just saying any vague gestures at imminent privacy-doom thanks to LLMs is liable to be doing some big favors of inadvertently sanitizing the history of prior (and still) egregious privacy offenders.
I'm just suggesting more "Yes and" and less "pearl clutching" is all.
dude, please do this for every year until today. This idea is actually amazing. If you need more money for API credits im sure people here could help donate.
* Nvidia GPUs will see heavy competition and most chat-like use-cases switching to cheaper models and inference-specific-silicon but will be still used on the high end for critical applications and frontier science
* Most Software and UIs will be primarily AI-generated. There will be no 'App Stores' as we know them.
* ICE cars will become niche and will have largely been replaced by EVs; Solar will be widely deployed and will be the dominant source of power
* Climate Change will be widely recognized due to escalating consequences and there will be lots of efforts in mitigations (e.g, Climate Engineering, Climate-resistant crops, etc).
I'd take the other side for most of these - Nvidia one is too vague (some could argue it's already seeing "heavy competition" from Google and other players in the space) but something more concrete - I doubt they will fall below 50% market share.
Interesting experiment. Using modern LLMs to retroactively grade decade-old HN discussions is a clever way to measure how well our collective predictions age. It’s impressive how little time and compute it now takes to analyze something that would’ve required days of manual reading. My only caution is that hindsight grading can overvalue outcomes instead of reasoning — good reasoning can still lead to wrong predictions. But as a tool for calibrating forecasting and identifying real signal in discussions, this is a very cool direction.
jasonthorsness|2 months ago
Swift is Open Source https://hn.unlurker.com/replay?item=10669891
Launch of Figma, a collaborative interface design tool https://hn.unlurker.com/replay?item=10685407
Introducing OpenAI https://hn.unlurker.com/replay?item=10720176
The first person to hack the iPhone is building a self-driving car https://hn.unlurker.com/replay?item=10744206
SpaceX launch webcast: Orbcomm-2 Mission [video] https://hn.unlurker.com/replay?item=10774865
At Theranos, Many Strategies and Snags https://hn.unlurker.com/replay?item=10799261
SauntSolaire|2 months ago
arowthway|2 months ago
matsemann|2 months ago
Miss it for reddit as well. Top day/week/month/alltime makes it hard to find the top posts from a specific month in 2018.
HanClinto|2 months ago
modeless|2 months ago
An extension of this would be to grade people on the accuracy of the comments they upvote, and use that to weight their upvotes more in ranking. I would love to read a version of HN where the only upvotes that matter are from people who agree with opinions that turn out to be correct. Of course, only HN could implement this since upvotes are private.
cootsnuck|2 months ago
It's subjective of course but at least it's transparently so.
I just think it's neat that it's kinda sorta a loose proxy for what you're talking about but done in arguably the simplest way possible.
potato3732842|2 months ago
Why stop there?
If you can do that you can score them on all sorts of things. You could make a "this person has no moral convictions and says whatever makes the number go up" score. Or some other kind of score.
Stuff like this makes the community "smaller" in a way. Like back in the old days on forums and IRC you knew who the jerks were.
leobg|2 months ago
(And we do have that in real life. Just as, among friends, we do keep track of who is in whose debt, we also keep a mental map of whose voice we listen to. Old school journalism still had that, where people would be reading someone’s column over the course of decades. On the internet, we don’t have that, or we have it rarely.)
TrainedMonkey|2 months ago
Of course in the above example of stocks there are clear predictions (HNWS will go up) and an oracle who resolves it (stock market). This seems to be a way harder problem for generic free form comments. Who resolves what prediction a particular comment has made and whether it actually happened?
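To make that gap concrete, a resolvable prediction needs roughly this much structure (an illustrative sketch; free-form comments supply almost none of these fields):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    claim: str                  # what was predicted
    deadline: str               # by when it should resolve
    oracle: Callable[[], bool]  # who/what decides the outcome

# A stock-style prediction is easy: the market is the oracle.
easy = Prediction(
    claim="ACME stock closes above $100",
    deadline="2016-12-31",
    oracle=lambda: True,  # stand-in for a real price lookup
)

# A free-form HN comment gives you only `claim`, vaguely, and
# neither a deadline nor an oracle -- that's the hard part.
print(easy.oracle())
```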
emaro|2 months ago
8organicbits|2 months ago
prawn|2 months ago
tptacek|2 months ago
Kidding aside, the comments it picks out for us are a little random. For instance, this was an A+ predictive thread (it appears to be rating threads and not individual comments):
https://news.ycombinator.com/item?id=10703512
But there's just 11 comments, only 1 for me, and it's like a 1-sentence comment.
I do love that my unaccredited-access-to-startup-shares take is on that leaderboard, though.
kbenson|2 months ago
n4r9|2 months ago
It's a good comment, but "prescient" isn't a word I'd apply to it. This is more like a list of solid takes. To be fair there probably aren't even that many explicit, correct predictions in one month of comments in 2015.
mvkel|2 months ago
btbuildem|2 months ago
My original goal was to prune the account deleting all the useless things and keeping just the unique, personal, valuable communications -- but the other day, an insight has me convinced that the safer / smarter thing to do in the current landscape is the opposite: remove any personal, valuable, memorable items, and leave google (and whomever else is scraping these repositories) with useless flotsam of newsletters, updates, subscription receipts, etc.
subscriptzero|2 months ago
Any chance you can outline the steps/prompts/tools you used to run this?
I've been building a second-brain type project that plugs into all my work places, and a custom classifier like this has been on the list of things that would enhance it.
red-iron-pine|2 months ago
Rperry2174|2 months ago
If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.
Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.
On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.
I bet there's many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.
yunwal|2 months ago
jimbokun|2 months ago
Also, the boring consistent progress case for AI plays out in the end of humans as viable economic agents requiring a complete reordering of our economic and political systems in the near future. So the “boring but right” prediction today is completely terrifying.
simianparrot|2 months ago
johnfn|2 months ago
schoen|2 months ago
By 2065, we should be in possession of a proof that 0+0=0. Hopefully by the following year we will also be able to confirm that 0*0=0.
(All arithmetic here is over the natural numbers.)
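As it happens, we're well ahead of that schedule: in Lean both identities over the naturals hold definitionally, so each proof is a single word (a sketch assuming Lean 4's built-in `Nat`):

```lean
-- Both facts reduce by computation on Nat, so `rfl` closes them.
example : 0 + 0 = 0 := rfl
example : 0 * 0 = 0 := rfl
```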
0manrho|2 months ago
xpe|2 months ago
Would the commenter above mind sharing the method behind their generalization? Many people would spot check maybe five items -- which is enough for our brains to start to guess at potential patterns -- and stop there.
On HN, when I see a generalization, one of my mental checklist items is to ask "what is this generalization based on?" and "If I were to look at the problem with fresh eyes, what would I conclude?".
copperx|2 months ago
In personal situations there's clearly a self fulfilling prophecy going on, but when it comes to the external world, the predictions come out pretty accurate.
mistercheph|2 months ago
> (Copying my comment here from Reddit /r/rust:) Just to repeat, because this was somewhat buried in the article: Servo is now a multiprocess browser, using the gaol crate for sandboxing. This adds (a) an extra layer of defense against remote code execution vulnerabilities beyond that which the Rust safety features provide; (b) a safety net in case Servo code is tricked into performing insecure actions. There are still plenty of bugs to shake out, but this is a major milestone in the project.
hackthemack|2 months ago
https://news.ycombinator.com/item?id=10654216
The Cannons on the B-29 Bomber "accurate account of LeMay stripping turrets and shifting to incendiary area bombing; matches mainstream history"
It gave a good grade to user cstross, but to my reading of the comment, cstross just recounted a bit of old history. Did the evaluation reward cstross just for giving a history lesson, or no?
karpathy|2 months ago
pierrec|2 months ago
I begrudgingly accept my poor grade.
LeroyRaz|2 months ago
Looking at the comment reviews on the actual website, the LLM seems to have mostly judged whether it agreed with the takes, not whether they came true, and it seems to have an incredibly poor grasp of its actual task of assessing whether the comments were predictive or not.
The LLM's comment reviews are often statements like "correctly characterized [programming language] as [opinion]."
This dynamic means the website mostly grades people on having the most conformist take (the take most likely to dominate the training data, and be selected for in the LLM RL tuning process of pleasing the average user).
LeroyRaz|2 months ago
Link to LLM review: https://karpathy.ai/hncapsule/2015-12-02/index.html#article-....
So the LLM is praising a comment for describing DF as unforgiving (a characterization of the present then, not a statement about the future). And worse, it seems like tptacek may in fact have been implying the opposite of what happened (e.g., that x would continue to crash, when it was eventually fixed).
Here is the original comment: " tptacek on Dec 2, 2015 | root | parent | next [–]
If you're not the kind of person who can take flaws like crashes or game-stopping frame-rate issues and work them into your gameplay, DF is not the game for you. It isn't a friendly game. It can take hours just to figure out how to do core game tasks. "Don't do this thing that crashes the game" is just another task to learn."
Note: I am paraphrasing the LLM review, as the website is also poorly designed, with one unable to select the text of the LLM review!
N.b., this choice of comment review is not overly cherry-picked. I just scanned the "best commentators" and tptacek was number two, with this particular, egregiously unrelated-to-prediction LLM summary given as justification for his #2 rating.
hathawsh|2 months ago
https://karpathy.ai/hncapsule/2015-12-03/index.html#article-...
andy99|2 months ago
I understand this is just a fun exercise so it’s basically what LLMs are good at - generating plausible sounding stuff without regard for correctness. I would not extrapolate this to their utility on real evaluation tasks.
jacquesm|2 months ago
I would grade this article B-, but then again, nobody wrote it... ;)
MBCook|2 months ago
It would be very interesting to see this applied year after year to see if people get better or worse over time in the accuracy of their judgments.
It would also be interesting to correlate accuracy to scores, but I kind of doubt that can be done. Between just expressing popular sentiment and the first to the post people getting more votes for the same comment than people who come later it probably wouldn’t be very useful data.
pjc50|2 months ago
nixpulvis|2 months ago
Seriously, while I find this cool and interesting, I also fear how these sorts of things will work out for us all.
Sophira|2 months ago
DonHopkins|2 months ago
https://www.youtube.com/watch?v=vTSmbMm7MDg
moultano|2 months ago
chrisweekly|2 months ago
1. https://www.w3.org/Provider/Style/URI
dietr1ch|2 months ago
It's a shame that maintaining the web is so hard that only a few websites are "good citizens". I wish the web was a -bit- way more like git. It should be easier to crawl the web and serve it.
Say, you browse and get things cached and shared, but only your "local bookmarks" persist. I guess it's like pinning in IPFS.
jeffbee|2 months ago
Tossrock|2 months ago
johncolanduoni|2 months ago
scosman|2 months ago
xpe|2 months ago
Besides, in my experience, only a tiny fraction of HN comments can be interpreted as falsifiable predictions.
Instead I would recommend learning about calibration [2] and ways to improve one's calibration, which will likely lead you into literature reviews of cognitive biases and what we can do about them. Also, jumping into some prediction markets (as long as they don't become too much of a distraction) is good practice.
[1]: https://news.ycombinator.com/item?id=46223959
[2]: https://www.lesswrong.com/w/calibration
xpe|2 months ago
If you dig in, there are substantial flaws in the project's analysis and framing, such as the definition of a prediction, assessing comments, data quality overall, and more. Go spelunking through the comments here and notice people asking about methodology and checking the results.
Social science research isn't easy; it requires training, effort, and patience. I would be very happy if Karpathy added a Big Flashing Red Sign to this effect. It would raise awareness and focus community attention on what I think are the hardest and most important aspects of this kind of project: methodology, rigor, criticism, feedback, and correction.
GaggiX|2 months ago
And scroll down to the bottom.
unknown|2 months ago
[deleted]
MBCook|2 months ago
According to the ratings for example, one person both had extremely racist ideas but also made a couple of accurate points about how some tech concepts would evolve.
bgwalter|2 months ago
The EU may give LLM surveillance an F at some point.
gen6acd60af|2 months ago
Your past thoughts have been dredged up and judged.
For each $TOPIC, you have been awarded a grade by GPT-5.1 Thinking.
Your grade is based on OpenAI's aligned worldview and what OpenAI's blob of weights considers Truth in 2025.
Did you think well, netizen?
Are you an Alpha or a Delta-Minus?
Where will the dragnet grading of your online history happen next?
HighGoldstein|2 months ago
unknown|2 months ago
[deleted]
popinman322|2 months ago
I'd expect de-biasing would deflate grades for well known users.
It might also be interesting to use a search-grounded model that provides citations for its grading claims. Gemini models have access to this via their API, for example.
ProllyInfamous|2 months ago
I [as a human] also do the same thing when observing others in IRL and forum interactions. Reputation matters™
----
A further question is whether a bespoke username could influence the bias of a particular comment (e.g. A username of something like HatesPython might influence the interpretation of that commenter's particular perception of the Python coding language, which might actually be expressing positivity — the username's irony lost to the AI?).
khafra|2 months ago
karmickoala|2 months ago
That said, I understand the concept and love what you did here. By exposing this to sunlight, the best disinfectant, I hope it will raise awareness and show how people and corporations should be careful about its usage. Now this tech is accessible to anyone, not only big techs, in a couple of hours.
It also shows how we should take with a grain of salt the result of any analysis of such scale by a LLM. Our private channels now and messages on software like Teams and Slack can be analyzed to hell by our AI overlords. I'm probably going to remove a lot of things from cloud drives just in case. Perhaps online discourse will deteriorate to more inane / LinkedIn style content.
Also, I like that your prompt itself has some purposefully leaked bias, which shows other risks—¹ for instance, "fsflover: F" may bias the LLM to grade handles related to free software and open source more harshly.
As a meta concept of this, I wonder how I'll be graded by our AI overlords in the future now that I have posted something dismissive of it.
¹Alt+0151
ComputerGuru|2 months ago
* ignore comments that do not speculate on something that was unknown or had not achieved consensus as of the date of yyyy-mm-dd
* at the same time, exclude speculations for which there still isn’t a definitive answer or consensus today
* ignore comments that speculate on minor details or are stating a preference/opinion on a subjective matter
* it is ok to generate an empty list of users for a thread if there are no comments meeting the speculation requirements laid out above
* etc
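A minimal sketch of how filters like those above might be wired into the grading prompt (the rule wording follows the comment; the function name and prompt layout are illustrative, not the project's actual prompt):

```python
# Hypothetical prompt assembly; none of this is the project's real prompt.
FILTER_RULES = [
    "ignore comments that do not speculate on something that was unknown "
    "or had not achieved consensus as of {date}",
    "exclude speculations for which there still isn't a definitive answer "
    "or consensus today",
    "ignore comments that speculate on minor details or only state a "
    "preference/opinion on a subjective matter",
    "it is OK to return an empty list of users for a thread if no comments "
    "meet the speculation requirements above",
]

def build_grading_prompt(thread_text: str, date: str) -> str:
    """Assemble a grading prompt with the speculation filters baked in."""
    rules = "\n".join(f"- {r.format(date=date)}" for r in FILTER_RULES)
    return (
        f"Grade the predictions in this HN thread from {date}.\n"
        f"Rules:\n{rules}\n\nThread:\n{thread_text}"
    )

prompt = build_grading_prompt("(thread text here)", "2015-12-02")
```

Pushing the filters into explicit, checkable rules also makes it easier to audit which comments the model silently skipped.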
losvedir|2 months ago
But it reminds me that I miss Manishearth's comments! What ever happened to him? I recall him being a big rust contributor. I'd think he'd be all over the place, with rust's adoption since then. I also liked tokenadult. interesting blast from the past.
xpe|2 months ago
janalsncm|2 months ago
unknown|2 months ago
[deleted]
alister|2 months ago
The HN article was "Brazil declares emergency after 2,400 babies are born with brain damage" but the page says "No analysis available".
I wonder why ChatGPT refused to analyze it?
bspammer|2 months ago
intheitmines|2 months ago
snowwrestler|2 months ago
unknown|2 months ago
[deleted]
lapcat|2 months ago
dang|2 months ago
Alternate metaphor: evil catnip - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
But yesterday's thread and this one are clearly exceptions—far above the median. https://news.ycombinator.com/item?id=46212180 was particularly incredible I think!
yellow_lead|2 months ago
CamperBob2|2 months ago
unknown|2 months ago
[deleted]
bretpiatt|2 months ago
The company has changed and it seems the mission has as well.
bspammer|2 months ago
> The original “non‑profit, open, patents shared” promise now reads almost like an alternate timeline. Today OpenAI is a capped‑profit entity with a massive corporate partner, closed frontier models, and an aggressive product roadmap.
swalsh|2 months ago
A non-trivial number of people get laid off, likely due to a financial crisis which is used as an excuse for companies to scale up use of AI. Good chance the financial crisis was partly caused by AI companies, which ironically makes AI cheaper as infra is bought up on the cheap (so there is a consolidation, but the bountiful infra keeps things cheap). That results in increased usage over a longer period of time, and even when the economy starts coming back the jobs numbers stay abysmal.
Politics are divided into 2 main groups, those who are employed, and those who are retired. The retired group is VERY large, and has a lot of power. They mostly care about entitlements. The employed-age people focus on AI, which is making the job market quite tough. There are 3 large political forces (but 2 parties). The Left, the Right, and the Tech Elite. The left and the right both hate AI, but the tech elite, though a minority, has outsized power in their tie-breaker role. The age distributions would surprise most. Most older people are now on the left, and most younger people are split by gender. The right focuses on limiting entitlements, and the left focuses on growing them by taxing the tech elite. The right maintains power by not threatening the tech elite.
Unlike the 20th century, America has a more focused global agenda. We're not policing everyone, just those core trading powers. We have not gone to war with China, China has not taken over Taiwan.
Physical robotics is becoming a pretty big thing, space travel is becoming cheaper. We have at least one robot on an asteroid mining it. The yield is trivial, but we all thought it was neat.
Energy is much much greener, and you wouldn't have guessed it... but it was the data centers that got us there. The Tech Elite needed it quickly, and used their political connections to cut red tape and build really quickly.
1121redblackgo|2 months ago
Karrot_Kream|2 months ago
jeffnappi|2 months ago
1. https://karpathy.ai/hncapsule/2015-12-08/index.html#article-...
abhinav_sk|2 months ago
smugma|2 months ago
I scrolled to the bottom of the hall of fame/shame and saw that entry #1505 had 3 F's and a D, with an average grade of D+ (1.46).
Grades no better than a D shouldn't average to a D+; I'd expect it to be closer to 0.25.
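The arithmetic backs this up on a standard 4.0 scale (an assumed mapping; the capsule site doesn't document its own):

```python
# Letter-grade point values on a standard 4.0 scale (assumed mapping;
# the site's actual scale isn't documented).
POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def average(grades):
    """Mean grade-point value for a list of letter grades."""
    return sum(POINTS[g] for g in grades) / len(grades)

print(average(["F", "F", "F", "D"]))  # 0.25 -- nowhere near a D+ (~1.3)
```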
dschnurr|2 months ago
sanex|2 months ago
unknown|2 months ago
[deleted]
DeathArrow|2 months ago
That's interesting. I wouldn't have thought that a decent generic forward future predictor would be possible.
GaggiX|2 months ago
They were right, Duolingo.
mtlynch|2 months ago
sigmar|2 months ago
Rychard|2 months ago
xpe|2 months ago
Forecasting and the meta-analysis of forecasters is fairly well studied. [1] is a good place to start.
[1]: https://en.wikipedia.org/wiki/Superforecaster
neilv|2 months ago
This seems to be the result of the exercise? No evaluation?
My concern is that, even if the exercise is only an amusing curiosity, many people will take the results more seriously than they should, and be inspired to apply the same methods to products and initiatives that adversely affect people's lives in real ways.
cootsnuck|2 months ago
That will most definitely happen. We have already known for a while that algorithmic methods have been applied "to products and initiatives that adversely affect people's lives in real ways": https://www.scientificamerican.com/blog/roots-of-unity/revie...
I guess the question is if LLMs for some reason will reinvigorate public sentiment / pressure for governing bodies to sincerely take up the ongoing responsibility of trying to lessen the unique harms that can be amplified by reckless implementation of algorithms.
SequoiaHope|2 months ago
unknown|2 months ago
[deleted]
godelski|2 months ago
We are doing something very *very* wrong if we are operating under the belief that this future is unavoidable. That future is simply unacceptable.
jacquesm|2 months ago
To properly execute this idea rather than to just toss it off without putting in the work to make it valuable is exactly what irritates me about a lot of AI work. You can be 900 times as productive at producing mental popcorn, but if there was value to be had here we're not getting it, just a whiff of it. Sure, fun project. But I don't feel particularly judged here. The funniest bit is the judgment on things that clearly could not yet have come to pass (for instance because there is an exact date mentioned that we have not yet reached). QA could be better.
acyou|2 months ago
If you believe in God of a certain kind, you don't think that being judged for your sins is unacceptable or even good or bad in itself, you consider it inevitable. We have already talked it over for 2000 years, people like the idea.
nomel|2 months ago
I've found the opposite, since these models still fail pretty wildly at nuance. I think it's a conceptual "needle in the haystack" sort of problem.
A good test is to find some thread where there's a disagreement and have it try to analyze the discussion. It will usually strongly misrepresent what was being said, by each side, and strongly align with one user, missing the actual divide that's causing the disagreement (a needle).
gowld|2 months ago
anshulbhide|2 months ago
NooneAtAll3|2 months ago
reading from the end isn't really useful, y'know :)
dw_arthur|2 months ago
JetSetWilly|2 months ago
rkuykendall-com|2 months ago
jeffbee|2 months ago
bediger4000|2 months ago
Shades of Roko's Basilisk!
ambicapter|2 months ago
Bjartr|2 months ago
apparent|2 months ago
Now let's make a Chrome extension that subtly highlights these users' comments when browsing HN.
bbcisking|2 months ago
exasperaited|2 months ago
> Everything we do today might be scrutinized in great detail in the future because it will be "free".
s/"free"/stolen/
The bit about college courses for future prediction was just silly, I'm afraid: reminds me of how Conan Doyle has Sherlock not knowing Earth revolves around the Sun. Almost all serious study concerns itself with predicting, modelling and influence over the future behaviour of some system; the problem is only that people don't fucking listen to the predictions of experts. They aren't going to value refined, academic general-purpose futurology any more than they have in the past; it's not even a new area of study.
pnt12|2 months ago
It's great that this was produced in an hour for $60. This is amazing for creating small utilities, exploring your curiosity, etc.
But the site is also quite confusing and messy. OK for a vibe-coded experiment, sure, but it wouldn't be for a final product. And I fear we're gonna see more and more of this: big companies downsizing their tech departments and embracing vibe coding. By analogy with inflation, shrinkflation, and skimpflation/enshittification, will we soon adopt some word for this? AIflation? LLMflation?
And how will this comment score in a couple of years? :)
slg|2 months ago
This is a perfect example of the power and problems with LLMs.
I took the narcissistic approach of searching for myself. Here's a grade of one of my comments[1]:
>slg: B- (accurate characterization of PH’s “networking & facade” feel, but implicitly underestimates how long that model can persist)
And here's the actual comment I made[2]:
>And maybe it is the cynical contrarian in me, but I think the "real world" aspect of Product Hunt it what turned me off of the site before these issues even came to the forefront. It always seemed like an echo chamber were everyone was putting up a facade. Users seemed more concerned with the people behind products and networking with them than actually offering opinions of what was posted.
>I find the more internet-like communities more natural. Sure, the top comment on a Show HN is often a critique. However I find that more interesting than the usual "Wow, another great product from John Developer. Signing up now." or the "Wow, great product. Here is why you should use the competing product that I work on." that you usually see on Product Hunt.
I did not say nor imply anything about "how long that model can persist", I just said I personally don't like using the site. It's a total hallucination to claim I was implying doom for "that model" and you would only know that if you actually took the time to dig into the details of what was actually said, but the summary seems plausible enough that most people never would.
The LLM processed and analyzed a huge amount of data in a way that no human could, but the single in-depth look I took at that analysis was somewhere between misleading and flat out wrong. As I said, a perfect example of what LLMs do.
And yes, I do recognize the funny coincidence that I'm now doing the exact thing I described as the typical HN comment a decade ago. I guess there is a reason old me said "I find that more interesting".
[1] - https://karpathy.ai/hncapsule/2015-12-18/index.html#article-...
[2] - https://news.ycombinator.com/item?id=10761980
npunt|2 months ago
I'm not so sure; that may not have been what you meant, but that doesn't mean it's not what others read into it. The broader context is HN is a startup forum, and one of the most common discussion patterns is 'I don't like it', which is often a stand-in for 'I don't think it's viable as-is'. Startups are default dead, after all.
With that context, if someone were to read your comment and be asked 'does this person think the product's model is viable in the long run' I think a lot of people would respond 'no'.
0xWTF|2 months ago
Compared to what happens next? Does tptacek's commentary become a market signal equivalent to the Fed Chair or the BLS labor and inflation reports?
npunt|2 months ago
For instance, one of the unfortunate aspects of social media, which has become so unsustainable and destructive to modern society, is how it exposes us to far more people and hot takes than we have the ability to adequately judge. We're overwhelmed. This has led to conversation being dominated by really shitty takes and really shitty people, who rarely if ever suffer reputational consequences.
If we build our mediums of discourse with more reputational awareness using approaches like this, we can better explore the frontier of sustainable positive-sum conversation at scale.
Implementation-wise, the key question is how do we grade the grader and ensure it is predictable and accurate?
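One way to make "grading the grader" concrete (a minimal sketch; the function names and grade data here are invented for illustration, not part of any comment above): periodically compare the grader's letter grades against grades assigned later from observed outcomes, and correct the raw agreement for chance so a grader that just guesses the most common grade doesn't look accurate.

```python
# Hypothetical "grade the grader" check: how well do an LLM grader's
# letter grades line up with outcome-based grades assigned later?
from collections import Counter

def agreement(llm_grades, outcome_grades):
    """Fraction of items where the LLM grade matches the outcome-based grade."""
    matches = sum(1 for a, b in zip(llm_grades, outcome_grades) if a == b)
    return matches / len(llm_grades)

def cohen_kappa(llm_grades, outcome_grades):
    """Chance-corrected agreement; near 0 means the grader is no better
    than guessing, near 1 means it tracks the outcomes closely."""
    n = len(llm_grades)
    po = agreement(llm_grades, outcome_grades)            # observed agreement
    ca, cb = Counter(llm_grades), Counter(outcome_grades)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)          # agreement by chance
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Toy example: the grader matches reality on 3 of 4 predictions.
llm = ["A", "B", "B", "C"]
outcome = ["A", "B", "C", "C"]
print(agreement(llm, outcome))   # 0.75
print(cohen_kappa(llm, outcome))
```

Tracking a score like this over time would at least make the grader's own reliability visible, which is a precondition for trusting any per-user reputation it produces.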
Arodex|2 months ago
https://news.ycombinator.com/item?id=46222523
An LLM can't reliably grade human text. It doesn't understand it.
gaigalas|2 months ago
It does seem better than just upvotes and downvotes though.
collinmcnulty|2 months ago
I cannot believe this is just put out there without any examination of "maybe we shouldn't help this happen". This is complete moral abdication. And to be clear, being "good" is no defense. Being good often means being unaligned with the powerful, so being good is often the very thing that puts you in danger.
doctoboggan|2 months ago
I would read his "Best to be good." as a warning or reminder that everything you do or say online will be collected and analyzed by an "intelligence". You can't count on hiding amongst the mass of online noise. Imagine if someone were to collect everything you've written or uploaded to the internet and compiled it into a long document. What sort of story would that tell about who you are? What would a clever person (or LLM) be able to do with that document?
If you have any ideas on how to stop everyone from building the torment nexus, I am willing to listen.
consumer451|2 months ago
Anyway, back to work trying to make my millions using Opus and such.
[0] https://old.reddit.com/r/funny/comments/1pj5bg9/al_companies...
thatguy0900|2 months ago
cootsnuck|2 months ago
We can't start clutching our pearls now as if programmatic mass surveillance hasn't been firing on all cylinders for over 20 years.
Don't get me wrong, we should absolutely care about this, everyone should. I'm just saying any vague gesture at imminent privacy doom thanks to LLMs is liable to do the big favor of inadvertently sanitizing the history of prior (and ongoing) egregious privacy offenders.
I'm just suggesting more "Yes and" and less "pearl clutching" is all.
Teever|2 months ago
Governments around the world keep profiles on people, with spiders that quietly amass the data that continuously updates those profiles.
It's just a matter of time before hardware improves and we see another Holocaust-scale purge facilitated by robots.
Surveillance capitalism won.
siliconc0w|2 months ago
* Nvidia GPUs will see heavy competition, with most chat-like use cases switching to cheaper models and inference-specific silicon, but they will still be used at the high end for critical applications and frontier science
* Most Software and UIs will be primarily AI-generated. There will be no 'App Stores' as we know them.
* ICE cars will become niche and will have been largely replaced by EVs; solar will be widely deployed and will be the dominant source of power
* Climate change will be widely recognized due to escalating consequences, and there will be many mitigation efforts (e.g., climate engineering, climate-resistant crops)
artur44|2 months ago