
LLM-based sentiment analysis of Hacker News posts between Jan 2020 and June 2023

126 points | mochomocha | 1 year ago | outerbounds.com

72 comments


tantalor|1 year ago

Is this just using an LLM to be cool? How does a pure LLM with a basic "On a scale of 0-10 ..." prompt stack up against traditional, battle-tested sentiment analysis tools?

Gemini suggests NLTK and spaCy

https://www.nltk.org/

https://spacy.io/
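
For reference, the traditional baseline is only a few lines; a minimal sketch using NLTK's VADER analyzer (illustrative, not from the thread or the article):

    # A sketch of the traditional, rule-based baseline: NLTK's VADER.
    # No LLM involved; scores come from a hand-built lexicon.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-time lexicon download
    sia = SentimentIntensityAnalyzer()

    print(sia.polarity_scores("I love this battle-tested tool."))
    # -> {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}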

antihipocrat|1 year ago

I'm wondering how their LLM parsing 250 million words in 9 hours compares with the performance of traditional sentiment analysis.

Also, many existing sentiment analysis tools have a lot of research behind them that can be referenced when interpreting the results (known confounds etc.). I don't think there is an equivalent for the LLM approach yet.

anonylizard|1 year ago

Because LLMs WILL dominate all NLP use cases, whether you like it or not.

It's like the Linux of operating systems. Sure, you can hand-write some custom OS more specialized for a purpose. But it's much easier to just use Linux, which everyone understands at a basic level and which is extremely robust, and modify it slightly for the end goal.

And saying "traditional sentiment analysis" tools are "battle tested" is laughable. LLMs in the past year alone probably have 1000x the cumulative usage of all sentiment analysis tools in history.

LLMs get $100 billion+ each year in research, improvements, engineering, and optimisations.

LLMs keep rapidly improving year to year in capabilities. Sonnet 3.5 already obliterates the original GPT-4 in every aspect.

LLMs keep getting cheaper year to year. Gemini Flash is like 100x cheaper than the original GPT-3.5.

You can onboard anyone who can write Python to start using LLMs for language analysis in a day, versus weeks for these traditional tools.

Nearly all NLP tasks will be standardised to use LLMs as the baseline default tool. Sure, there'll be some short-term degradations in specific aspects, but there's no stopping the tide.

By the way, traditional ML-based translation is also pretty much dead and replaced by LLMs. I've been seeing an explosion in fan-translations done by, say, Sonnet 3.5; the improvement in fluency and accuracy is radical and extreme, and I often don't even notice the AI translation anymore.

aussieguy1234|1 year ago

I can see a similarity here with comparing Java/JavaScript/any other modern, more productive language to C. Yes, both can express more or less the same program, but you'll get the same result with less effort and more quickly in the modern languages. And yes, modern languages will always be slower and heavier on resources than C.

visarga|1 year ago

I did a similar kind of process for my own chat logs. I have about 11M tokens worth of logs, and it took 2 days to crunch all of them with ollama and LLaMA 3.1 8B on my MacBook. It's slow, but free.

I generated a title, summary, keywords, and hierarchical topics up to 3 levels above the original text. My plan for now is to put them in a vector search engine which, incidentally, was made with Sonnet 3.5 with very little iteration. I want to play around to see how I can organize my ideas with LLMs and make something useful from all that text.

I really don't know what I will discover. One small insight I already found is that summarization works really well: you can use summaries instead of full texts to prime Claude, and it works better than expected. Unlimited context? Maybe.

Another direction of research is to create a nice taxonomy. There are thousands of topics, so it's a pretty difficult task, but there must be a way using clustering and LLMs. That is why I generated topic, parent-topic, gp-topic, and ggp-topic from all snippets. I would probably manually edit the top 2 levels of the taxonomy to give it the right focus.

I'm also integrating with my HN and reddit feeds. X is too stingy with the API. Maybe Pocket and my local downloads folder too, since I save/bookmark stuff I like. I could also include all the papers I am reading in the corpus. It could synthesize a ranked feed aligned to my own interests.
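
A minimal sketch of what one pass of such a local pipeline can look like, assuming the ollama Python client; the prompt and output fields here are hypothetical, not the commenter's actual setup:

    # A sketch of a local enrichment pass with ollama + Llama 3.1 8B.
    # Prompt wording and field names are illustrative assumptions.
    import json
    import ollama

    PROMPT = (
        "Reply with JSON containing: title, summary, keywords, "
        "topic, parent_topic, gp_topic.\n\nTEXT:\n{text}"
    )

    def enrich(snippet):
        resp = ollama.chat(
            model="llama3.1:8b",
            messages=[{"role": "user", "content": PROMPT.format(text=snippet)}],
            format="json",  # ask the model for parseable JSON
        )
        return json.loads(resp["message"]["content"])

    print(enrich("Notes on vector search engines and summarization tricks."))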

ma9o|1 year ago

I'm working on something tangentially related [1] but by sourcing my Google search history data. It's surprising how LLaMA 3.1 8B is pulling most of the weight in my case too.

[1] https://github.com/enclaveid/enclaveid

mithametacs|1 year ago

LLMs are shit at generating content, but summarization works really well.

I’d like to use your project

huem0n|1 year ago

> NFL (915 posts)

> Football (206 posts)

Either Hacker News really likes the National Forensic League, or these LLM categories are a bit dubious.

Also hmmm:

> American football (7 posts)

> American_football (6 posts)

winddude|1 year ago

It's this one: "these LLM categories are a bit dubious". Specialized models still outperform LLMs on niche tasks like classification and sentiment.

EarthLaunch|1 year ago

> Tokens Don't Lie

> But how do people feel about these topics

I find it notable that tokens don't necessarily express people's feelings. Put another way, tokens aren't how people feel, they're how they write.

Samstave mentioned in this thread that Twitter is a 'global sentiment engine'. I'm sure that's literally true. Sentiment measurement is only accurate to the degree that people are expressing their real feelings via tokens. I can imagine various psychological and political reasons for a discrepancy.

If you did sentiment analysis of publicly known writings of North Korean administrators, would that represent their feelings?

I think the interplay with free speech is interesting here: In a setting where people feel socially and legally safe to express their true opinion, sentiment analysis will be more accurate.

jmward01|1 year ago

I wonder if the dip is more about Llama 3 70B's training and data than a change in sentiment. The data cut-off was Dec 2023 for the 70B model. That looks to coincide with the reversal of the dip.

vtuulos|1 year ago

That's an interesting hypothesis, but the words we use to express agreement and disagreement haven't changed much.

We don't try to retrieve articles/topics from the model, which would be affected by the cutoff; we just ask it to analyze the sentiment or summarize the content provided in the prompt.

samstave|1 year ago

>>Use the tool below to explore various topics and the sentiments they evoke.

This is a cool phrase.

It is personally important, as when I was asked in a panel interview @ --, they asked: "What do you think Twitter is?"

My response was: "You're a global sentiment engine."

(There are a lot of conversations I'd love to have with the HN community with respect to our shared experiences, and the weird history flipped-bits that exist in the minds of those who experienced them...

like threads of how Linux came to be, or how XML was born through things I touched in a Forrest Gump way, and how there are so many stories from so many.)

MBCook|1 year ago

Speaking of Twitter, it would be very neat to be able to see a graph of sentiment over time if you select a term.

You could watch Twitter go from a niche little new thing, to popular, to "twitter is trash", to increasingly divisive, to the purchase and rename to X, to today.

throwup238|1 year ago

> My response was: "You're a global sentiment engine."

More like a sentiment engine for bot operators.

SubiculumCode|1 year ago

I wanted to do an analysis of hacker news on another topic, but over a longer timespan.

I started to look into it, but in the little time I had to devote to the idea, I read that the Algolia API lets you look over a longer period, but that it is relatively costly.

I just want to look for all story titles from the beginning of time that match one of several simple search terms, and return submission date and title for an analysis I'd conduct in R.

Am I overthinking it? Could a simple Python script without an API key do it?
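
For what it's worth, the public Algolia HN Search API needs no key at all. A minimal sketch of the usual workaround for its pagination cap, walking backwards through time with a `created_at_i` cursor (untested against rate limits):

    # A sketch: fetch all story titles matching a query from the free,
    # keyless Algolia HN API, paging backwards in time via created_at_i.
    import time
    import requests

    API = "https://hn.algolia.com/api/v1/search_by_date"

    def fetch_titles(query):
        cursor = int(time.time())  # start now, walk backwards
        rows = []
        while True:
            hits = requests.get(API, params={
                "query": query,
                "tags": "story",
                "hitsPerPage": 1000,
                "numericFilters": f"created_at_i<{cursor}",
            }).json()["hits"]
            if not hits:
                return rows  # reached the beginning of time
            rows += [(h["created_at"], h["title"]) for h in hits]
            cursor = hits[-1]["created_at_i"]  # oldest hit so far

    rows = fetch_titles("sentiment analysis")
    # (date, title) pairs; dump to CSV from here for analysis in R.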

lz400|1 year ago

It's funny filtering by crypto and seeing the (sometimes hazy) division between cryptography terms (we love this) and cryptocurrency terms (we hate it).

chazeon|1 year ago

I wonder if prompting an LLM for sentiment is enough, so we don't need to do any fine-tuning anymore?

t-writescode|1 year ago

I think you raise a reasonable question.

I also think it *could* be less of a problem than you might think. If we treat the scale as arbitrary (which I think is a safe thing to do), then movement along the scale could be sufficient to ascertain *something*.

synicalx|1 year ago

> Hate : Torture

Great work folks, glad we can all agree on that one.

Interesting that they used an LLM for this. I mean it makes sense and the data seems to pass the pub test but I, in my ignorance, would not have assumed that a language model would be well suited for number crunching.

silisili|1 year ago

Seems we mostly agree on hating Atlassian, too, so it's working as intended.

Sleaker|1 year ago

Why is everything only plotted between 4 and 8, if the least-liked topic should score 0 and the most-liked 9? Also, 4.5 is the midpoint, but 4 is displayed as bright red while 6 is a muted gray-blue. Why? This makes no sense except to be psychologically disingenuous.

And no 5s? What is even going on in that LLM?

sebastiennight|1 year ago

> "It's a scale of 1 to 13, but it goes up and back down. Eight is the highest score on the scale." - Jason Mendoza

It's nice to see this scale used outside of The Good Place.

teo_zero|1 year ago

The scale makes no sense.

The sentiment of forum posts is not an absolute value; you can't compare it against, for example, conversations in a pub, or talks between friends, etc.

I think they should have normalized the numbers around the average, so as to have a relative measurement of the various topics.
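
A tiny sketch of the suggested normalization: mean-centering the per-topic scores (the values here are hypothetical) so each reads relative to the HN average rather than as an absolute 0-9 number:

    # Mean-centering: sentiment relative to the average topic.
    # Scores below are hypothetical, not from the article.
    scores = {"python": 6.8, "tesla": 4.1, "gnome": 4.9}
    mean = sum(scores.values()) / len(scores)
    relative = {t: round(s - mean, 2) for t, s in scores.items()}
    print(relative)  # positive = better-liked than average, negative = worse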

Mathnerd314|1 year ago

> Reply only the tags

LLMs are really sensitive to bad or even slightly ambiguous grammar. I wonder if the numbers would differ significantly with "Reply only with the tags, in the following format".
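
One cheap way to check this kind of sensitivity is to score the same comments under both phrasings and count disagreements; a sketch assuming a local ollama model (the two prompt variants are from the comment above, the rest is illustrative):

    # A sketch: run both prompt phrasings over the same comments and
    # count how often the outputs diverge. Model choice is an assumption.
    import ollama

    VARIANTS = [
        "Reply only the tags",
        "Reply only with the tags, in the following format",
    ]

    def tag(comment, instruction):
        resp = ollama.chat(model="llama3.1:8b", messages=[{
            "role": "user",
            "content": f"{instruction}: TOPIC, SENTIMENT 0-9.\n\n{comment}",
        }])
        return resp["message"]["content"].strip()

    comments = ["Systemd fixed my boot times.", "Gnome removed my favorite feature."]
    diffs = sum(tag(c, VARIANTS[0]) != tag(c, VARIANTS[1]) for c in comments)
    print(f"{diffs}/{len(comments)} comments scored differently")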

anonu|1 year ago

At least Republicans and Democrats share the same low sentiment score of 4.

qwerpy|1 year ago

Apparently your comment is divisive though!

savin-goyal|1 year ago

what's up with the title flips from

> 350M Tokens Don't Lie: Love And Hate In Hacker News, to

> LLM-based sentiment analysis of Hacker News posts, to

> LLM-based sentiment analysis of Hacker News posts between Jan 2020 and June 2023

t-writescode|1 year ago

A/B testing? Possibly trading accuracy, going from high-clickbait, low-signal to low-clickbait, high-signal?

elashri|1 year ago

> It is worth clarifying though that Hacker News does not hate International Students, but the posts related to them tend to be overwhelmingly negative, reflecting the community’s sympathy for the challenges faced by those studying abroad.

I was horrified when I read that international students were one of the top items on the hate list. Although I had seen a couple of comments attributing their cities' housing crises to international students, I didn't think that sentiment was widely supported.

vtuulos|1 year ago

here's how the model ranks the discussion on this page after 40 comments:

SENTIMENT 6

:D

anonu|1 year ago

Great analysis. How is divisiveness actually calculated?
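
The article doesn't spell this out. A common definition, offered here purely as a guess, treats divisiveness as the spread of per-post sentiment within a topic:

    # A guess at a divisiveness metric, NOT the article's actual method:
    # standard deviation of per-post scores, rescaled to 0-1.
    import statistics

    def divisiveness(post_scores, scale_max=9):
        # pstdev maxes out at scale_max/2 (half 0s, half 9s), so divide.
        return statistics.pstdev(post_scores) / (scale_max / 2)

    print(divisiveness([0, 9, 0, 9]))  # maximally split -> 1.0
    print(divisiveness([5, 5, 5, 5]))  # unanimous -> 0.0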

ysofunny|1 year ago

the most divisive topic seems to be "gnome" with 0.82 on the divisiveness scale

that's really "hacker", a worthy first place

thr0w|1 year ago

I don't know about this analysis and its conclusions. I'll just use this as a jumping-off point to selfishly spout my own human observations.

For context, I'm someone who uses HN to search for topics I'm interested in, rather than something like Google or Reddit.

- For anything SF community-related, most hits are from 10+ years ago. Lots of "hey we have a space in soma, any local startups want to hang and drink beers?" or "we have an empty desk in a space in the mission, any hackers want to grab it for free?" - all from around 2012 or prior. Nothing like that seems to happen anymore.

- Starting from around 2016, a heavy anti-technology sentiment appears. Cloud, crypto, AI - all are nonsense propagated by VC types and overzealous engineers.

- Similarly, any thread involving money/labor invariably has an anti-capitalist and/or "unions would solve everything" tangent.

Would be interested to hear if others have observed similar.

Karrot_Kream|1 year ago

Yeah that’s roughly been my read too. I think the audience of the site changed. The user base has grown significantly. The site has gone from being about hacking (“hey here’s an empty desk”) to the culture of hackers at large (“tech was a mistake when it got invaded by VC hucksters”.)

TFA’s sentiment decrease tracks very closely with the huge uptick in user creation that started in 2022. HN isn’t really a tech site anymore, it’s about vibes. That makes sense given that in 2024 there’s a million places online talking about tech so HN only has its culture to distinguish itself. This wasn’t the case in 2008. The vibes here, along with the older demographics of the site, are increasingly nostalgic and cynical.

It'll probably all go the same way as Slashdot, which went through the same cycle (replace "VC huckster" with "Microsoft" and "surveillance capitalism" with "three letter agencies"), until it too gets replaced by a site/community with energetic younger users creating new things.

teleforce|1 year ago

Systemd is now in the HN Love section; that's HN news in itself.

yamumsafknho|1 year ago

[deleted]

lagniappe|1 year ago

Listen, you just had to be there. Who wants a file picker with no thumbnails? That's no way to live.

deepfriedbits|1 year ago

I got a fairly good chuckle seeing Gnome as the most divisive topic, too.

MBCook|1 year ago

And Tesla is the most disliked with a real number of posts.

In the other direction, math is the most liked! And if you go a little further, Python is the clear winner for languages.