Ratelman
|
4 months ago
|
on: Study finds growing social circles may fuel polarization
Reflecting on my own experience: frequency of contact (if I see them once a year, I can't really count them as close friends); how involved they are in my life - are they people I turn to when I'm facing a problem, and do they turn to me when facing their own problems?; whether we have frequent deep conversations - not just surface-level discussion of the weather, sports, etc., but stuff that matters. Quantifying this: length of friendship (# of years), frequency of contact (annually, monthly, weekly, etc.), level of trust (low, medium, high - the can-I-trust-my-kids-with-them kind of trust), level of involvement (low, medium, high - what things do I feel comfortable sharing with them? suppose this is also level of trust?)
Ratelman
|
4 months ago
|
on: Study finds growing social circles may fuel polarization
What makes you consider them close (aside from length of friendship)?
Ratelman
|
4 months ago
|
on: Study finds growing social circles may fuel polarization
Boils down to the basics of proper science - how does one measure/quantify close friends?
Ratelman
|
5 months ago
|
on: Self-supervised learning, JEPA, world models, and the future of AI [video]
Yeah, he was quite vocal in his opinion that they would plateau earlier than they did, and that little value would be derived from them because they're just stochastic parrots. I agree with him that they're probably not sufficient for AGI but, at least in my experience, they're adding a lot of value and they keep performing better across a range of tasks he wasn't expecting them to handle.
Ratelman
|
6 months ago
|
on: Defeating Nondeterminism in LLM Inference
Was my thinking exactly - but semantic equivalence is also only relevant when the output needs to be factual, not necessarily for ALL outputs (if we're aiming for LLMs to present as "human", or for interactions with LLMs to be naturally conversational...). This excludes the world where LLMs act as agents, where you would of course always like the LLM to be factual and thus deterministic.
Ratelman
|
6 months ago
|
on: OpenAI Progress
In a few years we've gone from gibberish (less poetic maybe, less polished and surprising, but nonetheless gibberish) to legit conversational and, in my own opinion, well-rounded answers. This is a great example of hard-core engineering - no matter what your opinion of the organisation and saltman is, they have built something amazing. I do hope they continue with their improvements; it's honestly the most useful tool in my arsenal since stackoverflow.
Ratelman
|
7 months ago
|
on: Stanford to continue legacy admissions and withdraw from Cal Grants
Which statistics in which study? Given the current system, any sampling from colleges/universities would be cherry-picking vs. the general populace (unless you also sample the general population with similar constraints to ensure a like-for-like comparison), so it can't really be trusted.
Ratelman
|
7 months ago
|
on: OpenAI's new GPT-5 models announced early by GitHub
Interesting/unfortunate/expected that GPT-5 isn't touted as AGI or some other outlandish claim. It's just improved reasoning etc. I know it's not the actual announcement and it's just a single page accidentally released, but it at least seems more grounded...? Have to wait and see what the actual announcement entails.
Ratelman
|
7 months ago
|
on: O3 and Grok 4 Accidentally Vindicated Neurosymbolic AI
Extensive post on how neurosymbolic AI - the marriage between the connectionist and symbolic approaches to AI - is potentially finally vindicated. Opinion piece by Gary Marcus.
Ratelman
|
8 months ago
|
on: At Least 13 People Died by Suicide Amid U.K. Post Office Scandal, Report Says
In America maybe; in South Africa it's quite the opposite, considering the government provides a lot more support for poor non-white folks than for white folks (specifically based on race).
Ratelman
|
10 months ago
|
on: Offline vs. online ML pipelines
Might be missing something but how is this on the front page of hackernews? It feels more like an ad than anything else.
Ratelman
|
1 year ago
The closest article I could find to "Smith, J. et al. (2023). "Advances in Probabilistic Forecasting." Journal of Forecasting" was from the April issue of Journal of Forecasting, but that article was titled "Advances in forecasting: An introduction in light of the debate on inflation forecasting", and no Smith, J. was involved in writing it. Where do the numbers for improvements in the various industries come from? Something feels off with this readme.
Ratelman
|
1 year ago
|
on: Titans: Learning to Memorize at Test Time
So Minimax just "open-sourced" (I put it in quotes because they have a custom license for its use and I've not read through that) a model with a context length of 4 million tokens, and it scored 100% on the needle-in-a-haystack problem. It uses lightning attention - so still attention, just a variation? So is this potentially not as groundbreaking as the publishers of the paper hoped, or am I missing something fundamental here? Can this scale better? Does it train more efficiently? The test-time inference is amazing - is that what sets this apart, and not necessarily the long-context capability? Will it hallucinate a lot less because it stores long-term memory more efficiently, and thus won't make up facts but rather use what it has remembered in context?
Ratelman
|
1 year ago
|
on: Dog Aging Project
FYI: limited to the US only - they refer you to their sister project:
https://darwinsark.org/ if you'd still like to contribute in a similar fashion but reside outside of USA.
Ratelman
|
1 year ago
|
on: Australia: Kids under 16 to be banned from social media after Senate passes laws
I'd say "human rights violation" is a bit of a stretch - the negative impact of social media use on adolescents' psychological well-being is well documented - so possibly even the exact opposite.
Ratelman
|
1 year ago
|
on: Australia: Kids under 16 to be banned from social media after Senate passes laws
What do you mean by MAPs? Sorry, haven't seen the acronym before.
Ratelman
|
1 year ago
|
on: Show HN: Feels Like Paper
Ratelman
|
1 year ago
|
on: OpenAI hits pause on video model Sora after artists leak access in protest
Link to the post:
https://huggingface.co/spaces/PR-Puppets/PR-Puppet-Sora
If you read through it, they clearly state:
"We are not against the use of AI technology as a tool for the arts (if we were, we probably wouldn't have been invited to this program). What we don't agree with is how this artist program has been rolled out and how the tool is shaping up ahead of a possible public release. We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts."
Ratelman
|
1 year ago
|
on: OpenAI hits pause on video model Sora after artists leak access in protest
Meh, not entirely sure what would work better. Having read through the huggingface post a few times now, I suppose it's less of an emotional reaction and more an actual protest against abusive practices.
Ratelman
|
1 year ago
|
on: OpenAI hits pause on video model Sora after artists leak access in protest
True, true - didn't think that one through