top | item 47078007


jlawer | 11 days ago

For Grok’s sake, you’d hope this is data that was already public — something buried deep that it has merely surfaced.

It’s a shame transparency is so poor here. A simple grep of the training data would likely give a clear explanation of where this came from.


zamadatix | 10 days ago

Grokipedia (an online encyclopedia run by xAI and steered by Grok) lists a few sources for it directly, even in old copies of the entry https://web.archive.org/web/20251225113339/https://grokipedi...

Grok shouldn't be serving this kind of information IMO, and it's yet another entry in a long series of xAI just not caring about real problems. But the even bigger crime is that the services she is paying thousands to appear to have done nothing other than give a false sense of security while happily taking her money. A time-bounded Google search, with pages verified via the Wayback Machine, confirms this information has been all over social media and other sites constantly for the last decade.
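The time-bounded check described above can be sketched against the Internet Archive's public CDX API, which lists every capture of a URL within a date range. The page URL and date range below are hypothetical placeholders, not the actual pages in question:

```python
# Minimal sketch of a time-bounded Wayback Machine check using the
# Internet Archive's CDX API. The target URL and dates are placeholders.
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query(url: str, start: str, end: str, limit: int = 50) -> str:
    """Build a CDX query listing captures of `url` between two
    YYYYMMDD timestamps, restricted to successful (HTTP 200) snapshots."""
    params = {
        "url": url,
        "from": start,              # inclusive start timestamp (YYYYMMDD)
        "to": end,                  # inclusive end timestamp
        "output": "json",           # JSON rows instead of raw CDX text
        "filter": "statuscode:200", # only successful captures
        "limit": limit,
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

# Fetching this URL (e.g. with urllib.request) returns a header row plus
# one row per capture; any capture dated before a given event shows the
# page was already public at that time.
print(cdx_query("example.com/some-page", "20150101", "20241231"))
```

This only builds the query string; actually fetching it requires network access and is left to the reader.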

If I were cynical I'd say this was just a publicity stunt, but the truth is probably really just sad all around: lack of ability to keep such things private, leechers making people think you can just pay and information disappears from the internet, Grok amplifying the problem by being run by people who don't really care about what it does...

lazide | 10 days ago

LLMs are fundamentally not as deterministic or predictable as people think they are, and that shows up pretty clearly in situations like this. They aren’t even as deterministic or predictable as a human — and humans aren’t particularly deterministic or predictable to begin with.

Grok, like Tesla FSD, is also kind of half-assed, so the problem shows up even more prominently on that front.