
Tim Bray on Grokipedia

177 points | Bogdanp | 4 months ago | tbray.org

215 comments


hocuspocus|4 months ago

I checked a topic I care about, and that I have personally researched because the publicly available information is pretty bad.

The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) that don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It cites Forbes articles parroting pretty insane and unsubstantiated claims; I thought mainstream media was not to be trusted?

In the end it's longer, written in a weird style, and doesn't really bring any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.

frm88|4 months ago

I wrote about an entry on Sri Lanka a couple of days ago [0] where I checked grok's source reference (factsanddetails.com) against Scamdetector, which gave it a score of 38.4 on a 100-point trustworthiness scale. Today that score is 12.2. Every entry in Grokipedia that covers topics vaguely Asian has a reference to factsanddetails.com. You can check for yourself: just search for it on Grokipedia - it'll come up with 601 pages' worth of results.

Today the page I linked in my HN post is completely gone.

But worse: yesterday tumblr user sophieinwonderland found that they were quoted as a source on Multiplicity [1]. Tumblr is definitely not a reliable source and I don't mean to throw shade on sophieinwonderland who might very well be an expert on that topic.

[0] https://news.ycombinator.com/item?id=45743033

[1] https://www.tumblr.com/sophieinwonderland/798920803075883008...

jameslk|4 months ago

It was just launched? I remember when Wikipedia was pretty useless early on. The concept of using an LLM to take a ton of information and distill it down into encyclopedia form seems promising with iteration and refinement. If they add in an editor step to clean things up, that would likely help a lot (not sure if maybe they already do this)

ef2k|4 months ago

Maybe it's just me, but reading through LLM generated prose becomes a drag very quickly. The em dashes sprinkled everywhere, the "it's not this, it's that" style of writing. I even tried listening to it and it's still exhausting. Maybe it's the ubiquity of it nowadays that is making me jaded, but I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.

tim333|4 months ago

I find the Grokipedia writing especially a drag. I don't think it's em dashes and similar so much as the ideas not being clear. In good writing the writer normally has a clear idea in mind and is communicating it but the Grokipedia writing is kind of a waffley mess. I guess maybe because LLMs don't have much of an idea in mind so much as stringing words together.

andrewflnr|4 months ago

> I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.

Nah dude, what you're describing from LLMs is terrible writing. Just because it has good grammar and punctuation doesn't make it good, for exactly the reasons you listed. Good writing pulls you through.

ajross|4 months ago

I completely agree. There's an "obsequious verbosity" to these things, like they're trying to convince you that they're not bullshitting. But that seems like a tuning issue (you can obviously get an LLM to emit prose in any style you want), and my guess is that this result has been extensively A/B tested to be more comforting or something.

One of the skills of working with the form, which I'm still developing, is the ability to frame follow-on questions in a specific enough way to prevent the BS engine from engaging. Sometimes I find myself asking it questions using jargon I 100% know is wrong just because the answer will tell me what the phrasing it wants to hear is.

jhanschoo|4 months ago

I'm fine with Gemini's tone as I'm reading for information and argumentation, and Gemini's prose is quite clear. I prefer its style and tone over OpenAI's which seems more inclined to punchy soundbites. I don't use Claude enough for general purpose information to have an opinion on it.

rsynnott|4 months ago

Yeah, I find it extremely grating. I’m kind of surprised that people are willing to put up with it.

generationP|4 months ago

Wondering if the project will get better from the pushback or will just be folded like one of Elon's many ADHD experiments. In a sense, encyclopedias should be easy for LLMs: they are meant to survey and summarize well-documented material rather than contain novel insights; they are often imprecise and muddled already (look at https://en.wikipedia.org/wiki/Binary_tree and see how many conventions coexist without an explanation of their differences; it used to be worse a few years ago); the writing style is pretty much that of GPT-5. But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.

If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.

beloch|4 months ago

Bray brought up a really good point. The Grokipedia entry on him was several times the length of his Wikipedia entry, not just because Grok's writing style is verbose, but also because it went into exhaustive detail on insignificant parts of his life simply because the sources were online. My own brief browsings of Grokipedia have left me with the same impression. The current iteration of Grokipedia, besides being untrustworthy, wastes a lot of time beating around the bush and, frequently, off into the weeds.

Just as LLMs lack the capacity for basic logic, they also lack the kind of judgment required to pare down a topic to what is of interest to humans. I don't know if this is an insurmountable shortcoming of LLMs, but it certainly seems to be a brick wall for the current bunch.

-------------

The technology to make Grokipedia work isn't there yet. However, my real concern is the problem Grokipedia is intended to solve: Musk wants his own version of Wikipedia, with a political slant of his liking, and without any pesky human authors. He also clearly wants Wikipedia taken down[1]. This is reality control for billionaires.

Perhaps LLM generated encyclopedias could be useful, but what Musk is trying to do makes it absolutely clear that we will need to continue carefully evaluating any sources we use for bias. If Musk wants to reframe the sum of human knowledge because he doesn't like being called out for his sieg heils, only a fool would place any trust in the result.

[1]https://www.lemonde.fr/en/pixels/article/2025/01/29/why-elon...

relaxing|4 months ago

An encyclopedia article is already an exercise in survey-and-summarize.

Asking an LLM to reprocess it again is only going to add error.

rsynnott|4 months ago

> But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.

And if you believe that you’ll believe anything. “Try to _change_ the bias” would be closer.

__s|4 months ago

> can probably be used to shame the WP into cleaning their shit up

what if your goal is for wikipedia to be biased in your favor?

spankibalt|4 months ago

> "If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases [...] The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up [...]"

One thing I love about the Wikipedias (plural, as they're all different orgs): anyone "in the know" can very quickly tell who's got no practical knowledge of Wikipedia's structure, rules, customs, and practices to begin with. What you're proposing like it's some sort of Big Beautiful Idea has already been done countless times, is being done, and will be done for as long as Wikis exist.

And Groggypedia? It's nothing more than a pathetic vanity project of an equally pathetic manbaby, for people who think LLM slop continuously fine-tuned to reflect the bias of their guru, and the tool's owner, is a Seal of Quality.

physarum_salad|4 months ago

"Wikipedia, in my mind, has two main purposes: A quick visit to find out the basics about some city or person or plant or whatever, or a deep-dive to find out what we really know about genetic linkages to autism or Bach’s relationship with Frederick the Great or whatever."

Completely agree with the first purpose but would never use Wikipedia for the second purpose. It's only good at basics and cannot handle complex information well.

ajross|4 months ago

I think that's actually wrong, or hangs on a semantic argument about "complexity". Wikipedia is an overview source. It's not going to give you "all" the information, but it's absolutely going to tell you what information there is. And in particular where there's significant argument or controversy, or multiple hypotheses, Wikipedia is going to be arguably the best source[1] for reflecting the state of discourse.

Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?

[1] In fact, talk pages are often ground zero!

generationP|4 months ago

Yeah, encyclopedias are meant to be indexes to knowledge, not repositories thereof. The WP feature-creeped its way to the latter, but it is not reliably good at it, and I'm not sure if there is an easy way to tell how good a given page is without knowing the subject in the first place.

skeeter2020|4 months ago

what I think it IS good at is parlaying the first purpose into a broad, meandering journey of the basics. I would never use it for deep study of genetics & autism or Bach and Frederick the Great, but I love following some shallow thread that travels across all of them.

dragonwriter|4 months ago

It's often good for the latter when, as a tertiary source should be, it is used not just for its narrative content but for its references to secondary sources, which are themselves used for both their content and their references.

spankibalt|4 months ago

> Its only good at basics and cannot handle complex information well.

Poppycock! Because of MediaWiki's multimedia capabilities it can handle complex information just fine, obviously much better than printed predecessors. What you mean is a Wiki's focus, which can take the form of a generalized or universal encyclopedia (e. g. Wikipedia), or a specialized one, or a free-form one (Wikipedia, in practice, again). Wikipedias even negotiate integration of different information streams, e. g. up-to-date news-like information, both in the lemmata (often a huge problem, i. e. "newstickeritis"), in its own news wiki (Wikinews), or the English Wikipedia's newspaper, The Signpost.

And to take care of another utterly bizarre comment: Encyclopedias are always, by definition, also repositories of knowledge.

siliconc0w|4 months ago

Not sure if it still does this, but for a while if you asked Grok a question about a sensitive topic and expanded the thinking, it said it was searching Elon's twitter history for its ground truth perspective.

So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.

sunaookami|4 months ago

This was unintended as observed by Simon here: https://simonwillison.net/2025/Jul/11/grok-musk/ and confirmed by xAI themselves here: https://x.com/xai/status/1945039609840185489

>Another was that if you ask it “What do you think?” the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.

The diff for the mitigation is here: https://github.com/xai-org/grok-prompts/commit/e517db8b4b253...

josefritzishere|4 months ago

I looked at Grokipedia today and spot-checked for references to my own publications which exist in Wikipedia. As is often reported, it very directly plagiarizes Wikipedia. But it did remove dead links. This is pretty underwhelming even on the Musk hype scale.

tptacek|4 months ago

Why give it oxygen?

tshaddox|4 months ago

Same reason you posted that comment: it's sometimes interesting to discuss a thing even if you dislike the thing.

meowface|4 months ago

To play devil's advocate: Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter, contrary to popular conception. Elon keeps trying to tweak its system prompt to make it less effective at that, but Grokipedia was worth an initial look from me out of curiosity. It took me 10 seconds to realize it was ideologically-motivated garbage and significantly more right-biased than Wikipedia is left-biased.

(Unfortunately, Reply-Grok may have been successfully partially lobotomized for the long term, now. At the time of writing, if you ask grok.com about the 2020 election it says Biden won and Trump's fraud claims are not substantiated and have no merit. If you @grok in a tweet it now says Trump's claims of fraud have significant merit, when previously it did not. Over the past few days I've seen it place way too much charity in right-wing framings in other instances, as well.)

TheBlight|4 months ago

[deleted]

bebb|4 months ago

Because it's a genuinely good idea, and hopefully one for which the execution will be improved upon over time.

In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.

Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.

mensetmanusman|4 months ago

It's great idea to share knowledge bases collected and curated by LLMs.

Amazing that Musk did it first. (Although it was suggested to him as part of an interview a month before release).

These systems are very good at finding obscure references that were overlooked by mere mortals.

jandrese|4 months ago

Grokipedia seems to serve no purpose to me. It's AI slop fossilized. Like if I wanted the AI opinion on something I would just ask the AI. Having it go through and generate static webpages for every topic under the sun seems pointless.

lschueller|4 months ago

Grokipedia is a joke. Lot of articles I've checked are AI slop at its worst and at the bottom it says "The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License."

smitty1e|4 months ago

Grokipedia might have a better present-tense understanding as it hoovers up data.

One great feature of Wikipedia is being able to download it and query a local snapshot.

As a technical matter, Grokipedia could do something like that, eventually. Does not appear to support snapshots at the 0.1 version.
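(To illustrate the local-snapshot workflow: Wikipedia's dumps use the MediaWiki XML export format, with `<page>`, `<title>`, and `<text>` elements. A minimal sketch of querying one offline, using a made-up two-article sample in place of a real multi-gigabyte dump; for real dumps you'd stream with `iterparse` rather than load the whole file.)

```python
# Sketch: look up articles in a MediaWiki-style XML export, offline.
# SAMPLE_DUMP stands in for a real dump file; its two articles are
# illustrative, not real dump content.
import xml.etree.ElementTree as ET

SAMPLE_DUMP = """<mediawiki>
  <page>
    <title>Binary tree</title>
    <revision><text>A binary tree is a tree data structure...</text></revision>
  </page>
  <page>
    <title>Tim Bray</title>
    <revision><text>Tim Bray is a software developer...</text></revision>
  </page>
</mediawiki>"""

def load_pages(xml_text):
    """Return a dict mapping page title -> raw wikitext."""
    pages = {}
    for page in ET.fromstring(xml_text).iter("page"):
        title = page.findtext("title")
        text = page.findtext("revision/text")
        pages[title] = text
    return pages

pages = load_pages(SAMPLE_DUMP)
print(pages["Binary tree"])  # query the local snapshot, no network needed
```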

dr_kretyn|4 months ago

Interesting that only now I'm learning about Grokipedia. Never heard of it until someone said it's bad so my natural instinct is to check it out.

Guess that's plus one for "it doesn't matter what they say as long as they say."

madeofpalk|4 months ago

I mean it only came out this week. So you heard about it immediately on launch.

morkalork|4 months ago

So, how often does it awkwardly bring up white genocide in South Africa in unrelated contexts?

cupofjoakim|4 months ago

Dead Internet Theory is no longer a theory huh?

jgalt212|4 months ago

> Woke/Anti-Woke · The whole point, one gathers, is to provide an antidote to Wikipedia’s alleged woke bias

According to the Manhattan Institute as cited by the Economist, even grok has a leftwards bias (roughly even to all the other big models).

https://www.economist.com/international/2025/08/28/donald-tr...

dragonwriter|4 months ago

> According to the Manhattan Institute as cited by the Economist, even grok has a leftwards bias (roughly even to all the other big models).

When you are far enough to the right, everything has a left bias, and even the degrees become hard to distinguish.

StephenHerlihyy|4 months ago

I don’t really know who Tim Bray is and until now I had never been to Grokipedia. I don’t really like Grok - I tried Superheavy and it was slow, bloated and no better than Claude Opus.

But I have a bad habit of fact checking. It’s the engineer in me. You tell me something, I instinctively verify. In the linked article, sub-section, ‘References’, Mr. Bray opines about a reference not directly relating to the content cited. So I went to Grokipedia for the first time ever and checked.

Mr. Bray’s quote of a quote he couldn’t find is misleading. The sentence on Grokipedia includes two references, of which he includes only the first. This first reference relates to his work with the FTC. The second part of the sentence relates to the second reference. Specifically, in the Tim Bray article on Grokipedia, reference number 50, paragraph 756, cleanly addresses the issue raised by Mr. Bray.

After that I stopped reading, still don’t know or care who Tim Bray is and don’t plan on using either Grokipedia or Grok in the near future.

Perhaps Mr. Bray did not fully explore the references, or perhaps there was malice. I don’t know. Horseshoe theory applies. Pure pro- positions and pure anti- positions are idiotic and should be filtered accordingly. Filter thusly applied.

cowpig|4 months ago

If you're going to go through the trouble of checking, you might as well link to the things you checked.

jameslk|4 months ago

Wikipedia is a great educational resource and one I've donated to for over a decade. That said, I like the idea of Grokipedia in the sense that it's another potential source I can look at for more information and get multiple perspectives. If there's anything factual in Grokipedia that Wikipedia is missing, Wikipedia can be updated to include it

I hope we can keep growing freely available sources of information. Even if some of that information is incorrect or flat out manipulative. This isn't anything new. It's what the web has always been

arghandugh|4 months ago

It is a disinformation project aimed at morons and morally bankrupt monsters, powered and funded by one of history’s bloodiest mass murderers. Not sure why this takes four pages to investigate.

ValveFan6969|4 months ago

On the other hand, I click on a Wikipedia article and I'm immediately bombarded with "[blank] is an alt-right neo-nazi fascist authoritarian homophobic transphobic bigoted conspiracy theory (Source: PLEASE PLEASE PLEASE HATE THIS TOPIC I BEG YOU)"

At least Grokipedia tries to look like it was written with the intent to inform, not spoonfeed an opinion.

whatthesmack|4 months ago

> At least Grokipedia tries to look like it was written with the intent to inform, not spoonfeed an opinion.

In addition, Grokipedia isn't encumbered by a Perennial Sources List[0] whose "generally reliable" section consists entirely of center and/or center-left media sources, and seems to be entirely purposed for gatekeeping.

The web site of the US television news network with by far the most viewership (Fox) was moved from "generally reliable" to "marginally reliable" for scientific and political claims, while MSNBC and CNN remain "generally reliable". This fact is laughable, considering MSNBC and CNN's mutual refusal to report on things like the Arctic Frost[1] (currently) and Hunter Biden laptop[2] (historically) conspiracies initiated under the Biden administration. Fox reported on both, but is not allowed as a source despite being the only major news network to not suppress the stories.

When an "encyclopedia" only allows unrestricted use of sources that fail to report information on notable news (such as conspiracies that are more far-reaching than Watergate), the encyclopedia will become less used by people because they no longer trust its new organizational and editorial biases.

Some folks, including myself, rarely reference Wikipedia anymore, because it often doesn't have the information being researched, and even if it does, we can't be sure we're getting very much (or any!) of the full story. This is broadly demonstrated by Wikipedia's constant decline in traffic from 2022 (~165M visits/day) through the present (~128M visits/day)[3].

[0] https://en.wikipedia.org/wiki/Perennial_sources_list [1] https://www.grassley.senate.gov/news/news-releases/new-jack-... [2] https://grokipedia.com/page/Hunter_Biden_laptop_controversy [3] https://datareportal.com/reports/digital-2025-exploring-tren...

uvaursi|4 months ago

These hot takes are somewhat useless honestly. People give these point-in-time opinions ignoring that the rate of improvement is exponential when it comes to software. The last three, four years of heavy AI utilization have been refreshing.

I personally treat these things the same way I treat car accidents: if an autonomous system still has accidents but has less than human drivers do, it’s a success. Given the amount of nonsense and factually incorrect things people spout, I’d still call Grok even at this early stage a major success.

Also I’m a big fan of how it ties nuanced details to better present a comprehensive story. I read both TBray’s Wiki and Groki entries. The Groki version has some solid info that I suppose I should expect of an AI that can pull a larger corpus of data in. A human editor would of course omit that, or change it, and then Wiki admins would have to lock the page as changes erupt into a silly flame war over what’s factually accurate. Because we can’t seem to agree.

Anyway - good stuff! Looking forward to more of Grok. Very fitting name, actually.

pessimizer|4 months ago

[deleted]

bebb|4 months ago

Wikipedia isn't even the only one online. The Encyclopaedia Britannica still exists.

mlmonkey|4 months ago

[deleted]

PaulDavisThe1st|4 months ago

Wikipedia clearly states that its purpose is to catalog knowledge/information that has been collected and published elsewhere. If you do not provide adequate citations to what are considered reputable sources, Wikipedia will reject your work.

It happened to me a number of times before I came to understand the Wikipedia model. They are not interested in your analysis, they are interested in statements that reference other inspectable "reputable" analyses.

bawolff|4 months ago

People who say these types of things should link their wikipedia user account so we could see why they were banned and if it really was so unreasonable.

UebVar|4 months ago

So you were banned, by your own account, for motivated reasoning?

This is the best endorsement of Wikipedia possible.

exe34|4 months ago

Could you share some of the references you tried to use here? It might be interesting to see the quality that they refused to accept towards overturning their narrative.

billy99k|4 months ago

[deleted]

epistasis|4 months ago

What's insane is that you think that this attention was not given to Wikipedia and Bluesky...

davidguetta|4 months ago

Grokipedia is VERY rough to read at the moment, and has a clear pro-capitalist / 'classical right wing' bias (reading the economic pages).

However it's still 0.1, we'll see what the v1 will look like.

alyxya|4 months ago

At a glance, Grokipedia seems quite promising to me, considering how new it is. There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references. The fact that it’s automatically generated at scale means it can be iterated on to improve fact checking reliability, exclude certain known sources as unreliable, and ensure it has up-to-date and valid citation links. We’ll have to wait and see how it changes over time, but I expect an AI driven online encyclopedia to eventually replace the need for a fully human wikipedia.

MallocVoidstar|4 months ago

> There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references.

And according to Tim Bray, it's doing that badly.

> All the references are just URLs and at least some of them entirely fail to support the text.

falleng0d|4 months ago

We may joke about it, but the fact is that it's by releasing dumb ideas like this that you sometimes get masterpieces. Maybe this one really is just one of the bad ones, but eventually Elon will have some good ones, just like he already has.

And a lot of us would be better off releasing our dumb ideas too. The world has a lot of issues, and complaining without trying to fix anything yourself doesn't help. Maybe it's time to get off the web a little and do something else.

pavlov|4 months ago

> “Maybe it's time to get off the web a little and do something else.”

One wishes Musk would take this advice: leave the web alone, forget for a few months about the social media popularity contest that seems to occupy his mind 24/7, and focus on rekindling his passion for rockets or roadsters or whatever middle-aged pursuit comes next.

happytoexplain|4 months ago

I know the world sucks, but "fuck it, let's make it worse" is a tough sell for anybody not already onboard. You're better off just doing it, rather than trying to convince others to also do it.

tene80i|4 months ago

Which of Elon’s dumb ideas are masterpieces?

tokai|4 months ago

Not even being snarky, but I can't recall a single good idea he had.

podgietaru|4 months ago

[deleted]