How do I know the LLM isn't lying to me? AIs lie all the time, it's impossible for me to trust them. I'd rather just go to the actual source and decide whether to trust it. Odds are pretty good that a programming language's homepage is not lying to me about the language; and I have my trust level for various news sites already calibrated. AIs are garbage-in garbage-out, and a whole boatload of garbage goes into them.
>Odds are pretty good that a programming language's homepage is not lying to me about the language
Odds are pretty good that, at least for not-very-popular projects, the homepages themselves will soon be produced by some LLM and left at that, warts and all...
None of the LLMs (not even Grok) are "continuously trained" on news. A lot of them can run searches for questions that aren't handled by their training data. Here's Grok's page explaining that: https://help.x.com/en/using-x/about-grok
> In responding to user queries, Grok has a unique feature that allows it to decide whether or not to search X public posts and conduct a real-time web search on the Internet. Grok’s access to real-time public X posts allows Grok to respond to user queries with up-to-date information and insights on a wide range of topics.
I can also use my human brain to read a webpage from the source, as the authors intended. Not EVERY question on this planet needs to be answered by a resource-intensive LLM. Energy isn't free, you know. :)
Other considerations:
- Visiting the actual website, you’ll see the programming language’s logo. That may be a useful memory aid when learning.
- The real website may have diagrams and other things that may not be available in your LLM tool of choice (grok).
- The ACT of browsing to a different web page may help some learners better “compartmentalize” their new knowledge. The human brain works in funny ways.
- I have zero concern about hallucinations when reading docs directly from the author/source. Unless they also jumped on the LLM bandwagon, lol.
Just because you have a hammer in your hand doesn’t mean you should start trying to hammer everything around you, friend. Every tool has its place.
It's just a different kind of data. Even without LLMs, sometimes I want a tutorial, sometimes I want the raw API specification.
For some cases I absolutely prefer an LLM, like discoverability of certain language features or toolkits. But for the details, I'll just google the documentation site (for the new terms that the LLM just taught me about) and then read the actual docs.
Search is best viewed as a black box to transform {user intention} into {desired information}.
I'm hard-pressed to construct an argument where, with widely accessible LLM/LAM technology, search still looks like:
1. User types in query
2. Search returns hits
3. User selects a hit
4. User looks for information in hit
5. User has information
Summarization and deep-indexing are too powerful and remove the necessity of steps 2-4.
F.ex. with the API example, why doesn't your future IDE directly surface the API (from its documentation)? Or your future search directly summarize exactly the part of the API spec you need?
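The five-step flow above can be sketched as a toy function. Everything here is a hypothetical stand-in (a naive keyword match plays the role of the search engine, and the "user" just takes the first hit); it only illustrates where steps 2-4 sit in the loop, not any real search API:

```python
# Toy sketch of the traditional search flow (steps 1-5 above).
# All names and the keyword-match "engine" are hypothetical placeholders.

def traditional_search(query: str, corpus: list[dict]) -> str:
    # Step 2: search returns hits (naive substring match stands in for a real engine)
    hits = [doc for doc in corpus if query.lower() in doc["text"].lower()]
    # Step 3: user selects a hit (here: simply the first one)
    selected = hits[0]
    # Step 4: user scans the hit for the relevant passage
    sentences = selected["text"].split(". ")
    answer = next(s for s in sentences if query.lower() in s.lower())
    # Step 5: user has information
    return answer

corpus = [
    {"url": "https://example.com/frob", "text": "Frob is a language. Frob docs live here."},
]
print(traditional_search("Frob", corpus))
```

Summarization and deep-indexing collapse exactly the middle of this loop: the user goes straight from query to answer.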
Yes, you can use Grok, but you could also use a search engine. Their point is that Grok would be less convenient than a search engine for the use case of finding Frob's website's homepage.
Perplexity solves this problem perfectly for me. It does the web search, reads the pages, and summarizes the content it found related to my question. Or if it didn't find it, it says that.
I recently configured Chrome to only use Google if I prefix my search with a "g ".
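For anyone wanting the same setup: in Chrome this is a custom "Site search" entry (Settings → Search engine → Manage search engines and site search), where `%s` is Chrome's placeholder for the typed query. The entry name is arbitrary; only the shortcut and URL matter:

```
Name:     Google (explicit)
Shortcut: g
URL:      https://www.google.com/search?q=%s
```

With the default engine set to something else, typing `g ` followed by a query in the address bar routes just that query to Google.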
robrenaud|10 months ago
Instead of the core of the answer coming from the LLM, it could piece together a few relevant contexts and just provide the glue.
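That "glue" idea can be sketched as a minimal retrieval step: quote the relevant source passages verbatim with their origin, so a model would only supply connective text around them. The word-overlap score below is a toy stand-in for a real search backend, and every name here is hypothetical:

```python
# Minimal sketch of the "LLM as glue" idea: the core of the answer is
# verbatim source text; a model call (omitted) would only add transitions.
# retrieve() uses a toy word-overlap score, not any real search API.

def retrieve(question: str, documents: list[dict], k: int = 2) -> list[dict]:
    q_words = set(question.lower().split())
    def overlap(doc: dict) -> int:
        return len(q_words & set(doc["text"].lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def answer_with_glue(question: str, documents: list[dict]) -> str:
    contexts = retrieve(question, documents)
    # Each line is quoted source text attributed to its origin.
    return "\n".join(f'{d["url"]}: "{d["text"]}"' for d in contexts)

docs = [
    {"url": "frob-docs", "text": "Frob is a statically typed language"},
    {"url": "pasta-blog", "text": "How to cook pasta al dente"},
]
print(answer_with_glue("what is the Frob language", docs))
```

The point is that hallucination risk shrinks when the substance is quoted rather than generated.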
cookiemonsieur|10 months ago
How do you know the media isn't lying to you? It's happened many times before (think pre-war propaganda).