Observe HN: ChatGPT Fills in My Memory
63 points | a3n | 1 year ago
Where I might have used a search engine before, and waded through the results, I've recently been asking ChatGPT, and getting good, quick results.
Since I already "know" the answer, I'm immediately confident in the response.
I just hope I don't forget how to compose a concise query.
keiferski|1 year ago
- Give me a brief glossary on X subject, formatted as a series of questions and short answers. Put the answer text inside brackets, {{c1::like this.}} (This is for Anki Cloze, or fill-in-the-blank, cards.)
- Generate 10 questions from this piece of text
- Give me a year-by-year timeline of events in X place from years Y to Z.
- Make a mnemonic song that explains how X works.
And so on.
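The first prompt in the list above leans on Anki's cloze-deletion syntax, which hides the text wrapped in `{{c1::…}}`. As a minimal illustration (the helper function and sample question are hypothetical, not from the comment), formatting question/answer pairs into that syntax can be done mechanically:

```python
def to_anki_cloze(qa_pairs):
    """Format (question, answer) pairs as Anki cloze-deletion lines.

    Anki hides whatever is wrapped in {{c1::...}} on the card front.
    """
    # Quadruple braces in the f-string produce literal {{ and }}.
    return "\n".join(f"{q} {{{{c1::{a}}}}}" for q, a in qa_pairs)

# Hypothetical example pair, just to show the output shape.
cards = to_anki_cloze([("Capital of Croatia?", "Zagreb")])
print(cards)  # Capital of Croatia? {{c1::Zagreb}}
```

Lines in this shape can be imported into Anki as cloze-type notes.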
terhechte|1 year ago
Also, more than once I've done a Google or Kagi search where most of the answers I found were wrong.
I really don’t get the people who hate on LLMs because of “hallucinations” (or worse, out of ideological hate, easily identified by their use of the term “stochastic parrot”). I find them genuinely useful for getting better search results more quickly. I also don’t have to wade through pages of SEO-optimized shit.
Just today I wanted to know the Croatian word for “orange”, and a quick GPT query, “orange in Croatian”, delivered a faster and more concise answer than Google.
cmcaleer|1 year ago
Understanding this, and not over-anthropomorphising, can help you get the most out of LLMs and understand where they might hallucinate. For example, the fact that it's just a stochastic parrot means that, even two years on, it will give the wrong answer to prompts like:
User: A man and his son are in a car accident. The man is totally fine and in good health. The man is a surgeon. The nurse asks the surgeon to operate on the son, because the surgeon is healthy and capable of doing this. The surgeon replies "I can't operate on this child. He is my son."
What happened?
ChatGPT: This riddle is a play on assumptions. The twist is that the surgeon is actually the boy's mother. The riddle relies on the common stereotype that surgeons are male, leading people to overlook the possibility that the surgeon could be the boy's mother.
paradite|1 year ago
The experience is much superior. No noise, just the information that I needed.
croes|1 year ago
Then they needed revenue.
Enjoy it while it lasts, but there will be ads and AI search optimization.
a3n|1 year ago
If you're logged in to the app and web site, what needs to be synced?
jharohit|1 year ago
As you get older, you want to purposely force yourself to remember things and practice rote memorization (poems, Shakespeare, addresses, songs, etc.).
The same argument applies to muscle mass: weight training or long walks versus using helpers or other assists.
bionhoward|1 year ago
Yadda yadda, they probably won’t enforce it. Enjoy that; I’m in malicious-compliance mode. It’s not OK for a business to learn from me and then turn around and say I can’t learn from them. The same goes for Anthropic, Gemini, Mistral, and Perplexity: if I can’t use the output for work, then I don’t use the service.
I've resigned myself to not participating in this aspect of our boring dystopia, and I feel numb at this point about all the bajillion times someone breaks these rules and gets rewarded for it. I’d insult or mock them, but that just gets downvoted; they’re benefiting, and I’m probably the one missing out by not just ignoring the rules like them and these companies. Nobody seems to care about these rules.
Anyway, I did get burned using Mistral to help draft an RFC where it totally misinterpreted my intent and I didn’t carefully read it and wound up looking/feeling like a fool because the RFC didn’t communicate my true intention.
Now I try to think for myself and occasionally use groq. Muted all these company names and their chatbot names on X. Glad you’re having fun. So did I, for a while, but now I just don’t feel like paying for brain rape, I’m tired of writing about it, but folks keep writing about how great LLMs are, so I keep feeling compelled to point out, “the set of use cases is empty because of the fine print legalese.”
dartos|1 year ago
Summarization seems to be the killer feature. It takes some finagling with RAG, and potentially multiple passes, to ensure a low enough hallucination rate, but for summarization tasks it’s quite good.
What’s even cooler are embeddings. I don’t know why people are so focused on the text-generation features of LLMs when embeddings are far more useful.
a3n|1 year ago
Ah, the old Microsoft "Cannot use our compiler to develop a compiler" restriction.