
mden | 3 years ago

For certain things I think so. For me the fascinating and really useful utility of LLMs is their ability to synthesize answers to questions. This is something search generally speaking can't do. I find myself using ChatGPT to answer question like:


> Explain tar -xzvf

> Answer: The command tar -xzvf is used to extract a tar archive. Here is what each option does:

> -x: This option tells tar to extract the contents of the archive.
>
> -z: This option tells tar to decompress the archive using gzip compression.
>
> -v: This option tells tar to run in verbose mode, which means it will display the names of the files being extracted as they are extracted.
>
> -f: This option specifies the file name of the archive that you want to extract.
>
> So, the full command tar -xzvf is used to extract a tar archive that has been compressed with gzip and display the names of the files being extracted as they are extracted.
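The explanation above is easy to verify directly in a shell. A minimal sketch (the archive and file names here are arbitrary, chosen just for the demo):

```shell
# Build a sample gzip-compressed tar archive to extract
mkdir -p demo && echo "hello" > demo/file.txt
tar -czvf demo.tar.gz demo
rm -rf demo

# -x extract, -z gzip, -v verbose (lists files), -f archive name
tar -xzvf demo.tar.gz
```

Running this prints each file name as it is extracted and restores demo/file.txt from the archive.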

However, for other types of information I care much more about being taken to a trustworthy source than about seeing a summarized view or, worse, a synthesized view of various sources that might not agree with each other. References are useful, but it's risky to rely on what a bot says without validating those references, at which point the utility of the bot for that type of query is questionable.


yodsanklai | 3 years ago

I've tried to use ChatGPT for such things. The huge issue is that it does make mistakes, quite often. You never really know whether its answer is correct. When you already know what tar xzvf does, it's impressive to read the correct answer. But when you don't know and actually want to use the result, you need to double-check with Google, and you lose time compared to just asking Google in the first place. It's quite frustrating.