(no title)

extragalaxial | 1 year ago

[flagged]


hansmayer|1 year ago

Please, please avoid recommending LLMs for problems where the user cannot reliably verify their outputs. These tools are still not reliable (and given how they work, they may never be 100% reliable). It's likely the OP could get a "summary" which contains hallucinations or incorrect statements. It's one thing when experienced developers use Copilot or similar to avoid writing boilerplate and the boring parts of the code - they still have the competence to review, control and adapt the outputs. But for someone looking to get introduced to a hard topic, such as the OP, it's very bad advice, as they have no means of checking the output for correctness. A lot of us already have to deal with junior folks spitting out AI slop on a daily basis, probably using the tools the way you suggested. Please don't introduce more AI-slop nonsense into the world.

Asraelite|1 year ago

This is getting downvoted, but I would also recommend it. It's much faster than reading papers, and unless you are doing cutting-edge research, LLMs will be able to accurately explain everything you need to know for common algorithms like this.

hansmayer|1 year ago

It's getting downvoted because it is very bad advice, one that can be refuted by already-known facts. Your comment is even worse in this regard and is very misleading - the LLMs are definitely not going to "accurately explain everything you need to know". An LLM is not a magical tool that "knows everything"; it's a statistical parrot which infers the most likely sequence of tokens, which results in inaccurate responses often enough. There are already a lot of incompetent folks relying blindly on these unreliable tools, please do not introduce more AI-slop-based thinking into the world ;)

sky2224|1 year ago

It's getting downvoted because it's the equivalent of saying "google it".