top | item 37694974

yid | 2 years ago

A lot of the value comes from follow-up questions. Imagine being able to interrogate a StackOverflow answer with new constraints and details. Not always correct, but in some cases, faster than typing in a new search term and parsing a screen full of links.

ToucanLoucan | 2 years ago

But again, the AI doesn't know. It's going to search around the internet and probably take a closer look at what it already told you, but that's it. It takes a plethora of information and attempts to digest it into knowledge, but it lacks the understanding with which to accomplish this task.

Unless, I guess, you train an AI on a given topic, like a few languages or a database or something. But given ChatGPT's apparent vulnerability to just making shit up, you'll have to call me skeptical that this has any real use.

empath-nirvana | 2 years ago

GPT4 "knows" a lot more about most topics than any single human does. People have this idea that it absolutely needs to be perfectly correct at all times to be useful, but would never hold a human being to that standard.

How many times have you asked a co-worker about something and they gave you a convincing answer that was totally wrong? Did it make you stop asking co-workers for help?

JumpCrisscross | 2 years ago

> the AI doesn't know. It's going to search around the internet and probably take a closer look at what it already told you, but that's it

F/k/a putting a thing on the internet for randos to identify and explain. As long as the LLM cites its sources, general questions in the form of "what is this" or "what's going on here", while you point to a page or an image or in a general direction, are not well suited for search engines.

johnmaguire | 2 years ago

Because having the AI do it for you is faster than doing it yourself.

Not as accurate, but faster.

For some people - I am reluctant to say "for some use cases" - that's very appealing.