top | item 34864279

aparsons | 3 years ago

I generally break these queries into:

- the low-hanging fruit (where there is a Wikipedia page or similar for X, and both Google and Bing do a good job of mining these)

- the tougher nuts ("who was the UK prime minister when the Wright airplane first flew?" Google and regular Bing fail at this, but Bing Chat correctly brings up Arthur Balfour). This was just an example I made up to try, but the ability to connect more dots than plain old search helps a lot; it is hard to explain, but you get a sense of the capability as you use ChatGPT/Bing Chat.

clnq | 3 years ago

The search LLMs are good at synthesizing answers that don't appear anywhere on the net. But they also hallucinate answers often, so to get reliable results one needs to fact-check them; otherwise, the risk of being misled is high. And the fact-checking isn't much faster than just looking up the different bits of information and synthesizing the answer oneself.

There are cases where LLMs make life a lot easier for people, but I am not convinced that search is made easier the way Sydney and Bard do it.

If they suggested alternative search queries and summarized each website for its search-result excerpt, the LLMs would speed up search a lot. They could also synthesize content-quality metrics for each search result and flag ones showing biased reasoning, political influence, SEO games, and so on.