item 42671025


zer0x4d | 1 year ago

Agree with all your points on the real world consumer experience.

* I would never assume the AI answer to a consequential problem to be authoritative, unless it shows me the source and I can click on the link to verify the source and the data presented (search engine use case).

* Rewrites with AI are bug-prone, and the bugs are hard to trace because the output looks superficially correct. Generating the scaffolding works super well.

* Images are often too smooth, videos too robotic and rhythmic, water too shiny, etc. Trained eyes can easily distinguish between AI and real.

* Hallucinations are commonplace.


gary_0 | 1 year ago

> it shows me the source and I can click on the link to verify the source ... (search engine use case)

To me, this is exactly a search engine. I type my query into 2005-2015 Google, I scan the page summaries under the links to see the answer, and click the best-looking result to confirm or read the details. Occasionally you need to re-word your query to get the answer you're looking for. Sometimes I don't bother clicking through because the answer is right there.

I don't really care that I can use plainer English with an "AI"; I'd be happier if I just got 2010 Google back. But sadly, it's gone.

Animats | 1 year ago

> Images are often too smooth, videos too robotic and rhythmic, water too shiny, etc. Trained eyes can easily distinguish between AI and real.

That's likely to get better. Last year, consistently getting fingers and arms right was tough. This year, there are AI-generated violin playing videos.

> I would never assume the AI answer to a consequential problem to be authoritative, unless it shows me the source and I can click on the link to verify the source and the data presented (search engine use case).

That remains the elephant in the room - the tendency to make up fake answers. Until that's fixed, LLMs are only useful for problems where the cost of such errors is an externality, dumped on the consumer.

amerkhalid | 1 year ago

> > I would never assume the AI answer to a consequential problem to be authoritative, unless it shows me the source and I can click on the link to verify the source and the data presented (search engine use case).

> That remains the elephant in the room - the tendency to make up fake answers. Until that's fixed, LLMs are only useful for problems where the cost of such errors is an externality, dumped on the consumer.

That’s one of my fears. The general public and politicians alike will trust AI without scrutiny. We’ve already seen examples of judges relying on flawed software, with devastating outcomes for innocent people. With the rapid push and widespread enthusiasm for AI, a darker future looms if these problems aren’t addressed.

dsr_ | 1 year ago

I don't think the elephant can be solved by a tweak to LLMs. Producing a statistically-likely continuation of a pattern is what they do; there is no encoding of the world, just an encoding of language and image data.
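To make the point concrete, here's a toy sketch of "statistically-likely continuation" in a few lines of Python. The probability table is hand-written stand-in data, not a real trained model, but the mechanism is the same: sample the next token in proportion to learned probabilities, with nothing anywhere checking whether the continuation is true.

```python
import random

# Hypothetical stand-in for a trained model: maps a context to a
# probability distribution over possible next tokens.
model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
}

def next_token(context, rng=random.Random()):
    """Sample the next token given the context.

    The choice is statistically likely under the model's distribution,
    but truth never enters the picture: a plausible-sounding token is
    selected whether or not it corresponds to anything in the world.
    """
    probs = model[context]
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

print(next_token(("the", "cat")))
```

Real LLMs do this over tens of thousands of tokens with distributions computed by a neural network rather than a lookup table, but the loop is the same: pick what's likely, not what's verified.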

A general crossing of that gap is, dare I say it, a problem requiring real intelligence.