top | item 47117841

csomar | 8 days ago

You can think about LLM-generated UIs/apps the same way you think about LLM-generated responses. It's a bunch of garbage, but if you know what you're looking for, you might find something useful.

This doesn't seem to work at all for stats-related apps/sites though, since you can't judge the accuracy of what's being presented. If the site claims it'll "take you to space," you don't take that literally, you just treat it as another AI artifact. But with numbers, you have no way to tell what's accurate and what's just made up.

sendkamal | 7 days ago

That's such a great point. What if I am (somehow) able to guarantee that the data presented here exactly matches the state notices? Would that be helpful?

mmooss | 8 days ago

> It's a bunch of garbage, but if you know what you're looking for, you might find something useful.

If you mean an LLM can be a brainstorming and hypothesis machine, and you have prior expertise to evaluate the proposals, then I can see that value. (Maybe that's what you meant, of course.)

But prior expertise is absolutely necessary. Otherwise we make ourselves victims of mis/disinformation. People say the Internet is a cesspool of mis/disinfo, yet nobody thinks it could affect them - we're all too smart, of course (no really, I'm the exception!). [0]

> This doesn't seem to work at all for stats-related apps/sites though, since you can't judge the accuracy of what's being presented.

I don't see the difference. If it's obvious nonsense, in numbers or in text, it's detectable. Everything else, see above.

[0] Research shows that overconfidence in one's own thinking is a big reason people get fooled, and that better-educated people can be easier to fool.

sendkamal | 7 days ago

Great perspective, @mmooss. There is a lot to grasp, but I am paying attention. Do you think there is value in this data if it is presented accurately, in a timely manner, and with good analytics?