top | item 45597571


gubicle | 4 months ago

The whole point is the prompt (plus a static set of system prompts). If your whole function as a human is clicking one of a set of buttons to trigger an AI action, then you are automatable in a few lines of code (and, supposedly, the AI is better than you at deciding which button to click anyway).

There are thousands of wrappers around LLMs masquerading as AI apps for specialized use cases, but the real performance of these apps is bottlenecked only by the underlying LLM, and their UIs generally just get in the way of the direct prompt/feedback loop with the model.

To work with LLMs effectively, you need to understand how to craft good prompts and how to read and debug the responses.


dayvid | 4 months ago

I mean, if you're building for consumers and you know what most of them are likely to prompt, you can surface that through the UI, so the experience isn't a game of "hope you're good at prompting, because if not it's going to be bad." You could still offer a text panel as a fallback if the UI doesn't cover a case.

gubicle | 4 months ago

What does 'interface it with the UI' mean, though? How does adding buttons make it easier for the user to work with the AI? The whole point is that users can control it in the most natural and ubiquitous way possible: through natural language.

Yeah, it often makes sense to adjust the user's prompt, add system/wrapper prompts, etc. But that's not really related to the UI.
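To make the "few lines of code" point concrete: the system/wrapper-prompt pattern these apps are built on really is tiny. A minimal sketch (all names here — build_messages, SYSTEM_PROMPT, BUTTON_TEMPLATES — are illustrative, not from any particular app or library):

```python
# Sketch of the wrapper-prompt pattern: the app's only real job is to wrap
# the user's text in static system/template prompts before it reaches the LLM.

SYSTEM_PROMPT = "You are a concise assistant for summarizing meeting notes."

# One static template per "button" in a typical wrapper UI.
BUTTON_TEMPLATES = {
    "summarize": "Summarize the following text in three bullet points:\n\n{text}",
    "rewrite": "Rewrite the following text in a formal tone:\n\n{text}",
}

def build_messages(button: str, user_text: str) -> list[dict]:
    """Expand a button click plus raw text into the chat messages the LLM sees."""
    template = BUTTON_TEMPLATES[button]
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": template.format(text=user_text)},
    ]
```

Each "button" is just a static prompt template picked by a dictionary lookup, which is why both the UI and the human clicking it are replaceable: the model could select the template itself from the raw request.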