top | item 38551503

ssteeper | 2 years ago

You're saying this as if the result is unsurprising; however, it's significant that performance jumps so dramatically, and that it's not a fundamental issue of capability, just a bias in the model toward hesitancy about providing false information. That's a good insight, since it can guide further fine-tuning toward getting that balance right, so that careful prompt engineering is no longer necessary to achieve high precision/recall (P/R) on this task.
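For readers unfamiliar with the metric: a minimal sketch of how precision and recall trade off when a hesitant model abstains rather than risk a wrong answer. All counts here are hypothetical, just to illustrate the balance the comment describes:

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of emitted answers that are correct.
    Recall: fraction of true answers the model actually emitted."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts: a hesitant model abstains often, so it makes few
# false positives (high precision) but misses many answers (low recall).
hesitant = precision_recall(tp=40, fp=2, fn=60)

# Prompted to commit to answers, it emits more: recall rises sharply
# while precision dips only slightly.
confident = precision_recall(tp=85, fp=10, fn=15)
```

The point being that a single bias knob (willingness to answer) moves both numbers, which is why prompting alone can shift P/R so much.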

crawfordcomeaux | 2 years ago

Not at all! I think there are obvious insights people are missing in how they prompt. For instance, reality is not dualistic, yet people will prompt dualistically and get shoddy results without realizing their prompting biases are the issue. I see this as evidence that AI is calling us toward more intentional language use.