bytefactory | 2 years ago
I actually think this is a limitation of the RLHF that GPT has been put through. With open-ended questions, I've seen GPT-4 come up with reasonable alternatives instead of just answering the question I asked. This often shows up as the infamous "however, please consider..." bits that it tacks on, which occasionally do offer actual insights into the problem I'm trying to solve.
In most cases, though, it seems to try very hard to mold the answer into what I want to hear, which isn't necessarily the best answer. A more powerful version of GPT with less restrictive RLHF seems like it would be more open to suggesting novel solutions, although this is just my speculation.