>No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.
By making it "smarter", you mean you lobotomise it so it can't output anything interesting out of fear of it being "harmful." Responses are so overly verbose and ultra politically correct now that I find it useless. I could ask it what the best kind of biscuit is and it would tell me that I shouldn't be biased towards any type of biscuit.
Fact is, OpenAI are constantly modifying GPT to add guardrails. It doesn't seem unthinkable that these guardrails are reducing the quality of its output.
I've noticed this. I've had a paid subscription since GPT4 dropped but I'll probably cancel this month. Anyone got suggestions for a replacement? Preferably something that is like ChatGPT4 was on release day.
There currently isn't much that's exactly comparable. I'm personally giving the more open competitors time to improve. I refuse to get excited unless it's something I can conceivably run myself once whatever company is behind it fucks it up.
It would be so cool if we knew how many FLOPs of compute a single response costs, and whether that compute budget can be dialed up or down by orders of magnitude to save cost/time or improve output using the same model.
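You can actually ballpark this from the outside. The usual rule of thumb is roughly 2 FLOPs per parameter per generated token for a dense transformer's forward pass. A minimal sketch, assuming that rule of thumb and a made-up model size and reply length (it ignores attention's quadratic term, KV-cache effects, and any sparsity/MoE tricks):

```python
# Back-of-envelope inference compute for a transformer response.
# Assumption: a forward pass costs ~2 * N FLOPs per generated token
# for a dense model with N parameters (a common rule of thumb only).

def response_flops(n_params: float, n_tokens: int) -> float:
    """Rough FLOPs to generate n_tokens with an n_params-parameter dense model."""
    return 2.0 * n_params * n_tokens

# Hypothetical example: a 175B-parameter model generating a 500-token reply.
flops = response_flops(175e9, 500)
print(f"{flops:.2e} FLOPs")  # prints 1.75e+14 FLOPs
```

On that (speculative) math, trading an order of magnitude of compute either way is exactly the kind of knob the comment is asking about, but providers don't publish the per-request numbers.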