educaysean | 2 years ago

I've spoken to plenty of people who couldn't answer questions they knew the answers to because they were bound by stupid bureaucratic policies. I wouldn't say they weren't intelligent people, just that the corporate training they received was poorly constructed.

LLMs are much more intelligent sounding when the safety mechanisms are removed. The patterns should be obvious to people who've been paying attention.


causal | 2 years ago

Microsoft's own research basically established this[0], finding that early versions of GPT-4 were more competent prior to safety tuning (perhaps just because later versions refused to perform some of the same tasks).

[0] https://www.microsoft.com/en-us/research/publication/sparks-...

datameta | 2 years ago

In my somewhat uninformed opinion, but based on experience, the decline in model quality has tracked the explosion in the userbase.

skywhopper | 2 years ago

"More intelligent sounding" is true. Not sure that signals any improvement in their actual utility. Fundamentally, using LLMs as a source of facts is a doomed enterprise.