kettlecorn | 12 days ago
Where Gemini or Claude will look up the info I'm citing and weigh the arguments made, ChatGPT will sometimes omit parts of my statement, or modify it, when it wants to advocate for a more "neutral" understanding of reality. It's almost farcical at times how it tries to avoid inference on political topics, even where inference is necessary to understand the topic.
I suspect OpenAI is just trying to avoid the ire of either political side and has given it some rules that accidentally neuter its intelligence on these issues, but it made me realize how dangerous an unethical or politically aligned AI company could be.
manmal | 12 days ago
Like Grok/xAI, you mean?
kettlecorn | 12 days ago
My concern is more long-term: the federal government taking a more active role in guiding corporate behavior to align with its moral or political goals. I think that's already occurring under the current administration, but if it ramps up over a longer period, and AI is woven into more things, it could become much more harmful.
ACCount37 | 12 days ago
Gemini and Claude show traces of this, but nowhere near the pit of atrocious tuning that OpenAI puts on ChatGPT.