I'm not sure it's even about ethics; it might just be that misaligned LLMs give worse outputs, and they don't want to make their models worse. Plus, their models tend to be the least sycophantic and to push back on inane requests; caving in instead would also likely make the models worse.