Good to see this being discussed. The tech still reflects the biases in its training data, and that’s a real issue for creative work. What would help is more concrete examples of failure cases and what actually works to reduce them in practice.
andsoitis | 3 months ago

> Good to see this being discussed. The tech still reflects the biases in its training data, and that’s a real issue for creative work. What would help is more concrete examples of failure cases and what actually works to reduce them in practice.

What bias in the training data do you have in mind? Think about the top labs - what biases do you imagine them having that are big enough to meaningfully tilt the models in a bad direction?

And then, bringing it to the user: do you want everyone to think the same way by flattening the range of thought that is permissible and that the AI system would engage with? That seems awfully oppressive.