There are ways to select a specific point or region in latent space for a diffusion model to work towards. Properly chosen, this can steer it away from specific people's likenesses, or even generate likenesses outside the domain of the latent space (though those tend to have severe artefacts). However, text prompting doesn't do that, even if the prompt explicitly instructs it to: text-to-image prompts aren't instructions. A system like Grok will always exhibit the behaviour I described in my previous (GP) comment.
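A minimal sketch of the kind of latent-space steering I mean: classifier-free guidance with a negative embedding, where each denoising step moves away from one region as well as towards another. The vectors here are toy stand-ins for a U-Net's noise predictions, not a real model; this is an illustration of the arithmetic, not a claim about how any particular system is wired up.

```python
# Sketch of classifier-free guidance with a "negative" prediction.
# eps_cond and eps_neg stand in for a U-Net's noise predictions under
# the conditional and negative embeddings respectively (assumed toy data).
import numpy as np

def guided_noise(eps_cond: np.ndarray, eps_neg: np.ndarray,
                 scale: float = 7.5) -> np.ndarray:
    """Combine conditional and negative noise predictions.

    The negative prediction replaces the usual unconditional one, so
    every denoising step pushes the sample away from whatever the
    negative embedding represents (e.g. a specific likeness).
    """
    return eps_neg + scale * (eps_cond - eps_neg)

# Toy example: guidance amplifies the conditional direction and
# inverts the negative one.
eps_cond = np.array([1.0, 0.0])
eps_neg = np.array([0.0, 1.0])
print(guided_noise(eps_cond, eps_neg, scale=2.0))  # → [ 2. -1.]
```

The point of the contrast: this mechanism operates on the sampler's trajectory through latent space directly, whereas a text prompt only conditions the model and carries no guarantee of being obeyed.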
As I mentioned in another comment (https://news.ycombinator.com/item?id=46503866), there are other reasons not to produce synthetic sexualised imagery of children, which I'm not qualified to talk about; and I feel this topic is too sensitive for my usual disclaimered uninformed pontificating.
wizzwizz4|1 month ago