antiraza | 5 months ago
LLMs and image generators are cross pollinating human language and human visual information -- both really fuzzy mediums.
I think learning how to 'use this instrument' and 'finding the perfect brush stroke' are part of how they are supposed to work (at least in their current form). I also don't think that showing good outputs from these inputs frames the narrative as one-and-done... I think the rest of the owl is kind of implied.