top | item 45892985


captainclam | 3 months ago

It looks to me like OpenAI's image pipeline takes an image as input, derives the semantic details, and then essentially regenerates an entirely new image based on the "description" obtained from the input image.

Even Sam Altman's "Ghiblified" twitter avatar looks nothing like him (at least to me).

Other models seem much more able to operate directly on the input image.
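The behavior described above (caption, then regenerate, rather than edit in place) can be illustrated with a toy sketch. The "models" here are plain stand-in functions, not real OpenAI APIs; the point is only that once the image is reduced to a text description, pixel-level identity can't survive the round trip:

```python
# Toy illustration of a caption-then-regenerate pipeline.
# Both "models" below are hypothetical stand-ins, not real APIs.

def describe(image: dict) -> str:
    # A captioner keeps only semantic attributes, discarding pixel data.
    return f"a {image['style']} portrait of {image['subject']}"

def generate(prompt: str) -> dict:
    # A text-to-image model samples a *new* image matching the prompt;
    # the original pixels never reach this stage.
    return {"prompt": prompt, "pixels": "freshly sampled"}

original = {"subject": "a man", "style": "Ghibli", "pixels": "the real photo"}
result = generate(describe(original))

print(result["prompt"])   # a Ghibli portrait of a man
# The original pixels are gone: only what the caption preserved remains.
print(original["pixels"] in result.values())   # False
```

A model that instead conditions directly on the input pixels (e.g. via image-to-image diffusion) would keep likeness and fine detail, which may explain the difference people see between models.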


spiffytech | 3 months ago

You can see this in the images of the Newton: in GPT's versions, the text and icons are corrupted.

gunalx | 3 months ago

Isn't this from the model working on really low-res images, which are then upscaled afterwards?