top | item 40432159

wnmurphy | 1 year ago

I would agree with you, except that it knew exactly what her smile looked like. It animated a photo of her not smiling into one showing her actual smile.

The only way to get that information is from other photos of her smiling.

verdverm | 1 year ago

Have you seen image blend / target modification results? Midjourney's features are just the tip of the iceberg.

There is no way Google is training on every photo it does this to; that would be prohibitively expensive and isn't necessary to get the results you describe. They can just feed a set of images into an already trained model, with one image to be processed and the others as references.
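To make the distinction concrete, here's a minimal sketch of that inference-time pattern. Everything here is hypothetical (the `edit_with_references` helper and the toy model are stand-ins, not any real Google or Midjourney API); the point is only the call shape: one target image plus a set of reference images go into an already-trained model, and no training or gradient update happens per user.

```python
import numpy as np

def edit_with_references(model, target, references):
    """Run a pretrained model in inference mode, conditioned on references.

    Note there is no training loop here: the references are just extra
    inputs to a frozen model, not data the model is trained on.
    """
    cond = np.stack(references)  # bundle references into one conditioning batch
    return model(target, cond)

def toy_model(target, cond):
    """Toy stand-in for a pretrained editing network: pulls the target
    toward the mean of the references (a crude proxy for transferring an
    expression seen in the reference photos)."""
    return 0.5 * target + 0.5 * cond.mean(axis=0)

target = np.zeros((4, 4, 3))           # photo to be edited (not smiling)
references = [np.ones((4, 4, 3))] * 3  # photos showing the actual smile

result = edit_with_references(toy_model, target, references)
print(result.mean())  # output is pulled toward the references
```

The toy blend is obviously not how a real diffusion or GAN editor works, but it shows why per-user training is unnecessary: all the user-specific information arrives at inference time through the reference set.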