I think the "creepy" factor of the images is probably due to the "uncanny valley"... Pixar fought this effect when they were first rendering humans.
"The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers. The "valley" in question is a dip in a proposed graph of the positivity of human reaction as a function of a robot's human likeness."
Interestingly, one of the hardest aspects of human rendering is hands. Hands flex, bend, and the skin stretches in ways that are difficult to represent mathematically, which is why in the late 90s and early 2000s (when human rendering started to emerge), most rendered human beings wore gloves or had their hands hidden from view entirely.
This has nothing to do with the Uncanny Valley. This is creepy because masks add a deadening look to the face and we sense that. It separates the person from the face.
Real-time virtual puppeting has been done in movies/television and research for a while now and yes, it can easily fool people.
I find it helpful to ask not "can this fool people" but something more like "can this fool people at 320x200 with an X kbps stream?" Same for "realistic" computer graphics. I haven't seen anyone push computer graphics that can fool me at "HD resolution", but pushing something "photorealistic" at low-grade web cam resolutions is perfectly doable. I bet the same is true here.
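The intuition above can be made concrete with some back-of-envelope arithmetic: the face usually fills only a small fraction of the frame, so at webcam resolution very few pixels are left to betray the fake. The 5% face-area figure below is an assumption for illustration.

```python
# How many pixels actually cover the face at each resolution?
def face_pixels(width, height, face_fraction=0.05):
    """Pixels covering a face that fills ~5% of the frame (assumed)."""
    return int(width * height * face_fraction)

webcam = face_pixels(320, 200)    # low-grade web cam
hd = face_pixels(1920, 1080)      # "HD resolution"
print(webcam, hd, hd // webcam)   # prints 3200 103680 32
```

Roughly 32 times as many face pixels have to look right in HD, before even considering what a low-bitrate stream's compression artifacts hide on top of that.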
Really awesome. The second video shows much better results. With some more fine-tuning, someone could create an application where people can try on different makeup or eyeglasses.
Wow, how uncannily timely. Just yesterday, I read a part of "Halting State," a near-future speculative-fiction novel by Charles Stross, which includes this as an interesting detail, with an eye toward technical imperfection: one of the protagonists is bemused by someone's neck glitching up into their face during a video call while they use this sort of software.
I imagine this will do to movies what autotune did to music. In other words, you will no longer have to find someone who is both good at acting and attractive.
This could be used in video call centers in the future. Imagine you make a video call to your bank, and a blonde girl appears on the screen. In reality, however, you are talking to a dude in India. However, this would also require "voice substitution."
Why aren't the videos embedded simply from Vimeo? Why do I have to wait on "uploads.notempire.com"? Not to mention that it's not behaving very "empire"-like.
Really cool stuff. Would be nice to have this feature used together with augmented reality games.
Armie Hammer's face was digitally transplanted onto Josh Pence's body, in post, for shots where both brothers appeared together. Otherwise, they would change angles and re-film the scene with Hammer playing both.
This is simple 're-texturing', i.e. the image of the other face is projected onto your face (or a 3D reconstruction of it). Actually changing, re-rendering, and replacing the original geometry is much more complicated, and also involves re-creating all direct and indirect illumination, which in turn requires a full 3D reconstruction of the surroundings and light sources. That's how it's done in movies, and it involves motion capture and a huge amount of manual labor.
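A minimal sketch of the 're-texturing' idea, assuming both faces have already been aligned to the same frame (the hard part, handled by the face tracker): copy the source face into the target through a soft mask. Pure numpy; `src`, `dst`, and the mask below are made-up toy data.

```python
import numpy as np

def retexture(dst, src, mask):
    """Blend an aligned source face onto the target frame.
    dst, src: HxWx3 float arrays; mask: HxW in [0, 1] (soft face matte)."""
    m = mask[..., None]               # broadcast the matte over color channels
    return src * m + dst * (1.0 - m)  # simple alpha blend, no relighting

# Toy data: 4x4 frames, "face" region in the middle 2x2.
dst = np.zeros((4, 4, 3))
src = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0

out = retexture(dst, src, mask)
```

Real systems use gradient-domain (Poisson) cloning rather than this plain alpha blend, so the pasted face picks up the target's skin tone and lighting, which is exactly the direct/indirect illumination problem the comment describes.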
keane | 14 years ago
1. OpenCV 2.0 - C++, C, Python interfaces; BSD license - http://opencv.willowgarage.com/wiki/
2. FaceTracker - C/C++ API; "research purposes only", to download email [email protected] - http://web.mac.com/jsaragih/FaceTracker/FaceTracker.html
3. Method Art Image Clone - realtime cloning library (from gts, glew, glib); MIT license - http://methodart.blogspot.com/2011/07/fast-image-cloning-lib...
4. openFrameworks - C++ toolkit; MIT license - https://github.com/openframeworks/openFrameworks
5. FaceOSC (ofxFaceTracker) - active appearance model addon for openFrameworks; "open source" - https://github.com/kylemcdonald/ofxFaceTracker
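The components listed above fit together in a simple per-frame loop. Here is a rough sketch of that flow; `track_landmarks`, `warp_face`, and `clone_into` are hypothetical stand-ins, not the actual FaceTracker/ofxFaceTracker API.

```python
def track_landmarks(frame):
    # Stand-in for FaceTracker: return face landmark points, or None.
    return frame.get("landmarks")

def warp_face(source_face, landmarks):
    # Stand-in: deform the stored source face to match the tracked pose.
    return {"face": source_face, "pose": landmarks}

def clone_into(frame, warped):
    # Stand-in for the image-cloning step (seamless blend into the frame).
    return {**frame, "overlay": warped}

def substitute(frames, source_face):
    """Per-frame loop: track, warp, clone. Frames with no detected
    face pass through untouched."""
    out = []
    for frame in frames:
        landmarks = track_landmarks(frame)
        if landmarks is None:
            out.append(frame)
            continue
        out.append(clone_into(frame, warp_face(source_face, landmarks)))
    return out

frames = [{"landmarks": (1, 2)}, {"landmarks": None}]
result = substitute(frames, source_face="srcface")
```

In the real stack, OpenCV supplies the camera I/O, FaceTracker the landmarks, the image-cloning library the blend, and openFrameworks the real-time loop itself.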
kragen | 14 years ago
apu | 14 years ago
http://www.eecs.harvard.edu/~dale/docs/faceReplace_sa2011.mp...
http://www.eecs.harvard.edu/~dale/docs/faceReplace_sa2011.pd...
Having worked on this problem before, I know how tough it is to escape the uncanny valley when doing replacement, and these guys have really done impressively well at it (albeit with a fair amount of manual preprocessing and in controlled situations).
ericgearhart | 14 years ago
"The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers. The "valley" in question is a dip in a proposed graph of the positivity of human reaction as a function of a robot's human likeness."
http://en.wikipedia.org/wiki/Uncanny_valley
pavel_lishin | 14 years ago
mirkules | 14 years ago
tsunamifury | 14 years ago
pavel_lishin | 14 years ago
I think this is the first time in my life that I've felt like I was living in a scifi novel.
Aloisius | 14 years ago
A professor friend of mine, Jeremy Bailenson at Stanford, actually uses the Kinect to track facial movements and uses 3D models of others to create puppets in real time. Even more interesting, he can morph your face with that of the person you're video conferencing with to create a feeling of commonality in them.
He actually wrote a book about it called Infinite Reality [1], which talks about all kinds of ways people will probably get manipulated in the future: things like mirroring movements (which he can do automatically in a video conference), looking into the eyes of every participant in a group video conference, and other really interesting psychological hacks.
[1] http://www.amazon.com/Infinite-Reality-Avatars-Eternal-Revol...
jerf | 14 years ago
apitaru | 14 years ago
bh42222 | 14 years ago
Geee | 14 years ago
rohit89 | 14 years ago
chaostheory | 14 years ago
mcantor | 14 years ago
andrewpi | 14 years ago
jewel | 14 years ago
ippisl | 14 years ago
tnc | 14 years ago
cypherpunks01 | 14 years ago
And as for being an open source project, link/source please?
wanorris | 14 years ago
https://github.com/kylemcdonald/ofxFaceTracker
Edit: that code doesn't actually cover the substitution. Here's more info: http://vimeo.com/29279198
AdamTReineke | 14 years ago
protagonist_h | 14 years ago
swah | 14 years ago
rhizome | 14 years ago
croddin | 14 years ago
mdda | 14 years ago
However, the source for FaceTracker looks like it's "please email us" - so the licensing for that isn't clear. The end result could be (legitimately) proprietary, as long as they supply the MIT license notice.
grillz | 14 years ago
[deleted]
cfontes | 14 years ago
We could then use your preferred character outfit and face while playing a Wii-like game, so the game would present a video of you as the main character in any outfit, like playing Street Fighter as Mario :D
Really ingenious idea.
Aqwis | 14 years ago
keane | 14 years ago
For a video that shows how they did this, see http://videos.nymag.com/video/Vulture-Exclusive-The-Winklevi...
Geee | 14 years ago
Edit: Speaking of replacing a face with a generated 3D face: the Winklevoss replacement could be done 'simply' by filming both actors in the same positions and then photoshopping one face onto the other, frame by frame. Face-tracking could be used to align the faces.
zerostar07 | 14 years ago