top | item 3022479

Very cool, but very creepy, open source project

456 points | monochromatic | 14 years ago | notcot.org | reply

75 comments

[+] keane|14 years ago|reply
Made with:

1. OpenCV 2.0 - C++, C, Python interfaces; BSD license - http://opencv.willowgarage.com/wiki/

2. FaceTracker - C/C++ API; "research purposes only", to download email [email protected] - http://web.mac.com/jsaragih/FaceTracker/FaceTracker.html

3. Method Art Image Clone - realtime cloning library (from gts, glew, glib); MIT license - http://methodart.blogspot.com/2011/07/fast-image-cloning-lib...

4. openFrameworks - C++ toolkit; MIT license - https://github.com/openframeworks/openFrameworks

5. FaceOSC (ofxFaceTracker) - active appearance model addon for openFrameworks; "open source" - https://github.com/kylemcdonald/ofxFaceTracker
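Stitched together, these pieces form a per-frame pipeline: FaceTracker locates facial landmarks, the source face is warped to match them, and the cloning library composites it into the frame. As a rough, pure-Python illustration of just the compositing stage, here is a feathered alpha blend; the function names and the linear edge falloff are assumptions for the sketch, not anything taken from these libraries (which use far more sophisticated Poisson-style cloning):

```python
def feather_mask(h, w, border=1):
    # Per-pixel 0..1 weights that fall off linearly near the patch edge,
    # so the pasted face fades into the frame instead of ending abruptly.
    mask = []
    for i in range(h):
        row = []
        for j in range(w):
            d = min(i, j, h - 1 - i, w - 1 - j)  # distance to nearest edge
            row.append(min(1.0, (d + 1) / (border + 1)))
        mask.append(row)
    return mask

def blend_patch(frame, patch, mask, top, left):
    """Alpha-blend `patch` into `frame` at (top, left), in place.

    frame, patch, mask are 2D lists of floats (grayscale for simplicity).
    """
    for i, row in enumerate(patch):
        for j, p in enumerate(row):
            a = mask[i][j]
            y, x = top + i, left + j
            frame[y][x] = a * p + (1 - a) * frame[y][x]
    return frame

# Paste a uniform white 3x3 "face" into a black 6x6 frame:
frame = [[0.0] * 6 for _ in range(6)]
patch = [[1.0] * 3 for _ in range(3)]
blend_patch(frame, patch, feather_mask(3, 3), 1, 1)
```

After the call, the patch center lands at full intensity while its corners blend 50/50 with the frame, which is the basic trick for hiding the seam.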

[+] kragen|14 years ago|reply
So it might be open-source, but depends on the non-open-source (and not even publicly available) project FaceTracker?
[+] apu|14 years ago|reply
For face replacement in video, the state-of-the-art is this upcoming SIGGRAPH Asia 2011 work:

http://www.eecs.harvard.edu/~dale/docs/faceReplace_sa2011.mp...

http://www.eecs.harvard.edu/~dale/docs/faceReplace_sa2011.pd...

Having worked on this problem before, I know how tough it is to escape the uncanny valley when doing replacement, and these guys have really done impressively well at it (albeit with a fair amount of manual preprocessing and in controlled situations).

[+] ericgearhart|14 years ago|reply
I think the "creepy" factor of the images is probably due to the "uncanny valley"... Pixar fought this effect when they were first rendering humans.

"The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers. The "valley" in question is a dip in a proposed graph of the positivity of human reaction as a function of a robot's human likeness."

http://en.wikipedia.org/wiki/Uncanny_valley

[+] pavel_lishin|14 years ago|reply
Once they get over that barrier, though, it'll become much creepier for an entirely different reason.
[+] mirkules|14 years ago|reply
Interestingly, one of the hardest aspects of human rendering is hands. Hands flex and bend, and the skin stretches in ways that are difficult to represent mathematically, which is why in the late 90s and early 2000s (when human rendering started to emerge), most rendered humans wore gloves or kept their hands hidden from view entirely.
[+] tsunamifury|14 years ago|reply
This has nothing to do with the Uncanny Valley. This is creepy because masks add a deadening look to the face, and we sense that. It separates the person from the face.
[+] pavel_lishin|14 years ago|reply
How long before this becomes good enough to fool people on Skype? And how long before someone ends up writing software to detect this?

I think this is the first time in my life that I've felt like I was living in a scifi novel.

[+] Aloisius|14 years ago|reply
Real-time virtual puppeting has been done in movies/television and research for a while now and yes, it can easily fool people.

A professor friend of mine, Jeremy Bailenson at Stanford, actually uses the Kinect to track facial movements and 3D models of others to create puppets in real time. Even more interesting, he can morph your face with that of the person you're video conferencing with to create a feeling of commonality in them.

He actually wrote a book on it, Infinite Reality [1], which talks about all kinds of ways people will probably be manipulated in the future. He covers things like mirroring movements (which he can do automatically in a video conference), looking into the eyes of every participant in a group video conference, and other really interesting psychological hacks.

[1] http://www.amazon.com/Infinite-Reality-Avatars-Eternal-Revol...

[+] jerf|14 years ago|reply
I find it helpful to ask not "can this fool people" but something more like "can this fool people at 320x200 with an X kbps stream?" Same for "realistic" computer graphics. I haven't seen anyone push computer graphics that can fool me at "HD resolution", but pushing something "photorealistic" at low-grade web cam resolutions is perfectly doable. I bet the same is true here.
[+] apitaru|14 years ago|reply
Kyle just posted a new demo video - he's playing around with the idea of the scramble suit from A Scanner Darkly - http://vimeo.com/29391633
[+] bh42222|14 years ago|reply
Very cool. Much better than the effect used in the movie.
[+] Geee|14 years ago|reply
Really awesome. The second video shows much better results. With some more fine-tuning, someone could create an application where people could try on different makeup or eyeglasses.
[+] rohit89|14 years ago|reply
Hair styling is another big area where this would be really useful.
[+] mcantor|14 years ago|reply
Wow, how uncannily timely. Just yesterday I read a part of "Halting State", Charles Stross's near-future speculative fiction novel, that includes this as an interesting detail with an eye toward technical imperfection; one of the protagonists is bemused by someone's neck glitching up into their face during a video call while they use this sort of software.
[+] andrewpi|14 years ago|reply
Reminds me somewhat of the scramble suit from A Scanner Darkly!
[+] jewel|14 years ago|reply
I imagine this will do to movies what autotune did to music. In other words, you no longer will have to find someone who is both good at acting and attractive.
[+] ippisl|14 years ago|reply
It could also let movie producers get rid of the high salaries paid for popular movie stars.
[+] AdamTReineke|14 years ago|reply
High-res source 3D face scans + a Kinect to track target head position and rotation better would hide the modifications quite well. Awesome project.
[+] protagonist_h|14 years ago|reply
This could be used in video call centers in the future. Imagine you make a video call to your bank, and a blonde girl appears on the screen; in reality, however, you are talking to a dude in India. This would also require "voice substitution," though.
[+] swah|14 years ago|reply
And once more, while we were discussing a new language to replace JavaScript, some folks wrote a kickass piece of software in C++. :)
[+] rhizome|14 years ago|reply
Why aren't the videos embedded simply from Vimeo? Why do I have to wait on "uploads.notempire.com"? Not to mention that it's not behaving very "empire"-like.
[+] croddin|14 years ago|reply
If it is open source, where is the code? I am only seeing links to libraries it uses.
[+] mdda|14 years ago|reply
The thing being claimed as Open Source is OpenFrameworks (https://github.com/openframeworks/openFrameworks/), which is MIT licensed.

However the source for FaceTracker looks like it's "please email us" - so the licensing for that isn't clear. The end result could be (legitimately) proprietary, as long as they supply the MIT license notice.

[+] cfontes|14 years ago|reply
Really cool stuff. It would be nice to have this feature used together with augmented reality games.

We could then use your preferred character's outfit and face while playing a Wii-like game, so the game would present a video of you as the main character in any outfit, like playing Street Fighter as Mario :D

Really ingenious idea.

[+] Aqwis|14 years ago|reply
How do professional movie productions do this? For example, the Winklevoss brothers in The Social Network both had the face of one actor.
[+] Geee|14 years ago|reply
This is simple 're-texturing', i.e. the image of the other face is projected onto your face (or a 3D reconstruction of it). Actually changing, re-rendering, and replacing the original geometry is much, much more complicated, and it also involves re-creating all direct and indirect illumination, which in turn needs a full 3D reconstruction of the surroundings and light sources. That's how it's done in movies, and it involves motion capture and a huge amount of manual labor.

Edit: Speaking of replacing with a generated 3D face: the Winklevoss replacement could be done 'simply' by filming both actors in the same position and then photoshopping one face onto the other, frame by frame. Face tracking could be used to align the faces.
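
The alignment step mentioned here (using face tracking to align the faces) can be sketched as a least-squares fit: given matched landmark pairs from the two faces, solve for the scale, rotation, and translation that best map one set of points onto the other. A minimal pure-Python version, with all names hypothetical and landmarks assumed to be (x, y) tuples:

```python
def similarity_from_landmarks(src, dst):
    """Least-squares scale+rotation+translation mapping src landmarks onto dst.

    src, dst: lists of (x, y) landmark pairs in correspondence.
    Returns (a, b, tx, ty) where the fitted map is
        (x, y) -> (a*x - b*y + tx, b*x + a*y + ty).
    """
    n = len(src)
    # Centroids of both point sets.
    mx = sum(x for x, _ in src) / n
    my = sum(y for _, y in src) / n
    mu = sum(u for u, _ in dst) / n
    mv = sum(v for _, v in dst) / n
    # Accumulate the normal equations over centered points.
    num_a = num_b = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc, uc, vc = x - mx, y - my, u - mu, v - mv
        num_a += xc * uc + yc * vc
        num_b += xc * vc - yc * uc
        den += xc * xc + yc * yc
    a, b = num_a / den, num_b / den
    # Translation carries the src centroid onto the dst centroid.
    return a, b, mu - (a * mx - b * my), mv - (b * mx + a * my)

# Three landmarks shifted by (2, 3) recover a pure translation:
a, b, tx, ty = similarity_from_landmarks(
    [(0, 0), (1, 0), (0, 1)],
    [(2, 3), (3, 3), (2, 4)])
```

The closed form comes from centering both point sets and solving for the two-parameter rotation/scale matrix [[a, -b], [b, a]]; a real pipeline would then warp the source face with this transform before blending it in.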

[+] zerostar07|14 years ago|reply
This will be perfect for plastic surgery and hairdressing applications. Also to try out a new smirk before actually growing it.