Donald Trump’s “I love China” remixes are nothing compared to videos of him saying exactly what you want on camera. How would that be possible? Well, scientists found a way, using commodity webcams and special software for real-time facial reenactment. #softwaremagic
Computer scientists from Stanford University, the Max Planck Institute and the University of Erlangen-Nuremberg in Germany built software that uses an ordinary camera and a monocular video sequence to manipulate the facial expressions of any person on video, even Donald Trump and Vladimir Putin.
“Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.”
To do that, they ran facial tracking on 15-second video sequences. The tracked facial features of Trump and other political leaders were then used to fit a 3D model of each face in real time. Once they got that right, the team mapped another person’s movements onto the models, virtually putting words in their mouths.
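The idea of fitting a 3D face model and transferring one person’s expression onto another can be sketched with a linear blendshape model, a common representation in this line of work. This is a simplified illustration, not the researchers’ actual code; the mesh sizes, variable names, and random data below are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

N_VERTS = 500        # toy mesh size; real face models use tens of thousands of vertices
N_BLENDSHAPES = 20   # number of expression basis shapes (illustrative)

# Linear face model: geometry = neutral shape + weighted sum of expression blendshapes.
basis = rng.standard_normal((N_BLENDSHAPES, N_VERTS, 3))

def reconstruct(neutral, basis, coeffs):
    """Rebuild mesh geometry from a neutral shape plus weighted blendshapes."""
    return neutral + np.tensordot(coeffs, basis, axes=1)

# Expression transfer: fit expression coefficients on the source actor's frame,
# then apply those same coefficients to the target person's identity model.
source_coeffs = rng.uniform(0, 1, N_BLENDSHAPES)    # e.g. tracked "smile" weights
target_neutral = rng.standard_normal((N_VERTS, 3))  # target person's neutral face

reenacted = reconstruct(target_neutral, basis, source_coeffs)
print(reenacted.shape)  # target identity wearing the source actor's expression
```

The key point the sketch captures is that identity (the neutral shape) and expression (the coefficients) are separated, so the expression weights tracked on the source actor can be re-applied to a different face.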
“At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit,” the team explains in detail here.
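The mouth-retrieval step the quote mentions can be approximated as a nearest-neighbor lookup: among the target video’s own frames, pick the one whose tracked expression parameters best match the re-targeted expression. This is a hedged simplification under a plain Euclidean distance; the paper uses a more elaborate similarity measure plus warping, and the frame counts and data here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Expression parameters tracked for each frame of the target video
# (rows = frames, columns = expression/blendshape coefficients).
target_frames = rng.uniform(0, 1, (300, 20))

def best_mouth_frame(query_expression, frame_expressions):
    """Return the index of the target frame whose tracked expression is
    closest (Euclidean distance) to the re-targeted expression."""
    dists = np.linalg.norm(frame_expressions - query_expression, axis=1)
    return int(np.argmin(dists))

# A query expression very close to frame 123 should retrieve that frame.
query = target_frames[123] + rng.normal(0, 0.01, 20)
idx = best_mouth_frame(query, target_frames)
print(idx)
```

Because the mouth pixels come from real frames of the target person, the retrieved interior (teeth, tongue) stays photo-realistic instead of being synthesized from scratch.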
Granted, it looks a bit off when they try manipulating Putin’s face, giving him a boyish grin when he smiles. So what sort of use cases do they see for the technology? Better foreign-language dubbing in movies, and even accurate live video-chat translation.