Artificial intelligence is moving faster with every passing day, and at this point little about it surprises us anymore. One thing is certain, though: AI is getting very good at creating fake images.
Earlier this month, NVIDIA researchers publicly released a paper describing how they used a generative adversarial network (GAN) to create realistic faces.
GANs are not a new concept – they have existed since 2014, pitting two neural networks against each other: a generator that produces images and a discriminator that tries to tell them apart from real photos. But the results they yielded back then are worlds away from what the network can create now. You can see the 2014 results in the image below:
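The adversarial setup can be sketched in a toy form. Everything below is illustrative, not NVIDIA's actual model: a one-dimensional "generator" and a logistic "discriminator", evaluated once to show the two opposing loss terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Logistic "real vs. fake" score in (0, 1); w is a toy parameter.
    return 1.0 / (1.0 + np.exp(-w * x))

def generator(z, theta):
    # Maps random noise to samples with a simple shift (illustrative only).
    return z + theta

real = rng.normal(loc=4.0, scale=1.0, size=1000)  # stand-in for real data
z = rng.normal(size=1000)                         # generator input noise
fake = generator(z, theta=0.0)                    # an untrained generator

w = 1.0
# Discriminator loss: push real samples toward 1, fake samples toward 0.
d_loss = -np.mean(np.log(discriminator(real, w))) \
         - np.mean(np.log(1.0 - discriminator(fake, w)))
# Generator loss: make fakes score as "real" to fool the discriminator.
g_loss = -np.mean(np.log(discriminator(fake, w)))
```

In a real GAN both networks are trained in alternation on these objectives, each improving until the fakes become hard to distinguish from the data.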
Just a few years later, this is what the GAN managed to create:
If you hadn’t known these weren’t actual people, would you have been able to tell the difference?
These faces can also be easily and fully customized via a technique the NVIDIA researchers call style transfer, in which the characteristics of one image are blended into another.
You can see how the style transfer works in the grid below: the top row hosts the source person, onto whom the facial features of the person in the right-hand column are imposed. Additions like skin and hair color produce an entirely different person by the end of the process.
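The mixing described above can be sketched at a conceptual level. Assuming styles are represented as per-layer vectors (the layer count, dimensions, and crossover point below are invented for illustration, not taken from the paper), coarse attributes such as pose come from one source while fine attributes such as hair and skin color come from another:

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, dim = 8, 4  # hypothetical: one style vector per network layer

style_a = rng.normal(size=(n_layers, dim))  # source A: pose, face shape
style_b = rng.normal(size=(n_layers, dim))  # source B: hair, skin color

# Hypothetical crossover point: coarse layers from A, fine layers from B.
crossover = 4
mixed = np.vstack([style_a[:crossover], style_b[crossover:]])
```

Feeding such a mixed style stack through the generator is what yields a face that combines the identity of one source with the coloring of the other.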
With such fake photos having the potential to disrupt life as we know it, experts are already working on ways to authenticate digital photography. For now, it looks like AI-generated images and videos will become harder to distinguish from the real thing sooner than we expected.