
Ph4n70M Phr34K
Tags: deepfake, ai, gan, generative adversarial networks, neural networks, artificial intelligence

DeepFake – what is it and how does it work?

Posted Feb 2, 2024 03:59 PM

Most people who watched the video linked above were taken aback. What astonished them, however, was not the choice of "characters" or the song, but the power of modern technology. If you have never encountered anything like this before, welcome to the world of DeepFake technology.

The name DeepFake comes from a Reddit user called "DeepFakes," who, in December 2017, released several pornographic videos. These videos used GAN-based neural networks to swap the faces of the original actresses with those of celebrities such as Emma Watson, Scarlett Johansson, and Gal Gadot. Although the videos were clearly fabricated, they looked remarkably realistic.

After some time, Reddit deleted threads containing DeepFake videos, citing rules related to involuntary pornography. Soon after, platforms such as Twitter, Discord, and even Pornhub announced similar policies. This, however, did not spell the end of this technology; on the contrary, it marked a new beginning...

How does it work?

DeepFake videos are produced with the help of GANs (Generative Adversarial Networks), composed of two competing deep neural networks, trained on actual photos or film frames. This competition involves one network that generates images (hence the "generative" part of the name) and another (referred to as the discriminator), which attempts to discern whether an image is authentic or fabricated.

Training then proceeds in alternating steps: the generative network improves based on the discriminator's feedback, while the discriminator looks for ways to increase its accuracy. Each cycle yields progressively better results for both networks. After enough of these cycles, the generative network becomes capable of producing a counterfeit image so realistic that the discriminator (which has been learning all along) can no longer tell it apart from an authentic one.
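The adversarial loop described above can be sketched in a few dozen lines. The following is a deliberately tiny, illustrative example (not the architecture used by any real DeepFake tool): instead of images, the generator learns to mimic one-dimensional samples drawn from a Gaussian with mean 4, and both networks are reduced to single affine/logistic units with hand-derived gradients. All names and hyperparameters are made up for the sketch.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
# The "real" data come from a Gaussian with mean 4; the generator starts
# by emitting standard-normal noise and must learn to shift it toward 4.
rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.25

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters (scale, shift)
w, c = 0.1, 0.0   # discriminator parameters (weight, bias)
lr, batch = 0.02, 64
initial_gap = abs(b - REAL_MEAN)  # generator output mean starts at b = 0

for step in range(2000):
    # --- Discriminator step: real samples labeled 1, fakes labeled 0 ---
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])
    p = sigmoid(w * x + c)
    dlogit = p - y                # gradient of cross-entropy w.r.t. the logit
    w -= lr * np.mean(dlogit * x)
    c -= lr * np.mean(dlogit)

    # --- Generator step: try to make the discriminator output 1 on fakes ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    p = sigmoid(w * fake + c)
    dfake = (p - 1.0) * w         # d(-log p)/d(fake sample)
    a -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

final_mean = np.mean(a * rng.normal(0.0, 1.0, 10000) + b)
print(f"generated mean ~ {final_mean:.2f} (target {REAL_MEAN})")
```

Real DeepFake systems follow the same competitive pattern, only with deep convolutional networks, image data, and far more training cycles; this sketch just makes the generator/discriminator tug-of-war concrete.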

Numerous applications employ this methodology for a wide array of purposes, yet DeepFake remains a particularly visible and emotionally charged use of it, for reasons the video above illustrates. Notably, DeepFake technology is being advanced not only by individual enthusiasts but also by corporations: in 2019, Samsung's research branch in Russia began work on "neural talking heads" technology.

Technology Development

The early applications designed to generate DeepFake videos were not particularly user-friendly: they required hundreds of photographs of the "victim," all of which had to be properly cropped and scaled. Producing such a video was therefore essentially a hobbyist's experiment, demanding considerable patience.

The situation gradually improved as various people explored the subject, albeit in small increments. A significant breakthrough came in December 2019 with the publication of Aliaksandr Siarohin et al.'s research paper "First Order Motion Model for Image Animation."

Since then, it has been feasible to produce DeepFake videos of high quality with just a single properly prepared photograph of the subject, along with a driving video whose motion is transferred to it. The whole process can be completed in mere minutes, without personal servers or the need to assemble a photo collection.

The current state

Nowadays the development of deepfake technology is characterized by dynamic evolution and growing use across various fields. Advances in machine learning algorithms and data availability have enabled the creation of increasingly convincing deepfakes that are hard to distinguish from genuine footage. Examples of the technology's use include realistic avatars in video games, personalized advertisements, educational historical simulations in which historical figures can "come to life" and speak to the audience, and movies, where deceased actors can reappear on screen. Platforms like TikTok and Snapchat have introduced deepfake-based filters, allowing users to alter their appearance in playful and creative ways.

However, the development of deepfake technology also poses threats, such as the creation of fake compromising material about public figures, the manipulation of video content to influence public opinion, or the fabrication of news stories. In response to these challenges, tools and software for detecting deepfakes have been developed, along with educational initiatives aimed at raising public awareness of the technology.

In summary, deepfake technology has evolved significantly in recent years, finding applications in entertainment, education, and advertising, while also presenting new ethical and legal challenges. Tools for detecting fake content and educational initiatives are key to combating the negative aspects of this technology. Deepfake, being both a fascinating technological achievement and a potential source of disinformation, requires a conscious and responsible approach from both creators and consumers of digital content.