JM Porup
Deepfakes are fake video or audio recordings that look and sound like the real thing. Once the exclusive domain of Hollywood special effects studios and intelligence agencies producing propaganda, such as the CIA or GCHQ’s JTRIG division, today anyone can download deepfake software and create convincing fake videos in their spare time.

So far, deepfakes have mostly been the work of amateur hobbyists putting celebrities’ faces on porn stars’ bodies or making politicians say funny things. But it would be just as easy to create a deepfake of an emergency alert warning that an attack is imminent, to ruin someone’s marriage with a fake sex video, or to release fake audio or video of a candidate a few days before an election.
How dangerous are deepfakes?
Deepfakes worry many people. Marco Rubio, the Republican senator from Florida and 2016 presidential candidate, has likened deepfakes to nuclear weapons. In a speech in Washington two weeks ago, Rubio said: “In the old days, threatening the United States required ten aircraft carriers, nuclear weapons and long-range missiles. Today, you just need access to our internet systems, our banking systems, our electrical grid and infrastructure. And increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, throw the country into tremendous internal crisis and weaken us deeply.”
Is that overblown political hyperbole, or are deepfakes really a bigger threat than nuclear weapons? To hear Rubio tell it, you would think the world was ending. But not everyone agrees.
Tim Hwang, director of the Ethics and Governance of AI Initiative at the Berkman Klein Center and the MIT Media Lab, disagrees: “As dangerous as nuclear bombs? I don’t think so. The demonstrations we’ve seen so far are certainly disturbing. They’re concerning and they raise a lot of questions, but I’m skeptical they change the game in the way a lot of people are suggesting.”
Ordinary users can download deepfake software such as FakeApp and start producing their own fakes right away. The app is not especially easy to use, but as Kevin Roose demonstrated in The New York Times in early 2018, anyone with some computer experience can manage it without much trouble.
Playing whack-a-mole with deepfakes is the wrong strategy, Hwang argues, because plenty of misinformation is effective without them. “There are many other ways to produce it,” he said.
For example, take real footage of a group attacking someone on the street and attach a false narrative to it, claiming, say, that the attackers are immigrants to the United States. No advanced machine learning algorithm is required, just a believable video and a caption that fits.
How deepfakes work
“Seeing is believing,” the old saying goes, but the truth is closer to “believing is seeing”: people take in information that supports what they already believe and ignore the rest.
Attackers who hack this human tendency gain considerable power. We see it already with disinformation (so-called fake news), in which deliberate falsehoods spread wrapped in the guise of truth. By the time fact-checkers speak out against the lie, it is already too late, and conspiracy theories like PizzaGate have taken on a life of their own.
Deepfakes exploit this human tendency using generative adversarial networks (GANs), in which two machine learning models compete. One model trains on a dataset and creates forged videos, while the other tries to detect the forgeries.
The forger keeps creating fakes until the other model can no longer detect the forgery. The larger the training dataset, the easier it is to create a believable deepfake. That is why videos of former presidents and Hollywood celebrities featured so heavily in the first generation of deepfakes: there is an enormous amount of public footage to train the forger on.
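To make the adversarial setup concrete, here is a minimal, hypothetical sketch in PyTorch of a generator and discriminator trained against each other. The layer sizes, learning rates, and toy data are assumptions for illustration only; real deepfake systems use far larger, video-specific architectures.

```python
# Minimal GAN sketch (hypothetical, for illustration only).
# Real deepfake pipelines use large convolutional models and video data;
# here each "sample" is just a 64-dimensional vector.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator (the forger): turns random noise into a fake sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator (the detector): scores how likely a sample is real.
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real training data
    fake = G(torch.randn(32, latent_dim))

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(32, 1)) + \
             loss(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The key design point is the arms race in the loop: the discriminator improves at spotting fakes, which forces the generator to produce ever more convincing ones.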
In fact, GANs have many more uses than making fake sex videos and putting words in politicians’ mouths. They are a big leap forward in “unsupervised learning,” in which machine learning models teach themselves. That holds great promise for improving self-driving vehicles’ ability to recognize pedestrians and cyclists, and for making voice-activated digital assistants like Alexa and Siri more conversational. Some herald GANs as the rise of “AI imagination.”
Examples of deepfakes
If you think fake news videos used as a form of political manipulation are a new phenomenon, think again. For a generation after film was invented, faking newsreel footage to dramatize the real news was commonplace.
At a time when film could take weeks to cross the ocean, filmmakers staged earthquakes or fires on miniature sets to make the news more lively. In the 1920s, transmitting black-and-white photographs over transatlantic cables was the cutting edge, and newsreel producers used the photos they received to recreate the scenes of destruction they depicted. The practice only changed in the 1930s, when real footage became the expectation for news scenes.
More recently, controversy over deepfakes resurfaced with the release of TikTok videos apparently starring actor Tom Cruise. The clips in fact combine a Tom Cruise impersonator with the open source artificial intelligence algorithm DeepFaceLab.
Chris Umé, the Belgian visual effects (VFX) specialist who created the videos, says few people could reproduce work of this quality. Still, the project shows that combining AI with CGI and VFX techniques can produce deepfakes that are almost impossible for the average viewer to detect.
How to spot deepfakes
Detecting deepfakes is a hard problem. Amateurish deepfakes can, of course, be spotted with the naked eye, and machines can pick up other telltale signs, such as unnatural eye blinking or shadows that fall the wrong way. But the GANs that generate deepfakes keep improving, so before long we will have to rely on digital forensics to identify them, and there is no guarantee the forensics will succeed.
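As a toy illustration of the blink heuristic mentioned above, here is a hypothetical sketch using OpenCV’s bundled Haar cascades to measure how often a visible face shows no detectable eyes, a crude proxy for blinking. The file name, thresholds, and the screening idea itself are assumptions for illustration; production forensics tools use far more robust methods.

```python
# Toy blink-based screening sketch (hypothetical, illustration only).
# Early deepfakes often blinked too rarely; this counts frames where
# a face is found but no eyes are detected.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def closed_eye_ratio(video_path: str) -> float:
    """Fraction of face frames in which no eyes are detected."""
    cap = cv2.VideoCapture(video_path)
    face_frames = eyes_closed = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:        # first detected face only
            face_frames += 1
            roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) == 0:
                eyes_closed += 1
    cap.release()
    return eyes_closed / face_frames if face_frames else 0.0

# People typically blink roughly 15-20 times per minute, so a ratio near
# zero over a long clip would be one reason to look more closely.
if __name__ == "__main__":
    print(f"closed-eye frame ratio: {closed_eye_ratio('clip.mp4'):.3f}")
```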
That is why the Defense Advanced Research Projects Agency (DARPA) is investing heavily in research into better ways to authenticate video. But since GANs can themselves be trained to learn how to evade such forensics, it is not clear this is a battle that can be won.
“Theoretically, if you gave a GAN all the detection techniques we currently have, it could bypass all of them,” David Gunning, the DARPA program manager leading the project, told MIT Technology Review. “We don’t know if there’s a limit to that ability.”
Critics warn that if we cannot detect fake videos, we may be forced to distrust everything we see and hear. With the internet now mediating every aspect of life, an inability to trust anything we see there would make the “end of truth” a reality. That threatens not only faith in the political system but, in the long term, our shared sense of a participatory, objective reality. If we cannot agree on what is real and what is not, the worry goes, we can no longer debate political questions at all.
Hwang, however, argues that these concerns are exaggerated: “I don’t think we’ll ever cross that gloomy threshold where we can’t tell what is real and what is not.”
Ironically, the hype around deepfakes may itself serve as a strong defense: the fact that people now know video can be faked this convincingly makes them harder to deceive.
Rules to prevent deepfakes
Are deepfakes illegal? That is a tricky question with no settled answer yet. The First Amendment to the U.S. Constitution must be reckoned with, but so must intellectual property law, privacy law, and the statutes prohibiting revenge pornography that several U.S. states have recently enacted.
Platforms like Gfycat and Pornhub actively delete deepfake porn videos as violations of their terms of service. Nevertheless, deepfake porn continues to be shared on less mainstream platforms.
When it comes to political speech unrelated to sexual exploitation, the lines blur. The First Amendment protects the right of politicians to lie to the public. It also protects the right to publish false information, inadvertently or on purpose. The marketplace of ideas is meant to sort truth from falsehood, not government censors or de facto censors in the form of social media platforms enforcing arbitrary terms of service.
Governments and legislatures around the world continue to debate the issue, so it remains to be seen how this plays out.