Facebook’s deepfake ban prohibits users from posting highly manipulated, computer-generated videos, known as deepfakes. The move is meant to stop the spread of a new form of misinformation ahead of 2020.
Facebook’s Deepfake Ban: Did Facebook Deal with a Problem Before It Became a Nightmare?
That is one way to look at the company’s new deepfake video policy. Monika Bickert, Facebook’s vice president of global policy management, announced in a blog post published Monday that deepfakes would join nudity, hate speech, and graphic violence on Facebook’s list of blocked content.
In recent years the company has developed a reputation for responding to problems only after they blow up in Mark Zuckerberg’s face, whether the spread of hate speech, Russian influence campaigns, or privacy violations. So it is remarkable that Facebook is taking a strong position on deepfakes before they become a crisis.
Facebook’s Deepfake Ban and The Loopholes
However, there is another way to look at the policy: it does not address the misleading videos that are already far more common on the platform. (The overwhelming majority of deepfakes today involve grafting real women’s faces onto pornographic videos, a terrible problem, but one already covered by Facebook’s ban on porn.)
To fall under the ban, a video must meet two criteria: it must have been manipulated in ways that would not be apparent to an average person and would likely mislead someone into believing that a subject of the video said words they did not actually say, and it must be the product of artificial intelligence or machine learning.
That definition excludes the sort of clips people are already able to create and spread with conventional tools, whether selectively edited or taken out of context. In a video that circulated last week, for example, Joe Biden appeared to say, in an evidently racist way, “Our culture isn’t imported from an African or an Asian country.” But the full context made clear that Biden simply meant that Americans themselves bear responsibility for not taking sexual violence seriously enough.
Facebook’s Deepfake Ban and Nancy Pelosi
“I believe the new ban on AI-driven deepfakes is a move in the right direction,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights and an authority on political misinformation. But the Facebook policy will not necessarily lead to the removal of obviously false videos made with less sophisticated tools.
He pointed to high-profile examples such as the video doctored last year to make Nancy Pelosi appear to slur her speech, which went viral. Facebook emphasizes that this type of content is subject to its third-party fact-checking program, and that users must click past a prominent disclaimer before viewing or sharing it. But even when a video is marked false or misleading, Facebook will not take the post down.
It is not difficult to understand why the company finds it easier to ban deepfakes than deceptive old-school videos. Detecting AI-enabled manipulation can rely on automated systems rather than human judgments about what is real and what is not. The policy exempts parody and satire, however, categories that tend to require exactly the kind of interpretive evaluation the company has tried to avoid by outsourcing fact-checking to third parties.
Deliberately Mislabeling Content
Renée DiResta, a disinformation researcher (and a WIRED contributor), noted that with edited authentic content, the original can be tracked down and compared against the fake. “With a deepfake, there is no original to point to, because it’s fabricated out of whole cloth.”
Yet, DiResta added, relying on fact-checks overlooks the problem of misleading or false content going viral long before it can be debunked. “One of the main critiques is that it’s too slow — that the thing goes viral long before the fact-check is complete. Most people never see the correction. And when the content is political, some will dismiss the fact-check anyway, accusing the fact-checking organization of partisan bias.”
“I think it’s a good policy,” said Sam Gregory, program director of the human rights nonprofit WITNESS, one of several groups Facebook consulted on how to handle deepfakes. “I believe it’s very important that platforms make clear how they will deal with deepfakes before they’re a widespread problem. But the vast majority of visual misinformation isn’t covered by the policy.”
He explained that, particularly outside the US, this means deceptively edited or deliberately mislabeled content — like old videos from around the world circulated in India through WhatsApp (a Facebook subsidiary) and passed off as evidence of anti-Hindu violence to stoke hatred of Muslims. This sort of misinformation is harder to handle, he said, and addressing it will take stronger policies and better tools, such as reverse video search, to help users and journalists debunk rumors faster.
Politics is Actually The Rub
Even as far as deepfakes are concerned, it remains to be seen whether Facebook’s new ban is up to the challenge. One question is whether platforms can reliably detect AI-generated fake videos at all — the company, along with Microsoft and academic institutions, is currently running a contest to encourage researchers to develop better detection methods. Another is whether users will trust Facebook’s explanations of why certain content has been removed. The company’s enforcement of content rules already draws accusations of brazen censorship, particularly from conservatives who see a liberal bias in Silicon Valley, despite the company’s denials and a lack of evidence that politics shapes its enforcement.
There is little reason to expect a deepfake ban to play out any differently. So politics is actually the rub, at least when it comes to misleading videos in the US. It is probably no accident that Bickert’s blog post came just ahead of her scheduled appearance at a hearing on online manipulation and deception, with 10 potentially interminable months to go before the 2020 election.
It is difficult to find anyone, from any walk of life, who is confident that Facebook will be a neutral steward of the democratic process. With its announcement, the company is clearly trying to persuade Washington and the world that it is up to the job ahead. But before it deserves credit for planning against future disinformation threats, it will also need to prove it can address the problems that have already arrived.