{"id":4912,"date":"2019-05-22T12:37:02","date_gmt":"2019-05-22T12:37:02","guid":{"rendered":"https:\/\/dfcme.me\/you-thought-fake-news-was-bad-deep-fakes-are-where-truth-goes-to-die\/"},"modified":"2022-10-22T20:37:38","modified_gmt":"2022-10-22T18:37:38","slug":"you-thought-fake-news-was-bad-deep-fakes-are-where-truth-goes-to-die","status":"publish","type":"post","link":"https:\/\/dfc.me\/en\/you-thought-fake-news-was-bad-deep-fakes-are-where-truth-goes-to-die\/","title":{"rendered":"You thought fake news was bad? Deep fakes are where truth goes to die"},"content":{"rendered":"\n<p>Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare? <\/p>\n\n\n\n<p>In May 2018, a&nbsp;video&nbsp;appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. <em>\u201cAs you know, I had the balls to withdraw from the Paris climate agreement,\u201d <\/em>he said, looking directly into the camera, <em>\u201cand so should you.\u201d<\/em><\/p>\n\n\n\n<p>The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a\u2019s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium\u2019s climate policy.<\/p>\n\n\n\n<p>But this anger was misdirected. The speech, it was later revealed, was nothing more than a hi-tech forgery. <\/p>\n\n\n\n<p>Sp.a claimed that they had commissioned a production studio to use machine learning to produce what is known as a \u201cdeep fake\u201d \u2013 a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done. <\/p>\n\n\n\n<p>Sp.a\u2019s intention was to use the fake video to grab people\u2019s attention, then redirect them to an online petition calling on the Belgian government to take more urgent climate action. 
The video\u2019s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity. \u201cIt is clear from the lip movements that this is not a genuine speech by Trump,\u201d a spokesperson for sp.a&nbsp;told&nbsp;Politico.<\/p>\n\n\n\n<p>As it became clear that their practical joke had gone awry, sp.a\u2019s social media team went into damage control. \u201cHi Theo, this is a playful video. Trump didn\u2019t really make these statements.\u201d \u201cHey, Dirk, this video is supposed to be a joke. Trump didn\u2019t really say this.\u201d<\/p>\n\n\n\n<p>The party\u2019s communications team had clearly underestimated the power of their forgery, or perhaps overestimated the judiciousness of their audience. Either way, this small, left-leaning political party had, perhaps unwittingly, provided a deeply troubling example of the use of manipulated video online in an explicitly political context.<\/p>\n\n\n\n<p>It was a small-scale demonstration of how this technology might be used to threaten our already vulnerable information ecosystem \u2013 and perhaps undermine the possibility of a reliable, shared reality.<\/p>\n\n\n<h4 class=\"wp-block-heading\" id=\"expert-opinions\"><strong>Expert opinions<\/strong><\/h4>\n\n\n<p>Danielle Citron, a professor of law at the University of Maryland, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.<\/p>\n\n\n\n<p>In particular, they could foresee deep fakes being exploited by purveyors of \u201cfake news\u201d. 
Anyone with access to this technology \u2013 from state-sanctioned propagandists to trolls \u2013 would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.<\/p>\n\n\n\n<p>\u201cThe marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,\u201d the report reads. \u201cDeep fakes will exacerbate this problem significantly.\u201d<\/p>\n\n\n\n<p>Citron and Chesney are not alone in these fears. In April 2018, the film director Jordan Peele and <em>BuzzFeed<\/em>&nbsp;released&nbsp;a deep fake of Barack Obama calling Trump a \u201ctotal and complete dipshit\u201d to raise awareness about how AI-generated synthetic media might be used to distort and manipulate reality.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter is-resized\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/dfcme.me\/wp-content\/uploads\/2019\/08\/deepfake-barack-obama-1030x738.png\" alt=\"\" class=\"wp-image-4904\" width=\"498\" height=\"357\" srcset=\"https:\/\/dfc.me\/wp-content\/uploads\/2019\/08\/deepfake-barack-obama-1030x738.png 1030w, https:\/\/dfc.me\/wp-content\/uploads\/2019\/08\/deepfake-barack-obama-300x215.png 300w, https:\/\/dfc.me\/wp-content\/uploads\/2019\/08\/deepfake-barack-obama.png 1173w\" sizes=\"(max-width: 498px) 100vw, 498px\" \/><\/figure>\n\n\n\n<p>In September 2018, three members of Congress sent a letter to the director of national intelligence, raising the alarm about how deep fakes could be harnessed by \u201cdisinformation campaigns in our elections\u201d.<\/p>\n\n\n\n<p>While these disturbing hypotheticals might be easy to conjure, Tim Hwang, director of the Harvard-MIT Ethics and Governance of Artificial Intelligence Initiative, is not willing to bet on deep fakes having a high impact on elections in the near future. 
Hwang has been studying the spread of misinformation on online networks for a number of years, and, with the exception of the small-stakes Belgian incident, he has yet to see any truly corrosive incidents of deep fakes \u201cin the wild\u201d.<\/p>\n\n\n\n<p>Hwang believes that this is partly because using machine learning to generate convincing fake videos still requires a degree of expertise and lots of data. \u201cIf you are a propagandist, you want to spread your work as far as possible with the least amount of effort,\u201d he said. \u201cRight now, a crude Photoshop job could be just as effective as something created with machine learning.\u201d<\/p>\n\n\n\n<p>At the same time, Hwang acknowledges that as deep fakes become more realistic and easier to produce in the coming years, they could usher in an era of forgery qualitatively different from what we have seen before. In the past, for example, if you wanted to make a video of the president saying something he didn\u2019t say, you needed a team of experts. Today, machine learning will not only automate this process, it will probably also produce better forgeries.<\/p>\n\n\n\n<p>Couple this with the fact that access to this technology will spread over the internet, and suddenly you have, as Hwang put it, \u201ca perfect storm of misinformation\u201d.<\/p>\n\n\n<h4 class=\"wp-block-heading\" id=\"technology-on-the-rise\"><strong>Technology on the rise<\/strong><\/h4>\n\n\n<p>Nonetheless, research into machine learning-powered synthetic media forges ahead.<\/p>\n\n\n\n<p>To make a convincing deep fake you usually need a neural model that is trained with a lot of reference material. Generally, the larger your dataset of photos, video, or sound, the more eerily accurate the result will be. 
But in May 2019, researchers at Samsung\u2019s AI Center in Moscow devised a method to train a model to animate a face from an extremely limited dataset: as little as a single photo. The results are surprisingly good.<\/p>\n\n\n\n<p>The researchers were able to create these &#8220;photorealistic talking head models&#8221; using convolutional neural networks: they trained the algorithm on a large dataset of talking-head videos featuring a wide variety of appearances. In this case, they used the publicly available&nbsp;VoxCeleb&nbsp;databases, which contain more than 7,000 images of celebrities from YouTube videos.<\/p>\n\n\n\n<p>This trains the program to identify what the researchers call &#8220;landmark&#8221; features of a face: the eyes, the shape of the mouth, and the length and shape of the nose bridge.<\/p>\n\n\n\n<p>This, in a way, is a leap beyond what even deep fakes and other algorithms using generative adversarial networks can accomplish. Instead of teaching the algorithm to paste one face onto another using a catalogue of expressions from a single person, it uses the facial features that are common across most humans to puppeteer a new face.<\/p>\n\n\n\n<p>As the team demonstrates, the model even works on the Mona Lisa and other single-photo still portraits. In the video, famous portraits of Albert Einstein, Fyodor Dostoyevsky, and Marilyn Monroe come to life as if they were Live Photos in your iPhone\u2019s camera roll. But as with most deep fakes, it\u2019s pretty easy to see the seams at this stage. 
Most of the faces are surrounded by visual artifacts.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/dfcme.me\/wp-content\/uploads\/2019\/08\/deepfake-mona-liza-1030x756.png\" alt=\"\" class=\"wp-image-4906\" width=\"454\" height=\"333\" srcset=\"https:\/\/dfc.me\/wp-content\/uploads\/2019\/08\/deepfake-mona-liza-1030x756.png 1030w, https:\/\/dfc.me\/wp-content\/uploads\/2019\/08\/deepfake-mona-liza-300x220.png 300w, https:\/\/dfc.me\/wp-content\/uploads\/2019\/08\/deepfake-mona-liza.png 1172w\" sizes=\"(max-width: 454px) 100vw, 454px\" \/><\/figure>\n\n\n<h4 class=\"wp-block-heading\" id=\"new-detection-methods\"><strong>New detection methods <\/strong><\/h4>\n\n\n<p>As the threat of deep fakes intensifies, so do efforts to produce new detection methods. In June 2018, researchers from the University at Albany (SUNY) published a paper outlining how fake videos could be identified by a lack of blinking in synthetic subjects. Facebook has also committed to developing machine learning models to detect deep fakes.<\/p>\n\n\n\n<p>But Hany Farid, professor of computer science at the University of California, Berkeley, is wary. Relying on forensic detection alone to combat deep fakes is becoming less viable, he believes, due to the rate at which machine learning techniques can circumvent them. \u201cIt used to be that we\u2019d have a couple of years between coming up with a detection technique and the forgers working around it. Now it only takes two to three months.\u201d<\/p>\n\n\n\n<p>This, he explains, is due to the flexibility of machine learning. 
\u201cAll the programmer has to do is update the algorithm to look for, say, changes of color in the face that correspond with the heartbeat, and then suddenly, the fakes incorporate this once imperceptible sign.\u201d <\/p>\n\n\n\n<p>Although Farid is locked in this technical cat-and-mouse game with deep fake creators, he is aware that the solution does not lie in new technology alone. \u201cThe problem isn\u2019t just that deep fake technology is getting better,\u201d he said. \u201cIt is that the social processes by which we collectively come to know things and hold them to be true or untrue are under threat.\u201d<\/p>\n\n\n<h4 class=\"wp-block-heading\" id=\"reality-apathy\"><strong><em>Reality apathy<\/em><\/strong><\/h4>\n\n\n<p>Indeed, as the fake video of Trump that spread through social networks in Belgium demonstrated \u2013 a video that, it was later revealed, was forged not with machine learning, as sp.a first claimed, but with the editing software <em>After Effects<\/em> \u2013 deep fakes don\u2019t need to be undetectable or even convincing to be believed and do damage. It is possible that the greatest threat posed by deep fakes lies not in the fake content itself, but in the mere possibility of their existence.<\/p>\n\n\n\n<p>This is a phenomenon that scholar Aviv Ovadya has called \u201creality apathy\u201d, whereby constant contact with misinformation compels people to stop trusting what they see and hear. In other words, the greatest threat isn\u2019t that people will be deceived, but that they will come to regard everything as deception.<\/p>\n\n\n\n<p>Recent polls indicate that trust in major institutions and the media is dropping. The proliferation of deep fakes, Ovadya says, is likely to exacerbate this trend.<\/p>\n\n\n\n<p>According to Danielle Citron, we are already beginning to see the social ramifications of this epistemic decay. 
\u201cUltimately, deep fakes are simply amplifying what I call the liar\u2019s dividend,\u201d she said. \u201cWhen nothing is true then the dishonest person will thrive by saying what\u2019s true is fake.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare? In May 2018, a&nbsp;video&nbsp;appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. \u201cAs you know, I had the balls to withdraw from the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":4910,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[35],"tags":[],"class_list":["post-4912","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-en"],"_links":{"self":[{"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/posts\/4912","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/comments?post=4912"}],"version-history":[{"count":0,"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/posts\/4912\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/media\/4910"}],"wp:attachment":[{"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/media?parent=4912"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/categories?post=4912"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dfc.me\/en\/wp-json\/wp\/v2\/tags?p
ost=4912"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}