Review

IS SEEING ENOUGH TO BELIEVE? POLITICAL DEEPFAKE CONTENTS AS A DISTINCTIVE FORM OF VISUAL DISINFORMATION

Year 2022, Volume: 17, Issue: 57, 50-72, 30.01.2022
https://doi.org/10.14783/maruoneri.908542

Abstract

The development of artificial intelligence-based communication technologies carries significant risks for the manipulation of information in the relationship between humans and machines. Deepfake creates a world in which, despite the word "artificial" in the name of the technology that produces it, the resulting content is increasingly perceived as real. Defined as artificial intelligence-based visual disinformation, deepfake has brought the problem of reliability inherent in information to the fore. For this reason, it is considered important to revisit the literature on the risks that sharing disinformation poses for information manipulation from the perspective of deepfake, which can be regarded as a form of visual disinformation. Deepfake has made it possible to mass-produce synthetic videos that closely resemble real ones, and deepfake videos containing political discourse can already be found in countries other than Turkey. The starting point of this study is therefore to make a comprehensive contribution, in the context of deepfake, to the literature on disinformation in Turkey. Accordingly, the technology underlying the creation of deepfakes as a form of visual disinformation is examined, the methods that enable deepfake technology to manipulate audio-visual media are presented, and, after compiling the information needed to understand how deepfake videos work, the dilemmas posed by deepfake-based manipulative content in the specific context of political communication are discussed.
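As an illustration of the creation process referred to above, the sketch below shows the shared-encoder/two-decoder autoencoder that the deepfake literature cited in the references (e.g., Nguyen et al., 2019) commonly describes as the basic face-swap mechanism. It is a minimal sketch, not the authors' implementation: it assumes PyTorch, and random tensors stand in for aligned face crops of persons A and B; a real pipeline would add face detection, alignment, blending, and far longer training.

```python
# Minimal, illustrative sketch (assumed PyTorch; not from the article):
# a shared encoder with one decoder per identity, the autoencoder scheme
# commonly described as the core of face-swap deepfakes.
import torch
import torch.nn as nn

def down(cin, cout):   # halves spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1), nn.ReLU())

def up(cin, cout):     # doubles spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1), nn.ReLU())

encoder = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))   # shared across identities

def make_decoder():                                                  # one decoder per identity
    return nn.Sequential(up(128, 64), up(64, 32),
                         nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)   # stand-ins for aligned face crops of person A
faces_b = torch.rand(8, 3, 64, 64)   # stand-ins for aligned face crops of person B

for step in range(5):                # toy training loop: each decoder rebuilds "its" face
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The swap: encode A's pose/expression, decode with B's decoder ->
# a synthetic frame of B "performing" A's expression.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)                 # torch.Size([8, 3, 64, 64])
```

Because both identities pass through the same encoder, the latent code captures pose and expression in a largely identity-agnostic way; decoding person A's latent code with person B's decoder is what yields the swapped, synthetic face.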

References

  • Agarwal, S., Farid, H., Gu, Y., He, M., Nagano, K. & Li, H. (2019). Protecting world leaders against deep fakes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 38-45.
  • Aktan, S. (2018). Türkiye Sahte Haber ve Dezenformasyonda Zirveye Oturdu. https://tr.euronews.com/2018/06/15/turkiye-sahte-haber-ve-dezenformasyonda-zirveye-oturdu Accessed: 10.04.2021.
  • Baerthlein, T. (2016). The Rise of Political Bots On Social Media. https://www.dw.com/en/the-rise-of-political-bots-on-social-media/a-19450562 Accessed: 10.04.2021.
  • Barari, S., Lucas, C. & Munger, K. (2021). Political deepfake videos misinform the public, but no more than other fake media. OSF Preprints. doi:10.31219/osf.io/cdfh3
  • Baudrillard, J. (2014). Simülakrlar ve Simülasyon (O. Adanır, Trans.). Ankara: Doğu Batı.
  • BBC. (2020). Deepfake queen to deliver Channel 4 Christmas message. https://www.bbc.com/news/technology-55424730 Accessed: 15.06.2021.
  • Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139.
  • Beridze, I., & Butcher, J. (2019). When seeing is no longer believing. Nature Machine Intelligence. doi:10.1038/s42256-019-0085-5.
  • Berinsky, A. J. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47(2), 241–262.
  • Buo, A. S. (2020). The emerging threats of deepfake attacks and countermeasures. DOI: 10.13140/RG.2.2.23089.81762
  • Chadwick, A., Vaccari, C. & O’Loughlin, B. (2018). Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing. New Media & Society, 20(11), 4255–4274.
  • Chadwick, A. (2019). The new crisis of public communication: Challenges and opportunities for future research on digital media and politics. Online Civic Culture Centre. https://www.lboro.ac.uk/research/online-civic-culture-centre/news-events/articles/o3c-2-crisis/ Accessed: 02.04.2021.
  • Chawla, R. (2019). Deepfakes: How a pervert shook the world. International Journal of Advance Research and Development, 4(6), 4–8.
  • Chen, B. X. & Metz, C. (2019). Google’s Duplex uses A.I. to mimic humans (sometimes). The New York Times. https://www.nytimes.com/2019/05/22/technology/personaltech/ai-google-duplex.html Accessed: 03.01.2021.
  • Chesney, B. & Citron, D. (2019). Deep fakes: looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753-1820.
  • Coldewey, D. (2020). Facebook’s ‘deepfake detection challenge’ yields promising early results. TechCrunch. https://social.techcrunch.com/2020/06/12/facebooks-deepfakedetection-challenge-yields-promising-early-results/ Accessed: 10.12.2020.
  • Dobber, T., Metoui, N., Trilling, D., Helberger, N. & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91.
  • Güera, D. & Delp, E. (2018). Deepfake video detection using recurrent neural networks. IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand.
  • Flynn, D. J., Nyhan, B. & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38, 127–150.
  • Frenda, S. J., Knowles, E. D., Saletan, W., & Loftus, E. F. (2013). False memories of fabricated political events. Journal of Experimental Social Psychology, 49(2), 280–286.
  • Giasano, A. (2019). Deep fakes: a challenge of the post-truth era. Romanian Cyber Security Journal, 2(1), 67–74.
  • Goel, S., Anderson, A., Hofman, J., & Watts, D. J. (2015). The structural virality of online diffusion. Management Science, 62(1), 180–196.
  • Grabe, M. E. & Bucy, E. P. (2009). Image bite politics: News and the visual framing of elections. Oxford University Press.
  • Graber, D. A. (1990). Seeing is remembering: How visuals contribute to learning from television news. Journal of Communication, 40(3), 134–156.
  • Hall, H. (2018). Deepfake videos: When seeing isn't believing. Catholic University Journal of Law and Technology, 27(1), 51-76.
  • Howes, S. A. (2018). Digital replicas, performers’ livelihoods, and sex scenes: Likeness rights for the 21st century. Columbia Journal of Law & Arts 42, 345-349.
  • Hovland, C. I. & W. Weiss. (1951). The influence of source credibility on communication effectiveness. Public Opinion Quarterly 15, no.4: 635–650.
  • Kahn, J. (2020). These deepfake videos of Putin and Kim have gone viral. https://fortune.com/2020/10/02/deepfakes-putin-kim-jong-un-democracy-disinformation/ Accessed: 16.06.2021.
  • Kietzmann, J., Lee, L. W., McCarthy, I. P. & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63, 135–146.
  • Kirchengast, T. (2020). Deepfakes and image manipulation: criminalisation and control. Information & Communications Technology Law, 29(3), 308–323.
  • McIntyre, L. (2018). Post-truth. Cambridge: MIT Press.
  • Messing, S. & Westwood, S. J. (2012). Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research, 41(8), 1042–1063.
  • Murata, K., Orito, Y., Yamazaki, T. & Shimizu, K. (2020). Post-truth society: The AI-driven society where no one is responsible. In J. Borondo-Pelegrin, M. Arias-Oliva, K. Murata & A. M. Lara Palma (Eds.), Paradigm Shifts in ICT Ethics: Proceedings of ETHICOMP 2020, 18th International Conference on the Ethical and Social Impacts of ICT, Logroño, Spain, June 2020.
  • Newman, E. J., Garry, M., Unkelbach, C., Bernstein, D. M., Lindsay, D., & Nash, R. A. (2015). Truthiness and falsiness of trivia claims depend on judgmental contexts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(5), 1337–1348.
  • Nguyen, T. T., Nguyen, M. C., Nguyen, T. D., Nguyen, T. D., Nahavandi, S. (2019). Deep learning for deepfakes creation and detection: A survey. arXiv preprint arXiv:1909.11573
  • Nyilasy, G. (2019). Fake news: When the dark side of persuasion takes over. International Journal of Advertising 38, no.2: 336–42.
  • Pancer, E., & Poole, M. (2016). The popularity and virality of political social media: Hashtags, mentions, and links predict likes and retweets of 2016 US presidential nominees’ tweets. Social Influence, 11(4), 259–270.
  • Perot, E. & Mostert, F. (2020). Fake it till you make it: an examination of the US and English approaches to persona protection as applied to deepfakes on social media. Journal of Intellectual Property Law & Practice, 15(1), 32–39.
  • Petersen, M. B., Osmundsen, M. & Arceneaux, K. (2018). A “need for chaos” and the sharing of hostile political rumours in advanced democracies. PsyArXiv Preprints. https://psyarxiv.com/6m4ts/
  • Prior, M. (2013). Visual political knowledge: A different road to competence? Journal of Politics, 76(1), 41–57.
  • Rojecki, A., & Meraz, S. (2016). Rumors and factitious informational blends: The role of the web in speculative politics. New Media & Society, 18(1), 25–43.
  • Ruiz, D. (2020). Deepfakes laws and proposals flood US. Malwarebytes Labs. https://blog.malwarebytes.com/artificial-intelligence/2020/01/deepfakes-laws-and-proposalsflood-us/ Accessed: 12.01.2021.
  • Qayyum, A., Qadir, J., Janjua, M. U., & Sher, F. (2019). Using Blockchain to Rein in the New Post-Truth World and Check the Spread of Fake News. IT Professional, 21(4), 16–24.
  • Schwarz, N., Sanna, L. J., Skurnik, I., & Yoon, C. (2007). Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Advances in Experimental Social Psychology, 39, 127–161.
  • Solsman, J. (2020). Deepfakes’ threat to 2020 US election isn’t what you’d think. CNET. https://www.cnet.com/features/deepfakes-threat-to-the-2020-us-election-isnt-what-youdthink/ Accessed: 15.01.2021.
  • Stenberg, G. (2006). Conceptual and perceptual factors in the picture superiority effect. European Journal of Cognitive Psychology, 18(6), 813–847.
  • Sundar, S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. Metzger & A. Flanagin (Eds.), Digital media, youth, and credibility, MIT Press. 73–100.
  • TASAM. (2019). Sentetik Gerçeklik Teknolojisi (Deepfake): Derin-Sahte Ürün ve Savunma Ekosistemi İnşası Raporu. Brains Türkiye Uygulama Programı. https://tasam.org/tr-TR/Icerik/61765/sentetik_gerceklik_teknolojisi_deep_fake_derin-sahte_urun_ve_savunma_ekosistemi_insasi_ Accessed: 23.02.2021.
  • Temir, E. (2020). Deepfake: New era in the age of disinformation & end of reliable journalism. Selçuk İletişim Dergisi, 13(2), 1009–1024.
  • The Brussels Times. (2020). XR Belgium posts deepfake of Belgian premier linking Covid-19 with climate crisis. https://www.brusselstimes.com/news/belgium-all-news/politics/106320/xr-belgium-posts-deepfake-of-belgian-premier-linking-covid-19-with-climate-crisis/ Accessed: 18.06.2021.
  • Thorson, K., ve Wells, C. (2016). Curated flows: A framework for mapping media exposure in the digital age. Communication Theory, 26(3), 309–328
  • Twitter. (2018, April 17). You won’t believe what Obama says in this video! https://twitter.com/BuzzFeed/status/986257991799222272 Accessed: 02.02.2021.
  • Vaccari, C. (2017). Online mobilization in comparative perspective: Digital appeals and political engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), 69–88.
  • Vaccari, C. & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 1–13.
  • Villasenor, J. (2019). Artificial intelligence, deepfakes, and the uncertain future of truth. https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/ Accessed: 15.01.2021.
  • Yadav, D., & Salmani, S. (2019). Deepfake: A Survey on Facial Forgery Technique Using Generative Adversarial Network. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). Jun 27, 2019 - Jun 28, 2019, Secunderabad, India.
  • Wagner, L. T., Blewer, A. (2019). “The word real is no longer real”: Deepfakes, gender, and the challenges of AI-Altered video. Open Information Science. 3, 32–46.
  • Waisbord, S. (2018). Truth is what happens to news: On journalism, fake news, and post-truth. Journalism Studies, 19(13), 1866–1878.
  • Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review, 9(11): 40-53.
  • Westerman, D., Spence, P. R. & Van Der Heide, B. (2014). Social media as information source: Recency of updates and credibility of information. Journal of Computer-Mediated Communication, 19(2), 171–183. https://doi.org/10.1111/jcc4.12041 Accessed: 04.04.2020.
  • Whyte, C. (2020). Deepfake news: AI-enabled disinformation as a multi-level public policy challenge. Journal of Cyber Policy, 5(2), 199–217.
  • Witness. (2018, June 11). Mal-uses of AI-generated synthetic media and deepfakes: Pragmatic solutions discovery convening. http://www.mediafire.com/file/q5juw7dc3a2w8p7/Deepfakes_Final.pdf/file Accessed: 18.01.2021.
  • Witten, I. B., & Knudsen, E. I. (2005). Why seeing is believing: Merging auditory and visual worlds. Neuron, 48(3), 489–496.
  • You Won’t Believe What Obama Says In This Video! [Video]. YouTube. https://www.youtube.com/watch?v=cQ54GDm1eL0 Accessed: 16.06.2021.
  • Zannettou, S., Sirivianos, M., Blackburn, J., & Kourtellis, N. (2019). The Web of False Information. Journal of Data and Information Quality, 11(3), 1–37.
  • Zimmermann, F. & Kohring, M. (2020). Mistrust, disinforming news, and vote choice: A panel survey on the origins and consequences of believing disinformation in the 2017 German parliamentary election. Political Communication, 37, 215–237.


Details

Primary Language Turkish
Journal Section Makale Başvuru (Article Submission)
Authors

Elif Karakoç 0000-0002-2831-2247

Burcu Zeybek 0000-0002-2391-5727

Publication Date January 30, 2022
Published in Issue Year 2022, Volume 17, Issue 57

Cite

APA Karakoç, E., & Zeybek, B. (2022). GÖRMEK İNANMAYA YETER Mİ? GÖRSEL DEZENFORMASYONUN AYIRT EDİCİ BİÇİMİ OLARAK SİYASİ DEEPFAKE İÇERİKLER. Öneri Dergisi, 17(57), 50-72. https://doi.org/10.14783/maruoneri.908542


This website is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Öneri Dergisi

Marmara Üniversitesi Sosyal Bilimler Enstitüsü

Göztepe Campus, Institutes Building, Floor 5, 34722 Kadıköy/İstanbul

e-ISSN: 2147-5377