Research Article

Can Information-Providing Artificial Intelligence Fight Disinformation? ChatGPT Example

Year 2024, Volume: 9 Issue: 2, 106 - 133, 31.12.2024
https://doi.org/10.56202/mbsjcs.1576832

Abstract

As communication technologies have developed, news can reach mass audiences without passing through any central gatekeeper. Audiences thus gain rapid access to virtually unlimited content, but this density of information also leaves them vulnerable to disinformation. Disinformation, which spreads as quickly as accurate information, can cause significant harm to the public. At the same time, artificial intelligence, one of the sources that enables the production and spread of disinformation, also plays an active role in detecting it. This dual role raises the question of how AI can be used most effectively and correctly to prevent problems in digital media. ChatGPT, whose performance approaches that of the human mind, is also an important medium frequently evaluated in the fight against disinformation. Accordingly, this study aims to contribute to the literature by answering the question of whether artificial intelligence, itself an important source for producing and disseminating disinformation, can detect news texts that constitute disinformation. The study was carried out using content analysis, a qualitative research method, with purposive sampling: news texts verified as “fake news” by the Directorate of Communications’ Center for Combating Disinformation were submitted to ChatGPT, an AI chatbot, and the extent to which it detected them was categorized and analyzed. The study found that ChatGPT was indecisive in its responses when asked to detect disinformation; that it was rational in offering new information on the subject while declining to give a clear verification of the disinformation; and that it adopted a guiding attitude by directing the user to various sources.
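The categorization step the abstract describes — sorting chatbot replies to verified fake-news texts into response types — can be illustrated with a minimal sketch. The category names follow the study's reported findings (indecisive, rational, guiding), but the indicator phrases and the keyword-matching approach are purely hypothetical and are not the authors' coding scheme, which was qualitative.

```python
# Hypothetical sketch: tallying chatbot replies into the response categories
# the study reports. INDICATORS phrases are illustrative assumptions, not
# the authors' actual (qualitative) coding criteria.

INDICATORS = {
    "indecisive": ("cannot verify", "not able to confirm", "may or may not"),
    "guiding":    ("check official sources", "consult", "fact-checking"),
    "rational":   ("according to available information", "context suggests"),
}

def categorize(reply: str) -> list[str]:
    """Return every category whose indicator phrases appear in the reply."""
    reply = reply.lower()
    return [cat for cat, phrases in INDICATORS.items()
            if any(p in reply for p in phrases)]

def tally(replies: list[str]) -> dict[str, int]:
    """Count how often each response category occurs across all replies."""
    counts = {cat: 0 for cat in INDICATORS}
    for r in replies:
        for cat in categorize(r):
            counts[cat] += 1
    return counts
```

A reply can fall into several categories at once — e.g. "I cannot verify this claim; please check official sources." would be counted as both indecisive and guiding, mirroring the mixed attitudes the study observed.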

References

  • Akarsu, H. (2021). Reklam araştırmalarında evren ve örnekleme. Editör: S. Karaçor, M. Gençyürek & B. Akcan. Reklam araştırmaları nitel ve nicel tasarımlar içinde (s. 45-60). Çizgi Kitabevi.
  • Aksoy, H. (2023). Folklor ve gelenek kavramlarına “ChatGPT”nin yazdığı masallar üzerinden bakmak. Korkut Ata Türkiyat Araştırmaları Dergisi, Özel Sayı 1, 524-536. https://doi.org/10.51531/korkutataturkiyat.1361382.
  • Ali, D., Fatemi, Y., Boskabadi, E., Nikfar, M., Ugwuoke, J. & Ali, H. (2024). ChatGPT in teaching and learning: a systematic review. Education Sciences, 14(6), 643. https://doi.org/10.3390/educsci14060643
  • Altınbaş, Ş. (2023). ChatGPT dijital dünyanın büyüsü teori ve uygulama. Kodlab.
  • Altıntop, M. (2023). Yapay zekâ/akıllı öğrenme teknolojileriyle akademik metin yazma: ChatGPT örneği. Süleyman Demirel Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 2(46), 186-211.
  • Arf, C. (1959). Makine düşünebilir mi ve nasıl düşünebilir? Atatürk Üniversitesi 1958-1959 öğretim yılı halk konferansları I içinde (s. 91-107). Atatürk Üniversitesi Üniversite Çalışmalarını Muhitte Yayma ve Halk Eğitimi Yayınları.
  • Asimov, I. (2004). I, robot. Bantam Dell.
  • Baum, L. F. (1900). The wonderful wizard of oz. George M. Hill Company.
  • BBC, (2016, Mart 9). Google’s AI beats world go champion in first of five matches. https://www.bbc.com/news/technology-35761246.
  • Benkler, Y., Faris, R. & Roberts, H. (2018). Network propaganda: manipulation, disinformation, and radicalization in American politics. Oxford University Press.
  • Bessi, A. & Ferrara, E. (2016). Social bots distort the 2016 U.S. Presidential election online discussion. First Monday, 21(11). https://doi.org/10.5210/fm.v21i11.7090.
  • Biswas, S. (2023b). Prospective role of chat gpt in the military: According to chatgpt. Qeios, 1-19. https://doi.org/10.32388/8WYYOD.
  • Biswas, S. S. (2023a). Role of ChatGPT in public health. Annals of Biomedical Engineering, 51(5), 868-869.
  • Bostancı, M. & Aksüt, E. (2023). Haber üretiminde yapay zekâ uygulamaları ve dezenformasyon: ChatGPT ve Bard Örneği. Editör: Y. Adıgüzel & M. Bostancı, Dijital İletişimi Anlamak-4 içinde (s. 58-71). Palet Yayınları.
  • Bozkurt, A. (2023a). ChatGPT, üretken yapay zekâ ve algoritmik paradigma değişikliği. Alanyazın, 4(1), 63-72.
  • Bozkurt, A. (2023b). Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift. Asian Journal of Distance Education, 18(1). 198-204. https://doi.org/10.5281/zenodo.7716416.
  • Čapek, K. (1921). Rossum’s universal robots. Internet Archive. https://archive.org/details/CapekRUR/page/n1/mode/2up.
  • Cave, S. & Dihal, K. (2018). Ancient dreams of intelligent machines: 3,000 years of robots. Nature, 559(7715), 473-475. https://doi.org/10.1038/d41586-018-05773-y.
  • Cellan-Jones, R. (2021, Ağustos, 21). Tech tent: Can AI write a play?. BBC. https://www.bbc.com/news/technology-58356716.
  • Descartes, R. (1998). Discourse on method and meditations on first philosophy. Hackett Publishing Company.
  • Dezenformasyonla Mücadele Merkezi. (t.y.). Dezenformasyonla Mücadele Merkezi ne yapar? Dezenformasyonla Mücadele Merkezi. https://www.dmm.gov.tr/.
  • Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., ... Wright, R. (2023). Opinion paper: So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71. 1-63. https://doi.org/10.1016/j.ijinfomgt.2023.102642.
  • Erkan, G. & Ayhan, A. (2018). Siyasal iletişimde dezenformasyon ve sosyal medya: bir doğrulama platformu olarak teyit.org. Akdeniz Üniversitesi İletişim Fakültesi Dergisi (AKİL), 29, 201-223. https://doi.org/10.31123/akil.458933.
  • Fallis, D. (2015). What is disinformation? Library Trends, 63(3), 401-426. https://doi.org/10.1353/lib.2015.0014
  • Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. First Monday, 28(11), 1-24. https://doi.org/10.5210/fm.v28i11.13346.
  • Floridi, L. & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds & Machines, 30, 681-694.
  • Gelfert, A. (2018). Fake news: a definition. Informal Logic, 38(1), 84-117. https://doi.org/10.22329/il.v38i1.5068.
  • Grace, K., Salvatier, J., Dafoe, A., Zhang, B. & Evans, O. (2018), Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754.
  • Guston, D. H., Finn, E. & Robert, J. S. (2017). Frankenstein Mary Shelley. The Mit Press.
  • Harari, Y. N. (2016). Homo deus: yarının kısa bir tarihi. P. N. Taneli (Çev.). Kolektif Kitap.
  • Haupt, M. R., Yang, L., Purnat, T., & Mackey, T. (2024). Evaluating the Influence of Role-Playing Prompts on ChatGPT’s Misinformation Detection Accuracy: Quantitative Study. JMIR infodemiology, 4(1), e60678.
  • Hoes, E., Altay, S. & Bermeo, J. (2023). Using ChatGPT to fight misinformation: ChatGPT nails 72% of 12,000 verified claims. PsyArXiv.
  • Holmes, W., Bialik, M. & Fadel, C. (2019). Artificial intelligence in education. promise and implications for teaching and learning. Center for Curriculum Redesign.
  • Holmes, W., Persson, J., Chounta, I.-A., Wasson, B. & Dimitrova, V. (2022). Artificial intelligence and education. A critical view through the lens of human rights, democracy, and the rule of law. Council of Europe. https://rm.coe.int/artificial-intelligence-and-education-a-critical-view-through-the-lens/1680a886bd.
  • Hu, K. (2023, Şubat 2). ChatGPT sets record for fastest-growing user base - analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.
  • Huang, Y., Shu, K., Yu, P. S. & Sun, L. (2023). FakeGPT: fake news generation, explanation and detection of large language models. Semantic Scholar. https://www.semanticscholar.org/paper/FakeGPT%3A-Fake-News-Generation%2C-Explanation-and-of-Huang-Sun/4726ff813876b8e420d8c635dd2354693a8dc932.
  • Karakoç Keskin, E. (2023). Yapay zekâ sohbet robotu chatgpt ve Türkiye internet gündeminde oluşturduğu temalar. Yeni Medya Elektronik Dergisi, 7(2), 114-131. https://doi.org/10.17932/IAU.EJNM.25480200.2023/ejnm_v7i2003.
  • Kavak, A. C. (2023, Ekim 30). Yapay zekânın potansiyelini açığa çıkarın: Prompt engineering için uzman teknikler. Zeo. https://zeo.org/tr/kaynaklar/blog/yapay-zekanin-potansiyelini-aciga-cikarin-prompt-engineering-icin-uzman-teknikler.
  • Kaynak, A. (2024). Hiçbir şey eskisi gibi değil: ChatGPT. Mediacat, 31(343), 88-89.
  • Kırık, A. M. & Özkoçak, V. (2023). Medya ve iletişim bağlamında yapay zekâ tarihi ve teknolojisi: ChatGPT ve deepfake ile gelen dijital dönüşüm. Karadeniz Uluslararası Bilimsel Dergi (58), 73-99. https://doi.org/10.17498/kdeniz.1308471.
  • Kose, M. (2024, Eylül 28). Yapay zeka ile metin analitiği. Medium. https://medium.com/@muratkose123/yapay-zeka-ile-metin-analiti%C4%9Fi-40c2942c1c08.
  • Lewandowsky, S., Ecker, U. K. H. & Cook, J. (2017). Beyond misinformation: understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6, 353-369. https://doi.org/10.1016/j.jarmac.2017.07.008.
  • Liu, M., Ren, Y., Nyagoga, L. M., Stonier, F., Wu, Z. & Yu, L. (2023). Future of education in the era of generative artificial intelligence: Consensus among Chinese scholars on applications of ChatGPT in schools. Future in Educational Research, 1(1), 72-101. https://doi.org/10.1002/fer3.10.
  • Liu, V. & Chilton, L. B. (2022, April). Design guidelines for prompt engineering text-to-image generative models. S. Barbosa, C. Lampe, C. Appert, D. A. Shamma, S. Drucker, J. Williamson & K. Yatani (Eds.), Proceedings of the 2022 CHI conference on human factors in computing systems (pp. 1-23). Association for Computing Machinery. https://doi.org/10.1145/3491102.
  • Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S. & Wang, Z. (2023). ChatGPT and a new academic reality: AI-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74, 570-581. https://doi.org/10.1002/asi.24750.
  • Manjoo, F. (2020, Temmuz 29). How do you know a human wrote this? The New York Times. https://www.nytimes.com/2020/07/29/opinion/gpt-3-ai-automation.html.
  • Marcuse, H. (2007). One dimensional man. Routledge.
  • Marwick, A. & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute, 7-19. https://datasociety.net/library/media-manipulation-and-disinfo-online/
  • McCarthy, J., Minsky, M., Rochester, N. & Shannon, C. (1955). A proposal for dartmouth summer research project on artificial intelligence. Stanford University. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf.
  • Moor, J. (2006). The dartmouth college artificial intelligence conference: The next fifty years. AI Magazine, 27(4), 87. https://doi.org/10.1609/aimag.v27i4.1911.
  • Norman, J. (2024, Temmuz 15). Alan Turing's contributions to artificial intelligence. History of Information. https://www.historyofinformation.com/detail.php?id=4289.
  • Özçıtak, İ. (2024). Bireyselleştiremediklerimizden misiniz?. Mediacat, 31(344), 60.
  • Pagnamenta, R. (2020, Ağustos 26). Forget deepfakes – we should be very worried about AI-generated text. The Telegraph. https://www.telegraph.co.uk/technology/2020/08/26/forget-deepfakes-ai-generated-text-should-worried/.
  • Postman, N. (2020). Televizyon öldüren eğlence. Ayrıntı Yayınları.
  • Rudolph, J., Tan, S. & Tan, S. (2023), Chatgpt: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 1-22. https://doi.org/10.37074/jalt.2023.6.1.9
  • Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern Approach. Prentice Hall.
  • Rutherford, N. (2023, Şubat). About mis-disinformation, its potential impacts, and the challenges to finding effective countermeasures. Information Integrity Lab. https://pdinstitute.uottawa.ca/common/Uploaded%20files/PDI%20files/About-Mis-disinformation-and-its%20Potential-Impacts-Nicolas-Rutherford.pdf.
  • Sarı, F. (2021). Cahit Arf’in “makine düşünebilir mi ve nasıl düşünebilir?” adlı makalesi üzerine bir çalışma. TRT Akademi, 6(13), 812-833. https://doi.org/10.37679/trta.962940.
  • Searle, J. R. (2002). Can computers think? D. J. Chalmers (Ed.), Philosophy of mind: classical and contemporary readings. Oxford University Press. (Original work published 1980).
  • Sebastian, G., & Sebastian, S. R. (2024). Exploring ethical implications of ChatGPT and other AI chatbots and regulation of disinformation propagation. Annals of Engineering Mathematics and Computational Intelligence, 1(1), 1-12.
  • Senekal, B. & Brokensha, S. (2023). Is ChatGPT a friend or foe in the war on misinformation? A South African perspective. Journal for Communication Studies in Africa, 42(2), 3-16. https://doi.org/10.36615/jcsa
  • Shah, C. (2022), The rise of AI chat agents and the discourse with dilettantes. Information Matters, 2(12). https://informationmatters.org/2022/12/the-rise-of-ai-chat-agents-and-the-discourse-with-dilettantes.
  • Silva, T. P., Ocampo, T. S. C., Alencar-Palha, C., de Oliviera-Santos, C., Takeshita, W. M. & de Oliviera M. L. (2023). ChatGPT: a tool for scientific writing or a threat to integrity? The British Journal of Radiology, 96(1152), 20230430. https://doi.org/10.1259/bjr.20230430
  • Stone, P., Dunphy, D., Smith, M. & Ogilvie, D. (1966). The general inquirer: a computer approach to content analysis. The MIT Press.
  • Temel, E. A. (2024). Yarının Zekâsı. Mediacat, 31(348), 35-38.
  • Turing, A. (1948). Intelligent machinery. National Physical Laboratory.
  • Turing, A. (1950). Computing machinery and intelligence. Mind: A Quarterly Review of Psychology and Philosophy, 59(236), 433-460.
  • UNICEF. (2021). Policy guidance on AI for children. UNICEF. https://www.unicef.org/innocenti/reports/policy-guidance-ai-children.
  • Uyar, T. (2024). ChatGPT’nin serbest mantıksal safsata tespitinde kullanımı, Yeni Medya Elektronik Dergisi, 8(1), 144-179.
  • Vaswani, A., Shazeer, R., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. & Polosukhin, I. (2017). Attention is all you need. U. von Luxburg, I. Guyon, S. Bengio, H. Wallach & R. Fergus (Eds.), Proceedings of the 31st International Conference on Neural Information Processing Systems. (pp 6000-6010). Red Hook.
  • Vosoughi, S., Roy, D. & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559.
  • Wardle, C. & Derakhshan, H. (2017, Eylül 27). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html
  • Wardle, C. (t.y.). The age of information disorder. C. Silverman (Ed.), Verification handbook for disinformation and media manipulation. https://s3.eu-central-1.amazonaws.com/datajournalismcom/handbooks/Verification-Handbook-3.pdf.
  • Weik, M. H. (1961). The ENIAC story. Ordnance, 45(244), 571-575. https://www.jstor.org/stable/45363261.
  • World Economic Forum. (2024, Ocak). The global risks report 19th edition. World Economic Forum. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf.
  • Yıkar, G. (2023). Farsça dil eğitiminde yapay zekâ (AI) destekli çeviri ve metin üretme üzerine bir değerlendirme. RumeliDE Dil ve Edebiyat Araştırmaları Dergisi, (36), 1204-1221. https://doi.org/10.29000/rumelide.1369151.
  • Zaitsu, W. & Jin, M. (2023). Distinguishing ChatGPT(-3.5, -4)-generated and human-written papers through Japanese stylometric analysis. PLoS ONE, 18(8). https://doi.org/10.1371/journal.pone.0288453
  • Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F. & Choi, Y. (2020). Defending against neural fake news. H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. B. Fox (Eds.), International Conference on Neural Information Processing Systems: 812, (pp. 9054 – 9065). Curran Associates Inc.
  • Zhou, E. & Lee, D. (2023). Generative AI, human creativity and art. PNAS Nexus, 3(3), 1-8. https://doi.org/10.1093/pnasnexus/pgae052.
  • Zhou, J., Müller, H., Holzinger, A. & Chen, F. (2024). Ethical ChatGPT: concerns, challenges, and commandments. Electronics, 13(17), 1-8. https://doi.org/10.48550/arXiv.2305.10646.

There are 79 citations in total.

Details

Primary Language Turkish
Subjects Communication Studies, Communication Systems, New Communication Technologies
Journal Section Article
Authors

Aytaç Burak Dereli 0000-0002-6449-7509

Erdem Taşdemir 0000-0002-9781-4099

Hilal Sevimli 0000-0002-9043-5643

Publication Date December 31, 2024
Submission Date October 31, 2024
Acceptance Date November 22, 2024
Published in Issue Year 2024 Volume: 9 Issue: 2

Cite

APA Dereli, A. B., Taşdemir, E., & Sevimli, H. (2024). Enformasyon Sağlayan Yapay Zekâ Dezenformasyonla Mücadele Edebilir Mi? ChatGPT Örneği. Middle Black Sea Journal of Communication Studies, 9(2), 106-133. https://doi.org/10.56202/mbsjcs.1576832