Research Article

Lip Reading Using Various Deep Learning Models with Visual Turkish Data

Year 2024, Volume: 37, Issue: 3, 1190-1203, 01.09.2024
https://doi.org/10.35378/gujs.1239207

Abstract

In Human-Computer Interaction, lip reading is essential and remains an open research problem. Over the last decades, there have been many studies on Automatic Lip-Reading (ALR) in different languages, which is important for the societies in which such applications are deployed. As in other machine learning and artificial intelligence applications, Deep Learning (DL) based classification algorithms have been applied to ALR to improve its performance. Few of these studies, however, have addressed the Turkish language. In this study, we took a multifaceted approach to the challenges inherent in Turkish lip reading research. First, we established a foundation by creating an original dataset curated for this investigation. Recognizing the importance of data quality and diversity, we applied three image data augmentation techniques: sigmoidal transform, horizontal flip, and inverse transform. These augmentations not only improved the quality of the dataset but also introduced a rich spectrum of variations, strengthening its utility. Building on this augmented dataset, we applied state-of-the-art DL models: Convolutional Neural Networks (CNN), known for extracting intricate visual features; Long Short-Term Memory (LSTM), adept at capturing sequential dependencies; and Bidirectional Gated Recurrent Units (BGRU), effective at handling complex temporal data. These models were selected to leverage the potential of the visual Turkish lip reading dataset. The dataset was gathered with the primary objective of augmenting the existing corpus of Turkish-language datasets, enriching Turkish language research while also serving as a benchmark reference. The performance of the applied models was compared in terms of precision, recall, and F1 metrics. According to the experimental results, the BGRU and LSTM models gave identical results up to the fifth decimal place, and BGRU had the shortest training time.
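The three augmentation techniques named above are standard pointwise and geometric image operations. A minimal NumPy sketch follows, assuming grayscale lip-region frames normalized to [0, 1]; the cutoff and gain values are illustrative assumptions, not parameters reported in the paper.

```python
# Illustrative implementations of the three augmentations named in the
# abstract; parameter values are assumptions, not the authors' settings.
import numpy as np

def sigmoidal_transform(img: np.ndarray, cutoff: float = 0.5, gain: float = 10.0) -> np.ndarray:
    """Sigmoid contrast adjustment (same form as skimage's adjust_sigmoid)."""
    return 1.0 / (1.0 + np.exp(gain * (cutoff - img)))

def horizontal_flip(img: np.ndarray) -> np.ndarray:
    """Mirror the frame left-to-right."""
    return img[:, ::-1]

def inverse_transform(img: np.ndarray) -> np.ndarray:
    """Invert intensities (photographic negative)."""
    return 1.0 - img

frame = np.random.rand(64, 64)  # stand-in for a cropped lip-region frame
augmented = [f(frame) for f in (sigmoidal_transform, horizontal_flip, inverse_transform)]
```

Applied to every frame of a clip, each transform yields an additional training sample with the same label, which is how augmentation enlarges a small visual-speech dataset.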
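The model comparison described above (per-frame CNN features feeding an LSTM or a BGRU over the frame sequence, evaluated with precision, recall, and F1) can likewise be sketched in Keras. The frame count, frame size, layer widths, and class count below are assumptions for illustration; the paper's exact architecture is not reproduced here.

```python
# A minimal sketch of a CNN + recurrent word classifier in the spirit of
# the pipeline described in the abstract. NUM_FRAMES, the 64x64 frame
# size, layer widths, and NUM_CLASSES are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, NUM_CLASSES = 20, 64, 64, 10  # assumed shapes

def build_model(recurrent: str = "bgru") -> tf.keras.Model:
    # Per-frame CNN feature extractor, applied to every frame in the clip.
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])
    # The recurrent layer models lip motion across the frame sequence;
    # swapping BGRU for LSTM is the comparison the abstract describes.
    rnn = (layers.Bidirectional(layers.GRU(128)) if recurrent == "bgru"
           else layers.LSTM(128))
    model = models.Sequential([
        layers.TimeDistributed(cnn, input_shape=(NUM_FRAMES, H, W, 1)),
        rnn,
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# After training, the abstract's metrics can be computed with scikit-learn:
#   from sklearn.metrics import precision_recall_fscore_support
#   p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
```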

Supporting Institution

Aselsan-Bites

Project Number

N/A

Acknowledgments

We are grateful to Recai Yavuz for his endless support.

References

  • [1] Fisher, C. G., “Confusions among visually perceived consonants”, Journal of Speech, Language, and Hearing Research, 11(4): 796–804, (1968).
  • [2] Easton, R. D., and Basala, M., “Perceptual dominance during lipreading”, Perception and Psychophysics, 32(6): 562–570, (1982).
  • [3] Lesani, F. S., Ghazvini, F. F., and Dianat, R., “Mobile phone security using automatic lip reading”, 9th International Conference on e-Commerce in Developing Countries: With focus on e-Business, Isfahan, Iran, 1-5, (2015).
  • [4] Mathulaprangsan, S., Wang, C. Y., Frisky, A. Z. K., Tai, T. C., and Wang, J. C., “A survey of visual lip reading and lip-password verification”, International Conference on Orange Technologies (ICOT), Hong Kong, China, 22-25, (2015).
  • [5] Bahdanau, D., Chorowski, J., Serdyuk, D., Brakel, P., and Bengio, Y., “End-to-end attention-based large vocabulary speech recognition”, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, Shanghai, China, 4945-4949, (2016).
  • [6] Huang, J. T., Li, J., and Gong, Y., “An analysis of convolutional neural networks for speech recognition”, IEEE International Conference on Acoustics, Speech and Signal Processing, South Brisbane, QLD, Australia, 4989–4993, (2015).
  • [7] Miao, Y., Gowayyed, M., and Metze, F., “EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding”, IEEE Workshop on Automatic Speech Recognition and Understanding, 167–174, (2016).
  • [8] Chae, H., Kang, C. M., Kim, B., Kim, J., Chung, C. C., and Choi, W., “Autonomous Braking System via Deep Reinforcement Learning”, ArXiv, abs/1702.02302, (2017).
  • [9] Soltani, F., Eskandari, F., and Golestan, S., “Developing a Gesture-Based Game for Deaf/Mute People Using Microsoft Kinect”, 2012 Sixth International Conference on Complex, Intelligent, and Software Intensive Systems, Palermo, Italy, 491-495, (2012).
  • [10] Tan, J., Nguyen, C. T., and Wang, X., “SilentTalk: Lip reading through ultrasonic sensing on mobile phones”, IEEE INFOCOM 2017 - IEEE Conference on Computer Communications, Atlanta, GA, USA, 1-9, (2017).
  • [11] Lu, L., Yu, J., Chen, Y., Liu, H., Zhu, Y., Kong, L., and Li, M., “Lip reading-based user authentication through acoustic sensing on smartphones”, IEEE/ACM Transactions on Networking, 27(1): 447–460, (2019).
  • [12] Tan, J., Wang, X., Nguyen, C., and Shi, Y., “Silentkey: A new authentication framework through ultrasonic-based lip reading”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(1): 1–18, (2018).
  • [13] Chung, J. S., Senior, A., Vinyals, O., and Zisserman, A., “Lip reading sentences in the wild”, 2017 IEEE Conference on Computer Vision and Pattern Recognition, 6447-6456, (2017). DOI: https://doi.org/10.1109/cvpr.2017.367
  • [14] Iwano, K., Yoshinaga, T., Tamura, S., and Furui, S., “Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images”, Hindawi Publishing Corporation EURASIP Journal on Audio, Speech, and Music Processing, 2007: 0-9, (2007).
  • [15] Fenghour, S., Chen, D., Guo, K., Li, B., and Xiao, P., “Deep learning-based automated lip-reading: A survey”, IEEE Access, 9: 121184–121205, (2021).
  • [16] Pandey, L., and Arif, A. S., “LipType: A Silent Speech Recognizer Augmented with an Independent Repair Model”, In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, Article 1: 1–19, (2021).
  • [17] Chitu, A., and Rothkrantz, L., “Visual Speech Recognition Automatic System for Lip Reading of Dutch”, Journal on Information Technologies and Control, 7(3): 2-9, Simolini-94, Sofia, Bulgaria, (2009).
  • [18] Faisal, M., and Manzoor, S., “Deep Learning for Lip Reading using Audio-Visual Information for Urdu Language”, ArXiv, abs/1802.05521, (2018).
  • [19] Haq, M. A., Ruan, S. J., Cai, W. J., and Li, L. P. H., “Using Lip Reading Recognition to Predict Daily Mandarin Conversation”, IEEE Access, 10: 53481-53489, (2022).
  • [20] Zhang, S., Ma, Z., Lu, K., Liu, X., Liu, J., Guo, S., Zomaya, A. Y., Zhang, J., and Wang, J., “HearMe: Accurate and Real-time Lip Reading based on Commercial RFID Devices”, IEEE Transactions on Mobile Computing, early access, 1-14, (2022).
  • [21] Peng, C., Li, J., Chai, J., Zhao, Z., Zhang, H., and Tian, W., “Lip Reading Using Deformable 3D Convolution and Channel-Temporal Attention”, 13532, In: Pimenidis, E., Angelov, P., Jayne, C., Papaleonidas, A., Aydin, M. (eds) Artificial Neural Networks and Machine Learning. Lecture Notes in Computer Science, Springer, Cham, 707-718, (2022).
  • [22] Xue, B., Hu, S., Xu, J., Geng, M., Liu, X., and Meng, H., “Bayesian Neural Network Language Modeling for Speech Recognition”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30: 2900-2917, (2022).
  • [23] Ozcan, T., and Basturk, A., “Lip Reading Using Convolutional Neural Networks with and without Pre-Trained Models”, Balkan Journal of Electrical and Computer Engineering, 7(2), (2019).
  • [24] Fernandez-Lopez, A., and Sukno, F. M., “Survey on automatic lip-reading in the era of deep learning”, Image and Vision Computing, 78: 53–72, (2018).
  • [25] Fenghour, S., Chen, D., Guo, K., and Xiao, P., “Lip reading sentences using deep learning with only visual cues”, IEEE Access, 8: 215516–215530, (2020).
  • [26] Graves, A., Fernandez, S., Gomez, F., and Schmidhuber, J., “Connectionist temporal classification: labeling unsegmented sequence data with recurrent neural networks”, In Proceedings of the 23rd international conference on Machine learning, Association for Computing Machinery, New York, NY, USA, 369–376, (2006).
  • [27] Cooke, M., Barker, J., Cunningham, S., and Shao, X., “An audio-visual corpus for speech perception and automatic speech recognition”, The Journal of the Acoustical Society of America, 120(5): 2421–2424, (2006).
  • [28] Berkol, A., Tümer-Sivri, T., Pervan-Akman, N., Çolak, M., and Erdem, H., “Visual Lip Reading Dataset in Turkish”, Data, 8(1): 15, (2023).
  • [29] https://www.youtube.com. Access date: 08.11.2022

Details

Primary Language English
Subjects Engineering
Section Computer Engineering
Authors

Ali Berkol 0000-0002-3056-1226

Talya Tümer Sivri 0000-0003-1813-5539

Hamit Erdem 0000-0003-1704-1581

Project Number N/A
Early View Date January 15, 2024
Publication Date September 1, 2024
Published Issue Year 2024, Volume: 37, Issue: 3

Cite

APA Berkol, A., Tümer Sivri, T., & Erdem, H. (2024). Lip Reading Using Various Deep Learning Models with Visual Turkish Data. Gazi University Journal of Science, 37(3), 1190-1203. https://doi.org/10.35378/gujs.1239207
AMA Berkol A, Tümer Sivri T, Erdem H. Lip Reading Using Various Deep Learning Models with Visual Turkish Data. Gazi University Journal of Science. September 2024;37(3):1190-1203. doi:10.35378/gujs.1239207
Chicago Berkol, Ali, Talya Tümer Sivri, and Hamit Erdem. “Lip Reading Using Various Deep Learning Models With Visual Turkish Data”. Gazi University Journal of Science 37, no. 3 (September 2024): 1190-1203. https://doi.org/10.35378/gujs.1239207.
EndNote Berkol A, Tümer Sivri T, Erdem H (01 September 2024) Lip Reading Using Various Deep Learning Models with Visual Turkish Data. Gazi University Journal of Science 37 3 1190–1203.
IEEE A. Berkol, T. Tümer Sivri, and H. Erdem, “Lip Reading Using Various Deep Learning Models with Visual Turkish Data”, Gazi University Journal of Science, vol. 37, no. 3, pp. 1190–1203, 2024, doi: 10.35378/gujs.1239207.
ISNAD Berkol, Ali, et al. “Lip Reading Using Various Deep Learning Models With Visual Turkish Data”. Gazi University Journal of Science 37/3 (September 2024), 1190-1203. https://doi.org/10.35378/gujs.1239207.
JAMA Berkol A, Tümer Sivri T, Erdem H. Lip Reading Using Various Deep Learning Models with Visual Turkish Data. Gazi University Journal of Science. 2024;37:1190–1203.
MLA Berkol, Ali, et al. “Lip Reading Using Various Deep Learning Models With Visual Turkish Data”. Gazi University Journal of Science, vol. 37, no. 3, 2024, pp. 1190-203, doi:10.35378/gujs.1239207.
Vancouver Berkol A, Tümer Sivri T, Erdem H. Lip Reading Using Various Deep Learning Models with Visual Turkish Data. Gazi University Journal of Science. 2024;37(3):1190-203.