Research Article

Classification of Plant Pest Grasshopper Species by Convolutional Neural Network Architectures and Transfer Learning

Year 2023, Volume 35, Issue 1, 321 - 331, 28.03.2023
https://doi.org/10.35234/fumbd.1228883

Abstract

Grasshoppers damage crops, destroying millions of tons of food every year. Developing effective and accurate grasshopper identification systems is therefore critical for controlling grasshopper species and preventing food loss. In this study, 11 plant pest grasshopper species found in various parts of Türkiye and the world were classified using several convolutional neural network models. The dataset used in the study was collected through observations in the Eastern and Southeastern Anatolia regions of Türkiye. The main novelty of this study is the creation of GHCD11, a new dataset covering 11 plant pest grasshopper species found in Türkiye. In addition, the VGG16, VGG19, ResNet50, DenseNet121, EfficientNet, and MobileNet architectures, which are available in the Keras library and widely used for image classification, were employed for the automatic classification of the 11 grasshopper species. Experiments on the GHCD11 dataset using transfer learning yielded remarkable classification accuracies in the range of 95% to 99%. Beyond presenting a new dataset, the study is significant in demonstrating that automatic identification and detection of plant pest grasshopper species can be performed with high accuracy using convolutional neural network architectures.
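The transfer-learning setup summarized in the abstract (an ImageNet-pretrained Keras backbone with a new 11-class classification head) can be sketched as follows. This is a minimal illustration, not the authors' code: the GHCD11 dataset is not included here, and the input size, pooling, and optimizer are assumptions. `weights=None` is used in place of `"imagenet"` only so the sketch builds without downloading pretrained weights.

```python
# Minimal transfer-learning sketch in the spirit of the paper:
# a frozen Keras backbone (VGG16 shown; the paper also uses VGG19,
# ResNet50, DenseNet121, EfficientNet, MobileNet) plus a new head.
from tensorflow import keras

NUM_CLASSES = 11  # plant pest grasshopper species in GHCD11

def build_transfer_model(num_classes: int = NUM_CLASSES) -> keras.Model:
    # Backbone without its original ImageNet classifier head.
    # The paper's setup would pass weights="imagenet"; None avoids a download.
    base = keras.applications.VGG16(
        weights=None, include_top=False, input_shape=(224, 224, 3)
    )
    base.trainable = False  # freeze backbone: only the new head is trained

    # New classification head for the 11 grasshopper classes.
    x = keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = keras.layers.Dense(num_classes, activation="softmax")(x)

    model = keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_transfer_model()
```

Training would then call `model.fit` on images resized to the backbone's input shape; swapping the backbone class is the only change needed to reproduce the other architectures compared in the study.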

References

  • E. Ayan, H. Erbay, and F. Varçın, “Crop pest classification with a genetic algorithm-based weighted ensemble of deep convolutional neural networks,” Comput. Electron. Agric., vol. 179, Dec. 2020, doi: 10.1016/j.compag.2020.105809.
  • P. J. Gullan and P. S. Cranston, The Insects: An Outline of Entomology, 5th ed. Wiley-Blackwell, 2014.
  • L. Zhang, M. Lecoq, A. Latchininsky, and D. Hunter, “Locust and grasshopper management,” Annu. Rev. Entomol., vol. 64, pp. 15–34, 2019, doi: 10.1146/annurev-ento-011118-112500.
  • C. Xie et al., “Multi-level learning features for automatic classification of field crop pests,” Comput. Electron. Agric., vol. 152, pp. 233–241, 2018, doi: 10.1016/j.compag.2018.07.014.
  • M. Martineau, D. Conte, R. Raveaux, I. Arnault, D. Munier, and G. Venturini, “A survey on image-based insect classification,” Pattern Recognit., vol. 65, pp. 273–284, 2017, doi: 10.1016/j.patcog.2016.12.020.
  • N. Larios et al., “Automated insect identification through concatenated histograms of local appearance features: Feature vector generation and region detection for deformable objects,” Mach. Vis. Appl., vol. 19, no. 2, pp. 105–123, 2008, doi: 10.1007/s00138-007-0086-y.
  • S. R. Huddar, S. Gowri, K. Keerthana, S. Vasanthi, and S. R. Rupanagudi, “Novel algorithm for segmentation and automatic identification of pests on plants using image processing,” 2012 3rd Int. Conf. Comput. Commun. Netw. Technol. (ICCCNT 2012), 2012, doi: 10.1109/ICCCNT.2012.6396012.
  • A. Siva Sangari and D. Saraswady, “Analyzing the optimal performance of pest image segmentation using non linear objective assessments,” Int. J. Electr. Comput. Eng., vol. 6, no. 6, pp. 2789–2796, 2016, doi: 10.11591/ijece.v6i6.11564.
  • J. Zhao, M. Liu, and M. Yao, “Study on image recognition of insect pest of sugarcane cotton aphis based on rough set and fuzzy C-means clustering,” 3rd Int. Symp. Intell. Inf. Technol. Appl. IITA 2009, vol. 2, pp. 553–555, 2009, doi: 10.1109/IITA.2009.295.
  • P. J. D. Weeks, M. A. O’Neill, K. J. Gaston, and I. D. Gauld, “Species-identification of wasps using principal component associative memories,” Image Vis. Comput., vol. 17, no. 12, pp. 861–866, 1999, doi: 10.1016/S0262-8856(98)00161-9.
  • M. T. Do, J. M. Harp, and K. C. Norris, “A test of a pattern recognition system for identification of spiders,” Bull. Entomol. Res., vol. 89, no. 3, pp. 217–224, 1999, doi: 10.1017/s0007485399000334.
  • L. Q. Zhu and Z. Zhang, “Auto-classification of insect images based on color histogram and GLCM,” Proc. 2010 7th Int. Conf. Fuzzy Syst. Knowl. Discov. (FSKD 2010), vol. 6, pp. 2589–2593, 2010, doi: 10.1109/FSKD.2010.5569848.
  • X. Cheng, Y. Zhang, Y. Chen, Y. Wu, and Y. Yue, “Pest identification via deep residual learning in complex background,” Comput. Electron. Agric., vol. 141, pp. 351–356, 2017, doi: 10.1016/j.compag.2017.08.005.
  • H. M. Ünver and E. Ayan, “Skin lesion segmentation in dermoscopic images with combination of yolo and grabcut algorithm,” Diagnostics, vol. 9, no. 3, 2019, doi: 10.3390/diagnostics9030072.
  • Q. Zu, B. Hu, N. Gu, and S. Seng, “Human Centered Computing: First International Conference, HCC 2014 Phnom Penh, Cambodia, November 27-29, 2014 Revised Selected Papers,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 8944, no. 18, pp. 812–820, 2015, doi: 10.1007/978-3-319-15554-8.
  • Z. Q. Zhao, P. Zheng, S. T. Xu, and X. Wu, “Object Detection with Deep Learning: A Review,” IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 11, pp. 3212–3232, 2019, doi: 10.1109/TNNLS.2018.2876865.
  • Y. Zhang, J. M. Gorriz, and Z. Dong, “Deep learning in medical image analysis,” J. Imaging, vol. 7, no. 4, 2021, doi: 10.3390/jimaging7040074.
  • N. Şahin, N. Alpaslan, and D. Hanbay, “Robust optimization of SegNet hyperparameters for skin lesion segmentation,” Multimed. Tools Appl., vol. 81, no. 25, pp. 36031–36051, 2022, doi: 10.1007/s11042-021-11032-6.
  • D. Xia, P. Chen, B. Wang, J. Zhang, and C. Xie, “Insect detection and classification based on an improved convolutional neural network,” Sensors (Switzerland), vol. 18, no. 12, pp. 1–12, 2018, doi: 10.3390/s18124169.
  • Z. Liu, J. Gao, G. Yang, H. Zhang, and Y. He, “Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network,” Sci. Rep., vol. 6, pp. 1–12, 2016, doi: 10.1038/srep20410.
  • S. Lim, S. Kim, and D. Kim, “Performance effect analysis for insect classification using convolutional neural network,” Proc. 7th IEEE Int. Conf. Control Syst. Comput. Eng. (ICCSCE 2017), pp. 210–215, 2018, doi: 10.1109/ICCSCE.2017.8284406.
  • F. Visalli, T. Bonacci, and N. A. Borghese, “Insects Image Classification Through Deep Convolutional Neural Networks,” Smart Innov. Syst. Technol., vol. 184, pp. 217–228, 2021, doi: 10.1007/978-981-15-5093-5_21.
  • L. Liu et al., “PestNet: An End-to-End Deep Learning Approach for Large-Scale Multi-Class Pest Detection and Classification,” IEEE Access, vol. 7, pp. 45301–45312, 2019, doi: 10.1109/ACCESS.2019.2909522.
  • L. Nanni, A. Manfè, G. Maguolo, A. Lumini, and S. Brahnam, “High performing ensemble of convolutional neural networks for insect pest image detection,” Ecol. Inform., vol. 67, 2022, doi: 10.1016/j.ecoinf.2021.101515.
  • M. İlçin and Ş. Çelik, “Statistical Evaluation of Damage Status of Important Grasshopper Family in Plants,” Quest Journals J. Res. Agric. Anim. Sci., vol. 8, no. 2, 2021.
  • “2020 | FAO | Food and Agriculture Organization of the United Nations.” [Online]. Available: https://www.fao.org/news/archive/news-by-date/2020/en/?page=3&ipp=10&tx_dynalist_pi1%5Bpar%5D=YToxOntzOjE6IkwiO3M6MToiMCI7fQ%3D%3D. [Accessed: 06-Nov-2022].
  • M. İlçin, “Investigation of Orthoptera: Insecta Fauna of Useful, Harmful and Predator Species in the Batman Region (Turkey),” Science Stays True Here: Biol. Chem. Res., vol. 6, pp. 30–40, 2019.
  • K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 3rd Int. Conf. Learn. Represent. ICLR 2015 - Conf. Track Proc., pp. 1–14, 2015.
  • G. A. Shadeed, M. A. Tawfeeq, and S. M. Mahmoud, “Automatic medical images segmentation based on deep learning networks,” IOP Conf. Ser. Mater. Sci. Eng., vol. 870, no. 1, 2020, doi: 10.1088/1757-899X/870/1/012117.
  • K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR 2016), pp. 770–778, 2016, doi: 10.1109/CVPR.2016.90.
  • G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” Proc. 30th IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR 2017), pp. 2261–2269, 2017, doi: 10.1109/CVPR.2017.243.
  • M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” 36th Int. Conf. Mach. Learn. ICML 2019, vol. 2019-June, pp. 10691–10700, 2019.
  • A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv preprint arXiv:1704.04861, 2017.


There are 33 citations in total.

Details

Primary Language Turkish
Journal Section MBD
Authors

Nurullah Şahin 0000-0002-3578-9959

Nuh Alpaslan 0000-0002-6828-755X

Mustafa İlçin 0000-0002-2542-9503

Davut Hanbay 0000-0003-2271-7865

Publication Date March 28, 2023
Submission Date January 3, 2023
Published in Issue Year 2023

Cite

APA Şahin, N., Alpaslan, N., İlçin, M., & Hanbay, D. (2023). Evrişimsel Sinir Ağı Mimarileri ve Öğrenim Aktarma ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması. Fırat Üniversitesi Mühendislik Bilimleri Dergisi, 35(1), 321-331. https://doi.org/10.35234/fumbd.1228883
AMA Şahin N, Alpaslan N, İlçin M, Hanbay D. Evrişimsel Sinir Ağı Mimarileri ve Öğrenim Aktarma ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması. Fırat Üniversitesi Mühendislik Bilimleri Dergisi. March 2023;35(1):321-331. doi:10.35234/fumbd.1228883
Chicago Şahin, Nurullah, Nuh Alpaslan, Mustafa İlçin, and Davut Hanbay. “Evrişimsel Sinir Ağı Mimarileri Ve Öğrenim Aktarma Ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması”. Fırat Üniversitesi Mühendislik Bilimleri Dergisi 35, no. 1 (March 2023): 321-31. https://doi.org/10.35234/fumbd.1228883.
EndNote Şahin N, Alpaslan N, İlçin M, Hanbay D (March 1, 2023) Evrişimsel Sinir Ağı Mimarileri ve Öğrenim Aktarma ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması. Fırat Üniversitesi Mühendislik Bilimleri Dergisi 35 1 321–331.
IEEE N. Şahin, N. Alpaslan, M. İlçin, and D. Hanbay, “Evrişimsel Sinir Ağı Mimarileri ve Öğrenim Aktarma ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması”, Fırat Üniversitesi Mühendislik Bilimleri Dergisi, vol. 35, no. 1, pp. 321–331, 2023, doi: 10.35234/fumbd.1228883.
ISNAD Şahin, Nurullah et al. “Evrişimsel Sinir Ağı Mimarileri Ve Öğrenim Aktarma Ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması”. Fırat Üniversitesi Mühendislik Bilimleri Dergisi 35/1 (March 2023), 321-331. https://doi.org/10.35234/fumbd.1228883.
JAMA Şahin N, Alpaslan N, İlçin M, Hanbay D. Evrişimsel Sinir Ağı Mimarileri ve Öğrenim Aktarma ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması. Fırat Üniversitesi Mühendislik Bilimleri Dergisi. 2023;35:321–331.
MLA Şahin, Nurullah et al. “Evrişimsel Sinir Ağı Mimarileri Ve Öğrenim Aktarma Ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması”. Fırat Üniversitesi Mühendislik Bilimleri Dergisi, vol. 35, no. 1, 2023, pp. 321-3, doi:10.35234/fumbd.1228883.
Vancouver Şahin N, Alpaslan N, İlçin M, Hanbay D. Evrişimsel Sinir Ağı Mimarileri ve Öğrenim Aktarma ile Bitki Zararlısı Çekirge Türlerinin Sınıflandırması. Fırat Üniversitesi Mühendislik Bilimleri Dergisi. 2023;35(1):321-3.