Research Article

EVALUATING THE ROBUSTNESS OF YOLO OBJECT DETECTION ALGORITHM IN TERMS OF DETECTING OBJECTS IN NOISY ENVIRONMENT

Year 2023, Issue: 054, 1 - 25, 30.09.2023
https://doi.org/10.59313/jsr-a.1257361

Abstract

Object detection impacts our daily lives in many ways, such as automobile driving, traffic control, and medical applications. Over the past few years, deep learning techniques have been widely used for object detection, and several powerful models have been developed for this purpose. The YOLO architecture is one of the most important cutting-edge approaches to object detection, and researchers have used it in their object detection tasks with promising results. Since the YOLO algorithm can be used as an object detector in critical domains, it should provide high accuracy in both noisy and noise-free environments. Consequently, in this study, we carry out an experimental study to test the robustness of the YOLO v5 object detection algorithm when applied to noisy environments. To this end, four case studies have been conducted to evaluate this algorithm's ability to detect objects in noisy images. Specifically, four datasets have been created by injecting an original high-quality image dataset with different ratios of Gaussian noise. The YOLO v5 algorithm has been trained and tested using the original high-quality dataset. Then, the trained YOLO algorithm has been tested using the created noisy image datasets to monitor the changes in its performance in proportion to the injected Gaussian noise ratio. To our knowledge, this type of performance evaluation study has not been conducted before in the literature. Furthermore, no such noisy image datasets have been shared before for conducting these types of studies. The obtained results showed that the YOLO algorithm failed to handle the noisy images efficiently and that its performance degraded in proportion to the noise ratio.
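As a rough illustration of the noise-injection step described above, the sketch below shows one common way to corrupt an image dataset with zero-mean Gaussian noise at several intensities using NumPy and OpenCV. The specific sigma values, file names, and the use of a per-pixel additive model are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np
import cv2

def add_gaussian_noise(image: np.ndarray, sigma: float) -> np.ndarray:
    """Return a copy of `image` corrupted with zero-mean Gaussian noise.

    `sigma` is the noise standard deviation on the 0-255 intensity scale;
    larger values correspond to heavier corruption.
    """
    noise = np.random.normal(0.0, sigma, image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example: build progressively noisier copies of one image
# ("aircraft.jpg" and the sigma levels are hypothetical).
img = cv2.imread("aircraft.jpg")
for sigma in (10, 25, 50, 75):
    noisy_img = add_gaussian_noise(img, sigma)
    cv2.imwrite(f"aircraft_sigma{sigma}.jpg", noisy_img)
```

Applying such a function to every image in a copy of the original test set yields one noisy dataset per noise level, which can then be fed to a detector trained only on the clean images to measure how its metrics change with the corruption strength.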

Project Number

2023-GENL-Müh-0007.

Thanks

This work has been supported by the Scientific Research Projects Coordination Unit of Sivas University of Science and Technology.

References

  • [1] Ramík, D.M., Sabourin, C., Moreno, R., and Madani, K. (2014). A machine learning based intelligent vision system for autonomous object detection and recognition. Applied Intelligence. 40, 358–375.
  • [2] Nallasivam, M., and Senniappan, V. (2021). Moving human target detection and tracking in video frames. Studies in informatics and control. 30, 119–129.
  • [3] Erhan, D., Szegedy, C., Toshev, A., and Anguelov, D. (2014). Scalable object detection using deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2147-2154).
  • [4] Han, F., Liu, B., Zhu, J. and Zhang, B. (2019). Algorithm design for edge detection of high-speed moving target image under noisy environment. Sensors, 19(2), p.343.
  • [5] Razakarivony, S., and Jurie, F. (2016). Vehicle detection in aerial imagery: A small target detection benchmark. J Vis Commun Image Represent. 34, 187–203.
  • [6] Wang, Z., Du, L., Mao, J., Liu, B., and Yang, D. (2019). SAR target detection based on SSD with data augmentation and transfer learning. IEEE Geoscience and Remote Sensing Letters. 16, 150–154.
  • [7] Xu, Q., Peng, J., Shen, J., Tang, H., and Pan, G. (2020). Deep CovDenseSNN: A hierarchical event-driven dynamic framework with spiking neurons in noisy environment. Neural Networks. 121, 512–519.
  • [8] Bakir, H., Oktay, S., and Tabaru, E. (2023). Detection of pneumonia from x-ray images using deep learning techniques. Journal of Scientific Reports-A. 419–440.
  • [9] Akgül, İ. and Kaya, V. (2022). Classification of cells infected with the malaria parasite with ResNet architectures. Journal of Scientific Reports-A, (048), pp.42-54.
  • [10] Bakir, H. and Yilmaz, Ş. (2022). Using transfer learning technique as a feature extraction phase for diagnosis of cataract disease in the eye. International Journal of Sivas University of Science and Technology, 1(1), pp.17-33.
  • [11] Tekin, S., Gök, M., Namdar, M. and Başgümüş, A. (2022). Autonomous guidance system for UAVs with image processing techniques. Journal of Scientific Reports-A, (051), pp.149-159.
  • [12] Girshick, R., Donahue, J., Darrell, T. and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 580-587).
  • [13] He, K., Zhang, X., Ren, S. and Sun, J. (2015). Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 37(9), pp.1904-1916.
  • [14] Girshick, R. (2015). Fast r-cnn. In Proceedings of the IEEE international conference on computer vision (pp. 1440-1448).
  • [15] Ren, S., He, K., Girshick, R. and Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28.
  • [16] Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788).
  • [17] Redmon, J. and Farhadi, A. (2017). YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7263-7271).
  • [18] Redmon, J. and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
  • [19] Bochkovskiy, A., Wang, C.Y. and Liao, H.Y.M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
  • [20] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y. and Berg, A.C. (2016). SSD: Single shot multibox detector. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14 (pp. 21-37). Springer International Publishing.
  • [21] Felzenszwalb, P.F., Girshick, R.B., McAllester, D., and Ramanan, D. (2010). Object detection with discriminatively trained part-based models. IEEE Trans Pattern Anal Mach Intell. 32, 1627–1645.
  • [22] Ferrari, V., Jurie, F. and Schmid, C. (2010). From images to shape models for object detection. International journal of computer vision, 87(3), pp.284-303.
  • [23] Ren, X., and Ramanan, D. (2013). Histograms of sparse codes for object detection. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 3246–3253
  • [24] Girshick, R., Felzenszwalb, P. and McAllester, D. (2011). Object detection with grammar models. Advances in neural information processing systems, 24.
  • [25] Salakhutdinov, R., Torralba, A. and Tenenbaum, J. (2011). Learning to share visual appearance for multiclass object detection. In CVPR 2011 (pp. 1481-1488). IEEE.
  • [26] Alahi, A., Ortiz, R. and Vandergheynst, P. (2012). FREAK: Fast retina keypoint. In 2012 IEEE conference on computer vision and pattern recognition (pp. 510-517). IEEE.
  • [27] Zhou, X., Yang, C. and Yu, W. (2012). Moving object detection by detecting contiguous outliers in the low-rank representation. IEEE transactions on pattern analysis and machine intelligence, 35(3), pp.597-610.
  • [28] Zhu, L., Chen, Y., Yuille, A. and Freeman, W. (2010). Latent hierarchical structural learning for object detection. In 2010 IEEE computer society conference on computer vision and pattern recognition (pp. 1062-1069). IEEE.
  • [29] Felzenszwalb, P.F., Girshick, R.B., and McAllester, D. (2010). Cascade object detection with deformable part models. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 2241–2248
  • [30] Jiang, H., Wang, J., Yuan, Z., Wu, Y., Zheng, N., and Li, S. (2013). Salient object detection: A discriminative regional feature integration approach. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 2083–2090
  • [31] Kim, C., Lee, J., Han, T., and Kim, Y.M. (2018). A hybrid framework combining background subtraction and deep neural networks for rapid person detection. J Big Data. 5.
  • [32] Zaidi, S.S.A., Ansari, M.S., Aslam, A., Kanwal, N., Asghar, M. and Lee, B. (2022). A survey of modern deep learning based object detection models. Digital Signal Processing, p.103514.
  • [33] Nobis, F., Geisslinger, M., Weber, M., Betz, J., and Lienkamp, M. (2019). A deep learning-based radar and camera sensor fusion architecture for object detection.
  • [34] Elhoseny, M. (2020). Multi-object detection and tracking (modt) machine learning model for real-time video surveillance systems. Circuits Syst Signal Process. 39, 611–630.
  • [35] Das, S., Pal, S. and Mitra, M. (2016). Real time heart rate detection from ppg signal in noisy environment. In 2016 International Conference on Intelligent Control Power and Instrumentation (ICICPI) (pp. 70-73). IEEE.
  • [36] Nayan, A.-A., Saha, J., Mahmud, K.R., al Azad, A.K., and Kibria, M.G. (2020). Detection of objects from noisy images. In: 2020 2nd International Conference on Sustainable Technologies for Industry 4.0 (STI). pp. 1–6. IEEE
  • [37] Yadav, K., Mohan, D., and Parihar, A.S. (2021). Image detection in noisy images. In: 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS). pp. 917–923
  • [38] Milyaev, S. and Laptev, I. (2017). Towards reliable object detection in noisy images. Pattern Recognition and Image Analysis, 27, pp.713-722.
  • [39] Medvedeva, E. (2019). Moving object detection in noisy images. In: 2019 8th Mediterranean Conference on Embedded Computing (MECO). pp. 1–4. IEEE
  • [40] Que, J.F., Peng, H.F., and Xiong, J.Y. (2019). Low altitude, slow speed and small size object detection improvement in noise conditions based on mixed training. In: Journal of Physics: Conference Series. p. 012029. IOP Publishing
  • [41] Lee, G., Hong, S., and Cho, D. (2021). Self-supervised feature enhancement networks for small object detection in noisy images. IEEE Signal Process Lett. 28, 1026–1030
  • [42] Singh, M., Govil, M.C., and Pilli, E.S. (2018). V-SIN: visual saliency detection in noisy images using convolutional neural network. In: 2018 Conference on Information and Communication Technology (CICT). pp. 1–6. IEEE
  • [43] Gautam, A., and Biswas, M. (2018). Whale optimization algorithm based edge detection for noisy image. In: 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS). pp. 1878–1883. IEEE
  • [44] Mathew, M.P., and Mahesh, T.Y. (2022). Leaf-based disease detection in bell pepper plant using yolo v5. Signal Image Video Process. 1–7.
  • [45] Cheng, L., Li, J., Duan, P., and Wang, M. (2021). A small attentional YOLO model for landslide detection from satellite remote sensing images. Landslides. 18, 2751–2765.
  • [46] Wu, D., Lv, S., Jiang, M., and Song, H. (2020). Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Comput Electron Agric. 178, 105742
  • [47] Liu, G., Nouaze, J.C., Touko Mbouembe, P.L., and Kim, J.H. (2020). YOLO-tomato: A robust algorithm for tomato detection based on YOLOv3. Sensors. 20, 2145
  • [48] Chen, W., Huang, H., Peng, S., Zhou, C., and Zhang, C. (2021). YOLO-face: a real-time face detector. Vis Comput. 37, 805–813
  • [49] Zaidi, S.S.A., Ansari, M.S., Aslam, A., Kanwal, N., Asghar, M. and Lee, B. (2022). A survey of modern deep learning based object detection models. Digital Signal Processing, p.103514.
  • [50] Nakamura, T. (2021). Military Aircraft Detection Dataset. Webpage: https://www.kaggle.com/datasets/a2015003713/militaryaircraftdetectiondataset
There are 50 citations in total.

Details

Primary Language English
Subjects Engineering
Journal Section Research Articles
Authors

Halit Bakır 0000-0003-3327-2822

Rezan Bakır 0000-0002-4373-2231

Project Number 2023-GENL-Müh-0007.
Publication Date September 30, 2023
Submission Date February 27, 2023
Published in Issue Year 2023 Issue: 054

Cite

IEEE H. Bakır and R. Bakır, “EVALUATING THE ROBUSTNESS OF YOLO OBJECT DETECTION ALGORITHM IN TERMS OF DETECTING OBJECTS IN NOISY ENVIRONMENT”, JSR-A, no. 054, pp. 1–25, September 2023, doi: 10.59313/jsr-a.1257361.