Under Water Net: Efficient Visual Detection of Marine Garbage for Eco Monitoring
DOI: https://doi.org/10.63665/epwfx449

Keywords: Underwater Garbage Detection, Marine Pollution, YOLOv10n, Object Detection, Deep Learning, Computer Vision, IoT, Edge Computing, Autonomous Underwater Vehicles (AUVs), Real-Time Detection, Lightweight Models, Environmental Monitoring

Abstract
Marine pollution poses a severe threat to the sustainability of aquatic ecosystems and the global blue economy. The accumulation of underwater debris such as plastics, fishing nets, and industrial waste has significantly impacted marine biodiversity and environmental health. Effective detection and classification of such debris are essential for enabling timely intervention and supporting large-scale marine conservation efforts.
In this work, we propose an advanced underwater garbage detection system based on YOLOv10n, a state-of-the-art lightweight object detection model optimized for resource-constrained environments such as IoT devices and underwater robotic platforms. Unlike traditional object detection models that rely on computationally intensive backbones such as CSPDarknet, the proposed system leverages the efficient architecture of YOLOv10n to reduce model complexity while maintaining high detection accuracy.
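The abstract does not reproduce an implementation listing. As a minimal sketch, assuming the Ultralytics Python package (which distributes YOLOv10 checkpoints) and placeholder file names ("yolov10n.pt" and "underwater_sample.jpg" are illustrative, not the authors' trained weights or dataset), single-image detection looks roughly like this:

    # Minimal YOLOv10n inference sketch using the Ultralytics API.
    # "yolov10n.pt" and "underwater_sample.jpg" are placeholders, not
    # the authors' artifacts.
    from ultralytics import YOLO

    model = YOLO("yolov10n.pt")  # nano variant: smallest YOLOv10

    # Run detection on one frame; conf discards low-confidence boxes.
    results = model("underwater_sample.jpg", conf=0.25)

    for result in results:
        for box in result.boxes:
            label = model.names[int(box.cls)]  # class index -> name
            print(f"{label}: {float(box.conf):.2f} at {box.xyxy[0].tolist()}")

The same call also accepts video files or camera indices as the source, which is how a frame-by-frame pipeline on an underwater platform would typically consume it.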
The proposed model emphasizes low parameter count, reduced inference latency, and real-time processing capability, making it highly suitable for deployment on embedded systems. Experimental results demonstrate that the system achieves robust and precise detection of underwater debris under challenging conditions such as low visibility, color distortion, and dynamic backgrounds. Furthermore, the reduced memory footprint and computational requirements enable scalable deployment across multiple platforms, including Autonomous Underwater Vehicles (AUVs) and edge devices.
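For the embedded deployments described above, a common route (an assumption here; the paper's export configuration is not stated) is to convert the trained checkpoint to an interchange format such as ONNX before running it on the target device:

    # Export sketch for edge deployment. The ONNX target and the
    # 640-pixel input size are assumptions, not reported settings.
    from ultralytics import YOLO

    model = YOLO("yolov10n.pt")             # placeholder checkpoint name
    model.export(format="onnx", imgsz=640)  # writes yolov10n.onnx

Other export targets supported by the same call (e.g., TensorRT, NCNN, TFLite) differ only in the format string, which is what makes a single trained model portable across AUV and edge hardware.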
This research highlights the feasibility of integrating lightweight deep learning models into marine monitoring systems and provides a practical, cost-effective, and eco-friendly solution to combat marine pollution through intelligent automation.
License
Copyright (c) 2026 Authors

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
