Automated Video-Based Capture of Crustacean Fisheries Data Using Low-Power Hardware
Abstract
1. Introduction
1.1. Background and Motivation
1.2. Related Work
1.2.1. Computer Vision for Crabs and Lobsters
1.2.2. Lightweight Computer Vision
2. Materials and Methods
2.1. Hardware Setup
2.2. Overview of Proposed Computer Vision Pipeline
2.3. Data Preprocessing and Augmentation
- Randomly flip the image horizontally with probability 0.5;
- Randomly flip the image vertically with probability 0.5;
- Randomly blur the image with probability 0.3;
- Randomly shift all pixel intensity values by ±20% with probability 0.3 (simulates varied lighting conditions).
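The augmentation steps above can be sketched as a single function. This is a minimal illustration, not the authors' implementation: the paper does not specify the blur kernel, so a 3×3 box filter stands in here, and the intensity shift is applied multiplicatively over the stated ±20% range.

```python
import numpy as np

def augment(image, rng):
    """Randomly augment an HxWxC uint8 image with the four operations
    described in the text: horizontal/vertical flips (p=0.5 each),
    blur (p=0.3, 3x3 box filter as an illustrative stand-in), and an
    intensity shift of up to +/-20% (p=0.3)."""
    img = image.astype(np.float32)
    if rng.random() < 0.5:  # horizontal flip
        img = img[:, ::-1]
    if rng.random() < 0.5:  # vertical flip
        img = img[::-1, :]
    if rng.random() < 0.3:  # blur: average each pixel with its 8 neighbours
        padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        h, w = img.shape[:2]
        img = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    if rng.random() < 0.3:  # uniform intensity shift in [-20%, +20%]
        img = img * (1.0 + rng.uniform(-0.2, 0.2))
    return np.clip(img, 0, 255).astype(np.uint8)

example = np.full((8, 8, 3), 100, dtype=np.uint8)
augmented = augment(example, np.random.default_rng(0))
```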
2.4. Animal Segment Detection
2.5. Frame Selection
2.6. Image Cropping and Keypoint Detection
3. Results
3.1. Model Performance
3.1.1. Animal Segment Detection
3.1.2. Frame Selection
3.1.3. Object Detection
3.1.4. Keypoint Detection
3.2. Overall Pipeline
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
MSY | Maximum Sustainable Yield
FMP | Fisheries Management Plans
SOTA | State of the Art
NAS | Neural Architecture Search
SBC | Single Board Computer
GSM | Global System for Mobile Communication
GPS | Global Positioning System
GNSS | Global Navigation Satellite System
PCB | Printed Circuit Board
UPS | Uninterruptible Power Supply
FPS | Frames Per Second
CNN | Convolutional Neural Network
RMSE | Root Mean Squared Error
ROI | Region of Interest
References
Model | Memory Usage (GB) | Inference Time (s) | FPS | OOM
---|---|---|---|---
MobileNet-V1 | 5.5 | n/a | n/a | Yes
TripleNet-S | 1.5 | 20.132 | 0.050 | Yes
EfficientNetB0 | 0.24 | 2.182 | 0.458 | No
MobileNetV3-small | 0.43 | 0.102 | 9.804 | No
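Latency and FPS figures like those in the table above can be gathered with a small timing harness. This is a generic sketch, not the authors' benchmarking code: `infer` is a placeholder for any model's single-frame predict call, and warm-up runs are discarded so one-off initialization cost is not counted against steady-state throughput.

```python
import time

def benchmark(infer, frame, n_warmup=3, n_runs=10):
    """Time a single-frame inference callable.

    Returns (mean latency in seconds, frames per second). Warm-up
    iterations are run first and excluded from the measurement.
    """
    for _ in range(n_warmup):
        infer(frame)
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(frame)
    elapsed = time.perf_counter() - start
    mean_s = elapsed / n_runs
    return mean_s, 1.0 / mean_s
```

On memory-constrained SBC targets, the OOM column matters as much as raw speed: a model that exhausts RAM never reaches a stable latency figure at all.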
Component | Seconds of Processing | Frames Processed | FPS
---|---|---|---
Binary Classifier | 300.02 | 3600 | 11.998
Frame Selector | 410.54 | 1486 | 3.619
Object Detector | 109.48 | 15 | 0.137
Keypoint Detector | 5.71 | 15 | 2.626
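The FPS column in the pipeline table is simply frames processed divided by seconds of processing; recomputing it from the tabulated timings reproduces the listed values to within a unit in the last digit (rounding/truncation):

```python
# (seconds of processing, frames processed) per pipeline component,
# taken from the table above.
components = {
    "Binary Classifier": (300.02, 3600),
    "Frame Selector": (410.54, 1486),
    "Object Detector": (109.48, 15),
    "Keypoint Detector": (5.71, 15),
}

fps = {name: frames / seconds
       for name, (seconds, frames) in components.items()}

for name, value in fps.items():
    print(f"{name}: {value:.3f} FPS")
```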
Gregory Dal Toé, S.; Neal, M.; Hold, N.; Heney, C.; Turner, R.; McCoy, E.; Iftikhar, M.; Tiddeman, B. Automated Video-Based Capture of Crustacean Fisheries Data Using Low-Power Hardware. Sensors 2023, 23, 7897. https://doi.org/10.3390/s23187897