Deconvolution Enhancement Keypoint Network for Efficient Fish Fry Counting
Simple Summary
Abstract
1. Introduction
- High-resolution heatmap. Generating a high-quality, high-resolution heatmap is crucial for precisely locating the keypoints of small objects, yet high-resolution heatmaps have received comparatively little attention. Most methods focus on grouping keypoints and simply predict the keypoint heatmap from a single feature map at 1/4 of the input resolution. Coordinate regression based on such a heatmap introduces quantization errors, because the precision of the coordinates is tied to the heatmap resolution [20]. For example, assume the input image is 1024 × 1024 and one of the keypoints is located at 515 × 515. The output heatmap would be 256 × 256, one-fourth the input size, so after downsampling the keypoint falls at 128 × 128. Even if the heatmap were restored without any discrepancy, an error of 3 pixels would remain (515 − 128 × 4 = 3 pixels); a minimal code sketch of this effect follows this list. Therefore, to handle the quantization issue, two deconvolution (transposed convolution) modules are added to the generation head of DEKNet to produce a high-quality heatmap with the same resolution as the input image. Two factors motivated the choice of deconvolution. First, deconvolution layers can be stacked at the end of a network to efficiently increase feature map resolution [21]. Second, Xiao et al. [22] demonstrated that deconvolution generates high-quality feature maps for heatmap prediction.
- High-resolution representation. High-resolution representations are conducive to identifying small objects. Throughout the network, the two parallel branches of the feature extraction module perform feature extraction and feature fusion synchronously while keeping the resolution of their respective feature maps unchanged: the first branch maintains a feature map at 1/4 of the original image resolution, and the second branch at 1/8. The feature extraction module ultimately outputs the feature map of the first branch, so DEKNet retains high-resolution representations throughout the feature extraction process.
- Maximum efficiency. DEKNet is an end-to-end model that regresses keypoints through the heatmap. To count precisely, DEKNet uses a single keypoint per fish to fulfill the counting function and obtain the location of each fish fry, without regressing any other properties of the object. Because only one type of keypoint is used, no keypoint grouping or post-processing is required, which greatly reduces the computational effort of the model.
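To make the quantization argument above concrete, the following minimal Python sketch (illustrative only, not part of the DEKNet code) computes the pixel error that remains after a keypoint coordinate is encoded on a downsampled heatmap and mapped back to the input image; the function name and example values are ours.

```python
# Illustrative sketch (not the paper's code): quantization error introduced when a
# keypoint is encoded on a low-resolution heatmap and mapped back to the input image,
# versus a full-resolution heatmap as produced by DEKNet's counting head.

def quantization_error(coord: int, stride: int) -> int:
    """Pixel error after encoding `coord` on a heatmap downsampled by `stride`."""
    heatmap_coord = coord // stride          # position on the low-resolution heatmap
    restored_coord = heatmap_coord * stride  # best possible reconstruction
    return coord - restored_coord

# Keypoint at x = 515 in a 1024 x 1024 image (the example from the text):
print(quantization_error(515, stride=4))  # 3-pixel error with a 1/4-resolution heatmap
print(quantization_error(515, stride=1))  # 0-pixel error with a full-resolution heatmap
```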
- An automatic fry counting method (DEKNet) using a single keypoint is proposed, which can effectively count fish fry despite overlap and occlusion. The approach not only achieves state-of-the-art counting accuracy but also provides the exact locations of fish fry, and it can be transferred to counting other small objects in smart aquaculture.
- A simple, lightweight, dual-branch parallel feature extractor is designed, which performs multi-scale feature extraction while maintaining high-resolution representations and significantly reducing computational effort.
- Two identical deconvolution modules (TDMs) are added to the generation head, i.e., the keypoint-counting module, to generate a high-quality heatmap with the same size as the original image and more detailed features (a hedged sketch of such a head follows this list). In this way, the proposed method improves the counting of small objects in high-density scenes.
- A large-scale fish fry dataset (FishFry-2023) is constructed. In the dataset, a single point instead of a bounding box is used to mark any individual fish fry when labeling the data, thus significantly reducing the workload of annotating tasks.
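As a concrete illustration of the keypoint-counting head described above, the following PyTorch sketch stacks two stride-2 transposed-convolution modules followed by a 1 × 1 convolution to recover a heatmap at the input resolution. The channel widths, kernel sizes, and BatchNorm/ReLU layout are our assumptions for readability, not the exact DEKNet configuration.

```python
# Minimal PyTorch sketch of a keypoint-counting head with two deconvolution
# (transposed-convolution) modules. Channel widths, kernel sizes and the BN/ReLU
# layout are illustrative assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn

class KeypointCountingHead(nn.Module):
    def __init__(self, in_channels: int = 32, num_keypoints: int = 1):
        super().__init__()
        def deconv_module(c_in, c_out):
            # each module doubles the spatial resolution (stride-2 transposed conv)
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )
        self.deconv1 = deconv_module(in_channels, in_channels)  # H/4 -> H/2
        self.deconv2 = deconv_module(in_channels, in_channels)  # H/2 -> H
        self.out = nn.Conv2d(in_channels, num_keypoints, kernel_size=1)  # 1x1 conv

    def forward(self, feat):
        # feat: (B, C, H/4, W/4) from the feature extraction module
        x = self.deconv2(self.deconv1(feat))
        return self.out(x)  # (B, 1, H, W) heatmap at input resolution

# feat = torch.randn(1, 32, 256, 256)      # 1/4-resolution features for a 1024x1024 input
# heatmap = KeypointCountingHead()(feat)   # -> torch.Size([1, 1, 1024, 1024])
```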
2. Materials and Methods
2.1. Materials
2.1.1. Fish Fry Images Collection
- Size variance. Because the smartphone rotates images automatically according to its built-in gravity sensor, the collected images have two sizes: 4032 × 3024 and 3024 × 4032. All images are stored in JPG format without being resized, and both sizes are used for data labeling and model training.
- Light variance. Since the data were collected under different lighting conditions, the illumination of the images differs, including dark light, natural light, and reflections (Figure 2a).
- Water variance. Since the turbidity of the water differs, the background of the images also varies from clear to slightly muddy (Figure 2b,c).
- Density variance. When collecting data, we progressively increased the number of fry, so the dataset covers three density levels: low, medium, and high. Low density means that the number of fish fry per image ranges from 204 to 591, with the corresponding images collected in Batches 1, 2, and 3 (Figure 2d). Medium density refers to fry counts from 642 to 876, with the corresponding images collected in Batches 4, 5, and 6 (Figure 2e). High density indicates fry counts between 1312 and 1935, with the corresponding images collected in Batches 7 and 8 (Figure 2f).
2.1.2. FishFry-2023 Dataset
2.2. Methods
2.2.1. Framework
2.2.2. Feature Extraction Module
2.2.3. Keypoint-Counting Module
2.2.4. Training and Inference
Algorithm 1 Pseudo-code of inference
Input: the original image; window_size; the counting score threshold
Output: the counting results CR
1  preprocess the original image
   /* conv_same: a convolution operation; Fmap: feature map */
2  first_Fmap := conv_same(image)
   /* create the feature map of the second branch by downsampling */
3  second_Fmap := down_sampling(first_Fmap)
4  for i in range(0, 3) do
5      first_Fmap := conv_same(first_Fmap)
6      second_Fmap := conv_same(second_Fmap)
7      upsample_Fmap := up_sampling(second_Fmap)
       /* executed twice to fuse features of the first branch into the second one */
8      if i < 2 then
9          down_Fmap := down_sampling(first_Fmap)
10         second_Fmap := concate(second_Fmap, down_Fmap)
11     end
       /* feature fusion */
12     first_Fmap := concate(first_Fmap, upsample_Fmap)
13 end
   /* deconvolution performed twice to obtain critical features */
14 for j in range(0, 2) do
15     first_Fmap := deconvolution(first_Fmap)
16 end
   /* conv_1: a convolution operation with a 1 × 1 kernel */
17 prediction_heatmap := conv_1(first_Fmap)
   /* see Algorithm 2 for the procedure that obtains the counting results */
18 CR := obtain_counting_results(prediction_heatmap, window_size, threshold)
19 return CR
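A hedged PyTorch sketch of one iteration of the dual-branch fusion loop in Algorithm 1 (steps 5–12) is given below. The concrete resampling choices (bilinear upsampling, a strided convolution for downsampling) and the 1 × 1 projections after each concatenation are assumptions made only to keep the example runnable; they are not the exact DEKNet layers.

```python
# A sketch of one fusion iteration (steps 5-12 of Algorithm 1). Bilinear upsampling,
# a strided convolution for downsampling, and the 1x1 projections after concatenation
# are illustrative assumptions, not the exact DEKNet layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionStage(nn.Module):
    def __init__(self, c1: int = 32, c2: int = 64, fuse_down: bool = True):
        super().__init__()
        self.fuse_down = fuse_down  # corresponds to the "if i < 2" branch in Algorithm 1
        self.conv1 = nn.Sequential(nn.Conv2d(c1, c1, 3, padding=1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(c2, c2, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Conv2d(c1, c2, 3, stride=2, padding=1)  # branch 1 -> branch 2 resolution
        self.proj1 = nn.Conv2d(c1 + c2, c1, 1)                 # restore branch widths after concat
        self.proj2 = nn.Conv2d(c2 + c2, c2, 1)

    def forward(self, first, second):
        # first: (B, c1, H/4, W/4); second: (B, c2, H/8, W/8)
        first, second = self.conv1(first), self.conv2(second)
        up = F.interpolate(second, scale_factor=2, mode="bilinear", align_corners=False)
        if self.fuse_down:                                      # fuse branch 1 into branch 2
            second = self.proj2(torch.cat([second, self.down(first)], dim=1))
        first = self.proj1(torch.cat([first, up], dim=1))       # fuse branch 2 into branch 1
        return first, second

# stages = [FusionStage(fuse_down=(i < 2)) for i in range(3)]   # the loop "for i in range(0, 3)"
```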
Algorithm 2 Pseudo-code of obtaining counting results
Input: the prediction heatmap PH; the size of the sliding window; the counting score threshold
Output: the counting results CR
1  initialization: CR := ∅
   /* get the height and width of the heatmap */
2  height, width := PH.shape
   /* each value of the pooled heatmap is the maximum value of its local window */
3  temp_heatmap := MaxPool2d(PH, window_size, stride := 1, padding)
   /* traverse the two heatmaps */
4  for h in range(height) do
5      for w in range(width) do
           /* get the value of the current position */
6          value := PH[h][w]
           /* add the location information and value of the keypoint to the CR */
7          if value == temp_heatmap[h][w] and value >= threshold then
8              CR.append((h, w, value))
9          end
10     end
11 end
12 return CR
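For reference, the counting procedure of Algorithm 2 can be written compactly in PyTorch as below; the explicit double loop is replaced by a vectorised mask, but the logic is the same: a keypoint is kept where the heatmap value equals its local max-pooled value and exceeds the threshold. The default window size and threshold are placeholders, not values from the paper.

```python
# A runnable PyTorch version of Algorithm 2: local maxima of the predicted heatmap are
# found with a single max-pooling pass and kept if they exceed the score threshold.
# window_size and threshold defaults are placeholders.
import torch
import torch.nn.functional as F

def obtain_counting_results(heatmap: torch.Tensor, window_size: int = 3, threshold: float = 0.5):
    """heatmap: (H, W) prediction heatmap. Returns a list of (h, w, score) keypoints."""
    ph = heatmap.unsqueeze(0).unsqueeze(0)                      # (1, 1, H, W) for pooling
    pooled = F.max_pool2d(ph, window_size, stride=1, padding=window_size // 2)
    keep = (ph == pooled) & (ph >= threshold)                   # local maxima above threshold
    coords = keep.squeeze().nonzero(as_tuple=False)             # (N, 2) of (h, w)
    return [(int(h), int(w), float(heatmap[h, w])) for h, w in coords]

# count = len(obtain_counting_results(predicted_heatmap))       # fry count = number of keypoints
```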
2.2.5. Evaluation Metrics
2.3. Experimental Setup
3. Results
3.1. Fish Fry Counting Results
3.2. Comparisons with Different Counting Methods
3.3. Comparisons with Crowd-Counting Methods
3.4. Comparisons Results on Penaeus Dataset
3.5. Comparisons Results on Adipocyte Cells Dataset
3.6. Ablative Analysis of Each Module
3.7. Ablative Analysis of Different Image Sizes
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Dataset | Image | Instance | Annotation Point | Batch Number |
---|---|---|---|---|
Train set | 160 | 103,880 | 103,880 | 1, 2, 4, 7 |
Validation set | 40 | 33,570 | 33,570 | 1–8 |
Test set | 800 | 670,900 | - | 1–8 |
Density Level | Batch Number | Image | Range |
---|---|---|---|
Low | 1, 2, 3 | 300 | 204–591 |
Medium | 4, 5, 6 | 300 | 642–876 |
High | 7, 8 | 200 | 1312–1935 |
Total | 1–8 | 800 | 204–1935 |
References
- Liu, S.; Zeng, X.; Whitty, M. A vision-based robust grape berry counting algorithm for fast calibration-free bunch weight estimation in the field. Comput. Electron. Agric. 2020, 173, 105360. [Google Scholar] [CrossRef]
- Xu, C.; Jiang, H.; Yuen, P.; Ahmad, K.Z.; Chen, Y. MHW-PD: A robust rice panicles counting algorithm based on deep learning and multi-scale hybrid window. Comput. Electron. Agric. 2020, 173, 105375. [Google Scholar] [CrossRef]
- Han, T.; Bai, L.; Gao, J.; Wang, Q.; Ouyang, W.L. Dr. Vic: Decomposition and reasoning for video individual counting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 3083–3092. [Google Scholar]
- Akçay, H.G.; Kabasakal, B.; Aksu, D.; Demir, N.; Öz, M.E.A. Automated bird counting with deep learning for regional bird distribution mapping. Animals 2020, 10, 1207. [Google Scholar] [CrossRef] [PubMed]
- Fan, L.; Liu, Y. Automate fry counting using computer vision and multi-class least squares support vector machine. Aquaculture 2013, 380, 91–98. [Google Scholar] [CrossRef]
- Zhang, J.; Pang, H.; Cai, W.; Yan, Z. Using image processing technology to create a novel fry counting algorithm. Aquac. Fish. 2022, 7, 441–449. [Google Scholar] [CrossRef]
- Hernández-Ontiveros, J.M.; Inzunza-González, E.; García-Guerrero, E.E.; López-Bonilla, O.R.; Infante-Prieto, S.O.; Cárdenas-Valdez, J.R.; Tlelo-Cuautle, E. Development and implementation of a fish counter by using an embedded system. Comput. Electron. Agric. 2018, 145, 53–62. [Google Scholar] [CrossRef]
- Aliyu, I.; Gana, K.J.; Musa, A.A.; Adegboye, M.A.; Lim, C.G. Incorporating recognition in catfish counting algorithm using artificial neural network and geometry. KSII Trans. Internet Inf. Syst. 2020, 14, 4866–4888. [Google Scholar]
- Zhang, L.; Li, W.; Liu, C.; Zhou, X.; Duan, Q. Automatic fish counting method using image density grading and local regression. Comput. Electron. Agric. 2020, 179, 105844. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, W.; Li, Y.; Qi, Y.; Li, Z.; Yue, J. LFCNet: A lightweight fish counting model based on density map regression. Comput. Electron. Agric. 2022, 203, 107496. [Google Scholar] [CrossRef]
- Yu, C.; Hu, Z.; Han, B.; Dai, Y.; Zhao, Y.; Deng, Y. An intelligent measurement scheme for basic characters of fish in smart aquaculture. Comput. Electron. Agric. 2023, 204, 107506. [Google Scholar] [CrossRef]
- Ditria, E.M.; Lopez-Marcano, S.; Sievers, M.; Jinks, E.L.; Brown, C.J.; Connolly, R.M. Automating the analysis of fish abundance using object detection: Optimizing animal ecology with deep learning. Front. Mar. Sci. 2020, 7, 429. [Google Scholar] [CrossRef]
- Allken, V.; Rosen, S.; Handegard, N.O.; Malde, K. A deep learning-based method to identify and count pelagic and mesopelagic fishes from trawl camera images. Ices. J. Mar. Sci. 2021, 78, 3780–3792. [Google Scholar] [CrossRef]
- Jalal, A.; Salman, A.; Mian, A.; Shortis, M.; Shafait, F. Fish detection and species classification in underwater environments using deep learning with temporal information. Ecol. Inform. 2020, 57, 101088. [Google Scholar] [CrossRef]
- Cai, K.; Miao, X.; Wang, W.; Pang, H.; Liu, Y.; Song, J. A modified YOLOv3 model for fish detection based on MobileNetv1 as backbone. Aquac. Eng. 2020, 91, 102117. [Google Scholar] [CrossRef]
- Lei, J.; Gao, S.; Rasool, M.A.; Fan, R.; Jia, Y.; Lei, G. Optimized small waterbird detection method using surveillance videos based on YOLOv7. Animals 2023, 13, 1929. [Google Scholar] [PubMed]
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as points. arXiv 2019, arXiv:1904.07850. [Google Scholar]
- Liu, G.; Hou, Z.; Liu, H.; Liu, J.; Zhao, W.; Li, K. TomatoDet: Anchor-free detector for tomato detection. Front. Plant. Sci. 2022, 13, 942875. [Google Scholar] [CrossRef] [PubMed]
- Chen, G.; Shen, S.; Wen, L.; Luo, S.; Bo, L. Efficient pig counting in crowds with keypoints tracking and spatial-aware temporal response filtering. arXiv 2020, arXiv:2005.13131. [Google Scholar]
- Nibali, A.; He, Z.; Morgan, S.; Prendergast, L. Numerical coordinate regression with convolutional neural networks. arXiv 2018, arXiv:1801.07372. [Google Scholar]
- Cheng, B.; Xiao, B.; Wang, J.; Shi, H.; Huang, T.S.; Zhang, L. HigherHRNet: Scale-Aware representation learning for Bottom-Up human pose estimation. arXiv 2020, arXiv:1908.10357. [Google Scholar]
- Xiao, B.; Wu, H.; Wei, Y. Simple baselines for human pose estimation and tracking. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 466–481. [Google Scholar]
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep High-Resolution representation learning for human pose estimation. arXiv 2019, arXiv:1902.09212. [Google Scholar]
- Lin, T.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. arXiv 2017, arXiv:1612.03144. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
- Li, X.; Liu, R.; Wang, Z.; Zheng, G.; Lv, J.; Fan, L.; Guo, Y.B.; Gao, Y.F. Automatic Penaeus monodon larvae counting via equal keypoint regression with smartphones. Animals 2023, 13, 2036. [Google Scholar] [CrossRef] [PubMed]
- Lonsdale, J.; Thomas, J.; Salvatore, M.; Phillips, R.; Lo, E.; Shad, S. The genotype-tissue expression (GTEx) project. Nat. Genet. 2013, 45, 580–585. [Google Scholar] [CrossRef] [PubMed]
- Ma, Z.; Wei, X.; Hong, X.; Gong, Y. Bayesian loss for crowd count estimation with point supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27–31 October 2019; pp. 6141–6150. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B.N. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Wang, W.; Xie, E.; Li, X.; Fan, D.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pvt v2: Improved baselines with pyramid vision transformer. Comput. Vis. Media 2022, 8, 415–424. [Google Scholar] [CrossRef]
- Yuan, Y.; Fu, R.; Huang, L.; Lin, W.; Zhang, C.; Chen, X.; Wang, J.D. HRFormer: High-Resolution transformer for dense prediction. arXiv 2021, arXiv:2110.09408. [Google Scholar]
- Li, Y.; Zhang, X.; Chen, D. CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–21 June 2018; pp. 1091–1100. [Google Scholar]
- Liu, W.; Salzmann, M.; Fua, P. Context-aware crowd counting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5099–5108. [Google Scholar]
- Tian, Y.; Chu, X.; Wang, H. CCTrans: Simplifying and improving crowd counting with transformer. arXiv 2021, arXiv:2109.14483. [Google Scholar]
- Wang, Y.; Hou, X.; Chau, L. Dense point prediction: A simple baseline for crowd counting and localization. arXiv 2021, arXiv:2104.12505. [Google Scholar]
- Song, Q.; Wang, C.; Jiang, Z.; Wang, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F.; Wu, Y. Rethinking counting and localization in crowds: A purely point-based framework. arXiv 2021, arXiv:2107.12746. [Google Scholar]
- Cohen, J.P.; Boucher, G.; Glastonbury, C.A.; Lo, H.Z.; Bengio, Y. Count-ception: Counting by fully convolutional redundant counting. arXiv 2017, arXiv:1703.08710. [Google Scholar]
- Guo, Y.; Stein, J.; Wu, G.; Krishnamurthy, A. SAU-Net: A universal deep network for cell counting. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Niagara Falls, NY, USA, 7 September 2019; pp. 299–306. [Google Scholar]
Density | Low Level | Low Level | Low Level | Medium Level | Medium Level | Medium Level | High Level | High Level |
---|---|---|---|---|---|---|---|---|
Batch number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
Image | 45 | 45 | 5 | 45 | 5 | 5 | 45 | 5 |
Fry number per image | 204 | 438 | 591 | 642 | 712 | 876 | 1312 | 1935 |
Instance | 9180 | 19,710 | 2955 | 28,890 | 3560 | 4380 | 59,040 | 9675 |
Parameter | Value |
---|---|
Input size | 1024 × 1024 |
Pre-trained weight | hrnet_w48 |
Epoch | 500 |
Optimizer | Adam |
Learning rate | 1.5 × 10⁻³ |
Batch size | 4 |
σ | 1 |
λ | 1 |
Density Level | Accuracy (%) | RMSE | MAE |
---|---|---|---|
Low | 98.91 | 7.01 | 5.11 |
Medium | 99.05 | 13.41 | 7.33 |
High | 97.44 | 59.56 | 46.62 |
Total | 98.59 | 31.19 | 16.32 |
Method | Year | Params (MB) | FLOPs (G) | Accuracy ↑ (%) Batch 1 | Batch 2 | Batch 3 | Batch 4 | Batch 5 | Batch 6 | Batch 7 | Batch 8 | Total |
---|---|---|---|---|---|---|---|---|---|---|---|---|
HRNet-W48 | 2019 | 63.59 | 336.39 | 94.51 | 88.73 | 94.06 | 94.54 | 95.50 | 83.77 | 93.04 | 80.08 | 90.53 |
BL | 2019 | 21.50 | 421.73 | 99.28 | 98.83 | 98.34 | 98.27 | 99.17 | 97.29 | 96.90 | 94.61 | 97.84 |
HigherHRNet-W48 | 2020 | 63.80 | 382.75 | 96.66 | 94.85 | 95.54 | 95.65 | 98.43 | 94.73 | 96.94 | 86.98 | 94.97 |
Swin-Base | 2021 | 86.75 | 334.19 | 91.92 | 81.70 | 95.01 | 79.89 | 88.25 | 79.84 | 97.91 | 84.57 | 89.26 |
HRFormer-S | 2021 | 7.75 | 61.90 | 99.01 | 97.65 | 97.69 | 97.09 | 98.79 | 93.76 | 96.61 | 86.16 | 95.72 |
PVTv2-B2 | 2022 | 24.85 | 68.35 | 96.30 | 91.63 | 94.87 | 81.70 | 83.96 | 67.87 | 88.60 | 72.85 | 84.72 |
DEKNet (ours) | 2023 | 3.32 | 112.24 | 99.45 | 98.79 | 98.49 | 98.83 | 99.77 | 98.53 | 99.08 | 95.80 | 98.59 |
Method | RMSE (Batch 1) | MAE (Batch 1) | RMSE (Batch 2) | MAE (Batch 2) | RMSE (Batch 3) | MAE (Batch 3) | RMSE (Batch 4) | MAE (Batch 4) | RMSE (Batch 5) | MAE (Batch 5) | RMSE (Batch 6) | MAE (Batch 6) | RMSE (Batch 7) | MAE (Batch 7) | RMSE (Batch 8) | MAE (Batch 8) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HRNet | 11.69 | 11.19 | 52.50 | 49.33 | 36.10 | 35.11 | 41.27 | 35.01 | 33.97 | 32.04 | 231.68 | 142.20 | 96.42 | 91.25 | 394.16 | 385.46 |
BL | 1.67 | 1.47 | 5.54 | 5.11 | 10.64 | 9.80 | 12.44 | 11.12 | 6.51 | 5.89 | 25.32 | 23.75 | 42.37 | 40.73 | 108.06 | 104.31 |
HigherHRNet | 7.06 | 6.82 | 23.71 | 22.54 | 28.12 | 26.38 | 36.60 | 27.93 | 11.97 | 11.20 | 66.10 | 46.15 | 44.60 | 40.10 | 260.83 | 251.91 |
Swin | 18.40 | 16.48 | 98.70 | 80.16 | 33.47 | 29.47 | 172.85 | 129.12 | 119.13 | 83.69 | 328.12 | 185.4 | 40.75 | 27.43 | 300.81 | 298.58 |
HRFormer | 2.42 | 2.02 | 10.79 | 10.30 | 15.83 | 13.63 | 23.02 | 18.67 | 9.49 | 8.58 | 76.41 | 54.70 | 63.32 | 57.60 | 270.20 | 267.85 |
PVTv2 | 10.37 | 7.54 | 48.46 | 36.65 | 32.48 | 30.33 | 137.94 | 117.55 | 127.07 | 114.23 | 362.75 | 281.42 | 155.17 | 149.54 | 526.89 | 525.31 |
DEKNet | 1.55 | 1.12 | 5.94 | 5.26 | 10.47 | 8.94 | 11.96 | 7.51 | 2.12 | 1.63 | 19.79 | 12.84 | 14.40 | 12.04 | 82.99 | 81.19 |
Method | Venue | Params (MB) | FLOPs (G) | Accuracy (%) | RMSE | MAE |
---|---|---|---|---|---|---|
CSRNet | CVPR’18 | 16.26 | 433.79 | 81.1 | 225.53 | 157.71 |
CAN | CVPR’19 | 18.1 | 459.72 | 84.58 | 207.78 | 128.28 |
CCTrans | CVPR’21 | 104.61 | 397.37 | 90.61 | 73.14 | 55.73 |
SCALNet | IEEE’21 | 19.1 | 143.51 | 94.05 | 139.98 | 62.44 |
P2PNet | ICCV’21 | 21.58 | 383.31 | 96.58 | 47.65 | 32.42 |
DEKNet | - | 3.32 | 112.24 | 98.59 | 31.19 | 16.32 |
Method | Accuracy (%) | RMSE | MAE |
---|---|---|---|
HRNet-W48 | 91.44 | 126.90 | 69.20 |
BL | 95.96 | 56.60 | 27.72 |
HigherHRNet-W48 | 95.66 | 65.75 | 36.59 |
Swin-Base | 90.30 | 120.02 | 57.86 |
HRFormer-S | 91.85 | 113.16 | 64.96 |
PVTv2-B2 | 92.25 | 49.26 | 34.59 |
DEKNet (ours) | 98.51 | 25.68 | 14.77 |
Method | MAE ± STD |
---|---|
Count-ception | 19.40 ± 2.2 |
SAU-Net | 14.20 ± 1.6 |
DEKNet (ours) | 13.32 ± 1.2 |
Experiment | Method | Number of Deconvolution Modules | Heatmap Size | Accuracy (%) | RMSE | MAE |
---|---|---|---|---|---|---|
A | FFE | 0 | H/4 × W/4 | 88.87 | 202.89 | 120.80 |
B | FFE + ODM1 | 1 | H/2 × W/2 | 97.35 | 64.72 | 33.50 |
C | FFE + ODM2 | 1 | H × W | 97.89 | 47.44 | 21.14 |
D | FFE + TDM(DEKNet) | 2 | H × W | 98.59 | 31.19 | 16.32 |
Experiment | Image Size | Accuracy (%) |
---|---|---|
A | 256 × 256 | 33.60 |
B | 640 × 640 | 44.94 |
C | 720 × 720 | 45.83 |
D | 1000 × 1000 | 89.96 |
E | 1024 × 1024 | 98.51 |
F | 1400 × 1400 | 96.60 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).