Two-Stream Dense Feature Fusion Network Based on RGB-D Data for the Real-Time Prediction of Weed Aboveground Fresh Weight in a Field Environment
Abstract
1. Introduction
- A method of data collection and preprocessing for constructing a fresh-weight dataset of different kinds of weeds is proposed.
- A YOLO-V4 model and a two-stream dense feature fusion network are established for weed detection and fresh weight estimation.
- The proposed method is tested and analyzed.
2. Data Collection
2.1. Research Area and Objects
2.2. Platform and Equipment
2.3. Collection Method
- Step A: Before shooting, the staff first determine the camera’s field of view and mark it on the ground with two lines: one at the edge of the field of view and one 400 pixels in from that edge. The strip between the two lines is called the label establishment area. When a weed is found, it is assigned a label, and its species and serial number are recorded on a tag placed in the label establishment area. Weeds within the same row are labeled in order, from the tag farthest from the camera to the closest. Weeds lying on a line or inside the label establishment area are not recorded, as indicated by the red cross in the picture. After labeling is completed, the lines are removed so that they do not affect the subsequent shooting; at this point, the label establishment area can still be identified from the pixel coordinates of the captured image. Note that weeds on the lines or in the label area do not need to be marked, because only the middle area of the image is eventually cropped and used to build the dataset. This collection method therefore introduces no human interference into the image content of the middle area, ensuring that the constructed dataset reflects the natural state of the field, and it proved more efficient than alternative labeling approaches.
- Step B: The collection platform carries two Kinect v2 devices that collect data from two rows at the same time. The platform moves at 0.3 m/s along the trajectory of the established line, and the Kinect v2 frame rate is set to 30 fps. The weeds and their labels are photographed together to obtain the RGB-D information for each weed.
- Step C: After the platform passes, the staff use destructive sampling to harvest the aboveground parts of the weeds, weigh them on an electronic balance, and record the weight on the label. The robot stops after traveling 60 m and waits for the collectors to finish before continuing, which minimizes, as far as possible, any change in aboveground fresh weight caused by the elapsed time.
- Step D: The MapColorFrameToDepthSpace function in the Kinect for Windows SDK 2.0 is used to register the depth data with the image data, which have different resolutions (1920 × 1080 for RGB and 512 × 424 for depth). The depth data are converted to 1920 × 1080, the same size as the RGB image, to form RGB-D data. The corresponding weeds are then cropped from the RGB-D data, yielding a dataset that pairs each weed’s RGB-D data with its aboveground fresh weight label. In this process, highly overlapping frames are eliminated and blurred images are filtered out. It is worth noting that, because the two cameras on the Kinect v2 differ in viewing angle and resolution, the registered depth image lacks data at its edges, as shown in the detailed view of Step D in Figure 1. Since only the central 1080 × 1080 region of the image is used, this missing edge data does not affect the dataset; a minimal sketch of the registration and cropping is given below. Table 1 lists the collection dates, numbers of weeds, and weather conditions for the dataset. To ensure the diversity of the dataset, data were collected over half a month, and the spatial range of the collection covered almost the entire test area (60 × 60 m). Because herbicides are applied mainly in sunny weather, no data were collected on rainy days. Data were collected between 7 a.m. and 10 a.m. BST. A total of 20,274 images were collected, of which 1200 images of each weed species had associated aboveground fresh weight data.
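The registration and center-crop in Step D can be sketched as follows. This is a minimal illustration rather than the authors’ code: it assumes the depth-space coordinates returned by MapColorFrameToDepthSpace have already been converted to a NumPy array, and all function and variable names are hypothetical.

```python
import numpy as np

COLOR_H, COLOR_W = 1080, 1920   # Kinect v2 colour resolution
DEPTH_H, DEPTH_W = 424, 512     # Kinect v2 depth resolution

def register_depth_to_color(depth_frame, color_to_depth_pts):
    """Build a 1920 x 1080 depth image aligned with the RGB frame.

    depth_frame:        (424, 512) uint16 depth values in millimetres.
    color_to_depth_pts: (1080, 1920, 2) array of (x, y) depth-space coordinates
                        for every colour pixel (the MapColorFrameToDepthSpace
                        output, assumed already converted to NumPy).
    """
    pts = np.nan_to_num(color_to_depth_pts, nan=-1.0, posinf=-1.0, neginf=-1.0)
    x = np.round(pts[..., 0]).astype(np.int64)
    y = np.round(pts[..., 1]).astype(np.int64)
    # pixels near the image edge map outside the depth frame and stay 0,
    # which produces the missing edge mentioned in Step D
    valid = (x >= 0) & (x < DEPTH_W) & (y >= 0) & (y < DEPTH_H)
    registered = np.zeros((COLOR_H, COLOR_W), dtype=np.uint16)
    registered[valid] = depth_frame[y[valid], x[valid]]
    return registered

def crop_center(rgb, depth, size=1080):
    """Keep only the central 1080 x 1080 region used to build the dataset."""
    x0 = (COLOR_W - size) // 2
    return np.dstack([rgb[:, x0:x0 + size, :], depth[:, x0:x0 + size]])
```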
3. Model
3.1. Technical Route
3.2. KNN Missing Value Filling
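Section 3.2 fills in missing depth values (the zero pixels the Kinect v2 returns where depth could not be measured) before the depth matrix is used. The sketch below shows one generic K-nearest-neighbours fill over pixel coordinates; the function name, the choice of k, and the use of a KD-tree are illustrative assumptions, not necessarily the paper’s exact scheme.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_fill_depth(depth, k=5):
    """Replace zero (missing) depth pixels with the mean depth of the
    k nearest valid pixels, measured in image coordinates."""
    depth = depth.astype(np.float32)
    missing = depth == 0
    if not missing.any():
        return depth
    valid_yx = np.argwhere(~missing)          # coordinates of measured pixels
    missing_yx = np.argwhere(missing)         # coordinates of holes to fill
    tree = cKDTree(valid_yx)
    _, idx = tree.query(missing_yx, k=k)      # k nearest valid pixels per hole
    neighbour_vals = depth[tuple(valid_yx[idx].reshape(-1, 2).T)].reshape(-1, k)
    depth[tuple(missing_yx.T)] = neighbour_vals.mean(axis=1)
    return depth
```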
3.3. YOLO-V4 Weed Detection Model
- (1) Data processing. Of the 20,274 images collected during image acquisition, images were screened by visual observation, blurry images were deleted, and a final set of 7000 images was retained. The 1920 × 1080 RGB-D data were first cropped along the label line established during data collection to 1080 × 1080 and then scaled to a 540 × 540 matrix (a preprocessing sketch is given after this list). LabelImg [56] was then used to annotate the RGB images; in total, 12,116 Solanum nigrum, 12,623 Abutilon theophrasti Medicus, and 7332 Sonchus arvensis instances were tagged. To distinguish this detection dataset from the dataset of individual weed RGB-D examples with fresh weight labels, the former is referred to as dataset 1 (with training set 1 and test set 1) and the latter as dataset 2 (with training set 2 and test set 2). Dataset 1 was divided into a training set (6300 images) and a test set (700 images) at a ratio of 9:1.
- (2) Training parameters. Considering the limitations on server memory, the batch size was set to 8, and the model was trained after defining the model parameters. The learning rate was set to 0.001, the number of classes was set to 3, and the number of iterations was set to 40,000.
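A minimal sketch of the crop-and-rescale preprocessing described above is given below. The crop offset and the array layout are assumptions for illustration; the paper crops along the label line established in Step A.

```python
import cv2
import numpy as np

CROP_X0 = 420   # assumed left offset of the 1080 x 1080 crop window
                # (the paper crops along the label line set up in Step A)

def preprocess_rgbd(rgbd):
    """Crop a 1920 x 1080 RGB-D frame to 1080 x 1080 and rescale to 540 x 540.

    rgbd: (1080, 1920, 4) array with RGB in channels 0-2 and depth in channel 3.
    """
    crop = rgbd[:, CROP_X0:CROP_X0 + 1080, :]
    rgb = cv2.resize(crop[..., :3].astype(np.uint8), (540, 540),
                     interpolation=cv2.INTER_LINEAR)
    # nearest-neighbour for depth so that missing (zero) values are not blended
    depth = cv2.resize(crop[..., 3].astype(np.float32), (540, 540),
                       interpolation=cv2.INTER_NEAREST)
    return np.dstack([rgb.astype(np.float32), depth])
```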
3.4. Two-Stream Dense Feature Fusion Network
- (1) Dense Module
- (2) Dense-NiN Module
- (3) Output Layer
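As a rough illustration of the building blocks listed above, the following PyTorch sketch combines dense connectivity with a 1 × 1 Network-in-Network convolution. It is a generic DenseNet-style bottleneck layer, not the paper’s exact Dense-NiN Module; the channel counts, layer order, and growth rate are assumptions.

```python
import torch
import torch.nn as nn

class DenseNiNLayer(nn.Module):
    """Generic densely connected layer with a 1x1 (NiN-style) convolution."""

    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 4 * growth_rate, kernel_size=1, bias=False),  # NiN 1x1
            nn.BatchNorm2d(4 * growth_rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(4 * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # dense connection: concatenate the input with the newly computed features
        return torch.cat([x, self.body(x)], dim=1)
```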
- (1) Data enhancement. A new data enhancement method suitable for the depth matrix, called depth transformation enhancement, is proposed. It simulates fluctuations in the distance between the camera and the ground in the field, as shown in Figure 9. As shown in Figure 9a, when l is negative the camera is closer to the ground and the target weed appears larger in the image; when l is positive, as shown in Figure 9b, the camera is farther from the ground and the target weed appears smaller. The image size and the depth values can therefore be changed jointly according to the fluctuation in distance to augment the data: when the depth values increase or decrease overall, the image is scaled by the corresponding scale factor (a sketch of this transformation follows this list). The following augmentation operations are applied:
- Randomly rotate 90°, 180°, or 270°.
- Randomly flip vertically or horizontally.
- To make the data more adaptable to light fluctuations, randomly increase or decrease the brightness of RGB data by 10%.
- Perform random depth transformation.
- (2) Training parameters. All models are trained on the GPU. Usually, input images are batched into a single tensor of shape (batch size × channels × h × w) before being sent to the GPU; because images of different sizes cannot form a unified tensor, the batch size is set to 1 in this study, and each image is sent to the GPU as a separate tensor for training. The learning rate is set to 0.001, Adam is used as the optimizer, and the number of iterations is set to 10,000.
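The depth transformation enhancement can be sketched as below, assuming the simple pinhole relation that moving the camera by l rescales the image by d / (d + l), where d is the current camera-to-ground distance. The authors’ exact scale formula is not reproduced in this section, so this is an illustrative interpretation only.

```python
import cv2
import numpy as np

def depth_transform_augment(rgb, depth, l):
    """Simulate raising (l > 0) or lowering (l < 0) the camera by l
    (in the same unit as the depth values, e.g. millimetres)."""
    d = float(np.median(depth[depth > 0]))   # current camera-to-ground distance
    s = d / (d + l)                          # assumed pinhole scale factor
    h, w = depth.shape
    new_size = (int(round(w * s)), int(round(h * s)))
    rgb_aug = cv2.resize(rgb, new_size, interpolation=cv2.INTER_LINEAR)
    depth_aug = cv2.resize(depth, new_size, interpolation=cv2.INTER_NEAREST)
    depth_aug = np.where(depth_aug > 0, depth_aug + l, 0)   # shift valid depths by l
    return rgb_aug, depth_aug

# example: raise the camera by 50 mm, so the weed appears slightly smaller
# rgb_aug, depth_aug = depth_transform_augment(rgb, depth, l=50.0)
```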
3.5. Model Evaluation
- (1) AP and mAP
- (2) IoU
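For reference, the IoU used in the evaluation can be computed as follows; the (x1, y1, x2, y2) box format is an assumption of this sketch.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# example: iou((0, 0, 10, 10), (5, 5, 15, 15)) == 25 / 175, roughly 0.143
```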
4. Results and Discussion
4.1. Technical Route Results
4.2. Comparison of YOLO-V4 with Other Target Detection Algorithms
4.3. Two-Stream Dense Feature Fusion Network (DenseNet201-Rgbd)
4.3.1. Comparison of Regression Network Results Embedded with the Dense-NiN Module
4.3.2. The Impact of Different Data Enhancement Methods
4.3.3. The Two-Stream Dense Feature Fusion Network (DenseNet201) Is Affected by the Growth Period and Weed Species
4.3.4. Model Analysis
- (1) Dense connections extract deep features
- (2) Model visualization analysis
4.4. The Relationship between IoU and Fresh Weight Prediction
4.5. Predictive Effects for Shaded Weeds
- When two weeds overlap each other, the network treats them as a single individual, as shown by the red bounding box.
- When two weeds overlap each other, the network identifies only part of a weed rather than the whole plant, as shown by the purple bounding boxes.
- When two weeds shade each other, the weed cannot be detected, as shown by the black arrow in (a).
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
References
- Zimdahl, R.L. Chapter 13—Introduction to Chemical Weed Control. In Fundamentals of Weed Science, 5th ed.; Academic Press: Waltham, MA, USA, 2018; pp. 391–416. [Google Scholar]
- Gil, Y.; Sinfort, C. Emission of pesticides to the air during sprayer application: A bibliographic review. Atmos. Environ. 2005, 39, 5183–5193. [Google Scholar] [CrossRef]
- Heap, I.; Duke, S.O. Overview of glyphosate-resistant weeds worldwide. Pest Manag. Sci. 2018, 74, 1040–1049. [Google Scholar] [CrossRef]
- Hall, D.; Dayoub, F.; Perez, T.; McCool, C. A rapidly deployable classification system using visual data for the application of precision weed management. Comput. Electron. Agric. 2018, 148, 107–120. [Google Scholar] [CrossRef] [Green Version]
- Partel, V.; Kakarla, C.; Ampatzidis, Y. Development and evaluation of a low-cost and smart technology for precision weed management utilizing artificial intelligence. Comput. Electron. Agric. 2019, 157, 339–350. [Google Scholar] [CrossRef]
- Cobb, A.H.; Reade, J.P. Herbicides and Plant Physiology; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
- Walker, S.; Boucher, L.; Cook, T.; Davidson, B.; McLean, A.; Widderick, M. Weed age affects chemical control of Conyza bonariensis in fallows. Crop Prot. 2012, 38, 15–20. [Google Scholar] [CrossRef]
- Kieloch, R.; Domaradzki, K. The role of the growth stage of weeds in their response to reduced herbicide doses. Acta Agrobot. 2011, 64, 259–266. [Google Scholar] [CrossRef]
- Dayan, F.E.; Barker, A.; Bough, R.; Ortiz, M.; Takano, H.; Duke, S.O.; Moo-Young, M. 4.04—Herbicide Mechanisms of Action and Resistance. In Comprehensive Biotechnology, 3rd ed.; Pergamon: Oxford, UK, 2019; pp. 36–48. [Google Scholar]
- Sterling, T.M. Mechanisms of Herbicide Absorption across Plant Membranes and Accumulation in Plant Cells. Weed Sci. 1994, 42, 263–276. [Google Scholar] [CrossRef]
- Holt, J.S.; Levin, S.A. Herbicides. In Encyclopedia of Biodiversity, 2nd ed.; Academic Press: Waltham, MA, USA, 2013; pp. 87–95. [Google Scholar]
- Huang, W.; Ratkowsky, D.A.; Hui, C.; Wang, P.; Su, J.; Shi, P. Leaf Fresh Weight Versus Dry Weight: Which is Better for Describing the Scaling Relationship between Leaf Biomass and Leaf Area for Broad-Leaved Plants? Forests 2019, 10, 256. [Google Scholar] [CrossRef] [Green Version]
- Bredmose, N.; Hansen, J. Topophysis affects the Potential of Axillary Bud Growth, Fresh Biomass Accumulation and Specific Fresh Weight in Single-stem Roses (Rosa hybrida L.). Ann. Bot. 1996, 78, 215–222. [Google Scholar] [CrossRef] [Green Version]
- Jiang, J.-S.; Kim, H.-J.; Cho, W.-J. On-the-go image processing system for spatial mapping of lettuce fresh weight in plant factory. IFAC-PapersOnLine 2018, 51, 130–134. [Google Scholar] [CrossRef]
- Arzani, K.; Lawes, S.; Wood, D. Estimation of ‘sundrop’ apricot fruit volume and fresh weight from fruit diameter. Acta Hortic. 1999, 321–326. [Google Scholar] [CrossRef]
- Reyes-Yanes, A.; Martinez, P.; Ahmad, R. Real-time growth rate and fresh weight estimation for little gem romaine lettuce in aquaponic grow beds. Comput. Electron. Agric. 2020, 179, 105827. [Google Scholar] [CrossRef]
- Mortensen, A.K.; Bender, A.; Whelan, B.; Barbour, M.M.; Sukkarieh, S.; Karstoft, H.; Gislum, R. Segmentation of lettuce in coloured 3D point clouds for fresh weight estimation. Comput. Electron. Agric. 2018, 154, 373–381. [Google Scholar] [CrossRef]
- Lee, S.; Kim, K.S. Estimation of fresh weight for chinese cabbage using the Kinect sensor. Korean J. Agric. For. Meteorol. 2018, 20, 205–213. [Google Scholar]
- Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
- Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time weed-crop classification and localisation technique for robotic weed control in lettuce. Biosyst. Eng. 2020, 192, 257–274. [Google Scholar] [CrossRef]
- Mottley, J.; Keen, B. Indirect assessment of callus fresh weight by non-destructive methods. Plant Cell Rep. 1987, 6, 389–392. [Google Scholar] [CrossRef]
- Sandmann, M.; Graefe, J.; Feller, C. Optical methods for the non-destructive estimation of leaf area index in kohlrabi and lettuce. Sci. Hortic. 2013, 156, 113–120. [Google Scholar] [CrossRef]
- Jung, D.-H.; Hyun, P.S.; Xiongzhe, H.; Hakjin, K. Image Processing Methods for Measurement of Lettuce Fresh Weight. J. Biosyst. Eng. 2015, 40, 89–93. [Google Scholar] [CrossRef] [Green Version]
- Feyaerts, F.; van Gool, L. Multi-spectral vision system for weed detection. Pattern Recognit. Lett. 2001, 22, 667–674. [Google Scholar] [CrossRef]
- Shirzadifar, A.; Bajwa, S.; Mireei, S.A.; Howatt, K.; Nowatzki, J. Weed species discrimination based on SIMCA analysis of plant canopy spectral data. Biosyst. Eng. 2018, 171, 143–154. [Google Scholar] [CrossRef]
- Pantazi, X.-E.; Moshou, D.; Bravo, C. Active learning system for weed species recognition based on hyperspectral sensing. Biosyst. Eng. 2016, 146, 193–202. [Google Scholar] [CrossRef]
- Zhang, Y.; Slaughter, D.C. Hyperspectral species mapping for automatic weed control in tomato under thermal environmental stress. Comput. Electron. Agric. 2011, 77, 95–104. [Google Scholar] [CrossRef]
- Hamuda, E.; Mc Ginley, B.; Glavin, M.; Jones, E. Automatic crop detection under field conditions using the HSV colour space and morphological operations. Comput. Electron. Agric. 2017, 133, 97–107. [Google Scholar] [CrossRef]
- Tang, J.-L.; Chen, X.-Q.; Miao, R.-H.; Wang, D. Weed detection using image processing under different illumination for site-specific areas spraying. Comput. Electron. Agric. 2016, 122, 103–111. [Google Scholar] [CrossRef]
- Tannouche, A.; Sbai, K.; Rahmoune, M.; Zoubir, A.; Agounoune, R.; Saadani, R.; Rahmani, A. A Fast and Efficient Shape Descriptor for an Advanced Weed Type Classification Approach. Int. J. Electr. Comput. Eng. 2016, 6, 1168. [Google Scholar]
- Pérez, A.J.; López, F.; Benlloch, J.V.; Christensen, S. Colour and shape analysis techniques for weed detection in cereal fields. Comput. Electron. Agric. 2000, 25, 197–212. [Google Scholar] [CrossRef]
- Lin, F.; Zhang, D.; Huang, Y.; Wang, X.; Chen, X. Detection of Corn and Weed Species by the Combination of Spectral, Shape and Textural Features. Sustainability 2017, 9, 1335. [Google Scholar] [CrossRef] [Green Version]
- Zheng, Y.; Zhu, Q.; Huang, M.; Guo, Y.; Qin, J. Maize and weed classification using color indices with support vector data description in outdoor fields. Comput. Electron. Agric. 2017, 141, 215–222. [Google Scholar] [CrossRef]
- Swain, K.C.; Nørremark, M.; Jørgensen, R.N.; Midtiby, H.S.; Green, O. Weed identification using an automated active shape matching (AASM) technique. Biosyst. Eng. 2011, 110, 450–457. [Google Scholar] [CrossRef]
- Kazmi, W.; Garcia-Ruiz, F.; Nielsen, J.; Rasmussen, J.; Andersen, H.J. Exploiting affine invariant regions and leaf edge shapes for weed detection. Comput. Electron. Agric. 2015, 118, 290–299. [Google Scholar] [CrossRef]
- Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160. [Google Scholar] [CrossRef]
- Tian, Y.; Yang, G.; Wang, Z.; Wang, H.; Li, E.; Liang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426. [Google Scholar] [CrossRef]
- Quan, L.; Feng, H.; Li, Y.; Wang, Q.; Zhang, C.; Liu, J.; Yuan, Z. Maize seedling detection under different growth stages and complex field environments based on an improved Faster R-CNN. Biosyst. Eng. 2019, 184, 1–23. [Google Scholar] [CrossRef]
- Hu, K.; Coleman, G.; Zeng, S.; Wang, Z.Y.; Walsh, M. Graph weeds net: A graph-based deep learning method for weed recognition. Comput. Electron. Agric. 2020, 174, 9. [Google Scholar] [CrossRef]
- Dos Santos Ferreira, A.; Matte Freitas, D.; Gonçalves da Silva, G.; Pistori, H.; Theophilo Folhes, M. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324. [Google Scholar] [CrossRef]
- Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
- Yu, J.; Sharpe, S.M.; Schumann, A.W.; Boyd, N.S. Deep learning for image-based weed detection in turfgrass. Eur. J. Agron. 2019, 104, 78–84. [Google Scholar] [CrossRef]
- Peteinatos, G.G.; Reichel, P.; Karouta, J.; Andújar, D.; Gerhards, R. Weed Identification in Maize, Sunflower, and Potatoes with the Aid of Convolutional Neural Networks. Remote Sens. 2020, 12, 4185. [Google Scholar] [CrossRef]
- Jiang, H.; Zhang, C.; Qiao, Y.; Zhang, Z.; Zhang, W.; Song, C. CNN feature based graph convolutional network for weed and crop recognition in smart farming. Comput. Electron. Agric. 2020, 174, 105450. [Google Scholar] [CrossRef]
- Zhou, J.; Fu, X.; Zhou, S.; Zhou, J.; Ye, H.; Nguyen, H.T. Automated segmentation of soybean plants from 3D point cloud using machine learning. Comput. Electron. Agric. 2019, 162, 143–153. [Google Scholar] [CrossRef]
- Li, J.; Tang, L. Developing a low-cost 3D plant morphological traits characterization system. Comput. Electron. Agric. 2017, 143, 1–13. [Google Scholar] [CrossRef] [Green Version]
- Chaivivatrakul, S.; Tang, L.; Dailey, M.N.; Nakarmi, A.D. Automatic morphological trait characterization for corn plants via 3D holographic reconstruction. Comput. Electron. Agric. 2014, 109, 109–123. [Google Scholar] [CrossRef] [Green Version]
- Li, Z.; Guo, R.; Li, M.; Chen, Y.; Li, G. A review of computer vision technologies for plant phenotyping. Comput. Electron. Agric. 2020, 176, 105672. [Google Scholar] [CrossRef]
- Sapkota, B.; Singh, V.; Neely, C.; Rajan, N.; Bagavathiannan, M. Detection of Italian Ryegrass in Wheat and Prediction of Competitive Interactions Using Remote-Sensing and Machine-Learning Techniques. Remote Sens. 2020, 12, 2977. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. Acm. 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bawden, O.; Kulk, J.; Russell, R.; McCool, C.; English, A.; Dayoub, F.; Lehnert, C.; Perez, T. Robot for weed species plant-specific management. J. Field Robot. 2017, 34, 1179–1199. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef] [Green Version]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Tzutalin, D. Labelimg. 2018. Available online: https://github.com/tzutalin/labelImg (accessed on 10 June 2021).
- Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
- Lin, M.; Chen, Q.; Yan, S. Network In Network. arXiv 2013, arXiv:1312.4400. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. arXiv 2015, arXiv:1512.02325. [Google Scholar]
- Ultralytics/yolov5: V4.0 nn.SiLU() Activations, Weights & Biases Logging, PyTorch Hub Integration. 2021. Available online: https://explore.openaire.eu/search/software?softwareId=r37b0ad08687::14e263719066a7bd19d7916893c6f127 (accessed on 10 June 2021).
- Zhao, Q.; Sheng, T.; Wang, Y.; Tang, Z.; Chen, Y.; Cai, L.; Ling, H. M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network. arXiv 2018, arXiv:1811.04533. [Google Scholar] [CrossRef]
Table 1. Collection dates, numbers of weeds (Sn: Solanum nigrum; Atm: Abutilon theophrasti Medicus; Sa: Sonchus arvensis), and weather conditions.

| Date | Total Images | Sn | Atm | Sa | Weather |
|---|---|---|---|---|---|
| 15 May 2020 | 2106 | 120 | 116 | 128 | Cloudy |
| 16 May 2020 | 2453 | 134 | 133 | 137 | Cloudy |
| 18 May 2020 | 1846 | 120 | 115 | 120 | Cloudy |
| 19 May 2020 | 2152 | 124 | 108 | 124 | Cloudy |
| 20 May 2020 | 2386 | 126 | 122 | 122 | Cloudy |
| 21 May 2020 | 1919 | 118 | 115 | 106 | Cloudy |
| 22 May 2020 | 1052 | 75 | 94 | 95 | Cloudy |
| 23 May 2020 | 1776 | 96 | 86 | 86 | Cloudy |
| 27 May 2020 | 793 | 58 | 77 | 48 | Clear |
| 28 May 2020 | 737 | 48 | 49 | 66 | Clear |
| 29 May 2020 | 1816 | 96 | 96 | 85 | Clear |
| 30 May 2020 | 1238 | 85 | 89 | 83 | Clear |
| Total | 20,274 | 1200 | 1200 | 1200 | — |
| Model | M2Det | SSD | Faster R-CNN | YOLO-V5x | YOLO-V4 |
|---|---|---|---|---|---|
| mAP (%) | 69.41 | 64.36 | 71.23 | 73.23 | 75.34 |
| mIoU (%) | 84.24 | 82.63 | 86.33 | 85.62 | 86.36 |
| Average time (s) | 0.126 | 0.192 | 0.238 | 0.016 | 0.033 |
| Model | Atm | Sn | Sa | All |
|---|---|---|---|---|
| Alexnet-rgb | 0.8515 | 0.8327 | 0.8322 | 0.7541 |
| Alexnet-rgbd | 0.8622 | 0.8742 | 0.8414 | 0.7836 |
| Vgg19-rgb | 0.8721 | 0.8856 | 0.8653 | 0.7621 |
| Vgg19-rgbd | 0.8826 | 0.8943 | 0.8699 | 0.7834 |
| Xception-rgb | 0.9132 | 0.9018 | 0.9015 | 0.7562 |
| Xception-rgbd | 0.9144 | 0.9126 | 0.9113 | 0.8314 |
| Resnet101-rgb | 0.9314 | 0.9142 | 0.9154 | 0.8734 |
| Resnet101-rgbd | 0.9526 | 0.9534 | 0.9336 | 0.8852 |
| DenseNet201-rgb | 0.9674 | 0.9751 | 0.9465 | 0.9154 |
| DenseNet201-rgbd | 0.9917 | 0.9921 | 0.9885 | 0.9433 |
Average time (s)

| Model | Atm | Sn | Sa | All |
|---|---|---|---|---|
| Alexnet-rgb | 0.0138 | 0.0113 | 0.0127 | 0.0122 |
| Alexnet-rgbd | 0.0246 | 0.0225 | 0.0233 | 0.0235 |
| Vgg19-rgb | 0.0133 | 0.0125 | 0.0196 | 0.0158 |
| Vgg19-rgbd | 0.0326 | 0.0247 | 0.0296 | 0.0311 |
| Xception-rgb | 0.0267 | 0.0226 | 0.0237 | 0.0259 |
| Xception-rgbd | 0.0442 | 0.0426 | 0.0394 | 0.0463 |
| Resnet101-rgb | 0.0348 | 0.0313 | 0.0333 | 0.0329 |
| Resnet101-rgbd | 0.0622 | 0.0636 | 0.0624 | 0.0618 |
| DenseNet201-rgb | 0.0496 | 0.0454 | 0.0441 | 0.0456 |
| DenseNet201-rgbd | 0.0879 | 0.0821 | 0.0895 | 0.0846 |
| Average | 0.0390 | 0.0359 | 0.0378 | 0.0380 |
| Data Augmentation Method | Atm RMSE | Sn RMSE | Sa RMSE | Average |
|---|---|---|---|---|
| Dataset after augmentation | 0.358 | 0.416 | 0.424 | 0.400 |
| Random rotation | 0.386 | 0.479 | 0.491 | 0.452 |
| Random flip | 0.401 | 0.496 | 0.453 | 0.450 |
| RGB brightness enhanced by 10% | 0.452 | 0.531 | 0.562 | 0.515 |
| Depth transformation enhancement | 0.504 | 0.566 | 0.516 | 0.529 |
| mIoU | Atm RMSE | Sn RMSE | Sa RMSE |
|---|---|---|---|
| 90–100% | 0.026 | 0.014 | 0.023 |
| 80–90% | 0.038 | 0.021 | 0.036 |
| 70–80% | 0.061 | 0.016 | 0.057 |
| 60–70% | 0.087 | 0.054 | 0.089 |
| 50–60% | 0.112 | 0.067 | 0.093 |
| Average | 0.065 | 0.034 | 0.060 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).