Article

Global Reconstruction Method of Maize Population at Seedling Stage Based on Kinect Sensor

1 College of Engineering, Nanjing Agricultural University, Nanjing 210095, China
2 Jiangsu Province Engineering Lab for Modern Facility Agriculture Technology & Equipment, Nanjing 210031, China
* Author to whom correspondence should be addressed.
Agriculture 2023, 13(2), 348; https://doi.org/10.3390/agriculture13020348
Submission received: 31 December 2022 / Revised: 26 January 2023 / Accepted: 28 January 2023 / Published: 31 January 2023

Abstract

Automatic plant phenotype measurement technology based on the rapid and accurate reconstruction of maize structures at the seedling stage is essential for the early variety selection, cultivation, and scientific management of maize. Manual measurement is time-consuming, laborious, and error-prone, and the poor mobility of large equipment in the field makes the high-throughput detection of maize plant phenotypes challenging. Therefore, a global 3D reconstruction algorithm was proposed for the high-throughput detection of maize phenotypic traits. First, a self-propelled mobile platform was used to automatically collect three-dimensional point clouds of maize seedling populations from multiple measurement points and perspectives. Second, the Harris corner detection algorithm and singular value decomposition (SVD) were used to pre-calibrate the multi-view alignment matrix of each single measurement point. Finally, a multi-view registration algorithm and the iterative closest point (ICP) algorithm were used for the global 3D reconstruction of the maize seedling population. The results showed that the R2 values of the plant height and maximum width measured from the global 3D reconstruction of the seedling maize population were 0.98 and 0.99, with RMSEs of 1.39 cm and 1.45 cm and mean absolute percentage errors (MAPEs) of 1.92% and 2.29%, respectively. For the standard sphere, 55.26% of the Hausdorff distance set of the reconstructed point cloud was less than 0.5 cm, and 76.88% was less than 0.8 cm. The method proposed in this study provides a reference for the global reconstruction and phenotypic measurement of crop populations at the seedling stage, which aids in the early precise and intelligent management of maize.

1. Introduction

The term “crop phenotype” describes the physical, physiological, and biochemical traits that represent the structural and functional characteristics of crop cells, tissues, plants, and populations. The accurate and intelligent management of modern agriculture depends critically on phenotypic information [1,2]. Historically, crop phenotypic measurements have been performed manually. These approaches are destructive, highly subjective, inefficient, and inappropriate for modern agricultural precision management [3]. Owing to the rapid development of technologies such as machine vision, agricultural robots, and artificial intelligence, crop phenotypic assessment is moving toward high throughput, high precision, and automation [4,5]. Currently, crop phenotype measurements are primarily based on 2D images and 3D point cloud techniques [6,7]. Nevertheless, owing to the complex nature of plant morphology and the mutual occlusion between leaves, 2D image-based and single-viewpoint 3D point cloud phenotyping techniques cannot accurately assess plant phenotypic data [8]. Consequently, the creation of 3D plant models using computer vision techniques for the precise and effective extraction of plant phenotypic features has steadily grown into a prominent research area in the field of crop phenotyping [9,10,11,12,13].
3D reconstruction approaches can be divided into two categories: active vision and passive vision. Active-vision-based 3D reconstruction techniques include time-of-flight (TOF), laser scanning, and structured light; they primarily employ active sensing devices to gather object surface information and perform 3D reconstruction. Passive-vision-based 3D reconstruction techniques include monocular, binocular, and multi-view vision methods; they primarily capture image sequences using visual sensors and subsequently achieve three-dimensional (3D) reconstruction.
The primary sensors used for 3D point cloud reconstruction are LIDAR, CT scanners, hyperspectral imagers, depth cameras, and RGB cameras. LIDAR has a low reconstruction efficiency, making it best suited for navigation and large-scale scene reconstruction; it cannot be used to reconstruct 3D point cloud models of smaller plants [14,15]. CT scanners, which are mainly used for medical imaging, emit radiation. Hyperspectral imagers are affected by environmental interference, slow imaging speed, and small measurement areas. Owing to their high precision and low cost, vision sensors have been extensively utilized in agriculture in recent years [16]. Researchers in China and abroad have gradually replaced costly LIDAR with visual sensors such as depth and RGB cameras in studies of the 3D reconstruction of crop phenotypes. Peng et al. [17] used a KinectV2 depth camera, calculated a coarse alignment matrix from the end-joint poses of a robotic arm, and employed the iterative closest point algorithm for fine alignment to reconstruct the 3D point cloud of tomato plants. Using a single-lens reflex camera and a multi-view stereo vision algorithm, Hu et al. reconstructed the structures of green pepper and eggplant plants in three dimensions [18]. He et al. employed the structure from motion (SFM) technique to create a 3D model of a strawberry based on an SLR camera [19]. Although there have been numerous studies on the 3D point cloud reconstruction of crops, all of which have demonstrated high reconstruction accuracy and stable performance, most studies only reconstruct single plants, making it impossible to achieve global 3D reconstruction of crop populations. The global 3D reconstruction and measurement of crop populations remain challenging because of the complexity and unstructured nature of agricultural landscapes.
In this study, we developed a self-propelled crop phenotype measurement tool based on the ROS (Robot Operating System) mobile platform, combining the Harris corner point detection algorithm, singular value decomposition method, multiple measurement point alignment algorithm, and multiple filtering algorithms to achieve a global 3D reconstruction of the maize plant population. A reference for the global reconstruction and phenotypic assessment of maize populations at the seedling stage was provided by the methodology proposed in this study, which aids in the precise and careful management of maize in its early phases.

2. Materials and Methods

2.1. Experimental Data Collection

From 28 May to 27 June 2022, a maize population reconstruction experiment was conducted as part of this study at the Pukou Campus of Nanjing Agricultural University. Maize seedlings were chosen as test objects. There were 16 plants in total, and the variety chosen was Zhenzhennuo 99. The row and column spacing of the maize plants was 80 cm, and the average initial plant height was approximately 30 cm. Point cloud data were collected every five days, yielding 96 sets of maize plant point cloud data in total. The test setup is illustrated in Figure 1.

2.2. Structure and Principles of the Measurement System

The key components of the self-propelled crop phenotyping system were a Kinect-based ROS mobile platform, a motorized rotating table, a control cabinet, a graphics workstation, and a mobile power supply. A schematic of the measurement system hardware is shown in Figure 2. The frame was built from 20 × 20 mm aluminum profiles and measured 40 cm long, 40 cm wide, and 140 cm high. A Kinect sensor (version 2.0), composed mainly of a color camera and a depth camera, was used. The resolution of the color camera was 1920 × 1080 pixels and that of the depth camera was 512 × 424 pixels; the measurement distance was 0.5–4.5 m, and the field of view was 70° × 60° (horizontal × vertical). The electric rotary table was a TBR 100 series unit with a table size of 102 mm, an angle range of 360°, and a worm gear ratio of 180:1, driven by a 42M-1.8D-C-10 stepper motor with a whole-step resolution of 0.01° and a positioning accuracy of 0.05°. The control cabinet housed a motion-control data acquisition card, the motor driver, and a switching power supply. The data acquisition card was a NET6043-S2XE with a built-in 10/100 M adaptive Ethernet interface and eight 16-bit single-ended analog channels with a ±10 V range sampled synchronously at up to 40 kSPS; the eight analog channels can be acquired synchronously with the two-axis logical position or encoder signals, and the card supports two-axis stepper/servo motor control. The driver was an AQMD3610NS-A2, which supports 0–5/10 V analog signals, signal ports that withstand 24 V, and standard RS-485 communication. The DashgoB1 is an intelligent ROS mobile platform that provides navigation, map construction, obstacle avoidance, and other functions; it has an STM32 chassis controller, built-in LIDAR, ultrasonic radar, and wheel-speed encoders. A shock absorber was mounted on the front side of the chassis, with the bottom shell mounted beneath it; a stabilizer bar inside the bottom shell allows the cart to adapt to uneven paths. The graphics workstation was equipped with an Intel(R) Xeon(R) E-2176M CPU @ 2.70 GHz, 32 GB of memory, and an NVIDIA Quadro P600 4 GB graphics card. The system software was developed in MATLAB R2019 under Ubuntu 18.04.
The algorithm for the global 3D reconstruction of the seedling maize population is depicted in Figure 3 and works as follows:
(1) The camera parameters were obtained using the Zhang Zhengyou calibration method [20], and then, the depth images captured by the Kinect were used to create 3D point cloud maps using a similar triangle-based transformation.
(2) Pre-calibration of the multi-view alignment matrix for a single measurement point was performed using the singular value decomposition method and the Harris corner point identification technique.
(3) The camera coordinate system was adjusted using the region growth method and the random sample consensus (RANSAC) algorithm.
(4) Filtering algorithms were employed to eliminate point cloud noise not belonging to the maize plants.
(5) The coarse alignment of the multiple measurement points and the ICP algorithm were used to accomplish the global 3D reconstruction of the crop population. In this work, the checkerboard grid, the standard sphere, and the plant height and maximum width were used to calibrate the local, global, and crop population reconstruction accuracies, respectively.

2.3. Global 3D Reconstruction Method of Maize Population at Seedling Stage

2.3.1. Three-Dimensional Point Cloud Acquisition Method

Figure 4 illustrates the acquisition procedure used in this study to obtain the 3D point cloud data of the maize plants with the Kinect sensor. First, using the KinectV2 camera, RGB and depth images of the maize plants and infrared images of the checkerboard grid at various angles were collected. The internal camera parameters were then obtained from the infrared images using the Zhang Zhengyou calibration method; the principal point (cx, cy) and focal lengths (fx, fy) were (256.00, 209.59) and (365.18, 364.49), respectively. Using similar triangles and the pinhole camera imaging model, the depth and RGB images of the maize plant were combined to create a 3D point cloud map with color information. Equations (1) and (2) relate a point m (u, v) on the depth image to its corresponding 3D point M (X, Y, Z).
$u = f_x \frac{X}{Z} + c_x$  (1)
$v = f_y \frac{Y}{Z} + c_y$  (2)
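As an illustration, a minimal back-projection sketch in Python is given below (the original system was implemented in MATLAB). It inverts Equations (1) and (2) for every valid depth pixel using the calibrated intrinsics reported above, and it assumes the RGB image has already been registered to the depth image.

```python
import numpy as np

# Calibrated Kinect depth-camera intrinsics reported in the text
FX, FY, CX, CY = 365.18, 364.49, 256.00, 209.59

def depth_to_point_cloud(depth_mm, rgb=None):
    """Convert a 512 x 424 Kinect depth image (millimetres) into an N x 3 array
    of 3D points in the camera frame; optionally attach per-point RGB colours."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))    # pixel coordinate grids
    z = depth_mm.astype(np.float64) / 1000.0          # depth in metres
    valid = z > 0                                     # Kinect reports 0 for no return
    x = (u - CX) * z / FX                             # invert Eq. (1)
    y = (v - CY) * z / FY                             # invert Eq. (2)
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    if rgb is not None:                               # rgb assumed registered to depth
        return points, rgb[valid]
    return points
```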

2.3.2. Single-Point Multi-View Alignment Matrix Pre-Calibration Method

In this study, the multi-view alignment matrix of a single measurement point was pre-calibrated using the singular value decomposition approach. The specific pre-calibration procedure is shown in Figure 5. In the first stage, the KinectV2 camera was mounted on the TBR 100 motorized rotary table to capture RGB images of the checkerboard grid at two viewing angles, 0° and 45°. In the second stage, the Harris corner detection algorithm was used to identify the 2D feature corner points (X, Y) of the checkerboard grid in the RGB images from the two viewpoints. In the third stage, discrete differences were used to determine the mapping relations Fx and Fy between the 2D and 3D points, and the corresponding 3D feature points were computed from the 2D feature points of the checkerboard grid. In the fourth stage, the centers of mass of the feature points of the adjacent views were determined using Equations (3) and (4), and the singular value decomposition method was applied to solve the rotation matrix R and translation matrix T that transform the 45° view to the 0° view (this calibration is performed only once and does not need to be repeated subsequently).
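A hedged sketch of the corner-detection stage is shown below, using OpenCV's Harris-based corner detector. This is an assumption for illustration only: the authors' implementation is in MATLAB and maps 2D corners to 3D through the discrete-difference functions Fx and Fy, whereas the sketch simply looks the corners up in the registered depth image using the intrinsics from Section 2.3.1. The corner count of 54 (e.g. a 9 × 6 inner-corner board) is also illustrative.

```python
import cv2
import numpy as np

FX, FY, CX, CY = 365.18, 364.49, 256.00, 209.59   # calibrated Kinect depth intrinsics

def checkerboard_corners_3d(rgb, depth_mm, n_corners=54):
    """Detect 2D checkerboard corners with a Harris-based detector and look up
    their 3D camera-frame coordinates from the registered depth image."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=n_corners, qualityLevel=0.01,
                                      minDistance=10, useHarrisDetector=True, k=0.04)
    corners = corners.reshape(-1, 2)                  # (u, v) pixel coordinates
    pts3d = []
    for u, v in corners:
        z = depth_mm[int(round(v)), int(round(u))] / 1000.0   # depth in metres
        if z > 0:                                     # skip invalid depth returns
            pts3d.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
    return corners, np.asarray(pts3d)
```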
In this study, a multi-view alignment matrix for a single measurement point was solved using the singular value decomposition method [21]. Assuming two eigen point sets, P and Q, the rotation translation matrix between them is solved as follows:
(1) The centroid coordinates Pc (xc, yc, zc) and Qc (xc, yc, zc) of the feature point sets P and Q are calculated according to (3) and (4).
$P_c(x_c, y_c, z_c) = \frac{\sum_{i=1}^{n} w_i \cdot P_i(x_i, y_i, z_i)}{\sum_{i=1}^{n} w_i}$  (3)
$Q_c(x_c, y_c, z_c) = \frac{\sum_{i=1}^{n} w_i \cdot Q_i(x_i, y_i, z_i)}{\sum_{i=1}^{n} w_i}$  (4)
where wi denotes the weight and Pi (xi, yi, zi) and Qi (xi, yi, zi) are the 3D coordinates of the points within the point set.
(2) The covariance matrix E is calculated using Equation (5), where X and Y are the d × n matrices of centered points, W = diag(w1, w2, w3, …, wn), and E is the resulting d × d matrix.
$E = XWY^{T} = \sum_{i=1}^{n} w_i \left( P_i - P_c \right)\left( Q_i - Q_c \right)^{T}$  (5)
The singular value decomposition of the matrix E is given by Equation (6), where U and V are orthogonal matrices and Λ is the diagonal matrix of singular values. The rotation matrix R and translation matrix T are then obtained from Equations (7) and (8).
$E = U \cdot \Lambda \cdot V^{T}$  (6)
$R = V \cdot U^{T}$  (7)
$T = Q_c - R \cdot P_c$  (8)
Equation (9) is applied to convert the point clouds of the other view camera coordinate systems to the first-view camera coordinate system, where R and T are the rotation and translation matrices between adjacent views, respectively.
$PC'_{j+1} = R^{j} \cdot PC_{j+1} + \frac{I - R^{j}}{I - R} \cdot T$  (9)
This is equivalent to the accumulated translation $\left( I + R + \cdots + R^{j-1} \right) T$ obtained by applying the adjacent-view transform (R, T) j times in succession. $PC'_{j+1}$ denotes the 3D point cloud of the (j + 1)-th viewpoint in the first-view (world) coordinate system, whereas $PC_{j+1}$ denotes the same point cloud in its own camera coordinate system.
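The following Python sketch (an illustration, not the authors' MATLAB code) implements Equations (3)–(8) as a weighted SVD solve and Equation (9) as the repeated application of the pre-calibrated adjacent-view transform.

```python
import numpy as np

def rigid_transform_svd(P, Q, w=None):
    """Solve the rotation R and translation T that map point set P onto Q
    (Equations (3)-(8)); P and Q are N x 3 arrays of corresponding points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.ones(len(P)) if w is None else np.asarray(w, float)
    Pc = (w[:, None] * P).sum(0) / w.sum()            # Eq. (3): weighted centroid of P
    Qc = (w[:, None] * Q).sum(0) / w.sum()            # Eq. (4): weighted centroid of Q
    E = (w[:, None] * (P - Pc)).T @ (Q - Qc)          # Eq. (5): covariance matrix
    U, _, Vt = np.linalg.svd(E)                       # Eq. (6): E = U * Lambda * V^T
    R = Vt.T @ U.T                                    # Eq. (7)
    if np.linalg.det(R) < 0:                          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = Qc - R @ Pc                                   # Eq. (8)
    return R, T

def to_first_view(points, R, T, j):
    """Eq. (9): express the (j+1)-th 45-degree view in the first-view frame by
    applying the pre-calibrated adjacent-view transform (R, T) j times."""
    pts = np.asarray(points, float)
    for _ in range(j):
        pts = pts @ R.T + T
    return pts
```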

2.3.3. Multi-Point 3D Point Cloud Coarse Alignment Method

This study used the ROS mobile platform to achieve the coarse alignment of 3D point clouds of maize plants from multiple measurement points. The first step is multi-measurement-point positioning and navigation: an RVIZ 2D map of the maize plot was created using the map-building feature of the DashgoB1 ROS mobile platform. The robot was then localized in the generated map using the adaptive Monte Carlo localization (AMCL) technique, which is based on particle filtering, and the movement path on the map was planned using a global path-planning algorithm to steer the chassis to the desired target point. The second step is the acquisition of single-measurement-point multi-view 3D point cloud data, for which the TBR 100 electric rotary table was used with a multi-view acquisition interval of 45°; Figure 6a shows the acquisition diagram. The third step is the local 3D reconstruction of the multi-view point cloud. The method in Section 2.3.2 is used to calibrate the multi-view transformation matrices; the coordinate system of the first-view camera is used as the global coordinate system, and the point clouds of the other views are aligned to it to unify the coordinate systems and realize the local 3D reconstruction of the multi-view maize plant point cloud. The fourth step is determining the locations of the acquisition points. In this study, the reconstructed site covered 4 m × 4 m; because a local reconstruction cannot cover the entire site, multi-point 3D point cloud acquisition is necessary. Since the noise and inaccuracy of the plant point cloud increase with distance from the Kinect, the point cloud data were screened within 2.25 m of the Kinect origin by setting the radius of the bounding box to 2.25 m.
To guarantee an adequate overlap area between the measurement points, nine measurement points were evenly distributed according to the spatial distribution of the maize plants, and the horizontal distance between consecutive measurement points was fixed at 1.6 m. Figure 6b depicts the locations of the measurement points and the acquisition paths. The final step was to pre-calibrate the coarse alignment matrices of the measurement points. A checkerboard grid was placed between adjacent measurement points, and the algorithm in Section 2.3.2 was used to pre-calibrate the coarse alignment transformation matrix between adjacent measurement points. Because the target navigation point positions were fixed, subsequent crop reconstruction and measurement did not require repeated calibration.
To achieve the coarse alignment of multipoint 3D point clouds, the fifth point camera coordinate system was employed as the global coordinate system, and the point clouds under other point camera coordinate systems were translated into the global coordinate system.
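The sketch below illustrates the two operations described in this subsection: discarding returns farther than 2.25 m from the sensor origin, and pushing a local reconstruction through the pre-calibrated coarse-alignment matrices into the frame of measurement point five. The chaining of adjacent-point (R, T) pairs is our assumption about how the pre-calibrated matrices are composed.

```python
import numpy as np

def crop_to_radius(points, radius=2.25):
    """Keep only points within `radius` metres of the Kinect origin (the 2.25 m
    bounding sphere used to discard distant, noisier returns)."""
    return points[np.linalg.norm(points, axis=1) <= radius]

def to_measurement_point_5(points, rt_chain):
    """Apply a chain of pre-calibrated coarse-alignment (R, T) pairs, e.g.
    point i -> neighbouring point -> ... -> point 5, to a local reconstruction."""
    pts = np.asarray(points, float)
    for R, T in rt_chain:
        pts = pts @ np.asarray(R).T + np.asarray(T)
    return pts
```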

2.3.4. ICP Fine Alignment Using Overlapping Regions

Because the proportion of non-plant points was too high and there were noise points in the original point clouds, the ICP had alignment issues [22,23,24]; therefore, the point clouds first had to be processed. A pass-through filtering algorithm [25] was initially employed to segregate the maize plant and non-plant point clouds, using a height threshold set 1 cm below the true height of the planting pots. Because gray noise (such as soil) may remain in the point cloud after pass-through filtering, the excess green index (ExG) was applied for additional color filtering. Finally, statistical filtering and radius filtering were applied for outlier elimination [26,27]. For statistical filtering, the distances of each point to its k nearest neighbors (k = 35) were computed, and points whose mean neighbor distance deviated from the global mean by more than 1.5 standard deviations were defined as outliers. For radius filtering, the filter radius r and the minimum number of points within the filter radius were set to 8 mm and 5, respectively; that is, points with fewer than five neighbors within an 8 mm radius were deemed outliers.
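A minimal sketch of this filtering chain is given below (Python with NumPy/SciPy). The k value, standard-deviation ratio, and radius-filter parameters follow the text, while the ExG threshold, the upward z-axis, and the exact statistical-filter formulation are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_plant_points(points, colors, pot_top_z, k=35, std_ratio=1.5,
                        radius=0.008, min_neighbors=5, exg_thresh=0.05):
    """Pass-through height filter, excess-green (ExG) colour filter, then
    statistical and radius outlier removal; `colors` holds RGB in [0, 255]."""
    # 1) pass-through: keep points above 1 cm below the pot top (z assumed upward)
    keep = points[:, 2] > (pot_top_z - 0.01)
    points, colors = points[keep], colors[keep]
    # 2) ExG = 2g - r - b on chromatic coordinates; keep sufficiently green points
    rgb = colors.astype(float)
    r, g, b = (rgb / (rgb.sum(1, keepdims=True) + 1e-6)).T
    points = points[(2 * g - r - b) > exg_thresh]
    # 3) statistical filter: mean distance to k nearest neighbours vs. global stats
    d, _ = cKDTree(points).query(points, k=k + 1)     # first neighbour is the point itself
    mean_d = d[:, 1:].mean(1)
    points = points[mean_d < mean_d.mean() + std_ratio * mean_d.std()]
    # 4) radius filter: require at least `min_neighbors` other points within `radius`
    counts = np.array([len(n) - 1
                       for n in cKDTree(points).query_ball_point(points, radius)])
    return points[counts >= min_neighbors]
```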
Because of the stringent requirements of the ICP on the initial location and overlapping area, this study presented an ICP fine registration algorithm based on overlapping area point clouds to achieve global three-dimensional reconstruction of maize plants. The steps involved were as follows:
Step 1: Mean downsampling processing was conducted on the point cloud to lower the computation amount of the point cloud registration in the subsequent step.
Step 2: Using the camera coordinate system of the fifth measurement point as the world coordinate system, the method in Section 2.3.3 was used to solve the coarse alignment transformation matrices, transform the point clouds of the other measurement points into the fifth measurement point's camera coordinate system, and obtain the coordinates of each measurement point (xi, yi, zi, i = 1, 2, ..., 9) after the coarse alignment transformation.
Step 3: Taking the first and fifth measurement points as an example, the center point between the two measurement points was computed as $\left( \frac{x_1 + x_5}{2}, \frac{y_1 + y_5}{2}, \frac{z_1 + z_5}{2} \right)$ and the spacing between them as $\sqrt{(x_1 - x_5)^2 + (y_1 - y_5)^2 + (z_1 - z_5)^2}$. The overlapping region of the two measurement points was defined as the circle centered at this center point with the measurement point spacing as its diameter.
Step 4: Using the camera coordinate system of measurement point five as the global coordinate system, the ICP transformation matrices between the camera coordinate system of measurement point five and those of its neighboring measurement points were solved with the ICP, based on the maize plant point clouds in the overlapping areas.
Step 5: Using the solved fine registration transformation matrices, the point clouds in the camera coordinate systems of the other measurement points were transformed into the camera coordinate system of measurement point five. The fine registration of the 3D point clouds of the various measurement points was thus accomplished, and the global 3D reconstruction of the maize seedling population was finally achieved. The ICP was primarily utilized to solve the fine registration transformation matrices.
The specific registration steps are as follows:
(1) From the source point cloud P, a subset P0 ⊂ P was selected.
(2) In the target point cloud Q, the matching point subset Q0 ⊂ Q of subset P0 was found such that ‖Qi − Pi‖ is minimized.
(3) The rotation matrix R and translation vector T were calculated so that $f(R, T) = \sum_{i=1}^{n} \left\| Q_i - (R P_i + T) \right\|^2$ is minimized, and the subset P0′ of the source point cloud was updated.
(4) Whether the iteration ends was determined based on $d = \frac{1}{n} \sum_{i=1}^{n} \left\| Q_i - P_i' \right\|^2$. If d is less than the specified threshold or the specified number of iterations is reached, the algorithm terminates; otherwise, the process returns to Step (2) to continue the iteration.
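The following point-to-point ICP sketch follows steps (1)–(4) above (Python; the k-d tree correspondence search and the SVD solve are our implementation choices, not necessarily the authors'). The `overlap_mask` helper reproduces the overlapping-region definition of Step 3 in the OXY plane.

```python
import numpy as np
from scipy.spatial import cKDTree

def _rigid_svd(P, Q):
    """Unweighted SVD solve for R, T mapping P onto Q (see Section 2.3.2)."""
    Pc, Qc = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - Pc).T @ (Q - Qc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # avoid reflection solutions
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, Qc - R @ Pc

def overlap_mask(points, center_a, center_b):
    """Step 3: points inside the circle whose centre is the midpoint of two
    measurement points and whose diameter is the spacing between them."""
    c = (np.asarray(center_a) + np.asarray(center_b)) / 2.0
    r = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b)) / 2.0
    return np.linalg.norm(points[:, :2] - c[:2], axis=1) <= r

def icp_point_to_point(source, target, max_iter=50, tol=1e-6):
    """Steps (1)-(4): iteratively match source points to their nearest target
    points, solve (R, T), and stop when the mean squared error d converges."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    tree = cKDTree(tgt)
    R_total, T_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                   # nearest-neighbour correspondences
        R, T = _rigid_svd(src, tgt[idx])
        src = src @ R.T + T                           # update the source point set
        R_total, T_total = R @ R_total, R @ T_total + T
        err = float(np.mean(dist ** 2))               # termination metric d
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, T_total
```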

2.4. Calibration Method for Accuracy of Global Reconstruction of Maize Population at Seedling Stage

2.4.1. Calibration of the Accuracy of Plant Height and Maximum Width Measurement

The principal direction of the Kinect-based 3D reconstructed point cloud model of the maize plants deviated from the 3D coordinate axes under visualization. To facilitate the subsequent measurement of phenotypic parameters such as plant height and maximum width, this study used the region growing method [28], the random sample consensus algorithm [29], and the Harris corner detection algorithm to uniformly convert the point clouds from different viewpoints into a ground-based global coordinate system.
(1) Plant height
In this study, a single corn plant was first segmented from the scene using a density-based point cloud clustering technique. Subsequently, all the point clouds of the corn plant were traversed, and the maximum and minimum values of the z-coordinate of the single corn plant were determined. Finally, the absolute value of the difference was considered to be the current corn plant height.
(2) Maximum width
To construct the corresponding projected point clouds, the extracted point clouds of individual maize plants were projected onto the OXY plane [30], and the circumscribed circle of the projected point cloud in the OXY plane was calculated. The diameter of this circumscribed circle represents the maximum width of the individual maize plant.
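A short sketch of both measurements is shown below. The maximum width is approximated here by the farthest pair of convex-hull vertices of the OXY projection, which is a close stand-in for (and a lower bound on) the circumscribed-circle diameter used in the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

def plant_height(points):
    """Plant height as the z-extent of a single segmented plant point cloud."""
    return float(points[:, 2].max() - points[:, 2].min())

def max_width(points):
    """Approximate maximum width from the OXY projection: the largest distance
    between convex-hull vertices of the projected point cloud."""
    xy = points[:, :2]
    hull = xy[ConvexHull(xy).vertices]
    d = np.linalg.norm(hull[:, None, :] - hull[None, :, :], axis=-1)
    return float(d.max())
```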
In this study, three metrics were used to assess the global reconstruction accuracy of crop populations. These are the root mean square error (RMSE), mean absolute percentage error (MAPE), and coefficient of determination (R2). The following equations were used to calculate the above-mentioned metrics:
$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( P_i - Q_i \right)^2}$  (10)
$MAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{\left| Q_i - P_i \right|}{Q_i} \times 100\%$  (11)
$R^2 = \frac{\sum_{i=1}^{n} \left( Q_i - Q_{avg} \right) \cdot \left( P_i - P_{avg} \right)}{\left[ \sum_{i=1}^{n} \left( P_i - P_{avg} \right)^2 \right]^{0.5} \left[ \sum_{i=1}^{n} \left( Q_i - Q_{avg} \right)^2 \right]^{0.5}}$  (12)
where,
Pi is the algorithm measurement of the i-th plant;
Qi is the manual measurement of the i-th plant;
Pavg is the mean value of the algorithm measurement;
Qavg is the mean value of the manual measurement;
n is the number of samples.
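For completeness, a direct transcription of Equations (10)–(12) into Python is given below (an illustrative sketch; the variable names mirror the symbols above).

```python
import numpy as np

def accuracy_metrics(P, Q):
    """RMSE, MAPE and R2 between algorithm measurements P and manual
    measurements Q, computed as written in Equations (10)-(12)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    rmse = np.sqrt(np.mean((P - Q) ** 2))                         # Eq. (10)
    mape = np.mean(np.abs(Q - P) / Q) * 100.0                     # Eq. (11)
    r2 = np.sum((Q - Q.mean()) * (P - P.mean())) / (              # Eq. (12)
        np.sum((P - P.mean()) ** 2) ** 0.5 * np.sum((Q - Q.mean()) ** 2) ** 0.5)
    return rmse, mape, r2
```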

2.4.2. Calibration of the Accuracy of Global 3D Reconstruction of the Standard Sphere

The accuracy of the global 3D reconstruction of the standard sphere was calibrated using the Hausdorff distance (HD) set between the reconstructed point cloud and a software-generated reference sphere, as defined in Equation (13):
$HD(RP, GP) = \min_{P_b \in GP} d(P_a, P_b)$  (13)
where HD is the shortest distance from a reconstructed point to the generated point set;
GP and RP denote the generated point set and the reconstructed point set of the standard sphere, respectively;
Pa and Pb are points in RP and GP, respectively.
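A brief sketch of how the HD distance set and its banded distribution (used in Section 3.3) could be computed is given below; the band edges are in metres and correspond to the 0.2/0.5/0.8/1.2 cm breakpoints reported later.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_distance_set(reconstructed, generated):
    """Eq. (13): for every point Pa in the reconstructed point cloud RP, the
    distance to its nearest neighbour Pb in the generated point set GP."""
    tree = cKDTree(np.asarray(generated, float))
    hd, _ = tree.query(np.asarray(reconstructed, float))
    return hd

def hd_band_fractions(hd, edges=(0.002, 0.005, 0.008, 0.012)):
    """Fraction of points in each band: 0-0.2, 0.2-0.5, 0.5-0.8, 0.8-1.2, >1.2 cm."""
    bins = np.concatenate(([0.0], np.asarray(edges), [np.inf]))
    counts, _ = np.histogram(hd, bins=bins)
    return counts / counts.sum()
```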

3. Analysis and Results

To validate the efficacy of the global reconstruction algorithm presented in this study for seedling maize populations, the accuracy was independently calibrated for neighboring viewpoint checkerboard grid feature points, multi-plant test subjects, and standard polystyrene foam balls.

3.1. Analysis of Single Measurement Point Local Reconstruction Accuracy

The checkerboard grid feature points of adjacent views were used as measurement objects to assess the local 3D reconstruction accuracy of a single measurement point. The singular value decomposition approach was used to match the 3D checkerboard feature points and align the adjacent views, and Figure 7 illustrates the 3D feature point matching accuracy analysis. The RMSE of the corresponding checkerboard 3D feature points was 0.18 cm, indicating that the accuracy of the local 3D reconstruction was high.

3.2. Analysis of the Accuracy of Global 3D Crop Population Reconstruction

A total of 96 seedling maize plant samples were reconstructed in global 3D over six cycles in this experiment, and single maize point clouds were extracted for quantitative analysis. The global reconstruction process for the maize plant population is shown in Figure 8. Figure 9 depicts the global reconstruction results for the same group of maize plants at the same position on different dates. Figure 10a,b compare the plant height and maximum width values estimated by the algorithm with the actual manual measurement values.
From Figure 10a, R2 = 0.98, RMSE = 1.39 cm, and MAPE = 1.92%, and the accuracy of the algorithm for measuring plant height was 98.08%. As shown in Figure 10b, R2 = 0.99, RMSE = 1.45 cm, and MAPE = 2.29%, and the accuracy of the algorithm for measuring maximum plant width was 97.71%. The results in Figure 10 reveal that the algorithm measurement error of the maximum width is greater than that of the plant height, mostly because the manual measurement of the maximum width is more involved than that of the plant height. The overall accuracy of this seedling maize population 3D reconstruction technique is high, and the algorithm measurements correlate strongly with the manual measurements.

3.3. Analysis of Standard Sphere Global 3D Reconstruction Accuracy

Because of the limitations of manual measurements, standard foam spheres were chosen as the measurement objects for further evaluation of the global 3D reconstruction accuracy, which was quantified using the Hausdorff distance set. Figure 11a,b show the RGB images of the six foam spheres and a single-measurement-point local 3D reconstruction. The global 3D reconstruction of the standard foam spheres from multiple measurement points is shown in Figure 11c. Figure 11d depicts the standard spheres generated by the CloudCompare software. Figure 12 shows the Hausdorff distance set distributions for standard spheres with diameters of 200, 300, 350, 400, 500, and 600 mm. The distance sets are divided into five segments: 0 cm ≤ HD ≤ 0.2 cm, 0.2 cm < HD ≤ 0.5 cm, 0.5 cm < HD ≤ 0.8 cm, 0.8 cm < HD ≤ 1.2 cm, and HD > 1.2 cm. Averaged over all spheres, 55.26% of the Hausdorff distance set was less than 0.5 cm, approximately 76.83% of the cloud points had distances of less than 0.8 cm, and only approximately 8.73% of the points had distances greater than 1.2 cm, showing that the majority of the standard sphere reconstruction point sets deviate by less than 0.8 cm from their original coordinate positions, and only a few point sets diverge from them.

4. Conclusions

Because most previous crop phenotype measurement methods were only for individual plants, in this study, a self-propelled crop phenotype measurement device for 96 seedling maize plants in six cycles was designed, and a global 3D reconstruction method for the seedling maize plant population was evaluated, with the following main findings.
(1) The RMSE of the feature points corresponding to the local reconstruction of adjacent views was 0.18 cm; 55.26% of the distance set HD between the standard-sphere reconstructed point cloud and the software-generated point cloud was less than 0.5 cm, 76.83% was less than 0.8 cm, and only 8.76% of the point cloud distances were greater than 1.2 cm. This indicated that most of the point clouds did not deviate from their original coordinate positions after alignment, and the reconstruction accuracy of the algorithm is sufficient to meet the phenotypic measurement needs of seedling maize plants.
(2) The MAPE of the maize plant height and maximum width were 1.92% and 2.29%, respectively, compared to real manual measurements. The RMSE values were 1.39 cm and 1.45 cm, respectively, and R2 was 0.98 and 0.99. This demonstrated the high accuracy of the proposed seedling maize population reconstruction.
Through the 3D modeling of seedling maize populations, this study provided supporting data and theoretical guidance for phenotypic characterization and the accurate and intelligent management of maize. Because the alignment transformation matrix was obtained using a pre-calibration method, inter-crop occlusion will not impair the reconstruction accuracy, but it will result in missing information for some crops and will affect crop reconstruction integrity.
The influence of ground leveling was not considered in this study, and the algorithm was tested only on maize seedlings. The applicability and robustness of the algorithm need to be confirmed, and more experimental studies on various growth stages of different crops will be undertaken at a later stage.

Author Contributions

Conceptualization, N.X.; Methodology, N.X. and G.S.; Software, N.X. and Y.B.; Validation, N.X. and X.Z.; Formal Analysis, N.X., Y.B. and X.Z.; Investigation, N.X., J.C. and Y.H.; Writing—Original Draft Preparation, N.X., G.S. and Y.B.; Writing—Review & Editing, N.X., Y.H. and J.C.; Project Administration, G.S.; Funding Acquisition, G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Jiangsu Agricultural Science and Technology Innovation Fund (No. CX(22)3097 and No. CX(21)2006), the Key R&D Program of Jiangsu Province (No. BE2022363), and the High-end Foreign Experts Recruitment Plan of China (No. G2021145009L).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank the editors and reviewers for their comments on improving the quality of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fan, J.; Zhang, Y.; Wen, W.; Gu, S.; Lu, X.; Guo, X. The future of Internet of Things in agriculture: Plant high-throughput phenotypic platform. J. Clean. Prod. 2020, 280, 123651.
2. Zhao, C.; Zhang, Y.; Du, J.; Guo, X.; Wen, W.; Gu, S.; Wang, J.; Fan, J. Crop phenomics: Current status and perspectives. Front. Plant Sci. 2019, 10, 714.
3. Sun, G.; Wang, X.; Liu, J.; Sun, Y.; Ding, Y.; Lu, W. Multi-modal three-dimensional reconstruction of greenhouse tomato plants based on phase-correlation method. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2019, 35, 134–142.
4. Song, P.; Wang, J.; Guo, X.; Yang, W.; Zhao, C. High-throughput phenotyping: Breaking through the bottleneck in future crop breeding. Crop J. 2021, 9, 633–645.
5. Selvaraj, M.G.; Valderrama, M.; Guzman, D.; Valencia, M.; Ruiz, H.; Acharjee, A. Machine learning for high-throughput field phenotyping and image processing provides insight into the association of above and below-ground traits in cassava (Manihot esculenta Crantz). Plant Methods 2020, 16, 87.
6. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339.
7. Qiu, R.; Miao, Y.; Zhang, M.; Li, H. Detection of the 3D temperature characteristics of maize under water stress using thermal and RGB-D cameras. Comput. Electron. Agric. 2021, 191, 106551.
8. Wu, J.; Xue, X.; Zhang, S.; Qin, W.; Chen, C.; Sun, T. Plant 3D reconstruction based on LiDAR and multi-view sequence images. Int. J. Precis. Agric. Aviat. 2018, 1, 37–43.
9. Nguyen, T.T.; Slaughter, D.C.; Max, N.; Maloof, J.N.; Sinha, N. Structured Light-Based 3D Reconstruction System for Plants. Sensors 2015, 15, 18587–18612.
10. Golbach, F.; Kootstra, G.; Damjanovic, S.; Otten, G.; van de Zedde, R. Validation of plant part measurements using a 3D reconstruction method suitable for high-throughput seedling phenotyping. Mach. Vis. Appl. 2016, 27, 663–680.
11. Teng, X.; Zhou, G.; Wu, Y.; Huang, C.; Dong, W.; Xu, S. Three-dimensional reconstruction method of rapeseed plants in the whole growth period using RGB-D camera. Sensors 2021, 21, 4628.
12. Peng, Y.; Yang, M.; Zhao, G.; Cao, G. Binocular-vision-based structure from motion for 3D reconstruction of plants. IEEE Geosci. Remote Sens. Lett. 2021, 19, 8019505.
13. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. Calculation Method for Phenotypic Traits Based on the 3D Reconstruction of Maize Canopies. Sensors 2019, 19, 1201.
14. Lin, X.; Zhang, J. 3D Power Line Reconstruction from Airborne LiDAR Point Cloud of Overhead Electric Power Transmission Corridors. Acta Geod. et Cartogr. Sin. 2016, 45, 347.
15. Liu, R.; Liu, T.; Dong, R.; Li, Z.; Zhu, D.; Su, W. 3D modeling of maize based on terrestrial LiDAR point cloud data. J. China Agric. Univ. 2014, 19, 196–201.
16. Zheng, C.; Abd-Elrahman, A.; Whitaker, V. Remote sensing and machine learning in crop phenotyping and management, with an emphasis on applications in strawberry farming. Remote Sens. 2021, 13, 531.
17. Peng, C.; Li, S.; Miao, Y. Stem-leaf segmentation and phenotypic trait extraction of tomatoes using three-dimensional point cloud. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2022, 38, 187–194. (In Chinese with English abstract)
18. Hu, P.; Guo, Y.; Li, B.; Zhu, J.; Ma, Y. Three-dimensional reconstruction and its precision evaluation of plant architecture based on multiple view stereo method. Trans. Chin. Soc. Agric. Eng. (Trans. CSAE) 2015, 31, 209–214. (In Chinese with English abstract)
19. He, J.Q.; Harrison, R.J.; Li, B. A novel 3D imaging system for strawberry phenotyping. Plant Methods 2017, 13, 93.
20. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
21. Sorkine-Hornung, O.; Rabinovich, M. Least-squares rigid motion using SVD. Computing 2017, 1, 1–5.
22. Zhang, K.; Chen, H.; Wu, H.; Zhao, X.; Zhou, C. Point cloud registration method for maize plants based on conical surface fitting—ICP. Sci. Rep. 2022, 12, 6852.
23. Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247.
24. Lin, C.; Wang, H.; Liu, C.; Gong, L. 3D reconstruction based plant-monitoring and plant-phenotyping platform. In Proceedings of the 2020 3rd World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), Shanghai, China, 4–6 December 2020; pp. 522–526.
25. Zou, W.; Shen, D.; Cao, P.; Lin, C.; Zhu, J. Fast Positioning Method of Truck Compartment Based on Plane Segmentation. IEEE J. Radio Freq. Identif. 2022, 6, 774–778.
26. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112.
27. Bi, S.; Wang, Y. LiDAR Point Cloud Denoising Method Based on Adaptive Radius Filter. Trans. Chin. Soc. Agric. Mach. 2021, 52, 234–243. (In Chinese with English abstract)
28. Liu, M.; Shao, Y.; Li, R.; Wang, Y.; Sun, X.; Wang, J.; You, Y. Method for extraction of airborne LiDAR point cloud buildings based on segmentation. PLoS ONE 2020, 15, e0232778.
29. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395.
30. Das Choudhury, S.; Maturu, S.; Samal, A.; Stoerger, V.; Awada, T. Leveraging Image Analysis to Compute 3D Plant Phenotypes Based on Voxel-Grid Plant Reconstruction. Front. Plant Sci. 2020, 11, 521431.
Figure 1. Physical picture of test scene. (a). Collection device; (b). crop plants.
Figure 2. Schematic diagram of hardware structure.
Figure 3. Flow chart of panoramic three-dimensional reconstruction of crops in self-propelled greenhouse.
Figure 4. Acquisition process of 3D point cloud (a). Plant RGB image; (b) plant depth image; (c). plant 3D point cloud; (d). checkerboard grid infrared images; (e). similar triangle principle.
Figure 5. Pre-calibration diagram of multi-view alignment matrix for single-side measurement points.
Figure 6. 3D point cloud data acquisition. (a). Multi-view 3D point cloud data acquisition; (b). multi-measuring points 3D point cloud data acquisition.
Figure 7. Matching accuracy analysis of 3D feature points.
Figure 8. 3D reconstruction process of crop plants. (a). Local 3D reconstruction; (b). direct filtering; (c). ultra-green component value denoising; (d). 3D point cloud coarse registration; (e). ICP fine registration; (f). radius filtering and statistical filtering denoising.
Figure 9. Global 3D point cloud model reconstruction of crop plants. (a). May 28th; (b). June 7th; (c). June 17th; (d). June 27th.
Figure 10. Comparison of artificial and algorithm measured values of crop plants. (a). Plant height; (b). maximum width.
Figure 11. Standard sphere 3D point cloud model reconstruction. (a). Standard sphere RGB image; (b). local 3D reconstruction of the standard sphere; (c). global 3D reconstruction of the standard sphere; (d). standard sphere generation diagram.
Figure 12. HD distribution ratio of standard spherical distance set. Diameters of (a). 200 mm; (b). 300 mm; (c). 350 mm; (d). 400 mm; (e). 500 mm; (f). 600 mm.