Technical Note

3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery

1 School of Engineering and Computing Sciences, Texas A&M University-Corpus Christi, Corpus Christi, TX 78412, USA
2 Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
3 Department of Civil Engineering, Gyeongsang National University, Jinju, Gyeongsangnam-do 52828, Korea
4 Texas A&M AgriLife Research and Extension at Corpus Christi, Corpus Christi, TX 78406, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(2), 282; https://doi.org/10.3390/rs13020282
Submission received: 11 December 2020 / Revised: 7 January 2021 / Accepted: 14 January 2021 / Published: 15 January 2021
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)

Abstract

Sorghum is one of the most important crops worldwide. An accurate and efficient high-throughput phenotyping method for individual sorghum panicles is needed for assessing genetic diversity, variety selection, and yield estimation. High-resolution imagery acquired using an unmanned aerial vehicle (UAV) provides a high-density 3D point cloud with color information. In this study, we developed a method for detecting and characterizing individual sorghum panicles using a 3D point cloud derived from UAV images. The RGB color ratio was used to filter out non-panicle points and select potential panicle points. Individual sorghum panicles were detected using the concept of individual tree identification. Panicle length and width were determined from the potential panicle points. We propose cylinder fitting and disk stacking to estimate individual panicle volumes, which are directly related to yield. The results showed that the correlation coefficients between UAV-based and ground measurements of average panicle length and width were 0.61 and 0.83, respectively. The UAV-derived panicle length and diameter were more highly correlated with panicle weight than the ground measurements. Cylinder fitting and disk stacking yielded R2 values of 0.77 and 0.67 against the actual panicle weight, respectively. The experimental results showed that a 3D point cloud derived from UAV imagery can provide reliable and consistent individual sorghum panicle parameters, which were highly correlated with ground measurements of panicle weight.

1. Introduction

Sorghum is commonly used for human consumption, as livestock feed, and in ethanol fuel production [1]. There are numerous varieties, including grain sorghums used for human food, and forage sorghum for livestock hay and fodder [2]. Sorghum products also play an important role in food security because sorghum is one of the leading cereal crops worldwide, and it is a principal source of energy and nutrition for humans [3]. Although final grain yield can potentially be estimated by measuring the plant population and the weight per panicle [4], traditional hand-sampling methods in the field are labor-intensive, time-consuming, and prone to human error. Therefore, an accurate and efficient method to measure crop phenotypes is required to enable research scientists and breeders to increase sorghum product yield and to develop improved cultivars [5].
The rapid development of unmanned aerial vehicle (UAV) and sensor technologies in recent years has enabled the collection of very-high-resolution images and high-throughput phenotyping (HTP) from remotely sensed imagery data [6]. Crop phenotypes, such as plant height, canopy cover, and vegetation indices, can be estimated with high accuracy and reliability from UAV images for agricultural applications [6,7,8,9]. UAV-based phenotypic data have been utilized to predict cotton yield and select high-yielding varieties [10]. Ashapure et al. [11] demonstrated that multi-temporal UAV data can provide more consistent measurements, and that UAV-based phenotypes are significant features for monitoring the difference between tillage and no-tillage management. In addition, artificial intelligence (AI) algorithms have been adopted for yield estimation of soybean and cotton using UAV-based HTP [12,13]. Many studies have verified that UAV imagery can provide high-quality phenotypes and is feasible for agricultural applications [14].
In sorghum breeding plots, Chang et al. [7] and Watanabe et al. [15] showed that the plant height estimated from a UAV-based digital surface model (DSM) is highly correlated with the plant height measured with rulers in the field. Although different sorghum heights were found according to the UAV image specifications and field conditions, it was verified that UAV data could provide reliable plant height. Shafian et al. [16] evaluated the performance of a fixed-wing UAV system for quantification of crop growth parameters of sorghum, including leaf area index (LAI), fractional vegetation cover (fc), and yield. UAV-derived crop parameters have also been used to estimate the biomass and yield for cereal crops, such as maize, corn, and rice [17,18,19].
Although UAV data have shown good performance in estimating crop parameters and predicting yield, most researchers have used UAV-based phenotypes at the plot level. Phenotypic data at the individual plant scale, such as panicle count, number of seeds per panicle, and panicle size (length and width), play key roles in assessing genetic diversity, selecting new cultivars, and estimating potential yields [20,21,22]. However, the hand-sampling method used to measure panicle traits has proved to be a bottleneck for sorghum crop improvement [23,24]. More recently, deep learning has shown tremendous potential for detecting and counting sorghum panicles in UAV images [5,24]. However, a large number of training images is required to obtain robust and accurate machine-learning models, and constructing large training sets requires considerable time and labor. A terrestrial LiDAR (light detection and ranging) method has also been used for the detection and measurement of individual sorghum panicles, including their length, width, and height [25]. Although terrestrial LiDAR can provide a high-density 3D point cloud, the point density is not uniform; it depends on the location of the LiDAR sensor and the distance between the sensor and the objects.
Structure from motion (SfM) techniques can generate colored 3D point clouds from UAV images. When a UAV is flown at a low altitude with high overlap under stable atmospheric conditions, point clouds dense enough to extract phenotypic data of individual sorghum panicles can be produced. The aim of this study is to develop a high-throughput method for detecting and characterizing individual sorghum panicles from UAV-derived 3D point clouds. Figure 1 shows the overall procedure employed to characterize sorghum panicles from UAV imagery and compare the estimated phenotypes with ground measurements. Very-high-resolution imagery is collected using a UAV platform and processed to generate an orthomosaic, a DSM, and a colored 3D point cloud. The color ratio is adopted to select potential panicle points. Then, an individual tree identification method is applied to detect individual sorghum panicles. We propose a phenotyping method for estimating panicle length, width, and volume. The panicle parameters estimated from the UAV-based 3D point cloud are compared with ground measurements.

2. Materials and Methods

2.1. Study Area and Data Collection

The study area was located at the research farm of the Texas A&M AgriLife Research and Extension Center in Corpus Christi, TX, USA. Three types of grain sorghum (100 pedigrees) and one forage sorghum (15 pedigrees), with four replications, were planted on 28 March 2016, in north-south-oriented two-row plots. The single-row size was 1 × 5 m. There were a total of 784 rows, including filler and border rows. Grain sorghum plots were selected to extract 3D information of panicles growing at the top of the plant. The panicle length and diameter were manually measured by ruler in the field in early June. At the end of the growing season, panicle samples were collected over a 1 m span in the middle of each row. Whole plants were cut and moved to the lab to measure the panicle weight after drying. The total panicle weight in each plot was used to calculate the correlation with the UAV measurements.
UAV RGB images were collected using a DJI Phantom 4 (DJI, Shenzhen, China) quad-copter platform on 7 June 2016, at a 10 m altitude with an 85% overlap to generate an ultra-high-density 3D point cloud. The four flight missions were conducted at a flight speed of about 1 m/s under very calm wind conditions (<2.5 m/s) to circumvent plant-motion effects that can produce image misalignment and sparse point density in 3D point cloud generations due to insufficient tie-points and blurred imagery. Ground control points (GCPs) were also installed around the sorghum field and surveyed by PPK (post-processed kinematic) GPS (global positioning system) devices with sub-centimeter accuracy to precisely eliminate the bowl effect and georeferencing in SfM processing [7,26].

2.2. UAV Data Pre-Processing

The Agisoft Photoscan Pro (AgiSoft LLC, St. Petersburg, Russia), a commercial SfM software, was employed to generate the 3D point cloud, the DSM (digital surface model), and the orthomosaic image from UAV data. The ground sampling distance (GSD) for the orthomosaic image and DSM was 4.4 mm and 8.7 mm, respectively (Figure 2a,b). The point density of the RGB-colored 3D point cloud was 1.32 points/cm2 (Figure 2c). The orthomosaic image, DSM, and 3D point cloud were clipped using a single row boundary to detect sorghum panicles and extract the phenotypes for each variety.

2.3. Sorghum Panicle Detection

In this study, blooming plots were selected to extract panicle parameters, because panicles appear differently colored compared to other objects such as leaves, soil, and shadow. Potential blooming panicle pixels, which are yellowish, were extracted from the clipped orthomosaic image using the RGB color ratio given by Equation (1):
R/G > 0.9  and  B/G < 0.75    (1)
where R, G, and B are the pixel values of the red, green, and blue bands of the RGB orthomosaic image, respectively. In an RGB color system, a yellowish pixel is composed of high red, high green, and low blue values. Therefore, the ratio among red, green, and blue bands was adopted to extract the panicles. Patrignani and Ochsner [27] developed a Canopeo algorithm to extract green pixels using the ratio parameters in RGB images. We used the reversed conditions to select yellowish pixels. The threshold values were empirically determined in this study. Morphological operations (opening and closing) were applied to remove speckle noise and fill holes in the binary classification result obtained using Equation (1). Elevation values of potential panicles in the selected area were used to determine the geometrical characteristics of individual panicles. Figure 2d shows an example of the concept of individual panicle detection. The concept of identifying individual tree crowns was adopted [28]. After extracting local maximum points, boundary points were searched using a threshold of 50 cm through eight directions (every 45°) around each local maximum; that is, pixels with an elevation difference from neighbor pixels larger than the threshold were determined to be boundary pixels. The panicle cross section was assumed to be circular, and Käsa’s circle-fitting algorithm [29] was used to calculate the center and diameter of a suitable circle. Horizontal 3D points within the fitted circle were selected as panicle points, among which those that did not satisfy the RGB color ratio determined using Equation (1) were filtered out.
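The color-ratio test of Equation (1) and the circle fitting used to delineate each panicle can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: function names are ours, the division-by-zero guard is an added assumption, and the morphological opening/closing and boundary-point search steps are omitted.

```python
import numpy as np

def panicle_mask(rgb):
    """Select potential blooming-panicle (yellowish) pixels per Equation (1):
    R/G > 0.9 and B/G < 0.75."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    g = np.where(g == 0, 1e-6, g)  # guard against division by zero
    return (r / g > 0.9) & (b / g < 0.75)

def kasa_circle_fit(x, y):
    """Kasa's least-squares circle fit. The circle equation
    x^2 + y^2 = 2*cx*x + 2*cy*y + c is linear in (cx, cy, c),
    so the center and radius follow from one linear solve,
    with r = sqrt(c + cx^2 + cy^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```

In the workflow described above, the boundary points found around each local maximum would be passed to `kasa_circle_fit` to obtain the panicle center and diameter.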

2.4. 3D Characterization of Sorghum Panicles

The 3D characteristics of individual panicles, such as the length, diameter, and volume, were estimated from the RGB-colored panicle points for each plot (Figure 3c). The UAV-based panicle length (L_c) was estimated as the elevation difference between the highest and lowest points of the individual panicle points, and the UAV-based panicle diameter (D_c) was estimated as the diameter of the circle fitted during individual panicle detection (Figure 3a).
The volume of individual panicles was calculated from the 3D point cloud by two methods: (1) cylinder fitting and (2) disk stacking. Cylinder fitting used all the panicle points to calculate a cylinder volume from the estimated panicle length and diameter (Figure 3a). Disk stacking was used to capture the details of the elongated ellipsoidal shape of the panicle (Figure 3b). Individual panicle points were vertically divided into multiple layers of constant height (L_d) to select the points within each disk. The diameter (D_d,i) of the i-th disk, counted from top to bottom, was determined by fitting a circle to the selected disk points. When an insufficient number of points or a poor point distribution resulted in an unreasonable disk diameter (too large or too small), the previous disk diameter was used instead. All disk volumes were summed to estimate the individual panicle volume.
Figure 3d,e provide an example of how cylinder fitting and disk stacking are used to estimate the volume of an individual panicle. Disk stacking can capture fine details of the panicle shape, but could be more sensitive to the point density and distribution of panicle points than cylinder fitting.
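The two volume estimators can be sketched as follows. This is a simplified illustration under stated assumptions: the layer thickness (2 cm) is our placeholder, and each disk diameter is approximated from the horizontal extent of the layer rather than by circle fitting as in the actual method; the fallback to the previous disk's diameter mirrors the rule described above.

```python
import numpy as np

def cylinder_volume(points):
    """Single-cylinder estimate: length L_c from the elevation range,
    diameter D_c approximated here from the horizontal extent of all points."""
    length = points[:, 2].max() - points[:, 2].min()
    diameter = (np.ptp(points[:, 0]) + np.ptp(points[:, 1])) / 2.0
    return np.pi * (diameter / 2.0) ** 2 * length

def disk_stack_volume(points, disk_height=0.02):
    """Sum of horizontal disk volumes, slicing the panicle points from top
    to bottom with a constant layer thickness L_d (assumed 2 cm here).
    When a layer has too few points for a stable diameter, the previous
    disk's diameter is reused."""
    z = points[:, 2]
    z_top, z_bot = z.max(), z.min()
    n_disks = max(1, int(np.ceil((z_top - z_bot) / disk_height)))
    volume, prev_d = 0.0, 0.0
    for i in range(n_disks):
        hi = z_top - i * disk_height
        lo = hi - disk_height
        layer = points[(z >= lo) & (z <= hi)]
        if len(layer) >= 3:
            d = (np.ptp(layer[:, 0]) + np.ptp(layer[:, 1])) / 2.0
        else:
            d = prev_d  # insufficient points: reuse the previous disk diameter
        prev_d = d
        volume += np.pi * (d / 2.0) ** 2 * disk_height
    return volume
```

For a perfectly cylindrical point distribution both estimators agree; for a tapered panicle, disk stacking tracks the changing diameter per layer while cylinder fitting returns a single bounding volume.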

3. Results and Discussion

3.1. Comparison of Panicle Numbers

A total of seventeen varieties at the blooming stage were selected to discriminate panicles from other objects, such as soil and leaves, in the 3D point cloud. Since the panicles were sampled over a 1 m span in the middle of each row, the total number of panicles in two rows (a single variety) was counted by visual assessment of the orthomosaic image. There were a few errors in detecting individual panicles. For example, closely located, overlapping panicles were sometimes classified as a single panicle, while small or green panicles that had not yet matured were not detected properly. Despite these errors, the correlation coefficient and the R2 value of the linear regression were 0.91 and 0.83, respectively (Figure 4). If UAV imagery is collected at the proper time at very high resolution, individual sorghum panicles can be detected using the proposed method without a field survey, which would be efficient and useful for variety selection in breeding programs.

3.2. Evaluation of Panicle Length and Diameter

As a limited number of panicle samples were collected in the field, the averages of all ground-measured panicle lengths and diameters were compared with those estimated from UAV data for each variety (Figure 5). The R2 values of the linearly regressed panicle length and diameter were 0.38 and 0.69, respectively. Although the panicle length and diameter extracted from UAV data were underestimated and overestimated, respectively, a linear trend between field and UAV measurements was found. The UAV and field measurements showed high correlation coefficients for panicle length (0.62) and diameter (0.83). The UAV-based panicle length exhibited a weaker correlation and a wider distribution than the UAV-based panicle diameter, which implies that the panicle diameter is the more stable parameter to extract from UAV images.
The 3D point cloud generated from UAV images had a nonuniform point density due to image misalignment, camera angle, and blurred images. Thus, 3D points dense enough to express smooth panicle surfaces could not always be generated; that is, 3D points on one side or the bottom of a panicle can be missed when these parts are not directly captured by the camera. For these reasons, the panicle diameter can be estimated more reliably, and with less sensitivity to the point density, than the panicle length from UAV-imagery-based 3D point clouds. An insufficient number of panicle points can introduce variability into the panicle length, which was estimated from the highest and lowest elevations in this study. The panicle diameter was determined by fitting a circle to boundary points located around a horizontal cross section of the panicle, which reduces the possibility of estimating highly unrealistic panicle diameters.

3.3. Correlation between Panicle Phenotypes and Weight

We analyzed the correlations of panicle weight with the panicle length, diameter, and volume estimated by the proposed methods. Figure 6a compares the panicle weights of the ground samples in each plot with the average panicle length obtained from the field data (red dots and line) and the UAV measurements (blue dots and line). The UAV-derived panicle length exhibited a linear trend with the weight, with a very high R2 value (0.9). Although the ground measurements of panicle length and weight were also positively correlated, the wide scatter in the data resulted in an R2 value below 0.5. The average UAV-derived panicle diameter in each plot was also more highly correlated with panicle weight than the ground measurements. The panicle diameters obtained from field sampling were distributed within an approximately 2 cm range, whereas the UAV measurements were spread over a wider range (Figure 6b).
The advantage of applying UAV-based phenotyping to sorghum panicles is that a large number of samples can be measured more efficiently. As field sampling is labor-intensive and time-consuming, only a limited number of samples can be collected over entire plots or rows. By comparison, UAV data can provide more observations over a larger area. The results of this study show that UAV-imagery-based 3D point clouds can provide more reliable panicle parameters, which are more highly correlated with the actual panicle weight than field measurements obtained by hand-sampling.
A linear regression model was employed to fit the ground-measured panicle weight to the total panicle volume obtained by cylinder fitting and disk stacking for each variety. The R2 value for a linear fit of the data obtained using cylinder fitting (0.77) was higher than that obtained using disk stacking (0.67) (Figure 7). The correlation coefficients between UAV and field measurements were also calculated as 0.88 and 0.82 for cylinder fitting and disk stacking, respectively. There was a very strong relationship between the UAV-derived panicle volume and the field-measured panicle weight. Although cylinder fitting estimated a panicle volume approximately two times larger than that calculated using disk stacking, the panicle volumes obtained using both proposed methods were highly correlated with the field-measured panicle weight. If a sufficiently dense 3D point cloud can be acquired from very-high-resolution images, the panicle weight could be estimated more efficiently without destructive sampling in the field.

4. Conclusions

Individual sorghum panicles were characterized using a UAV-imagery-based 3D point cloud in this study. An ultra-high-density point cloud was generated from UAV RGB images collected at a 10 m altitude. Despite the limitations of the 3D point cloud generated from the UAV images, individual panicle parameters, such as the length and diameter, were successfully estimated from RGB-colored 3D panicle points. The proposed cylinder-fitting and disk-stacking methods performed well in calculating individual panicle volumes at the plot level, which were highly correlated with ground measurements of panicle weight.
In the current study, it was necessary to generate a canopy height model (CHM) or remove the ground elevation to estimate the 3D parameters of individual panicles. It should also be possible to apply the proposed method to 3D point clouds obtained using LiDAR sensors to extract phenotypic data for sorghum. However, a main limitation of this study is the need to collect high-quality images at a very low altitude in fine weather during the blooming season. In the future, we will test different types of 3D point clouds to estimate phenotypic information for various crops.

Author Contributions

Conceptualization, A.C., J.J.; methodology, A.C., J.J.; software, A.C., J.J.; validation, A.C., J.Y.; formal analysis, A.C.; investigation, A.C.; resources, A.C., J.Y., J.J.; data curation, A.C., J.Y.; writing—original draft preparation, A.C., J.J., J.Y., J.L.; writing—review and editing, A.C., J.Y., J.J., J.L.; visualization, A.C., J.Y.; supervision, J.J., J.L.; project administration, J.J., J.L.; funding acquisition, J.J., J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Acknowledgments

This work was supported by Texas A&M AgriLife Research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xiong, Y.; Zhang, P.; Warner, R.D.; Fang, Z. Sorghum Grain: From Genotype, Nutrition, and Phenolic Profile to Its Health Benefits and Food Applications. Compr. Rev. Food. Sci. Food Saf. 2019, 18, 2025–2046. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Dahlberg, J.; Hutmacher, B.; Wright, S. Sorghum: An alternative feed, hay and forage. In Proceedings of the 2015 Western Alfalfa & Forage Symposium, Reno, NV, USA, 2–4 December 2015; Available online: https://alfalfa.ucdavis.edu/+symposium/2015/PDFfiles/Dahlberg%20Jeff.pdf (accessed on 11 December 2020).
  3. Rooney, L.W.; Waniska, R.D. Sorghum food and industrial utilization. In Sorghum: Origin, History, Technology, and Production; Smith, C.W., Frederiksen, R.A., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2000; Volume 2, pp. 589–729. [Google Scholar]
  4. Norman, D.W.; Worman, F.D.; Siebert, J.D.; Modiakgotla, E. The Farming Systems Approach to Development and Appropriate Technology Generation; Food and Agriculture Organization of the United Nations: Rome, Italy, 1995. [Google Scholar]
  5. Lin, Z.; Guo, W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Front. Plant Sci. 2020, 11, 534853. [Google Scholar] [CrossRef] [PubMed]
  6. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef] [Green Version]
  7. Chang, A.; Jung, J.; Maeda, M.M.; Landivar, J. Crop height monitoring with digital imagery from Unmanned Aerial System (UAS). Comput. Electron. Agric. 2017, 141, 232–237. [Google Scholar] [CrossRef]
  8. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A Comparative Study of RGB and Multispectral Sensor-Based Cotton Canopy Cover Modelling Using Multi-Temporal UAS Data. Remote Sens. 2019, 11, 2757. [Google Scholar] [CrossRef] [Green Version]
  9. Yeom, J.; Jung, J.; Chang, A.; Ashapure, A.; Maeda, M.; Maeda, A.; Landivar, J. Comparison of Vegetation Indices Derived from UAV Data for Differentiation of Tillage Effects in Agriculture. Remote Sens. 2019, 11, 1548. [Google Scholar] [CrossRef] [Green Version]
  10. Jung, J.; Maeda, M.; Chang, A.; Landivar, J.; Yeom, J.; McGinty, J. Unmanned Aerial System Assisted Framework for the Selection of High Yielding Cotton Genotypes. Comput. Electron. Agric. 2018, 152, 74–81. [Google Scholar] [CrossRef]
  11. Ashapure, A.; Jung, J.; Yeom, J.; Chang, A.; Maeda, M.; Maeda, A.; Landivar, J. A novel framework to detect conventional tillage and no-tillage cropping system effect on cotton growth and development using multi-temporal UAS data. ISPRS J. Photogramm. Remote Sens. 2019, 152, 49–64. [Google Scholar] [CrossRef]
  12. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Yeom, J.; Maeda, M.; Maeda, A.; Dube, N.; Landivar, J.; Hague, S.; et al. Developing a machine learning based cotton yield estimation framework using multi-temporal UAS data. ISPRS J. Photogramm. Remote Sens. 2020, 169, 180–194. [Google Scholar] [CrossRef]
  13. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  14. Jung, J.; Maeda, M.; Chang, A.; Bhandar, M.; Ashapure, A.; Landivar-Bowles, J. The potential of remote sensing and artificial intelligence as tools to improve the resilience of agriculture production systems. Curr. Opin. Biotechnol. 2021, 70, 15–22. [Google Scholar] [CrossRef] [PubMed]
  15. Watanabe, K.; Guo, W.; Arai, K.; Takanashi, H.; Kajiya-Kanegae, H.; Kobayashi, M.; Yano, K.; Tokunaga, T.; Fujiwara, T.; Tsutsumi, N.; et al. High-Throughput Phenotyping of Sorghum Plant Height Using an Unmanned Aerial Vehicle and Its Application to Genomic Prediction Modeling. Front. Plant Sci. 2017, 8, 421. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Shafian, S.; Rajan, N.; Schnell, R.; Bagavathiannan, M.; Valasek, J.; Shi, Y.; Olsenholler, J. Unmanned aerial systems-based remote sensing for monitoring sorghum growth and development. PLoS ONE 2018, 13, e0196605. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Niu, Y.; Zhang, L.; Zhang, H.; Han, W.; Peng, X. Estimating above-ground biomass of maize using features derived from UAV-based RGB imagery. Remote Sens. 2019, 11, 1261. [Google Scholar] [CrossRef] [Green Version]
  18. Furukawa, F.; Maruyama, K.; Saito, Y.K.; Kaneko, M. Corn Height Estimation Using UAV for Yield Prediction and Crop Monitoring. In Unmanned Aerial Vehicle: Applications in Agriculture and Environment; Avtar, R., Watanabe, T., Eds.; Springer: Cham, Switzerland, 2020; pp. 51–69. [Google Scholar]
  19. Zhou, X.; Zheng, H.B.; Xu, X.Q.; He, J.Y.; Ge, X.K.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.X.; Tian, Y.C. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 246–255. [Google Scholar] [CrossRef]
  20. Rooney, W.; Smith, C.W. Techniques for developing new cultivars. In Sorghum, Origin, History, Technology and Production; Wayne, S.C., Frederiksen, R.A., Eds.; John Wiley & Sons: New York, NY, USA, 2000; pp. 329–347. [Google Scholar]
  21. Boyles, R.E.; Pfieffer, B.K.; Cooper, E.A.; Zielinski, K.J.; Myers, M.T.; Rooney, W.L.; Kresovich, S. Quantitative trait loci mapping of agronomic and yield traits in two grain sorghum biparental families. Crop Sci. 2017, 57, 2443–2456. [Google Scholar] [CrossRef] [Green Version]
  22. Maman, N.; Mason, S.C.; Lyon, D.J.; Dhungana, P. Yield components of pearl millet and grain sorghum across environments in the central great plains. Crop Sci. 2004, 44, 2138–2145. [Google Scholar] [CrossRef]
  23. Araus, J.L.; Cairns, J.E. Field high-throughput phenotyping: The new crop breeding frontier. Trends Plant Sci. 2014, 19, 52–61. [Google Scholar] [CrossRef]
  24. Malambo, L.; Popescu, S.; Ku, N.W.; Rooney, W.; Zhou, T.; Moore, S. A Deep Learning Semantic Segmentation-Based Approach for Field-Level Sorghum Panicle Counting. Remote Sens. 2019, 11, 2939. [Google Scholar] [CrossRef] [Green Version]
  25. Malambo, L.; Popescu, S.C.; Horne, D.W.; Pugh, N.A.; Rooney, W.L. Automated detection and measurement of individual sorghum panicles using density-based clustering of terrestrial lidar data. ISPRS J. Photogramm. Remote Sens. 2019, 149, 1–13. [Google Scholar] [CrossRef]
  26. Mesas-Carrascosa, F.J.; Torres-Sánchez, J.; Clavero-Rumbao, I.; García-Ferrer, A.; Peña, J.M.; Borra-Serrano, I.; López-Granados, F. Assessing Optimal Flight Parameters for Generating Accurate Multispectral Orthomosaicks by UAV to Support Site-Specific Crop Management. Remote Sens. 2015, 7, 12793–12814. [Google Scholar] [CrossRef] [Green Version]
  27. Patrignani, A.; Ochsner, T.E. Canopeo: A Powerful New Tool for Measuring Fractional Green Canopy Cover. Agron. J. 2015, 107, 2312–2320. [Google Scholar] [CrossRef] [Green Version]
  28. Chang, A.; Eo, Y.; Kim, Y.; Kim, Y. Identification of individual tree crowns from LiDAR data using a circle fitting algorithm with local maxima and minima filtering. Remote Sens. Lett. 2013, 4, 29–37. [Google Scholar] [CrossRef]
  29. Corral, C.A.; Lindquist, C.S. On implementing Käsa’s circle fit procedure. IEEE Trans. Instrum. Meas. 1998, 47, 789–795. [Google Scholar] [CrossRef]
Figure 1. A workflow of sorghum panicle characterization.
Figure 2. Subset of (a) the orthomosaic image and (b) the digital surface model (DSM); (c) perspective view of RGB-colored 3D point cloud for a single row; (d) example of individual panicle detection in DSM, where red and blue dots indicate local maximum and boundary points, respectively; the most suitable circle (red circle) was fitted using boundary points for an individual panicle.
Figure 3. Schematic for calculating individual panicle volumes using (a) the cylinder-fitting method, and (b) the disk-stacking method; and an example showing (c) individual panicle points selected using RGB color and panicle volume estimated using (d) cylinder fitting and (e) disk stacking.
Figure 4. Relationships of panicle numbers by visual assessment and using the proposed method to detect individual sorghum panicles from the UAV image. The red line indicates the 1-to-1 line.
Figure 5. Relationships between field and UAV-based measurements of average panicle (a) length and (b) diameter in the selected varieties. The red line indicates the 1-to-1 line.
Figure 6. Relationships between yield versus average panicle (a) length and (b) diameter of selected varieties; field and UAV measurements are shown in red and blue, respectively.
Figure 7. Relationships for ground-measured panicle weight versus panicle volume of selected varieties obtained by cylinder fitting (blue) and disk stacking (red); scattered points were fitted using a linear regression model.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Chang, A.; Jung, J.; Yeom, J.; Landivar, J. 3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery. Remote Sens. 2021, 13, 282. https://doi.org/10.3390/rs13020282