Article

Forests Growth Monitoring Based on Tree Canopy 3D Reconstruction Using UAV Aerial Photogrammetry

1 Faculty of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou 310023, China
2 College of Optical Science and Engineering, Zhejiang University, Hangzhou 310007, China
* Author to whom correspondence should be addressed.
Forests 2019, 10(12), 1052; https://doi.org/10.3390/f10121052
Submission received: 28 October 2019 / Revised: 15 November 2019 / Accepted: 18 November 2019 / Published: 20 November 2019
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

Land cover monitoring is a major task for remote sensing. Compared with traditional forest monitoring methods, which mostly use orthophotography from satellites or aircraft, there has been very little research on using 3D canopy structure to monitor forest growth. Unmanned aerial vehicle (UAV) aerial photogrammetry is a novel and feasible way to generate timely, high-resolution 3D canopy images of forests. In spring, forests are expected to grow rapidly. In this research, we used a small UAV to monitor campus forest growth in spring at 2-day intervals. Each time, 140 images were acquired and a dense point cloud of the ground surface was reconstructed at high precision. The Excess Green index (ExG) was used to extract the green canopy points. The segmented point cloud was triangulated into a mesh using the greedy projection triangulation method, and the mesh area was calculated. Forest canopy growth was analyzed at three levels: forest level, selected group level, and individual tree level. A logistic curve was used to fit the time-series canopy growth, and strong correlations were found: R2 = 0.8517 at the forest level, R2 = 0.9652 at the selected group level, and R2 = 0.9606 at the individual tree level. Moreover, high correlation was found between canopy area and canopy coverage. From these results, we conclude that the ground 3D model is a useful data type for monitoring forest growth, and that UAV aerial remote sensing has advantages for monitoring forests in periods when the ground vegetation is growing and changing fast.

1. Introduction

Remote sensing has been accepted as an important and effective method of forest monitoring since it can cover a large area in a short time without going into the forest. Governments seek technological improvements to increase the speed and cost efficiency of conducting inventories while simultaneously increasing the precision and timeliness of an ever-widening array of estimates. The advent of low-cost, widely available, remotely sensed data has been the basis for many of the important recent technological improvements. Remotely sensed data have not only contributed to increasing the speed, cost efficiency, precision, and timeliness of inventories, but have also facilitated the construction of maps of forest attributes with spatial resolutions and accuracies that were not feasible even a few years ago. Researchers have used different remote sensing methods on many platforms for tree mortality [1], above-ground biomass [2], forest inventory [3], tree height [4], time series analysis [5], forest carbon dynamics [6], and seasonal variation evaluation [7]. Satellite-based remote sensors have high temporal and global coverage. However, space-borne remote sensing platforms have their own limitations. One cannot expect data to be acquired on specific dates or at a specific time; data acquisition depends upon the satellite’s revisit or temporal resolution. If the satellite sensors do not operate in microwave wavelengths, obtaining cloud-free data is yet another challenge [8]. The improvement of satellite sensor spatial resolution has led to higher definition (from the original 30 m of Landsat-4 to current sub-meter levels). However, the price of this sub-meter resolution data (e.g., GeoEye and WorldView) is too high for wide-area monitoring. These limitations can easily be overcome by the use of unmanned aerial vehicles (UAVs).
UAVs are remotely controlled, uninhabited, reusable motorized aerial vehicles that can carry various types of payloads or cameras designed for specific purposes. Such platforms have the flexibility to choose appropriate data acquisition periods and data types (multispectral, hyperspectral, thermal, microwave, or light detection and ranging (LIDAR)). They can also adjust flying heights to obtain very high spatial resolution (over 1 cm ground resolution) while maintaining viewing angles and forward and side overlap ratios. The cost of these data acquisition methods is substantially lower than that of high-resolution satellite imagery. UAV-based remote sensing has the potential to bridge the gap between sparse and discontinuous field observations and continuous but coarse resolution space-borne remote sensing.
It is common practice to use vegetation indices derived from regular images or orthophotography acquired from remote sensing platforms for forest monitoring. Morresi [9] used different Landsat-derived spectral vegetation indices (SVIs), including the normalized difference vegetation index (NDVI), normalized difference moisture index (NDMI), and normalized burn ratio (NBR), to track post-fire recovery in burned forests of the central Apennines (Italy) at different development stages. Zhao [10] developed a set of continuous annual land cover mapping products at 30 m resolution using multi-temporal Landsat images. Vegetation indices are computed from mathematical operations on several spectral bands and provide a concise and straightforward way to segment ground cover. Cao [11] proposed a spatio-temporal Savitzky-Golay filter to obtain better NDVI time-series data. Due to the limitations of the remote sensing platform and its mounted sensors, it is difficult to make full use of the images.
Researchers are exploring the possibility of using digital surface model (DSM) information for problems such as above-bottom biomass [12], tree height quantification [13,14,15], mapping pre-fire forest structure [16], and mapping wetland inundation dynamics [17]. Of these, tree height is the most important. At present, the DSM generated from LIDAR or photogrammetry is projected to the ground to form height-relevant images for further analysis. Cen [18] transformed the DSM point cloud of a rice field into the Hcsm (height extracted from crop surface model) index and found a high correlation (R2 = 0.89) with nitrogen treatment. However, the internal structure of the DSM has not been further studied.
In this research, we used the time-series point cloud of a forest generated from UAV aerial photogrammetry to study how the canopy grows during spring. Forests are expected to change greatly during spring as trees turn green and branches extend, which leads to significant changes in canopy structure: for example, the area of the green part increases, and the height and coverage also increase. This study could hardly be done using satellites or aircraft due to the short time interval and low altitude required. A lower altitude gives higher ground resolution and a more precise DSM, while a shorter-interval (1–2 days) time-series DSM can further illustrate the trend of canopy changes.

2. Materials and Methods

2.1. Study Area and Remote Sensing

This research was performed in a forest with an area of 1.5 ha (150 × 100 m) on the campus of Zhejiang Sci-Tech University in Hangzhou, Zhejiang province, southeast China (30°32′ N, 120°36′ E) in April 2019. Since Hangzhou is located in the subtropical zone, most of the vegetation is evergreen broad-leaved, so it was necessary to look for an area where the canopy changed significantly in spring. After comparing several candidate areas on the campus, the study area was selected because of its obvious and significant vegetation growth trend in spring.
The remote sensing was conducted between 2 April and 1 May 2019, using a DJI Phantom unmanned aerial vehicle (DJI, Shenzhen, China), which provided digital images of 4000 × 3000 pixels in jpg or tiff format. The UAV was remotely controlled by the Altizure app (Version 3.9.1, Shenzhen, China) on an iPhone, following waypoints planned in advance, and it captured images 55 m above the ground of the study area at a 5-second interval with an overlap rate of over 60%. The flight path was programmed to cover the whole study area, capturing images of the vegetation from different angles for 3D point cloud reconstruction in the later data processing.
A 30-day period of remote sensing data acquisition was planned in order to monitor the vegetation growth trend in the study area over those 30 days. Due to the rapid and significant growth of vegetation in early April, data were collected more frequently during that time. The aerial remote sensing was conducted on cloudy days to avoid the influence of strong light on image acquisition.

2.2. 3D Point Cloud Reconstruction by SfM

The 3D point cloud was generated by the SfM (structure from motion) algorithm, which enables the 3D structure of objects to be generated from 2D images taken from different directions. The 3D point cloud reconstruction was computed automatically in Agisoft Photoscan (Version 1.4.0, St. Petersburg, Russia), which provides an automatic and high-speed 3D point cloud generation process for users [19]. Agisoft Photoscan takes as input a series of RGB images of the objects from different orientations and computes the reconstruction in steps including keypoint detection and matching, calculation of camera and keypoint 3D positions, and 3D point cloud reconstruction. The 3D point cloud generated by Photoscan contained RGB and XYZ (three-dimensional location) information, which enabled us to extract the green points of vegetation in the later data processing.
Agisoft Photoscan was installed on a DELL XPS15 laptop with an Intel i5 8300H core and 8 GB RAM. It required about 6 hours to generate the 3D point cloud from the 200 digital images acquired each day. Photoscan was run at the “medium accuracy” setting, which provided adequate precision at a lower time cost than “high accuracy”, which, according to the manufacturer’s description, has better accuracy but requires much more time. The workflow of this research is illustrated in Figure 1.

2.3. Green Canopy Extraction

Once the 3D structure of the study area had been reconstructed, the green vegetation had to be discriminated from the background for the later canopy surface reconstruction. Since background objects such as roads, cars, and buildings could be distinguished by their color information, the ExG (excess green) index was used to extract the green points of vegetation from the whole 3D model. The ExG index was computed from the color information (the values of the respective RGB channels) of each point as follows:
ExG = 2G − R − B
Taking zero as the threshold, a point was regarded as a green point if its ExG index was above zero. The green points in the 3D model of the study area belong to the plant class, including grass, forest, and shrubs. The extraction by ExG index was processed in Matlab (Version 2016, Natick, MA, USA), which took about 8 h for each 3D point cloud model due to the large number of points. The result of vegetation extraction by the ExG index is illustrated in Figure 2.
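The point-level ExG segmentation described above can be sketched as follows. This is a minimal illustration, not the study's Matlab code: the function name and toy data are ours, and we assume the point cloud is held as NumPy arrays of XYZ coordinates and RGB colors.

```python
import numpy as np

def extract_green_points(points, colors, threshold=0.0):
    """Keep points whose Excess Green index (ExG = 2G - R - B) exceeds the threshold.

    points: (N, 3) array of XYZ coordinates
    colors: (N, 3) array of RGB values
    """
    colors = colors.astype(float)
    r, g, b = colors[:, 0], colors[:, 1], colors[:, 2]
    exg = 2.0 * g - r - b
    mask = exg > threshold
    return points[mask], colors[mask]

# Toy example: one green (vegetation) point and one gray (road) point
pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
cols = np.array([[40, 180, 30], [120, 120, 120]])
green_pts, _ = extract_green_points(pts, cols)  # only the first point survives
```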
However, the green points extracted by ExG contain not only the canopy but also shrubs and grass, whereas it is the growth and change of the canopy that we take as the index of vegetation growth. For consistency of the experimental method, we used CloudCompare to separate out near-ground points at the same altitude. CloudCompare is an open-source project for 3D point cloud and mesh processing. It was originally designed to compare two dense 3D point clouds (such as those acquired with a laser scanner), or a point cloud and a triangular mesh, and has since been extended into a more general point cloud processing software with many advanced algorithms (registration, resampling, color/normal/scalar field handling, statistics computation, sensor management, interactive or automatic segmentation, display enhancement, etc.). The 3D point cloud model with the near-ground points separated is illustrated in Figure 3.

2.4. Surface Reconstruction and Measurements

In order to measure the area of the green canopy, a surface reconstruction based on the 3D point cloud is necessary. The Delaunay triangulation algorithm and the Poisson reconstruction algorithm have been the most commonly used 3D surface reconstruction methods in recent years. The Poisson reconstruction algorithm has shown better results for closed surfaces [20], while the Delaunay mesh generation algorithm was more suitable for this study. Details of the greedy Delaunay surface reconstruction method can be found in [21].
The concept of Delaunay triangulation is to construct, from a set of discrete points in a plane, a set of triangles such that the circumscribed circle of each triangle (excluding its boundary) contains no fourth point of the plane. In the three-dimensional case, we constructed a tetrahedral partition such that the circumscribed sphere of any tetrahedron contained no fifth point of the three-dimensional space. Since Delaunay triangulation maximizes the minimum interior angle and improves the stability of numerical simulation, it has been regarded as the basis of mesh generation algorithms.
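The empty-circumcircle property described above can be checked numerically. The sketch below is our own illustration using SciPy's generic 2D Delaunay triangulation (not the greedy surface method actually used in the study): it triangulates random points and verifies that no input point falls strictly inside one triangle's circumscribed circle.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcircle(a, b, c):
    """Circumcenter and circumradius of 2D triangle abc (standard closed form)."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(a - center)

pts = np.random.default_rng(1).random((20, 2))
tri = Delaunay(pts)

# Empty-circumcircle check on the first triangle: no other input point may
# lie strictly inside its circumscribed circle
center, radius = circumcircle(*pts[tri.simplices[0]])
inside = (np.linalg.norm(pts - center, axis=1) < radius - 1e-9).sum()
```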
The 3D model after surface reconstruction by the Delaunay triangulation algorithm is shown in Figure 4.
The mesh area measurement could be carried out in CloudCompare after surface mesh generation. Since CloudCompare reports area in model units, a conversion was needed to obtain the area in square meters. A convenient way to convert units is to measure the area of a specific object in the study area both in CloudCompare and in reality, then use these measurements to derive a conversion factor between model square units and square meters.
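The two steps above, summing triangle areas over the mesh and rescaling with a reference object, can be sketched as follows. The function name and the reference-object numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

def mesh_area(vertices, faces):
    """Total mesh surface area: sum of 0.5 * |AB x AC| over all triangles."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

# Hypothetical reference object: something that measures 2.0 square units in
# the model but 50.0 m^2 on the ground gives the model-to-metric factor
factor = 50.0 / 2.0

# A unit right triangle has area 0.5 in model units
tri_area = mesh_area([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [[0, 1, 2]])
area_m2 = tri_area * factor  # 0.5 * 25.0 = 12.5 m^2
```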

2.5. Canopy Coverage Acquisition

Vegetation coverage is usually derived from orthophoto maps; however, this proved difficult in this study. As illustrated in Figure 5a, the forest canopy in the center of the study area is surrounded by a green lawn, and the colors of the lawn and the canopy are too close to be separated in 2D images. Since height information was available in the 3D point cloud, the vegetation coverage area could be obtained by separating the canopy from the near-ground lawn and then projecting it onto the XY plane. After projection onto the 2D plane and surface reconstruction of the projected point cloud, the coverage area of the canopy could be acquired. The same process was carried out on the original 3D point cloud of the whole study area. The projection program was implemented in Matlab 2016. The canopy coverage was calculated as follows:
canopy coverage (%) = canopy coverage area in 2D / area of the study area × 100
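A minimal sketch of this projection-based coverage estimate is given below. It is our own illustration: we use the convex hull of the projected points as a simple stand-in for the reconstructed 2D surface (the study's surface reconstruction can follow concave boundaries, so this sketch may overestimate the area).

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_coverage(canopy_points, study_area_m2):
    """Project canopy points onto the XY plane and estimate coverage (%)."""
    xy = np.asarray(canopy_points, dtype=float)[:, :2]  # drop the Z coordinate
    hull = ConvexHull(xy)  # in 2D, hull.volume is the enclosed area
    return 100.0 * hull.volume / study_area_m2

# Toy canopy: a 1 m x 1 m square of points at 2 m height over a 4 m^2 plot
pts = np.array([[0, 0, 2.0], [1, 0, 2.0], [1, 1, 2.0], [0, 1, 2.0]])
cov = canopy_coverage(pts, study_area_m2=4.0)  # 1 m^2 / 4 m^2 -> 25%
```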

2.6. Noise Filtering

In the process of 3D point cloud data acquisition, the acquired point cloud will inevitably contain noise due to hardware accuracy, the environment, and, for laser radar, electromagnetic wave diffraction. The noise points can be removed by filters in the PCL (Point Cloud Library), such as SOR (statistical outlier removal) and radius outlier removal. The results are shown in Figures 6 and 7.
The basic idea of the SOR filtering algorithm is to perform a statistical analysis of each point in the point cloud and filter out the points that fall outside a standard range. The radius outlier removal algorithm, meanwhile, filters out the points that are not surrounded by a sufficient number of neighboring points within a given radius. However, as these filters may also remove some valid canopy points, they should be applied carefully.
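The SOR idea can be sketched as follows. This is a simplified reimplementation for illustration (PCL's own statistical outlier removal filter is the tool actually referenced), with parameter values chosen arbitrarily.

```python
import numpy as np
from scipy.spatial import cKDTree

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbors exceeds (global mean + std_ratio * global std)."""
    pts = np.asarray(points, dtype=float)
    dists, _ = cKDTree(pts).query(pts, k=k + 1)  # first neighbor is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return pts[keep]

# Toy cloud: a flat 5 x 5 grid plus one far-away noise point
grid = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), axis=-1).reshape(-1, 2)
cloud = np.vstack([np.column_stack([grid, np.zeros(len(grid))]),
                   [[100.0, 100.0, 100.0]]])
clean = sor_filter(cloud)  # the outlier is removed, the grid survives
```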

3. Results

3.1. Vegetation Growth Trend

The vegetation growth trend can be reflected directly by both canopy area and canopy coverage. In this study, we aimed to determine the trend and pattern of vegetation growth in spring by comparing vegetation growth in the whole study area, in a specific area, and for an individual plant. The logistic function, or logistic curve, was first proposed by Pierre François Verhulst in his study of the law of population growth. The function describes biological growth that is roughly exponential initially and then slows until the increase stops at maturity. It was later applied in biology, in economics, and in regression analysis. The logistic function used in the regression analysis is as follows, where the parameters (A1, A2, x0, and p) are estimated by fitting:
y = A2 + (A1 − A2) / (1 + (x/x0)^p)
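Fitting this four-parameter logistic to a time series can be sketched with SciPy's `curve_fit`; the day/area values below are synthetic illustrations, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, A1, A2, x0, p):
    """Four-parameter logistic: y = A2 + (A1 - A2) / (1 + (x / x0)**p)."""
    return A2 + (A1 - A2) / (1.0 + (x / x0) ** p)

# Synthetic day-of-month vs. canopy-area observations (illustrative values only)
days = np.array([2, 4, 6, 8, 10, 12, 16, 20, 24, 28], dtype=float)
area = logistic(days, 200.0, 1500.0, 10.0, 4.0) \
       + np.random.default_rng(0).normal(0.0, 10.0, days.size)

# Fit the curve and compute the coefficient of determination R^2
popt, _ = curve_fit(logistic, days, area,
                    p0=[area.min(), area.max(), days.mean(), 2.0],
                    bounds=(1e-6, np.inf))
pred = logistic(days, *popt)
r2 = 1.0 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()
```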

3.1.1. Growth Trend of Vegetation in the Whole Study Area

The remote sensing of the study started on 2 April 2019, ended on 1 May 2019, and lasted 30 days. The changes in canopy area and weather are listed in Table 1. As shown in the table, the canopy areas on 14 April and 17 April, which were affected by strong sunlight, were even smaller than on 12 April. These two data points were therefore excluded from the statistics because of their non-negligible error. The logistic function was fitted to the remaining data by regression analysis, and the results are shown in Figure 8 and Table 2. The growth trend conforms well to the logistic function, with an R-square of 0.983.

3.1.2. Growth Trend of a Specific Area

An area of sakura trees in the southeast corner of the study area was selected to study the growth trend of a specific area. The growth of the trees in this area can be seen visually in Figure 9. As shown in Figure 10 and Table 3, the canopy area increased sharply in early April and slowed down later, which conformed well to a logistic curve. Fitting the data, we found that the growth agreed well with the logistic function, with an R-square of 0.976. A summary of the logistic regression is shown in Table 4.

3.1.3. Growth Trend of a Single Plant

The single plant that we selected was located in the northwest corner of the study area. As shown in Figure 11 and Table 5, this plant was still bare, without a canopy, on 2 April but began to grow in the following days. The results of the logistic fitting are illustrated in Figure 12 and Table 6. The canopy areas on 14 April and 17 April were smaller than the actual values due to the influence of sunlight, so these two data points were excluded from the final statistical analysis.

3.2. Vegetation Coverage Changes

The three indices, namely canopy area, area of coverage, and canopy coverage, are shown in Table 7. The results on the three sunny days are clearly smaller than the actual values. As shown in Figure 13, the validation assessment comparing canopy coverage with canopy area fits a linear function well, with an R-square of 0.9697.

4. Discussion

The forest point cloud was analyzed at three levels: general level, group level, and individual level. Among the three, the general level had lower fitting accuracy (R2 = 0.8517) than the selected group level (R2 = 0.9652) and the individual plant level (R2 = 0.9606). This may be because the forest contains different tree species whose growth curves can be quite different. For example, the camphor tree is an evergreen woody plant and its canopy changes little during spring, whereas the ginkgo is a deciduous tree and its canopy changes a great deal. Meanwhile, ornamental trees blossom first, and the green canopy starts to grow only after the flowers fall. The flowers would not have been classified as canopy in this research because we used the ExG index to extract the canopy, which can also explain why the canopy area dropped on 14 April and 17 April. The selected group and the individual tree, by contrast, followed a single growth curve, so the fitting accuracy was high.
As is well known, light intensity is an influential factor in aerial remote sensing that should not be ignored. In this study, light conditions were an important source of error. On the one hand, keypoint detection and matching in shadowed regions would have been affected, leading to poor 3D reconstruction of points in shadow. On the other hand, since the reconstructed vegetation points in shadow are darker in color, they would have been mistaken for non-green points and removed during green point extraction by the ExG index. To avoid errors caused by strong light, aerial remote sensing should be carried out on cloudy days where possible, and preprocessing of the images is necessary to reduce the influence of light. We believe the problem was caused by tree shadow, and it could be addressed by identifying the shadow areas and applying the necessary reflectance compensation. A few studies on shadow classification [22,23] examined shadows' reflective features during image segmentation and extracted shadows according to the statistical features of the images, using a properly set threshold to detect them. However, methods to remove shadows from UAV-based mosaicked DSM remote sensing data are still inadequate. Firstly, shadows cannot be removed from single pictures, because this causes errors in the camera orientation calculation and can prevent 3D reconstruction. Secondly, accurate automatic object classification methods for 3D point clouds or meshed models are lacking. Despite the successful application of machine learning to 2D images, machine learning methods are not yet effective for 3D data, because they are based on feature extraction from 2D images, and feature extraction from 3D point cloud or meshed model data is still under development.
Thirdly, tree shadows are complex because the light in a forest can be scattered and diffracted, which makes it hard to precisely identify shadow areas. In summary, the influence of shadow in near-ground remote sensing is much larger than in satellite-based remote sensing.
The logistic S-curve model is based on firmly proven laws of nature [24,25,26,27]. The S-curve represents the growth or decline of a system in interaction with its environment (its limited resources). Thus, a quantitative study of system transformation, combined with a qualitative approach, contributes effectively to the reliability of long-term forecasting. Through this fitting process, we can see that different groups of trees have different growth curves.
Finally, we compared the canopy area and the coverage area. Coverage area has long been used to monitor time-series changes in ground cover, because the remote sensing data source was limited to satellites. With the advent of UAVs, we had the chance to explore the changes in canopy 3D structure during spring. Through this comparison, we found that the canopy area and the coverage area had a very strong correlation.

5. Conclusions

In spring, forests experience rapid change as the ornamental trees sprout and pass from blossoming to growing leaves. We used a UAV to acquire time-series forest images at low altitude, high ground resolution, and short time intervals. High-precision point clouds were generated from the acquired images by aerial photogrammetry. Structure from motion proved to be an effective method for near-ground remote sensing image mosaicking and processing. A vegetation index was used to extract the green canopy. The greedy projection triangulation method was used to transform the points into a mesh, and the mesh area was then calculated. Forest canopy growth was analyzed at three levels: forest level, selected group level, and individual tree level. A logistic curve was used to fit the time-series canopy growth, and strong correlations were found: R2 = 0.8517 at the forest level, R2 = 0.9652 at the selected group level, and R2 = 0.9606 at the individual tree level. Moreover, high correlation was found between canopy area and canopy coverage. From these results, we conclude that the ground 3D model is as useful a data type as orthophotography for monitoring forest growth, and that UAV aerial remote sensing has advantages for monitoring forests in periods when the ground vegetation is growing and changing fast.

Author Contributions

Conceptualization, funding acquisition, supervision, methodology, resources, review and editing, Y.Z.; investigation, software, visualization, and original draft, H.W.; editing and improving, W.Y.

Funding

This research was funded by National Natural Science Foundation of China (NSFC), grant number: 61905219.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

LIDAR: Light detection and ranging
UAV: Unmanned aerial vehicle
NDVI: Normalized difference vegetation index
NDMI: Normalized difference moisture index
NBR: Normalized burn ratio
DSM: Digital surface model
ExG: Excess green index
SfM: Structure from motion
SOR: Statistical outlier removal

References

  1. Rao, K.; Anderegg, W.R.L.; Sala, A.; Martinez-Vilalta, J.; Konings, A.G. Satellite-based vegetation optical depth as an indicator of drought-driven tree mortality. Remote Sens. Environ. 2019, 227, 125–136. [Google Scholar] [CrossRef]
  2. Han, L.; Yang, G.; Dai, H.; Xu, B.; Yang, H.; Feng, H.; Li, Z.; Yang, X. Modeling maize above-ground biomass based on machine learning approaches using UAV remote-sensing data. Plant Methods 2019, 15, 10. [Google Scholar] [CrossRef] [PubMed]
  3. McRoberts, R.E.; Tomppo, E.O. Remote sensing support for national forest inventories. Remote Sens. Environ. 2007, 110, 412–419. [Google Scholar] [CrossRef]
  4. Krause, S.; Sanders, T.G.M.; Mund, J.; Greve, K. UAV-Based Photogrammetric Tree Height Measurement for Intensive Forest Monitoring. Remote Sens. 2019, 11, 758. [Google Scholar] [CrossRef]
  5. Otero, V.; Van De Kerchove, R.; Satyanarayana, B.; Mohd-Lokman, H.; Lucas, R.; Dahdouh-Guebas, F. An Analysis of the Early Regeneration of Mangrove Forests using Landsat Time Series in the Matang Mangrove Forest Reserve, Peninsular Malaysia. Remote Sens. 2019, 11, 774. [Google Scholar] [CrossRef]
  6. Dalponte, M.; Jucker, T.; Liu, S.; Frizzera, L.; Gianelle, D. Characterizing forest carbon dynamics using multi-temporal lidar data. Remote Sens. Environ. 2019, 224, 412–420. [Google Scholar] [CrossRef]
  7. Wang, R.; Chen, J.M.; Liu, Z.; Arain, A. Evaluation of seasonal variations of remotely sensed leaf area index over five evergreen coniferous forests. ISPRS J. Photogramm. Remote Sens. 2017, 130, 187–201. [Google Scholar] [CrossRef]
  8. Bhardwaj, A.; Sam, L.; Akanksha; Javier Martin-Torres, F.; Kumar, R. UAVs as remote sensing platform in glaciology: Present applications and future prospects. Remote Sens. Environ. 2016, 175, 196–204. [Google Scholar] [CrossRef]
  9. Morresi, D.; Vitali, A.; Urbinati, C.; Garbarino, M. Forest Spectral Recovery and Regeneration Dynamics in Stand-Replacing Wildfires of Central Apennines Derived from Landsat Time Series. Remote Sens. 2019, 11, 308. [Google Scholar] [CrossRef]
  10. Zhao, Y.; Feng, D.; Yu, L.; Cheng, Y.; Zhang, M.; Liu, X.; Xu, Y.; Fang, L.; Zhu, Z.; Gong, P. Long-Term Land Cover Dynamics (1986-2016) of Northeast China Derived from a Multi-Temporal Landsat Archive. Remote Sens. 2019, 11, 599. [Google Scholar] [CrossRef]
  11. Cao, R.; Chen, Y.; Shen, M.; Chen, J.; Zhou, J.; Wang, C.; Yang, W. A simple method to improve the quality of NDVI time-series data by integrating spatiotemporal information with the Savitzky-Golay filter. Remote Sens. Environ. 2018, 217, 244–257. [Google Scholar] [CrossRef]
  12. Jing, R.; Gong, Z.; Zhao, W.; Pu, R.; Deng, L. Above-bottom biomass retrieval of aquatic plants with regression models and SfM data acquired by a UAV platform—A case study in Wild Duck Lake Wetland, Beijing, China. ISPRS J. Photogramm. Remote Sens. 2017, 134, 122–134. [Google Scholar] [CrossRef]
  13. Zarco-Tejada, P.J.; Diaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99. [Google Scholar] [CrossRef]
  14. Yao, J.; Raffuse, S.M.; Brauer, M.; Williamson, G.J.; Bowman, D.M.J.S.; Johnston, F.H.; Henderson, S.B. Predicting the minimum height of forest fire smoke within the atmosphere using machine learning and data from the CALIPSO satellite. Remote Sens. Environ. 2018, 206, 98–106. [Google Scholar] [CrossRef]
  15. Gu, C.; Clevers, J.G.P.W.; Liu, X.; Tian, X.; Li, Z.; Li, Z. Predicting forest height using the GOST, Landsat 7 ETM+, and airborne LiDAR for sloping terrains in the Greater Khingan Mountains of China. ISPRS J. Photogramm. Remote Sens. 2018, 137, 97–111. [Google Scholar] [CrossRef]
Figure 1. The general workflow of this research. Step 1, prepare the drone, including battery checks and flight permission; Step 2, make the flight mission plan, including coverage area, overlap rate, and shooting interval; Step 3, import images into Agisoft; Step 4, align images and generate a sparse point cloud; Step 5, generate a dense point cloud (PC) from the sparse PC; Step 6, crop the PC to the intended research area; Step 7, extract the canopy PC using the excess green (ExG) index, then filter out noise points with a statistical filter; Step 8, reconstruct the surface using Delaunay triangulation; Step 9, calculate the surface area of the forest canopy; and Step 10, carry out the time series analysis.
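Step 7 of the workflow thresholds each point's Excess Green index. A minimal numpy sketch, assuming an N×6 array of XYZRGB points; the chromatic-coordinate normalization and the 0.05 threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def extract_green_points(points_xyzrgb, threshold=0.05):
    """Keep points whose Excess Green index (ExG = 2g - r - b, computed
    on chromatic coordinates) exceeds a threshold.

    points_xyzrgb: (N, 6) array of X, Y, Z, R, G, B with RGB in 0..255.
    """
    rgb = points_xyzrgb[:, 3:6].astype(float)
    total = rgb.sum(axis=1)
    total[total == 0] = 1.0             # avoid division by zero on black points
    r, g, b = (rgb / total[:, None]).T  # chromatic (normalized) coordinates
    exg = 2.0 * g - r - b
    return points_xyzrgb[exg > threshold]

# Two points: one green leaf-like point, one gray pavement-like point.
pts = np.array([[0.0, 0.0, 1.0,  60, 180, 40],
                [1.0, 0.0, 0.0, 120, 120, 120]])
green = extract_green_points(pts)  # only the first point survives
```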
Figure 2. Result of green vegetation discrimination by ExG index: (a) 3D point cloud model of the study area before extraction by ExG index and (b) 3D point cloud model of the study area after extraction by ExG index. Non-green points have been deleted from the 3D model.
Figure 3. The 3D point cloud model in which near-ground points had been separated.
Figure 4. The 3D point cloud model after surface reconstruction by the Delaunay triangulation algorithm.
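Once the canopy surface has been triangulated (Step 9), its area is simply the sum of the triangle areas, each half the norm of the cross product of two edge vectors. A minimal numpy sketch (the function name and data layout are illustrative, not from the paper):

```python
import numpy as np

def mesh_surface_area(vertices, triangles):
    """Sum the areas of all triangles in a mesh.

    vertices:  (N, 3) array of XYZ coordinates.
    triangles: (M, 3) integer array of vertex indices per face.
    Each triangle's area is half the norm of the cross product
    of two of its edge vectors.
    """
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles)
    e1 = v[t[:, 1]] - v[t[:, 0]]
    e2 = v[t[:, 2]] - v[t[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

# Unit square in the XY plane split into two triangles -> area 1.0
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
faces = [[0, 1, 2], [0, 2, 3]]
area = mesh_surface_area(verts, faces)  # 1.0
```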
Figure 5. (a) The orthophoto map of the study area on 4 April 2019, generated by Agisoft Photoscan, (b) 3D point cloud model on 4 April 2019 after extraction by ExG index and separation from near-ground points, (c) point cloud model of the canopy in XY plane projected from 3D space, and (d) point cloud model of the canopy after surface reconstruction.
Figure 6. A comparison of the point cloud before and after filtering with the statistical outlier removal (SOR) filter. (a) Point cloud before filtering; (b) point cloud after SOR filtering. The number of adjacent points was set to 50 and the standard deviation ratio was set to 1.0.
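The SOR filter works as follows: for each point, compute the mean distance to its k nearest neighbours, then discard points whose mean distance exceeds the global mean by more than a given number of standard deviations. A brute-force O(N²) numpy version, for illustration only (the paper's settings were k = 50 and a ratio of 1.0; the toy example below uses k = 5 on a small cloud):

```python
import numpy as np

def sor_filter(points, k=50, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance
    to their k nearest neighbours exceeds the global mean distance
    plus std_ratio standard deviations."""
    pts = np.asarray(points, dtype=float)
    # full pairwise distance matrix (fine for small clouds only)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # mean distance to the k nearest neighbours, excluding the point itself
    knn = np.sort(d, axis=1)[:, 1:k + 1]
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return pts[keep]

# Dense cluster near the origin plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.1, size=(30, 3)),
                   [[50.0, 50.0, 50.0]]])
filtered = sor_filter(cloud, k=5, std_ratio=1.0)  # outlier removed
```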
Figure 7. A comparison of the point cloud with or without the influence of sunlight. (a) Point cloud on 12 April 2019 and (b) point cloud on 14 April 2019, which was strongly affected by sunlight.
Figure 8. The graph of logistic fitting of the canopy area in the whole study area from 2 April to 1 May 2019.
Figure 9. The comparison of the point cloud models of sakura trees in the selected area on 2 April (a), 8 April (b), 17 April (c), and 25 April 2019 (d). The point cloud models had been extracted by ExG index.
Figure 10. The graph of logistic fitting of the canopy area in the sakura region from 2 April to 1 May 2019.
Figure 11. The 3D point cloud model of the single plant on 2 April (a) and 25 April 2019 (b).
Figure 12. Graph of logistic fitting of the canopy area of the single plant from 2 April to 1 May 2019.
Figure 13. Results of the validation assessment comparing canopy coverage and canopy area.
Table 1. The changes of canopy area in the whole study area from 2 April to 1 May 2019.

Date | Weather | Area of Canopy (m²)
2 April | Cloudy | 12,313.6
4 April | Cloudy | 12,378.7
8 April | Cloudy | 13,427.1
10 April | Cloudy | 15,378.1
12 April | Cloudy | 19,950.2
14 April | Sunny | 17,892.8
17 April | Sunny | 18,093.2
25 April | Cloudy | 21,551.6
1 May | Cloudy | 22,729.8
Table 2. Summary of the results of logistic fitting.

Parameter | Value | Error
A1 | 11,801.4757 | 1245.7478
A2 | 22,349.9235 | 1815.9597
x0 | 11.9122 | 1.9353
p | 3.4217 | 1.8106
R-square | 0.8517 |
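The parameter names in Tables 2, 4, and 6 (A1, A2, x0, p) match the common four-parameter logistic form y = A2 + (A1 − A2)/(1 + (x/x0)^p), where A1 and A2 are the lower and upper asymptotes, x0 the midpoint, and p the steepness. Assuming that parameterization (the exact formula is not restated here, so this is an inference), the whole-area fit of Table 2 can be evaluated as:

```python
def logistic(x, a1, a2, x0, p):
    """Four-parameter logistic curve: a1 and a2 are the lower and
    upper asymptotes, x0 the midpoint, p the steepness."""
    return a2 + (a1 - a2) / (1.0 + (x / x0) ** p)

# Whole-study-area fit from Table 2.
A1, A2, X0, P = 11801.4757, 22349.9235, 11.9122, 3.4217

# At x = x0 the curve sits exactly halfway between the asymptotes,
# and for large x it approaches the upper asymptote A2.
halfway = logistic(X0, A1, A2, X0, P)  # (A1 + A2) / 2
```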
Table 3. The changes of canopy area and weather in the sakura region from 2 April to 1 May 2019.

Date | Weather | Area of Canopy (m²)
2 April | Cloudy | 2451.25
4 April | Cloudy | 2821.56
7 April | Partly cloudy | 2979.02
8 April | Cloudy | 3504.23
10 April | Cloudy | 3756.96
12 April | Cloudy | 3958.88
14 April | Sunny | 3960.76
17 April | Sunny | 4246.25
25 April | Cloudy | 4520.48
1 May | Cloudy | 4766.18
Table 4. The summary of the results of logistic fitting of the sakura region.

Parameter | Value | Error
A1 | 2333.9309 | 250.7705
A2 | 5021.4925 | 389.2948
x0 | 10.2019 | 1.5536
p | 1.8216 | 0.6460
R-square | 0.9652 |
Table 5. Changes of the canopy area and weather situation of the single plant from 2 April to 1 May 2019.

Date | Weather | Area of Canopy (m²)
2 April | Cloudy | 0
4 April | Cloudy | 1.0372
7 April | Partly cloudy | 29.2246
8 April | Cloudy | 52.4488
10 April | Cloudy | 76.0798
12 April | Cloudy | 80.1419
14 April | Sunny | 70.5397
17 April | Sunny | 65.4373
25 April | Cloudy | 80.7242
1 May | Cloudy | 83.4578
Table 6. Summary of the results of logistic fitting of the single plant.

Parameter | Value | Error
A1 | 0.5593 | 84.5873
A2 | 76.5119 | 2.8449
x0 | 7.3736 | 0.2223
p | 10.5161 | 3.8486
R-square | 0.9606 |
Table 7. Summary of the results including the area of the canopy, the area of coverage, and the canopy coverage. Canopy coverage is a dimensionless ratio.

Date | Weather | Area of Canopy (m²) | Area of Coverage (m²) | Canopy Coverage
2 April | Cloudy | 12,313.6 | 4470.00 | 0.2591
4 April | Cloudy | 12,378.7 | 4613.04 | 0.2674
7 April | Partly cloudy | 10,505.6 | 4567.33 | 0.2647
8 April | Cloudy | 13,427.1 | 5466.19 | 0.3169
10 April | Cloudy | 15,378.1 | 5926.82 | 0.3435
12 April | Cloudy | 19,950.2 | 7278.39 | 0.4219
14 April | Sunny | 17,892.8 | 6344.71 | 0.3678
17 April | Sunny | 18,093.2 | 6647.87 | 0.3854
25 April | Cloudy | 21,551.6 | 7827.72 | 0.4537
1 May | Cloudy | 22,729.8 | 8208.43 | 0.4759
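Canopy coverage in Table 7 is the projected coverage area divided by the total footprint of the study area. That total is not stated in the table; the value used below (about 17,251.5 m²) is back-solved from the first row and is therefore an inferred assumption. A quick consistency check:

```python
# Canopy coverage = projected coverage area / total study area.
# The total area is not given explicitly; ~17,251.5 m^2 is back-solved
# from the first row of Table 7 (4470.00 m^2 -> 0.2591) and is an
# inferred, approximate value.
TOTAL_AREA_M2 = 17251.5

rows = [  # (date, area of coverage in m^2, reported canopy coverage)
    ("2 April", 4470.00, 0.2591),
    ("4 April", 4613.04, 0.2674),
    ("1 May",   8208.43, 0.4759),
]

for date, coverage_m2, reported in rows:
    ratio = coverage_m2 / TOTAL_AREA_M2
    # each recomputed ratio should match the table to ~4 decimal places
    assert abs(ratio - reported) < 5e-4, (date, ratio, reported)
```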

Share and Cite

Zhang, Y.; Wu, H.; Yang, W. Forests Growth Monitoring Based on Tree Canopy 3D Reconstruction Using UAV Aerial Photogrammetry. Forests 2019, 10, 1052. https://doi.org/10.3390/f10121052