Article

Rapid Forest Change Detection Using Unmanned Aerial Vehicles and Artificial Intelligence

by Jiahong Xiang, Zhuo Zang, Xian Tang, Meng Zhang, Panlin Cao, Shu Tang and Xu Wang
1 Research Center of Forestry Remote Sensing and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, China
2 Key Laboratory of National Forestry and Grassland Administration on Forest Resources Management and Monitoring in Southern China, Changsha 410004, China
3 Hunan Provincial Key Laboratory of Forestry Remote Sensing Based Big Data and Ecological Security, Changsha 410004, China
4 Sanya Academy of Forestry, Sanya 572023, China
5 Research Institute of Tropical Forestry, Chinese Academy of Forestry, Guangzhou 510520, China
* Author to whom correspondence should be addressed.
Forests 2024, 15(9), 1676; https://doi.org/10.3390/f15091676
Submission received: 2 September 2024 / Revised: 12 September 2024 / Accepted: 19 September 2024 / Published: 23 September 2024
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract
Forest inspection is a crucial component of forest monitoring in China. The current methods for detecting changes in forest patches rely primarily on remote sensing imagery and manual visual interpretation, which are time-consuming and labor-intensive. This study aims to automate the extraction of changed forest patches using UAVs and artificial intelligence technologies, saving time while ensuring detection accuracy. The research first uses position and orientation system (POS) data to perform geometric correction on the acquired UAV imagery. A convolutional neural network (CNN) is then used to extract forest boundaries, which are compared with the previous vector data of forest boundaries to initially detect patches of forest reduction. The average boundary distance algorithm (ABDA) is applied to eliminate misclassified patches, ultimately generating precise maps of reduced forest patches. The results indicate that correcting UAV imagery with RTK-positioned POS data yields a correction error of approximately 4 m in the central area and approximately 12 m in the edge area. The TernausNet model achieved a maximum accuracy of 0.98 in identifying forest areas, effectively eliminating the influence of shrubs and grasslands. When the UAV flying height is 380 m and the distance threshold is set to 8 m, the ABDA successfully filters out misclassified patches, achieving an identification accuracy of 0.95 for reduced forest patches, a precision of 0.91, and a kappa coefficient of 0.89, fully meeting the needs of forest inspection work in China. Urban forests with complex scenarios were deliberately selected as the study area, which facilitates extending the method to other regions. This study ultimately developed a fully automated forest change detection system.

1. Introduction

Forests are important carbon sinks on Earth, as they convert carbon dioxide into organic matter through photosynthesis and store carbon in wood and trees, helping to slow the accumulation of carbon dioxide in the atmosphere [1,2]. They also provide habitats and various resources for wildlife and are a crucial part of ecosystems [3,4,5]. To reduce greenhouse gas emissions and mitigate the effects of climate change, achieving carbon neutrality by 2050 is a significant task [6,7]. However, various natural disasters and illegal logging have made scarce forest resources even more precarious [8]. Therefore, implementing effective forest monitoring and management surveys has become a necessary measure for protecting and managing forest resources. Forest inspection ensures the long-term carbon absorption and storage functions of forests by preventing overlogging and adopting sustainable forestry management methods. Detecting forestland changes is an essential part of forest inspection, and efficient detection methods are particularly important in the process of achieving carbon neutrality.
In recent decades, advancements in remote sensing technology have gradually replaced field surveys with remote sensing data for forestland change detection [9,10]. Early forest inspections based on satellite remote sensing images typically involved comparing satellite images from two different periods, with changes visually identified and manually delineated. With the development of computer technology, automatic forestland change detection methods have emerged, primarily based on pixel-level and object-based approaches [11,12,13]. For example, Wang and Huang used MODIS data to study vegetation cover changes, employing the normalized difference vegetation index (NDVI) as an indicator for detecting vegetation changes [14]. Gyamfi-Ampadu et al. used support vector machines (SVMs) and random forests (RFs) for forest classification, followed by forestland change analysis [15]. These traditional methods generalize poorly across regions and struggle to maintain consistent performance across datasets. They also lack automated learning capabilities, requiring manual sample selection and model parameter optimization. In recent years, the rapid development of deep learning has significantly improved prediction accuracy compared with traditional methods, as deep learning can more effectively capture forest texture features [16]. Many deep learning methods, mainly convolutional neural networks (CNNs) and their derivatives, have been used for forestland change detection [17,18,19,20]. de Bem et al. used a CNN for change detection in the Amazon rainforest, showing a clear accuracy advantage over traditional machine learning methods [21]. Isaienkov et al. used Sentinel-2 satellite data with an improved U-Net model for periodic change detection in Ukrainian forests [22]. Kalinaki et al. used an attention residual-based deep learning model (FCD-AttResU-Net) to monitor forest vegetation cover changes in tropical areas, improving detection accuracy and computational performance [23]. Many studies have confirmed that deep learning algorithms are reliable enough to meet the needs of forest change detection. The 2022 Chinese forest, grassland, and wetland survey and monitoring technical regulations began to use deep learning models, reporting an accuracy of more than 85% for forest change detection, although manual verification is still needed [24]. However, most of these forest change detection methods are based on satellite remote sensing data, which often lack the required resolution, and high-resolution satellite data can be expensive. There are some high-resolution but low-cost approaches, such as that of Abdollahnejad et al., who used Pléiades-HR 1A/1B imagery to monitor harvested areas [25]. Additionally, satellite images are susceptible to cloud cover and atmospheric interference, leading to incomplete forest change detection results.
Using UAVs for forest change detection avoids issues associated with satellite data, such as low resolution and atmospheric interference [26]. UAVs can acquire high-resolution images in a flexible, cost-effective, and customizable way [27,28,29]. With the development of UAV technology, several techniques now use UAVs for forest monitoring. Wallace et al. used UAVs equipped with LiDAR to achieve high-resolution forest change detection [30]. Schiefer et al. used UAVs to acquire high-resolution images and employed the U-Net method to segment and classify forests in the Black Forest region and Hainich National Park in Germany, achieving an average F1 score of 0.73 [31]. Pyo et al. used aerial images and a U-Net model for accurate forest and non-forest segmentation in the capital area of South Korea, demonstrating the effectiveness of deep learning models in detecting forest changes over different times and locations [32]. UAVs thus allow the inexpensive and convenient acquisition of forest images for change detection, but most existing techniques require collecting the UAV images and then calibrating and stitching them before using two-phase data for change detection [33,34,35]. This process is time-consuming and requires professional data processing. Additionally, comparing UAV images from different periods may produce slight misalignments due to different stitching algorithms or coordinate system transformations, increasing the misclassification of change patches.
To address the timeliness issue in UAV-based forest change detection, this study proposes a system that uses UAVs and artificial intelligence technology for automatic forest change detection. The system includes four main aspects: (1) the geometric correction of UAV images based on POS data; (2) forest boundary extraction using artificial intelligence technology; (3) mapping reduced forest patches using previous forest boundary vector data and UAV-extracted forest boundaries; and (4) constructing algorithms to eliminate misclassified patches, resulting in accurate reduced forest patches. The aim is to maintain the high resolution and timeliness advantages of UAVs while reducing misclassified patches through a patch pixel average distance algorithm, forming a complete process for rapid and automatic forest change detection using UAV images.

2. Study Area and Data Sources

2.1. Study Area

The study area is Tielu Port in Hainan Province, which is located in Linwang town, Sanya city, at low latitudes (18°15′–18°17′ N, 109°42′–109°44′ E). The area is significantly influenced by a tropical maritime monsoon climate characterized by yearlong high temperatures, with mild summers and warm winters. The annual average temperature is 25.5 °C, with the coldest month averaging 20.3 °C and an extremely low temperature of 5.1 °C. The annual average precipitation is 1255 mm. The terrain is a sandbar–lagoon type bay with sandy soil (Figure 1).

2.2. Research Data

The UAV used in this study is a high-precision DJI Phantom 4 RTK, manufactured by DJI (Shenzhen, China), weighing 1391 g, with a maximum flight speed of 50 km/h and a maximum tilt angle of 25°. Its RTK positioning and navigation system provides highly precise positioning. In 2021, the DJI Phantom 4 RTK was flown at heights of 80 m and 380 m, with an average flight speed of 7.6 m/s, producing two sets of UAV image data at different heights. Sixty-six fixed-point images containing forestland were obtained and, based on their RTK positioning data, selected as the data sources. In Figure 1, the red points indicate images captured at 380 m, and the yellow points indicate images captured at 80 m.

3. Methods

In the process of fully automated detection of forestland changes using UAV imagery, several key issues need to be addressed. First, there is the issue of recognition accuracy in deep learning networks. Although deep learning networks have significantly higher classification accuracy than other algorithms, there are still subtle differences among different networks, leading to occasional misclassifications between forests and grasslands, forests and shadows, and other similar pairs.
Additionally, to improve the recognition efficiency, this study used single UAV images for forestland change detection. However, errors introduced by the correction of single images are unavoidable, making the elimination of misclassifications caused by these correction errors a crucial issue to be addressed in this study.
To address these issues, the technical approach of this study is outlined in Figure 2 and consists of four main components: (1) UAV image correction; (2) forest identification based on deep learning; (3) forest change detection and analysis of forest change areas; and (4) the application of the ABDA to eliminate misclassified patches, ultimately producing accurate maps of reduced forest patches. In the UAV image correction step, POS data are used to establish collinearity condition equations and correct single images, which introduces a certain degree of error. Using U-Net [36], ResUNet [37], and TernausNet, we aimed to identify the most suitable model for forest change detection; in Figure 2, the white areas are the recognized forest areas. The identified forest areas are overlaid with the previous forest areas to calculate the regions of change. Due to correction errors, some of the change patches are misclassified. We evaluate these identified change patches with two methods, the area of the change regions and the ABDA, and by comparing the effectiveness of these two evaluation methods, we selected the optimal method to eliminate misclassified change regions, thereby improving the accuracy of forest change detection.

3.1. Geometric Correction of UAV Imagery

Geometric correction is the process of adjusting images to correct various geometric distortions introduced during imaging, projecting image data onto a plane that conforms to a map projection system. These geometric distortions cause pixel positions in the image to undergo compression, distortion, stretching, and displacement relative to the actual positions of the ground targets. Factors contributing to these distortions include lens distortion, changes in flight attitude, and terrain undulation [38,39,40]. The objective of geometric correction is to rectify these distortions, determine the row and column values of the corrected image, and establish the brightness values of each pixel in the new image to achieve registration [41]. Lens distortions are typically caused by manufacturing errors or non-spherical lens effects, which affect the position of the principal point but are relatively minor [42]. For aerial photos without ground control points, geometric correction relies primarily on position and orientation system (POS) data: using the flight's three attitude angles, GPS coordinates, and other information, collinearity condition equations are established to relate distorted and corrected image points [43].
Georeferencing based on POS parameters is the process of determining the ground photographic coordinates corresponding to image plane coordinates. It uses the three attitude angles, longitude, latitude, and height from the POS parameters to establish collinearity condition equations, which are then transformed to achieve mutual conversion between the image plane coordinates (x, y) and the ground photographic coordinates (X, Y) of the corresponding ground points [44]:
$$X = H\,\frac{a_1 x + a_2 y - a_3 f}{c_1 x + c_2 y - c_3 f} + X_S$$
$$Y = H\,\frac{b_1 x + b_2 y - b_3 f}{c_1 x + c_2 y - c_3 f} + Y_S$$
where $X_S$ and $Y_S$ are the coordinates of the projection center $S$ in the ground photography coordinate system; $a_i$, $b_i$, and $c_i$ ($i = 1, 2, 3$) are direction cosines; $H$ is the flying height; and $f$ is the camera focal length.
Because only a single image is used for correction without control points or DEM data, the speed is fast, but the accuracy of the correction depends on the precision of the POS data, which introduce some errors. These errors can lead to misjudgments in forest change detection, a problem that will be addressed in subsequent steps.
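To make this step concrete, the following minimal Python sketch (our illustration, not the authors' released code) builds the direction cosines from the three attitude angles and applies the two equations above. The ω–φ–κ rotation sequence and the parameter names are assumptions that would have to be matched to the actual POS specification.

```python
import numpy as np

def direction_cosines(omega, phi, kappa):
    """Rotation matrix R = [[a1,a2,a3],[b1,b2,b3],[c1,c2,c3]] from the three
    POS attitude angles (radians). The omega-phi-kappa sequence used here is
    one common photogrammetric convention and is an assumption."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

def image_to_ground(x, y, f, H, Xs, Ys, angles):
    """Map image plane coordinates (x, y) to ground photographic
    coordinates (X, Y) using the collinearity equations above."""
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = direction_cosines(*angles)
    denom = c1 * x + c2 * y - c3 * f
    X = H * (a1 * x + a2 * y - a3 * f) / denom + Xs
    Y = H * (b1 * x + b2 * y - b3 * f) / denom + Ys
    return X, Y
```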

3.2. Forest Recognition Based on Deep Learning

(1) U-Net: U-Net models adopt an encoder–decoder structure. The encoder extracts high-level features of the image through convolutional and pooling layers, while the decoder maps these features back to the original input image resolution to achieve pixel-level prediction. Skip connections exist between the encoder and decoder, connecting corresponding feature maps from the encoder to the decoder. These skip connections help retain detailed information from the input image and mitigate the vanishing gradient problem, thereby improving the training stability of U-Net models.
(2) ResUNet: ResUNet models combine U-Net with ResNet by adding residual connections to each part of the U-Net structure. This approach aims to overcome the problems of gradient vanishing and information loss that occur when U-Net becomes too deep. However, this approach increases the number of model parameters, making training more challenging.
(3) TernausNet: TernausNet models are based on the U-Net structure but replace the original feature extraction module with a pretrained VGG-11 module. This modification allows the model to converge more quickly and enhance its performance, thereby avoiding overfitting issues that may arise with small training sets.
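As an illustration of this third variant, a TernausNet-style network (a U-Net whose encoder is a pretrained VGG-11) can be assembled with the third-party segmentation_models_pytorch library. This is a sketch of one convenient setup, not the authors' implementation:

```python
import segmentation_models_pytorch as smp
import torch

# U-Net with a VGG-11 encoder pretrained on ImageNet -- the same idea as
# TernausNet (replacing the U-Net feature extractor with VGG-11).
model = smp.Unet(
    encoder_name="vgg11",        # TernausNet's encoder choice
    encoder_weights="imagenet",  # pretrained weights speed up convergence
    in_channels=3,               # RGB UAV imagery
    classes=1,                   # binary mask: forest vs. non-forest
)

# One 512 x 512 RGB tile, the sample size used in this study.
logits = model(torch.randn(1, 3, 512, 512))  # -> shape (1, 1, 512, 512)
forest_mask = torch.sigmoid(logits) > 0.5    # boolean forest mask
```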

3.3. Forest Change Detection and Analysis of Change Areas

Many studies have extracted forest change patches by comparing remote sensing data from different periods. However, in practical forest monitoring, it is challenging to obtain complete UAV data from earlier periods. Therefore, this study attempted to use forest vector data from earlier periods and UAV imagery from later periods for forest change detection. As shown in Figure 3A, the forest area in the UAV imagery was first extracted using deep learning algorithms, and the forest vector data from the earlier period were converted into raster data. Then, the raster data from the two periods were overlaid, and the forest change areas were calculated. Finally, the change areas in the forest were re-vectorized.
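A rough sketch of this overlay step is shown below (file names and the mask source are placeholders; this is an illustration, not the authors' released pipeline). The earlier forest vectors are rasterized onto the grid of the corrected UAV image and differenced against the CNN forest mask:

```python
import geopandas as gpd
import numpy as np
import rasterio
from rasterio.features import rasterize, shapes
from shapely.geometry import shape

# Placeholders for the corrected UAV image, the CNN forest mask, and the
# earlier-period forest boundary vectors.
with rasterio.open("uav_corrected.tif") as src:
    transform, grid, crs = src.transform, (src.height, src.width), src.crs

old_forest = gpd.read_file("forest_boundary_previous.shp").to_crs(crs)
old_mask = rasterize(((g, 1) for g in old_forest.geometry),
                     out_shape=grid, transform=transform,
                     fill=0, dtype="uint8")

new_mask = np.load("cnn_forest_mask.npy").astype("uint8")  # 1 = forest

# Reduced forest: pixels that were forest before but are non-forest now.
reduced = ((old_mask == 1) & (new_mask == 0)).astype("uint8")

# Re-vectorize contiguous change pixels into candidate change patches.
patches = [shape(geom) for geom, val
           in shapes(reduced, transform=transform) if val == 1]
```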
To improve identification efficiency, this study used single-image correction to identify changes in forest areas. However, single UAV image correction, without precise control point correction or aerial triangulation, cannot accurately match earlier data, resulting in misidentified areas. These erroneous forest change areas usually appear at the edges of the forest owing to shifts of the forest area. When UAV imagery is used for forest change detection, changes are identified in two main situations. The first is actual forest reduction: such changes are generally only identified once the reduction reaches a certain extent, so the area of change is usually large. The second is apparent change caused by displacement due to correction errors: these areas are typically small and located at the edges of the forest, as shown in Figure 4.
To address the two situations of change patches, this study applied two methods for identifying misjudgments. The first method is the forest area size determination method, which sets a threshold based on the size of the change patches. If the size exceeds this threshold, it is identified as forest change; if it is below the threshold, it is considered a misjudgment. The idea behind this method is that smaller patches may be caused by correction errors, while larger patches indicate real changes in the forest.
The second method is the ABDA, which calculates the average distance (D) from each pixel in the change area to the forest boundary. This method assumes that the primary cause of misjudged patches is image displacement. If a change patch is on the forest boundary and the average distance to the boundary is small, this change is likely caused by correction errors. Conversely, if the average distance is large, it indicates that the change patch is within the forest area, suggesting that it is a real change caused by deforestation.
This study compared the two identification methods by analyzing the area and D of all change patches in the forest area. By comparing the effectiveness of these methods, an appropriate method and threshold for distinguishing between real forest changes and erroneous changes can be identified.
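The following Python sketch (illustrative; the function name, default thresholds, and the use of the previous-period forest mask to define the boundary are our assumptions) applies the small-area pre-filter and the ABDA together: candidate patches below a minimum area are dropped, and the remaining patches are screened by their average boundary distance D, computed with a distance transform:

```python
import numpy as np
from scipy import ndimage

def filter_change_patches(change_mask, forest_mask, pixel_size,
                          min_area=100.0, d_threshold=8.0):
    """Drop candidate patches smaller than `min_area` (m^2) and keep only
    patches whose average boundary distance D exceeds `d_threshold` (m).
    Illustrative sketch of the small-area pre-filter plus the ABDA."""
    forest = forest_mask.astype(bool)
    # Forest boundary = forest pixels that touch non-forest.
    boundary = forest & ~ndimage.binary_erosion(forest)
    # Distance (in metres) from every pixel to the nearest boundary pixel.
    dist = ndimage.distance_transform_edt(~boundary) * pixel_size

    labels, n = ndimage.label(change_mask)
    kept = np.zeros(change_mask.shape, dtype=bool)
    for i in range(1, n + 1):
        patch = labels == i
        if patch.sum() * pixel_size ** 2 < min_area:
            continue                       # tiny patch: shadows, noise
        if dist[patch].mean() >= d_threshold:
            kept |= patch                  # far from the edge: real change
    return kept
```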

3.4. Accuracy Evaluation

To ultimately determine the identification accuracy of different deep learning algorithms and the results of forest change detection, this study used five indicators: accuracy, precision, recall, F1 score, and kappa coefficient. These evaluation indicators are calculated using data from a confusion matrix (Table 1). In Table 1, the positive class represents forests or areas with changes, and the negative class represents non-forests or areas without changes. Manual interpretation was used to determine whether detected forest changes had actually occurred. TN (true negative) represents the number of samples that are actually negative and are predicted to be negative; FP (false positive) represents the number of samples that are actually negative but are predicted to be positive; FN (false negative) represents the number of samples that are actually positive but are predicted to be negative; and TP (true positive) represents the number of samples that are actually positive and are predicted to be positive.
(1) Accuracy: the proportion of correctly predicted samples to the total number of samples, which is expressed as follows:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
(2) Precision: the proportion of actual positive cases among those predicted as positive, which is given by the following:
$$\text{Precision} = \frac{TP}{TP + FP}$$
(3) Recall: the proportion of actual positive cases that are correctly predicted as positive, which is expressed as follows:
$$\text{Recall} = \frac{TP}{TP + FN}$$
(4) F1 score: the harmonic mean of precision and recall, which serves as a comprehensive evaluation metric and is calculated by the following equation:
$$F1 = \frac{2P \times R}{P + R}$$
(5) Kappa coefficient: a performance metric used to measure the consistency and accuracy in classification tasks, which is especially effective in evaluating performance in cases of imbalanced class distribution, and is expressed as follows:
$$\text{Kappa} = \frac{\text{Acc} - p_e}{1 - p_e}$$
$$p_e = \frac{(TP + FP)(TP + FN) + (TN + FN)(TN + FP)}{(TP + TN + FP + FN)^2}$$
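These five indicators can be computed directly from the four confusion-matrix counts; the short function below is an illustrative sketch (the function name is ours):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F1, and kappa from confusion-matrix
    counts, following the equations above."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / total ** 2
    kappa = (acc - pe) / (1 - pe)
    return {"accuracy": acc, "precision": precision, "recall": recall,
            "f1": f1, "kappa": kappa}

# Example with made-up counts:
print(confusion_metrics(tp=30, fp=3, fn=2, tn=75))
```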

4. Results and Analysis

4.1. Evaluation of the Geometric Correction Results of UAV Imagery

During UAV flights, it is generally difficult to maintain a constant flight altitude. Different flight altitudes affect the accuracy of image correction. As long as the error is controlled within a certain range, the accuracy of forest change detection can be ensured. This study tested the impact of different flight altitudes on the corrected imagery by setting the flight altitudes to 80 m and 380 m and compared the errors at the center and boundary points of the corrected images obtained at these two altitudes.

4.1.1. Correction Results for the Low-Altitude Flight Data

At an altitude of 80 m, a total of 30 UAV images were obtained and corrected. These corrected images were compared with satellite remote sensing data in the China Geodetic Coordinate System 2000. As shown in Figure 5, in relatively flat areas, points A and B on the corrected UAV image are ground feature points, while points a and b are the corresponding positions of these ground feature points on the satellite image. The distances Aa and Bb represent the correction error, i.e., the error between the corrected image and the accurate ground feature points. Point A is located at the edge of the UAV image, at a distance of 12.2 m from point a, while point B is located in the central area of the UAV image, at a distance of 1.4 m from point b.
In relatively undulating terrain, as shown in Figure 6, the errors at points A and B are larger. The displacement of the ground feature at edge point A is 13.1 m, and the displacement at central point B is 6.6 m, both greater than those in the flat areas. This is mainly because, at a low flight altitude, ground undulations have a greater impact on the image, resulting in larger errors. The error comparison shows that the edge error in the undulating areas differs little from that in the flat areas, increasing by less than 1 m, whereas the error at the image center increases more noticeably, by 5.2 m. This indicates that even in areas with significant terrain undulation, the maximum error (at the image edge) barely increases, while the minimum error (at the image center) increases appreciably; overall, the total error of the corrected image does not change much.
A random selection of 20 low-altitude image correction results, as shown in Figure 7, reveals that the error in the central region of the images ranges from 0.48 m to 9.72 m, with an average error of 4.33 m. The error in the edge region of the images ranges from 6.65 m to 22.48 m, with an average error of 12.59 m. An analysis of the above correction results reveals that the corrected images meet the requirements for patch monitoring in China's forest inspection work. According to the technical guidelines for forest inspection in China, the minimum area for extracting forest change patches is 400 m², and the length and width of the patches are generally greater than 20 m. This length is much greater than the average correction error of the images, indicating that the corrected UAV images can be used to extract change areas.

4.1.2. Correction Results for High-Altitude Flight Data

In this study, a total of 50 images were obtained at a UAV flight altitude of 380 m. Compared with the images obtained at 80 m, the ground objects captured at this altitude are smaller, and the distortion after correction is also smaller. The correction results are shown in Figure 8 and Figure 9, where points A, B, and C are points on the corrected UAV images and points a, b, and c are the corresponding points on the satellite remote sensing images in the China Geodetic Coordinate System 2000.
In flat areas (Figure 8), the correction of the UAV images is effective: edge ground feature point A lies 8.8 m from point a, while central ground feature point B coincides with point b without significant offset. In areas with large ground elevation differences (Figure 9), the error between edge points A and a is 11.9 m; the offset between points B and b, in a lower-elevation part of the image center, reaches 4.9 m; and the offset between points C and c, in the image center but slightly higher in elevation than point B, is 2.2 m, the smallest error. At the higher flight altitude, ground undulations influence the imagery less, so the errors are significantly smaller than those of the low-altitude flights.
By randomly selecting 20 sets of UAV imagery captured at a height of 380 m, as shown in Figure 10, the statistical analysis reveals that the error in the central area of the images ranges from 0.48 m to 8.31 m, with an average error of 4.14 m, slightly better than that of the UAV flights at a height of 80 m. The error in the edge areas ranges from approximately 4 m to 18.04 m, with an average error of 11.21 m, showing little difference from the results at the 80 m height. Overall, the correction results show that the edge-area error changes little as the flight height increases from 80 m to 380 m, while the error in the central areas decreases.
In China, the national limit for UAV flight height is 120 m. Since the imagery obtained at 80 m in this study already meets the technical requirements for forest inspection work, flights at the 120 m ceiling would also satisfy production needs.

4.2. Forest Recognition Results Based on Deep Learning

This study used a dataset of 2000 UAV samples, each with a resolution of 512 × 512 pixels, from Tieliugang, Sanya City, Hainan Province, China. The samples were divided into two classes, forest and non-forest, and split into training and testing sets at a ratio of 5:1. Three methods, U-Net, ResUNet, and TernausNet, were trained on the dataset, and the trained models were evaluated on the testing set.
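A minimal training sketch under these settings might look as follows (the random stand-in tensors, epoch count, batch size, learning rate, and loss function are our assumptions; only the 5:1 split and the 512 × 512 tile size come from the text):

```python
import segmentation_models_pytorch as smp
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Random stand-in tensors keep the sketch self-contained; the study used
# 2000 labeled 512 x 512 UAV tiles (forest = 1, non-forest = 0).
images = torch.randn(12, 3, 512, 512)
masks = torch.randint(0, 2, (12, 1, 512, 512)).float()
dataset = TensorDataset(images, masks)

n_test = len(dataset) // 6                       # 5:1 train/test split
train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])

model = smp.Unet("vgg11", encoder_weights="imagenet", classes=1)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(2):                           # illustrative epoch count
    for x, y in DataLoader(train_set, batch_size=2, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```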
The results, as shown in Figure 11, indicate that all three segmentation methods performed well overall. The precision evaluation results, as shown in Table 2, revealed that the TernausNet model achieved the highest precision of 0.98, while the ResUNet model had the lowest precision of 0.89. The U-Net network achieved high accuracy and performed well in identifying buildings, roads, and bare ground. However, as shown in Figure 11, grasslands and forests were not well distinguished, and there were small gaps caused by shadows within the recognized areas. The U-Net network achieved a precision of 0.95, a recall of 0.94, an F1 score of 0.93, and a kappa coefficient of 0.73.
The ResUNet network exhibited more shadow gaps in its results, possibly because the residual module caused the model to overly focus on shadowed areas, resulting in poorer recognition in these regions and a worse performance in distinguishing grasslands from forests compared to U-Net. The ResUNet network achieved a precision of 0.89, a recall of 0.95, and an F1 score of 0.90, with a kappa coefficient of only 0.65, indicating a higher recall rate but lower accuracy.
The TernausNet network showed the best resistance to interference, with almost no shadow gaps, clear boundaries, and better distinction between grasslands and forests. Benefiting from the use of a pretrained model for feature extraction, it demonstrated stronger performance in feature recognition. Overall, the TernausNet model exhibited the best recognition performance, with a precision of 0.98, a recall of 0.97, an F1 score of 0.97, and a kappa coefficient of 0.90. The VGG-11 network used in TernausNet was easier to train, had fewer model parameters, and required less computation, making it suitable for simple classification tasks. Therefore, the TernausNet model was selected for forest area recognition in this study.

4.3. Forest Change Detection and Change Area Analysis Results

The corrected UAV dataset was processed through the deep learning model to identify forest areas and overlaid with previous forest vector maps to calculate forest change areas. During this process, small patches of forest change may arise from shadows in the UAV images or insufficient forest canopy closure, and the use of previous vector data of possibly low accuracy can also produce small change patches. In national forest inspection work, the precision required for extracting forest change patches is not extremely high, so this study excluded change areas smaller than 100 m² to reduce the computational load of extracting change patches. Furthermore, errors in UAV image correction can lead to misjudged forest change patches. To improve the accuracy of forest change detection and eliminate misjudged patches, this study analyzed the change areas extracted from UAV images based on their size and their average distance from the forest edge, and suitable thresholds were determined to remove patches misjudged as changes.

4.3.1. Analysis of Forest Change Areas in Low-Altitude Images

This study conducted forest change detection on 30 UAV images captured at an altitude of approximately 80 m, identifying 72 forest change patches. The largest patch had an area of 1693.15 m². The observations revealed that 22 patches were correctly identified, while 50 were misjudged. To address the issue of automatically excluding misjudged patches, this study employed two discrimination methods, area size discrimination and the ABDA, and the results are shown below.
(1) Area Size Discrimination
This study divided the range of 0 to 1750 m² into 10 equal intervals to illustrate the distribution of correctly identified and misjudged patches. The graph in Figure 12 shows the number of misjudged patches in blue and correctly identified patches in orange. It can be observed from Figure 12 that misjudged patches were mainly concentrated in small-area regions, while the number of correctly identified patches varied only slightly.
This study used thresholds of 250 m², 325 m², 375 m², and 500 m² to determine whether change patches were misjudged. Patches larger than the threshold were classified as change patches, while those smaller than the threshold were classified as misjudgments. The results for each threshold are summarized in Table 3. The best performing threshold was 325 m², with an accuracy of 0.67 and a kappa coefficient of 0.33. However, the recall rate was only 0.47, indicating less than ideal discrimination accuracy for production purposes.
Upon closer examination, most misjudged patches occurred at the edges of the forests, as shown in Figure 13. Large misjudged patches tended to occur at forest edges near the borders of individual corrected images, where the correction error is largest, while small misjudged patches occurred at forest edges near the centers of the images, where the error is smaller. The main cause of misjudgment was therefore image correction error, which is consistent with the misjudged patches being distributed along forest edges.
(2) ABDA
Because the misjudged patches were mainly distributed at the edges of forests, this study used D as a threshold to discriminate forest change patches. The histogram in Figure 14 shows the average distance from the boundary for these change areas: the misjudged change areas mostly lie within 10 m, while the correctly judged change areas are mainly distributed above 5 m. By setting a threshold on the average distance, real forest change can be better distinguished from misjudged change: a smaller average distance means the patch lies closer to the forest edge, indicating likely error-induced forest displacement, while a larger average distance means the patch lies closer to the interior of the forest, indicating real forest change. Figure 15 shows that patches with a large average distance, such as D = 28.37 m, indicate real forest change, whereas patches with a small average distance, such as D = 4.07 m, were misjudged owing to forest displacement.
This study used thresholds of 6, 7.5, 8, and 8.5 m for D to distinguish between real and false forest change patches. The results are shown in Table 4. When the threshold was too low, such as 6 m, the recall was low at 0.62, and many real forest changes were not identified. When the threshold was too high, such as 8.5 m, the precision was low at 0.64, and there were many incorrectly identified change areas. The best performance was observed with a threshold of 8 m for the average distance, achieving a kappa coefficient of 0.63 and a precision of 0.68. Compared to using area for classification, there was a significant improvement. The identification results can basically meet the requirements of forest change detection in forest inspection.

4.3.2. Analysis of Forest Change Areas from High-Altitude Aerial Images

This study also analyzed patches recognized in 50 aerial images taken at a height of 380 m, resulting in 110 forest change patches, including 32 correctly identified patches and 78 misidentified patches. The largest patch area was 8716 m². This study also employed two discrimination methods to analyze the results of extracting change patches. The results are as follows.
(1) Area Discrimination Method
Based on the statistical analysis, all patches larger than 1750 m² are true forest change patches. To allow comparison with the low-altitude results, this study used the same area range and intervals to show the distribution of forest change patches by area, as shown in Figure 16. Forest change patches are mainly concentrated below 1000 m², with misidentified change patches primarily below 500 m² and correctly identified patches mainly distributed between 250 m² and 750 m²; the maximum patch area is 8716.8 m². Figure 17 shows the distribution of misidentified patches, which are still mainly concentrated in the edge areas of forests.
This study used thresholds of 250, 375, and 500 m² to distinguish between misidentified and correctly identified forest change patches based on area. The results are shown in Table 5. The best overall performance was obtained with a threshold of 375 m², achieving a precision of 0.84, a recall of 0.53, and a kappa coefficient of 0.46. However, many real forest change areas were also excluded, so this method does not meet practical requirements.
(2) ABDA
As shown in Figure 18, the average distances of misidentified areas from the edge were concentrated below 10 m and decreased gradually in frequency with increasing distance. Compared with the low-altitude flights, the separation between real and false change areas is greater at high altitude. In Figure 19, a misidentified change area caused by forest displacement lies close to the forest edge, with a small D value of only 3.18 m; conversely, a true change area is also near the forest edge but has a larger D value of 26.23 m, which indicates real change, as calibration errors are usually within 10 m.
This study used thresholds of 7, 7.5, 8, and 8.5 m to differentiate between misidentified and correctly identified forest change patches based on the distance from the edge. The results are shown in Table 6. Using a threshold of 8 m yielded the best performance, with a kappa coefficient of 0.89, an accuracy of 0.95, and a recall of 0.94, which fully meets the requirements for forest inspection in China.

5. Discussion

For forest change detection, the current mainstream approach involves comparing high-resolution satellite remote sensing data from two different periods. However, high-definition satellite images are relatively expensive to obtain and often affected by cloud cover, especially in southern China, where it can be difficult to obtain complete images for several months. In contrast, UAVs are not affected by cloud cover and provide high image clarity, fully meeting the needs of forest resource monitoring. However, most UAV imagery applications require image stitching, which is a time-consuming process and prone to errors. Additionally, stitched images require geometric correction by professionals, making the process cumbersome and not fully automated.
To address the above issues, this study selected a single UAV image with RTK positioning data. Without stitching, the POS data were used for correction. After correction, forest change detection was performed to extract patches of reduced forest area. Then, the ABDA was used to eliminate misclassified patches, generating the final patch discrimination results and achieving an accuracy of over 95%. Throughout the forest change detection process, human intervention is completely unnecessary, making the entire process fully automated. The functionalities and workflow of this fully automated forest change detection system are illustrated in Figure 20.
In the final stage of change detection, this study used the ABDA to remove erroneously identified change areas. However, this method may inadvertently exclude true change areas that have structural characteristics similar to those of misidentified change areas. As shown in Figure 21, two true change areas are incorrectly identified because their average distance is less than the set threshold of 8 m, specifically 4.34 m and 6.73 m.
To address this issue, a further reduction of correction errors is needed. The correction algorithm used in this study mainly relies on POS data to correct images. As positioning technology continues to improve, POS data will become more accurate, thereby improving the data correction accuracy. Additionally, high-precision terrain data are challenging to obtain but could be integrated into the data correction process in the future to further improve image correction algorithms and reduce errors, thereby minimizing misjudgments in forest change detection. Furthermore, comparing corrected image data reveals larger errors at the edges of the corrected images compared to the central regions. Therefore, when designing UAV flight plans, it is advisable to consider increasing image overlap and using only the central parts of the images to reduce overall errors.
This study obtained only two types of flight altitude image data: 80 m and 380 m. Further research is needed to explore the relationship between specific flight altitudes and the effectiveness of forest change detection.

6. Conclusions

Forest inspection is an important task in forest monitoring in China. It is mainly carried out by professionals who compare high-definition satellite remote sensing data from two different periods and extract patches where forests have decreased, a process that requires a significant amount of manpower and time. This study used UAV remote sensing technology to correct single images with RTK positioning data, combined a CNN to extract decreased forest patches, and applied the ABDA to automatically remove misidentified patches using a distance threshold. The results show that single UAV image correction with RTK positioning data has an error of approximately 4 m in the central region and approximately 12 m in the edge region. Among the three CNN algorithms, the TernausNet model is most suitable for forest identification, achieving an accuracy of 0.98. The ABDA developed in this study performs best in removing misidentified patches: with a distance threshold of 8 m, the identification accuracy of change patches is greater than 0.85 at UAV flight altitudes of both 80 m and 380 m, which is sufficient for practical application. The best detection of forest change patches is achieved at a flight altitude of 380 m, with an accuracy of 0.95, a precision of 0.91, and a kappa coefficient of 0.89. Additionally, this study developed a complete, fully automated forest change detection system, addressing the large amounts of manpower and time consumed in forest inspection.

Author Contributions

J.X. and Z.Z.: conceptualization, methodology, software, formal analysis, project administration, funding acquisition, investigation, writing—original draft, and writing—review and editing. X.T.: supervision, project administration, and funding acquisition. M.Z.: visualization and investigation. P.C.: data curation. S.T.: methodology and software. X.W.: project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Hainan Provincial Natural Science Foundation of China (621RC673).

Data Availability Statement

The original contributions presented in this study are included in the article, and the main code is shared at https://github.com/shiftzeroXJH/Forestland-Changes-Based-on-DJI-Drones (accessed on 1 September 2024). If you have further questions, please contact the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Richards, K.R.; Stokes, C. A Review of Forest Carbon Sequestration Cost Studies: A Dozen Years of Research. Clim. Chang. 2004, 63, 1–48. [Google Scholar] [CrossRef]
  2. Zhang, Y.; Li, X.; Wen, Y. Forest carbon sequestration potential in China under the background of carbon emission peak and carbon neutralization. J. Beijing For. Univ. 2022, 44, 38–47. [Google Scholar]
  3. Mina, M.; Bugmann, H.; Cordonnier, T.; Irauschek, F.; Klopcic, M.; Pardos, M.; Cailleret, M. Future Ecosystem Services from European Mountain Forests under Climate Change. J. Appl. Ecol. 2017, 54, 389–401. [Google Scholar] [CrossRef]
  4. Hao, X.; Ouyang, W.; Zhang, K.; Wan, X.; Cui, X.; Zhu, W. Enhanced Release, Export, and Transport of Diffuse Nutrients from Litter in Forested Watersheds with Climate Warming. Sci. Total Environ. 2022, 837, 155897. [Google Scholar] [CrossRef]
  5. Njana, M.A.; Mbilinyi, B.; Eliakimu, Z. The Role of Forests in the Mitigation of Global Climate Change: Emprical Evidence from Tanzania. Environ. Chall. 2021, 4, 100170. [Google Scholar] [CrossRef]
  6. Wang, F.; Harindintwali, J.D.; Yuan, Z.; Wang, M.; Wang, F.; Li, S.; Yin, Z.; Huang, L.; Fu, Y.; Li, L.; et al. Technologies and Perspectives for Achieving Carbon Neutrality. Innovation 2021, 2, 100180. [Google Scholar] [CrossRef]
  7. Shi, X.; Zheng, Y.; Lei, Y.; Xue, W.; Yan, G.; Liu, X.; Cai, B.; Tong, D.; Wang, J. Air Quality Benefits of Achieving Carbon Neutrality in China. Sci. Total Environ. 2021, 795, 148784. [Google Scholar] [CrossRef]
  8. Advances in Forest Inventory for Sustainable Forest Management and Biodiversity Monitoring. Available online: https://link.springer.com/book/10.1007/978-94-017-0649-0 (accessed on 20 June 2024).
  9. Desclée, B.; Bogaert, P.; Defourny, P. Forest Change Detection by Statistical Object-Based Method. Remote Sens. Environ. 2006, 102, 1–11. [Google Scholar] [CrossRef]
  10. Mani, J.K.; Varghese, A.O. Remote Sensing and GIS in Agriculture and Forest Resource Monitoring. In Geospatial Technologies in Land Resources Mapping, Monitoring and Management; Reddy, G.P.O., Singh, S.K., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 377–400. ISBN 978-3-319-78711-4. [Google Scholar]
  11. Shakya, A.K.; Ramola, A.; Vidyarthi, A. Exploration of Pixel-Based and Object-Based Change Detection Techniques by Analyzing ALOS PALSAR and LANDSAT Data. In Smart and Sustainable Intelligent Systems; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2021; pp. 229–244. ISBN 978-1-119-75213-4. [Google Scholar]
  12. Chen, G.; Hay, G.J.; Carvalho, L.M.T.; Wulder, M.A. Object-Based Change Detection. Int. J. Remote Sens. 2012, 33, 4434–4457. [Google Scholar] [CrossRef]
  13. Ya’acob, N.; Azize, A.B.M.; Mahmon, N.A.; Yusof, A.L.; Azmi, N.F.; Mustafa, N. Temporal Forest Change Detection and Forest Health Assessment Using Remote Sensing. IOP Conf. Ser. Earth Environ. Sci. 2014, 19, 012017. [Google Scholar] [CrossRef]
  14. Wang, H.; Huang, J. Study on Characteristics of Land Cover Change Using MODIS NDVI Time Series. J. Zhejiang Univ. Agric. Life Sci. 2009, 35, 105–110. [Google Scholar]
  15. Gyamfi-Ampadu, E.; Gebreslasie, M.; Mendoza-Ponce, A. Mapping Natural Forest Cover Using Satellite Imagery of Nkandla Forest Reserve, KwaZulu-Natal, South Africa. Remote Sens. Appl. Soc. Environ. 2020, 18, 100302. [Google Scholar] [CrossRef]
  16. Bergamasco, L.; Martinatti, L.; Bovolo, F.; Bruzzone, L. An Unsupervised Change Detection Technique Based on a Super-Resolution Convolutional Autoencoder. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3337–3340. [Google Scholar]
  17. Li, Y.; Peng, C.; Chen, Y.; Jiao, L.; Zhou, L.; Shang, R. A Deep Learning Method for Change Detection in Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5751–5763. [Google Scholar] [CrossRef]
  18. Jiang, H.; Peng, M.; Zhong, Y.; Xie, H.; Hao, Z.; Lin, J.; Ma, X.; Hu, X. A Survey on Deep Learning-Based Change Detection from High-Resolution Remote Sensing Images. Remote Sens. 2022, 14, 1552. [Google Scholar] [CrossRef]
  19. Yang, M.; Jiao, L.; Liu, F.; Hou, B.; Yang, S. Transferred Deep Learning-Based Change Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6960–6973. [Google Scholar] [CrossRef]
  20. Sharifi, A.; Felegari, S.; Tariq, A.; Siddiqui, S. Forest Cover Change Detection Across Recent Three Decades in Persian Oak Forests Using Convolutional Neural Network. In Climate Impacts on Sustainable Natural Resource Management; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2021; pp. 57–73. ISBN 978-1-119-79340-3. [Google Scholar]
  21. de Bem, P.P.; de Carvalho Junior, O.A.; Fontes Guimarães, R.; Trancoso Gomes, R.A. Change Detection of Deforestation in the Brazilian Amazon Using Landsat Data and Convolutional Neural Networks. Remote Sens. 2020, 12, 901. [Google Scholar] [CrossRef]
  22. Isaienkov, K.; Yushchuk, M.; Khramtsov, V.; Seliverstov, O. Deep Learning for Regular Change Detection in Ukrainian Forest Ecosystem With Sentinel-2. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 364–376. [Google Scholar] [CrossRef]
  23. Kalinaki, K.; Malik, O.A.; Ching Lai, D.T. FCD-AttResU-Net: An Improved Forest Change Detection in Sentinel-2 Satellite Images Using Attention Residual U-Net. Int. J. Appl. Earth Obs. Geoinf. 2023, 122, 103453. [Google Scholar] [CrossRef]
  24. National Forestry and Grassland Administration. Technical Regulations for Investigation and Monitoring of Forests, Grasslands, and Wetlands in China in 2022; National Forestry and Grassland Administration: Beijing, China, 2022. [Google Scholar]
  25. Abdollahnejad, A.; Panagiotidis, D.; Bílek, L. An Integrated GIS and Remote Sensing Approach for Monitoring Harvested Areas from Very High-Resolution, Low-Cost Satellite Images. Remote Sens. 2019, 11, 2539. [Google Scholar] [CrossRef]
  26. Duarte, A.; Borralho, N.; Cabral, P.; Caetano, M. Recent Advances in Forest Insect Pests and Diseases Monitoring Using UAV-Based Data: A Systematic Review. Forests 2022, 13, 911. [Google Scholar] [CrossRef]
  27. Xu, Z. Study on Subtropical Forest Monitoring Method Based on UAV Remote Sensing and AI Algorithm. Ph.D. Thesis, Jiangxi Agricultural University, Nanchang, China, 2022. [Google Scholar]
  28. Horcher, A.; Visser, R. Unmanned Aerial Vehicles: Applications for Natural Resource Management and Monitoring. In Proceedings of the 2004 Council on Forest Engineering (COFE) Conference: “Machines and People, The Interface”, Hot Springs, AR, USA, 27–30 April 2004. [Google Scholar]
  29. Ecke, S.; Dempewolf, J.; Frey, J.; Schwaller, A.; Endres, E.; Klemmt, H.-J.; Tiede, D.; Seifert, T. UAV-Based Forest Health Monitoring: A Systematic Review. Remote Sens. 2022, 14, 3205. [Google Scholar] [CrossRef]
  30. Wallace, L.O.; Lucieer, A.; Watson, C.S. Assessing the Feasibility of UAV-Based LiDAR for High Resolution Forest Change Detection. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B7, 499–504. [Google Scholar] [CrossRef]
  31. Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; Schmidtlein, S. Mapping Forest Tree Species in High Resolution UAV-Based RGB-Imagery by Means of Convolutional Neural Networks. ISPRS J. Photogramm. Remote Sens. 2020, 170, 205–215. [Google Scholar] [CrossRef]
  32. Pyo, J.; Han, K.; Cho, Y.; Kim, D.; Jin, D. Generalization of U-Net Semantic Segmentation for Forest Change Detection in South Korea Using Airborne Imagery. Forests 2022, 13, 2170. [Google Scholar] [CrossRef]
  33. Wan, Q.; Luo, L.; Chen, J.; Wang, Y.; Guo, D. Drone Image Stitching Using Local Least Square Alignment. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Virtual Symposium, 26 September–2 October 2020; pp. 1849–1852. [Google Scholar]
  34. Wan, Q.; Chen, J.; Luo, L.; Gong, W.; Wei, L. Drone Image Stitching Using Local Mesh-Based Bundle Adjustment and Shape-Preserving Transform. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7027–7037. Available online: https://ieeexplore.ieee.org/abstract/document/9211752 (accessed on 11 May 2024). [CrossRef]
  35. Dhana Lakshmi, M.; Mirunalini, P.; Priyadharsini, R.; Mirnalinee, T.T. Review of Feature Extraction and Matching Methods for Drone Image Stitching. In Proceedings of the International Conference on ISMAC in Computational Vision and Bio-Engineering 2018 (ISMAC-CVB), Palladam, India, 16–17 May 2018; Pandian, D., Fernando, X., Baig, Z., Shi, F., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 595–602. [Google Scholar]
  36. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention—MICCAI, Munich, Germany, 5–9 October 2015. [Google Scholar]
  37. Zhang, Z.; Liu, Q.; Wang, Y. Road Extraction by Deep Residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef]
  38. A Study on Near-Real-Time Geometric Correction System of Drones Image. In Proceedings of the 2018 International Conference on Information Networking (ICOIN), Chiang Mai, Thailand, 10–12 January 2018. Available online: https://ieeexplore.ieee.org/abstract/document/8343087/authors#authors (accessed on 11 May 2024).
  39. Peña-Haro, S.; Ljubičić, R.; Strelnikova, D. Chapter 8—Geometric Correction and Stabilization of Images Collected by UASs in River Monitoring. In Unmanned Aerial Systems for Monitoring Soil, Vegetation, and Riverine Environments; Manfreda, S., Ben, D.E., Eds.; Earth Observation; Elsevier: Amsterdam, The Netherlands, 2023; pp. 203–230. ISBN 978-0-323-85283-8. [Google Scholar]
  40. Jakob, S.; Zimmermann, R.; Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sens. 2017, 9, 88. [Google Scholar] [CrossRef]
  41. Belloni, V.; Fugazza, D.; Di Rita, M. UAV-Based Glacier Monitoring: GNSS Kinematic Track Post-Processing and Direct Georeferencing for Accurate Reconstructions in Challenging Environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B1-2022, 367–373. [Google Scholar] [CrossRef]
  42. Cramer, M.; Przybilla, H.-J.; Zurhorst, A. UAV Cameras: Overview and Geometric Calibration Benchmark. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2-W6, 85–92. [Google Scholar] [CrossRef]
  43. Ye, P.; Zhang, Y.; Ran, H. Aerial Image Stitching Method Based on Feature Transfer and Tile Image. In Proceedings of the 2023 6th International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 26–29 May 2023; pp. 830–835. [Google Scholar]
  44. Xu, Q.-H. A Method of Geometric Correction and Mosaic of Unmanned Aerial Vehicle Remote Sensing Image without Ground Control Points. Doctoral Dissertation, Nanjing University, Nanjing, China, 2013. [Google Scholar]
Figure 1. Location of the study area.
Figure 2. Technical flow chart of the study.
Figure 3. Forestland change detection flowchart (A) and change area (B). d1 and d2 are the distances from the point to the forest boundary.
Figure 4. Changing areas at the edge of the forest.
Figure 5. Details of the correction results for low-altitude flat area UAV imagery.
Figure 6. Details of the correction results for low-altitude rugged area UAV imagery.
Figure 7. Distribution of mean errors for low-altitude flight data.
Figure 8. Detailed correction results of UAV imagery in high-altitude flat areas.
Figure 9. Detailed correction results of UAV imagery in high-altitude rugged areas.
Figure 10. Distribution of mean errors in high-altitude flight data.
Figure 11. Comparison of various deep learning network models.
Figure 12. Distribution of low-altitude forest change area sizes.
Figure 13. Misjudged patch distribution.
Figure 14. Distribution of distances from the edge for low-altitude forest change areas.
Figure 15. Comparison of distances from the edge for low-altitude forest change areas.
Figure 16. Distribution of high-elevation forest change areas.
Figure 17. Comparison of high-elevation forest change areas.
Figure 18. Distribution of the distance from the edge for high-altitude forest change areas.
Figure 19. Comparison of the distances from the edges of high-elevation forest change areas.
Figure 20. Automated forest change detection system. (a–c) Photos of the field survey. (d,e) Forestland change detection system interface. (f) Forestland change detection results.
Figure 21. True change areas with distances less than 8 m from the edge.
Table 1. Confusion matrix.

                    Predicted Negative    Predicted Positive
Actual Negative     TN                    FP
Actual Positive     FN                    TP
Table 2. Comparison of various deep learning network methods applied to the Tieliugang UAV dataset.

Model         Accuracy   Precision   Recall    F1 Score   Kappa
U-Net         0.97077    0.95098     0.93851   0.92991    0.73389
ResUNet       0.86277    0.89231     0.94808   0.89923    0.64733
TernausNet    0.99292    0.97822     0.97432   0.97276    0.89808
Table 3. Differentiation of low-altitude forest change areas based on area size thresholds.

Threshold (m²)   250    325    375    500
Accuracy         0.58   0.67   0.65   0.65
Precision        0.82   0.77   0.68   0.55
Recall           0.41   0.47   0.45   0.44
F1 Score         0.55   0.59   0.55   0.49
Kappa            0.23   0.33   0.28   0.23
Table 4. Differentiating real forest change areas based on distance from edge thresholds at low altitudes.

Threshold (m)    6      7.5    8      8.5
Accuracy         0.58   0.67   0.65   0.65
Precision        0.82   0.77   0.68   0.55
Recall           0.41   0.47   0.45   0.44
F1 Score         0.55   0.59   0.55   0.49
Kappa            0.23   0.33   0.28   0.23
Table 5. Differentiating real forest change areas based on area thresholds at high altitudes.

Threshold (m²)   250    375    500
Accuracy         0.69   0.74   0.74
Precision        0.91   0.84   0.69
Recall           0.48   0.53   0.54
F1 Score         0.63   0.65   0.60
Kappa            0.40   0.46   0.41
Table 6. Differentiation of high-elevation forest change areas based on distance from edge thresholds.

Threshold (m)    7      7.5    8      8.5
Accuracy         0.93   0.95   0.95   0.94
Precision        0.94   0.94   0.91   0.84
Recall           0.83   0.88   0.94   0.93
F1 Score         0.88   0.91   0.92   0.89
Kappa            0.83   0.87   0.89   0.84