Optimizing Drone-Based Surface Models for Prescribed Fire Monitoring
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This article presents a useful methodology for quantifying biomass using drones. The quantification of biomass and fuel load is, in the case of prescribed burns, of great importance for assessing their effectiveness and identifying potential problems arising from their application. In this sense, techniques that facilitate this measurement are necessary since, as the authors point out, obtaining a reliable measurement of the fuel load before and after a prescribed burn is costly in time and resources and cannot always be performed at the required scale and reliability.
The authors present a complex and detailed working procedure and provide sufficient detail for the method to be evaluated in other situations, although, as they emphasize, the variability of those situations can make its application difficult.
The text and its content are correct, although the application of the method, considering the limitations highlighted by the authors themselves, can be difficult and needs to be tested in other situations so that it can truly help these tools progress. In this sense, the authors themselves could provide more details.
Author Response
Thank you for your thoughtful suggestion.
We would indeed value the opportunity to undertake additional sampling; however, our current resources do not permit this endeavor. We hope that future researchers can take up this direction and validate our methodology in prescribed burn settings.
Additionally, we have reviewed and refined the English of the manuscript and have incorporated further analyses as recommended by another reviewer.
The updated manuscript has been uploaded to the platform.
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
I have no comments. Very interesting material. Consideration of the importance of errors makes the material obtained valuable. In these times of increasing drought and associated fires, the information contained in the article can be of great help to other units facing problems of uncontrolled fires.
Author Response
First and foremost, I would like to express my deepest gratitude for your thorough review of our manuscript.
I wanted to let you know that we have uploaded an updated version of the manuscript to the platform. This new version includes minor revisions in the English editing, as well as additional analyses and content to address suggestions made by other reviewers. We believe these improvements enhance the overall quality of our study.
Best regards
Christian MR
Author Response File: Author Response.docx
Reviewer 3 Report
Comments and Suggestions for Authors
The manuscript is clear, relevant to the field of prescribed fire monitoring, and presented in a well-structured manner.
The cited references are mostly recent and relevant publications, and the manuscript does not include an excessive number of self-citations.
The manuscript is scientifically sound. The experimental design is appropriate to test the hypothesis. The manuscript’s results are reproducible based on the details given in the methods section.
Figures and schemes are appropriate. All of them properly show the data.
All of the figures are easy to interpret and understand.
The data is interpreted appropriately and consistently throughout the manuscript.
In general, the article gives the impression of a deep, large-scale study. Its value also lies in the fact that the obtained results can be used not only for solving problems in the field of prescribed fire monitoring, but also in other areas where it is necessary to build three-dimensional models of large objects from digital images obtained by UAVs.
Author Response
First and foremost, I would like to express my deepest gratitude for your thorough review of our manuscript.
I wanted to let you know that we have uploaded an updated version of the manuscript to the platform. This new version includes minor revisions in the English editing, as well as additional analyses and content to address suggestions made by other reviewers. We believe these improvements enhance the overall quality of our study.
Best regards
Christian MR
Author Response File: Author Response.docx
Reviewer 4 Report
Comments and Suggestions for Authors
The research systematically tests and evaluates various processing options within the SfM-MVS workflow, resulting in significant improvements in the quality of surface models. It aims to implement an optimized SfM-MVS workflow and assess the impact of various SfM-MVS processing options on the quality of the generated 3D reconstruction and surface models, addressing not only the technical aspects but also the practical implications and future prospects.
The following remarks should be taken into consideration when preparing the updated version of the paper:
1. In 2.2.3, the black and yellow rubber objects used in the design shall be indicated in the drawing.
2. In 2.3, the Structure from Motion and Multi-View Stereopsis (SfM-MVS) method was employed to generate point clouds, from which DSMs, DTMs, and orthomosaics were derived. The article gives a brief introduction to DSMs and DTMs earlier, but lacks a brief summary of orthomosaics.
3. Section 2.3.2 mentions that each point in the DSPC version was assigned a confidence threshold based on the number of depth maps in which it appeared. How exactly should this confidence threshold be determined? And how is confidence obtained?
4. When optimizing sparse clouds in SfM, how are values such as the optimal RE threshold, the tie points, etc. derived?
5. There are some unavoidable errors in reconstructing sparse clouds, so the effect of these errors on the reconstructed model and whether they can be ignored should be demonstrated.
Comments on the Quality of English Language
Minor editing of English language is required.
Author Response
- In 2.2.3, the black and yellow rubber objects used in the design shall be indicated in the drawing.
Thank you for your comment. In Figure 3, the black and yellow rubber objects used in the design are already represented and labeled as 'Check Points' and 'Control Points', respectively, as indicated in the figure's caption. The close-up view (b) and the more detailed zoom-in (c) in the figure provide a clear representation of these objects in the study. To address your feedback, we have also made a revision to the caption of Figure 3 to ensure clarity in representing these points according to the study's specifications. We believe that with this modification, we have adequately represented and labeled these points.
- In 2.3, the Structure from Motion and Multi-View Stereopsis (SfM-MVS) method was employed to generate point clouds, from which DSMs, DTMs, and orthomosaics were derived. The article gives a brief introduction to DSMs and DTMs earlier, but lacks a brief summary of orthomosaics.
The description of orthomosaic, as defined below, has been incorporated into the manuscript.
An orthomosaic is a detailed and geometrically accurate image of an area, composed of multiple photos that have been orthorectified. Within the SfM workflow, once the optimized sparse cloud and subsequently a DEM have been generated, the next step is to produce an orthomosaic. This is achieved by projecting the georeferenced images onto a mesh, which is interpolated using the optimized tie points from the sparse cloud. This ensures that the resulting image faithfully represents the actual surface, without distortions and at a uniform scale.
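For illustration, the sketch below shows roughly where this DEM/orthomosaic step sits in a scripted workflow. It is a minimal sketch assuming Agisoft Metashape's Python API (method defaults and argument names vary between versions); the project path is hypothetical and this is not the exact script used in the study.

```python
import Metashape

# Minimal sketch of the DEM -> orthomosaic step (Metashape Python API;
# defaults and argument names vary by version).
doc = Metashape.Document()
doc.open("project.psx")   # hypothetical project containing an optimized sparse cloud
chunk = doc.chunk

# Rasterize an elevation model from the reconstructed surface ...
chunk.buildDem()

# ... then orthorectify and blend the georeferenced images onto that surface,
# yielding a uniformly scaled, distortion-free orthomosaic.
chunk.buildOrthomosaic()

doc.save()
```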
- Section 2.3.2 mentions that each point in the DSPC version was assigned a confidence threshold based on the number of depth maps in which it appeared. How exactly should this confidence threshold be determined? And how is confidence obtained?
In the manuscript, I have integrated the concept of the confidence filter in more detail and have also applied new analyses of confidence filtering and its impact on the accuracy of the dense cloud.
In the following, for a more detailed understanding, I answer your questions:
The confidence of a specific point in the point cloud is determined by the number of combined depth maps contributing to that point. The more depth maps that contribute to a particular point, the higher its confidence value. This is because the existence and position of that point have been corroborated across multiple depth maps, enhancing the reliability of its location and existence.
In other words:
- A point derived from many depth maps will have a high confidence value because it has been "seen" and logged from several perspectives.
- A point derived from a few depth maps will have a low confidence value, indicating that it may be an artifact or an error.
So, this filter is essentially based on redundancy and corroboration between depth maps to determine the reliability of points in the resulting point cloud.
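As a purely conceptual illustration of this redundancy idea (not Metashape's internal implementation), the snippet below counts how many depth maps corroborate each point and flags weakly supported ones; the point IDs, depth-map names, and minimum count are made up.

```python
# Conceptual sketch: point confidence as the number of depth maps corroborating it.
observations = {
    101: ["dm_01", "dm_02", "dm_05", "dm_07"],  # seen from four views -> high confidence
    102: ["dm_03"],                             # seen only once -> possible artifact
}

confidence = {pid: len(depth_maps) for pid, depth_maps in observations.items()}

MIN_CONFIDENCE = 2  # keep only points corroborated by at least two depth maps
kept = [pid for pid, c in confidence.items() if c >= MIN_CONFIDENCE]
print(kept)  # -> [101]
```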
To answer the reviewer's questions more holistically and to explain the changes made in the manuscript:
How exactly should the confidence threshold be determined?
Depth maps are generated by assessing the parallax between overlapping images to determine the distance of each point in three-dimensional space. The better the image quality and the greater the overlap of images, the more precise and defined are the details, which facilitates the matching process and the creation of these depth maps. High-quality images and high overlap reduce ambiguities and errors during this process.
Moreover, overlap plays a critical role by ensuring that each point of terrain or object is captured from multiple perspectives. This not only improves the accuracy of depth maps by providing more information for their generation but also increases the confidence of each point in the resulting cloud. That is, a point "seen" in multiple overlapping images and represented in several depth maps has higher reliability.
Thus, combining high-quality images with good overlap ensures the generation of robust and reliable depth maps. In our project, this combination resulted in a high confidence point cloud from the outset, allowing us to work with the lowest confidence threshold (0-1 over 0-255), ensuring that the maximum amount of detail was retained in the final model.
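For reference, this part of the workflow can be scripted as outlined below; this is a hedged sketch based on Metashape's Python API (class and method names differ between versions, e.g. buildDenseCloud in 1.x vs. buildPointCloud in 2.x), not a verbatim excerpt of our processing script.

```python
import Metashape

chunk = Metashape.app.document.chunk

# Depth maps from the aligned, overlapping images; mild filtering preserves fine detail.
chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)

# Build the dense cloud and store per-point confidence (0-255, reflecting the
# number of contributing depth maps).
chunk.buildDenseCloud(point_confidence=True)

# With high image quality and overlap, confidence was high from the outset, so only
# the lowest band (0-1) needs to be isolated; the selected low-confidence points can
# then be removed (the removal call is version-dependent).
chunk.dense_cloud.setConfidenceFilter(0, 1)
```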
And how is confidence obtained?
- Increasing image overlap: the more images that capture a specific feature from different angles, the more depth maps will be generated for that point, increasing its confidence.
- Image quality: clear and well-focused images allow for better depth map generation, which may result in greater confidence for the points.
- Depth map filter settings: experimenting with Metashape's depth map settings to optimize depth map creation.
In summary, the confidence threshold and how to gain confidence are interrelated and depend both on image capture practices and software settings. Experimenting and visually reviewing the 3D rendering is key to optimizing confidence thresholds.
- When optimizing sparse clouds in SfM, how are values such as the optimal RE threshold, the tie points, etc. derived?
Based on this question, I have also made some modifications in the manuscript to aid understanding of the sparse cloud optimization process in the SfM pipeline. In any case, here is an answer to your question:
In the SfM process, tie points (Sparse Cloud) are initially derived through the detection and matching of distinctive features across multiple images. These features can be edges, corners, or unique texture patterns consistently identified in overlapping images. Once these features are detected in each image, correspondences are sought between images, meaning the same feature is located across several images. These image correspondences allow for triangulations and, eventually, the reconstruction of tie points in a three-dimensional space to form the initial sparse cloud. It's essential to note that each tie point is composed of quality attributes such as Reprojection Error (RE), Reconstruction Uncertainty (RU), and Projection Accuracy (PA).
Later, when optimizing sparse clouds in SfM, the estimation of tie points is based on these quality attributes. The RE reflects how well a reprojected point in an image matches its measured position, critical for evaluating fitting accuracy. RU, on the other hand, provides insight into the uncertainty associated with the three-dimensional position of the point, while PA assesses the accuracy with which a point is projected onto an image given the camera orientations (see Appendix Table S6).
When optimizing sparse clouds in SfM, the accuracy of the sparse cloud is enhanced by removing points that exceed a certain error threshold and re-optimizing camera positions. A balance exists between the stringency of the quality threshold and the number of points retained in the sparse cloud; overly strict criteria can adversely affect the quality of the 3D transformation process by removing too many individual points.
The iterative optimization process starts with conservative filtering of the sparse cloud using maximum threshold values. In our study, these values for RE, RU, and PA were initially set at 1, 50, and 10, respectively. Subsequently, the camera parameters, as well as the camera positions, were optimized in each iteration.
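A hedged outline of this gradual-selection loop, assuming the Metashape Python API (the filter class is Metashape.PointCloud.Filter in 1.x and Metashape.TiePoints.Filter in 2.x), with the initial thresholds quoted above:

```python
import Metashape

chunk = Metashape.app.document.chunk
Filter = Metashape.PointCloud.Filter  # Metashape.TiePoints.Filter in 2.x

# Initial (conservative) thresholds for the three tie-point quality attributes.
criteria = [
    (Filter.ReprojectionError,         1.0),   # RE
    (Filter.ReconstructionUncertainty, 50.0),  # RU
    (Filter.ProjectionAccuracy,        10.0),  # PA
]

for criterion, threshold in criteria:
    f = Filter()
    f.init(chunk, criterion=criterion)
    f.removePoints(threshold)    # drop tie points exceeding the threshold
    chunk.optimizeCameras()      # re-estimate camera parameters and positions
```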
The optimal value for RE is determined through an iterative and dynamic approach. Initially, we set a threshold value based on benchmark thresholds; in this case, we started with a value of 1 for RE. A search was conducted to find the RE value that minimizes the Root Mean Square Error (RMSE) between the check point coordinates and their correspondences in the sparse cloud. It's pivotal to mention that the RE calculation considers the difference between the projection of the point based on adjusted orientation parameters and the measured point's projection coordinates, all normalized by the image scale. Low RE values indicate high precision, meaning adjusting this threshold can significantly enhance the results of the sparse cloud.
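Schematically, the threshold search can be expressed as in the sketch below; `filter_and_optimize` and `checkpoint_rmse` are hypothetical stand-ins for the gradual-selection/camera-optimization step and the comparison against the GNSS check points, not functions from our actual scripts.

```python
import numpy as np

def rmse(reconstructed, measured):
    """Root mean square error between reconstructed and GNSS check-point coordinates."""
    d = np.asarray(reconstructed) - np.asarray(measured)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

def search_re_threshold(filter_and_optimize, checkpoint_rmse,
                        start=1.0, stop=0.3, step=0.1):
    """Test progressively stricter RE thresholds; keep the one minimizing check-point RMSE."""
    best_t, best_err = start, float("inf")
    for t in np.arange(start, stop - 1e-9, -step):
        filter_and_optimize(t)   # remove tie points with RE > t, then re-optimize cameras
        err = checkpoint_rmse()  # would typically wrap rmse(reconstructed, gnss_checkpoints)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err
```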
This process and approach are based on the method detailed in [51, Ludwig et al. 2020], and the sparse cloud quality attributes are further described in Table S6.
- There are some unavoidable errors in reconstructing sparse clouds, so the effect of these errors on the reconstructed model and whether they can be ignored should be demonstrated.
It's undeniable that in 3D reconstruction processes using SfM-MVS, inconsistencies in sparse clouds can impact the final outcomes. However, it's vital to weigh the real impact of these errors and their relevance within the scope of our research.
Our study emphasizes the accuracy and quality of the models produced. As highlighted in the abstract:
- We determined the RMSE by comparing with GNSS control points, aiming to evaluate the sparse cloud optimization during georeferencing.
- We assessed elevation accuracy using the MAE of dense clouds against GNSS measurements and predefined box dimensions.
- We enriched the quality evaluation of the dense cloud with density metrics.
- Moreover, we showcased (Figure B1) how certain misclassifications of terrain points can lead to artifacts in the final CHM.
The error values we've recorded, such as the MAE for elevation and height models, are comparatively low. This suggests that potential inaccuracies in the sparse cloud have had a minimal impact on our 3D models.
To strengthen our analyses, we implemented an evaluation based on Voronoi tessellations with optimized tie points. This enables the identification of areas with more significant reprojection discrepancies. Overlaying these polygons on an orthoimage provides a clear view of zones with pronounced errors. A notable correlation was observed between areas with higher RE errors and those with low texture, like bare soils, shaded areas, and blurred images. However, given our error metrics are low and within acceptable margins for studies of this nature, we argue that errors in the sparse cloud below 0.5 can be overlooked in the context of our applications. A remedy for these challenges would be to ensure image clarity and choose overcast days to minimize shadows.
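For transparency, the Voronoi screening itself can be reproduced with a few lines like the sketch below, assuming the planimetric coordinates of the optimized tie points and their per-point RE values have been exported; the arrays here are randomly generated placeholders, not our data.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(500, 2))  # placeholder tie-point positions (m)
re = rng.gamma(2.0, 0.15, size=500)          # placeholder per-point reprojection errors (px)

vor = Voronoi(xy)      # one Voronoi cell per optimized tie point
flagged = re > 0.5     # cells above the screening value discussed above

# Flagged cells would then be colored by RE and overlaid on the orthoimage
# to highlight low-texture, shaded, or blurred zones.
print(f"{int(flagged.sum())} of {len(xy)} cells exceed the 0.5 threshold")
```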
At the following links you will find the new analyses performed and incorporated into the manuscript:
Voronoi tessellation based on optimised tie points
file:///E:/projects/Article-Dem-UAV/AnalysisR_errors/03-VoronoiRE.html
Iterative filtering of confidence points (brushes and reference box)
file:///E:/projects/Article-Dem-UAV/AnalysisR_errors/02-ConFi-brushe.html
file:///E:/projects/Article-Dem-UAV/AnalysisR_errors/01-ConFi-box.html
An English-language revision was made in the updated manuscript.
We reiterate our gratitude for your feedback. We hope to have addressed your concerns and are open to conducting additional analyses if deemed necessary.
Best
Christian MR
Author Response File: Author Response.pdf
Round 2
Reviewer 4 Report
Comments and Suggestions for Authors
I'm satisfied with the response.