Article

A New Precise Point Positioning with Ambiguity Resolution (PPP-AR) Approach for Ground Control Point Positioning for Photogrammetric Generation with Unmanned Aerial Vehicles

by Hasan Bilgehan Makineci 1,*, Burhaneddin Bilgen 2 and Sercan Bulbul 1
1 Faculty of Engineering and Natural Sciences, Konya Technical University, Konya 42250, Türkiye
2 Academy of Land Registry and Cadastre, Ankara Hacı Bayram Veli University, Ankara 06560, Türkiye
* Author to whom correspondence should be addressed.
Drones 2024, 8(9), 456; https://doi.org/10.3390/drones8090456
Submission received: 30 July 2024 / Revised: 28 August 2024 / Accepted: 29 August 2024 / Published: 2 September 2024

Abstract:
Unmanned aerial vehicles (UAVs) are now widely preferred systems that are capable of rapid mapping and generating topographic models with relatively high positional accuracy. Since the integrated GNSS receivers of UAVs do not allow for sufficiently accurate outcomes either horizontally or vertically, a conventional method is to use ground control points (GCPs) to perform bundle block adjustment (BBA) of the outcomes. Since the number of GCPs to be installed limits the process in UAV operations, an important research question is whether the precise point positioning (PPP) method can be an alternative when the real-time kinematic (RTK), network RTK, and post-process kinematic (PPK) techniques cannot be used to measure GCPs. This study introduces a novel approach using precise point positioning with ambiguity resolution (PPP-AR) for GCP positioning in UAV photogrammetry. For this purpose, the results are evaluated by comparing the horizontal and vertical coordinates obtained from the 24 h GNSS sessions of six calibration pillars in the field and the horizontal length differences obtained by electronic distance measurement (EDM). Bartlett's test is applied to statistically assess the accuracy of the results. The results indicate that the coordinates obtained from a two-hour PPP-AR session show no significant difference from those acquired in a 30 min session, demonstrating PPP-AR to be a viable alternative for GCP positioning. Therefore, the PPP technique can be used for the BBA of GCPs to be established for UAVs in large-scale map generation. However, four or more GCPs should be selected, distributed homogeneously over the study area.
Keywords:
DEM; orthomosaic; PPP-AR; UAV

1. Introduction

Nowadays, unmanned aerial vehicles (UAVs) are frequently preferred for generating photogrammetric outcomes. One of the reasons for this is their ability to quickly generate topographic models of large areas with relatively high spatial accuracy [1,2]. Topographic models can be categorized as the digital elevation model (DEM), the digital terrain model (DTM) and the digital surface model (DSM). In addition, outcomes traditionally prepared by classical geodetic surveys (3D drawings, maps, etc.) can also be generated faster and more easily with UAVs [3,4,5]. Alongside their advantages, such as speed, convenience, and the ability to access places from the air that are inaccessible to humans, UAVs also have disadvantages, such as being affected by atmospheric conditions and being unable to achieve accuracy below the cm level. Improving the positional accuracy of the outcomes generated by UAVs is therefore an important research topic.
The integrated global navigation satellite system (GNSS) receivers of UAVs cannot deliver sufficiently accurate outcomes, either horizontally or vertically [6,7]. Bundle block adjustment (BBA) is the technique most commonly applied to the outcomes generated by the photogrammetric method. For the adjusted block accuracy to be high, points with known geodetic coordinates are needed, and using ground control points (GCPs) for the BBA is the traditional approach. However, the number of GCPs to be established limits the duration of the process in UAV operations. An important research topic is whether precise point positioning (PPP) can be an alternative for UAVs, mainly when geodetic position measurements cannot be performed with the real-time kinematic (RTK), network RTK, and post-process kinematic (PPK) techniques [8]. Polat and Uysal [9] compared airborne LIDAR and UAV-based DEMs and found that the vertical accuracy of the UAV-based DEM was ±15.7 cm. Martínez-Carricondo et al. [10] investigated the accuracy of UAV-photogrammetric mapping based on the GCP distribution and reported that the best accuracy was achieved by placing GCPs at the edge of the study area. Manfreda et al. [11] suggested that the internal accuracy of the UAV-based model can reach 0.2 cm horizontally and 4 cm vertically. Yu et al. [12] analyzed the model accuracy and reported it to be approximately 0.90 m. Zimmerman et al. [13] analyzed the effect of the UAV flight height and GCPs on the model accuracy in a selected application area along a coastline and achieved accuracies ranging from 3 to 18 cm. Famiglietti et al. [14], Žabota and Kobal [15], Liu et al. [16], Martínez-Carricondo et al. [17] and Hayamizu and Nakata [18] investigated the accuracy of models generated based on the RTK/PPK techniques. Recent studies show that the relative model accuracy obtained in RTK studies is between 0.5 m and 2.5 m [19,20], while the relative model accuracy in PPK studies is between 2.45 cm and 3.75 cm [19,21].
For the models obtained using GCPs surveyed with network RTK, horizontal positional accuracies of 8–11 cm and vertical positional accuracies of 9–74 cm were reported [19,22,23] (Table 1).
The scientific studies mentioned above show that the RTK/PPK method is mainly preferred, and there are limited studies exploring PPP-AR as an alternative method for UAV photogrammetry, highlighting the need for further research in this area. This study proposes a new approach by investigating the contribution of the PPP technique to the model accuracy in terms of the positioning of GCPs, which is the main difference from the existing literature. Another novel aspect of this study is the determination of the optimal GNSS session duration by testing the PPP technique at different observation times. The PPP technique has the advantages of being implementable with free online or open-source software, requiring only a single GNSS receiver and not requiring a Global System for Mobile Communications (GSM) connection when surveying in the field. Its only disadvantage is the long convergence time required to achieve centimeter-level position accuracy [24,25,26,27]. Since PPK is available on only a small number of UAVs and PPK decoding requires paid software, network RTK does not work everywhere due to the need for a GSM network, and users may not have enough GNSS receivers for classical RTK, the PPP technique can be proposed as an alternative. Whether this proposed technique is suitable for UAVs has received limited attention in scientific studies. This paper addresses this gap by demonstrating the viability and benefits of PPP-AR for GCP positioning.
The aim of this study is to propose a new approach using the PPP technique for GCP coordinate determination in UAV-based mapping. Within this context, different photogrammetric outcomes are generated. The study examines the accuracy of the orthomosaic and DEM created without any GCPs, with four GCPs, and with eight GCPs. The accuracy is assessed by comparing the horizontal and vertical coordinates obtained from 24 h fixed GNSS sessions of six calibration pillars in the field and the horizontal length differences obtained by electronic distance measuring devices (EDMs). Bartlett's test is applied to statistically assess the accuracy of the results. The results show that the coordinates obtained with the two-hour session do not differ significantly in positional accuracy from those obtained with the half-hour session. In this context, the PPP technique can be used to determine the coordinates of the GCPs for the BBA of UAV outcomes in large-scale map generation.

2. Method and Data

2.1. PPP Method

In the PPP method, the positions of the points can be determined with the help of appropriate mathematical models depending on the visible satellite constellation. The GNSS observation equations for the code P and the carrier phase observation (Φ) at the i-th signal frequency are as follows:
$$P_i^G = \rho^G + c\,dt^G - c\,dT + d_{orb}^G + d_{ion,P_i}^G + d_{trop}^G + \varepsilon_{P_i}^G \quad (1)$$
$$\Phi_i^G = \rho^G + c\,dt^G - c\,dT + d_{orb}^G + d_{ion,\Phi_i}^G + d_{trop}^G + d_{\Phi_i}^G + \lambda_i^G N_i^G + \varepsilon_{\Phi_i}^G \quad (2)$$
In Equations (1) and (2), the superscript G and subscript i denote the GNSS satellites and the signal frequency being used; $P_i^G$ and $\Phi_i^G$ represent the measured pseudo-range and carrier phase range; $dt^G$ and $dT$ refer to the receiver and satellite clock biases; $\rho$ is the geometric distance in meters; $d_{ion,P_i}^G$ and $d_{ion,\Phi_i}^G$ are the ionospheric corrections for the code and phase observations, respectively; $c$ is the speed of light in a vacuum; $d_{orb}^G$ is the satellite orbit correction in meters; $d_{trop}^G$ is the slant tropospheric delay in meters; $\lambda$ is the wavelength of the signal; $N$ is the ambiguity in meters; $d_{\Phi_i}^G$ denotes the combined phase correction term for the phase center offsets and variations, the site displacements and the phase wind-up effect; and $\varepsilon_{P_i}^G$ and $\varepsilon_{\Phi_i}^G$ refer to the measurement noise and other unmodeled errors for the observations.
In practice, before using the above equations, the observations need to be corrected to eliminate satellite orbit and clock errors using IGS’s precise orbit and clock products. Moreover, the traditional PPP algorithm is based on a linear combination of dual-frequency carrier phase and code observations to eliminate the first-order ionospheric effect. As a result, the ionosphere-free (subscript IF) observation equations for the code and phase observations of the GPS/GLONASS PPP model used in this study are expressed by Chen et al. [28] as follows
$$P_{IF}^G = \rho^G + c\,dT + d_{trop}^G + \varepsilon_{P_{IF}}^G \quad (3)$$
$$\Phi_{IF}^G = \rho^G + c\,dT + d_{trop}^G + \lambda_{IF} N_{IF}^G + b_{r,\Phi,IF} - b_{\Phi,IF}^G + \varepsilon_{\Phi_{IF}}^G \quad (4)$$
$$P_{IF}^R = \rho^R + c\,dT + ISB + d_{trop}^R + \varepsilon_{P_{IF}}^R \quad (5)$$
$$\Phi_{IF}^R = \rho^R + c\,dT + ISB + d_{trop}^R + \lambda_{IF} N_{IF}^R + b_{r,\Phi,IF} - b_{\Phi,IF}^R + \varepsilon_{\Phi_{IF}}^R \quad (6)$$
In Equations (4) and (6), $b_{r,\Phi,IF}$ is the receiver phase bias, $b_{\Phi,IF}$ is the satellite phase bias, $\lambda_{IF}$ is the wavelength of the IF combination, $N_{IF}$ is the IF ambiguity term, and the superscripts G and R represent the GPS and GLONASS satellite constellations. The inter-system bias parameter (ISB) can be written as follows [29,30].
$$ISB = c\,dt^R - c\,dt^G + b_R^{avg} - b^G \quad (7)$$
In Equation (7), $b_R^{avg}$ represents the mean pseudo-range hardware delay for GLONASS; $b^G$ refers to the pseudo-range hardware delay bias for GPS; and $dt^R$ and $dt^G$ denote the GLONASS and GPS satellite clock biases. This model has proven more robust and accurate than the single-system solution in the GPS/GLONASS-combined PPP solution [28]. The workflow scheme of the PPP solution is presented in Figure 1.
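The first-order ionospheric delay scales with the inverse square of the carrier frequency, which is what the ionosphere-free combination above exploits. As a minimal sketch (not the authors' processing code), the following forms the IF combination from hypothetical dual-frequency GPS pseudo-ranges and shows that a simulated first-order ionospheric delay cancels:

```python
# Sketch of the ionosphere-free (IF) combination underlying Equations (3)-(6).
# All numeric values below are illustrative, not real observations.

F1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def iono_free(p1: float, p2: float, f1: float = F1, f2: float = F2) -> float:
    """Ionosphere-free combination of two pseudo-ranges (meters)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Simulate a geometric range plus a first-order ionospheric delay that
# scales as 1/f^2: delay i1 on L1 becomes i1 * (f1/f2)^2 on L2.
rho = 21_500_000.0   # hypothetical geometric distance (m)
i1 = 4.2             # hypothetical L1 ionospheric delay (m)
p1 = rho + i1
p2 = rho + i1 * (F1 / F2) ** 2

p_if = iono_free(p1, p2)
# p_if - rho is at floating-point noise level: the delay is eliminated.
print(p_if - rho)
```

The combination amplifies measurement noise somewhat, which is one reason the convergence time mentioned above matters in practice.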
Scientific GNSS software such as GIPSY OASIS and Bernese can be used to obtain the point coordinates according to the above PPP model. In addition, many online services have been developed to obtain coordinates using the PPP method. These are Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP—https://webapp.geod.nrcan.gc.ca/ (accessed on 30 July 2024)), Trimble CenterPoint RTX Post-Processing Service (Trimble-RTX—https://www.trimblertx.com/ (accessed on 30 July 2024)), magicGNSS PPP (produced by GMV—https://magicgnss.gmv.com/ (accessed on 30 July 2024)), and Automatic Precise Positioning Service (APPS by JPL—https://pppx.gdgps.net/ (accessed on 30 July 2024)).

2.2. Web-Based Online GNSS Post-Processing Services

Scientific and commercial software is available for post-processing GNSS data. However, over the last decade, the popularity of web-based GNSS post-processing services has increased due to their cost-effectiveness, simplicity and time savings. Many web services use the PPP method for GNSS data processing. APPS, CSRS-PPP, magicGNSS and Trimble RTX are commonly used web-based PPP (WB-PPP) services. Thanks to the WB-PPP services, users can receive information, including station coordinates and result reports, by e-mail within a short time after sending the required GNSS data to the services. The advantages of these services are the ease of data upload, being free of charge, the use of similar GNSS products and the need for less maintenance. Table 2 shows information about the CSRS-PPP service used in this study.
CSRS-PPP is a web-based GNSS-processing service introduced by NRCan (Natural Resources Canada) in 2003. Like other web-based processing services, it provides users with a simple interface. The system allows users to obtain solutions of static or kinematic observation data (GPS, GLONASS) collected with single- or dual-frequency receivers in the North American Datum-1983 (NAD83) or the International Terrestrial Reference Frame (ITRF) datum [31,32]. As an additional option, CSRS-PPP allows users to define their own ocean-loading files for their measuring station. While CSRS-PPP provides a web-based assessment service to users, it runs NRCan-PPP software version 3.0 in the background. CSRS-PPP uses the most appropriate of the final, rapid or ultra-rapid satellite ephemeris products [33]. In October 2020, the service was updated to switch from the traditional PPP algorithm to the PPP-AR (ambiguity resolution) algorithm for the evaluation of data collected on or after 1 January 2018. The PPP-AR algorithm estimates the initial phase ambiguity as an integer instead of a float. With this update, the RINEX version 3 (v.3) data format also started to be accepted. With another update in March 2021, CSRS-PPP automatically decimates high-frequency static data to a 30 s recording interval; only dual-frequency static data, including both code and phase measurements where at least 75% of the expected 30 s intervals are available, are affected. As of November 2022, ITRF coordinates are also provided in the IGS20 reference frame (https://webapp.geod.nrcan.gc.ca).

2.3. UAV’s Path Planning and Image Acquisition

The UAV used in this research is a Parrot ANAFI rotary-wing industrial device with an integrated compact camera. This UAV, which does not support the RTK and PPK techniques, has been successfully used in photogrammetric map generation for many years. In previous studies, the geodetic coordinates of the GCPs were measured with the network RTK technique. Generally, to avoid problems, the BBA studies were designed so that at least one GCP corresponds to every 9–10 images, taking into account the flight plan of the area to be studied. However, as the study areas grow, both the difficulty of distributing the GCPs homogeneously and the time lost in determining the point coordinates increase. The RTK-enabled Ebee RTK+ fixed-wing UAV used in other studies has yet to exhibit battery performance that allows it to fly as long as required, due to the power demands of RTK positioning and signal reception. Flight planning for UAVs has been a challenging problem regarding battery performance versus flight time optimization [34]. In addition, due to the lack of adequate infrastructure for PPK, this research was needed to guide anyone working under similar conditions. In order to distinguish the centers of the established GCPs, a ground sampling distance (GSD) of 1.5 cm was used in flight planning. Low-altitude (45 m from the take-off center) and fixed-altitude (the altitude does not change with the terrain) flight planning was carried out so that the centers of the GCPs could be selected in the outcome and the pillar centers distinguished in the model. Since the terrain has no dense detail, the front and side overlaps were set to the default values commonly used in the literature. The general specifications of the UAV and the path-planning details of the research are presented in Table 3.
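The relationship between flight altitude and GSD can be sketched with the standard pinhole formula GSD = (altitude × sensor width) / (focal length × image width). The camera constants below are illustrative placeholders, not the Parrot ANAFI's actual specifications:

```python
# Hedged sketch: ground sampling distance (GSD) from flight parameters.
# Camera values here are hypothetical, chosen only to illustrate the formula.

def gsd_m(altitude_m: float, sensor_width_m: float,
          focal_length_m: float, image_width_px: int) -> float:
    """Ground footprint of one pixel (meters) for a nadir-looking camera."""
    return (altitude_m * sensor_width_m) / (focal_length_m * image_width_px)

# Example: a 45 m flight with an illustrative small-sensor camera
# (6.4 mm sensor width, 4 mm focal length, 4000 px image width).
gsd = gsd_m(altitude_m=45.0, sensor_width_m=6.4e-3,
            focal_length_m=4.0e-3, image_width_px=4000)
print(f"GSD = {gsd * 100:.2f} cm/px")
```

Solving the same formula for altitude is how a target GSD (such as the 1.5 cm used here) is turned into a planned flight height.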

2.4. Photogrammetric Processing of Digital Images

Photogrammetric methods are based on visualization in stereo. Since obtaining information about the terrain from a single image is misleading, photogrammetry is based on generating information from multiple and overlapped images. Image acquisition is performed after path planning for photogrammetric map generation with a UAV. The collected datasets are transferred to the software, and the columns and image positions are checked. Then, the UAV images are aligned. Alignment is the initial step, and aerial triangulation (AT) and the BBA are the main alignment features. In parallel, the software performs automatic camera calibration (interior orientation (IO)). Since UAV cameras are nonmetric, the result of the IO process must be checked manually by the operator [35,36]. A set of camera locations and a sparse point cloud represent the visual outcomes of these processes. Except for the sparse point cloud-based surface reconstruction approach, which is only appropriate for fast estimations, such as the completeness of a dataset, the sparse point cloud, which reflects the outcomes of image alignment, would not be directly employed in subsequent processing. Then, if it is to be used, the AT and BBA process is repeated with the GCP coordinates to obtain a (relatively) high-spatial-accuracy dataset [37,38]. Then, a dense cloud is created by point densification, a mesh model is created by creating triangulated irregular networks (TINs) from the generated data, and DEM and orthomosaic generation are realized as outcomes (Figure 2).
Photogrammetric processing of digital images is based on the mathematical model of structure from motion (SfM). The SfM philosophy is based on matching multiple overlapped images by generating specific feature points such as edges and corners. The stereo mosaic images generated by SfM are reduced to vertical by orthorectification. Thus, orthomosaics are obtained and prepared for photogrammetric analysis [39,40].

2.5. Statistical Analysis and Bartlett’s Test

In this study, the differences between the ground coordinates and the coordinates obtained from the model are determined, and an accuracy analysis is performed to determine the spatial accuracy (horizontal and vertical) of the generated outcomes. The root mean square error (RMSE), the most commonly used statistical measure in accuracy determination, is computed. As shown in Equation (8), the differences between the ground truth coordinates ($\chi_G$) and the coordinates obtained from the photogrammetrically generated 3D model or orthomosaic ($\chi_M$) are taken, and the RMSE is calculated.
$$RMSE = \sqrt{\frac{\sum_{i=1}^{n} \left(\chi_{M_i} - \chi_{G_i}\right)^2}{n-1}} \quad (8)$$
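Equation (8) can be sketched directly; the coordinate lists below are hypothetical values, not the study's measurements:

```python
import math

def rmse(model: list[float], ground: list[float]) -> float:
    """RMSE per Equation (8): sqrt(sum((chi_M_i - chi_G_i)^2) / (n - 1))."""
    if len(model) != len(ground) or len(model) < 2:
        raise ValueError("need two equal-length series with n >= 2")
    n = len(model)
    return math.sqrt(sum((m - g) ** 2 for m, g in zip(model, ground)) / (n - 1))

# Hypothetical model vs. ground coordinates along one axis (meters).
chi_m = [100.02, 250.11, 399.97, 549.95]
chi_g = [100.00, 250.05, 400.00, 550.00]
print(f"RMSE = {rmse(chi_m, chi_g) * 100:.2f} cm")
```

Note the n − 1 divisor from Equation (8); some software instead divides by n, which matters when comparing reported accuracies across tools.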
Bartlett’s test is performed to determine whether there is a significant difference between the accuracies obtained from different models. If there is no statistical difference between the models, the optimal GNSS session duration is the shortest one. Bartlett’s test is used to statistically compare the RMSEs calculated for different session durations, while the null hypothesis is as follows.
$$H_0: E\{m_1^2\} = E\{m_2^2\} = \dots = E\{m_k^2\} = \sigma_0^2 \quad (9)$$
where $m_i^2$ are the variances of the measurement groups with $f_i$ degrees of freedom,

$$M = \left(\sum f_i\right) \ln\frac{\sum f_i m_i^2}{\sum f_i} - \sum f_i \ln m_i^2 \quad (10)$$

$$C = 1 + \frac{1}{3(k-1)} \left( \sum \frac{1}{f_i} - \frac{1}{\sum f_i} \right) \quad (11)$$

and the test statistic is

$$\chi^2 = \frac{M}{C} \quad (12)$$
where M and C are calculated with Equations (10) and (11). The test statistic $\chi^2$ is compared with the critical value $\chi^2_{k-1,1-\alpha}$ taken from the chi-square table at the $1-\alpha$ statistical confidence level with $k-1$ degrees of freedom. If $\chi^2 < \chi^2_{k-1,1-\alpha}$, the null hypothesis is accepted, and it is concluded that the variances of the measurement groups are equal [41,42,43].
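Equations (9)–(12) can be sketched as follows. The variances and degrees of freedom below are hypothetical, and the critical value 5.9914 corresponds to k − 1 = 2 degrees of freedom at α = 0.05:

```python
import math

def bartlett_chi2(variances: list[float], dofs: list[int]) -> float:
    """Bartlett test statistic chi^2 = M / C per Equations (10)-(12)."""
    k = len(variances)
    f_sum = sum(dofs)
    # Pooled variance estimate (sigma_0^2 under H0).
    pooled = sum(f * v for f, v in zip(dofs, variances)) / f_sum
    m = f_sum * math.log(pooled) - sum(f * math.log(v)
                                       for f, v in zip(dofs, variances))
    c = 1 + (1 / (3 * (k - 1))) * (sum(1 / f for f in dofs) - 1 / f_sum)
    return m / c

# Hypothetical RMSE-derived variances (cm^2) for three session durations,
# each with 14 degrees of freedom.
variances = [2.10, 1.95, 2.30]
dofs = [14, 14, 14]
chi2 = bartlett_chi2(variances, dofs)
print(f"chi^2 = {chi2:.4f}, critical value (k-1=2, alpha=0.05) = 5.9914")
print("H0 accepted" if chi2 < 5.9914 else "H0 rejected")
```

When the group variances are identical, M (and hence the statistic) is exactly zero; the statistic grows as the variances diverge.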

2.6. General Workflow of Study

Within the scope of this research, four main steps are carried out. The first step is preprocessing. At this stage, approximate positioning is performed for the GCPs intended to be homogeneously distributed in the study area, the GNSS satellite elevation angles for the days of the UAV flights are checked along with any planned satellite maintenance, and the final flight plan is completed. The second step involves the physical work in the study area: the establishment of the GCPs, the measuring operations, and image acquisition. In the third step, the digital processing of the acquired datasets (defined as post-processing in this research) is completed; the photogrammetric processing of the digital images, together with the geodetic coordinate evaluations, is performed at this stage. The last stage is the statistical validation stage, where the accuracy analysis of the generated data is performed. All the post-processed datasets and measurements are compared horizontally and vertically separately, and the results obtained in different scenarios are subjected to Bartlett's test to determine whether the results are consistent (see Figure 3). If Bartlett's test reveals a significant difference between the scenarios, it is investigated which scenario provides the most accurate result.

2.7. Datasets

Within the scope of this research, the coordinates of eight GCPs, geodetically determined with the GNSS receiver during the in situ measurement process, are presented in Table 4 with two-hour, 60 min and 30 min solutions obtained with the PPP-AR technique using the CSRS-PPP service. In addition, the coordinates in the ITRF2020 datum obtained from the earlier 24 h sessions of the six pillars are used in the research as north, east, and up components (Table 5).

3. Application Results and Discussion

In this study, a 115 m × 820 m (width × length) study area was selected within the campus of Konya Selçuk University (Figure 4). Within the study area, there were six pillar points whose 3D positions had been determined with high precision. These pillars constituted the control baseline used to determine the calibration parameters of EDMs. Eight GCPs were installed to cover the application area homogeneously (Figure 4). The GCPs were measured with the Javad Triumph1 GNSS receiver for two hours, with 30 s recording intervals.
Before starting the measurements, the GCPs were marked in red and white so that they could be easily seen in a single image, as shown in Figure 5a. The GNSS receiver was fixed and leveled on the GCPs with a tripod, and static observations were performed. Figure 5b shows one of the pillars whose coordinates were obtained as a result of the long-term session. By processing the long-term GNSS observations previously made on the pillars, the coordinates are estimated by geodetic network adjustment with the relative GNSS technique. The RMSEs of the estimated coordinates are in the order of mm.
The GNSS measurements performed at the GCPs are processed with the CSRS-PPP software version 3.0 at observation times of 30 min, 60 min and two hours. These durations are chosen to compare the impact of the session duration on the accuracy of the generated topographic models. The outcomes for the study area are generated in three different scenarios: with no GCPs, with four GCPs (3, 9, 11, 21) and with all the GCPs (Figure 6).
The horizontal and vertical RMSEs for each scenario are calculated using Equation (8) (Table 6).
The model accuracies based on the point locations are shown in Table 6. The horizontal RMSE of the model generated with no GCPs is 102.26–105.40 cm, and the vertical RMSE is 335.57–341.46 cm. The model generated with four GCPs has a horizontal RMSE of 1.07–1.32 cm and a vertical RMSE of 14.20–14.86 cm. When all the GCPs are used, the RMSEs rise to 16.39–16.52 cm horizontally and 24.25–26.42 cm vertically. According to Figure 6 and Table 6, the three scenarios' accuracies are significantly different, and the model with the four GCPs selected at the model corners, which has the smallest RMSE, provides the best result. The main difference is due to the presence of grossly inaccurate points among the full set of GCPs, which have a negative impact on the model accuracy. The four selected GCPs are both homogeneously located within the block and have the lowest positional errors; these choices also affect the overall model accuracy. Because the image coordinates, camera calibration values, and ground coordinate values are all used in the BBA, even one or two grossly inaccurate coordinate values spread their error across the entire model [35,36,37,38].
Bartlett’s test is used to statistically compare the RMSEs of the models generated with GCPs at different session durations. Three horizontal and three vertical RMSEs are compared for all the models generated using 0-, 4- and 8-GCPs. Using Equations (9)–(12), χ 2 test statistics are calculated and shown in Table 7.
The values in Table 7 show Bartlett's test statistics for the RMSEs calculated by comparing the coordinates taken from the generated models with the PPP-2 h, PPP-1 h and PPP-30 min coordinates. The calculated test statistics are smaller than the critical value (5.9914) and are therefore not significant. The conclusion to be drawn from this is that the coordinate accuracies of the models at the three session durations are consistent with each other. In other words, when the PPP technique is used for the GCPs, there is no significant difference between observing for 30 min and observing for 2 h. However, when the number of GCPs changes, the accuracy of the generated models differs significantly, and the best accuracy is achieved with 4 GCPs. This is due to the fact that there are gross inaccuracies among the full set of GCPs that affect the model accuracy. To analyze the model accuracies in more detail, the control–baseline lengths are calculated using the pillar coordinates from different models and compared with the known lengths.
When Figure 7 is evaluated, it is seen that the length differences in the model generated without a GCP vary between 16.6 and 185.2 cm. Moreover, 53% of the differences are greater than 91.6 cm. Differences 1–3, 1–4, 1–5, 1–6, 2–4, 2–5, 2–6, 3–5 and 3–6 are relatively large. The magnitude of these differences is thought to be due to the coarse errors in the coordinates of pillars 4, 5 and 6 taken from the model. The differences 1–2, 2–3, 3–4, 4–5, 4–6 and 5–6 are smaller than 59.8 cm, which are fairly good values for this model, whose average accuracy is 1 m. If Table 6 and Figure 7 are evaluated together, it can be said that while the horizontal accuracy of the model is approximately 1 m, almost half of the differences are 1 m or larger. This is due to the low accuracy of the UAV's internal GNSS receiver and the nonmetric nature of the UAV's integrated camera.
Figure 8 shows that the differences for the four GCPs range from −12.2 to 22.6 cm. In addition, 93% of the differences are 10 cm or less. Differences 1–5, 1–6, 2–5, 2–6, 3–5, 3–6, 4–5, 4–6, 5–6 are relatively large. All the differences between pillars 5 and 6 are large due to the fact that these pillars are on the edge of the block. The differences of 2–3, 2–4 and 3–4 in the middle of the selected GCPs are 0.2–3.4 cm. These findings indicate that when a length in the middle of the GCPs is measured from the model, accurate results comparable to ground measurements can be obtained. If a low-cost UAV and a GNSS receiver are used to generate the model by selecting four GCPs homogeneously distributed in the region, differences below 10 cm are found to be achievable.
For the model generated using all the GCPs, the differences between the known lengths and the measured lengths range between −25.8 and 20.2 cm for 30 min, −25.1 and 18.6 cm for 60 min, and −25.1 and 19.1 cm for 2 h (Figure 9). The differences 1–2, 1–4, 1–6, 2–5, 3–4, 4–5 and 5–6 are greater than 10 cm at 30, 60 and 120 min. The main reason for this is that pillars 1, 5 and 6 are located in a region outside the GCPs; the other reason is that coarse errors are spread over the modelled coordinates of pillars 3 and 4. The fact that 53.3% of the differences are smaller than 10 cm shows that the generated model is more accurate than the model with no GCPs and less accurate than the model generated with four GCPs. The RMSEs of the differences calculated using all the GCPs are ±13.3 cm for the 30 min measurements, ±13.0 cm for the 60 min measurements, and ±12.8 cm for the 2 h measurements. The RMSEs calculated for the different session durations using all the GCPs are statistically consistent. In addition, these RMSEs show that the generated models can provide outstanding results at the three PPP session durations. With a low-cost UAV with a nonmetric camera, the average ±13.0 cm accuracy shows that topographic models suitable for length measurements can be obtained. In addition to the horizontal distances, the pillar heights from different models and the known heights are also compared in this study. The differences are presented in Figure 10, Figure 11 and Figure 12.
When Figure 10 is evaluated, it is seen that the differences between the heights taken from the model generated without a GCP and the known heights vary between −532.5 and 394.7 cm. Differences 1–5, 2–4 and 5–6 are less than 76 cm, while all the other differences are greater than 100 cm. This is due both to the inability of the internal GNSS receiver to provide sufficient accuracy and to the lower overall accuracy of the GNSS method in height compared to the horizontal. When the vertical accuracies in Table 6 and Figure 10 are considered together, 47% of the differences are around 300 cm and the vertical RMSEs are ±335.57–±341.46 cm. These values show that an accurate height difference cannot be obtained from the model generated without using any GCP.
Figure 11 shows that the differences for four GCPs range from −270.7 to 329.5 cm. Differences 1–2, 1–3, 1–4, 1–5, 2–3, 2–4, 2–6, 3–5, 3–6, 4–5 and 4–6 are larger than 10 dm. The remaining ones account for 27% of all the differences and are between 4.4 and 7.1 dm. These values show that the height differences determined from the model are more accurate when four GCPs are used than when no GCPs are used. However, the model cannot provide precise height information, as the differences are in the order of decimeters.
Using all the GCPs, the differences between the known heights and the measured heights range between −39.29 and 32.94 cm at 30, 60 and 120 min. The differences 1–2, 1–3, 1–4, 2–4, 2–5, 2–6, 3–4, 3–5, 3–6, 4–5 and 4–6 are greater than 10 cm at 30, 60 and 120 min. The differences 1–5, 1–6, 2–3 and 5–6 are between 1.9 and 6.3 cm. Precise height differences at the cm level could be obtained for only 26% of the differences. This is mainly due to the fact that the BBA propagates the errors of points with coarse errors to other points. Nevertheless, the most accurate results in terms of the height differences are obtained in the models generated using all the GCPs. While the differences in Figure 10 and Figure 11 are in the decimeter range, differences in the centimeter range are achieved in Figure 12. The RMSEs of the differences calculated using all the GCPs are ±20.5 cm for 30 min, ±20.5 cm for 60 min and ±20.77 cm for 2 h. The vertical RMSEs calculated for the different session durations using all the GCPs are statistically consistent (Figure 12). UAVs are preferred for generating large-scale maps because they provide fast results, and it is usual to obtain an accuracy of dm and above in outcomes generated without the use of a GCP. This is because the integrated GNSS systems determine their position as ordinary differential GNSS (D-GNSS) receivers. Studies in the literature with D-GNSS also show that the positioning accuracy can be in the order of dm [44,45,46,47]. The errors obtained as a result of the accuracy analysis allow the generation of large-scale maps from the planned outcomes with a relative accuracy that can be considered sufficient.
In addition, to make the accuracy analysis easier to interpret, the differences between the lengths taken from the different models and the known lengths measured between the control points are shown in box plots.
When Figure 13 is evaluated, it is seen that the differences between the known lengths and the lengths taken from the models generated without any GCP are normally distributed. There are no outliers, and the mean is 92.5 cm. The maximum difference is 185.2 cm, and the minimum is 16.6 cm. Seven of the differences are above the mean, and eight are below it. This shows that statistical calculations and inferences based on these differences can be made reliably.
Figure 14 shows that the differences for the four GCPs are again normally distributed, with a mean of 1.8 cm. The maximum difference is 22.63 cm, and the minimum is −12.16 cm. Seven of the differences are above the mean, and eight are below it, so statistical inferences based on these differences can likewise be drawn reliably.
The differences between the known lengths and the lengths taken from the models generated with the 30, 60 and 120 min measurements using all the GCPs are normally distributed, with no outliers; the mean of the differences is 1.2 cm for the 30 min, 0.9 cm for the 60 min and 0.9 cm for the 2 h measurements (Figure 15). In the 30 min model, the maximum difference is 20.2 cm and the minimum is −25.8 cm; in the 60 min model, 18.6 cm and −25.1 cm; and in the 120 min model, 19.1 cm and −25.1 cm. In all three scenarios, the ranges of the differences are almost equal, showing that reliable statistical inferences can be drawn from these differences.
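The "no outliers" statements for the box plots correspond to the usual 1.5 × IQR (Tukey fence) rule. A small sketch of that check, with illustrative values rather than the paper's data:

```python
import statistics

def iqr_outliers(data):
    """Return values outside the Tukey fences [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < low or x > high]

# Illustrative length differences in cm; 100 plays the role of a gross error.
print(iqr_outliers([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))  # the gross error is flagged
```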
The differences between the heights of the control points taken from the different models and the known heights are also shown in the box plots in Figure 16, Figure 17 and Figure 18.
When Figure 16 is evaluated, it is seen that the differences between the heights taken from the model generated without any GCP and the known heights are normally distributed. There are no outliers, and the mean is −11.8 dm. The maximum difference is 39.5 dm, and the minimum is −53.2 dm. In the box plot, seven of the values are above the mean, and eight are below the mean.
The four control points’ differences are again normally distributed, with a mean of 0.2 dm (Figure 17). The maximum difference is 32.9 dm, and the minimum is −27.1 dm. Seven of the differences in the box plot are above the mean, and eight are below it; there are no outliers.
The differences between the known heights and the heights taken from the models created using all the GCPs are normally distributed for the 30, 60 and 120 min measurements. There are no outliers, and the mean of the differences is −3.4 cm for the 30 min measurements, −3.4 cm for the 60 min measurements, and −3.6 cm for the 2 h measurements (Figure 18). When Figure 16, Figure 17 and Figure 18 are evaluated together, it is seen that reliable height-related statistical parameters can be calculated from these data and reliable inferences can be drawn.
The photogrammetric BBA is not a fully automated process. Many factors directly affect the model accuracy, such as marking the GCP locations on the images (performed manually by the operator), ensuring that the positioning accuracy of the selected GCPs is appropriate, and distributing the GCPs homogeneously within the block when installing them in the field [10]. In addition, the conditions under which the GCP coordinates are obtained (such as how many GNSS satellites are tracked and at what elevation angles), the date the images were taken (which determines the solar activity known as the Kp index), the type of UAV used, the UAV camera, the UAV-integrated systems and the UAV flight planning also affect the model accuracy [13]. For this reason, absolute accuracy comparable to geodetic methods cannot be expected. In previous studies, the vertical accuracy found by comparing DEMs generated from LiDAR and UAV images was ±15.7 cm [9]; another study determined that accuracies of 0.2 cm horizontally and 4 cm vertically could be reached [11]; and a further study found that the model accuracy varied between 0.9 and 10 cm [12]. In this study, it is assumed that 3D elevation models or ortho data whose horizontal RMSE does not exceed ±10 cm and whose vertical RMSE does not exceed ±20 cm are acceptable for researchers and practitioners. The positioning of the GCPs established in the field with PPP sessions of different durations and the accuracy analysis of the models generated with the UAV show that a measurement time of 30 min with the PPP-AR method is sufficient to achieve these accuracies.
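Under the acceptance thresholds assumed above (horizontal RMSE ≤ ±10 cm, vertical RMSE ≤ ±20 cm), the 30 min RMSEs reported in Table 6 can be checked mechanically. A sketch in Python; the helper function and its default limits are illustrative, while the RMSE values are taken from Table 6:

```python
# Horizontal and vertical RMSEs in cm for the 30 min PPP-AR session,
# keyed by the number of GCPs (values from Table 6).
RMSE_30MIN_CM = {0: (105.40, 335.57), 4: (1.32, 14.86), 8: (16.52, 26.42)}

def acceptable(h_rmse_cm, v_rmse_cm, h_limit=10.0, v_limit=20.0):
    """True when both RMSEs fall within the assumed acceptance limits."""
    return h_rmse_cm <= h_limit and v_rmse_cm <= v_limit

for n_gcp, (h, v) in RMSE_30MIN_CM.items():
    print(f"{n_gcp} GCPs: acceptable = {acceptable(h, v)}")
```

Only the four-GCP configuration passes both limits, which matches the conclusion that a 30 min PPP-AR session with four homogeneously distributed GCPs is sufficient.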

4. Conclusions

Nowadays, UAVs have become widely used tools in map generation. The commonly used UAVs have nonmetric camera hardware, and the GNSS receivers integrated into UAVs provide low positional accuracy. Even if compact UAVs are capable of photogrammetric acquisition, the outcomes (such as orthomosaic images or 3D models) cannot be used directly in studies requiring high positional accuracy. For these reasons, points called GCPs, whose coordinates are determined by geodetic methods, are needed in studies requiring high spatial accuracy. Although there are various alternatives (such as PPK, RTK and network RTK) for determining GCP coordinates today, how to obtain GCP coordinates with high spatial accuracy with PPP is a rarely studied research subject. In this study, a 3D model of a 115 m × 820 m study area on the campus of Selçuk University was generated by a compact rotary-wing UAV in three different scenarios and analyzed for statistical accuracy. The hRMSE and vRMSE of each scenario were calculated, statistical hypothesis tests were performed, and the distributions of the differences between the lengths and heights taken from the models and their known values (true errors) were examined in detail. In line with the findings obtained in the study, the following conclusions were reached:
-
For an uninhabited area, 30 min GNSS measurements can be performed, processed with the PPP technique, and then used to map the area with a low-cost UAV.
-
There are no significant differences between the horizontal accuracy of the models obtained by PPP with 30 min, 1 h and 2 h measurements. Similarly, there are no significant differences between the vertical accuracy of these models (Bartlett’s test results confirmed these findings).
-
The user can generate a model by processing 30 min measurements from a GNSS receiver in CSRS-PPP software version 3.0. PPP-AR brings ease of use and provides advantages in obtaining photogrammetric outcomes from UAVs.
-
Without selecting a GCP, a position accuracy of about 1 m in the horizontal direction can be achieved, while selecting at least four GCPs to be homogeneously distributed can achieve a position accuracy of about 1–2 cm in the horizontal direction.
-
Without the GCP, a vertical accuracy of about 3 m is obtained, and about 15 cm vertical accuracy is obtained if four GCPs are used at the corners. An average vertical accuracy of 25 cm can be achieved if eight homogeneously distributed GCPs are selected.
-
Consequently, when a model is generated without GCPs, it is not possible to obtain outcomes whose accuracy is comparable with geodetic techniques. For this reason, for applications requiring high accuracy, it is important to use homogeneously distributed GCPs at the block corners.
-
It is also found that the horizontal and vertical accuracy decreases as the number of images captured decreases.
-
This study demonstrates that PPP-AR is a viable alternative to RTK/PPK for UAV photogrammetry, providing comparable accuracy of 1–2 cm horizontally and about 15 cm vertically. However, practitioners should still select an optimal number of GCPs at the block corners.
-
Generally, PPP-AR significantly improves the convergence time, but a session of about 30 min is still required; this is an important limitation of this study.
A 30 min GNSS session duration currently seems ideal for GCP positioning using PPP-AR; however, this duration may be shortened further. In future studies, the objective will be to optimize PPP to achieve accuracies of 3–5 cm both horizontally and vertically with the use of four GCPs.

Author Contributions

Conceptualization, H.B.M., B.B. and S.B.; methodology, H.B.M., B.B. and S.B.; software, H.B.M., B.B. and S.B.; validation, H.B.M., B.B. and S.B.; formal analysis, H.B.M., B.B. and S.B.; investigation, H.B.M., B.B. and S.B.; resources, H.B.M., B.B. and S.B.; data curation, H.B.M., B.B. and S.B.; writing—original draft preparation, H.B.M., B.B. and S.B.; writing—review and editing, H.B.M., B.B. and S.B.; visualization, H.B.M., B.B. and S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AT, aerial triangulation; BBA, bundle block adjustment; CSRS-PPP, Canadian Spatial Reference System Precise Point Positioning; DEM, digital elevation model; D-GNSS, differential GNSS; DSM, digital surface model; DTM, digital terrain model; EDM, electronic distance measurement; GCP, ground control point; GNSS, global navigation satellite system; GPS, Global Positioning System; GSD, ground sampling distance; GSM, Global System for Mobile Communications; IGS, International GNSS Service; IO, interior orientation; ITRF, International Terrestrial Reference Frame; NAD83, North American Datum-1983; NRCan, Natural Resources Canada; PPK, Post Process Kinematic; PPP, precise point positioning; PPP-AR, precise point positioning with ambiguity resolution; RMSE, root mean square error; RTK, real-time kinematic; SfM, structure from motion; TIN, triangulated irregular network; UAV, unmanned aerial vehicle; WB-PPP, web-based PPP.

References

  1. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Accuracy of digital surface models and orthophotos derived from unmanned aerial vehicle photogrammetry. J. Surv. Eng. 2017, 143, 04016025. [Google Scholar] [CrossRef]
  2. Gomes Pessoa, G.; Caceres Carrilho, A.; Takahashi Miyoshi, G.; Amorim, A.; Galo, M. Assessment of UAV-based digital surface model and the effects of quantity and distribution of ground control points. Int. J. Remote Sens. 2020, 42, 65–83. [Google Scholar] [CrossRef]
  3. Makineci, H.B.; Karabörk, H.; Durdu, A. Comparison of DEM based on Geodetic Methods and Photogrammetric Usage of UAV. Turk. J. Remote Sens. 2020, 2, 58–69. [Google Scholar]
  4. Yildirim, O.; Inal, C.; Bulbul, S.; Bilgen, B. Investigation of Position Accuracy in UAVs. Turk. J. Remote Sens. 2023, 5, 89–96. [Google Scholar] [CrossRef]
  5. Szypuła, B. Accuracy of UAV-based DEMs without ground control points. Geoinformatica 2024, 28, 1–28. [Google Scholar] [CrossRef]
  6. Reinartz, P.; Lehner, M.; Müller, R.; Schroeder, M. Accuracy analysis for DEM and orthoimages derived from SPOT HRS stereo data without using GCP. In Proceedings of the ISPRS 2004, Istanbul, Turkey, 12–23 July 2004; Volume 23, p. 2004. [Google Scholar]
  7. Štroner, M.; Urban, R.; Seidl, J.; Reindl, T.; Brouček, J. Photogrammetry Using UAV-Mounted GNSS RTK: Georeferencing Strategies without GCPs. Remote Sens. 2021, 13, 1336. [Google Scholar] [CrossRef]
  8. Ocalan, T.; Turk, T.; Tunalioglu, N.; Gurturk, M. Investigation of accuracy of PPP and PPP-AR methods for direct georeferencing in UAV photogrammetry. Earth Sci. Inf. 2022, 15, 2231–2238. [Google Scholar] [CrossRef]
  9. Polat, N.; Uysal, M. An Experimental Analysis of Digital Elevation Models Generated with Lidar Data and UAV Photogrammetry. J. Indian Soc. Remote Sens. 2018, 46, 1135–1142. [Google Scholar] [CrossRef]
  10. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Mesas-Carrascosa, F.J.; García-Ferrer, A.; Pérez-Porras, F.J. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 1–10. [Google Scholar] [CrossRef]
  11. Manfreda, S.; Dvorak, P.; Mullerova, J.; Herban, S.; Vuono, P.; Arranz Justel, J.J.; Perks, M. Assessing the Accuracy of Digital Surface Models Derived from Optical Imagery Acquired with Unmanned Aerial Systems. Drones 2019, 3, 15. [Google Scholar] [CrossRef]
  12. Yu, J.J.; Kim, D.W.; Lee, E.J.; Son, S.W. Determining the Optimal Number of Ground Control Points for Varying Study Sites through Accuracy Evaluation of Unmanned Aerial System-Based 3D Point Clouds and Digital Surface Models. Drones 2020, 4, 49. [Google Scholar] [CrossRef]
  13. Zimmerman, T.; Jansen, K.; Miller, J. Analysis of UAS Flight Altitude and Ground Control Point Parameters on DEM Accuracy along a Complex, Developed Coastline. Remote Sens. 2020, 12, 2305. [Google Scholar] [CrossRef]
  14. Famiglietti, N.A.; Cecere, G.; Grasso, C.; Memmolo, A.; Vicari, A. A Test on the Potential of a Low Cost Unmanned Aerial Vehicle RTK/PPK Solution for Precision Positioning. Sensors 2021, 21, 3882. [Google Scholar] [CrossRef] [PubMed]
  15. Žabota, B.; Kobal, M. Accuracy Assessment of UAV-Photogrammetric-Derived Products Using PPK and GCPs in Challenging Terrains: In Search of Optimized Rockfall Mapping. Remote Sens. 2021, 13, 3812. [Google Scholar] [CrossRef]
  16. Liu, X.; Lian, X.; Yang, W.; Wang, F.; Han, Y.; Zhang, Y. Accuracy Assessment of a UAV Direct Georeferencing Method and Impact of the Configuration of Ground Control Points. Drones 2022, 6, 30. [Google Scholar] [CrossRef]
  17. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F. Accuracy assessment of RTK/PPK UAV-photogrammetry projects using differential corrections from multiple GNSS fixed base stations. Geocarto Int. 2023, 38, 2197507. [Google Scholar] [CrossRef]
  18. Hayamizu, M.; Nakata, Y. Accuracy assessment of post-processing kinematic georeferencing based on uncrewed aerial vehicle-based structures from motion multi-view stereo photogrammetry. Geogr. Res. 2024, 62, 194–203. [Google Scholar] [CrossRef]
  19. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.d.J.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. [Google Scholar] [CrossRef]
  20. Jarahizadeh, S.; Salehi, B. A Comparative Analysis of UAV Photogrammetric Software Performance for Forest 3D Modeling: A Case Study Using AgiSoft Photoscan, PIX4DMapper, and DJI Terra. Sensors 2024, 24, 286. [Google Scholar] [CrossRef]
  21. Lu, C.H.; Tsai, S.M.; Wu, M.T.; Lin, D.Y. Developing innovative and cost-effective UAS-PPK module for generating high-accuracy digital surface model. Terr. Atmos. Ocean Sci. 2023, 34, 23. [Google Scholar] [CrossRef]
  22. Yurtseven, H. Comparison of GNSS-, TLS- and Different Altitude UAV-Generated Datasets on the Basis of Spatial Differences. ISPRS Int. J. Geo-Inf. 2019, 8, 175. [Google Scholar] [CrossRef]
  23. Shoab, M.; Singh, V.K.; Ravibabu, M.V. High-Precise True Digital Orthoimage Generation and Accuracy Assessment based on UAV Images. J. Indian Soc. Remote Sens. 2022, 50, 613–622. [Google Scholar] [CrossRef]
  24. Lou, Y.; Zheng, F.; Gu, S.; Wang, C.; Gou, H.; Feng, Y. Multi-GNSS precise point positioning with raw single-frequency and dual-frequency measurement models. GPS Solut. 2016, 20, 849–862. [Google Scholar] [CrossRef]
  25. Abd Rabbou, M.; El-Rabbany, A. Performance analysis of precise point positioning using multi-constellation GNSS: GPS, GLONASS, Galileo and BeiDou. Surv. Rev. 2017, 49, 39–50. [Google Scholar] [CrossRef]
  26. Abou-Galala, M.; Rabahb, M.; Kaloopa, M.; Zidana, Z.M. Assessment of the accuracy and convergence period of Precise Point Positioning. Alex. Eng. J. 2018, 57, 1721–1726. [Google Scholar] [CrossRef]
  27. Bulbul, S.; Bilgen, B.; Inal, C. The performance assessment of Precise Point Positioning (PPP) under various observation conditions. Measurement 2021, 171, 108780. [Google Scholar] [CrossRef]
  28. Chen, J.; Zhang, Y.; Wang, J.; Yang, S.; Dong, D.; Wang, J.; Qu, W.; Wu, B. A simplified and unified model of multi-GNSS precise point positioning. Adv. Space Res. 2015, 55, 125–134. [Google Scholar] [CrossRef]
  29. Li, B.; Mi, J.; Zhu, H.; Gu, S.; Xu, Y.; Wang, H.; Yang, L.; Chen, Y.; Pang, Y. BDS-3/GPS/Galileo OSB Estimation and PPP-AR Positioning Analysis of Different Positioning Models. Remote Sens. 2022, 14, 4207. [Google Scholar] [CrossRef]
  30. Mu, X.; Wang, L.; Shu, B.; Tian, Y.; Li, X.; Lei, T.; Huang, G.; Zhang, Q. Performance Analysis of Multi-GNSS Real-Time PPP-AR Positioning Considering SSR Delay. Remote Sens. 2024, 16, 1213. [Google Scholar] [CrossRef]
  31. Choy, S.; Zhang, S.; Lahaye, F.; Héroux, P. A Comparison Between GPS-only and Combined GPS+GLONASS Precise Point Positioning. J. Spat. Sci. 2013, 58, 169–190. [Google Scholar] [CrossRef]
  32. Dawidowicz, K.; Krzan, G. Coordinate Estimation Accuracy of Static Precise Point Positioning Using on-line PPP Service, a Case Study. Acta Geod. et Geophys. 2014, 49, 37–55. [Google Scholar] [CrossRef]
  33. Erol, T. Performance of the web-based CSRS-PPP application with different satellite systems (in Turkish). J. Geod. Geoinf. 2021, 8, 41–56. [Google Scholar] [CrossRef]
  34. Makineci, H.B.; Karabörk, H.; Durdu, A. ANN estimation model for photogrammetry-based UAV flight planning optimisation. Int. J. Remote Sens. 2022, 43, 5686–5708. [Google Scholar] [CrossRef]
  35. Huang, W.; Jiang, S.; Jiang, W. Camera Self-Calibration with GNSS Constrained Bundle Adjustment for Weakly Structured Long Corridor UAV Images. Remote Sens. 2021, 13, 4222. [Google Scholar] [CrossRef]
  36. Kılınç Kazar, G.; Karabörk, H.; Makineci, H.B. Evaluation of test field-based calibration and self-calibration models of UAV integrated compact cameras. J. Indian Soc. Remote Sens. 2022, 50, 13–23. [Google Scholar] [CrossRef]
  37. Lalak, M.; Wierzbicki, D.; Kędzierski, M. Methodology of Processing Single-Strip Blocks of Imagery with Reduction and Optimization Number of Ground Control Points in UAV Photogrammetry. Remote Sens. 2020, 12, 3336. [Google Scholar] [CrossRef]
  38. Maune, D.; Karlin, A. Understanding Aerial Triangulation. Photogramm. Eng. Remote Sens. 2021, 87, p319 xx. [Google Scholar] [CrossRef]
  39. Murtiyoso, A.; Grussenmeyer, P.; Börlin, N.; Vandermeerschen, J.; Freville, T. Open Source and Independent Methods for Bundle Adjustment Assessment in Close-Range UAV Photogrammetry. Drones 2018, 2, 3. [Google Scholar] [CrossRef]
  40. Slocum, R.K.; Parrish, C.E. Simulated Imagery Rendering Workflow for UAS-Based Photogrammetric 3D Reconstruction Accuracy Assessments. Remote Sens. 2017, 9, 396. [Google Scholar] [CrossRef]
  41. Lim, T.S.; Loh, W.Y. A Comparison of Tests of Equality of Variances. Comput. Stat. Data Anal. 1996, 22, 287–301. [Google Scholar] [CrossRef]
  42. Ott, R.L.; Longnecker, M. An Introduction to Statistical Methods and Data Analysis, 7th ed.; Cengage Learning: Boston, MA, USA, 2016; pp. 400–435. [Google Scholar]
  43. Zar, J.H. Biostatistical Analysis, 4th ed.; Prentice Hall Inc.: Saddle Rive, NJ, USA; Simon and Schuster: New York, NY, USA, 1999; pp. 177–206. [Google Scholar]
  44. Weng, D.; Gan, X.; Chen, W.; Ji, S.; Lu, Y. A New DGNSS Positioning Infrastructure for Android Smartphones. Sensors 2020, 20, 487. [Google Scholar] [CrossRef] [PubMed]
  45. Weng, D.; Ji, S.; Lu, Y.; Chen, W.; Li, Z. Improving DGNSS Performance through the Use of Network RTK Corrections. Remote Sens. 2021, 13, 1621. [Google Scholar] [CrossRef]
  46. Swaminathan, H.B.; Sommer, A.; Becker, A.; Atzmueller, M. Performance Evaluation of GNSS Position Augmentation Methods for Autonomous Vehicles in Urban Environments. Sensors 2022, 22, 8419. [Google Scholar] [CrossRef] [PubMed]
  47. Weng, D.; Chen, W.; Lu, Y.; Ji, S.; Luo, H.; Cai, M. Global DGNSS service for mobile positioning through public corrections. Adv. Space Res. 2023, 72, 4402–4412. [Google Scholar] [CrossRef]
Figure 1. Workflow scheme of the PPP solution.
Figure 2. Outcomes generated with the photogrammetric process and generation process steps: (a) image acquisition; (b) image processing; (c) sparse cloud; (d) dense cloud; (e) mesh model; (f) DEM; (g) orthomosaic; and (h) sample of orthomosaic.
Figure 3. General workflow scheme.
Figure 4. Study area and distribution of GCPs and pillars.
Figure 5. (a) GCPs; and (b) pillars.
Figure 6. The outcomes: (A) orthomosaic with no GCP; (B) orthomosaic with four GCPs; (C) orthomosaic with eight GCPs; (D) DEM with no GCP; (E) DEM with four GCPs; and (F) DEM with eight GCPs.
Figure 7. Differences between known and measured distances without GCPs.
Figure 8. Differences between known and measured distances with four GCPs.
Figure 9. Differences between known and measured distances with all the GCPs.
Figure 10. Differences between known and measured heights without GCPs.
Figure 11. Differences between known and measured heights with four GCPs.
Figure 12. Differences between known and measured heights with all the GCPs.
Figure 13. Box plot for distance differences without GCPs.
Figure 14. Box plot for distance differences with four GCPs.
Figure 15. Box plot for distance differences with all the GCPs.
Figure 16. Box plot for height differences without GCPs.
Figure 17. Box plot for height differences with four GCPs.
Figure 18. Box plot for height differences with all the GCPs.
Table 1. Results of related studies in terms of the RMSE.

| Study | Number of GCPs | RMSEy,x (cm) | RMSEh (cm) |
| --- | --- | --- | --- |
| Polat and Uysal [9] | 7 | Not Available | ±15.7 |
| Martínez-Carricondo et al. [10] | 24 | ±3.29 | ±5.92 |
| Manfreda et al. [11] | 16 | ±0.20 | ±4.00 |
| Yu et al. [12] | 3–18 | ±90.70 | ±410.10 |
| Zimmerman et al. [13] | 5 | ±9.60 | ±15.10 |
| Žabota and Kobal [15] | 14 | ±3.90 | ±6.50 |
| Martínez-Carricondo et al. [17] | 5 | ±3.60 | ±2.50 |
| Hayamizu and Nakata [18] | 9 | ±21.50 | ±10.30 |
| Lu et al. [21] | 12 | ±4.70 | ±2.45 |
| Yurtseven [22] | 15 | ±0.30 | ±9.30 |
Table 2. General information about CSRS-PPP.

| Item | Information |
| --- | --- |
| Version | v3.0 |
| Solution Type | PPP-AR |
| Number of Benchmarks | - |
| Organization | Natural Resources Canada (NRCan) |
| Reference Frame | NAD83/ITRF2020 |
| Antenna Information | IGS20 |
| Satellite Orbit and Clock Offsets | IGS Final/Rapid/Ultra-Rapid |
| Elevation Mask | Min. 10° |
| GNSS System | GPS/GLO |
| Software | CSRS-PPP |
| Mode | Static/Kinematic |
| Frequency | Single/Dual |
| Method of Data Upload | Via web page |
| Method of Obtaining Results | E-mail |
| Data Format | RINEX/Hatanaka |
| Latest Update | 27 November 2022 |
Table 3. Specs of the UAV and path planning.

| Item | Information |
| --- | --- |
| UAV | Parrot ANAFI |
| Altitude | 45 m |
| Overlaps | 75–65% (front and side) |
| Resolution of Images | 72 dpi |
| Image Size | 4608 px × 3456 px |
| Focal Length | 1/2.4 inch |
| Image Acquisition Angle | Nadir (90°) |
| Number of Images | 390 |
| Flight Time | 30 min 42 s |
| Path | 5013 m |
| Block Area | 115 m × 820 m |
Table 4. GCP coordinates from in situ measurements.

| ID | 120 min PPP-AR (North / East / Up) | 60 min PPP-AR (North / East / Up) | 30 min PPP-AR (North / East / Up) |
| --- | --- | --- | --- |
| GCP3 | 4,210,754.728 / 456,170.288 / 1164.274 | 0.731 / 0.303 / 0.353 | 0.742 / 0.311 / 0.379 |
| GCP9 | 4,210,913.548 / 455,658.599 / 1190.360 | 0.619 / 0.655 / 0.331 | 0.679 / 0.698 / 0.271 |
| GCP11 | 4,210,687.401 / 456,236.217 / 1161.070 | 0.414 / 0.227 / 0.670 | 0.420 / 0.231 / 0.189 |
| GCP12 | 4,210,850.045 / 455,864.848 / 1178.313 | 0.046 / 0.862 / 0.873 | 0.053 / 0.867 / 0.398 |
| GCP16 | 4,210,806.442 / 456,050.370 / 1169.165 | 0.445 / 0.382 / 0.725 | 0.456 / 0.389 / 0.229 |
| GCP21 | 4,210,857.666 / 455,687.642 / 1181.720 | 0.683 / 0.651 / 0.313 | 0.683 / 0.652 / 0.851 |
| GCP22 | 4,210,763.453 / 455,932.517 / 1171.341 | 0.465 / 0.523 / 0.916 | 0.468 / 0.525 / 0.434 |
| GCP25 | 4,210,720.725 / 456,097.877 / 1164.401 | 0.735 / 0.887 / 0.004 | 0.738 / 0.891 / 0.528 |
Table 5. Pillar coordinates from in situ measurements (24 h static GNSS measurement).

| Pillar ID | North (m) | East (m) | Up (m) |
| --- | --- | --- | --- |
| 1 | 4,210,709.443 | 456,255.374 | 1162.564 |
| 2 | 4,210,732.119 | 456,177.415 | 1164.307 |
| 3 | 4,210,783.224 | 456,001.847 | 1170.762 |
| 4 | 4,210,848.333 | 455,777.635 | 1180.977 |
| 5 | 4,210,885.163 | 455,651.031 | 1187.424 |
| 6 | 4,210,893.588 | 455,621.988 | 1188.811 |
Table 6. Horizontal and vertical RMSEs (cm).

| Number of GCPs | RMSE | PPP-2 h | PPP-1 h | PPP-30 min |
| --- | --- | --- | --- | --- |
| 0 | H | 102.26 | 104.06 | 105.40 |
| 0 | V | 341.46 | 336.05 | 335.57 |
| 4 | H | 1.07 | 1.14 | 1.32 |
| 4 | V | 14.20 | 14.74 | 14.86 |
| 8 | H | 16.39 | 16.54 | 16.52 |
| 8 | V | 24.45 | 24.69 | 26.42 |
Table 7. Test statistics.

| Number of GCPs | CSRS-PPP H | CSRS-PPP V |
| --- | --- | --- |
| 0 | 0.00535 | 0.00218 |
| 4 | 0.28145 | 0.01352 |
| 8 | 0.00059 | 0.04212 |

χ²(2, 0.95) = 5.9914.
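The test statistics in Table 7 come from Bartlett's test for homogeneity of variances, compared against the critical value χ²(2, 0.95) = 5.9914. A self-contained sketch of the statistic in pure Python; the three residual groups below are placeholders, not the paper's data:

```python
import math
import statistics

def bartlett_statistic(*groups):
    """Bartlett's chi-squared statistic for equality of k group variances."""
    k = len(groups)
    ns = [len(g) for g in groups]
    N = sum(ns)
    s2 = [statistics.variance(g) for g in groups]             # sample variances
    sp2 = sum((n - 1) * v for n, v in zip(ns, s2)) / (N - k)  # pooled variance
    num = (N - k) * math.log(sp2) - sum((n - 1) * math.log(v) for n, v in zip(ns, s2))
    corr = 1 + (sum(1 / (n - 1) for n in ns) - 1 / (N - k)) / (3 * (k - 1))
    return num / corr

# Placeholder residuals for the 30 min, 1 h and 2 h sessions.
g30 = [1.2, -0.8, 0.5, -1.1, 0.9]
g60 = [0.7, -0.9, 1.0, -0.6, 0.4]
g120 = [0.8, -1.0, 0.6, -0.7, 0.5]
stat = bartlett_statistic(g30, g60, g120)
print(stat < 5.9914)  # True here: variances not significantly different
```

A statistic below the critical value, as in all rows of Table 7, means the null hypothesis of equal variances across session durations cannot be rejected at the 0.05 level.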

Share and Cite

MDPI and ACS Style

Makineci, H.B.; Bilgen, B.; Bulbul, S. A New Precise Point Positioning with Ambiguity Resolution (PPP-AR) Approach for Ground Control Point Positioning for Photogrammetric Generation with Unmanned Aerial Vehicles. Drones 2024, 8, 456. https://doi.org/10.3390/drones8090456

