Article

Polymodal Method of Improving the Quality of Photogrammetric Images and Models

by
Pawel Burdziakowski
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza 11-12, 80-233 Gdansk, Poland
Energies 2021, 14(12), 3457; https://doi.org/10.3390/en14123457
Submission received: 16 May 2021 / Revised: 28 May 2021 / Accepted: 8 June 2021 / Published: 11 June 2021
(This article belongs to the Special Issue Deep Learning-Based System for Thermal Images)

Abstract

Photogrammetry using unmanned aerial vehicles has become very popular and is already in common use. The most frequent photogrammetric products are an orthoimage, a digital terrain model and a 3D object model. When executing measurement flights, lighting conditions may be unsuitable, and the flight itself may be fast and not very stable. As a result, noise and blur appear in the images, and the images themselves can have too low a resolution to satisfy the quality requirements for a photogrammetric product. In such cases, the obtained images are useless or will significantly reduce the quality of the end-product of low-level photogrammetry. A new polymodal method of improving measurement image quality has been proposed to avoid such issues. The method discussed in this article removes degrading factors from the images and, as a consequence, improves the geometric and interpretative quality of a photogrammetric product. The author analyzed 17 different image degradation cases, developed 34 models based on degraded and recovered images, and conducted an objective analysis of the quality of the recovered images and models. The result was a significant improvement in the interpretative quality of the images themselves and a better model geometry.

1. Introduction

Photogrammetry using unmanned aerial vehicles, understood as a measurement tool, combines the possibility of ground, air and even suborbital photogrammetric measurements [1], while being a low-cost competitor to conventional aerial photogrammetry and satellite observation. The well-established photogrammetric techniques and technologies, already used with classic aircraft, were quickly adapted to low-level solutions with unmanned aerial vehicles (UAVs). Acquiring data from a low level using UAVs—although in principle the same process as in classic aerial photogrammetry—generates new problems, specific to UAV photogrammetry, due to obvious differences in equipment and flight capabilities [2].
Commercial UAVs used in photogrammetry have a low maximum take-off mass (MTOM) of up to 25 kg, although the most commonly used models weigh up to 5 kg. The limited payload capacity and restrictions on UAV mass force a reduction in the weight of all components carried by the vehicle. Miniaturization involves, among others, global navigation satellite system (GNSS) receivers, inertial navigation systems (INS), and optoelectronic devices (visible-light, thermal imaging and multispectral cameras), often making these devices less sophisticated and accurate. The digital cameras used on UAVs are usually small structures. Commercial UAVs usually use integrated cameras with sensors ranging from 1/2.3″ (DJI Mavic Pro) through 1″ (DJI Mavic Pro 2, DJI Phantom 4 Pro) to APS-C (DJI Zenmuse X7) (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China). Such cameras do not utilize the image motion compensation systems used in aerial photogrammetry, such as time-delayed integration (TDI) [3,4], or bright lenses with constant interior orientation parameters and low distortion. Such a situation may lead to a number of errors in the data acquisition process, for example, blur and noise, which affect the quality of photogrammetric processing.
Nowadays, with UAVs so frequently used for measurements during construction projects and for monitoring natural phenomena, data acquisition can sometimes be forced by the schedule of a given project or the uniqueness of an individual natural phenomenon. In such cases, there are often no suitable measurement conditions, lighting is insufficient, and there is little time for the UAV flight itself. Such a forced flight schedule and timing can lead to image degradation. Most frequently, the sensor ISO sensitivity is increased to avoid underexposure, which generates higher noise visible in the images [5]. Limited time forces the operator to fly at higher speeds, which, combined with longer shutter times, generates blur. Such phenomena are particularly intensified with the small CMOS (Complementary Metal-Oxide-Semiconductor) image sensors frequently used in commercial UAVs. As a result, the quality-related requirements of photogrammetric processing might not be satisfied.
The primary determinants of a photogrammetric process are its qualitative requirements. They are usually specified by the end user of the product and can take various forms, e.g., specifications in a given contract, certain minimum official requirements, or adopted standards. In this context, a photogrammetric process can be defined as a set of interconnected activities, the execution of which is necessary to obtain a specific result—the required image quality. The concept of quality has numerous definitions, with one of them defining quality as the adaptability of a process to set requirements. Therefore, reaching the required quality will strictly depend on the main factors of a given process. A factor is defined as a certain process activity impacting quality. The quality of the photogrammetric process can be built on three main pillars (Figure 1) [6]:
  • Procedures—every aspect of the image data collection process that stems from the execution method and its correctness. Within this group, the following process elements can be distinguished: the applied flight plan (altitude, coverage, selected flight path, and camera position), GNSS measurement accuracy, selected time of day, scenery illumination quality, etc.;
  • Technical elements—all technical devices and their quality used to collect data, for instance, technical capabilities and accuracy of the lenses, cameras, stabilization systems, satellite navigation system, receivers, etc.;
  • Numerical methods—the capabilities and characteristics of the algorithms and mathematical methods applied for data processing.
Each of the aforementioned factors significantly impacts image quality, and their skillful balancing and matching to the existing measurement conditions and set requirements enables reaching the assumed objective. Importantly, there is no single path to achieving the required quality. For example, a required ground sampling distance (GSD) for a given image can be obtained by changing the UAV's flight altitude (procedural factor), by changing the camera (technical factor), or by applying numerical methods that increase image resolution, e.g., a super-resolution algorithm [7] (numerical factor), as the sketch below illustrates.
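A minimal sketch of this trade-off, assuming the standard relation GSD = pixel pitch × flight altitude / focal length; the camera parameters below are illustrative assumptions, not the specification of any particular UAV camera.

```python
def gsd_m_per_px(pixel_pitch_um: float, focal_length_mm: float, altitude_m: float) -> float:
    """Ground sampling distance [m/px] = pixel pitch * flight altitude / focal length."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)

base = gsd_m_per_px(pixel_pitch_um=2.4, focal_length_mm=8.8, altitude_m=100.0)

by_altitude = gsd_m_per_px(2.4, 8.8, 50.0)    # procedural factor: halve the flight altitude
by_camera = gsd_m_per_px(1.2, 8.8, 100.0)     # technical factor: a camera with a finer pixel pitch
by_super_resolution = base / 2.0              # numerical factor: x2 super-resolution of the images

print(base, by_altitude, by_camera, by_super_resolution)
```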
Several interesting recent publications address procedural factors. The authors of [8] discussed a new approach to planning a photogrammetric mission, especially in terms of complex scenery. Complex sceneries are ones where the terrain elevation is variable and terrain obstacles and objects are scattered; such terrain requires an unconventional approach to flight planning. Traditional flight plans, well-established and frequently used, are widely discussed by Eisenbeiss et al. in [9]. In the works of [10,11], the authors discuss flight planning procedures for analyzing changes in a coastal zone. The authors of [12] address the issues associated with beach measurements and show a highly interesting procedural factor: the recommended time of day for beach measurements was early morning, and owing to this change in the time of day, the mean error was reduced twofold. The impact of sun position on image quality has been further studied in [13]; its authors also show the effect of forward overlap on the root mean square error (RMSE), while indicating recommended values. The studies in [14,15] present the impact of ground control point (GCP) arrangement and number on image accuracy.
Naturally, photogrammetric processing accuracy also depends on the quality of the equipment used. In terms of technical factors, the greatest impact is that of the quality and type of the UAV navigation, orientation and stabilization systems, camera type, shutter type and lens type [16,17]. UAVs equipped with simple RTK (real-time kinematic) receivers are becoming popular today. Using this navigation sensor significantly improves the accuracy of direct determination of exterior orientation elements [18,19,20,21]. Some authors even state that using RTK receivers on UAVs makes it possible to dispense with GCPs [22]. Vautherin et al. [23] showed how the shutter type affects image quality; global shutters still dominate over cheaper rolling-shutter solutions. The publications [24,25,26] describe the impact of GCP measurement accuracy and arrangement on image quality.
Numerical methods used in processing digital images significantly affect the quality of a photogrammetric image [27]. Examples include non-metric camera calibration algorithms and their impact on the geometry of photogrammetric processing [28,29,30]. New calibration methods are constantly being developed, achieving significant improvements in the geometry of the end products [31]. The authors of [32] presented a new numerical way of improving the geometric quality of single-strip image blocks using the Levenberg–Marquardt–Powell method. The article in [33] discusses a method for eliminating the impact of weather conditions on the quality of photogrammetric images. Some authors also suggest comprehensive solutions to this issue, noting that certain factors significantly impact the quality of a photogrammetric process, and design and construct UAVs with their own calibration and processing algorithms [34]. Neural networks, especially deep models, are also widely used in photogrammetry, for example, in the following methods of improving photogrammetric processing quality [6,7,35,36,37,38,39].
Because data in contemporary photogrammetry have a fully digital form, the algorithms and numerical methods used to process them strongly affect the end result [40]. It can be concluded that each numerical method used within the data processing chain, starting from processing single values of digital sensor pixels [41], through writing them to a memory card or transferring the data to a server, to the full range of software-implemented digital photogrammetry methods, will impact the final result. The aim is therefore to select those methods that lead to the lowest quality losses. In practice, software-implemented numerical methods are already appropriately selected, and the user cannot change them, only certain processing parameters. Furthermore, as observed in research [6], through the application of advanced processing algorithms, modern photogrammetric software is remarkably resistant to image-degrading factors and is able to generate a model, although the final result usually has low geometric quality.
In photogrammetric practice, especially when using UAVs in commercial tasks, there may be situations where the correct selection of procedural and technical factors is insufficient. As a result, achieving the required photogrammetric product quality can be unfeasible. Consequently, the following thesis can be formulated: when UAV images contain typical quality-degrading elements, such as noise, blur, and low resolution, an additional process can be applied to eliminate these factors and hence improve the final quality of a photogrammetric product. This additional process interferes only with the image data, directly prior to their processing; therefore, it does not change the elements of the software itself. Modern image restoration methods were used to confirm this thesis and to develop a new method of improving photogrammetric image quality. These methods were also tested in terms of their impact on image quality, processing, and the final photogrammetric models. The outcome of the conducted research was the development and presentation of new solutions in the field of low-level photogrammetry:
  • the impact of three basic image quality-degrading factors (noise, blur, and low resolution) on the processing in modern photogrammetry software and on the quality of models based on such images was assessed,
  • a polymodal algorithm for improving measurement image quality based on neural networks and numerical methods was developed,
  • image-degrading factors were eliminated, their quality was objectively assessed, and basic photogrammetric products were developed. Models developed from the recovered images, which are images after elimination of degradation factors, were compared with a reference model.

2. Materials and Methods

2.1. Process Description

Figure 2 shows the applied research and data processing process. The process begins with a data acquisition block. Image data were collected using a typical commercial UAV, a DJI Mavic Pro. Unmodified images were used as reference data (ground truth). Copies of the images were then subjected to degradation simulating noise, blur, and low resolution. One dataset with added noise, 8 sets with added blur, and 8 sets with simultaneous blur and reduced resolution were generated. All these images were used to create photogrammetric models. In the next stage, the modified images were subjected to the polymodal image quality improvement method and once again used to develop models. The research process involved comparing image quality at individual processing stages and evaluating the quality of the models generated from these images. The study utilized various software packages and environments, which are shown in Figure 2.

2.2. Image Degradation Model

The objective of image restoration (IR) methods is to recover the latent pure image x based on its degraded form y, which can be expressed by the equation:
$$y = D(x) + n \quad (1)$$
where D is a noise-independent degradation operation and n represents additive white Gaussian noise (AWGN) of standard deviation σ. This paper assumes that noise, blur, and low resolution degrade the measurement images and, consequently, lead to degraded photogrammetric processing. The opposite operations, denoising, deblurring, and super resolution, respectively, will improve the quality of the degraded images and lead to improved quality of photogrammetric processing. They can be classified as numerical methods of improving photogrammetric processing quality, as described in the Introduction.
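A minimal numpy sketch of the general degradation model of Equation (1), with D left as a user-supplied operator and n drawn as AWGN; the 0–255 floating-point image convention is an assumption of this sketch, not taken from the article.

```python
import numpy as np

def degrade(x: np.ndarray, D, sigma: float, seed: int = 0) -> np.ndarray:
    """Apply y = D(x) + n, where n is AWGN with standard deviation sigma (Equation (1))."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=x.shape)
    return D(x) + n

# Pure-noise case: D is the identity, so y = x + n.
x = np.full((256, 256), 128.0)                 # flat grey test image in the 0-255 range
y_noisy = degrade(x, D=lambda img: img, sigma=15.0)
```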
For a typical blur, the degradation model can be expressed as follows:
$$y_b = x \otimes k + n \quad (2)$$
where $x \otimes k$ is the two-dimensional convolution of the pure image and the blur kernel k. More information on blur in low-level photogrammetry can be found in [6]. Unlike the study in [6], this paper uses 8 different blur kernels (Figure 3), adopted as in [42,43]. The proposed blur kernels, in connection with noise and low resolution, led to the development of the degraded test data. The blur kernels presented here were chosen to complement the kernels presented in the study [6], where very intense motion blur was simulated.
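A sketch of the blur degradation of Equation (2), assuming a normalized kernel and 2-D convolution via scipy; note that fftconvolve zero-pads at the borders, whereas the restoration formulas given later assume circular boundary conditions. The 9 × 9 averaging kernel stands in for one of the kernels of Figure 3, and σ = 7.65 is the value used for the blurred sets in Section 2.5.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_degrade(x: np.ndarray, k: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Apply y_b = x (*) k + n (Equation (2)): 2-D convolution with kernel k plus AWGN."""
    blurred = fftconvolve(x, k / k.sum(), mode="same")   # normalized kernel preserves brightness
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, sigma, size=x.shape)

x = np.random.default_rng(1).uniform(0, 255, (256, 256))  # stand-in for a measurement image
y_b = blur_degrade(x, k=np.ones((9, 9)), sigma=7.65)
```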
The degradation model for an image with reduced resolution is expressed by the following equation:
$$y_{lr} = (x \otimes k)\downarrow_s + n \quad (3)$$
where $\downarrow_s$ denotes simple direct image downsampling [44], keeping one pixel from every s × s block, and s is the downscaling factor, here s = 2. In the case of super-resolution algorithms, such an image degradation model is deemed most correct [43,45]. More information on how low resolution impacts a photogrammetric model, and on other models of increasing resolution, can be found in [7].
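A sketch of Equation (3) under the same assumptions as the previous snippet: the blurred image is directly downsampled with s = 2 (every second pixel is kept) before noise is added.

```python
import numpy as np
from scipy.signal import fftconvolve

def lowres_degrade(x: np.ndarray, k: np.ndarray, s: int = 2, sigma: float = 0.0, seed: int = 0) -> np.ndarray:
    """Apply y_lr = (x (*) k) downsampled by factor s, plus AWGN (Equation (3))."""
    blurred = fftconvolve(x, k / k.sum(), mode="same")
    down = blurred[::s, ::s]                       # direct downsampling: keep every s-th pixel
    rng = np.random.default_rng(seed)
    return down + rng.normal(0.0, sigma, size=down.shape)

x = np.random.default_rng(2).uniform(0, 255, (256, 256))
y_lr = lowres_degrade(x, k=np.ones((9, 9)), s=2, sigma=7.65)
print(y_lr.shape)                                  # (128, 128): half the original resolution
```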
Noise can be defined as a certain accidental and unwanted signal. In the context of captured images, noise is an undesirable byproduct of image capture and recording, and constitutes redundant information. The noise sources in digital photography are primarily the image capturing, recording, and transmission channels. Outcomes of noise include undesirable effects such as artifacts, spurious individual pixels of random value, lines, over-emphasized object edges, etc. [46]. This paper adopts the Gaussian model of additive noise. This noise, also called electronic noise, is generated primarily as a result of signal amplification in the recording channel and directly in CCD (Charge-Coupled Device) and CMOS sensors, as a result of thermal atomic vibration [46]. It should be stressed that an elevated noise or distortion level in input photogrammetric images can lead to significant degradation of the stereo-matching process. This applies to all stereo-vision algorithms, although to a varying extent [47]. Such a situation can directly impact the developed model. Assuming that D(x) in Equation (1) is the identity function, the random variable n adopts the Gaussian density function $p_n(x_p)$, which can be expressed as [47]:
$$p_n(x_p) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_p-\mu)^2}{2\sigma^2}} \quad (4)$$
where $x_p$ is the value of a single pixel of image x, μ is the mean value (0 adopted herein), and σ is the standard deviation, adopted herein in the range from 0 to 50.
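A small numerical check of the noise model of Equation (4), using a σ from the range quoted above: the density integrates to approximately one, and samples drawn from it have mean ≈ μ and standard deviation ≈ σ.

```python
import numpy as np

mu, sigma = 0.0, 15.0                         # sigma chosen from the 0-50 range used in this study

def p_n(x_p):
    """Gaussian density of Equation (4)."""
    return np.exp(-((x_p - mu) ** 2) / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

grid = np.linspace(-200.0, 200.0, 20001)
print(np.trapz(p_n(grid), grid))              # ~1.0: a valid probability density

samples = np.random.default_rng(0).normal(mu, sigma, 1_000_000)
print(samples.mean(), samples.std())          # ~0.0 and ~15.0, as expected
```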

2.3. Restoration

As mentioned in the Introduction, the development of the image quality improvement method was based on already existing numerical methods functioning in other fields. The assumption was also that the methods used in the research had to have a plug-and-play capability [42,48,49,50,51,52,53,54,55,56,57,58,59], meaning that they are functional without the need for user interference. Plug-and-play methods used for image restoration problems can perform generic image restoration independent of the degradation type. That capability is especially essential in real applications, because during UAV data acquisition the degradation factors can be very different and random. Moreover, a common feature of these methods is that they are relatively simple to use, which means that they can be easily implemented within available environments or integrated with existing software, and that they are based on well-known numerical methods.
To put it simply, all the aforementioned methods solve Equation (1) in different ways. One of them is Bayesian inference; Equation (1) can then be solved using the maximum a posteriori probability (MAP) rule, which can be formally expressed as [60]:
$$\hat{x} = \arg\min_x \frac{1}{2\sigma^2}\left\| y - D(x) \right\|^2 + \lambda R(x) \quad (5)$$
where the solution minimizes an energy function composed of a data term $\frac{1}{2\sigma^2}\| y - D(x) \|^2$ and a prior term $\lambda R(x)$ with regularization parameter λ. As stipulated by the source literature [60,61,62,63], the methods for solving Equation (5) can be divided into two groups, namely, model-based methods and learning-based methods. Both have their pros and cons. As a rule of thumb, model-based methods are flexible and can handle numerous tasks D, but they need more computation time. On the other hand, learning-based methods can provide results very quickly but require a long training time and are not as flexible; learning is limited to a specific task D only. For photogrammetric purposes, the solution presented herein was adopted directly after [60]: denoising is conducted using a learning-based model, while deblurring and resolution improvement use model-based methods. Readers who want to further explore deblurring and resolution improvement with learning-based models are referred to [6,7], where learning-based methods were applied for photogrammetric purposes. As presented there, the methods were very effective and were able to restore even very blurry images and, in the case of super resolution, to generate high-quality, high-resolution images; nevertheless, each method addressed only one separate degradation problem. Moreover, the approach presented in those works uses neural networks, which require a long training process and a large training dataset, which is generally acceptable for a single task only. The polymodal image restoration method presented here must address three degradation factors at once, and therefore requires a different methodology.
As already mentioned, denoising was conducted using the DRUNet neural network [53]. This network is classified as a convolutional neural network (CNN) and is able to remove noise of various levels based on a single model. The backbone of DRUNet is the well-known U-Net network [64]; it consists of four scales, each having an identity skip connection between the 2 × 2 strided convolution (SConv) downscaling and 2 × 2 transposed convolution (TConv) upscaling operations. The number of channels in each layer, from the first to the fourth scale, is 64, 128, 256, and 512, respectively. No activation function is applied before the first and last convolutions, nor before the SConv and TConv layers, and every residual block contains only one ReLU activation function. The training database consists of 8794 images acquired from the four datasets of [65,66,67,68]. A simplified sketch of such a backbone is shown below.
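A heavily simplified PyTorch sketch of a DRUNet-style backbone, following the description above. The channel counts, the SConv/TConv down- and upscaling, the identity skips and the single-ReLU residual block come from the text; the single residual block per scale, the bias-free convolutions and the extra noise-level input channel are assumptions of this sketch, not details taken from the article.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with a single ReLU, as described for DRUNet."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)

class TinyDRUNet(nn.Module):
    """Four scales (64/128/256/512 channels), 2x2 strided-conv downscaling,
    2x2 transposed-conv upscaling, identity skip connections."""
    def __init__(self, in_ch: int = 4, out_ch: int = 3, chs=(64, 128, 256, 512)):
        super().__init__()
        self.head = nn.Conv2d(in_ch, chs[0], 3, padding=1, bias=False)
        self.enc = nn.ModuleList(ResBlock(chs[i]) for i in range(3))
        self.down = nn.ModuleList(nn.Conv2d(chs[i], chs[i + 1], 2, stride=2, bias=False) for i in range(3))
        self.body = ResBlock(chs[3])
        self.up = nn.ModuleList(nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2, bias=False) for i in (3, 2, 1))
        self.dec = nn.ModuleList(ResBlock(chs[i - 1]) for i in (3, 2, 1))
        self.tail = nn.Conv2d(chs[0], out_ch, 3, padding=1, bias=False)

    def forward(self, x):
        x = self.head(x)
        skips = []
        for enc, down in zip(self.enc, self.down):
            x = enc(x)
            skips.append(x)                      # identity skip connection
            x = down(x)                          # SConv: halve the spatial resolution
        x = self.body(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(up(x) + skip)                # TConv upscaling followed by a residual block
        return self.tail(x)

# The noise level is passed as an extra input channel, a common convention for
# single-model, multi-level denoisers.
img = torch.rand(1, 3, 64, 64)
sigma_map = torch.full((1, 1, 64, 64), 15.0 / 255.0)
denoised = TinyDRUNet()(torch.cat([img, sigma_map], dim=1))
```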
The data term and prior term in Equation (5) can be decoupled using the half quadratic splitting (HQS) algorithm [69], as introduced in [53]. HQS introduces an auxiliary variable z, resulting in:
$$\hat{x} = \arg\min_x \frac{1}{2\sigma^2}\left\| y - D(x) \right\|^2 + \lambda R(z), \quad \text{s.t. } z = x \quad (6)$$
which can be solved by minimizing the following problem:
$$L_\mu(x, z) = \frac{1}{2\sigma^2}\left\| y - D(x) \right\|^2 + \lambda R(z) + \frac{\mu}{2}\left\| z - x \right\|^2 \quad (7)$$
where μ is the penalty parameter. This problem can be solved by iterating two sub-problems for x and z, with the other variable held fixed:
$$\begin{cases} x_k = \arg\min_x \left\| y - D(x) \right\|^2 + \mu\sigma^2\left\| x - z_{k-1} \right\|^2 \\[4pt] z_k = \arg\min_z \frac{1}{2\left(\sqrt{\lambda/\mu}\right)^2}\left\| z - x_k \right\|^2 + R(z) \end{cases} \quad (8)$$
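A minimal sketch of the resulting plug-and-play iteration: the x-step is a data-term solution that depends on the degradation D (e.g., the closed forms of Equations (9)–(11)), and the z-step is a call to a denoiser at noise level $\sqrt{\lambda/\mu}$. The geometric penalty schedule, the default λ and the toy identity data step are illustrative assumptions, not parameters from the article.

```python
import numpy as np

def hqs_restore(y, data_step, denoiser, sigma, lambd=0.23, iters=8):
    """Alternate the x-step (data term) and z-step (denoising prior) of Equation (8)."""
    z = y.copy()
    mus = np.geomspace(1e-2, 1e2, iters)                  # penalty mu grows over the iterations
    for mu in mus:
        x = data_step(y, z, alpha=mu * sigma**2)          # first line of Equation (8)
        z = denoiser(x, noise_level=np.sqrt(lambd / mu))  # second line: a denoising step
    return z

# Toy usage with D = identity: the x-step then has the closed form x = (y + alpha*z) / (1 + alpha).
identity_step = lambda y, z, alpha: (y + alpha * z) / (1.0 + alpha)
passthrough_denoiser = lambda x, noise_level: x           # stand-in for a learned denoiser such as DRUNet
restored = hqs_restore(np.ones((8, 8)), identity_step, passthrough_denoiser, sigma=15.0 / 255.0)
```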
The $z_k$ sub-problem thus comes down to finding a proximal point of $x_k$, i.e., a denoising step, while the $x_k$ sub-problem usually has a closed-form solution dependent on D. For the deblurring task, assuming that the convolution in Equation (2) is performed under circular boundary conditions, a fast solution to $x_k$ is:
$$x_k = \mathcal{F}^{-1}\!\left( \frac{\overline{\mathcal{F}(k)}\,\mathcal{F}(y) + \alpha_k \mathcal{F}(z_{k-1})}{\overline{\mathcal{F}(k)}\,\mathcal{F}(k) + \alpha_k} \right) \quad (9)$$
where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ denote the Fast Fourier Transform (FFT) and the inverse Fast Fourier Transform, respectively, $\overline{\mathcal{F}(\cdot)}$ is the complex conjugate of $\mathcal{F}(\cdot)$, and $\alpha_k = \mu\sigma^2$.
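A numpy sketch of the closed-form x-step of Equation (9): a Wiener-like deconvolution in the Fourier domain under circular boundary conditions. The helper that embeds the small kernel into an image-sized array (so that the FFT of the point spread function can be taken) is an implementation convention assumed here.

```python
import numpy as np

def deblur_x_step(y: np.ndarray, k_full: np.ndarray, z_prev: np.ndarray, alpha: float) -> np.ndarray:
    """Closed-form solution of Equation (9) for the deblurring data step."""
    Fk, Fy, Fz = np.fft.fft2(k_full), np.fft.fft2(y), np.fft.fft2(z_prev)
    numerator = np.conj(Fk) * Fy + alpha * Fz
    denominator = np.conj(Fk) * Fk + alpha          # |F(k)|^2 + alpha_k, real and positive
    return np.real(np.fft.ifft2(numerator / denominator))

def pad_kernel(k: np.ndarray, shape) -> np.ndarray:
    """Zero-pad the kernel to the image size and shift its centre to the origin."""
    k_full = np.zeros(shape)
    k_full[: k.shape[0], : k.shape[1]] = k / k.sum()
    return np.roll(k_full, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))

y = np.random.default_rng(0).uniform(0.0, 1.0, (128, 128))   # degraded image (illustrative)
k_full = pad_kernel(np.ones((9, 9)), y.shape)
x_k = deblur_x_step(y, k_full, z_prev=y, alpha=0.01)
```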
The solution to $x_k$ for the super-resolution task, assuming that the degradation in Equation (3) is executed under circular boundary conditions, can be taken from [53,70]:
$$x_k = \mathcal{F}^{-1}\!\left( \frac{1}{\alpha_k}\left( d - \overline{\mathcal{F}(k)} \odot_s \frac{\left(\mathcal{F}(k)\, d\right)\Downarrow_s}{\left(\overline{\mathcal{F}(k)}\,\mathcal{F}(k)\right)\Downarrow_s + \alpha_k} \right) \right) \quad (10)$$
$$d = \overline{\mathcal{F}(k)}\,\mathcal{F}(y \uparrow_s) + \alpha_k \mathcal{F}(z_{k-1}) \quad (11)$$
where $\odot_s$ denotes a distinct block processing operator with element-wise multiplication, $\Downarrow_s$ denotes the distinct block downsampler, and $y \uparrow_s$ is the degraded image upsampled by the factor s.
Degraded images were subjected to the presented method. Therefore, the polymodal method of improving the quality of photogrammetric images involves solving three sub-problems, namely, denoising, deblurring, and super resolution.

2.4. Reference Data Acquisition

The reference images were acquired using a DJI Mavic Pro (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China) UAV. This UAV is a typical representative of commercial aerial vehicles, designed and intended mainly for amateur movie creators. The flexibility and reliability of these platforms were, however, quickly appreciated by the photogrammetric community.
The flight was planned and executed as a single grid [71] over a fragment of urban infrastructure. The test area covers 0.192 km², and the flight was conducted at an altitude of 100 m above ground level (AGL) with a longitudinal and transverse overlap of 75%. A total of 129 images were taken and supplemented with metadata and the actual UAV positions. In addition, a photogrammetric network was established, consisting of 16 ground control points (GCPs) evenly arranged throughout the entire study area. The GCP positions were measured with the GNSS RTK accurate satellite positioning method and determined in the PL-2000 Polish national grid coordinate system, with altitudes relative to the quasigeoid. Commercial Agisoft Metashape ver. 1.6.5 (Agisoft LLC, St. Petersburg, Russia) software was used to process the data. The results for the reference model, read from a report generated by the software, are shown in Table 1, and the visualizations are presented in Figure 4.

2.5. Degraded Models

The aforementioned relationships were used to create the research data. Image data acquired during the reference flight were noised, blurred, and reduced in resolution. One dataset with added Gaussian noise of σ = 15 was generated, together with 8 sets with added Gaussian noise of σ = 7.65 and variable blur, and 8 sets with low resolution (s = 2) and variable blur. The blur kernel (Figure 3) was changed between sets, so that a given kernel was constant for all images within a set. This produced a total of 17 complete sets of degraded images (Table 2).
Next, in accordance with good practice, the same process of generating typical low-level photogrammetry products was conducted using the photogrammetric software. This process followed the same procedure and processing settings as those applied for generating the reference products. The result was 17 products generated from degraded images. Table 2 presents the basic data of the surveys based on degraded data. It should be noted that the real flight altitude (ca. 100 m) was fixed, and the altitude shown in the table was calculated by the software. Table 3 shows the root mean square error (RMSE) calculated for the control point locations. During image processing, the control points were manually indicated by the operator for each dataset. It is noteworthy that models could be developed from all degraded data and that the selected software completed the process without significant disturbance.

2.6. Image Restoration and Model Processing

All 17 complete sets of degraded images (Table 2) were subjected to degradation elimination. The resulting images for each set were used to generate further sets of photogrammetric products. This process was similar to the one involving reference models, with the same processing software settings being used. Figure 5, as well as Figure 6 and Figure 7, show a visual comparison of the method’s operation for a fragment of one of the images. These figures present the reference image (ground truth) (a), degraded image (b), and restored image (c).
A visual analysis of the aforementioned images indicates that the method restores the images substantially, providing significant denoising, deblurring, and resolution improvement. In practice, the noise has been completely eliminated, and the images exhibit a significantly higher interpretative quality. Furthermore, the visual assessment of all restored images (c) indicates that they have a very similar or even identical quality. It is practically impossible to visually assess the extent of their original degradation, which leads to the conclusion that all products based on these images will also exhibit similar interpretative and geometric quality, regardless of the source of the problem.
The assessment of image quality and the evaluation of the results of the presented polymodal method in comparison with the degraded images were conducted on the basis of four different image quality metrics (IQM): the blind/referenceless image spatial quality evaluator (BRISQUE) [71], the natural image quality evaluator (NIQE) [72], the perception-based image quality evaluator (PIQE) [73] and the peak signal-to-noise ratio (PSNR) [74]. The chosen no-reference image quality scores return a nonnegative scalar. The BRISQUE score is in the range from 0 to 100, and lower score values reflect better perceptual quality. The NIQE model is trained on a database of pristine images and can measure the quality of images with arbitrary distortion. NIQE is opinion-unaware and does not use subjective quality scores; the trade-off is that the NIQE score of an image might not correlate as well as the BRISQUE score with human perception of quality. Lower NIQE score values reflect better perceptual quality with respect to the input model. The PIQE score is a no-reference image quality score that is inversely correlated with the perceptual quality of an image: a low score value indicates high perceptual quality, and a high score value indicates low perceptual quality. A higher PSNR value indicates higher image quality, whereas a small PSNR value indicates large numerical differences between images. Figure 8 presents the calculated results of the aforementioned image quality evaluators in graphical form.
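Of the four metrics, PSNR is the only full-reference one and can be computed directly; a minimal numpy sketch is given below (the no-reference metrics BRISQUE, NIQE and PIQE rely on trained models and are not reproduced here). The test data in the usage lines are placeholders.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB: 10 * log10(peak^2 / MSE). Higher values mean smaller differences."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
reference = rng.uniform(0, 255, (256, 256))
noisy = np.clip(reference + rng.normal(0, 15, reference.shape), 0, 255)
print(psnr(reference, noisy))     # roughly 24-25 dB for sigma = 15 noise on 8-bit data
```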
An analysis of the aforementioned results shows that, in terms of perceptual quality improvement (BRISQUE index), a significant improvement was achieved in the resolution improvement sub-task (task: sr) and a minor improvement in denoising. The BRISQUE index values remain clearly high for the deblurring task, although the visual analysis clearly indicates significant quality improvement. On the other hand, the NIQE (natural image quality evaluator) index correctly indicates image quality improvement in each task. This means that, objectively, the quality of each image has been very clearly improved, and the NIQE index values in certain cases are very similar to the reference values. Interestingly, the NIQE value for the denoise task indicates even better image quality after noise reduction than that of the ground truth image (obtained straight from the camera and unmodified). This means that the noise present in the ground truth images was minor, as is natural for the sensors of small digital cameras. The denoising removed this residual noise as well, which translated into an improved NIQE index value. The PIQE index, similarly to BRISQUE, indicates a general improvement; however, the values are clearly overstated for the deblurring task. The popular PSNR index indicated a significant improvement of image quality in all tasks, with the highest value observed for deblurring, for which BRISQUE and PIQE showed quite the opposite.
Images subjected to the quality improvement method were used as a base to develop successive, typical photogrammetric products. This process followed the same procedure and processing settings as the ones applied for generating reference products and degraded images. The result was 17 products generated from images with improved quality.
Table 4 presents the basic study data based on images without blurring. Table 5 shows the RMSE calculated for control points’ location for the restored dataset.

3. Results

This section analyzes and discusses the geometry of all developed photogrammetric products based on restored images. It should be noted that all processing runs completed correctly and without disturbance, and the applied photogrammetric software did not indicate significant difficulties in generating the products. Figure 9 shows a full summary of the basic quality parameters of the photogrammetric products, namely, the reprojection error, the total RMSE for GCPs, and the number of key points.
The reprojection error (RE) for models based on restored images adopts, in all tasks, higher values than the reference ones and the values obtained for degraded images; the difference between the error for degraded and restored images is minor, with the restored images slightly higher. The total RMSE for GCPs is similar to the reference values, which means that significant improvement is observed in this respect. The number of key points is close to the values obtained for the degraded models. All the aforementioned values differ from the ground truth values, but certain dependencies can be identified: RE values are not improved, RMSE for GCPs is improved, and the number of key points increases. The rather subtle differences in this respect allow the conclusion that the model geometry will be preserved.
The geometric quality of the developed topography models was evaluated using the methods described in [75,76], similarly to the analyses performed in [6]. An M3C2 (Multiscale Model to Model Cloud Comparison) distance map was developed for each point cloud. The M3C2 distance computation utilized 3D point precision estimates stored in scalar fields. Appropriate scalar fields were selected for both point clouds (reference and tested) to describe the measurement precision in X, Y, and Z ($\sigma_X$, $\sigma_Y$, $\sigma_Z$). The results for sample cases are shown in Figure 10.
The statistical distribution of the M3C2 distances is close to normal, which means that a significant part of the observations is concentrated around the mean. The means (μ) for all cases adopted negative values, which means that each model, both degraded and restored, was displaced on average by approximately 20 cm relative to the reference model. It should be noted that the degraded models exhibited greater differences from the reference model; all models based on restored images exhibited a lower mean (μ) than the equivalent model for the same degradation parameters. The standard deviation was about 1 m. The M3C2 distance was approximately 1 m for the models in their eastern part (blue color) near the quay, where the water surface is recorded. Furthermore, one can notice a significant number of random points with extreme deviation in the northern and southern parts. For extremely degraded cases (Low-7), even a 1 m difference can be observed in the flyover area (central part of the model, blue color). This directly means that the flyover altitude (object altitude above ground level) was incorrectly calculated by the software. After restoring the images, this difference is reduced to around zero (green points—SuperRes-7). A similar situation is observed for the noised model. This proves that the model geometry is significantly improved.
In areas where the M3C2 distance takes higher values, it was observed that tie points exhibited lower measurement precision, manifested by higher values of $\sigma_X$, $\sigma_Y$, $\sigma_Z$. These values are calculated in millimeters (mm). Therefore, it was decided to additionally assess the quality of all products by conducting a statistical analysis of the tie point precision $d_\sigma$, expressed by Formula (12):
$$d_\sigma = \sqrt{\sigma_X^2 + \sigma_Y^2 + \sigma_Z^2} \quad (12)$$
The statistical analysis included the median and standard deviation of the tie point precision $d_\sigma$, calculated for each case. The numerical results are shown in Table 6 for the median and Table 7 for the standard deviation. A graphical comparison of the data from the tables is shown in Figure 11.
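A short numpy sketch of this statistic, assuming the per-point precision estimates are available as arrays (e.g., exported scalar fields of the point cloud); the random values below are placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_x, sigma_y, sigma_z = (rng.uniform(5.0, 80.0, 10_000) for _ in range(3))  # precision in mm

d_sigma = np.sqrt(sigma_x**2 + sigma_y**2 + sigma_z**2)     # Formula (12), per tie point
print(np.median(d_sigma), d_sigma.std())                    # statistics compared in Tables 6 and 7
```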
An analysis of the results clearly shows that the precision of tie point position determination was improved in each case, which consequently translates into improved geometric quality of the product. The median improvement for the noise reduction task is approximately 20 mm. The improvement for the blur reduction task depends on the kernel and amounts to 15 to 20 mm, while in the case of resolution improvement, this value varies from 40 to about 60 mm. It should be noted that the values of the “Low” task are derived from the ground resolution, which was approximately twice as high for this task; therefore, when comparing the results of this task with the “Super” task, the calculated precision values should be multiplied by 2. A reduction of the standard deviation was also noted for all cases, which likewise indicates improved geometric quality and precision of the product.
The last element of the results’ analysis was a visual analysis of orthophoto maps, digital elevation models, and dense point clouds. Figure 12 and Figure 13 contain several representative cases showing orthophoto map fragments and digital elevation model (DEM) fragments.
The visual assessment allows the conclusion that a significant improvement in the interpretative quality of the products was achieved in each case. The improved image quality, evidenced objectively in the previous section, clearly contributes to the improved orthoimage quality, which seems obvious. More details can be distinguished in products based on restored images; these details are also clearer and exhibit less noise. The geometric improvement (proven above) also translates into DEM quality. DEMs based on restored images show clearly less terrain unevenness. Products developed using degraded images exhibit distinct, minor unevenness in places where no such object exists in reality, the source of which is the imprecise determination of tie points.

4. Conclusions

The presented method supports the photogrammetric process by eliminating image-degrading factors, while allowing accurate photogrammetric models to be generated correctly. As shown by the analysis, the geometric and interpretative quality of the models is similar to that of the reference models and is significantly higher than that of models based on degraded images. The discussed image quality improvement method comprehensively removes three factors that degrade photogrammetric models and improves the quality of the end products.
The geometric accuracy of the models generated from the restored images was maintained, which is evidenced by the low standard deviation of the compared models. This deviation is stable for different blur kernels and various combinations of degradation factors. Degradation factors can appear in pairs or as a simultaneous cluster of all of the above. Such cases are particularly encountered with small sensors, in poor lighting (e.g., an overcast sky), and during fast UAV flight. The discussed method makes it possible to use images from such measurements, which are not fully correct, and ultimately to develop a correct model.
The interpretive quality of textured products and images clearly increased. It has been shown, beyond any doubt, that reducing the degrading factor significantly improves image perception, and the objects depicted in an orthoimage are clearer.
The polymodal method of improving the quality of degraded images applied within these studies has been tested using typical photogrammetric software. Surprisingly, the software turned out to be rather resistant to these factors and enabled generating models based on all test data, even the ones with the highest degradation factors.
Degraded images would normally be eliminated from a typical, unmodified photogrammetric process. In specific cases, it may turn out that all images within an entire photogrammetric flight have various defects. Contrary to appearances, such situations are not rare. The camera's instrumentation and control system can adjust the exposure for each image and, in the case of dynamic scenery with changing lighting, blur and noise can appear on images from a single flight. The presented method harmonizes all images, eliminating the degrading factors.
Commonly used photogrammetric software, especially cloud computing versions, could introduce this additional option to eliminate undesirable degradation. The method is fast enough that the user would be virtually unable to notice a significant slowdown of the photogrammetric model construction process. Furthermore, the versatility of the method and its independence from the character of the degradation mean that its practical application will significantly expand the capabilities of photogrammetric software.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

Calculations were carried out at the Academic Computer Centre in Gdańsk.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Burdziakowski, P.; Galecki, L.; Mazurkiewicz, M.; Struzinski, J. Very high altitude micro air vehicle deployment method. IFAC-PapersOnLine 2019, 52, 327–333. [Google Scholar] [CrossRef]
  2. Mikrut, S. Classical Photogrammetry and UAV—Selected Aspects. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 947–952. [Google Scholar] [CrossRef] [Green Version]
  3. Lepage, G. Time Delayed Integration CMOS Image Sensor with Zero Desynchronization. U.S. Patent 7,675,561B2, 9 March 2010. [Google Scholar]
  4. Pain, B.; Cunningham, T.J.; Yang, G.; Ortiz, M. Time-Delayed-Integration Imaging with Active Pixel Sensors. U.S. Patent 7,268,814, 11 September 2007. [Google Scholar]
  5. Burdziakowski, P.; Bobkowska, K. UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations. Sensors 2021, 21, 3531. [Google Scholar] [CrossRef]
  6. Burdziakowski, P. A Novel Method for the Deblurring of Photogrammetric Images Using Conditional Generative Adversarial Networks. Remote Sens. 2020, 12, 2586. [Google Scholar] [CrossRef]
  7. Burdziakowski, P. Increasing the Geometrical and Interpretation Quality of Unmanned Aerial Vehicle Photogrammetry Products using Super-Resolution Algorithms. Remote. Sens. 2020, 12, 810. [Google Scholar] [CrossRef] [Green Version]
  8. Gómez-López, J.M.; Pérez-García, J.L.; Mozas-Calvache, A.T.; Delgado-García, J. Mission Flight Planning of RPAS for Photogrammetric Studies in Complex Scenes. ISPRS Int. J. Geoinf. 2020, 9, 392. [Google Scholar] [CrossRef]
  9. Eisenbeiss, H.; Sauerbier, M. Investigation of uav systems and flight modes for photogrammetric applications. Photogramm. Rec. 2011, 26, 400–421. [Google Scholar] [CrossRef]
  10. Burdziakowski, P.; Specht, C.; Dabrowski, P.S.; Specht, M.; Lewicka, O.; Makar, A. Using UAV Photogrammetry to Analyse Changes in the Coastal Zone Based on the Sopot Tombolo (Salient) Measurement Project. Sensors 2020, 20, 4000. [Google Scholar] [CrossRef]
  11. Goncalves, J.A.; Henriques, R. UAV photogrammetry for topographic monitoring of coastal areas. ISPRS J. Photogramm. Remote Sens. 2015, 104, 101–111. [Google Scholar] [CrossRef]
  12. Contreras-De-Villar, F.; García, F.J.; Muñoz-Perez, J.J.; Contreras-De-Villar, A.; Ruiz-Ortiz, V.; Lopez, P.; Garcia-López, S.; Jigena, B. Beach Leveling Using a Remotely Piloted Aircraft System (RPAS): Problems and Solutions. J. Mar. Sci. Eng. 2020, 9, 19. [Google Scholar] [CrossRef]
  13. Sekrecka, A.; Wierzbicki, D.; Kedzierski, M. Influence of the Sun Position and Platform Orientation on the Quality of Imagery Obtained from Unmanned Aerial Vehicles. Remote Sens. 2020, 12, 1040. [Google Scholar] [CrossRef] [Green Version]
  14. Tonkin, T.N.; Midgley, N.G. Ground-Control Networks for Image Based Surface Reconstruction: An Investigation of Optimum Survey Designs Using UAV Derived Imagery and Structure-from-Motion Photogrammetry. Remote Sens. 2016, 8, 786. [Google Scholar] [CrossRef] [Green Version]
  15. Villanueva, J.K.S.; Blanco, A.C. Optimization of Ground Control Point (GCP) Configuration for Unmanned Aerial Vehicle (UAV) Survey Using Structure from Motion (SFM). ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-4/W12, 167–174. [Google Scholar] [CrossRef] [Green Version]
  16. Hastedt, H.; Ekkela, T.; Luhmann, T. Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives, Prague, Czech Republic, 12–19 July 2016. [Google Scholar]
  17. Tjahjadi, M.E.; Sai, S.S.; Handoko, F. Assessing a 35mm Fixed-Lens Sony Alpha-5100 Intrinsic Parameters Prior to, During, and Post UAV Flight Mission. KnE Eng. 2019. [Google Scholar] [CrossRef]
  18. Ekaso, D.; Nex, F.; Kerle, N. Accuracy assessment of real-time kinematics (RTK) measurements on unmanned aerial vehicles (UAV) for direct geo-referencing. Geospat. Inf. Sci. 2020, 23, 165–181. [Google Scholar] [CrossRef] [Green Version]
  19. Gerke, M.; Przybilla, H.-J. Accuracy Analysis of Photogrammetric UAV Image Blocks: Influence of Onboard RTK-GNSS and Cross Flight Patterns. Photogramm. Fernerkund. Geoinf. 2016, 2016, 17–30. [Google Scholar] [CrossRef] [Green Version]
  20. Uysal, M.; Toprak, A.S.; Polat, N. DEM generation with UAV Photogrammetry and accuracy analysis in Sahitler hill. Meas. J. Int. Meas. Confed. 2015, 73, 539–543. [Google Scholar] [CrossRef]
  21. Tomaštík, J.; Mokroš, M.; Surový, P.; Grznárová, A.; Merganič, J. UAV RTK/PPK Method—An Optimal Solution for Mapping Inaccessible Forested Areas? Remote Sens. 2019, 11, 721. [Google Scholar] [CrossRef] [Green Version]
  22. Mian, O.; Lutes, J.; Lipa, G.; Hutton, J.J.; Gavelle, E.; Borghini, S. Direct Georeferencing on Small Unmanned Aerial Platforms For Improved Reliability And Accuracy Of Mapping Without The Need For Ground Control Points. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-1/W4, 397–402. [Google Scholar] [CrossRef] [Green Version]
  23. Vautherin, J.; Rutishauser, S.; Schneider-Zapp, K.; Choi, H.F.; Chovancova, V.; Glass, A.; Strecha, C. Photogrammetric Accuracy and Modeling Of Rolling Shutter Cameras. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-3, 139–146. [Google Scholar] [CrossRef] [Green Version]
  24. Wiącek, P.; Pyka, K. The Test Field for UAV Accuracy Assessments. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-1/W2, 67–73. [Google Scholar] [CrossRef] [Green Version]
  25. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of Unmanned Aerial Vehicle (UAV) and SfM Photogrammetry Survey as a Function of the Number and Location of Ground Control Points Used. Remote Sens. 2018, 10, 1606. [Google Scholar] [CrossRef] [Green Version]
  26. Saponaro, M.; Tarantino, E.; Reina, A.; Furfaro, G.; Fratino, U. Assessing the Impact of the Number of GCPS on the Accuracy of Photogrammetric Mapping from UAV Imagery. Baltic Surveying. Int. Sci. J. 2019, 10. [Google Scholar] [CrossRef]
  27. Feng, C.; Yu, D.; Liang, Y.; Guo, D.; Wang, Q.; Cui, X. Assessment of Influence of Image Processing On Fully Automatic Uav Photogrammetry. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 269–275. [Google Scholar] [CrossRef] [Green Version]
  28. Harwin, S.; Lucieer, A.; Osborn, J. The Impact of the Calibration Method on the Accuracy of Point Clouds Derived Using Unmanned Aerial Vehicle Multi-View Stereopsis. Remote Sens. 2015, 7, 11933–11953. [Google Scholar] [CrossRef] [Green Version]
  29. Burdziakowski, P. Evaluation of Open Drone Map Toolkit for Geodetic Grade Aerial Drone Mapping—Case Study. In Proceedings of the 17th International Multidisciplinary Scientific GeoConference SGEM 2017, Albena, Bulgaria, 20 June 2017; pp. 101–110. [Google Scholar]
  30. Zhou, Y.; Rupnik, E.; Meynard, C.; Thom, C.; Pierrot-Deseilligny, M. Simulation and Analysis of Photogrammetric UAV Image Blocks—Influence of Camera Calibration Error. Remote Sens. 2020, 12, 22. [Google Scholar] [CrossRef] [Green Version]
  31. Kolecki, J.; Kuras, P.; Pastucha, E.; Pyka, K.; Sierka, M. Calibration of Industrial Cameras for Aerial Photogrammetric Mapping. Remote Sens. 2020, 12, 3130. [Google Scholar] [CrossRef]
  32. Lalak, M.; Wierzbicki, D.; Kędzierski, M. Methodology of Processing Single-Strip Blocks of Imagery with Reduction and Optimization Number of Ground Control Points in UAV Photogrammetry. Remote Sens. 2020, 12, 3336. [Google Scholar] [CrossRef]
  33. Wierzbicki, D.; Kedzierski, M.; Sekrecka, A. A Method for Dehazing Images Obtained from Low Altitudes during High-Pressure Fronts. Remote Sens. 2019, 12, 25. [Google Scholar] [CrossRef] [Green Version]
  34. Shahbazi, M.; Sohn, G.; Théau, J.; Menard, P. Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling. Sensors 2015, 15, 27493–27524. [Google Scholar] [CrossRef] [Green Version]
  35. Lilienblum, T.; Albrecht, P.; Michaelis, B. 3D-measurement of geometrical shapes by photogrammetry and neural networks. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 1996; Volume 4, pp. 330–334. [Google Scholar]
  36. Pashaei, M.; Starek, M.J.; Kamangir, H.; Berryhill, J. Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry. Remote Sens. 2020, 12, 1757. [Google Scholar] [CrossRef]
  37. Eastwood, J.; Zhang, H.; Isa, M.; Sims-Waterhouse, D.; Leach, R.K.; Piano, S. Smart photogrammetry for three-dimensional shape measurement. In Proceedings of the Optics and Photonics for Advanced Dimensional Metrology, Online Only, France, 6–10 April 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11352, p. 113520A. [Google Scholar]
  38. Itasaka, T.; Imamura, R.; Okuda, M. DNN-based Hyperspectral Image Denoising with Spatio-spectral Pre-training. In Proceedings of the 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 5–18 October 2019; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2019; pp. 568–572. [Google Scholar]
  39. Chen, Z.; Wang, X.; Xu, Z.; Hou, W. Convolutional Neural Network Based Dem Super Resolution. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B3, 247–250. [Google Scholar] [CrossRef] [Green Version]
  40. Brady, D.J.; Fang, L.; Ma, Z. Deep learning for camera data acquisition, control, and image estimation. Adv. Opt. Photon. 2020, 12, 787. [Google Scholar] [CrossRef]
  41. Chaudhry, M.; Ahmad, A.; Gulzar, Q.; Farid, M.; Shahabi, H.; Al-Ansari, N. Assessment of DSM Based on Radiometric Transformation of UAV Data. Sensors 2021, 21, 1649. [Google Scholar] [CrossRef] [PubMed]
  42. Zhang, K.; Zuo, W.; Zhang, L. Deep Plug-And-Play Super-Resolution for Arbitrary Blur Kernels. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2019; pp. 1671–1681. [Google Scholar]
  43. Zhang, K.; Van Gool, L.; Timofte, R. Deep Unfolding Network for Image Super-Resolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–18 June 2020; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2020; pp. 3214–3223. [Google Scholar]
  44. Digital Image. Digital Image Interpolation in MATLAB®; Wiley Online Books; John Wiley & Sons Singapore Pte. Ltd.: Singapore, 2018; pp. 17–70. ISBN 9781119119623. [Google Scholar]
  45. Efrat, N.; Glasner, D.; Apartsin, A.; Nadler, B.; Levin, A. Accurate Blur Models vs. Image Priors in Single Image Super-resolution. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2013; pp. 2832–2839. [Google Scholar]
  46. Boyat, A.K.; Joshi, B.K. A Review Paper: Noise Models in Digital Image Processing. Signal Image Process. Int. J. 2015, 6, 63–75. [Google Scholar] [CrossRef]
  47. Cyganek, B.; Siebert, J.P. An Introduction to 3D Computer Vision Techniques and Algorithms; John Wiley & Sons: Chichester, UK, 2011; ISBN 9780470017043. [Google Scholar]
  48. Ahmad, R.; Bouman, C.A.; Buzzard, G.T.; Chan, S.; Liu, S.; Reehorst, E.T.; Schniter, P. Plug-and-Play Methods for Magnetic Resonance Imaging: Using Denoisers for Image Recovery. IEEE Signal. Process. Mag. 2020, 37, 105–116. [Google Scholar] [CrossRef] [Green Version]
  49. Wei, K.; Aviles-Rivero, A.; Liang, J.; Fu, Y.; Schönlieb, C.-B.; Huang, H. Tuning-Free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems. arXiv 2020, arXiv:2002.09611. Available online: https://arxiv.org/abs/2002.09611 (accessed on 3 May 2021).
  50. Kamilov, U.S.; Mansour, H.; Wohlberg, B. A Plug-and-Play Priors Approach for Solving Nonlinear Imaging Inverse Problems. IEEE Signal Process. Lett. 2017, 24, 1872–1876. [Google Scholar] [CrossRef]
  51. Nguyen, A.; Clune, J.; Bengio, Y.; Dosovitskiy, A.; Yosinski, J. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3510–3520. [Google Scholar]
  52. Sun, Y.; Wohlberg, B.; Kamilov, U.S. An Online Plug-and-Play Algorithm for Regularized Image Reconstruction. IEEE Trans. Comput. Imaging 2019, 5, 395–408. [Google Scholar] [CrossRef] [Green Version]
  53. Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; van Gool, L.; Timofte, R. Plug-and-Play Image Restoration with Deep Denoiser Prior. arXiv 2020, arXiv:2008.13751. Available online: https://arxiv.org/abs/2008.13751 (accessed on 3 May 2021).
  54. Ono, S. Primal-Dual Plug-and-Play Image Restoration. IEEE Signal Process. Lett. 2017, 24, 1108–1112. [Google Scholar] [CrossRef]
  55. Chan, S.H.; Wang, X.; Elgendy, O. Plug-and-Play ADMM for Image Restoration: Fixed-Point Convergence and Applications. IEEE Trans. Comput. Imaging 2016, 3, 84–98. [Google Scholar] [CrossRef] [Green Version]
  56. Yuan, X.; Liu, Y.; Suo, J.; Dai, Q. Plug-and-Play Algorithms for Large-Scale Snapshot Compressive Imaging. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 6–18 June 2020; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2020; pp. 1444–1454. [Google Scholar]
  57. Venkatakrishnan, S.V.; Bouman, C.A.; Wohlberg, B. Plug-and-Play priors for model based reconstruction. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2013; pp. 945–948. [Google Scholar]
  58. Wang, X.; Chan, S.H. Parameter-free Plug-and-Play ADMM for image restoration. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2017; pp. 1323–1327. [Google Scholar]
  59. He, J.; Yang, Y.; Wang, Y.; Zeng, D.; Bian, Z.; Zhang, H.; Sun, J.; Xu, Z.; Ma, J. Optimizing a Parameterized Plug-and-Play ADMM for Iterative Low-Dose CT Reconstruction. IEEE Trans. Med. Imaging 2019, 38, 371–382. [Google Scholar] [CrossRef] [PubMed]
  60. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning Deep CNN Denoiser Prior for Image Restoration. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2017; pp. 2808–2817. [Google Scholar]
  61. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends® Mach. Learn. 2010, 3, 1–122. [Google Scholar] [CrossRef]
  62. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2014; pp. 2862–2869. [Google Scholar]
  63. Tappen, M.F. Utilizing Variational Optimization to Learn Markov Random Fields. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2007; pp. 1–8. [Google Scholar]
  64. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Cham, Switzerland, 5–9 October 2015; pp. 234–241, ISBN 9783319245737. [Google Scholar]
  65. Chen, Y.; Pock, T. Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1256–1272. [Google Scholar] [CrossRef] [Green Version]
  66. Agustsson, E.; Timofte, R. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2017; pp. 1122–1131. [Google Scholar]
  67. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2017; pp. 1132–1140. [Google Scholar]
  68. Ma, K.; Duanmu, Z.; Wu, Q.; Wang, Z.; Yong, H.; Li, H.; Zhang, L. Waterloo Exploration Database: New Challenges for Image Quality Assessment Models. IEEE Trans. Image Process. 2017, 26, 1004–1016. [Google Scholar] [CrossRef]
  69. Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [Green Version]
  70. Zhao, N.; Wei, Q.; Basarab, A.; Dobigeon, N.; Kouame, D.; Tourneret, J.-Y. Fast Single Image Super-Resolution Using a New Analytical Solution for ℓ2–ℓ2 Problems. IEEE Trans. Image Process. 2016, 25, 3683–3697. [Google Scholar] [CrossRef] [Green Version]
  71. Pix4D Support Team. Selecting the Image Acquisition Plan Type; 2018. Available online: https://support.pix4d.com/hc/en-us/articles/209960726-Types-of-mission-Which-type-of-mission-to-choose (accessed on 28 May 2021).
  72. Mittal, A.; Soundararajan, R.; Bovik, A. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  73. Venkatanath, N.; Praneeth, D.; Maruthi Chandrasekhar, B.H.; Channappayya, S.S.; Medasani, S.S. Blind Image Quality Evaluation Using Perception Based Features. In Proceedings of the 2015 21st National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015. [Google Scholar]
  74. Horé, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar] [CrossRef]
  75. James, M.; Robson, S.; D’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66. [Google Scholar] [CrossRef] [Green Version]
  76. James, M.R.; Robson, S.; Smith, M.W. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: Precision maps for ground control and directly georeferenced surveys. Earth Surf. Process. Landforms 2017, 42, 1769–1788. [Google Scholar] [CrossRef]
Figure 1. Photogrammetric process quality pillars.
Figure 2. Research process.
Figure 3. Blur kernels.
Figure 4. Reference model: (a) Orthophoto map; (b) Digital surface model (DSM).
Figure 5. Image visual comparison for denoise task: (a) Ground truth; (b) Degraded; (c) Restored.
Figure 6. Image visual comparison for denoise and deblur task: (a) Ground truth; (b) Degraded; (c) Restored (image presented for kernel = k4).
Figure 7. Image visual comparison for deblur and super-resolution task: (a) Ground truth; (b) Degraded; (c) Restored (image presented for kernel = k4).
Figure 8. Assessment of image quality for specific subtasks and kernels.
Figure 9. Comparison of basic parameters relating to photogrammetric products based on restored images.
Figure 10. M3C2 distances calculated relative to the reference cloud (scale in meters); μ and σ calculated for a fitted Gaussian distribution.
Figure 11. Change in the value of the median (a) and standard deviation (b) for all cases.
Figure 12. Visual comparison of an orthoimage fragment.
Figure 13. Visual comparison of digital elevation models.
Table 1. Accuracy-related data of the created reference model.
Flight Altitude | Ground Resolution | Tie Points | Projections | Reprojection Error
105 m | 3.05 cm/pix | 134,894 | 450,874 | 0.952 pix
Camera locations and error estimates
X error (m) | Y error (m) | Z error (m) | XY error (m) | Total error (m)
2.60381 | 2.36838 | 44.9193 | 3.51981 | 45.057
GCP locations and error estimates
X error (cm) | Y error (cm) | Z error (cm) | XY error (cm) | Total (cm)
8.8375 | 11.0233 | 2.59586 | 14.1285 | 14.365
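The composite errors in Table 1 (and in Tables 3 and 5 below) are consistent with a root-sum-square combination of the per-axis RMSE values. The following Python sketch illustrates this relation; the helper name is illustrative, and the snippet is a verification aid rather than part of the photogrammetric processing chain.

```python
import math

def composite_errors(x, y, z):
    """Combine per-axis RMSE values into planimetric (XY) and total (3D) errors."""
    xy = math.hypot(x, y)                    # XY error
    total = math.sqrt(x**2 + y**2 + z**2)    # total error
    return xy, total

# GCP errors of the reference model, Table 1 (cm): X = 8.8375, Y = 11.0233, Z = 2.59586
xy, total = composite_errors(8.8375, 11.0233, 2.59586)
print(f"XY = {xy:.4f} cm, total = {total:.3f} cm")  # approx. 14.1285 cm and 14.365 cm
```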
Table 2. Reported survey data for the degraded dataset.
Task Name | Noise Level (σ) | Blur Kernel (k) | Down Sample (s) | Flying Altitude (Reported) (m) | Ground Resolution (cm/pix) | Tie Points | Key Points | Mean KP Size (pix) | Reprojection Error
Noise | 15 | 0 | 0 | 102 | 3.06 | 157,662 | 167,865 | 6.2568 | 0.2629
Blur-1 | 7.65 | 1 | 0 | 105 | 3.05 | 150,367 | 160,282 | 5.1729 | 0.2814
Blur-2 | 7.65 | 2 | 0 | 105 | 3.05 | 146,891 | 155,971 | 5.2177 | 0.2811
Blur-3 | 7.65 | 3 | 0 | 105 | 3.05 | 143,123 | 151,473 | 5.3352 | 0.2782
Blur-4 | 7.65 | 4 | 0 | 104 | 3.05 | 141,878 | 149,727 | 5.5492 | 0.2726
Blur-5 | 7.65 | 5 | 0 | 104 | 3.05 | 142,278 | 150,383 | 5.6549 | 0.2729
Blur-6 | 7.65 | 6 | 0 | 104 | 3.05 | 142,454 | 150,469 | 5.7461 | 0.2697
Blur-7 | 7.65 | 7 | 0 | 104 | 3.05 | 142,545 | 150,834 | 5.6724 | 0.2710
Blur-8 | 7.65 | 8 | 0 | 104 | 3.05 | 140,163 | 147,779 | 5.8739 | 0.2654
Low-1 | 0 | 1 | 2 | 83.6 | 6.08 | 59,159 | 63,549 | 6.7321 | 0.1681
Low-2 | 0 | 2 | 2 | 97.4 | 6.07 | 120,169 | 126,437 | 2.8486 | 0.2284
Low-3 | 0 | 3 | 2 | 97.9 | 6.07 | 117,191 | 122,997 | 2.7376 | 0.2356
Low-4 | 0 | 4 | 2 | 98.2 | 6.07 | 113,919 | 119,591 | 2.6568 | 0.2412
Low-5 | 0 | 5 | 2 | 98 | 6.07 | 114,383 | 120,136 | 2.6863 | 0.2386
Low-6 | 0 | 6 | 2 | 97.8 | 6.07 | 114,190 | 119,815 | 2.6927 | 0.2396
Low-7 | 0 | 7 | 2 | 97.8 | 6.07 | 115,036 | 120,659 | 2.7055 | 0.2372
Low-8 | 0 | 8 | 2 | 98.1 | 6.07 | 111,672 | 117,269 | 2.6364 | 0.2422
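The degradation parameters listed in Table 2 (noise level σ, blur kernel k, downsampling factor s) describe how the test images were corrupted before processing. A minimal Python sketch of how such degradations are commonly synthesized follows; it is a generic illustration under assumed conventions (an 8-bit intensity scale and a simple box kernel instead of the kernels from Figure 3), not the exact pipeline used in the study.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(img, sigma=0.0, kernel=None, scale=1):
    """Blur with a convolution kernel (k), add Gaussian noise with std sigma (σ, 8-bit scale),
    and downsample by an integer factor (s), mirroring the Table 2 parameters."""
    out = img.astype(np.float64)
    if kernel is not None:
        out = convolve(out, kernel, mode="reflect")            # blur
    if sigma > 0:
        out = out + np.random.normal(0.0, sigma, out.shape)    # additive Gaussian noise
    if scale > 1:
        out = out[::scale, ::scale]                            # downsampling
    return np.clip(out, 0, 255).astype(np.uint8)

# Illustration on a synthetic single-channel image: box blur plus 2x downsampling (cf. the Low-* cases)
img = np.zeros((480, 640), dtype=np.uint8)     # placeholder for a real aerial image
box = np.ones((5, 5)) / 25.0                   # assumed box kernel, not one of the study's kernels
low_like = degrade(img, sigma=0.0, kernel=box, scale=2)
```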
Table 3. Control points’ RMSE for the degraded dataset. X—Easting, Y—Northing, Z—Altitude.
Task Name | X Error (cm) | Y Error (cm) | Z Error (cm) | Total Error (cm) | Image Pix (pix)
Noise | 8.4020 | 9.6224 | 2.3212 | 12.9835 | 5.0720
Blur-1 | 8.8888 | 10.7857 | 2.5800 | 14.2127 | 5.5300
Blur-2 | 8.8691 | 10.7893 | 2.5590 | 14.1993 | 5.5210
Blur-3 | 8.8499 | 10.7562 | 2.5367 | 14.1581 | 5.4990
Blur-4 | 8.7548 | 10.5660 | 2.4773 | 13.9436 | 5.4120
Blur-5 | 8.7308 | 10.5434 | 2.4603 | 13.9084 | 5.3920
Blur-6 | 8.6983 | 10.4072 | 2.4487 | 13.7829 | 5.3460
Blur-7 | 8.7131 | 10.4564 | 2.4522 | 13.8299 | 5.3630
Blur-8 | 8.6437 | 10.3358 | 2.4190 | 13.6892 | 5.3040
Low-1 | 3.1234 | 3.0810 | 0.9155 | 4.4818 | 2.9270
Low-2 | 4.2267 | 5.1344 | 1.5349 | 6.8252 | 4.3040
Low-3 | 4.2444 | 5.1715 | 1.5380 | 6.8647 | 4.3310
Low-4 | 4.2609 | 5.2049 | 1.5435 | 6.9014 | 4.3560
Low-5 | 4.2014 | 5.1513 | 1.0163 | 6.7246 | 4.1950
Low-6 | 4.2515 | 5.1952 | 1.5395 | 6.8873 | 4.3460
Low-7 | 4.2491 | 5.1908 | 1.5421 | 6.8831 | 4.3430
Low-8 | 4.2623 | 5.2130 | 1.5454 | 6.9087 | 4.3600
Table 4. Reported survey data for the restored dataset.
Task Name | Flying Altitude (Reported) (m) | Ground Resolution (cm/pix) | Tie Points | Key Points | Mean KP Size (pix) | Reprojection Error
Denoise | 105 | 3.04 | 155,845 | 164,901 | 5.3908 | 0.2790
Deblur-1 | 105 | 3.04 | 144,744 | 153,653 | 4.7013 | 0.2959
Deblur-2 | 106 | 3.04 | 145,039 | 153,773 | 4.5899 | 0.3029
Deblur-3 | 106 | 3.04 | 143,553 | 151,660 | 4.5506 | 0.3072
Deblur-4 | 106 | 3.04 | 143,907 | 151,548 | 4.5854 | 0.3085
Deblur-5 | 106 | 3.04 | 142,921 | 150,196 | 4.6104 | 0.3096
Deblur-6 | 106 | 3.04 | 143,172 | 150,364 | 4.6520 | 0.3088
Deblur-7 | 106 | 3.04 | 143,590 | 151,112 | 4.6311 | 0.3081
Deblur-8 | 106 | 3.04 | 142,349 | 149,539 | 4.6615 | 0.3099
SuperRes-1 | 106 | 3.04 | 137,771 | 147,044 | 4.4076 | 0.3020
SuperRes-2 | 106 | 3.04 | 137,131 | 146,260 | 4.4005 | 0.3033
SuperRes-3 | 106 | 3.04 | 136,268 | 145,549 | 4.3965 | 0.3025
SuperRes-4 | 106 | 3.04 | 138,467 | 148,249 | 4.4112 | 0.3022
SuperRes-5 | 106 | 3.04 | 139,382 | 149,123 | 4.3895 | 0.3041
SuperRes-6 | 106 | 3.04 | 139,540 | 149,425 | 4.3984 | 0.3034
SuperRes-7 | 106 | 3.04 | 138,952 | 148,805 | 4.3964 | 0.3032
SuperRes-8 | 106 | 3.04 | 138,136 | 148,051 | 4.3263 | 0.3076
Table 5. Control points’ RMSE for the restored dataset. X—Easting, Y—Northing, Z—Altitude.
Task Name | X Error (cm) | Y Error (cm) | Z Error (cm) | Total Error (cm) | Image Pix (pix)
Denoise | 8.7724 | 10.4970 | 2.5069 | 13.9078 | 5.4130
Deblur-1 | 9.0044 | 11.0874 | 2.6820 | 14.5328 | 5.6640
Deblur-2 | 9.0454 | 11.1670 | 2.7056 | 14.6233 | 5.7010
Deblur-3 | 9.0804 | 11.2650 | 2.7266 | 14.7237 | 5.7430
Deblur-4 | 9.0521 | 11.2429 | 2.7066 | 14.6857 | 5.7240
Deblur-5 | 9.0454 | 11.1881 | 2.6959 | 14.6377 | 5.6970
Deblur-6 | 9.0326 | 11.1774 | 2.6838 | 14.6193 | 5.6810
Deblur-7 | 9.0291 | 11.1570 | 2.6885 | 14.6024 | 5.6810
Deblur-8 | 9.0259 | 11.1893 | 2.6801 | 14.6236 | 5.6820
SuperRes-1 | 9.1536 | 11.4198 | 2.7825 | 14.8977 | 5.8170
SuperRes-2 | 9.1633 | 11.4254 | 2.7752 | 14.9066 | 5.8180
SuperRes-3 | 9.1718 | 11.4320 | 2.7868 | 14.9191 | 5.8220
SuperRes-4 | 9.1614 | 11.3960 | 2.7679 | 14.8816 | 5.8120
SuperRes-5 | 9.1543 | 11.4001 | 2.7840 | 14.8833 | 5.8190
SuperRes-6 | 9.1532 | 11.3995 | 2.7652 | 14.8787 | 5.8110
SuperRes-7 | 9.1573 | 11.4041 | 2.7675 | 14.8852 | 5.8100
SuperRes-8 | 9.1951 | 11.4607 | 2.7958 | 14.9571 | 5.8430
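Read against the reference model in Table 1 (GCP total error of 14.365 cm), the restored datasets in Table 5 stay within roughly 0.6 cm of the reference. A short Python sketch of such a comparison follows; the values are copied from Tables 1 and 5, and the variable names are illustrative.

```python
reference_total_cm = 14.365  # GCP total error of the reference model (Table 1)

restored_totals_cm = {       # GCP total errors from Table 5 (selected rows)
    "Denoise": 13.9078,
    "Deblur-1": 14.5328,
    "Deblur-8": 14.6236,
    "SuperRes-1": 14.8977,
    "SuperRes-8": 14.9571,
}

for task, total in restored_totals_cm.items():
    print(f"{task}: {total - reference_total_cm:+.3f} cm relative to the reference model")
```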
Table 6. Median values for individual cases (mm).
Task/Kernel | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Ground Truth (kernel-independent) | 121.5112
Blur | 158.0656 | 156.5566 | 155.5781 | 158.5508 | 161.6197 | 161.3598 | 161.0773 | 162.7287
Deblur | 142.5501 | 141.8642 | 140.8487 | 141.7976 | 141.3076 | 142.2502 | 143.2718 | 142.0673
Low | 242.3442 | 95.2261 | 91.0714 | 86.7184 | 99.5156 | 87.4444 | 88.2774 | 84.1599
Super | 127.9740 | 127.1332 | 126.2845 | 129.7804 | 129.5772 | 131.4754 | 130.5548 | 128.8021
Noise (kernel-independent) | 174.4342
Denoise (kernel-independent) | 155.4079
Table 7. Standard deviation values for individual cases (mm).
Task/Kernel | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Ground Truth (kernel-independent) | 1320.0
Blur | 1333.3 | 1516.7 | 1475.7 | 1483.5 | 1364.8 | 1644.2 | 1419.2 | 1336.4
Deblur | 1315.0 | 1342.7 | 1203.4 | 1254.1 | 1188.7 | 1163.0 | 1259.5 | 1241.7
Low | 4858.3 | 892.2 | 934.8 | 792.6 | 945.6 | 828.0 | 837.5 | 828.3
Super | 1214.6 | 1114.7 | 1101.5 | 1136.4 | 1213.6 | 1163.9 | 1095.7 | 1169.2
Noise (kernel-independent) | 2002.9
Denoise (kernel-independent) | 1402.6
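Tables 6 and 7 summarize the M3C2 distance distributions of Figures 10 and 11 by their median and standard deviation. A minimal Python sketch of how such summary statistics can be computed from an array of per-point M3C2 distances is given below; whether the published medians use signed or absolute distances is an assumption here, and the synthetic input serves only as an illustration.

```python
import numpy as np

def m3c2_summary(distances_m):
    """Median (of absolute values, assumed) and standard deviation of M3C2 distances, in mm."""
    d = np.asarray(distances_m, dtype=float)
    d = d[~np.isnan(d)] * 1000.0            # drop invalid points, convert meters to millimeters
    return np.median(np.abs(d)), np.std(d)

# Purely synthetic, roughly Gaussian distances; not the study's data
rng = np.random.default_rng(0)
median_mm, std_mm = m3c2_summary(rng.normal(0.0, 0.5, 50_000))
print(f"median = {median_mm:.1f} mm, std = {std_mm:.1f} mm")
```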
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
