The need to observe and characterize the environment leads to a constant increase in the spatial, spectral, and radiometric resolution of new optical sensors. Thanks to the commissioning of satellite constellations, revisit times can be reduced, so multitemporal analysis is becoming widespread. Furthermore, taking advantage of the availability of many acquisition systems, multisensor analysis is now a real opportunity.
This context has motivated a special issue that presents interesting papers on the state of the art of optical remote sensing data processing, embracing all the specific topics that affect the quality of the data.
The special issue therefore covers not only the topics traditionally associated with quality but also some specific methodologies that have a direct impact on the quality of optical data. For this reason, some contributions are also pertinent to other important branches of remote sensing image processing.
In this editorial, all the papers of the special issue are presented. The papers are grouped according to their specific subject following a top-down approach, so that the papers with a more general topic are presented first.
From this perspective, therefore, two papers [1,2] are particularly important.
In [1], the authors present a critical review of image-quality criteria and of the best methodologies to assess and improve optical images acquired by Earth-observing satellites. The methods are grouped into three categories: radiometric, geometric, and spatial quality. Regarding the first category, the paper presents absolute calibration, radiometric stability, relative calibration, signal-to-noise ratio, and image artifacts as the primary on-orbit radiometry characterization parameters. Regarding the spatial category, the authors take into consideration the modulation transfer function, ground sampling distance, and aliasing. In the last category, i.e., geometry, registration and geodetic accuracy are the quality parameters considered. Furthermore, the paper presents the best method to assess and improve each of these quality parameters, providing the corresponding specifications or requirements. When multiple methods exist, recommendations are provided based on the strengths and weaknesses of each. This review therefore helps satellite owners and operators decide which quality assessment methods to rely on, while data users can better appreciate the quality of scientific observations by understanding the different quality criteria.
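As a simple illustration of one of the radiometric parameters listed above, the signal-to-noise ratio can be estimated over a homogeneous target. The snippet below is only a minimal sketch, not the procedure of [1]; the image array and region coordinates are purely hypothetical.

```python
import numpy as np

def estimate_snr(image, region):
    """Estimate SNR as mean/std over a homogeneous region.

    image  : 2-D array (single band, radiometrically calibrated)
    region : (row_slice, col_slice) selecting a uniform target
    """
    patch = image[region]
    return patch.mean() / patch.std(ddof=1)

# Hypothetical usage on a synthetic uniform patch with additive noise
rng = np.random.default_rng(0)
band = 100.0 + rng.normal(0.0, 2.0, size=(512, 512))
snr = estimate_snr(band, (slice(100, 150), slice(200, 250)))
print(f"SNR ~ {snr:.1f}")
```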
Regarding [2], it presents a full-reference metric to assess the visual quality of remote sensing images. The paper is based on some important considerations. First of all, the visual quality of remote sensing images matters because their visual inspection and analysis are still widely used in practice. Furthermore, the number of visual quality metrics specialized for remote sensing images is limited, whereas a great number of visual quality metrics has been designed for other types of images. The key idea of the paper is that these metrics can be employed in remote sensing provided that they were designed for the same distortion types; under this condition, visual quality metrics for remote sensing data can be built on top of such elementary indexes. In this framework, the paper presents a neural-network (NN)-based visual quality metric that combines elementary metrics. Several NN configurations and methods of input data preprocessing are studied. The image database TID2013 is employed in the experiments since it includes images with the same types of distortions often observed in remote sensing. To further validate the proposed approach, the authors also consider some remote sensing images. The paper therefore provides evidence that the visual quality metric works well when applied to RGB remote sensing images, even though it was developed on TID2013, which is not itself a remote sensing dataset.
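To give a flavor of the general idea of combining elementary metrics with a neural network (not the specific architecture of [2]), the sketch below trains a small regressor that maps a vector of elementary quality scores to a subjective quality value; the data arrays are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per distorted image, columns are elementary metric values
#    (e.g. PSNR, SSIM, other full-reference scores) -- placeholder data
# y: mean opinion scores collected for the same images -- placeholder data
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = rng.normal(size=300)

model = make_pipeline(
    StandardScaler(),                  # normalize metric ranges before the NN
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)                        # learn a combined quality predictor
predicted_quality = model.predict(X[:5])
```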
Continuing the presentation of the papers of the special issue, two of them [3,4] consider the blurring effects and noise introduced into the images by the acquisition system.
In particular, in [3], the authors develop two simple algorithms to characterize and mitigate sensor-generated spatial correlations while emphasizing the implications of sensor point spread functions, i.e., considering the characteristics of the modulation transfer function (MTF). In hyperspectral imaging (HSI), in fact, the spatial contribution to each pixel is nonuniform and extends beyond the traditionally square spatial boundaries designated by the pixel resolution, resulting in sensor-generated blurring effects. The first algorithm quantifies spatial correlations, and the second performs deconvolution using a theoretically derived point spread function. Since the developed tools are simple and intuitive, they can be applied by end-users of all expertise levels.
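As a generic illustration of PSF-based deconvolution (not the algorithms of [3]), the sketch below deblurs a single band with Richardson–Lucy deconvolution, using an assumed Gaussian kernel as a stand-in for a theoretically derived PSF.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    """Build a normalized 2-D Gaussian kernel as a stand-in PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# Hypothetical single band of a hyperspectral cube, scaled to [0, 1]
rng = np.random.default_rng(0)
band = rng.random((128, 128))

psf = gaussian_psf()
# Richardson-Lucy deconvolution as a generic stand-in for the
# PSF-based deblurring discussed above
deblurred = richardson_lucy(band, psf, 30)
```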
Regarding [4], the authors propose a superpixel-based noise estimation algorithm suitable for hyperspectral images acquired by new-generation hyperspectral sensors. For this reason, the methodology accounts for both electronic and photon noise, i.e., it estimates the standard deviations of both the signal-independent and the signal-dependent noise components. Superpixel segmentation is applied to the first component obtained by minimum noise fraction (MNF) dimensionality reduction. Multiple linear regression (MLR) is then performed to remove the spectral correlation in the homogeneous regions detected by the MNF-based superpixel segmentation. By considering the local statistics of the residual image in each homogeneous region, the noise variances are obtained.
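To make the signal-dependent/signal-independent decomposition concrete, the sketch below fits the classical mixed noise model var = sigma_si^2 + k * mean over small homogeneous blocks; the blocks are a crude stand-in for the MNF-based superpixel segmentation and MLR step of [4], and the test data are synthetic.

```python
import numpy as np

def estimate_noise_components(band, block=16):
    """Rough estimate of signal-independent and signal-dependent noise.

    Splits a band into small blocks (assumed homogeneous), computes local
    mean and variance in each block, and fits var = sigma_si**2 + k * mean,
    separating the electronic (signal-independent) term from the photon
    (signal-dependent) term.
    """
    rows, cols = band.shape
    means, variances = [], []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            patch = band[r:r + block, c:c + block]
            means.append(patch.mean())
            variances.append(patch.var(ddof=1))
    # Least-squares fit of local variance against local mean
    A = np.column_stack([np.ones(len(means)), means])
    (sigma_si_sq, k), *_ = np.linalg.lstsq(A, np.array(variances), rcond=None)
    return max(sigma_si_sq, 0.0) ** 0.5, k

# Synthetic piecewise-constant signal with mixed noise (var = 2 + 0.5 * signal)
rng = np.random.default_rng(0)
levels = rng.uniform(50, 200, size=(16, 16))
signal = np.kron(levels, np.ones((16, 16)))
noisy = signal + rng.normal(size=signal.shape) * np.sqrt(2.0 + 0.5 * signal)
print(estimate_noise_components(noisy))   # roughly (sqrt(2), 0.5)
```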
Another group of papers [5,6] discusses the impact of weather conditions on the quality of remote sensing images.
In [5], the authors present an assessment of the influence of weather conditions on the radiometric quality of unmanned aerial vehicle (UAV) imagery acquired in the visible range. A new quality assessment indicator is developed for images obtained in different atmospheric conditions and with uniform lighting. This index considers the impact of air humidity and of the solar angle at acquisition time. It is studied for altitudes in the range of 50 to 300 m, i.e., the operating range at which UAVs typically acquire remote sensing images. Using this objective indicator, images are classified into appropriate categories, which will allow, in the future, improved vegetation index results and the development of new haze models specifically dedicated to UAV image data.
In [6], the author proposes an adaptive enhancement approach for optical remote sensing images driven by their viewing level of detail (LOD). For scenes partially covered by clouds or cirrocumulus, this approach provides a better visual effect and better spectral detail of satellite imagery compared with existing web mapping services. This methodology can therefore be used to improve remote sensing imagery that is already archived as well as imagery that will be acquired in the future.
Weather conditions also play a role in the topic presented in [7]. This paper is based on the assumption that the availability of remote sensing images is a key factor for many applications; from this perspective, the availability factor can be considered within the framework of data quality. The authors propose a method for determining the available time windows (ATWs), considering the influence of parameters such as the sunlight angle, the elevation angle, and the type of sensor carried by the satellite. Furthermore, within the Availability–Capacity–Profitability framework, they develop a satellite effectiveness evaluation (SEE) model for satellite observation and data-downlink scheduling (SODS). The effects of weather uncertainties on task success are considered in the SEE model. The model developed in this paper can be applied to support decision-makers in optimizing and improving task arrangements for Low Earth orbit (LEO) satellites.
The papers [8,9,10] can be grouped together because all of them deal with feature extraction, even though each considers a different aspect of this topic.
The first contribution [8] of this group is the most general. The authors present a methodology to assess the segmentation of RGB remote sensing imagery. Its novelty consists of taking into account subjective evaluation by humans, which in many fields is the most reliable approach for determining image quality. The authors consider three classes of objective quality metrics. The first, called External, includes quality metrics calculated from the confusion matrix. The second class, Internal, includes quality metrics that require no external information, such as a reference image, to evaluate the segmentation quality; the segmented result is instead evaluated based on a particular set of characteristics derived from the initial dataset. The last class, Image Quality Assessment, includes metrics that directly assess the quality of the segmented image. Based on the “DeepGlobe Land Cover Classification Challenge” dataset, the authors built a specific dataset of satellite images with ground truth (GT). The clustering algorithm adopted in the experiments is k-means++ based on color information. The effectiveness of satellite image segmentation is assessed by determining the correlation between subjective and objective quality metrics, using Pearson’s Linear Correlation Coefficient (PLCC) and Spearman’s Rank Order Correlation Coefficient (SROCC). The approach presented can be applied to improve current state-of-the-art segmentation methods.
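The correlation step can be illustrated with a minimal sketch: given per-image subjective scores and the values of any objective metric (both arrays below are hypothetical), PLCC and SROCC are computed as follows.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical scores, one entry per segmented image
subjective = np.array([4.1, 3.2, 2.8, 4.5, 3.9, 2.1, 3.5])        # e.g. mean opinion scores
objective = np.array([0.82, 0.61, 0.55, 0.90, 0.78, 0.40, 0.70])  # e.g. an External metric

plcc, _ = pearsonr(objective, subjective)    # linear agreement
srocc, _ = spearmanr(objective, subjective)  # monotonic (rank) agreement
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```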
The paper [9] develops a modeling and measurement approach for assessing the uncertainty of image features extracted from remote sensing images. The study investigates and summarizes the characteristics of feature uncertainty, taking into account its sources and formation mechanisms; a modeling and measurement approach for the uncertainty of image features is then proposed on the basis of these characteristics. The proposed feature uncertainty index can comprehensively describe and quantify feature uncertainty. This study may therefore contribute to the development of uncertainty control methods or reliable classification schemes for remote sensing images.
In [10], the authors investigate the use of Extended Multiattribute Profiles (EMAPs) to generate synthetic bands and examine the impact of EMAPs on change detection performance with heterogeneous images. Heterogeneous image pairs (also known as multimodal image pairs), which are acquired by different imagers, are used as pre-event and post-event images. The experiments are carried out using ten change detection algorithms and five datasets, for fifty cases overall. The results show that change detection performance improves in thirty-four of the fifty cases, providing a strong indication of the positive impact of using EMAPs. The improvement is particularly relevant when the image pair contains only a few original bands.
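The idea of augmenting an image with synthetic bands can be illustrated with a highly simplified morphological profile; the EMAPs of [10] are built from attribute filters over several attributes and are considerably richer, so the sketch below is only a stand-in for the concept.

```python
import numpy as np
from skimage.morphology import disk, opening, closing

def morphological_profile(band, radii=(1, 2, 4)):
    """Stack a band with its openings and closings at several scales.

    A simplified profile used only to illustrate the generation of
    synthetic bands from a single input band.
    """
    features = [band]
    for r in radii:
        footprint = disk(r)
        features.append(opening(band, footprint))
        features.append(closing(band, footprint))
    return np.stack(features, axis=-1)   # H x W x (1 + 2 * len(radii))

# Hypothetical single-band pre-event image
rng = np.random.default_rng(0)
pre_event = rng.random((64, 64))
profile = morphological_profile(pre_event)
print(profile.shape)   # (64, 64, 7)
```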
The special issue covers another important topic, pansharpening, by means of two papers [11,12].
In the first paper [11], the authors define a pansharpening method that exhibits the features necessary to become a benchmark. Its development starts from a widespread hybrid method introduced in 2005 and based on the wavelet transform, i.e., the Additive Wavelet Luminance Proportional (AWLP) method. A series of modifications is introduced to achieve an implicit optimization towards the spectral and spatial responses of the imaging instrument and towards the radiative transfer through the atmosphere; haze correction is also considered. This revisited AWLP method is comparatively evaluated against some of the highest-performing methods in the literature and, on average, ranks first.
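For readers unfamiliar with this family of methods, the sketch below shows a generic additive, luminance-proportional detail-injection scheme in the spirit of AWLP. A Gaussian low-pass residual stands in for the wavelet decomposition, and none of the MTF- or haze-related refinements of [11] are included; all inputs are toy data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def awlp_style_fusion(ms, pan, ratio=4, sigma=2.0):
    """Additive, luminance-proportional detail injection (AWLP-style sketch).

    ms  : H x W x B multispectral image at low resolution
    pan : (H*ratio) x (W*ratio) panchromatic image
    """
    # Upsample the MS bands to the PAN grid
    ms_up = zoom(ms, (ratio, ratio, 1), order=3)
    # Extract spatial details from the PAN image (low-pass residual)
    details = pan - gaussian_filter(pan, sigma)
    # Inject details proportionally to each band's share of the intensity
    intensity = ms_up.mean(axis=-1) + 1e-12
    return ms_up + (ms_up / intensity[..., None]) * details[..., None]

# Hypothetical toy inputs
rng = np.random.default_rng(0)
ms = rng.random((32, 32, 4))
pan = rng.random((128, 128))
fused = awlp_style_fusion(ms, pan)
print(fused.shape)   # (128, 128, 4)
```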
In [12], the authors propose assessing pansharpened products by using an image quality assessment (IQA) measure. The approach supports the visual qualitative analysis of pansharpened images by exploiting the statistics of natural images, i.e., natural scene statistics (NSS): pansharpened images can be considered affected by the same types of distortions that NSS models capture in natural images. With this approach, the quality of a pansharpened image can be predicted automatically and accurately, as it would be perceived and reported by human viewers. The proposed image quality analyzers are therefore completely blind, and their predictions are highly correlated with human subjective evaluations. This paper thus contributes to supporting and standardizing the visual qualitative evaluation of pansharpened images.
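As an illustration of the NSS idea (not the specific analyzers of [12]), the sketch below computes the mean-subtracted contrast-normalized (MSCN) coefficients that underpin many NSS-based blind IQA models; the input image is a placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7 / 6, c=1e-3):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    Pristine natural images have MSCN values that closely follow a
    generalized Gaussian distribution; distortions change that
    distribution, which is what NSS-based blind IQA models exploit.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    sigma_local = np.sqrt(np.abs(gaussian_filter(image**2, sigma) - mu**2))
    return (image - mu) / (sigma_local + c)

# Hypothetical grayscale version of a pansharpened image
rng = np.random.default_rng(0)
img = rng.random((256, 256))
mscn = mscn_coefficients(img)
# A simple NSS feature: the variance of the MSCN coefficients
print(mscn.var())
```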