2.1. Radar Imaging and Processing
The radar research presented in this Special Issue spans many application fields, from satellite-level observation, through airborne imaging and maritime navigation and safety, to ground and underground investigation.
The new method of parallax correction for clouds observed by geostationary satellites is presented by Bielinski [
1]. The parallax shift effect of clouds occurs in satellite imaging, especially in the case of high satellite observation angles. The developed methods were compared with a known analytical method, namely the Vicente et al./Koenig method, which approximates the position of the cloud by means of an ellipsoid whose semi-axes are increased by the height of the cloud, with an error of up to 50 m. The two methods proposed in the article allow for a significant reduction of this error. The first method, an extended version of the Vicente et al./Koenig method, reduces the error to centimeters. The second method, by adjusting the number of iterations, reduces the error to a value close to zero. The article presents an example numerical solution procedure using the Newton method and also describes a simulation experiment verifying the proposed methods. Because the resolution of currently operating geostationary Earth observation (EO) satellites ranges from 0.5 km to 8 km, with pixel dimensions much larger than 50 m, the proposed methods will find application when the resolution of geostationary EO satellites reaches the assumed 50 m.
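For readers who want to experiment with the underlying geometry, the sketch below illustrates the basic idea of parallax correction under a simplified spherical-Earth assumption: the apparent cloud position is moved along the satellite line of sight onto a sphere inflated by the cloud-top height. This is only a hedged illustration in Python with assumed constants and a hypothetical function name, not the author's ellipsoidal or iterative Newton implementation; for an ellipsoid, the closed-form line–sphere intersection below would be replaced by the iterative scheme discussed above.

```python
# Hedged sketch (not the author's code): spherical-Earth parallax correction.
import numpy as np

R_E = 6371.0e3      # mean Earth radius [m] (spherical simplification)
H_SAT = 35786.0e3   # geostationary altitude [m]

def parallax_correct(sat_ecef, apparent_ecef, cloud_height):
    """Move the apparent (surface) cloud position along the satellite-to-point
    ray until it lies on the sphere of radius R_E + cloud_height."""
    d = apparent_ecef - sat_ecef
    d = d / np.linalg.norm(d)                   # unit line-of-sight vector
    r = R_E + cloud_height
    # Solve |sat + t*d|^2 = r^2 for t (closed form for a sphere).
    b = 2.0 * np.dot(sat_ecef, d)
    c = np.dot(sat_ecef, sat_ecef) - r * r
    t = (-b - np.sqrt(b * b - 4.0 * c)) / 2.0   # intersection nearer the satellite
    return sat_ecef + t * d

sat = np.array([R_E + H_SAT, 0.0, 0.0])                           # satellite on the x-axis
apparent = np.array([R_E * np.cos(0.1), R_E * np.sin(0.1), 0.0])  # apparent surface point
print(parallax_correct(sat, apparent, 10e3))                      # cloud top at 10 km
```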
New satellite computing capabilities and extended applications for SAR imaging products have stimulated research into real-time synthetic aperture radar (SAR) imaging. The orbit determination data of the SAR platform in space are essential for the SAR imaging procedure. In the case of real-time SAR imaging, the orbit determination data available on board cannot reach an accuracy equivalent to that of the orbital ephemeris used in ground-based SAR processing, which requires long processing times with the commonly used ground-based SAR imaging procedures. It is therefore important to investigate the impact of errors in real-time orbit determination data on the quality of SAR imaging. Yan et al. [
2], instead of using the commonly applied numerical simulation method, propose an analytical approximation model of the quadratic phase error (QPE) introduced by orbit determination errors. The model can provide approximation results at two granularities: approximation with the true anomaly of the satellite as an independent variable and approximation over all positions in the whole orbit of the satellite. The proposed analytical approximation model reduces the complexity of the simulation, the calculation range, and the processing time. Moreover, the model reveals how orbit determination errors propagate into the QPE. A detailed comparison of the proposed method with the numerical simulation method demonstrates the accuracy and reliability of the analytical approximation model.
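To give a feel for the orders of magnitude involved, the following back-of-envelope sketch uses the standard textbook relation between a residual line-of-sight acceleration error and the resulting quadratic phase error; it is not the authors' analytical approximation model, and the wavelength, aperture time, and error value are assumed for illustration only.

```python
# Hedged back-of-envelope: QPE caused by a residual line-of-sight acceleration error.
import numpy as np

wavelength = 0.03   # [m], e.g. X-band (assumed)
T_a = 2.0           # synthetic-aperture time [s] (assumed)
delta_a = 0.01      # residual line-of-sight acceleration error [m/s^2] (assumed)

# two-way range error dR(t) = 0.5 * delta_a * t^2 gives phase (4*pi/lambda) * dR(t);
# the peak quadratic phase error is evaluated at the aperture edge t = T_a / 2
qpe = (4 * np.pi / wavelength) * 0.5 * delta_a * (T_a / 2) ** 2
print(f"peak QPE = {qpe:.2f} rad ({np.degrees(qpe):.1f} deg)")
# a common rule of thumb tolerates |QPE| <= pi/4 before focusing degrades noticeably
```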
Due to advantages such as low power consumption and high concealment, deceptive jamming against synthetic aperture radar (SAR) has received extensive attention during the last few decades. However, large-scene deceptive jamming is still a challenge because of the huge computing burden. Yang et al. [
3] propose a new large-scene deceptive jamming algorithm. First, the time-delay and frequency-shift (TDFS) algorithm is introduced to improve the jamming processing speed. The jammer system function (JSF) for a false scatterer is simplified to the product of the scattering coefficient, a time-delay term in the range dimension, and a frequency-shift term in the azimuth dimension. Then, to address the fact that the effective region of the TDFS algorithm is limited, the deceptive jamming scene template is divided into several blocks according to the SAR parameters and an imaging quality control factor. The JSF of each block is calculated by the TDFS algorithm, and the results are summed to achieve large-scene jamming. Finally, a correction algorithm for squint mode is derived. The simplification and parallel block processing improve the calculation efficiency significantly. The simulation results verified the validity of the algorithm.
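The central simplification summarized above, that the JSF of one false scatterer is the product of a scattering coefficient, a range-dimension time-delay term, and an azimuth-dimension frequency-shift term, can be sketched in a few lines. The code below is an illustrative Python rendering of that product on a range-frequency/slow-time grid with assumed parameter values, not the authors' jammer implementation.

```python
# Illustrative TDFS-style jammer system function (JSF) for one false scatterer.
import numpy as np

def jsf_false_scatterer(f_range, t_azimuth, sigma, delta_tau, delta_fd):
    """f_range: range-frequency axis [Hz]; t_azimuth: slow-time axis [s];
    sigma: complex scattering coefficient; delta_tau: extra two-way delay [s];
    delta_fd: Doppler frequency shift [Hz]. Returns a 2-D JSF grid."""
    range_term = np.exp(-1j * 2 * np.pi * f_range * delta_tau)     # time delay
    azimuth_term = np.exp(1j * 2 * np.pi * delta_fd * t_azimuth)   # frequency shift
    return sigma * np.outer(range_term, azimuth_term)

# for a large scene the template would be split into blocks and the per-block JSFs summed
jsf = jsf_false_scatterer(np.linspace(-50e6, 50e6, 256),
                          np.linspace(-0.5, 0.5, 128),
                          sigma=0.8 + 0.2j, delta_tau=1.2e-6, delta_fd=150.0)
print(jsf.shape)
```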
Another interesting approach to SAR data processing is presented by Chen et al. [
4]. The method developed by the authors significantly improves image quality and depth of field and enables the efficient wide-swath processing of high-resolution, low-frequency SAR data. High-resolution, low-frequency synthetic aperture radar (SAR) suffers from severe range–azimuth phase coupling due to its large bandwidth and long integration time, so high-performance processing methods are essential for focusing the raw data of such radars. The generalized chirp scaling algorithm (GCSA) is widely accepted as an attractive solution for focusing low-frequency, wide-bandwidth, and wide-beam SAR systems. However, as the bandwidth and/or beamwidth increases, severe phase coupling reduces the performance of the current GCSA and degrades imaging quality. This degradation is mainly due to two reasons: the residual high-order phase coupling and the non-negligible error introduced by the linear approximation of the stationary phase point when applying the principle of stationary phase (POSP). The authors first present the principle of determining the required order of the range frequency expansion. After compensating for the range-independent coupling phase terms above the third order, an analytically improved GCSA based on the Lagrange inversion theorem is derived. The Lagrange inversion allows for the accurate compensation of the high-order range-dependent coupling phase. Imaging results for P- and L-band SAR data indicate the excellent performance of the proposed algorithm compared to the existing GCSA.
Periodic gaps in raw synthetic aperture radar (SAR) data, which can be induced in various ways, create challenges in focusing the raw data. To deal with this problem, Qian and Zhu [
5] propose a new method in which complex deconvolution is used to reconstruct the azimuth spectrum of the complete data from the periodically gapped raw data. In other words, the proposed method offers a new, robust way of handling periodically gapped raw SAR data by means of complex deconvolution applied to the azimuth data. The algorithm consists mainly of phase compensation followed by the recovery of the azimuth spectrum of the raw data using complex deconvolution. After phase compensation, the data become sparse in the Doppler domain, which makes it possible to recover the azimuth spectrum of the complete raw data by complex deconvolution in that domain. A traditional SAR imaging algorithm is then able to focus the reconstructed raw data. The effectiveness of the proposed method was confirmed by simulations of point and surface targets, and real SAR data were used to further demonstrate its validity.
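As a generic illustration of the deconvolution building block mentioned above (and only that; the paper's phase compensation and gap handling are not reproduced here), the sketch below shows a regularized, Wiener-style complex deconvolution in the transform domain for a sequence that is the circular convolution of an unknown complex signal with a known kernel. All signals are synthetic and the function name is hypothetical.

```python
# Generic complex Wiener-style deconvolution sketch (a standard building block).
import numpy as np

def wiener_deconvolve(y, h, eps=1e-3):
    """y: observed complex sequence = circular convolution of x with h;
    h: known kernel; eps: regularization. Returns an estimate of x."""
    Y, H = np.fft.fft(y), np.fft.fft(h, n=len(y))
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.fft.ifft(X)

rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)   # "true" signal
h = np.exp(-np.arange(8) / 3.0)                                # known kernel
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 256))            # circular convolution
print(np.allclose(wiener_deconvolve(y, h, eps=1e-9), x, atol=1e-3))   # True
```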
Appreciating the great importance of synthetic aperture radar (SAR) processing of moving targets, which become defocused due to their unknown motion parameters, an effective SAR refocusing algorithm for moving targets is presented in [
6]. For fast-moving targets, range cell migration (RCM), Doppler frequency migration, and Doppler ambiguity are complex problems, and as a result such targets are difficult to focus. The algorithm proposed by Wan et al. [
6] consists of three main stages. First, the RCM is corrected by sequence reversing, complex matrix multiplication, and an improved second-order RCM correction function. Second, a one-dimensional scaled Fourier transform is introduced to estimate the residual chirp rate. Third, a matched filter based on the estimated chirp rate is constructed to focus the maneuvering target in the azimuth time domain. The method described in the paper is computationally efficient, as it can be implemented with the fast Fourier transform (FFT), inverse FFT, and non-uniform FFT. A new deramp function is proposed to further resolve the serious Doppler ambiguity problem, and a false-peak recognition procedure based on cross-sectional analysis is introduced. Simulated and real data processing results demonstrate the validity of the proposed focusing algorithm and the false-peak recognition procedure.
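The final azimuth-focusing step summarized above amounts to dechirping with the estimated chirp rate and applying an FFT. The toy Python example below shows this matched-filtering idea on a synthetic azimuth chirp with assumed parameters; it is not the authors' full three-stage implementation.

```python
# Dechirp-and-FFT matched filtering of a synthetic azimuth chirp.
import numpy as np

fs, T = 1000.0, 1.0                        # sampling rate [Hz], dwell time [s]
t = np.arange(-T / 2, T / 2, 1 / fs)
k_true = 80.0                              # azimuth chirp rate [Hz/s]
signal = np.exp(1j * np.pi * k_true * t ** 2)             # defocused (chirped) target

k_est = 80.0                               # chirp rate assumed estimated in stage two
matched = signal * np.exp(-1j * np.pi * k_est * t ** 2)   # dechirp
spectrum = np.abs(np.fft.fftshift(np.fft.fft(matched)))
print("peak-to-mean ratio:", spectrum.max() / spectrum.mean())  # sharp peak => focused
```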
An interesting approach to imaging using interferometric inverse synthetic aperture radar (InISAR) was presented by Zhang et al. [
7]. A strong scattering centers fusion (SSCF) technique was proposed to estimate the translational motion parameters of a maneuvering target. Compared with previous InISAR image registration methods, the SSCF technique is beneficial due to its high computational efficiency, excellent anti-noise performance, high registration precision, and simple system structure. Using a one-dimensional, three-output terahertz InISAR system, the translational motion parameters in both the azimuth and height directions are precisely estimated. First, motion measurement curves are extracted from the spatial spectra of isolated strong scattering centers, which allows researchers to avoid the adverse effects of noise and the “angular scintillation” phenomenon. Next, the translational motion parameters are obtained by subjecting the motion measurement curves to phase unwrapping and intensity-weighted fusion processing. Finally, the ISAR images are accurately registered by compensating for the estimated translational motion parameters, and high-quality InISAR imaging results are obtained. The validity of the proposed method was proven by both simulation and experimental results.
The use of radar techniques to classify aircraft objects was undertaken by Wang et al. [
8]. With conventional narrowband radars, the available target information is limited, and the radar has difficulty accurately identifying the type of target. In particular, the classification probability can be further reduced if some echo data are missing. By extracting target characteristics in the time and frequency domains from the sparse echo data of multiple wave gates, a classification algorithm for conventional narrowband radar is presented to identify three different types of aircraft target, i.e., helicopter, propeller aircraft, and jet. A classical reconstruction algorithm is used to reconstruct the frequency spectrum of a single wave gate from its incomplete echo data. The micro-Doppler effect caused by the rotating parts of the different targets is analyzed, and features such as the amplitude deviation factor, wave entropy in the time domain, and wave entropy in the frequency domain are then extracted from the reconstructed echo data in order to identify the targets. The target characteristics extracted from the multiple wave gates of the reconstructed echo data are then weighted and combined to improve the classification accuracy. Finally, the combined feature vectors are fed into a support vector machine (SVM) model for classification. The presented algorithm can effectively process sparse echo data and achieves a higher classification probability by combining the weighted multi-wave-gate echo features. The results of simulation tests confirming the correctness of the algorithm are presented.
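A minimal sketch of the feature-plus-SVM pipeline described above is given below. The two stand-in features (an amplitude deviation factor and the spectral entropy of the Doppler spectrum) and the synthetic rotor-modulated echoes are assumptions made for illustration; the paper's exact feature definitions, gating, and weighting scheme are not reproduced. Requires numpy and scikit-learn.

```python
# Illustrative micro-Doppler feature extraction and SVM classification.
import numpy as np
from sklearn.svm import SVC

def echo_features(echo):
    """Toy features from one complex echo sequence: amplitude deviation factor
    (std/mean of |echo|) and spectral entropy of the Doppler spectrum."""
    amp = np.abs(echo)
    adf = amp.std() / (amp.mean() + 1e-12)
    p = np.abs(np.fft.fft(echo)) ** 2
    p = p / p.sum()
    spectral_entropy = -(p * np.log(p + 1e-12)).sum()
    return np.array([adf, spectral_entropy])

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
# synthetic "rotor-modulated" (helicopter-like) vs unmodulated (jet-like) echoes
X = np.array([echo_features(np.exp(2j * np.pi * 50 * t)
                            * (1 + a * np.sin(2 * np.pi * 30 * t))
                            + 0.1 * rng.standard_normal(t.size))
              for a in np.r_[np.full(50, 0.8), np.zeros(50)]])
y = np.r_[np.ones(50), np.zeros(50)]
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```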
The problem of protection against the increasingly common presence of small unmanned aerial vehicles (UAVs) in recent years has been addressed by Nowak et al. [
9]. UAVs, popularly known as drones, are used to carry out many tasks, but they are mainly used for observation by both private individuals and professionals. Intrusions into the airspace of airports and other dangerous events involving drones have been observed, and more and more attention is being paid to finding solutions that prevent such incidents. In many cases, cost analysis rules out the idea of building stationary UAV detection systems, so it seems advisable to develop mobile anti-drone systems based on frequency-modulated continuous-wave (FMCW) radars. The joint operation of a radar chain requires that the measurements be reduced to a common reference surface and that the orientation of each radar be consistent with respect to north. Accurate determination of the constant corrections to the measured angles is therefore a necessity. The authors propose a method for the quick, simultaneous calibration of a set of mobile FMCW radars operating in a network. The method was tested by means of a numerical experiment consisting of 95,000 tests. Satisfactory results were obtained, confirming the assumptions made, with the north orientation of the radars improved over the whole range of initial errors. The conducted experiments allow researchers to put forward a thesis about the advisability of the practical use of the proposed method.
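A simplified stand-in for the calibration idea, shown below, estimates a single radar's constant azimuth (north-orientation) offset as the circular mean of the differences between measured and true bearings of targets at known positions. This is an assumption-laden miniature of the problem, not the authors' simultaneous network calibration method; the positions, noise level, and 3-degree bias are invented for the example.

```python
# Estimating a constant azimuth bias from detections of targets at known positions.
import numpy as np

def wrap(angle):
    """Wrap angles to (-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def azimuth_bias(radar_xy, targets_xy, measured_az):
    """Circular-mean estimate of the constant azimuth offset (x = east, y = north)."""
    true_az = np.arctan2(targets_xy[:, 0] - radar_xy[0],
                         targets_xy[:, 1] - radar_xy[1])      # bearing from north
    residuals = wrap(measured_az - true_az)
    return np.arctan2(np.sin(residuals).mean(), np.cos(residuals).mean())

rng = np.random.default_rng(2)
radar = np.array([0.0, 0.0])
targets = rng.uniform(-500, 500, size=(20, 2))
true_bias = np.deg2rad(3.0)
measured = wrap(np.arctan2(targets[:, 0], targets[:, 1]) + true_bias
                + np.deg2rad(0.2) * rng.standard_normal(20))
print(np.rad2deg(azimuth_bias(radar, targets, measured)))     # close to 3 degrees
```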
A major part of the Special Issue covered topics related to the maritime use of radar. In the article by Hessner et al. [
10], the authors used X-band marine radar (MR) to obtain data on sea surface currents. The quality of the measurements was verified by a quality control system working in near real time. The obtained results were validated against measurements from an acoustic Doppler current profiler (ADCP). Numerous experiments were carried out under various wave, current, and weather conditions. The obtained results confirmed the accuracy and reliability of MR measurements of sea surface currents.
Another example of the use of marine navigation radar, this time in the task of collision prevention, can be found in the article by Lisowski and Mohamed–Seghir [
11]. The authors present a method of optimizing collision prevention maneuvers in the navigator's decision support system. The decision-making process is presented as a multi-stage optimization in a fuzzy and game environment, in which objective and subjective navigation parameters are analyzed. An interesting experiment was conducted on the basis of a real navigation situation involving the passing of three encountered ships in the Skagerrak Strait, in conditions of good and restricted visibility at sea. According to the authors, the presented solution can be practically implemented in the decision support system of the ship's navigator.
The next example, utilizing automotive radar sensors in the 3D variant for the task of collision prevention, can be found in the article by Stateczny et al. [
12]. The measurement missions of unmanned vehicles, especially those performed in autonomous mode, require the detection and identification of objects both on the water and in the shore zone. The authors present the empirical results of their research on the detection capabilities of 3D automotive radar in a water environment, which can be used in the future development of tracking and collision prevention systems for autonomous surface vehicles (ASVs). The conducted experiments concerned the radar's field of view and the determination of its detection range for various objects, both floating and fixed on the shore. The obtained results confirm the usefulness of automotive radars for navigation tasks on bodies of water for small ASVs performing measurement missions, especially in autonomous mode.
Another application of a 3D sensor, this time in future-oriented road signs that can autonomously display the speed limit when the road situation requires it, is presented by Czyzewski et al. [
13]. Future-oriented road signs contain several types of sensors, among which a Doppler sensor and an acoustic probe, both improved by the authors, are presented in the article. The authors present a method of vehicle detection and tracking, as well as the determination of vehicle speed, on the basis of the signals of a continuous-wave Doppler sensor. The algorithm for counting vehicles and determining their direction of movement by means of an acoustic vector sensor was also tested experimentally with the use of the improved Doppler radar and a developed sound intensity probe. The authors also present the assumptions of a method using the spatial distribution of sound intensity as determined by means of an integrated three-dimensional (3D) sound intensity sensor.
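For context, the basic continuous-wave Doppler relation that such a sensor relies on is shown below as a worked example: the radial speed follows from the measured Doppler shift, the carrier frequency, and the angle between the sensor boresight and the direction of travel. The 24 GHz carrier, 1.6 kHz shift, and 20-degree angle are assumed values; this is a textbook formula, not the authors' detection and tracking algorithm.

```python
# Speed from a continuous-wave Doppler shift (textbook relation).
import math

C = 299_792_458.0  # speed of light [m/s]

def speed_from_doppler(f_doppler_hz, f_carrier_hz, approach_angle_deg=0.0):
    return f_doppler_hz * C / (2.0 * f_carrier_hz
                               * math.cos(math.radians(approach_angle_deg)))

print(round(speed_from_doppler(1600.0, 24.0e9, 20.0), 2), "m/s")  # about 10.6 m/s
```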
After space, aeronautical, marine, and land-based applications, it is now the turn of subsurface applications. Kang et al. [
14] proposed an underground cavity detection network (UcNet) for preventing collapses in complex urban roads on the basis of three-dimensional ground penetrating radar (GPR) images. UcNet is developed on the basis of a convolutional neural network (CNN) integrated with the phase analysis of super-resolution (SR) GPR images. CNNs are popularly used for the automatic classification of GPR data, as the interpretation of massive GPR data from urban roads by experts is usually cumbersome and time consuming. However, conventional CNNs often provide erroneous classification results due to the similar characteristics automatically extracted from various underground objects such as cavities, manholes, gravel, subsoil background, etc. In particular, features unrelated to cavities are often wrongly classified as actual cavities, which reduces the performance and reliability of the network. UcNet improves the detection of underground cavities by generating SR GPR images of the cavity candidates extracted by the network and analyzing their phase information. The proposed UcNet was experimentally validated using in situ GPR data collected from complex urban roads in Seoul, South Korea. The validation results reveal that the misclassification of underground cavities is significantly reduced compared with conventional CNNs.
2.2. Sonar Imaging and Processing
Sonar imaging and processing covers a wide set of methods and techniques aimed at the better detection and interpretation of the data and information acquired with underwater acoustic systems. A relatively wide variety of topics is presented in the papers published in this Special Issue, relating not only to the processing of raw measurements but also to sonar image analysis and even to fusion with multi-beam echosounder data. The issues undertaken relate to side-scan sonars, multi-beam echosounders, and synthetic aperture sonar, aiming at a better formulation and understanding of the acquired information. Most of the proposed solutions were verified with real data, and some in simulations.
In Zhang et al. [
15], the authors describe multi-receiver synthetic aperture sonar (SAS) and propose a new method for providing high-resolution images in such systems. The idea is to overcome the problem of approximating the point target reference spectrum (PTRS), azimuth modulation, and coupling term in signal processing, which results in the degradation of the accuracy of the obtained images. In the proposed method, the PTRS, azimuth modulation, and coupling term are deduced based on the accurate time delay. They are further exploited to develop the imaging processor, which compensates the coupling phase based on a sub-block processing method. Importantly, the proposed imaging scheme can easily be extended to any other PTRS, as it does not require the series expansion of the PTRS with respect to the instantaneous frequency. Thus, a novel imaging algorithm for multi-receiver SAS, based on the accurate time delay and a numerical evaluation method, is composed. The proposed method was verified first in simulation and then with real data. The results showed that it achieves high performance compared with traditional methods. The simulations showed that the effectiveness of the traditional method in focusing is significantly reduced, as indicated by the residual error; the new method overcomes this problem, resulting in more accurate images from the multi-receiver SAS.
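As a hedged illustration of the "accurate time delay" that such derivations start from (a standard bistatic-geometry expression rather than the paper's full PTRS derivation), the exact two-way delay for a transmitter at along-track position x_T, one receiver at x_R, and a point target at along-track position x_0 and cross-track range r_0 can be written as:

```latex
t(x_T, x_R) \;=\; \frac{\sqrt{r_0^{2} + (x_T - x_0)^{2}} \;+\; \sqrt{r_0^{2} + (x_R - x_0)^{2}}}{c}
```

Evaluating such delays numerically for each receiver, instead of expanding them in a series, is what allows the scheme summarized above to avoid approximations of the PTRS.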
Other papers focus more on image processing than on imaging itself. Ye et al. [
16] proposed a modified version of the Retinex algorithm (well known in image processing) for processing side-scan sonar images in order to perform gray scale correction. The original side-scan sonar image has an uneven gray distribution, which affects the interpretation of the image and the subsequent image processing. Various algorithms have been proposed to overcome this problem, including Retinex. The authors propose a modification of it, with the goal of achieving comparable accuracy at a lower computational and time complexity. The idea is to exploit the characteristics of sonar images within the algorithm, yielding an enhanced Retinex method. Compared with the commonly used gray scale correction methods for side-scan sonar images, this method avoids limitations such as the need to know the side-scan sonar parameters, the need to recalculate or reset the parameters for different side-scan sonar images, and a poor image enhancement effect. The method was verified with a large set of real data. The research showed that, compared with the latest Retinex-based image enhancement algorithms, the proposed method achieves similar image enhancement indexes while being the fastest. When it is necessary to adjust the brightness of the corrected image, only the magnitude of the constant coefficient A in the algorithm needs to be adjusted. The method thus provides a good basis for further image processing.
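For orientation, the classical single-scale Retinex operation that such methods build on is sketched below in Python (requires numpy and scipy); the authors' sonar-specific modifications and the constant coefficient A mentioned above are not reproduced, and the toy "sonar image" is synthetic.

```python
# Classical single-scale Retinex: log(image) minus log of its Gaussian-blurred version.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=40.0):
    """img: 2-D grayscale image as a float array. Returns the Retinex output
    rescaled to [0, 1] for display."""
    img = img.astype(np.float64) + 1.0                      # avoid log(0)
    retinex = np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
    return (retinex - retinex.min()) / (retinex.max() - retinex.min() + 1e-12)

# toy "sonar image" with an uneven gain/illumination gradient across track
rows = np.linspace(0.2, 1.0, 256)[:, None]
noisy = rows * (0.5 + 0.5 * np.random.default_rng(3).random((256, 256)))
print(single_scale_retinex(noisy).shape)
```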
Interesting research on the processing of side-scan sonar images aimed at the detection of targets is presented by Wang et al. [
17]. Taking into account the fact that the denoising and detection of underwater sonar images are crucial for the proper interpretation of the image, the authors proposed a new adaptive approach. Firstly, an adaptive non-local spatial information denoising method based on the golden ratio is proposed; then, a new adaptive cultural algorithm (NACA) is proposed to accurately and quickly complete underwater sonar image detection. For denoising, the method builds on earlier developments found in the literature; however, the thresholds of the adaptive non-local spatial information denoising method are calculated based on the golden ratio. For detection, NACA makes use of an adaptive initialization algorithm based on the data field (AIA-DF), and a modification of the quantum-inspired shuffled frog leaping algorithm (QSFLA) is proposed, in which a new update strategy is adopted to update cultural individuals. The experimental results presented in the paper demonstrate that the proposed denoising method can effectively remove noise and reduce the difficulty of the subsequent underwater sonar image recognition. The method is also faster and has advantages in its search ability. Thus, it can be considered an effective and important method for underwater sonar image detection, supporting feature extraction for effective seabed topography mapping.
Another important issue in side-scan sonar image processing is bottom tracking, which is examined by Yan et al. [
18]. The research aimed to propose a new method for real-time bottom tracking based on artificial intelligence (a convolutional neural network, CNN) applied to the image data. Bottom tracking can be effectively used to accurately obtain the sonar height above the seabed by finding the first echo returned from the seabed, and this knowledge of the sonar height is crucial for the proper interpretation of sonar images. The proposed approach consists of three steps. First, according to the characteristics of the side-scan backscatter strength sequences, positive and negative samples are extracted, representing the bottom sequences and the water column and seabed sequences, respectively, to establish the sample sets. Second, a one-dimensional CNN is designed and trained on the sample set to recognize the bottom sequences. Third, a complete processing procedure for the real-time bottom tracking method is established by traversing each side-scan ping datum and recognizing the bottom sequences. This approach introduces a deep learning algorithm for solving a problem that most existing methods address with fixed thresholds and deterministic numerical filtering. The method was verified with real measured data. The experimental results described in the paper showed that the proposed method is highly robust to the effects of noise, rich seabed texture, and artificial targets, and proved its accuracy and real-time performance. The average bottom tracking accuracy reached for the experimental data was 94.7%, with a 4.5% missed-ping rate, and 99.2% excluding the missing data, showing that the method provides an effective algorithm for bottom tracking.
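To make the second step more concrete, the sketch below defines a small one-dimensional CNN that classifies fixed-length backscatter-strength windows as bottom or non-bottom sequences. The architecture, window length, and class count are illustrative assumptions expressed in PyTorch, not the authors' trained network.

```python
# Small 1-D CNN for classifying backscatter-strength windows (illustrative only).
import torch
import torch.nn as nn

class BottomSequenceNet(nn.Module):
    def __init__(self, seq_len=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(16 * (seq_len // 4), 2)   # bottom / not bottom

    def forward(self, x):                 # x: (batch, 1, seq_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = BottomSequenceNet()
dummy_pings = torch.randn(4, 1, 64)       # four backscatter windows of 64 samples
print(model(dummy_pings).shape)           # torch.Size([4, 2])
```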
Sonar data processing may also be an important issue for navigation. Stateczny et al. [
19] indicate that underwater sonar data can be processed with big data methods. In this particular research, 3D sonar data were processed, and the purpose was near-real-time processing for so-called comparative navigation. A new approach to acquiring and simultaneously processing a set of bathymetric observations is presented. It includes fragmentary data acquisition and fast reduction (the optimum dataset method, OptD) within the acquired measuring strips in almost real time, together with the generation of DTMs. The OptD method was modified for this purpose by introducing a loop (a FOR instruction) for fragmentary data processing. All processing in this approach is carried out during data acquisition; the entire data set is not available at once during the measurement, but rather fragments of it are obtained successively. The proposed approach was compared with the method that uses full sets of bathymetric data. The results showed that reduced data sets and DTMs could be obtained and generated quickly, in almost real time, for comparative navigation. The most important step in the processing was the reduction, because the reduced number of data points allowed faster 3D bottom model generation, and such models can be compared with other types of data within terrain reference navigation. In this research, the data came from the 3D Sidescan 3DSS-DX-450 sonar system, which provides bottom and water column data.
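The fragmentary-processing loop itself can be conveyed with a short conceptual sketch: each incoming strip of soundings is reduced before the next one arrives, so the bottom model can be refreshed in near real time. The reduction and DTM-update functions below are deliberately crude placeholders; the actual OptD reduction and DTM generation are the authors' methods and are not reproduced here.

```python
# Conceptual chunked ("fragmentary") processing loop with placeholder steps.
import numpy as np

def reduce_fragment(points, keep_ratio=0.2):
    """Placeholder for OptD-style reduction: keep a subset of the soundings."""
    rng = np.random.default_rng(4)
    keep = max(1, int(keep_ratio * len(points)))
    return points[rng.choice(len(points), size=keep, replace=False)]

def update_dtm(dtm_points, new_points):
    """Placeholder for incremental DTM generation: accumulate reduced points."""
    return np.vstack([dtm_points, new_points]) if dtm_points.size else new_points

dtm = np.empty((0, 3))
for _ in range(10):                                    # ten measuring fragments
    fragment = np.random.default_rng().uniform(0, 100, size=(5000, 3))  # x, y, z
    dtm = update_dtm(dtm, reduce_fragment(fragment))   # reduce, then merge
print(dtm.shape)                                       # grows as strips arrive
```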
Xu et al. [
20] work not with bottom data but with water column data, presenting a very interesting use case for multi-beam measurements. The goal of this research was to propose an effective method for detecting gas leaks from bottom pipelines based on an analysis of water column images (WCIs). WCIs exploit differences in acoustic characteristics, such as backscattering strength or target strength, to detect solid, liquid, or gas targets by distinguishing them from the background image. Gas leakages can be detected with the use of so-called motion-estimation techniques: a gas bubble moves between consecutive scans and can be detected on the basis of this movement. The authors proposed using the optical flow method for this purpose, as it had already been validated for suspended objects, although with different sensors. The entire image processing chain is analyzed, including side-lobe suppression, coordinate transformation, and other factors, resulting in a modified optical flow algorithm adjusted for multi-beam WCI analysis. The method is based on the combination of the motion and intensity information of WCI pixels. It was verified in two experiments with real sensors in real environments (a pool and a lake) with simulated gas leakages. The velocities of the gas bubbles obtained with the two variants of the method showed relatively good consistency, demonstrating the great potential of the method. Further research is planned in which bottom tracking technology will be introduced and the influence of sound velocity changes on the thresholds will be analyzed.
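A generic stand-in for the motion-estimation step is sketched below using dense (Farneback) optical flow between two consecutive water-column images, flagging pixels with strong upward motion as bubble candidates. It requires opencv-python and numpy; the synthetic frames, threshold, and the assumption that bubbles rise in the image are illustrative, and the authors' modified algorithm with side-lobe suppression and coordinate transformation is not reproduced.

```python
# Dense optical flow between consecutive water-column images (generic sketch).
import numpy as np
import cv2

def bubble_motion_mask(wci_prev, wci_next, mag_threshold=1.0):
    """wci_prev/wci_next: 8-bit grayscale water-column images of equal size.
    Returns (flow, mask), where mask flags pixels with strong upward motion."""
    flow = cv2.calcOpticalFlowFarneback(wci_prev, wci_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    rising = flow[..., 1] < 0                      # row index decreases upwards
    return flow, (magnitude > mag_threshold) & rising

prev_img = np.zeros((200, 200), np.uint8); prev_img[120:140, 90:110] = 255
next_img = np.zeros((200, 200), np.uint8); next_img[115:135, 90:110] = 255
_, mask = bubble_motion_mask(prev_img, next_img)
print(mask.sum(), "pixels flagged as rising-bubble candidates")
```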
Underwater surveys nowadays increasingly deal with more than one data source, and the joint analysis of the various sources can in many cases provide important added value for situational awareness. An example of this can be found in [
21], where Shang et al. propose a new method for acquiring high-resolution seabed topography and surface details that are difficult to obtain using MBES or SSS alone. It makes use of the observation that MBES data are well positioned, while SSS data (especially from towed systems) provide high-resolution images but with inaccurate positions. The authors proposed a method to combine both sources of data. By taking the image geographic coordinates as a constraint when using the Speeded-Up Robust Features (SURF) algorithm for the initial image matching, the authors obtained more correct initial matched points than without the constraint. Then, a finer matching step is conducted by adopting a template matching strategy which uses the dense local self-similarity (DLSS) descriptor to reflect the shape properties of the areas centered on the feature points. The method was empirically verified with real data, showing that it can overcome the limitations of adopting a single MBES or SSS for seabed mapping. High-resolution and high-accuracy seabed topography and surface details can be represented together, which is meaningful for understanding and interpreting seabed topography. The paper also discusses the accuracy of the reckoned SSS positions, which is used as a reference threshold in the image matching process, as well as the impact of the sonar frequency on the sonar backscatter image, providing some useful suggestions for dealing with multi-frequency sonar image matching.
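The geographically constrained matching idea can be illustrated with the simplified Python sketch below, which uses ORB features as a stand-in for SURF (SURF requires the opencv-contrib package) and rejects matches whose georeferenced positions disagree by more than a threshold tied to the SSS positioning accuracy. The per-pixel geographic grids, the 20 m threshold, and the self-matching demo are assumptions; the paper's DLSS-based fine matching step is not reproduced.

```python
# Feature matching between MBES and SSS images with a geographic-distance constraint.
import numpy as np
import cv2

def constrained_matches(img_a, img_b, geo_a, geo_b, max_dist_m=20.0):
    """geo_a/geo_b: (H, W, 2) arrays of easting/northing per pixel [m]."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_a, None)
    kp2, des2 = orb.detectAndCompute(img_b, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kept = []
    for m in matcher.match(des1, des2):
        p1 = geo_a[int(kp1[m.queryIdx].pt[1]), int(kp1[m.queryIdx].pt[0])]
        p2 = geo_b[int(kp2[m.trainIdx].pt[1]), int(kp2[m.trainIdx].pt[0])]
        if np.linalg.norm(p1 - p2) <= max_dist_m:   # geographic constraint
            kept.append(m)
    return kept

rng = np.random.default_rng(5)
img = (rng.random((240, 240)) * 255).astype(np.uint8)     # synthetic textured image
gy, gx = np.mgrid[0:240, 0:240].astype(np.float64)
geo = np.dstack([gx, gy])                                 # 1 m per pixel grid
print(len(constrained_matches(img, img, geo, geo)), "constrained matches")
```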