Article

Lidar Doppler Tomography Focusing Error Analysis and Focusing Method for Targets with Unknown Rotational Speed

1 National Laboratory on Adaptive Optics, Chengdu 610209, China
2 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 Zhejiang University, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(3), 506; https://doi.org/10.3390/rs17030506
Submission received: 31 October 2024 / Revised: 26 December 2024 / Accepted: 9 January 2025 / Published: 31 January 2025

Abstract

Lidar Doppler tomography (LDT) is a significant method for imaging rotating targets in long-distance air and space applications. Typically, these targets are non-cooperative and exhibit unknown rotational speeds. Inferring the rotational speed from observational data is essential for effective imaging. However, existing research predominantly emphasizes the development of imaging algorithms and interference suppression, often neglecting the analysis of rotational speed estimation. This paper examines the impact of errors in rotational speed estimation on imaging quality and proposes a robust method for accurate speed estimation that yields focused imaging results. We developed a specialized measurement matrix to characterize the imaging process, which effectively captures the variations in measurement matrices resulting from different rotational speed estimates. We refer to this variation as the law of spatiotemporal propagation of errors, indicating that both the imaging accumulation time and the spatial distribution of the target influence the error distribution of the measurement matrix. Furthermore, we validated this principle through imaging simulations of point and spatial targets. Additionally, we present a method for estimating rotational speed, which includes a coarse estimation phase, image filtering, and a fine estimation phase utilizing Rényi entropy minimization. The initial rough estimate is derived from the periodicity observed in the echo time-frequency distribution. The image filtering process leverages the spatial regularity of the measurement matrix’s error distribution. The precise estimation of rotational speed employs Rényi entropy to assess image quality, thereby enhancing estimation accuracy. We constructed a Lidar Doppler tomography system and validated the effectiveness of the proposed method through close-range experiments. 
The system achieved a rotational speed estimation accuracy of 97.81%, enabling well-focused imaging with a spatial resolution better than 1 mm.

1. Introduction

In the field of air and space target detection, long-distance, high-resolution imaging technology remains a key pursuit for researchers. With advancements in laser technology, optics, and signal processing, the performance of coherent detection lidar has continually improved, thereby facilitating its application in remote sensing. Numerous studies have demonstrated the feasibility of laser coherent detection for long-distance sensing, including applications such as inverse synthetic aperture lidar (ISAL) [1,2,3] and lidar reflection tomography [4,5,6,7,8,9,10]. Lidar Doppler tomography (LDT) is a similar advanced imaging technology that integrates coherent detection with the Doppler effect, enabling high-resolution imaging of distant rotating targets, including satellites, missiles, and UAV blades.
LDT utilizes Doppler projection information from various angles to reconstruct the internal structural images of targets based on the principles of computed tomography [11,12,13,14,15]. Compared to ISAL, LDT has the advantage of not requiring the emission of high-bandwidth modulation signals to achieve high-resolution imaging, thereby reducing hardware requirements. However, since the imaging principle relies on computed tomography, precise knowledge of the target’s rotational speed is essential for establishing the relationship between the projection angle and the projection matrix [16]. For non-cooperative targets, obtaining rotational speed information is challenging, making effective inference of this information one of the main obstacles for LDT.
Related work has demonstrated the feasibility of using Lidar Doppler tomography technology for high-resolution imaging. Mo et al. [15] conducted laboratory experiments to verify LDT, achieving high-resolution imaging through an imaging algorithm based on maximum a posteriori estimation. The study notes that the target’s rotational speed information is inferred from the projection matrix, thereby completing the imaging process. However, it does not provide details on the estimation algorithm or the impact of estimation accuracy on imaging results. The technical principles and challenges of laser reflection tomography and LDT are similar, with researchers also investigating non-cooperative space targets. Zhang et al. [9] and Guo et al. [4] addressed issues of sparsity and incompleteness in projection data. Notably, their experiments assumed known rotational speeds and did not consider the effects of rotational speed estimation on imaging outcomes. Currently, no research has systematically analyzed the specific impact of rotational speed estimation on imaging quality.
In the field of micro-Doppler research, estimating target information from the time-frequency distribution of target echoes is crucial, providing significant reference for our work. The micro-Doppler effect caused by general vibrations allows for the estimation of vibration frequency through autocorrelation processing of time-domain signals [17]. This method requires the echo signal to possess a high signal-to-noise ratio. Using Hough transform [18,19,20] or Radon transform [21,22] to convert signals from the time-frequency domain to the parameter domain for estimation is an effective approach. This technique is widely used in the microwave domain, primarily because targets detected with microwaves behave similarly to point targets, resulting in relatively straightforward time-frequency distributions. In contrast, targets in the laser spectrum often exhibit rough surfaces relative to the laser wavelength, increasing the complexity of features in the time-frequency distribution and rendering direct application of the aforementioned algorithms for rotation speed estimation impractical. As of now, a rotational speed estimation method specifically designed for LDT has yet to be developed. Therefore, we assert the necessity of thoroughly analyzing the mechanisms by which rotation speed estimation impacts imaging and developing a target rotation speed estimation method tailored for LDT.
In this paper, we analyze the mechanism by which rotation speed estimation errors affect imaging and propose an algorithm to estimate the rotation speed of a rotating target for improved imaging focus. First, we introduce the basic construction method for the measurement matrix in LDT. Building on this foundation, we present a specialized method for constructing measurement matrices that reflects the order of pixels from the exterior to the interior. By examining the differences in measurement matrices at various rotational speeds, we uncover the spatial and temporal laws governing the propagation of measurement matrix errors. Additionally, we conducted simulation imaging experiments on both point and extended targets to validate the consistency of these findings with the established rules of imaging errors. Moreover, we propose a two-step method for precise rotational speed estimation: rough estimation and fine estimation. Initially, the target’s rotational speed is estimated based on the time-frequency distribution of the echo signal, narrowing the parameter search scope. Subsequently, we designed an image filter informed by the spatial characteristics of focusing errors to enhance the sensitivity of Rényi entropy to the rotational speed parameter. We utilize Rényi entropy as an evaluation metric for imaging focus and achieve accurate parameter estimation by minimizing it. We constructed an LDT system operating at a 1550 nm wavelength in an indoor setting and applied the proposed method to achieve precise imaging of the target. Without noise reduction or additional image processing, we obtained well-focused imaging results with a resolution better than 1 mm for a target located 4.5 m away. Additionally, the system achieved a rotational speed estimation accuracy of 97.81%, further demonstrating the effectiveness and reliability of the proposed method.
The remainder of this paper is organized as follows. Section 2 presents the model of the echo signal and the principles underlying the imaging algorithm. Section 3 analyzes the mechanisms by which rotational speed estimation errors impact imaging. Section 4 introduces a precise method for estimating rotational speed. Section 5 discusses the experimental results. Finally, Section 6 provides the conclusion.

2. Principles of Lidar Doppler Tomography

2.1. Signal Model

Lidar can acquire Doppler information from rotating targets by transmitting single-frequency signals. LDT utilizes Doppler projection information from various angles to reconstruct the target’s reflectance image, based on the principles of computed tomography. The schematic diagram illustrating this imaging principle is shown in Figure 1.
The rotation angle of the target relative to the direction of the incident laser is denoted as θ. For a point (x_0, y_0) in the target’s initial coordinate system (x_r, y_r), its radial distance from the incident laser varies with rotation:
d(t) = \sqrt{x_0^2 + y_0^2}\,\sin(\theta + \phi_0) = \sqrt{x_0^2 + y_0^2}\,\sin(\omega t + \phi_0),
where d(t) represents the change in radial distance, φ_0 denotes the initial angle of (x_0, y_0) in the original coordinate system, ω represents the rotational angular velocity, and t represents time. For an extended target, the echo signal can be modeled as the integral over the reflectivity distribution u(x, y):
s(t) = \iint u(x, y)\, \exp\!\left[ j \frac{4\pi}{\lambda} \sqrt{x^2 + y^2}\, \sin(\omega t + \phi) \right] dx\, dy,
where s(t) represents the echo signal, u(x, y) represents the reflection coefficient distribution of the target, and λ represents the wavelength. Equation (2) is an integral expression: the integrand is the product of each pixel’s reflectivity and its phase term, and the phase, governed by the radial displacement, varies sinusoidally over time. Differentiating the phase of the echo from a single point yields the corresponding Doppler frequency:
f_d = \frac{2\omega}{\lambda} \sqrt{x^2 + y^2}\, \cos(\omega t + \phi).
It can be observed from the expression above that the Doppler frequency shift caused by the rotation of a single-point target is a sinusoidal function of time. Furthermore, the amplitude of the sinusoidal signal is proportional to both the distance and the rotation speed of the point target relative to the center of rotation, and inversely proportional to the wavelength. The frequency of the sinusoidal signal corresponds to the rotation frequency of the target.
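As a quick numerical illustration of the sinusoidal Doppler history described above, the sketch below evaluates the Doppler shift of a single rotating scatterer. All parameter values (rotation rate, scatterer position) are illustrative assumptions; only the 1550 nm wavelength matches the paper's system.

```python
import numpy as np

# The Doppler shift of a single rotating scatterer is a sinusoid whose
# amplitude grows with its radius and rotation rate (expression above).
# All parameter values below are illustrative assumptions.
wavelength = 1550e-9           # m (the paper's system operates at 1550 nm)
omega = 2 * np.pi * 5.0        # rad/s, assumed rotation rate (5 rev/s)
x0, y0, phi0 = 0.05, 0.0, 0.0  # scatterer 5 cm from the rotation center

r = np.hypot(x0, y0)
t = np.linspace(0.0, 0.4, 1000)
f_d = (2.0 * omega / wavelength) * r * np.cos(omega * t + phi0)

# The peak shift equals 2*omega*r/lambda (about 2 MHz for these values)
print(f"peak |f_d| = {np.abs(f_d).max():.3e} Hz")
```

The peak Doppler excursion scales linearly with both radius and rotation rate, which is why the outermost scatterers trace the widest sinusoids in the time-frequency distribution.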
An additional rule is that at angle θ , points exhibiting the same Doppler frequency correspond to the same distribution of cross range [23]. Therefore, the Doppler frequency can be expressed as
f_d = \frac{2\omega}{\lambda}\, \rho,
where ρ represents the cross distance distribution perpendicular to the direction of incidence. The set of points exhibiting the same cross distance distribution can be expressed as
\rho = x \cos\theta + y \sin\theta = \frac{\lambda f_d}{2\omega}.
Therefore, we have established the following correspondence between the Doppler projection of the target along the direction of incidence and the Radon transform of the target reflectance coefficient [24,25].
p(\rho, \theta) = \iint u(x, y)\, \delta(x \cos\theta + y \sin\theta - \rho)\, dx\, dy = \iint u(x, y)\, \delta\!\left( x \cos\omega t + y \sin\omega t - \frac{\lambda f_d}{2\omega} \right) dx\, dy = p(f_d, t).
However, according to Equation (4), there exists a scaling relationship between the Doppler frequency shift and the target’s rotational radius, with the scaling factor dependent on the rotational speed of the target. Errors in the rotation speed estimate therefore misalign the projections and their angles, ultimately yielding imaging that is out of focus or fails entirely. This is because, in practical observations, the only available data are the observation time and the projection, while the rotational speed remains unknown. Consequently, the goal is to infer the relationship between the observation time and the observation angle, which fundamentally amounts to estimating the rotational velocity of the target.
Performing a Fourier transform on signals acquired from different angles allows for the extraction of the Doppler projection of the target. The resolution of the Doppler spectrum is contingent upon the duration of the signal used for the Fourier transform.
\Delta f_d = \frac{1}{\Delta T}.
Therefore, according to Equation (5), the cross-range resolution of the target can be expressed as
\Delta \rho = \frac{\lambda}{2\omega} \Delta f_d = \frac{\lambda}{2\omega \Delta T} = \frac{\lambda}{2 \Delta\theta},
where Δρ represents the cross resolution, ΔT corresponds to the sampling time for a single angle, Δf_d denotes the frequency resolution, and Δθ represents the angle through which the target rotates during the corresponding time.
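The resolution relations above are easy to check numerically. The sketch below, with assumed values for the rotation rate and the per-angle integration time, confirms that λΔf_d/(2ω), λ/(2ωΔT), and λ/(2Δθ) coincide.

```python
import numpy as np

# Numerical check of the cross-resolution relations above:
# lambda*dfd/(2w), lambda/(2w*dT), and lambda/(2*dtheta) must agree.
# Parameter values are illustrative assumptions.
wavelength = 1550e-9      # m, matching the paper's operating wavelength
omega = 2 * np.pi * 5.0   # rad/s, assumed rotation rate
delta_T = 1e-3            # s, assumed integration time per angle

delta_f = 1.0 / delta_T              # Doppler resolution
delta_theta = omega * delta_T        # angle swept during one window
delta_rho = wavelength / (2 * omega) * delta_f
print(f"delta_rho = {delta_rho * 1e3:.4f} mm")
```

Note that the result depends only on the wavelength and the angle swept during one Fourier-transform window, so a slower rotation simply requires a longer integration time for the same cross resolution.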
This section establishes and analyzes the echo signal model of the target. Furthermore, the equivalence relationship between the target Doppler projection and the Radon transform of the target reflectance distribution is deduced. We have found that matching the projection matrix and projection angle can only be achieved when the rotational speed is known, which is a necessary condition for imaging. Additionally, we derive an expression for the cross resolution.

2.2. Imaging Method

The filtered back-projection (FBP) algorithm is a classic method for computed tomography image reconstruction [26]. Its core principle is based on the Radon transform and its inverse. The Radon transform maps the image function to projection space, generating projection data. The inverse transform then converts this projection data back into image space to reconstruct the original image.
For non-cooperative targets, it is necessary to infer their rotational speed information from the data to facilitate imaging. We will introduce the method for estimating rotation speed in the following sections. Assuming that the speed information is known, we can employ the FBP algorithm to achieve rapid imaging of the target. The specific steps of the algorithm are as follows.
Firstly, the time-Doppler projection of the target is converted into an angle-cross range projection based on the rotational speed information.
p(f_d, t) \rightarrow p(\rho, \theta).
Secondly, a one-dimensional Fourier transform is performed on the projection data at each angle. This step converts the data from the spatial domain to the frequency domain.
P_\theta(\omega_\theta) = \mathcal{F}\{ p_\theta(x) \},
where p_θ(x) is the projection data at the angle θ, F represents the one-dimensional Fourier transform, and P_θ(ω_θ) is the transformed frequency-domain data.
Next, filtering is applied in the frequency domain. A filter (such as the Ram-Lak ramp filter) is used to process the projection data in the frequency domain. The filtering compensates for the over-weighting of low frequencies inherent in simple back-projection and suppresses artifacts, thereby improving the quality of the image.
H(\omega_\theta) = |\omega_\theta|, \qquad \tilde{P}_\theta(\omega_\theta) = P_\theta(\omega_\theta) \cdot H(\omega_\theta),
where H ( ω θ ) represents the frequency response of the filter, and P ˜ θ ( ω θ ) denotes the filtered frequency domain data.
Then, a one-dimensional inverse Fourier transform is performed to convert the filtered projection data from the frequency domain back to the spatial domain.
\tilde{p}_\theta(x) = \mathcal{F}^{-1}\{ \tilde{P}_\theta(\omega_\theta) \},
where F^{-1} represents the one-dimensional inverse Fourier transform, and p̃_θ(x) represents the projection data after filtering and inverse transformation.
Finally, the reflectivity distribution of the target can be obtained through the back-projection transformation. The back-projection step involves redistributing the filtered projection data into the image domain along its original path.
f(x, y) = \int_0^{\pi} \! d\theta \int \tilde{p}_\theta(R)\, \delta(x \cos\theta + y \sin\theta - R)\, dR,
where f(x, y) represents the reconstructed image, δ is the Dirac function used to represent the point distribution in the back-projection process, θ denotes the projection angle, and p̃_θ(R) represents the filtered projection data, i.e., the integral along ray R at angle θ.
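The five steps above can be condensed into a short sketch. The function below is a minimal nearest-neighbor FBP written for illustration under simplifying assumptions (unit pixel spacing, an unwindowed ramp filter); it is not the authors' implementation.

```python
import numpy as np

def fbp_reconstruct(sinogram, thetas_rad, n_pix):
    """Minimal filtered back-projection sketch of the steps above.

    sinogram has shape (n_rho, n_theta); thetas_rad holds the projection
    angles in radians; n_pix is the side length of the output image.
    Illustrative nearest-neighbor version, not the paper's code.
    """
    n_rho, n_theta = sinogram.shape

    # Steps 2-4: FFT along rho, multiply by the ramp (|omega|) filter, IFFT
    ramp = np.abs(np.fft.fftfreq(n_rho))[:, None]
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp, axis=0))

    # Step 5: smear each filtered projection back along its original rays
    xs = np.arange(n_pix) - n_pix / 2.0
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n_pix, n_pix))
    for k, th in enumerate(thetas_rad):
        rho = X * np.cos(th) + Y * np.sin(th) + n_rho / 2.0
        idx = np.clip(rho.astype(int), 0, n_rho - 1)
        image += filtered[idx, k]
    return image * np.pi / n_theta
```

Feeding it the sinogram of a single point (a sinogram row of ones) reproduces a focused spot near the expected location, which is a convenient sanity check before moving to extended targets.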

3. Focusing Error Analysis

In this section, we first introduce the basic construction method of the measurement matrix for the LDT process, which allows us to express the forward projection process as a linear matrix multiplication. Next, we propose a specialized construction method for the measurement matrix based on this basic approach. This method more clearly reflects the error distribution pattern of the measurement matrix. Finally, we conduct imaging simulation experiments on point targets and extended targets, verify the impact of rotational speed estimation errors on imaging, and reveal the correspondence between the imaging error distribution and the measurement matrix error distribution.

3.1. Measurement Matrix Model

Through the analysis of the echo signal, we recognize that the Doppler projection is the result of the Radon transform of the target reflectivity distribution. We propose using discrete linear algebra matrices to represent the integral formulation of the Radon transform, facilitating further problem introduction and analysis [27].
By applying the Radon transform to the target reflectance image, we obtain a mapping from the two-dimensional spatial domain of the target reflectance g ( x , y ) to a two-dimensional sinusoidal image.
p(\rho, \theta) = \mathcal{R}\{ g(x, y) \},
where R represents the Radon transform, ρ and θ denote the distance and angle coordinates of the projection matrix, respectively, and x and y refer to the two-dimensional spatial coordinates of the reflectivity distribution. In actual measurements, we can only obtain the discrete form of the transformation described above. Assume that the size of the projection matrix p(ρ, θ) is K × L and the size of the reflectance matrix g(x, y) is M × N. The linear algebra representation of the Radon transform can then be expressed as
b = A u,
where b is the observation result, A is the measurement matrix, and u is the reflectance coefficient of the target. To construct the corresponding measurement matrix A, we vectorize both the projection matrix and the reflectance matrix, yielding the following expression:
b_i = b_{kL+l} = p(k, l),
u_j = u_{nM+m} = g(m, n).
Thus, the size of the measurement matrix A is KL × MN. We use the extended impulse function φ(·) in the image domain to represent the composition of the elements in A. The function φ(·) is defined as
\phi(x - x_m, y - y_n) = \begin{cases} 1, & \text{if } x = x_m,\ y = y_n \\ 0, & \text{otherwise.} \end{cases}
Figure 2 illustrates a toy example of the extended impulse function. In this example, the value of all pixels in the image is zero, except for the specified pixel. Consequently, the reflectance matrix of the target can be expressed as follows:
g(x, y) = \sum_m \sum_n g(m, n)\, \phi(x - x_m, y - y_n).
By definition, the Radon transform in integral form is given by:
p(k, l) = \iint \sum_m \sum_n g(m, n)\, \phi(x - x_m, y - y_n)\, \delta(\rho_k - x \cos\theta_l - y \sin\theta_l)\, dx\, dy = \sum_m \sum_n g(m, n) \iint \phi(x - x_m, y - y_n)\, \delta(\rho_k - x \cos\theta_l - y \sin\theta_l)\, dx\, dy.
Furthermore, the composition of the elements a i , j in A can be expressed as follows:
a_{i,j} = a_{kL+l,\, nM+m} = \iint \phi(x - x_m, y - y_n)\, \delta(\rho_k - x \cos\theta_l - y \sin\theta_l)\, dx\, dy.
This indicates that the measurement matrix is essentially the Radon transform of the extended impulse function φ(·). Therefore, we can obtain the discrete expression of the matrix A by applying the Radon transform to the extended impulse function.
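The construction just described, in which each column of A is the Radon transform of one extended impulse image, can be sketched as follows. The nearest-bin Radon discretization and the vectorization order here are illustrative choices of ours, not the paper's exact discretization.

```python
import numpy as np

def radon_nearest(img, thetas_rad, n_rho):
    """Crude nearest-bin Radon transform used to build A column by column."""
    n = img.shape[0]
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    sino = np.zeros((n_rho, len(thetas_rad)))
    for k, th in enumerate(thetas_rad):
        # Assign each pixel's value to its nearest rho bin at this angle
        rho = (X * np.cos(th) + Y * np.sin(th) + n_rho / 2.0).astype(int)
        np.add.at(sino[:, k], np.clip(rho.ravel(), 0, n_rho - 1), img.ravel())
    return sino

def build_measurement_matrix(n, thetas_rad, n_rho):
    """Each column of A is the Radon transform of one impulse image phi."""
    A = np.zeros((n_rho * len(thetas_rad), n * n))
    for j in range(n * n):
        impulse = np.zeros(n * n)
        impulse[j] = 1.0
        A[:, j] = radon_nearest(impulse.reshape(n, n), thetas_rad, n_rho).ravel()
    return A
```

By linearity, multiplying A with a vectorized image reproduces that image's Radon transform, which is exactly the forward model b = Au of the text.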

3.2. Error Analysis of Measurement Matrix

In the Doppler tomography problem, there exists a scaling relationship between the obtained Doppler projection map and the coordinates of the projection map used for image reconstruction. The two dimensions of the Doppler projection map correspond to frequency and time, which relate to distance and angle in the sinusoidal map, respectively. When detecting non-cooperative targets, the target’s rotational speed is unknown. We must infer the target’s rotational speed information from the data to establish the correspondence between time and angle, as well as between frequency and distance. Assuming that the initial illumination angle is 0° and the central Doppler frequency is 0 Hz, the corresponding relationship can be expressed as follows:
\theta_t = \omega t,
\rho_f = \frac{\lambda}{2\omega} f_d,
where ω represents the angular velocity of the target rotation, λ denotes the wavelength of the emitted signal, f_d represents the Doppler frequency, and t represents the observation time. When the size of the time-Doppler-intensity matrix we obtain is fixed, the accuracy of the rotation speed estimation directly affects the reconstruction process. In direct reconstruction methods, such as the FBP algorithm, the magnitude of the rotational speed determines the angular interval between adjacent projections during the back-projection. An incorrect estimation of the rotational speed causes the back-projection angle to deviate, thereby affecting the final reconstruction result. When employing the linear algebra method, we need to derive the angular coordinates of the Radon transform from the rotational speed information and then reconstruct the measurement matrix A. Any error in the rotation speed estimate therefore alters the composition of the measurement matrix A, consequently affecting the final imaging result.
We first examine the impact of the rotation speed estimation error on the measurement matrix A. Based on the analysis in the previous section, we know that the composition of the measurement matrix A is related to the vectorized arrangement of the image reflectance matrix. Instead of using the original pixel arrangement order to construct the measurement matrix, we employ the pixel arrangement order depicted in Figure 3, where the pixels are organized from the outside to the inside of the image. When the image size is large, the pixels can be approximately arranged in descending order of radial distance from the center of rotation. Next, we need to calculate the projection angle corresponding to each moment based on the provided rotational speed. The angular separation between two adjacent projections is proportional to the rotational speed. Let ω_0 denote the true speed of the target and ω̂ represent the estimated speed. We define the absolute error in the rotational speed estimate as follows:
E_\omega = |\hat{\omega} - \omega_0|.
When the time sampling interval Δt of the Doppler projection matrix is fixed, the angular interval error of adjacent projections is proportional to the absolute error in the rotation speed estimate. Its expression is as follows:
\Delta\hat{\theta} - \Delta\theta = (\hat{\omega} - \omega_0)\, \Delta t.
Essentially, the estimated rotational speed influences the setting of the imaging system’s angular sampling interval. We define the absolute error in the angular sampling interval as follows:
E_\theta = |\Delta\hat{\theta} - \Delta\theta|.
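A short worked example of how a speed error becomes a per-projection angular error and then accumulates over the observation; the rotation rate, its 1% overestimate, and the sampling interval below are assumed numbers chosen for illustration.

```python
import numpy as np

# A speed estimation error produces a fixed angular error per projection
# interval, which then accumulates over the whole observation.
# All numbers are illustrative assumptions.
omega_true = 2 * np.pi * 5.00    # rad/s, true rotation rate
omega_hat  = 2 * np.pi * 5.05    # rad/s, a 1% overestimate
dt = 2e-3                        # s, time between adjacent projections

E_omega = abs(omega_hat - omega_true)        # absolute speed error
E_theta = E_omega * dt                       # angular error per interval
n_proj = 450                                 # e.g. a long accumulation
total_drift = np.degrees(E_theta * n_proj)   # accumulated angular drift

print(f"E_theta = {np.degrees(E_theta):.4f} deg per projection")
print(f"drift after {n_proj} projections = {total_drift:.2f} deg")
```

Even a per-interval error far below the angular sampling step grows into a drift of many degrees by the end of a long observation, which is why the later projections are misplaced most severely.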
To more intuitively quantify the impact of the rotation speed estimation error on the measurement matrix A, we constructed the measurement matrix A at sampling intervals of 2°, 2.1°, 2.2°, and 2.3°, assuming that the true sampling interval is 2°. The measurement matrix constructed at this true interval is shown in Figure 4. The original image has a pixel size of 100 × 100, resulting in a total of 10,000 pixels. We use color depth to represent the magnitude of the corresponding values, where black indicates smaller values and yellow indicates larger values. Figure 4a provides an overall representation of the measurement matrix, and Figure 4b presents a partially enlarged view. It is evident that the measurement matrix exhibits a certain sparsity and displays a sinusoidal shape with amplitude attenuation, corresponding to the pixel arrangement sequence we designed.
Furthermore, to better quantify the differences between various measurement matrices, we used the measurement matrix corresponding to a sampling interval of 2° as the standard and calculated the differences between it and the other measurement matrices. The error matrix is computed using the following formula:
E_A = A_{\Delta\theta_0} - A_{\Delta\theta_i}.
As shown in Figure 5, these are the error matrices for sampling intervals of 2.1°, 2.2°, and 2.3°. Since the measurement matrix itself exhibits a certain level of sparsity, most corresponding values in the error matrices are 0. Consequently, we performed local average down-sampling on the error matrix. The result after down-sampling is presented in Figure 6.
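The differencing and local average down-sampling steps can be sketched as below. Since the actual measurement matrices come from the construction in Section 3.1, the code substitutes random sparse stand-ins; only the error-matrix computation and block averaging are the point here.

```python
import numpy as np

# Elementwise difference between a reference and a perturbed "measurement
# matrix", followed by local average down-sampling to make the sparse
# error pattern visible. The matrices are random stand-ins, not real A's.
rng = np.random.default_rng(1)
A_ref = (rng.random((120, 100)) < 0.05) * rng.random((120, 100))
A_est = (rng.random((120, 100)) < 0.05) * rng.random((120, 100))
E_A = np.abs(A_ref - A_est)            # elementwise error matrix

def block_mean(M, b):
    """Local average down-sampling by a factor b in each dimension."""
    r, c = M.shape
    return M[:r - r % b, :c - c % b].reshape(r // b, b, -1, b).mean(axis=(1, 3))

E_small = block_mean(E_A, 10)
print(E_small.shape)   # (12, 10)
```

Because most entries of a sparse difference are zero, averaging over local blocks concentrates the scattered nonzero errors into a compact map, which is what makes the spatial error pattern of Figure 6 readable.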
When constructing the measurement matrix A, we meticulously designed the composition of its row and column vectors to reflect the spatial transformation rules and temporal variations inherent in A. Specifically, the row vectors map the pixel values arranged in order from distant to near, gradually approaching the center of rotation, thereby capturing spatial changes. The column vectors represent the gradual increase in the observation angle, which corresponds to the increase in observation time.
Based on this design, we aim to uncover and summarize the spatiotemporal patterns of measurement matrix error propagation. As shown in Figure 6, we quantify the magnitude of the error using a color gradient, where dark black areas represent extremely low error levels, bright yellow indicates larger error levels, and red falls in between. It can be observed that the four error matrices in Figure 6 exhibit similar error propagation patterns. The error level gradually decreases from left to right across the image, indicating that the measurement matrix error is smaller closer to the center of the image. This phenomenon reveals that the error in rotation speed estimation has a lesser impact on the central area than on the peripheral regions of the image, which are more susceptible to errors during reconstruction.
The error level progressively increases from the top to the bottom of the error matrix, indicating that errors accumulate as the observation angle increases. This corresponds to the angle sampling interval error analyzed earlier, which likewise accumulates over time. As the number of projections grows, this accumulated error degrades the incoherent accumulation of information in actual imaging, indicating that rotation speed estimation errors are significant and can affect the final imaging results. Additionally, as shown in Figure 6, as the sampling interval error gradually increases, the area representing high error levels (yellow) expands from the lower left corner toward the upper right corner. This change dynamically illustrates how rotation speed estimation errors propagate from the outer edges of the image to the inner regions, thereby degrading the overall reconstruction quality.
To further verify the impact of rotation speed estimation errors on imaging and to examine the error propagation characteristics of the measurement matrix, we subsequently conducted simulations for point target imaging and extended target imaging.

3.3. Simulation of the Point Target Imaging

To verify and analyze the impact of rotation speed estimation errors on the pixel focusing position and focusing energy in imaging, we first considered conducting imaging simulation experiments using point targets, as shown in Figure 7. We set the size of the original image to 513 × 513 pixels, with the coordinates of the rotation center at (257, 257). Based on the varying distances from the rotation center, we established three groups of point targets with Gaussian intensity distributions. The coordinates of the strongest pixel centers, arranged from nearest to farthest, are as follows: P1 (257, 51), P2 (257, 128), and P3 (257, 204). We conducted a 180-degree Radon transform on the image, using an angle sampling interval of 2°. For image reconstruction, we employed the FBP algorithm, performing reconstructions with error-free angle sampling intervals as well as with intervals of 2.2°, 2.4°, 2.6°, and 2.8°.
First, consider the sampling interval of 2.4° as an example, for which the absolute error in the angular sampling interval is 0.4°. The reconstruction results of the three point targets under this condition are compared in Figure 8. Figure 8a,b present the color map and contour map, respectively. Both figures use the same color mapping, with black representing low reflectance intensity, red indicating intermediate intensity, and yellow signifying high intensity. We observed that, compared to the initial image, the three point targets exhibit varying degrees of energy diffusion and positional deviation: points closer to the rotation center show the weakest energy diffusion and the smallest positional deviations, while the point farthest from the center shows the strongest energy diffusion and the largest positional deviation. We also compared the transverse and longitudinal slices at the strongest scattering centers between the error-free reconstruction and the reconstruction with an error of 0.4°. Figure 9a,d illustrate the slices before and after the reconstruction of P1, Figure 9b,e depict the slices for P2, and Figure 9c,f show the slices for P3. The slices reveal that as a target moves farther from the rotation center, its peak reflectivity diminishes and its deviation from the initial scattering center increases; points closer to the center retain intensities approaching the original point-target value of 1. Notably, the offset in the Y-axis slice is larger, which relates to the position of the selected point and the direction of the slice.
We further analyzed the imaging results under various error conditions and calculated their energy attenuation and positional shifts. The results are presented in Figure 10. Figure 10a illustrates the energy attenuation of the three point targets: P1 (near the center) exhibits a slow decay of energy as the error increases, P2 shows moderate decay, and P3 (the farthest from the center) experiences the fastest energy decay.
Figure 10b depicts the position offset of the three point targets. The y-axis represents the Euclidean pixel distance between the offset position and the initial position. We observed a linear relationship between the position offset Δr and the angular interval error, with the slope of the position offset being larger for point targets farther from the center. This indicates that the focal position in the outer region is more susceptible to errors.
We conducted point target imaging simulation experiments to explore and better understand the spatial relationship between rotation speed estimation errors and the resulting imaging quality. These experiments provide insights consistent with the spatial error propagation behavior of the measurement matrix A, as analyzed in our earlier work. Specifically, from a back-projection perspective, when the same angular deviation occurs across the imaging field, positional deviations are more pronounced toward the periphery of the image. This increased deviation leads to a significant reduction in energy levels at these peripheral regions, as the projected energy is dispersed and cannot be effectively focused. This phenomenon highlights the critical role of precise rotation speed estimation in maintaining accurate energy localization and image clarity, particularly in regions farther from the image center.
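The linear relationship between position offset and distance from the rotation center can be verified with a minimal numerical sketch (not from the paper): when every projection angle carries the same bias, a scatterer at radius r lands at a position rotated by that bias, so its displacement is the chord length, which grows linearly with r.

```python
import numpy as np

# Minimal sketch (illustrative parameters, not from the paper): a scatterer
# at radius r under a uniform angular bias dtheta moves along an arc, so its
# displacement is the chord 2*r*sin(dtheta/2) ~ r*dtheta, linear in r.
def position_offset(r, dtheta_deg):
    """Displacement (same units as r) of a point at radius r when all
    projection angles are biased by dtheta_deg degrees."""
    dtheta = np.deg2rad(dtheta_deg)
    return 2.0 * r * np.sin(dtheta / 2.0)

radii = np.array([10.0, 50.0, 100.0])   # pixel distances from the center
offsets = position_offset(radii, 0.4)   # 0.4 deg angular-interval error
print(offsets / radii)                  # constant ratio: linear growth in r
```

The constant offset-to-radius ratio is the slope seen in the position-offset curves: outer points move farther for the same angular error.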

3.4. Simulation of the Extended Target Imaging

We constructed a cross-blade target with a reflectance of 1 and an overall image size of 1000 × 1000 pixels, as illustrated in Figure 11, and conducted imaging simulation experiments on this extended target. In Doppler tomography, both the angular sampling rate and the overall angular sampling range affect signal quality: when the angular sampling rate is adequate, increasing the overall angular sampling range improves the signal-to-noise ratio of the imaging. In the simulation, Gaussian white noise was added with a signal-to-noise ratio (SNR) of 5 dB. Imaging experiments were performed with different cumulative observation angles under three conditions: no error, an angular sampling interval error of 0.2°, and an angular sampling interval error of 0.4°. The cumulative angle represents the length of observation time; the cumulative observation angles were 180°, 360°, 540°, 720°, and 900°. The results are presented in Figure 12.
Figure 12a illustrates the error-free scenario. As the observation angles accumulate, the signal-to-noise ratio of the image improves significantly, and the cross pattern retains its original shape, demonstrating good focus. Figure 12b depicts the case with an error of 0.2°. Although the background noise around the cross pattern is suppressed as observation time accumulates, the energy becomes dispersed and the angle of the cross pattern shifts; nevertheless, a relatively distinct cross pattern remains visible near the center, where the energy stays focused. Figure 12c presents the case with an error of 0.4°. Here, energy dispersion is more pronounced, and as the observation angles accumulate, only the energy in the central region remains focused, making it difficult to distinguish the original cross pattern.
We further analyze the quality and changes of the image using peak signal-to-noise ratio (PSNR) and mean square error (MSE) [28,29]. Figure 13a displays the PSNR values under three error conditions. The blue curve represents the scenario without error, while the red and yellow curves correspond to errors of 0.2° and 0.4°, respectively. In the absence of error, the peak signal-to-noise ratio increases rapidly with the accumulation of observation angles. However, when errors are present, the PSNR increases only slightly. Figure 13b illustrates the changes in MSE values across the three cases. As observation time accumulates, the MSE value decreases most rapidly when there is no error, while the decrease is slower in the presence of errors. This phenomenon, as noted in Figure 12, occurs because the accumulation of observation angles allows for the suppression of background noise in all three cases. However, errors can lead to offsets and energy dispersion in the cross pattern. As the observation angle increases, the reflectivity of the outer pattern does not improve and gradually diffuses. Although the focus of the inner pattern is better than that of the outer pattern, some energy dispersion still occurs. Consequently, the signal-to-noise ratio of the image cannot be effectively improved.
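The two metrics above are standard; a minimal sketch of both, where `ref` is the noise-free reference image and `img` the reconstruction, with intensities assumed to lie in [0, 1]:

```python
import numpy as np

# Minimal sketch of the MSE and PSNR metrics used above. `ref` is the
# noise-free reference image and `img` the reconstruction; `peak` is the
# maximum possible pixel value (assumed 1.0 for normalized images).
def mse(ref, img):
    return np.mean((ref.astype(float) - img.astype(float)) ** 2)

def psnr(ref, img, peak=1.0):
    """PSNR in dB; larger is better, and MSE -> 0 gives PSNR -> infinity."""
    m = mse(ref, img)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((8, 8)); ref[2:6, 2:6] = 1.0   # toy reference image
noisy = ref + 0.1                              # uniform bias of 0.1
print(mse(ref, noisy), psnr(ref, noisy))       # 0.01 and 20 dB
```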
We verified the impact of rotational speed estimation error on imaging by conducting imaging simulations on the extended target. First, the spatial propagation of errors continues to follow the established pattern: the closer a target is to the rotation center, the smaller the energy dispersion and position deviation. Second, we further validated the temporal propagation of errors. Our intuitive understanding is that the rotational speed estimation error is effectively equivalent to a sampling interval error. The initial estimation error in the interval leads to increasingly larger deviations in the back-projection angle as the order of projection increases, making it impossible to improve imaging quality over time.

4. Rotation Speed Estimation Methods

4.1. Analysis of Rényi Entropy

Rényi entropy is a generalized measure of entropy in information theory and serves as a generalization of Shannon entropy [30]. For a discrete random variable X, the α -order Rényi entropy is defined as
H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_{i=1}^{n} p_i^{\alpha},
where p i is the probability of the i-th value of the random variable X, n is the total number of possible outcomes, and the parameter α is a real number that is not equal to 1.
Rényi entropy has been employed to measure the focus of inverse synthetic aperture radar images and for motion error compensation [30]. In our experiments, we found that Rényi entropy can effectively assess the degree of focus in extended target laser Doppler tomography results and estimate the target’s rotational speed. When its value is small, focused imaging results are obtained, indicating a correspondingly small estimation error in rotational speed. Compared to Shannon entropy, a key advantage of Rényi entropy is its flexibility in order adjustment. Different orders of Rényi entropy exhibit varying sensitivities to probability distributions, making them suitable for diverse scenarios. For images with higher levels of noise, a larger order can be chosen, as it causes Rényi entropy to emphasize values with higher probabilities while reducing sensitivity to lower-probability noise values.
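The focus-measure behavior described above can be illustrated with a short sketch; treating the normalized pixel intensities as a probability distribution is a common convention for entropy-based focus measures (the paper's exact normalization may differ):

```python
import numpy as np

# Sketch: alpha-order Renyi entropy of an image, treating normalized pixel
# intensities as a probability distribution (a common convention for
# entropy-based focus measures; an assumption here, not the paper's spec).
def renyi_entropy(img, alpha=0.5, eps=1e-12):
    p = np.abs(img).ravel().astype(float)
    p = p / (p.sum() + eps)        # normalize to a distribution
    p = p[p > 0]                   # drop zero-probability bins
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

focused = np.zeros((16, 16)); focused[8, 8] = 1.0   # all energy in one pixel
defocused = np.full((16, 16), 1.0 / 256.0)          # energy spread uniformly
# A focused image yields lower entropy than a defocused one.
print(renyi_entropy(focused, 0.5), renyi_entropy(defocused, 0.5))
```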

4.2. Image Filtering Method

Based on the spatial propagation law of the rotational speed estimation error we identified, we designed an image filter for the imaging results. Our analysis indicates that the central area of the image is less affected by the rotational speed estimation error compared to the outer regions, leading to the development of the following filter.
Q_\beta(x, y) = \begin{cases} 0, & |x| \le \beta X, \; |y| \le \beta Y \\ 1, & \text{others} \end{cases}
where X and Y represent the side lengths of the imaging area, and β is a proportional coefficient ranging from 0 to 1, indicating the size of the central square area relative to the total side length. The variables x and y correspond to the pixel coordinates of the imaging result. We can therefore filter the imaging results using the Hadamard product, represented by the following expression:
\tilde{U}_\beta(x, y) = U(x, y) \odot Q_\beta(x, y),
where U ( x , y ) represents the initial imaging result, U ˜ β ( x , y ) represents the filtered imaging result, and ⊙ indicates the Hadamard product. We apply this filter to mitigate the influence of the central area on the overall Rényi entropy of the image, as the outer area is more sensitive to rotation speed estimation errors. The goal is to enhance the sensitivity of the Rényi entropy evaluation results to these errors. It is important to note that during actual operations, we not only set the central area to zero but also excluded these zero elements from the overall Rényi entropy calculation.
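A minimal sketch of the filter-then-evaluate step follows; the centered-square reading of the mask condition, and reusing the Rényi-entropy form from the previous subsection, are our assumptions:

```python
import numpy as np

# Sketch of the central-region filter: zero out a central square whose side
# is beta times the image side (the centered-square interpretation of the
# mask is our assumption), apply the Hadamard product, then compute the
# Renyi entropy with the zeroed pixels excluded, as the text specifies.
def filter_and_entropy(U, beta, alpha=0.5, eps=1e-12):
    X, Y = U.shape
    Q = np.ones_like(U, dtype=float)
    hx, hy = int(beta * X / 2), int(beta * Y / 2)
    cx, cy = X // 2, Y // 2
    Q[cx - hx:cx + hx, cy - hy:cy + hy] = 0.0   # suppress the central square
    U_f = U * Q                                  # Hadamard product
    p = np.abs(U_f[Q > 0]).ravel()               # exclude zeroed elements
    p = p / (p.sum() + eps)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

U = np.random.default_rng(0).random((100, 100))  # stand-in imaging result
print(filter_and_entropy(U, beta=0.15))
```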

4.3. Parameter Estimation Based on Rényi Entropy Minimization

4.3.1. Time Frequency Analysis

Converting echo signals into images involves several steps. First, we employ a time-frequency analysis method to obtain the time-Doppler-intensity image of the target, corresponding to the Doppler projection collected at various times. The time-frequency analysis technique utilized is the Short-Time Fourier Transform (STFT) method [31], defined by the following formula:
\mathrm{STFT}\{x(t)\} = X(f, t) = \int_{-\infty}^{+\infty} x(\tau)\, \omega(\tau - t)\, e^{-j 2 \pi f \tau}\, d\tau,
where x ( t ) denotes a one-dimensional time-domain signal, ω ( t ) represents a window function. Commonly used window functions include the rectangular window, Hanning window, and Hamming window, among others. Here, f denotes the frequency variable, t is a time variable, and j represents the imaginary unit. The output X ( f , t ) produced by the STFT is a two-dimensional function that represents the intensity distribution of the signal in both time and frequency. We refer to the X ( f , t ) matrix as the Doppler-Time-Intensity (DTI) matrix. Since STFT employs a fixed-length window function, its time and frequency resolutions are inherently limited, a constraint defined by the Heisenberg uncertainty principle. The spatial resolution of laser Doppler tomography is positively correlated with the frequency resolution of the DTI matrix. In practical applications, it is essential to select an appropriate time window length based on the target’s size and rotational speed.
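The DTI construction can be sketched with a plain NumPy STFT; `scipy.signal.stft` offers the same functionality, and the toy signal and parameters below (sampling rate, window length, hop) are illustrative assumptions, not the system's settings:

```python
import numpy as np

# Minimal Hanning-window STFT producing a DTI matrix. The toy echo is a
# sinusoidally phase-modulated carrier standing in for the micro-Doppler
# signal; all parameters here are illustrative assumptions.
def stft_dti(x, nperseg=256, hop=128):
    w = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * w
              for i in range(0, len(x) - nperseg + 1, hop)]
    # one-sided magnitude spectrum of each windowed frame, time along columns
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

fs = 10_000.0                      # sampling rate, Hz (assumed)
t = np.arange(10_000) / fs         # 1 s of samples
x = np.cos(2 * np.pi * 1000 * t + 50 * np.sin(2 * np.pi * 2 * t))
dti = stft_dti(x)
print(dti.shape)                   # (frequency bins, time frames)
```

A longer window improves frequency (hence spatial) resolution at the cost of time resolution, which is the Heisenberg trade-off noted above.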

4.3.2. Rough Estimate of Rotation Speed

According to the time transformation law of Doppler projection in DTI images, we can obtain a rough estimate of the period and narrow the parameter search range. We perform cross-correlation operations on DTI images along the time dimension. Cross-correlation measures the similarity of two images during the translation process. When clear periodic structures are present in the image, it can be employed to estimate the period contained within the image [32,33].
Suppose we have two-dimensional matrices A and B, where A is the reference matrix and B is the template matrix. The calculation formula for cross-correlation along the time dimension is given by:
C(i) = \sum_{f=0}^{m-1} \sum_{t=0}^{n-1} A(f, i+t) \cdot B^{*}(f, t).
The peak interval T of the correlation coefficient C ( i ) corresponds to the period of the similarity structure in the image. By determining the interval between adjacent peaks, we can estimate the target’s rotational speed. The calculation formula is:
\hat{T} = \frac{1}{n} \sum_{i=1}^{n} T_i,
\omega = \frac{2 \pi}{\hat{T}},
where T i represents the time interval between adjacent peaks, and T ^ represents the averaged estimated value. n represents the number of intervals and ω represents the estimated value of the rotational speed.
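The rough-estimation steps above can be sketched end to end on a synthetic, exactly periodic DTI matrix; the column duration `dt` and the pattern itself are illustrative assumptions:

```python
import numpy as np

# Sketch of the rough estimate: correlate a template block of the DTI
# matrix along the time axis, pick the high peaks of C(i), average the
# peak spacings T_i, and convert the mean period to an angular speed.
# The DTI is a synthetic periodic pattern; dt is an assumed parameter.
rng = np.random.default_rng(1)
dt = 1e-3                                    # seconds per DTI column (assumed)
period_cols = 40                             # true period: 40 columns
block = rng.random((64, period_cols))        # one period of the pattern
dti = np.tile(block, (1, 10))                # 10 periods, 400 columns
template = block

C = np.array([np.sum(dti[:, i:i + period_cols] * template)
              for i in range(dti.shape[1] - period_cols)])
thresh = 0.9 * C.max()                       # keep only alignment peaks
peaks = [i for i in range(1, len(C) - 1)
         if C[i] > C[i - 1] and C[i] >= C[i + 1] and C[i] > thresh]
T_hat = np.mean(np.diff(peaks)) * dt         # averaged peak spacing, seconds
omega = 2 * np.pi / T_hat                    # estimated speed, rad/s
print(T_hat, omega)
```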

4.3.3. Accurate Estimation of Rotation Speed

Based on a given estimate of the rotational speed, we can derive the corresponding angle-Doppler-intensity image. First, we apply the FBP algorithm to obtain the imaging result U ( x , y ) of the target. Subsequently, the image is filtered using the proposed filter, yielding the filtered image U ˜ β ( x , y ) . Furthermore, we calculate the Rényi entropy of the image and achieve an accurate estimation of the rotational speed by minimizing the Rényi entropy. The expression is as follows:
\omega^{*} = \arg\min_{\omega} H_\alpha \big( \tilde{U}_\beta(x, y) \big).
The flowchart of the above algorithm is illustrated in Figure 14.
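The fine-estimation loop amounts to a one-dimensional search over candidate speeds. The sketch below illustrates the loop structure only: `image_at` is a hypothetical stand-in for FBP reconstruction that blurs a point in proportion to the speed error, mimicking the defocusing analyzed earlier; it is not the paper's imaging chain.

```python
import numpy as np

# Sketch of the fine-estimation loop: scan candidate speeds within the
# rough-estimate range, image at each speed, and keep the speed with
# minimum Renyi entropy. `image_at` is a hypothetical stand-in for FBP:
# it spreads a point's energy in proportion to the speed error.
def renyi_entropy(img, alpha=0.5, eps=1e-12):
    p = np.abs(img).ravel(); p = p / (p.sum() + eps); p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

omega_true = 30.0                                   # Hz, toy ground truth
xx, yy = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')

def image_at(omega):
    sigma = 0.5 + 50.0 * abs(omega - omega_true)    # blur grows with error
    return np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * sigma ** 2))

candidates = np.arange(29.5, 30.5, 0.01)            # rough-estimate range
entropies = [renyi_entropy(image_at(w)) for w in candidates]
omega_star = candidates[int(np.argmin(entropies))]
print(omega_star)
```

In practice each candidate requires a full FBP reconstruction plus filtering, so the grid spacing trades estimation accuracy against computation.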

5. Experimental Results

5.1. Experimental Setup

The LDT system, operating at a wavelength of 1550 nm, is depicted in Figure 15. The system employs a narrow-linewidth laser as the signal source. The signal passes through a 50/50 beam splitter; one arm then passes through an 80 MHz acousto-optic modulator (AOM) and an erbium-doped fiber amplifier (EDFA) before entering a 40 mm collimator for emission. The laser signal is scattered by the rotating target and returns, where it is received by a 40 mm coupling lens. The return signal enters one port of a 50/50 optical fiber coupler, while the other 50% of the split laser signal, serving as the local oscillator, feeds into the other port. The coupled optical signal is directed to a balanced photodetector (BPD), where it is converted into an electrical signal, which is then fed into an oscilloscope for collection at a sampling rate of 500 MHz.
The actual picture of the target is shown in Figure 16a. Figure 16b displays the image of the target illuminated by the laser, as captured by the infrared camera. The measured target is mounted on a specialized turntable, equipped with an encoder that ensures stable speed control, allowing it to rotate at a constant speed. The distance between the center of the turntable and the transceiver device is approximately 4.5 m, and the maximum target diameter is 12 cm, with a rotation speed of 1800 r/min.

5.2. Image Quality Evaluation Index

5.2.1. Image Peakness

Image Peakness (IP) is a crucial indicator for evaluating image quality, as it reflects the clarity of image edges [34]. There are various methods to calculate image sharpness; we employ the gradient method for this evaluation. In a clear image, the edges are sharper and more distinct than in a blurred image, resulting in greater variations in the gray values of edge pixels and, consequently, larger gradient values. The gradient indicates the rate of change of pixel values in an image, making it a suitable metric for assessing sharpness. Generally, a higher calculated gradient score correlates with increased image sharpness, which we express using the following formula for IP:
\mathrm{IP} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} G(i, j),
G(i, j) = \sqrt{G_x(i, j)^2 + G_y(i, j)^2}.
For each pixel (i, j) in the image, we compute the gradients G_x(i, j) and G_y(i, j) in the x and y directions, and then take the mean of the gradient magnitudes over the image.
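A short sketch of this gradient score follows; using `np.gradient`'s central differences for G_x and G_y is one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

# Sketch of the gradient-based sharpness score: per-pixel gradient
# magnitudes (via np.gradient's central differences, an assumed choice
# for G_x and G_y) averaged over the whole image.
def image_peakness(img):
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))                 # high-contrast texture
k = np.ones(5) / 5                           # 5-tap box-blur kernel
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
# Blurring flattens edges, so the sharper image scores higher.
print(image_peakness(sharp), image_peakness(blurred))
```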

5.2.2. Natural Image Quality Evaluator

The Natural Image Quality Evaluator (NIQE) is a no-reference image quality assessment method [35]. It operates without requiring any prior knowledge or training data regarding image distortions, instead evaluating image quality based on the Natural Scene Statistics model. The core concept of the NIQE method is to utilize multi-scale spatial domain features—such as mean, variance, skewness, and kurtosis—extracted from high-quality natural images to construct a multivariate Gaussian (MVG) model that encapsulates the statistical properties of these images. For the image under evaluation, the same features are extracted, and the distance (e.g., Mahalanobis distance) between these features and the MVG model is calculated. This distance serves as an indicator for image quality assessment: the smaller the distance, the higher the image quality, indicating a closer alignment with the statistical characteristics of high-quality natural images.

5.2.3. Equivalent Number of Looks

The Equivalent Number of Looks (ENL) is a crucial indicator for assessing image quality, particularly in synthetic aperture radar (SAR) imaging [36]. It primarily reflects the relative intensity of coherent speckle noise in the image, along with the overall performance of noise suppression, edge clarity, and image retention. Generally, a smaller ENL value indicates better overall image quality. To calculate the ENL, a uniform area within the image must be selected as a sample, as pixel values in non-uniform areas (such as edges and textures) exhibit significant variation, which can distort the ENL calculation. The ENL is defined as the ratio of the square of the mean μ to the variance σ² of the pixel gray values in a uniform area of the image. The expression is as follows:
\mathrm{ENL} = \frac{\mu^{2}}{\sigma^{2}}.
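The computation itself is a one-liner over a manually chosen uniform patch. As a sketch, the patches below use gamma-distributed intensities, a standard model for fully developed multiplicative speckle where the gamma shape equals the number of looks L (an illustrative assumption, not the paper's data):

```python
import numpy as np

# Sketch: ENL of a uniform patch is mean^2 / variance. For gamma-distributed
# speckle intensity with shape L (the number of looks), ENL estimates L,
# i.e., the relative strength of the multiplicative noise.
def enl(region):
    return region.mean() ** 2 / region.var()

rng = np.random.default_rng(2)
one_look = rng.gamma(shape=1.0, scale=1.0, size=(64, 64))    # heavy speckle
four_look = rng.gamma(shape=4.0, scale=0.25, size=(64, 64))  # averaged looks
print(enl(one_look), enl(four_look))   # roughly 1 and 4
```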

5.3. Imaging Results of Star Model

The star model depicted in Figure 17 is constructed from 3M material known for its high reflective properties. The interior of the model features a grid-like structure with varying reflectivities. The grid spacing is approximately 1 mm, while the overall diameter of the pattern measures 10 cm.
The time-frequency analysis of the echo signal is performed to generate the time-Doppler-intensity image of the target, as shown in Figure 18a. A color map visualizes the intensity distribution of the Doppler spectrum: yellow represents the highest intensity, red intermediate intensity, and black the weakest. The Doppler spectrum is approximately symmetrically distributed around 80 MHz, the frequency shift introduced by the acousto-optic frequency shifter, and the measured Doppler bandwidth of the target is approximately 10 MHz, reflecting the motion characteristics and structural details of the target. Over time, the Doppler projection of the target exhibits periodic variations corresponding to its rotation, and this periodicity enables a rough estimation of the rotation period. Following the rough estimation step of the proposed method, we derive the correlation map along the time direction, as shown in Figure 18b, and estimate the target's period from the peak spacing. When the target has an approximately symmetrical structure, the periodic variation frequency of the Doppler spectrum may be an integer multiple of the target's rotation frequency; we resolve this ambiguity by checking the imaging results to determine the closest estimated rotation speed. In this experiment, the actual rotation frequency we set is 30 Hz, while the frequency obtained using the rough estimation method is 30.2197 Hz.
Based on the results of the rough estimation of the rotation speed, we narrowed the parameter search range to 29.5 Hz to 30.5 Hz and used Rényi entropy with an order of 0.5 as the evaluation index for further precise parameter estimation. The Rényi entropy values for different parameters are shown in Figure 19. The estimated frequency value corresponding to the minimum point is 30.0657 Hz, which represents the final refined estimation of the rotation speed. The rotational speed estimation accuracy reaches 97.81%. The lines of different colors in the figure correspond to the Rényi entropy calculation results of images filtered with varying β values. To facilitate comparison, we have shifted the minimum value point to 0. It is evident that when β = 0.15 , the downward trend is more pronounced, indicating a greater sensitivity to changes in rotational speed.
Finally, we imaged the target based on the coarse and fine estimation results. Figure 20a,b show the imaging results for the coarse and fine estimations, respectively. A comparison reveals that the energy in the coarse imaging results appears more diffuse. The internal grid structure is relatively blurred, making it difficult to distinguish the effective grid structure from the surrounding area. In contrast, the internal stripes of the fine imaging results are clearer, allowing for effective differentiation of the internal grid structure, with a resolution better than 1 mm. Based on the parameter settings used in this experiment, the laser wavelength is 1550 nm, the target rotational frequency is 30 Hz (corresponding to an angular velocity of 188.5 rad/s), and the short-time Fourier transform window length is 20 μs. Under these conditions, using Equation (4), the theoretical limit of resolution can be calculated to be approximately 0.21 mm. In practice, the resolution can be improved by increasing the time window. From the imaging results, it is evident that the resolution is affected by speckle and noise.
To quantify the difference between the rough imaging results and those obtained using the proposed method, we compared the imaging results using three indicators: IP, NIQE, and ENL. The results are presented in Table 1. The IP of the proposed method is higher than that of the rough reconstruction, indicating improved contrast. The NIQE value for the proposed method's reconstruction is significantly lower than that of the rough reconstruction, suggesting higher visual image quality. Additionally, the ENL for the proposed method's reconstruction is smaller, indicating reduced contamination from multiplicative noise. This observation is further supported by the figures, which show that the details of the rough reconstruction are more ambiguous. The IP of the rough reconstruction is lower, primarily due to the difficulty in distinguishing boundaries in these images.

6. Conclusions

This paper addresses the challenge of rotational speed estimation in LDT, aiming to enhance the technology for long-distance imaging of non-cooperative rotating targets. We propose a specialized method for constructing measurement matrices to analyze the impact of rotational speed estimation errors and introduce an effective approach for estimating target rotation speed to achieve focused imaging. The proposed method reflects error distribution in both temporal and spatial dimensions. We utilized the constructed measurement matrix to examine the differences between measurement matrices at various rotational speeds. The results indicate that both imaging accumulation time and pixel spatial distribution significantly influence the distribution of measurement matrix errors. Additionally, we verified the consistency between the measurement matrix error distribution and imaging error through simulation experiments involving point and extended targets. In summary, rotational speed estimation errors primarily affect the energy focusing of the scattering center, causing a shift in the imaging position. The extent of this effect is proportional to imaging accumulation time, with a reduced impact observed further from the rotation center.
We present an estimation method to infer target rotation speed from echo data, consisting of two steps: rough estimation and fine estimation of rotational speed. Initially, a rough estimate is derived from the periodic structure observed in the Doppler-time-intensity image. We designed an image filter based on the spatial propagation law of rotation speed estimation errors to enhance the sensitivity of Rényi entropy to rotation speed parameters. Subsequently, we applied the FBP algorithm for imaging, filtering, and calculating Rényi entropy. Finally, precise estimation of rotation speed is achieved through Rényi entropy minimization. Experimental results based on the star model demonstrate that the proposed method effectively improves estimation accuracy and achieves focused imaging with a resolution better than 1 mm, without the need for noise reduction or other image processing techniques.
LDT is a method capable of high-resolution imaging of distant targets. To obtain imaging results suitable for identification, various factors must be considered. The focus error analysis and rotation speed estimation methods discussed in this article form the foundation for imaging inversion. To further enhance imaging performance, we must also address issues such as motion error compensation, noise suppression, and sparse reconstruction. In future research, we will conduct in-depth investigations into these related issues.

Author Contributions

Conceptualization, Y.L., C.X. and K.W.; formal analysis, Y.L. and C.X.; data curation, Y.L., D.L. and J.L.; writing—original draft preparation, Y.L., C.X., D.L. and A.S.; writing—review and editing, Y.L., C.X., D.L., D.H. and K.J.; visualization, Y.L. and C.X.; supervision, K.W. and Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the National Natural Science Foundation of China, grant number 62431025.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AOM	Acousto-optic Modulator
BPD	Balanced Photodetector
DTI	Doppler-Time-Intensity
EDFA	Erbium-Doped Fiber Amplifier
ENL	Equivalent Number of Looks
FBP	Filtered Back-projection
IP	Image Peakness
LDT	Lidar Doppler Tomography
MSE	Mean Square Error
MVG	Multivariate Gaussian
NIQE	Natural Image Quality Evaluator
PSNR	Peak Signal-to-noise Ratio
SAR	Synthetic Aperture Radar
SNR	Signal-to-noise Ratio
STFT	Short-time Fourier Transform

References

  1. Song, A.; Jin, K.; Xu, C.; Li, J.; Guo, Y.; Wei, K. Subcarrier modulation based phase-coded coherent lidar. Opt. Express 2023, 32, 52–61. [Google Scholar] [CrossRef] [PubMed]
  2. Li, J.; Jin, K.; Xu, C.; Song, A.; Liu, D.; Cui, H.; Wang, S.; Wei, K. Adaptive motion error compensation method based on bat algorithm for maneuvering targets in inverse synthetic aperture LiDAR imaging. Opt. Eng. 2023, 62, 093103. [Google Scholar] [CrossRef]
  3. Xu, C.; Jin, K.; Jiang, C.; Li, J.; Song, A.; Wei, K.; Zhang, Y. Amplitude compensation using homodyne detection for inverse synthetic aperture LADAR. Appl. Opt. 2021, 60, 10594–10599. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, R.; Jiang, Z.; Jin, Z.; Zhang, Z.; Zhang, X.; Guo, L.; Hu, Y. Reflective tomography LiDar image reconstruction for long distance non-cooperative target. Remote Sens. 2022, 14, 3310. [Google Scholar] [CrossRef]
  5. Shi, L.; Hu, Y.h.; Zhao, N.x.; Yu, L. Research on effect of reconstructed image quality in laser reflective tomography imaging. In Proceedings of the Optical Measurement Technology and Instrumentation, Edinburgh, UK, 26 June–1 July 2016; SPIE: Pune, India, 2016; Volume 10155, pp. 435–440. [Google Scholar]
  6. Andersen, A.H.; Kak, A.C. Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm. Ultrason. Imaging 1984, 6, 81–94. [Google Scholar] [CrossRef]
  7. Jin, X.; Sun, J.; Yan, Y.; Zhou, Y.; Liu, L. Imaging resolution analysis in limited-view Laser Radar reflective tomography. Opt. Commun. 2012, 285, 2575–2579. [Google Scholar] [CrossRef]
  8. Chen, J.; Sun, H.; Zhao, Y.; Shan, C. Typical influencing factors analysis of laser reflection tomography imaging. Optik 2019, 189, 1–8. [Google Scholar] [CrossRef]
  9. Zhang, X.; Hu, Y.; Wang, Y.; Shen, S.; Fang, J.; Liu, Y.; Han, F. Determining the limiting conditions of sampling interval and sampling angle for laser reflective tomography imaging in sensing targets with typical shapes. Opt. Commun. 2022, 519, 128413. [Google Scholar] [CrossRef]
  10. Zhang, X.; Han, F.; Shen, S.; Wang, Y.; Xu, S.; Dong, X.; Hu, Y. Target region extraction and segmentation algorithm for reflective tomography Lidar image. IET Image Process. 2023, 17, 1001–1009. [Google Scholar] [CrossRef]
  11. García, J.M.; Thurn, K.; Vossiek, M. Characterization of rotating objects with tomographic reconstruction of multiaspect scattered signals. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 3284–3291. [Google Scholar] [CrossRef]
  12. Mensa, D.L.; Halevy, S.; Wade, G. Coherent Doppler tomography for microwave imaging. Proc. IEEE 1983, 71, 254–261. [Google Scholar] [CrossRef]
  13. Fliss, G.G. Tomographic radar imaging of rotating structures. In Proceedings of the Synthetic Aperture Radar, Virtual, 1–3 November 1992; SPIE: Pune, India; Volume 1630, pp. 199–207. [Google Scholar]
  14. Sun, H.; Feng, H.; Lu, Y. High resolution radar tomographic imaging using single-tone CW signals. In Proceedings of the 2010 IEEE Radar Conference, Arlington, VA, USA, 10–14 May 2010; pp. 975–980. [Google Scholar]
  15. Mo, D.; Wang, N.; Wang, R.; Song, Z.Q.; Li, G.Z.; Wu, Y.R. Single-frequency LADAR super-resolution Doppler tomography for extended targets. Opt. Express 2019, 27, 12923–12938. [Google Scholar] [CrossRef] [PubMed]
  16. Goldman, L.W. Principles of CT and CT technology. J. Nucl. Med. Technol. 2007, 35, 115–128. [Google Scholar] [CrossRef] [PubMed]
  17. Niu, J.; Li, K.; Jiang, W.; Li, X.; Kuang, G.; Zhu, H. A new method of micro-motion parameters estimation based on cyclic autocorrelation function. Sci. China Inf. Sci. 2013, 56, 1–11. [Google Scholar] [CrossRef]
  18. Liu, Y.X.; Li, X.; Zhuang, Z.W. Estimation of micro-motion parameters based on micro-Doppler. IET Signal Process. 2010, 4, 213–217. [Google Scholar] [CrossRef]
  19. Fang, X.; Xiao, G. Rotor blades micro-Doppler feature analysis and extraction of small unmanned rotorcraft. IEEE Sens. J. 2020, 21, 3592–3601. [Google Scholar] [CrossRef]
  20. Zhang, Q.; Yeo, T.S.; Tan, H.S.; Luo, Y. Imaging of a moving target with rotating parts based on the Hough transform. IEEE Trans. Geosci. Remote Sens. 2007, 46, 291–299. [Google Scholar] [CrossRef]
  21. Zhang, Y.D.; Xiang, X.; Li, Y.; Chen, G. Enhanced micro-Doppler feature analysis for drone detection. In Proceedings of the 2021 IEEE Radar Conference (RadarConf21), Atlanta, GA, USA, 8–14 May 2021; pp. 1–4. [Google Scholar]
  22. Qin, X.; Deng, B.; Wang, H. Micro-Doppler feature extraction of rotating structures of aircraft targets with terahertz radar. Remote Sens. 2022, 14, 3856. [Google Scholar] [CrossRef]
  23. Ran, L.; Xie, R.; Liu, Z.; Zhang, L.; Li, T.; Wang, J. Simultaneous range and cross-range variant phase error estimation and compensation for highly squinted SAR imaging. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4448–4463. [Google Scholar] [CrossRef]
  24. Beylkin, G. Discrete radon transform. IEEE Trans. Acoust. Speech, Signal Process. 1987, 35, 162–172. [Google Scholar] [CrossRef]
Figure 1. Principles of Lidar Doppler tomography.
Figure 2. Schematic diagram of the extended pulse function.
Figure 3. Schematic diagram of the pixel arrangement order.
Figure 4. (a) Overall image of the measurement matrix. (b) Partial enlargement of the measurement matrix.
Figure 5. Schematic diagram of the error matrix. (a) Error matrix with a sampling interval of 2.1°. (b) Error matrix with a sampling interval of 2.2°. (c) Error matrix with a sampling interval of 2.3°.
Figure 6. Error matrix after local mean downsampling. (a) Error matrix with a sampling interval of 2.1°. (b) Error matrix with a sampling interval of 2.2°. (c) Error matrix with a sampling interval of 2.3°.
Figure 7. Initial image of the three point targets.
Figure 8. Reconstruction results when the sampling interval is 2.4°. (a) Reconstructed color map. (b) Reconstructed contour image.
Figure 9. Slice comparison of the original image and the reconstructed image. (a) Slice of point P1 along the X-axis. (b) Slice of point P2 along the X-axis. (c) Slice of point P3 along the X-axis. (d) Slice of point P1 along the Y-axis. (e) Slice of point P2 along the Y-axis. (f) Slice of point P3 along the Y-axis.
Figure 10. (a) Intensity values of the scattering center under different sampling interval errors. (b) Position offsets of the scattering center under different sampling interval errors.
Figure 11. Image of the cross-blade target.
Figure 12. Imaging results under different cumulative observation angles: (a) with no sampling interval error; (b) with a sampling interval error of 0.2°; (c) with a sampling interval error of 0.4°.
Figure 13. (a) PSNR of images under different cumulative observation angles. (b) MSE of images under different cumulative observation angles.
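Figure 13 evaluates reconstruction quality with PSNR and MSE. As a quick reference, a minimal numpy sketch of how these two metrics are conventionally computed (the toy images and the `max_val` of 1.0 below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def mse(ref: np.ndarray, img: np.ndarray) -> float:
    """Mean squared error between a reference image and a reconstruction."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, img: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; max_val is the image's peak intensity."""
    err = mse(ref, img)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy example: a reference image and a uniformly biased reconstruction.
ref = np.zeros((64, 64)); ref[30:34, 30:34] = 1.0
rec = ref + 0.01                 # constant bias of 0.01 everywhere
print(round(mse(ref, rec), 6))   # 0.0001
print(round(psnr(ref, rec), 2))  # 40.0
```

As in the figure, a larger sampling interval error drives the MSE up and the PSNR down.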
Figure 14. Flowchart of the rotation speed fine-estimation algorithm based on Rényi entropy.
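The fine-estimation stage sketched in Figure 14 scores each candidate rotation speed by the Rényi entropy of the image it produces and keeps the minimum as the focused result. A minimal sketch of such an entropy-based focus measure (the order α = 3 and the toy images are assumptions for illustration, not the authors' exact settings):

```python
import numpy as np

def renyi_entropy(img: np.ndarray, alpha: float = 3.0) -> float:
    """Rényi entropy of an image's normalized intensity distribution.
    A sharper (better focused) image yields a lower entropy."""
    p = np.abs(img).astype(np.float64) ** 2
    p = p / p.sum()          # treat normalized intensity as a probability mass
    p = p[p > 0]             # drop empty pixels before exponentiation
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

# Energy concentrated in one pixel (focused) vs. spread uniformly (defocused):
focused = np.zeros((32, 32)); focused[16, 16] = 1.0
blurred = np.ones((32, 32)) / 1024.0
print(renyi_entropy(blurred))   # 10.0: maximal for 1024 equally weighted pixels
```

In the flow of Figure 14, the reconstruction is repeated over a grid of candidate rotation frequencies and the candidate minimizing this entropy is retained, which is the behavior traced by the entropy curve in Figure 19.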
Figure 15. Lidar Doppler tomography system based on coherent lidar.
Figure 16. (a) Physical image of the target. (b) Infrared image of the target under laser irradiation.
Figure 17. Photograph of the star model.
Figure 18. (a) Time-frequency distribution of the star model's echo signal. (b) Correlation coefficient of the time-frequency distribution computed along the time axis.
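Figure 18 underlies the coarse estimate: because the echo of a rotating target is periodic, correlating its time-frequency distribution along the time axis yields correlation peaks spaced one rotation period apart. A numpy-only sketch of this idea on a simulated micro-Doppler echo (all signal parameters, window sizes, and the 0.5 lobe threshold are illustrative assumptions; the paper applies the principle to the measured star-model echo):

```python
import numpy as np

def spectrogram(x, nperseg=256, hop=64):
    """Magnitude spectrogram from Hann-windowed short-time FFT frames."""
    win = np.hanning(nperseg)
    frames = [x[i:i + nperseg] * win for i in range(0, len(x) - nperseg, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T   # (freq, time)

def rough_period(x, fs, nperseg=256, hop=64):
    """Correlate every spectrogram column with the first one; the lag of the
    strongest peak beyond the zero-lag lobe gives a rough rotation period."""
    S = spectrogram(x, nperseg, hop)
    cols = S - S.mean(axis=0)                        # mean-free columns
    norms = np.linalg.norm(cols, axis=0) + 1e-12
    corr = cols.T @ cols[:, 0] / (norms * norms[0])  # correlation vs. column 0
    drop = np.argmax(corr < 0.5)                     # end of the zero-lag lobe
    k = drop + np.argmax(corr[drop:])                # strongest later peak
    return k * hop / fs

# Simulated echo: 20 kHz carrier with a +/-5 kHz sinusoidal micro-Doppler and
# a 50 ms rotation period, sampled at 100 kHz for 80 ms.
fs, T = 100_000.0, 0.05
t = np.arange(0.0, 0.08, 1.0 / fs)
sig = np.cos(2 * np.pi * 20_000.0 * t + 5_000.0 * T * np.sin(2 * np.pi * t / T))
print(rough_period(sig, fs))   # close to the true 0.05 s period
```

The resulting estimate is only as fine as one hop of the short-time window, which is why a fine-estimation stage is still needed afterwards.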
Figure 19. Rényi entropy of imaging results at different rotation frequencies.
Figure 20. (a) Reconstruction result based on rough estimated periods. (b) Reconstruction result based on fine estimated periods.
Table 1. Comparison of indicators between the rough imaging result and the fine imaging result.

Index | Rough Imaging | Proposed Method
------|---------------|----------------
IP    | 0.0119        | 0.0120
NIQE  | 8.6896        | 7.4559
ENL   | 0.4285        | 0.4176
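Among the indices in Table 1, the equivalent number of looks (ENL) is a standard speckle-smoothness measure, conventionally computed as the squared mean over the variance of a homogeneous intensity region. A minimal sketch (the synthetic exponential speckle below is only a sanity check, not the paper's data):

```python
import numpy as np

def enl(region: np.ndarray) -> float:
    """Equivalent number of looks of a homogeneous intensity region:
    squared mean over variance; higher means smoother speckle."""
    region = region.astype(np.float64)
    return float(region.mean() ** 2 / region.var())

# Sanity check: averaging L independent single-look speckle images
# (exponentially distributed intensity) should give an ENL close to L.
rng = np.random.default_rng(0)
L = 4
looks = rng.exponential(scale=1.0, size=(L, 256, 256))
print(round(enl(looks.mean(axis=0)), 2))   # close to 4.0
```

A single-look exponential image gives an ENL near 1, so the sub-unity values in Table 1 indicate stronger-than-single-look fluctuation in the measured region.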

Share and Cite

Li, Y.; Xu, C.; Liu, D.; Song, A.; Li, J.; Han, D.; Jin, K.; Guo, Y.; Wei, K. Lidar Doppler Tomography Focusing Error Analysis and Focusing Method for Targets with Unknown Rotational Speed. Remote Sens. 2025, 17, 506. https://doi.org/10.3390/rs17030506

