Article

Sequential Two-Mode Fusion Underwater Single-Photon Lidar Imaging Algorithm

1 School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
2 School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
3 School of Information Science and Engineering, Harbin Institute of Technology (Weihai), Weihai 264209, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(9), 1595; https://doi.org/10.3390/jmse12091595
Submission received: 11 August 2024 / Revised: 1 September 2024 / Accepted: 7 September 2024 / Published: 9 September 2024
(This article belongs to the Special Issue Ocean Observations)

Abstract
Aiming at the demand for long-range, high-resolution imaging detection of small targets such as submerged submarine markers in shallow coastal waters, research on single-photon lidar imaging technology is carried out. This paper reports a sequential two-mode fusion imaging algorithm with strong information extraction capability that can reconstruct scene target depth and reflection intensity images from complex signal photon counts. The algorithm consists of four steps: data preprocessing, maximal group value estimation, noise sieving, and total variation smoothing constraints, which together image the target with high quality. Simulation and test results show that the imaging performance and imaging characteristics of the method surpass the current high-performance first-photon group imaging algorithm, indicating that the method has a great advantage in sparse photon-counting imaging. The proposed method constructs clear depth and reflectance intensity images of the target scene even under 50,828 Lux strong ambient light with strong interference, in a 0.1 Lux low-light environment, or in a high-attenuation underwater environment.

1. Introduction

In recent years, high-quality imaging technology in low-light and high-attenuation environments has received great attention. However, traditional optical detection methods, such as conventional laser radar, visible-light detection, and fluorescence imaging, are affected by absorption and scattering from water molecules, dissolved substances, and suspended particles. These effects not only reduce the light intensity but also broaden the pulsed laser in the time domain and enlarge the spot radius in space, which degrades detection accuracy [1,2,3,4] and greatly restricts practical applications. Single-photon lidar detection technology has therefore aroused extensive research interest [5,6,7,8,9,10,11,12,13] and has become an effective way to solve the detection problem in low-light and high-attenuation environments.
As a key factor affecting the performance of single-photon lidar systems, the iteration of imaging algorithms goes hand in hand with the progress of the systems themselves. Many researchers have achieved good results in the field of single-photon lidar imaging [14,15,16,17,18,19,20,21]. Based on the imaging principle, data processing method, and technical realization, existing single-photon lidar imaging methods can be divided into two major categories. The first is based on neural networks [22,23], which extract data information through various network designs and then reconstruct the target scene. However, the lack of large-scale paired single-photon lidar image training datasets, the long training time, the easy loss of high-frequency edge information, the high cost of computational resources, and the high cost of hardware in field deployment [24,25,26,27] impose great limitations on neural-network-based single-photon imaging algorithms, and their actual deployment faces great difficulties [28,29]. In contrast, modeling-based imaging algorithms have demonstrated their efficiency and environmental adaptability in many practical applications [30,31,32,33,34,35,36,37]. Although 3D point cloud imaging based on the modeling approach can enhance the perception of the shape, size, and spatial location of objects, it has higher technical complexity, increased cost, and higher processing power requirements [38]; 2D imaging technology is therefore more suitable for a wide range of practical application scenarios due to its lower technical requirements and cost. Among existing single-photon 2D imaging algorithms, the fixed-pixel acquisition time reconstruction method does not fully account for the variability of pixel echo signals, which may lead to redundant or insufficient acquisition time for overly bright or dark scenes [39,40,41,42], so this imaging method is not optimal.
Another class of 2D imaging algorithms determines pixel depth information from the photon echo intensity; among these, the classic first-photon method [43] and first-photon group method [40] use the "first" relevant photon information to reconstruct the target scene. However, according to [43,44], the first-photon algorithm is not suitable for complex environments, especially those with low signal-to-noise ratios, while the first-photon group algorithm has better anti-noise performance but requires the "group" information needed for imaging to be manually tuned to the environment, making it difficult to adapt to changing conditions.
An efficient single-photon lidar imaging algorithm that can effectively extract target information is urgently needed for high-resolution detection in variable and extreme environments, especially underwater. In this paper, we report a sequential two-mode fusion (STMF) imaging algorithm, which exploits the fundamental property that the scene information photons in the echo signal are distributed differently in the time histogram from the noise photons in order to extract effective information. Compared with the current high-performance first-photon group algorithm, this method can quickly adapt to the current scene and perform high-quality depth and reflection intensity imaging without any parameter tuning. Finally, we verify the convenience and effectiveness of the method through land and underwater experiments.

2. STMF Imaging Algorithm Modeling Approach

Signal photon counts have a small variance and show highly concentrated Gaussian distribution characteristics, whereas noise photon counts have a large variance: the signal tends to cluster in time while the noise photons are scattered throughout the whole period. Based on this characteristic, we propose the STMF imaging algorithm, shown schematically in Figure 1.
In Figure 1, the strategy of the STMF algorithm to reconstruct the target scene is divided into four steps, which are (1) data preprocessing, (2) maximal group value estimation, (3) pixel value filtering and replacement, and (4) total variation smoothing constraints.
The data preprocessing step includes data reconstruction and data cleaning steps, which are used to construct the returned array of the system into an array of image pixels, remove noise initially, and locate the approximate range of the target. The principle is shown in Figure 2.
To find a specific target, a target time-range window can be set and scanned near the superimposed peak point; this not only filters out most of the noise outside the window but also locates the target range, making the imaging more targeted. This capability is an advantage that other algorithms lack.
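The windowed scan described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, the window width parameter, and the use of the summed (superimposed) histogram peak as the window centre are assumptions based on the description in the text.

```python
import numpy as np

def locate_target_window(histograms, window_bins):
    """Locate the target's approximate time range (preprocessing sketch).

    `histograms` has shape (H, W, num_of_bin): one photon-count histogram
    per pixel. The per-pixel histograms are superimposed, the peak of the
    summed histogram marks the likely target return, and only bins inside
    a window around that peak are kept, filtering most out-of-window noise.
    """
    summed = histograms.sum(axis=(0, 1))           # superimposed histogram
    peak = int(np.argmax(summed))                  # approximate target bin
    lo = max(0, peak - window_bins // 2)
    hi = min(histograms.shape[2], peak + window_bins // 2 + 1)
    windowed = np.zeros_like(histograms)
    windowed[:, :, lo:hi] = histograms[:, :, lo:hi]
    return windowed, (lo, hi)
```

Counts outside the window are zeroed, so later steps only see photons near the superimposed peak.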
In the subsequent information extraction process, firstly, the group interval in which the signal echo photons are located is determined according to the aggregation characteristics of the echo signal to initially filter the clutter interference and reduce the dependence on the subsequent photons. Secondly, the sum of the number of group photons and the corresponding time index are taken in the photon group interval, and the depth and reflectivity maps are then obtained after the screening and replacement steps and the total variational smoothing constraint steps. The algorithm interpretation is shown in Algorithm 1.
In Algorithm 1, num_of_bin is the number of bins in each pixel histogram, which can also be denoted $\chi$. num_of_pixel is the number of pixels in each image frame. time_of_per_bin is the duration of each bin. location_of_data is the storage location of the acquired echo data. $n_{x,y}^{indices,num}$ denotes the $num$-th bin of the $indices$ group of the current pixel. $T_{indices,max}^{\chi}$ denotes the bin corresponding to the maximum photon number within an $indices$ group among all bin data of the current pixel. $\mathrm{Diff}(T_{indices,max}^{\chi})$ and $\mathrm{Diff}(N_{sum,group\text{-}range}^{\chi})$ are the differences between the mean time and mean photon number in the scanning window of the current pixel and the current pixel's time and photon values. $\psi$ is the parameter used to adjust the pixel bias. $\chi_{x,y}^{new}$ and $n_{x,y}^{\chi,new}$ are the reflection intensity and time matrices adjusted by the parameter $\psi$.
Algorithm 1. STMF algorithm interpretation
(Algorithm 1 is presented as a figure in the original article.)

2.1. Maximum Group Estimation

Assuming that $\chi$ is the number of bin cells in each pixel histogram in the test, the pixel times and photon numbers obtained in the test are given in Equation (1), the times corresponding to the maximum group data are given in Equation (2), and the numbers of echo photons are given in Equation (3).
$$t_{x,y}^{1}, t_{x,y}^{2}, t_{x,y}^{3}, \ldots, t_{x,y}^{\chi} \qquad n_{x,y}^{1}, n_{x,y}^{2}, n_{x,y}^{3}, \ldots, n_{x,y}^{\chi} \tag{1}$$
$$t_{x,y}^{\gamma,1}, t_{x,y}^{\gamma,2}, t_{x,y}^{\gamma,3}, \ldots, t_{x,y}^{\gamma,\zeta} \tag{2}$$
$$n_{x,y}^{\gamma,1}, n_{x,y}^{\gamma,2}, n_{x,y}^{\gamma,3}, \ldots, n_{x,y}^{\gamma,\zeta} \tag{3}$$
In the above formulas, $\gamma$ is the ordinal number of the pixel group at position $(x,y)$, $\zeta$ is the ordinal number of the bin within the group, $t_{x,y}^{\chi}$ is the time of the $\chi$-th bin of the pixel at $(x,y)$, $t_{x,y}^{\gamma,\zeta}$ is the time of the corresponding bin within the group, and $n_{x,y}^{\gamma,\zeta}$ is the number of photons arriving at that time. $(x,y)$ in Equations (1)–(3) denotes the coordinates of the pixel. After obtaining the photon data from Equation (3), the index in Equation (2) corresponding to the maximum value in Equation (3) can be obtained; the photon number and time index are given in Equation (4) and Equation (5), respectively.
$$N_{x,y}^{\gamma,indices} = \sum_{\zeta} n_{x,y}^{\gamma,\zeta} \tag{4}$$
$$T_{x,y}^{\gamma,indices} = \mathrm{indices}\left[\arg\max\left(n_{x,y}^{\gamma,1}, n_{x,y}^{\gamma,2}, n_{x,y}^{\gamma,3}, \ldots, n_{x,y}^{\gamma,\zeta}\right)\right] \tag{5}$$
In the above equations, $n_{x,y}^{\gamma,\zeta}$ is the number of photons arriving at the corresponding time in the bin cell of the group, $N_{x,y}^{\gamma,indices}$ is the total number of photons within the pixel group at $(x,y)$, $T_{x,y}^{\gamma,indices}$ is the time index corresponding to the bin with the maximum photon value within the pixel group at $(x,y)$, and $\gamma$ is the pixel group serial number at $(x,y)$. To reduce pixel value loss when the number of echo photons is low, we sum the photons in the group when determining the current pixel's photon number.
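Equations (4) and (5) can be sketched numerically as below. This is an illustrative reconstruction under stated assumptions: the paper does not specify how bins are grouped, so consecutive non-overlapping groups of `group_size` bins are assumed, and the "maximal group" is taken as the group with the largest photon sum.

```python
import numpy as np

def max_group_estimate(hist, group_size=4):
    """Maximal-group estimation sketch for histograms of shape (H, W, B).

    Each pixel's histogram is split into consecutive groups of
    `group_size` bins; the group with the largest photon sum is selected.
    N is the photon sum within that group (Eq. (4)) and T is the
    histogram bin index of the group's largest count (Eq. (5)).
    """
    H, W, B = hist.shape
    G = B // group_size
    grouped = hist[:, :, :G * group_size].reshape(H, W, G, group_size)
    group_sums = grouped.sum(axis=3)               # photons per group
    best = np.argmax(group_sums, axis=2)           # maximal group gamma
    N = np.take_along_axis(group_sums, best[..., None], axis=2)[..., 0]
    # bins of the best group, then the global index of its peak bin
    in_group = np.take_along_axis(
        grouped, best[..., None, None], axis=2)[..., 0, :]
    T = best * group_size + np.argmax(in_group, axis=2)
    return N, T
```

Summing over the whole group (rather than taking only the peak bin) is what reduces pixel loss at low echo photon counts, as noted in the text.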
The initial reflection intensity map and depth map of the target scene can be obtained from the maximum group photon sum per pixel $N_{x,y}^{\gamma,indices}$ and the corresponding time index $T_{x,y}^{\gamma,indices}$. The diagram of pixel value selection is shown in Figure 3.

2.2. Pixel Value Filtering and Replacement

Because the depth and reflection intensity of the current pixel and the adjacent pixel are approximately equal, this relationship can be used to replace the anomalous pixel [44]. Assuming that the root-mean-square width of the system response function (the distribution function of the number of single pulse-echo photons on the time axis) is η , the median replacement strategy is shown in Figure 4.
(1) To realize the scanning replacement of the edge pixels of the image, one layer of the 0-value pixel is added to the edge of the original image, i.e., assuming that the original image is 64 × 64 pixels, after adding the edge pixels, it becomes an image of 66 × 66 pixels in size; the schematic of the 0-value expansion and scanning is shown in Figure 5.
(2) Utilizing the correlation of spatial pixels, the current pixel time value $T_{x,y}$ is compared with the average value of the spatial pixels $T_{x,y}^{average}$, and whether to change the current pixel value is determined by Equation (6).
$$T_{x,y}^{\gamma,new,indices} = \begin{cases} T_{x,y}^{\gamma,indices}, & \left|T_{x,y}^{\gamma,indices} - T_{x,y}^{average}\right| \le 2\eta \\[4pt] T_{x,y}^{average}, & \left|T_{x,y}^{\gamma,indices} - T_{x,y}^{average}\right| > 2\eta \end{cases} \tag{6}$$
In Equation (6), $T_{x,y}^{\gamma,new,indices}$ is the photon arrival time after the replacement strategy, $T_{x,y}^{average}$ is the time-of-flight value averaged over the spatial pixels, $T_{x,y}^{\gamma,indices}$ is the current pixel time-of-flight value, and $\eta$ is the root-mean-square width of the system response function. Note that $T_{x,y}^{average}$ is averaged over all pixels in the spatial neighbourhood, including the current pixel, rather than excluding the current pixel before averaging; this is because it is not possible to determine whether the current pixel or its neighbours are the noisy ones. When the absolute difference between the current pixel time-of-flight value $T_{x,y}^{\gamma,indices}$ and the mean value $T_{x,y}^{average}$ of the 3 × 3 spatial pixels is not greater than $2\eta$, the current pixel time remains unchanged; when it is greater than $2\eta$, the current pixel value is replaced with the mean value $T_{x,y}^{average}$.
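The zero-padding of step (1) and the replacement rule of Equation (6) can be sketched together as follows; the function name is illustrative, and a 3 × 3 neighbourhood (current pixel included, as stated in the text) is assumed.

```python
import numpy as np

def filter_and_replace(T, eta):
    """Pixel value filtering and replacement sketch (Eq. (6)).

    The time image is zero-padded by one pixel so edge pixels can be
    scanned (e.g. 64x64 -> 66x66), each pixel is compared with the 3x3
    spatial mean, and values deviating by more than 2*eta are replaced
    with that mean.
    """
    padded = np.pad(T.astype(float), 1, mode="constant")  # 0-value border
    H, W = T.shape
    out = T.astype(float).copy()
    for x in range(H):
        for y in range(W):
            avg = padded[x:x + 3, y:y + 3].mean()   # 3x3 average, self included
            if abs(T[x, y] - avg) > 2 * eta:
                out[x, y] = avg                      # replace anomalous pixel
    return out
```

With the root-mean-square width $\eta$ of the system response as the scale, isolated outliers well beyond $2\eta$ are pulled back to the local mean while consistent pixels pass through unchanged.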
After replacement, the difference between the abnormal values (time value/distance value) and the surrounding pixels can be observed. If too many abnormal values remain, the spatial pixel group range needs to be reset or a secondary replacement can be considered.
Compared with the first-photon group imaging algorithm, a post-feedback step is introduced for pixel value screening and replacement in the STMF imaging algorithm. When the evaluation index after replacement is below the threshold, resetting the spatial pixel group range or secondary replacement can be considered.

2.3. Total Variation Smoothing Constraints

After obtaining the replaced pixels data, the image is smoothed using the optimal image total variation smoothing constraint method [44,45,46]. The smoothing rule is defined by Equation (7), and its flow diagram is shown in Figure 6.
$$\widetilde{T}_{x,y}^{\gamma,new,indices} = \arg\min_{\widetilde{T}_{x,y}^{\gamma,new,indices}} \left\|\widetilde{T}_{x,y}^{\gamma,new,indices} - T_{x,y}^{\gamma,new,indices}\right\| + \psi\, f\!\left(\widetilde{T}_{x,y}^{\gamma,new,indices}\right) \tag{7}$$
In Equation (7), $\widetilde{T}_{x,y}^{\gamma,new,indices}$ is the time matrix of $x \times y$ pixel size after the smoothing constraint, with value range $0 \le \widetilde{T}_{x,y}^{\gamma,new,indices} \le T_{x,y}^{new}$. $T_{x,y}^{\gamma,new,indices}$ is the time matrix after spatial pixel replacement, and $\psi$ is the parameter adjusting the bias of $\widetilde{T}_{x,y}^{\gamma,new,indices}$. If $\psi$ is small, then $\widetilde{T}_{x,y}^{\gamma,new,indices}$ and $T_{x,y}^{\gamma,new,indices}$ will be closer; if $\psi$ is large, then $\widetilde{T}_{x,y}^{\gamma,new,indices}$ will be smoother. In the experiment, to keep the flight time as accurate as possible while smoothing the image, we take $\psi = 0.1$. In Equation (7), $f(\widetilde{T}_{x,y}^{\gamma,new,indices})$ is defined by Equation (8).
$$f\!\left(\widetilde{T}_{x,y}^{\gamma,new,indices}\right) = \sum_{x=1}^{X-1}\sum_{y=1}^{Y-1} \sqrt{\left(\widetilde{T}_{x,y}^{\gamma,new,indices} - \widetilde{T}_{x+1,y}^{\gamma,new,indices}\right)^{2} + \left(\widetilde{T}_{x,y}^{\gamma,new,indices} - \widetilde{T}_{x,y+1}^{\gamma,new,indices}\right)^{2}} + \sum_{x=1}^{X-1}\left|\widetilde{T}_{x,Y}^{\gamma,new,indices} - \widetilde{T}_{x+1,Y}^{\gamma,new,indices}\right| + \sum_{y=1}^{Y-1}\left|\widetilde{T}_{X,y}^{\gamma,new,indices} - \widetilde{T}_{X,y+1}^{\gamma,new,indices}\right| \tag{8}$$
In Equation (8), $f(\widetilde{T}_{x,y}^{\gamma,new,indices})$ is the function defining the magnitude of the difference between the current pixel and its neighbours, $\widetilde{T}_{x,y}^{\gamma,new,indices}$ is the time matrix of $x \times y$ pixel size after the smoothing constraint, $\widetilde{T}_{x+1,y}^{\gamma,new,indices}$ is the horizontally neighbouring pixel value, and $\widetilde{T}_{x,y+1}^{\gamma,new,indices}$ is the vertically neighbouring pixel value.
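A simple way to approximate the minimization in Equation (7) is gradient descent on a smoothed version of the objective. The sketch below is an illustration, not the paper's solver: it assumes a squared data-fidelity term and an eps-smoothed isotropic TV penalty of the form of Equation (8), and the `iters`, `lr`, and `eps` values are illustrative choices not given in the paper.

```python
import numpy as np

def tv_smooth(T, psi=0.1, iters=300, lr=0.1, eps=1e-8):
    """Gradient-descent sketch of the TV smoothing constraint (Eq. (7))."""
    X = T.astype(float).copy()
    for _ in range(iters):
        # forward differences (zero at the last row/column, as in Eq. (8))
        px = np.zeros_like(X)
        py = np.zeros_like(X)
        px[:-1, :] = X[1:, :] - X[:-1, :]
        py[:, :-1] = X[:, 1:] - X[:, :-1]
        w = 1.0 / np.sqrt(px ** 2 + py ** 2 + eps)
        fx, fy = w * px, w * py
        # divergence of the normalised gradient field (TV subgradient)
        div = fx + fy
        div[1:, :] -= fx[:-1, :]
        div[:, 1:] -= fy[:, :-1]
        # fidelity pulls X back toward the replaced-pixel input T;
        # the TV term (weighted by psi) smooths it
        X -= lr * (2.0 * (X - T) - psi * div)
    return X
```

Consistent with the discussion of $\psi$ above, a small `psi` keeps the output close to the input time matrix, while a larger `psi` yields a smoother image.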
After the maximum group estimation, pixel value filtering and replacement, and the total variation smoothing constraint step, the final depth image can be obtained.

3. Terrestrial Imaging Test Analysis

To verify the superiority of the STMF imaging algorithm over the current high-performance first-photon group imaging algorithm, multiple sets of land experiments were performed. In addition, since underwater experiments are more complicated and costly than land experiments, the effects of external light intensity, the number of laser pulses, and the distance to the target scene on the imaging results were analyzed on land to provide experimental references for the further underwater experiments.

3.1. Comparative Analysis of Imaging Effect

The first-photon group imaging algorithm shows great superiority relative to previous imaging algorithms. To verify that the STMF imaging algorithm reported in this paper is more convenient and images better than the first-photon group algorithm, we first conducted single-photon imaging tests on land. We built a single-photon lidar, whose working principle is shown in Figure 7.
The MCC-532-5 laser (wavelength 532 nm, repetition frequency 5 kHz, single-pulse energy 30 μJ) emits the pulsed laser, and an electrical synchronization pulse generated at emission is captured by the time-correlated single-photon counting (TCSPC) module, which starts timing. The pulsed laser passes through the shaping optical path to increase the beam diameter and reduce the divergence angle, and scans the target after being redirected by the two-axis galvanometer. When photons return from the target point by diffuse reflection, they are redirected by the two-axis galvanometer again and coupled into the echo receiving module. The SPAD detector generates an electrical pulse immediately upon receiving the echo photons; the TCSPC module subtracts the synchronization time at laser emission from the detection time to obtain the total flight time of the photons, and the computer finally calculates the distance to the target. The input and output signals of each module in the system are managed by the signal control module, and the data stream of the TCSPC module is read directly by the host computer and then processed by the imaging algorithm in software. In this system, the FPGA is responsible for the key tasks of data synchronization and system control: it ensures accurate synchronization between laser emission and detection through timing control and controls the working modes of the galvanometer and other modules. The heat exchanger mainly performs thermal management and thermal control of the laser, galvanometer, TCSPC, and other modules to maintain stable operation of the system.
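The range calculation in the last step is the standard time-of-flight relation, shown here for illustration (the function name is ours, not the system's):

```python
# Round-trip time of flight to target range: the TCSPC module records the
# interval between the laser-sync signal and the SPAD detection event;
# halving the round-trip optical path gives the one-way distance.
C_VACUUM = 299_792_458.0     # speed of light in vacuum, m/s

def tof_to_distance(t_flight_s, c=C_VACUUM):
    """Target distance in metres for a measured flight time in seconds."""
    return c * t_flight_s / 2.0
```

For the roughly 40 m terrestrial scene used in these tests, the expected round-trip time is about 2 × 40 / c ≈ 267 ns; for underwater use, c should be divided by the refractive index of water (about 1.33).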
In the experiment, the imaging scene is about 40 m away, the number of scanned pixels is 64 × 64, and the imaging targets are three sets of targets: the regular target plate (Figure 8), the complex geometry target (Figure 9), and the multi-depth parameters target (Figure 10). We use the first-photon group algorithm and the STMF imaging algorithm to image the three scenes, respectively, and comprehensively analyze the imaging effects using the Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Structural Similarity Index (SSIM). MSE measures the difference between the model's predicted values and the actual observed values, calculated by averaging the squared differences between them. RMSE is the square root of MSE and provides a more intuitive, standardized measure of prediction error. SSIM measures the similarity of two images, taking brightness, contrast, and structural information into account. Selecting MSE, RMSE, and SSIM improves the evaluation of single-photon lidar imaging results because they provide information from different perspectives: MSE and RMSE are directly concerned with the difference between predicted and actual values and are suitable for quantifying the prediction accuracy of the model, while SSIM focuses on the visual quality of the image and reflects the human eye's perception of changes in image structure, making it suitable for evaluating the visual similarity of the imaging results. Together, these three parameters evaluate both the accuracy and the image quality of single-photon lidar.
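For reference, the three metrics can be sketched as below. The MSE and RMSE definitions follow the text; the SSIM shown is a simplified global (single-window) form with the standard constants, whereas library implementations such as scikit-image use local sliding windows.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between a reconstruction and a reference."""
    return float(np.mean((a - b) ** 2))

def rmse(a, b):
    """Root mean squared error: the square root of MSE."""
    return float(np.sqrt(mse(a, b)))

def ssim(a, b, L=1.0):
    """Global SSIM sketch over the whole image (L = dynamic range).

    Uses the standard stabilising constants C1 = (0.01 L)^2 and
    C2 = (0.03 L)^2; a single window is used for simplicity.
    """
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))
```

Lower MSE/RMSE and higher SSIM (close to 1 for identical images) indicate better reconstruction, which is how the tables below should be read.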
It should be noted that the first-photon group algorithm currently shows great advantages among single-photon imaging algorithms (maximum likelihood estimation, peak method, cross-correlation method, Shin algorithm, unmixing algorithm, first-photon imaging algorithm, etc.), and its simple logic also makes it easier to deploy [46,47,48,49]. Therefore, our proposed algorithm is compared directly with the first-photon group algorithm.
In Figure 8, the part of the imaging target facing the single-photon lidar system is a regular 10 cm × 10 cm square; the rest is a base, which is not part of the main imaging scene and mainly supports and adjusts the height. In Figure 9, the part facing the system contains a circle, rectangles of different sizes, and other shapes; again, the remaining part is a supporting base. In Figure 10, the part facing the system has six squares of different depths (the middle square is the 0-depth datum for viewing the entire target, and the other numbers give the depth from the datum), each 10 cm in length and width, with a minimum depth resolution of 5 cm. In addition, as shown in Figure 10, the three squares in the right column have three different reflectivities, and the experiment will also test whether the STMF algorithm can distinguish targets with the same scene depth but different reflectivity.
It is worth mentioning that, to ensure the randomness and fairness of the test, the data of the three targets are randomly selected from the results of multiple data acquisition, which are tested and evaluated using the first-photon group and the STMF imaging algorithm, respectively. In addition, the key to whether effective information can be collected from the echo signal is the second step of the algorithm, and the subsequent step is the noise reduction step after the information extraction, so the imaging effect is compared between the first-photon group and the STMF imaging algorithm through the information extraction step.
Figure 11, Figure 12 and Figure 13 show the echo depth maps and reflectivity maps formed at low pulse numbers, so the number of reflected photons (noise photons + signal photons) is relatively small; the reference maps, imaged with 5000 pulses at high detail, are shown in Figure 14.
The first-photon group and STMF imaging algorithms are evaluated in terms of MSE, RMSE, and SSIM, as shown in Table 1 and Table 2 (I represents the regular target plate, II the complex geometry target, and III the multi-depth parameters target).
A comprehensive comparison of the data in Table 1 and Table 2 shows that the performance of the two algorithms differs across scenarios at low pulse numbers when the MSE, RMSE, and SSIM values are considered together, although the first-photon group imaging algorithm shows a slight advantage in some scenarios. For example, in the second target imaging experiment, the RMSE of the STMF algorithm is reduced by 8.7% compared with the first-photon group algorithm, a modest error reduction, while the SSIM is increased by 12.3%, reflecting better similarity to the original scene. For the other targets, STMF restores the original scene better than the first-photon group algorithm. The display range used when rendering the image greatly affects the evaluation values, and the first preprocessing step of the STMF algorithm locates this range, which is another advantage the first-photon group algorithm lacks. The STMF imaging results can be seen in Figure 15. Therefore, on the whole, the STMF imaging algorithm performs better.
From the multi-depth parametric target analysis, it can be seen that the depth resolution of the system equipped with STMF can be up to 2.5 cm (middle column) in the case of very low echo photons. To further demonstrate the practical applicability of the STMF algorithm relative to the first-photon group algorithm, the following is a continuation of the comparison of the two algorithms from the perspective of real-world applications.
The μ value is an important parameter of the first-photon group single-photon imaging technique. According to the discussion in reference [45] and the parameter simulation and test of the first-photon group algorithm above, a small μ value indicates that the definition of the first signal photon group is very tight, and a large number of very concentrated photon counts must occur to obtain the first signal photon group. The result is high robustness to noise, because the noise just accumulates, but the probability of satisfying the condition is smaller, and the efficiency of the whole method will be reduced, that is, more accumulation time will be needed to obtain the first signal photon group aggregation condition. On the contrary, if the μ value is large, the conditions of the first signal photon group are easily satisfied, and the imaging efficiency of the whole system will increase, but the probability that the noise just meets the aggregation conditions will increase, and the robustness to the noise will decrease. Therefore, the determination of the μ value is very important. However, because of the above characteristics, specific parameters need to be adjusted in different scenarios and targets with different reflectance, and the practical application is more complicated. This is exactly what the STMF imaging algorithm avoids.
From Figure 16, Figure 17, Figure 18 and Figure 19, it can be seen that the choice of threshold has a direct impact on the imaging results; the results for μ = 1, 2, 3, 4, and 5 need to be viewed in combination with the MSE, RMSE, and SSIM values rather than any single value, because the context also affects the metric values. In practical application scenarios, since the μ value cannot be tuned autonomously, the actual imaging results will be greatly degraded, and the target may even disappear entirely, as shown in Figure 16 and Figure 17. Determining the μ value from prior experience is therefore clearly unsuited to changeable environments, so the application of the first-photon group algorithm in real environments has certain limitations. The STMF imaging algorithm reported in this paper employs a maximum group selection strategy, which retains the first-photon group algorithm's ability to select signal clusters while avoiding the shortcomings of parameter setting.

3.2. STMF Imaging Algorithm Noise Reduction Results

The advantages of the STMF imaging algorithm can be seen from the above analysis; a further test was conducted to observe the imaging results of the STMF imaging algorithm in more detail. At 0.1 Lux light intensity, with 64 × 64 scanning pixels, 5 pulses at 5 kHz repetition frequency, 30 μJ single-pulse energy, and a 532 nm laser wavelength at 40 m, the three sets of target imaging results and the imaging reference maps (maximum group estimation + pixel value filtering and replacement + total variation smoothing constraints) are shown in Figure 20, Figure 21, and Figure 22, respectively.
In Table 3, I represents the regular target plate, II the complex geometry target, and III the multi-depth parameters target. Observing Figure 20, Figure 21 and Figure 22, for the three types of targets the MSE decreased by (24.3%, 13.0%), (31.5%, 17.3%), and (72.2%, 47.3%), respectively, and the SSIM improved by 15.6%, 19.7%, and 22.4%, respectively. The STMF imaging algorithm effectively extracts the target information after the maximum group estimation even with very few echo photons (fewer than 5 echo photons per pixel of the scene) (Figure 22, left column), although the echo signal still contains many missing pixel values and noise (Figure 22, left column). After the pixel value screening and replacement and the total variation smoothing constraint, and benchmarking against Table 3, the pixel anomalies clearly disappear after the noise reduction steps, which shows that the two subsequent steps of the STMF imaging algorithm have very good noise reduction and compensation characteristics.
In addition, from Figure 22, we can observe that even at a very low echo photon count, the algorithm can clearly distinguish the three targets of 10 cm side length with the same depth but different reflectivity on the rightmost side of the target, as well as the 2.5 cm depth difference (middle column), which demonstrates the excellent performance of the algorithm.

3.3. Analysis of Imaging Influences

In the process of applying single-photon lidar imaging in the actual environment, the external lighting conditions, the scene distance, and the number of pulses may have an impact on the imaging effect. The increase in the number of pulses can significantly improve the temporal resolution and enhance the signal-to-noise ratio through signal accumulation, thus improving the clarity and accuracy of the imaging. The distance of the scene is directly related to the intensity and depth resolution of the signal. The farther the distance, the greater the signal attenuation, which puts forward higher requirements for the design of the imaging system. The level of background noise is determined by the light intensity, and too much light will increase the noise and reduce the image quality. Taking these key factors into consideration, the imaging performance of single-photon lidar can be effectively improved through fine adjustment and algorithm optimization to ensure that high detection accuracy can be maintained under different environmental conditions. Thus, the influencing factors are analyzed, which can be used as a reference for the practical application.

3.3.1. Light Intensity Impact Analysis

The MSE, RMSE, and SSIM values used to analyze the direct imaging effect of the STMF imaging algorithm on the multi-depth parameters target scene are shown in Table 4 (selected example points). To visualize the imaging effect, the analyzed data are plotted in Figure 23.
It can be seen that as the external light intensity increases, the MSE and RMSE increase and the SSIM decreases, which shows that stronger ambient light degrades the imaging effect. The external light intensity manifests mainly as additional noise photons among the received photons. In this respect, single-photon lidar is better suited to practical detection in the dark ocean.
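Why ambient light degrades the reconstruction can be shown with a small simulation: background photons fill every time bin of a pixel's histogram, and at high rates they can outvote the true range bin. The bin counts and rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def peak_found(signal_mean, bg_per_bin, n_bins=200, true_bin=120):
    """Simulate one pixel's photon-count histogram: Poisson background in
    every bin plus a Poisson signal concentrated in the true range bin,
    then check whether the histogram argmax still lands on that bin."""
    hist = rng.poisson(bg_per_bin, n_bins).astype(float)
    hist[true_bin] += rng.poisson(signal_mean)
    return int(np.argmax(hist)) == true_bin

def hit_rate(signal_mean, bg_per_bin, trials=400):
    """Fraction of simulated pixels whose range bin is correctly detected."""
    return sum(peak_found(signal_mean, bg_per_bin) for _ in range(trials)) / trials
```

For a fixed mean signal of 5 photons, the hit rate drops sharply as the per-bin background grows, mirroring the MSE/RMSE rise and SSIM fall in Table 4.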

3.3.2. Scene Distance Impact Analysis

The MSE, RMSE, and SSIM values used to analyze the direct imaging effect of the STMF imaging algorithm on the multi-depth parameters target scene are shown in Table 5 (selected example points). To visualize the imaging effect, the analyzed data are plotted in Figure 24.
In Figure 24, the anomalous value at the 10 m scene is mainly due to strong noise inside the system; ignoring these data, it can be seen that as the scene distance increases, the MSE and RMSE increase and the SSIM decreases, which indicates that a larger scene distance degrades the imaging effect. Accordingly, an optimum detection distance exists for single-photon lidar in actual detection, and future tests can determine the optimum detection distance of this system to guide its optimization.

3.3.3. Pulse Number Impact Analysis

The MSE, RMSE, and SSIM values used to analyze the direct imaging effect of the STMF imaging algorithm on the multi-depth parameters target scene are shown in Table 6 (selected example points). To visualize the imaging effect, the analyzed data are plotted in Figure 25.
In Figure 25, it can be seen that as the number of pulses increases, the MSE and RMSE decrease and the SSIM increases, which indicates that more pulses enhance the imaging effect. Single-photon lidar therefore benefits from cumulative acquisition time in actual detection, and the number of detection pulses can be appropriately increased in subsequent tests to guide system optimization. To further remove noise and enhance the imaging effect, we employed contrast enhancement, and high-frequency information enhancement was completed with PnP-ADMM algorithm iterations. Wavelet-transform and ESRGAN enhancement were also considered, but their evaluation indices were inferior to this method. In future research, we will try non-local spatial pixel replacement, superpixel methods, and other approaches.
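The cumulative-time advantage follows from shot-noise statistics: both signal and background counts grow linearly with the pulse count, so the SNR of an accumulated histogram bin grows as the square root of the pulse count. A minimal sketch, with illustrative per-pulse rates:

```python
import math

def shot_noise_snr(n_pulses, sig_per_pulse=0.5, bg_per_pulse=0.2):
    """Shot-noise-limited SNR of one accumulated histogram bin. Signal and
    background grow linearly with n_pulses, so SNR scales as sqrt(n_pulses).
    The per-pulse rates are illustrative placeholders."""
    s = n_pulses * sig_per_pulse
    b = n_pulses * bg_per_pulse
    return s / math.sqrt(s + b)

# Quadrupling the pulse count doubles the SNR (sqrt(N) scaling)
assert abs(shot_noise_snr(400) / shot_noise_snr(100) - 2.0) < 1e-9
```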
Overall, the single-photon lidar we built, equipped with the STMF imaging algorithm reported in this paper, demonstrated good imaging performance and practicality. Its key imaging factors were analyzed in detail and will be taken into account to optimize the system in future experiments.

4. Analysis of Underwater Imaging Tests

In the underwater environment, traditional imaging technology faces many challenges due to the strong absorption and scattering of light, and it is difficult to obtain high-contrast, high-resolution images; the practical application of single-photon lidar underwater is therefore of great significance. Subject to the test conditions, the test was carried out in a constructed pool with a length of 12 m, a water depth of 1 m, and a water attenuation coefficient of 0.457 m−1. The test environment is shown in Figure 26.
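The severity of the 0.457 m−1 attenuation can be made concrete with the Beer-Lambert law; over the 12 m pool length this amounts to roughly 5.5 attenuation lengths one way:

```python
import math

def transmission(atten_coeff_per_m, path_m):
    """Beer-Lambert transmission of light through water."""
    return math.exp(-atten_coeff_per_m * path_m)

c, L = 0.457, 12.0   # pool attenuation coefficient (1/m) and path length (m)
print(f"attenuation lengths (one way): {c * L:.2f}")      # ≈ 5.48
print(f"one-way transmission:    {transmission(c, L):.2e}")
print(f"round-trip transmission: {transmission(c, 2 * L):.2e}")
```

Only a few thousandths of the transmitted light survives a one-way pass, which is why the single-photon sensitivity of the detector matters so much in this test.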
The lateral resolution, distance resolution, and color resolution were tested with the multi-depth parameters target. To further test the resolving ability of the algorithm, the top square in the middle column of the target was decomposed into four small square blocks with side lengths of 2.5 cm and depth differences of 2.5 cm, as shown in Figure 26c. The middle square is the 0-depth datum for viewing the entire target, and the remaining numbers give the depth from the datum. The test results are shown in Figure 27, Figure 28, Figure 29 and Figure 30. It should be noted that the pool is not wrapped with light-absorbing material, in order to simulate the actual environment.
The comparison before and after noise reduction using the first-photon group method in the same environment is shown in Figure 31, Figure 32, Figure 33 and Figure 34.
The test scene was at a distance of about 12 m, and the image obtained under 5000 pulses in an independent night test was used as a reference (Figure 22, right column) to evaluate the detail-retention effect via MSE, RMSE, and SSIM; the indices of the underwater test depth maps are shown in Table 7 and Table 8.
According to the results, the imaging map retains more detailed information, and increasing the number of pulses enhances the effectiveness. For the STMF algorithm, at both 50 and 500 pulses, the MSE and RMSE show a clear reduction and the SSIM a clear improvement. In contrast, the first-photon group algorithm can only recover the rough outline of the target; even when the display range is adjusted, the details cannot be obtained.
In addition, the imaging results in the right columns of Figure 27 and Figure 29 show that the STMF algorithm can distinguish the 10 cm targets of different reflectivity in the above environment, with a lateral resolution of up to 2.5 cm. From the first square in the middle column, it can be seen that the distance resolution also reaches 2.5 cm, which demonstrates the algorithm's excellent underwater detection performance. In contrast, although the first-photon group algorithm achieves good imaging effects on land, imaging complex targets in this challenging underwater environment is a great difficulty for it: in both the depth maps and the reflection intensity maps, only the approximate shape of the target can be seen, and the resolution is far from that of the STMF algorithm.
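The reported 2.5 cm distance resolution implies a specific timing requirement, since light travels more slowly in water. A quick check, taking the refractive index of water as 1.33 (an assumed textbook value, not a figure from the paper):

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # refractive index of water (assumed textbook value)

def bin_width_for_depth(delta_z_m, n=N_WATER):
    """Round-trip timing-bin width needed to resolve a depth step delta_z
    in a medium with refractive index n."""
    return 2.0 * n * delta_z_m / C_VACUUM

dt = bin_width_for_depth(0.025)
print(f"{dt * 1e12:.0f} ps")   # ≈ 222 ps for a 2.5 cm depth step in water
```

In other words, resolving 2.5 cm depth steps underwater requires timing resolution on the order of a couple of hundred picoseconds, which single-photon time-correlated counting readily provides.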

5. Conclusions

In this paper, we report a sequential two-mode fusion imaging algorithm that, through its sequential processing steps, achieves 2.5 cm lateral and 2.5 cm depth resolution on an autonomous single-photon lidar at a scene distance of 12 m in water with an attenuation coefficient of 0.457 m−1. Across the method description and all the experimental analysis, it was found that the STMF algorithm not only removes the frequent manual parameter adjustment required by the first-photon group algorithm, currently an algorithm with excellent imaging performance, but also effectively extracts information from small echo photon counts and in challenging underwater environments. This performance is a firm step forward for underwater high-resolution optical imaging. The algorithm and the matching single-photon lidar achieve fine imaging of complex targets underwater, and with upgraded test conditions, the underwater detection range is expected to improve further.
In future research, we will further optimize the algorithm's performance and reduce its imaging time, with the goal of real-time underwater high-resolution imaging. We will also study the transmission characteristics of a pulsed laser underwater, including its attenuation law in different water environments and its pulse-broadening effect, and investigate how to reduce the impact of scattering and absorption through technical means, in order to optimize the design of an underwater lidar system.

Author Contributions

Conceptualization, T.R.; methodology, T.R. and C.W.; software, T.R. and Q.L.; validation, T.R., Y.W. and Q.Z.; resources, C.W. and Z.Z.; data curation, T.R., Y.Z., J.L. and Y.W.; writing—original draft preparation, T.R.; writing—review and editing, C.W.; project administration, C.W.; funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key R&D Program of Shandong Province, China (Major Scientific and Technological Innovation Project), grant numbers 2022ZLGX04 and 2021ZLGX05.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All raw and processed data are available after publication from the corresponding author upon request.

Acknowledgments

I would like to thank my parents; without their encouragement, it would be difficult for me to have firm confidence in my research.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Schematic diagram of STMF imaging algorithm flow.
Figure 2. Schematic diagram of the data preprocessing steps. The information derived by the system is a two-dimensional array, and the pixels are not arranged by the target pixel position. We first reconstruct the order of the echo pixel data according to the system scanning law, and then superimpose the third-dimension data of all pixels containing the echo information of the current pixel. Since the reflection of the target is the most intense, the moment with the largest number of echo photons is the approximate distance of the target determined by all pixels “voting”. This is similar to the clustering rule.
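The "voting" described in the Figure 2 caption can be sketched in a few NumPy lines; the data-cube layout (height × width × time-bins) is an assumption about how the reordered pixel data are stored:

```python
import numpy as np

def coarse_range_bin(cube):
    """Sum every pixel's time histogram and return the bin with the most
    photons: each pixel 'votes' for the coarse target range."""
    return int(np.argmax(cube.sum(axis=(0, 1))))

# Synthetic 8 x 8 x 100 cube: most pixels echo near bin 42, plus sparse noise
rng = np.random.default_rng(seed=2)
cube = rng.poisson(0.02, size=(8, 8, 100))
cube[:, :, 42] += rng.poisson(3.0, size=(8, 8))
print(coarse_range_bin(cube))   # → 42
```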
Figure 3. Pixel value selection diagram. Select the group with the largest number of photons on the third-dimension time histogram of each pixel, and find the peak corresponding to the bin moment and the total number of photons in the group. The range of the group is based on the accumulation law of the reflected photons of the target and the empirical value after many experiments. Select 5 bin ranges to meet the measurement of various targets.
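The maximum-group selection of the Figure 3 caption (a 5-bin group, its total photon count, and its peak bin) can be sketched as follows; the histogram values are a hypothetical per-pixel example:

```python
import numpy as np

def max_group(hist, group_bins=5):
    """Return (total photons, peak-bin index) of the contiguous group of
    `group_bins` bins with the largest photon count in a pixel histogram."""
    sums = np.convolve(hist, np.ones(group_bins), mode="valid")
    start = int(np.argmax(sums))                      # group start bin
    peak = start + int(np.argmax(hist[start:start + group_bins]))
    return int(sums[start]), peak

hist = np.array([0, 0, 1, 4, 6, 3, 1, 0, 0, 0])
print(max_group(hist))   # → (15, 4): group covers bins 2-6, peak at bin 4
```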
Figure 4. Pixel replacement strategy in STMF imaging algorithm.
Figure 5. Schematic diagram of pixel value filtering and replacement. To scan the edge pixels as well, a layer of 0 values is extended without introducing added value. The scanning window scans horizontally from left to right with a step of 1, and pixels are replaced according to Equation (6); the window then moves to the next row until the scan is completed.
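The screening-and-replacement pass of Figure 5 can be sketched as below. The paper's actual replacement rule is its Equation (6); here a neighborhood-median rule is substituted purely as an illustrative stand-in, with the zero-padding described in the caption so that edge pixels are scanned too.

```python
import numpy as np

def screen_and_replace(img, threshold=3.0):
    """Illustrative stand-in for the pixel-screening step: zero-pad one
    layer, slide a 3x3 window, and replace any pixel deviating from its
    neighborhood median by more than `threshold` with that median.
    (The paper's actual rule is its Equation (6); this is an assumption.)"""
    padded = np.pad(img.astype(float), 1, mode="constant")
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]   # 3x3 neighborhood incl. center
            med = np.median(win)
            if abs(out[i, j] - med) > threshold:
                out[i, j] = med              # replace the anomalous pixel
    return out
```

A single hot pixel in an otherwise flat depth patch is pulled back to its neighborhood value, which is the "noise sieving" effect visible between the left and middle columns of Figures 20–22.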
Figure 6. Schematic diagram of the total variational smoothing constraint flow. The total variation regularized objective function plays a key role in the optimization of depth maps and reflection intensity maps, enhancing the block-ness of the image by minimizing the total variation while maintaining edge sharpness.
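The total-variation constraint of Figure 6 can be sketched as a plain gradient descent on an ROF-style objective. This is a simplified smooth-TV surrogate with periodic boundary handling for brevity, not the paper's exact solver:

```python
import numpy as np

def tv_smooth(img, lam=0.1, step=0.1, n_iter=100):
    """Gradient descent on 0.5*||u - img||^2 + lam * TV(u), using the
    differentiable surrogate sqrt(|grad u|^2 + eps) for the TV term."""
    eps = 1e-6
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (last row/column repeated to keep the shape)
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # divergence of the normalized gradient (periodic wrap for brevity)
        div = (gx / mag - np.roll(gx / mag, 1, axis=1)
               + gy / mag - np.roll(gy / mag, 1, axis=0))
        u -= step * ((u - img) - lam * div)   # data fidelity + TV descent
    return u
```

Minimizing the TV term flattens noisy regions while the data term preserves the large depth steps, which is exactly the "block-ness with edge sharpness" the caption describes.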
Figure 7. Single-photon lidar working schematic. 1: Upper computer, 2: Heat exchanger, 3: Beam expander, 4: Open-aperture reflector, 5: Galvanometer, 6: Beam and target, 7: Radar window glass, 8: Lens, 9: Filter.
Figure 8. Schematic diagram of regular target plate and dimensions (Unit/mm).
Figure 9. Schematic diagram of complex geometry target and dimensions (Unit/mm).
Figure 10. Schematic diagram of multi-depth parameters target and dimensions (Unit/mm).
Figure 11. Depth (top) and reflection intensity (bottom) maps of the regular target plate at pulse number 5, first-photon group imaging algorithm (left column, μ = 2), STMF imaging algorithm (right column).
Figure 12. Depth (top) and reflection intensity (bottom) maps of complex geometry target at pulse number 5, first-photon group imaging algorithm (left column, μ = 2), STMF imaging algorithm (right column).
Figure 13. Depth (top) maps and reflection intensity (bottom) maps of multi-depth parametric targets at pulse number 5, first-photon group imaging algorithm (left column, μ = 2), STMF imaging algorithm (right column).
Figure 14. High-resolution detailed display of the imaged 5000-pulse map as a reference map. The three targets are imaged separately. The first row shows the depth maps and the second row shows the reflection intensity maps.
Figure 15. Depth maps of the three targets after adjusting the pixel display range at a single pixel pulse count of 5 through the STMF imaging algorithm.
Figure 16. Reference Depth Image (pulse number 5000, top left). First-photon group imaging algorithm with parameters μ = 1 (top center), μ = 2 (top right), μ = 3 (bottom left), μ = 4 (bottom center), and μ = 5 (bottom right) for depth imaging results; the tests (MSE, RMSE, SSIM) are (2233.168382, 47.256411, 0.720168) (1944.474829, 44.096200, 0.719074) (3149.886326, 56.123848, 0.706244) (3636.324166, 60.301942, 0.723175) (3733.789040, 61.104738, 0.726569).
Figure 17. Reference reflection intensity image (pulse number 5000, top left). First-photon group imaging algorithm with parameters μ = 1 (top center), μ = 2 (top right), μ = 3 (bottom left), μ = 4 (bottom center), μ = 5 (bottom right) for reflected intensity imaging results; the tests (MSE, RMSE, SSIM) are (2940.371267, 54.225190, 0.636347) (4291.454902, 65.509197, 0.550790) (5879.315111, 76.676692, 0.568072) (6842.660833, 82.720377, 0.577880) (7041.941686, 83.916278, 0.578380).
Figure 18. Trend plot of MSE, RMSE, and SSIM evaluation values of depth map with parameter μ values by using the first-photon imaging algorithm.
Figure 19. Trend plot of MSE, RMSE, and SSIM evaluation values of reflection intensity map with parameter μ values by using the first-photon imaging algorithm.
Figure 20. Regular target maximum group estimation depth imaging map (left), depth optimization map after pixel value screening replacement + total variation smoothing constraints (middle), depth map of 5000-pulse STMF algorithm (right).
Figure 21. Complex geometry target maximum group estimation depth imaging map (left), optimization map after pixel value screening replacement + total variation smoothing constraints (middle), depth map of 5000-pulse STMF algorithm (right).
Figure 22. Multi-depth parameters target maximum group estimation depth imaging map (left), optimization map after pixel value screening replacement + total variation smoothing constraints (middle), depth map of 5000-pulse STMF algorithm (right).
Figure 23. Curves of MSE, RMSE, and SSIM values of the imaging results varying with external light intensity.
Figure 24. Curves of MSE, RMSE, and SSIM values of the imaging results varying with scene distance.
Figure 25. Curves of MSE, RMSE, and SSIM values of the imaging results varying with the number of pulses.
Figure 26. Underwater environment and target placement. (a) Underwater photography, (b) Target placement in water, (c) Small block target.
Figure 27. Depth imaging map of underwater multi-depth parameters target with pulse number 50 (left: before noise reduction; right: after noise reduction).
Figure 28. Reflection intensity imaging map of underwater multi-depth parameters target with pulse number 50 (left: before noise reduction; right: after noise reduction).
Figure 29. Depth imaging map of underwater multi-depth parameters target with pulse number 500 (left: before noise reduction; right: after noise reduction).
Figure 30. Reflection intensity imaging map of underwater multi-depth parameters target with pulse number 500 (left: before noise reduction; right: after noise reduction).
Figure 31. Depth imaging map of underwater multi-depth parameters target with pulse number 50 (left: before noise reduction; right: after noise reduction; μ = 4).
Figure 32. Reflection intensity imaging map of underwater multi-depth parameters target with pulse number 50 (left: before noise reduction; right: after noise reduction; μ = 4).
Figure 33. Depth imaging map of underwater multi-depth parameters target with pulse number 500 (left: before noise reduction; right: after noise reduction; μ = 29).
Figure 34. Reflection intensity imaging map of underwater multi-depth parameters target with pulse number 500 (left: before noise reduction; right: after noise reduction; μ = 29).
Table 1. Comparison of MSE, RMSE, and SSIM evaluation values of depth maps of first-photon group and STMF imaging algorithms.

Method                Target    MSE            RMSE         SSIM
First-photon group    I         1944.474829    44.096200    0.719074
First-photon group    II        2597.687134    50.967511    0.677240
First-photon group    III       4475.418699    66.898570    0.672388
STMF                  I         1877.190681    43.326559    0.799389
STMF                  II        3072.724840    55.432164    0.760533
STMF                  III       1508.434584    38.838571    0.786565
Table 2. Comparison of MSE, RMSE, and SSIM evaluation values of reflected intensity maps for first-photon group and STMF imaging algorithms.

Method                Target    MSE            RMSE         SSIM
First-photon group    I         4291.454902    65.509197    0.550790
First-photon group    II        3478.640814    58.980004    0.677963
First-photon group    III       4152.372382    64.438904    0.627105
STMF                  I         3064.493265    55.357865    0.577002
STMF                  II        2620.730375    51.193070    0.625744
STMF                  III       4482.540446    66.951777    0.567506
Table 3. Evaluation index values of three types of targets before and after the STMF noise reduction step.

Target    Stage                     MSE            RMSE         SSIM
I         before noise reduction    1236.794840    35.168094    0.789318
I         after noise reduction     937.125737     30.612509    0.912257
II        before noise reduction    878.321448     29.636488    0.749681
II        after noise reduction     601.322533     24.521879    0.897178
III       before noise reduction    1448.678334    38.061507    0.743624
III       after noise reduction     402.982322     20.074420    0.910434
Table 4. Depth imaging MSE, RMSE, and SSIM values with external light intensity (some example points are shown).

Intensity/Lux    MSE            RMSE         SSIM
0.1              1805.239107    42.488105    0.862387
16,146           2094.045478    45.760742    0.865294
50,828           3542.234712    59.516676    0.747120
Table 5. Depth imaging MSE, RMSE, and SSIM values with scene distance (some example points are shown).

Distance/m    MSE            RMSE         SSIM
25            1895.817939    43.540991    0.869116
40            2094.045478    45.760742    0.865294
55            2873.154976    53.601819    0.837796
Table 6. Depth imaging MSE, RMSE, and SSIM values with number of pulses (some example points are shown).

Pulse Number    MSE            RMSE         SSIM
5               1888.402562    43.455754    0.874237
20              1350.848840    36.753896    0.903139
50              1059.433603    32.548942    0.926321
Table 7. MSE, RMSE, and SSIM evaluation metrics of underwater depth map (STMF).

Pulse Number    Stage                     MSE            RMSE         SSIM
50              before noise reduction    4967.051578    70.477313    0.656152
50              after noise reduction     5162.811476    71.852707    0.689871
500             before noise reduction    3974.910685    63.046893    0.685066
500             after noise reduction     3978.879936    63.078363    0.716883
Table 8. MSE, RMSE, and SSIM evaluation metrics of underwater depth map (first-photon group).

Pulse Number    Stage                     MSE             RMSE          SSIM
50              before noise reduction    9343.743232     96.663040     0.611259
50              after noise reduction     10,459.830375   102.273312    0.615554
500             before noise reduction    10,699.728375   103.439491    0.631418
500             after noise reduction     10,820.452469   104.021404    0.617580
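The MSE, RMSE, and SSIM values reported in Tables 1–8 can be reproduced from a reconstructed depth (or reflection intensity) map and its ground-truth reference. The sketch below is a minimal NumPy illustration, not the paper's implementation: it assumes images on an 8-bit scale (data_range = 255) and uses a simplified single-window SSIM, so its SSIM values may differ slightly from implementations that use a sliding Gaussian window (e.g., scikit-image's default).

```python
import numpy as np

def evaluate(img, ref, data_range=255.0):
    """Return (MSE, RMSE, global SSIM) between a reconstruction and its reference.

    Simplified global SSIM (one window over the whole image); the paper's
    exact SSIM implementation may use local windows instead.
    """
    a = np.asarray(img, dtype=np.float64)
    b = np.asarray(ref, dtype=np.float64)

    # Mean squared error and its square root
    mse = float(np.mean((a - b) ** 2))
    rmse = float(np.sqrt(mse))

    # Standard SSIM stabilization constants for the given dynamic range
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )
    return mse, rmse, float(ssim)
```

A uniform offset of 3 gray levels between the two maps, for example, yields MSE = 9 and RMSE = 3, while identical maps give SSIM = 1.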
Rong, T.; Wang, Y.; Zhu, Q.; Wang, C.; Zhang, Y.; Li, J.; Zhou, Z.; Luo, Q. Sequential Two-Mode Fusion Underwater Single-Photon Lidar Imaging Algorithm. J. Mar. Sci. Eng. 2024, 12, 1595. https://doi.org/10.3390/jmse12091595