Article

Performance Exploration of Optical Wireless Video Communication Based on Adaptive Block Sampling Compressive Sensing

1 School of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
2 School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
3 Yangtze River Delta Research Institute (Jiaxing), Beijing Institute of Technology, Jiaxing 314000, China
* Author to whom correspondence should be addressed.
Photonics 2024, 11(10), 969; https://doi.org/10.3390/photonics11100969
Submission received: 23 August 2024 / Revised: 9 October 2024 / Accepted: 11 October 2024 / Published: 16 October 2024
(This article belongs to the Special Issue Next-Generation Free-Space Optical Communication Technologies)

Abstract
Optical wireless video transmission technology combines the advantages of high data rates, enhanced security, large bandwidth capacity, and strong anti-interference capabilities inherent in optical communication, establishing it as a pivotal technology in contemporary data transmission networks. However, video data comprises a large volume of image information, resulting in substantial data flow with significant redundant bits. To address this, we propose an adaptive block sampling compressive sensing algorithm that overcomes the limitations of sampling inflexibility in traditional compressive sensing, which often leads to either redundant or insufficient local sampling. This method significantly reduces the presence of redundant bits in video images. First, the sampling mechanism of the block-based compressive sensing algorithm was optimized. Subsequently, a wireless optical video transmission experimental system was developed using a Field-Programmable Gate Array chip. Finally, experiments were conducted to evaluate the transmission of video optical signals. The results demonstrate that the proposed algorithm improves the peak signal-to-noise ratio by over 3 dB compared to other algorithms, with an enhancement exceeding 1.5 dB even in field tests, thereby significantly optimizing video transmission quality. This research contributes essential technical insights for the enhancement of wireless optical video transmission performance.

1. Introduction

Since its emergence, wireless data transmission has become increasingly prevalent due to its cost-effectiveness, high degree of flexibility, and ease of implementation. Video data, as a critical component of Internet of Things (IoT) services, represent a significant area of research, particularly in the context of ensuring reliable transmissions within wireless networks. According to a recent report by Cisco [1], mobile video traffic accounted for up to 78% of global mobile data traffic in 2021, reflecting the growing imperative for optimized video transmission strategies within wireless communication infrastructures. The sequential nature of video, consisting of consecutive images, places considerable demands on transmission links due to its substantial data volume. To address these challenges, researchers have extensively explored various strategies to optimize the use of limited transmission bandwidth while maintaining high-quality video delivery. Traditional video compression techniques remain the most commonly employed approach [2,3]; however, these methods often require redundant coding, which not only increases computational complexity but also adds unnecessary redundancy to the compressed data stream, thereby diminishing overall transmission efficiency [4]. To address the high data volume inherent in video transmission, media practitioners have developed applications that leverage Device-to-Device (D2D) communication to offload cellular network traffic, thereby alleviating the burden on the downlink transmission of operational networks. From this perspective, reference [5] reported a 30% performance gain for users. References [6,7] addressed the uplink allocation challenge for video streams by iteratively optimizing the application layer’s bandwidth allocation strategy. This line of research and development aims to circumvent challenges at the operational layer, yet it does not address the underlying issue of excessive data volume. An innovative approach to video transmission involves compressive sensing [8], which combines data compression and acquisition into a unified process. S. Zheng et al. proposed an efficient video-uploading system based on compressive sensing for terminal-to-cloud networks [9]. L. Li et al. developed a new compressive sensing model and corresponding reconstruction algorithm [10], creating an image communication system for IoT monitoring applications that addresses sensor node transmission resource constraints. Furthermore, we chose optical wireless communication (OWC) as our transmission medium [11,12,13] because of its advantages, such as high transmission speed, abundant spectrum availability, and strong security features [14,15]. N. Cvijetic et al. combined Low-Density Parity-Check (LDPC) coding with channel interleaving in OWC video transmission experiments, evaluating the improvement effects of this coding structure on video transmission [16]. Z. Hong et al. proposed a residual distribution-based source-channel coding scheme that enhances channel error resistance in video transmission, achieving a Bit Error Rate (BER) of 0.0421 in underwater OWC video transmission experiments.
In this paper, we present an adaptive block sampling compressed sensing algorithm that refines the traditional sampling rate allocation mechanism by integrating image saliency features. The simulation results demonstrate that the proposed algorithm enhances the peak signal-to-noise ratio (PSNR) of reconstructed images by more than 3 dB compared to conventional methods, with corresponding improvements in the structural similarity index (SSIM) ranging from 1% to 6%. To further evaluate the algorithm’s performance in optical wireless video transmission, we developed a spatial optical wireless video transmission system utilizing space laser communication technology. The system employs Artix-7 series field programmable gate array (FPGA) chips to implement the optical video transceiver circuit, incorporating optical antennas, Avalanche Photo Diode (APD) receivers, erbium-doped fiber amplifiers (EDFA), and other essential components for conducting spatial optical video transmission experiments. The optical signal transmission was carried out using the Giga Transceiver Protocol (GTP), achieving transmission rates between 0.8 and 6.6 Gbps. Experimental findings reveal that the proposed algorithm improves PSNR by over 1.5 dB and SSIM by more than 1%, thus confirming its effectiveness in optimizing the quality of reconstructed images.

2. Design and Principle

The algorithm initially performs block-by-block compressed sensing processing on the input image I with an adaptive sampling rate based on the saliency information of different image blocks. We utilize a fixed block size together with an adaptive per-block sampling rate for the compressed sensing processing. The traditional block-based compressed sensing algorithm [17] applies a fixed sampling rate to every block. Figure 1 illustrates an example of a natural image divided into 4 × 4 blocks. The amount of information clearly differs from block to block: some blocks contain complex elements such as buildings, clouds, and plants, while most others consist primarily of a simple sky background with minimal detail. When all blocks are sampled at a fixed rate, blocks carrying large amounts of information are undersampled, degrading reconstruction quality, while blocks with little information are oversampled, wasting storage resources. In a practical block-based compressed sensing algorithm, far more image blocks are generated, further amplifying the differences in information content across the blocks.
To enable adaptive sampling across different blocks, this paper introduces saliency information [18] as the foundation for determining the sampling rate allocation of each block. We propose an Adaptive Block Sampling (ABS-SPL) compressed sensing algorithm based on the SPL algorithm [19]. The basic architecture of this approach is illustrated in Figure 2.
First, a saliency model for the image is constructed using a multi-scale spectral residual approach. The Spectral Residual (SR) technique serves as an analytical method that efficiently identifies salient regions by extracting and integrating frequency domain information. The theoretical foundation of this method posits that the logarithmic spectrum of the spectral amplitude, obtained through the Fourier transform of the image, follows a linear distribution trend. This consistent statistical pattern is indicative of the image’s inherent redundancy. By eliminating the redundant components from the logarithmic spectrum and retaining only the differential information, it becomes possible to accurately identify salient regions within the image. The following section elaborates on the detailed implementation steps of the multi-scale spectral residual analysis model.
Given an image $I$, the Gaussian pyramid method is applied to generate $L$ images $I_\delta$, $\delta = 1, \ldots, L$, at different scales based on the original image. A Fourier transform is then performed on $I_\delta(x, y)$ to obtain the amplitude spectrum $A_\delta(u, v)$ and the phase spectrum $\psi_\delta(u, v)$ at these $L$ scales,
$$F_\delta(u,v) = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I_\delta(x,y)\, e^{-j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)} = A_\delta(u,v)\, e^{j\psi_\delta(u,v)},$$
$$A_\delta(u,v) = \left[ R_\delta^2(u,v) + I_\delta^2(u,v) \right]^{1/2},$$
$$\psi_\delta(u,v) = \arctan\!\left( \frac{I_\delta(u,v)}{R_\delta(u,v)} \right),$$
where $R_\delta(u,v)$ and $I_\delta(u,v)$ denote the real and imaginary parts of $F_\delta(u,v)$.
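As a point of reference, the pyramid construction and the amplitude/phase decomposition can be sketched in a few lines of NumPy; the pyramid depth and the use of OpenCV's pyrDown are illustrative choices rather than details taken from the paper.

```python
import numpy as np
import cv2  # illustrative choice for the Gaussian pyramid; any blur-and-downsample works

def gaussian_pyramid(img, levels=3):
    """Build L scales of the input image, level 0 being the original."""
    scales = [img.astype(np.float32)]
    for _ in range(levels - 1):
        scales.append(cv2.pyrDown(scales[-1]))
    return scales

def amplitude_phase(img):
    """Amplitude spectrum A(u, v) and phase spectrum psi(u, v) of one scale."""
    F = np.fft.fft2(img)
    return np.abs(F), np.angle(F)
```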
A logarithmic operation is performed on the amplitude spectrum $A_\delta(u, v)$ to obtain the logarithmic amplitude spectrum $L_\delta(u, v)$,
$$L_\delta(u,v) = \lg\left[ A_\delta(u,v) \right].$$
The logarithmic amplitude spectrum $L_\delta(u, v)$ is mean filtered over a 5 × 5 neighborhood to obtain $\bar{L}_\delta(u, v)$; subtracting this from $L_\delta(u, v)$ yields the spectral residual $E_\delta(u, v)$ at each of the $L$ scales,
$$E_\delta(u,v) = L_\delta(u,v) - \bar{L}_\delta(u,v).$$
Combining $E_\delta(u, v)$ and $\psi_\delta(u, v)$ and performing an inverse Fourier transform yields $I'_\delta(x, y)$. A Gaussian filter $g(x, y)$ is then applied to $I'_\delta(x, y)$ to produce the salient feature maps $S_\delta$ at the $L$ scales,
$$F'_\delta(u,v) = \exp\left[ E_\delta(u,v) + j\,\psi_\delta(u,v) \right],$$
$$I'_\delta(x,y) = \frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N} F'_\delta(u,v)\, e^{\,j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)},$$
$$S_\delta(x,y) = I'_\delta(x,y) * g(x,y).$$
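A single-scale sketch of the steps above (log-amplitude, 5 × 5 mean filtering, recombination with the phase, inverse transform, and Gaussian smoothing); the small epsilon and the smoothing width sigma are assumptions, and the imaginary unit on the phase term follows the standard spectral-residual formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img, sigma=2.0):
    """Spectral-residual saliency map of one pyramid scale."""
    F = np.fft.fft2(img)
    A, psi = np.abs(F), np.angle(F)
    L = np.log(A + 1e-12)                   # log-amplitude spectrum
    E = L - uniform_filter(L, size=5)       # spectral residual via a 5 x 5 mean filter
    F_sal = np.exp(E + 1j * psi)            # recombine residual and phase
    sal = np.abs(np.fft.ifft2(F_sal))       # back to the spatial domain
    return gaussian_filter(sal, sigma)      # Gaussian smoothing g(x, y)
```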
The salient feature maps $S_\delta$ at different scales are combined using a fusion algorithm to generate the final saliency map $S$. The fusion weight $w_\delta$ is determined by the square of the contrast difference between the salient areas in the saliency feature map and the entire image. Finally, binary segmentation is applied to the final saliency map to obtain templates for salient and non-salient areas, which are then used to allocate compressed sensing sampling rates. The final result of the salient image processing is presented in Figure 3.
$$w_\delta = \left[ f_{\mathrm{Max}}(S_\delta) - f_{\mathrm{Mean}}(S_\delta) \right]^2, \quad \delta = 1, 2, \ldots, L,$$
$$S = \sum_{\delta=1}^{L} \left( w_\delta \times S_\delta \right).$$
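The fusion and binarization can then be sketched as below; the resize step and the choice of a 2×-mean threshold for the binary template are assumptions, since the paper does not state its segmentation rule.

```python
import numpy as np
import cv2

def fuse_saliency(maps):
    """Contrast-weighted fusion of per-scale saliency maps plus binary segmentation."""
    H, W = maps[0].shape
    fused = np.zeros((H, W), dtype=np.float32)
    for S in maps:
        S_up = cv2.resize(S.astype(np.float32), (W, H))  # bring every scale to full size
        w = (S_up.max() - S_up.mean()) ** 2              # contrast-based fusion weight
        fused += w * S_up
    mask = fused > 2.0 * fused.mean()                    # salient / non-salient template (assumed rule)
    return fused, mask
```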
The salient signal of an image represents the amount of information contained within the image, and the quantity of salient signals in each block indicates the distribution of image features. A high distribution of salient pixel signals indicates that the block contains a substantial amount of features, typically corresponding to the textured regions of the image. Conversely, a smaller number of salient pixels represents regions with fewer distinct features, usually corresponding to smoother blocks in block-based compressed sensing. Therefore, the saliency of an image can serve as a metric for pixel activity, with the complexity of the image block’s texture quantified by the amount of salient signals. This allows for a reasonable allocation of the sampling rate based on these factors. The following section introduces the specific implementation method of block adaptive sampling.
We conducted experiments using a 256 × 256 resolution image, with a block size of 16 × 16 pixels and 256 bytes of information per data block. The image was then sampled based on these parameters.
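For the experiments described above, the 256 × 256 image is cut into non-overlapping 16 × 16 blocks; a minimal helper for that partition (row-major block order assumed) is shown below.

```python
import numpy as np

def blockify(img, B=16):
    """Split an H x W image into non-overlapping B x B blocks, row-major order."""
    H, W = img.shape
    return (img.reshape(H // B, B, W // B, B)
               .transpose(0, 2, 1, 3)
               .reshape(-1, B, B))
```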
Let $p_i = s_i / B_i$ represent the proportion of salient signals in image block $i$, where $s_i$ denotes the number of salient pixels within the block and $B_i$ represents the total number of pixels in the block. The adaptive sampling rate $r_i$ for each image block is then determined from the saliency map,
$$r_i = (R - R_{\min})\, \frac{n\, p_i}{\sum_{i=1}^{n} p_i} + R_{\min}.$$
The difference $\mathrm{det}$ between the mean of the current adaptive sampling rates and the total sampling rate is then computed, from which the final sampling rate $\hat{r}_i$ is obtained,
$$\mathrm{det} = R - \frac{1}{n}\sum_{i=1}^{n} r_i,$$
$$\hat{r}_i = r_i + \mathrm{det}.$$
Here, $R$ represents the total sampling rate, $R_{\min}$ denotes the minimum sampling rate threshold, $\mathrm{det}$ indicates the difference between the mean adaptive sampling rate and the total sampling rate, and $n$ represents the number of image blocks.
To ensure that the adaptive sampling rate does not fall below an acceptable level, the minimum sampling rate threshold is specifically defined as follows,
$$R_{\min} = \begin{cases} R/2, & 0 < R \le 0.1, \\ 0.05, & 0.1 < R \le 1. \end{cases}$$
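Putting the four relations above together, and reusing the blockify helper from the earlier sketch, the per-block rate allocation might look as follows; the clamp of each rate to at most 1 is an assumed detail that the paper does not spell out.

```python
import numpy as np

def allocate_rates(saliency_mask, R, B=16):
    """Per-block sampling rates from the salient-pixel proportions p_i = s_i / B_i."""
    p = blockify(saliency_mask.astype(np.float64), B).mean(axis=(1, 2))  # p_i per block
    n = p.size                                     # assumes at least one salient pixel overall
    R_min = R / 2.0 if R <= 0.1 else 0.05          # minimum sampling rate threshold
    r = (R - R_min) * n * p / p.sum() + R_min      # saliency-weighted initial rates
    r = np.clip(r, R_min, 1.0)                     # keep rates in a valid range (assumption)
    det = R - r.mean()                             # offset between mean rate and target R
    return r + det                                 # final rates
```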
Based on the sampling rate array $r = \{ r_1, r_2, \ldots, r_n \}$ for each block, the size $M_i \times M_i$ of the sampling matrix is first determined,
$$M_i = \mathrm{round}\left( \sqrt{r_i \times N_i^2} \right),$$
where $N_i$ is the fixed block size. According to the size of each block's sampling matrix, a discrete cosine transform is used to generate the sparse matrix, and a sampling matrix array $\varphi = \{ \varphi_1, \varphi_2, \ldots, \varphi_n \}$ is generated. The image is compressed block by block with these sampling matrices, and the final compressed data $y = \{ y_1, y_2, \ldots, y_n \}$ are obtained by data splicing.
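One plausible reading of the measurement step is a separable 2D block measurement built from DCT rows, which matches the $M_i \times M_i$ measurement size implied by the rule above; the exact construction of the sampling matrix from the DCT sparse matrix is not specified in the paper, so the sketch below is only illustrative.

```python
import numpy as np
from scipy.fft import dct

def sampling_matrix(r_i, N=16):
    """Choose M_i = round(sqrt(r_i * N^2)) and keep M_i rows of an N x N DCT basis."""
    M_i = int(round(np.sqrt(r_i * N * N)))
    Psi = dct(np.eye(N), axis=0, norm='ortho')  # orthonormal DCT basis
    return Psi[:M_i, :]                         # M_i x N block measurement operator

def measure_block(block, Phi):
    """Separable 2D measurement: an M_i x M_i sample array per N x N block."""
    return Phi @ block @ Phi.T
```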
After receiving the data, even if some bytes are lost, the receiving end can restore the block grouping of the data from the relative positions of the frame header, frame tail, and row number, obtaining the compressed data $\tilde{y} = \{ \tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_n \}$. With the known sampling matrices and sampling rate array, image reconstruction can then be completed. The BCS-SPL reconstruction algorithm couples Wiener filter smoothing of the complete image with sparsity-enhancing thresholding in the sparse transform domain of the complete image, and it uses the Landweber method to iterate between the smoothing and thresholding operations. The reconstruction algorithm in this paper follows the same principle as BCS-SPL: the Landweber iteration steps are applied to blocks at different sampling rates, using the measurement matrix $\varphi_i$ of the current block. The specific reconstruction process is given in Table 1:
The calculation process above constitutes a 2D image reconstruction algorithm based on adaptive-sampling-rate block compressed sensing. Here, Wiener(·) refers to a Wiener filter applied pixel by pixel over a 3 × 3 neighborhood, and Threshold(·) denotes the thresholding step of the BCS-SPL algorithm. Applying this algorithm yields an accurate reconstruction of the image.
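For intuition only, the snippet below is a heavily simplified single-block analogue of the loop in Table 1, with Wiener smoothing, hard thresholding in the DCT domain, and the Landweber-style projection; the actual algorithm filters the complete image and iterates over all blocks jointly, and the threshold value lam here is an assumption.

```python
import numpy as np
from scipy.signal import wiener
from scipy.fft import dct, idct

def spl_reconstruct_block(y, Phi, n_iter=200, tol=1e-2, lam=1.0):
    """Simplified SPL recovery of one N x N block from y = Phi @ x @ Phi.T."""
    x = Phi.T @ y @ Phi                                    # initial estimate
    D_prev = np.inf
    for _ in range(n_iter):
        x_hat = wiener(x, mysize=3)                        # Wiener(.) over a 3 x 3 neighborhood
        x_hat += Phi.T @ (y - Phi @ x_hat @ Phi.T) @ Phi   # projection toward the measurements
        c = dct(dct(x_hat, axis=0, norm='ortho'), axis=1, norm='ortho')
        c[np.abs(c) < lam] = 0.0                           # Threshold(.) in the sparse (DCT) domain
        x_bar = idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')
        x_new = x_bar + Phi.T @ (y - Phi @ x_bar @ Phi.T) @ Phi
        D = np.linalg.norm(x_new - x_hat)
        x = x_new
        if abs(D - D_prev) < tol:                          # stopping rule as in Table 1
            break
        D_prev = D
    return x
```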

3. Results

3.1. Simulation Analysis

We utilize the SPL algorithm, the 2DCS algorithm, and the MS-SPL-DDWT algorithm as benchmarks for comparison. The SPL algorithm [19], a classical block compressed sensing approach, combines Wiener smoothing with Landweber iteration, offering superior processing performance. The 2DCS algorithm [20] is an encryption-then-compression (ETC) approach that enhances the error correction capability of reconstruction through its encryption process. In addition to enhancing the confidentiality of information transmission, it also significantly optimizes the overall quality of the reconstructed image. The MS-SPL algorithm [21] allocates appropriate sampling rates to the wavelet coefficients of images at different scales, significantly enhancing the reconstruction quality compared to previous methods. We selected these three algorithms for comparison with the algorithm proposed in this paper, utilizing PSNR, SSIM, gradient magnitude similarity deviation (GMSD), and normalized root mean square error (NMSE) as evaluation metrics. The PSNR metric [22] quantifies the peak error between the reconstructed image and the original image, providing a measure of the data discrepancy between the two images. The SSIM metric [23] evaluates the similarity between the reconstructed image and the original image by considering three key aspects: brightness, contrast, and structure. Elevated values of these two parameters reflect an improved quality of image reconstruction. The NMSE metric [24] and GMSD metric [25] serve as error metrics that quantify the discrepancies between the original and reconstructed images. Lower values for these two metrics indicate a better reconstruction quality. After conducting simulations, the test results are as follows:
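For reference, the two fidelity metrics can be computed with scikit-image as shown below (NMSE and GMSD are omitted here; an 8-bit dynamic range is assumed).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(original, reconstructed):
    """PSNR (dB) and SSIM between the original and the reconstructed image."""
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=255)
    ssim = structural_similarity(original, reconstructed, data_range=255)
    return psnr, ssim
```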
As illustrated in Figure 4a–d, using a sampling rate of 0.5 as an example, the PSNR of the reconstruction result achieved by the algorithm proposed in this paper exceeds 41 dB, which is approximately 3 dB higher than that of other algorithms. From the perspective of the SSIM index, the reconstruction result of the algorithm proposed in this paper is slightly higher than that achieved by the MS-SPL algorithm. The results of these two algorithms are close to 98%, which is over 2% higher than those of the other algorithms. Regarding the two error parameters, NMSE and GMSD, in the low sampling rate range (0.1–0.3), the algorithm proposed in this paper outperforms other algorithms significantly, with error rates generally 1–6% lower than those of the other methods. As the sampling rate increases, the reconstruction results of the MS-SPL algorithm gradually converge, but its error remains slightly higher than that of the algorithm proposed in this paper. Analysis indicates that the algorithm proposed in this paper offers substantial enhancements in image reconstruction quality, particularly at lower sampling rates. By efficiently allocating sampling rates across different blocks, the algorithm improves data utilization efficiency and assigns higher subsampling rates to blocks containing more complex scenes. For more detailed data on Figure 4, please refer to Table A1 and Table A2 in Appendix A.

3.2. Optical Wireless Video Transmission Experiment

We utilized the Artix-7 series FPGA chip as the processor to design a spatial optical wireless video transmission system. As depicted in Figure 5, the compressed sensing algorithm is first employed on the PC to process the image sequence on a frame-by-frame basis. Subsequently, the compressed image sequence is transmitted to the device transmitter as a video stream via Camera-Link. After the FPGA captures the video stream frame by frame, the image data are internally cached and encoded. The optoelectronic transceiver module (Small Form Pluggable, SFP, manufactured by Shenzhen XYT-Tech Co., Ltd., Shenzhen, China) subsequently converts the electrical signal into an optical signal. This optical signal is then amplified by an EDFA before being transmitted through the optical system. At the receiving end, an APD detector module serves as the optical signal receiver. The collected optical signal is conveyed to the SFP optical module on the receiving FPGA board via the optoelectronic conversion module. The FPGA then converts the image into a Camera-Link video stream, which is sent to the PC for frame-by-frame reconstruction. Ultimately, the reconstructed image sequence is compiled into a video. We utilized the Artix-7 series 7a100t-fgg484 model FPGA as the main control chip and set the GTP transmission rate to 1.25 Gbps for the experiment. To ensure the equipment’s lightness and miniaturization, we opted for a highly integrated modular EDFA (BG-EDFA-M3-C1-N-15dBm, 0.95M-1m-FC/APC, manufactured by BEOGOLD Technology Co., Ltd., Xiamen, China). For the APD module, we selected the LSIAPDT-S200 InGaAs APD detector (manufactured by Lightsensing Co., Ltd., Beijing, China), which offers a superior response at the 1550 nm wavelength. The optical system incorporated a transmissive optical antenna with a 1550 nm communication band and a 25 mm aperture.
Figure 6 depicts the spatial optical wireless video transceiver prototype constructed based on the design principles outlined in Figure 5, which was used to conduct a dual-end video data transmission experiment in an atmospheric environment. Figure 6a presents the overall front view of the device, while Figure 6b illustrates the external interface from the rear. Figure 6c,d provide schematic representations of the device connections during the spatial optical wireless video transceiver experiment. The device is equipped with GTP high-speed transceivers, an RJ45 Gigabit Ethernet interface, and an SDR26 Camera-Link video transceiver interface. The optical signal transceiver supports rates ranging from 0.8 to 6.6 Gbps, thereby fulfilling the requirements for processing and converting spatial optical wireless video transceiver input from various interfaces.
A video sequence consisting of 500 frames was captured and processed using the four algorithms previously compared on the PC. A spatial optical wireless video transmission experiment was then conducted over a terminal distance of 20 m. We conducted a series of experiments on video transmission utilizing varying frame rates, revealing that alterations in frame rate did not produce a statistically significant effect on video quality metrics. Following calibration, a spatial optical power meter was used to measure the transmitter’s optical power, which was recorded as 9.3 dBm, while the APD receiver’s optical power was measured at −24.7 dBm. The sampling rate for the compressed sensing algorithm was uniformly set to 0.5, and the PSNR and SSIM metrics of the reconstruction results were statistically analyzed on a frame-by-frame basis. The experimental results are presented in Figure 7.
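As a quick consistency check on the reported figures, the end-to-end optical loss of the 20 m link follows directly from the two power measurements; the split between geometric, pointing, and atmospheric contributions is not resolved here.

```python
tx_power_dbm = 9.3        # optical power measured at the transmitter
rx_power_dbm = -24.7      # optical power measured at the APD receiver
link_loss_db = tx_power_dbm - rx_power_dbm
print(f"End-to-end link loss over 20 m: {link_loss_db:.1f} dB")  # 34.0 dB
```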
Figure 7a demonstrates that the proposed algorithm performs effectively in spatial optical wireless video transmission, with its reconstruction results showing a PSNR generally more than 1.5 dB higher than those of the other algorithms. Similarly, Figure 7b indicates that the proposed algorithm achieves SSIM values that are generally over 1% higher than those of the comparison algorithms, thereby delivering superior video reception quality. For more detailed data on Figure 7, please refer to Table A3 in Appendix A.

4. Summary and Discussion

This paper introduces a compressed sensing algorithm with an adaptive block sampling rate, wherein the sampling rate for each block is determined based on the proportion of significant information within the image blocks. This method enhances the quality of compressed sensing reconstructed images while maintaining the same data volume. Evaluation using image quality metrics reveals improvements of over 3 dB in PSNR and more than 2% in SSIM. Additionally, the NMSE and GMSD metrics are reduced by 1% to 6%. A spatial optical wireless video transceiver system, based on an FPGA master chip, was designed, and natural target video transmission experiments were conducted using 500 frames of image sequences processed by the proposed algorithm. The experimental results demonstrate that the algorithm maintains superior image transmission performance in spatial optical wireless video transmission, with PSNR improved by more than 1.5 dB and SSIM by over 1%. Furthermore, the spatial optical video transmission system developed in this study exhibits excellent integration and cost efficiency, offering significant practical and commercial value. In future research, the transmission rate and operational range of the communication system can be further enhanced by optimizing the optical aperture and hardware configuration. These improvements will facilitate the development of an outdoor long-range wireless image transmission system.

Author Contributions

Conceptualization, J.L. and H.Y.; methodology, J.L.; software, J.L.; validation, J.L., Z.C. and W.W.; formal analysis, H.Y.; investigation, Y.Z. and T.L.; resources, K.D.; data curation, J.L. and K.J.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; visualization, Y.S.; supervision, H.Y.; project administration, Z.L.; funding acquisition, K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, NSFC, U2141231, 62105029, the Youth Talent Support Program of China Association for Science and Technology, CAST, No. YESS20220600, the National Key R&D Program of China, Ministry of Science and Technology, 2021YFA0718804, 2022YFB3902500, and the Major Science and Technology Project of Jilin Province, 20230301002GX, 20230301001GX.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

We would like to extend our sincere gratitude towards Changchun University of Science and Technology, Beijing Institute of Technology, the National Natural Science Foundation of China (NSFC), the Youth Talent Support Program of China Association for Science and Technology (CAST), the National Key R&D Program of China, Ministry of Science and Technology, and the Major Science and Technology Project of Jilin Province for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

FPGA: field programmable gate array
D2D: Device-to-Device
LDPC: Low-Density Parity-Check
BER: Bit Error Rate
PSNR: peak signal-to-noise ratio
SSIM: structural similarity index
APD: Avalanche Photo Diode
EDFA: erbium-doped fiber amplifier
GTP: Giga Transceiver Protocol
SR: Spectral Residual
ETC: encryption-then-compression
GMSD: gradient magnitude similarity deviation
NMSE: normalized root mean square error

Appendix A. Summary Table of Simulation and Experimental Results

Table A1. Summary of simulation results: PSNR (dB) and SSIM index.

| Sampling Rate | BCS-SPL PSNR | BCS-SPL SSIM | 2DCS PSNR | 2DCS SSIM | MS_SPL_DDWT PSNR | MS_SPL_DDWT SSIM | ABS-SPL PSNR | ABS-SPL SSIM |
|---|---|---|---|---|---|---|---|---|
| 0.1 | 24.66 | 0.714 | 26.65 | 0.774 | 30.81 | 0.931 | 31.90 | 0.936 |
| 0.2 | 27.65 | 0.811 | 30.52 | 0.869 | 32.96 | 0.945 | 33.26 | 0.944 |
| 0.3 | 29.66 | 0.862 | 33.34 | 0.915 | 33.02 | 0.950 | 34.91 | 0.954 |
| 0.4 | 31.28 | 0.896 | 35.78 | 0.940 | 37.21 | 0.966 | 37.48 | 0.966 |
| 0.5 | 32.99 | 0.924 | 37.57 | 0.954 | 38.40 | 0.971 | 41.15 | 0.978 |
| 0.6 | 34.71 | 0.945 | 39.18 | 0.965 | 39.78 | 0.977 | 43.04 | 0.985 |
| 0.7 | 36.96 | 0.964 | 40.34 | 0.970 | 41.47 | 0.983 | 45.27 | 0.989 |
| 0.8 | 39.72 | 0.979 | 40.94 | 0.972 | 43.52 | 0.988 | 47.44 | 0.993 |
Table A2. Summary of simulation results: NMSE and GMSD index.

| Sampling Rate | BCS-SPL NMSE | BCS-SPL GMSD | 2DCS NMSE | 2DCS GMSD | MS_SPL_DDWT NMSE | MS_SPL_DDWT GMSD | ABS-SPL NMSE | ABS-SPL GMSD |
|---|---|---|---|---|---|---|---|---|
| 0.1 | 0.0126 | 0.12 | 0.008 | 0.10 | 0.0025 | 0.020 | 0.0024 | 0.014 |
| 0.2 | 0.0063 | 0.084 | 0.0033 | 0.054 | 0.0022 | 0.018 | 0.0017 | 0.01 |
| 0.3 | 0.004 | 0.060 | 0.0017 | 0.030 | 0.0019 | 0.016 | 0.0012 | 0.007 |
| 0.4 | 0.0027 | 0.042 | 0.001 | 0.019 | 7 × 10−4 | 0.005 | 6.5 × 10−4 | 0.004 |
| 0.5 | 0.0018 | 0.031 | 6 × 10−4 | 0.012 | 5 × 10−4 | 0.003 | 3 × 10−4 | 0.0026 |
| 0.6 | 0.0012 | 0.019 | 4 × 10−4 | 0.008 | 4 × 10−4 | 0.002 | 1 × 10−4 | 0.002 |
| 0.7 | 7 × 10−4 | 0.012 | 3 × 10−4 | 0.006 | 3 × 10−4 | 0.0016 | 1 × 10−4 | 0.0013 |
| 0.8 | 4 × 10−4 | 0.006 | 3 × 10−4 | 0.003 | 2 × 10−4 | 0.001 | 6.6 × 10−5 | 8.2 × 10−4 |
Table A3. Summary of average values of optical video transmission indicators.

| Index | BCS-SPL | 2DCS | MS_SPL_DDWT | ABS-SPL |
|---|---|---|---|---|
| PSNR | 32.98 dB | 36.33 dB | 36.97 dB | 38.56 dB |
| SSIM | 0.9168 | 0.957 | 0.9638 | 0.9755 |

References

1. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016–2021. White Paper. 2017. Available online: http://www.czechmarketplace.cz/news/cisco-visual-networking-index-global-mobile-data-traffic-forecast-update-2016-2021-white-paper (accessed on 25 September 2024).
2. Fan, D.; Zhao, H.; Zhang, C.; Liu, H.; Wang, X. Anti-Recompression Video Watermarking Algorithm Based on H.264/AVC. Mathematics 2023, 11, 2913.
3. Yang, L.; Wang, R.; Xu, D.; Dong, L.; He, S. Centralized Error Distribution-Preserving Adaptive Steganography for HEVC. IEEE Trans. Multimed. 2023, 26, 4255–4270.
4. Saini, D.K.J.B.; Kamble, S.D.; Shankar, R.; Kumar, M.R.; Kapila, D.; Tripathi, D.P.; de, A. Fractal video compression for IOT-based smart cities applications using motion vector estimation. Meas. Sens. 2023, 26, 100698.
5. Zhan, C.; Hu, H.; Wang, Z.; Fan, R.; Niyato, D. Unmanned Aircraft System Aided Adaptive Video Streaming: A Joint Optimization Approach. IEEE Trans. Multimed. 2020, 22, 795–807.
6. He, C.; Xie, Z.; Tian, C. A QoE-Oriented Uplink Allocation for Multi-UAV Video Streaming. Sensors 2019, 19, 3394.
7. Yamada, R.; Tomeba, H.; Sato, T.; Nakamura, O.; Hamaguchi, Y. Uplink Resource Allocation for Video Transmission in Wireless LAN System. In Proceedings of the 2022 IEEE 8th World Forum on Internet of Things (WF-IoT), Yokohama, Japan, 26 October–11 November 2022; pp. 1–6.
8. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
9. Zheng, S.; Zhang, X.-P.; Chen, J.; Kuo, Y. A High-Efficiency Compressed Sensing-Based Terminal-to-Cloud Video Transmission System. IEEE Trans. Multimed. 2019, 21, 1905–1920.
10. Li, L.; Wen, G.; Wang, Z.; Yang, Y. Efficient and Secure Image Communication System Based on Compressed Sensing for IoT Monitoring Applications. IEEE Trans. Multimed. 2020, 22, 82–95.
11. Chowdhury, M.Z.; Shahjalal, M.; Hasan, M.K.; Jang, Y.M. The Role of Optical Wireless Communication Technologies in 5G/6G and IoT Solutions: Prospects, Directions, and Challenges. Appl. Sci. 2019, 9, 4367.
12. Haas, H.; Elmirghani, J.; White, I. Optical wireless communication. Philos. Trans. R. Soc. A 2020, 378, 20200051.
13. Tavakkolnia, I.; Jagadamma, L.K.; Bian, R.; Manousiadis, P.P.; Videv, S.; Turnbull, G.A.; Samuel, I.D.W.; Haas, H. Organic photovoltaics for simultaneous energy harvesting and high-speed MIMO optical wireless communications. Light Sci. Appl. 2021, 10, 41.
14. Yao, H.; Ni, X.; Chen, C.; Li, B.; Zhang, X.; Liu, Y.; Tong, S.; Liu, Z.; Jiang, H. Performance of M-PAM FSO communication systems in atmospheric turbulence based on APD detector. Opt. Express 2018, 26, 23819–23830.
15. Lin, G.-R.; Kuo, H.-C.; Cheng, C.-H.; Wu, Y.-C.; Huang, Y.-M.; Liou, F.-J.; Lee, Y.-C. Ultrafast 2 × 2 green micro-LED array for optical wireless communication beyond 5 Gbit/s. Photon. Res. 2021, 9, 2077–2087.
16. Cvijetic, N.; Wilson, S.G.; Zarubica, R. Performance Evaluation of a Novel Converged Architecture for Digital-Video Transmission Over Optical Wireless Channels. J. Lightw. Technol. 2007, 25, 3366–3373.
17. Gan, L.; Do, T.T.; Tran, T.D. Fast compressive imaging using scrambled block Hadamard ensemble. In Proceedings of the European Signal Processing Conference, Lausanne, Switzerland, 25–29 August 2008.
18. Zhu, Y.; Liu, W.; Shen, Q. Adaptive Algorithm on Block-Compressive Sensing and Noisy Data Estimation. Electronics 2019, 8, 753.
19. Mun, S.; Fowler, J.E. Block compressed sensing of images using directional transforms. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009.
20. Zhang, B.; Xiao, D.; Zhang, Z.; Yang, L. Compressing Encrypted Images by Using 2D Compressed Sensing. In Proceedings of the 2019 IEEE 21st International Conference on High Performance Computing and Communications, Zhangjiajie, China, 10–12 August 2019.
21. Fowler, J.E.; Mun, S.; Tramel, E.W. Multiscale block compressed sensing with smoothed projected Landweber reconstruction. In Proceedings of the 2011 19th European Signal Processing Conference, Barcelona, Spain, 29 August–2 September 2011; pp. 564–568.
22. Poobathy, D.; Chezian, M. Edge Detection Operators: Peak Signal to Noise Ratio Based Comparison. IJ Image Graph. Signal Process. 2014, 10, 55–61.
23. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
24. Pęksiński, J.; Mikołajczak, G. The Synchronization of the Images Based on Normalized Mean Square Error Algorithm. Adv. Multimed. Netw. Inf. Syst. Technol. 2010, 80, 15–24.
25. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index. IEEE Trans. Image Process. 2014, 23, 13996537.
Figure 1. Image segmentation diagram.
Figure 2. Adaptive block sampling compressed sensing algorithm flow.
Figure 3. Saliency map acquisition; (a) original image; (b) saliency map obtained in Gaussian domain 7 × 7; (c) saliency map obtained in Gaussian domain 5 × 5; (d) saliency map obtained in Gaussian domain 3 × 3.
Figure 4. Comparison of reconstructed image results: (a) PSNR index statistics of the reconstructed image, (b) SSIM index statistics of the reconstructed image, (c) NMSE index statistics of the reconstructed image, and (d) GMSD index statistics of the reconstructed image.
Figure 5. Principle of space optical wireless video transmission.
Figure 6. Space optical wireless video transmission experiment; (a) diagram 1 of spatial optical wireless video transmission system; (b) diagram 2 of spatial optical wireless video transmission system; (c) diagram 1 of setup for spatial optical wireless video transmission experiment; (d) diagram 2 of setup for spatial optical wireless video transmission experiment.
Figure 7. Optical wireless transmission reconstruction results of video generated from a total of 500 frame image sequences: (a) PSNR index statistics (average value of the reconstructed image for each frame: APS-SPL = 38.56 dB, MS-SPL = 36.97 dB, 2DCS = 36.33 dB, SPL = 32.98 dB), (b) SSIM index statistics (average value of the reconstructed image for each frame: APS-SPL = 0.9755, MS-SPL = 0.9638, 2DCS = 0.957, SPL = 0.9168).
Table 1. Adaptive block sampling compressed sensing image reconstruction process.

Function $\tilde{x} = \mathrm{Re}(\tilde{y}, \{\varphi_i, 1 \le i \le n\}, \psi)$
   for each block $i$
      $\tilde{x}_i^{(0)} = \varphi_i^T \tilde{y}_i$
   $j = 0$
   do
      $x^{(j)} = \mathrm{Block\_DWT}^{-1}(\tilde{x}^{(j)})$
      $\hat{x}^{(j)} = \mathrm{Wiener}(x^{(j)})$
      $\hat{\tilde{x}}^{(j)} = \mathrm{Block\_DWT}(\hat{x}^{(j)})$
      for each block $i$
         $\hat{\hat{\tilde{x}}}_i^{(j)} = \hat{\tilde{x}}_i^{(j)} + \varphi_i^T \left( \tilde{y}_i - \varphi_i \hat{\tilde{x}}_i^{(j)} \right)$
      $x^{(j)} = \psi\,\mathrm{Block\_DWT}^{-1}(\hat{\hat{\tilde{x}}}^{(j)})$
      $x^{(j)} = \mathrm{Threshold}(x^{(j)})$
      $\bar{\tilde{x}}^{(j)} = \mathrm{Block\_DWT}^{-1}(x^{(j)})$
      for each block $i$
         $\tilde{x}_i^{(j+1)} = \bar{\tilde{x}}_i^{(j)} + \varphi_i^T \left( \tilde{y}_i - \varphi_i \bar{\tilde{x}}_i^{(j)} \right)$
      $D^{(j+1)} = \| \tilde{x}^{(j+1)} - \hat{\hat{\tilde{x}}}^{(j)} \|_2$
      $j = j + 1$
   until $| D^{(j)} - D^{(j-1)} | < 10^{-2}$
   $\tilde{x} = \tilde{x}^{(j)}$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
