
Photon-Counting Underwater Optical Wireless Communication for Reliable Video Transmission Using Joint Source-Channel Coding Based on Distributed Compressive Sensing

1 School of Information Engineering, Nanchang University, Nanchang 330031, China
2 State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(5), 1042; https://doi.org/10.3390/s19051042
Submission received: 31 December 2018 / Revised: 23 February 2019 / Accepted: 24 February 2019 / Published: 1 March 2019

Abstract

To achieve long-distance underwater optical wireless communication, a single photon detector with single-photon-limit sensitivity is used to detect the optical signal at the receiver, and the communication signal is extracted from the discrete single photon pulses output by the detector. Due to photon-flux fluctuation and the limited quantum efficiency of photon detection, the long-distance underwater optical wireless link is easily interrupted, the bit error rate is high, and burst errors are large. To achieve reliable video transmission, a joint source-channel coding scheme based on residual distributed compressive video sensing is proposed for the underwater photon-counting communication system. Signal extraction from single photon pulses, the data frame format, and data verification are specifically designed. This scheme greatly reduces the amount of data at the transmitter, transfers the computational complexity to the decoder at the receiver, and enhances robustness to channel errors. The experimental results show that, when the baud rate was 100 kbps and the average number of photon pulses per bit was 20, the bit error rate (BER) was 0.0421 and the video frames could still be restored clearly.

1. Introduction

Underwater optical wireless communication (UOWC) plays an important role in military applications, environmental detection, marine exploration and disaster prevention, and has attracted increasing attention in recent years [1,2,3]. The communication distance is limited by the serious absorption and scattering of water [4]. To achieve long-distance underwater communication, a single photon detector with single-photon-limit sensitivity is used to detect the very weak optical signal at the receiver [5,6,7,8,9,10,11]. The communication signals are extracted from the discrete single photon pulses output by the single photon detector.
Video communication is an important application of underwater optical wireless communication because video is vivid and information-rich. Due to photon-flux fluctuation and the limited quantum efficiency of photon detection, the long-distance photon-counting underwater optical wireless link is easily interrupted, the bit error rate is high, and burst errors are large, so transmission reliability must be improved through error correction coding. Traditional video compression standards, such as MPEG or H.26X, use a hybrid coding method combining predictive coding and transform coding to compress video sequences, which makes video frames depend heavily on each other [12,13]. Thus, if one data frame is lost, other data frames cannot be decoded. In a photon-counting underwater optical wireless communication system, the signal in a time slot is a discrete single-photon pulse sequence, and in many time slots no photon is detected at all, causing many symbol deletions and packet losses. Therefore, traditional video compression standards and coding have great difficulty restoring the video [14,15].
In 2006, Donoho and Candès et al. proposed compressed sensing (CS) theory, which broke through the limitation of the traditional Nyquist sampling theorem [16,17]. According to CS theory, a signal can be sampled and compressed at the same time; as a result, CS is widely used in the field of video compression [18,19,20,21,22,23]. In 2008, Stanković et al. [18] first applied compressed sensing theory to video coding and proposed compressive video sampling (CVS). Their method divides video frames into key frames and non-key frames, where key frames use traditional video coding while non-key frames use compressed sensing coding; their experimental results show that, while maintaining good reconstruction quality, the method saves nearly half of the sampled video data. In 2009, Kang et al. proposed distributed compressive video sensing (DCVS) based on inter-frame correlation [19,20]. The encoder allocates different measurement rates to key frames and non-key frames to reduce the amount of transmitted data, and each frame is compressed by CS measurement; the decoder reconstructs the frames jointly, so the amount of data at the transmitting end is reduced and the complexity is shifted to the decoding end. Chen et al. [21] investigated dynamic measurement rate allocation in block-based DCVS, which adaptively adjusts measurement rates by estimating the sparsity of each block via feedback information. Chen et al. [22] proposed residual distributed compressive video sensing based on double side information (RDCVS-DSI). In RDCVS-DSI, by exploiting the frequency-domain characteristics of the image and the correlation between consecutive frames, low-quality video frames serve as the first side information in the encoding process, and the second side information is generated from motion estimation of the non-key frames.
Performance analysis and simulation results show that the RDCVS-DSI model can reconstruct the high-fidelity video sequence with lower complexity.
In this paper, to realize reliable video transmission over long-distance underwater optical wireless links, a joint source-channel coding scheme based on residual distributed compressive video sensing is proposed for the underwater photon-counting communication system with high bit error rate. Signal extraction from single photon pulses, the data frame format, and data verification are specifically designed. Experimental results show that this scheme greatly reduces the amount of data at the transmitter, transfers the coding complexity to the receiver, and enhances robustness to channel errors.

2. Joint Source-Channel Coding Scheme Based on Residual Distributed Compressive Video Sensing

According to CS theory, a signal can be accurately reconstructed from a small number of samples, provided that the signal is sparse or can be made sparse under a certain transform. Compressed sensing can be abstracted into the mathematical expression y = Φx, where x is a sparse signal of N dimensions, Φ is a measurement matrix with dimension M × N (M < N), and y is the vector of measured values. If the signal x has a sparse representation in some basis, i.e., x = Ψα, where α is a sparse vector and Ψ is a sparsifying matrix with dimension N × N, the expression becomes y = ΦΨα. In the reconstruction process, α is obtained by solving the underdetermined equations, and the signal x is then recovered from α. The reconstruction is usually solved by Orthogonal Matching Pursuit (OMP) [24], the Total Variation Augmented Lagrangian Alternating Direction Algorithm (TVAL3) [25], etc.
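As a concrete illustration of the reconstruction step, the following is a minimal OMP sketch in Python/NumPy; the toy signal, matrix sizes and random seed are our own choices, not the paper's setup:

```python
import numpy as np

def omp(Phi, y, k):
    """Minimal Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Greedily pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Toy example: M = 32 Gaussian measurements of a 3-sparse signal with N = 64.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
x = np.zeros(64)
x[[5, 20, 40]] = [1.0, -2.0, 0.5]
x_rec = omp(Phi, Phi @ x, k=3)
print(np.linalg.norm(x - x_rec))
```

With this many Gaussian measurements, OMP typically identifies the exact support and the least-squares step makes the recovery error negligible.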
The joint source-channel coding scheme based on residual distributed compressive video sensing is shown in Figure 1. A video sequence consists of several GOPs (groups of pictures), where a GOP consists of a key frame and several non-key frames; each frame can be seen as a sparse signal of N dimensions [20]. To ensure the reconstruction performance while reducing the amount of data, the key frame is measured by compressed sensing at a higher measurement rate. In addition, an extremely sparse residual matrix is obtained by subtracting the key frame from each non-key frame, and the residual is measured at a lower measurement rate. Suppose that Φk is the measurement matrix of the key frames with dimension Mk × N, Φcs is the measurement matrix of the residual matrices with dimension Mcs × N, the measurement rate of key frames is rk = Mk/N, and the measurement rate of residual matrices is rcs = Mcs/N. Let the number of key frames be nk and the number of residual matrices be ncs. Then, the average measurement rate (AMR) is:
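A minimal encoder-side sketch of this key-frame/residual split follows; the frame size, measurement rates and matrix seeds are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                       # pixels per (vectorized) frame
r_k, r_cs = 0.7, 0.2          # measurement rates for key frames and residuals
M_k, M_cs = int(r_k * N), int(r_cs * N)

# The same seeds at transmitter and receiver regenerate the matrices,
# so they never need to be transmitted (seeds 42/43 are hypothetical).
Phi_k = np.random.default_rng(42).standard_normal((M_k, N))
Phi_cs = np.random.default_rng(43).standard_normal((M_cs, N))

gop = rng.random((3, N))      # one GOP: a key frame plus two non-key frames
key = gop[0]
y_key = Phi_k @ key           # key frame: high-rate CS measurement
# Residuals are extremely sparse, so a low-rate measurement suffices.
y_res = [Phi_cs @ (frame - key) for frame in gop[1:]]

print(y_key.shape, y_res[0].shape)
```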
$r_{ave} = \dfrac{r_k \times n_k + r_{cs} \times n_{cs}}{n_k + n_{cs}}$ (1)
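For example, Equation (1) can be evaluated directly; the rates and frame counts below are made-up numbers for illustration:

```python
def average_measurement_rate(r_k, n_k, r_cs, n_cs):
    # Equation (1): measurement rate averaged over key frames and residuals.
    return (r_k * n_k + r_cs * n_cs) / (n_k + n_cs)

# With GOP size 3 there is one key frame and two residuals per GOP,
# e.g. 10 GOPs -> n_k = 10, n_cs = 20.
print(round(average_measurement_rate(0.7, 10, 0.2, 20), 4))
```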
Each measured value is converted into a data frame for transmission over the underwater channel; the data frame format is shown in Figure 2. The data frame consists of the frame header “FF FF FF FF”, the sequence number of the measured value in the matrix, the sign, the integer and decimal parts of the measured value, and a CRC (Cyclic Redundancy Check) checksum. In this way, each measured value is packed into one frame, and the transmitter and receiver both follow this predetermined format for transmission and reception.
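A sketch of this framing in Python follows; the field widths, the four-digit decimal precision, and the use of a truncated CRC-32 in place of the paper's unspecified CRC are all our assumptions:

```python
import struct
import zlib

HEADER = b"\xFF\xFF\xFF\xFF"

def pack_measurement(seq, value):
    """Pack one measured value into the Figure 2 layout (field sizes assumed)."""
    sign = 1 if value < 0 else 0
    integer = int(abs(value))
    decimal = int(round((abs(value) - integer) * 10000))  # 4 decimal digits
    body = struct.pack(">HBHH", seq, sign, integer, decimal)
    crc = zlib.crc32(body) & 0xFFFF     # 16-bit checksum via truncated CRC-32
    return HEADER + body + struct.pack(">H", crc)

def unpack_measurement(frame):
    """Return (seq, value), or None if the checksum fails (value assigned 0 later)."""
    assert frame[:4] == HEADER
    body, crc = frame[4:-2], struct.unpack(">H", frame[-2:])[0]
    if zlib.crc32(body) & 0xFFFF != crc:
        return None
    seq, sign, integer, decimal = struct.unpack(">HBHH", body)
    return seq, (-1) ** sign * (integer + decimal / 10000)

frame = pack_measurement(7, -3.1415)
print(unpack_measurement(frame))
```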
The receiver decodes the received packets and restores the measured values. If the CRC check passes, the measured value is placed in the measured value matrix according to its sequence number; otherwise the wrong measured value is discarded (assigned 0). The measurement matrix is generated by setting the same random number seed as at the sender, so it does not need to be transmitted. The measured value matrix and the corresponding measurement matrix are then used to reconstruct the image frames by a compressed sensing algorithm. The key frames are reconstructed directly; the non-key frames are obtained by summing the reconstructed residual matrices and the key frames. Finally, the reconstructed frames are assembled back into the video.
Light propagation in water suffers attenuation from both absorption and scattering, which may cause data loss or bit errors. Under distributed compressive video sensing, all measured values are of equal importance, so the reconstructed picture quality at the decoder depends only on the number and correctness of the received measured values; the impact of any data loss is spread evenly across each video frame. Therefore, the loss of individual measured values has little impact on the reconstruction quality of the image.

3. System Principle and Realization

The structure diagram of the system is shown in Figure 3. The system consists of two parts: the transmitter and the receiver. The transmitter uses MATLAB to encode the video frame sequence based on distributed compressive sensing theory. The encoded data are transmitted to an FPGA (DIGILENT, ZYNQ-7000) via the LWIP protocol, and the FPGA performs OOK (On-Off Keying) modulation of the LED (CREE, Q5) through the driving circuit. The receiver transforms the discrete photon pulses into a continuous bit stream by exploiting the time intervals between photon arrivals, and then sends the bit stream to MATLAB for decoding via the LWIP protocol.
The principle of signal restoration is shown in Figure 4. The signal extraction proceeds in the following steps: (1) To facilitate sampling with a 50 MHz clock, the single-photon pulse signal output by the detector is stretched so that its width is greater than 40 ns. (2) The FPGA sets a time-interval threshold “T”. The trailing demodulator outputs “1” when a stretched single-photon pulse is detected, and keeps outputting “1” as long as the next stretched pulse arrives within “T”; once no pulse arrives within “T”, it outputs “0”. (3) The FPGA sends the trailing demodulation signal through a shift register whose right-shift width is “T” (“T” is also the redundant tail length of the trailing demodulation signal). The shift register output and the trailing demodulation signal are then logically ANDed to obtain the true demodulated signal. (4) The above operations are repeated.
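The hold-and-trim logic of steps (2)–(3) can be simulated on a discrete time grid. The pulse times, threshold and time units below are illustrative; note that the AND of the trailing signal with its delayed copy yields the “1” burst shifted by the constant latency T:

```python
import numpy as np

def trailing_demodulate(pulse_times, T, t_end):
    """Sketch of the Figure 4 pipeline on a unit-step time grid."""
    trail = np.zeros(t_end, dtype=int)
    # Step 2: hold '1' from each pulse until T elapses with no new pulse,
    # so the trailing signal overshoots the last pulse by T.
    for t in pulse_times:
        trail[t:min(t + T, t_end)] = 1
    # Step 3: delay by T (the shift register) and AND with the trailing
    # signal, trimming the T-long redundant tail; the output is the true
    # burst delayed by the constant latency T.
    shifted = np.concatenate([np.zeros(T, dtype=int), trail[:t_end - T]])
    return trail & shifted

# Photon pulses clustered around t = 10..28 encode one '1' slot;
# the silence afterwards encodes '0'.
demod = trailing_demodulate([10, 14, 19, 23, 28], T=6, t_end=60)
print(demod[20], demod[45], demod.sum())
```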
Figure 5 shows a photograph of the photon-counting underwater optical wireless communication system. PC1 uses the MATLAB platform to encode the video based on distributed compressive sensing theory. The measured values are then converted into data frames, which are loaded into the integrated transmitter via the network port. OOK modulation is used, and the LED driving circuit transforms the encoded binary bit stream into an optical signal. The cylindrical water tank is 150 cm long and is filled with still pure water. After the optical signal passes through the water tank, the single photon detector receives the extremely weak optical signal and outputs a discrete single-photon pulse sequence. A specially designed demodulator extracts the data from the discrete pulse sequence at the integrated receiver, and the data arrive at PC2 via the network port. PC2 uses the MATLAB platform to decode the received data stream and restore the measured values. Finally, the video frames are reconstructed and the video is restored.
To compare the reconstructed video frames with the original images, image quality assessment is usually divided into two categories: subjective and objective quality evaluation. Subjective quality evaluation depends on the human eye’s intuitive perception of the image [26]. Objective quality evaluation depends on formulas, such as the mean-square error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The expression of MSE is as follows:
$MSE = \dfrac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(f_{ij} - f'_{ij}\right)^2}{M \times N}$ (2)
where $f_{ij}$ and $f'_{ij}$ represent the original image and the restored image, respectively, and 1 ≤ i ≤ M, 1 ≤ j ≤ N. The expression of PSNR is as follows:
$PSNR = 10 \lg \dfrac{255 \times 255}{MSE}$ (3)
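Equations (2)–(3) translate directly into code; the 128 × 128 test image and the uniform noise below are made-up inputs for demonstration:

```python
import numpy as np

def psnr(original, restored):
    """Equations (2)-(3): MSE over an M x N 8-bit image, then PSNR in dB."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return 10 * np.log10(255 ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128))
# Simulate a mildly degraded reconstruction with small uniform noise.
noisy = np.clip(img + rng.integers(-5, 6, (128, 128)), 0, 255)
print(round(psnr(img, noisy), 1))
```

With noise uniform on [−5, 5], the MSE is roughly 10, so the PSNR lands near 38 dB.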
Figure 6 shows the optical signal transmission model of our underwater optical communication system. The emission power of the LED is 1.1 W, and the emission angle of the collimator is 5°. The received power for underwater optical communication is given by [27]:
$P_r = P_t \exp(-cL)$ (4)
where c is the attenuation coefficient, Pt is the transmitted power, and L is the communication distance.
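Inverting Equation (4) for the distance at which the received power drops to a required level gives a quick estimate. This bare-attenuation calculation ignores geometric and collection losses, so it will not reproduce the paper's Table 1 values exactly:

```python
import math

def longest_distance(P_t, P_r, c):
    # Invert Equation (4), P_r = P_t * exp(-c * L), for L.
    return math.log(P_t / P_r) / c

# Powers from the experiment: 1.1 W emitted, 1.7e-12 W needed at the detector.
# Attenuation coefficients follow Petzold's three water types.
for water, c in [("harbor", 2.19), ("coastal", 0.40), ("clear", 0.15)]:
    print(f"{water}: {longest_distance(1.1, 1.7e-12, c):.1f} m")
```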

4. Experimental Results and Discussion

To verify the effectiveness of the distributed compressive video sensing coding method in the photon-counting underwater wireless optical communication system, we used the standard test video sequences “foreman” and “coastguard” (frame size: 128 × 128) as experimental objects, with GOP size = 3. The first frame was a key frame, the second and third frames were non-key frames, and the reconstruction algorithm was TVAL3. Because of the limited length of the water tank, longer-distance communication was emulated by attenuating the power of the LED; in a real link, the attenuation along the communication direction is caused mainly by absorption and scattering. Since the communication rate was low and the distance short, the temporal effects of scattering (such as pulse spreading, or the occasional photon scattered back toward the receiver, which could even be slightly beneficial) were very small and were ignored. Instead, we considered the transmitted power as the main factor determining the number of received photons, and simulated the attenuation of light transmission in water by changing the voltage of the LED lamp. Figure 7 shows the BER versus the number of photon pulses per bit counted at the detector, at a baud rate of 100 kbps; the number of single-photon pulses output by the detector was used as the received-optical-power index in this experiment. As seen in the figure, the weaker the attenuation, i.e., the more photon pulses per bit, the lower the bit error rate.
Figure 8 shows the waveforms of underwater photon-counting communication at different BER, where a is the modulation signal, b is the single-photon pulse signal output by the detector, and c is the demodulated signal. As seen in the figure, when the bit error rate was small, the demodulated waveform showed no pitting distortion or tailing. When the bit error rate was high, the discrete single-photon pulse signal became sparse, the number of photon pulses during a high level decreased, and parts of the demodulated signal showed pitting distortion.
First, we used the standard test video sequence “coastguard” as the experimental object without distributed compressive video sensing coding: the video frame data were channel-coded and transmitted directly through the underwater photon-counting communication system. Then, the video frames were measured by distributed compressive video sensing coding, the measured values were channel-coded and transmitted through the same system, and the video frames were restored by decoding at the receiver. Figure 9 compares the first three frames of “coastguard” at different BER: the first line of pictures shows the original video frames, the second line shows the result of direct transmission, and the third line shows the result of transmission after distributed compressive video sensing coding. When the bit error rate was high, the directly transmitted video frames could not be recovered clearly.
To evaluate the image reconstruction performance subjectively at different average measurement rates, Figure 10 compares the first three frames of “foreman” at average measurement rates of 0.4667, 0.5667, 0.6667 and 0.7667. In Figure 10, the first frame is the key frame and the other frames are non-key frames. The higher the measurement rate of the key frame, the clearer the reconstructed image and the larger the PSNR value.
Figure 11 and Figure 12 show the reconstruction of “foreman” and “coastguard” under different BER; in these figures, the abscissa is the AMR and the ordinate is the average PSNR of the reconstructed images. The experimental results demonstrate that the PSNR increases with the average measurement rate.
The optical signal became extremely weak after passing through the underwater channel to the single photon detector. Since the communication distance was short and the rate low, very few photons belonging to the slot of symbol “1” traversed into the slot of symbol “0” due to scattering. The bit errors we observed occurred mainly because the number of photons in the time slot of symbol “1” was insufficient, causing “1” to be misjudged as “0”. Scattered photons landing in the time slot of symbol “1” were in fact somewhat helpful for identifying it, while the time slot of symbol “0” contained very few scattered photons. Moreover, the time intervals between dark counts and background noise photons were much larger than the threshold we set, so symbol “0” was not misjudged as “1”.
Figure 13 and Figure 14 show the average PSNR of the reconstructed video sequences at different BER under five average measurement rates. The results show that the larger the BER, the smaller the PSNR. However, even when the BER was ≥0.1, the video frames could still be restored clearly as long as the measurement rate was large enough.
The wavelength in our experimental system was 500 nm. The quantum efficiency of the single photon detector was η = 35%, and its effective photosensitive area was a circle 500 μm in diameter. To increase the communication distance, a focusing lens with a diameter of 10 cm was placed in front of the single photon detector. According to the experiments, when the baud rate was 100 kbps and the average number of photon pulses per bit was 20, the bit error rate was 0.0421, and the video frames could still be restored clearly. We calculated the optical power corresponding to 20 photon pulses per bit as 1.7 × 10−12 W. Using Equation (4), we then calculated the theoretical longest communication distance for the three water types shown in Table 1; the attenuation coefficients were taken from the work of Petzold [28]. Longer communication distances could be achieved by increasing the LED power or using multiple LEDs.

5. Conclusions

In this study, a photon-counting underwater optical wireless communication system was built. To achieve reliable video transmission at high bit error rates, we proposed and verified a joint source-channel coding scheme based on residual distributed compressive video sensing, with a specially designed data frame format for the transmitted information. In the experiments, the influence of AMR and BER on the quality of the reconstructed video frames was analyzed, and the results verify that, if the measurement rate is large enough, the video frames can be restored clearly even in a high-bit-error-rate communication environment. This scheme overcomes shortcomings of traditional video coding, such as large computational complexity, limited storage capacity and high coding complexity: it transfers the coding complexity to the decoder, and reduces both the amount of sampled data and the hardware caching requirements. The experimental results show that, when the baud rate was 100 kbps and the average number of photon pulses per bit was 20, the bit error rate was 0.0421 and the video frames could still be restored clearly.

Author Contributions

Conceptualization, Q.Y.; Data curation, Z.H.; Formal analysis, Q.Y.; Funding acquisition, Q.Y.; Project administration, Q.Y.; Software, Z.H., Z.L. and T.Z.; Supervision, Q.Y.; Validation, Z.H., Q.Y. and Z.L.; Writing—Original Draft, Z.H.; and Writing—Review and Editing, Z.H., Q.Y., Z.L. and Y.W.

Funding

This work was supported by National Natural Science Foundation of China (No. 61565012), China Postdoctoral Science Foundation (No. 2015T80691), the Science and Technology Plan Project of Jiangxi Province (No. 20151BBE50092), and the Funding Scheme to Outstanding Young Talents of Jiangxi Province (No. 20171BCB23007).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arnon, S. Underwater optical wireless communication network. Opt. Eng. 2010, 49, 015001. [Google Scholar] [CrossRef]
  2. Kaushal, H.; Kaddoum, G. Underwater optical wireless communication. IEEE Access 2016, 4, 1518–1547. [Google Scholar] [CrossRef]
  3. Zeng, Z.; Fu, S.; Zhang, H.; Dong, Y.; Cheng, J. A survey of underwater optical wireless communications. IEEE Commun. Surv. Tutor. 2017, 19, 204–238. [Google Scholar] [CrossRef]
  4. Cochenour, B.; Mullen, L.; Laux, A.; Curran, T. Effects of multiple scattering on the implementation of an underwater wireless optical communications link. In Proceedings of the OCEANS 2006, Boston, MA, USA, 18–21 September 2006; pp. 1–6. [Google Scholar]
  5. Hu, S.; Mi, L.; Zhou, T.; Chen, W. 35.88 attenuation lengths and 3.32 bits/photon underwater optical wireless communication based on photon-counting receiver with 256-PPM. Opt. Express 2018, 26, 21685–21699. [Google Scholar] [CrossRef] [PubMed]
  6. Ji, Y.-w.; Wu, G.-f.; Wang, C.; Zhang, E.-f. Experimental study of SPAD-based long distance outdoor VLC systems. Opt. Commun. 2018, 424, 7–12. [Google Scholar] [CrossRef]
  7. Wang, C.; Yu, H.-Y.; Zhu, Y.-J. A long distance underwater visible light communication system with single photon avalanche diode. IEEE Photonics J. 2016, 8, 1–11. [Google Scholar] [CrossRef]
  8. Wang, C.; Yu, H.-Y.; Zhu, Y.-J.; Wang, T.; Ji, Y.-W. Experimental study on SPAD-based VLC systems with an LED status indicator. Opt. Express 2017, 25, 28783–28793. [Google Scholar] [CrossRef]
  9. Wang, C.; Yu, H.-Y.; Zhu, Y.-J.; Wang, T.; Ji, Y.-W. Multi-LED parallel transmission for long distance underwater VLC system with one SPAD receiver. Opt. Commun. 2018, 410, 889–895. [Google Scholar] [CrossRef]
  10. Yan, Q.; Zhao, B.; Sheng, L.; Liu, Y.A. Continuous measurement of the arrival times of x-ray photon sequence. Rev. Sci. Instrum. 2011, 82, 053105. [Google Scholar] [CrossRef] [PubMed]
  11. Zhou, T.; Hu, S.; Mi, L.; Zhu, X.; Chen, W. A long-distance underwater laser communication system with photon-counting receiver. In Proceedings of the 2017 16th International Conference on Optical Communications and Networks (ICOCN), Wuzhen, China, 7–10 August 2017; pp. 1–3. [Google Scholar]
  12. Richardson, I.E. H. 264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  13. Wiegand, T.; Sullivan, G.J.; Bjontegaard, G.; Luthra, A. Overview of the H. 264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 560–576. [Google Scholar] [CrossRef]
  14. Puri, R.; Majumdar, A.; Ramchandran, K. PRISM: A video coding paradigm with motion estimation at the decoder. IEEE Trans. Image Process. 2007, 16, 2436–2448. [Google Scholar] [CrossRef] [PubMed]
  15. Xiong, Z.; Liveris, A.D.; Cheng, S. Distributed source coding for sensor networks. IEEE Signal Process. Mag. 2004, 21, 80–94. [Google Scholar] [CrossRef]
  16. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  17. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  18. Stanković, V.; Stanković, L.; Cheng, S. Compressive video sampling. In Proceedings of the 2008 16th European Signal Processing Conference, Lausanne, Switzerland, 25–29 August 2008; pp. 1–5. [Google Scholar]
  19. Guillemot, C.; Pereira, F.; Torres, L.; Ebrahimi, T.; Leonardi, R.; Ostermann, J. Distributed monoview and multiview video coding. IEEE Signal Process. Mag. 2007, 24, 67–76. [Google Scholar] [CrossRef]
  20. Kang, L.-W.; Lu, C.-S. Distributed compressive video sensing. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 19–24 April 2009; pp. 1169–1172. [Google Scholar]
  21. Chen, H.-W.; Kang, L.-W.; Lu, C.-S. Dynamic measurement rate allocation for distributed compressive video sensing. In Proceedings of the Visual Communications and Image Processing 2010, Huangshan, China, 11–14 July 2010. [Google Scholar]
  22. Chen, J.; Su, K.; Wang, W.; Lan, C. Residual distributed compressive video sensing based on double side information. Acta Autom. Sin. 2014, 40, 2316–2323. [Google Scholar] [CrossRef]
  23. Imran, N.; Seet, B.-C.; Fong, A. Distributed compressive video sensing: A review of the state-of-the-art architectures. In Proceedings of the 2012 19th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Auckland, New Zealand, 28–30 November 2012; pp. 68–73. [Google Scholar]
  24. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef]
  25. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530. [Google Scholar] [CrossRef] [Green Version]
  26. Velasco, J.P.L. Video quality assessment. In Video Compression; InTech: London, UK, 2012. [Google Scholar]
  27. Mobley, C.D.; Gentili, B.; Gordon, H.R.; Jin, Z.; Kattawar, G.W.; Morel, A.; Reinersman, P.; Stamnes, K.; Stavn, R.H. Comparison of numerical models for computing underwater light fields. Appl. Opt. 1993, 32, 7484–7504. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Petzold, T.J. Volume Scattering Functions for Selected Ocean Waters; Scripps Institution of Oceanography La Jolla Ca Visibility Lab: La Jolla, CA, USA, 1972. [Google Scholar]
Figure 1. Joint source-channel coding scheme based on residual distributed compressive video sensing.
Figure 2. Data frame format.
Figure 3. The structure diagram of system. LED, led lamp; SPCM, single photon counter module; FPGA, field programmable gate array; JSCCRDCVS, joint source and channel coding scheme based on residual distributed compressive video sensing; LWIP, a lightweight TCP/IP stack; PC, personal computer.
Figure 4. Timing of signal extraction: (a) single-photon pulse signal output by the detector; (b) stretched single-photon pulse signal; (c) trailing demodulated signal; (d) shift register output signal; and (e) demodulated signal.
Figure 5. Photon-counting underwater optical wireless communication system.
Figure 6. Optical signal transmission model of underwater optical communication.
Figure 7. BER at different average number of photon pulses per bit.
Figure 8. Waveform of underwater photon-counting communication at different BER: (a) BER = 0.1530; and (b) BER = 0.0210.
Figure 9. Comparison of the first three frames of “coastguard” at different BER: (a) BER = 0.0062; (b) BER = 0.0081; and (c) BER = 0.0107. The first line of pictures shows the original video frames, the second line shows the result of direct transmission of video frames, and the third line shows the result of transmission of video frames after distributed compressive video sensing coding.
Figure 10. Comparison of the first three frames of “foreman” at different AMR: (a) AMR = 0.4667; (b) AMR = 0.5667; (c) AMR = 0.6667; and (d) AMR = 0.7667.
Figure 11. The AMR-PSNR performances for the “foreman”.
Figure 12. The AMR-PSNR performances for the “coastguard”.
Figure 13. BER-PSNR performances for the “foreman”.
Figure 14. BER-PSNR performances for the “coastguard”.
Table 1. Theoretical longest communication distance for three water types.

Water Type    c (m−1)    L (m)
Harbor        2.19       8.16
Coastal       0.40       40.66
Clear         0.15       102.27

Hong, Z.; Yan, Q.; Li, Z.; Zhan, T.; Wang, Y. Photon-Counting Underwater Optical Wireless Communication for Reliable Video Transmission Using Joint Source-Channel Coding Based on Distributed Compressive Sensing. Sensors 2019, 19, 1042. https://doi.org/10.3390/s19051042
