Article

Modified Joint Source–Channel Trellises for Transmitting JPEG Images over Wireless Networks

1 Graduate Institute, Prospective Technology of Electrical Engineering and Computer Science, National Chin-Yi University of Technology, Taichung 41170, Taiwan
2 Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung 41170, Taiwan
3 Department of Electronic Engineering, National Chin-Yi University of Technology, Taichung 41170, Taiwan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(6), 2578; https://doi.org/10.3390/app14062578
Submission received: 30 December 2023 / Revised: 1 March 2024 / Accepted: 17 March 2024 / Published: 19 March 2024
(This article belongs to the Special Issue Advanced Electronics and Digital Signal Processing)

Abstract

This paper presents a joint source–channel image transmission model based on a modified trellis construction using variable-length and fixed-length codes. The model employs linear block trellis codes and a modified Bahl–Cocke–Jelinek–Raviv algorithm for joint source–channel decoding. Additionally, this model utilizes a hierarchical tree and a parity-check matrix to combine the source and channel codes, reducing errors in the decoding process and enabling the decoding of longer sequences without substantial loss of quality.

1. Introduction

Richard W. Hamming developed the first error-correcting codes in 1950 [1], while Shannon [2] showed that errors induced by a noisy channel can be reduced to an arbitrarily small probability without loss of information rate, provided that the information rate is less than the channel capacity. Following Shannon's coding theorem [2], the problems of source and channel coding are usually treated separately, with the result that the source coder and the channel coder in communication systems are designed independently. To address the limitations of this separation, several scholars have designed combined source–channel encoders and decoders [3,4,5].
Shannon, in his landmark 1948 paper that laid the groundwork for information theory, divided information systems into distinct source and channel components requiring individual optimization. However, these strict divisions may not apply, in theory or in practice, to modern multimedia communication systems in which message sequences can be encoded without a priori probabilities. Accordingly, more recent studies have examined the problem of transmitting images reliably over an unreliable channel with limited bandwidth [6,7,8]. Researchers have presented various transform coding schemes for image processing, most notably the discrete wavelet transform and the discrete cosine transform [9,10]; additionally, several techniques are tailored to preserving images using joint source–channel coding (JSCC) [7]. An alternative approach partitions image coefficients into sets organized in a hierarchical tree; such an approach far outperforms its traditional counterparts when images are transmitted across a noisy channel. However, the hierarchically encoded images are typically of poor quality when the channel is unreliable [8,9].
Several methods of error correction in image coding have been developed, including forward error correction, error-resilient coding, and error concealment. This paper presents a novel approach to error correction employing a modified trellis, general variable length codes (VLCs), and a parity-check matrix. This approach constructs a JSCC image coder that is capable of generating output bits according to their relative importance [10]. Such a modified trellis model offers several advantages over traditional image encoding models. First, the quality of the reconstructed image improves as the number of decoded bits increases, a desirable property in many applications, including progressive transmission and image browsing. Second, image encoding can be terminated as soon as a target bit rate is met, and the resulting coded bit stream is optimized for that bit rate. Third, images can be encoded once at a high bit rate and decoded at any desired lower bit rate by truncating the bit stream. Fourth, this method employs an optimized encoding algorithm that reduces errors and is highly efficient.
Furthermore, our approach to information encoding is considerably more robust and less error-prone than conventional image encoding techniques. Specifically, we combine the VLCs of a traditional source encoder with a modified trellis of block Hamming codes defined by a parity-check matrix. We correct errors by applying an optimized form of the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm. The algorithm uses expurgation techniques to achieve a larger minimum Hamming distance by carefully selecting subsets of the Hamming codewords, as illustrated by the sketch below.
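As a concrete (and deliberately simplified) illustration of expurgation, the following Python sketch keeps only the even-weight codewords of a (7, 4) Hamming code, which raises the minimum Hamming distance from 3 to 4. The generator matrix and the even-weight selection rule are assumptions for illustration, not necessarily the exact construction used in this paper.

```python
# Illustrative sketch (not the authors' exact procedure): expurgating a (7, 4)
# Hamming code by keeping only its even-weight codewords raises the minimum
# Hamming distance from 3 to 4 at the cost of one information bit.
from itertools import product

# Generator matrix of a systematic (7, 4) Hamming code (assumed standard form).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Multiply the 4-bit message by G over GF(2)."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(m) for m in product([0, 1], repeat=4)]
expurgated = [c for c in codewords if sum(c) % 2 == 0]   # keep even-weight words

def d_min(codes):
    """Minimum pairwise Hamming distance of a list of codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for i, x in enumerate(codes) for y in codes[i + 1:])

print("original   d_min =", d_min(codewords))   # 3
print("expurgated d_min =", d_min(expurgated))  # 4
```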
Our proposed model applies serial concatenation forward error correction to block codes separated in a trellis with JPEG encoding [11,12,13,14]. The model employs two simultaneous trellis-based source encodings—VLC and linear block code—for two separate constituent codes to preserve the quality of scalable image sequences in the JSCC system. Such a system permits incorporating multilevel codes with various asymptotic and practical properties.
Moreover, Ref. [15] proposed a model that adapts to changes in the SNR for image transmission, achieving good performance over channel models with different SNRs. The autoencoder (AE)-based JSCC model has also been applied in the security field [16], and a higher-order modulation method was implemented using fully convolutional neural networks (FCNNs) in [17]. Driven by deep learning technology, semantic communications are compatible with various multimedia sources and offer the advantages of reduced communication bandwidth and improved robustness [18]. The super-resolution network of Ref. [19] has also been applied to CSI compression, serving as the source coding module in a separate source–channel coding scheme.
The remainder of this paper is organized as follows: Section 2 outlines the proposed wireless JPEG transmission model; Section 3 details the construction of the combined source–channel trellis; Section 4 presents the simulated results; and Section 5 concludes the paper.

2. The Wireless JPEG Transmission Model

In this section, a robust model for compressing and transmitting JPEG images that integrates linear block encoding into the JPEG encoding process is proposed. Figure 1 depicts a block diagram of the model's JSCC process. First, an image is sent to a discrete cosine transform (DCT) block. Second, the DCT coefficients are quantized and rounded to the nearest integers. Third, a trellis is constructed to enable robust transmission of sequences decoded with the modified BCJR algorithm. Fourth, the source and channel encodings are divided into four parts, as illustrated in Figure 1a. Fifth, the original images are encoded in a trellis structure to facilitate expurgation. Sixth, a Huffman code is employed to encode the bits that correspond to the DCT coefficients. Seventh, the source and channel encodings are combined and mapped to generate a JPEG bit stream for transmission over a wireless channel model, i.e., the IEEE 802.11 channel model. Eighth, the receiver performs VLC and random-code decoding with the BCJR algorithm to decode the received sequences, a process illustrated in Figure 1b. Ninth, the BCJR decoding process is integrated with the VLC and linear block JPEG decoding processes: the received sequence is first decoded with BCJR to recover the JPEG encoding sequence, and VLC decoding is then used to recover the original image data as bit sequences.
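The first two of these steps are the standard JPEG source-side operations. The short Python sketch below illustrates them (8 × 8 block DCT followed by quantization and rounding); the quantization table shown is the standard JPEG luminance table and is an assumption for illustration, not necessarily the table used in our experiments.

```python
# Minimal sketch of the source-side steps described above (block DCT and
# quantization); the trellis, Huffman, and BCJR stages are omitted.
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (assumed for illustration).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def blockwise_quantized_dct(image):
    """Split a grayscale image into 8x8 blocks, apply a 2-D DCT to each block,
    and round the scaled coefficients to the nearest integer."""
    h, w = image.shape
    coeffs = np.empty_like(image, dtype=np.int32)
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            block = image[r:r+8, c:c+8].astype(np.float64) - 128.0  # level shift
            d = dctn(block, norm="ortho")                           # 2-D DCT-II
            coeffs[r:r+8, c:c+8] = np.round(d / Q_LUMA).astype(np.int32)
    return coeffs

# Example: a 512x512 test image yields 4096 blocks of quantized coefficients.
img = np.random.randint(0, 256, (512, 512))
print(blockwise_quantized_dct(img).shape)  # (512, 512)
```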

3. Construction of the Combined Source–Channel Trellis

In this section, we describe the design of our modified trellis and the method of decoding messages encoded in the trellis using our modified optimal algorithm. For this purpose, a low-complexity JSCC scheme with an expurgated trellis is developed.

3.1. Tier 1: Combined VLC–FLC Trellis Encoder

As done in Ref. [20], consider an (n, k) random code with a parity-check matrix as follows:
$$\mathbf{H} = \begin{bmatrix} \mathbf{h}_0 & \mathbf{h}_1 & \cdots & \mathbf{h}_{n-1} \end{bmatrix} \tag{1}$$
where $\mathbf{h}_i$, $i = 0, 1, \ldots, n-1$, are the column vectors of the parity-check matrix $\mathbf{H}$. In the trellis, a valid codeword $\mathbf{c}$ must satisfy the following constraint:
$$\mathbf{s}^{T} = \mathbf{H}\mathbf{c}^{T} = c_0 \mathbf{h}_0 + c_1 \mathbf{h}_1 + \cdots + c_{n-1} \mathbf{h}_{n-1} = \mathbf{0} \tag{2}$$
To construct a trellis with the desired Hamming code, we employ the following formula:
$$\mathbf{s}' = \mathbf{s} + c_i \mathbf{h}_i \tag{3}$$
where $\mathbf{s}$ and $\mathbf{s}'$ denote the current state and the next state, respectively. For example, consider a (7, 4) Hamming code with the parity-check matrix
$$\mathbf{H} = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}.$$
The trellis for this code is depicted in Figure 2.
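The state-update rule of Equation (3) can be turned directly into a trellis-building routine. The following Python sketch builds the syndrome trellis for the (7, 4) parity-check matrix above and uses it to test whether a bit sequence is a codeword; it is a minimal illustration rather than the full encoder used in our model (pruning of states that cannot reach the zero final state is omitted).

```python
# Sketch of the syndrome-trellis construction of Equations (1)-(3): states are
# partial syndromes, and reading bit c_i moves state s to s' = s + c_i * h_i
# over GF(2). Columns of H are taken from the (7, 4) example above.
H_COLS = [0b001, 0b010, 0b011, 0b100, 0b101, 0b110, 0b111]  # h_0 .. h_6 as ints

def build_trellis(h_cols):
    """Return, per section i, the edges (state, bit, next_state) reachable
    from the all-zero starting state."""
    sections, reachable = [], {0}
    for h in h_cols:
        edges, nxt = [], set()
        for s in sorted(reachable):
            for bit in (0, 1):
                s_next = s ^ (h if bit else 0)   # s' = s + c_i h_i over GF(2)
                edges.append((s, bit, s_next))
                nxt.add(s_next)
        sections.append(edges)
        reachable = nxt
    return sections

def is_codeword(bits, h_cols):
    """A sequence is a codeword iff it ends in the all-zero syndrome state."""
    s = 0
    for bit, h in zip(bits, h_cols):
        s ^= h if bit else 0
    return s == 0

trellis = build_trellis(H_COLS)
print(len(trellis), "trellis sections")
print(is_codeword([1, 1, 1, 0, 0, 0, 0], H_COLS))  # True: 1110000 is a codeword
print(is_codeword([1, 0, 1, 0, 0, 0, 0], H_COLS))  # False: second bit flipped
```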
In our two-tiered JSCC model, the first tier comprises variable-length codes (VLCs). VLCs can represent the same message as fixed-length codes (FLCs) but are highly sensitive to transmission errors. To address this shortcoming, we propose a modified BCJR decoding algorithm that is considerably more robust than hard-decision decoding techniques. We adopt a double trellis construction combining FLCs and VLCs depending on message length. Our VLC–FLC double trellis functions in the following manner: consider a typical trellis of VLCs and FLCs and a sequence of five source symbols u = (1, 4, 2, 3, 5). The VLC and FLC sequences are (1) 000,10,001,01,11 with bit length n = 12, and (2) 000,011,001,010,100 with bit length n = 15. All sequences for this five-symbol message are illustrated in the block trellis in Figure 3, in which the right line represents bit 1, whereas the left line represents bit 0.
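The encoding of this example can be reproduced with a few lines of Python. The symbol-to-codeword mappings below are inferred from the bit sequences listed above and are therefore an assumption about the codebooks of Figure 3.

```python
# Encoding sketch for the example above; the codebooks are inferred from the
# listed VLC and FLC bit sequences (an assumption, not taken from Figure 3 itself).
VLC = {1: "000", 2: "001", 3: "01", 4: "10", 5: "11"}    # variable-length codebook
FLC = {1: "000", 2: "001", 3: "010", 4: "011", 5: "100"} # fixed-length (3 bits)

u = [1, 4, 2, 3, 5]
vlc_bits = "".join(VLC[s] for s in u)
flc_bits = "".join(FLC[s] for s in u)
print(vlc_bits, len(vlc_bits))  # 000100010111     -> 12 bits
print(flc_bits, len(flc_bits))  # 000011001010100  -> 15 bits
```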
Next, consider a source that independently produces outputs selected from a codebook alphabet u = (A, B, C, D). A source sequence of length seven is mapped to a vector c of codewords taken from a VLC codebook for the symbol alphabet u. For example, the symbol A is mapped to the septuple 1000110. The other symbols in u are encoded similarly, as illustrated in Figure 4.
The total bit length of the codeword c is denoted by n = k + m. Every sequence consisting of k symbols and m parity bits can be graphically represented by the trellis structure depicted in Figure 5 for k = 4 and m = 3, where the length of the trellis represents time and the width of the trellis represents various states. The alphabet size of the source in Figure 5 is k = 4, and the codewords have a length of seven. A merging of the VLC trellis and the expurgating block trellis is required in the design of the decoding algorithm. Figure 4 and Figure 5 show the JSCC trellis structure.

3.2. Tier 2: Modified Nonlinear Block Code Decoder Trellis

The next phase of the experiment involved constructing a joint source–channel decoding model based on a bit-level trellis in two parts: the VLC source code and the linear block channel code. The procedure comprised the following steps. First, in the encoder, the serial VLC trellis and the block-code trellis were combined, treated as one bit sequence, and merged to create a joint trellis with compound states. Second, the modified BCJR bit-level a posteriori probability decoding algorithm was applied to this trellis to enable joint iterative decoding with bit-level soft outputs. The joint trellis offers two advantages: it provides a maximum a posteriori probability (MAP) algorithm at the bit-stream level, and it performs iterative decoding of the linear block codes in each trellis section. Each path through the trellis represents a specific encoded message. Third, we ran a Viterbi decoder to determine the optimal path, namely the one with the highest correlation with the received sequence $\mathbf{y} = (y_1, \ldots, y_L)$. Finally, the message sequence was identified from this optimal codeword.
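For reference, the following Python sketch implements a plain bit-level forward–backward (BCJR-style) a posteriori probability decoder over the (7, 4) syndrome trellis introduced in Section 3.1, assuming a binary symmetric channel with crossover probability p and uniform bit priors. It is a simplified stand-in for the modified BCJR algorithm used in our model, not the algorithm itself.

```python
# Minimal bit-level forward-backward (BCJR-style) APP decoder over the (7, 4)
# syndrome trellis, assuming a binary symmetric channel and uniform bit priors.
H_COLS = [0b001, 0b010, 0b011, 0b100, 0b101, 0b110, 0b111]
N_STATES = 8

def bcjr_bit_app(y_hard, p=0.1):
    n = len(y_hard)
    lik = lambda y, c: (1 - p) if y == c else p   # P(y_i | c_i) on a BSC

    # Forward recursion: alpha[i][s] ~ P(y_0..y_{i-1}, state s).
    alpha = [[0.0] * N_STATES for _ in range(n + 1)]
    alpha[0][0] = 1.0
    for i, h in enumerate(H_COLS):
        for s in range(N_STATES):
            if alpha[i][s]:
                for c in (0, 1):
                    alpha[i + 1][s ^ (h if c else 0)] += alpha[i][s] * lik(y_hard[i], c)

    # Backward recursion: beta[i][s] ~ P(y_i..y_{n-1} | state s); valid
    # codewords must terminate in the all-zero syndrome state.
    beta = [[0.0] * N_STATES for _ in range(n + 1)]
    beta[n][0] = 1.0
    for i in range(n - 1, -1, -1):
        h = H_COLS[i]
        for s in range(N_STATES):
            for c in (0, 1):
                beta[i][s] += beta[i + 1][s ^ (h if c else 0)] * lik(y_hard[i], c)

    # A posteriori probability that bit i equals 1.
    app = []
    for i, h in enumerate(H_COLS):
        num = {0: 0.0, 1: 0.0}
        for s in range(N_STATES):
            for c in (0, 1):
                num[c] += alpha[i][s] * lik(y_hard[i], c) * beta[i + 1][s ^ (h if c else 0)]
        app.append(num[1] / (num[0] + num[1]))
    return app

received = [1, 1, 1, 0, 0, 0, 1]   # codeword 1110000 with its last bit flipped
print([round(a, 2) for a in bcjr_bit_app(received)])
```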
Once the combined trellis was complete, we constructed a trellis code to test the ability of the trellis to transmit an image without loss of information, as illustrated in Figure 2 and Figure 3. In these figures, each path through the trellis represents a specific message. Because two arcs exit each node, $2^L$ possible paths exist for a message $M$ of length $L$, ensuring that the system encodes $L$ bits. Linear block codes and convolutional codes have a natural trellis structure in which every path represents a codeword. In such a structure, $S = (s_1, \ldots, s_L)$ represents one state sequence in a trellis $T$ with $L$ sections. In the $k$th trellis section, the system encodes linear block codes at a code rate of $k/n$ to prepare the data for transmission. Next, the data are packetized as a codeword $\mathbf{w}_k$ of $n$ bits, where $\mathbf{w}_k = (c_{k,1}, c_{k,2}, \ldots, c_{k,n})$ represents a codeword of $n$ code bits at time $k$, each bit $c_{k,i} \in \{-1, +1\}$, and $\mathbf{y}$ is the received vector over a wireless fading channel. Once the data have been packetized, the Viterbi algorithm is employed to maximize the probability of correct decoding. When the sequence $S$ is equally distributed, the Viterbi algorithm is equivalent to a maximum likelihood (ML) criterion. As demonstrated in [21], by generalizing the Viterbi algorithm, we obtain
$$P(S \mid \mathbf{y}) = \frac{P(\mathbf{y}, S)}{P(\mathbf{y})} = \frac{P(S)\,P(\mathbf{y} \mid S)}{P(\mathbf{y})} \tag{4}$$
where $P(\mathbf{y})$ denotes the total probability, such that
$$\begin{aligned} \arg\max_{S} P(S \mid \mathbf{y}) &= \arg\max_{S} P(\mathbf{y} \mid S)\,P(S) = \arg\max_{S} \ln\!\Bigg(\prod_{k=1}^{L} P(\mathbf{y}_k \mid s_{k-1}, s_k)\,P(s_{k-1} \mid s_k)\Bigg) \\ &= \arg\max_{S} \Bigg[\ln \prod_{k=1}^{L} P(\mathbf{y}_k \mid s_{k-1}, s_k) + \ln \prod_{k=1}^{L} P(s_{k-1} \mid s_k)\Bigg] \\ &= \arg\max_{S} \Bigg[\ln \prod_{k=1}^{L} \prod_{i=1}^{2^m-1} P(y_{k,i} \mid s_{k-1}, s_k) + \ln \prod_{k=1}^{L} P(s_{k-1} \mid s_k)\Bigg] \\ &= \arg\max_{S} \sum_{k=1}^{L} \Bigg[\sum_{i=1}^{2^m-1} \ln P(y_{k,i} \mid s_{k-1}, s_k) + \ln P(s_{k-1} \mid s_k)\Bigg] \end{aligned} \tag{5}$$
The Viterbi algorithm is the most efficient method of determining an optimal state sequence $S^{*}$, i.e., an optimal path with respect to the maximum likelihood criterion. For such a state sequence $S^{*}$, the prior probability of a codeword is usually equally distributed or unknown; therefore, the state sequence can be described by Equation (6):
$$S^{*} = \arg\max_{S \in T} P(S \mid \mathbf{y}) = \arg\max_{S \in T} \sum_{k=1}^{L} \sum_{i=1}^{2^m-1} \ln P(y_{k,i} \mid s_{k-1}, s_k) = \arg\max_{S \in T} \sum_{k=1}^{L} \ln P(\mathbf{y}_k \mid s_{k-1}, s_k) = \arg\min_{S \in T} \sum_{k=1}^{L} d(\mathbf{y}_k, \mathbf{w}_k) \tag{6}$$
where $d(\mathbf{y}_k, \mathbf{w}_k) = |\mathbf{y}_k - \mathbf{w}_k|^2$ is the $k$th arc metric between states $s_{k-1}$ and $s_k$ in the trellis. The term $\sum_{i=1}^{2^m-1} \ln P(y_{k,i} \mid s_{k-1}, s_k)$ is the maximum-likelihood metric of the algorithm, which can be expressed for a binary symmetric channel as
$$\begin{aligned} \arg\max_{S \in T} \sum_{k=1}^{L} \sum_{i=1}^{2^m-1} \ln P(y_{k,i} \mid s_{k-1}, s_k) &= \arg\max_{S \in T} \sum_{k=1}^{L} \Big[ d(\mathbf{y}_k, \mathbf{w}_k) \ln P_e + \big(2^m - 1 - d(\mathbf{y}_k, \mathbf{w}_k)\big) \ln (1 - P_e) \Big] \\ &= \arg\min_{S \in T} \sum_{k=1}^{L} \Big[ -\,d(\mathbf{y}_k, \mathbf{w}_k) \ln \frac{P_e}{1 - P_e} + C_1 \Big] \end{aligned} \tag{7}$$
where the metric $d(\mathbf{y}_k, \mathbf{w}_k)$ is a distance measurement and $P_e$ is the probability of a transmission error. By using Equation (5), a MAP criterion is applied to the given received values $\mathbf{y}$; it delivers not only the most likely path according to the maximum likelihood measure but also the a posteriori probability of each bit in the code sequence. The Viterbi algorithm is an efficient decoding procedure for obtaining the maximum a posteriori probability estimate from the received sequence $\mathbf{y}_k$, $k = 1, \ldots, N$.
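The minimization of Equation (6) can be carried out with a standard Viterbi search over the bit-level syndrome trellis. The Python sketch below does this for the (7, 4) example code using the squared-distance arc metric $d(\mathbf{y}_k, \mathbf{w}_k) = |\mathbf{y}_k - \mathbf{w}_k|^2$ and the $\{-1, +1\}$ bit mapping assumed above; it is a minimal illustration rather than the full joint decoder.

```python
# Sketch of the Viterbi search of Equation (6) on the bit-level (7, 4) syndrome
# trellis: find the codeword minimizing the accumulated metric d(y, w) = |y - w|^2,
# with code bits mapped to -1/+1.
import math

H_COLS = [0b001, 0b010, 0b011, 0b100, 0b101, 0b110, 0b111]
N_STATES = 8

def viterbi_ml(y_soft):
    """y_soft: received real values, one per code bit (antipodal signalling)."""
    INF = math.inf
    cost = [INF] * N_STATES
    cost[0] = 0.0                               # start in the all-zero syndrome state
    paths = [[] for _ in range(N_STATES)]
    for i, h in enumerate(H_COLS):
        new_cost = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if cost[s] == INF:
                continue
            for bit in (0, 1):
                w = 1.0 if bit else -1.0        # map {0, 1} -> {-1, +1}
                metric = cost[s] + (y_soft[i] - w) ** 2
                s_next = s ^ (h if bit else 0)
                if metric < new_cost[s_next]:   # keep only the survivor path
                    new_cost[s_next] = metric
                    new_paths[s_next] = paths[s] + [bit]
        cost, paths = new_cost, new_paths
    return paths[0]                             # best path ending in the zero state

# Noisy observation of the codeword 1110000 transmitted as +1/-1 symbols.
y = [0.9, 1.1, 0.4, -0.8, -1.2, 0.3, -0.7]
print(viterbi_ml(y))                            # -> [1, 1, 1, 0, 0, 0, 0]
```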

4. Simulated Results

We applied our combined JSCC model to estimate the average peak signal-to-noise ratio (PSNR) of test JPEG images transmitted over a wireless channel with additive white Gaussian noise (AWGN). When the linear code encoding JPEG images over such a channel is sufficiently long, accurately estimating image quality becomes difficult. The average bit error rate (BER) of a sufficiently long string of code can instead be estimated through adequate sampling to decrease computational complexity. We selected subsets of several random codes and sampled codewords produced from these codes through a code expurgation process. In the expurgation process, a JPEG image with dimensions 512 × 512 was first divided into 4096 8 × 8 blocks. Second, each block was converted to the frequency domain using the DCT. Third, the frequency coefficients in each block were encoded, and the resulting bit sequence was used to encode random codes; the trellis was constructed using a random (n, k) code. Fourth, the reconstructed image quality was calculated using Equation (8):
$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\mathrm{MSE}} \tag{8}$$
where MSE represents the mean square error between the original JPEG image and the decoded image over a slow-fading channel with AWGN. MSE is calculated using Equation (9):
$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{512} \sum_{j=1}^{512} \big( I_0(i,j) - I_r(i,j) \big)^2 \tag{9}$$
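Equations (8) and (9) translate directly into code; the following Python snippet computes the PSNR of a reconstructed 8-bit grayscale image against the original (the synthetic test image is for illustration only).

```python
# Direct implementation of Equations (8) and (9) for a pair of 8-bit grayscale
# images: the original I_0 and the reconstructed I_r.
import numpy as np

def psnr(i0, ir):
    i0 = i0.astype(np.float64)
    ir = ir.astype(np.float64)
    mse = np.mean((i0 - ir) ** 2)                 # Eq. (9) with N = 512 * 512 pixels
    return 10.0 * np.log10(255.0 ** 2 / mse)      # Eq. (8)

# Example with a synthetic 512x512 image corrupted by small additive noise.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (512, 512))
reconstructed = np.clip(original + rng.integers(-2, 3, (512, 512)), 0, 255)
print(f"PSNR = {psnr(original, reconstructed):.2f} dB")
```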
The robustness of the model over the noisy channel used for JPEG transmission was evaluated as follows. The channel model and its impulse response conform to the IEEE 802.11b standard. In the IEEE 802.11b model, the delay power spectrum is an exponentially decaying function, and the path amplitudes follow a Rayleigh distribution. Additionally, the spacing between the amplitude samples is constant. The discrete channel impulse response of the 802.11 model is given by
$$g(\tau) = \sum_{k=1}^{K_{\max}} \beta_k\, \delta(\tau - k T_s)\, e^{j\phi_k} \tag{10}$$
where
$$K_{\max} = \frac{10\,\tau_{\mathrm{rms}}}{T_s}, \qquad \overline{\beta_k^2} = \overline{\beta_0^2}\, e^{-k T_s / \tau_{\mathrm{rms}}}, \qquad \overline{\beta_0^2} = \frac{1 - e^{-T_s / \tau_{\mathrm{rms}}}}{1 - e^{-(K_{\max} + 1) T_s / \tau_{\mathrm{rms}}}}$$
where $\tau_{\mathrm{rms}}$ is the root mean square delay spread in a given indoor area, and $T_s$ is the sampling interval considered in realizing the channel impulse response. The delay power spectrum is calculated using Equation (11):
$$Q(\tau) = \sum_{k=1}^{K_{\max}} \overline{\beta_k^2}\, \delta(\tau - k T_s) \tag{11}$$
Figure 6 illustrates the exponential profile of the delay power spectrum in the 802.11b channel model. The filter taps are independent complex Gaussian variables with an average power profile that decays exponentially. In the simulation of this channel, each sample of the channel impulse response is generated from a filtered complex Gaussian random variable,
$$h(k) = N\!\left(0, \tfrac{1}{2}\overline{\beta_k^2}\right) + j\,N\!\left(0, \tfrac{1}{2}\overline{\beta_k^2}\right), \qquad \text{for } k = 0, \ldots, K_{\max} \tag{12}$$
This variable induces Rayleigh amplitude fading with variance $\overline{\beta_k^2}$ and a phase $\phi_k$ uniformly distributed over $[0, 2\pi)$. The value of $\overline{\beta_0^2}$ was selected so that the samples of the delay power spectrum sum to 1.
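For reference, the tap-generation procedure of Equations (10)–(12) can be sketched as follows in Python; the parameter values ($f_s$ = 3 MHz, $\tau_{\mathrm{rms}}$ = 100 ns) correspond to the low end of the range used in the simulations reported below.

```python
# Sketch of the exponentially decaying Rayleigh tap model of Equations (10)-(12).
import numpy as np

def ieee80211_taps(tau_rms=1e-7, f_s=3e6, rng=None):
    rng = rng or np.random.default_rng()
    t_s = 1.0 / f_s
    k_max = int(np.ceil(10.0 * tau_rms / t_s))
    k = np.arange(k_max + 1)
    # Average tap powers: beta_k^2 = beta_0^2 * exp(-k T_s / tau_rms), normalized to sum to 1.
    beta0_sq = (1.0 - np.exp(-t_s / tau_rms)) / (1.0 - np.exp(-(k_max + 1) * t_s / tau_rms))
    beta_sq = beta0_sq * np.exp(-k * t_s / tau_rms)
    # Each tap is a zero-mean complex Gaussian -> Rayleigh amplitude, uniform phase.
    taps = (rng.normal(0.0, np.sqrt(beta_sq / 2.0))
            + 1j * rng.normal(0.0, np.sqrt(beta_sq / 2.0)))
    return taps, beta_sq

h, power_profile = ieee80211_taps()
print(len(h), power_profile.sum())   # number of taps, total power ~= 1.0
```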
The two-tiered trellis model developed in this study employs BCJR decoding to maintain acceptable image quality over a wireless fading channel. The combination of VLCs and linear block codes in the transmitter reduces error propagation and decoding complexity compared with the conventional source–channel tandem system that uses separate source and channel coding optimizations. Our experiments show the superior performance of our proposed JSCC scheme with respect to the bandwidth compression ratio and coding ratio k/n, over different signal-to-noise ratios (SNRs). We examined the robustness of our JSCC model over various channels using expurgating techniques. Figure 7 illustrates the average PSNR of the reconstructed image versus the SNR over an AWGN channel for different bandwidth compression ratio values. Each case in Figure 7 was generated by expurgating random block codes for a specific channel SNR and then evaluating the performance of the system parameters on the JPEG image for varying SNR values. Each test case demonstrates the performance of our proposed two-tiered JSCC model under different channel conditions. The results of these tests (1) reveal the performance of our method under channel conditions that differ from those for which the conventional tandem system is most suited and (2) illustrate the robustness of the proposed JSCC to variations in channel quality. Moreover, the tested channel conditions were inferior to the encoding conditions optimized for the BCJR algorithm. Performance under these conditions indicates that our JSCC model is more robust to channel quality and exhibits superior resistance to performance degradation in the channel environment.
Figure 8 depicts the observed gradual improvement in the quality of the reconstructed images before performance peaked as the number of expurgations exceeded a threshold value. Figure 8 displays the results of applying our two-tiered JSCC method over a channel with AWGN. For the Lena image tested, the maximum VLC length in the joint VLC/linear block code structure was 33, and the rate was approximately 0.42 bits per pixel (BPP), encoded in shortened random block codes. We tested SNRs ranging from 2 to 25 dB over the AWGN channel; the gain of the model's modified BCJR algorithm over the conventional tandem system (SNR = 25 dB) was approximately 4–5 dB at a BPP of approximately 0.42 bits/pixel. Our JSCC model demonstrated superior BER and PSNR performance over a wireless channel: with the proposed JSCC system, the BER decreased rapidly as the PSNR increased. Overall, the results demonstrated excellent image robustness in the case of the AWGN channel.
Next, we evaluated the performance of our JSCC model over a slow-fading Rayleigh channel with AWGN, with the channel impulse response randomly sampled from the complex Gaussian distribution for each test image. The system does not consider a time-selective channel. The sampling frequency was $f_s = 3 \times 10^6$ Hz ($T_s = 1/f_s$), and the root mean square delay spread was $\tau_{\mathrm{rms}} = 10^{-7}$ to $10^{-6}$ s. In the slow-fading channel, the channel gain was randomly sampled from the complex Gaussian distribution for each transmitted image and remained constant during the transmission of the entire image. In Figure 9, we plot the performance of the proposed JSCC algorithm over a slow-fading Rayleigh channel as a function of the bandwidth compression ratio for different expurgations of a random code. Our JSCC algorithm allowed us to calculate performance for any bandwidth compression ratio of JPEG images at the average SNR value. As in our AWGN test case, we assumed that the JPEG image was encoded at a rate equal to the capacity of the complex AWGN channel at the average SNR value. When the channel capacity exceeded the transmission rate, the transmitted codewords were decoded reliably through expurgation, and our JSCC model outperformed the conventional tandem model at all SNR and bandwidth compression ratio values. With the expurgating strategy, PSNR decreased as robustness increased, and the results demonstrated reliable robustness when PSNR > 23 dB.
In the experiments, the simulated results were obtained in a specific wireless scenario in which the channel followed the 802.11b channel model and was subject to slow or frequency-selective fading. The robustness of our proposed JSCC model to variations in the average channel SNR in a slow-fading Rayleigh channel is evident in Figure 10. The quality of the reconstructed images remained acceptable in these simulations, although the performance of our JSCC model declined when executed over a static AWGN channel.

5. Conclusions

In this study, we reported the construction of a two-tiered model for transmitting JPEG images over a wireless network. The two tiers comprise a serial VLC trellis and a linear block code trellis. The model exhibits improved robustness when transmitting a JPEG image over AWGN and slow-fading channels compared with conventional source–channel tandem models. Despite its trellis code structure, the model enables decoding through several algorithms. Our JSCC model employs concatenated encoding and expurgation to control errors in its two-tiered arrangement of bit streams according to data relevance. For the proposed scheme, this study used an expurgated trellis instead of the original trellis structure when constructing the JSCC. In transmitting the 512 × 512 JPEG Lena image at a rate of 1.7 bits/pixel over a channel with a bit error probability of $10^{-3}$, the model outperformed the conventional source–channel tandem model by 18 dB. General decoding algorithms optimized for the conventional tandem model experience noticeable declines in performance when the sequences to be transmitted are long. By combining encoding and decoding algorithms in a modified trellis construction, our JSCC model achieved acceptable performance even when transmitting long sequences. Additionally, whereas the tandem model preserves image quality according to the relative importance of image components by expurgating random code sequences alone, our JSCC model also applies the BCJR algorithm for decoding, improving accuracy when decoding long image sequences. However, our experiments also revealed that performance declined as the level of expurgation increased.

Author Contributions

Y.-C.L.: conceptualization, data curation, methodology, and writing—original draft. J.-J.W.: study design, reviewing the manuscript, and writing—original draft. S.-C.Y.: resources, validation, visualization, and review and editing. C.-C.C.: formal analysis, supervision, review and editing, and project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This paper received funding support from the National Science and Technology Council, Taiwan, under grant number NSTC 109-2221-E-167-002-MY3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Hamming, R.W. Error detecting and error correcting codes. Bell Syst. Tech. J. 1950, 29, 147–160. [Google Scholar] [CrossRef]
  2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  3. Peng, Z.; Huang, Y.-F.; Costello, D.J. Turbo codes for image transmission—A Joint channel and source decoding approach. IEEE J. Sel. Areas Commun. 2000, 18, 868–879. [Google Scholar] [CrossRef]
  4. Zhu, G.-C.; Alajaji, F. Joint source-channel turbo coding for binary Markov sources. IEEE Trans. Wirel. Commun. 2006, 5, 1065–1075. [Google Scholar]
  5. Jaspar, X.; Guillemot, C.; Vandendorpe, L. Joint source-channel turbo techniques for discrete-valued sources: From theory to practice. Proc. IEEE 2007, 95, 1345–1361. [Google Scholar] [CrossRef]
  6. Zhai, F.; Eisenberg, Y.; Katsaggelos, A.K. Joint source channel coding for video communications. In Handbook of Image and Video Processing, 2nd ed.; Bovik, A., Ed.; Academic Press: Burlington, ON, Canada, 2005. [Google Scholar]
  7. Balle, J.; Laparra, V.; Simoncelli, E.P. End-to-end optimized image compression. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017; pp. 1–27. [Google Scholar]
  8. Fabregas, A.G.; Martinez, A.; Caire, G. Bit-Interleaved Coded Modulation. Found. Trends Commun. Inf. Theory 2008, 5, 1–153. [Google Scholar] [CrossRef]
  9. Narasimha, M.; Peterson, A. On the Computation of the Discrete Cosine Transform. IEEE Trans. Commun. 1978, 26, 934–936. [Google Scholar] [CrossRef]
  10. Chui, C.K. An Introduction to Wavelets; Academic Press: Cambridge, MA, USA, 1992. [Google Scholar]
  11. Goblick, T. Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inf. Theory 1965, 11, 558–567. [Google Scholar] [CrossRef]
  12. Tung, T.; Gunduz, D. Sparsecast: Hybrid digital-analog wireless image transmission exploiting frequency-domain sparsity. IEEE Commun. Lett. 2018, 22, 2451–2454. [Google Scholar] [CrossRef]
  13. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  14. Chiou, P.T.; Sun, Y.; Young, G.S. A complexity analysis of the JPEG image compression algorithm. In Proceedings of the 9th Computer Science and Electronic Engineering (CEEC), Colchester, UK, 27–29 September 2017; pp. 65–70. [Google Scholar]
  15. Ding, M.; Li, J.; Ma, M.; Fan, X. SNR-adaptive deep joint source-channel coding for wireless image transmission. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, ON, Canada, 6–11 June 2021; pp. 1555–1559. [Google Scholar]
  16. Jankowski, M.; Gündüz, D.; Mikolajczyk, K. Deep joint source-channel coding for wireless image retrieval. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain, 4–8 May 2020; pp. 5070–5074. [Google Scholar]
  17. Huang, Q.; Jiang, M.; Zhao, C. Learning to design constellation for AWGN channel using auto-encoders. In Proceedings of the IEEE International Workshop on Signal Processing Systems, Nanjing, China, 20–23 October 2019; pp. 154–159. [Google Scholar]
  18. Luo, X.; Chen, H.-H.; Guo, Q. Semantic communications: Overview, open issues, and future research directions. IEEE Wirel. Commun. 2022, 29, 210–219. [Google Scholar] [CrossRef]
  19. Chen, X.; Deng, C.; Zhou, B.; Zhang, H.; Yang, G.; Ma, S. High-accuracy CSI feedback with super-resolution network for massive MIMO systems. IEEE Wirel. Commun. Lett. 2022, 11, 141–145. [Google Scholar] [CrossRef]
  20. Lin, S.; Kasami, T.; Fujiwara, T.; Fossorier, M. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  21. Soleymani, M.R.; Gao, Y.; Vilaipornsawai, U. Turbo Coding for Satellite and Wireless Communications; Kluwer Academic: Dordrecht, The Netherlands, 2002. [Google Scholar]
Figure 1. (a) Diagram of the proposed JSCC structure; (b) block diagram of the transmission/receiver system.
Figure 2. (a) Trellis of a (7, 4) Hamming code; (b) visual representation of the connections between maximum state numbers.
Figure 3. Subset trellis of a (7, 4) Hamming code.
Figure 4. Tree representation of the joint source–channel coding of the trellis.
Figure 5. Trellis with combined source–channel coding.
Figure 6. Normalized delay power spectrum recommended by the IEEE for the 802.11b channel model.
Figure 7. Performance of the JSCC scheme on a JPEG image over an AWGN channel.
Figure 8. Performance of the JSCC scheme on a JPEG Lena image over an AWGN channel at various compression ratios.
Figure 9. Performance of the JSCC scheme on a JPEG image over a slow Rayleigh fading channel.
Figure 10. Performance of the JSCC model on a JPEG image over a slow-fading Rayleigh channel at various compression ratios.
