1. Introduction
Recent consumer electronics have been required to provide more functionality to satisfy market demands. This technical trend has caused data to be transmitted over more physical channels at higher data rates; in other words, the number of channels and the corresponding data rates have continuously increased. Inside silicon chips, the number of transistors has increased to support the required functions [1]. In order to fabricate more transistors on a wafer of limited size, the silicon fabrication process has been developed toward narrower transistor geometries [2]. In electromagnetic compatibility (EMC), a signal integrity (SI) analysis provides information on how a signal propagates over a high-speed channel [3]. Hence, the above technical trend makes an SI analysis more important in terms of high-performance system design [4]. Furthermore, a system becomes more complicated with these advancements, and thus an SI analysis of a system also becomes harder [5,6,7].
An SI analysis includes time-domain and frequency-domain approaches to investigate how a signal is degraded during data transmission. The frequency-domain approach uses scattering (S-)parameters such as return loss and insertion loss [8], whereas the time-domain approach uses an eye diagram [9], which overlaps the received ONEs and ZEROs. The insertion loss is defined as the ratio of the transmitted wave to the incident wave. Thus, the amount and the delay of the transmitted energy are represented by the magnitude and the phase of the insertion loss, respectively. The magnitude and phase are given at each frequency of interest, so the frequency-dependent behavior of a channel is captured over the frequency range.
In contrast, an eye diagram graphically shows how reliably the data are received over a channel, as shown in Figure 1. An eye diagram has two important parameters inside its contour: the eye height (EH) and the eye width (EW). The EH shows the voltage margin at the sampling time, and the EW shows the timing margin at a given threshold voltage. The outcomes of both the frequency- and time-domain approaches provide meaningful information about a channel. However, an eye diagram becomes more important for a fixed data rate because it is the outcome of the combination of the frequency-dependent behavior of the channel and the input signal. Furthermore, an eye diagram is more intuitive than the insertion loss due to its representation. If an eye diagram has non-zero EH and EW values, then an eye opening exists in the middle. The eye opening means that no waveform passes through the center of the eye diagram; in other words, the ZEROs and ONEs received at the receiver are clearly distinguishable.
2. Statistical Eye Diagram
An eye diagram has the critical drawback of being time-consuming due to the principle of its acquisition. An eye diagram is an overlapped waveform at a receiver. Thus, the reliability of an eye diagram is determined by how many bits are received; more received bits are always required to avoid a biased result. If the target bit-error rate (BER) is 10^−N, then the required number of bits would be on the order of 10^N. The measured eye diagram might be achievable depending on the memory of the measurement instrument used, such as an oscilloscope. However, a simulated eye diagram is different in terms of calculation and memory. In a simulation, a data bit has multiple points, e.g., 50, 64, or 100, to represent the continuous waveform within a unit interval (UI). Thus, the required number of bits is multiplied by the number of points per UI. Therefore, a simulation requires a larger memory than that needed for eye diagram measurements.
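To make the memory argument concrete, the following short Python sketch estimates the number of bits and the raw waveform storage needed for a brute-force simulated eye diagram; the target BER, the number of points per UI, and the bytes per sample are assumed example values, not figures from the reviewed works.

# Rough memory estimate for a brute-force simulated eye diagram.
# All parameter values below are assumed examples.
target_ber = 1e-12                    # target BER of 10^-N with N = 12
bits_needed = round(1 / target_ber)   # roughly 10^N bits must be simulated
points_per_ui = 64                    # samples representing one unit interval
bytes_per_sample = 8                  # one double-precision voltage value

samples = bits_needed * points_per_ui
memory_bytes = samples * bytes_per_sample

print(f"bits needed      : {bits_needed:.2e}")
print(f"waveform samples : {samples:.2e}")
print(f"waveform memory  : {memory_bytes / 1e12:.0f} TB")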
Another difference between simulated and measured eye diagrams is how they collect waveforms. The measured waveforms are recorded by sampling and analog-to-digital (A-to-D) conversion [10]. The recorded data are accumulated to construct an eye diagram. However, the simulated waveforms are calculated based on the channel response and the input waveforms. The calculation includes consecutive additions in a transient simulation to obtain the inter-symbol interference (ISI) noise [11]. The ISI noise is defined as the sum of the residual responses of the previously received bits, which result from the degradation caused by the RC delay [12]. Each bit may have a value of either ZERO or ONE; thus, the ISI noise is determined by a huge number of possible bit combinations. This leads to the introduction of eye diagram estimation methods into an SI analysis.
As can be seen from Figure 2, a peak distortion analysis (PDA) calculates the inner-most contour of an eye diagram, which is defined as the worst-case contour of the eye diagram [13]. The worst-case contour provides the EH value in amplitude and the EW value in sampling time. Thus, the PDA method has been widely used for an efficient SI analysis. However, the worst-case contour only provides simplified results, which leads to limited usage. In order to provide more information for the SI evaluation, the statistical eye diagram was introduced. The statistical approach provides probability distributions depending on either a sampling time or a sampling voltage. Hence, some postprocessing algorithms are applied to obtain further information, such as the BER values, from a statistical eye diagram. The BER values represent how many errors occur during data transmission. A system-level SI analysis typically includes a BER-based analysis [14]. The statistical approach requires a longer calculation time and more memory during the calculations. When it comes to the calculation, a PDA includes selective additions, whereas the statistical approach includes consecutive convolutions to calculate the PDF. These properties are the critical drawbacks of the current eye diagram estimation methods. Nevertheless, the PDA and the statistical approach are still faster and simpler than a transient simulation.
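The "selective additions" of the PDA can be illustrated with a minimal sketch. Assuming a unipolar NRZ channel whose single-bit response has been sampled once per UI at the intended sampling phase, the worst-case eye height is the main cursor reduced by every pre- and post-cursor taken at its worst polarity; the function and variable names below are hypothetical.

import numpy as np

def pda_worst_case_eye_height(cursors, main_index):
    """Peak distortion analysis for a unipolar NRZ channel.

    cursors    : single-bit response sampled once per UI (volts).
    main_index : index of the main cursor (normally the peak sample).
    """
    c0 = cursors[main_index]                 # main cursor: the degraded ONE
    isi = np.delete(cursors, main_index)     # pre- and post-cursors
    lowest_one = c0 + isi[isi < 0].sum()     # ONE dragged down by negative cursors
    highest_zero = isi[isi > 0].sum()        # ZERO pushed up by positive cursors
    return lowest_one - highest_zero         # equals c0 - sum(|isi|)

# toy cursor values in volts (assumed example)
cursors = np.array([0.02, -0.03, 0.65, 0.20, 0.08, 0.03])
print(pda_worst_case_eye_height(cursors, main_index=2))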
The statistical approach includes the following steps to calculate the probability distributions depending on the sampling time [15]:
Single-bit response (SBR) calculation;
Cursor separation from the SBR;
ISI calculation with the pre- and post-cursors;
Bit PDF calculation with the main cursors;
The above procedures are iterated over the sampling times.
The SBR is the channel response for a single bit, which is the attenuated and widened pulse caused by the parasitic resistance and capacitance of a high-speed channel. Thus, the SBR $V(t)$ is obtained by the following equation:

$$V(t) = \mathcal{F}^{-1}\left\{ H(f)\, X(f) \right\},$$

where $X(f)$ is the spectrum of the single input pulse and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform. The channel response $H(f)$ is the insertion loss $S_{21}(f)$ in the frequency domain. Therefore, the ISI noise caused by a single bit is evaluated with the SBR. The next step is to define the main cursors, which are equivalent to the degraded ONE at the current sampling time. Hence, the main cursors have a length of 1 UI. Typically, the first and last main cursors have the smallest voltage gap compared to the others to avoid a mismatched voltage in the eye diagram. The timing window for the main cursors is shifted until the first and last cursors have the smallest gap within the given range. The timing window should have a length of 1 UI; if not, it violates the definition of an eye diagram. The given range is determined based on the peak value of the SBR: as long as the timing window includes the peak value, the window may be shifted left or right. In other words, the main cursors are defined as follows:

$$c_0(t) = V(t + t_0), \qquad 0 \le t < T,$$

where $T$ is 1 UI and $t_0$ is the start of the timing window that contains the peak of the SBR.

Figure 3 shows the bit PDF based on the main cursors. After defining the main cursors from the SBR, the rest of the waveform is defined as the pre- and post-cursors, respectively:

$$V_{\mathrm{pre}}(t) = V(t),\; t < t_0, \qquad V_{\mathrm{post}}(t) = V(t),\; t \ge t_0 + T.$$

The defined post-cursors are divided into multiple intervals with a length of 1 UI:

$$c_n(t) = V(t + t_0 + nT), \qquad 0 \le t < T,\; n = 1, 2, \ldots, N,$$

where $N$ is a positive integer. Likewise, the pre-cursors are also divided as follows:

$$c_{-m}(t) = V(t + t_0 - mT), \qquad 0 \le t < T,\; m = 1, 2, \ldots, M.$$

The probability density function (PDF) of each cursor is defined as follows:

$$p_n(x \mid t) = P_0\,\delta(x) + P_1\,\delta\!\left(x - c_n(t)\right),$$

where $n \neq 0$ and $P_0$ and $P_1$ are the probabilities of a ZERO and a ONE (0.5 each for random data). Consecutive convolutions are applied to the above pre- and post-cursor PDFs to calculate the probability density function of the ISI. The PDF of the ISI is defined as the result of the convolution of the pre- and post-cursor PDFs:

$$p_{\mathrm{ISI}}(x \mid t) = p_{-M}(x \mid t) \ast \cdots \ast p_{-1}(x \mid t) \ast p_{1}(x \mid t) \ast \cdots \ast p_{N}(x \mid t).$$

Therefore, the bit PDF at the sampling time $t$ is given by the following:

$$p_{\mathrm{bit}}(x \mid t) = p_{\mathrm{ISI}}(x \mid t) \ast \left[ P_0\,\delta(x) + P_1\,\delta\!\left(x - c_0(t)\right) \right].$$

In conclusion, the statistical eye diagram is a union set of the above bit PDFs depending on the sampling time $t$:

$$\mathrm{Eye}(x, t) = \bigcup_{0 \le t < T} p_{\mathrm{bit}}(x \mid t).$$
The statistical eye diagram includes the consecutive convolutions instead of the additions. Thus, an efficient calculation is achievable compared to a transient simulation.
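The procedure of this section can be summarized in a short numerical sketch. The following Python code is only a minimal illustration under simplifying assumptions (a unipolar, non-negative SBR, equiprobable bits, and a voltage axis starting at 0 V); it is not the implementation used in the reviewed works. It builds the two-point PDF of every pre- and post-cursor, convolves them consecutively into the ISI PDF, and then convolves the result with the main-cursor PDF to obtain the bit PDF at one sampling phase, i.e., one column of the statistical eye diagram.

import numpy as np

def bit_pdf_at_phase(sbr, samples_per_ui, phase, v_axis, p_one=0.5):
    """Bit PDF at one sampling phase (one column of the statistical eye)."""
    dv = v_axis[1] - v_axis[0]

    def delta(v):
        # discrete delta located at voltage v, normalized to integrate to 1
        pdf = np.zeros_like(v_axis)
        pdf[int(round(v / dv))] = 1.0 / dv
        return pdf

    cursors = sbr[phase::samples_per_ui]      # one cursor value per UI
    main = int(np.argmax(cursors))            # main cursor: the peak sample

    # ISI PDF: consecutive convolutions of every pre-/post-cursor PDF
    isi_pdf = delta(0.0)
    for k, c in enumerate(cursors):
        if k == main:
            continue
        cursor_pdf = (1 - p_one) * delta(0.0) + p_one * delta(c)
        isi_pdf = np.convolve(isi_pdf, cursor_pdf)[:len(v_axis)] * dv

    # bit PDF: ISI PDF convolved with the main-cursor (ONE/ZERO) PDF
    main_pdf = (1 - p_one) * delta(0.0) + p_one * delta(cursors[main])
    return np.convolve(isi_pdf, main_pdf)[:len(v_axis)] * dv

# toy usage with an assumed RC-like single-bit response (not from the papers)
spu = 32                                                  # samples per UI
t = np.arange(8 * spu, dtype=float)
sbr = np.clip(np.exp(-(t - spu) / (0.8 * spu)) - np.exp(-(t - spu) / (0.25 * spu)), 0, None)
sbr /= sbr.max()
v_axis = np.linspace(0.0, 2.0, 801)
eye_column = bit_pdf_at_phase(sbr, spu, phase=spu // 2, v_axis=v_axis)

Sweeping the phase argument over all samples in the UI and stacking the resulting columns yields the full statistical eye diagram.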
3. System-Level Statistical Eye Diagram
A high-speed system includes several techniques to improve the SI, BER, and electromagnetic interference (EMI) performances. Even though the objectives of these techniques are different, all of them affect an eye diagram in different ways. In other words, a system-level eye diagram is different from that of a bare channel. Hence, previous works on the system-level statistical approach were proposed. This section introduces how the above techniques are considered in a statistical eye diagram.
3.1. Encoding
High-speed receiver buffers include clock and data recovery (CDR) circuits [16,17]. The objective of the CDR is to recover the data and clock signals from the received signal. The reliability of the CDR is proportional to the number of bit transitions, owing to the property of the receiver circuit and because the threshold voltage in the receiver is determined based on the received waveform. In other words, consecutive bits such as 000…000 or 111…111 cause the CDR to become unreliable [18]. Encoding is a mapping process between different sequence groups. The 8B/10B [19] and transition-minimized differential signaling (TMDS) [20] encodings have been used in the industry. The 8B/10B encoding is used to increase the number of bit transitions for a reliable receiver. In contrast, TMDS encoding decreases the number of bit transitions to mitigate the crosstalk noise between channels, as can be seen in Figure 4. The 8B/10B encoding includes two look-up tables (LUTs) that convert the 8-bit sequences to 10-bit sequences with more bit transitions. Alternatively, TMDS encodes an 8-bit sequence into a 10-bit sequence with an algorithm.
The high-speed multimedia connector has multiple data channels in parallel to meet the required data throughput. When the data rate is high enough, the parasitic capacitance and inductance between adjacent channels create coupled energy. The energy coupling between channels is defined as crosstalk. In order to mitigate the coupled energy, TMDS encoding produces fewer bit transitions. TMDS encoding uses the TMDS mapping algorithm to convert the 8-bit sequences into 10-bit sequences before data transmission. TMDS encoding becomes more beneficial for advanced high-performance systems: because the intensity of the magnetic and electric fields increases at higher frequencies, the amount of crosstalk is proportional to the data rate. Thus, crosstalk mitigation becomes essential. Advanced high-performance systems also have more data channels, which means that an analysis of the crosstalk would be complicated. In this case, TMDS encoding reduces the crosstalk noise, which may simplify the analysis.
In order to apply the statistical approach to an encoded channel, stochastic models for encoders were introduced. A stochastic model represents the behavior of an encoder in terms of probability. The statistical approach commonly uses the SBR, which is the simplest channel response. As both encodings focus on the bit transitions, a statistical eye diagram has to be constructed with the bit transitions. Thus, the proposed method was based on the double-edge response (DER) [21]. The DER approach obtains a statistical eye diagram with the rising and falling responses [22]. The difference between the SBR and the DER approaches is the consideration of the asymmetry between the N/P-type metal-oxide-semiconductor (N/PMOS) devices. The NMOS and PMOS have independent electrical characteristics; thus, the rising and falling responses are always asymmetric in the waveform. Therefore, the DER approach constructs an eye diagram with separate rising and falling responses to consider the above asymmetry. Moreover, the DER approach naturally handles the bit transitions in the eye diagram calculation. To reflect the effect of the encodings onto an eye diagram, the SBR method would have to be converted in terms of bit transitions, and this conversion may limit the probability calculation. Therefore, the DER method was used in [21].
The assumption in the conventional estimation method is that ONE and ZERO have the same probability. However, this assumption is no longer valid for encoded channels: as a result of the encodings, the probability of the bit transitions might be lower or higher than 0.5. The stochastic model captures the resultant bit probabilities. The stochastic model for 8B/10B encoding was obtained by counting the number of ONEs in the LUTs, and the calculated probabilities were defined as the stochastic model for 8B/10B encoding. Likewise, the stochastic model for TMDS encoding was obtained by calculating the bit-transition probabilities using the TMDS algorithm. Hence, the TMDS algorithm is replaced by the corresponding stochastic model. Therefore, a statistical eye diagram for the TMDS-encoded channel is obtained by the above DER method and the stochastic model.
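As a minimal sketch of how such a stochastic model can be extracted, the code below counts the bit transitions inside every word of a codebook, assuming all input words are equally likely; the toy 2-bit-to-3-bit codebook is purely illustrative and is not the 8B/10B or TMDS table.

def transition_probability(codebook):
    """Average bit-transition probability of an encoder, estimated from its
    codebook (input value -> encoded word given as a '0'/'1' string)."""
    transitions = 0
    gaps = 0
    for word in codebook.values():
        transitions += sum(a != b for a, b in zip(word, word[1:]))
        gaps += len(word) - 1
    return transitions / gaps

# toy 2-bit -> 3-bit codebook (illustrative only)
toy_codebook = {0b00: "010", 0b01: "011", 0b10: "101", 0b11: "100"}
print(transition_probability(toy_codebook))   # 0.75 for this toy table

The resulting probability replaces the equiprobable-transition assumption when the DER-based statistical eye diagram is constructed.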
Figure 5 shows the statistical eye diagrams depending on the encoders. In an eye diagram, the transition area corresponds to the rising or falling region, and the non-transition area corresponds to the top and bottom regions. The statistical eye diagram without any encoding had similar probabilities for the transition and non-transition areas. However, in the 8B/10B-encoded statistical eye diagram, the non-transition area had a lower probability than the transition area. In contrast, the TMDS-encoded eye diagram shows that the transition area has a lower probability than the non-transition area. Therefore, the proposed method with the stochastic model for the encoder successfully shows how the encoder is interpreted in the statistical eye diagram.
3.2. Equalizer
Equalizers compensate for the electrical degradation in either the frequency or time domain [23,24,25,26]. The frequency-domain equalizer includes the continuous-time linear equalizer (CTLE). The time-domain equalizers include the pre-/de-emphasis and the decision feedback equalizer (DFE), as shown in Figure 6. The CTLE either amplifies the high-frequency energies or attenuates the low-frequency energies to make the channel response as flat as possible [27]. Typically, the CTLE is placed just before the receiver to directly compensate for the channel loss. Likewise, a pre-emphasis boosts the high-frequency energies and a de-emphasis attenuates the low-frequency energies in the time domain. Unlike the CTLE, the pre-/de-emphasis is placed at the transmitter. In other words, the CTLE mitigates the channel loss after the channel, and the pre-/de-emphasis compensates for the channel loss before the channel. The DFE is a time-domain equalizer that mitigates the loss after the channel [28,29,30,31]. The DFE makes a bit decision based on the received waveform, LOW or HIGH in a binary system, and then mitigates the expected ISI noise based on the previous bits. As discussed earlier, the ISI noise caused by parasitic resistance and capacitance makes the channel response attenuated and widened. Thus, if the received bit is equal to ONE, the next bit experiences the ISI noise created by this previous bit. The DFE therefore adjusts the currently received waveform by its tap coefficients to cancel the expected ISI noise.
Because the DFE operates in a digital manner, the effect of the DFE on an eye diagram has to be interpreted in a probability domain for a statistical eye diagram. The statistical eye diagrams are based on the equalized SBR. The conventional statistical approach is applied after the equalizers are applied to the SBR. That is, the equalized SBR replaces the SBR in the statistical approach. The probability distribution in the statistical eye diagram is determined by the SBR and the consecutive convolutions. Therefore, the equalizer only modifies the input of the statistical approach, not the algorithm.
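As a hedged behavioral sketch of this "equalized SBR" idea (the tap values are assumed examples, and the simplified models below are not the circuit-level equalizers of the cited works), the following functions modify a sampled SBR with a two-tap transmit emphasis FIR and with an idealized DFE that cancels the first post-cursors:

import numpy as np

def tx_emphasis_sbr(sbr, samples_per_ui, tap=-0.25):
    """Two-tap transmit emphasis: y[n] = x[n] + tap * x[n - UI].
    A negative tap boosts the high-frequency content relative to the
    low-frequency content of the pulse (assumed example value)."""
    delayed = np.concatenate([np.zeros(samples_per_ui), sbr[:-samples_per_ui]])
    return sbr + tap * delayed

def dfe_equalized_sbr(sbr, samples_per_ui, main_index, taps=(0.2, 0.1)):
    """Idealized DFE: subtract a constant correction over each UI that follows
    the sampling point, one correction per feedback tap (assumed values)."""
    out = sbr.copy()
    for k, w in enumerate(taps, start=1):
        start = main_index + k * samples_per_ui
        out[start:start + samples_per_ui] -= w * sbr[main_index]
    return out

Either equalized SBR is then fed to the statistical procedure of Section 2 unchanged, which is exactly the point made above: the equalizer modifies the input, not the algorithm.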
Figure 6 shows the equalized SBR depending on the equalizers. By definition, the SBR equalized by the pre-emphasis has a boosted channel response at the rising area [32]. Thus, the resultant waveform has boosted high-frequency energies compared to the non-equalized one. Likewise, the de-emphasis has a dip at the falling area to mitigate the expected ISI noise. The CTLE also amplifies the high-frequency energies. The difference between the pre-emphasis and the CTLE is the amplifying area in the waveform: the pre-emphasis only locally amplifies the high-frequency signal at the rising area, whereas the CTLE amplifies the high-frequency signal over the entire waveform.
Figure 7 shows the statistical eye diagrams depending on the equalizers [32]. Because the color of a statistical eye diagram represents the probability, how each equalizer works on the eye diagram can be identified. The equalized statistical eye diagram has a higher probability at the centers of ONE and ZERO. Because the DFE equalizes the waveform based on the recovered data, the timing of the feedback signal determines which area has the higher probability. The pre- and de-emphasis make the probability higher after and before the center of the eye diagram, respectively: the pre-emphasis causes an additional peak in the waveform, so the voltage range right after the equalization becomes narrow. As mentioned earlier, the difference between the pre-emphasis and the CTLE is the range of equalization. Therefore, the statistical eye diagram for the CTLE has a higher level for the ONE as a result of the equalization.
3.3. Multi-Level Signaling
A binary number system expresses data with ZEROs and ONEs; thus, each bit has a LOW or HIGH logic value. Non-return-to-zero (NRZ) signaling is used by binary systems in data transmission. However, NRZ signaling is inefficient in terms of bandwidth because only a single bit is transmitted per signal pulse. In order to overcome this inefficiency, multi-level signaling has been widely used [33]. Multi-level signaling transmits the data with multiple bits per signal pulse, e.g., the two-bit symbol 01, as shown in Figure 8. The advantage of multi-level signaling is that it decreases the bandwidth needed for the same data throughput. In other words, the data rate can be lowered if multiple bits are transmitted per symbol. The decreased data rate is determined by how many bits are transmitted per signal. Pulse amplitude modulation (PAM) with 4 levels (PAM-4) transmits two bits per signal. The relationship between the number of bits per signal $B$ and the number of levels $L$ is as follows:

$$L = 2^{B}.$$

The above relationship also shows that the required bandwidth for signaling decreases as the number of levels in the multi-level signaling increases. Therefore, for the same data rate, the required bandwidth can be decreased in the case of multi-level signaling.
Based on the above SBRs, the corresponding statistical eye diagrams can be constructed by consecutive convolutions. Multi-level signaling uses a scaled input pulse; thus, the eye diagram is constructed based on the scaled channel responses [34]. Multi-level signaling has no effect on the probability in the statistical approach. Because only the SBR is modified by multi-level signaling, the statistical approach itself is used as it is; that is, the statistical approach simply has a modified input in this case. Another factor that has to be considered in multi-level signaling is the conversion loss, which is caused by scaling the input pulse. Multi-level signaling uses multiple levels to represent the data with a pulse; thus, the spacing between adjacent levels is reduced (to one third of the NRZ swing in PAM-4, for example). That is, each pulse carries a smaller amount of energy compared to NRZ signaling. The loss caused by this scaling in multi-level signaling is defined as the conversion loss. Therefore, in the case of multi-level signaling, the conversion loss always has to be compared with the insertion loss. In other words, when the channel is less lossy, multi-level signaling may ironically degrade the data transmission performance. This is another reason a statistical eye diagram is needed for multi-level signaling in high-speed systems [35].
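The trade-off between the conversion loss and the insertion loss can be checked with a rough back-of-the-envelope sketch; the per-gigahertz channel-loss slope and the data rate below are assumed example numbers, not values from the reviewed works.

import math

def pam_conversion_loss_db(levels):
    """Amplitude penalty of L-level PAM relative to NRZ: 20*log10(L - 1)."""
    return 20 * math.log10(levels - 1)

loss_slope_db_per_ghz = 1.2      # assumed channel insertion-loss slope
data_rate_gbps = 32              # assumed target data rate

nyquist_nrz = data_rate_gbps / 2       # GHz
nyquist_pam4 = data_rate_gbps / 4      # symbol rate halved -> Nyquist halved

il_benefit = loss_slope_db_per_ghz * (nyquist_nrz - nyquist_pam4)
cl_penalty = pam_conversion_loss_db(4)      # about 9.5 dB for PAM-4

print(f"insertion-loss benefit : {il_benefit:.1f} dB")
print(f"conversion-loss penalty: {cl_penalty:.1f} dB")
print("PAM-4 favorable" if il_benefit > cl_penalty else "NRZ favorable")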
Figure 9 shows the statistical eye diagrams for multi-level signaling with different probabilities [36], which means each level has a different probability. Gray coding is used to assign the symbols so as to decrease the bit error rate in the case of an erroneous symbol. In other words, the symbol error rate does not need to be the same as the BER. Thus, Gray coding maps adjacent levels to symbols with a short Hamming distance. As a result of the Gray coding, each logic level might have a different probability. The authors in [36] proposed a statistical eye diagram with logic probabilities to consider the effect of the encoding combined with multi-level signaling.
3.4. Scrambling
Scrambling is an electromagnetic interference (EMI) suppression technique in electromagnetic compatibility (EMC) [37]. When periodic data patterns are transmitted over a channel, the energy at certain frequencies is concentrated and starts to radiate. As a result, the radiated energy affects other electronic devices in the form of noise. Scrambling mixes the data with the output of a linear feedback shift register (LFSR) [38]. The LFSR is a shift register whose feedback path is defined by a primitive polynomial over a finite field. Because the LFSR generates a pseudo-random binary sequence (PRBS), it has been widely used as a random source. However, the generated sequences are not purely random, and it is not guaranteed that the bits will have a probability of 0.5. Therefore, a statistical eye diagram for scrambling has a different probability profile compared to that of the non-scrambled data. Table 1 shows how the bit probabilities change when scrambling is applied [39]. Before scrambling, the data have different probabilities for each bit index; after scrambling, the data have almost the same probability for every bit index. In other words, the periodic pattern can be removed by scrambling to make the bit probabilities nearly equal.
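A minimal LFSR scrambler sketch is shown below. The PRBS-7 polynomial x^7 + x^6 + 1 is a common choice and is used here only as an example; the seed value and function names are assumptions.

def lfsr_prbs7(seed=0x7F, nbits=32):
    """PRBS-7 generator: 7-bit Fibonacci LFSR with polynomial x^7 + x^6 + 1."""
    state = seed & 0x7F
    out = []
    for _ in range(nbits):
        out.append((state >> 6) & 1)                   # bit about to be shifted out
        feedback = ((state >> 6) ^ (state >> 5)) & 1   # taps at x^7 and x^6
        state = ((state << 1) | feedback) & 0x7F
    return out

def scramble(data_bits, seed=0x7F):
    """Scramble a bit sequence by XORing it with the LFSR output."""
    prbs = lfsr_prbs7(seed, nbits=len(data_bits))
    return [d ^ p for d, p in zip(data_bits, prbs)]

# a periodic pattern whose energy would concentrate at a single frequency
pattern = [1, 0] * 16
print(scramble(pattern))

The descrambler at the receiver applies the same LFSR output again, since XOR is its own inverse.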
For the purpose of EMI suppression, the scrambled data bits have probabilities that differ from those of the original data. The proposed method in [39] reflects the effect of scrambling by counting the number of ONEs depending on whether scrambling was used. For this, the data bits are divided in accordance with the number of registers in the LFSR. This splitting process is equivalent to symbolizing the given bits. As a result of the symbolization, the bit index can be defined as shown in Table 1. Then, the probability of each bit index was determined to be the input for the statistical approach. Therefore, the statistical approach takes the modified bit probabilities as its input.
Figure 10 compares the statistical eye diagrams with and without scrambling [39]. The principle of scrambling is complicated; however, its effect on the eye diagram is simple. The probability distribution becomes equalized by the scrambling because the statistical eye diagram only reflects the bit probabilities and the channel responses.
3.5. Error-Correction Codes (ECCs)
Error-correction codes (ECCs) modify the bit sequence to correct errors in the received bits in a mathematical manner [40,41]. The ECC encodes the data bits and generates a codeword [42,43,44]. The codewords, instead of the raw bits, are transmitted over the channel, and the receiver corrects the erroneous bits during decoding. The Bose–Chaudhuri–Hocquenghem (BCH) code corrects a single bit error in a word [45], and the Reed–Solomon (RS) code corrects multiple bit errors [46]. Their difference is due to the Galois field which defines the generator polynomial. To correctly estimate the statistical eye diagram for an ECC-encoded channel, generator-polynomial-based statistical eye diagrams were introduced [15].
As discussed, the ECC is based on the generator polynomial over the Galois field. Thus, the codeword transmitted over the channel is determined by the generator polynomial; that is, the bit probability is determined by the generator polynomial in the case of the ECC. The bit probabilities depending on the bit index are calculated based on the codewords for the BCH and RS codes, and this probability calculation therefore includes the effect of the generator polynomial. Encoding with the generator polynomial can be regarded as a linear combination of the input data bits; thus, the ECC-encoded bits are the output of a linear summation of the input bits defined by the generator polynomial.
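The "linear summation defined by the generator polynomial" can be sketched with plain GF(2) arithmetic. The systematic (7,4) cyclic code with g(x) = x^3 + x + 1 below is only an illustrative toy, not a generator polynomial from the reviewed BCH/RS configurations.

def gf2_mod(dividend, divisor):
    """Remainder of polynomial division over GF(2); polynomials as integers."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift      # XOR is addition/subtraction in GF(2)
    return dividend

def systematic_encode(msg_bits, gen_poly, parity_len):
    """Systematic cyclic encoding: codeword = message bits followed by the
    remainder of (message(x) * x^parity_len) divided by g(x)."""
    msg = 0
    for b in msg_bits:
        msg = (msg << 1) | b
    remainder = gf2_mod(msg << parity_len, gen_poly)
    code = (msg << parity_len) | remainder
    n = len(msg_bits) + parity_len
    return [(code >> (n - 1 - i)) & 1 for i in range(n)]

# toy (7,4) code with generator g(x) = x^3 + x + 1 (binary 1011)
print(systematic_encode([1, 0, 1, 1], gen_poly=0b1011, parity_len=3))

Counting the ONEs at each bit index over all such codewords yields the per-index bit probabilities used as the input of the statistical approach.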
As shown in Figure 11, depending on the ECC, the corresponding statistical eye diagram has a different probability distribution [15]. The difference between the BCH and RS codes is the unit of encoding and decoding: the BCH code encodes the data bits in a bit-wise fashion, whereas the RS code encodes the data bits in a symbol-wise fashion. The symbol-wise encoding means that the errors are corrected for a given symbol. Thus, it affects the bit probability distribution as a result of the ECC. The BCH and RS codes improve the BER performance as a result of the decoding. The encoded data also improve the SI performance in the case of the RS code. As can be seen in Figure 11b, the RS-encoded statistical eye diagram has a higher probability for ZERO, which means that the number of ZEROs is much larger than the number of ONEs. These ZERO-biased bits provide additional advantages: mitigated crosstalk noise and suppressed radiated electromagnetic interference (EMI) noise. The decreased number of rising and falling responses corresponds to a smaller number of bit transitions, as discussed in the case of TMDS encoding. Likewise, each bit transition induces high-frequency radiation. Thus, the RS code also suppresses the radiated EMI during data transmission.
4. Discussion
The previous works introduced above proposed statistical approaches to system-level eye diagrams. Studies on system-level statistical SI analyses still have to be pursued in the future to achieve the desired system performance evaluation in statistical and practical manners. The objective of a statistical SI analysis is to accurately evaluate a system in an efficient manner. The transient circuit simulation definitely has limitations in terms of calculation and memory. Thus, a system-level analysis based on a circuit simulation is being replaced by a statistical SI analysis. However, a statistical SI analysis has a discrepancy due to the limited information about each system, and this lack of information causes an impractical SI analysis. Therefore, the components in high-speed systems have to be interpreted so that they can be integrated into a statistical eye diagram. In order to reflect the effect of the components in a statistical eye diagram, each component has to be modeled in terms of probability. Although this article reviews several previous studies, the technical trend drives the need to equip systems with more techniques to improve the SI and BER performances. Thus, as systems evolve, further studies on statistical approaches to high-speed systems have to be conducted.
The approaches introduced here describe how each component of a high-speed system can be incorporated into a statistical approach. Typically, no high-speed system uses a single component to improve the SI performance; namely, all systems include several techniques. In other words, the above methods have to be combined for practical purposes. The combination of the techniques used is determined by the system designer. For a system consisting of 8B/10B encoding and equalizers, the equalized SBR has to be applied to the statistical approach for 8B/10B encoding. If the system includes scrambling and an ECC, then the ECC has to be considered first: because the purpose of the scrambling is to suppress the radiated EMI, the effects caused by the ECC have to be reflected first. Therefore, the priority of the high-speed components in the statistical approach can be defined as described above.
A statistical SI analysis can be utilized in a BER analysis if post-processing is applied. A BER analysis is typically performed with a shmoo plot [47]. The shmoo plot shows the BER values depending on the system conditions, such as the supply voltage, threshold voltage, and operating frequency, and it has been widely used in the silicon industry. Because of the statistical characteristic of the shmoo plot, any of the probability density functions (PDFs) are applicable. Typical parameters such as the process, voltage, and temperature (PVT) variations can also be analyzed in a statistical SI analysis. In other words, a statistical SI analysis is expandable depending on the post-processing; thus, it is valuable in terms of expandability. The above benefits show the importance of a statistical approach to an SI analysis.
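One example of such post-processing is sketched below: given the conditional received-voltage PDFs for a transmitted ONE and ZERO at a sampling phase (as produced by the statistical approach of Section 2), the BER at a decision threshold is the probability mass on the wrong side of that threshold. The function and argument names are hypothetical.

import numpy as np

def ber_at_threshold(v_axis, pdf_one, pdf_zero, v_th, p_one=0.5):
    """BER at one sampling phase from the conditional voltage PDFs."""
    dv = v_axis[1] - v_axis[0]
    miss_one = np.sum(pdf_one[v_axis < v_th]) * dv     # ONE decided as ZERO
    miss_zero = np.sum(pdf_zero[v_axis >= v_th]) * dv  # ZERO decided as ONE
    return p_one * miss_one + (1 - p_one) * miss_zero

# sweeping v_th and the sampling phase over the statistical eye produces the
# shmoo-style BER map described above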
5. Conclusions
This article reviews the previous works on statistical approaches at the system level. Even though the eye diagram is a critical metric in an SI analysis, its time-consuming nature makes an SI analysis inefficient. Furthermore, recent high-speed systems include several techniques to enhance the SI performance, which makes an SI analysis more complicated than before. For these reasons, statistical approaches for high-speed systems have been introduced. This paper reviews not only the basics of a statistical eye diagram, but also its applications: encoding, equalizers, multi-level signaling, scrambling, and the ECC. The stochastic models for encoding were introduced for 8B/10B and TMDS. The equalizers, namely the CTLE, the DFE, and the pre-/de-emphasis, are addressed; the obtained equalized SBRs were converted to the statistical eye diagram in the proposed method. Multi-level signaling has been used to increase the data throughput within a limited bandwidth. However, it has the critical drawback of conversion loss; hence, the insertion loss and the conversion loss have to be compared before applying multi-level signaling, and the scaled channel responses were converted to the statistical eye diagram to compare the above losses. Scrambling has been used to suppress the radiated EMI noise. As a result of the scrambling, the bit probabilities are changed, which also affects the eye diagram; thus, the statistical eye diagram for scrambling was introduced. The ECC has been used at an upper layer of the hierarchy for data integrity. It also affects the eye diagram by encoding the data bits, especially the RS code, which lowers the probability of ONE and correspondingly lowers the probability of the upper area of the statistical eye diagram. In conclusion, this paper thoroughly reviews the previous works on statistical eye diagrams at the system level.