1. Introduction
Because of its dependability and efficiency, many standards organizations have acknowledged and embraced multi-carrier (MC) transmission [1] for wireline and wireless systems [2,3]. Key advantages of this method include its enhanced capacity to reduce inter-symbol interference (ISI) and its resistance to single-frequency interference. MC is a popular choice for emerging applications such as wireless local area networks (WLANs) [4] and power line communications (PLC) [5] thanks to its capacity to withstand a range of channel impairments, including impulse noise and frequency selectivity. The fundamental idea behind the MC protocol is to use complex exponentials as information carriers to break the spectrum down into a number of orthogonal narrowband subchannels [6]. The two most frequently used MC techniques are discrete multi-tone (DMT) [7] for wireline systems and orthogonal frequency-division multiplexing (OFDM) [8], which is mostly utilized in wireless applications.
Recently, numerous advanced MC techniques based on OFDM have been explored to provide enhanced energy and spectral efficiency [9,10]. Among these, OFDM employing index modulation (OFDM-IM) stands out as a promising approach. OFDM-IM is a viable alternative for next-generation wireless communication that transmits data more efficiently than conventional OFDM. Index modulation (IM) encodes information not only in data symbols but also in the indices of active elements, enabling additional data transmission without requiring extra bandwidth [11]. This technique enhances data throughput as well as resilience against frequency-selective fading and interference by leveraging diverse subcarriers and indices. Moreover, the selective activation of subcarriers supports energy-efficient operation, making it suitable for energy-conscious applications [12]. The ability of OFDM-IM to mitigate both inter-symbol and inter-carrier interference while balancing spectral efficiency, energy efficiency, and system complexity makes it a versatile choice for modern wireless networks [13].
Several works have been conducted on OFDM-IM [14,15]. In [16], a noncoherent OFDM-IM system based on maximum likelihood (ML) detection was proposed in which the information bits were transmitted solely through the indices of active subcarriers. Low-complexity near-optimal detectors for OFDM-IM were introduced in [11] to address the computational burden of ML detection, which is intensive due to the presence of index bits. The log-likelihood ratio (LLR) detector was introduced in [17]; although it achieves near-ML performance, it requires knowledge of the incoming signal's noise power spectral density, which is a drawback. A low-complexity OFDM-IM detector that encodes every potential subcarrier activation pattern (SAP) was presented in [18]; by using every potential SAP to transmit data, this approach avoids the possibility of incorrect SAP detection. In [19], a greedy detector (GD) based on energy detection was proposed to estimate OFDM-IM performance with significantly lower complexity, although it does not achieve near-optimal performance.
Reliable data transmission in OFDM-IM relies on accurate detection, as it ensures proper decoding of both index and data symbols, an improved bit error rate (BER), resilience to channel impairments, and preservation of communication quality [20]. Deep learning (DL) excels at handling high-dimensional complex data, adapting to varying conditions, and performing end-to-end optimization, making it ideal for efficient signal detection and data processing in communication systems [21,22,23]. Thus, DL is a promising approach for OFDM-IM systems. Various DL-based strategies have been explored to simplify OFDM-IM implementations while achieving near-optimal performance [24,25,26]. A deep neural network (DNN)-based OFDM-IM detector called DeepIM was proposed in [27]; the authors investigated its performance using both rectified linear unit (ReLU) and hyperbolic tangent (Tanh) activation functions. A low-complexity dual-mode OFDM-IM system was proposed in [28], which employed a DL model known as DeepDM, constructed from a combined DNN and convolutional neural network (CNN), to detect carrier bits and index bits independently. The authors of [29] proposed an index-bit detection neural network for OFDM-IM that incorporated fully connected layers and an inception module to effectively manage different modulation settings. In [30], the authors presented a DL-based index bit estimator (IE-DNN) for active LED index detection in visible-light communication systems; by employing fully connected or convolutional layers, IE-DNN significantly reduces the index error rate (IER) and BER with acceptable complexity. In [31], the authors presented a CNN-based detection framework for OFDM-IM systems; this framework uses convolutional layers to effectively identify indices by converting incoming symbols to polar coordinates, creating 2D matrices. In [32], the authors proposed a multiple-input multiple-output OFDM-IM system using IMNet, a non-iterative detector whose design combines two subnets with the conventional least squares method to recover the transmitted signal. In [33], a transformer-based detector called TransIM was proposed; however, the transformer structure adds considerable complexity. In an earlier work [34], we presented a long short-term memory (LSTM)-based detector for OFDM-IM systems. Although that study mostly concentrated on perfect channel conditions, we also assessed BER performance under uncertain channel state information (CSI) settings. Interestingly, under imperfect CSI conditions, our previous detector outperformed the ML, GD, and DeepIM detectors at higher SNR levels while showing comparable performance at lower SNR levels.
CNN-based models can effectively learn and extract features from received signals [35]. Their ability to capture spatial hierarchies enhances the detection of active subcarriers, enabling more accurate index identification in OFDM-IM systems. The translation invariance of CNNs ensures consistent performance across varying signal positions, while their parameter efficiency reduces the risk of overfitting [31]. Furthermore, by incorporating dependencies from both past and future contexts, bidirectional LSTM (Bi-LSTM) models enhance sequential data processing. They address the vanishing gradient issue, enabling effective learning of long-range relationships, and provide parallel processing, which increases training efficiency [36,37]. In this paper, we propose a hybrid DL (HDL)-based OFDM-IM system designed to operate under uncertain CSI conditions. The proposed HDL detector integrates a one-dimensional CNN (1D-CNN) with a Bi-LSTM model. The 1D-CNN extracts spatial information from the received signal, focusing on patterns associated with the activated subcarriers by analyzing the received symbols in the OFDM-IM system. This step enhances the model's capacity to differentiate between various indices in the input data. The Bi-LSTM then processes these features, capturing temporal dependencies in the signal by considering both past and future information. This combined approach leverages spatial and temporal insights, significantly improving the OFDM-IM system's BER performance as well as its adaptability to signal variations, resulting in more accurate index detection. We evaluate the BER performance of the proposed model across different signal-to-noise ratios (SNRs) for the OFDM-IM system under imperfect CSI conditions. Additionally, we assess the BER for various optimizers and equalizers. We determine the throughput and spectral efficiency (SE) to further illustrate the resilience of the proposed HDL-based OFDM-IM system. Our simulation results demonstrate that the proposed model outperforms traditional detectors as well as other DL-based detectors. The key contributions of this paper are summarized as follows:
We introduce an HDL-based OFDM-IM system that operates under uncertain CSI conditions. This HDL-based detector integrates a 1D-CNN model with a Bi-LSTM model; the 1D-CNN model captures spatial information from the received signal, while the Bi-LSTM model processes these features. This combined approach improves detection performance even in the presence of CSI uncertainty.
The channel data and received signal are preprocessed using OFDM-IM domain knowledge before being fed into the HDL model. This preprocessing step enhances the HDL-based detector’s accuracy in detecting active subcarrier indices. Additionally, we apply several different equalizers to the received signal and evaluate their comparative performance.
We assess the BER performance of the proposed HDL-based detector at various SNR levels and explore its performance with different optimizers. Additionally, we calculate the SE and throughput of the model to further demonstrate its effectiveness. The results indicate that the proposed HDL detector significantly enhances detection performance in OFDM-IM systems.
The remainder of this paper is organized as follows: the system model is introduced in Section 2; a thorough discussion of the proposed model, including details on the offline training and online testing protocols, is provided in Section 3; the simulation results are presented in Section 4 and the complexity analysis in Section 5; finally, conclusions are presented in Section 6.
2. System Model
OFDM-IM is a sophisticated modulation technique that extends the traditional OFDM architecture. Unlike conventional OFDM, which utilizes all subcarriers to carry data, OFDM-IM conveys additional information by using both data symbols and the indices of active subcarriers. By selectively activating a subset of subcarriers, OFDM-IM enhances spectral efficiency and improves the system's resilience against interference and fading, leading to better BER performance. The signal transmission process of an OFDM-IM system is illustrated in Figure 1.
There are N subcarriers in each of the S sections that make up the entire transmitted signal bandwidth. Therefore, the total number of transmitted subcarriers in this OFDM-IM system is $N_T$, where $N_T = S \times N$. Each section's signal processing is independent from the others and operates separately. Of the N subcarriers, only A are active in each section, while the remaining $N - A$ subcarriers are deactivated (zero-padded). A total of m bits is transmitted per section in each transmission. These m bits consist of $m_1$ bits sent by the modulated symbols and $m_2$ bits sent by the active subcarriers' indices. In this case, $m_1$ is given by $m_1 = A \log_2 M$ and $m_2$ by $m_2 = \lfloor \log_2 \binom{N}{A} \rfloor$; hence, the total number of bits m can be expressed as follows:
$$ m = m_1 + m_2 = A \log_2 M + \left\lfloor \log_2 \binom{N}{A} \right\rfloor, $$
where M is the M-ary modulation size. The $m_2$ bits are mapped to a set of A active subcarrier indices using combinatorial methods. As a result, the transmit vector $\mathbf{x} = [x(1), \ldots, x(N)]^{T}$ is formed by assigning A nonzero data symbols to the A active subcarriers based on the incoming m bits. The mapping from bits to symbols is denoted as $\mathbf{x} = f(\mathbf{b})$, where $\mathbf{b}$ represents the input sequence of m bits for each group.
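For example, under the $(N, A, M) = (4, 1, 4)$ configuration used in the simulations later, the bit split per section works out to
$$ m_1 = A \log_2 M = \log_2 4 = 2, \qquad m_2 = \left\lfloor \log_2 \binom{4}{1} \right\rfloor = 2, \qquad m = m_1 + m_2 = 4. $$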
The received signal in the frequency domain is represented as follows:
$$ \mathbf{y} = \mathbf{h} \odot \mathbf{x} + \mathbf{n}, $$
where $\mathbf{h} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_N)$ denotes the Rayleigh fading channel and $\odot$ denotes element-wise multiplication. The term $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \sigma_n^{2}\mathbf{I}_N)$ represents the additive white Gaussian noise (AWGN).
For the imperfect CSI setting, we consider a system in which the receiver experiences channel estimation errors. Let $\hat{\mathbf{h}}$ denote the estimated channel, while the actual channel is modeled as follows:
$$ \mathbf{h} = \hat{\mathbf{h}} + \mathbf{e}, $$
where the channel estimation error $\mathbf{e}$ follows a complex Gaussian distribution $\mathcal{CN}(\mathbf{0}, \sigma_e^{2}\mathbf{I}_N)$, with $\sigma_e^{2}$ representing the channel estimation error variance. The estimated channel $\hat{\mathbf{h}}$ is modeled as $\hat{\mathbf{h}} \sim \mathcal{CN}(\mathbf{0}, (1 - \sigma_e^{2})\mathbf{I}_N)$. The CSI error variance $\sigma_e^{2}$ is inversely related to the average SNR by $\sigma_e^{2} = \sigma_n^{2}/E_s$, where $E_s$ is the average energy of a transmitted M-ary symbol and $\sigma_n^{2}$ is the noise variance. In this experiment, $\sigma_e^{2}$ is computed as $\sigma_e^{2} = 1/\mathrm{SNR}$, with $\mathrm{SNR} = E_s/\sigma_n^{2}$. This relationship adjusts the error variance based on the average SNR, which more accurately reflects the real-world conditions of imperfect CSI under Rayleigh fading.
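As an illustration only, the following NumPy sketch generates the channel, its imperfect estimate, and the error variance according to the relations above; the function and variable names (`imperfect_csi_channel`, `sigma_e2`) are our own and do not come from the paper's codebase, and the sketch assumes an SNR of at least 0 dB so that $1 - \sigma_e^{2}$ stays non-negative.

```python
import numpy as np

def imperfect_csi_channel(n_sub, snr_db, rng=np.random.default_rng()):
    """Rayleigh channel h, its estimate h_hat, and sigma_e^2 = 1/SNR (linear)."""
    snr_lin = 10.0 ** (snr_db / 10.0)
    sigma_e2 = 1.0 / snr_lin                       # CSI error variance, inversely related to SNR
    # Estimated channel: each entry ~ CN(0, 1 - sigma_e2)
    h_hat = np.sqrt((1.0 - sigma_e2) / 2.0) * (
        rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
    # Estimation error: each entry ~ CN(0, sigma_e2)
    e = np.sqrt(sigma_e2 / 2.0) * (
        rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
    h = h_hat + e                                  # actual channel = estimate + error
    return h, h_hat, sigma_e2
```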
3. Architecture of the Proposed HDL Detector
Figure 2 illustrates the structure of the proposed HDL model. Similar to existing OFDM-IM detection techniques, the receiver is assumed to have knowledge of the channel information. As a result, both the channel $\mathbf{h}$ and the received signal $\mathbf{y}$ are considered as inputs to the HDL model. To enhance detection performance, $\mathbf{y}$ and $\mathbf{h}$ undergo preprocessing using domain knowledge from OFDM-IM before being fed into the HDL detector. To obtain the equalized received signal vector, the widely used zero-forcing (ZF) equalizer is applied [38]. This is represented as follows:
$$ \bar{y}(k) = \frac{y(k)}{h(k)}, \qquad k = 1, \ldots, N, $$
where the received signal $\mathbf{y}$ is multiplied element-wise by the inverse of the channel $\mathbf{h}$. Along with this equalizer, we also utilized the minimum mean square error (MMSE) and decision feedback equalizer (DFE) to compare their performance. By minimizing the mean square error (MSE) between the transmitted and received signals, the MMSE equalizer optimizes the tradeoff between inter-symbol interference (ISI) suppression and noise enhancement [39]. The equalization process of the MMSE equalizer is expressed as follows:
$$ \bar{y}_{\mathrm{MMSE}}(k) = \frac{h^{*}(k)\, y(k)}{|h(k)|^{2} + \sigma_n^{2}}, $$
where $\bar{y}_{\mathrm{MMSE}}$ represents the received signal equalized by MMSE, $|h(k)|^{2}$ represents the squared magnitude of the channel coefficient, and $\sigma_n^{2}$ is the noise variance.
The DFE equalizer achieves equalization by using the received signal along with feedback from previously detected symbols to reduce the ISI [40]. The equalization process of the DFE equalizer can be represented as follows:
$$ \bar{y}_{\mathrm{DFE}}(k) = \frac{y(k)}{h(k)} - \sum_{j \ge 1} f(j)\, \hat{d}(k - j), $$
where $\bar{y}_{\mathrm{DFE}}$ is the equalized output signal, $f$ represents the feedback filter applied to the previously detected symbols, and $\hat{d}$ denotes the decision (or detected) output from the previous symbol decisions. To generate the HDL's input, the energy of the received signal $|\mathbf{y}|^{2}$ is calculated and then added to $\bar{\mathbf{y}}$. The $3N$-dimensional input vector $\mathbf{I}$ is created by concatenating the real and imaginary parts of $\bar{\mathbf{y}}$, respectively denoted as $\Re(\bar{\mathbf{y}})$ and $\Im(\bar{\mathbf{y}})$, with the received energy vector $|\mathbf{y}|^{2}$, as illustrated in Figure 2.
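A minimal sketch of this preprocessing step is shown below, assuming the ZF and MMSE expressions given above; `build_hdl_input` and its arguments are illustrative names, and the DFE branch is omitted for brevity since it requires symbol-by-symbol feedback.

```python
import numpy as np

def build_hdl_input(y, h_hat, noise_var, eq="zf"):
    """Equalize a received OFDM-IM block and form [Re(y_eq), Im(y_eq), |y|^2]."""
    if eq == "zf":
        y_eq = y / h_hat                                              # element-wise zero forcing
    elif eq == "mmse":
        y_eq = np.conj(h_hat) * y / (np.abs(h_hat) ** 2 + noise_var)  # MMSE weighting
    else:
        raise ValueError("unsupported equalizer")
    energy = np.abs(y) ** 2                                           # received-energy feature
    return np.concatenate([y_eq.real, y_eq.imag, energy])             # 3N-dimensional input I
```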
Our proposed HDL detector employs an organized methodology consisting of several essential elements: a 1D-CNN layer, a batch normalization layer, a Bi-LSTM layer, a one-dimensional max-pooling layer, and a dense layer with a sigmoid activation function in the output layer. The processed data $\mathbf{I}$ are fed into the 1D-CNN layer, where a number of filters of various kernel sizes are used to extract features [41]. The function of the 1D-CNN model with the data $\mathbf{I}$ can be explained as follows:
$$ \mathbf{C}_i = f\left(\mathbf{W}_i * \mathbf{I} + \mathbf{b}_i\right), $$
where $*$ denotes the convolution operation and the weight and bias vectors of the i-th layer are indicated by $\mathbf{W}_i$ and $\mathbf{b}_i$, respectively. The batch normalization (BN) layer follows next. The BN technique is used to normalize the input of each layer in order to minimize internal covariate shift and stabilize the learning process, enabling neural networks to achieve faster training, greater stability, and improved performance [42]. The Bi-LSTM layer takes its input from the output of the BN layer, denoted as $\mathbf{B}$, and is configured with a fixed number of hidden units.
Two LSTM layers (a forward layer and a backward layer) make up a Bi-LSTM hidden layer. Figure 3 shows that each LSTM cell contains three gates, namely, the input, forget, and output gates [43]. Additionally, the LSTM has two states, the hidden state $h_t$ and the cell state $c_t$. The cell state functions as memory to retain information extrapolated from previous inputs, while the hidden state is used to compute the output. Here, t represents the time instant and $x_t$ is the current input. The gate functions update the cell state, enabling the Bi-LSTM cell to add or remove data at each time step. The forget gate and input gate control the amount of cell state that needs to be reset and updated, respectively [44].
The functions of the forget, input, output, and candidate gates in the Bi-LSTM model are expressed as follows:
$$
\begin{aligned}
f_t &= \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right),\\
i_t &= \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right),\\
o_t &= \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right),\\
\tilde{c}_t &= \tanh\left(W_c x_t + U_c h_{t-1} + b_c\right),
\end{aligned}
$$
where $x_t$ denotes the input vector of the Bi-LSTM at iteration t, the sigmoid activation function is represented by $\sigma(\cdot)$, the terms $f_t$, $i_t$, $o_t$, and $\tilde{c}_t$ correspond to the respective activation vectors of the forget, input, output, and candidate gates at iteration t, the vector $h_{t-1}$ represents the hidden state from the previous iteration $t-1$, and the weight matrices $W_f$, $W_i$, $W_o$, and $W_c$ are associated with the forget, input, output, and candidate gates, respectively, with regard to the input $x_t$. Similarly, the weight matrices $U_f$, $U_i$, $U_o$, and $U_c$ are tied to the previous hidden state $h_{t-1}$. Finally, the bias vectors for the forget, input, output, and candidate gates are represented by $b_f$, $b_i$, $b_o$, and $b_c$, respectively. The Bi-LSTM model updates the previous cell state $c_{t-1}$ to the current cell state $c_t$ according to the following equation:
$$ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t. $$
The hidden state function at any time step t can be represented as follows:
$$ h_t = o_t \odot \tanh(c_t). $$
The operation of the hidden state output in the Bi-LSTM layer is expressed as follows:
$$ h_t^{\mathrm{Bi}} = \left[\overrightarrow{h}_t;\ \overleftarrow{h}_t\right], $$
where $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ respectively represent the forward and backward sequences of the Bi-LSTM network.
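To make the recursion concrete, the following NumPy sketch runs a single forward LSTM step using the gate equations above; the stacked weight layout and helper names are our own illustrative choices rather than the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack the forget/input/output/candidate blocks."""
    n = h_prev.size
    z = W @ x_t + U @ h_prev + b
    f_t = sigmoid(z[0:n])               # forget gate
    i_t = sigmoid(z[n:2 * n])           # input gate
    o_t = sigmoid(z[2 * n:3 * n])       # output gate
    c_tilde = np.tanh(z[3 * n:4 * n])   # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde  # cell-state update
    h_t = o_t * np.tanh(c_t)            # hidden-state output
    return h_t, c_t

# A Bi-LSTM layer applies the same step over the sequence in both directions
# and concatenates the forward and backward hidden states at each time step.
```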
The max-pooling layer then uses the Bi-LSTM layer's output as its input. By choosing the maximum value within predetermined window sizes, the max-pooling layer shortens the input sequences, reducing dimensionality while preserving important temporal features for later layers [45]. The fundamental function of the max-pooling layer can be illustrated as follows:
$$ O_j = \max_{k \in K} F_{j,k}, $$
where $O_j$ is the output obtained after max-pooling, $k$ denotes the indices within the pooling window, $K$ is the pooling window, and $F$ is the input feature map.
The output layer, which consists of a dense layer with a sigmoid activation function, receives the input $\mathbf{O}$ from the max-pooling layer. The weighted sum of the inputs is converted by the sigmoid activation function into a value between 0 and 1, indicating the output neuron's probability or activity level [46]. The output $\hat{\mathbf{b}}$ of the HDL detector through the sigmoid activation function can be expressed as follows:
$$ \hat{\mathbf{b}} = \sigma\left(\mathbf{W}_{\mathrm{out}} \mathbf{O} + \mathbf{b}_{\mathrm{out}}\right), $$
where $\mathbf{W}_{\mathrm{out}}$ is the weight vector of the output layer and $\mathbf{b}_{\mathrm{out}}$ is its bias.
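The layer stack described above can be expressed directly in Keras, mirroring the HDL_model construction in Algorithm 1; the filter count, kernel size, number of hidden units, and the ReLU activation on the convolutional layer below are placeholders and assumptions rather than the values used in the paper (those are given in Table 1).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hdl_model(input_dim, m_bits, n_filters=64, kernel_size=3, lstm_units=128):
    """Conv1D -> BatchNorm -> Bi-LSTM -> GlobalMaxPool1D -> Dense(sigmoid)."""
    inputs = layers.Input(shape=(input_dim, 1))                   # preprocessed vector I, one channel
    x = layers.Conv1D(n_filters, kernel_size, padding="same",
                      activation="relu")(inputs)                  # 1D-CNN feature extraction
    x = layers.BatchNormalization()(x)                            # stabilize and speed up training
    x = layers.Bidirectional(
        layers.LSTM(lstm_units, return_sequences=True))(x)        # forward + backward context
    x = layers.GlobalMaxPooling1D()(x)                            # keep the strongest features
    outputs = layers.Dense(m_bits, activation="sigmoid")(x)       # per-bit probabilities
    return models.Model(inputs, outputs)
```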
Offline Training and Online Testing Procedure
The DL model must be trained offline using simulation data before the proposed HDL detector is deployed. Specifically, a set of transmitted vectors is generated by randomly producing different sequences of m bits, denoted as $\mathbf{b}$, such that $\mathbf{x} = f(\mathbf{b})$. These vectors are subsequently transmitted to the receiver, where they are impacted by AWGN and the Rayleigh fading channel. The noise vectors and the channel are also created at random according to their established statistical models, and vary from one bit sequence to another. The input dataset $\mathbf{I}$, for which the labels are the corresponding bit sequences $\mathbf{b}$, is obtained by preprocessing the received signal and channel vectors, i.e., $\mathbf{y}$ and $\mathbf{h}$, as explained in the preceding section. The signal detection process of the proposed HDL-based detector in the OFDM-IM system is comprehensively outlined in Algorithm 1, covering each stage from data generation to HDL model design, training, and testing. This step-by-step breakdown offers a clear understanding of the entire workflow.
Algorithm 1 OFDM-IM symbol detection with HDL model
1: Initialize Parameters:
2: Set subcarriers N, active subcarriers A, modulation order M, training SNR, batch size, training epochs, and learning rate.
3: Calculate derived parameters: bits per symbol m, QAM symbols, power allocation, and noise level.
4: Define Subcarrier Patterns:
5: Generate index patterns based on N and A (e.g., index set idx).
6: Generate OFDM-IM Signal:
7: Define OFDM_IM_received(bits, SNRdb) for training and testing data generation:
   • Convert bits into subcarrier indices and QAM symbols.
   • Apply channel and noise effects.
   • Normalize and return the received signals.
8: Design Neural Network (HDL model):
9: Construct HDL_model(x):
   • Use Conv1D for feature extraction.
   • Apply BatchNormalization.
   • Add bidirectional LSTM layers.
   • Add GlobalMaxPool1D.
   • Use a dense output layer with sigmoid activation for bit prediction.
10: Training Phase:
11: for each epoch from 1 to training epochs do
12:   Initialize average cost
13:   for each batch from 1 to total batches do
14:     Initialize empty batches batch_x and batch_y
15:     for each sample from 1 to batch size do
16:       Generate random bits
17:       Compute received signal using OFDM_IM_received
18:       Append signal to batch_x and bits to batch_y
19:     end for
20:     Run optimizer, calculate training cost, and update average cost
21:   end for
22:   Append average cost to training loss list
23: end for
24: Testing Phase:
25: for each SNR in range of SNR values do
26:   Set test number test_number based on SNR
27:   Initialize empty test batches batch_x and batch_y
28:   for each test sample from 1 to test_number do
29:     Generate random bits
30:     Compute received signal using OFDM_IM_received
31:     Append signal to batch_x and bits to batch_y
32:   end for
33:   Calculate BER using model predictions and store result
34: end for
35: Display Results:
36: Plot BER vs. SNR
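As a complement to Algorithm 1, the sketch below shows one possible realization of the OFDM_IM_received data-generation routine for the $(N, A, M) = (4, 1, 4)$ case; the Gray mapping, normalization, and SNR convention are our own assumptions rather than the paper's exact code.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # unit-energy QPSK (illustrative)

def ofdm_im_received(bits, snr_db, rng=np.random.default_rng()):
    """(N, A, M) = (4, 1, 4): 2 index bits pick the active subcarrier, 2 bits the symbol."""
    idx = 2 * bits[0] + bits[1]                    # active-subcarrier index in {0, 1, 2, 3}
    x = np.zeros(4, dtype=complex)
    x[idx] = QPSK[2 * bits[2] + bits[3]]           # modulated symbol on the active subcarrier
    noise_var = 10.0 ** (-snr_db / 10.0)           # unit symbol energy assumed
    h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)   # Rayleigh channel
    n = np.sqrt(noise_var / 2.0) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
    return h * x + n, h                            # frequency-domain received signal and channel
```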
The HDL model is trained using the gathered data in order to reduce the BER, or more specifically the discrepancy between $\mathbf{b}$ and its prediction $\hat{\mathbf{b}}$. Therefore, we use the mean-squared error (MSE) loss function for the training process. Because the differences are squared, the MSE penalizes larger errors more heavily than smaller ones, making it highly sensitive to outliers. Mathematically, it is expressed as follows [47]:
$$ \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(b_i - \hat{b}_i\right)^{2}, $$
where $b_i$ represents the true values, $\hat{b}_i$ represents the predicted values, and n is the number of samples. Adaptive moment estimation (Adam) is a DL optimization technique that combines the strengths of both RMSprop and momentum. By adjusting the learning rate for each parameter, Adam efficiently handles sparse gradients and noisy data, making it well suited for complex models and large datasets [48]. Adam is readily implementable in a number of commercial DL platforms, including Keras 2.10.0 and TensorFlow 2.10.0. Additionally, we evaluate the performance of the RMSProp, Adagrad, and Gradient Descent optimizers for comparison.
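Putting the pieces together, the offline training loop described above (MSE loss, Adam with a 0.005 learning rate, 1000 epochs of 20 batches with 1000 samples each, 15 dB training SNR) might look as follows; `build_hdl_model`, `ofdm_im_received`, and `build_hdl_input` are the illustrative helpers sketched earlier, and perfect CSI is assumed here for brevity.

```python
import numpy as np
import tensorflow as tf

model = build_hdl_model(input_dim=12, m_bits=4)          # 3N = 12 input features for (4, 1, 4)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),
              loss="mse")                                # MSE between true bits and predictions

train_snr_db = 15
noise_var = 10.0 ** (-train_snr_db / 10.0)
for epoch in range(1000):                                # training epochs
    for _ in range(20):                                  # batches per epoch
        batch_x, batch_y = [], []
        for _ in range(1000):                            # batch size
            bits = np.random.randint(0, 2, 4)            # m = 4 random bits
            y, h = ofdm_im_received(bits, train_snr_db)
            batch_x.append(build_hdl_input(y, h, noise_var))
            batch_y.append(bits)
        model.train_on_batch(np.asarray(batch_x)[..., None],
                             np.asarray(batch_y, dtype="float32"))
```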
Figure 4 and Figure 5 illustrate the training loss of the proposed model for the different equalizers and modulation schemes, respectively. We evaluate the training loss at each epoch to monitor the model's performance over time. As observed in Figure 4, the proposed model demonstrates strong training loss performance across all equalizers. While the DFE equalizer exhibits slightly higher loss, both the ZF and MMSE equalizers achieve very low and nearly identical training loss levels. Figure 5 illustrates the training loss for different modulation schemes under various data setups, specifically for the QPSK, 8QAM, and 16QAM configurations with the ZF equalizer. Notably, this training loss is estimated at a training SNR of 15 dB with a batch size of 1000 and a learning rate of 0.005. As can be observed, the training loss tends to increase with higher modulation orders and larger data setups.
The model's performance depends significantly on the SNR level chosen for training; thus, careful selection of the training SNR is crucial in order to optimize the HDL detector's effectiveness. Specifically, the model trained at this SNR level must perform reliably across other relevant SNR values, making it essential to select an optimal training SNR. For instance, if the training SNR is set too low, the model may fail to generalize adequately, as it will not account for the noise impact during training. The simulation results presented in the next section detail the process of selecting an appropriate training SNR for each experimental setup.
We trained our proposed HDL model over 1000 epochs in all experimental settings, with each epoch comprising 20 batches and each batch containing 1000 data samples (the batch size). This setup resulted in a total of 20,000 batches, yielding approximately $2 \times 10^{7}$ unique training samples. The samples for each batch were generated randomly during the training process. The HDL model was trained for various configurations of (N, A, M); specifically, we trained the proposed model with modulation orders of QPSK, 8QAM, and 16QAM. All training configurations for the proposed model are detailed in Table 1.
Once trained offline, the proposed system can operate in real time to estimate data bits across different channel fading scenarios without the need for further modifications. Upon receiving the signal and channel characteristics, the HDL model efficiently generates the estimated bits and performs comparably to the ML detector in the presence of channel estimation errors. We used 100,000 samples to test the proposed model.
Figure 6 presents the confusion matrix of the proposed model under the (4, 1, 4) data setup with a training SNR of 15 dB, providing a bit-wise performance evaluation on the test data. The confusion matrix shows the model's accuracy in predicting "0" and "1" bits at the specified SNR level. Test samples with true bit labels were passed through the trained model to obtain predicted values, which were thresholded at 0.5 to produce binary predictions. The confusion matrix indicates that the model correctly classified 197,769 "0" bits and 197,822 "1" bits, with 2214 "0" bits misclassified as "1" bits and 2195 "1" bits misclassified as "0" bits. These results highlight the model's strong bit-wise accuracy and low misclassification rate.
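The online testing step reduces to Monte-Carlo BER estimation with the predictions thresholded at 0.5, as in the confusion-matrix evaluation above; the sketch below reuses the illustrative helpers from the previous snippets and again assumes perfect CSI for brevity.

```python
import numpy as np

def estimate_ber(model, snr_db, n_tests=100_000):
    """Monte-Carlo BER: threshold sigmoid outputs at 0.5 and compare bit-wise."""
    noise_var = 10.0 ** (-snr_db / 10.0)
    errors = total = 0
    for _ in range(n_tests):
        bits = np.random.randint(0, 2, 4)
        y, h = ofdm_im_received(bits, snr_db)
        x_in = build_hdl_input(y, h, noise_var)[None, :, None]   # shape (1, 3N, 1)
        bits_hat = (model.predict(x_in, verbose=0) > 0.5).astype(int).ravel()
        errors += int(np.sum(bits_hat != bits))
        total += bits.size
    return errors / total
```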
4. Results
The BER performance of the proposed HDL detector was evaluated with respect to SNR under imperfect CSI conditions. A model's learning rate balances speed and stability, determining how quickly it learns; an optimal rate accelerates convergence, prevents instability, and ensures efficient and accurate outputs [49]. Our proposed model demonstrates efficient performance across different learning rates, as shown in Figure 7a. The performance of the HDL-based detector at various learning rates was calculated with a training SNR of 15 dB and the (4, 1, 4) data setup. At lower SNR levels, it performs similarly for learning rates of 0.01, 0.005, and 0.002; at higher SNR levels, the learning rate of 0.005 shows a slight advantage over the others. Consequently, all experimental setups were designed with a learning rate of 0.005.
The BER performance of the proposed HDL-based model for different batch sizes is shown in Figure 7b. The results demonstrate that the model exhibits robust performance across all batch sizes. At lower SNR levels, the performance remains nearly consistent for batch sizes of 500, 1000, and 2000. However, at higher SNR levels, the model achieves slightly superior performance with a batch size of 1000.
The training SNR is critical for DL models in communication systems, as it directly impacts the model's ability to generalize across various channel conditions. The performance of the proposed detector across various training SNR levels is illustrated in Figure 8. Notably, the model demonstrates its most stable performance at a training SNR of 15 dB; moving the training SNR away from this point in either direction results in diminished performance. For training SNR levels of 0 dB and 10 dB, the HDL model performs well at lower SNRs but exhibits slightly weaker performance at higher SNRs. In contrast, at training SNRs of 20 dB and 25 dB, the proposed model demonstrates good performance at higher SNR levels while showing comparatively weaker results at lower SNRs. Thus, the experimental results indicate that the proposed HDL-based detector achieves superior performance at a training SNR of 15 dB. Consequently, all experimental setups were designed with a training SNR of 15 dB.
In all experimental settings, the results were calculated using the ZF equalizer and the Adam optimizer, except in Figure 9 and Figure 10, where BER performance is compared across different equalizers and optimizers. Additionally, all experiments utilized the (4, 1, 4) data setup, except in Figure 11, which presents a comparison of model performance across different modulation orders and data setups.
Equalizers play a crucial role in communication systems by mitigating channel distortions such as ISI and multipath fading. They ensure reliable data transmission in noisy environments, improving BER and overall system performance by providing better signal quality and dependability [50]. The BER performance of the proposed model with various equalizers is depicted in Figure 9. The results reveal that the proposed detector achieves impressive BER performance with both the ZF and MMSE equalizers, while its performance with the DFE equalizer is notably less effective. Although the model delivers nearly equivalent performance with the ZF and MMSE equalizers, it demonstrates slightly enhanced performance when employing the ZF equalizer, highlighting its efficiency in this configuration.
The BER performance of the proposed model with different optimizers is illustrated in Figure 10. Specifically, we examined the performance of the HDL model using the Adam, RMSProp, Adagrad, and Gradient Descent optimizers. The figure clearly indicates that the proposed model demonstrates acceptable performance across all optimizers. However, its performance with the Gradient Descent optimizer is somewhat less favorable compared to the others. Notably, while the HDL detector exhibits nearly comparable performance with both Adam and RMSProp, the Adam optimizer achieves slightly superior performance in comparison to RMSProp.
Figure 11 illustrates the BER performance of the HDL detector in the OFDM-IM system with different modulation orders under imperfect channel conditions. We evaluated the performance with the QPSK, 8QAM, and 16QAM modulation schemes using the corresponding (N, A, M) configurations listed in Table 1. We observed that increasing the modulation order and data setup leads to a reduction in performance; specifically, the QPSK configuration achieves approximately 5 dB and 9 dB better performance than the 8QAM and 16QAM configurations, respectively.
Figure 12 compares the BER performance of the proposed HDL-based detector against the ML, GD, DeepIM [27], and LSTM-IM [34] detectors. This evaluation used a 15 dB training SNR and a 0.005 learning rate along with the ZF equalizer and Adam optimizer. The results clearly demonstrate that the HDL detector consistently outperforms the GD, ML, DeepIM, and LSTM-IM models with the (4, 1, 4) data setup. Specifically, the proposed detector shows approximately 2.5 dB improved performance over GD and achieves nearly 0.9 dB better BER performance compared to both the ML and DeepIM detectors under imperfect CSI conditions. Additionally, we compared the performance of the proposed model under a higher-rate data setting against a CNN-based model [31]. Table 2 presents the BER performance of all comparable models at an SNR of 20 dB, with the results illustrated in Figure 12. The simulation results confirm that the HDL model outperforms the CNN model at higher data rates.
To further demonstrate the robustness of the proposed HDL-based OFDM-IM system, we calculated the throughput and spectral efficiency (SE), as shown in Figure 13. These metrics were evaluated for the (4,1,4), (4,2,4), and (4,3,4) data configurations, with each setup representing a distinct combination of active indices. The results indicate that both throughput and SE increase with a higher number of active indices. The model demonstrated the highest performance with the (4,3,4) setup. These results confirm that our proposed HDL model effectively achieves high throughput and SE.