Article

Modeling and Forecasting Ionospheric foF2 Variation Based on CNN-BiLSTM-TPA during Low- and High-Solar Activity Years

Baoyi Xu, Wenqiang Huang, Peng Ren, Yi Li and Zheng Xiang
School of Telecommunication Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(17), 3249; https://doi.org/10.3390/rs16173249
Submission received: 22 July 2024 / Revised: 22 August 2024 / Accepted: 29 August 2024 / Published: 2 September 2024
(This article belongs to the Special Issue Ionosphere Monitoring with Remote Sensing (3rd Edition))

Abstract

The transmission of high-frequency signals over long distances depends on the reflective properties of the ionosphere, and the selection of operating frequencies is closely tied to ionospheric variations. Accurate prediction of the ionospheric critical frequency foF2 and related parameters at low latitudes is therefore of great significance for high-frequency communications. Deep learning algorithms currently show significant advantages in capturing the characteristics of the ionosphere. In this paper, a hybrid neural network combined with a temporal pattern attention mechanism is used to predict variations in the foF2 parameter during high- and low-solar activity years. The hybrid network incorporates a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM), which together extract the spatiotemporal features of ionospheric variations. The foF2 data used for training and testing were measured by the advanced Australian Digital Ionospheric Sounder at three observatories, Brisbane (27.53°S, 152.92°E), Darwin (12.45°S, 130.95°E) and Townsville (19.63°S, 146.85°E), in 2000, 2008, 2009 and 2014 (the peak or trough years of solar activity in solar cycles 23 and 24). The results show that the proposed model accurately captures the changes in ionospheric foF2 characteristics and outperforms the International Reference Ionosphere 2020 (IRI-2020) and BiLSTM ionospheric prediction models.

1. Introduction

High-frequency (HF) communication, also called shortwave communication, refers to radio communication with wavelengths of 10 to 100 m and frequencies between 3 and 30 MHz [1]. As a long-established communication method, HF sky-wave transmission exploits the low-loss ionosphere to achieve ultra-long-distance and global connectivity via one or more ionospheric reflections [2]. The F2 layer, which has the highest electron density, is the principal reflective layer and the highest layer from which signals are returned; in the sky-wave propagation mode it provides the main ionospheric reflection for signal propagation. The prediction and analysis of the F2 layer are therefore of great significance for selecting operating frequencies in applications such as shortwave communication and radar detection.
The International Reference Ionosphere (IRI) [3,4] is an empirical standard model of the ionosphere. As an international project, the IRI model is extensively used in diverse ionospheric research and is the predominant standard model within the ionospheric research community. The model is continuously updated, with the most recent versions being 2007, 2012, 2016 and 2020. Observations from multiple ground-based sources and spaceborne satellites are used to construct the IRI model [5,6].
Currently, with the rapid expansion of artificial intelligence, deep learning has developed quickly and is widely employed to build ionospheric models with remarkable results. Iban et al. [7] compared the predictive performance of three machine learning algorithms (decision trees, random forests and support vector machines) on ionospheric parameters. In references [8,9,10,11,12], researchers established deep learning models to predict changes in total electron content (TEC) with considerable accuracy. McGranaghan et al. [13] used support vector machines to predict high-latitude ionospheric parameters, with results significantly better than traditional methods. Mallika et al. [14] investigated Gaussian process regression (GPR) for low-latitude ionospheric prediction and found that it outperforms existing ARMA and ANN models.
This paper establishes a deep learning model and introduces an attention mechanism. Because conventional attention mechanisms are insensitive to temporal variations, we incorporate a temporal pattern attention mechanism into the proposed hybrid neural network. The goal is to predict changes in the ionospheric foF2 parameter in low-latitude regions during both solar-minimum and solar-maximum years. Compared with the IRI-2020 and BiLSTM models, the proposed model achieves better prediction performance. The remainder of this paper is organized as follows. The data and the proposed model are described in Section 2. In Section 3, we compare the prediction results of the models to verify the effectiveness of the proposed method. Section 4 discusses the results, and Section 5 concludes the paper.

2. Materials and Methods

2.1. Sources of Data Used in the Study

The foF2 data we used were obtained from the Space Weather Services of the Australian Bureau of Meteorology (https://www.sws.bom.gov.au/World_Data_Centre, accessed on 15 January 2024) and were measured on an hourly basis. We used observations from three stations, Brisbane, Darwin and Townsville, located at 27.53°S, 152.92°E; 12.45°S, 130.95°E; and 19.63°S, 146.85°E, respectively. The solar cycle, sunspot number (SSN) and geomagnetic activity data were sourced from NASA’s Goddard Space Flight Center (https://omniweb.gsfc.nasa.gov/form/dx1.html, accessed on 15 January 2024).

2.2. Season, Daily Cycle, Solar and Geomagnetic Activity

To predict foF2, we first need to identify the parameters related to its variations. Williscroft and Poole [15] noted that foF2, which is directly related to the peak electron density of the F2 layer, is influenced by various factors including solar and magnetic activity, geographic location, local time and season. Following their suggestions, the day of year (DOY) is used to quantify the season. The problem with this approach is that, although there is only a one-day difference between December 31st and January 1st, their numerical difference is very large, so a neural network has difficulty treating these days as contiguous. To address this, the DOY variable is split into two separate inputs [16,17]:
DOY_s = \sin(2\pi \times DOY/365) \quad \text{and} \quad DOY_c = \cos(2\pi \times DOY/365),
In a comparable manner, the hour of day is mapped to two continuous components using the following formulas. We use local time (LT) rather than universal time (UT) to more accurately represent the actual time at the observing stations.
UT_s = \sin(2\pi \times UT/24) \quad \text{and} \quad UT_c = \cos(2\pi \times UT/24),
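For illustration, the short sketch below (Python/NumPy rather than the authors' MATLAB toolchain) shows how this sine/cosine encoding keeps cyclically adjacent values numerically close; the function name and example values are ours:

```python
import numpy as np

def cyclic_encode(value, period):
    """Map a cyclic quantity (day of year, hour of day) to sine/cosine components."""
    angle = 2.0 * np.pi * np.asarray(value, dtype=float) / period
    return np.sin(angle), np.cos(angle)

# December 31 (DOY 365) and January 1 (DOY 1) become near-identical input pairs.
doy_s, doy_c = cyclic_encode([365, 1], 365)

# Hour of day (local time) encoded with a 24 h period.
lt_s, lt_c = cyclic_encode(range(24), 24)
```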
Naturally, solar activity indices are required inputs for an ionospheric model. Because long records of extreme ultraviolet-specific indices are lacking, researchers have employed two solar indices from different wavelength ranges, namely the sunspot number and the solar radio flux at 10.7 cm (F10.7) [18,19]. Despite the strong correlation between these two parameters, they are often used simultaneously to build models [20]. Joshua and Nzekwe [21] argued that foF2 changes are complicated and difficult to capture fully, and therefore focused solely on the sunspot number (SSN). The findings of Liu et al. [22] indicate that incorporating F10.7 enhances prediction accuracy and that it should therefore also be used in modeling foF2. Furthermore, Cao et al. [23] found that the dependence of foF2 on the SSN and F10.7 is most pronounced during mid-solar-activity years and weakest during high-solar-activity years. Bai et al. [24] noted that the SSN has a more decisive influence on foF2 than F10.7 during quiet and active periods, whereas F10.7 influences foF2 more strongly than the SSN when solar activity is moderate. Mursula et al. [25], in a recent study, highlighted significant long-term changes in the relationship between the F10.7 index and sunspot numbers, emphasizing that these parameters should be considered separately in long-term solar studies because their asynchronous evolution reflects different aspects of solar atmospheric change. In short, the SSN and F10.7 are both indispensable, and employing a hybrid neural network enhances the model's ability to capture the variations in foF2 across different levels of solar activity.
Similarly, Mikhailov and Marin [26] concluded that changes in foF2 clearly depend on geomagnetic activity. The magnitude of magnetic activity can be measured with several indices, with the Ap index and the interplanetary magnetic field Bz component (IMF Bz) being particularly relevant for predicting foF2. In addition, the Disturbance Storm Time (Dst) index is employed to more accurately differentiate between calm conditions and geomagnetic storms [27,28].
It is noteworthy that in equatorial and low-latitude regions the ionosphere exhibits plasma density irregularities that lead to ionospheric scintillation. These irregularities make ionospheric conditions in these areas harder to predict [29]. A single deep learning model therefore falls short of capturing the intricate variations of the ionosphere accurately, whereas a hybrid neural network model can learn the intrinsic relationships between ionospheric behavior and its solar and geomagnetic drivers more effectively, enhancing forecast accuracy.
The model therefore uses F10.7 and the SSN, together with the geomagnetic indices Ap, Dst and IMF Bz, as well as the seasonal terms and local time, as its essential inputs. Taking 2009 and 2014 as examples, the hourly indices of Dst, Ap, F10.7, IMF Bz and the SSN during a solar-minimum and a solar-maximum year are shown in Figure 1.

2.3. Structure of the Hybrid Prediction Model

The model for predicting foF2, utilizing a hybrid architecture, combines a CNN, BiLSTM, and temporal pattern attention mechanism. A CNN, compared to traditional neural networks, is distinguished by local connections and weight sharing. It typically comprises a convolutional layer, where the convolution operation is fundamental [30]. The strength of this operation lies in its ability to automatically detect and analyze local spatial patterns within the data, such as those found in ionospheric variations. By applying filters that move across the input data, the CNN can capture and convey the intricate characteristics of ionospheric variations more effectively, identifying important features across multiple scales while maintaining computational efficiency. In addition, the pooling operation of the CNN enables it to reduce the data dimension while preserving key information, which enables the model to focus on the most significant features of ionospheric changes.
A recurrent neural network (RNN) can retain input information from previous time steps, which is its primary difference from traditional neural networks. It can handle inputs of any length, and the model does not change shape as the input length increases. However, it is difficult for an RNN to retain information from distant time steps: because of the large number of multiplied partial derivatives, long-term dependencies are hard to capture, which hinders the processing of time series data. The long short-term memory network (LSTM) is a variant of the RNN that uses gates to control how memory is weighted across time instances and incorporates cross-layer connections to mitigate the vanishing gradient problem; its special structure alleviates these limitations of the RNN [31,32]. However, the LSTM cannot process time series information from back to front, so the model can obtain only limited feature information [33], which increases prediction errors. In contrast, because it processes information in both directions, BiLSTM achieves better predictions through additional data training.
This study utilized MATLAB’s Deep Learning Toolbox to construct the BiLSTM framework. For an individual LSTM, the critical components are calculated as follows [34]:
f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f),
i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i),
c_t = f_t \cdot c_{t-1} + i_t \cdot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c),
o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o),
where f_t is the forget gate; it applies the sigmoid activation function \sigma to the current input x_t and the previous hidden state h_{t-1}, combined with its own bias b_f. W_{xf} and W_{hf} are the weight matrices acting on the current input and the previous hidden state, respectively. Similarly, W_{xi} and W_{hi}, W_{xo} and W_{ho}, and W_{xc} and W_{hc} are the weight matrices of the input gate, output gate and new cell state, respectively, and b_i, b_c and b_o are their biases. i_t is the input gate, which determines what new information is added to the cell state. c_t is the cell state, the core memory of the LSTM unit, which is updated in two steps: first, the old cell state c_{t-1} is modified by the forget gate; second, the input gate i_t adds new candidate values scaled by the tanh of the current input and previous hidden state. The output gate o_t regulates the transfer of the cell state to the subsequent hidden state, using the sigmoid function to determine which components of the cell state are output. The output h_t of the LSTM hidden layer is calculated from o_t and the activation function tanh applied to the cell state:
h_t = o_t \cdot \tanh(c_t),
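To make the gate equations concrete, a minimal NumPy sketch of a single LSTM step is given below; it mirrors the equations above, but the dictionary-based weight and bias containers are our own convention and do not reflect the MATLAB Deep Learning Toolbox implementation used in this study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step; W and b are dicts of weight matrices and bias vectors
    keyed as in the equations above (e.g. W['xf'], W['hf'], b['f'])."""
    f_t = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + b['f'])     # forget gate
    i_t = sigmoid(W['xi'] @ x_t + W['hi'] @ h_prev + b['i'])     # input gate
    c_cand = np.tanh(W['xc'] @ x_t + W['hc'] @ h_prev + b['c'])  # candidate state
    c_t = f_t * c_prev + i_t * c_cand                            # cell state update
    o_t = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + b['o'])     # output gate
    h_t = o_t * np.tanh(c_t)                                     # hidden state
    return h_t, c_t
```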
The attention mechanism draws on the theory of human attention: during reading, for example, we focus our attention on important information. For the model, the weights of the inputs change dynamically, and the attention mechanism learns these weights and thus which inputs are more important. In the typical attention mechanism for an RNN, the previous states are represented by h_i. The context vector v_t is derived from these preceding states and is computed as a weighted sum of the column vectors h_i of the matrix H, encapsulating the relevant information for the current time step. This approach allows the model to focus on the most pertinent features from the recent past, enhancing the prediction accuracy for the current time step.
Consider a scoring function f: R^m × R^m → R that quantifies the correlation between input vectors. The calculation of α_i is similar to the softmax function: it normalizes the similarity between variables into a probability distribution used to weight the input vectors. The context vector v_t is computed through the equations below:
\alpha_i = \frac{\exp(f(h_i, h_t))}{\sum_{j=1}^{t-1} \exp(f(h_j, h_t))},
v_t = \sum_{i=1}^{t-1} \alpha_i h_i,
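The following NumPy sketch condenses these two equations. The dot-product scoring function is only one possible choice of f, used here for illustration, and the previous states are stored as rows (rather than columns) of H for convenience:

```python
import numpy as np

def attention_context(H, h_t):
    """Context vector v_t as a softmax-weighted sum of the previous states.
    H: (t-1, m) array whose rows are h_1 ... h_{t-1}; h_t: (m,) current state."""
    scores = H @ h_t                                 # f(h_i, h_t) as dot products
    scores -= scores.max()                           # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()    # attention weights
    return alpha @ H                                 # v_t = sum_i alpha_i * h_i
```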
Currently, many scholars have attempted to incorporate attention mechanisms into neural network models for the prediction of ionospheric parameters [35,36]. Although the traditional attention mechanism can detect relationships between input vectors, it has shortcomings when applied to RNNs for multivariate time series prediction. It identifies recently relevant information and uses v_t to form a weighted sum of the antecedent states, which makes it well suited to tasks with simple data at each time step. If each time step contains multiple variables, however, it has difficulty excluding variables that contribute noise, which diminishes forecasting accuracy. Furthermore, when confronted with numerous time steps, the traditional attention mechanism simply averages the information. These problems leave traditional attention mechanisms unable to identify temporal patterns that are valuable for prediction [37,38].
Temporal pattern attention (TPA) was proposed to address the shortcomings of the traditional attention mechanism in multivariate time series prediction [39]. Assume that the multivariate time series given by the above LSTM is X = \{x_1, x_2, \dots, x_{t-1}\}, where x_i ∈ R^n denotes the actual value at time i. The aim is to predict the value of x_{t-1+\Delta}, where \Delta is a variable increment; the corresponding predicted value is denoted y_{t-1+\Delta}. Only \{x_{t-w}, x_{t-w+1}, \dots, x_{t-1}\} is used to predict x_{t-1+\Delta}, where w is a predefined, adjustable window size.
Figure 2 illustrates the TPA’s architecture.
In TPA, the input vector still needs to pass through a CNN first. We use k filters C_i ∈ R^{1×T}, where T can be regarded as the number of column vectors contained within the window. The convolution operations yield H^C ∈ R^{n×k}, whose entry H^C_{i,j} denotes the result of the operation between the i-th row vector and the j-th filter. The formula is as follows:
H^C_{i,j} = \sum_{l=1}^{w} H_{i,(t-w-1+l)} \times C_{j,T-w+l},
As before, v_t is a weighted sum of the row vectors of H^C. The relevance is then calculated using the scoring function f: R^k × R^m → R:
f(H^C_i, h_t) = (H^C_i)^{\top} W_a h_t,
The attention weight α_i is then given by:
\alpha_i = \mathrm{sigmoid}(f(H^C_i, h_t)),
The context vector v_t ∈ R^k is obtained as the weighted sum of the attention weights and the row vectors of H^C:
v_t = \sum_{i=1}^{n} \alpha_i H^C_i,
The final prediction is obtained by combining v_t and h_t:
h'_t = W_h h_t + W_v v_t,
y_{t-1+\Delta} = W_{h'} h'_t,
where W_h, W_v and W_{h'} are the weight matrices acting on the hidden state h_t, the context vector v_t and the newly computed hidden state h'_t, respectively, and y_{t-1+\Delta} is the final predicted value.
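The scoring, weighting and combination steps above can be condensed into the short NumPy sketch below. The shapes follow the definitions above, while the function and argument names are our own illustrative choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tpa_predict(H_C, h_t, W_a, W_h, W_v, W_hp):
    """Temporal pattern attention on the CNN feature matrix H_C (n x k),
    given the current hidden state h_t (m,)."""
    scores = H_C @ (W_a @ h_t)        # (H^C_i)^T W_a h_t for every row i
    alpha = sigmoid(scores)           # attention weights (sigmoid, not softmax)
    v_t = alpha @ H_C                 # context vector v_t in R^k
    h_prime = W_h @ h_t + W_v @ v_t   # combined hidden state h'_t
    return W_hp @ h_prime             # predicted y_{t-1+Delta}
```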
Figure 3 provides a detailed depiction of the structure of the hybrid neural network prediction model proposed in this paper. The input data first pass through a convolutional layer to extract spatial features, then move through a sequence expansion layer before entering the BiLSTM layer, which captures temporal characteristics. The temporal pattern attention (TPA) then focuses on time-related information, and after a series of processing steps the output is generated. The inputs to the model are given by the following equation:
X_{foF2} = (UT_s, UT_c, DOY_s, DOY_c, Dst, \mathrm{IMF}\,B_z, Ap, SSN, F10.7, foF2),
The output Y_t^{foF2} combines the three stages: CNN, BiLSTM and temporal pattern attention. The model output is the serial composition of the three layers:
Y_t^{foF2} = TPA_t^{output}\big(BiLSTM_t^{output}(CNN_t^{output})\big),
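For readers who prefer code to diagrams, the sketch below outlines one possible CNN–BiLSTM–TPA stack in PyTorch. It is an illustrative reimplementation under our own assumptions (layer sizes, kernel width, single-output head), not the authors' MATLAB model; only the overall CNN → BiLSTM → TPA composition follows the description above.

```python
import torch
import torch.nn as nn

class CNNBiLSTMTPA(nn.Module):
    """Illustrative CNN -> BiLSTM -> temporal pattern attention model."""

    def __init__(self, n_features=10, conv_channels=32, hidden=250,
                 window=24, k_filters=24):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(conv_channels, hidden, batch_first=True,
                              bidirectional=True)
        m = 2 * hidden                                              # BiLSTM output width
        self.C = nn.Parameter(torch.randn(k_filters, window - 1))   # temporal filters
        self.W_a = nn.Linear(m, k_filters, bias=False)
        self.W_h = nn.Linear(m, m, bias=False)
        self.W_v = nn.Linear(k_filters, m, bias=False)
        self.out = nn.Linear(m, 1)                                  # one foF2 value ahead

    def forward(self, x):                             # x: (batch, window, n_features)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        H, _ = self.bilstm(z)                         # (batch, window, m)
        h_t = H[:, -1]                                # current hidden state
        # Row-wise temporal convolution of the earlier states with k filters.
        HC = torch.einsum('btm,kt->bmk', H[:, :-1], self.C)
        score = torch.einsum('bmk,bk->bm', HC, self.W_a(h_t))   # (H^C_i)^T W_a h_t
        alpha = torch.sigmoid(score)                  # attention weights
        v_t = torch.einsum('bm,bmk->bk', alpha, HC)   # context vector
        h_prime = self.W_h(h_t) + self.W_v(v_t)       # h'_t
        return self.out(h_prime).squeeze(-1)          # y_{t-1+Delta}


model = CNNBiLSTMTPA()
y_hat = model(torch.randn(8, 24, 10))                 # batch of 8 windows of 24 h
```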
The model input is thus processed by the CNN layer, handed over to the BiLSTM layer, and finally processed by the TPA layer to produce the result. Evaluation metrics are required to assess the quality of the predictions; we use the RMSE, MAE and MAPE. The root mean square error (RMSE) is the square root of the average of the squared differences between predicted and actual values; it is sensitive to outliers and weights larger differences more heavily. The mean absolute error (MAE) is the straightforward average of the absolute errors, treating all errors equally regardless of magnitude. The mean absolute percentage error (MAPE) expresses the error as a percentage of the actual values, which makes it easier to interpret in a relative context. Their calculation formulas are as follows:
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(U_i^{obs} - U_i^{model}\right)^2},
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|U_i^{obs} - U_i^{model}\right|,
\mathrm{MAPE} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{U_i^{obs} - U_i^{model}}{U_i^{obs}}\right| \times 100\%,
where U_i^{obs} represents the actual observed value of foF2, and U_i^{model} is the value predicted by the model.
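As a concrete reference, the three metrics can be computed as in the following sketch (the function name is ours; observed foF2 is strictly positive, so the MAPE division is safe):

```python
import numpy as np

def fof2_metrics(obs, pred):
    """Return RMSE, MAE and MAPE (in %) between observed and predicted foF2 (MHz)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    err = obs - pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / obs))
    return rmse, mae, mape
```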

3. Results

The project employed foF2 observations from the three low-latitude stations in 2000, 2008, 2009 and 2014. Among these, 2000 and 2014 are the solar-maximum years of the two solar cycles, and the remaining two years are the minimum years. For each observing station, we built the model using data from all years and seasons at that station. Three-quarters of the collected dataset were used to train the model, and the remaining portion was used for testing/prediction. The model's predictions were evaluated for these four years. Table 1 gives the specific configuration of the model used for prediction. For the IRI-2020 model, the Sunspot Number-R12 option and the F Peak Storm Model (on) were used. Sunspot Number-R12 is the 12-month running average of the sunspot number and is used by the IRI model to estimate the level of solar activity. The F-peak Storm Model uses historical ionosonde data to simulate the temporal behavior of ionospheric storms. For other parameters, such as Coordinate Type and the Upper and Lower Height (km) for TEC Integration, default values were used. We used 24 h of input data to predict the hourly value one day (24 h) ahead, sliding this window forward through the dataset.
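To illustrate the sliding-window setup described above, the sketch below builds 24 h input windows paired with the foF2 value 24 h after the end of each window and splits the samples chronologically 3:1; the exact target alignment and split procedure are our assumptions, not details taken from the paper.

```python
import numpy as np

def make_windows(features, target, window=24, horizon=24):
    """Pair each `window`-hour block of inputs with the foF2 value `horizon`
    hours after the end of the block. features: (T, n_features); target: (T,)."""
    X, y = [], []
    for start in range(len(target) - window - horizon + 1):
        X.append(features[start:start + window])
        y.append(target[start + window - 1 + horizon])
    return np.array(X), np.array(y)

def chronological_split(X, y, train_frac=0.75):
    """Use the first 75% of samples for training and the rest for testing."""
    cut = int(len(X) * train_frac)
    return (X[:cut], y[:cut]), (X[cut:], y[cut:])
```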
The effectiveness of the proposed model was assessed using data from three observatories in 2000, 2008, 2009 and 2014. Depending on the geographical location of the observatory, the seasons were categorized as follows: spring consists of August to October; summer consists of November, December and January; autumn includes February to April; and winter comprises May to July. Furthermore, we selected one month from each season to assess the predictive results of the model, as detailed in Table 2.

3.1. Scenario 1: Prediction and Analysis of foF2 Variations during Low-Solar Activity Year (2008 and 2009)

The years 2008 and 2009 were the troughs of solar activity in solar cycles 23 and 24, respectively, which is reflected in the data by a marked reduction in the sunspot number and other indices.
Taking 2009 as an example, we selected one day from the forecast data of each month to display, as shown in Figure 4. Compared to IRI-2020 and BiLSTM, the predictions of the proposed model more accurately match the observed values.
For the proposed model and the comparison models, the evaluation of prediction results for the selected months of 2008 and 2009 at the Brisbane, Darwin and Townsville stations is shown in Table 3, Table 4, Table 5 and Table 6; values are given to three decimal places.
The comparison shows that the predictions of the proposed model agree closely with the actual observed values. Compared with IRI-2020 and BiLSTM, the method achieves significant improvements in autumn (March) and winter (June), and varying degrees of improvement in the other seasons. Based on the above data, the proposed model reduces the RMSE averaged over the three stations by 17.4%, 15.03%, 22.81% and 14.33% relative to the IRI-2020 model in January, March, June and October of 2008, respectively. In 2009, the average reductions in RMSE were 12.05%, 30.31%, 31.14% and 21.78% for the same months. The prediction results also improve significantly over BiLSTM. In addition, the proposed model shows improvements in MAE and MAPE relative to the other models, indicating superior performance during solar-minimum years.

3.2. Scenario 2: Prediction and Analysis of foF2 Variations during High-Solar Activity Year (2000 and 2014)

In 2000 and 2014, solar activity reached the peaks of the respective solar cycles, with yearly average sunspot numbers of 173.9 and 113.3.
Taking 2014 as an example, Figure 5 compares the predictions of the proposed method, the measured foF2 values and the results of the other models for representative months of each season at Brisbane, Darwin and Townsville. Each sub-figure in Figure 5a–l presents a single day's data for each station in the representative month of each season (October in spring, January in summer, March in autumn and June in winter). The performance of the proposed model and the other foF2 prediction models in terms of RMSE, MAE and MAPE for these months of 2000 and 2014 is compiled in Table 7, Table 8, Table 9 and Table 10; values are given to three decimal places.
From the above graphs and tables, it can be observed that the proposed model exhibits the highest foF2 prediction accuracy. On an annual basis, compared with IRI-2020, Brisbane's RMSE decreased by an average of 22.51% and 29.4% in 2000 and 2014, respectively; Darwin's decreased by 20.34% and 30.4%; and Townsville's decreased by 22.235% and 28.97%. The various metrics also indicate that the proposed method improves to varying degrees on the other two models across the other months and observation stations.

4. Discussion

4.1. Prediction Error Analysis for Each Model under Low-Solar Activity Year

In this study, the sample error (the absolute error between the predicted value and the true value, in MHz) of each foF2 prediction model was calculated and analyzed for the solar-minimum years (Scenario 1). Figure 6, Figure 7 and Figure 8 compare the cumulative distributions of the absolute errors of each model. For example, when considering \Delta foF2 < 0.3 MHz, smaller values in the figure indicate a higher concentration of data within this range, signifying more accurate predictions.
Figure 6, Figure 7 and Figure 8 show the cumulative distributions of prediction errors for the three observation stations. The proposed model has the smallest cumulative error across the samples, indicating that it achieves higher accuracy in predicting foF2 than the other models. By comparison, the IRI-2020 model's predictions show the poorest agreement with the actual values, which may be due to the model's generality preventing it from providing accurate predictions for a specific station.

4.2. Analysis of Prediction Error Results under High-Solar Activity Year

For the solar-maximum years, we calculated the errors between the predictions of each model and the measured values at the three observation stations in 2000 and 2014, and plotted histograms with fitted normal distribution curves, as shown in Figure 9, Figure 10 and Figure 11.
Based on Figure 9, Figure 10 and Figure 11, we calculated the first three moments (mean, variance and skewness) of the fitted error distributions; the results are shown in Table 11.
It can be seen from Figure 9, Figure 10 and Figure 11 and Table 11 that the error distribution of the model presented in this paper is more concentrated and better fitted than those of the other models. The curves of IRI-2020 and BiLSTM are comparatively more dispersed, indicating that the proposed method is superior for foF2 prediction. Together with the MAE values from the three observatories, these results show that the proposed model also achieves the highest prediction accuracy in solar-maximum years.

5. Conclusions

This paper introduces the temporal pattern attention mechanism into the proposed hybrid neural network, addressing the inability of typical attention mechanisms to identify temporal patterns that are valuable for prediction. We used this model to predict changes in the ionospheric parameter foF2 in low-latitude regions. The observed foF2 data were collected from three observation stations: Brisbane, Darwin and Townsville. The model inputs, in addition to foF2, include the day of year (DOY), local time (LT), F10.7, SSN, IMF Bz, Ap and Dst for the years 2000, 2008, 2009 and 2014. These ionospheric observations from high- and low-solar activity years, together with the solar activity and geomagnetic indices, were used to train the model and evaluate its predictive performance. The model's predictions were then compared with those of the standard empirical IRI-2020 ionospheric model and the BiLSTM-foF2 model. To evaluate prediction accuracy, the RMSE, MAE and MAPE were used to quantify the prediction errors of each model from different perspectives. The results show that the prediction accuracy of the proposed model improves on the other models to varying degrees, in both solar-minimum and solar-maximum years.
The accurate prediction of ionospheric foF2 is crucial for determining the operating frequency of shortwave signals, thereby reducing the complexity of frequency selection. In conclusion, the proposed model is highly effective for predicting changes in ionospheric parameters and provides significant guidance for selecting operating frequencies in shortwave communications and other HF application scenarios.
Despite the promising results achieved in this study, there are several limitations that must be acknowledged. One of the primary challenges faced by our model is the reliance on a substantial amount of historical data for accurate training. In regions with limited observational data, such as high-latitude areas, the scarcity of available data poses a significant constraint. This lack of data leads to reduced accuracy in predicting the foF2 values in these regions. To address this limitation, future research could focus on several potential improvements. Firstly, incorporating data assimilation techniques or synthetic data generation could help mitigate the impact of sparse observational data. Additionally, integrating alternative data sources, such as satellite measurements or reanalysis data, may enhance the model’s ability to predict foF2 values more accurately in data-scarce regions. Further refinement of the model’s architecture, perhaps through the use of transfer learning or domain adaptation, could also contribute to better performance in high-latitude areas. In addition, the data we used were all from observation stations in Australia. In the future, we will input data from observation stations with a larger longitude and latitude range into the model to enhance the model’s capabilities.

Author Contributions

Conceptualization, B.X.; methodology, B.X.; validation, B.X. and W.H.; formal analysis, W.H.; investigation, Y.L. and W.H.; writing—original draft preparation, W.H.; writing—review and editing, Z.X. and P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

In this paper, we used Brisbane, Darwin and Townsville stations’ ionospheric foF2 measured data from the Australian Bureau of Meteorology, SpaceWeather Services (https://www.sws.bom.gov.au/World_Data_Centre, accessed on 15 January 2024), and the solar cycle and geomagnetic information along with sunspot numbers (SSNs) from the Goddard Space Flight Center, NASA (https://omniweb.gsfc.nasa.gov/ow.html, accessed on 15 January 2024).

Acknowledgments

The authors are grateful to the Australian Bureau of Meteorology, Space Weather Services for the provision of ionospheric data.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BiLSTM    Bidirectional long short-term memory;
CNN       Convolutional neural network;
LSTM      Long short-term memory;
TPA       Temporal pattern attention;
foF2      Ionospheric F2 layer critical frequency;
IRI       International Reference Ionosphere;
SSN       Sunspot number;
UT        Universal time;
LT        Local time;
F10.7     Solar radio flux at 10.7 cm wavelength;
IMF Bz    Interplanetary magnetic field Bz component;
Dst       Disturbance storm time;
RNN       Recurrent neural network;
RMSE      Root mean square error;
MAE       Mean absolute error;
MAPE      Mean absolute percentage error.

References

  1. Wang, J.; Ding, G.; Wang, H. HF communications: Past, present, and future. China Commun. 2018, 15, 1–9. [Google Scholar] [CrossRef]
  2. Wang, J.; Shi, Y.; Yang, C.; Feng, F. A review and prospects of operational frequency selecting techniques for HF radio communication. Adv. Space Res. 2022, 69, 2989–2999. [Google Scholar] [CrossRef]
  3. Bilitza, D. International reference ionosphere 2000. Radio Sci. 2001, 36, 261–275. [Google Scholar] [CrossRef]
  4. Bilitza, D.; Reinisch, B.W. International reference ionosphere 2007: Improvements and new parameters. Adv. Space Res. 2008, 42, 599–609. [Google Scholar] [CrossRef]
  5. Bilitza, D.; McKinnell, L.A.; Reinisch, B.; Fuller-Rowell, T. The international reference ionosphere today and in the future. J. Geod. 2011, 85, 909–920. [Google Scholar] [CrossRef]
  6. Vryonides, P.; Haralambous, H. Comparison of COSMIC measurements with the IRI-2007 model over the eastern Mediterranean region. J. Adv. Res. 2013, 4, 297–301. [Google Scholar] [CrossRef]
  7. Iban, M.C.; Şentürk, E. Machine learning regression models for prediction of multiple ionospheric parameters. Adv. Space Res. 2022, 69, 1319–1334. [Google Scholar] [CrossRef]
  8. Tang, J.; Li, Y.; Yang, D.; Ding, M. An approach for predicting global ionospheric TEC using machine learning. Remote Sens. 2022, 14, 1585. [Google Scholar] [CrossRef]
  9. Yang, K.; Liu, Y. Global Ionospheric Total Electron Content Completion with a GAN-Based Deep Learning Framework. Remote Sens. 2022, 14, 6059. [Google Scholar] [CrossRef]
  10. Silva, A.; Moraes, A.; Sousasantos, J.; Maximo, M.; Vani, B.; Faria, C., Jr. Using Deep Learning to Map Ionospheric Total Electron Content over Brazil. Remote Sens. 2023, 15, 412. [Google Scholar] [CrossRef]
  11. Reddybattula, K.D.; Nelapudi, L.S.; Moses, M.; Devanaboyina, V.R.; Ali, M.A.; Jamjareegulgarn, P.; Panda, S.K. Ionospheric TEC Forecasting over an Indian Low Latitude Location Using Long Short-Term Memory (LSTM) Deep Learning Network. Universe 2022, 8, 562. [Google Scholar] [CrossRef]
  12. Li, Q.; Yang, D.; Fang, H. Two Hours Ahead Prediction of the TEC over China Using a Deep Learning Method. Universe 2022, 8, 405. [Google Scholar] [CrossRef]
  13. McGranaghan, R.M.; Mannucci, A.J.; Wilson, B.; Mattmann, C.A.; Chadwick, R. New capabilities for prediction of high-latitude ionospheric scintillation: A novel approach with machine learning. Space Weather 2018, 16, 1817–1846. [Google Scholar] [CrossRef]
  14. Mallika, L.; Ratnam, D.V.; Raman, S.; Sivavaraprasad, G. Machine learning algorithm to forecast ionospheric time delays using Global Navigation satellite system observations. Acta Astronaut. 2020, 173, 221–231. [Google Scholar] [CrossRef]
  15. Williscroft, L.A.; Poole, A.W.V. Neural networks, foF2, sunspot number and magnetic activity. Geophys. Res. Lett. 1996, 23, 3659–3662. [Google Scholar] [CrossRef]
  16. McKinnell, L.A.; Poole, A.W.V. Ionospheric variability and electron density profile studies with neural networks. Adv. Space Res. 2001, 27, 83–90. [Google Scholar] [CrossRef]
  17. Athieno, R.; Jayachandran, P.T.; Themens, D.R. A neural network-based foF2 model for a single station in the polar cap. Radio Sci. 2017, 52, 784–796. [Google Scholar] [CrossRef]
  18. Bi, C.; Ren, P.; Yin, T.; Xiang, Z.; Zhang, Y. Modeling and Forecasting Ionospheric foF2 Variation in the Low Latitude Region during Low and High Solar Activity Years. Remote Sens. 2022, 14, 5418. [Google Scholar] [CrossRef]
  19. Tapping, K.F. The 10.7 cm solar radio flux (F10. 7). Space Weather 2013, 11, 394–406. [Google Scholar] [CrossRef]
  20. Perna, L.; Pezzopane, M. foF2 vs solar indices for the Rome station: Looking for the best general relation which is able to describe the anomalous minimum between cycles 23 and 24. J. Atmos. Sol.-Terr. Phys. 2016, 148, 13–21. [Google Scholar] [CrossRef]
  21. Joshua, E.O.; Nzekwe, N.M. foF2 correlation studies with solar and geomagnetic indices for two equatorial stations. J. Atmos. Sol.-Terr. Phys. 2012, 80, 312–322. [Google Scholar] [CrossRef]
  22. Liu, L.; Wan, W.; Ning, B. Statistical modeling of ionospheric foF2 over Wuhan. Radio Sci. 2004, 39, 1–10. [Google Scholar] [CrossRef]
  23. Cao, B.; Feng, J.; An, L. Long-term relationship between foF2 from China ionospheric stations and solar activity during the 24th solar activity cycle. Radio Sci. 2022, 57, 1–11. [Google Scholar] [CrossRef]
  24. Bai, H.; Feng, F.; Wang, J.; Wu, T. Nonlinear dependence study of ionospheric F2 layer critical frequency with respect to the solar activity indices using the mutual information method. Adv. Space Res. 2019, 64, 1085–1092. [Google Scholar] [CrossRef]
  25. Mursula, K.; Pevtsov, A.A.; Asikainen, T.; Tähtinen, I.; Yeates, A.R. Transition to a weaker Sun: Changes in the solar atmosphere during the decay of the Modern Maximum. A&A 2024, 685, A170. [Google Scholar] [CrossRef]
  26. Mikhailov, A.V.; Marin, D. Geomagnetic control of the foF2 long-term trends. Ann. Geophys. 2000, 18, 653–665. [Google Scholar] [CrossRef]
  27. Sugiura, M.; Kamei, T. Equatorial Dst Index, 1957–1986; Berthelier, A., Menvielle, M., Eds.; ISGI Publications Office: Paris, France, 1991. [Google Scholar]
  28. Sugiura, M. Hourly Values of Equatorial Dst for the IGY. Ann. Int. Geophys. Year 1964, 35, 9. [Google Scholar]
  29. Jia, G.; Luo, W.; Yu, X.; Zhu, Z.; Chang, S. Determining the Day-to-Day Occurrence of Low-Latitude Scintillation in Equinoxes at Sanya during High Solar Activities (2012–2013). Atmosphere 2023, 14, 1242. [Google Scholar] [CrossRef]
  30. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  31. Shewalkar, A.; Nyavanandi, D.; Ludwig, S.A. Performance evaluation of deep neural networks applied to speech recognition: RNN, LSTM and GRU. J. Artif. Intell. Soft Comput. Res. 2019, 9, 235–245. [Google Scholar] [CrossRef]
  32. Nosouhian, S.; Nosouhian, F.; Kazemi Khoshouei, A. A review of recurrent neural network architecture for sequence learning: Comparison between LSTM and GRU. Preprints 2021, 2021070252. [Google Scholar] [CrossRef]
  33. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. The performance of LSTM and BiLSTM in forecasting time series. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 3285–3292. [Google Scholar]
  34. DiPietro, R.; Hager, G.D. Deep learning: RNNs and LSTM. In Handbook of Medical Image Computing and Computer Assisted Intervention; Academic Press: Cambridge, UK, 2020; pp. 503–519. [Google Scholar]
  35. Lei, D.; Liu, H.; Le, H.; Huang, J.; Yuan, J.; Li, L.; Wang, Y. Ionospheric TEC Prediction Base on Attentional BiGRU. Atmosphere 2022, 13, 1039. [Google Scholar] [CrossRef]
  36. Tang, J.; Li, Y.; Ding, M.; Liu, H.; Yang, D.; Wu, X. An Ionospheric TEC Forecasting Model Based on a CNN-LSTM-Attention Mechanism Neural Network. Remote Sens. 2022, 14, 2433. [Google Scholar] [CrossRef]
  37. Han, S.; Dong, H. A Temporal Window Attention-Based Window-Dependent Long Short-Term Memory Network for Multivariate Time Series Prediction. Entropy 2023, 25, 10. [Google Scholar] [CrossRef]
  38. Wang, X.; Dong, S.; Zhang, R. An Integrated Time Series Prediction Model Based on Empirical Mode Decomposition and Two Attention Mechanisms. Information 2023, 14, 610. [Google Scholar] [CrossRef]
  39. Shih, S.Y.; Sun, F.K.; Lee, H. Temporal pattern attention for multivariate time series forecasting. Mach. Learn. 2019, 108, 1421–1441. [Google Scholar] [CrossRef]
Figure 1. In 2009 and 2014, time series of hourly data: (a,b) geomagnetic index, Dst; (c,d) geomagnetic index, Ap; (e,f) solar activity index, F10.7; (g,h) geomagnetic index, IMF BZ; (i,j) solar activity index, SSN.
Figure 2. Temporal pattern attention mechanism.
Figure 3. Structure diagram of hybrid neural network.
Figure 4. (a–l) Comparison of the ionospheric prediction models' performance on test samples in 2009.
Figure 5. (a–l) Comparison of the ionospheric prediction models' performance on test samples in 2014.
Figure 6. For 2008 and 2009, the model’s cumulative distributions of sample errors at Brisbane: (a,d) IRI-2020; (b,e) BiLSTM-foF2; (c,f) proposed model.
Figure 7. For 2008 and 2009, the model’s cumulative distributions of sample errors at Darwin: (a,d) IRI-2020; (b,e) BiLSTM-foF2; (c,f) proposed model.
Figure 8. For 2008 and 2009, the model's cumulative distributions of sample errors at Townsville: (a,d) IRI-2020; (b,e) BiLSTM-foF2; (c,f) proposed model.
Figure 9. For 2000 and 2014, histograms and the curve of the normal distribution for prediction errors at Brisbane: (a,d) IRI-2020; (b,e) BiLSTM-foF2; (c,f) proposed model.
Figure 10. For 2000 and 2014, histograms and the curve of the normal distribution for prediction errors at Darwin: (a,d) IRI-2020; (b,e) BiLSTM-foF2; (c,f) proposed model.
Figure 11. For 2000 and 2014, histograms and the curve of the normal distribution for prediction errors at Townsville: (a,d) IRI-2020; (b,e) BiLSTM-foF2; (c,f) proposed model.
Table 1. Configuration of the BiLSTM-based foF2 prediction model.
Model Configuration | BiLSTM-foF2
Learning Method | Deep learning
Number of Hidden Units 1 | 250
Number of Hidden Units 2 | 250
Epoch | 200
Minimum Batch Size | 512
Prediction Interval (Δ) | 24 h
Window Length (k) | 24
Table 2. Input data selection and details.
Input Data | Year | Summer | Autumn | Winter | Spring
Season | 2000 | January | March | June | October
 | 2008 | January | March | June | October
 | 2009 | January | March | June | October
 | 2014 | January | March | June | October
Smoothed monthly values of SSN | 2000 | 145.1 | 191.9 | 199.6 | 159.3
 | 2008 | 3.9 | 7.5 | 3.4 | 5.3
 | 2009 | 1.3 | 6 | 6.1 | 7.4
 | 2014 | 109.3 | 114.3 | 114.1 | 101.9
Table 3. Comparison of ionospheric model predictions for foF2 during January 2008 and 2009.
Observation Station | Model Configuration | RMSE (MHz) 2008, 2009 | MAE (MHz) 2008, 2009 | MAPE 2008, 2009
Brisbane | IRI-2020 | 0.830, 0.813 | 0.668, 0.659 | 13.632%, 14.245%
 | BiLSTM-foF2 | 0.785, 0.760 | 0.599, 0.581 | 12.794%, 12.964%
 | Proposed method | 0.726, 0.715 | 0.567, 0.573 | 11.773%, 11.842%
Darwin | IRI-2020 | 1.446, 1.203 | 1.063, 0.900 | 17.077%, 15.698%
 | BiLSTM-foF2 | 1.293, 1.189 | 0.994, 0.907 | 17.046%, 15.082%
 | Proposed method | 1.186, 1.039 | 0.877, 0.767 | 14.317%, 13.937%
Townsville | IRI-2020 | 1.092, 1.201 | 0.847, 0.905 | 15.875%, 15.886%
 | BiLSTM-foF2 | 1.012, 0.992 | 0.800, 0.860 | 14.822%, 14.927%
 | Proposed method | 0.855, 0.856 | 0.672, 0.708 | 12.724%, 13.448%
Table 4. Comparison of ionospheric model predictions for foF2 during March 2008 and 2009.
Observation Station | Model Configuration | RMSE (MHz) 2008, 2009 | MAE (MHz) 2008, 2009 | MAPE 2008, 2009
Brisbane | IRI-2020 | 1.037, 0.866 | 0.783, 0.691 | 14.655%, 14.233%
 | BiLSTM-foF2 | 0.926, 0.722 | 0.732, 0.555 | 13.917%, 11.233%
 | Proposed method | 0.862, 0.605 | 0.690, 0.452 | 13.004%, 9.226%
Darwin | IRI-2020 | 1.227, 1.358 | 0.943, 1.043 | 16.740%, 18.473%
 | BiLSTM-foF2 | 1.152, 1.287 | 0.856, 0.956 | 15.331%, 15.312%
 | Proposed method | 1.133, 1.137 | 0.825, 0.818 | 15.211%, 13.840%
Townsville | IRI-2020 | 1.149, 1.348 | 0.864, 1.036 | 15.183%, 18.722%
 | BiLSTM-foF2 | 0.979, 0.972 | 0.777, 0.783 | 14.847%, 16.093%
 | Proposed method | 0.913, 0.748 | 0.683, 0.559 | 12.874%, 10.922%
Table 5. Comparison of ionospheric model predictions for foF2 during June 2008 and 2009.
Observation Station | Model Configuration | RMSE (MHz) 2008, 2009 | MAE (MHz) 2008, 2009 | MAPE 2008, 2009
Brisbane | IRI-2020 | 0.680, 0.923 | 0.531, 0.684 | 12.848%, 15.395%
 | BiLSTM-foF2 | 0.572, 0.724 | 0.468, 0.467 | 10.303%, 11.414%
 | Proposed method | 0.487, 0.612 | 0.396, 0.433 | 9.912%, 9.453%
Darwin | IRI-2020 | 0.786, 0.792 | 0.622, 0.635 | 16.524%, 17.182%
 | BiLSTM-foF2 | 0.698, 0.685 | 0.545, 0.547 | 14.727%, 14.582%
 | Proposed method | 0.628, 0.566 | 0.495, 0.510 | 13.567%, 12.185%
Townsville | IRI-2020 | 0.606, 0.846 | 0.496, 0.668 | 12.777%, 16.986%
 | BiLSTM-foF2 | 0.548, 0.659 | 0.431, 0.585 | 11.748%, 13.775%
 | Proposed method | 0.508, 0.582 | 0.398, 0.449 | 10.033%, 11.192%
Table 6. Comparison of ionospheric model predictions for foF2 during October 2008 and 2009.
Observation Station | Model Configuration | RMSE (MHz) 2008, 2009 | MAE (MHz) 2008, 2009 | MAPE 2008, 2009
Brisbane | IRI-2020 | 0.612, 0.723 | 0.479, 0.573 | 10.428%, 11.434%
 | BiLSTM-foF2 | 0.609, 0.740 | 0.488, 0.589 | 10.563%, 11.238%
 | Proposed method | 0.568, 0.653 | 0.433, 0.502 | 10.182%, 10.109%
Darwin | IRI-2020 | 1.274, 1.285 | 0.990, 1.050 | 18.890%, 17.676%
 | BiLSTM-foF2 | 1.073, 1.153 | 0.889, 0.919 | 16.677%, 14.796%
 | Proposed method | 0.924, 0.979 | 0.703, 0.792 | 13.398%, 12.551%
Townsville | IRI-2020 | 0.887, 1.290 | 0.678, 1.054 | 13.575%, 17.701%
 | BiLSTM-foF2 | 0.889, 0.968 | 0.685, 0.893 | 13.198%, 15.821%
 | Proposed method | 0.813, 0.879 | 0.643, 0.721 | 12.769%, 13.112%
Table 7. Comparison of ionospheric model predictions for foF2 during January 2000 and 2014.
Observation Station | Model Configuration | RMSE (MHz) 2000, 2014 | MAE (MHz) 2000, 2014 | MAPE 2000, 2014
Brisbane | IRI-2020 | 1.128, 1.050 | 0.896, 0.849 | 11.059%, 10.215%
 | BiLSTM-foF2 | 1.095, 1.014 | 0.883, 0.821 | 10.025%, 9.901%
 | Proposed method | 0.974, 0.783 | 0.718, 0.612 | 8.256%, 7.625%
Darwin | IRI-2020 | 1.819, 1.943 | 1.409, 1.541 | 14.793%, 15.063%
 | BiLSTM-foF2 | 1.748, 1.580 | 1.281, 1.215 | 12.852%, 12.791%
 | Proposed method | 1.591, 1.409 | 1.192, 1.149 | 12.191%, 12.098%
Townsville | IRI-2020 | 1.300, 1.133 | 1.038, 0.921 | 12.520%, 10.110%
 | BiLSTM-foF2 | 1.245, 1.094 | 1.085, 0.887 | 12.105%, 9.467%
 | Proposed method | 0.979, 0.987 | 0.783, 0.817 | 8.965%, 9.222%
Table 8. Comparison of ionospheric model predictions for foF2 during March 2000 and 2014.
Observation Station | Model Configuration | RMSE (MHz) 2000, 2014 | MAE (MHz) 2000, 2014 | MAPE 2000, 2014
Brisbane | IRI-2020 | 1.030, 1.585 | 0.822, 1.455 | 8.801%, 15.492%
 | BiLSTM-foF2 | 0.900, 1.259 | 0.681, 0.978 | 7.232%, 9.199%
 | Proposed method | 0.762, 0.909 | 0.604, 0.713 | 6.551%, 7.480%
Darwin | IRI-2020 | 1.606, 2.267 | 1.199, 1.731 | 10.550%, 15.042%
 | BiLSTM-foF2 | 1.612, 1.855 | 1.147, 1.489 | 10.208%, 12.983%
 | Proposed method | 1.412, 1.448 | 1.063, 1.156 | 9.420%, 10.927%
Townsville | IRI-2020 | 0.991, 1.351 | 0.810, 1.165 | 8.305%, 11.873%
 | BiLSTM-foF2 | 0.982, 1.213 | 0.837, 1.021 | 8.219%, 9.335%
 | Proposed method | 0.875, 0.841 | 0.696, 0.743 | 7.379%, 7.912%
Table 9. Comparison of ionospheric model predictions for foF2 during June 2000 and 2014.
Observation Station | Model Configuration | RMSE (MHz) 2000, 2014 | MAE (MHz) 2000, 2014 | MAPE 2000, 2014
Brisbane | IRI-2020 | 0.951, 1.155 | 0.773, 0.950 | 12.077%, 16.848%
 | BiLSTM-foF2 | 0.835, 0.800 | 0.661, 0.724 | 10.356%, 11.267%
 | Proposed method | 0.743, 0.684 | 0.579, 0.585 | 8.972%, 10.206%
Darwin | IRI-2020 | 1.652, 1.902 | 1.294, 1.658 | 19.653%, 33.547%
 | BiLSTM-foF2 | 1.570, 1.282 | 1.263, 0.988 | 17.609%, 18.776%
 | Proposed method | 1.334, 1.107 | 1.073, 0.759 | 13.261%, 12.409%
Townsville | IRI-2020 | 1.396, 1.605 | 1.185, 1.403 | 19.048%, 27.045%
 | BiLSTM-foF2 | 1.083, 1.057 | 0.875, 0.942 | 14.683%, 15.929%
 | Proposed method | 0.872, 0.742 | 0.677, 0.698 | 10.180%, 12.329%
Table 10. Comparison of ionospheric model predictions for foF2 during October 2000 and 2014.
Observation Station | Model Configuration | RMSE (MHz) 2000, 2014 | MAE (MHz) 2000, 2014 | MAPE 2000, 2014
Brisbane | IRI-2020 | 1.265, 1.017 | 0.926, 0.807 | 10.898%, 9.637%
 | BiLSTM-foF2 | 1.191, 1.032 | 0.903, 0.801 | 10.571%, 9.814%
 | Proposed method | 1.023, 0.928 | 0.818, 0.558 | 9.159%, 8.795%
Darwin | IRI-2020 | 1.847, 1.459 | 1.356, 1.069 | 11.576%, 10.243%
 | BiLSTM-foF2 | 1.748, 1.435 | 1.271, 1.072 | 11.158%, 10.840%
 | Proposed method | 1.501, 1.223 | 1.089, 0.924 | 10.069%, 9.257%
Townsville | IRI-2020 | 1.401, 0.957 | 1.069, 0.755 | 11.318%, 8.507%
 | BiLSTM-foF2 | 1.312, 0.951 | 1.045, 0.748 | 11.049%, 8.417%
 | Proposed method | 1.193, 0.847 | 0.940, 0.689 | 10.115%, 8.013%
Table 11. Comparison of the first three moments of the normal distribution of the model prediction errors during 2000 and 2014.
Observation Station | Model Configuration | Mean (MHz) 2000, 2014 | Variance (MHz²) 2000, 2014 | Skewness 2000, 2014
Brisbane | IRI-2020 | −0.459, −0.341 | 1.454, 0.560 | 0.369, −0.520
 | BiLSTM-foF2 | −0.402, −0.129 | 0.727, 0.532 | 0.126, −0.297
 | Proposed method | −0.237, 0.108 | 0.688, 0.528 | 0.061, 0.317
Darwin | IRI-2020 | 0.763, −0.369 | 3.001, 3.564 | −0.831, −0.582
 | BiLSTM-foF2 | −0.747, −0.674 | 2.992, 2.755 | −0.536, −0.556
 | Proposed method | −0.504, −0.285 | 2.689, 2.310 | −0.481, −0.510
Townsville | IRI-2020 | 0.502, −0.362 | 1.392, 1.638 | 0.262, 0.283
 | BiLSTM-foF2 | −0.419, −0.371 | 1.226, 1.322 | 0.258, 0.287
 | Proposed method | −0.202, −0.319 | 1.060, 1.096 | 0.233, 0.266
