Article

Artificial Neural Network Model to Predict the Exportation of Traditional Products of Colombia

by
Andrea C. Gómez
*,
Lilian A. Bejarano
and
Helbert E. Espitia
Facultad de Ingeniería, Universidad Distrital Francisco José de Caldas, Bogotá 110231, Colombia
*
Author to whom correspondence should be addressed.
Computation 2024, 12(11), 221; https://doi.org/10.3390/computation12110221
Submission received: 23 September 2024 / Revised: 21 October 2024 / Accepted: 27 October 2024 / Published: 4 November 2024

Abstract: This article develops the design, training, and validation of a computational model to predict the exportation of traditional Colombian products using artificial neural networks. This work aims to obtain a model using a single multilayer neural network. The number of historical input data (delays), the number of layers, and the number of neurons were considered for the neural network design. In this way, an experimental design of 64 configurations of the neural network was performed. The main difficulty addressed in this work is the significant difference (in tons) in the values of the considered products. The results show the effect of the different value ranges, and one of the proposals made allows this limitation to be handled appropriately. In summary, this work seeks to provide essential information for formulating a model for efficient and practical application.

1. Introduction

The world unfolds into the dynamics of globalization, and international exchanges are conducted in all areas of human activity, thus constituting a continuous process in which individuals and collectives seek to innovate [1,2,3]. Countries that interact according to the dynamics of globalization establish social, political, economic, and technological connections with other countries, facilitating joint work [4] and strengthening regional ties [5].
The possibility of expanding into international markets allows for the economic growth of a country or region. Globalized economies are a significant factor in economic growth, and, although their positive impact on economic growth depends on various and complex factors such as corruption, policies, trade, and capital restrictions, more globalized countries show higher growth rates [6,7,8].
Due to the effect of globalization, world trade has expanded, requiring countries to diversify exports, which are considered a tool to accelerate the economic growth rate in developing economies. Additionally, export diversification minimizes the impact of export instability on economic growth [9]. Exports, in terms of economic development, can improve a country’s international economic standing [10].
A country’s economic complexity measures its knowledge in the goods and services it produces and is calculated based on the diversity of its exports and the number of countries that produce them [11]. The Atlas of Economic Complexity [12] for Colombia shows that oil and coal have significant export percentages, with 23.11% and 10.45% respectively. On the other hand, coffee and ferroalloys represent 5.73% and 1.03% of Colombian exports, respectively.
Studying exports in the context of a globalized world is essential to promote a country’s economic growth. Although the positive influence of exports and globalization dynamics requires a multi-factorial study in economic, social, political, ecological, and technological fields, focusing on exports contributes to decision-making in building this process [13,14].
Just as globalization occurs in all areas of human activity, the evolution of knowledge and technology covers all aspects of human life [15]. It is not only about the development of technology, but also about the benefits of data processing and analysis, which is why machine learning, understood as the automated detection of significant patterns in data, “in the past couple of decades, has become a common tool in almost any task that requires information extraction from large data set” [16]. In this context, the present article develops the design and implementation of a computational model using artificial neural networks to predict the export of coffee, coal, oil, and ferronickel, four traditional Colombian products.
In the context of artificial intelligence and machine learning applications in economic forecasting, ref. [17] presents a review of artificial intelligence (AI), machine learning (ML), and building information modeling (BIM), motivated by the construction sector's low performance, high waste generation, low productivity, and deficient use of emerging technology. The significant amount of material waste in construction developments has financial and ecological consequences. The review states that innovations such as BIM and AI offer efficient solutions for the construction sector by predicting waste material generation.
According to [18], delays in resource delivery are among the most significant issues in a construction schedule. For instance, delays in a construction project arise from varied factors, and the problem is intensified when multiple delays happen at the same time. In this regard, ref. [18] proposed a methodology for enhancing construction methods by minimizing delays and waste of resources using technological tools like artificial intelligence, building information modeling, digital twin (DT), and Internet of Things (IoT) technologies.
Another related work is presented in [19], using time series analysis via AI proposing an automated capital prediction system. The authors state that it is crucial to forecast future capital based on the predictions of financial capital inflows and outflows to guarantee capital optimization given the prevailing macroeconomic conditions. The time series forecasting is addressed using the wavelet long short-term memory (LSTM). The authors forecast the strong volatility in financial capital by performing a two-part analysis; the first part considers the linear component, and the second considers the nonlinear embedded wavelet components.
Finally, reference [20] presents a review of energy efficiency in production systems. The authors observed that studies focused on enhancing production energy efficiency through AI offer encouraging solutions for sustainable and efficient energy production. In this way, the adoption of AI has the potential to address economic and environmental challenges in the areas of production efficiency, forecasting accuracy, and energy consumption.

1.1. Related Works

With the growth of data collection, methods developed in big-data environments have become standard, mainly due to their performance in tasks such as classification and prediction. Their strong performance, which offers methodological and practical advantages, invites traditional statistical, economic, and econometric methods to adopt new tools such as algorithms and methods from machine learning [21,22]. Figure 1 displays the general context of related works on applications of machine learning in agriculture (which are addressed below).
In the literature on predictive methods, several comparisons have been made between machine learning and traditional statistical, economic, and econometric methods. For instance, authors in [23,24] study models such as Gaussian process functional data analysis (GPFDA), random forest regression (RFR), support vector machines (SVMs), deep neural networks (DNNs), Ada boost (AB), and artificial neural networks (ANNs) to predict the growth of live bulb weight for garlic and onion. The papers conclude that, in general, models that are not purely machine learning-based, such as the functional concurrence regression (FCR) model and autoregressive integrated moving average with exogenous variables (ARIMAX), demonstrate a suitable performance.
On the other hand, refs. [25,26,27] concluded that machine learning methods outperform traditional statistical models, as they allow the analysis of large and diverse datasets, identify patterns, have greater accuracy, and are capable of capturing the nonlinearity and sporadicity of data. The models used for the predictions, whose performances were compared, included support vector machines, autoregressive integrated moving average (ARIMA), generalized linear models (GLMs), support vector regression (SVR), single-shot convolutional neural networks (SSCNNs), convolutional neural network autoregressive (CNNA), long short-term memory (LSTM), seasonal autoregressive integrated moving average with exogenous regressors (SARIMAX), elastic net (EN), random forest (RF), and artificial neural networks.
The predictive capacity of machine learning models has been tested in various fields, such as energy [28], sustainability [29,30], the environment [31,32], medicine [33,34], and economics [31,35]. Machine learning models are particularly noted for their ability to analyze, interpret, and interrelate variables, reduce overfitting, provide guidance in informed decision-making and policy creation, reduce computation time, and offer the comprehensive analysis of the uncertainty of parameters and inputs. Combining several machine learning algorithms or methods has demonstrated greater accuracy and better performance in cases, as displayed in the works [28,29,35].
In the field of economics, machine learning models are increasingly used for predictions. For example, ref. [36] proposes data-driven stock forecasting models based on neural networks, such as recurrent NNs (RNNs), convolutional NNs, transformers, graph NNs (GNNs), generative adversarial networks (GANs), and large language models (LLMs). All models have advantages and disadvantages, such as the CNN model’s limitations in capturing global dependencies within time series data. In general, all models demonstrate outstanding performance. In this regard, ref. [37] implements a data-driven automated machine learning model for research in economics. The authors conclude that this approach reduces complexity, improves data analysis efficiency, and serves as an effective method for policy evaluation and economic indicator analysis.
There are also machine learning methods that can predict behaviors perceived as more complex, involving more variables and intricate relationships, such as those in international trade. Paper [38] introduces novelty in three key aspects: data from the United Nations Commodity Trade Statistics Database (UNCTSD), which enhances the granularity and temporal precision of the analysis; state-of-the-art nonlinear machine learning algorithms, which improve the ability to predict bilateral trade values; and a strategy matrix that combines predicted opportunities for export promotion with economic complexity and relatedness metrics. All this work helps guide the process of optimizing trade strategies, identifying new markets, achieving rapid economic growth, and keeping pace with the dynamics of international trade.
While machine learning models and algorithms are great tools for making predictions and forecasts, complementing them with other knowledge or combining them tends to yield better results, broader coverage, and diversity of data. For example, paper [39] integrates dual-output Monte Carlo dropout with deep neural networks, obtaining the optimal configuration with three intermediate layers. Meanwhile, reference [40] proposes an evolutionary morphological neural network that combines the neural network with fuzzy system theory to establish a predictive model for exports in the Chongqing province. This combination solves the problem of fuzziness, nonlinearity, and high dimensions, and handles the modeling of export predictions.
To leverage the potential of machine learning in prediction, works also complement and combine various knowledge areas, as in the studies mentioned above and in [41], which combines random forest with a graph convolutional network (GCN) to capture the relationships and dependencies between nodes and a graph attention network (GAT) to determine the importance of neighboring nodes. The decision to normalize data is also important, as indicated in [41]: normalization ensures that the numerical features are uniformly scaled across variables, creating a standardized and consistent framework for analysis and mitigating potential biases arising from differences in feature scales. Paper [42] proposes a predictive model for high-tech exports using neural networks, logistic regression (LR), and support vector regression (SVR), where the ANN proves to be the most successful model. The authors also note that normalization standardizes data values while preserving information and preventing distortion from value range differences.
Another relevant element to consider is the model configuration. In this regard, ref. [43] evaluates the accuracy of machine learning predictions on the exports from the Czech Republic to China. The process compares the accuracy of the ANN, multilayer perceptron (MLP), and the regression time series neural network (RTSNN). In addition, the authors test different time delay configurations. Reference [44] proposes a method to predict export volumes in Zhejiang Province using a back-propagation neural network (BPNN). The authors normalize the data to achieve smoothness and eliminate the influence of network noise, then determine the structure and training parameters of the BPNN. They conclude that the accuracy of the single-factor forecasting model is higher than that of the multivariable forecasting model.
Although predictive models can combine sophisticated machine learning models capable of analyzing complex relationships and massive datasets, establishing a simple model with a moderate dataset can yield significant results. As observed in the review, neural networks are widely used for these types of applications. For all the reasons above, the following work utilizes an integrated neural network to predict the export of four traditional Colombian products, and explores two models with different configurations to contrast predictive accuracy.

1.2. Approach and Paper Organization

This document presents a proposal for the design and implementation of a computational model to predict the exportation of traditional Colombian products using artificial neural networks. The performance and features of the proposed models are examined, a comprehensive experimental design is carried out to determine the best configuration of the neural network for the two proposed models, and the applicability of these models in a real environment is discussed.
The objective of this work is to obtain a model to predict the export of traditional products from Colombia using a single neural network; that is, separate neural networks are not used for each product. The main limitation addressed in this work is the significant difference in the values of the considered products (in tons). Accordingly, the main contribution is a model to predict exports that manages the imbalance of data values.
The document is organized as follows. Section 2 describes the artificial neural networks and Section 3 presents the dataset employed. Meanwhile, Section 4 describes the aspects related to the models’ implementation. Then, Section 5 presents the results. Finally, the discussion and conclusions are given in Section 6 and Section 7.

2. Artificial Neural Network

Artificial neural networks are computational systems inspired by the biological neural networks that constitute animal brains. In itself, a neural network is rooted in a collection of nodes or neurons, each able to transmit and receive signals to and from other nodes, analogously to biological synapses [45,46,47].
Usually, a set of input values is weighted and summed in a neuron and subsequently passed through a nonlinear function. The connections between nodes, called edges, carry weights that vary according to the learning process, increasing or decreasing depending on the signal’s strength. Another important aspect is that each of these neurons belongs to a certain layer, and the different layers can treat the information in different ways. In addition, the layers are ordered from the initial layer to the final or solution layer. However, the information does not have to follow a linear path, meaning that it can travel through various layers, as displayed in Figure 2.
Equation (1) provides the relationship of inputs and outputs of a neuron: $l$ is the layer, $i$ the neuron, $z_i^l$ the neuron output, $w_{ij}^l$ the weights (from neuron $j$ to neuron $i$), $b_i^l$ the bias, and $f$ the activation function. The synaptic weight $w_{ij}^l$ represents the knowledge gained through the neuron's training process. In addition, $l$ corresponds to the index of the respective layer, and $N_l$ is the number of neurons in layer $l$; thus, $N_{l-1}$ in Equation (1) refers to the number of neurons in the previous layer.
$z_i^l = f\left( \sum_{j=1}^{N_{l-1}} w_{ij}^{l} z_j^{l-1} + b_i^{l} \right)$    (1)
A widely used activation function is the sigmoid, given by Equation (2). Figure 3 displays the graphical representation of Equation (2).
$f(x) = \dfrac{1}{1 + e^{-x}}$    (2)
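As a minimal illustrative sketch, Equations (1) and (2) amount to the following forward pass through one layer. The snippet is in Python rather than the MATLAB environment used later in this work, and the weights, biases, and inputs are arbitrary example values:

```python
import math

def sigmoid(x):
    """Equation (2): f(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(z_prev, W, b):
    """Equation (1): z_i^l = f(sum_j w_ij^l * z_j^{l-1} + b_i^l).

    W holds one row of weights per neuron in layer l, z_prev is the
    output vector of layer l-1, and b is the bias vector of layer l.
    """
    return [sigmoid(sum(w * z for w, z in zip(row, z_prev)) + b_i)
            for row, b_i in zip(W, b)]

# A layer with 2 neurons fed by 3 inputs (arbitrary example values)
W = [[0.2, -0.5, 0.1],
     [0.4, 0.3, -0.2]]
b = [0.0, 0.1]
z = layer_forward([1.0, 0.5, -1.0], W, b)
```

Since the sigmoid maps any real input into $(0, 1)$, every layer output stays bounded, which is why the models below rescale their outputs back to tons with a gain factor.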

3. Dataset Employed

The dataset used is taken from DANE (https://www.dane.gov.co/index.php/estadisticas-por-tema/comercio-internacional/exportaciones, accessed on 3 June 2024), which provides a total of 1536 samples (384 for each product) acquired monthly from January 1992 to March 2024, recording the exportation of four products in tons, as seen in Figure 4. The traditional exportation products considered are:
  • $x_1$: coffee.
  • $x_2$: coal.
  • $x_3$: petroleum and its derivatives.
  • $x_4$: ferronickel.
Figure 5 displays the histogram of the exportation data, where the imbalance in the values of product exports (in tons) is observed; this aspect is critical in this work to achieve an adequate model.

4. Model Design and Implementation Process

The objective of this implementation is to propose an integral model to predict the exportation of each product. The main limitation observed is the range of values for the exportation of each product; thus, two approaches to develop the model are considered:
  • M1: integrated neural network model.
  • M2: integrated neural network model with weighted output.
An experimental design is carried out based on the number of signal delays, hidden layers, and neurons employed, aiming to determine the most suitable configuration of the neural network. In this way, the experimental design seeks to cover a wide range of neural network configurations while adjusting to computational limitations (complexity and processing time). Following [48,49], a factorial design is performed where the possible values of the neural network parameters are established, and then all possible combinations of these parameters are determined. For each configuration and model, the results of the training and validation process are presented. The training of each neural network is performed using the “newff” and “train” MATLAB commands, where the Levenberg–Marquardt backpropagation algorithm is employed. The PC utilized is a Lenovo IdeaPad 5 14ITL05 with an 11th Gen Intel Core i7-1165G7 processor at 2.80 GHz, 16.0 GB of RAM, and Windows 10.
Considering that the initialization of the neural network is random, training for each neural network is carried out 20 times. In order to perform a factorial design [48,49], the assignment of the number of signal delays and layers is considered linear, with $m = 1, 2, 3, 4$. The number of neurons is considered by taking $2^n$, with $n = 1, 2, 3, 4$. In this way, the possible configurations are
  • Delays: 1, 2, 3, and 4.
  • Layers: 1, 2, 3, and 4.
  • Neurons: 2, 4, 8, and 16.
After the data have been encoded according to the configuration and model to be used, 80% of the data are employed for training and 20% for validation; this split is used to detect possible overtraining.
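The factorial design above and the 80/20 split can be sketched as follows. This is a hypothetical Python illustration, not the authors' MATLAB code, and the sequential (non-shuffled) split is an assumption:

```python
import itertools

# Factorial design sketch: every combination of delays, layers, and
# neurons per layer, 4 x 4 x 4 = 64 configurations in total.
DELAYS = [1, 2, 3, 4]
LAYERS = [1, 2, 3, 4]
NEURONS = [2, 4, 8, 16]   # 2^n with n = 1, 2, 3, 4
RESTARTS = 20             # random initializations per configuration

configurations = list(itertools.product(DELAYS, LAYERS, NEURONS))

def split_train_validation(samples):
    """80% of the samples for training, the remaining 20% for validation."""
    cut = int(0.8 * len(samples))
    return samples[:cut], samples[cut:]

# 384 monthly samples per product
train, valid = split_train_validation(list(range(384)))
```

With 20 random restarts per configuration, each model requires 64 × 20 = 1280 training runs, which is the computational limitation the design trades off against.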

4.1. Integrated Neural Network Model

The first approach consists of a single neural network implementing the model. The data are normalized considering the highest value among the exported products. The difficulty with this approach is that coffee and ferronickel have remarkably low values compared with coal and petroleum (oil). The diagram for the integrated neural network model is presented in Figure 6. The variables $x_1$ to $x_4$ are encoded with values in $\{0, 1\}$ to select the product to predict, the variable $y[n]$ is the datum to predict, and the variables $y[n-1]$ to $y[n-N]$ are the previous values of the output, where $N$ is the total number of delays, that is, the number of previous data points used in the prediction.
Normalization is performed by dividing all data by the maximum value present in the data; thus, the output gain of the neural network is calculated as the maximum value over all data, $K_T = \max(Y)$, where $Y = [y_1, y_2, \ldots, y_n]$ corresponds to all output values.
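A minimal sketch of this global normalization follows, with invented toy magnitudes that mimic the imbalance between products (the values are illustrative only, not DANE data):

```python
def normalize_global(series_by_product):
    """M1 sketch: divide every sample by the single maximum over all
    products, K_T = max(Y)."""
    k_t = max(max(series) for series in series_by_product.values())
    scaled = {name: [y / k_t for y in series]
              for name, series in series_by_product.items()}
    return scaled, k_t

# Invented toy magnitudes (tons): coal and petroleum dwarf coffee and
# ferronickel, so the latter end up squeezed near zero.
data = {"coffee":      [60_000, 80_000],
        "coal":        [7_000_000, 8_000_000],
        "petroleum":   [1_500_000, 1_800_000],
        "ferronickel": [10_000, 12_000]}
scaled, k_t = normalize_global(data)
```

In this toy case the coffee series is compressed into roughly the top hundredth of the $[0, 1]$ range, which illustrates why the fit for the low-valued products degrades under this scheme.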

4.2. Integrated Neural Network Model with Weighted Output

In this second approach, normalization is carried out separately for each product; in this way, the disparity in the range of product values is eliminated. The output of the neural network is multiplied by the respective scale factor to obtain the real value for each product. The adjustment made by the scale factor is carried out using the variables $x_1, \ldots, x_4$, which encode the respective products. The diagram for the integrated neural network model with weighted output is presented in Figure 7.
In this case, each output gain corresponds to the maximum value of each product. Considering $X = [x_1, x_2, x_3, x_4]$, the respective gains can be calculated as $K_1 = \max(Y)$ when $X = [1, 0, 0, 0]$, $K_2 = \max(Y)$ when $X = [0, 1, 0, 0]$, $K_3 = \max(Y)$ when $X = [0, 0, 1, 0]$, and $K_4 = \max(Y)$ when $X = [0, 0, 0, 1]$.
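The per-product gains and the output rescaling stage can be sketched as below, again in Python with illustrative values; `per_product_gains` and `weighted_output` are hypothetical helper names, not from the original implementation:

```python
def per_product_gains(series_by_product):
    """M2 sketch: one gain per product, K_p = max of that product only."""
    return {name: max(series) for name, series in series_by_product.items()}

def weighted_output(net_output, one_hot, gains, product_order):
    """Rescale a network output in [0, 1] back to tons, using the gain
    of the product selected by the one-hot code X."""
    idx = one_hot.index(1)
    return net_output * gains[product_order[idx]]

order = ["coffee", "coal", "petroleum", "ferronickel"]
gains = per_product_gains({"coffee":      [60_000, 80_000],
                           "coal":        [7_000_000, 8_000_000],
                           "petroleum":   [1_500_000, 1_800_000],
                           "ferronickel": [10_000, 12_000]})
tons = weighted_output(0.75, [1, 0, 0, 0], gains, order)  # coffee selected
```

Because each product now spans the full $[0, 1]$ range inside the network, the low-valued products are no longer compressed toward zero.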

5. Results

This section displays the results obtained in the training and validation process for the considered models. These results show the best configuration obtained for both training and validation. The simulation for training and validation data is also shown using the best configuration obtained for each model. The results of the two proposed models are compared considering the prediction error for each product and the total mean squared error (MSE) given in Equation (3), where $y_i$ is the actual datum, $f_i$ the predicted datum, and $N$ the total number of data points.
$\mathrm{MSE} = \dfrac{1}{N} \sum_{i=1}^{N} (y_i - f_i)^2$    (3)
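Equation (3) transcribes directly to code; a sketch assuming plain Python lists of actual and predicted values:

```python
def mse(actual, predicted):
    """Equation (3): mean squared error over N data points."""
    n = len(actual)
    return sum((y - f) ** 2 for y, f in zip(actual, predicted)) / n

# e.g. mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]) gives (0 + 0 + 4) / 3 = 4/3
```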

5.1. Integrated Neural Network Model

For the training process, the max, min, average, and STD values achieved for each case can be seen in Table 1, where the best configuration is with four delays, three layers, and eight neurons in each layer.
Table 2 was obtained using the validation data; compared with the training data, the metrics for the validation data are in the same range (they differ by no more than one decimal place). Regarding the results of Table 2, the best configuration is with four delays, four layers, and eight neurons in each layer.
The best case in training is four delays, three layers, and eight neurons in each layer, while the best case in validation is four delays, four layers, and eight neurons in each layer. Figure 8 displays the simulation results for the best configuration obtained in the validation process. This figure contains the simulation with training and validation data. As can be seen for x 1 (coffee) and x 4 (ferronickel), the data adjustment is unsuitable.

5.2. Integrated Neural Network Model with Weighted Output

For the training process, the max, min, average, and STD values achieved for each case can be seen in Table 3, where it is noted that the best configuration corresponds to four delays, four layers, and 16 neurons in each layer.
Table 4 was obtained using the validation data; compared with the training data, the metrics for the validation data are in the same range (they differ by no more than one decimal place). Regarding the results of Table 4, the best configuration is obtained with three delays, two layers, and 16 neurons in each layer.
The best case in training is four delays, four layers, and 16 neurons in each layer, while the best case in validation is three delays, two layers, and 16 neurons in each layer. Figure 9 displays the simulation results for the best configuration obtained in the validation process. This figure contains the simulation with training and validation data. As seen, in this case for x 1 (coffee) and x 4 (ferronickel) the data adjustment is suitable.

5.3. Analysis Using Mean Absolute Percentage Error

In order to extend the analysis of the results, the mean absolute percentage error (MAPE) given in Equation (4) is calculated, where $y_i$ is the actual datum, $f_i$ the predicted datum, and $N$ the total number of data points.
$\mathrm{MAPE} = \dfrac{1}{N} \sum_{i=1}^{N} \left| \dfrac{y_i - f_i}{y_i} \right|$    (4)
MAPE allows us to observe the relative error considering the value of each datum. Table 5 displays the MAPE values for models M1 and M2 with the respective configurations using training and validation data. In this case, the best configuration for M1 using training data corresponds to four delays, four layers, and two neurons in each layer; for validation data, the best configuration is three delays, two layers, and four neurons in each layer. Meanwhile, for model M2, the best configuration using training data is four delays, three layers, and 16 neurons in each layer; using validation data, it is three delays, four layers, and four neurons in each layer.
Observing the results with MAPE, a significant difference is seen between the training and validation data for both models, which may indicate a possible overtraining effect; however, for the MSE value, the difference between training and validation data is not noticeable.
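As with the MSE, Equation (4) transcribes directly to code; a sketch where the metric is reported as a fraction, as in Table 5, rather than a percentage:

```python
def mape(actual, predicted):
    """Equation (4): mean absolute percentage error, as a fraction."""
    n = len(actual)
    return sum(abs((y - f) / y) for y, f in zip(actual, predicted)) / n

# e.g. mape([100.0, 200.0], [110.0, 180.0]) = (0.10 + 0.10) / 2 = 0.10
```

Unlike the MSE, this metric weights every sample by its own magnitude, which is why it exposes the poor relative fit on low-valued products that the MSE hides.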

5.4. Models Comparison

Since the normalization carried out in the two models is different, a direct comparison of Table 1, Table 2, Table 3 and Table 4, obtained for the validation and training data, is not appropriate. Instead, the predicted values in tons for each item and model are examined to perform a suitable comparison. Table 6 displays a comparison of the error values using the best configuration obtained for each model. It is observed that model M2 presents a lower error for $x_1$ and $x_4$, while model M1 presents a lower error for $x_2$ and $x_3$. Although the M1 model presents a lower total error overall, it fails to make a suitable prediction for products $x_1$ and $x_4$, and it may therefore be an inadequate model since it only manages a suitable prediction for $x_2$ and $x_3$.
In addition, Table 7 displays the results considering MAPE values obtained for each product; in this way, the same trend presented in Table 6 with the MSE is observed.
As observed in these results, M1 is more suitable for products like coal and petroleum, likely due to better feature alignment and less complexity needed for these products. However, M2 outperforms for coffee and ferronickel, likely due to a better fit with the patterns and scales of those products. Ultimately, model selection should be guided by the relative importance of each product and whether the lower total error (M1) or more balanced prediction performance (M2) is more valuable for the application at hand.
Table 5. Results using mean absolute percentage error. Column headings indicate layers-neurons per layer; rows give the number of delays.

M1 (Training)

| Delays | 1-2 | 1-4 | 1-8 | 1-16 | 2-2 | 2-4 | 2-8 | 2-16 | 3-2 | 3-4 | 3-8 | 3-16 | 4-2 | 4-4 | 4-8 | 4-16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.31664 | 0.30914 | 0.32916 | 0.31420 | 0.32950 | 0.34454 | 0.29429 | 0.32632 | 0.32910 | 0.34439 | 0.31815 | 0.32813 | 0.36728 | 0.36543 | 0.30594 | 0.30896 |
| 2 | 0.31122 | 0.31422 | 0.31251 | 0.30503 | 0.32590 | 0.30820 | 0.30799 | 0.35621 | 0.32343 | 0.32526 | 0.30376 | 0.35768 | 0.28289 | 0.50904 | 0.30169 | 0.28721 |
| 3 | 0.35477 | 0.27751 | 0.41871 | 0.28120 | 0.29933 | 0.35310 | 0.30940 | 0.33640 | 0.29573 | 0.29773 | 0.30747 | 0.30745 | 0.32330 | 0.27671 | 0.28797 | 0.30201 |
| 4 | 0.30355 | 0.29543 | 0.40505 | 0.31798 | 0.30554 | 0.31887 | 0.30849 | 0.31268 | 0.30290 | 0.29311 | 0.33400 | 0.37110 | 0.27330 | 0.30464 | 0.30031 | 0.36009 |

M1 (Validation)

| Delays | 1-2 | 1-4 | 1-8 | 1-16 | 2-2 | 2-4 | 2-8 | 2-16 | 3-2 | 3-4 | 3-8 | 3-16 | 4-2 | 4-4 | 4-8 | 4-16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.51619 | 0.44421 | 0.54915 | 0.52864 | 0.53254 | 0.45432 | 0.41064 | 0.52930 | 0.54939 | 0.47122 | 0.46710 | 0.52196 | 0.65552 | 0.52821 | 0.46807 | 0.47726 |
| 2 | 0.42997 | 0.55175 | 0.44986 | 0.51645 | 0.60594 | 0.45101 | 0.48225 | 0.53330 | 0.38629 | 0.40748 | 0.50012 | 0.41901 | 0.42525 | 0.84657 | 0.39060 | 0.39642 |
| 3 | 0.68328 | 0.39533 | 0.67721 | 0.49638 | 0.40169 | 0.37003 | 0.54286 | 0.48642 | 0.42869 | 0.48812 | 0.46464 | 0.52291 | 0.39795 | 0.44874 | 0.45356 | 0.44330 |
| 4 | 0.41728 | 0.49857 | 0.61201 | 0.55836 | 0.39965 | 0.38083 | 0.49138 | 0.46023 | 0.40087 | 0.44877 | 0.47095 | 0.48359 | 0.42639 | 0.43007 | 0.50441 | 0.55690 |

M2 (Training)

| Delays | 1-2 | 1-4 | 1-8 | 1-16 | 2-2 | 2-4 | 2-8 | 2-16 | 3-2 | 3-4 | 3-8 | 3-16 | 4-2 | 4-4 | 4-8 | 4-16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.31838 | 0.30123 | 0.29898 | 0.30298 | 0.31169 | 0.30483 | 0.29777 | 0.30116 | 0.30734 | 0.30557 | 0.29696 | 0.30162 | 0.31824 | 0.30179 | 0.30561 | 0.28360 |
| 2 | 0.28482 | 0.27117 | 0.26547 | 0.26425 | 0.26881 | 0.26531 | 0.25320 | 0.26306 | 0.28219 | 0.26380 | 0.26048 | 0.25881 | 0.28319 | 0.25806 | 0.25782 | 0.25727 |
| 3 | 0.27627 | 0.25604 | 0.25633 | 0.25030 | 0.26881 | 0.25271 | 0.24126 | 0.24349 | 0.26739 | 0.25243 | 0.24849 | 0.24585 | 0.26334 | 0.24832 | 0.25255 | 0.23739 |
| 4 | 0.26731 | 0.25020 | 0.24625 | 0.23545 | 0.26633 | 0.24953 | 0.22722 | 0.22781 | 0.26765 | 0.25391 | 0.24460 | 0.22129 | 0.25937 | 0.24775 | 0.24125 | 0.22690 |

M2 (Validation)

| Delays | 1-2 | 1-4 | 1-8 | 1-16 | 2-2 | 2-4 | 2-8 | 2-16 | 3-2 | 3-4 | 3-8 | 3-16 | 4-2 | 4-4 | 4-8 | 4-16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.53736 | 0.50514 | 0.49837 | 0.49682 | 0.51832 | 0.52299 | 0.50262 | 0.49829 | 0.51493 | 0.49883 | 0.50150 | 0.49339 | 0.49732 | 0.50109 | 0.50068 | 0.47131 |
| 2 | 0.48084 | 0.46829 | 0.47515 | 0.47781 | 0.45877 | 0.43565 | 0.43455 | 0.45423 | 0.47565 | 0.44999 | 0.41985 | 0.46515 | 0.48024 | 0.44118 | 0.44434 | 0.43806 |
| 3 | 0.48353 | 0.48023 | 0.45331 | 0.45007 | 0.47650 | 0.45466 | 0.45988 | 0.40989 | 0.47841 | 0.45372 | 0.44688 | 0.46155 | 0.48270 | 0.40617 | 0.44054 | 0.41886 |
| 4 | 0.48645 | 0.47781 | 0.46406 | 0.44567 | 0.48269 | 0.48045 | 0.44530 | 0.42773 | 0.47753 | 0.47511 | 0.47781 | 0.48739 | 0.48832 | 0.45251 | 0.46360 | 0.44019 |
Table 6. Comparison of MSE values for each item for each model.

| Model | Coffee ($x_1$) | Coal ($x_2$) | Petroleum ($x_3$) | Ferronickel ($x_4$) | Total |
|---|---|---|---|---|---|
| M1 | 3.1370 × 10^8 | 2.5124 × 10^12 | 6.6185 × 10^10 | 2.1027 × 10^8 | 2.5791 × 10^12 |
| M2 | 1.2370 × 10^8 | 2.6599 × 10^12 | 6.7158 × 10^10 | 1.0502 × 10^7 | 2.7272 × 10^12 |
Table 7. Comparison of MAPE values for each item for each model.

| Model | Coffee ($x_1$) | Coal ($x_2$) | Petroleum ($x_3$) | Ferronickel ($x_4$) | Total |
|---|---|---|---|---|---|
| M1 | 0.2172 | 0.2606 | 0.0946 | 4.4081 | 0.3700 |
| M2 | 0.1589 | 0.2634 | 0.0941 | 0.3700 | 0.4062 |

6. Discussion

The main difficulty observed when developing the model for export prediction is the difference in the range of values for each product. An alternative would be to build a separate neural network to predict the export of each product. However, this work proposes a single neural network in such a way that polysemantic neurons can arise (an aspect expected to be studied in future works); that is, the model may present neurons that have several concepts associated with their activation at the same time.
Analyzing the total error obtained, model M1 presents the smallest value, but its adjustment for products $x_1$ and $x_4$ is not adequate. Meanwhile, the M2 model, although it does not present the smallest error, manages to make an adequate prediction for all products.
Considering the results, a suitable prediction model can be utilized with an ANN; nevertheless, other machine learning techniques can be used. In this comparison, the imbalance in the data can be considered to determine the suitable model for exportation prediction. Other factors, such as computational requirements, ease of implementation, and interpretability, can be considered when selecting the prediction model.
In additional work, where the comparison is made with other techniques and different configurations in the parameters of each technique, a comparison between experimental groups can be made using an analysis of variance (ANOVA) or a Kruskal–Wallis test.

7. Conclusions

This paper can be considered a study of the relationships among the characteristics defined for exportation prediction, the strengths and shortcomings of each model, and how the variables are correlated. It provides an alternative for designing prediction models when the data values are notably imbalanced.
This work sought to create an alternative for predicting traditional exportation through a model based on artificial intelligence techniques. The results show that artificial neural networks are capable of predicting traditional exportation, and the methodology and metrics considered allow a clear assessment of the resulting model.
Model M1 presented the lowest mean squared error, indicating superior overall performance. However, model M2, which incorporated weighted outputs, proved to be more capable of handling the wide variability in export values, particularly in imbalanced data. This highlights that weighting strategies are crucial when predicting exports with high variability.
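The weighting idea behind model M2 can be illustrated with a per-product weighted squared error: rescaling each product's residual by the inverse of its typical magnitude keeps the largest-valued products from dominating the loss during training. The sketch below is a hedged illustration only; the scale values and the specific weighting scheme are hypothetical, not the weights used in M2:

```python
import numpy as np

# Hypothetical typical export scales per product (tons),
# ordered as coffee, coal, petroleum, ferronickel.
scales = np.array([50_000.0, 7_000_000.0, 1_500_000.0, 30_000.0])
weights = 1.0 / scales ** 2  # inverse-scale-squared weighting

def weighted_mse(y_true, y_pred):
    # Each product's squared residual is rescaled before averaging,
    # so no single large-valued product dominates the loss.
    return np.mean(weights * (y_true - y_pred) ** 2)

y_true = np.array([48_000.0, 6_900_000.0, 1_400_000.0, 29_000.0])
y_pred = np.array([50_000.0, 7_200_000.0, 1_450_000.0, 30_500.0])
print(weighted_mse(y_true, y_pred))  # ≈ 0.00176
```

With this rescaling, every product contributes a comparable share to the loss, which is the behavior the weighted-output model aims for on imbalanced export values.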
The research achieved the objective of proposing a reliable method for predicting traditional exportation, successfully addressing the challenge of imbalanced export values. The findings could be aligned more explicitly with these objectives by highlighting how each result responds directly to the goals outlined at the study's outset; strengthening this connection would demonstrate more clearly how the model advancements improve export prediction strategies.
While the models performed well, several limitations should be acknowledged. First, the dataset primarily focused on internal characteristics of export data, leaving out critical external variables like international market fluctuations, administrative periods, and regional trade agreements. Future studies should incorporate these elements to enhance the models’ predictive power. Additionally, while the neural networks demonstrated strong adaptability, other machine learning techniques could be explored to complement or improve upon the current findings. Expanding the models’ application to other regions or product types may provide a more generalized tool for export prediction.
This research contributes to the field by demonstrating how artificial intelligence, and particularly neural networks, can be leveraged to predict export behavior more effectively.
Policymakers and stakeholders can use the framework proposed to optimize decision-making processes in the export sector. Furthermore, this study opens avenues for future investigations to refine these models by integrating more complex variables and adapting them to changing global trade dynamics.

Author Contributions

Conceptualization, A.C.G., L.A.B. and H.E.E.; methodology, A.C.G., L.A.B. and H.E.E.; project administration, A.C.G., L.A.B. and H.E.E.; supervision, H.E.E.; validation, A.C.G.; writing—original draft, A.C.G., L.A.B. and H.E.E.; writing—review and editing, A.C.G., L.A.B. and H.E.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are included in the article.

Acknowledgments

The authors express gratitude to the Universidad Distrital Francisco José de Caldas.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chabani, Z.; Chabani, W.; Shamout, M.D.; Hamouche, S. Exploring Global Innovation Networks’ Impact on UAE Manufacturing: Insights from Medium-Scale Multinationals in the Global Value Chain. In Proceedings of the 2024 2nd International Conference on Cyber Resilience (ICCR), Dubai, United Arab Emirates, 26–28 February 2024; pp. 1–5. [Google Scholar] [CrossRef]
  2. Zhan, S.; Tang, F.; Li, Y.; Mei, Q.; Lu, Z.; Li, X.-S.; Zhang, B. Construction of Index System of International Economic Cooperation Innovation Model in Digital Era-based on TIF Theory and DIKWP Model. In Proceedings of the 2023 IEEE International Conference on High Performance Computing & Communications, Data Science & Systems, Smart City & Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Melbourne, Australia, 17–21 December 2023; pp. 809–812. [Google Scholar] [CrossRef]
  3. Wang, T.; Wu, J.; Liu, J. Regional Differences, Dynamic Evolution, and Convergence of Global Agricultural Energy Efficiency. Agriculture 2024, 14, 1429. [Google Scholar] [CrossRef]
  4. Azam, M.; Uddin, I.; Khan, S.; Tariq, M. Are Globalization, Urbanization, and Energy Consumption Cause Carbon Emissions in SAARC Region? New Evidence from CS-ARDL Approach. Environ. Sci. Pollut. Res. 2022, 29, 87746–87763. [Google Scholar] [CrossRef] [PubMed]
  5. Mauro, F.; Dees, S.; Lombardi, M.J. Catching the Flu from the United States: Synchronisation and Transmission Mechanisms to the Euro Area; Palgrave Macmillan London: London, UK, 2010. [Google Scholar] [CrossRef]
  6. Dreher, A. Does Globalization Affect Growth? Evidence from a new Index of Globalization. TWI Res. Pap. Ser. 2005, 6, 1–32. [Google Scholar] [CrossRef]
  7. Ajeigbe, K.B.; Ganda, F. Leveraging Food Security and Environmental Sustainability in Achieving Sustainable Development Goals: Evidence from a Global Perspective. Sustainability 2024, 16, 7969. [Google Scholar] [CrossRef]
  8. Vijayagopal, P.; Jain, B.; Ayinippully Viswanathan, S. Regulations and Fintech: A Comparative Study of the Developed and Developing Countries. J. Risk Financ. Manag. 2024, 17, 324. [Google Scholar] [CrossRef]
  9. Sarin, V.; Mahapatra, S.K.; Sood, N. Export diversification and economic growth: A review and future research agenda. J. Public Aff. 2022, 22, e2524. [Google Scholar] [CrossRef]
  10. Mooney, A.; Evans, B. Globalization: The Key Concepts; Routledge: London, UK, 2007. [Google Scholar] [CrossRef]
  11. Palavicini-Corona, E. Globalisation and local economic development: Place-based and bottom-up public policies in Switzerland and Mexico. Local Econ. 2021, 36, 98–114. [Google Scholar] [CrossRef]
  12. Hausmann, R.; Hidalgo, C.A.; Bustos, S.; Coscia, M.; Simoes, A.; Yıldırım, M.A. The Atlas of Economic Complexity: Mapping Paths to Prosperity; The MIT Press: Cambridge, MA, USA, 2013. [Google Scholar] [CrossRef]
  13. Wang, C.-N.; Nguyen, M.-N.; Nguyen, T.-D.; Hsu, H.-P.; Nguyen, T.-H.-Y. Effective Decision Making: Data Envelopment Analysis for Efficiency Evaluation in the Cloud Computing Marketplaces. Axioms 2021, 10, 309. [Google Scholar] [CrossRef]
  14. Haghshenas, E.; Gholamalifard, M.; Mahmoudi, N.; Kutser, T. Developing a GIS-Based Decision Rule for Sustainable Marine Aquaculture Site Selection: An Application of the Ordered Weighted Average Procedure. Sustainability 2021, 13, 2672. [Google Scholar] [CrossRef]
  15. Wang, R.; Li, Y.N.; Wei, J. Growing in the changing global landscape: The intangible resources and performance of high-tech corporates. Asia Pac. J. Manag. 2022, 39, 999–1022. [Google Scholar] [CrossRef]
  16. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning—From Theory to Algorithms; Cambridge University Press: New York, NY, USA, 2014. [Google Scholar] [CrossRef]
  17. Aftab, U.; Jaleel, F.; Aslam, M.; Haroon, M.; Mansoor, R. Building Information Modeling (BIM) Application in Construction Waste Quantification-A Review. Eng. Proc. 2024, 75, 8. [Google Scholar] [CrossRef]
  18. Piras, G.; Muzi, F.; Tiburcio, V.A. Digital Management Methodology for Building Production Optimization through Digital Twin and Artificial Intelligence Integration. Buildings 2024, 14, 2110. [Google Scholar] [CrossRef]
  19. Li, K.; Zhou, Y. Improved Financial Predicting Method Based on Time Series Long Short-Term Memory Algorithm. Mathematics 2024, 12, 1074. [Google Scholar] [CrossRef]
  20. Rojek, I.; Mikołajewski, D.; Mroziński, A.; Macko, M. Green Energy Management in Manufacturing Based on Demand Prediction by Artificial Intelligence—A Review. Electronics 2024, 13, 3338. [Google Scholar] [CrossRef]
  21. Athey, S.; Imbens, G.W. Machine Learning Methods That Economists Should Know About. Annu. Rev. Econ. 2019, 11, 685–725. [Google Scholar] [CrossRef]
  22. Çağlayan-Akay, E.; Yılmaz Soydan, N.T.; Kocarık Gacar, B. Bibliometric analysis of the published literature on machine learning in economics and econometrics. Soc. Netw. Anal. Min. 2022, 12, 109. [Google Scholar] [CrossRef]
  23. Kim, D.; Cho, W.; Na, I.; Na, M.H. Prediction of Live Bulb Weight for Field Vegetables Using Functional Regression Models and Machine Learning Methods. Agriculture 2024, 14, 754. [Google Scholar] [CrossRef]
  24. Kim, S.Y.; Park, S.; Hong, S.J.; Kim, E.; Nurhisna, N.I.; Park, J.; Kim, G. Time-series prediction of onion quality changes in cold storage based on long short-term memory networks. Postharvest Biol. Technol. 2024, 213, 112927. [Google Scholar] [CrossRef]
  25. Zhu, H. Oil Demand Forecasting in Importing and Exporting Countries: AI-Based Analysis of Endogenous and Exogenous Factors. Sustainability 2023, 15, 13592. [Google Scholar] [CrossRef]
  26. Frison, L.; Gölzhäuser, S.; Bitterling, M.; Kramer, W. Evaluating different artificial neural network forecasting approaches for optimizing district heating network operation. Energy 2024, 307, 132745. [Google Scholar] [CrossRef]
  27. Das, P.K.; Das, P.K. Forecasting and Analyzing Predictors of Inflation Rate: Using Machine Learning Approach. J. Quant. Econ. 2024, 22, 493–517. [Google Scholar] [CrossRef]
  28. Kayacı Çodur, M. Ensemble Machine Learning Approaches for Prediction of Türkiye’s Energy Demand. Energies 2024, 17, 74. [Google Scholar] [CrossRef]
  29. Panahi, F.; Najah, A.; Singh, V.P.; Ehtearm, M.; Elshafie, A.; Torabi, A. Predicting Freshwater Production in Seawater Greenhouses Using Hybrid Artificial Neural Network Models. J. Clean. Prod. 2021, 329, 129721. [Google Scholar] [CrossRef]
  30. Kumar, A.; Singh, S.K.; Kumari, P. A Machine Learning Approach to Forecast the Food Prices for Food Security Issues. In Proceedings of the 11th International Conference on Intelligent Systems and Embedded Design (ISED) 2023, Dehradun, India, 15–17 December 2023; pp. 1–6. [Google Scholar] [CrossRef]
  31. Chin, M.Y.; Qin, Y.; Hoy, Z.X.; Farooque, A.A.; Wong, K.Y.; Mong, G.R.; Tan, J.P.; Woon, K.S. Assessing carbon budgets and reduction pathways in different income levels with neural network forecasting. Energy 2024, 305, 132331. [Google Scholar] [CrossRef]
  32. Wang, D.; Cao, J.; Zhang, B.; Zhang, Y.; Xie, L. A Novel Flexible Geographically Weighted Neural Network for High-Precision PM2.5 Mapping across the Contiguous United States. ISPRS Int. J. Geo-Inf. 2024, 13, 217. [Google Scholar] [CrossRef]
  33. Alhussaini, A.J.; Steele, J.D.; Jawli, A.; Nabi, G. Radiomics Machine Learning Analysis of Clear Cell Renal Cell Carcinoma for Tumour Grade Prediction Based on Intra-Tumoural Sub-Region Heterogeneity. Cancers 2024, 16, 1454. [Google Scholar] [CrossRef]
  34. Barbieri, F.; Pfeifer, B.E.; Senoner, T.; Dobner, S.; Spitaler, P.; Semsroth, S.; Lambert, T.; Zweiker, D.; Neururer, S.B.; Scherr, D.; et al. A Neuronal Network-Based Score Predicting Survival in Patients Undergoing Aortic Valve Intervention: The ABC-AS Score. J. Clin. Med. 2024, 13, 3691. [Google Scholar] [CrossRef]
  35. Aldabagh, H.; Zheng, X.; Mukkamala, R. A Hybrid Deep Learning Approach for Crude Oil Price Prediction. J. Risk Financ. Manag. 2023, 16, 503. [Google Scholar] [CrossRef]
  36. Bao, W.; Cao, Y.; Yang, Y.; Che, H.; Huang, J.; Wen, S. Data-driven stock forecasting models based on neural networks: A review. Inf. Fusion 2024, 113, 102616. [Google Scholar] [CrossRef]
  37. Wang, W.; Xu, W.; Yao, X.; Wang, H. Application of Data-driven Method for Automatic Machine Learning in Economic Research. In Proceedings of the 21st International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), Anhui, China, 14–18 October 2022; pp. 42–45. [Google Scholar] [CrossRef]
  38. Tiits, M.; Kalvet, T.; Ounoughi, C.; Ben Yahia, S. Relatedness and product complexity meet gravity models of international trade. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100288. [Google Scholar] [CrossRef]
  39. Kummaraka, U.; Srisuradetchai, P. Time-Series Interval Forecasting with Dual-Output Monte Carlo Dropout: A Case Study on Durian Exports. Forecasting 2024, 6, 616–636. [Google Scholar] [CrossRef]
  40. Bin, J.; Tianli, X. Forecast of Export Demand Based on Artificial Neural Network and Fuzzy System Theory. J. Intell. Fuzzy Syst. 2020, 39, 1701–1709. [Google Scholar] [CrossRef]
  41. Sellami, B.; Ounoughi, C.; Kalvet, T.; Tiits, M.; Rincon-Yanez, D. Harnessing Graph Neural Networks to Predict International Trade Flows. Big Data Cogn. Comput. 2024, 8, 65. [Google Scholar] [CrossRef]
  42. Gulzar, Y.; Oral, C.; Kayakus, M.; Erdogan, D.; Unal, Z.; Eksili, N.; Caylak, P.C. Predicting High Technology Exports of Countries for Sustainable Economic Growth by Using Machine Learning Techniques: The Case of Turkey. Sustainability 2024, 16, 5601. [Google Scholar] [CrossRef]
  43. Suler, P.; Rowland, Z.; Krulicky, T. Evaluation of the Accuracy of Machine Learning Predictions of the Czech Republic’s Exports to the China. J. Risk Financ. Manag. 2021, 14, 76. [Google Scholar] [CrossRef]
  44. Dai, C. A method of forecasting trade export volume based on back-propagation neural network. Neural Comput. Appl. 2023, 35, 8775–8784. [Google Scholar] [CrossRef]
  45. Tufail, S.; Riggs, H.; Tariq, M.; Sarwat, A.I. Advancements and Challenges in Machine Learning: A Comprehensive Review of Models, Libraries, Applications, and Algorithms. Electronics 2023, 12, 1789. [Google Scholar] [CrossRef]
  46. Ebiaredoh-Mienye, S.A.; Esenogho, E.; Swart, T.G. Integrating Enhanced Sparse Autoencoder-Based Artificial Neural Network Technique and Softmax Regression for Medical Diagnosis. Electronics 2020, 9, 1963. [Google Scholar] [CrossRef]
  47. Kaya, E. A Comprehensive Comparison of the Performance of Metaheuristic Algorithms in Neural Network Training for Nonlinear System Identification. Mathematics 2022, 10, 1611. [Google Scholar] [CrossRef]
  48. Avuglah, R.K. Some Steps Towards Experimental Design for Neural Network Regression. Ph.D. Thesis, Technischen Universität Kaiserslautern, Kaiserslautern, Germany, 2011. Available online: https://nbn-resolving.de/urn:nbn:de:hbz:386-kluedo-23493 (accessed on 22 September 2024).
  49. Chiu, A.K. Exploring the Use of Experimental Design Techniques for Hyperparameter Optimization in Convolutional Neural Networks. Ph.D. Thesis, University of California, Los Angeles, CA, USA, 2021. [Google Scholar]
Figure 1. Context of related works.
Figure 2. Structure of an artificial neural network.
Figure 3. Graphical representation of Equation (1).
Figure 4. Historical exportation data.
Figure 5. Histogram of historical exportation data.
Figure 6. Integrated neural network model diagram.
Figure 7. Diagram for the integrated neural network model with weighted output.
Figure 8. Prediction results for model M1 using training and validation data.
Figure 9. Prediction results for model M2 using training and validation data.
Table 1. Results of training process for model M1. (Columns: layers 1–4, left to right; within each layer, neurons 2, 4, 8, 16.)

Delays: 1
Max  0.00425 0.00445 0.00488 0.00432 | 0.00528 0.00435 0.00417 0.00466 | 0.00758 0.00670 0.00433 0.00529 | 0.02597 0.00502 0.00469 0.00575
Min  0.00400 0.00397 0.00391 0.00368 | 0.00401 0.00391 0.00390 0.00390 | 0.00401 0.00394 0.00391 0.00393 | 0.00397 0.00392 0.00390 0.00391
Mean 0.00407 0.00406 0.00413 0.00403 | 0.00418 0.00407 0.00404 0.00407 | 0.00433 0.00416 0.00407 0.00413 | 0.00536 0.00407 0.00410 0.00411
STD  0.00005 0.00011 0.00023 0.00014 | 0.00036 0.00011 0.00007 0.00016 | 0.00080 0.00060 0.00010 0.00031 | 0.00491 0.00023 0.00020 0.00039

Delays: 2
Max  0.00389 0.00382 0.00383 0.00440 | 0.00479 0.00459 0.00530 0.00385 | 0.00482 0.00384 0.00411 0.00419 | 0.02594 0.00481 0.00380 0.00401
Min  0.00371 0.00367 0.00362 0.00363 | 0.00368 0.00361 0.00367 0.00355 | 0.00373 0.00357 0.00363 0.00362 | 0.00372 0.00363 0.00356 0.00335
Mean 0.00376 0.00373 0.00373 0.00382 | 0.00383 0.00378 0.00384 0.00372 | 0.00388 0.00373 0.00375 0.00377 | 0.00495 0.00386 0.00369 0.00367
STD  0.00004 0.00004 0.00006 0.00016 | 0.00024 0.00020 0.00038 0.00008 | 0.00030 0.00006 0.00010 0.00013 | 0.00495 0.00026 0.00006 0.00015

Delays: 3
Max  0.00871 0.00351 0.00352 0.00427 | 0.00449 0.00380 0.00498 0.00402 | 0.00393 0.00359 0.00369 0.00402 | 0.02633 0.00457 0.00405 0.00390
Min  0.00329 0.00308 0.00312 0.00299 | 0.00336 0.00294 0.00287 0.00301 | 0.00333 0.00297 0.00310 0.00279 | 0.00334 0.00315 0.00302 0.00287
Mean 0.00370 0.00338 0.00336 0.00339 | 0.00350 0.00340 0.00341 0.00338 | 0.00346 0.00336 0.00336 0.00328 | 0.00728 0.00344 0.00337 0.00336
STD  0.00119 0.00012 0.00011 0.00024 | 0.00025 0.00016 0.00042 0.00023 | 0.00015 0.00013 0.00015 0.00028 | 0.00831 0.00030 0.00021 0.00023

Delays: 4
Max  0.00335 0.00334 0.00347 0.00330 | 0.02415 0.00337 0.00330 0.00348 | 0.00426 0.00368 0.00320 0.00422 | 0.00753 0.00337 0.00328 0.00386
Min  0.00311 0.00296 0.00280 0.00287 | 0.00309 0.00298 0.00290 0.00252 | 0.00311 0.00269 0.00230 0.00242 | 0.00304 0.00293 0.00265 0.00256
Mean 0.00323 0.00314 0.00313 0.00308 | 0.00455 0.00316 0.00309 0.00307 | 0.00330 0.00315 0.00302 0.00311 | 0.00342 0.00317 0.00302 0.00303
STD  0.00006 0.00010 0.00015 0.00013 | 0.00482 0.00009 0.00009 0.00031 | 0.00029 0.00021 0.00022 0.00040 | 0.00097 0.00013 0.00016 0.00032
Table 2. Results of validation process for model M1. (Columns: layers 1–4, left to right; within each layer, neurons 2, 4, 8, 16.)

Delays: 1
Max  0.00359 0.00355 0.00402 0.00411 | 0.00442 0.00349 0.00400 0.00381 | 0.00609 0.00497 0.00385 0.00454 | 0.02278 0.00430 0.00373 0.00529
Min  0.00334 0.00331 0.00330 0.00327 | 0.00331 0.00327 0.00330 0.00323 | 0.00333 0.00329 0.00330 0.00327 | 0.00330 0.00330 0.00326 0.00329
Mean 0.00342 0.00341 0.00352 0.00345 | 0.00352 0.00338 0.00347 0.00345 | 0.00365 0.00350 0.00345 0.00353 | 0.00454 0.00349 0.00340 0.00353
STD  0.00007 0.00007 0.00020 0.00019 | 0.00031 0.00007 0.00016 0.00016 | 0.00062 0.00035 0.00014 0.00035 | 0.00434 0.00021 0.00013 0.00043

Delays: 2
Max  0.00326 0.00329 0.00324 0.00374 | 0.00386 0.00354 0.00458 0.00382 | 0.00395 0.00394 0.00396 0.00376 | 0.02272 0.00388 0.00354 0.00578
Min  0.00290 0.00294 0.00288 0.00297 | 0.00289 0.00288 0.00295 0.00291 | 0.00294 0.00290 0.00286 0.00287 | 0.00291 0.00288 0.00286 0.00296
Mean 0.00304 0.00305 0.00304 0.00325 | 0.00306 0.00310 0.00319 0.00318 | 0.00314 0.00307 0.00318 0.00314 | 0.00411 0.00316 0.00320 0.00349
STD  0.00008 0.00008 0.00010 0.00023 | 0.00021 0.00016 0.00038 0.00019 | 0.00023 0.00022 0.00028 0.00024 | 0.00439 0.00025 0.00017 0.00061

Delays: 3
Max  0.00780 0.00295 0.00316 0.00332 | 0.00363 0.00299 0.00317 0.00366 | 0.00293 0.00337 0.00326 0.00391 | 0.02326 0.00399 0.00366 0.00424
Min  0.00239 0.00242 0.00232 0.00243 | 0.00235 0.00247 0.00248 0.00248 | 0.00241 0.00239 0.00236 0.00246 | 0.00232 0.00233 0.00247 0.00242
Mean 0.00283 0.00261 0.00267 0.00272 | 0.00262 0.00264 0.00275 0.00282 | 0.00260 0.00268 0.00272 0.00290 | 0.00600 0.00273 0.00279 0.00299
STD  0.00119 0.00014 0.00017 0.00026 | 0.00029 0.00016 0.00017 0.00035 | 0.00015 0.00025 0.00022 0.00038 | 0.00743 0.00036 0.00033 0.00058

Delays: 4
Max  0.00269 0.00413 0.00324 0.00401 | 0.02182 0.00303 0.00340 0.00435 | 0.00344 0.00301 0.00453 0.00400 | 0.00606 0.00326 0.00324 0.00448
Min  0.00242 0.00236 0.00247 0.00245 | 0.00241 0.00245 0.00248 0.00261 | 0.00244 0.00247 0.00265 0.00234 | 0.00243 0.00257 0.00231 0.00250
Mean 0.00255 0.00274 0.00279 0.00305 | 0.00382 0.00272 0.00282 0.00327 | 0.00262 0.00271 0.00304 0.00299 | 0.00281 0.00284 0.00278 0.00327
STD  0.00008 0.00035 0.00021 0.00042 | 0.00443 0.00016 0.00024 0.00044 | 0.00024 0.00016 0.00046 0.00045 | 0.00078 0.00021 0.00019 0.00061
Table 3. Results of training process for model M2. (Columns: layers 1–4, left to right; within each layer, neurons 2, 4, 8, 16.)

Delays: 1
Max  0.01641 0.01457 0.01460 0.01497 | 0.01783 0.01601 0.01412 0.01454 | 0.02174 0.01485 0.01432 0.01466 | 0.02272 0.02437 0.01430 0.01720
Min  0.01409 0.01370 0.01358 0.01356 | 0.01373 0.01362 0.01356 0.01363 | 0.01369 0.01362 0.01343 0.01358 | 0.01414 0.01357 0.01354 0.01358
Mean 0.01477 0.01405 0.01395 0.01390 | 0.01484 0.01420 0.01378 0.01389 | 0.01493 0.01393 0.01378 0.01385 | 0.01605 0.01447 0.01386 0.01402
STD  0.00062 0.00027 0.00029 0.00032 | 0.00100 0.00068 0.00017 0.00024 | 0.00168 0.00030 0.00021 0.00030 | 0.00254 0.00234 0.00021 0.00078

Delays: 2
Max  0.01359 0.01301 0.01301 0.01470 | 0.01407 0.01312 0.01237 0.01272 | 0.01704 0.02092 0.01337 0.01279 | 0.02321 0.01338 0.01389 0.01407
Min  0.01250 0.01190 0.01172 0.01173 | 0.01234 0.01180 0.01148 0.01167 | 0.01245 0.01180 0.01172 0.01160 | 0.01241 0.01164 0.01108 0.01148
Mean 0.01282 0.01232 0.01221 0.01229 | 0.01276 0.01228 0.01198 0.01208 | 0.01315 0.01263 0.01238 0.01214 | 0.01383 0.01231 0.01207 0.01213
STD  0.00029 0.00027 0.00029 0.00064 | 0.00047 0.00038 0.00025 0.00033 | 0.00102 0.00196 0.00050 0.00034 | 0.00244 0.00041 0.00062 0.00060

Delays: 3
Max  0.01424 0.01294 0.01355 0.01356 | 0.02298 0.01320 0.01234 0.01224 | 0.03619 0.02263 0.01311 0.01258 | 0.01644 0.01243 0.01190 0.01270
Min  0.01214 0.01158 0.01141 0.01133 | 0.01211 0.01130 0.01022 0.01093 | 0.01213 0.01142 0.01069 0.01068 | 0.01197 0.01127 0.01097 0.01006
Mean 0.01262 0.01222 0.01193 0.01196 | 0.01333 0.01201 0.01162 0.01156 | 0.01383 0.01248 0.01185 0.01141 | 0.01294 0.01193 0.01142 0.01157
STD  0.00050 0.00037 0.00046 0.00057 | 0.00284 0.00044 0.00048 0.00035 | 0.00528 0.00242 0.00056 0.00039 | 0.00123 0.00030 0.00025 0.00061

Delays: 4
Max  0.01383 0.01216 0.01198 0.01308 | 0.01465 0.01200 0.01218 0.01145 | 0.02054 0.01345 0.01181 0.01465 | 0.01667 0.01308 0.01252 0.01245
Min  0.01184 0.01110 0.01082 0.01031 | 0.01163 0.01108 0.01009 0.01014 | 0.01170 0.01114 0.01090 0.00940 | 0.01179 0.01075 0.01045 0.00931
Mean 0.01231 0.01153 0.01142 0.01129 | 0.01219 0.01155 0.01127 0.01066 | 0.01282 0.01167 0.01126 0.01098 | 0.01253 0.01162 0.01120 0.01105
STD  0.00050 0.00027 0.00034 0.00066 | 0.00068 0.00027 0.00053 0.00039 | 0.00213 0.00060 0.00028 0.00097 | 0.00134 0.00057 0.00051 0.00066
Table 4. Results of validation process for model M2. (Columns: layers 1–4, left to right; within each layer, neurons 2, 4, 8, 16.)

Delays: 1
Max  0.01723 0.01482 0.01500 0.01545 | 0.01739 0.01506 0.01446 0.01445 | 0.01982 0.01598 0.01464 0.01528 | 0.02143 0.02347 0.01468 0.01730
Min  0.01403 0.01335 0.01331 0.01307 | 0.01327 0.01351 0.01329 0.01293 | 0.01344 0.01301 0.01291 0.01302 | 0.01344 0.01319 0.01277 0.01316
Mean 0.01477 0.01391 0.01394 0.01380 | 0.01461 0.01410 0.01370 0.01361 | 0.01471 0.01378 0.01353 0.01379 | 0.01559 0.01430 0.01369 0.01389
STD  0.00069 0.00034 0.00044 0.00050 | 0.00091 0.00048 0.00029 0.00047 | 0.00141 0.00066 0.00038 0.00056 | 0.00236 0.00219 0.00051 0.00089

Delays: 2
Max  0.01330 0.01345 0.01351 0.01525 | 0.01299 0.01287 0.01396 0.01363 | 0.01769 0.02068 0.01466 0.01422 | 0.02207 0.01360 0.01406 0.01438
Min  0.01208 0.01203 0.01192 0.01217 | 0.01200 0.01180 0.01216 0.01188 | 0.01220 0.01170 0.01172 0.01186 | 0.01214 0.01228 0.01190 0.01183
Mean 0.01252 0.01258 0.01259 0.01289 | 0.01249 0.01240 0.01278 0.01280 | 0.01301 0.01288 0.01274 0.01286 | 0.01347 0.01267 0.01277 0.01270
STD  0.00029 0.00033 0.00038 0.00065 | 0.00028 0.00033 0.00046 0.00047 | 0.00119 0.00188 0.00067 0.00067 | 0.00222 0.00035 0.00069 0.00065

Delays: 3
Max  0.01301 0.01243 0.01452 0.01345 | 0.02117 0.01211 0.01400 0.01481 | 0.03576 0.02088 0.01347 0.01582 | 0.01618 0.01470 0.01286 0.01495
Min  0.01124 0.01114 0.01075 0.01077 | 0.01096 0.01059 0.01074 0.00996 | 0.01100 0.01073 0.01086 0.01050 | 0.01119 0.01063 0.01060 0.01075
Mean 0.01173 0.01163 0.01190 0.01230 | 0.01241 0.01149 0.01184 0.01206 | 0.01278 0.01195 0.01203 0.01211 | 0.01202 0.01160 0.01144 0.01252
STD  0.00044 0.00036 0.00079 0.00079 | 0.00270 0.00034 0.00070 0.00114 | 0.00542 0.00218 0.00068 0.00128 | 0.00140 0.00087 0.00066 0.00105

Delays: 4
Max  0.01262 0.01256 0.01351 0.01479 | 0.01484 0.01283 0.01333 0.01718 | 0.02074 0.01310 0.01343 0.01654 | 0.01650 0.01427 0.01275 0.01727
Min  0.01114 0.01131 0.01158 0.01100 | 0.01143 0.01139 0.01106 0.01059 | 0.01127 0.01075 0.01128 0.01110 | 0.01126 0.01112 0.01095 0.01096
Mean 0.01192 0.01193 0.01205 0.01234 | 0.01202 0.01200 0.01218 0.01303 | 0.01271 0.01210 0.01211 0.01286 | 0.01248 0.01224 0.01196 0.01319
STD  0.00037 0.00035 0.00044 0.00088 | 0.00072 0.00042 0.00065 0.00155 | 0.00216 0.00060 0.00061 0.00144 | 0.00154 0.00065 0.00058 0.00168