Article

Comparison of the Effectiveness of Artificial Neural Networks and Elastic Net Regression in Surface Runoff Modeling

by Jacek Dawidowicz and Rafał Buczyński *
Department of Water Supply and Sewage Systems, Faculty of Civil Engineering and Environmental Sciences, Bialystok University of Technology, 15-351 Białystok, Poland
* Author to whom correspondence should be addressed.
Water 2025, 17(3), 405; https://doi.org/10.3390/w17030405
Submission received: 7 December 2024 / Revised: 13 January 2025 / Accepted: 25 January 2025 / Published: 1 February 2025

Abstract

This study compares Artificial Neural Networks (ANN) and Elastic Net regression for predicting surface runoff in urban stormwater catchments. Both models were trained on a data set derived from the Stormwater Management Model that included parameters such as imperviousness, flow path width, slope, Manning coefficients, and depression storage. ANN exhibited greater predictive accuracy and stability, especially when modeling nonlinear hydrologic interactions, while Elastic Net offered faster inference and clearer interpretability, but showed reduced accuracy in low-flow conditions. Validation on real-world data revealed the sensitivity of the models to scenarios not fully represented during training. Despite higher computational demands, the ANN proved more adaptable, while the more resource-efficient Elastic Net remains suitable for time-critical or large-scale applications. These findings provide practical insights for urban water resource management, indicating when each approach can be most effectively used in flood risk assessment and stormwater infrastructure planning.

1. Introduction

Hydrology is the scientific study of water within the Earth’s system, including its distribution, movement, and quality. Surface hydrology describes the processes that dictate the distribution of rainfall, the infiltration of water into the ground, and the routing of excess precipitation across the land surface toward channels, eventually reaching the outlet of the watershed [1]. Phenomena such as surface runoff are crucial in managing water resources and mitigating the impact of extreme events (e.g., floods or droughts). Accurately modeling and predicting surface runoff is therefore essential for effective urban infrastructure planning, engineering activities, and risk management associated with hydrological extremes.
Reliable modeling of surface runoff requires robust approaches, as hydrological models depend heavily on field measurements [2]. However, field data can be limited or uncertain, leading to challenges such as simulation uncertainty due to frozen soil moisture parameterization [3]. Additional complexities arise from the hydrogeological characteristics of catchments, which govern hydrological response via surface runoff and infiltration [4]. Surface runoff also significantly affects water quality by transporting pollutants; thus, a better understanding of runoff processes is important from both quantitative and qualitative standpoints [5]. Classical tools such as the Storm Water Management Model (SWMM) have been widely used to manage and simulate rainwater in urban areas because of their comprehensive capabilities to model hydrological, hydraulic, and pollutant transport processes [6,7]. Recent research has proposed new optimization strategies for SWMM, including the use of integrated genetic algorithms to optimize buildup and wash-off parameters, thereby enhancing stormwater quality modeling in urban areas [8]. Despite SWMM’s broad functionality, its complexity and high computational burden motivate the search for alternative or complementary modeling approaches.
Artificial intelligence (AI) methods, particularly machine learning (ML) algorithms, offer new avenues to address the complexities of hydrological phenomena. ML-based techniques such as decision trees, support vector machines (SVMs), fuzzy logic, and wavelet transformations have all been applied to surface runoff and water resources modeling [7,9,10,11,12,13,14,15,16]. Indeed, recent studies have shown the effectiveness of SVM-based approaches in hydrological applications, including scour depth prediction [9,12] and apparent shear force modeling in compound channels [11]. Despite these advances, there is still a recognized gap in the literature regarding the performance of different ML algorithms in capturing the interactions of multiple catchment parameters, particularly under wide-ranging hydrological conditions. Machine learning methods can sometimes suffer from limitations related to overfitting, insufficient generalizability, or complexity in feature selection—challenges that become more acute for large-scale, data-intensive hydrological problems.
In view of these challenges, the present study focuses on two predictive modeling techniques: Artificial Neural Networks (ANN) and Elastic Net regression. We opted for ANN and Elastic Net because they respectively exemplify two important paradigms in ML-based hydrological modeling—nonlinear, neuron-based architectures versus a linear, regularized regression framework. ANNs have shown great promise in capturing complex, nonlinear relationships in hydrological data sets, enhancing forecast accuracy and reducing computation time in various applications [16,17]. At the same time, Elastic Net regression effectively addresses multicollinearity and overfitting by incorporating two forms of regularization—L1 (lasso) and L2 (ridge)—into a single optimization problem [18,19]. The term regularization constraints refers to the penalties imposed on the magnitude of regression coefficients, which help reduce model variance and improve generalization. In the context of Elastic Net, these constraints shrink some coefficients toward zero (L1 penalty) while simultaneously penalizing the sum of squared coefficients (L2 penalty), thus balancing feature selection and coefficient stability [20,21].
Elastic Net regression is gaining recognition as an analytical technique for modeling complex data sets in various fields, including hydrology. By combining the advantages of ridge (L2) and lasso (L1) regression, Elastic Net effectively solves problems of multicollinearity and overfitting, which are crucial in hydrological data analysis [19]. Its capability to simultaneously select variables and apply regularization proves to be highly beneficial in hydrological modeling, where numerous factors can affect surface runoff dynamics [20].
The mathematical underpinnings of Elastic Net, grounded in regularization and optimization theory, offer a robust framework for addressing the complexities of multidimensional data spaces [21]. Specifically, regularization constraints in Elastic Net penalize both the absolute values (L1) and squares (L2) of regression coefficients, improving generalization by shrinking less informative coefficients toward zero. Such adaptability is advantageous in dealing with complex variable interactions (e.g., land use, climatic conditions), surpassing some limitations of traditional linear models [22]. Several studies have demonstrated Elastic Net’s efficacy in forecasting river runoff rates [20] and modeling various water quality parameters [23], underscoring its potential for hydrological forecasting and environmental monitoring. Moreover, the method aligns well with current computing trends, given that it scales effectively to large data sets and can identify key predictors in data-intensive research [21,24,25].
Therefore, the aim of this paper is to address an existing gap in hydrological modeling by comparing the accuracy, stability, and computational efficiency of ANN and Elastic Net in surface runoff prediction. The novelty lies in examining how these two contrasting ML approaches handle interactions among a broad set of catchment parameters derived from SWMM simulations. Most past studies have either focused on purely data-driven models or on parametric regression but have not performed a systematic comparison that evaluates both nonlinear neuron-based architectures and regularized linear regression frameworks on identical data sets. The findings from this research may facilitate the adoption of robust AI strategies in operational water resource management and enhance the planning capabilities for urban drainage infrastructure.

2. Materials and Methods

2.1. Storm Water Management Model

In hydrology, a drainage basin is a land unit with a defined area. Its outlet is located at the lowest-lying receiver, and its boundary is the watershed divide, which separates surface runoff. The catchment’s shape is a physiographic parameter that influences the character of the flood surge, the magnitude of the peak flow, and the duration of the flood wave. Catchment geometry is used frequently in studies because it is comparatively easy to determine. The Storm Water Management Model is an accurate and suitable hydrological–hydraulic and water quality simulation model [26]. The SWMM 5.2.2 program uses a non-linear reservoir model to determine the rainwater runoff flowing from the catchment area into the network node [27]. The resulting outflow rate is determined by Formula (1):
$$Q = \frac{W \left(h - h_p\right)^{5/3}}{n_p}\, i_p^{1/2}$$
where Q is the runoff flow rate (m3/s), W is the hydraulic width of the catchment area (m), h is the precipitation depth (m), h_p is the surface retention (depression storage) depth (m), n_p is the equivalent Manning roughness coefficient of the catchment area (s/m^(1/3)), and i_p is the average slope of the catchment area (-).
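For illustration, Formula (1) can be transcribed directly into a short Python function. The function and parameter names below are illustrative, and the sample values are arbitrary.

```python
def runoff_rate(width, h, h_p, n_p, i_p):
    """Runoff flow rate Q (m3/s) from Formula (1). The subcatchment only produces
    runoff once the water depth h exceeds the depression-storage depth h_p."""
    excess_depth = max(h - h_p, 0.0)                  # depth available for runoff (m)
    return width * excess_depth ** (5.0 / 3.0) * i_p ** 0.5 / n_p

# Illustrative values: 50 m flow width, 20 mm of water, 2.54 mm depression storage,
# Manning's n of 0.015, and a 1% slope.
print(runoff_rate(width=50.0, h=0.020, h_p=0.00254, n_p=0.015, i_p=0.01))
```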
SWMM exposes an extensive set of catchment parameters. Based on Rossman [27], Table 1 summarizes the most important of them.

2.2. Artificial Neural Networks

Artificial neural networks are a type of computational model inspired by the structure and functioning of biological neural networks found in the brains of living organisms. They are adaptive learning systems capable of processing input data, analyzing signals, and generating output information [28]. ANNs are particularly useful for modeling complex and fuzzy relationships between input and output data, making them an attractive tool for prediction and classification problems [29]. The basic element of an ANN is an artificial neuron, whose task is to simulate the functions of its biological counterpart. In regression models, a neuron consists of multiple inputs, a summation function, an activation function, and one output. Each input is multiplied by a weight, and these weights are modified during the learning process. The summation function calculates the weighted sum of the input signals, and the activation function introduces nonlinearity by converting the weighted sum into the output of the neuron [28].
The architecture of a neural network usually consists of layers with a minimum of one neuron. There are three main types of layers: input layer, hidden layers, and output layer. The input layer receives signals, hidden layers process them, and the output layer generates the model’s response. The number of hidden layers and the number of neurons in each layer affect the network’s ability to model complex relationships [30]. Network learning involves updating the weights of neurons to minimize the error between predicted and actual values. The most popular learning algorithm in ANNs is the back-propagation algorithm, which modifies the weights in the network in an iterative manner, starting from the output layer and then propagating the error to the hidden layers [31]. Considering a simple neuron with activation function f, weights W, thresholds b, and loss function L, the propagation can be described by Formulas (2) and (3):
$$\Delta W = -\eta \frac{\partial L}{\partial W}$$
$$\Delta b = -\eta \frac{\partial L}{\partial b}$$
where η is the learning rate. The gradients ∂L/∂W and ∂L/∂b can be calculated using the chain rule (4):
$$\frac{\partial L}{\partial W} = \frac{\partial L}{\partial a}\,\frac{\partial a}{\partial W}, \qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial a}\,\frac{\partial a}{\partial b}$$
where a is the neuron’s pre-activation value (the weighted sum before the activation function is applied), and ∂L/∂a is the gradient of the loss function with respect to this value. These gradients are computed recursively from the output layer back to the input layer, which makes it possible to calculate the gradients for the entire network efficiently. The weights (5) and thresholds (6) are then updated based on the calculated gradients, and the process is repeated over multiple epochs or learning iterations. Repeatedly optimizing the weights and thresholds of the neural network so as to minimize the loss function leads to the final model for the problem under consideration.
$$W = W + \Delta W$$
$$b = b + \Delta b$$
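The update rule in Formulas (2)–(6) can be sketched for a single neuron as follows, assuming a ReLU activation and a squared-error loss; the variable names and initial values are illustrative only.

```python
import numpy as np

def update_neuron(W, b, x, y_true, eta=0.001):
    """One gradient-descent step for a single ReLU neuron, following Eqs. (2)-(6)."""
    a = W @ x + b                                   # pre-activation (the 'a' of Eq. (4))
    y_pred = max(a, 0.0)                            # ReLU activation
    # Squared-error loss L = (y_pred - y_true)^2; the chain rule gives dL/da
    dL_da = 2.0 * (y_pred - y_true) * (1.0 if a > 0 else 0.0)
    dL_dW = dL_da * x                               # da/dW = x   (Eq. (4))
    dL_db = dL_da                                   # da/db = 1   (Eq. (4))
    W = W - eta * dL_dW                             # Eqs. (2) and (5)
    b = b - eta * dL_db                             # Eqs. (3) and (6)
    return W, b

W, b = np.full(8, 0.1), 0.0                         # eight inputs, as in this study
W, b = update_neuron(W, b, x=np.ones(8), y_true=1.0)
```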

2.3. Elastic Net

The Elastic Net regression model is a regression analysis technique that combines features of Lasso regression and Ridge regression to achieve flexibility and efficiency in modeling complex relationships. This model is particularly useful in situations where the data are collinear or have a large number of predictors relative to the number of observations. Formally, the optimization problem for Elastic Net is defined by Equation (7):
$$\min_{\beta_0,\,\beta}\; \frac{1}{2n} \sum_{i=1}^{n} \left(y_i - \beta_0 - x_i^{T}\beta\right)^{2} + \lambda \left[ (1-\alpha)\,\lVert\beta\rVert_2^2 + \alpha\,\lVert\beta\rVert_1 \right]$$
where y_i is the value of the dependent variable and x_i is the vector of independent variables for the i-th observation, and β_0 and β are the estimated parameters of the model. The parameter λ controls the overall strength of regularization, while α balances the L1 and L2 regularization terms.
A key feature of this model is its ability to deal effectively with collinearity and high dimensionality in the data. With the combination of the L1 and L2 norms, the model can perform variable selection as in Lasso regression and stabilize parameter estimation as in Ridge regression. Algorithms such as coordinate descent are typically used to solve this optimization problem. It is worth noting that although the model is linear in its predictors, the regularization it employs allows it to model complex relationships in the data effectively, making it useful in a variety of scientific and engineering applications.
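The coordinate-descent updates for the objective in Equation (7) can be sketched as follows, assuming predictors with zero-mean columns (as produced by standardization). This is a didactic illustration; the study itself relies on the scikit-learn implementation described in Section 2.7, and the synthetic data below are placeholders.

```python
import numpy as np

def soft_threshold(z, gamma):
    """L1 proximal operator: sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def elastic_net_cd(X, y, lam, alpha, n_iter=200):
    """Coordinate descent for Equation (7); X is assumed to have zero-mean columns,
    so the intercept reduces to the mean of y."""
    n, p = X.shape
    beta0 = y.mean()
    beta = np.zeros(p)
    r = y - beta0                                   # residual for beta = 0
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * beta[j]               # partial residual without x_j
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam * alpha) / (col_sq[j] + 2 * lam * (1 - alpha))
            r = r - X[:, j] * beta[j]               # restore residual with the new beta_j
    return beta0, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
X -= X.mean(axis=0)                                 # eight zero-mean predictors (placeholder data)
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)
beta0, beta = elastic_net_cd(X, y, lam=0.01, alpha=0.2)
```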

2.4. Data Set

The data set was created through simulations in SWMM. The simulations were conducted on a 1-hectare urban catchment with an existing rainfall collector. The rainfall event, characterized as a Euler-type rainfall, lasted 120 min with a peak intensity of approximately 102 mm/h, corresponding to a return period of 3 years [32]. All simulations were run using the PySWMM 1.2.0 package. Flow routing was set to Kinematic Wave (KINWAVE), and the Horton infiltration model was employed with a maximum infiltration rate of 75 mm/h, a minimum infiltration rate of 10 mm/h, a decay constant of 4 1/h, a drying time of 7 days, and a maximum infiltration volume of 0. No Low Impact Development (LID) practices were included.
Table 2 presents the parameter ranges for width, slope, impervious percentage, Manning’s coefficients for impervious and pervious areas, depression storage, and the percentage of impervious area without depression storage. These simulations yielded a total of 116,640 samples, providing insight into the effects of each catchment property on surface runoff. For descriptive statistics, please refer to Table 3.
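The full factorial grid of the Table 2 values can be enumerated as sketched below; run_swmm_simulation is a hypothetical placeholder for the PySWMM call that applies one parameter set to the 1-hectare subcatchment and returns the simulated runoff.

```python
from itertools import product

# Parameter values from Table 2; the full grid has 9*10*6*2*3*2*3*6 = 116,640 combinations.
grid = {
    "Width":      [1, 5, 10, 25, 50, 75, 100, 125, 150],
    "Slope":      [0.1, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    "PercImperv": [1, 20, 40, 60, 80, 100],
    "N-Imperv":   [0.011, 0.015],
    "N-Perv":     [0.13, 0.40, 0.80],
    "D-Imperv":   [1.27, 2.54],
    "D-Perv":     [2.54, 5.08, 7.62],
    "PctZero":    [1, 20, 40, 60, 80, 100],
}

samples = []
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    # Hypothetical helper: apply the parameters, run the 120-min Euler event in
    # PySWMM, and return the resulting runoff.
    # params["Runoff"] = run_swmm_simulation(params)
    samples.append(params)

print(len(samples))  # 116640
```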
The parameter ranges utilized in this study were carefully selected to represent realistic conditions typical of a 1-hectare urban catchment. For instance, width values correspond to potential overland flow paths observed in depressions formed by natural topography or engineered structures such as roadways. This parameter is heavily influenced by the catchment’s geometry, which plays a pivotal role in determining runoff dynamics. The slope range encompasses gradients from nearly flat terrain to moderately steep areas, thereby reflecting diverse urban topographies. In accordance with the USDA slope classification, this range includes “little or no slope” (0–3%) and “gentle slopes” (4–9%), ensuring alignment with established terrain categories.
Similarly, impervious percentage values span from 1% (typical of parks or other permeable landscapes) to 100% (characteristic of densely developed urban areas). Depression storage values for impervious surfaces align with conditions such as smooth asphalt or textured concrete, which exhibit varying capacities to retain water. For pervious surfaces, these values correspond to a spectrum of conditions ranging from sandy soils with minimal retention to forest litter capable of significant water storage. By incorporating these parameter ranges, the study ensures a realistic and comprehensive representation of urban catchment hydrology.

2.5. Analysis of Catchment Area Features

Understanding the connections between catchment parameters and surface runoff is crucial for urban infrastructure planning, water engineering, and managing the risk of extreme hydrological events. In this study, we utilized the same 1-hectare urban catchment described in Section 2.4. Table 4 summarizes the default parameter values applied in our simulations, while also indicating the respective ranges over which each parameter was varied. These default values represent typical conditions for an urban catchment, whereas the simulated ranges ensure that a broad spectrum of plausible scenarios is captured.
Among these parameters, Width is not inherently constrained by SWMM but is primarily influenced by the size and geometry of the catchment. In practical terms, values such as 1, 100, 125, and 150 m are considered extreme for a 1-hectare area, and these extremes were included as upper limits to examine the full range of potential responses.
To clarify how these parameters influence runoff, a correlation analysis was carried out using SWMM output data from simulations performed with the pyswmm and swmmio 0.6.0 packages. Our analysis identified a significant positive correlation between surface runoff and the parameters Width, Slope, and PercImperv. Spearman correlation coefficients of +1.0 (p-value: 0.0) were observed for these relationships (Figure 1a–c), indicating a deterministic, strictly monotonic relationship under the controlled simulation settings. In contrast, N-Imperv, N-Perv, D-Imperv, and D-Perv each displayed a complete negative correlation (–1.0, p-value: 0.0) with runoff (Figure 1d–h). This result implies that higher values of these parameters systematically reduce runoff.
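The correlation computation itself can be sketched as follows, assuming the one-parameter-at-a-time simulation results are collected in a pandas DataFrame df with the column names used in Figure 1; the function name is illustrative.

```python
import pandas as pd
from scipy.stats import spearmanr

def runoff_correlations(df: pd.DataFrame) -> pd.DataFrame:
    """Spearman rank correlation of each catchment parameter with simulated runoff."""
    rows = []
    for col in ["Width", "Slope", "PercImperv", "N-Imperv",
                "N-Perv", "D-Imperv", "D-Perv", "PctZero"]:
        rho, p_value = spearmanr(df[col], df["Runoff"])
        rows.append({"parameter": col, "spearman_rho": rho, "p_value": p_value})
    return pd.DataFrame(rows)
```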
These strong correlations underscore the substantial role of impervious surfaces in generating runoff, particularly for the PercImperv metric. Because it reflects the fraction of the catchment area that is impervious, PercImperv directly correlates with the volume of runoff produced during rainfall events. Meanwhile, the negative correlations of N-Imperv, N-Perv, D-Imperv, and D-Perv illustrate their runoff-mitigating effect. Such findings highlight the importance of accurately parameterizing surface roughness and depression-storage features in hydrological models.
Although deterministic correlations of ±1.0 are unlikely under real-world conditions (where multiple factors typically interact), these results provide critical insights into the isolated influence of each parameter when other variables remain fixed. Consequently, incorporating a broad set of parameters (e.g., terrain, soil type, land cover, meteorological data, and anthropogenic factors) enhances the predictive accuracy of hydrological models. Therefore, considering as many relevant factors as possible is necessary to ensure the robustness and reliability of predictive models in urban stormwater applications.
Moreover, these outcomes are consistent with the theoretical runoff relationship expressed in Equation (1). The consistency of the simulation results with the theoretical formulation confirms the accuracy and reliability of the pyswmm and swmmio packages.

2.6. Structure of the Neural Network Model

A feed-forward neural network was employed to predict surface runoff based on eight input features. The model architecture is outlined as follows:
  • Normalization Layer: This layer standardizes input features based on the mean and standard deviation of the training data.
  • Dense Layer (Input Layer): Eight neurons with a Rectified Linear Unit (ReLU) activation function.
  • Dense Layer (Hidden Layer 1): Thirty-two neurons with ReLU activation.
  • Dense Layer (Hidden Layer 2): Sixteen neurons with ReLU activation.
  • Dense Layer (Output Layer): One neuron with a linear activation function for continuous runoff prediction.
To identify the optimal network configuration, a grid search was conducted, systematically evaluating the following hyperparameters:
  • Number of neurons in hidden layers: 8, 16, 32, and 64 neurons.
  • Dropout rate: 0.0 (no dropout), 0.2, 0.3, and 0.4.
  • Learning rate: 0.001 and 0.0005.
  • Number of epochs: Up to 1000 with early stopping (patience = 20).
  • Batch size: 16 and 32.
Each hyperparameter combination was trained and validated using mean squared error (MSE) and mean absolute error (MAE) metrics. The final model configuration was selected based on superior performance on the validation set, resulting in the architecture described above. The model was compiled with the Adam optimizer, a mean squared error loss function, and MAE and MSE as performance metrics. Training was conducted with a batch size of 32 and a validation split of 0.2. Early stopping was implemented to terminate training if the validation loss did not improve over 20 consecutive epochs. The final configuration consisted of 922 total parameters, of which 905 were trainable and 17 were associated with the normalization layer.
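The study does not name the deep learning framework used; the sketch below assumes a TensorFlow/Keras implementation consistent with the architecture and training settings described above (with this layout the parameter count works out to the reported 922 total and 905 trainable parameters). Variable names are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_model(X_train: np.ndarray) -> tf.keras.Model:
    norm = layers.Normalization()              # 17 non-trainable standardization statistics
    norm.adapt(X_train)                        # learn mean and variance from the training data
    model = tf.keras.Sequential([
        norm,
        layers.Dense(8, activation="relu"),    # input layer
        layers.Dense(32, activation="relu"),   # hidden layer 1
        layers.Dense(16, activation="relu"),   # hidden layer 2
        layers.Dense(1, activation="linear"),  # continuous runoff output
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae", "mse"])
    return model

# Training setup as described: up to 1000 epochs, batch size 32, 20% validation split,
# early stopping with patience 20 on the validation loss.
# model = build_model(X_train)
# model.fit(X_train, y_train, epochs=1000, batch_size=32, validation_split=0.2,
#           callbacks=[tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20)])
```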

2.7. Structure of the Elastic Net Regression Model

An Elastic Net regression model was employed to predict surface runoff based on eight catchment features, implemented in Python’s scikit-learn library. The following steps summarize the model setup and selection process:
  • Preprocessing: All predictive variables were standardized to a mean of zero and a standard deviation of one using the StandardScaler class, ensuring a consistent scale for all features.
  • Hyperparameter Optimization: A grid search procedure systematically explored combinations of the α (overall regularization strength) and l1_ratio (balance between the L1 and L2 penalties) hyperparameters. The ranges tested were α ∈ {0.0001, 0.01, 0.1, 1, 10, 100} and l1_ratio ∈ {0.001, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}. Each combination was evaluated using fivefold cross-validation, optimizing for mean squared error. The best-performing parameters were found to be α = 0.01 and l1_ratio = 0.2, and these values were adopted in the final model.
  • Data Partitioning: The data set was split into training (80%) and testing (20%) subsets for performance evaluation on unseen data.
  • Optimization Algorithm: Coordinate Descent was employed to minimize the Elastic Net objective function.
  • Predictive Variables: The same eight features used in the neural network model were included, reflecting various catchment attributes.
  • Outcome Variable: A single continuous outcome, surface runoff, was predicted.
After fitting the Elastic Net model with the optimal hyperparameters, the final regression equation can be expressed as Equation (8):
Runoff = 130.9443 + 1.7269 × Width + 1.3635 × Slope + 86.7432 × Imperv − 0.2292 × N-Imperv − 0.3149 × N-Perv − 1.5519 × Dstore-Imperv − 0.5924 × Dstore-Perv + 3.1201 × %Zero-Imperv
Positive coefficients indicate an increase in runoff when the corresponding parameter value grows, whereas negative coefficients suggest a mitigating effect on runoff. The intercept represents the baseline runoff level when all predictors are zero (after standardization).
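The workflow summarized above can be sketched with scikit-learn as follows; the synthetic data, the random seed, and the max_iter setting are illustrative placeholders standing in for the SWMM-derived data set.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 8))                    # placeholder for the eight catchment features
y = X @ rng.uniform(size=8)                        # placeholder for the simulated runoff

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline([("scaler", StandardScaler()),
                     ("model", ElasticNet(max_iter=10000))])
param_grid = {"model__alpha":    [0.0001, 0.01, 0.1, 1, 10, 100],
              "model__l1_ratio": [0.001, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)
print(search.best_params_)       # reported optimum for the study's data: alpha = 0.01, l1_ratio = 0.2
print(search.score(X_test, y_test))
```

Note that scikit-learn parameterizes the penalty slightly differently from Equation (7) (its L2 term carries an extra factor of one half), so alpha and l1_ratio correspond to λ and α only approximately.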

3. Results

The objective of this research was to evaluate the performance of Elastic Net and Artificial Neural Network models in hydrological catchment data analysis. As detailed in Section 2.4, the final data set comprised 116,640 samples generated using SWMM, capturing a range of catchment parameters alongside the Runoff outcome variable. The data were partitioned into training (80%) and testing (20%) subsets, and both models were implemented according to the configurations described in Section 2.6 (ANN) and 2.7 (Elastic Net). The subsequent results compare and contrast their predictive accuracy, stability, and computational efficiency.

3.1. Performance Metrics

Both models underwent evaluation using a set of performance measures, which included mean squared error, R2 score, Index of Agreement (d), Scatter Index (SI), and Root Mean Square Error (RMSE). Additionally, inference time was utilized as a performance indicator. Table 5 presents a summary of the metrics for both models, providing a thorough overview of their forecasting precision and computational efficiency.
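MSE and the R2 score are available directly in scikit-learn, whereas the Index of Agreement and the Scatter Index are not built in. The sketch below uses their common formulations (Willmott's index of agreement, and RMSE normalized by the mean observation); these exact definitions are assumed, since the paper does not state which formulations were applied.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def index_of_agreement(obs, pred):
    """Willmott's index of agreement d (common formulation, assumed here)."""
    obs, pred = np.asarray(obs, dtype=float), np.asarray(pred, dtype=float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

def scatter_index(obs, pred):
    """Scatter Index: RMSE divided by the mean observed value (assumed definition)."""
    return np.sqrt(mean_squared_error(obs, pred)) / np.mean(obs)

def evaluate(obs, pred):
    """Collect the metrics reported in Table 5 for one model."""
    return {"MSE": mean_squared_error(obs, pred),
            "R2": r2_score(obs, pred),
            "d": index_of_agreement(obs, pred),
            "SI": scatter_index(obs, pred),
            "RMSE": np.sqrt(mean_squared_error(obs, pred))}
```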

3.2. Model Evaluation

Figure 2 illustrates the predictive accuracy of the ANN and Elastic Net models on the test data set. The plots compare actual versus predicted runoff values and include a regression line representing an ideal fit. The ANN achieved a notably lower MSE (21.257) and higher R2 (0.997) compared to the Elastic Net model (MSE of 340.134, R2 of 0.953). In addition, the ANN recorded a near-perfect Index of Agreement (d = 0.999), a low Scatter Index (SI = 0.037), and an RMSE of 4.610. By contrast, the Elastic Net model exhibited a higher RMSE (18.443) and a moderately lower Index of Agreement (d = 0.988), indicating a less precise fit.

3.3. Error Distribution Analysis

A statistical analysis of the error distribution for both the ANN and Elastic Net models was performed (Figure 3). The metrics considered included the mean, median, standard deviation, and selected percentiles of the error. For the ANN model, the mean error was 0.08 m3/s with a standard deviation of 1.51 m3/s, and the median error was −0.03 m3/s. The fifth and 95th percentiles indicate that 90% of the errors fall within the range of −2.13 to 2.78 m3/s.
By contrast, the Elastic Net model exhibited a mean error of 0.08 m3/s, a standard deviation of 18.44 m3/s, and a median error of 3.80 m3/s. Its fifth and 95th percentiles reveal that 90% of the errors lie between −24.23 and 16.10 m3/s, underscoring a much broader error distribution. These findings confirm that the ANN model demonstrates greater stability and narrower error margins in predicting runoff.

3.4. Comparative Analysis of Prediction Error

A comparative analysis was conducted to evaluate the differences between the predictions of the ANN and Elastic Net models (Figure 4). A histogram of these error differences (Figure 4a) indicates a mean error difference of 10.62 m3/s, a standard deviation of 14.96 m3/s, a minimum of 0.00008 m3/s, and a maximum of 183.89 m3/s. The red line marks the 90th percentile threshold at 17.36 m3/s, above which approximately 10% of the cases lie.
Figure 4b provides a point-wise overview, highlighting instances where the error difference exceeds 17.36 m3/s. While most samples exhibit relatively minor discrepancies between the two models, the outliers suggest that under certain conditions—specifically involving extreme values of the Width parameter, namely 1 and 5 m—the ANN and Elastic Net approaches diverge substantially in their runoff predictions. These observations supplement the findings reported in Section 3.2 and Section 3.3, illustrating how each model responds differently to specific catchment configurations and validating the need to select modeling approaches based on both accuracy requirements and potential extreme-use scenarios.
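The divergence analysis of Figure 4 can be sketched as follows. The per-sample difference is taken here as the absolute difference between the two models' predictions, which is one plausible reading of the error difference reported above; the array names are illustrative.

```python
import numpy as np

def prediction_difference_summary(pred_ann, pred_enet):
    """Per-sample divergence between the two models, as visualized in Figure 4."""
    diff = np.abs(np.asarray(pred_ann) - np.asarray(pred_enet))   # absolute difference (m3/s)
    threshold = np.percentile(diff, 90)                           # red line in Figure 4a
    outliers = diff > threshold                                   # roughly 10% of cases exceed it
    return diff, threshold, outliers
```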

3.5. Efficiency Analysis

In addition to the accuracy metrics, the computational efficiency of each model was profiled using Intel VTune (version 2025.0.1). The ANN and Elastic Net models were compiled into executable files (ann.exe and elasticnet.exe, respectively) and executed on an Intel(R) Core (TM) i7-6920HQ CPU @ 2.90 GHz. The profiling was performed by invoking system-level API calls, and Table 6 summarizes the results, including CPU time, clock ticks, and the total number of retired instructions.
The ANN model required longer CPU time (29.79 s) and recorded approximately 1.06 × 10¹¹ clock ticks, retiring about 2.01 × 10¹¹ instructions. By contrast, the Elastic Net model completed its operations in 12.42 s, with 3.99 × 10¹⁰ clock ticks and 3.85 × 10¹⁰ retired instructions. This difference is consistent with the inference times reported in Table 5, where Elastic Net demonstrated faster single-batch predictions. However, the ANN may still offer advantages in scenarios involving parallelized computations or larger batch sizes, as its higher computational cost often correlates with improved modeling capacity in complex tasks.
The presence of an internal normalization layer in the ANN eliminated the need for an external scaling step, whereas Elastic Net required additional data transformations prior to making predictions. Nonetheless, the profiling results confirm that, despite its computational overhead, the ANN model achieves superior predictive performance, while the Elastic Net model provides a more lightweight alternative when lower resource consumption is a priority.
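As a lighter-weight complement to the VTune profiling applied here, single-model inference time can also be estimated directly in Python. The sketch below is an illustrative wall-clock measurement, not the profiling procedure used in this study; the model and data names are placeholders.

```python
import time

def mean_inference_time(predict_fn, X, repeats=10):
    """Average wall-clock time of predict_fn(X) over several repeats (seconds)."""
    start = time.perf_counter()
    for _ in range(repeats):
        predict_fn(X)
    return (time.perf_counter() - start) / repeats

# Usage (names illustrative):
# mean_inference_time(ann_model.predict, X_test)
# mean_inference_time(elastic_net_pipeline.predict, X_test)
```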

3.6. Real-World Validation

A final validation was carried out on a 10-hectare catchment located in the Rakowiec Estate in Wrocław, Poland [32]. This area comprises 37 sub-catchments (P1–P37), reflecting a mixture of residential, commercial, and green spaces. Infiltration, rainfall, and hydraulic configurations remained consistent with the descriptions in Section 2.4, and no Low Impact Development measures were included. To standardize the evaluation, each sub-catchment was effectively rescaled to 1 hectare, ensuring that the Width and Runoff parameters align with the ranges present in the training data.
Two Euler Type II rainfall events, each with a 3-year return period, were applied:
  • A 60-min storm featuring a peak intensity of 101.71 mm/h,
  • A 120-min storm featuring a peak intensity of 102 mm/h.
The 120-min event had been used to derive part of the training data set, whereas the 60-min event was new and thus lay partially outside the training distribution. Table 7 presents the predictive performance of the Artificial Neural Network (ANN) and Elastic Net models after rescaling the catchment data.
The ANN achieved a near-perfect fit (R2 = 1.00) for the 120-min event, reflecting its alignment with the training regime. By contrast, the 60-min event posed challenges for both models, most notably for the Elastic Net model, which attained an R2 of −0.84—indicative of a poor fit relative to the baseline. The ANN also showed reduced accuracy for the 60-min storm (R2 = 0.25) compared to its performance under controlled or previously trained conditions. These discrepancies highlight how variations in storm duration and intensity, particularly those unaccounted for during training, can diminish model reliability.

3.7. Sensitivity Analysis

Both models remain sensitive to parameter distributions and hydrologic conditions encountered in practice. The particularly weak performance on the 60-min event for Elastic Net underscores its reliance on data similarity between training and validation scenarios. Meanwhile, the ANN model shows better adaptability but still experiences a drop in accuracy when parameters differ significantly from those represented in the training set. Changes in infiltration processes or unknown factors (e.g., unmodeled soil heterogeneity) can also lead to underestimation or overestimation of runoff, further indicating the need to train on a larger data set that takes into account different precipitation events.
These findings affirm that rescaling sub-catchments to 1 hectare helps align real-world data with the training distributions used by ANN and Elastic Net. Nevertheless, the 60-min scenario reveals how untrained events can weaken performance. Practitioners aiming to generalize these models should ensure a sufficiently broad variety of storm events, infiltration parameters, and catchment geometries during training. If conditions such as extreme rainfall intensities or novel infiltration characteristics arise—beyond what the model has learned—retraining or fine-tuning may be necessary to maintain robust predictive capabilities.

4. Discussion and Concluding Remarks

This study compared Elastic Net regression and ANN for predicting surface runoff in urban catchments, focusing on accuracy, stability, and computational efficiency. Overall, the findings demonstrate that ANN provides superior predictive performance, particularly under complex or nonlinear conditions, whereas Elastic Net excels in computational speed and model interpretability.

4.1. Elastic Net Limitations Under Low-Flow Scenarios

The results presented in Section 3.2 and Section 3.3 underscore Elastic Net’s weaker performance when runoff is low. Because it relies on linear regularization to stabilize parameter estimation, its capability to capture intricate flow mechanisms diminishes under mild runoff conditions that deviate from the training distribution. One practical way to moderate these issues is to incorporate additional low-flow events in the training set, ensuring that the model learns the specific behaviors of minor flow regimes. Although Elastic Net can be enhanced through ensemble or hybrid frameworks, its fundamental linear assumptions often prove insufficient in fully representing nonlinear processes. Nevertheless, Elastic Net remains viable for scenarios where inputs do not stray far from its training regime and rapid inference is critical.

4.2. Computational Trade-Offs and Potential Applications

The efficiency profiles in Section 3.5 indicate that Elastic Net significantly outperforms ANN in terms of CPU time and clock ticks, making it suitable for large-scale applications requiring quick, repeated simulations or real-time responses. For instance, an emergency warning system may prioritize near-instantaneous results over marginal improvements in predictive accuracy, favoring Elastic Net’s lightweight execution. In contrast, ANN demands higher computational overhead—both in training and inference—but yields more accurate predictions whenever the hydrological setting is complex or includes a broader range of parameter interactions. This trade-off suggests that Elastic Net’s computational efficiency may outweigh its reduced accuracy for extensive, iterative assessments, while ANN is better suited for focused studies requiring high fidelity in modeling runoff volumes.

4.3. Practical Implications for Urban Hydrology and Flood Risk Management

The performance differences between the two methods carry notable consequences for urban planners and hydrologists charged with designing and maintaining stormwater management systems. ANN offers the ability to model nonlinear interactions among various parameters (e.g., slope, imperviousness, infiltration), thereby providing more reliable estimates of peak flows and flood extents. This is especially relevant for locales regularly experiencing intense rainfalls or complex catchment dynamics. Conversely, Elastic Net may suffice for scenarios in which speed, simplicity, or interpretability is paramount, such as preliminary planning exercises or computationally constrained environments.

4.4. Integrating ANN Models and Transfer Learning

Despite the higher computational demands of ANN, contemporary machine learning frameworks support advanced techniques that can enhance practical deployment. One such method is transfer learning, where a pretrained ANN can be fine-tuned or adapted to new data with minimal retraining. Engineers can leverage previously trained networks on partially overlapping conditions—such as catchments with similar hydrological characteristics or rainfall patterns—and then update a subset of weights using fresh data. This approach can expedite model deployment in new regions, reduce the need for large data sets, and mitigate overfitting when novel training examples are limited. Elastic Net, by contrast, does not inherently offer such transfer learning capabilities, making ANN a stronger candidate for iterative improvements and knowledge transfer across related urban catchments.
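An illustrative sketch of this transfer-learning workflow is shown below, assuming a Keras network such as the one in Section 2.6; the number of re-trained layers, the learning rate, and the epoch budget are assumptions rather than settings taken from the study.

```python
import tensorflow as tf

def fine_tune(pretrained: tf.keras.Model, X_new, y_new,
              trainable_layers=1, learning_rate=1e-4, epochs=100):
    """Adapt a pretrained runoff ANN to a new catchment by updating only its last layers."""
    for layer in pretrained.layers[:-trainable_layers]:
        layer.trainable = False                                   # freeze the earlier layers
    pretrained.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mse")
    pretrained.fit(X_new, y_new, epochs=epochs, batch_size=32,
                   validation_split=0.2, verbose=0)
    return pretrained
```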

4.5. Acknowledging ANN Weaknesses

Although ANN consistently achieves higher accuracy, it is not immune to pitfalls. Overfitting remains a primary concern, particularly when the training set does not represent the variability of real-world conditions. Regularization strategies (e.g., dropout) and regular retraining help maintain robustness. Another potential limitation is the model’s reliance on ample computational resources for both training and inference. For smaller-scale applications or quick model updates, the heavier resource footprint may prove prohibitive unless parallelization or cloud-based solutions are employed. Finally, ANN models perform best when trained on wide-ranging data sets that capture different storm intensities and catchment topographies. Restricted training distributions can reduce their capacity to generalize to novel conditions.

4.6. Future Directions and Recommendations

The results highlight that ANN offers higher predictive accuracy and flexibility, whereas Elastic Net is faster and easier to interpret. Both are best applied under conditions resembling their training data but exhibit diminished reliability for significantly unrepresented scenarios. As a result, future efforts could encompass broader, more diverse training sets—including additional low-flow events and extreme storms—to reinforce the models’ generalizability. Research on ensemble methods may also help unify the computational efficiency of Elastic Net with the adaptive benefits of ANN. Moreover, developing domain-adaptation or continuous-learning techniques could keep both models current as new data or LID measures appear. ANN remains the superior choice where high accuracy is essential, while Elastic Net is justified in large-scale or speed-sensitive operations, especially if the storm profiles align well with its training range. Transfer learning offers ANN the advantage of ongoing adaptability, making it potentially more valuable over time in dynamic urban hydrological environments. Adopting either model requires careful consideration of data coverage, computational resources, and the anticipated real-world scenarios, ensuring that predictive insights effectively guide urban planning and flood risk management.

Author Contributions

Conceptualization, J.D. and R.B.; Methodology, R.B.; Software, R.B.; Validation, J.D. and R.B.; Formal analysis, R.B.; Investigation, R.B.; Resources, R.B.; Data curation, R.B.; Writing—original draft preparation, R.B.; Writing—review and editing, J.D.; Visualization, R.B.; Supervision, J.D.; Project administration, J.D.; Funding acquisition, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

The research was carried out as part of work No. WZ/WB-IIŚ/3/2022 at the Białystok University of Technology and financed by a research subsidy provided by the Minister of Education and Science.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We thank the reviewers for their valuable comments and constructive criticism, which have significantly improved the quality of our manuscript. We appreciate their time and dedication in the review process.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Johnson, B.E.; Julien, P.Y.; Molnar, D.K.; Watson, C.C. The Two-Dimensional Upland Erosion Model CASC2D-SED. JAWRA J. Am. Water Resour. Assoc. 2000, 36, 31–42.
  2. Dehotin, J.; Breil, P.; Braud, I.; de Lavenne, A.; Lagouy, M.; Sarrazin, B. Detecting surface runoff location in a small catchment using distributed and simple observation method. J. Hydrol. 2015, 525, 113–129.
  3. Pitman, A.J.; Slater, A.G.; Desborough, C.E.; Zhao, M. Uncertainty in the simulation of runoff due to the parameterization of frozen soil moisture using the Global Soil Wetness Project methodology. J. Geophys. Res. Atmos. 1999, 104, 16879–16888.
  4. Beighley, R.E.; Dunne, T.; Melack, J.M. Understanding and modeling basin hydrology: Interpreting the hydrogeological signature. Hydrol. Process. Int. J. 2005, 19, 1333–1353.
  5. Molina, A.; Govers, G.; Vanacker, V.; Poesen, J.; Zeelmaekers, E.; Cisneros, F. Runoff generation in a degraded Andean ecosystem: Interaction of vegetation cover and land use. CATENA 2007, 71, 357–370.
  6. Yaseen, Z.M.; El-Shafie, A.; Jaafar, O.; Afan, H.A.; Sayl, K.N. Artificial intelligence based models for stream-flow forecasting: 2000–2015. J. Hydrol. 2015, 530, 829–844.
  7. Zhang, G.; Patuwo, B.E.; Hu, M.Y. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast. 1998, 14, 35–62.
  8. Kubat, M. Neural networks: A comprehensive foundation by Simon Haykin, Macmillan, 1994, ISBN 0-02-352781-7. Knowl. Eng. Rev. 1999, 13, 409–412.
  9. Assaf, M.N.; Manenti, S.; Creaco, E.; Giudicianni, C.; Tamellini, L.; Todeschini, S. New optimization strategies for SWMM modeling of stormwater quality applications in urban area. J. Environ. Manag. 2024, 361, 121244.
  10. Nil; Baranwal, A.; Das, B.S. Clear-water and live-bed scour depth modelling around bridge pier using support vector machine. Can. J. Civ. Eng. 2023, 50, 445–463.
  11. Baranwal, A.; Das, B.S. Live-Bed Scour Depth Modelling Around the Bridge Pier Using ANN-PSO, ANFIS, MARS, and M5Tree. Water Resour. Manag. 2024, 38, 4555–4587.
  12. Gaurav, R.K.; Das, B.S.; Khuntia, J.R.; Devi, K. Apparent shear force modelling in compound open channel using Support Vector Machine. In River Flow 2022; CRC Press: Boca Raton, FL, USA, 2024; pp. 61–70.
  13. Kadiyala, S.P.; Woo, W.L. Flood Prediction and Analysis on the Relevance of Features using Explainable Artificial Intelligence. arXiv 2022, arXiv:2201.05046.
  14. Yang, T.; Asanjan, A.A.; Welles, E.; Gao, X.; Sorooshian, S.; Liu, X. Developing reservoir monthly inflow forecasts using artificial intelligence and climate phenomenon information. Water Resour. Res. 2017, 53, 2786–2812.
  15. Al Mehedi, A.; Smith, V.; Hosseiny, H.; Jiao, X. Unraveling the complexities of urban fluvial flood hydraulics through AI. Sci. Rep. 2022, 12, 18738.
  16. Muftah, H.; Rowan, T.S.L.; Butler, A.P. Towards open-source LOD2 modelling using convolutional neural networks. Model. Earth Syst. Environ. 2022, 8, 1693–1709.
  17. Govindaraju, R.S. Artificial neural networks in hydrology. II: Hydrologic applications. J. Hydrol. Eng. 2000, 5, 124–137.
  18. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 2005, 67, 301–320.
  19. Worland, S.C.; Farmer, W.H.; Kiang, J.E. Improving predictions of hydrological low-flow indices in ungaged basins using machine learning. Environ. Model. Softw. 2018, 101, 169–182.
  20. Sayari, S.; Meymand, A.M.; Aldallal, A.; Zounemat-Kermani, M. Meta-learner methods in forecasting regulated and natural river flow. Arab. J. Geosci. 2022, 15, 1051.
  21. James, G. An Introduction to Statistical Learning; Springer: New York, NY, USA, 2013.
  22. Abed, M.; Imteaz, M.A.; Ahmed, A.N.; Huang, Y.F. Application of long short-term memory neural network technique for predicting monthly pan evaporation. Sci. Rep. 2021, 11, 20742.
  23. Ahmed, U.; Mumtaz, R.; Anwar, H.; Shah, A.A.; Irfan, R.; García-Nieto, J. Efficient water quality prediction using supervised machine learning. Water 2019, 11, 2210.
  24. Hastie, T.; Tibshirani, R.; Wainwright, M. Statistical learning with sparsity. Monogr. Stat. Appl. Probab. 2015, 143, 8.
  25. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260.
  26. Shinma, T.A.; Reis, L.F.R. Multiobjective automatic calibration of the storm water management model (SWMM) using non-dominated sorting genetic algorithm II (NSGA-II). In Proceedings of the World Environmental and Water Resources Congress 2011: Bearing Knowledge for Sustainability, Palm Springs, CA, USA, 22–26 May 2011; pp. 598–607.
  27. Rossman, L.A. Storm Water Management Model User’s Manual, Version 5.0; National Risk Management Research Laboratory, Office of Research and Development, US Environmental Protection Agency: Cincinnati, OH, USA, 2010; p. 276.
  28. Herberg, E. Neural Network Architectures. arXiv 2023, arXiv:2304.05133.
  29. Seyedashraf, O.; Bottacin-Busolin, A.; Harou, J.J. A Disaggregation-Emulation Approach for Optimization of Large Urban Drainage Systems. Water Resour. Res. 2021, 57, e2020WR029098.
  30. Kim, H.I.; Keum, H.J.; Han, K.Y. Real-time urban inundation prediction combining hydraulic and probabilistic methods. Water 2019, 11, 293.
  31. Mendrofa, G.A.; Hertono, G.F.; Handari, B.D. Ensemble Learning Model on Artificial Neural Network-Backpropagation (ANN-BP) Architecture for Coal Pillar Stability Classification. arXiv 2023, arXiv:2303.16524.
  32. Kazmierczak, B.; Kotowski, A.; Dancewicz, A. Verification of Storm Sewerage Sizing Methods with the Hydrodynamic Model SWMM 5.0 for the Municipality of Wroclaw. Ochr. Srodowiska 2012, 34, 25–31. (In Polish)
Figure 1. Dependence of runoff on subcatchment properties: (a) Dependence of runoff on subcatchment width; (b) Dependence of runoff on subcatchment imperviousness; (c) Dependence of runoff on subcatchment slope; (d) Dependence of runoff on subcatchment N-Imperv; (e) Dependence of runoff on subcatchment N-Perv; (f) Dependence of runoff on subcatchment D-Imperv; (g) Dependence of runoff on subcatchment D-Perv; (h) Dependence of runoff on subcatchment PctZero.
Figure 2. Comparison of flow forecasting accuracy by ANN and Elastic Net models with test data: (a) ANN model. Points represent actual data versus predicted values. The regression line represents a perfect fit. The ANN model achieves high prediction accuracy (MSE = 21.257, R2 = 0.997); (b) Elastic Net model. Points represent actual data versus predicted values. The regression line represents a perfect fit. The Elastic Net model has significant prediction error, especially at low flows (MSE = 340.134, R2 = 0.953).
Figure 3. Statistical analysis of the error distribution for the ANN and Elastic Net models: (a) ANN model. The mean error is 0.08 m3/s with a standard deviation of 1.51 m3/s; 90% of the errors are between −2.13 and 2.78 m3/s. The median error is −0.03 m3/s. (b) Elastic Net model. The mean error is 0.08 m3/s with a standard deviation of 18.44 m3/s; 90% of the errors range from −24.23 to 16.10 m3/s. The median error is 3.80 m3/s.
Figure 4. Analysis of the error difference between the ANN and Elastic Net models: (a) Histogram of absolute error differences (m3/s). The red line at 17.36 m3/s indicates the 90th percentile threshold. (b) Scatter plot highlighting cases below or above the threshold. Approximately 10% of the differences exceed 17.36 m3/s, revealing conditions under which the two models diverge considerably.
Table 1. Basic catchment parameters in the Storm Water Management Model program.
Feature Name | Unit | Description
Area | ha | Area of the subcatchment including any LID controls.
Width | m | Characteristic width of the overland flow path for sheet flow runoff.
Slope | % | Average percent slope of the subcatchment.
Imperv | % | Percent of the land area (not including any LIDs) that is impervious.
N-Imperv | - | Manning’s n for overland flow over the impervious portion of the subcatchment.
N-Perv | - | Manning’s n for overland flow over the pervious portion of the subcatchment.
Dstore-Imperv | mm | Depth of depression storage on the impervious portion of the subcatchment.
Dstore-Perv | mm | Depth of depression storage on the pervious portion of the subcatchment.
% Zero-Imperv | % | Percent of the impervious area with no depression storage.
Infiltration Data | - | Infiltration model (Horton, Modified Horton, Green–Ampt, Modified Green–Ampt, Curve Number (SCS)).
Table 2. Ranges of simulated data.
Subcatchment Feature | Unit | Values
Width | m | [1, 5, 10, 25, 50, 75, 100, 125, 150]
Slope | % | [0.1, 1, 2, 3, 4, 5, 6, 7, 8, 9]
PercImperv | % | [1, 20, 40, 60, 80, 100]
N-Imperv | - | [0.011, 0.015]
N-Perv | - | [0.13, 0.40, 0.80]
D-Imperv | mm | [1.27, 2.54]
D-Perv | mm | [2.54, 5.08, 7.62]
PctZero | % | [1, 20, 40, 60, 80, 100]
Table 3. Descriptive statistics of the data set.
Feature | Mean | Std | Min | 25% | 50% | 75% | Max
PercImperv | 87.50 | 42.70 | 25 | 50 | 87.5 | 125 | 150
Width | 4.51 | 2.86 | 0.1 | 2 | 4.5 | 7 | 9
PercSlope | 50.17 | 33.91 | 1 | 20 | 50 | 80 | 100
N-Imperv | 0.01 | 0.00 | 0.011 | 0.011 | 0.013 | 0.015 | 0.015
N-Perv | 0.44 | 0.28 | 0.13 | 0.13 | 0.4 | 0.8 | 0.8
D-Imperv | 1.91 | 0.64 | 1.27 | 1.27 | 1.905 | 2.54 | 2.54
D-Perv | 5.08 | 2.07 | 2.54 | 2.54 | 5.08 | 7.62 | 7.62
PctZero | 50.17 | 33.91 | 1 | 20 | 50 | 80 | 100
Runoff | 131.56 | 88.26 | 2.5 | 53.1 | 127.15 | 210.1 | 271.7
Table 4. Default parameter values and simulated ranges.
Parameter | Unit | Default | Simulated Range
Width | m | 50 | 1–150 (in increments of 1)
% Slope | % | 1 | 0.1–100 (in increments of 1)
% Imperv | % | 50 | 0.1–100 (in increments of 1)
N-Imperv | - | 0.015 | [0.011, 0.012, 0.013, 0.014, 0.015, 0.024, 0.05, 0.06, 0.17, 0.13, 0.15, 0.24, 0.41, 0.4, 0.8]
N-Perv | - | 0.40 | [0.011, 0.012, 0.013, 0.014, 0.015, 0.024, 0.05, 0.06, 0.17, 0.13, 0.15, 0.24, 0.41, 0.4, 0.8]
Dstore-Imperv | mm | 2.54 | [1.27, 2.54, 5.08, 7.62]
Dstore-Perv | mm | 5.08 | [1.27, 2.54, 5.08, 7.62]
%Zero-Imperv | % | 50 | 1–100 (in increments of 1)
Table 5. Summary of performance metrics.
Model | MSE | R2 Score | Index of Agreement (d) | Scatter Index (SI) | RMSE | Inference Time
Elastic Net | 340.134 | 0.953 | 0.988 | 0.147 | 18.443 | 0.009
ANN | 2.297 | 1.000 | 1.000 | 0.012 | 1.516 | 2.348
Table 6. Efficiency profiling.
Process | CPU Time (s) | Clockticks | Instructions Retired
ann.exe | 29.7939 | 1.06 × 10¹¹ | 2.01 × 10¹¹
elasticnet.exe | 12.42 | 3.99 × 10¹⁰ | 3.85 × 10¹⁰
Table 7. Predictive accuracy of ANN and Elastic Net models in rainfall-driven runoff estimation.
Rainfall Duration | Model | RMSE (m3/s) | MAE (m3/s) | R2
60 min | ANN | 20.98 | 20.40 | 0.25
60 min | Elastic Net | 32.78 | 29.87 | −0.84
120 min | ANN | 1.85 | 1.62 | 1.00
120 min | Elastic Net | 15.02 | 11.55 | 0.72

