Article

Predicting Urban Water Consumption and Health Using Artificial Intelligence Techniques in Tanganyika Lake, East Africa

1 College of Hydrology and Water Resources, Hohai University, Nanjing 210098, China
2 Burundian Agency of Rural Hydraulics and Sanitation (AHAMR), RN2, Gitega P.O. Box 176, Burundi
3 Faculty of Engineering Sciences, University of Burundi, Belvedere Road, Bujumbura P.O. Box 2720, Burundi
4 College of Geography and Remote Sensing, Hohai University, Nanjing 210028, China
5 College of Civil Engineering and Transportation, Hohai University, Nanjing 210098, China
* Author to whom correspondence should be addressed.
Water 2024, 16(13), 1793; https://doi.org/10.3390/w16131793
Submission received: 8 May 2024 / Revised: 18 June 2024 / Accepted: 19 June 2024 / Published: 25 June 2024
(This article belongs to the Section Urban Water Management)

Abstract

Water quality has declined significantly over the past few decades due to high industrial rates, rapid urbanization, anthropogenic activities, and inappropriate rubbish disposal in Lake Tanganyika. Consequently, forecasting water quantity and quality is crucial for ensuring sustainable water resource management, which supports agricultural, industrial, and domestic needs while safeguarding ecosystems. The models were assessed using key statistical metrics, a dataset comprising the relevant water quality parameters, and water use records. The database contained electrical conductivity, pH, dissolved oxygen, nitrate, phosphates, suspended solids, water temperature, water consumption records, and the corresponding dates. Random Forest, K-Nearest Neighbors, and Support Vector Machine are the three machine learning methodologies employed for forecasting water quality categorization. Three recurrent neural networks, namely long short-term memory, bidirectional long short-term memory, and the gated recurrent unit, were designed to predict urban water consumption and the water quality index. The water quality classification produced by the Random Forest forecast had the highest accuracy, at 99.89%. The GRU model fared better than the LSTM and BiLSTM models, with R2 and NSE values of 0.81 and 0.720 for water consumption and 0.78 and 0.759 for the water quality index. The outcomes showed how reliable Random Forest was in classifying water quality forecasts and how reliable the gated recurrent unit was in predicting the water quality index and water demand. Accurate predictions of water quantity and quality are essential for sustainable resource management, public health protection, and ecological preservation. Such research could significantly enhance urban water demand planning and water resource management.

1. Introduction

Water is an important natural resource that has economic and social significance for people. The survival of humans would be in jeopardy without water [1]. Surface water and groundwater are the two most significant drinking supplies globally. Currently, over 1.1 billion people worldwide lack access to safe drinking water [2]. Both point and non-point pollution sources contribute to deteriorating surface water quality (WQ) worldwide [3]. Growing global concern surrounds water quality degradation due to extensive human activity [3]. Developing nations focus on water supply and cleanliness, while industrialized countries prioritize public health and population growth [4,5]. Water quality directly impacts human health, biodiversity, and various uses, including agriculture and industry, particularly in Africa [6]. Contaminated water sources pose significant health risks, causing millions of illnesses and fatalities annually, particularly in developing regions, including East Africa [7]. Likewise, a lack of safe drinking water is observed in developing countries across Africa, including East Africa, where Lake Tanganyika is located. One of the most basic requirements for survival is access to water of appropriate quantity and quality.
Conventional modeling methods based on Box-Jenkins and autoregressive models were previously used to assess WQ and water consumption but had several limitations. They often require gathering numerous parameters and rely on prior knowledge and calibration, which can be resource-intensive and hinder their applicability [8]. Similarly, physically based models necessitate multiple assumptions and demand a comprehensive prior understanding of the subject matter [9]. For example, a physical model developed in Malaysia required substantial data collection and incurred significant expenses to evaluate floodplain dynamics and water levels [10]. Numerical models emerged as alternatives to address the limitations of physical models, as exemplified by the Yangtze River water level forecasting model by Wu et al. [11]. However, studies like Guan et al. [12] revealed shortcomings in numerical modeling approaches. Despite advancements, numerical models still struggle to replicate specific physical processes accurately. In recent years, data-driven approaches have gained traction for overcoming traditional models’ shortcomings. Machine learning (ML) algorithms, such as radial basis function (RBF) networks and support vector machines (SVM), have shown promise in forecasting hydrological characteristics like reservoir evaporation [13]. Similar success has been observed in predicting subsurface evaporation rates and daily water levels in reservoirs using SVM [14]. Though some challenges persist with ML algorithms, previous research indicates their superiority over traditional methods in accuracy. Adjusting hyperparameters, such as weights and activation functions, is crucial to enhancing ML model performance [15]. Advancements in artificial intelligence (AI), particularly deep learning methods such as long short-term memory (LSTM) and nonlinear autoregressive neural networks (NARNET), have further improved water quality prediction accuracy. In addition, Theyazn and colleagues employed Naïve Bayes (NB), k-nearest neighbors (KNN), and support vector machine (SVM) models to classify water quality, achieving an accuracy of 97.01% with SVM and an R2 of 96.17% with NARNET [16]. Additionally, machine learning techniques like decision trees (DT) and boosted decision trees (BDT) have demonstrated success in predicting daily precipitation [17]. Machine learning and deep learning algorithms have been employed in various studies to forecast and categorize water quality, highlighting their versatility and effectiveness [18].
Nonetheless, urban water distribution networks are crucial infrastructure in cities, necessitating intelligent management to ensure sufficient water supply at the desired pressure and quantity [19]. Forecasting short-term water demand is a key aspect of water distribution network operation and management, influenced by factors like temperature, population, and water pressure [20]. Water demand forecasting methods encompass both linear and nonlinear approaches [21]. Linear methods, including exponential smoothing and autoregressive integrated moving averages (ARIMA), rely on time series analysis [22]. However, nonlinear methods such as artificial neural networks (ANNs) tend to offer better accuracy for short-term forecasting. For instance, Ghiassi et al. [23] employed ANNs for water demand prediction. Herrera utilized the support vector machine method. Additionally, fuzzy logic was applied by [24]. Various other ANNs were employed in previous research for urban water demand prediction, including the generalized regression neural network, radial basis function networks, feedforward neural networks [25], and the extreme learning machine method [26]. Despite their effectiveness, ML models face challenges in feature selection and overfitting [27]. Deep learning methods, like long short-term memory (LSTM) and gated recurrent units (GRU), show promise in improving accuracy for water demand prediction [28]. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are also employed, with recent studies favoring hybrid DNNs combining both architectures [28,29,30]. Hybrid DNNs integrate CNNs for spatial feature extraction and RNNs for temporal feature modeling. These models have applications across various domains, including human activity recognition and energy forecasting. Despite the effectiveness of hybrid DNNs, they remain underexplored in water demand prediction, a task inherently challenging due to the complex time series nature of water demand data [30].
However, natural elements and anthropogenic activities such as mining, urbanization, stormwater, construction waste, domestic waste, non-functional wastewater treatment plants (WWTPs), and agriculture are key factors influencing the WQ of Lake Tanganyika [31]. Point sources, like industrial sites and WWTPs, directly discharge pollutants into the water of Lake Tanganyika, including heavy metals and organic compounds, significantly impacting ecosystems [32]. Millions of people who rely on the resources of this lake for their livelihoods depend on clean water. The water quality of Lake Tanganyika is crucial for aquatic ecosystems’ productivity, particularly fish resources and human health. It is crucial to preserve high water quality in the Lake Tanganyika region since the lake is a major source of drinking water and irrigation for the local communities.
Moreover, sustainable management practices are crucial to mitigate water quality deterioration and safeguard public health and the ecosystems of Lake Tanganyika. The research is particularly significant as Lake Tanganyika is Bujumbura’s primary water source and is highly polluted [33]. In addition, water scarcity, population growth, urbanization expansion, industrial development, lifestyle changes, and inefficient distribution are major issues in Bujumbura, resulting in sporadic shortages and inconsistent supplies [34]. Accurate prediction of water use can help with resource management and provide a more steady supply of water for the region’s expanding population. Monitoring water quality and consumption is, therefore, essential to lessening the effects of contaminated and filthy water.
Nonetheless, recent studies have effectively used recurrent neural networks and machine learning techniques to predict water quality and consumption and to classify water quality forecasts. Building on this success, the current study leverages the water quality index (WQI) and consumption, incorporating diverse metrics to enhance prediction models. The main contributions of this paper include uniquely employing multi-model architectures such as GRU, LSTM, and BiLSTM, differing from previous single-model approaches. These models capture complex temporal dependencies, improve prediction accuracy and robustness, and offer insights into Lake Tanganyika’s water quality and consumption dynamics. The use of multiple models facilitates thorough comparisons, advancing research in water quality assessment and demand planning.
Additionally, SVM, KNN, and RF are employed for water quality categorization, broadening the study’s methodological scope and providing robust, interpretable results for environmental management. Accurate predictions aid in effective water resource management and pollution control. This pioneering study, focusing on the Lake Tanganyika region, offers valuable contributions to water quality management for Burundi, the Democratic Republic of Congo, Zambia, and Tanzania, ensuring ecological integrity and sustainable resource use.

2. Materials and Methods

2.1. Description of the Study Area

Lake Tanganyika, the largest of the three lakes in East Africa’s Great Rift Valley, spans from 3° S to 9° S and stretches 676 km from north to south, with an average width of 50 km. Situated mainly in the DRC’s southern basin, it boasts a maximum depth of 1471 m and an average depth of 570 m, with a shoreline spanning 1828 km. Covering 32,900 km2, it holds 18,880 km3 of water, making it the continent’s largest freshwater reservoir and containing about one-sixth of the world’s surface freshwater reserve. Its equatorial location fosters a tropical environment with high temperatures and heavy precipitation [35]. Daily temperatures are typically warm to hot, ranging from around 25 °C to 30 °C, fostering rich biodiversity in aquatic and terrestrial ecosystems. Heavy daily precipitation exceeding 100 mm is common during the rainy season in the Lake Tanganyika region. While these periods of heavy precipitation support the surrounding ecosystems and help restore the lake’s water levels, they can also cause erosion and flooding in certain locations. Lake Tanganyika’s unique ecosystem is influenced by various factors, including seasonal precipitation patterns, river influxes, and interactions with the East African Rift System [36]. While nourished by numerous rivers, its drainage area is limited by steep surrounding mountains, with the Malagarasi River, entering from eastern Tanzania, being its primary inflow [37] and the Lukuga River serving as its significant outflow to the Congo River watershed. The region experiences distinct wet and dry seasons, with the basin supporting over 10 million people at a density of approximately 43 persons/km2, primarily reliant on agriculture. Lake Tanganyika is shared by the bordering countries Burundi (12%), the DRC (43%), Tanzania (36%), and Zambia (9%). The dry season lasts from June to September or October, and the wet season covers the rest of the year. However, the area faces environmental challenges, including pollution from human activities such as deforestation, agricultural runoff, and industrial waste, threatening the delicate balance of this unique ecosystem. In addition, urbanization around the lake and population growth raise concerns about pollution, as all residential waste is currently deposited untreated into the lake. Figure 1 shows the study area and its surrounding waters.

2.2. Methodology

This study utilizes artificial intelligence techniques for water quality and consumption forecasting, employing cutting-edge methods. Deep learning models predict water quality index (WQI) and urban water consumption (UWC), while machine learning models like SVM, K-NN, and RF forecast Lake Tanganyika’s water quality categorization. The study offers a straightforward yet efficient forecasting approach, as depicted in Figure 2. Model effectiveness is evaluated using precision, F1-score, accuracy, and recall. R2, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Nash–Sutcliffe (NSE) also gauge deep learning model performance. Developing highly effective models for water contamination forecasting facilitates prompt solution implementation. The Pearson correlation matrix coefficients (PCMC) have been implemented to evaluate the relationship between water properties. The study outlines data collection, DL and ML model development, and result evaluation in subsequent sections.

2.2.1. Datasets

The dataset for the present study was gathered from the historical archives of the water and electricity production and distribution authority (REGIDESO). Water quality measures were collected daily from 2018 to 2023. Electrical conductivity (EC), pH, dissolved oxygen (DO), nitrate (NO3), phosphates, suspended solids (SS), and water temperature (WT) are the seven metrics included in the dataset. The dataset also includes the corresponding date and the urban water consumption (UWC) records for the city of Bujumbura.
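As a brief illustration of how such a dataset might be assembled for modeling, the sketch below loads daily records with pandas; the file name, column labels, and CSV format are assumptions made for illustration and do not describe REGIDESO’s actual archive format.

```python
import pandas as pd

# Hypothetical file name and column labels; the real archive format may differ.
COLUMNS = ["Date", "EC", "pH", "DO", "NO3", "Phosphates", "SS", "WT", "UWC"]

def load_daily_records(path="regideso_daily_records.csv"):
    """Read the daily water quality and consumption records into a DataFrame."""
    df = pd.read_csv(path, usecols=COLUMNS, parse_dates=["Date"])
    df = df.sort_values("Date").set_index("Date")
    # Daily series occasionally have gaps; forward-filling is one simple option.
    df = df.asfreq("D").ffill()
    return df
```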

2.2.2. Data Preprocessing

The data processing stage of data analysis is essential to enhancing the data’s quality. At this point, the WQI was determined and normalized using the dataset’s most crucial metrics. Subsequently, the water samples were classified using the WQI values. The UWC data has undergone processing and standardization. The z-score method has been used as a data standardization tool to increase accuracy.

2.2.3. Water Quality Index Calculation

The WQI is determined by considering multiple parameters that significantly impact water quality. This study utilizes a dataset of important water quality characteristics to evaluate the suggested model. The method used is the weighted arithmetic water quality index (WAWQI), initially proposed by Horton in 1965 [38] and further developed by Brown et al. [39]. The WQI is calculated as follows:
$\mathrm{WQI} = \dfrac{\sum_{i=1}^{n} W_i\, q_i}{\sum_{i=1}^{n} W_i}$
where n is the number of parameters used in the WQI computation, $q_i$ is the quality rating of parameter i, calculated by Equation (2) below, and $W_i$ is the unit weight of each parameter, as given in Equation (3) below.
$q_i = \dfrac{V_i - V_{\mathrm{ideal}}}{S_i - V_{\mathrm{ideal}}} \times 100$
where: V i represents the quantified value of parameter i in the examined water samples. According to Table 1, S i is the recommended standard value for the parameter i. The ideal state is characterized by clean water, with all parameters set to 0 except for pH, which is set to 7.0, and dissolved oxygen (DO), which is set to 14.6 mg/L.
$W_i = \dfrac{K}{S_i}$
where K is the proportionality constant, calculated as follows:
$K = \dfrac{1}{\sum_{i=1}^{n} S_i}$
Table 2 and Table 3 display each parameter’s water quality categorization (WQC) and unit weight, respectively.
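The following minimal Python sketch mirrors the WAWQI calculation described above; the standard values $S_i$ are placeholders (the study’s actual standards are in Table 1, not reproduced here), while the ideal values follow the convention stated in the text (0 for all parameters except pH = 7.0 and DO = 14.6 mg/L).

```python
# Placeholder standard values S_i; Table 1 of the paper holds the real standards.
STANDARDS = {"pH": 8.5, "DO": 5.0, "EC": 1000.0, "NO3": 45.0,
             "Phosphates": 0.1, "SS": 500.0, "WT": 25.0}
IDEALS = {p: 0.0 for p in STANDARDS}
IDEALS["pH"], IDEALS["DO"] = 7.0, 14.6  # ideal values stated in the text

def wawqi(sample):
    """Weighted arithmetic WQI for one sample (dict of parameter -> measured value)."""
    K = 1.0 / sum(STANDARDS.values())   # proportionality constant
    num, den = 0.0, 0.0
    for p, Si in STANDARDS.items():
        Wi = K / Si                                              # unit weight, Equation (3)
        qi = (sample[p] - IDEALS[p]) / (Si - IDEALS[p]) * 100.0  # quality rating, Equation (2)
        num += Wi * qi
        den += Wi
    return num / den                    # weighted arithmetic WQI

# Demo values for illustration only.
print(wawqi({"pH": 7.4, "DO": 6.8, "EC": 350.0, "NO3": 4.0,
             "Phosphates": 0.05, "SS": 120.0, "WT": 26.0}))
```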

2.2.4. Z-Score Normalization Process

Normalization simplifies computations by turning a dimensional quantity into a dimensionless one. The Z-score normalization method uses the mean μ and standard deviation σ of the data being normalized. The z-score is important for standardizing the dataset because it centres the data around the mean with a standard deviation of one, ensuring that all features contribute equally to the analysis and improving the performance of machine learning algorithms that are sensitive to feature scales. It is computed with the following formula:
$Z_{\mathrm{score}} = \dfrac{x - \mu}{\sigma}$
where: x represents an individual data point.
μ (mu): This symbolizes the mean, or average, of the dataset.
σ (sigma): This denotes the dataset’s standard deviation.
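A short sketch of this normalization is given below, under the assumption that plain NumPy is acceptable; in practice the mean and standard deviation would be estimated on the training split only and reused for the test split.

```python
import numpy as np

def zscore(x):
    """Z-score normalization: centre on the mean, scale by the standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

raw = np.array([120.0, 135.0, 150.0, 110.0, 160.0])  # illustrative suspended solids values
print(zscore(raw))  # resulting series has zero mean and unit standard deviation
```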

Water Quality Index Calculated

The results and trends from Lake Tanganyika’s daily WQI calculation from 2018 to 2024 are displayed in Figure 3. The WQI was computed using the WAWQI technique [42]. The acceptable criteria and weights for the water quality index were established and strictly adhered to. The continual discharge of contaminants into water bodies from industrial sources, agricultural runoff, heavy rainfall, and inadequate sewage treatment systems is most likely the main reason for the daily increase in the water quality index for 2023–2024 [43]. Overall water quality gradually decreases over time due to this ongoing pollution, which raises contamination levels [44]. Growing populations in the cities bordering Lake Tanganyika, intensifying agriculture, and expanding industries will all lead to increased pollution and declining water quality [45]. The increased water quality index (WQI) in Lake Tanganyika is attributed to factors such as rising temperatures due to climate change, leading to thermal stratification, municipal wastewater, stormwater, improper waste disposal, oil and chemical spills, construction waste, atmospheric deposition, and decreased oxygen levels [46]. Additionally, increased anthropogenic activities, including industrial pollution, contribute to nutrient loading and eutrophication, further degrading water quality. An increase in the water quality index (WQI) typically indicates deteriorating water quality, which can complicate predictions by introducing more variability and uncertainty into water quality models. It necessitates more sophisticated and adaptive predictive techniques to accurately capture the effects of pollution, climate change, and human activities on water resources. Proactive environmental laws, financial support for wastewater treatment facilities, and community-based conservation initiatives could reduce pollution and lower the daily WQI (Figure 3).

2.2.5. Urban Water Consumption

Urban water consumption forecasting uses a dataset of UWC and dates. The consumption variable holds the water utilized on that date, whereas the “Date” column has a timestamp. It is worth noting that the dataset includes data from 2018 through 2024. Figure 4 demonstrates the slow variation in water demand in Bujumbura from 2018 to 2020. This variation is observed due to the natural replenishment of water sources, reduced extraction needs, and decreased outdoor water usage. Figure 4 displays the demand in Bujumbura using data from the dataset on water consumption predictions. From 2020 to 2023, urban water consumption is rising due to population expansion, urban area development, climate change, and lifestyle changes that drive the need for water-intensive activities like landscaping and household use [47]. From 2023 to 2024, the highest increase in urban water consumption in Bujumbura city is primarily driven by rapid population growth, which escalates domestic, agricultural, and industrial water demand.
Additionally, urbanization and economic development have intensified the need for improved infrastructure and services, further amplifying water usage [48]. However, the increase in urban water consumption in Bujumbura city strains existing water resources, leading to potential shortages and heightened competition for water among the agricultural, industrial, and residential sectors. This case can exacerbate predictions of water scarcity, potentially accelerating the timeline for critical shortages and necessitating more urgent water management and conservation measures.
On the other hand, water-saving measures, seasonal variations in the weather, and the implementation of water-efficient practices and technology have also impacted daily variations in urban water use. Nevertheless, water demand has varied due to water conservation awareness and water-saving technology improvements from 2020 to 2023. In addition, the gradual decrease in consumption may continue to be observed when the measures of water conservation awareness and water-saving technology improvements implemented continue in the next few decades (Figure 4). Water demand may continue to rise due to growing urbanization, industrialization, and population growth, necessitating the strongest management measures.

2.2.6. Machine Learning and Deep Learning

This section covers the basic theory of the ML models implemented, namely the support vector machine, K-nearest neighbors, and random forest, as well as the DL models, namely the Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM) neural networks. It also examines whether creating a WQI and UWC time series prediction system is feasible and has a sound theoretical basis.

Support Vector Machine

The SVM was established by Corinna Cortes and Vladimir Vapnik in 1995 [49]. SVMs have demonstrated efficacy across several areas because they can identify optimal hyperplanes for class separation in feature space, even in high-dimensional datasets. The contributions of Cortes and Vapnik have had a profound influence on the domain of machine learning, facilitating progress in pattern recognition and data analysis. The method can be extended to fit different types of machine learning problems. The requisite coefficients are derived by separating the input vectors with the hyperplane, and the points lying closest to the hyperplane are the support vectors. The Gaussian radial basis function in Equation (6) and the linear SVM model were used to categorize the assessed water samples according to their quality.
$K(X, X') = \exp\left(-\dfrac{\lVert X - X' \rVert^2}{2\sigma^2}\right)$
where $\lVert X - X' \rVert^2$ denotes the squared Euclidean distance (SED) between the two input feature vectors X and X′, and σ is a free parameter.
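For illustration, a hedged scikit-learn sketch of RBF-kernel SVM classification is given below; the synthetic stand-in data, the value of C, and the default gamma setting are assumptions, not the configuration tuned in the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm(X, y):
    """Fit an RBF-kernel SVM (Eq. 6) and report held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    # 'gamma' plays the role of 1 / (2 * sigma**2) in the Gaussian kernel.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
    model.fit(X_tr, y_tr)
    return model, model.score(X_te, y_te)

# Synthetic stand-in for the seven-parameter water quality feature matrix.
X, y = make_classification(n_samples=300, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)
model, acc = train_svm(X, y)
print(f"held-out accuracy: {acc:.3f}")
```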

K-Nearest Neighbors Model

The KNN is a key ML technique for regression and classification tasks. When using KNN to forecast data, the algorithm selects the K closest neighbors from the training dataset using a specified metric, such as the Euclidean distance. A test sample is then assigned to the class to which the majority of its neighbors belong. The value of K determines how many of the closest feature vectors are considered. The Euclidean distance $D_i$ is expressed by the formula below.
$D_i = \sqrt{\left(x_1 - x_2\right)^2 + \left(y_1 - y_2\right)^2}$
where: x 1 , x 2 , y 1 , and y 2 represent the variables in the input data.
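A minimal scikit-learn sketch of the KNN classifier follows; the choice of five neighbours is an illustrative default rather than the value used in the study, and the Euclidean metric matches the distance defined above.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# n_neighbors=5 is an illustrative default, not the value tuned in the study;
# the Euclidean metric matches the distance D_i defined above.
knn = make_pipeline(StandardScaler(),
                    KNeighborsClassifier(n_neighbors=5, metric="euclidean"))
# Usage: knn.fit(X_train, y_train); y_pred = knn.predict(X_test)
```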

Random Forest Model

The RF machine learning model is tree-based and combines the results of several DTs fitted to randomly selected training data [50]. For every DT in the RF, a criterion such as the Gini index or information gain is used to create the root node, and the method is used for both classification and regression problems [51]. The RF technique uses two sources of randomization. The first is that the “root node” or “parent node” selects the splitting input feature at random [52]. Bootstrapping is another widely used method that introduces randomness into the RF-building process: the training data are randomly divided into small subsamples that are then used to build multiple trees. By using bootstrap sampling, RF mitigates the overfitting issue [53]. RF was deployed with two hyperparameters: the n-estimators variable was assigned a value of 300, so three hundred DTs were used in the prediction procedure, and the max-depth option was set to 50, the greatest depth to which a tree can grow. The RF approach combines a set of “base learners”, h1(x), …, hJ(x), to form the “ensemble predictor”, f(x) [54]. As stated in Equation (8), for regression problems the outcome is the average of the individual trees’ predictions.
$f(x) = \dfrac{1}{J} \sum_{j=1}^{J} h_j(x)$
Although there are several criteria for calculating the model result error, the mean squared error is typically employed. The model is trained until the mean squared error reaches its lowest value. A majority vote is used to decide the forecast for the categorization tasks. The final estimated output is displayed in Equation (9).
$f(x) = \arg\max_{y} \dfrac{1}{J} \sum_{j=1}^{J} I\left(y = h_j(x)\right)$
The RF model is trained until it reaches the lowest value of the splitting criterion, which could be entropy, the Gini index, or the misclassification error. Equation (10) states that the Gini index is the splitting criterion utilized in the current model. The split input feature in the “root node” is selected randomly. The feature with the lowest Gini index is the next to be divided, and it computes the impurity in the “child node” associated with each input feature [55]. The Gini index is zero after each node is split into two. “Terminal nodes” have a zero Gini value [56].
$\mathrm{Gini\ Index} = \sum_{k \neq k'} P_k P_{k'}; \quad P_k = \dfrac{1}{n} \sum_{i=1}^{n} I\left(y_i = k\right)$
Pk is the node’s proportion of class k observations, y i is the value predicted, and k is the total number of classes in this instance [57].

Long Short-Term Memory (LSTM Model)

LSTM networks, like other recurrent neural networks (RNNs), usually handle time series data sequentially, processing one step at a time. Consequently, the model generally lacks access to future data points when making predictions or learning a sequence [58]. In the context of time series forecasting or sequence prediction tasks, this attribute is a constraint, since the model lacks immediate access to forthcoming data that could enhance the accuracy of predictions. Consequently, LSTM networks may encounter difficulties capturing patterns or relationships that span extended periods, particularly if those patterns rely heavily on future data. Nevertheless, despite this constraint, LSTM networks can capture and learn intricate temporal patterns from past data. Their capacity to retain a memory state over time enables them to capture relationships within the sequence up to the present moment. In addition, although not directly, teacher forcing and attention mechanisms can help LSTM networks indirectly incorporate information from future time steps. Although LSTM networks do not explicitly incorporate future data in their processing, they can still be highly effective for tasks such as time series forecasting and sequence prediction [59], especially when trained on extensive datasets containing ample past information. Earlier RNNs were heavily utilized to process sequential data.
However, due to the recurring gradient vanishing and explosion issues, a plain RNN is not appropriate for making long-term predictions on time series data. Long short-term memory (LSTM) networks evolved from RNNs by adding gates and activation functions and first introducing the concept of a cell state. LSTM networks are an improved form of RNN that are well suited to managing long time series data, and the LSTM model is one of the best RNN algorithms for time series prediction. The LSTM model uses a logistic sigmoid as its activation function. Provided the input gate is closed and the forget gate is open, the memory cell retains its initial entry, resolving the common RNN issues [60]. The formulas for the RNN model are as follows:
$h_t = \tanh\left(w_h \cdot h_{t-1} + w_x \cdot x_t\right)$
$y_t = w_y \cdot h_t$
where $h_t$ denotes the neural network’s hidden state for the given training input $x_t$, and $y_t$ represents the output layer; $w_h$, $w_x$, and $w_y$ are the corresponding weight matrices. The three main parts of an LSTM are the input gate, the forget gate, and the output gate. The LSTM model is computed using the following formulas:
Input gate: $i_t = \sigma\left(W_{ii}\, x_t + W_{hi}\, h_{t-1}\right)$
Forget gate: $f_t = \sigma\left(W_{if}\, x_t + W_{hf}\, h_{t-1}\right)$
Output gate: $o_t = \sigma\left(W_{io}\, x_t + W_{ho}\, h_{t-1}\right)$
Process input: $\tilde{C}_t = \tanh\left(W_{i\tilde{C}}\, x_t + W_{h\tilde{C}}\, h_{t-1}\right)$
Cell update: $C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$
Output: $y_t = o_t \odot \tanh\left(C_t\right)$
Python was utilized to implement the LSTM. The LSTM layer exposes roughly 23 configurable arguments; in practice, the key choices are the number of units, the activation function, and whether the layer returns the full output sequence. The structure of the LSTM is demonstrated in Figure 5 below.
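As a hedged example of how such an LSTM might be set up in Python, the sketch below uses the Keras API (an assumption, since the text names only Python) with an illustrative 30-day look-back window and 64 units.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series, lookback=30):
    """Turn a 1-D daily series into (samples, lookback, 1) windows and next-day targets."""
    series = np.asarray(series, dtype="float32")
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., np.newaxis], y

lookback = 30  # illustrative look-back window, not the study's setting
model = keras.Sequential([
    layers.Input(shape=(lookback, 1)),
    layers.LSTM(64),   # gated memory cell (input, forget, and output gates)
    layers.Dense(1),   # next-day WQI or water consumption value
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
```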

Gated Recurrent Unit (GRU)

LSTM and GRU architectures are specifically developed to mitigate the vanishing gradient issue in conventional RNNs [27]. This problem hampers the network’s capacity to capture long-term relationships in sequential input effectively. They accomplish this by implementing systems that allow for the selective retention or forgetting of information over some time [61]. LSTM achieves this by employing a more intricate architecture incorporating three gates: the forget gate, the input gate, and the output gate. These mentioned gates regulate information flow inside the cell, figuring out when changes to the memory are necessary.
In contrast, GRU streamlines the structure by merging the forget and input gates into a unified “update gate”. In addition, compared to LSTM, GRU uses fewer parameters by combining the cell state and hidden state. A GRU thus has two gates rather than an LSTM’s three [62]. In terms of computing power, GRU is more efficient than LSTM and the vanilla RNN, and it is often described as a simplified LSTM model. Figure 6 depicts the GRU structure.
Reset gate: $r_t = \sigma\left(W_{ir}\, x_t + W_{hr}\, h_{t-1}\right)$
Update gate: $z_t = \sigma\left(W_{iz}\, x_t + W_{hz}\, h_{t-1}\right)$
Process input: $\tilde{h}_t = \tanh\left(W_{i\tilde{h}}\, x_t + W_{h\tilde{h}}\left(r_t \odot h_{t-1}\right)\right)$
Hidden state update: $h_t = \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tilde{h}_t$
Output: $y_t = h_t$
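To make the gate equations concrete, the following NumPy sketch implements a single GRU step; biases are omitted and the weight shapes are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_ir, W_hr, W_iz, W_hz, W_ih, W_hh):
    """One GRU step following the reset/update/candidate equations above (biases omitted)."""
    r_t = sigmoid(W_ir @ x_t + W_hr @ h_prev)                 # reset gate
    z_t = sigmoid(W_iz @ x_t + W_hz @ h_prev)                 # update gate
    h_tilde = np.tanh(W_ih @ x_t + W_hh @ (r_t * h_prev))     # candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde                # hidden state update
    return h_t                                                # output y_t = h_t

# Tiny shape check with random weights (input size 7, hidden size 4).
rng = np.random.default_rng(0)
W = [rng.standard_normal(s) for s in [(4, 7), (4, 4)] * 3]
h = gru_step(rng.standard_normal(7), np.zeros(4), *W)
print(h.shape)  # (4,)
```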

Bidirectional Long Short-Term Memory

LSTMs are specifically engineered to capture interdependencies within sequences by selectively keeping or discarding information over some time. However, they usually handle sequences linearly, utilizing past knowledge to anticipate future information but not immediately integrating future information during processing. BiLSTM, in contrast, overcomes this constraint by analyzing the input sequence in both the forward and backward directions [63]. This feature enables the model to comprehend and depict the full sequence more effectively by incorporating dependencies from both preceding and subsequent contexts. It employs two hidden layers and an output layer that joins them. It stores past and future information on the present-time basis of the time series data.
Stacking a forward and a backward LSTM produces a rather robust improved network known as BiLSTM. BiLSTM networks can anticipate a specific time step while relying less on the preceding series than LSTM networks, since they synthesize the input information from both earlier and later time points. The BiLSTM hidden layer output combines the forward and backward activation outputs. The BiLSTM formulation is shown in Equations (19)–(21), where $x_t$ is the input to the hidden layer, σ is the activation function, and the output $H_t$ is generated by combining the updated forward hidden state and backward hidden state. Figure 7 depicts the BiLSTM structure.
$\overrightarrow{h}_t = \sigma\left(W_{x\overrightarrow{h}}\, x_t + W_{\overrightarrow{h}\overrightarrow{h}}\, \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}}\right)$
$\overleftarrow{h}_t = \sigma\left(W_{x\overleftarrow{h}}\, x_t + W_{\overleftarrow{h}\overleftarrow{h}}\, \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}}\right)$
$H_t = W_{\overrightarrow{h}y}\, \overrightarrow{h}_t + W_{\overleftarrow{h}y}\, \overleftarrow{h}_t + b_y$
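A hedged Keras sketch of a BiLSTM regressor is shown below; the library choice, window length, and number of units are assumptions for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

lookback = 30  # illustrative look-back window
bilstm = keras.Sequential([
    layers.Input(shape=(lookback, 1)),
    # The Bidirectional wrapper runs one LSTM forward and one backward in time
    # and concatenates their hidden states, as in the equations above.
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1),
])
bilstm.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
bilstm.summary()
```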

2.2.7. Evaluation of Performance

There are various error measures available to examine a classification model’s performance. The accuracy, precision, recall, and F1-score are defined by Equations (27)–(30). These metrics are obtained after normalizing the confusion matrix and comparing the predictions with the actual classes. Notably, there are five classes in this research. A normalized confusion matrix illustrating the five classes that account for the outcomes is displayed in Figure 8. The findings after classification are shown in the following table and figures.
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$
$\mathrm{Precision} = \dfrac{TP}{TP + FP}$
$\mathrm{Recall} = \dfrac{TP}{TP + FN}$
$\mathrm{F1\text{-}score} = 2 \times \dfrac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$
where: FP, FN, TP, and TN stand for false positive, false negative, true positive, and true negative, respectively.
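These classification metrics and the normalized confusion matrix can be computed, for example, with scikit-learn, as in the hedged sketch below; y_true, y_pred, and labels are placeholders for the observed and predicted WQC classes.

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

def report_classification(y_true, y_pred, labels):
    """Accuracy, per-class precision/recall/F1, and a row-normalized confusion matrix."""
    print("accuracy:", accuracy_score(y_true, y_pred))
    print(classification_report(y_true, y_pred, labels=labels))
    # Normalizing each row by the true-class counts mirrors Figure 8.
    print(confusion_matrix(y_true, y_pred, labels=labels, normalize="true"))
```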
Deep learning has been used to predict WQI and UWC, and the following equations define the corresponding evaluation metrics. It is worth noting that MAE, RMSE, R2, and NSE are utilized to assess the robustness of the models created in the present research.
$\mathrm{MAE} = \dfrac{\sum \left| x_o - x_p \right|}{n}$
$\mathrm{RMSE} = \sqrt{\dfrac{\sum \left( x_o - x_p \right)^2}{n}}$
$R^2 = \dfrac{\text{Explained Variation}}{\text{Total Variation}}$
where $x_o$ and $x_p$ are the observed and predicted values, respectively, and n is the number of samples.
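The regression metrics, together with the NSE used in Section 3.4, can be computed as in the sketch below; here R2 is taken as the squared Pearson correlation and NSE follows its standard definition, which may differ slightly from the exact formulations used in the study.

```python
import numpy as np

def regression_scores(obs, pred):
    """MAE, RMSE, R2 (squared Pearson correlation), and Nash-Sutcliffe efficiency."""
    obs, pred = np.asarray(obs, dtype=float), np.asarray(pred, dtype=float)
    err = obs - pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    # NSE = 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"MAE": mae, "RMSE": rmse, "R2": r2, "NSE": nse}
```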

2.2.8. Utilizing the Correlation Matrix

The Pearson’s correlation matrix coefficient (PCMC) method analyzes the correlation between the significant dataset characteristics utilized to forecast the WQI values.
$R = \dfrac{n \sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n \sum x^2 - \left(\sum x\right)^2\right]\left[n \sum y^2 - \left(\sum y\right)^2\right]}} \times 100\%$
where:
R: Pearson’s correlation coefficient;
x: the input values from the first training dataset;
y: the input values from the second training dataset;
n: the total number of data pairs.
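For illustration, the correlation matrix of Figure 13 could be produced with pandas as sketched below; the column names are placeholders for the dataset’s actual labels.

```python
import pandas as pd

def correlation_matrix(df: pd.DataFrame) -> pd.DataFrame:
    """Pearson correlation matrix between the parameters and the WQI (values in [-1, 1])."""
    cols = ["pH", "SS", "WT", "EC", "DO", "NO3", "Phosphates", "WQI"]  # placeholder names
    return df[cols].corr(method="pearson")
```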

3. Results

This section presents the prediction results from the multi-model framework for water quality categorization, WQI, and urban water consumption. A comparison of the ML models used to forecast WQC is shown via F1-score, precision, recall, and accuracy, while a comparative study of the DL models used to predict WQI and water demand is presented through R2, RMSE, MAE, and NSE. This section also presents the Pearson correlation matrix analysis to make the evaluation more comprehensive. The discussion of the results obtained is presented in Section 4 below.

3.1. Classification Prediction for the Water Quality of Lake Tanganyika

3.1.1. Normalized Confusion

Since normalized confusion takes into account class imbalances by expressing the matrix in relative terms (percentages or proportions), it is essential for assessing the performance of a classification model. This makes it simpler to compare the model’s accuracy across different classes. Figure 8 of normalization helps to highlight which classes are being predicted accurately and which are not, providing a clearer understanding of the model’s strengths and weaknesses. Figure 8a shows the SVM model’s strong performance, but some errors indicate potential class imbalances or data limitations. Figure 8b shows the KNN classifier excels in some categories but struggles in others. Similarly, Figure 8c shows the Random Forest classifier performs well overall but has trouble distinguishing certain classes.

3.1.2. Performance of Machine Learning for Water Quality Categorization Prediction

The efficacy of Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM) machine learning techniques in forecasting water quality classification (WQC) is illustrated in Table 4. This table provides detailed metrics for each method, including precision, recall, and F1-score across different WQC categories, along with overall accuracy, macro average, and weighted average, highlighting that SVM achieved the highest accuracy for testing at 97%, followed by RF at 96%, and KNN at 87%. The table also shows that RF was more effective than SVM and KNN in terms of training accuracy, with a 99.89% success rate.

3.2. Validation and Settings of the Multi-Model for Water Quality Index Prediction

The number of epochs is critical for training DL models like LSTM, BiLSTM, and GRU, as it determines how many times the model iterates through the entire training dataset, allowing for improved parameter tuning. Properly selecting epochs helps balance underfitting and overfitting, ensuring the model generalizes well to new data. Testing on a held-out dataset evaluates the model’s performance and guards against overfitting, while training is essential for the model to learn and accurately predict patterns in real-world applications. The loss per epoch and the daily training and testing graphs of the water quality index from 2018 to 2024 for the deep learning models are shown in Figure 9. The data were split into 80% for training and 20% for testing. For smooth training and validation loss, the learning rate was set at 1 × 10−3. Figure 9 illustrates the 50 training epochs that were used to train the multi-model.

The Performance of DL Models for Water Quality Index Prediction

Figure 10 and Table 5 demonstrate the performance of the BiLSTM, LSTM, and GRU models. In Figure 10, the blue line represents the observed daily values, and the orange line represents the model’s prediction for 2025 and the first few months of 2026. The forecasted and observed WQI indicate that summertime water pollution levels may stay high. Table 5 demonstrates that among the deep learning models used to forecast the water quality index (WQI), the GRU model achieved the best performance, with the lowest MAE (0.3975%), the lowest RMSE (0.6941%), and the highest R2 value (0.78).

3.3. Validation and Settings of the Multi-Model for Water Demand Forecasting in Bujumbura City

The number of epochs is crucial for training deep learning models like LSTM, BiLSTM, and GRU, as it defines how often the model goes through the entire training dataset, enabling better parameter optimization. Choosing the right number of epochs helps avoid both underfitting and overfitting, ensuring the model performs well on new data. Assessing the model with a test dataset measures its performance and prevents overfitting, while training allows the model to learn and accurately predict patterns in practical applications. Figure 11 illustrates the loss per epoch and the training and testing daily graphs for urban water consumption in Bujumbura from 2018 to 2024, with 80% of the data used for training and 20% for testing. To ensure smooth training and validation loss, the learning rate was set at 1 × 10−3. Figure 11 shows the 50 epochs used to train the multi-model.

The Performance of DL Models for Urban Water Demand Prediction

Figure 12 and Table 6 highlight the performance of the BiLSTM, LSTM, and GRU models. In Figure 12, the blue line shows the observed daily water demand, while the orange line depicts the model’s predictions for 2025 and the initial months of 2026. Both the forecasted and observed data suggest that water demand during the summer may exceed that of the wetter seasons. Table 6 indicates that the GRU model outperformed the others, achieving the lowest MAE (374), the lowest RMSE (530), and the highest R2 value (0.81).

3.4. Nash–Sutcliffe Evaluation

The Nash-Sutcliffe Efficiency (NSE) is important for prediction as it measures how well the predicted values match the observed data, with values closer to 1 indicating better predictive performance. Table 7 shows that the GRU model achieved the highest NSE value (0.759) for the water quality index, indicating superior predictive accuracy compared to the BiLSTM (0.699) and LSTM (0.610) models. Table 8 reveals that the GRU model, with an NSE value of 0.720, outperforms both the LSTM (0.650) and BiLSTM (0.590) models in predicting urban water demand.

3.5. Correlation Matrix

The correlation matrix for water quality parameters such as pH, SS (suspended solids), WT (water temperature), EC (electrical conductivity), DO (dissolved oxygen), nitrate, and phosphates is essential for assessing their collective impact on the water quality index (WQI). It provides insights into how variations in these parameters correlate with changes in WQI, helping to identify key contributors to water quality degradation or improvement and guiding targeted interventions for environmental management. Figure 13 demonstrates the positive and negative correlations between those parameters. It reveals that variations in these parameters correlate with changes in WQI.

4. Discussion

4.1. Categorization Prediction for the Water Quality of Lake Tanganyika

4.1.1. Analysis of Normalized Confusion

Normalized confusion matrices are pivotal in water classification, especially in multiclass scenarios like this study’s five-class water quality classification. They offer a precise gauge of model prediction efficacy, vital for informed environmental management decisions, where misclassifications can skew resource allocation and water quality assessments. These matrices unveil insights crucial for refining models, enhancing feature selection, and adjusting parameters to boost accuracy and reliability. For instance, examining Figure 8a reveals SVM model performance, with diagonal dominance indicating robust predictions, albeit with some misclassifications, hinting at potential class imbalances or data insufficiencies. Similarly, Figure 8b illustrates KNN classifier proficiency, notably excelling in the “excellent” and “poor” categories but faltering between “good” and “very poor”. Likewise, Figure 8c showcases Random Forest classifier parallels, performing well overall but facing challenges distinguishing certain classes. Such analyses underscore the necessity of continual model refinement and data enrichment for optimal water quality assessment and management.

4.1.2. Analysis of the Performance of Machine Learning for Water Quality Categorization Prediction

The outcomes of the machine learning methods utilized are displayed in Table 4. Random Forest (RF) performs consistently well across categories. Excellent water quality is identified with high precision (1.00) but slightly lower recall (0.75), resulting in a balanced F1-score (0.86). RF reliably identifies good quality (F1-score: 0.97) and poor quality (F1-score: 0.98), with stable macro and weighted averages at 0.96. However, it struggles more with poor quality (F1-score: 0.79), although overall performance remains consistent. It is worth noting that the testing and training accuracy of RF are 96% and 99.89%, respectively. K-Nearest Neighbors (KNN) exhibits varied performance. It struggles to identify excellent quality (F1-score: 0.57) but performs adequately in other categories, with balanced F1-scores and consistent macro and weighted averages at 0.87. In addition, the testing and training accuracy of KNN are 87% and 82.88%, respectively. Support Vector Machines (SVM) generally performs well. It struggles with excellent quality (F1-score: 0.00) but effectively identifies other categories with balanced F1-scores (0.97 to 1.00) and consistent macro and weighted averages at 0.98. Likewise, the testing and training accuracy are 97% and 98.52%, respectively. The accuracy outcomes demonstrate the performance of RF compared to the SVM and KNN models. The KNN algorithm, however, has performed the poorest. Although RF has a 99.89% accuracy rate, SVM and KNN have accuracy rates of 98.52% and 82.88%, respectively, for training. Nevertheless, the SVM outperformed the RF and KNN for testing accuracy by 97%, but the outcomes of the RF and KNN for testing accuracy were 96% and 87%, respectively. The RF performed exceptionally better than SVM and KNN based on an accuracy of 99.89% obtained during the training.

4.2. Settings and Validation of Multi-Model

Figure 9 and Figure 11 present the results for the deep learning models regarding loss per epoch and the daily training and testing graphs of the water quality index and urban water consumption of Bujumbura from 2018 to 2024. The training set comprised 80% of the data and the testing set 20%. Deep learning models require a large dataset for a good fit. Figure 9 and Figure 11a–c illustrate the training, where one epoch refers to one complete pass of the entire training dataset forward and backward through the BiLSTM, LSTM, and GRU networks, updating the weights to optimize the model’s performance, repeated until convergence. It is worth noting that the Adam optimizer, a standard choice for diverse DL tasks, was employed across all three models. Mean Squared Error (MSE) served as the loss function due to its relevance to the RMSE and MAE metrics. The batch size’s impact on model development was minimal. The learning rate was fixed at 1 × 10−3 for smooth training and validation loss. Early stopping was implemented rather than a fixed iteration count. The study utilized 50 epochs and a batch size of 32. Figure 9d and Figure 11d show the training and testing portions of the daily WQI and UWC datasets from 2018 to 2024, corresponding to the 80% training and 20% testing split.
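A hedged Keras-style sketch of this training configuration is given below; the early-stopping patience is an assumption, while the optimizer, loss, learning rate, epochs, batch size, and validation split follow the settings stated above.

```python
from tensorflow import keras

def fit_model(model, X, y):
    """Train a sketched recurrent model with the settings stated in the text."""
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    early = keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)  # patience assumed
    return model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32,
                     callbacks=[early], verbose=1)
```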

4.3. Analysis of the Water Quality Index Trained and Tested

Figure 9d demonstrates the high variation of the WQI in the summer, especially in June, July, August, and September. Lake Tanganyika experiences higher pollution levels in the summer months due to increased human activities such as tourism, fishing, industry, agriculture, and construction waste, leading to greater runoff of pollutants into the lake [46]. The WQI is high from 2023 to 2024 due to rising temperatures caused by climate change, leading to thermal stratification, municipal wastewater, stormwater, improper waste disposal, oil spills, chemical spills, atmospheric deposition, and decreased oxygen levels [44]. Additionally, increased anthropogenic activities, including industrial pollution in Burundi, the Democratic Republic of the Congo (DRC), Tanzania, and Zambia, contribute to nutrient loading and eutrophication, further degrading water quality [45].
Additionally, warmer temperatures during these months can exacerbate the growth of algae and other aquatic plants, further contributing to pollution levels [64]. However, the WQI decreases in rainy months like October, November, December, January, February, March, April, and May in Lake Tanganyika due to the dilution of pollution and flushing effects caused by higher precipitation levels. Rainfall helps cleanse the lake by reducing pollutant concentrations through increased water flow and sediment transport, improving water quality during these months [65]. An increase in the Water Quality Index (WQI) usually denotes declining water quality, which adds additional unpredictability and uncertainty to water quality models and makes predictions more difficult. However, the models used in this study performed well in prediction according to the R2 and NSE results obtained, despite the highest WQI observed at the end of 2024.

4.4. Analysis of Daily Urban Water Demand in Bujumbura City

Figure 11d depicts no high water demand variation in the rainy season, but the highest is observed in the summer, especially in June, July, August, and September. During rainy months, urban water demand experiences slow variation due to the natural replenishment of water sources, reducing the need for extraction and decreasing outdoor water usage as rainfall fulfills irrigation needs, resulting in stable consumption patterns throughout the city. The high variation in urban water demand during the summer months in Bujumbura is primarily due to increased usage for irrigation, outdoor recreation, and higher consumption for cooling purposes. Additionally, population influx from tourism and seasonal migration further strain the water supply. Furthermore, hotter temperatures in this city and reduced precipitation escalate the need for water, intensifying the demand during this period. Population growth, urbanization, industrialization expansion, lifestyle changes, water-efficient techniques, and technological development also cause the highest water demand. The observed water consumption in a water distribution network (WDN) in Bujumbura city is intricately linked to both water demand and water pressure [66]. According to the Global Gradient Algorithm extension to the distributed pressure-driven pipe demand model, water pressure significantly affects how much water is actually delivered to consumers [67]. In scenarios where pressure is low, water may not reach all endpoints adequately, reducing water consumption despite the high demand observed.
Conversely, high pressure can lead to leaks and bursts, causing water loss and inefficient delivery, highlighting the need for balanced pressure management to ensure reliable and efficient water supply in accordance with actual demand [67]. In the same way, Bujumbura city’s growing urban water use is placing a strain on the region’s water supplies, raising the possibility of shortages and intensifying competition among the residential, commercial, and agricultural sectors for water distribution [68]. This situation may make water scarcity forecasts more likely, which could hasten the time until there are serious shortages and call for more immediate water management and conservation efforts [69]. The gradual decrease is observed due to rainfall, water conservation awareness, and technological advancements. In addition, forecasts indicate that by 2030, 28% of the world’s population could reside in cities with a population of at least one million due to the growth in both the number and size of cities [69].

4.5. Comparison of GRU, LSTM, and BiLSTM Models for Water Quality Index and Water Demand

Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM) are advanced neural network architectures designed to handle sequential data, with applications in time series prediction, language modeling, and speech recognition. Here, we compare their performance based on three key metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and R-squared (R2).

4.5.1. Analysis and Interpretation Performance Models for Water Quality Index Prediction

The GRU model outperforms the other models with the lowest MAE (0.3975%) and RMSE (0.6941%) values, indicating the smallest prediction errors. Additionally, its highest R2 (0.78) value demonstrates that GRU explains the most variance in the data. The superior performance of GRU could be attributed to its simpler architecture with fewer gates, which makes it both computationally efficient and effective in capturing essential dependencies within the data. However, the LSTM model, known for its capability to handle long-term dependencies with its three gates (input, forget, and output), shows the highest MAE (0.4826%) and RMSE (0.7624%), indicating the largest prediction errors. The lowest R2 value (0.69) implies that it explains the least variance in the data among the three models. This performance may result from overfitting, insufficient tuning, or the specific characteristics of the dataset, which might favor simpler models like GRU.
Moreover, the BiLSTM model, which processes data in both forward and backward directions, delivers intermediate performance. Its MAE (0.4197%) and RMSE (0.7126%) values are better than those of LSTM but not as good as GRU, indicating moderate prediction accuracy. The R2 value (0.73) is also between those of GRU and LSTM. BiLSTM’s bidirectional architecture helps in capturing context from both past and future data, providing an enhanced understanding of the data at the cost of higher computational complexity.
In conclusion, GRU is the most effective model for this dataset, providing the highest accuracy and reliability with the lowest computational complexity, as indicated by the lowest MAE (0.3975%) and RMSE (0.6941%) and the highest R2 (0.78). BiLSTM offers improved context understanding and better performance than LSTM but does not surpass GRU. LSTM, despite its robust architecture for long-term dependencies, performs the least effectively in this scenario, as evidenced by the highest MAE (0.4826%) and RMSE (0.7624%) and the lowest R2 (0.69). Therefore, GRU is recommended for tasks requiring efficient and accurate sequential data prediction, while BiLSTM can be considered when capturing bidirectional context, which is crucial. Based on Figure 10, Figure 10a of BiLSTM, and Figure 10b of LSTM, Figure 10c demonstrates the superior performance of the GRU model according to the trends observed in the blue line and predicted orange line. In Figure 10, the blue line represents the observed daily, and the orange line represents the model’s prediction for 2025 and the first few months of 2026. The observed and forecasted WQI show that water contamination may remain high for the summer months but decrease if the government implements strict water management policies.

4.5.2. Analysis and Interpretation of Performance Models for Water Demand Forecasts

The GRU model demonstrates superior performance with the lowest MAE (374) and RMSE (530), indicating the most accurate predictions and smallest errors. Its highest R2 value (0.81) suggests that the GRU model explains the most variance in the data, making it the most effective among the three models. The simpler architecture of GRU, with fewer gates compared to LSTM, contributes to its computational efficiency and effectiveness in capturing dependencies in the data. Nevertheless, the LSTM model shows moderate performance, with an MAE of 461 and an RMSE of 675. Although its errors are higher than those of GRU, the LSTM still performs better than BiLSTM. The R2 value of 0.70 indicates that LSTM explains a significant portion of the variance in the data, but not as much as GRU. This performance reflects LSTM’s ability to handle long-term dependencies, although it may require more tuning or be less suited to the specific dataset compared to GRU. Yet, the BiLSTM model has the highest MAE (500) and RMSE (726), indicating the least accurate predictions with the largest errors. Its R2 value of 0.65 is the lowest, suggesting it explains the least variance in the data among the three models. While BiLSTM’s bidirectional architecture allows for capturing context from both past and future data, it comes with increased computational complexity and does not outperform GRU or LSTM in this case.
Definitively, the GRU model is the most effective for this dataset, offering the highest accuracy and reliability, as indicated by the lowest MAE (374) and RMSE (530) and the highest R2 (0.81). LSTM performs moderately well, better than BiLSTM, but not as effectively as GRU. BiLSTM, despite its bidirectional approach, shows the least accuracy and explanatory power, with the highest MAE (500) and RMSE (726) and the lowest R2 (0.65). Therefore, GRU is recommended for tasks requiring efficient and accurate sequential data prediction, while BiLSTM may be less suitable for this specific dataset. In comparison to Figure 12a of BiLSTM and Figure 12b of LSTM, Figure 12c depicts the superior performance of the GRU model based on the observed and predicted trends. In Figure 12, the blue line represents the observed daily values, and the orange line represents the model’s prediction for 2025 and the first few months of 2026. Using multiple models rather than a single one is valuable because comparing their findings identifies the best-fitting model.

4.6. Comparison of GRU, LSTM, and BiLSTM Models Based on Nash-Sutcliffe Efficiency

GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory), and BiLSTM (Bidirectional LSTM) are prominent recurrent neural network architectures for processing sequential data. We evaluate their performance in predicting the WQI and water demand using the Nash-Sutcliffe Efficiency (NSE), a measure of predictive power and accuracy. An NSE value of 1 denotes a perfect match between predictions and observations, corresponding to zero estimation error variance.
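For clarity, a minimal sketch of the NSE computation is given below, following its standard definition (1 minus the ratio of the residual variance to the variance of the observations); the example values are placeholders, not data from this study.

```python
# Minimal sketch of the Nash-Sutcliffe efficiency (NSE) as commonly defined:
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
import numpy as np

def nash_sutcliffe(observed, simulated) -> float:
    """Return the NSE of simulated values against observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual_var = np.sum((observed - simulated) ** 2)
    observed_var = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_var / observed_var

# Example with placeholder values: an NSE of 1 would mean a perfect match.
print(nash_sutcliffe([3.2, 4.1, 5.0, 4.7], [3.0, 4.3, 4.8, 4.9]))
```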

4.6.1. Analysis and Interpretation of the Performance of Multi-Models for Water Quality Index

The GRU model achieves the highest NSE value of 0.759 in predicting the WQI, indicating superior predictive accuracy and reliability. The high NSE suggests that GRU closely matches the observed data, effectively capturing the essential patterns and dependencies within the dataset. The simplicity of GRU's architecture, with fewer gates than LSTM, contributes to its computational efficiency and robustness, making it particularly well suited to this task. The LSTM model exhibits the weakest performance, with an NSE value of 0.610. Although LSTM can capture long-term dependencies through its more complex architecture of three gates (input, forget, and output), it is less accurate than GRU and BiLSTM in this scenario. The lower NSE indicates that LSTM's predictions deviate more from the observed data, possibly due to overfitting or the need for further tuning on this dataset. The BiLSTM model performs better than LSTM but not as well as GRU, with an NSE value of 0.699. This indicates that BiLSTM is moderately accurate, benefiting from its bidirectional architecture that captures context from both past and future data; however, its increased computational complexity does not translate into performance superior to GRU's.
In conclusion, the GRU model stands out as the most effective for this dataset, achieving the highest NSE of 0.759, reflecting its superior predictive accuracy and reliability. BiLSTM, with an NSE of 0.699, offers a balanced performance but does not surpass GRU. LSTM, with the lowest NSE of 0.610, demonstrates the least accuracy and reliability among the three models. Thus, GRU is recommended for tasks requiring efficient and accurate sequential data prediction, while BiLSTM can be considered when the bidirectional context is important. LSTM may need further tuning or might be less suitable for this specific dataset.

4.6.2. Analysis and Interpretation of the Performance of Multi-Models for the Prediction of Water Demand

The GRU model achieves the highest NSE value of 0.720, indicating the most accurate and reliable predictions among the three models. The high NSE signifies that the GRU model closely matches the observed data, effectively capturing the key patterns and dependencies within the dataset. The simpler architecture of GRU, with fewer gates than LSTM, enhances its computational efficiency and robustness, making it particularly effective for this task. The LSTM model shows moderate performance, with an NSE value of 0.650. Although LSTM can handle long-term dependencies through its more complex architecture of three gates (input, forget, and output), it is less accurate than GRU. The lower NSE suggests that LSTM's predictions deviate more from the observed data than GRU's, potentially due to overfitting or the need for more tuning on this dataset. The BiLSTM model has the lowest NSE value of 0.590, indicating the least accurate predictions among the three models. Although BiLSTM benefits from its bidirectional architecture, capturing context from both past and future data, it does not perform as well as GRU or LSTM in this case; its increased computational complexity does not translate into better performance.
In summary, the GRU model stands out as the most effective for this dataset, with the highest NSE of 0.720, reflecting its superior predictive accuracy and reliability. LSTM, with an NSE of 0.650, performs moderately well but not as effectively as GRU. BiLSTM, with the lowest NSE of 0.590, demonstrates the least accuracy and reliability among the three models. Therefore, GRU is recommended for tasks requiring efficient and accurate sequential data prediction, while BiLSTM appears less suitable for this specific dataset. LSTM, although capable, may require further tuning or may not be the best fit for this particular scenario. The GRU values in Table 7 and Table 8, being closest to 1, indicate the smallest variance between predicted and actual values, underscoring the superior performance of this model in this study.

4.7. Correlation Matrix Analysis

The Pearson correlation matrix helps establish connections between dataset properties, aiding in selecting optimal inputs and clarifying relationships with the water quality index (WQI). The matrix quantifies the linear relationships between the physicochemical characteristics and the WQI, showing the direction and strength of each pairwise parameter correlation. A coefficient of +1 indicates a perfect positive linear correlation, and −1 a perfect negative one.
Figure 13 shows that pH correlates moderately and positively with suspended solids (SS) at 0.26, suggesting that SS concentrations tend to rise with pH. Water temperature (WT) also shows linear relationships with the other variables (Figure 13). Figure 13 further illustrates positive links between pH and both SS and EC, and an inverse relationship between pH and nitrate. DO has a somewhat negative association with EC and somewhat positive associations with nitrate and phosphate. Nitrate is somewhat positively associated with DO and negatively associated with phosphate. Phosphate has moderate positive correlations with SS, WQI, and EC. SS is positively correlated with pH and phosphate. EC shows only weak correlations with pH, phosphate, and nitrate. Understanding these interactions through the correlation matrix supports water management decisions.
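As an illustration of how such a matrix can be produced, the sketch below assumes the dataset is available as a CSV file with columns matching the parameters above; the file name and column labels are hypothetical, and the optional heatmap only mimics the style of Figure 13.

```python
# Minimal sketch: a Pearson correlation matrix of the water quality parameters,
# assuming the dataset is available as a pandas DataFrame with these columns.
# Column names and the file name are illustrative placeholders.
import pandas as pd

df = pd.read_csv("lake_tanganyika_wq.csv")  # hypothetical file name
cols = ["pH", "DO", "Nitrate", "Phosphate", "SS", "EC", "WT", "WQI"]
corr = df[cols].corr(method="pearson")      # pairwise coefficients in [-1, 1]
print(corr.round(2))

# Optional heatmap, similar in spirit to Figure 13 (requires seaborn/matplotlib):
# import seaborn as sns; import matplotlib.pyplot as plt
# sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1); plt.show()
```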

5. Conclusions

Since clean water (CW) is vital to health worldwide, water quality (WQ) monitoring is essential, and environmental protection depends greatly on WQ modeling and prediction. Unlike conventional approaches, deep learning and machine-learning-based systems that use features from the water quality index yield reliable results for predicting water quality and water demand and for classifying water quality. This paper proposes effective multi-model architectures of recurrent neural networks, including GRU, LSTM, and BiLSTM, to forecast water quality and consumption with high accuracy and robustness, rather than relying on a single model, thereby leveraging the unique strengths of each model and facilitating thorough comparisons. Experiments on two datasets show that the suggested method performed exceptionally well. Owing to their efficiency in managing nonlinear relationships, high-dimensional data, and varied feature interactions, SVM, KNN, and RF were used for water quality classification, which allows for accurate class assignments that are crucial for evaluating water quality and guiding decision-making. The Random Forest (RF) model demonstrated high effectiveness in forecasting water quality classification (WQC), with a testing accuracy of 96% and a macro average and weighted average of 0.96 across precision, recall, and F1-score. RF's precision and recall were particularly strong in the "Excellent", "Good", and "Poor" categories, though slightly lower for "Very poor", indicating consistent performance across most classes. The SVM model achieved the highest testing accuracy at 97%, with a macro average of 0.78 and a weighted average of 0.97, and near-perfect scores in most categories except "Excellent", where precision was 0.00, highlighting an issue in predicting this specific class. The KNN model, with a lower testing accuracy of 87%, showed decent precision and recall, particularly for the "Poor" and "Unfit" categories, but had a relatively lower macro average of 0.76 and weighted average of 0.87, suggesting less consistent performance across all classes than RF and SVM. The training accuracies of RF, SVM, and KNN are 99.89%, 98.52%, and 82.88%, respectively. Based on its training accuracy of 99.89%, RF fared better than SVM and KNN.
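For readers who wish to reproduce a comparable classification workflow, the sketch below outlines a generic scikit-learn pipeline for RF, KNN, and SVM; the file name, feature columns, label column, and hyperparameters are assumptions, not the exact configuration used in this study.

```python
# Illustrative sketch only: a generic water quality classification pipeline
# with the three classifiers compared in this study. All names and settings
# are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

df = pd.read_csv("lake_tanganyika_wq.csv")             # hypothetical file name
X = df[["pH", "DO", "Nitrate", "Phosphate", "SS", "EC"]]
y = df["WQC"]                                           # class labels, e.g., "Good", "Poor"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train_s, y_train)                       # fit on the scaled training split
    print(name, classification_report(y_test, model.predict(X_test_s)))
```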
Similarly, GRU emerges as the most effective model for this dataset, achieving the highest accuracy and reliability with minimal computational complexity, as demonstrated by the lowest MAE (0.3975%) and RMSE (0.6941%) and the highest R2 (0.78) and NSE (0.759) for WQI prediction. Although LSTM is designed to handle long-term dependencies, it performs the least effectively in this context, as shown by the highest MAE (0.4826%) and RMSE (0.7624%) and the lowest R2 (0.69) and NSE (0.610). The BiLSTM model, which processes data in both forward and backward directions, shows intermediate performance: its MAE (0.4197%) and RMSE (0.7126%) are better than LSTM's but not as good as GRU's, and its R2 (0.73) and NSE (0.699) likewise fall between those of GRU and LSTM. The NSE score obtained by GRU indicates relatively little deviation between the predicted and observed WQI. In the same way, the GRU model outperforms the others for water demand, with the lowest MAE (374) and RMSE (530) and the highest R2 (0.81) and NSE (0.720), indicating the most accurate predictions and smallest errors; its simpler architecture enhances computational efficiency and effectiveness. The LSTM model, with an MAE of 461, an RMSE of 675, an R2 of 0.70, and an NSE of 0.650, shows moderate performance, reflecting its capability to handle long-term dependencies while being less suitable for this dataset than GRU. The BiLSTM model has the highest errors (MAE 500, RMSE 726) and the lowest R2 (0.65) and NSE (0.590), making it the least effective for this dataset. According to the R2 and NSE values obtained, the GRU model outperforms LSTM and BiLSTM for both WQI and water demand prediction, indicating that the simpler GRU architecture performs better than the more intricate LSTM and BiLSTM designs.
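Similarly, a univariate GRU forecaster of the kind compared here can be sketched as follows, assuming a TensorFlow/Keras implementation; the window length, layer sizes, and training settings are illustrative assumptions rather than the configuration reported above. Replacing the GRU layer with `tf.keras.layers.LSTM`, or wrapping an LSTM in `tf.keras.layers.Bidirectional`, yields the LSTM and BiLSTM counterparts.

```python
# Illustrative sketch of a univariate GRU forecaster in Keras. The data, window
# length, and hyperparameters are placeholders, not the study's configuration.
import numpy as np
import tensorflow as tf

def make_windows(series: np.ndarray, lookback: int = 30):
    """Slice a 1-D series into (lookback, 1) input windows and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

series = np.sin(np.linspace(0, 20, 500))   # placeholder for daily WQI or water demand
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.GRU(64, input_shape=(X.shape[1], 1)),  # single GRU layer
    tf.keras.layers.Dense(1),                              # one-step-ahead output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```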
The WQI and urban water demand are forecasted using LSTM, GRU, and BiLSTM because these models can identify short- and long-term dependencies in time-series data. This capability yields accurate forecasts, which are important for managing and controlling WQ, planning water demand, protecting public health, and maintaining ecosystems. Using multi-model architectures is crucial for maximizing predictive accuracy, effectively capturing diverse data types, and advancing research in environmental monitoring and management.
In addition to emphasizing the need for continuous efforts to monitor and regulate the quality of the local water supply, the article provides valuable insights into water contamination in this area. Based on the observed variations in the WQI, pollution in Lake Tanganyika is more severe during the dry season than during the wet season. Urban water demand is also higher during the dry summer months than during the wet season.
In this study, the increase in the water quality index (WQI) correlated with a significant rise in urban water consumption over the corresponding period. This relationship suggests that as urban water usage increases, so does the potential for pollutants to enter water sources, degrading water quality. The influx of contaminants from anthropogenic activities such as industrial discharge and urban runoff likely contributed to the deterioration in water quality reflected by the elevated WQI values. The study therefore indicates a direct association between heightened urban water consumption and deteriorating water quality, highlighting the need for sustainable water management practices to mitigate pollution and preserve water resources. Consequently, specific mitigation actions are required to stop further water quality degradation, including continuous environmental monitoring, public awareness campaigns, and the establishment of stringent norms for the use and upkeep of the lake. The highest urban water demand is observed during the dry season, necessitating greater awareness of water conservation and advances in water-saving technology to progressively lower use. The correlation matrix helps identify the key variables driving water quality degradation or improvement and directs targeted environmental management activities by showing how variations in these parameters relate to changes in the WQI; both negative and positive correlations are observed in Figure 13. One limitation of this research is the variability and limited availability of high-quality, long-term environmental data specific to this region.
Additionally, the complex interactions among diverse ecological, climatic, and human factors make it challenging to build models that accurately capture and predict the lake's dynamic conditions, and model overfitting can result. Using the smallest feasible sample size for the trials is one approach, but doing so would leave too few samples for effective training and testing. We therefore intend to expand the dataset in our upcoming studies. Further research is also needed to forecast and project WQ and urban water demand while accounting for climate change, which will help identify the issues that Lake Tanganyika has faced and will face in the coming decades. A study of the water distribution network (WDN) of Bujumbura City is also needed to identify the impact of water pressure on its water consumption.

Author Contributions

Conceptualization, A.N.; methodology, A.N.; software, A.N. and M.I.; validation, Y.G. and D.Z.; formal analysis, A.N.; investigation, A.N.; data curation, A.N. and P.H.; writing—original draft preparation, A.N.; writing—review and editing, M.I. and A.K.G.; visualization, Z.W. and B.N.; supervision, D.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Data Availability Statement

The corresponding author can provide the data necessary to substantiate the conclusions presented in this inquiry upon reasonable request.

Acknowledgments

The authors would like to thank REGIDESO for their assistance in obtaining the urban water consumption and WQ datasets.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, L. Different methods for the evaluation of surface water quality: The case of the Liao River, Liaoning province, China. Int. Rev. Spat. Plan. Sustain. Dev. 2017, 5, 4–18. [Google Scholar] [CrossRef] [PubMed]
  2. Kumar, P. Simulation of Gomti River (Lucknow City, India) future water quality under different mitigation strategies. Heliyon 2018, 4, e01074. [Google Scholar] [CrossRef] [PubMed]
  3. Damo, R.; Icka, P. Evaluation of water quality index for drinking water. Polish J. Environ. Stud. 2013, 22, 1045–1051. [Google Scholar]
  4. Alcamo, J. Water quality and its interlinkages with the Sustainable Development Goals. Curr. Opin. Environ. Sustain. 2019, 36, 126–140. [Google Scholar] [CrossRef]
  5. Li, P.; Wu, J. Drinking Water Quality and Public Health. Expo. Health 2019, 11, 73–79. [Google Scholar] [CrossRef]
  6. Najah, A.; El-Shafie, A.; Karim, O.A.; El-Shafie, A.H. Application of artificial neural networks for water quality prediction. Neural Comput. Appl. 2013, 22, 187–201. [Google Scholar] [CrossRef]
  7. Clean Water Is Life, Health, Food, Leisure, Energy… 2018. Available online: https://www.eea.europa.eu/signals-archived/signals-2018-content-list/articles/clean-water-is-life-health (accessed on 30 August 2018).
  8. Choong, S.M.; El-Shafie, A. State-of-the-Art for Modelling Reservoir Inflows and Management Optimization. Water Resour. Manag. 2015, 29, 1267–1282. [Google Scholar] [CrossRef]
  9. do Carmo, J.S.A. Physical Modelling vs. Numerical Modelling: Complementarity and Learning. 2020. Available online: https://www.preprints.org/manuscript/202007.0753/v2 (accessed on 1 July 2020).
  10. Mohamad, M.F.; Kamarul, M.; Samion, H.; Hamzah, S.B. Physical Modelling for Flood Evaluation of Selangor River Under Tidal Influence. 2021. Available online: http://iieng.org/siteadmin/upload/8285E0214013.pdf (accessed on 1 February 2014).
  11. Wu, X.; Xiang, X.; Li, L.; Wang, C. Water level updating model for flow calculation of river networks. Water Sci. Eng. 2014, 7, 60–69. [Google Scholar] [CrossRef]
  12. Guan, M.; Wright, N.G.; Sleigh, P.A. A robust 2D shallow water model for solving flow over complex topography using homogenous flux method. Int. J. Numer. Methods Fluids 2013, 73, 225–249. [Google Scholar] [CrossRef]
  13. Xu, M.; Wang, Z.; Duan, X.; Pan, B. Effects of pollution on macroinvertebrates and water quality bio-assessment. Hydrobiologia 2014, 729, 247–259. [Google Scholar] [CrossRef]
  14. Allawi, M.F.; Othman, F.B.; Afan, H.A.; Ahmed, A.N.; Hossain, S.; Fai, C.M.; El-shafie, A. Reservoir Evaporation Prediction Modeling Based on Artificial Intelligence Methods. Water 2019, 11, 1226. [Google Scholar] [CrossRef]
  15. Hipni, A.; El-shafie, A.; Najah, A.; Karim, O.A.; Hussain, A.; Mukhlisin, M. Daily Forecasting of Dam Water Levels: Comparing a Support Vector Machine (SVM) Model With Adaptive Neuro Fuzzy Inference System (ANFIS). Water Resour. Manag. 2013, 27, 3803–3823. [Google Scholar] [CrossRef]
  16. Najah, A.; van Lam, T.; Duy, N.; Thieu, N.V.; Kisi, O. A comprehensive comparison of recent developed meta-heuristic algorithms for streamflow time series forecasting problem. Appl. Soft Comput. 2021, 105, 107282. [Google Scholar] [CrossRef]
  17. Ridwan, W.M.; Sapitang, M.; Aziz, A.; Faizal, K.; Najah, A.; El-shafie, A. Rainfall forecasting model using machine learning methods: Case study. Ain Shams Eng. J. 2020, 12, 1651–1663. [Google Scholar] [CrossRef]
  18. Ismail, E.; Ayoub, B.; Azeddine, K.; Hassan, O. Machine learning in the service of a clean city. Procedia Comput. Sci. 2021, 198, 530–535. [Google Scholar] [CrossRef]
  19. Arbués, F.; García-Valiñas, M.Á.; Martínez-Espiñeira, R. Estimation of residential water demand: A state-of-the-art review. J. Socio. Econ. 2003, 32, 81–102. [Google Scholar] [CrossRef]
  20. Donkor, E.A.; Asce, S.M.; Mazzuchi, T.A.; Soyer, R.; Roberson, J.A. Urban Water Demand Forecasting: Review of Methods and Models. J. Water Resour. Plan. Manag. 2014, 140, 146–159. [Google Scholar] [CrossRef]
  21. Zhang, G.P. An investigation of neural networks for linear time-series forecasting. Comput. Oper. Res. 2001, 28, 1183–1202. [Google Scholar] [CrossRef]
  22. Ristow, D.C.M.; Henning, E.; Kalbusch, A.; Petersen, E. Models for forecasting water demand using time series analysis: A case study in Southern Brazil. J. Water Sanit. Hyg. Dev. 2021, 11, 231–240. [Google Scholar] [CrossRef]
  23. Ghiassi, M.; Zimbra, D.K.; Saidane, H. Urban Water Demand Forecasting with a Dynamic Artificial Neural Network Model. J. Water Resour. Plan. Manag. 2008, 134, 138–146. [Google Scholar] [CrossRef]
  24. Usselam, A.B.D.; Ozger, M. Water Consumption Prediction of Istanbul City by Using Fuzzy Logic Approach. Water Resour. Manag. 2005, 19, 641–654. [Google Scholar] [CrossRef]
  25. Firat, M.; Ali, M. Evaluation of Artificial Neural Network Techniques for Municipal Water Consumption Modeling. Water Resour. Manag. 2009, 23, 617–632. [Google Scholar] [CrossRef]
  26. Mouatadid, S.; Adamowski, J. Using extreme learning machines for short-term urban water demand forecasting. Urban Water J. 2017, 9006, 630–638. [Google Scholar] [CrossRef]
  27. Sajjad, M.; Khan, Z.A.; Ullah, A.; Member, S.; Hussain, T.; Member, S.; Baik, S.W. A Novel CNN-GRU-Based Hybrid Approach for Short-Term Residential Load Forecasting. IEEE Access 2020, 8, 143759–143768. [Google Scholar] [CrossRef]
  28. Guo, G.; Liu, S.; Wu, Y.; Li, J.; Zhou, R.; Zhu, X. Short-Term Water Demand Forecast Based on Deep Learning Method. J. Water Resour. Plan. Manag. 2018, 144, 04018076. [Google Scholar] [CrossRef]
  29. Namdari, H.; Haghighi, A.; Mohammad, S. Short-term urban water demand forecasting; application of 1D convolutional neural network (1D CNN) in comparison with different deep learning schemes. Stoch. Environ. Res. Risk Assess. 2023. [Google Scholar] [CrossRef]
  30. Ghalehkhondabi, I.; Ardjmand, E.; Ii, W.A.Y.; Weckman, G.R. Water demand forecasting: Review of soft computing methods. Environ. Monit. Assess. 2017, 189, 313. [Google Scholar] [CrossRef]
  31. Niyongabo, A.; Guan, Y.; Zhang, D.; Ziyuan, W. Water quality characteristics of Lake Tanganyika in Burundi and Lake Victoria in Uganda. Water Pract. Technol. 2023, 18, 1756–1774. [Google Scholar] [CrossRef]
  32. Uddin, M.G.; Nash, S.; Olbert, A.I. A review of water quality index models and their use for assessing surface water quality. Ecol. Indic. 2021, 122, 107218. [Google Scholar] [CrossRef]
  33. Langenberg, V.T.; Nyamushahu, S.; Roijackers, R.; Koelmans, A.A. External nutrient sources for Lake Tanganyika. J. Great Lakes Res. 2003, 29, 169–180. [Google Scholar] [CrossRef]
  34. Sindayigaya, I.; Toyi, O. Water Public Policy in Burundi: Case of the City of Bujumbura; Summer School, University of Burundi: Bujumbura, Burundi, 2023. [Google Scholar] [CrossRef]
  35. Phiri, H.; Mushagalusa, D.; Katongo, C.; Sibomana, C.; Ajode, M.Z.; Muderhwa, N.; Smith, S.; Ntakimazi, G.; De Keyzer, E.L.R.; Nahimana, D.; et al. Lake Tanganyika: Status, challenges, and opportunities for research collaborations. J. Great Lakes Res. 2023, 49, 102223. [Google Scholar] [CrossRef]
  36. Sarvala, J.; Langenberg, V.; Salonen, K.; Chitamwebwa, D.; Coulter, G.W.; Huttula, T.; Kanyaru, R.; Kotilainen, P.; Makasa, L.; Mulimbwa, N.; et al. Fish catches from Lake Tanganyika mainly reflect changes in fishery practices, not climate. Int. Ver. Für Theor. Und Angew. Limnol. Verhandlungen 2006, 29, 1182–1188. [Google Scholar] [CrossRef]
  37. Russell, J.M.; Barker, P.; Cohen, A.; Ivory, S.; Kimirei, I.; Lane, C.; Leng, M.; Maganza, N.; McGlue, M.; Msaky, E.; et al. ICDP workshop on the Lake Tanganyika Scientific Drilling Project: A late Miocene-present record of climate, rifting, and ecosystem evolution from the world’s oldest tropical lake. Sci. Drill. 2020, 27, 53–60. [Google Scholar] [CrossRef]
  38. Horton, R.K. An index number system for rating water quality. J. Water Pollut. Control Fed. 1965, 37, 303–306. [Google Scholar]
  39. Brown, R.M.; McClelland, N.I.; Deininger, R.A.; Tozer, R.G. A water quality index—Do we dare? Water Sew. Work. 1972, 117, 339–343. [Google Scholar]
  40. Al-othman, A.A. Evaluation of the suitability of surface water from Riyadh Mainstream Saudi Arabia for a variety of uses. Arab. J. Chem. 2019, 12, 2104–2110. [Google Scholar] [CrossRef]
  41. Tyagi, S.; Sharma, B.; Singh, P.; Dobhal, R. Water Quality Assessment in Terms of Water Quality Index. Am. J. Water Resour. 2013, 1, 34–38. [Google Scholar] [CrossRef]
  42. Brown, R.M.; McClelland, N.I.; Deininger, R.A.; O’Connor, M.F. A Water Quality Index–Crashing the Psychological Barrier. In Indicators of Environmental Quality; Springer: Boston, MA, USA, 1972; Volume 1, pp. 173–182. [Google Scholar] [CrossRef]
  43. Plisnier, P.D.; Nshombo, M.; Mgana, H.; Ntakimazi, G. Monitoring climate change and anthropogenic pressure at Lake Tanganyika. J. Great Lakes Res. 2018, 44, 1194–1208. [Google Scholar] [CrossRef]
  44. Chen, S.; Kimirei, I. Demonstration Research on Comprehensive Water Quality Monitoring in the Lake Tanganyika Basin; China Science and Technology Exchange Centre Nanjing Institute of Geography and Limnology: Nanjing, China, 2015. [Google Scholar]
  45. Nallakaruppan, M.K.; Gangadevi, E.; Shri, M.L.; Balusamy, B.; Bhattacharya, S.; Selvarajan, S. Reliable water quality prediction and parametric analysis using explainable AI models. Sci. Rep. 2024, 14, 7520. [Google Scholar] [CrossRef]
  46. Lumami, K.; Théophile, N.; Musibono, D.-D.; Patricia, L.A.; Njoyim, E.B.T.; Irene, T.; Van der Bruggen, B. Qualitative and quantitative analysis of the pollutant load of effluents discharged Northwestern of Lake Tanganyika, in the Democratic Republic of Congo. African J. Environ. Sci. Technol. 2020, 14, 361–373. [Google Scholar] [CrossRef]
  47. Stańczyk, J.; Szkudlarek, J.K.; Rychlikowski, P.; Lipiński, P. Improving short—Term water demand forecasting using evolutionary algorithms. Sci. Rep. 2022, 12, 13522. [Google Scholar] [CrossRef]
  48. Sanchez, G.M.; Terando, A.; Smith, J.W.; García, A.M.; Wagner, C.R.; Meentemeyer, R.K. Science of the Total Environment Forecasting water demand across a rapidly urbanizing region. Sci. Total Environ. 2020, 730, 139050. [Google Scholar] [CrossRef] [PubMed]
  49. Aggarwal, S.; Sehgal, S. Prediction of Water Consumption for New York city using Machine Learning. In Proceedings of the 2021 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; Volume 6, pp. 486–490. [Google Scholar] [CrossRef]
  50. Biau, G.; Scornet, E. A random forest guided tour. Test 2016, 25, 197–227. [Google Scholar] [CrossRef]
  51. Breiman, L. Random Forests. Kluwer Acad. Publ. Manuf. Neth. 2001, 45, 5–32. [Google Scholar]
  52. Fitzgerald, J.; Azad, R.M.A.; Ryan, C. Bootstrapping to reduce bloat and improve generalisation in genetic programming. In Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, Amsterdam, The Netherlands, 6–10 July 2013; pp. 141–142. [Google Scholar]
  53. Zhang, L.; Huettmann, F.; Liu, S.; Sun, P.; Yu, Z.; Zhang, X.; Mi, C. Classification and regression with random forests as a standard method for presence-only data SDMs: A future conservation example using China tree species. Ecol. Inform. 2019, 52, 46–56. [Google Scholar] [CrossRef]
  54. Chen, H.; Liu, X.; Jia, Z.; Liu, Z.; Shi, K.; Cai, K. A combination strategy of random forest and back propagation network for variable selection in spectral calibration. Chemom. Intell. Lab. Syst. 2018, 182, 101–108. [Google Scholar] [CrossRef]
  55. Farris, F.A. The gini index and measures of inequality. Am. Math. Mon. 2010, 117, 851–864. [Google Scholar] [CrossRef]
  56. Cutler, A.; Cutler, D.R.; Stevens, J.R. Ensemble Machine Learning; Springer: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  57. Cui, Z.; Ke, R.; Pu, Z.; Wang, Y. Deep Bidirectional and Unidirectional LSTM Recurrent Neural Network for Network-wide Traffic Speed Prediction. arXiv 2018, arXiv:1801.02143. [Google Scholar] [CrossRef]
  58. Pyo, J.; Pachepsky, Y.; Kim, S.; Abbas, A.; Kim, M.; Sung, Y.; Ligaray, M.; Hwa, K. Long short-term memory models of water quality in inland water environments. Water Res. X 2023, 21, 100207. [Google Scholar] [CrossRef]
  59. Aldhyani, T.H.H.; Alrasheedi, M.; Alqarni, A.A.; Alzahrani, M.Y.; Bamhdi, A.M. Intelligent Hybrid Model to Enhance Time Series Models for Predicting Network Traffic. IEEE Access 2020, 8, 130431–130451. [Google Scholar] [CrossRef]
  60. Louhi, A.; Hammadi, A.; Achouri, M. Determination of some heavy metal pollutants in sediments of the Seybouse river in annaba, Algeria. Air Soil Water Res. 2012, 5, 91–101. [Google Scholar] [CrossRef]
  61. Islam Khan, M.S.; Islam, N.; Uddin, J.; Islam, S.; Nasir, M.K. Water quality prediction and classification based on principal component regression and gradient boosting classifier approach. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 4773–4781. [Google Scholar] [CrossRef]
  62. Kim, J.; Moon, N. BiLSTM model based on multivariate time series data in multiple field for forecasting trading area. J. Ambient. Intell. Humaniz. Comput. 2019. [Google Scholar] [CrossRef]
  63. Pllsnier, P.E.-D. Probable Impact of Global Warming and ENSO on Lake Tanganyika. Bull. Des Séances Académie R. Des Sci. D’outre-Mer 2004, 50, 185–196. [Google Scholar]
  64. Jia, Z.; Chang, X.; Duan, T.; Wang, X.; Wei, T.; Li, Y. Water quality responses to rainfall and surrounding land uses in urban lakes. J. Environ. Manage. 2021, 298, 113514. [Google Scholar] [CrossRef] [PubMed]
  65. Brentan, B.M.; Lima, G.M. Rehabilitation of water distribution networks: When and how to rehabilitate. J. Hydroinform. 2023, 25, 1329–1340. [Google Scholar] [CrossRef]
  66. Menapace, A.; Avesani, D. Global Gradient Algorithm Extension to Distributed Pressure Driven Pipe Demand Model. Water Resour. Manag. 2019, 33, 1717–1736. [Google Scholar] [CrossRef]
  67. Ingrao, C.; Strippoli, R.; Lagioia, G.; Huisingh, D. Water scarcity in agriculture: An overview of causes, impacts and approaches for reducing the risks. Heliyon 2023, 9, e18507. [Google Scholar] [CrossRef]
  68. Kummu, M.; Guillaume, J.H.A.; De Moel, H.; Eisner, S.; Flörke, M.; Porkka, M.; Siebert, S.; Veldkamp, T.I.E.; Ward, P.J. The world’s road to water scarcity: Shortage and stress in the 20th century and pathways towards sustainability. Sci. Rep. 2016, 6, 38495. [Google Scholar] [CrossRef]
  69. United Nations, Department of Economic and Social Affairs, Population Division. The World’s Cities in 2018—Data Booklet; ST/ESA/SER.A/417; United Nations: New York, NY, USA, 2018; p. 34. [Google Scholar]
Figure 1. An outline of Lake Tanganyika and its surrounding waters.
Figure 2. Framework of the adopted methodology in this current study.
Figure 3. Water Quality Index of Lake Tanganyika.
Figure 4. Urban water consumption of Bujumbura City from 2018 to 2024.
Figure 5. Model's architecture for the LSTM.
Figure 6. Model's architecture for the GRU.
Figure 7. Model's architecture for the BiLSTM.
Figure 8. Normalized confusion matrices for machine learning models showing correct classifications and misclassifications: (a) Support Vector Machine; (b) k-nearest neighbors; (c) Random Forest.
Figure 9. Deep learning model loss per epoch and the training and testing daily graph of the water quality index from 2018 to 2024: (a) Bi-directional long short-term memory loss per epoch; (b) Long short-term memory loss per epoch; (c) Gated recurrent unit loss per epoch; (d) water quality index training and testing daily graph.
Figure 10. Performance of the models in predicting the daily water quality index of Lake Tanganyika: (a) Bi-directional long short-term memory; (b) Long short-term memory; (c) Gated recurrent unit (observed values in blue, predictions in orange).
Figure 11. Deep learning model loss per epoch and the training and testing daily graphs of urban water use from 2018 to 2024: (a) Bi-directional long short-term memory loss per epoch; (b) Long short-term memory loss per epoch; (c) Gated recurrent unit loss per epoch; (d) water demand training and testing daily graph.
Figure 12. Performance of the models in predicting daily water consumption in Bujumbura City: (a) Bi-directional long short-term memory; (b) Long short-term memory; (c) Gated recurrent unit (observed values in blue, predictions in orange).
Figure 13. Correlation matrix showing the associations between important dataset properties.
Table 1. Acceptable ranges for the variables used to determine WQI [40].
Parameters      Allowable Limitations
pH              8.5
DO, mg/L        10
NO3, mg/L       45
Phosphate       0.1
SS, mg/L        100
EC, μS/cm       1000
Table 2. Water quality classification based on WQI [41].
Range of WQI    Classification
0–25            Outstanding
26–50           Good
51–75           Poor
76–100          Very poor
>100            Not suitable to drink
Table 3. Weights according to parameters used in this current study.
Parameters      Weight Unit (Wi)
pH              0.011476789
DO              0.00975527
Nitrate         0.002167838
Phosphate       0.975527024
SS              0.000975527
EC              9.75527 × 10−5
Table 4. The effectiveness of the machine learning methods utilized to forecast WQC.
ML    Metrics     Excellent  Good  Poor  Very Poor  Unfit  Test Accuracy (%)  Train Accuracy (%)  Macro Avg  Weighted Avg
RF    Precision   1.00       0.99  0.95  0.85       1.00   96                 99.89               0.96       0.96
      Recall      0.75       0.94  1.00  0.75       0.82   –                  –                   0.85       0.96
      F1-score    0.86       0.97  0.98  0.79       0.90   –                  –                   0.90       0.96
KNN   Precision   0.67       0.85  0.88  0.85       1.00   87                 82.88               0.85       0.87
      Recall      0.50       0.82  0.92  0.73       0.82   –                  –                   0.76       0.87
      F1-score    0.57       0.84  0.90  0.79       0.90   –                  –                   0.80       0.87
SVM   Precision   0.00       0.96  0.99  0.94       1.00   97                 98.52               0.78       0.97
      Recall      0.00       0.99  0.99  1.00       1.00   –                  –                   0.80       0.98
      F1-score    0.00       0.97  0.99  0.97       1.00   –                  –                   0.79       0.98
Table 5. The effectiveness of the DL models utilized to forecast WQI.
Models    MAE (%)   RMSE (%)   R2
BiLSTM    0.4197    0.7126     0.73
LSTM      0.4826    0.7624     0.69
GRU       0.3975    0.6941     0.78
Table 6. The performance of DL models to predict urban water consumption.
Models    MAE    RMSE   R2
BiLSTM    500    726    0.65
LSTM      461    675    0.70
GRU       374    530    0.81
Table 7. The value of NSE for the water quality index.
Models    NSE
BiLSTM    0.699
LSTM      0.610
GRU       0.759
Table 8. The value of NSE for urban water demand.
Models    NSE
BiLSTM    0.590
LSTM      0.650
GRU       0.720