Article

Prediction of the Behaviour from Discharge Points for Solid Waste Management

by Sergio De-la-Mata-Moratilla *, Jose-Maria Gutierrez-Martinez, Ana Castillo-Martinez and Sergio Caro-Alvaro
Department of Computer Science, University of Alcala, 28801 Alcala de Henares, Spain
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2024, 6(3), 1389-1412; https://doi.org/10.3390/make6030066
Submission received: 24 April 2024 / Revised: 21 June 2024 / Accepted: 22 June 2024 / Published: 24 June 2024

Abstract

This research investigates the behaviour of the Discharge Points in a Municipal Solid Waste Management System to evaluate the feasibility of making individual predictions for every Discharge Point. Such predictions could enhance system management through optimisation, improving its ecological and economic impact. Current approaches consider installations as a whole, but individual predictions may yield better results. This paper follows a methodology that includes analysing data from 200 different Discharge Points over a period of four years and applying twelve forecast algorithms identified in the literature as the most commonly used for these predictions, including Random Forest, Support Vector Machines, and Decision Tree, to identify predictive patterns. The results are compared and evaluated to determine the accuracy of individual predictions and their potential improvements. As the results show that the algorithms do not capture the behaviour of individual Discharge Points, alternative approaches are suggested for further development.

1. Introduction

Currently, more than 55% of the world’s population lives in cities, and according to the United Nations, this percentage will increase to 68% by 2050 [1]. This growth is accompanied by significant environmental impacts, including CO2 emissions, waste production, reduced biodiversity, and resource consumption. Consequently, cities are at the forefront of environmental concerns, drawing the attention of both environmentalists and researchers focused on mitigating environmental degradation.
A particularly visible environmental impact for city dwellers is the generation of waste and its potential health implications [2,3,4]. Municipal solid waste (MSW) production has prompted national and international regulations aimed at promoting recycling within urban areas. The European Union, for instance, has recommended substituting natural resources with recycled materials, especially critical materials [5,6]. To achieve a sustainable society, it is essential to implement well-planned waste management strategies. Effective strategy implementation relies on the ability to gather comprehensive information about the system to be optimised, making the monitoring of the MSW system crucial. This monitoring, typically conducted via IoT devices and networks, facilitates real-time data collection on various aspects of the MSW system, thereby enhancing waste collection and recycling efforts.
The existing research on MSW systems has predominantly focused on the overall status, considering only the total amount of waste collected for optimisation [7,8]. This information enables the creation of reactive strategies that anticipate the resources needed for efficient waste management. Even with optimal forecasting and management, the uneven distribution of waste filling across Discharge Points (DPs) necessitates a more detailed prediction to achieve better optimisation.
Monitoring MSW can provide valuable data on several process aspects, such as the fill rate of each DP. By refining the granularity to include the fill level of individual DPs, it is possible to optimise collection routing, frequency, scheduling, staff shifts, and redistribution of the DPs more effectively.
Previous research has shed light on MSW management, exploring different aspects like system optimisation and waste management transportation routing. However, there is a notable gap in forecasting the behaviour of individual DPs within MSW systems for more targeted optimisation strategies.
This paper presents a study that leverages data from actual DPs within an MSW system to forecast their individual behaviour over time, establishing a more precise foundation for optimising the key aspects of waste management. To achieve these results, a set of regression algorithms was used across different scenarios to identify the most effective approach for predicting the fill levels. The insights gained from this approach aim to enhance waste collection and management practices.
The paper is organised as follows: First, it starts with a review of related works in this field, followed by a clear outline of the project’s objectives. Next, it provides a detailed description of the materials and methods. Finally, a discussion of the results is presented, concluding with the exposition of the main findings and their implications.

2. Related Works

The related work section provides valuable context for our research objectives by summarising prior studies on MSW systems. It highlights the crucial need to forecast individual DP behaviour for more effective optimisation strategies, which is the primary focus of our study. This review of the literature not only underscores the importance of predicting DP behaviour but also sets the stage for the methodology employed in our research.
In the introduction, it was noted that MSW systems and their optimisation have been studied extensively in many countries, employing various approaches and focusing on different aspects, as evidenced by the literature review [9]. Some of these studies have attempted to develop mathematical models of waste management using advanced techniques [10] or have conducted scenario analyses centred on eco-efficiency management [11]. While these studies aimed at a global understanding of waste management systems, others have analysed specific factors such as waste generation in particular areas to propose reduction solutions [12] or forecast the amount of waste of different types [13].
Numerous papers on this topic have focused on optimising the routing of waste transportation from DPs to the collection sites [14]. Some propose the use of IoT support optimisation [15,16] or applying algorithms like the Ant Colony-Shuffled Frog Leaping Algorithm [17]. Other authors have emphasised the benefits of improved logistics, including the reduction of negative impacts [18,19].
Another line of research involves the planning of systems by proposing a better distribution of DPs or collection sites using techniques like the average nearest neighbour and kernel density to reduce uncollected waste [20] or creating a Geographic Information System to enhance collection site locations in post-war installations [21].
A common aspect of these studies is their focus on the overall management of systems, viewing them as integrated environments. They do not prioritise forecasting the behaviour of the system. However, forecasting as a topic has been covered for many years, as shown in the literature review [22]. The aim of these works is to use forecasting for planning and managing the MSW [23], to classify and represent system behaviour using time series [13], or to incorporate socioeconomic and geographical factors using a hybrid k-nearest neighbours approach [24]. Additionally, some papers applied algorithms like decision trees, SVM, or neural networks to predict waste generation [25].
Based on the analysis of prior research, it is evident that the types and the percentages of waste have been studied for areas ranging from entire countries to city sections to apply optimal planning strategies. This includes the number of trucks, sizes of dumping sites, human resources planning, and the enhancement of recycling policies. Some studies proposed optimisation techniques, while others focused on forecasting as a basis for optimisation. However, in all cases, predictions are made for the entire installation. When individual DPs are considered, the goal is typically to plan their distribution or enhance routing, not to forecast their behaviour as a key tool for optimisation.
To build on these findings and address the identified gaps, our research seeks to achieve precise predictions of individual DP behaviour. The following section outlines the specific objectives and research questions that guided our investigation, aiming to enhance the efficiency and effectiveness of MSW management through advanced predictive techniques.

3. Objective and Research Questions

The objective of this paper is to accurately predict the individual behaviour of every DP as a basis for optimising various aspects of MSW management. Achieving this objective requires developing models or techniques to make precise predictions that can be applied individually to each DP.
To support this objective, we have formulated three research questions, referred to as RQ1, RQ2 and RQ3, which will guide our research activities.
  • RQ1: Are the size, reliability, and stability of the available data sufficient to obtain precise results for each individual DP?
  • RQ2: Is the precision of the prediction sufficient to optimise the MSW management effectively?
  • RQ3: Is it possible to perform the necessary calculations with the appropriate speed or within the required time frame to be used in real-time systems?
This objective and these questions are centred on predicting the increment in the fill levels of DPs with enough accuracy to anticipate their state at different points in time. The fill state and its evolution represent the behaviour we aim to estimate using the predicted fill increment.
To answer these questions and achieve our objective, we followed the techniques presented in the Related Works section and employed a rigorous methodology. The following section details the materials and methods used in our study, including the dataset, the forecast algorithms applied, and the different scenarios considered to predict the fill levels of each DP.

4. Materials and Methods

The increment in the fill level of the DPs can be modelled using time series, in the same way as the fill level of a complete facility. In these time series, the fill values vary over a time scale that can range from hours down to minutes. However, the response time is not critical for periods shorter than minutes, as the intervals between interactions, such as people depositing garbage or the collection system operating, are typically several minutes. Therefore, time series with a detail level of hours or fractions of hours are sufficient. This reduces the size of the datasets to be processed but also limits the number of values available for calculations.
To ensure the model’s accuracy, we utilised a dataset from a real installation with sufficient DPs and data spanning the last four years. This extensive dataset allows for training and testing algorithms and provides the opportunity to transition waste collection from a reactive to a proactive approach.
The database used to create the models consists of data from a small Spanish city with a population of 18,000 inhabitants and a density of about 500 inhabitants/km2. These data include information about 200 filling points over the last four years, with records every 5 min. The data covered the period from January 2019 to mid-March 2023. However, the data from January to June 2020 were excluded from the initial analysis due to the impact of COVID-19, which altered the behaviour of the fill points. The types of waste at each DP vary, as this study aims to assess the feasibility of forecasting rather than achieving the best possible results.
To effectively utilise this extensive dataset, a detailed data preparation phase was conducted. The following subsection outlines this phase and provides a small representation of the dataset structure from the MSW system. This subsection is accompanied by others related to the forecast algorithms and scenarios selected for the study.

4.1. Data Preparation

To process and analyse this extensive dataset, a systematic data preparation phase was necessary. The most relevant aspect to consider was the frequency and detail of the measurements in relation to the dataset size. In round numbers, the study includes 4 years of data from 200 DPs, with readings taken every 5 min. This results in approximately 84 million tuples (4 years × 365 days × 24 h × 12 readings/h × 200 DPs), each containing information about the timestamp, DP ID, and filling increment.
Before using the data, it was essential to review it. The primary purpose of this review was to manage the large volume of data, which increased the likelihood of encountering outliers. This data preprocessing involved handling missing values, treating outliers, and normalising data to ensure consistency across the dataset.
To handle missing values at different intervals for each DP, interpolation from the last obtained data points was employed.
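As an illustration, a minimal sketch of this gap-filling step, assuming pandas and a hypothetical series of 5-min readings for one DP indexed by timestamp:

```python
import pandas as pd

# Hypothetical 5-minute fill-increment readings for one DP, with sensor gaps.
readings = pd.Series(
    [0.4, None, None, 0.7, 0.5],
    index=pd.date_range("2021-03-01 08:00", periods=5, freq="5min"),
)

# Interpolate the missing values from the surrounding observed points.
filled = readings.interpolate(method="time")
print(filled)
```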
Outliers were identified using boxplot diagrams for each DP. After careful consideration of their impact on the models’ performance, these outliers were either corrected or set aside for further analysis in future studies.
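Since the boxplot whiskers correspond to the 1.5 × IQR rule, a minimal sketch of an equivalent programmatic check (assuming pandas and hypothetical values) could look as follows:

```python
import pandas as pd

def boxplot_outliers(values: pd.Series) -> pd.Series:
    """Flag values falling outside the 1.5 * IQR whiskers of a boxplot."""
    q1, q3 = values.quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (values < lower) | (values > upper)

# Example with hypothetical hourly increments for one DP.
increments = pd.Series([3, 4, 5, 4, 60, 3, 5])
print(boxplot_outliers(increments))
```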
As part of normalising the dataset, additional features such as the day of the week and public holidays were included to enhance the models’ predictive performance, accounting for known factors that influence waste generation.
Given the dataset size, the feasibility of processing was tested using standard libraries within a reasonable time frame. At this point, a computer with four RTX5000 video cards was used to run the 12 algorithms (Decision Tree, Elastic Net, Gaussian Regression, KNN, Lasso Regression, Linear Regression, Logistic Regression, Naïve Bayes, Polynomial Regression, Random Forest, Ridge Regression, and Support Vector Machine) on the entire dataset. The total training time extended to 90 h, which was deemed unacceptable. Additional tests were then conducted, reducing the data size by grouping the data into intervals of 15, 30, 45, and 60 min. The total training time for the 12 algorithms over all the DPs was two days, one and a half days, and one day and 12 h, respectively.
These tests proved that reducing the data size was essential to align with the study’s objective. The first step in this reduction process involved grouping the data by hours. This approach decreased the number of daily samples from 288 to 24, simplifying the data processing for most existing algorithms. Grouping was performed by summing all increments in filling from minute 0 to minute 59 and assigning the result to the corresponding hour. Additionally, negative values were replaced with zeroes, as they indicated moments when the DP was emptied. Values representing the capacity filled at each DP were divided by 100, as the original values were stored in the database multiplied by 100 to avoid decimal numbers.
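A minimal sketch of this hourly aggregation step, assuming pandas and hypothetical column names for the raw 5-minute readings:

```python
import pandas as pd

# Hypothetical raw readings: one row per DP per 5-minute interval, with the
# increment stored multiplied by 100 to avoid decimals.
raw = pd.read_csv("dp_readings.csv", parse_dates=["timestamp"])

hourly = (
    raw.set_index("timestamp")
       .groupby("dp_id")["increment"]
       .resample("1h").sum()    # sum the increments from minute 0 to 59
       .clip(lower=0)           # negative values mark emptying events
       .div(100)                # undo the x100 storage factor
       .reset_index()
)
```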
Table 1 presents the first 10 tuples of data in this processed state, providing an example of how the data were received from the MSW system for the DPs. This table illustrates the adjusted data, reflecting the transformation applied to prepare the data for subsequent analysis.
In the initial tests, cross-validation (CV) was used to evaluate the models by separating the data from each DP. The data from each DP were divided into between 2 and 10 sets. However, this approach encountered several issues, such as high computational time and resource usage, and it led to poor predictions. This was related to the limited number of records available for some DPs, causing models to overfit or fail to adjust properly, resulting in a coefficient of determination (R2) below 0.5. Additionally, randomising the data within each set sometimes resulted in unrealistic predictions due to non-sequential records.
To address these issues, we revised the prediction methodology. The revised approach is detailed in the following sections.
Subsequently, we reduced the number of DPs to demonstrate the feasibility of forecasting and to handle the unique cases of each DP in future studies. By successfully forecasting a small set of DPs, we aim to establish feasibility and pave the way for incorporating more elements and proposing a real-time forecasting architecture. Initially, we considered reducing the number of DPs to four, but additional elements of interest led us to expand the study to six DPs while keeping the number manageable. DPs were discarded based on several criteria, including a reduced number of measures due to sensor malfunctions or locations in unpopulated areas, data series with significant gaps, or unexpected values likely due to sensor malfunctions or outlier behaviour, such as that observed during the COVID-19 pandemic.
The next step involved transforming the raw data into information relevant for forecasting. This transformation required using the date to generate synthetic fields useful for categorising the measurements. Table 2 provides an overview of this initial transformation step, showcasing the decomposition of the date into several fields to extract relevant information from the data of each DP.
Holiday dates were also identified for the different DPs, as this information could distinguish significant changes in behaviour on holidays. Other useful categorisations included the day of the week, day of month, weekend, or weekday. As a result, the dataset for each DP included attributes such as DP number, date and time, real increment, day of the month, day of the week, month, week of the month, week of the year, year, holiday status, season, time of the day, and weekday/weekend. Detailed information about each attribute is provided in Appendix A.
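A minimal sketch of how these calendar attributes could be derived from the timestamp with pandas (the column names and the holiday calendar are hypothetical):

```python
import pandas as pd

def add_calendar_features(df: pd.DataFrame, holidays: set) -> pd.DataFrame:
    """Derive the categorical attributes listed in Appendix A from the timestamp."""
    ts = df["timestamp"]
    df["day_of_month"] = ts.dt.day
    df["day_of_week"] = ts.dt.dayofweek + 1          # 1 (Monday) to 7 (Sunday)
    df["month"] = ts.dt.month
    df["week_of_year"] = ts.dt.isocalendar().week
    df["year"] = ts.dt.year
    df["hour"] = ts.dt.hour
    df["is_holiday"] = ts.dt.date.isin(holidays)     # local holiday calendar
    df["is_weekend"] = ts.dt.dayofweek >= 5
    return df
```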
After completing these processes, the data from each DP (32,319 records) were divided into training and testing sets. To ensure consistency, the data were first grouped by date, with all the records from the same date included in the same dataset. The first 70% of the records, ordered by date, were used for training (22,608 records for every DP), and the remaining 30% for testing (9711 records).
Next, different ratios were explored for training and testing, ranging from 60% to 80% for training and 20% to 40% for testing. However, the 70% training and 30% testing split proved to be the most effective, yielding the best results.
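A minimal sketch of the chronological 70/30 split described above, keeping whole days together (column names follow the hypothetical ones used in the earlier sketches):

```python
import pandas as pd

def chronological_split(dp: pd.DataFrame, train_fraction: float = 0.7):
    """Split one DP's records by date: the earliest dates go to training."""
    dp = dp.sort_values("timestamp")
    dates = dp["timestamp"].dt.date
    unique_dates = dates.drop_duplicates().tolist()
    cutoff = unique_dates[int(len(unique_dates) * train_fraction)]
    train = dp[dates < cutoff]
    test = dp[dates >= cutoff]
    return train, test
```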

4.2. Forecast Algorithms Selection

Given the variety of algorithms for processing the datasets, it was necessary to select a subset and evaluate their suitability for modelling the behaviour of the DPs [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81]. From the range of forecast algorithms available today, we selected twelve of the most commonly used in this study area: Decision Tree, Elastic Net, Gaussian Regression, KNN, Lasso Regression, Linear Regression, Logistic Regression, Naïve Bayes, Polynomial Regression, Random Forest, Ridge Regression, and Support Vector Machine. Appendix B contains a list and brief description of all the considered algorithms, including those not ultimately selected. Neural networks are also included in this list, despite their structural differences from the selected forecast algorithms, and are reserved for future studies.
Since each DP can show different behaviours, applying algorithms to the dataset as a whole might produce predictions that deviate from the actual results. Therefore, we applied the algorithms individually to each DP, allowing them to better understand and adapt to each DP’s specific behaviour.
In the initial tests, each algorithm underwent extensive hyperparameter tuning using grid search and cross-validation techniques to optimise their performance, ensuring that the models were well-calibrated to the characteristics of the DPs’ datasets. However, due to the poor results obtained and high execution time, as presented in subsection ‘Data Preparation’, we decided to adopt another way to obtain the predictions for each DP using each forecast algorithm for every scenario selected.
The selected algorithms were applied to the training dataset from each DP and then validated using the testing dataset from the same DP to assess the precision in their predictions. Each algorithm’s computational complexity was assessed, considering factors such as training time (with Gaussian Regression, Naïve Bayes, Random Forest, and Support Vector Machine having the longest execution times between 5 and 20 min); memory requirements; and scalability, to ensure feasibility and efficiency in real-world deployment.
For executing the different algorithms on the DPs across various scenarios, the parameters were primarily set to the default values, as defined by the Python library used. While alternative parameter settings were explored for each algorithm, the results were consistently less favourable compared to those obtained using the default parameter values.
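As an illustration of this setup, a minimal sketch that fits several of the selected algorithms with default parameters on one DP's training split and scores them on its testing split; the library choice (scikit-learn) and variable names are assumptions, since the paper only states that a Python library with default parameters was used:

```python
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.metrics import r2_score

# A subset of the twelve selected algorithms, all left at default parameters.
MODELS = {
    "Decision Tree": DecisionTreeRegressor(),
    "Random Forest": RandomForestRegressor(),
    "KNN": KNeighborsRegressor(),
    "Linear Regression": LinearRegression(),
    "Ridge Regression": Ridge(),
    "Lasso Regression": Lasso(),
    "Elastic Net": ElasticNet(),
    "Support Vector Machine": SVR(),
}

def evaluate_dp(X_train, y_train, X_test, y_test):
    """Fit every model on one DP's training split and return its R2 on the test split."""
    scores = {}
    for name, model in MODELS.items():
        model.fit(X_train, y_train)
        scores[name] = r2_score(y_test, model.predict(X_test))
    return scores
```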

4.3. Forecast Scenario Selection

Even though all fields from each record could be used with any of the selected algorithms, we adopted different perspectives in the forecast. Using fewer input parameters also makes the predictions easier to interpret in relation to the actual increment of the DP at a given time.
The initial predictions were also made using the entire set of available parameters. However, the poor forecast results and the complexity of the patterns obtained led us to defer this approach to future studies.
For this initial study, we decided to work with the following subsets of fields as input for the forecast using all the selected algorithms (each subset includes the increment and the date):
  • Day of the week and hour
  • Day of the month and hour
  • Weekend and hour
  • Holiday and hour
Regarding the different scenarios presented, one of the challenges is accounting for fluctuating waste volumes during different periods, which might require adjusting seasonal components and introducing additional control variables in the model. This was not considered relevant for this study, whose aim was to obtain a first perspective on how well the forecast models adjust to the datasets of each DP in each scenario, and it will be addressed in a future study.
This selection resulted in four scenarios, twelve algorithms, six DPs, and 32,319 records for each DP (divided into two sets for training and testing). We conducted all the planned tests, and the results are discussed in the following sections.
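To make the four scenarios concrete, a minimal sketch of how the input-feature subsets could be encoded (the column names follow the hypothetical ones used in the earlier sketches):

```python
# Each scenario is a subset of calendar features used as model input; the
# target is always the hourly fill increment of the DP.
SCENARIOS = {
    "day_of_week_hour": ["day_of_week", "hour"],
    "day_of_month_hour": ["day_of_month", "hour"],
    "weekend_hour": ["is_weekend", "hour"],
    "holiday_hour": ["is_holiday", "hour"],
}

def scenario_inputs(df, scenario):
    """Return the feature matrix X and the target y for one scenario of one DP."""
    return df[SCENARIOS[scenario]], df["increment"]
```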

5. Discussion

To evaluate the predictive accuracy of each algorithm for the behaviour of the DP in each of the proposed scenarios, several metrics were obtained to assess how closely the results align with the expected outcomes. The findings are presented with an initial overview using graphs, followed by a quantitative analysis of the selected metrics and a comparison of the results across different algorithms. Finally, a new approach is proposed to continue the research.

5.1. Predictions vs. Real Values

Time series plots were generated to visually compare actual waste generation patterns with model forecasts, providing intuitive insights into the models’ ability to capture temporal dynamics and trends.
To present a clear view of the findings, Figure 1 illustrates a summarised version of one of the generated graphs. This analysis examines the filling process with respect to the day of the month, date, time, and filling value. The graph compares the actual values over three consecutive days to the hourly predictions provided by two algorithms used in the study: Decision Tree and KNN. These algorithms were selected for the detailed presentation because including all the algorithms would have made the graph difficult to interpret, and several results overlapped because they showed exactly the same behaviour. Conversely, other algorithms deviated significantly from the real values and were not included. The presented algorithms, Decision Tree and KNN, offer the most promising results. In the graph, alongside the actual values and the predictions from both algorithms, the error from each algorithm at each instant of time is also represented. This error is calculated as the absolute value of the difference between the actual and predicted values, depicted as negative to provide a separate view of the errors from the predicted time series. This representation offers a clear perception of the error magnitude and its distribution across the entire prediction.
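As an illustration of how such a comparison graph can be produced, a minimal sketch assuming matplotlib and hypothetical series for the actual and predicted values:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_prediction_vs_actual(hours, actual, predicted, label):
    """Plot actual vs. predicted fill increments and the absolute error,
    drawn as a negative trace to keep it visually separate, as in Figure 1."""
    error = -np.abs(np.asarray(actual) - np.asarray(predicted))
    plt.plot(hours, actual, label="Actual")
    plt.plot(hours, predicted, label=label)
    plt.plot(hours, error, label=f"{label} error (negated)")
    plt.xlabel("Hour")
    plt.ylabel("Fill increment")
    plt.legend()
    plt.show()
```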
The predictions shown in the graph were obtained from the regression algorithms. They do not perfectly align with the real values, although they are relatively close in some cases.
This graph facilitates a direct comparison between the actual values and the predicted values. The inclusion of errors offers a comprehensive understanding of the algorithms’ performance.
Examining the error, it is evident that the prediction results from the forecast algorithms deviate significantly from the actual values, indicating that the predictions lack the desired accuracy and reliability. This suggests that, while the algorithms capture some trends, they are not yet refined enough to provide precise forecasts for practical applications, at least for the interval of time selected. Consequently, further enhancements and optimisations are necessary to improve the predictive performance and bring the forecasted values closer to the actual data.
This graph shows a small portion of the existing dataset, serving as the first evaluation of the results. Analysing the results in detail for the entire dataset and all the scenarios and algorithms requires the use of analytic mechanisms and metrics that generate objective and measurable results.
Considering this, a further study was undertaken to determine if the behaviour seen in the graphs is representative of the dataset or if it reflects outlier behaviours for the selected periods of time at different DPs.

5.2. Results for the Selected Metrics

The analytic tools used to evaluate the forecasts were obtained from the different libraries used to execute the algorithms. Appendix C provides a list and brief explanation of these tools. Although a total of 11 metrics were considered, we focused on those most frequently used in the related literature [9,23,82] to facilitate a comparison of the results with those presented in previous studies. The metrics considered are the Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), coefficient of determination (R2), and Maximum Error (ME).
These metrics were chosen because they provide a comprehensive evaluation of the prediction algorithms, capturing different aspects of the predictive performance, such as accuracy, variance, and worst-case scenarios. Their values signify how well the models perform in terms of the average error (MAE), sensitivity to large errors (MSE and RMSE), goodness of fit (R2), and worst prediction errors (ME).
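A minimal sketch of how these five metrics can be computed for one DP/scenario/algorithm combination, assuming scikit-learn (the function and variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score, max_error

def forecast_metrics(y_true, y_pred):
    """Return the five metrics reported in this study."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "R2": r2_score(y_true, y_pred),
        "ME": max_error(y_true, y_pred),
    }
```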
Considering all the scenarios for the different DPs and applying the different regression algorithms selected for the study, a total of 288 records were generated to indicate the accuracy of the predictions.
Upon analysing the results, it was evident that the predictions did not align well with the actual values in the different cases studied. This was demonstrated by the generally high values of MSE and ME (typically greater than 100) and R2 values lower than 0.5, indicating poor predictive accuracy.
A deeper analysis was conducted by averaging the results obtained across all cases. Table 3 shows the average results, the range of values, and their standard deviations.
As seen in Table 3, the average results across all the cases were suboptimal, particularly with the R2 values not only being below 0.5 but also below 0. This indicated that the models failed to capture the underlying patterns in the data accurately.
To further understand these results, an average of the outcomes from all the cases for each algorithm was calculated. This analysis aimed to identify which algorithms produced the worst results and understand the overall performance presented in Table 3. The findings are detailed in Table 4 and are discussed in the following subsection.

5.3. Algorithms Comparison

Taking a closer look at the average results obtained, the Naïve Bayes and Logistic Regression algorithms performed the worst, followed by Elastic Net, Lasso, Linear Regression, and Ridge Regression, all of which had similar results and R2 values close to 0.
The poor performance of Naïve Bayes and Logistic Regression can be attributed to their underlying assumptions and suitability for the type of data used. Naïve Bayes, for instance, assumes independence between predictors, which might not hold true in this context, leading to significant errors. Logistic Regression, typically used for classification, may not be well-suited for continuous output predictions in this scenario.
Elastic Net, Lasso, Linear Regression, and Ridge Regression, despite being widely used, showed subpar performance, potentially due to their linear nature, which might not capture the complex, non-linear relationships in the data effectively.
The Gaussian Process and Support Vector Regression (SVR) algorithms also performed suboptimally but showed slightly better results compared to the aforementioned models. Gaussian Process Regression, while flexible and capable of capturing non-linear relationships, might suffer from computational inefficiency and sensitivity to hyperparameter settings in high-dimensional spaces, which can lead to suboptimal predictions. SVR, on the other hand, is generally effective for regression tasks but requires extensive tuning of its parameters (such as the kernel type and regularisation parameter) to achieve optimal performance. In our study, the tuning might not have been sufficient to handle the variability in the data.
Polynomial Regression, while better than some linear models, still failed to provide high accuracy. This can be attributed to the potential overfitting of higher-degree polynomials, especially with limited data points or noisy data, leading to poor generalisation to new data.
The k-nearest neighbours (KNN) algorithm, although performing better than many linear models, also showed limitations. KNN’s performance is sensitive to the choice of k (the number of neighbours) and the distance metric used. In this study, KNN demonstrated moderate accuracy, with an average MAE of 8.8 and R2 of 0.2, indicating that it was somewhat effective in capturing patterns but still fell short of providing highly accurate predictions. The relatively higher error rates could be due to the algorithm’s susceptibility to noise and its computational intensity, especially with larger datasets, as is the case in our study.
At a more detailed level, we examined the prediction results using the Decision Tree and Random Forest algorithms, which had the best average forecast results. We analysed the average results by grouping the data by DP and scenario to check if these results were close to the global average values. The results, included in Table 5 and Table 6, show the average results grouped by algorithm and DP and by algorithm and case studied. These values confirm the global averages, as they are quite similar and indicate that the results are not good enough for our goals.
It appears that these algorithms do not perform well for the studied system. However, we conducted a final study of the prediction results for all the cases underlying Table 5 and Table 6. The prediction results for all the DPs using the Random Forest and Decision Tree algorithms for the four situations studied are included in Appendix D. Table 7 presents the 10 best results obtained by R2 in descending order. These results follow the previous trend, showing that, while there are cases where R2 can reach up to 0.44, most results are below 0.3, and none reach a value of 0.5 for R2.
While the Decision Tree and Random Forest algorithms outperformed the others, their performance still leaves room for improvement. The results suggest that even the best-performing algorithms struggle to provide highly accurate predictions, indicating a need for more advanced models or additional feature engineering to enhance the predictive accuracy.
In summary, the comparative analysis of the algorithms revealed that tree-based models like Decision Tree and Random Forest show promise but are not yet fully adequate for precise forecasting in this context. Further research and development, including exploring advanced machine learning techniques and refining the model parameters, are essential to achieve the desired level of accuracy in predictions.
To complete this review of the results, we checked whether the results were homogeneous across the DPs used in the experiment and whether adding extra information about the day of the month, week, or weekend enhanced the results. The same behaviour was observed across scenarios with slight differences between DPs, but the trends remained the same, with values far from acceptable. Figure 2 illustrates these data for the Decision Tree MAE averages across DPs and scenarios (DM—Day of Month, DW—Day of Week, H—Hour, Hol—Holiday, and Wnd—Weekend).
At this point, we can reflect on the use of these results. Although they are not good enough for our goal, they may be effective for predictions in a system that considers larger aggregates, such as the total amount of filling across the installation or longer time periods. Combining the predictions from every single DP in this way would yield better values, even with the current metrics.
While our study focused on a specific geographical region, the methodologies and insights gained are likely transferable to similar urban environments, highlighting the broader applicability of the proposed approach beyond the study area.

5.4. Scenarios Analysis

To further understand the performance of the models, a detailed analysis was conducted of the specific situations where the models performed particularly well or poorly, independently of the scenario and the DP studied. This approach helps to highlight common characteristics or anomalies that might explain the differences in prediction accuracy.
The best-case scenarios typically occurred in cases with:
  • Low Variability in Data: Scenarios with relatively stable and low variability in waste generation patterns led to more accurate predictions. For instance, DP locations that had consistent usage patterns and fewer fluctuations showed better prediction accuracy.
  • Higher Data Quality: DPs with more complete and higher-quality data tended to produce better results. This included fewer missing values and more consistent data recording practices.
  • Effective Feature Relevance: Scenarios where the selected features (like day of the week or weekend) had a strong correlation with the waste generation patterns yielded more accurate predictions.
These best-case situations underscore the importance of several key factors in achieving high prediction accuracy:
  • Stability in Patterns: DPs with predictable and stable waste generation patterns allowed the models to learn and generalise more effectively. Consistency in data helps in identifying clear trends and reduces the noise that can obscure the underlying patterns.
  • Data Completeness and Quality: High-quality data with minimal missing values and consistent recording practices provided a solid foundation for model training. Accurate and complete datasets ensure that the models are trained on reliable information, enhancing their predictive capabilities.
  • Relevance of Features: The selection of relevant features that strongly correlate with waste generation patterns proved crucial.
These insights highlight the importance of focusing on data quality and feature relevance to achieve the best possible model performance. The successful situations provide a roadmap for improving predictions in more challenging contexts by emphasising the need for stability, completeness, and relevance in the data used for model training.
Conversely, the worst-case scenarios were characterized by:
  • High Variability in Data: High variability in the waste generation data, possibly due to erratic usage patterns or inconsistent data collection, led to poor prediction accuracy. For instance, DPs in areas from the city with fluctuating population density or irregular events had significant prediction errors.
  • Incomplete Data: Scenarios with many missing values or inconsistencies in data recording practices negatively impacted the model performance. The lack of a complete and clean dataset made it challenging for the algorithm to learn accurate patterns.
  • Weak Feature Relevance: When the selected feature did not strongly correlate with the waste generation patterns, the models struggled to make accurate predictions. This was evident in DPs where external factors like special events or holidays significantly influenced waste generation but were not included in the feature set.
  • Small Dataset: DPs having this characteristic, where only a limited amount of data were available, posed significant challenges for the algorithms. A small dataset can lead to overfitting, where the model performs well on the training data but fails to generalise to new data. Additionally, small datasets often lack the diversity needed to capture all possible variations in waste generation patterns. For example, DPs in newly established or low-traffic areas may not have accumulated enough data to train robust models, resulting in less reliable predictions.
These worst-case scenarios underscore the importance of comprehensive data collection and the inclusion of relevant features in the predictive modelling process. This also highlights the need for advanced data handling techniques to mitigate the effects of variability and noise and for strategies to augment small datasets, such as synthetic data generation or transfer learning, from similar contexts.

5.5. Exploring Alternative Approaches

Considering the results obtained, it was decided to explore alternative approaches to understand the data structure, aiming to improve the forecast accuracy for each DP.
Upon examining the results shown in Figure 1, three interesting details emerged:
  • Pattern Similarity: The predictions from the two algorithms displayed a pattern similar to the actual DP values, suggesting that the models capture some underlying trends despite the overall prediction errors.
  • Significant Upturns: The substantial differences between the upturns and the other values, which typically do not exhibit regular behaviour and have noticeable fluctuations, might explain why the forecasting algorithms struggled to match the DP values accurately.
  • Recurring Daily Patterns: The recurring daily pattern in the data could be a clue as to why the algorithms fail to predict the next hour accurately. A broader approach, predicting the next 24-h pattern and then adjusting it with real-time data, might be required.
Therefore, it was proposed to classify the dataset into different clusters to verify if the data indeed follow regular patterns. Figure 3, Figure 4 and Figure 5 illustrate the daily behaviour of the data classified into varying numbers of clusters (10, 5, and 2, respectively). The ordinate axis represents the DP fill level, ranging from 0 (empty) to 1 (full), while the abscissa axis represents the time in hours over a single day. In all graphics, the first cluster, number 0, is a control line over value 1 on the ordinate axis to ensure that the proportions are kept.
As shown in Figure 3, using ten clusters, the algorithm produced similar shapes with minor adjustments. The number of clusters was subsequently reduced to observe if fewer clusters could still capture the shape, ultimately using just two clusters.
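The paper does not state which clustering algorithm was applied; a minimal sketch of this idea using k-means over normalised 24-hour fill-level profiles (one row per day, values between 0 and 1) could look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_daily_profiles(daily_profiles: np.ndarray, n_clusters: int) -> np.ndarray:
    """Group 24-hour fill-level profiles into n_clusters and return the
    cluster centres, i.e. the typical daily shapes."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(daily_profiles)
    return km.cluster_centers_

# Example with hypothetical data: 365 days x 24 hourly fill levels in [0, 1].
profiles = np.random.rand(365, 24)
typical_shapes = cluster_daily_profiles(profiles, n_clusters=2)
```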
Despite the promising results, it is important to acknowledge the limitations of this clustering analysis, such as the simplifying assumption that the daily behaviour of a DP can be reduced to a small number of regular patterns.
This insight provides a valuable starting point for future research. Further studies could explore more sophisticated modelling techniques and incorporate additional exogenous variables to further enhance the forecasting accuracy.

6. Conclusions

Regarding the proposed research questions, we can conclude the following:
  • For RQ1, the answer is positive, as we were able to perform calculations without major issues related to data size or quality.
  • For RQ2, the answer is negative, as the forecasts were not accurate enough to optimise management effectively.
  • For RQ3, we found that the time required for calculations is substantial and would need enhancements and optimisations, such as pre-calculations or other techniques, to be feasible in a real-time scenario.
Our first conclusion is that predicting individual behaviours for DPs seems feasible, as the numeric results support. However, since the values are not sufficiently accurate, the algorithms cannot be used directly and need enhancements through other approaches. The base information for these forecasts is limited to the filling rate, but as the behaviour of people contributing to this filling could be influenced by other factors, it is necessary to explore climate factors to determine if they impact behaviour.
Although the results are not valid for individual forecasts, they could improve forecasts for an entire installation by combining forecasts from each DP. This could lead to more efficient waste collection schedules, reducing operational costs and the environmental impact.
Regarding the algorithms used, we conclude that only Decision Tree and Random Forest should be used in future works related to this subject, as they perform significantly better than all the others.
Based on our findings, it is recommended that policymakers encourage the adoption of advanced predictive models in MSW management. Investing in robust data infrastructure can further enhance the accuracy and reliability of these models.
Engaging key stakeholders such as waste management authorities, environmental agencies, and community representatives is crucial for the successful implementation and adoption of advanced forecasting solutions, fostering collaboration and knowledge sharing across diverse stakeholders.
The continuous monitoring and evaluation of the forecasting system’s performance are essential for identifying areas for improvement and adapting to evolving waste management challenges and regulatory requirements.
Future works should examine the feasibility of studying the behaviour of a large group of DPs simultaneously to create a general real-time response system. This system would generate predictions at the same frequency to determine the most probable behaviour of each DP throughout the day.
Moreover, based on the results obtained, we will consider developing a new methodology to better understand the behaviours of the different DPs and to achieve more accurate forecasts with optimal execution times.
Finally, we will continue to use the data from the different DPs used in this study. Additionally, we will explore other methods to ensure high-quality data, providing information that is relevant and sufficient to make useful predictions. Defining the criteria to validate and accept these data as a basis for calculations could be beneficial for this work.
Based on the observed performance of the predictive models, the following specific areas for future research are suggested:
  • Incorporating External Factors: While this research considered aspects related to the time of day, week, month or year, and local holidays, future research could investigate the impact of other external factors such as weather conditions and local events on the waste generation patterns at DPs. This could enhance the predictive accuracy of the models.
  • Improving Algorithm Performance: Explore additional forecast algorithms not used but analysed in this study and investigate others that may offer a better performance for time series predictions.
  • Real-Time Data Integration: Develop methods to integrate real-time data from the IoT sensors from the DPs into the predictive models to improve the timeliness and accuracy of forecasts.
  • Cost-Benefit Analysis: Conduct a detailed cost–benefit analysis of implementing advanced predictive models in MSW management to quantify the economic and ecological benefits.
  • Scalability Studies: Assess the scalability of the predictive models for larger datasets and more extensive municipal areas to ensure that the proposed solutions can be effectively applied in diverse urban settings.
  • User Behaviour Analysis: Study the behaviour of residents and businesses contributing to the waste stream to identify patterns and trends that could inform more targeted waste management strategies.
  • Optimisation of Collection Routes: Investigate the integration of predictive models with dynamic route optimisation algorithms to enhance the efficiency of waste collection operations.
By addressing these areas, future research can build upon the findings of this study to develop more robust and effective waste management solutions.

Author Contributions

S.D.-l.-M.-M.: conceptualisation, methodology, investigation, software, validation, formal analysis, data curation, writing—review and editing, and visualisation. J.-M.G.-M.: supervision, funding acquisition, conceptualisation, methodology, formal analysis, resources, and writing—review and editing. A.C.-M.: conceptualisation, funding acquisition, software, data curation, resources, validation, and writing—review and editing. S.C.-A.: investigation, software, visualisation, and writing—review and editing. Each author agrees to be personally accountable for their own contributions. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Ministry of Education and Vocational Training with the funding number FPU22/00871.

Data Availability Statement

For privacy reasons, the research data are unavailable.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this appendix, you will find the list of attributes used in the dataset and the range of values for each of them. All values are listed in Table A1.
Table A1. Final attributes for the records of each DP.
Number of DP: The numerical ID of the DP.
Date and time: Timestamp with the time reduced to hours, with 0 at minutes and seconds.
Real increment: Integer number with the DP filling increment for the hour.
Day of the month: Obtained from date. Value from 1 to 31 depending on the month.
Day of the week: Obtained from date. Value from 1 to 7.
No. month: Obtained from date. Value from 1 to 12.
No. week of month: Obtained from date. Value from 1 to 5.
No. week of year: Obtained from date. Value from 1 to 53.
Year: Date decomposition.
Month: Date decomposition.
Day: Date decomposition.
Holiday/Not holiday: Calculated with date and the use of a calendar with local holidays. Values are “HOLIDAY” or “NOT_HOLIDAY”.
Season: Obtained from date. Values are “SPRING”, “SUMMER”, “AUTUMN” or “WINTER”.
Time of the day: Obtained from date. Values are “MORNING”, “AFTERNOON”, “EVENING” or “NIGHT” with the following ranges:
  • “MORNING”: 6:00 to 11:59, both instants included.
  • “AFTERNOON”: 12:00 to 17:59, both instants included.
  • “EVENING”: 18:00 to 22:59, both instants included.
  • “NIGHT”: 23:00 to 5:59, both instants included.
Weekday/Weekend: Obtained from date. Values are “WEEKDAY” (Monday to Friday) or “WEEKEND”.

Appendix B

In this appendix, you will find the list of the algorithms considered in this research to forecast the behaviour of the different DPs, presented in Table A2.
Table A2. Forecast algorithms used for the study.
Gradient Boosting: Based on an ensemble meta-algorithm to reduce errors in forecast analysis. It creates a prediction model from a set of weak prediction models, where each of them makes a few assumptions related to the data.
Extreme Gradient Boosting (XGBoost) Regression: Based on the Gradient Boosting and Decision Tree algorithms, used for supervised learning tasks and supporting parallel processing to capture complex relationships between input features and target variables and to have a better selection and understanding of model behaviour.
Light Gradient Boosting Machine (LightGBM): Based on the Gradient Boosting algorithm, designed for efficient training on large-scale datasets with low memory cost using parallel and distributed computing.
CatBoost: Based on Decision Tree algorithms using Gradient Boosting, used to classify the results from different searches.
Stepwise Regression: Iteratively selects significant explanatory variables for the model, discarding less important ones by statistical significance after each iteration.
Linear Regression: Supervised learning algorithm predicting the relationship between two variables, assuming a linear connection between them.
Adaptive Boosting (AdaBoost): Boosting algorithm that classifies data by combining multiple weak learners into a strong one.
Autoregressive Integrated Moving Average (ARIMA): Regression algorithm measuring the strength of one dependent variable relative to another changing variable using historical values.
Seasonal-ARIMA (SARIMA): Based on the ARIMA algorithm, including seasonality in the forecast.
Neural Networks Regression: Uses artificial neural networks where each node has an activation function that defines the output based on a set of inputs, building a complex relationship between inputs and outputs.
Multiple Linear Regression: Extension of the Linear Regression algorithm allowing predictions with multiple independent variables.
Ordinal Regression: Predicts variables on an arbitrary scale, considering the relative order of variables.
Fast Forest Quantile Regression: Based on the Decision Tree algorithm, predicting not only the mean but also quantiles of the target variable.
Boosted Decision Tree Regression: Ensemble algorithm combining predictions from multiple weak learners to create a strong predictive model by correcting errors with the iteratively created trees.
Robust Regression: Provides an alternative to least squares regression, reducing the influence of outliers to fit better to a greater part of the data.
Stochastic Gradient Descent: Efficient algorithm fitting linear regressors under convex loss functions, suitable for large-scale datasets.
Decision Tree: Non-linear regression algorithm splitting the dataset into smaller parts, creating a tree-like structure.
Elastic Net: Based on Linear Regression, using penalisations to reduce predictor coefficients, combining absolute and squared values for the prediction.
Gaussian Regression: Flexible supervised learning algorithm with inherent uncertainty measures over predictions.
K-Nearest Neighbours (KNN): Non-linear regression algorithm predicting the target variable by averaging values of its k-nearest neighbours.
LASSO Regression: Based on Linear Regression, estimates sparse coefficients by selecting variables and regularising them to improve accuracy.
Logistic Regression: Models the probability of a discrete outcome given an input variable.
Naïve Bayes (Bayesian Regression): Incorporates Bayesian principles into another regression algorithm to estimate the probability distribution of the model.
Polynomial Regression: Extension of Linear Regression, predicting based on complex relationships using an nth degree polynomial over the independent variable.
Poisson Regression: Models count data or events within a fixed interval, considering each as a rare and independent event.
Random Forest: Non-linear regression algorithm using multiple Decision Trees to predict the output.
Ridge Regression: Based on Linear Regression, provides regularisation to prevent overfitting.
Support Vector Regression (SVR): Supervised machine learning algorithm identifying the output in a multidimensional space.

Appendix C

In this appendix, you will find a brief description of the different forecast metrics used to study the results of the forecasts over the different DPs, situations, and algorithms applied in this study, presented in Table A3.
Table A3. Tools used to analyse the predictions obtained from the different models applied to each DP.
Explained Variance Regression Score (VRS): Based on the variance metric, representing the dispersion of a continuous dataset. Closer to 1 is better.
Mean Squared Error (MSE): Measures the quality of a predictor and prediction intervals. Lower values are better.
Mean Absolute Error (MAE): Represents the average error between the real and predicted values. Lower values are better.
Root Mean Squared Error (RMSE): Measures the average difference between predicted and actual values. Closer to 0 is better.
Mean Squared Logarithmic Error (MSLE): Measures the relative difference between the logarithmic transformed actual and predicted values. Closer to 0 but not 0 is better.
Median Absolute Error (Median AE): Median of the differences between observed and predicted values. Closer to 0 is better.
Coefficient of determination (R2): Indicates how well one variable explains the variance of another. Closer to 1 is better.
Mean Absolute Percentage Error (MAPE): Shows the average absolute percentage difference between real and predicted values.
Mean Tweedie Deviance (MTD): Calculates the mean Tweedie deviance error, indicating the prediction type (Mean Squared Error, Mean Poisson Deviance, or Gamma Deviance).
D2 score: Generalisation of R2, replacing squared error by a deviance like Tweedie (D2 TS), Pinball (D2 PS), or Mean Absolute Error (D2 AES).
Maximum Error (ME): Captures the worst-case error between predicted and real values. The closer to 0 is better.

Appendix D

In this appendix, you will find the results of the forecasts made using the Decision Tree and Random Forest algorithms over the different DPs and situations in the study, presented in Table A4.
Table A4. Prediction results from the cases studied for the DPs selected using the Decision Tree and Random Forest algorithms.
Algorithm | Case Studied | DP | MAE | MSE | RMSE | R2 | ME
Decision Tree | Day of Month + Hour | 1 | 7.93 | 144.11 | 12 | 0.37 | 139.67
Decision Tree | Day of Month + Hour | 2 | 7.55 | 135.69 | 11.65 | 0.33 | 115.26
Decision Tree | Day of Month + Hour | 3 | 5.87 | 81.61 | 9.03 | 0.4 | 112.71
Decision Tree | Day of Month + Hour | 4 | 9.53 | 191.13 | 13.83 | 0.27 | 135.22
Decision Tree | Day of Month + Hour | 5 | 9.91 | 249.62 | 15.8 | 0.3 | 231.91
Decision Tree | Day of Month + Hour | 6 | 7.45 | 114.49 | 10.7 | 0.35 | 113.71
Decision Tree | Day of Week + Hour | 1 | 7.56 | 133.46 | 11.55 | 0.41 | 138.76
Decision Tree | Day of Week + Hour | 2 | 7.27 | 127.98 | 11.31 | 0.37 | 117.14
Decision Tree | Day of Week + Hour | 3 | 5.6 | 76.58 | 8.75 | 0.44 | 111.24
Decision Tree | Day of Week + Hour | 4 | 9.02 | 176.85 | 13.3 | 0.33 | 131.3
Decision Tree | Day of Week + Hour | 5 | 9.45 | 238.4 | 15.44 | 0.33 | 229.06
Decision Tree | Day of Week + Hour | 6 | 7.19 | 108.41 | 10.41 | 0.39 | 119.44
Decision Tree | Holiday + Hour | 1 | 7.83 | 140.39 | 11.85 | 0.38 | 138.91
Decision Tree | Holiday + Hour | 2 | 7.49 | 132.5 | 11.51 | 0.35 | 117.79
Decision Tree | Holiday + Hour | 3 | 5.78 | 79.79 | 8.93 | 0.42 | 114.22
Decision Tree | Holiday + Hour | 4 | 9.35 | 186.14 | 13.64 | 0.29 | 132.4
Decision Tree | Holiday + Hour | 5 | 9.76 | 243.78 | 15.61 | 0.32 | 230.56
Decision Tree | Holiday + Hour | 6 | 7.35 | 110.86 | 10.53 | 0.37 | 109.3
Decision Tree | Weekend + Hour | 1 | 7.65 | 135.29 | 11.63 | 0.4 | 138.98
Decision Tree | Weekend + Hour | 2 | 7.3 | 129.12 | 11.36 | 0.36 | 117.13
Decision Tree | Weekend + Hour | 3 | 5.62 | 77.24 | 8.79 | 0.43 | 111.97
Decision Tree | Weekend + Hour | 4 | 9.06 | 178.45 | 13.36 | 0.32 | 132.66
Decision Tree | Weekend + Hour | 5 | 9.5 | 239.62 | 15.48 | 0.33 | 228.87
Decision Tree | Weekend + Hour | 6 | 7.25 | 109.57 | 10.47 | 0.38 | 114.44
Random Forest | Day of Month + Hour | 1 | 7.9 | 142.58 | 11.94 | 0.37 | 138.45
Random Forest | Day of Month + Hour | 2 | 7.55 | 133.59 | 11.56 | 0.34 | 115.16
Random Forest | Day of Month + Hour | 3 | 5.88 | 81.35 | 9.02 | 0.4 | 115.56
Random Forest | Day of Month + Hour | 4 | 9.46 | 188.06 | 13.71 | 0.28 | 132.38
Random Forest | Day of Month + Hour | 5 | 9.86 | 245.61 | 15.67 | 0.31 | 232.86
Random Forest | Day of Month + Hour | 6 | 7.4 | 111.6 | 10.56 | 0.37 | 111.23
Random Forest | Day of Week + Hour | 1 | 7.83 | 140.54 | 11.86 | 0.38 | 138.45
Random Forest | Day of Week + Hour | 2 | 7.41 | 130.69 | 11.43 | 0.35 | 116.1
Random Forest | Day of Week + Hour | 3 | 5.8 | 80.36 | 8.96 | 0.41 | 115.57
Random Forest | Day of Week + Hour | 4 | 9.15 | 180.01 | 13.42 | 0.31 | 131.62
Random Forest | Day of Week + Hour | 5 | 9.61 | 241.94 | 15.55 | 0.32 | 231.05
Random Forest | Day of Week + Hour | 6 | 7.34 | 110.52 | 10.51 | 0.38 | 111.55
Random Forest | Holiday + Hour | 1 | 7.9 | 142.53 | 11.94 | 0.37 | 138.45
Random Forest | Holiday + Hour | 2 | 7.54 | 133.43 | 11.55 | 0.34 | 117.65
Random Forest | Holiday + Hour | 3 | 5.88 | 81.33 | 9.02 | 0.4 | 115.56
Random Forest | Holiday + Hour | 4 | 9.45 | 187.72 | 13.7 | 0.28 | 132.38
Random Forest | Holiday + Hour | 5 | 9.85 | 245.58 | 15.67 | 0.31 | 232.86
Random Forest | Holiday + Hour | 6 | 7.39 | 111.49 | 10.56 | 0.37 | 111.23
Random Forest | Weekend + Hour | 1 | 7.83 | 140.55 | 11.86 | 0.38 | 138.45
Random Forest | Weekend + Hour | 2 | 7.43 | 131.42 | 11.46 | 0.35 | 117.08
Random Forest | Weekend + Hour | 3 | 5.8 | 80.41 | 8.97 | 0.41 | 115.57
Random Forest | Weekend + Hour | 4 | 9.16 | 180.15 | 13.42 | 0.31 | 131.62
Random Forest | Weekend + Hour | 5 | 9.62 | 242.32 | 15.57 | 0.32 | 230.94
Random Forest | Weekend + Hour | 6 | 7.35 | 110.6 | 10.52 | 0.38 | 111.23

References

  1. Department of Economic and Social Affairs. United Nations. World Urbanization Prospects. The 2018 Revision. 2019. Available online: https://population.un.org/wup/Publications/Files/WUP2018-Report.pdf (accessed on 16 January 2024).
  2. Ashtari, A.; Tabrizi, J.S.; Rezapour, R.; Maleki, M.R.; Azami-Aghdash, S. Health Care Waste Management Improvement Interventions Specifications and Results: A Systematic Review and Meta-Analysis. Iran. J. Public Health 2020, 49, 1611–1621. [Google Scholar] [CrossRef] [PubMed]
  3. Somani, P. Health Impacts of Poor Solid Waste Management in the 21st Century. In Solid Waste Management—Recent Advances, New Trends and Applications; IntechOpen: London, UK, 2023. [Google Scholar] [CrossRef]
  4. Singh, M.; Singh, M.; Singh, S.K. Tackling municipal solid waste crisis in India: Insights into cutting-edge technologies and risk assessment. Sci. Total Environ. 2024, 917, 170453. [Google Scholar] [CrossRef] [PubMed]
  5. Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (European Commission); Grohol, M.; Veeh, C. Study on the Critical Raw Materials for the EU 2023: Final Report. Publications Office of the European Union. 2023. Available online: https://data.europa.eu/doi/10.2873/725585 (accessed on 11 March 2024).
  6. Rosanvallon, S.; Kanth, P.; Elbez-Uzan, J. Waste management strategy for EU DEMO: Status, challenges and perspectives. Fusion Eng. Des. 2024, 202, 114307. [Google Scholar] [CrossRef]
  7. Anuardo, R.G.; Espuny, M.; Costa, A.C.F.; Oliveira, O.J. Toward a cleaner and more sustainable world: A framework to develop and improve waste management through organizations, governments and academia. Heliyon 2022, 8, e09225. [Google Scholar] [CrossRef] [PubMed]
  8. Perkumienė, D.; Atalay, A.; Safaa, L.; Grigienė, J. Sustainable Waste Management for Clean and Safe Environments in the Recreation and Tourism Sector: A Case Study of Lithuania, Turkey and Morocco. Recycling 2023, 8, 4. [Google Scholar] [CrossRef]
  9. Hoy, Z.X.; Phuang, Z.X.; Farooque, A.A.; Fan, Y.V.; Woon, K.S. Municipal solid waste management for low-carbon transition: A systematic review of artificial neural network applications for trend prediction. Environ. Pollut. 2024, 344, 123386. [Google Scholar] [CrossRef] [PubMed]
  10. Kaur, E.A.M. Mathematical Modelling Of Municipal Solid Waste Management In Spherical Fuzzy Environment. Adv. Nonlinear Var. Inequalities 2023, 26, 47–64. [Google Scholar] [CrossRef]
  11. Zhao, J.; Li, X.; Chen, L.; Liu, W.; Wang, M. Scenario analysis of the eco-efficiency for municipal solid waste management: A case study of 211 cities in western China. Sci. Total Environ. 2024, 919, 170536. [Google Scholar] [CrossRef]
  12. Meng, T.; Shan, X.; Ren, Z.; Deng, Q. Analysis of Influencing Factors on Solid Waste Generation of Public Buildings in Tropical Monsoon Climate Region. Buildings 2024, 14, 513. [Google Scholar] [CrossRef]
  13. Ahmed, A.K.A.; Ibraheem, A.M.; Abd-Ellah, M.K. Forecasting of municipal solid waste multi-classification by using time-series deep learning depending on the living standard. Results Eng. 2022, 16, 100655. [Google Scholar] [CrossRef]
  14. Ferrão, C.C.; Moraes, J.A.R.; Fava, L.P.; Furtado, J.C.; Machado, E.; Rodrigues, A.; Sellitto, M.A. Optimizing routes of municipal waste collection: An application algorithm. Manag. Environ. Qual. Int. J. 2024; ahead-of-print. [Google Scholar] [CrossRef]
  15. Rekabi, S.; Sazvar, Z.; Goodarzian, F. A bi-objective sustainable vehicle routing optimization model for solid waste networks with internet of things. Supply Chain Anal. 2024, 5, 100059. [Google Scholar] [CrossRef]
  16. Mohammadi, M.; Rahmanifar, G.; Hajiaghaei-Keshteli, M.; Fusco, G.; Colombaroni, C. Industry 4.0 in waste management: An integrated IoT-based approach for facility location and green vehicle routing. J. Ind. Inf. Integr. 2023, 36, 100535. [Google Scholar] [CrossRef]
  17. Hu, Y.; Ju, Q.; Peng, T.; Zhang, S.; Wang, X. Municipal solid waste collection and transportation routing optimization based on iac-sfla. J. Environ. Eng. Landsc. Manag. 2024, 32, 31–44. [Google Scholar] [CrossRef]
  18. Hashemi, S.E. A fuzzy multi-objective optimization model for a sustainable reverse logistics network design of municipal waste-collecting considering the reduction of emissions. J. Clean. Prod. 2021, 318, 128577. [Google Scholar] [CrossRef]
  19. Ge, Z.; Zhang, D.; Lu, X.; Jia, X.; Li, Z. A Disjunctive Programming Approach for Sustainable Design of Municipal Solid Waste Management. Chem. Eng. Trans. 2023, 103, 283–288. [Google Scholar] [CrossRef]
  20. Ramadan, B.S.; Ardiansyah, S.Y.; Sendari, S.; Wibowo, Y.G.; Rachman, I.; Matsumoto, T. Optimization of municipal solid waste collection sites by an integrated spatial analysis approach in Semarang City. J. Mater. Cycles Waste Manag. 2024, 26, 1231–1242. [Google Scholar] [CrossRef]
  21. Dudar, I.; Yavorovska, O.; Cirella, G.T.; Buha, V.; Kuznetsova, M.; Iarmolenko, I.; Svitlychnyy, O.; Pankova, L. Enhancing Urban Solid Waste Management Through an Integrated Geographic Information System and Multicriteria Decision Analysis: A Case Study in Postwar Reconstruction. In Handbook on Post-War Reconstruction and Development Economics of Ukraine; Cirella, G.T., Ed.; Springer International Publishing: Cham, Switzerland, 2024; pp. 377–392. [Google Scholar] [CrossRef]
  22. Kolekar, K.; Hazra, T.; Chakrabarty, S. A Review on Prediction of Municipal Solid Waste Generation Models. Procedia Environ. Sci. 2016, 35, 238–244. [Google Scholar] [CrossRef]
  23. Singh, D.; Satija, A. Prediction of municipal solid waste generation for optimum planning and management with artificial neural network-case study: Faridabad City in Haryana State (India). Int. J. Syst. Assur. Eng. Manag. 2018, 9, 91–97. [Google Scholar] [CrossRef]
  24. Paulauskaite-Taraseviciene, A.; Raudonis, V.; Sutiene, K. Forecasting municipal solid waste in Lithuania by incorporating socioeconomic and geographical factors. Waste Manag. 2022, 140, 31–39. [Google Scholar] [CrossRef]
  25. Meza, J.K.S.; Yepes, D.O.; Rodrigo-Ilarri, J.; Cassiraga, E. Predictive analysis of urban waste generation for the city of Bogotá, Colombia, through the implementation of decision trees-based machine learning, support vector machines and artificial neural networks. Heliyon 2019, 5, e02810. [Google Scholar] [CrossRef]
  26. Li, Z.; Zheng, Z.; Washington, S. Short-Term Traffic Flow Forecasting: A Component-Wise Gradient Boosting Approach With Hierarchical Reconciliation. IEEE Trans. Intell. Transp. Syst. 2019, 21, 5060–5072. [Google Scholar] [CrossRef]
  27. Cai, R.; Xie, S.; Wang, B.; Yang, R.; Xu, D.; He, Y. Wind Speed Forecasting Based on Extreme Gradient Boosting. IEEE Access 2020, 8, 175063–175069. [Google Scholar] [CrossRef]
  28. Sun, X.; Liu, M.; Sima, Z. A novel cryptocurrency price trend forecasting model based on LightGBM. Financ. Res. Lett. 2018, 32, 101084. [Google Scholar] [CrossRef]
  29. Zhang, J.; Mucs, D.; Norinder, U.; Svensson, F. LightGBM: An Effective and Scalable Algorithm for Prediction of Chemical Toxicity–Application to the Tox21 and Mutagenicity Data Sets. J. Chem. Inf. Model. 2019, 59, 4150–4158. [Google Scholar] [CrossRef] [PubMed]
  30. Huang, G.; Wu, L.; Ma, X.; Zhang, W.; Fan, J.; Yu, X.; Zeng, W.; Zhou, H. Evaluation of CatBoost method for prediction of reference evapotranspiration in humid regions. J. Hydrol. 2019, 574, 1029–1041. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Zhao, Z.; Zheng, J. CatBoost: A new approach for estimating daily reference crop evapotranspiration in arid and semi-arid regions of Northern China. J. Hydrol. 2020, 588, 125087. [Google Scholar] [CrossRef]
  32. Hwang, J.-S.; Hu, T.-H. A stepwise regression algorithm for high-dimensional variable selection. J. Stat. Comput. Simul. 2014, 85, 1793–1806. [Google Scholar] [CrossRef]
  33. Burkholder, T.J.; Lieber, R.L. Stepwise regression is an alternative to splines for fitting noisy data. J. Biomech. 1996, 29, 235–238. [Google Scholar] [CrossRef]
  34. Heshmaty, B.; Kandel, A. Fuzzy linear regression and its applications to forecasting in uncertain environment. Fuzzy Sets Syst. 1985, 15, 159–191. [Google Scholar] [CrossRef]
  35. Nikolopoulos, K.; Goodwin, P.; Patelis, A.; Assimakopoulos, V. Forecasting with cue information: A comparison of multiple regression with alternative forecasting approaches. Eur. J. Oper. Res. 2007, 180, 354–368. [Google Scholar] [CrossRef]
  36. Li, B.-j.; He, C.-h. The combined forecasting method of GM(1,1) with linear regression and its application. In Proceedings of the 2007 IEEE International Conference on Grey Systems and Intelligent Services, Nanjing, China, 18–20 November 2007; pp. 394–398. [Google Scholar] [CrossRef]
  37. Heo, J.; Yang, J.Y. AdaBoost based bankruptcy forecasting of Korean construction companies. Appl. Soft Comput. 2014, 24, 494–499. [Google Scholar] [CrossRef]
  38. Mishra, S.; Mishra, D.; Santra, G.H. Adaptive boosting of weak regressors for forecasting of crop production considering climatic variability: An empirical assessment. J. King Saud Univ.-Comput. Inf. Sci. 2017, 32, 949–964. [Google Scholar] [CrossRef]
  39. Schapire, R.E.; Singer, Y. Improved Boosting Algorithms Using Confidence-rated Predictions. Mach. Learn. 1999, 37, 297–336. [Google Scholar] [CrossRef]
  40. Zhang, G.P. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003, 50, 159–175. [Google Scholar] [CrossRef]
  41. Khashei, M.; Bijari, M. A novel hybridization of artificial neural networks and ARIMA models for time series forecasting. Appl. Soft Comput. 2010, 11, 2664–2675. [Google Scholar] [CrossRef]
  42. Liang, Y.-H. Combining seasonal time series ARIMA method and neural networks with genetic algorithms for predicting the production value of the mechanical industry in Taiwan. Neural Comput. Appl. 2008, 18, 833–841. [Google Scholar] [CrossRef]
  43. Wong, F. Time series forecasting using backpropagation neural networks. Neurocomputing 1991, 2, 147–159. [Google Scholar] [CrossRef]
  44. Hill, T.; Marquez, L.; O’Connor, M.; Remus, W. Artificial neural network models for forecasting and decision making. Int. J. Forecast. 1994, 10, 5–15. [Google Scholar] [CrossRef]
  45. Nimon, K.F.; Oswald, F.L. Understanding the Results of Multiple Linear Regression: Beyond Standardized Regression Coefficients. Organ. Res. Methods 2013, 16, 650–674. [Google Scholar] [CrossRef]
  46. Saber, A.Y.; Alam, A.K.M.R. Short term load forecasting using multiple linear regression for big data. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
  47. Gutierrez, P.A.; Perez-Ortiz, M.; Sanchez-Monedero, J.; Fernandez-Navarro, F.; Hervas-Martinez, C. Ordinal Regression Methods: Survey and Experimental Study. IEEE Trans. Knowl. Data Eng. 2015, 28, 127–146. [Google Scholar] [CrossRef]
  48. Taillardat, M.; Mestre, O.; Zamo, M.; Naveau, P. Calibrated Ensemble Forecasts Using Quantile Regression Forests and Ensemble Model Output Statistics. Mon. Weather Rev. 2016, 144, 2375–2393. [Google Scholar] [CrossRef]
  49. Molinder, J.; Scher, S.; Nilsson, E.; Körnich, H.; Bergström, H.; Sjöblom, A. Probabilistic Forecasting of Wind Turbine Icing Related Production Losses Using Quantile Regression Forests. Energies 2020, 14, 158. [Google Scholar] [CrossRef]
  50. Ilic, I.; Görgülü, B.; Cevik, M.; Baydoğan, M.G. Explainable boosted linear regression for time series forecasting. Pattern Recognit. 2021, 120, 108144. [Google Scholar] [CrossRef]
  51. De’Ath, G. Boosted trees for ecological modeling and prediction. Ecology 2007, 88, 243–251. [Google Scholar] [CrossRef] [PubMed]
  52. Elith, J.; Leathwick, J.R.; Hastie, T. A working guide to boosted regression trees. J. Anim. Ecol. 2008, 77, 802–813. [Google Scholar] [CrossRef]
  53. Preminger, A.; Franck, R. Forecasting exchange rates: A robust regression approach. Int. J. Forecast. 2006, 23, 71–84. [Google Scholar] [CrossRef]
  54. Ikeuchi, K. (Ed.) Robust Regression; Computer Vision; Springer: Boston, MA, USA, 2014; p. 697. [Google Scholar] [CrossRef]
  55. Bonnabel, S. Stochastic Gradient Descent on Riemannian Manifolds. IEEE Trans. Autom. Control. 2013, 58, 2217–2229. [Google Scholar] [CrossRef]
  56. Mercier, Q.; Poirion, F.; Désidéri, J.-A. A stochastic multiple gradient descent algorithm. Eur. J. Oper. Res. 2018, 271, 808–817. [Google Scholar] [CrossRef]
  57. Ulvila, J.W. Decision trees for forecasting. J. Forecast. 1985, 4, 377–385. [Google Scholar] [CrossRef]
  58. Decision Tree Methods: Applications for Classification and Prediction. Shanghai Archives of Psychiatry. Available online: https://shanghaiarchivesofpsychiatry.org/en/215044.html (accessed on 25 May 2024).
  59. Sokolov, A.; Carlin, D.E.; Paull, E.O.; Baertsch, R.; Stuart, J.M. Pathway-Based Genomics Prediction using Generalized Elastic Net. PLoS Comput. Biol. 2016, 12, e1004790. [Google Scholar] [CrossRef]
  60. Liu, W.; Dou, Z.; Wang, W.; Liu, Y.; Zou, H.; Zhang, B.; Hou, S. Short-Term Load Forecasting Based on Elastic Net Improved GMDH and Difference Degree Weighting Optimization. Appl. Sci. 2018, 8, 1603. [Google Scholar] [CrossRef]
  61. Parussini, L.; Venturi, D.; Perdikaris, P.; Karniadakis, G. Multi-fidelity Gaussian process regression for prediction of random fields. J. Comput. Phys. 2017, 336, 36–50. [Google Scholar] [CrossRef]
  62. Fang, D.; Zhang, X.; Yu, Q.; Jin, T.C.; Tian, L. A novel method for carbon dioxide emission forecasting based on improved Gaussian processes regression. J. Clean. Prod. 2018, 173, 143–150. [Google Scholar] [CrossRef]
  63. Sun, B.; Cheng, W.; Goswami, P.; Bai, G. Short-term traffic forecasting using self-adjusting k-nearest neighbours. IET Intell. Transp. Syst. 2017, 12, 41–48. [Google Scholar] [CrossRef]
  64. Yang, D.; Ye, Z.; Lim, L.H.I.; Dong, Z. Very short term irradiance forecasting using the lasso. Sol. Energy 2015, 114, 314–326. [Google Scholar] [CrossRef]
  65. Ranstam, J.; Cook, J.A. LASSO regression. Br. J. Surg. 2018, 105, 1348. [Google Scholar] [CrossRef]
  66. Ben Bouallègue, Z. Calibrated Short-Range Ensemble Precipitation Forecasts Using Extended Logistic Regression with Interaction Terms. Weather Forecast. 2013, 28, 515–524. [Google Scholar] [CrossRef]
  67. Stoltzfus, J.C. Logistic Regression: A Brief Primer. Acad. Emerg. Med. 2011, 18, 1099–1104. [Google Scholar] [CrossRef] [PubMed]
  68. Davig, T.; Hall, A.S. Recession forecasting using Bayesian classification. Int. J. Forecast. 2019, 35, 848–867. [Google Scholar] [CrossRef]
  69. Aditya, E.; Situmorang, Z.; Hayadi, B.H.; Zarlis, M.; Wanayumini. New Student Prediction Using Algorithm Naive Bayes And Regression Analysis In Universitas Potensi Utama. In Proceedings of the 2022 4th International Conference on Cybernetics and Intelligent System (ICORIS), Prapat, Indonesia, 8–9 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  70. Xu, M.; Pinson, P.; Lu, Z.; Qiao, Y.; Min, Y. Adaptive robust polynomial regression for power curve modeling with application to wind power forecasting. Wind Energy 2016, 19, 2321–2336. [Google Scholar] [CrossRef]
  71. Regonda, S.; Rajagopalan, B.; Lall, U.; Clark, M.; Moon, Y.-I. Local polynomial method for ensemble forecast of time series. Nonlinear Process. Geophys. 2005, 12, 397–406. [Google Scholar] [CrossRef]
  72. Yelland, L.N.; Salter, A.B.; Ryan, P. Performance of the Modified Poisson Regression Approach for Estimating Relative Risks From Clustered Prospective Data. Am. J. Epidemiol. 2011, 174, 984–992. [Google Scholar] [CrossRef] [PubMed]
  73. Frome, E.L. The Analysis of Rates Using Poisson Regression Models. Biometrics 1983, 39, 665–674. [Google Scholar] [CrossRef] [PubMed]
  74. Dudek, G. A Comprehensive Study of Random Forest for Short-Term Load Forecasting. Energies 2022, 15, 7547. [Google Scholar] [CrossRef]
  75. Tyralis, H.; Papacharalampous, G. Variable Selection in Time Series Forecasting Using Random Forests. Algorithms 2017, 10, 114. [Google Scholar] [CrossRef]
  76. Ziegler, A.; König, I.R. Mining data with random forests: Current options for real-world applications. WIREs Data Min. Knowl. Discov. 2013, 4, 55–63. [Google Scholar] [CrossRef]
  77. Peña, M.; Dool, H.v.D. Consolidation of Multimodel Forecasts by Ridge Regression: Application to Pacific Sea Surface Temperature. J. Clim. 2008, 21, 6521–6538. [Google Scholar] [CrossRef]
  78. McDonald, G.C. Ridge regression. WIREs Comput. Stat. 2009, 1, 93–100. [Google Scholar] [CrossRef]
  79. Hoerl, A.E.; Kennard, R.W. Ridge Regression: Applications to Nonorthogonal Problems. Technometrics 1970, 12, 69–82. [Google Scholar] [CrossRef]
  80. Hao, W.; Yu, S. Support Vector Regression for Financial Time Series Forecasting. In Knowledge Enterprise: Intelligent Strategies in Product Design, Manufacturing, and Management; Wang, K., Kovacs, G.L., Wozny, M., Fang, M., Eds.; Springer: Boston, MA, USA, 2006; pp. 825–830. [Google Scholar] [CrossRef]
  81. Bao, Y.; Xiong, T.; Hu, Z. Multi-step-ahead time series prediction using multiple-output support vector regression. Neurocomputing 2014, 129, 482–493. [Google Scholar] [CrossRef]
  82. Singh, T.; Uppaluri, R.V.S. Machine learning tool-based prediction and forecasting of municipal solid waste generation rate: A case study in Guwahati, Assam, India. Int. J. Environ. Sci. Technol. 2022, 20, 12207–12230. [Google Scholar] [CrossRef]
Figure 1. Real values, predictions from Decision Tree and KNN algorithms, and errors from both algorithms’ predictions over 3 days (day of the month + hour).
Figure 2. Average MAE for Decision Tree across the selected scenarios and DPs.
Figure 3. Clustering of one day’s data from a DP divided into ten clusters.
Figure 4. Clustering of one day’s data from a DP divided into five clusters.
Figure 5. Clustering of one day’s data from a DP divided into two clusters.
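Figures 3–5 show one day of data from a single DP partitioned into ten, five, and two clusters. The snippet below is a minimal sketch of such a partition using k-means; the choice of k-means and the synthetic hourly fill values are illustrative assumptions, not the study's actual clustering setup.

```python
# Illustrative sketch: clustering one day of hourly readings from a single DP
# into k clusters (k = 10, 5, 2 as in Figures 3-5). k-means and the synthetic
# fill values are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24).reshape(-1, 1)               # hour of day
fill = rng.integers(0, 1600, size=(24, 1))         # placeholder fill increments
day = np.hstack([hours, fill]).astype(float)

for k in (10, 5, 2):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(day)
    print(f"k={k}:", labels)
```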
Table 1. Initial structure from the data.
Equip ID | Date | Real Increment
63 | 2019-jan-08 00:00:00 | 0
63 | 2019-jan-08 13:20:00 | 1400
63 | 2019-jan-08 13:30:00 | 1500
63 | 2019-jan-08 14:00:00 | 1400
63 | 2019-jan-08 14:35:00 | 0
63 | 2019-jan-08 16:25:00 | 1400
63 | 2019-jan-08 17:20:00 | 1500
63 | 2019-jan-08 17:40:00 | 0
63 | 2019-jan-08 18:30:00 | 1400
63 | 2019-jan-08 18:35:00 | 0
Table 2. Initial date field decomposition from the records.
Date | Hour | Equip ID | Real Increment | Datetime | Year | Month | Day
2019-jan-08 | 0 | 63 | 0 | 2019-jan-08 T00:00 | 2019 | 1 | 8
2019-jan-08 | 1 | 63 | 0 | 2019-jan-08 T01:00 | 2019 | 1 | 8
2019-jan-08 | 2 | 63 | 0 | 2019-jan-08 T02:00 | 2019 | 1 | 8
2019-jan-08 | 3 | 63 | 0 | 2019-jan-08 T03:00 | 2019 | 1 | 8
2019-jan-08 | 4 | 63 | 0 | 2019-jan-08 T04:00 | 2019 | 1 | 8
2019-jan-08 | 5 | 63 | 0 | 2019-jan-08 T05:00 | 2019 | 1 | 8
2019-jan-08 | 6 | 63 | 0 | 2019-jan-08 T06:00 | 2019 | 1 | 8
2019-jan-08 | 7 | 63 | 0 | 2019-jan-08 T07:00 | 2019 | 1 | 8
2019-jan-08 | 8 | 63 | 0 | 2019-jan-08 T08:00 | 2019 | 1 | 8
2019-jan-08 | 9 | 63 | 0 | 2019-jan-08 T09:00 | 2019 | 1 | 8
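Tables 1 and 2 illustrate the raw discharge records and their hourly, calendar-decomposed counterpart. The following pandas sketch reproduces that kind of transformation under the assumption that increments are summed per hour; the aggregation rule and column handling actually used in the study may differ.

```python
# Illustrative pandas sketch (not the study's code) of turning the raw records of
# Table 1 into the hourly rows of Table 2. Summing increments per hour is an
# assumption; only a few placeholder records are shown.
import pandas as pd

raw = pd.DataFrame({
    "Equip ID": [63, 63, 63],
    "Date": ["2019-01-08 00:00:00", "2019-01-08 13:20:00", "2019-01-08 13:30:00"],
    "Real Increment": [0, 1400, 1500],
})
raw["Date"] = pd.to_datetime(raw["Date"])

# one row per DP and hour, with increments aggregated within the hour
hourly = (raw.set_index("Date")
             .groupby("Equip ID")["Real Increment"]
             .resample("h").sum()
             .reset_index())

hourly["Datetime"] = hourly["Date"].dt.floor("h")
hourly["Year"] = hourly["Date"].dt.year
hourly["Month"] = hourly["Date"].dt.month
hourly["Day"] = hourly["Date"].dt.day
hourly["Hour"] = hourly["Date"].dt.hour
print(hourly.head(10))
```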
Table 3. Average results regarding the predictions from all the cases studied.
Statistic | MAE | MSE | RMSE | R2 | ME
Average | 11.24 | 388.89 | 16.46 | −0.8 | 152.43
Min value | 5.6 | 76.57 | 8.75 | −61.21 | 109.3
Max value | 70.43 | 11,013.97 | 104.95 | 0.44 | 490
Deviation | 6.919 | 962.841 | 10.881 | 5.149 | 57.405
Table 4. Average results for predictions from the different forecast algorithms applied to the various cases of the DPs studied.
Algorithm | MAE | MSE | RMSE | R2 | ME
Decision Tree | 7.84 | 147.54 | 11.96 | 0.36 | 140.94
Elastic Net | 11.2 | 221.17 | 14.7 | 0.03 | 147.75
Gaussian Process | 7.85 | 147.56 | 11.96 | 0.36 | 140.94
KNN | 8.8 | 183.98 | 13.32 | 0.2 | 142.53
Lasso | 11.18 | 220.99 | 14.69 | 0.03 | 147.85
Linear Regression | 11.18 | 220.99 | 14.69 | 0.03 | 147.86
Logistic Regression | 12.28 | 372.23 | 19.07 | −0.64 | 161.33
Naïve Bayes | 27.94 | 2443.97 | 44.73 | −10.81 | 222.83
Polynomial Regression | 9.49 | 178.78 | 13.18 | 0.22 | 143.24
Random Forest | 7.93 | 148.93 | 12.02 | 0.35 | 140.96
Ridge | 11.18 | 220.99 | 14.69 | 0.03 | 147.86
SVR | 8.02 | 159.55 | 12.48 | 0.3 | 145.11
Table 5. Average results for predictions grouped by selected forecast algorithms and selected DPs.
Algorithm | DP | MAE | MSE | RMSE | R2 | ME
Decision Tree | 1 | 7.74 | 138.31 | 11.76 | 0.39 | 139.08
Decision Tree | 2 | 7.4 | 131.32 | 11.46 | 0.35 | 116.83
Decision Tree | 3 | 5.72 | 78.81 | 8.88 | 0.42 | 112.54
Decision Tree | 4 | 9.24 | 183.14 | 13.53 | 0.3 | 132.89
Decision Tree | 5 | 9.65 | 242.86 | 15.58 | 0.32 | 230.1
Decision Tree | 6 | 7.31 | 110.83 | 10.53 | 0.37 | 114.22
Random Forest | 1 | 7.86 | 141.55 | 11.9 | 0.38 | 138.45
Random Forest | 2 | 7.48 | 132.28 | 11.5 | 0.34 | 116.5
Random Forest | 3 | 5.84 | 80.86 | 8.99 | 0.4 | 115.56
Random Forest | 4 | 9.3 | 183.98 | 13.56 | 0.3 | 132
Random Forest | 5 | 9.74 | 243.86 | 15.62 | 0.32 | 231.93
Random Forest | 6 | 7.37 | 111.05 | 10.54 | 0.38 | 111.31
Table 6. Average results for predictions grouped by selected forecast algorithms and situations studied.
Algorithm | Case Studied | MAE | MSE | RMSE | R2 | ME
Decision Tree | Day of Month + Hour | 8.04 | 152.78 | 12.17 | 0.34 | 141.41
Decision Tree | Day of Week + Hour | 7.68 | 143.61 | 11.79 | 0.38 | 141.16
Decision Tree | Holiday + Hour | 7.93 | 148.91 | 12.01 | 0.36 | 140.53
Decision Tree | Weekend + Hour | 7.73 | 144.88 | 11.85 | 0.37 | 140.67
Random Forest | Day of Month + Hour | 8.01 | 150.46 | 12.08 | 0.34 | 140.94
Random Forest | Day of Week + Hour | 7.86 | 147.34 | 11.96 | 0.36 | 140.72
Random Forest | Holiday + Hour | 8 | 150.35 | 12.07 | 0.34 | 141.36
Random Forest | Weekend + Hour | 7.86 | 147.57 | 11.97 | 0.36 | 140.82
Table 7. Top 10 prediction results for the selected DPs using the Decision Tree and Random Forest algorithms, in descending order of R2.
Algorithm | Case Studied | DP | MAE | MSE | RMSE | R2 | ME
Decision Tree | Day of Week + Hour | 3 | 5.6 | 76.58 | 8.75 | 0.44 | 111.24
Decision Tree | Weekend + Hour | 3 | 5.62 | 77.24 | 8.79 | 0.43 | 111.97
Decision Tree | Holiday + Hour | 3 | 5.78 | 79.79 | 8.93 | 0.42 | 114.22
Decision Tree | Day of Week + Hour | 1 | 7.56 | 133.46 | 11.55 | 0.41 | 138.76
Random Forest | Day of Week + Hour | 3 | 5.8 | 80.36 | 8.96 | 0.41 | 115.57
Random Forest | Weekend + Hour | 3 | 5.8 | 80.41 | 8.97 | 0.41 | 115.57
Decision Tree | Weekend + Hour | 1 | 7.65 | 135.29 | 11.63 | 0.4 | 138.98
Decision Tree | Day of Month + Hour | 3 | 5.87 | 81.61 | 9.03 | 0.4 | 112.71
Random Forest | Day of Month + Hour | 3 | 5.88 | 81.35 | 9.02 | 0.4 | 115.56
Random Forest | Holiday + Hour | 3 | 5.88 | 81.33 | 9.02 | 0.4 | 115.56
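Tables 5–7 are grouped averages and an R2 ranking of the per-case results. Assuming a results DataFrame shaped like Table A4, with hypothetical column names Algorithm, Case, DP, MAE, MSE, RMSE, R2, and ME, summaries of this kind could be derived as sketched below.

```python
# Illustrative sketch: deriving Table 5/6-style grouped averages and the
# Table 7-style R2 ranking from a per-case results DataFrame. The column
# names are hypothetical, mirroring Table A4.
import pandas as pd

def summarise(results: pd.DataFrame) -> None:
    by_dp = results.groupby(["Algorithm", "DP"], as_index=False).mean(numeric_only=True).round(2)      # cf. Table 5
    by_case = results.groupby(["Algorithm", "Case"], as_index=False).mean(numeric_only=True).round(2)   # cf. Table 6
    top10 = results.sort_values("R2", ascending=False).head(10)                                         # cf. Table 7
    print(by_dp, by_case, top10, sep="\n\n")
```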
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
