Article

Transforming Urban Sanitation: Enhancing Sustainability through Machine Learning-Driven Waste Processing

by Dhanvanth Kumar Gude 1, Harshavardan Bandari 1, Anjani Kumar Reddy Challa 1, Sabiha Tasneem 2, Zarin Tasneem 2, Shyama Barna Bhattacharjee 3, Mohit Lalit 1, Miguel Angel López Flores 4,5,6 and Nitin Goyal 7,*
1 Apex Institute of Technology (AIT-CSE), Chandigarh University, Mohali 140413, Panjab, India
2 Department of Allied Sciences, Faculty of Science, Engineering and Technology (FSET), University of Science and Technology Chittagong (USTC), Chattogram 4220, Bangladesh
3 Department of Computer Science and Engineering, Faculty of Science, Engineering and Technology (FSET), University of Science and Technology Chittagong (USTC), Chattogram 4220, Bangladesh
4 Engineering Research & Innovation Group, Universidad Europea del Atlántico, C/Isabel Torres 21, 39011 Santander, Spain
5 Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
6 Instituto Politécnico Nacional, Unidad Profesional Interdisciplinaria de Ingeniería y Ciencias Sociales y Administrativas (UPIICSA), Ciudad de México 04510, Mexico
7 Department of Computer Science and Engineering, School of Engineering and Technology, Central University of Haryana, Mahendergarh 123031, Haryana, India
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(17), 7626; https://doi.org/10.3390/su16177626
Submission received: 17 July 2024 / Revised: 20 August 2024 / Accepted: 26 August 2024 / Published: 3 September 2024

Abstract

The enormous increase in the volume of waste caused by the population boom in cities is placing a considerable burden on municipal waste processing. The inefficiency and high costs of conventional approaches exacerbate the risks to the environment and human health. This article proposes a thorough approach that combines deep learning models, IoT technologies, and easily accessible resources to overcome these challenges. Our main goal is to advance a framework for intelligent waste processing that utilizes Internet of Things sensors and deep learning algorithms. The proposed framework is based on a Raspberry Pi 4 with a camera module and TensorFlow Lite version 2.13 and enables the classification and categorization of trash in real time. When trash objects are identified, a servo motor mounted on a plastic plate ensures that the trash is sorted into the appropriate compartment based on the model’s classification. This strategy aims to reduce overall health risks in urban areas by improving waste sorting techniques, monitoring the condition of garbage cans, and promoting sanitation through efficient waste separation. By streamlining waste handling processes and enabling the recovery of recyclable materials, this framework contributes to a more sustainable waste management system.

1. Introduction

As urban populations grow, cities are faced with various problems, with waste disposal becoming one of the biggest issues. The World Bank estimated that in 2016, urban population growth and economic expansion produced 2.01 billion metric tons of garbage. By 2050, the figure is expected to rise to 3.40 billion metric tons, as Figure 1 shows [1]. In 2016, EUROSTAT reported that 24% of the 179 million metric tons of waste produced domestically in the EU was disposed of in landfills, while 423 million tons, representing 56% of the total, was recycled [2].
Waste processing, as shown in Figure 2, involves several steps, including the collection, transportation, monitoring, and regulation of waste disposal. Urban and rural regions have different waste processing methods. Reuse and recycling are often the most efficient ways to handle the waste generated [3]. However, effective waste processing methods require considerable effort to implement, as well as the involvement of both the public and the authorities [4].
In addition to its negative effect on the environment, improper waste disposal poses a major threat to the health of urban and rural dwellers. Inadequate waste processing techniques can release hazardous chemicals, toxins, and pathogens into the environment, contaminating the soil, water supply, and air [5,6]. For communities that settle near landfills, this contamination can cause a variety of health problems, and contact with contaminated water sources can lead to waterborne diseases [7]. Agricultural produce can also become contaminated, leading to food-borne infections and nutritional deficiencies among locals [8]. In addition, rodents, flies, and mosquitoes are attracted to open garbage dumps and landfills, amplifying the threat of transmission of diseases like dengue fever, malaria, and the Zika virus [9]. To counter these health risks, governments must take coordinated action to improve waste processing infrastructure, optimize waste collection and disposal techniques, and support recycling and reuse programs [7,10].
The Internet of Things (IoT) is transforming technology by integrating intelligence into existing systems, significantly affecting a range of industries, including engineering [11,12,13]. The IoT encompasses a network of devices connected via the Internet, allowing them to communicate and share data with each other [14]. With approximately 127 new devices connecting to the Internet every second and a projected market volume of USD 650.5 billion by 2026 [4,15,16], the IoT plays a crucial role in modern society. By integrating real-world entities into a unified infrastructure, the IoT enables control of and access to physical objects [13]. This integration offers significant opportunities for waste recycling systems to improve resource efficiency, ease of use, and processing convenience while reducing waste volumes through recycling initiatives. Over the last decade, the IoT has seen a revolutionary integration into waste processing [14].
Technology has been significantly shaped by machine learning (ML), a subfield of Artificial Intelligence (AI) that emphasizes creating algorithms and models capable of learning from data autonomously, whether labeled or unlabeled, rather than relying on explicit programming [17]. This development has advanced to the point where certain models and algorithms can even anticipate scientific breakthroughs. According to a recent report by Tractica [15], the market for AI and machine learning technologies was valued at USD 1.4 billion in 2016 and is expected to reach USD 75.35 billion by 2025. The improved accuracy of deep neural networks in several machine learning and pattern recognition milestones shows how beneficial machine learning applications can be [18].
Thanks to deep learning, a domain of machine learning involving advanced neural networks designed to handle complex data representations and abstractions, computers now possess the capability to analyze information on multiple hierarchical levels [19]. These tools are driving breakthroughs in areas such as genetics, speech analysis, visual recognition, and drug discovery [4]. The Convolutional Neural Network (CNN) is a type of deep learning approach that is very useful for tasks such as identifying objects and processing images. By accurately detecting waste and conserving resources, integrating CNNs into a waste processing system can increase the efficiency of waste categorization and reduce global waste production [20,21]. This article presents an intelligent waste processing framework that can distinguish among distinct classes of waste and sort them into the correct bin after identification and classification. To support reliable object recognition, SSD MobileNetV2 is utilized, an advanced detection framework with an optimized neural network structure for rapid processing on mobile and embedded platforms. This system is integrated with LoRaWAN, a low-power, wide-area network solution for extended-range communication, and IoT sensors, which together enable the transmission of garbage bin status over significant distances [4,22,23].
This study addresses a critical research gap in waste processing through two primary advancements. Firstly, the scope of research is extended by incorporating a larger number of classification categories than those used in prior studies. Secondly, the study introduces a refined balanced model that enhances the performance of existing models. This broader classification enables a more comprehensive analysis and understanding of the waste processing problem. While many previous studies have not fully addressed issues of class imbalance, this approach ensures fair representation and consistent performance across all categories. These improvements lead to more accurate and reliable outcomes, thereby advancing the field of waste processing and contributing to the development of more effective and practical solutions.
The proposed framework has profound social implications, addressing key challenges in waste processing and environmental sustainability. By integrating advanced technologies such as SSD MobileNetV2, TensorFlow Lite, and LoRaWAN, this framework provides a highly accurate and efficient solution for waste classification and management.
This framework enables the real-time monitoring and processing of waste, which is essential to improving waste segregation practices. The automated identification and classification of waste reduce dependency on manual sorting, which is often labor-intensive, error-prone, and hazardous. By ensuring accurate segregation at the source, the framework promotes higher recycling rates, thereby lessening the burden on landfills and minimizing environmental pollution.
Furthermore, the framework’s use of cost-effective and accessible components such as the Raspberry Pi and ultrasonic sensors makes it a scalable solution suitable for widespread implementation, particularly in urban areas and developing regions where waste processing infrastructure may be lacking. The integration of LoRaWAN for remote monitoring facilitates efficient communication over long distances, which is vital to processing waste in large or remote areas, thereby enhancing overall waste collection efficiency [16].
This article consists of five sections. Section 2 evaluates various technologies, such as data analytics, sensor networks, and the Internet of Things (IoT), in the context of smart waste recycling: sensor networks monitor fill levels and conditions, data analytics optimizes routes and predicts maintenance tasks, and the IoT connects these systems for seamless coordination, increasing the efficiency of waste processing. In addition, the analysis identifies gaps in the existing literature that our study aims to address and reviews previous research findings. Section 3 explains the approach for the design, development, and execution of the proposed smart waste processing framework, as well as the steps for setting up the experiment and collecting data. Section 4 presents the results of the study along with a detailed analysis of the data collected, including quantitative and qualitative assessments of the effectiveness, efficiency, and performance of the model in handling waste. Section 5 presents the conclusions and discusses the future scope of this work.

2. Literature Review

The literature review presented in Table 1 is intended to provide a general overview of intelligent waste processing frameworks and to shed light on their background, underlying technologies, deployment strategies, and results. The aim of the study is to clarify the intricate features of smart waste processing frameworks and explore how they can revolutionize waste processing techniques and promote sustainable urban development through a critical examination of empirical research, theoretical frameworks, and real-life case studies.
According to the evaluated articles, the suggested waste processing solutions are not adequate to address the main issues that cities confront.
Challenges:
  • Single-task focus: Many proposed frameworks are designed to perform a single task, such as tracking trash levels, and lack features that alert administrators to important events or problems.
  • Limited communication range: Frameworks that depend on short-range protocols such as Wi-Fi face data-transmission constraints that limit their usefulness in larger areas or outdoor settings.
  • Sparse datasets: Most of the proposed techniques struggle with sparse datasets that have few classes for trash classification. The absence of data makes it challenging to provide useful information and identify trash accurately.
  • Recycling inadequacy: In certain frameworks, the urgent problem of recycling is not sufficiently handled because of insufficient trash classification and categorization. This shortfall risks worsening the environmental problems related to waste processing.
To address these gaps, the proposed framework utilizes ultrasonic sensors to monitor trash levels, triggering alerts for administrators when these levels exceed a predefined threshold and enabling prompt intervention. LoRaWAN technology is employed for efficient data transmission, ensuring reliable communication between the waste processing framework and the cloud server. Furthermore, this research study introduces an expanded classification of waste categories, offering a more detailed understanding and processing of waste, which enhances recycling efficiency and overall waste processing practices.
In conclusion, several frameworks offer novel methods for trash monitoring. However, in order to achieve complete and effective waste processing solutions, issues like single-task focus, communication-range limitations, dataset quality, and inadequacies in recycling strategies must be addressed.

3. Proposed Methodology

The approach used in creating and deploying the smart waste processing framework, which integrates IoT and ML techniques, comprises various essential stages, each enhancing the framework’s functionality and efficiency.

3.1. Framework Model Design

The layout and dimensions of the bin, as illustrated in Figure 3, reveal a total of nine compartments, with the uppermost compartment specifically allocated for the accommodation of electrical components. The bin’s overall dimensions stand at 180 cm in height, with breadth and width each measuring 70 cm. Each compartment occupies a vertical space of 20 cm. At the summit of the bin, the 70 cm breadth is strategically divided into two distinct sections, where 30 cm is assigned to the provisional placement of waste, while the remaining 40 cm is dedicated to the compartment housing the electronic components.
Most of the electrical components are located in the uppermost part, which is often referred to as the chamber for the electrical components. This chamber is secured with an RFID-based lock so that the sensitive components are only accessible to authorized staff carrying registered RFID tags, reducing the possibility of tampering or illegal access. In the event of theft or loss, the unique identifiers on the RFID tags can help to recover the stolen components. Various types of waste are stored in the other compartments. When waste is tipped into the temporary bin, a Raspberry Pi recognizes this and transports the waste to the appropriate compartment by using servomotors.
As shown in Figure 4, the images of garbage are taken by a Pi camera integrated into a Raspberry Pi 4. This Pi camera is located under the roof of the garbage can in which the waste is disposed of, so that other components are not exposed to environmental influences such as weather conditions and light fluctuations. The images are then classified by using a machine learning algorithm in the garbage can itself. After classification, the waste is directed to the appropriate compartment. The classified data and the fill levels of the bins are sent to servers, and the recyclable waste is then sent to the industry for recycling.
Since the framework relies on a Raspberry Pi 4 and various sensors and actuators, power consumption is a crucial consideration for a self-sufficient remote waste processing framework. The total power consumption of all components, calculated by using Equation (1) with the per-component figures listed in Table 2 over a 24 h period, is 25.575 watts per day.
$\mathrm{Total\ Power\ Consumption} = P_{\mathrm{Raspberry\ Pi\ 4}} + P_{\mathrm{Pi\ Camera}} + P_{\mathrm{Servo\ Motors}} + P_{\mathrm{Ultrasonic\ Sensor}}$
The framework is designed for long-term use with minimal maintenance. Components such as the Raspberry Pi and the servo motors have a life expectancy of several years with regular operation [30]. A maintenance schedule with quarterly inspections and troubleshooting procedures ensures minimal downtime. The containers are designed to withstand various environmental conditions, reducing the risk of damage from weather or vandalism. The main focus of implementing an intelligent waste processing framework is to monitor the condition of the garbage cans and to classify waste. The development of the CNN-based framework for object recognition includes four main steps: deploying TensorFlow Lite on TensorFlow, creating the architecture and model for object recognition, acquiring the dataset, and incorporating the trained model into a physical device application [20]. Figure 5 represents the process of sorting and classifying waste. The setup includes a Raspberry Pi, an ultrasonic sensor, a camera module, and servo motors. The CNN-based model for object recognition and the corresponding hardware components are added to this configuration.

3.2. Object Detection Model

For deployment on low-power devices, TensorFlow Lite is preferred over TensorFlow. The main reason is that almost all models trained with TensorFlow require a powerful Graphics Processing Unit (GPU) to accomplish object detection [4], whereas a strong GPU is not required for the design of a smart garbage can [29]. With TensorFlow Lite, the Raspberry Pi and other energy-efficient mobile devices can run object detection models [28]. The COCO-trained Quantized SSD MobileNetV2 300 × 300 model, compatible with TensorFlow, was chosen for object detection. As can be seen in Figure 6, SSD, the Single-Shot Multibox Detector, was developed with a focus on real-time object detection, resulting in significantly faster performance and lower CPU utilization. SSD partitions the image into a grid, with each grid cell responsible for identifying objects in its region. Detecting an item in a given area amounts to predicting its class and position [32]; if no object is present, the location is disregarded and the cell is assigned to the background class. The training process of SSD MobileNet involves several steps, including data preparation, architecture setup, and optimization [32]. Accuracy is maintained while efficiency is improved by simplifying the layers.
Single-Shot Multibox Detector is a powerful and efficient model for object detection, striking a balance between speed and accuracy.
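To make the deployment path concrete, the following is a minimal sketch of running a quantized SSD detection model with the TensorFlow Lite interpreter on a Raspberry Pi. The model file name, image file, and 0.5 score threshold are placeholders, and the output tensor ordering shown is the common convention for SSD detection models rather than a guarantee for every export.

```python
# Minimal TensorFlow Lite inference sketch (placeholder file names and threshold).
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # on a full TensorFlow install, tf.lite.Interpreter works the same way

interpreter = Interpreter(model_path="ssd_mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Prepare one 300 x 300 frame, e.g., captured by the Pi camera.
frame = Image.open("frame.jpg").convert("RGB").resize((300, 300))
input_data = np.expand_dims(np.asarray(frame, dtype=np.uint8), axis=0)

interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# Common SSD detection outputs: bounding boxes, class indices, and scores.
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(f"Detected class {int(cls)} with confidence {score:.2f} at {box}")
```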

3.3. Single-Shot Multibox Detector

Single-Shot Multibox Detector (SSD) Algorithm 1 is a groundbreaking approach to object detection that offers real-time performance and high accuracy [32]. This algorithm revolutionizes the field with its ability to efficiently detect objects of different scales and aspect ratios in a single pass.
Algorithm 1: SSD
1. Define the SSD function with parameters (input_shape, num_classes).
2. base_model = MobileNetV2(input_shape, include_top=False)  // instantiate MobileNetV2 with the provided input_shape, excluding the top layers
3. For each layer in base_model.layers: set layer.trainable = False  // freeze all layers of the base model
4. feature_layers = [selected output layers from base_model]
5. conv_layers = []
6. For each layer in feature_layers:
   a. conv_layer = apply_convolution(layer)  // apply a convolution operation to each feature layer
   b. conv_layers.append(conv_layer)
7. prediction_layers = []  // initialize prediction_layers as an empty list
8. For each conv_layer in conv_layers:
   a. Resize conv_layer to (30, 30) using a lambda function and store the result in x
   b. Apply a 1 × 1 convolutional layer with ReLU activation to x
   c. Apply a 3 × 3 convolutional layer with softmax activation to x
   d. Append x to prediction_layers
9. final_prediction = concatenate(prediction_layers)  // concatenate all prediction layers
10. return final_prediction  // return the concatenated prediction layers
Above, ‘input_shape’ and ‘num_classes’ are the parameters that define the input shape and the number of classes for the Single-Shot Multibox Detector (SSD) function. ‘base_model’ instantiates MobileNetV2 with the specified parameters, and ‘feature_layers’ selects the output layers from the base_model that are used for further processing in the SSD. ‘conv_layers’ stores, as a list, the convolutional layers generated from each feature layer in feature_layers. ‘prediction_layers’ stores the prediction layers after applying convolutions and activations. ‘final_prediction’ represents the concatenated prediction layers formed from the convolution operations over the feature layers.
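For readers who prefer code, the following is a minimal Keras/TensorFlow sketch of Algorithm 1. The chosen feature-layer names, filter counts, and the 12-class output are illustrative assumptions, not the authors’ exact configuration.

```python
# Sketch of Algorithm 1 in Keras; layer names and filter counts are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_ssd(input_shape=(300, 300, 3), num_classes=12):
    # Steps 2-3: instantiate MobileNetV2 without its top layers and freeze it.
    base_model = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    for layer in base_model.layers:
        layer.trainable = False

    # Step 4: select multi-scale feature maps from the base model.
    feature_layers = [
        base_model.get_layer("block_13_expand_relu").output,
        base_model.get_layer("out_relu").output,
    ]

    # Steps 5-6: apply a convolution to each selected feature map.
    conv_layers = [layers.Conv2D(256, 3, padding="same", activation="relu")(f)
                   for f in feature_layers]

    # Steps 7-8: build per-scale prediction heads on a common 30 x 30 grid.
    prediction_layers = []
    for conv_layer in conv_layers:
        x = layers.Lambda(lambda t: tf.image.resize(t, (30, 30)))(conv_layer)
        x = layers.Conv2D(128, 1, activation="relu")(x)             # 1 x 1 conv, ReLU
        x = layers.Conv2D(num_classes, 3, padding="same",
                          activation="softmax")(x)                  # 3 x 3 conv, softmax
        prediction_layers.append(x)

    # Steps 9-10: concatenate the prediction heads and return the model.
    final_prediction = layers.Concatenate(axis=-1)(prediction_layers)
    return Model(inputs=base_model.input, outputs=final_prediction)

model = build_ssd()
model.summary()
```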
The difference between the actual labels or values associated with the input data and the expected outputs is quantified by the loss function. A measure of how effectively a model predicts the positions of objects within an image is called the localization loss, or L_loc. The difference between the predicted (x_i) and true (x̄_i) coordinates of each object’s bounding box is calculated by using the smooth L1 loss function. Equation (2) represents the localization loss [32].
$L_{loc} = \frac{1}{N} \sum_{i} \mathrm{smooth}_{L1}\left(x_i - \bar{x}_i\right)$
The model’s trust in its object predictions is measured by the confidence loss, or L_conf. It determines the negative log likelihood between the ground truth labels (y_i) and the predicted confidence scores (ȳ_i). Equation (3) represents the confidence loss [32].
$L_{conf} = -\frac{1}{N} \sum_{i} \left[ y_i \log(\bar{y}_i) + (1 - y_i) \log(1 - \bar{y}_i) \right]$
The total loss, L_total, is the combination of the localization loss and the confidence loss, called the Multibox loss, weighted by the coefficients α and β, respectively. Equation (4) represents the total loss [32].
$L_{total} = \alpha L_{loc} + \beta L_{conf}$
The model’s goal throughout training is to minimize this overall loss. In the optimization stage, the model weights (W_new) are updated according to the gradient of the total loss (∇L_total), with the learning rate (η) regulating the size of each weight update. By adjusting the model parameters based on the total loss, optimization minimizes this loss over the training dataset and thereby directs the training process. Equation (5) represents the optimization step [32].
$W_{new} = W_{old} - \eta \nabla L_{total}$
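A hedged sketch of Equations (2)–(4) in TensorFlow follows; the use of the Huber loss as the smooth L1 term and equal default weights α = β = 1 are assumptions consistent with the equations above rather than the authors’ exact implementation. The update in Equation (5) is what an optimizer such as SGD or Adam performs internally.

```python
# Sketch of the Multibox loss (Eqs. 2-4); alpha/beta defaults are assumptions.
import tensorflow as tf

def multibox_loss(true_boxes, pred_boxes, true_conf, pred_conf, alpha=1.0, beta=1.0):
    # Eq. (2): localization loss, smooth L1 (Huber) between true and predicted boxes.
    l_loc = tf.keras.losses.Huber(delta=1.0)(true_boxes, pred_boxes)
    # Eq. (3): confidence loss, negative log likelihood (binary cross-entropy).
    l_conf = tf.keras.losses.BinaryCrossentropy()(true_conf, pred_conf)
    # Eq. (4): weighted sum of the two terms.
    return alpha * l_loc + beta * l_conf
```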
To increase accuracy and frame rate, SSD uses default boxes and multi-scale feature maps and eliminates the separate region-proposal stage used by two-stage detectors. It takes much less time to recognize an object when using low-resolution images, such as 300 × 300 pixels. Compared with other recognition methods, SSD provides the fastest frame rate and the most accurate mean average precision (mAP), as shown in Table 3. The MobileNetV2 CNN architecture has been optimized alongside the recognition model to deliver exceptional classification accuracy on energy-efficient mobile devices.
The MobileNet architecture considerably reduces the structural complexity and model size of the framework. Table 4 shows the size of each architecture’s model along with its computational requirements, with MobileNetV2 having a smaller configuration and lower computational cost. In the proposed framework, the chosen model undergoes quantization. Conventional neural networks rely on high-precision numerical values, leading to tens or hundreds of millions of weights. Computing these particularly heavy weights requires robust computational power and large memory, which in turn requires a powerful CPU, GPU, or TPU.
Quantization replaces high-precision numeric values, such as 32-bit floating-point numbers, with lower-precision equivalents while largely maintaining accuracy. The presented model performs better in recognition and is smaller when its 32-bit parameters are quantized to 8 bits.
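As an illustration, post-training quantization of a trained model can be performed with the TensorFlow Lite converter roughly as sketched below; the saved-model path, calibration data, and output file name are placeholders.

```python
# Post-training quantization sketch; paths and calibration data are placeholders.
import numpy as np
import tensorflow as tf

# In practice, use a few hundred preprocessed training images for calibration.
calibration_images = np.random.rand(100, 300, 300, 3).astype("float32")

def representative_data_gen():
    for image in calibration_images:
        yield [image[None, ...]]

converter = tf.lite.TFLiteConverter.from_saved_model("ssd_mobilenet_v2_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enable quantization
converter.representative_dataset = representative_data_gen  # calibrate 8-bit ranges
tflite_model = converter.convert()

with open("ssd_mobilenet_v2_quant.tflite", "wb") as f:
    f.write(tflite_model)
```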
In this article, we utilize a dataset sourced from Kaggle, a renowned platform known for its extensive and diverse collection of high-quality datasets, making it a valuable resource for data science and machine learning projects. The dataset consists of a total of 15,515 validated images categorized into 12 different classes: cardboard, brown glass, biological waste, batteries, general waste, white glass, shoes, plastic, paper, metal, green glass, and clothing. It includes fundamental elements such as edges, lines, and corners, which are crucial to recognizing basic shapes and textures. Additionally, the dataset features intermediate characteristics like patterns and textures associated with different object components. It also provides more intricate attributes, including object parts and their arrangements, which allow the model to identify and differentiate among various objects.
This dataset is divided into two parts: a training set with 12,412 images and a validation set with 3103 images. The images in the dataset come from different sources to ensure a high degree of diversity in terms of content, angle, resolution, background, and lighting conditions. Each category includes several sub-categories. For example, the ‘plastic’ category contains images of plastic bottles, containers, and bags, while the ‘metal’ category includes cans, tins, and other metal objects.
The dataset’s extensive diversity significantly enhances the model’s accuracy in categorizing trash. Initial results show that the model exhibits high accuracy and generalizability, surpassing models trained on less diverse datasets. All images in the garbage dataset must have a resolution of 300 × 300 pixels to meet the requirements of the Quantized SSD MobileNetV2 model. However, the source photos come in different resolutions and formats. To solve this problem, this article uses an open-source application that downscales each image to 300 × 300 pixels and then converts it to JPEG format, with image scaling performed as a batch process.
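The original pipeline uses an unnamed open-source batch tool for this step; an equivalent sketch with the Pillow library is shown below, with the directory names as placeholders.

```python
# Batch resize and JPEG conversion sketch (Pillow); directory names are placeholders.
from pathlib import Path
from PIL import Image

src_dir = Path("raw_images")
dst_dir = Path("resized_images")
dst_dir.mkdir(exist_ok=True)

for img_path in src_dir.rglob("*"):
    if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".bmp"}:
        continue
    img = Image.open(img_path).convert("RGB")       # normalize the color mode
    img = img.resize((300, 300), Image.LANCZOS)     # match the SSD input size
    img.save(dst_dir / (img_path.stem + ".jpg"), "JPEG", quality=95)
```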
The garbage detection model uses supervised learning to classify the category of garbage. This process, known as labeling in machine learning, assigns informative labels to each image so that the model can understand and learn from it. The photos are classified into twelve categories by using an open-source program called LabelImg. These categories, shown in Figure 7, are as follows: cardboard, brown glass, organic waste, batteries, garbage, white glass, shoes, plastic, paper, metal, green glass, and clothing. Data augmentation is a technique that generates new training data by applying numerous adjustments to existing training images. Because the CNN does not inherently generalize across conditions such as rotation, translation, and mirroring, data augmentation techniques help to improve the accuracy of the CNN model.
Keras is a neural network library that offers an application programming interface (API) for seamlessly integrating data augmentation into the model training process. The five primary methods of data augmentation used here are image inversion, translation, rotation, and brightness and zoom adjustments. To improve the mean average precision (mAP) and accelerate loss convergence, a sufficient GPU is required for training the object recognition model: higher GPU processing power speeds up training, while larger GPU memory can hold more images per training step. Since Google Colab’s GPUs outperform laptop GPUs in performance, memory capacity, and bandwidth, Google Colab is used for training the CNN object recognition model instead of a laptop.
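A minimal Keras augmentation sketch covering these five transformations is shown below; the parameter ranges and directory name are illustrative assumptions.

```python
# Keras data-augmentation sketch; parameter ranges and paths are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,          # image inversion (mirroring)
    width_shift_range=0.1,         # horizontal translation
    height_shift_range=0.1,        # vertical translation
    rotation_range=20,             # rotation (degrees)
    brightness_range=(0.8, 1.2),   # brightness adjustment
    zoom_range=0.2,                # zoom adjustment
    rescale=1.0 / 255,
)

train_generator = datagen.flow_from_directory(
    "resized_images", target_size=(300, 300), batch_size=16, class_mode="categorical")
```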
An optimizer can be used to adjust the hyperparameters to improve the performance of the garbage detection model.
The hyperparameters are adjusted during the training process by using the Adam optimizer. To improve loss convergence during training, a learning-rate schedule based on the cosine function was additionally implemented, gradually reducing the learning rate as training progresses. Given the limitations of GPU memory, a batch size of 16 is used. The model trained with TensorFlow can be exported as an inference graph, enabling its use via a Python script for object detection. However, due to incompatibilities in the model structure, the TensorFlow Lite interpreter cannot consume the inference graph directly; the TensorFlow Lite Optimizing Converter (TOCO) must be used to convert it. To use TOCO, it is necessary to build TensorFlow from source on the computer.
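The optimizer setup described above can be expressed roughly as follows; the initial learning rate of 1 × 10−4 matches the value reported in Section 3.6, while the number of decay steps is an assumption.

```python
# Adam with cosine learning-rate decay; decay_steps is an assumption.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-4,   # starting learning rate
    decay_steps=10_000)           # steps over which the rate decays toward zero
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```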

3.4. Waste Classification and Categorization Framework

Waste classification and categorization cannot be achieved by the hardware alone; the hardware must be integrated with the trained CNN object recognition model. The framework therefore combines a variety of hardware and software elements into a cohesive solution. The hardware components consist of a Raspberry Pi 4, which functions as the main processing unit; a Pi camera for capturing images; ultrasonic sensors to measure waste levels; servo motors to control movement; and RFID tags for identification purposes.
On the software side, the SSD MobileNet V2 model is utilized for detecting objects, allowing for a precise assessment of waste levels. TensorFlow Lite is employed to enhance the model’s performance on the Raspberry Pi 4. The LoRaWAN facilitates long-range communication, enabling effective data transmission and remote monitoring. This combination of hardware and software provides a comprehensive framework for efficient waste management and data analysis.
Table 5 is a list of the electronic parts that will be integrated into this framework, and Figure 8 shows the schematic of the electrical component connection. The primary processor for garbage sorting and categorization is the Raspberry Pi 4 [29]. To detect and manage the movement of garbage, Raspberry Pi 4 receives the trained CNN garbage detection model and integrates it with Python-based algorithms. The Pi camera’s 8-megapixel camera module is used in conjunction with a trained model to detect trash that has appeared within its field of view [34]. The camera port of the Raspberry Pi 4 Camera Serial Interface is connected to Pi Camera V2 via a 15-pin connection, which requires 3.3 V to operate [29]. An ultrasonic sensor is utilized to monitor the fill level of the garbage can.
Table 6 illustrates how the different types of waste are assigned to the respective compartments. In addition, non-detectable waste is disposed of in the last compartment of the bin. The servomotors in the sorting segment of the frame aid in transferring waste to the designated compartments.
A servo driver HAT, along with servo motors, is employed to facilitate the movement of waste into the designated compartments. The gear horn of the SG-90 servo motor is affixed to a plastic board, functioning as a door that permits waste to be directed into the appropriate compartment [35]. The SG-90 servo motor, with a torque capacity of 2.5 kg/cm, is sufficiently robust to endure the typical waste deposited onto the plastic board and is capable of rotating both clockwise and anticlockwise within a range of 0° to 180°, enabling precise waste direction control [4]. Nine servo motors are utilized to maneuver the plastic board, while an additional servo motor serves as the locking mechanism for the uppermost compartment of the bin. Given that each servo motor necessitates a 1 W power supply, the Raspberry Pi 4 is inherently limited by its insufficient 5 V pins and pulse–width modulation (PWM) pin. To mitigate this constraint, an expansion board, specifically a servo driver HAT, is employed. The Raspberry Pi 4 interfaces with the servo driver HAT via I2C, utilizing Pin 3 (SDA) and Pin 4 (SCL), and is thus able to control nine servo motors through the available 16 PWM output channels. Furthermore, the HAT is powered by an 11.1 V Li-Po battery via the VIN terminal, which concurrently supplies power to the Raspberry Pi 4 through the HAT [4].
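A hedged sketch of driving one sorting flap through the 16-channel servo driver HAT is shown below; it assumes the Adafruit ServoKit library (for PCA9685-based HATs), and the channel number, angles, and delay are illustrative values.

```python
# Servo HAT sketch (assumes a PCA9685-based HAT and the adafruit_servokit library).
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)       # the servo driver HAT exposes 16 PWM channels

def drop_into_compartment(channel, open_angle=120, rest_angle=0, hold_s=1.5):
    """Tilt the plastic board so the classified item slides into its compartment."""
    kit.servo[channel].angle = open_angle
    time.sleep(hold_s)            # give the waste time to slide off the board
    kit.servo[channel].angle = rest_angle

# Example: route an item classified as plastic to compartment 3.
drop_into_compartment(channel=3)
```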

3.5. Bin Status Monitoring and Locker System

In addition to sorting and classifying waste, the intelligent waste processing framework can also monitor and track the status of the garbage cans remotely. Furthermore, an RFID-based locking system secures the electronic components housed in the topmost compartment of the garbage can. Bin status monitoring consists of two parts: a server running on a computer and the bin acting as a client. The bin uses sensors to track its position and status, sends these data via a LoRaWAN connection, and uses an RFID-based lock to protect the compartment containing the electronic components [36].
The server attached to the computer obtains data from the garbage can, enabling administrators to monitor its status. The LoRaWAN is a widely adopted low-power wireless sensor network technology used in Internet of Things implementations [37]. Compared with technologies such as Zigbee or Bluetooth, it enables communication over long distances, albeit at lower data rates [38]. Figure 9 below shows the flowchart that defines how the data collected by the Raspberry Pi 4 is transmitted to servers by using the LoRaWAN. The ultrasonic sensor is installed on top of the container and pointed downwards at the material inside. It can measure distances from 2 cm to 400 cm with an accuracy of 0.3 cm and is, therefore, suitable for determining the fill level in each waste compartment. The sensor emits ultrasonic waves at regular intervals, i.e., sound waves with a frequency above the human hearing range, typically around 40 kHz.
The emitted waves travel down to the material’s surface inside the bin and are reflected back to the sensor. The sensor picks up the reflected waves and calculates the time elapsed between emission and reception of the echo. The microcontroller determines the distance to the material’s surface by using the formula mentioned in Equation (6) [29].
$\mathrm{Distance} = \frac{\mathrm{Speed\ of\ Sound} \times \mathrm{Time}}{2}$
To find the level of material in the bin, the measured distance is subtracted from the total bin height, as given in Equation (7).
$\mathrm{Bin\ Level} = \mathrm{Bin\ Height} - \mathrm{Distance\ Measured}$
The LoRaWAN link then transmits the bin status to the Central Waste Handling department, which optimizes the collection process [39]. Only bins that are filled above 75% send a request to the department to collect their trash.
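The measurement loop behind Equations (6) and (7) can be sketched as follows for an HC-SR04-style sensor on the Raspberry Pi; the GPIO pin numbers are assumptions, while the 180 cm height and 75% threshold follow the description above (in practice, each compartment would use its own height value).

```python
# Ultrasonic fill-level sketch; GPIO pins are assumptions (BCM numbering).
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24
BIN_HEIGHT_CM = 180

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_cm():
    # A 10-microsecond pulse on TRIG starts one measurement cycle.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()
    elapsed = pulse_end - pulse_start
    return (34300 * elapsed) / 2           # Eq. (6): speed of sound x time / 2

distance = measure_distance_cm()
bin_level = BIN_HEIGHT_CM - distance        # Eq. (7)
if bin_level / BIN_HEIGHT_CM > 0.75:        # only bins above 75% request collection
    print("Bin above 75% full: queue a status packet for LoRaWAN transmission")
GPIO.cleanup()
```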
The RFID-based locking system in the smart bin protects its electrical components. The solenoid lock on the electronic component compartment door is released by using an RC522 RFID reader and a registered RFID tag. The RC522 RFID reader creates an electromagnetic field at 13.56 MHz in order to interact with the RFID tags up to a distance of 5 cm.
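A hedged sketch of the tag check is shown below; it assumes the widely used mfrc522 Python library for the RC522 reader, and the authorized tag IDs are placeholders. Actuating the solenoid itself would be an additional GPIO output.

```python
# RFID access-check sketch (assumes the mfrc522 library; tag IDs are placeholders).
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522

AUTHORIZED_IDS = {123456789012}    # UIDs of registered staff tags
reader = SimpleMFRC522()

try:
    tag_id, _ = reader.read()      # blocks until a tag is within ~5 cm of the reader
    if tag_id in AUTHORIZED_IDS:
        print("Authorized tag detected: release the solenoid lock")
    else:
        print("Unknown tag: keep the compartment locked")
finally:
    GPIO.cleanup()                 # release the GPIO pins configured by the library
```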

3.6. Experimental Setup

The experimental procedure for waste classification was meticulously designed, encompassing the training, validation, and fine tuning of a MobileNetV2 model. The model was initially trained by using a batch size of 32, with the Adam optimizer set to a learning rate of 0.0001. Training extended over 10 epochs, with early stopping employed to track validation loss and mitigate overfitting risks. The dataset was divided into training and testing subsets, with 80% designated for training and 20% for testing. A validation set, drawn from the training data, was utilized during the training phase to fine-tune the model’s hyperparameters.
Following initial training, the base layers of the MobileNetV2 model, which had been frozen to exploit pre-trained weights, were unfrozen to facilitate further fine tuning. This process involved retraining the model with a significantly reduced learning rate of 1 × 10−5 to maintain stability in the model updates. Additionally, to further guard against overfitting, a dropout layer with a 0.5 rate was introduced before the final dense layer. This comprehensive fine tuning, coupled with hyperparameter optimization, ensured that the model demonstrated strong performance on the validation set, making it highly effective for the practical application of waste classification.
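A condensed sketch of this training and fine-tuning recipe is shown below; the pooling choice, patience value, and data generators are assumptions, while the learning rates, dropout rate, and epoch count follow the description above (the batch size of 32 would be set in the data generator).

```python
# Fine-tuning sketch for the MobileNetV2 classifier; some settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNetV2(
    input_shape=(300, 300, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = True                         # unfreeze the base for fine tuning

model = tf.keras.Sequential([
    base,
    layers.Dropout(0.5),                      # guard against overfitting
    layers.Dense(12, activation="softmax"),   # 12 waste categories
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

# train_generator / val_generator would come from the augmentation pipeline above:
# model.fit(train_generator, validation_data=val_generator,
#           epochs=10, callbacks=[early_stop])
```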

4. Results and Analysis

This section presents the performance metrics and key lessons learned from implementing the intelligent waste processing framework in different urban environments, based on comprehensive data collection and analysis.

4.1. Metrics

A comprehensive set of classification metrics is utilized to thoroughly assess the performance of the classification models. These metrics are derived from the counts of True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN), and each metric provides a different perspective on the model’s effectiveness. The primary metrics include accuracy, precision, recall, and F1-score, which are defined and formulated as follows.
Accuracy measures the frequency with which the model correctly predicts the true class of the data. It is defined as the ratio of correctly classified instances (including both True Positives and True Negatives) to the total number of instances, as presented in Equation (8) [27].
$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$
Precision quantifies the proportion of true-positive predictions among all instances classified as positive by the model, as described in Equation (9) [27].
$\mathrm{Precision} = \frac{TP}{TP + FP}$
Recall, also referred to as sensitivity or the true positive rate, measures the proportion of actual positive instances that are correctly identified by the model, as outlined in Equation (10) [27].
$\mathrm{Recall} = \frac{TP}{TP + FN}$
The F1-score, the harmonic mean of precision and recall, offers a single metric that balances both concerns, as indicated in Equation (11) [27].
$F1\ \mathrm{Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$
Classification performance is assessed by using these metrics with the SSD MobileNetV2 model. Accuracy, precision, recall, and the F1-score collectively provide a thorough evaluation of the model’s effectiveness. Accuracy measures the overall correctness, precision focuses on the proportion of true-positive predictions, recall evaluates the identification of all actual positive instances, and the F1-score balances precision and recall by accounting for both false positives and false negatives.
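For reference, these metrics can be computed directly from the validation predictions, for example with scikit-learn; the label arrays below are placeholders.

```python
# Metric computation sketch (Eqs. 8-11); y_true and y_pred are placeholders.
from sklearn.metrics import accuracy_score, classification_report

y_true = [0, 2, 1, 1, 0, 2]   # ground-truth class indices (example values)
y_pred = [0, 2, 1, 0, 0, 2]   # model predictions (example values)

print("Accuracy:", accuracy_score(y_true, y_pred))  # Eq. (8)
print(classification_report(y_true, y_pred))        # per-class precision, recall, F1 (Eqs. 9-11)
```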

4.2. Classification Accuracy

The line graph in Figure 10 illustrates the training and validation accuracy, and Figure 11 shows the loss of the model over 10 epochs. Over the epochs, the training accuracy improves while the training loss decreases. The validation accuracy also shows an upward trend, reaching 93.97% and surpassing the training accuracy of 88%. This observation indicates that the model generalizes effectively and is not overly tailored to the training dataset. The initially high training loss gradually decreases over the epochs, suggesting that the model is successfully learning from the training data. The validation loss also exhibits a downward trend, indicating that the model performs effectively.
The classification report in Figure 12 presents the precision, recall, and F1-score for each material class. A classification report is a performance assessment tool used in supervised learning, especially in classification tasks [27]. It provides a summarized representation of the model’s predictive capabilities by presenting several metrics for each class of the dataset.
After evaluating the results, the model classifies and categorizes the waste reliably, with an accuracy of approximately 93.97%, and performs efficiently on the test data.
The proposed framework effectively addresses the challenges posed by sparse datasets by incorporating an advanced classification algorithm that accurately differentiates waste into 12 distinct categories. This approach successfully mitigates the limitations associated with sparse data. Additionally, the strategic adoption of LoRaWAN (Long-Range Wide-Area Network) significantly enhances communication capabilities, enabling rapid and reliable data transmission over long distances. This improvement not only resolves communication constraints but also enhances the overall efficiency of the system. The integration of LoRaWAN within the waste management process ensures that essential data are seamlessly transmitted to administrative bodies, enabling industries to access waste processing information in real time, thereby facilitating the prompt and effective recycling of appropriate materials.

5. Conclusions and Future Scope

The study demonstrates that the proposed model achieves an impressive accuracy of approximately 94% on the validation dataset, showcasing its effectiveness in accurately categorizing waste materials. The model exhibits strong generalization capabilities, minimizing overfitting to the training data, as evidenced by the faster improvement in validation accuracy compared with training accuracy and the consistent decrease in validation loss across the epochs. The model’s performance surpasses that of other frameworks, offering higher classification accuracy and addressing existing limitations in communication methods. This significant advancement in urban waste classification represents a notable improvement over previous methods, as detailed in this article.
Table 7 presents a comparative analysis of various waste management systems, evaluating them based on the type of waste, sensors, communication protocols, microcontrollers, machine learning architecture, and accuracy. The comparison reveals that existing systems (1 to 5) are primarily limited to monitoring bin conditions, such as waste fill levels, gas pollution, and GPS location. Many of the communication protocols utilized are unsuitable for smart waste management systems due to their limited transmission range, and protocols like GSM are being phased out in numerous countries. Only one system includes a mechanism for waste categorization; however, it is restricted to detecting plastic waste alone. This analysis indicates that current waste processing frameworks fail to address the significant challenges prevalent in urban areas.
The proposed framework adeptly addresses the outlined challenges, with particular attention to the intricacies of sparse datasets. An advanced classification algorithm has been meticulously crafted to proficiently differentiate waste into 12 distinct categories, thereby overcoming the inherent limitations associated with sparse data. In response to the challenge of limited communication range, LoRaWAN (Long-Range Wide-Area Network) has been strategically implemented, facilitating swift and reliable data transmission across substantial distances. This approach not only overcomes communication constraints but also significantly augments the system’s overall operational efficiency. In tackling the issue of inadequate recycling processes, the integration of LoRaWAN ensures the seamless conveyance of essential data to administrative bodies. This information is subsequently stored on a cloud server, providing industries with immediate access to waste management data, thereby enabling the prompt and efficient recycling of suitable materials.
While the framework presents a robust solution, several limitations may exist. The reliance on cloud storage for data management, while efficient, introduces potential vulnerabilities concerning data security and privacy. Additionally, the effectiveness of the LoRaWAN in transmitting data over long distances may be impacted by environmental factors or interference, which could reduce communication reliability in certain contexts.
Future research in the field of smart waste could focus on IoT-based and ML approaches. Implementing YOLOv9 for optimizing waste collection routes can significantly enhance efficiency by accurately detecting and classifying waste levels in bins across various locations. By leveraging YOLOv9's advanced object detection capabilities, municipalities can generate real-time data on waste accumulation, allowing for precise route planning. This ensures that collection vehicles are dispatched only when necessary, reducing fuel consumption and minimizing carbon emissions. Additionally, optimized routes lead to lower operational costs and improved service reliability, contributing to more sustainable and cost-effective waste processing practices. Infrared sensors can monitor garbage can levels in shaded areas, allowing ML frameworks to optimize collection and detect anomalies. Solar panels can power various components inside the bin, such as Pi camera modules, Raspberry Pi 4, and RFID tags, ensuring efficient power consumption. RFID tags also increase security by assigning a unique ID to each garbage can, allowing maintenance personnel to access the electronic compartment. Ultimately, the integration of IoT and ML technologies into waste processing frameworks can prevent the spread of diseases caused by overflowing garbage cans through the continuous monitoring of garbage can status, positively impacting public health.

Author Contributions

Conceptualization, D.K.G. and H.B.; methodology, A.K.R.C. and M.L.; software, S.T. and Z.T.; validation, S.B.B. and H.B.; formal analysis, M.A.L.F.; resources, A.K.R.C.; data curation, N.G.; writing—original draft preparation, D.K.G.; writing—review and editing, M.L.; visualization, S.T. and Z.T.; supervision, S.B.B. and N.G.; project administration, M.A.L.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on reasonable request from the first author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaza, S.; Yao, L.; Bhada-Tata, P.; Van Woerden, F. What a Waste 2.0: A Global Snapshot of Solid Waste Management to 2050; World Bank Publications: Washington, DC, USA, 2018. [Google Scholar]
  2. Waste Management Indicators. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Waste_management_indicators (accessed on 3 July 2024).
  3. Teklemariam, N. Sustainable Development Goals and Equity in Urban Planning: A Comparative Analysis of Chicago, São Paulo, and Delhi. Sustainability 2022, 14, 13227. [Google Scholar] [CrossRef]
  4. Sheng, T.J.; Islam, M.S.; Misran, N.; Baharuddin, M.H.; Arshad, H.; Islam, M.R.; Chowdhury, M.E.H.; Rmili, H.; Islam, M.T. An Internet of Things Based Smart Waste Management System Using LoRa and Tensorflow Deep Learning Model. IEEE Access 2020, 8, 148793–148811. [Google Scholar] [CrossRef]
  5. Râpă, M.; Darie-Niță, R.N.; Coman, G. Valorization of Fruit and Vegetable Waste into Sustainable and Value-Added Materials. Waste 2024, 2, 258–278. [Google Scholar] [CrossRef]
  6. Saeed, T.; Ijaz, A.; Sadiq, I.; Qureshi, H.N.; Rizwan, A.; Imran, A. An AI-Enabled Bias-Free Respiratory Disease Diagnosis Model Using Cough Audio. Bioengineering 2024, 11, 55. [Google Scholar] [CrossRef]
  7. Cuevas-Chávez, A.; Hernández, Y.; Ortiz-Hernandez, J.; Sánchez-Jiménez, E.; Ochoa-Ruiz, G.; Pérez, J.; González-Serna, G. A Systematic Review of Machine Learning and IoT Applied to the Prediction and Monitoring of Cardiovascular Diseases. Healthcare 2023, 11, 2240. [Google Scholar] [CrossRef]
  8. Abubakar, I.R.; Maniruzzaman, K.M.; Dano, U.L.; AlShihri, F.S.; AlShammari, M.S.; Ahmed, S.M.S.; Al-Gehlani, W.A.G.; Alrawaf, T.I. Environmental Sustainability Impacts of Solid Waste Management Practices in the Global South. Int. J. Environ. Res. Public Health 2022, 19, 12717. [Google Scholar] [CrossRef]
  9. An, Q.; Rahman, S.; Zhou, J.; Kang, J.J. A Comprehensive Review on Machine Learning in Healthcare Industry: Classification, Restrictions, Opportunities and Challenges. Sensors 2023, 23, 4178. [Google Scholar] [CrossRef]
  10. Li, X.; Lu, W.; Ye, W.; Ye, C. Enhancing Environmental Sustainability: Risk Assessment and Management Strategies for Urban Light Pollution. Sustainability 2024, 16, 5997. [Google Scholar] [CrossRef]
  11. Zheng, C.; Yuan, J.; Zhu, L.; Zhang, Y.; Shao, Q. From digital to sustainable: A scientometric review of smart city literature between 1990 and 2019. J. Clean. Prod. 2020, 258, 120689. [Google Scholar] [CrossRef]
  12. Luo, J.; Zhao, C.; Chen, Q.; Li, G. Using deep belief network to construct the agricultural information system based on Internet of Things. J. Supercomput. 2022, 78, 379–405. [Google Scholar] [CrossRef]
  13. Hassan, S.A.; Samsuzzaman, M.; Hossain, M.J.; Akhtaruzzaman, M.; Islam, T. Compact planar UWB antenna with 3.5/5.8 GHz dual band-notched characteristics for IoT application. In Proceedings of the 2017 IEEE International Conference on Telecommunications and Photonics (ICTP), Dhaka, Bangladesh, 26–28 December 2017; pp. 195–199. [Google Scholar] [CrossRef]
  14. Zaidan, A.A.; Zaidan, B.B. A review on intelligent process for smart home applications based on IoT: Coherent taxonomy, motivation, open challenges, and recommendations. Artif. Intell. Rev. 2020, 53, 141–165. [Google Scholar] [CrossRef]
  15. Azim, R.; Islam, M.T.; Arshad, H.; Alam, M.M.; Sobahi, N.; Khan, A.I. CPW-Fed Super-Wideband Antenna With Modified Vertical Bow-Tie-Shaped Patch for Wireless Sensor Networks. IEEE Access 2021, 9, 5343–5353. [Google Scholar] [CrossRef]
  16. Sheng, M.; Zhou, D.; Bai, W.; Liu, J.; Li, H.; Shi, Y.; Li, J. Coverage enhancement for 6G satellite-terrestrial integrated networks: Performance metrics, constellation configuration and resource allocation. Sci. China Inf. Sci. 2023, 66, 130303. [Google Scholar] [CrossRef]
  17. Nowakowski, P.; Pamuła, T. Application of deep learning object classifier to improve e-waste collection planning. Waste Manag. 2020, 109, 1–9. [Google Scholar] [CrossRef]
  18. White, G.; Clarke, S. Urban Intelligence With Deep Edges. IEEE Access 2020, 8, 7518–7530. [Google Scholar] [CrossRef]
  19. Adedeji, O.; Wang, Z. Intelligent Waste Classification System Using Deep Learning Convolutional Neural Network. Procedia Manuf. 2019, 35, 607–612. [Google Scholar] [CrossRef]
  20. Bobulski, J.; Kubanek, M. CNN use for plastic garbage classification method. In Proceedings of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Anchorage, AK, USA, 4–8 August 2019. [Google Scholar]
  21. Pardini, K.; Rodrigues, J.J.P.C.; Diallo, O.; Das, A.K.; de Albuquerque, V.H.C.; Kozlov, S.A. A Smart Waste Management Solution Geared towards Citizens. Sensors 2020, 20, 2380. [Google Scholar] [CrossRef]
  22. Vuori, T. Wireless Communication Technologies and Security in 5G; 2020. Available online: https://www.theseus.fi/bitstream/handle/10024/341826/Vuori_Taneli.pdf?sequence=2 (accessed on 3 July 2024).
  23. Shahidul Islam, M.; Islam, M.T.; Almutairi, A.F.; Beng, G.K.; Misran, N.; Amin, N. Monitoring of the Human Body Signal through the Internet of Things (IoT) Based LoRa Wireless Network System. Appl. Sci. 2019, 9, 1884. [Google Scholar] [CrossRef]
  24. Anwar, M.A. IOT Based Garbage Monitoring Using Arduino. Ph.D. Thesis, West Bengal University of Technology, Kolkata, India, 2018. [Google Scholar]
  25. Misra, D.; Das, G.; Chakrabortty, T.; Das, D. An IoT-based waste management system monitored by cloud. J. Mater. Cycles Waste Manag. 2018, 20, 1574–1582. [Google Scholar] [CrossRef]
  26. Cerchecci, M.; Luti, F.; Mecocci, A.; Parrino, S.; Peruzzi, G.; Pozzebon, A. A Low Power IoT Sensor Node Architecture for Waste Management Within Smart Cities Context. Sensors 2018, 18, 1282. [Google Scholar] [CrossRef]
  27. White, G.; Cabrera, C.; Palade, A.; Li, F.; Clarke, S. WasteNet: Waste classification at the edge of smart bins. arXiv 2020, arXiv:2006.05873. [Google Scholar] [CrossRef]
  28. Mao, W.-L.; Chen, W.-C.; Wang, C.-T.; Lin, Y.-H. Recycling waste classification using optimized convolutional neural network. Resour. Conserv. Recycl. 2021, 164, 105132. [Google Scholar] [CrossRef]
  29. Sallang, N.C.A.; Islam, M.T.; Islam, M.S.; Arshad, H. A CNN-Based Smart Waste Management System Using TensorFlow Lite and LoRa-GPS Shield in Internet of Things Environment. IEEE Access 2021, 9, 153560–153574. [Google Scholar] [CrossRef]
  30. Servos Explained—SparkFun Electronics. Available online: https://www.sparkfun.com/servos (accessed on 5 July 2024).
  31. Chang, J.; Kang, M.; Park, D. Low-Power On-Chip Implementation of Enhanced SVM Algorithm for Sensors Fusion-Based Activity Classification in Lightweighted Edge Devices. Electronics 2022, 11, 139. [Google Scholar] [CrossRef]
  32. Huang, S.; He, Y.; Chen, X. M-YOLO: A Nighttime Vehicle Detection Method Combining Mobilenet v2 and YOLO v3. J. Phys. Conf. Ser. 2021, 1883, 012094. [Google Scholar] [CrossRef]
  33. Li, Q.; Lin, Y.; He, W. SSD7-FFAM: A Real-Time Object Detection Network Friendly to Embedded Devices from Scratch. Appl. Sci. 2021, 11, 1096. [Google Scholar] [CrossRef]
  34. Vijay, S.; Kumar, P.; Raju, S.; Vivekanandan, S. Smart Waste Management System using ARDUINO. Int. J. Eng. Res. Technol. (IJERT) 2019, 8, 1. [Google Scholar]
  35. Sakama, S.; Tanaka, Y.; Kamimura, A. Characteristics of Hydraulic and Electric Servo Motors. Actuators 2022, 11, 11. [Google Scholar] [CrossRef]
  36. Islam, M.T.; Alam, T.; Yahya, I.; Cho, M. Flexible Radio-Frequency Identification (RFID) Tag Antenna for Sensor Applications. Sensors 2018, 18, 4212. [Google Scholar] [CrossRef]
  37. Sundaram, J.P.S.; Du, W.; Zhao, Z. A Survey on LoRa Networking: Research Problems, Current Solutions, and Open Issues. IEEE Commun. Surv. Tutor. 2020, 22, 371–388. [Google Scholar] [CrossRef]
  38. Hui, J. SSD Object Detection: Single Shot MultiBox Detector for Real-Time Processing. Available online: https://jonathan-hui.medium.com/ssd-object-detection-single-shot-multibox-detector-for-real-time-processing-9bd8deac0e06 (accessed on 16 June 2024).
  39. Coutinho, M.; Afonso, J.A.; Lopes, S.F. An Efficient Adaptive Data-Link-Layer Architecture for LoRa Networks. Future Internet 2023, 15, 273. [Google Scholar] [CrossRef]
Figure 1. Predicted waste generation by region (millions of tons annually).
Figure 2. Effective trash handling.
Figure 3. Shape and size of the proposed bin.
Figure 4. Smart waste processing framework.
Figure 5. Combined hardware and software workflow.
Figure 6. SSD architecture [33].
Figure 7. Categorization of multiple objects by using LabelImg.
Figure 8. Object detection procedure’s electrical component connection diagram.
Figure 9. Process of sending data to server.
Figure 10. Analysis of training and validation accuracy.
Figure 11. Analysis of training and validation loss.
Figure 12. Classification report.
Table 1. Literature review.
Author(s) | References | Methodology | Software Used | Benefits | Limitations
Anwar et al., 2018 | [24] | GSM-based electronic monitoring system with ultrasonic sensors to detect trash level and notify authorities via SMS. | Ultrasonic sensors and GSM SIM900 | Efficient garbage collection; timely alerts to authorities. | Limited ability to accurately determine the remaining room in the trash can.
Misra et al., 2018 | [25] | Ultrasonic sensors for garbage detection, with data sent to a server for analysis, decision making, and forecasting of future conditions. | Ultrasonic sensors and cloud server; Arduino Integrated Development Environment (IDE) version 1.8.5 | Cloud-monitored waste processing, enhanced decision making, and predictive analytics; the system can forecast future conditions and make inferences beyond the daily trash amount. | Inability to classify different types of garbage.
Cerchecci et al., 2018 | [26] | Low-power sensor node architecture using a microprocessor, ultrasonic sensor, and LoRa for data transmission, with emphasis on energy-saving technologies. | Single-chip microprocessor (Arduino IDE version 1.6.12), ultrasonic sensor, and LoRa module | Power-saving waste processing; LoRa technology for long-distance communication; focus on energy-saving technologies and regulations. | Inability to automatically classify garbage; potential mixing of biodegradable and non-biodegradable trash.
Bobulski et al., 2019 | [20] | CNN-based waste classification system for plastic trash segregation with improved recycling efficiency. | Convolutional Neural Network (CNN); TensorFlow version 1.14 | Automated material sorting, cost reduction, and increased recycling efficiency. | Difficulty in extracting features from images with less depth using the network.
Nowakowski et al., 2020 | [17] | Image recognition system for identifying and categorizing electronic waste from images using CNN and Faster R-CNN. | Faster R-CNN and Convolutional Neural Network (CNN); TensorFlow version 2.1 | Enhanced waste classification; improved waste pickup planning; offers both mobile app and server options for the image recognition system. | Slower detection performance with large datasets; time-consuming training.
White et al., 2020 | [27] | Deep neural network model for waste categorization, deployed at the edge for smart bins and using transfer learning for improved accuracy. | Jetson Nano edge device; TensorFlow version 2.2 | High accuracy in waste categorization; deployment at the edge for efficiency. | Transfer learning speeds up training but requires initial models from ImageNet.
Adedeji et al., 2019 | [19] | ResNet-50 feature extractor combined with an SVM for trash classification into glass, metal, paper, and plastic categories, achieving 87% accuracy. | ResNet-50 and Support Vector Machine (SVM); TensorFlow version 1.12 | Enhanced waste classification; high accuracy. | Limited to specific waste categories; potential difficulty in generalization.
Mao et al., 2021 | [28] | CNN-based trash categorization with data augmentation for dataset diversity; optimization of DenseNet121 using a Genetic Algorithm for accuracy improvement. | Convolutional Neural Network (CNN) and Genetic Algorithm (GA); TensorFlow version 2.3 | Increased accuracy in recycling waste categorization; optimization of neural networks. | Time-consuming training; potential overfitting with data augmentation.
Pardini et al., 2020 | [21] | IoT-based waste processing framework with a real prototype deployment and a case study; smart bins equipped with HC-SR04, load cell, DHT11, and GPS sensors and controlled by Arduino; data transferred via a SIM900 GSM/GPRS module and integrated with In.IoT middleware. | Arduino IDE version 1.8.12, In.IoT middleware, My Waste App (built with Ionic), and web browser interface | Optimized waste processing, cost savings, environmental benefits, enhanced user interaction, accurate waste detection, efficient route planning, and statistical data generation. | Scalability and cost issues; lack of battery energy level visualization.
Table 2. Power consumed by components [10,29,30,31].
Component | Quantity | Power Consumed (W)
Raspberry Pi 4 | 1 | 15 W
Servo motor (SG90) | 9 | 9 W
Pi camera | 1 | 1.25–1.5 W
Ultrasonic sensor | 1 | 0.075 W
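For context, the figures in Table 2 imply a worst-case draw of roughly 25.6 W when every component is active at once. The snippet below is a minimal Python sketch of that arithmetic; it treats the 9 W servo entry as 1 W per unit with all nine running simultaneously, and the battery capacity and regulator efficiency used for the runtime estimate are illustrative assumptions, not values reported in the paper.

# Worst-case power budget implied by Table 2 (all components active at once).
# Table 2 lists 9 W for the nine servos; treated here as 1 W per unit.
components = {
    "Raspberry Pi 4":     {"qty": 1, "watts_each": 15.0},
    "Servo motor (SG90)": {"qty": 9, "watts_each": 1.0},
    "Pi camera":          {"qty": 1, "watts_each": 1.5},   # upper end of 1.25-1.5 W
    "Ultrasonic sensor":  {"qty": 1, "watts_each": 0.075},
}

total_watts = sum(c["qty"] * c["watts_each"] for c in components.values())
print(f"Worst-case draw: {total_watts:.3f} W")   # ~25.575 W

# Rough runtime on the 11.1 V Li-Po pack listed in Table 5, assuming a
# hypothetical 2200 mAh capacity and ~85% regulator efficiency.
battery_wh = 11.1 * 2.2
runtime_h = battery_wh * 0.85 / total_watts
print(f"Estimated runtime: {runtime_h:.2f} h")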
Table 3. Comparison of various detection models [29].
Method | mAP | Batch Size | FPS | # Boxes | Input Resolution
Fast YOLO | 52.7 | 1 | 155 | 98 | 448 × 448
Faster R-CNN (VGG-16) | 73.2 | 1 | 7 | ~6000 | ~1000 × 600
YOLO | 66.4 | 1 | 21 | 98 | 448 × 448
SSD512 | 76.8 | 1 | 19 | 24,564 | 512 × 512
SSD300 | 74.3 | 1 | 46 | 8732 | 300 × 300
Table 4. Analyzing multiple architectural frameworks [29].
Network | Top-1 | Params | Multiply-Adds | CPU
MobileNetV2 (1.4) | 74.7 | 6.9 M | 585 M | 143 ms
ShuffleNet (1.5) | 71.5 | 3.4 M | 292 M | -
NASNet-A | 74 | 5.3 M | 564 M | 183 ms
MobileNetV1 | 70.6 | 4.2 M | 575 M | 113 ms
MobileNetV2 | 72 | 3.4 M | 300 M | 75 ms
ShuffleNet (x2) | 73.7 | 5.4 M | 524 M | -
Table 5. Enumeration of electrical parts used in object detection [29].
Used Components | Total
Raspberry Pi 4 | 1
11.1 V Li-Po battery | 1
SG-90 servo motor | 9
Pi Camera V2 | 1
HC-SR04 ultrasonic sensor | 1
Servo driver HAT | 1
Table 6. Classification of trash types and their corresponding compartments.
Trash Compartment | Type of Trash
1 | Glass (green, brown, and white)
2 | Plastic
3 | Biological trash
4 | Metal
5 | Clothes
6 | Shoes
7 | Paper and cardboard
8 | Non-detectable trash
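The mapping in Table 6 lends itself to a simple lookup from the detector’s class label to a compartment index and, from there, to an actuation step. The sketch below illustrates one possible encoding; the label strings, the use of the Adafruit ServoKit driver for the servo HAT, the one-servo-per-compartment channel assignment, and the 90° “open” angle are all illustrative assumptions and may differ from the plate mechanism actually used in the prototype.

# Label-to-compartment lookup mirroring Table 6, plus a hypothetical actuation
# step. ServoKit (PCA9685-based HAT), channel numbers, and angles are assumed.
import time
from adafruit_servokit import ServoKit

COMPARTMENT_BY_LABEL = {
    "green-glass": 1, "brown-glass": 1, "white-glass": 1,
    "plastic": 2,
    "biological": 3,
    "metal": 4,
    "clothes": 5,
    "shoes": 6,
    "paper": 7, "cardboard": 7,
}
FALLBACK_COMPARTMENT = 8   # non-detectable trash

kit = ServoKit(channels=16)

def sort_item(label: str) -> int:
    """Route a detected item to its compartment and pulse the matching servo."""
    compartment = COMPARTMENT_BY_LABEL.get(label, FALLBACK_COMPARTMENT)
    servo = kit.servo[compartment - 1]   # assumed 1:1 channel mapping
    servo.angle = 90                     # open the flap
    time.sleep(1.0)                      # let the item drop
    servo.angle = 0                      # close again
    return compartment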
Table 7. Benchmark waste processing frameworks: a comparative study.
No. | Ref. | Object Detection Model | Type of Waste Detectable | Microcontroller Used | Communication Protocol | Sensor Used | Accuracy
1 | [24] | - | Common waste | Arduino Uno and NodeMCU | GSM | Ultrasonic sensor and DHT11 sensor | -
2 | [25] | - | Common waste | Arduino Pro Mini | WiFi | Ultrasonic sensor, stinky gas sensor, MQ-135, and MQ-136 | -
3 | [20] | Modified AlexNet | Plastic | - | - | - | 91%
4 | [17] | Faster R-CNN | Refrigerators, washing machines, and monitors | - | - | - | 90–96.7%
5 | [27] | WasteNet | Paper, glass, metal, and cardboard | - | - | - | 92–94.5%
6 | Proposed system | Quantized SSD MobileNetV2 | Battery, biological waste, glass (brown, green, and white), clothes, cardboard, plastic, paper, shoes, and trash | Raspberry Pi 4 and Arduino Uno R3 | LoRa | Ultrasonic sensor and RFID reader | 93.97%
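To make the proposed-system row of Table 7 concrete, the following is a minimal sketch of running a quantized SSD MobileNetV2 detector with the TensorFlow Lite interpreter on a Raspberry Pi. The model file name, score threshold, and output-tensor ordering (boxes, classes, scores) follow common conventions for TFLite SSD detection models and are assumptions rather than details confirmed by the paper.

# Minimal TFLite inference sketch for a quantized SSD MobileNetV2 detector.
# File name, threshold, and output-tensor ordering are assumptions.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="waste_ssd_mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

def detect(image_path: str, threshold: float = 0.5):
    """Return [(class_id, score, [ymin, xmin, ymax, xmax]), ...] above threshold."""
    image = Image.open(image_path).convert("RGB").resize((int(width), int(height)))
    input_data = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()

    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return [(int(c), float(s), b.tolist())
            for b, c, s in zip(boxes, classes, scores) if s >= threshold]

# Example: detections = detect("frame.jpg")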
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
