Bulletin of Informatics and Data Science, Vol. 4, May 2025, pp. 62–70. ISSN 2580-8389 (Media Online). DOI: 10.61944/bids. https://ejurnal.id/index.php/bids/index

Waste Classification using EfficientNetB3-Based Deep Learning for Supporting Sustainable Waste Management

Sarifah Agustiani1,*, Agus Junaidi2, Riska Aryanti1, Anton Abdul Basah Kamil3
1 Faculty of Engineering and Informatics, Informatics, Bina Sarana Informatika University, Jakarta, Indonesia
2 Faculty of Engineering and Informatics, Information Technology, Bina Sarana Informatika University, Jakarta, Indonesia
3 Faculty of Economics and Social Sciences, Business Administration, Istanbul Gelisim University, Istanbul, Turkey
Email: 1,*sarifah.sgu@bsi.id, 2agus.asj@bsi.id, 3riska.rts@bsi.id, 4akamil@gelisim.
Corresponding Author: sarifah.sgu@bsi.id

Abstract—Waste management is a critical issue in sustainable development, particularly in large urban areas that generate a high volume of waste daily. One of the main challenges is the absence of a fast, accurate, and efficient waste sorting system. This study aims to develop a waste classification model using deep learning based on the EfficientNetB3 architecture to support more sustainable waste management. The model was trained on a dataset obtained from a Kaggle repository, consisting of 4,650 images evenly distributed across six waste categories: batteries, glass, metal, organic, paper, and plastic. Training and evaluation were conducted using a supervised image classification approach. The model achieved an overall accuracy of 93%, with average precision, recall, and F1-score values of 93%. Among all categories, organic waste achieved the highest F1-score, followed by paper and batteries, while the plastic and metal categories obtained the lowest F1-scores. These results demonstrate that the EfficientNetB3 architecture is effective in performing multi-class waste classification.
This model has the potential to be implemented in camera-based waste sorting systems such as smart bins or automated recycling units, thereby contributing to the reduction of unprocessed waste and supporting the achievement of Sustainable Development Goal (SDG) 12: responsible consumption and production.

Keywords: Automated Waste Classification; Deep Learning; EfficientNetB3; Image Classification; Sustainable Waste Management

INTRODUCTION
Waste management that has not been optimized remains a major problem in various regions, especially in urban areas. Population growth and increased economic activity contribute to the increasing volume of waste every day. Unfortunately, the capacity of the available waste management infrastructure has not been able to keep up with this rate of increase. This condition results in the accumulation of waste in a number of locations, which not only damages the aesthetics of the environment but also poses a risk to public health and exacerbates environmental pollution if not addressed immediately. Waste that is not handled properly has the potential to increase environmental and economic burdens, hinder recycling, and contribute to pollution. According to data from the Central Statistics Agency (BPS), Indonesia produces more than 64 million tons of waste annually, with most of it coming from the domestic sector. According to the World Health Organization (WHO), around 2 billion tons of solid waste is generated each year, and this number is expected to continue to increase. Beyond being an environmental problem, inefficient waste management can also cause health impacts, pollute natural resources, and exacerbate climate change. These conditions indicate the need for more integrated and innovative strategies to manage waste sustainably. Therefore, innovations are needed that can increase efficiency in waste management, one of which is utilizing technology to simplify and speed up the sorting process.
One promising solution is the use of deep learning technology for automatic classification of waste based on its type. This technology can help identify different types of waste from images with higher speed and accuracy than manual systems. Previous research in this field has used various deep learning algorithms, especially those based on convolutional neural networks (CNN), to classify waste types in an effort to support more sustainable waste management. However, despite the various studies that have been conducted, most still face obstacles related to accuracy and high computation time. Therefore, this research aims to develop an automatic classification system for waste types using the EfficientNetB3 architecture, which is known to deliver better performance with fewer parameters than other CNN architectures. Relevant previous research by D. Aulia applied deep learning for waste classification using the Convolutional Neural Network (CNN) algorithm with a MobileNet architecture, reporting an average accuracy of 88%, precision of 89%, recall of 88%, and F1-score of 87%. When tested, the model achieved 86% accuracy, which is considered quite good given the visual complexity of various household waste. Another study, by N. Saptadi, aimed to design a digital image-based classification model to assess the quality of organic waste using a CNN architecture, with an accuracy of 85%. Further research by R. Kurniawan classified inorganic waste types using the Xception architecture with the Adam optimizer and a learning rate of 0.001, which resulted in an accuracy of 87.81%. A. Sihananto applied a CNN built from convolutional layers, max pooling layers, and ReLU activations, reporting a training accuracy of 83% and a validation accuracy of 61%. Other research was conducted by A. Marzuki
, who developed a plastic bottle classification system based on image processing and machine learning, using the Convolutional Neural Networks (CNN) algorithm. The test results show that the model is able to identify the type of bottle waste with an average accuracy of 57.5%. Further research by Risfenda developed a deep learning-based waste classification system using the EfficientNet-B0 architecture with a transfer learning approach. The model was trained on a dataset consisting of six types of waste, and the training results show good performance, with a test accuracy of 94% and an F1-score of 91.96%. The main difference from previous studies lies in the architectural approach and classification scope used. This research specifically adopts the EfficientNetB3 architecture, an extension of previous EfficientNet variants designed to achieve an optimal balance between high accuracy and computational efficiency. Unlike the MobileNet architecture, conventional CNNs, Xception, or EfficientNet-B0, EfficientNetB3 uses a more sophisticated compound scaling technique that adjusts the depth, width, and resolution of the network simultaneously. This allows the model to perform image classification with fewer parameters while still maintaining high accuracy, making it more suitable for real-time applications such as camera-based automatic waste sorting systems. Apart from the architecture, this study also introduces a more comprehensive multi-class classification approach than previous studies: while many studies focus only on organic waste, plastic bottles, or a limited set of inorganic waste, this study covers six distinct waste categories.
This broad class coverage demonstrates the improved ability of the model to handle the visual and textural diversity of different types of waste, which has previously been a challenge in the development of image-based waste classification systems. Thus, this research contributes not only to the improvement of the accuracy and efficiency of the classification model, but also to the scalability and relevance of waste classification systems in the context of real-world implementation. The innovation offered is expected to accelerate the automation of waste sorting and support a sustainable waste management system, in line with the Sustainable Development Goals (SDGs), particularly Goal 12 on responsible consumption and production. The main objective of this research is to develop an automatic, deep learning-based classification system for waste types using the EfficientNetB3 architecture that is capable of performing classification with high accuracy and good computational efficiency. Several previous studies have successfully applied the EfficientNetB3 method to various image classification tasks and shown excellent results, with high accuracy values. This demonstrates that the EfficientNetB3 architecture can capture visual feature representations effectively while maintaining computational efficiency, making it well suited to image classification systems, including deep learning-based waste classification. This research is expected to be a practical solution in waste management, especially in urban areas that face the challenge of increasing volume and complexity of waste types. With the results obtained, this system has the potential to be implemented in various applications to support more effective, sustainable, and environmentally friendly waste management.
RESEARCH METHODOLOGY

1 Research Stages
This research was conducted through a series of systematic stages aimed at developing and evaluating a deep learning-based automatic classification model of waste types using the EfficientNetB3 architecture. The methodology consists of six stages: data collection, splitting, preprocessing, augmentation, model building, and evaluation. The complete workflow is illustrated in Figure 1.

Figure 1. Research Stage

2 Data Collection and Splitting
The dataset used in this study consists of 4,650 waste images, obtained from the Kaggle open-access repository. These images were manually curated to ensure quality, clarity, and relevance to six predefined waste classes: battery, glass, metal, organic, paper, and plastic. Each class is equally represented with 775 images, making the dataset well balanced for multi-class classification. To ensure the robustness of the model and reduce the risk of overfitting, the dataset was split into three distinct subsets:
Training set (70%): Used to train the EfficientNetB3 model.
Validation set (10%): Used to tune hyperparameters and monitor model performance during training.
Test set (20%): Used for the final evaluation of the model's generalization capability.
The split was performed using the validation_split parameter of the ImageDataGenerator class for training and validation, while the test set was separated using a manual directory partition. This splitting strategy ensures that the performance metrics reflect the model's ability to classify unseen data accurately and fairly. Table 1 presents the distribution of image data per class.
Table 1. Image Distribution
Waste Class      Number of Images
Battery          775
Glass            775
Metal            775
Organic          775
Paper            775
Plastic          775
Total            4,650
This balanced and well-partitioned dataset forms the foundation for the subsequent training and evaluation of the classification model.

3 Data Preprocessing
To ensure compatibility with the EfficientNetB3 model architecture, all images were processed through the following preprocessing steps:
Resizing: All images were resized to 224 × 224 pixels, the standard EfficientNetB3 input size.
Color Conversion: Images were converted to RGB format to have three consistent color channels.
Normalization: Pixel values were normalized into the range [0, 1] by dividing each pixel value by 255, to help stabilize the training process.
One-hot Encoding: Waste category labels were converted into one-hot encoded vectors to suit multi-class classification.
These steps ensure the quality of the data and its suitability for the deep learning model, so that the training process can proceed efficiently and produce optimal performance.

4 Data Augmentation
To increase the diversity of the training data and prevent overfitting, random data augmentation techniques were applied to the waste images. Data augmentation is an important process in training deep learning models, especially when the dataset is relatively limited and additional visual variety is needed to help the model generalize to new data. In this study, augmentation was performed in real-time during training using the ImageDataGenerator class from the Keras library. Augmentation was applied only to the training data, not to the validation or testing data, to keep the model evaluation objective. The augmentation techniques used include:
Random Rotation: Images are randomly rotated up to a maximum of 20 degrees to the right or left, to simulate variations in object orientation.
Zooming: Images are randomly magnified within a range of 0-20%, allowing the model to recognize objects at different distances.
Horizontal Flipping: Horizontal mirroring of the image is used to improve symmetry and variation in object appearance.
Shear Transformation: The image is distorted by a certain angle to give a tilt effect, mimicking natural perspective distortions.
Brightness Adjustment: Brightness levels are randomly adjusted within a certain range so the model can identify objects under different lighting conditions.
The application of this augmentation produces wide visual variety from the same data, without changing the original labels, thus improving the generalization of the model to new data. In this way, the model becomes more resilient to noise and real-world variations that may not be present in the original data.

5 Model Building
The model used in this study is EfficientNetB3, a convolutional neural network architecture developed by Google that offers high efficiency in terms of accuracy and number of parameters. The model was fine-tuned from a pre-trained version using weights from ImageNet. The final layer was adjusted for six-class classification using a softmax activation function. The training parameters were set as follows:
Optimizer: Adam
Learning rate: 0.
Loss function: Categorical Crossentropy
Batch size: 64
Number of epochs: 30
Validation split: 10%

6 Model Evaluation
After the training process was complete, the model was evaluated on test data comprising 20% of the total dataset. The evaluation uses five main instruments: accuracy, precision, recall, F1-score, and the confusion matrix.
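As an illustration, the augmentation and model-building configuration described above can be sketched with Keras roughly as follows. The shear and brightness ranges and the learning-rate value are illustrative assumptions (the exact values are not fully stated in the text), not settings confirmed by the study.

```python
# Sketch of the augmentation and EfficientNetB3 setup described in the text.
# The shear/brightness ranges and the learning rate are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # EfficientNetB3 input size used in the study
NUM_CLASSES = 6        # battery, glass, metal, organic, paper, plastic

# Real-time augmentation for the training data only; the 10% validation
# split is taken from the same generator, as described in the methodology.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,            # normalize pixel values to [0, 1]
    rotation_range=20,            # random rotation up to 20 degrees
    zoom_range=0.2,               # random zoom in the 0-20% range
    horizontal_flip=True,         # horizontal mirroring
    shear_range=0.2,              # shear transformation (assumed range)
    brightness_range=(0.8, 1.2),  # brightness adjustment (assumed range)
    validation_split=0.1,
)

def build_model(weights="imagenet"):
    """EfficientNetB3 backbone with a six-way softmax head."""
    base = tf.keras.applications.EfficientNetB3(
        include_top=False, weights=weights, input_shape=(*IMG_SIZE, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(base.input, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # assumed value
        loss="categorical_crossentropy",
        metrics=["accuracy"])
    return model
```

Calling `build_model(weights=None)` builds the same architecture without downloading the ImageNet weights, which is convenient for offline testing.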
Accuracy is used to measure the overall percentage of correct predictions against all predictions made by the model. Precision evaluates the model's ability to avoid false positive misclassifications, while recall measures the extent to which the model is able to detect all truly positive samples. The F1-score is the harmonic mean of precision and recall, providing an overall picture of the balance between the two metrics. In addition, the confusion matrix is used to describe the distribution of the model's classification results, both correct and incorrect, across each waste class. The model is then evaluated based on its predictions on the test data. The performance of each class is analyzed to determine the extent to which the model is able to perform multi-class classification accurately and in a balanced manner. This evaluation is important to ensure that the model does not excel only in certain classes, but performs consistently across waste categories.

RESULTS AND DISCUSSION
This study applies the EfficientNetB3 architecture, one of the EfficientNet variants developed by Google, designed to achieve high accuracy with optimal computational efficiency. The model is adapted from the version pre-trained on ImageNet and then fine-tuned to recognize six types of waste: batteries, glass, metal, organic, paper, and plastic. The output layer is adjusted to six neurons with a softmax activation function for multi-class classification. Before training, the dataset was divided into three parts: 70% for training, 10% for validation, and 20% for testing. This division ensures that model training is performed on sufficiently diverse data, that validation can be used to avoid overfitting, and that testing is conducted on data the model has never seen, to measure its performance objectively.
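A minimal sketch of how the five evaluation metrics described above can be computed with scikit-learn; the label arrays below are synthetic placeholders for illustration only, not the study's actual predictions.

```python
# Compute the five evaluation metrics named above with scikit-learn.
# The labels here are synthetic placeholders, not the study's results.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

CLASSES = ["battery", "glass", "metal", "organic", "paper", "plastic"]

def evaluate(y_true, y_pred):
    """Macro-averaged metrics plus the per-class confusion matrix."""
    labels = list(range(len(CLASSES)))
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro",
                                     labels=labels, zero_division=0),
        "recall": recall_score(y_true, y_pred, average="macro",
                               labels=labels, zero_division=0),
        "f1": f1_score(y_true, y_pred, average="macro",
                       labels=labels, zero_division=0),
        "confusion": confusion_matrix(y_true, y_pred, labels=labels),
    }

# Toy example: two samples per class, with a single misclassification.
y_true = np.repeat(np.arange(6), 2)   # [0, 0, 1, 1, ..., 5, 5]
y_pred = y_true.copy()
y_pred[0] = 2                         # one battery mistaken for metal
metrics = evaluate(y_true, y_pred)
```

The confusion matrix returned here has true classes on the rows and predicted classes on the columns, matching the way Figure 4 is read.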
1 Model Evaluation Results
The training process was carried out using the EfficientNetB3 architecture, with the model trained on 70% of the data, validated on 10%, and tested on the remaining 20% of previously unseen data. During training, the model showed stable convergence in both loss and accuracy. Figures 2 and 3 show the accuracy and loss curves over epochs for the training and validation data.

Figure 2. Accuracy Training and Validation Curve

Figure 3. Loss Training and Validation Curve

From the graphs, it can be seen that the model does not experience overfitting, which indicates that the data augmentation process and architecture selection are close to optimal. This is reflected in the training and validation loss curves, which decrease consistently as epochs pass, without significant fluctuations on the validation data. This steady decline indicates that the model generalizes well despite the variation in the images used for training. The data augmentation techniques applied dynamically during training, such as rotation, zooming, flipping, and brightness changes, also enrich the diversity of the data, allowing the model to learn from a greater variety of images.
In addition, the EfficientNetB3 architecture, with its computational efficiency and fewer parameters than comparable models, helps the model avoid overfitting by maintaining a balance between model complexity and good generalization. Apart from the loss and accuracy graphs, model performance can also be assessed through other metrics such as precision, recall, and F1-score, calculated for each class to provide a more comprehensive picture. Precision measures how many of the positively predicted samples are actually relevant, while recall measures how many relevant samples the model successfully finds. The F1-score is the harmonic mean of precision and recall, reflecting the balance between the two. The overall averages of precision, recall, and F1-score all stand at 93%, showing that the model is not only accurate in its predictions in general, but also well balanced in avoiding false positives and detecting relevant samples. The per-class performance of the model is presented in Table 2.

Table 2. Classification Report
Waste Class (Precision, Recall, F1-score per class): Organic, Paper, Battery, Glass, Plastic, Metal; average of 0.93 across all three metrics.

From the table, it can be seen that the organic class shows the highest performance, followed by paper and battery with F1-scores of 0.97 each. This shows that the model recognizes the distinctive visual features of organic objects, such as natural colors and irregular shapes. On the other hand, the metal and plastic classes have relatively lower F1-scores, possibly due to visual or textural similarities between the two classes, such as surface luster and similar packaging shapes. Overall, however, the model was able to classify the waste types with very good performance.
The high precision and recall values show that the model is not only able to identify the correct class accurately, but also recognizes all classes consistently.

2 Confusion Matrix
To provide a more detailed view of correct and incorrect classifications, Figure 4 presents a confusion matrix of the model's predictions for each class. From the confusion matrix, it can be seen that the organic and paper classes are the best performing, while batteries are slightly more often confused with the metal and plastic classes, possibly due to similarities in color or shape. The matrix also shows that most misclassifications occurred between the plastic and metal classes, and between glass and plastic. This indicates overlapping visual features between objects in these classes, making it difficult for the model to distinguish them consistently. Such errors could be reduced with advanced training strategies, such as fine-tuning the final layers of the model or adding more varied training data for frequently confused classes. The use of segmentation techniques or multi-modal inputs, such as a combination of RGB and spectral images, can also be considered in future research.

Figure 4. Confusion Matrix

3 ROC and AUC
To measure the performance of the classification model more thoroughly, in addition to metrics such as accuracy, precision, recall, and F1-score, an evaluation using the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) value is also conducted.
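The one-vs-rest ROC/AUC evaluation can be sketched as follows; the softmax scores below are synthetic, purely to illustrate the computation rather than reproduce the study's results.

```python
# One-vs-rest ROC/AUC per class, as used in the multi-class evaluation.
# The scores below are synthetic, purely to illustrate the computation.
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

N_CLASSES = 6

def per_class_auc(y_true, y_score):
    """y_true: int labels, shape (n,); y_score: softmax outputs, shape (n, 6)."""
    y_bin = label_binarize(y_true, classes=list(range(N_CLASSES)))
    aucs = {}
    for c in range(N_CLASSES):
        # Each class versus the rest combined, over all score thresholds.
        fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
        aucs[c] = auc(fpr, tpr)
    return aucs

# Synthetic sanity check: confidently correct softmax scores give AUC = 1.0.
y_true = np.arange(60) % N_CLASSES                 # ten samples per class
y_score = np.full((60, N_CLASSES), 0.01)
y_score[np.arange(60), y_true] = 0.95
aucs = per_class_auc(y_true, y_score)
```

Averaging the per-class values of `aucs` gives the macro-average AUC reported in the evaluation.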
ROC is a curve that describes the relationship between the true positive rate (TPR) and the false positive rate (FPR) at various classification thresholds. The closer the ROC curve is to the top-left corner of the graph, the better the model's performance. In this study, since the classification is multi-class, the ROC is calculated using a one-vs-rest approach: each class is compared against all remaining classes combined, and an ROC curve is generated for each class.

Figure 5. ROC Curve

The evaluation results show that the model achieved an average AUC value of 0.92, indicating a very high discriminatory ability in distinguishing one class from another. AUC values close to 1 indicate that the model consistently generates high true positive rates with low false positive rates. This ROC evaluation reinforces the earlier metrics such as accuracy and F1-score, and confirms that the model not only performs well in aggregate, but is also stable and reliable in classifying each specific waste type. In other words, the model is not only "good in general", but also robust and effective when dealing with variations in data between classes. The use of ROC and AUC in this study is an important component of model validation, as it describes classification performance in a more detailed and holistic manner, especially in the context of real applications that demand high accuracy and minimal classification error.
4 Visualization of Model Prediction Results
To provide a visual representation of the model's performance, prediction results on sample images from the glass and plastic classes are shown. This visualization illustrates how the model recognizes and classifies images into the six main categories: battery, glass, metal, organic, paper, and plastic. The prediction visualizations are shown in Figure 6 for the glass class and Figure 7 for the plastic class.

Figure 6. Glass Class Prediction Result

Figure 7. Plastic Class Prediction Result

From the visualization results, it can be seen that in Figure 6 the model predicts that the input waste image is glass with high confidence (93%), while in Figure 7 the model predicts that the input image is plastic with high confidence (84%). This shows that the model recognizes waste types with a high level of confidence and can assign an image to its class. The visualization not only helps in intuitively understanding the classification ability of the model, but is also useful for identifying potential misclassifications that may still occur, especially in images with similar visual characteristics across classes.

5 Comparison with Related Research
In comparison, the model in this study showed superior results relative to several previous studies. The EfficientNetB3 model produced higher accuracy and F1-scores, demonstrating its effectiveness in multi-class waste classification. A detailed comparison is presented in Table 3.

Table 3. Comparison with Related Research
Researcher        Architecture      Accuracy (%)
Aulia             MobileNet         86
Saptadi           CNN               85
Kurniawan         Xception          87.81
Sihananto         CNN               83 (training)
Marzuki           CNN               57.5
Risfenda          EfficientNetB0    94
Proposed Method   EfficientNetB3    93

Table 3 shows that this study has advantages in terms of the model architecture used: EfficientNetB3 is more optimized in terms of accuracy and parameter efficiency than the earlier models. In addition, the dataset in this study covers six main classes that represent the most common types of waste in daily life, making it more applicable to automated waste sorting systems. Thus, this research addresses a gap: few studies have applied EfficientNetB3 directly to multi-class waste classification with such comprehensive class coverage and balanced evaluation metrics.

6 Interpretation and Implications for SDG 12
The results show great potential for applying deep learning-based automatic classification systems to support sustainable waste management. The model can be deployed in smart waste bin devices or camera-based automatic recycling systems capable of real-time waste sorting. This directly supports Goal 12 of the Sustainable Development Goals (SDGs), Responsible Consumption and Production. With this technology, the sorting process no longer relies entirely on human labor, which is error-prone and time-consuming. In addition, automatic classification contributes to improving the effectiveness of the recycling system, reducing the amount of waste that ends up in landfills, and increasing public awareness of sorting waste at the source.

7 Limitations of the Study
Although the results are promising, this study has several limitations.
First, the data per class is relatively balanced but limited to a controlled environment. This may affect the model's generalization to real-world conditions, such as low lighting or complex backgrounds. Second, the system has not been tested in real-time scenarios, where classification speed is a crucial factor for real-world implementation. Another limitation is the absence of contextual understanding, such as the size or location of objects in the image, which could help differentiate visually similar objects. Further development could consider integrating additional sensors or multimodal approaches to improve classification accuracy.

CONCLUSION
This study successfully developed a deep learning-based waste type classification model using the EfficientNetB3 architecture. The model was trained on a balanced dataset consisting of six main waste categories: battery, glass, metal, organic, paper, and plastic. The evaluation results showed a high accuracy of 93%, with average precision, recall, and F1-score values of 93%. The organic category achieved the highest F1-score, while metal and plastic had the lowest. The primary strength of this study lies in the application of the EfficientNetB3 architecture, which demonstrates a balance between accuracy and computational efficiency in multi-class image classification tasks. These findings indicate that deep learning methods, especially EfficientNet-based models, can be effective in recognizing various types of waste from visual data. However, this study also has several limitations. The dataset was collected in a controlled environment and may not fully represent real-world conditions such as variable lighting, background clutter, or occlusion. The system has not been tested in real-time scenarios, where speed and latency are crucial.
Moreover, the model does not incorporate contextual understanding such as object size, position, or surrounding elements, which can be important for distinguishing visually similar classes like metal and plastic. Future work should address these limitations by incorporating more diverse and real-world datasets, testing the system in real-time environments, and exploring multimodal or sensor-based approaches to enhance classification robustness and generalization. These improvements will support the development of more practical, scalable, and reliable solutions for automated waste classification, contributing to the achievement of Sustainable Development Goal (SDG) 12: Responsible Consumption and Production. REFERENCES