Journal of Applied Sciences, Management and Engineering Technology, Vol. 5, No. 2, 2024: 58-66

Implementation of the CNN Deep Learning Method in Tajong (Sarung) Samarinda Classification

Sitti Muhartini1, Andi Sunyoto2, Alva Hendi Muhammad3
1,2,3 Master of Informatics Engineering Department, Universitas Amikom Yogyakarta, Yogyakarta
Email: 112tini@students.

Received: 2024-08-06; Received in revised form: 2024-08-20; Accepted: 2024-09-30

Abstract
Samarinda sarongs are among Indonesia's traditional fabrics, famous for their beautiful motifs and textures. The fabric is made with traditional weaving techniques on non-machine looms (ATBM), resulting in a unique and distinctive diversity of textures. The difference between looms, machine versus non-machine, produces a difference in the texture of the Samarinda sarong, visible in the thread density, the smoothness of the texture, and the sharpness of the motif. For certain Samarinda sarong motifs that do not require special detail, machine weaving can closely resemble non-machine woven products. This study aims to develop a classification model for Samarinda sarong texture based on the loom (machine and non-machine) using the Deep Learning method. The model is expected to help increase the selling value of Samarinda sarongs and to preserve and promote traditional fabrics. In this context, the choice between DenseNet121 and VGG16 can depend on user preferences or specific needs, such as computing speed or model size.

Keywords: ATBM, CNN, DenseNet121, Sarong, VGG16

Introduction
The Samarinda sarong is a traditional Indonesian fabric famous for its beautiful motifs and textures. It is made using traditional weaving techniques on non-machine looms (ATBM), producing a variety of unique and distinctive textures. The differences between looms, machine and non-machine, result in differences in the texture of the Samarinda sarong.
This difference can be seen in the density of the thread, the smoothness of the texture, and the sharpness of the motif. For certain Samarinda sarong motifs that do not require special detail, machine weaving produces woven sarongs that almost resemble non-machine woven products. This can, of course, open the way for irresponsible sellers to deceive consumers who do not really understand the differences between these fabrics. A previously proposed deep learning-based model with data augmentation and transfer learning for automatic woven fabric classification was shown to be more accurate and reliable than other methods, with ResNet-50 proving more accurate than VGGNet. Other prior work applies a pre-processing stage in which the fabric image is converted to grayscale format, blurred using a Gaussian blur, and binarized to remove noise; fabric texture features, representing dark and light patterns in the fabric image, are then extracted using an LSTM model. Test results show that this method can classify fabric texture with an accuracy of 98.8% and detect fabric defects with an accuracy of 97%. This research aims to develop a texture classification model for Samarinda sarongs based on the loom (machine and non-machine) using the Deep Learning method. It is hoped that this model can help increase the selling value of Samarinda sarongs and preserve and promote traditional fabrics. Previous research used 500 images of Samarinda sarongs (250 machine, 250 non-machine); its main model was ResNet, compared with other models such as VGGNet and LeNet. In that research, Gaussian blur was used to clarify defects in fabric, although it is more appropriate for images that need to be smoothed. This research, in contrast, applies and compares the evaluation results of the CNN LeNet, VGG16, ResNet50, and DenseNet121 models on 2,000 texture images.
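The grayscale, Gaussian-blur, and binarization pre-processing described above can be sketched in plain NumPy. This is a minimal illustration, not the cited authors' implementation; the kernel radius, sigma, and mean-based threshold are assumptions.

```python
import numpy as np

def to_grayscale(img_rgb):
    # Luminance weighting (ITU-R BT.601) collapses RGB to one channel
    return img_rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_blur(gray, sigma=1.0, radius=2):
    # Separable 1-D Gaussian kernel applied along rows, then columns
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def binarize(gray, threshold=None):
    # Global threshold (image mean by default) to suppress residual noise
    if threshold is None:
        threshold = gray.mean()
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

# Synthetic image standing in for a fabric photo
rgb = np.random.default_rng(0).integers(0, 256, (64, 64, 3)).astype(float)
binary = binarize(gaussian_blur(to_grayscale(rgb)))
```

In practice a library routine such as OpenCV's `GaussianBlur` would replace the hand-rolled convolution; the point is only the grayscale-blur-binarize order of operations.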
The dataset comprises 1,000 texture images of Samarinda woven fabric produced by non-machine looms and 1,000 texture images of Samarinda woven fabric produced by machine looms. These images were taken manually using an X4 digital microscope (1600X). It is hoped that the results of this research can contribute to the development of texture and fabric detection systems for traditional Indonesian fabrics.

Method
Figure 1. Flowchart of Research Methodology (Raw Data → Preprocessing: Resize, Normalization, Augmentation → Training, Validation, and Testing Data → Various CNN Architectures → Evaluation)

Based on Figure 1 regarding the flow of the research process, the stages can be briefly explained as follows:

Raw Data
Sarong texture images are divided into two types, which are separated into individual folders to help the classification process. The pictures themselves are taken with an X4 digital microscope at a resolution of 1920x1440. Texture images of sarongs produced by machine looms are added to the ATM folder, and texture images of sarongs produced by non-machine looms are added to the ATBM folder.

Pre-Processing
The texture images of ATBM and ATM sarongs are resized to 256x256 pixels and pre-processed using various augmentation techniques such as rotation, scaling, and translation.

Model Training
The training process uses the Samarinda sarong dataset, which has been classified based on the loom. At this stage two scenarios are implemented: the first scenario uses 80% of the data for training and 20% for testing, while the second scenario uses 70% of the data for training and 30% for testing.
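The two train/test scenarios above can be sketched with scikit-learn's `train_test_split`. The file names and the ATM/ATBM folder layout below are hypothetical placeholders standing in for the resized 256x256 images.

```python
from sklearn.model_selection import train_test_split

# Placeholder paths for 1,000 machine (ATM) and 1,000 non-machine (ATBM) images
paths = [f"ATM/{i}.png" for i in range(1000)] + [f"ATBM/{i}.png" for i in range(1000)]
labels = [0] * 1000 + [1] * 1000  # 0 = ATM, 1 = ATBM

# Scenario 1: 80% train / 20% test; Scenario 2: 70% train / 30% test
for test_size in (0.20, 0.30):
    X_tr, X_te, y_tr, y_te = train_test_split(
        paths, labels, test_size=test_size, stratify=labels, random_state=42)
    print(len(X_tr), len(X_te))  # 1600 400, then 1400 600
```

Stratifying on the labels keeps the ATM/ATBM ratio identical in both partitions, which matters when per-class precision and recall are reported later.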
Model Testing
The testing process classifies the test images with the CNN and evaluates them using a confusion matrix. The data have previously been resized, normalized, and augmented, and a Gabor filter and G2RGB have been applied as additional data variants.

Trial and Evaluation
The testing process runs several test flows as an evaluation process to determine the accuracy of model performance.

Results and Discussion
This research uses the Deep Learning CNN method with DenseNet121 and VGG16, processed on Google Colab (https://colab.research.google.com/) using the Python language. The scenarios follow the Model Training stage described above: the first scenario uses 80% of the data for training and 20% for testing, while the second scenario uses 70% of the data for training and 30% for testing.

Discussion of the VGG16 Split Data Model
The research applied the CNN VGG16 Deep Learning method in both scenarios, obtaining the following confusion matrix and graph results.

Deep Learning CNN Model VGG16, Split Data (…:15:…)
Figure 2. Confusion Matrix, VGG16 Split Data Model (…:15:…)
Figure 3. VGG16 Split Data Model Graph (…:15:…)

Based on Figure 2 and Figure 3 (produced in Google Colab), the following results are obtained:
Table 1. Description of the Confusion Matrix, VGG16 Split Data Model (…:15:…). Rows: ATBM, ATM, Accuracy, Macro avg, Weighted avg; columns: Precision, Recall, F1-Score, Support.
Table 2. Description of the VGG16 Split Data Model Graph (…:15:…). Columns: Epoch, Train_Accuracy, Val_Accuracy, Test_Accuracy, Train_Loss, Val_Loss, Test_Loss.

Deep Learning CNN VGG16, Split Data (…:10:…)
Figure 4. Confusion Matrix, VGG16 Split Data Model (…:10:…)
Figure 5. VGG16 Split Data Model Graph (…:10:…)
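The confusion matrix and the per-class precision, recall, F1-score, and support reported in the tables can be produced with scikit-learn. The predictions below are illustrative placeholders, not the study's results.

```python
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical test labels and predictions: 0 = ATM (machine), 1 = ATBM (non-machine)
y_true = [0] * 100 + [1] * 100
y_pred = [0] * 95 + [1] * 5 + [1] * 97 + [0] * 3  # 95 and 97 correct per class

cm = confusion_matrix(y_true, y_pred)
print(cm)
# classification_report yields the same layout as the paper's tables:
# per-class precision/recall/F1/support plus accuracy, macro avg, weighted avg
print(classification_report(y_true, y_pred, target_names=["ATM", "ATBM"]))
```

The row/column layout of `confusion_matrix` follows the sorted label order, so row 0 holds the true-ATM counts and row 1 the true-ATBM counts.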
Based on Figure 4 and Figure 5 (produced in Google Colab), the following results are obtained:
Table 3. Description of the Confusion Matrix, VGG16 Split Data Model (…:10:…). Rows: ATBM, ATM, Accuracy, Macro avg, Weighted avg; columns: Precision, Recall, F1-Score, Support.
Table 4. Description of the VGG16 Split Data Model Graph (…:10:…). Columns: Epoch, Train_Accuracy, Val_Accuracy, Test_Accuracy, Train_Loss, Val_Loss, Test_Loss.

Discussion of the DenseNet121 Split Data Model
The research applied the CNN DenseNet121 Deep Learning method in both scenarios, obtaining the following confusion matrix and graph results.

Deep Learning CNN Model DenseNet121, Split Data (…:15:…)
Figure 6. Confusion Matrix, DenseNet121 Split Data Model (…:15:…)
Figure 7. DenseNet121 Split Data Model Graph (…:15:…)

Based on Figure 6 and Figure 7 (produced in Google Colab), the following results are obtained:
Table 5. Description of the Confusion Matrix, DenseNet121 Split Data Model (…:15:…). Rows: ATBM, ATM, Accuracy, Macro avg, Weighted avg; columns: Precision, Recall, F1-Score, Support.
Table 6. Description of the DenseNet121 Split Data Model Graph (…:15:…). Columns: Epoch, Train_Accuracy, Val_Accuracy, Test_Accuracy, Train_Loss, Val_Loss, Test_Loss.

Deep Learning CNN Model DenseNet121, Split Data (…:10:…)
Figure 8. Confusion Matrix, DenseNet121 Split Data Model (…:10:…)
Figure 9. DenseNet121 Split Data Model Graph (…:10:…)

Based on Figure 8 and Figure 9 (produced in Google Colab), the following results are obtained:
Table 7. Description of the Confusion Matrix, DenseNet121 Split Data Model (…:10:…). Rows: ATBM, ATM, Accuracy, Macro avg, Weighted avg; columns: Precision, Recall, F1-Score, Support.
Table 8. Description of the DenseNet121 Split Data Model Graph (…:10:…). Columns: Epoch, Train_Accuracy, Val_Accuracy, Test_Accuracy, Train_Loss, Val_Loss, Test_Loss.
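A transfer-learning setup of the kind compared here, a DenseNet121 or VGG16 backbone with a binary ATM/ATBM head, can be sketched in Keras. `weights=None` keeps the sketch self-contained with no download; the study would presumably use pretrained ImageNet weights, and freezing the backbone is an assumption, not a detail stated in the paper.

```python
# Sketch only: backbone + binary head for 256x256 sarong texture images
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121, VGG16

def build_classifier(backbone_cls):
    backbone = backbone_cls(include_top=False, weights=None,
                            input_shape=(256, 256, 3), pooling="avg")
    backbone.trainable = False  # assumed: train only the classification head
    return models.Sequential([
        backbone,
        layers.Dense(1, activation="sigmoid"),  # 0 = ATM, 1 = ATBM
    ])

model = build_classifier(DenseNet121)  # or build_classifier(VGG16)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print(model.output_shape)  # single sigmoid probability per image
```

Swapping `DenseNet121` for `VGG16` changes only the backbone, which is what makes the side-by-side comparison in the two discussion sections straightforward.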
Conclusion
Both models are able to perform the task of fabric texture classification very well. There is no significant difference in the overall effectiveness of the two models, making both excellent choices for this task. However, VGG16 is slightly simpler in architecture than DenseNet121, which may make it faster in training and inference. In this context, the choice between DenseNet121 and VGG16 may depend on user preferences or specific needs, such as computing speed or model size.