ISSN 2087-3336 (Print) | 2721-4729 (Online)
TEKNOSAINS: Jurnal Sains, Teknologi dan Informatika, No. 1, 2026
http://jurnal.id/index.php/tekno | https://doi.org/10.37373/tekno
Enhancing rice plant disease detection through transfer learning and image segmentation with YOLOv11
Muhammad Reza Redo Islami*, Sylvia, Atika Arpan, Rizka Permata, Dwi Handoko, Rima Maulini, Dwirgo Sahlinal
*Politeknik Negeri Lampung, Indonesia, 35141
Corresponding author: reza.redo@polinela.
Submitted: 1/7/2025 | Revised: 28/7/2025 | Accepted: 7/8/2025
ABSTRACT
Rice is a staple food globally, yet its productivity is often threatened by diseases such as blast and brown spot. Traditional diagnostic methods relying on human observation are prone to delays and inaccuracies. This study introduces an automated detection system that utilizes YOLOv11-seg to improve the accuracy and efficiency of rice disease identification. The model integrates object detection and instance segmentation, is trained on over 6,000 annotated images covering six categories (five disease types and healthy leaves), and leverages transfer learning from COCO weights. Experimental results show that the model achieved a bounding box mAP@50 of 0.607 and a segmentation mAP@50 of 0.564, with F1-scores of 0.62 and 0.59, respectively. The highest detection accuracy was recorded for healthy leaves, while segmentation performance declined on visually similar classes such as Brown Spot and Sheath Blight. Overfitting was observed during training, with a 15-20% gap between training and validation metrics. These findings demonstrate the model's potential for real-time field application in precision agriculture. Future improvements should focus on enhancing spatial accuracy and robustness through synthetic data generation and architectural optimization.
Keywords: YOLOv11-seg; rice leaf disease detection; instance segmentation; deep learning
Introduction
Rice is a staple food for more than half of the world's population, and ensuring its healthy production is vital for global food security. However, productivity is frequently threatened by foliar diseases such as blast, brown spot, sheath blight, and tungro, which may reduce both yield and quality. Manual identification methods, traditionally used by farmers and agricultural officers, suffer from delays, subjectivity, and reliance on expert availability. Recent developments in deep learning, particularly convolutional neural networks (CNNs), have enabled promising results in plant disease detection. YOLO (You Only Look Once) models have gained popularity due to their real-time detection capabilities. Studies using models such as VGG16, ResNet50, and Inception have reported high accuracy in image-based rice disease classification, yet many focus solely on whole-image classification. This lack of spatial symptom mapping limits diagnostic interpretability. While recent works have explored YOLOv8 to YOLOv10 in agricultural contexts, integration of the newer YOLOv11 with segmentation remains underexplored. Furthermore, approaches that combine bounding box detection and instance segmentation using annotated masks for multi-label tasks are scarce. This gap is especially significant for plant diseases with overlapping symptoms such as Brown Spot and Leaf Scald. This study introduces a YOLOv11-seg model designed to detect and localize six rice leaf classes (five disease types and one healthy class) using both bounding box and mask annotations. Trained with over 6,000 Roboflow-annotated images and pre-trained COCO weights, the model aims to improve spatial detection performance and training efficiency.
TEKNOSAINS: Jurnal Sains, Teknologi & Informatika is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
By leveraging segmentation and transfer learning, this research advances the development of AI-powered precision agriculture tools for real-time, field-ready crop monitoring.
Method
Dataset source and characteristics
The dataset used in this study consisted of 6,232 rice leaf images, sourced from the Roboflow platform and manually annotated by agricultural experts. Each image belonged to one of six predefined categories: five rice leaf diseases (Blast, Brown Spot, Sheath Blight, Leaf Scald, and Rice Tungro) and one healthy class. Annotations included both bounding boxes and instance segmentation masks. While rich in class diversity and symptom morphology, the dataset lacks explicit metadata on geographical origin, potentially limiting ecological generalization, a limitation previously noted in similar agricultural datasets. An 80:20 training-validation split was applied, yielding 4,985 training images and 1,247 validation images, consistent with conventional practice in deep learning-based agricultural studies.
Experimental design
The YOLOv11-seg architecture was selected for its unified approach to object detection and instance segmentation. The model comprises three modules: (1) a Backbone for feature extraction; (2) a Neck, integrating C2f and Cross-Path Fusion (CPF) layers to refine spatial features; and (3) a Head, which outputs bounding boxes, segmentation masks, and classification scores for the six classes. Model initialization used pretrained COCO weights to accelerate convergence and enhance generalization, as demonstrated in prior work on rice disease detection using YOLO variants. Training was performed for 90 epochs using Stochastic Gradient Descent (SGD) with a learning rate of 0.001, a batch size of 16, and a dropout-free setting. To avoid catastrophic forgetting, 23 layers were frozen during initial training, following a layer-freezing strategy described in prior work.
Figure 1. The YOLOv11-seg architecture
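The 80:20 training-validation split described above can be reproduced with a short, framework-agnostic helper. This is a minimal sketch, not the authors' code; the function name and the fixed seed are illustrative assumptions.

```python
import random

def split_dataset(image_ids, train_fraction=0.8, seed=42):
    """Shuffle image IDs and split them into train/validation lists."""
    rng = random.Random(seed)            # fixed seed so the split is reproducible
    ids = list(image_ids)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_fraction)  # floor, as in 6232 * 0.8 -> 4985
    return ids[:n_train], ids[n_train:]

# With the paper's 6,232 images, an 80:20 split yields 4,985 / 1,247 images.
train_ids, val_ids = split_dataset(range(6232))
print(len(train_ids), len(val_ids))  # 4985 1247
```

Shuffling before splitting matters here: Roboflow exports often group images by class, and a non-shuffled split would skew the class balance of the validation set.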
Measures
The model's performance was evaluated using multiple metrics. Precision, Recall, and F1-score assess classification performance across disease classes. Mean Average Precision at IoU 0.5 (mAP50) and mAP50:95 quantify object localization accuracy in both bounding box and segmentation outputs, consistent with established evaluation frameworks. A confusion matrix identifies class-specific misclassifications and cross-class confusion patterns. Thresholds for detection confidence were fine-tuned using F1-score curve optimization, resulting in optimal values of 0.306 for bounding boxes and 0.327 for masks.
Result and Discussion
Detection performance analysis
The YOLOv11-seg model demonstrated highly accurate disease detection on rice leaf images, as evidenced by comparative pre- and post-detection visualizations (see Table 1). The system successfully identified all pathological symptoms with precise bounding box localization and spatial segmentation.
Table 1. Comparative pre- and post-detection results (image columns: class type, original image, detection result, true-positive mask, over-segmentation mask; rows include Blast)
The model correctly identified blast disease symptoms, though the predicted segmentation mask encompassed a broader area than the actual lesion (Figure 2). This over-segmentation may stem from the diffuse boundaries of blast lesions.
Figure 2. Blast over-segmentation
Detection accuracy
The YOLOv11-seg model demonstrated competitive detection capability across all six rice leaf classes. As shown in Table 1, the model achieved a bounding box mAP50 of 0.607 and a segmentation mAP50 of 0.564, exceeding the predefined performance threshold. Figure 3 illustrates the performance progression across 90 training epochs.
Figure 3. Performance metrics comparison
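The F1-curve optimization used to pick the confidence cutoffs (0.306 for boxes, 0.327 for masks) can be sketched as follows. The detection tuples and the toy data are hypothetical, but the procedure, sweeping candidate thresholds and keeping the one that maximizes F1, follows the description above.

```python
def f1_at_threshold(detections, num_gt, threshold):
    """Precision, recall, and F1 when keeping only detections whose
    confidence is at or above `threshold`.
    Each detection is a (confidence, is_true_positive) pair."""
    kept = [d for d in detections if d[0] >= threshold]
    tp = sum(1 for _, correct in kept if correct)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 0.0
    recall = tp / num_gt if num_gt else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

def best_threshold(detections, num_gt, step=0.01):
    """Sweep thresholds in [0, 1] and return the one maximizing F1
    (the peak of the F1-confidence curve)."""
    candidates = [i * step for i in range(int(1 / step) + 1)]
    return max(candidates,
               key=lambda t: f1_at_threshold(detections, num_gt, t)[2])

# Toy example: three confident correct detections, two low-confidence
# false positives; the optimal cutoff separates them.
dets = [(0.9, True), (0.8, True), (0.7, True), (0.2, False), (0.1, False)]
t = best_threshold(dets, num_gt=3)
```

The same sweep is run independently for box and mask outputs, which is why the paper reports two different optimal thresholds.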
Class-wise performance
The per-class results are summarized in Table 2. The Healthy category achieved the highest detection accuracy, with a bounding box precision of 0.91 and a correspondingly high recall. In contrast, the Brown Spot class recorded the lowest recall, with only moderate precision, suggesting challenges in capturing its complex symptom patterns.
Table 2. Evaluation of model performance by individual class. Rows: All Classes, Blast, BrownSpot, Healthy, Leaf Scald, RiceTungro, Sheath Blight. Columns: Images, Instances, Box P, Box R, Box mAP50, Box mAP50:95, Mask P, Mask R, Mask mAP50, Mask mAP50:95. (The individual cell values could not be reliably re-aligned after extraction; across all metrics, the Healthy class scored highest.)
Training dynamics
Over the training process, all loss components decreased consistently: box loss fell from 1.3777, segmentation loss from 2.7493, classification loss from 2.5702, and DFL loss from 1.8531, with each ending below 1.0 by the final epoch.
Figure 4. Loss components during training
Confusion matrix analysis
The confusion matrix (Figure 5) reveals that most misclassifications occurred between Brown Spot, Leaf Scald, and Sheath Blight, classes that frequently overlap in spatial patterns. The Healthy class had the highest classification consistency, while the Brown Spot class exhibited the most frequent confusion with other categories.
Figure 5. Confusion matrix
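The class-level confusion summarized in Figure 5 can be reproduced from raw predictions with a small confusion-matrix helper. This is a minimal sketch using the paper's six class names; the prediction data are invented for illustration, mimicking the reported Brown Spot / Sheath Blight confusion.

```python
from collections import Counter

CLASSES = ["Blast", "BrownSpot", "Healthy",
           "LeafScald", "RiceTungro", "SheathBlight"]

def confusion_matrix(true_labels, pred_labels, classes=CLASSES):
    """Matrix with rows = true class, columns = predicted class."""
    counts = Counter(zip(true_labels, pred_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

def per_class_recall(matrix, classes=CLASSES):
    """Fraction of each true class predicted correctly (diagonal / row sum)."""
    recalls = {}
    for i, c in enumerate(classes):
        row_total = sum(matrix[i])
        recalls[c] = matrix[i][i] / row_total if row_total else 0.0
    return recalls

# Invented predictions: Healthy is always recognized, while half of the
# BrownSpot samples are misread as SheathBlight.
y_true = ["Healthy"] * 4 + ["BrownSpot"] * 4
y_pred = ["Healthy"] * 4 + ["BrownSpot", "BrownSpot",
                            "SheathBlight", "SheathBlight"]
m = confusion_matrix(y_true, y_pred)
print(per_class_recall(m)["BrownSpot"])  # 0.5
```

Reading the off-diagonal cells row by row identifies exactly which class pairs drive the errors, which is how the Brown Spot / Leaf Scald / Sheath Blight overlap was diagnosed.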
Discussion
The results indicate that YOLOv11-seg offers a high level of diagnostic capability for rice leaf disease detection. However, three critical challenges were identified.
Spatial precision limitations. The model exhibited a drop in accuracy at higher IoU thresholds (mAP50:95), particularly for Brown Spot and Leaf Scald. These conditions feature diffuse lesion boundaries, which complicate segmentation, consistent with limitations described in prior work. This underscores the need for finer edge-aware segmentation or post-processing.
Class imbalance and overfitting. Despite the use of transfer learning, the model showed a 15-20% performance drop on validation data, indicating mild overfitting. This aligns with earlier observations that underrepresented disease classes, especially Rice Tungro, struggle to generalize due to limited sample diversity.
Segmentation vs. detection trade-offs. Although instance segmentation provides more detailed spatial outputs, it underperformed bounding box detection in precision and recall. Similar issues have been noted in rice disease datasets, where annotation quality directly influences mask accuracy. Additionally, segmentation requires more computational resources, which may limit its usability on mobile or edge devices.
Comparative advantage. Compared with prior YOLO-based works, this model integrates real-time detection and segmentation, achieving better performance on spatially complex classes. It also improves over YOLOv5 and YOLOv8 benchmarks.
Conclusion
In this study, we developed an automated rice leaf disease detection system using the YOLOv11-seg architecture, which integrates both bounding box detection and instance segmentation. The model achieved a bounding box F1-score of 0.62 and a segmentation mAP50 of 0.564, outperforming previous YOLO variants in multi-class agricultural tasks.
The model demonstrated strong performance in detecting the Healthy and Blast categories, with over 85% accuracy, indicating its suitability for real-time deployment in field settings. However, lower recall for Brown Spot and Rice Tungro highlights the model's limitations in handling visually ambiguous and underrepresented classes. Theoretically, this research contributes to the literature on precision agriculture by showing that combining transfer learning and instance segmentation in object detection improves spatial diagnosis, addressing one of the key limitations of traditional classification-only models. Nonetheless, three main limitations were observed: (1) class imbalance, which reduced accuracy on minority classes; (2) overfitting, with a 15-20% drop in validation metrics; and (3) inconsistent segmentation, particularly for diseases with blurred or diffuse lesion boundaries. Based on these findings, the following future directions are recommended. Data-centric enhancements: generate synthetic samples using diffusion models for underrepresented classes such as Brown Spot. Architectural refinements: explore hybrid ViT-YOLO heads to improve long-range dependency modeling, especially for overlapping lesions. Post-processing: integrate Conditional Random Fields (CRF) to refine segmentation masks, especially at lesion boundaries. Edge deployment: apply model compression techniques (e.g., pruning, quantization) to enable mobile-based real-time inference. This work lays a strong foundation for scalable, intelligent crop monitoring systems and aligns with the broader goal of developing AI-powered smart farming solutions.
Acknowledgement
The authors gratefully acknowledge the support from Politeknik Negeri Lampung for providing the research facilities and resources essential for this study. We extend our sincere appreciation to:
• P3M (Pusat Penelitian dan Pengabdian kepada Masyarakat) Politeknik Negeri Lampung for their administrative and funding support.
• The Department of Information Technology for fostering an interdisciplinary research environment.
• Our colleagues in the Computer Vision research group for their valuable feedback during model development.
• The agricultural experts who contributed to the dataset annotation and validation.
This work was partially supported by Hibah Penelitian Dosen Pemula (Hibah DIPA) Tahun 2025 from Politeknik Negeri Lampung.
Reference