ISSN 2087-3336 (Print) | 2721-4729 (Online)
TEKNOSAINS: Jurnal Sains, Teknologi dan Informatika, Vol. No. 1, 2026
http://jurnal. id/index. php/tekno
https://doi. org/10. 37373/tekno.

Automated identification of tomato leaf pathologies using deep learning via ResNet18 and a Tailored CNN Architecture

Yanto Supriyanto
STMIK IKMI Cirebon, Jl. Perjuangan No., Karyamulya, Kec. Kesambi, Kota Cirebon, Jawa Barat
Corresponding Author: yanto200445@gmail.com
Submitted: 3/7/2025  Revised: 29/7/2025  Accepted: 7/8/2025

ABSTRACT
Tomato leaves (Solanum lycopersicum) are susceptible to various diseases that significantly impact both yield and quality in agricultural production. To enhance the effectiveness of precision agriculture, deep learning-based image classification techniques have emerged as reliable tools for automatically detecting disease symptoms. In this study, two deep learning models are investigated for this purpose: a custom-built Convolutional Neural Network (CNN) and the ResNet18 architecture, which leverages transfer learning. The experimental workflow encompasses preprocessing of input images, data augmentation strategies, architectural development of the custom CNN, and the fine-tuning phase of the ResNet18 model. The evaluation was conducted on a validation dataset comprising 1,000 images evenly distributed across ten disease categories. Results show that the ResNet18 model attained a validation accuracy of 74%, whereas the custom CNN model achieved 55%. While the latter demonstrated lower predictive performance, it offers advantages in computational simplicity and execution speed. These findings suggest that transfer learning with ResNet18 is more suitable for complex, multi-class classification problems on limited datasets, whereas the lightweight CNN model may be better positioned for deployment on low-resource, edge-based agricultural systems.

Keywords: deep learning, ResNet18, custom CNN, tomato leaf disease, image classification

Introduction
Tomato (Solanum lycopersicum) is a horticultural crop with substantial economic significance and is extensively cultivated across various regions in Indonesia. Despite its agricultural importance, crop productivity is frequently hampered by leaf diseases, including bacterial spot, early and late blight, leaf mold, mosaic virus, septoria leaf spot, spider mite infestations, target spot, and yellow leaf curl virus. These diseases not only manifest as visible damage on the foliage but also negatively impact both crop quality and yield. Traditionally, disease detection in crops has relied on manual inspection by farmers or agricultural advisors. However, this method is inherently subjective and susceptible to inconsistencies, as it depends heavily on individual skill and experience in identifying disease symptoms. With the progression of precision agriculture, the use of artificial intelligence, especially deep learning, has gained traction as a promising solution. Among various AI techniques, Convolutional Neural Networks (CNNs) have been recognized for their ability to enhance diagnostic accuracy and efficiency. One well-regarded model for image-based classification is ResNet18, which leverages residual learning to extract intricate features and mitigate the vanishing gradient problem that commonly arises in deep neural networks. Alternatively, custom-designed CNNs offer greater adaptability to the specific nature of a dataset, although they often require intensive hyperparameter tuning to reach optimal performance levels.
Furthermore, employing strategies such as data augmentation and transfer learning has proven beneficial, especially when working with relatively small datasets. In light of these considerations, this study explores and compares the performance of two deep learning approaches, ResNet18 utilizing transfer learning and a handcrafted CNN, for classifying various tomato leaf diseases. The models are quantitatively evaluated based on four key metrics: accuracy, precision, recall, and F1-score. The ultimate goal is to contribute to the development of a robust, image-driven diagnostic system that can support precision agriculture practices in Indonesia.

Method
This study followed a structured sequence of steps encompassing data acquisition, preprocessing of images, model development, and performance evaluation in classification tasks, as summarized in Figure 1. The primary aim was to examine and contrast two deep learning strategies, namely a manually constructed Convolutional Neural Network (CNN) and the ResNet18 architecture enhanced through transfer learning, in classifying tomato leaf diseases. The dataset employed in this study was sourced from the publicly available Tomato Leaf Disease Dataset on Kaggle. It comprises ten disease categories, including bacterial spot, early blight, and yellow leaf curl virus. The dataset was partitioned into training and validation sets, with each class evenly represented. During the preprocessing phase, all images were resized to 224×224 pixels and standardized using ImageNet normalization parameters, ensuring input compatibility with ResNet18. To increase the model's generalization capability and minimize overfitting, data augmentation techniques such as random rotations, horizontal flips, and other variations were applied to the training set. ResNet18 was utilized as a pretrained backbone within a transfer learning framework, where only the final classification layer was replaced and retrained according to the number of output classes. The earlier layers remained fixed to function as generic feature extractors. Model training was carried out in Google Colab, taking advantage of the T4 GPU for improved computational speed. Both models were trained for 10 epochs using the Adam optimizer, a batch size of 32, a learning rate of 0., and CrossEntropyLoss as the loss function. For performance assessment, the models were evaluated using standard classification metrics: accuracy, precision, recall, and F1-score, alongside the use of a confusion matrix to further analyze class-wise prediction outcomes.

Figure 1. Research method flowchart for tomato leaf disease classification using the custom CNN and ResNet18
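For illustration, the following minimal PyTorch/torchvision sketch shows how the preprocessing and augmentation pipeline described above could be implemented. The directory names ("data/train", "data/val") and the exact rotation angle are assumptions for the sake of a runnable example; only the general strategy (resizing to 224×224, flips, rotations, ImageNet normalization, ImageFolder layout) is stated in the text.

```python
# Sketch of the preprocessing/augmentation pipeline described in the Method section.
# Folder paths and the rotation range are illustrative assumptions.
import torch
from torchvision import datasets, transforms

imagenet_mean = [0.485, 0.456, 0.406]   # ImageNet normalization statistics
imagenet_std = [0.229, 0.224, 0.225]

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),        # match the ResNet18 input size
    transforms.RandomHorizontalFlip(),    # augmentation: horizontal flip
    transforms.RandomRotation(15),        # augmentation: small random rotation (assumed range)
    transforms.ToTensor(),
    transforms.Normalize(imagenet_mean, imagenet_std),
])

val_tf = transforms.Compose([             # validation set: resize and normalize only
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(imagenet_mean, imagenet_std),
])

# ImageFolder expects one subfolder per class, as described in the Dataset section.
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
val_ds = datasets.ImageFolder("data/val", transform=val_tf)

train_loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=32, shuffle=False)
```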
Dataset
The dataset utilized in this study originates from the Tomato Leaf Disease collection, which is publicly accessible on the Kaggle platform. It comprises 10 distinct disease categories, with each class containing 100 images for model training and another 100 for validation, yielding a total of 2,000 images with equal class distribution. All images were organized using a directory hierarchy that separates training and validation sets; each image class is placed into a dedicated subfolder, adhering to the PyTorch ImageFolder format. The raw images are provided in RGB color space and exhibit diverse native resolutions. During the preprocessing phase, they were resized to a uniform dimension of 224×224 pixels to meet the input specification requirements of both the ResNet18 and the custom CNN models.

Model architecture and data transformation
This research follows a structured methodology consisting of several phases, including data acquisition, image preprocessing, model training, and evaluation of classification performance. The primary objective is to compare two deep learning models: a manually constructed CNN (custom CNN) and ResNet18, which incorporates transfer learning techniques. The dataset employed is the Tomato Leaf Disease collection retrieved from Kaggle, comprising 10 distinct disease categories, with 1,000 images used for training and another 1,000 for validation. The dataset was organized to fit the PyTorch ImageFolder structure, where each class is allocated to its own subfolder. During preprocessing, all images were resized to 224×224 pixels and normalized using ImageNet statistical values to align with ResNet18 input expectations. For training, augmentation techniques such as ±15° random rotations and horizontal flips were employed to improve generalizability and reduce overfitting risks. The validation set, in contrast, was only resized and normalized.

Figure 2. Dataset distribution

The custom CNN was developed with an emphasis on simplicity and efficiency. It comprises three convolutional layers activated with ReLU and followed by max pooling, ending with a fully connected layer and a softmax output layer. Meanwhile, ResNet18 was applied as a pretrained model where only the final classification layer was fine-tuned, and the rest of the layers were kept static to act as feature extractors. Both models were trained on Google Colab utilizing a T4 GPU environment over the span of 10 epochs using the Adam optimizer, a batch size of 32, and a learning rate of 0. The loss function adopted was CrossEntropyLoss. Evaluation metrics include accuracy, precision, recall, and F1-score, along with a confusion matrix to examine prediction discrepancies across individual classes. A complete visual representation of the method is presented in Figure 1.

ResNet18 model implementation with transfer learning
In this study, ResNet18 is employed as a pretrained model to perform classification of tomato leaf diseases using a transfer learning approach. This model was chosen due to its efficiency in computation and its established performance in a wide range of image classification tasks, including those involving plant pathology.
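The transfer learning configuration elaborated in the following paragraphs can be sketched as below, assuming a recent torchvision release with pretrained ImageNet weights: the backbone is frozen and only a new 10-class head is trained.

```python
# Sketch of the ResNet18 transfer-learning setup described in this section:
# frozen ImageNet-pretrained backbone, new 10-class final classification layer.
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in resnet.parameters():   # freeze all pretrained layers (feature extractor)
    param.requires_grad = False

num_classes = 10                    # ten tomato leaf disease categories
# Replace the original 1,000-class head with a new 10-class linear layer.
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

resnet = resnet.to(device)          # run on GPU when available, otherwise CPU
```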
The model is compatible with various hardware configurations and is run on a GPU when available, or on a CPU otherwise. ResNet18 is initialized with ImageNet-pretrained weights, allowing it to identify essential visual patterns such as shape boundaries, textures, and colors, an important feature when processing datasets of limited size. In the implementation used in this study, ResNet18 functions as a feature extractor, in which all convolutional layers are frozen to preserve the learned weights. Only the final fully connected layer is adjusted and retrained to align with the ten classification labels in the dataset. If necessary, the entire model can be fine-tuned by unfreezing all layers, although this would increase the training time. Because the original ResNet18 is structured for 1,000-class classification, the final classification layer is replaced with a new one whose number of output units matches the number of disease categories in this task. Once model modification is complete, it is transferred to the appropriate device (GPU or CPU) for optimal execution during both training and evaluation, based on the available system resources.
The ResNet18 architecture begins with an initial 7×7 convolutional layer, followed by batch normalization and a ReLU activation function. It proceeds through four residual blocks, with progressively increasing channel dimensions: 64, 128, 256, and 512. Each residual block incorporates skip connections that enable stable backpropagation of gradients during training. This design choice effectively addresses the vanishing gradient problem, ensuring that the network can continue learning as it deepens. Each block from layer1 to layer4 includes two convolutional layers and identity connections, facilitating smooth gradient flow during deep network training.

Custom CNN model implementation
Besides employing ResNet18 with a transfer learning scheme, a custom Convolutional Neural Network was specifically developed to classify tomato leaf diseases. The model was constructed with a focus on simplicity and efficiency, aiming to accommodate the constraints and unique traits of the dataset used. The architecture comprises three primary convolutional blocks, each employing filters with a 3×3 kernel and a padding value of 1. Each convolutional layer is followed by ReLU activation and 2×2 max pooling, which serves to downsample spatial dimensions. The number of filters increases progressively: 32 filters in the first block, 64 in the second, and 128 in the third, allowing the network to learn hierarchical features from basic to more complex patterns. After the feature extraction stage, the output is flattened and fed into a fully connected layer consisting of 256 neurons. To reduce the risk of overfitting, a dropout layer with a rate of 0.5 is introduced before passing to the final classifier layer. The final output layer is a linear unit with 10 neurons, each representing one of the target disease classes, and is followed by a softmax activation function to convert raw outputs into class probabilities. For training, the model employs the Adam optimizer with the CrossEntropyLoss function. The training was conducted over 10 epochs, with a batch size of 32 and a learning rate of 0. To improve generalization, data augmentation techniques such as random resized cropping, horizontal flipping, and random rotation were applied during training. The model was trained on the Google Colab platform utilizing T4 GPU acceleration, which enhanced training efficiency.
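The custom CNN described above could be expressed roughly as in the sketch below. The filter counts (32, 64, 128), the 256-unit dense layer, the 0.5 dropout rate, and the 10-class output follow the text; the flattened feature size is an assumption derived from 224×224 inputs halved by three pooling stages.

```python
# Sketch of the custom CNN architecture described above: three 3x3 conv blocks
# with ReLU and 2x2 max pooling, a 256-unit dense layer, dropout 0.5, and a
# 10-class output. The flattened size (128 * 28 * 28) assumes 224x224 inputs.
import torch.nn as nn

class CustomCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 256), nn.ReLU(),
            nn.Dropout(0.5),
            # Softmax is applied implicitly by CrossEntropyLoss during training.
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```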
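A condensed sketch of the training and evaluation procedure used for both models (Adam, CrossEntropyLoss, 10 epochs, batch size 32) is given below. The learning-rate value shown is a placeholder assumption, since the exact value is not legible in the text, and the use of scikit-learn for the per-class report and confusion matrix is likewise assumed.

```python
# Condensed training/evaluation sketch matching the setup described above.
# lr=1e-3 is an assumed placeholder; scikit-learn is assumed for the metrics.
import torch
import torch.nn as nn
from sklearn.metrics import classification_report, confusion_matrix

def train_and_evaluate(model, train_loader, val_loader, device, epochs=10, lr=1e-3):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        # Validation pass: collect predictions for accuracy and per-class metrics.
        model.eval()
        preds, targets = [], []
        with torch.no_grad():
            for images, labels in val_loader:
                outputs = model(images.to(device))
                preds.extend(outputs.argmax(dim=1).cpu().tolist())
                targets.extend(labels.tolist())
        acc = sum(p == t for p, t in zip(preds, targets)) / len(targets)
        print(f"epoch {epoch + 1}: validation accuracy = {acc:.2%}")

    print(classification_report(targets, preds))  # precision, recall, F1 per class
    print(confusion_matrix(targets, preds))
    return model
```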
While the custom CNN has a simpler architecture compared to ResNet18, experimental results demonstrate that it can achieve competitive classification accuracy. With appropriate hyperparameter tuning and sufficient augmentation, this model offers a viable and computationally lightweight alternative for plant disease detection, especially in scenarios with limited hardware resources.

Model training process (epochs)
The training process was performed over 10 epochs, with the objective of refining model parameters until reaching convergence on the training data. The training log indicated a progressive reduction in loss values across epochs, accompanied by an increase in prediction accuracy on both the training and validation sets. During the first epoch, the ResNet18 model utilizing transfer learning recorded a training loss of 1.7199, a training accuracy of 47.40%, and a validation accuracy of approximately 55%. As training progressed, performance improved; by epoch five, the training loss dropped to 1.1553, with training and validation accuracies of 69.30% and 68.70%, respectively. By the end of the 10th epoch, the model achieved a training loss of 0.9147, a training accuracy of 78.80%, and a validation accuracy of 74%. From the sixth epoch onward, the validation accuracy stabilized in the range of 72% to 75%, suggesting that the model had begun to converge. Nevertheless, further improvements could still be attained through additional training epochs or refinement of hyperparameters. These observations are consistent with findings in prior studies, which highlight that a declining loss curve combined with stable validation performance across epochs serves as a reliable indicator of model convergence. Enhancing the model's generalization capability and minimizing overfitting, particularly when applied to unseen or real-world test data, can be addressed using techniques such as learning rate scheduling and early stopping. Moreover, carefully configuring the hyperparameters plays a critical role in achieving optimal model performance. The full set of hyperparameter values applied to the ResNet18 and custom CNN configurations used in this study is detailed in Table 1.

Table 1. Training hyperparameters for the ResNet18 and custom CNN models
Hyperparameter           ResNet18 (transfer learning)          Custom CNN
Optimization algorithm   Adam                                  Adam
Learning rate            -                                     -
Batch size               32                                    32
Epochs                   10                                    10
Loss function            CrossEntropyLoss                      CrossEntropyLoss
Scheduler                None                                  None
Dropout                  -                                     0.5
Data augmentation        Resize, Flip, Rotation, Normalize     Resize, Flip, Rotation, Normalize

Results and Discussion
The results demonstrate that the ResNet18 model, trained through transfer learning, achieves strong validation accuracy, reaching 74%. This suggests that the model possesses robust generalization capabilities, particularly when handling tomato leaf disease classification involving multiple categories. On the other hand, the custom CNN, designed with a simpler architecture, delivers lower performance, attaining only 55% accuracy. Despite this, the model still performs relatively well on certain classes, such as Tomato Yellow Leaf Curl Virus (F1-score 0.66) and Mosaic Virus. These classes are likely easier to identify due to their more distinctive visual patterns. Conversely, the lowest F1-scores were recorded for Early Blight and Target Spot.
These weaker results most likely arise from visual overlap with other categories, which complicates feature extraction. Moreover, the limited ability of the custom CNN to capture intricate features, along with potential shortcomings during preprocessing, may have further contributed to the drop in performance. Taken together, these findings emphasize the practical benefits of employing pretrained models like ResNet18, especially when working with small-scale datasets characterized by high inter-class visual similarity.

Table 2. Classification performance evaluation
Model         Accuracy    Precision    Recall    F1-score
ResNet18      74%         -            -         0.73
Custom CNN    55%         -            -         -

As shown in Table 2, the ResNet18 model consistently outperforms the custom CNN across all evaluation metrics. These findings highlight the substantial advantage offered by leveraging pretrained models through transfer learning, particularly when working with limited datasets and high inter-class visual similarity.

Model accuracy
The ResNet18 model, implemented using a transfer learning technique, obtained a validation accuracy of 74%, demonstrating its capability in handling complex classification tasks. The highest recognition performance was observed in the Tomato Yellow Leaf Curl Virus category, which achieved the highest per-class F1-score. This can be attributed to its distinctive visual markers, such as curled leaf structures and yellow coloration. Additionally, high F1-scores were reported for the Bacterial Spot and Healthy classes. On the other hand, the Early Blight class recorded the lowest score. This underperformance may stem from its visual resemblance to other diseases like Target Spot, as well as issues related to image preprocessing, illumination conditions, and camera capture angles. In contrast, the custom CNN model yielded a lower validation accuracy of 55%. While overall accuracy was reduced, the model retained the ability to detect visually unique disease patterns. However, it showed noticeable difficulty in distinguishing categories with overlapping symptoms, such as Target Spot and Septoria Leaf Spot. These shortcomings are likely the result of architectural limitations in feature representation. In summary, ResNet18 exhibited superior performance across nearly all classes and evaluation metrics.

Confusion matrix
The confusion matrix serves as a key diagnostic tool, offering a detailed overview of the model's ability to differentiate between multiple classes in a classification setting. In this study, two confusion matrices were analyzed to compare the predictive behavior of the ResNet18 model (utilizing transfer learning) and the custom CNN. Figure 5 illustrates the confusion matrix for the ResNet18 model, which demonstrates robust classification results across most disease categories. High prediction accuracy was observed for Tomato_Healthy, Tomato_Bacterial_Spot, and Tomato_Yellow_Leaf_Curl_Virus. Nonetheless, the model struggled to distinguish Early_Blight, often misclassifying it as Target_Spot or Late_Blight, revealing the difficulty of identifying diseases with overlapping visual symptoms. In contrast, Figure 6 shows the confusion matrix for the custom CNN. This model achieved impressive accuracy for certain categories, such as Tomato_Mosaic_Virus and Tomato_Healthy, which reached 100% and 98%, respectively. However, its overall misclassification rate was higher than that of ResNet18, particularly in the Target_Spot category, which was frequently confused with Early_Blight.
This may stem from limitations in the custom CNN's capacity to capture intricate visual cues, possibly due to its relatively shallow network depth and simpler design. A comparison of the two matrices highlights essential differences in model behavior: while ResNet18 exhibits more consistent and generalized performance across classes (Figure 3), the custom CNN (Figure 4) appears more effective for certain specific categories but lacks reliability in handling visually similar cases. These findings emphasize the necessity of thoughtful architecture selection and strategic data augmentation when designing robust plant disease recognition systems.

Figure 3. Per-class classification report using the ResNet18 transfer learning model
Figure 4. Per-class classification report using the custom CNN model
Figure 5. Confusion matrix visualization of ResNet18 model predictions on the tomato leaf disease validation data
Figure 6. Confusion matrix visualization of custom CNN model predictions on the tomato leaf disease validation data

Model prediction on test images (ResNet18)
Figure 7 illustrates the classification outcomes of the ResNet18 model when applied to several tomato leaf images sourced from the validation set. Overall, the model demonstrates a solid classification ability, correctly identifying most samples based on their ground-truth labels, with particularly strong performance in the Tomato_Bacterial_spot category. Green-labeled predictions in the figure indicate accurate classification and reflect the model's effectiveness in detecting unique visual traits, such as dark circular lesions with pronounced contrast. Despite these strengths, some misclassification cases were observed. For instance, an image labeled as Tomato_Bacterial_spot was misidentified as Tomato_Early_blight, highlighted in red within the figure. These inaccuracies likely stem from visual symptom overlap between disease types, such as irregular dark spots, which can challenge the model, particularly under high intra-class variability. This observation is consistent with earlier reports noting that one major difficulty in tomato leaf disease identification lies in the visual similarity between disease classes. Factors such as image resolution, lighting conditions, and camera perspective can also significantly influence model accuracy in real-world environments. In summary, the visual evidence aligns with earlier quantitative results, reinforcing that while ResNet18 performs reliably on distinct disease classes, it still requires refinement for better differentiation of visually similar cases. Future work may focus on improving the network's structural design, leveraging ensemble learning methods, or adopting multi-view imaging strategies to enhance prediction reliability in practical scenarios.

Figure 7. ResNet18 predictions on Bacterial_spot leaf images. Green text shows correct predictions; red text indicates misclassification as Early_blight due to symptom similarity.

Model prediction on test images (custom CNN)
To assess its capability in classifying disease types based on visual cues, the custom CNN model was evaluated using multiple images from the Tomato_Bacterial_spot category.
Figure 8 illustrates the prediction outcomes, where each image is annotated with its actual label alongside the model's prediction. Correct classifications are highlighted in green, whereas incorrect ones are shown in red. The majority of samples were accurately identified, showcasing the model's ability to detect distinctive features of the Bacterial_spot class, such as the presence of dark, circular lesions on the leaf surface. This qualitative observation is consistent with earlier numerical results, where the same class achieved a relatively high F1-score. However, one notable error occurred when an image belonging to the Bacterial_spot category was misclassified as Early_blight. This confusion likely stems from the overlapping visual characteristics shared between the two diseases, such as spot texture or variations in leaf coloration. Similar misclassification challenges have been discussed in earlier studies, which emphasized that visual similarities in tomato leaf disease symptoms can hinder accurate automated identification. Several influencing factors have also been identified, such as lighting inconsistencies, image quality, and differences in camera angles, that can impact model performance in visual recognition tasks. Visual interpretation methods like this are essential for complementing quantitative metrics, offering more comprehensive insights into the model's operational behavior under real-world imaging scenarios and helping identify aspects that warrant further refinement.

Figure 8. Example of custom CNN model predictions on the Tomato_Bacterial_spot class. Green text indicates correct predictions, while red text indicates misclassifications. One image labeled Bacterial_spot was incorrectly predicted as Early_blight.

Comparison with previous studies
These findings reinforce that pretrained deep learning architectures such as ResNet18 generally outperform custom-designed models like CNNs built from scratch, particularly in handling multi-class classification problems, including those related to tomato leaf disease detection. In this study, ResNet18 yielded a validation accuracy of 74%, whereas the custom CNN achieved only 55%. Similarly, in terms of average F1-score, ResNet18 reached 0.73, surpassing the custom CNN's result. This disparity highlights the advantages offered by pretrained networks like ResNet18, which benefit from prior exposure to large-scale datasets such as ImageNet, enabling the model to extract and generalize complex visual patterns more effectively through the use of residual learning blocks. Despite the performance gap, the custom CNN still proved capable of classifying certain disease categories with reasonable accuracy, particularly those with distinct visual features like Tomato Yellow Leaf Curl Virus. However, it struggled with classes showing overlapping visual symptoms, such as Early Blight and Target Spot, which frequently led to misclassifications. This challenge is not unexpected, considering that the custom CNN utilizes a more compact and shallow architecture, limiting its representational capacity for deeper feature extraction. This observation aligns with earlier studies suggesting that pretrained models are more adept at interpreting heterogeneous and complex imagery. Furthermore, the application of data augmentation techniques, such as image flipping and random rotation, used in this experiment aligns with the findings by Wu et al., who emphasized the value of such transformations for improving model generalization.
Although the custom CNN's overall accuracy is lower, it presents a viable option for deployment in environments with constrained computational resources. Its training process is faster and less demanding in terms of hardware, making it suitable for edge computing applications or devices with limited processing power. Looking forward, future studies could investigate hybrid approaches, such as ensemble techniques or the integration of lightweight pretraining, leveraging the strengths of both pretrained and custom models. Such approaches may lead to the development of classification frameworks that are not only robust and accurate but also computationally efficient and adaptable to real-world agricultural scenarios.

Limitations
This study still encounters several challenges that warrant further refinement. First, although the dataset was designed to contain an equal number of images for each class, some instances, particularly in certain categories, were not properly processed due to issues during transformation or preprocessing. As a result, the final class distribution became unbalanced, which had a negative impact on the model's performance, especially for underrepresented categories like Early_blight, which recorded a notably low F1-score. The limited data availability in this category hindered the model's ability to effectively learn its unique visual traits. Second, even though the architecture of the custom CNN model has been thoroughly outlined, its experimental outcomes were not comprehensively incorporated into the earlier stages of analysis. A rigorous side-by-side evaluation of the custom CNN and ResNet18 is crucial to fully highlight the advantages and shortcomings of each model. Prior literature suggests that well-constructed CNN architectures can still deliver competitive results, especially when backed by proper parameter optimization. Third, the training duration was restricted to 10 epochs due to computational constraints on the Google Colab environment, which often experienced interruptions during longer sessions. This raises the possibility that the model failed to reach optimal convergence. Researchers such as Ahmed et al. recommend adopting incremental training strategies with dynamically adjusted epoch counts to ensure loss stabilization and model robustness. Fourth, hyperparameter optimization remains unexplored. Key components like learning rate scheduling, dropout configurations, and batch normalization have yet to be systematically investigated. Properly calibrating these elements could potentially enhance the model's generalization capability across a wider range of image types. Moreover, ensemble strategies have not yet been applied in this study, even though combining models, such as through soft voting or stacking between the custom CNN and ResNet18, may improve classification effectiveness, especially for classes that are harder to identify. Additionally, the current research lacks empirical testing on lightweight platforms or edge devices, such as smartphones or agricultural IoT systems. As a result, the deployment feasibility of the compact custom CNN model under real-world conditions remains unverified.
Recognizing these constraints, future work should focus on enhancing the experimental design through dataset enrichment, extended training durations, advanced hyperparameter tuning, and integration of hybrid modeling techniques. These steps are expected to yield a classification framework that is not only more accurate but also adaptable and practical for real-world precision agriculture applications.

Deductive arguments and speculation
Drawing from the experimental outcomes and prior analysis, it is evident that employing a transfer learning framework using ResNet18 offers notable advantages in classifying tomato leaf diseases from digital images, especially under limited data conditions. The model's validation accuracy of 74% and macro F1-score of 0.73 serve as strong indicators of its capability to consistently identify diverse visual features. This robustness stems largely from its residual block structure, which facilitates stable gradient propagation, allowing effective parameter optimization in deep networks without compromising performance. Despite this, the custom CNN model also demonstrates value, especially for deployment on devices with restricted computational resources, such as smartphones or IoT devices. Owing to its lower parameter count, it exhibits significantly better computational efficiency. Although this model's performance in the current study was suboptimal, findings from Ahmed et al. suggest that with careful design and fine-tuning, custom CNN architectures can achieve accuracy levels comparable to those of pretrained networks, especially on more homogeneous datasets. A promising direction for future research involves the development of ensemble models that harness the respective strengths of both ResNet18 and the custom CNN. Such hybrid frameworks enable the combination of ResNet18's strong feature extraction with the fast inference capabilities of lightweight CNNs. Techniques like soft voting and stacking have shown efficacy in prior studies for enhancing multi-class classification in agricultural vision tasks. Further improvements may involve refining the custom CNN through advanced optimization methods, such as Neural Architecture Search (NAS) or metaheuristic strategies like Particle Swarm Optimization (PSO). These approaches allow for the design of models that maintain a practical equilibrium between predictive accuracy and computational efficiency. With continued refinement, these models are expected to become increasingly adaptive to real-world environments, thereby supporting the broader application of deep learning systems in precision agriculture, particularly within the Indonesian context.

Conclusion
This research conducted a comparative evaluation between two deep learning-based methods for identifying tomato leaf diseases through digital imagery: a custom CNN architecture and the pretrained ResNet18 model. The experimental analysis demonstrated that ResNet18 consistently delivered better results than the custom CNN model, both in accuracy and in the overall evaluation metrics. It achieved a validation accuracy of 74%, reflecting its robustness in recognizing complex visual features characteristic of various plant diseases. In contrast, the custom CNN model, built on a simpler structural design, resulted in a lower validation accuracy of 55%. Despite this, it was still capable of correctly identifying several categories, such as Mosaic Virus and Yellow Leaf Curl Virus.
However, its performance declined when faced with visually similar disease classes like Early Blight and Target Spot. From these findings, ResNet18 emerges as the more suitable option for tomato leaf disease classification. Its architectural advantages, such as residual connections, the ability to extract rich features from pretrained ImageNet weights, and a consistent training process, contribute to superior generalization and predictive accuracy. Nonetheless, the compact and computationally efficient nature of the custom CNN model retains practical significance for real-world applications on edge computing devices with limited processing capabilities.

References