TELKOMNIKA Telecommunication Computing Electronics and Control
April 2026, pp. 536-548
ISSN: 1693-6930, DOI: 10.12928/TELKOMNIKA

A comparative MRI-based study of ResNet-152 and novel deep learning approaches for early Alzheimer's disease classification

Kelvin Leonardi Kohsasih, Octara Pribadi, Andy, Daniel Smith Sunario
Department of Informatics Engineering, STMIK TIME, Medan, Indonesia

ABSTRACT
Alzheimer's disease (AD) is the leading cause of dementia, making early-stage detection essential for timely intervention. Most prior studies have focused on binary AD classification, which limits sensitivity to disease progression. This study addressed this gap by evaluating whether tailored convolutional neural network (CNN) architectures could improve stage-aware classification using a publicly available magnetic resonance imaging (MRI) dataset containing 35,984 images across four diagnostic categories. The dataset underwent grayscale conversion, resizing, contrast enhancement, normalization, and class balancing prior to model development. Four models were trained and compared: ResNet-152, a custom multiclass CNN, a one-vs-one (OvO) model, and a one-vs-rest (OvR) model. Performance was measured using accuracy, precision, recall, F1-score, and confusion-matrix-based metrics. The custom multiclass CNN achieved the strongest performance, yielding the highest accuracy and balanced results across all evaluation metrics. These findings demonstrate the value of systematically comparing decomposition strategies for multi-stage Alzheimer's detection and highlight the potential of the proposed approach to enhance early diagnostic support. Future work may incorporate multimodal inputs or hybrid architectures to improve sensitivity to subtle structural changes and further strengthen clinical applicability.

Article history:
Received Jul 10, 2025; Revised Dec 11, 2025; Accepted Jan 30, 2026

Keywords: Convolutional neural network; Deep learning; Medical image classification; Multi-class classification; Transfer learning

This is an open access article under the CC BY-SA license.

Corresponding Author:
Kelvin Leonardi Kohsasih
Department of Informatics Engineering, STMIK TIME
Merbabu Street, Medan City, North Sumatra 20212, Indonesia
Email: kelvinleonardi@stmik-time.

INTRODUCTION
Advances in information technology and computer science have accelerated progress in medical data processing and the application of deep learning (DL) for diagnosing neurological disorders using neuroimaging techniques. As DL techniques have become integrated into neuroimaging research, the availability of large, high-quality datasets has further enabled the earlier and more accurate detection of neurodegenerative diseases. Alzheimer's disease (AD) is the most common cause of dementia, responsible for 60-80% of cases among older adults. AD is a progressive and irreversible neurodegenerative disorder leading to cognitive decline and eventual mortality. At present, no curative treatment exists, and available therapies such as aducanumab, lecanemab, and donanemab have only been shown to slow progression when administered in the early phases, particularly mild cognitive impairment (MCI) or early MCI (EMCI). Nevertheless, early symptoms are nonspecific and often mistaken for normal aging, so diagnosis is frequently delayed until the moderate or severe stages, when interventions are less effective.

Magnetic resonance imaging (MRI) is crucial for examining brain anatomy and diagnosing neurological disorders, as it helps identify abnormal regions, and its high resolution enables clear differentiation between gray and white matter.
MRI also reveals structural alterations such as hippocampal atrophy, widened sulci, and ventricular enlargement that accompany disease progression. These features support accurate diagnosis and effective disease monitoring. Despite its diagnostic value, MRI interpretation remains inherently subjective and dependent on expert judgment, which introduces variability and potential diagnostic inconsistency. These challenges highlight a growing need for automated, artificial intelligence (AI)-supported analysis that can improve accuracy and reduce observer variability.

DL systems have demonstrated increasing effectiveness across various research domains and are particularly useful for analyzing structural brain changes associated with AD. DL also improves diagnostic efficiency by reducing time, cost, and manual effort. However, prior research on AD detection using DL has not yet achieved optimal efficiency. Comparative studies evaluating you only look once (YOLO) variants, DenseNet, residual network (ResNet), visual geometry group (VGG), EfficientNet, and vision transformers indicate that although high-capacity feature extractors achieve strong performance in binary AD classification, they often struggle in multi-stage tasks, especially at MCI or EMCI levels where inter-class differences are subtle. Even models such as DenseNet-169 and ResNet-50 show reduced discrimination ability when required to distinguish closely related Alzheimer's stages. Multimodal MRI-positron emission tomography (PET) fusion approaches have demonstrated improved robustness and interpretability, particularly when combined with explainability techniques. In contrast, these methods still do not consistently outperform optimized single-modality CNNs for multi-class staging tasks. Overall, reported accuracies commonly range between 70% and 86%, with most studies focusing on binary AD detection rather than fine-grained staging.
This trend highlights a persistent gap in developing models capable of reliably differentiating early Alzheimer's stages. To address these limitations, this study evaluates whether tailored convolutional neural network (CNN) architectures can improve fine-grained, stage-aware AD classification, an area where existing DL approaches remain limited. Using a publicly available MRI dataset comprising 35,984 images across four diagnostic categories, the dataset was preprocessed using grayscale conversion, resizing, contrast enhancement, normalization, and class balancing. Three decomposition strategies (multiclass, one-vs-one (OvO), and one-vs-rest (OvR)) were implemented alongside a ResNet-152 transfer-learning baseline to enable a systematic and controlled comparison. The novelty of this work lies in its structured evaluation of these strategies on a large and balanced MRI corpus, aimed at improving stage-specific sensitivity while maintaining computational feasibility. Improving classification granularity is clinically important because prognosis, response to therapy, and eligibility for disease-modifying treatments depend heavily on accurate early-stage differentiation. The proposed stage-aware DL framework therefore offers enhanced diagnostic value for supporting early detection of AD.

METHOD
This study compares the performance of ResNet-152, applied through transfer learning, with a custom DL model that employs multiclass decomposition strategies, including OvO and OvR, for Alzheimer's detection from MRI scans. The methodology is structured into five phases: data acquisition, preprocessing, class balancing and dataset partitioning, model architecture with training configuration, and model evaluation supported by statistical analysis, as summarized in Figure 1.

Figure 1. Research method
Data acquisition
The dataset used in this study was obtained from the publicly available Kaggle Alzheimer's Disease MRI dataset, which provides a comprehensive collection of brain MRI scans labeled according to cognitive status. As shown in Figure 2, the dataset includes four diagnostic categories: non-demented (ND), very mild demented (VMD), mild demented (MD), and moderate demented (ModD). These categories represent progressive levels of cognitive decline, ranging from normal cognitive function to more advanced dementia symptoms, and broadly correspond to the early to moderate stages of AD progression. In total, the dataset comprises 35,984 MRI images, distributed as follows: 9,600 images for ND, 8,960 images for VMD, 8,960 images for MD, and 6,464 images for ModD. All images are provided in a standardized format and dimension, minimizing variability and ensuring seamless integration into the preprocessing pipeline. The dataset was chosen because it is openly accessible, which guarantees reproducibility and transparency. It also has a relatively balanced distribution across impairment stages, which supports robust model training and evaluation. Additionally, the dataset offers high-quality labeling, enabling reliable supervised learning. Its distribution across four diagnostic categories further allows this study to address common challenges in medical imaging research, including class imbalance and inter-class similarity, through targeted preprocessing and data augmentation strategies.

Figure 2. Representative MRI brain scan samples from the dataset

Class balancing and dataset partitioning
The initial dataset exhibited class imbalance, with ND containing 9,600 samples, MD 8,960 samples, VMD 8,960 samples, and ModD 6,464 samples, which could bias the model toward majority classes.
In this approach, the majority class samples were randomly discarded until class sizes were equalized, thereby reducing bias toward dominant categories, which is consistent with the fundamental principles of re-balancing methods. A fixed random seed was used to ensure reproducibility. The balanced dataset was subsequently divided into 80% for training and 20% for testing to support robust model learning and objective performance assessment on unseen data. The class distribution of the dataset before and after the re-balancing procedure is presented in Figure 3: Figure 3(a) shows the class distribution before the re-balancing procedure, while Figure 3(b) illustrates the distribution after the re-balancing procedure.

Data preprocessing
Before model training, all MRI images underwent a standardized preprocessing pipeline consisting of:
- Grayscale conversion to reduce computational complexity while preserving structural information.
- Resizing to 224×224 pixels to ensure uniform input dimensions for the CNN model.
- Contrast enhancement using contrast limited adaptive histogram equalization (CLAHE) to improve local contrast and highlight subtle anatomical features relevant to AD classification.
- Bilateral filtering to suppress noise while preserving edge details.
- Min-max normalization to scale pixel intensities into the [0, 1] range for stable network optimization.
- Batchwise processing of 300 images per cycle to maintain memory efficiency during preprocessing.

Figure 3. Initial and balanced class distributions of the AD image dataset: (a) initial imbalanced distribution and (b) balanced distribution after undersampling

Model architecture and training configuration
ResNet-152
ResNet-152 is a deep variant of the ResNet family that addresses gradient degradation in very deep models through residual learning with identity shortcut connections.
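The undersampling and seeded 80/20 split described in the balancing step above can be sketched as follows. This is a minimal NumPy-only illustration, not the authors' code; the function name, seed value, and array layout are assumptions.

```python
import numpy as np

def balance_and_split(images, labels, seed=42, test_frac=0.2):
    """Randomly undersample every class to the minority-class size,
    then shuffle and split into train/test partitions."""
    rng = np.random.default_rng(seed)  # fixed seed for reproducibility
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()  # e.g. 6,464 (the ModD count) in this dataset
    # Keep a random subset of n_min indices from each class.
    keep = np.concatenate([
        rng.choice(np.where(labels == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)
    n_test = int(len(keep) * test_frac)
    test_idx, train_idx = keep[:n_test], keep[n_test:]
    return (images[train_idx], labels[train_idx],
            images[test_idx], labels[test_idx])
```

With the class counts reported above (9,600 / 8,960 / 8,960 / 6,464), this procedure yields 6,464 images per class, 25,856 in total, split 80/20.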
With 152 layers organized into bottleneck blocks, it provides hierarchical feature extraction that is effective for detecting subtle morphological changes in brain MRI scans associated with early AD. Beyond its widespread use, ResNet-152 was selected in this study because recent work on deep residual and convolutional architectures applied to structural MRI has shown that very deep networks can capture distributed and fine-grained patterns of brain atrophy that are critical for accurate AD staging. In this study, ResNet-152 was implemented using transfer learning with pretrained ImageNet weights, where convolutional layers were frozen to preserve general feature representations and a task-specific classification head was trained on the Alzheimer's dataset. The classification head consisted of a global average pooling layer, a dense layer with 1024 neurons and rectified linear unit (ReLU) activation, and a final dense layer with four neurons and softmax activation to classify the diagnostic categories. The model was trained using the adaptive moment estimation (Adam) optimizer with categorical cross-entropy loss and a batch size of 32. Early stopping was applied to prevent overfitting and to restore the best weights. This configuration aligns with established transfer learning practices in medical image analysis.

Custom classifier
CNNs have demonstrated strong effectiveness in neuroimaging because they learn hierarchical spatial features directly from MRI data. Prior studies also indicate that task-specific CNN architectures are better suited to capture disease-related morphological patterns, offering improved stability and interpretability in medical imaging applications. In this study, a custom multiclass CNN was developed using stacked convolutional blocks for feature extraction and fully connected layers with dropout to enhance generalization performance.
A final softmax layer produces predictions across four diagnostic categories based on structural MRI features. A controlled comparison for multi-stage Alzheimer's classification was achieved by implementing two additional formulations. The OvO approach trains individual pairwise classifiers that refine decision boundaries between specific class pairs, while the OvR formulation trains separate models to distinguish one diagnostic category from the remaining classes. Both approaches employ compact CNN backbones and follow established multiclass learning practices in medical imaging. Detailed architectural specifications, including layer composition, output dimensions, and parameter counts, are provided in Tables 1 to 4.

Table 1. Layers of custom multiclass CNN
Type | Description | Parameters | Output shape
Input layer | Input_1 | 0 | (None, 224, 224, 1)
Conv block 1 | 2×[Conv(32, 3×3) + BatchNorm] + MaxPool | 9,824 | (None, 110, 110, 32)
Conv block 2 | 2×[Conv(64, 3×3) + BatchNorm] + MaxPool | 55,936 | (None, 53, 53, 64)
Conv block 3 | 2×[Conv(128, 3×3) + BatchNorm] + MaxPool | 222,464 | (None, 24, 24, 128)
Conv block 4 | 1×[Conv(256, 3×3) + BatchNorm] + MaxPool | 296,192 | (None, 11, 11, 256)
Flatten | Flatten | 0 | (None, 30,976)
Dense | Dense(512, ReLU) | 15,860,224 | (None, 512)
Dropout | Dropout(0.5) | 0 | (None, 512)
Dense | Dense(256, ReLU) | 131,328 | (None, 256)
Dropout | Dropout(0.5) | 0 | (None, 256)
Output layer | Dense(4, softmax) | 1,028 | (None, 4)

Table 2. Layers of custom OvO baseline-CNN
Type | Description | Parameters | Output shape
Input layer | Input_1 | 0 | (None, 224, 224, 1)
Conv2D | Conv2D(32, 3×3, ReLU) | 320 | (None, 222, 222, 32)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 111, 111, 32)
Conv2D | Conv2D(64, 3×3, ReLU) | 18,496 | (None, 109, 109, 64)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 54, 54, 64)
Conv2D | Conv2D(128, 3×3, ReLU) | 73,856 | (None, 52, 52, 128)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 26, 26, 128)
Flatten | Flatten | 0 | (None, 86,528)
Dense | Dense(64, ReLU) | 5,537,856 | (None, 64)
Dropout | Dropout(0.5) | 0 | (None, 64)
Output layer | Dense(1, sigmoid) | 65 | (None, 1)

Table 3.
Layers of custom OvO deep-batch normalization (BN)-CNN
Type | Description | Parameters | Output shape
Input layer | Input_1 | 0 | (None, 224, 224, 1)
Conv2D | Conv2D(32, 3×3, ReLU) | 320 | (None, 222, 222, 32)
BatchNorm | BatchNormalization | 128 | (None, 222, 222, 32)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 111, 111, 32)
Conv2D | Conv2D(64, 3×3, ReLU) | 18,496 | (None, 109, 109, 64)
BatchNorm | BatchNormalization | 256 | (None, 109, 109, 64)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 54, 54, 64)
Conv2D | Conv2D(128, 3×3, ReLU) | 73,856 | (None, 52, 52, 128)
BatchNorm | BatchNormalization | 512 | (None, 52, 52, 128)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 26, 26, 128)
Conv2D | Conv2D(256, 3×3, ReLU) (extra convolutional block) | 295,168 | (None, 24, 24, 256)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 12, 12, 256)
Flatten | Flatten | 0 | (None, 36,864)
Dense | Dense(128, ReLU) | 4,718,720 | (None, 128)
BatchNorm | BatchNormalization | 512 | (None, 128)
Dropout | Dropout(0.5) | 0 | (None, 128)
Dense | Dense(64, ReLU) | 8,256 | (None, 64)
BatchNorm | BatchNormalization | 256 | (None, 64)
Dropout | Dropout(0.5) | 0 | (None, 64)
Dense | Dense(32, ReLU) | 2,080 | (None, 32)
Output layer | Dense(1, sigmoid) | 33 | (None, 1)

Table 4. Layers of custom OvR CNN
Type | Description | Parameters | Output shape
Input layer | Input_1 | 0 | (None, 224, 224, 1)
Conv2D | Conv2D(32, 3×3) | 320 | (None, 222, 222, 32)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 111, 111, 32)
Conv2D | Conv2D(64, 3×3) | 18,496 | (None, 109, 109, 64)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 54, 54, 64)
Conv2D | Conv2D(128, 3×3) | 73,856 | (None, 52, 52, 128)
MaxPooling2D | MaxPool(2×2) | 0 | (None, 26, 26, 128)
Flatten | Flatten | 0 | (None, 86,528)
Dense | Dense(512, ReLU) | 44,302,848 | (None, 512)
Dropout | Dropout(0.5) | 0 | (None, 512)
Output layer | Dense(1, sigmoid) | 513 | (None, 1)

Reproducibility was ensured by standardizing the primary training configurations. A dropout rate of 0.5 was applied to reduce overfitting, and early stopping with a patience of ten epochs was used to promote stable convergence. Training was conducted using the Adam optimizer with a learning rate of 0.0001 and a batch size of 32, settings commonly reported as effective for MRI-based DL tasks. Hyperparameters were refined through exploratory grid searches that varied learning rate, dropout magnitude, and batch size to ensure stable optimization across all classification strategies.

Model evaluation
The models are evaluated using a set of confusion-matrix-based metrics that capture complementary aspects of classification performance. These include accuracy, precision (positive predictive value, PPV), recall (sensitivity), F1-score, specificity, false positive rate (FPR), false negative rate (FNR), and negative predictive value (NPV). Since the class distribution is balanced, overall accuracy is considered a reliable indicator of performance. In addition, we report precision, recall, F1-score, and specificity to provide complementary perspectives. In the multiclass setting, per-class precision, recall, and F1-score are computed by treating each class against the rest and then summarized using macro, weighted, or micro averages, consistent with standard classification reports and confusion-matrix analyses. The formulas used to compute these metrics, with TP, TN, FP, and FN denoting true positives, true negatives, false positives, and false negatives, are as follows:

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision (PPV) = TP / (TP + FP)
recall (sensitivity) = TP / (TP + FN)
F1-score = 2 × (precision × recall) / (precision + recall)
specificity = TN / (TN + FP)
FPR = FP / (FP + TN)
FNR = FN / (FN + TP)
NPV = TN / (TN + FN)

RESULTS AND DISCUSSION
The dataset was first balanced through random undersampling, resulting in 6,464 images per class and a total of 25,856 samples. It was then split into 80% for training and 20% for testing. All images were processed using a standardized pipeline that consisted of grayscale conversion, resizing to 224×224 pixels,
CLAHE-based contrast enhancement, bilateral filtering, and min-max normalization to the [0, 1] range. This procedure produced clearer and more consistent MRI images, with enhanced visibility of sulci and ventricles, reduced noise, and standardized dimensions. Representative examples of the preprocessed data are shown in Figure 4.

The training history of ResNet-152 is presented in Figure 5. Although the model was allocated 2000 epochs, training converged early at epoch 81, as the early stopping criterion was met. The model reached 72% training accuracy at the final epoch, while validation accuracy followed a similar trend with moderate fluctuations, reflecting sensitivity to data variability but no severe overfitting. The corresponding loss curves show a consistent decrease in training loss, with validation loss exhibiting higher variance but overall convergence with training loss.

The second model trained was the custom CNN, which employed a multiclass classification strategy. This model achieved the best performance with a test accuracy of 90%, confirming its effectiveness in handling the four-class Alzheimer's classification task. The training history shown in Figure 6 indicates that both training and validation accuracy stabilized at around 90%, while the loss curves converged smoothly without signs of overfitting.

Figure 4. Sample images after preprocessing steps
Figure 5. Training accuracy and loss curves of ResNet-152
Figure 6. Training accuracy and loss curves of custom CNN multiclass

The third model trained was the custom CNN with an OvO classification strategy. In this approach, six independent binary CNNs were trained, each focusing on a specific pair of classes. The training histories of the six classifiers are shown in Figure 7.
Most classifiers achieved stable convergence, with some pairs, such as ND vs ModD and MD vs ModD, reaching validation accuracies above 90%. However, classifiers involving early-stage categories, such as ND vs VMD or VMD vs MD, exhibited lower and more fluctuating validation accuracy, reflecting the inherent difficulty in distinguishing subtle changes at these stages. The classification report further supports these findings, with an overall accuracy of 85%: ND reached an F1-score of 93%, VMD 89%, MD 82%, and ModD 99%. The macro-average F1 was 73%, indicating variability across classes, while the weighted-average F1 reached 91%, confirming the strong overall performance of the OvO approach.

Figure 7. Training accuracy and loss curves of the six custom CNN OvO binary classifiers: (a) ND vs VMD, (b) ND vs ModD, (c) VMD vs ModD, (d) ND vs MD, (e) VMD vs MD, and (f) MD vs ModD

The final model developed was the custom CNN using an OvR strategy, where four binary classifiers were trained independently, each isolating one target class from the remaining categories. The training behaviors of these models are illustrated in Figure 8. Figures 8(a) and 8(b), representing the ND vs rest and ModD vs rest classifiers, show the most stable convergence, with training and validation accuracies consistently exceeding 85%. In contrast, Figure 8(c), for the MD vs rest classifier, exhibits greater variability, with noticeable fluctuations in validation accuracy, indicating the challenges in correctly separating MD samples from neighboring classes. Meanwhile, Figure 8(d), corresponding to the VMD vs rest classifier, demonstrates moderate performance and reveals sensitivity to class imbalance and overlapping early-stage features. These observations highlight the varying complexity across diagnostic boundaries within the OvR framework.
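The two decomposition strategies described above combine their binary outputs differently: the six OvO classifiers each cast one pairwise vote, while the four OvR classifiers compete on confidence. A minimal sketch of both aggregation rules (pure Python; the score dictionaries stand in for the sigmoid outputs of the trained binary CNNs and are illustrative assumptions):

```python
from itertools import combinations

CLASSES = ["ND", "VMD", "MD", "ModD"]

# The six OvO class pairs used in this study (see Figure 7).
PAIRS = list(combinations(CLASSES, 2))

def ovo_predict(pair_scores):
    """pair_scores maps each pair (a, b) to the sigmoid output of the
    binary CNN trained on that pair (> 0.5 is read as class b winning).
    Each pairwise classifier casts one vote; the most-voted class wins."""
    votes = {c: 0 for c in CLASSES}
    for (a, b), score in pair_scores.items():
        votes[b if score > 0.5 else a] += 1
    return max(CLASSES, key=lambda c: votes[c])

def ovr_predict(rest_scores):
    """rest_scores maps each class to the sigmoid output of its
    class-vs-rest CNN; the most confident classifier wins."""
    return max(CLASSES, key=lambda c: rest_scores[c])
```

Ties in the OvO vote are broken here by class order; this is one source of the inconsistent pairwise outputs and error propagation discussed in the text.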
Figure 8. Training accuracy and loss curves of the four custom CNN OvR classifiers: (a) ND vs rest, (b) ModD vs rest, (c) MD vs rest, and (d) VMD vs rest

Although the custom multiclass CNN showed the strongest and most consistent performance, the confusion matrices reveal why the OvO and OvR models performed poorly on early-stage classes. Both approaches frequently misclassified VMD and MD, which reflects the subtle morphological differences between these adjacent stages and their overlap with normal aging. The OvR models tended to favor the dominant "rest" class, reducing sensitivity to early-stage cases, while the OvO models produced inconsistent pairwise outputs that accumulated during voting and increased the likelihood of error propagation. These observations indicate that decomposition-based strategies are more vulnerable to boundary ambiguity than a unified multiclass approach. Applying statistical significance tests such as McNemar's test or paired bootstrap comparisons would help confirm whether the performance differences between models are statistically meaningful.

A consolidated comparison across all models is presented to evaluate overall diagnostic performance and error characteristics. Table 5 summarizes the results and highlights the differences among the multiclass, OvO, OvR, and ResNet-152 approaches.

Table 5. Results of performance evaluation
Model | Approach | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) | Specificity (%) | FPR (%) | FNR (%) | NPV (%)
ResNet-152 | Multiclass | 72 | - | - | - | - | - | - | -
Custom CNN | Multiclass | 90 | - | - | - | - | - | - | -
Custom CNN | OvO | 85 | - | - | - | - | - | - | -
Custom CNN | OvR | 81 | - | - | - | - | - | - | -

This study presented a stage-aware DL pipeline for AD classification on structural MRI, comparing a transfer-learning baseline ResNet-152 with three proposed classifiers: a custom multiclass CNN, a custom OvO CNN, and a custom OvR CNN.
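The McNemar comparison suggested above can be sketched on paired test-set predictions. This is an illustration of the test, not an analysis performed in the study; the function name and example data are assumptions:

```python
def mcnemar_statistic(y_true, pred_a, pred_b):
    """Continuity-corrected McNemar chi-squared statistic for two models
    evaluated on the same test set. Only discordant cases matter:
    b = samples only model A classifies correctly,
    c = samples only model B classifies correctly.
    Under H0 (equal error rates) the statistic is ~ chi-squared with
    1 degree of freedom; values above 3.84 indicate p < 0.05."""
    b = sum(1 for t, a, m in zip(y_true, pred_a, pred_b) if a == t and m != t)
    c = sum(1 for t, a, m in zip(y_true, pred_a, pred_b) if a != t and m == t)
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Because both sums ignore samples the two models agree on, the test isolates exactly the disagreements that distinguish, for example, the multiclass CNN from the OvO ensemble on the shared 20% test split.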
On the test set, the custom multiclass CNN delivered the strongest and most balanced performance, achieving 90% accuracy with consistent precision, recall, and F1-scores across the non-demented, very mild, mild, and moderate stages. This result surpassed the performance of OvO at 85%, OvR at 81%, and the baseline ResNet-152 at 72%. Model reliability was further supported by stable validation trends, with fluctuations of approximately ±2% for the custom multiclass CNN, ±3% for the OvO model, ±4% for the OvR model, and ±5% for ResNet-152. Collectively, these findings indicate that the multiclass formulation is particularly effective for early-stage Alzheimer's detection, where subtle structural differences often resemble normal aging and introduce subjectivity in clinical assessment.

When interpreted in the context of existing MRI-based Alzheimer's classification research, these results show a clear advancement. Prior studies using YOLOv5, VGG16, DenseNet-169, or ResNet-50 typically report accuracies ranging from 61% to 82%, and multi-stage models often struggle to separate adjacent stages along the clinical dementia rating scale. In contrast, the proposed custom multiclass CNN demonstrates stronger stage-specific discrimination, providing empirical evidence that a tailored architecture can leverage disease-related structural patterns more effectively than conventional transfer-learning backbones. This improvement in stage-aware performance represents a key novelty of the present work, as it shows that a dedicated multiclass architecture can simultaneously enhance accuracy, reduce inter-class confusion, and improve early-stage detection sensitivity. The clinical relevance of these gains is considerable.
Early identification of the very mild and mild demented categories is difficult even for experienced radiologists, yet the proposed model captured these borderline distinctions with greater reliability. This capability can support radiologists by reducing inter-rater variability, flagging subtle morphological changes for re-evaluation, and prioritizing patients for additional biomarker testing or timely initiation of disease-modifying therapy. Nonetheless, several challenges remain for real-world deployment. Models trained on a single dataset may fail to generalize across scanners or populations due to domain shift, and reliance solely on structural MRI may overlook early functional or metabolic abnormalities detectable by PET or functional magnetic resonance imaging (fMRI). Future work should therefore examine cross-dataset training, domain adaptation, and multimodal fusion, and should explore hybrid or ensemble architectures combined with explainable AI to further improve robustness and clinical safety.

CONCLUSION
This study introduced a stage-aware DL framework for AD classification using structural MRI, demonstrating that the proposed custom multiclass CNN achieved 90% accuracy and outperformed the OvO and OvR models as well as the ResNet-152 transfer-learning baseline. These findings highlight the model's ability to capture subtle morphological differences across Alzheimer's stages and its clinical relevance for improving early-stage detection, where symptoms often mimic normal aging. Nonetheless, several limitations remain, including reliance on a single MRI dataset, potential domain shift across scanners or populations, and the exclusive use of structural MRI, which may limit sensitivity to early pathological changes. Future research should incorporate multimodal imaging such as PET or fMRI, integrate clinical and cognitive biomarkers, perform cross-dataset or multicenter validation, and explore hybrid or ensemble architectures to enhance robustness.
With further refinement and external validation, the proposed framework has the potential to serve as a reliable decision-support tool that strengthens diagnostic consistency and facilitates earlier intervention in clinical practice.

ACKNOWLEDGMENTS
The authors would like to express their gratitude to STMIK TIME for institutional support and the research facilities provided during the completion of this study. This work was supported by the Directorate of Research, Technology, and Community Service (DRTPM), Ministry of Education, Culture, Research, and Technology of Indonesia under Research Grant No. 122/C3/DT.00/PL/2025 and Subcontract No. 69/SPK/LL1/AL.03/PL/2025.

FUNDING INFORMATION
This research received financial support from the Ministry of Education, Culture, Research, and Technology of Indonesia.

AUTHOR CONTRIBUTIONS STATEMENT
This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration. Contributing authors: Kelvin Leonardi Kohsasih, Octara Pribadi, Andy, Daniel Smith Sunario.

C: Conceptualization; M: Methodology; So: Software; Va: Validation; Fo: Formal analysis; I: Investigation; R: Resources; D: Data Curation; O: Writing - Original Draft; E: Writing - Review & Editing; Vi: Visualization; Su: Supervision; P: Project administration; Fu: Funding acquisition

CONFLICT OF INTEREST STATEMENT
Authors state no conflict of interest.

DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author, [KLK], upon reasonable request.

REFERENCES