International Journal of Electrical and Computer Engineering (IJECE), June 2025, pp. 3084-3094
ISSN: 2088-8708, DOI: 10.11591/ijece

A hybrid convolutional neural network-recurrent neural network approach for breast cancer detection through Mask R-CNN and ARI-TFMOA optimization

Keshetti Sreekala 1, Srilatha Yalamati 2, Annemneedi Lakshmanarao 3, Gubbala Kumari 4, Tanapaneni Muni Kumari 5, Venkata Subbaiah Desanamukula 6

1 Department of Computer Science and Engineering, Mahatma Gandhi Institute of Technology (A), Hyderabad, India
2 Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Guntur, India
3 Department of Information Technology, Aditya University, Surampalem, India
4 Department of Computer Science and Engineering, CMR Engineering College, India
5 Department of Master of Computer Applications, Sri Venkateswara College of Engineering, Tirupati, India
6 Department of Computer Science and Engineering, Lakireddy Bali Reddy College of Engineering (Autonomous), Mylavaram, India

ABSTRACT

This paper presents a novel hybrid deep learning-based approach for breast cancer detection, addressing critical challenges such as overfitting and performance degradation under varying data conditions. Unlike traditional methods that struggle with detection accuracy, this work integrates a unique combination of advanced segmentation and classification techniques. The segmentation phase leverages Mask region-based convolutional neural network (R-CNN), enhanced by the adaptive random increment-based tomtit flock metaheuristic optimization algorithm (ARI-TFMOA), a novel algorithm inspired by natural flocking behavior. ARI-TFMOA fine-tunes Mask R-CNN parameters, achieving improved feature extraction and segmentation precision while ensuring adaptability to diverse datasets.
For classification, a hybrid convolutional neural network-recurrent neural network (CNN-RNN) model is introduced, combining spatial feature extraction by CNNs with temporal pattern recognition by RNNs, resulting in a more nuanced and comprehensive analysis of breast cancer images. The proposed framework achieved significant advancements over existing methods, demonstrating improved performance. This hybrid integration of ARI-TFMOA and the Hybrid CNN-RNN model represents a unique contribution, enabling robust, accurate, and efficient breast cancer detection.

Article history: Received Aug 18, 2024; Revised Jan 23, 2025; Accepted Mar 3, 2025

Keywords: Adaptive random increment-based tomtit flock metaheuristic optimization algorithm; Breast cancer detection; Convolutional neural network; Mask region-based convolutional neural network; Recurrent neural network

This is an open access article under the CC BY-SA license.

Corresponding Author: Keshetti Sreekala, Department of Computer Science and Engineering, Mahatma Gandhi Institute of Technology (A), Hyderabad, India. Email: ksrikala_cse@mgit.

INTRODUCTION

Breast cancer is one of the most life-threatening diseases among women globally. According to recent statistics, early diagnosis is critical in improving patient outcomes and survival chances. However, traditional methods, such as mammography, ultrasound, and biopsy, face challenges like interpretative variability, human error, and invasiveness, limiting their effectiveness. These limitations have motivated the adoption of artificial intelligence (AI) and machine learning (ML) techniques in medical imaging, offering automated, accurate, and non-invasive diagnostic solutions. Deep learning, especially convolutional neural networks (CNNs), has transformed the field of medical image analysis.
Its ability to learn hierarchical features from complex datasets has made deep learning a prominent, pioneering approach. Unfortunately, despite their power, basic CNN models suffer from overfitting and performance drops when trained on new or diverse datasets. Existing research is primarily directed towards mammography or visible-light imaging, with thermal imaging being an under-explored technique. Traditional model-processing approaches struggle with low segmentation accuracy, overfitting, and difficulty merging spatial and temporal features for comprehensive analysis. These constraints diminish the robustness of automated solutions in heterogeneous clinical environments, where there is significant variability in imaging conditions and patient data. To solve these problems, this paper presents a hybrid deep learning (DL) framework for the detection of breast cancer. It applies the adaptive random increment-based tomtit flock metaheuristic optimization algorithm (ARI-TFMOA) to fine-tune parameters of the model, combined with Mask region-based convolutional neural network (Mask R-CNN), which offers state-of-the-art segmentation performance. This helps to increase segmentation quality without leading to overfitting. Additionally, a Hybrid CNN-RNN model is employed for classification, combining the spatial feature extraction power of CNNs with the temporal sequence analysis capabilities of RNNs, enabling more nuanced and accurate diagnoses. To enhance feature extraction and segmentation efficiency while avoiding overfitting, this paper presents a new optimization algorithm, ARI-TFMOA, alongside Mask R-CNN for segmentation and a Hybrid CNN-RNN for classification, enabling temporal and spatial features to be captured jointly. Its novelty lies in the significant improvement it introduces in diagnostic accuracy and in its flexibility relative to traditional methods.
Thus, this paper proposes an efficient breast cancer detection framework based on an advanced segmentation and classification model comprising ARI-TFMOA for optimized segmentation, followed by hybrid CNN-RNN-based classification. The paper is organized in five sections. Section 2 discusses related works. Section 3 describes the proposed method. Section 4 presents the experimental results. Section 5 concludes the paper.

RELATED WORKS

Islam et al. used a dataset of 500 patients from Dhaka Medical College Hospital to examine the F1-scores, recall, accuracy, and precision of five ML techniques: decision trees (DT), random forest classification (RFC), logistic regression, naive Bayes (NB), and XGBoost. The XGBoost model was interpreted using Shapley additive explanations (SHAP) analysis, examining factors such as tumor size. Jafari et al. explored the use of ML algorithms for early breast cancer diagnosis and categorization. Their comprehensive analysis found limitations in earlier studies on ML-based, image-processing-driven breast cancer detection. They emphasized that although DL approaches have also shown promise, previous research mostly used support vector machines (SVM), k-nearest neighbors (kNN), and decision trees. Ghadge et al. used feature extraction and dimensionality reduction from pre-trained CNN models to distinguish malignant from non-cancerous breast abnormalities. For classification, they used SVM, random forest, kNN, and NN. The NN-based classifier achieved 92% accuracy on the Radiological Society of North America dataset, 94.5% on the Mammographic Image Analysis Society dataset, and 96% on the Digital Database for Screening Mammography. Tinao et al. used kNN, DT, SVM, and logistic regression (LR) to predict breast cancer. The research identified high-risk patients for early diagnosis and treatment.
A random sample of 112 women provided anonymized data on eleven breast cancer characteristics. The kNN model performed best, correctly classifying 86.96% of instances. Sawant et al. demonstrated the use of logistic regression in classifying cancer on medical images to aid early breast cancer diagnosis. Jain et al. evaluated breast cancer prediction using SVM kernel functions and ensemble methods. Linear-kernel SVMs with bagging and radial basis function (RBF) kernel SVMs with boosting worked well on smaller datasets, while RBF-kernel-based SVM ensembles fared better on bigger datasets. Koc et al. applied SVM, DT classifiers, NB classifiers, and kNN to the Wisconsin breast cancer dataset. SVM outperformed the other methods in terms of accuracy. The study utilized Python and scikit-learn for the experiments. A breast cancer prediction system was created by Bista et al. utilizing gradient boosting ensembles, RFC, and SVM. This combination showed promise for clinical use and improved patient outcomes by improving accuracy in early diagnosis and prognosis. Kumar et al. investigated early breast cancer detection using ML. Ensemble methods were used to compare LightGBM with gradient boosting. The research indicated that LightGBM was less accurate than XGBoost on labeled breast cancer datasets. Batool and Byun employed ML to identify breast cancer. They used the Wisconsin Breast Cancer Diagnostic (WBCD) dataset to create a voting ensemble classifier utilizing an extra trees classifier, LightGBM, a ridge classifier, and linear discriminant analysis. Sasidharan and Subaji used ResNet50 and VGG16 for transfer learning to extract features from breast cancer images and applied logistic regression, SVM, extra trees classifier (ETC), and ridge classifiers in a voting ensemble, achieving accurate classification of breast cancer types and good precision for early diagnosis. Rao et al.
created an intelligent breast cancer image analysis system combining transfer learning with InceptionV3, VGG-19, and VGG-16, with ensemble stacking of multilayer perceptron (MLP) and SVM models. Their approach outperformed other diagnostic systems with an AUC of 0.947 and high accuracy. Data gathering, pre-processing, and performance assessment demonstrated its breast cancer diagnostic efficacy. Elsheakh et al. developed a wearable breast cancer detection system using a flexible, antenna-based sensor integrated into a bra. The system, tested with rubber phantoms, used 2×2 antenna sensors and CatBoost AI techniques to identify tumors, demonstrating effectiveness in early detection. Koshy and Anbarasi proposed LMHistNet, a deep neural network for breast cancer image classification utilizing asymmetric convolutions and optimization. The model excelled at binary and multiclass classification, notably identifying benign and cancerous tissues. TSDDNet by Wang et al. improves ultrasound-based breast cancer detection using weakly supervised learning. With two training cycles, the model refines coarse region-of-interest (ROI) annotations and combines detection and classification. The clinical potential of TSDDNet was shown by its better lesion identification and diagnosis on B-mode ultrasound images. Ahmad et al. presented BreastNet-SVM, a model that detects breast cancer in mammography using CNNs and SVM. The approach demonstrated automatic breast cancer detection with excellent accuracy, sensitivity, and specificity on the DDSM dataset, outperforming previous approaches. Aziz et al. designed IVNet for real-time breast cancer detection using advanced image processing and neural-network-based transfer learning. The pre-trained models used were VGG16, ResNet50, and InceptionNetV3; the system applied holistically nested edge detection and preprocessing methods, allowing precise classification of breast cancer. Gade et al.
used thermogram images to propose a multiscale analysis domain interpretable deep learning (MSADIDL) approach for breast cancer detection. This approach uses a multiscale analysis technique employing the 2D empirical wavelet transform and a DL architecture comprising seven parallel NNs. Alsaedi et al. proposed a method based on a metasurface and a microwave source to record and process the scattered electromagnetic energy from breast tissues for breast cancer detection. To refine detection accuracy, a CNN was also deployed, which attained a 93% accuracy rate for distinguishing healthy from unhealthy breasts. The CNN could also detect tumors as small as 10 mm and localize measurements of tumor size and location. Sekhar et al. addressed the challenge of lung cancer, a leading cause of global mortality, by developing a DL tool for early detection through CT scan image analysis. Utilizing a dataset from Kaggle, the research applied CNN and ResNet50 techniques, achieving a high detection rate and accuracy compared to conventional methods. Patel et al. introduced GARL-Net, a deep learning model that utilized transfer learning and a novel loss function to significantly improve breast cancer classification accuracy on histopathology images. Azour and Boukerche developed GARL-Net, a graph-based adaptive learning model that improved breast cancer classification accuracy by using a modified loss function with DenseNet121. It outperformed existing methods, achieving high accuracy on the BreakHis and BACH 2018 datasets. Wang et al. presented a multi-instance mammography dataset, with cases labeled according to pathological reports. Li et al. used CNNs for classifying breast cancer histology images into normal, benign, and malignant categories, extracting and screening image patches based on clustering and CNNs and achieving high classification accuracy. Rao et al.
applied a combination of fully connected neural networks (FCNN) and long short-term memory (LSTM) networks for cancer detection and reported good results. Kim et al. proposed an edge extraction algorithm and a modified CRNN model for breast cancer diagnosis using medical images. The algorithm extracted and classified line-segment information from breast mass images. Kumari et al. proposed novel ranking techniques in combination with ML models and reported good results.

METHOD

The proposed methodology is shown in Figure 1. The proposed methodology for breast cancer detection integrated advanced DL techniques with optimization algorithms to achieve high accuracy and robustness. Our methodology consisted of five major phases: data collection, preprocessing, abnormality segmentation, optimization, and classification. Each phase focused on solving specific problems and improving model performance. First, a dataset of breast cancer images was collected from Kaggle. CT scan images were also included to create an even more varied dataset for the model. Cleaning was performed to standardize each sample, such as resizing images to a standard resolution, normalizing pixel values, and augmenting the dataset using rotation, flipping, and scaling to enhance variability and avoid overfitting. Segmentation of abnormalities in breast cancer images was the central part of the proposed methodology. For this task, we used Mask R-CNN, which outperforms other models in generating high-quality segmentation masks. The Mask R-CNN architecture consists of a backbone network (e.g., ResNet) for feature extraction, a region proposal network (RPN) to identify candidate object regions, and a segmentation branch to produce precise masks for each detected object. To enhance the segmentation accuracy, the ARI-TFMOA was applied.
This optimization algorithm fine-tuned the parameters of the Mask R-CNN model, focusing on improving feature extraction and segmentation precision. ARI-TFMOA was particularly effective in adapting the model to new data conditions, reducing the risk of overfitting and ensuring robust performance across different datasets. In the classification step, Hybrid CNN-RNN models were used to take advantage of both the spatial and temporal features extracted from segmented images. The CNN part learned spatial features, while the RNN part (usually an LSTM or GRU) learned temporal dependencies and patterns across the images. This hybrid strategy enabled the simultaneous analysis of spatial and sequential information, enhancing sensitivity in identifying cancerous lesions. We propose a Hybrid CNN-RNN architecture that extracts high-level spatial features from segmented images and pairs them with an RNN that processes those features sequentially, keeping temporal relationships in context, followed by fully connected layers for the final classification between benign and malignant. Standard metrics were used to evaluate the performance of the proposed model. In this work, the model was validated against existing breast cancer detection methods to show its accuracy and robustness. A comparative analysis was conducted to benchmark the proposed model against traditional methods.

Figure 1. Proposed methodology for cancer detection

Data collection

The dataset used for breast cancer detection, sourced from Kaggle, included 780 breast ultrasound images representing both benign and malignant cases. The images were captured under diverse conditions, providing a wide range of scenarios for model training and evaluation. As a result, the model's diagnostic accuracy can be assessed under conditions closer to real clinical practice, across varying image qualities and patient presentations.
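Before training, a two-class image collection of this kind has to be indexed and split. The sketch below is illustrative only: the `benign/` and `malignant/` folder layout, the 80/20 split ratio, and all names are assumptions for exposition, not details taken from the paper.

```python
import os
import random

def index_dataset(root, classes=("benign", "malignant"), split=0.8, seed=42):
    """Collect (path, label) pairs and split them class-by-class, so the
    train and test sets keep roughly the same benign/malignant ratio.
    The folder layout is an assumed convention, not the paper's."""
    rng = random.Random(seed)
    train, test = [], []
    for label, cls in enumerate(classes):
        files = sorted(os.listdir(os.path.join(root, cls)))
        pairs = [(os.path.join(root, cls, f), label) for f in files]
        rng.shuffle(pairs)
        cut = int(len(pairs) * split)
        train.extend(pairs[:cut])
        test.extend(pairs[cut:])
    rng.shuffle(train)  # mix the classes within the training list
    return train, test
```

Splitting per class rather than over the pooled list keeps the class balance stable even when, as here, benign and malignant counts differ.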
Preprocessing

The preprocessing of data was essential to building a good breast cancer detection model. Images were resized to 224×224 pixels to obtain a standard dimension and normalized so that pixel values lie between 0 and 1. Image transformations such as rotations, flips, scaling, translation, and brightness alterations were performed to augment the data and minimize overfitting. In addition, as in conventional image processing and ML pipelines, Gaussian blurring and noise reduction were applied to smooth the images, and duplicates and low-quality images were deleted with the help of manual review. Segmentation masks were rescaled and normalized to the image sizes. These preprocessing steps enhanced dataset quality and consistency, directly impacting model performance and generalization.

Techniques used

Mask R-CNN for abnormality segmentation

Mask R-CNN acts as the backbone of the abnormality segmentation stage. Building upon the Faster R-CNN framework, Mask R-CNN incorporates an additional branch that predicts segmentation masks for each object instance, enabling more accurate and detailed segmentation of objects in medical imaging applications. The Mask R-CNN architecture consists of three primary components: the feature extraction backbone network (ResNet), the RPN to detect ROIs in the images, and the segmentation branch, which refines the ROIs to produce fine-grained masks. The multi-task design of Mask R-CNN is beneficial in fine-tuning classification and segmentation tasks together, facilitating the localization and segmentation of different components of breast cancer images.

ARI-TFMOA

ARI-TFMOA was used to improve the segmentation accuracy of the Mask R-CNN model.
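ARI-TFMOA is introduced by this paper and no pseudocode for it is published here, so its flock-style cycle of exploration and exploitation over hyperparameters (such as the learning rate and weight decay it tunes) can only be sketched generically. Every name, update rule, and constant below is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def flock_search(score_fn, bounds, n_birds=8, n_iters=20, seed=0):
    """Generic flock-style search: candidates drift toward the current
    best solution with random increments (exploitation) and occasionally
    restart at random (exploration). Illustrative stand-in only, NOT the
    ARI-TFMOA of the paper."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T              # bounds: [(lo, hi), ...] per dim
    flock = rng.uniform(lo, hi, size=(n_birds, len(bounds)))
    scores = np.array([score_fn(p) for p in flock])
    for _ in range(n_iters):
        best = flock[scores.argmax()]
        for i in range(n_birds):
            if rng.random() < 0.2:           # exploration: random restart
                cand = rng.uniform(lo, hi)
            else:                            # exploitation: increment toward best
                step = rng.uniform(0, 1) * (best - flock[i])
                noise = rng.normal(0, 0.05 * (hi - lo))
                cand = np.clip(flock[i] + step + noise, lo, hi)
            s = score_fn(cand)
            if s > scores[i]:                # greedy acceptance
                flock[i], scores[i] = cand, s
    return flock[scores.argmax()], scores.max()

# Toy objective standing in for validation accuracy as a function of
# (learning rate, weight decay); the optimum here is placed arbitrarily.
toy = lambda p: -((p[0] - 0.001) ** 2 * 1e6 + (p[1] - 0.01) ** 2 * 1e4)
best, best_score = flock_search(toy, [(1e-4, 1e-2), (1e-4, 1e-1)])
```

In a real pipeline, `score_fn` would train or fine-tune Mask R-CNN with the candidate hyperparameters and return a validation segmentation score, which is far more expensive than this toy objective.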
We developed this novel optimization algorithm inspired by the natural flocking behavior of tomtits, small birds with adaptive and coordinated movement patterns. To optimize Mask R-CNN, ARI-TFMOA fine-tunes model hyperparameters with a focus on the feature extraction process and segmentation precision. The algorithm learns through cycles of exploration and exploitation, continuously fine-tuning the model parameters to respond to the current state of the data and minimizing the risk of overfitting.

Hybrid CNN-RNN model for classification

During the classification stage, a Hybrid CNN-RNN model is used to learn the spatial and temporal patterns from the breast cancer images. In the CNN part, textures and patterns are extracted from segmented images. The resulting feature maps are then reshaped to match the input expected by the RNN. The RNN part processes the extracted features sequentially, learning sequential dependencies and relationships within the image data. This enables the hybrid model to address both the image's spatial structure and the sequential nature of the feature data, which may reveal important diagnostic cues. After that, the concatenated outputs of the RNN layers are passed to fully connected layers, and a softmax activation function produces a probability for each class (benign or malignant). This CNN-RNN design combines temporal dynamics in the data with spatial features from the image: the CNN captures spatial features from input images and the RNN learns temporal dependencies across the feature sequence, enhancing classification accuracy and improving breast cancer detection.

RESULTS AND DISCUSSION

Abnormality segmentation

Abnormality segmentation was an important step in the breast cancer detection model, performed to accurately localize and outline potentially cancerous tissues in regions of interest.
For the instance segmentation phase, the data was fed into Mask R-CNN due to this method's strong performance on instance segmentation tasks. Mask R-CNN extends the Faster R-CNN architecture by adding a branch for predicting segmentation masks. This architecture is well suited to tasks that need accurate localization and classification of objects in an image, and hence also to abnormality segmentation in medical images. To improve segmentation performance, the ARI-TFMOA was adopted. The proposed approach takes advantage of this novel algorithm to balance exploration and exploitation while optimizing the essential hyperparameters of Mask R-CNN. It automatically tunes parameters such as the learning rate and weight decay to accomplish effective feature extraction and high segmentation performance. The approach also generalizes over various training configurations, does not suffer from overfitting, and outperforms traditional optimization algorithms in terms of segmentation accuracy. To extract feature maps from input images, an existing backbone network, i.e., a deep-CNN-based ResNet-50 with a feature pyramid network (FPN), was used. This combination captured rich, multi-scale feature representations from the images. The RPN screened these feature maps, producing a set of object proposals (regions potentially containing abnormalities) and calculating an objectness score that represented the likelihood of each region containing an object. Another addition in Mask R-CNN is the RoIAlign layer, which deals with the misalignment introduced by the RoIPooling layer of Faster R-CNN; it ensures that each region of interest is accurately aligned with the feature maps, thus improving the segmentation masks.
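The RoIAlign idea just mentioned, sampling region features at real-valued coordinates instead of quantizing the box to the feature-map grid as RoIPooling does, can be illustrated with a toy bilinear crop. This is a simplified single-sample sketch; the real layer samples several points per output bin and averages them.

```python
import numpy as np

def roi_align(fmap, box, out_size=7):
    """Toy RoIAlign-style crop: bilinearly sample an out_size x out_size
    grid inside box = (y0, x0, y1, x1), given in (possibly fractional)
    feature-map coordinates. No coordinate rounding takes place, which is
    the key difference from RoIPooling."""
    y0, x0, y1, x1 = box
    ys = np.linspace(y0, y1, out_size)
    xs = np.linspace(x0, x1, out_size)
    h, w = fmap.shape
    out = np.empty((out_size, out_size))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            yf, xf = int(np.floor(y)), int(np.floor(x))
            yc, xc = min(yf + 1, h - 1), min(xf + 1, w - 1)
            dy, dx = y - yf, x - xf
            out[i, j] = (fmap[yf, xf] * (1 - dy) * (1 - dx)
                         + fmap[yc, xf] * dy * (1 - dx)
                         + fmap[yf, xc] * (1 - dy) * dx
                         + fmap[yc, xc] * dy * dx)
    return out

# On a linear ramp (value = row index), bilinear sampling is exact, so a
# box from row 2.5 to row 5.5 yields values running linearly 2.5 -> 5.5.
ramp = np.outer(np.arange(10.0), np.ones(10))
crop = roi_align(ramp, (2.5, 2.5, 5.5, 5.5))
```

Because the fractional box corners are honored, the crop is pixel-accurate, which is exactly what makes the predicted masks line up with the underlying features.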
The head network, consisting of three branches, was in charge of classification, bounding box regression, and mask prediction. The classification branch assigned a class label (e.g., benign, malignant) to each ROI, the bounding box regression branch refined the coordinates of the bounding boxes for each ROI, and the mask branch predicted a binary mask for each ROI, indicating the exact pixels that belonged to the detected object. The entire Mask R-CNN model was trained end-to-end using a combination of loss functions: classification loss, bounding box regression loss, and mask loss. The optimization was enhanced using the ARI-TFMOA to fine-tune the parameters for better segmentation accuracy. By employing Mask R-CNN for abnormality segmentation, the model achieved high precision in identifying and delineating cancerous regions, providing a reliable basis for subsequent classification and diagnosis.

Classification using hybrid CNN-RNN

A hybrid CNN-LSTM model was implemented at the breast cancer classification phase to take advantage of both CNNs and RNNs. The combination allows CNNs to extract the spatial features of each segmented image and LSTMs to extract the temporal dependencies from those spatial features, increasing the accuracy of the model in classifying abnormalities. For the CNN part, we built a deep architecture based on VGGNet, which has been shown to be simple yet effective for visual feature extraction. In particular, the architecture consisted of a number of convolutional layers, each followed by a ReLU activation function to introduce non-linearity, and max-pooling layers to downsample the feature maps. The convolutional layers used 3×3 kernels, and the number of filters increased in deeper layers (e.g., first layer: 64 filters; second layer: 128 filters).
Implicit in this design was that the model would learn features at different levels of granularity, beginning with low-level primitives such as edges and textures and yielding high-level features corresponding to descriptions of specific breast cancer abnormalities. After feature extraction, the attributes from the CNN were flattened and fed into fully connected layers, further abstracting the features. The output of the CNN was then reshaped to match the shape required by the LSTM component, an important step for detecting time-related relationships in the extracted features. The LSTM component, consisting of multiple LSTM layers, learned temporal dependencies and patterns that could exist not just within the features but also across sequences of features. Forget, input, and output gates in each LSTM layer allowed the model to learn to retain relevant information across long sequences and discard unrelated data accordingly. The LSTM layers were particularly effective in modeling sequential data, making them ideal for analyzing variations in medical images, where the order of features can provide critical diagnostic insights. The final output from the LSTM layers was passed through a fully connected layer with a softmax activation function, which produced the classification result. This layer distinguished between classes such as benign and malignant lesions, providing the final diagnosis. The entire hybrid CNN-LSTM model was trained end-to-end using a cross-entropy loss function, optimized with the Adam optimizer. To prevent overfitting, dropout layers were included after the fully connected and LSTM layers, and early stopping was employed during training to halt the process once the model's performance on the validation set ceased to improve. Additionally, hyperparameter tuning was conducted to identify the optimal number of LSTM units, the learning rate, and other critical parameters.
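The hand-off described above, CNN feature maps reshaped into a sequence for the LSTM and a softmax head at the end, can be sketched shape-wise in plain numpy. All sizes here (7×7 maps with 128 channels, a 64-unit final hidden state) are illustrative assumptions, and the LSTM itself is elided.

```python
import numpy as np

def maps_to_sequence(feature_maps):
    """(batch, H, W, C) -> (batch, H*W, C): each spatial position of the
    CNN feature map becomes one step of the sequence fed to the LSTM."""
    b, h, w, c = feature_maps.shape
    return feature_maps.reshape(b, h * w, c)

def softmax_head(last_hidden, weights, bias):
    """Fully connected layer + softmax over the two classes
    (benign / malignant), as in the final classification layer."""
    logits = last_hidden @ weights + bias
    z = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return z / z.sum(axis=1, keepdims=True)

# Example: a batch of 4 feature maps such as a VGG-style CNN might
# produce, and a pretend 64-unit LSTM final hidden state.
rng = np.random.default_rng(0)
seq = maps_to_sequence(rng.normal(size=(4, 7, 7, 128)))
probs = softmax_head(rng.normal(size=(4, 64)),
                     rng.normal(size=(64, 2)), np.zeros(2))
```

The reshape is the only glue the two halves need: the LSTM sees 49 "time steps" of 128 features each, and its last hidden state feeds the softmax head.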
Evaluation and validation

The breast cancer detection model was tested on an independent test dataset consisting of various breast cancer images (benign and malignant). An independent dataset is essential to prevent overfitting and to ensure that the performance of the model is not due to simply memorizing the training data. Evaluation metrics such as sensitivity and specificity were selected to gain detailed insight into the diagnostic power of the model, including its ability to accurately differentiate between positive and negative examples, as shown in Figure 2. The confusion matrix was also examined to see what types of errors the model was making, providing useful insight into where improvements could be made. The proposed model achieved an accuracy of 95.6%, demonstrating its ability to correctly classify most images. Precision was 93.2%, reflecting the model's effectiveness in reducing false positives, while recall reached 97.1%, indicating its high sensitivity in detecting true positives. The F1-score of 95.1% underscored a balanced performance between precision and recall. Additionally, the model's ROC-AUC value of 0.98 showcased its excellent ability to differentiate between benign and malignant cases across various thresholds.

Figure 2. Results with proposed model

Comparison of proposed model with traditional approaches

Table 1 shows the comparison of the proposed method with traditional approaches. Figure 3 shows the accuracy and precision comparison, and Figure 4 shows the recall and F1-score comparison. The comparative analysis of the proposed Hybrid CNN-RNN model against traditional CNN architectures (Standard CNN, ResNet, and VGGNet) reveals the superior performance of the Hybrid CNN-RNN across key metrics.

Table 1.
Comparison of proposed model with traditional approaches (accuracy, precision, recall, F1-score, and ROC-AUC for the Proposed Model, Standard CNN, ResNet, and VGGNet; the values are summarized in the discussion below)

Figure 3. Evaluation of models with (a) accuracy comparison and (b) precision comparison

The proposed model achieved the highest accuracy at 95.6%, outperforming the other models, with the Standard CNN at around 89%, ResNet at 91.5%, and VGGNet at around 90%. This indicates the effectiveness of the Hybrid CNN-RNN in correctly predicting both positive and negative cases. Additionally, the model demonstrated a precision of 93.2%, significantly higher than the other models, whose precision fell as low as 85.2%, highlighting its ability to minimize false positives. The recall of the Hybrid CNN-RNN was also superior at 97.1%, indicating its strength in detecting true positive cases and reducing false negatives, compared to the lower recall rates of the other models, which peaked near 92%. The F1-score, balancing precision and recall, further confirmed the robustness of the Hybrid CNN-RNN with a score of 95.1%, well above the scores of the traditional models. Finally, the ROC-AUC score of 0.98 for the Hybrid CNN-RNN reflects its strong ability to distinguish between classes, outperforming the other models, whose scores started at 0.92. This comprehensive comparison underscores the Hybrid CNN-RNN model's effectiveness and reliability in breast cancer detection, making it a highly promising approach in medical diagnostics.

Figure 4. Evaluation of models with (a) recall comparison and (b) F1-score comparison

CONCLUSION

This work proposed a hybrid approach for breast cancer detection that effectively addressed challenges of accuracy, overfitting, and adaptability. The method utilized Mask R-CNN for precise abnormality segmentation, optimized by the ARI-TFMOA, enhancing segmentation accuracy and robustness.
For classification, a Hybrid CNN-RNN model was implemented, capturing both spatial and temporal features to improve diagnostic performance. The proposed model achieved superior results compared to traditional methods, with an accuracy of 95.6%, precision of 93.2%, recall of 97.1%, F1-score of 95.1%, and ROC-AUC of 0.98. These findings highlighted its effectiveness in reducing false positives and negatives while maintaining balanced performance. The comparative analysis demonstrated the model's superiority over the Standard CNN, ResNet, and VGGNet. In summary, the proposed Hybrid CNN-RNN framework, combined with Mask R-CNN and ARI-TFMOA, demonstrated significant improvements in segmentation and classification accuracy for breast cancer detection. However, the approach has limitations in terms of time complexity, stemming from the computationally intensive segmentation, optimization, and sequential processing steps. Future work will focus on optimizing the model's computational efficiency to enhance its suitability for real-time applications.

ACKNOWLEDGMENTS

The authors would like to acknowledge the public availability of the BreakHis dataset, which was essential for conducting this research.

FUNDING INFORMATION

Authors state no funding involved.

AUTHOR CONTRIBUTIONS STATEMENT

This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration.
Authors: Keshetti Sreekala, Srilatha Yalamati, Annemneedi Lakshmanarao, Gubbala Kumari, Tanapaneni Muni Kumari, Venkata Subbaiah Desanamukula

CRediT roles: C: Conceptualization; M: Methodology; So: Software; Va: Validation; Fo: Formal analysis; Investigation; Resources; Data Curation; Writing - Original Draft; Writing - Review & Editing; Vi: Visualization; Su: Supervision; P: Project administration; Fu: Funding acquisition

CONFLICT OF INTEREST STATEMENT

Authors state no conflict of interest.

DATA AVAILABILITY

The dataset that supports the findings of this study is openly available.

REFERENCES