International Journal of Electrical and Computer Engineering (IJECE), August 2025. ISSN: 2088-8708. DOI: 10.11591/ijece.

Multi-layer convolutional autoencoder for recognizing three-dimensional patterns in attention deficit hyperactivity disorder using resting-state functional magnetic resonance imaging

Zarina Begum, Kareemulla Shaik
School of Computer Science and Engineering, VIT-AP University, Amaravati, Andhra Pradesh, India

ABSTRACT
Attention deficit hyperactivity disorder (ADHD) is a neurological disorder that develops over time and is typified by impulsivity, hyperactivity, and attention deficiency. Recent studies using functional magnetic resonance imaging (fMRI) have revealed noticeable changes in patterns of brain activity, particularly in the prefrontal cortex. Machine learning algorithms show promise in distinguishing ADHD subtypes based on these neurobiological signatures. However, the inherent heterogeneity of ADHD complicates consistent classification, while small sample sizes limit the generalizability of findings. Additionally, methodological variability across studies contributes to inconsistent results, and the opaque nature of machine learning models hinders the understanding of underlying mechanisms. We propose a novel deep learning architecture to overcome these issues by combining spatio-temporal feature extraction and classification through a hierarchical residual convolutional noise reduction autoencoder (HRCNRAE) and a 3D convolutional gated memory unit (GMU). This framework effectively reduces spatial dimensions, captures key temporal and spatial features, and utilizes a sigmoid classifier for robust binary classification. Our methodology was rigorously validated on the ADHD-200 dataset across five sites, demonstrating enhancements in diagnostic accuracy ranging from 1.26% to 9.6% compared to existing models.
Importantly, this research represents the first application of a 3D convolutional GMU for diagnosing ADHD with fMRI data. The improvements highlight the efficacy of our architecture in capturing complex spatio-temporal features, paving the way for more accurate and reliable ADHD diagnoses.

Article history: Received Nov 5, 2024; Revised Apr 8, 2025; Accepted May 24, 2025

Keywords: Attention-deficit/hyperactivity disorder classification; Deep learning; Gated memory unit; Hierarchical residual convolutional noise reduction; Resting-state functional magnetic resonance imaging

This is an open access article under the CC BY-SA license.

Corresponding Author: Kareemulla Shaik, School of Computer Science and Engineering, VIT-AP University, Andhra Pradesh 522237, India. Email: kareemulla.shaik@vitap.

INTRODUCTION
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental condition that affects children, with a significant proportion of cases persisting into adulthood. Research indicates that approximately 65% of individuals diagnosed in childhood continue to experience symptoms into adulthood, resulting in substantial economic and psychological ramifications for both patients and their families. Clinical judgment is the primary method of diagnosis, which introduces subjectivity and variability into the process. Timely intervention depends on early identification, but the interpretive flexibility of the diagnostic and statistical manual makes the diagnostic procedure difficult. Although a thorough assessment by qualified experts is required, inconsistencies are exacerbated by disparities in training. Like autism spectrum disorder, ADHD has recognizable patterns of brain activity that facilitate diagnosis. New developments in machine learning (ML) and medical imaging present encouraging paths for more objective diagnosis. Brain data is being analyzed using methods including structural magnetic resonance imaging (sMRI) and functional magnetic resonance imaging
(fMRI), which improve diagnostic accuracy. For instance, Yilin et al. used graph convolutional networks (GCN) to examine functional connectivity data, which improved the classification of ADHD by emphasizing the distinctions between typically developing peers and those who have ADHD. A similar model, an attention-based spatial-temporal network, was presented by Qiu et al., using resting-state fMRI data with time division multiplexing to capture temporal fluctuations and adaptive functional connectivity generation (AFCG) for spatial correlation analysis. Through the integration of spatial and temporal data, the adaptive spatial-temporal neural network (ASTNet) surpasses earlier approaches, opening the door to more precise and impartial diagnosis of ADHD. Recent advancements in ML and deep learning (DL) have significantly enhanced the diagnosis of ADHD and other psychiatric disorders through the integration of neuroimaging data. For example, Hatami et al. showed how to improve the diagnosis of major depressive disorder by using fMRI data in conjunction with the MobileNet V2 model and the data processing and analysis for brain imaging toolbox. Liu et al. developed the multimodal generative fusion framework, integrating fMRI and sMRI data through multi-task learning to generate paired data, improving diagnostic accuracy. Alsharif et al. developed an ML-based decision system that achieved 91% accuracy on standard ADHD datasets, lowering the subjectivity of traditional assessment, and Agarwal et al. used a dual approach, combining image-based DL models and graph-based networks to analyze fMRI connectivity matrices, showing that different brain atlas and connection matrix selections improve classification accuracy. In contrast to conventional ReHo techniques, Gylhan and ynzmen showed that maintaining spatial information improves classification by using 3D convolutional neural networks (CNNs) to evaluate fractional amplitude of low-frequency fluctuation data.
In order to improve brain activity investigation in ADHD cases, Saurabh and Gupta reshaped 4D images and utilized a modified bidirectional long short-term memory (BLSTM) model to interpret resting-state fMRI data. In order to achieve high computational efficiency, Salah et al. combined fMRI and optical amplification data in their residual-learning-layer-based ADHD screening model. In an effort to create "explainable AI," Amado-Caballero et al. evaluated demographic parameters influencing ADHD diagnosis using CNN visualization techniques including occlusion maps. Using a transformer-based model on ADHD-200 fMRI data, Qin et al. combined phenotypic and fMRI data to obtain 74.5% classification accuracy, surpassing earlier methods. To facilitate effective feature extraction, a 4D CNN model was also applied, capturing both spatial and temporal dimensions with an observed accuracy of 71.3%. Around 800 people from various universities contributed sMRI and resting-state fMRI data to the ADHD-200 consortium, which was created by the INDI in 2011. This allowed researchers worldwide to improve ADHD classification algorithms by using consistent datasets. This information made it easier to create sophisticated models. For example, Mao et al. achieved improved classification accuracy by combining 3D CNNs for spatial extraction with a long short-term memory (LSTM) model to capture temporal relationships. In order to evaluate demographic aspects influencing ADHD diagnosis, Liu et al. investigated CNN visualization approaches, such as occlusion maps; this work contributed to the developing topic of "explainable AI," which improves the interpretability of DL models in clinical practice.
This study improves the detection of ADHD by introducing three major contributions: i) a convolutional gated memory unit (GMU) to capture spatial and temporal information; ii) a multi-layer convolutional denoising autoencoder network to recognize 3D spatial patterns in resting-state functional magnetic resonance imaging (rs-fMRI) data; and iii) a validated framework evaluated across various sites. The overview of the algorithm, its conceptual underpinnings, the experimental setting, and its consequences and future directions are covered in the remainder of the paper.

ALGORITHM
The algorithm for classifying ADHD from resting-state fMRI data extracts spatial features using a hierarchical residual convolutional noise reduction autoencoder (HRCNRAE), which is then strengthened by residual connections to preserve significant information. During training, the model reconstructs the input while measuring reconstruction loss, and a convolutional GMU records temporal dependency. Then, using global average pooling, people are categorized as either ADHD-positive or not, offering a quick and effective way to examine intricate neuroimaging data.

Algorithm 1: Spatio-temporal feature extraction and ADHD classification
Input: X_input: 4D resting-state fMRI (rs-fMRI) data
Step 1: Data regularization. Add Gaussian noise for regularization: X_noisy = X_input + N_noise
Step 2: Feature encoding with HRCNRAE. Pass X_noisy through CP block 1: F1 = H_cp(X_noisy). Process through CP block 2: F2 = H_cp(F1). Process through CP block 3: F3 = H_cp(F2).
Step 3: Incorporate residual blocks. Combine features: F_residual_outer = F3 + H_skip(F1)
Step 4: Feature decoding. Decode 1: D1 = H_ud(F_residual_outer). Decode 2: D2 = H_ud(D1). Final output: Y_reconstructed = H_ud(D2)
Step 5: Compute loss function: calculate the reconstruction loss.
Step 6: Spatio-temporal feature extraction with GMU. For each time step t:
Reset gate: r_t = σ(W_r · x_t + U_r · h_{t−1})
Update gate: z_t = σ(W_z · x_t + U_z · h_{t−1})
Candidate hidden state: h̃_t = tanh(W_h · x_t + U_h · (r_t ⊙ h_{t−1}))
Final hidden state: h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t
Step 7: Classification using global average pooling: O = GAP(h_T)
Output: Predicted ADHD label (1 for ADHD, 0 for non-ADHD).

PROPOSED METHOD
DL has been used extensively for a variety of applications because it is effective at extracting features from huge datasets. It is distinguished by its complexity and more accurate predictions compared to conventional machine learning methods. This research presents a network architecture that combines a convolutional GMU with an HRCNRAE to obtain joint spatio-temporal features from rs-fMRI data, which are then applied to the classification of ADHD. As illustrated in Figure 1, the encoder part of the autoencoder first extracts high-level spatial characteristics from the rs-fMRI. The autoencoder incorporates a residual network to further capture deeper spatial features. The convolutional GMU then organizes and processes these spatial characteristics in a temporal manner, capturing both spatial and temporal dynamics at the same time. The GMU-extracted characteristics are subjected to global average pooling (GAP) before being passed to a sigmoid classifier for the final ADHD classification.

Figure 1. Flowchart of the proposed classification algorithm

Hierarchical residual convolutional noise reduction autoencoder
The HRCNRAE is a new method that combines residual networks with convolutional denoising autoencoders (CDAE) to extract spatial information from unlabeled rs-fMRI data. Its main goal is to recover high-level spatial characteristics while lowering dimensionality so that the convolutional GMU can more easily extract spatiotemporal features.
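To make the data flow of Algorithm 1 concrete, the following toy sketch runs the noise-injection, gated-recurrence, pooling, and sigmoid-classification steps on random data. This is not the authors' implementation: the 3D convolutions and the HRCNRAE encoder/decoder are replaced by dense matrix products and an identity placeholder, and all dimensions, weights, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gmu_step(x_t, h_prev, Wr, Wz, Wh):
    """One gated-memory-unit step (Algorithm 1, Step 6), dense instead of convolutional."""
    xh = np.concatenate([x_t, h_prev])
    r = sigmoid(Wr @ xh)                                       # reset gate r_t
    z = sigmoid(Wz @ xh)                                       # update gate z_t
    h_cand = np.tanh(Wh @ np.concatenate([x_t, r * h_prev]))   # candidate state
    return (1 - z) * h_prev + z * h_cand                       # final hidden state h_t

# Toy dimensions (assumed): d-dim spatial feature per time step, k-dim hidden state.
d, k, T = 6, 4, 10
x = rng.normal(size=(T, d))                    # stand-in for encoded fMRI features
x_noisy = x + 0.1 * rng.normal(size=x.shape)   # Step 1: Gaussian-noise regularization

# Step 5 analogue: reconstruction loss of a (here, identity) autoencoder.
recon = x_noisy                                # placeholder for HRCNRAE encode/decode
loss = np.mean((recon - x) ** 2)

# Steps 6-7: run the GMU over time, then global-average-pool and classify.
Wr = rng.normal(scale=0.1, size=(k, d + k))
Wz = rng.normal(scale=0.1, size=(k, d + k))
Wh = rng.normal(scale=0.1, size=(k, d + k))
h = np.zeros(k)
states = []
for t in range(T):
    h = gmu_step(x_noisy[t], h, Wr, Wz, Wh)
    states.append(h)
pooled = np.mean(states, axis=0)               # global average pooling over time
w_out = rng.normal(scale=0.1, size=k)
p_adhd = sigmoid(w_out @ pooled)               # sigmoid classifier
label = int(p_adhd > 0.5)                      # 1 for ADHD, 0 for non-ADHD
```

In the paper's model, the dense products inside `gmu_step` would be 3D convolutions over feature maps produced by the HRCNRAE encoder, and the weights would be trained rather than random.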
HRCNRAE uses skip connections to preserve high-level feature extraction while using the residual learning technique first presented by He et al. to avoid the vanishing gradient issue in deep networks. The encoder, decoder, and residual blocks make up the HRCNRAE architecture, as seen in Figure 2. There are three CP segments with data filtering and reduction layers in the input processing unit, and three unpooling-deconvolution (UD) segments with reverse filtering and data expansion in the output processing unit. An external bypass with two CP and two UD segments and an internal bypass with one CP and one UD segment are the two bypass architectures that guarantee effective data flow. The system performance is improved by these bypasses, which allow input and output units to be trained simultaneously.

Figure 2. Organization of HRCNRAE

The provided rs-fMRI data are denoted

X = [x_1, ..., x_p], x_i ∈ R^(60×72×60)

where the time period, denoted by p, varies depending on the site. The input of HRCNRAE is first obtained by adding random noise following a standard normal distribution, which acts as regularization. As a result, we get

X̃ = [x̃_1, ..., x̃_p], x̃_i ∈ R^(60×72×60)

To extract the enhanced pattern h, the input processing section of HRCNRAE transforms the original data into a condensed representation, h = f_e(X̃). The interim pattern h is reconstructed into y through HRCNRAE's output processing section, y = f_d(h). Since the goal of HRCNRAE's training is to reduce the discrepancy between the cleaned and rebuilt data, the loss is described as

loss = ||y − x||²

Max pooling takes the maximum value that can be produced within the pooling window. Let s_(k−1,i) ∈ R^(l×m×n) be the previous layer's i-th feature map, and let s_(k,i) ∈ R^(l×m×n) represent the i-th feature map in the k-th layer. The components of the i-th feature map of the k-th layer can be calculated as
s_(k,i)(u1, u2, u3) = max over the pooling window of s_(k−1,i)(u1, u2, u3)

To perform deconvolution, the deconvolution kernel slides across the features, whereas the unpooling process zero-fills the latent features to return the feature map to its original size. Three CP blocks make up the encoder, and the i-th CP block's output is

F_i = H_cp(F_(i−1)), i = 1, 2, 3

where H_cp represents a composite function of convolution, rectified linear unit (ReLU) activation, and pooling. For i = 1, F_(i−1) represents the noisy input image; for all other values it is the output of the preceding CP block. The skip-connection's output is

S_(I or E) = ReLU(Σ_j (Θ_j ⊗ F_i) + B_j)

The internal residual block is represented when S_(I or E) = S_I and i = 2, and the outer residual block when i = 1 and S_(I or E) = S_E. The j-th convolution kernel is indicated by Θ_j, B_j represents the bias, and j indexes the channels. The ReLU activation adds a small degree of sparsity to the trained network and is defined as

ReLU(x) = max(0, x)

Three symmetric UD blocks make up the decoder component. The output of the first UD block is

G_1 = H_UD(F_3)

The outcome of the i-th UD block is

G_i = H_UD(G_(i−1) + S), i = 2, 3

where H_UD stands for the composite operation that combines upsampling, deconvolution, and ReLU, and S stands for the output from the shortcut convolution layer. It should be noted that the deconvolution-upsampling blocks belong to the decoder, whereas the convolution-pooling blocks indicated above are part of the encoder. Two nested residual blocks in the residual structure connect the encoder and decoder. First, the internal residual block creates a skip-connection between the encoder and decoder, which is established by

R_1 = S_1 + G_1

The outer residual block's output is shown as
R_E = S_E + G_2

Gated convolutional memory unit
The GMU is a sophisticated, parameter-efficient variant of the traditional recurrent neural network (RNN) that successfully resolves long-sequence dependence and gradient-vanishing problems. In this study, spatial information must be incorporated in order to extract spatiotemporal characteristics from rs-fMRI data using a convolutional GMU. The convolutional GMU processes spatial and temporal features at the same time, in contrast to a conventional GMU. The information flow is managed by means of reset and update gates, where the reset gate eliminates some previous data and the update gate regulates the amount of historical data that is kept. Both gates process the preceding output h_(t−1) and the current input x_t at time t using a sigmoid activation function. The update gate z_t regulates the amount of the prior data that is retained, while the reset gate r_t establishes how much of it is considered. Convolving with x_t and multiplying r_t by h_(t−1) yields the candidate hidden state h̃_t. The previous hidden state and the candidate hidden state are combined to determine the final hidden state h_t:

r_t = σ(W_r * [x_t, h_(t−1)])

z_t = σ(W_z * [x_t, h_(t−1)])

h̃_t = tanh(W * [x_t, r_t ⊙ h_(t−1)])

h_t = (1 − z_t) ⊙ h_(t−1) + z_t ⊙ h̃_t

The convolution operation is represented by *, the Hadamard product by ⊙, and the sigmoid activation function by σ. The weights that correspond to r_t, z_t, and h̃_t are W_r, W_z, and W.

Experimental setup
Data description
The developed method is analyzed and processed using the Python simulation platform. The publicly available ADHD-200 database, which contains fMRI images, was collected and examined for this study in order to carry out the experiment. The dataset comprised imaging and phenotypic data (gender, age, IQ, handedness) from 973 people spread over eight worldwide sites.
Of these, 585 had typically developing (TD) profiles, 26 had an unidentified diagnosis, and 362 had ADHD. Three sites (Brown, Pittsburgh, and Washington University) were excluded because of missing diagnostic data, whereas data from five sites (Peking, NeuroIMAGE, KKI, OHSU, and NYU) were examined. The data processing assistant for resting-state fMRI (DPARSF) tools, which include noise reduction, spatial smoothing, normalization to MNI space, slice-timing correction, and head motion correction, were applied to pre-process the rs-fMRI data. Images containing artifacts and participants who moved their heads excessively were excluded; after preprocessing, 93,650 frames were kept for training the HRCNRAE model, as shown in Table 1.

Table 1. Building data using the ADHD-200 dataset
Site | Training set (ADHD / Control / Total) | Testing set (ADHD / Control / Total)
NYU | - | -
OHSU | - | -
Peking | - | -
NeuroIMAGE | - | -
KKI | - | -
Total | - | -

Training of models
The encoder for the HRCNRAE model is made up of three CP blocks with convolution kernel sizes of 3×3×3, 3×3×3, and 2×2×2, and feature maps set to 16, 32, and 96. The stride for all convolution layers is 1×1×1; the pooling layers have the same padding but kernel sizes and strides of 2×2×2, 3×3×3, and 2×2×2. The decoder mirrors this structure. The learning rate was set at 0.001 and decreased exponentially by a factor of 0.9 during the 220 epochs of training, which used a batch size of 50. The Adam optimizer was employed to improve every network component. A learning rate of 0.00001 and a batch size of 8 were used to refine the convolutional GMU. The fully connected layer was subjected to a dropout rate of 0.5 and L2 regularization in order to avoid overfitting. An overview of the model's design is shown in Table 2.

Table 2.
The network's specifics
Layer type | Output size | Filter size, stride
Input layer | 62×70×62 | -
CP Block 1 | 16, 30×36×30 | 3×3×3 Conv, stride 1; 2×2×2 max pool, stride 2
External residual block | 16, 1×1×1 | 1×1×1 Conv, stride 1
CP Block 2 | 32, 10×12×10 | 3×3×3 Conv, stride 1; 3×3×3 max pool, stride 3
Internal residual block | 32, 1×1×1 | 1×1×1 Conv, stride 1
CP Block 3 | 96, 5×6×5 | 2×2×2 Conv, stride 1; 2×2×2 max pool, stride 2
UD Block 1 | 96, 10×12×10 | 2×2×2 DeConv, stride 1; 2×2×2 unpool, stride 2
Internal residual block | 32, 1×1×1 | -
UD Block 2 | 32, 30×36×30 | 3×3×3 DeConv, stride 1; 3×3×3 unpool, stride 3
External residual block | 16, 1×1×1 | -
UD Block 3 | 16, 60×72×60 | 3×3×3 DeConv, stride 1; 2×2×2 unpool, stride 2

RESULTS AND DISCUSSION
We evaluated the proposed model's performance using the test set derived from the ADHD-200 dataset. Since generalization ability is essential when using machine learning to diagnose disorders, the model's efficacy was evaluated using classification performance across several sites. The next sections present the findings from a variety of experimental tests.

Visualization results
To evaluate the model's performance, feature maps and convolution weights from the CP blocks were visualized. Figure 3 presents 16 neural weights from the initial CP block, initialized using the Xavier method. The variation in grayscale shades highlights how the trained weights capture richer and more diverse feature representations. Figure 4 emphasizes the extraction of spatial information, and Figure 5 demonstrates notable variations between individuals with ADHD and those with TD.

Figure 3. Weights assigned to the convolution layer in the first CP block
Figure 4. Feature maps produced by the initial CP block
Figure 5.
The first filter was used to create differential feature maps for the third CP block

Choosing the regularization parameters
The convolutional GMU's accuracy under various regularization settings is displayed in Figure 6, underscoring the important influence that parameter choice has on classification outcomes. As the regularization parameter rises, accuracy increases to a peak, but when the parameter exceeds 3, performance suffers, most likely as a result of overfitting. Smaller values, on the other hand, result in excessive, redundant features and lower performance.

Figure 6. The accuracy associated with distinct regularization settings

Model ablation
The residual block
The HRCNRAE model shows improved classification performance compared to the convolutional denoising autoencoder (CDAE), largely due to the integration of a residual block. This residual block enhances feature preservation across layers, allowing the model to learn more complex patterns without losing important information. As shown in Figure 7, evaluation metrics such as the area under the receiver operating characteristic (ROC) curve (AUC) clearly demonstrate the effectiveness of this architectural improvement.

Figure 7. The contrast between HRCNRAE and CDAE

The number of CP blocks
We also ran two other schemes, using two blocks and one block, for contrast in order to find the ideal number of CP blocks. Table 3 displays the findings. Accuracy, sensitivity, and specificity are the classification assessment indices used in evaluation. The probability that the model will accurately categorize TD and ADHD is known as accuracy, and it is described as

Accuracy = (TP + TN) / (TP + TN + FP + FN)
The likelihood of correctly identifying ADHD is represented by sensitivity, which can be computed as

Sensitivity = TP / (TP + FN)

Specificity is a measure of the model's ability to accurately identify TD and is calculated as

Specificity = TN / (TN + FP)

The study used the TP, TN, FP, and FN indicators to assess ADHD categorization performance. Using 10,000 bootstrapped samples, accuracy, sensitivity, and specificity were computed with 90% confidence intervals. Table 3 illustrates how additional CP blocks increased accuracy, with three blocks yielding 72.44% accuracy and 74.18% specificity. Computational constraints prevented testing of more blocks.

Table 3. Comparative analysis of different numbers of convolution-pooling blocks
Number of CP blocks | Accuracy | Sensitivity | Specificity
One | - | - | -
Two | - | - | -
Three | - | - | -

With its residual connections and hierarchical structure, HRCNRAE provides faster convergence, increased stability, and improved noise suppression, making it well suited for complicated workloads. The simpler and more efficient CDAE, on the other hand, can have trouble with complex noise patterns and deeper representations. Figure 7 demonstrates that HRCNRAE is more resilient than CDAE, achieving a higher AUC.

Choosing techniques for extracting spatiotemporal features
Contrast between convolutional LSTM and convolutional GMU
Using the same datasets, the efficacy of the convolutional GMU and the convolutional LSTM was compared. Table 4 demonstrates that the convolutional GMU surpassed the convolutional LSTM in classification accuracy and specificity, despite having a marginally lower sensitivity. The GMU had superior overall performance and stability, presumably as a result of the LSTM's complexity and larger number of parameters, which make training more challenging. The LSTM attained 72.14% accuracy and 72.24% specificity.

Table 4. Convolutional LSTM vs.
convolutional GMU comparison
Methodology | Accuracy | Specificity | Sensitivity
Convolutional LSTM | - | - | -
Convolutional GMU | - | - | -

GMU and convolutional GMU comparison
The effectiveness of the convolution operation is assessed by comparing the efficiency of the convolutional GMU with the standard GMU for classifying ADHD. Both models are trained on the same training set and evaluated on the same testing set. Table 5 shows the experiment's results.

Table 5. Comparison of GMU and convolutional GMU
Methodology | Accuracy | Sensitivity | Specificity
GMU | - | - | -
Convolutional GMU | - | - | -

Comparison with other methods
The suggested algorithm for classifying ADHD was contrasted with five state-of-the-art techniques: CDAE-AdaDT, MKL, MDS-SVM, 3D-CNN, and 4D-CNN. In accuracy, specificity, and sensitivity, it outperformed MDS-SVM, 3D-CNN, MKL, and 4D-CNN, as seen in Table 6. However, in specificity, it performed somewhat worse than CDAE-AdaDT, most likely as a result of variations in generalization across different sites. Figure 8 shows a graphic comparison of these findings.

Table 6. Results of classification on the ADHD-200 dataset using different methods
Method | Accuracy | Specificity | Sensitivity
4D-CNN | - | - | -
MDS-SVM | - | - | -
3D-CNN | - | - | -
CDAE | - | - | -
MKL | - | - | -
Our method | - | - | -

Figure 8. Comparing the results of various methods

CONCLUSION
The four-dimensional data recorded using resting-state functional magnetic resonance imaging (rs-fMRI) incorporates one-dimensional temporal information alongside three-dimensional spatial details. Traditional methods often reduce this 4D data to 2D or 3D formats for categorization, which may cause a substantial loss of information. We provide a novel classification method to address this problem that utilizes HRCNRAE and a convolutional GMU, effectively preserving the comprehensive 4D structure of rs-fMRI images.
Through the utilization of the temporal and spatial dynamics present in the data, our approach improves the model's capacity to precisely detect ADHD. The results of the experiments show that the suggested method performs well in cross-site classification tasks, greatly increasing the classification accuracy of ADHD. While this study focuses on ADHD, future research will aim to broaden the application of this methodology to other neurodevelopmental disorders, thereby enhancing its generalizability and potential impact within the field.

ACKNOWLEDGEMENTS
Our research endeavors were greatly aided by the resources and invaluable support provided by VIT-AP University, for which we are grateful.

FUNDING INFORMATION
Authors state no funding involved.

AUTHOR CONTRIBUTIONS STATEMENT
This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration. Authors: Zarina Begum, Kareemulla Shaik.

C: Conceptualization; M: Methodology; So: Software; Va: Validation; Fo: Formal analysis; I: Investigation; R: Resources; D: Data Curation; O: Writing - Original Draft; E: Writing - Review & Editing; Vi: Visualization; Su: Supervision; P: Project administration; Fu: Funding acquisition

CONFLICT OF INTEREST STATEMENT
Authors state no conflict of interest.

DATA AVAILABILITY
The data that support the findings of this study are available on request from the corresponding author, KS. The data, which contain information that could compromise the privacy of research participants, are not publicly available due to certain restrictions.

REFERENCES