International Journal of Electrical and Computer Engineering (IJECE), August 2025, pp. 3779-3794. ISSN: 2088-8708, DOI: 10.11591/ijece.

Enhancing ultrasound image quality using deep structure of residual network

Ade Iriani Sapitri, Siti Nurmaini, Muhammad Naufal Rachmatullah, Annisa Darmawahyuni, Firdaus Firdaus, Anggun Islami, Bambang Tutuko, Akhiar Wista Arum
Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, Indonesia

ABSTRACT
Ultrasonography, a medical imaging technique, is often affected by various types of noise and low brightness, which can result in low image quality. These drawbacks can significantly impede accurate interpretation and hinder effective medical diagnosis. Improving image quality is therefore an essential aspect of ultrasound systems. This study aims to enhance the quality of ultrasound images using deep learning (DL). The experiment is conducted on a custom dataset of 2,175 infant heart ultrasound images collected from Indonesian hospitals, and the model is subsequently generalized to other datasets. We propose an enhanced deep residual convolutional neural network (EDR-CNN) to improve image quality. After the enhancement process, our model achieved peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) scores of 38.35 and 0.92, respectively, outperforming other methods. Benchmarking on other ultrasound medical images indicates that the proposed model performs well, as evidenced by higher PSNR, higher SSIM, lower mean square error (MSE), and higher contrast improvement index (CII). In conclusion, this study also surveys the forthcoming trends in advancing low-illumination image enhancement, along with the prevailing challenges and potential directions for further research.
Received Sep 2, 2024; Revised Apr 5, 2025; Accepted May 24, 2025

Keywords: Deep learning; Enhancement; Medical image; Residual network; Ultrasonography

This is an open access article under the CC BY-SA license.

Corresponding Author:
Siti Nurmaini
Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya
Palembang, South Sumatera, Indonesia
Email: siti_nurmaini@unsri.
Journal homepage: http://ijece.

INTRODUCTION
Enhancing the quality of medical images, particularly in ultrasound systems, is a crucial area of focus in the medical field, primarily because of the extensive use of ultrasound technology in medical diagnosis and a wide range of medical procedures. Ultrasound images often suffer from issues such as noise and blur, which can significantly impede medical interpretation and accurate diagnosis. Several factors contribute to the low quality of ultrasound images. One of these is acoustic variability, where the ultrasound image is affected by acoustic variations in the body, causing noise in the image. In addition, signal deficiencies or attenuation may occur as the ultrasound signal passes through different body tissues, and resolution limitations may arise due to patient movement, suboptimal positioning, or the use of less sophisticated ultrasound technology. These problems can significantly affect the quality of ultrasound images and make medical interpretation and diagnosis challenging. To address this, many research studies have focused on developing methods to improve the quality of ultrasound images.

Traditional methods for low-light enhancement fall into two broad categories: histogram equalization and Retinex models. The histogram technique aims to enhance the contrast of an image by redistributing the intensity values of its pixels, resulting in a brighter and more vibrant image.
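As a concrete reference point, the classical histogram-equalization baseline described above can be sketched in a few lines of NumPy. This is a simplified illustration under our own naming, not the implementation used in any of the works discussed here:

```python
import numpy as np

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Classical histogram equalization for an 8-bit grayscale image:
    pixel intensities are remapped through the normalized cumulative
    distribution function (CDF), flattening the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                     # first non-zero CDF entry
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)   # lookup table: old -> new
    return lut[img]

# A dark, low-contrast synthetic image: intensities confined to [40, 80].
rng = np.random.default_rng(0)
dark = rng.integers(40, 81, size=(64, 64), dtype=np.uint8)
eq = histogram_equalization(dark)                 # spans the full [0, 255] range
```

The sketch also makes the limitation discussed below visible: the remapping is global, so any noise present in the dark image is stretched along with the signal.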
A typical approach based on Retinex models, by contrast, decomposes a low-light image into two components, reflection and illumination, using priors or regularizations; the estimated reflection component is taken as the enhanced result. However, both methods have several limitations. First, histogram equalization and Retinex models can be particularly problematic for images containing sharp edges and textures, which may become exaggerated and appear artificial. Second, the existing noise in the images is often disregarded, so it either persists or is amplified in the enhanced results. Third, the complex optimization processes involved in Retinex models, and the computation of image histograms and cumulative distribution functions in histogram equalization, increase the computational time. Finally, both approaches necessitate manual adjustments and interventions to accommodate specific scenarios or edge cases.

In recent times, profound advancements have been made in image enhancement techniques leveraging DL, specifically convolutional neural networks (CNNs). These remarkable developments have shown great potential in reducing noise, augmenting resolution, and mitigating artifacts. As a result, CNNs have emerged as highly promising solutions to the persistent challenge of enhancing ultrasound image quality, ultimately leading to substantial improvements in diagnostic accuracy. Research has also focused on developing more specialized and effective DL techniques to improve ultrasound image quality, such as applying transfer learning to existing DL architectures. These advances continue to improve the quality of ultrasound images and the accuracy of medical diagnoses and interventions.
This research contributes as follows: i) proposing a custom deep residual CNN architecture to improve the quality of ultrasound images in terms of both performance and efficiency; ii) validating the effectiveness of the proposed model through rigorous experiments and benchmarks against established state-of-the-art DL models; iii) demonstrating the proposed custom architecture on four ultrasound datasets, i.e., infant heart, fetal heart, fetal head, and abdomen; and iv) implementing the proposed model for predicting fetal heart substructure abnormality.

MATERIALS AND METHOD
The general methodology of our experiment is depicted in Figure 1 and consists of four main stages. In the first stage, data acquisition is carried out: data were obtained from several previously collected ultrasound videos, extracted frame by frame, and cropped to the heart object. The second stage involves data pre-processing, where low-resolution data are generated using noise injection and high-resolution data are generated as targets; this process involves labeling the data for further analysis. In the third stage, the proposed deep learning algorithm, an enhanced residual CNN architecture, is applied. The final stage involves performance measurement, where the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), mean square error (MSE), and contrast improvement index (CII) values are calculated to evaluate the performance of the proposed deep-learning model. Together, these steps constitute the general methodology of our experiment.

Figure 1. The general methodology of our experiment

Data acquisition
The dataset used in this research consists of medical images acquired from ultrasound devices, including images of infant hearts, fetal hearts, fetal heads, and abdomens.
While the infant and fetal heart datasets are custom-made from Indonesian hospitals, the remaining datasets are sourced from public repositories. The data distribution of our proposed model can be found in Table 1. The study was carried out in two main stages: i) in the first stage, we focused on generating an image enhancement model using images of infant hearts; ii) subsequently, the enhancement model that demonstrated the best performance was selected and applied to the other datasets for benchmarking purposes.

Table 1. Data distribution for the learning process across the four USG objects (infant's heart, fetal heart, fetal head, abdomen), split into training, validation, testing, and unseen sets

The custom infant heart dataset was obtained from the Dr. Mohammad Hoesin general hospital, Indonesia. Videos were captured using a Philips EPIQ Elite ultrasound machine, with video file sizes ranging from 30-50 MB. To generate a reliable image enhancement model, we used 1,171 training images and 976 validation images. All the image data used to build the image enhancement model originated from images of a child's heart. A sample image of an infant's heart used for model enhancement development is shown in Figure 2.

Figure 2. A sample image of an infant's heart as raw data for the development of the model enhancement

Data pre-processing
From the data acquisition process, several previously collected ultrasound videos were available. Data are extracted frame by frame, and all frames are cropped to 800x600. The dataset is then processed to produce high-resolution and low-resolution images as input for model training, as shown in Figure 3. High-resolution images are the target output of the image processing process, and low-resolution images are generated as the input for the image enhancement model. The low-resolution inputs are produced by noise injection, simulating the noise that often occurs in medical ultrasound data and that can give the raw image a smoother or blurrier appearance.
Different types of noise are added, such as Poisson, salt-and-pepper, or speckle noise. We control the intensity and type of the added noise using different settings to achieve the desired level of degradation. High-resolution images serve as the target result of image processing, while low-resolution images are the input needed to train the model to learn patterns in low-resolution data and improve image quality. The results of the pre-processing stage are visualized in Figure 4. After low-resolution images are fed into the model, a multiscale information representation process is performed, combining information from various spatial scales and extracting relevant features to improve image quality. By exploiting both low- and high-resolution image types, the enhancement model can effectively improve image quality and produce high-quality images.

Figure 3. Data pre-processing process

Figure 4. High-resolution (original) image and low-resolution images produced by adding a controlled amount of noise: Poisson (lambda = 3.0), speckle, Gaussian (mean = 0, variance = 50), and salt-and-pepper

Image enhancement model
This study proposes a customized enhanced deep residual convolutional neural network (EDR-CNN) architecture, as shown in Figure 5. The structure is modified by emulating the residual network (ResNet), which is designed to address the challenge of image noise. The EDR-CNN structure focuses on convolution widening, enabling networks to expand their visual range without compromising image resolution. It uses multi-resolution branches with different resolutions to handle information at various scales. The network's bypass connections enable the learning of more complex features and accelerate network convergence.
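The noise-injection step above can be sketched as follows. The Gaussian and Poisson parameter values mirror those listed in Figure 4; the function names and the salt-and-pepper amount are our own illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian(img, mean=0.0, var=50.0):
    """Additive Gaussian noise (sigma = sqrt(var)), clipped to [0, 255]."""
    noisy = img.astype(np.float64) + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper(img, amount=0.02):
    """Flip a random fraction of pixels to 0 (pepper) or 255 (salt)."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0
    noisy[mask > 1 - amount / 2] = 255
    return noisy

def add_speckle(img, var=0.05):
    """Multiplicative speckle noise: I * (1 + n), with n ~ N(0, var)."""
    noisy = img.astype(np.float64) * (1 + rng.normal(0, np.sqrt(var), img.shape))
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_poisson(img, lam=3.0):
    """Poisson (shot) noise, scaled by lam as a stand-in for photon noise."""
    scaled = img.astype(np.float64) / 255.0 * lam
    noisy = rng.poisson(scaled) / lam * 255.0
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Degrade a synthetic "clean" frame the way a low-resolution input is built.
clean = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
degraded = add_salt_pepper(add_gaussian(clean))
```

Stacking two or more of these corruptions, as in the last line, is one way to produce the low-resolution inputs paired with clean targets for training.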
To improve model performance and efficiency, the modified structure includes depthwise-separable convolution and attention modules. The network structure is shown in Table 2. Depthwise convolution is used to reduce the computational complexity of the convolutional layers and the number of parameters required by the model. In addition, the proposed model is modified with an upsampling technique called sub-pixel convolution to increase the image resolution after super-resolution processing. This technique estimates new pixels by learning the spatial patterns present in the low-resolution image. The structure of the EDR-CNN model comprises convolutions with 3x3 filters, followed by the LeakyReLU activation function, trained with the Charbonnier loss:

L(y_hat, y) = sqrt((y_hat - y)^2 + eps^2)    (1)

where y_hat and y are the predicted and target images, respectively, and eps is a small constant that keeps the loss differentiable near zero.

Figure 5. The proposed EDR-CNNs architecture

Table 2. The proposed EDR-CNN network: input_1 (InputLayer), followed by conv2d through conv2d_4, each followed by LeakyReLU, with a residual Add connection after conv2d_4; then conv2d_5 through conv2d_8, each followed by LeakyReLU, and conv2d_9 followed by a second residual Add. All intermediate layers have output shape (None, None, None, C)

After several convolution layers, residual linking is performed by adding the output of an earlier layer to the output of the last layer before the next convolution layer. This architectural design helps the network learn more detailed features and preserve the original information of the input. The final output passes through a last convolution layer and a sigmoid activation function to produce the enhanced image prediction. The total number of parameters in the model is 263,971, including the weights and biases of each convolution layer. The proposed model was trained on hardware equipped with TensorFlow 2.7 and an NVIDIA RTX 2080 Ti GPU. All data have an initial input size of 128x128 for the training and validation process, and the model was trained with the Charbonnier loss and the Adam optimizer. The enhanced deep residual CNN is based on multiple residual blocks with batch normalization layers for consistent results; our implementation includes only 16 residual blocks with 64 filters.

Evaluation of image quality metrics
The evaluation step compares the performance of the proposed method against other image quality improvement methods. The peak signal-to-noise ratio (PSNR) is used to measure the quality of the reconstructed image. Mathematically, PSNR is expressed as:

PSNR = 10 log10(max(I)^2 / MSE)    (2)

where max(I) is the maximum possible pixel value of the image I and MSE is the mean square error. The lower the MSE, the higher the PSNR.
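The Charbonnier training loss, the PSNR metric, and the sub-pixel (depth-to-space) upsampling step described above can be sketched in NumPy as follows. This is a simplified sketch under our own conventions, not the authors' TensorFlow implementation:

```python
import numpy as np

def charbonnier(y_pred, y_true, eps=1e-3):
    """Charbonnier loss: mean of sqrt((pred - target)^2 + eps^2),
    a smooth, differentiable variant of the L1 loss."""
    diff = y_pred.astype(np.float64) - y_true.astype(np.float64)
    return float(np.mean(np.sqrt(diff ** 2 + eps ** 2)))

def psnr(y_true, y_pred, max_val=255.0):
    """PSNR in decibels: 10 * log10(max^2 / MSE)."""
    err = np.mean((y_true.astype(np.float64) - y_pred.astype(np.float64)) ** 2)
    if err == 0:
        return float("inf")
    return float(10.0 * np.log10(max_val ** 2 / err))

def depth_to_space(x, r):
    """Sub-pixel (pixel-shuffle) upsampling: (H, W, C*r*r) -> (H*r, W*r, C).
    Each group of r*r channels is rearranged into an r x r spatial block."""
    h, w, c = x.shape
    assert c % (r * r) == 0
    out_c = c // (r * r)
    x = x.reshape(h, w, r, r, out_c)
    x = x.transpose(0, 2, 1, 3, 4)        # interleave the r x r sub-pixels
    return x.reshape(h * r, w * r, out_c)

# Tiny self-checks on synthetic data.
img = np.zeros((4, 4))
shifted = np.full((4, 4), 255.0)
worst = psnr(img, shifted)                            # 0 dB: maximal error
pix = depth_to_space(np.arange(4.0).reshape(1, 1, 4), 2)
```

A framework implementation would typically use the built-in pixel-shuffle op (e.g., `tf.nn.depth_to_space`) instead of the manual reshape shown here.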
SSIM is an image processing metric that measures the structural similarity between the actual image and the reconstructed image, for example after image compression or image up-/down-sampling. Mathematically, SSIM can be represented as:

SSIM(x, y) = ((2 mu_x mu_y + c1)(2 sigma_xy + c2)) / ((mu_x^2 + mu_y^2 + c1)(sigma_x^2 + sigma_y^2 + c2))    (3)

where mu_x and mu_y are the mean intensities of the two images, sigma_x^2 and sigma_y^2 their variances, sigma_xy their covariance, and c1 and c2 small stabilizing constants. Equation (3) represents the comparison between the two images for SSIM.

MSE serves image recovery and image enhancement: it measures the average squared difference between the original pixel values and the predicted pixel values in the enhanced image:

MSE = (1/n) * sum_{i=1..n} (Y_i - Y_hat_i)^2    (4)

where n is the number of data points, Y_i are the original pixels of the image, and Y_hat_i are the pixel predictions from the model.

The CII evaluates the competitiveness of various contrast enhancement techniques, the most well-known benchmark being the image enhancement measure. It is used to compare the results of contrast enhancement methods as a ratio:

CII = C_processed / C_original    (5)

where C is the average value of the local contrast measured with a 3x3 window as (max - min) / (max + min), and C_processed and C_original are the average local contrast of the output and original images, respectively.

RESULTS AND DISCUSSION
Model enhancement with infant heart object
In this section, experiments are conducted to create an image enhancement model that yields satisfactory performance in terms of PSNR, SSIM, MSE, and CII. To build such a model, only infant heart objects are used. Curating an infant heart dataset for constructing high-fidelity DL models tailored for clinical deployment necessitates images of sufficiently high quality; this enables the algorithm to discern both common and distinct patterns among images and facilitates precise measurements. Five DL model structures have been developed, and the best-performing structure is selected from Table 3. The models were trained on small 256x256 patches extracted from infant heart images. The training process spanned 50 epochs with a batch size of 4. Notably, the EDR-CNN architecture incorporated the Charbonnier loss as the loss function and was optimized with the Adam optimizer.

Table 3. The five models generated to select the best structure:
- CNNs (16 layers): Conv + Batch Normalization + ReLU blocks with residual additions; ReLU output activation; MSE loss; Adam optimizer.
- ResNet (50 layers): Conv blocks with Batch Normalization, ReLU, and residual additions; Softmax output activation; MSE loss; Adam optimizer.
- Model 1 (12 layers): three groups of four Conv layers, each group followed by ReLU and a residual addition; ReLU output activation; MSE loss; Adam optimizer.
- Model 2 (10 layers): two groups of five Conv layers with residual additions; Sigmoid output activation; Charbonnier loss; Adam optimizer.
- EDR-CNN (10 layers): Conv layers interleaved with LeakyReLU, with residual additions after Conv 5 and Conv 10; Sigmoid output activation; Charbonnier loss; Adam optimizer.

Through the analysis of the five model structures, we observe diverse performance outcomes in PSNR, SSIM, MSE, and CII values. As the results show, the custom EDR-CNN structure stands out, surpassing the performance of the other models: its PSNR values approach 40 dB, and it demonstrates comparable performance on the other metrics as well (Figure 6). This compelling performance suggests that the proposed model warrants further evaluation to ascertain its resilience and ability to generalize across various medical objects.

Figure 6. The performance validation of the selected models (PSNR, SSIM, MSE, and CII for the original image, CNNs, Model 1, Model 2, and EDR-CNN)

Model evaluation
This section reports the evaluation and interpretation of our model's performance. We benchmark the EDR-CNN results against various methods, including contrast limited adaptive histogram equalization (CLAHE), bilateral filter, MirNet, low-light convolutional neural network (LLCNN), and Autoencoder, as seen in Table 4. Four performance values indicate the best image enhancement architecture: higher PSNR values indicate better image quality; SSIM values range from -1 to 1, where 1 indicates identical images; lower MSE values indicate better image quality; and higher CII values indicate better contrast. We can observe that the EDR-CNN produces outstanding results, with 38.35 PSNR, 0.92 SSIM, 9.66 MSE, and 1.97 CII. All the test values demonstrate satisfactory results in comparison to the other five methods.

Table 4. Model image enhancement performance (PSNR, SSIM, MSE, CII) for CLAHE, bilateral filter, MirNet, LLCNN, Autoencoder, and EDR-CNN

The graphs of loss and PSNR during the training and validation process indicate a consistent improvement in the quality of medical images throughout the enhancement process. Figure 7 visualizes the results of the model comparison using MirNet, LLCNN, Autoencoder, and EDR-CNN. The graphical representation highlights a notable upward trajectory in PSNR, signifying enhanced image quality. The PSNR value varies with each iteration, but for the EDR-CNN method it does not show significant fluctuations (Figure 7). As the model improves, the PSNR values increase, indicating better image fidelity, and the loss curve displays a gradual decline, eventually converging toward zero with a smooth progression. A lower loss indicates that the model's predictions are getting closer to the actual values; the EDR-CNN model effectively learns and adapts its parameters, leading to the desired decrease in loss.

Figure 8 showcases the heightened image quality achieved by several image enhancement models. It can be seen from the figure that all deep-learning-based low-illumination enhancement methods effectively improve the clarity of an infant's heart, although the visual effect differs between them. These techniques showcase the advances in using deep learning to improve image quality by enhancing details, reducing noise, and mitigating other image-related issues. Each approach has unique characteristics and applications within the realm of image enhancement. Nevertheless, in terms of PSNR, SSIM, and MSE performance, EDR-CNN surpasses the other methods. In this section, we undertook a comprehensive benchmarking analysis using the latest advancements in the field.
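For reference, the evaluation metrics used throughout this comparison (MSE, SSIM, and CII) can be sketched in NumPy. Two caveats: the SSIM below uses global image statistics rather than the sliding Gaussian window of the standard implementation, and the CII windowing details reflect our own reading of the definition:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def ssim_global(a, b, max_val=255.0):
    """Single-window (global) SSIM; the standard metric averages this
    quantity over local Gaussian windows instead."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

def cii(enhanced, original):
    """Contrast improvement index: ratio of mean local contrast
    (max - min) / (max + min) computed over 3x3 windows."""
    def mean_local_contrast(img):
        img = img.astype(np.float64)
        h, w = img.shape
        contrasts = []
        for i in range(h - 2):
            for j in range(w - 2):
                win = img[i:i + 3, j:j + 3]
                lo, hi = win.min(), win.max()
                if hi + lo > 0:
                    contrasts.append((hi - lo) / (hi + lo))
        return float(np.mean(contrasts)) if contrasts else 0.0
    return mean_local_contrast(enhanced) / mean_local_contrast(original)

# Synthetic check: shifting intensities down keeps the spread but lowers the
# local (max + min) sums, so the relative local contrast, and CII, rise.
base = np.arange(25, dtype=np.float64).reshape(5, 5) + 100.0
enh = base - 50.0
```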
This analysis involves a meticulous evaluation and comparison of our proposed model against well-established deep-learning approaches within the domain of enhancement. To facilitate this assessment, we employed four distinct datasets: the infant heart dataset used for the development of our enhancement model, and three additional datasets (fetal heart, fetal head, and abdomen) to thoroughly evaluate the model's generalization. To ensure the fairness of the test, all methods were tested in the same hardware environment after reaching their optimal level of training. The experimental outcomes yielded noteworthy insights in terms of the PSNR, SSIM, and MSE metrics across all four datasets; details for the various techniques are shown in Figure 9. We observe that our proposed EDR-CNN model consistently delivers satisfactory results across the entire dataset, achieving higher PSNR, higher SSIM, and decreased MSE. This indicates that the suggested model consistently produces favorable performance. Evaluating the enhanced images across the different techniques and datasets, the results are promising.

To verify the performance of the proposed EDR-CNN image enhancement technique, the model was applied before segmenting and detecting the 10 objects of the fetal heart substructure: the left atrium (LA), right atrium (RA), left ventricle (LV), right ventricle (RV), tricuspid valve (TV), pulmonary valve (PV), mitral valve (MV), interventricular septum (IVS), aorta (AO), and a hole (defect). The experiments used the YOLACT (You Only Look At CoefficienTs) framework, an algorithm designed for real-time object detection that combines object detection with instance segmentation; its goal is to concurrently identify and segment objects present in an image. Detecting the fetal heart substructure objects proves highly challenging due to their small size, the poor quality of the images, and shapes that change dynamically with fetal movements. While detection often performs well during testing with validation data, its effectiveness tends to decline, or even fail, when faced with unseen data. An image sample is visualized in Figure 10.

Figure 7. Training and validation loss and PSNR over epochs for the infant heart object, for (a) MirNet, (b) LLCNN, (c) Autoencoder, and (d) the EDR-CNN structure

Figure 8. Sample infant heart image enhancement with the target and the six existing models: CLAHE, bilateral filter, MirNet, LLCNN, Autoencoder, and EDR-CNN

Figure 9. Performance of the proposed model evaluated against other models on various datasets: (a) PSNR, (b) SSIM, and (c) MSE

We attempted to perform detection and segmentation of these fetal heart substructures using training, validation, and unseen data. The data sample comprises about 243 images for training, 58 images for testing, and 28 images of unseen data. The segmentation results before and after the image enhancement process, along with predictions on unseen data, were compared against several previously employed models, as presented in Table 5. As the results indicate, the combined YOLACT and EDR-CNN model produces satisfactory performance in terms of mean average precision (mAP) for bounding box (Bbox) and masking (Mask). Of the nine fetal heart substructures, AO, LA, MV, and IVS produce a detection rate above 60%, while LV, PV, RA, and RV produce detection rates in the range of 44-55%. However, TV is hard to detect, reaching only around 27%.
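The overlap criterion behind these Bbox mAP figures is the intersection-over-union (IoU) between a predicted and a ground-truth box. A minimal sketch (our own helper for illustration, not part of YOLACT):

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    A detection typically counts as correct when its IoU with the ground
    truth exceeds a threshold such as 0.5, which is what makes very small
    objects like fetal heart substructures hard to score."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted by half its width overlaps the truth with IoU = 1/3,
# already below a 0.5 acceptance threshold.
shifted_iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))
```

For a tiny structure, a localization error of only a few pixels moves the IoU below threshold, which is consistent with the low detection rate reported for the tricuspid valve.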
The size of the fetal heart varies with the gestational age of the fetus, around 60-100 millimeters; during different stages of pregnancy, the fetal heart undergoes significant growth and development. Hence, since the fetal heart substructures are even smaller, attaining an overlap of approximately 30%-50% (Bbox and Mask) between the ground truth and the predicted image proves to be quite challenging. On unseen data, the nine substructures can still be segmented and detected. This suggests that the proposed model delivers satisfactory detection performance for the fetal heart substructures, both on validation data and on unseen data, with results that outperform the other methods.

Figure 10. Sample annotation of the 10 fetal heart substructures; this fetal heart has a defect (hole) in the ventricle

Table 5. Fetal heart substructure detection (Bbox and Mask mAP, %) using EDR-CNN image enhancement and the YOLACT model with unseen data, comparing YOLACT alone against YOLACT combined with CLAHE, bilateral filter, MirNet, LLCNN, Autoencoder, and EDR-CNN

In this research, the proposed model was also examined for its ability to detect abnormalities in the structure of the fetal heart. In our previous investigation, our accomplishments remained constrained, yielding results on validation data exclusively. This limitation arose from the difficulty of identifying fetal hearts in unseen data, which occasionally led to instances of failure. Nevertheless, through the implementation of the enhancement process, the detection outcomes witnessed a notable advancement, reaching 60%-70% Bbox and Mask mAP (see Table 6). This marked improvement is a significant result and surpasses the other methods.

Although the proposed method offers numerous benefits, there are still several limitations in this research. The dataset used for comparison remains relatively restricted, encompassing only four types of medical objects, and the number of benchmarking methods is limited to five. Integrating currently advanced image enhancement methods as supplementary benchmarks could be considered. Additionally, there is a dearth of definitive outcomes, such as predictions or determinations concerning the anomalies present within these medical objects. It is worth noting that research in the realm of medical imaging fundamentally serves the purpose of facilitating diagnosis. The use of the enhancement methods for classification or detection of diseases will be subject to further exploration in forthcoming investigations.

Table 6. Fetal heart abnormality (defect) detection with unseen data (Bbox and Mask mAP, %) for YOLACT alone and YOLACT combined with CLAHE, bilateral filter, MirNet, LLCNN, Autoencoder, and EDR-CNN

CONCLUSION
In conclusion, improving the quality of medical images, especially ultrasound images, is very important for accurate medical diagnoses and procedures. Ultrasound images often suffer from low quality due to acoustic variability, signal deficiency, attenuation, and limited resolution; these problems can inhibit interpretation and medical diagnosis. To overcome this challenge, researchers have explored various methods, including image processing techniques and deep learning algorithms such as CNNs. These techniques have shown promise in reducing noise, increasing resolution, and reducing artifacts in ultrasound images, thereby increasing diagnostic accuracy. Recent advances have focused on developing specialized deep learning techniques, such as transfer learning on the proposed enhanced residual CNN (EDR-CNN) architecture.
This architecture uses techniques such as residual learning, extended convolution, and channel-attention mechanisms to improve image quality. The enhanced images were then tested in a dedicated detection model for the structures of the fetal heart object. The results of this study can contribute to the development of technology to improve the quality of medical images in the future. Detection of the fetal heart structures improved after enhancement compared with before, and this improvement in turn increases the accuracy of the detection-based diagnosis. By gaining a clearer understanding of the strengths and weaknesses of various deep learning architectures, medical professionals can make informed decisions about the most effective approaches to improve the quality of medical images and, consequently, medical diagnoses.

ACKNOWLEDGMENTS
This work was supported by Universitas Sumatera Selatan and the Intelligent Systems Research Group (ISysRG) at Universitas Sriwijaya.

FUNDING INFORMATION
This research was funded by the Ministry of Education, Culture, Research, and Technology as a Matching Fund (Kedai Reka Project). This work was also supported by Sriwijaya University through the Professional Research 2023 funding.

AUTHOR CONTRIBUTIONS STATEMENT
This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration.
Authors: Ade Iriani Sapitri, Siti Nurmaini, Muhammad Naufal Rachmatullah, Annisa Darmawahyuni, Firdaus Firdaus, Anggun Islami, Bambang Tutuko, Akhiar Wista Arum

CRediT roles: C: Conceptualization; M: Methodology; So: Software; Va: Validation; Fo: Formal analysis; I: Investigation; R: Resources; D: Data Curation; O: Writing - Original Draft; E: Writing - Review & Editing; Vi: Visualization; Su: Supervision; P: Project administration; Fu: Funding acquisition

CONFLICT OF INTEREST STATEMENT
Authors state no conflict of interest.

INFORMED CONSENT
We have obtained informed consent from all individuals included in this study.

ETHICAL APPROVAL
All methods were conducted according to relevant guidelines and regulations. All experimental protocols were approved by Dr. Mohammad Hoesin Hospital, Indonesia. Informed consent was obtained from all subjects and/or their legal guardians.

DATA AVAILABILITY
- The code is available at https://github.com/BestJuly/LLCNN, while the dataset is available at https://github.com/ISySRGg/LLCNNs.
- Any additional information required to reanalyse the data reported in this paper is available from the lead contact upon request.

REFERENCES