International Journal of Electrical and Computer Engineering (IJECE)
Vol. 15, No. 5, October 2025, pp. 4942-4953
ISSN: 2088-8708, DOI: 10.11591/ijece

Real time object detection for advanced driver assistance systems using deep learning techniques

Sudarshan Sivakumar, Shikha Tripathi
Department of Electronics and Communication, PES University, Bangalore, India

Article Info

Article history:
Received May 25, 2025
Revised Jul 3, 2025
Accepted Jul 12, 2025

Keywords:
Advanced driver assistance
Anti-lock braking system
ARS 620 RADAR
Convolutional neural networks
Electronic control unit
Hardware in loop
SVC 300 camera

ABSTRACT

Object detection plays a critical role in advanced driver assistance systems (ADAS), where timely and accurate detection of objects on the road is essential for vehicular safety. In this study, we propose and evaluate deep learning-based object detection techniques, specifically convolutional neural networks (CNNs) and dense neural networks, for real-time object detection. The proposed model is trained on a publicly available image dataset, demonstrating its potential to enhance the reliability of ADAS without the use of an image preprocessing block. Here the system stops automatically without any human intervention. Our results highlight the strengths and limitations of using the CIFAR-10, CIFAR-100, and YOLO datasets for transfer learning, pre-training, and classification. Improvements in model optimization and hardware integration have been achieved using a hardware-in-the-loop (HIL) setup. The models are evaluated on the CIFAR-10, CIFAR-100, and YOLO datasets, with a focus on the impact of image preprocessing on detection accuracy and speed. Experimental results show that the proposed algorithm outperforms previous methods by achieving better accuracy, contributing to a safer and more robust system without an additional image preprocessing block.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Sudarshan Sivakumar
Department of Electronics and Communication, PES University
100 Feet Ring Road, Banashankari Stage 3, Dwaraka Nagar, Bangalore, Karnataka 560085, India
Email: sudarshansivakumar99@gmail.com

INTRODUCTION

Road safety remains a global concern, with traffic-related incidents resulting in significant loss of life and property. In response, advanced driver assistance systems (ADAS) have emerged as a critical component of modern vehicles, aiming to reduce human error and improve driving outcomes through the integration of sensing and decision-making technologies. Among these, object detection plays a vital role in recognizing pedestrians, vehicles, traffic signs, and obstacles, enabling timely interventions such as automatic braking or steering corrections. Recent advancements in deep learning have enabled substantial progress in computer vision tasks, including object detection and classification. Convolutional neural networks (CNNs) have proven highly effective in extracting spatial features from visual data, while recurrent neural networks (RNNs) offer temporal modeling capabilities beneficial for dynamic driving environments. This paper proposes a CNN-RNN hybrid model for object detection, with a focus on its implementation in an embedded automotive environment. Object detection is a fundamental task in computer vision that involves identifying and localizing objects within images or video frames.
With rapid advancements in deep learning, object detection techniques have achieved remarkable accuracy and are now widely deployed in various real-world applications, including security surveillance, robotics, healthcare, and intelligent transportation systems. One of the most critical applications of object detection is in ADAS, where the ability to detect and recognize vehicles, pedestrians, traffic signs, and other road elements in real time is essential for ensuring driver and passenger safety. These systems rely on accurate visual perception to assist with tasks such as lane departure warnings, collision avoidance, and automated braking. Rapid advancement in automotive technology has led to the development of ADAS, which aims to reduce human errors and enhance driving safety. Object detection is a fundamental component of ADAS, enabling vehicles to identify pedestrians, other vehicles, traffic signs, and obstacles on the road. Various detection techniques, including image processing, machine learning, and deep learning, have been applied to improve accuracy and efficiency. Traditional object detection techniques rely on image processing and feature extraction methods such as edge detection, color segmentation, and template matching. These methods often require handcrafted features and are sensitive to changes in lighting and environmental conditions. Machine learning algorithms, such as support vector machines (SVM) and random forests (RF), improve object detection by learning patterns from labeled data. However, these methods still rely on feature extraction and are limited in their ability to handle complex environments. Deep learning has revolutionized object detection in ADAS by leveraging CNNs and other neural network architectures. Popular models such as Faster R-CNN, you only look once (YOLO), and single shot multibox detector (SSD) have demonstrated high accuracy and real-time performance. These models automatically learn hierarchical features and can adapt to various driving conditions. There is increasing demand from customers for an enhanced ADAS experience with improved safety for drivers and pedestrians. In 2022 there were only 2 original equipment manufacturers (OEMs), and in 2023 there were 7 OEMs that adopted ADAS in their cars. Compared to many other countries, India has the most complex use cases due to its population density and road conditions. Today, the advancement of advanced driver assistance systems has resulted in increased fuel efficiency. As per a study by Global Edge Soft, future advanced driver assistance systems will have wireless network connectivity that can easily be installed in cars. In other words, cars will communicate better, resulting in a safer and more convenient automated driving experience. It has been predicted by many automotive OEMs that ADAS is going to dominate Indian markets in another 3-4 years, even in mid-variant cars. By the year 2027 it is expected that Level 4 (L4) ADAS will be available in many vehicles. ADAS are critical in reducing road accidents by minimizing human error. These systems provide functionalities such as pedestrian detection, lane departure warning, automatic emergency braking, and collision avoidance. However, conventional advanced driver assistance implementations struggle in low-visibility conditions, affecting detection accuracy and increasing accident risks.
Despite significant advancements, object detection in ADAS faces several challenges. The first is real-time processing: ensuring fast and efficient detection while maintaining accuracy. The second is adverse weather conditions: handling variations in lighting, fog, rain, and snow. Occlusions and dynamic environments, where objects are partially hidden or moving unpredictably, are also a challenge. Unlike traditional approaches that require significant preprocessing or high computational power, the proposed model in this paper achieves competitive accuracy using raw images from the CIFAR-10 and CIFAR-100 datasets. Furthermore, we validate the real-time applicability of the system by integrating it with an anti-lock braking system (ABS) electronic control unit (ECU) and simulating responses through hardware-in-the-loop (HIL) testing. This research bridges the gap between algorithmic development and practical deployment in vehicular systems. Future research aims to enhance model robustness, integrate sensor fusion techniques (such as LiDAR and radar), and develop energy-efficient algorithms for deployment in embedded automotive systems.

LITERATURE REVIEW

A fuzzy logic controlled antilock braking system was introduced in ABS systems for improving bidirectional stability over uniform and non-uniform road surfaces. Its performance in a few use cases was not satisfactory; for example, the driver must take action after detecting the obstacle. The system failed when obstacles were too close, as there was no mechanism to warn the driver whenever an obstacle is a few meters ahead of the vehicle. The work by Liu et al. uses a combination of electromechanical braking systems and a feature-based algorithm that helps to shorten the braking distance, thereby improving braking stability compared to the previous system. It uses electric energy as the energy source of the braking system and a motor to drive the actuator that generates braking force, which responds quickly. This action may affect the lifetime of the system, and again there is no mechanism to halt the system or alert the driver when an obstacle is detected. With the introduction of fuzzy PID controllers along with the control algorithm in ABS systems, promising results were achieved compared to previous works: the braking distance was reduced by up to 10% compared to a conventional PID controller and by more than 30% compared to a relay controller. Gradual improvements in stopping the vehicle were seen in conjunction with ABS systems in use cases where obstacles were close. But these systems failed to calculate the distance of obstacles, which is a key factor, and primarily focused on reducing braking distance. The proposed system by Urmat et al. uses 32-bit ARM based embedded systems with an accelerometer, gyroscope, and magnetometer. A GNSS module along with BLE was used to detect pedestrians by simulating a pedestrian's walking positions in a real-time scenario. The experimental results are verified as 2.2% distance error, 4% maximum average positioning error, and 3.65% step count error. The results were good for short-range distances but not for long-range distances. The system proposed in [...]
uses thermal night vision systems to detect pedestrians in real time on car dashboards by extracting regions of interest (ROI), thereby alerting the drivers. The main drawback of this method is that some pedestrian samples are not highlighted due to background thermal energy, such as a car exhaust, which can distort the candidate's shape and put it outside the stringent aspect ratio. With the adoption of ADAS, the system in [...] uses techniques of image processing, adaptive signal processing, deep learning, and computer vision to detect real-time pedestrian movements. The system achieved an accuracy of 75.3% on clear pedestrian samples taken in real time; the system's behavior under bad weather conditions or when unclear images are fed was not evaluated. The work in [...] attempts to alert pedestrians through their smartphones whenever a vehicle comes close. A fully functioning mobile system has been developed where pedestrians get warnings when the smartphone sensor receives information on the arrival of vehicles through direct Wi-Fi based peer-to-peer communication. The results demonstrate that the energy efficiency and positioning accuracy of Safer Cross are improved by 52% and 72% on average compared with existing solutions that lack support for positioning accuracy and energy efficiency, and the phone-viewing event detection accuracy is over 90%. The average error of the system is around 1.6 seconds. There is no mechanism for alerting the driver about pedestrian crossings. The system in [...] uses a CNN based approach, adopting higher-level semantics, features, architectures, and training strategies in object detection. The accuracy achieved on clear sample images is around 68%. The accuracy/sensitivity of safety-critical systems should reasonably be higher; this sensitivity/accuracy for clear pedestrian samples seems low. The system in [...] uses supervised, unsupervised, deep, and reinforcement learning. The ML research in that paper aims to reduce road traffic fatalities and accidents by using Hadoop and big data methods to predict pedestrian density. It focuses on use cases such as automatic lane assistance, automatic park assistance, and steering control mechanisms. This mechanism alerts only the driver about the above factors. The work in [...] attempts to extend the lifetime of ABS systems that deteriorate due to structural damage and wear and tear. It expects the driver to see the pedestrians and take action. A fuzzy logic-based life-extending control (FLEC) system for increasing the service life of the ABS is proposed. The main contributions of this approach, compared to others, are a 2% decrease in stopping distance, a 20.9% decrease in stopping time, and a reduction in wheel lockup time from 83.3% to 3.1% of total braking time. There is no mechanism to halt the ABS braking system automatically in case the object is very close, where a reflex action from the driver cannot be expected. De Sousa et al. use NVIDIA's embedded chip in ADAS to help the driver make decisions in day-to-day traffic situations. It uses a combination of deep learning and computer vision techniques for lane segmentation and for detecting vehicles and pedestrians in the real-time traffic environment, and it alerts the driver about potential collisions with cars or pedestrians. This uses a ResNet architecture where the accuracy is 66%.
Again, the system's behavior for unclear samples or during bad weather conditions is not taken into consideration. The work in [...] uses a backpropagation method for training and testing a vehicle accelerated automatic driving algorithm. The system mainly attempts to simulate pedestrian head collision test results, the effectiveness of a collision warning model, and collision avoidance for intelligent connected vehicles. The system in [...] attempts early detection of pedestrians in blind spots that block the driver's view. It uses a monocular depth estimation algorithm that allows a smaller search area for pedestrian detection, which reduces the latency of detecting unexpected pedestrians. It has been experimented with in complex street scenes, and the accuracy seems to be good. The experimental results revealed that the performance of the method in detecting blind spots and the sudden appearance of pedestrians is good, and it has reduced the reaction latency for such emergency situations by applying the pre-detection mechanism. Compared to previous methods, this method has reduced the accident rate caused by sudden pedestrian crossings. Advanced driver assistance systems enhance vehicle safety by utilizing sensors and intelligently assisting drivers in navigation and parking tasks. These systems incorporate camera-based sensors to increase driver awareness of the surrounding environment. Central to ADAS functionality are systems-on-a-chip (SoCs), which integrate multiple chips to connect sensors with actuators through interfaces and high-performance electronic control units (ECUs). This integration enables real-time data processing and quick response, facilitating features such as adaptive cruise control, lane departure warnings, and collision avoidance. Convolutional neural networks (CNNs) play a crucial role in object detection for ADAS, with datasets such as CIFAR-100 being widely used for training and evaluation. These datasets provide diverse image categories, enabling robust model training. The work in [...] suggests that advanced driver assistance systems require high computational power to identify and classify objects for augmented reality. The authors try to create localized points around the surrounding environment such that vehicles capture these objects at higher resolution. The aim is to create a three-dimensional model with accurate information about the test site. It is expected that using AI and IS will significantly improve performance, computational speed, and accuracy for AR applications in automobiles. In the work proposed in [...], the authors compare automated lane keeping systems (ALKS) with adaptive cruise control (ACC). The results show that ALKS performs much better than ACC. The rapid advancement of artificial intelligence (AI) in autonomous driving has enabled the development of robust driver assistance systems. However, as these systems become more prevalent, addressing the ethical considerations surrounding their deployment and operation is essential. This research examines the multifaceted domain of ethics in AI for ADAS, analyzing various use cases and scenarios. Ethical concerns encompass topics such as safety and privacy related to data collection and usage, decision-making dilemmas, accountability, and societal impact.
The study focuses on intricate challenges in autonomous driving, examining real-world cases to shed light on complex ethical issues. The paper also focuses on data security when data is transferred from one car to another. The rise of software-defined vehicles (SDVs) has rapidly accelerated the advancement of ADAS, autonomous vehicles (AVs), and battery electric vehicle (BEV) technology. While AVs require power to compute data from perception to controls, BEVs need efficiency to optimize their electric driving range and stand out compared to traditional internal combustion engine (ICE) vehicles. AVs still lag in the current world, but SAE Level 2 (L2) automated vehicles are the focus of major original equipment manufacturers (OEMs). The common form of an SDV today combines AV and BEV technology on the same platform, prominently available in most OEMs' lineups. As the compute and sensor architectures for L2 automated vehicles lean towards a computationally expensive, robust design, this may hamper the most important purchasing factor of a BEV: the electric driving range. The work described in [...] tries to advance YOLO detection accuracy and integrate voice-based techniques for object detection, focusing mainly on dynamic convolutions and attention mechanisms. The work in [...] describes accuracy improvements for different YOLO architectures in automotive applications; it uses an image preprocessing block, cites the limitations of previously used YOLO models, and shows improved object classification accuracy. In the work proposed in this paper, we have focused on some of the drawbacks of existing object detection in ADAS. The major contributions of this work include CNN and RNN-based object detection optimized for poor visibility conditions, an automatic braking system integrated with the ABS ECU, real-time pedestrian and driver alerts using hazard lights and honking, improved detection accuracy (78.3% for unclear images) without an image preprocessing block, and evaluation using HIL simulations.

PROPOSED METHODOLOGY

The proposed system in Figure 1 integrates a hybrid model combining CNNs and RNNs to perform object classification and temporal tracking. The CNN extracts spatial features from each frame, while the RNN captures temporal dependencies, thus enhancing detection performance in dynamic driving environments. The model is trained and tested using the CIFAR-10, CIFAR-100, and YOLO datasets; the CIFAR datasets contain low-resolution images across 10 and 100 categories, respectively. Despite the dataset limitations, the model achieves an accuracy of 78.3% without preprocessing for CIFAR-10, 50% for CIFAR-100, and 93% for the YOLOv8 dataset, demonstrating robustness to raw input data. The trained model is deployed on an embedded computing platform connected to the vehicle's ABS ECU. The object detection output is used to trigger braking commands based on predefined threat assessment criteria. This step ensures that the system transitions from perception to action, a key requirement for real-world ADAS deployment. To evaluate real-time performance and reliability, the entire system is tested using HIL simulation. This setup enables the testing of ECU responses to detected objects under controlled and repeatable scenarios. The HIL environment mimics actual vehicle dynamics and sensor inputs, allowing for comprehensive validation before in-vehicle deployment.
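As an illustration of the CNN-RNN hybrid described in this section, the following is a minimal Keras sketch of one plausible realization, assuming TensorFlow as the framework (the paper does not name one); the 8-frame window, layer widths, and 10-class head are illustrative assumptions, not the paper's exact architecture. A small CNN is applied to each frame via TimeDistributed, and an LSTM aggregates the per-frame features across time.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, NUM_CLASSES = 8, 10  # illustrative values, not from the paper

# Per-frame spatial feature extractor (the CNN part)
cnn = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),  # -> one 64-d feature vector per frame
])

# Temporal model (the RNN part) over the sequence of per-frame features
frames = layers.Input(shape=(NUM_FRAMES, 32, 32, 3))
features = layers.TimeDistributed(cnn)(frames)   # (batch, frames, 64)
temporal = layers.LSTM(64)(features)             # temporal dependencies
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(temporal)

model = models.Model(frames, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

The TimeDistributed wrapper keeps the CNN weights shared across frames, so the spatial extractor is trained once and reused at every time step, which is the usual design choice for this kind of hybrid.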
Figure 1. Overview of the proposed system architecture

To improve accuracy, the following modifications were applied to the CNN architecture (a code sketch illustrating them follows the dataset description below):
- Dense layer: increased to 1024 units for improved feature extraction.
- Learning rate: set to 0.01 with an adaptive decay function.
- Padding and kernel constraints: optimized for better image feature retention.
- Use of dense CNN: enhanced classification performance over standard CNN models.

A 2D convolutional layer transforms an input feature map $X \in \mathbb{R}^{H \times W}$ with a kernel $K \in \mathbb{R}^{k \times k}$ into an output feature map

$$Y_{i,j} = \sum_{m=0}^{k-1} \sum_{n=0}^{k-1} K_{m,n}\, X_{i+m,\, j+n}$$

The kernel constraint $\|K\| = 0.1$ is used to regularize learning, balance unbalanced weights, and prevent overfitting of the model layer by layer. The output dimension of a feature map is calculated using

$$O_{\text{out}} = \left\lfloor \frac{I + 2P - k}{S} \right\rfloor + 1$$

where $I$ is the input size, $P$ the padding, $k$ the kernel size, and $S$ the stride. Padding adds an extra border of zeros around the image pixels of the 32x32 CNN network in order to prevent problems at the edges of the CNN and to maintain the spatial size at the output. As a demonstration of value selection, with input size 32x32, kernel $k = 3$, stride $S = 1$, and padding $P = 1$:

$$O_{\text{out}} = \left\lfloor \frac{32 + 2(1) - 3}{1} \right\rfloor + 1 = 32$$

so the output retains the original size. The learning rate follows an inverse time decay, which adjusts the learning rate $\eta$ at each epoch or step:

$$\eta_t = \frac{\eta_0}{1 + \lambda t}$$

where $\eta_0$ is the initial learning rate, $\lambda$ the decay rate, and $t$ the step. The decay function in the architecture is used to adapt to the noisy raw inputs, particularly in the absence of image preprocessing blocks. It damps overly large updates during training and helps avoid poor local minima, which is useful for convergence and stability.

DATASET DESCRIPTION, PREPROCESSING AND TRAINING

The CIFAR-10 and CIFAR-100 datasets are widely used benchmarks in computer vision tasks. Both datasets consist of colored natural images, each with a resolution of 32x32 pixels. CIFAR-10 includes 60,000 images categorized into 10 classes such as cars, trucks, airplanes, and animals, while CIFAR-100 comprises the same number of images across 100 fine-grained categories. Although these datasets do not contain traffic scenarios, they serve as effective testbeds for evaluating model architecture, training efficiency, and object recognition capabilities under poor visibility and adverse weather conditions. Classes such as automobile and truck in CIFAR-10 provide limited yet relevant coverage for vehicular object recognition at low resolutions, allowing the performance of the proposed model to be evaluated. All input images are normalized to ensure uniformity and improve convergence during training. The pixel values are scaled to the range [0, 1], and data augmentation techniques such as horizontal flipping, cropping, and rotation are applied to improve generalization and reduce overfitting.
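Putting the architectural modifications and the preprocessing steps above together, the following is a minimal sketch in Keras, assuming TensorFlow as the framework (the paper does not name one). MaxNorm(0.1) is our interpretation of the $\|K\| = 0.1$ kernel constraint, RandomZoom stands in for random cropping, and the decay_steps/decay_rate values and epoch count are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, constraints

# CIFAR-10: load and normalize pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Inverse time decay: eta_t = 0.01 / (1 + decay_rate * step / decay_steps)
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.5)

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    # Augmentation (active only during training): flip, rotate, zoom
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    # 'same' padding (P=1 for a 3x3 kernel) keeps the 32x32 spatial size;
    # MaxNorm(0.1) caps the kernel norm, one reading of ||K|| = 0.1
    layers.Conv2D(32, 3, padding="same", activation="relu",
                  kernel_constraint=constraints.MaxNorm(0.1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu",
                  kernel_constraint=constraints.MaxNorm(0.1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),   # enlarged dense layer
    layers.Dense(10, activation="softmax"),  # 10 CIFAR-10 classes
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64,
          validation_data=(x_test, y_test))
```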
The YOLOv8 dataset consists of images of signs such as stop, yield, speed limit, pedestrian crossing, no entry, and U-turn (left/right). It is first preprocessed using the Ultralytics YOLOv8 framework, where the object annotations stored in txt format are converted to binary variables of 0s and 1s. Once the image pixels are converted to this format, the model is trained accordingly to obtain the bounding box in (x, y, w, h) format to identify exact positions in real-world validation.

RESULTS AND DISCUSSION

The following performance metrics are used for analysis:
- Detection success rate: 78.3% (CIFAR-10) and 50% (CIFAR-100) for unclear image samples, and 93% for YOLOv8, without using an image preprocessing block.
- Stopping distance: 5 meters before collision.
- Latency reduction: faster hazard response than previous models.

In automotive embedded systems, the accuracy of capturing and processing unclear images using deep learning algorithms is defined as the proportion of true positive detections among all positive detections. For the proposed algorithm, all positive detections for unclear images total 60,000 image samples for the CIFAR-10 dataset, of which 46,980 samples are true positive detections, giving 46,980/60,000 = 78.3%. For the CIFAR-100 dataset, the total positive detections are 60,000 image samples and the true positive detections are 30,000 samples, giving 30,000/60,000 = 50%. Table 1 provides a comparison of existing and proposed driver assistance systems. From Table 1 it is evident that the proposed system performs well by using dense convolutional neural networks without an image preprocessing block. One major improvement is that the proposed system also alerts the pedestrian, whereas previous systems alert only the driver. We also see that detection accuracy is increased without an image preprocessing block in adverse weather conditions, and that the system stops automatically if the driver does not respond within a few seconds.

Table 1. Comparison of existing and proposed driver assistance systems
Feature | Existing advanced driver assistance systems | Proposed advanced driver assistance system
Object types | Limited to pedestrians | Includes cats, automobiles, trucks
CNN architecture | Standard | Dense CNN
Image preprocessing block | Required | Not required
Automatic braking | Not present | Present via ABS ECU integration
Pedestrian alerts | Driver only | Driver and pedestrian
Detection accuracy | 68% (clear images), 30% for CIFAR-100 (unclear samples) | 78.3% (unclear images), 50% (CIFAR-100 unclear samples)

We achieved an accuracy of 78.3% for blurred images, whereas the accuracy was about 75.3% for clear image samples in related work. It is worth mentioning that the accuracy of 78.3% is achieved without using any preprocessing blocks; if preprocessing blocks were used, the accuracy would be much greater for the CIFAR-10 dataset. For the CIFAR-100 dataset the accuracy is 34.38% for 5 epochs using a plain CNN. Using a dense neural network, we were able to achieve 50% accuracy, and for the YOLO dataset (a real-time automotive dataset) we achieved an accuracy of 93% for 8 epochs. Table 2 demonstrates these results. Figures 2, 3, 4 and Table 3 clearly indicate that the results of our work outperform previously obtained results.
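For reference, the YOLOv8 result above corresponds to a short fine-tuning run through the Ultralytics framework described in the dataset section; a minimal sketch follows, where "traffic_signs.yaml" and the sample image path are hypothetical placeholders for the dataset config and a test frame, and the nano checkpoint is an assumption (the paper does not state which YOLOv8 variant was used).

```python
from ultralytics import YOLO  # Ultralytics YOLOv8 framework

# Start from a pretrained YOLOv8 nano checkpoint and fine-tune for 8 epochs
# on the traffic-sign dataset; "traffic_signs.yaml" is a hypothetical path
# to the dataset config (train/val image folders and class names).
model = YOLO("yolov8n.pt")
model.train(data="traffic_signs.yaml", epochs=8, imgsz=640)

# Evaluate on the validation split, then run inference on a sample frame
metrics = model.val()
results = model("sample_road_scene.jpg")   # hypothetical test image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xywh)     # class, confidence, (x, y, w, h)
```

The (x, y, w, h) output of each box matches the bounding-box format mentioned above for localizing objects in real-world validation.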
Tables 2 and 3 give the accuracy of previous results versus the results obtained by changing the architecture.

Table 2. Accuracy after changing the CNN architecture (accuracy for the CIFAR-10 dataset using the baseline CNN of previous works versus the modified CNN)

Figure 2. Accuracy using CNN network for CIFAR-10

Figure 3. Accuracy using CNN network for CIFAR-100 without any changes

Figure 4. Accuracy using dense CNN after modification of architecture

Table 3. Comparison of results (with and without image preprocessing block)
Dataset | Model | Image preprocessing block | Accuracy | F1 score
CIFAR-10 | CNN | Yes | 75.3% | -
CIFAR-10 | CNN | No | 78.3% | -
CIFAR-100 | Dense RCNN | Yes | 34% | -
CIFAR-100 | Dense RCNN | No | 50% | -
YOLOv8 | CNN | Yes | 90% | 85-90%
YOLOv8 | CNN | No | 93% | -

Table 3 provides the accuracy and F1 score based on the choice of image preprocessing block and the algorithm used. The results show that the proposed system achieves greater accuracy without the use of an image preprocessing block for the CIFAR-10, CIFAR-100, and YOLO datasets by modifying the structure. From the obtained results it can be observed that the system is expected to stop 5 m before a detected object when unclear images are used. The system responsible for this action is the hardware-in-the-loop (HIL) system integrated with the embedded system, as shown in Figure 5. A HIL system in the automotive domain is an embedded test bench where all the modules are integrated together in a closed loop and the response of the system is evaluated by validating the correctness of an algorithm for a specific dataset. The HIL system proposed in Figure 5 consists of 4 ARS 620 RADAR sensor modules, 4 SVC 300 cameras, and an ABS ECU connected to a master control ECU. The dataset and algorithm are flashed into the cameras using a technique called PPAR flashing with a Trace32 device integrated into the HIL system. All these sensor modules and the ECU feed the brake torque module of the embedded system. The brake torque is adjusted based on the camera's input on detecting a pedestrian or an object, with the help of the RADAR sensors. It can also be inferred that the sensitivity response of the system is good, as observed from the results shown in Table 3 and Figures 2, 4, 6, and 7.

Figure 5. Proposed embedded system design

Figure 6. Detection accuracy of YOLOv8 (class 15: speed limits; class 11: bikes, pedestrians and bicycles)

Figure 7. Results describing original and improved accuracy

From the results and the architectural changes made in the neural networks, in conjunction with a major architecture change in the ABS ECU, it can be observed that we achieved an accuracy 3.3% higher than previous work for the CIFAR-10 dataset using convolutional neural networks, 16% higher for CIFAR-100 on unclear datasets, and 2% higher for YOLOv8 images during bad weather conditions, all without using an image preprocessing block. From past work it can be observed that for clear image samples using an image preprocessing block the accuracy was about 68%, and using dense CNN it was around 30% in the previous research works cited in this paper. The proposed system is expected to stop within 5 meters.
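The accuracy and F1 scores reported in Table 3 follow the standard definitions; a brief sketch of how they could be computed from model predictions, assuming scikit-learn is available (y_true and y_pred are hypothetical label arrays, not the paper's evaluation code):

```python
from sklearn.metrics import accuracy_score, f1_score

def report_metrics(y_true, y_pred):
    """Accuracy and macro-averaged F1 for predicted class labels."""
    acc = accuracy_score(y_true, y_pred)            # correct / total
    f1 = f1_score(y_true, y_pred, average="macro")  # per-class F1, averaged
    return acc, f1

# Tiny hypothetical example
acc, f1 = report_metrics([0, 1, 2, 2], [0, 1, 1, 2])
print(acc, f1)

# Worked example mirroring the CIFAR-10 figure: 46,980 of 60,000 correct
print(46_980 / 60_000)  # 0.783 -> the reported 78.3%
```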
Advanced embedded system HIL validation was conducted with 4 ARS 620 RADAR sensors, 4 SVC 300 cameras, an ABS ECU connected to a master control ECU, and PPAR flashing for dataset integration. The results confirm enhanced braking response time and improved detection reliability in challenging conditions. The 4-channel anti-lock braking system module (usually a BOSCH ABS/ESP control module) in Figure 5 consists of wheel speed sensors mounted at the four wheels, hydraulic valves, a brake pump, and the ECU, along with camera sensors and an adaptive cruise control system. The view port ECU and the ABS ECU are connected to a master ECU using the CAN protocol, which works on a multi-master, multi-slave basis. The ADAS setup also includes camera sensors mounted on the four corners of the vehicle. The main task of the CNN and dense neural network algorithm integrated in the camera sensor is to classify the type of object and detect the distance of the blurred sample, especially during poor visibility or unclear weather conditions. When the camera sensor detects a person (for example, within 10 meters), the camera sensors send a pulse signal to the speed sensors to decrease the speed as per the proposed algorithm. The speed is decreased with the help of adaptive cruise control, and in parallel the driver gets an alert on the dashboard to apply the brakes. The master control ECU gradually reduces the speed with the help of the ABS system, where the hydraulic valves automatically control the release and application of the brake fluid according to a uniform time interval based on the distance of the object. When the driver remains unresponsive until the 8th meter, the system automatically goes to a halt state, where the ABS module receives a signal from the master ECU to stop the car within 2 meters, automatically turning on the hazard lights and horn buzzers. From Table 2 it is clear that the accuracy of detecting objects has increased by 3.3% for the CIFAR-10 dataset for blurred/unclear samples, whereas a maximum of 68% was found for clear samples in prior work. This is evident from the results of Figure 7. From Figure 7 it is observed that the accuracy for CIFAR-100 is 34% when a plain CNN is applied, while in our case we achieved 50%, which is 16% more, by applying dense CNN; and from Figure 6 it is evident that the detection accuracy is 93% for the YOLOv8 dataset, while it was 90% in the references cited previously in Table 3. This was possible because of the proposed embedded system, wherein the hydraulic action of the brake fluid and the signal from the speed sensors are automatically controlled with respect to a specific time interval based on the distance of the object using brake valves. In previous works, the action of the speed sensors and the hydraulic mechanism in the anti-lock braking ECU is triggered only when the driver presses the brake pedal, using fuzzy logic and other algorithms.
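This staged, driver-independent response can be summarized as a small decision routine; the sketch below is a toy illustration of the thresholds described above (speed reduction and alert within 10 m, automatic-stop handover at 8 m, halt 2 m before the object), with hypothetical function and signal names rather than the ECU's actual interface.

```python
from dataclasses import dataclass

@dataclass
class VehicleActions:
    reduce_speed: bool = False      # ACC slows the vehicle
    dashboard_alert: bool = False   # prompt the driver to brake
    auto_stop: bool = False         # master ECU commands a full stop via ABS
    hazards_and_horn: bool = False  # alert pedestrians around the vehicle

def threat_response(object_distance_m: float, driver_braking: bool) -> VehicleActions:
    """Staged response to a detected object, per the described thresholds."""
    actions = VehicleActions()
    if object_distance_m <= 10.0:
        actions.reduce_speed = True      # camera pulses the speed sensors / ACC
        actions.dashboard_alert = True   # driver is alerted in parallel
    if object_distance_m <= 8.0 and not driver_braking:
        # Driver has not reacted: halt the vehicle 2 m before the object
        actions.auto_stop = True
        actions.hazards_and_horn = True
    return actions

# Example: object 7.5 m ahead, driver not braking -> automatic stop engages
print(threat_response(7.5, driver_braking=False))
```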
CONCLUSION AND FUTURE SCOPE

This paper presents a robust advanced driver assistance system that enhances object detection in real-world driving scenarios by leveraging CNN and dense neural networks, validated through HIL simulations. The proposed system addresses critical ADAS requirements by ensuring robust pedestrian detection, automatic braking, and real-time hazard alerts. Unlike conventional methods, which rely on driver intervention, this system autonomously stops the vehicle, enhancing road safety and advanced driver assistance reliability. By integrating deep learning techniques with ABS ECU automation, the system significantly improves safety without requiring an additional image preprocessing block in very bad or adverse environmental conditions. It has also imbibed some sensor fusion techniques. By eliminating the need for image preprocessing, the system achieves reliable performance under adverse visibility conditions, with detection accuracies of 78.3% (CIFAR-10), 50% (CIFAR-100), and 93% (YOLOv8). The integration of automatic braking through the ABS ECU further enhances vehicular safety by reducing dependence on driver intervention. This demonstrates the system's real-world applicability and robustness. However, the use of CIFAR datasets poses the limitation of lacking traffic-specific context. Future work will focus on real-world vehicle deployment for further testing. Multi-modal sensor fusion (LiDAR, RADAR, thermal camera) can enhance detection accuracy, and the implementation of transformer-based vision models can result in better image classification. The algorithm can also be tested by training and evaluating the model on real-world ADAS datasets such as KITTI, BDD100K, or nuScenes, along with integrating the detection pipeline into an embedded hardware platform for real-time operation.

FUNDING INFORMATION

This is to declare that no funding was given to this work by any forum; this section is not applicable.

AUTHOR CONTRIBUTIONS STATEMENT

This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration.

Authors: Sudarshan Sivakumar, Shikha Tripathi
Roles: C: Conceptualization, M: Methodology, So: Software, Va: Validation, Fo: Formal analysis, I: Investigation, R: Resources, D: Data Curation, O: Writing - Original Draft, E: Writing - Review & Editing, Vi: Visualization, Su: Supervision, P: Project administration, Fu: Funding acquisition

CONFLICT OF INTEREST STATEMENT

The authors declare that there are no conflicts of interest and that no experiments were performed using humans.

DATA AVAILABILITY

The data that support our findings are directly available at:
- CIFAR-100 dataset: https://jovian.com/tessdja/cnn-practice-cifar100
- CIFAR-10 dataset: https://www.cs.toronto.edu/~kriz/cifar.html
- YOLO dataset for real-time applications: available as the Traffic Signs Dataset in YOLO format.

REFERENCES