SINERGI Vol. No. October 2025: 771-778
http://publikasi.id/index.php/sinergi
http://doi.org/10.22441/sinergi

Car seatbelt monitoring system using a real-time object detection algorithm under low-light and bright-light conditions

Agus Suryanto, Dwi Haryo Wicaksono, Anggraini Mulwinda, Muhammad Harlanu, Mario Norman Syah
Department of Electrical Engineering, Universitas Negeri Semarang, Indonesia

Abstract
Seatbelt usage is essential for minimizing injury risk during vehicular accidents, yet the seatbelt monitoring system in modern vehicles can easily be tricked into not displaying its warning alert. Car seatbelt detection utilising real-time object detection is employed to monitor seatbelt usage. However, the accuracy of such systems needs to be further evaluated under low-light and bright-light conditions. This study aims to develop a car seatbelt monitoring system using a real-time object detection algorithm, tested in low-light and bright-light scenarios. The system integrates a trained YOLOv5 model into embedded hardware, which interfaces directly with the vehicle's ignition system, enabling or disabling engine start based on seatbelt usage. Notifications are also delivered through LEDs, a buzzer, and Telegram messages. The system achieves an accuracy of 95.75%, precision of 99.1%, recall of 96.2%, and an F1-score of 97.2%. The results show that the system generates better confidence scores under bright-light conditions than under low-light conditions. This work offers tangible proof of the efficacy of applying intelligent object detection models to real-time driver monitoring, particularly in enhancing compliance through physical intervention and IoT-based alerts.

Keywords: Internet of Things; Monitoring; Seatbelt; You Only Look Once

Article History: Received: December 5, 2024; Revised: April 22, 2025; Accepted: April 30, 2025; Published: September 3, 2025

Corresponding Author: Agus Suryanto, Electrical Engineering Department, Universitas Negeri Semarang, Indonesia. Email: agusku2@mail.
This is an open-access article under the CC BY-SA license.

INTRODUCTION
Safety in driving is an essential aspect of using transportation, as road traffic accidents remain a significant public health concern. Prior studies have shown that accident-prone road segments are often associated with poor road conditions, inadequate signage, and environmental factors that increase collision rates. The use of seatbelts in motor vehicles minimizes the risk of injury or death in the event of a traffic accident, yet many drivers still neglect safe driving practices, including seatbelt use. An ideal driver always uses a seatbelt when driving. However, many drivers of four-wheeled vehicles are unwilling to use seatbelts: they merely fasten the belt to the buckle to silence the warning buzzer without actually wearing it on their bodies. Using a combination of lap and shoulder seatbelts is an efficient way to reduce the severity and mortality rate of injuries caused by vehicle collisions. The absence of seatbelt use among new drivers, particularly those aged 15 to 17, significantly increases the risk of morbidity and mortality in motor vehicle accidents, with studies showing over 20% greater odds of mortality and nearly two-thirds of pediatric spinal fractures occurring without seatbelts. In 2020, only 49% of vehicle passengers were using seatbelts during accidents. Therefore, a driver monitoring system is needed to ensure the proper use of seatbelts.
An object detected by a camera requires an algorithm to process it. Algorithms such as Faster R-CNN use two stages in object detection, which results in longer processing times but higher accuracy. Additionally, object detection algorithms like YOLO and SSD tend to excel in speed, while R-CNN and its variants perform better in terms of accuracy.
The RetinaNet algorithm has a slower speed due to its more complex architecture, which results in higher accuracy for detecting small objects. For object detection systems that require speed, the YOLO algorithm is a suitable choice. YOLO can detect objects quickly while maintaining accuracy because it uses a simpler architecture, requiring only a single pass of the image through the network. The ETLE (Electronic Traffic Law Enforcement) system, which can monitor driver behavior, has been implemented. The penalties issued are diverse, including violations of seatbelt usage while driving. However, this method is still not fully effective, as the monitoring cameras are not installed in all locations and are only present at certain points. The YOLO algorithm has been applied in various fields, including ETLE: in ETLE systems it can detect riders who are not wearing helmets.
Several studies have explored seatbelt detection using different algorithmic approaches and environmental conditions. One work applied YOLOv5 to identify seatbelt usage from frontal images of drivers, while another developed an S-AlexNet model resilient to weather-related conditions. Other approaches incorporated hybrid feature modeling, such as local-global predictors and shape modeling processes, or emphasized detection performance under varying light conditions, particularly in the context of autonomous vehicles. Detection frameworks have also evolved to support multi-class classification of seatbelt states, such as properly worn, worn incorrectly, or absent, enabling more granular analysis for occupant monitoring systems (OMS) and driver monitoring systems (DMS). However, these implementations remain constrained to laboratory environments or image-level validations, with limited integration into physical vehicular systems or evaluation under diverse real-world lighting conditions.
This study develops a comprehensive car seatbelt monitoring system that integrates the YOLOv5 object detection algorithm into embedded hardware for real-time deployment. In addition to recognizing seatbelt usage under varying lighting conditions, the system is directly linked to the vehicle's ignition circuit, thereby enabling automated enforcement of seatbelt use. It also incorporates a dual-alert mechanism through visual/auditory signals and remote Telegram messaging. By validating the system under actual driving conditions, this work demonstrates a practical advancement over previous approaches that were limited to algorithmic evaluation or simulation-based testing.

METHOD
This study employs the YOLO (You Only Look Once) algorithm, which uses a Convolutional Neural Network (CNN) to perform real-time object detection by dividing an image into an S×S grid, with each cell predicting bounding boxes and confidence scores, resulting in an output tensor of S×S×(B×5 + C). The system was developed using hardware including a Lenovo Ideapad Slim i5 laptop, Raspberry Pi 4, relay, buzzer, LED, LM2596 step-down module, cables, smartphone, HD 1080DPI webcam, and a 64 GB micro SD card. Software tools such as Google Colab, Raspberry Pi Imager, VNC Viewer, Roboflow, and Telegram supported the process. A dataset of 1,617 images featuring seatbelt and face objects was used, sourced from Roboflow and a manual collection of simulated drivers with and without seatbelts, accessible at: https://universe.com/faceseatbelt/seatbeltmonitoring/browse?queryText=&pageSize=50&startingIndex=0&browseQuery=true

Research Workflow
Based on Figure 1, the research workflow starts with dataset collection, labeling, and splitting into training, validation, and testing sets, followed by pre-processing and model training. If object detection testing fails, the process loops back to data collection. Once successful, the device components are assembled and tested in a car environment.

Figure 1. Car Seatbelt Detection Research Workflow
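The output tensor size quoted in the Method section, S×S×(B×5 + C), can be illustrated with a short sketch. The values S = 7 and B = 2 below are assumptions taken from the original YOLO formulation, not from this paper; C = 2 matches the two classes used in this study ("seatbelt" and "face").

```python
# Illustrative sketch (not the authors' code): size of YOLO's per-image
# output tensor S x S x (B*5 + C). Each of the S*S grid cells predicts
# B boxes (x, y, w, h, confidence) plus C class probabilities.

def yolo_output_shape(S: int, B: int, C: int) -> tuple:
    return (S, S, B * 5 + C)

shape = yolo_output_shape(S=7, B=2, C=2)  # S=7, B=2 assumed; C=2 classes
print(shape)                               # (7, 7, 12)
print(shape[0] * shape[1] * shape[2])      # 588 predicted values per image
```

With these assumed values, a single forward pass yields 7 × 7 × 12 = 588 numbers per image, which is why a single-pass design stays fast enough for real-time use.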
p-ISSN: 1410-2331, e-ISSN: 2460-1217

Dataset Collection
This research used a raw dataset of 1,617 images from Roboflow and manual collection, showing individuals with and without seatbelts. Images were manually annotated and labeled in Roboflow with bounding boxes and two classes: "seatbelt" and "face." The dataset was then split into training (70%), validation (20%), and testing (10%) sets.

Pre-processing
The dataset underwent pre-processing in Roboflow, including resizing to 640x640 pixels, auto-orienting, and augmentation (vertical flip, 15% grayscale, and ±25% saturation). This enhanced data quality and increased the dataset to 2,749 images: 2,264 for training, 323 for validation, and 162 for testing.

Training Model
This research uses the YOLOv5s model, a lightweight yet effective variant suitable for low-capability devices, offering a good balance between complexity and model size. Training parameters include a 640x640 image size, a batch size of 64, and 100 epochs; larger image sizes improve accuracy but demand more resources, and more epochs allow deeper learning. The training took 1.294 hours, producing a model that detects face and seatbelt objects. The YOLOv5s training process produced strong precision, recall, and mean Average Precision (mAP) values; a performance graph, shown in Figure 2, further illustrates the model's effectiveness.

Device Assembly
The monitoring design, as shown in Figure 3, is powered by the car battery via the ignition switch and uses a Raspberry Pi 4 for object detection. A connected webcam detects the presence of a face and a seatbelt. If a seatbelt is detected, the LED and buzzer remain off, and the relay stays closed, allowing engine start. If not, the LED and buzzer activate, the relay opens, and the engine is blocked. Additionally,
Telegram alerts are sent to the driver every 5 minutes if the seatbelt is not detected, ensuring reminders without excessive disturbance.

Testing and Evaluation
Testing covers the detection system, the monitoring device, and Telegram notifications. Detection testing evaluates accuracy, precision, recall, and F1-score using the test data and a confusion matrix. The matrix records TP (true positives), FP (false positives), FN (false negatives), and TN (true negatives); Table 1 shows the confusion matrix used to assess model performance. The evaluation metrics are calculated as:

Accuracy = (TP + TN) / (TP + FP + FN + TN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-Score = 2 x (Precision x Recall) / (Precision + Recall)

Table 1. Confusion Matrix
                   Predicted Positive   Predicted Negative
Actual Positive    TP                   FN
Actual Negative    FP                   TN

Figure 2. YOLOv5s Training Model Results
Figure 3. Monitoring Device Design

Accuracy, precision, recall, and F1-score are used to evaluate prediction performance, with the error percentage calculated as:

Error (%) = (1 - Accuracy) x 100%

For monitoring, device, and Telegram testing, the system is directly connected to the car's ignition to assess its ability to respond to seatbelt usage and light intensity inside the vehicle.

RESULTS AND DISCUSSION
Detection System Testing
The detection system was tested on 162 images containing seatbelt and face objects using a confidence threshold of 0.5 and an IoU threshold of 0.45 to balance precision and recall. As shown in Figures 4 and 5, the images were resized to 640x640 pixels, and object detection results were displayed with bounding boxes, class labels, and confidence scores. An error is observed in Figure 5, where the system failed to detect a worn seatbelt, highlighting a limitation of the model. Evaluation was conducted using confusion matrix values (TP, FP, FN, TN), as shown in Table 2. The accuracy, precision, recall, and F1-score derived from these counts are summarized in Table 3.
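The evaluation formulas above map directly onto code. The following is a minimal sketch; the confusion-matrix counts used in the example are made up for illustration and are not the paper's Table 2 values.

```python
# Minimal sketch of the evaluation metrics used in this study, computed
# from raw confusion-matrix counts (TP, FP, FN, TN). Counts are illustrative.

def evaluate(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "error_pct": (1 - accuracy) * 100,  # Error (%) = (1 - Accuracy) x 100
    }

m = evaluate(tp=90, fp=0, fn=5, tn=5)
print(round(m["accuracy"], 4), round(m["f1"], 4))  # 0.95 0.973
```

Note that with fp=0 the precision is exactly 1.0 even though some positives are missed (fn=5), mirroring how seatbelt detection in this study reaches 100% precision while trailing in recall.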
Car Seatbelt Monitoring Device Testing
The monitoring system is tested by observing the real-time detection performance for the seatbelt and face at different light intensities inside the car. Tests are conducted under two conditions, each repeated ten times. The installation of the car seatbelt monitoring device on the car dashboard is shown in Figure 6. In the first condition, bright light, the performance results are displayed in Table 4. Based on Table 4, during bright-light testing at 11:19 AM, the system achieved an average confidence of 90.3% for face detection and 85.5% for seatbelt detection, with an average light intensity of 4334.2 lux and an average detection speed of 1513.81 ms. The results are shown in Figure 7. Under low-light conditions (Table 5), tested at 5:59 PM, face and seatbelt detection confidences were 87.7% and 83.6%, respectively, with a 2295.8 lux light intensity and a 1512.29 ms detection speed, as shown in Figure 8.

Figure 6. Placement of the Monitoring Device
Figure 4. Before being detected and after being detected
Figure 5. Before being detected and after being detected

Table 2. Confusion Matrix (per-class TP, FP, FN, and TN counts for the face and seatbelt objects)

Table 3. Evaluation Matrix
No  Object    Accuracy  Precision  Recall  F1-Score
1   Face      98.3%     98.2%      100%    98.4%
2   Seatbelt  93.2%     100%       92.4%   96.0%
    Average   95.75%    99.1%      96.2%   97.2%

Figure 7. Bright-light Test Sample

Table 4. Bright-light Test Results (columns: Test, Time, Conf. Seatbelt (%), Conf. Face (%), Light Intensity (lux), Detection Speed (ms))

Table 5. Low-light Test Results (columns: Test, Time, Conf. Seatbelt (%), Conf. Face (%), Light Intensity (lux), Detection Speed (ms))

Figure 9. Result of Test Using Seatbelt

Table 7. Test Results Without Seatbelt (columns: Test, Time, Conf. Seatbelt (%), Conf. Face (%), Light Intensity (lux), Notification, LED & Buzzer, Relay; in all ten trials the relay was open and the LED and buzzer were active, with Telegram notifications marked "Sent" only at the 5-minute marks)

Figure 8.
Low-light Test Sample

A test is performed to assess the car seatbelt monitoring response, which includes the relay, buzzer, LED, and Telegram notifications, based on the driver's detection status. Table 6 shows the system response when the seatbelt and face are detected: the LED and buzzer remain off, the relay, initially open, becomes closed, and the driver's smartphone does not receive a Telegram notification. During in-car testing, the driver can start the engine by activating the starter motor via the ignition switch. The average confidence value for seatbelt detection is 81.4%, while the average confidence value for face detection is approximately 94%. All these tests were conducted with an average light intensity of 4124.6 lux during the daytime at 13:25. The results of the test when the driver is using the seatbelt are shown in Figure 9.

Table 6. Test Results with Seatbelt (columns: Test, Time, Conf. Seatbelt (%), Conf. Face (%), Light Intensity (lux), Notification, LED & Buzzer, Relay; in all ten trials no notification was sent, the LED and buzzer were off, and the relay was closed)

Figure 10. Result of Test Without Seatbelt

Table 7 shows the test results when the driver is not wearing a seatbelt. In 10 trials, the system consistently detected the face (with an average confidence of 94%) without errors, under an average light intensity of 4178.5 lux. When the seatbelt was not detected, the LED and buzzer activated, the relay remained open, Telegram sent alerts every 5 minutes, and the car's engine could not be started via the ignition. A sample result is shown in Figure 10.

Telegram Notification Testing
Telegram notifications are sent when the driver is detected not wearing their seatbelt. The system automatically sends notifications to the driver's smartphone via the Telegram application.
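The response path just described (relay, LED, buzzer, and Telegram messages throttled to one every 5 minutes) can be sketched as pure decision logic. This is a hypothetical reconstruction, not the authors' code: the function and field names are invented, and the actual GPIO and Telegram Bot API calls are abstracted into a returned dict so the logic runs anywhere.

```python
# Hypothetical sketch of the monitoring loop's decision logic. A real
# deployment would drive the relay/LED/buzzer via GPIO and call the
# Telegram Bot API; here those effects are represented as returned values.

ALERT_INTERVAL_S = 5 * 60  # Telegram reminder at most once per 5 minutes

def respond(seatbelt_detected: bool, last_alert_t: float, now: float) -> dict:
    if seatbelt_detected:
        # Seatbelt worn: relay closed (engine may start), no alerts.
        return {"relay": "closed", "led_buzzer": "off",
                "send_alert": False, "last_alert_t": last_alert_t}
    # Seatbelt missing: block the engine, turn on LED/buzzer, and send
    # a Telegram alert only if the 5-minute interval has elapsed.
    due = (now - last_alert_t) >= ALERT_INTERVAL_S
    return {"relay": "open", "led_buzzer": "on",
            "send_alert": due, "last_alert_t": now if due else last_alert_t}

print(respond(True, 0.0, 10.0)["relay"])         # closed
print(respond(False, 0.0, 400.0)["send_alert"])  # True
```

Keeping the decision separate from the hardware side effects makes the throttling behaviour (one alert per interval, tracked via `last_alert_t`) easy to test off-device.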
Notifications are sent every 5 minutes while the driver is not detected wearing the seatbelt. The format of the Telegram notification received on the smartphone is shown in Figure 11.

Figure 11. Telegram Notification
Figure 13. Comparison of Detection Speeds

Result Analysis
Based on Table 3, from the 162 test images, the system achieved 95.75% accuracy, 99.1% precision, 96.2% recall, and a 97.2% F1-score. Detection performance was higher for faces than for seatbelts. Face detection reached 98.3% accuracy, 98.2% precision, 100% recall, and a 98.4% F1-score, with a 1.7% error rate. Seatbelt detection scored 93.2% accuracy, 100% precision, 92.4% recall, and a 96% F1-score, with a 6.8% error rate. Face detection outperforms seatbelt detection in accuracy, recall, F1-score, and error rate, but lags in precision; seatbelt detection excels only in precision. This discrepancy may stem from dataset imbalance (1,237 seatbelt objects versus 2,539 face objects), making seatbelt detection more challenging under varying conditions. Seatbelt objects are larger and include more background noise, while face objects are smaller and consistently captured due to the camera's position. Although data augmentation was applied, further techniques like SMOTE, GANs, color space transformation, or noise injection are needed to improve balance and detection performance.

Figure 12. Confidence Score Comparison

The average intensity of light entering the car in the evening (low-light, 2295.8 lux) is lower than during the day (bright-light, 4334.2 lux). This can be a factor in the lower confidence scores produced under low-light conditions compared to bright-light conditions, as the reduced light entering the car affects the object detection results.
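The discussion later recommends low-light enhancement techniques such as adaptive histogram equalization as future work. As a simple, hypothetical illustration (not part of the authors' pipeline), plain global histogram equalization of a dark grayscale frame can be sketched in NumPy; OpenCV's `cv2.createCLAHE` provides the adaptive (CLAHE) variant mentioned in the discussion.

```python
import numpy as np

def equalize_gray(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for a uint8 grayscale frame.
    Spreads a dark frame's intensities over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    if cdf[-1] == cdf_min:          # constant image: nothing to equalize
        return img.copy()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

dark = np.arange(60, dtype=np.uint8)   # simulated low-light intensities 0..59
out = equalize_gray(dark)
print(int(out.min()), int(out.max()))  # 0 255
```

Stretching a 0-59 intensity range to 0-255 is exactly the kind of pre-processing that could lift low-light confidence scores, at the cost of amplifying sensor noise.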
Based on the monitoring tests under bright-light and low-light conditions, the confidence values for object detection fluctuate due to factors like the driver's movement and the seatbelt being blocked by the driver's hand. As shown in Figure 12, the system detects face objects more confidently than seatbelt objects: in bright-light conditions, the average confidence is 90.3% for faces and 85.5% for seatbelts, while in low-light conditions it is 87.7% for faces and 83.6% for seatbelts. This may occur because face objects are smaller and less obstructed, while seatbelt objects are more often blocked or partially visible during driving. As shown in Figure 13, object detection speed remains consistent across the 10 tests, with only a 1.52 ms difference between bright-light (1513.81 ms) and low-light (1512.29 ms) conditions. This stability is due to the use of the same model and image size, which determines the Raspberry Pi's processing load. When the driver wears a seatbelt, the system detects both face and seatbelt, resulting in the relay closing, no activation of the LED/buzzer, and no Telegram alert, allowing the engine to start normally.

Discussion
This study shows that YOLOv5 detects faces better than seatbelts due to dataset imbalance: a larger number of face annotations leads to higher performance. As supported by prior work, dataset balance and representativeness are crucial for achieving accurate object detection. One referenced study achieved 89% precision and 81% recall using YOLOv5 under bright light. In contrast, the proposed system performs better, with 99.1% precision, a simpler implementation, and hardware-based alerts plus Telegram notifications. Therefore, the proposed monitoring system demonstrates improved results compared to the referenced study.
Another study implemented YOLOv7 for seatbelt detection using a Jetson Nano, buzzer, and display, achieving a 98% F1-score and high precision, though it was not integrated with the vehicle's electrical system. The proposed system achieves a slightly lower F1-score (97.2%) but uses a larger dataset, integrates with the car's electronics, includes Telegram alerts, and reaches 100% seatbelt precision, indicating enhanced functionality. The monitoring system in this study demonstrates better performance under bright-light conditions than in low-light environments. This aligns with findings in prior work that observed a decline in detection accuracy under low-light settings. The system in this study was tested without applying advanced preprocessing techniques (such as low-light enhancement or adaptive histogram equalization) in order to reflect the original image conditions from the camera. However, the use of these techniques is believed to improve detection performance under various lighting conditions. Therefore, future research is recommended to integrate these methods and compare the detection results to evaluate their impact on system performance.

CONCLUSION
This study developed a car seatbelt monitoring system using the YOLOv5 algorithm to detect seatbelt and face objects, tested under low-light and bright-light conditions. The system achieved 95.75% accuracy, 99.1% precision, 96.2% recall, and a 97.2% F1-score. Average confidence scores were 90.3% (face) and 85.5% (seatbelt) in bright light (4334.2 lux), and 87.7% (face) and 83.6% (seatbelt) in low light (2295.8 lux), with detection speeds of 1513.81 ms and 1512.29 ms, respectively. The device connects to the car's ignition and sends Telegram alerts every 5 minutes when the seatbelt is not worn. Future work should expand the dataset, optimize for speed and accuracy, integrate law enforcement notifications, and test across diverse vehicles and conditions.

REFERENCES