International Journal of Electrical and Computer Engineering (IJECE), Vol. , No. , October 2025, pp. 4620-4629. ISSN: 2088-8708. DOI: 10.11591/ijece.

Low complexity human fall detection using body location and posture geometry

Pipat Sakarin1, Suchada Sitjongsataporn2
1 Electrical Engineering Graduate Program, Faculty of Engineering and Technology, Mahanakorn University of Technology, Bangkok, Thailand
2 Department of Electronic Engineering, Mahanakorn Institute of Innovation (MII), Faculty of Engineering and Technology, Mahanakorn University of Technology, Bangkok, Thailand

Article history: Received Jan 5, 2025; Revised Jun 19, 2025; Accepted Jul 12, 2025

Keywords: Body location; Distance transform; Euclidean distance; Human fall detection; Posture geometry

ABSTRACT
This paper presents human fall detection using body location (HFBL) and posture geometry. The main contribution of the proposed HFBL system is to reduce the computational complexity of the fall detection system while maintaining accuracy, since most fall detection techniques rely on computationally complex machine learning or deep learning algorithms. The approach examines the human posture by applying image segmentation and a ratio derived from posture geometry. The distance transform is then used to calculate the high-brightness points on the human body; these points hold the maximum values relative to the edge values, and one of them is selected as the center point. A horizontal line through this center point separates the upper and lower areas, and an intersecting vertical line through the same point completes the four quadrants of body location. With the help of posture geometry, the angles are employed to predict "Fall" or "NotFall" actions at each frame of the video sequence. Referring to dynamic balance, the ratio between the distance vectors from the center point to the right and left legs is calculated to confirm fall and non-fall activities, utilizing the Pythagorean trigonometric identity. For the experiments, 2,542 images from the UR fall detection dataset, with dimensions of 640×480×3, were prepared through image segmentation to find the human body shape for analysis by the proposed HFBL system. Results demonstrate that the low computational HFBL approach provides 91.23% accuracy, 99.14% precision, 84.48% recall, and an F1-score of about 91%.

This is an open access article under the CC BY-SA license.

Corresponding Author: Pipat Sakarin, Electrical Engineering Graduate Program, Faculty of Engineering and Technology, Mahanakorn University of Technology, 140 Cheum-Samphan Road, Krathum Rai, Nongchok, Bangkok 10530, Thailand. Email: pipat.skr@gmail.

INTRODUCTION
The annual population trend has led to an increasing number of elderly people, and the number of people aged over 80 years is projected to reach 256 million in 2030 [ ]. The increasing number of elderly people is closely tied to the development of care technologies. In [ ], the authors presented an integrated smart caring home system based on internet of things (IoT) technology. To reduce the data calculation, an energy-saving compression method on the cloud platform [ ] has been introduced to decrease the latency of data transmission from wearable medical sensors in embedded signal processing. Recently, the human fall detection system (HFDS) has become a key technology for caring for elderly people using the detection of various movements and positions. Falls are the leading cause of severe injury-related death among aging people [ ], such as backward falls on slippery ground, forward falls by tripping, and sideways and straight-down falls due to mis-stepping and fainting, respectively.
Several fall detection systems have been developed using various technologies, classified into vision-based approaches, machine learning, and sensors with wearable devices. Following the vision-based and machine learning systems, Harrou et al. [ ] demonstrated an HFDS technique that divides the image into five parts and calculates the area ratio of different poses. In [ ], a multi-stage convolutional neural network (CNN) was deployed with an inverted pendulum model for fall prediction. Based on the OpenPose skeleton, Lin et al. [ ] analyzed the changes of the human skeletal joints during various movements using long short-term memory (LSTM) and gated recurrent unit (GRU) models. In [ ], a two-stage HFDS was applied by comparing the energy value and 3-state scores analyzed from the skeletal structure. Following the machine learning approach, Beddiar et al. [ ] used the angle between the center of the head and the hip to predict results. Fall detection in [ ] is performed using the human skeleton and a machine learning system for prediction. In study [ ], based on a fast pose estimation method, a time-distributed convolutional LSTM (TD-CNN-LSTM) is used to predict the results. Keskes and Noumeir [ ] substantiated fall detection with the spatial temporal graph convolutional networks (ST-GCN) method. In [ ], the results of a machine learning system using postures from human silhouettes are presented. In [ ], the skeleton information from OpenPose, the movement of the center point of the hip joint, the angle between the body center line and the ground, and the ratio between the width and height of the human body rectangle are used for prediction. In a pose estimation based approach [ ], the ratio between deflection and acceleration features is combined with a machine learning system to predict the results. In [ ], the position change between the head and shoulder points extracted by PoseNet is analyzed by a GRU model. In [ ]
, the OpenPifPaf model extracts human pose estimation information from multiple cameras and uses an LSTM for prediction.

In this paper, we propose human fall detection using body location (HFBL). The purpose of this system is to reduce the high computational complexity often found in the machine learning and deep learning techniques applied to fall detection systems in prior research, while still achieving accurate and reliable fall detection. The proposed HFBL system performs image segmentation [ ] and uses distance information to organize the human body posture for fall prediction. The distance transform [ ] is then used to find a center point. An intersecting line through this center point separates the upper and lower areas, and the ratio between the vectors from the center point to the right and left legs confirms fall and non-fall situations. Due to its low complexity design, the proposed HFBL system can be applied to embedded IoT devices.

PROPOSED HUMAN FALL DETECTION USING HUMAN BODY LOCATION
The human fall detection using body location (HFBL) system predicts human falls from the human body position and then sends information for assistance. The proposed HFBL system is introduced in Figure 1. Images or video sequences from an internet protocol camera (IP camera) are segmented to find the human body shape. The distance transform is used to analyze the body shape and to find the angle and ratio from the center and reference points. Finally, the angle and ratio are used to predict fall or non-fall activities.

Figure 1. Overview of proposed HFBL system

Human segment and distance transform
Image segmentation is a technique for classifying objects or finding locations using pixel-level analysis to separate objects in images. There are several ways to classify objects or find their locations, such as finding edges, separating colors, or finding characteristic features of the image.
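To make the distance-transform step of the pipeline in Figure 1 concrete, the sketch below computes a Euclidean distance map for a binary silhouette and picks its brightest point as the body center. This is a minimal, didactic NumPy sketch under our own assumptions: the brute-force loop-free EDT is O(N²) and only suitable for toy masks, and the function names are illustrative (a real system would use an optimized transform such as `scipy.ndimage.distance_transform_edt` or OpenCV's `cv2.distanceTransform`).

```python
import numpy as np

def euclidean_distance_map(mask):
    """Brute-force Euclidean distance transform of a small binary mask:
    for every foreground pixel, the distance to the nearest background
    pixel. Didactic sketch only; use an optimized EDT in practice."""
    fy, fx = np.nonzero(mask)        # foreground (objects of interest)
    by, bx = np.nonzero(mask == 0)   # background (objects of non-interest)
    # pairwise distances between every foreground and background pixel
    d = np.hypot(fx[:, None] - bx[None, :], fy[:, None] - by[None, :])
    dist = np.zeros(mask.shape, dtype=float)
    dist[fy, fx] = d.min(axis=1)
    return dist

def center_point(dist):
    # the brightest point of the distance map serves as the body center C
    cy, cx = np.unravel_index(np.argmax(dist), dist.shape)
    return cx, cy

# Toy silhouette: a filled rectangle stands in for a segmented body.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[1:8, 3:6] = 1
dist = euclidean_distance_map(mask)
cx, cy = center_point(dist)
print(int(cx), int(cy), float(dist.max()))  # deepest interior point on column 4
```

The maximum of the distance map is the pixel farthest inside the silhouette, which is why it is a stable choice of center point regardless of posture.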
There are several segmentation techniques based on CNNs. U-Net is one of them, using an encoder-decoder (up-sampling and down-sampling) in each layer of a U-shape [ ]. For the proposed HFBL system, we introduce ResNet-34 [ ], implemented with 34 layers. The pre-trained model [ ] is selected for the human pose segmentation, with encoding/decoding (up/down sampling) based on U-Net. Each frame of the video sequence is converted from red, green, and blue (RGB) to gray scale to reduce the resolution. An original RGB image is shown in Figure 2(a), and the result of the human segment in gray scale is shown in Figure 2(b). Following the image processing, the distance transform (DT) is a widely used technique for calculating the closest distance between the objects of interest and the objects of non-interest. The results are stored in each pixel, forming a distance map as shown in Figure 2(c). Referring to [ ], we consider an image $I(x, y)$ consisting of the objects of interest $F$ and the objects of non-interest $B$, where $(x, y) \in \{F, B\}$. The position of a pixel $(x_f, y_f)$ of the objects of interest $F$ is defined on the x-axis and y-axis; in a similar way, a pixel $(x_b, y_b)$ of the objects of non-interest $B$ is located on the x-axis and y-axis. The distance between pixels of $F$ and $B$ can then be obtained from the Euclidean distance

$$ d\big((x_f, y_f), (x_b, y_b)\big) = \sqrt{(x_f - x_b)^2 + (y_f - y_b)^2}. $$

Furthermore, the Euclidean distance transform $D(x, y)$ can be defined for $(x, y) \in F$ as

$$ D(x, y) = \min_{(x_0, y_0) \in B} d\big((x, y), (x_0, y_0)\big), $$

where $(x_0, y_0)$ are the initial pixel positions taken over the background $B$.

Angle and ratio by calculus
Referring to [ ]
, the results of the Euclidean distance transform identify the brightest point, which becomes the center point $C(x_c, y_c)$, defined by

$$ (x_c, y_c) = \arg\max_{(x, y)} \{ D(x, y) \}. $$

Subsequently, the edge values $(x_{min}, y_{min})$ can be computed from the minimum of the distance transform,

$$ (x_{min}, y_{min}) = \arg\min_{(x, y)} \{ D(x, y) \}, $$

where $D(x, y)$ is the Euclidean distance transform defined above. As shown in Figure 2, the center point $C$ in Figure 2(c) is used to divide the image into four quadrants (Q1, Q2, Q3, Q4) in an anti-clockwise direction, as demonstrated in Figure 2(d), where the minimum distance values in $Q_1(x_{Q1}, y_{Q1})$, $Q_2(x_{Q2}, y_{Q2})$, $Q_3(x_{Q3}, y_{Q3})$, and $Q_4(x_{Q4}, y_{Q4})$ can be defined as

$$ Q_1(x_{Q1}, y_{Q1}) = D(x_{min} > x_c,\ y_{min} < y_c), $$
$$ Q_2(x_{Q2}, y_{Q2}) = D(x_{min} < x_c,\ y_{min} < y_c), $$
$$ Q_3(x_{Q3}, y_{Q3}) = D(x_{min} < x_c,\ y_{min} > y_c), $$
$$ Q_4(x_{Q4}, y_{Q4}) = D(x_{min} > x_c,\ y_{min} > y_c). $$

From these, we can find the points $A(x_a, y_a)$, $B(x_b, y_b)$, $E(x_e, y_e)$, and $H(x_h, y_h)$ in Figure 2(d), which can be computed by

$$ A(x_a, y_a) = \max\{C, Q_1\}, $$
$$ B(x_b, y_b) = \max\{C, Q_2\}, $$
$$ E(x_e, y_e) = \max\{C, Q_3\}, $$
$$ H(x_h, y_h) = \max\{C, Q_4\}, $$

where $(C, Q_i)$ denote the center point on the body and the $i$-th quadrant. From Figure 2(d), the distances from point $C$ to point $A$, point $B$, point $E$, and point $H$ can be defined as

$$ \overline{CA} = d(C, A), \quad \overline{CB} = d(C, B), \quad \overline{CE} = d(C, E), \quad \overline{CH} = d(C, H). $$

Figure 2. Image of human: (a) original RGB, (b) segment, (c) distance map and center point, and (d) 4 quadrants

Following the lower right side of Figure 3(a)
for the angle calculus, the angle $\theta_{ACB}$ between point $A$, point $C$, and point $B$ can be calculated by

$$ \theta_{ACB} = \theta_1 + \theta_2, $$

where the angles $\theta_1$ and $\theta_2$ are determined through the Pythagorean trigonometric identity as

$$ \theta_1 = \tan^{-1}\!\left(\frac{|x_a - x_c|}{|y_a - y_c|}\right), \qquad \theta_2 = \tan^{-1}\!\left(\frac{|x_b - x_c|}{|y_b - y_c|}\right). $$

Therefore, the angle $\theta_{ACB}$ can be determined by

$$ \theta_{ACB} = \tan^{-1}\!\left(\frac{|x_a - x_c|}{|y_a - y_c|}\right) + \tan^{-1}\!\left(\frac{|x_b - x_c|}{|y_b - y_c|}\right). $$

The angle $\theta_{HCE}$ between point $H$, point $C$, and point $E$, as presented in Figure 3(b), can be calculated by

$$ \theta_{HCE} = \theta_3 + \theta_4, $$

where the angles $\theta_3$ and $\theta_4$ can be computed by

$$ \theta_3 = \tan^{-1}\!\left(\frac{|x_h - x_c|}{|y_h - y_c|}\right), \qquad \theta_4 = \tan^{-1}\!\left(\frac{|x_e - x_c|}{|y_e - y_c|}\right), $$

so that the angle $\theta_{HCE}$ can be calculated by

$$ \theta_{HCE} = \tan^{-1}\!\left(\frac{|x_h - x_c|}{|y_h - y_c|}\right) + \tan^{-1}\!\left(\frac{|x_e - x_c|}{|y_e - y_c|}\right). $$

According to standing balance, dynamic balance is the ability to maintain balance while moving the body. Therefore, the ratio between the left and right legs while balancing the body is related by $R_1$ and $R_2$,

$$ R_1 = \frac{\overline{CH}}{\overline{CE}}, \qquad R_2 = \frac{\overline{CE}}{\overline{CH}}, $$

where the ratio $R_1$ between $\overline{CH}$ and $\overline{CE}$ is the reciprocal of the ratio $R_2$ between $\overline{CE}$ and $\overline{CH}$.

Figure 3. Angle (a) $\theta_{ACB}$ and (b) $\theta_{HCE}$

"Fall" and "NotFall" prediction
In this section, we introduce the constraints for "Fall" and "NotFall" prediction related to body location and posture geometry. For the "NotFall" prediction presented in Figure 4(a), $\theta_{ACB}$ is the angle between point A, point C, and point B, and $\theta_{HCE}$ is the angle between point H, point C, and point E. When the angles $\theta_{ACB}$ and $\theta_{HCE}$ are close to $0°$, the proposed HFBL system predicts a "NotFall" activity. As shown in Figure 4(b)
, the prediction "Fall" consists of $\theta_{ACB}$ and $\theta_{HCE}$ close to $180°$; in that case, the proposed HFBL system predicts a "Fall" activity. The relationship for predicting whether a person falls or does not fall can be defined as

$$ \text{Prediction}(f_i) = \begin{cases} \text{"Fall"}, & \theta_{ACB} = 180° \ \text{or} \ \theta_{HCE} = 180° \\ \text{"NotFall"}, & \theta_{ACB} = 0° \ \text{or} \ \theta_{HCE} = 0° \end{cases} $$

where $f_i$ is the sequence of the video datasets used for simulation. In Figures 4(a) and 4(b), the parameters $(\alpha, \alpha_1)$ are the thresholds for falling and not falling. Then, we can predict whether a human falls or does not fall by

$$ \text{Prediction}(f_i) = \begin{cases} \text{"Fall"}, & \theta_{ACB} \geq 180° - \alpha \ \text{or} \ \theta_{HCE} \geq 180° - \alpha_1, & 0 \leq (\alpha, \alpha_1) \leq 180 \\ \text{"NotFall"}, & \theta_{ACB} \leq 0° + \alpha \ \text{or} \ \theta_{HCE} \leq 0° + \alpha_1, & 0 \leq (\alpha, \alpha_1) \leq 180 \end{cases} $$

where the corresponding fall and not-fall threshold pairs sum to $180°$.

Figure 4. Prediction (a) "NotFall" and (b) "Fall"

SIMULATION RESULTS
A total of 26 video datasets was used for the simulation, each video being 2 minutes long with a resolution of 640×480. The UR fall detection dataset [ ], with 2,542 images of size 640×480×3, was prepared through image segmentation to find the human body shape and analyzed using the proposed HFBL system. The results are shown in Figure 5, Figure 6, and Tables 1 to 5.

Figure 5. Results of the proposed HFBL model for human fall prediction

Figure 6. The results of (a), (d), (g) the expected values of falls; (b), (e), (h) the values of falls predicted by the proposed HFBL model; and (c), (f), (i) the actual values of falls
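Before turning to the numbers, the posture geometry and the threshold rule described in the previous section can be sketched in a few lines of Python. This is a sketch under our own assumptions: the angles are measured from the vertical axis through the center C (consistent with "near 0° standing, near 180° lying"), and the exact way the ratios $R_1$/$R_2$ gate the decision is an assumption; the default thresholds mirror Table 3's best setting ($\alpha = 60°$, $\alpha_1 = 30°$, $R_1 = 2.5$, $R_2 = 0.4$).

```python
import math

def posture_angles(C, A, B, E, H):
    """theta_ACB and theta_HCE, each the sum of the two limb angles
    measured from the vertical axis through the center C (assumed
    convention). Points are (x, y) pixel coordinates."""
    def from_vertical(P):
        # angle of segment C->P away from the vertical line through C
        return math.degrees(math.atan2(abs(P[0] - C[0]), abs(P[1] - C[1])))
    theta_acb = from_vertical(A) + from_vertical(B)  # upper-body spread
    theta_hce = from_vertical(H) + from_vertical(E)  # lower-body spread
    return theta_acb, theta_hce

def leg_ratios(C, H, E):
    # reciprocal ratios between the center-to-leg distances CH and CE
    ch, ce = math.dist(C, H), math.dist(C, E)
    return ch / ce, ce / ch

def predict(theta_acb, theta_hce, R1, R2,
            alpha=60.0, alpha1=30.0, r1_lim=2.5, r2_lim=0.4):
    """'Fall' when either angle is within alpha (alpha1) of 180 degrees
    AND the leg ratios confirm imbalance; 'NotFall' otherwise.
    The ratio gating is our assumption, not the paper's exact rule."""
    angles_fall = (theta_acb >= 180.0 - alpha) or (theta_hce >= 180.0 - alpha1)
    imbalance = (R1 >= r1_lim) or (R2 <= r2_lim)
    return "Fall" if (angles_fall and imbalance) else "NotFall"

# Upright posture: extremities near the vertical, balanced legs -> NotFall.
t1, t2 = posture_angles((0, 0), (10, -40), (-10, -40), (10, 40), (-10, 40))
R1, R2 = leg_ratios((0, 0), (-10, 40), (10, 40))
print(predict(t1, t2, R1, R2))  # NotFall

# Lying posture: extremities near the horizontal, skewed leg ratio -> Fall.
t1, t2 = posture_angles((0, 0), (40, -5), (-40, -5), (5, 3), (-45, 3))
R1, R2 = leg_ratios((0, 0), (-45, 3), (5, 3))
print(predict(t1, t2, R1, R2))  # Fall
```

In the upright example both angles are about 28°, well below the thresholds; in the lying example $\theta_{ACB}$ exceeds $180° - \alpha = 120°$ and $R_1 \approx 7.7$ confirms the imbalance.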
As shown in Table 1, the experimental results were obtained by modifying the values of $\alpha$ and $\alpha_1$ of the proposed HFBL model without including the values of $R_1$ and $R_2$. The best values in this table are $\alpha = 60°$ and $\alpha_1 = 60°$, with an accuracy of 78.91%, a precision of 86.80%, and a recall of 71.87%. All of these results show comparatively low accuracy. Table 2 gives the results of the experiment adjusting the values of $\alpha$ and $\alpha_1$ of the proposed HFBL model while taking into consideration the values $R_1 = 1.5$ and $R_2 = 0.7$. The best values in this table are $\alpha = 60°$ and $\alpha_1 = 30°$, with an accuracy of 91.15%, a precision of 99.06%, and a recall of 84.37%. These results have higher accuracy than before. Table 3 presents the results of the experiment adjusting the values of $\alpha$ and $\alpha_1$ of the proposed HFBL model with $R_1 = 2.5$ and $R_2 = 0.4$ considered as well. The best values in this table are $\alpha = 60°$ and $\alpha_1 = 30°$, with an accuracy of 91.23%, a precision of 99.14%, a recall of 84.48%, and an F1-score of about 91%. The results demonstrate higher accuracy. As shown in Table 4, the experimental results were obtained by adjusting the values of $\alpha$ and $\alpha_1$ of the proposed HFBL model with $R_1 = 3.5$ and $R_2 = 0.3$. The best values in this table are $\alpha = 60°$ and $\alpha_1 = 30°$, with an accuracy of 91.23%, a precision of 99.06%, a recall of 84.51%, and an F1-score of about 91%. The results have higher accuracy. The proposed HFBL model predicts "Fall" or "NotFall", and its results are presented in Figure 5 using the dataset of 26 video clips, each 2 minutes in length and with a resolution of 640×480. The prediction result ("Predict") that a person will fall by the proposed HFBL model is compared with the expectation of fall and the actual fall.
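The reported scores are tied together by the usual harmonic-mean definition of the F1-score; as a quick sanity check (ours, not the paper's code), Table 3's best precision/recall pair implies an F1 of roughly 91.2%, consistent with the truncated "91" values reported above.

```python
def f1_score(precision_pct, recall_pct):
    """F1 as the harmonic mean of precision and recall (in percent)."""
    return 2 * precision_pct * recall_pct / (precision_pct + recall_pct)

# Table 3's best setting: precision 99.14%, recall 84.48%
f1 = f1_score(99.14, 84.48)
print(round(f1, 2))  # about 91.22
```

The large gap between precision (99.14%) and recall (84.48%) indicates the model rarely raises false fall alarms but misses some true falls, which the harmonic mean penalizes more than an arithmetic average would.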
As shown in Table 5, $\Delta_{exp-HFBL}$ is the difference between the proposed HFBL model and the fall expectation; the average difference $\Delta_{exp-HFBL}$ between the predicted and expected fall times is 5.31 seconds. $\Delta_{act-HFBL}$ is the difference between the proposed HFBL model and the actual fall; the average time of $\Delta_{act-HFBL}$ is 31.31 seconds.

Table 1. Experimental results of adjusting $\alpha$ and $\alpha_1$ values of the proposed HFBL model without $R_1$ and $R_2$ (columns: $\theta_{ACB}$, $\theta_{HCE}$, Accuracy (%), Precision (%), Recall (%), F1-Score (%))

Table 2. Experimental results of adjusting $\alpha$, $\alpha_1$ with $R_1 = 1.5$ and $R_2 = 0.7$ of the proposed HFBL model (columns: $\theta_{ACB}$, $\theta_{HCE}$, Accuracy (%), Precision (%), Recall (%), F1-Score (%))

Table 3. Experimental results of adjusting $\alpha$, $\alpha_1$ with $R_1 = 2.5$ and $R_2 = 0.4$ of the proposed HFBL model (columns: $\theta_{ACB}$, $\theta_{HCE}$, Accuracy (%), Precision (%), Recall (%), F1-Score (%))

Table 4. Experimental results of adjusting $\alpha$, $\alpha_1$ with $R_1 = 3.5$ and $R_2 = 0.3$ of the proposed HFBL model (columns: $\theta_{ACB}$, $\theta_{HCE}$, Accuracy (%), Precision (%), Recall (%), F1-Score (%))

Table 5. The difference $\Delta_{exp-HFBL}$ between the prediction of the proposed HFBL model and the expectation, and the difference $\Delta_{act-HFBL}$ between the prediction of the proposed HFBL model and the actual fall (columns: No. of video sequence, Expectation, Proposed HFBL model, Fall, $\Delta_{exp-HFBL}$, $\Delta_{act-HFBL}$; last row: time average)

From Figure 6, the expected value in Figure 6(a) shows a person falling at 54 seconds of the video, the value predicted by the proposed HFBL model in Figure 6(b) is at 60 seconds, and the actual fall in Figure 6(c) is at 84 seconds. The expected value of a person falling in Figure 6(d) is at 28 seconds, while the value predicted by the proposed HFBL model in Figure 6(e) is
at 32 seconds of the video, and the actual fall in Figure 6(f) is at 68 seconds. The expected value of a person falling in Figure 6(g) is at 66 seconds, the value predicted by the proposed HFBL model in Figure 6(h) is at 72 seconds, and the actual fall in Figure 6(i) is at 86 seconds. Overall, the fall time predicted by the proposed HFBL model is close to the expected fall time and is produced before the person actually falls.

CONCLUSION
We propose a low computational and reliably accurate fall detection HFBL model using human body location and posture geometry. The proposed HFBL model applies image segmentation to find the human posture and the distance transform to calculate the reference points and angles for fall and non-fall prediction. A dataset of 2,542 images of size 640×480×3 and a dataset of 26 videos, each 2 minutes long with a resolution of 640×480, from the UR fall detection dataset were prepared for the experiments. By modifying only $\alpha$ and $\alpha_1$ of the proposed HFBL model, the accuracy is 91.23%, the precision is 99.14%, the recall is 84.48%, and the F1-score is about 91%. The proposed HFBL model predicts falls close to the expected fall time, with an average difference of 5.31 seconds, and predicts falls in advance of the actual fall time by an average of 31.31 seconds. Future work on the proposed model may incorporate adaptive, tracking, and time-based functions with high accuracy. These improvements aim to reduce mispredictions caused by high camera angles and insufficient lighting, improve accuracy for activities such as lying down or kneeling, and enable multi-person prediction, thus overcoming the current single-person limitation.

FUNDING INFORMATION
Authors state no funding involved.
AUTHOR CONTRIBUTIONS STATEMENT
This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration.

Name of Author: Pipat Sakarin; Suchada Sitjongsataporn
Roles: C: Conceptualization; M: Methodology; So: Software; Va: Validation; Fo: Formal analysis; I: Investigation; R: Resources; D: Data Curation; O: Writing - Original Draft; E: Writing - Review & Editing; Vi: Visualization; Su: Supervision; P: Project administration; Fu: Funding acquisition

CONFLICT OF INTEREST STATEMENT
Authors state no conflict of interest.

DATA AVAILABILITY
The data that support the findings of this study are openly available in the UR fall detection dataset at http://doi.org/10.1016/j. 005, [ ].

REFERENCES