OPEN ACCESS ISSN 2356-5462 http://socj.id/ijoict/ Intl. Journal on ICT, Vol. No., Dec 2023. doi: doi.org/10.21108/ijoict.

Performance Analysis of Facial Image Feature Extraction Algorithm for Smart Home Security System Detection

Muhammad Ihsan Adly 1, Satria Mandala 2*
School of Computing, Telkom University, Bandung, Indonesia
*satriamandala@telkomuniversity.

Abstract
In Indonesia, alongside advancements in multi-family security technology, smart home security tools have gained popularity. These tools operate automatically in real time. However, existing tools often lack consistent accuracy and performance. To address this, we propose a smart home security system incorporating an Arduino UNO-connected camera, two relay modules, a magnetic lock, and integration with a home Internet of Things system. Our research methodology involved: 1. Reviewing existing literature on Smart Home Security, specifically focusing on facial image feature extraction algorithms. 2. Deploying an Arduino UNO, a 2 Relay Module, and a Solenoid Lock. 3. Employing the Wavelet feature extraction algorithm, chosen for its compatibility with human behavior patterns. Our proposed method aims for an accuracy rate of 80% or higher. Experimental results demonstrate that our prototype achieved an accuracy of 85.7%, along with a precision rate of 87.94%, a recall rate of 87.56%, and an F1-score of 87.28%.

Keywords: Internet of Things, Smart Home Security, Arduino UNO, 2 Relay Module, Solenoid Lock, Wavelet

I. INTRODUCTION
WITH the rapid development of the economy, rising living standards, and environmental degradation, people's demands for a comfortable and safe living environment are growing ever higher. Crimes such as robbery and theft are serious problems for any family. The victims of theft are not limited to the upper-middle economic communities; lower-middle economic communities fall victim as well.
The cause of many house break-ins is that the security level of the building is still low, making it easy for criminals to enter. Home security systems have become an important issue in recent years. Such a system is designed to ensure that your property and loved ones are always safe. The home security system commonly used by Indonesians is only an ordinary door lock, which can easily be broken with a screwdriver. There is also the problem of human error, where people forget to lock the door at night, so criminals can easily enter the house. Traditional door locks are also inconvenient because they require a key to access. Consequently, it is crucial to implement preventive measures, such as developing a face recognition smart home security system. A smart home can be seen as an environment that utilizes communication and computing technologies to enhance the residents' quality of life and enable remote control and management of household appliances. The primary goal is to improve convenience, safety, and comfort for the occupants of the building by integrating advanced technological solutions within the home. With this advanced technology, burglars cannot easily enter the house. In recent times, face recognition has emerged as a prominent subject within pattern recognition. In response, researchers have put forward diverse approaches and methodologies to tackle this area. Face recognition is a widely studied subject with applications spanning various domains, including military, commercial, and public security fields. Many smart security devices for homes are sold nowadays, but these tools still have weaknesses, such as low detection accuracy, and their application is still complex.

Received on 04 Aug 2023. Revised on 21 Aug 2023. Accepted and Published on 08 Jan 2024.
Installing this detection tool can be regarded as a first attempt to improve security and reduce cases of home break-ins. Many smart home security tools that use facial recognition have been designed by researchers. Most analysts center on detection accuracy under favorable conditions and disregard other aspects of the instrument. Usually, such a device requires good lighting, so if the lighting conditions are not stable, the instrument does not work accurately, and this can lead to break-ins. Besides the problem of detection algorithms, it is still rare to find smart home security tools using facial recognition that can work in poor lighting. Security is one of the main concerns in the smart home environment. This research proposes a feature extraction algorithm using Wavelet and a deep learning algorithm using CNNs for the classification. The Wavelet feature extraction algorithm transforms facial images using wavelets to capture important spatial details and frequency information. It is expected that with the proposed feature extraction algorithm, this tool can detect accurately under difficult conditions, achieve high accuracy, and maintain the best security possible.

II. LITERATURE REVIEW
Existing Research
In one study, researchers designed an Internet of Things-based face recognition tool that uses physical location monitoring. The accuracy of this tool reaches 87.77% using the hold-out method and 89. using the stratified 10-fold cross-validation method; accuracy with other classifiers also remains above 75%, so the tool can be considered competitive. In another study, researchers designed a smart home security system that can detect faces and movements using motion sensors and IoT cameras. The accuracy of this tool reaches 80%. The system uses the KNN algorithm and a complex decision tree. In a third study
, researchers designed a system to detect facial expressions using the dual-tree multi-band wavelet transform (DTMBWT) and a Gaussian mixture model (GMM). The accuracy obtained from implementing the DTMBWT system and GMM classifier reached 98. This system achieves very high accuracy, so it has few shortcomings. In a further study, researchers designed a face and voice detection system that can perform detection even when the user wears a face mask or medical mask. The advantage of this system is its high detection accuracy: 5% for face detection and 84.62% for voice detection. Even if the user wears a face mask or medical mask, the accuracy does not decrease much.

Theoretical Foundation
Wavelet Feature Extraction: Feature extraction is the process of extracting information from the original image. Efficient feature extraction is a very difficult problem for machine approaches compared with human visual perception. Feature extraction algorithms in facial recognition are used to identify relevant facial features, reduce dimensionality, normalize pose and lighting, handle noise and variation, and enable matching and verification. These algorithms extract characteristic facial features, create compact representations, account for variations in facial appearance, ensure robustness to noise, and accurately match faces for identification and verification. One of the most potent and useful methods for face recognition is the wavelet transform. Wavelet feature extraction is a valuable face detection technique that can effectively analyze facial features in images. We apply a wavelet transform to the image and decompose it into its various frequency components to capture global and local facial details. Multi-level decomposition allows extraction of finer features such as the eyes, nose, and mouth. Reducing feature dimensionality improves efficiency.
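A minimal sketch of the decomposition just described, using the Haar wavelet (the simplest wavelet basis) in plain NumPy. The function names and the two-level setting are illustrative assumptions; a production system would more likely use a dedicated library such as PyWavelets:

```python
import numpy as np

def haar_dwt2(image):
    """Single-level 2D Haar wavelet decomposition.

    Splits a grayscale image (even dimensions assumed) into four subbands:
    LL (coarse approximation) plus LH/HL/HH (horizontal, vertical, and
    diagonal detail), halving each spatial dimension.
    """
    img = image.astype(float)
    # Transform rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Transform the columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(image, levels=2):
    """Multi-level decomposition: recursively transform the LL band and
    keep the flattened detail coefficients as a feature vector."""
    feats = []
    band = image
    for _ in range(levels):
        band, lh, hl, hh = haar_dwt2(band)
        feats.extend([lh.ravel(), hl.ravel(), hh.ravel()])
    feats.append(band.ravel())  # final coarse approximation
    return np.concatenate(feats)

face = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a face image
vec = wavelet_features(face, levels=2)
print(vec.shape)  # same number of coefficients as input pixels: (64,)
```

Keeping only the coarser LL band (or the first few subbands) is what gives the dimensionality reduction mentioned above, while the detail bands localize edges such as eye and mouth contours.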
These extracted features are used to recognize faces with classification algorithms. This approach is more robust to changes in facial appearance, lighting conditions, and poses, making it useful for facial recognition systems, surveillance, and human-computer interaction. The extracted feature representations are then used for subsequent classification.

CNN Classification: Classification is the final stage of the recognition process, used to categorize the subject of perception, which here is the human face. Despite the impressive success of traditional face recognition methods based on hand-crafted features, researchers have turned to deep learning over the past decade for its strong automatic recognition capabilities. Deep learning is an artificial intelligence approach with the ability to automatically learn from experience to improve performance. A CNN, or Convolutional Neural Network, is a type of deep learning model used to analyze visual data such as images; it automatically learns and extracts important features at different levels of the data. It captures progressively more complex details, enabling accurate detection and classification of facial image data. CNNs are successful in tasks such as image classification and face recognition due to their ability to learn and extract features from visual data. CNNs are used for classification tasks, learning how to assign labels to input data such as facial images. This is achieved by automatically extracting relevant features from the input during the training process. A CNN adjusts its weights to improve predictions, minimize errors, and achieve accurate classification. A key property of CNNs is their ability to capture and represent the features needed to distinguish different objects and classes without manual feature engineering. This property makes CNNs very effective in tasks where a feature extraction method such as Wavelet plays an important role in face recognition.
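As a concrete illustration of the pipeline just described, the sketch below runs one convolution-ReLU-pooling stage followed by a dense softmax layer over the two classes, all in plain NumPy. All names, sizes, and random weights are illustrative assumptions; a real system would train these weights on the labeled dataset rather than sample them:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool2(x):
    """2x2 max pooling (truncates odd dimensions)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One conv layer (4 filters) -> ReLU -> pool -> dense layer over 2 classes
filters = rng.normal(size=(4, 3, 3))
image = rng.normal(size=(16, 16))              # stand-in for a face crop
maps = [max_pool2(relu(conv2d_valid(image, f))) for f in filters]
features = np.concatenate([m.ravel() for m in maps])
weights = rng.normal(size=(2, features.size))  # "Occupant" vs "Non-Occupant"
probs = softmax(weights @ features)            # class probabilities, sum to 1
```

Training then consists of adjusting `filters` and `weights` by backpropagation to minimize the classification error, exactly the weight-adjustment process described above.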
Test Parameters: After constructing a model consisting of the Wavelet feature extraction algorithm and CNN (Convolutional Neural Network) classification, the authors evaluated the performance of the model. Performance evaluation is important to show that the proposed model fulfills the purpose of this experiment and can achieve the measurement goals of the performance analysis. The performance evaluation used in this experiment consists of Accuracy, Precision, Recall, and F1-Score.

Accuracy: Accuracy is a metric that measures the correctness of the predictions made by the model. It quantifies the percentage of correctly classified samples of the facial image data relative to the total number of instances in the facial image dataset. The formula for calculating accuracy is given below:

Accuracy = (True Positive + True Negative) / (True Positive + True Negative + False Positive + False Negative)

Precision: Precision is calculated by dividing the number of true positive predictions by the sum of true positive and false positive predictions. This provides insight into the model's ability to minimize false positives and accurately identify positive cases. The formula for calculating precision is given below:

Precision = True Positive / (True Positive + False Positive)

Recall: Recall is a metric that measures the model's ability to correctly identify positive instances out of the total number of actual positive instances. It quantifies the proportion of true positive predictions made by the model relative to the total number of positive instances in the dataset. The formula for calculating recall is given below:
Recall = True Positive / (True Positive + False Negative)

F1-Score: F1-Score is a metric that combines precision and recall into a single measurement. F1-Score provides a balanced assessment of a model's performance by taking into account both false positives and false negatives. The formula for calculating the F1-score is given below:

F1-Score = 2 × (Precision × Recall) / (Precision + Recall)

III. RESEARCH METHOD
Research Stages
This research begins with designing the system, then gathering the dataset, then creating the proposed method using the proposed algorithm, and finally evaluating the outcome. The details of the process can be seen in Fig. 1 below.

Fig. 1. Flowchart of the Processing Flow

Fig. 1 shows the process of the performance analysis of feature extraction on facial image data. The experiment begins with a base design of the system, followed by preparing the dataset. The dataset undergoes pre-processing, which consists of labeling the facial image data "Occupant" [Penghuni] and "Non-Occupant" [Bkn_Penghuni]. After the facial image data has been labeled, the dataset is divided into two sets: Data Train and Data Test. After splitting, features of the facial image data are extracted using the Wavelet feature extraction algorithm. The extracted dataset is then classified using the CNN algorithm. The end result of the classification of the facial image data matches the labels, i.e., "Occupant" or "Non-Occupant".

Dataset for Experiments
The data required for this experiment consisted of facial images of family members and non-family members.
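The four test parameters (Accuracy, Precision, Recall, F1-Score) can be computed directly from confusion-matrix counts, as sketched below. The counts used here are made-up for illustration and are not results from this experiment:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the four test parameters from confusion-matrix counts:
    true positives, true negatives, false positives, false negatives."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration only
acc, prec, rec, f1 = classification_metrics(tp=88, tn=84, fp=12, fn=16)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# → 0.86 0.88 0.846 0.863
```

Note how accuracy uses all four counts while precision and recall each penalize only one error type; F1 is their harmonic mean, which is why it sits between them.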
The images were captured in different ways: most of the facial image data were taken from internet websites that provide datasets for experiments, and part of the facial image data was taken with the author's smartphone, a Redmi Note 9 Pro. After gathering the facial image data, the author labeled the data. For family members the label was "Penghuni" [Occupant], and for non-family members the data was labeled "Bkn Penghuni" [Non-Occupant]. The labeled data from the internet and the author's smartphone were split evenly between the Data Test and Data Train sets. The total amount of facial image data used in this experiment was four hundred and ten.

IV. RESULTS AND DISCUSSION
Dataset Input and Image Training using the Proposed Model
The dataset that the author gathered consisted of privately collected data captured with the author's smartphone and human facial data collected from internet websites. The total combined data from the author's smartphone and internet websites was 407 facial images. The data that the author privately collected were captured from various angles, and the data collected from the internet were likewise chosen to show different angles. The detailed facial image dataset used in this experiment is shown in TABLE I and TABLE II. After gathering and selection, the facial image data were trained using the proposed model, which consists of the Wavelet feature extraction algorithm and the CNN (convolutional neural network) classification algorithm. The resulting model is used in this experiment.

TABLE I
OCCUPANT FACIAL IMAGE DATASET SAMPLE
Columns: Amount | Source | Example of Facial Image Data
Sources: Writer Smartphone; Internet Website; Internet Website; Writer Smartphone
TABLE II
NON-OCCUPANT FACIAL IMAGE DATASET SAMPLE
Columns: Amount | Source | Example of Facial Image Data
Sources: Internet Website; Internet Website; Internet Website; Writer Smartphone

Experiment
In this experiment, the model created earlier with the Wavelet feature extraction algorithm and the CNN classification algorithm is used by the author to build a facial recognition program. The performance of this model is evaluated using k-fold cross-validation with the number of folds set to 5 (5-Fold). The author chose K = 5 specifically because of the small size of the dataset, which is less than 1000 data points in total. The performance measurement results of the model are shown in TABLE III.

TABLE III
PERFORMANCE MEASUREMENT RESULTS
Columns: Fold | Accuracy | Precision | Recall | F1-Score

With the results of all five folds, the average performance of the proposed model was 85.7% in accuracy, 87.94% in precision, 87.56% in recall, and 87.28% in F1-Score. The highest performance was achieved in Fold 5: 87.1% in accuracy, 90.5% in precision, 7% in recall, and 89.5% in F1-score. The lowest performance was achieved in Fold 3: 82.5% in accuracy, 86.1% in precision, 85.5% in recall, and 84.8% in F1-score.

Hardware and Software
In this experiment, the author used hardware and software owned by the author. The detailed hardware and software specifications are given below.

Software Specification
- Windows 10 Pro
- Python
- OpenCV
- Jupyter Notebook
- Visual Studio Code
- Arduino IDE
Hardware Specification
- Processor: Intel Core i5-7200U
- Memory: 8192 MB RAM
- Arduino UNO
- 2 Relay Module
- Solenoid Lock
- 12 V Power Supply
- Male/female wires

Prototype Testing
In this prototype testing, the author implemented the already trained model in the proposed prototype device. There are two possible end results; the details are given below.

Accepted Condition: The accepted condition is where the captured facial image data matched the trained model. This condition allows the solenoid lock to open, because the person is classified as "Penghuni" [Occupant]. An example of the accepted condition is shown in Fig. 2 below.

Fig. 2. Accepted Condition Example

Declined Condition: The declined condition is where the captured facial image data did not match the trained model. This condition does not allow the solenoid lock to open, because the person is classified as "Bkn_Penghuni" [Non-Occupant]. An example of the declined condition is shown in Fig. 3 below.

Fig. 3. Declined Condition Example

Discussion
Recent research into facial recognition has already demonstrated a significant increase in accuracy through the application of the Wavelet feature extraction algorithm. This experiment also highlighted the effectiveness of the Wavelet feature extraction algorithm in facial image detection. With the Wavelet feature extraction algorithm, accuracy has improved and the result is stable.
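The accepted/declined gating from the prototype testing can be illustrated as a small decision function. The class labels follow the paper; the confidence threshold and the UNLOCK/LOCK command strings are assumptions for illustration, not the paper's actual interface:

```python
def lock_command(predicted_label, confidence, threshold=0.8):
    """Map the classifier output to a solenoid-lock action.

    Only a "Penghuni" (Occupant) prediction at or above the confidence
    threshold opens the lock; anything else keeps it closed. The threshold
    value is an assumed parameter, not taken from the paper.
    """
    if predicted_label == "Penghuni" and confidence >= threshold:
        return "UNLOCK"  # e.g. a command sent over serial to the Arduino relay
    return "LOCK"

print(lock_command("Penghuni", 0.91))      # UNLOCK  (accepted condition)
print(lock_command("Bkn_Penghuni", 0.95))  # LOCK    (declined condition)
print(lock_command("Penghuni", 0.55))      # LOCK    (below threshold)
```

Defaulting to "LOCK" on anything but a confident occupant match is the fail-safe choice for a door: a false negative inconveniences an occupant, while a false positive admits a stranger.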
The advantages of the Wavelet algorithm over other feature extraction algorithms include its ability to analyze data at different scales, accurately localize features in time and frequency, detect edges, compress data effectively, adapt to different parts of the data, and succeed in tasks like object recognition and biomedical signal processing. Nevertheless, the choice of the appropriate method depends on the unique characteristics of the data and the specific requirements of the task at hand. TABLE IV below compares the accuracy of other experiments using different methods with that of the Wavelet feature extraction algorithm.

TABLE IV
PERFORMANCE COMPARISON WITH OTHER RESEARCH ON FACE DETECTION
Columns: Method | Accuracy
Methods: Wavelet with Convolutional Neural Networks; HOG with Feedforward Neural Networks; Gabor-Based; Quanvolutional Neural Networks with a Single Quanvolutional Filter

The objective of the comparison in TABLE IV is to show the advantage of the Wavelet feature extraction algorithm over other methods. The accuracy the writer achieved using the Wavelet feature extraction algorithm was 85.7%; another experiment that used HOG (Histogram of Oriented Gradients) achieved 34% accuracy, while an experiment using the Gabor-Based method achieved 67%. The experiment using Quanvolutional Neural Networks achieved 95.8%; QNNs are simply an extension of classical CNNs. The various classification methods mentioned earlier are part of the artificial neural network category, which relies on shared mathematical modeling principles in the field of machine learning. These methods incorporate neuron activation functions, such as ReLU, sigmoid, or tanh, within their layers to introduce non-linearity into their models. Despite their architectural differences, they have practical applications across a wide range of machine learning tasks.
The differences between the results arise because each study used a different dataset, which accounts for the significant variation.

V. CONCLUSION
In this section, the writer presents the conclusion and analyzes the results of the experiment to show whether the proposed research objective has been achieved. After careful analysis, the experiment achieved the proposed accuracy target. The proposed facial image recognition method using the Wavelet feature extraction algorithm and the CNN (Convolutional Neural Network) classification algorithm achieved 85.7% in accuracy, 87.94% in precision, 87.56% in recall, and 87.28% in F1-score. Detecting faces using a combination of Wavelet feature extraction and CNN classification provides significant benefits over a Gabor-based method. Both wavelet-based feature extraction algorithms and convolutional neural networks (CNNs) offer distinct advantages in various aspects of facial recognition. Wavelet-based methods are well suited to tasks requiring multiresolution analysis, feature localization, edge detection, texture analysis, and energy compaction. They are especially useful for tasks with varying scales and noise reduction requirements. Wavelets can efficiently retrieve important information from images and are suitable for various feature fusion techniques. CNNs, on the other hand, have revolutionized the field of computer vision with their ability to automatically learn hierarchical and abstract features from raw pixel data. They excel at tasks such as object recognition, image classification, and semantic segmentation. CNNs can effectively handle complex and large datasets, offering high accuracy and generalization ability. They can learn hierarchical representations, making them suitable for tasks where the underlying patterns are not explicitly known.
It is important to consider the specific requirements of the task at hand when deciding on the appropriate approach. Wavelet-based techniques may be suitable for tasks involving multiresolution analysis, localization, and denoising, whereas CNNs are better suited to tasks requiring advanced pattern recognition and large-scale image analysis. In some cases, a combination of both methods can take advantage of each other's strengths and provide even better performance. Combining wavelet-based features with convolutional neural networks (CNNs) offers several advantages over Gabor-based features. CNNs can automatically learn hierarchical and abstract features from raw data, making them more adaptive and capable of capturing complex patterns. This data-driven approach improves performance across a wide variety of tasks and datasets. Additionally, end-to-end training with CNNs optimizes both feature extraction and classification, increasing model effectiveness. The combination is also more resistant to changes in pose, lighting, and scale. Furthermore, wavelet-based features help reduce dimensionality and make processing of large image datasets more efficient. However, when choosing between Gabor features and wavelet-based features, the specific task requirements should be considered, as either method may work well for a particular application. Experimentation and evaluation are essential to determine the most appropriate feature extraction approach for a particular problem. However, the effectiveness of this research method's accuracy is also determined by the dataset used, the complexity of the conditions, and the implementation of the method on different problems. In real-world trials, the detection rate may differ.

DATA AND COMPUTER PROGRAM AVAILABILITY
The data and programs used in this paper can be accessed at https://github.com/GNite12/TugasAkhir and https://s.id/1RZCN.

ACKNOWLEDGMENT
The authors would like to thank everyone who supported the author throughout this research, especially the author's mother and father, the author's lecturer for guidance during the research, and all of the author's friends who offered support along the way.

REFERENCES