11th ISC 2024 (Universitas Advent Indonesia, Indonesia)
"Research and Education Sustainability: Unlocking Opportunities in Shaping Today's Generation Decision Making and Building Connections"
October 22-23, 2024

Allertify: Android-based Mobile Application for Food Allergen Identification Using Barcode Scanning and Image Recognition

Eniel Leba1, Jimea Aziel Faigao1, Mary Valen Magante1, Vernon Joseph Yeremia Tamba1, Ivar Nelson Eñano Bingcang2*
Adventist University of the Philippines
INEBingcang@aup.edu.ph

ABSTRACT

This study introduces "Allertify," an Android-based mobile application designed to assist individuals with food allergies by identifying potential allergens in food products through barcode scanning and image recognition. The primary objective was to develop a comprehensive, user-friendly tool to minimize allergic reactions by providing quick and accurate allergen detection. The project was motivated by the need for a practical solution that helps people make safer food choices, particularly in environments where allergen exposure is a health risk. Allertify was developed using Thunkable for front-end design and Google Teachable Machine for training the image recognition model with a custom dataset. Barcode scanning and image recognition were implemented to identify allergens in food products. The application was tested within a private university campus, focusing on food products available in the cafeteria and store. The findings showed that barcode scanning performed with a 100% success rate and an average processing time of 4 seconds. In contrast, the image recognition feature had a 91% success rate but only 50% accuracy, with an average processing time of 17 seconds. The lower accuracy for image recognition was primarily due to a limited training dataset and visual similarities between certain food items. The key limitations include a small food item database and the requirement for an internet connection, which may restrict the application's availability in some settings. Despite these challenges, Allertify demonstrates significant potential as a tool for improving food safety, offering users real-time feedback on allergens. Future development could focus on expanding the database, improving image recognition accuracy, and adding offline capabilities, making the application more robust and reliable.

Keywords: Food Allergens, Barcode Scanning, Image Recognition, Thunkable, Google Teachable Machine

INTRODUCTION

Food allergies affect millions of people worldwide, posing serious health risks ranging from mild reactions to life-threatening conditions. Allergic reactions can be triggered by even small amounts of allergens present in food, leading to symptoms such as swelling, vomiting, difficulty breathing, and, in severe cases, anaphylaxis. In the Philippines, a food allergy reaction sends someone to the emergency room every three minutes, highlighting the severity of the issue (Gaas, 2021). While food poisoning often garners more public attention, food allergies present an equally urgent but often overlooked public health concern. Current measures for managing food allergies, such as stringent food labeling and consumer education, provide some level of protection, but they are not foolproof.
Unintentional allergen consumption remains a significant risk, particularly for those with severe allergies, despite the efforts of food safety organizations and health agencies (Grayson, 2022).

To address this challenge, various mobile applications have been developed to help individuals with food allergies identify potential allergens in their food. Most of these solutions rely on barcode scanning and databases to provide information about food ingredients. However, a 2020 study reviewing 1,376 food allergy/intolerance-related applications revealed that only 14 were relevant and useful for users. These apps often lack reliability and comprehensiveness, focusing on narrow functionalities such as identifying allergens in restaurant menus or packaged foods (Mandracchia, Llauradó, Tarro, Valls, & Solà, 2020). Barcode-based apps depend heavily on the availability and accuracy of product barcodes and databases, which is limiting for fresh, unpackaged, or homemade food, where no barcode is present. Additionally, these apps tend to lack personalized features, such as the ability to create and save custom allergen lists based on the user's specific allergies (Mandracchia et al., 2020; Bollinger, 2020).

Given the increasing number of food allergy-related hospitalizations, there is a clear need for more advanced, integrated, and user-friendly tools that combine multiple functionalities to provide accurate and comprehensive allergen detection. Existing solutions fail to adequately address the broad range of food environments, leaving individuals vulnerable, especially when consuming unpackaged or freshly prepared foods (Bollinger, 2020). This study was motivated by the need to enhance allergen identification tools by integrating two technologies, barcode scanning and image recognition, to create a more comprehensive and versatile system for detecting potential allergens.

The combination of barcode scanning and image recognition provides a dual-layered approach to allergen detection. While barcode scanning works well for packaged foods, it is not applicable to many real-life scenarios, such as eating at restaurants or consuming homemade meals where barcodes are absent. Image recognition helps bridge this gap by allowing users to take photos of their food for allergen identification, making the solution more robust (Chen et al., 2020). By combining these two technologies, the application can address a wider range of food environments, increasing its usefulness and reliability in helping individuals avoid allergens.

The primary objective of this study is to design and develop a mobile application that can accurately and efficiently detect food allergens using both barcode scanning and image recognition. "Allertify" is intended to give users a seamless experience for identifying potential allergens in food items, thereby reducing the risk of allergic reactions in a wide variety of settings. The application was tested in a controlled environment at a private university in the Philippines, specifically focusing on food items available in the cafeteria and store. The application was developed using Thunkable for front-end design and Google Teachable Machine for training the image recognition model with a custom dataset of food items.
The barcode scanning functionality retrieves product information, while the image recognition function identifies food items directly from user-captured images. The combination of these two technologies enables a more comprehensive and personalized approach to allergen identification, as it allows users to check both packaged and unpackaged foods (Chen et al., 2020; Herrmann et al., 2017).

The results of this study show that Allertify effectively identifies potential allergens by comparing food ingredients with a user-defined allergen list. However, there are limitations, including the need for an internet connection and a relatively small food item database, which limits the system's coverage. Despite these challenges, Allertify demonstrates significant potential as a valuable tool for improving food safety. Future development could focus on expanding the database, improving image recognition accuracy, and adding offline capabilities to enhance usability in various environments. This study contributes to the fields of health and food safety by providing a practical, innovative solution for managing food allergies, with the potential to significantly improve the quality of life for individuals with food sensitivities.

LITERATURE REVIEW

The literature review delves into the technological advancements and methodologies in food allergen identification, focusing on image recognition, barcode scanning, and the development of mobile applications.

Technical Background

The advent of image recognition and computer vision marked significant milestones in automated image analysis, with deep learning algorithms playing a crucial role. Computer vision, which emerged in the 1960s, aimed to replicate human vision systems to automate the process of image analysis. Deep learning has enhanced the capability of computers to identify and categorize images using vast datasets, employing benchmarks to compare new images for more precise recognition (Pulsar Platform, 2018).

Barcode scanning technology has evolved since the 1950s and has become a prevalent method for product identification. Initially used in grocery stores, the Universal Product Code (UPC) became a standard for tracking product information. While barcode scanning offers a cost-effective means of product tracking, it has limitations such as line-of-sight dependency and issues with improperly printed barcodes (Damle et al., 2020; Elaskari et al., 2021; Thanapal et al., 2017).

Review of Related Systems

Foreign systems leveraging barcode scanning and image recognition have shown promise in enhancing dietary tracking and food identification. Ming et al. (2018) developed DietLens, an app utilizing a deep learning algorithm for food recognition, allowing users to identify ingredients in food images and obtain nutritional information. Similarly, Chaturvedi et al. (2020) created a mobile application using deep learning to estimate nutritional content based on images of food items, thus raising awareness about food consumption and dietary needs. Local systems have also explored image recognition in different domains. For instance, De Goma & Devaraj (2020) developed a system using image processing and machine learning to categorize skin conditions, demonstrating the potential of image recognition in healthcare applications.
Integration of Thunkable and Google Teachable Machine

The integration of machine learning tools such as Google Teachable Machine (GTM) with application development platforms like Thunkable has facilitated the creation of user-friendly interfaces and efficient machine learning models. GTM employs convolutional neural networks (CNNs) for data analysis and offers different ways to train models. It utilizes transfer learning, which involves taking knowledge from one domain and applying it to another, enhancing the efficiency of model training (Prasad et al., 2022). Several studies have utilized GTM for various recognition tasks, including facial recognition for fraud detection (Shoukat & Akram, 2021) and identification of plant seeds with high accuracy (Chazar & Rafsanjani, 2022).

Conceptual Framework

The conceptual framework of the Allertify application illustrates the workflow and data flow between the end user and the internal systems of the application. This framework is designed to showcase the input-output process and provide a detailed structure of how the application functions. The main aim is to depict the steps involved in allergen identification using barcode scanning and image recognition, providing an understanding of how the application processes user inputs to generate outputs.

Figure 1. Conceptual Framework

1. Data Input and Storage:
   - The system's database is initially populated with a list of common food allergens and food items obtained from the Adventist University of the Philippines (AUP) store. Each food item in the database is associated with its ingredients, name, and barcode value.
   - For image recognition, the researchers train the model using Google Teachable Machine (GTM) by inputting image samples of food items served in the AUP cafeteria. This trained model is uploaded online so the application can access it. The food items' names and ingredients are stored in the database.

2. User Interaction:
   - Users enter their allergen list, which is stored locally on the device. They can select allergens from a predefined list of common allergens available in the system's database or manually input their own allergens.
   - Users can then choose to scan food items using either barcode scanning or image recognition. The system processes the uploaded image or barcode to identify the food item.

3. Food Item Identification:
   - If the food item is not in the database, the application displays a "not found" result to the user.
   - If the food item is found in the database, the system identifies potential allergens by comparing the food item's ingredient list with the user's allergen list. Based on this comparison, the application indicates whether the food item is potentially safe or unsafe for the user (a minimal sketch of this comparison step appears after this list).

4. Output and History:
   - The application adds the result to a history list, stored locally, allowing users to keep a record of their scans. The result is then displayed on the user's device, providing real-time feedback on the safety of the food item.
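To make the comparison step of the framework concrete, the following is a minimal sketch of the safe/unsafe decision described in steps 3 and 4. It is written in TypeScript for illustration only; the production app implements this logic in Thunkable blocks, and the type names, function name, and case-insensitive substring matching rule are all assumptions rather than the authors' actual code.

```typescript
// Hypothetical sketch of the allergen-comparison step (the real app builds
// this logic with Thunkable blocks; all names and the matching rule are assumed).

interface FoodItem {
  name: string;
  ingredients: string[]; // e.g., ["wheat flour", "milk powder", "sugar"]
}

type ScanResult =
  | { status: "not_found" }
  | { status: "safe"; item: string }
  | { status: "unsafe"; item: string; matches: string[] };

function checkFoodItem(item: FoodItem | null, userAllergens: string[]): ScanResult {
  // Step 3: a food item missing from the database yields a "not found" result.
  if (item === null) return { status: "not_found" };

  // Case-insensitive substring match, so "milk" also flags "milk powder".
  const matches = userAllergens.filter((allergen) =>
    item.ingredients.some((ing) => ing.toLowerCase().includes(allergen.toLowerCase()))
  );

  return matches.length > 0
    ? { status: "unsafe", item: item.name, matches }
    : { status: "safe", item: item.name };
}

// Example: a user allergic to milk scans cookies that contain milk powder.
console.log(
  checkFoodItem(
    { name: "Cookies", ingredients: ["wheat flour", "milk powder", "sugar"] },
    ["milk", "peanut"]
  )
); // -> { status: "unsafe", item: "Cookies", matches: ["milk"] }
```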
METHODS

The methodology for the Allertify project is centered on developing a mobile application using a combination of machine learning, image recognition, and agile development practices. The approach involves utilizing Google Teachable Machine (GTM) for image recognition and Thunkable for front-end development, with Node.js and MySQL for backend processes (Prasad et al., 2022).

Machine Learning and Google Teachable Machine

The project utilizes machine learning through Google Teachable Machine (GTM), an accessible platform designed for easy model training. GTM was selected for this project due to its ability to build image recognition models quickly and efficiently without requiring extensive coding knowledge. The platform leverages convolutional neural networks (CNNs) to classify data, which makes it suitable for tasks such as food item identification, a key feature of Allertify.

In this study, GTM was employed to develop the image recognition model, which is crucial for identifying food items based on user-uploaded images. The training process began with the collection of image data, which was uploaded by the developers. These images were then organized into specific categories corresponding to different food items, and GTM used these categories to train the model. The model training followed a standard machine learning pipeline in which the data was divided into two sets: 85% for training and 15% for testing the model's performance. With 23,649 images in total, this split corresponds to approximately 20,102 training samples and 3,547 testing samples. This approach ensured that the model was adequately trained while also allowing for validation during testing. GTM's interface simplifies this process, allowing users to observe in real time how well the model performs on the provided dataset. The 23,649 pictures were separated into seven classes; the number of training samples per class is shown in Figure 2.

Figure 2. Classes and Number of Training Samples

As mentioned, GTM uses 85% of the provided samples for training and 15% for testing, and it then reports the model's accuracy to the researchers. According to this internal testing, the model has 100% accuracy for all classes (Figure 3), with no confusion between classes (Figure 4).

Figure 3. Number of Testing Samples and Accuracy per Class

Figure 4. Confusion Matrix of the Model

GTM leverages TensorFlow.js, a powerful JavaScript library for machine learning, which allows the trained model to be deployed seamlessly within a web environment or integrated into mobile applications like Allertify. TensorFlow.js enables the model to run efficiently on client devices, ensuring smooth real-time image recognition without the need for extensive server-side resources (Prasad et al., 2022). The combination of GTM's CNN architecture and the TensorFlow.js library makes it an ideal choice for implementing image recognition in the Allertify app. Figure 5 outlines the step-by-step process used to train the model in Google Teachable Machine.

Figure 5. Step-by-Step Process Used to Train the Model
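As a concrete illustration of the deployment path described above, the sketch below loads a GTM-exported model in the browser through the @teachablemachine/image wrapper (which runs on TensorFlow.js) and classifies a photo. This is a generic usage sketch, not the project's actual integration code: the model URL is a placeholder, and the 0.7 confidence threshold is an assumed value chosen for illustration.

```typescript
// Sketch: consuming a Google Teachable Machine image model via TensorFlow.js.
// The model URL is a placeholder; the project's real model URL is not published.
import * as tmImage from "@teachablemachine/image";

const MODEL_URL = "https://teachablemachine.withgoogle.com/models/<model-id>/";

async function classifyFood(photo: HTMLImageElement): Promise<string> {
  // GTM exports two artifacts: model.json (topology/weights) and metadata.json (class labels).
  const model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");

  // predict() returns one { className, probability } entry per trained class.
  const predictions = await model.predict(photo);
  const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));

  // An assumed confidence threshold; low-confidence results could map to the
  // app's "Food not Recognized" outcome.
  return best.probability >= 0.7 ? best.className : "Food not Recognized";
}
```

The returned class name could then be used to look up the food item's stored ingredient list before running the allergen comparison shown earlier.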
Thunkable

Thunkable was selected for the Allertify project due to its ease of use and no-code platform, which allowed the development team to focus on building core functionalities like barcode scanning and image recognition without extensive coding knowledge (Thunkable, 2023). Its drag-and-drop interface made it possible to quickly prototype and iterate the app, which helped enhance the user experience and streamline the development process (Thunkable, 2023). In addition, Thunkable's cross-platform compatibility allowed for simultaneous development for both Android and iOS, a key factor for the project's scalability (Thunkable, 2023). Built-in features, such as camera access and barcode scanning, further simplified the development process, making Thunkable a more efficient choice compared to traditional development platforms like Android Studio, which require extensive coding (Thunkable, 2023). Compared to other platforms like Flutter and AppGyver, Thunkable provided the right balance between functionality and ease of use. While other platforms may offer more advanced options, they often require greater technical expertise, which can slow down development and testing (Thunkable, 2023).

Figure 6. Example of Thunkable Code Blocks

Agile Methodology and Scrum

The development process followed the Agile methodology, specifically utilizing the Scrum framework. Agile was chosen due to its iterative and incremental nature, which is well suited for mobile application development. The Scrum method involved planning user stories, constructing a product backlog, and implementing a sprint plan to guide development. The team organized the project into three sprints, each focusing on different aspects of the application (Al-Saqqa, Sawalha, & Nabi, 2020; Cerna et al., 2018; Hassan et al., 2022).

System Design and Architectural Design

The system was designed using a three-tier architecture consisting of the presentation tier (user interface), the application tier (data processing), and the data tier (data storage). Thunkable was used for front-end development to create the user interface, while Node.js was employed to handle data processing between the front end and back end. Data storage was managed using MySQL and Thunkable Data Source databases. The user interface was built to facilitate smooth interaction and data processing (Kendall & Kendall, 2012). Figure 7 provides a visual representation of the application's architecture.

Figure 7. Architectural Design
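To illustrate how the application tier might sit between the Thunkable front end and the MySQL database, the following is a hypothetical Node.js endpoint for barcode lookups. The paper does not publish the actual API, so the Express framework, the mysql2 driver, the food_items table, and its columns are all assumptions made for this sketch.

```typescript
// Hypothetical application-tier sketch: a barcode-lookup endpoint backed by MySQL.
// Express, mysql2, and the food_items schema are assumptions, not the paper's code.
import express from "express";
import mysql from "mysql2/promise";

const app = express();
const pool = mysql.createPool({
  host: "localhost",
  user: "allertify",
  password: process.env.DB_PASSWORD,
  database: "allertify",
});

app.get("/api/food/:barcode", async (req, res) => {
  const [rows] = await pool.query(
    "SELECT name, ingredients FROM food_items WHERE barcode = ?",
    [req.params.barcode]
  );
  const items = rows as { name: string; ingredients: string }[];

  // Mirrors the conceptual framework's "not found" branch for unknown barcodes.
  if (items.length === 0) {
    return res.status(404).json({ status: "not_found" });
  }

  // Ingredients are assumed to be stored as a comma-separated string.
  res.json({
    status: "found",
    name: items[0].name,
    ingredients: items[0].ingredients.split(",").map((s) => s.trim()),
  });
});

app.listen(3000, () => console.log("Allertify API listening on port 3000"));
```

Under these assumptions, the front end would call GET /api/food/<barcode> after a scan and pass the returned ingredient list to the comparison step; an endpoint of this shape is also what the team could exercise with Postman during back-end testing.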
Development and Testing

The development process involved simultaneous front-end and back-end development, utilizing Thunkable for building the application's UI and functionalities. The developers tested the front end using Thunkable's "live test on device" feature, while the back end was tested using Postman to validate the API. The testing phase also included user testing with at least ten users to provide feedback on the app's accuracy, quality, and functionality.

In summary, the methodology involved leveraging machine learning for image recognition using GTM, adopting Agile Scrum practices for project management, and implementing a three-tier system architecture to build a functional and reliable mobile application for allergen identification.

RESULTS AND DISCUSSION

The Allertify project aimed to provide an effective tool for food allergen identification using barcode scanning and image recognition. The testing phase involved evaluating the application's performance across these two core features, assessing their accuracy, efficiency, and user satisfaction.

Overall Test Summary

During the testing phase, a total of 65 scans were conducted to assess the application's performance. Of these scans, 62 were successful, indicating that the system was generally robust in processing user inputs. However, 3 scans were unsuccessful due to unfinished processing and server errors, highlighting areas for improvement in system reliability and server handling. Across all 65 scans, 48 produced correct outputs while 17 produced incorrect results, suggesting that the application's effectiveness varied depending on the scanning method used.

Barcode Scanning

The barcode scanning feature exhibited a high degree of accuracy and reliability. Out of 31 tests performed using this feature, all scans were successful, resulting in a 100% success rate. The average processing time for each barcode scan was approximately 4 seconds, indicating quick and efficient performance. This rapid processing time is crucial in real-world settings where users may need immediate information about food products to make safe dietary choices. All barcode scans returned correct outputs, affirming the feature's capability to accurately identify food items and potential allergens through barcode data. The reliability of barcode scanning in identifying food items is supported by research indicating that barcode systems can provide quick and accurate information retrieval, making them a valuable tool in various industries, including healthcare and food safety (Koutkias et al., 2013; Yang & Ho, 2017). The barcode scanning feature's success suggests that it is a highly reliable method for allergen identification when product barcodes are available and correctly stored in the application's database.

Table 2. Summary of barcode scanning tests

Number of Scans: 31
Successful Scans: 31
Unsuccessful Scans: 0
Average Processing Time (seconds): 4
Correct Outputs: 31
Incorrect Outputs: 0

Image Recognition

The image recognition feature showed more variability in its performance compared to barcode scanning. A total of 34 image recognition tests were conducted, with 31 scans being successful, resulting in a 91% success rate. However, the accuracy of the image recognition feature was lower than that of barcode scanning, with only 17 of the 34 scans (50%) yielding correct outputs. The average processing time for image recognition was 17 seconds, considerably longer than barcode scanning, reflecting the complexity involved in processing and analyzing images.
Notably, the "Food not Recognized" error occurred in 7 instances, primarily due to the limited variety of image samples used during the training phase and the visual similarities among certain food items. These findings indicate that while image recognition has potential as an allergen identification tool, its effectiveness is currently constrained by the limitations of the training dataset and the inherent challenges of accurately identifying visually similar food items. Similar challenges in food image recognition have been noted in other studies, which emphasize the need for large and diverse training datasets to improve model accuracy and robustness (Chen et al., 2020; Herrmann et al., 2017).

Table 3. Summary of image recognition tests

Number of Scans: 34
Successful Scans: 31
Unsuccessful Scans: 3
Average Processing Time (seconds): 17
Correct Outputs: 17
Incorrect Outputs: 17
Food Not Recognized: 7
Server Error: 1
Unfinished Process: 1

User Satisfaction

To assess user perception of the application's utility and user-friendliness, a user satisfaction survey was conducted with nine participants. The survey aimed to evaluate various aspects of the application, including ease of use, navigation, effectiveness, and overall usefulness. The results were positive, with most participants rating the app favorably. Specifically, the average response to the question regarding the app's usefulness for food-allergic individuals was 4.9 out of 5. This high score suggests that users found the application valuable and effective in aiding them with allergen identification. Users also rated the app highly in terms of ease of use and navigation, indicating that the application's design successfully provided a user-friendly experience. Previous research has highlighted the importance of user-friendly interfaces in mobile health applications, as ease of use significantly influences user engagement and the overall effectiveness of health-related tools (Dennison et al., 2013).

CONCLUSION, IMPLICATION, SUGGESTION, AND LIMITATIONS

The Allertify app was developed as a user-friendly mobile solution to assist individuals with food allergies in identifying potential allergens and making safe food choices. The project successfully achieved most of its objectives, with the application featuring two primary functionalities: barcode scanning and image recognition. Users are able to input personalized allergen lists and select their preferred scanning method, after which the app processes the information and determines whether the scanned food item is safe for consumption. The app operates optimally with an internet connection and can be installed through an APK file, as it was developed using the free version of Thunkable and is not yet available on the Google Play Store.

The barcode scanning feature performed reliably, accurately reading product barcodes and providing immediate results. This aligns with existing research showing that barcode scanning is a highly dependable method for identifying food products and ingredients, aiding in allergen detection (Koutkias et al., 2013; Yang & Ho, 2017). However, the image recognition feature, while functional, only partially met its objective, with an accuracy rate of 50%.
This limitation was attributed to the methods used in capturing food images and the visual similarities among food items in the dataset. These challenges are documented in other studies, which highlight the importance of diverse datasets for improving accuracy in image recognition models (Chen et al., 2020; Herrmann et al., 2017). Despite these challenges, the image recognition feature has the potential for significant improvement through further training with more varied food images, different angles, and lighting conditions.

Implications

The Allertify app has broader implications for food safety, as it provides an accessible and practical tool for individuals with food allergies. By integrating both barcode scanning and image recognition, the application offers real-time allergen detection, empowering users to make informed dietary decisions. This project aligns with ongoing efforts to leverage mobile health technologies to enhance public health outcomes, particularly in managing food allergies and preventing allergic reactions (Mandracchia et al., 2020). The combination of these technologies in daily life contributes to a growing body of evidence supporting the use of mobile health tools for improving health and safety, offering a more holistic solution compared to existing apps.

The scalability of Allertify also holds promise for adoption in various settings beyond personal use, such as restaurants, grocery stores, and other food establishments. Barcode scanning can be extended to packaged foods in supermarkets, while image recognition can analyze dishes in restaurants where allergen labeling may not be readily available. This scalability is critical in broadening the impact of Allertify, making it applicable across different food environments.

Suggestions for Future Development

For future improvements, the project suggests several enhancements. First, expanding the variety of food items in the application will make it more comprehensive and interactive. This includes training the model by capturing images from different angles, under varied lighting conditions, and with food items placed on various objects such as plates, tables, and bowls. Enhancing the barcode feature by increasing the number of detectable items would also be beneficial. The importance of diverse and comprehensive training datasets is supported by previous research, indicating that such improvements can significantly enhance the accuracy and robustness of image recognition models (Chen et al., 2020). Second, implementing a PIN or password lock for the application would improve user security, aligning with best practices in mobile application development to ensure data privacy and user protection (Dennison et al., 2013). Third, allowing users to save their data in the cloud by creating an account would enable data transfer between devices, enhancing the user experience. Implementing these recommendations would result in a more robust and sophisticated application.

Limitations

Several limitations were identified during the development of the Allertify application. One significant limitation is the small number of food items available for both barcode scanning and image recognition. For barcode scanning, the database included only six food items, as additional product data was unavailable due to confidentiality concerns.
For image recognition, the model's accuracy was constrained by the limited variety of food items available in the cafeteria. Only 83 dishes were captured and categorized into six classes, limiting the number of food items and products the application could recognize. Similar limitations in food recognition applications have been noted, where limited training datasets result in lower accuracy rates (Herrmann et al., 2017). Another limitation is the requirement for an internet connection to access the application, which can restrict its availability to users in some settings. This dependency highlights the need for offline capabilities in mobile health applications to ensure broader accessibility (Dennison et al., 2013). These limitations serve as areas for improvement and future development, ensuring that future versions of Allertify or similar applications can offer a more comprehensive and reliable solution for food allergen identification and management.

ACKNOWLEDGEMENT

The developers of the Allertify project expressed their heartfelt gratitude to several individuals and institutions that supported them throughout the project's journey. First and foremost, they thanked God for granting them knowledge, intelligence, and strength to complete the study. They also conveyed their sincere gratitude to their capstone advisor for the guidance, time, and valuable feedback provided during the project's development. Appreciation was extended to their college professors for the knowledge and support they offered during the study. Additionally, the developers acknowledged their parents for their unwavering support from the project's inception to its conclusion. Lastly, they expressed their deepest thanks to the supervisors and staff of the Adventist University of the Philippines cafeteria for their patience and understanding during the project's development, noting that the project would not have been possible without their cooperation.

REFERENCES

Bollinger, M. E. (2020). Mobile applications for food allergy: Enhancing safety and knowledge. Journal of Allergy and Clinical Immunology: In Practice, 8(2), 558-565.

Chen, J., Jia, J., & Chen, B. (2020). Food image recognition via integrated learning approach. IEEE Access, 8, 69839-69850.

Damle, A., Bangera, M., Tripathi, S., & Meena, M. (2020). Analysis of barcode scanning and management. SAMRIDDHI: A Journal of Physical Sciences, Engineering and Technology, 12(SUP 1), 90-95.

Dennison, L., Morrison, L., Conway, G., & Yardley, L. (2013). Opportunities and challenges for smartphone applications in supporting health behavior change: Qualitative study. Journal of Medical Internet Research, 15(4), e86.

Elaskari, S., Imran, M., Elaskri, A., & Almasoudi, A. (2021). Using barcode to track student attendance and assets in higher education institutions. Procedia Computer Science, 184, 226-233.
Gaas, M. A. (2021, July 6). Serious facts you need to know about food allergies. National Nutrition Council. https://nnc.gov.ph/regional-offices/mindanao/region-ix-zamboanga-peninsula/5597-serious-facts-you-need-to-know-about-food-allergies

Grayson, M. (2022, April). Allergy facts and figures. Asthma and Allergy Foundation of America. https://aafa.org/allergies/allergy-facts/

Herrmann, M., Wanjek, F., & Patil, K. R. (2017). Towards machine learning in food informatics: A review on the structural learning of food graphs. Frontiers in Artificial Intelligence and Applications, 301, 315-320.

Koutkias, V. G., Lillo-Le Louet, A., & Jaulent, M. C. (2013). Exploiting the potential of barcode technology in healthcare: Current research and future trends. International Journal of Medical Informatics, 82(11), 1116-1128.

Mandracchia, F., Llauradó, E., Tarro, L., Valls, R. M., & Solà, R. (2020, August 8). Mobile phone apps for food allergies or intolerances in app stores: Systematic search and quality assessment using the Mobile App Rating Scale (MARS). JMIR mHealth and uHealth.

Pulsar Platform. (2018, August 15). A brief history of computer vision and AI image recognition. Pulsar. https://www.pulsarplatform.com/blog/2018/brief-history-computer-vision-vertical-ai-image-recognition/

Thanapal, P., Prabhu, J., & Jakhar, M. (2017). A survey on barcode RFID and NFC. IOP Conference Series: Materials Science and Engineering, 263(4), 042049.

Thunkable. (2023). Thunkable: No-code mobile app development platform. https://thunkable.com

Yang, Y., & Ho, C. C. (2017). Barcode technology applications in healthcare: An overview. Health Information Science and Systems, 5(1), 1-8.