JOIV : Int. J. Inform. Visualization, 9 - January 2025, 97-103

INTERNATIONAL JOURNAL ON INFORMATICS VISUALIZATION
journal homepage: www.joiv.org/index.php/joiv

Automatic Feature Extraction of Marble Fleck in Digital Beef Images to Support Decision Preferences

Feriantano Sundang Pranata a,*, Anjjani Mardhika Adif b, Jufriadif Na'am c
a Universitas Negeri Padang, Padang, Indonesia
b Universitas Putra Indonesia YPTK Padang, Padang, Indonesia
c Universitas Nusa Mandiri, Jakarta, Indonesia
Corresponding author: *feriantano@unp.

Abstract: Beef is one of the essential food ingredients for meeting human nutritional needs. These nutrients are fundamental to the growth and development of the human body. The primary nutrient found in beef is protein, and the nutritional value of this protein can be observed from the quality of the beef itself. An indicator of the protein level is the amount of marbling, the white streaks in the meat that form a marble-like pattern in the meat layers. This study aims to process beef images to identify marbling flecks automatically. The data processed are secondary data obtained from Kaggle.com, consisting of 60 images with a resolution of 800 × 800 pixels. This study develops an objective, automated method to produce fast and accurate classification. The processing stages used are preprocessing, segmentation, and extraction; the automatic stage lies in the extraction, through the development of a filtering algorithm. The results of this study identify the marbling fleck ratio of each beef image very well, where every beef image contains marbling flecks. The area of marbling flecks varies depending on the quality of the meat, with the lowest quality having a ratio of 1.00% and the highest 71.39%. This ratio level becomes an indicator in determining the quality of the meat, which is the primary preference in making accurate decisions when selecting meat quality. Thus, this study can serve as an indicator in determining the appropriate meat preference choice.

Keywords: Support decision preferences; filter feature extraction; marble fleck; digital beef images; image processing.

Manuscript received 13 Jun.; revised 29 Jul.; accepted 11 Aug. Date of publication 31 Jan. 2025. International Journal on Informatics Visualization is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

I. INTRODUCTION

Beef plays a crucial role in meeting human nutritional needs due to its rich nutrient content. The primary nutrient is protein, which is essential for body growth and development.
The presence of high-quality protein in beef provides essential amino acids needed to build and repair body tissues and serves as a significant energy source to maintain optimal body functions. Regular consumption of beef is therefore an important part of maintaining nutritional balance and supporting excellent health. The demand for beef in Indonesia has experienced a significant increase. Beef with a tender texture, enticing flavor, and essential health benefits has become an important preference. Beef production in Indonesia continues to rise year after year, from 487,802.21 tons in 2021 to 498,923.14 tons in 2022 and 503,506.8 tons in 2023, with ongoing growth. This continuously increasing demand presents a challenge in determining the quality of beef.

The assessment of beef quality and the identification of its composition are still frequently conducted manually by human visual observation. This process involves direct comparison between the observed meat and standard images for each class, conducted by an expert whose assessment tends to be subjective. Meat preference assessment is based on two main factors, namely price and quality. Meat quality is categorized based on four main characteristics: marbling, muscle color, fat color, and meat density. The marbling score plays a major role in determining meat quality.

Marbling refers to the white streaks and spots of fat found within the muscle fibers of the meat. Marbling imparts a distinctive flavor to the meat and enhances its quality: the higher the marbling content, the more tender, flavorful, and higher quality the meat. An illustration of marbling is presented in Fig. 1.

Fig. 1 Marbling flecks

Manual identification of marbling often takes time and is less accurate due to the limitations of human visual capabilities. Thus, innovation is needed to perform accurate and efficient meat composition analysis. Image processing provides a solution that can be used to identify marbling in beef images. Gray Level Co-occurrence Matrix (GLCM) feature extraction combined with the K-Nearest Neighbor (KNN) algorithm can identify beef quality with an accuracy of 91.7%. Identification using Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and Decision Tree algorithms achieves an accuracy of 42%. A new approach for classifying beef images to distinguish spoiled from healthy meat using a Generative Adversarial Network (GAN) can be applied with a limited number of images. The implementation of ResNet-50 as a classifier can achieve an accuracy of 96.03%. A Deep Neural Network (DNN) model for detecting minced beef adulteration increases accuracy to 99%. The combination of color and texture features in classifying beef and pork using the Local Optimal-Oriented Pattern (LOOP) can improve accuracy to 99.16%. The combination of color features, namely Red Green Blue (RGB) and HSV, can identify pork and beef with a test accuracy of 66%. Automatic classification of beef freshness using Learning Vector Quantization (LVQ) can achieve an accuracy of 92%. A new Convolutional Neural Network (CNN) model, HarNet, can classify fresh and spoiled beef with an accuracy of 80%, while distinguishing fresh beef from beef that has been frozen and thawed again reaches an accuracy of 88.89%. Image prediction using isosbestic wavelength reduction methods on ground meat types has also been applied for nonlinear feature extraction and classification. The approach of determining the freshness level of salmon, tuna, and beef by analyzing spectral responses achieves freshness prediction rates of 85% for salmon, 88% for tuna, and 92% for beef. Predicting beef quality by identifying muscle color scores using a DNN performs optimally with an accuracy of 90.0%. A CNN-based model classifying beef components from RGB images achieves the best performance, and an artificial-intelligence-based approach for identifying the origin of beef by differentiating fat marbling patterns results in an accuracy of 95%.

Therefore, this study proposes a new method to automatically identify the area of fleck marbling in beef images and thereby accurately determine the quality of beef based on publicly accessible data. This method has the potential to serve as a reference preference in identifying and making more accurate decisions when selecting quality beef.

II. MATERIALS AND METHOD

This research involves several stages in identifying marbling patterns in beef images. Each stage is interconnected, and each produces a new image that becomes the input for the next stage. The initial step is preprocessing, which includes converting RGB to Grayscale, Binary Thresholding, Image Negative, Morphology Operation, and the ROI Region. The second stage is segmentation, involving converting RGB to Grayscale and Binary Thresholding. The final stage is extraction, which consists of Region Properties and Filtering Object. All process stages were tested using MATLAB software. The sequence of process stages is presented in Fig. 2.

Fig. 2 Developed automatic model: Input Image (RGB Image) → Convert RGB to Grayscale → Binary Thresholding → Image Negative → Morphology Operation → ROI Region → Convert RGB to Grayscale → Binary Thresholding → Region Properties → Filtering Object → Output Image (Object Detection)
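As an illustration of how the stage sequence in Fig. 2 could be wired together, the following MATLAB sketch chains standard Image Processing Toolbox functions in the same order. It is a minimal sketch under stated assumptions, not the authors' implementation: the file name, the Otsu-based thresholds, the disk radius of the structuring element, and the 30-pixel area cut-off are illustrative values only.

```matlab
% Minimal sketch of the Fig. 2 pipeline (assumed parameters, not the paper's).
rgbImg  = imread('beef_01.jpg');             % input image, 800 x 800, 24-bit RGB

% --- Preprocessing ---
grayImg = rgb2gray(rgbImg);                  % convert RGB to grayscale
bwImg   = imbinarize(grayImg);               % binary thresholding (Otsu by default)
negImg  = imcomplement(bwImg);               % image negative
filled  = imfill(negImg, 'holes');           % morphology: hole filling
eroded  = imerode(filled, strel('disk', 5)); % morphology: erosion
roiImg  = rgbImg;                            % ROI region: keep only the beef area
roiImg(repmat(~eroded, 1, 1, 3)) = 0;

% --- Segmentation ---
roiGray = rgb2gray(roiImg);                  % convert the ROI back to grayscale
roiBW   = imbinarize(roiGray, 0.7);          % binary thresholding inside the ROI

% --- Extraction ---
stats   = regionprops(roiBW, 'Area');        % region properties of candidate flecks
flecks  = bwareaopen(roiBW, 30);             % filtering: drop objects below 30 px
imshow(labeloverlay(rgbImg, flecks));        % detected marbling flecks
```

The per-stage sketches in the following subsections expand on the individual formulas behind these calls.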
Dataset

The input image data in this research are secondary data obtained from Kaggle. These data have already been collected and analyzed. A total of 60 images with a resolution of 800 × 800 pixels were processed. Each image has a depth of 24-bit RGB in JPG format. The dataset is divided into three quality classes, high, medium, and low, with 20 images in each class. Sample beef images are presented in Fig. 3.

Fig. 3 Sample data from each meat quality: (a) Low quality, (b) Medium quality, (c) High quality

Convert RGB to Grayscale

This stage converts the RGB image to grayscale. The conversion combines the three color channels, red, green, and blue, into a single grayscale channel representing the gray intensity of each pixel. This stage calculates the average of the color intensities of the RGB channels. The resulting grayscale image has intensities ranging from 0 (black) to 255 (white), representing each pixel's brightness level. The conversion aims to reduce computational complexity and enhance the efficiency of image analysis when color information is not required. The equation used in this process is presented in Formula (1).

gs(x, y) = (Rd(x, y) + Gr(x, y) + Bl(x, y)) / 3    (1)

where gs is the gray value; Rd, Gr, and Bl are the intensity values of the red, green, and blue channels; and the value 3 is the denominator in the division operation used to obtain the average intensity of the three colors in the RGB image.
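A direct MATLAB rendering of Formula (1) is sketched below. The file name is an illustrative assumption; the channel averaging follows the formula as written.

```matlab
% Sketch of Formula (1): channel-average grayscale conversion.
% gs(x, y) = (Rd(x, y) + Gr(x, y) + Bl(x, y)) / 3
rgbImg = imread('beef_01.jpg');       % illustrative file name
Rd = double(rgbImg(:, :, 1));         % red channel
Gr = double(rgbImg(:, :, 2));         % green channel
Bl = double(rgbImg(:, :, 3));         % blue channel
gs = uint8((Rd + Gr + Bl) / 3);       % average intensity, 0 (black) to 255 (white)
imshow(gs);
```

For comparison, MATLAB's built-in rgb2gray uses a weighted luminance combination (0.2989 R + 0.5870 G + 0.1140 B) rather than the plain average, so either variant could be substituted depending on the conversion intended by the pipeline.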
Binary Thresholding

This process converts the grayscale image into a binary image using a thresholding algorithm. A binary image has only two brightness levels, black and white. The thresholding algorithm uses a threshold value to separate the object, or foreground, from the background so that they do not overlap. Pixel values that exceed or fall below the set threshold are grouped into the corresponding area. Formula (2) is the equation used to convert a grayscale image to a binary image with the thresholding algorithm.

th(x, y) = 1 if vp(x, y) ≥ ab, and th(x, y) = 0 if vp(x, y) < ab    (2)

where ab is the threshold value, vp(x, y) is the pixel value, and th(x, y) is the pixel value resulting after the thresholding.

Image Negative

This stage produces an image with inverted pixel values. The process changes the intensity value of each pixel by subtracting the original intensity value from the maximum intensity value. Because each pixel of the binary image has a color depth of only 1 bit, the pixel intensity values are limited to two possible values: 0 (black) and 1 (white). This step therefore changes white pixels to black and black pixels to white. The equation used for this step is given in Formula (3).

Ne(x, y) = Vmax - Vr(x, y)    (3)

where Ne(x, y) is the result of the inversion process, Vmax represents the maximum intensity value (equal to 1 for a binary image), and Vr(x, y) represents the original intensity value.
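The two steps described by Formulas (2) and (3) can be sketched in MATLAB as follows. The threshold ab is derived here with Otsu's method (graythresh) purely for illustration, since the specific threshold value is not stated; the file name is likewise an assumption.

```matlab
% Sketch of Formulas (2) and (3): binary thresholding followed by inversion.
gs = rgb2gray(imread('beef_01.jpg'));  % grayscale input (illustrative file name)
ab = graythresh(gs) * 255;             % threshold value on the 0-255 scale (Otsu)
th = gs >= ab;                         % Formula (2): 1 if vp >= ab, otherwise 0
Ne = 1 - th;                           % Formula (3): Ne = Vmax - Vr with Vmax = 1
imshowpair(th, Ne, 'montage');         % thresholded image next to its negative
```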
Morphology Operation

The morphology operation stage consists of two steps: hole filling and image erosion. Hole filling fills or covers small holes or unwanted areas in the binary image to produce a continuous image without disconnected objects. This process uses the mathematical morphology concept of dilation, in which a structural element or kernel expands the object area in the binary image; the dilation operation is repeated until all holes in the object are filled. Next is the image erosion process, which reduces the edges of objects in the image using the mathematical morphology erosion operation. Erosion removes pixels from the edge of an object by comparing each pixel in the image with the structural element or kernel. The purpose of this process is to reduce the size of objects, separate overlapping objects, and clarify object boundaries. The dilation equation used for hole filling is given in Formula (4), and the equation used for image erosion is given in Formula (5).

A ⊕ B = { z | (B)z ∩ A ≠ ∅ }    (4)

A ⊖ B = { z | (B)z ⊆ A }    (5)

where A is the original binary image being processed, B is the structural element or kernel used for the operation, (B)z is the translation of kernel B to position z, ∩ represents the logical AND (intersection) operation, ∅ represents the empty set, ⊆ represents set inclusion, and Dilation(A, B) = A ⊕ B is the result of the dilation operation applied to image A using kernel B.

Region Properties

This stage labels the objects using the regionprops function. Regionprops is used to measure various characteristics of each labeled region in the label matrix. The regionprops function only identifies objects marked with white pixels, i.e., pixels with a value of 1, as the foreground, whereas pixels with a value of 0 (black) represent the background. Each object measured by regionprops is approximated by an ellipse, so every object has a major axis length and a minor axis length: the farthest distance from the center of mass to the farthest pixel determines the major axis length, and the closest distance to the edge pixels determines the minor axis length.

Filtering Object

The object filtering stage extracts the large and small objects needed, based on a threshold value. This stage uses the binary image connected component analysis function to identify and group connected pixels into separate objects in the binary image. It is also supported by regionprops, which calculates the properties of each identified object and measures its area. Objects with an area above the pixel threshold are considered large objects, while those below the threshold are considered small objects. Iteration is then performed over the identified objects, and every pixel belonging to an object below the threshold is set to bit 0 in the original image, so that small objects are removed.
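A possible MATLAB sketch of the Region Properties and Filtering Object stages is given below, using bwconncomp for the connected component analysis and regionprops for the per-object measurements. The input file name, the 0.7 threshold, and the 30-pixel area cut-off are assumptions made only for illustration.

```matlab
% Sketch of the Region Properties and Filtering Object stages.
roiBW   = imbinarize(rgb2gray(imread('beef_roi_01.jpg')), 0.7);
CC      = bwconncomp(roiBW);                  % connected component analysis
stats   = regionprops(CC, 'Area', ...
                      'MajorAxisLength', 'MinorAxisLength');
minArea = 30;                                 % pixel threshold for "large" objects
keep    = [stats.Area] >= minArea;            % large objects to retain
flecks  = false(size(roiBW));
flecks(cat(1, CC.PixelIdxList{keep})) = true; % pixels of small objects stay 0
numFlecks   = sum(keep);                      % number of detected marbling flecks
fleckPixels = nnz(flecks);                    % total fleck area in pixels
```

The same effect can be obtained with bwareaopen(roiBW, minArea), which removes all connected components smaller than the given area in a single call.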
III. RESULTS AND DISCUSSION

This paper presents three sample images randomly selected from the 60 processed beef images, covering low, medium, and high quality. The random selection is conducted to ensure proper representation of the entire dataset. The sample beef images are shown in Fig. 4, where Fig. 4(a), 4(b), and 4(c) correspond to low, medium, and high quality, respectively.

Fig. 4 Sample data from each meat quality: (a) Low quality, (b) Medium quality, (c) High quality

The images presented in Fig. 4 are still RGB images, so each pixel is represented by three color components: red, green, and blue. Each image is converted into a grayscale image by calculating the brightness level of the pixels without considering the color information. This process produces a new image whose pixels have a single channel, the brightness level. The result of this process is presented in Fig. 5.

Fig. 5 Results of the RGB image conversion process to grayscale

The next stage is the binary thresholding process on the image in Fig. 5. This process aims to separate the object from the background by transforming all the pixels in the image into two binary values, black (0) and white (1), based on the threshold. Pixels with values exceeding the threshold are identified as prominent object parts and are displayed as white pixels, while pixels with values below the threshold are identified as background and displayed as black pixels. The result of this process is presented in Fig. 6, where the object is separated as the foreground from the background.

Fig. 6 Results of the binary thresholding process

Next is the negative image process, in which the image resulting from binary thresholding is subtracted from the maximum value in the pixel intensity range. This produces an inverted transformation of the original image: areas that were originally dark become bright and vice versa, thus enhancing the contrast between the object and the background. The result of this process is presented in Fig. 7.

Fig. 7 The result of the negative image process

The next stage involves morphological operations aimed at removing imperfections and ensuring the unity of the objects present in the image. Hole filling is applied first and is followed by image erosion, so the image with filled holes shrinks at the object boundaries. This helps smooth the contours of the objects and reduce imperfections that emerged during previous stages, improving the accuracy and clarity of the visual representation and preparing the image for the analysis stage. The results of this process are presented in Fig. 8 and Fig. 9.

Fig. 8 Results of hole filling process

Fig. 9 Results of image erosion

The result of the image erosion process is the object used to create the Region of Interest (ROI) in the input image. This ROI is the specific area within the image that is analyzed further. The result of this process is displayed in Fig. 10.

Fig. 10 ROI process results

The obtained ROI is converted back to a grayscale image. This conversion is necessary because the ROI result is still an RGB image; after the conversion, each pixel has one color channel, namely the brightness level, and unnecessary color information is discarded. The result of this process is shown in Fig. 11.

Fig. 11 Results of the RGB image conversion process to grayscale

Next, binary thresholding is performed to separate the pixels in the ROI image into two binary values, black (0) and white (1), according to the threshold. The grayscale ROI processed with binary thresholding is used to extract features or objects by identifying pixels with intensities exceeding the threshold. This highlights important areas or features in the image for analysis by separating the object more clearly from the background. The result of this binary thresholding process is presented in Fig. 12.

Fig. 12 Results of the binary thresholding process

Next is the extraction stage, which filters objects based on size. The predetermined threshold value separates objects by size: objects of significant size are retained, while small objects are identified as noise, a disturbance in the image that must be removed. The threshold value ensures that only relevant and important objects are kept. The result of this process can be seen in Fig. 13.

Fig. 13 Results of the filtering process

The final step of this model is the object identification process using the regionprops function and labeling each detected object. Each detected object is changed to a blue color. These objects are identified as marbling flecks on the beef image. The result of this step is presented in Fig. 14.

Fig. 14 Results of the marbling detection process in beef images

The image presented in Fig. 13 shows that the objects in the image have been identified very well. The fleck marbling objects in this image can be recognized, and the recognized fleck marbling can be counted in pixels. The pixel count of the segmented beef region gives the area of the meat, while the pixel count of the detected objects gives the area of the fleck marbling. From these two values, the ratio of the fleck marbling area to the meat area can be determined using the ratio equation presented in Formula (6).

Ratio = (Fleck marbling area / Beef area) × 100%    (6)

The ratio is the comparison of the fleck marbling area to the beef area, multiplied by 100%: the larger the fleck marbling area relative to the beef area, the higher the ratio, and a higher ratio indicates a higher quality of beef. The calculation results for all images are presented in Table 1.

The image processing results presented in Table 1 show that every piece of meat contains fleck marbling. Fleck marbling affects the texture and taste of the meat. The ratio percentages in this study range from 1.00% at the smallest to 71.39% at the largest. The larger the percentage value, the more delicious and juicy the marbled meat, so meat with a high fleck marbling percentage indicates high quality and can be a primary preference for meat industry stakeholders in making more accurate decisions based on meat quality.
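Assuming binary masks beefMask (the segmented beef region) and fleckMask (the filtered marbling objects) produced by the previous stages, the ratio of Formula (6) and the blue labeling shown in Fig. 14 can be sketched as follows; the variable names, colormap, and transparency value are illustrative rather than the authors' settings.

```matlab
% Sketch of Formula (6) and the final labeling step.
beefArea  = nnz(beefMask);               % beef area in pixels
fleckArea = nnz(fleckMask);              % marbling fleck area in pixels
ratio     = fleckArea / beefArea * 100;  % Formula (6)
fprintf('Marbling ratio: %.2f %%\n', ratio);

% Mark every detected fleck in blue on the input image, as in Fig. 14.
overlay = labeloverlay(rgbImg, fleckMask, ...
                       'Colormap', [0 0 1], 'Transparency', 0.4);
imshow(overlay);
```

As a worked example of Formula (6), the areas of Image 01 in Table I give 3,265 / 236,420 × 100 ≈ 1.38%.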
TABLE I
COMPARATIVE RATIO OF FLECK MARBLING WITH BEEF

Image Name    Marbling Fleck Area (pixels)    Beef Area (pixels)    Ratio (%)
Image 01      3,265                           236,420               1.38
Image 02      2,196                           202,858               1.08
Image 03      4,713                           260,530               1.81
Image 04      3,185                           281,258               1.13
Image 05      3,495                           264,985               1.32
Image 06      5,052                           345,120               1.46
Image 07      4,463                           309,493               1.44
Image 08      2,598                           258,847               1.00
Image 09      4,879                           242,079               2.02
Image 10      3,867                           251,383               1.54
Image 11      4,284                           196,070               2.18
Image 12      2,300                           202,009               1.14
Image 13      4,495                           225,203               2.00
Image 14      3,657                           233,303               1.57
Image 15      3,713                           299,780               1.24
Image 16      3,431                           232,979               1.47
Image 17      3,283                           241,545               1.36
Image 18      2,421                           155,310               1.56
Image 19      3,050                           273,957               1.11
Image 20      2,069                           103,729               1.99
Image 21      13,919                          173,273               8.03
Image 22      14,616                          295,853               4.94
Image 23      11,451                          199,685               5.73
Image 24      11,980                          257,933               4.64
Image 25      12,439                          228,914               5.43
Image 26      17,403                          266,348               6.53
Image 27      20,428                          235,827               8.66
Image 28      16,382                          270,266               6.06
Image 29      12,783                          182,578               7.00
Image 30      20,663                          270,515               7.64
Image 31      11,935                          178,393               6.69
Image 32      13,925                          222,355               6.26
Image 33      11,361                          195,449               5.81
Image 34      22,028                          334,678               6.58
Image 35      12,129                          191,814               6.32
Image 36      12,755                          193,580               6.59
Image 37      17,497                          210,164               8.33
Image 38      10,607                          182,793               5.80
Image 39      11,416                          223,269               5.11
Image 40      12,432                          244,787               5.08
Image 41      114,634                         207,697               55.19
Image 42      70,469                          153,830               45.81
Image 43      89,444                          187,644               47.67
Image 44      124,006                         263,511               47.06
Image 45      111,452                         181,977               61.25
Image 46      233,839                         404,974               57.74
Image 47      152,161                         269,780               56.40
Image 48      98,732                          203,536               48.51
Image 49      107,980                         249,150               43.34
Image 50      123,425                         234,938               52.54
Image 51      83,652                          178,849               46.77
Image 52      98,939                          208,373               47.48
Image 53      137,971                         232,901               59.24
Image 54      212,884                         298,179               71.39
Image 55      99,820                          170,086               58.69
Image 56      97,029                          188,676               51.43
Image 57      95,672                          180,496               53.01
Image 58      118,408                         209,880               56.42
Image 59      111,414                         206,422               53.97
Image 60      79,679                          161,011               49.49

IV. CONCLUSION

From the beef image processing results, it can be concluded that fleck marbling can be identified very well. The identification results are shown by the area of fleck marbling, which can be accurately measured in pixels. The ratio of fleck marbling to meat area is an indicator for determining the quality of meat and becomes a preference for making quick and precise decisions. A high marbling ratio is categorized as premium meat with higher market value, while a low ratio indicates lower-quality meat. This image analysis not only enhances the efficiency of beef quality assessment but also supports more accurate pricing and strategy determination, improving service and increasing consumer satisfaction.

REFERENCES