J. Indones. Math. Soc. Vol. , No. , pp. 1-20.

Tucker3 Tensor Decomposition for the Standardized Residual Hypermatrix on Three-Way Correspondence Analysis

Karunia Eka Lestari^1*, Mokhammad Ridwan Yudhanegara^2, Edwin Setiawan Nugraha^3, and Sisilia Sylviani^4

^1,2 Department of Mathematics Education, University of Singaperbangsa Karawang, Indonesia; karunia@staff. , mridwan.yudhanegara@staff.
^3 Study Program of Actuarial Science, President University, Indonesia; nugraha@president.
^4 Department of Mathematics, Padjadjaran University, Indonesia; sylviani@unpad.

Abstract. This study investigates the theoretical and practical mathematical aspects of the Tucker3 tensor decomposition from the three-way correspondence analysis point of view. Since the standardized residual hypermatrix represents the association of the three categorical variables, this study focuses on (i) the Tucker3 tensor decomposition of the standardized residual hypermatrix, (ii) some mathematical properties of the Tucker3 tensor decomposition, and (iii) constructing the correspondence plot via the Tucker3 tensor decomposition. The mathematical results are presented in lemmas, theorems, and algorithms, while a practical result is exhibited at the end of the discussion.

Key words and Phrases: Tucker3 decomposition, tensor and hypermatrix, three-way correspondence analysis, categorical data analysis

* Corresponding author
2020 Mathematics Subject Classification: 62H25
Received: 15-08-2023, accepted: 18-01-2025.

INTRODUCTION

Statistical and graphical analysis of associations between categorical variables has a long and interesting history. The contributions of several statisticians, including Karl Pearson and Ronald Aylmer Fisher, have left a trail on how the analysis of categorical data is carried out, including its graphical representation. These experts have produced various statistical techniques to measure, model, examine, and visualize how the variables are related.
Some of these statistical techniques involve contingency tables of categorical data in their analysis. There are various techniques for measuring associations in categorical data, as proposed in the literature. These techniques generally involve calculating statistical measures that quantify the magnitude of the interrelation among variables. On the other hand, a graphical representation of the data can help users better understand the nature of this association visually. Correspondence analysis (CA) is most often used to explore associations between categorical variables both statistically and graphically. CA provides intuitive visual observation of the associations between variables at the category level, and it is flexible enough to be used on large data matrices since it only requires a contingency table as input. If a contingency table consists of two categorical variables, the technique is well known as two-way correspondence analysis (CA2) or simple correspondence analysis (SCA). The data in a two-way contingency table consisting of I rows and J columns can be regarded as a data matrix N of size I×J. By performing a statistical procedure that involves matrix operations, one obtains: (i) row or column profiles, which represent the marginal distribution of each row or column; (ii) principal coordinates, which are linear combinations of the eigenvectors and the centered row or column profiles; and (iii) the CA plot resulting from mapping the principal coordinates onto a d-dimensional display. However, the CA plots that can be presented visually are limited to d = 1, 2, 3. In practice, one can display the CA plot in one, two, or three dimensions according to the needs of the analysis. Nevertheless, this choice affects the level of information absorbed through the plot. Thus, the main problem in CA is how the plot can represent the rows or columns of the contingency table in a low-dimensional space (dimension reduction)
while optimally absorbing as much information about their association structure as possible. Beh & Lombardo focused on solving dimension-reduction problems using matrix decomposition, since the essence of all decomposition methods is to reduce dimensionality. Several decomposition methods were developed for a two-way contingency table consisting of I row categories and J column categories, including eigendecomposition, singular value decomposition, bivariate moment decomposition, and hybrid decomposition. In the CA context, the decomposition is performed on the standardized residual matrix S, which represents the linear associations between each row and column category. In line with today's advances in technology and information, various data can be accessed easily by anyone at any time. This has also influenced CA research, and problems become even more complex when the data consist of three categorical variables. When such a problem is solved using CA2, it requires an extensive matrix calculation process and raises two significant questions: (i) how can each separate plot be presented comprehensively in a single overall plot? and (ii) is the absorption level of information preserved after merging into a single overall plot? Another alternative for solving such a problem is multiple correspondence analysis (MCA). MCA's main idea is to group data through coding techniques so that they can be arranged as a two-way contingency table; the CA2 procedure can then be applied. In the context of MCA, "coding techniques" refer to the manners in which categorical data are prepared, restructured, and represented as a numerical matrix so that they can be analyzed mathematically. Since MCA involves the analysis of associations among categorical variables, coding techniques are fundamental for transforming qualitative data into a form that allows mathematical operations. One of the coding techniques commonly used in MCA is the Burt matrix.
Nevertheless, the use of MCA also has several limitations: (i) no information is obtained regarding interactions between multiple variables, (ii) the absorption level of information in lower dimensions is not optimal, and (iii) it is impractical for big data. Therefore, Beh & Lombardo proposed solving problems related to three categorical variables using three-way correspondence analysis (CA3), since it can merge the plot display of each categorical variable into a single plot with the same dimensions and more optimal information absorption. Analogously to CA2, to reveal the association between three categorical variables graphically, the determination of the principal coordinates and the search for a low-dimensional space in CA3 are also solved by decomposition. However, the decomposition methods of CA2 are no longer appropriate when applied to CA3. As a consequence, particular attention will be paid to the three-way generalization of the singular value decomposition, specifically the Tucker3 decomposition, which De Lathauwer et al. have called the higher-order SVD (HOSVD). The decomposition in CA3 is undertaken on the standardized residual hypermatrix S, which reflects the association between the three observed categorical variables. In the process, this method involves algebraic calculations such as tensor operations and hypermatrix properties. This research extends the field of CA3 by offering a structured tensor-based approach that simplifies the analysis, improves interpretability, and allows more effective visualization of three-way categorical data. Such an approach harnesses the Tucker3 decomposition to address persistent challenges in standardized residual analysis within correspondence analysis, potentially expanding the applicability of CA3 across multiple disciplines. Thus, this study will focus on (i) the Tucker3 tensor decomposition of S, (ii) some mathematical properties of the Tucker3 tensor decomposition, and (iii)
constructing the CA3 plot via the Tucker3 tensor decomposition. These three core problems are central to this research and yield theoretical and practical mathematical novelty.

THEORETICAL FRAMEWORK

Hypermatrix and related tensor operations. The hypermatrix is a generalization of a matrix to order $n_1 \times n_2 \times \cdots \times n_d$, where $n_1, n_2, \ldots, n_d \in \mathbb{N}$. In data analysis, a hypermatrix can be viewed as a representation of an order-$d$ tensor for $d > 2$. Formally, a hypermatrix is defined as follows.

Definition 2.1. Let $n_1, \ldots, n_d \in \mathbb{N}$. A function $f : [n_1] \times \cdots \times [n_d] \to F$ is an order-$d$ hypermatrix, or $d$-hypermatrix. A hypermatrix is denoted by a boldface Euler letter, for example $\mathbf{S}$. The element of an order-$d$ hypermatrix $\mathbf{S}$ is denoted $s_{m_1,\ldots,m_d}$, representing the value of $f$ at $(m_1,\ldots,m_d)$ with $m_1 \in [n_1], \ldots, m_d \in [n_d]$. Thus a $d$-hypermatrix can be written $\mathbf{S} = [s_{m_1,\ldots,m_d}]_{m_1,\ldots,m_d=1}^{n_1,\ldots,n_d}$. The set of all $d$-hypermatrices with domain $[n_1]\times\cdots\times[n_d]$ is denoted $F^{n_1\times\cdots\times n_d}$. If $n_1 = n_2 = \cdots = n_d = n$, then the hypermatrix $\mathbf{S}\in F^{n\times\cdots\times n}$ is called hypercubical, or cubical of dimension $n$.

A hypermatrix $\mathbf{S} = [s_{m_1,\ldots,m_d}]$ can be transformed into a matrix $S_{(d)}$. This process is called matricization; in some references it is also called unfolding or flattening. The rearrangement of the elements of $\mathbf{S}$ into the columns of the matrix $S_{(d)}$ is governed by the mode-$d$ fibers. Figure 1 illustrates the matricization process of $\mathbf{S}\in\mathbb{R}^{2\times 2\times 2}$ for generating $S_{(1)}$, $S_{(2)}$, and $S_{(3)}$. Moreover, a mode-1 fiber is called a column fiber, a mode-2 fiber a row fiber, and a mode-3 fiber a tube fiber.

Figure 1. Illustration of matricization for a hypermatrix $\mathbf{S}\in\mathbb{R}^{2\times2\times2}$: the column, row, or tube fibers are aligned to form the $S_{(1)}$, $S_{(2)}$, or $S_{(3)}$ matrices.

Several references rearrange the columns differently for each mode-$d$ unfolding. Basically, the permutation of particular columns is not important as long as it is used consistently in the related calculations.
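As an illustration, the matricization just described can be carried out in a few lines of NumPy. The helper below is a sketch, not the authors' code; it fixes one particular column ordering (the earlier remaining index varying fastest), which is legitimate since, as noted above, any consistent column permutation will do.

```python
import numpy as np

def unfold(S, mode):
    """Mode-d matricization S_(d): the mode-d fibers of S become the columns.
    Columns are ordered with the earlier remaining index varying fastest
    (one of several equally valid conventions)."""
    return np.reshape(np.moveaxis(S, mode, 0), (S.shape[mode], -1), order='F')

# a 2x2x2 hypermatrix as in Figure 1, with elements 0..7
S = np.arange(8).reshape(2, 2, 2)
S1, S2, S3 = unfold(S, 0), unfold(S, 1), unfold(S, 2)
print(S1)  # mode-1 unfolding: [[0 2 1 3], [4 6 5 7]]
```

Each $S_{(d)}$ here is of size 2×4; folding back is just the inverse reshape and axis move.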
The structure, operations, and properties inherent in a hypermatrix can be found in the literature. The following discussion focuses on the structure and operations of the hypermatrix required by the Tucker3 decomposition in CA3. The tensor multiplication, or $d$-mode product, of $\mathbf{S}\in\mathbb{R}^{n_1\times\cdots\times n_D}$ is defined as follows.

Definition 2.2. Suppose $\mathbf{S}\in\mathbb{R}^{n_1\times\cdots\times n_D}$ and $U\in\mathbb{R}^{J\times n_d}$. The $d$-mode product of the hypermatrix $\mathbf{S}$ with the matrix $U$ is denoted by $\mathbf{S}\times_d U$ and is defined as
$$(\mathbf{S}\times_d U)_{m_1\cdots m_{d-1}\,j\,m_{d+1}\cdots m_D}=\sum_{m_d=1}^{n_d} s_{m_1\cdots m_d\cdots m_D}\,u_{j m_d},$$
where $n_1\times\cdots\times n_{d-1}\times J\times n_{d+1}\times\cdots\times n_D$ is the size of $\mathbf{S}\times_d U$.

Definition 2.2 asserts that each mode-$d$ fiber is multiplied by the matrix $U$. Kolda & Bader apply this idea to $S_{(d)}$, which yields the equivalence
$$\mathbf{N}=\mathbf{S}\times_d U \iff N_{(d)}=U S_{(d)}.$$
If $V\in\mathbb{R}^{Q\times J}$ for $Q\in\mathbb{N}$, then
$$\mathbf{S}\times_n U\times_n V=\mathbf{S}\times_n (VU),\qquad \mathbf{S}\times_m U\times_n V=\mathbf{S}\times_n V\times_m U\ \text{ for } m\neq n.$$

The tensor product of matrices $U$ and $V$ is defined as the following Kronecker product.

Definition 2.3. Given $U\in\mathbb{R}^{P\times I}$ and $V\in\mathbb{R}^{Q\times J}$ where $I, J, P, Q\in\mathbb{N}$, the Kronecker product of $U$ and $V$ is denoted by $U\otimes V\in\mathbb{R}^{PQ\times IJ}$ and is defined as
$$U\otimes V=\begin{pmatrix} u_{11}V & u_{12}V & \cdots & u_{1I}V\\ u_{21}V & u_{22}V & \cdots & u_{2I}V\\ \vdots & & & \vdots\\ u_{P1}V & u_{P2}V & \cdots & u_{PI}V \end{pmatrix}.$$

Recognition and notation of three-way correspondence analysis. In the late 1990s, CA3 was first introduced by Carlier and Kroonenberg as a generalization of CA2. CA3 has been proposed to analyze the I×J×K contingency table using a three-mode principal component (Tucker3) model or a parallel factor analysis (PARAFAC) model, and has been studied further in the literature. CA3 is commonly used for categorical data or categorized numerical data. In 2017, Lombardo & Beh conducted a CA3 for ordinal-nominal variables. CA3 can also be used for nonlinear associations or as a data grouping and reduction technique. D'Ambra et al. used CA3 to reduce dimensions in an ordinal three-way contingency table. The CA3 input is a three-way contingency table consisting of I row, J column, and K tube categories. The rows, columns, and tubes of such a table represent the categorical variables X, Y, and Z, respectively.
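Before moving on, the $d$-mode product of Definition 2.2 and its commutation properties can be verified numerically. The following sketch is illustrative only; it fixes one consistent unfolding convention and implements $\mathbf{S}\times_d U$ via the matrix form $N_{(d)}=US_{(d)}$ followed by refolding.

```python
import numpy as np

def unfold(S, mode):
    # mode-d matricization: mode-d fibers become columns (fixed, consistent ordering)
    return np.reshape(np.moveaxis(S, mode, 0), (S.shape[mode], -1), order='F')

def mode_product(S, U, mode):
    """d-mode product S x_d U: every mode-d fiber of S is multiplied by U,
    i.e. the matrix form N_(d) = U S_(d), folded back into a hypermatrix."""
    shape = list(S.shape)
    shape[mode] = U.shape[0]
    M = U @ unfold(S, mode)
    rest = [shape[i] for i in range(len(shape)) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest, order='F'), 0, mode)

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 4, 5))
U = rng.standard_normal((2, 3))   # multiplies the first mode (size 3)
V = rng.standard_normal((6, 4))   # multiplies the second mode (size 4)

# products in distinct modes commute: S x_1 U x_2 V = S x_2 V x_1 U
A = mode_product(mode_product(S, U, 0), V, 1)
B = mode_product(mode_product(S, V, 1), U, 0)
print(A.shape, np.allclose(A, B))
```

Repeated products in the same mode instead compose, as $\mathbf{S}\times_n U\times_n V=\mathbf{S}\times_n(VU)$.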
Therefore, the I×J×K three-way contingency table represents the three categorical variables, as shown in Table 1. The elements of the data hypermatrix $\mathbf{N}=[n_{ijk}]$ are the frequencies of each combination of the i-th row, j-th column, and k-th tube (cell frequencies), where $n_{ijk}\in\mathbb{N}$ for $i=1,\ldots,I$, $j=1,\ldots,J$, and $k=1,\ldots,K$.

Table 1. The I×J×K three-way contingency table, with rows indexed by X, columns by Y, and tubes by Z; the (i, j, k)-th cell contains the frequency $n_{ijk}$.

The univariate marginal frequencies of the i-th row, j-th column, and k-th tube are referred to as slices and are given by
$$n_{i..}=\sum_{j=1}^{J}\sum_{k=1}^{K} n_{ijk},\qquad n_{.j.}=\sum_{i=1}^{I}\sum_{k=1}^{K} n_{ijk},\qquad n_{..k}=\sum_{i=1}^{I}\sum_{j=1}^{J} n_{ijk}.$$
Similarly, the bivariate marginal frequencies are referred to as fibers and are determined by
$$n_{ij.}=\sum_{k=1}^{K} n_{ijk},\qquad n_{i.k}=\sum_{j=1}^{J} n_{ijk},\qquad n_{.jk}=\sum_{i=1}^{I} n_{ijk}.$$
The frequency of all observations is the grand total $N=\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K} n_{ijk}$.

Figure 2. Illustration of the univariate and bivariate marginal frequencies of the data hypermatrix N from the three-way contingency table.

A three-way contingency table can be constructed from two categorical variables observed in different conditions, times, or spaces. The characteristics of such data are in the form of arrays or data cubes, also called boxes by Kroonenberg. The cell frequencies $n_{ijk}$ in the contingency table can be converted to relative frequencies $p_{ijk}$ by dividing by the grand total $N$, that is, $p_{ijk}=n_{ijk}/N$. The relative frequency hypermatrix is called the correspondence hypermatrix and is denoted by $\mathbf{P}=[p_{ijk}]=[n_{ijk}/N]$.

Figure 3. The data structure derived from the I×J×K three-way contingency table as input to CA3, represented by the data hypermatrix N (modified from the literature).

The total dependence of the I×J×K table with relative frequencies $p_{ijk}$ is measured by the inertia $\Phi^2$. Analogously to CA2, $\Phi^2$ is based on the deviation from the three-way model of complete independence, such that
$$\Phi^2=\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K}\frac{(p_{ijk}-p_{i..}\,p_{.j.}\,p_{..k})^2}{p_{i..}\,p_{.j.}\,p_{..k}}.$$
The total dependency of cell (i, j, k) can be divided into the partial contributions of the two-way interactions and the three-way interaction, such that
$$\pi_{ijk}=\pi_{ij}+\pi_{ik}+\pi_{jk}+\pi_{ijk}^{(3)}.$$
Here,
$$\pi_{ijk}^{(3)}=\frac{p_{ijk}-p_{ij.}\,p_{..k}-p_{i.k}\,p_{.j.}-p_{.jk}\,p_{i..}+2\,p_{i..}\,p_{.j.}\,p_{..k}}{p_{i..}\,p_{.j.}\,p_{..k}}$$
reflects the three-way interaction measure for the (i, j, k)-cell. The element $\pi_{ijk}$ describes the dependency or association measure of the (i, j, k)-cell, which can be written as
$$\pi_{ijk}=\frac{P(i,j\mid k)}{P(i)\,P(j)}-1.$$
If the conditional probabilities are equal for each k, then $P(i,j\mid k)=P(i,j)$ and the first ratio reduces to $P(i,j)/\big(P(i)P(j)\big)$. Consequently, $\pi_{ijk}=\pi_{ij}$ and the three-way contingency table can be analyzed by CA2. The elements of a bivariate marginal total are defined as the weighted sum over the third index. Therefore, the elements of the I×J bivariate marginal are
$$\pi_{ij}=\sum_{k=1}^{K} p_{..k}\,\pi_{ijk}=\frac{p_{ij.}-p_{i..}\,p_{.j.}}{p_{i..}\,p_{.j.}}.$$
The elements of the other bivariate marginals, $\pi_{ik}$ and $\pi_{jk}$, are defined in the same way. A univariate marginal total is defined as the weighted sum over the other two indices, and its value is 0 (by the definition of $\pi_{ijk}$). Thus, for the rows,
$$\pi_{i}=\sum_{j=1}^{J}\sum_{k=1}^{K} p_{.j.}\,p_{..k}\,\pi_{ijk}=\sum_{j=1}^{J}\sum_{k=1}^{K}\frac{p_{ijk}}{p_{i..}}-\sum_{j=1}^{J}p_{.j.}\sum_{k=1}^{K}p_{..k}=1-1=0.$$
The elements of the other univariate marginals, $\pi_j$ and $\pi_k$, are defined similarly. Furthermore, the inertia $\Phi^2$, which measures the total dependency of the three-way contingency table, can be partitioned as
$$\Phi^2=\Phi^2_{IJ}+\Phi^2_{IK}+\Phi^2_{JK}+\Phi^2_{IJK}. \qquad (9)$$
This partition presents an appropriate measure for each interaction so that its contribution to the total dependency can be identified. Considering the symmetrical association structure between the three categorical variables of the three-way contingency table (under the assumption that X, Y, and Z are completely independent), the (i, j, k)-th elements of the table can be expressed in terms of deviations from the three-way independence model via the three-way Pearson residual
$$\mathbf{S}=[s_{ijk}]=\left[\frac{p_{ijk}-p_{i..}\,p_{.j.}\,p_{..k}}{\sqrt{p_{i..}\,p_{.j.}\,p_{..k}}}\right].$$

Tucker3 tensor decomposition.
For the last six decades, the Tucker3 tensor decomposition, also known as three-mode principal component analysis (3MPCA), has been considered an ingenious method for solving order-d tensor decomposition problems with d > 2. The Tucker3 decomposition was first introduced by Tucker in 1963 and reworked by Levin and by Tucker. Tucker's 1966 article was more comprehensive than the early literature and is widely cited; it discusses some of the mathematical notes on three-mode factor analysis. Various analyses in different fields have been carried out using the Tucker3 decomposition. The Tucker3 decomposition has acquired a range of terminology, as summarized in the table below.

Table 2. Some terminology for the Tucker3 decomposition, some specific to three-way data and others for N-way data

  Terminology                                          | Proposed by
  Three-mode factor analysis (3MFA/Tucker3)            | Tucker
  Three-mode principal component analysis (3MPCA)      | Kroonenberg and De Leeuw
  n-Mode component analysis                            | Kapteyn et al.
  Higher-order singular value decomposition (HOSVD)    | De Lathauwer et al.
  N-mode singular value decomposition (N-mode SVD)     | Vasilescu and Terzopoulos

Figure 4. Tucker3 tensor decomposition model for an order-3 hypermatrix.

In the Tucker3 tensor decomposition, a hypermatrix S is decomposed into three matrices $U_1$, $U_2$, and $U_3$, representing the row, column, and tube profiles, and one hypermatrix $\mathbf{A}$, the core, representing the interactions of rows, columns, and tubes. Van Loan elaborates on the HOSVD to represent the Tucker3 decomposition, as asserted in Definition 2.4.

Definition 2.4 (Van Loan). Let $\mathbf{S}\in\mathbb{R}^{I\times J\times K}$. Suppose $S_{(1)}=U_1D_1V_1^T$, $S_{(2)}=U_2D_2V_2^T$, and $S_{(3)}=U_3D_3V_3^T$ are, respectively, the SVDs of $S_{(1)}$, $S_{(2)}$, $S_{(3)}$, where $U_1\in\mathbb{R}^{I\times P}$, $U_2\in\mathbb{R}^{J\times Q}$, $U_3\in\mathbb{R}^{K\times R}$, and let $\mathbf{A}=\mathbf{S}\times_1U_1^T\times_2U_2^T\times_3U_3^T\in\mathbb{R}^{P\times Q\times R}$. Then
$$\mathbf{S}=\mathbf{A}\times_1U_1\times_2U_2\times_3U_3$$
is the Tucker3 tensor decomposition of S; that is, $S_{(1)}=U_1A_{(1)}(U_3\otimes U_2)^T$, $S_{(2)}=U_2A_{(2)}(U_3\otimes U_1)^T$, and $S_{(3)}=U_3A_{(3)}(U_2\otimes U_1)^T$ are the unfolded matrices of S.
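Definition 2.4 translates directly into code: take the SVD of each unfolding, keep the left singular vectors, and form the core by mode products with their transposes. The sketch below is a minimal illustration under one fixed unfolding convention, not the authors' implementation.

```python
import numpy as np

def unfold(S, mode):
    # mode-d matricization with a fixed, consistent column ordering
    return np.reshape(np.moveaxis(S, mode, 0), (S.shape[mode], -1), order='F')

def mode_product(S, U, mode):
    # d-mode product via the matrix form U S_(d), folded back
    shape = list(S.shape)
    shape[mode] = U.shape[0]
    rest = [shape[i] for i in range(len(shape)) if i != mode]
    return np.moveaxis((U @ unfold(S, mode)).reshape([shape[mode]] + rest, order='F'), 0, mode)

def tucker3(S):
    """Tucker3/HOSVD per Definition 2.4: U_d from the SVD of S_(d),
    core A = S x_1 U1^T x_2 U2^T x_3 U3^T."""
    U = [np.linalg.svd(unfold(S, d), full_matrices=False)[0] for d in range(3)]
    A = S
    for d in range(3):
        A = mode_product(A, U[d].T, d)
    return A, U

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 3, 5))
A, (U1, U2, U3) = tucker3(S)

# orthonormal factors; the reconstruction A x1 U1 x2 U2 x3 U3 recovers S exactly
R = mode_product(mode_product(mode_product(A, U1, 0), U2, 1), U3, 2)
print(np.allclose(R, S))  # True
```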
MAIN RESULTS

Tucker3 tensor decomposition for the standardized residual hypermatrix. The standardized residual hypermatrix S is calculated using the three-way Pearson residual given above, which involves the difference between the joint relative frequencies and the univariate marginal relative frequencies of each category. Such a calculation allows for an inefficient numerical process (much rounding occurs), which can make the resulting CA3 plot inaccurate. In order to minimize the numerical error, the hypermatrix S can be calculated directly from the elements of the data hypermatrix N, as formulated in Lemma 3.1.

Lemma 3.1. If $\mathbf{N}=[n_{ijk}]$ is the I×J×K hypermatrix derived from a three-way contingency table, with $n_{i..}=\sum_{j=1}^{J}\sum_{k=1}^{K}n_{ijk}$, $n_{.j.}=\sum_{i=1}^{I}\sum_{k=1}^{K}n_{ijk}$, $n_{..k}=\sum_{i=1}^{I}\sum_{j=1}^{J}n_{ijk}$, and grand total $N=\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K}n_{ijk}$, then the elements of $\mathbf{S}=[s_{ijk}]$ are
$$s_{ijk}=\frac{N^2n_{ijk}-n_{i..}\,n_{.j.}\,n_{..k}}{N^{3/2}\sqrt{n_{i..}\,n_{.j.}\,n_{..k}}}.$$

Proof. Consider
$$s_{ijk}=\frac{p_{ijk}-p_{i..}\,p_{.j.}\,p_{..k}}{\sqrt{p_{i..}\,p_{.j.}\,p_{..k}}}.$$
Substituting $p_{ijk}=n_{ijk}/N$, $p_{i..}=n_{i..}/N$, $p_{.j.}=n_{.j.}/N$, and $p_{..k}=n_{..k}/N$ gives
$$s_{ijk}=\frac{\dfrac{n_{ijk}}{N}-\dfrac{n_{i..}\,n_{.j.}\,n_{..k}}{N^3}}{\sqrt{\dfrac{n_{i..}\,n_{.j.}\,n_{..k}}{N^3}}}=\frac{N^2n_{ijk}-n_{i..}\,n_{.j.}\,n_{..k}}{N^3}\cdot\frac{N^{3/2}}{\sqrt{n_{i..}\,n_{.j.}\,n_{..k}}}=\frac{N^2n_{ijk}-n_{i..}\,n_{.j.}\,n_{..k}}{N^{3/2}\sqrt{n_{i..}\,n_{.j.}\,n_{..k}}}. \quad\square$$

The vectors aligned from the columns of the hypermatrix S are not always orthogonal. Consequently, the association values of the row, column, and tube categories cannot always be plotted in Cartesian coordinates whose axes are mutually perpendicular. For this reason, new bases, called principal coordinates, which are linear combinations of the row, column, and tube components with the core elements, are constructed. In order to find these new bases, the standardized residual hypermatrix S is decomposed using Tucker3.

Some mathematical properties of Tucker3 tensor decomposition. There is a huge amount of literature available on the Tucker3 tensor decomposition and its properties. In particular, one may refer to Van Loan for an extensive and excellent discussion of Tucker3.
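Before turning to those properties, Lemma 3.1 can be sanity-checked numerically: the count-based formula must agree with the definition via relative frequencies. A small sketch with arbitrary hypothetical counts (illustrative only, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical 3x4x2 table of cell counts
N = rng.integers(1, 30, size=(3, 4, 2)).astype(float)
tot = N.sum()                                  # grand total
ni, nj, nk = N.sum((1, 2)), N.sum((0, 2)), N.sum((0, 1))

# definition: s_ijk = (p_ijk - p_i.. p_.j. p_..k) / sqrt(p_i.. p_.j. p_..k)
P = N / tot
ind = np.einsum('i,j,k->ijk', ni, nj, nk) / tot**3   # p_i.. p_.j. p_..k
S_def = (P - ind) / np.sqrt(ind)

# Lemma 3.1: computed directly from the counts
prod = np.einsum('i,j,k->ijk', ni, nj, nk)
S_lem = (tot**2 * N - prod) / (tot**1.5 * np.sqrt(prod))

print(np.allclose(S_def, S_lem))  # True
```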
This study shall only touch on some of the more pertinent mathematical properties related to three-way correspondence analysis.

Theorem 3.2. If $\mathbf{S}=[s_{ijk}]\in\mathbb{R}^{n\times n\times n}$ is 3-hypercubical and $\mathbf{S}=\mathbf{A}\times_1U_1\times_2U_2\times_3U_3$ is the Tucker3 tensor decomposition of S, then
$$S_{(1)}=U_1A_{(1)}(U_3\otimes U_2)^T \quad\text{and}\quad A_{(1)}=U_1^TS_{(1)}(U_3\otimes U_2).$$

Proof. Consider $\mathbf{S}=[s_{ijk}]\in\mathbb{R}^{n\times n\times n}$ 3-hypercubical. The matrices $U_1$, $U_2$, and $U_3$ are obtained from the SVDs of $S_{(1)}$, $S_{(2)}$, and $S_{(3)}$, which are of size $n\times m$ with $m\geq n$, so that $U_1$, $U_2$, and $U_3$ are square orthogonal matrices of size $n\times n$. Since $\mathbf{S}=\mathbf{A}\times_1U_1\times_2U_2\times_3U_3$, unfolding along the first mode as in Definition 2.4 gives $S_{(1)}=U_1A_{(1)}(U_3\otimes U_2)^T$. Since $U_1$, $U_2$, and $U_3$ are orthogonal, $U_1^TU_1=U_2^TU_2=U_3^TU_3=I$, and hence $(U_3\otimes U_2)^T(U_3\otimes U_2)=U_3^TU_3\otimes U_2^TU_2=I$. Multiplying $S_{(1)}=U_1A_{(1)}(U_3\otimes U_2)^T$ on the left by $U_1^T$ and on the right by $U_3\otimes U_2$ yields $U_1^TS_{(1)}(U_3\otimes U_2)=A_{(1)}$. $\square$

Based on Theorem 3.2, in the same way it can also be proven that $S_{(2)}=U_2A_{(2)}(U_3\otimes U_1)^T$ with $A_{(2)}=U_2^TS_{(2)}(U_3\otimes U_1)$, and $S_{(3)}=U_3A_{(3)}(U_2\otimes U_1)^T$ with $A_{(3)}=U_3^TS_{(3)}(U_2\otimes U_1)$. The core hypermatrix A is obtained by rearranging the columns of the matrices $A_{(1)}$, $A_{(2)}$, and $A_{(3)}$ into the elements of the corresponding core A. It turns out that the core hypermatrix A is not unique.

Theorem 3.3. Let $\mathbf{S}\in\mathbb{R}^{I\times J\times K}$ where $I, J, K\in\mathbb{N}$. Given that $S_{(1)}=U_1D_1V_1^T$, $S_{(2)}=U_2D_2V_2^T$, and $S_{(3)}=U_3D_3V_3^T$ are the SVDs of $S_{(1)}$, $S_{(2)}$, $S_{(3)}$, respectively, suppose $U_1\in\mathbb{R}^{I\times P}$, $U_2\in\mathbb{R}^{J\times Q}$, $U_3\in\mathbb{R}^{K\times R}$, and $\mathbf{A}\in\mathbb{R}^{P\times Q\times R}$. If $\mathbf{S}=\mathbf{A}\times_1U_1\times_2U_2\times_3U_3$ is the Tucker3 tensor decomposition of S, then A is not unique.

Proof. Suppose $\mathbf{S}=\mathbf{A}\times_1U_1\times_2U_2\times_3U_3$ is the Tucker3 tensor decomposition of S. Based on Definition 2.4, $\mathbf{A}=\mathbf{S}\times_1U_1^T\times_2U_2^T\times_3U_3^T$. Let $B\in\mathbb{R}^{P\times P}$ be any orthogonal matrix, and set $\tilde U_1=U_1B$ and $\tilde{\mathbf{A}}=\mathbf{A}\times_1B^T$. Then
$$\tilde{\mathbf{A}}\times_1\tilde U_1=\mathbf{A}\times_1B^T\times_1U_1B=\mathbf{A}\times_1U_1BB^T=\mathbf{A}\times_1U_1,$$
so $\mathbf{S}=\tilde{\mathbf{A}}\times_1\tilde U_1\times_2U_2\times_3U_3$ is also a Tucker3 decomposition of S, while $\tilde{\mathbf{A}}\neq\mathbf{A}$ in general. $\square$

Finding a unique core hypermatrix A, such as a superdiagonal hypermatrix, is almost impossible, even for a symmetric hypermatrix.
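The unfolding identities of Theorem 3.2 can likewise be verified numerically. The sketch below assumes one fixed unfolding convention under which $S_{(1)}=U_1A_{(1)}(U_3\otimes U_2)^T$ holds; conventions that order the columns differently swap the Kronecker factors, which is immaterial as long as one is consistent.

```python
import numpy as np

def unfold(S, mode):
    # mode-d matricization with a fixed, consistent column ordering
    return np.reshape(np.moveaxis(S, mode, 0), (S.shape[mode], -1), order='F')

def mode_product(S, U, mode):
    shape = list(S.shape)
    shape[mode] = U.shape[0]
    rest = [shape[i] for i in range(len(shape)) if i != mode]
    return np.moveaxis((U @ unfold(S, mode)).reshape([shape[mode]] + rest, order='F'), 0, mode)

rng = np.random.default_rng(3)
S = rng.standard_normal((4, 3, 5))
U1, U2, U3 = [np.linalg.svd(unfold(S, d), full_matrices=False)[0] for d in range(3)]
A = S
for d, U in enumerate((U1, U2, U3)):
    A = mode_product(A, U.T, d)

# Theorem 3.2: S_(1) = U1 A_(1) (U3 kron U2)^T  and  A_(1) = U1^T S_(1) (U3 kron U2)
ok1 = np.allclose(unfold(S, 0), U1 @ unfold(A, 0) @ np.kron(U3, U2).T)
ok2 = np.allclose(unfold(A, 0), U1.T @ unfold(S, 0) @ np.kron(U3, U2))
print(ok1, ok2)  # True True
```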
Another alternative that can be considered to improve uniqueness is to make many of the core elements zero-valued. Simplifying the core structure in this way can be useful in computation, and applying Theorem 3.2 yields a core with such a structure. Moreover, the Tucker3 tensor decomposition procedure for the standardized residual hypermatrix S is given in Algorithm 1.

Algorithm 1 Tucker3 tensor decomposition for the standardized residual hypermatrix
Step 1 : Start
Step 2 : Read the I×J×K three-way contingency table
Step 3 : Create the data hypermatrix N
Step 4 : Compute the standardized residual hypermatrix S using Lemma 3.1
Step 5 : Compute the univariate marginal frequencies
Step 6 : Compute the bivariate marginal frequencies
Step 7 : Apply the matricization process
Step 8 : Form the mode-1, mode-2, and mode-3 unfoldings S(1), S(2), and S(3)
Step 9 : Apply the SVD to S(1), S(2), and S(3)
Step 10: Apply the Tucker3 tensor decomposition
Step 11: Print U1, U2, U3, and the core hypermatrix A
Step 12: End

Figure 5 illustrates the Tucker3 tensor decomposition process for the standardized residual hypermatrix formulated in Algorithm 1.

Figure 5. Tucker3 tensor decomposition for the standardized residual hypermatrix on CA3.

Constructing the CA3 plot via Tucker3 tensor decomposition. Considering the Tucker3 decomposition, Beh & Lombardo define the row principal coordinates F as
$$F=U_1A_{(P\times QR)}=[f_{i,qr}],\qquad f_{i,qr}=\sum_{p=1}^{P}u_{1,ip}\,a_{pqr}. \qquad (11)$$
Equation (11) confirms that the row principal coordinates are linear combinations of the row components and the core elements. In turn, each (j, k) pair of column-tube categories is represented by a single coordinate point. Hence, the column-tube principal coordinates H are defined as
$$H=(U_2\otimes U_3)A_{(QR\times P)}=[h_{jk,p}],\qquad h_{jk,p}=\sum_{q=1}^{Q}\sum_{r=1}^{R}a_{pqr}\,u_{2,jq}\,u_{3,kr}.$$
Depicting the row (F) and column-tube (H) principal coordinates yields the correspondence plot (CA3 plot). A coordinate point on the plot represents each row and column-tube category in the contingency table.
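As a sketch (random stand-in data, one fixed unfolding convention, with the Kronecker factor order following that convention rather than the typeset formula), the principal coordinates of Equation 11 and the inertia identity $E_{num}=\sum h_{jk,p}^2=\sum f_{i,qr}^2$ can be computed as:

```python
import numpy as np

def unfold(S, mode):
    return np.reshape(np.moveaxis(S, mode, 0), (S.shape[mode], -1), order='F')

def mode_product(S, U, mode):
    shape = list(S.shape)
    shape[mode] = U.shape[0]
    rest = [shape[i] for i in range(len(shape)) if i != mode]
    return np.moveaxis((U @ unfold(S, mode)).reshape([shape[mode]] + rest, order='F'), 0, mode)

rng = np.random.default_rng(4)
S = rng.standard_normal((6, 3, 2))       # stand-in residual hypermatrix, I=6, J=3, K=2

U = [np.linalg.svd(unfold(S, d), full_matrices=False)[0] for d in range(3)]
A = S
for d in range(3):
    A = mode_product(A, U[d].T, d)       # core of the Tucker3 decomposition

F = U[0] @ unfold(A, 0)                  # row principal coordinates F = U1 A_(1)
H = np.kron(U[2], U[1]) @ unfold(A, 0).T # column-tube coordinates (Kronecker order
                                         # matches the unfolding convention used here)

E_num = (F ** 2).sum()                   # total inertia
C = 100 * H ** 2 / E_num                 # contribution (%) of each column-tube category per axis
print(np.isclose(E_num, (H ** 2).sum()), np.isclose(C.sum(), 100.0))  # True True
```

Because the factor matrices are orthogonal, the squared norms of F and H coincide, which is why either sum can serve as the total inertia.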
The construction of the principal coordinates in CA3 is formulated in Algorithm 2.

Algorithm 2 Constructing the CA3 plot via Tucker3 tensor decomposition
Step 1 : Start
Step 2 : Read the I×J×K three-way contingency table
Step 3 : Create the data hypermatrix N
Step 4 : Compute the standardized residual hypermatrix S
Step 5 : Compute the univariate marginal frequencies
Step 6 : Compute the bivariate marginal frequencies
Step 7 : Apply the matricization process
Step 8 : Form the mode-1, mode-2, and mode-3 unfoldings S(1), S(2), and S(3)
Step 9 : Apply the SVD to S(1), S(2), and S(3)
Step 10: Apply the Tucker3 tensor decomposition
Step 11: Print U1, U2, U3, and the core hypermatrix A
Step 12: Compute the row principal coordinates F
Step 13: Compute the column-tube principal coordinates H
Step 14: Print the CA3 plot
Step 15: End

Another CA3 output is the inertia, which reflects the amount of information in each dimension of the CA3 plot. The last two equations calculate the total inertia $E_{num}$ of the contingency table and the category contributions:
$$E_{num}=\sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{p=1}^{P}h_{jk,p}^2=\sum_{i=1}^{I}\sum_{q=1}^{Q}\sum_{r=1}^{R}f_{i,qr}^2.$$
The contribution percentage of the jk-th column-tube category to the p-th dimension, $C_{jk,p}$, is determined by
$$C_{jk,p}=100\times\frac{h_{jk,p}^2}{E_{num}}.$$

APPLICATION IN PRACTICAL DATA ANALYSIS

Data Description. As an illustration of the application in data analysis, this study considers the olive data examined by Agresti and re-discussed by Beh & Lombardo, as summarized in the three-way contingency table of Table 3. The row categorical variable is the preference for black olives of Armed Forces personnel, with six levels in increasing order: A, B, C, D, E, and F. The column categorical variable represents position, with three categories: South-West (SW), North-West (NW), and North-East (NE). The tube categorical variable represents the location, consisting of the Urban and Rural categories.
Therefore, the data provide a three-way cross-classification of a number of Armed Forces personnel according to their preference for black olives, geographical position, and location.

Table 3. The three-way contingency table for the olive data, with preference in the rows, position (SW, NW, NE) in the columns, and location (Urban, Rural) in the tubes.

Rather than seeking the asymmetric association structure discussed by Beh & Lombardo, this study is interested in the symmetrical association between the preference for black olives of Armed Forces personnel and the geographical position and location. Indeed, the dimensional approach to interpretation is valid for both asymmetric and symmetric plots. However, asymmetric plots work well when the total inertia is high, but are problematic when the total inertia is low, because the profile points in principal coordinates then occupy a small space around the origin. Since an asymmetric plot interprets the distance between column and row points, the column profiles must be presented in the row space or vice versa. Eaton & Tayler also described it as "extremely dangerous" to interpret the proximity of principal coordinates from different variables (row-to-column). Rather than defining the column coordinates in the row space or vice versa, consider instead defining the positions of the row and column categories so that the strength of the association that exists between the variables is reflected. In a symmetric plot, the separate configurations of row profiles and column profiles are overlaid in a joint display, even though they emanate, strictly speaking, from different spaces. Therefore, both row and column points are displayed in principal coordinates. For this reason, we chose to focus on the symmetrical association rather than the asymmetrical one. The association statistical tests using the partition of Pearson's phi-squared, as in Equation 9, are recorded in the following table.

Table 4.
Partitioning of the Pearson phi-squared statistic for the olive data in Table 3.

Table 4 records that the total association from the partition of the Pearson phi-squared statistic is $\Phi^2=0.078$ with 27 degrees of freedom; its p-value implies that, at the 95% confidence level, there is strong evidence to suggest that the preference for black olives of Armed Forces personnel is strongly symmetrically associated with the geographical position and location. When considering the two-way associations, one can see that the association between preference and geographical position is statistically significant, since the $\Phi^2_{IJ}$ statistic of 0.048 has a p-value of 0.001, which is less than the significance level $\alpha=0.05$. This association accounts for 61.49% of the association that exists in the olive data table. Similarly, the association between preference and geographical location is statistically significant, since the p-value of $\Phi^2_{IK}$ is less than $\alpha=0.05$; their association contributes 24.12% of the association that exists among the three categorical variables. Meanwhile, the association between the geographical position and location is not statistically significant, since the statistic $\Phi^2_{JK}$ has a p-value exceeding $\alpha=0.05$. Additionally, although the residual three-way interaction $\Phi^2_{IJK}$ contributes the remaining share of the total association, it is not statistically significant and will be ignored along with the other non-significant marginal partition $\Phi^2_{JK}$.

Visualizing the association by the CA3 plot. This subsection aims to visually display the associations that exist among the three categorical variables. Considering the olive data in Table 3, the associations among preference, position, and location are represented by the standardized residual hypermatrix S determined by Lemma 3.1. Furthermore, S is decomposed using the Tucker3 tensor decomposition as formulated in Algorithm 1 and illustrated in Figure 5.
This decomposition involves a matricization process undertaken by utilizing Theorem 3.2. Meanwhile, Theorem 3.3 asserts that the core hypermatrix A obtained from Tucker3 is not unique, while Theorem 3.2 guarantees that the resulting core contains many zero-valued elements, since it involves the SVD of each unfolding matrix.

Table 5. Total inertia $E_{num}$ and the contributions of the column-tube categories (SW-Urban, NW-Urban, NE-Urban, SW-Rural, NW-Rural, NE-Rural) to $E_{num}$ along Axes 1-3, with cumulative values.

The last step of Algorithm 1 produces three matrices representing the row, column, and tube profiles and one core hypermatrix representing the three-way interaction of the three. Finally, the CA3 plot is obtained by applying Algorithm 2, as depicted in Figure 6.

Figure 6. The CA3 plot for the olive data obtained via the Tucker3 tensor decomposition.

Figure 6 shows that the two-dimensional plot visually reflects 91.58% of the association that exists among preference, position, and location in the olive data. By comparison, the three-dimensional plot visually reflects 99.72% of such associations. Therefore, using the two-dimensional plot keeps the visual display quality high while making it simpler and easier to interpret. In the two-dimensional plot, the NW-Urban category is relatively far from the origin, meaning that this category contributes the most to the association; its contribution value is recorded in Table 5. On the other hand, in the three-dimensional plot, NE-Urban is the category that contributes the most. The contribution of each column-tube category to the association depicted along the first three axes is summarized in Table 5. Moreover, the row category of preference E is strongly associated with the column-tube category SW-Urban, since their coordinates are close together. The complete association structure between the three categorical variables is visualized in Figure 7.

Figure 7.
Column-tube interactive biplot from the CA3 for the olive data in Table 3.

The interactive biplot in Figure 7 shows that the projection of the row point coordinate of preference E onto the arrow of SW-Urban is short, indicating a strong association between preference E and SW-Urban. Similarly, the point projection of preference D onto the SW-Rural arrow is shorter, which implies that their association is stronger. NW-Urban is the column-tube category that contributes the most to the complete association structure, since it has the longest vector. Finally, it can be concluded that the symmetric association between the preference for black olives of Armed Forces personnel and the geographical position and location is statistically significant: the preference for black olives depends, in a statistically significant manner, on both position and location, while the position does not depend in a statistically significant manner on the location.

CONCLUDING REMARKS

This study investigates the Tucker3 tensor decomposition in the CA3 framework. The decomposition is performed on a standardized residual hypermatrix S that represents the association between the three categorical variables. As a first step, the study formulates how to obtain a more precise hypermatrix S by calculating its elements directly from the data hypermatrix N, as in Lemma 3.1. Furthermore, Algorithm 1 is applied to decompose the hypermatrix S using the Tucker3 tensor decomposition. This decomposition requires a matricization process, which is the rearrangement of the elements of S into the columns of the matrices S(1), S(2), and S(3) according to the mode-d fibers. This process uses Theorem 3.2, which yields the three matrices, each representing the row, column, and tube profiles, and a core hypermatrix reflecting the interactions between them. Considering the importance of the core hypermatrix A in Tucker3, the study explores some of the properties of this core, as formulated in Theorem 3.3. Finally,
Algorithm 2 is applied to obtain the row and column-tube principal coordinates, which are plotted in the two- and three-dimensional CA3 plots. As an illustration of the application in data analysis, the entire procedure is conducted on the olive data in Table 3. The results show that, at the 95% confidence level, there is strong evidence to suggest that the preference for black olives of Armed Forces personnel is strongly symmetrically associated with geographical position and location. Future research will examine the convergence of the Tucker3 algorithm and the reconstruction of three-way tables from the three component matrices and the core hypermatrix, including their properties, to find a core with more zero elements.

Acknowledgement. The authors thank the research assistant team, Mr Doni Ramdan, Mrs Julia Permata, and Mr Dian Pebriana, for contributing. The authors thank the anonymous reviewers for providing constructive comments that improved the earlier version of the manuscript. LPPM Universitas Singaperbangsa Karawang supported this research through the HIPKA grant scheme.

REFERENCES