Detection and elimination of insignificant interacting subsystems in MIMO closed-loop systems using the least mean square-based partial correlation algorithm
Journal of Engineering and Applied Science volume 70, Article number: 116 (2023)
Abstract
Closed-loop identification of multi-input multi-output (MIMO) systems in large-scale plants presents significant difficulties due to subsystem interactions. This complexity is attributed to the many input–output variables, interactions such as recycling to improve or save material and energy, and disturbances such as heating or cooling within the plant. One of the fundamental problems in closed-loop identification is the input perturbation of the interacting subsystems needed to capture the system dynamics, produce an informative dataset, and consequently obtain an accurate model. However, perturbing all the interacting subsystems in the plant increases the number of applied excitation signals, which makes identification a non-trivial task. Thus, a precise and quantitative procedure to evaluate the significance and contribution of such interacting subsystems before applying these excitation signals is required to simplify the identification task. Conventional partial correlation analysis is one technique for measuring the significance of these interacting subsystems. However, this technique is based on least-squares estimation; thus, incorrect estimates of the model errors are produced owing to the correlations amongst the process inputs and unmeasured disturbances. Accordingly, this paper describes the implementation of a developed least mean square-based partial correlation algorithm for detecting and eliminating insignificant interacting subsystems of MIMO closed-loop systems. The developed algorithm can discriminate the interacting subsystems that substantially influence the plant interaction from those that do not by minimizing the model regression errors produced by process input correlation, unmeasured disturbances, and colored noise. The effectiveness of the proposed method is demonstrated through a case study.
Introduction
Several industrial systems and plants are multivariable by nature. The complexity of identifying such processes arises from the embedded interactions amongst the process inputs and outputs. Each control loop is necessarily affected by the other loops, and the control of such a process therefore becomes challenging [1, 2]. Moreover, such processes generally suffer from perfectly correlated input signals, low signal-to-noise ratios (SNRs) [3], correlated noise signals, and colored noise.
Simultaneous perturbation of all external reference signals is adopted to enrich the identifiability of a model structure and deliver an informative dataset in multi-input multi-output (MIMO) systems [4]. However, perturbing only one or a few input signals leads to a simpler implementation of the identification task [5]. Moreover, introducing such perturbing signals is usually unattractive or too costly because they typically interrupt plant operation and consequently degrade product quality [6].
Such complex simultaneous perturbations can excite any system, but they are often undesired because random or large excitations lead to excessive process variability and make the operating personnel anxious [7]. Therefore, it is desirable to define settings that permit the design of the minimal signal properties essential for the reference signal, obtaining an accurate system model while reducing the effect of the perturbations [8].
The results in [5, 9, 10] revealed that signal perturbation is optional for accomplishing system identification; noise sources, as an alternative, can deliver an informative dataset, provided that the selected controller has adequate complexity relative to the selected model structure in MIMO systems under closed-loop control [11]. However, a lengthy dataset must be acquired to fulfil the given accuracy levels. In [5], the results of not exciting all reference signals were validated by applying variance analysis.
In [12], the influence of reducing the sampling rate on the identification of discrete closed-loop systems under routine operation was investigated. The data were collected without external excitation and were exclusively affected by natural disturbances. Recently, typical results for identifiability were determined by applying closed-loop data with or without modifications in the reference input signal [8]; that study is an extension of the work presented in [12]. The sampling time and time delay are the dominant factors that define the difficulty level of the applied reference signal.
All the abovementioned works show the necessity of simplifying the identification task by minimizing the reference signal perturbations. However, the possible reduction in input signal perturbation can be investigated further. The input variables of the interaction transfer functions are the most suitable candidates, given that some of them carry very little dynamic information in large-scale plants. Thus, they can be excluded from perturbation according to correlation analysis and then omitted from the identification task [13].
In reality, exciting all the input signals, irrespective of their contribution to interaction, is impractical in the MIMO closed-loop case because of process restrictions or economic considerations. Thus, the interacting subsystems that contribute little to the interaction should be eliminated at a preliminary stage to ensure that a minimal number of input signals is perturbed. Subsequently, the MIMO system can be identified using any identification method, such as the prediction error method (PEM).
The determination of the null or less-contributing input–output pairs in a multivariable system was investigated in many studies by determining controllability based on minimum singular value decomposition analysis at defined frequencies [14,15,16,17]. Although these methods have been applied successfully in open-loop systems, they generally provide misleading results when applied to detect and eliminate no-model pairs in MIMO closed-loop systems [18]. This issue is attributed to the feedback mechanism and the controller action [19]. Controller action commonly results in a decoupled situation that hides information regarding model existence for an input–output pair [20].
Recently, several studies have been conducted to overcome the controller action and inherent feedback in closed-loop systems using correlation-based methods. In [18, 19, 21], the determination of input–output combinations in MIMO closed-loop systems with no-model pairs is presented based on a correlation analysis method. In [21], low estimated values of the polynomial coefficients of the identified autoregressive with exogenous input (ARX) model are the key elements for deciding the no-model pairs. However, the limitation associated with this method is the requirement of prior knowledge of the control law of each subsystem, because the control law is part of the correlation analysis.
In [19, 21], Pearson’s correlation analysis between the set point and the controller output signals is implemented as part of the algorithm. However, correlation analysis based on Pearson’s correlation provides misleading results in the MIMO closed-loop setting because of the inherent, hidden correlation amongst the input–output variables introduced by feedback. Thus, partial correlation analysis is effective compared with regular Pearson’s correlation when the input–output variables may be correlated [13].
The partial correlation method can be applied to remove the effect of other correlative variables by holding them fixed in the investigation, so that the correlation between the two variables of interest is known [22]. The partial correlation technique is trustworthy only if the regression error terms used in the analysis are unaffected by variable correlations or unmeasured disturbances, which is not the case in multivariable industrial practice. Another difficulty arises in conventional partial correlation because of the perfect correlation between the process input signals from the neighboring subsystems. These problems make the assessment of the dependency between these input signals using conventional partial correlation analysis misleading, owing to the use of the least-squares (LS) method in calculating the model regression errors.
In [13], the differentiation between the interacting subsystems was achieved using conventional partial correlation analysis in which the LS method was adopted. Complications, such as biased estimation of the model regression error terms caused by unmeasured disturbances and correlations amongst the process input signals from the other subsystems in the plant, constrain the success of the LS-based partial correlation method.
The least mean square (LMS) method is a popular adaptive algorithm that was proposed in [23]. The algorithm was applied iteratively to overcome the local minima and slow learning problems associated with backpropagation in [24] and in other nonlinear applications [25, 26]. The LMS algorithm utilizes an unbiased instantaneous gradient determined from the most recent information. Thus, recursively minimizing the instantaneous squared error criterion using an LMS algorithm can be effective in partially removing the effects of correlations between the disturbances and the manipulated variables and results in unbiased and consistent parameter estimates [11]. In addition, this instantaneous parameter estimate avoids the need to prefilter the experimental data and thus evades the possible loss of some of the system dynamics due to improper selection of frequency ranges.
In this study, an LMS-based partial correlation algorithm is developed to avoid the restrictions of conventional partial correlation. The LMS method combines the simplicity of LS with the effectiveness of the PEM.
The algorithm guarantees accurate results, whereas conventional partial correlation can deliver misleading results in particular cases. Determining at a prior stage which subsystems are unimportant, and can therefore be left out of the identification process, makes the task of selecting the input signals to be perturbed straightforward.
Methods
A multivariable process under decentralized control is considered, as shown in Fig. 1.
For the closed-loop setting in Fig. 1, the case of three decentralized loops is considered, and the aim is to assess the importance of the individual interacting dynamics and their contribution to the subsystem interaction under closed-loop conditions. As an example, the interacting subsystems (in the red rectangle) affecting loop 2 are examined; thus, the interaction from the neighboring subsystems (g_{21} and g_{23} in this case) has to be investigated. The question is whether both of these interaction transfer functions directly affect the closed-loop identification in loop 2 or whether one of them depends on the other through some correlation.
Conventional partial correlation analysis can be applied to examine the dependency relationship amongst u_{1}, u_{2}, and u_{3} to answer this question. Firstly, the error terms computed from the solution of the linear regression problem between these inputs represent the independent component of each input signal. Secondly, another linear regression problem is solved between the output (y_{2}) and the inputs (u_{1}, u_{2}, and u_{3}) to compute the output error. The errors computed in the first and second stages are lastly used to compute the partial correlation coefficients following Eq. (1). If the test shows any dependency, then the specific transfer function has less influence on the interaction and can be eliminated. Consequently, the identification task is reduced, and the identification precision is enhanced by reducing the number of interacting subsystems to be dithered.
The partial correlation can be represented as a correlation between (u_{1}, y_{1}):
where n is the number of data points and σ is the standard deviation of the variable indicated by the corresponding subscript.
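The two-stage procedure behind Eq. (1) can be sketched in Python. This is an illustrative sketch, not the paper's implementation: the helper name `partial_corr` and the use of ordinary least squares for the two auxiliary regressions (as in the conventional method) are our assumptions.

```python
import numpy as np

def partial_corr(x, y, Z):
    """Partial correlation between x and y, controlling for the columns of Z.

    Both x and y are regressed on Z by ordinary least squares; the
    correlation of the two residual series gives the partial correlation,
    mirroring the two regression stages described above.
    """
    Zc = np.column_stack([Z, np.ones(len(x))])  # include an intercept term
    e_x = x - Zc @ np.linalg.lstsq(Zc, x, rcond=None)[0]  # residual of x
    e_y = y - Zc @ np.linalg.lstsq(Zc, y, rcond=None)[0]  # residual of y
    return float(np.corrcoef(e_x, e_y)[0, 1])
```

With two signals that are correlated only through a shared confounder, the raw correlation is high while the partial correlation (controlling for the confounder) is near zero.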
Development of the LMS algorithm for MIMO systems
The linear regression form for the information vector and the parameter matrix is regarded as follows:
The parameter matrix is defined as follows:
where \(n=q\times {n}_{a}+p\times {n}_{b}\) and the corresponding information vector is as follows:
where \(\mathbf{u}\left(t\right)={\left[{u}_{1}\left(t\right),{u}_{2}\left(t\right),\ldots,{u}_{p}\left(t\right)\right]}^{T}\in {\mathfrak{R}}^{p}\) and \(\mathbf{y}\left(t\right)={\left[{y}_{1}\left(t\right),{y}_{2}\left(t\right),\ldots,{y}_{q}\left(t\right)\right]}^{T}\in {\mathfrak{R}}^{q}\) are the process input and output vectors, respectively.
Recursive LMS algorithms are commonly used online to update the parameter estimate vectors as time t increases as illustrated in Fig. 2. The algorithm applies the mean square error (MSE) criterion to minimize the cost function while estimating the parameter vector.
For the sake of compactness, \({\varvec{\upvarepsilon}}\left(t,{\varvec{\uptheta}}\right)\) is denoted as \(\widehat{{\varvec{\upvarepsilon}}}\) hereinafter. Then, the MSE criterion function in Eq. (5) above can be defined as follows:
The algorithm uses the gradient descent technique to attain the optimal Wiener solution \({{\varvec{\uptheta}}}^{*}\left(t\right)\) by minimizing an instantaneous squared error that is quadratic in the parameter vector. Thus, it guarantees the existence of a single minimum in the error surface. Accordingly, the algorithm converges to this single minimum.
Minimizing \(J\left[{\varvec{\uptheta}}\left(t\right)\right]\) in Eq. (6) delivers the LMS estimate of \({\varvec{\uptheta}}\left(t\right)\), with the assumption that \(\mathbf{\varphi }\left(t\right)\) is persistently excited using a perturbation signal. The recursive equation to compute the parameter update \(\widehat{{\varvec{\uptheta}}}\left(t+1\right)\) is given by [27, 28]:
where \(\mu >0\) is the step size parameter that controls the convergence rate and stability of the algorithm. The new estimate of \(\widehat{{\varvec{\uptheta}}}\left(t+1\right)\) is determined by the gradient of \(J\left[\widehat{{\varvec{\uptheta}}}\left(t\right)\right]\) according to \(\widehat{{\varvec{\uptheta}}}\left(t\right)\):
Equation (8) will be used to derive the LMS updating equation, under the assumptions that the noise vector \(\mathbf{v}\left(t\right)\) is an independent and identically distributed (i.i.d.) sequence and that the information vectors \(\mathbf{\varphi }\left(t\right)\) are independent over time. The first assumption is widespread, but the second is far from realistic: it is difficult to satisfy in most applications [29] and does not even hold for the input–output pairs in \(\mathbf{\varphi }\left(t\right)\) for t = 1 to N, where N is the number of data points [30].
The reason is that a large part of the data is shared between two successive information vectors, which are therefore strongly correlated. Accordingly, convergence analysis for the case in which this assumption is invalid and the observations used in \(\mathbf{\varphi }\left(t\right)\) are correlated has been investigated in several works. In [29], the convergence analysis was conducted under assumptions close to practical applications, where a strong correlation in \(\mathbf{\varphi }\left(t\right)\) was considered. The authors showed that the mean square difference between the optimal and true parameter estimate vectors tends towards zero as the step size parameter \(\mu\) decreases.
This result is similar to that in [24, 31], where the analysis shows that the LMS solution converges to a stable range as the step size approaches zero. In [32], convergence under the common assumption of correlated observations was also investigated; the author proved that when \(\mu\) approaches zero, the algorithm converges almost surely.
To compute \(\widehat{{\varvec{\uptheta}}}\left(t+1\right)\), the parameter update equation is derived by substituting Eq. (8) in Eq. (7) [27, 28].
Thus, the cost function \(J\left[\widehat{{\varvec{\uptheta}}}\left(t\right)\right]\) is minimized based on the instantaneous estimate of \(\widehat{{\varvec{\upvarepsilon}}}\) computed by the following:
By replacing \(\widehat{{\varvec{\uptheta}}}\left(t\right)\) in Eq. (11) with its iterative parameter estimates, \({\widehat{{\varvec{\uptheta}}}}_{k}\left(t\right)\), the estimate of \(\widehat{{\varvec{\upvarepsilon}}}\) can be iteratively computed at iteration k by the following:
In the iterative LMS algorithm, the parameter estimate equation updates are iteratively computed as follows:
The gradient of the iterative LMS cost function will be as follows:
Equation (15) is the MIMO counterpart of the LMS algorithm for the single-output case in [31, 33]. This equation shows that the LMS algorithm can improve the parameter estimate accuracy through the iterative procedure.
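The iterative MIMO LMS update can be sketched as follows. This is a minimal sketch of the recursion in the spirit of Eqs. (7)–(15), not the paper's exact implementation: the function name `iterative_lms`, the zero initialization, and the values of the step size `mu` and the number of passes `n_iter` are our assumptions.

```python
import numpy as np

def iterative_lms(Phi, Y, mu=0.01, n_iter=2):
    """Iterative LMS parameter estimation for a MIMO regression Y ~ Phi @ theta.

    Phi is the N x n matrix of information vectors phi(t) and Y the N x q
    output matrix. Each pass applies the gradient-descent update
        theta <- theta + mu * phi(t) * eps(t)^T
    using the instantaneous error eps(t); mu must be small enough for
    stability (classically, mu < 2 / lambda_max of the input
    autocorrelation matrix).
    """
    N, n = Phi.shape
    q = Y.shape[1]
    theta = np.zeros((n, q))
    for _ in range(n_iter):                        # iterative passes over the data
        for t in range(N):
            phi = Phi[t].reshape(n, 1)             # information vector, (n, 1)
            eps = Y[t].reshape(1, q) - phi.T @ theta  # instantaneous error, (1, q)
            theta += mu * phi @ eps                # LMS parameter update
    return theta
```

On well-excited synthetic data the recursion recovers the true parameter matrix up to the steady-state weight noise set by `mu`.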
Elimination procedure of the interacting subsystem using the LMS-based partial correlation algorithm

Step 1: The effect of all other variables in U on u_{i} is eliminated.
The linear regression is written as follows:
where U is an n × p matrix of causal variables, \({{\varvec{\uptheta}}}_{{u}_{i}}\) is a p × 1 vector of the parameters to be estimated, and \({\mathbf{u}}_{i}\) and \({\mathbf{e}}_{i}\) are the n × 1 vectors of the effect variable and the regression errors, respectively.
Given that U contains all other subsystem input variables \({\mathbf{u}}_{j}\) (j ≠ i), the LMS expression to minimize the cost function is obtained by applying Eq. (15) for the U matrix:
The regression errors for the abovementioned equation are estimated as follows:
e_{ui} is a vector of the independent components of u_{i}, with the effects of all other variables removed.

Step 2: The impact of U on \({{\varvec{y}}}_{i}\) is estimated by a regression model:
where y_{i} is the output of the main subsystem from which \({{\varvec{\uptheta}}}_{{y}_{i}}\) can be estimated as follows:
The regression errors for y_{i} can then be estimated as follows:

Step 3: The regression error terms are assumed to involve those elements of u_{i} and y_{i} that are independent of U. Therefore, the partial correlation can be represented as a regular correlation between (u_{i},y_{i}):
where N is the number of data points and σ is the standard deviation of the corresponding subscript.
Given that the tests will be conducted in a dynamic sense, the value of N can be selected to be adequately large based on the process knowledge to consider the presence of the time delay in the signals under analysis.
For conventional partial correlation analysis, let the true regression model estimated by the LS method be given as follows:
Then, the parameter estimate errors can be computed as follows:
If N approaches infinity:
For the LS estimate to maintain \(\widehat{{\varvec{\uptheta}}}_{LS}-{{\varvec{\uptheta}}}^{0}=0\), one has to keep \(E\left[{\mathbf{U}}^{T}{\mathbf{e}}_{i}\right]=0\), and this condition is satisfied only if e_{i} is white noise. Therefore, LS usually has a nonzero asymptotic bias and is not consistent in the case of colored noise, as is the case in real MIMO systems operating under closed-loop control. Estimating the regression error on the basis of LMS in this case is therefore more accurate.
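The asymptotic bias of LS under colored noise can be seen in a minimal scalar example. This is an illustrative toy system with assumed coefficients, not the paper's plant: a first-order system whose equation error is AR(1)-colored, so the error is correlated with the regressor and the LS estimate is pulled away from the true parameter.

```python
import numpy as np

# Toy illustration (assumed values): y(t) = a*y(t-1) + e(t), where the
# equation error e(t) = c*e(t-1) + v(t) is coloured (v white).
rng = np.random.default_rng(2)
N, a, c = 20000, 0.8, 0.9
v = rng.standard_normal(N)
e = np.zeros(N)
y = np.zeros(N)
for t in range(1, N):
    e[t] = c * e[t - 1] + v[t]     # coloured equation error
    y[t] = a * y[t - 1] + e[t]     # true first-order system

# LS estimate of a from y(t) = a*y(t-1) + error: biased, because the
# coloured e(t) is correlated with the regressor y(t-1).
a_ls = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

# Same system with white equation error (c = 0): LS is consistent.
y_w = np.zeros(N)
for t in range(1, N):
    y_w[t] = a * y_w[t - 1] + v[t]
a_ls_white = (y_w[1:] @ y_w[:-1]) / (y_w[:-1] @ y_w[:-1])
```

With these values the colored-noise estimate lands near 0.99 rather than the true 0.8, while the white-noise estimate is close to 0.8, illustrating why regression errors computed by LS become unreliable inputs to the partial correlation coefficients.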
The key difference between the conventional partial correlation method and the LMSbased partial correlation algorithm is the way to calculate the regression error vectors e_{ui} and e_{yi} that contain the cancellation of the impact of all other variables on u_{i} and y_{i}. In the conventional partial correlation method, a simple linear regression problem is firstly solved to calculate the parameter estimates \({\widehat{{\varvec{\uptheta}}}}_{{u}_{i k}}\) and \({\widehat{{\varvec{\uptheta}}}}_{{y}_{i k}}\). Then, the regression error vectors e_{ui} and e_{yi} are obtained by solving two different regression models. Lastly, Eq. (22) is applied to calculate the partial correlation coefficients. This case is different from that of the proposed LMSbased partial correlation algorithm in this study, where the LMS algorithm is used to accurately determine the regression errors. The accurate calculation of the regression errors will obviously lead to more accurate partial correlation coefficients given that Eq. (22) heavily depends on these regression errors.
LMSbased partial correlation algorithm

i. Collect input–output data from the closed-loop experiment.

ii. Solve the linear regression problem between u_{i} and the input matrix U.

iii. Solve the linear regression problem between y_{i} and the input matrix U.

iv. Compare \({\widehat{{\varvec{\uptheta}}}}_{{u}_{ik}}\left(N\right)\) and \({\widehat{{\varvec{\uptheta}}}}_{{y}_{ik}}\left(N\right)\) with \({\widehat{{\varvec{\uptheta}}}}_{{u}_{ik-1}}\left(N\right)\) and \({\widehat{{\varvec{\uptheta}}}}_{{y}_{ik-1}}\left(N\right)\), respectively: if the differences are smaller than a predetermined small ε, terminate the procedure and take the iterative estimates at k; otherwise, increase k by 1 and repeat from step ii.

v. Determine the regression errors e_{ui} and e_{yi} for the two regressions above.

vi. Calculate the partial correlation coefficients using Eq. (22).

vii. Apply the t-test for significance.
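The steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the regression is static, the step size `mu`, number of passes `n_pass`, and the large-sample critical value 1.96 for the t-test are our choices, and the helper names (`lms_residuals`, `lms_partial_corr`, `is_significant`) are ours.

```python
import numpy as np

def lms_residuals(X, y, mu=0.01, n_pass=3):
    """Residuals of regressing y on the columns of X, with the parameters
    estimated by iterative LMS (steps ii-iii and v)."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_pass):
        for t in range(len(y)):
            eps = y[t] - X[t] @ theta
            theta += mu * eps * X[t]        # instantaneous LMS update
    return y - X @ theta

def lms_partial_corr(U, y, i, mu=0.01):
    """Partial correlation between input u_i and output y, controlling for
    the remaining inputs via LMS-estimated regressions (steps ii-vi)."""
    others = np.delete(U, i, axis=1)        # all inputs except u_i
    e_u = lms_residuals(others, U[:, i], mu)
    e_y = lms_residuals(others, y, mu)
    return float(np.corrcoef(e_u, e_y)[0, 1])

def is_significant(r, N, crit=1.96):
    """Step vii: large-sample t-test of a correlation coefficient; the
    critical value 1.96 (5% two-sided) is an assumed choice."""
    t = abs(r) * np.sqrt((N - 2) / (1.0 - r ** 2))
    return t > crit
```

On synthetic data where the output depends on only two of three inputs, the algorithm reports a large, significant partial correlation for a contributing input and a near-zero one for the non-contributing input.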
Case study description
This case study considers two different simulation scenarios, with white and colored noise affecting the plant. The case study demonstrates the power of the developed LMS-based partial correlation algorithm for eliminating interacting subsystems in strongly interacting MIMO closed-loop systems compared with the conventional partial correlation method.
Copolymerization process
The transfer function model in Eq. (26) of a copolymerization process of methyl methacrylate and vinyl acetate in a continuous stirred tank reactor (CSTR) reported in [34] has been considered in this study.
The best pairing was determined by relative gain array (RGA) analysis. The system was divided into three decentralized control loops. In the first decentralized loop, the pairing was y_{1}, y_{2} with u_{2}, u_{3}. The suggested pairing for the second loop was y_{3} with u_{4}, and for the third loop, it was y_{4} with u_{5}, while u_{1} was kept constant. Proportional controllers k_{c1}, k_{c2}, k_{c3}, and k_{c4} with gains of 0.1, − 0.3, 0.5, and − 0.5, respectively, were used to stabilize the control system. To compare the conventional partial correlation with the LMS-based partial correlation, the process model in Eq. (26) was simulated in Simulink for data collection.
where

y_{1}: Polymer production rate, kg/h

y_{2}: Mole fraction of monomer A (methacrylate)

y_{3}: Weight average molecular weight

y_{4}: Reactor temperature, K

u_{1}: Monomer A flow rate, kg/h

u_{2}: Monomer B (vinyl acetate) flow rate, kg/h

u_{3}: Initiator flow rate, kg/h

u_{4}: Chain transfer agent flow rate, kg/h

u_{5}: Temperature of the jacket, K
Results and discussion
Two simulation cases are presented to demonstrate when the LMS-based partial correlation is superior to the conventional partial correlation. In the first simulation case (referred to as case A), the process is affected by white noise, representing smooth plant operation. In the second simulation case (referred to as case B), a more complex scenario representing the real disturbances in large-scale plants is considered; hence, colored noise was added. The colored noise was produced by filtering the white noise sequences added to the process outputs with the first-order filter described in Eq. (27).
where w_{1}(t) to w_{4}(t) are produced by filtering the white noise sequences v_{1}(t) to v_{4}(t), respectively. The white noise has zero mean, with variances σ^{2} equal to 0.086, 0.066, 0.043, and 0.072 for v_{1}(t) to v_{4}(t), respectively. The SNR values for the process outputs in the two simulation cases are reported in Table 1. In this analysis, 3500 data points were generated with a sampling interval of 1 min to conduct the partial correlation analysis for the conventional and the proposed algorithms.
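The colored-noise construction can be sketched as follows. The filter pole value 0.9 used here is an illustrative assumption (the actual filter coefficients are those of Eq. (27)); the variance default matches the value quoted for v_{1}(t).

```python
import numpy as np

def colored_noise(N, alpha=0.9, sigma2=0.086, seed=0):
    """Coloured noise w(t) obtained by passing zero-mean white noise v(t)
    through a first-order filter w(t) = alpha*w(t-1) + v(t).

    alpha = 0.9 is an assumed illustrative pole, not the coefficient of
    Eq. (27); sigma2 defaults to the variance quoted for v_1(t).
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(sigma2), N)   # white driving noise
    w = np.zeros(N)
    for t in range(1, N):
        w[t] = alpha * w[t - 1] + v[t]        # first-order colouring filter
    return w
```

For a first-order filter the lag-1 autocorrelation of the output approaches the pole value, whereas white noise (pole 0) shows no sample-to-sample correlation, which is the distinction the PSD and autocorrelation plots in Fig. 5 illustrate.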
In case A, white noise is introduced to the process. The effect of the white noises on the process outputs in case A is shown in Fig. 3, whereas Fig. 4 shows the sequential setpoint step changes applied to the process outputs of cases A and B.
In case B, colored noise was introduced to reflect the situation in a large-scale plant affected by several disturbance sources. Figure 5 shows the characteristics of the colored noise added to the process outputs: the power spectral density (PSD) in dB/Hz and the autocorrelation of the colored noise for all process outputs. The effects of the colored noise on the process outputs in case B are shown in Fig. 6.
To reproduce the realistic case of plants operated under closed loop, the sequential step changes in Fig. 4 were applied to all process outputs in the two simulation cases A and B, shown in Figs. 3 and 6, respectively. These changes were applied to each output separately, starting from y_{1} and moving to the other outputs after a prespecified period. Sequential step changes are necessary in this case to avoid any possible correlation in the reference signals, which would add more bias to the regression error terms and negatively affect the accuracy of the estimated partial correlation coefficients.
In this paper, the effect of white and colored noises with different characteristics on the plant has been investigated. The PSD is a crucial tool in signal frequency analysis: it defines the distribution of signal power over frequency and is generally applied in identification theory to distinguish between different noise characteristics.
White noise is regarded as an ideal stationary random signal with a constant power spectral density across the entire frequency spectrum. For the white noise case, a flat PSD profile is obtained, since no frequency contributes more power than any other. Hence, the correlation, or autocovariance, between distinct samples is zero at all sampling instances.
In the least-squares estimation method, the best parameter estimates are found conditioned on the existing noisy dataset. Consequently, when the measurement noise is random, as shown in Fig. 3, this conditioning removes the randomness from the problem, since the noise is independent and identically distributed (i.i.d.). Therefore, the performance of the LS method is satisfactory in such cases, and a reliable partial correlation coefficient can be produced by the conventional method. However, when the plant is affected by low SNRs, the performance of the LS method degrades [2, 3, 35], and the conventional partial correlation coefficients become misleading.
The system in this case study is strongly interacting, as can be observed from the set points applied to y_{2} and y_{4} in Fig. 4. Although the set point for y_{2} is applied with positive magnitude, the output response is in the opposite direction, as illustrated in Fig. 3 (downward). This is attributed to the fact that the second decentralized loop has the weakest dynamics, owing to the low-order transfer functions in this loop, which makes the contribution of the other interacting subsystems more dominant than the dynamics of the loop itself. On the other hand, loop 4 has no interaction effect from the other subsystems; therefore, the corresponding output y_{4} responded immediately to the set-point change applied at 2750 (upward), since the process does not incorporate a time delay.
In case B, Fig. 5 shows colored noise without a uniform power spectral density across the frequency spectrum, reflecting the situation in large-scale plants affected by several disturbance sources. The correlation, or autocovariance, between the samples is therefore nonzero across the sampling instances.
Applying the LMS algorithm to the outputs affected by colored noise, as illustrated in Fig. 6, overcomes the issue of correlation between the colored noise samples, as the algorithm minimizes an instantaneous squared error criterion and does not rely on cross- or autocorrelations between samples. Moreover, the LMS algorithm assumes that the measurements in the cost function are affected by random noise. Thus, the noise-generating process is handled throughout the parameter estimation by using an unbiased instantaneous gradient at each instant, which produces more accurate regression errors, the key component in computing the partial correlation coefficients.
This situation differs from the white noise case, where an ideal stationary random signal with zero correlation between samples at all sampling instances is assumed to affect the plant. Therefore, in case B, biased and inconsistent estimates, and consequently incorrect partial correlation coefficients, can be produced by the LS method.
For the case study considered in this section, the cause-and-effect relationship was analyzed using the conventional partial correlation and LMS-based partial correlation methods. The analysis was applied to the full plant dynamics in the normal operation case (case A) and to the more complex scenario with added colored noise (case B); the results are illustrated in Figs. 7 and 8, respectively.
The results in Figs. 7 and 8 show a significant contribution from all the interacting subsystems, none of which can be isolated prior to the identification stage. Thus, one of these interacting subsystems is intentionally removed from the process transfer function model to assess the performance of the two partial correlation algorithms when different types of noise replace the removed subsystem. Accordingly, the interaction transfer function (y_{3}u_{2}) is removed (assumed to be zero), and new data are collected while replacing the removed interacting subsystem with white and colored noise in cases A and B, respectively. This test also demonstrates to what extent the LMS-based partial correlation can differentiate between the actual correlation of (y_{3}u_{2}) and the correlation between y_{3} and the white or colored noise after the interacting subsystem is removed. The corresponding results are presented in Figs. 9 and 10 for cases A and B, respectively.
The significance of the partial correlation coefficients for the full plant analysis in Figs. 7 and 8 was verified by applying the t-test; Tables 2 and 3 report the t-test results. Similarly, Tables 4 and 5 report the t-test results for the partial correlation coefficients in Figs. 9 and 10 for the plant with the isolated interacting subsystem.
The t-test produced a value of zero when the LMS-based partial correlation algorithm was applied to the removed transfer function (y_{3}u_{2}), as it no longer existed in the two simulation cases, whereas the t-test gave a value of one for the conventional partial correlation in case B. This indicates that the conventional partial correlation can produce correct results under normal operating conditions but not under colored noise conditions.
In case A, under the less disturbed conditions of the full plant case, comparable results were obtained, and small partial correlation coefficients for the interaction transfer functions were reported by both the conventional partial correlation and the LMS-based partial correlation algorithm, as shown in Fig. 7. Nevertheless, under the colored noise conditions of case B for the same full plant analysis, both partial correlation methods reported slightly higher partial correlation coefficients, as shown in Fig. 8.
For the input–output control pairings (within the dash-dot lines in the figures), the conventional partial correlation method yielded smaller coefficients than the LMS-based partial correlation algorithm, except for the pairing (y_{4}u_{5}). This can be observed in cases A and B of the full plant, as shown in Figs. 7 and 8, respectively. This contradicts the expected partial correlation values for these input–output pairs: because they are strongly coupled through the feedback mechanism, their partial correlation coefficients should approach unity. Despite the slightly inaccurate results obtained with the conventional method in case A, it was still able to detect the removed transfer function (y_{3}u_{2}), as the LMS-based partial correlation algorithm did, as shown in Fig. 9.
In case B, by contrast, the conventional partial correlation method gave higher values for most of the interaction transfer functions than the LMS-based partial correlation algorithm, as shown in Figs. 8 and 10 for the full-plant case and the plant with the isolated interacting subsystem, respectively. Again, the conventional analysis returned small partial correlation coefficients for the input–output pairs compared with those obtained by the LMS-based algorithm, except for the pairing (y_{4}u_{5}), even though these paired variables are highly correlated. One can nevertheless conclude that the conventional partial correlation method is effective and competitive with the LMS-based algorithm when the number of interacting subsystems is small. This is because the fourth decentralized loop, with the pairing (y_{4}u_{5}), has no interaction with any other subsystem.
In case B, the conventional partial correlation failed to detect the removed correlation between u_{2} and y_{3}, as shown in Fig. 10; instead, it computed a misleading partial correlation coefficient that actually reflects the correlation between y_{3} and the added colored noise. In contrast, the LMS-based partial correlation algorithm was able to distinguish the removed correlation between u_{2} and y_{3} from the one due to the colored noise, as shown in the same figure.
Moreover, the LMS-based partial correlation algorithm produced lower partial correlation coefficients for most of the interaction transfer functions in both simulation cases than the conventional partial correlation, which showed high values mainly in case B. This indicates that the LMS-based algorithm estimates the regression errors more accurately by eliminating the error component caused by the correlation between y_{3} and the added colored noise, yielding a partial correlation coefficient that reflects only the causal relation.
Figure 11 shows the average regression errors calculated by the proposed LMS-based partial correlation algorithm for the process outputs in the full-plant analysis. In case A, the algorithm required three iterations for error convergence (except for y_{4}), whereas more iterations were required for all the outputs in case B. Moreover, the final regression errors after the last iteration in case B are higher than those in case A. This is because, when the plant is affected only by white noise, the algorithm can easily compute the regression errors, whereas it encounters numerical difficulties when colored noise or a low SNR affects the plant.
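The per-iteration error convergence discussed above can be illustrated with a minimal LMS (Widrow-Hoff) sketch; the variable names and the pass-based convergence loop are our own simplification, not the paper's exact implementation. Each pass over the data updates the filter weights sample by sample and records the mean squared regression error, which decreases across passes when the step size mu lies within the stability range.

```python
import numpy as np

def lms_regression_errors(X, d, mu=0.01, n_passes=10):
    """Estimate the regression errors of output d on regressors X with LMS.

    Returns the last-pass error sequence and the average squared
    regression error per pass (the quantity plotted per iteration in
    Fig. 11). mu must satisfy the usual LMS stability condition,
    roughly mu < 2 / trace(E[x x^T]).
    """
    n, p = X.shape
    w = np.zeros(p)                  # adaptive filter weights
    avg_err = []
    e = np.empty(n)
    for _ in range(n_passes):
        for k in range(n):
            e[k] = d[k] - w @ X[k]   # instantaneous regression error
            w += mu * e[k] * X[k]    # Widrow-Hoff weight update
        avg_err.append(float(np.mean(e ** 2)))
    return e, avg_err
```

In a white-noise setting, avg_err typically flattens within a few passes, mirroring the fast convergence reported for case A; colored noise or a low SNR slows this decay, as observed in case B.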
Finally, Fig. 12 shows higher regression errors for the conventional partial correlation method than for the LMS-based partial correlation algorithm with the isolated interacting subsystem. Ideally, the regression errors represent only the unmodeled part, which carries no information about the cause-and-effect relationships in the system under investigation. When the LS method is used, however, the regression errors may contain an additional component representing the correlation between the dependent variables and the colored-noise dynamics, incorrectly inflating the regression error values.
Recalling Eq. (28), the partial correlation coefficients depend heavily on accurate estimation of the regression errors e_{ui} and e_{yi}. Thus, when the regression errors are significantly misestimated, as when the system is highly correlated or affected by colored noise, the numerator of Eq. (28) grows, and consequently the partial correlation coefficients increase, providing misleading and confounding results. The main difference between the conventional partial correlation method and the LMS-based partial correlation algorithm is that the latter calculates these regression errors more accurately using the LMS algorithm.
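The structure of Eq. (28) can be made concrete with a short sketch (the helper names here are illustrative, not from the paper): the partial correlation between an input u and an output y given a conditioning set Z is the ordinary correlation between the residuals left after regressing each of u and y on Z. The paper's contribution amounts to computing those residuals with the LMS algorithm rather than the least-squares baseline shown here.

```python
import numpy as np

def ls_residuals(Z, d):
    """Least-squares regression errors of d on Z (the baseline whose
    colored-noise bias the LMS-based algorithm is designed to reduce)."""
    w, *_ = np.linalg.lstsq(Z, d, rcond=None)
    return d - Z @ w

def partial_correlation(u, y, Z, residual_fn=ls_residuals):
    """Partial correlation of u and y given Z: correlate the regression
    errors e_u and e_y, following the residual-based structure of Eq. (28)."""
    e_u = residual_fn(Z, u)
    e_y = residual_fn(Z, y)
    return float(np.corrcoef(e_u, e_y)[0, 1])
```

When u and y are related only through the conditioning variables, the residual correlation is near zero; a direct u-to-y path survives the conditioning and produces a coefficient near unity. Swapping residual_fn for an LMS-based residual estimator is what the proposed algorithm changes.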
Conclusions
The work presented in this paper has shown that the proposed LMS-based partial correlation algorithm yields a notable improvement in discriminating between weakly and strongly interacting subsystems in MIMO closed-loop systems compared with the conventional partial correlation method. The proposed algorithm has proven more efficient despite its simplicity. It also overcomes two major problems in the literature: the necessity of prior knowledge of the controller algorithm, and the biased estimation of the model regression errors produced by conventional correlation analysis, which leads to inaccurate detection of the weakly interacting subsystems. Our findings show that the conventional partial correlation is effective under normal operating conditions and when no unmeasured disturbances regularly upset the closed-loop processes. The accuracy of the LMS-based partial correlation in detecting the subsystems that contribute less to the plant interaction is demonstrated in a case study. The simulated case study revealed that the results interpreted from the conventional partial correlation are biased when disturbances and colored noise exist. Nevertheless, the MIMO closed-loop simulation studies have demonstrated that the proposed LMS-based partial correlation algorithm can detect insignificant interacting subsystems and retain satisfactory performance in complex operating conditions where the collected data involve strongly interacting subsystems and colored noise.
Availability of data and materials
Data can be shared upon request.
Abbreviations
ARX: Autoregressive with eXogenous input
CSTR: Continuous stirred tank reactor
i.i.d.: Independent and identically distributed
LMS: Least mean squares
LS: Least squares
MIMO: Multi-input multi-output
MSE: Mean square error
PEM: Prediction error method
SNR: Signal-to-noise ratio
Acknowledgements
Not applicable.
Funding
The author declares that there is no funding for the research.
Ethics declarations
Competing interests
The author declares no competing interests.
About this article
Cite this article
Rahim, M.A. Detection and elimination of insignificant interacting subsystems in MIMO closedloop systems using the least mean squarebased partial correlation algorithm. J. Eng. Appl. Sci. 70, 116 (2023). https://doi.org/10.1186/s44147023002857