## Abstract

For linear and well-defined estimation problems with Gaussian white noise, the Kalman filter (KF) yields the best result in terms of estimation accuracy. However, the KF performance degrades and can fail in cases involving large uncertainties such as modeling errors in the estimation process. The smooth variable structure filter (SVSF) is a relatively new estimation strategy based on sliding mode theory and has been shown to be robust to modeling uncertainties. The SVSF makes use of an existence subspace and of a smoothing boundary layer to keep the estimates bounded within a region of the true state trajectory. Currently, the width of the smoothing boundary layer is chosen based on designer knowledge of the upper bound of modeling uncertainties, such as maximum noise levels and parametric errors. This is a conservative choice, as a better-defined smoothing boundary layer will yield more accurate results. In this paper, the state error covariance matrix of the SVSF is used for the derivation of an optimal time-varying smoothing boundary layer. The robustness and accuracy of the new form of the SVSF were validated and compared with the KF and the standard SVSF by testing it on a linear electrohydrostatic actuator (EHA).

## Introduction

A list of the nomenclature used throughout this paper is provided at the end. The goal of a filter is to remove the effects that the system noise *w*_{k} and the measurement noise *v*_{k} have on extracting the true state values *x*_{k} from the measurements *z*_{k}. The KF is formulated in a predictor-corrector manner. The states are first estimated using the system model, termed a priori estimates, meaning "prior to" knowledge of the observations. A correction term is then added based on the innovation (also called the residual or measurement error), thus forming the updated or a posteriori (meaning "subsequent to" the observations) state estimates.
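The predict-correct cycle just described can be sketched in a few lines. The following is an illustrative Python sketch (the function and variable names are ours, not from the paper):

```python
import numpy as np

def kf_step(x, P, z, A, B, u, C, Q, R):
    """One predict-correct cycle of the discrete-time Kalman filter."""
    # Prediction: a priori state estimate and error covariance
    x_prior = A @ x + B @ u
    P_prior = A @ P @ A.T + Q
    # Innovation (measurement error) and its covariance
    e = z - C @ x_prior
    S = C @ P_prior @ C.T + R
    # Kalman gain and a posteriori (updated) estimates
    K = P_prior @ C.T @ np.linalg.inv(S)
    x_post = x_prior + K @ e
    P_post = (np.eye(len(x)) - K @ C) @ P_prior
    return x_post, P_post
```

Each call consumes the previous a posteriori estimate and returns the updated one, so the filter is run by iterating this function over the measurement sequence.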

The a priori state estimate $\hat{x}_{k+1|k}$ is calculated using the system matrix *A*, the previous state estimate $\hat{x}_{k|k}$, the input matrix *B*, and the input *u*_{k}; the corresponding a priori state error covariance matrix $P_{k+1|k}$ is predicted as well:

$$\hat{x}_{k+1|k} = A\hat{x}_{k|k} + Bu_k$$

$$P_{k+1|k} = AP_{k|k}A^T + Q_k$$

A number of different methods have extended the classical KF to nonlinear systems, with the most popular and simplest method being the extended Kalman filter (EKF) [9,10]. The EKF is conceptually similar to the KF; however, the nonlinear system is linearized according to its Jacobian. This linearization process introduces uncertainties that can sometimes cause instability [10]. For the purposes of this paper, only linear systems will be considered.

The optimality of the KF comes at the price of stability and robustness. The KF assumes that the system model is known and linear, the system and measurement noises are white, and the states have initial conditions with known means and variances [9,11]. However, these assumptions do not always hold in real applications. If they are violated, the KF yields suboptimal results and can become unstable [12]. Furthermore, the KF is sensitive to computer precision and to the complexity of computations involving matrix inversions [13]. In an effort to further increase stability, the KF has been combined with a variety of square root algorithms and methods, such as Cholesky decomposition, unit diagonal factorization, and triangularization algorithms [14–17]. These methods are based on reformulating the KF equations with numerically stable implementations that mathematically increase the arithmetic precision of the computation [13]. Increasing the arithmetic precision reduces the effects of round-off errors, which improves the overall numerical stability of the filter.

Other methods have been proposed to reduce the effects of modeling errors [18,19]. These techniques are based on increasing the a priori covariance matrix, which increases the gain value. This approach puts more emphasis on the measurements, as opposed to the model used by the filter [9].

The effects of assuming Gaussian noise distributions may be minimized by implementing a Gaussian sum. This method approximates the non-Gaussian probability density function (PDF) by a finite number of Gaussian PDFs [20]. This approach is computationally complex due to the number of filters used to approximate the overall estimate; however, it has been shown to work well.

A recent robust filtering strategy that is less susceptible to uncertainties and is computationally efficient is the variable structure filter [21]. Variable structure system theory originated from the Soviet Union in the 1940s [22,23]. A special subcategory of it, referred to as sliding mode control (SMC), is commonly used in control applications as it provides enhanced robustness and stability. In a typical sliding mode controller, a discontinuous switching gain is used to maintain the states along some desired trajectory [23]. The discontinuous gain is determined based on the distance of the states from a switching hyperplane. The gain forces the states to converge onto the hyperplane and slide along it [24]. While on the hyperplane and under ideal conditions, the state trajectory becomes insensitive to disturbances and uncertainties. The discontinuous switching brings an inherent amount of stability to the control, while in practice introducing chattering due to limitations and delays in switching. To remove chattering, a smoothing boundary layer is introduced along the sliding surface in order to interpolate and scale the discontinuous gain within the boundary region. This results in the discontinuous gain being applied outside the smoothing boundary layer, while inside it a continuous corrective action is applied. A number of sliding mode observers and filters have been proposed in the literature [25,26]. In 2002, an optimal sliding mode filter design was introduced; however, the derivation led to more of a robust control strategy, rather than an estimator [27]. The estimation strategy discussed in this paper is significantly different.

The smooth variable structure filter (SVSF) is a relatively new estimation strategy based on sliding mode theory, and has been shown to be robust to modeling uncertainties. Similarly to SMC, the SVSF uses a discontinuous gain and a smoothing boundary layer ψ in its formulation. In this paper, an “optimal” smoothing boundary layer is derived for the SVSF with respect to the state error covariance matrix. Section 2 provides a brief overview of the SVSF, followed by the derivation of an optimal smoothing boundary layer equation. A linear electrohydrostatic actuator (EHA) estimation problem is then described, and the new form of the SVSF is compared with the KF and the standard SVSF in terms of estimation accuracy and robustness to uncertainties. The paper then concludes with a summary of the results.

## The Smooth Variable Structure Filter

A revised form of the VSF, referred to as the SVSF, was presented in 2007 [28]. The SVSF strategy is also a predictor-corrector estimator based on sliding mode concepts, and can be applied to both linear and nonlinear systems and measurements. As shown in Fig. 1, and similar to the VSF, it utilizes a switching gain to converge the estimates to within a boundary of the true state values (i.e., the existence subspace) [28]. The SVSF has been shown to be stable and robust to modeling uncertainties and noise, when given an upper bound on the level of unmodeled dynamics and noise [21,28]. The origin of the SVSF name comes from the requirement that the system is differentiable (or "smooth") [28,29]. Furthermore, it is assumed that the system under consideration is observable [28].

The SVSF gain for the linear case may be written as [28,29]

$$K_{k+1} = C^{-1}\,\overline{\big(|e_{z,k+1|k}|_{Abs} + \gamma\,|e_{z,k|k}|_{Abs}\big) \circ \mathrm{sat}\big(\bar{\psi}^{-1}e_{z,k+1|k}\big)}\,\big[\bar{e}_{z,k+1|k}\big]^{-1} \tag{2.5}$$

where the subscript *i* refers to the *i*th smoothing boundary layer width ψ_{i}; γ is the SVSF memory or convergence rate, with elements $0 < \gamma_{ii} \leq 1$; and *C* is the linear measurement matrix. However, for numerical stability, it is important to ensure that one does not divide by zero in Eq. (2.5). This can be accomplished using a simple *if* statement with a very small threshold (e.g., 1 × 10^{−12}). The SVSF gain is used to refine the state estimates as follows:

$$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}e_{z,k+1|k}$$

Note that |*e*|_{Abs} is the element-wise absolute value of the vector *e*, and is equal to $|e|_{Abs} = e \cdot \mathrm{sign}(e)$. The proof, as described in Refs. [28,29], yields the derivation of the SVSF gain from Eq. (2.8). The SVSF results in the state estimates converging to within a
region of the state trajectory, referred to as the existence subspace. Thereafter,
it switches back and forth across the state trajectory, as shown earlier in Fig. 1. The existence subspace shown in Figs. 1–3 represents the amount of
uncertainties present in the estimation process, in terms of modeling errors or the
presence of noise. The width of the existence subspace β is a function of the uncertain dynamics associated with the inaccuracy of the internal model of the filter as well as the measurement model, and varies with time [28]. Typically this value is not exactly known, but an upper bound may be selected based on a priori knowledge.

The SVSF gain is considerably less complex than its predecessor (VSF), which allows it to be implemented more easily (mathematically and conceptually). Furthermore, the SVSF estimation process is inherently robust and stable to modeling uncertainties due to the switching effect of the gain. This makes for a powerful estimation strategy, particularly when the system is not well known. Note that for systems that have fewer measurements than states, a “reduced order” approach is taken to formulate a full measurement matrix [28,31]. Essentially “artificial measurements” are created and used throughout the estimation process.
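The predict-correct structure of the SVSF can be sketched as follows. This is an illustrative Python sketch for the full-measurement case, assuming the standard gain form of Refs. [28,29]; all names are ours:

```python
import numpy as np

def svsf_step(x, e_post_prev, z, A, B, u, C, gamma, psi):
    """One predict-correct cycle of the SVSF (full measurement case).

    x           -- previous a posteriori state estimate
    e_post_prev -- previous a posteriori measurement error e_{z,k|k}
    gamma       -- SVSF memory/convergence rate (0 < gamma <= 1)
    psi         -- smoothing boundary layer widths (vector)
    """
    # Prediction using the system model (a priori estimate)
    x_prior = A @ x + B @ u
    e_prior = z - C @ x_prior                       # a priori measurement error
    # Corrective magnitude, smoothed by the saturation over psi
    E = np.abs(e_prior) + gamma * np.abs(e_post_prev)
    corr = E * np.clip(e_prior / psi, -1.0, 1.0)    # sat(e/psi) inside the layer
    x_post = x_prior + np.linalg.inv(C) @ corr
    e_post = z - C @ x_post                         # kept for the next time step
    return x_post, e_post
```

Outside the boundary layer the saturation evaluates to ±1 and the full discontinuous corrective action is applied; inside it, the correction is scaled continuously, which removes chattering.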

## Derivation of an Optimal Smoothing Boundary Layer

The partial derivative of the a posteriori covariance (trace) with respect to the smoothing boundary layer term ψ is the basis for obtaining a strategy for the specification of ψ. The approach taken is similar to determining an optimal gain for the KF. The following derivation is applicable to any measurement case, provided that the system is completely observable [32]. For the case when there are fewer measurements than states, one needs to implement a reduced order form of the SVSF as shown in Ref. [28]. This allows the creation of a full measurement matrix, typically in the form of an identity. For the case when there are more measurements than states, the system output can be multiplied by the inverse of the measurement matrix, thus mapping the measurements to the states. One could then use a full measurement matrix (i.e., identity) in the estimation process.

The term *E* is a "vector of errors," defined as follows:

$$E_{k+1} = |e_{z,k+1|k}|_{Abs} + \gamma\,|e_{z,k|k}|_{Abs}$$

The notation *ā* signifies a diagonal matrix formed of the vector *a*, such that *ā* = diag(*a*). A full measurement matrix is assumed (i.e., *C* = *I*), such that Eq. (3.6) simplifies accordingly.

The proposed smoothing boundary layer equation (3.22) is found to be a function of the a priori state error covariance *P*_{k+1|k}, the innovation covariance *S*_{k+1}, the measurement matrix *C*, the a priori and previous a posteriori measurement error vectors (*e*_{z,k+1|k} and *e*_{z,k|k}), and the convergence rate or SVSF "memory" γ. The width of the boundary layer is therefore directly related to the level of modeling uncertainties (by virtue of the errors), as well as the estimated system and measurement noise (captured by *P*_{k+1|k} and *S*_{k+1}). The smoothing boundary layer widths can now be obtained according to Eq. (3.22) at each time step, in an optimal fashion, as opposed to the constant (conservative) width presented in Ref. [28]. The units and values of the smoothing boundary layer matrix are studied in the Appendix.
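The computation of the time-varying widths can be sketched as follows. Since Eq. (3.22) is not reproduced in this excerpt, the sketch assumes a boundary layer of the form ψ = (Ē⁻¹ C P C^T S⁻¹)⁻¹, which is consistent with the dependencies listed above; treat the exact expression as an assumption and all names as ours:

```python
import numpy as np

def vbl_widths(P_prior, S, C, e_prior, e_post_prev, gamma):
    """Time-varying smoothing boundary layer widths (sketch).

    Assumes psi = (E_bar^{-1} C P C^T S^{-1})^{-1}, where
    E_bar = diag(|e_prior| + gamma * |e_post_prev|).
    """
    # In practice a small threshold guards against zero errors (see Sec. 2)
    E_bar = np.diag(np.abs(e_prior) + gamma * np.abs(e_post_prev))
    psi_mat = np.linalg.inv(
        np.linalg.inv(E_bar) @ C @ P_prior @ C.T @ np.linalg.inv(S)
    )
    return np.diag(psi_mat)   # the widths appear along the diagonal
```

Note how the widths grow with the error magnitudes and the a priori covariance, matching the qualitative behavior described in the text.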

## A Robust Filtering Strategy for Linear Systems

### Description of the SVSF–VBL Strategy.

As per the previous results and as shown in the Appendix, it appears that the time-varying smoothing boundary layer (VBL) for the SVSF yields the KF solution (gain) for linear systems. In this case, robustness to modeling uncertainties using the SVSF strategy is lost. It is hence beneficial to propose a combined strategy, referred to here as the SVSF–VBL, such that an accurate estimate is maintained (i.e., using the VBL calculation or KF gain) while ensuring the estimate remains stable (i.e., using the standard SVSF gain). This strategy is implemented by imposing a saturation limit on the optimal smoothing boundary layer as follows. Outside the limit the robustness and stability of the SVSF is maintained, while inside the boundary layer the optimal gain is applied. Consider the following sets of figures to help describe the overall implementation of the SVSF–VBL strategy.

Figure 4 illustrates the case when a limit is imposed on the smoothing boundary layer width (a conservative value) and the time-varying (optimal) smoothing boundary layer per Eq. (3.22) falls within this limit. In the standard SVSF, the smoothing boundary layer width is made equal to the limit, such that the difference between the limit and the optimal variable boundary layer quantifies the loss in optimality. Essentially, in this case, the SVSF–VBL (or KF) gain should be used to obtain the best result. Another way to understand this process is to consider the SVSF–VBL as using a time-varying boundary layer with saturated limits to ensure stability.

Figure 5 illustrates the case when the
optimal time-varying smoothing boundary layer is larger than the limit imposed
on the smoothing boundary layer. This typically occurs when there is modeling
uncertainty (which leads to a loss in optimality) or when the limit on the
smoothing boundary layer is underestimated. This strategy is useful for
applications such as fault detection. Recall that the width of the
smoothing boundary layer (3.22) is directly related to the level of modeling
uncertainties (by virtue of the errors), as well as the estimated system and
measurement noise (captured by *P*_{k}_{+1|k} and *S*_{k}_{+1}). Therefore,
the VBL creates another indicator of performance for the SVSF: the widths may be
used to determine the presence of modeling uncertainties, as well as detect any
changes in the system.

To summarize the estimation strategy (SVSF–VBL) proposed in this section, consider Fig. 6. Essentially, in a well-defined case, the gain used to correct the estimate is calculated by the SVSF–VBL or KF gain. When the smoothing boundary layer calculated by Eq. (3.22) or Eq. (4.7) goes beyond the limits, the smoothing boundary layer width requires saturation.
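The switching logic summarized above can be sketched as follows (an illustrative Python sketch; the names are ours):

```python
import numpy as np

def select_gain_mode(psi_vbl, psi_limit):
    """Per-state decision between the optimal (VBL/KF) and robust
    (standard SVSF) corrective action.

    Inside the limit, the optimal time-varying width (and hence the
    KF-equivalent gain) applies; outside, the width is saturated and
    the standard SVSF gain is kept for stability.
    """
    psi_used = np.minimum(psi_vbl, psi_limit)   # saturate the widths
    robust = psi_vbl > psi_limit                # True where SVSF gain applies
    return psi_used, robust
```

The `robust` flags double as the performance indicator described above: widths exceeding the limit signal modeling uncertainty or a change in the system.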

### The Computational Process for the SVSF–VBL.

As in the standard SVSF, for numerical stability it is important to ensure that one does not divide by zero. This can be accomplished using a simple *if* statement with a very small threshold (e.g., 1 × 10^{−12}).

## Simulation Results

### Description of the Linear Estimation Problem.

In this section, the proposed algorithm is applied for state estimation on an EHA. This example uses computer simulations in order to allow a detailed investigation of the effects of parametric uncertainties. The EHA model is based on an actual prototype built for experimentation [28,34]. The purpose of this example is to demonstrate that the new SVSF–VBL estimation process is functional, and that the resulting estimation process is comparable to the KF for linear and known systems. Furthermore, the addition of modeling errors will demonstrate its robustness. For this computer experiment, the input to the system is a random signal with amplitude in the range of ±1 rad/s, superimposed onto a unit step occurring at 0.5 s [28].

For this problem, all of the states are measured (i.e., *C* = *I*). The sample time of the system is *T* = 0.001 s, and the discrete-time state space system equations are defined as in Ref. [28]. The system and measurement noises *w* and *v* are considered to be Gaussian, with zero mean and covariances *Q* and *R*, respectively. The initial state error covariance *P*_{0|0}, system noise covariance *Q*, and measurement noise covariance *R* are likewise defined per Ref. [28].

For the standard SVSF estimation process, the memory or convergence rate was set to γ = 0.1, and the limits for the smoothing boundary layer widths (diagonal elements) were defined as $\psi = [0.05 \;\; 0.5 \;\; 5]^T$. These parameters were selected based on the distribution of the system and measurement noises. For example, the limit for the smoothing boundary layer width ψ was set to 5 times the maximum system noise, or approximately equal to the measurement noise. The initial state estimates for the filters were defined randomly by a normal distribution around the true initial state values *x*_{0}, using the initial state error covariance *P*_{0|0}. Two different cases were studied in this section. The first case was considered "normal," and the second included system modeling error halfway through the simulation.
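The test input described earlier (a random ±1 rad/s signal superimposed on a unit step at 0.5 s, sampled at *T* = 0.001 s) can be generated as follows; the 1 s simulation length and the seed are our assumptions:

```python
import numpy as np

# Random signal in the range +/-1 rad/s plus a unit step at 0.5 s,
# sampled at T = 0.001 s over a hypothetical 1 s run.
rng = np.random.default_rng(0)
T = 0.001
t = np.arange(0.0, 1.0, T)
u = rng.uniform(-1.0, 1.0, size=t.shape) + (t >= 0.5).astype(float)
```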

### Normal Case.

The main results of applying the KF, SVSF, and the SVSF–VBL are shown in Fig. 7. This figure shows the true position of the EHA, with the corresponding filter estimates. The estimation results of all filters are practically the same (note that the lines are nearly overlapping and are thus difficult to distinguish). It is important to note that the KF provides the best estimate (i.e., optimal) for a linear and known system, subject to Gaussian noise. Consequently, the SVSF–VBL yielded the same results, since the derived gain (4.8) is the same as the KF gain. Although the standard SVSF yielded good results, the estimates were not optimal. The velocity and acceleration estimates were relatively the same, and were thus omitted due to space constraints. As shown in Table 1, in the normal (standard) case, the KF and SVSF–VBL provide optimal results (in terms of estimation accuracy). The SVSF–VBL improved on the standard SVSF with a constant boundary layer width by roughly 40% (in the position estimate). This is a significant improvement in terms of estimation accuracy. However, note that after some tuning by trial-and-error, it may be possible to improve the SVSF results.

| Filter | Position (m) | Velocity (m/s) | Acceleration (m/s^{2}) |
|---|---|---|---|
| KF | 3.72 × 10^{−3} | 4.89 × 10^{−2} | 0.87 |
| SVSF–VBL | 3.72 × 10^{−3} | 4.89 × 10^{−2} | 0.87 |
| SVSF | 6.11 × 10^{−3} | 5.93 × 10^{−2} | 1.21 |

The root mean squared error (RMSE) results of running the computer experiment are shown in Table 1.
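The RMSE values reported in the tables are computed per state over the simulation run; a minimal sketch:

```python
import numpy as np

def rmse(x_true, x_est):
    """Root mean squared error per state, over all time steps.

    x_true, x_est -- arrays of shape (steps, n_states).
    """
    err = np.asarray(x_true) - np.asarray(x_est)
    return np.sqrt(np.mean(err**2, axis=0))
```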

Figure 8 provides an illustration of the individual smoothing boundary layer widths (found within the ψ matrix), as they evolve with time. The standard SVSF results could be improved if the information contained along the diagonal of the smoothing boundary layer matrix were used to tune the standard SVSF boundary layer widths.

In its current form, the SVSF–VBL is equivalent to the KF. However, as shown in the following example, some cases exist such that the KF no longer provides an optimal and reliable estimate.

### Modeling Uncertainties Case.

The corresponding position estimates for this case are shown in Fig. 9.

An interesting result occurs when studying the elements of the smoothing boundary layer matrix. As shown in Fig. 10, the smoothing boundary layer width corresponding to the acceleration state grows larger at the inception of the modeling uncertainty (0.5 s). This is due to the fact that the width of the smoothing boundary layer is directly related to the level of modeling uncertainties (by virtue of the errors), as well as the estimated system and measurement noise (captured by *P*_{k+1|k} and *S*_{k+1}), as described in Eq. (3.22). Furthermore, this can be seen by looking at the value of Eq. (4.6) at the onset of modeling uncertainties. The average value in *E* (corresponding to the third state *E*_{3}) increased by nearly 100 times, which in turn drastically increased the smoothing boundary layer width. The system modeling error leads to an incorrect a priori state error covariance *P*_{k+1|k}, which propagates to the smoothing boundary layer calculation. The smoothing boundary layer matrix ψ_{k+1} therefore provides an alternative method for fault detection, as demonstrated by the immediate changes at the inception of the system modeling uncertainties.

The smoothing boundary layers grow to accommodate the increased uncertainties at 0.5 s. Injection of uncertainties leads to a loss of optimality, as the basic assumption of a known model no longer applies. As shown by Fig. 9, at the inception of the modeling error (0.5 s), the KF failed to yield a reasonable estimate. However, the SVSF–VBL and SVSF retained their robust stability, and their estimates remained bounded to within a region of the true state trajectory. In terms of RMSE, the SVSF–VBL estimation strategy yielded the best results, as shown in Table 2.
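The fault-indication behavior described above can be monitored directly from the boundary layer widths. The following Python sketch flags time steps where a width greatly exceeds its fault-free level; the baseline window and the threshold `factor` are hypothetical tuning choices, not values from the paper:

```python
import numpy as np

def detect_fault(psi_history, baseline_window=100, factor=10.0):
    """Flag possible faults from the smoothing boundary layer widths.

    psi_history -- array of shape (steps, n_states) of widths over time.
    The baseline is the mean width over an initial (assumed fault-free)
    window; a step is flagged when any width exceeds factor * baseline.
    """
    psi_history = np.asarray(psi_history)
    baseline = psi_history[:baseline_window].mean(axis=0)
    return np.any(psi_history > factor * baseline, axis=1)
```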

| Filter | Position (m) | Velocity (m/s) | Acceleration (m/s^{2}) |
|---|---|---|---|
| SVSF–VBL | 4.96 × 10^{−3} | 5.43 × 10^{−2} | 0.98 |
| SVSF | 6.01 × 10^{−3} | 5.75 × 10^{−2} | 1.12 |
| KF | 0.31 | 3.49 | 17.9 |

As shown in Table 2, the KF provides the worst result (in terms of estimation accuracy). However, the standard SVSF and the SVSF–VBL estimation processes remained relatively stable (when compared with Table 1). These results would have significant implications when attempting the accurate control of a mechanical or electrical system.

## Conclusions

This paper introduced the derivation of an optimal smoothing boundary layer width for the smooth variable structure filter. A new estimation strategy, referred to as the SVSF–VBL, which makes use of the optimality of the KF and the robustness of the SVSF, was presented. Prior to this work, a variable smoothing boundary layer did not exist for the SVSF. In the standard SVSF, the smoothing boundary layer widths were selected based on upper bounds of the uncertainties in the estimation process. This was a conservative choice for the smoothing boundary layer, which resulted in a loss of optimality. In this paper, a variable smoothing boundary layer was derived in an optimal fashion by minimizing the state error covariance matrix with respect to the smoothing boundary layer term. The robustness and accuracy of the new form of the SVSF were demonstrated and compared with the KF by testing it on a linear EHA estimation problem. It was demonstrated that the SVSF–VBL strategy performed exactly the same as the KF in the absence of modeling error. In the presence of system modeling uncertainties (or a fault), the SVSF–VBL outperformed both the KF and the standard SVSF, yielding very accurate and stable estimates.

## Nomenclature

- *x* = state vector or values
- *z* = measurement (system output) vector or values
- *y* = artificial measurement vector or values
- *u* = input to the system
- *w* = system noise vector
- *v* = measurement noise vector
- *A* = linear system transition matrix
- *B* = input gain matrix
- *C* = linear measurement (output) matrix
- *E* = combination of measurement error vectors
- *K* = filter gain matrix (i.e., KF or SVSF)
- *P* = state error covariance matrix
- *Q* = system noise covariance matrix
- *R* = measurement noise covariance matrix
- *S* = innovation covariance matrix
- *e* = measurement (output) error vector
- diag(*a*) or *ā* = diagonal matrix of some vector *a*
- sat(*a*) = saturation of the term *a*
- γ = SVSF "convergence" or memory parameter
- ψ = SVSF smoothing boundary layer width
- |*a*| = absolute value of some parameter *a*
- *E*{·} = expectation of some vector or value
- *T* = transpose of some vector or matrix
- ∧ = estimated vector or values
- *k* + 1|*k* = a priori time step (i.e., before applied gain)
- *k* + 1|*k* + 1 = a posteriori time step (i.e., after update)

### Appendix A: Closer Look at the Saturation Term

A closer look at the gain *K*_{k+1} defined by Eq. (3.3) reveals that the derivation of ψ removes the need for the saturation term in the gain, as follows. Consider the saturation term of Eq. (3.3) with Eq. (3.21), as follows:

Consider an arbitrary element *i* of Eq. (A3), given any system:

If the convergence rate γ is set to zero, Eq. (A6) simply yields the sign function of the measurement error (and the answer is −1, 0, or 1). If the convergence rate γ is nonzero (however, bounded between 0_{+} and 1), Eq. (A6) yields a value between −1 and 1. The same argument holds for Eq. (A3). Given the above discussion, when calculating a time-varying smoothing boundary layer using Eq. (3.22), the argument inside the saturation term will always be between −1 and 1. Hence, the saturating function used in Eq. (A3) is redundant given the definition of ψ as provided in Eq. (3.22). Note that this is also consistent with the earlier assumption (3.5) that the region of interest for the value of the smoothing boundary layer width is inside the saturation term (i.e., between −1 and 1).

### Appendix B: Studying the Revised SVSF Gain

Therefore, based on a full smoothing boundary layer matrix defined by Eq. (3.22), the gain (3.8) becomes the KF gain (1.5), which yields the optimal solution for well-defined linear systems. This is to be expected as the KF yields the best possible estimate for linear, known systems with Gaussian noise. This implies that the robustness of the SVSF is lost with the use of an optimal smoothing boundary layer that would make the saturation function redundant.