On the Regression-Tensor Analysis of the Hardening Process of Metal Coatings

Rusanov Vyacheslav Anatolievich^{1},
Agafonov Sergey Viktorovich^{2},
Daneev Aleksey Vasilyevich^{3},
Gubanov Ilya Alecseevich^{3}


1. Introduction

Researchers pay considerable attention to improving the strength characteristics achieved by the technological process of hardening metal coatings (see, for example, 1) Munz W.-D., Lewis D.B., Hovsepian P.E. *et al*. Industrial scale manufacturing of superlattice hard PVD coatings // Surface Engineering. 2001. V. 17. pp. 15-17; 2) Mitterer C., Holler F., Ustel F. *et al*. Application of hard coatings in aluminum die casting // Surface & Coatings Technology. 2000. V. 125. pp. 233-239). Nonlinear integrative physical and chemical (PC) processes lie at the root of the methods of hardening the working surfaces of modern power machines, which makes the formalization and development of their mathematical models topical. In this context, regression models [1] [2] [3] [4] remain in demand, and regression-tensor systems [5] [6] [7] form an important class among them. On the one hand, these systems are close in their predictive properties to polynomial models [2], admitting a detailed analytical description based on tensor calculus [7], the functional analysis of strong Fréchet differentials [8] and the theory of extremum problems. On the other hand, they play an important role in the nonlinear analysis of the multifactorial tribological and anticorrosion properties of complex metal coatings, based on mathematical modeling of the physical and mechanical (PM) properties of composite media and developing a nonlinear predictive analysis of the integrative characteristics of metal coatings induced by their nanostructure geometry [9] [10].

This article develops the tasks set in the conclusions of [5]. The main goal is not so much formal rigor of the inferences as clarity of concepts in the development of general problems of tribology [11] related to the precision modeling of nanostructures of complex metal coatings. Within this scope, the problem of forming the PM functional that evaluates the PC mode of hardening of composite metal coatings is solved. Analytical interpretations of the multiply connected conditions of PC mode optimization, under the imposed nonlinear (and essentially difficult to formalize) constraints, are constructed [12] [13]. The regression-tensor model for tribological/corrosion tests is substantiated by identifying multivariate nonlinear PM regression equations of minimum tensor norm by the least squares method (LSM).

2. Motivations, Terminology and Problem Formulation

Let *R* be the field of real numbers, ${R}^{n}$ the *n*-dimensional Euclidean space of column vectors with the norm ${\Vert \cdot \Vert}_{{R}^{n}}$, ${M}_{n,m}\left(R\right)$ the space of real $n\times m$ matrices, and ${T}_{m}^{j}$ the space of *j*-valent tensors over ${R}^{m}$ with the Euclidean tensor norm ${\Vert \cdot \Vert}_{{T}_{m}^{j}}$.

Let $v\in {R}^{m}$ be the vector of varying PC predictors [2] for a nonlinear PM regression with a fixed origin at $\omega \in {R}^{m}$ (the reference PM mode of hardening), and let $w\left(\omega +v\right)\in {R}^{n}$ be the vector of indices of PM variables. To describe a multifactorial physical and chemical process, consider a multidimensional functional nonlinear system of input-output type described by a vector-tensor *k*-valent PM regression equation of the following form:

$w\left(\omega +v\right)=\text{col}\left({\displaystyle \underset{j=0,\cdots ,k}{\sum}{f}_{1}^{j,m}\left(v,\cdots ,v\right),\cdots ,}{\displaystyle \underset{j=0,\cdots ,k}{\sum}{f}_{n}^{j,m}\left(v,\cdots ,v\right)}\right)+\epsilon \left(\omega ,v\right).$ (1)

Here ${f}_{i}^{j,m}\in {T}_{m}^{j}$, and $\epsilon \left(\omega ,\cdot \right):{R}^{m}\to {R}^{n}$ is a vector function of a nonparameterizable class satisfying

${\Vert \epsilon \left(\omega ,v\right)\Vert}_{{R}^{n}}=o\left({\left({v}_{1}^{2}+\cdots +{v}_{m}^{2}\right)}^{k/2}\right)$, (2)

$v=\text{col}\text{\hspace{0.05em}}\left({v}_{1},\cdots ,{v}_{m}\right)$, ${f}_{i}^{0,m}$ is the 0-rank tensor, representing the tribological index ${w}_{i},i=\stackrel{\xaf}{1,n}$ of the PM quality of the investigated PC process in its reference mode, given by the vector $\omega \in {R}^{m}$.
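For a reader who wants to experiment numerically, the right-hand side of (1) (without the remainder $\epsilon$) can be sketched as a sum of tensor contractions; the coefficient values below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

def contract(f, v):
    """f(v, ..., v): contract a j-valent tensor of shape (m,)*j with j copies of v."""
    out = np.asarray(f, dtype=float)
    while out.ndim > 0:
        out = np.tensordot(out, v, axes=(out.ndim - 1, 0))
    return float(out)

def w_model(tensors, v):
    """Response vector of (1): tensors[i] lists f_i^{0,m}, ..., f_i^{k,m}."""
    return np.array([sum(contract(f, v) for f in row) for row in tensors])

# one response coordinate (n = 1), m = 2, k = 2; all values hypothetical
f0 = 1.5                          # 0-rank tensor: reference-mode index w_1(omega)
f1 = np.array([2.0, 0.5])         # 1-valent (linear) tensor
f2 = np.array([[1.0, 0.3],
               [0.0, -0.4]])      # 2-valent (quadratic) tensor
v = np.array([0.1, -0.2])
w = w_model([[f0, f1, f2]], v)    # w_1(omega + v) up to the o(.) remainder (2)
```

The same `contract` helper covers any valency *k*, since each contraction removes one tensor axis.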

Note 1. The precision of nonlinear simulation of the PC process in the class of regression-tensor systems (1) (and adaptation of their parameters) is correct because of the continuous dependence ([8], p. 495) of the solutions of the differential diffusion equation [15] on its initial-boundary conditions. The tensor structure of Equation (1) arises in accordance with Theorem 3 ([16], p. 255) and the polylinear nature ([8], p. 490) of the higher-order Fréchet derivatives when computing the strong differentials of the vector function $w(\cdot )=\text{col}\left({w}_{1}(\cdot ),\cdots ,{w}_{n}(\cdot )\right)$ at the point $\omega$. This ultimately generalizes Assertion 2 from [5] (see Problem (I) below). In this case, the accuracy of the nonlinear PM modeling is represented by the estimate (2), a remainder term in Peano form related to the *k*-valence index of Equation (1).

The problem of the multidimensional nonlinear regression-tensor modeling of multifactor physical and chemical process of hardening of metal coatings, optimal with respect to some target “tribological criterion”, was set and investigated in detail in work [5] for the 2-valent model (1). With that, analytical solutions of three methodological positions of this problem of optimal mathematical modeling are obtained:

(I) for a fixed vector-predictor $\omega \in {R}^{m}$ and its open neighborhood $V\subset {R}^{m}$, analytical conditions are defined, under which the vector function $w(\cdot ):V\to {R}^{n}$ of PM property indices satisfies the multivariate regression-tensor system (1);

(II) a direct algorithm is constructed for identifying tensor coordinates ${f}_{i}^{j,m},i=\stackrel{\xaf}{1,n},j=\stackrel{\xaf}{0,2}$ in a 2-valent regression-tensor model (1) based on a numerical solution of a two-criteria LSM problem of optimal a posteriori PM modeling written as:

$\{\begin{array}{l}\mathrm{min}{\left({\displaystyle \underset{l=1,\cdots ,q}{\sum}{\Vert {w}_{\left(l\right)}-\text{col}\left({\displaystyle \underset{j=0,\cdots ,k}{\sum}{f}_{1}^{j,m}\left({v}_{\left(l\right)},\cdots ,{v}_{\left(l\right)}\right)},\cdots ,{\displaystyle \underset{j=0,\cdots ,k}{\sum}{f}_{n}^{j,m}\left({v}_{\left(l\right)},\cdots ,{v}_{\left(l\right)}\right)}\right)\Vert}_{{R}^{n}}^{2}}\right)}^{1/2},\\ \mathrm{min}{\left({\displaystyle \underset{i=1,\cdots ,n}{\sum}{\displaystyle \underset{j=0,\cdots ,k}{\sum}{\Vert {f}_{i}^{j,m}\Vert}_{{T}_{m}^{j}}^{2}}}\right)}^{1/2},\end{array}$ (3)

where ${w}_{\left(l\right)}\in {R}^{n}$ and ${v}_{\left(l\right)}\in {R}^{m},l=\stackrel{\xaf}{1,q}$ are, respectively, the experimental response vectors and factor-predictors of the PC process; *i.e.* ${w}_{\left(l\right)}$ is the a posteriori response to the target variation ${v}_{\left(l\right)}$ relative to the coordinates of the reference vector $\omega$ under the condition ${\Vert {v}_{\left(l\right)}\Vert}_{{R}^{m}}<1$ (this inequality is methodologically dictated by condition (2)), and *q* is the number of tribological experiments conducted (determined by the representativeness of model (1)) with the dynamics of PC processes [15];

(III) for the 2-valent regression-tensor model (1) with a given predictor $\omega \in {R}^{m}$ and the nominal condition $\epsilon \left(\omega ,\cdot \right)\equiv 0$, the analytical solution of the optimization problem of nonlinear "*v*-optimization" of the factor-predictors varied (relative to the vector $\omega$) for the prognostic PM characteristics of the designed composite metal coatings was obtained:

$\underset{v\in {R}^{m}}{\mathrm{max}}F\left(v\right):={r}^{\text{T}}w\left(\omega +v\right)={r}_{1}{w}_{1}\left(\omega +v\right)+\cdots +{r}_{n}{w}_{n}\left(\omega +v\right)$, (4)

where the vector function $v\mapsto w\left(\omega +v\right)=\text{col}\left({w}_{1}\left(\omega +v\right),\cdots ,{w}_{n}\left(\omega +v\right)\right)$ has a coordinate representation according to the LSM-identified model (1)-(3), and ${r}_{i}>0$ are weight factors reflecting the priority of PM indices; Problem (III) can also be investigated with some ${r}_{j}<0$, which corresponds to the case where ${w}_{j}$ should be minimized among the PM indices.
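The two criteria of the LSM problem (3) are resolved simultaneously by the Moore-Penrose pseudoinverse: among all coefficient vectors minimizing the residual it returns the one of minimum norm. A minimal sketch for a single response coordinate with $m=2$, $k=2$ (the monomial ordering and the synthetic data are assumptions of the illustration, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def design_row(v):
    """Monomial basis of degree <= 2 for m = 2 (hypothetical ordering)."""
    v1, v2 = v
    return np.array([1.0, v1, v2, v1 * v1, v1 * v2, v2 * v2])

# synthetic experiments with ||v_(l)|| < 1, as the condition on (3) requires
V = rng.uniform(-0.5, 0.5, size=(4, 2))        # q = 4 experiments, 6 unknowns
true = np.array([1.0, 2.0, -1.0, 0.5, 0.3, -0.2])
w = np.array([design_row(v) @ true for v in V])

X = np.vstack([design_row(v) for v in V])
coef = np.linalg.pinv(X) @ w    # zero residual AND minimum coefficient norm
```

Since $q<6$, infinitely many coefficient vectors fit the data exactly; the pseudoinverse selects the minimum-norm one, mirroring the second criterion of (3).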

The significance of nonlinear multifactor regression-tensor analysis lies not only in the exact theorems already obtained by this method [4] [5], but also in the simple and clear heuristic rules (e.g. the experimental condition ${\Vert {v}_{\left(l\right)}\Vert}_{{R}^{m}}<1$, or the equality $n=m$ in Corollary 2) involved in the construction of optimal multivariate a posteriori modeling. Over time, these rules may be brought to the level of strict theorems of regression analysis (like [2] [17] [18]), but even now their usefulness is beyond doubt [6].

*Problem statement* (according to analytical conclusions of [5] ):

(i) to determine necessary and sufficient conditions of solvability of the optimization problem (4) for a 3-valent ( $k=3$ ) functional regression-tensor system (1);

(ii) to construct an algorithm for correcting the sufficient extremum conditions at the stationary point of Problem (i), based on the *r*-parametric adjustment $r\mapsto {r}^{\text{T}}w\left(\omega +v\right)$ of the PM functional

$v\mapsto F\left(v\right)={r}^{\text{T}}w\left(\omega +v\right)$. (5)

3. Optimization of Physical and Mechanical Indices of the Hardening Process of Metal Coatings

Consider Problem (i) on optimization of the PM characteristics of metal coatings at $k=3$ ; note that the solution of the accompanying Problem (II) of parametric identification for $k=3$ is a straightforward modification of Assertion 3 of [5] (see also [17]).

In such a mathematical formulation, the nonlinear multivariate prognostic equation (1) can be given in the following vector-matrix-tensor form:

$\begin{array}{l}w\left(\omega +v\right)=c+Av+\text{col}\left({v}^{\text{T}}{B}_{1}v+{f}_{1}^{3,m}\left(v,v,v\right),\cdots ,{v}^{\text{T}}{B}_{n}v+{f}_{n}^{3,m}\left(v,v,v\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\epsilon \left(\omega ,v\right)\end{array}$ (6)

where $c\in {R}^{n},A\in {M}_{n,m}\left(R\right),{B}_{i}\in {M}_{m,m}\left(R\right),i=\stackrel{\xaf}{1,n}$. Without loss of generality, we assume that each matrix ${B}_{i}$ has an upper triangular structure; this substantially simplifies the numerical implementation of the LSM algorithm (3). Additionally, note that the vector function $\epsilon \left(\omega ,\cdot \right):{R}^{m}\to {R}^{n}$ satisfies (according to (2)) the qualitative estimate ${\Vert \epsilon \left(\omega ,v\right)\Vert}_{{R}^{n}}=o\left({\left({v}_{1}^{2}+\cdots +{v}_{m}^{2}\right)}^{3/2}\right)$.
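A direct numerical reading of (6) with $\epsilon \equiv 0$ may help fix the notation; all coefficient values below are hypothetical:

```python
import numpy as np

def w6(c, A, B, F3, v):
    """Evaluate the 3-valent model (6) with eps(omega, v) = 0."""
    quad = np.array([v @ Bi @ v for Bi in B])                       # v^T B_i v
    cub = np.array([np.einsum('abc,a,b,c->', Fi, v, v, v) for Fi in F3])
    return c + A @ v + quad + cub

c = np.array([1.0, -0.5])
A = np.array([[2.0, 0.0],
              [1.0, 1.0]])
B = [np.array([[1.0, 0.2],
               [0.0, -0.3]]),           # upper triangular, as assumed above
     np.zeros((2, 2))]
F3 = [np.zeros((2, 2, 2)), np.zeros((2, 2, 2))]
F3[1][0, 0, 0] = 0.1                    # single cubic term 0.1 * v_1^3 in w_2
v = np.array([0.2, -0.1])
w = w6(c, A, B, F3, v)
```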

According to (1), at $k=3$ the PM functional (5) of the total tribological indices is twice continuously differentiable, which guarantees the equality of the mixed derivatives

${\partial}^{2}F\left({v}_{1},\cdots ,{v}_{m}\right)/\partial {v}_{g}\partial {v}_{p}={\partial}^{2}F\left({v}_{1},\cdots ,{v}_{m}\right)/\partial {v}_{p}\partial {v}_{g}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall g,p=\stackrel{\xaf}{1,m}\text{\hspace{0.17em}}.$ (7)

Therefore, in solving optimization Problem (4) for the 3-valent model (6), the main result, according to Theorem 3 ([8], p. 505) and Theorem 7.2.5 [14], can be stated as the following Assertion 1. First, let us agree on the notation

${B}_{i}^{*}:=\left({B}_{i}+{B}_{i}^{\text{T}}\right)\in {M}_{m,m}\left(R\right),\text{\hspace{0.17em}}i=\stackrel{\xaf}{1,n}$, (8)

where each ${B}_{i}$ is a matrix of system (6) (the matrix of the tensor ${f}_{i}^{2,m}$, which in this formulation is not assumed symmetric in system (1)). Moreover, consider the vector function

$v\mapsto \Phi \left(v\right):={\left({r}_{1}{B}_{1}^{*}+\cdots +{r}_{n}{B}_{n}^{*}\right)}^{-1}\left({A}^{\text{T}}+\left[{\nabla}_{v}{f}_{1}^{3,m}\left(v,v,v\right),\cdots ,{\nabla}_{v}{f}_{n}^{3,m}\left(v,v,v\right)\right]\right)r,$ (9)

where ${\nabla}_{v}{f}_{i}^{3,m}\left(v,v,v\right)$ is the gradient of the functional $v\mapsto {f}_{i}^{3,m}\left(v,v,v\right)$.

Assertion 1. *The stationary points ${v}^{*}\in {R}^{m}$ of Problem* (i) *are precisely the solutions of the equation*

${v}^{*}+\Phi \left({v}^{*}\right)=0\text{\hspace{0.17em}}.$ (10)

*A sufficient condition for $F\left({v}^{*}\right)=\mathrm{max}\left\{F\left(v\right):v\in {R}^{m}\right\}$ is that ${v}^{*}$, as a stationary point of the functional* (5), *be of elliptic type; in other words, at the point ${v}^{*}$ the Hessian $G\left(v,r\right)$ of the functional* (5) *must satisfy the inequalities*

${\left(-1\right)}^{p}\mathrm{det}{\left[{b}_{ij}\right]}_{p}>0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}p=\stackrel{\xaf}{1,m},$ (11)

*where ${\left[{b}_{ij}\right]}_{p}\in {M}_{p,p}\left(R\right),p=\stackrel{\xaf}{1,m}$ are the leading principal submatrices of the Hessian, and* det *denotes the determinant of the matrix*

$\begin{array}{l}G\left({v}^{*},r\right)={r}_{1}\left({B}_{1}^{*}+\left[{\partial}^{2}{f}_{1}^{3,m}\left(v,v,v\right)/\partial {v}_{g}\partial {v}_{p}|{}_{{v}^{*}}\right]\right)+\cdots \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+{r}_{n}\left({B}_{n}^{*}+\left[{\partial}^{2}{f}_{n}^{3,m}\left(v,v,v\right)/\partial {v}_{g}\partial {v}_{p}|{}_{{v}^{*}}\right]\right)\in {M}_{m,m}\left(R\right),\end{array}$

*which is equivalent to the requirement that the characteristic numbers ${\lambda}_{p}\left({v}^{*},r\right)$ of the matrix $G\left({v}^{*},r\right)$ satisfy the inequalities*

${\lambda}_{p}\left({v}^{*},r\right)<0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}p=\stackrel{\xaf}{1,m}.$ (12)

Corollary 1. *In the case $k=2$, the Hessian of the functional* (5) *and conditions* (11), (12) *are invariant to the position of the stationary point ${v}^{*}$, and the Hessian equals*

$G\left(r\right)={r}_{1}{B}_{1}^{*}+\cdots +{r}_{n}{B}_{n}^{*}$, (13)

*which leads to a linear dependence of the numbers
${\lambda}_{p}\left(r\right),p=\stackrel{\xaf}{1,m}$ on the normalization of the vector r. *

*If $\text{rank}\text{\hspace{0.05em}}\text{\hspace{0.05em}}G\left(r\right)=m$, the solution of Equation* (10) *is unique and has the form*

${v}^{*}=-{G}^{-1}\left(r\right){A}^{\text{T}}r$, (14)

*which makes the position of the point
${v}^{*}$ invariant to the normalization of the vector r. *
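Corollary 1 can be checked in a few lines: for $k=2$ the Hessian (13) does not depend on *v*, and (14) gives the stationary point in closed form. The matrices below are hypothetical illustration data:

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
B1 = np.array([[-2.0, 0.3],
               [0.0, -1.0]])            # upper triangular B_i of the k = 2 model
B2 = np.array([[-1.0, 0.0],
               [0.0, -3.0]])
r = np.array([1.0, 2.0])                # weight factors r_i > 0

Bstar = [B1 + B1.T, B2 + B2.T]          # symmetrization (8)
G = r[0] * Bstar[0] + r[1] * Bstar[1]   # Hessian (13), independent of v
v_star = -np.linalg.solve(G, A.T @ r)   # stationary point (14)
lam = np.linalg.eigvalsh(G)             # ellipticity check (12)
```

Here all eigenvalues of `G` are negative, so `v_star` is a maximum of (5); rescaling *r* by a positive constant rescales `G` but leaves `v_star` unchanged, in line with the invariance noted above.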

By virtue of the vector functions ${\nabla}_{v}{f}_{i}^{3,m}\left(v,v,v\right)$, Equation (10) is geometrically defined by the intersection of *m* quadrics ([16], p. 219). Local analysis can be performed on the basis of *the fixed point principle* ([8], p. 75). If inequalities (11) (equivalent to (12)) are not fulfilled, *i.e.* at least one of them changes sign to the opposite, the stationary point ${v}^{*}$ is hyperbolic (a saddle point). On the other hand, changing the strict inequality < to the non-strict ≤ (*i.e.* $\text{rank}\text{\hspace{0.05em}}\text{\hspace{0.05em}}G\left({v}^{*},r\right)<m$ ) induces a parabolic structure at the point ${v}^{*}$. Thus, in the case of a saddle/parabolic point ${v}^{*}$, a purposeful parametric correction of the functional (5) is required to ensure its elliptic nature (12). Clearly, such a correction can shift the position of the stationary point, *i.e.* a refining recalculation of ${v}^{*}$ is required after the correction (by virtue of Corollary 1, at $k=2$ such a recalculation, in turn, no longer changes the spectrum (12) of the Hessian $G\left(r\right)$).
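The fixed point principle can be tried directly on Equation (10): when the cubic terms are small, the map $v\mapsto -\Phi \left(v\right)$ is a contraction. A sketch with hypothetical data ($n=m=2$, a single cubic coefficient):

```python
import numpy as np

A = np.eye(2)
B1 = np.array([[-2.0, 0.0],
               [0.0, -1.0]])
B2 = np.array([[-1.0, 0.0],
               [0.0, -2.0]])
r = np.array([1.0, 1.0])
a = 0.05                                # f_1^{3,m}(v,v,v) = a * v_1^3 (hypothetical)

S = r[0] * (B1 + B1.T) + r[1] * (B2 + B2.T)   # r_1 B_1^* + ... + r_n B_n^*

def grad_cubics(v):
    """Columns are the gradients of f_i^{3,m}(v,v,v); only f_1 is nonzero here."""
    return np.column_stack([np.array([3.0 * a * v[0] ** 2, 0.0]), np.zeros(2)])

def Phi(v):
    """The vector function (9)."""
    return np.linalg.solve(S, (A.T + grad_cubics(v)) @ r)

v = np.zeros(2)
for _ in range(50):                     # fixed-point iteration v <- -Phi(v)
    v = -Phi(v)
```

On convergence, `v` solves (10); whether it is a maximum must still be checked through the Hessian conditions (11)-(12).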

One of the factors affecting the stationary point ${v}^{*}$ geometry of Assertion 1 is the digital adaptive parametric adjustment of $r\mapsto G\left({v}^{*},r\right)$, which leads to elliptic conditions (11) or (12). This is the subject of the next section.

4. Parametric Correction of the PM Functional Using the *r*-Parameter Family of Its Hessians

Consider statement (ii): for a stationary point of optimization Problem (i), construct a numerical procedure for correcting the weight factors $r\in {R}^{n}$ so as to fulfill the spectral conditions (12), *i.e.* to provide the elliptic nature of the stationary point ${v}^{*}$ of Assertion 1. This formulation is relevant for optimization of the ${v}^{*}$ -parameters of the PM process when some target PM indices have to be minimized (*i.e.* ${r}_{j}<0$ ).

Note 2. Despite the algebraic equivalence of conditions (11)-(12), the use of determinant expansions (11) in the construction of the adaptive correction $r\mapsto G\left({v}^{*},r\right)$ is almost inevitably doomed to failure (even by means of computer algebra) because of the large number of terms expressed through the multivariate regression coefficients.

The solvability conditions for a problem similar to (ii) can be obtained only in exceptional cases. Therefore, below we shall discuss an approach to this problem based on the ideas of the theory of localization and perturbations of eigenvalues [14]. Another productive mathematical tool appears to be the transformation of conditions (12) to the problem of a “quadratic” stability by constructing a Lyapunov function ( [19], p. 134) (see Conclusion below) in the affine family of Hessians of the optimization Problem (i) on the grounds that this family clearly depends on variations of vector $r\in {R}^{n}$ coordinates due to the structure of functional (5).

Let some initial vector ${r}_{0}\in {R}^{n}$ of weight factors from Statement (ii) be given. For example, the heuristic choice of the vector ${r}_{0}$ can be made based on the equality of its coordinates ${r}_{0i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=\stackrel{\xaf}{1,n}$ to the values of some functions ${\Psi}_{i}:R\to R$ (with a clear physical context) that depend on the values of functionals ${J}_{i}\left(v\right):={w}_{i}\left(\omega +v\right),i=\stackrel{\xaf}{1,n}$ from auxiliary problems of optimal prediction of PM quality by individual target tribological indices ${w}_{i}$. In particular, for the 2-valent regression model (1), this position, according to Corollary 2 of [5], will be characterized by the following simple proposition.

Assertion 2. *If the maximal valency k of the tensors is two, then the vector of initial weight factors ${r}_{0}=\text{col}\left({r}_{01},\cdots ,{r}_{0n}\right)$ with coordinates*

${r}_{0i}={\Psi}_{i}\left({z}_{i}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{z}_{i}=\text{max}\left\{{J}_{i}\left(v\right)\text{:}v\in {R}^{m}\right\}\text{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=\stackrel{\xaf}{1,n}$

*has an analytic representation *

${r}_{0}=\text{col}\left({\Psi}_{1}\left({c}_{1}-{e}_{1}^{\text{T}}A{B}_{1}^{*-1}{A}^{\text{T}}{e}_{1}/2\right),\cdots ,{\Psi}_{n}\left({c}_{n}-{e}_{n}^{\text{T}}A{B}_{n}^{*-1}{A}^{\text{T}}{e}_{n}/2\right)\right)$,

*where ${\left\{{e}_{i}\right\}}_{i=\stackrel{\xaf}{1,n}}$ is the canonical basis in ${R}^{n}$.*
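Assertion 2 can be verified numerically by comparing its closed form with a direct maximization of each ${J}_{i}$. The matrices and the weighting functions ${\Psi}_{i}$ below are hypothetical (${\Psi}_{i}=\left|\cdot \right|$ is taken simply to keep ${r}_{0i}>0$):

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
c = np.array([2.0, 1.0])
B = [np.array([[-1.0, 0.2],
               [0.0, -2.0]]),
     np.array([[-3.0, 0.0],
               [0.0, -1.0]])]
Psi = [abs, abs]                        # hypothetical weighting functions Psi_i

r0 = []
for i, Bi in enumerate(B):
    Bs = Bi + Bi.T                      # B_i^* as in (8), negative definite here
    e = np.eye(2)[:, i]                 # canonical basis vector e_i
    z = c[i] - e @ A @ np.linalg.solve(Bs, A.T @ e) / 2   # z_i of Assertion 2
    r0.append(Psi[i](z))
r0 = np.array(r0)
```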

Let us denote by
${v}^{0}\in {R}^{m}$ some stationary point of the functional (5) in the case when the *r*-priority of the probing points is
${r}_{0}$. Correspondingly, we denote by
${G}_{0}\in {M}_{m,m}\left(R\right)$ the Hessian of the given functional calculated for the pair
$\left({r}_{0},{v}^{0}\right)$ and let

${G}_{i}:={B}_{i}^{*}+\left[{\partial}^{2}{f}_{i}^{3,m}\left(v,v,v\right)/\partial {v}_{g}\partial {v}_{p}|{}_{{v}^{0}}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=\stackrel{\xaf}{1,n}$.

Then for the admissible linear variation $\Delta r$ of vector ${r}_{0}=\text{col}\left({r}_{01},\cdots ,{r}_{0n}\right)$ coordinates, given (due to comments to formula (4)) by the region of this variation $W\subset {R}^{n}$ written as

$\Delta r:=\text{col}\left(\Delta {r}_{1},\cdots ,\Delta {r}_{n}\right)\in W$,

${r}_{i}={r}_{0i}+\Delta {r}_{i}>0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=\stackrel{\xaf}{1,n}\text{\hspace{0.17em}},$

the $\Delta r$ -parametric family of linear variations of the Hessian $G\left({v}^{0},{r}_{0}+\Delta r\right)$ is defined by a manifold of $m\times m$ matrices written as:

${G}_{0}+{\displaystyle \underset{i=1,\cdots ,n}{\sum}\Delta {r}_{i}{G}_{i}}\text{\hspace{0.17em}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Delta r\in W.$ (15)

By virtue of (7), the matrices of the family (15) are symmetric.

For the matrices of the manifold (15), the eigenvalues can be characterized via a series of optimization problems by means of the Courant-Fischer Theorem [14]. On the other hand, among the analytic applications of this theorem is the Weyl Theorem [11] on the relations between the characteristic numbers of the Hessian and any matrix from the manifold (15), which clarifies the geometric meaning of the constructions of the linear $\Delta r$ -correction $\Delta r\mapsto {\left({r}_{0}+\Delta r\right)}^{\text{T}}w\left(\omega +v\right)$ of the target functional (5) carried out below.

Taking into account the introduced constructions, the adaptive adjustment of the PC-process tribological quality functional $F\left(v\right)={r}^{\text{T}}w\left(\omega +v\right)$, which ensures that inequality (12) is fulfilled at the stationary point when the vector $r\in {R}^{n}$ is varied, is contained in Assertion 3 below. In essence, this assertion is a straightforward modification (in the version of the strong derivative $\text{d}G\left({v}^{0},r\right)/\text{d}r|{}_{{r}_{0}}$ ) of Theorem 6.3.12 [14] based on Theorem 2 ([8], p. 491) and Theorem 4.1.3 [14], which takes into account the structure of the manifold (15) as symmetric matrices.

Assertion 3. *Let $r={r}_{0}+\Delta r$, let $\left\{\left({\lambda}_{p}\left({r}_{0}\right),{x}_{p}\right),p=\stackrel{\xaf}{1,m}\right\}\subset R\times {R}^{m}$ be the set of eigenpairs of the Hessian ${G}_{0}$, i.e. ${\lambda}_{p}\left({r}_{0}\right){x}_{p}={G}_{0}{x}_{p},p=\stackrel{\xaf}{1,m}$, and, given the realization of the manifold* (15), *let the numbers*

${g}_{pi}={x}_{p}^{\text{T}}{G}_{i}{x}_{p}/{x}_{p}^{\text{T}}{x}_{p},\text{\hspace{0.17em}}\text{\hspace{0.17em}}p=\stackrel{\xaf}{1,m},\text{\hspace{0.17em}}i=\stackrel{\xaf}{1,n}$

*be defined.*

*Then the eigenvalues
${\lambda}_{p}\left({v}^{0},{r}_{0}+\Delta r\right),p=\stackrel{\xaf}{1,m}$ of the Hessian
$G\left({v}^{0},{r}_{0}+\Delta r\right)$ have the form *

$\begin{array}{l}{\lambda}_{1}\left({v}^{0},{r}_{0}+\Delta r\right)={\lambda}_{1}\left({r}_{0}\right)+{\displaystyle \underset{i=1,\cdots ,n}{\sum}{g}_{1i}\Delta {r}_{i}}+o\left({\Vert \Delta r\Vert}_{{R}^{n}}\right),\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\vdots \\ {\lambda}_{m}\left({v}^{0},{r}_{0}+\Delta r\right)={\lambda}_{m}\left({r}_{0}\right)+{\displaystyle \underset{i=1,\cdots ,n}{\sum}{g}_{mi}\Delta {r}_{i}}+o\left({\Vert \Delta r\Vert}_{{R}^{n}}\right).\end{array}$ (16)

System (16) estimates the sensitivity of the spectrum of the Hessian $G\left({v}^{0},{r}_{0}+\Delta r\right)$ to linear variations $\Delta {r}_{i},i=\stackrel{\xaf}{1,n}$ of the weight factors. For nonlinear variations one can refer to the recurrence formulas of item (b) ([16], p. 154), which can be computed symbolically using computer algebra. Of course, this analysis is approximate (valid for small ${\Vert \Delta r\Vert}_{{R}^{n}}$ ). It is especially efficient for the 2-valent model when $n=m$ (this equality is not difficult to arrange, given the relative variability of the number of PM indices).
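The first-order formulas (16) are easy to test against an exact eigenvalue computation. The matrices ${G}_{0},{G}_{i}$ below are symmetric stand-ins for the family (15), chosen for illustration only (a diagonal ${G}_{0}$ makes the unperturbed spectrum obvious):

```python
import numpy as np

m, n = 3, 2
G0 = np.diag([-1.0, -2.0, -4.0])        # eigenvalues known exactly
Gi = [np.array([[0.5, 0.1, 0.0],
                [0.1, 1.0, 0.2],
                [0.0, 0.2, -0.5]]),
      np.array([[1.0, 0.0, 0.3],
                [0.0, -1.0, 0.0],
                [0.3, 0.0, 2.0]])]      # symmetric G_i of the manifold (15)

lam0, X = np.linalg.eigh(G0)            # eigenpairs of G0 (x_p orthonormal)
g = np.array([[X[:, p] @ Gi[i] @ X[:, p] for i in range(n)] for p in range(m)])

dr = np.array([1e-5, -2e-5])            # small weight variations Delta r_i
lam = np.linalg.eigvalsh(G0 + sum(d * M for d, M in zip(dr, Gi)))
predicted = lam0 + g @ dr               # formula (16), up to o(||Delta r||)
```

The discrepancy between `lam` and `predicted` is of second order in `dr`, as the remainder term in (16) indicates.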

Corollary 2. *Let $k=2$, $n=m$, let $\Lambda \left({r}_{0}\right):=\text{col}\left({\lambda}_{1}\left({r}_{0}\right),\cdots ,{\lambda}_{m}\left({r}_{0}\right)\right)$ be the vector of characteristic numbers of the matrix $\left({r}_{01}{B}_{1}^{*}+\cdots +{r}_{0m}{B}_{m}^{*}\right)$ and ${\left\{{x}_{p}\right\}}_{p=\stackrel{\xaf}{1,m}}$ the corresponding eigenvectors. Moreover, let ${\Lambda}^{*}:=\text{col}\left({\lambda}_{1}^{*},\cdots ,{\lambda}_{m}^{*}\right)$ be a vector of characteristic numbers that are "benchmark/reference" by criterion* (12), *and $B:=\left[{b}_{pi}\right]$ an $m\times m$ matrix with elements*

${b}_{pi}={x}_{p}^{\text{T}}{B}_{i}^{*}{x}_{p}/{x}_{p}^{\text{T}}{x}_{p}$ *. *

*Then for ${r}_{0}+\Delta r$, where the variation vector has the representation $\Delta r={B}^{-1}\left({\Lambda}^{*}-\Lambda \left({r}_{0}\right)\right)$, the eigenvalues of the Hessian $G\left({r}_{0}+\Delta r\right)$ will be $o\left({\Vert \Delta r\Vert}_{{R}^{n}}\right)$ -close to the benchmark ${\left\{{\lambda}_{p}^{*}\right\}}_{p=\stackrel{\xaf}{1,m}}$.*
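A one-step numerical reading of Corollary 2 ($k=2$, $n=m=2$; the matrices ${B}_{i}^{*}$ and the benchmark spectrum are hypothetical):

```python
import numpy as np

Bs = [np.array([[-2.0, 0.3],
                [0.3, -1.0]]),
      np.array([[-1.0, 0.0],
                [0.0, -3.0]])]          # symmetrized matrices B_i^*
r0 = np.array([1.0, 1.0])

G0 = r0[0] * Bs[0] + r0[1] * Bs[1]
lam0, X = np.linalg.eigh(G0)            # Lambda(r0); columns x_p are orthonormal
target = lam0 - 0.01                    # benchmark spectrum Lambda* (hypothetical)

# matrix B with b_pi = x_p^T B_i^* x_p / x_p^T x_p
B = np.array([[X[:, p] @ Bs[i] @ X[:, p] for i in range(2)] for p in range(2)])
dr = np.linalg.solve(B, target - lam0)  # Delta r = B^{-1}(Lambda* - Lambda(r0))

lam_new = np.linalg.eigvalsh((r0[0] + dr[0]) * Bs[0] + (r0[1] + dr[1]) * Bs[1])
```

`lam_new` agrees with `target` up to $o\left({\Vert \Delta r\Vert}\right)$, as the corollary states, and the corrected weights remain positive.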

Note 3. Since Corollary 2 is valid for small ${\Vert \Delta r\Vert}_{{R}^{m}}$, the question remains whether the iterative computational process

${r}_{j}=\left({r}_{j-1}+\Delta {r}_{j-1}\right)\in {R}^{m},\text{\hspace{0.17em}}\text{\hspace{0.17em}}j=1,2,\cdots $,

constructed from the calculation $\Delta {r}_{j-1}={B}^{-1}\left({\Lambda}^{*}-\Lambda \left({r}_{j-1}\right)\right)$, will converge if the initial divergence ${\Vert {\Lambda}^{*}-\Lambda \left({r}_{0}\right)\Vert}_{{R}^{m}}$ is significant. Moreover, according to the structure of the target functional (5), at each iteration step *j* it is necessary (within the physical statement of Problem (4)) to check the coordinate conditions ${r}_{ij}>0,i=\stackrel{\xaf}{1,n}$ for the vector ${r}_{j}\in {R}^{m}$.

Note 4. For adaptive systems, the evaluation of input signals (in our case ${\Vert {v}_{\left(l\right)}\Vert}_{{R}^{m}}<1$ in (3)) is essential (which is why adaptive techniques with learning are used). In this context, it is important to obtain sufficient conditions for the adaptive system to have robust bounded solutions [20]; the very fact of the existence of solutions with these properties is more important (see (2)) than their specific form. Thus, a fixed parameter setting that provides qualitative (see (12)) control of the predictive system (1) and is not very sensitive to the exact parameter values can yield a range of admissible values of $\Delta r$, allowing us to determine the optimal values of *v* guaranteeing the target quality (4).

In the context of Note 3, let us give an upper bound for the perturbation of ${\Vert \Delta r\Vert}_{{R}^{m}}$. To this end, assume that ${\Vert \cdot \Vert}_{M}$ is a matrix norm on ${M}_{m,m}\left(R\right)$ consistent with the norm of the Euclidean space ${\Vert \cdot \Vert}_{{R}^{m}}$ and such that ${\Vert E\Vert}_{M}=1$, where $E\in {M}_{m,m}\left(R\right)$ is the identity matrix. For example, the scaled Frobenius norm

${\Vert D\Vert}_{F}:={\left({m}^{-1}{\displaystyle \sum {d}_{ij}^{2}}\right)}^{1/2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}D=\left[{d}_{ij}\right]\in {M}_{m,m}\left(R\right)$,

or the spectral (induced) matrix norm

${\Vert D\Vert}_{S}:=\mathrm{sup}\left\{{\Vert Dx\Vert}_{{R}^{m}}:x\in {R}^{m},{\Vert x\Vert}_{{R}^{m}}=1\right\}=\underset{1\le i\le m}{\mathrm{max}}{\lambda}_{i}^{1/2}\left({D}^{\text{T}}D\right)$

can serve as such.
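Both norms, with the scaling that makes ${\Vert E\Vert}_{M}=1$, are one-liners; the test matrix below is hypothetical:

```python
import numpy as np

def frob_scaled(D):
    """The scaled Frobenius norm above: (m^{-1} * sum d_ij^2)^{1/2}, so ||E|| = 1."""
    return np.sqrt((D ** 2).sum() / D.shape[0])

def spectral(D):
    """Spectral norm: max_i lambda_i^{1/2}(D^T D), i.e. the largest singular value."""
    return np.linalg.norm(D, 2)

E = np.eye(3)
D = np.array([[3.0, 0.0, 0.0],
              [0.0, 4.0, 0.0],
              [0.0, 0.0, 0.0]])         # singular values 4, 3, 0
```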

Returning to Corollary 2, we have $B\Delta r={\Lambda}^{*}-\Lambda \left({r}_{0}\right)$, $\mathrm{det}B\ne 0$. Now suppose that the vector ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)$ turns into a perturbed vector ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)+\delta $ (in particular, due to the terms $o\left({\Vert \Delta r\Vert}_{{R}^{m}}\right)$ of system (16)), and the matrix *B* turns into $B+D$. Then the vector $\Delta r$ receives (by a modification of Corollary 2) some increment $\theta $, passing to the value $\Delta r+\theta $, which satisfies the equation $\left(B+D\right)\left(\Delta r+\theta \right)={\Lambda}^{*}-\Lambda \left({r}_{0}\right)+\delta $.

Obviously, $\delta \in {R}^{m}$ and $D\in {M}_{m,m}\left(R\right)$ model, respectively, the perturbations of the vector ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)$ and the inaccuracy of the parametric estimation of the matrix *B* (if ${\Vert D\Vert}_{M}{\Vert {B}^{-1}\Vert}_{M}<1$, then ${\Vert D\Vert}_{M}<{\Vert B\Vert}_{M}$; see [21], p. 197). The upper bound for the relative perturbation ${\Vert \theta \Vert}_{{R}^{m}}/{\Vert \Delta r\Vert}_{{R}^{m}}$ is formulated in Corollary 3. For the technical details of the accompanying calculations using the condition number of a matrix, see the popular (among graduate students) monograph ([21], p. 197).

Corollary 3. *Let the assumptions of Corollary* 2 *hold, and let $s\left(B\right):={\Vert B\Vert}_{M}{\Vert {B}^{-1}\Vert}_{M}$ be the condition number of the matrix B, where ${\Vert \cdot \Vert}_{M}$ is the norm ${\Vert \cdot \Vert}_{F}$ or ${\Vert \cdot \Vert}_{S}$. Then the following estimate holds for $\theta ,\Delta r$*

$\begin{array}{c}{\Vert \theta \Vert}_{{R}^{m}}/{\Vert \Delta r\Vert}_{{R}^{m}}\le s\left(B\right){\left(1-s\left(B\right){\Vert D\Vert}_{M}/{\Vert B\Vert}_{M}\right)}^{-1}\\ \text{\hspace{0.17em}}\times \left({\Vert \delta \Vert}_{{R}^{m}}/{\Vert {\Lambda}^{*}-\Lambda \left({r}_{0}\right)\Vert}_{{R}^{m}}+{\Vert D\Vert}_{M}/{\Vert B\Vert}_{M}\right).\end{array}$

*If ${\Vert \cdot \Vert}_{M}={\Vert \cdot \Vert}_{S}$ and ${\lambda}_{1},{\lambda}_{m}$ are, respectively, the smallest and the largest eigenvalues of the matrix ${B}^{\text{T}}B$, then in the last inequality one can take $s\left(B\right)={\left({\lambda}_{m}/{\lambda}_{1}\right)}^{1/2}$.*
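Corollary 3 is the classical perturbation bound for a linear system, and it can be checked numerically; the matrix *B* and the perturbations *D*, $\delta$ below are hypothetical:

```python
import numpy as np

B = np.array([[2.0, 0.3],
              [0.1, 1.5]])
rhs = np.array([1.0, -0.5])             # plays the role of Lambda* - Lambda(r0)
dr = np.linalg.solve(B, rhs)            # unperturbed Delta r

D = np.array([[1e-3, -2e-3],
              [0.0, 1e-3]])             # perturbation of B
delta = np.array([2e-3, -1e-3])         # perturbation of the right-hand side
theta = np.linalg.solve(B + D, rhs + delta) - dr

s = np.linalg.cond(B, 2)                # condition number in the spectral norm
rho = np.linalg.norm(D, 2) / np.linalg.norm(B, 2)
bound = s / (1.0 - s * rho) * (np.linalg.norm(delta) / np.linalg.norm(rhs) + rho)
rel = np.linalg.norm(theta) / np.linalg.norm(dr)
```

The bound is valid whenever $s\left(B\right){\Vert D\Vert}/{\Vert B\Vert}<1$, which the sketch verifies before comparing `rel` with `bound`.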

Note 5. The construction of the condition number $s\left(B\right)={\left({\lambda}_{m}/{\lambda}_{1}\right)}^{1/2}$ obtained using the spectral norm ${\Vert \cdot \Vert}_{S}$ is transparent due to the equality $s\left(B\right)={\Vert B\Vert}_{S}{\Vert {B}^{-1}\Vert}_{S}$.

Alternative approaches [22] [23] [24], including a deeper insight (via computer algebra [6]) into the physical content of nonlinear PC modeling, can be used to account for interferences other than those covered by Corollary 3.

5. Conclusions

The aim of this article, developing the results of [5], was to point out the connection between the problem of determining the Hessian matrix function at the stationary point of the target functional (5) and the vector *r* of weight factors in (5), which reflects the priority among the ${w}_{i}$ -modeled predictions of the target tribological PM indicators. In this context, Assertion 1 and its Corollary 1 show that, unlike the 3-valent regression-tensor model, in the 2-valent one the Hessian $G\left(v,r\right)$ is invariant to the position of the stationary point. In both cases, one can identify the *r*-dependence of the spectrum of the Hessian $G\left(v,r\right)$ on the basis of the nonlinear multivariate regression PM model of the PC hardening mode of composite metal coatings identified within the LSM problem (II).

Assertion 3 essentially addressed the question: what can be said about the eigenvalues of the matrix ${G}_{0}+{\displaystyle {\sum}_{i=1,\cdots ,n}\Delta {r}_{i}{G}_{i}}$ if each variation $\Delta {r}_{i}$ is a small parameter? We were thus interested only in the purely formal aspect of the mathematical modeling problem under study and did not consider how small the increment $\Delta {r}_{i}$ must actually be for the term “small parameter” to be relevant. The result of Assertion 3 rests on the fact that the eigenvalues (12) depend smoothly on *r* through the elements of the Hessian $G\left(v,r\right)$ during the current parametric *r*-correction of the target functional (5). However, some information is lost when we work only with the characteristic polynomial, because many different matrices share a given characteristic polynomial. It is therefore not surprising that the stronger results on modeling the spectrum of the Hessian $G\left(v,r\right)$, in particular Assertion 3 and Corollary 2, take into account the structure of $G\left(v,r\right)$ itself. These results admit technical simplifications by means of specialized computer algebra, proceeding from the geometric fact that any Hessian matrix is orthogonally similar to a real diagonal matrix.
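The smooth dependence of the spectrum on small variations $\Delta {r}_{i}$ can be checked numerically. In the sketch below the matrices ${G}_{0},{G}_{1},{G}_{2}$ are assumed toy data; standard first-order perturbation theory for simple eigenvalues predicts ${\lambda}_{j}\left({r}_{0}+\Delta r\right)\approx {\lambda}_{j}+{v}_{j}^{\text{T}}\left({\sum}_{i}\Delta {r}_{i}{G}_{i}\right){v}_{j}$, with a discrepancy of second order in $\Vert \Delta r\Vert$:

```python
import numpy as np

# Assumed toy matrices: symmetric G_0 with a well-separated spectrum and
# symmetric "direction" matrices G_1, G_2.
G0 = np.diag([1.0, 2.0, 4.0, 7.0])
G1 = np.array([[0.0, 1.0, 0.0, 0.0],
               [1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0],
               [0.0, 0.0, 1.0, 0.0]])
G2 = np.eye(4)

dr = np.array([1e-4, -2e-4])       # small variations Delta r_i
dG = dr[0] * G1 + dr[1] * G2       # sum_i Delta r_i G_i

lam0, V = np.linalg.eigh(G0)       # unperturbed spectrum, orthonormal eigenvectors

# First-order prediction for each eigenvalue.
lam_first = lam0 + np.array([V[:, j] @ dG @ V[:, j] for j in range(4)])
lam_exact = np.linalg.eigvalsh(G0 + dG)

# The residual is O(||dr||^2), far below the O(||dr||) shift itself.
assert np.max(np.abs(lam_exact - lam_first)) < 1e-7
```

With eigenvalue gaps of order one and $\Vert \Delta r\Vert \sim {10}^{-4}$, the second-order residual is of order ${10}^{-8}$, confirming the smooth *r*-dependence that Assertion 3 exploits.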

Numerical methods for finding eigenvalues and eigenvectors form one of the most important parts of matrix theory. The analysis of the vector ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)$ and the matrix *B* from Corollary 2 has not touched on this topic above, but Corollary 3 gives an upper estimate for the perturbation $\Delta r$ in terms of the relative perturbations of ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)$ and *B* and the condition number $s\left(B\right)$. The number $s\left(B\right)$ enters the estimate in all cases: whether the perturbations occur only in ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)$, only in *B*, or in both at the same time.
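The role of $s\left(B\right)$ in such estimates follows the classical linear-algebra bound: if $B\Delta r=b$ and the right-hand side is perturbed by $\delta b$, then $\Vert \delta \left(\Delta r\right)\Vert /\Vert \Delta r\Vert \le s\left(B\right)\Vert \delta b\Vert /\Vert b\Vert$. A check with a hypothetical *B* and a right-hand side standing in for ${\Lambda}^{*}-\Lambda \left({r}_{0}\right)$:

```python
import numpy as np

# Hypothetical data: a symmetric positive-definite 2x2 matrix B and a
# right-hand side b standing in for Lambda* - Lambda(r_0).
B = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
db = np.array([1e-3, -2e-3])      # perturbation of the right-hand side

dr = np.linalg.solve(B, b)        # unperturbed solution
d = np.linalg.solve(B, db)        # by linearity, the induced solution perturbation

s_B = np.linalg.cond(B, 2)        # spectral condition number of B

lhs = np.linalg.norm(d) / np.linalg.norm(dr)
rhs = s_B * np.linalg.norm(db) / np.linalg.norm(b)
assert lhs <= rhs + 1e-15         # the condition-number bound holds
```

The bound is deterministic: it follows from $\Vert \delta \left(\Delta r\right)\Vert \le \Vert {B}^{-1}\Vert \Vert \delta b\Vert$ and $\Vert b\Vert \le \Vert B\Vert \Vert \Delta r\Vert$, so a well-conditioned *B* (small $s\left(B\right)$) guarantees that small data perturbations cannot be strongly amplified.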

Finally, we point out another (essentially cybernetic) approach to the adaptive correction $r\mapsto {r}^{\text{T}}w\left(\omega +v\right)$, related to the use of sufficient robust-stability conditions for the 2-valent model of the matrix $G\left(r\right)$, which also leads to conditions (12). In this context, it is required that, under interval tolerances on the coordinates of the vector *r*, one can construct a Lyapunov function $V\left(x\right)={x}_{p}^{\text{T}}P{x}_{p}$, where $P\in {M}_{m,m}\left(R\right)$ is a symmetric positive-definite matrix for which the Lyapunov equation $G\left(r\right)P+PG\left(r\right)=-Q$ has a solution for a given symmetric positive-definite $m\times m$ matrix *Q*. The transition to adaptive robust quadratic stability [19] and methods of its solution are proposed in [20] [23]. Owing to the abundance of its computational problems and the opportunities it opens for applications of nonlinear multivariate regression-tensor analysis, this theory may acquire great importance in problems of precision multifactor nonlinear optimization of the PC processes of hardening complex composite metal coatings and alloys [25].
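A minimal numerical sketch of the Lyapunov-equation test, assuming a toy stable (Hurwitz) symmetric matrix in the role of $G\left(r\right)$ and $Q=I$; SciPy's `solve_continuous_lyapunov(a, q)` solves $aX+X{a}^{\text{H}}=q$, so $GP+PG=-Q$ corresponds to `a = G`, `q = -Q`:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed toy data: symmetric G with negative eigenvalues (Hurwitz), Q = I.
G = np.array([[-3.0, 1.0],
              [1.0, -2.0]])
Q = np.eye(2)

# Solve G P + P G = -Q (symmetric G, so G^H = G).
P = solve_continuous_lyapunov(G, -Q)

# For V(x) = x^T P x to be a Lyapunov function, P must be symmetric
# positive-definite and satisfy the equation.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(G @ P + P @ G, -Q)
```

For symmetric negative-definite *G* the solution is available in closed form as $P=-\frac{1}{2}{G}^{-1}$, which makes the positive-definiteness of *P* transparent; in the interval-tolerance setting the same check would be repeated over the vertices of the *r*-box.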

Acknowledgements

The research was carried out with funding from the Ministry of Education and Science of the Russian Federation (project: 121041300056-7).

References

[1] Stapleton, J.H. (1995) Linear Statistical Models. Wiley, New York, 467 p. https://doi.org/10.1002/9780470316924

[2] Draper, N.R. and Smith, H. (1998) Applied Regression Analysis. John Wiley & Sons, New York; Williams, Moscow, 2007, 912 p. (In Russian)

[3] Ross, G.J. (1990) Nonlinear Estimation. Springer-Verlag, New York, 237 p. https://doi.org/10.1007/978-1-4612-3412-8

[4] Rusanov, V.A., Agafonov, S.V., Daneev, A.V. and Lyamin, S.V. (2014) Computer Modeling of Optimal Technology in Materials Engineering. Lecture Notes in Electrical Engineering, Vol. 307, Springer, Berlin, 279-286. https://doi.org/10.1007/978-3-319-03967-1_22

[5] Rusanov, V.A., Agafonov, S.V., Dumnov, S.N., Daneev, A.V. and Lyamin, S.V. (2014) Regression-Tensor Modeling of Multivariate Optimization of Process for Applying Metal-Coatings. Journal of Applied Mathematics and Physics, No. 2, 1207-1223. https://doi.org/10.4236/jamp.2014.213142

[6] Agafonov, S.V., Sharpinskiy, D.Y., Rusanov, V.A. and Udilov, T.V. (2008) Hybrid Regression Complex “GREEK”. Certificate of the Federal Service for Intellectual Property, Patents and Trademarks of the Registration of a Computer Program, No. 2008614737.

[7] Akivis, M.A. and Goldberg, V.V. (1972) Tensor Calculus. Nauka, Moscow, 352 p. (In Russian)

[8] Kolmogorov, A.N. and Fomin, S.V. (1976) Elements of the Theory of Functions and Functional Analysis. Nauka, Moscow, 544 p. (In Russian)

[9] Khomich, V.Yu. and Shmakov, V.A. (2012) Formation of Periodic Nano Dimensional Structures on the Surface of Rigid Bodies under Phase and Structural Transformation. Reports of the Academy of Sciences, 446, 276-278. (In Russian)

[10] Gerasimov, S.A., Kuksenova, L.I., Lapteva, V.T., et al. (2014) Improvement of Mechanical Characteristics of Heat-Stability Steels by the Method of Activation of the Process of Nitriding. Problems of Mechanical Engineering and Reliability of Machines, No. 2, 90-96. (In Russian)

[11] Trukhanov, V.M. (2013) Forecasting of Resource for Parts, Assemblies and Mechanisms and for an Engineering Object on the Whole on the Design Stage. Problems of Mechanical Engineering and Reliability of Machines, No. 3, 38-42. (In Russian)

[12] Yakovlev, N.N., Lukashev, E.A. and Radkevich, E.V. (2012) Investigation of the Process of Controlled Crystallization by the Method of Mathematical Reconstruction. Reports of the Academy of Sciences, 445, 398-401. (In Russian)

[13] Gilev, V.G., Bezmaternykh, N.V. and Morozov, E.A. (2014) Investigation of the Microstructure and Micro-Hardness of Pseudo Alloy Steel-Copper after Laser Thermal Working. Physical Metallurgy and Thermal Working of Metals, No. 5, 34-39. (In Russian)

[14] Horn, R.A. and Johnson, C.R. (1986) Matrix Analysis. Cambridge University Press, Cambridge; Mir, Moscow, 1989, 656 p. (In Russian)

[15] Kärger, J., Grinberg, F. and Heitjans, P. (2005) Diffusion Fundamentals. Leipziger University, Leipzig, 615 p.

[16] Kostrikin, A.I. and Manin, Y.I. (1986) Linear Algebra and Geometry. Nauka, Moscow, 304 p. (In Russian)

[17] Statnikov, R.B. and Matusov, I.B. (2012) On Solving the Problem of Multicriterial Identification and Finishing of Test Samples. Problems of Mechanical Engineering and Reliability of Machines, No. 5, 20-29. (In Russian)

[18] Sarychev, A.P. (2013) Modeling in the Class of Systems of Regression Equations on the Basis of the Method of Group Account of the Arguments. Problems of Control and Informatics, No. 2, 8-24. (In Russian)

[19] Polyak, B.T. and Shcherbakov, P.S. (2002) Robustness Stability and Control. Nauka, Moscow, 304 p. (In Russian)

[20] Ackermann, J. (1993) Robust Control: Systems with Uncertain Physical Parameters. Springer-Verlag, New York, 404 p. https://doi.org/10.1007/978-1-4471-3365-0

[21] Lancaster, P. (1969) Theory of Matrices. Academic Press, London; Nauka, Moscow, 1982, 270 p. (In Russian)

[22] Boyd, S., El Ghaoui, L., Feron, E. and Balakrishnan, V. (1994) Linear Matrix Inequalities in Systems and Control Theory. SIAM, Philadelphia, 193 p. https://doi.org/10.1137/1.9781611970777

[23] Kreinovich, V., Lakeyev, A.V., Rohn, J. and Kahl, P. (1998) Computational Complexity and Feasibility of Data Processing and Interval Computations. Kluwer, Dordrecht, 472 p. https://doi.org/10.1007/978-1-4757-2793-7

[24] Calafiore, G. and Polyak, B.T. (2001) Stochastic Algorithms for Exact and Approximate Feasibility of Robust LMIs. IEEE Transactions on Automatic Control, 46, 1755-1759. https://doi.org/10.1109/9.964685

[25] Agafonov, S.V., Daneyev, A.V. and Lyamin, S.V. (2013) Mechanical Properties of Ultradisperse Alloys: State of Development and Prospects. Modern Technologies. System Analysis. Modeling, No. 3, 220-230.