ISSUE 181, article 4
DOI: https://doi.org/10.15407/kvt181.01.043
Kibern. vyčisl. teh., 2015, Issue 181, pp.
Zhiteckii L.S., Nikolaienko S.A., Solovchuk K.Yu.
International Research and Training Center for Information Technologies and Systems of the National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Kiev, Ukraine
ADAPTATION AND LEARNING IN SOME CLASSES OF IDENTIFICATION AND CONTROL SYSTEMS
Introduction. The paper studies the asymptotic properties of the standard discrete-time online gradient learning algorithm in a two-layer neural network model of an uncertain nonlinear system to be identified. The paper also addresses the design of a discrete-time adaptive closed-loop system containing a linear multivariable memoryless plant with a possibly singular but unknown gain matrix in the presence of unmeasurable bounded disturbances whose bounds are unknown. It is assumed that the learning process in the neural network model takes place in a stochastic environment, whereas the adaptation of the plant model in the control system is based on a non-stochastic description of the external environment.
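For orientation, the two settings treated in the paper can be written compactly as follows (a minimal sketch in our own notation, which may differ from the paper's symbols):

    % Identification: an unknown nonlinearity approximated by a two-layer
    % neural network g(x, w); standard online gradient update of the weights:
    \[
      w_{n+1} = w_n - \gamma_n \nabla_w E_n(w_n),
      \qquad
      E_n(w) = \tfrac{1}{2}\bigl(y_n - g(x_n, w)\bigr)^2 .
    \]
    % Control: a linear multivariable memoryless plant with an unknown,
    % possibly singular gain matrix B and a bounded disturbance \xi_n:
    \[
      y_n = B u_n + \xi_n, \qquad \|\xi_n\| \le \bar{\xi}
      \quad (\bar{\xi}\ \text{unknown}).
    \]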
The purpose of the paper is to establish global convergence conditions for the online gradient learning algorithm in the neural network model by means of probabilistic asymptotic analysis, and to derive a convergent adaptive control algorithm that guarantees the boundedness of all signals in a closed-loop system containing a multivariable memoryless plant with an arbitrary gain matrix in the presence of unmeasurable disturbances whose bounds are unknown.
Results. The Lyapunov function approach is utilized as a suitable tool for analyzing the asymptotic behavior of both the online gradient learning algorithm in neural network identification systems and the adaptive gradient algorithm in certain closed-loop control systems. Within this approach, two groups of global sufficient conditions guaranteeing the convergence of the online gradient learning algorithm in the neural network model with probability 1 are obtained. The first group defines the requirements under which the algorithm converges almost surely with a constant learning rate. This asymptotic property holds in the ideal case, where the nonlinearity to be identified can be described exactly by a neural network model. The second group of convergence conditions shows that the same property can also be achieved in the non-ideal case. It turns out that adding a penalty term to the current error function is not necessary to guarantee this property. It is also established that, even in the worst case, where the gain matrix of the multivariable plant is unknown and may be singular and the bounds on the arbitrary unmeasurable disturbances remain unknown, the convergence of the gradient adaptation algorithm and the boundedness of all signals in the adaptive closed-loop system can be ensured.
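To illustrate the kind of algorithm whose convergence is analyzed, the following is a minimal sketch of online (per-sample) gradient learning with a constant learning rate in a two-layer network; all names, dimensions, and the sigmoidal activation are our illustrative assumptions, not the paper's construction:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class TwoLayerNet:
        # One-hidden-layer network y = w^T sigmoid(V x) trained by the
        # standard online gradient (backpropagation) rule.
        def __init__(self, n_in, n_hidden, rng):
            self.V = 0.1 * rng.standard_normal((n_hidden, n_in))  # input-to-hidden weights
            self.w = 0.1 * rng.standard_normal(n_hidden)          # hidden-to-output weights

        def online_step(self, x, y, gamma):
            h = sigmoid(self.V @ x)                           # hidden-layer activations
            e = self.w @ h - y                                # current output error
            grad_w = e * h                                    # dE/dw for E = 0.5 * e^2
            grad_V = e * np.outer(self.w * h * (1.0 - h), x)  # dE/dV via the chain rule
            self.w -= gamma * grad_w                          # constant-rate gradient updates
            self.V -= gamma * grad_V
            return 0.5 * e * e

    # Identification of a noisy nonlinearity in a stochastic environment.
    rng = np.random.default_rng(0)
    net = TwoLayerNet(n_in=2, n_hidden=8, rng=rng)
    gamma = 0.05                                              # constant learning rate
    for n in range(20000):
        x = rng.uniform(-1.0, 1.0, 2)
        y = np.sin(np.pi * x[0]) * x[1] + 0.01 * rng.standard_normal()
        net.online_step(x, y, gamma)

When the target above cannot be represented exactly by the network, the learning proceeds in the non-ideal case, which is the situation covered by the second group of convergence conditions.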
Conclusions. To guarantee the global convergence of the online learning algorithm in the neural network identification system with probability 1, certain conditions must be satisfied. Moreover, the boundedness of all signals in the closed-loop adaptive control system containing a multivariable memoryless plant whose gain matrix is unknown and possibly singular can be achieved even if the bounds on the unmeasurable disturbances are unknown.
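To fix ideas on the control side, here is a toy closed loop for the memoryless plant y_n = B u_n + ξ_n with a singular gain matrix, combining a projection-type estimate of B with a pseudoinverse control increment in the spirit of the generalized inverse approach of [28, 31]; the specific update rules below are our illustrative choices, not the paper's algorithm:

    import numpy as np

    rng = np.random.default_rng(1)
    B_true = np.array([[1.0, 2.0],
                       [0.5, 1.0]])   # singular (rank-1) plant gain, unknown to the controller
    y_star = np.array([1.0, 0.5])     # setpoint, chosen reachable: it lies in the range of B_true
    B_hat = np.eye(2)                 # initial gain estimate
    u = np.zeros(2)
    u_prev = y_prev = None

    for n in range(200):
        y = B_true @ u + 0.01 * rng.uniform(-1.0, 1.0, 2)   # bounded disturbance, bound unknown
        if u_prev is not None:
            du, dy = u - u_prev, y - y_prev
            if du @ du > 1e-12:
                # Projection (Kaczmarz-type) gradient update; afterwards B_hat @ du = dy.
                B_hat = B_hat + np.outer(dy - B_hat @ du, du) / (du @ du)
        u_prev, y_prev = u.copy(), y.copy()
        u = u + np.linalg.pinv(B_hat) @ (y_star - y)        # pseudoinverse control increment

Since a singular B has no exact inverse, the Moore–Penrose pseudoinverse picks the minimum-norm control increment, which is why reachable setpoints can still be tracked to within the disturbance level.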
Keywords: neural network, gradient learning algorithm, convergence, multivariable memoryless plant, adaptive control algorithm, boundedness of the signals.
References
1 Tsypkin Ya.Z. Adaptation and Learning in Automatic Systems. N.Y.: Academic Press, 1971.
2 Tsypkin Ya.Z. Foundations of the Theory of Learning Systems. N.Y.: Academic Press, 1973.
3 Kuntsevich V.M. Control under Uncertainty Conditions: Guaranteed Results in Control and Identification Problems. Kiev: Nauk. dumka, 2006. (in Russian).
4 Zhiteckii L.S. and Skurikhin V.I. Adaptive Control Systems with Parametric and Nonparametric Uncertainties. Kiev: Nauk. dumka, 2010. (in Russian).
5 Suykens J. and Moor B.D. Nonlinear system identification using multilayer neural networks: some ideas for initial weights, number of hidden neurons and error criteria. In Proc. 12th IFAC World Congress, 1993, vol. 3, pp. 49–52. https://doi.org/10.1016/S1474-6670(17)48485-0
6 Kosmatopoulos E.S., Polycarpou M.M., Christodoulou M.A. and Ioannou P.A. High-order neural network structures for identification of dynamical systems. IEEE Trans. on Neural Networks, 1995, vol. 6, pp. 422–431. https://doi.org/10.1109/72.363477
7 Levin A.U. and Narendra K.S. Recursive identification using feedforward neural networks. Int. J. Control, 1995, vol. 61, pp. 533–547. https://doi.org/10.1080/00207179508921916
8 Tsypkin Ya.Z., Mason J.D., Avedyan E.D., Warwick K. and Levin I.K. Neural networks for identification of nonlinear systems under random piecewise polynomial disturbances. IEEE Trans. on Neural Networks, 1999, vol. 10, pp. 303–311. https://doi.org/10.1109/72.750559
9 Behera L., Kumar S., and Patnaik A. On adaptive learning rate that guarantees convergence in feedforward networks. IEEE Trans. on Neural Networks, 2006, vol. 17, pp. 1116–1125. https://doi.org/10.1109/TNN.2006.878121
10 White H. Some asymptotic results for learning in single hidden-layer feedforward network models. J. Amer. Statist. Assoc., 1989, vol. 84, pp. 1003–1013.
11 Kuan C.M. and Hornik K. Convergence of learning algorithms with constant learning rates. IEEE Trans. on Neural Networks, 1991, vol. 2, pp. 484–489. https://doi.org/10.1109/72.134285
12 Luo Z. On the convergence of the LMS algorithm with adaptive learning rate for linear feedforward networks. Neural Comput., 1991, vol. 3, pp. 226–245. https://doi.org/10.1162/neco.1991.3.2.226
13 Finnoff W. Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima. Neural Comput., 1994, vol. 6, pp. 285–295. https://doi.org/10.1162/neco.1994.6.2.285
14 Gaivoronski A.A. Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Optim. Methods Software, 1994, vol. 4, pp. 117–134. https://doi.org/10.1080/10556789408805582
15 Fine T.L. and Mukherjee S. Parameter convergence and learning curves for neural networks. Neural Comput., 1999, vol. 11, pp. 749–769. https://doi.org/10.1162/089976699300016647
16 Tadic V. and Stankovic S. Learning in neural networks by normalized stochastic gradient algorithm: Local convergence. In Proc. 5th Seminar Neural Netw. Appl. Electr. Eng., 2000, pp. 11–17. https://doi.org/10.1109/NEUREL.2000.902375
17 Zhang H., Wu W., Liu F. and Yao M. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Trans. on Neural Networks, 2009, vol. 20, pp. 1050–1054. https://doi.org/10.1109/TNN.2009.2020848
18 Mangasarian O.L. and Solodov M.V. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optim. Methods Software, 1994, vol. 4, pp. 103–116. https://doi.org/10.1080/10556789408805581
19 Wu W., Feng G. and Li X. Training multilayer perceptrons via minimization of ridge functions. Advances in Comput. Mathematics, 2002, vol. 17, pp. 331–347. https://doi.org/10.1023/A:1016249727555
20 Zhang N., Wu W. and Zheng G. Convergence of gradient method with momentum for two-layer feedforward neural networks. IEEE Trans. on Neural Networks, 2006, vol. 17, pp. 522–525. https://doi.org/10.1109/TNN.2005.863460
21 Wu W., Feng G., Li X. and Xu Y. Deterministic convergence of an online gradient method for BP neural networks. IEEE Trans. on Neural Networks, 2005, vol. 16, pp. 1–9. https://doi.org/10.1109/TNN.2005.844903
22 Xu Z.B., Zhang R. and Jing W.F. When does online BP training converge? IEEE Trans. on Neural Networks, 2009, vol. 20, pp. 1529–1539. https://doi.org/10.1109/TNN.2009.2025946
23 Shao H., Wu W. and Liu L. Convergence and monotonicity of an online gradient method with penalty for neural networks. WSEAS Trans. Math., 2007, vol. 6, pp. 469–476.
24 Ellacott S.W. The numerical analysis approach. In Mathematical Approaches to Neural Networks (Taylor J.G., ed.; Elsevier Science Publishers B.V.), 1993, pp. 103–137. https://doi.org/10.1016/S0924-6509(08)70036-9
25 Skantze F.P., Kojic A., Loh A.P. and Annaswamy A.M. Adaptive estimation of discrete time systems with nonlinear parameterization. Automatica, 2000, vol. 36, pp. 1879–1887. https://doi.org/10.1016/S0005-1098(00)00106-0
26 Loeve M. Probability Theory. N.Y.: Springer-Verlag, 1963.
27 Zhiteckii L.S., Azarskov V.N. and Nikolaienko S.A. Convergence of learning algorithms in neural networks for adaptive identification of nonlinearly parameterized systems. In Proc. 16th IFAC Symposium on System Identification, 2012, pp. 1593–1598. https://doi.org/10.3182/20120711-3-BE-2027.00150
28 Skurikhin V.I., Gritsenko V.I., Zhiteckii L.S. and Solovchuk K.Yu. Generalized inverse operator method in the problem of optimal controlling linear interconnected static plants. Dopovidi Natsionalnoi Akademii Nauk Ukrainy, 2014, no. 8, pp. 57–66. (in Russian).
29 Fomin V.N., Fradkov A.L. and Yakubovich V.A. Adaptive Control of Dynamic Systems. Moscow: Nauka, 1981. (in Russian).
30 Goodwin G.C. and Sin K.S. Adaptive Filtering, Prediction and Control. Englewood Cliffs, NJ: Prentice-Hall, 1984.
31 Azarskov V.N., Zhiteckii L.S. and Solovchuk K.Yu. Adaptive robust control of multivariable static plants with possibly singular transfer matrix. Electronics and Control Systems, 2013, no. 4, pp. 47–53.
32 Polyak B.T. Convergence and convergence rate of iterative stochastic algorithms, I: General case. Autom. Remote Control, 1976, vol. 37, pp. 1858–1868.
33 Marcus M. and Minc H. A Survey of Matrix Theory and Matrix Inequalities. Boston: Allyn & Bacon, 1964.
Received 06.07.2015