Issue 2 (192), article 1

DOI: https://doi.org/10.15407/kvt192.02.005

Kibern. vyčisl. teh., 2018, Issue 2 (192), pp.

Fainzilberg L.S.1, Dr (Engineering), Professor,
Chief Researcher of the Department of Intelligent Automatic Systems
e-mail: fainzilberg@gmail.com
Matushevych N.A.2, Master student,
Faculty of Biomedical Engineering
e-mail: natalie.matushevych@gmail.com
1International Research and Training Center for Information Technologies
and Systems of the National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Acad. Glushkova av., 40, Kiev, 03187, Ukraine
2The National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute», Peremohy av., 37, Kiev, 03056, Ukraine

COMPARATIVE EVALUATION OF THE CONVERGENCE SPEED OF LEARNING ALGORITHMS FOR LINEAR CLASSIFIERS BY THE METHOD OF STATISTICAL EXPERIMENTS

Introduction. One of the main tasks of artificial intelligence is pattern recognition, which often reduces to determining the parameters of a discriminant function in a multidimensional feature space. When the recognized objects can be completely separated by a linear discriminant function, the task reduces to training a linear classifier. Many algorithms exist for training linear classifiers; two of them are the Rosenblatt learning algorithm and the Kozinets algorithm.
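For concreteness, the classical Rosenblatt correction rule admits a compact sketch. The Python below is a minimal illustration only: the augmented bias feature, the {-1, +1} labels, and the `max_iter` cap are conventions of this sketch rather than details taken from the article, and the sample is assumed linearly separable.

```python
import numpy as np

def rosenblatt_train(X, y, max_iter=100_000):
    """Rosenblatt's perceptron rule: while some point is misclassified,
    add its sign-corrected feature vector to the weights.
    Returns (w, corrections) or None if the cap is exhausted."""
    Xa = np.hstack([X, np.ones((len(X), 1))])    # append 1 so the bias lives in w
    w = np.zeros(Xa.shape[1])
    for k in range(max_iter):
        mis = np.flatnonzero(y * (Xa @ w) <= 0)  # points violating y_i * <w, x_i> > 0
        if mis.size == 0:
            return w, k                          # every point is on its correct side
        w = w + y[mis[0]] * Xa[mis[0]]           # one correction step
    return None
```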
The purpose of the article is to investigate the properties of the Rosenblatt and Kozinets learning algorithms on the basis of statistical experiments performed by the Monte Carlo method.
Methods. Two algorithms for training linear classifiers were studied: the Rosenblatt algorithm and the Kozinets algorithm. A series of experiments was performed to compare the convergence rates of the algorithms for different numbers of points and for different point configurations. The variation in the number of iterations each algorithm spends on samples of different sizes was also analyzed.
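Under the same conventions as the sketch above, the Kozinets update can be read as follows (this is the textbook formulation, as in Schlesinger and Hlavac [8], where the current vector is replaced by the minimum-norm point of the segment joining it to a violating sign-corrected sample point; an illustration, not the authors' implementation):

```python
import numpy as np

def kozinets_train(X, y, max_iter=100_000):
    """Kozinets's rule: keep w inside the convex hull of the sign-corrected
    points z_i = y_i * x_i; while some z_i has <w, z_i> <= 0, replace w by
    the point of the segment [w, z_i] nearest to the origin."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    Z = y[:, None] * Xa                  # sign-corrected training points
    w = Z[0].astype(float)               # start from any point of the hull
    for k in range(max_iter):
        viol = np.flatnonzero(Z @ w <= 0)
        if viol.size == 0:
            return w, k                  # w strictly separates the sample
        z = Z[viol[0]]
        d = w - z
        t = np.clip((w @ d) / (d @ d), 0.0, 1.0)  # argmin_t ||(1 - t) * w + t * z||
        w = (1.0 - t) * w + t * z
    return None
```

Each update can only shorten w, which is what underlies the algorithm's convergence guarantee on separable samples.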
Results. Statistical experiments have shown that for small sample sizes the convergence rates of the Rosenblatt and Kozinets algorithms coincide in approximately 20% of cases, but as the number of observations grows, the Kozinets learning algorithm proves to be the clear leader. In addition, the convergence rate of the Kozinets learning algorithm is less sensitive to the location of the points in the training sample.
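Tallies of this kind can be reproduced with a short Monte Carlo driver built on the two sketches above. The sample generator, sample size, and trial count below are illustrative assumptions, not the authors' experimental design:

```python
import numpy as np

rng = np.random.default_rng(0)

def separable_sample(n, dim=2, gap=3.0):
    """Two Gaussian clouds with well-separated centres; linearly separable
    with high probability for moderate n (assumed by this illustration)."""
    half = n // 2
    X = np.vstack([rng.normal(-gap, 1.0, (half, dim)),
                   rng.normal(+gap, 1.0, (n - half, dim))])
    y = np.hstack([-np.ones(half), np.ones(n - half)])
    return X, y

trials, ties, koz_wins, ros_wins = 1000, 0, 0, 0
for _ in range(trials):
    X, y = separable_sample(20)
    _, k_ros = rosenblatt_train(X, y)  # sketches above; separability assumed,
    _, k_koz = kozinets_train(X, y)    # so neither call is expected to return None
    if k_koz == k_ros:
        ties += 1
    elif k_koz < k_ros:
        koz_wins += 1
    else:
        ros_wins += 1
print(f"ties: {ties / trials:.1%}, Kozinets faster: {koz_wins / trials:.1%}, "
      f"Rosenblatt faster: {ros_wins / trials:.1%}")
```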
Conclusions. The higher convergence rate of the Kozinets algorithm compared to the Rosenblatt algorithm, confirmed by a series of statistical experiments, makes it possible to formulate a promising line of research on the evolution of neural networks in which the Kozinets algorithm is used to tune the basic elements, the perceptrons.

Keywords: Linear classifier, Rosenblatt algorithm, Kozinets algorithm.

REFERENCES

1 Hastie T., Tibshirani R., Friedman J. The elements of statistical learning. NY: Springer; 2014. 739 p.

2 Bishop C.M. Pattern recognition and machine learning. NY: Springer; 2006. 738 p.

3 Merkov A.B. Image recognition: Introduction to statistical learning methods. Moscow: URSS; 2011. 256 p. (In Russian).

4 Vapnik V. The nature of statistical learning theory. NY: Springer-Verlag; 1995. 188 p. https://doi.org/10.1007/978-1-4757-2440-0

5 Gori M. Machine Learning: A constraint-based approach. Waltham: Morgan Kaufmann; 2017. 580 p.

6 Kodratoff Y., Michalski R.S. Machine learning: an artificial intelligence approach, Vol. 3. Elsevier; 2014. 825 p.

7 Camastra F., Vinciarelli A. Machine learning for audio, image and video analysis: Theory and Applications. Madrid: Springer; 2015. 561 p. https://doi.org/10.1007/978-1-4471-6735-8

8 Schlesinger M., Hlavac V. Ten lectures on statistical and structural pattern recognition. Dordrecht/Boston/London: Kluwer Academic Publishers; 2002. 519 p. https://doi.org/10.1007/978-94-017-3217-8

9 Burstein F., Holsapple C.W. Handbook on decision support systems 2: variations. NY: Springer; 2008. 798 p.

10 Kung S.Y. Kernel methods and machine learning. Cambridge: Cambridge University Press; 2014. 591 p. https://doi.org/10.1017/CBO9781139176224

11 Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review. 1958. N. 65(6). P. 386–408.

12 Kozinets B.N. A recursive algorithm for dividing the convex hulls of two sets. Computer Science and Programming. 1966. N. 4. P. 43–50.

13 Novikoff A.B. On convergence proofs on perceptrons. Symposium on the Mathematical Theory of Automata. 1962. N. 12. P. 615–622.

14 Vapnik V.N., Chervonenkis A.J. Theory of pattern recognition. Moscow: Nauka; 1974. 416 p. (In Russian).

15 Buslenko N.P. The method of statistical modeling. Moscow: Statistics; 1970. 113 p. (In Russian).

16 Rubinstein R.Y., Kroese D.P. Simulation and the Monte Carlo method (3rd ed.). NY: John Wiley & Sons; 2016. 432 p. https://doi.org/10.1002/9781118631980

17 Robert C.P., Casella G. Monte Carlo statistical methods (2nd ed.). NY: Springer; 2004. 649 p. https://doi.org/10.1007/978-1-4757-4145-2

18 Trickey K.A. Structural models of coefficient of variation matrices. Los Angeles: University of California; 2015. 233 p.

19 Kussul E., Baidyk T., Wunsch D.C. Neural networks and micromechanics. NY: Springer Science & Business Media; 2009. 221 p.

20 Misuno I.S., Rachkovskij D.A., Slipchenko S.V. Experimental investigation of handwritten digit classification. System Technologies. 2005. Issue 4(39). P. 110–133.

Received 15.03.2018