Issue 4 (194), article 1

DOI: https://doi.org/10.15407/kvt194.04.007

Kibern. vyčisl. teh., 2018, Issue 4 (194), pp.

Grytsenko V.I., Corresponding Member of NAS of Ukraine,
Director of International Research and Training
Center for Information Technologies and Systems
of the National Academy of Sciences of Ukraine
and Ministry of Education and Science of Ukraine
e-mail: vig@irtc.org.ua

Rachkovskij D.A., DSc (Engineering), Leading Researcher
Dept. of Neural Information Processing Technologies
e-mail: dar@infrm.kiev.ua

Revunova E.G., PhD (Engineering), Senior Researcher
Dept. of Neural Information Processing Technologies
e-mail: egrevunova@gmail.com

International Research and Training Center for Information Technologies
and Systems of the National Academy of Sciences of Ukraine
and Ministry of Education and Science of Ukraine,
Acad. Glushkov av., 40, Kiev, 03187, Ukraine

NEURAL DISTRIBUTED REPRESENTATIONS OF VECTOR DATA IN INTELLIGENT INFORMATION TECHNOLOGIES

Introduction. Distributed representation (DR) of data is a form of vector representation in which each object is represented by a set of vector components, and each vector component can belong to the representations of many objects. In ordinary vector representations the meaning of each component is defined, which cannot be said of DRs. However, the similarity of DR vectors reflects the similarity of the objects they represent.
DR is a neural network approach based on modeling the representation of information in the brain that grew out of ideas about “distributed” or “holographic” representations. DRs have a large information capacity, allow the use of a rich arsenal of methods developed for vector data, scale well for processing large amounts of data, and have a number of other advantages. Methods for transforming data into DRs have been developed for data of various types, from scalars and vectors to graphs.

The purpose of the article is to provide an overview of part of the work of the Department of Neural Information Processing Technologies (International Center) in the field of neural network distributed representations. The approach is a development of the ideas of Nikolai Amosov and his scientific school of modeling the structure and functions of the brain.

Scope. The formation of distributed representations from the original vector representations of objects using random projection is considered. With the help of DRs, the similarity of the original objects represented by numerical vectors can be estimated efficiently. The use of DRs also allows developing regularization methods for obtaining a stable solution of discrete ill-posed inverse problems, increasing the computational efficiency and accuracy of their solution, and analyzing the accuracy of the solution analytically. Thus, DRs increase the efficiency of the information technologies that apply them.
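To make the similarity-estimation step concrete, here is a minimal sketch of forming binary DRs by sign random projection and comparing them; the function names, dimensions, and seeds are illustrative assumptions made for this overview, not details taken from the articles:

    import numpy as np

    def binary_random_projection(X, dim_out, seed=0):
        """Project rows of X with a random sign matrix, then binarize
        by thresholding at zero: each output bit is shared by many
        objects, as in a distributed representation."""
        rng = np.random.default_rng(seed)
        R = rng.choice([-1.0, 1.0], size=(X.shape[1], dim_out))
        return (X @ R > 0).astype(np.uint8)

    def hamming_similarity(a, b):
        """Fraction of matching bits between two binary DRs."""
        return np.mean(a == b)

    rng = np.random.default_rng(42)
    x = rng.standard_normal(1000)
    y = x + 0.1 * rng.standard_normal(1000)  # small perturbation of x
    z = rng.standard_normal(1000)            # unrelated vector

    B = binary_random_projection(np.vstack([x, y, z]), dim_out=2048)
    print(hamming_similarity(B[0], B[1]))  # high (about 0.97): similar objects
    print(hamming_similarity(B[0], B[2]))  # near 0.5: dissimilar objects

For sign random projections the expected fraction of matching bits is 1 − θ/π, where θ is the angle between the original vectors, so the bit-match rate of the DRs serves as an estimate of the similarity of the represented objects.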

Conclusions. DRs of various data types can be used to improve the efficiency and intelligence level of information technologies. DRs have been developed both for weakly structured data, such as vectors, and for complex structured representations of objects, such as sequences, graphs of knowledge-base situations (episodes), etc. Transformation of different types of data into the DR vector format allows unifying the basic information technologies of their processing and achieving good scalability as the amount of processed data grows.
In the future, distributed representations will naturally combine information on structure and semantics to create computationally efficient and qualitatively new information technologies in which relational structures from knowledge bases are processed by the similarity of their DRs. The neurobiological relevance of distributed representations opens up the possibility of creating intelligent information technologies based on them that function similarly to the human brain.

Keywords: distributed data representation, random projection, vector similarity estimation, discrete ill-posed problem, regularization.

REFERENCES

1. Amosov N. M. Modelling of thinking and the mind. New York: Spartan Books, 1967. 192 p. https://doi.org/10.1007/978-1-349-00640-3

2. Amosov N.M., Baidyk T.N., Goltsev A.D., Kasatkin A.M., Kasatkina L.M., Rachkovskij D.A. Neurocomputers and Intelligent Robots. Kyiv: Nauk. Dumka. 1991. 269 p. (in Russian)

3. Gritsenko V.I., Rachkovskij D.A., Goltsev A.D., Lukovych V.V., Misuno I.S., Revunova E.G., Slipchenko S.V., Sokolov A.M., Talayev S.A. Neural distributed representation for intelligent information technologies and modeling of thinking. Kibernetika i vyčislitel'naâ tehnika. 2013. Vol. 173. P. 7–24. (in Russian)

4. Goltsev A.D., Gritsenko V.I. Neural network technologies in the problem of handwriting recognition. Control Systems and Machines. 2018. N 4. P. 3–20. (in Russian).

5. Kussul E.M. Associative neuron-like structures. Kyiv: Nauk. Dumka. 1992. 144 p. (in Russian)

6. Kussul E.M., Rachkovskij D.A., Baidyk T.N. Associative-Projective Neural Networks: Architecture, Implementation, Applications. Proc. Neuro-Nimes’91 (Nimes, 25–29th of Oct., 1991). Nimes, 1991. P. 463–476.

7. Gayler R. Multiplicative binding, representation operators, and analogy. Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences. Edited by K. Holyoak, D. Gentner, and B. Kokinov. Sofia, Bulgaria: New Bulgarian University, 1998. P. 405.

8. Kanerva P. Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation. 2009. Vol. 1, N 2. P. 139–159. https://doi.org/10.1007/s12559-009-9009-8

9. Goltsev A., Husek D. Some properties of the assembly neural networks. Neural Network World. 2002. Vol. 12, N. 1. P. 15–32.

10. Goltsev A.D. Neural networks with assembly organization. Kyiv: Nauk. Dumka. 2005. 200 p. (in Russian)

11. Goltsev A., Gritsenko V. Modular neural networks with radial neural columnar architecture. Biologically Inspired Cognitive Architectures. 2015. Vol. 13. P. 63–74. https://doi.org/10.1016/j.bica.2015.06.001

12. Frolov A.A., Rachkovskij D.A., Husek D. On information characteristics of Willshaw-like auto-associative memory. Neural Network World. 2002. Vol. 12, No 2. P. 141–158.

13. Frolov A.A., Husek D., Rachkovskij D.A. Time of searching for similar binary vectors in associative memory. Cybernetics and Systems Analysis. 2006. Vol. 42, N 5. P. 615–623. https://doi.org/10.1007/s10559-006-0098-z

14. Gritsenko V.I., Rachkovskij D.A., Frolov A.A., Gayler R., Kleyko D., Osipov E. Neural distributed autoassociative memories: A survey. Kibernetika i vyčislitel'naâ tehnika. 2017. N 2 (188). P. 5–35.

15. Li P., Hastie T.J., Church K.W. Very sparse random projections. Proc. KDD’06 (Philadelphia, 20–23rd of Aug., 2006). Philadelphia, 2006. P. 287–296. https://doi.org/10.1145/1150402.1150436

16. Rachkovskij D.A. Vector data transformation using random binary matrices. Cybernetics and Systems Analysis. 2014. Vol. 50, N 6. P. 960–968. https://doi.org/10.1007/s10559-014-9687-4

17. Rachkovskij D.A. Formation of similarity-reflecting binary vectors with random binary projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N 2. P. 313–323. https://doi.org/10.1007/s10559-015-9723-z

18. Rachkovskij D.A. Estimation of vectors similarity by their randomized binary projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N 5. P. 808–818. https://doi.org/10.1007/s10559-015-9774-1

19. Revunova E.G., Rachkovskij D.A. Using randomized algorithms for solving discrete ill-posed problems. Intern. Journal Information Theories and Applications. 2009. Vol. 16, N 2. P. 176–192.

20. Durrant R.J., Kaban A. Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions. Machine Learning. 2015. Vol. 99, N 2. P. 257–286. https://doi.org/10.1007/s10994-014-5466-8

21. Xiang H., Zou J. Randomized algorithms for large-scale inverse problems with general Tikhonov regularizations. Inverse Problems. 2015. Vol. 31, N 8: 085008. P. 1–24.

22. Revunova E.G. Study of error components for solution of the inverse problem using random projections. Mathematical Machines and Systems. 2010. N 4. P. 33–42 (in Russian).

23. Rachkovskij D.A., Revunova E.G. Randomized method for solving discrete ill-posed problems. Cybernetics and Systems Analysis. 2012. Vol. 48, N. 4. P. 621–635. https://doi.org/10.1007/s10559-012-9443-6

24. Revunova E.G. Randomization approach to the reconstruction of signals resulted from indirect measurements. Proc. ICIM’13 (Kyiv 16–20th of Sept., 2013). Kyiv, 2013. P. 203–208.

25. Revunova E.G. Analytical study of the error components for the solution of discreteill-posed problems using random projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N. 6. P. 978–991. https://doi.org/10.1007/s10559-015-9791-0

26. Revunova E.G. Model selection criteria for a linear model to solve discrete ill-posed problems on the basis of singular decomposition and random projection. Cybernetics and Systems Analysis. 2016. Vol. 52, N.4. P. 647–664. https://doi.org/10.1007/s10559-016-9868-4

27. Revunova E.G. Averaging over matrices in solving discrete ill-posed problems on the basis of random projection. Proc. CSIT’17 (Lviv 05–08th of Sept., 2017). Lviv, 2017. Vol. 1. P. 473–478. https://doi.org/10.1109/STC-CSIT.2017.8098831

28. Revunova E.G. Solution of the discrete ill-posed problem on the basis of singular value decomposition and random projection. Advances in Intelligent Systems and Computing II. Cham: Springer. 2018. P. 434–449.

29. Hansen P. Rank-deficient and discrete ill-posed problems. Numerical aspects of linear inversion. Philadelphia: SIAM, 1998. 247 p. https://doi.org/10.1137/1.9780898719697

30. Nowicki D., Verga P., Siegelmann H. Modeling reconsolidation in kernel associative memory. PLoS ONE. 2013. Vol. 8(8): e68189. https://doi.org/10.1371/journal.pone.0068189

31. Nowicki D., Siegelmann H. Flexible kernel memory. PLoS ONE. 2010. Vol. 5(6): e10955. https://doi.org/10.1371/journal.pone.0010955

32. Revunova E.G., Tyshchuk A.V. A model selection criterion for solution of discrete ill-posed problems based on the singular value decomposition. Proc. IWIM’2015 (20–24th of July, 2015, Kyiv-Zhukin). Kyiv-Zhukin, 2015. P. 43–47.

33. Revunova E.G. Improving the accuracy of the solution of discrete ill-posed problem by random projection. Cybernetics and Systems Analysis. 2018. Vol. 54, N 5. P. 842–852. https://doi.org/10.1007/s10559-018-0086-0

34. Marzetta T., Tucci G., Simon S. A random matrix-theoretic approach to handling singular covariance estimates. IEEE Trans. Information Theory. 2011. Vol. 57, N 9. P. 6256–6271. https://doi.org/10.1109/TIT.2011.2162175

35. Stepashko V. Theoretical aspects of GMDH as a method of inductive modeling. Control systems and machines. 2003. N 2. P. 31–38. (in Russian)

36. Stepashko V. Method of critical variances as analytical tool of theory of inductive modeling. Journal of Automation and Information Sciences. 2008. Vol. 40, N 3. P. 4–22. https://doi.org/10.1615/JAutomatInfScien.v40.i3.20

37. Kussul E.M., Baidyk T.N., Lukovich V.V., Rachkovskij D.A. Adaptive neural network classifier with multifloat input coding. Proc. Neuro-Nimes’93 (25–29th of Oct., 1993, Nimes). Nimes, France, 1993 P. 209–216.

38. Lukovich V.V., Goltsev A.D., Rachkovskij D.A. Neural network classifiers for micromechanical equipment diagnostics and micromechanical product quality inspection. Proc. EUFIT’97 (8–11th of Sept, 1997, Aachen). Aachen, Germany, 1997. P. 534–536.

39. Kussul E.M., Kasatkina L.M., Rachkovskij D.A., Wunsch D.C. Application of random threshold neural networks for diagnostics of micro machine tool condition. Proc. IJCNN’01 (4–9th of May, 1998, Anchorage). Anchorage, Alaska, USA, 1998 P. 241–244. https://doi.org/10.1109/IJCNN.1998.682270

40. Gol’tsev A.D. Structured neural networks with learning for texture segmentation in images. Cybernetics and Systems Analysis. 1991. Vol. 27, N 6. P. 927–936. https://doi.org/10.1007/BF01246527

41. Rachkovskij D.A., Revunova E.G. Intelligent gamma-ray data processing for environmental monitoring. In: Intelligent Data Processing in Global Monitoring for Environment and Security. Kyiv-Sofia: ITHEA. 2011. P. 136–157.

42. Revunova E.G., Rachkovskij D.A. Random projection and truncated SVD for estimating direction of arrival in antenna array. Kibernetika i vyčislitel'naâ tehnika. 2018. N 3 (193). P. 5–26.

43. Ferdowsi S., Voloshynovskiy S., Kostadinov D., Holotyak T. Fast content identification in highdimensional feature spaces using sparse ternary codes. Proc. WIFS’16 (4–7th of Dec., 2016, Abu Dhabi) Abu Dhabi, UAE, 2016. P. 1–6.

44. Dasgupta S., Stevens C.F., Navlakha S. A neural algorithm for a fundamental computing problem. Science. 2017. Vol. 358(6364). P. 793–796. https://doi.org/10.1126/science.aam9868

45. Iclanzan D., Szilagyi S.M., Szilagyi L. Evolving computationally efficient hashing for similarity search. Proc. ICONIP’18 (Siem Reap, 15–18th of Dec., 2018). Siem Reap, Cambodia, 2018. https://doi.org/10.1007/978-3-030-04179-3_49

46. Rachkovskij D.A., Slipchenko S.V., Kussul E.M., Baidyk T.N. Properties of numeric codes for the scheme of random subspaces RSC. Cybernetics and Systems Analysis. 2005. Vol. 41, N. 4. P. 509–520. https://doi.org/10.1007/s10559-005-0086-8

47. Rachkovskij D.A., Slipchenko S.V., Kussul E.M., Baidyk T.N. Sparse binary distributed encoding of scalars. Journal of Automation and Information Sciences. 2005. Vol. 37, N 6. P. 12–23. https://doi.org/10.1615/JAutomatInfScien.v37.i6.20

48. Rachkovskij D.A., Slipchenko S.V., Misuno I.S., Kussul E.M., Baidyk T.N. Sparse binary distributed encoding of numeric vectors. Journal of Automation and Information Sciences. 2005. Vol. 37, N 11. P. 47–61. https://doi.org/10.1615/JAutomatInfScien.v37.i11.60

49. Kleyko D., Osipov E., Rachkovskij D.A. Modification of holographic graph neuron using sparse distributed representations. Procedia Computer Science. 2016. Vol. 88. P. 39–45. https://doi.org/10.1016/j.procs.2016.07.404

50. Kleyko D., Rahimi A., Rachkovskij D., Osipov E., Rabaey J. Classification and recall with binary hyperdimensional computing: Tradeoffs in choice of density and mapping characteristics. IEEE Trans. Neural Netw. Learn. Syst. 2018.

51. Kussul E., Baidyk T., Kasatkina L. Lukovich V. Rosenblatt perceptrons for handwritten digit recognition. Proc. IJCNN’01. (Washington, 15-19 July, 2001). Washington, USA. 2001. P. 1516–1521. https://doi.org/10.1109/IJCNN.2001.939589

52. Baidyk T., Kussul E., Makeyev O., Vega A. Limited receptive area neural classifier based image recognition in micromechanics and agriculture. International Journal of Applied Mathematics and Informatics. 2008. Vol. 2, N 3. P. 96–103.

53. Baydyk T., Kussul E., Hernandez Acosta M. LIRA neural network application for microcomponent measurement. International Journal of Applied Mathematics and Informatics. 2012. Vol. 6, N 4. P. 173–180.

54. Goltsev A.D., Gritsenko V.I. Algorithm of sequential finding the textural features characterizing homogeneous texture segments for the image segmentation task. Kibernetika i vyčislitel'naâ tehnika. 2013. N 173. P. 25–34 (in Russian).

55. Goltsev A., Gritsenko V., Kussul E., Baidyk T. Finding the texture features characterizing the most homogeneous texture segment in the image. Proc. IWANN’15 (Palma de Mallorca, Spain, 10–12th of June, 2015). Palma de Mallorca, 2015. P. 287–300. https://doi.org/10.1007/978-3-319-19258-1_25

56. Goltsev A., Gritsenko V., Husek D. Extraction of homogeneous fine-grained texture segments in visual images. Neural Network World. 2017. Vol. 27, N 5. P. 447– 477. https://doi.org/10.14311/NNW.2017.27.024

57. Kussul N.N., Sokolov B.V., Zyelyk Y.I., Zelentsov V.A., Skakun S.V., Shelestov A.Y. Disaster risk assessment based on heterogeneous geospatial information. J. of Automation and Information Sci. 2010. Vol. 42, N 12. P. 32–45. https://doi.org/10.1615/JAutomatInfScien.v42.i12.40

58. Kussul N., Lemoine G., Gallego F. J., Skakun S. V, Lavreniuk M., Shelestov A. Y. Parcel-based crop classification in Ukraine using Landsat-8 data and Sentinel-1A data. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2016. Vol. 9, N 6. P. 2500–2508. https://doi.org/10.1109/JSTARS.2016.2560141

59. Kussul N., Lavreniuk M., Shelestov A., Skakun S. Crop inventory at regional scale in Ukraine: developing in season and end of season crop maps with multi-temporal optical and SAR satellite imagery. European Journal of Remote Sensing. 2018. Vol. 51, N 1. P. 627–636. https://doi.org/10.1080/22797254.2018.1454265

60. Sokolov A., Rachkovskij D. Approaches to sequence similarity representation. Information Theories and Applications. 2005. Vol.13, N 3. P. 272–278.

61. Recchia G., Sahlgren M., Kanerva P., Jones M. Encoding sequential information in semantic space models: Comparing holographic reduced representation and random permutation. Comput. Intell. Neurosci. 2015. Vol. 2015. Art. 986574. P. 1–18.

62. Rasanen O.J., Saarinen J.P. Sequence prediction with sparse distributed hyperdimensional coding applied to the analysis of mobile phone use patterns. IEEE Trans. Neural Netw. Learn. Syst. 2016. Vol. 27, N 9. P. 1878–1889. https://doi.org/10.1109/TNNLS.2015.2462721

63. Gallant S.I., Culliton P. Positional binding with distributed representations. Proc. ICIVC’16 (Portsmouth, UK, 3–5th of Aug., 2016). Portsmouth, 2016. P. 108–113. https://doi.org/10.1109/ICIVC.2016.7571282

64. Frady E. P., Kleyko D., Sommer F. T. A theory of sequence indexing and working memory in recurrent neural networks. Neural Comput. 2018. Vol. 30, N. 6. P. 1449–1513. https://doi.org/10.1162/neco_a_01084

65. Rachkovskij D.A. Some approaches to analogical mapping with structure sensitive distributed representations. Journal of Experimental and Theoretical Artificial Intelligence. 2004. Vol. 16, N 3. P. 125–145. https://doi.org/10.1080/09528130410001712862

66. Slipchenko S.V., Rachkovskij D.A. Analogical mapping using similarity of binary distributed representations. Int. J. Information Theories and Applications. 2009. Vol. 16, N 3. P. 269–290.

Received 22.08.2018

Issue 3 (193), article 1

DOI: https://doi.org/10.15407/kvt193.03.005

Kibern. vyčisl. teh., 2018, Issue 3 (193), pp.

Revunova E.G., PhD (Engineering),
Senior Researcher, Department of Neural Information Processing Technologies
e-mail: egrevunova@gmail.com

Rachkovskij D.A., DSc. (Engineering),
Leading Researcher, Department of Neural Information Processing Technologies
e-mail: dar@infrm.kiev.ua

International Research and Training Center for Information Technologies
and Systems of the National Academy of Sciences of Ukraine
and Ministry of Education and Science of Ukraine,
Acad. Glushkova av., 40, Kiev, 03187, Ukraine

RANDOM PROJECTION AND TRUNCATED SVD FOR ESTIMATING DIRECTION OF ARRIVAL IN ANTENNA ARRAY

Introduction. The need to solve inverse problems arises in many areas of science and technology in connection with the recovery of an object signal from the results of indirect remote measurements. When the transformation matrix has a high condition number, its singular values gradually decay to zero, and the output of the measuring system contains noise, the problem of estimating the input vector is called a discrete ill-posed problem (DIP). It is known that the DIP solution using the pseudoinverse of the input-output transformation matrix is unstable. To overcome the instability and to improve the accuracy of the solution, regularization methods are used.

Our approaches to ensuring the stability of the DIP solution, truncated singular value decomposition (TSVD) and random projection (RP), use an integer regularization parameter: the number of terms of the linear model. Regularization with an integer parameter makes it possible to obtain a model close to the best in terms of the accuracy of input vector recovery, and also to reduce the computational complexity by reducing the dimensionality of the problem.
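A minimal numerical sketch of the TSVD variant of this regularization is given below; it is a generic illustration under assumed problem sizes, not the implementation from the article, and tsvd_solve is a hypothetical helper:

    import numpy as np

    def tsvd_solve(A, b, k):
        """Truncated-SVD solution of A x = b: keep only the k largest
        singular values; k is the integer regularization parameter."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

    rng = np.random.default_rng(0)
    n = 40
    # Ill-conditioned matrix with gradually decaying singular values
    Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
    Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q1 @ np.diag(10.0 ** -np.arange(n)) @ Q2.T
    # Input signal concentrated in the leading right singular directions,
    # as is typical for discrete ill-posed problems
    x_true = Q2[:, :6] @ rng.standard_normal(6)
    b = A @ x_true + 1e-8 * rng.standard_normal(n)  # noisy measurements

    x_pinv = np.linalg.pinv(A) @ b  # unstable: noise divided by tiny sigma_i
    x_tsvd = tsvd_solve(A, b, k=6)  # stable: noise-dominated terms discarded
    print(np.linalg.norm(x_pinv - x_true))  # error blows up
    print(np.linalg.norm(x_tsvd - x_true))  # small error

The integer parameter k acts as the regularization parameter: too large a k admits terms amplified by the reciprocals of tiny singular values, while too small a k discards useful signal components.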

The purpose of the article is to develop an approach to estimating the direction of arrival of signals in an antenna array using the DIP solution, to compare the results with the well-known MUSIC method, and to reveal the advantages and disadvantages of the methods.

Results. A comparison of TSVD and MUSIC (implemented in real numbers) on correlated sources with five snapshots showed an advantage of TSVD in terms of the useful signal power Pratio of 2.2 times for K = 15 antenna elements and 4.7 times for K = 90. When working with 100 snapshots, the advantage of TSVD in Pratio is 3.7 times for K = 15 and 4.2 times for K = 90. A comparison of RP and MUSIC (implemented in real numbers) on correlated sources with five snapshots showed an advantage of RP in Pratio of 3 times for K = 15 and 4.4 times for K = 90. When working with 100 snapshots, the advantage of RP in Pratio is 3.8 times for K = 15 and 4.2 times for K = 90.

Conclusions. The approach to determining the direction of arrival based on l2-regularization methods provides a stable solution in the case of a small number of snapshots, high noise, and correlated source signals. Methods of determining the direction of arrival based on l2-regularization, in contrast to l1-regularization, do not impose restrictions on the properties of the input-output transformation matrix, do not require a priori information on the number of signal sources, and allow efficient hardware implementations.

Keywords: direction of arrival estimation, truncated singular value decomposition, random projection, MUSIC.

REFERENCES

1. Hansen P. Rank-deficient and discrete ill-posed problems. Numerical aspects of linear inversion. Philadelphia: SIAM, 1998. 247 p. https://doi.org/10.1137/1.9780898719697

2. Tikhonov A., Arsenin V. Solution of ill-posed problems. Washington: V.H. Winston, 1977. 231 p.

3. Starkov V. Constructive methods of computational physics in interpretation problems. Kiev: Naukova Dumka, 2002. 263 p. (in Russian)

4. Hansen P.C. The truncated SVD as a method for regularization. BIT. 1987. Vol. 27, N 2. P. 534–553. https://doi.org/10.1007/BF01937276

5. Revunova E.G., Tishchuk A.V. Criterion for choosing a model for solving discrete ill-posed problems on the basis of a singular expansion. Control systems and machines. 2014. N 6. P. 3–11. (in Russian).

6. Revunova E.G., Tyshchuk A.V. A model selection criterion for solution of discrete ill-posed problems based on the singular value decomposition. Proc. IWIM’2015 (20–24th of July, 2015, Kyiv-Zhukin). Kyiv-Zhukin, 2015. P. 43–47.

7. Revunova E.G. Model selection criteria for a linear model to solve discrete ill-posed problems on the basis of singular decomposition and random projection. Cybernetics and Systems Analysis. 2016. Vol. 52, N.4. P. 647–664. https://doi.org/10.1007/s10559-016-9868-4

8. Revunova E.G. Study of error components for solution of the inverse problem using random projections. Mathematical Machines and Systems. 2010. N 4. P. 33–42 (in Russian).

9. Revunova E.G. Randomization approach to the reconstruction of signals resulted from indirect measurements. Proc. ICIM’13 (16-20th of September, 2013, Kyiv). Kyiv, 2013. P. 203–208.

10. Revunova E.G. Analytical study of the error components for the solution of discrete ill-posed problems using random projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N. 6. P. 978–991. https://doi.org/10.1007/s10559-015-9791-0

11. Revunova E.G. Averaging over matrices in solving discrete ill-posed problems on the basis of random projection. Proc. CSIT’17(05–08th of September, 2017, Lviv). Lviv, 2017. Vol. 1. P. 473–478. https://doi.org/10.1109/STC-CSIT.2017.8098831

12. Revunova E.G. Solution of the discrete ill-posed problem on the basis of singular value decomposition and random projection. Advances in Intelligent Systems and Computing II. Cham: Springer. 2017. P. 434–449.

13. Revunova E.G. Increasing the accuracy of the solution of discrete ill-posed problems by the method of random projections. Control systems and machines. 2018. N 1. P. 16–27. (in Ukrainian)

14. Revunova E.G., Tishchuk A.V., Desyaterik A.A. Criteria for choosing a model for solving discrete ill-posed problems based on SVD and QR decompositions. Inductive modeling of complex systems. 2015. N 7. P. 232–239. (in Russian).

15. Revunova E.G., Rachkovskij D.A. Using randomized algorithms for solving discrete ill-posed problems. Intern. Journal Information Theories and Applications. 2009. Vol. 16, N 2. P. 176–192.

16. Rachkovskij D.A., Revunova E.G. Randomized method for solving discrete ill-posed problems. Cybernetics and Systems Analysis. 2012. Vol. 48, N. 4. P. 621–635. https://doi.org/10.1007/s10559-012-9443-6

17. Schmidt R.O. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propagation. 1986. Vol. AP–34. P. 276–280. https://doi.org/10.1109/TAP.1986.1143830

18. Krim H., Viberg M. Two decades of array signal processing research: The parametric approach. IEEE Signal Processing Magazine. 1996. Vol. 13, N 4. P. 67–94. https://doi.org/10.1109/79.526899

19. Schmidt R.O. A signal subspace approach to multiple emitter location spectral estimation. PhD thesis. Stanford University, 1981. 201 p.

20. Bartlett M.S. Smoothing periodograms from time series with continuous spectra. Nature. 1948. Vol. 161. P. 686–687. https://doi.org/10.1038/161686a0

21. Malioutov D.M., Cetin M., Fisher J.W. III, Willsky A.S. Superresolution source localization through data-adaptive regularization. Proc. SAM’02 (6th of Aug., 2002, Rosslyn, Virginia). Rosslyn, Virginia, 2002. P. 194–198. https://doi.org/10.1109/SAM.2002.1191027

22. Malioutov D., Cetin M., Willsky A.S. A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Transactions on Signal Processing. 2005. Vol. 53, N 8. P. 3010–3022. https://doi.org/10.1109/TSP.2005.850882

23. Panahi A., Viberg M. Fast lasso based DOA tracking. Proc. CAMSAP’11 (13–16th of Dec., 2011, San Juan, Puerto Rico). San Juan, Puerto Rico, 2011. P. 397–400.

24. Panahi A., Viberg M. A novel method of DOA tracking by penalized least squares. Proc. CAMSAP’13 (15–18th of Dec., 2013, St. Martin, France). St. Martin, France, 2013. P. 61–64.

25. Golub G.H., Van Loan C.F. Matrix Computations. Baltimore: The Johns Hopkins University Press, 1996.

26. Ivakhnenko A., Stepashko V. Noise-immunity of modeling. Kiev: Naukova Dumka, 1985. (in Russian)

27. Stepashko V. Theoretical aspects of GMDH as a method of inductive modeling. Control systems and machines 2003. N 2. P. 31–38. (in Russian)

28. Stepashko V. Method of critical variances as analytical tool of theory of inductive modeling. Journal of Automation and Information Sciences. 2008. Vol. 40, N 3. P. 4–22. https://doi.org/10.1615/JAutomatInfScien.v40.i3.20

29. Xiang H., Zou J. Regularization with randomized SVD for large-scale discrete inverse problems. Inverse Problems. 2013. Vol. 29, N 8: 085008. https://doi.org/10.1088/0266-5611/29/8/085008

30. Xiang H., Zou J. Randomized algorithms for large-scale inverse problems with general Tikhonov regularizations. Inverse Problems. 2015. Vol. 31, N 8:085008. P. 1–24.

31. Wei Y., Xie P., Zhang L. Tikhonov regularization and randomized GSVD. SIAM J. Matrix Anal. Appl. 2016. Vol. 37, N 2. P. 649–675. https://doi.org/10.1137/15M1030200

32. Zhang L., Wei Y. Randomized core reduction for discrete ill-posed problem. arXiv:1808.02654. 2018.

33. Misuno I.S., Rachkovskij D.A., Slipchenko S.V., Sokolov A.M. Searching for text information with the help of vector representations. Problems of Programming. 2005. N. 4. P. 50–59. (in Russian)

34. Rachkovskij D.A. Formation of similarity-reflecting binary vectors with random binary projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N 2. P. 313–323. https://doi.org/10.1007/s10559-015-9723-z

35. Ferdowsi S., Voloshynovskiy S., Kostadinov D., Holotyak T. Fast content identification in highdimensional feature spaces using sparse ternary codes. Proc. WIFS’16 (4–7th of December, 2016, Abu Dhabi, UAE). Abu Dhabi, UAE, 2016. P. 1–6.

36. Rachkovskij D.A., Slipchenko S.V., Kussul E.M., Baidyk T. N. Properties of numeric codes for the scheme of random subspaces RSC. Cybernetics and Systems Analysis. 2005. Vol. 41, N. 4. P. 509–520. https://doi.org/10.1007/s10559-005-0086-8

37. Rachkovskij D.A., Slipchenko S.V., Kussul E.M., Baidyk T.N. Sparse binary distributed encoding of scalars. Journal of Automation and Information Sciences. 2005. Vol. 37, N 6. P. 12–23. https://doi.org/10.1615/JAutomatInfScien.v37.i6.20

38. Rachkovskij D.A., Slipchenko S.V., Misuno I.S., Kussul E.M., Baidyk T.N. Sparse binary distributed encoding of numeric vectors. Journal of Automation and Information Sciences. 2005. Vol. 37, N 11. P. 47–61. https://doi.org/10.1615/JAutomatInfScien.v37.i11.60

39. Kleyko D., Osipov E., Rachkovskij D.A. Modification of holographic graph neuron using sparse distributed representations. Procedia Computer Science. 2016. Vol. 88. P. 39–45. https://doi.org/10.1016/j.procs.2016.07.404

40. Kleyko D., Osipov E., Senior A., Khan A.I., Sekercioglu Y.A. Holographic graph neuron: A bioinspired architecture for pattern processing. IEEE Trans. Neural Netw. Learn. Syst. 2017.Vol. 28, N 6. P. 1250–1262. https://doi.org/10.1109/TNNLS.2016.2535338

41. Kleyko D., Rahimi A., Rachkovskij D., Osipov E., Rabaey J. Classification and recall with binary hyperdimensional computing: Tradeoffs in choice of density and mapping characteristics. IEEE Trans. Neural Netw. Learn. Syst. 2018.

42. Kleyko D., Osipov E. On bidirectional transitions between localist and distributed representations: The case of common substrings search using vector symbolic architecture. Procedia Computer Science. 2014. Vol. 41. P. 104–113. https://doi.org/10.1016/j.procs.2014.11.091

43. Recchia G., Sahlgren M., Kanerva P., Jones M. Encoding sequential information in semantic space models: Comparing holographic reduced representation and random permutation. Comput. Intell. Neurosci. 2015. Vol. 2015. Art. 986574. https://doi.org/10.1155/2015/986574

44. Räsänen O.J., Saarinen J.P. Sequence prediction with sparse distributed hyperdimensional coding applied to the analysis of mobile phone use patterns. IEEE Trans. Neural Netw. Learn. Syst. 2016. Vol. 27, N 9. P. 1878–1889. https://doi.org/10.1109/TNNLS.2015.2462721

45. Slipchenko S. V., Rachkovskij D.A. Analogical mapping using similarity of binary distributed representations. Int. J. Information Theories and Applications. 2009. Vol. 16, N 3. P. 269–290.

46. Kanerva P. Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cogn. Comput. 2009. Vol. 1, N 2. P. 139–159. https://doi.org/10.1007/s12559-009-9009-8

47. Gallant S. I., Okaywe T.W. Representing objects, relations, and sequences. Neural Comput. 2013. Vol. 25, N 8. P. 2038–2078. https://doi.org/10.1162/NECO_a_00467

48. Gritsenko V.I., Rachkovskij D.A., Goltsev A.D., Lukovych V.V., Misuno I.S., Revunova E.G., Slipchenko S.V., Sokolov A.M., Talayev S.A. Neural distributed representation for intelligent information technologies and modeling of thinking. Kibernetika i vyčislitel’naâ tehnika. 2013. Vol. 173. P. 7–24. (in Russian).

49. Frolov A.A., Rachkovskij D.A., Husek D. On information characteristics of Willshaw-like auto-associative memory. Neural Network World. 2002. Vol. 12, No 2. P. 141–158.

50. Frolov A.A., Husek D., Rachkovskij D.A. Time of searching for similar binary vectors in associative memory. Cybernetics and Systems Analysis. 2006. Vol. 42, No. 5. P. 615–623. https://doi.org/10.1007/s10559-006-0098-z

51. Frady E. P., Kleyko D., Sommer F. T. A theory of sequence indexing and working memory in recurrent neural networks. Neural Comput. 2018. Vol. 30, N. 6. P. 1449–1513. https://doi.org/10.1162/neco_a_01084

52. Kussul N.N., Sokolov B.V., Zyelyk Y.I., Zelentsov V.A., Skakun S.V., Shelestov A.Y. Disaster risk assessment based on heterogeneous geospatial information. J. of Automation and Information Sciences. 2010. Vol. 42, N 12. P. 32–45. https://doi.org/10.1615/JAutomatInfScien.v42.i12.40

53. Kussul N., Shelestov A., Basarab R., Skakun S., Kussul O., Lavrenyuk M. Geospatial intelligence and data fusion techniques for sustainable development problems. Proc. ICTERI’15. 2015. P. 196–203.

54. Kussul N., Skakun S., Shelestov A., Kravchenko O., Kussul O. Crop classification in Ukraine using satellite optical and SAR images. International Journal Information Models and Analyses. 2013. Vol. 2, N 2. P. 118–122.

55. Kussul N., Lemoine G., Gallego F. J., Skakun S. V, Lavreniuk M., Shelestov A. Y. Parcel-based crop classification in Ukraine using Landsat-8 data and Sentinel-1A data. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2016. Vol. 9, N 6. P. 2500–2508. https://doi.org/10.1109/JSTARS.2016.2560141

56. Lavreniuk M., Kussul N., Meretsky M., Lukin V., Abramov S., Rubel O. Impact of SAR data filtering on crop classification accuracy. Proc. UKRCON’17 (29th of May – 2nd of June, 2017, Kyiv). Kyiv, 2017. P. 912–917. https://doi.org/10.1109/UKRCON.2017.8100381

57. Kussul N., Lavreniuk M., Shelestov A., Skakun S. Crop inventory at regional scale in Ukraine: developing in season and end of season crop maps with multi-temporal optical and SAR satellite imagery. European Journal of Remote Sensing. 2018. Vol. 51, N 1. P. 627–636. https://doi.org/10.1080/22797254.2018.1454265

58. Moreira A., Prats-Iraola P., Younis M., Krieger G., Hajnsek I., Papathanassiou K. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sensing Mag. 2013. Vol. 1, N 1. P. 6–43. https://doi.org/10.1109/MGRS.2013.2248301

59. Ramakrishnan S., Demarcus V., Le Ny J., Patwari N., Gussy J. Synthetic aperture radar imaging using spectral estimation techniques. Technical Report. University of Michigan, 2002. 34 p.

Received 15.05.2018

Issue 2 (188), article 1

DOI: https://doi.org/10.15407/kvt188.02.005

Kibern. vyčisl. teh., 2017, Issue 2 (188), pp.

Grytsenko V.I., Corresponding Member of NAS of Ukraine, Director
e-mail: vig@irtc.org.ua
International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and of Ministry of Education and Science of Ukraine,
av. Acad. Glushkova, 40, Kiev, 03680, Ukraine

Rachkovskij D.A., Doctor of Engineering, Leading Researcher,
Dept. of Neural Information Processing Technologies,
e-mail: dar@infrm.kiev.ua
International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and of Ministry of Education and Science of Ukraine,
av. Acad. Glushkova, 40, Kiev, 03680, Ukraine

Frolov A.A., Doctor of Biology, Professor,
Faculty of Electrical Engineering and Computer Science FEI,
e-mail: docfact@gmail.com
Technical University of Ostrava, 17 listopadu 15, 708 33 Ostrava-Poruba, Czech Republic

Gayler R., PhD,
Independent Researcher,
e-mail: r.gayler@gmail.com
Melbourne, VIC, Australia

Kleyko D., PhD Student,
Department of Computer Science, Electrical and Space Engineering,
e-mail: denis.kleyko@ltu.se
Lulea University of Technology, 971 87 Lulea, Sweden

Osipov E., PhD, Professor,
Department of Computer Science, Electrical and Space Engineering,
e-mail: evgeny.osipov@ltu.se
Lulea University of Technology, 971 87 Lulea, Sweden

NEURAL DISTRIBUTED AUTOASSOCIATIVE MEMORIES: A SURVEY

Introduction. Neural network models of autoassociative, distributed memory allow the storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of sublinear-time search (in the number of stored items) for approximate nearest neighbors among high-dimensional vectors.

The purpose of the paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons).

Scope. The survey focuses mainly on the networks of Hopfield, Willshaw, and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections, and networks with a bipartite graph structure for non-binary data with linear constraints.
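As a minimal sketch of the Willshaw-type memory discussed here, the following toy example stores sparse binary vectors with clipped Hebbian learning and retrieves one of them from a partial cue in a single step; the network size, the number of stored vectors, and the retrieval threshold are illustrative assumptions, not parameters from the surveyed papers:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 40, 12  # neurons, stored items, active bits per item

    # Sparse binary patterns: k of n components set to 1
    patterns = np.zeros((m, n), dtype=np.uint8)
    for p in patterns:
        p[rng.choice(n, size=k, replace=False)] = 1

    # Willshaw (clipped Hebbian) learning: OR of outer products
    W = np.zeros((n, n), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p)

    # One-step retrieval from a distorted cue: a neuron fires if it
    # receives input from every active cue bit
    cue = patterns[0].copy()
    cue[np.flatnonzero(cue)[:3]] = 0  # delete 3 of the 12 active bits
    recalled = (W @ cue >= cue.sum()).astype(np.uint8)
    print((recalled == patterns[0]).all())  # True: pattern restored

With sparse patterns the clipped connection matrix saturates slowly, so the number of reliably stored vectors can exceed the number of neurons, in line with the capacity results surveyed below.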

Conclusions. In conclusion, we discuss the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for very high-dimensional vectors.

Keywords: distributed associative memory, sparse binary vector, Hopfield network, Willshaw memory, Potts model, nearest neighbor, similarity search

REFERENCES

1 Abbott L.F., Arian Y. Storage capacity of generalized networks. Physical Review A. 1987. Vol. 36, N 10. P. 5091–5094. https://doi.org/10.1103/PhysRevA.36.5091

2 Ahle T.D. Optimal Las Vegas locality sensitive data structures. arXiv:1704.02054. 6 Apr 2017.

3 Aliabadi B. K., Berrou C., Gripon V., Jiang X. Storing sparse messages in networks of neural cliques. IEEE Trans. NNLS. 2014. Vol. 25. P. 980–989. https://doi.org/10.1109/TNNLS.2013.2285253

4 Amari S. Characteristics of sparsely encoded associative memory. Neural Networks. 1989. Vol. 2, N 6. P. 451–457. https://doi.org/10.1016/0893-6080(89)90043-9

5 Amari S., Maginu K. Statistical neurodynamics of associative memory. Neural Networks. 1988. Vol. 1. P. 63–73. https://doi.org/10.1016/0893-6080(88)90022-6

6 Amit D.J. Modeling brain function: the world of attractor neural networks. Cambridge: Cambridge University Press, 1989. 554 p. https://doi.org/10.1017/CBO9780511623257

7 Amit D.J., Fusi S. Learning in neural networks with material synapses. Neural Computation. 1994. V. 6, N 5. P. 957–982. https://doi.org/10.1162/neco.1994.6.5.957

8 Amit D.J., Gutfreund H., Sompolinsky H. Statistical mechanics of neural networks near saturation. Annals of Physics. 1987. Vol. 173. P. 30–67. https://doi.org/10.1016/0003-4916(87)90092-3

9 Amosov N. M. Modelling of thinking and the mind. New York: Spartan Books. 1967. https://doi.org/10.1007/978-1-349-00640-3

10 Anderson J. A. A theory for the recognition of items from short memorized lists. Psychological Review. 1973. Vol. 80, N 6. P. 417–438. https://doi.org/10.1037/h0035486

11 Anderson J. A. Cognitive and psychological computation with neural models. IEEE trans. Systems, Man, and Cybernetics. 1983. Vol. 13, N 5. P. 799–814. https://doi.org/10.1109/TSMC.1983.6313074

12 Anderson J.A., Murphy G.L. Psychological concepts in a parallel system. Physica D. 1986. Vol. 22, N 1–3. P. 318–336. https://doi.org/10.1016/0167-2789(86)90302-2

13 Anderson J.A., Silverstein J.W., Ritz S.A., Jones R.S. Distinctive features, categorical perception and probability learning: Some applications of a neural model. Psychological Review. 1977. V. 84. P. 413–451. https://doi.org/10.1037/0033-295X.84.5.413

14 Andoni A., Laarhoven T., Razenshteyn I., Waingarten E. Optimal hashing-based time-space trade-offs for approximate near neighbors. Proc. SODA’17. 2017. P. 47–66. https://doi.org/10.1137/1.9781611974782.4

15 Baidyk T.N., Kussul E.M. Structure of neural assembly. Proc. RNNS/IEEE symposium on neuroinformatics and neurocomputers. 1992. P. 423–434.

16 Baidyk T.N., Kussul E.M., Rachkovskij D.A. Numerical-analytical method for neural network investigation. Proc. NEURONET’90. 1990. P. 217–219.

17 Baldi P., Venkatesh S.S. Number of stable points for spin-glasses and neural networks of higher orders. Physical Review Letters. 1987. Vol. 58, N 9. P. 913–916. https://doi.org/10.1103/PhysRevLett.58.913

18 Becker A., Ducas L., Gama N., Laarhoven T. New directions in nearest neighbor searching with applications to lattice sieving. Proc. SODA’16. 2016. P. 10–24. https://doi.org/10.1137/1.9781611974331.ch2

19 Boguslawski B., Gripon V., Seguin F., Heitzmann F. Twin neurons for efficient real-world data distribution in networks of neural cliques: Applications in power management in electronic circuits. IEEE Trans. NNLS. 2016. Vol. 27, N 2. P. 375–387. https://doi.org/10.1109/TNNLS.2015.2480545

20 Bovier A. Sharp upper bounds on perfect retrieval in the Hopfield model. J. Appl. Probab. 1999. Vol. 36, N 3. P. 941–950. https://doi.org/10.1239/jap/1032374647

21 Braitenberg V. Cell assemblies in the cerebral cortex. In Theoretical approaches to complex systems. Berlin: Springer-Verlag. 1978. P. 171–188. https://doi.org/10.1007/978-3-642-93083-6_9

22 Broder A., Mitzenmacher M. Network applications of Bloom filters: A survey. Internet mathematics. 2004. Vol. 1, N 4. P. 485–509. https://doi.org/10.1080/15427951.2004.10129096

23 Brunel N., Carusi F., Fusi S. Slow stochastic Hebbian learning of classes of stimuli in a recurrent neural network. Network. 1998. Vol. 9. P. 123–152. https://doi.org/10.1088/0954-898X_9_1_007

24 Buckingham J., Willshaw D. On setting unit thresholds in an incompletely connected associative net. Network. 1993. Vol. 4. P. 441–459. https://doi.org/10.1088/0954-898X_4_4_003

25 Burshtein D. Non-direct convergence radius and number of iterations of the Hopfield associative memory. IEEE Trans. Inform. Theory. 1994. Vol. 40. P. 838–847. https://doi.org/10.1109/18.335894

26 Burshtein D. Long-term attraction in higher order neural networks. IEEE Trans. Neural Networks. 1998. Vol. 9, N 1. P. 42–50. https://doi.org/10.1109/72.655028

27 Christiani T., Pagh R. Set similarity search beyond MinHash. Proc. STOC’17. 2017. https://doi.org/10.1145/3055399.3055443

28 Cole R., Gottlieb L.-A., Lewenstein M. Dictionary matching and indexing with errors and don’t cares. Proc. STOC’04. 2004. P. 91–100. https://doi.org/10.1145/1007352.1007374

29 Dahlgaard S., Knudsen M.B.T., Thorup M. Fast similarity sketching. arXiv:1704.04370. 14 Apr 2017.

30 Demircigil M., Heusel J., Lowe M., Upgang S., Vermet F. On a model of associative memory with huge storage capacity. J. Stat. Phys. 2017. https://doi.org/10.1007/s10955-017-1806-y

31 Donaldson R., Gupta A, Plan Y., Reimer T. Random mappings designed for commercial search engines. arXiv:1507.05929. 21 Jul 2015.

32 Feigelman M.V., Ioffe L.B. The augmented models of associative memory – asymmetric interaction and hierarchy of patterns. Int. Journal of Modern Physics B. 1987. Vol. 1, N 1, P. 51–68. https://doi.org/10.1142/S0217979287000050

33 Ferdowsi S., Voloshynovskiy S., Kostadinov D., Holotyak T. Fast content identification in highdimensional feature spaces using sparse ternary codes. Proc. WIFS’16. 2016. P. 1–6.

34 Frolov A.A., Husek D., Muraviev I.P. Information capacity and recall quality in sparsely encoded Hopfield-like neural network: Analytical approaches and computer simulation. Neural Networks. 1997. Vol. 10, N 5. P. 845–855. https://doi.org/10.1016/S0893-6080(96)00122-0

35 Frolov A.A., Husek D., Muraviev I.P. Informational efficiency of sparsely encoded Hopfield-like associative memory. Optical Memory & Neural Networks. 2003. Vol. 12, N 3. P. 177–197.

36 Frolov A.A., Husek D., Muraviev I.P., Polyakov P. Boolean factor analysis by attractor neural network. IEEE Trans. Neural Networks. 2007. Vol. 18, N 3. P. 698–707. https://doi.org/10.1109/TNN.2007.891664

37 Frolov A.A., Husek D., Polyakov P.Y. Recurrent neural-network-based boolean factor analysis and its application to word clustering. IEEE Trans. Neural Networks. 2009.Vol. 20, N 7. P. 1073–1086. https://doi.org/10.1109/TNN.2009.2016090

38 Frolov A.A., Husek D., Rachkovskij D.A. Time of searching for similar binary vectors in associative memory. Cybernetics and Systems Analysis. 2006. Vol. 42, N 5. P. 615–623. https://doi.org/10.1007/s10559-006-0098-z

39 Frolov A.A., Muraviev I.P. Neural models of associative memory. Moscow: Nauka, 1987. 161 p.

40 Frolov A.A., Muraviev I.P. Information characteristics of neural networks. Moscow: Nauka, 1988. 160 p.

41 Frolov A.A., Muraviev I.P. Information characteristics of neural networks capable of associative learning based on Hebbian plasticity. Network. 1993. Vol. 4, N 4. P. 495–536. https://doi.org/10.1088/0954-898X_4_4_006

42 Frolov A., Kartashov A., Goltsev A., Folk R. Quality and efficiency of retrieval for Willshaw-like autoassociative networks. Correction. Network. 1995. Vol. 6. P. 513–534. https://doi.org/10.1088/0954-898X_6_4_001

43 Frolov A., Kartashov A., Goltsev A., Folk R. Quality and efficiency of retrieval for Willshaw-like autoassociative networks. Recognition. Network. 1995. Vol. 6. P. 535–549. https://doi.org/10.1088/0954-898X_6_4_002

44 Frolov A.A., Rachkovskij D.A., Husek D. On information characteristics of Willshaw-like auto-associative memory. Neural Network World. 2002. Vol. 12, N 2. P. 141–157.

45 Gallant S.I., Okaywe T.W. Representing objects, relations, and sequences. Neural Computation. 2013. Vol. 25, N 8. P. 2038–2078. https://doi.org/10.1162/NECO_a_00467

46 Gardner E. Multiconnected neural network models. Journal of Physics A. 1987. Vol. 20, N 11. P. 3453–3464. https://doi.org/10.1088/0305-4470/20/11/046

47 Gardner E. The space of interactions in neural-network models. J. Phys. A. 1988. Vol. 21. P. 257–270. https://doi.org/10.1088/0305-4470/21/1/030

48 Gibson W. G., Robinson J. Statistical analysis of the dynamics of a sparse associative memory. Neural Networks. 1992. Vol. 5. P. 645–661. https://doi.org/10.1016/S0893-6080(05)80042-5

49 Golomb D., Rubin N., Sompolinsky H. Willshaw model: Associative memory with sparse coding and low firing rates. Phys. Rev. A. 1990. Vol. 41, N 4. P. 1843–1854.

50 Goltsev A. An assembly neural network for texture segmentation. Neural Networks. 1996. Vol. 9, N 4. P. 643–653. https://doi.org/10.1016/0893-6080(95)00136-0

51 Goltsev A. Secondary learning in the assembly neural network. Neurocomputing. 2004. Vol. 62. P. 405–426. https://doi.org/10.1016/j.neucom.2004.06.001

52 Goltsev A., Husek D. Some properties of the assembly neural networks. Neural Network World. 2002. Vol. 12, N 1. P. 15–32.

53 Goltsev A., Wunsch D.C. Generalization of features in the assembly neural networks. International Journal of Neural Systems. 2004. Vol. 14, N 1. P. 1–18. https://doi.org/10.1142/S0129065704001838

54 Goltsev A.D. Neural networks with the assembly organization. Kiev: Naukova Dumka, 2005. 200 p.

55 Goltsev A., Gritsenko V. Modular neural networks with Hebbian learning rule. Neurocomputing. 2009. Vol. 72. P. 2477–2482. https://doi.org/10.1016/j.neucom.2008.11.011

56 Goltsev A., Gritsenko V. Modular neural networks with radial neural columnar architecture. Biologically Inspired Cognitive Architectures. 2015. Vol. 13, P. 63–74. https://doi.org/10.1016/j.bica.2015.06.001

57 Goswami M., Pagh R., Silvestri F., Sivertsen J. Distance sensitive bloom filters without false negatives. Proc. SODA’17. 2017. P. 257–269. https://doi.org/10.1137/1.9781611974782.17

58 Gripon V., Berrou C. Sparse neural networks with large learning diversity. IEEE Trans. on Neural Networks. 2011. Vol. 22, N 7. P. 1087–1096. https://doi.org/10.1109/TNN.2011.2146789

59 Gripon V., Heusel J., Lowe M., Vermet F. A comparative study of sparse associative memories. Journal of Statistical Physics. 2016. Vol. 164. P. 105–129. https://doi.org/10.1007/s10955-016-1530-z

60 Gripon V., Lowe M., Vermet F. Associative memories to accelerate approximate nearest neighbor search. ArXiv:1611.05898. 10 Nov 2016.

61 Gritsenko V.I., Rachkovskij D.A., Goltsev A.D., Lukovych V.V., Misuno I.S., Revunova E.G., Slipchenko S.V., Sokolov A.M., Talayev S.A. Neural distributed representation for intelligent information technologies and modeling of thinking. Cybernetics and Computer Engineering. 2013. Vol. 173. P. 7–24.

62 Guo J. K., Brackle D. V., Lofaso N., Hofmann M. O. Vector representation for sub-graph encoding to resolve entities. Procedia Computer Science. 2016. Vol. 95. P. 327–334. https://doi.org/10.1016/j.procs.2016.09.342

63 Gutfreund H. Neural networks with hierarchically correlated patterns. Physical Review A. 1988. Vol. 37, N 2. P. 570–577. https://doi.org/10.1103/PhysRevA.37.570

64 Hacene G. B., Gripon V., Farrugia N., Arzel M., Jezequel M. Finding all matches in a database using binary neural networks. Proc. COGNITIVE’17. 2017. P. 59–64.

65 Hebb D.O. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley, 1949. 335 p.

66 Herrmann M., Ruppin E., Usher M. A neural model of the dynamic activation of memory. Biol. Cybern. 1993. Vol. 68. P. 455–463. https://doi.org/10.1007/BF00198778

67 Heusel J., Lowe M., Vermet F. On the capacity of an associative memory model based on neural cliques. Statist. Probab. Lett. 2015. Vol. 106. P. 256–261. https://doi.org/10.1016/j.spl.2015.07.026

68 Hopfield J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA. 1982. Vol. 79, N 8. P. 2554–2558. https://doi.org/10.1073/pnas.79.8.2554

69 Hopfield J.J., Feinstein D.I., Palmer R.G. “Unlearning” has a stabilizing effect in collective memories. Nature. 1983. Vol. 304. P. 158–159. https://doi.org/10.1038/304158a0

70 Horn D., Usher M. Capacities of multiconnected memory models. Journal de Physique, 1988. Vol. 49, N 3. P. 389–395. https://doi.org/10.1051/jphys:01988004903038900

71 Horner H., Bormann D., Frick M., Kinzelbach H., Schmidt A. Transients and basins of attraction in neutral network models. Z. Physik B. 1989. Vol. 76. P. 381–398. https://doi.org/10.1007/BF01321917

72 Howard M. W., Kahana M. J. A distributed representation of temporal context. Journal of Mathematical Psychology. 2002. Vol. 46. P. 269–299. https://doi.org/10.1006/jmps.2001.1388

73 Iscen A., Furon T., Gripon V., Rabbat M., Jegou H. Memory vectors for similarity search in high-dimensional spaces. arXiv:1412.3328. 1 Mar 2017.

74 Kakeya H., Kindo T. Hierarchical concept formation in associative memory composed of neuro-window elements. Neural Networks. 1996. Vol. 9, N 7. P. 1095–1098. https://doi.org/10.1016/0893-6080(96)00030-5

75 Kanerva P. Sparse Distributed Memory. Cambridge: MIT Press, 1988. 155 p.

76 Kanerva P. Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation. 2009. Vol. 1, N 2. P. 139–159. https://doi.org/10.1007/s12559-009-9009-8

77 Kanter I. Potts-glass models of neural networks. Physical Rev. A. 1988. V. 37, N 7. P. 2739–2742. https://doi.org/10.1103/PhysRevA.37.2739

78 Karbasi A., Salavati A. H., Shokrollahi A. Iterative learning and denoising in convolutional neural associative memories. Proc. ICML’13. 2013. P. 445–453.

79 Kartashov A., Frolov A., Goltsev A., Folk R. Quality and efficiency of retrieval for Willshaw-like autoassociative networks. Willshaw–Potts model. Network. 1997. Vol. 8, N 1. P. 71–86. https://doi.org/10.1088/0954-898X_8_1_007

80 Kinzel W. Learning and pattern recognition in spin glass models. Z. Physik B. 1985. Vol. 60. P. 205–213. https://doi.org/10.1007/BF01304440

81 Knoblauch A., Palm G., Sommer F. T. Memory capacities for synaptic and structural plasticity. Neural Computation. 2010. Vol. 22, N 2. P. 289–341. https://doi.org/10.1162/neco.2009.08-07-588

82 Kleyko D., Khan S., Osipov E., Yong S. P. Modality Classification of Medical Images with Distributed Representations based on Cellular Automata Reservoir Computing. Proc. ISBI’17. 2017. P. 1–4. https://doi.org/10.1109/ISBI.2017.7950697

83 Kleyko D., Lyamin N., Osipov E., Riliskis L. Dependable MAC layer architecture based on holographic data representation using hyperdimensional binary spatter codes. Proc. MACOM’12. 2012. P. 134–145

84 Kleyko D., Osipov E. Brain-like classifier of temporal patterns. Proc. ICCOINS’14. 2014. P. 1–6. https://doi.org/10.1109/ICCOINS.2014.6868349

85 Kleyko D., Osipov E. On bidirectional transitions between localist and distributed representations: The case of common substrings search using vector symbolic architecture. Procedia Computer Science. 2014. Vol. 41. P. 104–113. https://doi.org/10.1016/j.procs.2014.11.091

86 Kleyko D., Osipov E., Gayler R. W. Recognizing permuted words with Vector Symbolic Architectures: A Cambridge test for machines. Procedia Computer Science. 2016. Vol. 88. P. 169–175. https://doi.org/10.1016/j.procs.2016.07.421

87 Kleyko D., Osipov E., Gayler R. W., Khan A. I., Dyer A. G. Imitation of honey bees’ concept learning processes using vector symbolic architectures. Biologically Inspired Cognitive Architectures. 2015. Vol. 14. P. 55–72. https://doi.org/10.1016/j.bica.2015.09.002

88 Kleyko D., Osipov E., Papakonstantinou N., Vyatkin V., Mousavi A. Fault detection in the hyperspace: Towards intelligent automation systems. Proc. INDIN’15. 2015. P. 1219–1224. https://doi.org/10.1109/INDIN.2015.7281909

89 Kleyko D., Osipov E., Senior A., Khan A. I., Sekercioglu Y. A. Holographic Graph Neuron: a bio-inspired architecture for pattern processing. IEEE Trans. Neural Networks and Learning Systems. 2017. Vol. 28, N 6. P. 1250–1262. https://doi.org/10.1109/TNNLS.2016.2535338

90 Kleyko D., Osipov E., Rachkovskij D. Modification of holographic graph neuron using sparse distributed representations. Procedia Computer Science. 2016. Vol. 88. P. 39–45. https://doi.org/10.1016/j.procs.2016.07.404

91 Kleyko D., Rahimi A., Rachkovskij D.A., Osipov E., Rabaey J.M. Classification and recall with binary hyperdimensional computing: trade-offs in choice of density and mapping characteristics (2017, Submitted).

92 Kleyko D., Rahimi A, Osipov E. Autoscaling Bloom Filter: controlling trade-off between true and false. arXiv:1705.03934. 10 May 2017

93 Kohonen T. Content-Addressable Memories. Berlin: Springer, 1987. 388 p. https://doi.org/10.1007/978-3-642-83056-3

94 Kohring G.A. Neural networks with many-neuron interactions. Journal de Physique. 1990. Vol. 51, N 2. P. 145–155. https://doi.org/10.1051/jphys:01990005102014500

95 Krotov D., Hopfield J.J. Dense associative memory for pattern recognition. Proc. NIPS’16. 2016. P. 1172–1180.

96 Krotov D., Hopfield J.J. Dense associative memory is robust to adversarial inputs. arXiv:1701.00939. 4 Jan 2017

97 Kryzhanovsky B.V., Mikaelian A.L., Fonarev A.B. Vector neural net identifying many strongly distorted and correlated patterns. Proc. SPIE. 2005. Vol. 5642. P. 124–133. https://doi.org/10.1117/12.572334

98 Kussul E. M. Associative neuron-like structures. Kiev: Naukova Dumka, 1992.

99 Kussul E. M., Baidyk T. N. A modular structure of associative-projective neural networks. Preprint 93-6. Kiev, Ukraine: GIC, 1993.

100 Kussul E.M., Fedoseyeva T.V. On audio signals recognition in neural assembly structures. Preprint 87-28. Kiev: Inst. of Cybern. 1987. 21 pp.

101 Kussul E., Makeyev O., Baidyk T., Calderon Reyes D. Neural network with ensembles. Proc. IJCNN’10. 2010. P. 2955–2961. https://doi.org/10.1109/IJCNN.2010.5596574

102 Kussul E.M., Rachkovskij D.A. Multilevel assembly neural architecture and processing of sequences. In Neurocomputers and Attention, V. II: Connectionism and neurocomputers. Manchester and New York: Manchester University Press, 1991. P.577–590.

103 Kussul E.M., Rachkovskij D.A., Wunsch D.C. The random subspace coarse coding scheme for real-valued vectors. Proc. IJCNN’99. 1999. P. 450-455. https://doi.org/10.1109/IJCNN.1999.831537

104 Lansner A. Associative memory models: From the cell assembly theory to biophysically detailed cortex simulations. Trends in Neurosciences. 2009. Vol. 32, N 3. P. 178–186. https://doi.org/10.1016/j.tins.2008.12.002

105 Lansner A., Ekeberg O. Reliability and speed of recall in an associative network. IEEE Trans. on Pattern Analysis and Machine Intelligence. 1985. Vol. 7. P. 490–498. https://doi.org/10.1109/TPAMI.1985.4767688

106 Levy S. D., Gayler R. Vector Symbolic Architectures: A new building material for artificial general intelligence. Proc. AGI’08. 2008. P. 414–418.

107 Lowe M. On the storage capacity of Hopfield models with correlated patterns. The Annals of Applied Probability. 1998. Vol. 8, N 4. P. 1216–1250. https://doi.org/10.1214/aoap/1028903378

108 Lowe M., Vermet F. The storage capacity of the Hopfield model and moderate deviations. Statistics and Probability Letters. 2005. Vol. 75. P. 237–248. https://doi.org/10.1016/j.spl.2005.06.001

109 Lowe M., Vermet F. The capacity of q-state Potts neural networks with parallel retrieval dynamics. Statistics and Probability Letters. 2007. Vol. 77, N 4. P. 1505–1514. https://doi.org/10.1016/j.spl.2007.03.030

110 Mazumdar A., Rawat A.S. Associative memory via a sparse recovery model. Proc. NIPS’15. 2015. P. 2683–2691.

111 Mazumdar A., Rawat A.S. Associative memory using dictionary learning and expander decoding. Proc. AAAI’17. 2017.

112 McEliece R.J., Posner E.C., Rodemich E.R., Venkatesh S.S. The capacity of the Hopfield associative memory. IEEE Trans. Information Theory. 1987. Vol. 33, N 4. P. 461–482. https://doi.org/10.1109/TIT.1987.1057328

113 Misuno I.S., Rachkovskij D.A., Slipchenko S.V. Vector and distributed representations reflecting semantic relatedness of words. Math. Machines and Systems. 2005. N 3. P. 50–67.

114 Misuno I.S., Rachkovskij D.A., Slipchenko S.V., Sokolov A.M. Searching for text information with the help of vector representations. Probl. Progr. 2005. N 4. P. 50–59.

115 Nadal J.-P. Associative memory: on the (puzzling) sparse coding limit. J. Phys. A. 1991. Vol. 24. P. 1093–1101. https://doi.org/10.1088/0305-4470/24/5/023

116 Norouzi M., Punjani A., Fleet D. J. Fast exact search in Hamming space with multi-index hashing. IEEE Trans. PAMI. 2014. Vol. 36, N 6. P. 1107–1119. https://doi.org/10.1109/TPAMI.2013.231

117 Onizawa N., Jarollahi H., Hanyu T., Gross W.J. Hardware implementation of associative memories based on multiple-valued sparse clustered networks. IEEE Journal on Emerging and Selected Topics in Circuits and Systems. 2016. Vol. 6, N 1. P. 13–24. https://doi.org/10.1109/JETCAS.2016.2528721

118 Palm G. On associative memory. Biological Cybernetics. 1980. Vol. 36. P. 19–31. https://doi.org/10.1007/BF00337019

119 Palm G. Memory capacity of local rules for synaptic modification. Concepts in Neuroscience. 1991. Vol. 2, N 1. P. 97–128.

120 Palm G. Neural associative memories and sparse coding. Neural Networks. 2013. Vol. 37. P. 165–171. https://doi.org/10.1016/j.neunet.2012.08.013

121 Palm G., Knoblauch A., Hauser F., Schuz A. Cell assemblies in the cerebral cortex. Biol. Cybern. 2014. Vol. 108, N 5. P. 559–572. https://doi.org/10.1007/s00422-014-0596-4

122 Palm G., Sommer F.T. Information capacity in recurrent McCulloch-Pitts networks with sparsely coded memory states. Network. 1992. Vol. 3. P. 177–186. https://doi.org/10.1088/0954-898X_3_2_006

123 Parga N., Virasoro M.A. The ultrametric organization of memories in a neural network. Journal de Physique. 1986. Vol. 47, N 11. P. 1857–1864. https://doi.org/10.1051/jphys:0198600470110185700

124 Peretto P., Niez J.J. Long term memory storage capacity of multiconnected neural networks. Biol. Cybern. 1986. Vol. 54, N 1. P. 53–63. https://doi.org/10.1007/BF00337115

125 Personnaz L., Guyon I., Dreyfus G. Collective computational properties of neural networks: New learning mechanisms. Phys. Rev. A. 1986. Vol. 34, N 5. P. 4217–4228. https://doi.org/10.1103/PhysRevA.34.4217

126 Plate T. Holographic reduced representation: Distributed representation for cognitive structures. Stanford: CSLI Publications, 2003. 300 p.

127 Rachkovskij D.A. Representation and processing of structures with binary sparse distributed codes. IEEE Transactions on Knowledge and Data Engineering. 2001. Vol. 13, N 2. P. 261–276. https://doi.org/10.1109/69.917565

128 Rachkovskij D.A. Some approaches to analogical mapping with structure sensitive distributed representations. Journal of Experimental and Theoretical Artificial Intelligence. 2004. Vol. 16, N 3. P. 125–145. https://doi.org/10.1080/09528130410001712862

129 Rachkovskij D.A. Formation of similarity-reflecting binary vectors with random binary projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N 2. P. 313–323. https://doi.org/10.1007/s10559-015-9723-z

130 Rachkovskij D.A. Estimation of vectors similarity by their randomized binary projections. Cybernetics and Systems Analysis. 2015. Vol. 51, N 5. P. 808–818. https://doi.org/10.1007/s10559-015-9774-1

131 Rachkovskij D.A. Binary vectors for fast distance and similarity estimation. Cybernetics and Systems Analysis. 2017. Vol. 53, N 1. P. 138–156. https://doi.org/10.1007/s10559-017-9914-x

132 Rachkovskij D.A. Distance-based index structures for fast similarity search. Cybernetics and Systems Analysis. 2017. Vol. 53, N 4. https://doi.org/10.1007/s10559-017-9966-y

133 Rachkovskij D.A. Index structures for fast similarity search of binary vectors. Cybernetics and Systems Analysis. 2017. Vol. 53, N 5. https://doi.org/10.1007/s10559-017-9983-x

134 Rachkovskij D.A., Kussul E.M., Baidyk T.N. Building a world model with structure-sensitive sparse binary distributed representations. Biologically Inspired Cognitive Architectures. 2013. Vol. 3. P. 64–86. https://doi.org/10.1016/j.bica.2012.09.004

135 Rachkovskij D.A., Misuno I.S., Slipchenko S.V. Randomized projective methods for construction of binary sparse vector representations. Cybernetics and Systems Analysis. 2012. Vol. 48, N 1. P. 146–156. https://doi.org/10.1007/s10559-012-9384-0

136 Rachkovskij D.A., Slipchenko S.V. Similarity-based retrieval with structure-sensitive sparse binary distributed representations. Computational Intelligence. 2012. Vol. 28, N 1. P. 106–129. https://doi.org/10.1111/j.1467-8640.2011.00423.x

137 Rachkovskij D.A., Slipchenko S.V., Kussul E.M., Baidyk T.N. Sparse binary distributed encoding of scalars. J. of Automation and Inf. Sci. 2005. Vol. 37, N 6. P. 12–23. https://doi.org/10.1615/JAutomatInfScien.v37.i6.20

138 Rachkovskij D.A., Slipchenko S.V., Kussul E.M., Baidyk T.N. Properties of numeric codes for the scheme of random subspaces RSC. Cybernetics and Systems Analysis. 2005. Vol. 41, N 4. P. 509–520. https://doi.org/10.1007/s10559-005-0086-8

139 Rachkovskij D.A., Slipchenko S.V., Misuno I.S., Kussul E.M., Baidyk T.N. Sparse binary distributed encoding of numeric vectors. J. of Automation and Inf. Sci. 2005. Vol. 37, N 11. P. 47–61. https://doi.org/10.1615/JAutomatInfScien.v37.i11.60

140 Reznik A.M., Sitchov A.S., Dekhtyarenko O.K., Nowicki D.W. Associative memories with killed neurons: the methods of recovery. Proc. IJCNN’03. 2003. P. 2579–2582. https://doi.org/10.1109/IJCNN.2003.1223972

141 Rizzuto D.S., Kahana M.J. An autoassociative neural network model of paired-associate learning. Neural Computation. 2001. Vol. 13. P. 2075–2092. https://doi.org/10.1162/089976601750399317

142 Rolls E.T. Advantages of dilution in the connectivity of attractor networks in the brain. Biologically Inspired Cognitive Architectures. 2012. Vol. 1. P. 44–54. https://doi.org/10.1016/j.bica.2012.03.003

143 Romani S., Pinkoviezky I., Rubin A., Tsodyks M. Scaling laws of associative memory retrieval. Neural Computation. 2013. Vol. 25, N 10. P. 2523–2544. https://doi.org/10.1162/NECO_a_00499

144 Rosenfeld R., Touretzky D.S. Coarse-coded symbol memories and their properties. Complex Systems. 1988. Vol. 2, N 4. P. 463–484.

145 Salavati A.H., Kumar K.R., Shokrollahi A. Nonbinary associative memory with exponential pattern retrieval capacity and iterative learning. IEEE Trans. Neural Networks and Learning Systems. 2014. Vol. 25, N 3. P. 557–570. https://doi.org/10.1109/TNNLS.2013.2277608

146 Schwenker F., Sommer F.T., Palm G. Iterative retrieval of sparsely coded associative memory patterns. Neural Networks. 1996. Vol. 9. P. 445–455. https://doi.org/10.1016/0893-6080(95)00112-3

147 Shrivastava A., Li P. In defense of minhash over simhash. Proc. AISTATS’14. 2014. P. 886–894.

148 Slipchenko S.V., Rachkovskij D.A. Analogical mapping using similarity of binary distributed representations. International Journal Information Theories and Applications. 2009. Vol. 16, N 3. P. 269–290.

149 Tarkoma S., Rothenberg C.E., Lagerspetz E. Theory and practice of Bloom filters for distributed systems. IEEE Communications Surveys and Tutorials. 2012. Vol. 14, N 1. P. 131–155. https://doi.org/10.1109/SURV.2011.031611.00024

150 Tsodyks M. Associative memory in asymmetric diluted network with low level of activity. Europhysics Letters. 1988. Vol. 7, N 3. P. 203–208. https://doi.org/10.1209/0295-5075/7/3/003

151 Tsodyks M. Associative memory in neural networks with the Hebbian learning rule. Modern Physics Letters B. 1989. Vol. 3, N 7. P. 555–560. https://doi.org/10.1142/S021798498900087X

152 Tsodyks M.V. Associative memory in neural networks with binary synapses. Modern Physics Letters B. 1990. Vol. 4. P. 713–716. https://doi.org/10.1142/S0217984990000891

153 Tsodyks M.V. Hierarchical associative memory in neural networks with low activity level. Modern Physics Letters B. 1990. Vol. 4, N 4. P. 259–265. https://doi.org/10.1142/S0217984990000325

154 Tsodyks M., Feigelman M. The enhanced storage capacity in neural networks with low activity level. Europhysics Letters. 1988. Vol. 6, N 2. P. 101–105. https://doi.org/10.1209/0295-5075/6/2/002

155 Vedenov A. A., Ezhov A.A., Knizhnikova L.A., Levchenko E.B. “Spurious memory” in model neural networks. Preprint IAE-4395/1. 1987. Moscow: KIAE.

156 Willshaw D. Holography, associative memory and inductive generalization. In Parallel Models of Associative Memory. Hillside: Lawrence Erlbaum Associates. 1981. P. 83–104.

157 Willshaw D.J., Buneman O.P., Longuet-Higgins H.C. Non-holographic associative memory. Nature. 1969. Vol. 222. P. 960–962. https://doi.org/10.1038/222960a0

158 Yang X., Vernitski A., Carrea L. An approximate dynamic programming approach for improving accuracy of lossy data compression by Bloom filters. European Journal of Operational Research. 2016. Vol. 252, N 3. P. 985–994. https://doi.org/10.1016/j.ejor.2016.01.042

159 Yu C., Gripon V., Jiang X., Jegou H. Neural associative memories as accelerators for binary vector search. Proc. COGNITIVE’15. 2015. P. 85–89.

Received 15.04.2017