History of research

  1. S. Kaczmarz, Bulletin international de l’Académie polonaise des sciences et des lettres. Classe des sciences mathématiques et naturelles. Série A, Sciences mathématiques, 35:355-357, 1937.

    Introduced an iterative method for solving systems of linear algebraic equations by sequentially projecting an approximation from one hyperplane onto another. The method is very effective when the equations involved form a nearly orthogonal system of hyperplanes.

  2. V.I. Arnold, Doklady Akademii Nauk SSSR 114(4):679-681 (1957).

    Representation of functions of three variables as sums of univariate functions.

  3. A.N. Kolmogorov, Doklady Akademii Nauk SSSR 114(5):953-956 (1957).

    Generalization of the previous result to arbitrary continuous multivariate functions.

  4. G.G. Lorentz, Am. Math. Mon. 69(6):469-485 (1962).

    Introduced a different adequate form of the Kolmogorov-Arnold representation.

  5. D.A. Sprecher, Trans. Am. Math. Soc. 115(3):340-355 (1965).

    Introduced another adequate form of the Kolmogorov-Arnold representation, distinct from that of Lorentz.

  6. B.L. Fridman, Doklady Akademii Nauk SSSR 177(5):1019-1022 (1967).

    Proved that the inner functions of the Kolmogorov-Arnold representation can be chosen to satisfy a Lipschitz condition and to be independent of the given multivariate function.

  7. D.A. Sprecher, J. Math. Anal. Appl. 38(1):208-213 (1972).

    Proved that the inner functions can be chosen monotonic.

  8. V.V. Krylov, Avtomatika i Telemehanika 40(5):82-88 (1979).

    Studied the properties of discrete Urysohn operators, which are the building blocks of the Kolmogorov-Arnold representation.

  9. K.-H. Meyn, Numerische Mathematik 42(2):161-172 (1983).

    Proved convergence of a nonlinear extension of the Kaczmarz method.

  10. R. Hecht-Nielsen, in Proceedings of the International Conference on Neural Networks, pp. 11-14 (1987).

    First clear argument that the Kolmogorov-Arnold representation is equivalent to a neural network.

  11. T.J. Hastie, R.J. Tibshirani, Generalized Additive Models, Statistical Science 1(3):297-318 (1986).

    Construction of the approximate GAM (generalized additive model).

  12. V. Kůrková, Neural Networks 5(3):501-506 (1992).

    Proposed the first neural network with two hidden layers based on the Kolmogorov-Arnold representation.

  13. M. Nees, Journal of Computational and Applied Mathematics 54(2):239-250 (1994).

    Provided a constructive proof of the Kolmogorov superposition theorem.

  14. D.A. Sprecher, Neural Networks 9(5):765-772 (1996).

    Gives an explicit numerical implementation of the hidden layer that also enables implementation of the output layer.

  15. D.A. Sprecher, Neural Networks 10(3):447-457 (1997).

    Presents a numerical algorithm for parallel computation of the outer functions in Kolmogorov superpositions.

  16. M. Köppen, in Artificial Neural Networks – ICANN 2002, pp. 474-479 (2002).

    Customization of Sprecher's algorithm for resampling an image function.

  17. B. Igelnik, N. Parikh, IEEE Transactions on Neural Networks 14(4):725-733 (2003).

    First use of cubic splines for approximating the Kolmogorov-Arnold representation.

  18. M. Coppejans, Journal of Econometrics 123(1):1-31 (2004).

    Another application of cubic splines, for a slightly different form of the nested-function tree.

  19. D. Bryant, PhD thesis, University of Central Florida (2008).

    Analysis of the Kolmogorov superposition theorem and its implementation in applications with low- and high-dimensional data.

  20. J. Braun, M. Griebel, Constructive Approximation 30(3):653-675 (2009).

    Theoretical study of the properties of the inner functions.

  21. P.-E. Leni, PhD thesis, Université de Bourgogne (2010).

    Application of the KA model to image processing.

  22. X. Liu, PhD thesis, Imperial College London (2015).

    Application of the KA model to image processing.

  23. J. Actor, MA thesis, Rice University (2018).

    A particular algorithm for KA approximation that exploits the monotonicity of the inner functions.

  24. M. Poluektov, A. Polar, Journal of the Franklin Institute 357:3865-3892 (2020).

    Kaczmarz algorithm for the identification of discrete Urysohn operators, which are the building blocks of the KA representation.

  25. H. Montanelli, H. Yang, Neural Networks 129:1-6 (2020).

    Approximation of the KA representation with deep ReLU networks.

  26. A. Polar, M. Poluektov, Engineering Applications of Artificial Intelligence 99:104137 (2021).

    Training of KA networks with the Kaczmarz algorithm and piecewise-linear functions.

  27. J. Schmidt-Hieber, Neural Networks 137:119-126 (2021).

    Improvements in the use of deep ReLU networks for the KA representation.

  28. H. van Deventer, P. J. van Rensburg et al., arXiv:2205.06376 (2022).

    Another example of the use of cubic splines for KA approximation.

  29. M. Poluektov, A. Polar, arXiv:2305.08194 (2023).

    Kaczmarz algorithm with arbitrary basis functions and a proof of local convergence.

  30. A. Ismayilova, V.E. Ismailov, Neural Networks 176:106333 (2024).

    Theoretical study of the properties of KAN functions; suggests that the functions used can be discontinuous.

  31. Z. Liu, Y. Wang et al., arXiv:2404.19756 (2024).

    Training of KANs with a Broyden-type algorithm, pruning, and application of KANs to the numerical solution of partial differential equations.
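For reference, the representation that the works above study expresses any continuous function f on the n-dimensional unit cube as a superposition of univariate functions:

```latex
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)
```

Here the outer functions \(\Phi_q\) depend on \(f\), while the inner functions \(\varphi_{q,p}\) are continuous and, per the results of Fridman and Sprecher cited above, can be chosen Lipschitz, monotonic, and independent of \(f\).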
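Several of the works above (Kaczmarz's original paper, Meyn's convergence result, and the Poluektov-Polar training algorithms) build on the same projection iteration. A minimal sketch of the classical linear Kaczmarz method, with a hypothetical example system not taken from any of the cited papers:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Solve A x = b by cyclically projecting the current iterate
    onto each hyperplane a_i . x = b_i in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection of x onto the hyperplane a_i . x = b_i.
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Nearly orthogonal rows: the favorable case noted in entry 1,
# where the iteration converges in only a few sweeps.
A = np.array([[2.0, 0.1],
              [0.1, 3.0]])
b = np.array([1.0, 2.0])
x = kaczmarz(A, b)
```

The nonlinear extensions cited above replace the linear residual with one computed from the model being identified, but keep the same row-by-row projection structure.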