About this project
Contrary to popular belief
the Kolmogorov-Arnold network (KAN) is not the representation published in 1957
$$ M(x_1, x_2, x_3, ... , x_n) = \sum_{q=0}^{2n} \Phi_q\left(\sum_{p=1}^{n} \phi_{q,p}(x_{p})\right). $$
It is a chain of integral equations with 3D kernels, introduced by Pavel Urysohn in 1924:
$$u(t) = \int_{s_1}^{s_2} F[z(s),s,t]ds, \quad t \in [t_1, t_2]$$
$$z(s) = \int_{x_1}^{x_2} G[y(x),x,s]dx, \quad s \in [s_1, s_2]$$
$$...$$
which maps features $y$ into targets $u$ through the hidden layer $z$. The chain may be longer than two layers,
and the target may be a scalar in a particular case.
The kernels $G$ and $F$ actually make up the network; the training process consists of finding them given a set of $y$'s and $u$'s.
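In discrete form each kernel becomes a matrix of univariate functions, commonly represented as piecewise-linear lookup tables, so every output of a layer is a sum of function values of its inputs. The following is a minimal sketch of how such a discrete Urysohn layer could be evaluated; the class name, the grid parameters and the piecewise-linear representation are illustrative assumptions made for this example, not a description of any particular published implementation.

```python
import numpy as np

# Minimal sketch (illustrative naming): a discrete Urysohn layer u_i = sum_j f_ij(z_j),
# where every f_ij is a piecewise-linear function stored as a table of node values.
class UrysohnLayer:
    def __init__(self, n_in, n_out, n_nodes=8, z_min=0.0, z_max=1.0):
        self.z_min, self.z_max = z_min, z_max
        self.n_nodes = n_nodes
        # table[i, j, k] = value of f_ij at node k of a uniform grid on [z_min, z_max]
        self.table = np.zeros((n_out, n_in, n_nodes))

    def _grid(self, z):
        # Map each input value to its grid cell and linear interpolation weight.
        t = (np.clip(z, self.z_min, self.z_max) - self.z_min) / (self.z_max - self.z_min)
        pos = t * (self.n_nodes - 1)
        k = np.minimum(pos.astype(int), self.n_nodes - 2)
        w = pos - k
        return k, w

    def forward(self, z):
        # z: vector of n_in inputs; returns the vector of n_out outputs u.
        k, w = self._grid(z)
        left = self.table[:, np.arange(len(z)), k]       # f_ij at the left node
        right = self.table[:, np.arange(len(z)), k + 1]  # f_ij at the right node
        return ((1.0 - w) * left + w * right).sum(axis=1)
```

Stacking two such layers, with the outputs of the first passed as inputs to the second, gives the two-equation chain above with hidden layer $z$.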
Contrary to popular belief
identification of a KAN, given features and targets, is an easy and simple task due to one
counter-intuitive property of the Urysohn equation.
Math at the level of a provincial community college and basic programming
skills are more than enough to write the code from scratch.
Contrary to popular belief
the first successful implementation of KAN was in 2003.
Contrary to popular belief
the individual core element of KAN
$$
u = \sum_{j = 1}^{n} f_j(z_j),
$$
which is a replacement for a neuron, can be identified in a few milliseconds from 10,000 data records (vectors $z$ and scalars $u$) by 10 lines of code,
using one counter-intuitive property of the Urysohn equation.
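As an illustration of how such an element could be identified, here is a minimal sketch, assuming each $f_j$ is piecewise linear and stored as a lookup table, and using a Kaczmarz-style projection update that spreads each record's residual over the table nodes that contributed to the prediction. The function name, grid size, number of sweeps and relaxation factor are choices made for this example, not the exact published procedure.

```python
import numpy as np

def identify_additive_model(Z, U, n_nodes=12, sweeps=4, mu=0.5):
    """Sketch: fit u ~ sum_j f_j(z_j) with piecewise-linear f_j (lookup tables).

    Z: (records, n) matrix of inputs, U: (records,) vector of targets.
    Each f_j is a table of n_nodes values on a uniform grid over its input range.
    """
    records, n = Z.shape
    lo, hi = Z.min(axis=0), Z.max(axis=0)
    tables = np.zeros((n, n_nodes))

    pos = (Z - lo) / np.where(hi > lo, hi - lo, 1.0) * (n_nodes - 1)
    k = np.minimum(pos.astype(int), n_nodes - 2)   # left node index per value
    w = pos - k                                    # linear interpolation weight

    for _ in range(sweeps):
        for r in range(records):
            # Current prediction for this record.
            pred = sum((1 - w[r, j]) * tables[j, k[r, j]] + w[r, j] * tables[j, k[r, j] + 1]
                       for j in range(n))
            # Spread a relaxed share of the residual over the contributing nodes.
            delta = mu * (U[r] - pred) / n
            for j in range(n):
                tables[j, k[r, j]] += (1 - w[r, j]) * delta
                tables[j, k[r, j] + 1] += w[r, j] * delta
    return tables
```

Each record defines one linear equation in the table entries, so this update is essentially a relaxed Kaczmarz projection onto that equation's hyperplane (the exact projection would also divide by the squared norm of the coefficients rather than by $n$).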
Since 1995 I have shown it to every software engineer and computer scientist I came across.
The concept was even sitting on my site under the title "good non-demanded ideas".

We managed to publish this concept in 2020 after multiple rejections. One journal conducted a 1.5-year review process before the final rejection. This concept
lies at the foundation of fast KAN training.
Contrary to popular belief
a KAN can be very large, have a large number of hidden layers, handle hundreds of features and millions of training records,
and the training process can be significantly quicker than even a neural network heavily optimized for fast execution.
Significantly does not mean several times faster; it may be several hundred times faster.
Those who are interested in learning more can navigate the site; we recommend reading it from top to bottom.
The pictures of the people who contributed to the concepts used in the software published on this site are on the right. They are
Andrei Kolmogorov, Vladimir Arnold, Pavel Urysohn and Stefan Kaczmarz. Strictly speaking, all the theoretical knowledge
needed for training a KAN was already available in 1938. Kolmogorov and Arnold only drew the attention of the scientific world by exposing the problem,
so people started looking for identification of KAN in the 1960s and succeeded in the 2000s.
The very modest contribution of the authors of this project to KAN is also published. All research and software development
were conducted by the authors in their personal time, outside their job duties.
Discrete Urysohn Operator
Kolmogorov-Arnold Representation
Newton-Kaczmarz Method
Contact
Andrew Polar