Instantaneously trained neural networks

Instantaneously trained neural networks are feedforward artificial neural networks that create a new hidden neuron for each training sample; generalization is immediate because each sample is separated from the rest by the nearest hyperplane. The main implementations are the CC1 and CC4 networks, which differ in how the neighborhood of generalization is defined. These networks use unary coding for data representation. First proposed by Subhash Kak in 1993, they have since been applied to short-term learning, web search, time series prediction, document classification, deep learning, and data mining. Implementations exist in software and in hardware, including optical neural networks.

CC4 network

In the CC4 network, which is a three-layer network, the number of input nodes is one more than the length of the training vector, with the extra node serving as a biasing node whose input is always 1. For binary input vectors, the weights from the input nodes to the hidden neuron of index j corresponding to a given trained vector are given by:

w_{ij} = \begin{cases} -1, & \text{for } x_i = 0 \\ +1, & \text{for } x_i = 1 \\ r - s + 1, & \text{for } i = n + 1 \end{cases}

where r is the radius of generalization and s is the Hamming weight (the number of 1s) of the training vector. From the hidden layer to the output layer, the weights are +1 or −1 depending on whether or not the vector belongs to the given output class. Neurons in the hidden and output layers output 1 if the weighted sum of their inputs is zero or positive, and 0 if it is negative:

y = \begin{cases} 1, & \text{if } \sum x_i \geq 0 \\ 0, & \text{if } \sum x_i < 0 \end{cases}
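
These rules translate directly into code. Below is a minimal sketch in Python/NumPy (the function names and array layout are this sketch's own, not taken from the cited papers). A useful check on the bias weight: for an input at Hamming distance d from a stored vector, the weighted sum reaching that vector's hidden neuron works out to (r + 1) − d, which is how the term r − s + 1 realizes the radius of generalization.

```python
import numpy as np

def cc4_train(X, classes, r, n_classes):
    """Build CC4 weights: one hidden neuron per binary training vector."""
    X = np.asarray(X)
    classes = np.asarray(classes)
    m, n = X.shape
    # Input-to-hidden weights: +1 where x_i = 1, -1 where x_i = 0.
    W_in = np.where(X == 1, 1.0, -1.0)                  # shape (m, n)
    s = X.sum(axis=1)                                   # Hamming weight of each sample
    bias = np.reshape(r - s + 1, (m, 1))                # weight from the always-1 bias node
    W_in = np.hstack([W_in, bias]).T                    # shape (n + 1, m)
    # Hidden-to-output weights: +1 if the neuron's stored vector belongs
    # to the output class, -1 otherwise.
    W_out = np.where(classes[:, None] == np.arange(n_classes), 1.0, -1.0)
    return W_in, W_out

def cc4_predict(W_in, W_out, x):
    """One feedforward pass; both layers use the step rule (1 if sum >= 0)."""
    x = np.append(np.asarray(x, dtype=float), 1.0)      # bias input is always 1
    hidden = (x @ W_in >= 0).astype(float)
    return (hidden @ W_out >= 0).astype(int)
```

For example, with two stored patterns and r = 1, an input one bit away from the first pattern activates that pattern's hidden neuron and hence its class output:

```python
X = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
W_in, W_out = cc4_train(X, classes=[0, 1], r=1, n_classes=2)
print(cc4_predict(W_in, W_out, [1, 0, 1, 1]))  # -> [1 0]
```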

Other networks

The CC4 network has also been modified to accept non-binary inputs with varying radii of generalization, so that it effectively provides a CC1 implementation.[11]
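
As an illustration of the varying-radius idea only (the actual CC1 construction is specified in the Tang and Kak paper): in the sketch above, the bias row r − s + 1 is computed elementwise, so passing a per-sample array of radii instead of a scalar gives each hidden neuron its own neighborhood of generalization.

```python
# Hypothetical per-sample radii, reusing cc4_train and X from the sketch
# above: each stored vector now generalizes over its own neighborhood.
W_in, W_out = cc4_train(X, classes=[0, 1], r=np.array([0, 2]), n_classes=2)
```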

Among feedback networks, the Willshaw network and the Hopfield network are able to learn instantaneously.

References

  1. Kak, S. On training feedforward neural networks. Pramana, vol. 40, pp. 35-42, 1993. https://link.springer.com/article/10.1007/BF02898040

  2. Kak, S. New algorithms for training feedforward neural networks. Pattern Recognition Letters 15: 295-298, 1994. https://www.sciencedirect.com/science/article/pii/0167865594900620

  3. Kak, S. On generalization by neural networks, Information Sciences 111: 293-302, 1998. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.3290&rep=rep1&type=pdf

  4. Kak, S. On training feedforward neural networks. Pramana, vol. 40, pp. 35-42, 1993. https://link.springer.com/article/10.1007/BF02898040

  5. Kak, S. Faster web search and prediction using instantaneously trained neural networks. IEEE Intelligent Systems 14: 79-82, November/December 1999.

  6. Zhang, Z. et al., TextCC: New feedforward neural network for classifying documents instantly. Advances in Neural Networks ISNN 2005. Lecture Notes in Computer Science 3497: 232-237, 2005. https://link.springer.com/chapter/10.1007/11427445_37

  7. Zhang, Z. et al., Document Classification Via TextCC Based on Stereographic Projection and for Deep Learning, International Conference on Machine Learning and Cybernetics, Dalian, 2006.

  8. Schmidhuber, J. Deep Learning in Neural Networks: An Overview, arXiv:1404.7828, 2014. https://arxiv.org/abs/1404.7828

  9. Zhu, J. and G. Milne, Implementing Kak Neural Networks on a Reconfigurable Computing Platform, Lecture Notes in Computer Science Volume 1896: 260-269, 2000. https://link.springer.com/chapter/10.1007/3-540-44614-1_29

  10. Shortt, A., J.G. Keating, L. Moulinier, C.N. Pannell, Optical implementation of the Kak neural network, Information Sciences 171: 273-287, 2005. http://eprints.maynoothuniversity.ie/8663/1/JK-Optical-2005.pdf

  11. Tang, K.W. and Kak, S. Fast classification networks for signal processing. Circuits, Systems, Signal Processing 21, 2002, pp. 207-224. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.444.303&rep=rep1&type=pdf