Generative models represent a framework that is well suited for describing probabilistic computation. In technical contexts, generative models have been used successfully, for example, for the deconvolution of noisy, blurred images. Since information processing in the brain, too, seems to be stochastic in nature, generative models are also promising candidates for modelling neural computation. In many practical applications, and possibly also in the brain (because firing rates are positive), generative models are subject to nonnegativity constraints on their parameters and variables. The subcategory of update algorithms for these constrained models was termed nonnegative matrix factorization (NNMF) [2]. The objective of NNMF is to iteratively factorize scenes (images) $V_\mu$ into a linear additive superposition of elementary features $W$ with nonnegative activities $H_\mu$, such that $V_\mu \approx W H_\mu$. The Spike-by-Spike (SbS) network, which you will use, is a variant that works on single spikes [1].
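The factorization $V_\mu \approx W H_\mu$ can be made concrete with the classic multiplicative NNMF updates of Lee and Seung. The sketch below is illustrative only: all names and sizes are assumptions, and it operates on full batches, unlike the spike-by-spike variant this project uses.

```python
import numpy as np

# Minimal sketch of multiplicative NNMF updates (Lee & Seung) for
# V ≈ W H with nonnegative factors. All names and sizes are
# illustrative, not taken from this repository.
rng = np.random.default_rng(0)

n_pixels, n_features, n_scenes = 64, 10, 100
V = rng.random((n_pixels, n_scenes))    # nonnegative scenes, one per column
W = rng.random((n_pixels, n_features))  # elementary features
H = rng.random((n_features, n_scenes))  # hidden activities per scene

eps = 1e-12  # guards against division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)  # update hidden activities
    W *= (V @ H.T) / (W @ H @ H.T + eps)  # update features

print(np.linalg.norm(V - W @ H))  # reconstruction error of V ≈ W H
```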
![SBS_A.png](SBS_A.png)
**Caption: Example scheme of the general framework you will investigate. A scene $V$ (left), composed as a superposition of different features or objects, is coded into stochastic spike trains $S$ by the sensory system (middle). In this example, the scene comprises a fence, a tree, and a horse. With each incoming spike at one input channel $s$, the model updates the internal variables $h(i)$ of the hidden nodes $i$ (right). The generative network uses the likelihoods $p(s|i)$ to evolve its internal representation $H$ toward the best explanation, which maximizes the likelihood of the observation $S$. Alternatively, a generative model can be used to infer functional dependencies in its input.**
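A hedged sketch of the spike-driven update described in the caption, following the SbS literature [1]: with every incoming spike on channel $s_t$, the hidden state is pulled toward the nodes that best explain that spike. The function name and the parameter `epsilon` are illustrative assumptions, not this repository's API.

```python
import numpy as np

def sbs_update(h, p_s_given_i, s_t, epsilon=0.1):
    """One spike-by-spike update of the hidden state h (sketch).

    h           : (n_hidden,) nonnegative, sums to 1
    p_s_given_i : (n_channels, n_hidden) likelihoods p(s|i)
    s_t         : index of the input channel that just spiked
    """
    likelihood = p_s_given_i[s_t, :]               # p(s_t | i) for every i
    posterior = h * likelihood / (h @ likelihood)  # normalized responsibility
    return (h + epsilon * posterior) / (1.0 + epsilon)
```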
The SbS framework is universal, because it can learn Boolean functions and perform computations on the basis of single stochastic spikes. Furthermore, it can be extended to hierarchical structures, which, in the case of Boolean functions, leads to a reduction of the required neuronal resources. To demonstrate its performance, you will apply it to classification tasks (e.g. the USPS database of handwritten digits). You can also investigate the algorithm as an implementation with integrate-and-fire model neurons. Learning in this neurophysiologically realistic framework corresponds to a Hebbian rule. When trained on natural images, the algorithm generates oriented and localized receptive fields, and it allows natural scenes to be reconstructed from spike trains with only a few spikes per input neuron.
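Two small helpers make the encoding and reconstruction mentioned above concrete: input channels spike with a probability proportional to their intensity, and a scene is reconstructed from the inferred state as $r(s) = \sum_i p(s|i)\,h(i)$. This is a sketch under those assumptions; all names are illustrative.

```python
import numpy as np

def draw_spikes(pattern, n_spikes, rng):
    """Draw a stochastic spike train from a nonnegative pattern (sketch)."""
    p = pattern / pattern.sum()  # spike probability per input channel
    return rng.choice(pattern.size, size=n_spikes, p=p)

def reconstruct(h, p_s_given_i):
    """Expected input r(s) = sum_i p(s|i) h(i) from the hidden state (sketch)."""
    return p_s_given_i @ h
```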
![SBS_B.png](SBS_B.png)
**Caption: (left) Spike-by-Spike network for training a classification task (binary input/output, shown as empty = 0 and filled = 1 boxes). The training patterns with $S_p$ components (here the first 4 neurons, depicted by the lighter filled circles), together with their correct classification into one of $S_c$ classes (the other 3 neurons, depicted by the darker filled circles), are presented as randomly drawn spike trains to $S = S_p + S_c$ input nodes during learning. The network thereby finds a suitable representation $p(s|i)$ of the input ensemble and estimates an internal state $h(i)$ for each input pattern according to the Spike-by-Spike learning algorithm. (right) Spike-by-Spike network for classification of patterns. The test patterns are presented to the $S_p$ input nodes (now the 4 neurons on the lowest layer), and an internal state $h(i)$ explaining the actual pattern is estimated by the SbS algorithm. From the internal state, a classification is inferred (highest layer) and compared to the correct classification.**
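The classification readout described in the caption could look as follows: after inference on the $S_p$ pattern inputs, the class is read from the reconstruction of the $S_c$ class channels. The channel ordering (pattern channels first, class channels last) and all names are assumptions made for illustration.

```python
import numpy as np

def classify(h, p_s_given_i, n_pattern_channels):
    """Infer the class from the hidden state h (sketch).

    Assumes the last S_c rows of p(s|i) belong to the class channels.
    """
    reconstruction = p_s_given_i @ h                       # r(s) for all S channels
    class_channels = reconstruction[n_pattern_channels:]  # the S_c class part
    return int(np.argmax(class_channels))                 # most strongly driven class
```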