
Hebb rule


Hebbian learning

A learning rule dating back to D.O. Hebb's classic [a1], which appeared in 1949. The idea behind it is simple. Neurons of vertebrates consist of three parts: a dendritic tree, which collects the input, a soma, which can be considered as a central processing unit, and an axon, which transmits the output. Neurons communicate via action potentials or spikes, pulses of a duration of about one millisecond. If neuron $j$ emits a spike, it travels along the axon to a so-called synapse on the dendritic tree of neuron $i$, say. This takes a few milliseconds. The synapse has a synaptic strength, to be denoted by $J_{ij}$. Its value, which encodes the information to be stored, is to be governed by the Hebb rule.

In [a1], p. 62, one can find the "neurophysiological postulate" that is the Hebb rule in its original form: When an axon of cell $A$ is near enough to excite a cell $B$ and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that the efficiency of $A$, as one of the cells firing $B$, is increased.

Hebb's postulate has been formulated in plain English (but not more than that) and the main question is how to implement it mathematically. The key ideas are that:

i) only the pre- and post-synaptic neurons determine the change of a synapse;

ii) learning means evaluating correlations. If both the pre-synaptic neuron $j$ and the post-synaptic neuron $i$ are active, then the synaptic efficacy $J_{ij}$ should be strengthened. Efficient learning also requires, however, that the synaptic strength be decreased every now and then [a2].

In the present context, one usually wants to store a number of activity patterns in a network with a fairly high connectivity ($\approx 10^4$ in biological nets). Most of the information presented to a network varies in space and time. So what is needed is a common representation of both the spatial and the temporal aspects. As a pattern changes, the system should be able to measure and store this change. How can it do that?

For unbiased random patterns in a network with synchronous updating this can be done as follows. The neuronal dynamics in its simplest form is supposed to be given by $S_i(t+\Delta t)=\operatorname{sign}(h_i(t))$, where $h_i(t)=\sum_j J_{ij}S_j(t)$. Let $J_{ij}^{0}$ be the synaptic strength before the learning session, whose duration is denoted by $T$. After the learning session, $J_{ij}^{0}$ is to be changed into $J_{ij}$ with

$$J_{ij}=J_{ij}^{0}+\frac{\varepsilon}{T}\sum_{0<t\le T}S_i(t)\,S_j(t-\tau_{ij})$$

(cf. [a3], [a4]). The above equation provides a local encoding of the data at the synapse $j\to i$. The $\varepsilon$ is a constant, known factor. The learning session having a duration $T$, the multiplier $1/T$ in front of the sum takes saturation into account. The neuronal activity $S_i(t)$ equals $+1$ if neuron $i$ is active at time $t$ and $-1$ if it is not. At time $t$ it is combined with the signal that arrives at $i$ at time $t$, i.e., $S_j(t-\tau_{ij})$, where $\tau_{ij}$ is the axonal delay. Here $S_i(t)$, $1\le i\le N$, denotes the pattern as it is taught to the network of size $N$ during the learning session of duration $T$. The time unit is milliseconds. In the case of asynchronous dynamics, where each time a single neuron is updated randomly, one has to rescale the time step as $\Delta t\to\Delta t/N$, and the above sum is reduced to an integral as $N\to\infty$. In passing one notes that for constant, spatial, patterns one recovers the Hopfield model [a5].
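As a purely illustrative aid (not part of the original article), the rule can be sketched numerically as follows; the network size $N$, the duration $T$, the factor $\varepsilon$, and the choice of a single uniform delay $\tau_{ij}=\tau$ are arbitrary assumptions made only for the sketch.

```python
# Minimal sketch of the synchronous learning rule above:
# J_ij = J0_ij + (eps/T) * sum_t S_i(t) * S_j(t - tau_ij), with S_i(t) = +/-1.
# N, T, eps and the uniform delay tau are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

N, T = 100, 50        # network size and duration of the learning session
eps = 1.0             # the constant factor epsilon
tau = 1               # axonal delay tau_ij (taken uniform here for simplicity)

# S[t, i] = +/-1: unbiased random spatio-temporal pattern taught to the net
S = rng.choice([-1, 1], size=(T + tau, N))

J0 = np.zeros((N, N))                 # synaptic strengths before the session
J = J0.copy()
for t in range(tau, T + tau):
    # correlate post-synaptic activity S_i(t) with the delayed pre-synaptic
    # signal S_j(t - tau) that arrives at neuron i at time t
    J += (eps / T) * np.outer(S[t], S[t - tau])

# For a constant (purely spatial) pattern and tau = 0 this reduces to the
# Hopfield prescription J_ij = J0_ij + eps * xi_i * xi_j.
```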

Suppose now that the activity $a$ in the network is low, as is usually the case in biological nets, i.e., only a small fraction $a\ll 1$ of the neurons is active at any time. Then, writing the activities as $S_i(t)\in\{0,1\}$, the appropriate modification of the above learning rule reads

$$J_{ij}=J_{ij}^{0}+\frac{\varepsilon}{T}\sum_{0<t\le T}\left[S_i(t)-a\right]S_j(t-\tau_{ij})$$

(cf. [a4]). Since $S_j(t-\tau_{ij})=0$ when the presynaptic neuron is not active, one sees that the pre-synaptic neuron is gating. One gets a depression (LTD) if the post-synaptic neuron is inactive and a potentiation (LTP) if it is active. So it is advantageous to have a time window [a6]: the pre-synaptic neuron should fire slightly before the post-synaptic one. The above Hebbian learning rule can also be adapted so as to be fully integrated in biological contexts [a6]. The biology of Hebbian learning has meanwhile been confirmed; see the review [a7].
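A corresponding illustrative sketch of the low-activity variant, under the reconstruction used above ($0/1$ activities, mean activity $a$, pre-synaptic gating); all parameter values are again arbitrary assumptions.

```python
# Sketch of the low-activity rule: Delta J_ij ∝ [S_i(t) - a] * S_j(t - tau).
# The pre-synaptic factor S_j gates the change (no change when j is silent);
# the post-synaptic factor fixes its sign: -a (LTD) when i is silent,
# 1 - a (LTP) when i fires.  N, T, tau, eps and a are illustrative.
import numpy as np

rng = np.random.default_rng(1)

N, T, tau = 100, 50, 1
eps, a = 1.0, 0.05

# 0/1 spatio-temporal pattern with mean activity a
S = (rng.random((T + tau, N)) < a).astype(float)

J = np.zeros((N, N))
for t in range(tau, T + tau):
    post = S[t] - a          # post-synaptic term: -a (LTD) or 1 - a (LTP)
    pre = S[t - tau]         # pre-synaptic term: 0 or 1, acts as a gate
    J += (eps / T) * np.outer(post, pre)

# gating check: synapses whose pre-synaptic neuron never fired stay at zero
never_fired = S[:T].sum(axis=0) == 0
assert np.allclose(J[:, never_fired], 0.0)
```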

G. Palm [a8] has advocated an extremely low activity for efficient storage of stationary data. Out of $N$ neurons, only about $\ln N$ should be active. This seems to be advantageous for hardware realizations.

In summary, Hebbian learning is efficient since it is local, and it is a powerful algorithm for storing spatial or spatio-temporal patterns. Why is it that good? The succinct answer [a3] is that synaptic representations are selected according to their resonance with the input data; the stronger the resonance, the larger $J_{ij}$. In other words, the algorithm "picks" and strengthens only those synapses that match the input pattern.
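As an illustrative check of this selection principle in the simplest static case, the Hopfield special case mentioned above, one may store a single pattern $\xi$ via $J_{ij}=(\varepsilon/N)\xi_i\xi_j$ and verify that one step of the dynamics reproduces it; the sketch below is a toy example, not taken from the article.

```python
# Store a single static pattern xi with the Hopfield special case
# J_ij = (eps/N) * xi_i * xi_j and verify that xi is a fixed point of
# S_i(t + dt) = sign(sum_j J_ij S_j(t)).
import numpy as np

rng = np.random.default_rng(2)

N, eps = 200, 1.0
xi = rng.choice([-1, 1], size=N)     # static +/-1 pattern to be stored
J = (eps / N) * np.outer(xi, xi)     # Hebb/Hopfield coupling matrix
np.fill_diagonal(J, 0.0)             # no self-coupling

S_next = np.sign(J @ xi)             # one synchronous update, starting from xi
print(np.array_equal(S_next, xi))    # True: the stored pattern is reproduced
```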

References

[a1] D.O. Hebb, "The organization of behavior: A neuropsychological theory", Wiley (1949)
[a2] T.J. Sejnowski, "Statistical constraints on synaptic plasticity", J. Theor. Biol., 69 (1977) pp. 385–389
[a3] A.V.M. Herz, B. Sulzer, R. Kühn, J.L. van Hemmen, "The Hebb rule: Storing static and dynamic objects in an associative neural network", Europhys. Lett., 7 (1988) pp. 663–669 (see also: "Hebbian learning reconsidered: Representation of static and dynamic objects in associative neural nets", Biol. Cybern., 60 (1989) pp. 457–467)
[a4] J.L. van Hemmen, W. Gerstner, A.V.M. Herz, R. Kühn, M. Vaas, "Encoding and decoding of patterns which are correlated in space and time", in: G. Dorffner (ed.), Konnektionismus in Artificial Intelligence und Kognitionsforschung, Springer (1990) pp. 153–162
[a5] J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", Proc. Nat. Acad. Sci. USA, 79 (1982) pp. 2554–2558
[a6] W. Gerstner, R. Ritz, J.L. van Hemmen, "Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns", Biol. Cybern., 69 (1993) pp. 503–515 (see also: W. Gerstner, R. Kempter, J.L. van Hemmen, H. Wagner, "A neuronal learning rule for sub-millisecond temporal coding", Nature, 383 (1996) pp. 76–78)
[a7] T.H. Brown, S. Chattarji, "Hebbian synaptic plasticity: Evolution of the contemporary concept", in: E. Domany, J.L. van Hemmen, K. Schulten (eds.), Models of neural networks II, Springer (1994) pp. 287–314
[a8] G. Palm, "Neural assemblies: An alternative approach to artificial intelligence", Springer (1982)
This article was adapted from an original article by J.L. van Hemmen (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.