
Hebbian learning

Hebbian learning is a hypothesis for how neuronal connections are reinforced in mammalian brains; it is also a technique for weight selection in artificial neural networks.

The idea is named after Donald Hebb, who presented it in his 1949 book The Organization of Behavior (and kick-started research into neural networks as a result). It specifies how much the strength of a connection between two neurons should be altered according to their activity at a given moment. Hebb's original principle was essentially that if one neuron is stimulating another neuron, and the receiving neuron is firing at the same time, then the strength of the connection between the two is increased; conversely, if one is firing and the other is not, the connection strength is decreased.

See also: Hebbian theory.

Hebbian learning in artificial intelligence research

From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons: the weight between two neurons increases if the two activate simultaneously, and decreases if they activate separately. Nodes that tend to be either both positive or both negative at the same time develop strong positive weights, while those that tend to have opposite activations develop strong negative weights. The principle is sometimes stated more simply as "neurons that fire together, wire together."
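
As a minimal illustration of this rule (the function name, learning rate, and toy activations below are illustrative assumptions, not a standard API), a plain Hebbian weight update can be sketched in Python as follows:

    import numpy as np

    def hebbian_update(weights, pre, post, lr=0.01):
        # Plain Hebbian rule: each weight changes in proportion to the
        # product of its presynaptic and postsynaptic activations.
        # Nodes that are active together with the same sign strengthen
        # the weight; opposite signs weaken it.
        return weights + lr * np.outer(pre, post)

    # Toy network: 3 input nodes fully connected to 2 output nodes.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(3, 2))

    pre = np.array([1.0, -1.0, 1.0])   # presynaptic activations
    post = np.array([1.0, -1.0])       # postsynaptic activations

    w = hebbian_update(w, pre, post)   # weights move toward the correlation

Note that under repeated coactivation this rule lets weights grow without bound, which is one motivation for the mathematical refinements discussed below.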

This original principle is perhaps the simplest form of weight selection. While this means it can be coded into a computer program relatively easily and used to update the weights of a network, it also limits the range of applications of Hebbian learning. Today the term Hebbian learning generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. In this sense, Hebbian learning involves adjusting the weights between learning nodes so that each weight better represents the relationship between the nodes. As such, many learning methods can be considered to be somewhat Hebbian in nature.
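
As a sketch of what such an abstraction can look like (the notation here is assumed for illustration), the simplest form updates the weight w_ij between nodes i and j in proportion to their joint activity:

    Δw_ij = η · x_i · x_j

where η is a learning rate and x_i, x_j are the activations of the two nodes. Because this rule alone lets coactive weights grow without bound, abstractions used in practice often add a normalizing or decay term; Oja's rule, Δw_ij = η · x_j · (x_i − x_j · w_ij), is one well-known example of such a stabilized Hebbian update.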

Hebbian learning in biological systems

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine invertebrate Aplysia californica.

Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments indicating that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms [1].