Connectionism is an approach in the fields of cognitive science, neuroscience, psychology, and philosophy of mind. It models mental or behavioral phenomena as emergent processes of interconnected networks of simple units. There are many different forms of connectionism, but the most common forms use neural network models.
Basic principles
The central connectionist principle is that mental phenomena can be described by interconnected networks of simple units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses. Another model might make each unit in the network a word, and each connection an indication of semantic similarity.
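As a concrete illustration of the second example, here is a minimal, hypothetical Python sketch of a network whose units are words and whose connections encode semantic similarity; the words and similarity values are made up for illustration, not drawn from any particular model.

```python
# Hypothetical word network: units are words, connections are
# semantic-similarity strengths between 0 and 1 (illustrative values).
units = ["dog", "cat", "car"]

connections = {
    ("dog", "cat"): 0.8,   # semantically close
    ("dog", "car"): 0.1,   # semantically distant
    ("cat", "car"): 0.1,
}

def similarity(a, b):
    """Look up the connection strength between two word units."""
    return connections.get((a, b), connections.get((b, a), 0.0))

print(similarity("cat", "dog"))  # 0.8
```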
Spreading activation
Most connectionist models include time; that is, there is a variable which represents time and the network changes over time. A closely related and extremely common aspect of connectionist models is activation. At any time, a unit in the network has an activation, which is a numerical value intended to represent some aspect of the unit. For example, if the units in the model are neurons, the activation could represent the probability that the neuron would generate an action potential spike. If the model is a spreading activation model, then over time a unit's activation spreads to all the other units connected to it. Spreading activation is always a feature of neural network connectionist models.
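A minimal sketch of spreading activation over discrete time steps, assuming a small made-up network and a simple decay factor (both are assumptions for illustration, not a specific published model):

```python
# Spreading-activation sketch over a small weighted network.
# Each unit has a numeric activation; at each time step a unit's
# activation spreads along weighted connections to its neighbours.
weights = {
    "A": {"B": 0.5, "C": 0.2},
    "B": {"C": 0.7},
    "C": {},
}
activation = {"A": 1.0, "B": 0.0, "C": 0.0}

def step(activation, weights, decay=0.9):
    new = {unit: decay * a for unit, a in activation.items()}
    for source, neighbours in weights.items():
        for target, w in neighbours.items():
            new[target] += w * activation[source]   # spread along the connection
    return new

for t in range(3):
    activation = step(activation, weights)
    print(t, activation)
```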
Neural networks
Neural networks are by far the dominant form of connectionist model today. Much research utilizing neural networks is carried out under the more general name "connectionist". These connectionist models adhere to two major principles regarding the mind:
Any given mental state can be described as an N-dimensional vector of numeric activation values over the neural units in a network.
Memory is created by modifying the strength of the connections between neural units. The connection strengths, or "weights", are generally represented as an N×N matrix.
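A minimal sketch of these two principles, assuming NumPy and an arbitrary three-unit network: the mental state is an activation vector, and the connection weights form an N×N matrix that maps one state to the next.

```python
import numpy as np

N = 3  # number of neural units (arbitrary for this sketch)

# Principle 1: a mental state is an N-dimensional vector of activations.
state = np.array([0.2, 0.9, 0.1])

# Principle 2: memory is stored in the N x N matrix of connection weights.
weights = np.random.default_rng(0).uniform(-1, 1, size=(N, N))

# One update: the next state is a squashed linear function of the current one.
next_state = np.tanh(weights @ state)
print(next_state)
```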
Though there is a large variety of neural network models, they very rarely stray from these two basic principles. Most of the variety comes from:
Interpretation of units: units can be interpreted as neurons or groups of neurons.
Definition of activation: activation can be defined in a variety of ways. For example, in a Boltzmann machine, the activation is interpreted as the probability of generating an action potential spike, and it is determined via a logistic function on the sum of the inputs to a unit (see the sketch following this list).
Learning algorithm: different networks modify their connections differently. In general, any mathematically defined change in connection weights over time is referred to as the "learning algorithm".
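The Boltzmann-machine-style activation rule mentioned above can be sketched as follows; the particular inputs and weights are illustrative assumptions:

```python
import math

def logistic(x):
    """Logistic (sigmoid) function used to squash the summed input."""
    return 1.0 / (1.0 + math.exp(-x))

# Inputs arriving at one unit and the weights on those connections
# (illustrative values, not from a specific model).
inputs = [1.0, 0.0, 1.0]
weights = [0.4, -0.2, 0.7]

net_input = sum(w * x for w, x in zip(weights, inputs))
p_spike = logistic(net_input)   # probability of generating a spike
print(p_spike)
```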
Connectionists are generally in agreement that recurrent neural networks (networks wherein connections of the network can form a directed cycle) are a better model of the brain than feedforward neural networks (networks with no directed cycles). Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve toward fully continuous, high-dimensional, non-linear, dynamic systems approaches.
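As a rough sketch of the feedforward/recurrent distinction, assuming arbitrary sizes and random weights chosen only for illustration: a feedforward network applies its weights in a single pass, while a recurrent network feeds its own activations back into itself over time, so its units evolve as a dynamical system.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.uniform(-1, 1, size=(4, 2))    # input -> hidden weights
W_rec = rng.uniform(-1, 1, size=(4, 4))   # hidden -> hidden (recurrent) weights

x = np.array([1.0, 0.5])

# Feedforward: no directed cycles, a single pass through the weights.
h_ff = np.tanh(W_in @ x)

# Recurrent: the hidden state is fed back in at each time step.
h = np.zeros(4)
for t in range(5):
    h = np.tanh(W_in @ x + W_rec @ h)

print(h_ff, h)
```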
Biological realism
The neural network branch of connectionism suggests that the study of mental activity is really the study of neural systems. This links connectionism to neuroscience, and models involve varying degrees of biological realism. Connectionist work in general need not be biologically realistic, but some neural network researchers try to model the biological aspects of natural neural systems very closely. In addition, many authors find the clear link between neural activity and cognition to be an appealing aspect of connectionism. However, this is also a source of criticism, as some view it as reductionism.
Learning
Connectionists generally stress the importance of learning in their models. As a result, many sophisticated learning procedures for neural networks have been developed by connectionists. Learning always involves modifying the connection weights. These procedures generally use a mathematical formula to determine the change in weights when given sets of data consisting of activation vectors for some subset of the neural units.
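One simple, hedged example of such a procedure is a Hebbian-style update, which strengthens the connection between units that are active at the same time; the learning rate and activation vectors below are assumptions made purely for illustration.

```python
import numpy as np

# Activation vectors observed for a 3-unit network (illustrative data).
patterns = np.array([
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
])

lr = 0.1                      # learning rate (assumed)
W = np.zeros((3, 3))          # connection weights start at zero

# Hebbian-style update: strengthen the connection between units that
# are active together (outer product of the activation vector).
for x in patterns:
    W += lr * np.outer(x, x)

print(W)
```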
By formalizing learning in this way, connectionists have many tools at hand. A very common tactic in connectionist learning methods is to incorporate gradient descent over an error surface in a space defined by the weight matrix. All gradient descent learning in connectionist models involves changing each weight in proportion to the negative of the partial derivative of the error surface with respect to that weight. Backpropagation, first made popular in the 1980s, is probably the most commonly known connectionist gradient descent algorithm today.
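A minimal gradient-descent sketch, assuming a single linear unit with a squared-error surface; the data and learning rate are made up for illustration, and a full backpropagation implementation would additionally apply the chain rule to push these derivatives back through multiple layers of units.

```python
import numpy as np

# Toy dataset: input activation vectors and target outputs (assumed).
X = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])
y = np.array([1.0, 1.0, 0.0])

w = np.zeros(2)   # connection weights into a single linear unit
lr = 0.1          # learning rate (assumed)

for epoch in range(100):
    pred = X @ w
    error = pred - y
    grad = X.T @ error / len(y)   # partial derivatives of the mean squared error
    w -= lr * grad                # step down the error surface

print(w)
```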