English Planet Science


Learning Technique: Spaced Retrieval (Uniform and Expanding)

Spaced retrieval, also known as expanded retrieval, is a learning technique that requires users to rehearse to-be-learned information at spaced, and often increasing, intervals of time (Benigas, J., Brush, J., & Elliot, G. (2016). Errorless learning and spaced retrieval: How do these methods fare in healthy and clinical populations? Journal of Clinical and Experimental Neuropsychology, 33(4), 432-447. doi:10.1080/13803395.2010.533155). In testing this type of learning, people are instructed to rehearse a given set of information at set times, with each new rehearsal separated from the previous one by a longer, or an equal, interval of time. At the end of every trial period there is a test phase. Landauer and Bjork first studied this technique of learning in 1978. The study required participants to learn names from flash cards. Prior to learning, participants were assigned to one of five rehearsal conditions: uniform short, uniform moderate, uniform long, expanding, and contracting.[1] These conditions determine the number and spacing of trials between each test. Uniform schedules place a fixed number of trials between each test stage (e.g. 2-test-2-test-2-test).[2] Contracting schedules begin with larger intervals between the first few trials and the test phase, with the number of trials decreasing over the course of learning. Expanding schedules start with trials and tests close together, giving the learner progressively more time between each trial and test as learning proceeds (e.g. 1-test-2-test-3-test). The effectiveness of the rehearsal types was measured by how accurately participants responded during the test phases. The expanding schedule proved most effective, producing the highest recall at test.[1]

The data behind this initial research indicated that increasing the space between rehearsals (expanding) would yield a greater percentage of accuracy at test points.[1] Spaced retrieval with expanding intervals is believed to be effective because each expanded retrieval interval makes the information harder to retrieve, owing to the time elapsed between test periods; this difficulty creates a deeper level of processing of the learned information in long-term memory at each point. Another reason the expanding retrieval model is believed to work so effectively is that the first test happens early in the rehearsal process, which increases the likelihood of retrieval success.[2] When the first test follows initial learning closely and ends in a successful retrieval, people are more likely to repeat that successful retrieval on following tests.[3] Although expanding retrieval is commonly associated with spaced retrieval, a uniform retrieval schedule is also a form of spaced retrieval procedure.[2]
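
To make the schedule types concrete, here is a minimal Python sketch that generates uniform, expanding, and contracting rehearsal schedules. The function names and interval values are illustrative only; they are not taken from Landauer and Bjork's materials.

```python
# Illustrative sketch of rehearsal schedules for spaced retrieval.
# The interval counts below are made up for demonstration; they are not the
# values used by Landauer and Bjork (1978).

def uniform_schedule(gap, n_tests):
    """Fixed number of rehearsal trials before every test (e.g. 2-test-2-test-2-test)."""
    return [("rehearse",) * gap + ("test",) for _ in range(n_tests)]

def expanding_schedule(n_tests, start=1, step=1):
    """Gap between tests grows over time (e.g. 1-test-2-test-3-test)."""
    return [("rehearse",) * (start + i * step) + ("test",) for i in range(n_tests)]

def contracting_schedule(n_tests, start=3, step=1):
    """Gap between tests shrinks over time (e.g. 3-test-2-test-1-test)."""
    return [("rehearse",) * max(start - i * step, 0) + ("test",) for i in range(n_tests)]

if __name__ == "__main__":
    for name, sched in [("uniform", uniform_schedule(2, 3)),
                        ("expanding", expanding_schedule(3)),
                        ("contracting", contracting_schedule(3))]:
        flat = [event for block in sched for event in block]
        print(name, "->", "-".join("T" if e == "test" else "r" for e in flat))
```

Printing the three schedules side by side shows the pattern described above: the uniform schedule keeps a constant gap before each test, while the expanding and contracting schedules grow or shrink it.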

Spaced retrieval is typically studied through the memorization of facts. Traditionally, it has not been applied to tasks that require manipulation or reasoning beyond simple factual or semantic information. A more recent study has shown that spaced retrieval can also benefit tasks such as solving math problems. In a study conducted by Pashler, Rohrer, Cepeda, and Carpenter,[4] participants had to learn a simple math principle on either a spaced or a massed retrieval schedule. The participants given the spaced retrieval learning tasks showed higher scores on a final test administered after their final practice session.[4]

 

  1. Landauer, T., & Bjork, R. (1978). Optimum rehearsal patterns and name learning. Practical Aspects of Memory, 625–632.
  2. Karpicke, J., & Bauernschmidt, A. (2011). Spaced retrieval: Absolute spacing enhances learning regardless of relative spacing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(5), 1250–1257.
  3. Karpicke, J., & Roediger, H. (2010). Is expanding retrieval a superior method for learning text materials? Memory & Cognition, 38(1), 116–124. doi:10.3758/MC.38.1.116
  4. Pashler, H., Rohrer, D., Cepeda, N., & Carpenter, S. (2007). Enhancing learning and retarding forgetting: Choices and consequences. Psychonomic Bulletin & Review, 14(2), 187–193.

 

Therapy: Errorless Learning


Errorless learning is a therapy strategy that ensures children always respond correctly. As each skill is taught, children are provided with a prompt or cue immediately following an instruction. The immediate prompt prevents any chance for incorrect responses. Unlike other teaching procedures, in which opportunities for initial mistakes are allowed and then corrected through prompting, errorless learning's immediate prompting ensures that a child can only respond correctly. Prompts are systematically removed until children are able to respond correctly on their own. The theory behind errorless learning is that children with autism do not learn from their mistakes as successfully as typically developing children do, but instead continue to repeat them. Research suggests that the frustration following incorrect responses in trial-and-error learning can provoke problem behavior such as tantrums, aggression, and self-injury. Using an initial prompt, before the child has an opportunity to respond incorrectly, avoids teaching a chain of errors and bypasses the discouragement that may come from incorrect responding.

The Role of Positive Reinforcement
Positive reinforcement is the delivery of something after a behavior that increases the likelihood of that behavior occurring again in the future. Errorless learning uses positive reinforcement combined with prompting strategies to teach new skills. Instructions are immediately followed by a prompted correct response, which is then followed by positive reinforcement.
Example: 
Therapist gives instruction, “clap hands.”
Therapist immediately prompts child by manipulating the child’s hands to make a clapping motion.
Therapist praises the child, “nice job clapping your hands!” and gives the child a reinforcer.

To promote independence the immediate prompts, or amount of help provided, are systematically decreased, or faded, to allow children the opportunity to provide correct responses on their own. Errorless learning strategies used to decrease prompting and encourage independence may include time delay prompting and most-to-least prompting.

Time Delay Prompting
Time delay is a prompt fading strategy that systematically increases the amount of time between the instruction and the prompt. This delaying of prompts gives children a brief window of opportunity to give a correct response on their own. As the child begins to respond independently before a prompt is given, the delay is continuously increased until it is faded out completely. Responses provided independently, before any assistance is given, are immediately followed by positive reinforcement.
Example:
(2 second delay)
Therapist gives instruction, “clap hands.”
Therapist waits 2 seconds and then manipulates the child’s hands to make a clapping motion.
Therapist praises the child, “nice job clapping your hands!” and gives a reinforcer.
(3 second delay)
Therapist gives instruction, “clap hands.”
Therapist waits 3 seconds for the child to respond independently.
If the child does not respond independently, the therapist manipulates the child’s hands to make a clapping motion.
Therapist praises the child, “nice job clapping your hands!” and gives a reinforcer.

Most-to-Least Prompting
In most-to-least prompting, prompts are systematically faded by decreasing the intrusiveness of the assistance provided, promoting independence in responding.

Example:
(light physical prompt)
Therapist gives instruction, “clap hands”
Therapist immediately prompts child by providing a light physical prompt at the child’s elbows to make a clapping motion.
Therapist praises the child, “nice job clapping your hands!” and gives a reinforcer.

(Gesture)
Therapist gives instruction, “clap hands”
Therapist immediately prompts child by raising hands slightly to gesture clapping without touching the child.
Child begins clapping hands.
Therapist praises the child, “nice job clapping your hands!” and gives a reinforcer.

Promoting Independence
It is important to collect data on how often children require prompts as well as how often they give independent responses. This information is used to determine when to decrease prompt levels. An example of decreasing prompt levels using time delay may be delaying prompts 2 seconds, then 3 seconds, and then 5 seconds. An example of decreasing prompts in most-to-least prompting may be lessening the intrusiveness from hand over hand, to a light physical touch, to shadowing the response without any physical contact. For more information on prompting see the Prompting Fact Sheet.

Errors
Even with errorless learning, errors may still occur. If a child makes an error, the teacher may withhold reinforcement and present a new instruction, or withhold reinforcement and present the same instruction again while providing an immediate full prompt of the correct answer. Errors should never be followed by negative comments, nor by reinforcement or the presentation of a reward.

Suggested Readings
Touchette, P., & Howard, J. (1984). Errorless Learning: Reinforcement Contingencies and Stimulus Control Transfer in Delayed Prompting. Journal of Applied Behavior Analysis, 17(2), 175–188.
Heflin, L. J., & Alberto, P. A. (2001). Establishing a behavioral context for learning for students with autism. Focus on Autism and Other Developmental Disabilities, 16, 93–101.

 

Neuroscience: Hebbian Theory

Hebbian theory is a neuroscientific theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior.[1] The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows:

Let us assume that the persistence or repetition of a reverberatory activity (or “trace”) tends to induce lasting cellular changes that add to its stability.[…] When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.[1]

The theory is often summarized as “Cells that fire together wire together.”[2] This summary, however, should not be taken too literally. Hebb emphasized that cell A needs to “take part in firing” cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B. This important aspect of causation in Hebb’s work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.[3]

The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. It also provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.

Hebbian engrams and cell assembly theory

Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb’s theories on the form and function of cell assemblies can be understood from the following:[1]:70

The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become ‘associated’, so that activity in one facilitates activity in the other.

Hebb also wrote:[1]:63

When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.

Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:

If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become ‘auto-associated’. We may call a learned (auto-associated) pattern an engram.[4]:44

Hebbian theory has been the primary basis for the conventional view that, when analyzed from a holistic level, engrams are neuronal nets or neural networks.

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica.

Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments that indicate that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.

Principles

From the point of view of artificial neurons and artificial neural networks, Hebb’s principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously, and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.

The following is one formulaic description of Hebbian learning (many other formulations are possible):

$$w_{ij} = x_i x_j$$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$ and $x_i$ is the input for neuron $i$. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.

Another formulaic description is:

$$w_{ij} = \frac{1}{p} \sum_{k=1}^{p} x_i^k x_j^k$$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, $p$ is the number of training patterns, and $x_i^k$ is the $k$th input for neuron $i$. This is learning by epoch (weights updated after all the training examples are presented). Again, in a Hopfield network, connections $w_{ij}$ are set to zero if $i = j$ (no reflexive connections).
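
As a rough illustration of the epoch-based rule above, the following NumPy sketch computes the weight matrix as the averaged outer product of the training patterns and zeroes the diagonal, as in a Hopfield network. The bipolar example patterns are made up for demonstration.

```python
import numpy as np

def hebbian_weights(patterns):
    """Epoch Hebbian rule: w_ij = (1/p) * sum_k x_i^k * x_j^k,
    with self-connections (i == j) zeroed as in a Hopfield network."""
    X = np.asarray(patterns, dtype=float)   # shape (p, n): p patterns over n neurons
    p = X.shape[0]
    W = X.T @ X / p                         # averaged outer product over patterns
    np.fill_diagonal(W, 0.0)                # no reflexive connections
    return W

# Example: two bipolar (+1/-1) patterns over 4 neurons
patterns = [[ 1, -1,  1, -1],
            [ 1,  1, -1, -1]]
W = hebbian_weights(patterns)
print(W)
```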

A variation of Hebbian learning that takes into account phenomena such as blocking and many other neural learning effects is the mathematical model of Harry Klopf.[5] Klopf's model reproduces a great many biological phenomena, and is also simple to implement.

Generalization and stability

Hebb’s Rule is often generalized as

$$\Delta w_i = \eta \, x_i y,$$

or the change in the $i$th synaptic weight $w_i$ is equal to a learning rate $\eta$ times the $i$th input $x_i$ times the postsynaptic response $y$. Often cited is the case of a linear neuron,

$$y = \sum_j w_j x_j,$$

and the previous section's simplification takes both the learning rate and the input weights to be 1. This version of the rule is clearly unstable: in any network with a dominant signal the synaptic weights will increase or decrease exponentially. Indeed, it can be shown that for any neuron model, Hebb's rule is unstable.[6] Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule,[7] or the generalized Hebbian algorithm.
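
The instability of the plain rule, and the effect of one of the stabilized alternatives named above (Oja's rule), can be seen in a short simulation. The data, learning rate, and number of steps below are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # arbitrary zero-mean input data
eta = 0.01                       # learning rate (illustrative)

w_hebb = rng.normal(size=3) * 0.1
w_oja = w_hebb.copy()

for x in X:
    y_h = w_hebb @ x
    w_hebb += eta * y_h * x                    # plain Hebb: dw = eta * y * x (unbounded growth)

    y_o = w_oja @ x
    w_oja += eta * y_o * (x - y_o * w_oja)     # Oja's rule: decay term keeps ||w|| near 1

print("plain Hebb |w| =", np.linalg.norm(w_hebb))
print("Oja's rule |w| =", np.linalg.norm(w_oja))
```

Running this, the plain Hebbian weight vector grows rapidly in norm, while the Oja weights stay bounded near unit length.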

Hebbian learning of intrinsic excitability

Neurons also exhibit plasticity in their intrinsic excitability (intrinsic plasticity).

Exceptions

Despite the common use of Hebbian models for long-term potentiation, there exist several exceptions to Hebb's principles and examples that demonstrate that some aspects of the theory are oversimplified. One of the best-documented exceptions is that synaptic modification does not occur only between activated neurons A and B, but in neighboring neurons as well.[8] This is because Hebbian modification depends on retrograde signaling in order to modify the presynaptic neuron.[9] The compound most commonly identified as fulfilling this retrograde transmitter role is nitric oxide, which, due to its high solubility and diffusibility, often exerts effects on nearby neurons.[10] This type of diffuse synaptic modification, known as volume learning, counters, or at least supplements, the traditional Hebbian model.[11]

Hebbian learning account of mirror neurons

Hebbian learning and spike-timing-dependent plasticity have been used in an influential theory of how mirror neurons emerge.[12][13] Mirror neurons are neurons that fire both when an individual performs an action and when the individual sees[14] or hears[15] another perform a similar action. The discovery of these neurons has been very influential in explaining how individuals make sense of the actions of others, by showing that, when a person perceives the actions of others, the person activates the motor programs which they would use to perform similar actions. The activation of these motor programs then adds information to the perception and helps predict what the person will do next based on the perceiver’s own motor program. A challenge has been to explain how individuals come to have neurons that respond both while performing an action and while hearing or seeing another perform similar actions.

Christian Keysers and David Perrett suggested that as an individual performs a particular action, the individual will see, hear, and feel himself perform the action. These re-afferent sensory signals will trigger activity in neurons responding to the sight, sound, and feel of the action. Because the activity of these sensory neurons will consistently overlap in time with those of the motor neurons that caused the action, Hebbian learning would predict that the synapses connecting neurons responding to the sight, sound, and feel of an action and those of the neurons triggering the action should be potentiated. The same is true while people look at themselves in the mirror, hear themselves babble, or are imitated by others. After repeated experience of this re-afference, the synapses connecting the sensory and motor representations of an action would be so strong that the motor neurons would start firing to the sound or the vision of the action, and a mirror neuron would have been created.
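
A toy simulation in this spirit (not drawn from Keysers and Perrett's work; the learning rate, threshold, and trial count are invented for illustration) shows how repeated pairing of a motor command with its re-afferent sensory signal can strengthen the sensory-to-motor connection until the sensory signal alone drives the motor unit:

```python
# Toy Hebbian pairing: a sensory unit (sound/sight of the action) repeatedly
# co-active with a motor unit (performing the action). All values are invented.
eta = 0.05        # learning rate (arbitrary)
threshold = 0.5   # firing threshold for the motor unit (arbitrary)
w = 0.0           # sensory -> motor synaptic weight, initially ineffective

for trial in range(30):
    motor = 1.0      # the action is executed...
    sensory = 1.0    # ...and its re-afferent sound/sight is perceived at the same time
    w += eta * sensory * motor   # Hebbian strengthening of the co-active pair

# After training, present the sensory signal alone:
motor_response = w * 1.0
print(f"weight = {w:.2f}, motor unit fires on sensory input alone: {motor_response > threshold}")
```

After enough paired trials the weight exceeds the firing threshold, so the sensory input alone activates the motor unit, which is the mirror-neuron-like behavior the theory describes.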

Evidence for that perspective comes from many experiments showing that motor programs can be triggered by novel auditory or visual stimuli after repeated pairing of the stimulus with the execution of the motor program (for a review of the evidence, see Giudice et al., 2009[16]). For instance, people who have never played the piano do not activate brain regions involved in playing the piano when listening to piano music. Five hours of piano lessons, in which the participant is exposed to the sound of the piano each time a key is pressed, have proven sufficient to trigger activity in motor regions of the brain when piano music is heard later.[17] Consistent with the fact that spike-timing-dependent plasticity occurs only if the presynaptic neuron's firing predicts the postsynaptic neuron's firing,[18] the link between sensory stimuli and motor programs also seems to be potentiated only if the stimulus is contingent on the motor program.

References

  1. Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley & Sons.
  2. Siegrid Löwel, Göttingen University; the exact sentence is: "neurons wire together if they fire together". Löwel, S., & Singer, W. (1992). "Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity". Science, 255, 209–212 (published January 10, 1992). American Association for the Advancement of Science. ISSN 0036-8075.
  3. Caporale, N., & Dan, Y. (2008). "Spike timing-dependent plasticity: a Hebbian learning rule". Annual Review of Neuroscience, 31, 25–46. doi:10.1146/annurev.neuro.31.060407.125639. PMID 18275283.
  4. Allport, D. A. (1985). "Distributed memory, modular systems and dysphasia". In Newman, S. K., & Epstein, R. (Eds.), Current Perspectives in Dysphasia. Edinburgh: Churchill Livingstone. ISBN 0-443-03039-1.
  5. Klopf, A. H. (1972). Brain function and adaptive systems—A heterostatic theory. Technical Report AFCRL-72-0164, Air Force Cambridge Research Laboratories, Bedford, MA.
  6. Euliano, N. R. (1999-12-21). "Neural and Adaptive Systems: Fundamentals Through Simulations" (PDF). Wiley. Archived from the original (PDF) on 2015-12-25. Retrieved 2016-03-16.
  7. Shouval, H. (2005-01-03). "The Physics of the Brain". The Synaptic Basis for Learning and Memory: A Theoretical Approach. The University of Texas Health Science Center at Houston. Archived from the original on 2007-06-10. Retrieved 2007-11-14.
  8. Horgan, J. (May 1994). "Neural eavesdropping". Scientific American, 270, 16. doi:10.1038/scientificamerican0594-16.
  9. Fitzsimonds, R., & Poo, M.-M. (January 1998). "Retrograde signaling in the development and modification of synapses". Physiological Reviews. doi:10.1152/physrev.1998.78.1.143.
  10. López, P., & Araujo, C. P. (2009). "A computational study of the diffuse neighbourhoods in biological and artificial neural networks" (PDF). International Joint Conference on Computational Intelligence.
  11. Mitchison, G., & Swindale, N. (October 1999). "Can Hebbian volume learning explain discontinuities in cortical maps?". Neural Computation, 11, 1519–1526. doi:10.1162/089976699300016115.
  12. Keysers, C., & Perrett, D. I. (2004). "Demystifying social cognition: a Hebbian perspective". Trends in Cognitive Sciences, 8(11), 501–507. doi:10.1016/j.tics.2004.09.005. PMID 15491904.
  13. Keysers, C. (2011). The Empathic Brain.
  14. Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). "Action recognition in the premotor cortex". Brain, 119(Pt 2), 593–609. doi:10.1093/brain/119.2.593. PMID 8800951.
  15. Keysers, C., Kohler, E., Umiltà, M. A., Nanetti, L., Fogassi, L., & Gallese, V. (2003). "Audiovisual mirror neurons and action recognition". Experimental Brain Research, 153(4), 628–636. doi:10.1007/s00221-003-1603-5. PMID 12937876.
  16. Del Giudice, M., Manera, V., & Keysers, C. (2009). "Programmed to learn? The ontogeny of mirror neurons". Developmental Science, 12(2), 350–363. doi:10.1111/j.1467-7687.2008.00783.x.
  17. Lahav, A., Saltzman, E., & Schlaug, G. (2007). "Action representation of sound: audiomotor recognition network while listening to newly acquired actions". Journal of Neuroscience, 27(2), 308–314. doi:10.1523/jneurosci.4822-06.2007. PMID 17215391.
  18. Bauer, E. P., LeDoux, J. E., & Nader, K. (2001). "Fear conditioning and LTP in the lateral amygdala are sensitive to the same stimulus contingencies". Nature Neuroscience, 4(7), 687–688. doi:10.1038/89465.
