Suggestion, then, can lead to the creation of false memories, but so can misinformation, existing memories and misattribution.
Dig deeper
Read more on false memory syndrome at the British False Memory Society:
http://bfms.org.uk/
Read more about memory and smell in a newspaper article:
http://www.telegraph.co.uk/science/science-news/9042019/Smells-can-trigger-emotional-memories-study-finds.html
An excellent website on human memory:
http://www.human-memory.net/disorders_age.html
Fact-check
1 The three stages of memory are:
a Encoding, storage and retrieval
b Long-term memory, short-term memory and sensory memory
c Visual, acoustic and semantic memory
d Working, multistore and levels of processing
2 The three main ways in which information can be encoded are:
a Encoding, storage and retrieval
b Visual, acoustic and semantic
c Working, multistore and levels of processing
d Long-term memory, short-term memory and sensory memory
3 The three stages of Atkinson and Shiffrin’s (1968) Multistore Model are:
a Encoding, storage and retrieval
b Long-term memory, short-term memory and sensory memory
c Visual, acoustic and semantic memory
d Working, multistore and levels of processing
4 The three subsystems in Baddeley and Hitch’s (1974) Working Memory Model are:
a The phonological/articulatory loop, the visuo-spatial sketchpad and the multimodal episodic buffer
b The central executive, the phonological/articulatory loop and the visuo-spatial sketchpad
c Declarative, procedural and episodic systems
d Iconic, echoic and sensory systems
5 Which of the following is not an example of procedural memory?
a Riding a bike
b Driving a car
c Reciting a learned poem
d Using a new computer program
6 Which of the following is not a reason for memory failures?
a Failure to encode
b Failure to retrieve
c Interference effects
d Selective attention
7 Smells evoke emotions because:
a Smells are a powerful sense
b Smells reach primitive parts of the brain
c Every scent has an emotion associated with it
d The smell receptors are closely linked to areas of memory within the brain
8 Mood congruence theory suggests that:
a We remember events that match our current mood
b Remembering is easier when your mood at retrieval does not match your mood at encoding
c Moods are often congruent with emotions
d Moods are often evoked by smells
9 What is false memory?
a When we fail to remember things that happened
b When other people remember things that happened to us but we don’t
c When we remember things that did not actually happen
d When people imagine historic sexual abuse
10 Which of the following is not a factor that can lead to false memories?
a Suggestion
b Misattribution
c Misinformation
d Sexual abuse
5 Learning
Learning refers to the changes that occur as the result of experience or exposure to stimuli, and much of what we know about how we learn originates in the theories of classical and operant conditioning. But there are other forms of learning, too, and social learning suggests that we don’t always have to learn by doing things ourselves but can learn simply by watching others.
Learning is defined as a relatively lasting change in behaviour as a result of experience. We can’t see learning, but can infer it only from observable behaviour. Changes sometimes take place in behaviour that are not the result of learning, but these are usually short-lived (e.g. drugs or temperature might change behaviour in the short term, but such changes are not learning).
Of course, not all permanent change is due to learning, either. For example, brain damage, maturation and so on can change behaviour. For change in behaviour to be classed as ‘learning’, then, it has to have come about through some kind of experience that the person has undergone.
Learning as a psychological discipline has its roots in behaviourism. Behaviourism is a psychological approach developed by the American psychologist John B. Watson (1878–1958) that was concerned with measurable behaviours (as opposed to internal mental processes that could not be measured). Behaviourism can perhaps be best summed up by the following quote from Watson:
‘Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.’
John Watson, Behaviorism (Chicago: University of Chicago Press, 1930)
By this he meant that most behaviours that can be measured can be trained, changed or ‘conditioned’ – and that virtually any behaviour can be produced with the right training. Watson’s theory was based on the idea that all behaviours are acquired through the conditioning that occurs when we interact with our environment. Two types of conditioning are identified, and these form the basis of all learning according to the behaviourist perspective: classical conditioning and operant conditioning. Each of these will be considered in turn.
Classical conditioning
Classical conditioning is a learning process in which an association is made between a previously neutral stimulus (i.e. one that didn’t provoke a response) and a stimulus that naturally evokes a response.
Spotlight: Ivan Pavlov and his dogs
Although classical conditioning is one of the building blocks of psychology, the man who first noted the phenomenon was not a psychologist at all. Ivan Pavlov was a Russian physiologist who, in 1904, won a Nobel Prize for his advancement of knowledge in the field of digestion. It was his work in this area on dogs that led to the now-famous Pavlovian response.
Ivan Pavlov was born in Russia in 1849 and was the eldest of 11 children. Turning down the chance to be a priest like his father, he instead became a scientist and by the early 1900s he was busy studying the production of saliva in dogs as part of his ongoing work in the field of digestion. He gave his dogs a range of edible and non-edible items and then measured the amount of saliva they produced. Salivation, he knew, was an automatic reflex that was outside the conscious control of the animals. Saliva is produced when animals taste or smell food and is designed to aid digestion.
However, he noted that, after a few experiments, the dogs began to produce saliva as soon as his assistants opened the door and entered the room – that is, before any food had been presented for them to see, smell or taste. Pavlov realized that the dogs must have learned to produce saliva as soon as the assistants appeared because they had come to associate the appearance of the assistants with the production of food. Unlike the salivary response to the presentation of food, which is an unconditioned reflex (in that it doesn’t need to be learned), salivating in expectation of food was, he claimed, a conditioned or learned reflex.
When Pavlov made this discovery, he devoted the rest of his life to studying this type of learning, called classical conditioning because it was the first form of learning to be studied systematically. His most famous experiments involved pairing a neutral stimulus (a bell) with the food that produced the unconditioned response (salivation); after a few pairings, the bell alone was enough to produce salivation (the conditioned response), even without the food.
In classical conditioning, it is important to distinguish between the main components:
• The unconditioned stimulus (US): this is the stimulus that naturally produces a response without any learning being needed (e.g. feeling hungry at the sight of food – the US is the food).
• The conditioned stimulus (CS): this is the previously neutral stimulus that, after becoming associated with the unconditioned stimulus, eventually comes to trigger a conditioned response (e.g. feeling hungry when the clock strikes 1 p.m. irrespective of whether your stomach is empty or not – the CS is the clock).
• The unconditioned response (UR): this is the automatic response elicited by the unconditioned stimulus (e.g. hunger or salivation at the sight of food).
• The conditioned response (CR): this is the learned response to the previously neutral stimulus (e.g. hunger when the clock strikes one).
THE BASIC PRINCIPLES OF CLASSICAL CONDITIONING
There are a number of different phenomena associated with all examples of classical conditioning. These include:
• Extinction: in classical conditioning, extinction happens when a conditioned stimulus is no longer paired with an unconditioned stimulus. Just as pairing the two leads to a conditioned response, when the pairing stops, eventually the response will stop, too – it will become extinct. Thus, if Pavlov’s dogs stopped hearing a bell before each presentation of food, eventually a bell on its own would no longer be enough to produce salivation.
• Spontaneous recovery: even after extinction, the conditioned response can sometimes come back at random times, even years later.
• Stimulus generalization: this is the tendency for events similar to the conditioned stimulus to evoke the conditioned response. For example, if a child has been bitten by a Yorkshire Terrier he may develop a conditioned fear to all Yorkies that may generalize to all dogs – and possibly even to all animals.
• Discrimination: this is the opposite of generalization, as it is the ability to differentiate between a conditioned stimulus and other stimuli that have not been paired with an unconditioned stimulus. For example, if schoolchildren pair the sound of the lunch bell with feeling hungry, they won’t feel hungry at the sound of a different bell.
• Contiguity: the more closely in time two events occur, the more likely they are to become associated; as time passes, association becomes less likely. This is why a dog owner who returns home from work to find that his dog has soiled the carpet will achieve nothing by punishing the dog then – too long has passed since the event for the dog to make the connection between the soiling and the punishment.
SCHEDULING THE CS AND THE US
There are different ways in which the conditioned and the unconditioned stimulus can be paired. Using Pavlov’s dogs as an example, we can see various possible schedules:
• Delayed/forward conditioning: this is where the CS (bell) is presented first and, while the bell is still ringing, the dog is given the US (food). This is the fastest way to achieve acquisition (the term used for the initial learning of the conditioned response).
• Trace conditioning: this is where the CS (bell) is presented but it is followed by a short break before the US (food) appears.
• Simultaneous conditioning: here the CS (bell) and the US (food) are presented at the same time and continue for the same amount of time.
• Backward conditioning: in this case the US (food) is presented first and is followed by the CS (bell).
HIGHER-ORDER OR SECOND-ORDER CONDITIONING
Classical conditioning might seem simple but it actually gets a little more complicated. Imagine pairing a bell with the appearance of food for one of Pavlov’s dogs. Classical conditioning occurs, so that eventually the bell alone will produce salivation without any food needing to be present at all. So far so clear. Now, imagine that every time we ring the bell, we also present the dog with a flash of light. Eventually, the flash of light becomes weakly conditioned to produce salivation even without the bell. This is second-order conditioning.
And it can go further: another stimulus can be paired with the flash of light and salivation can be conditioned to occur with that, too (third-order conditioning). And so it goes on.
A good real-life example is this. A child has a kind aunt who has no children of her own so indulges her nieces and nephews instead. The aunt starts as a neutral figure but the treats and presents that she gives to the child soon turn her into a conditioned stimulus: the child feels happy whenever he sees his aunt. It just so happens that this aunt wears a particular perfume and one day the child smells that perfume in a store – and feels really happy. The perfume has become a second-order conditioned stimulus.
Operant conditioning
‘The only way to tell whether a given event is reinforcing to a given organism under given conditions is to make a direct test. We observe the frequency of a selected response, then make an event contingent upon it and observe any change in frequency. If there is a change, we classify the event as reinforcing to the organism under the existing conditions.’
B. F. Skinner, Science and Human Behavior (New York: Simon & Schuster, 1953)
Operant conditioning (or instrumental conditioning) is a type of learning in which an individual’s behaviour is changed by its antecedents (things that preceded it) and consequences (things that follow it). Operant conditioning is distinguished from classical conditioning in the following ways:
• Classical conditioning involves placing a neutral stimulus before a reflex (i.e. a response that does not need to be learned) and focuses on involuntary, automatic behaviours. It involves making an association between an involuntary response and a stimulus. The learner does not take an active role but is passive throughout the process.
• Operant conditioning, on the other hand, involves applying reinforcement (reward) or punishment after a behaviour and focuses on strengthening or weakening voluntary behaviours. Operant conditioning is about making an association between a voluntary behaviour and a consequence. Operant conditioning requires the learner to actively participate and perform some type of action in order to be rewarded or punished.
Operant conditioning, then, focuses on using either reinforcement or punishment to increase or decrease a behaviour. Through this process an association is formed between the behaviour and the consequences of that behaviour. The ‘father’ of operant conditioning was B. F. Skinner (1904–90), although his theories were based on the work of Edward Thorndike (1874–1949), whose studies of learning in animals using a puzzle box led to his ‘Law of Effect’. The puzzle box was an enclosure into which a cat was placed and from which it had to work out a way to escape in order to obtain its reward (scraps of fish). The boxes contained a lever which, when pressed, opened the exit door. The cats would stumble around until they happened to press the lever accidentally. This would happen for a few trials until the cat realized that there was a connection between pressing the lever and the door opening; it would then immediately press the lever to escape. Thorndike’s ‘Law of Effect’ stated that any behaviour followed by pleasant consequences is likely to be repeated, and any behaviour followed by unpleasant consequences is likely to be stopped.
Spotlight: B. F. Skinner
B. F. Skinner is widely regarded as one of the ‘fathers’ of operant conditioning. Not many people know that the B. F. stands for Burrhus Frederic – and this does explain why his initials tend to be used!
Almost half a century later, this Law of Effect provided a framework for Skinner to develop his principles of operant conditioning. He used an updated version of Thorndike’s puzzle box, called the operant chamber, or Skinner box (1948), which has contributed immensely to our understanding of the Law of Effect and how it relates to operant conditioning. Skinner introduced a new addition to the Law of Effect – reinforcement. This concept stated that behaviour that is reinforced tends to be repeated (i.e. strengthened) while behaviour that is not reinforced tends to die out or be extinguished (i.e. weakened).
While Thorndike used cats, Skinner used rats (which allowed for smaller puzzle boxes to be designed). The principle of the puzzle was the same: the rat had to press a lever in order to obtain a pellet of food. At first this would happen accidentally when the rat brushed against the lever, but the rat soon learned what action to take in order to obtain its reward. This was termed positive reinforcement, which strengthens a behaviour (in this case pressing the lever) by providing a consequence (food) that an individual finds rewarding.
The removal of an unpleasant stimulus can also strengthen behaviour. Skinner introduced electric shocks to the rats in his boxes – these shocks could be stopped by pressing the lever, in the same way that the food was released in the positive reinforcement trials. The rats learned to press the lever to escape the unpleasant stimulus, just as they learned to press it to gain a pleasant reward. This is known as negative reinforcement because it is the removal of an aversive stimulus that is ‘rewarding’ to the animal. Negative reinforcement strengthens behaviour because it stops or removes an unpleasant experience.
Punishment is another operant term and it is different from negative reinforcement. In fact, punishment is the opposite of reinforcement, since it is used to stop a behaviour or weaken the likelihood of it occurring. Punishment can be either positive or negative: positive punishment is when an unfavourable outcome is added after a behaviour is performed, whereas negative punishment is when a favourable outcome is taken away after the behaviour is performed.