The Basics of Behaviorism
Though Watson, Thorndike, Pavlov and Skinner are often mentioned in relation to behavioral learning theories, behaviorism can be traced back to philosophers such as Aristotle, whose essay "Memory" focused on associations made between events such as lightning and thunder. Other philosophers who followed Aristotle's line of thought include Hobbes (1650), Hume (1740), Brown (1820), Bain (1855) and Ebbinghaus (1885) (Black, 1995).
The theory of behaviorism concentrates on the study of overt behaviors that can be observed and measured (Good & Brophy, 1990). It views the mind as a "black box" in the sense that responses to stimuli can be observed quantitatively, ignoring entirely the possibility of thought processes occurring in the mind. Some key players in the development of behaviorist theory were Pavlov, Watson, Thorndike and Skinner.
Pavlov (1849 - 1936)
For most people, the name "Pavlov" rings a bell. The Russian physiologist is best known for his work in classical conditioning, or stimulus substitution. Pavlov's most famous experiment involved food, a dog and a bell.
Pavlov's Experiment
- Before conditioning,
ringing the bell caused no response from the dog. Placing food in front of
the dog initiated salivation.
- During conditioning, the
bell was rung a few seconds before the dog was presented with food.
- After conditioning, the
ringing of the bell alone produced salivation
(Dembo, 1994).
Stimulus and Response Items of Pavlov's Experiment
- Food: Unconditioned Stimulus
- Salivation: Unconditioned Response (natural, not learned)
- Bell: Conditioned Stimulus
- Salivation: Conditioned Response (to bell)
Other Observations Made by Pavlov
- Stimulus Generalization:
Once the dog has learned to salivate at the sound of the bell, it will
salivate at other similar sounds.
- Extinction: If you stop
pairing the bell with the food, salivation will eventually cease in
response to the bell.
- Spontaneous Recovery:
Extinguished responses can be "recovered" after an elapsed time,
but will soon extinguish again if the dog is not presented with food.
- Discrimination: The dog
could learn to discriminate between similar bells (stimuli) and discern
which bell would result in the presentation of food and which would not.
- Higher-Order Conditioning: Once the dog has been conditioned to associate the bell with food, another neutral stimulus, such as a light, may be flashed at the same time that the bell is rung. Eventually the dog will salivate at the flash of the light without the sound of the bell.
Thorndike (1874 - 1949)
Edward
Thorndike did research in animal behavior before becoming interested in human
psychology. He set out to apply "the methods of exact science" to
educational problems by emphasizing "accurate quantitative treatment of
information". "Anything that exists, exists in a certain quantity and
can be measured" (Johcich, as cited in Rizo, 1991). His theory,
Connectionism, stated that learning was the formation of a connection between
stimulus and response.
- The "law of
effect" stated that when a connection between a stimulus and response
is positively rewarded it will be strengthened and when it is negatively
rewarded it will be weakened. Thorndike later revised this "law"
when he found that negative reward, (punishment) did not necessarily
weaken bonds, and that some seemingly pleasurable consequences do not
necessarily motivate performance.
- The "law of
exercise" held that the more an S-R (stimulus response) bond is
practiced the stronger it will become. As with the law of effect, the law
of exercise also had to be updated when Thorndike found that practice
without feedback does not necessarily enhance performance.
- The "law of
readiness" : because of the structure of the nervous system, certain
conduction units, in a given situation, are more predisposed to conduct
than others.
Thorndike's
laws were based on the stimulus-response hypothesis. He believed that a neural
bond would be established between the stimulus and response when the response
was positive. Learning takes place when the bonds are formed into patterns of
behavior (Saettler, 1990).
Watson (1878 - 1958)
John B.
Watson was the first American psychologist to use Pavlov's ideas. Like
Thorndike, he was originally involved in animal research, but later became
involved in the study of human behavior.
Watson believed that humans are born with a few reflexes and the emotional reactions of love and rage. All other behavior is established through stimulus-response associations formed through conditioning.
Watson's Experiment
Watson demonstrated classical conditioning in an experiment involving a young child (Albert) and a white rabbit. Originally, Albert was unafraid of the rabbit, but Watson created a sudden loud noise whenever Albert touched the rabbit. Because Albert was frightened by the loud noise, he soon became conditioned to fear and avoid the rabbit. The fear was generalized to other small animals. Watson then "extinguished" the fear by presenting the rabbit without the loud noise. Some accounts of the study suggest that the conditioned fear was more powerful and permanent than it really was (Harris, 1979; Samelson, 1980, as cited in Good & Brophy, 1990).
Certainly Watson's research methods would be questioned today; however,
his work did demonstrate the role of conditioning in the development of
emotional responses to certain stimuli. This may explain certain fears, phobias
and prejudices that people develop. (Watson is credited with coining the term "behaviorism.")
Skinner (1904 - 1990)
Like Pavlov,
Watson and Thorndike, Skinner believed in the stimulus-response pattern of
conditioned behavior. His theory dealt with changes in observable behavior,
ignoring the possibility of any processes occurring in the mind. Skinner's 1948 book, Walden Two, is about a utopian society based on operant conditioning. He also wrote Science and Human Behavior (1953), in which he pointed out how the principles of operant conditioning function in social institutions such as government, law, religion, economics and education (Dembo, 1994).
Skinner's work differs from that of his predecessors (classical conditioning) in that he studied operant behavior (voluntary behaviors used in operating on the environment).
Difference between Classical and Operant Conditioning
In classical (respondent) conditioning, an antecedent stimulus elicits an involuntary, reflexive response; the behavior is drawn out by what comes before it. In operant (instrumental) conditioning, the learner voluntarily emits a behavior, and the consequences that follow (reinforcement or punishment) determine whether it will be repeated.
Skinner's Operant Conditioning Mechanisms
- Positive Reinforcement or
reward: Responses that are rewarded are likely to be repeated. (Good
grades reinforce careful study.)
- Negative Reinforcement:
Responses that allow escape from painful or undesirable situations are
likely to be repeated. (Being excused from writing a final because of good
term work.)
- Extinction or Non-Reinforcement: Responses that are not reinforced are not likely to be repeated. (Ignoring student misbehavior should extinguish that behavior.)
- Punishment: Responses
that bring painful or undesirable consequences will be suppressed, but may
reappear if reinforcement contingencies change. (Penalizing late students
by withdrawing privileges should stop their lateness.)
(Good & Brophy, 1990)
Behavioral Shaping
If placed in a cage, an animal may take a very long time to figure out that pressing a lever will produce food. To accomplish such behavior, successive approximations of the behavior are rewarded until the animal learns the association between the lever and the food reward. To begin shaping, the animal may be rewarded for simply turning in the direction of the lever, then for moving toward the lever, then for brushing against the lever, and finally for pawing the lever.
Behavioral chaining occurs when a succession of steps needs to be learned. The animal masters each step in sequence until the entire sequence is learned.
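To make the procedure concrete, here is a minimal Python sketch of shaping by successive approximations. It is only an illustration, not something taken from the sources cited above; the step names, probabilities and mastery criterion are hypothetical stand-ins for observing a real animal.

```python
# Toy sketch of shaping by successive approximations (illustration only;
# the step names, probabilities and mastery criterion are made up).
import random

# Successive approximations of the target behavior (pawing the lever).
STEPS = ["turn toward lever", "move toward lever", "brush against lever", "paw lever"]

def shape(mastery: int = 5) -> None:
    """Reinforce each approximation until it occurs `mastery` times in a row,
    then tighten the criterion to the next, closer approximation."""
    for criterion in STEPS:
        tendency = 0.5   # stand-in for how likely the animal is to meet the criterion
        streak = 0
        while streak < mastery:
            if random.random() < tendency:
                streak += 1                            # criterion met: deliver food
                tendency = min(1.0, tendency + 0.05)   # reinforced responses strengthen
            else:
                streak = 0                             # criterion missed: no food this trial
        print(f"mastered: {criterion}")

if __name__ == "__main__":
    shape()
```

Behavioral chaining would follow the same loop structure, except that each mastered step remains part of the required sequence rather than being replaced by the next criterion.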
Reinforcement Schedules
Once the desired behavioral response is accomplished, reinforcement does not have to be given 100% of the time; in fact, the response can be maintained more successfully through what Skinner referred to as partial reinforcement schedules. Partial reinforcement schedules include interval schedules and ratio schedules.
- Fixed Interval Schedules:
the target response is reinforced after a fixed amount of time has passed
since the last reinforcement.
- Variable Interval
Schedules: similar to fixed interval schedules, but the amount of time
that must pass between reinforcement varies.
- Fixed Ratio Schedules: a
fixed number of correct responses must occur before reinforcement may recur.
- Variable Ratio Schedules: the number of correct responses required before reinforcement is given varies.
Variable interval and, especially, variable ratio schedules produce steadier and more persistent rates of response, because the learners cannot predict when the reinforcement will come, although they know that they will eventually succeed. (Have you checked your lottery tickets lately?)
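As a rough illustration of the four schedules (not taken from any of the sources cited here; the ratio of 5 responses and the interval of 10 time units are arbitrary choices), the following Python sketch defines one rule per schedule deciding whether a given response earns reinforcement.

```python
# Illustrative sketch of partial reinforcement schedules (values are arbitrary examples).
import random

def fixed_ratio(responses_since_reward: int, ratio: int = 5) -> bool:
    # Reinforce after a fixed number of correct responses.
    return responses_since_reward >= ratio

def variable_ratio(responses_since_reward: int, mean_ratio: int = 5) -> bool:
    # Reinforce after a varying number of responses (mean_ratio on average).
    return random.random() < 1.0 / mean_ratio

def fixed_interval(time_since_reward: float, interval: float = 10.0) -> bool:
    # Reinforce the first response after a fixed amount of time has passed.
    return time_since_reward >= interval

def variable_interval(time_since_reward: float, mean_interval: float = 10.0) -> bool:
    # Reinforce after a varying amount of time; waiting longer makes reward more likely.
    return random.random() < time_since_reward / (time_since_reward + mean_interval)

if __name__ == "__main__":
    # 100 responses, one per time unit, so one counter serves as both the
    # response count and the elapsed time since the last reinforcement.
    for name, rule in [("fixed ratio", fixed_ratio), ("variable ratio", variable_ratio),
                       ("fixed interval", fixed_interval), ("variable interval", variable_interval)]:
        since_reward, rewards = 0, 0
        for _ in range(100):
            since_reward += 1
            if rule(since_reward):
                rewards += 1
                since_reward = 0     # reinforcement delivered; reset the count/clock
        print(f"{name:>17}: reinforced {rewards} of 100 responses")
```

Under the variable schedules the learner cannot tell from the count or the clock when the next reinforcement is due, which is exactly why, as noted above, they sustain steadier responding.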
Overview
According to
the behaviorists,
learning can be defined as the relatively permanent change in behavior brought
about as a result of experience or practice. [Note: learning itself is an internal event displayed through overt behavior; it is contrasted with biological maturation or genetics as an explanation for relatively permanent change.] In fact, the term "learning
theory" is often associated with the behavioral view. Researchers who
affiliate with this position do not generally look with favor on the term
"behavior potential" (i.e., may be capable of performing but did not
for some reason such as illness, situation, etc.) that was included in a
definition accepted by those with a cognitive or humanistic viewpoint. The focus
of the behavioral
approach is on how the environment impacts overt behavior. The psychomotor
domain is associated with overt behavior when writing instructional
objectives. Cunia (2005) provides an excellent overview of the behavioral
approach applied to learning. Behavior analysis is the term used to describe the scientific study of behavior, and behavior modification is the term used to describe the application of behavior analysis concepts and principles to the systematic or programmatic changing of behavior.
As we discuss
the behavioral
approach, for the most part we will assume that the mind is a
"black box" that we cannot see into. The only way we know what is
going on in the mind, according to most behaviorists, is to look at overt
behavior. The feedback loop
that connects overt behavior to stimuli that activate the senses has been
studied extensively from this perspective.
There are
three types of behaviorist
learning theories:
- Contiguity -- any
stimulus and response connected in time and/or space will tend to be
associated
ASSOCIATED TERMINOLOGY:
- stimulus = environmental event
- response = action = behavior = overt behavior
- Classical
(Respondent) Conditioning -- association of stimuli (an
antecedent stimulus will reflexively elicit an innate emotional or
physiological response; another stimulus will elicit an orienting
response)
ASSOCIATED TERMINOLOGY:
- conditioning = learning
- antecedent = a stimulus occurring "before" a response
- reflexive = involuntary (e.g., involuntary responses cannot be consciously stopped once they start)
- innate = inborn
- elicits = causes (to bring forth)
- Operant
(Instrumental) Conditioning -- connection of emitted behavior
and its consequences (reinforcement and punishment)
ASSOCIATED TERMINOLOGY:
- emitted
= voluntary (e.g., voluntary responses can be consciously stopped)
- consequent or consequences = a stimulus occurring "after" a
response that changes the probability the response will occur again
Additional Terminology:
There are
several terms associated with the behavioral approach that deserve further
explanation.
Extinction -- the breaking of the stimulus-stimulus or
stimulus-response connection
1. contiguity theory -- if the stimulus is no longer paired with the response, the association will be discontinued.
2. classical conditioning -- if the conditioned stimulus (CS) is repeatedly presented by itself (without pairing with the unconditioned stimulus [US]), the conditioning/association process is reversed and the CS becomes a neutral stimulus (NS) again.
3. operant conditioning -- if the response is no longer followed by a consequence (it is not reinforced or punished), it will cease to be emitted.
4. social learning theory -- if the observed response is no longer followed by a consequence (it is not reinforced or punished), or if the model begins to display an incompatible behavior, the response will cease to be emitted.
Spontaneous recovery: Sometimes, after extinction in classical
conditioning, if the conditioned stimulus (CS) is again presented, it will
"spontaneously" elicit the conditioned response (CR).
Higher (or second) order conditioning: Classical conditioning does not have to involve pairing a neutral stimulus (NS) with an unconditioned stimulus (US). If an NS is paired with an existing conditioned stimulus (CS), the NS will also become a CS.
Stimulus generalization and discrimination
- generalization -- behaviors learned in one context or situation are transferred to another (e.g., studying hard in Ed Psyc is transferred to studying hard in other classes)
- discrimination -- behaviors rewarded or punished in one context or situation have a different contingency in another (e.g., spending 5 hours per week on most courses is OK, but one must spend 10 hours per week on Ed Psyc)
References
- Cunia, E. (2005). Behavioral learning theory. Principles of Instruction and Learning: A Web Quest. Retrieved April 2006, from http://suedstudent.syr.edu/~ebarrett/ide621/behavior.htm