Purposive Behaviorism (Edward Chace Tolman – 1922)

Another contribution to cognitive learning theory, one that somewhat blurred the line between cognitive and behavioral learning theory, was the work of Edward Chace Tolman. Tolman was a behaviorist, but he was a purposive behaviorist (McDougall, 1925a, p. 278).

Purpose is held to be essentially a mentalistic category…[but] it will be the thesis of the present paper that a behaviorism (if it be of the proper sort) finds it just as easy and just as necessary to include the descriptive phenomena of “purpose” as does a mentalism. (Tolman, 1925a, pp. 36-37)

For Tolman, a “proper sort” of behaviorism was “not a mere Muscle Twitchism of the Watsonian variety” (1925a, p. 37), but was broad enough to cover “all that was valid in the results of the older introspective psychology” (1922, p. 47). In his view, the Watsonian variety of behaviorism was “an account in terms of muscle contraction and gland secretion” and “as such, would not be behaviorism at all but a mere physiology” (p. 45).

In contrast to the limiting notions of physiological behaviorism, Tolman (1922) suggested a new formula of behaviorism that would “allow for a more ready and adequate treatment of the problems of motive, purpose, determining tendency, and the like” (p. 53). He defined purpose quite simply as persistence in behavior:

Purpose, adequately conceived, it will be held, is itself but an objective aspect of behavior. When an animal is learning a maze, or escaping from a puzzle-box, or merely going about his daily business of eating, nest-building, sleeping, and the like, it will be noted that in all such performances a certain persistence until character is to be found. Now it is just this persistence until character which we will define as purpose. (1925a, p. 37)

As an example, he gave the following:

When a rat is running a maze and is exhibiting trial and error, such trials and errors, we discover, are not wholly identifiable in terms of specific muscle contraction A, followed by specific muscle contraction, B, etc. They are only completely describable as responses which “persist until” a specific “end-object,” food, is reached. An identification of these trial-and-error explorations has to include, in short, a statement of the end-situation (i.e., the presence of food) toward which they eventuate. Such a behavior is, therefore, in our terminology a case of purpose. (p. 38)

It is interesting to note that Tolman spoke frequently of purpose and cognition—going so far as to call them out as the “determiners of animal learning” (1925b, p. 285)—but simultaneously went to great lengths to establish and hold his conception of these terms as distinct from a mentalistic view of the same:[1]

The present paper will offer a new set of concepts for describing and interpreting the facts of animal learning. These new concepts will differ from the usual ones in not being restricted to the customary physiological notions of stimulus, neural excitation, synaptic resistance, and muscle contraction (or gland secretion). They will rather include such immediate and common sense notions as purpose and cognition. These latter, however, will be defined objectively and behavioristically, not ‘mentalistically.’ (p. 285)

Tolman’s efforts to distinguish himself from the physiological behaviorism of Pavlov, Thorndike, and Watson, and from the introspective, mentalistic practices of clinical and human psychology, were products of the time in which his research took place. When he began, introspection had largely been discredited by its opponents as a valid means of fact-finding, and displaced by the methodology of the early, physiologically grounded experimental behaviorists. However, Tolman did not agree with their “molecular” view of behavior—the contraction of muscles, the firing of nerve receptors, or the secretion of glands. In contrast, he viewed behavior as a molar phenomenon, larger than what happens inside the cells of the nervous system:

Behavior…is more than and different from the sum of its physiological parts. Behavior, as such, is an “emergent” phenomenon that has descriptive and defining properties of its own. And we shall designate this latter as the molar definition of behavior. (Tolman, 1932, p. 7)

As a molar phenomenon, behavior’s immediate descriptive properties appear to be those of: getting to or from goal-objects by selecting certain means-object-routes as against others and by exhibiting specific patterns of commerces with these selected means-objects. But these descriptions in terms of gettings to or from, selections of routes and patterns of commerces-with imply and define immediate, immanent purpose and cognition aspects in the behavior. These two aspects of behavior are, however, but objectively and functionally defined entities. (p. 21)

Tolman was not the first to suggest that behaviorism was larger than its physiological roots,[2] but in defining behavior as purposive, he was faced with the two-fold challenge of (a) reintroducing the notions of purpose, goal, and motive without being dismissed as a mentalist, and (b) securing for his views a place apart from those of Thorndike. As has already been mentioned, the first he accomplished simply by defining purpose as the persistence of behavior, and by merely stating that his use of the term ‘cognition’ was behavioristic rather than mentalistic. More importantly, it was in meeting the second challenge—finding a place of light in Thorndike’s shadow—that his most valuable contributions to cognitive psychology were made, namely the phenomenon of latent learning and the development and use of cognitive maps. In reference to Thorndike’s theory he said,

I have quite a number of quarrels with this theory. I would like to say first, however, that it seems to me that this theory of Thorndike’s either in its present or in its earlier form, is the theory relative to which the rest of us here in America have oriented ourselves. The psychology of animal learning—not to mention that of child learning—has been and still is primarily a matter of agreeing or disagreeing with Thorndike, or trying in minor ways to improve upon him. Gestalt psychologists, conditioned reflex psychologists, sign-gestalt psychologists—all of us here in America seem to have taken Thorndike, overtly or covertly, as our starting point. And we have felt very smart and pleased with ourselves if we could show that we have, even in some very minor way, developed new little wrinkles of our own. (Tolman, 1932, p. 152)

Tolman’s first “wrinkle”—latent learning—refers to the type of learning that occurs through casual, non-goal-directed interaction with the environment. That which is learned in this way is not manifest until needed:

Let me recall again the facts of “latent learning.” During latent learning the rat is building up a “condition” in himself, which I have designated as a set of “hypotheses,” and this condition—these hypotheses—do not then and there show in his behavior. S’s are presented but the corresponding R’s do not function. It is only later, after a goal has been introduced which results in a strong appetite, that the R’s, or as I would prefer to say, the B’s, appropriate to these built-up hypotheses appear. (Tolman, 1938, p. 161)

As evidence of latent learning, Tolman (1948) cited experiments that were mostly “carried out by graduate students (or underpaid research assistants) who, supposedly,” he said, “got some of their ideas from me” (p. 189).[3] In each of these experiments it was found that when rats were allowed to explore a fourteen-unit T-maze for a period of a few days, without any reward of food in the goal box, they consistently showed a sudden drop in errors and in time required to reach the goal box once food was discovered there,[4] matching or exceeding the performance of rats that had been trained in the customary behaviorist fashion, in which food was present in the goal box for the duration of the training.

Results of these experiments provided evidence for the phenomenon of latent learning,[5] and simultaneously provided evidence against the law of effect, evidence that Tolman used to establish his position as an improvement upon Thorndike:

My second objection is that the theory as stated by Thorndike does not allow for the facts of “latent learning,” of the complementary phenomenon of a sudden shoot-up in errors when a goal is removed, and of the utilization of alternative habits under different motivations. (Tolman, 1932, p. 153)

Tolman also viewed the latent learning experiments as one type of experiment that provided evidence in favor of cognitive maps. In my review of his writings it has been somewhat difficult to pin down precisely his conception of the cognitive map, given the scrupulous efforts he made to avoid any association with mentalism. However, in his most direct treatment of the topic, Cognitive Maps in Rats and Men (1948), he referred to cognitive maps as “something like a field map of the environment” that “gets established in the rat’s brain” and, once established, is then employed by “intervening brain processes” in the selective attention to stimuli by the nervous system and the execution of responses (p. 192). Even in making this definition, of course, Tolman distanced himself from, and diminished, any association that the reader might make with mentalism by following it up with a metaphorical reference to a “central control room” and a qualifying term, “cognitive-like”:

The incoming impulses are usually worked over and elaborated in the central control room into a tentative, cognitive-like map of the environment. And it is this tentative map, indicating routes and paths and environmental relationships, which finally determines what responses, if any, the animal will finally release. (p. 192)

Tolman’s vague, and somewhat inconsistent, use of terminology notwithstanding, it seems a fair assumption that Tolman’s cognitive maps were, in fact, assumed by him to be contents of the mind. These cognitive maps were built up in latent learning maze experiments during non-rewarded trials. Once they found food in the goal box, the rats were then presumed to use this knowledge to navigate to it just as quickly—and with just as few errors—as rats that had been trained over many rewarded trials.

In addition to the experiments demonstrating latent learning, Tolman also cited four other types of experiments that provide evidence for cognitive maps. The second type, VTE, or Vicarious Trial and Error experiments, investigated the “hesitating, looking-back-and-forth, sort of behavior which rats can often be observed to indulge in at a choice-point before actually going one way or the other” (pp. 196-197). VTE experiments support the theory of cognitive maps by showing that “the animal’s activity is not just one of responding passively to discrete stimuli, but rather one of the active selecting and comparing of stimuli” (p. 200).

The third type Tolman referred to as “Searching for the Stimulus” experiments. In these experiments rats were observed to, anthropomorphically speaking, “look around after the shock to see what it was that had hit them” (p. 201). It was found that rats that received a shock when attempting to eat out of a food cup set in front of a striped visual pattern would avoid going near the cup, or even try to hide the cup and striped pattern with sawdust, even months after only one shocking encounter. In contrast, when the conditions of the experiment were modified so that the lights would briefly go out coincident with the shock—during which time the pattern and food cup dropped out of sight—a large percentage of the rats that were put back into the cage only 24 hours later showed no avoidance of the striped pattern. These experiments reinforced the notion of “the largely active selective character in the rat’s building up of his cognitive map” (p. 201).

The fourth type of experiment, the “Hypothesis” experiments, involved a four-compartment discrimination box in which the correct door at each choice point (between boxes) could be determined by the experimenter to be left or right, light or dark, or a combination of these. By randomizing the 40 correct choices made in 10 runs of each day’s test, the problem became insoluble—meaning there was no pattern or basis for decision that the rat could take advantage of to know in advance which of the doors was correct. It was found that rats in this condition began to systematically test the system, for example, by always choosing the door on the right, then giving up and always choosing the door on the left, or choosing all the dark doors, or choosing all the light doors, etc. These “relatively persistent, and well-above-chance systematic types of choice” (p. 202) were referred to by Krech (as cited in Tolman, 1948, p. 202) as hypotheses. Tolman viewed Krech’s hypotheses as being equivalent to what he had been calling cognitive maps, and noted that from the results of Krech’s experiments it appeared that cognitive maps “get set up in a tentative fashion to be tried out first one and then another until, if possible, one is found which works” (p. 202).

The fifth type of experiment was one of spatial orientation. Experiments of this type demonstrated that rats not only learn how to navigate a maze in order to obtain food in the exit box, but that they simultaneously develop a wider spatial map that includes more than just the specific trained paths. Evidence of this was reported as early as 1929 by Lashley, when two of his rats, after having learned an alley maze, “pushed back the cover near the starting box, climbed out and ran directly across the top to the goal-box where they climbed down in again and ate” (as cited in Tolman, 1948, p. 203). Tolman also noted that other investigators have reported similar findings.

In a series of radial path experiments, Tolman, Ritchie, and Kalish (also cited in Tolman, 1948, p. 203) found that rats develop not only a narrow map of the correct route, but a very wide map of the overall layout. When the known path is blocked, this map enables them to circumvent the problem and return as close as possible to the point at which they last received food. It was shown that even when the maze was rotated by 180 degrees, rats were able to return to the original point of food by turning in the direction opposite to the one previously learned.

Another experiment, which provided evidence against Thorndike’s law of effect, was Tolman’s experiment with human subjects (introductory psychology students) that involved a punchboard maze, a metal stylus, a bell, and a shock (Tolman, Hall, & Bretnall, 1932, as cited in Leahey & Harris, 1997, p. 57). In this study students learned a punchboard maze by inserting a metal stylus into one of two holes, one of which was “correct” and one of which was “incorrect.” The punchboard “maze” consisted of several pairs of holes. The students were required to pass through the maze repeatedly until they were able to do it without choosing a “wrong” hole. The students were divided into a variety of experimental groups, four of which were:

Bell-right—when the subject inserted the stylus into the correct hole of each pair, an electrical circuit closed and rang a bell.

Bell-wrong—when the subject inserted the stylus into the incorrect hole of each pair, the bell rang.

Bell-right-shock—when the subject chose the correct hole, not only did the bell ring, but the subject also received a painful electric shock through the stylus.

Bell-wrong-shock—when the subject chose the incorrect hole, not only did the bell ring, but the subject was shocked.

(p. 57)

The results of the experiment are quite interesting. First, the effect of the bell—supposedly a neutral stimulus with no reinforcing value—appeared to reinforce whatever response it followed, since both bell-wrong groups learned more slowly than the bell-right groups. Subjects in the bell-wrong group had trouble learning to choose the hole that did not ring the bell. Students in the shock groups faced a similar challenge:

Subjects in the bell-right-shock group were learning to receive shocks, not avoid them, as suggested by the law of effect. Indeed, their rate of learning was not significantly different from the bell-right group. On the other hand, the bell-wrong-shock group was learning to avoid shocks, since for them every error resulted in a shock. But the shock did not make them learn faster; in fact, they were the slowest of all the groups. The shock seemed to act as an emphasizer that impeded learning rather than helped it. (p. 58)

Leahey and Harris explained these results by citing an article entitled, “Reinforcement in Human Behavior” (W. K. Estes, 1982b), which describes every reinforcing event as having both an affective and a cognitive dimension. The affective, or emotional, dimension identifies the reinforcer as either pleasurable or painful. The cognitive dimension provides information about whether the response was correct or incorrect.

Tolman’s experiment separated the affective and cognitive values of the reinforcers he used. A bell has no affective value by itself; it changes behavior solely by telling the subject he or she had chosen the correct move in the pegboard maze. In the shock-right groups, the affective value of the reinforcer was brought in conflict with its cognitive value. While the shock was painful, it told the subject that he or she had chosen the correct move. (Leahey & Harris, 1997, p. 58)

In Determiners of Behavior at a Choice Point (1938), Tolman proposed a theory of intervening variables to explain “why rats turn the way they do, at a given choice point in a given maze at a given stage of learning” (p. 1). He believed that all factors determining the choice the rat would make at any point in the maze could be envisioned as a causal function of both independent variables and intervening variables.

The independent variables of the general model were of two types: environmental variables and individual difference variables. Tolman’s environmental variables were (a) maintenance schedule [M], (b) appropriateness of goal object [G], (c) types and modes of stimuli provided [S], (d) types of motor response required [R], (e) cumulative nature and number of trials [∑(OBO)], and (f) pattern of preceding and succeeding maze units. The individual difference variables were (a) heredity [H]; (b) age [A]; (c) previous training [T]; and (d) special endocrine, drug or vitamin conditions [E]. He viewed these as “possible modifiers” (p. 8) between the independent variable and the dependent variable. Tolman presented this as a general model that he supposed to account for theories such as those of Hull and Thorndike. The difference between one theory and another, he said, was simply the intervening variables chosen by the theorist:

A theory, as I shall conceive it, is a set of “intervening variables.” These to-be-inserted intervening variables are “constructs” which we, the theorists, evolve as a useful way of breaking down into more manageable form the original [f1 function which relates independent variables to the dependent variable].…In place of [f1], I have introduced a set of intervening variables, Ia, Ib, Ic, etc., few or many, according to the particular theory. (p. 9)
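Tolman’s scheme lends itself to a schematic sketch. The following is a minimal illustration, not Tolman’s own formalism: the variable labels echo those he used in the 1938 paper, but the data structure, the toy intervening variables, and the way they are combined are invented here purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class IndependentVariables:
    # Environmental variables (labels follow Tolman, 1938)
    M: float    # maintenance schedule
    G: float    # appropriateness of goal object
    S: float    # types and modes of stimuli provided
    R: float    # types of motor response required
    OBO: float  # cumulative nature and number of trials, sum(OBO)
    # Individual-difference variables ("possible modifiers")
    H: float    # heredity
    A: float    # age
    T: float    # previous training
    E: float    # special endocrine, drug, or vitamin conditions

# On Tolman's account, a "theory" is a chosen set of intervening variables
# (Ia, Ib, Ic, ...), each a construct computed from the independent variables.
InterveningVariable = Callable[[IndependentVariables], float]

def behavior_at_choice_point(iv: IndependentVariables,
                             theory: Dict[str, InterveningVariable]) -> float:
    # The function f1 is broken down into the intervening variables; how they
    # recombine is itself part of the theory -- a simple sum is used here only
    # to make the sketch runnable.
    return sum(f(iv) for f in theory.values())

# Invented toy intervening variables, loosely echoing "demand" and
# "hypotheses" from Tolman's own list of six:
toy_theory: Dict[str, InterveningVariable] = {
    "demand": lambda iv: iv.M * iv.G,        # drive state x goal appropriateness
    "hypotheses": lambda iv: iv.OBO + iv.T,  # built up over trials and training
}
```

The point of the sketch is structural rather than quantitative: on Tolman’s account, substituting a different set of intervening variables into f1 amounts to substituting a different theory, which is how he proposed to situate Hull and Thorndike within the same general model.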

For his own theory, he defined the function f1 to consist of six intervening variables: (a) demand, (b) appetite, (c) differentiation, (d) skill, (e) hypotheses, and (f) biases. Each of these intervening variables was, by Tolman’s definition, a measurement of change in the corresponding independent variable while all the others are held constant. Unfortunately, what he presented was, in his own words, an “oversimplified and incomplete version” of his theory “because [he had] not as yet completely thought the whole thing through” (pp. 15-16). Because of this, much interpretation and assumption are required to take these concepts any further. Similarly, in one of the final chapters of Purposive Behavior in Animals and Men (1932), Tolman enumerated what he called “The Laws of Learning, Envisaged by Purposive Behaviorism” (p. 372). Disappointingly, the title of the list held much more promise than the content of the list itself. Like much of Tolman’s writing, it was on the verge of saying something really important, but in the end said nothing much at all.

Just as Tolman’s writing is filled with concatenated terms (e.g., means-object, means-end-capacities, means-end-relation, alternativeness, roundaboutness, and food-demandingness), the ideas expressed in many of his promises-not-quite-fulfilled chapters seem to be nothing more than the concatenation of disparate ideas that have apparent, but not actual, value in providing a truly useful perspective on learning. Still, his persistence in exploring latent learning, cognitive maps, purpose behind behavior, and cognitive control in directing attention and behavior served as a platform on which later cognitive research could be established, and thereby provided a valuable contribution to the emergence of cognitive learning theory. The legacy of his ideas is that they called into question the need for reinforcement in order to learn, and positioned the locus of control of action within the individual, who selects from a previously learned set of alternatives according to his needs at any given moment:

Our final criticism of the trial and error doctrine is that it is its fundamental notion of stimulus-response bonds, which is wrong. Stimuli do not, as such, call out responses willy nilly. Correct stimulus-response connections do not get “stamped in,” and incorrect ones do not get “stamped out.” Rather learning consists in the organisms’ “discovering” or “refining” what all the respective alternative responses lead to. And then, if, under the appetite-aversion conditions of the moment, the consequences of one of these alternatives is more demanded than the others—or if it be “demanded-for” and the others be “demanded-against”—then the organism will tend, after such learning, to select and to perform the response leading to the more “demanded-for” consequences. But, if there be no such difference in demands there will be no such selection and performance of the one response, even though there has been learning. (Tolman, 1932, p. 364)


[1] I find Tolman’s use of cognitive terms (e.g., cognitions, cognitive hunches, initial cognitions, and cognition intent) to be excessive and unusual. Though he says repeatedly what they are not (mentalistic), he never says exactly what they are. It seems likely that he was, in fact, referring to cognition in the ‘thinking’ sense, but that to avoid being side-lined or benched by the mainstream behaviorists of the day, he refused to admit any supposition of hypothetical mental activity. Of course, if he was not referring to thinking, why would he have used the term ‘cognition’ at all?

I recently found that this same point is brought up by McDougall (1925b, p. 298):

Tolman seems inclined to attach much importance to the fact that by using the words of common speech (such words as desire, purpose, striving, cognition, perception and memory and anticipation) you can describe the event and yet can avoid what he calls the ‘mentalist’ implications, if you carefully explain that you don’t mean to use the words in the ordinary sense, but merely as words which are convenient for the description of the objective event you observe.

[2] Tolman notes Holt, Perry, Singer, de Laguna, Hunter, Weiss, Lashley, and Frost as offering alternative views to the Watsonian brand of behaviorism (Tolman, 1932, pp. 4, 8-10).

[3] In Purposive Behavior in Animals and Men, Tolman (1932, p. 343) lists four specific experiments conducted by: Blodgett in 1929, Williams in 1924, Elliott in 1929, and Tolman and Honzik in 1930. The term “latent learning” comes from Blodgett.

[4] One might wonder what is meant by a sudden drop in errors and time required to reach the goal box if previously there was no reward. Why would the rats even go to the goal box? The answer is that in the process of exploring the maze the rats would eventually end up in the goal box. Once there, they were confined in the goal box for a period of two minutes, without food, and then returned to their cages. Some have argued that because the rats were removed from the maze and returned to their cages, “reward was, in fact, not removed from the situation” (Hergenhahn, 1982, p. 307). Even though this may be true (as I personally believe is the case, based on the obvious perturbation I observed in the subjects of my own maze-learning experiments, which plotted the learning curve of a hamster in a variety of maze configurations), there is no question that the rats showed a very sudden and very significant decrease in errors and time in making their way to the goal box once it was discovered that food was to be found there. To use Tolman’s terms, the rats moved very purposively and directly to the goal box when a “more demanded goal-object” was present (Tolman, 1932, p. 48).

[5] Latent learning was experimentally defined by Tolman as the sudden decrease in errors made in a maze when a reward was placed in the end-goal box, as compared to the number of errors made when there was no reward present. The complement of latent learning, also demonstrated by Tolman in maze experiments with rats, was that when the end-goal reward was removed, there was a sudden increase in errors, presumably because the rats were now looking elsewhere for the food.
