All intentional or directed learning is aimed at the attainment of some target. Feedback is the means by which the learner, or any other agent directing the learning process, ascertains whether or not progress is being made toward the end goal, and whether or not the goal has been reached.
Aristotle recognized the importance of feedback—in particular, the role of external feedback from a teacher—as what distinguishes learning to do something well from learning to do something poorly:
The material form from which and the means by which any form of goodness is produced and those by which it is destroyed are the same…. for it is by playing the lyre that both good and bad lyre-players are produced, and it is the same with builders and the rest. It is by building well that they will become good builders and by building badly that they will become bad builders. If it were otherwise, we should have no need of anyone to teach us; all would become good or bad as the case might be. (Aristotle & Burnet, 1913, p. 45)
Feedback was central to Thorndike’s theory of learning, cast in terms of his law of effect. He believed, based on his own experimental evidence, that repetition in the absence of feedback does nothing to improve performance. In an experiment in which subjects were blindfolded and repeatedly asked to draw a four-inch line with one quick movement, Thorndike discovered that doing so 3,000 times “caused no learning” because the lines drawn in the eleventh or twelfth sittings were “not demonstrably better than or different from those drawn in the first or second” (Thorndike, 1931, p. 10). He also explored the relative effectiveness of positive and negative feedback through a variety of experiments and concluded that satisfiers (reward) and annoyers (punishment) are not equal in their power to strengthen or weaken a connection. In one such experiment, students learned Spanish vocabulary by selecting for each Spanish word one of five possible English meanings, followed by the rewarding feedback of being told “Right” or the punishing feedback of being told “Wrong.” From the results of this experiment Thorndike concluded that punishment does not diminish response as originally stated in the law of effect. In his own words,
Indeed the announcement of “Wrong” in our experiments does not weaken the connection at all, so far as we can see. Rather there is more gain in strength from the occurrence of the response than there is weakening by the attachment of “Wrong” to it. Whereas two occurrences of a right response followed by “Right” strengthen the connection much more than one does, two occurrences of a wrong response followed by “Wrong” weaken that connection less than one does. (Thorndike, 1931, p. 45)
He also observed a “spreading of effect,” meaning that “a satisfying after-effect” not only strengthens “the connection which it follows directly and to which it belongs” (p. 174), but also strengthens “by a smaller amount the connections preceding and following that, and by a still smaller amount the preceding and succeeding connections two steps removed” (p. 174).
In the reflex-type conditioning of Pavlov, feedback is simply the administration of the unconditioned stimulus, or the lack thereof: “If for a long time, such as days or weeks continuously, a certain kind of food is shown to the animal without it being given to him to eat, it loses its power of stimulating from a distance” (Pavlov et al., 1928, p. 85).
Watson described feedback in two forms: (a) punishment and (b) satisfaction of a need (Watson, 1914, pp. 204-206). He also compared the process of adjustment in man to the satisfaction a hungry animal experiences when food is introduced into the stomach, noting that man “becomes adjusted only when he reacts in such a way as to bring about the disappearance of the particular organic stimulus which is acting at the moment” (Watson, 1919, p. 271). This removal of the drive stimulus provides feedback to the man that the actions which led to his current state were successful in removing the stimulus and sets the expectation of future success in similar situations.
The work for which B. F. Skinner is best known in behavioral psychology is his method of operant conditioning, which relies entirely on reinforcement administered according to various schedules. For a detailed description of each schedule type see Ferster and Skinner (1957). Without intending to trivialize his program of research or his method of operant conditioning, I believe it is fair to say that Skinner’s method is simply one of directing learning or behavior by providing feedback to the organism, or rather by controlling the consequences of the organism’s emitted behavior. In his own words, “Behavior is shaped and maintained by its consequences” (Skinner, 1989, p. 14). Using this method, by controlling the type, amount, timing, and schedule of reinforcement, the experimenter directly controls the feedback received by the subject, and thereby indirectly controls its behavior:
The Law of Effect has been taken seriously; we have made sure that effects do occur and that they occur under conditions which are optimal for producing the changes called learning. Once we have arranged the particular type of consequence called a reinforcement, our techniques permit us to shape up the behavior of an organism almost at will. (Skinner, 1961g, pp. 145-146)
One of the most important factors in providing effective reinforcement is that it must be given “almost simultaneously with the desired behavior” (Skinner, 1961e, p. 413). Hull (1942) identified reinforcement as the source of the “increment of a habit” (p. 72) and agreed on the importance of the concomitant timing of reinforcement and behavior:
In higher organisms, through some process of learning not yet wholly clear, the power of reinforcement is extended to any stimulus situation which has been consistently and rather closely associated in time with the reduction in a primary need, or even with any other stimulus so associated. Stimuli (or the objects yielding these stimuli) which have thus become reinforcing states of affairs are said to be secondary reinforcing agents, and reinforcements so mediated are called secondary reinforcements. (pp. 67-68)
Timing of feedback also figures in the challenge Guthrie noted, particularly with children, of connecting present punishment with prior behavior. He felt this connection was so problematic that, instead of providing feedback in the form of post-performance punishment, he recommended arranging circumstances so as to ensure performance of the desired action:
The child that is punished at eight in the evening because he did not return home at seven will learn as a result of the punishment; but what he will learn will be problematical. Unless he is as rational as the average adult and can establish a chain of associations through complicated speech cues while he is brooding over his punishment, one thing he will not learn is to return in the future at seven. He learns what he does. To achieve a habit of returning at seven it is necessary with the average child to forget the first unfortunate outcome, which is now past and has had its bad effect on habit, and to lay plans to insure that the next evening he will be reminded in time and perform the action as it is desired. (Guthrie, 1942, p. 26)
Guthrie’s formal position appears to have been that feedback plays no role in learning; his primary thrust was an attempt to reduce the “established facts of learning” (Guthrie, 1930, p. 412) to the simple principle of association by contiguity. However, though somewhat inconsistent with that formal position, Guthrie did on occasion mention feedback—e.g., in terms of (a) drive removal, (b) instructor guidance (or interruption), and (c) the learner’s own recognition of failure:
The successful act or series of acts is learned because it is always the last association with the drive and…this association remains because the drive has been removed [italics added] by the consummatory response. (Guthrie, 1939, p. 481)
A first lucky drive to the green, a first arrow on the target, or the first strike at bowling does not make a man a golfer, an archer or a bowler. The fortunate outcome was an accident. But it is out of accidents that skills are made. The next try is likely to be from a different stance and to have less fortunate results. The very fact that it is a second try rather than the first means that the action has a different beginning. In order to master the sport, the beginner must be exposed to the variety of situations that are encountered in the course of play. His awkward and erroneous movements must be somehow eliminated. His instructor’s words or his own recognition of failure may lead to changes of attack with the result that new movements are attached to the situation [italics added]. The problem of teaching skills is largely the problem of breaking up wrong action and encouraging practice [italics added] in which there is eventually a chance of successful movement. The track coach or the orchestra leader may correct many obviously wrong methods by interrupting the activities and suggesting new behavior to replace the wrong methods [italics added]. His method is to interrupt in order to discourage wrong movements and to leave undisturbed the right movements [italics added] when they finally appear. They will remain unless something happens to cause other behavior to be established in their place. (Guthrie, 1942, p. 36)
Estes described reinforcement as having both “informational and motivation components” (W. K. Estes, 1967, p. 3; W. K. Estes, 1982b, p. 46). The motivational component identifies the reinforcer as either pleasurable or painful; the informational component provides information about whether the response was correct or incorrect. Leahey and Harris (1997, pp. 57-58) apply this perspective to explain the results of Tolman’s punchboard maze experiment—results which contradict the prediction of the law of effect that punishment would discourage behavior while reward would encourage it.
The experiment contained four groups: bell-right, bell-wrong, bell-right-shock, and bell-wrong-shock. Both “bell-right” groups did better than the “bell-wrong” groups, and the “bell-right-shock” group learned in spite of receiving shocks, with a rate of learning not significantly different from that of the “bell-right” group (Tolman, Hall, & Bretnall, 1932). Leahey and Harris interpret these results to mean that, for the bell-right-shock group, the informational component of the shock was of greater influence than its physical discomfort.
Estes (1967) also cites studies suggesting that individuals do not necessarily have to be aware of the connection between their responses and reinforcing operations in order for reinforcement to be effective:
Suitably programmed rewards controlled the occurrence of verbal behaviors in a manner predictable from analogous studies of operant behavior in animals, and, in particular, that effects of rewards were independent of the subjects’ awareness of relationships or contingencies between their responses and reinforcing operations. (p. 1)
In cognitive learning theory we find the study of memory by Ebbinghaus marked by three forms of feedback: (a) the possibility of reproduction, (b) the ease of recall, and (c) the ease of relearning. The successful learning of a given content is evident first in the possibility of unaided reproduction (Ebbinghaus, 1913, p. 4). Another indicator of progress toward the end goal of completely unaided reproduction of a series is the ease with which it can be “read” from an “inscription” on the “mental substratum” (pp. 52-53): the greater the learning, the deeper the engraving. The third feedback indicator is the savings in relearning the list, that is, the reduced amount of time required to reach the first unaided and complete reproduction of the list, as compared with the time required for the previous learning (p. 61). In contrast with the reward-and-punishment feedback of behavioral learning theory, we find here a form of feedback in which learning progress is directed not by the imposition of reinforcement by an external agent but by some internal motivation to reach a goal and by self-evident progress of performance toward that goal.
Tolman’s research adds to this perspective the possibility of retaining the feedback of learned consequences and using it in subsequent settings to make behavior choices. In addition to the punchboard maze experiments previously mentioned, Tolman provided two additional insights on feedback in learning, one theoretical and one experimental. The first is found in his criticism of what he referred to as “the trial and error doctrine” (Tolman, 1932, p. 364). “Correct stimulus-response connections,” he said, “do not get ‘stamped in,’ and incorrect ones do not get ‘stamped out.’” Instead, he believed that the response selected in a given situation was determined by the previously learned consequences of each available alternative.
The second insight comes from the cleverly designed “Searching for the Stimulus” experiments (Tolman, 1948). In these experiments, rats that received a shock after attempting to eat out of a food cup set in front of a striped visual pattern were observed to “look around after the shock to see what it was that had hit them” (p. 201). These rats would avoid going near the cup, or even try to hide the cup and striped pattern with sawdust, even months after a single shocking encounter. When the conditions of the experiment were modified so that the lights briefly went out coincident with the shock—during which time the pattern and food cup dropped out of sight—a large percentage of the rats put back into the cage only 24 hours later showed no avoidance of the striped pattern. Thus, while it may be true that individuals need not be consciously aware of the connection between their responses and reinforcing operations for those operations to be effective, Tolman’s experiments suggest that the connection must be made, at least at some level, in order for learning to occur:
Mindless drilling practice leads to little improvement. For practice or drill of any kind to be effective, it should be mindful, or deliberate. Individuals should be attentive to what they are doing. They should watch for and correct errors and work toward improvement (Ericsson, 1996a, 1996b). (Sternberg & Williams, 2010, p. 336)
Ausubel et al. (1978, p. 310) described feedback as knowledge of results rather than reinforcement of correct responses. They described the process by which concepts are formed as one of ongoing hypothesis generation and testing (p. 56) and stated that the consolidation of ideas—a necessary step in establishing a framework of subsuming ideas—is achieved “through confirmation, correction, clarification, differential practice, and review in the course of repeated exposure, with feedback, to learning material” (p. 197). They also identified an additional type of feedback, the “internal logic of meaningfully learned material,” which “allows for more self-provided feedback than do inherently arbitrary associations” (p. 310). Ausubel et al. (1978) cited Thorndike to support the position that learning cannot take place in the absence of feedback:
Thorndike’s research on frequency (1931, 1932) is often cited as proof that the effect of frequency is negligible on learning. However, “it merely demonstrates that certain atypical kinds of learning cannot take place in the absence of explicit intention or feedback, no matter how frequently the learning task is repeated” (p. 316).
Ausubel (1962) also noted that feedback from an instructor can assist learners in identifying similarities and differences between new materials and “their presumed subsumers in cognitive structure” (p. 219).
Two types of self-recognized feedback are suggested by schema theory (Rumelhart & Norman, 1976, p. 17): (a) limited utility of accumulated information and (b) incongruence with functional demand. In the first type, a critical mass of information has accumulated in the mind, but the initial organization of the various individual pieces is recognized as no longer sufficient for effective utilization of the body of information as a whole. As the body grows it becomes unwieldy and “gives rise to the need for restructuring” (p. 4).
The second type has to do with recognition that an existing schema does not meet functional demands due to (a) insufficient accuracy, (b) overly narrow constraints, (c) overly broad constraints, or (d) unspecified default values. In the first case, insufficient accuracy, it is recognized that the variable terms of the schema need to be improved in order to “specify the concepts that fit the variables with more accuracy” (p. 17). In the second, overly narrow constraints, it is discovered that the range of a given variable needs to be generalized to “extend its range of applicability” (p. 17) to include other relevant cases that the schema should account for. In the third, overly broad constraints, the range of a given variable needs to be constrained by “adding to the constraints of the variable or, in the extreme, by effectively replacing the variable with a constant term” (p. 17). In the fourth, unspecified defaults, it is found that default values for the variables of the schema—values that allow intelligent guesses to be made when drawing inferences or that guide further processing—have not yet been specified. In all cases, the discrepancy must be recognized by the learner in order for restructuring to occur (pp. 21-22).
In applied constructivist learning theory, feedback results from the interaction of new and existing knowledge—in particular, as “new information prompts the emergence or enhancement of cognitive structures that enable us to rethink our prior ideas” (Brooks & Brooks, 1993, p. 15) or when students are engaged in “experiences that might engender contradictions to their initial hypotheses” (p. 112). This idea comes from Piaget, who believed that the functioning of reflexes, specifically, and accommodation, in general, is driven by interaction with the environment: “Certain instincts are lost or certain reflexes cease to function normally, due to the lack of a suitable environment” (Piaget, 1963, p. 30). It is through feedback, received first through the senses and later through “the new structure of thought” (Piaget & Inhelder, 1969, p. 131), that the infant develops its sensorimotor action schemes and subsequent cognitive substructures. It is the constant sensory feedback and the consequences of action that facilitate development of “the broad categories of action which are the schemes of the permanent object, space, time, and causality” (p. 13).
Bruner described feedback as evaluation and named it one of three aspects of learning: “A third aspect of learning is evaluation: checking whether the way we have manipulated information is adequate to the task. Is the generalization fitting, have we extrapolated it appropriately, are we operating properly?” (J. S. Bruner, 1960, p. 48). In his model of discovery the student “is not a bench-bound listener, but is taking a part in the formulation” (J. S. Bruner, 1961, p. 23) and is constantly evaluating incoming information and reconciling it with his system of beliefs and understanding. This incoming information provides feedback as to whether or not his understanding accurately reflects the world around him.
Feedback in human learning theories consistently follows three themes: (a) the effects of outcomes, (b) the role of self-evaluation, and (c) secondary feedback through cognitive interpretation and attribution. From the outcomes of one’s actions, expectancies are derived and used to predict future outcomes (for example, J. W. Atkinson & Feather, 1966; Heider, 1958, p. 99; Keller, 2010, pp. 6-7). It is also the outcomes of action, in particular accomplishments, that determine one’s sense of worth and self-efficacy (Covington, 1984, pp. 8-9). Accomplishments have the greatest influence on self-worth and self-efficacy when they are attributed to one’s own effort and ability rather than to external or uncontrollable factors (Deci & Ryan, 1985, p. 61). For example, success resulting from remedial assistance is not usually valued as highly as success attributed to one’s independent effort and ability (Covington, 1984, p. 8). Successes typically increase beliefs of self-worth and self-efficacy; failures diminish them. Whether the outcome of action is considered a success or a failure depends in part on one’s prior expectations, or level of aspiration. Ferdinand Hoppe (as cited in Covington, 1998, p. 28) found that judgments of success or failure depend less on actual levels of performance and more on the relationship between the individual’s performance and aspirations. Feelings of success come when goals are achieved; feelings of failure come when they are not.
Human learning theories often recognize all sources of feedback as being subject to cognitive interpretation and a process of causal attribution (Heider, 1958, p. 99; Weiner, 2010, p. 33). The results of this cognition form a secondary source of feedback. Feedback also comes through vicarious experiences, in which one perceives the subject of observation as similar to oneself and assumes that the outcomes of the subject’s actions will be similar to the outcomes of one’s own. It may also come in the form of verbal and social persuasion or of physiological, somatic, or emotional responses to a given situation (Bandura, 1977a, pp. 191 (abstract), 195-200; Bandura, 1994a, pp. 2-3).
Feedback enhances intrinsic motivation when (a) it is positive rather than negative (Deci & Ryan, 1985, pp. 60-61), (b) the locus of causality is perceived to be internal (p. 61), and (c) it is informational rather than controlling (p. 92). In most human theories—especially those centered on models of agency, self-regulation, or self-determination—feedback from others is of secondary importance compared with feedback from one’s own self-awareness, self-monitoring, self-recording, and self-evaluation (Rogers, 1969, p. 163).
In social learning theory, two sources of feedback are typical. The first is found in the consequences of one’s actions. For example, Vygotsky (1994b, p. 64) described four stages of the child’s cultural development. In the first, “the younger child tries to remember the data supplied to him by a primitive or natural means” (p. 64). In the second stage, the child begins to use a mnemotechnical method—e.g., using picture cards available to him. The transition from the first to the second stage is usually made only after the child discovers—through the results and consequences of his actions—that he is unable to remember the information on his own. In another example, from situated learning, Lave (1988) found that participants in her studies often made multiple attempts before solving problems. In the process they checked their partial or interim solutions to see if they were consistent with reality and if they were likely to reach a satisfactory answer using their chosen method. One of the advantages of situated learning is the “immediate ground for self-evaluation” (Lave & Wenger, 1991, p. 111) that it provides. “The scarcity of tests, praise, or blame typical of apprenticeship follows from the apprentice’s legitimacy as a participant” (p. 111). Another example of feedback through consequences comes from expansive learning theory: contradictions manifested in accepted practice cause members of the group to question, criticize, or reject some aspects of the existing practice (Engestrom, 2010, p. 7). In solving problems in cognitive apprenticeship, “the adequacy of the solution they reach becomes apparent in relation to the role it must play in allowing activity to continue” (J. S. Brown et al., 1989, p. 36).
The second type of feedback typically mentioned in social learning theory is input from others, for example, in the learning of complex skills:
A common problem in learning complex skills, such as golf or swimming, is that performers cannot fully observe their responses, and must therefore rely upon vague kinesthetic cues or verbal reports of onlookers. (Bandura, 1977b, p. 28)
In cognitive apprenticeship, this type of feedback comes through coaching:
Coaching consists of observing students while they carry out a task and offering hints, scaffolding, feedback, modeling, reminders, and new tasks aimed at bringing their performance closer to expert performance….The content of the coaching interaction is immediately related to specific events or problems that arise as the student attempts to accomplish the target task. (A. Collins et al., 1991, p. 14)
Table 8 summarizes the local principles from the theories reviewed that are subsumed by the universal principle of feedback.
Principles of Learning Subsumed by the Universal Principle of Feedback
Theory Group | Local principles
Guidance from teachers necessary to produce good builders
Law of effect
Reward versus punishment
Spread of effect
Presence or absence of the UCS in conjunction with the CS
Response of a care giver
Satisfaction of a need
Feedback as reinforcement usually in the form of punishment or food
Feedback as reinforcement
Schedules of reinforcement
Timing of reinforcement
Law of effect
No feedback, simply arrangement to ensure stimulus and response association
Recognition of failure (in learning skills)
Two dimensions of feedback: affective and cognitive
Linking responses to a particular stimulus population
Evidence of progress: ability to reproduce a list from memory
Evidence of progress: ease of recall
Evidence of progress: ease of relearning
Searching for the stimulus experiments
Affective versus cognitive dimensions of feedback
Failure or success of attempts
Cognitive Information Processing:
Errors as a source of feedback
An accurate sense of both your current state and how far you have to go
Lack of feedback hinders performance
Feedback enables self correction
Explicitly pointing out similarities and differences
Hypothesis generation and testing
Confirmation, correction, and clarification
Knowledge of results
Self-provided feedback via internal logic of meaningfully learned material
Learning cannot take place in the absence of feedback
Prompting replaced by confirmation as learning progresses
Incongruence with functional demand
Limited utility of accumulated information
Overly narrow constraints
Overly broad constraints
Unspecified default values
Recognition of discrepancy
Feedback through analysis
Contradictions to hypotheses
Feedback through interaction with the environment
Evaluating: checking whether the way we have manipulated information is adequate to the task
Expectancies for success and failure are formed from results of past successes or failures
Action manifestations used to predict future action
Outcomes of performance
One’s sense of worth depends heavily on one’s accomplishments
Human beings tend to embrace success no matter how it occurs
Successes resulting from remedial assistance are not always valued as highly as successes resulting from one’s own efforts
People will sometimes reject credit for their successes if they feel they cannot repeat them
Perceptions of ability (even in the absence of solid accomplishments)
Performance accomplishments or mastery experiences
Verbal or social persuasion
Physiological, or somatic and emotional, states
The cognitive interpretation of physiological states based on beliefs of self-efficacy
Self-Determination Theory of Motivation:
Positive feedback enhances intrinsic motivation
Negative feedback and perceived incompetence decrease intrinsic motivation, unless the locus of causality is not perceived to be internal
Informative versus controlling feedback
Controlling feedback leads to less intrinsic motivation
Self-awareness of performance outcomes
Effort-expectancy feedback loop
Performance-expectancy feedback loop
Satisfaction-value feedback loop
Freedom to Learn:
Significant learning is evaluated by the learner
Self-criticism and self-evaluation are basic and evaluation by others is of secondary importance
An Agentic Theory of Self:
Self-realized difficulty or failure
Awareness of connection between consequences and actions
Vague kinesthetic cues and verbal reports [or modeling] of onlookers
Check against reality
Self-evaluation: opportunities for understanding how well or poorly one’s efforts contribute are evident in practice
Contradictions manifest in the accepted practice
Results of analyzing the situation to understand the contradiction
Results of examining the proposed model as a potential solution
Results of reflecting on and evaluating the process by which the solution was found
Natural consequence of actions
Feedback through coaching
Self-monitoring and correction
Articulation and reflection
Note. Meaning a change in behavior.