Home of the National Tactical Invitational

American Tactical Shooting Association

 

To view us online, visit:  http://www.teddytactical.com/

 


Feature Article: 12-2004

 

Training for Situation Awareness:
What? How?

By:  Jack M. Feldman, Ph.D.

 

Note: Born and raised in Chicago, the author received his Ph.D. in social and industrial psychology from the University of Illinois in 1972. He is a professor of psychology at the Georgia Institute of Technology, a Fellow of the American Psychological Association, and a Charter Fellow of the American Psychological Society. His research focuses on processes of human judgment and decision-making, both theoretical and applied. A student of self-defense since 1997, he has made up for lost time by training with a number of exceptional instructors, none of whom bears any responsibility for deficiencies in his performance. He is an active competitor and safety officer in IDPA, a charter member of the Polite Society, and an NTI participant since 2001.

Thanks to Drs. Larry James and Martin Topper for helpful comments on an earlier draft of this article. Responsibility for any errors rests entirely with the author.

 

“It ain’t what you don’t know that gets you in trouble.
It’s what you know that just ain’t so.”
Artemus Ward

 

The Nature of Situation Awareness

“Situation awareness” (SA) is taught, researched, and debated in every field of human activity that involves risk: aviation, combat, medicine, hazardous systems operation, law enforcement—and self-defense (see, e.g., Endsley, 1995; Endsley & Bolstad, 1994; Endsley & Kiris, 1995; Gonzalez, 2004; Marsh, 2000). It has been defined in detail (“…the perception of the elements in [one’s] environment, within a volume of time and space, the comprehension of their meaning, and the projecting of their status in the near future.” Endsley, 1995, p. 36, emphasis added.) It has also been defined simply (“…paying attention to your surroundings…” Gonzalez, 2004). However, to the best of my knowledge, nobody has applied research-based knowledge to the self-defense problems of ordinary citizens. Neither has anyone tried to link more recent research on “intuitive awareness” with SA research and practice in anything but a casual way. The use (or non-use) of intuition, defined as “thoughts and preferences that come to mind quickly and without much reflection” or “gut responses,” is of major interest to law enforcement (National Institute of Justice, 2004), public safety (Klein, 1998), and medicine (King & Appleton, 1997). Discussion of intuitive factors in self-defense, however, has been largely anecdotal (e.g., deBecker, 1997). Specifically absent has been any consideration of how to train or practice “intuition,” as distinct from consciously processed lists of danger signals such as those in discussions of Cooper’s “color codes” (e.g., Givens, undated a & b). While informative, such lists do not tell us how to acquire or use information that may come to us through, and be signaled by, processes that are nonconscious, unintentional, nonverbal, relatively effortless, fast, and that operate in parallel with conscious awareness (see, e.g., Bargh, 1994). While often labeled “instinctive,” these automatic responses are most certainly learned.

For the present, I will adopt Endsley’s (1995) definition, which supports the point that “awareness” is about understanding in the service of effective action. I assume an intimate and dynamic connection between awareness, goals, and action. Though my primary focus is on attention and comprehension, this assumption should be kept in mind (Martin Topper, personal communication). I also make another useful assumption: that the distinction between SA as a conscious, controlled, volitional, effortful process and “intuition” as discussed above is more apparent than real and that in fact both stem from the same sources, operating in complementary ways. This perspective, which both contrasts and unifies “controlled” and “automatic” (intuitive or implicit) processing, is fundamental to many areas of modern psychology (see Bargh, 1994; Feldman Barrett, Tugade, & Engle, 2004; Slovic, Finucane, Peters, & MacGregor, 2002).

Adopting this perspective highlights the idea that awareness need not be conscious, and indeed the capability for consciousness is not a prerequisite either for SA or for effective action. Anyone observing predators and prey (whether, say, zebras and lions or squirrels and housecats) can testify to the high level of awareness any creature must have in order to survive for any length of time. Regardless of sensory adaptation or neurological readiness, learning plays a critical role in its development. Conscious SA may provide detailed information (“There’s a man wearing a jacket standing near my car, and it’s 2 a.m. in Miami on August 10.”) Intuitive SA may provide only a feeling of apprehension, directing one’s conscious attention (see Givens, undated a & b). However, both are based on knowledge, whose structure and accessibility are crucial to its usefulness.

The Sources of Situation Awareness

 There is no such thing as “awareness” in the absence of knowledge. That is, SA depends on a “mental model” (Endsley, 2000) of situations and people, a model which may or may not be fully correct. Implicit responses likewise depend on knowledge; even if that “knowledge” cannot be verbalized, it is no less systematic and no less real. It may have been learned unconsciously, or before one had language with which to express it (see, for instance, Frensch & Rünger, 2003; Katkin, Wiens, & Öhman, 2001), but it functions as knowledge nevertheless. Awareness is awareness of something, and what that “thing” is depends on our knowledge of the world. If our knowledge is objectively incorrect (as in “someone so nice couldn’t be a rapist,” cf. deBecker, 1997), our “awareness” is, too, but it is no less subjectively real.

If SA depends on either explicit or implicit knowledge, it stands to reason that the amount and structure of that knowledge matters—and it does. Expertise in any area consists of a vast amount of specific information, organized and interrelated around general principles. This is what lets the expert marksman automatically adjust the point of aim when shooting up- or downhill, without consciously reviewing the principles governing the bullet’s trajectory, while the novice is trying to remember a rule. It is what lets the chess grandmaster perceive, not analyze, patterns on the chessboard, and quickly project moves and countermoves. The very same processes allow rapid, decisive action in life-or-death situations (see, e.g., Klein, 1998) whether or not one is consciously aware of the source of one’s intuitive feeling of apprehension. In fact, it is not even necessary for emotional responses to be consciously experienced for them to influence judgments and behavior (Winkielman & Berridge, 2004). Sometimes we don’t “know” (consciously) what we know.

It is also necessary to point out, though, that “knowing,” whether conscious or not, whether emotional or verbal, is much more variable and context-dependent than it seems to be. The patterns of association that govern our interpretations of, and emotional responses to, the world can influence us to a greater or lesser degree, depending on “accessibility,” the degree to which a concept is likely to be activated and used. Accessibility, in turn, depends on a number of factors: Expertise, already discussed, involves a great deal of elaborated knowledge and considerable emotional investment. It produces high, and chronic, accessibility of relevant concepts. Ideology and value systems act likewise, with perhaps greater emotional investment. Operative motives like hunger, fear, affection, or achievement, when active, render concepts associated with them more accessible, and if the motive is chronic, the concept’s accessibility is too. Recent use of a concept (for example “danger,” caused by reading a news story about terrorism or robbery) makes it temporarily accessible, and experiencing any emotional state also makes emotionally compatible concepts temporarily accessible.

 Why does this matter? Because if and when one encounters a situation that offers multiple cues as to its meaning and consequences, those that are relevant to (“diagnostic of”) our accessible concepts tend to be noticed more easily, and the situation tends to be interpreted in terms of that concept rather than another, perhaps equally valid, one. In other words, we experience the world (at least in part) in terms of that which we are ready to experience. This process is not deliberate and not open to consciousness; it is controllable only with deliberate effort, and sometimes not even then (for a general review, see Bargh, 1994). It happens with respect to our knowledge and our prejudices, positive or negative, alike (see, for instance, Amodio et al., 2004; Blair, 2002; Levy, Stark, & Squire, 2004). Our “situational awareness,” then, whether implicit or explicit, depends on knowledge, values, current motives, emotional states, arousal, recent experiences, expectations, fatigue and other physical factors, and many other variables.

It is definitely true that strong signals from the environment, cues that stand out sharply from the background, can draw attention and activate motives and knowledge, thereby directing perception. However, one can miss even very prominent and unusual events happening before one’s eyes when active goals lead attention to be fixed elsewhere. Imagine paying close attention to a video of people playing with a basketball and being told later that a gorilla walked among the players—a gorilla you didn’t see. It sounds impossible, but it has happened in more than one experiment. Imagine talking to someone on the street, being momentarily distracted, and then resuming the conversation, not noticing that you’re now talking to a different person. That has happened, too (Mack, 2003; Simons & Chabris, 1999). The first is called “inattentional blindness,” the second, “change blindness.” The good news, though, is that even without awareness of specifics, implicit processes can signal us—if we are sensitive enough to notice them (Rensink, 2004).

Situation Awareness and the Armed Citizen

Having some understanding of the nature and origin of situation awareness, we now turn to understanding its role in self-defense. Conscious SA is studied in contexts like aviation safety and military operations; intuitive or implicit SA is only beginning to be studied in domains such as medicine and law enforcement. There has been no research in the area of self-defense for the ordinary citizen. How, then, are we to evaluate and apply the knowledge we have, let alone acquire new information? We need to start by understanding the differences between the professional’s situation and the layperson’s or citizen’s. Briefly put, the professional’s job requires and encourages attention to a limited part of the environment. The job of police officer, soldier, pilot, firefighter, power plant operator, doctor, nurse, and so forth, exists to take a limited set of actions with respect to a limited set of people and conditions. Any other actions or concerns—listening to a sporting event, arguing with a partner, worrying about the mortgage—are at least officially out of bounds, regardless of how often they happen in real life. The fact that mistakes occur—aircraft land on the wrong runway, soldiers get caught in an ambush, the wrong medicine is given—is evidence that even under the best of circumstances SA can be imperfect.

The ordinary person concerned with self-defense has a job that is easier than the professional’s in some ways and harder in others. It is easier because, except in truly dire circumstances, people are not required to seek out danger, or carry out missions regardless of danger. The police officer, soldier, and firefighter ultimately exist in order to confront and contain danger. The medical professional, though not often at personal risk, exists to intervene in situations that threaten others’ lives or well-being.

The typical armed citizen or layperson, in contrast, has little to do with danger on a daily basis and is rarely, if ever, threatened (except perhaps in traffic). Most can order their lives to minimize their exposure, and the likelihood of their need for awareness of threat is correspondingly less. Furthermore, the layperson’s first option is to avoid rather than face threats. That’s how their job is easier. It is more difficult because, when danger is present, their knowledge is less accessible, their skills are likely to be less practiced, and (at least compared to police, military, and firefighters) their allowable actions are more restricted. Also unlike the professional “on the job,” the layperson’s attention is directed to a range of tasks—getting the groceries, making the sale, writing the article—and these motives and their associated concepts render the knowledge necessary for SA relatively less accessible. Being mindful of one’s surroundings takes additional effort and skill, beyond that required for one’s daily life. For the professional, that is one’s daily life. Furthermore, unlike many professionals (e.g. soldiers and firefighters), the armed citizen is likely to be alone, or at least be the only person with any training, when facing possible danger. In short, the layperson is less likely to need awareness of threat on a day-to-day basis, but when it is necessary, he or she must rely on less accessible knowledge, on less practiced skills, and must create a response from among fewer options.

Some might argue that awareness skills are already in place, at least for most people. After all, don’t we drive in heavy traffic and avoid accidents regularly? Doesn’t this require observation and inference, both conscious and intuitive? Yes, but that is largely irrelevant. First, in traffic the vast majority just want to get to their destinations. They may be careless, unskilled, intoxicated, or reckless, but they’re not after you. Second, regardless of how good a driver you are, the domain of knowledge is different, and we know that expertise doesn’t transfer well (see, for example, Bedard and Chi, 1992). For example, I’ve been riding motorcycles for 41 years. I’ve raced, toured, commuted and cruised, in circumstances ranging from Florida swamps to Chicago rush hours, in all seasons and all weather. When my helmet goes on, so does my “race face,” and I move up and down the color code from yellow to orange to red and back several times a trip. I find myself noticing drivers about to do something potentially dangerous without knowing how I knew, and likewise know when there are likely to be hazards like gravel or wet leaves on the road. Yes, I make mistakes when tired or distracted, though probably fewer than the average person. Nevertheless, when walking around in Atlanta, or on the Georgia Tech campus, I frequently find myself in Condition White despite my best intentions (and the efforts of those who’ve trained me). I’ll be in a hallway, for instance, and someone will pass me from behind, someone I didn’t know was there. Maybe it’s just me—there are individual differences in SA (Endsley & Bolstad, 1994)—but there’s likewise substantial data on the limits of expertise that it makes no sense to ignore.

What to Train?

 SA training, like any other, requires us to establish both general and specific training objectives. Our general objective should be to increase two types of correct actions, and reduce two types of mistakes. We want to increase, first, true positives; that is, to detect danger when it exists. Klein (1998) gives a vivid account of how a firefighter’s intuitive misgivings led him to evacuate a seemingly ordinary house fire just before the floor collapsed. Pinizzotto, Davis, and Miller (2004) provide a similar example, a police officer’s timely identification of an armed suspect during a drug raid. Next, we want to increase true negatives; that is, dismiss a potential source of danger when it is, in fact, harmless. Givens (undated b) discusses returning to condition yellow after checking a potential danger. There are no dramatic examples of true negatives, but they are just as important to accurate SA.

Two types of mistakes require attention because of their huge potential cost. First, the false positive identifies danger where none is present. Ayoob (2000) provides a compelling account of one such mistake, the tragic shooting of Amadou Diallo. Experienced New York City police officers’ training, motives, expectations, and emotional state combined with Diallo’s own actions and the marginal environment to produce a needless death that none intended or imagined could happen.

The second type of mistake, the false negative, is the perception of safety where danger exists. Just as tragic as Diallo’s death, though not as well publicized, is the murder of Captain Robbie Bishop of Carroll County, GA. Captain Bishop, an experienced officer and expert in drug interdiction, was shot to death in his patrol car as he wrote a routine traffic citation (www.copsite.com/lwf/lwf99disjon.html; www.ncea314.com/robbiebishop.asp). Though we can never know what led Captain Bishop to miss the danger signals his murderer gave, we must realize that any of us are capable of the same mistake.

Preventing mistakes like these might seem to require contradictory courses of action: training either slower, more thoughtful responses (to avoid false positives) or faster, more aggressive responses (to avoid false negatives). Either prescription, taken alone, is wrong. Simply put, at a given level of information, any change in response threshold (the “mental trigger” that governs action) to reduce one type of mistake will inevitably increase the other. Any change made to increase the percentage of true positives will also increase the percentage of false positives, and if one acts to increase the rate of true negatives, false negatives will increase as well. Given any level of error or uncertainty in our information, this must be true, simply by the laws of probability.
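This threshold tradeoff is the core claim of signal detection theory, and it can be shown numerically. The sketch below is a hypothetical illustration (not drawn from the article’s sources): danger-present and danger-absent situations each produce an internal “evidence” value, the two distributions overlap, and moving the decision threshold moves hit rates and false-alarm rates together.

```python
from statistics import NormalDist

# Hypothetical sketch: overlapping evidence distributions represent the
# uncertainty the text describes.
danger = NormalDist(mu=1.0, sigma=1.0)  # evidence when danger is present
safe = NormalDist(mu=0.0, sigma=1.0)    # evidence when danger is absent

def rates(threshold):
    """Act ("danger!") whenever the evidence exceeds the threshold."""
    true_positive = 1 - danger.cdf(threshold)   # real danger, correctly flagged
    false_positive = 1 - safe.cdf(threshold)    # harmless, wrongly flagged
    return true_positive, false_positive

for t in (0.0, 0.5, 1.0):
    tp, fp = rates(t)
    print(f"threshold={t:.1f}  hits={tp:.2f}  false alarms={fp:.2f}")
```

Every threshold setting that raises the hit rate also raises the false-alarm rate, and every setting that cuts false alarms also cuts hits; as long as the two distributions overlap, no choice of “mental trigger” escapes the tradeoff.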

There are only two ways to reduce the rate of both types of mistakes while increasing that of both types of correct decisions: to have information that is more accurate and to use the information at hand better. These goals are the general objectives of training. The most efficient way to accomplish them is to find people who are already excellent at gathering and using information, discover what they know, how they know it, how it is organized, and how it is used, and teach those things to others. At the same time, research and further experience can increase our knowledge and the effectiveness of our training. While it may not be possible to make an expert of every trainee, we can certainly raise the average and, as in sports, raise the level of peak performance as well.
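Signal detection theory also illustrates the first remedy. In a hypothetical sketch (again, an assumption-laden illustration, not from the article’s sources), “more accurate information” corresponds to greater separation, d′, between the evidence distributions for danger-present and danger-absent situations; at a fixed, unbiased threshold, increasing d′ improves hits and false alarms simultaneously.

```python
from statistics import NormalDist

# Hypothetical sketch: better diagnostic knowledge = larger d' (separation
# between the evidence distributions), with no change in response bias.
def rates(d_prime, threshold):
    danger = NormalDist(mu=d_prime, sigma=1.0)  # evidence when danger is present
    safe = NormalDist(mu=0.0, sigma=1.0)        # evidence when danger is absent
    hit = 1 - danger.cdf(threshold)             # true positives
    false_alarm = 1 - safe.cdf(threshold)       # false positives
    return hit, false_alarm

# Hold the threshold unbiased (midway between the means) and vary only
# the quality of the information:
for d in (0.5, 1.0, 2.0):
    hit, fa = rates(d, threshold=d / 2)
    print(f"d'={d:.1f}  hits={hit:.2f}  false alarms={fa:.2f}")
```

Unlike a threshold shift, which trades one kind of mistake for the other, better information raises the hit rate and lowers the false-alarm rate at the same time, which is exactly why training aims at knowledge rather than at mere eagerness or caution.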

Specific Objectives

A great many people have provided lists of potential danger signals, and it would be redundant to repeat them here. One thing we don’t know is whether these signals—the coat in warm weather, the stranger who watches you or avoids your eyes, and so forth—are the only useful ones. These are simply the ones that experts can consciously articulate.

We also don’t know for sure what the expert notices about the environment, beyond the important but obvious features: the location of exits, the arrangement of tables in a restaurant, the location of cover and concealment, the position of other patrons in a store, etc. Sometimes, in fact, even these “obvious” features are unnoticed when we are preoccupied. Therefore, the first thing we need to know is how the expert scans the environment and how that information is organized and interpreted. Beyond visuals, we need to know what is heard, felt, smelled, tasted—even if the expert him- or herself can’t really tell us. In short, we need research. Some studies can use simulations, with equipment that tracks eye movements and records scanning patterns of scenes presented on a video monitor. These can be coupled with verbal probes. Such studies are now beginning (Force Science News, 2004). More elaborate studies might employ volunteers wearing glasses containing video cameras, so that areas attracting attention as the person goes about their daily life can be recorded and analyzed. GPS devices can track a person’s movement through environments such as shopping malls. In each case, experts’ and non-experts’ patterns of attention, movement, and reports of observations can be compared. Probably the simplest and cheapest method is what we currently do informally:  interview people. We typically only interview after some incident, but I suggest  that we also interview more and less expert observers during and after routine days, with questions designed to capture not only conscious observations but also feelings and intuitive signals. Most importantly, we should employ multiple methods, since each has strengths that complement another’s drawbacks (see, for instance, Ericsson, 2002).

We should treat the information thus gained as tentative, as hypotheses rather than facts. If, say, we find a particular pattern of scanning or movement to be characteristic of experts, its effectiveness can be tested in training studies and simulations.

Waiting for research to provide all the answers, though, is unnecessary and counterproductive. “The perfect is the enemy of the good.” Our knowledge is not perfect, and it never will be. With our present technology and experience, we can train not only attention to known danger signals, but also the elaborated situational models that support both conscious and implicit awareness as well as action. As we gain knowledge, we can incorporate it into ongoing training.

We can also train observational skills. Although these are not independent of specific knowledge and situational models, each can reinforce the others. Scanning, listening, awareness of change, and especially attention to implicit responses—“feelings”—will add a dimension now missing from most training.

We can also train motivation towards two goals: to attend to one’s environment, and to practice the skills necessary to awareness. While those who choose to arm themselves are already “motivated,” the specific motive to attend to one’s environment must compete with others, even motives as mundane as remembering to pick up a gallon of milk, or to get the car’s oil changed. Likewise, the motive to improve one’s skills at observation, inference, and intuition must compete with other ordinary motives on a moment-to-moment basis, and compete long-term for our limited attention, time, and energy.

We also need to train immediate action skills. In one sense, these are the focus of most of our current training in armed or unarmed combat. But all of these presuppose that we have identified a real danger. What if we are uncertain? It’s apparent that there are real individual differences in the skills that gain us the distance and time to make better judgments; they emerge reliably in simulations and assessments like the ATSA Village scenarios, but if they have been systematized anywhere, I’m not aware of it. Once again, expert reports and careful observation might be valuable.

How to Train

Before considering training techniques, we need to establish measures of performance. Without reliable and valid feedback, effective learning doesn’t happen. Without useful measures of awareness, we are unable to evaluate the effectiveness of training. At present, there are two types of measures: individual knowledge of situations, assessed by direct questioning during or after simulations (Endsley, Sollenberger, & Stein, 2000; Jones & Endsley, 2000; Matthews, Pleban, Endsley, & Strater, 2000) and performance scores on tasks that require situational knowledge (Pritchett, Hausman, & Johnson, 1996). The latter are likewise measured via simulations. Both types are limited. Questioning does not assess the connection of knowledge to action, and is limited to the contents of awareness. If not properly conducted, questioning itself may bias the results (see Ericsson, 2002). Performance measures, unless very carefully designed, do not provide specific knowledge of the timing and content of awareness, though they reflect both implicit and explicit processes. Fortunately, the two methods are complementary. Both can and should be used.

Situational Models and Danger Signals

These need to be discussed together, because signals are only meaningful as parts of a cognitive model. Without an elaborated model, a list of signals is no more meaningful or easy to use than a laundry list. If our goal is to create expert-level models to guide perception and response, we need to do it the same way other kinds of expertise are created: deliberate, guided practice (Ericsson & Charness, 1994; Ericsson, Krampe, & Tesch-Romer, 1993). But at what tasks?

I suggest that we can incorporate the desired skills into a number of tasks. First, the technology of first-person video games can be adapted to present realistic scenarios based on existing and future knowledge. For example, we know that behavior such as voice tone and posture can communicate intention and emotional state (see de Gelder, et al., 2004, for a recent example.) There is no reason why subtle signals of danger or safety, once discovered, can’t be represented in video games as well as they are in movies. They should include active response options, to build connections between SA and multiple options for action. The chess master’s perception of a position automatically calls up sets of effective moves and countermoves, and this automaticity additionally provides the capacity needed to create new options. The novice, meanwhile, is searching memory or “dithering,” trying to make a choice. Our training should aim at producing the master’s kind of skill. These games have the advantage of being usable at home, easily upgraded, adaptable to any skill level and relatively inexpensive. While they lack important elements of realism (physical movement, for instance) they are certainly no worse than other training simulations. They can be programmed to probe for knowledge at random intervals and to provide detailed performance feedback.

We can incorporate realistic awareness training into recreational activities such as IDPA competition. Right now IDPA tests marksmanship, movement, and gun-handling skills, but there is no reason why we can’t build threat identification and avoidance into scenarios. I’ve been impressed, for instance, by the creativity of a number of friends who devised inexpensive moving targets and “pop-out” threat cues. My local Polite Society group has made efforts along these lines, too, and of course, it’s a central theme of the NTI.

More elaborate training facilities offer “shoot-houses” of varying levels of complexity. Every year, entrepreneurs offer visits to Halloween “haunted houses” starting about October 1, and firms exist that will set them up in any warehouse or other space. It seems to me an easy step to combine these creations, using airsoft training weapons if the use of live-fire or simunitions weapons is not feasible. While obviously too elaborate and costly for everyday use, they could be employed to teach both awareness and response skills, with immediate feedback.

As digital video recording becomes less expensive, this technology can also be incorporated into training. Having a visual reference for feedback and review (e.g. “See how you walked past that doorway?”)  could be very helpful in correcting mistakes and in planning more effective actions.

Mental rehearsal is another valuable practice routine. Widely used in sports training and in a variety of therapies, (see, e.g., Dunn, 2001; Swets & Bjork, 1990), visualization and mental rehearsal skills can be easily learned and practiced almost anywhere. Combined with video and text materials, and guided by formal instruction, visualization and rehearsal can help integrate and elaborate one’s mental models of situations, habits of observation, and patterns of response. A technique suggested by several trainers is to read crime reports in the local newspaper and visualize one’s response to the situation. We can easily expand this to visualizing and rehearsing scanning patterns and behavioral signals that trigger effective action.

Feedback is necessary for practice to build skill. I suggest applying awareness skills consciously, as we go through our daily routines. We can test ourselves by recording, for example, how many times per day we have to suddenly stop because someone we didn’t notice came out of a doorway or around a corner, or at lunch by trying to remember the location of exits in our restaurant.

Motivation for Awareness

It might seem silly to say that one needs to learn the motivation to be aware of one’s surroundings, especially to NTI participants. We know, however, that motives must be active to guide perception and action, and that motives compete for our limited attentional capacity (see, e.g., Bargh & Gollwitzer, 1994; Feldman Barrett, Tugade, & Engle, 2004). In order to influence our awareness reliably, then, the motive to be aware needs chronic activation.

Motives arise because classes of actions are consistently associated with pleasurable outcomes, or the avoidance of painful ones. Effective soldiers, firefighters, and police officers maintain awareness for two simple reasons: they may die if they don’t, and their partners, teammates and buddies both support and depend on them.

Supporters, though, rarely surround armed citizens—in fact, we’re likely to be dismissed as “paranoid”—and the presence of danger is far less frequent and obtrusive. That means that, most of the time, each person has to reward him- or herself. We can set up self-reward schedules based on our self-evaluated performance and alertness, as discussed above. It may be a feeling of accomplishment we allow ourselves to have, an extra helping of dessert, a cigar after dinner, or $5 towards something we want to buy—the trick is to develop a consistent habit of thought and action around awareness.

We can build positive reinforcement into our group practices and competitions as well. People are social creatures, and receiving approval and status for an activity is a powerful incentive as well as a way to make the activity itself rewarding. When awareness tests are included in competition scenarios, and we create a social norm of mutual encouragement and reinforcement, we’ve taken an important step in creating a chronically active motive.

Motivation to Train

Ericsson, et al. (1993) and Ericsson and Charness (1994) find that many years of dedicated practice are necessary to achieve world-class expertise in any field. Furthermore, they note that most people do not deliberately practice after attaining minimal skill at some activity; they play for fun, not for keeps. How do experts discipline themselves to attain peak performance? Is this level of dedication necessary to our goals?

Fortunately, the answer to the second question is “no.” What’s necessary is to be better, and to seek continual improvement. While it’s true that in life-threatening situations there is no such thing as “good enough,” it’s also true that we all have other areas of our lives that are as or more important on a daily basis. The crucial goal is to make improving awareness an integral part of daily activities, not something that unduly interferes with them.

If training is a source of frustration and anxiety, it’s not going to be done, and it will undermine awareness motives as well. The trick to maintaining “motivation control” and “emotion control” (Kanfer & Ackerman, 1995) lies in knowing how to set goals, what goals to set, and how to react to them.

We know that setting difficult, specific goals improves performance on well-learned tasks but inhibits learning at early stages. Self-focused attention and negative emotion seem to be the culprits in the latter. That suggests avoiding specific goals early in training, instead adopting a “mastery” orientation—that is, focusing on improvement, regardless of the rate. This needs to be combined with self-reinforcement for any improvement, however small, and periods of reflection on task strategies. That is, regard feedback as information rather than as evaluation, and use it to explore various means of improving performance. As skill builds, specific goals can be adopted, keeping the “mastery” approach. The logic is that there is no pre-set upper limit to performance, no “good enough” point, but that improvement is its own reward.

Directing attention to the task rather than the self is only half the story, though. A learning process necessarily produces mistakes, and for at least some people mistakes create negative emotions that can not only interfere with learning but also lead to withdrawal. Some people become anxious at the mere thought of any activity at which they may fail, with similar results. Teaching emotion management skills can increase performance and allow the activity itself to become enjoyable. A variety of techniques, such as controlled relaxation combined with visualization, can short-circuit anxiety. “Positive self-talk” is a way of making emotionally positive ideas and concepts accessible in stressful situations. These, combined with rehearsal and visualization of skills, can enhance skill and motivation simultaneously.

Conclusion

This paper has been not so much about answers as about questions: How should we regard “situation awareness”? How is our knowledge of the world organized, and how might we use that organization to our advantage? What do experts know that the rest of us don’t? How can we capture that knowledge, and transfer it efficiently? The theories, data, and methods discussed here represent (in my opinion) our best current answers to those questions, but if science teaches us anything, it’s that the questions count more than the answers, and that we make progress by learning to ask different questions. Experience teaches us that some of the most productive questions come from observations of the world, especially observations of the solutions people find to the daily problems they face. I hope that this paper stimulates people to explore, ponder, discuss, and evaluate in practice the ideas summarized here, and that the process proves to be of benefit even if some or all of the ideas are wrong. This will take time and effort. That shouldn’t be discouraging. As engineers say about any kind of project or product:

You can have it good.

You can have it fast.

You can have it cheap.

Pick two.


References

Amodio, D.M., Harmon-Jones, E., Devine, P.G., Curtin, J.J., Hartley, S.L., & Covert, A.E. (2004). Neural signals for the detection of unintentional race bias. Psychological Science, 15, 88-93.

Ayoob, M. (2000). Hallway firefight: The Amadou Diallo shooting. American Handgunner, November. www.findarticles.com

Bargh, J.A. (1994). The four horsemen of automaticity: Awareness, efficiency, intention and control in social cognition. In R.S. Wyer, Jr. & T.K. Srull (Eds.), Handbook of social cognition (2nd ed., pp. 1-40). Hillsdale, NJ: Erlbaum.

Bargh, J.A., & Gollwitzer, P.M. (1994). Environmental control over goal-directed action: Automatic and strategic contingencies between situations and behavior. In W.D. Spaulding (Ed.) Nebraska Symposium on Motivation: Vol. 41, Integrative views of motivation, cognition, and emotion (pp. 71-124). Lincoln: University of Nebraska Press.

Bedard, J. & Chi, M.T.H. (1992). Expertise. Current Directions in Psychological Science, 1, 135-139.

Blair, I.V. (2002). The malleability of automatic stereotypes and prejudice. Personality and Social Psychology Review, 6, 242-261.

deBecker, G. (1997). The gift of fear. New York, NY: Dell.

Dunn, J.R. (2001). Psychology in military special operations: An interview with John C. Chin, Ph.D. Psychology Online Journal, 2, www.psychjournal.com/interviews

Endsley, M.R. (2000). Situation models: An avenue to the modelling of mental models. Proceedings of the 44th Annual Meeting of the Human Factors and Ergonomics Society.

Endsley, M.R. & Kiris, E.O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37, 381-394.

Endsley, M.R. (1995). Measurement of situation awareness in dynamic systems. Human Factors, 37, 65-84.

Endsley, M.R., & Bolstad, C.A. (1994). Individual differences in pilot situation awareness. International Journal of Aviation Psychology, 4, 241-264.

Endsley, M.R., Sollenberger, R., & Stein, E. (2000). Situation awareness: A comparison of measures. Proceedings of the Human Performance, Situation Awareness, and Automation: User Centered Design for the New Millennium Conference. October.

Ericsson, K.A. & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725-747.

Ericsson, K.A. (2002). Towards a procedure for eliciting verbal expression of non-verbal experience without reactivity: Interpreting the verbal overshadowing effect within the theoretical framework for protocol analysis. Applied Cognitive Psychology, 16, 981-987.

Feldman Barrett, L., Tugade, M.M., & Engle, R.W. (2004). Individual differences in working-memory capacity and dual-process theories of the mind. Psychological Bulletin, 130, 553-573.

Force Science News (2004). Scan patterns: Next breakthrough in survival training? Transmission #4, Force Science Research Center, Minnesota State University, Mankato, MN. www.forcescience.com

Frensch, P.A. & Runger, D. (2003). Implicit learning. Current Directions in Psychological Science, 12, 13-18.

Givens, T. (undated a). States of awareness, the Cooper Color Codes. www.rangemaster.com.

Givens, T. (undated b). Intelligence gathering for personal safety. www.rangemaster.com.

Gonzalez, J. (2004). Situational awareness. SWAT, January, 18-20.

Jones, D.G., & Endsley, M.R. (2000). Can real-time probes provide a valid measure of situation awareness? Proceedings of the Human Performance, Situation Awareness, and Automation: User Centered Design for the New Millennium Conference. October.

Kanfer, R., & Ackerman, P.L. (1995). A self-regulatory skills approach to reducing cognitive interference. In I.E. Sarason, G.R. Pierce & B.R. Sarason (Eds.). Cognitive interference: Theories, methods, and findings (pp. 153-171). Hillsdale, N.J.: Erlbaum.

Katkin, E.S., Wiens, S., & Ohman, A. (2001). Nonconscious fear conditioning, visceral perception, and the development of gut feelings. Psychological Science, 12, 366-370.

King, L., & Appleton, J.V. (1997). Intuition: A critical review of the research and rhetoric. Journal of Advanced Nursing, 26, 194-202.

Klein, G. (1998). Sources of power: How people make decisions. Cambridge, MA: MIT Press.

Levy, D.A., Starck, C.E.L., & Squire, L.R. (2004). Intact conceptual priming in the absence of declarative memory. Psychological Science, 15, 680-686.

Mack, A. (2003). Inattentional blindness: Looking without seeing. Current Directions in Psychological Science, 12, 180-184.

Marsh, H.S. (2000). Beyond situation awareness: The battlespace of the future. Draft report, Office of Naval Research, 20 March 2000.

Matthews, M.D., Pleban, R.J., Endsley, M.R., & Strater, L.D. (2000). Measures of infantry situation awareness in a virtual MOUT environment. Proceedings of the Human Performance, Situation Awareness, and Automation: User Centered Design for the New Millennium Conference. October.

National Institute of Justice (2004). Nature and influence of intuition in law enforcement: Theory and practice. Introduction to conference notes, June 22-23, 2004, Marymount University, Arlington, VA.

Pinizzotto, A.J., Davis, E.F., & Miller, C.E. III (2004). Intuitive policing: Emotional/rational decision making in law enforcement. www.blackwaterusa.com/btw2004/articles/0322intuit.html

Pritchett, A.R., Hansman, R.J., & Johnson, E.N. (1996). Use of testable responses for performance-based measurement of situation awareness. Presented at the International Conference on Experimental Analysis and Measurement of Situational Awareness. November.

Rensink, R.A. (2004). Visual sensing without seeing. Psychological Science, 15, 27-32.

Simons, D.J., & Chabris, C.F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059-1074.

Slovic, P., Finucane, M.L., Peters, E., & MacGregor, D.G. (2002). Risk as analysis and risk as feelings. Paper presented at the Annual Meeting of the Society for Risk Analysis, New Orleans, LA, December.

Swets, J.A. & Bjork, R.A. (1990). Enhancing human performance: An evaluation of “new age” techniques considered by the U.S. Army. Psychological Science, 1, 85-96.

Winkielman, P., & Berridge, K.C. (2004). Unconscious emotion. Current Directions in Psychological Science, 13, 120-123.


[1] There are some statements in Slovic et al. with which readers will disagree violently. I do, too. However, these do not affect the validity of their arguments.