
8
Group Processes in Intelligence Analysis

Reid Hastie

WHAT DO INTELLIGENCE TEAMS DO?

“The mission of intelligence analysis is to evaluate, integrate, and interpret information in order to provide warning, reduce uncertainty, and identify opportunities,” Fingar writes in Chapter 1 of this volume. Intelligence analysis encompasses a vast variety of intellectual tasks and aims to achieve these objectives. Most analyses are performed in a social context with analysts interacting face to face or electronically in formal or informal teams to create estimates, answer questions, and solve problems that serve the interests of diplomatic, political, military, and law enforcement customers (see this volume’s Fingar, Chapter 1, and Skinner, Chapter 5).

To idealize some role assignments, analysts occupy an organizational niche located between collectors and policy makers. Collectors are responsible for acquiring and initially processing “raw” intelligence information, described by a veritable dictionary of acronyms (e.g., HUMINT, SIGINT, MASINT). One reason for the separation of roles between collector and analyst is that collection often involves highly specialized technical skills (e.g., monitoring a telecommunications channel or maintaining an electronic system that transmits satellite images). Another reason is to protect the original sources from exposure in case, for example, the product of an analysis is acquired by an adversary. On the other side of the chain, analysts and policy makers are separated to protect the analyst’s objectivity and single-minded focus on “what is true,” without considerations of what is desirable or politically expedient. This unusual, insulated role is



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





central to intelligence analysis, and there are no other close organizational analogues (Zegart, this volume, Chapter 13). Of course, these distinctions are not quite as sharp in practice as they sound from this description because analysts are often involved in the collection process and work in a close relationship with policy makers in order to provide the most relevant information and to communicate effectively.

The typical product of an analysis is a written document that describes the conditions in a politically significant situation, sometimes with evaluations of more than one interpretation of the true situation. The best known products of American intelligence analysis, the President’s Daily Brief and National Intelligence Estimates, often look like news reports. However, they are likely to be more forward looking and include predictions of significant events, dissenting views, and confidence assessments (customarily expressed on a verbal scale indexed by terms such as “remote, unlikely, even chance, probably/likely, and almost certainly”). Some estimates provide answers to specific questions (e.g., How many armed Taliban insurgents are present today in Kabul?), and many aim to provide a more comprehensive understanding of a situation (e.g., How is Israel likely to respond to Iran’s increased nuclear weapons capacity?).

Analytic activities vary along many dimensions.
Some involve immediate, in-person interactions among analysts, while others involve indirect, usually electronically mediated, interactions among individuals in remote geographical locations; some involve one-shot, time-intensive interactions, while others involve sustained, long-term interactions; some involve integrating information from several sources into a summary description, while others involve complex inferences about events that might occur under alternate uncertain scenarios; and still others require the generation of innovative responses to diplomatic, economic, or political problems. This heterogeneity creates a challenge for someone who attempts to give prescriptive advice to improve the many different processes. I address that challenge by focusing on one idealized analysis task and then generalizing from that example to other analysis tasks. Distinguishing among three idealized, truth-seeking analytic tasks is useful, with the following scenarios provided as examples:

1. Judgment and estimation tasks involve integrating information from several sources into a unitary quantitative or qualitative estimate or descriptive report of a specific or general situation: Provide a best estimate of the date when Iran will have the capacity to launch a nuclear warhead missile strike on Israel (if its development of nuclear capacities continues at the current rate);

2. Detection tasks involve the detection of a signal that a change has occurred, that there is a pattern of interrelated events occurring, or that “something funny” is happening: Has the opium production rate changed in Faizabad during the past few months? Has Kim Jong-Il’s control of the government of North Korea changed at all during the past week?

3. Complex problem-solving tasks require generating and applying novel solutions in a specified context: Will the current regime in Pakistan stay in power for the next 12 months? What is the likeliest scenario that would result if the current regime fails?

WHAT IS DISTINCTIVE ABOUT INTELLIGENCE ANALYSIS?

Of course, these dimensions also describe aspects of many other important team performance situations in business, science, and government settings. But several conditions converge in intelligence analysis to create a distinctive, if not unique, situation:

• First, as noted above, analysts have a special, indirect connection to many sources of their intelligence—the front line of collectors acquires information, then passes it on to the analysts. This means there are special challenges in evaluating the validity and credibility of information because the analyst is not directly involved in the initial acquisition (see Schum, 1987, for a discussion of the special problems of cascaded and hierarchical inference that arise in intelligence and forensic contexts).

• Second, more than in any other domain, denial and deception must be considered when evaluating the credibility and validity of information. In many analytic situations, adversaries are present and trying to undermine and defeat the analysis.

• Third, many outcomes of intelligence analysis involve low-probability, high-impact consequences that can mean life or death for thousands of people.
Furthermore, analysts must anticipate and infer what policy makers will want to know and even how they are likely to weight multifaceted outcomes, including the inevitable trade-offs between false alarms (e.g., weapons of mass destruction) and misses (e.g., 9/11) that are inherent in every policy decision.

• Fourth, the organizational relationship between the analysts and their customers can include the temptation to bias answers to fit what the customer wants to hear.

• Fifth, as in any complex collection of interdependent organizations, some of these activities occur in the intelligence community’s fragmented, “siloed” organizational terrain with 16 loosely connected agencies attempting to cooperate while they simultaneously pursue sometimes conflicting and nonaligned objectives.

• Finally, feedback is especially rare and unreliable. For many important analytic estimates, outcomes remain unknown for a long time or cannot ever be known. Furthermore, often the U.S. government itself or another party will take an action that changes the outcomes that were the subject of the original analysis, making learning from feedback even more difficult.

The difficulty of learning from feedback is compounded by the intense scrutiny and criticisms in hindsight of every visible intelligence failure, while successes are rarely attributed to the analysts and, under many conditions, are unobserved (see Bruce, 2008, for a catalog of publicized failures, but see Jervis, 2006, for a defense of achievements of the intelligence community). There will always be room for improvement, but there is ample evidence for the high levels of professionalism and dedication in intelligence analysis (cf., Dawes, 1993; Fischhoff, 1975; Gladwell, 2003). One essential means to improving intelligence analysis is to develop systematic methods to evaluate the validity and accuracy of estimates (cf., Tetlock, 2006; Arkes and Kajdasz, this volume, Chapter 7; McClelland, this volume, Chapter 4) and then to apply these criteria to identify and reward best practices.

In this paper, I will focus on short-range, tactical intelligence estimates in the international domain, made by small teams of three to seven analysts working together face to face or through electronic communication. I will restrict the discussion to tasks for which the goal is to achieve the highest possible levels of accuracy in describing or forecasting a state of the external world.
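The call to evaluate the accuracy of estimates systematically can be made concrete with a proper scoring rule. The sketch below is a generic illustration, not a method the chapter prescribes; it uses the Brier score, a standard accuracy measure for probabilistic forecasts, with invented forecast numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better: 0.0 is perfect; 0.25 is what an uninformative
    forecaster who always says 50% earns on any set of events.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who hedges every question at "even chance"...
hedger = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1])

# ...versus one who commits to discriminating probabilities
# (hypothetical forecasts, mostly on the right side of the outcomes).
discriminator = brier_score([0.9, 0.2, 0.8, 0.7], [1, 0, 1, 1])

print(f"hedger: {hedger:.3f}, discriminator: {discriminator:.3f}")
```

Scoring rules like this reward both calibration and discrimination, which is what makes it possible to "identify and reward best practices" rather than judge analysts only on visible failures.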
Our knowledge of how teams perform such tasks comes from all of the social sciences (sociology, social psychology, economics, political science, and anthropology), as well as from composite fields of study such as management science and cognitive science, although social psychology is the primary source for the current conclusions about truth-seeking group judgments.

FOUR ESSENTIAL CONDITIONS FOR EFFECTIVE TEAMWORK

In the most general terms, four basic conditions must be met if a team is to perform effectively in a larger organizational context (Hackman, 2002; Wageman, 2001). First, the team must have an identity as a distinct social unit in the larger organization (Tinsley, this volume, Chapter 9). It must be recognized as autonomous and be given a well-defined, organizationally significant set of objectives. It must be given the essential resources to achieve those objectives, including effective channels of communication with other units in the larger organization, especially the agent outside the team who oversees the team’s activities. Under some conditions, the team should have a distinctive identity and even a “subculture” appropriate for its task within the larger organization (Tinsley, this volume, Chapter 9). In general terms, the more distributed and independent the team’s later working procedures will be, the more important it is to establish a distinctive identity at the beginning (Moreland and Levine, 1982).

Second, the team must have a compelling direction, with clear, challenging, and consequential objectives. Its members should be autonomous, and individual activities should not be micromanaged by team leaders or organizational authorities outside of the team. Each member’s personal goals must, to some extent, be subordinate to and aligned with the team’s organizationally defined objectives. This means that both tangible incentives (e.g., financial or status rewards) and intrinsic incentives (e.g., social recognition, positive internal feelings) should be conditional on achievements relevant to the team’s goals.

Third, the team must have an “enabling design” that provides the proper individual composition (skills, diversity, size), specialized role assignments if appropriate to the larger task, and plans and technological support for intermember communication, coordination, and a “group memory” of task-relevant information (Fiore et al., 2003).

Finally, the team must have a self-conscious, meta-level perspective that is constantly monitoring and correcting member motivations; refining operating procedures; and providing short-term feedback and eventual evaluation to allow members and the team to learn from experience performing the task.

BREAKING THE OVERARCHING ANALYTIC TASK INTO SUBTASKS

Each of these four conditions is essential for teams performing any task, but the specific manner in which each is accomplished depends on the task type. Each of the analytic tasks—integration, detection, and problem solving—can be described in terms of a stylized process model that breaks the larger task down into its component subtasks.
This conceptual breakdown describes the task as it might be performed by an individual, a team, or even by an automated software system. What is distinctive about the performance of a team is the collection of special motivation and coordination problems that arise when independent agents collaborate on the task. Two closely related tensions describe the essential dilemma for effective teamwork: (1) individualistic-selfish motives versus collective-organizational motives; and (2) promotion of diversity and independence versus promotion of consensus and interdependence. Good team performance depends on addressing these tensions flexibly and effectively. The first requires the design of explicit incentives that will motivate individual members to work for the good of the team and the organization in which it is embedded. Implicit incentives, often attributed to the team and organizational “culture,” are also important. The second requires careful oversight by the team’s leader (or external manager) so that when certain subtasks are performed, independence is promoted; in other subtasks, consensus-conformity is promoted, appropriate to the local objectives of each subtask. (This motivational problem is what economists call the principal-agent problem. There is a large literature on the subtle solutions to the problem, including discussions of conditions that seem to have no known theoretical solution; see Baron and Kreps, 1999, and Chen et al., 2009, for discussions of methods of motivating individuals in teams.)

Judgment and simple estimation tasks can be described as an ideal analytic process in terms of five component activities: Subtask 1, define the problem; Subtask 2, acquire relevant information; Subtask 3, terminate the information acquisition process; Subtask 4, integrate the information into a summary statement (estimate of a state of the past, present, or future world; descriptive summary report); and Subtask 5, generate an appropriate response (see Hinsz et al., 1997, for a similar discussion of “groups as information processors”; see Lee and Cummins, 2004, for a similar task analysis). (In the case of intelligence analysis, the “response” is nearly always the provision of information to a policy maker or a military actor, who decides on an appropriate action based on the intelligence.) The primary advantages of teams over individuals in performing such tasks are the teams’ capacity for acquiring and pooling more information than any individual can contribute, and the teams’ ability to “damp errors,” as different views counterbalance one another, yielding a central consensus belief in discussion when integrating information and opinions from several sources.
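The “damp errors” advantage is at bottom a statistical claim: the average of several independent, unbiased estimates has a smaller expected error than any single estimate. A minimal simulation (with invented numbers for the true value, the noise level, and the team size) illustrates the effect:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # the quantity being estimated (hypothetical)
NOISE_SD = 20.0      # spread of individual analyst errors (hypothetical)
TEAM_SIZE = 5
TRIALS = 2000

individual_errors, team_errors = [], []
for _ in range(TRIALS):
    # Each analyst forms an independent, unbiased but noisy estimate.
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(TEAM_SIZE)]
    # Error of a single analyst versus error of the team's mean estimate.
    individual_errors.append(abs(estimates[0] - TRUE_VALUE))
    team_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))

print(f"typical solo error:        {statistics.mean(individual_errors):.1f}")
print(f"typical team-of-{TEAM_SIZE} error:   {statistics.mean(team_errors):.1f}")
# With truly independent errors, the team's typical error shrinks by
# roughly a factor of 1/sqrt(TEAM_SIZE).
```

The caveat matters: the gain depends on the errors being independent, which is exactly why the chapter stresses promoting diversity early in the process. Shared biases do not average out.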
The potential advantages of performing tasks requiring information integration and estimation with a team are derived from the greater store of information (including analytic skills) available to a team of several people and from the capacity of the group to leverage diverse perspectives to damp errors and converge on a sensible central value or solution. This implies that in the early stages of the team process, care must be taken to promote diversity in information acquisition; in the middle stages, coordinated information pooling; and in the later stages, convergence on a unitary “solution” or consensus response. Let’s look at the requirements for effective team performance of each component subtask of the larger judgment process (for complementary analyses, also see Heuer, 2007; Kozlowski and Ilgen, 2006; and Straus et al., 2008).

Team Composition

Several affirmative suggestions can be made about how to design effective teams before they begin work on their analytic tasks (see Hackman, 2002, for similar advice). First, there are organizational issues: The team needs to be embedded appropriately in the larger organization in which it functions. This means effective lines of communication must define the team’s operational goals in terms of the organizational objectives. In other words, the team needs to know what its task, goals, and performance criteria are in terms of what would help the organization. The team also needs resources from the larger organization and needs to be insulated from interference from the larger organization (e.g., to prevent micromanagement or undue influence from the organizational manager to whom the team reports).

Teams are usually composed of members from a larger organization or individuals recruited by that organization to support the team’s performance (Kozlowski, this volume, Chapter 12). Team composition is obviously significant, although it is difficult to specify useful selection criteria that are general across tasks. Three conditions seem essential: (1) task-relevant diversity of knowledge and skills; (2) a capacity for full, open, and truthful exchange (i.e., communication skills); and (3) a commitment to the team’s goal (the capacity or willingness to align one’s own interests with the team goal to produce an accurate estimate). Composition depends on the task contents, so formulating more specific prescriptions for good practice is difficult. Two generalities emerge from the behavioral literature: In practice, teams are usually too large (Hackman and Vidmar, 1970) and not diverse enough (Page, 2007).

Of course, there is a paradox posed by the fact that smaller teams (e.g., an implication of much of the behavioral literature is that a typical analysis team should be composed of about five members) must be less diverse than larger teams.
Part of the paradox arises from the fact that larger teams have more resources of all types than smaller teams, but larger teams also suffer from more “process losses” than smaller teams (Steiner, 1972). Process losses include the variety of conditions that impede group productivity in any goal-directed task: difficulties in communication and coordination; within-group social conflicts; lower cohesion; and confusions about group identity, to name the most obvious problems.

Note that the term “diversity” refers to task-relevant diversity in terms of knowledge, skills, perspectives, and opinions that promote variety in the types of task-relevant information and solutions that contribute to the team’s performance. This kind of task-relevant diversity is likely to be correlated with differences in gender, cultural background, or personality, but not necessarily so. Page (2007) has provided the most comprehensive research-based argument for the advantages of task-relevant diversity over raw expertise in team problem solving. Some of his proofs take the form of abstract theoretical analyses of the capacities for multiple idealized interacting agents to solve mathematical problems. These results are abstract, but support strong claims for the advantages of task-relevant diversity. He also reviews sociological analyses of diverse versus homogeneous groups in behavioral experiments and natural settings, and again finds support for the value of diversity. Mannix and Neale (2005) have also reviewed the behavioral literature and reach pessimistic conclusions with regard to the effects of increased social diversity (race, gender, age) on team performance. Like Page, they note the potential value of task-relevant diversity (knowledge, skills, social-network resources), especially in performing tasks that involve information seeking, information evaluation, and creative thinking. But they also conclude that social diversity inevitably increases process losses through interpersonal conflict, communication problems, and lowered cohesion. Another aspect of this trade-off was pointed out by Calvert (1985) in a theoretical analysis of how a rational decision maker should weight biased information. One of the counterintuitive implications of his rational analysis was that, under many conditions, teammates who are biased to agree with you are more reliable sources of divergent information than those who are biased to disagree with you.

On the basis of current scientific results, it is impossible to spell out specific prescriptions for recruiting members with productively diverse characteristics without knowing something about the details of the team’s task and the context in which it performs. Nonetheless, a good practice is always to oversample for diversity when a team is composed because the common tendency is to err in the direction of uniformity. At a minimum, a priori differences of opinion on the correct solution improve the performance of most problem-solving groups (Nemeth, 1986; Schulz-Hardt et al., 2006; Winquist and Larson, 1998).
Several behavioral studies demonstrate the importance of member diversity, but also of the necessity that members know the specialties of other members, so that appropriate role assignments and coordination are supported (Austin, 2003; Moreland et al., 1996; Stasser et al., 1995). Hackman and colleagues (2008) provide a thoughtful discussion of team composition in intelligence analysis that promotes the design of teams that balance members’ diverse cognitive skills (see also Pashler et al., 2008, for a discussion of the concept of cognitive styles). They also report a behavioral study that demonstrates the importance of aligning individual differences in skill sets (visual versus verbal thinking styles) with matching role assignments (navigation versus acquisition of targets) to maximize the contribution of member diversity to team performance.

To repeat, subtle trade-offs are always present between independence and conformity with the ultimate impact on team productivity (Mannix and Neale, 2005). With too much independence and diversity, team performance suffers because of loss of identification, decreased motivation, and simple coordination problems. Too much dependence and uniformity undermine the team’s ability to perform components of the overall task that require divergent thinking. This balancing problem has no simple “fixes.” This problem, of course, highlights the need for more rigorous research on analytic teamwork, based on objective measures of team performance.

Subtask 1: Defining the Problem

When the team initiates its performance on an analytic task, an essential step is to thoughtfully execute each of the subtasks of the overarching task. Completion of each subtask, in some manner, is necessary to produce a good solution, but many teams perform component subtasks in a perfunctory manner. Many teams fail to verify that every member understands and agrees on the target of the estimate, including criteria for a successful solution and a sense of cost–benefit trade-offs. The decision to terminate information search is next most likely to be performed in a careless manner; the most common postmortem evaluation of a poor team judgment is that information was not acquired or pooled effectively.

The first subtask of team performance, defining the problem, requires a mixture of independence and consensus (cf., Eisenhardt et al., 1997). During this stage, each team member grasps the goal state or target of the judgment and other relevant criteria for a successful or accurate response. This discussion should include consideration of the costs and benefits associated with potential errors (over- and underestimates or false alarms and misses). These criteria need to be shared with other team members; as the old saying goes, the team will fail if some members are headed for Los Angeles when the primary destination is San Francisco. Each member also assesses “the givens,” the information that is in hand or needs to be acquired to make a good estimate. At this point, independence and member diversity are probably best in the sources of information or evidence that will be used.
The notion here is that “triangulation” based on independent sources of information (given a shared judgment objective) will promote innovation, error damping, and robustness in the final estimate.

Subtask 2: Information Acquisition

The second subtask, information acquisition, is the one for which independence and diversity of perspectives count the most. Team judgments have two major advantages (compared to individual judgments): Teams have more information than any one member, and teams can damp errors in individual judgments and converge on an accurate “central tendency” (Sunstein, 2006, and Surowiecki, 2004, provide popularized accounts of these principles). Several devices can be used to achieve independence and diversity: recruiting a diverse set of perspectives and expertise sets when team members are selected; working anonymously and in dispersed settings during the information acquisition (and pooling) subtask; and cycling back and forth between searching for and pooling information, so that information from other members can stimulate new directions in search for each member.

Information acquisition (Subtask 2) and information pooling (Subtask 4) are probably most effectively promoted by careful design of the team’s composition—by having a good mix of members with diverse information, backgrounds, and skill sets. At least two negative conditions, discussed below, need to be avoided (also see the discussion of Groupthink, below).

Association Blocking

If members interact with one another when they seek or pool information, association blocking can occur. Association blocking refers to a condition that occurs when individual team members get “locked into” a whirlpool of similar associations, and individual capacities for divergent thinking are impaired as they naturally respond associatively to one another’s communications. For example, when a first interpretation concludes that certain aluminum tubing is likely to be used for uranium enrichment, then the mind is primed automatically to retrieve and interpret additional information as relevant to nuclear weapons, rather than, for example, ordinary military rockets. The phenomenon is most apparent when people try to generate unrelated, novel solutions to an innovation problem while interacting in person (Diehl and Stroebe, 1987; Nijstad et al., 2003; Paulus and Yang, 2000).

Several interaction process solutions to association blocking involve isolating members and promoting independent thinking. One method is to cycle between independent individual analysis and social interaction, and to have individuals acquire information separately; or in the case of pooling, each individual should pool information separately.
The best practice is to start independently, share ideas, then return to independent search or generation, then back to social interaction. Several “unblocking” techniques, borrowed from group brainstorming practices, are available to promote novel search and generation by introducing haphazard or new directions (Kelley and Littman, 2001). Another method is to vary the composition of the group by adding new members (Choi and Thompson, 2005).

Information Pooling and the Common-Knowledge Effect

Beyond association blocking there is also a tendency to focus discussion on shared information and its implications, while neglecting to pool unshared information. This phenomenon has been observed most dramatically in “hidden profile” tasks (Larson et al., 1994; Stasser and Titus, 1985, 2003) and was dubbed the “Common Knowledge Effect” by Gigone and Hastie (1993, 1997). The Hidden Profile method was invented by Stasser and Titus and provides a powerful test bed to evaluate team performance on elementary inference and judgment tasks. The basic method involves designing a judgment task that provides an opportunity for high levels of achievement by individuals and groups who have been provided with full information relevant to the judgment. However, to create hidden profiles, the researcher distributes the information to members of the to-be-tested team in a way that no member has sufficient information to perform at a high level in isolation, although the team has all of the relevant information—albeit dispersed in a manner that provides a stiff challenge to the information-pooling capacity of the team. Of course, cases of widely distributed and vastly unshared information are the norm in intelligence analysis. Adding to the difficulty is the fact that often analysts with different regional or technical specialties must communicate with one another to converge on the truth. For example, regional experts, satellite image technicians, and nuclear scientists were all involved in the effort to determine if Saddam Hussein was developing nuclear weapons.

In its most diabolical form, the Hidden Profile method capitalizes on two fundamental human weaknesses to create a nearly insurmountable challenge. First, in the extreme form of the task, each member has an incorrect impression of the correct solution. The full set of information is distributed so that the individual members’ subsets each favor a nonoptimal solution—in other words, a reasonable person begins the task with the wrong answer in mind.
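The construction just described can be sketched with a toy example (the cues and the simple cue-counting judge below are invented for illustration, not taken from Stasser and Titus’s materials). The pooled evidence favors option A, yet every member’s private subset favors option B:

```python
# Four diagnostic cues supporting option A, each known to only ONE member...
unshared_pro_a = ["a1", "a2", "a3", "a4"]
# ...and two cues supporting option B that EVERY member knows.
shared_pro_b = ["b1", "b2"]

# Each of the four members privately sees one pro-A cue plus both pro-B cues.
members = [
    {"pro_a": [unshared_pro_a[i]], "pro_b": list(shared_pro_b)}
    for i in range(4)
]

def preference(pro_a, pro_b):
    """A simple cue-counting judge: back whichever option has more cues."""
    return "A" if len(pro_a) > len(pro_b) else "B"

# Prediscussion: every member's private evidence points to B.
individual_votes = [preference(m["pro_a"], m["pro_b"]) for m in members]

# Fully pooled evidence: 4 pro-A cues versus 2 pro-B cues, so A is correct.
pooled_verdict = preference(unshared_pro_a, shared_pro_b)

print(individual_votes, "->", pooled_verdict)
```

The profile is “hidden” because only exhaustive pooling of the unshared cues reveals A; a discussion that dwells on the shared pro-B cues, as the common-knowledge effect predicts, will simply reinforce the wrong prediscussion consensus.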
This creates a strong cognitive bias toward confirmatory thinking, and many naïve teams begin discussion by eliminating the correct solution because, after all, no individual member believes it might be the solution. Intelligence analysis, which involves many verified cases in which one party attempts to deceive another party by seeding communications with false and misleading information, represents one situation in which the diabolical forms of "hidden profiles" occur in naturally occurring contexts (others are cases of corporate strategic deception and some personnel matters in which individuals attempt to deceive others about professional qualifications). Furthermore, there are the social biases to underpool unshared information and overpool shared information, which, if not resisted, amplify the bias against the correct solution. Finally, time pressure increases the negative effects of the confirmatory thinking and information-pooling challenges (Lavery et al., 1999).

Qualitative analysis of the content of group discussions shows that when shared information is mentioned, it is likely to be followed by affirmative statements and relevant discussion (Larson et al., 1994). When

questions, but this also does not demonstrate relative overconfidence for groups. Rather, it reaches a conclusion about individual impact on group solutions. As with polarization, it is not clear that the overconfidence effect does not occur in teams, only that reliable research has not yet demonstrated such an effect. Also as with polarization, I believe the prescriptions outlined above for effective team performance include advice on the best practices currently supported by behavioral research.

"Group Cognition"

Cognitive scientists, usually working in multidisciplinary teams of engineers, psychologists, and mathematicians, have made a substantial contribution to our understanding of teamwork, with a focus on distributed workgroups that do not meet in person, and on the selection and training of team members (Kozlowski, this volume, Chapter 12; Fiore et al., 2003; Paris et al., 2000). The aspirations of these researchers are high: to create a practical theory that synthesizes most of the topics covered in the present chapter, adding selection and training of team members and the design of software systems to support and enhance teamwork. But the achievements are still modest. Much of the research involves pioneering observational studies (e.g., Klein and Miller, 1999), and many conclusions are in the form of useful conceptual frameworks (e.g., Bell and Kozlowski, 2002; Fiore et al., 2003; Klein et al., 2003). These foundations are critically important for the development of a comprehensive scientific analysis, but are in their infancy; they are useful as the source of hypotheses and research questions, but not a fount of practical advice or empirically verified conclusions.

For present purposes, the major contribution of these research programs has been the development of the concept of shared cognition or shared mental models (see Rouse and Morris, 1986; Wilson and Rutherford, 1989, for background on the concept of mental models). These are concepts about "interrelationships between team objectives, team mechanisms, temporal patterns of activity, individual roles, individual functions, and relationships among individuals" (Paris et al., 2000, p. 1055). As implied by this broad definition, it is difficult to provide a precise specification for a theoretical representation of a shared mental model, and the operational measurement of shared mental models appears to be ad hoc, varying from study to study. Nonetheless, the notion of a shared mental model, and practices that will support effective mental representations of "the team," seem to be an important element of any effort to improve team performance.

For example, Mathieu et al. (2000) studied the performance of college student dyads completing missions "flying" a simulated F-16 fighter plane. Mathieu and colleagues measured individual mental models as ratings of the perceived relationships between operational components of operating the aircraft (e.g., banking and turning, selecting and shooting weapons), then used a correlation coefficient as the index of the degree to which mental models of the situation (not team member interrelationships, as in the definition quoted above) were shared. The shared mental model index was correlated at moderate levels with performance of the flying missions (correlations ranging from 0.05 to 0.38), increasing over time on the task.

The most tangible advice, based on the notion that enhancing shared mental models will improve team performance, is the suggestion to train teammates together (Hollingshead, 1998; Moreland and Myaskovsky, 2000). Providing specific role assignments and fully informing team members of one another's primary capacities and duties in performing a collective task is the most effective remedy for information-pooling inefficiencies in Hidden Profile problems (Stasser and Augustinova, 2007; Stasser et al., 1995; discussed above in the section on "Information Pooling and the Common-Knowledge Effect").

High-Tech Alternatives to Face-to-Face Teamwork

Importantly, several techniques, usually web-based, are available for performing simple estimation and categorization tasks. Surowiecki (2004) and Sunstein (2006) review several of these methods, all of which have been used in intelligence analysis (Kaplan, this volume, Chapter 2). The simplest methods involve mechanically combining individual judgments into a summary solution—usually some kind of average value or election winner.

Delphi Method

The Delphi Method relies on a systematic social interaction process to find a central tendency in individual estimates (invented at the RAND Corporation in the 1950s by Helmer, Dalkey, Rescher, and others [see Rescher, 1998, for a review of the method and its invention]; cf. Linstone and Turoff, 1975).
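The method's central mechanic, anonymous re-estimation constrained toward the center of the previous round's distribution, can be sketched in a few lines of Python. Everything specific below (the panel's opinions, the clamp-to-interquartile-range adjustment rule, the number of rounds) is invented for illustration; real Delphi implementations vary in how strictly adjustment is enforced.

```python
import statistics

def iqr_bounds(estimates):
    """25th and 75th percentiles of the previous round's estimates."""
    q1, _, q3 = statistics.quantiles(estimates, n=4)
    return q1, q3

def delphi_round(estimates):
    """One anonymous re-estimation round: each panelist keeps an estimate
    that already lies inside the prior interquartile range, otherwise
    moves to the nearest bound (a simple illustrative adjustment rule)."""
    lo, hi = iqr_bounds(estimates)
    return [min(max(e, lo), hi) for e in estimates]

# Hypothetical expert estimates of some quantity (e.g., months to an event).
estimates = [4.0, 6.0, 7.0, 9.0, 30.0]   # one outlier panelist
for _ in range(3):                        # three re-estimation rounds
    estimates = delphi_round(estimates)

# Estimates converge toward the center; the outlier is pulled in, unlike a
# plain mean, which the single extreme estimate would dominate.
assert max(estimates) - min(estimates) < 25.0
```

Note how the rule differs from simple statistical aggregation: the outlier is progressively drawn toward the group's central tendency rather than dragging the average toward itself.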
In its simplest form, the Delphi Method participants (usually selected for subject area expertise) make a series of estimates and reestimates anonymously, with a requirement to adjust on each round toward the center of the distribution of estimates from the prior round (e.g., each estimate must be within the interquartile range of the previous estimates). Some versions of the method also require participants to provide reasons for their estimates and adjustments. Although the method has been widely used in the intelligence community, few rigorous evaluations of its merits have been conducted. It does seem to outperform simple statistical aggregation methods (e.g., taking averages or even averages weighted by estimators' confidence; e.g., Rowe and Wright, 1999). But there are no definitive comparisons of the Delphi Method against the performance of expert in-person teams, although it compares favorably with procedures based on statistical learning with feedback (a version of Social Judgment Theory; Cooksey, 1996; Hammond et al., 1977) and with prediction markets (Green et al., 2008; Rohrbaugh, 1979).

Prediction Markets

Another popular method, prediction markets, has participants buy and sell shares in an estimate (usually a forecast) that is paid off when the true outcome is revealed (e.g., Hanson et al., 2006; Wolfers and Zitzewitz, 2004). In applications to predict the outcomes of events (e.g., elections, sports contests), the prices of the estimates can be converted into probability-of-occurrence assessments. The method is used in many business and popular culture applications (e.g., predicting the outcomes of media awards and political elections) and has substantial journalistic evidence for accuracy. Nonetheless, a prediction market is just a market, and markets were designed to assess aggregate values, not true states of the world. Markets have many demonstrated weaknesses, even as "evaluation devices." Most published evaluations of prediction markets are theoretical and make arguments based on economic models, not on empirical data, for the efficacy or limits of the method (e.g., Manski, 2006; see Erikson and Wlezien, 2008, for an empirical evaluation of political election markets). Graefe and Weinhardt (2008) provide a "soft" evaluation that concludes that prediction markets and the Delphi Method perform at comparable levels of accuracy. Following the negative public reaction to the Defense Advanced Research Projects Agency–sponsored Policy Analysis Market, the use of prediction markets in government agencies has been reduced, but not eliminated. (The original Policy Analysis Market was attacked by some members of Congress for promoting betting on assassinations and terrorist events, and the project was cancelled.
See Congressional Record, 2003, and Hulse, 2003, for more information.) Note that prediction markets are restricted to applications in which a well-defined outcome, set to occur in the near future, can be verified. Furthermore, no market can be expected to perform efficiently without a substantial number of participants with different views on the "values" of the commodities being traded. Prediction markets are yet another tool for intelligence analysis that merits further exploration accompanied by hard-headed evaluations of efficacy (Arrow et al., 2007). The Delphi Method does not have this restriction to verifiable outcomes and is more generally applicable. The requirement for verification is especially restrictive in intelligence applications.

One caveat is that users of a partly mechanical system need to think carefully about the impact of the method on information pooling. Recall that a major failing of socially interacting teams is to thoroughly acquire and pool relevant information. A method is needed that encourages participants to share information relevant to the estimates as well as opinions on the correct solution. Some versions of the Delphi Method partially achieve this by requiring that on each round, each participant report an estimate and provide at least one item of information that he or she believes is an important cue to the solution. Similarly, prediction markets are often accompanied by chat rooms or bulletin boards on which participants are encouraged to share relevant arguments about the information they used. (Note that some market mechanisms—e.g., posted bid double auctions—promote sharing information [participants want others to value investments they themselves have chosen], whereas others—e.g., parimutuel betting markets—promote secrecy.)

Detection and Problem-Solving Tasks

To summarize, the first general admonition for good performance is to make solid plans and be self-conscious about the team process, to understand the nature of the task you are performing, and to deliberately balance subtask demands for independence and consensus. Second, for estimation tasks, many research-supported suggestions are available on how to execute each subtask most effectively. Early subtasks tend to demand more independence and to profit most from task-relevant diversity. Later subtasks demand more interdependence, coordination, and even conformity. But what if a team is performing another task type? The best advice is to begin by analyzing the task, breaking it down into subtasks, and then figuring out what properties of the team process are demanded by the subtasks. Below are two additional subtask breakdowns for the next most commonly performed analytic tasks.

The second major task performed by intelligence teams is the detection of informative signals in the vast spectrum of noise produced by collectors and sources at an incredible rate.
Probably the most common individual analyst task is to forage through the morning's incoming flood of electronic and other media. For a prototypical analyst, this usually involves searching various e-mail and news sources for something on a specific topic (e.g., Is anything relevant to the objective of detecting a local terrorist plan to attack a major U.S. target during the visit from a head of state?), or just for something out of place, strange, or anomalous (e.g., What does the sudden appearance of references to "nail polish remover" in e-mails intercepted between two suspected conspirators mean?). For such detection tasks, the research supports a six-subtask process model: (1) sample information; (2) construct an image or mental model of the "normal" or "status quo" conditions; (3) sample more information; (4) detect a difference (or not) that is "large enough" or "over criterion" to explore further; (5) interpret the difference—important or not; and (6) generate an appropriate response.

The analysis and performance of detection tasks is helped greatly by the availability of an optimal model for the detection decision, such as Signal Detection Theory (McClelland, this volume, Chapter 4). Even if the actual Signal Detection calculations cannot be performed, the model provides a useful organizing framework. Hundreds of concrete applications of the model have been reported in well-defined, real-world detection problems in medicine, meteorology, and other domains of practical activity. (Research by Sorkin and his colleagues is at the cutting edge of knowledge on team performance of detection tasks, e.g., Sorkin et al., 2001, 2004.)

For problem-solving and decision-making tasks, there is also an idealized subtask breakdown (although no model for optimal performance): Subtask 1, comprehending the problem and immersion in the relevant knowledge domains; Subtask 2, hypothesis (solution) generation; Subtask 3, solution evaluation and selection; and Subtask 4, solution application and implementation. Again, the sheer volume and diversity of information that a team, compared to an individual, can bring to bear on a solution offer many advantages. The immersion, selection, and implementation subtasks can all be enhanced as more team members are included in a project. Something analogous to error damping can occur in the selection subtask, when diverse critical perspectives are focused on selecting the best generated solution. Furthermore, effectively deployed teamwork can increase the variety and quantity of different solutions that are produced in the innovative solution generation subtask. (Laughlin's research on "collective induction" is the best starting place, e.g., Laughlin, 1999.)
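The Signal Detection framework mentioned above reduces, in its standard Gaussian equal-variance form, to two summary statistics computed from an observer's hit and false-alarm rates: a sensitivity index d′ (how discriminable signals are from noise) and a response criterion c (how willing the observer is to say "signal"). A minimal sketch with invented rates, not data from the Sorkin studies:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF ("z-score of p")

def signal_detection(hit_rate, false_alarm_rate):
    """Gaussian equal-variance SDT: sensitivity d' and criterion c.
    d' > 0 means signals are discriminable from noise; c > 0 means a
    conservative criterion (fewer 'signal' responses overall)."""
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Invented example: an analyst flags 84% of true alerts but also flags
# 16% of noise items as alerts.
d_prime, c = signal_detection(0.84, 0.16)
assert 1.9 < d_prime < 2.1   # z(0.84) ≈ 0.99, z(0.16) ≈ -0.99
assert abs(c) < 1e-9         # symmetric rates -> unbiased criterion
```

Separating sensitivity from criterion is what makes the framework useful as an organizing device: a team can raise its hit rate either by genuinely improving discrimination (raising d′) or merely by loosening its criterion (lowering c), and only the statistics together reveal which happened.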
Learning from Experience in Teams

Including opportunities to learn from experience is essential for team performance. Effective leaders make sure that individuals receive feedback and coaching to improve both individual problem-solving skills and social teamwork skills. Ideally, when a team completes a task (e.g., by successfully executing the five subtasks that compose an information integration estimation task), a final subtask would be executed to evaluate the team's achievements and to extract lessons at the team and individual levels to improve future performance. To some extent, objective feedback on the quality of the product will be of use (e.g., the accuracy of an estimate). But outcome feedback also provides indirect and partial information about the quality of the team process.

WHY TEAMWORK IS IMPORTANT IN INTELLIGENCE ANALYSIS

Why have teams perform judgment, problem-solving, or decision-making tasks at all? Why not simply find the best individuals and have them perform all of the tasks? This question is often asked in the academic literature on small-group performance. A common answer is that there is no good reason to use teams, or at least face-to-face teams (e.g., Armstrong, 2006). The reasoning is that in most controlled laboratory analyses that provide clear comparisons of group versus individual performance, groups perform at lower levels than the best individuals. Loosely speaking, teams perform between the median and the best member, usually closer to the median (Gigone and Hastie, 1997; Hastie, 1986). So, why not focus on methods to identify the most effective individuals or, at least, move to software-supported collaboration systems that do not require face-to-face meetings? The problem with this advice is that it is unrealistic: it is derived from scientifically valid studies, but of relatively simple, controlled tasks; these are tasks that can be performed effectively by both individuals and groups. But, in the real world of intelligence analysis, many tasks cannot be performed by one individual acting alone. There is no plausible comparison between individual and team performance, because unaided individuals cannot do the tasks. In many areas of intelligence analysis, teamwork is not an option; it is a necessity.

REFERENCES

Armstrong, J. S. 2006. How to make better forecasts and decisions: Avoid face-to-face meetings. Foresight: The International Journal of Applied Forecasting 5:3–8.
Arrow, K. J., S. Sunder, R. Forsythe, R. E. Litan, M. Gorham, E. Zitzewitz, R. W. Hahn, R. Hanson, D. Kahneman, J. O. Ledyard, S. Levmore, P. R. Milgrom, F. D. Nelson, G. R. Neumann, M. Ottaviani, C. R. Plott, T. C. Schelling, R. J. Shiller, V. L. Smith, E. C. Snowberg, C. R. Sunstein, P. C. Tetlock, P. E. Tetlock, H. R. Varian, and J. Wolfers. 2007. Statement on prediction markets. Pub. No. 07-11. Washington, DC: Brookings Institution.
Austin, J. R. 2003. Transactive memory in organizational groups: The effects of content, consensus, specialization, and accuracy on group performance. Journal of Applied Psychology 88(5):866–878.
Baron, J. N., and D. M. Kreps. 1999. Strategic human resources: Frameworks for general managers. New York: John Wiley and Sons.
Baron, R. S. 2005. So right, it's wrong: Groupthink and the ubiquitous nature of polarized decision making. Advances in Experimental Social Psychology 37:219–253.
Bell, B. S., and S. W. J. Kozlowski. 2002. A typology of virtual teams: Implications for effective leadership. Group and Organizational Management 27(1):12–49.
Bruce, J. B. 2008. The missing link: The analyst–collector relationship. In R. Z. George and J. B. Bruce, eds., Analyzing intelligence: Origins, obstacles, and innovations (pp. 191–210). Washington, DC: Georgetown University Press.

Calvert, R. L. 1985. The value of biased information: A rational choice model of political advice. Journal of Politics 47(2):530–555.
Chen, G., R. Kanfer, R. P. DeShon, J. E. Mathieu, and S. W. J. Kozlowski. 2009. The motivating potential of teams: Test and extension of Chen and Kanfer's (2006) cross-level model of motivation in teams. Organizational Behavior and Human Decision Processes 101(1):45–55.
Choi, H.-S., and L. Thompson. 2005. Old wine in a new bottle: Impact of membership change on group creativity. Organizational Behavior and Human Decision Processes 98(2):121–132.
Congressional Record. 2003. (Senate), July 29, pp. S10082–S10083. Available: http://www.fas.org/sgp/congress/2003/s072903.html [accessed June 2010].
Cooksey, R. W. 1996. Judgment analysis: Theory, methods, and applications. San Diego, CA: Academic Press.
Davis, J. 2008. Why bad things happen to good analysts. In R. Z. George and J. B. Bruce, eds., Analyzing intelligence: Origins, obstacles, and innovations (pp. 157–170). Washington, DC: Georgetown University Press.
Dawes, R. M. 1993. Prediction of the future versus understanding of the past: A basic asymmetry. American Journal of Psychology 106(1):1–24.
De Groot, M. H. 1970. Optimal statistical decisions. New York: McGraw-Hill (reprinted, 2004, Wiley Classics Library).
Diehl, M., and W. Stroebe. 1987. Productivity loss in brainstorming groups: Toward a solution of a riddle. Journal of Personality and Social Psychology 53(3):497–509.
Eisenhardt, K. M., J. L. Kahwajy, and L. J. Bourgeois, III. 1997. How management teams can have a good fight. Harvard Business Review 75(4):77–85.
Erikson, R. S., and C. Wlezien. 2008. Are political markets really superior to polls as election predictors? Public Opinion Quarterly 72(2):190–215.
Fiore, S. M., E. Salas, H. M. Cuevas, and C. A. Bowers. 2003. Distributed coordination space: Toward a theory of distributed team process and performance. Theoretical Issues in Ergonomics Science 4(3):340–364.
Fischhoff, B. 1975. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1(3):288–299.
Gigone, D., and R. Hastie. 1993. The common knowledge effect: Information sharing and group judgment. Journal of Personality and Social Psychology 65:959–974.
Gigone, D., and R. Hastie. 1997. The proper analysis of the accuracy of group judgments. Psychological Bulletin 121:149–167.
Gladwell, M. 2003. Connecting the dots: The paradoxes of intelligence reform. The New Yorker (March 10):83–89.
Gold, T., and B. Hermann. 2003. The role and status of DoD Red Teaming activities. Technical Report, September, No. A139714. Washington, DC: Storming Media USA.
Goodwin, D. K. 2005. Team of rivals: The political genius of Abraham Lincoln. New York: Simon and Schuster.
Graefe, A., and C. Weinhardt. 2008. Long-term forecasting with prediction markets—A field experiment on applicability and expert confidence. Journal of Prediction Markets 2(2):71–91.
Green, K. C., J. S. Armstrong, and A. Graefe. 2008. Methods to elicit forecasts from groups: Delphi and prediction markets compared. Foresight: The International Journal of Applied Forecasting 8:17–20.
Greitemeyer, T., S. Schulz-Hardt, F. C. Brodbeck, and D. Frey. 2006. Information sampling and group decision making: The effects of an advocacy decision procedure and task experience. Journal of Experimental Psychology: Applied 12(1):31–42.
Hackman, J. R. 2002. Leading teams: Setting the stage for great performances. Boston, MA: Harvard Business School Press.

Hackman, J. R., and N. Vidmar. 1970. Effects of size and task type on group performance and member reactions. Sociometry 33(1):37–54.
Hackman, J. R., S. M. Kosslyn, and A. W. Woolley. 2008. The design and leadership of intelligence analysis teams. Unpublished Technical Report No. 11. Available: http://groupbrain.wjh.harvard.edu [accessed February 2010].
Hammond, K. R., J. Rohrbaugh, J. Mumpower, and L. Adelman. 1977. Social judgment theory: Applications in policy formation. In M. F. Kaplan and S. Schwartz, eds., Human judgment and decision processes in applied settings (pp. 1–30). New York: Academic Press.
Hanson, R., R. Oprea, and D. Porter. 2006. Information aggregation and manipulation in an experimental market. Journal of Economic Behavior and Organization 60(4):449–459.
Hastie, R. 1986. Experimental evidence on group accuracy. In B. Grofman and G. Owen, eds., Information pooling and group decision making (pp. 129–157). Greenwich, CT: JAI Press.
Heuer, R. J., Jr. 2007. Small group processes for intelligence analysis. Unpublished manuscript, Sherman Kent School of Intelligence Analysis, Central Intelligence Agency. Available: http://www.pherson.org/Library/H11.pdf [accessed February 2010].
Heuer, R. J., Jr. 2008. Computer-aided analysis of competing hypotheses. In R. Z. George and J. B. Bruce, eds., Analyzing intelligence: Origins, obstacles, and innovations (pp. 251–265). Washington, DC: Georgetown University Press.
Hinsz, V. B., R. S. Tindale, and D. A. Vollrath. 1997. The emerging conception of groups as information processors. Psychological Bulletin 121(1):43–64.
Hollingshead, A. B. 1998. Group and individual training: The impact of practice on performance. Small Group Research 29(2):254–280.
Hulse, C. 2003. Pentagon abandons plans for futures market on terror. New York Times. July 2.
Janis, I. L. 1972. Victims of Groupthink: A psychological study of foreign-policy decisions and fiascos. Boston, MA: Houghton Mifflin.
Janis, I. L. 1982. Groupthink: Psychological studies of policy decisions and fiascos, 2nd ed. Boston, MA: Houghton Mifflin.
Jervis, R. 2006. Reports, politics, and intelligence failures: The case of Iraq. Journal of Strategic Studies 29(1):3–52.
Kelley, T., and J. Littman. 2001. The art of innovation: Lessons in creativity from IDEO, America's leading design firm. New York: Random House.
Kerr, N. L., and R. S. Tindale. 2004. Group performance and decision making. Annual Review of Psychology 55:623–655.
Klein, G., and T. E. Miller. 1999. Distributed planning teams. International Journal of Cognitive Ergonomics 3(3):203–222.
Klein, G., K. G. Ross, B. M. Moon, D. E. Klein, and E. Hollnagel. 2003. Macrocognition. IEEE Intelligent Systems May–June:81–85.
Kozlowski, S. W. J., and D. R. Ilgen. 2006. Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest 7(3):77–124.
Kramer, T. J., G. P. Fleming, and S. M. Mannis. 2001. Improving face-to-face brainstorming through modeling and facilitation. Small Group Research 32(5):533–557.
Larson, J. R., Jr., P. G. Foster-Fishman, and C. B. Keys. 1994. Discussion of shared and unshared information in decision-making groups. Journal of Personality and Social Psychology 67(3):446–461.
Larson, J. R., Jr., C. Christensen, A. S. Abbot, and T. M. Franz. 1996. Diagnosing groups: Charting the flow of information in medical decision-making teams. Journal of Personality and Social Psychology 71(2):533–557.
Laughlin, P. R. 1999. Collective induction: Twelve postulates. Organizational Behavior and Human Decision Processes 80(1):50–69.

Lavery, T. A., T. M. Franz, J. R. Winquist, and J. R. Larson, Jr. 1999. The role of information exchange in predicting group accuracy on a multiple judgment task. Basic and Applied Social Psychology 21(4):281–289.
Lee, M. D., and T. D. R. Cummins. 2004. Evidence accumulation in decision making: Unifying the "take the best" and the "rational" models. Psychonomic Bulletin and Review 11(2):343–352.
Linstone, H. A., and M. Turoff, eds. 1975. The Delphi Method: Techniques and applications. Reading, MA: Addison-Wesley Educational. Available: http://www.is.njit.edu/pubs/delphibook/ [accessed February 2010].
Mannix, E., and M. A. Neale. 2005. What differences make a difference? The promise and reality of diverse teams in organizations. Psychological Science in the Public Interest 6(2):31–55.
Manski, C. F. 2006. Interpreting the predictions of prediction markets. Economics Letters 91(4):425–429.
Mathieu, J. E., T. S. Heffner, G. F. Goodwin, E. Salas, and J. A. Cannon-Bowers. 2000. The influence of shared mental models on team process and performance. Journal of Applied Psychology 85:273–283.
Moreland, R. L., and J. M. Levine. 1982. Socialization in small groups: Temporal changes in individual-group relations. Advances in Experimental Social Psychology 15:137–192.
Moreland, R. L., and L. Myaskovsky. 2000. Exploring the performance benefits of group training: Transactive memory or improved communication. Organizational Behavior and Human Decision Processes 82(1):117–133.
Moreland, R. L., L. Argote, and R. Krishnan. 1996. Socially shared cognition at work: Transactive memory and group performance. In J. L. Nye and A. M. Browker, eds., What's social about social cognition (pp. 128–141). Thousand Oaks, CA: Sage.
Nemeth, C. J. 1986. Differential contributions of majority and minority influence. Psychological Review 93(1):23–32.
Nemeth, C. J., K. Brown, and J. Rogers. 2001. Devil's advocate versus authentic dissent: Stimulating quantity and quality. European Journal of Social Psychology 31(6):707–720.
Nijstad, B. A., W. Stroebe, and H. F. M. Lodewijkx. 2003. Production blocking and idea generation: Does blocking interfere with cognitive processes? Journal of Experimental Social Psychology 39(4):531–548.
Okhuysen, G. A., and K. M. Eisenhardt. 2002. Integrating knowledge in groups: How formal interventions enable flexibility. Organization Science 13(4):370–386.
Oxley, N. L., M. T. Dzindolet, and P. B. Paulus. 1996. The effects of facilitators on the performance of brainstorming groups. Journal of Social Behavior and Personality 11(4):633–646.
Page, S. E. 2007. The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton, NJ: Princeton University Press.
Paris, C. R., E. Salas, and J. A. Cannon-Bowers. 2000. Teamwork in multi-person systems: A review and analysis. Ergonomics 43(8):1,052–1,075.
Pashler, H., M. McDaniel, D. Rohrer, and R. Bjork. 2008. Learning styles: Concepts and evidence. Psychological Science in the Public Interest 9(3):105–119.
Paulus, P. B. 1998. Developing consensus about Groupthink after all these years. Organizational Behavior and Human Decision Processes 73(2/3):362–374.
Paulus, P. B., and H.-C. Yang. 2000. Idea generation in groups: A basis for creativity in organizations. Organizational Behavior and Human Decision Processes 82(1):76–87.
Plous, S. 1995. A comparison of strategies for reducing interval overconfidence in group judgments. Journal of Applied Psychology 80(4):443–454.
Puncochar, J. M., and P. W. Fox. 2004. Confidence in individual and group decision making: When "two heads" are worse than one. Journal of Educational Psychology 96(3):582–591.

Rescher, N. 1998. Predicting the future. Albany: State University of New York Press.
Rohrbaugh, J. 1979. Improving the quality of group judgment: Social judgment analysis and the Delphi technique. Organizational Behavior and Human Performance 24(1):73–92.
Rohrbaugh, J. 1981. Improving the quality of group judgment: Social judgment analysis and the nominal group technique. Organizational Behavior and Human Performance 28(2):272–288.
Rouse, W. B., and N. M. Morris. 1986. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin 100(3):349–363.
Rowe, G., and G. Wright. 1999. The Delphi technique as a forecasting tool: Issues and analysis. International Journal of Forecasting 15(3):353–375.
Safire, W. 2004. On language: Groupthink. Available: http://www.nytimes.com/2004/08/08/magazine/the-way-we-live-now-8-8-04-on-language-groupthink.html?sec=&spon=&pagewanted=1 [accessed February 2010].
Schkade, D., C. R. Sunstein, and R. Hastie. 2007. What happened on deliberation day? California Law Review 95(3):915–940.
Schulz-Hardt, S., F. C. Brodbeck, A. Mojzisch, R. Kerschreiter, and D. Frey. 2006. Group decision making in hidden profile situations: Dissent as a facilitator for decision quality. Journal of Personality and Social Psychology 91(6):1,080–1,093.
Schum, D. A. 1987. Evidence and inference for the intelligence analyst (2 vols.). Lanham, MD: University Press of America.
Schweiger, D. M., W. R. Sandberg, and J. W. Ragan. 1986. Group approaches to improving strategic decision making: A comparative analysis of dialectical inquiry, devil's advocacy, and consensus. Academy of Management Journal 29(1):51–71.
Schwenk, C. R. 1988. The essence of strategic decision making. Lexington, MA: Lexington Press.
Schwenk, C. R. 1990. Effects of devil's advocacy and dialectical inquiry on decision making: A meta-analysis. Organizational Behavior and Human Decision Processes 47(1):161–176.
Senate Select Committee on Intelligence. 2004. Report of the Select Committee on Intelligence on the U.S. intelligence community's prewar intelligence assessments on Iraq. Available: http://www.gpoaccess.gov/serialset/creports/iraq.html [accessed June 2010].
Sniezek, J. A. 1992. Groups under uncertainty: An examination of confidence in group decision making. Organizational Behavior and Human Decision Processes 52(2):124–155.
Sorkin, R., C. Hays, and R. West. 2001. Signal-detection analysis of group decision making. Psychological Review 108(1):183–203.
Sorkin, R., S. Luan, and J. Itzkowitz. 2004. Group decision and deliberation: A distributed detection process. In D. J. Koehler and N. Harvey, eds., Blackwell handbook of judgment and decision making (pp. 464–484). Malden, MA: Blackwell.
Stasser, G., and M. Augustinova. 2007. Social engineering in distributed decision-making teams: Some implications for leadership at a distance. In S. P. Weisband, ed., Leadership at a distance: Research in technologically supported work (pp. 151–168). Mahwah, NJ: Erlbaum Associates.
Stasser, G., and W. Titus. 1985. Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology 48(6):1,467–1,478.
Stasser, G., and W. Titus. 2003. Hidden profiles: A brief history. Psychological Inquiry 14(3/4):304–313.
Stasser, G., D. D. Stewart, and G. M. Wittenbaum. 1995. Expert roles and information exchange during discussion: The importance of knowing who knows what. Journal of Experimental Social Psychology 31:244–265.
Steiner, I. D. 1972. Group process and productivity. San Diego, CA: Academic Press.
Straus, S. G., A. M. Parker, J. B. Bruce, and J. W. Dembosky. 2008. The group matters: A review of the effects of group interaction processes and outcomes in analytic teams. Working Paper WR-580-USG. Santa Monica, CA: RAND Corporation. Available: http://www.rand.org/pubs/working_papers/2009/RAND_WR580.pdf [accessed February 2010].
Sunstein, C. R. 2006. Infotopia: How many minds produce knowledge. New York: Doubleday.
Sunstein, C. R. 2009. Going to extremes: How like minds unite and divide. New York: Oxford University Press.
Surowiecki, J. 2004. The wisdom of crowds. New York: Doubleday.
Tapscott, D., and A. D. Williams. 2006. Wikinomics: How mass collaboration changes everything. New York: Penguin.
Tetlock, P. E. 2006. Expert political judgment: How good is it? How can we know? Princeton, NJ: Princeton University Press.
Turner, M. E., and A. R. Pratkanis. 1998. Twenty-five years of Groupthink theory and research: Lessons from the evaluation of a theory. Organizational Behavior and Human Decision Processes 73(2–3):105–115.
U.S. Government. 2009. Tradecraft primer: Structured analytic techniques for improving intelligence analysis. Available: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf [accessed February 2010].
Wageman, R. 2001. How leaders foster self-managing team effectiveness: Design choice versus hands-on coaching. Organization Science 12(5):559–577.
Whyte, G. 1998. Recasting Janis’s Groupthink model: The key role of collective efficacy in decision fiascoes. Organizational Behavior and Human Decision Processes 73(2–3):185–209.
Whyte, W. H., Jr. 1952. Groupthink. Fortune Magazine 45(March):6–7.
Wilson, J. R., and A. Rutherford. 1989. Mental models: Theory and application in human factors. Human Factors 31(5):617–634.
Winquist, J. R., and J. R. Larson, Jr. 1998. Information pooling: When it impacts group decision making. Journal of Personality and Social Psychology 74(2):371–377.
Wolfers, J., and E. Zitzewitz. 2004. Prediction markets. Journal of Economic Perspectives 18(2):107–126.
Yasin, R. 2009. National security and social networking are compatible. Government Computer News, July 23. Available: http://www.gcn.com/Articles/2009/07/23/Social-networking-media-national-security.aspx?Page=1 [accessed February 2010].
Zarnoth, P., and J. A. Sniezek. 1997. The social influence of confidence in group decision making. Journal of Experimental Social Psychology 33(4):345–366.