GROUP COGNITION

For nearly a century, psychologists have sought to understand the processes that allow people to automatically evaluate their surroundings. Extensive research suggests that stereotypes and prejudice can operate automatically and efficiently, without conscious control. For instance, racial cues activate stereotypes and evaluations within milliseconds of encountering an individual, and these evaluations are difficult to suppress once activated. These initial evaluations can lead to biases in a wide variety of domains, including hiring, mortgage lending, and legal decision-making. This division between automaticity and control reflects a dual process approach that has become the dominant paradigm for understanding a wide range of psychological topics, from prejudice and stereotyping to moral judgment.

Although dual process models have proven highly generative for understanding a wide range of psychological phenomena, recent work suggests that the mind and brain are composed of multiple interactive component processes (for a review see Van Bavel et al., 2012). A key implication is that seemingly automatic reactions to social stimuli, like race, are not only updated by contextual and motivational factors, but are also shaped by top-down processes that reach the most automatic elements of social cognition. Our research has applied this framework to a wide variety of topics in psychology and neuroscience (Van Bavel & Cunningham, 2011; Van Bavel et al., 2014). We have found extensive evidence that many automatic perceptual and evaluative processes are context-dependent, dynamic, and shaped by collective concerns.

Evidence from our lab suggests that arbitrarily assigning people to groups can override their initial, implicit racial biases. This work built upon previous research showing that people quickly identify with social groups and favor in-group members, even in the absence of social interaction, stereotypes, or competition over resources. We found that merely assigning people to a mixed-race group led them to express a preference for in-group over out-group members (Van Bavel & Cunningham, 2009). More strikingly, this preference was apparent even on implicit measures. In other words, White participants who were assigned to a group showed an automatic preference for both Black and White in-group members. In a separate control condition, people who merely saw the two mixed-race groups still showed evidence of racial bias—suggesting that people need to actively identify with a group to experience this shift in their automatic evaluations. This work established that a subtle shift in social identity could reduce implicit racial bias.

To gain a better understanding of the psychological processes underlying these shifts in evaluation, we conducted a series of neuroimaging experiments using a similar methodology. We assigned people to a mixed-race group a few minutes prior to a functional Magnetic Resonance Imaging (fMRI) session. Previous research had found that the amygdala—a small structure in the medial temporal lobe engaged in affective processing—was associated with implicit measures of racial bias. We examined whether mere membership in a mixed-race group could elicit in-group bias in amygdala activity, regardless of the race of the targets (Van Bavel et al., 2008). As predicted, the amygdala exhibited greater activity to minimal in-group versus out-group members, and there were no effects of race. We also observed greater activity in the fusiform gyri—regions of the occipitotemporal cortex involved in face processing—to in-group versus out-group members. This latter finding led us to design a follow-up study on the Fusiform Face Area—a subregion of the fusiform gyrus involved specifically in face recognition and perceptual expertise. Replicating our previous research, we found greater activity in the Fusiform Face Area to in-group versus out-group members (Van Bavel et al., 2011). Moreover, participants with greater in-group bias in Fusiform Face Area activity also showed greater in-group bias in a subsequent recognition memory task, suggesting that this region might mediate downstream behavior. Together, these experiments helped shed light on the neural processes underlying in-group bias and further established that even seemingly trivial identities can shape implicit perception and evaluation.

Building on these results, we sought to understand the social motives that drive in-group bias by assessing the effects of race and group membership on recognition memory. Consistent with our previous research, we found that people had greater memory for in-group members, regardless of race (Van Bavel & Cunningham, 2012). In other words, membership in an arbitrary mixed-race group was enough to eliminate the own-race memory advantage—an effect that has been described as one of the most robust phenomena in the social sciences. Further, this memory advantage for in-group members was largest among participants who were highly identified with the in-group (Van Bavel & Cunningham, 2012) as well as those who had a very high need to belong (Van Bavel et al., 2012). In sum, mere categorization into a group was not sufficient on its own to elicit in-group bias; it had to be combined with identification with the group or other social motives. We also found that in-group bias was itself fairly easy to eliminate: when participants were assigned to a role on their team that required them to spy on the out-group, they had enhanced memory for out-group members. Thus, even the motive to be a good group member can be channeled in many different ways depending on group norms and expectations.

In light of the pervasive effects of group membership on perception, evaluation, and memory, some researchers have suggested that race is literally erased from person perception. We re-analyzed data from several neuroimaging experiments to see if this was indeed the case. Specifically, we used multi-voxel pattern analysis to see if race could be decoded from distributed patterns of neural activity in the visual system (Kaul et al., 2014; Ratner et al., 2013). While traditional univariate analyses provided no evidence of race-based responses in the fusiform gyrus or early visual cortex, multi-voxel pattern analyses suggested that race was indeed represented within these regions. Thus, while people may not judge in-group members by the color of their skin, they are nevertheless encoding race. This suggests that social identities may guide automatic elements of perception and evaluation even when aspects of race (e.g., physiognomic features) are still represented in the visual system. In that sense, this research offers hope that a shared identity may be a more important prerequisite than color blindness when it comes to treating people from another race equally.
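To make this analytic distinction concrete, the sketch below contrasts a univariate test with a cross-validated pattern classifier on simulated voxel data. It is only an illustration of the logic, not our actual analysis pipeline, and every variable, parameter, and simulated effect in it is an assumption chosen for the example.

```python
# Minimal, illustrative contrast between a univariate analysis and MVPA.
# Simulated data only; not the analysis pipeline from Kaul et al. (2014)
# or Ratner et al. (2013). Assumes numpy, scipy, and scikit-learn are installed.
import numpy as np
from scipy import stats
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)      # 0 / 1 = two (hypothetical) race categories

# Simulate a region whose mean response does not differ between categories,
# but where category information is carried by a distributed voxel pattern.
pattern = rng.normal(0, 1, n_voxels)
pattern -= pattern.mean()                  # zero-mean pattern leaves the ROI average unchanged
data = rng.normal(0, 1, (n_trials, n_voxels))
data[labels == 1] += 0.5 * pattern

# Univariate analysis: compare mean activation across voxels between categories.
roi_mean = data.mean(axis=1)
t, p = stats.ttest_ind(roi_mean[labels == 0], roi_mean[labels == 1])
print(f"Univariate t-test on mean ROI activity: t = {t:.2f}, p = {p:.3f}")

# MVPA: decode category from the distributed pattern with cross-validation.
clf = LinearSVC(C=1.0, max_iter=10000)
acc = cross_val_score(clf, data, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")
```

In this toy case the mean signal carries no category information, so the univariate test typically comes out null while the classifier decodes category above chance, mirroring the dissociation described above.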

In the past few years, our lab has expanded our work to examine the relationship between social identity and a host of other psychological processes and consequences. In one line of research, we examined how in-group and out-group members capture rapid attention (Brosch & Van Bavel, 2012). This work helped establish the flexibility of emotional attention, moving beyond models that argue automatic emotional cueing is guided by an inflexible fear module. In other work, we have shown how social identity guides empathy and how cooperation between groups can broaden empathy towards out-group members (Cikara, Bruneau, Van Bavel, & Saxe, 2014). We also found that presenting people with evidence that the social networks of in-group and out-group members are interwoven can reduce the intergroup empathy gap that emerges in competitive groups. This research helps clarify the psychology underlying intergroup conflict and paves the way for potential interventions.

We recently received a grant from the National Science Foundation to examine how perceptual evidence and social group membership influence mental state attribution. In a series of experiments, we presented participants with a continuum of facial morphs ranging from humans to inanimate figures (e.g., dolls) that were described as based on in-group or out-group members. We found that people had more stringent thresholds for perceiving minds behind out-group faces in both minimal and real-world groups (Hackel, Looser, & Van Bavel, 2014). In other words, people require less humanness in an in-group face to infer that it has mental state capacities. Consistent with our previous research, this intergroup bias was largest among highly identified group members. Importantly, this pattern of intergroup bias was sensitive to social motives. For instance, out-group threat reversed the relationship between group membership and mind perception: participants who perceived an out-group to be threatening had more lenient mind perception thresholds towards that group.
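For readers unfamiliar with threshold estimation, the following minimal sketch shows one common way such a mind-perception threshold can be quantified: fitting a logistic psychometric function to responses along a doll-to-human morph continuum and reading off the 50% point. The data, parameter values, and function names are illustrative assumptions, not the procedure reported in Hackel, Looser, and Van Bavel (2014).

```python
# Illustrative estimation of a mind-perception threshold along a doll-to-human
# morph continuum. Simulated responses; not the actual experimental procedure.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    """Probability of a 'has a mind' response at morph level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

rng = np.random.default_rng(1)
morph_levels = np.tile(np.linspace(0, 1, 11), 20)   # 0 = doll, 1 = fully human
true_threshold, true_slope = 0.55, 12.0              # assumed "ground truth" for the simulation
p_mind = logistic(morph_levels, true_threshold, true_slope)
responses = rng.binomial(1, p_mind)                  # 1 = "has a mind"

# Fit the psychometric function and take the 50% point as the threshold.
(threshold, slope), _ = curve_fit(logistic, morph_levels, responses, p0=[0.5, 5.0])
print(f"Estimated mind-perception threshold: {threshold:.2f} (higher = more stringent)")
```

On this reading, a more stringent threshold simply means a face must look closer to fully human before a perceiver attributes a mind to it.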

We recently proposed a model that characterizes how identities shape our perception of the physical world (Xiao, Coppin, & Van Bavel, in press). In one line of work, we found that participants estimate the locations of threatening out-groups to be much closer than the locations of non-threatening out-groups (Xiao & Van Bavel, 2012) and that this perceived proximity heightens discrimination (Xiao, Wohl, & Van Bavel, under review). With collaborators in Switzerland, we found that Swiss participants who are primed with their national identity experience the smell of chocolate—a source of national pride—as more intense than non-Swiss participants or Swiss participants primed with their individual identity (Coppin et al., revise and resubmit). Similarly, Canadians primed with their national identity enjoy the taste of maple syrup more than when they are primed with their individual identity, and Southern Americans report that grits and chicken fried steak are tastier when they are primed with their regional rather than individual identity (Hackel, Wohl, Coppin, & Van Bavel, under review). This latter research speaks to the pervasive influence of social identity on cognition and may have important implications for understanding and motivating healthy eating behavior.

Taken together, this research has shown that social identification may exert a powerful influence over most aspects of social cognition, shaping what are often described as automatic or inflexible reactions to stimuli. The fact that collective concerns shift from one situation to another and shape our actions represents a challenge to models throughout the field of psychology and underscores the social nature of human cognition (Packer & Van Bavel, 2015).

MORAL COGNITION

Over the past 15 years, several models of morality have challenged the longstanding view that reasoning is the sole or even primary means by which moral judgments are made. According to these intuitionist models, moral judgments are very often produced by reflexive mental computations that are unconscious, fast, and automatic. From this perspective, affective responses are automatically triggered by certain moral issues and provide a strong bottom-up influence on judgments and decision-making. Moral reasoning has thus been relegated to post hoc justification or corrective control, rather than the causal impetus for an initial moral judgment. In our view, these dual process models often neglect the dynamic nature of human cognition. As such, we recently proposed an alternative, dynamic model of moral cognition (Van Bavel, FeldmanHall, & Mende-Siedlecki, 2015).

We argue that top-down processes ranging from construal to reasoning often shape moral intuitions as they unfold (Van Bavel et al., 2015). For instance, we have shown that most actions can be construed as belonging to the moral domain or not (Van Bavel et al., 2012). To take one example from our research, the simple act of riding a bike can be seen through the lens of morality (e.g., is it a morally appropriate thing to do?), pragmatics (e.g., is it the most convenient or inexpensive option?), or hedonics (e.g., do I enjoy the experience of riding a bike?). How people construe an action influences how it is evaluated. Specifically, we found that moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions—the belief that absolutely nobody or everybody should engage in an action—than non-moral (pragmatic or hedonic) evaluations of the exact same actions. A follow-up neuroimaging experiment confirmed that moral evaluations recruit different neural processes than non-moral evaluations (Van Bavel et al., in prep). This suggests that brain regions implicated in moral evaluation, like the ventromedial prefrontal cortex, are sensitive to top-down construal and may reflect the process of rendering a moral judgment, rather than the specific content inherent in moral dilemmas. Moreover, we found that activation in these same brain regions is associated with individual differences in moralization: people who chronically construe issues through the lens of morality rely on this process for all forms of evaluation. This work reveals that our intuitions about right and wrong may have as much to do with the lens we impose on a situation as with the concrete features of the action itself. The basic process of determining whether something falls in the domain of morality seems to be a fundamental, if overlooked, aspect of moral psychology.

We have recently examined whether deliberating about one’s values and beliefs may not only justify or correct an initial emotional intuition, but also sensitize one to certain actions or events before they occur—a process we term moral tuning. To test this hypothesis, we either encouraged participants to go with their initial gut response or omitted this instruction before they responded to the popular trolley/footbridge moral dilemmas (Kappes & Van Bavel, in prep). Previous studies using this task have instructed people to go with their “first response” and found that people have an emotional response that makes it difficult for them to push one person off a footbridge to save five others—a pattern that is frequently cited as evidence for a dual process model of morality. However, when we dropped the instruction to go with their “first response”, the pattern of results reversed and people were much more willing to push one person off the bridge. Analyses of reaction times, as well as process-dissociation analyses, suggest that this framing manipulation may have shaped the automatic “deontological” responses rather than controlled “utilitarian” processes. Subsequent experiments using neuroimaging and mouse tracking suggest that these framing manipulations influence computations in the ventromedial prefrontal cortex—which has been argued to represent emotional intuitions—as well as the initial hand movements made during decision making in these dilemmas (Kappes, Kaggen, & Van Bavel, in prep; Noorbaloochi, Kappes, & Van Bavel, in prep). Taken together, these findings support the notion that top-down influences can tune the initial intuitions that guide moral judgment—even in the absence of corrective control.
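For illustration, the sketch below shows one common way process-dissociation estimates can be computed from judgments of congruent and incongruent dilemmas. The equations, example numbers, and function names are assumptions for exposition, not a description of our specific analyses.

```python
# Illustrative process-dissociation estimates of "deontological" (D) and
# "utilitarian" (U) parameters from moral dilemma judgments, using one common
# PD formulation. Shown only as a sketch of the logic, not our actual analyses.

def process_dissociation(p_reject_congruent: float, p_reject_incongruent: float):
    """Return (U, D) from the probability of judging harm unacceptable in
    congruent dilemmas (harm does NOT maximize outcomes) and incongruent
    dilemmas (harm DOES maximize outcomes, e.g., the footbridge case)."""
    # Assumed model: P(reject | congruent)   = U + (1 - U) * D
    #                P(reject | incongruent) = (1 - U) * D
    u = p_reject_congruent - p_reject_incongruent
    d = p_reject_incongruent / (1 - u) if u < 1 else float("nan")
    return u, d

# Hypothetical participant: rejects harm on 90% of congruent and 60% of
# incongruent dilemmas.
u, d = process_dissociation(0.90, 0.60)
print(f"Utilitarian parameter U = {u:.2f}, deontological parameter D = {d:.2f}")
```

The point of such a decomposition is that a manipulation can shift the D parameter while leaving U untouched (or vice versa), which is what allows us to ask whether framing acts on automatic or controlled contributions to the judgment.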

In other work, we have been able to change intuitions in moral dumbfounding scenarios. These moral scenarios involve cases where actors commit harmless wrongs, such as a brother and sister who agree to have protected sex without telling anyone. Although participants normally describe these actions as morally wrong, they are often unable to articulate a clear and compelling reason why—leading them to feel dumbfounded. This striking dissociation between reason and moral judgment is often cited as some of the strongest evidence for the primacy of moral intuitions. In a series of studies, we have found that moral dumbfounding—the feeling of confusion induced by these harmless wrongs—is reduced when participants have the opportunity to engage in a brief logical or moral reasoning task before reading the moral dumbfounding scenario (Ray & Van Bavel, in prep). We suspect that this reasoning task primes a mindset that reduces the initial visceral response to incest and other harmless wrongs. To test this hypothesis directly, we are currently using facial electromyography to measure initial activity of the levator labii—a facial muscle activated during feelings of disgust—in response to moral dumbfounding scenarios. This line of work offers promising evidence that automatic moral intuitions may be susceptible to reasoning.

To provide a more stringent test of the moral tuning hypothesis, we have adapted a number of tasks to capture the effects of moral reasoning and goals on people’s initial reactions to morally relevant stimuli (see Gantman & Van Bavel, 2015, for a review). In one line of work, we have found that moral (versus economic) reasoning in the classic Heinz dilemma, in which participants must decide whether or not a man should steal a drug to save his dying wife, can reduce subsequent automatic evaluations of morally laden words on a rapid evaluative priming task (Kappes & Van Bavel, in prep). This is perhaps the strongest evidence we have that moral reasoning can directly tune automatic intuitions. Related work in our lab has found that moral motives may even heighten perceptual awareness of moral words (Gantman & Van Bavel, 2014), suggesting that moral intuitions may hinge on reasoning that occurs prior to encountering stimuli. In fact, we have new evidence using electroencephalography suggesting that this advantage emerges within the first few hundred milliseconds (Gantman, Steele, Mende-Siedlecki, Van Bavel, & Mathewson, in prep). Importantly, we have conducted extensive analyses of our own data as well as data from other labs to show that the effect of morality on word recognition is not due to features of the stimuli (e.g., word length, frequency in the lexicon), differences in semantics (e.g., sequential priming or gradual accumulation), or simple cognitive accessibility (e.g., the effects of morality on recognition remain after adjusting for differences in reaction time; see Gantman & Van Bavel, in prep). As such, we are growing confident that reasoning and other top-down processes may be able to guide whether or not people detect morally relevant stimuli. We believe this research will help provide the empirical foundations for a shift towards dynamic models of moral cognition that incorporate top-down influences in the initial phases of moral judgment and decision-making.

POLITICAL COGNITION

People hold favorable views not only of themselves and their own groups, but also of the overarching system in which they live. As such, people are motivated to justify and maintain that social system. Although system justification and social identity motives are often aligned for members of advantaged groups, there are contexts in which system justification motives may overshadow individual and group-based concerns. Our recent work has examined these system-level motives in the domain of person perception, particularly when they diverge from traditional social identity motives.

The tendency to categorize multiracial individuals according to their most subordinate social group is referred to as the principle of hypodescent. Building on previous research showing that political conservatives are more supportive of the traditional social order and more accepting of inequality than liberals, we hypothesized that political ideology would moderate racial categorization (Krosch, Berntsen, Amodio, Jost, & Van Bavel, 2013). In a series of studies, participants categorized morphed faces that varied in racial ambiguity. Self-reported conservatism (vs. liberalism) was associated with the tendency to categorize perceptually ambiguous (i.e., mixed-race) faces as Black. Consistent with the notion that system justification motives help explain ideological differences in racial categorization, the association between conservatism and hypodescent was mediated by individual differences in opposition to equality.
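To illustrate the mediation logic, the sketch below estimates a bootstrapped indirect effect of conservatism on hypodescent through opposition to equality on simulated data. The variable names, effect sizes, and estimates are assumptions for exposition and do not reflect the measures or results reported in Krosch et al. (2013).

```python
# Illustrative mediation analysis with a bootstrapped indirect effect:
# conservatism (X) -> opposition to equality (M) -> proportion of ambiguous
# faces categorized as Black (Y). Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 200
conservatism = rng.normal(0, 1, n)
opposition_to_equality = 0.5 * conservatism + rng.normal(0, 1, n)
hypodescent = 0.4 * opposition_to_equality + 0.1 * conservatism + rng.normal(0, 1, n)

def ols_slopes(y, X):
    """Return OLS regression coefficients (excluding the intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

def indirect_effect(x, m, y):
    a = ols_slopes(m, x[:, None])[0]               # path a: X -> M
    b = ols_slopes(y, np.column_stack([m, x]))[0]  # path b: M -> Y, controlling for X
    return a * b

# Bootstrap confidence interval for the indirect (mediated) effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(conservatism[idx],
                                opposition_to_equality[idx],
                                hypodescent[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(conservatism, opposition_to_equality, hypodescent)
print(f"Indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```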

We also reasoned that U.S. conservatives should be more motivated than U.S. liberals to maintain racial divisions that are part of the traditional American social system, but not those of an irrelevant system. Therefore, in follow-up research we activated system justification concerns directly by manipulating the salience of the American (vs. Canadian) social system and examined the relationship between ideology and racial categorization (Krosch et al., 2013). We hypothesized that the relationship between ideology and biased racial categorization would be stronger when participants were classifying “American” than “Canadian” faces. As predicted, the relationship between ideology and hypodescent was stronger when our U.S. participants categorized American than Canadian faces. This finding helped rule out the possibility that the link between conservatism and racial categorization was simply a matter of racial prejudice. Rather, it bolstered the argument that this bias was driven by system justification motives: conservatives were only motivated to engage in hypodescent for mixed-race individuals who were part of their own system (i.e., American faces).

We recently expanded this research to the domain of political neuroscience (Jost et al., 2013; 2014). For instance, we conducted a neuroimaging experiment to assess whether these ideological differences in racial categorization were driven by perceptual, evaluative, or decision-making systems (Krosch, Jost, & Van Bavel, in prep). We invited a large, ideologically diverse sample of participants to complete a race categorization task. Consistent with previous research, activity in the amygdala and anterior insula was positively correlated with both the objective Blackness of the faces and their racial ambiguity (i.e., mixed-race). Importantly, individual differences in political orientation moderated the relationship between racial ambiguity and insula activity, such that conservatism was associated with stronger insula activity in response to mixed-race faces. There were no such ideological differences in lower-level perceptual systems (e.g., fusiform gyrus) or higher-level decision-making systems (e.g., mPFC). This research illustrates one way in which neuroimaging can help tease apart psychological processes that are difficult to disentangle using behavioral measures. It also clarifies how, why, and when multiracial individuals are likely to be categorized as members of a subordinate racial group—a phenomenon that may enhance their vulnerability to discrimination and exacerbate existing inequalities.
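The moderation claim here amounts to a reliable ideology-by-ambiguity interaction in predicting insula responses. The sketch below shows that logic on simulated data; the variables, effect sizes, and regression model are illustrative assumptions rather than the actual fMRI analysis.

```python
# Illustrative moderation analysis: does political conservatism moderate the
# relationship between face racial ambiguity and insula response? Simulated
# data; the variables and effect sizes are assumptions for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
ambiguity = rng.uniform(0, 1, n)        # 0 = unambiguous, 1 = maximally mixed-race
conservatism = rng.normal(0, 1, n)      # standardized self-reported ideology
# Simulate a stronger ambiguity-insula relationship for more conservative participants.
insula = 0.2 * ambiguity + 0.3 * ambiguity * conservatism + rng.normal(0, 1, n)

# Moderation = a reliable ambiguity x conservatism interaction term.
X = sm.add_constant(np.column_stack([ambiguity, conservatism, ambiguity * conservatism]))
fit = sm.OLS(insula, X).fit()
print("Interaction coefficient:", round(fit.params[3], 3),
      "p =", round(fit.pvalues[3], 4))
```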

Our current work in this area is focused on better understanding the psychological and neural basis of these system justification motives. In one line of research, we examined the relationship between system justification and the detection of perceptual anomalies (Tritt & Van Bavel, in prep). Specifically, we presented people with very brief images of playing cards that were either expected (e.g., a black 6 of spades) or unexpected (e.g., a red 6 of spades). Most participants were able to correctly identify the expected playing cards, and accuracy improved with longer presentation times (e.g., cards presented for 160 ms were identified more accurately than cards presented for 16 ms). However, participants high in system justification needs performed poorly on this task: they were either unable or unwilling to correctly identify anomalous cards. These findings suggest that the basic capacity to detect anomalies—a feature of paradigmatic thinking—was impaired among system justifiers. We are currently examining whether this bias mediates the application of stereotypes and other system justifying tendencies.

There is considerable debate in political psychology about the origins of political ideology (see Jost, Noorbaloochi, & Van Bavel, 2014; Jost, Nam, Amodio, & Van Bavel, 2014). We recently completed a pair of large-scale structural neuroimaging studies to examine the neural correlates of system justification and to track changes over time. In the first wave of research, we found that people with stronger system justifying tendencies had greater grey matter volume in the bilateral amygdala (Nam, Kaggen, Campbell-Meiklejohn, Jost, & Van Bavel, in prep). Moreover, individual differences in amygdala volume mediated the relationship between system justification and political activities (e.g., social protest) up to a year into the future. We are currently scanning these same people almost three years later to see how exposure to different academic coursework (and other, extracurricular activities) may be associated with changes in their brain structure and their political ideology. This line of research will help us better understand how features of the social context can shape brain structure by examining changes over time and linking them to downstream political behavior. Finally, we are complementing this work by examining how damage to specific brain regions (e.g., the amygdala) may change system justification needs and political behavior. Together, this work will help shed light on one of the most basic questions in political neuroscience.

SCIENTIFIC APPROACH

“If social scientists choose to select rigorous theory as their ultimate goal, as have the natural scientists, they will succeed to the extent they traverse broad stretches of time and space. That means nothing less than aligning their explanations with those of the natural sciences.” – E. O. Wilson (1998)

My lab takes a social neuroscience approach to human psychology, blending theory and methods from social psychology and cognitive neuroscience. This approach rests on the assumption that complex social phenomena are best understood by combining social and biological theory and methods, and it involves breaking phenomena like social perception and evaluation into component processes in order to understand the operating characteristics of those components and how they work in concert. Analyzing these phenomena across multiple levels of analysis offers the promise of developing more general, process-oriented theories of human cognition and novel interventions for pressing social issues.

The NYU Department of Psychology is ideally equipped to conduct research using a social neuroscience approach. The Department of Psychology, in collaboration with the Center for Brain Imaging, offers lab space for conducting behavioral studies (including social cognition protocols), several electroencephalography (EEG) systems (including high-density 128-channel EEG), eye-tracking capabilities, and a Siemens Allegra 3T scanner. Students also have access to an imaging lab outfitted with Linux machines running the current post-processing software needed for neuroimaging, psychophysiological, and eye-tracking analyses, and we hold regular in-house training seminars on cutting-edge data collection and analysis techniques in social neuroscience.