Several methods are available for coding body movement in nonverbal behavior research, but there is no consensus on a reliable coding system for the study of emotion expression. Adopting an integrative approach, we developed a new method, the Body Action and Posture coding system, for the time-aligned micro description of body movement on an anatomical level (different articulations of body parts), a form level (direction and orientation of movement), and a functional level (communicative and self-regulatory functions). We applied the system to a new corpus of acted emotion portrayals, examined its comprehensiveness, and demonstrated intercoder reliability at three levels: (a) occurrence, (b) temporal precision, and (c) segmentation. We discuss issues for further validation and propose some research applications.
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with full control over the facial action units and the corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
Recent research has shown that rapid judgments about the personality traits of political candidates, based solely on their appearance, can predict their electoral success. This suggests that voters rely heavily on appearances when choosing which candidate to elect. Here we review this literature and examine the determinants of the relationship between appearance-based trait inferences and voting. We also reanalyze previous data to show that facial competence is a highly robust and specific predictor of political preferences. Finally, we introduce a computer model of face-based competence judgments, which we use to derive some of the facial features associated with these judgments.
Micro-expressions have gained considerable attention because of their potential applications (e.g., transportation security) and theoretical implications (e.g., the expression of emotions). However, the duration of micro-expressions, considered their most important characteristic, has not been firmly established. The present study provides evidence for defining the duration of micro-expressions by collecting and analyzing fast facial expressions that leak genuine emotions. Participants were asked to neutralize their faces while watching emotional video episodes. Of the more than 1,000 elicited facial expressions, 109 leaked fast expressions (less than 500 ms) were selected and analyzed. Distribution curves of total duration and onset duration for the micro-expressions are presented. Based on these distributions and estimation, it seems suitable to define a micro-expression by a total duration of less than 500 ms or an onset duration of less than 260 ms. These findings may facilitate further studies of micro-expressions.
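The proposed duration criteria amount to a simple decision rule. The sketch below is illustrative only: the thresholds come from the abstract, while the function and variable names are hypothetical.

```python
# Thresholds suggested by the study's duration distributions.
TOTAL_MS_MAX = 500   # total duration criterion (ms)
ONSET_MS_MAX = 260   # onset duration criterion (ms)

def is_micro_expression(total_ms: float, onset_ms: float) -> bool:
    """Return True if either suggested duration criterion is met.

    A facial expression counts as a micro-expression when its total
    duration is under 500 ms or its onset duration is under 260 ms.
    """
    return total_ms < TOTAL_MS_MAX or onset_ms < ONSET_MS_MAX
```

For example, an expression with an 800 ms total duration but a 200 ms onset would still be classified as a micro-expression under the onset criterion.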
We investigated the correspondence between perceived meanings of smiles and their morphological and dynamic characteristics. Morphological characteristics included co-activation of Orbicularis oculi (AU 6), smile controls, mouth opening, amplitude, and asymmetry of amplitude. Dynamic characteristics included duration, onset and offset velocity, asymmetry of velocity, and head movements. Smile characteristics were measured using the Facial Action Coding System (Ekman et al. 2002) and Automated Facial Image Analysis (Cohn and Kanade 2007). Observers judged 122 smiles as amused, embarrassed, nervous, polite, or other. Fifty-three smiles met criteria for classification as perceived amused, embarrassed/nervous, or polite. In comparison with perceived polite, perceived amused more often included AU 6, open mouth, smile controls, larger amplitude, larger maximum onset and offset velocity, and longer duration. In comparison with perceived embarrassed/nervous, perceived amused more often included AU 6, lower maximum offset velocity, and smaller forward head pitch. In comparison with perceived polite, perceived embarrassed/nervous more often included mouth opening and smile controls, larger amplitude, and greater forward head pitch. The occurrence of AU 6 in perceived embarrassed/nervous and polite smiles calls into question the assumption that AU 6 with a smile is sufficient to communicate felt enjoyment. By comparing three perceptually distinct types of smiles, we found that perceived smile meanings were related to specific variation in smile morphological and dynamic characteristics.
This meta-analysis examines how interpersonal sensitivity (IS), defined as accurate judgment or recall of others' behavior or appearance, is related to psychosocial characteristics of the perceiver, defined as personality traits, social and emotional functioning, life experiences, values, attitudes, and self-concept. For 215 independent studies reported in 96 published sources, higher IS was generally associated with favorable or adaptive psychosocial functioning. Significant mean correlations were found for 27 of the 40 categories of psychosocial variables; these categories covered many different personality traits, indicators of mental health, and social and work-related competencies. Moreover, many additional studies that fell outside these conceptual categories also showed significant positive relations between IS and numerous other psychosocial variables. Taken together, the results support the construct validity of IS tests and demonstrate that IS is associated with many important aspects of personal and social functioning.
The “chameleon effect” refers to the tendency to adopt the postures, gestures, and mannerisms of interaction partners (Chartrand & Bargh, 1999). This type of mimicry occurs outside of conscious awareness, and without any intent to mimic or imitate. Empirical evidence suggests a bi-directional relationship between nonconscious mimicry on the one hand, and liking, rapport, and affiliation on the other. That is, nonconscious mimicry creates affiliation, and affiliation can be expressed through nonconscious mimicry. We argue that mimicry played an important role in human evolution. Initially, mimicry may have had survival value by helping humans communicate. We propose that the purpose of mimicry has now evolved to serve a social function. Nonconscious behavioral mimicry increases affiliation, which serves to foster relationships with others. We review current research in light of this proposed framework and suggest future areas of research.
Past research has revealed that natural social interactions contain interactional synchrony. The present study describes new methods for measuring interactional synchrony in natural interactions and evaluates whether the behavioral synchronization involved in social interactions is similar to dynamical synchronization found generically in nature. Two methodologies, a rater-coding method and a computational video image method, were used to provide time series representations of the movements of the co-actors as they enacted a series of jokes (i.e., knock–knock jokes). Cross-spectral and relative phase analyses of these time series revealed that speakers’ and listeners’ movements contained rhythms that were not only correlated in time but also exhibited phase synchronization. These results suggest that computational advances in video and time series analysis have greatly enhanced our ability to measure interactional synchrony in natural interactions. Moreover, the dynamical synchronization in these natural interactions is commensurate with that found in more stereotyped tasks, suggesting that similar organizational processes constrain bodily activity in natural social interactions and, hence, have implications for the understanding of joint action generally.
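As a hedged illustration of the kind of analysis described here (not the authors' actual pipeline), the sketch below estimates spectral coherence and Hilbert-based relative phase for two synthetic movement time series standing in for a speaker and a listener. The frame rate, rhythm frequency, and phase lag are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert, coherence

fs = 30.0                      # assumed video frame rate in Hz
t = np.arange(0, 30, 1 / fs)   # 30 s of synthetic data
speaker = np.sin(2 * np.pi * 1.0 * t)               # 1 Hz movement rhythm
listener = np.sin(2 * np.pi * 1.0 * t - np.pi / 4)  # same rhythm, lagged 45 deg

# (a) Cross-spectral coherence: shared rhythmicity per frequency band,
# estimated with Welch-style segment averaging.
freqs, coh = coherence(speaker, listener, fs=fs, nperseg=256)

# (b) Continuous relative phase from the analytic (Hilbert) signals:
# values clustering around a constant indicate phase synchronization.
phase_diff = np.angle(hilbert(speaker)) - np.angle(hilbert(listener))
mean_rel_phase = np.degrees(np.angle(np.mean(np.exp(1j * phase_diff))))
```

With real data, the movement signals would come from rater codings or frame-to-frame pixel-change measures rather than pure sinusoids, and the relative-phase distribution, not just its circular mean, would be examined.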
In this article, we review recent developments in the study of emotional expression within a basic emotion framework. Dozens of new studies find that upwards of 20 emotions are signaled in multimodal and dynamic patterns of expressive behavior. Moving beyond word-to-stimulus matching paradigms, new studies are detailing the more nuanced and complex processes involved in emotion recognition and the structure of how people perceive emotional expression. Finally, we consider new studies documenting contextual influences upon emotion recognition. We conclude by extending these recent findings to questions about emotion-related physiology and the mammalian precursors of human emotion.
Interpersonal coordination, the extent to which social partners coordinate each other’s postures and mannerisms, acts as a “social glue” that serves both individual and social goals, such as producing prosocial behaviors and facilitating harmonious interactions. Research in this area has become prominent in a variety of domains both within and outside of psychology, forming a sizeable literature dedicated to investigating the causes and consequences of interpersonal coordination. We conducted a series of meta-analyses on studies that treated interpersonal coordination as an independent variable, in order to measure its effect on several intrapersonal (e.g., mood, need to belong) and interpersonal (e.g., prosocial behavior) outcomes, as well as several potential moderators (e.g., percentage of female participants) that may affect the strength of the effect. Overall, the results demonstrated that the positive effects of interpersonal coordination are robust, with a few exceptions specific to intrapersonal outcomes. These findings provide a much-needed quantitative summary of the literature on interpersonal coordination, and highlight areas that merit future research.
When leaning toward a partner for a kiss, the direction in which individuals turn their head varies with the kiss’s context: romantic kissing between adult couples is consistently directed rightward, whereas a non-romantic kiss between parent–child pairs was recently observed to be leftward. The current study further examines lateral head-turning between non-romantic partners in a novel context: a first kiss between strangers. Observing strangers kissing was feasible thanks to a unique social media phenomenon; since 2014, 23 “First Kiss” online videos have emerged that depict kisses, facilitated by each video’s director, between consenting strangers. The turning directions of 230 kissing couples were coded from the 23 First Kiss videos, and the proportions of right and left turns were almost equal: 51% of couples displayed a right-turn kiss and 49% a left-turn kiss. Further, the proportions of right and left turns observed in our sample of strangers were compared with those in Güntürkün’s (Nature 421:711, https://doi.org/10.1038/421711a, 2003) original study of authentic kissing between adult couples, and a significantly different turning bias was exhibited. Because the kissing criteria were parallel across these studies, our study demonstrates that context, namely that of a non-romantic kiss, influenced the direction of bias. We discuss the potential roles of context and emotional lateralization in kissing laterality, and propose future directions to test these predictions.
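The reported near-equal split lends itself to a simple proportion test. The sketch below is an assumption-laden illustration: the abstract gives only the percentage, so 117 right turns out of 230 is a rounding of the reported 51%, not the exact published count.

```python
from scipy.stats import binomtest

n_couples = 230
n_right = 117  # assumed: 51% of 230, rounded; exact count not in the abstract

# Test whether the observed right-turn proportion differs from chance (0.5).
result = binomtest(n_right, n_couples, p=0.5)
# A non-significant p-value is consistent with the reported near-equal split;
# the same test against a rightward-biased reference proportion (as in the
# romantic-kissing literature) would address the cross-study comparison.
```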
This article describes the basic mechanisms by which the nonverbal behavior of a communicator can influence recipients’ attitudes and persuasion. We review the literature on classic variables related to persuasive sources (e.g., physical attractiveness, credibility, and power), as well as research on mimicry and facial expressions of emotion, and beyond. Using the elaboration likelihood model (ELM) as a framework, we argue that the overt nonverbal behavior of a source can affect attitude change through different psychological processes depending on the circumstances. Specifically, we describe the primary and secondary cognitive processes by which nonverbal behaviors of the source (e.g., smiling, nodding, eye contact, and body orientation) affect attitude change. Furthermore, we illustrate how considering the processes outlined by the ELM can help to predict when and why attractive, credible, and powerful communicators can not only increase persuasion but also be detrimental to persuasion.
This article discusses a new systems model of dyadic nonverbal interaction. The model builds on earlier theories by integrating partners’ parallel sending and receiving nonverbal processes into a broader, dynamic ecological system. It does so in two ways. First, it moves the level of description beyond the individual level to the coordination of both partners’ contributions to the interaction. Second, it recognizes that the relationships between (a) individuals’ characteristics and processes and (b) the social ecology of the interaction setting are reciprocal and best analyzed at the systems level. Thus, the systems model attempts to describe and explain the dynamic interplay among individual, dyadic, and environmental processes in nonverbal interactions. The potential utility and the limitations of the systems model are discussed and the implications for future research considered. Although the systems model is focused explicitly on face-to-face nonverbal communication, it has considerable relevance for digital communication. Specifically, this model provides a useful framework for examining the social effects of mobile device use and as a template for studying human–robot interactions.
Several authors have recently presented evidence for perceptual and neural distinctions between genuine and acted expressions of emotion. Here, we describe how differences in authenticity affect the acoustic and perceptual properties of laughter. In an acoustic analysis, we contrasted spontaneous, authentic laughter with volitional, fake laughter, finding that spontaneous laughter was higher in pitch, longer in duration, and had different spectral characteristics from volitional laughter that was produced under full voluntary control. In a behavioral experiment, listeners perceived spontaneous and volitional laughter as distinct in arousal, valence, and authenticity. Multiple regression analyses further revealed that acoustic measures could significantly predict these affective and authenticity judgements, with the notable exception of authenticity ratings for spontaneous laughter. The combination of acoustic predictors differed according to the laughter type, where volitional laughter ratings were uniquely predicted by harmonics-to-noise ratio (HNR). To better understand the role of HNR in terms of the physiological effects on vocal tract configuration as a function of authenticity during laughter production, we ran an additional experiment in which phonetically trained listeners rated each laugh for breathiness, nasality, and mouth opening. Volitional laughter was found to be significantly more nasal than spontaneous laughter, and the item-wise physiological ratings also significantly predicted affective judgements obtained in the first experiment. Our findings suggest that as an alternative to traditional acoustic measures, ratings of phonatory and articulatory features can be useful descriptors of the acoustic qualities of nonverbal emotional vocalizations, and of their perceptual implications.
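A minimal sketch of a multiple regression of the general form described, fit by ordinary least squares; the data are synthetic and all variable names, values, and the HNR-driven rating are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
pitch = rng.normal(300, 50, n)      # mean F0 in Hz (synthetic)
duration = rng.normal(2.0, 0.5, n)  # laugh duration in s (synthetic)
hnr = rng.normal(10, 3, n)          # harmonics-to-noise ratio in dB (synthetic)

# Synthetic rating driven mostly by HNR, loosely mirroring the reported
# unique role of HNR in predicting volitional-laughter ratings.
rating = 0.3 * hnr + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), pitch, duration, hnr])  # design matrix
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)        # OLS coefficients
# beta holds [intercept, pitch, duration, hnr] regression weights.
```

In the study itself, the outcome would be listeners' affect or authenticity judgements and the predictors the measured acoustic features, with separate models per laughter type.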
The present study evaluated whether the strength of relationship between child nonverbal behaviors (expressivity, attention, and coordination) across time points varied as a function of interviewer nonverbal behaviors (expressivity, attention, and coordination) under supportive versus neutral interviewing conditions. Children (n = 123) participated in an event where they were involved in breaking some rules. Three to four days later they were interviewed by either a supportive or neutral adult interviewer. Interviews were video recorded and nonverbal behaviors of both children and interviewers were coded. Multi-level modeling revealed that optimal interviewer nonverbal behaviors were predictive of optimal child nonverbal behaviors at the end of the interview. In contrast, explicitly manipulated interviewer supportiveness was related to suboptimal displays of child nonverbal behavior. Interestingly, as the interview progressed, optimally attentive interviewing was associated with suboptimal child expressivity scores. Likewise, displays of optimal interviewer coordination were associated with suboptimal child coordination scores over time. The implications of the findings for nonverbal behavior literature and professionals talking with children about sensitive information are discussed.
The ecological theory of social perception suggests that people’s first impressions should be especially accurate for judgments relevant to their goals. Here, we tested whether people could accurately judge others’ levels of antigay prejudice and whether gay men’s accuracy would exceed straight men’s accuracy in making these judgments. We found that people judged men’s (but not women’s) levels of antigay prejudice accurately from photos of their faces and that impressions of facial power supported their judgments. Gay men and straight men did not significantly differ in their sensitivity to antigay prejudice, however. People may therefore judge others’ levels of prejudice accurately regardless of their personal stake in its consequences.
The purpose of the present study was to examine whether emotion recognition skill and locus of control in 8-year-old children predicted teacher-rated Strengths and Difficulties Questionnaire scores (SDQ; Goodman in J Am Acad Child Adolesc Psychiatry 40:1337–1345, 2001) 2 years later. Children participating in the Avon Longitudinal Study of Parents and Children (ALSPAC; Golding in Eur J Endocrinol 151:U119–U123, 2004, https://doi.org/10.1530/eje.0.151U119) completed emotion recognition tests of child facial expressions and voices and a child locus of control scale when they were 8 years of age. Later, at age 10, as part of ALSPAC’s ongoing assessment of children’s personal and social lives, teachers completed the SDQ. Based on past research and developmental theory (e.g., Nowicki and Duke in J Nonverbal Behav 18:9–35, 1994; Thomas et al. in Dev Sci 10(5):547–558, 2007), it was predicted and found that children who made more recognition errors, were more external, and were male at age 8 had a greater number of teacher-rated psychological/behavioral difficulties at age 10 than those who made fewer errors, were more internal, and were female. Implications of the findings for children’s personal and social adjustment are discussed.
This study examined how learners’ age, English proficiency, and years of learning English affect the accuracy with which English as a foreign language (EFL) learners interpret nonverbal behaviors. The participants consisted of four groups of Japanese students: (a) 32 sixth graders attending public schools, (b) 18 sixth graders attending English immersion schools, (c) 30 university students with lower English proficiency, and (d) 32 university students with higher English proficiency. They watched 48 video clips, taken from EFL classrooms in Japanese elementary schools, without sound and judged whether the teachers had asked a question. The accuracy of their judgements was statistically analyzed and their comments were qualitatively analyzed. Multiple regression analyses pointed to students’ years of learning English as the sole, marginally significant predictor of accurate judgements, but only when teachers’ utterances were accompanied by gestures. This indicates that learners’ ability to correctly decode nonverbal behaviors developed only for teachers’ gestures. In addition to this qualitative aspect, a quantitative aspect was also affected by the duration of study: learners with over 6 years of learning English noticed a larger number of nonverbal behaviors, including gestures, when making correct judgements, which raised the minimum accuracy of their judgements. This implies that the effects of age and nativeness observed in the past literature on the interpretation of nonverbal behaviors may in fact have reflected the amount of exposure to the target language and culture.
Facial expressions of pain are important in assessing individuals with dementia and severe communicative limitations. Though frontal views of the face are assumed to allow for the most valid and reliable observational assessments, the impact of viewing angle is unknown. We video-recorded older adults with and without dementia using cameras capturing different observational angles (e.g., front vs. profile view) both during a physiotherapy examination designed to identify painful areas and during a baseline period. Facial responses were coded using the fine-grained Facial Action Coding System, as well as a systematic clinical observation method. Coding was conducted separately for a panoramic view (incorporating left, right, and front views) and a profile view of the face. Untrained observers also judged the videos in a laboratory setting. Trained coder reliability was satisfactory for both the profile and panoramic views. Untrained observer judgments from a profile view were substantially more accurate than those from the front view and accounted for more variance in differentiating non-painful from painful situations. The findings add specificity to communication models of pain (clarifying factors influencing observers’ ability to decode pain messages). Perhaps more importantly, the findings have implications for the development of computer vision algorithms and vision technologies designed to monitor and interpret facial expressions in a pain context. That is, the performance of such automated systems is heavily influenced by how reliably human annotations can be provided and, hence, evaluation of human observers’ reliability, from multiple angles of observation, has implications for machine learning development efforts.
Research suggests that certain individuals exhibit vulnerability through their gait, and that observers select such individuals as those most likely to experience victimization. It is currently assumed that the vulnerable gait pattern is an expression of one’s submissiveness. To isolate gait movement, Study 1 utilized kinematic point-light displays to record 28 individuals walking. The findings suggested that victimization history was related to gait vulnerability. The results also indicated that, contrary to expectation, individuals with more vulnerable features in their gait were more likely to self-report dominant personality characteristics, rather than submissive characteristics. In Study 2, a sample of 129 observers watched the point-light recordings and rated the walkers on their vulnerability to victimization. The results suggested that observers agreed on which walkers were easy targets; they were also accurate in that the walkers they rated as most likely to experience victimization tended to exhibit vulnerable gait cues. The current research is one of the few to explore the relationship between internal dispositions and non-verbal behavior in a sample of self-reported victims. The findings provide exciting insights related to the communicative function of gait, and the characteristics that may put some individuals at greater risk of being criminally targeted.