The topic of the present research is the influence of categorization on perceptual memory for facial expression. When people talk about others' emotional expressions, they may use specific emotion categories, such as sad, happy, and angry. However, categorization is affected by a number of factors, and this in turn leads to different perceptions of the same facial expression. Different perceptions of the same facial expression, moreover, motivate and justify different types of behaviour. The present research therefore investigated how emotion concepts affect perceptual memory for emotional expressions.
Only one previous study has directly tested contextual effects in the perception of facial expressions of emotion (Woll and Martinez, 1982, as cited in Halberstadt and Niedenthal, 2001). It found that people erred in recognizing facial expressions in the direction of incorrect labels. However, these errors occurred only for facial expressions from the pleasant and middle range of the emotion continuum, not for those originally rated as unpleasant. Moreover, the errors emerged only after a 15-minute delay, not after a 1-minute delay. These limitations point to weaknesses in Woll and Martinez's experiment.
Hypotheses The present research tested the hypothesis that using specific emotion concepts, such as happy, angry, and sad, to explain perceived emotional facial expressions would bias perceptual memory for those expressions. Beyond this, it further tested whether the biasing effects differ when different types of conceptualization are involved, namely verbalized explanation, imagined explanation, and mere labeling, and when different types of emotional expressions are encoded, namely angry-happy and angry-sad.
Research Design In the present research, three experiments were carried out. Subjects In total, two hundred and eleven female and eighty-six male university students participated across the three experiments. Materials In Experiments 1 and 2, a set of eight seamless digital angry-happy movies, each consisting of 100 facial composites, was used. In Experiment 3, angry-sad movies were used instead of angry-happy ones. The validity of the actors' facial expressions was pretested on an independent group of eighty-three participants. The psychological midpoint of the movies was determined in a pretest by an independent group of twenty-three participants for the angry-happy movies and thirty participants for the angry-sad movies.
Procedure Experiment 1 was divided into two parts, namely the presentation phase and the recognition phase. In the presentation phase, participants were given three angry and three happy experimental trials. Each target face appeared on the screen for 70 seconds, with the instruction "Explain why this person is feeling [angry/happy]" below it. Participants were divided into two groups, verbalizers and imaginers. Verbalizers were told to speak their explanations into a tape recorder, while imaginers were told to construct their stories silently in their heads at this stage.
After a 30-minute unrelated filler task, the recognition phase began. Participants were asked first to view the entire movie using the sliding bar and then to indicate which face was identical to one of the faces they had told a story about. As in Experiment 1, Experiment 2 consisted of two phases, presentation and recognition. However, participants in Experiment 2 were divided into three conditions, namely explanation, label, and control.
During the presentation phase, participants viewed the six target faces used in Experiment 1. In the explanation condition, half of the faces were paired with an angry concept and the other half with a happy concept, and participants were told to explain why the target faces expressed those emotions. In the label condition, half of the faces were labeled as angry and the other half as happy.
In the control condition, no prompt was given for the faces. Participants in the label and control conditions were asked simply to view the target faces. After a 30-minute unrelated filler task, participants took part in the recognition phase, which was the same as that of Experiment 1. The procedure of Experiment 3 was identical to that of Experiment 2, except that angry-sad movies and target faces were used instead.
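Because recognition responses are positions on a 100-frame morph continuum, the memory bias implied by the hypothesis can be expressed as a signed displacement of the selected frame from the true target frame. The sketch below is a hypothetical illustration of such a scoring scheme (the function name, frame numbering, and sign convention are assumptions for illustration, not details from the study):

```python
def memory_bias(selected_frame: int, target_frame: int, label_direction: int) -> int:
    """Signed displacement of the remembered face along a 100-frame
    morph continuum (here assumed: frame 0 = pure anger, frame 99 =
    pure happiness or sadness, depending on the movie).

    label_direction: +1 if the face was paired with the emotion at the
    high end of the continuum, -1 if paired with the low end (angry).
    A positive score means memory shifted toward the labeled emotion.
    """
    return (selected_frame - target_frame) * label_direction

# Example: the target was frame 50 and the participant selected frame 57
# after an "angry" pairing (low end of the continuum). The displacement
# is 7 frames away from the labeled emotion, so the bias score is -7.
print(memory_bias(57, 50, -1))  # -> -7
```

Under this convention, the concept-biasing hypothesis predicts mean bias scores above zero in the explanation (and possibly label) conditions, and scores near zero in the control condition.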