Facial expressions are non-verbal cues that are essential for communication. Two opposing models attempt to explain how we encode different facial expressions. The categorical model suggests that expressions are perceived according to predefined expression classifiers, each specific to a discrete emotion, whereas the continuous model assumes that expressions are perceived along general emotional dimensions, such as valence. Recently, a transcranial magnetic stimulation (TMS) study found that emotional expressions may be processed according to a continuous mechanism in the early visual cortex (EVC). The present study used TMS to determine whether the processing of expressions in the EVC is a valence-dependent or a category-specific process. In total, 18 healthy adults participated. Following the presentation of a facial expression (happy, angry, fearful, sad, disgusted, or surprised), single-pulse TMS was applied to the EVC at one of several time windows (90–150 msec), or no TMS was applied. Initially, analysis of variance revealed no effect of TMS on expression perception. However, further analysis examining emotional valence revealed that TMS appeared to selectively disrupt the processing of valence-ambiguous expressions at 130 msec. Additionally, the data indicate a tendency for TMS to disrupt the recognition of negative expressions, which is supported by previous research. However, recognition accuracy for negative expressions was considerably low across all TMS conditions, probably because of task difficulty. Further research should adapt the task and replicate the experiment with a larger data set. Overall, the findings indicate that affective information is processed according to a valence-dependent mechanism in the EVC, suggesting that a continuous model operates at this early stage of processing.
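For illustration only, the sketch below shows the general form of a repeated-measures analysis of variance like the one described above, with recognition accuracy analysed by TMS time window and expression category. The column names, window labels, and the synthetic accuracy values are assumptions for the example, not the dissertation's actual data or analysis code.

```python
# Hypothetical sketch: two-way repeated-measures ANOVA on recognition accuracy.
# Factor levels and data are invented for illustration; they are not the study's data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

participants = [f"p{i:02d}" for i in range(1, 19)]          # 18 participants, as in the study
tms_windows = ["no_tms", "90ms", "110ms", "130ms", "150ms"]  # assumed window labels
expressions = ["happy", "angry", "fearful", "sad", "disgusted", "surprised"]

# One mean accuracy value per participant x window x expression cell,
# since AnovaRM expects a balanced, aggregated design.
rows = [
    {
        "participant": p,
        "tms_window": w,
        "expression": e,
        "accuracy": rng.uniform(0.4, 0.9),  # placeholder accuracy
    }
    for p in participants
    for w in tms_windows
    for e in expressions
]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: TMS window x expression on accuracy.
result = AnovaRM(
    df, depvar="accuracy", subject="participant",
    within=["tms_window", "expression"],
).fit()
print(result)
```

A follow-up analysis of the kind reported in the abstract would then group the expression categories by valence (e.g. negative versus valence-ambiguous) and compare accuracy across TMS windows within each grouping.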