Research

Research Overview

Organizing complex perceptual input in real time is crucial for our ability to interact with the world around us, and information received in the auditory modality in particular is central to many fundamental aspects of human behavior (e.g., spoken language, music, sound localization). Classic views of perception hold that we absorb environmental information through our senses and translate these inputs into signals that the brain organizes, identifies, and interprets in a bottom-up fashion. However, there is a long-standing debate in cognitive science about the degree to which top-down effects from higher-level processes such as emotions, actions, motivation, intentions, and linguistic representations directly influence perceptual processing (e.g., Firestone & Scholl, 2016, and the subsequent commentaries in Behavioral and Brain Sciences).

Our research supports this more flexible view of perception, focusing on the importance of interactions: between bottom-up and top-down processing, and within and across sensory modalities. To this end, we take an interdisciplinary approach to understanding auditory perception. Our research program encompasses music, speech, and cross-modal perception, using a combination of behavioral, cognitive neuroscience, and computational modeling approaches. This work speaks to a broad set of issues within psychological science: To what extent are cognitive processes encapsulated from one another? How is perception influenced by individual and cross-cultural differences, prior knowledge, expertise, and task demands?

Current Student Projects

The role of audiovisual interactions in speech perception (Hannah). Our goal for this experiment is to enhance the ecological validity of research using the McGurk effect, an illusion often used to study audiovisual speech integration. The illusion is created by presenting incongruent auditory and visual speech syllables (e.g., pairing audio for /ba/ with lip movements for /ga/); the majority of participants report a "fusion" percept of /da/. In our experiment, we used word stimuli that mimic everyday conversations (e.g., pairing audio for /beer/ with lip movements for /gear/) to determine listeners' interpretations. Comparing the proportion of fusion responses across task conditions (forced-choice vs. open-ended) and stimulus conditions (words vs. nonwords) will allow us to determine whether audiovisual integration in speech occurs at a lower perceptual level or a higher decision level, and will extend previous work to a more natural speech setting.

The influence of music on the perception of product features (Jordaine). To investigate the extent to which aspects of music influence audiovisual perception of a product's features, we created radio advertisements in which the pitch, tempo, or timbre of the music was manipulated (e.g., some participants heard high-pitched music and others heard low-pitched music paired with the same sandwich chain ad). Participants were then asked several questions about their perception of the advertised product. We are interested in whether participants perceive a product's durability, weight, and size differently when the ad's music is altered.

Personality influences on the use of music as a coping mechanism (Kunal). In the current study, we assessed individuals' music use in relation to their perceived stress levels and their personality type(s). We were specifically interested in stress generated both by the novel coronavirus pandemic and by the polarization induced by the current political climate (racial tension, the election cycle, etc.). Understanding the interactions among stress, music use, and personality during times of crisis may have implications for the use of shared musical experiences and may positively shape societal perceptions of beneficial coping mechanisms.

Recent Publications

For a full list of publications, see Google Scholar.

Getz, L., & Toscano, J. (2020; invited review). The time-course of speech perception revealed by real-time neural measures. WIREs Cognitive Science.

Getz, L., & Toscano, J. (2019). Electrophysiological evidence for top-down lexical influences on early speech perception. Psychological Science.

Getz, L., & Kubovy, M. (2018). Questioning the automaticity of audiovisual correspondences. Cognition.

Getz, L., Nordeen, E., Vrabic, S., & Toscano, J. (2017). Modeling the development of audiovisual cue integration in speech perception. Brain Sciences.