Antje Ihlefeld

Our ears are exquisitely tuned to pick up the air vibrations around us, which the brain turns into auditory percepts. When only one sound source is present in an environment, sound detection and identification could in principle be solved by a linear time-invariant system. In everyday settings, however, multiple sound sources are often present at the same time, confronting the brain with the difficult challenge of disentangling a sound source of interest from the acoustic mixture. I am interested in how the brain accomplishes this feat, and in what happens to the central nervous system in individuals with hearing loss and in cochlear implant users.
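To make the single-source case concrete, here is a minimal sketch in Python (illustrative only; the sample rate, source waveform, and noise level are all assumed) of detecting one known source in noise with a matched filter, which is itself a linear time-invariant operation:

import numpy as np

rng = np.random.default_rng(0)
fs = 16000                                # sample rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)            # 50-ms template
template = np.sin(2 * np.pi * 1000 * t)   # known 1-kHz source waveform

# Embed the source at a random onset within 1 s of Gaussian noise.
mixture = rng.normal(scale=0.5, size=fs)
onset = int(rng.integers(0, fs - template.size))
mixture[onset:onset + template.size] += template

# Matched filtering: convolve with the time-reversed template. This is
# an LTI operation, and its output peaks at the source onset.
detector = np.convolve(mixture, template[::-1], mode="valid")
print("true onset:", onset, "| detected onset:", int(np.argmax(detector)))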


Biography

I was trained under Barbara Shinn-Cunningham and received my PhD from Boston University in 2007. For my dissertation research I worked on binaural hearing, specifically, spatial release from energetic and informational masking, and sound localization in anechoic and reverberant environments. For my postdoctoral training, I worked with Gerald Kidd and Chris Mason at Boston University on informational masking under hearing loss; with Bob Carlyon at the MRC Cognition and Brain Sciences Unit in Cambridge, UK, on rate coding and modulation masking release for cochlear implant listeners; with Ruth Litovsky at the University of Wisconsin-Madison on binaural hearing and spatial masking release in bilateral cochlear implant users; and with Dan Sanes at New York University on how sound deprivation affects central nervous system function.

Research

My research spans psychophysics and physiology, in concert with computational modelling.

Masking Release under Sound Deprivation

Sound deprivation can alter auditory function throughout the auditory neuraxis, even when peripheral function is restored. We use a biological model of sound deprivation to examine how cortical function supports masking release.


Informational Masking

In an acoustic mixture of simultaneous sources, informational masking can arise when sounds are perceptually similar to one another, or when the listener is uncertain which cues to attend to. We use psychophysics and computational modelling to understand the principles underlying informational masking.
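As a toy illustration of the uncertainty component (my own illustrative sketch with arbitrary parameters, not a model from our published work): an ideal energy detector that knows which frequency band carries the target outperforms one that must monitor many candidate bands, even though the target itself is unchanged.

import numpy as np

rng = np.random.default_rng(1)
n_trials, target_level = 20000, 1.0     # arbitrary toy parameters

def percent_correct(n_bands):
    # Two-interval task: pick the interval whose loudest band is louder.
    signal = rng.normal(size=(n_trials, n_bands))
    signal[:, 0] += target_level        # the target sits in one band
    noise = rng.normal(size=(n_trials, n_bands))
    return np.mean(signal.max(axis=1) > noise.max(axis=1))

print("certain listener (1 band):    ", percent_correct(1))
print("uncertain listener (10 bands):", percent_correct(10))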


Sound Localization

Most everyday environments are reverberant, yet studies of sound localization have largely focused on anechoic listening. We use psychophysics and computational modelling to understand how reverberation affects our ability to localize sound.
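As a simple illustration (a toy sketch with assumed parameters, not one of our models): a cross-correlation estimate of the interaural time difference (ITD) recovers the true value for an anechoic source, while adding a single echo per ear introduces a competing, spurious correlation peak.

import numpy as np

rng = np.random.default_rng(2)
fs = 44100                              # sample rate in Hz (assumed)
itd = 20                                # true ITD in samples (~0.45 ms)
src = rng.normal(size=fs // 10)         # 100 ms of source noise

left = src
right = np.roll(src, itd)               # the right ear lags the left

def correlogram(l, r, max_lag=50):
    # Cross-correlate over a window of plausible ITDs (np.roll is
    # circular, a simplification that is harmless in this toy example).
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(l, np.roll(r, -lag)) for lag in lags])
    return lags, cc

lags, cc = correlogram(left, right)
print("anechoic ITD estimate:", lags[np.argmax(cc)])

# One strong echo per ear, arriving with a different interaural delay:
# this adds a spurious correlation peak at the wrong lag.
left_rev = left + 0.8 * np.roll(left, 300)
right_rev = right + 0.8 * np.roll(right, 315)
lags, cc = correlogram(left_rev, right_rev)
top_two = lags[np.argsort(cc)[::-1][:2]]
print("reverberant correlation peaks at lags:", top_two)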


Listening with Cochlear Implants

Cochlear implants can restore auditory function in profoundly deaf individuals and are among the most successful neural prostheses to date. We use psychophysics and computational modelling with the aim of improving processing strategies for delivering auditory cues via cochlear implant processors.
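As a rough illustration of the kind of processing involved (a minimal sketch with assumed parameters, not a clinical strategy): continuous-interleaved-sampling-style strategies split sound into frequency bands and extract each band's slowly varying envelope, which then modulates the electrical pulses delivered to one electrode.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                                       # sample rate in Hz (assumed)
n_channels = 8                                   # a typical CI channel count
edges = np.geomspace(200, 7000, n_channels + 1)  # log-spaced band edges

# A 440-Hz tone with a slow 4-Hz amplitude modulation as test input.
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

sos_env = butter(2, 200, btype="lowpass", fs=fs, output="sos")
for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
    sos_band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos_band, audio)
    # Envelope: half-wave rectify, then low-pass filter (200-Hz cutoff
    # assumed). The envelope would set per-electrode pulse amplitudes.
    env = sosfilt(sos_env, np.maximum(band, 0))
    print(f"channel {k} ({lo:.0f}-{hi:.0f} Hz): rms envelope = "
          f"{np.sqrt(np.mean(env ** 2)):.4f}")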