MINDLab Conference: Music and Language in the Brain

Abstracts (in order of appearance)

Functional and Structural Correlates of Auditory Abilities: Predispositions and Plasticity

Robert J. Zatorre (McGill, Montreal)

In this lecture I will review findings from our lab and others on the relationship between anatomical features of human auditory cortex and the functions it subserves. I start with hemispheric differences in basic auditory processing and the consequences these may have for the processing of speech and musical signals. I then consider anatomical asymmetries in auditory cortex and how they may relate to the functional differences that exist, with particular emphasis on the specialization of left auditory cortex for processing speech-relevant stimuli. In a parallel fashion, I examine how the anatomical structure of left auditory cortex may be related to the processing of tonal signals. Overall, the findings indicate a consistent relationship between individual differences in cortical anatomy and behavioral abilities in both the speech and the music domains. I close by discussing how these effects may be understood in the context of plasticity and of predispositions.




Language and Music: same structures, different building blocks.

David Pesetsky (MIT, Cambridge MA) & Jonah Katz (Institut Jean Nicod, Paris)

Is there a special kinship between music and language? Both are complex, law-governed cognitive systems; both are universal across the human species, yet show some variation from culture to culture. Do the similarities run deeper than this? Although there is a rich tradition of speculation on this question, the current consensus among researchers is quite cautious. In this talk, we offer a linguist's perspective on the issue – and argue against the cautious consensus. Though the formal properties of music and language do differ, we propose that these differences reflect what is obvious: that the fundamental building blocks of language and music are different (for example, words vs. pitches). In all other respects, however – in what they do with these building blocks – language and music are identical. We call this proposal the Identity Thesis for Language and Music.

In particular, we propose, developing and extending earlier proposals by Lerdahl & Jackendoff (1983), that music, like language, contains a syntactic component in which headed structures are built by the same recursive rule that is also central to linguistic syntax, the rule called Merge. Time permitting, we will present some pilot experimental results that bear on this proposal. We further argue that the species of Merge known to linguists as syntactic movement (= Internal Merge) is also found in music, and is a crucial element in the main key-defining device of Western tonal music: the perfect cadence.
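To make the combinatorial claim concrete, here is a minimal sketch of Merge as a binary, headed, recursive structure-building operation. This is an illustrative toy, not the authors' formalism; the head-first labeling convention and the example objects (words, Roman-numeral chord symbols) are assumptions of the sketch.

```python
# A minimal sketch of Merge as a binary, headed, recursive operation.
# Illustrative only: labeling convention and toy objects are assumptions,
# not the authors' formalism. Internal Merge (movement) would re-merge an
# element already contained inside the structure being built.

from dataclasses import dataclass
from typing import Union

Node = Union[str, "Phrase"]

@dataclass(frozen=True)
class Phrase:
    head: Node       # the element whose label projects (first arg, by convention here)
    dependent: Node  # the element it combines with

def merge(head: Node, dependent: Node) -> Phrase:
    """External Merge: combine two objects into one headed constituent."""
    return Phrase(head, dependent)

# Different building blocks, same combinatorial step:
vp = merge("play", merge("the", "cello"))  # language: words
cadence = merge("I", "V")                  # music: a dominant resolving to tonic
print(vp)
print(cadence)
```

The point of the sketch is only that nothing in the operation itself cares whether its arguments are words or pitches; the domains differ solely in their atoms.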




The pleasure of music.

Morten Kringelbach (University of Oxford)

Music is an integral part of human life and perhaps uniquely human. The neural principles underlying the translation of acoustic information into human pleasure are, however, not well understood. Here, I review how music compares to the other fundamental pleasures. I show how music can both inform and shape the fundamental principles of human brain function, specifically using the role of prediction as a guiding computational brain principle. I discuss how the sensations of groove, swing, and ‘chills’ are distinct, and perhaps uniquely human, pleasure-evoking responses to music, in which hedonic expectation and evaluation are mediated through the reward networks of the brain, drawing upon the underlying principles of musical expectancy.




Piecing it together: Musical anticipation, pleasure and dopamine release.

Line Gebauer (MINDLab, Aarhus)

Music listening is highly pleasurable, and recent research shows that music can activate the brain’s dopaminergic reward system. Yet previous studies have not explained the brain mechanisms underlying music’s ability to induce pleasure and to evoke brain responses comparable to those of natural rewards. Here we offer a framework – predictive coding theory, originally developed by Karl Friston – as a novel way of understanding how the brain processes music. We suggest anticipation as a fundamental mechanism underlying musical pleasure, and dopamine as a prime agent in coding musical anticipation and thus an important mediator between music and pleasure. Dopamine is critically involved in anticipation, especially with regard to reward anticipation and the registration of prediction errors. Dopamine is often assumed to cause pleasure, but here we suggest that it instead mediates musical pleasure through anticipation. Previous studies have primarily focused on the functional localization of auditory and emotional processing of music; here we provide an explanatory framework for understanding how the brain integrates context-sensitive musical predictions and actual sensory input into a perceptual whole.
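The core quantity in this account is the prediction error: the gap between what the listener expects and what actually arrives. Below is a toy sketch of that bookkeeping; the numbers and learning rate are hypothetical, and this is not the model presented in the talk.

```python
# A toy sketch of anticipation and prediction-error updating
# (hypothetical values; not the framework's actual model).

import math

belief = 0.5                          # prior: p(next tone is the tonic)
for tone_is_tonic in (1, 1, 1, 0):    # a cadence confirmed three times, then denied
    p_observed = belief if tone_is_tonic else 1.0 - belief
    surprisal = -math.log(p_observed)  # how unexpected this tone was
    error = tone_is_tonic - belief     # signed prediction error
    belief += 0.3 * error              # update the expectation on the error
    print(f"surprisal={surprisal:.2f}  error={error:+.2f}  belief={belief:.2f}")
```

The denied cadence at the end produces the largest surprisal; on the proposed account, it is this kind of anticipatory mismatch that dopamine is thought to code.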



Music and pain

Eduardo Garza (MINDLab, Aarhus)

Listening to music reduces acute and chronic pain, but little is known about the mechanisms behind this analgesic effect. Recent research suggests distraction and emotion as primary mechanisms. Here we discuss recent findings from our studies using experimental pain in healthy participants, and the future of research on music and pain.



Music and Language in the Deaf Brain.

Bjørn Pedersen (MINDLab, Aarhus)

Background: Cochlear implantation (CI) is a surgical treatment that restores hearing sensation in individuals with pre- or postlingual hearing loss. Many implant recipients achieve good speech understanding but find perception of music and speech prosody very challenging. Music and language rely on brain processing of fundamental aspects of sound such as pitch, timing, and timbre. Partly overlapping brain structures are involved in this process, and recent studies have shown that complex music tasks activate brain areas associated with language processing. We hypothesized that significant improvement of musical skills can be achieved if active learning processes are initiated, and that improved perception of music could generalize to speech perception, especially the prosodic properties of language.

Methods: Eighteen newly implanted adult CI recipients were assigned to either a music group or a control group. Shortly after switch-on of the implant, the nine participants in the music group began weekly one-to-one musical ear training. The control group received no training but followed standard aural therapy. We measured perception of music, speech, and prosody at baseline, 3 months, and 6 months. Furthermore, to map the brain activity underlying the participants’ auditory development, we used positron emission tomography (PET) concurrently with the behavioral tests.

Results: The participants in the music group significantly improved their overall music perception compared to the control group. Discrimination of timbre, pitch, and melodic contour improved in particular, and rhythm discrimination equaled normal-hearing levels. Both groups showed remarkable improvement in average speech perception performance, while the music group showed an earlier onset of progress in recognition of emotional prosody compared to controls. Both the behavioral and the PET data showed a significant effect of history of hearing loss on speech perception performance. Furthermore, activation of Broca’s area as an effect of time was observed exclusively in CI listeners with postlingual hearing loss.

Conclusion: Our results suggest that one-to-one musical ear training has great potential as an effective method for improving overall music perception in CI users, in particular for timbre, pitch, and rhythm. Perception of speech may not necessarily benefit from musical ear training, but recognition of emotional prosody may develop faster. If implemented in aural/oral rehabilitation therapy, the proposed musical ear training program could form a valuable complementary method of auditory rehabilitation and, in the long term, contribute to improved quality of life in CI users. The study also demonstrates the impact of hearing background on adaptation to the implant and the key role of Broca’s area in the restoration of speech perception.



Syntactic complexity and the brain.

Ken Ramshøj Christensen (MINDLab, Aarhus)

The syntax of human language is a recursive, generative system, and both of these properties are ubiquitous in the natural world. In fact, sentences, trees, and brains share hierarchical structure (i.e., tree structure) as a fundamental principle. Aspects of syntax are subject to general constraints on structure which in themselves have no meaning; in contrast, the way we use and manipulate these structures is (usually) meaningful. Another parallel between trees and brains is that both have bark, in the brain called cortex. I argue that activation in the “bark” of the brain depends on the nature of the syntactic tree, that is, on structural properties of any given sentence. I present results from a series of neuroimaging studies on sentence comprehension which also support a much more distributed implementation of language in the brain than is usually assumed, especially in neuropsychology textbooks. In particular, comprehension phenomena at the interface between syntax and pragmatics (i.e., between form and function) engage Broca’s area (often called the speech area), whereas purely structural differences in syntactic form engage motor and premotor cortex. Language is a complex system of subsystems (or modules) implemented in the brain as distributed and overlapping networks.



On common ground? Timing, rhythm, and syntax in tonal and sentential processing.

Sonja A. Kotz (Max Planck Institute, Leipzig)

Neural cortical correlates of linguistic functions such as syntax and phonology are well supported in the neuroscience literature. However, the influence of non-linguistic functions such as timing, rhythm, and attention, well established in music research, is currently only sparsely considered in speech and language research. This is surprising, as the latter functions (1) play a critical role in learning, (2) seem to compensate for brain dysfunction and developmental disorders, (3) can reveal commonalities and differences between domains (e.g., music and language), and (4) can further our understanding of subcortical contributions to linguistic and non-linguistic functions. In this context, I will focus on basal ganglia and cerebellar circuitries, which are involved in beat perception, timing, attention, memory, language, and motor behaviour (see Kotz, Schwartze, & Schmidt-Kassow, 2009; Kotz & Schwartze, 2010). Furthermore, I will present a concept of how linguistic and non-linguistic functions interact and will support this concept with recent event-related potential (ERP) data from healthy and brain-damaged populations as well as functional magnetic resonance imaging (fMRI) evidence.



Towards a neural basis of processing musical meaning.

Stefan Koelsch (Freie Universität Berlin)

The understanding of meaning is critical for language perception, and therefore the majority of research on meaning processing has focused on the semantic, lexical, conceptual, and propositional processing of language. However, meaning is also conveyed by other sources, such as music, and the investigation of music perception can substantially broaden our understanding of how the human brain processes meaning information. This talk reviews neuroscientific studies on the processing of musical meaning. These studies reveal two neural correlates of meaning processing: the N400 and the N5. I propose that the N400 can be elicited by musical stimuli due to the processing of extra-musical meaning, whereas the N5 can be elicited due to the processing of intra-musical meaning. Notably, whereas the N400 can be elicited by both linguistic and musical stimuli, the N5 has so far only been observed for the processing of meaning in music. Thus, knowledge about both the N400 and the N5 can advance our understanding of how the brain understands the world.



Rhythm, Stress and Determinism.

Douglas Saddy (CINN, Reading)

Sometimes we hear things that are not there.  For example, when we hear a beep or note repeated systematically (a very simple form of determinism) for a period of time we come to hear it as an iamb.  This is an auditory illusion in which a regular and unchanging signal is perceived as having a rhythm or contour.  During this talk I will present experimental evidence for two other illusions in which a rhythm is perceived, both deriving from a simple deterministic source –one in which statistical regularity drives the perception and another in which determinism is retained but statistical regularity is not.    These illusions raise questions about the nature of rhythm and the perception of auditory contours in both music and speech.
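The distinction between the two kinds of source can be made concrete. A strictly periodic sequence is both deterministic and statistically regular; a substitution sequence such as the Fibonacci word is equally deterministic but never repeats, so simple statistical regularity is absent. The sketch below is illustrative only and is not necessarily the stimuli used in the talk.

```python
# Two deterministic ways to generate a tone sequence (illustrative only).

def periodic(n: int) -> str:
    """Strict alternation: deterministic AND statistically regular."""
    return ("AB" * n)[:n]

def fibonacci_word(n: int) -> str:
    """Substitution A -> AB, B -> A: fully deterministic but aperiodic."""
    rules = {"A": "AB", "B": "A"}
    s = "A"
    while len(s) < n:
        s = "".join(rules[c] for c in s)
    return s[:n]

print(periodic(13))        # ABABABABABABA
print(fibonacci_word(13))  # ABAABABAABAAB
```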



Perception of speech sounds: Predispositions, Specificity, and Plasticity.

Ocke-Schwen Bohn (Aarhus)

This talk presents a review of innate and prenatally learned speech perception abilities in humans, and of how these predispositions are shaped by language experience in the first year of life. For both speech segments and linguistic tone, language experience serves to maintain or heighten perceptual abilities. However, absence of specific experience causes either reduction in the discriminability of speech sounds not encountered in the ambient language(s), or purely auditory, non-linguistic perception of sound that has no linguistic function in the ambient language (e.g., tones, clicks in non-tone and non-click languages). Research on the plasticity of speech perception abilities throughout childhood, adolescence, and adulthood has made it abundantly clear that our innate speech perception abilities are never lost; instead, tone and phone perception is (re-)learnable throughout life.



Processing efficiency, frequency, and word order.

Johannes Kizach (MINDLab, Aarhus)

According to the Parsing Theory of Order and Constituency (PTOC) (Hawkins 1994, 2004), word order preferences are to a large extent driven by processing efficiency. In cases where speakers have a choice between two or more alternative orders, there is a clear tendency to use the most efficient one, i.e., the order that facilitates processing the most. In my talk I shall present two types of data supporting PTOC: (a) a number of corpus studies of various languages have demonstrated a strong correlation between processing efficiency and frequency of occurrence; and (b) an fMRI study on Modern Hebrew (Ben-Shachar et al. 2004) has shown that the level of cortical activity is correlated with processing efficiency, such that the more efficient an order is, the less activation it elicits.
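Hawkins operationalizes efficiency with metrics such as the IC-to-word ratio: the number of immediate constituents (ICs) of a phrase divided by the number of words the parser must scan before all ICs are recognized. The sketch below is a simplified version of that metric; it assumes each IC is recognized at its first word, and the phrase lengths are hypothetical.

```python
# A simplified sketch of Hawkins' IC-to-word ratio (the efficiency metric
# behind PTOC). Assumptions: each immediate constituent (IC) is recognized
# at its first word, and the recognition domain spans from the start of the
# phrase to the first word of its last IC. Phrase lengths are hypothetical.

def ic_to_word_ratio(ic_lengths: list[int]) -> float:
    """Higher = all ICs recognized over fewer words = easier parsing."""
    n_ics = len(ic_lengths)
    domain = sum(ic_lengths[:-1]) + 1  # all words before the last IC,
    return n_ics / domain              # plus the last IC's first word

# A verb (1 word) followed by two PPs of 2 and 6 words, in either order:
print(ic_to_word_ratio([1, 2, 6]))  # short PP first -> 3/4 = 0.75
print(ic_to_word_ratio([1, 6, 2]))  # long PP first  -> 3/8 = 0.375
```

Placing the short constituent before the long one yields the higher ratio, matching the short-before-long preference that the corpus frequencies reflect.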



The Sounds of Sadness and the Sounds of Joy: Hemispheric Differences in the Processing of Emotion-bearing Spectral Information.

Ethan Weed (MINDLab, Aarhus)

Prosody is the melody and timbre with which language is spoken, and it can convey important information about the intentions and emotions of the speaker. Imaging and lesion data suggest that the right auditory cortices may be biased toward processing spectral rather than temporal aspects of the acoustic signal (Hyde, Peretz, & Zatorre, 2008; Tranel & Damasio, 1990). While this spectral information conveys such aspects of music as tonal “color,” it also conveys aspects of a speaker’s emotional state in spoken language. For instance, a “broken” voice expressing sadness or sorrow may be characterized by a higher degree of spectral noise than a happy or neutral voice (Banse & Scherer, 1996). Schirmer and Kotz (2006) have suggested that auditory areas in the right hemisphere are important for the extraction of acoustic cues signifying emotional valence. In this presentation, I report on a preliminary ERP study investigating the lateralization of linguistic and non-linguistic spectral processing.
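One common way to quantify spectral noise of this kind is spectral flatness: the ratio of the geometric to the arithmetic mean of the power spectrum, near 1 for noise-like signals and near 0 for tonal ones. The sketch below is illustrative only; the acoustic measures actually used in the study are not specified here.

```python
# Spectral flatness as a rough index of how noise-like a signal is
# (illustrative; not the study's actual analysis pipeline).

import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric / arithmetic mean of the power spectrum (0 = tonal, 1 = noise)."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # avoid log(0)
    return float(np.exp(np.log(power).mean()) / power.mean())

sr = 16000                                   # sample rate in Hz
t = np.arange(sr) / sr                       # one second of samples
clear = np.sin(2 * np.pi * 220 * t)          # steady, tonal "voice"
broken = clear + 0.5 * np.random.randn(sr)   # the same tone plus noise

print(f"clear:  {spectral_flatness(clear):.3f}")   # close to 0
print(f"broken: {spectral_flatness(broken):.3f}")  # noticeably higher
```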



Narrative emotions.

Mikkel Wallentin (MINDLab, Aarhus)

Emotions are often understood in relation to conditioned responses. Narrative emotions, however, cannot be reduced to a simple associative relationship between emotion words and their experienced counterparts: intensity in stories may arise without any overtly emotion-depicting words, and vice versa. This talk presents behavioral, physiological, and brain imaging data on the processing of emotions in relation to stories.



Absolute pitch ability is linked to autism and vice versa.

Anders Dohn (MINDLab, Aarhus)

Absolute pitch (AP) is the ability to instantly and effortlessly identify and produce any musical pitch without aid from external references. The prevalence of AP is frequently estimated at 0.01% of the general population; however, there is a substantially higher AP prevalence (around 5%) in individuals with autism spectrum disorder (ASD). This suggests that AP could be associated with some distinctive cognitive and social characteristics seen in the autistic spectrum. Pitch perception in ASD has been investigated in numerous studies, yet the degree of autistic traits in AP possessors (APs) has only sparsely been examined. Here, we measured the degree to which APs have the traits associated with the autistic spectrum compared to non-AP possessors (non-APs), hypothesizing a correlation between AP ability and the degree of autistic traits. Thirty-four subjects (16 APs and 18 non-APs) participated in the study. All subjects were matched with regard to gender, age, and onset of musical training. We used a test for AP with sine-wave tones and piano tones developed by Baharloo et al. (1998) and the Autism-Spectrum Quotient (AQ) developed by Baron-Cohen et al. (2001). We found that the APs had significantly higher AQ scores than the non-APs, and that AQ scores correlated significantly with scores on the AP test. These results suggest that APs have a stronger tendency towards autistic traits than non-APs.
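For readers unfamiliar with the analysis, the reported relationship is a simple bivariate correlation between the two scores. A minimal sketch with made-up numbers (not the study's data):

```python
# A minimal sketch of the reported analysis: a Pearson correlation between
# AP-test scores and AQ scores. The numbers below are hypothetical.

from scipy import stats

ap_test = [31, 28, 35, 12, 9, 30, 7, 26]   # hypothetical AP test scores
aq      = [22, 19, 25, 11, 9, 21, 8, 18]   # hypothetical AQ scores

r, p = stats.pearsonr(ap_test, aq)
print(f"r = {r:.2f}, p = {p:.4f}")
```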



The many faces of musical expertise.

Peter Vuust (Royal Academy of Music, Aarhus)

Musicians’ skills, musical preferences, and the way they communicate through music depend strongly on their instrument, the style of music they play, and their level of expertise. This presentation discusses the effect of different types of musical expertise on the human brain and the putative transfer of this expertise to other areas of human cognition, such as language or memory. It also reports the results of a novel EEG study on how musicians’ musical style modulates their neural and behavioral responses to acoustic change embedded in real music. The results suggest that the predictive power of the brain in relation to auditory stimuli is influenced by the specificity of training and listening experiences.

