In a new paper, Roel Willems, Peter Hagoort, and I show that music and language are tightly coupled in the brain. Get the gist in a 180-second YouTube clip and then try out what my participants did.
The task my participants had to do might sound very abstract to you, so let me make it concrete. Listen to these two music pieces and tell me which one sounds ‘finished’:
I bet you thought the second one ended a bit oddly. How do you know? You use your implicit knowledge of harmonic relations in Western music for such a ‘finished judgement’. All we did in the paper was test whether an aspect of language grammar (syntax) can influence your ability to hear these harmonic relations, as revealed by ‘finished judgements’. The music pieces we used sounded very similar to what you just heard:
It turns out that reading syntactically difficult sentences while hearing the music reduced the feeling that pieces like this actually ended well. This indicates that processing language syntax draws on brain resources which are also responsible for music harmony.
Difficult syntax: The surgeon consoled the man and the woman put her hand on his forehead.
Easy syntax: The surgeon consoled the man and the woman because the operation had not been successful.
Curiously, sentences with a difficult meaning had no influence on the ‘finished judgements’.
Difficult meaning: The programmer let his mouse run around on the table after he had fed it.
Easy meaning: The programmer let his field mouse run around on the table after he had fed it.
Because only language syntax influenced ‘finished judgements’, we believe that music and language share a common syntax processor of some kind. This conclusion is in line with a number of other studies which I blogged about before.
What this paper adds is that we rule out an attentional link between music and language as the source of the effect. In other words, difficult syntax doesn’t simply distract you and thereby disable your music hearing. Its influence is instead based on a common syntax processor.
In the end, I tested 278 participants across 3 pre-tests, 2 experiments, and 1 post-test. Judge for yourself whether it was worth it by reading the freely available paper here.
— — —
Kunert R, & Slevc LR (2015). A Commentary on: “Neural overlap in processing music and speech”. Frontiers in human neuroscience, 9 PMID: 26089792
Kunert, R., Willems, R., & Hagoort, P. (2016). Language influences music harmony perception: effects of shared syntactic integration resources beyond attention. Royal Society Open Science, 3 (2). DOI: 10.1098/rsos.150685
When you read a book and listen to music, the brain doesn’t keep these two tasks nicely separated. In a new article just out, I show that there is a brain area which is busy with both tasks at the same time (Kunert et al., 2015). This brain area might tell us a lot about what music and language share.
The brain area which you see highlighted in red on this picture is called Broca’s area. Since the 19th century, many people have believed it to be ‘the language production part of the brain’. However, a more modern theory proposes that this area is responsible for combining elements (e.g., words) into coherent wholes (e.g., sentences), a task which needs to be solved to understand and produce language (Hagoort, 2013). In my most recent publication, I found evidence that at the same time as combining words into sentences, this area also combines tones into melodies (Kunert et al., 2015).
What did I do with my participants in the MRI scanner?
Take for example the sentence The athlete that noticed the mistresses looked out of the window. Who did the noticing? Was it the mistresses who noticed the athlete or the athlete who noticed the mistresses? In other words, how does noticed combine with the mistresses and the athlete? There is a second version of this sentence which uses the same words in a different way: The athlete that the mistresses noticed looked out of the window. If you are completely confused now, I have achieved my aim of giving you a feeling for what a complicated task language is. Combining words is generally not easy (first version of the sentence) and sometimes really hard (second version of the sentence).
Listening to music can be thought of in similar ways. You have to combine tones or chords in order to hear actual music rather than just a random collection of sounds. It turns out that this is also generally not easy and sometimes really hard. Check out the following two little melodies. The text is just the first example sentence above, translated into Dutch (the fMRI study was carried out in The Netherlands).
If these examples don’t work, see more examples on my personal website here.
Did you notice the somewhat odd tone in the middle of the second example? Some people call this a sour note. The idea is that it is more difficult to combine such a sour note with the other tones in the melody, compared to a more expected note.
So, now we have all the ingredients to compare the combination of words into a sentence (with an easy and a difficult kind of combination) and the combination of tones into a melody (again with an easy and a difficult kind of combination). My participants heard over 100 examples like the ones above. The experiment was done in an fMRI scanner and we looked at the brain area highlighted in red above: Broca’s area (under your left temple).
What did I find in the brain data?
The height of the bars represents the difference in brain activity between the easy and difficult versions of the sentences. As you can see, the bars are generally above zero, i.e. this brain area displays more activity for more difficult sentences (though this main effect is not significant in this analysis). I show three bars because the sentences were sung in three different music versions: easy (‘in-key’), hard (‘out-of-key’), or with an unexpected loud note (‘auditory anomaly’). As you can see, the easy version of the melody (left bar) and the one with the unexpected loud note (right bar) hardly lead to an activity difference between easy and difficult sentences. It is the difficult version (middle bar) which does. In other words: when this brain area is trying to make a difficult combination of tones, it suddenly has great trouble with the combination of words in a sentence.
What does it all mean?
This indicates that Broca’s area uses the same resources for music and language. If you overwhelm this area with a difficult music task, there are fewer resources available for the language task. In a previous blog post, I argued that behavioural experiments show a similar picture (Kunert & Slevc, 2015). This experiment shows that the music-language interactions we see in people’s behaviour might stem from the activity in this brain area.
So, this fMRI study contributes a tiny piece to the puzzle of how the brain handles the many tasks it faces. Instead of keeping everything nicely separated in different corners of the head, similar tasks appear to get bundled in specialized brain areas. Broca’s area is an interesting case. It is associated with combining a structured series of elements into a coherent whole, and this is done across domains like music, language, and (who knows) beyond.
[Update 13/11/2015: added link to personal website.]
— — —
Hagoort P (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in Psychology, 4. PMID: 23874313
Kunert R, & Slevc LR (2015). A Commentary on: “Neural overlap in processing music and speech”. Frontiers in human neuroscience, 9 PMID: 26089792
Kunert R, Willems RM, Casasanto D, Patel AD, & Hagoort P (2015). Music and Language Syntax Interact in Broca’s Area: An fMRI Study. PloS one, 10 (11) PMID: 26536026
— — —
DISCLAIMER: The views expressed in this blog post are not necessarily shared by my co-authors Roel Willems, Daniel Casasanto, Ani Patel, and Peter Hagoort.
It is good practice in scientific publications to cite the original paper which established an idea, technique or model. However, there are worrying signs that these original papers exaggerated their effect sizes and are thus less trustworthy than later studies. Torn between a desire to grant credit where credit is due and writing about solid scientific findings, how should one decide?
A good example of a scientific breakthrough which established an idea comes from the neurosciences. Today, much of brain imaging is concerned with the localisation of function: Where in the brain is x? Whether we want to localise vision, intelligence or consciousness does not appear to matter to some researchers. There is much to say about this approach, its assumptions and its value. However, in this post I want to talk about its beginnings.
The first successful claim of brain localisation was made in the 19th century by the anatomist Paul Broca. His discovery that the left inferior frontal cortex (the area below your left temple) is important for speech is still widely influential. After one of his patients with apparently normal general mental abilities but impaired speech died, Broca did a post mortem and found a lesion in the patient’s left frontal cortex. The same was true for 12 other patients. This was a breakthrough in the quest to link the brain to behaviour. The brain area Paul Broca identified as crucial for speech output is called Broca’s area to this day, and the speech impairments resulting from its damage are still called Broca’s aphasia.
Discoveries like these can be game changers. However, there are worrying signs that effect sizes shrink as results get replicated. Below is an example from psycholinguistics. Two sentences can mean practically identical things, such as ‘Peter gives the toy to Anna.’ (using a so-called Prepositional Object) and ‘Peter gives Anna the toy.’ (using a so-called Direct Object). When language users have to decide which construction to use, they show a strong tendency to unconsciously adopt the one they heard just before. This effect is called syntactic priming and is typically associated with Kathryn Bock’s 1986 article in the journal Cognitive Psychology. It has been used to argue for convergence of syntax in dialogue, shared mental resources for comprehension and production, and overlap of the various languages available to a speaker.
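To make the priming effect size concrete, here is a minimal sketch of how it could be computed: the proportion of Prepositional Object (PO) descriptions after a PO prime minus that proportion after a Direct Object (DO) prime, with responses that are neither PO nor DO already excluded. The counts below are made up for illustration; they are not data from any actual study.

```python
def priming_effect(po_after_po, do_after_po, po_after_do, do_after_do):
    """Syntactic priming effect: difference in the proportion of PO
    descriptions after PO primes vs. after DO primes.
    'Other' responses are assumed to be excluded from these counts."""
    p_po_given_po = po_after_po / (po_after_po + do_after_po)
    p_po_given_do = po_after_do / (po_after_do + do_after_do)
    return p_po_given_po - p_po_given_do

# made-up counts: 100 classifiable trials per prime condition
effect = priming_effect(60, 40, 45, 55)  # 0.60 - 0.45 = 0.15
```

A positive value means speakers reused the construction they had just heard; zero means the prime had no influence.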
As can be seen below, since its initial characterisation by Bock, the effect size of Prepositional vs. Direct Object priming has declined. Because this is actually a new, unpublished finding, let me elaborate on what the figure shows (you can skip the next two paragraphs in case you trust me). On the vertical axis I show a standard way of reporting the priming effect size. Mind that the way this is calculated can differ between studies, so I recalculated all values ignoring answers not classifiable as either a Prepositional or a Direct Object. On the horizontal axis is simply the publication year. The thirteen data points refer to all studies I could find on Web of Science in December 2011 which replicated Bock’s initial finding using the same task (picture description), totalling 1169 participants. The standard way of looking for associations between two variables (here, effect size and publication year) is the Pearson correlation coefficient, and here it is significantly negative, indicating that the later a study is published, the lower its effect size.
One should not over-interpret this finding. It could be due to a) unusual effect sizes, b) publication bias or c) task differences. What happens if I control for these things?
a) To check whether unusually high values early on or unusually low values later on carry the effect, I replicate it with a so-called non-parametric test (Kendall’s tau). This test ranks all values and correlates the ranks. The effect size is still significantly negatively associated with publication year. Extreme values (also called outliers) do not carry the effect.
b) Furthermore, I show the studies’ sample sizes in the size of the data points. Some may argue that small studies tend to have more extreme values, and that the extremely low ones are not published because of publication bias, i.e. journals’ tendency to only accept significant findings. This publication pattern may have declined over the years due to higher sample sizes or less publication pressure. To address this possibility I statistically control for sample size, and the decline effect still holds. Differences in sample size do not carry the effect.
c) Finally, there is a small difference in methodology for two publications (three effect sizes, marked in red): whether the priming was from comprehension to production with a repetition of the prime (as in Bock’s original article) or whether it was slightly different. Excluding these two studies only strengthens the negative correlations (r=-0.71 (p=0.02); t=-0.69 (p=0.01); weighted r=-0.73 (p=0.02)).
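For readers who want to see the mechanics, the three checks above can be sketched in a few lines of Python. The years, effect sizes, and sample sizes below are invented for illustration; they are not the actual thirteen data points, and the weighted correlation is one simple way of controlling for sample size, not necessarily the exact method used here.

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

# invented data: one row per study (declining effect over the years)
year   = np.array([1986, 1990, 1993, 1996, 1999, 2001, 2004, 2006, 2008, 2009, 2010, 2011, 2011])
effect = np.array([0.23, 0.21, 0.19, 0.17, 0.18, 0.13, 0.12, 0.11, 0.09, 0.10, 0.07, 0.06, 0.05])
n      = np.array([  48,   32,   40,   64,   24,   96,   48,  120,   80,  160,   96,  200,  161])

# main analysis: Pearson correlation of effect size with publication year
r, p = pearsonr(year, effect)

# (a) non-parametric check: Kendall's tau correlates ranks, so outliers matter less
tau, p_tau = kendalltau(year, effect)

# (b) weight each study by its sample size (a simple weighted Pearson correlation)
def weighted_corr(x, y, w):
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sd_x = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sd_y = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sd_x * sd_y)

r_w = weighted_corr(year, effect, n)
print(f"r = {r:.2f}, tau = {tau:.2f}, weighted r = {r_w:.2f}")
```

A genuine decline effect would show up as all three coefficients coming out negative, as they do for this invented data.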
If someone has another idea how this decline effect came about, I would be happy to hear it. For the moment it is clear that Bock’s syntactic priming effect mysteriously declines with every year.
In terms of Paul Broca’s finding of brain areas associated with speech production the decline effect has to be recast in terms of what Broca’s finding stands for: the localisation of a cognitive function in the brain. So, evidence for the decline effect would be both the fact that the same brain area can be associated with more than one function and the fact that the function of speech production is associated with more than one brain area.
What sort of functions is Broca’s area associated with today? I checked Brainscanr, a free online tool developed by Voytek which shows how likely two terms are to co-occur in the literature. As can be seen below, Broca’s area is indeed related to language production. However, it is also associated with language comprehension and even mirror neurons. The one-area-one-function mapping does not hold for Broca’s area.
But what about speech production? Perhaps Broca’s area is able to handle different jobs but still each job resides in a specific place in the brain. Again, I use a free online tool, Neurosynth, to check this. For this example, Neurosynth aggregates 44 studies showing various brain areas associated with speech production. Broca’s area is indeed present but so is its right hemisphere homologue and medial areas (in the middle of the brain) also light up. So, the decline effect can be shown quite nicely for the brain localisation of cognitive functions as well.
Jonah Lehrer gives more examples from drug, memory and sexual selection research. Mind that the decline effect is not a case of fraud-ish research but instead a good example of how the scientific process refines its methods over time and edges ever closer to agreeing on some basic truths about the world. The problem is that the truth is typically more difficult (read: messier) than initially thought.
But who is to take credit for the discovery then? The original scientist whose finding did not fully stand the test of time or the later scientist whose result is more believable? To me, the current agreement appears to be to credit the former (at least as long as he or she is still alive) and possibly add some findings by the latter if need be. Does this amount to good scientific practice?
Bock, J.K. (1986). Syntactic Persistence in Language Production. Cognitive Psychology, 18, 335-387. doi: 10.1016/0010-0285(86)90004-6
I recently discovered a study which should have been included in the small scale meta-analysis. The revised values are:
correlations for analysis as reported in figure 1: r=-0.46 (p=0.07); t=-0.31 (p=0.12); r_weighted=-0.45 (p=0.09)
correlations for analysis without the two studies which slightly changed the design: r=-0.52 (p=0.07); t=-0.37 (p=0.12); r_weighted=-0.48 (p=0.12)
Thus, the original finding of a decline effect in the syntactic priming literature is at most marginally significant.