In a new paper I, together with Roel Willems and Peter Hagoort, show that music and language are tightly coupled in the brain. Get the gist in a 180-second YouTube clip and then try out what my participants did.
The task my participants had to do might sound very abstract to you, so let me make it concrete. Listen to these two music pieces and tell me which one sounds ‘finished’:
I bet you thought the second one ended in a somewhat odd way. How do you know? You use your implicit knowledge of harmonic relations in Western music for such a ‘finished judgement’. All we did in the paper was see whether an aspect of language grammar (syntax) can influence your ability to hear these harmonic relations, as revealed by ‘finished judgements’. The music pieces we used for this sounded very similar to what you just heard:
It turns out that reading syntactically difficult sentences while hearing the music reduced the feeling that music pieces like this did actually end well. This indicates that processing language syntax draws on brain resources which are also responsible for music harmony.
Difficult syntax: The surgeon consoled the man and the woman put her hand on his forehead.
Easy syntax: The surgeon consoled the man and the woman because the operation had not been successful.
Curiously, sentences with a difficult meaning had no influence on the ‘finished judgements’.
Difficult meaning: The programmer let his mouse run around on the table after he had fed it.
Easy meaning: The programmer let his field mouse run around on the table after he had fed it.
Because only language syntax influenced ‘finished judgements’, we believe that music and language share a common syntax processor of some kind. This conclusion is in line with a number of other studies which I blogged about before.
What this paper adds is that we rule out an attentional link between music and language as the source of the effect. In other words, difficult syntax doesn’t simply distract you and thereby disable your music hearing. Its influence is based on a common syntax processor instead.
In the end, I tested 278 participants across 3 pre-tests, 2 experiments, and 1 post-test. Judge for yourself whether it was worth it by reading the freely available paper here.
— — —
Kunert R, & Slevc LR (2015). A Commentary on: “Neural overlap in processing music and speech”. Frontiers in human neuroscience, 9 PMID: 26089792
Kunert, R., Willems, R., & Hagoort, P. (2016). Language influences music harmony perception: effects of shared syntactic integration resources beyond attention Royal Society Open Science, 3 (2) DOI: 10.1098/rsos.150685
When you read a book and listen to music, the brain doesn’t keep these two tasks nicely separated. In a new article just out, I show that there is a brain area which is busy with both tasks at the same time (Kunert et al., 2015). This brain area might tell us a lot about what music and language share.
The brain area which you see highlighted in red on this picture is called Broca’s area. Since the 19th century, many people have believed it to be ‘the language production part of the brain’. However, a more modern theory proposes that this area is responsible for combining elements (e.g., words) into coherent wholes (e.g., sentences), a task which needs to be solved to understand and produce language (Hagoort, 2013). In my most recent publication, I found evidence that at the same time as combining words into sentences, this area also combines tones into melodies (Kunert et al., 2015).
What did I do with my participants in the MRI scanner?
Take for example the sentence The athlete that noticed the mistresses looked out of the window. Who did the noticing? Was it the mistresses who noticed the athlete or the athlete who noticed the mistresses? In other words, how does noticed combine with the mistresses and the athlete? There is a second version of this sentence which uses the same words in a different way: The athlete that the mistresses noticed looked out of the window. If you are completely confused now, I have achieved my aim of giving you a feeling for what a complicated task language is. Combining words is generally not easy (first version of the sentence) and sometimes really hard (second version of the sentence).
Listening to music can be thought of in similar ways. You have to combine tones or chords in order to hear actual music rather than just a random collection of sounds. It turns out that this is also generally not easy and sometimes really hard. Check out the following two little melodies. The text is just the first example sentence above, translated into Dutch (the fMRI study was carried out in The Netherlands).
If these examples don’t work, see more examples on my personal website here.
Did you notice the somewhat odd tone in the middle of the second example? Some people call this a sour note. The idea is that it is more difficult to combine such a sour note with the other tones in the melody, compared to a more expected note.
So, now we have all the ingredients to compare the combination of words into a sentence (with an easy and a difficult kind of combination) and tones in a melody (with an easy and a difficult kind of combination). My participants heard over 100 examples like the ones above. The experiment was done in an fMRI scanner and we looked at the brain area highlighted in red above: Broca’s area (under your left temple).
What did I find in the brain data?
The height of the bars represents the difference in brain activity signal between the easy and difficult versions of the sentences. As you can see, the bars are generally above zero, i.e. this brain area displays more activity for more difficult sentences (not a significant main effect in this analysis actually). I show three bars because the sentences were sung in three different music versions: easy (‘in-key’), hard (‘out-of-key’), or with an unexpected loud note (‘auditory anomaly’). As you can see, the easy version of the melody (left bar) and the one with the unexpected loud note (right bar) hardly lead to an activity difference between easy and difficult sentences. It is the difficult version (middle bar) which does. In other words: when this brain area is trying to make a difficult combination of tones, it suddenly has great trouble with the combination of words in a sentence.
What does it all mean?
This indicates that Broca’s area uses the same resources for music and language. If you overwhelm this area with a difficult music task, there are fewer resources available for the language task. In a previous blog post, I have argued that behavioural experiments have shown a similar picture (Kunert & Slevc, 2015). This experiment shows that the music-language interactions we see in people’s behaviour might stem from the activity in this brain area.
So, this fMRI study contributes a tiny piece to the puzzle of how the brain deals with the many tasks it has to deal with. Instead of keeping everything nice and separated in different corners of the head, similar tasks appear to get bundled in specialized brain areas. Broca’s area is an interesting case. It is associated with combining a structured series of elements into a coherent whole. This is done across domains like music, language, and (who knows) beyond.
[Update 13/11/2015: added link to personal website.]
— — —
Hagoort P (2013). MUC (Memory, Unification, Control) and beyond. Frontiers in psychology, 4 PMID: 23874313
Kunert R, & Slevc LR (2015). A Commentary on: “Neural overlap in processing music and speech”. Frontiers in human neuroscience, 9 PMID: 26089792
Kunert R, Willems RM, Casasanto D, Patel AD, & Hagoort P (2015). Music and Language Syntax Interact in Broca’s Area: An fMRI Study. PloS one, 10 (11) PMID: 26536026
— — —
DISCLAIMER: The views expressed in this blog post are not necessarily shared by my co-authors Roel Willems, Daniel Casasanto, Ani Patel, and Peter Hagoort.
When you listen to some music and when you read a book, does your brain use the same resources? This question goes to the heart of how the brain is organised – does it make a difference between cognitive domains like music and language? In a new commentary I highlight a successful approach which helps to answer this question.
How do we read? What is the brain doing in this picture?
When reading the following sentence, check carefully when you are surprised at what you are reading:
After | the trial | the attorney | advised | the defendant | was | likely | to commit | more crimes.
I bet it was on the segment ‘was’. You probably thought that the defendant was advised, rather than that someone else was advised about the defendant. Once you read the word ‘was’ you need to reinterpret what you have just read. In 2009 Bob Slevc and colleagues found out that background music can change your reading of this kind of sentence. If you hear a chord which is harmonically unexpected, you have even more trouble with the reinterpretation of the sentence on reading ‘was’.
Why does music influence language?
Why would an unexpected chord be problematic for reading surprising sentences? The most straightforward explanation is that unexpected chords are odd, so they draw your attention. To test this simple explanation, Slevc tried out an unexpected instrument playing the chord in a harmonically expected way. No effect on reading. Apparently, not just any odd chord changes your reading. The musical oddity has to stem from the harmony of the chord. Why this is the case is a matter of debate between scientists. What this experiment makes clear, though, is that music can influence language via shared resources which have something to do with harmony processing.
Why ignore the fact that music influences language?
None of this was mentioned in a recent review by Isabelle Peretz and colleagues on this topic. They looked at where in the brain music and language show activations, as revealed in MRI brain scanners. This is just one way to find out whether music and language share brain resources. They concluded that ‘the question of overlap between music and speech processing must still be considered as an open question’. Peretz and colleagues call for ‘converging evidence from several methodologies’ but fail to mention the evidence from non-MRI methodologies.1
Sure, one has to focus on something, but it annoys me that people tend to focus on methods (especially fancy, expensive methods like MRI scanners) rather than answers (especially answers from elegant but cheap research into human behaviour like reading). So I decided to write a commentary together with Bob Slevc. We list no fewer than ten studies which used a similar approach to the one outlined above. Why ignore these results?
If only Peretz and colleagues had truly looked at ‘converging evidence from several methodologies’, they would have asked themselves why music sometimes influences language and why it sometimes does not. The debate is in full swing and already beyond the previous question of whether music and language share brain resources. Instead, researchers now ask what kind of resources are shared.
So, yes, music and language appear to share some brain resources. Perhaps this is not easily visible in MRI brain scanners, but looking at how people read while chord sequences play in the background can show it.
— — —
Kunert, R., & Slevc, L.R. (2015). A commentary on “Neural overlap in processing music and speech” (Peretz et al., 2015) Frontiers in Human Neuroscience doi: 10.3389/fnhum.2015.00330
Peretz I, Vuvan D, Lagrois MÉ, & Armony JL (2015). Neural overlap in processing music and speech. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 370 (1664) PMID: 25646513
Slevc LR, Rosenberg JC, & Patel AD (2009). Making psycholinguistics musical: self-paced reading time evidence for shared processing of linguistic and musical syntax. Psychonomic bulletin & review, 16 (2), 374-81 PMID: 19293110
— — —
1 Except for one ECoG study.
DISCLAIMER: The views expressed in this blog post are not necessarily shared by Bob Slevc.
Imagine standing at a cocktail party and somewhere your name gets mentioned. Your attention is immediately grabbed by the sound of your name. It is a classic psychological effect with a new twist: old people are immune.
Someone mention my name?
The so-called cocktail party effect has fascinated researchers for a long time. Even though you do not consciously listen to a conversation around you, your own name can grab your attention. That means that unbeknownst to you, you follow the conversations around you. You check them for salient information like your name, and if it occurs you quickly switch attention to where your name was mentioned.
The cocktail party simulated in the lab
In the lab this is investigated slightly differently. Participants listen to one ear and, for example, repeat whatever they hear. Their name is embedded in what they hear coming into the other (unattended) ear. After the experiment one simply asks ‘Did you hear your own name?’ In a recent paper by Moshe Naveh-Benjamin and colleagues (in press), around half of the young student participants noticed their name in such a set-up. Compare this to old people aged around 70: next to nobody (only six out of 76 participants) noticed their name being mentioned in the unattended ear.
Why this age difference? Do old people simply not hear well? Unlikely: when the name was played to the ear that they attended to, 45% of old people noticed their names. Clearly, many old people can hear their names, but they do not notice them if they do not pay attention. Young people do not show such a sharp distinction: half the time they notice their names, even when concentrating on something else.
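To put these detection rates side by side, here is a quick back-of-the-envelope comparison (a sketch using only the figures quoted above; the roughly 50% figure for young participants is approximate):

```python
# Detection rates from the Naveh-Benjamin et al. study, as quoted above.
old_unattended = 6 / 76    # older adults noticing their name in the unattended ear
old_attended = 0.45        # older adults noticing their name in the attended ear
young_unattended = 0.5     # roughly half of the young participants

print(f"older, unattended ear: {old_unattended:.0%}")   # about 8%
print(f"older, attended ear:   {old_attended:.0%}")
print(f"young, unattended ear: {young_unattended:.0%}")
```

So the striking gap is not hearing as such (45% of older adults catch an attended name) but the drop to about 8% when the name arrives in the unattended ear.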
Focusing the little attention that is available
Naveh-Benjamin and colleagues instead suggest that old people simply have less attention. When they focus on a conversation, they give it their everything. Nothing is left for the kind of unconscious checking of conversations which young people can do so well.
At the next cocktail party you can safely gossip about your old boss. Just avoid mentioning the name of the young new colleague who just started.
— — —
Naveh-Benjamin M, Kilb A, Maddox GB, Thomas J, Fine HC, Chen T, & Cowan N (2014). Older adults do not notice their names: A new twist to a classic attention task. Journal of experimental psychology. Learning, memory, and cognition PMID: 24820668
The answers are not provided by just anybody but by language researchers themselves. Before they are put on the web they get checked by another researcher and they get translated into German, Dutch and English. It’s a huge enterprise, to be sure.
As an employee of the Max Planck Institute I’ve had my own go at answering a few questions:
A rather recent addition to laws designed to reduce these numbers was the adoption of compulsory hands-free devices for mobile phones. Their safety value is easy to understand. When you look at a mobile phone display you cannot simultaneously look at the road. Similarly, using your hands for typing and using them for steering are at least partly incompatible actions.
How mobile phone use impairs sight and hands.
From a psychological point of view the current law tries to ensure that visual input channels (eyes) and motor output channels (hands) remain undisturbed. But what about the brain areas which control these channels?
This is the question recently investigated by Bergen from UC San Diego and colleagues. They put undergraduates in a driving simulator giving the impression of a motorway with steady traffic and a car in front of the driver braking from time to time. Simultaneously, the driver had to judge simple true/false statements from the motor domain (e.g., “To open a jar, you turn the lid counterclockwise.”), the visual domain (e.g., “The letters on a stop sign are white.”), or the abstract domain (e.g., “The capital of North Dakota is Bismarck.”). As a baseline condition, people were just asked to say “true” or “false” several times.
Why choose such questions? There is both behavioural and brain-imaging evidence that language comprehension involves the simulation of what was said. This set of findings is often summarised as embodied cognition and its take-home message is something like this: in order to understand it, you mentally do it. For example, to answer a motor question, you use your brain areas doing motor control and make them simulate what it would be like to open a jar. Based on the outcome of this simulation you answer the question.
So, will visual or motor questions affect driving differently than abstract questions because the former engage the same brain areas as those needed for driving while the latter don’t? The alternative would be that asking anything distracts because general attention gets pulled away from driving.
The results go both ways. First, one measure was affected by the true/false statements but not by their kind: quickly braking when the car in front brakes. The time it took to do so was longer if any sort of question was asked compared to baseline. This suggests that domain-general mechanisms, e.g., attention, were interfered with by language.
Was she a safe driver? May depend on whether she talked and if so about what.
Second, one measure was affected by what kind of statements had to be judged: generally holding a safe distance to other cars. This distance was greater if visual questions were asked compared to abstract questions and compared to baseline. A similar, albeit not as clear, pattern emerged for motor questions. It looks as if participants were so distracted by these kinds of questions that they fell behind their optimal driving distance. This suggests that a task such as keeping a safe driving distance, which requires visual working memory (compare ideal distance to actual distance) and corrective motor responses (bring ideal and actual distances closer together), is influenced by language comprehension through mental simulation.
On the one hand, the scientific implications are quite straightforward. Bergen and colleagues’ results suggest that those low-level perception and action control areas which are needed for quick reactions are not what embodied cognition is about. Instead, it seems like embodied cognition happens in higher perceptual and motor planning areas. Furthermore, the whole embodied cognition idea gets quite a boost from a conceptual replication under relatively realistic conditions.
On the other hand, the practical implications are somewhat controversial. Because talking in general impairs quick reactions by the driver, even hands-free devices pose a risk. This danger is compounded by talking about abstract topics since the driving distance is reduced compared to visual topics.
The authors refrain from saying that any sort of conversation should be prohibited. Passengers share perceptual experiences with the driver and can adjust their conversations to the dangerousness of the situation. Mobile phone contacts can’t do this. But what if you want to be really really safe? Well, cut your own risk of dying and take public transport. There you can chat and cut your death risk by 90% (bus) or even 95% (train or flight) compared to car travel (EU numbers).
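The quoted risk reductions can be turned into ratios with a line of arithmetic (a sketch; the 90% and 95% reductions are the EU figures cited above, with car travel normalised to 1):

```python
# Normalise the death risk of car travel to 1 and apply the quoted reductions.
car_risk = 1.0
bus_risk = car_risk * (1 - 0.90)    # bus: 90% lower risk than car
train_risk = car_risk * (1 - 0.95)  # train or flight: 95% lower risk than car

print(f"car is {car_risk / bus_risk:.0f}x riskier than the bus")
print(f"car is {car_risk / train_risk:.0f}x riskier than the train")
```

In other words, per the cited figures, driving carries roughly ten times the risk of taking the bus and twenty times the risk of the train.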
A safe way to travel.
Bergen, B., Medeiros-Ward, N., Wheeler, K., Drews, F., & Strayer, D. (2012). The Crosstalk Hypothesis: Why Language Interferes With Driving. Journal of experimental psychology. General PMID: 22612769
1) By Ed Brown as Edbrown05 (Own work) [CC-BY-SA-2.5 (www.creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons
This would be a completely obvious statement if it weren’t for what it entails. First of all, words have to be pronounced. Secondly, words carry meaning. Both properties change how words are used. A bunch of studies have recently shown that these properties also influence how the people behind names are perceived. In essence, names open the door for biases, misperceptions and prejudices.
Be careful: if your name happens to be Mohammed or Vougiouklakis, you may not like what you’re about to read.
Firstly, pronunciation is important. If a word is unpronounceable, it never enters a community’s language. It turns out that people whose names are difficult to pronounce also have trouble in the community. Laham and colleagues (2012) asked Australian undergraduates to rate how good a fictional local council candidate was. Participants read a fake local news article which was always the same except for the surname of the candidate, which was either difficult to pronounce (Vougiouklakis, Leszczynska) or easy (Lazaridis, Paradowska). Easy-to-pronounce candidates were rated better.
In another experiment, Laham and colleagues looked at the hierarchy within real US American law firms. Pronounceability was associated with a lawyer’s position in the firm’s hierarchy. This held even within the subset of Anglo-American names alone, and likewise within the foreign-name sample. So, the more easily pronounceable the name, the better your career prospects.
It is worth appreciating how weird this outcome is. People did not rate names but the people who carry the names. Furthermore, they had a wealth of information about them, and one may think that name pronunciation is a very unimportant bit of information that is simply ignored. Nonetheless, even though it should be completely irrelevant for success, name pronunciation appears to shape people’s lives.
Secondly, words have meaning. The most important meaning of a name is what it says about the community you are from. It signifies gender, ethnicity, race, region, etc. One widely known American study is Bertrand and Mullainathan’s (2004) job application study in which real job adverts were answered with fake resumes differing only in terms of name. Black-sounding names (Lakisha Washington) received fewer call-backs than white-sounding names (Emily Walsh). Furthermore, application quality was not important for black-sounding names while it did change call-back rates for white-sounding names.
If you are from Europe (like me) and you feel like racism is oh so American (somewhat like me before I wrote this post), bear in mind that the main finding has been replicated with local ethnic minority names in many European countries:
If he is called Tobias (rather than Fatih) he gets 14% more call-backs on applications.
Britain – Muhammed Kalid vs. Andrew Clarke (Wood et al., 2009)
France – Bakari Bongo vs. Julien Roche (Cediey and Foroni, 2008)
Germany – Fatih Yildiz vs. Tobias Hartmann (Kaas and Manger, 2011)
Greece – Nikolai Dridanski vs. Ioannis Christou (Drydakis and Vlassis, 2010)
Netherlands – Mohammed vs. Henk (Derous et al., 2012)
Ireland (McGinnity and Lunn, 2011)
Sweden – Ali Said vs. Erik Andersson (Carlsson and Rooth, 2007)
This is really just evidence for old-fashioned discrimination in the job market. But it says more than that. In the American study, getting additional qualifications paid off for whites while it did not have a significant impact on call-back rates for blacks. Thus, similarly to the pronunciation effect above, additional information does not reduce the effect of the obviously irrelevant name characteristics. Instead, in the case of Bertrand and Mullainathan’s study, the additional information of application quality even exacerbated the race difference.
The take-home message is that people take in all sorts of objectively irrelevant information – like names – and use it to make their choices. These choices are more likely to go against you if your name is difficult to pronounce or foreign sounding. People make choices about names and these choices affect the people behind the names.
So, what is there to do? If you really want to treat people fairly, i.e. give people an equal chance independent of the names they were given or have chosen, give them a number. Because – and this will sound terribly obvious – numbers aren’t words.
Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. The American Economic Review, 94(4), 991-1025. doi: 10.1257/0002828042002561
Carlsson, M., & Rooth, D.-O. (2007). Evidence of ethnic discrimination in the Swedish labor market using experimental data. Labour Economics, 14, 716–729. doi: 10.1016/j.labeco.2007.05.001
Cediey, E., & Foroni, F. (2008). Discrimination in Access to Employment on Grounds of Foreign Origin in France. ILO International Migration Paper 85E, International Labour Organization, Geneva, Switzerland.
Derous, E., Ryan, A.M., Nguyen, H.-H. D. (2012). Multiple categorization in resume screening: Examining effects on hiring discrimination against Arab applicants in field and lab settings. Journal of Organizational Behavior, 33, 544-570. doi: 10.1002/job.769
Drydakis, N., & Vlassis, M. (2010). Ethnic discrimination in the greek labour market: occupational access, insurance coverage and wage offers. The Manchester School, 78(3), 201–218. doi: 10.1111/j.1467-9957.2009.02132.x
Kaas, L., & Manger, C. (2011). Ethnic Discrimination in Germany’s Labour Market: A Field Experiment. German Economic Review 13(1): 1–20.
Laham, S.M., Koval, P., Alter, A.L. (2012). The name-pronunciation effect: Why people like Mr. Smith more than Mr. Colquhoun. Journal of Experimental Social Psychology, 48(3), 752-756. doi: 10.1016/j.jesp.2011.12.002
And nonetheless, one can answer them. Crime can be a beast haunting local neighbourhoods and it must be eradicated – a description suggesting it is alive and well. And musical pitch is high or low.
Of course, these are all just metaphors useful for quickly talking about things without having to stop for lengthy definitions. However, they are not only linguistic short cuts. They are also mental short cuts – or opportunities for manipulation, if you prefer a more racy description. Last year, a bunch of studies showed examples of how far one can go with this.
A metaphorical breeding program.
Thibodeau and Boroditsky (2011) contrasted two common Western metaphors related to crime: the crime as a beast (preying on a town, lurking in the neighbourhood) and crime as a virus (infecting a town, plaguing the neighbourhood). They ‘activated’ these metaphors by using these words alongside fictional crime statistics of an unknown town. When participants were asked what to do about the town’s crime problem, those in the beast-condition were more likely to suggest law enforcement actions (capture, enforce, punish) than those in the virus-condition who often opted for reform-measures (diagnose, treat, inoculate).
Thus, a linguistic short-cut affected how people reacted to a realistic real-world problem in the realm of social policy. And the effects are big. As one might expect, the same researchers also found political and gender differences (US Republicans as well as men tend to be more on the enforcement side than US Democrats/Independents and women). Simply mentioning a metaphor was twice as powerful in shaping opinion as any of these variables.
A literally high pitch.
In a different set of studies, even something as basic as the height of a tone was shown to be metaphorical. Dolscheid and colleagues (2011) showed that when a tone is presented with an image of height (basically a vertical line crossed by another line at a high or low point) this influences Westerners’ pitch repetition – as would be expected by the pitch-as-height metaphor. When Dutch participants sang a tone paired with a high line, they tended to sing higher. An image of thickness (a thick or thin line) was without influence. The reverse was the case for Farsi speakers even though they lived in the same country. In Farsi, low tones are called thick and high tones are called thin. In a second step, the research team trained people for only 20 minutes with the thickness metaphor – without them knowing. Afterwards, Dutch people performed similarly to Farsi speakers who had known it all their lives.
The wider point is one I have made before: Language is not just for talking, it is also a window into the Mind. However, the metaphor research goes further by also showing how easily this window gives access to the Mind, how easily we can be manipulated. Something as important as how to address crime can be influenced by a recently encountered metaphor. The same applies to something as basic as singing back a tone.
And don’t say they can be spotted easily. Or did you notice the race metaphor written black on white at the beginning of this post?
What is so female about ships to call them she (LINK)? What is so neuter about children to call them it (LINK)? Now imagine that entire languages – like German, Spanish and French – are full of these arbitrary gender assignments, not allowing any genderless nouns. This has a profound effect on the way the mind works. A couple of articles published last year on the grammatical gender of nouns in different languages nicely illustrate this point.
To native speakers of gendered languages – i.e. languages whose nouns are all masculine, feminine or perhaps neuter – their language’s gender system usually appears obvious. I vividly remember sitting in France in a Philosophy class when the teacher elaborated on the female gender of life (la vie). According to her, life could only ever be feminine for some forgotten reason. When a classmate pointed out that life was neuter in German and that, therefore, her reasoning was flawed, she turned to me as a native German speaker. I could only agree with the comment and see her theory fall apart in real time (btw, life can even be masculine, as for example in Bulgarian or Hebrew). This is the first experience I can remember of a native speaker applying the mostly arbitrary grammatical gender system beyond the domain of language.
Recent research has found more examples of grammatical gender influencing how language users think about completely asexual things. In a very small experiment, an Israeli friend of mine (Rony Halevy) asked Hebrew speakers to dress up cutlery and other objects and found more feminine dresses on grammatically female items and vice versa for male items (see picture). Dutch controls, who do not distinguish between male and female grammatical gender, did not show a similar effect. Still, one may argue that the reference to gender was in the task already. Similarly, language-based tasks in this field could be said to only reveal an effect of grammatical gender on other linguistic processes. So, can language really influence the mind in general?
Hebrew is a gendered language and participants tend to dress up simple objects such as a spoon or a fork according to their grammatical gender. The Dutch gender system does not refer to male and female and does not show the same effect. Data based on student project by Rony Halevy.
Cubelli et al. (2011) used a categorisation task in which participants had to quickly judge whether two pictures showed objects belonging to the same category or not. Judgements were faster if the objects’ grammatical gender matched. The authors interpreted this as showing that people access the words related to the pictures even when this is not required for the task.
Even outside the laboratory the effect can be shown. Sampling images from a big online art database, Segel and Boroditsky (2011) looked at all the gendered depictions of naturally asexual entities like love, justice, or time. Depicted gender agreed with grammatical gender in 78% of the cases. The effect was replicable for Italian, French and German. On top of that, it even held when only looking at those entities whose grammatical genders are conflicting in the studied languages.
It is worth reiterating that the aforementioned behaviours were completely non-linguistic. The grammatical gender system is just a set of rules for how words change when combined. The fact that people draw on these purely linguistic rules to perform unrelated tasks shows quite powerfully what a central role language plays in our minds.
But the effect may go further than that. In English, natural gender must be included in personal pronouns (he/she). Admittedly, there are exceptions (child – it) but they are rare. In Chinese, there is no such requirement. Personal pronouns can mark gender (written forms of ta) or not (spoken ta). Chen and Su (2011, Experiment 2) presented English or Chinese participants with written English or Chinese sentences which included gendered personal pronouns. Participants were asked to match each sentence to one of two pictures, each showing a person of a different gender. English speaking participants were faster and more accurate than Chinese speakers on these judgements. It’s as if English speakers are better trained in thinking about natural gender because English makes such thinking compulsory. Chinese participants, on the other hand, can produce pronouns without thinking of natural gender and, thus, have this information less readily available for their judgements.
One may argue that the effect relies on people of different native tongues showing different behaviours. These people probably differ in many ways other than their native language. Wider cultural differences could be invoked. Still, given that the effect holds for German, French, Italian, Spanish and Chinese, the most straightforward explanation indeed appears to be their language background. A way of overcoming the confounding influence of cultural upbringing may be to contrast second language learners of the same native language who learn different second languages.
Despite these shortcomings, the influence of the gender status of a language on the mind of its users is clearly measurable. This illustrates quite nicely that thought is influenced by what you must say – rather than by what you can say. This highlights that language is not an isolated skill but instead a central part of how our minds function. Studying language use is important – not just for the sake of language.
Chen, J-Y., & Su, J-J. (2011). Differential Sensitivity to the Gender of a Person by English and Chinese Speakers. Journal of Psycholinguist Research, 40, 195–203. doi: 10.1007/s10936-010-9164-9
Cubelli, R., Paolieri, D., Lotto, L., & Job, R. (2011). The Effect of Grammatical Gender on Object Categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 449–460. doi: 10.1037/a0021965
Segel, E., & Boroditsky, L. (2011). Grammar in art. Frontiers in Psychology, 1,1. doi: 10.3389/fpsyg.2010.00244