Richard Kunert

The scientific community’s Galileo affair (you’re the Pope)

Science is in crisis. Everyone in the scientific community knows about it but few want to talk about it. The crisis is one of honesty. A junior scientist (like me) asks himself a question similar to the one Galileo faced in 1633: how much honesty is desirable in science?

Galileo versus Pope: guess what role the modern scientist plays.

Science Wonderland

According to nearly all empirical scientific publications that I have read, scientists work like this:

Introduction, Methods, Results, Discussion

Scientists call this ‘the story’ of the paper. This ‘story framework’ is so entrenched in science that the vast majority of scientific publications are required to be organised according to its structure: 1) Introduction, 2) Methods, 3) Results, 4) Discussion. My own publication is no exception.

Science Reality

However, virtually all scientists know that ‘the story’ is not really true. It is merely an ideal-case-scenario. Usually, the process looks more like this:

questionable research practices

Scientists call some of the added red arrows questionable research practices (or QRP for short). The red arrows stand for (going from left to right, top to bottom):

1) adjusting the hypothesis based on the experimental set-up. This happens particularly when a) working with an old data set, or b) the set-up accidentally turns out different from the intended one, etc.

2) changing design details (e.g., how many participants, how many conditions to include, how many/which measures of interest to focus on) depending on the results these changes produce.

3) analysing until results are easy to interpret.

4) analysing until results are statistically desirable (‘significant results’), i.e. so-called p-hacking.

5) hypothesising after results are known (so-called HARKing).

The outcome is a collection of blatantly unrealistic ‘stories’ in scientific publications. Compare this to the more realistic literature on clinical trials for new drugs, where more than half of the drugs fail the trial (Goodman, 2014). In contrast, nearly all ‘stories’ in the wider scientific literature are success stories. How?

Joseph Simmons and colleagues (2011) give an illustration of how to produce spurious successes. They simulated the situation of researchers engaging in the second point above (changing design details based on results). Let’s assume that the hypothesised effect is not real. How many experiments will erroneously find an effect at the conventional 5% significance criterion? Well, 5% of experiments should (scientists have agreed that this number is low enough to be acceptable). However, thanks to the questionable research practices outlined above this number can be boosted. For example, sampling participants until the result is statistically desirable leads to up to 22% of experiments reporting a ‘significant result’ even though there is no effect to be found. It is estimated that 70% of US psychologists have done this (John et al., 2012). When such a practice is combined with other, similar design changes, up to 61% of experiments falsely report a significant effect. Why do we do this?
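To see what this flexibility does to the false-positive rate, here is a minimal simulation of the ‘sample until significant’ strategy, in the spirit of the Simmons et al. (2011) demonstration. It is only a sketch: the starting sample size, the peeking schedule (testing after every added pair of participants, up to 50 per group) and the two-sample t-test are illustrative assumptions of mine, not their exact settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_experiment(n_start=10, n_max=50, alpha=0.05):
    """Two groups, no true effect. Peek at the p-value after every added
    pair of participants and stop as soon as it drops below alpha."""
    group_a = list(rng.normal(size=n_start))
    group_b = list(rng.normal(size=n_start))
    while True:
        p_value = stats.ttest_ind(group_a, group_b).pvalue
        if p_value < alpha:
            return True   # 'significant' result despite no real effect
        if len(group_a) >= n_max:
            return False  # gave up: correctly non-significant
        group_a.append(rng.normal())
        group_b.append(rng.normal())

n_simulations = 2000
false_positive_rate = sum(one_experiment() for _ in range(n_simulations)) / n_simulations
print(f"False-positive rate with optional stopping: {false_positive_rate:.0%}")
# Without peeking this would hover around 5%; with this peeking schedule
# it climbs to roughly 20%, in the ballpark of the 22% reported by Simmons et al.
```

Even though no effect exists, checking repeatedly and stopping at the first p < .05 inflates the false-positive rate roughly fourfold relative to the nominal 5%.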

The Pope of 1633 is back

If we know that the scientific literature is unrealistic, why don’t we simply drop the pretense and tell it as it is? The reason is simple: because you like the scientific wonderland of success stories. If you are a scientist reader, you like to base your evaluation of scientific manuscripts on the ‘elegance’ (simplicity, straightforwardness) of the text. This leaves no room for telling you what really happened. You also like to base your evaluation of colleagues on the quantity and the ‘impact’ of their scientific output. QRPs are essentially a career requirement in such a system. If you are a lay reader, you like the research you fund (via tax money) to be sexy, easy and simple. Scientific data are as messy as the real world, but the reported results are not. They are meant to be easily digestible (‘elegant’) insights.

In 1633 it did not matter much whether Galileo admitted to the heliocentric world view, which was deemed blasphemous. The idea was already out there, conquering the minds of the Renaissance world. Today’s Galileo moment is also marked by an inability to admit to scientific facts (i.e. the so-called ‘preliminary’ research results which scientists obtain before applying questionable research practices). But this time the role of the Pope is played by political leaders and the lay public as well as by scientists themselves. Actual scientific insights get lost before they can see the light of day.

There is a great movement to remedy this situation, including pressure to share data (e.g., at PLoS ONE), replication initiatives (e.g., RRR1, the Reproducibility Project), the opportunity to pre-register experiments, etc. However, these remedies focus only on scientific practice, as if Galileo were at fault and the concept of blasphemy were fine. Maybe we should start looking into how we got into this mess in the first place. Focus on the Pope.

— — —
John, L., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices with Incentives for Truth-Telling. SSRN Electronic Journal. DOI: 10.2139/ssrn.1996631

Simmons, J., Nelson, L., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359-1366. DOI: 10.1177/0956797611417632

— — —

Picture: Joseph Nicolas Robert-Fleury [Public domain], via Wikimedia Commons

PS: This post necessarily reflects the field of science that I am familiar with (Psychology, Cognitive Science, Neuroscience). The situation may well be different in other scientific fields.

The real reason why new pop music is so incredibly bad

You have probably heard that Pink Floyd recently released their new album The Endless River. Will this bring back the wonderful world of good music after the endless awfulness of the popular music scene in the last 20 years or so? Is good music, as we know it from the 60s and 70s, back for good? The reasons behind the alleged endless awfulness of pop music these days suggest otherwise. We shouldn’t be throwing stones at new music but instead at our inability to like it.

Pink Floyd 1973

When we were young we learned to appreciate Pink Floyd.

Daniel Levitin was asked at a recent music psychology conference in Toronto why old music is amazing and new music is awful. His answer: modern record companies are there to make money, whereas in the olden days they were there to make music and were ready to hold on to musicians who needed time to become successful. More interestingly, he reminded the audience that many modern kidz would totally disagree with the implication that modern music is awful. How can it be that new music is liked by young people if so much of it is regarded as quite bad?

Everything changes for the better after a few repetitions

The answer to the mystery has nothing to do with flaws in modern music but instead with our brain. When adults hear new music they often hate it at first. After repeated listening they tend to find it more and more beautiful. For example, Marcia Johnson and colleagues (1985) played Korean melodies to American participants: a melody heard for the first time received low liking ratings, a melody heard once before received higher ratings, and melodies heard even more often received higher ratings still. Even Korsakoff patients – who could hardly remember having heard individual melodies before – showed this effect, i.e. without realising it they probably never forget melodies.

This so-called mere exposure effect is all that matters to me: a robust, medium-strong, generally applicable, evolutionarily plausible effect (Bornstein, 1989). You can do what you like: it applies to all sorts of stimuli. However, there is one interesting exception. Young people do not show the mere exposure effect: for them there is no relationship between ‘repeat the stimulus’ and ‘give good feeling’ (Bornstein, 1989). As a result, adults need a lot more patience before they like a new song as much as young people do. No wonder adults are only satisfied with the songs they already know from their youth in the 60s and 70s. When today’s young generation looks at the music scene of 2050, it will probably hate it just as much and wish the Spice Girls back (notice the gradual rise of 90’s parties already).

I listened to it –> I like it

So, when it comes to an allegedly awful present and a great past, ask yourself: how deep is your love for the old music itself, rather than for the repeated listening? Listen repeatedly to any of a million love songs and you will end up appreciating it. Personally, I give new music a chance and sometimes it manages to relight my fire. Concerning The Endless River: if it’s not love at first sight, do not worry. The new Pink Floyd album sure is good (depending on how many times you listen to it).

— — —
Bornstein, R. (1989). Exposure and affect: Overview and meta-analysis of research, 1968-1987. Psychological Bulletin, 106(2), 265-289. DOI: 10.1037/0033-2909.106.2.265

Johnson, M.K., Kim, J.K., & Risse, G. (1985). Do alcoholic Korsakoff’s syndrome patients acquire affective reactions? Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(1), 22-36. PMID: 3156951
— — —

Figure: By PinkFloyd1973.jpg: TimDuncan derivative work: Mr. Frank (PinkFloyd1973.jpg) [CC-BY-3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

— — —

PS: Yes, I did hide 29 Take That song titles in this blog post. Be careful, you might like 90’s pop music a little bit more due to this exposure.

Dyslexia: trouble reading ‘four’

Dyslexia affects about one in ten readers. It shows up during reading, especially fast reading. But it is still not fully clear which words dyslexic readers find particularly hard. So I did some research to find out, and the resulting article was published today.

Carl Spitzweg: the bookworm

The bookworm (presumably non-dyslexic)

Imagine seeing a new word ‘bour’. How would you pronounce it? Similar to ‘four’, similar to ‘flour’ or similar to ‘tour’? It is impossible to know. Therefore, words such as ‘four’, ‘flour’ and ‘tour’ are said to be inconsistent – one doesn’t know how to pronounce them when encountering them for the very first time. Given this pronunciation challenge, I, together with my co-author Christoph Scheepers, hypothesised that such words would be more difficult for readers generally, and for dyslexic readers especially.

Finding evidence for a dyslexia-specific problem is challenging because dyslexic participants tend to be slower than non-dyslexic people in most tasks. So, if you force them to be as quick as typical readers they will seem to be bad readers even though they might merely be slow readers. Therefore, we adopted a new task that gave people a very long time to judge whether a string of letters is a word or not.

It turns out that inconsistent words like ‘four’ slow down both dyslexic and typical readers. But on top of that, dyslexic readers never quite reach the same accuracy as typical readers with these words. It is as if the additional challenge these words pose can, given time, be surmounted by typical readers, while dyslexic readers have trouble no matter how much time you give them. In other words, dyslexic people aren’t just slow. At least for some words they have trouble no matter how long they look at them.

This is my very first publication, based on work I did more than four years ago. You should check out whether the wait was worth it. The article is free to access here. I hope it will convince you that dyslexia is a real challenge to investigate. Still, the pay-off of fully understanding it is enormous: helping dyslexic readers cope in a literate society.

— — —
Kunert, R., & Scheepers, C. (2014). Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation. Frontiers in Psychology, 5. DOI: 10.3389/fpsyg.2014.01129
— — —

Picture: Carl Spitzweg [Public domain or Public domain], via Wikimedia Commons

Old people are immune against the cocktail party effect

Imagine standing at a cocktail party and somewhere your name gets mentioned. Your attention is immediately grabbed by the sound of your name. It is a classic psychological effect with a new twist: old people are immune.

Someone mention my name?

The so-called cocktail party effect has fascinated researchers for a long time. Even though you do not consciously listen to a conversation around you, your own name can grab your attention. That means that unbeknownst to you, you follow the conversations around you. You check them for salient information like your name, and if it occurs you quickly switch attention to where your name was mentioned.

The cocktail party simulated in the lab

In the lab this is investigated slightly differently. Participants attend to one ear and, for example, repeat whatever they hear there. Their name is embedded in what comes into the other (unattended) ear. After the experiment one simply asks: ‘Did you hear your own name?’ In a recent paper by Moshe Naveh-Benjamin and colleagues (2014), around half of the young student participants noticed their name in such a set-up. Compare this to old people aged around 70: next to nobody (only six out of 76 participants) noticed their name being mentioned in the unattended ear.

Why this age difference? Do old people simply not hear well? Unlikely: when the name was played to the ear that they attended to, 45% of the old people noticed it. Clearly, many old people can hear their names, but they do not notice them unless they are paying attention to that ear. Young people do not show such a sharp distinction. Half the time they notice their names, even when concentrating on something else.

Focusing the little attention that is available

Naveh-Benjamin and colleagues instead suggest that old people simply have less attention available. When they focus on a conversation, they give it everything they have. Nothing is left for the kind of unconscious monitoring of other conversations which young people do so well.

At the next cocktail party you can safely gossip about your old boss. Just avoid mentioning the name of the young new colleague who just started.

 

— — —

Naveh-Benjamin, M., Kilb, A., Maddox, G.B., Thomas, J., Fine, H.C., Chen, T., & Cowan, N. (2014). Older adults do not notice their names: A new twist to a classic attention task. Journal of Experimental Psychology: Learning, Memory, and Cognition. PMID: 24820668

— — —

Picture:

By Financial Times (Patrón cocktail bar) [CC-BY-2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons


Why are ethical standards higher in science than in business and media?

Facebook manipulates user content in the name of science? Scandalous! It manipulates user content in the name of profit? No worries! Want to run a Milgram study these days? Get bashed by your local ethics committee! Want to show it on TV? No worries. Why do projects which seek knowledge have higher ethical standards than projects which seek profit?

Over half a million people were this mouse.

Just as we were preparing to leave for our well-deserved summer holidays this year, research was shaken by the fall-out from a psychological study (Kramer et al., 2014) which manipulated Facebook content. Many scientists objected to the study’s failure to ask for ‘informed consent’, and I think they are right. However, many ordinary people objected to something else. Here’s how Alex Hern put it over at the Guardian:

At least when a multinational company, which knows everything about us and controls the very means of communication with our loved ones, acts to try and maximise its profit, it’s predictable. There’s something altogether unsettling about the possibility that Facebook experiments on its users out of little more than curiosity.

Notice the opposition between ‘maximise profit’, which is somehow thought to be okay, and ‘experimenting on users’, which is not. I genuinely do not understand this distinction. Suppose the study had never been published in PNAS but instead in the company’s report to shareholders (as a new means of emotionally enhancing advertisements): would there have been the same outcry? I doubt it. Why not?

Double standards again: TV experimentation versus scientific experimentation

Was the double standard around the Facebook study the exception? I do not think so. In the following YouTube clip you see the classic Milgram experiment re-created for British TV. The participants’ task is to follow the experimenter’s instructions and give another participant (who is actually an actor) electric shocks for bad task performance. The shocks increase in strength until they are allegedly lethal. People are obviously distressed in this task.

Yesterday, the New Scientist called the classic Milgram experiment one of ‘the most unethical [experiments] ever carried out’. Why is this okay for TV? Now, imagine a hybrid case. Would it be okay if the behaviour shown on TV were scientifically analysed and published in a respectable journal? I guess that would somehow be fine. Why is it okay to run the study with a TV camera involved, but not when the camera is switched off? This is not a rhetorical question. I actually do not grasp the underlying principle.

Why is ‘experimenting on people’ bad?

In my experience, ethical guidelines are a real burden on researchers. And this is a good thing, because society holds researchers to a high ethical standard. Practically all modern research on humans involves strong ethical safeguards. Compare this to business and media. I do not understand why projects seeking private gains (profit for shareholders) are held to a lower ethical standard than research. Surely the generation of public knowledge is more in the public interest than private profit making or TV entertainment.

— — —

Kramer, A.D., Guillory, J.E., & Hancock, J.T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111(24), 8788-8790. PMID: 24889601

Milgram, S. (1963). Behavioral Study of Obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. DOI: 10.1037/h0040525

— — —

Picture: from http://www.geripal.org/2014/07/informed-consent-in-social-media.html

How to increase children’s patience in 5 seconds

A single act increases adults’ compliance with researchers. The same act makes students more likely to volunteer to solve math problems in front of others. Moreover, it makes four-year-olds more patient. What sounds like a miracle cure for everyday problems is actually the oldest trick in the book: human touch.

How do researchers know this? Here is one experiment. In a recently published study (Leonard et al., 2014), four- and five-year-old children were asked to wait for ten minutes in front of candy. The experimenter told them to wait before eating the candy because she had to finish paperwork. How long would children wait before calling the experimenter back in because they wanted to eat the candy earlier? Four-year-olds waited for about six minutes while five-year-olds waited for about eight minutes. The task was similar to the classic marshmallow test shown in the video.

 

The positive effect of touch

However, these averages conceal what really mattered: whether the experimenter gave children a friendly touch on the back while asking them to wait. If she did, four-year-olds waited for seven minutes (versus five minutes without touch) and five-year-olds waited for nine minutes (versus seven minutes without touch). A simple, five-second-long touch made four-year-olds behave as patiently as five-year-olds. It is surprising how simple and fast the intervention is.

Touch across the ages

This result fits nicely into a wider literature on the benefits of a friendly touch. Already back in the eighties, Patterson and colleagues (1986) found that adults spent more time helping with the tedious task of scoring personality tests if they were touched by the experimenter. Interestingly, the touch on the shoulder was hardly ever reported as noteworthy. In the early noughties Guéguen picked this effect up and moved it to the real world. He showed that touch also increases adults’ willingness to help by looking after a large dog (Guéguen & Fischer-Lokou, 2002) as well as students’ willingness to volunteer to solve a math problem in front of a class (Guéguen, 2004).

The reason underlying these effects remains a bit mysterious. Does the touch on the back reduce the anxiety of being faced with a new, possibly difficult, task? Does it increase the rapport between experimenter and experimental participant? Does it make time fly by because being touched feels good? Well, time will tell.

Touch your child?

Touching people carries obvious sexual connotations, and unfortunately this includes touching children. As a result, some schools in the UK have adopted a ‘no touch’ policy: teachers are never allowed to touch children. This research suggests that such an approach comes at a cost: children behave less patiently when they are not touched. Should society deny itself the benefits of people innocently touching each other?

————————————————————————————————————————–

Guéguen, N., & Fischer-Lokou, J. (2002). An evaluation of touch on a large request: a field setting. Psychological Reports, 90(1), 267-269. PMID: 11898995

Guéguen, N. (2004). Nonverbal Encouragement of Participation in a Course: the Effect of Touching. Social Psychology of Education, 7(1), 89-98. DOI: 10.1023/B:SPOE.0000010691.30834.14

Leonard, J.A., Berkowitz, T., & Shusterman, A. (2014). The effect of friendly touch on delay-of-gratification in preschool children. Quarterly Journal of Experimental Psychology, 1-11. PMID: 24666195

Patterson, M., Powell, J., & Lenihan, M. (1986). Touch, compliance, and interpersonal affect. Journal of Nonverbal Behavior, 10(1), 41-50. DOI: 10.1007/BF00987204


How to ask a conference question

Many people are too shy to ask a question after a talk. They may think that many questions are unnecessary, self-important or off topic. Well, that is true. However, that shouldn’t stop anyone from joining in. With this guide anyone is guaranteed to be able to ask a perfectly normal question at any conference in Psychology/Cognitive Neuroscience and beyond.

 

conference, question, speaker, talk, cartoon

 

Beginning formula

Was the talk any good?

Yes: “I really liked your talk. …”

No: “I really liked your talk. …”

 

Main question

Did the presented study use animal models?

Yes: “How could this research be done with humans and what would you predict to happen?”

No: “What would be a good animal model for this topic, and couldn’t this resolve some of the methodological issues of your design?”

 

Was the research fundamental (non-applied)?

Yes: “What would be a practical application of these results?”

No: “What is the underlying mechanism behind these results?”

 

Was the research done on children?

Yes: “What do your results say about adult processing?”

No: “What would be the developmental time course of these effects?”

 

Did the study only use typical Western student participants?

Yes: “Have you thought about whether these effects will also hold up in non-Western cultures?”

No: “Have you looked in more detail at whether the Western sample itself may contain subgroups?”

 

Joker:

“Could you go back to slide 6 and explain something for me?” [Wait for the speaker to scroll back and ask what a figure actually means. If there is no figure on slide 6, ask to go one slide further. Repeat until a slide with a figure appears.]

 

If all fails:

Talk at length about your own research, followed by “this is less of a question and more of a comment”.

 

Behaviour after question

Could the presenter influence your career?

Yes: Hold eye contact and nod (whatever s/he says).

No: Check your smartphone for brainsidea updates.

——————————————————————————————-
Picture: from twitter (https://twitter.com/tammyingram/status/343868282538954752/photo/1). Original source unknown.

The 10,000-Hour rule is nonsense

Have you heard of Malcolm Gladwell’s 10,000-hour rule? The key to success in any field is supposedly practice, and not just a little: 10,000 hours of it. A new publication in the journal Psychological Science takes a good look at all the evidence and concludes that this rule is nonsense. No Einstein in you, I am afraid.

Albert Einstein, by Doris Ulmann

Did he just practice a lot?

The authors of the new publication wanted to look at all major areas of expertise in which the relationship between practice and performance has been investigated: music, games, sports, professions, and education. They gathered all 88 scientific articles available at this point and performed one big analysis on the accumulated data of 11,135 participants: a meta-analysis with a huge sample.

The take-home number is 12%. The amount of practice you do explains only 12% of the variance in performance on a given task. From the 10,000-hour rule I expected at least 50%. And this low number of 12% is not due to fishy methods in some low-quality articles that were included. Actually, the better the method used to assess the amount of practice, the lower the apparent effect of practice. The same goes for the method used to assess performance on the practiced task.
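To make the 12% figure concrete: variance explained is the squared correlation between practice and performance, so 12% corresponds to a correlation of roughly 0.35. Here is that arithmetic as a minimal sketch (the 12% and 50% figures come from the text above; the r² conversion is the standard one):

```python
import math

# Share of performance variance explained by deliberate practice (meta-analytic headline figure).
variance_explained = 0.12

# Variance explained equals the squared correlation, so the implied
# practice-performance correlation is its square root.
print(f"r implied by 12% explained: {math.sqrt(variance_explained):.2f}")  # ~0.35

# For comparison: the share I would have expected under the 10,000-hour rule.
print(f"r implied by 50% explained: {math.sqrt(0.50):.2f}")  # ~0.71
```

A correlation of about 0.35 is real, but it leaves most of the differences between people unexplained.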

However, one should differentiate between different kinds of activities, because practice can have a bigger effect in some than in others. For example, if the context in which the task is performed is very stable (e.g., running), 24% of the variance in performance is explained by practice. Unstable contexts (e.g., handling an aviation emergency) push this down to 4%. The area of expertise also made a difference:

  • games: 26%
  • music: 21%
  • sports: 18%
  • education: 4%
  • professions: 1%

In other words, the 10,000-hour rule is nonsense. Stop believing in it. Sure, practice is important. But other factors (age? intelligence? talent?) appear to play a bigger role.

Personally, I have decided not to become a chess master by practicing chess for 10,000 hours or more. I would rather focus on activities that play to my strengths. Let’s hope that blogging is one of them.

————————————————————————————————————
Macnamara, B.N., Hambrick, D.Z., & Oswald, F.L. (2014). Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis. Psychological Science. DOI: 10.1037/e633262013-474


————————————————————————————————————
“Albert Einstein, by Doris Ulmann” by Doris Ulmann (1882–1934) – Library of Congress, Prints & Photographs Division [reproduction number LC-USZC4-4940]. Licensed under Public domain via Wikimedia Commons.

Play music and you’ll see more

 

Check out the video. It is a short demonstration of the so-called attentional blink. Whenever you try to spot the two letters in the rapid sequence, you will most likely miss the second one. This effect is so robust that generations of psychology undergraduates have learned about it. And then came music and changed everything.

Test your own attentional blink

Did you see the R in the video? Probably you did, but did you see the C? The full sequence starts at 0:48, the R occurs at 0:50 and the sequence ends at 0:53. As far as I can tell, each letter is presented for about 130 milliseconds (a typical rate for this sort of experiment).

Z P J E M S B S W P E R X C H W Z H B J P S W E Z S W H B P X J H E B P W Z S
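If you cannot access the video, here is a minimal sketch of such a rapid serial visual presentation (RSVP) stream in a terminal. The letter sequence and the ~130 ms rate are taken from the clip; printing to the console with time.sleep is just a rough approximation of the real display timing:

```python
import sys
import time

# Letter stream from the video; R is the first target, C the second.
STREAM = "Z P J E M S B S W P E R X C H W Z H B J P S W E Z S W H B P X J H E B P W Z S".split()
LETTER_DURATION_S = 0.13  # roughly 130 ms per letter, as in the clip

for letter in STREAM:
    # Overwrite the same console position so each letter replaces the last,
    # mimicking rapid serial visual presentation.
    sys.stdout.write("\r" + letter)
    sys.stdout.flush()
    time.sleep(LETTER_DURATION_S)

sys.stdout.write("\rDone. Apart from the R, which letters did you spot?\n")
```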

Judging by the YouTube comments, only 65% of those who did the task properly (14 comments when I checked) saw the C. This is remarkably close to the average performance during an attentional blink (around 60% or so).

Where does the attentional blink come from?

The idea is that when the C is presented it cannot enter attention because attention is busy with the R. Another theory states that you immediately forget that you’ve seen the C. The R is less vulnerable to rapid forgetting.

What does music do with our attention?

In 2005, Christian Olivers and Sander Nieuwenhuis reported that they could simply abolish this widely known effect by playing a rhythmic tune in the background (unfortunately no more details are given). Try it out yourself. Switch on the radio and play a song with a strong beat. Now try the video again. Can you see both the R and the C? The 16 people in the music condition of Olivers and Nieuwenhuis could. Music actually let them see things that were invisible without it.

It is a bit mysterious why music should have such an effect. The article only speculates that it has something to do with music inducing a more ‘diffuse’ state of mind, greater arousal, or positive mood. I think the answer lies somewhere else. Music, especially a song with a strong beat, changes how we perceive the world. On the beat (i.e. when most people would clap along) one pays more attention than off the beat. What music might have done to participants is restructure their attention. Once the R occurs, it can no longer dominate attention because people are following the rhythmic attentional structure.

Behind my explanation is the so-called dynamic attending theory. Unfortunately, Olivers and Nieuwenhuis appear not to be familiar with it. Perhaps it is time to include some music cognition lessons in psychology undergraduate classes. After all, a bit of music lets you see things which otherwise remain hidden from you.

———————————————————————————————–

Jones, M., & Boltz, M. (1989). Dynamic attending and responses to time. Psychological Review, 96(3), 459-491. DOI: 10.1037/0033-295X.96.3.459

Large, E., & Jones, M. (1999). The dynamics of attending: How people track time-varying events. Psychological Review, 106(1), 119-159. DOI: 10.1037/0033-295X.106.1.119

Olivers, C., & Nieuwenhuis, S. (2005). The Beneficial Effect of Concurrent Task-Irrelevant Mental Activity on Temporal Attention. Psychological Science, 16(4), 265-269. DOI: 10.1111/j.0956-7976.2005.01526.x


Everything you always wanted to know about language but were too afraid to ask

MPI Nijmegen

The MPI in Nijmegen: the origin of answers to your questions.

The Max Planck Institute in Nijmegen has started a great initiative which attempts nothing less than to answer all your questions about language. How does it work?

1) Go to this website: http://www.mpi.nl/q-a/questions-and-answers
2) See whether your question has already been answered.
3) If not, scroll to the bottom and ask a question yourself.

The answers are not provided by just anybody but by language researchers themselves. Before they are put on the web they are checked by another researcher and translated into German, Dutch and English. It’s a huge enterprise, to be sure.

As an employee of the Max Planck Institute I’ve had my own go at answering a few questions:

- How does manipulating through language work?
- Is it true that people who are good at music can learn a language sooner?
- How do gender articles affect cognition?

What do you think of my answers? What questions would you like to see answered?

 

——————————————————————————————————————–

Thibodeau, P., & Boroditsky, L. (2011). Metaphors We Think With: The Role of Metaphor in Reasoning. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0016782

Asaridou, S., & McQueen, J. (2013). Speech and music shape the listening brain: evidence for shared domain-general mechanisms. Frontiers in Psychology, 4. DOI: 10.3389/fpsyg.2013.00321

Segel, E., & Boroditsky, L. (2011). Grammar in Art. Frontiers in Psychology, 1. DOI: 10.3389/fpsyg.2010.00244

 
