Why ear plugs are great for clubbing and concerts

I enjoy clubbing and pop/rock concerts exclusively with my ear plugs in. Does that mean I miss out? No, I enjoy the music exactly as it is meant to be.

Ear plugs

Picture by Melianis at fi.wikipedia (CC BY 2.5)

Since 2004, Urban Dictionary has included the term ‘deaf rave’ to describe a ‘rave, or party, organised by deaf people for deaf people, though hearing people are invited also’. Deaf people at a rave? Do they come for the flashy lights? No, the phenomenon behind deaf people’s enjoyment of raves is at the heart of why I wear ear plugs when going clubbing.

Deaf people enjoy loud music – i.e. strong air vibrations – through their skin, an organ that signals vibratory input. Hearing people’s skin is no different, but we often fail to notice our skin-hearing because ear-hearing, with its greater sensitivity, trumps it. However, once the volume is cranked up, as at many night clubs and concerts, the skin can do remarkable things.

For example, ordinary people can distinguish instruments whose sounds they can only feel on their backs, something even deaf people can do (Russo et al., 2012). Moreover, ear-hearing can be affected by skin-hearing. When hearing different rhythms through the skin and the ears, people are worse at distinguishing the currently heard rhythm from a previous one than when just ear-hearing the current rhythm (Huang et al., 2013). Thus, the skin is an important organ for music listening. You cannot just ignore it.

All I do when putting in ear plugs at the night club is give my skin a slight advantage. And this advantage makes the music more intimate. Think about it: the skin is an organ which usually only reacts to objects which are extremely close. Compare this to our ears and eyes, which react to objects far away. Seeing and ear-hearing a band is something we do at a distance. Skin-hearing a band creates an illusory proximity, as if the music was right there on your skin.

ALEX_NILSON.jpg

Picture by Darshan08 (CC BY-SA 3.0) via Wikimedia Commons

I believe that this illusory proximity through skin-hearing is a major motivation behind the loudness one experiences in clubs and at concerts. Ear plugs are great for your intimate full-body experience of the music. The loudness of the music is not meant for the ears. The proof of this seemingly nonsensical statement lies in the statistics of hearing loss. About half of the people exposed to loud music at work have some hearing loss. This includes the musicians themselves, whether classical or rock/pop. And the audience is not immune either: the majority of rock concert attendees experience temporary auditory problems such as tinnitus or being hard of hearing (Zhao et al., 2010).

Clubbing and pop/rock concert music is simply too loud for unprotected ears. It is meant for the skin. Give your skin an advantage and protect your hearing with a simple, cheap, handy device: ear plugs.

— — —

Huang J, Gamble D, Sarnlertsophon K, Wang X, & Hsiao S (2013). Integration of auditory and tactile inputs in musical meter perception. Advances in experimental medicine and biology, 787, 453-61 PMID: 23716252

Russo FA, Ammirante P, & Fels DI (2012). Vibrotactile discrimination of musical timbre. Journal of experimental psychology. Human perception and performance, 38 (4), 822-6 PMID: 22708743

Zhao F, Manchaiah VK, French D, & Price SM (2010). Music exposure and hearing disorders: an overview. International journal of audiology, 49 (1), 54-64 PMID: 20001447


The growing divide between higher and low impact scientific journals

Ten years ago the Public Library of Science started one big, lower impact journal and a series of smaller, higher impact journals. Over the years these publication outlets diverged. The growing divide between standard and top journals might mirror wider trends in scholarly publishing.

There are roughly two kinds of journals in the Public Library of Science (PLoS): low impact (IF = 3.06) and higher impact (3.9 < IF < 13.59) journals. There is only one low impact journal, PLoS ONE, which is bigger in terms of output than all the other PLoS journals combined. Its editorial policy is fundamentally different from that of the higher impact journals in that it does not require novelty or ‘strong results’. All it requires is methodological soundness.

Comparing PLoS ONE to the other PLoS journals thus offers the opportunity to plot the growing divide between ‘high impact’ and ‘standard’ research papers. I will follow the hypothesis that more and more information is required for a publication (Vale, 2015). More information should be reflected in three measures: the number of references, authors, and pages.

And indeed, the higher impact PLoS journal articles have longer and longer reference sections: a rise of 24%, from 46 to 57 references, over the last ten years (Pearson r = .11, Spearman rho = .11). See my previous blog post for a similar pattern in a high impact journal outside of PLoS.
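In case you want to recompute such a trend, it boils down to correlating publication dates with reference counts. Here is a minimal sketch in R; it is not my original analysis code (the web-app linked at the end of this post gives you the real data), and the data frame and column names are assumptions:

```r
# A minimal sketch, not the original analysis code: it assumes a data frame
# 'plos' with one row per article and hypothetical columns 'journal',
# 'pub_date' (class Date) and 'n_references'.
high_impact <- subset(plos, journal != "PLoS ONE")

# correlate publication date with reference section length
cor.test(as.numeric(high_impact$pub_date), high_impact$n_references,
         method = "pearson")   # reported above as r = .11
cor.test(as.numeric(high_impact$pub_date), high_impact$n_references,
         method = "spearman")  # reported above as rho = .11
```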

plos-not-one_more-and-more-references-over-time

The lower impact PLoS ONE articles, on the other hand, hardly changed over the same period (Pearson r = .01, Spearman rho = -.00).

plos-one_same-references-over-time

The diverging pattern between higher and low impact journals can also be observed with the number of authors per article. While in 2006 the average article in a higher impact PLoS journal was authored by 4.7 people, the average article in 2016 was written by 7.8 authors, a steep rise of 68% (Pearson r = .12, Spearman rho = .19).

plos-not-one_more-and-more-authors-over-time

And again, the low impact PLoS ONE articles do not exhibit the same change, remaining more or less constant (Pearson r = .01, Spearman rho = .02).

plos-one_same-author-count-over-time

Finally, the number of pages per article tells the same story of runaway information density in higher impact journals and little to no change in PLoS ONE. Limiting myself to articles published until late November 2014 (when layout changes complicate the comparison), average article length grew substantially in the higher impact journals (Pearson r = .16, Spearman rho = .13) but not in PLoS ONE (Pearson r = .03, Spearman rho = .02).

plos-not-one-is-getting-longer

plos-one-is-not-getting-longer

So, overall, it is true that more and more information is required for a publication in a high impact journal. No similar rise in information density is seen in PLoS ONE. The publication landscape has changed. More effort is now needed for a high impact publication compared to ten years ago.

Wanna explore the data set yourself? I made a web-app which you can use in RStudio or in your web browser. Have fun with it and tell me what you find.

— — —
Vale, R.D. (2015). Accelerating scientific publication in biology. Proceedings of the National Academy of Sciences, 112 (44), 13439-13446 DOI: 10.1073/pnas.1511912112

The slowing down of the biggest scientific journal

PLoS ONE started 11 years ago to disruptively change scholarly publishing. By now it is the biggest scientific journal out there. Why has it become so slow?

Many things changed at PLoS ONE over the years, reflecting general trends in how researchers publish their work. For one thing, PLoS ONE grew enormously. After publishing only 137 articles in its first year, the journal’s annual output peaked in 2013 at 31,522 articles.

plos-one_publication-count-per-year

However, as shown in the figure above, output has since declined by nearly a third: in 2016 only 21,655 articles were published in PLoS ONE. The decline could be due to a novel open data policy implemented in March 2014, a slight increase in the cost to publish in October 2015, or a generally more crowded marketplace for open access mega journals like PLoS ONE (Wakeling et al., 2016).

However, it might also be that authors are becoming annoyed with PLoS ONE for getting slower. In its first year, it took 95 days on average to get an article from submission to publication in PLoS ONE. In 2016 it took a full 172 days. PLoS ONE is thus no longer the fastest journal published by PLoS, a title it had held for nine years.
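The speed measure itself is simple: the number of days between submission and publication, averaged per year. A minimal R sketch, with the data frame and column names as assumptions:

```r
# A sketch only: 'received' and 'published' are assumed Date columns in a
# hypothetical data frame 'plos_one' with one row per article.
plos_one$days_to_publication <-
  as.numeric(plos_one$published - plos_one$received)

# mean submission-to-publication lag per publication year
aggregate(days_to_publication ~ format(published, "%Y"),
          data = plos_one, FUN = mean)
```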

plos-one-rev-time-versus-other-journals-rev-time

The graph below shows the development of PLoS ONE in more detail by plotting each article’s review and publication speed against its publication date, i.e. each blue dot represents one of the 159,000 PLoS ONE articles.

plos-one-is-getting-slower

What can explain the increasingly poor publication speed of PLoS ONE? Most people might think it is the sheer volume of manuscripts the journal has to process: handling more articles might simply slow a journal down. However, the slow down continued until 2015, i.e. beyond the 2013 peak in publication output. Below, I show a more thorough analysis which reiterates this point. The plot relates each PLoS ONE article’s time from submission to publication to the number of articles published around the same time (30 days before and after). There is a link, for sure (Pearson r = .13, Spearman rho = .15), but it is much weaker than I would have thought.

plos-one_output-competition-versus-speed

Moreover, when controlling for publication date via a partial correlation, the pattern above becomes much weaker (partial Pearson r = .05, partial Spearman rho = .11). This suggests that much of PLoS ONE’s slowdown is simply tied to the passage of time. Perhaps scientific articles changed during this period, requiring more time to evaluate whether they are suitable for the journal.
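For those wanting to replicate this kind of control analysis, the ppcor package offers partial correlations directly. A sketch under assumed column names, not my actual code:

```r
# Partial correlation sketch with the 'ppcor' package. 'n_concurrent' is an
# assumed column holding the number of articles published within +/- 30 days.
library(ppcor)

pcor.test(x = plos_one$days_to_publication,
          y = plos_one$n_concurrent,
          z = as.numeric(plos_one$published),  # control for publication date
          method = "pearson")                  # partial r = .05 above
pcor.test(x = plos_one$days_to_publication,
          y = plos_one$n_concurrent,
          z = as.numeric(plos_one$published),
          method = "spearman")                 # partial rho = .11 above
```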

For example, it might be that articles these days include more information, which takes scientific peers longer to assess. More information should be reflected in three measures: the number of authors (information contributors), the reference count (information links), and the page count (space for information). However, the number of authors per article has not changed over the years (Pearson r = .01, Spearman rho = .02). Similarly, there is no increase in the length of reference sections (r = .01; rho = -.00). Finally, while articles have indeed become longer in terms of page count (see graph below), the change is probably just due to a new layout introduced in January 2015.

plos-one-is-getting-longer

Perhaps it takes longer to go through peer review at PLoS ONE these days because modern articles are increasingly complex and interdisciplinary. A very small but reliable correlation between the number of subject categories per article and publication date supports this possibility somewhat, see below. PLoS ONE may simply find it increasingly difficult to identify the right experts to assess the scientific validity of an article, because articles have become harder to pin down in terms of the expertise they require.

plos-one-is-getting-a-tiny-bit-more-interdisciplinary

Having celebrated its 10-year anniversary, PLoS ONE can be proud to have revolutionized scholarly publishing. However, whether PLoS ONE itself will survive in the new publishing environment it helped to create remains to be seen. The slowing down of its publication process is certainly a sign that PLoS ONE needs to up its game in order to remain competitive.

Wanna explore the data set yourself? I made a web-app which you can use in RStudio or in your web browser. Have fun with it and tell me what you find.

— — —
Wakeling S, Willett P, Creaser C, Fry J, Pinfield S, & Spezi V (2016). Open-Access Mega-Journals: A Bibliometric Profile. PloS one, 11 (11) PMID: 27861511

Does twitter or facebook activity influence scientific impact?

Are scientists smart when they promote their work on social media? Isn’t this a waste of time, time which could otherwise be spent in the lab running experiments? Perhaps not. An analysis of all available articles published by PLoS journals suggests that at least some of this time is well spent.

My own twitter activity might best be thought of as learning about science (in the widest sense), while what I do on facebook is really just shameless procrastination. It turns out that this pattern holds more generally and has implications for how to use social media effectively to promote science.

In order to make this claim, I downloaded the twitter and facebook activity associated with every single article published in any journal by the Public Library of Science (PLoS), using this R-script here. PLoS is the open access publisher of the biggest scientific journal, PLoS ONE, as well as a number of smaller, higher impact journals. The huge amount of data gives me a 90% chance of discovering even a small effect (r = .1) if it actually exists.
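A claim like this can be checked with a standard power calculation, for example with the pwr package (a sketch, not the script I actually used):

```r
# How many observations buy a 90% chance of detecting r = .1 at alpha = .05?
library(pwr)

pwr.r.test(r = 0.1, sig.level = 0.05, power = 0.90)
# n comes out at roughly 1,000; the sample analysed here is far larger
```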

I should add that I limited my sample to articles published between May 2012 (which is when PLoS started tracking tweets) and January 2015 (in order to allow for at least two years to aggregate citations). The 87,649 remaining articles published in any of the PLoS journals offer the following picture.

plos-all_tweets-versus-citations

There is a small but non-negligible association between impact on twitter (tweets) and impact in the scientific literature (citations): Pearson r = .12, p < .001; Spearman rho = .18, p < .001. This pattern held for nearly every PLoS journal individually as well (all Pearson r ≥ .10 except for PLoS Computational Biology; all Spearman rho ≥ .12 except for PLoS Pathogens). This result is in line with Peoples et al.’s (2016) analysis of twitter activity and citations in the field of ecology.

So, twitter might indeed help a bit to promote an article. Does this hold for social media in general? A look at facebook reveals a different picture. The relationship between facebook mentions of an article and its scientific impact is so small as to be practically negligible: Pearson r = .03, p < .001; Spearman rho = .06, p < .001. This pattern of only a tiny association between facebook mentions and citations held for every single PLoS journal (Pearson r ≤ .09, Spearman rho ≤ .08).
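If you want to recompute these associations, they reduce to a few calls to cor.test; the data frame ‘plos’ and its columns are assumptions for illustration:

```r
# Sketch of the correlations reported in this post, on assumed column names
# 'tweets', 'fb_mentions', 'citations' and 'journal'.
cor.test(plos$tweets, plos$citations, method = "pearson")        # r = .12
cor.test(plos$tweets, plos$citations, method = "spearman")       # rho = .18
cor.test(plos$fb_mentions, plos$citations, method = "pearson")   # r = .03
cor.test(plos$fb_mentions, plos$citations, method = "spearman")  # rho = .06

# the same pattern, journal by journal
sapply(split(plos, plos$journal),
       function(d) cor(d$tweets, d$citations, use = "complete.obs"))
```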

plos-all_fb-versus-citations

In conclusion, twitter can be used for promoting your scientific work in an age of increased competition for scientific reading time (Renear & Palmer, 2009). Facebook, on the other hand, can be used for procrastinating.

Wanna explore the data set yourself? I made a web-app which you can use in RStudio or in your web browser. Have fun with it and tell me what you find.

— — —
Peoples BK, Midway SR, Sackett D, Lynch A, & Cooney PB (2016). Twitter Predicts Citation Rates of Ecological Research. PloS one, 11 (11) PMID: 27835703

Renear AH, & Palmer CL (2009). Strategic reading, ontologies, and the future of scientific publishing. Science (New York, N.Y.), 325 (5942), 828-32 PMID: 19679805

Why does music training increase intelligence?

We know that music training causes intelligence to increase, but why? In this post I 1) propose a new theory, and 2) falsify it immediately. Given that this particular combination of activities is unpublishable in any academic journal, I invite you to read the whole story here (in under 500 words).

1) Proposing the ISAML

Incredible but true: music lessons improve the one thing that determines why people who are good at one task tend to be better at another task as well: IQ (Schellenberg, 2004; Kaviani et al., 2013; see coverage in a previous blog post). Curiously, I have never seen an explanation for why music training would benefit intelligence.

I propose the Improved Sustained Attention through Music Lessons hypothesis (ISAML). The ISAML hypothesis claims that all tasks related to intelligence depend to some degree on people attending to them continuously, an ability called sustained attention. A lapse of attention, caused by insufficient sustained attention, leads to suboptimal answers on IQ tests. Given that music is related to the structuring of attention (Jones & Boltz, 1989) and removes attentional ‘gaps’ (Olivers & Nieuwenhuis, 2005; see coverage in a previous blog post), music training might help attentional control and, thus, increase sustained attention. This in turn might have a positive impact on intelligence, see boxes and arrows in Figure 1.

music_training_IQ_link

Figure 1. The Improved Sustained Attention through Music Lessons hypothesis (ISAML) in a nutshell. Arrows represent positive associations.

The ISAML does not predict that intelligence is the same as sustained attention. Instead, it predicts that:

a) music training increases sustained attention

b) sustained attention is associated with intelligence

c) music training increases intelligence

2) Evaluating the ISAML

Prediction c is already supported, see above. Does anyone know something about prediction b? Here, I shall evaluate prediction a: does music training increase sustained attention? So far, the evidence looks inconclusive (Carey et al., 2015). Therefore, I will turn to a data set of my own, gathered in a project together with Suzanne R. Jongman (Kunert & Jongman, 2017).

We used a standard test of sustained attention: the digit discrimination test (Jongman et al., 2015). Participants had the mind-bogglingly boring task of clicking a button every time they saw a zero while watching single digits appear one after another on the screen for ten minutes. Low sustained attention ability is thought to be reflected in worse performance (higher reaction time to the digit zero) at the end of the testing session compared to the beginning, or in overall high reaction times.

Unfortunately for the ISAML, it turns out that there is absolutely no relation between musical training and sustained attention. As you can see in Figure 2A, the reaction time (logged) decrement between the first and last half of reactions to zeroes is not related to years of musical training [Pearson r = .03, N = 362, p = .61, 95% CI = [-.076; .129], JZS BF01 with default prior = 7.59; Spearman rho = .05]. The same holds for mean reaction time (logged), see Figure 2B [Pearson r = .02, N = 362, p = .74, 95% CI = [-.086; .120], JZS BF01 = 8.181; Spearman rho = .03].
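For readers who want to run the same kind of test on their own data, here is a minimal sketch of the Figure 2A analysis. It is not our published analysis code, and the data frame ‘d’ with its columns is an assumption; the BayesFactor package provides a JZS correlation Bayes factor:

```r
# Sketch: 'd' is an assumed data frame with one row per participant;
# 'rt_decrement' is the logged RT difference (last minus first half of
# responses to zeroes), 'training_years' the years of musical training.
library(BayesFactor)

cor.test(d$rt_decrement, d$training_years)   # r = .03, p = .61 above

# Bayes factor in favour of the null (BF01), default prior scale
bf10 <- correlationBF(d$rt_decrement, d$training_years)
1 / extractBF(bf10)$bf                       # reported above as 7.59
```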

fig-2

Figure 2. The correlation between two different measures of sustained attention (vertical axes) and musical training (horizontal axes) in a sample of 362 participants. High values on vertical axes represent low sustained attention, i.e. the ISAML predicts a negative correlation coefficient. Neither correlation is statistically significant. Light grey robust regression lines show an iterated least squares regression which reduces the influence of unusual data points.
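The light grey lines in Figure 2 come from an iterated least squares procedure; MASS::rlm is one standard implementation of such a robust fit (again a sketch on the assumed columns above):

```r
# Robust regression line: rlm iteratively re-weights least squares so that
# unusual data points carry less weight.
library(MASS)

rlm(rt_decrement ~ training_years, data = d)
```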

3) Conclusion

Why on earth is musical training related to IQ increases? I have no idea. The ISAML is not a good account for the intelligence boost provided by music lessons.

— — —

Carey, D., Rosen, S., Krishnan, S., Pearce, M., Shepherd, A., Aydelott, J., & Dick, F. (2015). Generality and specificity in the effects of musical expertise on perception and cognition Cognition, 137, 81-105 DOI: 10.1016/j.cognition.2014.12.005

Jongman, S., Meyer, A., & Roelofs, A. (2015). The Role of Sustained Attention in the Production of Conjoined Noun Phrases: An Individual Differences Study PLOS ONE, 10 (9) DOI: 10.1371/journal.pone.0137557

Jones, M., & Boltz, M. (1989). Dynamic attending and responses to time. Psychological Review, 96 (3), 459-491 DOI: 10.1037//0033-295X.96.3.459

Kaviani, H., Mirbaha, H., Pournaseh, M., & Sagan, O. (2013). Can music lessons increase the performance of preschool children in IQ tests? Cognitive Processing, 15 (1), 77-84 DOI: 10.1007/s10339-013-0574-0

Kunert R, & Jongman SR (2017). Entrainment to an auditory signal: Is attention involved? Journal of experimental psychology. General, 146 (1), 77-88 PMID: 28054814

Olivers, C., & Nieuwenhuis, S. (2005). The Beneficial Effect of Concurrent Task-Irrelevant Mental Activity on Temporal Attention Psychological Science, 16 (4), 265-269 DOI: 10.1111/j.0956-7976.2005.01526.x

Schellenberg, E.G. (2004). Music Lessons Enhance IQ. Psychological Science, 15 (8), 511-514 DOI: 10.1111/j.0956-7976.2004.00711.x

— — —

The curious effect of a musical rhythm on us

Do you know the feeling of a musical piece moving you? What is this feeling? One common answer by psychological researchers is that what you feel is your attention moving in sync with the music. In a new paper I show that this explanation is mistaken.

Watch the start of the following video and observe carefully what is happening in the first minute or so (you may stop it after that).

Noticed something? Nearly everyone in the audience moved to the rhythm: clapping, moving their heads, and so on. And you? Did you move? I guess not. You probably just watched carefully what people were doing instead. Your reaction nicely illustrates how musical rhythms affect people according to psychological researchers. One very influential theory claims that your attention moves up and down in sync with the rhythm. The theory treats the rhythm the way you just treated it: it simply ignores the fact that most people love moving to the rhythm.

The theory: a rhythm moves your attention

Sometimes we have gaps of attention. Sometimes we manage to concentrate really well for a brief moment. A very influential theory, which has been supported in various experiments, claims that these fluctuations in attention are synced to the rhythm when hearing music. Attention is up at rhythmically salient moments, e.g., the first beat in each bar. And attention is down during rhythmically unimportant moments, e.g., off-beat moments.

This makes intuitive sense. Important tones, e.g., those determining the harmonic key of a music piece, tend to occur at rhythmically salient moments. Looking at language rhythm reveals a similar picture. Stressed syllables are important for understanding language and signal moments of rhythmic salience. It makes sense to attend well during moments which include important information.

The test: faster decisions and better learning?

Together with Suzanne Jongman, I asked whether attention really is up at rhythmically salient moments. If so, people should make decisions faster when a background rhythm reaches a moment of rhythmic importance, as if they briefly concentrated better at that moment. This is indeed what we found: people are faster at judging whether a few letters on the screen form a real word or not if the letters are shown near a salient moment of a background rhythm, compared to another moment.

However, we went further. People should also learn new words better if the words are shown near a rhythmically salient moment. This turned out not to be the case. Whether people have to memorise a new word at a moment when their attention is allegedly up or down (according to a background rhythm) does not matter: learning is just as good either way.

What is more, even those people who react really strongly to the background rhythm, speeding up their decisions at rhythmically salient moments (red square in the figure below), do not learn new words any better at those same moments.

It’s as if the speed-up of decisions is unrelated to the learning of new words. That’s weird because both tasks are known to be affected by attention. This makes us doubt that a rhythm affects attention. What could it affect instead?

fig5e_blog

Figure 1. Every dot is one of 60 participants. How much a background rhythm sped up responses is shown horizontally. How much the same rhythm, at the same time, facilitated pseudoword memorisation is shown on the vertical axis. The red square singles out the people who were most affected by the rhythm in terms of their decision speed. Notice that, at the same time, their learning is unaffected by the rhythm.

The conclusion: a rhythm does not move your attention, it moves your muscles

To our own surprise, a musical rhythm appears not to affect how your attention moves up and down, when your attentional lapses happen, or when you can concentrate well. Instead, it simply appears to affect how fast you can press a button, e.g., when indicating a decision whether a few letters form a word or not.

Thinking back to the video at the start, I guess this just means that people love moving to the rhythm because the urge to do so is a direct consequence of understanding a rhythm. Somewhere in the auditory and motor parts of the brain, rhythm processing happens. However, this has nothing to do with attention. This is why learning a new word shown on the screen – a task without an auditory or motor component – is not affected by a background rhythm.

The paper: the high point of my career

You may read all of this yourself in the paper (here). I will admit that in many ways this paper is how I like to see science done, so I will shamelessly tell you of its merits. The paper is not too long (7,500 words) but includes no fewer than four experiments with 60 participants each. Each experiment tests the research question individually. However, the experiments build on each other in such a way that their combination makes the overall paper stronger than any experiment individually ever could.

In terms of analyses, we put in everything we could think of. All analyses are Bayesian (subjective Bayes factors) and frequentist (p-values). We report hypothesis testing analyses (Bayes factors, p-values) and parameter estimation analyses (effect sizes, confidence intervals, credible intervals). If you can think of yet another analysis, go for it: we publish the raw data and analysis code alongside the article.

The most important reason why this paper represents my favoured approach to science, though, is because it actually tests a theory. A theory I and my co-author truly believed in. A theory with a more than 30-year history. With a varied supporting literature. With a computational model implementation. With more than 800 citations for two key papers. With, in short, everything you could wish to see in a good theory.

And we falsified it! Instead of dismissing the learning task as ‘insensitive’ or as ‘a failed experiment’, we dug deeper and could not help but conclude that the attention theory of rhythm perception is probably wrong. We actually learned something from our data!

PS: no-one is perfect and neither is this paper. I wish we had pre-registered at least one of the experiments. I also wish the paper was open access (see a free copy here). There is room for improvement, as always.

— — —
Kunert R, & Jongman SR (2017). Entrainment to an auditory signal: Is attention involved? Journal of experimental psychology. General, 146 (1), 77-88 PMID: 28054814

How to write a Nature-style review

Nature Reviews Neuroscience is one of the foremost journals in neuroscience. What do its articles look like? How have they developed? This blog post provides answers which might guide you in writing your own reviews.

Read more than you used to

Reviews in Nature Reviews Neuroscience cover more and more ground. Ten years ago, 93 references were the norm. Now, reviews average 150 references. This might be an example of scientific reports in general having to contain more and more information so as not to be labelled ‘premature’, ‘incomplete’, or ‘insufficient’ (Vale, 2015).
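The trend is easy to recompute once the reference counts are scraped (my full code is linked at the end of the post); a minimal sketch, with the data frame and column names as assumptions:

```r
# Sketch: 'nrn' is an assumed data frame with one row per review and
# hypothetical columns 'year' and 'n_references'.
aggregate(n_references ~ year, data = nrn, FUN = mean)  # ~93 then, ~150 now
lm(n_references ~ year, data = nrn)                     # linear trend
```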

nrn_fig1

Reviews in NRN include more and more references.

Concentrate on the most recent literature

Nature Reviews Neuroscience is not the outlet for your history of neuroscience review. Only 22% of cited articles are more than 10 years old. A full 17% of cited articles were published a mere two years before the review itself, i.e. something like one year before the first draft of the review reached Nature Reviews Neuroscience (assuming a review process of about a year).

nrn_fig2

Focus on recent findings. Ignore historical contexts.

If you give a historical background at all, do so early on in your review.

References are numbered in order of first mention in Nature Reviews Neuroscience. Dividing this order into quarters reveals the age distribution of references in the quarter of the review where they are first mentioned. As can be seen in the figure below, the pressure for recency is least severe in the first quarter of a review and increases thereafter. So, if you want to take a risk and provide a historical context for your review, do so early on.
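A sketch of how such a quarter split can be computed; the data frame ‘refs’ and its columns are assumptions, not my actual code:

```r
# One row per cited reference. Assumed columns: 'first_mention' (rank of
# first mention in the review), 'n_refs' (total references in that review),
# 'review_year' and 'ref_year'.
refs$quarter <- cut(refs$first_mention / refs$n_refs,
                    breaks = c(0, .25, .5, .75, 1),
                    labels = paste0("Q", 1:4))
refs$age <- refs$review_year - refs$ref_year

# reference age distribution per quarter of first mention
tapply(refs$age, refs$quarter, summary)
```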

nrn_fig3

Ignore historical contexts, especially later in your review. Q = quarter in which reference first mentioned

The change in reference age distributions across the four quarters of a review is not easily visible. Therefore, I fitted a logarithmic model to the distributions (note the dotted lines in the figure above) and used its parameter estimates as a representation of how ‘historical’ the references are. Of course, the average reference is not historical, hence the negative values. But notice how the parameter estimates become more negative in successive quarters of the reviews: history belongs at the beginning of a review.
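In case you wonder what such a fit looks like in code, here is one way to do it; the actual model is in the linked GitHub repository, and the data frame ‘dist’ with its columns is an assumption:

```r
# For each quarter, regress the proportion of references of a given age on
# the (shifted) log of that age; the slope says how quickly old papers drop.
# Assumed columns: 'quarter', 'age' (reference age in years), 'prop'.
sapply(paste0("Q", 1:4), function(q) {
  fit <- lm(prop ~ log(age + 1), data = subset(dist, quarter == q))
  coef(fit)[["log(age + 1)"]]  # more negative = less historical
})
```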

nrn_fig4

Ignore historical contexts, especially later in your review: the modeling outcome.

Now, find a topic and write that Nature Reviews Neuroscience review. What are you waiting for?

— — —

Vale, R. (2015). Accelerating scientific publication in biology Proceedings of the National Academy of Sciences, 112 (44), 13439-13446 DOI: 10.1073/pnas.1511912112

— — —

All the R-code, including the R-markdown script used to generate this blog post, is available at https://github.com/rikunert/NRNweb

Discovering a glaring error in a research paper – a personal account

New York Magazine has published a great article about how grad student Steven Ludeke tried to correct mistakes in the research of Pete Hatemi and Brad Verhulst. Overall, Ludeke summarises his experience as ‘not recommendable’. Back in my undergraduate years I spotted an error in an article by David DeMatteo and did little to correct it. Why?

Christian Bale playing a non-incarcerated American Psycho.

David DeMatteo, assistant professor of psychology at Drexel University, investigates psychopathy. In 2010, I was a lowly undergraduate student and noticed a glaring mistake in one of his top ten publications, which has now been cited 50 times according to Google Scholar.

The error

The study investigated the characteristics of psychopaths who live among us, the non-incarcerated population. How do these psychopaths manage to avoid prison? DeMatteo et al. (2006) measured their psychopathy in terms of personality features and in terms of overt behaviours. ‘Participants exhibited the core personality features of psychopathy (Factor 1) to a greater extent than the core behavioral features of psychopathy (Factor 2). This finding may be helpful in explaining why many of the study participants, despite having elevated levels of psychopathic characteristics, have had no prior involvement with the criminal justice system.’ (p. 142)

The glaring mistake in this publication is that Factor 2 scores at 7.1 (the behavioural features of psychopathy) are actually higher than the Factor 1 scores at 5.2 (the personality features of psychopathy). The numbers tell the exactly opposite story to the words.

DeMatteo_mistake.jpg

The error in short. The numbers obviously do not match up with the statement.

The numbers are given twice in the paper, making a typo unlikely (p. 138 and p. 139). Adjusting the scores for the maxima of the scales they are drawn from (Factor 1: x/x_max = 0.325 < Factor 2: x/x_max = 0.394) or for the sample maximum (Factor 1: x/x_max_obtained = 0.433 < Factor 2: x/x_max_obtained = 0.44375) makes no difference. No outlier rejection is mentioned in the paper.
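The arithmetic is easy to verify. A quick R check; note that the scale maxima (16 and 18) and sample maxima (12 and 16) are inferred from the quoted ratios, so treat them as my assumptions:

```r
f1 <- 5.2  # Factor 1: personality features
f2 <- 7.1  # Factor 2: behavioural features

f1 / 16    # 0.325, relative to the assumed Factor 1 scale maximum
f2 / 18    # 0.394, relative to the assumed Factor 2 scale maximum -> higher

f1 / 12    # 0.433, relative to the assumed sample maximum
f2 / 16    # 0.44375 -> Factor 2 is still higher either way
```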

In sum, it appears as if DeMatteo and his co-authors interpret their numbers in a way which makes intuitive sense but which is in direct contradiction to their own data. When researchers disagree with their own data, we have a real problem.

The reaction

1) Self doubt. I consulted with my professor (the late Paddy O’Donnel) who confirmed the glaring mistake.

2) Contact the author. I contacted DeMatteo in 2010 but his e-mail response was evasive and did nothing to resolve the issue. I have contacted him again, inviting him to react to this post.

3) Check others’ reactions. I found three publications which cited DeMatteo et al.’s article (Rucevic, 2010; Gao & Raine, 2010; Ullrich et al., 2008) and simply ignored the contradictory numbers. They went with the story that community-dwelling psychopaths show psychopathic personalities more than psychopathic behaviours, even though the data in the article favour the exact opposite conclusion.

4) Realising my predicament. At this point I realised my options: either I pursued this full force while finishing my degree and then moving on to my Master’s in a different country, or I let it go. I had a suspicion, which Ludeke’s story in New York Magazine confirmed: in these situations one has much to lose and little to gain. Pursuing a mistake in the research literature is ‘clearly a bad choice’ according to Ludeke.

The current situation

And now this blog post detailing my experience. Why? Well, on the one hand, I have very little to lose from a disagreement with DeMatteo as I certainly don’t want a career in law psychology research and perhaps not even in research in general. The balance went from ‘little to gain, much to lose’ to ‘little to gain, little to lose’. On the other hand, following my recent blog posts and article (Kunert, 2016) about the replication crisis in Psychology, I have come to the conclusion that science cynicism is not the way forward. So, I finally went fully transparent.

I am not particularly happy with how I handled this whole affair. I have zero documentation of my contact with DeMatteo. So, expect his word to stand against mine soon. I also feel I should have taken a risk earlier in exposing this. But then, I used to be passionate about science and wanted a career in it. I didn’t want to make enemies before I had even started my Master’s degree.

In short, only once I stopped caring about my career in science did I find the space to care about science itself.

— — —

DeMatteo, D., Heilbrun, K., & Marczyk, G. (2006). An empirical investigation of psychopathy in a noninstitutionalized and noncriminal sample Behavioral Sciences & the Law, 24 (2), 133-146 DOI: 10.1002/bsl.667

Gao, Y., & Raine, A. (2010). Successful and unsuccessful psychopaths: A neurobiological model Behavioral Sciences & the Law DOI: 10.1002/bsl.924

Kunert, R. (2016). Internal conceptual replications do not increase independent replication success Psychonomic Bulletin & Review DOI: 10.3758/s13423-016-1030-9

Rucević S (2010). Psychopathic personality traits and delinquent and risky sexual behaviors in Croatian sample of non-referred boys and girls. Law and human behavior, 34 (5), 379-91 PMID: 19728057

Ullrich, S., Farrington, D., & Coid, J. (2008). Psychopathic personality traits and life-success Personality and Individual Differences, 44 (5), 1162-1171 DOI: 10.1016/j.paid.2007.11.008

— — —

Update 16/11/2016: corrected numerical typo in sentence beginning ‘Adjusting the scores for the maxima…’ pointed out to me by Tom Foulsham via twitter (@TomFoulsh).

How to excel at academic conferences in 5 steps

Academic conferences have been the biggest joy of my PhD and so I want to share with others how to excel at this academic tradition. 

1185006_10151842316587065_1748658640_n

The author (second from right, with can) at his first music cognition conference (SMPC 2013 in Toronto) which – despite appearances – he attended by himself.

1) Socialising

A conference is not all about getting to know facts. It’s all about getting to know people. Go to a conference where you feel you can approach people. Attend every single preparatory excursion/workshop/symposium, every social event, every networking lunch. Sit at a table where you know no-one at all. Talk to the person next to you in every queue. At first, you will have only tiny chats. Later, these first contacts can develop over lunch. Still later you publish a paper together (Kunert & Slevc, 2015). The peer-review process might make you think that academics are awful know-it-alls. At a conference you will discover that they are actually interesting, intelligent and sociable people. Meet them!

2) Honesty

The conference bar is a mythical place where researchers talk about their actual findings, their actual doubts, their actual thoughts. If you want to get rid of the nagging feeling that you are an academic failure, talk to researchers at a conference. You will see that the published literature is a very polished version of what is really going on in research groups. It will help you put your own findings into perspective.

3) Openness

You can get even more out of a conference if you let go of your fear of being scooped and answer other people’s honesty with being open about what you do. I personally felt somewhat isolated with my research project at my institute. Conferences were more or less the only place to meet people with shared academic interests. Being open there didn’t just improve the bond with other academics, it led to concrete improvements of my research (Kunert et al., 2016).

4) Tourism

Get out of the conference hotel and explore the city. More often than not conferences are held in suspiciously nice places. Come a few days early, get rid of your jet-lag while exploring the local sights. Stay a few days longer and gather your thoughts before heading back to normal life. You might never again have an excuse to go to so many nice places so easily.

5) Spontaneity

The most important answer is yes. You might get asked to do all sorts of things during the conference. Just say yes. I attended the Grand Ole Opry in Nashville. I found myself in a jacuzzi in Redwood, CA. I visited a transvestite bar in Toronto. All with people I barely knew. All with little to no information on what the invitation entailed. Just say yes and see what happens.

It might sound terribly intimidating to go to an academic conference when you have just started your PhD. In that case a national or student-only conference might be a good first step into the academic conference tradition.

Conferences are the absolute highlight of academia. Don’t miss out on them.

— — —

Kunert R, & Slevc LR (2015). A Commentary on: “Neural overlap in processing music and speech”. Frontiers in human neuroscience, 9 PMID: 26089792

Kunert R, Willems RM, & Hagoort P (2016). Language influences music harmony perception: effects of shared syntactic integration resources beyond attention. Royal Society open science, 3 (2) PMID: 26998339

How to test for music skills

In a new article I evaluate a recently developed test for music listening skills. To my great surprise the test behaves very well. This could open the path to better understand the psychology underlying music listening. Why am I surprised?

I got my first taste of how difficult it is to replicate published scientific results during my very first empirical study as an undergraduate (eventually published as Kunert & Scheepers, 2014). Back then, I used a 25 minute long dyslexia screening test to distinguish dyslexic participants from non-dyslexic participants (the Lucid Adult Dyslexia Screener). Even though previous studies had suggested an excellent sensitivity (identifying actually dyslexic readers as dyslexic) of 90% and a moderate to excellent specificity (identifying actually non-dyslexic readers as non-dyslexic) of 66% – 91% (Singleton et al., 2009; Nichols et al., 2009), my own values were worse: 61% sensitivity and 65% specificity. In other words, the dyslexia test only flagged someone with an official dyslexia diagnosis in 11/18 cases and only categorised someone without known reading problems as non-dyslexic in 13/20 cases. The dyslexia screener did not perform as suggested by the published literature and I have been suspicious of ability tests ever since.

Five years later I acquired data to look at how music can influence language processing (Kunert et al., 2016) and added a newly proposed music ability measure called PROMS (Law & Zentner, 2012) to the experimental sessions to see how bad it is. I really thought the music listening ability scores derived from the PROMS would be conflated with things which, on the face of it, have little to do with music (digit span, i.e. the ability to repeat increasingly longer digit sequences), because previous music ability tests had that problem. Similarly, I expected people with more music training not to have much better PROMS scores. In other words, I expected the PROMS to perform worse than suggested by the people who developed the test, in line with my negative experience with the dyslexia screener.

It then came as a surprise to see that PROMS scores were hardly associated with the ability to repeat increasingly longer digit sequences (either in the same order, i.e. forward digit span, or in reverse order, i.e. backward digit span), see Figures 1A and 1B. This makes the PROMS scores surprisingly robust against variation in working memory, as you would expect from a good music ability test.
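The robustness check itself is a plain correlation. A minimal sketch, with the data frame ‘d’ and its columns assumed for illustration:

```r
# Sketch of the Figure 1A/1B checks: one row per participant, assumed
# columns 'proms', 'span_forward' and 'span_backward'.
cor.test(d$proms, d$span_forward)   # forward digit span, near zero above
cor.test(d$proms, d$span_backward)  # backward digit span, near zero above
```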

journal.pone.0159103.g002

Figure 1. How the brief PROMS (vertical axis) correlates with various validity measures (horizontal axis). Each dot is one participant. Lines are best fit lines with equal weights for each participant (dark) or downweighting unusual participants (light). Inserted correlation values reflect dark line (Pearson r) or a rank-order equivalent of it which is robust to outliers (Spearman rho). Correlation values range from -1 to +1.

The second surprise came when musical training was actually associated with better music skill scores, as one would expect for a good test of music skills, see Figures 1C, 1D, 1E, and 1H. To top it off, the PROMS score was also correlated with performance on the music task of the experiment looking at how language influences music processing. This association between the PROMS and musical task accuracy was visible in two independent samples, see Figures 1F and 1G, which is truly surprising because the music task targets harmonic music perception, which is not directly tested by the PROMS.

To conclude, I can honestly recommend the PROMS to music researchers. To my surprise it is a good test which could truly tell us something about where music skills actually come from. I’m glad that this time I have been proven wrong regarding my suspicions about ability tests.

— — —

Kunert R, & Scheepers C (2014). Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation. Frontiers in psychology, 5 PMID: 25346708

Kunert R, Willems RM, & Hagoort P (2016). Language influences music harmony perception: effects of shared syntactic integration resources beyond attention. Royal Society open science, 3 (2) PMID: 26998339

Kunert R, Willems RM, & Hagoort P (2016). An Independent Psychometric Evaluation of the PROMS Measure of Music Perception Skills. PloS one, 11 (7) PMID: 27398805

Law LN, & Zentner M (2012). Assessing musical abilities objectively: construction and validation of the profile of music perception skills. PloS one, 7 (12) PMID: 23285071

Nichols SA, McLeod JS, Holder RL, & McLeod HS (2009). Screening for dyslexia, dyspraxia and Meares-Irlen syndrome in higher education. Dyslexia, 15 (1), 42-60 PMID: 19089876

Singleton, C., Horne, J., & Simmons, F. (2009). Computerised screening for dyslexia in adults Journal of Research in Reading, 32 (1), 137-152 DOI: 10.1111/j.1467-9817.2008.01386.x
— — —