The 10,000-hour rule is nonsense

Have you heard of Malcolm Gladwell’s 10,000-hour rule? The key to success in any field is practice, and not just a little. A new publication in the journal Psychological Science takes a good look at all the evidence and concludes that this rule is nonsense. No Einstein in you, I am afraid.

[Image: Albert Einstein, by Doris Ulmann]

Did he just practice a lot?

The authors of the new publication wanted to look at all major areas of expertise in which the relationship between practice and performance has been investigated: music, games, sports, professions, and education. They gathered all 88 scientific articles available at this point and performed one big analysis on the accumulated data of 11,135 participants: a meta-analysis with a huge sample.

The take-home number is 12%. The amount of practice you do explains only 12% of your performance in a given task. From the 10,000-hour rule I expected at least 50%. And this low number of 12% is not due to fishy methods in some low-quality articles that were included. On the contrary: the better the method used to assess the amount of practice, the lower the apparent effect of practice. The same goes for the method used to assess performance on the practiced task.

However, one should differentiate between different kinds of activities, because practice can have a bigger effect in some than in others. For example, if the context in which the task is performed is very stable (e.g., running), 24% of performance is explained by practice. Unstable contexts (e.g., handling an aviation emergency) push this down to 4%. The area of expertise also made a difference (see the conversion sketch after the list):

  • games: 26%
  • music: 21%
  • sports: 18%
  • education: 4%
  • professions: 1%
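
To put these variance-explained numbers in perspective, here is a quick back-of-the-envelope conversion (my own illustration, not from the paper): variance explained is the square of the correlation coefficient, so even the headline 12% corresponds to a practice-performance correlation of only about 0.35.

```python
import math

# Variance explained (r^2) as reported per domain, plus the overall figure.
variance_explained = {
    "games": 0.26,
    "music": 0.21,
    "sports": 0.18,
    "education": 0.04,
    "professions": 0.01,
    "overall": 0.12,
}

# The correlation r is the square root of the variance explained:
# even the overall 12% implies a correlation of only ~0.35.
for domain, r_squared in variance_explained.items():
    print(f"{domain:>12}: r^2 = {r_squared:.2f} -> r = {math.sqrt(r_squared):.2f}")
```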

In other words, the 10,000-hour rule is nonsense. Stop believing in it. Sure, practice is important. But other factors (age? intelligence? talent?) appear to play a bigger role.

Personally, I have decided not to become a chess master by practicing chess for 10,000 hours or more. I would rather focus on activities that play to my strengths. Let’s hope that blogging is one of them.

————————————————————————————————————
Macnamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science. DOI: 10.1177/0956797614535810

————————————————————————————————————
“Albert Einstein, by Doris Ulmann” by Doris Ulmann (1882–1934) – Library of Congress, Prints & Photographs Division, [reproduction number LC-USZC4-4940]. Licensed under public domain via Wikimedia Commons.

32 comments

  1. It would be nice if you referenced this in terms of K. Anders Ericsson and colleagues’ work, which Gladwell popularized/simplified.

    As the paper is behind a paywall it’s impossible to assess, but I do think Ericsson did some really solid work on how skills are acquired, which went well beyond anything like a 10k-hour rule (in fact, he has been at pains for years to make the point that the 10k-hour popularisation is meaningless).

    I would be really interested, for instance, in how this meta-study somehow disproves his original music study, which showed that hours of practice were by far the largest factor in future music students’ ability.

    1. Hi RadiantFlux,

      you’re absolutely right that Ericsson is behind the practice view. You’re also right that he did great work on our acquisition of skills. I by no means want to negate Ericsson’s contribution.

      However, I opted for a Gladwell take because this is a popular science blog and many readers have heard of Gladwell rather than Ericsson. Note that Gladwell himself is mentioned in the paper: “Ericsson et al.’s findings were also the inspiration for what Gladwell termed the “10,000-hour rule”—the idea that it takes 10,000 hr of practice to become an expert.”

      Sorry you can’t access the paper. I am not pro paywalls, but I am pro distribution of the knowledge lying behind them (for example, in the form of this blog post).

      Now, what does this paper add? My impression is that the field has moved on from the initial findings: better methodologies have been developed, certain confounding factors have been eliminated, etc. A meta-analysis is then a nice way to summarise our state of knowledge. It is FAR more robust than any individual paper that it includes. You should trust a meta-analysis based on more than 10,000 people more than any individual paper based on 100 or so.
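
      To see why pooling helps, here is a minimal fixed-effect meta-analysis sketch (the study numbers are invented for illustration, and the actual model in the paper is more sophisticated): each study’s correlation is converted to Fisher’s z and weighted by its precision, and the pooled standard error shrinks as the total sample grows.

      ```python
      import math

      # Hypothetical (correlation, sample size) pairs standing in for studies.
      studies = [(0.45, 30), (0.30, 120), (0.55, 25), (0.20, 400)]

      num = den = 0.0
      for r, n in studies:
          z = math.atanh(r)   # Fisher's z transform of the correlation
          w = n - 3           # weight = inverse of var(z) = 1 / (n - 3)
          num += w * z
          den += w

      z_pooled = num / den
      r_pooled = math.tanh(z_pooled)  # back-transform to a correlation
      se_pooled = 1 / math.sqrt(den)  # shrinks as studies accumulate
      print(f"pooled r = {r_pooled:.2f}, SE(z) = {se_pooled:.3f}")
      ```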

  2. Wish I could read this article. From reading this summary, the results don’t quite make intuitive sense to me.

    If practice only accounts for 21% of musical performance, that suggests to me that a lot of people should, hypothetically, be able to sit down at a piano and immediately play it reasonably well with no prior experience. Whereas, in reality, it takes some time to acquire competence in piano playing through repeated practice — the coordination necessary to move one’s fingers about the keyboard, learning how to read music, learning music theory, etc.

    I guess maybe the article narrows the definition of ‘practice’ only to rote physical mechanics — e.g. playing scales, etc. for piano players? To me, the intellectual effort involved in learning any skill is a form of practice — and a reasonable part of the 10,000 hour ‘rule of thumb’ that a lot of people subscribe to. There isn’t really any mechanical practice in learning mathematics, but it does take repeated, deliberative effort to learn the concepts.

    Does the article explain the composition of the other 88% of performance, if only 12% is explainable by practice?

    1. You are right to be skeptical. You ask what is in the 12% “practice” and what is in the 88% “other”.

      I can give a partial answer to this based on looking at one of the underlying studies in the meta-analysis (Chinn, Sheard, Carbone, & Laakso, 2010), which looked at the study habits of students in an introductory computer programming course. The authors of the meta-analysis find that “practice” was unimportant for success on this course. Let’s take a look at what that means.

      The activities that the meta-analysis authors (Hambrick and Macnamara) count as “practice” are activities such as:
      – Looking up information about programming on the internet
      – Discussing the course on the online discussion forum
      – Working through tutorial problems
      These “practice” activities were found to be relatively unimportant for success on the course. What were the “non-practice” activities? These included:
      – Turning up at lectures and programming workshops which were run for the course
      – Previous programming experience before the course
      These two activities were found by the original researchers to be the most important factors for success. This should be no great surprise. It’s not particularly radical to find that the people who do best on a computer programming course are those who were most experienced to start with (so had a head start) and turned up to the lectures and workshops.

      But Hambrick and Macnamara don’t count turning up at lectures, or having prior experience, as “practice”. Given that, it’s hardly surprising that they find practice to be unimportant.

      It is totally, wildly wrong to suggest, as this blog post does, that because the results are not explained by “practice”, they must be explained by “talent”. It does not take special talent to turn up at lectures. The study gives no evidence for the importance of talent at all.

      1. Thanks, Thomas! This background helps in understanding such a counterintuitive result. By not treating “prior experience” as part of practice, it seems the researchers are really showing that a very limited range of practice has a small effect. So, if I have 5,000 hours of programming under my belt coming into the class – perhaps because I’ve been avidly modding Minecraft and making Android apps since I was ten – then the, say, 80-90 hours of outside practice I’ve done in the class (not including the 45-60 hours of core lectures and workshop practice sessions) has relatively little additional effect. Depending on the levels of prior experience involved, the added benefit of relatively minor amounts of additional practice might be considered quite substantial.

        Playing to your strengths seems very reasonable, Richard. But do you really think there’s no room for improvement with deliberate practice?

      2. Hi Thomas,

        you’re right that this study gives no evidence for the importance of talent. It simply leaves open the question of what explains the remaining 88%. That’s why I wrote “Sure, practice is important. But other factors (age? intelligence? talent?) appear to play a bigger role.”

        Come to think of it, another factor might simply be noise, i.e., the inability of our measurement instruments to perfectly measure the concepts they are supposed to measure.

        As to Chinn et al. (2010): I haven’t read the study, but it is indeed odd. It is one of only three included studies that report only negative effect sizes, and the only one of those that provides several effect sizes. In plain language: it suggests that practice has a consistently negative effect on performance. For an intellectual activity like studying computer science this strikes me as unrealistic.

        While this raises question marks about Chinn et al. (2010), it does not invalidate the meta-analysis as a whole. The meta-analysis reports several analyses that did not include Chinn et al. And since Chinn et al. (2010) is classified as ‘Education’, it cannot explain why other areas, such as sports, show effect sizes nowhere near 50%.

  3. The research referenced in this article simply does not prove that the 10,000-hour rule is “nonsense”, even if the researchers think it does. Proving that the 10,000-hour rule is nonsense should be, in principle, quite simple. The rule states that “ten thousand hours of practice is required to achieve the level of mastery associated with being a world class expert” (Outliers, Chapter 2). To disprove it, all you have to do is find a convincing example of someone who has become a world class expert in a significant field with substantially less than 10,000 hours of practice. But the researchers have not done this – not a single convincing example is presented.

    I do not believe you will find any convincing examples of that in highly competitive fields, such as tennis, chess, or classical music. You do not find, for example, super talented pianists who take up the instrument and become leading virtuosos in two or three years. In highly competitive fields relying on skill, I believe it just doesn’t happen, and the research discussed here gives no evidence to suggest that it does.

    All the research does is show that other things matter as well as the number of hours of practice, which is completely unsurprising – things like the quality of practice, having a good coach, getting a lucky break, and so on would all be expected to affect performance. The research gives no direct evidence for the importance of “talent” at all.

    1. I agree. To call another man’s work “nonsense” yet never actually try to disprove it is disingenuous at best. The example cited, a basic computer science class, says absolutely nothing about experts in computer science. Using that study is pure misdirection.

    2. The 10,000-hour rule was also used to argue that practicing for 10,000 hours will make you an expert. So, would an alternative test of the 10,000-hour rule be to take people at random and let them train for 10,000 hours? Would they all become experts on the trained skill, no matter what that actually is?

      The problem with retrospective evidence (e.g., asking experts how much they practiced in the past) is that only the talented ones are likely to invest a lot of time in their skill. Untalented people don’t receive many rewards for the effort they put in, so they stop early. As a result, untalented people haven’t trained much and talented people have. It is a great research challenge to tease practice and talent apart.
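
      To make this confound concrete, here is a toy simulation (entirely made-up numbers, not from the paper) in which talent drives both how much people practice and how well they perform. Practice then correlates with performance even though, in this toy world, practice itself contributes nothing:

      ```python
      import random

      random.seed(1)

      # Talent causally drives BOTH practice hours AND performance.
      n = 10_000
      talent = [random.gauss(0, 1) for _ in range(n)]
      practice = [t + random.gauss(0, 1) for t in talent]     # talented people practice more
      performance = [t + random.gauss(0, 1) for t in talent]  # ...and also perform better

      def corr(xs, ys):
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          var_x = sum((x - mx) ** 2 for x in xs)
          var_y = sum((y - my) ** 2 for y in ys)
          return cov / (var_x * var_y) ** 0.5

      # Expect r of about 0.5 here, all of it driven by the shared talent factor.
      print(f"practice-performance correlation: {corr(practice, performance):.2f}")
      ```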

      But again, I am not advocating the importance of talent here. I am only making a claim about the importance of practice.

      1. You ask:

        “would an alternative test of the 10,000-hour rule be to take people at random and let them train for 10,000 hours? Would they all become experts on the trained skill, no matter what that actually is?”

        The 10,000-hour rule says that 10,000 hours’ practice is “required” for “world class expertise” (Gladwell’s words, Outliers, Chapter 2). It does not say that 10,000 hours’ practice is sufficient to become an “expert”, which is what you seem to be testing with the proposed experiment.

        To test Gladwell’s 10,000-hour rule, choosing subjects at random is NOT of central importance. It doesn’t matter if the people in the experiment are chosen according to whether you think they are “talented” or not. If the 10,000-hour rule is right, none of them will reach world class expertise in significantly less than 10,000 hours, no matter how you pick them.

        If Gladwell (and psychologist Daniel Levitin, whom Gladwell quotes as his basis for the rule) is right, then the people chosen at random will either fail to become world class experts or, possibly, some might just about succeed in becoming world class experts at the end of the experiment as they reach 10,000 hours.

        If the rule is “nonsense”, then some of the people in the experiment would achieve expertise well within 10,000 hours. Presumably these people would either have exceptional talents, or have some other significant advantage. I do not believe that this will be the case, provided that the activities practiced are from competitive fields (like piano playing, chess or tennis). The randomly chosen people are unlikely to get anywhere near world class expertise with significantly less than 10,000 hours’ practice.

  4. As I recall, the meta-analysis focused on the explanatory power of *amount* of practice (i.e., number of hours). I believe that even Ericsson would be unsurprised that this only explained a small amount of variance in acquired skill. His most important contribution was in describing how practice should be *structured* in order to be most effective. The meta-analysis shows that there’s lots of variance left to be explained by such factors.

  5. Am I reading correctly that this is a conference abstract? In many fields, conference abstracts are not refereed. Do you know if they are for this meeting? Rather dodgy claims have been known to be made at conferences.

  6. A PDF of the article is available through Scott Barry Kaufman’s website. WordPress seems to think that the link is spam and is blocking it, but if you Google the article title it comes up near the top of the results.

    And the abstract states, “We conclude that deliberate practice is important, but not as important as has been argued.” They’re going after the strong version of the Ericsson et al. claim, which is that genetics or natural talent cannot explain outstanding performance (except in very specific sports like gymnastics) and that deliberate practice is far more important. It seems to me the argument is over how much weight should be given to the different factors – deliberate practice, experience, genetics, background – versus whether excellence is a matter of nature or nurture.

  7. This research seems heavily reliant on definitions of key concepts. In particular, the outcome measure of “performance” seems ill-defined. Taking the example of a musician, how do we quantify how well someone plays the piano? Even if we can come up with a reliable measure, it seems unlikely that this would be a linear one. That is, while the amount of improvement in the first year of learning to play an instrument will undoubtedly be dramatic, and we might say that the student has improved by 100%, the amount of improvement in the 20th year will be more subtle. However, loads of people practice the piano for 1 year and attain a beginner level, while only a small number of people practice so much that they become experts, and the people we call experts are distinguished by having overcome even the subtlest flaws in their technique. So in some ways, the final 0.1% attainment of skill (where 100% = expert) could be said to be just as important as the first 50%, especially in a professional context (where there are many people at >99% and the competition is among the top 1%). And what if that 0.1% turns out to be totally dependent on practice, whereas the first 50% was more talent-based? What does that say about the importance of practice? (Thought: perhaps a more appropriate outcome measure would be *utility* gained from practice, rather than performance.)

    Then of course there is the fact that the amount you are willing and able to practice is also highly correlated with talent. Additionally, there may be interaction effects – if you don’t have any talent to begin with, practice may not do you any good, but those who have a gift might benefit hugely. So any attempt to single out the effect of practice is somewhat doomed from the off.

    In conclusion, it seems to me that saying “12% of performance is accounted for by practice” is a very dodgy statement to make, particularly on its own.

    1. Hi Ruben,

      thanks for taking the time to comment.

      You claim that the relationship between performance and practice is likely not a linear one. Well, this is entirely dependent on how you scale your performance measure. You can scale it to result in a linear relationship, I believe. I have no idea whether this issue is taken into account in this research field.

      What does the meta-analysis say about how performance is measured? If performance was assessed by group membership (e.g., amateur vs. professional), 26% of variance in performance was explained by deliberate practice. If it was assessed by objective scoring measures it was 8%. Laboratory tasks and expert ratings were in between.

      I agree with you that in a situation of fierce competition, 12% can make the crucial difference. This number might also vary depending on how good you already are. My impression is that this research field is more concerned with highly skilled performers (not sure though).

      Finally, the idea that practice is necessary but not sufficient is something that I share. Once a minimum amount of talent and practice is secured, for example in amateur and expert musicians, comparisons might be possible. I don’t see the situation as ‘doomed from the off’; I would rather call it ‘challenging to investigate’.

      However, all of this doesn’t change the central claim of this post. Focussing merely on 10,000 hours of practice is misleading. When considering the balance between practice and other factors, the meta-analysis mentioned above suggests that practice is not as important as the 10,000-hour rule would have you believe. It doesn’t really matter whether the 12% number is an underestimate or not. Even if the real number were three times as large under certain circumstances, the effect of practice would still be dwarfed by other factors.

  8. What the study appears to be saying is that practice is not as important as we have been led to believe. “The Sports Gene” is an interesting read in which the author reports on current research in athletics on the effects of genetics, practice, trainability, motivation, etc. on athletic performance. There is remarkable variability, and practice may be one of the least valuable aspects of performance. The author states that the experts he interviewed would describe talent as more a construct of trainability in a given sport than anything else.

  9. This study only shows that the ways we study and learn, and the tools that we use for this, are useless; the strategies are so poor that you can learn more practicing music than trying to learn in education. This shows that we need a change in schools and universities.

  10. This is bullshit.
    Running: 24% of performance is explained by practice.
    Really? Before I started running I was not able to run even 5 minutes without collapsing. When I ran up to the 4th floor I was already out of breath. Now I run for 40 minutes, 5-6 kilometers (3.5 miles). 24% my ass. More like 200%.

    music: 21%
    In my first piano class I was ashamed. Played terribly. After 5 years, I was already playing at events and representing my school.

    games: 26%
    For a long time I sucked at online player-versus-player games. Then I finished high school and decided to take a break from everything for 1 year. During that year, I became so good at PvP fights that top guilds were inviting me. I did tournaments. My skill at player-versus-player online games grew at least 3 times.

    education: 4%
    In high school I scored an “F” in English (it’s a foreign language for me). I was not even able to form basic sentences. My knowledge of English was probably worse than that of a child. And now look at me. How? Because I wasted hundreds if not thousands of hours online surfing the net and playing games, which is basically learning.

    This article is bullshit.
