Month: September 2012

canine confirmation confound – lessons from poorly performing drug detection dogs

Intuitively, the use of police dogs as drug detectors makes sense. Dogs are known to have a far better sense of smell than their human handlers, and they cooperate easily. Still, compared to their generally good public image, sniffer dogs' real-life performance as drug detectors is terrible. The reason why scent dogs get used anyway holds important lessons for behavioural researchers working with animals or humans.

Survey data from Australia paints an appalling picture of sniffer dog abilities. Their noses hardly ever detect the drugs they were trained on. For example, only about 6% of regular ecstasy users who were carrying drugs when they spotted a sniffer dog reported being found out (Hickey et al., 2012). But once a dog barks, you can be pretty sure that drugs were found, right? No, you cannot be sure at all. A review by the ombudsman for New South Wales found that nearly three quarters of dog alerts did not result in any drugs being found (NSW Ombudsman, 2006). It's clear: using sniffer dogs to detect drugs just does not work very well.
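Part of the problem is simple base-rate arithmetic: when only a small fraction of the people being screened actually carry drugs, even a fairly good detector will produce mostly false alarms. Here is a back-of-the-envelope sketch in Python; all three rates are illustrative assumptions of mine, not figures from the cited studies:

```python
# Positive predictive value of a dog alert, via Bayes' rule.
# All three rates are illustrative assumptions, not data from the cited studies.

hit_rate = 0.60          # P(alert | person carries drugs), assumed
false_alarm_rate = 0.05  # P(alert | person carries nothing), assumed
base_rate = 0.02         # P(person carries drugs) among those screened, assumed

p_alert = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
ppv = (hit_rate * base_rate) / p_alert  # P(carries drugs | alert)

print(f"P(drugs | alert) = {ppv:.0%}")  # about 20%: most alerts are false alarms
```

With these made-up but not implausible numbers, roughly four out of five alerts would be false alarms, in the same ballpark as the ombudsman's figure.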

Both looking in the same direction. Who is following whom?

This raises the question why scent dogs are used at all. My guess is that they perform a lot better in ability demonstrations than in real life, because in demonstration scenarios their handlers know the right answer. The dog can read this answer off unconscious behavioural cues and be guided by it. This is exactly what a Californian research team found (Lit et al., 2011). When an area was marked so as to make the handler believe it contained an illicit substance, the handler reported that his or her dog had found the substance more than 80% of the time. However, the researchers in this study misled the dog handlers and never actually hid any illicit substances, i.e. every alarm was a false alarm. Interestingly, when an area was not marked, significantly fewer dog alerts were reported. This suggests that handlers control, to a large extent, when their own dogs respond. Apparently, sniffer dogs game the system by trusting not just their noses but also their handlers when looking for drugs. This trick won't work, though, if the handler doesn't have a clue either, as in real-life scenarios.
The deeper issue is that good test design has to exclude the possibility that the participant can game it. The most famous case where this went wrong was a horse called Clever Hans. Early in the last century, this horse made waves because it could allegedly count and do all sorts of computations. Hans, however, was clever in a different way than people realised: he only knew the answer if the person asking the question and recording the response also knew it. Clearly, Hans gamed the system by reading the right answers off behavioural cues unwittingly sent out by the experimenter.
Whether reading research papers or designing studies, remember Hans! Remember that the person handling the participant during a test should never know the right answer. If s/he does, the research is more likely to produce the intended result for unintended reasons. This can happen with scent dogs (Lit et al., 2011), with horses, but also with adult humans (see the Bargh controversy elicited by Doyen et al., 2012). Unfortunately, after 100 years of living with this knowledge, reviewers are starting to notice that the lesson has been forgotten (see Beran, 2012). Drug detection dogs show where this loss leads us.
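What would a test look like that Hans and the sniffer dogs could not game? A double-blind design: nobody present during the test knows the right answer, and scoring happens separately. Below is a minimal sketch of such a detection test; the setup and all names are my own illustration, not the procedure of any of the cited studies.

```python
import random

# Minimal double-blind detection trial (illustrative sketch).
# An experimenter hides the target and leaves before the search starts,
# so neither dog nor handler can pick up cues about the right answer.

LOCATIONS = ["room A", "room B", "room C", "room D"]

def prepare_trial():
    """Hide a target in a random location, or nowhere at all."""
    return random.choice(LOCATIONS + [None])  # None = target-absent trial

def run_session(n_trials, search):
    """`search` is the blind handler/dog team: it returns an alert location or None."""
    hits, false_alarms = 0, 0
    for _ in range(n_trials):
        target = prepare_trial()
        alert = search(LOCATIONS)
        if alert is not None:
            if alert == target:
                hits += 1
            else:
                false_alarms += 1
    return hits, false_alarms

# Example: a "dog" that alerts at random hits rarely but racks up false alarms.
random_team = lambda locs: random.choice(locs + [None])
print(run_session(100, random_team))
```

Crucially, some trials contain no target at all: a detector that alerts on every trial collects false alarms instead of a perfect score.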

——————————————————————————————————–

Beran, M.J. (2012). Did you ever hear the one about the horse that could count? Frontiers in Psychology, 3. DOI: 10.3389/fpsyg.2012.00357

Doyen, S., Klein, O., Pichon, C.L., & Cleeremans, A. (2012). Behavioral priming: it's all in the mind, but whose mind? PLoS ONE, 7 (1). PMID: 22279526

Hickey, S., McIlwraith, F., Bruno, R., Matthews, A., & Alati, R. (2012). Drug detection dogs in Australia: More bark than bite? Drug and Alcohol Review, 31 (6), 778-783. PMID: 22404555

Lit, L., Schweitzer, J.B., & Oberbauer, A.M. (2011). Handler beliefs affect scent detection dog outcomes. Animal Cognition, 14 (3), 387-394. PMID: 21225441

NSW Ombudsman (2006). Review of the Police Powers (Drug Detection Dogs) Act 2001. Sydney: Office of the New South Wales Ombudsman.
—————————————————————————————————————————-

If you liked this post you may also like:
Correcting for Human Researchers – the Rediscovery of Replication

images:

1) U.S. Navy photo by Photographer's Mate 3rd Class Douglas G. Morrison [Public domain], via Wikimedia Commons

—————————————————————————————————————————-

If you were not entirely indifferent to this post, please leave a comment.

Risk vs. Opportunity across the life-span: Risky choices decline with age

Risk taking is somewhat enigmatic. On the one hand, risky choices in everyday life – like drug abuse or drink driving – peak in adolescence. Never again in life is the threat of dying from easily preventable causes as great. On the other hand, this adolescent peak in risky choice is absent in laboratory experiments. Instead, the readiness to take a gamble simply declines with age. How can we explain this paradox? Perhaps we should look at a tribe in the Amazon rainforest for answers.

A group of psychologists from Duke University led by David Paulsen looked at risk taking in the laboratory. Participants had the choice between a guaranteed mediocre reward (say, four coins) and a gamble with a 50/50 chance of a low (e.g., two coins) or a high (e.g., six coins) reward. This is reminiscent of many choices we face in life: do you prefer 'better safe than sorry' or 'high risk/high gain'? As you can see in their figure below, Paulsen and colleagues found adolescents to be greater risk seekers than adults. No matter how risky the gamble, adolescents chose it more often than adults did.

‘Better safe than sorry’ vs. ‘High risk – high gain’
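One detail worth spelling out: with the example amounts above, the sure option and the gamble have exactly the same expected value, so whoever picks the gamble does so purely out of appetite for risk, not because it pays more on average. A quick check (the coin amounts are those quoted above; the 50/50 structure follows the task description):

```python
# Sure reward vs. 50/50 gamble, using the example amounts from the text.

sure_reward = 4
gamble_outcomes = [2, 6]  # each with probability 0.5

expected_value = sum(gamble_outcomes) / len(gamble_outcomes)  # (2 + 6) / 2 = 4

print(expected_value == sure_reward)  # True: only the spread (the risk) differs
```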

Paradoxically, children are even more risk prone than adolescents. Moreover, the riskier the gamble, the greater the difference from older age groups. Paulsen and colleagues have trouble explaining why risky choices in the laboratory do not show the adolescent peak that so many real-world behaviours show. Could it be that laboratory risk is clearly defined while real-world risk is unknown? Is it peer influence, which drives real-world riskiness but is absent in the laboratory? Is there more thrill in real risk taking, while lab experiments are so boring that thrill seeking doesn't come into play?
Perhaps. However, one explanation – which I, personally, found totally obvious – is not even discussed. Risky choices decline with age, true. But the opportunity to make risky choices increases with age. In Western society there are both explicit laws and implicit norms that deny children the opportunity to take risks. Take alcohol as an example. Many people perceive a party without alcohol as mediocre. With alcohol, however, you take a gamble between doing something very regrettable (read: low reward) and having the time of your life (read: high reward).

Where to test an alternative explanation: the real world.

How does this play out across the life span? It is inconceivable to serve beer at children's birthday parties. However, the older you are, the more you choose yourself what is served at your parties. When you are a young adolescent, this increased risk taking opportunity meets a still high (but declining) risk taking readiness – and you get wasted.
So, with age, real-life risk taking goes down because the opportunities to take risks stop increasing after a certain age while the readiness to take them keeps declining. The outcome would be a peak in real-life risk taking in adolescence despite a linear decline in risky choices, i.e. exactly the observed pattern.
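To see that this arithmetic really produces a peak, here is a toy model; the functional forms are entirely my own illustration, not taken from Paulsen et al.:

```python
# Toy model: readiness to take risks declines with age, the opportunity to
# take them rises until adulthood and then stays flat; observed risk taking
# is their product. Both curves are assumptions for illustration only.

for age in range(6, 67, 6):
    readiness = max(0.0, 1.0 - age / 70)  # assumed steady decline
    opportunity = min(1.0, age / 18)      # assumed rise until ~18, then flat
    risk_taking = readiness * opportunity
    print(f"age {age:2d}: {'#' * round(40 * risk_taking)}")
```

A declining readiness curve multiplied by a saturating opportunity curve peaks right around the age at which opportunity stops growing: adolescence.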
This interaction between risk taking opportunities and risk taking readiness is nicely illustrated by the Pirahã, an Amazonian tribe Dan Everett described in his very readable book Don't Sleep, There Are Snakes. The Pirahã do not have the Western notion of childhood. Everett writes that ‘children are just human beings in Pirahã society, as worthy of respect as any fully grown human adult. They are not seen as in need of coddling or special protections.’ (p.89). As a consequence, ‘there is no prohibition that applies to children that does not equally apply to adults and vice versa’ (p.97).
What does this mean for child alcohol consumption on the infrequent occasions when alcohol is available to the tribe? This episode gives the answer (p. 98):
Once a trader gave the tribe enough cachaça [alcohol] for everyone to get drunk. And that is what happened. Every man, woman and child in the village got falling-down wasted. Now, it doesn’t take much alcohol for Pirahãs to get drunk. But to see six-year-olds staggering with slurred speech was a novel experience for me.
So, perhaps this solves the paradox. The laboratory results were unrealistic by Western standards because they gave children a choice they usually do not have: sure reward or gamble? Once you look at a society that does give children this choice, the laboratory results line up better with real life.
There is much to be learned by going beyond the laboratory and looking at the real world. The entire real world.

—————————————————————————————————–

Everett, D. (2008). Don’t sleep, there are snakes. London: Profile Books

Paulsen, D.J., Platt, M.L., Huettel, S.A., & Brannon, E.M. (2012). From risk-seeking to risk-averse: the development of economic risk preference from childhood to adulthood. Frontiers in Psychology, 3. PMID: 22973247

—————————————————————————————————–

images:

1) as found in Paulsen et al. (2012)

2) By Jorge.kike.medina (Own work) via Wikimedia Commons


—————————————————————————————————

If you were not entirely indifferent to this post, please leave a comment.

Improving Eye-Witness testimony by undoing false memories


Diana ten years before a certain false memory started spreading.

Do you remember August 31st, 15 years ago? Diana, Princess of Wales, died in a car crash in Paris along with her partner Dodi Fayed and their driver. Do you remember seeing the video of the crash? If so, you share that memory with 44% of the participants James Ost and colleagues recruited in Britain in 2002 (Ost et al., 2002).

This memory is false.
There is no such video. False memories are not a fringe problem; they are more widespread than one would like to think. Less than three months after the 9/11 attacks in New York, then US president George W. Bush claimed to have seen the first plane hit one of the Twin Towers. Afterwards, he claimed, he had entered a classroom and had eventually been told about the second plane.
And I was sitting outside the classroom waiting to go in, and I saw an airplane hit the tower—the TV was obviously on, and I use[d] to fly myself, and I said, ‘There’s one terrible pilot.’
George W. Bush as quoted in Greenberg, 2004, p. 363
TV channels are usually not very good at predicting terrorist attacks, and September 11th was no exception. The first plane hitting the World Trade Center was not shown on live television. The person who was preparing for war as a response to the attacks apparently had a false memory of them.

Marvin Anderson was found guilty of rape due to a false memory. He spent 15 years in prison. Read his story here.

If these examples are a chilling reminder of just how unreliable human memory is, consider that eyewitness misidentification was a factor in 72% of wrongful convictions that were later overturned by DNA evidence (Innocence Project). The unreliability of eye-witness memory is a widespread problem. New research from Germany and Britain by Aileen Oeberst and Hartmut Blank (article in press) offers a way of overcoming false memories.
Their participants watched a film of a car chase and then heard a short summary of the action. The summary changed two small details but was otherwise correct. When asked about the film in a subsequent questionnaire, participants were more likely to misremember the changed details than the unchanged details that the summary had represented correctly. This finding is called the misinformation effect: a false memory is created by information received after the original event was memorised. This is likely what happened to George W. Bush: the first plane hitting the World Trade Center was indeed shown on TV, but only much later. A later viewing changed his memory of an earlier event.
After completing the questionnaire, participants were told the true purpose of the experiment – that details had been changed between film and summary – and were asked to fill in the questionnaire again. Now the misinformation effect could no longer be found. Further experiments suggest that people no longer tried to remember a single detail (‘What happened to the car?’) but instead engaged in the more elaborate task of retrieving one or two memories from different sources (‘What happened to the car in the film rather than the summary?’).
Still, memories usually need to be retained for longer than 15 minutes. How do the findings change with a five-week gap between implanting the false memory and trying to abolish it? The misinformation effect could still be reduced simply by telling people, five weeks after seeing the film and hearing the summary, that the two did not entirely match. A more elaborate questionnaire improved memory further: it told people in detail which details might have been manipulated and asked where each piece of information came from.
The authors hesitantly suggest two changes to eye-witness interviews: 1) remind witnesses that ‘they might have encountered additional information relevant to a witnessed event from various post-event sources (e.g. other witnesses, the media, etc.) and that some of this information may have been inconsistent with their own perceptions and memories.’ 2) ‘ask people not only for event details but also for (possibly contradictory) post-event information, and also […] explicitly ask for the source of every remembered detail.’ By making the remembering process more elaborate than a simple ‘Tell me what you know’, one can help people remember correctly.
The implications for what we mean by ‘memory’ are intriguing. Depending on what task you set people, they remember things differently. Apparently, constructing a memory from bits and pieces scattered in the mind is highly dependent on the situation we are in. The reason we are not aware of this is that the brain plays a trick on us: a memory always feels somehow real, genuine, and personal. Even that of Diana's crash video.
————————————————————————————-

Greenberg, D.L. (2004). President Bush's False ‘Flashbulb’ Memory of 9/11/01. Applied Cognitive Psychology, 18, 363-370. DOI: 10.1002/acp.1016

Oeberst, A., & Blank, H. (2012). Undoing suggestive influence on memory: The reversibility of the eyewitness misinformation effect. Cognition. DOI: 10.1016/j.cognition.2012.07.009

Ost, J., Vrij, A., Costall, A., & Bull, R. (2002). Crashing Memories and Reality Monitoring: Distinguishing between Perceptions, Imaginations and ‘False Memories’. Applied Cognitive Psychology, 16, 125-134. DOI: 10.1002/acp.779

———————————————————————–

Images:

1) By Rick (Princess Diana, Bristol 1987) [CC-BY-2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

2) via Innocence Project: http://www.innocenceproject.org/Content/Marvin_Anderson.php


Extreme neural adaptation – how musical ability is lost through focal dystonia

‘There was a question of not having a purpose in life. Just floundering.’

Leon Fleisher was a true musical prodigy. By the age of sixteen he performed with the New York Philharmonic. He was called ‘the pianist find of the century’. Suddenly, in 1964, he lost control over his right hand. His fingers would simply curl up. The end of his career.

The illness which befell Leon Fleisher and about 1% of his fellow musicians is called focal dystonia: the loss of control over muscles involved in a highly trained task. It is a career breaker that comes out of the blue. An investigation into the underlying neural problems leads on a journey into the brain's muscle control circuitry and its ability to learn.
The primary motor cortex (M1).
The human brain's motor system, which controls muscle movement, is well understood. When one stimulates areas on the precentral gyrus (see Figure), one can observe muscle movements. I actually once saw my own finger move when this area was magnetically stimulated. Because this brain area's role in muscle control is so well understood, it is simply called the primary motor cortex, or M1 for short.
The organisation within the primary motor cortex is as depicted in the Figure: leg muscles are controlled near the midline, deep inside the brain; moving out to the side come the hand areas and eventually the facial muscles. Localisation of function (where in the brain is x?) doesn't get much better than this.

The motor cortex map, known as the homunculus.

Motor learning is nothing other than changing the brain in order to perform a task better. Roughly, one learns when an intended outcome and the sensory feedback about the actual outcome disagree. Focal dystonia is probably an example of how the brain's ability to learn can be pushed too far. This illness messes up the localisation of function in one of the most clearly organised brain areas.
For example, the primary motor cortex's finger areas are usually nicely aligned. However, when dystonia affects a finger, its brain area moves away from its allocated place. Furthermore, the amount of brain tissue that exclusively controls the dystonic finger is reduced, likely because adjacent fingers take over some of its area (Burman et al., 2009). Thus, ineffective muscle control due to a subtle disorganisation of motor control areas could be the brain basis of focal dystonia.
On the other hand, the illness could also stem from the process of learning rather than its outcome, the changed motor control areas. Sensory feedback from the fingers arrives on the other side of the groove – the central sulcus – that separates the frontal part of the brain (which includes the motor cortex) from the back part, which begins with the so-called parietal cortex. The area responding to touch is called the somatosensory cortex and – as can be seen in blue in the Figure – it is also very well organised.
Elbert and colleagues (1998) found that dystonic musicians' digit areas were unusually tightly packed. Their MEG study thus shares some findings with Burman et al.'s fMRI study. Apparently, not only is movement execution disorganised, but the sensory feedback is also to some degree jumbled up. The brain seems to have lost some of its nice organisation in the areas related to dystonic impairments.

Subcortical differences in sufferers of primary focal dystonia. The eyes would be on the left.

Lastly, a recent meta-analysis by Zheng and colleagues (2012) adds two more things to this picture. The aforementioned activation abnormalities in sensorimotor areas are mirrored in unusual structural features. Furthermore, areas deep inside the brain related to motor planning and movement initiation also show such structural abnormalities.
Focal dystonia seems to affect many parts of the brain's sensorimotor system, both in terms of brain structure and in terms of how that structure is used. Which of these effects actually cause the illness and which are mere consequences cannot be decided on the basis of these findings. Still, the unusual mappings in the motor and somatosensory cortices, together with the deep-brain abnormalities, indicate that the brains of dystonic musicians may have adapted too much to the demands of professional instrument playing. Neither the brain's control over the body's muscles nor the feedback from the fingers is good enough anymore.
There is still no reliable cure for focal dystonia. Some sufferers treat the symptoms with botox injections into the affected muscles. Beyond that, retraining the brain's sensorimotor areas away from the maladaptation is currently being tried.
How did Leon Fleisher deal with focal dystonia? He had to change his involvement with music to one-armed piano pieces, conducting, and music teaching. Later, surgery and some treatment of the symptoms improved his condition. He is by no means cured. Still, he can finally play the piano again with both hands. As you can hear and see in the Academy Award-nominated documentary Two Hands, his performance sounds wonderful – but look closely at the fingers of his right hand.
This is what a breakdown in brain organisation looks like.


———————————————————-

Burman, D.D., Lie-Nemeth, T., Brandfonbrener, A.G., Parisi, T., & Meyer, J.R. (2009). Altered Finger Representations in Sensorimotor Cortex of Musicians with Focal Dystonia: Precentral Cortex. Brain Imaging and Behavior, 3, 10-23. DOI: 10.1007/s11682-008-9046-z
Elbert, T., Candia, V., Altenmüller, E., Rau, H., Sterr, A., Rockstroh, B., Pantev, C., & Taub, E. (1998). Alteration of digital representations in somatosensory cortex in focal hand dystonia. Neuroreport, 9 (16), 3571-3575. PMID: 9858362
Zheng, Z.Z., Pan, P.L., Wang, W., & Shang, H.F. (2012). Neural network of primary focal dystonia by an anatomic likelihood estimation meta-analysis of gray matter abnormalities. Journal of the Neurological Sciences, 316, 51-55. DOI: 10.1016/j.jns.2012.01.032

————————————————–

images:
1) 3D brain data from Anatomography [CC-BY-SA-2.1-jp (http://creativecommons.org/licenses/by-sa/2.1/jp/deed.en)], via Wikimedia Commons
2) By Maquesta via Wikimedia Commons
3) Found in Zheng et al. (2012, p. 53)


[update 15/8/2014 11:00 new Figure 2 as old one was deleted from wikimedia commons]