Psychological principles as guidelines for effective PowerPoint presentations

A presentation using PowerPoint.

How good can it get?

You probably wouldn’t have much difficulty if I asked you to imagine a bad PowerPoint presentation. Nowadays one sits through so many of them that confusing, boring or annoying slide shows are sometimes perceived as the norm rather than the exception. A research team from the universities of Stanford, Amsterdam and Harvard, headed by Stephen Kosslyn, explains how to do better. To reap the benefits and avoid the pitfalls of visual aids, presenters should work around the weaknesses of human information processing and play to its strengths.

Kosslyn and colleagues see the task of an audience viewing a PowerPoint presentation as composed of three steps: a) information needs to be acquired, b) information needs to be processed, and c) information needs to be connected to existing knowledge. Based on this analysis, they derive eight principles that a presenter should follow.
a) encoding, i.e. acquiring information and turning it into a usable form
1) Discriminability: make it easy for the audience to discriminate colours, letters, sizes, line orientations etc.
2) Perceptual Organisation: group things effectively in the visual space you’ve got
3) Salience: use large perceptual differences to guide attention to what is IMPORTANT
b) working memory: holding information in mind in order to integrate it online
4) Limited Capacity: understanding breaks down once too much information has to be retained
5) Informative Change: when something perceptual changes, this change has to mean something
c) accessing long term memory: connect the new information with knowledge in order to extract meaning.
6) Appropriate Knowledge: avoid novel concepts, jargon and symbols as much as possible
7) Compatibility: the meaning of a message needs to be compatible with its form
8) Relevance: provide neither too much nor too little information
These principles may look obvious, but they are frequently violated. In a sample of slide shows collected from the internet, the average PowerPoint presentation violated six principles at least once. Some principles were nearly always ignored: 1) discriminability, 4) limited capacity, and 5) informative change.
Now, one may argue that these principles are simply guidelines that lay people are unaware of; no wonder they get violated. However, in a subsequent laboratory experiment, participants were 80% correct in choosing a non-violating slide and rejecting a bad one. Moreover, when asked to explain why one slide was better, more than 80% of the correct choices were appropriately justified.
So, this study is about what one already knows but still ignores when designing a slide show. The authors use a backdrop of psychological literature to predict what sorts of principles should guide PowerPoint presentations. What they, unfortunately, fail to do is to empirically test each principle’s impact on presentation understanding and memory. As such, this study simply presents a set of guidelines, says that presentations usually violate guidelines and that most people are aware of these violations. How important the guidelines are to begin with remains unclear.
The main take-home message is that the more work a presenter does for his/her audience, the more the audience can tune into the content of the presentation. For my part I am always guided by a more memorable principle:
Look around the room and search for the newbie or the bored one or the least intelligent listener. S/he is your target audience.
For a complete list of useful rules that may help you, and especially your audience, see the appendix of Kosslyn and colleagues’ paper.

Kosslyn, S.M., Kievit, R.A., Russell, A.G., & Shephard, J.M. (2012). PowerPoint® Presentation Flaws and Failures: A Psychological Analysis. Frontiers in Psychology, 3. doi:10.3389/fpsyg.2012.00230





Thought Metaphors

Is crime alive? Where is musical pitch?
Neither question makes any sense.
And nonetheless, one can answer them. Crime can be a beast haunting local neighbourhoods, one that must be eradicated, a description suggesting it is alive and well. And musical pitch is high or low.
Of course, these are all just metaphors, useful for quickly talking about things without having to stop for lengthy definitions. However, they are not only linguistic short cuts. They are also mental short cuts, or opportunities for manipulation, if you prefer a racier description. Last year, a series of studies showed how far one can go with this.

A metaphorical breeding program.

Thibodeau and Boroditsky (2011) contrasted two common Western metaphors for crime: crime as a beast (preying on a town, lurking in the neighbourhood) and crime as a virus (infecting a town, plaguing the neighbourhood). They ‘activated’ these metaphors by embedding the relevant words in fictional crime statistics for an unnamed town. When participants were asked what to do about the town’s crime problem, those in the beast condition were more likely to suggest law-enforcement measures (capture, enforce, punish), while those in the virus condition more often opted for reform measures (diagnose, treat, inoculate).
Thus, a linguistic short cut affected how people reacted to a realistic real-world problem in the realm of social policy. And the effects are big. As one might expect, the researchers also found political and gender differences (US Republicans and men tend to favour enforcement more than US Democrats, Independents, and women). Yet simply mentioning a metaphor was twice as powerful in shaping opinion as any of these variables.

A literally high pitch.

In a different set of studies, even something as basic as the height of a tone was shown to be metaphorical. Dolscheid and colleagues (2011) showed that presenting a tone together with an image of height (essentially a vertical line crossed by another line at a high or low point) influences how Westerners reproduce pitch, just as the pitch-as-height metaphor would predict. When Dutch participants sang back a tone paired with a high line, they tended to sing higher. An image of thickness (a thick or thin line) had no influence. For Farsi speakers living in the same country, the reverse was true: in Farsi, low tones are called thick and high tones thin, and it was the thickness images that shifted their singing. In a second step, the research team trained people on the thickness metaphor for only 20 minutes, without their knowledge. Afterwards, the Dutch participants performed like Farsi speakers who had known the metaphor all their lives.
The wider point is one I have made before: Language is not just for talking, it is also a window into the Mind. However, the metaphor research goes further by also showing how easily this window gives access to the Mind, how easily we can be manipulated. Something as important as how to address crime can be influenced by a recently encountered metaphor. The same applies to something as basic as singing back a tone.
And don’t say metaphors can be spotted easily. Or did you notice the race metaphor written in black and white at the beginning of this post?
Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2011). The Thickness of Musical Pitch: Psychophysical evidence for the Whorfian hypothesis. Proceedings of the 33rd annual meeting of the Cognitive Science Society, Boston, MA.
Thibodeau, P.H., & Boroditsky, L. (2011). Metaphors We Think With: The Role of Metaphor in Reasoning. PLoS ONE, 6(2), e16782. doi:10.1371/journal.pone.0016782

How to discover scientific fraud – the case of Diederik Stapel

The junior researchers who revealed the most striking scientific fraud of last year shared their side of the story this weekend in the Dutch daily Volkskrant (LINK). What lessons can young researchers learn from it?
In September last year, Professor Diederik Stapel, a social psychologist at the University of Tilburg, was exposed as having fabricated his data. With over 100 publications cited more than 1,700 times, he was one of the most prominent figures in his field. His findings were a collection of the weird and wonderful: a dirty train station increases racist discrimination (LINK); meat eaters are more selfish; one has better table manners in a restaurant (LINK). All have been retracted, blocked from publication, or placed under investigation. How could three young researchers challenge the biggest name in their field? The answer is simple: with good scientific practice.
After being presented with results claiming that fruit seen hanging in trees influences materialistic thinking differently than fruit lying in the grass, one of the young researchers grew curious, and suspicious, and joined Stapel’s data factory to run an unrelated study. After receiving a seemingly perfect data set from Stapel, one which confirmed every prediction, he calculated Cronbach’s alpha for his questionnaires. This measure tells researchers how internally consistent a questionnaire is. For example, answering yes to ‘Are you a vegetarian?’ should correlate highly with answering yes to ‘Do you avoid eating meat?’. Low consistency indicates poor data quality; the reasons can be numerous, including participants who don’t care, bad questionnaire design, or mistakes in the analysis. The young researcher’s data set turned out to have an internal consistency so low that he concluded the responses were essentially random. Such a questionnaire should really not confirm any predictions. That was a year ago.
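For readers who want to try this check on their own data: Cronbach’s alpha compares the sum of the individual item variances with the variance of respondents’ total scores; consistent items push it towards 1, random answering pushes it towards 0. A minimal sketch in Python (the function name and the toy data are mine, purely for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                        # number of items
    item_vars = items.var(axis=0, ddof=1)     # variance of each item
    total_var = items.sum(axis=1).var(ddof=1) # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Four respondents answering three related items consistently: alpha is high (≈ 0.95)
consistent = np.array([[5, 5, 4],
                       [1, 1, 2],
                       [4, 4, 5],
                       [2, 2, 1]])
print(cronbach_alpha(consistent))

# 200 respondents answering at random: alpha hovers near zero
rng = np.random.default_rng(0)
random_items = rng.integers(1, 6, size=(200, 3))
print(cronbach_alpha(random_items))
```

A data set that “confirms all predictions” while alpha sits near zero is exactly the kind of contradiction the young researcher noticed.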
The three young investigators then decided to team up and reveal the truth about Stapel’s practices. To strengthen their case, they tried twice to replicate one of his studies and failed both times. Over the next six months they compiled a dossier of calculations and odd occurrences that all pointed in the same direction.
Then they informed the Head of Department, Professor Zeelenberg, who had himself published with Stapel. Zeelenberg believed them, and within a week Stapel sat in front of the university principal to explain himself. Another week later, Stapel admitted everything before the press.
So, what can a young researcher learn from the whistleblowers’ investigation?
1) Extraordinary claims require extraordinary evidence
If you don’t believe a finding yourself, replicate it and find out. Science is objective: its results are not tied to who produced them. However, when you want to convince the world that a research star is a huge fraud, an equally extraordinary claim, you also need extraordinary evidence. One failed replication, for example, would not be enough, because failed replications happen far more often than data fabrication on the scale Stapel practiced it (I hope). So, if you want to convince the doubters, make sure your evidence stands up to scrutiny.
2) Know your data
A narrow look at p-values obscured the very low quality of the questionnaire data. How could the other researchers around Stapel simply live with that? Or did they not know their data? A researcher should really delve into the data before drawing conclusions. The analysis reported in a publication is often just the tip of the iceberg compared with all the analyses done beforehand.
3) Have courage
One thing that becomes really clear is that the three young researchers were unlikely heroes. They did not boast; they got drunk after telling the Head of Department; they want to remain anonymous even now. But despite their own doubts and the real risk of a backlash from the Stapel camp, they went through with it. Scientists sometimes do need real courage.
Remarkably, the three whistleblowers do not appear to have turned away from science. That’s fortunate, because they have truly proven themselves. Their careful, time-consuming work revealed a truth no one suspected. That’s how science is done.