Science

Don’t fake it: five steps to beat imposter syndrome in science communication

Image: Sketchplanations

Does this image ring any bells?

If so, welcome to the club. You, too, may be experiencing imposter syndrome, a key barrier to effective science communication.

And, if so, you’re in very good company.

<> 

On 1 October 1861 – two years after he published On the Origin of Species, and by now world-famous – Charles Darwin wrote in a letter to his friend, Charles Lyell:

But I am very poorly today + very stupid + hate everybody + everything. One lives only to make blunders. — I am going to write a little Book for Murray on orchids & today I hate them worse than everything so farewell & in a sweet frame of mind, I am | Ever yours | C. Darwin.

Darwin displays three qualities that are regularly associated with imposter syndrome.   

  • A feeling of never being competent or knowledgeable enough
  • Consistent critical self-talk
  • An excessive focus on failures and mistakes

But his letter lacks one element essential to imposterism, which didn’t appear until 1978.

In that year, two psychologists – Pauline Rose Clance and Suzanne Imes – published a paper entitled The impostor phenomenon in high achieving women. The 150-plus women they studied all reported symptoms similar to Darwin’s, with one crucial addition.

They said that they felt like frauds.

According to Clance and Imes, they were particularly prone to “an internal experience of intellectual phoniness” and lived in perpetual fear that “some significant person will discover that they are indeed intellectual impostors.”

Pauline Rose Clance and Suzanne Imes

image via https://blog.10minuteschool.com/fighting-the-impostor-syndrome/

By 1982, a journalist in Vogue was referring to “the ‘impostor’ syndrome” – a phrase Clance and Imes disliked and never used. In 2011, Valerie Young published her bestseller, The Secret Thoughts of Successful Women; since then, the topic has generated over 3,500 papers and a heap of self-help books.

(Incidentally, ‘impostor’ and ‘imposter’ seem to be more or less interchangeable spellings. If you’re interested, check out this article. I shall continue to favour ‘imposter’ in this post, except when I quote other writers.)

Whatever it is – phenomenon or syndrome – imposterism seems to be widespread. One much-repeated statistic, apparently originating in a 2007 article by John Gravois, suggests that 70% of us – men and women – experience imposter syndrome at some point. I took the Clance Test while researching this post. I scored 61 out of 100, indicating that I ‘frequently have Impostor feelings’. Which seems reasonable.

<>

According to Sandeep Ravindran, imposterism “seems especially common in competitive and creative fields, and those where evaluations are subjective. […] The feeling of being a fraud is also common in fast-changing fields such as technology or medicine.”

What about academia? As May Merino points out in a revealing post for The Oxford Scientist, measures of success in the post-grad, post-doc arena are much more nuanced than in undergraduate education, where grades and test scores offer seemingly clearer metrics of achievement. “The shift to being surrounded by scientists at the top of their game can be a challenging one,” Merino continues, and can “even evoke feelings of not deserving a seat at their table.”

Imposterism also thrives on feelings of isolation. After all, as Merino says, “every researcher’s path is unique.” The problems you face on that path – failed experiments, ambiguous survey results – “will not,” she says, “exactly mirror those that others face.” When things go wrong, you may feel trapped, ashamed as much by the sense of failure as by the prospect of giving it all up.

Image by jcomp on Freepik

All of these feelings might be amplified in a competitive, ‘publish or perish’ culture, dominated by grant applications, citation scores and high impact factors. Heaven forbid that your peers might engage in back-biting or bullying to promote their own research…

In a recent article, Kate Munley writes: "the prevalence of imposter syndrome may be grossly underestimated in academia, particularly because mental health is considered a social stigma in higher education."

Now put science communication into the mix.

Is it possible that, in a research-intensive environment, showing an interest in public understanding of science might mark you out as a not-entirely serious researcher? That getting involved in science outreach or science engagement makes you feel somehow unworthy to be a scientist?

Well: I’ll assume that you’re willing to counter such negative thoughts. After all, you’re planning to make a presentation. Good for you. So: what to do?

<> 

Let’s focus, just for now, on the presentation itself.

  1. Observe your feelings.

Whatever they are, those feelings are not you. They’ve appeared from somewhere else and have chosen to visit. Bid them welcome.

It’s not so easy to view these thoughts objectively if you’re nervous. So take a few moments to breathe deeply – 7-11 breathing is a great technique here – and then let these feelings in through the door.

Write a few calling cards for them: one card for each feeling. “You don’t know what you’re talking about.” “Who do you think you are?” “They’ll find you out.” (Use the word ‘you’ – not ‘I’.) There they are, sitting on the desk in front of you. Don’t tear them up. Take a look. Say ‘hi’. And be kind to them.

Image via Redbubble

  2. Ask yourself what you can learn.

These feelings are a sign that you’re challenging yourself to do something new. You’re stretching yourself, taking a risk, stepping outside your comfort zone. And that’s good. Who wants to do the same thing day in, day out, for the rest of their life?

Nerves are a sign that you care. That you want to do a good job in this presentation.

So, what can you learn from the experience? How can you use it to become a better speaker, to explain your ideas more clearly, to construct more effective arguments?

Set some clear goals for yourself. And then think about what you can do to achieve them.

  3. Focus on your audience.

Next, set some goals for the presentation itself.

I’ll guess that your focus so far has been almost entirely on your material. You’ve spent hours on the slides. You’ve tried your utmost to cover every base, every angle. You feel that you need to include everything. You don’t want to be found out, right?

Good.

Now change your focus. Think about your audience. Think about how you want to influence them. How do you want them to leave your presentation?

  • What do you want them to think (or know, or have an opinion about)?
  • What do you want them to feel?
  • And what do you want them to do?

All three goals matter. But the dynamic between them will shift, depending on your audience.

  • At a conference, you might want your audience to know a lot but not necessarily do anything.
  • If you’re presenting to policy-makers or government officials, you might want them to take specific actions.
  • If you’re talking to young people at a science festival, influencing their feelings might come to the fore.

Now you need to decide how to achieve these goals. So:

  4. Find the narrative.

Your goals describe where you want your audience to be at the end of the presentation.

Now think about where they are at the start.

What do they already know and feel? If you’re not sure, then make a reasonable guess: what are they likely to be thinking about and feeling in relation to this topic?

Your presentation needs to take them on a journey from where they are to where you want them to be. That journey is the narrative of the presentation.

There are lots of ways of creating this narrative. You might want to tell a story. But stories are only one kind of narrative. Two narrative structures that I find consistently helpful are the And, But, Therefore structure, and something called Monroe’s Motivated Sequence.


By this point, you’ll probably realise that you need to reorganise your material. Go ahead.

  5. Practise and get detailed feedback.

Rehearse. In real time. With a real audience, if you can possibly do so.

Choose someone whose opinion you trust: preferably not a close colleague or someone who’s familiar with your research. Ask for specific, detailed feedback. What did they understand? Where did they get lost? How are you coming across?

Use this feedback to identify your strengths as a presenter. Build on those skills. Don’t worry about eliminating your faults. They’re probably not faults, anyway.

<> 

It is possible that imposter syndrome is not actually a Thing. Or rather, it could be as much the result of external factors as of mental activity. Increasingly, researchers are looking for the roots of this experience in social structures, systemic inequalities and what historian Christy Pichichero has called discriminatory gaslighting. All of which deserves another blog post.

Meanwhile, one final thought. It’s not mine; it comes from Professor Jessica Collett.

“Impostorism,” she writes, “is most often found among extremely talented and capable individuals, not people who are true impostors.”

So, if you do feel assailed by imposter syndrome, it’s probably because you are extremely knowledgeable and competent. If you were a real imposter, you wouldn’t feel like one.

If you want to read more about preparing a great science presentation, check out these three posts on my blog.

So what? The conundrum of science communication

What’s your message? Finding the foundation of a great science presentation

Presenting science: finding the structure


Seven ways to make science zing

An inspirational day last week at the British Science Association, working with the winners of this year’s BSA Award Lectures.  The BSA has presented these lectures since 1990; notable past winners include Professor Brian Cox (2006), Maggie Aderin-Pocock (2008) and Richard Wiseman (2002).  This year’s speakers are absolutely in that league.

The lectures embody the BSA’s vision of a world where science is at the heart of society and culture.  They recognise and promote the work of early-career scientists in the UK.  Each one aims to engage a broad audience, without at any point diluting the seriousness or complexity of its material.  Somewhere in the tension between those two imperatives is born the sense of wonder that makes a lecture zing. 

Here are seven ideas that I took away from the day, and that any science presenter might find useful.

Let the material find its own shape.  Every theme, every topic, every message, demands its own structure.  But we can lay down three broad principles. 

  • First, the structures that succeed are always dynamic: they arouse expectations, and then fulfil them.  Many scientific presentations are static: they’re all fulfilment (‘Make your point, then give the evidence.’).  Instead, look for the points of arousal – the moments of mystery, choice, uncertainty, conflict – and arrange your structure around these turning points.  (Charles Crawford calls them ‘hinges’.)
  • Second, narrative isn’t everything.  Sure, standard narrative structures can help. (Cue the Freytag Triangle; ‘SPQR’; and Monroe’s Motivated Sequence.)   But explanations can generate points of arousal, too (‘Why did that happen?’ ‘How does that arise?’).  And if you’re bold enough to make an argumentative claim, then that will almost certainly arouse your audience. 
  • Third, let your intuition help you.  Caroline Goyder suggests factoring in dream time.  Create a loose framework “as soon as the invitation goes in the diary.” “Once you have that frame,” she says, “your unconscious will get to work and the idea will grow, even while you’re doing other things.”  I’d also suggest talking your material through with a (preferably non-scientific) friend.  Where do their eyes light up?  What fascinates them?  Those moments are potential hinges.

Create a mystery.  The more intriguing, the better.  It doesn’t have to be a burning controversy.  A life cycle with intriguing gaps; a manufacturing process that remains a mystery to this very day; a mismatch between theory and findings; all of these can give you the narrative hook that will capture your audience’s attention.  You might find the suspense you’re looking for in the gap between hypothesis and results in your own research. 

Give us meaning, not just information.  Your audience will appreciate simple explanatory models – either physical or mental.  We’re not very good at appreciating statistics (how big is a trillion?).  We are extremely good at deriving meaning from examples – even if they’re not exactly representative.  Analogies and metaphors are useful.  Another powerful technique is to talk about physical elements as if they’re characters in a story.  “The particles get really hot and want to get as far from each other as they can.” 

Involve your audience in the research.  The speakers in the group were brilliant at this.  They had dozens of ideas for creating mini-experiments in the hall: asking the audience to stand and asking a sequence of questions that filtered out sub-groups as respondents sat; using mobiles to conduct polls; offering a choice of experimental paths and asking the audience to choose one; asking the audience to explain surprising findings.  Test out these procedures, if you can, before the big day.  And give them lots of time.

Make it relevant.  Of course, you may want to show the social, economic or political effects of your research.  But imagination is just as relevant as utilitarian outcomes.  If you can make your audience feel awe, or wonder, or intrigue, you’ve strengthened their bond to the natural world, to their fellow humans or to the scientific project.  No bad thing.  

One very obvious way to do that, of course, is to –

Invoke the ‘wow’ factor.  Every subject will offer its own ‘wow’ opportunities.  It might be a spectacular demonstration, a dramatic visual analogy or a mind-blowing statistic (“for a few moments, this is the hottest place on the entire planet.”)  Potential applications for an untried or forgotten technology will often create a tingle.  The point about ‘wow’ moments is that they make us see things differently.  If you can shift people’s perceptions in some way, the shift in their thinking will follow.

Embrace controversy.  This is the toughest one.  Some issues are inherently controversial; others may provoke unanticipated emotional responses. 

We had powerful conversations in our group about how public expectations of scientists can frustrate their plans to communicate their ideas.  We (lay members of the public, like me) want scientists to give us definite answers; and we distrust them when they try to do so.  We want scientists to solve moral and ethical issues for us, and then criticise them if they dare to do so.   

The best thing you can do is show that you, too, are a human being.  Demonstrate the implications of your research, good and bad.  Point out how far the science can go, and where morality or law must take over.  Don’t be afraid to take a stand, if you believe in it and can justify it.  Above all, show us why your material fascinates you.  Excitement is contagious.

Here are the 2015 winners and their respective awards.

  • Katherine Woolf, from University College London, is the Charles Darwin Award Lecture winner for agriculture, biological and medical sciences.
  • Jill Stuart, from London School of Economics and Political Science, is the Margaret Mead Award Lecture winner for social sciences.
  • Julie Wertz, from University of Glasgow, is the Jacob Bronowski Award Lecture winner for science and the arts.
  • Hazel Gibson, from Plymouth University, is the Charles Lyell Award Lecture winner for environmental sciences.
  • Alex McLean, from University of Leeds, is the Daphne Oram Award Lecture winner for digital innovation.
  • Radu Sporea, from the Advanced Technology Institute, University of Surrey, is the Isambard Kingdom Brunel Award Lecture winner for engineering, technology and industry, supported by Siemens.
  • Ian Chapman, from the Culham Centre for Fusion Energy, is the Rosalind Franklin Award Lecture winner for physical sciences and mathematics, supported by Siemens.

Each winner will give a talk at the British Science Festival in September in Bradford.  You can find out more about their lectures here.


The roots of compulsion

Riveted: The Science of Why Jokes Make Us Laugh, Movies Make Us Cry, and Religion Makes Us Feel One With the Universe

Jim Davies

Palgrave Macmillan, 2014

ISBN: 9781137279019

£14.44 (Amazon)

Kindle edition £9.94 (Amazon)

Gazing at a beautiful view from a log cabin; hearing a ghost story; finding yourself glued to pictures of a pile-up on the motorway; reciting the Lord’s Prayer... 

Are these experiences in any way alike? 

According to Jim Davies, they are.  “Strange as it may seem, compelling things share many similarities.”  In this book, Davies claims to do “something that has never been done before”: to show that “the qualities that are common to all these things fit like a key in a lock with our psychological proclivities.”  Generalise hypothetically from this commonality and – hey presto – we have a theory.

He calls it the compellingness foundations theory.  (The italics are his.)

Nothing as useful as a good theory, I always say.  So how useful is this one?  Well: quite a lot.  Davies – a professor at the Institute of Cognitive Science of Carleton University – posits six foundations for compellingness.

I’ll buy four of them.

The first is social compellingness theory.  We tend to think that all patterns have something to do with social meaning, intention and agency; and we tend to believe social explanations that we hear from other people.  We look for reasons, not causes.  Faced with a mysterious or random catastrophe, for example, we assume conscious intent.  (Which explains conspiracy theories.)  We’re obsessed by status and gossip.  We have an unquenchable appetite for stories.  (Davies is good on stories, though not quite so good, perhaps, at telling them.)

Secondly, we tend to believe the things we fear or hope are true.  Believing in what we fear to be true has evolutionary advantages:  it’s safer to believe that the shape in the corner is a man-eater rather than a heap of old clothes.  Hope is a little more curious: “one of the ultimate reasons we do anything is so that we will have beliefs that make us happy.”  Thus, we prefer landscapes to abstract art; and we find gambling more compelling than regular work because “intermittent reward reinforces behaviour even more strongly than reliable reward”.   

Third, “we love patterns and repetition.”  We prefer patterns that are easy to understand.  And “we are more likely to like and even believe things that we find easy to understand.”   This fact triggers some interesting thoughts on music, and especially language:  quotations and idioms will stick if they are patterned simply.  

And fourth, we are compelled by incongruity, the flip side of pattern-recognition.  Incongruity triggers the desire to understand.  In fact, “sometimes people like things because they are confusing and hard to understand.  To explain this I created the concept of idea effort justification.” 

Davies’s method in these chapters is breathless and excitable.  The connectivity sometimes suffers.  He plays the absent-minded professor, tumbling ideas onto the page, disconcertingly switching back and forth between subjects (“Returning to computer game addictions...”; “let’s get back to miracles...”; “back to the subject...”).  With no obvious narrative arc or developing argument, he must rush us from one instant wonder to another to keep us hooked; the result is a kind of attention deficit disorder as we hurry to keep up.

“Meditation sounds relaxing,” pants Davies as we swerve into Buddhism, “but some, this author included, find it more like taking your brain to the gym.  It’s hard work.”  I can believe it.  Nonetheless, those four chapters do provide interesting and useful material.  I found myself almost immediately using some of it in my own training work.  And Davies is never less than entertaining, despite the helter-skelter approach.

But then his thinking gets worryingly untethered.  Where previously he’s tied his account more or less to specific loci of attention – social relationships, fear, hope, patterns and surprises – he now starts to drift around the human body, and to clock up the psychological biases without which no popular account of brain activity seems to be complete.  There’s plenty of interest here – we are more likely to give to charity after riding up an escalator than after riding down one, for example – but the links to compellingness are sometimes tenuous.  And when it comes to sex – surely the most compelling of all human activities – Davies’s account is oddly dull.

“What I have presented here,” we read at the end of his book, “is not a knock-down set of experiments showing us that all things we love are compelling for the same reasons.”  Well: for most of the book, I’d say that’s exactly what he has presented. 

By the time I hit the last chapter, I was beginning to wonder whether perhaps Davies’s definition of compellingness was a bit baggy.  His theory, after all, is essentially a theory of attention.  Some forms of attention are momentary; others have the quality of a lifelong trance.  How can we consider, say, the compulsion to watch a fight in the street, and a lifelong devotion to a religion, to be experiences of the same kind?

The theory would need to include some mechanism that links instant focus to permanent belief. 

Perhaps the availability cascade can help.  Take the news, for example, which worries Davies a good deal.  We believe stories rather than statistics; as a result, we believe that the events portrayed in the news are more common than they are, which makes us think that they are important, which fuels our desire to know more about them, which drives further media attention...

Interestingly, Davies suggests that something similar goes on in science.  A researcher will submit a paper with unusual findings and suppress the less interesting results (this is the ‘file drawer problem’); and journals prefer to publish ‘significant’ results rather than results backing up previous results.  Consequently, compelling scientific findings sometimes win out over accurate ones.

(Which triggers a question about the robustness of Davies’s own hypotheses.  If he claims his book to be ‘super lumpy’ – to be principally about what humans have in common rather than how individuals differ – then how many of the very many papers he cites explain common human preferences?  How many are survivors of the file drawer problem?)

This last chapter lurches into a completely different register.  From explanation, Davies turns to argumentation, engaging in a lengthy quarrel with himself about why religions are so persistently compelling.  It's a dangerous rhetorical move and it threatens to destabilise the book completely.

Part of the argument is to compare religion with science.  As usual, Davies looks for shared features.  “Science and religion,” claims Davies, “have two things in common.”  First, both generate beliefs that people endorse or reject.  Secondly, both have methods for generating those beliefs: in other words, they have different epistemologies. Science, he concludes, beats religion as a body of knowledge because its epistemology has a built-in self-correcting mechanism that religion lacks.  But if you’re looking for beliefs that will help hold a society together, science, by his own admission, has not been so successful.

“Beautiful ideas are not always true,” Davies warns us, “and when we encounter a compelling idea, we must take extra care.”  He wants us to “use knowledge of what makes ideas compelling to help us make decisions about what to believe.”  It’s a big ask.  How do we start? 

I think we'd do well to stick to Davies's four really strong ideas.

“Be wary of compelling ideas that are framed in terms of people and relationships, are easy to understand, present an intriguing puzzle, or play to our hopes and fears.”

Ok.  I’ll try.


What shall we do?

Closing the mind gap

Closing the Mind Gap

Ted Cadsby

BPS Books, 2014

ISBN 978 1 927483 78 7

£18.00

China Miéville sets one of his novels, The City & the City, in two cities occupying the same physical space.  Citizens of each city, partly through choice and partly through political coercion, have trained themselves to ‘unsee’ the other city: to recognize the buildings and inhabitants of the other city without seeing them.  Crossing the cognitive divide, even by accident, is regarded as ‘breaching’ – a terrible crime invoking unspeakable punishments.

Ted Cadsby, in his ambitious and enjoyable new book, similarly invokes two coterminous worlds.  We live in both, but usually recognise only one.  The consequences of ignoring the other can be profoundly damaging.

World #1 is, in his description, ‘straightforward’.  In World #1, we easily differentiate meaningful signals from noise; patterns are consistent across different situations; feedback is direct, timely and clear.  In World #1, learning is easy and prediction is reliable.  World #1 is the world “in which countless generations of our ancestors lived and in which we continue to spend much of our time.” 

World #2 is ‘complex’.  In World #2, signals are buried in noise; patterns vary across situations because each situation is unique; feedback on our actions is indirect, delayed and ambiguous.  World #2 has, Cadsby suggests, “snuck up on us”, principally in the evolutionary blink of an eye that witnessed the Industrial and Information Revolutions.

The farmers of World #1 could reliably expect their predictions to turn out correctly (except, presumably, when they didn’t); the knowledge workers of World #2, in contrast, “cannot rely on simple cues and timely feedback to make decisions.”

Cadsby argues that our brains have evolved to navigate World #1 and are unprepared for World #2.   In fact, we have, figuratively, two brains: the ‘old’ brain, which operates unconsciously, and the ‘new’ brain, which has evolved over the past 100,000 years and which we think of as conscious.  We think automatically with the ‘old’ brain, and effortfully with the ‘new’ one.  But the partnership is unequal:  the ‘new’ brain has limited access to the ‘old’ one.  As a result of this ‘brain-brain’ gap, the way we think is not always matched to our modern world, and so we face the second challenge of a ‘brain-world’ gap. 

The challenge is to close the gaps.

Cadsby’s book works with an explanatory narrative of human cognition that has developed rapidly over the past decade or two.  The ‘left-brain-right-brain’ narrative of the 70s and 80s has gradually given way to an ‘intuition-and-rationality’ narrative, under the influence of psychology, complexity science, evolutionary anthropology, cognitive science and what’s loosely referred to as neuroscience.  Paul MacLean’s model of the triune brain helped get the narrative going; Guy Claxton’s Hare Brain, Tortoise Mind, Stephen Mithen’s The Prehistory of the Mind and Chris Frith’s Making Up the Mind have all made interesting contributions.

Like its predecessor, the ‘intuition/rationality’ narrative relies on a satisfyingly simple dichotomy.  Where the earlier explanation concentrated on a lateral division between left and right brain, the new one emphasizes a vertical division, the ‘new’ brain (represented by the neocortex) sitting on top of the ‘old’, intuitive, emotional brain (represented mostly by the hippocampus and the amygdala). 

This new narrative has considerable explanatory power.  Cadsby argues that “our minds are meaning-making machines”: we predict the nature of reality by intuitively pattern-matching to pre-existing mental models, some inherited (like the ability to recognize a face), some learned (like the ability to ride a bike).  ‘Constructive realism’ is useful in World #1 because in this world the pattern-matches are usually more or less accurate; but in World #2, constructive realism falls prey to “greedy reductionism”: we oversimplify complexity and conclude overconfidently.   

Type 1 thinking, intuitive and automatic, will help us solve straightforward problems, but not complex ones.  It will help us read a novel but not write one; eat a meal but not cook it; watch tennis but not play it.  If we want to understand complexity more effectively, we need to invoke Type 2 thinking.

The catch is that Type 2 thinking requires concentration.  Where Type 1 is quick, Type 2 must be slow; where Type 1 operates in parallel, Type 2 can operate only one task at a time.  Much of the book is devoted to the strategies necessary to develop Type 2 thinking: study the problem landscape more carefully; pursue missing information; analyse causal relationships; and so on.  Cadsby suggests that we need to develop two types of Type 2 thinking:  Type 2.1, which helps us model complexity more accurately; and Type 2.2, thinking about thinking, which “brings us as thinking agents into the process of thinking”.  Cadsby calls Type 2.2 ‘metacognition’ and, with a Buddhist inflection, ‘mindfulness’. 

But we’re not inclined to do either.  We prefer Type 1 thinking.  For one thing, effortful thinking requires – well – effort, and we need to conserve cognitive energy.  Worse still, we’re addicted to certainty: we need to know, we need to be in control, and we’re desperate to enjoy the calm, pleasurable (intuitive) feeling of knowing that we have figured something out.  Ambiguity and doubt create too much discomfort.

Closing the Mind Gap develops this thesis in great detail.  Cadsby synthesises huge quantities of information and explains it elegantly.  This may not be quite a popular science book and it may not be quite a management book; but it’s certainly a page-turner.  Cadsby is much influenced by Daniel Kahneman (Thinking, Fast and Slow), although he also cites the work of Robin Hogarth, Nassim Nicholas Taleb and Keith Stanovich, along with a host of experimental evidence to support his argument.  Along the way, he offers excellent accounts of theory of mind, the workings of the emotions, Bayesian probability theory and much more.  For anybody interested in understanding why we so often fail to think as well as we can, this book will be useful (though I wish his endnotes indicated his sources more precisely).

And yet, and yet.  Something bothers me. 

To begin with, I’m not sure about these two worlds.  How do we distinguish #1 from #2?  Are they not both simply mental constructs?  After all, as Cadsby himself says:  “our earliest forms of conscious awareness enabled language, culture and innovation, and we began to create a new world for ourselves.”  We find ourselves paradoxically limited in our ability to understand the cognitive complexity that we ourselves have generated. 

And then, understanding complexity is never the whole story.  The primary function of a brain is to enable an organism to move.  If “all life is problem solving” – as Karl Popper suggested – then, as Cadsby points out, “the brain interprets its environment so it can motivate actions that are conducive to thriving.”  Or, to quote José Ortega y Gasset: “Living is a constant process of deciding what we are going to do.”  The truth, however complex, matters less than the solution, which is not an answer but an intervention in the world.

Cadsby touches on decision-making.  He discusses the Taylor-Russell diagram; and he acknowledges, entertainingly, the provisional quality of all decisions.  But his advice on how to decide better is somewhat negative: we should qualify our conclusions with ‘probably not’, ‘could be’ or ‘it appears to me that...’  I’d like more emphasis on how to choose what to do, and how to manage risk. 

Perhaps Cadsby has picked up Kahneman’s pessimism, along with the undoubted insights of behavioural economics.  It seems that the best we can do is overcome – effortfully – our inevitable cognitive shortcomings.  For example, we read a lot about confirmation bias, availability bias and myside bias, but nothing about optimism bias: the tendency to assume that everything will turn out well, which becomes a useful learning tool when we’re surprised by failure or the unexpected.  (I’d like to see more in the book about learning.)  Rather than celebrating our successes in combining Type 1 and Type 2 thinking – in collaborative research, artistic production, business and diplomacy – Cadsby invokes the quietism of Stoicism and Buddhism to help us outmanoeuvre Type 1 thinking and the depressing negativity bias of our emotions.  (“The marginal value of eating and sex declines rapidly once we have had our fill, but the marginal value of avoiding danger never declines.”  Hm.) 

What’s missing?

The clue may be in the ‘cultural big bang’ that Cadsby describes early in the book.  It’s a critical part of the narrative.  This was the moment, perhaps 50,000 years ago, when human consciousness seemed to take a sudden leap forward, “fuelled by the ... ability to communicate complex ideas and generalize learning by applying insight from one task to different ones.”   Something happened to our thinking; something that allowed us to transcend the difference between Type 1 and Type 2 thinking and combine them; something that offered us the opportunity, not merely to generalise, but to create wholly new ideas.  Cadsby acknowledges that this cognitive leap expanded our working memories and enabled us to speculate about the past and the future.  But there’s a more radically significant element in this new ‘cathedral of the mind’, as Steven Mithen has called it.  And Cadsby, I can’t help feeling, has missed it.

That element is metaphorical thinking.   

“The metaphor,” said José Ortega y Gasset, “is probably the most fertile power possessed by man.”  Metaphorical thinking has generated the massive potential for creativity that continues to drive our cognitive development.  Where, I wonder, might metaphor fit into Ted Cadsby’s splendidly articulated argument?


Framing for wonks (and others)


Photo by mnadi

A very good blogpost by Athene Donald set me thinking the other day about writing policy papers, position papers, committee papers and other kinds of persuasive document.  She was responding to this article by Stian Westlake on the Guardian Political Science blog.

Both pieces concentrate on matters stylistic.  Athene Donald quotes three key suggestions from Westlake’s piece.

  • Neither glibness nor prolixity make for useful advice.

(I think it should be ‘makes’ – but let that pass.)

  • Clarity, brevity and a sense of narrative are all important parts of good advice.

“It takes an eagle eye,” comments Professor Donald wisely, “to remove unnecessary circumlocutions and hesitancies.”

  • Good advice is not just a matter of providing information, or summarising research. It also involves making a judgment about the balance of facts, helping frame the issue, and communicating in a way that the person you’re counselling will understand and act on.

To which she adds:  “Scientists aren’t always familiar with the idea of framing, or at least that is my personal experience.”

An important point is lurking here, which needs to be dragged out for scrutiny.

Westlake quotes Alan Clark, who once wrote a paper advocating deep cuts in military spending. Clark crows:


...not only was my paper first in, it was only five pages long. All this stuff [civil servants are] sending up now is ten, twenty pages per memo. On-the-one-hand, on-the-other-hand balls. No one will bother, and in any case all will be read in the context of my argument.  Julian told me that the Treasury had commented that mine was 'the first decently written paper' they had seen for thirty years.

 

Ignore, if you can, the schoolboyish glee.  Clark’s success, I suggest, wasn’t principally due to style.  Clark got his way, as Westlake notes nearby, “by force of argument and cunning.”

Scientists are often frustrated by the irrationality of non-scientists.  Creationists ignore the overwhelming success of evolution as an explanatory theory.  Climate change sceptics scoff at sophisticated meteorological analysis.  Some Nigerian communities have refused to inoculate their children against polio because they believe, against all the evidence, that the vaccine causes infertility.

Why do people resist good arguments so often and so persistently?

Because argument doesn’t operate by reason alone.  At least, most arguments don’t.  To succeed, an argument has to be framed to fit the assumptions, values and beliefs of the audience. 

Frames are the mental models through which we perceive and make sense of the world.  Some frames seem to be genetically imprinted; most are learned and reinforced through experience.  The choices we make, the decisions we take, and the arguments we believe, are determined by the frames we use. 

The idea of framing has been around for some decades.  The great rhetorician Kenneth Burke talked about ‘terministic screens’; Gregory Bateson and Erving Goffman developed the idea further in the 1970s.  More recently, framing has become seriously trendy through the work of Amos Tversky and Daniel Kahneman on ‘cognitive biases’.

In the field of informal logic, framing serves to establish what Stephen Toulmin calls an argument’s warrant.  A warrant is a generally held assumption, value or belief that justifies (or warrants) the word ‘because’ as a link between claim and reason.  We’ll be convinced by an argument if, and only if, we accept the warrant underlying it. 

Imagine a nutritionist making this case to a five-year-old.

Eat your vegetables because they’re good for you.

What chance of success here?  The warrant is the unstated assumption that ‘we should eat what’s good for us’.  Show me a five-year-old who holds that truth to be self-evident.  They don’t buy it; the argument is unwarranted.  Exit one helpless nutritionist.
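For readers who think better in code, Toulmin’s point can be caricatured in a few lines of Python. This is a purely illustrative toy – the `Argument` class, the belief sets and the sample answers are my own invention, not anything from Toulmin or the articles discussed here – but it captures the mechanism: the same claim-plus-reason succeeds or fails depending entirely on whether the audience already holds the warrant.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str    # what we want the audience to accept
    reason: str   # the evidence or grounds offered in support
    warrant: str  # the unstated assumption linking reason to claim

def convinces(argument, audience_beliefs):
    """An argument succeeds only if the audience already holds its warrant."""
    return argument.warrant in audience_beliefs

veg = Argument(
    claim="eat your vegetables",
    reason="they're good for you",
    warrant="we should eat what's good for us",
)

# Two audiences who hear exactly the same words:
nutritionist = {"we should eat what's good for us"}
five_year_old = {"we should eat what tastes nice"}

convinces(veg, nutritionist)   # True: the warrant is shared
convinces(veg, five_year_old)  # False: the argument is unwarranted
```

Nothing about the argument itself changes between the two calls; only the audience does. That is the whole point of framing.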

Of course, there are other methods of persuasion.  We could use force, social proof (‘your best friend Sam eats his vegetables’), or any of Robert Cialdini’s other patterns of influence. 

We could hire Brian Cox or Dara Ó Briain to do the job; but that’s not reason.  That’s charisma: a version of what Aristotle called ethos.

Now look at what Alan Clark was doing.  As Timothy Johnson points out in his comment to Westlake’s piece, Clark was preaching to the converted: trying to convince the Treasury to make spending cuts.  Perfect framing.  When does the Treasury ever not want to make spending cuts?  That warrant – ‘spending cuts are goooooooooood’ – pretty well acts as the Treasury’s motto.

Hardly deep; but definitely cunning.

According to political communication researcher Jim Kuypers, frames operate in four ways:

  • they define problems;
  • they diagnose causes;
  • they make moral judgments; and
  • they suggest remedies.

When we’re constructing an argument, then, we could usefully ask four questions about our audience.

  • How do they define the problem?
  • What do they think the cause is?
  • What’s their moral view of the problem?
  • What kind of remedy are they looking for?

We then have to frame our argument to address the answers to those four questions.
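Those four questions can even be carried around as a checklist. Here’s a minimal Python sketch – the dictionary keys, the `unanswered` helper and the sample ‘Treasury’ answers are all my own illustrative inventions, not from Kuypers – that flags which parts of an audience’s frame you haven’t yet worked out:

```python
# The four audience questions, recast as a reusable checklist.
AUDIENCE_FRAME_QUESTIONS = {
    "problem":  "How do they define the problem?",
    "cause":    "What do they think the cause is?",
    "morality": "What's their moral view of the problem?",
    "remedy":   "What kind of remedy are they looking for?",
}

def unanswered(audience_answers):
    """Return the framing questions not yet answered for this audience."""
    return [question for key, question in AUDIENCE_FRAME_QUESTIONS.items()
            if key not in audience_answers]

# A half-finished audit for a hypothetical Treasury audience:
treasury = {
    "problem": "public spending is too high",
    "remedy":  "cuts",
}

unanswered(treasury)  # the 'cause' and 'morality' questions remain
```

An argument drafted before the list comes back empty is an argument framed for ourselves, not for the audience.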

Now, I can see that this might be a horrifying suggestion for many scientists.  After all:

  • How do they define the problem?  As a hypothesis.
  • What do they think the cause is? Whatever the research tells them; and the causes may be complex and various.
  • What’s their moral view of the problem? Whatever has survived attempts at falsification is provisionally true.
  • What kind of remedy are they looking for? One that respects the complexity of the truth uncovered by the research.

That’s the frame through which (I hope) they view reality.  Which is fine if the audience for their argument frames things in the same way.  But many audiences – politicians, journalists, activists, members of faith communities, ordinary folk – don’t; and then the argument will fail.

Framing, it seems to me, is a powerful tool for constructing more effective arguments.  Anyone arguing across intellectual, social or political boundaries will find it helpful.  Not just policy wonks.

(Thanks also to Timothy Johnson for pointing us to this article, which takes the conversation still further...)


Learning to juggle helps your brain grow - official!

Learning is good for you, right?  Absolutely: and new research suggests that it actually helps your brain to grow.

Recent research at the University of Oxford suggests that learning a new skill boosts the connections between different parts of the brain by developing the brain’s ‘white matter’ – which consists mostly of axons.

Axons are the long, slender projections of neurons. They make contact with other brain cells – usually other neurons – by sending electrical impulses to the synapses, which act as junctions between brain cells.


  Neuron

You might expect learning to strengthen those connections.  Until now, however, research has tended to concentrate on the grey matter of the brain: the neurons themselves.  The new work has, for the first time, actually observed growth in white matter.

Jan Scholz and his colleagues work at the snappily titled Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB). They asked 24 young men and women to learn juggling, practising for half an hour a day over six weeks.  Before and after this training period, the researchers scanned the brains of the jugglers – together with those of 24 people who didn't do any juggling.

They found that, while there was no change in the non-jugglers' brains, the jugglers grew more white matter in a part of the parietal lobe - the part of the brain involved in connecting what we see with how we move.
  

Jan Scholz

(And here is Jan expanding the white matter in his own brain...)

One heartening finding is that it’s the learning that matters, not how good you become.  White matter grew by the same amount in all the jugglers, regardless of how skilled they became.  So it’s probably the learning process itself, rather than the level of performance reached, that drives brain development.

An even cheerier result is that the changes seem to be permanent.  When the researchers scanned the jugglers' brains again,  after four weeks without juggling, they found that the new white matter had stayed put.  What’s more, the amount of grey matter had even increased.

"It's like riding a bike," Scholz says. "Either you can juggle or you can't. It takes a lot of training to learn, but once it clicks, you don't forget it."

‘We tend to think of the brain as being static, or even beginning to degenerate, once we reach adulthood,’ says Dr Heidi Johansen-Berg of the Department of Clinical Neurology, University of Oxford, who led the work. ‘In fact we find the structure of the brain is ripe for change. We’ve shown that it is possible for the brain to condition its own wiring system to operate more efficiently.’

The brain wants to learn, it seems. 

Arne May of the University Medical Centre Hamburg-Eppendorf in Germany, who had led the previous work on juggling and grey matter, finds the Oxford findings "fascinating". "It suggests that learning a skill is more important than exercising what you are good at already," he says. "The brain wants to be puzzled and learn something new."

Read the full story in New Scientist here.