'So what?': the conundrum of scicomm
11 September 2018
[image with thanks to magnoliamc.com]
This is the first of three posts. Links to the other two are at the end.
This week, I’m at the British Science Festival in Hull, which offers hundreds of exciting events creating a conversation between science – and scientists – and everyone else. It’s a great place to observe the challenges facing science communication, and the thrill when good scicomm successfully engages its audience.
This year’s festival is likely to be dominated by artificial intelligence – not least because the British Science Association’s new president, Jim Al-Khalili, will be devoting his Presidential Address to that topic.
It’s hard to keep up with the advances in AI: only three months ago, IBM unveiled Project Debater, a system capable of debating with humans on complex topics in real time. Project Debater seems to bring AI squarely into the rhetorical arena, perhaps for the first time. Does Project Debater automate persuasion?
The history of AI begins with attempts to replicate logical thinking: Simon and Newell developed the Logic Theorist, widely considered the first AI program, in the mid-1950s. Classical AI, built in part on the work of Alan Turing, went on to develop algorithms that use heuristics to make reasonable choices in pursuit of a goal. Classical AI now helps systems in logistics, manufacturing and construction to plan and execute processes in highly controlled environments.
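(For the technically curious, here is a minimal sketch of what "heuristics in pursuit of a goal" can look like in code. It is purely my own illustration, with invented names and a toy grid; real planning systems are vastly more elaborate.)

```python
import heapq

def greedy_best_first(start, goal, neighbours, heuristic):
    """Repeatedly expand whichever frontier node the heuristic
    scores as closest to the goal; return a path, or None."""
    frontier = [(heuristic(start), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbours(node):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

# Toy example: move across a 4x4 grid from (0, 0) to (3, 3),
# guided by Manhattan distance to the goal.
GOAL = (3, 3)

def neighbours(p):
    x, y = p
    steps = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]

def manhattan(p):
    return abs(GOAL[0] - p[0]) + abs(GOAL[1] - p[1])

print(greedy_best_first((0, 0), GOAL, neighbours, manhattan))
```

The "reasonable choice" lives entirely in the heuristic: the program never understands the grid or the goal; it simply follows a score.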
Machine learning – arguably the next stage in the AI story – creates systems that hunt for patterns in data. Neural networks take AI still further, mimicking to some extent the neurological structures of the brain. Systems like IBM’s Watson and AlphaGo (developed by DeepMind) seem to go beyond regurgitating knowledge and running logical deductions: they give a very good impression of discovering new strategies for solving problems and even generating new ideas.
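(Again for the curious: the simplest possible "pattern hunter" fits in a few lines. The sketch below is my own toy illustration, not anything from Watson or AlphaGo. A single artificial neuron, the basic unit of a neural network, learns from labelled examples which points lie above the line y = x.)

```python
import random

random.seed(0)
# 200 labelled examples: label 1 if the point lies above y = x, else 0.
data = [((x, y), 1 if y > x else 0)
        for x, y in [(random.random(), random.random()) for _ in range(200)]]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                  # repeated passes over the data
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        error = label - pred         # nudge the weights towards the target
        w[0] += lr * error * x
        w[1] += lr * error * y
        b += lr * error

print(w, b)  # the learned weights encode the pattern, roughly "y > x"
```

Nobody told the neuron the rule; it found the pattern in the data. But finding a pattern is not the same as knowing what it means, which brings us to the limits of even the most advanced systems.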
But even these most advanced forms of AI lack the ability to think conceptually. As Professor Al-Khalili demonstrated in a recent TV documentary, we can train a program to recognise a dog, but it doesn’t know what a dog is.
So, at the heart of AI sits a conundrum. It’s known as Moravec’s paradox. AI is becoming ever more effective at the kind of complicated rational thinking that humans find hard, but it can’t yet replicate the kind of perceptual and conceptual thinking that toddlers find effortless: recognising a face, moving around in space, and catching a ball; paying attention to what’s interesting, setting goals, and planning a course of action.
Moravec famously explains the paradox in evolutionary terms:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. […] We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
There’s a further paradox here. Those older, unconscious cognitive skills include the most sophisticated thinking skill of all: the ability to generate meaning. The question that AI has yet to learn to answer is: “So what?”
Scicomm faces the same paradoxical challenge. Scientists – inheritors of a rational method barely 2000 years old, which Moravec calls “the thinnest veneer of human thought” – need to be able to communicate with human brains that are highly evolved meaning-making systems. They need to engage the emotions, values, aesthetic judgements and social skills that shape our perception of reality. They need to tell us, not just what they know or how they’ve come to know it, but what it means.
In short, they need a rhetorical method to complement the scientific method.
In the next two posts, I’ll be exploring two key elements of that method: finding a message and discovering a structure for your presentation.