“when history is at its most obliging, the history-writer needs be at his most wary.” (China by John Keay)
I came across this nugget of wisdom when I was re-reading the Introduction to John Keay’s history of China. And it struck me that in some way this quote could be a part of the motto of this blog. The whole thing might then read something like this:
Hack at your thoughts at any opportunity to see if you can reveal new connections through analogies, metonymies and metaphors. Uncover hidden threads, weave new ones and follow them as far as they take you. But when you see them give way and oblige you with great new revelations about how the world really is, be wary!
Metaphors can be very obliging in their willingness to show us that things we previously thought separate are one and the same. But that is almost always the wrong conclusion. Everything is what it is; it is never like something else. (In this I had been subscribing to ‘tiny ontology’ even before I’d heard of it.) But we can learn things about everything when we think about it as something else. Often we cannot even think of many things other than through something else. Take electricity. Electrons are useful to think of as particles or as waves. Electrons are electrons; they are not little balls, nor are they waves. But when we start treating them as one or the other, they become more tractable for some problems (electrical current makes more sense when we think of them as waves, and electricity generating heat makes sense when we think of them as little balls).
George Lakoff and Mark Johnson summarize metaphors in the X IS Y format (e.g. LOVE IS A JOURNEY) but this implied identity is where the danger lies. If love is a journey, as we can see in a phrase like ‘We’ve arrived at a junction in our relationship’, then it surely must be a journey in all respects: it has twists and turns, it takes time, it is expensive, it happens on asphalt! Hold on! Is that last one the reason ‘love can burn with an eternal flame’? Of course not. Love IS NOT a journey. Some aspects of what we call love make more sense to us when we think of them as a journey. But others don’t. Since it is obvious that love is not a journey but is like a journey, we don’t worry about it. But it’s more complicated than that. The identities implied in metaphor are so powerful (more so to some people than others) that some mappings are not allowed because of the dangers of following them too far. ‘LOVE IS A CONTRACT’ is a perfectly legitimate metaphor. There are many aspects of a romantic relationship that are contract-like. We agree to exclusivity, certain modes of interaction, considerations, etc. when we declare our love (or even when we just feel it – certain obligations seem to follow). But our moral compass just couldn’t stomach (intentional mix) the notion of paying for love or being in love out of obligation, which could also be traced from this metaphor. We instinctively fear that ‘LOVE IS A CONTRACT’ is far too obliging a metaphor and we don’t want to go there. (By we, I mean the general rules of acceptable discourse in certain circles, not every single cognizing individual.)
So even though metaphors do not describe identity, they imply it, and not infrequently this identity is perceived as dangerous. But there’s nothing inherently dangerous about it. The issue is always the people and how willing they are to let themselves be obliged by the metaphor. They are aided and abetted in this by the conceptual universe the metaphor appears in but never completely constrained by it. Let’s take the common metaphor of WAR. I often mention the continuum of ‘war on poverty’, ‘war on drugs’, and ‘war on terror’ as an example of how the metaphors of ‘war’ do not have to lead to actual ‘war’. Lakoff showed, in ‘Metaphors Can Kill’, that they can. But we see that they don’t have to. Or rather, we don’t have to let them. If we don’t apply the brakes, metaphors can take us almost anywhere.
There are some metaphors that are so obliging, they have become cliches. And some are even recognized as such by the community. Take, for instance, Godwin’s law. ‘X is Hitler’ or ‘X is a Nazi’ are such seductive metaphors that sooner or later someone will apply them in almost any even remotely relevant situation. And even with the awareness of Godwin’s law, people continue to do it.
The key principle of this blog is that anything can be a metaphor for anything with useful consequences. Including:
The United States is ancient Rome
China today is the Soviet Union of the 1950s
Saddam Hussein is Hitler
Iraq is Vietnam
Education is a business
Mental difficulties are diseases
Learning is filling the mind with facts
The mind is the software running on the hardware of the brain
Marriage is a union between two people who love each other
X is evolved to do Y
X is a marketplace
But this only applies with the HUGE caveat that we must never misread the ‘is’ for a statement of perfect identity or even isomorphism (same-shapedness). It’s ‘is(m)’. None of the above metaphors are perfect identities – they can be beneficially followed as far as they take us, but each one of them needs to be bounded before we start drawing conclusions.
Now, things are not helped by the fact that any predication or attribution can appear as a kind of metaphor. Or rather it can reveal the same conceptual structures the same way metaphors do.
‘John is a teacher.’ may seem like a simple statement of fact but it’s so much more. It projects the identity of John (of whom we have some sort of a mental image) into the image schema of a teacher. That there’s more to this than just a simple statement can be revealed by ‘I can’t believe that John is a teacher.’ The underlying mental representations, and the work we do on them, are not that different from ‘John is a teaching machine.’ Even simple naming is subject to this, as we can see in ‘You don’t look much like a Janice.’
Equally, simple descriptions like ‘The sky is blue’ are more complex. The sky is blue in a different way than somebody’s eyes are blue or the sea is blue. I had that experience myself when I first saw the ‘White Cliffs of Dover’ and was shocked that they were actually white. I had just assumed that they were a lighter kind of cliff than a typical one, or had some white smudges. They were white in the way chalk is white (through and through) and not in the way a zebra crossing is white (as opposed to a double yellow line).
A famous example of how complex these conceptualisations can get is ‘In France, Watergate would not have harmed Nixon.’ The ‘in France’ and ‘not’ bits establish a mental space in which we can see certain parts of what we know about Nixon and Watergate projected onto what we know about France. Which is why sentences like “The King of France is bald.” and “Unicorns are white.” make perfect sense even though they both describe things that don’t exist.
Now, that’s not to say that sentences like ‘The sky is blue’, ‘I’m feeling blue’, ‘I’ll praise you to the sky’, or ‘He jumped sky high’ and ‘He jumped six inches high’ are cognitively or linguistically exactly the same. There’s lots of research that shows that they have different processing requirements and are treated differently. But there seems to be a continuum in the ability of different people (much research is needed here) to accept the partiality of any statement of identity or attribution. On one extreme, there appears to be something like autism, which leads to a reduced ability to identify figurative partiality in any predication, but actually, most of the time, we all let ourselves be swayed by the allure of implied identity. Students are shocked when they see their teacher kissing their spouse or shopping in the mall. We even ritualize this sort of thing when we expect unreasonable morality from politicians or other public figures. This is because, over the long run, overtly figurative sentences such as ‘he has eyes like a hawk’ and ‘the hawk has eyes’ need similar mental structures to be present to make sense to us. And I suspect that this is part of the reason why we let ourselves be so easily obliged by metaphors.
Update: This post was intended as a warning against over-obliging metaphors that lead to perverse understandings of things as other things in unwarranted totalities. In this sense, they are the ignes fatui of Hobbes. But there’s another way in which over-obliging metaphors can be misleading. And that is, they draw on their other connections to make it seem we’ve come to a new understanding where in fact all we’ve done is rename elements of one domain with the names of elements of another domain without any elucidation. This was famously and devastatingly the downfall of Skinner’s Verbal Behavior under Chomsky’s critique. He simply (at least in the extreme caricature that was Chomsky’s review) took things about language and described them in terms of operant conditioning. No new understanding was added, but because the ‘new’ science of psychology was seen as the future of our understanding of everything, just using those terms made us assume there was a deeper knowledge. Chomsky was ultimately right, if only to fall prey to the same danger with his computational metaphors of language. Another area where this is happening is evolution, genetics and neuroscience, which are often used (sometimes all at once) to simply relabel something without adding any new understanding whatsoever.
…analogies work best when they are opportunistic, ad hoc, and abandoned as quickly as they are adopted. Analogies, if used generatively (i.e. to come up with new ideas), can be incredibly powerful. But when used exegetically (i.e. to interpret or summarize other people’s ideas), they can be very harmful.
The big problem is that in our cognition, ‘x is y’ and ‘x is like y’ are often treated very similarly. But the fact is that x is never y. So every analogy has to be judged on its own merit and we need to carefully examine why we’re using the analogy and at every step consider its limits. The power of analogy is in its ability to direct our thinking (and general cognition) i.e. not in its ‘accuracy’ but in its ‘aptness’.
I have long argued that history should be included in considering research results and interpretations. For example, every ‘scientific’ proof of some fundamental deficiency of women with respect to their role in society has turned out to be either inaccurate or non-scalable. So every new ‘proof’ of a ‘woman’s place’ needs to be treated with great skepticism. That does not mean that such a proof cannot exist. But it does mean that we shouldn’t base any policies on it until we are very, very certain.
The video started off with panache and promised some entertainment; however, I found myself increasingly annoyed as the video continued. The problem is that this is an exchange of cliches that pretends to be a fight of truth against ignorance. Sure, Storm doesn’t put forward a very coherent argument for her position, but neither does Minchin. His description of science vs. faith is laughable (being in awe at the size of the universe, my foot) and nowhere does he display any nuance nor, frankly, any evidence that he is doing anything other than parroting what he’s heard on some TV interview with Dawkins. I have much more sympathy with the Storms of this world than with these self-styled defenders of science whose only credentials are that they can remember a bit of high school physics or chemistry and have read an article by some neo-atheist in Wired. What’s worse, it’s would-be rationalists like him who do what passes for science reporting in major newspapers or on the BBC.
But most of all, I find it distasteful that he chose a young woman as his antagonist. If he wished to take on the ‘antiscience’ establishment, there are so many much better figures to target for ridicule. Why not take on the pseudo-spiritualists in the mainstream media with their ecumenical conciliatory garbage? How about taking on tabloids like Nature or Science that publish unreliable preliminary probes as massive breakthroughs? How about universities that put out press releases distorting partial findings? Why not take on economists who count things that it makes no sense to count just to make things seem scientific? Or, if he really has nothing better to do, let him lay into some super-rich creationist pastor. But no, none of these captured his imagination; instead he chose to focus his keen intellect and deep erudition on a stereotype of a young woman who’s trying to figure out a way to be taken seriously in a world filled with pompous frauds like Minchin.
The blog post commenting on the video sparked a debate about the limits of knowledge (Note: This is a modified version of my own comment). But while there’s a debate to be had about the limits of knowledge (what this blog is about), this is not the occasion. There is no need to adjudicate which of these two is more ‘on to something’. They’re not touching on anything of epistemological interest; they’re just playing a game of social positioning in the vicinity of interesting issues. But in this game, people like Minchin have been given a lot more chips to play with than people like Storm. It’s his follies and prejudices, and not hers, that are given a fair hearing. So I’d rather spend a few vacuous moments in her company than endorse his mindless ranting.
And as for ridiculing people for stupidity or shallow thinking, I’m more than happy to take part. But I want to have a look at those with power and prestige, because, just as often as the Storms of this world, they act just as silly and irrationally the moment they step out of their areas of expertise. I see this all the time in language, culture and history (areas I know enough about to judge the level of insight). Here’s the most recent one that caught my eye:
It comes from a side note in a post about evolutionary foundations of violence by a self-proclaimed scientist (the implied hero in Minchin’s rant):
It is said that the Bedouin have nearly 100 different words for camels, distinguishing between those that are calm, energetic, aggressive, smooth-gaited, or rough, etc. Although we carefully identify a multitude of wars — the Hundred Years War, the Thirty Years War, the American Civil War, the Vietnam War, and so forth — we don’t have a plural form for peace.
Well, this paragon of reason could be forgiven for not knowing what sort of nonsense this ‘100 words for’ cliche is. The Language Log has spilled enough bits on why this and other snowclones are silly. But the second part of the argument is just stupid. And it is a typical scientist blundering about the world as if the rules of evidence didn’t apply to him outside the lab and as if data not in a spreadsheet did not require a second thought. As if having a PhD in evolutionary theory meant everything else he says about humans must be taken seriously. But how can such a moronic statement be taken as anything but feeble twaddle to be laughed at and belittled? How much more cumulatively harmful are moments like these (and they are all over the place) than the socializing efforts of people like Storm from the video?
So, I should probably explain why this is so brainless. First, we don’t have a multitude of words for war (just like the Bedouin don’t have 100, or even a dozen, for a camel). We just have the one, and we have a lot of adjectives with which we can modify its meaning. And if we want to look for some that are at least equivalent to possible camel attributes, we won’t choose names of famous wars but rather things like civil war, total war, cold war, holy war, global war, naval war, nuclear war, etc. I’m sure West Point or even Wikipedia has much to say about a possible classification. And of course, all of this applies to peace in exactly the same way. There are ‘peaces’ with names like the Peace of Westphalia, Arab-Israeli Peace, etc. with just as many attributive pairs like international peace, lasting peace, regional peace, global peace, durable peace, stable peace, great peace, etc. I went to a corpus to get some examples, but that this must be the case was obvious, and a simple Google search would give enough examples to confirm a normal language speaker’s intuition. But this ‘scientist’ had a point to make, and because he’s spent twenty years doing research in the evolution of violence, he must surely be right about everything on the subject.
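The intuition that ‘war’ and ‘peace’ attract modifiers in exactly the same way is easy to check even without a proper corpus tool. Here is a minimal sketch; the sample sentences and the naive one-word-to-the-left regex are mine and purely illustrative (a real check would query a tagged corpus such as the BNC or COCA):

```python
import re
from collections import Counter

# Toy stand-in for a corpus; purely illustrative sentences.
text = """They feared a civil war more than a cold war.
A lasting peace followed the holy war, and a regional peace held.
Nuclear war was averted; a durable peace and a stable peace endured."""

def modifiers(noun, text):
    """Count the word immediately preceding `noun` -
    a crude proxy for attributive modifiers."""
    return Counter(re.findall(r"(\w+) " + noun, text.lower()))

war_mods = modifiers("war", text)
peace_mods = modifiers("peace", text)

# Both nouns take attributive modifiers in the same way.
print(sorted(war_mods))    # ['civil', 'cold', 'holy', 'nuclear']
print(sorted(peace_mods))  # ['durable', 'lasting', 'regional', 'stable']
```

Even on three toy sentences the symmetry is obvious; on real data the modifier inventories of both nouns run into the dozens, which is the whole point against the ‘no plural form for peace’ argument.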
Now, I’m sure this guy is not an idiot. He’s obviously capable of analysis and presenting a coherent argument. But there’s an area that he chose to address about which he is about as qualified to make pronouncements as Storm and Minchin are about the philosophy of science. And what he said there is stupid and he should be embarrassed for having said it. Should he be ridiculed and humiliated for it the way I did here? No. He made the sort of mistake everyone makes from high school students to Nobel laureates. He thought he knew something and didn’t bother to examine his knowledge. Or he did try to examine it but didn’t have the right tools to do it. Fine. But he’s a scientist (and a man not subject to stereotypes about women) so we give him and too many like him a pass. But Storm, a woman who like so many of her generation uses star signs to talk about relationships and is uncomfortable with the grasping maw of classifying science chomping on the very essence of her being, she is fair game?
It’s this inequality that makes me angry. We afford one type of shallowness the veneer of respectability and rake another over the coals of ridicule and opprobrium. Not on this blog!
UPDATE: I was just listening to this interview with a philosopher and historian of science about why there was so much hate coming from scientists towards the Gaia hypothesis, and his summation, it seems to me, fits right in with what this post is about. He says: “When scientists feel insecure and threatened, they turn nasty.” And it doesn’t take a lot of study of the history and sociology of science to find ample examples of this. The ‘science wars’, the ‘linguistics wars’, the neo-Darwinist thought purism; the list just goes on. The world view of scientism is totalising and has to deal with exactly the same issues as other totalising views such as monotheistic religions with constitutive ontological views or socio-economic utopianisms (e.g. neo-liberalism or Marxism).
And one of those issues is how you afford respect to, or even just maintain conversation with, people who challenge your ideological totalitarianism – or in other words, people who are willfully and dangerously “wrong”. You can take the Minchin approach of suffering in silence at parties and occasionally venting your frustration at innocent passersby, but that can lead to outbreaks of group hysteria, as we saw with the Sokal hoax or one of the many moral panic campaigns.
Or you can take the more difficult journey of giving up some of your claims on totality and engaging with even those most threatening to you as human beings, the way Feyerabend did or Gould sometimes tried to do. This does not mean patiently proselytizing in the style of evangelical missionaries but rather an ecumenical approach of meeting together without denying who you are. This will inevitably involve moments where irreconcilable differences will lead to a stand on principles (cf. Is multi-culturalism bad for women?) but even in those cases an effort at understanding can benefit both sides, as with the question of vaccination described in this interview. At all stages, there will be a temptation to “understand” the other person by reducing them to our own framework of humanity: psychologizing a religious person as an unsophisticate dealing with feelings of awe in the face of incomprehensible nature, or pitying the atheist for not being able to feel the love of God and reach salvation. There is no solution. No utopia of perfect harmony and understanding. No vision of lions and lambs living in peace. But acknowledging our differences and slowing down our outrage can perhaps make us into better versions of ourselves and help us stop wasting time trying to reclaim other people’s stereotypes.
UPDATE 2: I am aware of the paradox between the introduction and the conclusion of the previous update. Bonus points for spotting it. I actually hold a slightly more nuanced view than the first paragraph would imply but that is a topic for another blog post.
For some reason, many accomplished people, when they are done accomplishing what they’ve set out to accomplish, turn their minds to questions like:
What is primary, thought or language?
What is primary, culture or language?
What is primary, thought or culture?
I’d like to offer a small metaphor hack for solving or rather dissolving these questions. The problem is that all three concepts: culture, mind and language are just useful heuristics for talking about aspects of our being. So when I see somebody speaking in a way I don’t understand, I can talk about their language. Or others behave in ways I don’t like, so I talk about their culture. Then, there’s stuff going on in my head that’s kind of like language, but not really, so I call that sort of stuff mind. But these words are just useful heuristics not discrete realities. Old Czechs used the same word for language and nation. English often uses the word ‘see’ for ‘understand’. What does it mean? Not that much.
Let’s compare it with the idea of the setting sun. I see the Sun disappearing behind the horizon and I can make some useful generalizations about it: organize my directions (East/West), plant crops so they grow better, orient my dwelling, etc. And my description of this phenomenon as ‘the sun is setting behind the horizon’ is perfectly adequate. But then I might start asking questions like ‘what does the Sun do when it’s behind the horizon?’ Does it turn itself off and travel under the earth to rise again in the East the next morning? Or does it die and a new one rise the next day? Those are all very bad questions, because I have accepted my local heuristic as describing a reality. It would be even worse if I tried to go and see the edge of the horizon. I’d be like the two fools who agreed to follow the railway tracks all the way to the point where they meet. They keep going until one of them turns around and says ‘dude, we already passed it’.
So to ask questions about how language influences thought and culture influences language is the same as trying to go see the horizon. Language, culture and mind are just ways of describing things for particular purposes and when we start using them outside those purposes, we get ourselves in a muddle.
I was reminded by this blog post on LousyLinguist that many people still see metaphor as an unproblematic, homogeneous concept, which leads to much circular thinking about it. I wrote about that quite a few years ago in:
Lukeš, D., 2005. Towards a classification of metaphor use in text: Issues in conceptual discourse analysis of a domain-specific corpus. In Third Workshop on Corpus-Based Approaches to Figurative Language. Birmingham.
I suggested that a classification of metaphor had better focus on its use rather than its inherent nature. I came up with the heuristic device of cognitive, social and textual uses of metaphor.
Some of the uses I came up with (inspired by the literature from Halliday to Lakoff) were:
Figurative (elegant variation)
Cohesive (anaphoric, cataphoric, exophoric)
I also posited a continuum of salience and recoverability in metaphors.
My thinking on metaphor has moved on since then – I see it as a special case of framing and conceptual integration rather than a sui generis concept – but I still find this a useful guide to return to when confronted with metaphor use.
I was listening to the discussion of the latest BioShock game on the latest TWiT podcast when I realized that I am in fact game illiterate. I am hearing these stories and descriptions of experiences but I know I can’t access them directly without a major investment in knowledge and skill acquisition. So, this is what people with no or limited literacy must feel like in highly literacy-dependent environments. I really want to access the stories in the way they are told by the game. But I know I can’t. I will stumble, be discouraged, not have a very good time before I can have a good time. I will be a struggling gamer, in the same way that there are struggling readers.
Note: When I say game, I mean mostly a non-casual computer game such as BioShock, World of Warcraft or SimCity.
What would game literacy entail?
What would I need to learn in order to access gaming? Literacy is composed of a multiplicity of knowledge areas and skills. I already have some of these but not enough. Roughly, I will need to get at the following:
Underlying cognitive skills (For reading: transforming the sight of letters into sounds or corresponding mental representations. For gaming: transforming desired effects on screen into actions on a controller)
Complex perceptual and productive fluency (Ability to employ the cognitive skills automatically in response to changing stimuli in multiple contexts).
Context-based or task-based strategies (Ability to direct the underlying skills towards solving particular problems in particular contexts. For reading: Skim text, look things up in the index, skip acknowledgements, discover the type of text, adopt a reading speed appropriate to the type of text, etc. For gaming: Discover the type of game, gather appropriate achievements, find hidden bonuses, etc.)
Metacognitive skills and strategies (Learn the terminology and concepts necessary for further learning and for achieving the appropriate aims using strategies.)
Socialization skills and strategies (Learn to use the skills and knowledge to make connections with other people and exploit those connections to acquire further skills and knowledge, as well as the social capital deriving from them.)
Is literacy a suitable metaphor for gaming? Matches and mismatches!
With any metaphor, it is worth exploring the mapping to see if there are sufficient similarities. In this case, I’ll look at the following areas for matches and mismatches:
Both reading/writing (I will continue to use reading for both unless I need to stress the difference) and gaming require skill that can become automatic and that takes time to acquire. People can be both “better” and “worse” at gaming and reading.
But reading is a more universal skill (although not as universal as most people think) whereas gaming skills are more genre based.
The skill at gaming can be more easily measured by game achievement. Quality-of-reading measures are a bit more tenuous because speed, fluency and accuracy are all contextual measures. However, even game achievement is somewhat relative, as in recommendations to play on ‘normal’ or ‘easy’ difficulty to experience the game.
In this, gaming is more like reading than, for instance, listening to music or watching a film, which do not require any overt acquisition of skill. See Dara O’Briain’s funny bit on the differences between gaming and reading. Of course, when he says “you cannot be bad at watching a film”, we could quibble that much preparation is required for watching some films, but such training does not involve the development of underlying cognitive skills (assuming the same cultural and linguistic environment). Things are a bit more complex for some special kinds of listening to music. Nevertheless, people do talk about “media literacy”.
Reading is mostly a uni-modal experience. It is possible to read out loud or to read while listening, but ultimately reading is its own mode. Reading has an equivalent in writing which, though not a mirror-image skill, requires much the same abilities.
Gaming is a profoundly multimodal experience combining vision, sound, movement (and often reading, as well). There are even efforts to involve smell. Gaming does not have a clear expressive counterpart. The obvious expressive equivalent to writing would be game design but that clearly requires a different level of skill. However, gaming allows complex levels of self-expression within the process of game play which does not have an equivalent in reading but is not completely dissimilar to creative writing (like fanfiction).
Reading is a neutral to high status activity. The act itself is neutral but status can derive from content. Writing (expressive rather than utilitarian) is a high status activity.
Gaming is a low status to neutral activity. No loss of status derives from an inability to game or from not gaming, in the way that it does for reading. Some games have less questionable status, and many games are played by people who derive high status from outside of gaming. There are emerging status sanction systems around gaming but none have penetrated outside gaming, yet.
Reading and writing are significant drivers of wider socialization. They are necessary to perform basic social functions and often represent gateways into important social contexts.
Gaming is only required to socialize in gaming groups. However, this socialization may become more highly desirable over time.
Writing is used to encode a wide variety of content – from shopping lists to nuclear plant manuals to fiction.
Games, on the other hand, encode a much narrower range of content: primarily narrative and primarily fictional, although more non-narrative and non-fictional games may yet appear. There are also expository games but, so far, none that would afford easy storage of non-game information without using writing.
Reading and writing are very general purpose activities.
Gaming on the other hand has a limited range of purposes: enjoyment, learning, socialization with friends, achieving status in a wider community. You won’t see a bus stop with a game instead of a timetable (although some of these require puzzle solving skills to decipher).
Why may game literacy be important?
As we saw, there are many differences between gaming and reading and writing. Nevertheless, they are similar enough that the metaphor of ‘game literacy’ makes sense provided we see its limitations.
Why is it important? There will be a growing generational and populational divide of gamers and non-gamers. At the moment this is not very important in terms of opportunities and status but it could easily change within a generation.
Not being able to play a game may exclude people from social groups in the same way that not-playing golf or not engaging in some other locally sanctioned pursuit does (e.g. World of Warcraft).
But most importantly, as new generations of game creators explore the expressive boundaries of games (new narratives, new ways of storytelling), not being able to play games may result in significant social exclusion. In the same way that a quick summary of what’s in a novel is inferior to reading the novel, films based on games will be pale imitations of playing the games.
I can easily imagine a future where the major narratives of the day will be expressed in games. In the same way that TV serials have supplanted novels as the primary medium of sharing crucial societal narratives, games can take over in the future. The inner life novel took about 150 years to mature and reigned supreme for about as long while drama and film functioned as its accompaniment. The TV serial is now solidifying its position and is about where the novel was in the 1850s. Gaming may take another couple of decades to get to a stage where it is ready as a format to take over. And maybe nothing like that will happen. But if I had a child, I’d certainly encourage them to play computer games as part of ensuring a more secure future.
I just found out that both abstracts I submitted to the Cognitive Futures of the Humanities Conference were accepted. I was really only expecting one to get through but I’m looking forward to talking about the ideas in both.
The first talk has foundations in a paper I wrote almost 5 years ago now about the nature of evidence for discourse. But the idea is pretty much central to all my thinking on the subject of culture and cognition. The challenge, as I see it, is to come up with a cognitively realistic but not a cognitively reductionist account of culture. And the problem I see is that often the learning only goes one way: the people studying culture are supposed to be learning about the results of research on cognition.
Frames, scripts, scenarios, models, spaces and other animals: Bridging conceptual divides between the cognitive, social and computational
While the cognitive turn has a definite potential to revolutionize the humanities and social sciences, it will not be successful if it tries to reduce the fields investigated by the humanities to merely cognitive or by extension neural concepts. Its greatest potential is in showing continuities between the mind and its expression through social artefacts including social structures, art, conversation, etc. The social sciences and humanities have not waited on the sidelines and have developed a conceptual framework to apprehend the complex phenomena that underlie social interactions. This paper will argue that in order to have a meaningful impact, cognitive sciences, including linguistics, will have to find points of conceptual integration with the humanities rather than simply provide a new descriptive apparatus.
It is the contention of this paper that this can best be done through the concept of frame. It seems that most disciplines dealing with the human mind have (more or less independently) developed a similar notion for dealing with the complexities of conceptualization, variously referred to as frame, script, cognitive model or one of the as many as 14 terms that can be found across the many disciplines that use it. This paper will present the different terms and identify commonalities and differences between them. On this basis, it will propose several practical ways in which the cognitive sciences can influence the humanities and also derive meaningful benefit from this relationship. I will draw on examples from historical policy analysis, literary criticism and educational discourse.
The second paper is a bit more conceptually adventurous and tests the ideas put forth in the first one. I’m going to try to explore a metaphor for the merging of cultural studies with linguistic studies. This was done before with structuralism and ended more or less badly. For me, it ended when I read The Story of Lynx by Lévi-Strauss and realized how empty it was of any real meaning. But I think structuralism ended badly in linguistics, as well. We can’t really understand how very basic things work in language unless we involve culture. So even though I come at this from the side of linguistics, it is a linguistics that has already been informed by the study of culture.
If Lévi-Strauss had met Langacker: Towards a constructional approach to the patterns of culture
Construction/cognitive grammar (Langacker, Lakoff, Croft, Verhagen, Goldberg) has broken the strict separation between the lexical and grammatical linguistic units that defined linguistics for most of the last century. By treating all linguistic units as meaningful, albeit on a scale of schematicity, it has made it possible to treat linguistic knowledge as simply a part of human knowledge rather than as a separate module in the cognitive system. Central to this effort is the notion of language as an organised inventory of symbolic units that interact through the process of conceptual integration.
This paper will propose a new view of ‘culture’ as an inventory of construction-like patterns that have linguistic as well as interactional content. I will argue that using construction grammar as an analogy allows for the requisite richness and can avoid the pitfalls of structuralism. One of the most fundamental contributions of this approach is the understanding that cultural patterns, like constructions, are pairings of meaning and form and that they are organised in a hierarchically structured inventory. For instance, we cannot properly understand the various expressions of politeness without thinking of them as systematically linked units in an inventory available to members of a given culture, in the same way as syntactic and morphological relationships. As such, we can understand culture as learnable and transmittable in the same way that language is but without reducing its variability and richness as structuralist anthropology once did.
In the same way that Jakobson’s work on structuralism across the spectrum of linguistic diversity inspired Lévi-Strauss and a whole generation of anthropological theorists, it is now time to bring the exciting advances made within cognitive/construction grammar enriched with blending theory back to the study of culture.
Part of this post was incorporated into an article I wrote with Brian Kelly and Alistair McNaught that appeared in the December issue of Ariadne. As part of that work and feedback from Alistair and Brian, I expanded the final section from a simple list of bullets into a more detailed research programme. You can see it below and in the article.
Background: From spelling reform to plain language
The idea that if we could only improve how we communicate, there would be less misunderstanding among people is as old as the hills.
Historically, this notion has been expressed through things like school reform, spelling reform, publication of communication manuals, etc.
The most radical expression of the desire for better understanding is the invention of a whole new artificial language intended as a universal language for humanity. This has a long tradition but gained the most traction towards the end of the nineteenth century with the introduction and relative success of Esperanto.
But artificial languages have been a failure as a vehicle of global understanding. Instead, in about the last 50 years, the movement for plain English has been taking the place of constructed languages as something on which people pinned their hopes for clear communication.
Most recently, there have been proposals suggesting that “simple” language should become a part of a standard for accessibility of web pages alongside other accessibility standards issued by the W3C standards body: http://www.w3.org/WAI/RD/2012/easy-to-read/Overview. This post was triggered by this latest development.
Problem 1: Plain language vs. linguistics
The problem is that most proponents of plain language (like so many would-be reformers of human communication) seem to be ignorant of the wider context in which language functions. There is much that has been revealed by linguistic research in the last century or so and in particular since the 1960s that we need to pay attention to (to avoid confusion, this does not refer to the work of Noam Chomsky and his followers but rather to the work of people like William Labov, Michael Halliday, and many others).
Languages are not a simple matter of grammar. Any proposal for content accessibility must consider what is known about language from the fields of pragmatics, sociolinguistics, and cognitive linguistics. These are the key aspects of what we know about language collected from across many fields of linguistic inquiry:
Every sentence communicates much more than just its basic content (propositional meaning). We also communicate our desires and beliefs (e.g. “It’s cold here” may communicate “Close the window”, and “John denied that he cheats on his taxes” communicates that somebody accused John of cheating on his taxes. Similarly, choosing a particular form of speech, like slang or jargon, communicates belonging to a community of practice.)
The understanding of any utterance is always dependent on a complex network of knowledge about language, about the world, and about the context of the utterance. “China denied involvement.” requires an understanding of the context in which countries operate, of metonymy, and of the grammar and vocabulary. Consider the knowledge we need to possess to interpret “In 1939, the world exploded.” vs. “In Star Wars, a world exploded.”
There is no such thing as purely literal language. All language is to some degree figurative. “Between 3 and 4pm.”, “Out of sight”, “In deep trouble”, “An argument flared up”, “Deliver a service”, “You are my rock”, “Access for all” are all figurative to different degrees.
We all speak more than one variety of our language: formal/informal, school/friends/family, written/spoken, etc. Each of these varieties has its own code. For instance, “she wanted to learn” vs. “her desire to learn” demonstrates a common difference between spoken and written English, where written English often uses clauses built around nouns.
We constantly switch between different codes (sometimes even within a single utterance).
Bilingualism is the norm in language knowledge, not the exception. About half the world’s population regularly speaks more than one language but everybody is “bi-lingual” in the sense that they deal with multiple codes.
The “standard” or “correct” English is just one of the many dialects, not English itself.
Language prescription and requirements of language purity (incl. simple language) are as much political statements as linguistic or cognitive ones. All language use is related to power relationships.
Simplified languages develop their own complexities if used by a real community through a process known as creolization. (This process is well described for pidgins but not as well for artificial languages.)
All languages are full of redundancy, polysemy and homonymy. It is the context and our knowledge of what is to be expected that makes it easy to figure out the right meaning.
There is no straightforward relationship between grammatical features and language obfuscation or lack of clarity (e.g. it is just as easy to hide things using the active voice as the passive, or using a Subject-Verb-Object sentence as an Object-Subject-Verb one).
It is difficult to call any one feature of a language universally simple (for instance, SVO word order or the absence of morphology) because many languages use features we would call complex as the default, without any increase in difficulty for their native speakers (e.g. the use of verb prefixes/particles in English and German).
Language is not really organized into sentences but into texts. Texts have internal organization to hang together formally (John likes coffee. He likes it a lot.) and semantically (As I said about John. He likes coffee.) Texts also relate to external contexts (cross reference) and their situations. This relationship is both implicit and explicit in the text. The shorter the text, the more context it needs for interpretation. For instance, if all we see is “He likes it.” written on a piece of paper, we do not have enough context to interpret the meaning.
Language is not used uniformly. Some parts of language are used more frequently than others. But this is not enough to understand frequency. Some parts of language are used more frequently together than others. The frequent co-occurrence of some words with other words is called “collocation”. This means that when we say “bread and …”, we can predict that the next word will be “butter”. You can check this with a linguistic tool like a corpus, or even by using Google’s predictions in the search box. Some words are so strongly collocated with other words that their meaning is “tinged” by those other words (this is called semantic prosody). For example, “set in” has a negative connotation because of its collocation with “rot”.
All language is idiomatic to some degree. You cannot determine the meaning of all sentences just by understanding the meanings of all their component parts and the rules for putting them together. And vice versa, you cannot just take all the words and rules in a language, apply them and get meaningful sentences. Consider “I will not put the picture up with John.” and “I will not put up the picture with John.” and “I will not put up John.” and “I will not put up with John.”
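The collocation point above is easy to illustrate with a few lines of Python. This is a toy sketch over a made-up corpus, not a real corpus tool; serious collocation work would use millions of words and an association measure such as mutual information, but the principle of predicting the next word from co-occurrence counts is the same:

```python
from collections import Counter

# Toy corpus; a real collocation study would use millions of words.
corpus = ("bread and butter . salt and pepper . bread and butter . "
          "fish and chips . bread and butter .").split()

# Count adjacent word pairs (bigrams).
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word):
    # Return the most frequent word following `word` in the corpus.
    candidates = {b: n for (a, b), n in bigrams.items() if a == word}
    return max(candidates, key=candidates.get)

print(predict_next("and"))  # prints "butter"
```

Even on this tiny sample, “butter” wins because it co-occurs with “and” three times against one each for “pepper” and “chips”; Google’s search predictions work on the same frequency logic at vastly larger scale.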
It seems to me that most plain language advocates do not take most of these factors into account.
Some examples from the “How to write in plain English” guide: http://www.plainenglish.co.uk/files/howto.pdf.
Try to call the reader ‘you’, even if the reader is only one of many people you are talking about generally. If this feels wrong at first, remember that you wouldn’t use words like ‘the applicant’ and ‘the supplier’ if you were speaking to somebody sitting across a desk from you. [emphasis mine]
This example misses the point about the contextuality of language. The part in bold is the very crux of the problem. It is natural to use a different code (or register) with someone we’re speaking to in person and in a written communication. This is partly a result of convention and partly the result of the different demands of writing and speaking when it comes to the ability to point to what we’re speaking about. The reason it feels wrong to the writer is that it breaks the convention of writing. That is not to say that this couldn’t become the new convention. But the argument misses the point.
Do you want your letters to sound active or passive − crisp and professional or stuffy and bureaucratic?
Using the passive voice and sounding passive are not one and the same thing. This is an example of polysemy: the word “passive” has two meanings in English, one technical (the passive voice) and one colloquial (“he’s too passive”). The booklet recommends that “The mine had to be closed by the authority. (Passive)” should be replaced with “The authority had to close the mine. (Active)” But this ignores the fact that word order also contributes to the information structure of the sentence. The passive sentence introduces the “mine” sooner and thus makes it clear that the sentence is about the mine and not the local authority. In this case, the “active” construction made the point of the sentence more difficult to understand.
The same is true of nominalization. Another thing recommended against by the Plain English campaign: “The implementation of the method has been done by a team.” is not conveying the same type of information as “A team has implemented the method.”
The point is that this advice ignores the context as well as the audience. Using “you” instead of “customers” in “Customers have the right to appeal” may or may not be simpler depending on the reader. For somebody used to the conventions of written official English, it may actually take longer to process. But for someone who does not deal with written English very often, it will be easier. But there is nothing intrinsically easier about it.
Likewise for the use of jargon. The campaign gives as its first example of unduly complicated English:
High-quality learning environments are a necessary precondition for facilitation and enhancement of the ongoing learning process.
And suggests that we use this instead:
Children need good schools if they are to learn properly.
This may be appropriate when it comes to public debate but within the professional context of, say, policy communication, these 2 sentences are not actually equivalent. There are more “learning environments” than just schools and the “learning process” is not the same as having learned something. It is also possible that the former sentence appeared as part of a larger context that would have made the distinction even clearer but the page does not give a reference and a Google search only shows pages using it as an example of complex English. http://www.plainenglish.co.uk/examples.html
The How to write in plain English document does not mention coherence of the text at all, except indirectly when it recommends the use of lists. This is good advice but even one of their examples has issues. They suggest that the following is a good example of a list:
Kevin needed to take:
• a penknife
• some string
• a pad of paper; and
• a pen.
And at first glance it is, but lists are not just neutral replacements for sentences. They are a genre in their own right, used for specific purposes (Michael Hoey called them “text colonies”). Let’s compare the list above to the sentence below.
Kevin needed to take a penknife, some string, a pad of paper and a pen.
Obviously they are two different kinds of text used in different contexts for different purposes and this would impinge on our understanding. The list implies instruction, and a level of importance. It is suitable to an official document, for example something sent before a child goes to camp. But it is not suitable to a personal letter or even a letter from the camp saying “All Kevin needed to take was a penknife, some string, a pad of paper and a pen. He should not have brought a laptop.” To be fair, the guide says to use lists “where appropriate”, but does not mention what that means.
The issue is further muddled by the “grammar quiz” on the Plain English website: http://www.plainenglish.co.uk/quiz.html. It is a hodgepodge of irrelevant trivia about language (not just grammar) that has nothing to do with simple writing. Although the Plain English guide gets credit for explicitly not endorsing petty peeves like not ending a sentence with a preposition, they obviously have peeves of their own.
Problem 2: Definition of simple is not simple
There is no clear definition of what constitutes simple and easy to understand language.
There are a number of intuitions and assumptions that seem to be made when both experts and lay people talk about language:
Shorter is simpler (fewer syllables, characters or sounds per word, fewer words per sentence, fewer sentences per paragraph)
More direct is simpler (X did Y to Z is simpler than Y was done to Z by X)
Less variety is simpler (fewer different words)
More familiar is simpler
These assumptions were used to create various measures of “readability” going back to the 1940s. They consisted of several variables:
Length of words (in syllables or in characters)
Length of sentences
Frequency of words used (both internally and with respect to their general frequency)
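The classic Flesch-Kincaid grade-level measure combines exactly these variables. Here is a minimal sketch in Python; the coefficients are the published ones, but the syllable counter is a naive vowel-group heuristic of my own, so treat the numbers as rough approximations of what commercial tools report:

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of vowel letters.
    # Real readability tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Published formula: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

print(flesch_kincaid_grade("The cat sat on the mat. It was happy."))
```

Note that the formula only ever sees three counts: sentences, words and syllables. Everything else about the text, its coherence, its appropriateness, its audience, is invisible to it, which is exactly the problem discussed below.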
Intuitively, these are not bad measures, but they are only proxies for the assumptions. They say nothing about the context in which the text appears or the appropriateness of the choice of subject matter. They say nothing about the internal cohesion and coherence of the text. In short, they say nothing about the “quality” of the text.
The same thing is not always simple in all contexts, and sometimes what is too simple can be hard. We could see that in the example of lists above. Having a list instead of a sentence does not always make things simpler because a list is doing other work besides just providing a list of items.
Another example I always think about is the idea of “semantic primes” by Anna Wierzbicka. These are concepts like DO, BECAUSE, BAD believed to be universal to all languages. There are only about 60 of them (the exact number keeps changing as the research evolves). These were compiled into a Natural Semantic Metalanguage with the idea of being able to break complex concepts into them. Whether you think this is a good idea or not (I don’t but I think the research group working on this are doing good work in surveying the world’s languages) you will have to agree that the resulting descriptions are not simple. For example, this is the Natural Semantic Metalanguage description of “anger”:
anger (English): when X thinks of Y, X thinks something like this: “this person did something bad; I don’t want this; I would want to do something bad to this person”; because of this, X feels something bad
This seems like a fairly complicated way of describing anger and even if it could be universally understood, it would also be very difficult to learn to do this. And could we then capture the distinction between this and say “seething rage”? Also, it is clear that there is a lot more going on than combining 60 basic concepts. You’d have to learn a lot of rules and strategies before you could do this well.
Problem 3: Automatic measures of readability are easily gamed
There are about half a dozen automated readability measures currently used by software and web services to calculate how easy or difficult it is to read a text.
I am not an expert in readability but I have no reason to doubt the references in Wikipedia claiming that they correlate fairly well overall with text comprehension. But as always correlation only tells half the story and, as we know, it is not causation.
It is not at all clear that the texts identified as simple based on measures like number of words per sentence or numbers of letters per word are actually simple because of the measures. It is entirely possible that those measures are a consequence of other factors that contribute to simplicity, like more careful word choice, empathy with an audience, etc.
This may not matter if all we are interested in is identifying simple texts, as you can do with an advanced Google search. But it does matter if we want to use these measures to teach people how to write simpler texts. Because if we just tell them use fewer words per sentence and shorter words, we may not get texts that are actually easier to understand for the intended readership.
And if we require this as a criterion of page accessibility, we open the system to gaming in the same way Google’s algorithms are gamed, but without any of the sophistication. You can reduce the complexity of any text on any of these scores simply by replacing all commas with full stops. Or even by randomly inserting full stops every 5 words and putting spaces in the middle of words. The algorithms are not smart enough to catch that.
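The gaming is trivial to demonstrate. A sketch, again using the published Flesch-Kincaid grade formula with a naive syllable count of my own: replacing commas with full stops multiplies the number of “sentences” the algorithm sees, slashes the average sentence length, and lowers the computed grade without changing a single word:

```python
import re

def fk_grade(text):
    # Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * len(words) / len(sents) + 11.8 * syl / len(words) - 15.59

text = ("The committee, having considered the objections raised by residents, "
        "decided to postpone the decision until further consultations, "
        "which will be scheduled for the autumn, have taken place.")

gamed = text.replace(",", ".")  # same words, same order, "simpler" score

print(fk_grade(text), fk_grade(gamed))
```

On this sample the gamed version scores several grade levels lower despite being, if anything, harder to read, since the commas carried structure the full stops destroy.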
Also, while these measures may be fairly reliable in aggregate, they don’t give us a very good picture of any one individual text. I took a blog post from the Campaign for Plain English site http://www.plainenglish.co.uk/news/chrissies-comments.html and ran the text through several websites that calculate ease of reading scores:
The different tests varied by about 5 years in their estimate of the length of formal education required to understand the text, from 10.43 to 15.57. Read-able.com even went as far as providing an average, coming up with 12. Well, that doesn’t seem very reliable.
I preferred http://textalyser.net, which just gives you the facts about the text and doesn’t try to summarize them. The same goes for Plain English’s own little app, which you can download from their website: http://www.plainenglish.co.uk/drivel-defence.html.
By any of these measures, the text wasn’t very simple or plain at all. The longest sentence had 66 words because it contained a complex embedded clause (something not even mentioned in the Plain English guide). The average sentence length was 28 words.
The Plain English app also suggested 7 alternative words from their “alternative dictionary” but 5 of those were misses because context is not considered (e.g. “a sad state” cannot be replaced by “a sad say”). The 2 acceptable suggestions were to edit out one “really” and replace one “retain” with “keep”. Neither of which would have improved the readability of the text given its overall complexity.
In short, the accepted measures of simple texts are not very useful for creating simple texts or for training people to create them.
See also http://en.wikipedia.org/w/index.php?title=Readability&oldid=508236326#Using_the_readability_formulas.
See also this interesting study examining the effects for L2 instruction: http://www.eric.ed.gov/PDFS/EJ926371.pdf.
Problem 4: When simple becomes a new dialect: A thought experiment
But let’s consider what would happen if we did agree on simple English as the universal standard for accessibility and did actually manage to convince people to use it? In short, it would become its own dialect. It would acquire ways of describing things it was not designed to describe. It would acquire its own jargon and ways of obfuscation. There would arise a small industry of experts teaching you how to say what you want to say or don’t want to say in this new simple language.
Let’s take a look at Globish, a simplified English intended for international communication, that I have seen suggested as worth a look for accessibility. Globish has a restricted grammar and a vocabulary of 1500 words. They helpfully provide a tool for highlighting words they call “not compatible with Globish”. Among the words it highlighted for the blog post from the Plain English website were:
Globish seems to be based on not much more than guesswork. It has words like “colony” and “rubber” but not words like “temperature” or “notebook”, “appoint” but not “appointment”, “govern” but not “government”. But both the derived forms “appointment” and “government” are more frequent (and intuitively more useful) than the root forms. There is a chapter in the eBook called “1500 Basic Globish Words Father 5000” so I assume there are some rules for derivation, but the derived forms more often than not have very “idiomatic” meanings. For example, “appointment” in its most common use does not make any sense if we look at the core meanings of “appoint” and the suffix “-ment”. Consider also the difference between “govern” and “government” vs. “enjoy” and “enjoyment”.
Yet, Globish supposedly avoids idioms, cultural references, etc. Namely all the things that make language useful. The founder says:
Globish is correct English without the English culture. It is English that is just a tool and not a whole way of life.
Leaving aside the dubious notion of correctness, this would make Globish a very limited tool indeed. But luckily for Globish it’s not true. Why have the word “colony” if not to reflect cultural preference? If it became widely used by a community of speakers, the first thing to happen to Globish would be a blossoming of idioms going hand in hand with the emergence of dialects, jargons and registers.
That is not to say that something like Globish could not be a useful tool for English learners along the way to greater mastery. But it does little for universal accessibility.
Also, we need to ask ourselves what it would be like from the perspective of the users creating these simplified texts. They would essentially have to learn a whole new code, a sort of dialect. And as with any second language learning, some would do it better than others. Some would become the “simple nazis”. Some would get jobs teaching others “how to” speak simple. It is not natural for us to speak simply and “plainly” as defined in the context of accessibility.
There is some experience with the use of controlled languages in technical writing and in writing for second language acquisition. This can be done but the universe of subjects and/or the group of people creating these texts is always extremely limited. Increasing the number of people creating simple texts to pretty much everybody would increase the difficulty of implementation exponentially. And given the poor state of automatic tools for analysis of “simplicity”, quality control is pretty much out of reach.
But would even one code/dialect suffice? Do we need one for technical writing, government documents, company filings? Limiting the vocabulary to 1500 words is not a bad idea, but as we saw with Globish, it might need to be a different 1500 words for each area.
Why is language inaccessible?
Does that mean we should give up on trying to make communication more accessible? Definitely not. The same processes that I described as standing in the way of a universal simple language are also at the root of why so much language is inaccessible. Part of how language works is to create group cohesion, which includes keeping some people out. A lot of “complicated” language is complicated because the nature of the subject requires it, and a lot of complicated language is complicated because the writer is not very good at expressing themselves.
But just as much complicated language is complicated because the writer wants to signal belonging to a group that uses that kind of language. The famous Sokal Hoax provided an example of that. Even instructions on university websites on how to write essays are an example. You will find university websites recommending something like “To write like an academic, write in the third person.” This is nonsense; research shows that academics write as much in the first person as in the third. But it also makes the job of the people marking essays easier. They don’t have to focus on ideas, they just go by superficial impression. Personally, I think this is a scandal and a complete failure of higher education to live up to its own hype, but that’s a story for another time.
How to achieve simple communication?
So what can we do to avoid making our texts too inaccessible?
The first thing the accessibility community will need to do is acknowledge that simple language is its own form of expression. It is not the natural state we get when we strip all the artifice out of our communication. And learning how to communicate simply requires effort and practice from every individual.
To help with the effort, most people will need some guides. And despite what I said about the shortcomings of the Plain English Guide above, it’s not a bad place to start. But it would need to be expanded. Here’s an example of some of the things that are missing:
Consider the audience: What sounds right in an investor brochure won’t sound right in a letter to a customer
Increase cohesion and coherence by highlighting relationships
Highlight the text structure with headings
Say new things first
Consider splitting out subordinate clauses into separate sentences if your sentence gets too long
Leave all the background and things you normally start your texts with for the end
But it will also require a changed direction for research.
Further research needs for simple language
I don’t pretend to have a complete overview of the research being done in this area but my superficial impression is that it focuses far too much on comprehension at the level of clause and sentence. Further research will be necessary to understand comprehension at the level of text.
There is a need for further research into:
How collocability influences understanding
Specific ways in which cohesion and coherence impact understanding
The benefits and downsides of elegant variation for comprehension
The benefits and downsides of figurative language for comprehension by people with different cognitive profiles
The processes of code switching during writing and reading
How new conventions emerge in the use of simple language
The uses of simple language for political purposes including obfuscation
How collocability influences understanding: How word and phrase frequency influences understanding, with particular focus on collocations. The assumption behind software like TextHelp is that this is very important. Much research is available on the importance of these patterns from corpus linguistics, but we need to know the practical implications of these properties of language both for text creators and consumers. For instance, should text creators use measures of collocability to judge ease of reading and comprehension, in addition to or instead of arbitrary measures like sentence and word length?
Specific ways in which cohesion and coherence affect understanding: We need to find the strategies challenged readers use to make sense of larger chunks of text. How they understand the text as a whole, how they find specific information in the text, how they link individual portions of the text to the whole, and how they infer overall meaning from the significance of the components. We then need to see what text creators can do to assist with these processes. We already have some intuitive tools: bullets, highlighting of important passages, text insets, text structure, etc. But we do not know how they help people with different difficulties and whether they can ever become a hindrance rather than a benefit.
The benefits and downsides of elegant variation for comprehension, enjoyment and memorability: We know that repetition is an important tool for establishing the cohesion of text in English. We also know that repetition is discouraged for stylistic reasons. Repetition is also known to be a feature of immature narratives (children under the age of about 10) and more “sophisticated” ways of constructing texts develop later. However, it is also more powerful in spoken narrative (e.g. folk stories). Research is needed on how challenged readers process repetition and elegant variation and what text creators can do to support any naturally developing metatextual strategies.
The benefits and downsides of figurative language for comprehension by people with different cognitive profiles: There is basic research available from which we know that some cognitive deficits lead to reduced understanding of non-literal language. There is also ample research showing how crucial figurative language is to language in general. However, there seems to be little understanding of how and why different deficits lead to problems with processing figurative language, or of what kinds of figurative language cause difficulties. It is also not clear what types of figurative language are particularly helpful for challenged readers with different cognitive profiles. Work is needed on a typology of figurative language and a typology of figurative language deficits.
The processes of code switching during writing and reading: Written and spoken English employ very different codes, in some ways even reminiscent of different language types. This includes much more than just the choice of words. Sentence structure, clauses, grammatical constructions: all of these differ. However, this difference is not just a consequence of the medium of writing. Different genres (styles) within a language may be just as different from one another as writing and speaking. Each of these comes with a special code (or subset of grammar and vocabulary). Few native speakers ever completely acquire the full range of codes available in a language with extensive literacy practices, particularly a language that spans as many speech communities as English. But all speakers acquire several different codes and can switch between them. However, many challenged writers and readers struggle because they cannot switch between the spoken codes they are exposed to through daily interactions and the written codes to which they are often denied access because of a print impairment. Another way of describing this is multiple literacies. How do challenged readers and writers deal with acquiring written codes and how do they deal with code switching?
How do new conventions emerge in the use of simple language? Using and accessing simple language can only be successful if it becomes a separate literacy practice. However, the dissemination and embedding of such practices into daily usage are often accompanied by the establishment of new codes and conventions of communication. These codes can then become typical of a genre of documents. An example of this is Biblish. A sentence such as “Fred spoke unto Joan and Karen” is easily identified as referring to a mode of expression associated with the translation of the Bible. Will similar conventions develop around “plain English”, and how? At the same time, it is clear that within each genre or code, there are speakers and writers who can express themselves more clearly than others. Research is needed to establish whether there are common characteristics to be found in these “clear” texts, as opposed to those inherent in “difficult” texts, across genres.
All in all, introducing simple language as a universal accessibility standard is still far from being a realistic prospect. My intuitive impression, based on documents I receive from different bureaucracies, is that the “plain English” campaign has made a difference in how many official documents are presented. But a lot more research (ethnographic as well as cognitive) is necessary before we properly understand the process and its impact. Can’t wait to read it all.
This is an insight at the very heart of linguistics. Every language act we are a part of is an act of categorization. There are no simple unitary terms in language. When I say, “pull up a chair”, I’m in fact referring to a vast category of objects we refer to as chairs. These objects are not identified by any one set of features like four legs, a certain height, or certain ways of using them. There is no minimal set of features that will describe all chairs and just chairs and not other kinds of objects like tables or pillows. But chairs don’t stand on their own. They are related to other concepts or categories (and these are really one and the same): subcategories like stools and armchairs, containing categories like furniture or man-made objects, and related categories like houses and shops selling objects. All of these categories are linked in our minds through a complex set of images, stories and definitions. But these don’t just live in our minds. They also appear in our conversations. So we say things like, “What kind of a chair would you like to buy?”, “That’s not a real chair”, “What’s the point of a chair if you can’t sit in it?”, “Stools are not chairs.”, “It’s more of a couch than a chair.”, “Sofas are really just big plush chairs, when it comes down to it.”, “I’m using a box for a chair.”, “Don’t sit on a table, it’s not a chair.” Etc. Categories are not stable and uniform across all people, so we continue having conversations about them. There are experts on chairs, historians of chairs, chair craftsmen, people who buy chairs for a living, people who study the word ‘chair’, and people who casually use chairs. Some more than others. And their sets of stories and images and definitions related to chairs will be slightly different. And they will have had different types of conversations with different people about chairs. All of that goes into a simple word like “chair”. It’s really very simple as long as we accept the complexity for what it is.
Philosophers of language have made a right mess of things because they tried to find simplicity where none exists. And, what’s more, where none is necessary.
But let’s get back to cliches. Cliches are types of categories. Or better still, cliches are categories with a particular type of social salience. Like categories, cliches are sets of images, stories and definitions compressed into seemingly simpler concepts that are labelled by some sort of an expression. Most prominently, it is a linguistic expression like a word or a phrase. But it could just as easily be a way of talking, way of dressing, way of being. What makes us likely to call something a cliche is a socially negotiated sense of awareness that the compression is somewhat unsatisfactory and that it is overused by people in lieu of an insight into the phenomenon we are describing. But the power of the cliche is in its ability to help us make sense of a complex or challenging phenomenon. The sense-making, though, is for the benefit of our cognitive and emotional peace. Just because we can make sense of something doesn’t mean we get the right end of the stick. And we know that, which is why we are wary of cliches. But challenging every cliche would be like challenging ourselves every time we looked at a chair. It can’t be done. Which is why we have social and linguistic coping mechanisms like “I know it’s such a cliche.” “It’s a cliche but in a way it’s true.” “Just because it’s a cliche doesn’t mean it isn’t true.” Just try Googling: “it’s a cliche *”
So we are at once locked into cliches and struggling to overcome them. Like “chair” the concept of a “cliche” as we use it is not simple. We use it to label words, phrases, people. We have stories about how to rebel against cliches. We have other labels for similar phenomena with different connotations such as “proverbs”, “sayings”, “prejudices”, “stereotypes”. We have whole disciplines studying these like cognitive psychology, social psychology, political science, anthropology, etc. And these give us a whole lot of cliches about cliches. But also a lot of knowledge about cliches.
The first one is exactly what this post started with. We have to use cliches. It’s who we are. But they are not inherently bad.
Next, we challenge cliches as much as we use them. (Well, probably not as much, but a lot.) This is something I’m trying to show through my research into frame negotiation. We look at concepts (the compressed and labelled nebulas of knowledge) and decompress them in different ways and repackage them and compress them into new concepts. (Sometimes this is called conceptual integration or blending.) But we don’t just do this in our minds. We do it in public and during conversations about these concepts.
We also know that unwillingness to challenge a cliche can have bad outcomes. Cliches about certain things (like people or types of people) are called stereotypes and particular types of stereotypes are called prejudices. And prejudices by the right people against the right kind of other people can lead to discrimination and death. Prejudice, stereotype, cliche. They are the same kind of thing presented to us from different angles and at different magnitudes.
So it is worth our while to harness the cliche negotiation that goes on all the time anyway and see if we can use it for something good. That’s not a certain outcome. The medieval inquisitions, anti-heresies, racism, slavery, genocides are all outcomes of negotiations of concepts. We mostly only know about their outcomes, but a closer look will always reveal dissent and a lot of soul searching. And at the heart of such soul searching is always a decompression and recompression of concepts (conceptual integration). But it does not work in a vacuum. Actual physical or economic power plays a role. Conformance to communal expectations. Personal likes or dislikes. All of these play a role.
So what chance have we of getting the right outcome? Do we even know what is the right outcome?
Well, we have to pick the right cliches says Abhijit Banerjee. Or we have to frame concepts better says George Lakoff. “We have to shine the light of truth” says a cliche.
“If you give people content, they’re willing to move away from their prejudices. Prejudices are partly sustained by the fact that the political system does not deliver much content,” says Banerjee. Prejudices matter in high-stakes contexts. And they are the result of us not challenging the right cliches in the right ways at the right time.
It is pretty clear from research in social psychology, from Milgram on, that giving people information will challenge their cliches but only as long as you also give them sanction to challenge the cliches. Information on its own does not seem to always be enough. Sometimes the contrary information even seems to reinforce the cliche (as we’re learning from newspaper corrections).
This is important. You can’t fool all of the people all of the time. Even if you can fool a lot of them a lot of the time. Information is a part of it. Social sanction of using that information in certain ways is another part of it. And this is not the province of the “elites”. People with the education and sufficient amount of idle time to worry about such things. There’s ample research to show that everybody is capable of this and engaged in these types of conceptual activities. More education seems to vaguely correlate with less prejudice but it’s not clear why. I also doubt that it does in a very straightforward and inevitable way (a post for another day). It’s more than likely that we’ve only studied the prejudices the educated people don’t like and therefore don’t have as much.
Banerjee draws the following conclusion from his work uncovering cliches in development economics:
“Often we’re putting too much weight on a bunch of cliches. And if we actually look into what’s going on, it’s often much more mundane things. Things where people just start with the wrong presumption, design the wrong programme, they come in with their own ideology, they keep things going because there’s inertia, they don’t actually look at the facts and design programmes in ignorance. Bad things happen not because somebody wants bad things to happen but because we don’t do our homework. We don’t think hard enough. We’re not open minded enough.”
It sounds very appealing. But it’s also as if he forgot the point he started out with. We need cliches. And we need to remember that out of every challenge to a cliche arises a new cliche. We cannot go around the world with our concepts all decompressed and flapping about. We’d literally go crazy. So every challenge to a cliche (just like every paradigm-shifting Kuhnian revolution) is only the beginning phase of the formation of another cliche, stereotype, prejudice or paradigm (a process well described in Orwell’s Animal Farm, which has in turn become a cliche of its own). It’s fun listening to Freakonomics radio to see how all the cliche busting has come to establish a new orthodoxy. The constant reminders that if you see things as an economist, you see things other people don’t see. Kind of a new witchcraft. That’s not to say that Freakonomics hasn’t provided insights to challenge established wisdoms (a term arising from another perspective on a cliche). It most certainly has. But it hasn’t replaced them with “a truth”, just another necessary compression of a conceptual and social complex. During the moments of decompression and recompression, we have opportunities for change, however brief. And sometimes it’s just a memory of those events that lets us change later. It took over 150 years for us to remember the French revolution and make of it what we now think of as democracy with a tradition stretching back to ancient Athens. Another cliche. The best of a bad lot of systems. A whopper of a cliche.
So we need to be careful. Information is essential when there is none. A lot of prejudice (like some racism) is born simply of not having enough information. But soon there’s plenty of information to go around. Too much, in fact, for any one individual to sort through. So we resort to complex cliches. And the cliches we choose have to do with our in-groups, chains of trust, etc. as much as they do with some sort of rational deliberation. So we’re back where we started.
Humanity is engaged in a neverending struggle of personal and public negotiation of concepts. We’re taking them apart and putting them back together. Parts of the process happen in fractions of a second in individual minds, parts of the process happen over days, weeks, months, years and decades in conversations, pronouncements, articles, books, polemics, laws, public debates and even at the polling booths. Sometimes it looks like nothing is happening and sometimes it looks like everything is happening at once. But it’s always there.
So what does this have to do with metaphors, and can a metaphor hacker do anything about it? Well, metaphors are part of the process. The same process that lets us make sense of metaphors lets us negotiate cliches. Cliches are like little analogies, and it takes a lot of cognition to understand them, take them apart and make them anew. I suspect most of that cognition (and it’s always discursive, social cognition) is very much the same that we know relatively well from metaphor studies.
But can we do anything about it? Can we hack into these processes? Yes and no. People have always hacked collective processes by inserting images and stories and definitions into the public debate through advertising, following talking points or even things like push-polling. And people have manipulated individuals through social pressure, persuasion and lies. But none of it ever seems to have a lasting effect. There are simply too many conceptual purchase points to lean on in any cliche to ever achieve complete uniformity forever (even in the most totalitarian regimes). In an election, you may only need to sway the outcome by a few percent. If you have military or some other power, you only need to get willing compliance from a sufficient number of people to keep the rest in line through a mix of conceptual and non-conceptual means. Some such social contracts last for centuries, others for decades and some for barely years or months. In such cases, even knowing how these processes work is not much better than knowing how continental drift works. You can use it to your advantage but you can’t really change it. You can and should engage in the process and try to nudge the conversation in a certain way. But there is no conceptual template for success.
But as individuals, we can certainly do quite a bit to monitor our own cognition (in the broadest sense). But we need to choose our battles carefully. Use cliches but monitor what they are doing for us. And challenge the right ones at the right time. It requires a certain amount of disciplined attention and disciplined conversation.
Most of us are all too happy to repeat clichés about education to motivate ourselves and others to engage in this liminal ritual of mass socialization. One such phrase is “knowledge is power”. It is used to refer not just to education, of course, but to all sorts of intelligence gathering from business to politics. We tell many stories of how knowing something made the difference, from knowing a way of making something to work to knowing a secret only the hero or villain is privy to. But in education, in particular, it is not just knowing that matters to our tribe but also the display of knowing.
The more I look at education, the more I wonder how much of what is in the curriculum is about signaling rather than a true need for knowledge. Signaling has been used in the economics of education to indicate the complex value of a university degree, but I think it goes much deeper. We make displays of knowledge through the curriculum to make the knowledge itself more valuable. Curriculum designers in all areas engage in complex dances to show how the content maps onto the real world. I have called this education voodoo, other people have spoken of cargo cult education, and yet others have talked about pseudo teaching. I wrote about pseudo teaching when I looked at Niall Ferguson‘s amusing, I think I called it cute, lesson plan of his own greatness. But pseudo teaching only describes the activities performed by teachers in the mistaken belief that they have real educational value. When pseudo teaching relies on pseudo content, I think we can talk more generally about “pseudo education”.
We were all pseudo-educated on a number of subjects. History, science, philosophy, etc. In history lessons, the most cherished “truths” of our past are distorted on a daily basis (see Lies My Teacher Told Me). From biology, we get to remember misinformation about the theory of evolution, starting from attributing the very concept of evolution to Darwin or reducing natural selection to the nonsense of survival of the fittest. We may remember the names of a few philosophers, but it rarely takes us any further than knowing winks at a Monty Python sketch or the mouthing of unexamined platitudes like “the unexamined life is not worth living.”
That in itself is not a problem. Society, despite the omnipresent alarmist tropes, is coping quite well with pseudo-education. Perhaps, it even requires it to function because “it can’t handle the truth”. The problem is that we then judge people on how well they are able to replicate or respond to these pseudo-educated signals. Sometimes, these judgments are just a matter of petty prejudice but sometimes they could have an impact on somebody’s livelihood (and perhaps the former inevitably leads to the latter in aggregate).
Note: I have looked at some history and biology textbooks and they often offer a less distorted portrayal of their subject than what seems to be the outcome in public consciousness. Having the right curriculum and associated materials, then, doesn’t seem to be sufficient to avoid pseudo-education (if indeed avoiding it is desirable).
The one area where pseudo-education has received a lot of attention is language. Since time immemorial, our ways of speaking have served to identify us with one group or layer of society or another. And from its very beginning, education sought to play a role in slotting its charges into the linguistic groups with as high a prestige as possible (or rather, as appropriate). And even today, in academic literature we see references to the educated speaker as an analytic category. This is not a bad thing. Education correlates with exposure to certain types of language and engagement with certain kinds of speech communities. It is not the only way to achieve linguistic competence in those areas, but it is the main way for the majority. But becoming an “educated speaker” in this sense is mostly a by-product of education. A sufficient amount of the curriculum and classroom instruction is aimed in this direction to count for something, but most students acquire the in-group ways of speaking without explicit instruction (disadvantaging those who would benefit from it). But probably a more salient output of language education is supposed knowledge about language (as opposed to knowledge of language).
Here students are expected not only to speak appropriately but also to know how this “appropriate language” works. And here is where most of what happens in schools can be called pseudo-education. Most teachers don’t really have any grasp of how language works (even those who took intro to linguistics classes). They are certainly not aware of the more complex issues around the social variability of language or its pragmatic dimension. But even in simple matters like grammar and usage, they are utterly clueless. This is often blamed on past deficiencies of the educational system where “grammar was not taught” to an entire generation. But judging by the behavior of previous generations who received ample instruction in grammar, that is not the problem. Their teachers were just as inept at teaching about language as they are today. They might have been better at labeling parts of speech and their tenses, but that’s about it. It is possible that in the days of yore, people complaining about the use of the passive were actually more able to identify passive constructions in a text, but it didn’t make that complaint any less inaccurate (Orwell made a right fool of himself when it turned out that he used more passives than is the norm in English despite kvetching about their evil).
No matter what the content of the school curriculum and method of instruction, “educated” people go about spouting nonsense when it comes to language. This nonsense seems to have its origins in the half-remembered injunctions of their grade school teacher. And because the prime complainers are likely either to have been “good at language” or to have envied the teacher’s approbation of those who were described as being “good at language”, what we end up with in the typical language maven is a mishmash of linguistic prejudice and an unjustified feeling of smug superiority. Every little linguistic label that a person can remember is then trotted out as a badge of honor, regardless of how good that person is at deploying it.
And those who spout the loudest get a reputation of being the “grammar experts”, and everybody else who preemptively admits that they are “not good at grammar” defers to them and lets themselves be bullied by them. The most recent case of such bullying was a screed by an otherwise intelligent person in a position of power who decided that he would no longer hire people with bad grammar.
The trouble with pseudo educated blowhards complaining about grammar, like +Kyle Wien, is that they have no idea what grammar is. 90% of the things they complain about are spelling problems. The rest is a mishmash of half-remembered objections from their grade school teacher who got them from some other grammar bigot who doesn’t know their tense from their time.
I’ve got news for you Kyle! People who spell they’re, there and their interchangeably know the grammar of their use. They just don’t differentiate their spelling. It’s called homophony, dude, and English is chock full of it. Look it up. If your desire rose as you smelled a rose, you encountered homophony. Homophony is a ubiquitous feature of all languages. And equally all languages have some high profile homophones that cause trouble for spelling Nazis but almost never for actual understanding. Why? Because when you speak, there is no spelling.
Kyle thinks that what he calls “good grammar” is indicative of attention to detail. Hard to say, since he, presumably always perfectly “grammatical”, failed to pay attention to the little detail of the difference between spelling and grammar. The other problem is that I’m sure Kyle and his ilk would be hard pressed to list more than a dozen or so of these “problems”. So his “attention to detail” should really be read as “attention to the few details of language use that annoy Kyle Wien”. He claims to have noticed a correlation in his practice, but forgive me if I don’t take his word for it. Once you have developed a prejudice, no matter how outlandish, it is dead easy to find plenty of evidence in its support (while not paying attention to any of the details that disconfirm it).
Sure, there’s something to the argument that spelling mistakes in a news item, a blog post or a business newsletter will have an impact on its credibility. But hardly enough to worry about. Not that many people will notice, and those who do will have plenty of other cues to make a better informed judgment. If a misplaced apostrophe is enough to sway them, then either they’re not convinced of the credibility of the source in the first place, or they’re not worth keeping as a customer. Journalists and bloggers engage in so many more significant pursuits that damage their credibility, like fatuous and unresearched claims about grammar, that the odd it’s/its slip-up can hardly make much more than (or is it then) a dent.
Note: I replaced ‘half-wit’ in the original with ‘blowhard’ because I don’t actually believe that Kyle Wien is a half-wit. He may not even be a blowhard. But, you can be a perfectly intelligent person, nice to kittens and beloved by co-workers, and be a blowhard when it comes to grammar. I also fixed a few typos, because I pay attention to detail.
My issue is not that I believe that linguistic purism and prescriptivism are in some way anomalous. In fact, I believe the exact opposite. I think, following a brilliant insight by my linguistics teacher, that we need to think of these phenomena as integral to our linguistic competence. I doubt that there is a linguistic community of any size above 3 that doesn’t enact some form of explicit linguistic normativity.
But when pseudo-knowledge about language is used as an instrument of power, I think it is right to call out the perpetrators and try to shame them. Sure, linguists laugh at them, but I think we all need to follow the example of the Language Log and expose all such examples to public ridicule. Countermand the power.
Post Script: I have been similarly critical of the field of Critical Discourse Analysis, which, while based on an accurate insight about language and power, in my view goes on to abuse the power that stems from knowledge about language to clobber its opponents. My conclusion has been that if you want to study how people speak, study it for its own sake, and if you want to engage with the politics of what they say, do that on political terms, not on linguistic ones. That doesn’t mean that you shouldn’t point out when you feel somebody is using language in manipulative or misleading ways, but you don’t need the apparatus of a whole academic discipline to do it; if you do, you’re doing something wrong.
George Lakoff is known for saying that “metaphors can kill” and he’s not wrong. But in that, metaphors are no different from any other language. The simple amoral imperative “Kill!” will do the job just as nicely. Nor are metaphors any better or worse at obfuscating than any other type of language. But they are very good at their primary purpose which is making complex connections between domains.
Metaphors can create very powerful connections where none existed before. And we are just as often seduced by that power as inspired to new heights of creativity. We don’t really have a choice. Metaphoric thinking is in our DNA (itself a metaphor). But just like with DNA, context is important, and sometimes metaphors work for us and sometimes they work against us. The more powerful they are, the more cautious we need to be. When faced with powerful metaphors we should always look for alternatives and we should also explore the limits of the metaphors and the connections they make. We need to keep in mind that nothing IS anything else but everything is LIKE something else.
I was reminded of this recently when listening to an LSE lecture by the journalist Andrew Blum, who was promoting his book “Tubes: Behind the Scenes at the Internet”. The lecture was reasonably interesting, although he tried to make the subject seem more important than it perhaps was through judicious reliance on the imagery of covertness.
But I was particularly struck by the last example where he compared Facebook’s and Google’s data centers in Colorado. Facebook’s center was open and architecturally modern, being part of the local community. Facebook also shared the designs of the center with the global community and was happy to show Blum around. Google’s center was closed, ugly and opaque. Google viewed its design as part of their competitive advantage and most importantly didn’t let Blum past the parking lot.
From this Blum drew far-reaching conclusions, which he amplified by leaving them implied. If architecture is an indication of intent, he suggested, then we should question what Google’s ugly hidden intent is as opposed to Facebook’s shining open intent. When answering a question, he later cited prosecutors in New England and in Germany as corroborating evidence of people who are also frustrated with Google’s secrecy, only reluctantly admitting that Google had invited him to speak at their Authors Speak program.
Now, Blum may have a point regarding the secrecy surrounding that data center of Google’s: there’s probably no great competitive advantage in its design and no abiding security reason for not showing its insides to a journalist. But using this comparison to imply anything about the nature of Facebook or Google is just an example of typical journalistic dishonesty. Blum is not lying to us. He is lying to himself. I’m sure he convinced himself that since he was so clever to come up with such a beautiful analogy, it must be true.
The problem is that pretty much anything can be seen through multiple analogies. And any one of those analogies can be stopped at any point or be stretched out far and wide. A good metaphor hacker will always seek out an alternative analogy and explore the limits of the domain mapping of the dominant one. In this case, not much work is needed to uncover what a pompous idiot Blum is being.
First, does this “facilities reflect attitudes” analogy extend to what we know about the two companies in other spheres? And here the answer is no. Google lets you liberate your data; Facebook does not. Google lets you opt out of many more things than Facebook. Google sponsors many open source projects; Facebook is more closed source (even though they do contribute heavily to some key projects). When Facebook acquires a company, they often just shut it down, leaving customers high and dry; Google closes projects too, but they have repeatedly released the source code of those projects to the community. Now, is Google the perfect open company? Hardly. But Facebook, with its interest in keeping people in its silo, can never be held up as a shining beacon of openness. It might be at best a draw (if we can even make a comparison), but I’d certainly give Google far more credit in the openness department. The analogy simply fails when exposed to current knowledge. I can only assume that Blum was so happy to have come up with it that he wilfully ignored the evidence.
But can we come up with other analogies? Yes. How about the fact that some of the worst dictatorships in history have come up with grand idealistic architectural designs. Designs and structures that spoke of freedom, beautiful futures and the love of the people for their leaders. Given that we know all that, why would we ever trust a design to indicate anything about the body that commissioned it? Again, I can only assume that Blum was seduced by his own cleverness.
Any honest exploration of this metaphor would lead us to abandoning it. It was not wrong to raise it, in the world of cognition, anything is fair. But having looked at both the limits of the metaphor and alternative domain mappings, it’s pretty obvious that it’s not doing us any good. It supports a biased political agenda.
The moral of the story is don’t trust one-metaphor journalists (and most journalists are one-metaphor drones). They might have some of the facts right but they’re almost certainly leaving out large amounts of relevant information in pursuit of their own figurative hedonism.
Disclaimer: I have to admit, I’m rather a fan of Google’s approach to many things and a user of many of their services. However, I have also been critical of Google on many occasions and have come to be wary of many of their practices. I don’t mind Facebook the company, but I hate that it is becoming the new AOL. Nevertheless, I use many of Facebook’s services. So there.