I was listening to the discussion of the latest BioShock game on the latest TWiT podcast when I realized that I am in fact game illiterate. I am hearing these stories and descriptions of experiences but I know I can’t access them directly without a major investment in knowledge and skill acquisition. So, this is what people with no or limited literacy must feel like in highly literacy-dependent environments. I really want to access the stories in the way they are told by the game. But I know I can’t. I will stumble, be discouraged, not have a very good time before I can have a good time. I will be a struggling gamer, in the same way that there are struggling readers.
Note: When I say game, I mean mostly a non-casual computer game such as BioShock or World of Warcraft or SimCity.
What would game literacy entail?
What would I need to learn in order to access gaming? Literacy is composed of a multiplicity of knowledge areas and skills. I already have some of these but not enough. Roughly, I will need to get at the following:
Underlying cognitive skills (For reading: transforming the sight of letters into sounds or corresponding mental representations. For gaming: transforming desired effects on screen into actions on a controller)
Complex perceptual and productive fluency (Ability to employ the cognitive skills automatically in response to changing stimuli in multiple contexts).
Context-based or task-based strategies (Ability to direct the underlying skills towards solving particular problems in particular contexts. For reading: skim text, look things up in the index, skip acknowledgements, discover the type of text, adopt a reading speed appropriate to the type of text, etc. For gaming: discover the type of game, gather appropriate achievements, find hidden bonuses, etc.)
Metacognitive skills and strategies (Learn the terminology and concepts necessary for further learning and for choosing strategies appropriate to particular aims.)
Socialization skills and strategies (Learn to use the skills and knowledge to make connections with other people and to exploit those connections to acquire further skill and knowledge, as well as the social capital deriving from them.)
Is literacy a suitable metaphor for gaming? Matches and mismatches!
With any metaphor it is worth exploring the mapping to see if there are sufficient similarities. In this case, I’ll look at the following areas for matches and mismatches:
Both reading/writing (I will continue to use reading for both unless I need to stress the difference) and gaming require skill that can become automatic and that takes time to acquire. People can be both “better” and “worse” at gaming and reading.
But reading is a more universal skill (although not as universal as most people think) whereas gaming skills are more genre based.
The skill at gaming can be more easily measured by game achievement. Quality-of-reading measures are a bit more tenuous because speed, fluency and accuracy are all contextual measures. However, even game achievement is somewhat relative, as in recommendations to play on normal or easy difficulty just to experience the game.
In this, gaming is more like reading than, for instance, listening to music or watching a film, which do not require any overt acquisition of skill. See Dara O’Briain’s funny bit on the differences between gaming and reading. Of course, when he says “you cannot be bad at watching a film”, we could quibble that much preparation is required for watching some films, but such training does not involve the development of underlying cognitive skills (assuming the same cultural and linguistic environment). Things are a bit more complex for some special kinds of listening to music. Nevertheless, people do talk about “media literacy”.
Reading is mostly a uni-modal experience. It is possible to read out loud or to read while listening but ultimately reading is its own mode. Reading has an equivalent in writing that, though not a mirror-image skill, draws on broadly the same underlying skills.
Gaming is a profoundly multimodal experience combining vision, sound, movement (and often reading, as well). There are even efforts to involve smell. Gaming does not have a clear expressive counterpart. The obvious expressive equivalent to writing would be game design but that clearly requires a different level of skill. However, gaming allows complex levels of self-expression within the process of game play which does not have an equivalent in reading but is not completely dissimilar to creative writing (like fanfiction).
Reading is a neutral to high-status activity. The act itself is neutral but status can derive from content. Writing (expressive rather than utilitarian) is a high-status activity.
Gaming is a low-status to neutral activity. No loss of status derives from an inability to game or from not gaming, in the way that it does for reading. Some games have less questionable status, and many games are played by people who derive high status from outside of gaming. There are emerging status-sanction systems around gaming, but none have penetrated outside gaming yet.
Reading and writing are significant drivers of wider socialization. They are necessary to perform basic social functions and often represent gateways into important social contexts.
Gaming is only required to socialize in gaming groups. However, this socialization may become more highly desirable over time.
Writing is used to encode a wide variety of content – from shopping lists to nuclear plant manuals to fiction.
Games, on the other hand, encode a much narrower range of content: primarily narrative and primarily fictional, although non-narrative and non-fictional games do exist. There are also expository games but, so far, none that would afford easy storage of non-game information without using writing.
Reading and writing are very general purpose activities.
Gaming on the other hand has a limited range of purposes: enjoyment, learning, socialization with friends, achieving status in a wider community. You won’t see a bus stop with a game instead of a timetable (although some of these require puzzle solving skills to decipher).
Why may game literacy be important?
As we saw, there are many differences between gaming and reading and writing. Nevertheless, they are similar enough that the metaphor of ‘game literacy’ makes sense provided we see its limitations.
Why is it important? There will be a growing generational and populational divide of gamers and non-gamers. At the moment this is not very important in terms of opportunities and status but it could easily change within a generation.
Not being able to play a game may exclude people from social groups in the same way that not playing golf or not engaging in some other locally sanctioned pursuit does (e.g. World of Warcraft).
But most importantly, as new generations of game creators explore the expressive boundaries of games (new narratives, new ways of story telling), not being able to play games may result in significant social exclusion. In the same way that a quick summary of what’s in a novel is inferior to reading the novel, films based on games will be pale imitations of playing the games.
I can easily imagine a future where the major narratives of the day will be expressed in games. In the same way that TV serials have supplanted novels as the primary medium of sharing crucial societal narratives, games can take over in the future. The inner life novel took about 150 years to mature and reigned supreme for about as long while drama and film functioned as its accompaniment. The TV serial is now solidifying its position and is about where the novel was in the 1850s. Gaming may take another couple of decades to get to a stage where it is ready as a format to take over. And maybe nothing like that will happen. But if I had a child, I’d certainly encourage them to play computer games as part of ensuring a more secure future.
Part of this post was incorporated into an article I wrote with Brian Kelly and Alistair McNaught that appeared in the December issue of Ariadne. As part of that work and feedback from Alistair and Brian, I expanded the final section from a simple list of bullets into a more detailed research programme. You can see it below and in the article.
Background: From spelling reform to plain language
The idea that if we could only improve how we communicate, there would be less misunderstanding among people is as old as the hills.
Historically, this notion has been expressed through things like school reform, spelling reform, publication of communication manuals, etc.
The most radical expression of the desire for better understanding is the invention of a whole new artificial language intended to provide a universal language for humanity. This has a long tradition, but it gained most traction towards the end of the nineteenth century with the introduction and relative success of Esperanto.
But artificial languages have been a failure as a vehicle of global understanding. Instead, in about the last 50 years, the movement for plain English has been taking the place of constructed languages as something on which people pinned their hopes for clear communication.
Most recently, there have been proposals suggesting that “simple” language should become a part of a standard for accessibility of web pages alongside other accessibility standards issued by the W3C standards body. http://www.w3.org/WAI/RD/2012/easy-to-read/Overview. This post was triggered by this latest development.
Problem 1: Plain language vs. linguistics
The problem is that most proponents of plain language (like so many would-be reformers of human communication) seem to be ignorant of the wider context in which language functions. There is much that has been revealed by linguistic research in the last century or so, and in particular since the 1960s, that we need to pay attention to (to avoid confusion, this does not refer to the work of Noam Chomsky and his followers but rather to the work of people like William Labov, Michael Halliday, and many others).
Languages are not a simple matter of grammar. Any proposal for content accessibility must consider what is known about language from the fields of pragmatics, sociolinguistics, and cognitive linguistics. These are the key aspects of what we know about language collected from across many fields of linguistic inquiry:
Every sentence communicates much more than just its basic content (propositional meaning). We also communicate our desires and beliefs (e.g. “It’s cold here” may communicate “Close the window”, and “John denied that he cheats on his taxes” communicates that somebody accused John of cheating on his taxes). Similarly, choosing a particular form of speech, like slang or jargon, communicates belonging to a community of practice.
The understanding of any utterance is always dependent on a complex network of knowledge about language, about the world, and about the context of the utterance. “China denied involvement.” requires an understanding of the context in which countries operate, as well as metonymy, on top of the grammar and vocabulary. Consider the knowledge we need to possess to interpret “In 1939, the world exploded.” vs. “In Star Wars, a world exploded.”
There is no such thing as purely literal language. All language is to some degree figurative. “Between 3 and 4pm.”, “Out of sight”, “In deep trouble”, “An argument flared up”, “Deliver a service”, “You are my rock”, “Access for all” are all figurative to different degrees.
We all speak more than one variety of our language: formal/informal, school/friends/family, written/spoken, etc. Each of these varieties has its own code. For instance, “she wanted to learn” vs. “her desire to learn” demonstrates a common difference between spoken and written English, where written English often builds clauses around nouns.
We constantly switch between different codes (sometimes even within a single utterance).
Bilingualism is the norm in language knowledge, not the exception. About half the world’s population regularly speaks more than one language but everybody is “bi-lingual” in the sense that they deal with multiple codes.
The “standard” or “correct” English is just one of the many dialects, not English itself.
Language prescription and requirements of language purity (incl. simple language) are as much political statements as linguistic or cognitive ones. All language use is related to power relationships.
Simplified languages develop their own complexities if used by a real community through a process known as creolization. (This process is well described for pidgins but not as well for artificial languages.)
All languages are full of redundancy, polysemy and homonymy. It is the context and our knowledge of what is to be expected that makes it easy to figure out the right meaning.
There is no straightforward relationship between grammatical features and language obfuscation or lack of clarity (e.g. it is just as easy to hide things using the active voice as the passive, or a Subject-Verb-Object sentence as an Object-Subject-Verb one).
It is difficult to call any one feature of a language universally simple (for instance, SVO word order or no morphology) because many other languages use what we call complex as the default without any increase in difficulty for the native speakers (e.g. use of verb prefixes/particles in English and German)
Language is not really organized into sentences but into texts. Texts have internal organization to hang together formally (John likes coffee. He likes it a lot.) and semantically (As I said about John. He likes coffee.) Texts also relate to external contexts (cross reference) and their situations. This relationship is both implicit and explicit in the text. The shorter the text, the more context it needs for interpretation. For instance, if all we see is “He likes it.” written on a piece of paper, we do not have enough context to interpret the meaning.
Language is not used uniformly. Some parts of language are used more frequently than others. But frequency alone is not the whole story: some parts of language are used more frequently together than others. The frequent co-occurrence of some words with other words is called “collocation”. This means that when we say “bread and …”, we can predict that the next word will be “butter”. You can check this with a linguistic tool like a corpus, or even by using Google’s predictions in the search box. Some words are so strongly collocated with other words that their meaning is “tinged” by those other words (this is called semantic prosody). For example, “set in” has a negative connotation because of its collocation with words like “rot”.
All language is idiomatic to some degree. You cannot determine the meaning of all sentences just by understanding the meanings of all their component parts and the rules for putting them together. And vice versa, you cannot just take all the words and rules in a language, apply them and get meaningful sentences. Consider “I will not put the picture up with John.” and “I will not put up the picture with John.” and “I will not put up John.” and “I will not put up with John.”
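The collocation point above can be illustrated with a toy bigram counter (a minimal sketch: the miniature corpus and the whitespace tokenization are invented for illustration; real corpus tools are far more sophisticated):

```python
from collections import Counter

def top_collocates(text, node_word):
    """Count which words most often directly follow a given node word."""
    words = text.lower().split()
    pairs = Counter(
        words[i + 1] for i in range(len(words) - 1) if words[i] == node_word
    )
    return pairs.most_common(3)

# A tiny invented "corpus"; full stops are kept as separate tokens.
corpus = ("bread and butter . strong tea . bread and butter . "
          "bread and jam . strong coffee . strong tea .")

print(top_collocates(corpus, "and"))     # [('butter', 2), ('jam', 1)]
print(top_collocates(corpus, "strong"))  # [('tea', 2), ('coffee', 1)]
```

Even on this toy scale, “bread and …” predicts “butter”; on a real corpus the same counting, refined with association measures, is how collocation dictionaries are built.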
It seems to me that most plain language advocates do not take most of these factors into account.
Some examples from the “How to write in plain English” guide: http://www.plainenglish.co.uk/files/howto.pdf.
Try to call the reader ‘you’, even if the reader is only one of many people you are talking about generally. If this feels wrong at first, remember that you wouldn’t use words like ‘the applicant’ and ‘the supplier’ if you were speaking to somebody sitting across a desk from you. [emphasis mine]
This example misses the point about the contextuality of language. The part in bold is the very crux of the problem. It is natural to use a different code (or register) with someone we’re speaking to in person and in a written communication. This is partly a result of convention and partly the result of the different demands of writing and speaking when it comes to the ability to point to what we’re speaking about. The reason it feels wrong to the writer is that it breaks the convention of writing. That is not to say that this couldn’t become the new convention. But the argument misses the point.
Do you want your letters to sound active or passive − crisp and professional or stuffy and bureaucratic?
Using the passive voice and sounding passive are not one and the same thing. This is an example of polysemy. The word “passive” has two meanings in English. One technical (the passive voice) and one colloquial (“he’s too passive”). The booklet recommends that “The mine had to be closed by the authority. (Passive)” should be replaced with “The authority had to close the mine. (Active)” But they ignore the fact that word order also contributes to the information structure of the sentence. The passive sentence introduces the “mine” sooner and thus makes it clear that the sentence is about the mine and not the local authority. In this case, the “active” construction made the point of the sentence more difficult to understand.
The same is true of nominalization. Another thing recommended against by the Plain English campaign: “The implementation of the method has been done by a team.” is not conveying the same type of information as “A team has implemented the method.”
The point is that this advice ignores the context as well as the audience. Using “you” instead of “customers” in “Customers have the right to appeal” may or may not be simpler depending on the reader. For somebody used to the conventions of written official English, it may actually take longer to process. But for someone who does not deal with written English very often, it will be easier. But there is nothing intrinsically easier about it.
Likewise for the use of jargon. The campaign gives as its first example of unduly complicated English:
High-quality learning environments are a necessary precondition for facilitation and enhancement of the ongoing learning process.
And suggests that we use this instead:
Children need good schools if they are to learn properly.
This may be appropriate when it comes to public debate but within the professional context of, say, policy communication, these 2 sentences are not actually equivalent. There are more “learning environments” than just schools and the “learning process” is not the same as having learned something. It is also possible that the former sentence appeared as part of a larger context that would have made the distinction even clearer but the page does not give a reference and a Google search only shows pages using it as an example of complex English. http://www.plainenglish.co.uk/examples.html
The How to write in plain English document does not mention coherence of the text at all, except indirectly when it recommends the use of lists. This is good advice but even one of their examples has issues. They suggest that the following is a good example of a list:
Kevin needed to take:
• a penknife
• some string
• a pad of paper; and
• a pen.
And on first glance it is, but lists are not just neutral replacements for sentences. They are a genre in their own right, used for specific purposes (Michael Hoey called them “text colonies”). Let’s compare the list above to the sentence below.
Kevin needed to take a penknife, some string, a pad of paper and a pen.
Obviously they are two different kinds of text used in different contexts for different purposes and this would impinge on our understanding. The list implies instruction, and a level of importance. It is suitable to an official document, for example something sent before a child goes to camp. But it is not suitable to a personal letter or even a letter from the camp saying “All Kevin needed to take was a penknife, some string, a pad of paper and a pen. He should not have brought a laptop.” To be fair, the guide says to use lists “where appropriate”, but does not mention what that means.
The issue is further muddled by the “grammar quiz” on the Plain English website: http://www.plainenglish.co.uk/quiz.html. It is a hodgepodge of irrelevant trivia about language (not just grammar) that has nothing to do with simple writing. Although the Plain English guide gets credit for explicitly not endorsing petty peeves like not ending a sentence with a preposition, they obviously have peeves of their own.
Problem 2: Definition of simple is not simple
There is no clear definition of what constitutes simple and easy to understand language.
There are a number of intuitions and assumptions that seem to be made when both experts and lay people talk about language:
Shorter is simpler (fewer syllables, characters or sounds per word, fewer words per sentence, fewer sentences per paragraph)
More direct is simpler (X did Y to Z is simpler than Y was done to Z by X)
Less variety is simpler (fewer different words)
More familiar is simpler
These assumptions were used to create various measures of “readability” going back to the 1940s. They consisted of several variables:
Length of words (in syllables or in characters)
Length of sentences
Frequency of words used (both internally and with respect to their general frequency)
Intuitively, these are not bad measures, but they are only proxies for the assumptions. They say nothing about the context in which the text appears or the appropriateness of the choice of subject matter. They say nothing about the internal cohesion and coherence of the text. In short, they say nothing about the “quality” of the text.
The same thing is not always simple in all contexts, and sometimes what is too simple can be hard. We could see that in the example of lists above. Having a list instead of a sentence does not always make things simpler, because a list is doing other work besides just providing a list of items.
Another example I always think about is the idea of “semantic primes” by Anna Wierzbicka. These are concepts like DO, BECAUSE, BAD believed to be universal to all languages. There are only about 60 of them (the exact number keeps changing as the research evolves). These were compiled into a Natural Semantic Metalanguage with the idea of being able to break complex concepts into them. Whether you think this is a good idea or not (I don’t but I think the research group working on this are doing good work in surveying the world’s languages) you will have to agree that the resulting descriptions are not simple. For example, this is the Natural Semantic Metalanguage description of “anger”:
anger (English): when X thinks of Y, X thinks something like this: “this person did something bad; I don’t want this; I would want to do something bad to this person”; because of this, X feels something bad
This seems like a fairly complicated way of describing anger and even if it could be universally understood, it would also be very difficult to learn to do this. And could we then capture the distinction between this and say “seething rage”? Also, it is clear that there is a lot more going on than combining 60 basic concepts. You’d have to learn a lot of rules and strategies before you could do this well.
Problem 3: Automatic measures of readability are easily gamed
There are about half a dozen automated readability measures currently used by software and web services to calculate how easy or difficult it is to read a text.
I am not an expert in readability but I have no reason to doubt the references in Wikipedia claiming that they correlate fairly well overall with text comprehension. But as always correlation only tells half the story and, as we know, it is not causation.
It is not at all clear that the texts identified as simple based on measures like number of words per sentence or numbers of letters per word are actually simple because of the measures. It is entirely possible that those measures are a consequence of other factors that contribute to simplicity, like more careful word choice, empathy with an audience, etc.
This may not matter if all we are interested in is identifying simple texts, as you can do with an advanced Google search. But it does matter if we want to use these measures to teach people how to write simpler texts. Because if we just tell them to use fewer words per sentence and shorter words, we may not get texts that are actually easier to understand for the intended readership.
And if we require this as a criterion of page accessibility, we open the system to gaming in the same way Google’s algorithms are gamed, but without any of the sophistication. You can reduce the complexity of any text on any of these scores simply by replacing all commas with full stops. Or even by randomly inserting full stops every five words and putting spaces in the middle of words. The algorithms are not smart enough to catch that.
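The comma trick is easy to demonstrate. Below is a rough sketch of a Flesch-Kincaid-style grade calculation (the vowel-group syllable counter is a naive approximation, not the official algorithm): swapping commas for full stops halves the average sentence length and so lowers the computed grade, while the words themselves stay exactly the same.

```python
import re

def flesch_kincaid_grade(text):
    """Approximate Flesch-Kincaid grade level.

    Sentences are split on .!? and syllables are estimated by counting
    vowel groups, which is a crude but common approximation.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

original = ("The committee considered the proposal carefully, "
            "and it rejected the proposal after a long discussion.")
gamed = original.replace(",", ".")  # same words, mechanically "simpler"

print(round(flesch_kincaid_grade(original), 1))  # one long sentence: higher grade
print(round(flesch_kincaid_grade(gamed), 1))     # two fragments: lower grade
```

The “gamed” version scores as easier on the formula even though nothing about its comprehensibility has improved, which is exactly the weakness of using such scores as a compliance criterion.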
Also, while these measures may be fairly reliable in aggregate, they don’t give us a very good picture of any one individual text. I took a blog post from the Campaign for Plain English site http://www.plainenglish.co.uk/news/chrissies-comments.html and ran the text through several websites that calculate ease of reading scores:
The different tests ranged by up to 5 years in their estimate of the length of formal education required to understand the text from 10.43 to 15.57. Read-able.com even went as far as providing an average, coming up with 12. Well that doesn’t seem very reliable.
I preferred http://textalyser.net, which just gives you the facts about the text and doesn’t try to summarize them. The same goes for the Plain English campaign’s own little app that you can download from their website http://www.plainenglish.co.uk/drivel-defence.html.
By any of these measures, the text wasn’t very simple or plain at all. The longest sentence had 66 words because it contained a complex embedded clause (something not even mentioned in the Plain English guide). The average sentence length was 28 words.
The Plain English app also suggested 7 alternative words from their “alternative dictionary” but 5 of those were misses because context is not considered (e.g. “a sad state” cannot be replaced by “a sad say”). The 2 acceptable suggestions were to edit out one “really” and replace one “retain” with “keep”. Neither of which would have improved the readability of the text given its overall complexity.
In short, the accepted measures of simple texts are not very useful for creating simple texts or for training people in creating them.
See also http://en.wikipedia.org/w/index.php?title=Readability&oldid=508236326#Using_the_readability_formulas.
See also this interesting study examining the effects for L2 instruction: http://www.eric.ed.gov/PDFS/EJ926371.pdf.
Problem 4: When simple becomes a new dialect: A thought experiment
But let’s consider what would happen if we did agree on simple English as the universal standard for accessibility and did actually manage to convince people to use it? In short, it would become its own dialect. It would acquire ways of describing things it was not designed to describe. It would acquire its own jargon and ways of obfuscation. There would arise a small industry of experts teaching you how to say what you want to say or don’t want to say in this new simple language.
Let’s take a look at Globish, a simplified English intended for international communication, which I have seen suggested as worth a look for accessibility. Globish has a restricted grammar and a vocabulary of 1500 words. They helpfully provide a tool for highlighting words they call “not compatible with Globish”; it flagged a number of words in the blog post from the Plain English website.
Globish seems to be based on not much more than guesswork. It has words like “colony” and “rubber” but not words like “temperature” or “notebook”, “appoint” but not “appointment”, “govern” but not “government”. Yet both the derived forms “appointment” and “government” are more frequent (and intuitively more useful) than the root forms. There is a chapter in the eBook called “1500 Basic Globish Words Father 5000”, so I assume there are some rules for derivation, but the derived forms more often than not have very “idiomatic” meanings. For example, “appointment” in its most common use does not make any sense if we look at the core meanings of “appoint” and the suffix “-ment”. Consider also the difference between “govern” and “government” vs. “enjoy” and “enjoyment”.
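A highlighter of this kind amounts to little more than a set-membership check. The sketch below uses a tiny invented word list (not the actual Globish vocabulary) to show why derived forms like “government” get flagged even when the root is allowed:

```python
# Hypothetical mini word list standing in for Globish's 1500 words.
ALLOWED = {"the", "a", "is", "not", "we", "need", "to", "govern", "appoint",
           "people", "good", "school", "schools", "children", "learn"}

def flag_incompatible(text):
    """Return words not found in the allowed vocabulary.

    No derivation rules: a root being allowed says nothing about
    its derived forms, which is exactly the problem described above.
    """
    words = [w.strip(".,").lower() for w in text.split()]
    return [w for w in words if w and w not in ALLOWED]

print(flag_incompatible("We need to appoint a good government."))
# ['government'] -- flagged even though 'govern' is allowed
```

Unless the tool also encodes derivation rules (and the idiomatic meanings of derived forms), it will keep flagging the very words that are most frequent and most useful.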
Yet, Globish supposedly avoids idioms, cultural references, etc. Namely all the things that make language useful. The founder says:
Globish is correct English without the English culture. It is English that is just a tool and not a whole way of life.
Leaving aside the dubious notion of correctness, this would make Globish a very limited tool indeed. But luckily for Globish it’s not true. Why have the word “colony” if not to reflect cultural preference? If it became widely used by a community of speakers, the first thing to happen to Globish would be a blossoming of idioms going hand in hand with the emergence of dialects, jargons and registers.
That is not to say that something like Globish could not be a useful tool for English learners along the way to greater mastery. But it does little for universal accessibility.
Also, we need to ask ourselves what it would be like from the perspective of the users creating these simplified texts. They would essentially have to learn a whole new code, a sort of a dialect. And as with any second-language learning, some would do it better than others. Some would become the “simple nazis”. Some would get jobs teaching others “how to” speak simple. It is not natural for us to speak simply and “plainly” as defined in the context of accessibility.
There is some experience with the use of controlled languages in technical writing and in writing for second language acquisition. This can be done but the universe of subjects and/or the group of people creating these texts is always extremely limited. Increasing the number of people creating simple texts to pretty much everybody would increase the difficulty of implementation exponentially. And given the poor state of automatic tools for analysis of “simplicity”, quality control is pretty much out of reach.
But would even one code/dialect suffice? Do we need one for technical writing, government documents, company filings? Limiting the vocabulary to 1500 words is not a bad idea but, as we saw with Globish, it might need to be a different 1500 words for each area.
Why is language inaccessible?
Does that mean we should give up on trying to make communication more accessible? Definitely not. The same processes that I described as standing in the way of a universal simple language are also at the root of why so much language is inaccessible. Part of how language works is to create group cohesion, which includes keeping some people out. A lot of “complicated” language is complicated because the nature of the subject requires it, and a lot of complicated language is complicated because the writer is not very good at expressing themselves.
But just as much complicated language is complicated because the writer wants to signal belonging to a group that uses that kind of language. The famous Sokal Hoax provided an example of that. Even instructions on university websites on how to write essays are an example. You will find university websites recommending something like “To write like an academic, write in the third person.” This is nonsense; research shows that academics write as much in the first person as in the third. But it also makes the job of the people marking essays easier. They don’t have to focus on ideas; they just go by superficial impression. Personally, I think this is a scandal and a complete failure of higher education to live up to its own hype, but that’s a story for another time.
How to achieve simple communication?
So what can we do to avoid making our texts too inaccessible?
The first thing the accessibility community will need to do is acknowledge that simple language is its own form of expression. It is not the natural state we arrive at when we strip all the artifice out of our communication. Learning how to communicate simply requires effort and practice from all individuals.
To help with the effort, most people will need some guides. And despite what I said about the shortcomings of the Plain English Guide above, it’s not a bad place to start. But it would need to be expanded. Here are some examples of things that are missing:
Consider the audience: What sounds right in an investor brochure won’t sound right in a letter to a customer
Increase cohesion and coherence by highlighting relationships
Highlight the text structure with headings
Say new things first
Consider splitting out subordinate clauses into separate sentences if your sentence gets too long
Leave all the background and things you normally start your texts with for the end
But it will also require a changed direction for research.
Further research needs for simple language
I don’t pretend to have a complete overview of the research being done in this area but my superficial impression is that it focuses far too much on comprehension at the level of clause and sentence. Further research will be necessary to understand comprehension at the level of text.
There is need for further research in:
How collocability influences understanding
Specific ways in which cohesion and coherence impact understanding
The benefits and downsides of elegant variation for comprehension
The benefits and downsides of figurative language for comprehension by people with different cognitive profiles
The processes of code switching during writing and reading
How new conventions emerge in the use of simple language
The uses of simple language for political purposes including obfuscation
How collocability influences understanding: How word and phrase frequency influences understanding, with particular focus on collocations. The assumption behind software like TextHelp is that this is very important. Much research is available on the importance of these patterns from corpus linguistics, but we need to know the practical implications of these properties of language both for text creators and consumers. For instance, should text creators use measures of collocability to judge ease of reading and comprehension in addition to, or instead of, arbitrary measures like sentence and word lengths?
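To make concrete what I mean by “arbitrary measures like sentence and word lengths”, here is a minimal sketch of the kind of crude readability proxy that most plain-language tools compute today. The function name and thresholds are my own illustration, not any tool’s actual formula; a collocability-based measure would instead have to consult corpus frequency data, which is deliberately left out here.

```python
import re

def crude_readability(text):
    """A deliberately crude readability proxy: average sentence length
    (in words) and average word length (in characters). These are the
    'arbitrary measures' that collocability-based approaches would
    complement or replace."""
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    return avg_sentence_len, avg_word_len

simple = "The cat sat. The dog ran. They played."
dense = ("Notwithstanding considerable methodological heterogeneity, "
         "comprehension outcomes demonstrably deteriorate.")

print(crude_readability(simple))  # short sentences, short words
print(crude_readability(dense))   # one long sentence, long words
```

The obvious limitation, which is the point of the research question above: both measures would score “kick the bucket” as easier than “deceased”, even though the former is only easy for readers who already know the collocation.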
Specific ways in which cohesion and coherence affect understanding: We need to find the strategies challenged readers use to make sense of larger chunks of text. How they understand the text as a whole, how they find specific information in the text, how they link individual portions of the text to the whole, and how they infer overall meaning from the significance of the components. We then need to see what text creators can do to assist with these processes. We already have some intuitive tools: bullets, highlighting of important passages, text insets, text structure, etc. But we do not know how they help people with different difficulties and whether they can ever become a hindrance rather than a benefit.
The benefits and downsides of elegant variation for comprehension, enjoyment and memorability: We know that repetition is an important tool for establishing the cohesion of text in English. We also know that repetition is discouraged for stylistic reasons. Repetition is also known to be a feature of immature narratives (children under the age of about 10), with more “sophisticated” ways of constructing texts developing later. However, it is also more powerful in spoken narrative (e.g. folk stories). Research is needed on how challenged readers process repetition and elegant variation, and on what text creators can do to support any naturally developing metatextual strategies.
The benefits and downsides of figurative language for comprehension by people with different cognitive profiles: There is basic research available from which we know that some cognitive deficits lead to reduced understanding of non-literal language. There is also ample research showing how crucial figurative language is to language in general. However, there seems to be little understanding of how and why different deficits lead to problems with processing figurative language, or of what kinds of figurative language cause difficulties. It is also not clear what types of figurative language are particularly helpful for challenged readers with different cognitive profiles. Work is needed on a typology of figurative language and a typology of figurative language deficits.
The processes of code switching during writing and reading: Written and spoken English employ very different codes, in some ways even reminiscent of different language types. This includes much more than just the choice of words. Sentence structure, clauses, grammatical constructions: all of these differ. However, this difference is not just a consequence of the medium of writing. Different genres (styles) within a language may be just as different from one another as writing and speaking. Each of these comes with a special code (or subset of grammar and vocabulary). Few native speakers ever completely acquire the full range of codes available in a language with extensive literacy practices, particularly a language that spans as many speech communities as English. But all speakers acquire several different codes and can switch between them. However, many challenged writers and readers struggle because they cannot switch between the spoken codes they are exposed to through daily interactions and the written codes to which they are often denied access because of a print impairment. Another way of describing this is multiple literacies. How do challenged readers and writers deal with acquiring written codes, and how do they deal with code switching?
How do new conventions emerge in the use of simple language? Using and accessing simple language can only be successful if it becomes a separate literacy practice. However, the dissemination and embedding of such practices into daily usage are often accompanied by the establishment of new codes and conventions of communication. These codes can then become typical of a genre of documents. An example of this is Biblish. A sentence such as “Fred spoke unto Joan and Karen” is easily identified as referring to a mode of expression associated with the translation of the Bible. Will similar conventions develop around “plain English”, and how? At the same time, it is clear that within each genre or code, there are speakers and writers who can express themselves more clearly than others. Research is needed to establish whether there are common characteristics to be found in these “clear” texts, as opposed to those inherent in “difficult” texts, across genres.
All in all, introducing simple language as a universal accessibility standard is still far from a realistic prospect. My intuitive impression, based on documents I receive from different bureaucracies, is that the “plain English” campaign has made a difference in how many official documents are presented. But a lot more research (ethnographic as well as cognitive) is necessary before we properly understand the process and its impact. Can’t wait to read it all.
This is an insight at the very heart of linguistics. Every language act we are a part of is an act of categorization. There are no simple unitary terms in language. When I say, “pull up a chair”, I’m in fact referring to a vast category of objects we refer to as chairs. These objects are not identified by any one set of features like four legs, certain height, certain ways of using them. There is no minimal set of features that will describe all chairs and just chairs, and not other kinds of objects like tables or pillows. But chairs don’t stand on their own. They are related to other concepts or categories (and they are really one and the same). There are subcategories like stools and armchairs, containing categories like furniture or man-made objects, and related categories like houses and shops selling objects. All of these categories are linked in our minds through a complex set of images, stories and definitions. But these don’t just live in our minds. They also appear in our conversations. So we say things like, “What kind of a chair would you like to buy?”, “That’s not a real chair”, “What’s the point of a chair if you can’t sit in it?”, “Stools are not chairs.”, “It’s more of a couch than a chair.”, “Sofas are really just big plush chairs, when it comes down to it.”, “I’m using a box for a chair.”, “Don’t sit on a table, it’s not a chair.” Etc. Categories are not stable and uniform across all people, so we continue having conversations about them. There are experts on chairs, historians of chairs, chair craftsmen, people who buy chairs for a living, people who study the word ‘chair’, and people who casually use chairs. Some more than others. And their sets of stories and images and definitions related to chairs will be slightly different. And they will have had different types of conversations with different people about chairs. All of that goes into a simple word like “chair”. It’s really very simple as long as we accept the complexity for what it is.
Philosophers of language have made a right mess of things because they tried to find simplicity where none exists. And, what’s more, where none is necessary.
But let’s get back to cliches. Cliches are types of categories. Or better still, cliches are categories with a particular type of social salience. Like categories, cliches are sets of images, stories and definitions compressed into seemingly simpler concepts that are labelled by some sort of an expression. Most prominently, it is a linguistic expression like a word or a phrase. But it could just as easily be a way of talking, a way of dressing, a way of being. What makes us likely to call something a cliche is a socially negotiated sense of awareness that the compression is somewhat unsatisfactory and that it is overused by people in lieu of an insight into the phenomenon we are describing. But the power of the cliche is in its ability to help us make sense of a complex or challenging phenomenon. The sense-making, though, serves our own cognitive and emotional peace. Just because we can make sense of something doesn’t mean we get the right end of the stick. And we know that, which is why we are wary of cliches. But challenging every cliche would be like challenging ourselves every time we looked at a chair. It can’t be done. Which is why we have social and linguistic coping mechanisms like “I know it’s such a cliche.” “It’s a cliche but in a way it’s true.” “Just because it’s a cliche doesn’t mean it isn’t true.” Just try Googling: “it’s a cliche *”
So we are at once locked into cliches and struggling to overcome them. Like “chair” the concept of a “cliche” as we use it is not simple. We use it to label words, phrases, people. We have stories about how to rebel against cliches. We have other labels for similar phenomena with different connotations such as “proverbs”, “sayings”, “prejudices”, “stereotypes”. We have whole disciplines studying these like cognitive psychology, social psychology, political science, anthropology, etc. And these give us a whole lot of cliches about cliches. But also a lot of knowledge about cliches.
The first one is exactly what this post started with. We have to use cliches. It’s who we are. But they are not inherently bad.
Next, we challenge cliches as much as we use them. (Well, probably not as much, but a lot.) This is something I’m trying to show through my research into frame negotiation. We look at concepts (the compressed and labelled nebulas of knowledge) and decompress them in different ways and repackage them and compress them into new concepts. (Sometimes this is called conceptual integration or blending.) But we don’t just do this in our minds. We do it in public and during conversations about these concepts.
We also know that unwillingness to challenge a cliche can have bad outcomes. Cliches about certain things (like people or types of people) are called stereotypes and particular types of stereotypes are called prejudices. And prejudices by the right people against the right kind of other people can lead to discrimination and death. Prejudice, stereotype, cliche. They are the same kind of thing presented to us from different angles and at different magnitudes.
So it is worth our while to harness the cliche negotiation that goes on all the time anyway and see if we can use it for something good. That’s not a certain outcome. The medieval inquisitions, anti-heresy campaigns, racism, slavery and genocides are all outcomes of negotiations of concepts. We mostly only know about their outcomes, but a closer look will always reveal dissent and a lot of soul searching. And at the heart of such soul searching is always a decompression and recompression of concepts (conceptual integration). But it does not work in a vacuum. Actual physical or economic power plays a role. Conformance to communal expectations. Personal likes or dislikes. All of these play a role.
So what chance have we of getting the right outcome? Do we even know what is the right outcome?
Well, we have to pick the right cliches says Abhijit Banerjee. Or we have to frame concepts better says George Lakoff. “We have to shine the light of truth” says a cliche.
“If you give people content, they’re willing to move away from their prejudices. Prejudices are partly sustained by the fact that the political system does not deliver much content.” says Banerjee. Prejudices matter in high stakes contexts. And they are the result of us not challenging the right cliches in the right ways at the right time.
It is pretty clear from research in social psychology, from Milgram on, that giving people information will challenge their cliches, but only as long as you also give them sanction to challenge those cliches. Information on its own does not always seem to be enough. Sometimes the contrary information even seems to reinforce the cliche (as we’re learning from newspaper corrections).
This is important. You can’t fool all of the people all of the time. Even if you can fool a lot of them a lot of the time. Information is a part of it. Social sanction of using that information in certain ways is another part of it. And this is not the province of the “elites”: people with the education and a sufficient amount of idle time to worry about such things. There’s ample research to show that everybody is capable of this and engaged in these types of conceptual activities. More education seems to vaguely correlate with less prejudice, but it’s not clear why. I also doubt that it does in a very straightforward and inevitable way (a post for another day). It’s more than likely that we’ve only studied the prejudices the educated people don’t like and therefore don’t have as much.
Banerjee draws the following conclusion from his work uncovering cliches in development economics:
“Often we’re putting too much weight on a bunch of cliches. And if we actually look into what’s going on, it’s often much more mundane things. Things where people just start with the wrong presumption, design the wrong programme, they come in with their own ideology, they keep things going because there’s inertia, they don’t actually look at the facts and design programmes in ignorance. Bad things happen not because somebody wants bad things to happen but because we don’t do our homework. We don’t think hard enough. We’re not open minded enough.”
It sounds very appealing. But it’s also as if he forgot the point he started out with. We need cliches. And we need to remember that out of every challenge to a cliche arises a new cliche. We cannot go around the world with our concepts all decompressed and flapping about. We’d literally go crazy. So every challenge to a cliche (just like every paradigm-shifting Kuhnian revolution) is only the beginning phase of the formation of another cliche, stereotype, prejudice or paradigm (a process well described in Orwell’s Animal Farm, which itself has in turn become a cliche of its own). It’s fun listening to Freakonomics radio to see how all the cliche busting has come to establish a new orthodoxy. The constant reminders that if you see things as an economist, you see things other people don’t see. Kind of a new witchcraft. That’s not to say that Freakonomics hasn’t provided insights to challenge established wisdoms (a term arising from another perspective on a cliche). It most certainly has. But it hasn’t replaced them with “a truth”, just another necessary compression of a conceptual and social complex. During the moments of decompression and recompression, we have opportunities for change, however brief. And sometimes it’s just a memory of those events that lets us change later. It took over 150 years for us to remember the French revolution and make of it what we now think of as democracy with a tradition stretching back to ancient Athens. Another cliche. The best of a bad lot of systems. A whopper of a cliche.
So we need to be careful. Information is essential when there is none. A lot of prejudice (like some racism) is born simply of not having enough information. But soon there’s plenty of information to go around. Too much, in fact, for any one individual to sort through. So we resort to complex cliches. And the cliches we choose have to do with our in-groups, chains of trust, etc. as much as they do with some sort of rational deliberation. So we’re back where we started.
Humanity is engaged in a neverending struggle of personal and public negotiation of concepts. We’re taking them apart and putting them back together. Parts of the process happen in fractions of a second in individual minds, parts of the process happen over days, weeks, months, years and decades in conversations, pronouncements, articles, books, polemics, laws, public debates and even at the polling booths. Sometimes it looks like nothing is happening and sometimes it looks like everything is happening at once. But it’s always there.
So what does this have to do with metaphors, and can a metaphor hacker do anything about it? Well, metaphors are part of the process. The same process that lets us make sense of metaphors lets us negotiate cliches. Cliches are like little analogies, and it takes a lot of cognition to understand them, take them apart and make them anew. I suspect most of that cognition (and it’s always discursive, social cognition) is very much the same that we know relatively well from metaphor studies.
But can we do anything about it? Can we hack into these processes? Yes and no. People have always hacked collective processes by inserting images and stories and definitions into the public debate through advertising, following talking points or even things like push polling. And people have manipulated individuals through social pressure, persuasion and lies. But none of it ever seems to have a lasting effect. There are simply too many conceptual purchase points to lean on in any cliche to ever achieve complete uniformity forever (even in the most totalitarian regimes). In an election, you may only need to sway the outcome by a few percent. If you have military or some other power, you only need to get willing compliance from a sufficient number of people to keep the rest in line through a mix of conceptual and non-conceptual means. Some such social contracts last for centuries, others for decades and some for barely years or months. In such cases, even knowing how these processes work is not much better than knowing how continental drift works. You can use it to your advantage but you can’t really change it. You can and should engage in the process and try to nudge the conversation in a certain way. But there is no conceptual template for success.
But as individuals, we can certainly do quite a bit to monitor our own cognition (in the broadest sense). But we need to choose our battles carefully. Use cliches, but monitor what they are doing for us. And challenge the right ones at the right time. It requires a certain amount of disciplined attention and disciplined conversation.
George Lakoff is known for saying that “metaphors can kill” and he’s not wrong. But in that, metaphors are no different from any other language. The simple amoral imperative “Kill!” will do the job just as nicely. Nor are metaphors any better or worse at obfuscating than any other type of language. But they are very good at their primary purpose which is making complex connections between domains.
Metaphors can create very powerful connections where none existed before. And we are just as often seduced by that power as inspired to new heights of creativity. We don’t really have a choice. Metaphoric thinking is in our DNA (itself a metaphor). But just like with DNA, context is important, and sometimes metaphors work for us and sometimes they work against us. The more powerful they are, the more cautious we need to be. When faced with powerful metaphors we should always look for alternatives and we should also explore the limits of the metaphors and the connections they make. We need to keep in mind that nothing IS anything else but everything is LIKE something else.
I was reminded of this recently when listening to an LSE lecture by the journalist Andrew Blum, who was promoting his book “Tubes: A Journey to the Center of the Internet”. The lecture was reasonably interesting, although he tried to make the subject seem more important than it perhaps was through judicious reliance on the imagery of covertness.
But I was particularly struck by the last example where he compared Facebook’s and Google’s data centers in Colorado. Facebook’s center was open and architecturally modern, being part of the local community. Facebook also shared the designs of the center with the global community and was happy to show Blum around. Google’s center was closed, ugly and opaque. Google viewed its design as part of their competitive advantage and most importantly didn’t let Blum past the parking lot.
From this Blum drew far-reaching conclusions, which he amplified by merely implying them. If architecture is an indication of intent, he implied, then we should question what Google’s ugly hidden intent is as opposed to Facebook’s shining open intent. When answering a question, he later cited prosecutors in New England and in Germany as corroborating evidence of others who are also frustrated with Google’s secrecy. Only reluctantly did he admit that Google invited him to speak at their Authors Speak program.
Now, Blum may have a point regarding Google’s secrecy surrounding that data center: there’s probably no great competitive advantage in its design and no abiding security reason for not showing its insides to a journalist. But using this comparison to imply anything about the nature of Facebook or Google is just an example of typical journalistic dishonesty. Blum is not lying to us. He is lying to himself. I’m sure he convinced himself that since he was so clever to come up with such a beautiful analogy, it must be true.
The problem is that pretty much anything can be seen through multiple analogies. And any one of those analogies can be stopped at any point or be stretched out far and wide. A good metaphor hacker will always seek out an alternative analogy and explore the limits of the domain mapping of the dominant one. In this case, not much work is needed to uncover what a pompous idiot Blum is being.
First, does this “facilities reflect attitudes” analogy extend to what we know about the two companies in other spheres? And here the answer is no. Google lets you liberate your data, Facebook does not. Google lets you opt out of many more things than Facebook. Google sponsors many open source projects, Facebook is more closed source (even though they do contribute heavily to some key projects). When Facebook acquires a company, they often just shut it down, leaving customers high and dry; Google closes projects too, but they have repeatedly released the source code of these projects to the community. Now, is Google the perfect open company? Hardly. But Facebook, with its interest in keeping people in its silo, can never be held up as a shining beacon of openness. It might be at best a draw (if we can even make a comparison), but I’d certainly give Google far more credit in the openness department. The analogy simply fails when exposed to current knowledge. I can only assume that Blum was so happy to have come up with it that he wilfully ignored the evidence.
But can we come up with other analogies? Yes. How about the fact that the worst dictatorships in history have come up with grand idealistic architectural designs? Designs and structures that spoke of freedom, beautiful futures and the love of the people for their leaders. Given that we know all that, why would we ever trust a design to indicate anything about the body that commissioned it? Again, I can only assume that Blum was seduced by his own cleverness.
Any honest exploration of this metaphor would lead us to abandoning it. It was not wrong to raise it, in the world of cognition, anything is fair. But having looked at both the limits of the metaphor and alternative domain mappings, it’s pretty obvious that it’s not doing us any good. It supports a biased political agenda.
The moral of the story is don’t trust one-metaphor journalists (and most journalists are one-metaphor drones). They might have some of the facts right but they’re almost certainly leaving out large amounts of relevant information in pursuit of their own figurative hedonism.
Disclaimer: I have to admit, I’m rather a fan of Google’s approach to many things and a user of many of their services. However, I have also been critical of Google on many occasions and have come to be wary of many of their practices. I don’t mind Facebook the company, but I hate that it is becoming the new AOL. Nevertheless, I use many of Facebook’s services. So there.
Given how long I’ve been studying metaphor (at least since 1991 when I first encountered Lakoff and Johnson’s work, and full on since 2000), it is amazing that I have yet to attend a RaAM (Researching and Applying Metaphor) conference. I had an abstract accepted to one of the previous RaAMs but couldn’t go. This time, I’ve had an abstract accepted and wild horses won’t keep me away (even though it is expensive, since no one is sponsoring my going). The abstract that got accepted is about a small piece of research that I conceived back in 2004, wrote up in a blog post in 2006, was supposed to talk about at a conference in 2011 and finally will get to present this July at RaAM 9.
Unlike most academic endeavours, this one needs to come with a parental warning. The material described contains profane sexual and scatological imagery as employed for the purposes of satire. But I think it makes a really important point that I don’t see people making as a matter of course in the metaphor studies literature. I argue that metaphors can be incredibly powerful and seductive, but that they are also routinely deconstructed and negotiated. They are not something that just happens to us. They are opportunistic and random just as much as they are systematic and fundamental to our cognition. Much of current metaphor studies is still fighting the battle against the view that metaphors are mere peripheral adornments on the literal. And to be sure, the “just a metaphor” label is still to be seen in popular discourse today. But it has now been over 40 years since this fight was intellectually won. So we need to focus on the broader questions about the complexities of the role metaphor plays in social cognition. And my contribution to RaAM hopes to point in that direction.
Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction
In this contribution, I propose to investigate metaphoric cognition as an extended discursive and social phenomenon that is the cornerstone of our ability to understand and negotiate issues of public importance. Since Lakoff and Johnson’s groundbreaking study, research in linguistics, cognitive psychology, as well as discourse studies, has tended to view metaphor as a purely unconscious phenomenon that is outside of a normal speaker’s ability to manipulate. However important this view of metaphor and cognition may be, it tells only a part of the story. Equally important, and surprisingly frequent, is the ability of metaphor to enter into collective (meta)cognition through extended discourse in which acceptable cross-domain mappings are negotiated.
I will provide an example of a particular metaphorical framing and the metacognitive framework it engendered that made it possible for extended discourse to develop. This metaphor, a leitmotif in the ‘Team America’ film satire, mapped the physiological and phraseological properties of taboo body parts onto geopolitical issues of war in such a way that made it possible for participants in the subsequent discourse to simultaneously be seduced by the power of the metaphor and empowered to engage in talk about cognition, text and context as exemplified by statements such as: “It sounds quite weird out of context, but the paragraph about dicks, pussies and assholes was the craziest analogy I’ve ever heard, mainly because it actually made sense.” I will demonstrate how this example is typical rather than aberrant of metaphor in discourse and discuss the limits of a purely cognitive approach to metaphor.
Following Talmy, I will argue that significant elements of metaphoric cognition are available to speakers’ introspection and thus available for public negotiation. However, this does not prevent the sheer power of the metaphor from having an impact on both cognition and discourse. I will argue that, as a result of the strength of this one metaphor, the balance of the discussion of this highly satirical film was shifted in support of military interventionism, as evidenced by the subsequent popular commentary. By mapping political and gender concepts onto the basic structural inevitability of human sexual anatomy, reinforced by idiomatic mappings between taboo words and moral concepts, the metaphor makes further negotiation virtually impossible within its own parameters. Thus an individual speaker may be simultaneously seduced and empowered by a particular metaphorical mapping.
I thought the moral compass metaphor had mostly left current political discourse, but it just cropped up, this time pointing from left to right, as David Plouffe accused Mitt Romney of not having one. As I keep repeating, George Lakoff once said “Metaphors can kill.” And Moral Compass has certainly done its share of homicidal damage. Justifying wars, interventions and unflinching black and white resolve in the face of gray circumstances. It is a killer metaphor!
But with a bit of hacking it is not difficult to subvert it for good. Yes, I’m ready to declare, it is good to have a moral compass, providing you make it more like a “true compass” to quote Plouffe. The problem is, as I learned during my sailing courses many years ago, that most people don’t understand how compasses actually work.
First, compasses don’t point to “the North”. They point to what is called the Magnetic North, which is quite a ways from the actual North. So if you want to go to the North Pole, you need to make a lot of adjustments to where you’re going. Sound familiar? Kind of like following your convictions. They often lead you to places that are different from where you’re saying you’re going.
Second, the Magnetic North keeps moving. Yes, the difference of where it is in relation to the “actual” North changes from year to year. So you have to adjust your directions to the times you live in! Sound familiar? Being a devout Christian led people to different actions in the 1500s, 1700s and 1900s. Although we keep saying the “North” of faith is the same, the actual needle showing us where to go points in different directions.
Third, the compass is easily distracted by its immediate physical context. The distribution of metals on a boat, for instance, will throw it completely off course. So it needs to be calibrated differently for each individual installation. Sound familiar?
And it’s also worth noting that the south magnetic pole is not the exact polar opposite of the north magnetic pole!
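For the nautically curious, the corrections described above can be sketched in a few lines of code. This is a minimal illustration, not navigation advice; the numbers are hypothetical, since real variation comes from charts and real deviation from calibrating each individual installation:

```python
# Converting what the needle says into a true bearing, using the
# standard rule "compass to true, add east": easterly errors are
# added, westerly errors are subtracted.

def true_bearing(compass: float, variation: float, deviation: float) -> float:
    """Convert a compass bearing (degrees) to a true bearing.

    variation: magnetic variation at your location (+ east, - west)
    deviation: this boat's error from surrounding metal (+ east, - west)
    """
    return (compass + variation + deviation) % 360

# Hypothetical example: the needle reads 090, local variation is
# 4 degrees west, this boat's deviation on that heading is 2 degrees east.
print(true_bearing(90, -4, 2))  # 88.0
```

Note that the same compass reading yields a different true course depending on where you are, when you are there, and what boat you are standing on – which is exactly the point of the metaphor.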
So what can we learn from this newly hacked moral compass metaphor? Well, nothing terribly useful. Our real ethics and other commitments are always determined by the times we live in and the contexts we find ourselves in. And often we say we’re going one way when we’re actually heading another way. But we already knew that. Everybody knows that! Even people who say it’s not true (the anti-relativists) know that! They are not complete idiots after all, they just pretend to be to avoid making painful decisions!
As so often, we can tell two stories about the change of views by politicians or anybody else.
The first story is of the feckless, unprincipled opportunist who changes her views following the prevailing winds – supported by the image of the weather vane. This person is contrasted with the stalwart who sticks to her principles even as all around her are swayed by the moral fad of the day.
The second story is of the wise (often old) and sage person who can change her views even as all around her persist in their simplistic fundamentalism. Here we have the image of the tree that bends in the wind but does not break. This person is contrasted with the bigot or the zealot who cannot budge even an inch from old prejudices even though they are obviously wrong.
So which story is true of Romney, Bush and Obama? We don’t know. In every instance, we have to fine-tune our image and very carefully watch out for our tendency to just tell the negative stories about people we don’t like. Whether one story is more convincing than the other depends, like the needle of a compass, on a variety of obvious and non-obvious contexts. The stories are here to guide us and help us make decisions. But we must strive to keep them all in mind at the same time. And this can be painful. They are a little bit like the Necker Cube, the Vase, the Duck/Rabbit or similar optical illusions. We know both readings are there, but while we’re perceiving the one, it is easy to forget the other. So it is uncomfortable. And also not a little bit inconvenient.
Is this kind of metaphorical nuance something we can expect in a time of political competition? It can be. Despite their bad rep, politicians and the media surrounding them can be nuanced. But often they’re not. So instead of nuance, when somebody next trots out the moral compass, whether you like them or not, say: “Oh, you mean you’re a liar, then?” and tell them about the Magnetic North!
Post Script: Actually, Plouffe didn’t say Romney didn’t have a moral compass. He said that “you need to have a true compass, and you’ve got to be willing to make tough calls.” So maybe he was talking about a compass adjusted for surrounding metals, whose needle’s advice we follow only after taking into account as much of our current context as we can. A “true compass”, like a true friend! I agree with most of the “old Romney” and none of the “new Romney”. And I loved the old Obama created in the image of our unspoken liberal utopias, and I am lukewarm on the actual Obama (as I knew I would be), who steers a course pointing to the North of reality rather than the one magnetically attracting our needles. So if it’s that kind of moral compass after all, we’re in good hands!
The online media are drawn to any “scientific” claims about the internet’s influence on our nature as humans like flies to a pile of excrement. Sadly, in this metaphor, only the flies are figurative. The latest heap of manure to instigate an annoying buzzing cloud of commentary from Wired to the BBC, is an article by Sparrow et al. claiming to show that because there are search engines, we don’t have to remember as much as before. Most notably, if we know that some information can be easily retrieved, we remember where it can be obtained instead of what it is. As Wired reports:
Sparrow et al. designed a bunch of experiments that “prove” this claim. Thus, they holler, the internet changes how we remember. This was echoed by literally hundreds of headlines (Google claims over 600). Here’s a sample:
Google Effect: Changes to our Brains
Search engines like Google ‘changing the way human memory works’
Search engines change how memory works
Google Is Destroying Our Memories, Scientists Find
It pays to remember, search engines ruining our memory
Google rewiring the way we remember, study says
Has Google turned your memory to mush?
Internet search engines cause poor memory, scientists claim
Many of these headlines are from “reputable” publications and they can be summarized by three words: Bullshit! Bullshit! Bullshit!
All they had to do was read this part of the abstract to understand that nothing like the stuff they blather about follows from the study:
“The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”
But they were not helped by Science, whose publication of these results is more of a self-serving stunt than a serious attempt to further knowledge. The title of the original, “Google Effects on Memory”, is all but designed to generate bat-shit crazy headlines. If the title were to be truthful, it would have to be “Google has no more effects on memory than a paper and pen or a friend.” Even the Science Magazine report on the study, entitled “Searching for the Google Effect on People’s Memory”, concludes it “doesn’t directly answer that question”. In fact, it says that the internet is filling in the role of “transactive memory”, which describes the fact that we rely on people to remember things. Which means it has no impact on our brains at all. It just magnifies the transactive effects already in existence.
Any claim about a special effect of Google on any kind of memory can be debunked in two words: “shopping list”! All Sparrow et al. discovered is that the internet has changed us as much as a stub of a pencil and a grubby piece of paper. Meaning, not at all.
Some headlines cottoned onto this but they are few and far between:
Search Engine “Memory Loss” in Fact a Sign of Smart Behavior
Search Engines Ruin Our Memory, Make Us Smarter
Sparrow, the lead author of the study, when interviewed by Wired said: “It’s very similar to how we use people in our lives. The internet is really just an interface with a lot of other people.”
In other words, what the internet has changed is the deployment of strategies we have always used for managing our memory. Sparrow et al. use an old term, “transactive memory”, to describe this, but that’s needed only because cognitive psychology’s view of memory has been so limited. Memory is not just about storage and retrieval. Like all of our cognition it is tied in with a whole host of strategies (sometimes lumped together under the heading of metacognition) that have a transactive and social dimension.
Let’s take the example of mobile phones. About 15 years ago I remembered about 4 phone numbers (home, work, mother, friend). Now, I remember none. They’re all stored in my mobile phone. What’s happened? I changed my strategy of information storage and retrieval because of the technology available. Was this a radical change? No, because I needed a lot more numbers, so I carried a little booklet where I kept the rest of them. So the mobile phone freed my memory of four items. Big deal! Arguably, these four items have a huge potential transactional impact. They mean that if my mobile phone is dead or lost, I cannot call the people most likely to be able to offer assistance. But how often does that happen? It hasn’t happened to me yet in an emergency. And in a non-emergency I have many backups. At any rate, in the past I was much more likely to be caught up in an emergency where I couldn’t find a phone at all. So the change has been fairly minimal.
But what’s more interesting here is that I didn’t realize this change until I heard someone talk about it. This transactional change is a topic of conversation; it is not just something that happened, it is part of common knowledge (and common knowledge only becomes common because a lot of people talk about it to a lot of other people).
The same goes for the claims made by Sparrow et al. The strategies used to maintain access to factual knowledge have changed with the technology available. But they didn’t just change; people have been talking about this change. “Just Google it” is a part of daily conversation. In his podcasts, Leo Laporte has often talked about how his approach to remembering has changed with the advent of Google. One early strategy for remembering websites was the bookmark. People made significant collections of bookmarks to sites, not unlike the Rolodexes of old. But about five or so years ago Google got a lot better at finding the right sites, so bookmarks went away. Personally, now that Chrome syncs bookmarks so seamlessly, I’ve started using them again. Wow: a change in technology facilitates a change in strategy. Sparrow et al. should do some research on this. Since I started using the Internet when it was still spelled with a capital “I”, I still remember the URLs of key websites: Google, Yahoo, Gmail, BBC, my own, etc. But there are people who don’t. I’ve personally observed the highly intelligent CEO of a company type “Google” into the Bing search box in Internet Explorer. And a few years ago, after a university changed its portal, I was soothing an angry professor who complained that the link to Google was removed from the page that automatically came up on his computer. He had never learned how to get there any other way because he didn’t need to. Now he does. We acquire strategies to deal with information as we need them.
Before the availability of writing (and even after), there were a whole lot of strategies available for remembering things. These were part of the cultural conversation as much as the internet is today. Some of these strategies became part of religious ritual. Some of them are a part of a trickster’s arsenal – Joshua Foer describes some in Moonwalking with Einstein. Many are part of the art of “study skills” many people talk about.
All that Sparrow et al. demonstrated is that when some of these strategies are deployed, it has a small effect on recall. This is not a bad thing to know but it’s not in any way worth over 600 media stories about it. To evaluate this much reduced claim we would have to carefully examine their research methodology and the underlying assumptions which is not what this post is about. It’s about the mistreatment of research results by media hungry academics.
I don’t begrudge Sparrow et al. their 15 minutes of fame. I’m not surprised, dismayed or even disappointed at the link greed of the journalistic herd who fell over themselves to uncritically spread this research fluff. Also, many of the actual articles were quite balanced about the findings, but how much of that balance will survive the effect of a mendaciously bombastic headline is anybody’s guess. So all in all it’s business as usual in the popularization of “science” in the “media”.
I have forgotten a lot of things in my life. Names, faces, numbers, words, facts, events, quotes. Just like for anyone, forgetting is as much a part of my life as remembering. Memories short and long come and go. But only twice in my life have I seen a good memory die under suspicious circumstances.
Both of these were good reliable everyday memories as much declarative as non-declarative. And both died unexpectedly without warning and without reprieve. They were both memories of entry codes but I retrieved both in different ways. Both were highly contextualised but each in a different way.
The first time was almost 20 years ago (in 1993) and it was the PIN for my first bank card (before they let you change them). I’d had it for almost two years by then using it every few days for most of that period. I remembered it so well that even after I’d been out of the country for 6 months and not even thinking about it once, I walked up to an ATM on my return and without hesitation, typed it in. And then, about 6 months later, I walked up to another ATM, started typing in the PIN and it just wasn’t there. It was completely gone. I had no memory of it. I knew about the memory but the actual memory completely disappeared. It wasn’t a temporary confusion, it was simply gone and I never remembered it again. This PIN I remembered as a number.
The second death occurred just a few days ago. This time, it was the entrance code to a building. But I only remembered it as a shape on the keypad (as I do for most numbers now). In the intervening years, I’ve memorised a number of PINs and entrance codes. Most I’ve forgotten since; some I remember even now (like the PIN of a card that expired a year ago but that I’d only used once every few months for many years). Simply, the normal processes you’d expect of memory. But this one I’d been using for about a year, since they’d changed it from the previous one. About five months ago I came back from a month-long leave and I remembered it instantly. But three days ago, I walked up to the keypad and the memory was gone. I’d used the keypad at least once if not twice that day already. But that time I walked up to the keypad and nothing. After a few tries I started wondering if I might be typing in the old code from before the change, so I flipped the pattern around (I had a vague memory of once using it to remember the new pattern) and it worked. But the working pattern felt completely foreign. Like one I’d never typed in before. I suddenly understood what it must feel like for someone to recognize their loved one but at the same time be sure that it’s not them. I was really discomfited by this impostor keypad pattern. For a few moments, it felt really uncomfortable – almost an out-of-body (or out-of-memory) experience.
The one thing that set the second forgetting apart from the first one was that I was talking to someone as it happened (the first time I was completely alone on a busy street – I still remember which one, by the way). It was an old colleague who visited the building and was asking me if I knew the code. And seconds after I confidently declared I did, I didn’t. Or I remembered the wrong one.
So in the second case, we could conclude that the presence of someone who had been around when the previous code was being used triggered the former memory and overrode the latter one. But the experience of complete and sudden loss, I recall vividly, was the same. None of my other forgettings were so instant and inexplicable. And I once forgot the face of someone I’d just met as soon as he turned around (which was awkward since he was supposed to come back in a few minutes with his car keys – so I had to stand in the crowd looking expectantly at everyone until the guy returned and nodded to me).
What does this mean for our metaphors of memory based on the various research paradigms? None seem to apply. These were not repressed memories associated with traumatic events (although the forgetting itself was extremely mildly traumatic). These were not quite declarative memories, nor were they exactly non-declarative. They both required operations in working memory but were long-term. They were both triggered by context and had a muscle-memory component. But the first one I could remember as a number, whereas the second one only as a shape and only on that specific keypad. Neither was subject to long-term decay. In fact, both proved resistant to decay, surviving long or longish periods of disuse. They both were (or felt like) memories as solid as my own name. Until they were there no more. The closest introspective analogy to me seems Luria’s man who remembered too much, who once forgot a white object because he placed it against a white background in his memory, which made it disappear.
The current research on memory seems to be converging on the idea that we reconstruct our memories. Our brains are not just some stores with shelves from which memories can be plucked. Although memories are highly contextual, they are not discrete objects encoded in our brain like files on a hard drive. But for these two memories, the hard drive metaphor seems more appropriate. It’s as if a tiny part of my brain that held those memories was corrupted and they simply winked out of existence at the flip of a bit. Just like a hard drive.
There’s a lot of research on memory loss, decay and reliability but I don’t know of any which could account for these two deaths. We have many models of memory which can be selectively applied to most memory related events but these two fall between the cracks.
All the research I could find is either on sudden specific-event-induced amnesia (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1961972/?page=1) or senescence (http://brain.oxfordjournals.org/content/89/3/539.extract). In both cases, there are clear causes, and the loss is much more total (complete events or entire identity). I could find nothing about the sudden loss of a specific reliable memory in a healthy individual (given that it only happened twice, 18 years apart – I was 21 when it happened first – I assume this is not caused by any pathology in my brain) not precipitated by any traumatic (or other) event. Yet, I suspect this happens all the time… So what gives?
Niall Ferguson wrote in The Guardian some time ago about how awful history education has become with these “new-fangled” 40-year-old methods like focusing on “history skills” that lead to kids leaving school knowing “unconnected fragments of Western history: Henry VIII and Hitler, with a small dose of Martin Luther King, Jr.” but not who was the reigning English monarch at the time of the Armada. Instead, he wants history to be taught his way: deep trends leading to the understanding of why the “West” rules and why Ferguson is the cleverest of all the historians that ever lived. He even provided (and how cute is this) a lesson plan!
Now, personally, I’m convinced that the history of historical education teaches us mostly that historical education is irrelevant to the success of current policy. Not that we cannot learn from history. But it’s such a complex source domain for analogies that even very knowledgeable and reasonable people can and do learn the exact opposites from the same events. And even if they learn the “right” things it still doesn’t stop them from being convinced that they can do it better this time (kind of like people in love who think their marriage will be different). So Ferguson’s bellyaching is pretty much an empty exercise. But that doesn’t mean we cannot learn from it.
Ferguson, who is a serious historian of financial markets, didn’t just write a whiney column for the Guardian, he wrote a book called Civilization (I’m writing a review of it and a few others under the joint title “Western Historiographical Eschatology” but here I’ll only focus on some aspects of it) and is working on a computer game and teaching materials. To show how seriously he takes his pedagogic mission and possibly also how hip and with it he is, Ferguson decided to not call his historical trends trends but rather “killer apps”. I know! This is so laugh out loud funny I can’t express my mirth in mere words:))). And it gets even funnier as one reads his book. As a pedagogical instrument this has all the practical value of putting a spoiler on a Fiat. He uses the term about 10 times (it’s not in the index!) throughout the book including one or two mentions of “downloading” when he talks about the adoption of an innovation.
Unfortunately for Ferguson, he wrote his book before the terms “pseudocontext” and “pseudoteaching” made an appearance in the edublogosphere. And his “killer apps” and the lesson plan based on them are a perfect example of both. Ferguson wrote a perfectly serviceable and eminently readable historical book (even though it’s a bit of a tendentious mishmash). But it is still a historical book written by a historian. It’s not particularly stodgy or boring, but it’s no different from the myriad other currently popular historical books that the “youth of today” don’t give a hoot about. He thinks (bless him) that using the language of today will have the youth flocking to his thesis like German princes to Luther. Because calling historical trends “killer apps” will bring everything into clear context and make all the convoluted syntax of even the most readable history book just disappear! This is just as misguided as thinking that talking about men digging holes at different speeds will make kids want to do math.
What makes it even more precious is that the “killer app” metaphor is wrong. For all his extensive research, Ferguson failed to look up “killer app” on Wikipedia or in a dictionary. There he would have found out that it doesn’t mean “a cool app” but rather an application that confirms the viability of an existing platform whose potential may have been questioned. There have only been a handful of killer apps. The one undisputed killer app was VisiCalc, which all of a sudden showed how an expensive computer could simplify the process of financial management through electronic spreadsheets and therefore pay for itself. All of a sudden, personal computers made sense to the most important people of all, the money counters. And thus the personal computer revolution could begin. A killer app proved that a personal computer is useful. But the personal computer had already existed as a platform when VisiCalc appeared.
None of Ferguson’s “killer apps” of “competition, science, property rights, medicine, consumer society, work ethic” are this sort of beast. They weren’t something “installed” in the “West” which then proved its viability. They were something that, according to Ferguson anyway, made the West what it was. In that they are more equivalent to the integrated circuit than to VisiCalc. They are the “hardware” that makes up the “West” (as Ferguson sees it), not the software that can run on it. The only possible exception is “medicine”, or more accurately “modern Western medicine”, which could be the West’s one true “killer app”, showing the viability of its platform for something useful and worth emulating. Also, a “killer app” required a conscious intervention, whereas all of Ferguson’s “apps” were something that happened on its own in a myriad of disparate processes – we can only see them as one thing now.
But this doesn’t really matter at all. Because Ferguson, as so many people who want to speak the language of the “young people”, neglected to pay any attention whatsoever to how “young people” actually speak. The only people who actually use the term “killer app” are technology journalists or occasionally other journalists who’ve read about it. I did a quick Google search for “killer app” and did not find a single non-news reference where somebody “young” would discuss “killer apps” on a forum somewhere. That’s not to say it doesn’t happen but it doesn’t happen enough to make Ferguson’s work any more accessible.
This overall confusion is indicative of Ferguson’s book as a whole, which is definitely less than the sum of its parts. It is full of individual insight and a fair amount of wit, but it flounders in its synthetic attempts. Not all his “killer apps” are of the same type; some follow from the others and some don’t appear to be anything more than Ferguson’s wishful thinking. They certainly didn’t happen on one “platform” – some seem the outcome rather than the cause of “Western” ascendancy. Ferguson’s just all too happy to believe his own press. At the beginning he talks about early hints around 1500 AD that the West might achieve ascendancy, but at the end he takes a half millennium of undisputed Western rule for granted. But in 1500, “the West” still had 250 years to go before the start of the industrial revolution, 400 years before modern medicine, 50 years before Protestantism took serious hold and at least another 100 before the Protestant work ethic kicked in (if there really is such a thing). It’s all over the place.
Of course, there’s not much innovative about any of these “apps”. It’s nothing a reader of the Wall Street Journal editorial page couldn’t come up with. Ferguson does a good job of providing interesting anecdotes to support his thesis but each of his chapters meanders around the topic at hand with a smattering of unsystematic evidence here and there. Sometimes the West is contrasted with China, sometimes the Ottomans, sometimes Africa! It is hard to see how his book can help anybody’s “chronological understanding” of history that he’s so keen on.
But most troublingly, it seems in places that he mostly wrote the book as a carrier for ultra-conservative views that would make his writing more suitable for The Daily Mail than the Manchester Pravda: “the biggest threat to Western civilization is posed not by other civilizations, but by our own pusillanimity” – unless of course it is the fact that “private property rights are repeatedly violated by governments that seem to have an insatiable appetite for taxing our incomes and our wealth and wasting a large portion of the proceeds”.
It’s almost as if the “civilized” historical discourse was just a veneer that peels off in places and reveals the real Ferguson, a comrade of Pat Buchanan whose “The Death of the West” (the Czech translation of which screed I was once unfortunate enough to review) came from the same dissatisfaction with the lack of our confidence in the West. Buchanan also recommends teaching history – or more specifically, lies about history – to show us what a glorious bunch of chaps the leaders of the West were. Ferguson is too good a historian to ignore the inconsistencies in this message and a careful reading of his book reveals enough subtlety not to want to reconstitute the British Empire (although the yearning is there). But the Buchananian reading is available and in places it almost seems as if that’s the one Ferguson wants readers to go away with.
From metaphor to fact, Ferguson is an unreliable thinker, flitting between insight, mental shortcut and unreflective cliché with ease. Which doesn’t mean that his book is not worth reading. Or that his self-serving pseudo-lesson plan is not worth teaching (with caution). But remember, I can only recommend it because I subscribe to that awful “culture of relativism” that says that “any theory or opinion, no matter how outlandish, is just as good as whatever it was we used to believe in.”
Update 1: I should perhaps point out that I think Ferguson’s lesson plan is pretty good, as such things go. It gives students an activity that engages a number of cognitive and affective faculties rather than just relying on telling. Even if it is completely unrealistic in terms of the amount of time allocated and the objectives set. “Students will then learn how to construct a causal explanation for Western ascendancy” is an aspiration, not a learning objective. Also, it and the other objectives really rely on the “historical skills” he derides elsewhere.
The lesson plan comes apart at about point 5, where the really cringeworthy part kicks in. As in his book, Ferguson immediately assumes that his view is the only valid one – so instead of asking the students to compare two different perspectives on why the world looked like it did in 1913 as opposed to 1500 (or even compare maps at strategic moments), he simply asks them to come up with reasons why his “killer apps” are right (and use evidence while they’re doing it!).
I also love his aside: “The groups need to be balanced so that each one has an A student to provide some kind of leadership.” Of course, there are shelves full of literature on group work – and pretty much all of it comes from the same sort of people who’re likely to practice “new history” – Ferguson’s nemesis.
I don’t think using Ferguson’s book and materials would do any more damage than using any other history book. Not what I would recommend, but who cares. I recently spent some time at Waterstone’s browsing through modern history textbooks and I think they’re excellent. They provide far more background to events and present them in a much more coherent picture than Ferguson. They perhaps don’t encourage the sort of broad synthesis that has been the undoing of so many historians over the centuries (including Ferguson), but they demonstrate working with evidence in a way he does not.
The reason most people leave school not knowing facts and chronologies is because they don’t care, not because they don’t have an opportunity to learn. And this level of ignorance has remained constant over decades. At the end of the day, history is just a bunch of stories not that different from what you see on a soap opera or in a celebrity magazine, just not as relevant to a peer group. No amount of “killer applification” is going to change this. What remains at the end of historical education is a bunch of disconnected images, stories and conversation pieces (as many of them about the tedium of learning as about its content). But there’s nothing wrong with that. Let’s not underestimate the ability of disinterested people to become interested and start making the connections and filling in the gaps when they need to. That’s why all these “after-market” history books like Ferguson’s are so popular (even though for most people they are little more than tour guides to the exotic past).
Update 2: By a fortuitous coincidence, an announcement of the release of George L. Mosse’s lectures on European cultural history (http://history.wisc.edu/mosse/george_mosse/audio_lectures.htm) came across my news feeds. I think it is important to listen to these side by side with Ferguson’s seductively unifying approach to fully realize the cultural discontinuity in so many key aspects between us and the creators of the West. Mosse’s view of culture, as his Wikipedia entry reads, was of “a state or habit of mind which is apt to become a way of life”. The practice of history, after all, is a culture of its own, with its own habits of mind. In a way, Ferguson is asking us to adopt his habits of mind as our way of life. But history is much more interesting and relevant when it is, as Mosse’s colleague Harvey Goldberg put it on this recording, a quest after a “usable past” spurred by our sense of the “present crisis” or “present struggle”. So maybe my biggest beef with Ferguson is that I don’t share his justificationist struggle.
In my thinking about things human, I often like to draw on the domain of second language learning as the source of analogies. The problem is that relatively few people in the English speaking world have experience with language learning to such an extent that they can actually map things onto it. In fact, in my experience, even people who have a lot of experience with language learning are actually not aware of all the things that were happening while they were learning. And of course awareness of research or language learning theories is not to be expected. This is not helped by the language teaching profession’s propaganda that language learning is “fun” and “rewarding” (whatever that is). In fact my mantra of language learning (I learned from my friend Bill Perry) is that “language learning is hard and takes time” – at least if you expect to achieve a level of competence above that of “impressing the natives” with your “please” and “thank you”. In that, language learning is like any other human endeavor but because of its relatively bounded nature — when compared to, for instance, culture — it can be particularly illuminating.
But how can the visceral experience of language learning, and not just the fact of it, be communicated to those who lack that kind of experience? I would suggest engrossing literature.
For my money, one of the most “realistic” depictions of language learning, with all its emotional and cognitive peaks and troughs, can be found in James Clavell‘s “Shogun“. There we follow the Englishman Blackthorne as he goes from learning how to say “yes” to conversing in halting Japanese. Clavell makes the frustrating experience of not knowing what is going on, and of not being able to express even one’s simplest needs, real for the reader, who identifies with Blackthorne’s plight. He demonstrates how language and cultural learning go hand in hand and how easy it is to cause a real-life problem through a small linguistic misstep.
Shogun stands in stark contrast to most other literature, where knowledge of a language and its acquisition is treated as mostly a binary thing: you either know it or you don’t. One of the worst offenders here is Karl May (virtually unknown in the English-speaking world), whose main hero Old Shatterhand/Kara Ben Nemsi effortlessly acquires not only languages but also dialects and local accents, which allow him to impersonate locals in May’s favorite plot twists. Language acquisition in May just happens. There is never any struggle or miscommunication by the main protagonist. But similar linguistic effortlessness in the face of plot requirements is common in literature and film. Far more than magic or the existence of vampires, the thing that used to stretch my credulity the most in Buffy the Vampire Slayer was the ease with which linguistic facility was disposed of.
To be fair, even in Clavell’s book there are characters whose linguistic competence is largely binary. Samurai either speak Portuguese or Latin or they don’t – and if the plot demands it, they can catch even a whispered colloquial conversation. Blackthorne’s own knowledge of Dutch, Spanish, Portuguese and Latin is treated the same way, as if identical competence would be expected in all four (which would be completely unrealistic given his background, and which resembles May’s Kara Ben Nemsi in many respects).
Nevertheless, when it comes to Japanese, even a superficially empathetic reader will feel they are learning Japanese along with the main character, largely through Clavell’s clever use of limited translation.
This is all the more remarkable given that Clavell obviously did not speak Japanese and relied on informants. As the book “Learning from Shogun” pointed out, this led to many inaccuracies in the actual Japanese, and it advises readers not to rely too much on the language of Shogun.
Clavell (in all his books – not just Shogun) is even more illuminating in his depiction of intercultural learning and communication – the novelist often getting closer to the human truth of the process than the specialist researcher. But that is a blog post for another time.
Another novel I remember as an accurate representation of language learning is John Grisham‘s “The Broker”, in which the main character, Joel Backman, is dropped into a foreign country by the CIA and is expected to pick up Italian in six months. Unlike in Shogun, language and culture do not permeate the entire plot, but language learning figures in about 40% of the book. “The Broker” underscores another dimension, one that is also present in Shogun, namely teaching, teachers and teaching methods.
Blackthorne in Shogun orders an entire village (literally on pain of death) to correct him every time he makes a mistake. And then he is excited to receive a dictionary and a grammar book. Backman spends a lot of time with a teacher who makes him repeat every sentence multiple times until he knows it “perfectly”. These are today recognized as bad strategies. Insisting on perfection in language learning is often a recipe for forming mental blocks (Krashen’s cognitive and affective filters). But on the other hand, it is quite likely that in totally immersive situations like Blackthorne’s, or even partly immersive situations like Backman’s (who has English speakers around him to help), pretty much any approach to learning will lead to success.
Another common misconception reflected in both works concerns the demands language learning places on rote memory. Both Blackthorne and Backman are described as having exceptional memories to make their progress more plausible, but the learning successes and travails described in the books would accurately reflect the experiences of anybody learning a foreign language, exceptional memory or not. As both books show without explicit reference, it is their strategies in the face of incomprehension that drive their learning, rather than straight memorization of words (although that is by no means unnecessary).
So what can knowing about the experience of second language learning help us elucidate? I think that any progress from incompetence to competence can be compared to learning a second language, particularly when we enhance the purely cognitive view of learning with an affective component. Strategies as well as simple brain changes matter in any learning, which is why none of the brain-based approaches have produced unadulterated success. In fact, linguists studying language as such would do well to pay attention to the process of second language learning, to more fully appreciate the deep interdependence between language and our being.
But I suspect we can be more successful at learning anything (from history or maths to computers or double-entry bookkeeping) if we approach it as a foreign language and acknowledge the emotional difficulties alongside the cognitive ones.
Also, if we looked at expertise more as linguistic fluency than as a collection of knowledge and skills, we could devise programs of learning that better take into account not only the humanity of the learner but also the humanity of the whole community of experts that he or she is joining.