Tag Archives: Semantics


What language looks like: Dictionary and grammar are to language what standing on one foot is to running


Background

Sometimes a rather obscure and complex analogy clicks into place in one’s mind and allows a slightly altered way of thinking that makes so much sense it hurts. Like putting glasses on in the morning and having the world suddenly snap into shape.

This happened to me this morning while reading the post “Do people really not know what running looks like?” on the Notes from Two Scientific Psychologists blog.

It describes the fact that many famous painters (and authors of instructional materials on drawing) did not depict running people correctly. When running, it is natural (and essential) to put forward the arm opposite the leg that is going forward. But many painters who depict running (including the artist who created the poster for the 1922 Olympics!) do it the wrong way round. Not just the wrong way, but a way that is almost impossible to perform. And this has apparently been going on for as long as depiction has been a thing. But it’s not just artists (who could even argue that they have other concerns). What’s more, when you ask a modern human being to imitate somebody running in a stationary pose (as somebody did on the website Phoons), they will almost invariably do it the wrong way round. Why? There are really two separate questions here.

  1. Why don’t the incorrect depictions of running strike most people as odd?
  2. Why don’t we naturally arrange our bodies into the correct stance when asked to imitate running while standing still?

Andrew Wilson (one of the two psychologists) has the perfect answer to question 2:

Asking people to pose as if running is actually asking them to stand on one leg in place, and from their point of view these are two very different things with, potentially, two different solutions. [my emphasis]

And he prefaces that with a crucial point about human behavior:

people’s behaviour is shaped by the demands of the task they are actually solving, and that might not be the task you asked them to do.

Do try this at home: try to imitate a runner standing up, then do it slowly (mime-like), then speed it up. Settling into the wrong configuration is the natural thing to do. Doing it the ‘right’ way round is hard. It wasn’t until I sped up into an actual run that my arms found the opposite motion natural, and by then I could no longer keep track of what was going on. I would imagine that this would be the case for most people. In fact, the few pictures I could find of runners posed standing at the start of a race show most of them with the ‘wrong’ hand/leg position, and they’re not even standing on one leg. (See here and here.)

Which brings us back to the first question. Why doesn’t anybody notice? I personally find it really hard to even identify the wrong static depiction at a glance. I have to slow down, remember what is correct, then match it to the image. What’s going on? We obviously don’t have any cognitive control over the part of running that coordinates the movement of the arms in relation to the movement of the legs. We also don’t have any models or social scripts that pay attention to this sort of thing. It is a matter of conscious effort, a learned behaviour, to recognize these things.

Why is this relevant to language?

If you ask someone to describe a language, they will most likely start telling you about the words and the rules for putting them together. In other words, compiling a dictionary and a grammar. They will say something like: “In Albanian, the word for ‘bread’ is ‘bukë’”. Or they will say something like “English has 1 million words.”, “Czech has no word for training.” or “English has no cases.”

All of these statements reflect a notion of language that has a list of words that looks a little like this:

bread n. = 1. baked good, used for food, 2. metaphor for money, etc.
eat v. = 1. process of ingestion and digestion, 2. metaphor, etc.
people n. plural = human beings

And a grammar that looks a little bit like this.

Sentence = Noun (subj.) + Verb + Noun (obj.)

All of this put together will give us a sentence:

People eat food.

All you need is a long enough list of words and enough (though not as many) rules, and you’ve got a language.
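To make the caricature concrete, here is a minimal sketch of that ‘words plus rules’ model as a toy Python generator. The lexicon entries and the single rule are hypothetical, chosen only to mirror the examples above, not taken from any real grammar.

  import itertools

  # A toy "dictionary": words tagged with a part of speech (hypothetical entries)
  lexicon = {
      "people": "Noun",
      "bread": "Noun",
      "money": "Noun",
      "eat": "Verb",
      "love": "Verb",
  }

  # A toy "grammar": one rule, Sentence = Noun (subj.) + Verb + Noun (obj.)
  def sentences(lexicon):
      nouns = [w for w, pos in lexicon.items() if pos == "Noun"]
      verbs = [w for w, pos in lexicon.items() if pos == "Verb"]
      for subj, verb, obj in itertools.product(nouns, verbs, nouns):
          yield f"{subj.capitalize()} {verb} {obj}."

  for sentence in sentences(lexicon):
      print(sentence)

The generator duly produces ‘People eat bread.’, but also ‘Bread love money.’ and ‘Money eat people.’, with no way of telling which of them anybody would ever actually say.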

But as linguists have discovered through no small amount of pain, you don’t have a language. You have something that looks like a language but not something that you can actually speak as a language. It’s very similar to language but it’s not language.

Kind of like the picture of the runner with the arms going in the opposite direction. It looks very much like someone running, but it’s not; it’s just a picture of someone running, and the picture is fundamentally wrong. Just not in a way that is at all obvious to most people most of the time.

Why grammars and dictionaries seem like a good portrait of language

So, we can ask the same two questions again.

  1. Why does the stilted representation of language as rules and words not strike most people (incl. Steven Pinker) as odd?
  2. Why don’t we give more realistic examples of language when asked to imitate one?

Let’s start with question 2 again which will also give us a hint as to how to answer question 1.

So why, when asked to give an example of English, am I more likely to give:

John loves Mary.

or

Hello. Thank you. Good bye.

than

Is it cold in here? Could you pass the sugar, please? No, no, no. I’ll think about it.

It’s because I’m achieving a task that is different from actually speaking the language. When asked to illustrate a language, we’re not communicating anything in the language. So our very posture towards the language changes. We start thinking in equivalencies, in left and right sides of an equation (word = definition), and in the building blocks of a sentence. Depending on who we’re speaking to, we’ll choose something very concrete or something immediately useful. We will not think of nuance, speech acts, puns or presupposition.

But the vast majority of our language actions are of the second kind. And many of the examples we give of language are actually good for only one thing: giving an example of the language. (Such as the famous example from logic, ‘A man walks’, which James McCawley analysed as only being usable in one very remote sense.)

As a result, if we’re given the task of describing language, coming up with something looking like a dictionary and a grammar is the simplest and best way of fulfilling the assignment. If we take a scholarly approach to this task over generations, we end up with something that looks very much like the modern grammars and dictionaries we all know.

The problem is that these don’t really give us “a picture of language”; they give us “a picture of a pose of language” that looks so much like language to our daily perception that we can’t tell the difference. But in fact, they are exactly the opposite of what language looks like.

Now, we’re in much more complex waters than running. Although I imagine the exact performance of running is in many ways culturally determined, the amount of variation is going to be limited by the very physical nature of the relatively simple task. Language, on the other hand, is almost all culture. So I would expect people in different contexts to give different examples. I read somewhere (can’t track down the reference now) that Indian grammarians tended to give examples of sentences in the imperative. Early Greeks (like Plato) had a much more impoverished view of the sentence than I showed above. And I’m sure there are languages with even more limited metalanguage. However, the general point still stands. The way we tend to think about language is determined by the nature of the task at hand.

The key point I’ve repeated over and over (following Michael Hoey) is that grammars and dictionaries are above all texts written in the language. They don’t stand apart from it. They have their own rules, conventions and inventories of expression. And they are susceptible to the politics and prejudices of their time. Even the OUP. At the same time, they can be very useful tools for developing language skills or dealing with unfamiliar texts. But so is asking a friend or figuring out the meaning from context.

Which brings us to question 1. Why has nobody noticed that language doesn’t quite work that way? The answer is that – just like with running – people have. But only when they try to match the description with something that is right in front of them. Even then, they frequently (and I’m talking about professional linguists like Steven Pinker here) ignore the discrepancy or ascribe it to a lack of refinement in the descriptions. But most of the time, the tasks that we fulfil with language do not require us to engage the sort of metacognitive apparatus that would direct us to reflect on what’s actually going on.

What does language really look like

So is there a way to have an accurate picture of language? Yes. In fact, we already have it. It’s all of it. We don’t perhaps have all the fine details, but we have enough to see what’s going on – if we look carefully. It’s not like linguists of all stripes have not described pretty much everything that goes on with language in one way or another. The problem is that they try to equate the value of a description to the value of the corresponding model that very often looks like an algorithm amenable to being implemented in a computer program. So, if I describe a phenomenon of language as a linguist, my tendency is to immediately come up with a fancy looking notation that will look like ‘science’. If I can make it ‘mathematical’, all the better. But all of these things are only models. They are ways of achieving a very particular task. Which is to – in one way or another – model language for a particular purpose. Development of AI, writing of pedagogic grammars, compiling word lists, predicting future trends, tracing historical developments, estimating psychological impact, etc. All of these are distinct from actual pure observation of what is going on. Of course, even simple description of what I observe is a task of its own with its own requirements. I have to choose what I notice and decide what I report on. It’s a model of a sort, just like an accurate painting of a runner in motion is just a model (choosing what to emphasize, shadows, background detail, facial expression, etc.) But it’s the task we’re really after: Coming up with as accurate and complete a picture of language as is possible for a collectivity of humans.

People working in construction grammars in the usage-based approach are closest to the task. But they need to talk with people who work on texts, as well, if they really want to start painting a fuller picture.

Language is signs on doors of public restrooms, dirty jokes on TV, mothers speaking to children, politicians making speeches, friends making small talk in the street, newscasters reading the headlines, books sold in bookshops, gestures, teaching ways of communication in the classroom, phone texts, theatre plays, songs, blogs, shopping lists, marketing slogans, etc.

Trying to reduce the portrait of all this to words and rules is just like trying to describe a building by talking about bricks and mortar. They’re necessary, and without them nothing would happen. But a building does not look like a collection of bricks and mortar. Nor does knowing how to put brick to brick and glue them together get a house built. At best, you’d get a knee-high wall. You need a whole lot of other knowledge and other kinds of strategies: building corners and windows, but also getting planning permission, digging a foundation, hiring help, etc. All of those are also involved in the edifices we construct with language.

An easy counterargument here would be: That’s all well and good but the job of linguistics is to study the bricks and the mortar and we’ll leave the rest to other disciplines like rhetoric or literature. At least, that’s been Chomsky’s position. But the problem is that even the words and grammar rules don’t actually look like what we think they do. For a start, they’re not arranged in any of the ways in which we’re used to seeing them. But they probably don’t even have the sorts of shapes we think of them in. How do I decide whether I say, “I’m standing in front of the Cathedral” or “The Cathedral is behind me.”? Each of these triggers a very different situation and perspective on exactly the same configuration of reality. And figuring out which is which requires a lot more than just the knowledge of how the sentence is put together. How about novel uses of words that are instantly recognizable like “I sneezed the napkin off the table.” What exactly are all the words and what rules are involved?

Example after example shows us that language does not look very much like the traditional picture we have drawn of it. More and more linguists are looking at language with freshly opened eyes, but I worry that they may get off task when they’re asked to make a picture of what they see.

Where does the metaphor break

Ok, like all metaphors and analogies, even this one must come to an end. The power of a metaphor is not just finding where it fits but also pointing out its limits.

The obvious breaking point here is the level of complexity. Obviously, there’s only one very discretely delineated aspect of what the runners are doing that does not match what’s in the picture. The position of the arms. With language, we’re dealing with many subtle continua.

Also, the notion of the task is taken from a very specific branch of cognitive psychology, and it may be inappropriate to extend it to areas where tasks take a long time, are collaborative, and include a lot of deliberately chosen components as well as automaticity.

But I find it a very powerful metaphor nevertheless. It is not an easy one to explain because both fields are unfamiliar. But I think it’s worth taking the time with it if it opens the eyes of just one more person trying to make a picture of what language looks like.


What is not a metaphor: Modelling the world through language, thought, science, or action


The role of metaphor in science debate (Background)

Recently, the LSE podcast ran an interesting panel on the subject of “Metaphors and Science”. It featured three speakers talking about the interface between metaphor and various ‘scientific’ disciplines (economics, physics and surgery). Unlike many such occasions, all speakers were actually very knowledgeable and thoughtful on the subject.

In particular, I liked Felicity Mellor and Richard Bronk who adopted the same perspective that underlies this blog and which I most recently articulated in writing about obliging metaphors. Felicity Mellor put it especially eloquently when she said:

“Metaphor allows us to speak the truth by saying something that is wrong. That means it can be creatively liberating but it can also be surreptitiously coercive.”

This dual nature of coerciveness and liberation was echoed throughout the discussion by all three speakers. But they also shared the view of ubiquity of metaphor which is what this post is about.

What is not a metaphor? The question!

The moderator of the discussion was much more stereotypically ambivalent about such an expansive attitude toward metaphor and challenged the speakers with the question of ‘what is the opposite of metaphor’ or ‘what is not a metaphor’. He elicited suggestions from the audience, who came up with this list:

model, diagram, definition, truths, math, experience, facts, logic, the object, denotation

The interesting thing is that most of the items on this list are in fact metaphorical in nature. Most certainly models, diagrams and definitions (more on these in future posts). But mathematics and logic are also deeply metaphorical (both in their application and internally; e.g. the whole logico-mathematical concept of proof is profoundly metaphorical).

Things get a bit more problematic with things like truth, fact, denotation and the object. All of those seem to be pointing at something that is intuitively unmetaphorical. But it doesn’t take a lot of effort to see that ‘something metaphorical’ is going on there as well. When we assign a label (denotation) to, for instance, ‘chair’ or ‘coast’ or ‘truth’, we automatically trigger an entire cognitive armoury for dealing with things that exist and have certain properties. But it is clear that ‘chair’, ‘coast’ and ‘metaphor’ are not the same kind of thing at all. Yet we can start treating them the same way because they are all labels. So we start asking for the location, shape or definition of metaphor, just because we assigned it a label, in the same way we can ask for these things about a chair or a coast. We want to take a measure of it, but this is much easier with a chair than with a coast (thus the famous fractal puzzle about the length of the coast of Britain). But chairs are not particularly easy to nail down (metaphorically, of course) either, as I discussed in my post on clichés and metaphors.

Brute facts of tiny ontology

So what is the thing that is not a metaphor? Both Bronk and Mellor suggested the “brute fact”. A position George Lakoff called basic realism and I’ve recently come to think of as tiny ontology. The idea, as expressed by Mellor and Bronk in this discussion, is that there’s a real world out there which impinges upon our bodily existence but with which we can only interact through the lens of our cognition which is profoundly metaphorical.

But ultimately, this does not give us a very useful answer. Either everything is a metaphor, so we might as well stop talking about it, or there is something that is not a metaphor. In which case, let’s have a look. Tiny ontology does not give us the solution because we can only access it through the filter of our cognition (which does not mean consciously or through some wilful interpretation). So the real question is, are there some aspects of our cognition that are not metaphorical?

Metaphor as model (or What is metaphor)

The solution lies in the revelation hinted at above that labels are in themselves metaphors. The act of labelling is metaphorical, or rather, it triggers the domain of objects. What do I mean by that? Well, first let’s have a look at how metaphor actually works. I find it interesting that nobody during the entire discussion tried to raise that question other than the usual ‘using something to talk about something else’. Here’s my potted summary of how metaphor works (see more details in the About section).

Metaphor is a process of projecting one conceptual domain onto another. All of our cognition involves this process of conceptual integration (or blending). This integration is fluid, fuzzy and partial. In language, this domain mapping is revealed through the process of deixis, attribution, predication, definition, comparison, etc. Sometimes it is made explicit by figurative language. Figurative language spans the scale of overt to covert. Figurative language has a conceptual, communicative and textual dimension (see my classification of metaphor use). In cognition, this process of conceptual integration is involved in identification, discrimination, manipulation. All of these can be more or less overtly analogical.

So all of this is just a long way of saying that metaphor is a metaphor for a complicated process which is largely unconscious but not uncommonly conscious. In fact, in my research, I no longer use the term ‘metaphor’ because it misleads more than it helps. There’s simply too much baggage from what is just the overt textual manifestation of metaphor – the sort of ‘common sense’ understanding of metaphor. However, this common sense, ordinary understanding of ‘metaphor’ makes using the word a useful shortcut in communication with people who don’t have much of a background in this kind of thinking. But when we think about the issue more deeply, it becomes a hindrance because of all the different types of uses of metaphor I described here (a replay of the dual liberating and coercive nature of metaphor mentioned above – we don’t get to escape our cognition just because we’re talking about metaphors).

In my work, I use the term frame, which is just a label for a sort of conceptual model (originally suggested by Lakoff as Idealized Cognitive Model). But I think in this context the term ‘model’ is a bit more revealing about what is going on.

So we can say that every time we engage conceptually with our experience, we are engaging in an act of modelling (or framing). Even when I call something ‘true’, I am applying a certain model (frame) that will engage certain heuristics (e.g. asking for confirmation, evidence). Equally, if I say something like ‘education is business’, I am applying a certain model that will allow me to talk about things like achieving economies of scale or meeting consumer demand but will make it much harder to talk about ethics and personal growth. That doesn’t mean that I cannot apply more than one model, a combination of models, or build new models from old ones. (Computer virus is a famous example, but natural law is another one. Again, more on this in later posts.)

Action as an example of modelling

During the discussion, an audience member asked whether we can experience the world directly (not mediated by metaphoric cognition). The answer is yes, but even this kind of experience involves modelling. When I walk along a path, I automatically turn to avoid objects – I am therefore modelling their solid and interactive nature. Even when I’m lying still, free of all thought and just letting the warmth of the shining sun wash over me, I’m still applying a model of my position in the world in a particular way. That is, my body is not activating my ears to hear the sun’s rays, nor is it perceiving the bacteria going about their business in my stomach. A snake, a polar bear or a fish would each model that situation in a different way.

This may seem like an unnecessary extension of the notion of a model. (But it echoes the position of the third speaker, Roger Kneebone, who was talking about metaphor as part of the practice of surgery.) It is not particularly crucial to our understanding of metaphor, but I think it’s important to divert us from a certain kind of perceptual mysticism in which many people unhappy with the limitations of their cognitive models engage. The point is that not all of our existence is necessarily conceptual, but all of it models our interaction with the world and switches between different models as appropriate. E.g. my body applies different models of the world when I’m stepping down from a step onto solid ground and when I’m stepping into a pool of water.

The languages of metaphor: Or how a metaphor do

I am aware that this is all very dense and requires a lot more elaboration (well, that’s why I’m writing a blog, after all). But I’d like to conclude with a warning that the language used for talking about metaphor brings with it certain models of thinking about the world which can be very confusing if we don’t let go of them in time. Just the fact that we’re using words is a problem. When words are isolated (for instance, in a dictionary or at the end of the phrase ‘What is a…’), it only seems natural that they should have a definition. We have a word “metaphor”, and it would seem that it needs to have some clear meaning. The kind of thing we’re used to seeing on the right-hand side of dictionary entries. But insisting that a dictionary-like definition is what must lie at the end of the journey is to misunderstand what we’ve seen along the way.

There are many contexts in which the construction “metaphor is…” is not only helpful but also necessary. For example, when clarifying one’s use: “In this blog, what I mean by metaphor is much broader than what traditional students of figurative language meant by it.” But in the context of trying to get at what’s going on in the space that we intuitively describe as metaphorical, we should almost be looking for something along the lines of “metaphor does” or “metaphor feels like”. Or perhaps refrain from the construction “metaphor + verb” altogether and just admit that we’re operating in a kind of metaphor-tasting soup. We can get at the meaning/definition by disciplined exploration and conversation.

In conclusion, metaphor is a very useful model when thinking about cognition, but it soon fails us, so we can replace it with more complex models, like that of a model. We are then left with the rather unsatisfactory notion of a metaphor of metaphor or a model of a model. The sort of dissatisfaction that led Derrida and his like to the heights of obscurity. I think we can probably just about avoid deconstructionist obscurantism, but only if we adopt one of its most powerful tools, the fleeting sidelong glance (itself a metaphor/model). Just like the Necker cube, this life on the edge of metaphor is constantly shifting before our eyes. Never quite available to us perceptually all at once but readily apprehended by us in its totality. At once opaque and so, so crystal clear. Rejoice, all you parents of freshly screaming thoughts. It’s a metaphor!


5 things everybody should know about language: Outline of linguistics’ contribution to the liberal arts curriculum


Drafty

This was written in some haste and needs further refinement. Maybe one day that will come. For now, it will be left as it stands.

Background

This post outlines what I think are the key lessons from the research output of the field of linguistics that should be reflected in the general curriculum (in as much as any should be). It is a reaction to recent posts by Mark Liberman suggesting the role and form of grammar analysis in general education. I argue that he is almost entirely wrong in his assumptions as well as in his emphasis. I will outline my arguments against his position at the end of the post. At the beginning, I will outline key, easily digestible lessons of modern linguistics that should be incorporated into language education at all levels.

I should note that despite my vociferous disagreement, Mark Liberman is one of my heroes. His ‘Breakfast Experiments(tm)’ have brought me much joy, and he and his fellow contributors to the Language Log keep me better informed about developments in linguistics outside my own specialty than I would otherwise be. Thanks, Mark, for all your great work.

I have addressed some of these issues in previous posts here, here and here.

What should linguistics teach us

In my post on what proponents of simple language should know about linguistics, I made a list of findings that I propose are far more important than specific grammatical and lexicographic knowledge. Here I take a slightly more high-level approach – but in part, this is a repetition of that post.

Simply, I propose that any school-level curriculum of language education should 1. expose students (starting at the primary level) to the following 5 principles through reflection on relevant examples, and 2. reflect these principles in the practical instruction students receive toward the acquisition of skills and general facility in the standards of that language.

Summary of key principles

  1. Language is a dialect with an army and a navy
  2. Standard English is just one of the many dialects of English
  3. We are all multilingual in many different ways
  4. A dictionary is just another text written in the language, not a law of the language
  5. Language is more than words and rules

Principle 1: Language is a dialect with an army and a navy

This famous dictum (see Wikipedia on its origins) encapsulates the fact that language does not have clear boundaries and that there is no formula for distinguishing where one language ends and another begins. Often, this distinction depends on the political interests of different groups. In different political contexts, the different Englishes around the world today could easily qualify for separate language status, and many of them have achieved it.

But exploring the examples that help us make sense of this pithy phrase also teaches us the importance of language in the negotiation of our identity and its embeddedness in the wider social sphere. There are piles and piles of evidence to support this claim, and learning about the evidence has the potential to make us all better human beings, less prone to disenfranchise others based on the way they speak (in as much as any form of schooling is capable of such a thing). Certainly more worthy than knowing how to identify the passive voice.

Principle 2: Standard English is just one of the many dialects of English

Not only are there no clear boundaries between languages, there are no clear principles of what constitutes an individual language. A language is identified by its context of use as much as by the forms it uses. So if kayak and a propos can be a part of English, so can ain’t and he don’t. It is only a combination of subconscious convention and conscious politics that decides which is which.

Anybody exploring the truth of this statement (and, yes, I’m perfectly willing to say the word truth in this context) will learn about the many features of English and all human languages such as:

  • stratification of language through registers
  • regional and social variation in language
  • processes of change in language over time
  • what we call good grammar are more or less fixed conventions of expression in certain contexts
  • the ubiquity of multiple codes and constant switching between codes (in fact, I think this is so important that it gets a top billing in this list as number 3)

Again, although I’m highly skeptical of claims of causality from education to social change, I can’t see why instruction in this reality of our lives could not contribute to an international conversation about language politics. Perhaps an awareness of this ‘mantra’ could reduce the frequency of statements such as:

  • I know I don’t speak very good English
  • Word/expression X is bad English
  • Non-native speaker X speaks better English than native speaker Y

And just maybe, teachers of English will stop abusing their students with ‘this is bad grammar’ and instead guide them towards understanding that in different contexts, different uses are appropriate. Even at the most elementary levels, children can have fun learning to speak like a newscaster or a local farm hand, without the violent intrusion into their identity that comes from the misguided and evil labeling of the first as proper and the second as ‘not good English’. Or how about giving the general public enough information to judge the abominable behavior of the journalist pseudo-elites during the ‘Ebonics controversy’ as the disgraceful display of shameful ignorance it was.

Only when they have learned all that should we mention something about the direct object.

Principle 3: We are all multilingual in many different ways

One of the things linguistics has gathered huge amounts of evidence about is the fact that we are all constantly dealing with multiple quite distinct codes. This is generally not expressed in quite as stark terms as I do here, but I take my cue from bilingualism studies, where it has been suggested (either by Chaika or Romaine – I can’t track down the reference to save my life) that we should treat all our study of language as if bilingualism were the default state rather than some exception. This would make good sense even if we went by the educated guess that just over half of the world’s population regularly speaks two or more languages. But I want to go further.

First, we know from principle 1 that there is no definition of language that allows us to draw clear boundaries between individual languages. Second, we know from principle 2 that each language consists of many different ‘sub-languages’ or ‘codes’. Because language is so vast and complex, it follows that knowing a language is not an either/or proposition. People are constantly straddling boundaries between different ways of speaking and understanding. They speak in different ways for different purposes, to different people, in different codes. And we know that people switch between the codes constantly for different reasons, even within the same sentence or a single word (very common in languages with rich morphologies like Czech – less common in English but possible, as in ‘un-fucking-convincing’). Some examples that should illustrate this: “Ladies and gentlemen, we’re screwed” and “And then Jeff said unto Karen”.

We also know, from all the wailing and gnashing of teeth deriving from ignorance of principle 2, that acquiring these different codes is not easy. The linguist Jim Miller has suggested to me that children entering school are in a way required to learn a foreign language. In Czech schools, they are instructed in a new lexicon and new morphology (e.g. say ‘malý’ instead of ‘malej’). In English schools, they are taught a strange syntax with, among other things, a focus on nominal structures (cf. ‘he went and’ vs. ‘his going was’) as well as an alien lexicon (cf. ‘leave’ vs. ‘depart’). This is compounded by a spelling system whose principles are often explained on the basis of a phonology they don’t understand (e.g. much of England pronouncing ‘bus’ as ‘booss’ but using teaching materials that rhyme ‘bus’ with ‘us’).

It is therefore not a huge leap to say that, for all intents and purposes, we are all multilingual even if we only officially speak one language with its own army and navy. Or at least, we engage all the social, cognitive and linguistic processes that are involved in speaking multiple languages. (There is some counter-evidence from brain imaging, but in my view it is still too early to interpret it either way.)

But no matter whether we accept the strong or the weak version of my proposition, learning about the different pros and cons would make students’ lives much easier at all levels. Instead of feeling like failures over their grammar, they could be encouraged to practice switching between codes. They could also take comfort in the knowledge that there are many different ways of knowing a language and no one person can know it all.

If any time is left over, let’s have a look at constituent structures.

Principle 4: A dictionary is just another text written in the language, not a law of the language

The deference shown to ‘official’ reference materials is at the heart of a myth that the presence of a word in a dictionary in some way validates it as a ‘real’ word in the language. But the absolute truth about language that everyone should know, and repeat as a mantra every time they ask ‘is X a word’, is that dictionaries are just another text. In fact, they are their own genre of a type that Michael Hoey calls text colonies. This makes them cousins of the venerable shopping list. Dictionaries have their own conventions, their own syntax and their own lexicon. They have ‘heads’ and ‘definitions’ that are both presented in particular ways.

What they most emphatically do not do is confirm or disconfirm the existence of a word or its meaning. It’s not just that they are always behind current usage; it’s that they only reflect a fraction of the knowledge involved in knowing and using words (or, as the philosopher John Austin would say, ‘doing things with words’). Dictionaries fulfil two roles at once. They are useful tools for gathering information to enable us to deal with the consequences of principle 3 (i.e. to function in a complex multi-codal linguistic environment as both passive and active participants). And they help us express many beliefs about our world, such as:

  • The world is composed of entities with meanings
  • Our knowledge is composed of discrete items
  • Some things are proper and others are improper

Perhaps this can become more transparent when we look at entries for words like ‘the’ or ‘cat’. No dictionary definition can help us with ‘the’ unless we can already use it. In this case, the dictionary serves no useful role other than as a catalog of our reality. Performatively assuring us of its own relevance by its irrelevance. How about the entry for ‘cat’. Here, the dictionary can play a very useful role in a bilingual situation. A German will see ‘cat = Katze’ and all will be clear in an instant. A picture can be helpful to those who have no language yet (little children). But the definition of ‘cat’ as “a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws” is of no use to anybody who doesn’t already know what ‘cat’ means. Or at the very least, if you don’t know ‘cat’, your chances of understanding any definition in the dictionary are very low. A dictionary can be helpful in reminding us that ‘cat’ is also used to refer to ‘man’ among jazz musicians (as in “he’s a cool cat”) but again, all that requires existing knowledge of cat. A dictionary definition that would say ‘a cat is that thing you know as a cat but jazz musicians sometimes use cat to refer to men’ would be just as useful.

In this way, a dictionary is like an audience in the theatre, who are simultaneously watching a performance, and performing themselves the roles of theatre audiences (dress, behavior, speech).

It is also worthwhile to think about what is required of the dictionary author. While the basic part of the lexicographer’s craft is the collection of usage examples (on index cards in the past and in corpora today) and their interpretation, all this requires a prior facility with the language and much introspection about the dictionary maker’s own linguistic intuitions. So lexicographers make mistakes. Furthermore, in the last hundred years or so, they almost never start from scratch. Most dictionaries are based on older dictionaries, and the order of definitions is often as much a reflection of tradition (e.g. in the case of the word ‘literally’ or the word ‘brandish’) as of analysis of actual usage.

Why should this be taught as part of the language education curriculum? Simple! Educated people should know how the basic tools surrounding their daily lives work. But even more importantly, they should never use the presence of a word in a dictionary, and as importantly the definition of a word in a dictionary, as the definitive argument for their preferred meaning of a word. (Outside some contexts such as playing SCRABBLE or confirming an uncertainty over archaic or specialist words).

An educated person should be able to go and confirm any guidance found in a dictionary by searching a corpus and evaluating the evidence. It’s not nearly as hard as identifying parts of speech in a sentence and about a million times more useful for the individual and beneficial for society.
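To make that suggestion concrete, here is a minimal sketch of such a corpus check using the Brown corpus bundled with NLTK (the word and the corpus are just illustrative choices; any corpus interface would do):

  from collections import Counter
  from nltk.corpus import brown   # requires a one-off nltk.download('brown')

  word = "ain't"
  counts = Counter()
  # Count the word's occurrences in each genre ('category') of the corpus
  for category in brown.categories():
      for token in brown.words(categories=category):
          if token.lower() == word:
              counts[category] += 1

  print(f"'{word}': {sum(counts.values())} occurrences in the Brown corpus")
  for category, n in counts.most_common():
      print(f"  {category}: {n}")

A few lines like these tell you more about where and how often a word is actually used, and in which registers, than the presence or absence of an entry in any dictionary.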

Principle 5: Language is more than words and rules

Steven Pinker immortalised the traditional structuralist vision of what language consists of in the title of his book “Words and rules”. This vision is almost certainly wrong. It is based on an old articulation of language as being the product of a relatively small number of rules applied to a really large number of words (Chomsky expressed this quite starkly but the roots of this model go much deeper).

That is not to say that words and rules are not useful heuristic shortcuts to talking about language. I use this metaphor myself every day. And I certainly am not proposing that language should not be taught with reference to this metaphor.

However, this is a very impoverished view of language and rather than spend time on learning the ‘relatively few’ rules for no good reason other than to please Mark Liberman, why not learn some facts we know about the vastness and complexity of language. That way instead of having a completely misguided view of language as something finite that can be captured in a few simple precepts (often expressed in one of those moronic ‘Top X grammatical errors lists’), one could actually have a basic understanding of all the ways language expresses our minds and impresses itself on our life. Perhaps, we could even get to a generation of psycholinguists and NLP specialists who try to deal with language as it actually is rather than in its bastardised form that can be captured by rules and words.

Ok, so I’m hoisting my theoretical flag here, flying the colors of the ‘usage-based’, ‘construction grammar’, ‘cognitive semantics’ crowd. But the actual curricular proposal is theory free (other than in the ‘ought’ portion of it) and based on well-known and oft-described facts – many of them by the Language Log itself.

To illustrate the argument, let’s open the dictionary and have a look at the entry for ‘get’. It will go on for several pages even if we decide to hide all its phrasal friends under separate entries. Wiktionary lists 26 definitions as a verb and 4 as a noun, which is fairly conservative. But each of these definitions also comes with usage examples and usage exceptions. For instance, in ‘get behind him’, it is intransitive, but in ‘get Jim to come’, it is transitive. This is combined with general rules that apply across all uses, such as ‘got’ as the past tense and ‘gets’ as the third person singular. Things can be even more complex, as with the word ‘bad’, which has the irregular superlative ‘worst’ when it is used in a negative sense, as in ‘teaching grammar in schools is the worst idea’, and ‘baddest’ in the positive sense, as in ‘Mark Liberman is the baddest linguist on the internet’. ‘Baddest’ is also only appropriate in certain contexts (so my example is at the same time an illustration of code mixing). When we look at any single word in the dictionary, the amount of conscious and unconscious knowledge required to use the word in our daily speech is staggering. This is made even trickier by the fact that not everyone in any one speech community has exactly the same grasp of the word, leading to a lot of negotiation and conversation repair.

It is also the sort of stuff that makes understanding of novel expressions like ‘she sneezed the napkin off the table’ possible. If we must, let’s do some sentence diagramming now.

Some other things to know

I could go on. Some of my other candidate principles that didn’t make this list, either because they could be subsumed by one of the items, or because they are too theory-laden, or because I wanted a list of 5, or because this blog post is over 3,000 words already, are:

  • All lexical knowledge is encyclopedic knowledge
  • Rules of the road like conversation repair, turn taking or text cohesion are just as much part of language as things like passives, etc.
  • Metaphors (and other types of figurative language) are normal, ubiquitous and necessary for language
  • Pretty much every prejudice about gender and language is wrong (like who is more conservative, etc.)
  • No language is more beautiful or amazing than any other; saying this is most likely part of a nationalistic discourse
  • Children are not very good language learners when you put them in the same learning context as adults (e.g. two hours of instruction a week as opposed to living in a culture with no other obligation but to learn)
  • Learning a language is hard and it takes time
  • The etymology of a word does not reflect some deeper meaning of the word
  • Outside some very specific contexts (e.g. language death), languages don’t decline, they change
  • Etc.

Why we should not teach grammar in schools

So, that was my outline of what linguistic expertise should be part of the language education curriculum – and as importantly should inform teachers across all subjects. Now, let’s have a look, as promised, at why Mark Liberman is wrong to call for the teaching of grammar in schools in the first place.

To his credit, he does not trot out any of the usual utilitarian arguments for the teaching of grammar:

  • It will make learning of foreign languages easier
  • It will make today’s graduates better able to express themselves
  • It will contribute to higher quality of discourse
  • It will stop the decline of English
  • It will improve critical thinking of all students

These are all bogus, not supported by evidence and with some evidence against them (see this report for a summary of a part of them).

My argument is with his interpretation of his claim that

a basic understanding of how language works should be part of what every educated person knows

I have a fundamental problem with the very notion of an ‘educated person’ because of its pernicious political baggage. But in this post I’ve used it to accept the basic premise that if we’re going to teach lots of stuff to children in schools, we might as well teach them the good stuff. Perhaps not always the most immediately useful stuff, but definitely the stuff that reflects the best in what we have to offer ourselves as the humanity we want to be.

But if that is the case, then I don’t think his offer of

a modern version of the old-fashioned idea that grammar (and logic and rhetoric :-)) should be for everyone

is that sort of stuff. Let’s look at what that kind of education did for the likes of Orwell, and Strunk and White, who had the benefit of all the grammar education a schoolmaster’s cane can beat into a man and yet committed such outrageous, embarrassing and damaging transgressions against linguistic knowledge (not infrequently decried on the Language Log).

The point is that ‘grammar’ (and ‘logic’ and ‘rhetoric’) do not represent even a fraction of the principles involved in how language works. The only reason why we would privilege their teaching over the teaching of the things I propose (which cover a much larger area of how language works) is because they have been taught in the past. But why? Basing it on something as arbitrary as the hodgepodge that is the treebank terminology proposed by Mark Liberman only exposes the weakness of the argument – sure, it’s well known and universally understood by professional linguists, but it hides as much about language as it reveals. And as Mark acknowledges, the aim is not to educate future linguists. There are alternatives, such as Dixon’s excellent Basic Linguistic Theory, that take into account much more subtly the variation across languages. But even then, we avoid all the really interesting things about language. I’m not against some very basic metalinguistic terminology to assist students in dealing with language, but parsing sentences for no other reason than to do it seems pointless.

The problem with basing a curriculum on old-fashioned values is that we are catering to the nostalgia of old men (and sorry, Mark, despite my profound appreciation for your work, you are an old man). (By the way, I use ‘men’ to evoke a particular image rather than to make any assertions about the gender of the person in question.) But it’s not just nostalgia. It’s also their disorientation in a changing world and discomfort with encountering people who are not like them – and, oh horror, can’t tell the passive voice from the past tense. Yes, it would be more convenient for me if everyone I spoke to had the same appreciation for what an adverb is (particularly when I was teaching foreign languages). But is that really the best we have to offer when we want to show what should be known? How much of this is just the maintenance of the status of academics who want to see their discipline reflected in the cauldron of power and respectability that is the school curriculum? If the chemists get to waste everyone’s time with polymers, why not us with trees and sentence diagrams? In a follow-up post, Dick Hudson claims that at present “we struggle to cope with the effects of [the disaster of no grammar teaching]”. But I don’t see any disaster going on at the moment. Why is teaching no grammar more disastrous than teaching grammar based on Latin and Greek with little connection to the nature of English? Whose rules are we after?

The curriculum is already full to bursting with too much stuff that someone threw up as a shibboleth for being educated and thus eligible for certain privileges. But perhaps our curriculum has now become the kind of stable that needs the janitorial attention of a modern Heracles.

Post script: Minimalist metalinguistic curriculum

I once analysed the Czech primary curriculum and found over 240 metalinguistic terms. I know, ridiculous. The curriculum was based on the work of eminent Czech structuralists (whose theorizing influenced much of the rest of the world). It didn’t make the Czechs any more educated, eloquent, or better at learning foreign languages – although it did make it easier for me to study linguistics. But as I said above, there is certainly some place for metalanguage in general education. Much of it comes from stylistics, but when it comes to grammar, I’d stick to about 15 terms. This is not a definitive list:

  1. Noun
  2. Verb
  3. Adjective
  4. Adverb
  5. Preposition
  6. Pronoun
  7. Prefix
  8. Suffix
  9. Clause
  10. Past form of verb
  11. Future form of verbs
  12. Present form of verbs
  13. Subject
  14. Object
  15. Passive

Languages with rich morphology might need a few others to cover things like case, but overall, in my career as a language educator, I’ve never felt the need for any more, nor have I ever felt I was in the presence of uneducated people just because they couldn’t tell me what the infinitive was. In fact, I’d rather take some items away (like adverb, prefix, suffix, or clause) than add new ones.

Sentence diagramming is often proposed as a way of instilling some metalinguistic awareness. I don’t see any harm in that (and a lot of potential benefit). But only with the enormous proviso that students use it to learn the relationship between parts of their language in use and NOT as a gateway to a cancerous taxonomy pretending to the absolute existence of things that could easily be just artifacts of our metacognition.

Things are different when it comes to the linguistic education of language teachers. On the one hand, I’m all for language teachers having a comprehensive education in how language works. On the other hand, I have perpetrated a lot of such teacher training over the years and have watched others struggle with it as well. And the effects are dispiriting. I’ve seen teachers who can diagram a sentence with the best of them and are still quite clueless when it comes to understanding how speech acts work. Very often language teachers find any such lessons painful and something to get through. This means that the key thing they remember about the subject is that linguistics is hard or boring or both.



Binders full of women with mighty pens: What is metonymy


Metonymy in the wild

""Things were not going well for Mitt Romney in early autumn of last year. And then he responded to a query about gender equality with this sentence:

“I had the chance to pull together a cabinet, and all the applicants seemed to be men… I went to a number of women’s groups and said, ‘Can you help us find folks?’ and they brought us whole binders full of women.” http://en.wikipedia.org/wiki/Binders_full_of_women

This became a very funny meme that stuck around for weeks. The reason for its longevity was the importance of women’s issues and the image of Romney himself, not the phrase itself. What it showed, or rather confirmed, is that journalists who in the same breath bemoan the quality of language education are completely ignorant about issues related to language, saying things like:

In fairness, “binders” was most likely a slip of the tongue. http://edition.cnn.com/2012/10/17/opinion/cardona-binders-women/index.html

The answer to this is NO. This was not some ‘Freudian slip of the tongue’, nor was it an inelegant phrase. It was simply a perfectly straightforward use of metonymy, something we use and hear used probably a dozen times every day without remarking on it (or mostly so – see below).

What is metonymy

Metonymy is a figure of speech where something stands for something else because it has a connection to it. This connection can be physical, where a part of something can stand for a whole and a whole can stand for one of its parts.

  • Part for a whole: In ‘I got myself some new wheels’, ‘wheels’ stands in for ‘car’.
  • Whole for a part: In ‘My bicycle got a puncture’, ‘bicycle’ stands for a ‘tyre’, which is a part of it.

But the part/whole relationship does not have to be physical. Something can be a part of a process, idea, or configuration. The part/whole relationship can also be a membership or a cause and effect link. There are some subdomain instantiations where whole sets of conventional metonymies often congregate. Tools also often stand for jobs and linguistic units can stand for their uses. Materials can also be used to stand for things made from them. Some examples of these are:

  • Membership for members: “The chess club sends best wishes.” < the ‘chess club’ stands for its members
  • Leader for the led: “The president invaded another country.” < the ‘president’ stands for the army
  • Tool for person: “hired gun” < the tool stands for the person
  • Linguistic units for uses: “no more ifs and buts” < ‘if’ and ‘but’ stand for the kinds of utterances they introduce (conditions, objections)
  • End of a process for the process: “the house is progressing nicely” < the ‘house’ is the final end point of a process, which stands for the process as a whole
  • Tool/position for job: “chair person” < the ‘chair’ stands for the role of somebody who sits on it
  • Body part for use: “lend a hand” < the ‘hand’ stands for the part of the process where hands are used
  • City for inhabitants: “Detroit doesn’t like this” < the city of ‘Detroit’ stands for the people and industries associated with the city
  • Material for object made from material: “he buried 6 inches of steel in his belly” < the ‘steel’ stands for a sword; similarly, in “he filled him full of lead”, ‘lead’ stands for bullets

Metonymy chaining

Metonymies often occur in chains. A famous example by Michael Reddy is

“You’ll find better ideas than that in the library.”

where ideas are expressed in words, printed on pages, bound in books, stored in libraries.

In fact, ‘binders full of women’ is an example of a metonymic chain where women stand for their profiles, which are written on pages, which are contained in binders.

It has been argued that these chains illustrate the very nature of metonymic inference. (See more below in section on reasoning.) In fact, it is not unreasonable to say that most metonymy contains some level of chaining or potential chaining. Not in cases of direct parts like ‘wheels’ standing for ‘cars’ but in the less concrete types like ‘hands’ standing for help or ‘president’ for the invading army, there is some level of chaining involved.

Metonymy vs. synecdoche

Metonymy is a term which is part of a long-standing classification of rhetorical tropes. The one term from this classification that metonymy is most closely associated with is synecdoche. In fact, what used to be called synecdoche is now simply subsumed under metonymy by most people who write about it.

The distinction is:
- Synecdoche describes a part standing for a whole (traditionally called pars pro toto), as in ‘The king built a cathedral.’, or the whole standing for a part (traditionally called totum pro parte), as in ‘Poland votes no’
- Metonymy describes a connection based on a non-part association such as containment, cause and effect, etc. (see above for a variety of examples)

While this distinction is not very hard to make in most cases, it is not particularly useful and most people won’t be aware of it. In fact, I was taught that synecdoche was pars pro toto and metonymy was totum pro parte and that all the other uses are extensions of these types. This makes just as much sense as any other division but doesn’t seem to be the way the ancients looked at it.

Metaphor vs. metonymy

More commonly and perhaps more usefully, metonymy is contrasted with metaphor. In fact, ‘metaphor/metonymy’ is one of the key oppositions made in studies of figurative language.

People studying these tropes in the Lakoff and Johnson tradition will say something along the lines of: metonymy relies on contiguity whereas metaphor relies on similarity.

So for example:

  • “You’re such a kiss ass” is a metaphor because ‘kissing ass’ signifies a certain kind of behavior, but the body part is not involved, while
  • “I got this other car on my ass” is a metonymy because ‘ass’ stands for everything that’s behind you.

Or:

  • "all men are pigs" is a metaphor because we ascribe the bad qualities of pigs to men but
  • "this is our pig man" is a metonymy because 'pig' is part of the man's work

Some people (like George Lakoff himself) maintain that the distinction between metaphor and metonymy represents a crucial divide. Lakoff puts metonymic connections along with metaphoric ones as the key figurative structuring principles of conceptual frames (along with propositions and image schemas). But I think that there is evidence to show that they play a similar role in figurative language and language in general. For example, we could add a third sentence to our 'ass' opposition, such as 'she kicked his ass', which could be either metonymic, if actual kicking occurred but only some of it involved the buttocks, or metaphoric, if no kicking at all took place. But even then the metaphor relies on an underlying metonymy.

When we think of metaphor as a more special instance of domain mapping (or conceptual blending, as I do on this blog), then we see that very similar connections are being made in both. Very often both metaphor and metonymy are involved in the same figurative process. There is also often a component of social convention where some types of connections are more likely to be made.

For example, in "pen is mightier than the sword" the connections of 'pen' to writing and 'sword' to war or physical enforcement are often given as an example of metonymy. But the imagery is much richer than that. In order to understand this phrase, we need to compare two scenarios (one with the effects of writing and one with the effects of fighting), which is exactly what happens in the conceptualisation taking place in metaphors and analogies. These two processes are not just part of a chain but seem to happen all at once.

Another example is 'enquiring minds want to know', the labeling of which was the subject of a recent debate. We know that minds often metonymically stand for thinkers, as in 'we have a lot of sharp minds in this class'. But when we hear of 'minds' doing something, we think of metaphor. This is not all that implausible because 'my mind has a mind of its own' is out there: http://youtu.be/SdUZe2BddHo. But this figure of speech obviously relies on both conceptualisations at once (at least in the way some people will construe it).

Metonymy and meronymy

One confusion I've noticed is putting metonymy in opposition to meronymy. However, the term 'meronymy' has nothing to do with the universe of figurative language. It is simply a term for the relationship between the meaning of one word and that of another, where one of the words denotes a whole and the other one of its parts. So 'wheels' is a meronym of 'car' and of 'bike', but calling a nice car 'sharp wheels' is synecdoche, not meronymy as this post http://wuglife.tumblr.com/post/68572697017/metonymy-or-meronymy erroneously claims.

Meronyms together with hyponyms and hyperonyms are simply terms that describe semantic relationships between words. You could say that synecdoche relies on the meronymic (or holonymic) relationship between words or that it uses meronyms for reference.

It doesn't make much difference to the overall understanding of the issues but it is perhaps worth clarifying.

William Croft also claims that meronymy is the only constituent relationship in his radical construction grammar (something which I have a lot of time for but not something hugely relevant to this discussion).

Metonymic imagery

Compared to metaphor, metonymy is often seen as the more pedestrian figure of speech. But as we saw in the reactions to Romney's 'binders full of women', this is not necessarily the case:

he managed to conjure an image confirming every feminist’s worst fears about a Romney presidency; that he views women’s rights in the workplace as so much business admin, to be punched and filed and popped on a shelf http://www.theguardian.com/world/shortcuts/2012/oct/17/binders-full-of-women-romneys-four-words

The meme that sprang up around it consisted mostly of people illustrating this image, many of which can be found on http://bindersfullofwomen.tumblr.com (see one such image above).

This is not uncommon in the deconstructions and hypostatic debates about metonymies. ‘Pen is mightier than the sword’ is often objected to on the basis that somebody with a sword will always prevail over somebody with a pen. People will also often critique the ’cause of’ relationships, as in ‘the king did not erect this tower, all the hard-working builders did’. Another example could be all the gruesome jokes about ‘lending a hand’ or ‘asking for a hand in marriage’. I still remember a comedy routine from my youth which included the sentence, “The autopsy was successful, the doctor came over to me extending a hand…for me to take to the trash.”

But there is a big difference in how the imagery works in metonymy and metaphor. Most of the time we don’t notice it. But when we become aware of the rich evocative images that make a metaphor work, we think of the metaphor as working and those images illustrate the relationship between the two domains. But when we become aware of the images that are contained in a metonymy (as in the examples above), we are witnessing a failure of the metonymy. It stops doing its job as a trope and starts being perceived as somehow inappropriate usage. But metaphor thus revealed typically does its job even better (though not in all cases as I’ve often illustrated on this blog).

Reasoning with metonymy

Much has been written about metaphoric reasoning (sometimes in the guise of 'analogic reasoning') but connection is just as important a part of reasoning as similarity is.

Much of sympathetic magic requires both connection and similarity. So the 'voodoo doll' is shaped like a person but is connected to them by their hair, skin, or an item belonging to them.

But reasoning by connection is all around us. For instance, in science, the relationship of containment is crucial to classification and much of logic. Also, the question of sets being part of sets, which has spurred so much mathematical reasoning, has both metaphoric and metonymic dimensions.

But we also reason by metonymy in daily life when we pay homage to the flag or call on the president to do something about the economy. Sometimes we understand something metonymically by compression, as when we equate the success of a company with the success of its CEO. Sometimes we use metonymy to elaborate, as when we say something like '12 hard-working pistons brought the train home'.

Metonymy is also involved in the process of exemplars and paragons. While the ultimate conceptualization is metaphoric, we also ask that the exemplar have some real connection. Journalists engage in the process of metonymy when they pick someone whose story they tell in order to exemplify a larger group. This person has to be both similar and connected to engage the power of the trope fully. On a more accessible level, insults and praise often have a metonymic component. When we call someone 'an asshole' or 'a hero', we often substitute a part of who they are for the whole, much to the detriment of our understanding of who they are (note that a metaphor is also involved).

Finally, many elements of representative democracy rely on metonymic reasoning. We want MPs to represent particular areas and think it is best if they originate in that area. We think that because we paid taxes, the police 'work for us'. Also, the ideologies of nationalism and the nation state are very much metonymic.

Warning in conclusion

I have often warned against the dangers of overdoing the associations generated by metaphors. But in many ways metonymy is potentially even more dangerous because of the magic of direct connection. It can be a very useful (and often necessary) shortcut to communication (particularly when used as compression) but just as often it can lead us down dangerous paths if we let it.

Background

This post is an elaboration and reworking of my comment on Stan Carey's post on metonymy. That post seemed to me surprisingly confused and unclear about what metonymy does. This could be because Stan is no linguistic lightweight, so I expected more. But it's easy to get this wrong, and rereading my comment there, it seems I got a bit muddled myself. And I'm sure even my more worked-out description here could be successfully picked over. Even Wikipedia, which is normally quite good in this area, is a bit confused on the matter. The different entries for synecdoche and metonymy as well as related terms seem a bit patched together and don't provide a straightforward definition.

Ultimately, the finer details don’t matter as long as we understand the semantic field. I hope this post contributes to that understanding but I’ll welcome any comments and corrections.


How we use metaphors


I was reminded by this blog post on LousyLinguist that many people still see metaphor as an unproblematic, homogeneous concept, which leads to much circular thinking about it. I wrote about that quite a few years ago in:

Lukeš, D., 2005. Towards a classification of metaphor use in text: Issues in conceptual discourse analysis of a domain-specific corpus. In Third Workshop on Corpus-Based Approaches to Figurative Language. Birmingham.

I suggested that a classification of metaphor had better focus on its use rather than its inherent nature. I came up with the heuristic device of cognitive, social and textual uses of metaphor.

Some of the uses I came up with (inspired by the literature from Halliday to Lakoff) were:

  • Cognitive
    • Conceptual (constitutive)
      • Explanative
      • Generative
    • Attributive
  • Social (Interpersonal)
    • Conceptual/Declarative (informational)
    • Figurative (elegant variation)
    • Innovative
    • Exegetic
    • Prevaricative
    • Performative
  • Textual
    • Cohesive (anaphoric, cataphoric, exophoric)
    • Coherent
      • Local
      • Global

I also posited a continuum of salience and recoverability in metaphors:

  • High salience and recoverability
  • Low salience and recoverability

Read the entire paper here.

My thinking on metaphor has moved on since then – I see it as a special case of framing and conceptual integration rather than a sui generis concept – but I still find this a useful guide to return to when confronted with metaphor use.


The complexities of simple: What simple language proponents should know about linguistics [updated]


Update

Part of this post was incorporated into an article I wrote with Brian Kelly and Alistair McNaught that appeared in the December issue of Ariadne. As part of that work and feedback from Alistair and Brian, I expanded the final section from a simple list of bullets into a more detailed research programme. You can see it below and in the article.

Background: From spelling reform to plain language

Simple (Photo credit: kevin dooley)

The idea that if we could only improve how we communicate, there would be less misunderstanding among people is as old as the hills.

Historically, this notion has been expressed through things like school reform, spelling reform, publication of communication manuals, etc.

The most radical expression of the desire for better understanding is the invention of a whole new artificial language with the intention of providing a universal language for humanity. This has a long tradition but seemed to gain most traction around the turn of the twentieth century with the introduction and relative success of Esperanto.

But artificial languages have been a failure as a vehicle of global understanding. Instead, in about the last 50 years, the movement for plain English has been taking the place of constructed languages as something on which people pinned their hopes for clear communication.

Most recently, there have been proposals suggesting that "simple" language should become part of a standard for accessibility of web pages alongside other accessibility standards issued by the W3C standards body: http://www.w3.org/WAI/RD/2012/easy-to-read/Overview. This post was triggered by this latest development.

Problem 1: Plain language vs. linguistics

The problem is that most proponents of plain language (like so many would-be reformers of human communication) seem to be ignorant of the wider context in which language functions. There is much that has been revealed by linguistic research in the last century or so and in particular since the 1960s that we need to pay attention to (to avoid confusion, this does not refer to the work of Noam Chomsky and his followers but rather to the work of people like William Labov, Michael Halliday, and many others).

Languages are not a simple matter of grammar. Any proposal for content accessibility must consider what is known about language from the fields of pragmatics, sociolinguistics, and cognitive linguistics. These are the key aspects of what we know about language collected from across many fields of linguistic inquiry:

  • Every sentence communicates much more than just its basic content (propositional meaning). We also communicate our desires and beliefs (e.g. "It's cold here" may communicate "Close the window", and "John denied that he cheats on his taxes" communicates that somebody accused John of cheating on his taxes. Similarly, choosing a particular form of speech, like slang or jargon, communicates belonging to a community of practice.)
  • The understanding of any utterance is always dependent on a complex network of knowledge about language, about the world, and about the context of the utterance. "China denied involvement." requires an understanding of the context in which countries operate and of metonymy, as well as of the grammar and vocabulary. Consider the knowledge we need to possess to interpret "In 1939, the world exploded." vs. "In Star Wars, a world exploded."
  • There is no such thing as purely literal language. All language is to some degree figurative. “Between 3 and 4pm.”, “Out of sight”, “In deep trouble”, “An argument flared up”, “Deliver a service”, “You are my rock”, “Access for all” are all figurative to different degrees.
  • We all speak more than one variety of our language: formal/informal, school/friends/family, written/spoken, etc. Each of these varieties has its own code. For instance, "she wanted to learn" vs. "her desire to learn" demonstrates a common difference between spoken and written English, where written English often uses clauses built around nouns.
  • We constantly switch between different codes (sometimes even within a single utterance).
  • Bilingualism is the norm in language knowledge, not the exception. About half the world’s population regularly speaks more than one language but everybody is “bi-lingual” in the sense that they deal with multiple codes.
  • The “standard” or “correct” English is just one of the many dialects, not English itself.
  • The difference between a language and a dialect is just as much political as linguistic. An old joke in linguistics goes: “A language is a dialect with an army and a navy.”
  • Language prescription and requirements of language purity (incl. simple language) are as much political statements as linguistic or cognitive ones. All language use is related to power relationships.
  • Simplified languages develop their own complexities if used by a real community through a process known as creolization. (This process is well described for pidgins but not as well for artificial languages.)
  • All languages are full of redundancy, polysemy and homonymy. It is the context and our knowledge of what is to be expected that makes it easy to figure out the right meaning.
  • There is no straightforward relationship between grammatical features and obfuscation or lack of clarity (e.g. it is just as easy to hide things using the active voice as the passive, or a Subject-Verb-Object sentence as an Object-Subject-Verb one).
  • It is difficult to call any one feature of a language universally simple (for instance, SVO word order or no morphology) because many other languages use what we call complex as the default without any increase in difficulty for the native speakers (e.g. use of verb prefixes/particles in English and German)
  • Language is not really organized into sentences but into texts. Texts have internal organization to hang together formally (John likes coffee. He likes it a lot.) and semantically (As I said about John. He likes coffee.) Texts also relate to external contexts (cross reference) and their situations. This relationship is both implicit and explicit in the text. The shorter the text, the more context it needs for interpretation. For instance, if all we see is “He likes it.” written on a piece of paper, we do not have enough context to interpret the meaning.
  • Language is not used uniformly. Some parts of language are used more frequently than others. But frequency alone is not the whole story: some parts of language are also used more frequently together than others. The frequent co-occurrence of some words with other words is called "collocation". This means that when we say "bread and …", we can predict that the next word will be "butter". You can check this with a linguistic tool like a corpus, or even by using Google's predictions in the search box (see the sketch after this list). Some words are so strongly collocated with other words that their meaning is "tinged" by those other words (this is called semantic prosody). For example, "set in" has a negative connotation because of its collocation with "rot".
  • All language is idiomatic to some degree. You cannot determine the meaning of all sentences just by understanding the meanings of all their component parts and the rules for putting them together. And vice versa, you cannot just take all the words and rules in a language, apply them and get meaningful sentences. Consider “I will not put the picture up with John.” and “I will not put up the picture with John.” and “I will not put up John.” and “I will not put up with John.”
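
To make the point about collocation a little more concrete, here is a minimal sketch of how such counts might be checked programmatically. It assumes Python with NLTK and its freely downloadable Brown corpus (a small corpus, so the counts will be modest); the helper name right_collocates is mine, and any corpus tool or concordancer would do the same job.

```python
# A minimal sketch of counting right-hand collocates with NLTK.
# Assumes the Brown corpus has been fetched with nltk.download('brown').
from collections import Counter
from nltk.corpus import brown

words = [w.lower() for w in brown.words()]

def right_collocates(node, window=1):
    """Count the words that appear within `window` positions after `node`."""
    counts = Counter()
    for i, w in enumerate(words):
        if w == node:
            counts.update(words[i + 1:i + 1 + window])
    return counts

# In a large enough corpus, 'and' and 'butter' should rank near the top;
# Brown is small, so expect modest numbers.
print(right_collocates("bread", window=2).most_common(10))
```

A web-scale corpus or an n-gram service would give far more reliable counts, but even this toy query shows how collocation can be measured rather than guessed at.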

It seems to me that most plain language advocates do not take most of these factors into account.

Some examples from the “How to write in plain English” guide: http://www.plainenglish.co.uk/files/howto.pdf.

Try to call the reader ‘you’, even if the reader is only one of many people you are talking about generally. If this feels wrong at first, remember that you wouldn’t use words like ‘the applicant’ and ‘the supplier’ if you were speaking to somebody sitting across a desk from you. [emphasis mine]

This example misses the point about the contextuality of language. The part in bold is the very crux of the problem. It is natural to use a different code (or register) with someone we’re speaking to in person and in a written communication. This is partly a result of convention and partly the result of the different demands of writing and speaking when it comes to the ability to point to what we’re speaking about. The reason it feels wrong to the writer is that it breaks the convention of writing. That is not to say that this couldn’t become the new convention. But the argument misses the point.

Do you want your letters to sound active or passive − crisp and professional or stuffy and bureaucratic?

Using the passive voice and sounding passive are not one and the same thing. This is an example of polysemy. The word "passive" has two meanings in English. One technical (the passive voice) and one colloquial ("he's too passive"). The booklet recommends that "The mine had to be closed by the authority. (Passive)" should be replaced with "The authority had to close the mine. (Active)" But they ignore the fact that word order also contributes to the information structure of the sentence. The passive sentence introduces the "mine" sooner and thus makes it clear that the sentence is about the mine and not the local authority. In this case, the "active" construction made the point of the sentence more difficult to understand.

The same is true of nominalization, another thing the Plain English campaign recommends against: "The implementation of the method has been done by a team." does not convey the same type of information as "A team has implemented the method."

The point is that this advice ignores the context as well as the audience. Using “you” instead of “customers” in “Customers have the right to appeal” may or may not be simpler depending on the reader. For somebody used to the conventions of written official English, it may actually take longer to process. But for someone who does not deal with written English very often, it will be easier. But there is nothing intrinsically easier about it.

Likewise for the use of jargon. The campaign gives as its first example of unduly complicated English:

High-quality learning environments are a necessary precondition for facilitation and enhancement of the ongoing learning process.

And suggests that we use this instead:

Children need good schools if they are to learn properly.

This may be appropriate when it comes to public debate but within the professional context of, say, policy communication, these 2 sentences are not actually equivalent. There are more “learning environments” than just schools and the “learning process” is not the same as having learned something. It is also possible that the former sentence appeared as part of a larger context that would have made the distinction even clearer but the page does not give a reference and a Google search only shows pages using it as an example of complex English. http://www.plainenglish.co.uk/examples.html

The How to write in plain English document does not mention coherence of the text at all, except indirectly when it recommends the use of lists. This is good advice but even one of their examples has issues. They suggest that the following is a good example of a list:

Kevin needed to take:
• a penknife
• some string
• a pad of paper; and
• a pen.

And at first glance it is, but lists are not just neutral replacements for sentences. They are a genre in their own right, used for specific purposes (Michael Hoey called them "text colonies"). Let's compare the list above to the sentence below.

Kevin needed to take a penknife, some string, a pad of paper and a pen.

Obviously they are two different kinds of text used in different contexts for different purposes, and this affects our understanding. The list implies instruction and a level of importance. It is suitable for an official document, for example something sent before a child goes to camp. But it is not suitable for a personal letter or even a letter from the camp saying "All Kevin needed to take was a penknife, some string, a pad of paper and a pen. He should not have brought a laptop." To be fair, the guide says to use lists "where appropriate", but does not say what that means.

The issue is further muddled by the “grammar quiz” on the Plain English website: http://www.plainenglish.co.uk/quiz.html. It is a hodgepodge of irrelevant trivia about language (not just grammar) that has nothing to do with simple writing. Although the Plain English guide gets credit for explicitly not endorsing petty peeves like not ending a sentence with a preposition, they obviously have peeves of their own.

Problem 2: Definition of simple is not simple

There is no clear definition of what constitutes simple and easy to understand language.

There are a number of intuitions and assumptions that seem to be made when both experts and lay people talk about language:

  • Shorter is simpler (fewer syllables, characters or sounds per word, fewer words per sentence, fewer sentences per paragraph)
  • More direct is simpler (X did Y to Z is simpler than Y was done to Z by X)
  • Less variety is simpler (fewer different words)
  • More familiar is simpler

These assumptions were used to create various measures of “readability” going back to the 1940s. They consisted of several variables:

  • Length of words (in syllables or in characters)
  • Length of sentences
  • Frequency of words used (both internally and with respect to their general frequency)

Intuitively, these are not bad measures, but they are only proxies for the assumptions. They say nothing about the context in which the text appears or the appropriateness of the choice of subject matter. They say nothing about the internal cohesion and coherence of the text. In short, they say nothing about the “quality” of the text.

The same thing is not always simple in all contexts, and sometimes what is too simple can be hard. We could see that in the example of lists above. Having a list instead of a sentence does not always make things simpler because a list is doing other work besides just providing a list of items.

Another example I always think about is the idea of “semantic primes” by Anna Wierzbicka. These are concepts like DO, BECAUSE, BAD believed to be universal to all languages. There are only about 60 of them (the exact number keeps changing as the research evolves). These were compiled into a Natural Semantic Metalanguage with the idea of being able to break complex concepts into them. Whether you think this is a good idea or not (I don’t but I think the research group working on this are doing good work in surveying the world’s languages) you will have to agree that the resulting descriptions are not simple. For example, this is the Natural Semantic Metalanguage description of “anger”:

anger (English): when X thinks of Y, X thinks something like this: “this person did something bad; I don’t want this; I would want to do something bad to this person”; because of this, X feels something bad

This seems like a fairly complicated way of describing anger and even if it could be universally understood, it would also be very difficult to learn to do this. And could we then capture the distinction between this and say “seething rage”? Also, it is clear that there is a lot more going on than combining 60 basic concepts. You’d have to learn a lot of rules and strategies before you could do this well.

Problem 3: Automatic measures of readability are easily gamed

There are about half a dozen automated readability measures currently used by software and web services to calculate how easy or difficult it is to read a text.

I am not an expert in readability but I have no reason to doubt the references in Wikipedia claiming that they correlate fairly well overall with text comprehension. But as always correlation only tells half the story and, as we know, it is not causation.

It is not at all clear that the texts identified as simple based on measures like number of words per sentence or numbers of letters per word are actually simple because of the measures. It is entirely possible that those measures are a consequence of other factors that contribute to simplicity, like more careful word choice, empathy with an audience, etc.

This may not matter if all we are interested in is identifying simple texts, as you can do with an advanced Google search. But it does matter if we want to use these measures to teach people how to write simpler texts. Because if we just tell them use fewer words per sentence and shorter words, we may not get texts that are actually easier to understand for the intended readership.

And if we require this as a criterion of page accessibility, we open the system to gaming in the same way Google's algorithms are gamed, but without any of the sophistication. You can reduce the complexity of any text on any of these scores simply by replacing all commas with full stops, or even by randomly inserting full stops every 5 words and putting spaces in the middle of words. The algorithms are not smart enough to catch that.
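
To illustrate just how crude that gaming can be, here is a rough sketch in Python. It uses the standard Flesch-Kincaid grade-level formula with a deliberately crude vowel-group syllable counter (real readability tools count syllables more carefully, but the effect is the same), and the example sentence is mine, not taken from any of the sites mentioned.

```python
# A rough sketch of gaming a readability formula by splitting sentences.
# Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re

def syllables(word):
    # Crude heuristic: count groups of vowels; real tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syls / len(words) - 15.59

original = ("The committee concluded that the proposal, although ambitious, "
            "could not be funded in the current financial year.")
gamed = original.replace(",", ".")  # 'simplify' by turning every comma into a full stop

print(round(fk_grade(original), 1))  # around grade 12
print(round(fk_grade(gamed), 1))     # drops several grades, though the text is no simpler
</antml```

Nothing about the second version is easier to read; the formula simply rewards shorter "sentences".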

Also, while these measures may be fairly reliable in aggregate, they don’t give us a very good picture of any one individual text. I took a blog post from the Campaign for Plain English site http://www.plainenglish.co.uk/news/chrissies-comments.html and ran the text through several websites that calculate ease of reading scores:

The different tests ranged by up to 5 years in their estimates of the length of formal education required to understand the text, from 10.43 to 15.57. Read-able.com even went as far as providing an average, coming up with 12. Well, that doesn't seem very reliable.

I preferred http://textalyser.net, which just gives you the facts about the text and doesn't try to summarize them. The same goes for the Plain English campaign's own little app, which you can download from their website: http://www.plainenglish.co.uk/drivel-defence.html.

By any of these measures, the text wasn’t very simple or plain at all. The longest sentence had 66 words because it contained a complex embedded clause (something not even mentioned in the Plain English guide). The average sentence length was 28 words.

The Plain English app also suggested 7 alternative words from their “alternative dictionary” but 5 of those were misses because context is not considered (e.g. “a sad state” cannot be replaced by “a sad say”). The 2 acceptable suggestions were to edit out one “really” and replace one “retain” with “keep”. Neither of which would have improved the readability of the text given its overall complexity.

In short, the accepted measures of simple texts are not very useful for creating simple texts or for training people in creating them.

See also http://en.wikipedia.org/w/index.php?title=Readability&oldid=508236326#Using_the_readability_formulas.

See also this interesting study examining the effects for L2 instruction: http://www.eric.ed.gov/PDFS/EJ926371.pdf.

Problem 4: When simple becomes a new dialect: A thought experiment

But let's consider what would happen if we did agree on simple English as the universal standard for accessibility and did actually manage to convince people to use it. In short, it would become its own dialect. It would acquire ways of describing things it was not designed to describe. It would acquire its own jargon and its own ways of obfuscation. There would arise a small industry of experts teaching you how to say what you want to say, or don't want to say, in this new simple language.

Let’s take a look at Globish, a simplified English intended for international communication, that I have seen suggested as worth a look for accessibility. Globish has a restricted grammar and a vocabulary of 1500 words. They helpfully provide a tool for highlighting words they call “not compatible with Globish”. Among the words it highlighted for the blog post from the Plain English website were:

basics, journalist, grandmother, grammar, management, principle, moment, typical

But even the transcript of a speech by its creator, Jean-Paul Nerriere, advertised as being completely in Globish, contained some words flagged up as incompatible:

businessman, would, cannot, maybe, nobody, multinational, software, immediately
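
Mechanically, a highlighter of this kind does little more than check list membership, as the toy sketch below shows. The 'allowed' set here is a made-up stand-in, not the actual 1500-word Globish list, and the function name is mine.

```python
# A toy sketch of a restricted-vocabulary highlighter: flag any word that is
# not on the allowed list. No morphology, no word senses, no collocations.
import re

# A made-up stand-in for a controlled vocabulary, NOT the real Globish list.
allowed = {"the", "children", "need", "good", "schools", "if", "they", "are", "to", "learn"}

def flag_incompatible(text):
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in allowed]

print(flag_incompatible("The children need good schools if they are to learn pedagogy"))
# -> ['pedagogy']
```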

Globish seems to be based on not much more than guesswork. It has words like "colony" and "rubber" but not words like "temperature" or "notebook", "appoint" but not "appointment", "govern" but not "government". But both of the derived forms "appointment" and "government" are more frequent (and intuitively more useful) than the root forms. There is a chapter in the eBook called "1500 Basic Globish Words Father 5000", so I assume there are some rules for derivation, but the derived forms more often than not have very "idiomatic" meanings. For example, "appointment" in its most common use does not make any sense if we look at the core meanings of "appoint" and the suffix "-ment". Consider also the difference between "govern" and "government" vs. "enjoy" and "enjoyment".

Yet, Globish supposedly avoids idioms, cultural references, etc. Namely all the things that make language useful. The founder says:

Globish is correct English without the English culture. It is English that is just a tool and not a whole way of life.

Leaving aside the dubious notion of correctness, this would make Globish a very limited tool indeed. But luckily for Globish it’s not true. Why have the word “colony” if not to reflect cultural preference? If it became widely used by a community of speakers, the first thing to happen to Globish would be a blossoming of idioms going hand in hand with the emergence of dialects, jargons and registers.

That is not to say that something like Globish could not be a useful tool for English learners along the way to greater mastery. But it does little for universal accessibility.

Also, we need to ask ourselves what it would be like from the perspective of the users creating these simplified texts. They would essentially have to learn a whole new code, a sort of dialect. And as with any second-language learning, some would do it better than others. Some would become the "simple nazis". Some would get jobs teaching others "how to" speak simple. It is not natural for us to speak simply and "plainly" as defined in the context of accessibility.

There is some experience with the use of controlled languages in technical writing and in writing for second language acquisition. This can be done but the universe of subjects and/or the group of people creating these texts is always extremely limited. Increasing the number of people creating simple texts to pretty much everybody would increase the difficulty of implementation exponentially. And given the poor state of automatic tools for analysis of “simplicity”, quality control is pretty much out of reach.

But would even one code/dialect suffice? Do we need one for technical writing, government documents, company filings? Limiting the vocabulary to 1500 words is not a bad idea, but as we saw with Globish, it might need to be a different 1500 words for each area.

Why is language inaccessible?

Does that mean we should give up on trying to make communication more accessible? Definitely not. The same processes that I described as standing in the way of a universal simple language are also at the root of why so much language is inaccessible. Part of how language works is to create group cohesion, which includes keeping some people out. A lot of "complicated" language is complicated because the nature of the subject requires it, and a lot of complicated language is complicated because the writer is not very good at expressing themselves.

But just as much complicated language is complicated because the writer wants to signal belonging to a group that uses that kind of language. The famous Sokal Hoax provided an example of that. Even instructions on university websites on how to write essays are an example. You will find university websites recommending something like "To write like an academic, write in the third person." This is nonsense; research shows that academics write as much in the first as in the third person. But it also makes the job of the people marking essays easier. They don't have to focus on ideas, they just go by superficial impression. Personally, I think this is a scandal and a complete failure of higher education to live up to its own hype, but that's a story for another time.

How to achieve simple communication?

So what can we do to avoid making our texts too inaccessible?

The first thing the accessibility community will need to do is acknowledge that simple language is its own form of expression. It is not the natural state we get when we strip all the artifice out of our communication. And learning how to communicate simply requires effort and practice from all individuals.

To help with the effort, most people will need some guides. And despite what I said about the shortcomings of the Plain English Guide above, it’s not a bad place to start. But it would need to be expanded. Here’s an example of some of the things that are missing:

  • Consider the audience: What sounds right in an investor brochure won’t sound right in a letter to a customer
  • Increase cohesion and coherence by highlighting relationships
  • Highlight the text structure with headings
  • Say new things first
  • Consider splitting out subordinate clauses into separate sentences if your sentence gets too long
  • Leave all the background and things you normally start your texts with for the end

But it will also require a changed direction for research.

Further research needs for simple language

I don’t pretend to have a complete overview of the research being done in this area but my superficial impression is that it focuses far too much on comprehension at the level of clause and sentence. Further research will be necessary to understand comprehension at the level of text.

There is need for further research in:

  • How collocability influences understanding
  • Specific ways in which cohesion and coherence impact understanding
  • The benefits and downsides of elegant variation for comprehension
  • The benefits and downsides of figurative language for comprehension by people with different cognitive profiles
  • The processes of code switching during writing and reading
  • How new conventions emerge in the use of simple language
  • The uses of simple language for political purposes including obfuscation

[Updated for Ariadne article mentioned above:] In more detail, this is what I would like to see for some of these points.

How collocability influences understanding: How word and phrase frequency influences understanding, with a particular focus on collocations. The assumption behind software like TextHelp is that this is very important. Much research is available on the importance of these patterns from corpus linguistics, but we need to know the practical implications of these properties of language both for text creators and consumers. For instance, should text creators use measures of collocability to judge ease of reading and comprehension in addition to, or instead of, arbitrary measures like sentence and word lengths?

Specific ways in which cohesion and coherence affect understanding: We need to find the strategies challenged readers use to make sense of larger chunks of text. How they understand the text as a whole, how they find specific information in the text, how they link individual portions of the text to the whole, and how they infer overall meaning from the significance of the components. We then need to see what text creators can do to assist with these processes. We already have some intuitive tools: bullets, highlighting of important passages, text insets, text structure, etc. But we do not know how they help people with different difficulties and whether they can ever become a hindrance rather than a benefit.

The benefits and downsides of elegant variation for comprehension, enjoyment and memorability: We know that repetition is an important tool for establishing the cohesion of text in English. We also know that repetition is discouraged for stylistic reasons. Repetition is also known to be a feature of immature narratives (children under the age of about 10) and more “sophisticated” ways of constructing texts develop later. However, it is also more powerful in spoken narrative (e.g. folk stories). Research is needed on how challenged readers process repetition and elegant variation and what text creators can do to support any naturally developing meta textual strategies.

The benefits and downsides of figurative language for comprehension by people with different cognitive profiles: There is basic research available from which we know that some cognitive deficits lead to reduced understanding of non-literal language. There is also ample research showing how crucial figurative language is to language in general. However, there seems to be little understanding of how and why different deficits lead to problems with processing figurative language, or of what kinds of figurative language cause difficulties. It is also not clear what types of figurative language are particularly helpful for challenged readers with different cognitive profiles. Work is needed on a typology of figurative language and a typology of figurative language deficits.

The processes of code switching during writing and reading: Written and spoken English employ very different codes, in some ways even reminiscent of different language types. This includes much more than just the choice of words. Sentence structure, clauses, grammatical constructions, all of these differ. However, this difference is not just a consequence of the medium of writing. Different genres (styles) within a language may be just as different from one another as writing and speaking. Each of these comes with a special code (or subset of grammar and vocabulary). Few native speakers ever completely acquire the full range of codes available in a language with extensive literacy practices, particularly a language that spans as many speech communities as English. But all speakers acquire several different codes and can switch between them. However, many challenged writers and readers struggle because they cannot switch between the spoken codes they are exposed to through daily interactions and the written codes to which they are often denied access because of a print impairment. Another way of describing this is multiple literacies. How do challenged readers and writers deal with acquiring written codes and how do they deal with code switching?

How do new conventions emerge in the use of simple language? Using and accessing simple language can only be successful if it becomes a separate literacy practice. However, the dissemination and embedding of such practices into daily usage are often accompanied by the establishment of new codes and conventions of communication. These codes can then become typical of a genre of documents. An example of this is Biblish. A sentence such as "Fred spoke unto Joan and Karen" is easily identified as referring to a mode of expression associated with the translation of the Bible. Will similar conventions develop around "plain English" and how? At the same time, it is clear that within each genre or code, there are speakers and writers who can express themselves more clearly than others. Research is needed to establish whether there are common characteristics to be found in these "clear" texts, as opposed to those inherent in "difficult" texts, across genres.

All in all, introducing simple language as a universal accessibility standard is still too far from a realistic prospect. My intuitive impression based on documents I receive from different bureaucracies is that the “plain English” campaign has made a difference in how many official documents are presented. But a lot more research (ethnographic as well as cognitive) is necessary before we properly understand the process and its impact. Can’t wait to read it all.


Literally: Triumph of pet peeve over matter


I have a number of pet peeves about how people use language. I am genuinely annoyed by the use of apostrophes before plurals of numerals or acronyms, like 50's or ABC's. But because I understand how language works, I keep my mouth shut. The usage has obviously moved on. I don't think ABC's is wrong or confusing, I just don't like the way it looks. But I don't like a lot of things that there's nothing wrong with. I get over it.

Recently I came across a couple of blog posts pontificating on the misuse or overuse of the word literally. And as usual they confuse personal dislike with incorrect or confusing usage. So let's set the record straight! No matter what some dictionaries or people who should know better say, the primary function of the word "literally" in the English language is to intensify the meaning of figurative, potentially figurative or even non-figurative expressions. This is not some colloquial appendage to the meaning of the word. That's how it is used in standard English today. Written, edited and published English! Frequently, it is used to intensify expressions that are for all intents and purposes non-figurative or where the figurative nature of the expression can only be hypothesized:

1. “Bytches is literally a record of life in a nineties urban American community.” [BNC]

2. “it’s a a horn then bassoon solo, and it it’s a most worrying opening for er a because it is. it is literally a solo, er unaccompanied” [BNC]

3. “The evidence that the continents have drifted, that South America did indeed break away from Africa for instance, is now literally overwhelming” [BNC, Richard Dawkins]

The TIME magazine corpus can put paid to the nonsense about "literally" as an intensifier being new or colloquial. The use of the word in all functions does show an increase from the 1940s, a peak in the 1980s, and a return in the 2000s to the level of the 1950s. I didn't do the counting (plus it's often hard to decide) but at a glance the proportion of intensifier uses is, if anything, slightly higher in the 1920s than in the 2000s:

4. This is almost literally a scheme for robbing Peter to pay Paul. [TIME, 1925]

5. He literally dropped the book which he was reading and seized his sabre. [TIME, 1926]

6. The Tuchuns-military governors are literally heads of warring factions. [TIME, 1926]

But there are other things that indicate that the intensifier use of literally is what is represented in people’s linguistic knowledge. Namely collocations. Some of the most common adverbs preceding literally (first 2 words in COCA) are graded: 1. quite (558), 2. almost (119), 5. so (67), 7. too (54),  9. sometimes (42), 12. more, 15. very, 16. often.

7. Squeezed almost literally between a rock and a hard place, the artery burst. [COCA, SportsIll, 2007]

Another common adverbial collocate is “just” (number 4) often used to support the intensification:

8. they eventually went missing almost just literally a couple of minutes apart from one another [COCA, CNN, 2004]

Other frequent collocates are non-gradual: “up”, “down”, “out”, “now” but their usage seems coincidental – simply to be attributed to their generally high frequency in English.
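
For anyone who wants to check claims like these themselves, here is a minimal sketch of the query run against a freely available tagged corpus. COCA itself cannot be downloaded, so this uses NLTK's Brown corpus, where 'literally' is rare and the counts will be tiny; the point is the method, not the numbers.

```python
# A minimal sketch of counting which adverbs immediately precede 'literally'.
# Assumes nltk.download('brown') and nltk.download('universal_tagset').
from collections import Counter
from nltk.corpus import brown

tagged = brown.tagged_words(tagset="universal")

preceding_adverbs = Counter(
    prev.lower()
    for (prev, prev_tag), (word, _tag) in zip(tagged, tagged[1:])
    if word.lower() == "literally" and prev_tag == "ADV"
)

print(preceding_adverbs.most_common(10))
```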

The extremely unsurprising finding is that if we don't limit the collocates to just the 2 preceding words, by far the most common collocate of literally is "figuratively" (304), used exclusively as part of "literally and figuratively". This should count as its own use:

9. A romantic tale of love between two scarred individuals, one literally and one figuratively. [COCA, ACAD, 1991]

But even here, sometimes both possible senses of the use are figurative but one is perceived as being less so:

10. After years of literally and figuratively being the golden-haired boy… [COCA, NEWS, 1990]

11. Mercy’s parents had pulled the plug, literally and figuratively, on her burgeoning romance. [COCA, Fic, 2003]

This brings us to the secondary function (and notice I don't use the word meaning here) of "literally", which is to disambiguate statements that in the appropriate context could have either a figurative or a literal meaning. Sometimes we can apply a relatively easy test to differentiate between the two: the first sense cannot be rephrased using the adjective "literal". However, as we saw above, a statement cannot always be strictly categorized as literal or figurative. For instance, example (2) above contains a disambiguating function, although it is not between figurative and non-figurative but rather between two non-figurative interpretations of two situations that it may be possible to describe as a 'solo' (one where the soloist is prominent against background music and one where the soloist is completely unaccompanied). Clear examples are not nearly as easy to find in a corpus as the prescriptivist lore would have us believe, and neither is the figurative part clear cut:

11. And they were about literally to be lynched and they had to save their lives. [COCA, SPOK, 1990]

12. another guy is literally a brain surgeon [COCA, MAG, 2010)

Often the trope does not include a clear domain mapping, as in the case of hyperbole.

13. I was terrified of women. Literally. [COCA, LIFE, 2006]

This type of disambiguation is often used with numerals and other quantifiers where a hyperbolic interpretation might be expected:

14. this is an economy that is generating literally 500,000 jobs because of our foreign trade [COCA, SPOK, PBS, 1996]

15. While there are literally millions of mobile phones that consumers and business people use [COCA, MAG, 2008]

16. “Then literally after two weeks I’m ready to come back,” he says. [COCA, MAG, 2010]

Or sometimes it is not clear whether some vague figure is being intensified or a potential trope is being disambiguated as in:

17. He was the man who lost his wife when his house literally broke apart in the storm. [COCA, CNN, 2005]

These types of examples also sometimes occur when the speaker realizes that what they had previously only intended as an intensified use is an actual disambiguating use:

18. will allow us to actually literally X-ray the universe using these distant objects

Another common usage is to indicate a word for word translation from a foreign language or a component analysis of an etymology of a word. E.g.

19. theory of  revolution (literally,  an overturning) [BNC].

Sometimes this explanation includes side elaboration as in

20. “Ethnography – literally, textual description of particular cultures” [BNC].

"Literally" also has a technical sense meaning roughly "not figuratively" but that has nothing to do with its popular usage. I could not find any examples of this in the corpus.

The above is far from an exhaustive analysis. If I had the time or inclination, we could fine tune the categories but it’s not all that necessary. Everyone should get the gist. “Literally” is primarily an intensifier and secondarily a disambiguator. And categorizing individual uses between these two functions is a matter of degree rather than strict complementarity.

None of the above is hugely surprising, either. “Literally” is a pretty good indicator that figurative language is nearby and a less good indicator that strict fact is in the vicinity. Andrew Goatly has described the language of metaphor including “literally” in his 1997 book. And the people behind the ATT-META Project tell me that they’ve been using “literally” as one of the indicators of metaphoric language.

Should we expect bloggers on language to have read widely on metaphor research? Probably not. But by now I would expect any language blogger to know that to look up something in a dictionary doesn’t tell them much about the use of the word (but a lot about the lexicographer) and the only acceptable basis for argumentation on the usage of words is a corpus (with some well recognized exceptions).

The "Literally Blog" that ran out of steam in 2009 was purportedly started by linguistics graduates who surely cannot have gotten very far past Prescriptivism 101. But their examples are often amusing, as are the ones on the picture site Litera.ly, which has great and funny pictures even if they are often more figurative than the phrases they attempt to literalize. Another recent venture, "The literally project", was started by a comedian with the Twitter account @literallytsar, who is also very funny. Yes, indeed, as with so many expressions, if we apply an alternative interpretation to them, we get a humorous effect. But what two language bloggers thought they were doing when they put out this and this on "literally", I don't know. It was started by Sentence first, who listed all the evidence to the contrary gathered by the Language Log and then went on to ignore it in the conclusion:

Literally centuries of non-literal ‘literally’ « Sentence first. Few would dispute that literally, used non-literally, is often superfluous. It generally adds little or nothing to what it purports to stress. Bryan Garner has described the word in some of its contemporary usages as “distorted beyond recognition”.

Well, this is pretty much nonsense. You see, "pretty much" in the previous sentence was a hedge. Hedges, like intensifiers, might be considered superfluous. But I chose to use that instead of a metaphor such as "pile of garbage". The problem with this statement is twofold. First, no intensifiers add anything to what they intensify. Except for intensification! What if we used "really" or "actually"? What do they add that "literally" doesn't? And what about euphemisms and so many other constructions that never add anything to any meaning? Steven Pinker in his recent RSA talk listed 18 different words for "feces". Why have that many when "shit" would suffice?

Non-literal literally amuses, too, usually unintentionally. The more absurd the literal image is, the funnier I tend to find it. And it is certainly awkward to use literally and immediately have to backtrack and qualify it (“I mean, not literally, but…”). Literally is not, for the most part, an effective intensifier, and it annoys a lot of people. Even the dinosaurs are sick of it.

What is the measure of the effectiveness of an intensifier? The examples above seem to show that it does a decent job. And annoying a lot of prescriptivists should not be an argument for not using it. These people are annoyed by pretty much anything that strikes their fancy. We should annoy them. Non-sexist language also annoys a lot of people. All the more reason for using it.

“Every day with me is literally another yesterday” (Alexander Pope, in a letter to Henry Cromwell)

For sure, words change their meanings and acquire additional ones over time, but we can resist these if we think that doing so will help preserve a useful distinction. So it is with literally. If you want your words to be taken seriously – at least in contexts where it matters – you might see the value in using literally with care.

But this is obviously not a particularly useful distinction and never has been. The crazier the non-intensifier interpretation of an intensifier use of "literally" is, the less potential for confusion there is. But I could not find a single example where it really mattered in the more subtle cases. But if we think this sort of thing is important, why not pick on other intensifiers such as "really", "virtually" or "actually"? (Well, some people do.) My hypothesis is that a lot of prescriptivists like the feeling of power, and "literally" is a particularly useful tool for subjugating those who are unsure of their usage (often because of a relentless campaign by the prescriptivists). It's very easy to show someone the "error" of their ways when you can present two starkly different images. And it feels like this could lead to a lot of confusion. But it doesn't. This is a common argument of the prescriptivists, but they can rarely support the assertion with more than a couple of examples, if any. So unless a prescriptivist can show at least 10 examples where this sort of ambiguity led to a real, consequential misunderstanding in the last year, they deserve to be told to just shut up.

Which is why I was surprised to see Motivated Grammar (a blog dedicated to the fighting of prescriptivism) jump into the fray:

Non-literal “literally” isn’t wrong. That said… « Motivated Grammar Non-literal literally isn’t “wrong” — it’s not even non-standard. But it’s overused and overdone. I would advise (but not require) people to avoid non-literal usages of literally, because it’s just not an especially good usage. Too often literally is sound and fury that signifies nothing.

Again, I ask: what is the evidence for what constitutes good usage? It has been good enough for TIME Magazine for close to a century! Should we judge correct usage by the New York Review of Books? And what's wrong with "sound and fury that signifies nothing"? How many categories of expressions would we have to purge from language if this were the criterion? I already mentioned hedges. What about half the adverbs? What about adjectives like "good" or "bad"? Often they describe nothing. Just something to say. "How are you?", "You look nice.", "Love you" – off with their heads!

And then, what is the measure of "overused"? TIME Magazine uses the word in total about 200-300 times a decade. That's not even once per issue. Eric Schmidt used it in some speeches over his 10-year tenure as Google's CEO, and if you watch them all together, it stands out. Otherwise nobody's noticed! If you're a nitpicker who thinks it matters, every use of "literally" is going to sound like too much. So, you don't count. Unless you have an objective measure across the speech community, you can't make this claim. Sure, lots of people have their favorite turns of phrase that are typical of their speech. I rather suspect I use "in fact" and "however" far too much. But that's not the fault of the expression. Nor is it really a problem, until it forces listeners to focus on that rather than the speech itself. But even then, they get by. Sometimes expressions become "buzz words" and "symbols of their time", but as the TIME corpus evidence suggests, this is not the case with literally. So WTF?

Conciliatory confession:

I just spent some time going after prescriptivists. But I don’t actually think there’s anything wrong with prescriptivism (even though prescriptivists’ claims are typically factually wrong). Puristic and radical tendencies are a part of any speech community. And as my former linguistics teacher and now friend Zdeněk Starý once said, they are both just as much a part of language competence as the words and grammatical constructions. So I don’t expect they will ever go away, nor can I really be too critical of them. They are part of the ecosystem. So as a linguist, I think of them as a part of the study of language. However, making fun of them is just too hard to resist. Also, it’s annoying when you have to beat this nonsense out of the heads of your students. But that’s just the way things are. I’ll get over it.

Update 1:

Well, I may have been a bit harsh on the blogs and bloggers I was so disparaging about. Both Sentence first and Motivated Grammar have a fine pedigree in language blogging. I went and read the comments under their posts and they both profess anti-prescriptivism. But I stand behind my criticism, savage as it was, of the paragraphs I quoted above. There is simply no plausible deniability about them. You can never talk about good usage and avoid prescriptivism. You can only talk about patterns of usage. And if you want to quantify these, you must use some sort of representative sample. Not what you heard. Not what you or people like you say. Evidence. Such as a corpus (or better still, corpora) can provide. So saying you shouldn’t use literally because a lot of people don’t like it needs evidence. But what evidence there is suggests that literally isn’t that big a deal. I did three Google searches on common peeves and “literally” came third: +literally +misuse (910,000), preposition at the end of a sentence (1,620,000), and +passive +misuse writing (6,630,000). Obviously, these numbers mean relatively little and can include all sorts of irrelevant examples, but they are at least suggestive. Then I did a search for top 10 grammar mistakes and looked at the top 10 results. Literally did not feature in any of them. Again, this is not a reliable measure, but it is at least suggestive. I’m waiting for some evidence to show where the confusion over the intensifier and disambiguator uses has caused a real problem.

I also found an illuminating article in Slate by Jesse Sheidlower on other examples of ‘contranyms’ in English, showing that picking on “literally” is quite an arbitrary enterprise.

Update 2:

A bit of corpus fun revealed some other interesting collocate properties of literally. There are some interesting patterns within individual parts of speech (a sketch of how such counts might be reproduced follows the lists below). The top 10 adjectives immediately following are:

  1. TRUE 91
  2. IMPOSSIBLE 24
  3. STARVING 14
  4. RIGHT 9
  5. SICK 8
  6. UNTHINKABLE 8
  7. ALIVE 6
  8. ACCURATE 6
  9. HOMELESS 6
  10. SPEECHLESS 6
The top nouns are mostly quantifiers:
  1. HUNDREDS 152
  2. THOUSANDS 118
  3. MILLIONS 55
  4. DOZENS 35
  5. BILLIONS 17
  6. HOURS 14
  7. SCORES 14
  8. MEANS 11
  9. TONS 11

The top 10 numerals (although here we may run up against the limitations of the tagger) are:

  1. ONE 25
  2. TWO 12
  3. TENS 9
  4. THREE 8
  5. SIX 8
  6. 10 7
  7. 100 7
  8. 24 6
  9. NEXT 5
  10. FIVE 4

These are the top 10 adverbs immediately following:

  1. JUST 91
  2. OVERNIGHT 24
  3. ALMOST 19
  4. ALL 17
  5. EVERYWHERE 14
  6. NEVER 13
  7. DOWN 12
  8. RIGHT 12
  9. SO 11
  10. ABOUT 11

And the top 10 preceding adverbs:

  1. QUITE 552
  2. ALMOST 117
  3. BOTH 91
  4. JUST 67
  5. TOO 50
  6. SO 38
  7. MORE 37
  8. VERY 31
  9. SOMETIMES 30
  10. NOW 26
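
For anyone who wants to poke at this themselves, here is a rough sketch of how collocate counts of this kind might be reproduced on a freely available tagged corpus (the Brown corpus via NLTK). The figures in the lists above come from a different and much larger corpus, so the numbers will not match; “JJ” and “NN” are the Brown tagset’s labels for adjectives and nouns.

# A rough sketch, assuming the NLTK Brown corpus; counts will differ from the lists above.
from collections import Counter
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)

def following_collocates(target, tag_prefix, n=10):
    """Count words with a given POS-tag prefix occurring immediately after `target`."""
    counts = Counter()
    for sent in brown.tagged_sents():
        for (w1, _), (w2, t2) in zip(sent, sent[1:]):
            if w1.lower() == target and t2.startswith(tag_prefix):
                counts[w2.upper()] += 1
    return counts.most_common(n)

print(following_collocates("literally", "JJ"))  # adjectives right after "literally"
print(following_collocates("literally", "NN"))  # nouns right after "literally"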

One of the patterns in the collocates suggests that “literally” often (although this is only a significant minority of uses) has to do with scale or measure. So I wondered whether it is possible to use the intensifier literally incorrectly, in the sense that most speakers would find the intensity inappropriate. For example, is it OK to intensify the height of a person in any proportion? Is there a difference between “He was literally 6 feet tall” (disambiguator), “He was literally seven feet tall” (intensifier requiring further disambiguation) and “He was literally 12 feet tall” (intensifier)? The corpus had nothing to say on this, but Google provided some information. Among the results of the search “literally * feet tall” referring to people, the most prominent heights related to literally are 6 or 7 feet. There are some literally 8 feet tall people and people literally taller because of some extension to their height (stilts, helmets, spikes, etc.). But (as I thought would be the case) there seem to be no sentences like “He was literally 12 feet tall.”

So it seems “literally” isn’t used entirely arbitrarily with numbers and scales. Although it is rarely used to mean “actually, verifiably, precisely”, it is used in proportion to the scale of the thing measured. However, it is used both when a suspicion of hyperbole may arise and when a plausible number needs to be intensified – and most often a mixture of both. But it is not entirely random. “*Literally thousands of people showed up for dinner last night” or “*We had literally a crowd of people” is “ungrammatical”, while “literally two dozen” is OK even if the actual number was only 18. But this is all part of the speakers’ knowledge of usage. Speakers know that with quantifiers, the use of literally is ambiguous. So if you wanted to say “The guy sitting next to me was literally 7 feet tall”, you’d have to disambiguate and say “And I mean literally, he is the third tallest man in the NBA.” A rough sketch of how this sort of pattern check might be run over a plain-text corpus follows.
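
Here, as promised, is a hypothetical sketch of how the “literally * feet tall” check might be run over a plain-text corpus instead of eyeballing Google results. The file name my_corpus.txt is made up – substitute whatever large text collection you happen to have at hand.

# A hypothetical sketch; "my_corpus.txt" is a made-up stand-in for any large plain-text corpus.
import re
from collections import Counter

with open("my_corpus.txt", encoding="utf-8") as f:
    corpus_text = f.read()

# Capture whatever stands between "literally" and "feet tall" (digits or number words).
pattern = re.compile(r"\bliterally\s+([\w-]+)\s+feet\s+tall\b", re.IGNORECASE)
heights = Counter(m.group(1).lower() for m in pattern.finditer(corpus_text))

# If the hunch above holds, plausible heights (six, seven) should dominate
# and outlandish ones (twelve, twenty) should be rare or absent.
print(heights.most_common(10))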


Why ideas aren’t enough to solve the Palestine-Israeli conflict


An advertising agency is trying to solve a bloody conflict. This is presumptuous on such a scale that it could be called idiotic. Quoth http://www.theimpossiblebrief.com:

“Rather than ‘out of date’ policies, we need ‘out of the box’ solutions. Let’s show the world that creative minds at their best can inspire even political leaders.”

Assuming that there’s an idea out there about resolving this conflict that no one’s ever thought of is nonsense. We could call this assumption – that simple ideas can solve difficult problems – the “TED Syndrome” (btw: I love TED talks, even if I share Stephen Downes’ misgivings about the TED ‘elite’). Simple solutions to complex problems exist, but they are very rare, and when we hear about them, we are seldom told the whole story. Generally, it is safe to assume that a solution will need to be of a complexity proportional to the problem.

What Saatchi and Saatchi are so ineptly asking for could be thought of as a kind of metaphor hacking. But could metaphor hacking done right find a solution to the Palestine-Israeli conflict?

Short answer: No.

And now for the long answer. Metaphor hacking can’t solve anything. There are never any magical conceptual ways out of configurationally difficult situations. Metaphor hacking can provide insight and direction for individuals or groups (see the paintbrush example), but it has to be followed by hard work, both real and conceptual – I would, simplistically, call this ‘propositional’ work. On its own, insight (whatever its source) achieves nothing.

Let’s try a few small hacks and see how far we get.

Although it can certainly be helpful to be aware of the conceptualizations that are involved, this awareness doesn’t necessarily give us power over them (I know a stick half-immersed in water is not broken, but no power on Earth will make me see it so; I know that there is no up and down for the globe, yet seeing a map with Africa on top will seem strange). First, metaphor is not the only conceptual structure involved in how people understand this situation. Metaphor (and its brethren) are mental structures relying on similarity. We also need to look at structures of contiguity (metonymy) and add other conceptual structuring devices that are propositional, imagistic and textual.

Let’s start at the end. I purposely entitled the problem the Palestine-Israeli conflict. Logically, it shouldn’t matter: conflict is a commutative relationship – if I’m in conflict with you, you are in conflict with me. But we have textual iconicity: the thing that matters most comes first in real life, and therefore we tend to put the more important thing first in language as well. That’s why we are instructed to say politely “Ladies and gentlemen”, which only underscores the hidden sexism behind “boys and girls”, “men and women”, etc. So a small hack for all involved: make sure you always describe the conflict with an iconicity that goes against your natural inclinations. This is not going to solve anything, but it might keep you more attuned to your own possible prejudices.

We can also hack the “we were here first” trope. Now remember, there’s no hidden metaphorical solution. But if we can accept that our understanding of “claim by primacy” is structured by a number of source domains from which no perfect mappings exist, we can perhaps invest the claims with a bit less weight. The only way to settle this argument would be to close off or designate as illegitimate some of these source domains. But since such closing off is always the result of the application of power and not some disembodied logic, this is not the right way to go about it. So a useful hack would be to list all the possible source domains for understanding the domain of “we were here first, therefore we have a claim to this X”, draw all the mappings from the obvious to the ridiculous and see how easily challenged any such claim must always be (or maybe we’ll find that one side has many more favorable mappings than the other but I don’t think that’s very likely).

Can we hack our way out of the holy place and holy war controversies? Again, mostly no. A lot of religion is based on similarity and contiguity: from sympathetic magic or Anglican liturgy to free market capitalism or theory of evolution. These are bolstered by textual constructions that normally don’t carry a particularly heavy semantic load but will discharge their potential meanings in times of conflict. The same formulas that are mindlessly droned by the faithful during their rituals (be they Sunday worship or a Wall Street Journal editorial) can be brought into full conceptual battle readiness when necessary. This conceptual mobilization is always selective. All liturgical systems are internally contradictory (they might tell you to love your mom and dad one day and to ditch them the next) and it is necessary that some formulas remain just that while others are brought out in their full semantic splendor. This is what makes ecumenicalism possible. But from there we can perform a useful hack. Not all ideas potentially contained in a text have to come to fruition. Ours and theirs. If we can just keep them as part of the liturgy and not get too incensed over them. If we can accept that while the others may recite verses that would have us die, they may not necessarily mean “really” die, then we can go have a cautious conversation to make sure of that.

Growing up in communist Czechoslovakia, I remember my largely pacifist and moderately Christian family and friends singing to the tune of John Brown’s Body (unaware of the gruesome irony) “when all the communists are swinging from a tree, when all the communists are swinging from a tree, when all the communists are swinging from a tree, then there will be paradise.” Probably few of them would have even supported the death penalty, let alone been part of a lynch mob. But in the right circumstances…who knows? A similar case is made for the traditional song “Shoot the Boer” sung in South Africa in this On The Media feature. (Cf. also the fluctuating militarism of Onward Christian Soldiers.)

In the case of Israel-Palestine, of course, we know that some of the people involved would be, and have been, involved in carrying out the underlying meanings of their phrases. However, the thing to remember is that they don’t have to be. We just have to keep in mind that words and actions aren’t always in sync, and that is usually for the good. So, in other words, we can’t solve the idea problem with more ideas, but we can temper the ideas and divorce them from actions. Not easy and not instantaneous, but historically inevitable.

So the hacks might be interesting, but we come back to the original assumption that difficult situations are difficult to resolve. There are many ways in which you can hack somebody else’s mind; magicians, con artists and advertisers do it all the time. But these hacks are very straightforward, build on frame-based expectations and rarely have a lasting effect. Propaganda and brainwashing are a kind of mind hack, but they only work predictably in conjunction with real power closing off other sources of cross-domain mappings. Mostly, when it comes to metaphor, we can’t hack somebody else’s; we can present a few alternative mappings, we can offer a more detailed analysis of the source domain, or we can even reject or replace a source domain altogether. But whether this will carry weight depends on factors outside of the metaphor itself (although perhaps relying on the same sort of principles), such as social prestige, context, material resources, political clout, etc. Ideas always come with the people who espouse them, and I doubt ideas coming from Saatchi and Saatchi will have enough internal coherence to carry them over the disadvantages flowing from their carrier. Let’s hope I’m wrong.


What it’s all About

Metaphors are not just something extra we use when we’re feeling poetic or at a loss for le mot juste; they are all over our minds, texts and conversations. Just like conjunctions, tenses or words. And just like anything else, they can be used for good or ill, on purpose or without conscious regard. Their meanings can be exposed, explored and exorcised. They can be brought back from the dead by fresh perspectives or trodden into the ground by frequent use. They may bring us into the very heights of ecstasy or they may pass by unnoticed. They illuminate and obscure, lead and mislead, bring life and death. They can be too constrained or they can be taken too far. They can be wrong and they can be right. And they can be hacked.

Hacking metaphors means taking them apart, seeing how they work, and putting them back together in a creative and useful way. People hack metaphors all the time without realizing what they’re doing, and they often get into trouble by not recognizing that this is what they’re doing. Paying a bit more attention to how metaphors work and how they can be made to work differently can make hacking them an easier process.

Oh, and …

Metaphor doesn’t really exist as a separate, clearly delineated concept. It is really only one expression of a more general cognitive faculty I call conceptual framing. Depending on who you ask, it is different from or the same as simile, analogy and allegory, and closely related or in opposition to metonymy, synecdoche, irony, and a host of other tropes. On this site, these distinctions don’t matter. All of the above rely on the same conceptual structures, and metaphor is just as good a label as any for them.
