
Not ships in the night: Metaphor and simile as process


In some circles (rhetoric and analytic philosophy come to mind), much is made of the difference between metaphor and simile.

(Rhetoricians pay attention to it because they like taxonomies of communicative devices, and analytic philosophers spend time on it because of their commitment to a truth-theoretical account of meaning and naive assumptions about compositionality.)

It is true that their surface and communicative differences have an impact in certain contexts, but if we’re interested in the conceptual underpinnings of metaphor, we’re more likely to ignore the distinction altogether.

But what’s even more interesting is to think about metaphor and simile as just part of the process of interpersonal meaning construction. Consider this quote from a blog on macroeconomics:

[1a] Think of [1b] the company as a ship. [2] The captain has steered the ship too close to the rocks, and seeing the impending disaster has flown off in the ship’s helicopter and with all the cash he could find. After the boat hit the rocks no lives were lost, but many of the passengers had a terrifying ordeal in the water and many lost possessions, and the crew lost their jobs. [3] Now if this had happened to a real ship you would expect the captain to be in jail stripped of any ill gotten gains. [4] But because this ship is a corporation its captains are free and keep all their salary and bonuses. [5] The Board and auditors which should have done something to correct the ship’s disastrous course also suffer no loss.

Now, this is really a single conceptual creation but it happens in about five moves, which I have highlighted above. (Note: I picked these five as an illustrative heuristic; this is not to assume some fixed sequence.)

[1] The first move establishes an idea of similarity through a simile. But it is not in the traditional form of ‘X is like Y’. Rather, it starts with the performative ‘Think of’ [1a] and then uses the simile marker ‘as’ [1b]. ‘Think of X as Y’ is a common construction but it is rarely cited as an example in discussions of similes.

[2] This section lays out an understanding of the source domain for the metaphorical projection. It also sets the limit on the projection in that it is talking about ‘company as a ship traveling through water’ in this scenario, not a ship as a metonym for its internal structure (for instance, the similarities in the organisational structure of ships and companies). This is another very common aspect of metaphor discourse that is mostly ignored. It is commonly deployed as an instrument in the process of what I like to call ‘frame negotiation’. On the surface, this part seems like a narrative with mostly propositional content that could easily stand alone. But…

[3] By saying ‘if this had happened to a real ship’, the author immediately puts the preceding segment into question as an innocent proposition and reveals that it was serving a metaphorical purpose all along. Not that any of the readers were really lulled into a false sense of security, nor that the author was intending some dramatic reveal. But it is an interesting illustration of how the process of constructing analogies contains many parts.

[4] This part looks like a straightforward metaphor, ‘the ship is a corporation’, but it is flipped around (one would expect ‘the corporation is a ship’). This move links [2] and [3] and reminds us of the framing established in [1].

[5] This last bit seems to refer to both domains at once: ‘the Board and auditors’ to the business case and the ‘ship’s course’ to the narrative in the simile. But we could even more profitably think of it as referring to a new blended domain: a hypothetical model in which both the shipping and business characteristics are integrated.

But the story does not end there, even though people who are interested in metaphors often feel that they’ve done enough at this stage (if they ever reach it). My recommended heuristic for metaphor analysts is to always look at what comes next. This is the start of the following paragraph:

To say this reflects everything that is wrong with neoliberalism is I think too imprecise. [1] I also think focusing on the fact that Carillion was a company built around public sector contracts misses the point. (I discussed this aspect in an earlier post.)

If you study metaphor in context, this will not surprise you. The blend is projected into another domain that is in a complex relationship to what precedes and what follows. This is far too conceptually intricate to take apart here but it is of course completely communicatively transparent to the reader and would have required little constructive effort on the part of the author (who most likely spent time on constructing the simile/metaphor and its mappings but little on their embedding into the syntactic and textual weave that gives them their intricacy).

In the context of the whole text, this is a local metaphor that plays as much an affective as a cognitive role. It opens up some conceptual spaces but does not structure the whole argument.

The metaphor comes up again later, and in this case it also plays the role of an anaphor by linking two sections of the text:

Few people would think that never being able to captain a ship again was a sufficient disincentive for the imaginary captain who steered his boat too close to the rocks.

Also of note is the use of the word ‘imaginary’, which puts the statement somewhere between a metaphor (similarity expressed as identity) and a simile (similarity expressed as comparison).

There are two lessons here:

  1. The distinction between metaphor and simile can be useful in certain contexts but, in practice, their uses blend together and it is not always easy to establish boundaries between them. And even if we could, the underlying cognition is the same (even if, truth-conditionally, they may differ on the surface). We could complicate things further and introduce terms such as analogy, allegory, or even parable in this context, but it is hard to see how much they would help us elucidate what is going on.

  2. Both metaphor and simile are not static components of a larger whole (like bricks in a wall or words in a dictionary). They are surface aspects of a rich and dynamic process of meaning making. And the meaning is ‘literally’ (but not really literally) being made here right in front of our eyes, or rather by our eyes. What metaphor and simile (or the sort of hybrid metasimile present here) do is help structure the conceptual spaces (frames) being created, but they are not doing it alone. There are also narratives, schemas, propositions, definitions, etc. All of these help fill out the pool of meaning into which we may slowly immerse ourselves or hurtle headlong. This is not easy to see if we only look at metaphor and simile outside their natural habitat of real discourse. Let that be a lesson to us.

How to read ‘Women, Fire, and Dangerous Things’: Guide to essential reading on human cognition


Note:

These are rough notes for a metaphor reading group, not a continuous narrative. Any comments, corrections or elaborations are welcome.

Why should you read WFDT?

Women, Fire, and Dangerous Things: What Categories Reveal About the Mind is still a significantly underappreciated and (despite its high citation count) not-enough-read book that has a lot to contribute to thinking about how the mind works.

I think it provides one of the most concise and explicit models for how to think about the mind and language from a cognitive perspective. I also find its argument against the still prevalent approach to language and the mind as essentially fixed objects very compelling.

The thing that has been particularly underused in subsequent scholarship is the concept of ‘ICMs’ or ‘Idealised Cognitive Models’, which both puts metaphor (the work for which Lakoff is best known) in its rightful context and outlines what we should look for when we think about things like frames, models, scripts, scenarios, etc. Using this concept would have avoided many undue simplifications in work in the social sciences and humanities.

Why this guide

Unfortunately, the concision and explicitness I extolled above are surrounded by hundreds of pages of arguments and elaborations that are often less well thought out than the central thesis and have been a vector for criticism (I’ve responded to some of these in my review of Verena Haser’s book).

As somebody who translated the whole book into Czech and penned extensive commentary on its relevance to the structuralist linguistic tradition, I have perhaps spent more time with it than most people other than the author and his editors.

Which is why, when people ask me whether to read it, I usually recommend an abbreviated tour of the core argument with some selections depending on the individual’s interests.

Here are some of my suggestions.

Chapters everyone should read

Chapters 3, 4, 5, 6 – Core contribution of the book – Fundamental structuring principles of human cognition

These four chapters summarize what I think everybody who thinks about language, mind and society should know about how categories work. Even if it is not necessarily the last word on every (or any) aspect, it should be the starting point for inquiry.

All the key concepts (see below) are outlined here.

Preface and Chapter 1 – Outline of the whole argument and its implications

These brief chapters lay out succinctly and, I think, very clearly the overall argument of the book and its implications. This is where he outlines the core of the critique of objectivism, which I think is very important (if itself open to criticism).

Chapter 2: Precursors

This is where he outlines the broader panoply of thinkers and research outcomes in recent intellectual history whose insights this book tries to systematise and take further.

The chapter takes up some of the key thinkers who have been critical of the established paradigm. Read it not necessarily for understanding them but for a way of thinking about their work in the context of this book.

Case studies

The case studies represent a large chunk of the book and few people will read all three. But I think at least one of them should be part of any reading of the book. Most people will be drawn to number 1 on metaphor, but I find that number 2 shows off the key concepts in most depth. It will require some focus and patience from non-linguists but I think it is worth the effort.

Case study 3 is perhaps too linguistic (even though it introduces the important concept of constructions) for most non-linguists.

Key concepts

No matter how the book is read, these are the key concepts I think people should walk away understanding.

Idealized Cognitive Models (also called Frames in Lakoff’s later work)

I don’t know of any more systematic treatment of how our conceptual system is structured than this. It is not necessarily the last word but should not be overlooked.

Radial Categories

When people talk about family resemblances they ignore the complexity of the conceptual work that goes into them. Radial categories give a good sense of that depth.

Schemas and rich images

While image schemas are still a bit controversial as actual cognitive constructs, Lakoff’s treatment of them alongside rich images shows the importance of both as heuristics to interpreting cognitive phenomena.

Objectivism vs Basic Realism

Although objectivism (nothing to do with Ayn Rand) is not a position taken by any practicing philosophers and feels a bit straw-manny, I find Lakoff’s outline of it eerily familiar as I read works across the humanities and social sciences, let alone philosophy. When people read the description, they should avoid dismissing it with ‘of course nobody thinks that’ and reflect on how many people approach problems of mind and language as if they did think that.

Prototype effects and basic-level categories

These concepts are not original to Lakoff but are essential to understanding the others.

Role of metaphor and metonymy

Lakoff is best known for his earlier work on metaphor (which is why figurative language is not a key concept in itself here) but this book puts metaphor and metonymy in the perspective of broader cognition.

Embodiment and motivation

Embodiment is an idea thrown around a lot these days. Lakoff’s is an important early contribution that shows some of the actual interaction between embodiment and cognition.

I find it particularly relevant when he talks about how concepts are motivated but not determined by embodied cognition.

Constructions

Lakoff’s work was taking shape alongside Fillmore’s work on construction grammar and Langacker’s on cognitive grammar. While the current construction grammar paradigm is much more influenced by those, I think it is still worth reading Lakoff for his contribution here. Particularly case studies 2 and 3 are great examples of the power of this approach.

Additional chapters of interest

Elaborations of core concepts

Chapters 17 and 18 elaborate on the core concepts in important ways but many people never reach them because they follow a lot of work on philosophical implications.

Chapter 17 on Cognitive Semantics takes another, deeper look at ICMs (idealized cognitive models) across various dimensions.

Chapter 18 deals with the question of how conceptual categories work across languages in the context of relativism. The name of the book is derived from a non-English example, and this chapter takes the question of universals and language specificity head on. Perhaps not in the most comprehensive way (the debate on relativism has moved on) but it illuminates the core concepts further.

Case studies

Case Studies 2 and 3 should be of great interest to linguists. Not because they are perfect but because they show the depth of analysis required of even relatively simple concepts.

Philosophical implications

Lakoff is not shy about placing his work in the context of disruption of the reigning philosophical paradigm of his (and to a significant extent our) day. Chapter 11 goes into more depth on how he understands the ‘objectivist paradigm’. It has been criticised for not representing actual philosophical positions (which he explicitly says he’s not doing) but I think it’s representative of many actual philosophical and other treatments of language and cognition.

This is then elaborated in chapters 12–16 and of course in his subsequent book with Mark Johnson, Philosophy in the Flesh. I find the positive argument they’re making compelling but it is let down by staying on the surface of the issues they’re criticising.

What to skip

Where Lakoff (and elsewhere Lakoff and Johnson) most open themselves to criticism is their relatively shallow reading of their opponents. Most philosophers don’t engage with this work because they don’t find it speaks their language and when it does, it is easily dismissed as too light.

While I think that the broad critique this book presents of what it calls ‘objectivist approaches’ is correct, I don’t recommend that anyone take the details too seriously. Lakoff simultaneously gives them too little and too much attention. He argues against very small details but leaves too many gaps.

This means that those who should be engaging with the very core of the work’s contribution fixate on errors and gaps in his criticism and feel free to dismiss the key aspects of what he has to say (much to their detriment).

For example, his critique of situation semantics leaves too many gaps and left him open to successful rejoinders even if he was probably right.

What is missing

While Lakoff engages with cognitive anthropology (and he and Johnson acknowledge their debts in the preface to Metaphors We Live By), he does not reflect the really interesting work in this area. Goffman (shockingly) gets no mention, nor does Victor Turner, whose work on liminality is a pretty important companion.

There’s also little acknowledgement of work on texts such as that by Halliday and Hasan (although that work was arguably still waiting for its greatest impact in the mid-1980s with the appearance of corpora). Lakoff and most of the researchers in this area stay firmly at the level of the clause. But given that my own work mostly focuses on discourse and text-level phenomena, I would say that.

What to read next

Here are some suggestions for where to go next for elaborations of the key concepts or ideas with relevance to those outlined in the book.

  • Moral Politics by Lakoff launched his forays into political work but I think it’s more important as an example of this way of thinking applied for a real purpose. He replaces Idealized Cognitive Models with Frames but shows many great examples of them at work. Even if it falls short as an exhaustive analysis of the issues, it is very important as a methodological contribution showing how frames work in real life. I think of it almost as a fourth case study to this book.
  • The Way We Think by Gilles Fauconnier and Mark Turner provides a model of how cognitive models work ‘online’ during the process of speaking. Although it has made a more direct impact in the field of construction grammar, its importance is still underappreciated outside it. I think of it as an essential companion to the core contribution of this book. Lakoff himself draws on Fauconnier’s earlier work on mental spaces in this book.
  • Work on construction grammar This book was one of the first places where the notion of ‘construction’ in the sense of ‘construction grammar’ was introduced. It has since developed into its own substantive field of study that has been driven by others. I’d say the work of Adele Goldberg is still the best introduction, but for my money William Croft’s ‘Radical Construction Grammar’ is the most important. Taylor’s overview of the related ‘Cognitive Grammar’ is also not a bad next read.
  • Work on cognitive semantics There is much to read here. Talmy’s massive two volumes of ‘Cognitive Semantics’ are perhaps the most comprehensive, but most of the work here happens across various journals. I’m not aware of a single shorter introduction.
  • Philosophy and the Mirror of Nature by Richard Rorty is a book I frankly wish Lakoff had read. Rorty’s taking apart of philosophy’s epistemological imaginings is very much complementary to Lakoff’s critique of ‘objectivism’ but done while engaging deeply with the philosophical issues. While I basically go along with Lakoff’s and later Lakoff and Johnson’s core argument, I can see why it could be more easily dismissed than Rorty. Of course, Rorty’s work is also better known by its reputation than deeply reflected in much of today’s philosophy. Lakoff and Johnson’s essential misunderstanding, in Philosophy in the Flesh, of Rorty’s contribution and of its fundamental compatibility with their project is an example of why so many don’t take that aspect of their work seriously. (Although they are right that both Rorty and Davidson would have been better served by a less impoverished view of meaning and language.)

What language looks like: Dictionary and grammar are to language what standing on one foot is to running


Background

Sometimes a rather obscure and complex analogy just clicks into place in one’s mind and allows a slightly altered way of thinking that just makes so much sense, it hurts. Like putting glasses on in the morning and the world suddenly snapping into shape.

This happened to me this morning when reading the Notes from Two Scientific Psychologists blog and the post ‘Do people really not know what running looks like?’

It describes the fact that many famous painters (and authors of instructional materials on drawing) did not depict running people correctly. When running, it is natural (and essential) to put forward the arm opposite the leg that’s going forward. But many painters who depict running (including the artist who created the poster for the 1922 Olympics!) do it the wrong way round. Not just the wrong way, but a way that is almost impossible to perform. And this has apparently been going on for as long as depiction has been a thing. But it’s not just artists (who could even argue that they have other concerns). What’s more, when you ask a modern human being to imitate somebody running in a stationary pose (as somebody did on the website Phoons), they will almost invariably do it the wrong way round. Why? There are really two separate questions here.

  1. Why don’t the incorrect depictions of running strike most people as odd?
  2. Why don’t we naturally arrange our bodies into the correct stance when asked to imitate running while standing still?

Andrew Wilson (one of the two psychologists) has the perfect answer to question 2:

Asking people to pose as if running is actually asking them to stand on one leg in place, and from their point of view these are two very different things with, potentially, two different solutions. [my emphasis]

And he prefaces that with a crucial point about human behavior:

people’s behaviour is shaped by the demands of the task they are actually solving, and that might not be the task you asked them to do.

Do try this at home: try to imitate a runner standing up, then slowly (mime-like), then speed it up. Settling into the wrong configuration is the natural thing to do. Doing it the ‘right’ way round is hard. It wasn’t until I sped up into an actual run that my arms found the opposite motion natural, and by then I couldn’t keep track of what was going on any more. I would imagine that this would be the case for most people. In fact, the few pictures I could find of runners arranged standing at the start of a race have most of them also in the ‘wrong’ hand/leg position, and they’re not even standing on one leg. (See here and here.)

Which brings us back to the first question. Why does nobody notice? I personally find it really hard to even identify the wrong static depiction at a glance. I have to slow down, remember what is correct, then match it to the image. What’s going on? We obviously don’t have any cognitive control over the part of running that coordinates the movement of the arms in relation to the movement of the legs. We also don’t have any models or social scripts that pay attention to this sort of thing. It is a matter of conscious effort, a learned behaviour, to recognize these things.

Why is this relevant to language?

If you ask someone to describe a language, they will most likely start telling you about the words and the rules for putting them together. In other words, compiling a dictionary and a grammar. They will say something like: “In Albanian, the word for ‘bread’ is ‘bukë’”. Or they will say something like “English has 1 million words”, “Czech has no word for training” or “English has no cases”.

All of these statements reflect a notion of language that has a list of words that looks a little like this:

bread n. = 1. baked good, used for food, 2. metaphor for money, etc.
eat v. = 1. process of ingestion and digestion, 2. metaphor, etc.
people n. plural = human beings

And a grammar that looks a little bit like this:

Sentence = Noun (subj.) + Verb + Noun (obj.)

All of this put together will give us a sentence:

People eat food.

All you need is a long enough list of words and enough (but not as many) rules, and you’ve got a language.
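
To see how little this naive picture actually delivers, here is a minimal sketch in Python (my own toy, invented purely for illustration, not anything any linguist has proposed) of the ‘words plus rules’ model above:

# A toy implementation of the naive 'language = dictionary + grammar' picture.
# The lexicon and the single rule are made up for this post.

lexicon = {
    'bread':  'n',
    'eat':    'v',
    'people': 'n',
    'food':   'n',
}

def make_sentence(subj, verb, obj):
    """Apply the one rule: Sentence = Noun (subj.) + Verb + Noun (obj.)"""
    for word, pos in ((subj, 'n'), (verb, 'v'), (obj, 'n')):
        if lexicon.get(word) != pos:
            raise ValueError(f'{word!r} is not listed as a {pos!r}')
    return f'{subj.capitalize()} {verb} {obj}.'

print(make_sentence('people', 'eat', 'food'))   # People eat food.
print(make_sentence('bread', 'eat', 'people'))  # Bread eat people.

Note that the toy licenses ‘Bread eat people.’ just as happily: no agreement, no sense, no use.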

But as linguists have discovered through not a little pain, you don’t have a language. You have something that looks like a language but not something that you can actually speak as a language. It’s very similar to language but it’s not language.

Kind of like the picture of the runner with the arms going in the opposite direction. It looks very much like someone running, but it isn’t; it’s just a picture of someone running, and the picture is fundamentally wrong. Just not in a way that is at all obvious to most people most of the time.

Why grammars and dictionaries seem like a good portrait of language

So, we can ask the same two questions again.

  1. Why does the stilted representation of language as rules and words not strike most people (incl. Steven Pinker) as odd?
  2. Why don’t we give more realistic examples of language when asked to imitate one?

Let’s start with question 2 again which will also give us a hint as to how to answer question 1.

So why, when asked to give an example of English, am I more likely to give:

John loves Mary.

or

Hello. Thank you. Good bye.

than

Is it cold in here? Could you pass the sugar, please? No no no. I’ll think about it.

It’s because I’m achieving a task that is different from actually speaking the language. When asked to illustrate a language, we’re not communicating anything in the language. So our very posture towards the language changes. We start thinking in equivalencies and left and right sides of the word (word = definition) and building blocks of a sentence. Depending on who we’re speaking to, we’ll choose something very concrete or something immediately useful. We will not think of nuance, speech acts, puns or presupposition.

But the vast majority of our language actions are of the second kind. And many of the examples we give of language are actually good for only one thing: giving an example of the language. (Such as the famous example from logic, ‘A man walks’, which James McCawley analysed as only being usable in one very remote sense.)

As a result, if we’re given the task of describing language, coming up with something looking like a dictionary and a grammar is the simplest and best way of fulfilling the assignment. If we take a scholarly approach to this task over generations, we end up with something that very much looks like the modern grammars and dictionaries we all know.

The problem is that these don’t really give us “a picture of language”; they give us “a picture of a pose of language” that looks so much like language to our daily perception that we can’t tell the difference. But in fact, they are exactly the opposite of what language looks like.

Now, we’re in much more complex waters than running. Although I imagine the exact performance of running is in many ways culturally determined, the amount of variation is going to be limited by the very physical nature of the relatively simple task. Language, on the other hand, is almost all culture. So I would expect people in different contexts to give different examples. I read somewhere (can’t track down the reference now) that Indian grammarians tended to give examples of sentences in the imperative. Early Greeks (like Plato) had a much more impoverished view of the sentence than I showed above. And I’m sure there are languages with even more limited metalanguage. However, the general point still stands: the way we tend to think about language is determined by the nature of the task.

The key point I’ve repeated over and over (following Michael Hoey) is that grammars and dictionaries are above all texts written in the language. They don’t stand apart from it. They have their own rules, conventions and inventories of expression. And they are susceptible to the politics and prejudices of their time. Even the OUP. At the same time, they can be very useful tools for developing language skills or dealing with unfamiliar texts. But so can asking a friend or figuring out the meaning in context.

Which brings us to question 1. Why has nobody noticed that language doesn’t quite work that way? The answer is that – just like with running – people have. But only when they try to match the description with something that is right in front of them. Even then, they frequently (and I’m talking about professional linguists like Steven Pinker here) ignore the discrepancy or ascribe it to a lack of refinement of the descriptions. But most of the time, the tasks that we fulfil with language do not require us to engage the sort of metacognitive apparatus that would direct us to reflect on what’s actually going on.

What does language really look like?

So is there a way to have an accurate picture of language? Yes. In fact, we already have it. It’s all of it. We don’t perhaps have all the fine details, but we have enough to see what’s going on – if we look carefully. It’s not like linguists of all stripes have not described pretty much everything that goes on with language in one way or another. The problem is that they try to equate the value of a description to the value of the corresponding model that very often looks like an algorithm amenable to being implemented in a computer program.

So, if I describe a phenomenon of language as a linguist, my tendency is to immediately come up with a fancy looking notation that will look like ‘science’. If I can make it ‘mathematical’, all the better. But all of these things are only models. They are ways of achieving a very particular task. Which is to – in one way or another – model language for a particular purpose. Development of AI, writing of pedagogic grammars, compiling word lists, predicting future trends, tracing historical developments, estimating psychological impact, etc. All of these are distinct from actual pure observation of what is going on.

Of course, even simple description of what I observe is a task of its own with its own requirements. I have to choose what I notice and decide what I report on. It’s a model of a sort, just like an accurate painting of a runner in motion is just a model (choosing what to emphasize, shadows, background detail, facial expression, etc.). But it’s the task we’re really after: coming up with as accurate and complete a picture of language as is possible for a collectivity of humans.

People working in construction grammars in the usage-based approach are closest to the task. But they need to talk with people who work on texts, as well, if they really want to start painting a fuller picture.

Language is signs on doors of public restrooms, dirty jokes on TV, mothers speaking to children, politicians making speeches, friends making small talk in the street, newscasters reading the headlines, books sold in bookshops, gestures, teaching ways of communication in the classroom, phone texts, theatre plays, songs, blogs, shopping lists, marketing slogans, etc.

Trying to reduce their portrait to words and rules is just like trying to describe a building by talking about bricks and mortar. They’re necessary and without them nothing would happen. But a building does not look like a collection of bricks and mortar. Nor does knowing how to put brick to brick and glue them together help in getting a house built. At best, you’d get a knee-high wall. You need a whole host of other knowledge and other kinds of strategies: building corners and windows, but also getting planning permission, digging a foundation, hiring help, etc. All of those are also involved in the edifices we construct with language.

An easy counterargument here would be: that’s all well and good, but the job of linguistics is to study the bricks and the mortar and we’ll leave the rest to other disciplines like rhetoric or literature. At least, that’s been Chomsky’s position. But the problem is that even the words and grammar rules don’t actually look like what we think they do. For a start, they’re not arranged in any of the ways in which we’re used to seeing them. But they probably don’t even have the sorts of shapes we think of them in. How do I decide whether I say “I’m standing in front of the Cathedral” or “The Cathedral is behind me”? Each of these triggers a very different situation and perspective on exactly the same configuration of reality. And figuring out which is which requires a lot more than just the knowledge of how the sentence is put together. How about novel uses of words that are instantly recognizable, like “I sneezed the napkin off the table”? What exactly are all the words and what rules are involved?

Example after example shows us that language does not look very much like that traditional picture we have drawn of it. More and more linguists are looking at language with freshly open eyes, but I worry that they may get off task when they’re asked to make a picture of what they see.

Where does the metaphor break?

Ok, like all metaphors and analogies, even this one must come to an end. The power of a metaphor is not just finding where it fits but also pointing out its limits.

The obvious breaking point here is the level of complexity. Obviously, there’s only one very discretely delineated aspect of what the runners are doing that does not match what’s in the picture. The position of the arms. With language, we’re dealing with many subtle continua.

Also, the notion of the task is taken from a very specific branch of cognitive psychology and it may be inappropriate to extend it to areas where tasks take a long time, are collaborative, and include a lot of deliberately chosen components as well as automaticity.

But I find it a very powerful metaphor nevertheless. It is not an easy one to explain because both fields are unfamiliar. But I think it’s worth taking the time with it if it opens the eyes of just one more person trying to make a picture of what language looks like.

What does it mean when words ‘really’ mean something: Dismiss the Miss


A few days ago, I tweeted a link to an article in TES:

Today, I got the following response back:

@lizzielh is absolutely right. As the title of an as yet unpublished blog post of mine goes: “Words don’t mean things, people mean things”. I even wrote a whole book chapter on that with the same title as this post.

Indeed, if it had been me writing on the topic, I would have chosen a more judicious title. Such as “The legacy of discrimination behind the humble Miss” or “Past and present inequalities encoded in the simple Miss”.

In fact, the only reason I tweeted that article in the first place was because it was making a much more subtle and powerful point than simple etymology (as you would expect from one based on the work of the eminent scholar of language and gender Jennifer Coates). Going all the way back to Language and Woman’s Place and even before, people have been trying to pin the blame on simple words. All along the response has been: but these are just words, we don’t mean anything bad by them. Or: these are just words, the real harm is done in the real world.

Many women I meet continue to like the Miss/Mrs distinction despite the long availability of the now destigmatized Ms. It was not too long ago that I set up a sign-up form with only Prof, Dr, Mr and Ms, and got lots of complaints from women who wanted to keep their Miss or Mrs. So restigmatizing Miss is actively harmful to the self-image of many women whose identity is tied to that label. Feminists tend to make light of the ‘unfeminist’ cry of “I like it when men open the door to me”, or “Carrying my bag for me just shows respect”. Or, going back even further, “I don’t need a vote, I exercise my influence through my husband.” But change is literally hard, it takes time and effort, so an attempt at making the world better will always make it temporarily worse (at least for some people).

The fact is that Miss is bound in a network of meanings, interactions and power relations. And even if it takes some mental pain, it’s worth picking at all it covers up.

But not every minute of every day. Sometimes, we need to say something to get from conversational point A to conversational point B and even a laden word may be better than no word. As one of the respondents in the article says:

My response is always that my name isn’t Miss; it’s Mrs Coslett. But if I’m in a school where students don’t know me and they call me Miss, I’m fine with that. They’re showing respect by giving me a title, rather than ‘hey’ or ‘oi, you’ or whatever.

Most of the time when contentious words are used, challenging them is not feasible. But she’s wrong in her conclusion:

That’s just the way the English language works.

That’s absolutely not true. Just like words don’t mean anything on their own, language does not just work. It’s used to do things (to riff on Austin’s famous book) by people. It is not always used purposefully but its use is always bound in the many ways and means of people. The way we speak now is a result of centuries of little power plays, imitations of prestige, prescriptions of obedience. When you look closer, they’re all easy to see.

Things have let up considerably since the 1970s. Many fewer people are concerned about how language encodes gender inequality, and it’s worthwhile reminding ourselves that many of the historical unfairnesses hidden in word histories are still with us. Just like you can’t get away with saying “I didn’t mean anything by the ‘n’ word”, you can’t just shrug off the critique of the complex tapestry of gender bias in ‘Miss’.

Miss does not “really mean” anything. It’s just a sequence of letters or sounds. And most people using it do not “really mean” anything by it. Or it does not “really mean” anything to them. But context is everything.

It is a truism to say that racism will be done away with when people don’t dislike each other because of the color of their skin. But the opposite is the case. The sign that racism has disappeared is when I can say “I really don’t like black people” simply because I don’t like the color of their skin in the same way I may prefer redheads to blondes. Preference for skin colour is then just a harmless quirk. But we’re centuries away from that because any such preference is tied to a system of discrimination going back a long way.  (BTW: just to avoid misunderstanding, I personally find black skin beautiful.)

The same thing applies to “Miss”; we can’t just turn our back on its pernicious potential. Most of the time it’s hidden from sight but it’s recoverable at a moment’s notice. Because we live in a world where male is still the default position. We have to work to change that. Change our minds, hearts, cognitions and languages. They don’t just work on their own. We make them work. So let’s make them work for us. The ‘us’ we want to be, rather than the ‘us’ we used to be in the bad old days.


Linguistics according to Fillmore


While people keep banging on about Chomsky as the be-all and end-all of linguistics (I’m looking at you, philosophers of language), there have been many linguists who have had a much more substantial impact on how we actually think about language in a way that matters. In my post on why Chomsky is not really a linguist at all I listed a few.

Sadly, one of these linguists died yesterday: Charles J. Fillmore, a towering figure among linguists who never wrote a single book. In my mind, he changed the face of linguistics three times with just three articles (one of them co-authored). Obviously, he wrote many more, but compared to his massive impact, his output was relatively modest. His ideas have been with me all through my life as a linguist and, on reflection, they form a foundation of what I know language to be. Therefore, this is not so much an obituary (for which I’m hardly the most qualified person out there) as a manifesto for a linguistics of a truly human language.

The case for Fillmore

The first article, more of a slim monograph at 80-odd pages, was The Case for Case (which, for some reason, I first read in Russian translation). Published in 1968, it was one of the first efforts to find deeper functional connections in generative grammar (following on his earlier work with transformations). If you’ve studied Chomskean Government and Binding, this is where thematic roles essentially come from. I only started studying linguistics in 1991, by which time Case for Case was already considered a classic. Particularly in Prague, where function was so important. But even after all those years, it is still worth reading for any minimalist out there. Unlike so many in today’s divided world, Fillmore engaged with the whole universe of linguistics, citing Halliday, Tesnière, Jakobson, Whorf, Jespersen, and others while giving an excellent overview of the treatment of case by different theories and theorists. But the engagement went even deeper: the whole notion of ‘case’ as one “base component of the grammar of every language” brought so much traditional grammar back into contact with a linguistics that was speeding away from all that came before at a rate of knots.

From today’s perspective, its emphasis on deep and surface structures, as well as its relatively impoverished semantics, may seem a bit dated, but it represents an engagement with language used to express real meaning. The thinking that went into deep cases transformed into what has become known as Frame Semantics (“I thought of each case frame as characterizing a small abstract ‘scene’ or ‘situation’, so that to understand the semantic structure of the verb it was necessary to understand the properties of such schematized scenes” [1982]), which is where things really get interesting.

Fillmore in the frame

When I think about frame semantics, I always go to his 1982 article Frame Semantics, published in the charmingly named conference proceedings ‘Linguistics in the Morning Calm’, though the idea had its first outing in 1976. George Lakoff used it as one of the key inspirations for his idealized cognitive models in Women, Fire, and Dangerous Things, which is where this site can trace its roots. As I have said before, I essentially think about metaphors as special kinds of frames.

In it, he says:

By the term ‘frame’ I have in mind any system of concepts related in such a way that to understand any one of them you have to understand the whole structure in which it fits; when one of the things in such a structure is introduced into a text, or into a conversation, all of the others are automatically made available. I intend the word ‘frame’ as used here to be a general cover term for the set of concepts variously known, in the literature on natural language understanding, as ‘schema’, ‘script’, ‘scenario’, ‘ideational scaffolding’, ‘cognitive model’, or ‘folk theory’.

It is a bit of a mouthful but it captures in a paragraph the absolute fundamentals of the semantics of human language, as opposed to the projection of the rules of formal logic and truth conditions onto an impoverished version of language that all the generative-inspired approaches attempt. It also brings together many other concepts from different fields of scholarship. Last year I presented a paper on the power of the concept of frame where I found even more terms that have a close affinity to it, which only underscores the far-reaching consequences of Fillmore’s insight.
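
To get a feel for what ‘all of the others are automatically made available’ means, here is a toy sketch in Python of Fillmore’s classic ‘commercial transaction’ scene; the data structure and names are my own invention for illustration, not Fillmore’s formalism:

# A toy rendering of Fillmore's 'commercial transaction' frame.
# Structure and names invented here purely for illustration.

commercial_transaction = {
    'elements': {'Buyer', 'Seller', 'Goods', 'Money'},
    # Each verb evokes the whole scene but foregrounds different elements:
    'perspectives': {
        'buy':  {'Buyer', 'Goods'},
        'sell': {'Seller', 'Goods'},
        'pay':  {'Buyer', 'Money'},
        'cost': {'Goods', 'Money'},
    },
}

def evoke(verb):
    """Introducing one word makes the whole structure available."""
    foreground = commercial_transaction['perspectives'][verb]
    background = commercial_transaction['elements'] - foreground
    return foreground, background

print(evoke('buy'))
# ({'Buyer', 'Goods'}, {'Money', 'Seller'}) -- say 'buy' and the seller
# and the money come along in the background.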

As I was looking for some more quotes from that article, I realized that I’d have to pretty much cut and paste in the whole of it. Almost every sentence there is pure gold. Rereading it now after many, many years, it’s becoming clear how many things from it I’ve internalized (and, frankly, reinvented some of the ideas I forgot had been there).

Constructing Fillmore

About the same time, and merging the two earlier insights, Fillmore started working on the principles that have come to be known as construction grammar. Although by then the ideas were some years old, I always think of his 1988 article with Paul Kay and Mary Catherine O’Connor as a proper construction grammar manifesto. In it they say:

The overarching claim is that the proper units of a grammar are more similar to the notion of construction in traditional and pedagogical grammars than to that of rule in most versions of generative grammar.

Constructions, according to Fillmore, have these properties (a toy sketch follows the list):

  1. They are not limited to the constituents of a single syntactic tree. Meaning, they span what has been considered the building blocks of language.
  2. They specify at the same time syntactic, lexical, semantic and pragmatic information.
  3. Lexical items can also be viewed as constructions (this is absolutely earth shattering and I don’t think linguistics has come to grips with it yet).
  4. They are idiomatic. That is, their meaning is not built up from their constituent parts.
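
To make property 2 concrete, here is a toy sketch of the ‘let alone’ construction that the 1988 article analyses, rendered as a single form-meaning pairing; the field names are my own invention, not Fillmore and Kay’s formalism:

# A toy sketch of a construction as one unified unit pairing form
# with meaning. Field names invented for illustration.

let_alone = {
    'form':       '... X ... let alone Y',
    'syntax':     'coordinates two focused phrases of the same category',
    'lexicon':    ['let', 'alone'],  # fixed lexical material
    'semantics':  'Y is a stronger (less likely) case than X on some scale',
    'pragmatics': 'requires a scalar, typically negative, context',
    'example':    'I barely got up in time to eat lunch, let alone cook breakfast.',
}

No ordinary phrase-structure rule plus dictionary entry could capture all of this in one place, which is the point.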

Although Lakoff’s study of ‘there constructions’ in Women, Fire, and Dangerous Things came out a year earlier (and is still essential reading), I prefer Fillmore as an introduction to the subject (if only because I never had to translate it).

The beauty of construction grammar (just as the beauty of frame semantics) lies in the fact that it can bridge much of the modern thinking about language with grammatical insights and intuitions of generations of researchers from across many schools of thought. But I am genuinely inspired by its commitment to language as a whole, expressed in the 1999 article by Fillmore and Kay:

To adopt a constructional approach is to undertake a commitment in principle to account for the entirety of each language. This means that the relatively general patterns of the language, such as the one licensing the ordering of a finite auxiliary verb before its subject in English as illustrated in 1, and the more idiomatic patterns, such as those exemplified in 2, stand on an equal footing as data for which the grammar  must provide an account.

(1) a. What have you done?  b. Never will I leave you. c. So will she. d. Long may you prosper! e. Had I known, . . . f. Am I tired! g. . . . as were the others h. Thus did the hen reward Beecher.

(2) a. by and large b. [to] have a field day c. [to] have to hand it to [someone]  d. (*A/*The) Fool that I was, . . . e. in x’s own right

Given such a commitment, the construction grammarian is required to develop an explicit system of representation, capable of encoding economically and without loss of generalization all the constructions (or patterns) of the language, from the most idiomatic to the most general.

Notice that they don’t just say ‘language’ but ‘each language’. Both of those articles give ample examples of how constructions work and what they do and I commend them to your linguistic enjoyment.

Ultimately, I do not subscribe to the exact version of construction grammar that Fillmore and Kay propose, agreeing with William Croft that it is still too beholden to the formalist tradition of the generative era, but there is something to learn from on every page of everything Fillmore wrote.

Once more with meaning: the FrameNet years

Both frame semantics and construction grammar impacted Fillmore’s work in lexicography with Sue Atkins and culminated in FrameNet, a machine-readable frame-semantic dictionary providing a model for a semantic module of a construction grammar. To make the story complete, we can even see FrameNet as a culmination of the research project begun in The Case for Case, namely the development of a “valence dictionary” (as he summarized it in 1982). While FrameNet is much more than that, and has very much abandoned the claim to universal deep structures, it can be seen as accomplishing the mission of a linguistics of meaning that Fillmore set out on in the 1960s.
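
As a taste of what ‘machine readable’ means here, this is roughly how FrameNet can be browsed from Python through NLTK’s corpus reader (a sketch; treat the exact frame name and attributes as an assumption to be checked against the current release):

# Browsing FrameNet via NLTK -- a sketch.
# One-off setup: pip install nltk, then nltk.download('framenet_v17')
from nltk.corpus import framenet as fn

frame = fn.frame('Commerce_buy')  # a descendant of the old 'commercial event' scene
print(frame.definition)           # prose definition of the schematized scene
print(sorted(frame.FE))           # frame elements: Buyer, Goods, Money, Seller, ...
print(sorted(frame.lexUnit))      # lexical units evoking the frame: 'buy.v', ...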

Remembering Fillmore

I only met Fillmore once, when he came to lecture at a summer school in Prague almost twenty years ago. I enjoyed his lectures but was really too starstruck to take advantage of the opportunity. But I saw enough of him to understand why he is remembered with deep affection and admiration by all of his colleagues and students, whose ranks form a veritable who’s who of linguists worth paying attention to.

In my earlier post, I compared him in stature and importance to Roman Jakobson (even if Jakobson’s crazily voluminous output across four languages dwarfs Fillmore’s – and almost everyone else’s). Fillmore was more than a linguist’s linguist; he was a linguist who mattered (and matters) to anyone who wanted (and wants) to understand how language works beyond a few minimalist soundbites. Sadly, it is possible to meet graduates with linguistics degrees who have never heard of Jakobson or Fillmore, while it’s almost impossible to meet someone who doesn’t know anything about language but has heard of Chomsky. But I have no doubt that in the decades of language scholarship to come, it will be Fillmore and his ideas that will be the foundation upon which the edifice of linguistics will rest. May he rest in peace.

Post Script

I am far from being an expert on Fillmore’s work and life. This post reflects my personal perspective and the lessons I’ve learned rather than a comprehensive or objective reference work. I may have been rather free with the narrative arc of his work. Please be free with corrections and clarifications. Language Log reposted a more complete profile of his life.

References

  • Fillmore, C.J., 1968. The Case for Case. In E. Bach & R. Harms, eds. Universals in Linguistic Theory. New York: Holt, Rinehart and Winston, pp. 1–88. Available at: http://pdf.thepdfportal.com/PDFFiles/123480.pdf [Accessed February 15, 2014].
  • Fillmore, C.J., 1976. Frame Semantics and the nature of language. Annals of the New York Academy of Sciences, 280 (Origins and Evolution of Language and Speech), pp. 20–32.
  • Fillmore, C.J., 1982. Frame Semantics. In The Linguistic Society of Korea, ed. Linguistics in the Morning Calm: International Conference on Linguistics: Selected Papers. Seoul, Korea: Hanshin Pub. Co., pp. 111–139.
  • Fillmore, C.J., Kay, P. & O’Connor, M.C., 1988. Regularity and Idiomaticity in Grammatical Constructions: The Case of Let Alone. Language, 64(3), pp. 501–538.
  • Kay, P. & Fillmore, C.J., 1999. Grammatical constructions and linguistic generalizations: the What’s X doing Y? construction. Language, 75(1), pp. 1–33.

5 things everybody should know about language: Outline of linguistics’ contribution to the liberal arts curriculum


Drafty

This was written in some haste and needs further refinement. Maybe one day that will come. For now, it will be left as it stands.

Background

This post outlines what I think are the key lessons from the research output of the field of linguistics that should be reflected in the general curriculum (in as much as any should be). It is a reaction to recent posts by Mark Liberman suggesting the role and form of grammar analysis in general education. I argue that he is almost entirely wrong in his assumptions as well as in his emphasis. I will outline my arguments against his position at the end of the post. At the beginning, I will outline the key, easily digestible lessons of modern linguistics that should be incorporated into language education at all levels.

I should note that despite my vociferous disagreement, Mark Liberman is one of my heroes. His ‘Breakfast Experiments(tm)’ have brought me much joy, and he and his fellow contributors to the Language Log keep me better informed about developments in linguistics outside my own specialty than I would otherwise be. Thanks, Mark, for all your great work.

I have addressed some of these issues in previous posts here, here and here.

What should linguistics teach us

In my post on what proponents of simple language should know about linguistics, I made a list of findings that I propose are far more important than specific grammatical and lexicographic knowledge. Here I take a slightly more high-level approach – but in part, this is a repetition of that post.

Simply, I propose that any school-level curriculum of language education should 1. expose students (starting at primary level) to the following 5 principles through reflection on relevant examples, and 2. reflect these principles in the practical instruction students receive toward the acquisition of skills and general facility in the standards of that language.

Summary of key principles

  1. Language is a dialect with an army and a navy
  2. Standard English is just one of the many dialects of English
  3. We are all multilingual in many different ways
  4. A dictionary is just another text written in the language, not a law of the language
  5. Language is more than words and rules

Principle 1: Language is a dialect with an army and a navy

This famous dictum (see Wikipedia on its origins) encapsulates the fact that language does not have clear boundaries and that there is no formula for distinguishing where one language ends and another begins. Often, this distinction depends on the political interests of different groups. In different political contexts, the different Englishes around the world today could easily qualify for separate language status, and many of them have achieved this.

But exploring the examples that help us make sense of this pithy phrase also teaches us the importance of language in the negotiation of our identity and its embeddedness in the wider social sphere. There are piles and piles of evidence to support this claim, and learning about that evidence has the potential of making us all better human beings, less prone to disenfranchise others based on the way they speak (in as much as any form of schooling is capable of such a thing). Certainly more worthy than knowing how to spot the passive voice.

Principle 2: Standard English is just one of the many dialects of English

Not only are there no clear boundaries between languages, there are no clear principles of what constitutes an individual language. A language is identified by its context of use as much as by the forms it uses. So if kayak and a propos can be a part of English, so can ain’t and he don’t. It is only a combination of subconscious convention and conscious politics that decides which is which.

Anybody exploring the truth of this statement (and, yes, I’m perfectly willing to say the word truth in this context) will learn about the many features of English and all human languages such as:

  • stratification of language through registers
  • regional and social variation in language
  • processes of change in language over time
  • the fact that what we call ‘good grammar’ is a set of more or less fixed conventions of expression in certain contexts
  • the ubiquity of multiple codes and constant switching between codes (in fact, I think this is so important that it gets a top billing in this list as number 3)

Again, although I’m highly skeptical of claims of causality from education to social change, I can’t see why instruction in this reality of our lives could not contribute to an international conversation about language politics. Perhaps an awareness of this ‘mantra’ could reduce the frequency of statements such as:

  • I know I don’t speak very good English
  • Word/expression X is bad English
  • Non-native speaker X speaks better English than native speaker Y

And just maybe, teachers of English will stop abusing their students with ‘this is bad grammar’ and instead guide them towards understanding that in different contexts, different uses are appropriate. Even at the most elementary levels, children can have fun learning to speak like a newscaster or a local farm hand, without the violent intrusion into their identity that comes from the misguided and evil labeling of the first as proper and the second as ‘not good English’. Or how about giving the general public enough information to judge the abominable behavior of the journalist pseudo-elites during the ‘Ebonics controversy’ as the disgraceful display of shameful ignorance it was.

Only when they have learned all that should we mention something about the direct object.

Principle 3: We are all multilingual in many different ways

One of the things linguistics has gathered huge amounts of evidence about is the fact that we are all constantly dealing with multiple quite distinct codes. This is generally not expressed in quite as stark terms as I do here, but I take my cue from bilingualism studies, where it has been suggested (either by Chaika or Romaine – I can’t track down the reference to save my life) that we should treat all our study of language as if bilingualism was the default state rather than some exception. This would make good sense even if we went by the educated guess that just over half of the world’s population regularly speaks two or more languages. But I want to go further.

First, we know from principle 1 that there is no definition of language that allows us to draw clear boundaries between individual languages. Second, we know from principle 2 that each language consists of many different ‘sub-languages’ or ‘codes’. Because language is so vast and complex, it follows that knowing a language is not an either/or proposition. People are constantly straddling boundaries between different ways of speaking and understanding. Speaking in different ways for different purposes, to different people, in different codes. And we know that people switch between the codes constantly for different reasons, even in the same sentence or within just one word (very common in languages with rich morphologies like Czech – less common in English but possible with ‘un-fucking-convincing’). Some examples that should illustrate this: “Ladies and gentlemen, we’re screwed” and “And then Jeff said unto Karen”.

We also know, from all the wailing and gnashing of teeth deriving from the ignorance of principle 2, that acquiring these different codes is not easy. The linguist Jim Miller has suggested to me that children entering school are in a way required to learn a foreign language. In Czech schools, they are instructed in a new lexicon and new morphology (e.g. say ‘malý’ instead of ‘malej’). In English schools they are taught a strange syntax with, among other things, a focus on nominal structures (cf. ‘he went and’ vs. ‘his going was’) as well as an alien lexicon (cf. ‘leave’ vs. ‘depart’). This is compounded by a spelling system whose principles are often explained on the basis of a phonology they don’t understand (e.g. much of England pronouncing ‘bus’ as ‘booss’ but using teaching materials that rhyme ‘bus’ with ‘us’).

It is not therefore a huge leap to say that for all intents and purposes, we are all multilingual even if we only officially speak one language with its own army and a navy. Or at least, we engage all the social, cognitive and linguistic processes that are involved in speaking multiple languages. (There is some counter-evidence from brain imaging but in my view it is still too early to interpret this either way.)

But no matter whether we accept the strong or the weak version of my proposition, learning about the different pros and cons would make students’ lives much easier at all levels. Instead of feeling like failures over their grammar, they could be encouraged to practice switching between codes. They could also take comfort in the knowledge that there are many different ways of knowing a language and no one person can know it all.

If any time is left over, let’s have a look at constituent structures.

Principle 4: A dictionary is just another text written in the language, not a law of the language

The deference shown to ‘official’ reference materials is at the heart of a myth that the presence of a word in a dictionary in some way validates the word as being a ‘real’ word in the language. But the absolute truth about language that everyone should know and repeat as a mantra every time they ask ‘is X a word’ is that dictionaries are just another text. In fact, they are their own genre of a type that Michael Hoey calls text colonies. This makes them cousins of the venerable shopping list. Dictionaries have their own conventions, their own syntax and their own lexicon. They have ‘heads’ and ‘definitions’ that are both presented in particular ways.

What they most emphatically do not do is confirm or disconfirm the existence of a word or its meaning. It’s not just that they are always behind current usage, it’s that they only reflect a fraction of the knowledge involved in knowing and using words (or, as the philosopher John Austin would say, ‘doing things with words’). Dictionaries fulfil two roles at once. They are useful tools for gathering information to enable us to deal with the consequences of principle 3 (i.e. to function in a complex multi-codal linguistic environment both as passive and active participants). And they help us express many beliefs about our world such as:

  • The world is composed of entities with meanings
  • Our knowledge is composed of discrete items
  • Some things are proper and others are improper

Perhaps this can become more transparent when we look at entries for words like ‘the’ or ‘cat’. No dictionary definition can help us with ‘the’ unless we can already use it. In this case, the dictionary serves no useful role other than as a catalog of our reality. Performatively assuring us of its own relevance by its irrelevance. How about the entry for ‘cat’? Here, the dictionary can play a very useful role in a bilingual situation. A German will see ‘cat = Katze’ and all will be clear in an instant. A picture can be helpful to those who have no language yet (little children). But the definition of ‘cat’ as “a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws” is of no use to anybody who doesn’t already know what ‘cat’ means. Or at the very least, if you don’t know ‘cat’, your chances of understanding any definition in the dictionary are very low. A dictionary can be helpful in reminding us that ‘cat’ is also used to refer to ‘man’ among jazz musicians (as in “he’s a cool cat”) but again, all that requires existing knowledge of ‘cat’. A dictionary definition that would say ‘a cat is that thing you know as a cat but jazz musicians sometimes use cat to refer to men’ would be just as useful.

In this way, a dictionary is like an audience in the theatre, who are simultaneously watching a performance, and performing themselves the roles of theatre audiences (dress, behavior, speech).

It is also worthwhile to think about what is required of the dictionary author. While the basic part of the lexicographer’s craft is the collection of usage examples (on index cards in the past and in corpora today) and their interpretation, all this requires a prior facility with the language and much introspection about the dictionary maker’s own linguistic intuitions. So lexicographers make mistakes. Furthermore, in the last hundred years or so, they almost never start from scratch. Most dictionaries are based on some older dictionary, and the order of definitions is often as much a reflection of tradition (e.g. in the case of the word ‘literally’ or the word ‘brandish’) as of an analysis of actual usage.

Why should this be taught as part of the language education curriculum? Simple! Educated people should know how the basic tools surrounding their daily lives work. But even more importantly, they should never use the presence of a word in a dictionary, or indeed the definition of a word in a dictionary, as the definitive argument for their preferred meaning of a word (outside some contexts such as playing SCRABBLE or confirming an uncertainty over archaic or specialist words).

An educated person should be able to go and confirm any guidance found in a dictionary by searching a corpus and evaluating the evidence. It’s not nearly as hard as identifying parts of speech in a sentence and about a million times more useful for the individual and beneficial for society.
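For instance, here is a minimal sketch of what that check might look like in Python, assuming the NLTK package and its copy of the Brown corpus (the word and the corpus are illustrative choices, not a recommendation):

    # A minimal sketch: check a word's actual usage in a corpus instead of
    # deferring to a dictionary. Assumes the NLTK package is installed.
    import nltk

    nltk.download("brown", quiet=True)  # fetch the corpus on first run
    from nltk.corpus import brown

    text = nltk.Text(brown.words())

    # Show the word in its real contexts -- the evidence, not the verdict.
    text.concordance("literally", lines=10)

Ten lines of real context will tell you more about how ‘literally’ behaves than any dictionary entry.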

Principle 5: Language is more than words and rules

Steven Pinker immortalised the traditional structuralist vision of what language consists of in the title of his book “Words and rules”. This vision is almost certainly wrong. It is based on an old articulation of language as being the product of a relatively small number of rules applied to a really large number of words (Chomsky expressed this quite starkly but the roots of this model go much deeper).

That is not to say that words and rules are not useful heuristic shortcuts to talking about language. I use this metaphor myself every day. And I certainly am not proposing that language should not be taught with reference to this metaphor.

However, this is a very impoverished view of language and rather than spend time on learning the ‘relatively few’ rules for no good reason other than to please Mark Liberman, why not learn some facts we know about the vastness and complexity of language. That way instead of having a completely misguided view of language as something finite that can be captured in a few simple precepts (often expressed in one of those moronic ‘Top X grammatical errors lists’), one could actually have a basic understanding of all the ways language expresses our minds and impresses itself on our life. Perhaps, we could even get to a generation of psycholinguists and NLP specialists who try to deal with language as it actually is rather than in its bastardised form that can be captured by rules and words.

Ok, so I’m hoisting my theoretical flag here, flying the colors of the ‘usage-based’, ‘construction grammar’, ‘cognitive semantics’ crowd. But the actual curricular proposal is theory free (other than in the ‘ought’ portion of it) and based on well-known and oft-described facts – many of them by the Language Log itself.

To illustrate the argument, let’s open the dictionary and have a look at the entry ‘get’. It will go on for several pages even if we decide to hide all its phrasal friends under separate entries. Wiktionary lists 26 definitions as a verb and 4 as a noun, which is fairly conservative. But each of these definitions also comes with usage examples and usage exceptions. For instance, in ‘get behind him’, it is intransitive but in ‘get Jim to come’, it is transitive. This is combined with general rules that apply across all uses such as ‘got’ as the past tense and ‘gets’ as the third person singular. Things can be even more complex as with the word ‘bad’ which has the irregular superlative ‘worst’ when it is used in a negative sense as in ‘teaching grammar in schools is the worst idea’ and ‘baddest’ in the positive sense as in ‘Mark Liberman is the baddest linguist on the internet’. ‘Baddest’ is also only appropriate in certain contexts (so my example is at the same time an illustration of code mixing). When we look at any single word in the dictionary, the amount of conscious and unconscious knowledge required to use the word in our daily speech is staggering. This is made even trickier by the fact that not everyone in any one speech community has exactly the same grasp of the word, leading to a lot of negotiation and conversation repair.

It is also the sort of stuff that makes understanding of novel expressions like ‘she sneezed the napkin off the table’ possible. If we must, let’s do some sentence diagramming now.

Some other things to know

I could go on. Some of my other candidate principles didn’t make this list, either because they could be subsumed by one of the items above, or because they are too theory-laden, or because I wanted a list of 5, or because this blog post is over 3,000 words already. They are:

  • All lexical knowledge is encyclopedic knowledge
  • Rules of the road like conversation repair, turn taking or text cohesion are just as much part of language as things like passives, etc.
  • Metaphors (and other types of figurative language) are normal, ubiquitous and necessary for language
  • Pretty much every prejudice about gender and language is wrong (like who is more conservative, etc.)
  • No language is more beautiful or amazing than any other; saying this is most likely part of a nationalistic discourse
  • Children are not very good language learners when you put them in the same learning context as adults (e.g. two hours of instruction a week as opposed to living in a culture with no other obligation but to learn)
  • Learning a language is hard and it takes time
  • The etymology of a word does not reflect some deeper meaning of the word
  • Outside some very specific contexts (e.g. language death), languages don’t decline, they change
  • Etc.

Why we should not teach grammar in schools

So, that was my outline of what linguistic expertise should be part of the language education curriculum – and as importantly should inform teachers across all subjects. Now, let’s have a look, as promised, at why Mark Liberman is wrong to call for the teaching of grammar in schools in the first place.

To his credit, he does not trot out any of the usual utilitarian arguments for the teaching of grammar:

  • It will make learning of foreign languages easier
  • It will make today’s graduates better able to express themselves
  • It will contribute to higher quality of discourse
  • It will stop the decline of English
  • It will improve critical thinking of all students

These are all bogus claims: not supported by evidence, and with some evidence against them (see this report for a summary of some of it).

My argument is with his interpretation of his claim that

a basic understanding of how language works should be part of what every educated person knows

I have a fundamental problem with the very notion of ‘educated person’ because of its pernicious political baggage. But in this post I’ve used it to accept the basic premise that if we’re going to teach lots of stuff to children in schools, we might as well teach them the good stuff. Perhaps not always the most immediately useful stuff, but definitely the stuff that reflects the best of what we have to offer ourselves as the humanity we want to be.

But if that is the case, then I don’t think his offer of

a modern version of the old-fashioned idea that grammar (and logic and rhetoric :-)) should be for everyone

is that sort of stuff. Let’s look at what that kind of education did for the likes of Orwell and Strunk and White, who had the benefit of all the grammar education a schoolmaster’s cane can beat into a man and yet committed such outrageous, embarrassing and damaging transgressions against linguistic knowledge (not infrequently decried on the Language Log).

The point is that ‘grammar’ (and ‘logic’ and ‘rhetoric’) do not represent even a fraction of the principles involved in how language works. The only reason why we would privilege their teaching over the teaching of the things I propose (which cover a much larger area of how language works) is that they have been taught in the past. But why? Basing it on something as arbitrary as the hodgepodge that is the treebank terminology proposed by Mark Liberman only exposes the weakness of the argument – sure, it’s well known and universally understood by professional linguists but it hides as much about language as it reveals. And as Mark acknowledges, the aim is not to educate future linguists. There are alternatives such as Dixon’s excellent Basic Linguistic Theory that take the variation across languages into account much more subtly. But even then, we avoid all the really interesting things about language. I’m not against some very basic metalinguistic terminology to assist students in dealing with language but parsing sentences for no other reason than to do it seems pointless.

The problem with basing a curriculum on old-fashioned values is that we are catering to the nostalgia of old men (and sorry Mark, despite my profound appreciation for your work, you are an old man). (By the way, I use ‘men’ to evoke a particular image rather than to make any assertions about the gender of the person in question.) But it’s not just nostalgia. It’s also their disorientation in a changing world and discomfort with encountering people who are not like them – and, oh horror, can’t tell the passive voice from the past tense. Yes, it would be more convenient for me if everyone I spoke to had the same appreciation for what an adverb is (particularly when I was teaching foreign languages). But is that really the best we have to offer when we want to show what should be known? How much of this is just the maintenance of the status of academics who want to see their discipline reflected in the cauldron of power and respectability that is the school curriculum? If the chemists get to waste everyone’s time with polymers, why not us with trees and sentence diagrams? In a follow-up post, Dick Hudson claims that at present “we struggle to cope with the effects of [the disaster of no grammar teaching]”. But I don’t see any disaster going on at the moment. Why is teaching no grammar more disastrous than teaching grammar based on Latin and Greek with little connection to the nature of English? Whose rules are we after?

The curriculum is already full to bursting with too much stuff that someone threw up as a shibboleth for being educated and thus eligible for certain privileges. But perhaps our curriculum has now become the kind of stable that needs the janitorial attention of a modern Heracles.

Post script: Minimalist metalinguistic curriculum

I once analysed the Czech primary curriculum and found over 240 metalinguistic terms. I know, ridiculous. The curriculum was based on the work of eminent Czech structuralists (whose theorizing influenced much of the rest of the world). It didn’t make the Czechs any more educated, eloquent, or better at learning foreign languages – although it did make it easier for me to study linguistics. But as I said above, there is certainly some place for metalanguage in general education. Much of it comes from stylistics but when it comes to grammar, I’d stick to about 15. This is not a definitive list:

  1. Noun
  2. Verb
  3. Adjective
  4. Adverb
  5. Preposition
  6. Pronoun
  7. Prefix
  8. Suffix
  9. Clause
  10. Past form of a verb
  11. Future form of a verb
  12. Present form of a verb
  13. Subject
  14. Object
  15. Passive

Languages with rich morphology might need a few others to cover things like case but overall, in my career as a language educator, I’ve never felt the need for any more, nor have I ever felt I was in the presence of uneducated people when talking to someone who couldn’t tell me what the infinitive was. In fact, I’d rather take some items away (like adverb, prefix, suffix, or clause) than add new ones.

Sentence diagramming is often proposed as a way of instilling some metalinguistic awareness. I don’t see any harm in that (and a lot of potential benefit). But only with the enormous proviso that students use it to learn the relationship between parts of their language in use and NOT as a gateway to a cancerous taxonomy pretending to the absolute existence of things that could easily be just artifacts of our metacognition.

Things are different when it comes to the linguistic education of language teachers. On the one hand, I’m all for language teachers having a comprehensive education in how language works. On the other hand, I have perpetrated a lot of such teacher training over the years and have watched others struggle with it, as well. And the effects are dispiriting. I’ve seen teachers who can diagram a sentence with the best of them and are still quite clueless when it comes to understanding how speech acts work. Very often language teachers find any such lessons painful and something to get through. This means that the key thing they remember about the subject is that linguistics is hard or boring or both.


Three books of the year 2013 and some books of the century 1900-2013


I have been asked (as every year) to nominate three books of the year for Lidové Noviny (a Czech paper I contribute to occasionally). This is always a tough choice for me and some years I don’t even bother responding. This is because I don’t tend to read books ‘of the moment’ and range widely in my reading across time periods. But I think I have some good ones this time. Additionally, Lidové Noviny are celebrating 120 years of being a major Czech newspaper so they also asked me for a book of the century (since 1900 till now). It makes no sense to even try to pick ‘the one’, so I picked three categories that are of interest to me (language, society and fiction) and chose three books in each.


Three books of 2013

Thanks to the New Books Network, I tend to be more clued in on the most recent publications, so 2 of my recommendations are based on interviews heard there.

A Cultural History of the Atlantic World, 1250-1820 by John K. Thornton is without question a must read for anyone interested in, well, history. Even though he is not the first, Thornton shows most persuasively how the non-Europeans on both sides of the Atlantic (Africa and the Americas) were full-fledged political partners of the Europeans, who are traditionally seen simply as conquerors with their gunpowder, horses and steel weapons. He shows how these were just a small part of the mix, having almost no impact in Africa and playing a relatively small role in the Americas. In both cases, Europeans succeeded through alliances with local political elites and for centuries simply had no access to vast swathes of both continents.

Raising Germans in the Age of Empire: Youth and Colonial Culture, 1871-1914 by Jeff Bowersox. This book perhaps covers an exceedingly specific topic (compared to the vast sweep of my first pick) but it struck a chord with me. It shows the complex interplay between education, propaganda and their place in the lives of youth and adults alike.

Writing on the Wall: Social Media – the First 2,000 Years by Tom Standage. Standage’s eye-opening book on the telegraph (The Victorian Internet) now has a companion dealing with social communication networks going back to the Romans. Essential corrective to all the gushing paradigm shifters. He doesn’t say there’s nothing new about the Internet but he does say that there’s nothing new about humans. Much lighter reading but still highly recommended.

Books of the Century

This really caught my fancy. I was asked for books that affected me, but I thought more about those that had an impact going beyond the review cycle of a typical book.

Language

Course in General Linguistics by Ferdinand de Saussure published in 1916. The Course (or Le Cours), published posthumously by Saussure’s students from lecture notes, is the cornerstone of modern linguistics. I think many of the assumptions have been undermined in the past 30-40 years and we are ripe for a paradigm change. But if you talk to a modern linguist, you will still hear much of what Saussure was saying to his students in the early 1900s in Geneva. (Time to rethink the Geneva Convention in language?)

Syntactic Structures by Noam Chomsky published in 1957. Unlike The Course, which is still worth reading by anyone who wants to learn about language, Syntactic Structures is now mostly irrelevant and pretty much incomprehensible to non-experts. However, it launched the Natural Language Processing revolution and its seeds are still growing (although not in the Chomskean camp). Its impact may not survive the stochastic turn in NLP but the computational view of language is still with us for good and for ill.

Metaphors We Live By by George Lakoff and Mark Johnson published in 1980 while not completely original, kickstarted a metaphor revival of sorts. While, personally, I think Lakoff’s Women, Fire, and Dangerous Things is by far the most important book of the second half of the 20th century, Metaphors We Live By is a good start (please, read the 2003 edition and pay special attention to the Afterword).

Society

The Second Sex by Simone de Beauvoir published in 1949 marked a turning point in discourse about women. Although the individual insights had been available prior to Beauvoir’s work, her synthesis was more than just a rehashing of gender issues.

Language and Woman’s Place by Robin Tolmach Lakoff published in 1973 stands at the foundation of how we speak today about women and how we think about gender being reflected in language. I would now quibble with some of the linguistics but not with the main points. Despite the progress, it can still open the eyes of readers today.

The Savage Mind by Claude Lévi-Strauss published in 1962 was one of the turning points in thinking about modernity, complexity and backwardness. Lévi-Strauss’s quip that philosophers like Sartre were more a subject of study for him than valuable interlocutors is still with me when I sit in on a philosophy seminar. I read this book without any preparation and it had a profound impact that has stayed with me to this day.

Fiction

None of the below are my personal favourites but all have had an impact on the zeitgeist that transcended just the moment.

1984 by George Orwell published in 1949. Frankly, I can’t stand this book. All of its insight is skin deep and its dystopian vision (while not in all aspects without merit) lacks the depth often attributed to it. There are many sci-fi and fantasy writers who have given the issue of societal control and freedom much more subtle consideration. However, it’s certainly had a more profound impact on general discourse than possibly any piece of fiction of the 20th century.

The Joke by Milan Kundera published in 1967 is the only book by Kundera with literary merit (I otherwise find his writing quite flat). Unlike Orwell, Kundera has the capacity to show the personal and social dimensions of totalitarian states. In The Joke he shows both the randomness of dissent and the heterogeneity of totalitarian environments.

The Castle by Franz Kafka, published posthumously in 1926 (or just the collected works of Kafka), has provided a metaphor for alienation for the literati of the next hundred years. I read The Castle first, so for me more than the others it illustrates the sense of helplessness and alienation that a human being can experience when faced with the black box of anonymous bureaucracy. Again, I rate this for impact, rather than putting it on a ‘good read’ scale.

My personal favorites would be authors rather than individual works: Kurt Vonnegut, Robertson Davies, James Clavell would make the list for me. I also read reams of genre fiction and fan fiction that can easily stand up next to any of “the greats”. I have no shame and no guilty pleasures. I’ve read most of Terry Pratchett and regularly reread childhood favorites by the likes of Arthur Ransome or Karl May. I’ve quoted from Lee Child and Tom Clancy in academic papers and I’ve published an extensive review of a Buffy the Vampire Slayer fan fiction novel.

Finally, for me, the pinnacle of recent literary achievement is Buffy the Vampire Slayer. I’ve used this as an example of how TV shows have taken over from the Novel, as the narrative format addressing weighty issues of the day, and Buffy is one of the first examples. Veronica Mars is right up there, as well, and there are countless others I’d recommend ‘reading’.

Pervasiveness of Obliging Metaphors in Thought and Deed


“when history is at its most obliging, the history-writer needs be at his most wary.” (China by John Keay)

I came across this nugget of wisdom when I was re-reading the Introduction to John Keay’s history of China. And it struck me that in some way this quote could be a part of the motto of this blog. The whole thing might then read something like this:

Hack at your thoughts at any opportunity to see if you can reveal new connections through analogies, metonymies and metaphors. Uncover hidden threads, weave new ones and follow them as far as they take you. But when you see them give way and oblige you with great new revelations about how the world really is, be wary!

Metaphors can be very obliging in their willingness to show us that things we previously thought separate are one and the same. But that is almost always the wrong conclusion. Everything is what it is, it is never like something else. (In this I have been subscribing to ‘tiny ontology’ even before I’d heard about it.) But we can learn things about everything when we think about it as something else. Often we cannot even think of many things other than through something else. For instance, electricity. Electrons are useful to think of as particles or as waves. Electrons are electrons; they are not little balls nor are they waves. But when we start treating them as one or the other, they become more tractable for some problems (electrical current makes more sense when we think of them as waves and electricity generating heat makes sense when we think of them as little balls).

George Lakoff and Mark Johnson summarize metaphors in the X IS Y format (e.g. LOVE IS A JOURNEY) but this implied identity is where the danger lies. If love is a journey as we can see in a phrase like ‘We’ve arrived at a junction in our relationship’, then it surely must be a journey in all respects: it has twists and turns, it takes time, it is expensive, it happens on asphalt! Hold on! Is that last one the reason ‘love can burn with an eternal flame’? Of course not. Love IS NOT a journey. Some aspects of what we call love make more sense to us when we think of them as a journey. But others don’t. Since it is obvious that love is not a journey but is like a journey, we don’t worry about it. But it’s more complicated than that. The identities implied in metaphor are so powerful (more so to some people than others) that some mappings are not allowed because of the dangers implied in following them too far. ‘LOVE IS A CONTRACT’ is a perfectly legitimate metaphor. There are many aspects of a romantic relationship that are contract-like. We agree to exclusivity, a certain mode of interaction, considerations, etc. when we declare our love (or even when we just feel it – certain obligations seem to follow). But our moral compass just couldn’t stomach (intentional mix) the notion of paying for love or being in love out of obligation which could also be traced from this metaphor. We instinctively fear that ‘LOVE IS A CONTRACT’ is far too obliging a metaphor and we don’t want to go there. (By we, I mean the general rules of acceptable discourse in certain circles, not every single cognizing individual.)

So even though metaphors do not describe identity, they imply it, and not infrequently, this identity is perceived as dangerous. But there’s nothing inherently dangerous about it. The issue is always the people and how willing they are to let themselves be obliged by the metaphor. They are aided and abetted in this by the conceptual universe the metaphor appears in but never completely constrained by it. Let’s take the common metaphor of WAR. I often mention the continuum of ‘war on poverty’, ‘war on drugs’, and ‘war on terror’ as an example of how the metaphors of ‘war’ do not have to lead to actual ‘war’. Lakoff showed that they can in ‘metaphors can kill’. But we see that they don’t have to. Or rather we don’t have to let them. If we don’t apply the brakes, metaphors can take us almost anywhere.

There are some metaphors that are so obliging, they have become clichés. And some are even recognized as such by the community. Take, for instance, Godwin’s law. ‘X is Hitler’ or ‘X is a Nazi’ are such seductive metaphors that sooner or later someone will apply them in almost any even remotely relevant situation. And even with the awareness of Godwin’s law, people continue to do it.

The key principle of this blog is that anything can be a metaphor for anything with useful consequences. Including:

  • United States is ancient Rome
  • China today is Soviet Union of the 1950s
  • Saddam Hussein is Hitler
  • Iraq is Vietnam
  • Education is a business
  • Mental difficulties are diseases
  • Learning is filling the mind with facts
  • The mind is the software running on the hardware of the brain
  • Marriage is a union between two people who love each other
  • X is evolved to do Y
  • X is a market place

But this only applies with the HUGE caveat that we must never misread the ‘is’ for a statement of perfect identity or even isomorphism (same-shapedness). It’s ‘is(m)’. None of the above metaphors are perfect identities – they can be beneficially followed as far as they take us, but each one of them needs to be bounded before we start drawing conclusions.

Now, things are not helped by the fact that any predication or attribution can appear as a kind of metaphor. Or rather it can reveal the same conceptual structures the same way metaphors do.

‘John is a teacher.’ may seem like a simple statement of fact but it’s so much more. It projects the identity of John (of whom we have some sort of a mental image) into the image schema of a teacher. That there’s more to this than just a simple statement can be revealed by ‘I can’t believe that John is a teacher.’ The underlying mental representations, and the work we do on them, are not that different from those behind ‘John is a teaching machine.’ Even simple naming is subject to this, as we can see in ‘You don’t look much like a Janice.’

Equally, simple descriptions like ‘The sky is blue’ are more complex. The sky is blue in a different way than somebody’s eyes are blue or the sea is blue. I had that experience myself when I first saw the ‘White Cliffs of Dover’ and was shocked that they were actually white. I had just assumed that they were a lighter kind of cliff than typical, or had some white smudges. They were white in the way chalk is white (through and through) and not in the way a zebra crossing is white (as opposed to a double yellow line).

A famous example of how complex these conceptualisations can get is ‘In France, Watergate would not have harmed Nixon.’ The ‘in France’ and ‘not’ bits establish a mental space in which we can see certain parts of what we know about Nixon and Watergate projected onto what we know about France. Which is why sentences like “The King of France is bald.” and “Unicorns are white.” make perfect sense even though they both describe things that don’t exist.

Now, that’s not to say that sentences like ‘The sky is blue’, ‘I’m feeling blue’, ‘I’ll praise you to the sky.’, or ‘He jumped sky high.’ and ‘He jumped six inches high.’ are cognitively or linguistically exactly the same. There’s lots of research that shows that they have different processing requirements and are treated differently. But there seems to be a continuum in the ability of different people (much research is needed here) to accept the partiality of any statement of identity or attribution. At one extreme, there appears to be something like autism, which leads to a reduced ability to identify figurative partiality in any predication; but actually, most of the time, we all let ourselves be swayed by the allure of implied identity. Students are shocked when they see their teacher kissing their spouse or shopping in the mall. We even ritualize this sort of thing when we expect unreasonable morality from politicians or other public figures. This is because, over the long run, overtly figurative sentences such as ‘he has eyes like a hawk’ and ‘the hawk has eyes’ need similar mental structures to be present to make sense to us. And I suspect that this is part of the reason why we let ourselves be so easily obliged by metaphors.

Update: This post was intended as a warning against over-obliging metaphors that lead to perverse understandings of things as other things in unwarranted totalities. In this sense, they are the ignes fatui of Hobbes. But there’s another way in which over-obliging metaphors can be misleading. And that is, they draw on their other connections to make it seem we’ve come to a new understanding when in fact all we’ve done is rename elements of one domain with the names of elements of another domain without any elucidation. This was famously and devastatingly the downfall of Skinner’s Verbal Behavior under Chomsky’s critique. He simply (at least in the extreme caricature that was Chomsky’s review) took things about language and described them in terms of operant conditioning. No new understanding was added, but because the ‘new’ science of psychology was in vogue as the future of our understanding of everything, just using those terms made us assume there was a deeper knowledge. Chomsky was ultimately right – if only to fall prey to the same danger with his computational metaphors of language. Another area where this is happening is evolution, genetics and neuroscience, which are often used (sometimes all at once) to simply relabel something without adding any new understanding whatsoever.

Update 2: Here’s another example of an over-obliging metaphor, in the seeking of analogies for the worries about climate change: http://andrewgelman.com/2013/11/25/interesting-flawed-attempt-apply-general-forecasting-principles-contextualize-attitudes-toward-risks-global-warming/#comment-151713. My comment was:

…analogies work best when they are opportunistic, ad hoc, and abandoned as quickly as they are adopted. Analogies, if used generatively (i.e. to come up with new ideas), can be incredibly powerful. But when used exegetically (i.e. to interpret or summarize other people’s ideas), they can be very harmful.

The big problem is that in our cognition, ‘x is y’ and ‘x is like y’ are often treated very similarly. But the fact is that x is never y. So every analogy has to be judged on its own merits and we need to carefully examine why we’re using the analogy and at every step consider its limits. The power of analogy is in its ability to direct our thinking (and general cognition), i.e. not in its ‘accuracy’ but in its ‘aptness’.

I have long argued that history should be included in considering research results and interpretations. For example, every ‘scientific’ proof of some fundamental deficiencies of women with respect to their role in society has turned out to be either inaccurate or non-scalable. So every new ‘proof’ of a ‘woman’s place’ needs to be treated with great skepticism. But that does not mean that such a proof cannot exist. It does mean, however, that we shouldn’t base any policies on it until we are very, very certain.


Sunsets, horizons and the language/mind/culture distinction


For some reason, many accomplished people, when they are done accomplishing what they’ve set out to accomplish, turn their minds to questions like:

  • What is primary: thought or language?
  • What is primary: culture or language?
  • What is primary: thought or culture?

I’d like to offer a small metaphor hack for solving or rather dissolving these questions. The problem is that all three concepts: culture, mind and language are just useful heuristics for talking about aspects of our being. So when I see somebody speaking in a way I don’t understand, I can talk about their language. Or others behave in ways I don’t like, so I talk about their culture. Then, there’s stuff going on in my head that’s kind of like language, but not really, so I call that sort of stuff mind. But these words are just useful heuristics not discrete realities. Old Czechs used the same word for language and nation. English often uses the word ‘see’ for ‘understand’. What does it mean? Not that much.

Let’s compare it with the idea of the setting sun. I see the Sun disappearing behind the horizon and I can make some useful generalizations about it. Organize my directions (East/West), plant plants to grow better, orient how my dwelling is positioned, etc. And my description of this phenomenon as ‘the sun is setting behind the horizon’ is perfectly adequate. But then I might start asking questions like ‘what does the Sun do when it’s behind the horizon?’ Does it turn itself off and travel under the earth to rise again in the East the next morning? Or does it die and a new one rises again the next day? Those are all very bad questions because I accepted my local heuristic as describing a reality. It would be even worse if I tried to go and see the edge or the horizon. I’d be like the two fools who agreed that they would follow the railway tracks all the way to the point where they meet. They keep going until one of them turns around and says ‘dude, we already passed it’.

So to ask questions about how language influences thought and culture influences language is the same as trying to go see the horizon. Language, culture and mind are just ways of describing things for particular purposes and when we start using them outside those purposes, we get ourselves in a muddle.


The complexities of simple: What simple language proponents should know about linguistics [updated]


Update

Part of this post was incorporated into an article I wrote with Brian Kelly and Alistair McNaught that appeared in the December issue of Ariadne. As part of that work and feedback from Alistair and Brian, I expanded the final section from a simple list of bullets into a more detailed research programme. You can see it below and in the article.

Background: From spelling reform to plain language


The idea that if we could only improve how we communicate, there would be less misunderstanding among people is as old as the hills.

Historically, this notion has been expressed through things like school reform, spelling reform, publication of communication manuals, etc.

The most radical expression of the desire for better understanding is the invention of a whole new artificial language intended to serve as a universal language for humanity. This has a long tradition but seemed to gain most traction towards the end of the nineteenth century with the introduction and relative success of Esperanto.

But artificial languages have been a failure as a vehicle of global understanding. Instead, in about the last 50 years, the movement for plain English has been taking the place of constructed languages as something on which people pinned their hopes for clear communication.

Most recently, there have been proposals suggesting that “simple” language should become a part of a standard for accessibility of web pages alongside other accessibility standards issued by the W3C standards body: http://www.w3.org/WAI/RD/2012/easy-to-read/Overview. This post was triggered by this latest development.

Problem 1: Plain language vs. linguistics

The problem is that most proponents of plain language (like so many would-be reformers of human communication) seem to be ignorant of the wider context in which language functions. There is much that has been revealed by linguistic research in the last century or so and in particular since the 1960s that we need to pay attention to (to avoid confusion, this does not refer to the work of Noam Chomsky and his followers but rather to the work of people like William Labov, Michael Halliday, and many others).

Languages are not a simple matter of grammar. Any proposal for content accessibility must consider what is known about language from the fields of pragmatics, sociolinguistics, and cognitive linguistics. These are the key aspects of what we know about language collected from across many fields of linguistic inquiry:

  • Every sentence communicates much more than just its basic content (propositional meaning). We also communicate our desires and beliefs (e.g. “It’s cold here” may communicate “Close the window” and “John denied that he cheats on his taxes” communicates that somebody accused John of cheating on his taxes. Similarly, choosing a particular form of speech, like slang or jargon, communicates belonging to a community of practice.)
  • The understanding of any utterance is always dependent on a complex network of knowledge about language, about the world, as well as about the context of the utterance. “China denied involvement.” requires an understanding of the context in which countries operate, as well as metonymy, as well as the grammar and vocabulary. Consider the knowledge we need to possess to interpret “In 1939, the world exploded.” vs. “In Star Wars, a world exploded.”
  • There is no such thing as purely literal language. All language is to some degree figurative. “Between 3 and 4pm.”, “Out of sight”, “In deep trouble”, “An argument flared up”, “Deliver a service”, “You are my rock”, “Access for all” are all figurative to different degrees.
  • We all speak more than one variety of our language: formal/informal, school/friends/family, written/spoken, etc. Each of these varieties has its own code. For instance, “she wanted to learn” vs. “her desire to learn” demonstrates a common difference between spoken and written English where written English often uses clauses built around nouns.
  • We constantly switch between different codes (sometimes even within a single utterance).
  • Bilingualism is the norm in language knowledge, not the exception. About half the world’s population regularly speaks more than one language but everybody is “bi-lingual” in the sense that they deal with multiple codes.
  • The “standard” or “correct” English is just one of the many dialects, not English itself.
  • The difference between a language and a dialect is just as much political as linguistic. An old joke in linguistics goes: “A language is a dialect with an army and a navy.”
  • Language prescription and requirements of language purity (incl. simple language) are as much political statements as linguistic or cognitive ones. All language use is related to power relationships.
  • Simplified languages develop their own complexities if used by a real community through a process known as creolization. (This process is well described for pidgins but not as well for artificial languages.)
  • All languages are full of redundancy, polysemy and homonymy. It is the context and our knowledge of what is to be expected that makes it easy to figure out the right meaning.
  • There is no straightforward relationship between grammatical features and language obfuscation or lack of clarity (e.g. it is just as easy to hide things using the active as the passive voice, or any Subject-Verb-Object sentence as an Object-Subject-Verb one).
  • It is difficult to call any one feature of a language universally simple (for instance, SVO word order or no morphology) because many other languages use what we call complex as the default without any increase in difficulty for the native speakers (e.g. use of verb prefixes/particles in English and German)
  • Language is not really organized into sentences but into texts. Texts have internal organization to hang together formally (John likes coffee. He likes it a lot.) and semantically (As I said about John. He likes coffee.) Texts also relate to external contexts (cross reference) and their situations. This relationship is both implicit and explicit in the text. The shorter the text, the more context it needs for interpretation. For instance, if all we see is “He likes it.” written on a piece of paper, we do not have enough context to interpret the meaning.
  • Language is not used uniformly. Some parts of language are used more frequently than others. But this is not enough to understand frequency. Some parts of language are used more frequently together than others. The frequent co-occurrence of some words with other words is called “collocation”. This means that when we say “bread and …”, we can predict that the next word will be “butter” (see the sketch after this list). You can check this with a linguistic tool like a corpus, or even by using Google’s predictions in the search. Some words are so strongly collocated with other words that their meaning is “tinged” by those other words (this is called semantic prosody). For example, “set in” has a negative connotation because of its collocation with “rot”.
  • All language is idiomatic to some degree. You cannot determine the meaning of all sentences just by understanding the meanings of all their component parts and the rules for putting them together. And vice versa, you cannot just take all the words and rules in a language, apply them and get meaningful sentences. Consider “I will not put the picture up with John.” and “I will not put up the picture with John.” and “I will not put up John.” and “I will not put up with John.”
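To make the collocation point above concrete, here is a toy sketch in Python, again assuming the NLTK package and its Brown corpus; whether ‘butter’ actually tops the list depends on the corpus you use:

    # A toy sketch: count what actually follows 'bread and' in a corpus.
    from collections import Counter

    import nltk

    nltk.download("brown", quiet=True)
    from nltk.corpus import brown

    words = [w.lower() for w in brown.words()]

    # Collect the word two positions after every occurrence of 'bread and'.
    followers = Counter(
        words[i + 2]
        for i in range(len(words) - 2)
        if words[i] == "bread" and words[i + 1] == "and"
    )
    print(followers.most_common(5))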

It seems to me that most plain language advocates do not take most of these factors into account.

Some examples from the “How to write in plain English” guide: http://www.plainenglish.co.uk/files/howto.pdf.

Try to call the reader ‘you’, even if the reader is only one of many people you are talking about generally. If this feels wrong at first, remember that you wouldn’t use words like ‘the applicant’ and ‘the supplier’ if you were speaking to somebody sitting across a desk from you. [emphasis mine]

This example misses the point about the contextuality of language. The part in bold is the very crux of the problem. It is natural to use a different code (or register) with someone we’re speaking to in person and in a written communication. This is partly a result of convention and partly the result of the different demands of writing and speaking when it comes to the ability to point to what we’re speaking about. The reason it feels wrong to the writer is that it breaks the convention of writing. That is not to say that this couldn’t become the new convention. But the argument misses the point.

Do you want your letters to sound active or passive − crisp and professional or stuffy and bureaucratic?
Using the passive voice and sounding passive are not one and the same thing. This is an example of polysemy. The word “passive” has two meanings in English. One technical (the passive voice) and one colloquial (“he’s too passive”). The booklet recommends that “The mine had to be closed by the authority. (Passive)” should be replaced with “The authority had to close the mine. (Active)” But they ignore the fact that word order also contributes to the information structure of the sentence. The passive sentence introduces the “mine” sooner and thus makes it clear that the sentence is about the mine and not the local authority. In this case, the “active” construction made the point of the sentence more difficult to understand.

The same is true of nominalization. Another thing recommended against by the Plain English campaign: “The implementation of the method has been done by a team.” is not conveying the same type of information as “A team has implemented the method.”

The point is that this advice ignores the context as well as the audience. Using “you” instead of “customers” in “Customers have the right to appeal” may or may not be simpler depending on the reader. For somebody used to the conventions of written official English, it may actually take longer to process. But for someone who does not deal with written English very often, it will be easier. But there is nothing intrinsically easier about it.

Likewise for the use of jargon. The campaign gives as its first example of unduly complicated English:

High-quality learning environments are a necessary precondition for facilitation and enhancement of the ongoing learning process.

And suggests that we use this instead:

Children need good schools if they are to learn properly.

This may be appropriate when it comes to public debate but within the professional context of, say, policy communication, these 2 sentences are not actually equivalent. There are more “learning environments” than just schools and the “learning process” is not the same as having learned something. It is also possible that the former sentence appeared as part of a larger context that would have made the distinction even clearer but the page does not give a reference and a Google search only shows pages using it as an example of complex English. http://www.plainenglish.co.uk/examples.html

The How to write in plain English document does not mention coherence of the text at all, except indirectly when it recommends the use of lists. This is good advice but even one of their examples has issues. They suggest that the following is a good example of a list:

Kevin needed to take:
• a penknife
• some string
• a pad of paper; and
• a pen.

And on first glance it is, but lists are not just neutral replacements for sentences. They are a genre in their own right, used for specific purposes (Michael Hoey called them “text colonies”). Let’s compare the list above to the sentence below.

Kevin needed to take a penknife, some string, a pad of paper and a pen.

Obviously they are two different kinds of text used in different contexts for different purposes and this would impinge on our understanding. The list implies instruction, and a level of importance. It is suitable to an official document, for example something sent before a child goes to camp. But it is not suitable to a personal letter or even a letter from the camp saying “All Kevin needed to take was a penknife, some string, a pad of paper and a pen. He should not have brought a laptop.” To be fair, the guide says to use lists “where appropriate”, but does not mention what that means.

The issue is further muddled by the “grammar quiz” on the Plain English website: http://www.plainenglish.co.uk/quiz.html. It is a hodgepodge of irrelevant trivia about language (not just grammar) that has nothing to do with simple writing. Although the Plain English guide gets credit for explicitly not endorsing petty peeves like not ending a sentence with a preposition, they obviously have peeves of their own.

Problem 2: Definition of simple is not simple

There is no clear definition of what constitutes simple and easy to understand language.

There are a number of intuitions and assumptions that seem to be made when both experts and lay people talk about language:

  • Shorter is simpler (fewer syllables, characters, sounds per word, fewer words per sentence, fewer sentences per paragraph)
  • More direct is simpler (X did Y to Z is simpler than Y was done to Z by X)
  • Less variety is simpler (fewer different words)
  • More familiar is simpler

These assumptions were used to create various measures of “readability” going back to the 1940s. They consisted of several variables:

  • Length of words (in syllables or in characters)
  • Length of sentences
  • Frequency of words used (both internally and with respect to their general frequency)

Intuitively, these are not bad measures, but they are only proxies for the assumptions. They say nothing about the context in which the text appears or the appropriateness of the choice of subject matter. They say nothing about the internal cohesion and coherence of the text. In short, they say nothing about the “quality” of the text.
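To see how mechanical these proxies are, here is a minimal sketch of the classic Flesch Reading Ease formula in Python; the syllable counter is a crude vowel-group heuristic, but that is enough to show that the measure sees only lengths, never meaning, cohesion or context:

    # A minimal sketch of Flesch Reading Ease (one of the 1940s-era measures).
    import re

    def count_syllables(word):
        # Crude heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835 - 1.015 * (len(words) / sentences)
                        - 84.6 * (syllables / len(words)))

    print(flesch_reading_ease("The cat sat on the mat."))  # high score: 'easy'

Note that the formula would assign exactly the same score to the same words in scrambled order.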

The same thing is not always simple in all contexts, and sometimes ‘too simple’ can be hard. We could see that in the example of lists above. Having a list instead of a sentence does not always make things simpler because a list is doing other work besides just providing a list of items.

Another example I always think about is the idea of “semantic primes” by Anna Wierzbicka. These are concepts like DO, BECAUSE, BAD believed to be universal to all languages. There are only about 60 of them (the exact number keeps changing as the research evolves). These were compiled into a Natural Semantic Metalanguage with the idea of being able to break complex concepts down into them. Whether you think this is a good idea or not (I don’t, but I think the research group working on this is doing good work in surveying the world’s languages), you will have to agree that the resulting descriptions are not simple. For example, this is the Natural Semantic Metalanguage description of “anger”:

anger (English): when X thinks of Y, X thinks something like this: “this person did something bad; I don’t want this; I would want to do something bad to this person”; because of this, X feels something bad

This seems like a fairly complicated way of describing anger and even if it could be universally understood, it would also be very difficult to learn to do this. And could we then capture the distinction between this and say “seething rage”? Also, it is clear that there is a lot more going on than combining 60 basic concepts. You’d have to learn a lot of rules and strategies before you could do this well.

Problem 3: Automatic measures of readability are easily gamed

There are about half a dozen automated readability measures currently used by software and web services to calculate how easy or difficult it is to read a text.

I am not an expert in readability but I have no reason to doubt the references in Wikipedia claiming that they correlate fairly well overall with text comprehension. But as always correlation only tells half the story and, as we know, it is not causation.

It is not at all clear that the texts identified as simple based on measures like number of words per sentence or numbers of letters per word are actually simple because of the measures. It is entirely possible that those measures are a consequence of other factors that contribute to simplicity, like more careful word choice, empathy with an audience, etc.

This may not matter if all we are interested in is identifying simple texts, as you can do with an advanced Google search. But it does matter if we want to use these measures to teach people how to write simpler texts. Because if we just tell them to use fewer words per sentence and shorter words, we may not get texts that are actually easier to understand for the intended readership.

And if we require this as a criterion of page accessibility, we open the system to gaming in the same way Google’s algorithms are gamed but without any of the sophistication. You can reduce the complexity of any text on any of these scores simply by replacing all commas with full stops. Or even by randomly inserting full stops every 5 words and putting spaces in the middle of words. The algorithms are not smart enough to catch that.
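A quick sketch of the gaming problem, assuming the third-party textstat package (any implementation of the standard formulas behaves the same way, including the one sketched above):

    # A sketch: mechanically 'simplifying' a text lowers its grade level.
    # Assumes the third-party 'textstat' package (pip install textstat).
    import textstat

    text = ("The implementation of the method, which the team had planned "
            "for months, has finally been completed by the authority.")

    # Replace every comma with a full stop: more 'sentences', each shorter.
    gamed = text.replace(",", ".")

    print(textstat.flesch_kincaid_grade(text))   # higher grade level
    print(textstat.flesch_kincaid_grade(gamed))  # lower grade level

The second text scores as easier even though it has become harder, not easier, to parse.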

Also, while these measures may be fairly reliable in aggregate, they don’t give us a very good picture of any one individual text. I took a blog post from the Campaign for Plain English site http://www.plainenglish.co.uk/news/chrissies-comments.html and ran the text through several websites that calculate ease of reading scores:

  • http://www.online-utility.org/english/readability_test_and_improve.jsp
  • http://www.editcentral.com
  • http://www.read-able.com

The tests’ estimates of the length of formal education required to understand the text ranged over more than 5 years, from 10.43 to 15.57. Read-able.com even went as far as providing an average, coming up with 12. That doesn’t seem very reliable.

I preferred http://textalyser.net, which just gives you the facts about the text and doesn’t try to summarize them. The same goes for the Plain English Campaign’s own little app, which you can download from their website: http://www.plainenglish.co.uk/drivel-defence.html.

By any of these measures, the text wasn’t very simple or plain at all. The longest sentence had 66 words because it contained a complex embedded clause (something not even mentioned in the Plain English guide). The average sentence length was 28 words.

The Plain English app also suggested 7 alternative words from their “alternative dictionary”, but 5 of those were misses because context is not considered (e.g. “a sad state” cannot be replaced by “a sad say”). The 2 acceptable suggestions were to edit out one “really” and to replace one “retain” with “keep”, neither of which would have improved the readability of the text given its overall complexity.

In short, the accepted measures of simple texts are not very useful for creating simple texts or for training people to create them.

See also http://en.wikipedia.org/w/index.php?title=Readability&oldid=508236326#Using_the_readability_formulas.

See also this interesting study examining the effects for L2 instruction: http://www.eric.ed.gov/PDFS/EJ926371.pdf.

Problem 4: When simple becomes a new dialect: A thought experiment

But let’s consider what would happen if we did agree on simple English as the universal standard for accessibility and actually managed to convince people to use it. In short, it would become its own dialect. It would acquire ways of describing things it was not designed to describe. It would acquire its own jargon and its own ways of obfuscation. A small industry of experts would spring up to teach you how to say what you want to say, or don’t want to say, in this new simple language.

Let’s take a look at Globish, a simplified English intended for international communication that I have seen suggested as worth a look for accessibility. Globish has a restricted grammar and a vocabulary of 1500 words. Its makers helpfully provide a tool for highlighting words they call “not compatible with Globish”. Among the words it highlighted in the blog post from the Plain English website were:

basics, journalist, grandmother, grammar, management, principle, moment, typical

But even the transcript of a speech by Globish’s creator, Jean-Paul Nerriere, advertised as being completely in Globish, contained some words flagged up as incompatible:

businessman, would, cannot, maybe, nobody, multinational, software, immediately
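For what it is worth, the mechanism behind such a checker is trivially simple, which is part of the problem. Here is a minimal sketch, assuming nothing more than a plain list of permitted words; the tiny ALLOWED set below is purely illustrative, not the real 1500-word Globish vocabulary. Note that a lookup like this inevitably flags derived forms of permitted roots, which is exactly the issue discussed next.

```python
import re

# Purely illustrative stand-in for a controlled vocabulary; the real
# Globish list has 1500 words.
ALLOWED = {"the", "a", "of", "new", "can", "captain", "appoint",
           "people", "govern"}

def flag_incompatible(text):
    # Return all words not found in the controlled vocabulary.
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in ALLOWED})

print(flag_incompatible("The captain can appoint people."))
# -> []
print(flag_incompatible("The appointment of a new government."))
# -> ['appointment', 'government']  (derived forms of permitted roots)
```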

Globish seems to be based on not much more than guesswork. It has words like “colony” and “rubber” but not “temperature” or “notebook”; “appoint” but not “appointment”; “govern” but not “government”. Yet the derived forms “appointment” and “government” are more frequent (and intuitively more useful) than the root forms. There is a chapter in the eBook called “1500 Basic Globish Words Father 5000”, so I assume there are some rules for derivation, but the derived forms more often than not have very “idiomatic” meanings. For example, “appointment” in its most common use does not make any sense if we look at the core meanings of “appoint” and the suffix “-ment”. Consider also the difference between “govern” and “government” vs “enjoy” and “enjoyment”.

Yet Globish supposedly avoids idioms, cultural references, etc., namely all the things that make language useful. The founder says:

Globish is correct English without the English culture. It is English that is just a tool and not a whole way of life.

Leaving aside the dubious notion of correctness, this would make Globish a very limited tool indeed. But luckily for Globish, it’s not true. Why have the word “colony” if not to reflect a cultural preference? If it became widely used by a community of speakers, the first thing to happen to Globish would be a blossoming of idioms, going hand in hand with the emergence of dialects, jargons and registers.

That is not to say that something like Globish could not be a useful tool for English learners along the way to greater mastery. But it does little for universal accessibility.

We also need to ask ourselves what this would be like from the perspective of the users creating these simplified texts. They would essentially have to learn a whole new code, a sort of dialect. And as with any second-language learning, some would do it better than others. Some would become the “simple nazis”. Some would get jobs teaching others how to speak simple. It is not natural for us to speak simply and “plainly” as defined in the context of accessibility.

There is some experience with the use of controlled languages in technical writing and in writing for second language acquisition. This can be done, but the universe of subjects and/or the group of people creating these texts is always extremely limited. Increasing the number of people creating simple texts to pretty much everybody would increase the difficulty of implementation enormously. And given the poor state of automatic tools for the analysis of “simplicity”, quality control is pretty much out of reach.

But would even one code/dialect suffice? Would we need one for technical writing, another for government documents, another for company filings? Limiting the vocabulary to 1500 words is not a bad idea, but as we saw with Globish, it might need to be a different 1500 words for each area.

Why is language inaccessible?

Does that mean we should give up on trying to make communication more accessible? Definitely not. But the same processes that I described as standing in the way of a universal simple language are also at the root of why so much language is inaccessible. Part of how language works is to create group cohesion, which includes keeping some people out. A lot of “complicated” language is complicated because the nature of the subject requires it, and a lot of complicated language is complicated because the writer is not very good at expressing themselves.

But just as much complicated language is complicated because the writer wants to signal belonging to a group that uses that kind of language. The famous Sokal Hoax provided an example of that. Even instructions on university websites on how to write essays are an example. You will find university websites recommending something like “To write like an academic, write in the third person.” This is nonsense; research shows that academics write as much in the first person as in the third. But the rule makes the job of the people marking essays easier: they don’t have to focus on ideas, they can just go by superficial impression. Personally, I think this is a scandal and a complete failure of higher education to live up to its own hype, but that’s a story for another time.

How to achieve simple communication?

So what can we do to avoid making our texts too inaccessible?

The first thing the accessibility community will need to do is acknowledge that simple language is its own form of expression. It is not the natural state we reach when we strip all the artifice out of our communication, and learning how to communicate simply requires effort and practice from everyone.

To help with the effort, most people will need some guides. And despite what I said above about the shortcomings of the Plain English guide, it’s not a bad place to start. But it would need to be expanded. Here are some of the things that are missing:

  • Consider the audience: What sounds right in an investor brochure won’t sound right in a letter to a customer
  • Increase cohesion and coherence by highlighting relationships
  • Highlight the text structure with headings
  • Say new things first
  • Consider splitting out subordinate clauses into separate sentences if your sentence gets too long
  • Leave all the background and things you normally start your texts with for the end

But it will also require a changed direction for research.

Further research needs for simple language

I don’t pretend to have a complete overview of the research being done in this area but my superficial impression is that it focuses far too much on comprehension at the level of clause and sentence. Further research will be necessary to understand comprehension at the level of text.

There is need for further research in:

  • How collocability influences understanding
  • Specific ways in which cohesion and coherence impact understanding
  • The benefits and downsides of elegant variation for comprehension
  • The benefits and downsides of figurative language for comprehension by people with different cognitive profiles
  • The processes of code switching during writing and reading
  • How new conventions emerge in the use of simple language
  • The uses of simple language for political purposes including obfuscation

[Updated for Ariadne article mentioned above:] In more detail, this is what I would like to see for some of these points.

How collocability influences understanding: how word and phrase frequency influence understanding, with a particular focus on collocations. The assumption behind software like TextHelp is that this is very important. Much research on the importance of these patterns is available from corpus linguistics, but we need to know the practical implications of these properties of language both for text creators and for consumers. For instance, should text creators use measures of collocability to judge ease of reading and comprehension in addition to, or instead of, arbitrary measures like sentence and word lengths?
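As a concrete illustration of what a “measure of collocability” could look like, here is a toy sketch of pointwise mutual information (PMI), a standard collocation statistic from corpus linguistics. The two-line “corpus” is made up for the example; any real application would need large corpora and considerably more care.

```python
import math
from collections import Counter

# A made-up miniature corpus; real PMI calculations need millions of words.
corpus = ("strong tea is better than weak tea "
          "a powerful computer runs strong encryption").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    # PMI = log2( P(w1, w2) / (P(w1) * P(w2)) ), defined for observed bigrams.
    # High PMI means the pair co-occurs more often than chance would predict.
    p_pair = bigrams[(w1, w2)] / (n - 1)
    return math.log2(p_pair / ((unigrams[w1] / n) * (unigrams[w2] / n)))

print(round(pmi("strong", "tea"), 2))  # positive: a candidate collocation
```

In English at large, “strong tea” is a well-attested collocation while “powerful tea” is statistically odd, a distinction that counting letters and words per sentence cannot see at all.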

Specific ways in which cohesion and coherence affect understanding: We need to find the strategies challenged readers use to make sense of larger chunks of text. How they understand the text as a whole, how they find specific information in the text, how they link individual portions of the text to the whole, and how they infer overall meaning from the significance of the components. We then need to see what text creators can do to assist with these processes. We already have some intuitive tools: bullets, highlighting of important passages, text insets, text structure, etc. But we do not know how they help people with different difficulties and whether they can ever become a hindrance rather than a benefit.

The benefits and downsides of elegant variation for comprehension, enjoyment and memorability: We know that repetition is an important tool for establishing the cohesion of text in English. We also know that repetition is discouraged for stylistic reasons. Repetition is also known to be a feature of immature narratives (children under the age of about 10), with more “sophisticated” ways of constructing texts developing later. However, repetition is also more powerful in spoken narrative (e.g. folk stories). Research is needed on how challenged readers process repetition and elegant variation, and on what text creators can do to support any naturally developing metatextual strategies.

The benefits and downsides of figurative language for comprehension by people with different cognitive profiles: There is basic research available from which we know that some cognitive deficits lead to reduced understanding of non-literal language. There is also ample research showing how crucial figurative language is to language in general. However, there seems to be little understanding of how and why different deficits lead to problems with processing figurative language, and what kinds of figurative language cause difficulties. It is also not clear what types of figurative language are particularly helpful for challenged readers with different cognitive profiles. Work is needed on a typology of figurative language and a typology of figurative language deficits.

The processes of code switching during writing and reading: Written and spoken English employ very different codes, in some ways even reminiscent of different language types. This involves much more than just the choice of words: sentence structure, clauses, grammatical constructions all differ. However, this difference is not just a consequence of the medium of writing. Different genres (styles) within a language may be just as different from one another as writing and speaking, and each comes with a special code (or subset of grammar and vocabulary). Few native speakers ever completely acquire the full range of codes available in a language with extensive literacy practices, particularly one that spans as many speech communities as English. But all speakers acquire several different codes and can switch between them. However, many challenged writers and readers struggle precisely because they cannot switch between the spoken codes they are exposed to through daily interaction and the written codes to which they are often denied access because of a print impairment. Another way of describing this is multiple literacies. How do challenged readers and writers deal with acquiring written codes, and how do they deal with code switching?

How do new conventions emerge in the use of simple language? Using and accessing simple language can only be successful if it becomes a separate literacy practice. However, the dissemination and embedding of such practices into daily usage are often accompanied by the establishment of new codes and conventions of communication. These codes can then become typical of a genre of documents. An example of this is Biblish: a sentence such as “Fred spoke unto Joan and Karen” is easily identified as belonging to a mode of expression associated with the translation of the Bible. Will similar conventions develop around “plain English”, and how? At the same time, it is clear that within each genre or code, there are speakers and writers who can express themselves more clearly than others. Research is needed to establish whether there are common characteristics to be found in these “clear” texts, as opposed to those inherent in “difficult” texts, across genres.

All in all, introducing simple language as a universal accessibility standard is still far from a realistic prospect. My intuitive impression, based on documents I receive from various bureaucracies, is that the “plain English” campaign has made a difference in how many official documents are presented. But a lot more research (ethnographic as well as cognitive) is necessary before we properly understand the process and its impact. Can’t wait to read it all.