
The complexities of simple: What simple language proponents should know about linguistics [updated]

Update

Part of this post was incorporated into an article I wrote with Brian Kelly and Alistair McNaught that appeared in the December issue of Ariadne. As part of that work and feedback from Alistair and Brian, I expanded the final section from a simple list of bullets into a more detailed research programme. You can see it below and in the article.

Background: From spelling reform to plain language


The idea that if we could only improve how we communicate, there would be less misunderstanding among people is as old as the hills.

Historically, this notion has been expressed through things like school reform, spelling reform, publication of communication manuals, etc.

The most radical expression of the desire for better understanding is the invention of a whole new artificial language intended to serve as a universal language for humanity. This tradition is a long one, but it gained the most traction towards the end of the nineteenth century with the introduction and relative success of Esperanto.

But artificial languages have been a failure as a vehicle of global understanding. Instead, over roughly the last 50 years, the movement for plain English has been taking the place of constructed languages as the thing on which people pin their hopes for clear communication.

Most recently, there have been proposals suggesting that “simple” language should become part of a standard for accessibility of web pages, alongside the other accessibility standards issued by the W3C standards body: http://www.w3.org/WAI/RD/2012/easy-to-read/Overview. This post was triggered by this latest development.

Problem 1: Plain language vs. linguistics

The problem is that most proponents of plain language (like so many would-be reformers of human communication) seem to be ignorant of the wider context in which language functions. Linguistic research of the last century or so, and in particular since the 1960s, has revealed much that we need to pay attention to (to avoid confusion, this does not refer to the work of Noam Chomsky and his followers but rather to the work of people like William Labov, Michael Halliday, and many others).

Languages are not a simple matter of grammar. Any proposal for content accessibility must consider what is known about language from the fields of pragmatics, sociolinguistics, and cognitive linguistics. These are the key aspects of what we know about language collected from across many fields of linguistic inquiry:

  • Every sentence communicates much more than just its basic content (propositional meaning). We also communicate our desires and beliefs (e.g. “It’s cold here” may communicate “Close the window”, and “John denied that he cheats on his taxes” communicates that somebody accused John of cheating on his taxes). Similarly, choosing a particular form of speech, like slang or jargon, communicates belonging to a community of practice.
  • The understanding of any utterance always depends on a complex network of knowledge about language, about the world, and about the context of the utterance. “China denied involvement.” requires an understanding of the context in which countries operate and of metonymy, on top of the grammar and vocabulary. Consider the knowledge we need to possess to interpret “In 1939, the world exploded.” vs. “In Star Wars, a world exploded.”
  • There is no such thing as purely literal language. All language is to some degree figurative. “Between 3 and 4pm.”, “Out of sight”, “In deep trouble”, “An argument flared up”, “Deliver a service”, “You are my rock”, “Access for all” are all figurative to different degrees.
  • We all speak more than one variety of our language: formal/informal, school/friends/family, written/spoken, etc. Each of these varieties has its own code. For instance, “she wanted to learn” vs. “her desire to learn” demonstrates a common difference between spoken and written English, where written English often builds clauses around nouns.
  • We constantly switch between different codes (sometimes even within a single utterance).
  • Bilingualism is the norm in language knowledge, not the exception. About half the world’s population regularly speaks more than one language but everybody is “bi-lingual” in the sense that they deal with multiple codes.
  • The “standard” or “correct” English is just one of the many dialects, not English itself.
  • The difference between a language and a dialect is just as much political as linguistic. An old joke in linguistics goes: “A language is a dialect with an army and a navy.”
  • Language prescription and requirements of language purity (incl. simple language) are as much political statements as linguistic or cognitive ones. All language use is related to power relationships.
  • Simplified languages develop their own complexities if used by a real community through a process known as creolization. (This process is well described for pidgins but not as well for artificial languages.)
  • All languages are full of redundancy, polysemy and homonymy. It is the context and our knowledge of what is to be expected that makes it easy to figure out the right meaning.
  • There is no straightforward relationship between grammatical features and language obfuscation or lack of clarity (e.g. it is just as easy to hide things using the active voice as the passive, or using a Subject-Verb-Object sentence as an Object-Subject-Verb one).
  • It is difficult to call any one feature of a language universally simple (for instance, SVO word order or a lack of morphology), because many other languages use as their default what we would call complex, with no increase in difficulty for native speakers (e.g. the use of verb prefixes/particles in English and German).
  • Language is not really organized into sentences but into texts. Texts have internal organization to hang together formally (John likes coffee. He likes it a lot.) and semantically (As I said about John. He likes coffee.) Texts also relate to external contexts (cross reference) and their situations. This relationship is both implicit and explicit in the text. The shorter the text, the more context it needs for interpretation. For instance, if all we see is “He likes it.” written on a piece of paper, we do not have enough context to interpret the meaning.
  • Language is not used uniformly. Some parts of language are used more frequently than others. But this is not enough to understand frequency: some parts of language are also used more frequently together than others. The frequent co-occurrence of some words with other words is called “collocation”. This means that when we say “bread and …”, we can predict that the next word will be “butter”. You can check this with a linguistic tool like a corpus, or even by using Google’s predictions in the search box. Some words are so strongly collocated with other words that their meaning is “tinged” by those other words (this is called semantic prosody). For example, “set in” has a negative connotation because of its collocation with “rot”. (A minimal sketch of how collocation strength can be measured follows this list.)
  • All language is idiomatic to some degree. You cannot determine the meaning of all sentences just by understanding the meanings of all their component parts and the rules for putting them together. And vice versa, you cannot just take all the words and rules in a language, apply them and get meaningful sentences. Consider “I will not put the picture up with John.” and “I will not put up the picture with John.” and “I will not put up John.” and “I will not put up with John.”
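
To make the collocation point concrete, here is a minimal Python sketch of measuring collocation strength with pointwise mutual information (PMI), a standard corpus-linguistic measure. The toy corpus and the pmi helper are illustrative assumptions; real work would use a corpus of millions of words.

```python
from collections import Counter
from math import log2

# Toy corpus; a real study would use millions of words.
words = ("we had bread and butter and tea then rot set in "
         "and the cold set in before winter").split()

unigrams = Counter(words)
bigrams = Counter(zip(words, words[1:]))
n = len(words)

def pmi(w1, w2):
    """Pointwise mutual information: how much more often w1 and w2
    occur together than chance would predict."""
    p_joint = bigrams[(w1, w2)] / (n - 1)
    return log2(p_joint / ((unigrams[w1] / n) * (unigrams[w2] / n)))

# "set in" occurs twice and its parts rarely occur apart here,
# so its PMI is high: a (toy) collocation.
print(round(pmi("set", "in"), 2))
```

On real data, exactly this kind of calculation is what surfaces pairs like “bread and butter” and the negative prosody of “set in”.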

It seems to me that most plain language advocates do not take most of these factors into account.

Some examples from the “How to write in plain English” guide: http://www.plainenglish.co.uk/files/howto.pdf.

Try to call the reader ‘you’, even if the reader is only one of many people you are talking about generally. If this feels wrong at first, remember that you wouldn’t use words like ‘the applicant’ and ‘the supplier’ if you were speaking to somebody sitting across a desk from you. [emphasis mine]

This example misses the point about the contextuality of language. The part in bold is the very crux of the problem. It is natural to use a different code (or register) with someone we’re speaking to in person than in written communication. This is partly a result of convention and partly a result of the different demands of writing and speaking when it comes to the ability to point to what we’re speaking about. The reason it feels wrong to the writer is that it breaks the convention of writing. That is not to say that this couldn’t become the new convention. But the argument misses the point.

Do you want your letters to sound active or passive − crisp and professional or stuffy and bureaucratic?
Using the passive voice and sounding passive are not one and the same thing. This is an example of polysemy. The word “passive” has two meanings in English. One technical (the passive voice) and one colloquial (“he’s too passive”). The booklet recommends that “The mine had to be closed by the authority. (Passive)” should be replaced with “The authority had to close the mine. (Active)” But they ignore the fact that word order also contributes to the information structure of the sentence. The passive sentence introduces the “mine” sooner and thus makes it clear that the sentence is about the mine and not the local authority. In this case, the “active” construction made the point of the sentence more difficult to understand.

The same is true of nominalization. Another thing recommended against by the Plain English campaign: “The implementation of the method has been done by a team.” is not conveying the same type of information as “A team has implemented the method.”

The point is that this advice ignores the context as well as the audience. Using “you” instead of “customers” in “Customers have the right to appeal” may or may not be simpler depending on the reader. For somebody used to the conventions of written official English, it may actually take longer to process. But for someone who does not deal with written English very often, it will be easier. But there is nothing intrinsically easier about it.

Likewise for the use of jargon. The campaign gives as its first example of unduly complicated English:

High-quality learning environments are a necessary precondition for facilitation and enhancement of the ongoing learning process.

And suggests that we use this instead:

Children need good schools if they are to learn properly.

This may be appropriate when it comes to public debate, but within the professional context of, say, policy communication, these two sentences are not actually equivalent. There are more “learning environments” than just schools, and the “learning process” is not the same as having learned something. It is also possible that the former sentence appeared as part of a larger context that would have made the distinction even clearer, but the page does not give a reference, and a Google search only shows pages using it as an example of complex English: http://www.plainenglish.co.uk/examples.html

The How to write in plain English document does not mention coherence of the text at all, except indirectly when it recommends the use of lists. This is good advice but even one of their examples has issues. They suggest that the following is a good example of a list:

Kevin needed to take:
• a penknife
• some string
• a pad of paper; and
• a pen.

And on first glance it is, but lists are not just neutral replacements for sentences. They are a genre in their own right, used for specific purposes (Michael Hoey called them “text colonies”). Let’s compare the list above to the sentence below.

Kevin needed to take a penknife, some string, a pad of paper and a pen.

Obviously they are two different kinds of text, used in different contexts for different purposes, and this impinges on our understanding. The list implies instruction and a level of importance. It is suited to an official document, for example something sent before a child goes to camp. But it is not suited to a personal letter, or even a letter from the camp saying “All Kevin needed to take was a penknife, some string, a pad of paper and a pen. He should not have brought a laptop.” To be fair, the guide says to use lists “where appropriate”, but it does not say what that means.

The issue is further muddled by the “grammar quiz” on the Plain English website: http://www.plainenglish.co.uk/quiz.html. It is a hodgepodge of irrelevant trivia about language (not just grammar) that has nothing to do with simple writing. Although the Plain English guide gets credit for explicitly not endorsing petty peeves like not ending a sentence with a preposition, they obviously have peeves of their own.

Problem 2: Definition of simple is not simple

There is no clear definition of what constitutes simple and easy to understand language.

There are a number of intuitions and assumptions that seem to be made when both experts and lay people talk about language:

  • Shorter is simpler (fewer syllables, characters or sounds per word, fewer words per sentence, fewer sentences per paragraph)
  • More direct is simpler (X did Y to Z is simpler than Y was done to Z by X)
  • Less variety is simpler (fewer different words)
  • More familiar is simpler

These assumptions were used to create various measures of “readability”, going back to the 1940s. They consist of several variables:

  • Length of words (in syllables or in characters)
  • Length of sentences
  • Frequency of words used (both internally and with respect to their general frequency)

Intuitively, these are not bad measures, but they are only proxies for the assumptions. They say nothing about the context in which the text appears or the appropriateness of the choice of subject matter. They say nothing about the internal cohesion and coherence of the text. In short, they say nothing about the “quality” of the text.
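
To illustrate what these variables look like in practice, here is a rough Python sketch of one of the best-known measures, the Flesch-Kincaid grade level, which combines exactly these proxies (words per sentence and syllables per word). The syllable counter is a crude vowel-group heuristic assumed for this example, not the official algorithm.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

print(flesch_kincaid_grade(
    "Children need good schools if they are to learn properly."))
# roughly grade 5 by this heuristic
```

Note that the formula sees only lengths: it would assign the same grade to the same words randomly shuffled into nonsense.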

The same thing is not equally simple in all contexts, and sometimes “too simple” can be hard. We could see that in the example of lists above. Having a list instead of a sentence does not always make things simpler, because a list is doing other work besides just providing a list of items.

Another example I always think about is the idea of “semantic primes” proposed by Anna Wierzbicka. These are concepts like DO, BECAUSE, BAD, believed to be universal to all languages. There are only about 60 of them (the exact number keeps changing as the research evolves). These were compiled into a Natural Semantic Metalanguage with the idea of being able to break complex concepts down into them. Whether you think this is a good idea or not (I don’t, but I think the research group working on this is doing good work in surveying the world’s languages), you will have to agree that the resulting descriptions are not simple. For example, this is the Natural Semantic Metalanguage description of “anger”:

anger (English): when X thinks of Y, X thinks something like this: “this person did something bad; I don’t want this; I would want to do something bad to this person”; because of this, X feels something bad

This seems like a fairly complicated way of describing anger, and even if it could be universally understood, it would also be very difficult to learn to do this. And could we then capture the distinction between this and, say, “seething rage”? Also, it is clear that there is a lot more going on than combining 60 basic concepts. You’d have to learn a lot of rules and strategies before you could do this well.

Problem 3: Automatic measures of readability are easily gamed

There are about half a dozen automated readability measures currently used by software and web services to calculate how easy or difficult a text is to read.

I am not an expert in readability but I have no reason to doubt the references in Wikipedia claiming that they correlate fairly well overall with text comprehension. But as always correlation only tells half the story and, as we know, it is not causation.

It is not at all clear that the texts identified as simple based on measures like number of words per sentence or numbers of letters per word are actually simple because of the measures. It is entirely possible that those measures are a consequence of other factors that contribute to simplicity, like more careful word choice, empathy with an audience, etc.

This may not matter if all we are interested in is identifying simple texts, as you can do with an advanced Google search. But it does matter if we want to use these measures to teach people how to write simpler texts. Because if we just tell them to use fewer words per sentence and shorter words, we may not get texts that are actually easier to understand for the intended readership.

And if we require this as a criterion of page accessibility, we open the system to gaming in the same way Google’s algorithms are gamed, but without any of the sophistication. You can reduce the complexity of any text on any of these scores simply by replacing all commas with full stops, or even by randomly inserting full stops every 5 words and putting spaces in the middle of words. The algorithms are not smart enough to catch that.
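
A toy demonstration of the comma trick, using the words-per-sentence proxy that all of these formulas share (the sample sentence is invented for the example):

```python
import re

def words_per_sentence(text):
    """Average sentence length, the core variable in readability scores."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences)

text = ("The authority had to close the mine, because the seams "
        "were exhausted, and the costs kept rising.")
gamed = text.replace(",", ".")  # same words, same order, zero added clarity

print(words_per_sentence(text))   # 17.0 (one long sentence)
print(words_per_sentence(gamed))  # ~5.7, "simpler" by the metric only
```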

Also, while these measures may be fairly reliable in aggregate, they don’t give us a very good picture of any one individual text. I took a blog post from the Campaign for Plain English site http://www.plainenglish.co.uk/news/chrissies-comments.html and ran the text through several websites that calculate ease of reading scores:

  • http://www.online-utility.org/english/readability_test_and_improve.jsp,
  • http://www.editcentral.com
  • http://www.read-able.com

The different tests ranged by up to 5 years in their estimates of the length of formal education required to understand the text: from 10.43 to 15.57 years. Read-able.com even went as far as providing an average, coming up with 12. That doesn’t seem very reliable.

I preferred http://textalyser.net, which just gives you the facts about the text and doesn’t try to summarize them. The same goes for the Plain English Campaign’s own little app, which you can download from their website: http://www.plainenglish.co.uk/drivel-defence.html.

By any of these measures, the text wasn’t very simple or plain at all. The longest sentence had 66 words because it contained a complex embedded clause (something not even mentioned in the Plain English guide). The average sentence length was 28 words.

The Plain English app also suggested 7 alternative words from their “alternative dictionary”, but 5 of those were misses because context is not considered (e.g. “a sad state” cannot be replaced by “a sad say”). The 2 acceptable suggestions were to edit out one “really” and to replace one “retain” with “keep”. Neither would have improved the readability of the text given its overall complexity.
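
The failure mode is easy to reproduce. Here is a toy sketch of the same context-blind strategy; the mini-dictionary is an assumption based on the substitutions reported above:

```python
# Context-blind substitution from a "plain alternatives" dictionary,
# mimicking the strategy the app appears to use. The mini-dictionary
# is an assumption for the example.
ALTERNATIVES = {"state": "say", "retain": "keep", "purchase": "buy"}

def naive_simplify(text):
    """Replace each word with its 'plain' alternative, ignoring context."""
    return " ".join(ALTERNATIVES.get(w, w) for w in text.split())

print(naive_simplify("the company is in a sad state"))
# -> "the company is in a sad say" (the noun sense is mangled)
print(naive_simplify("please retain your receipt"))
# -> "please keep your receipt" (happens to work here)
```

Without part-of-speech or word-sense information, the tool cannot tell the noun “state” from the verb, which is why most of its suggestions miss.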

In short, the accepted measures of simple texts are not very useful for creating simple texts or for training people to create them.

See also http://en.wikipedia.org/w/index.php?title=Readability&oldid=508236326#Using_the_readability_formulas.

See also this interesting study examining the effects for L2 instruction: http://www.eric.ed.gov/PDFS/EJ926371.pdf.

Problem 4: When simple becomes a new dialect: A thought experiment

But let’s consider what would happen if we did agree on simple English as the universal standard for accessibility and actually managed to convince people to use it. In short, it would become its own dialect. It would acquire ways of describing things it was not designed to describe. It would acquire its own jargon and its own ways of obfuscation. There would arise a small industry of experts teaching you how to say what you want to say (or don’t want to say) in this new simple language.

Let’s take a look at Globish, a simplified English intended for international communication that I have seen suggested as worth a look for accessibility. Globish has a restricted grammar and a vocabulary of 1500 words. They helpfully provide a tool for highlighting words they call “not compatible with Globish”. Among the words it highlighted in the blog post from the Plain English website were:

basics, journalist, grandmother, grammar, management, principle, moment, typical

But even the transcript of a speech by its creator, Jean-Paul Nerriere, advertised as being completely in Globish, contained some words flagged up as incompatible:

businessman, would, cannot, maybe, nobody, multinational, software, immediately
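
The flagging itself is trivial to reproduce. Below is a minimal sketch of a restricted-vocabulary checker in the spirit of the Globish tool; GLOBISH_SAMPLE is a tiny assumed sample, not the real 1500-word list.

```python
# Minimal sketch of a restricted-vocabulary checker in the spirit of
# the Globish tool. GLOBISH_SAMPLE is a tiny assumed sample, not the
# real 1500-word list.
GLOBISH_SAMPLE = {
    "the", "a", "of", "to", "and", "in", "is", "it", "people",
    "speak", "about", "word", "simple", "colony", "rubber",
    "appoint", "govern",
}

def flag_incompatible(text):
    """Return the words not found in the restricted vocabulary."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sorted({w for w in words if w not in GLOBISH_SAMPLE})

print(flag_incompatible(
    "The government wants to appoint a journalist to speak about grammar."))
# -> ['government', 'grammar', 'journalist', 'wants']
```

Note that “government” is flagged while “govern” would pass, which is exactly the derivation problem discussed below.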

Globish seems to be based on not much more than guesswork. It has words like “colony” and “rubber” but not “temperature” or “notebook”; “appoint” but not “appointment”; “govern” but not “government”. Yet the derived forms “appointment” and “government” are more frequent (and intuitively more useful) than the root forms. There is a chapter in the eBook called “1500 Basic Globish Words Father 5000”, so I assume there are some rules for derivation, but the derived forms more often than not have very “idiomatic” meanings. For example, “appointment” in its most common use does not make any sense if we look at the core meanings of “appoint” and the suffix “-ment”. Consider also the difference between “govern” and “government” vs. “enjoy” and “enjoyment”.

Yet, Globish supposedly avoids idioms, cultural references, etc. Namely all the things that make language useful. The founder says:

Globish is correct English without the English culture. It is English that is just a tool and not a whole way of life.

Leaving aside the dubious notion of correctness, this would make Globish a very limited tool indeed. But luckily for Globish, it’s not true. Why have the word “colony” if not to reflect a cultural preference? If it became widely used by a community of speakers, the first thing to happen to Globish would be a blossoming of idioms, going hand in hand with the emergence of dialects, jargons and registers.

That is not to say that something like Globish could not be a useful tool for English learners along the way to greater mastery. But it does little for universal accessibility.

Also, we need to ask ourselves what it would be like from the perspective of the users creating these simplified texts. They would essentially have to learn a whole new code, a sort of dialect. And as with any second-language learning, some would do it better than others. Some would become the “simple nazis”. Some would get jobs teaching others “how to” speak simple. It is not natural for us to speak simply and “plainly” as defined in the context of accessibility.

There is some experience with the use of controlled languages in technical writing and in writing for second language acquisition. This can be done but the universe of subjects and/or the group of people creating these texts is always extremely limited. Increasing the number of people creating simple texts to pretty much everybody would increase the difficulty of implementation exponentially. And given the poor state of automatic tools for analysis of “simplicity”, quality control is pretty much out of reach.

But would even one code/dialect suffice? Do we need one for technical writing, government documents, company filings? Limiting the vocabulary to 1500 words is not a bad idea, but as we saw with Globish, it might need to be a different 1500 words for each area.

Why is language inaccessible?

Does that mean we should give up on trying to make communication more accessible? Definitely not. The same processes that I described as standing in the way of a universal simple language are also at the root of why so much language is inaccessible. Part of how language works is to create group cohesion, which includes keeping some people out. A lot of “complicated” language is complicated because the nature of the subject requires it, and a lot of complicated language is complicated because the writer is not very good at expressing themselves.

But just as much complicated language is complicated because the writer wants to signal belonging to a group that uses that kind of language. The famous Sokal hoax provided an example of that. Even instructions on university websites on how to write essays are an example. You will find university websites recommending something like “To write like an academic, write in the third person.” This is nonsense: research shows that academics write as much in the first person as in the third. But it also makes the job of the people marking essays easier: they don’t have to focus on ideas, they can just go by superficial impression. Personally, I think this is a scandal and a complete failure of higher education to live up to its own hype, but that’s a story for another time.

How to achieve simple communication?

So what can we do to avoid making our texts too inaccessible?

The first thing the accessibility community will need to do is acknowledge that simple language is its own form of expression. It is not the natural state we get when we strip all the artifice out of our communication. And learning how to communicate simply requires effort and practice from every individual.

To help with the effort, most people will need some guides. And despite what I said above about the shortcomings of the Plain English guide, it’s not a bad place to start. But it would need to be expanded. Here is an example of some of the things that are missing:

  • Consider the audience: What sounds right in an investor brochure won’t sound right in a letter to a customer
  • Increase cohesion and coherence by highlighting relationships
  • Highlight the text structure with headings
  • Say new things first
  • Consider splitting out subordinate clauses into separate sentences if your sentence gets too long
  • Leave all the background and things you normally start your texts with for the end

But it will also require a changed direction for research.

Further research needs for simple language

I don’t pretend to have a complete overview of the research being done in this area but my superficial impression is that it focuses far too much on comprehension at the level of clause and sentence. Further research will be necessary to understand comprehension at the level of text.

There is need for further research in:

  • How collocability influences understanding
  • Specific ways in which cohesion and coherence impact understanding
  • The benefits and downsides of elegant variation for comprehension
  • The benefits and downsides of figurative language for comprehension by people with different cognitive profiles
  • The processes of code switching during writing and reading
  • How new conventions emerge in the use of simple language
  • The uses of simple language for political purposes including obfuscation

[Updated for Ariadne article mentioned above:] In more detail, this is what I would like to see for some of these points.

How collocability influences understanding: How word and phrase frequency influences understanding, with a particular focus on collocations. The assumption behind software like TextHelp is that this is very important. Much research is available on the importance of these patterns from corpus linguistics, but we need to know the practical implications of these properties of language both for text creators and consumers. For instance, should text creators use measures of collocability to judge ease of reading and comprehension, in addition to or instead of arbitrary measures like sentence and word length?

Specific ways in which cohesion and coherence affect understanding: We need to find the strategies challenged readers use to make sense of larger chunks of text. How they understand the text as a whole, how they find specific information in the text, how they link individual portions of the text to the whole, and how they infer overall meaning from the significance of the components. We then need to see what text creators can do to assist with these processes. We already have some intuitive tools: bullets, highlighting of important passages, text insets, text structure, etc. But we do not know how they help people with different difficulties and whether they can ever become a hindrance rather than a benefit.

The benefits and downsides of elegant variation for comprehension, enjoyment and memorability: We know that repetition is an important tool for establishing the cohesion of text in English. We also know that repetition is discouraged for stylistic reasons. Repetition is also known to be a feature of immature narratives (children under the age of about 10), with more “sophisticated” ways of constructing texts developing later. However, repetition is also more powerful in spoken narrative (e.g. folk stories). Research is needed on how challenged readers process repetition and elegant variation, and on what text creators can do to support any naturally developing metatextual strategies.

The benefits and downsides of figurative language for comprehension by people with different cognitive profiles: There is basic research available from which we know that some cognitive deficits lead to reduced understanding of non-literal language. There is also ample research showing how crucial figurative language is to language in general. However, there seems to be little understanding of how and why different deficits lead to problems with processing figurative language, or of what kinds of figurative language cause difficulties. It is also not clear what types of figurative language are particularly helpful for challenged readers with different cognitive profiles. Work is needed on a typology of figurative language and a typology of figurative language deficits.

The processes of code switching during writing and reading: Written and spoken English employ very different codes, in some ways even reminiscent of different language types. This includes much more than just the choice of words. Sentence structure, clauses, grammatical constructions: all of these differ. However, this difference is not just a consequence of the medium of writing. Different genres (styles) within a language may be just as different from one another as writing and speaking. Each of these comes with a special code (or subset of grammar and vocabulary). Few native speakers ever completely acquire the full range of codes available in a language with extensive literacy practices, particularly a language that spans as many speech communities as English. But all speakers acquire several different codes and can switch between them. However, many challenged writers and readers struggle because they cannot switch between the spoken codes they are exposed to through daily interactions and the written codes to which they are often denied access because of a print impairment. Another way of describing this is multiple literacies. How do challenged readers and writers deal with acquiring written codes, and how do they deal with code switching?

How do new conventions emerge in the use of simple language? Using and accessing simple language can only be successful if it becomes a separate literacy practice. However, the dissemination and embedding of such practices into daily usage are often accompanied by the establishment of new codes and conventions of communication. These codes can then become typical of a genre of documents. An example of this is Biblish. A sentence such as “Fred spoke unto Joan and Karen” is easily identified as referring to a mode of expression associated with the translation of the Bible. Will similar conventions develop around “plain English”, and how? At the same time, it is clear that within each genre or code, there are speakers and writers who can express themselves more clearly than others. Research is needed to establish whether there are common characteristics to be found in these “clear” texts, as opposed to those inherent in “difficult” texts, across genres.

All in all, introducing simple language as a universal accessibility standard is still far from being a realistic prospect. My intuitive impression, based on documents I receive from different bureaucracies, is that the “plain English” campaign has made a difference in how many official documents are presented. But a lot more research (ethnographic as well as cognitive) is necessary before we properly understand the process and its impact. Can’t wait to read it all.