Category Archives: Scholarship

How to read ‘Women, Fire, and Dangerous Things’: A guide to essential reading on human cognition


Note:

These are rough notes for a metaphor reading group, not a continuous narrative. Any comments, corrections or elaborations are welcome.

Why should you read WFDT?

Women, Fire, and Dangerous Things: What Categories Reveal About the Mind remains a significantly underappreciated and (despite its high citation count) under-read book that has a lot to contribute to thinking about how the mind works.

I think it provides one of the most concise and explicit models for how to think about the mind and language from a cognitive perspective. I also find very compelling its argument against the still-prevalent approach that treats language and the mind as essentially fixed objects.

The concept that has been particularly underused in subsequent scholarship is that of ‘ICMs’ or ‘Idealised Cognitive Models’, which both puts metaphor (the work for which Lakoff is best known) in its rightful context and outlines what we should look for when we think about things like frames, models, scripts, scenarios, etc. Using this concept would have avoided many undue simplifications in work in the social sciences and humanities.

Why this guide

Unfortunately, the concision and explicitness I extolled above are surrounded by hundreds of pages of arguments and elaborations that are often less well thought out than the central thesis and have been a vector for criticism (I’ve responded to some of these in my review of Verena Haser’s book).

As somebody who translated the whole book into Czech and penned extensive commentary on its relevance to the structuralist linguistic tradition, I have perhaps spent more time with it than most people other than the author and his editors.

Which is why, when people ask me whether to read it, I usually recommend an abbreviated tour of the core argument with some selections depending on the individual’s interest.

Here are some of my suggestions.

Chapters everyone should read

Chapters 3, 4, 5, 6 – Core contribution of the book – Fundamental structuring principles of human cognition

These four chapters summarize what I think everybody who thinks about language, mind and society should know about how categories work. Even if it is not necessarily the last word on every (or any) aspect, it should be the starting point for inquiry.

All the key concepts (see below) are outlined here.

Preface and Chapter 1 – Outline of the whole argument and its implications

These brief chapters lay out succinctly and, I think, very clearly the overall argument of the book and its implications. This is where he outlines the core of the critique of objectivism, which I think is very important (if itself open to criticism).

Chapter 2: Precursors

This is where he outlines the broader panoply of thinkers and research outcomes in recent intellectual history whose insights this book tries to systematise and take further.

The chapter takes up some of the key thinkers who have been critical of the established paradigm. Read it not necessarily for understanding them but for a way of thinking about their work in the context of this book.

Case studies

The case studies represent a large chunk of the book and few people will read all three. But I think at least one of them should be part of any reading of the book. Most people will be drawn to number 1 on metaphor, but I find that number 2 shows off the key concepts in the most depth. It will require some focus and patience from non-linguists but I think it is worth the effort.

Case study 3 is perhaps too linguistic (even though it introduces the important concept of constructions) for most non-linguists.

Key concepts

No matter how the book is read, these are the key concepts I think people should walk away understanding.

Idealized Cognitive Models (also called Frames in Lakoff’s later work)

I don’t know of any more systematic treatment of how our conceptual system is structured than this. It is not necessarily the last word but should not be overlooked.

Radial Categories

When people talk about family resemblances they ignore the complexity of the conceptual work that goes into them. Radial categories give a good sense of that depth.

Schemas and rich images

While image schemas are still a bit controversial as actual cognitive constructs, Lakoff’s treatment of them alongside rich images shows the importance of both as heuristics to interpreting cognitive phenomena.

Objectivism vs Basic Realism

Although objectivism (nothing to do with Ayn Rand) is not a position taken by any practicing philosophers and feels a bit straw-manny, I find Lakoff’s outline of it eerily familiar as I read works across the humanities and social sciences, let alone philosophy. When people read the description, they should avoid dismissing it with ‘of course nobody thinks that’ and reflect on how many people approach problems of mind and language as if they did think that.

Prototype effects and basic-level categories

These concepts are not original to Lakoff but are essential to understanding the others.

Role of metaphor and metonymy

Lakoff is best known for his earlier work on metaphor (which is why figurative language is not a key concept in itself), but this book puts metaphor and metonymy in the perspective of broader cognition.

Embodiment and motivation

Embodiment is an idea thrown around a lot these days. Lakoff’s is an important early contribution that shows some of the actual interaction between embodiment and cognition.

I find it particularly relevant when he talks about how concepts are motivated but not determined by embodied cognition.

Constructions

Lakoff’s work was taking shape alongside Fillmore’s work on construction grammar and Langacker’s on cognitive grammar. While the current construction grammar paradigm is much more influenced by those, I think it is still worth reading Lakoff for his contribution here. Case studies 2 and 3 in particular are great examples of the power of this approach.

Additional chapters of interest

Elaborations of core concepts

Chapters 17 and 18 elaborate on the core concepts in important ways but many people never reach them because they follow a lot of work on philosophical implications.

Chapter 17 on Cognitive Semantics takes another, deeper look at ICMs (idealized cognitive models) across various dimensions.

Chapter 18 deals with the question of how conceptual categories work across languages in the context of relativism. The name of the book is derived from a non-English example, but this chapter takes the question of universals and language specificity head on. Perhaps not in the most comprehensive way (the debate on relativism has moved on), but it illuminates the core concepts further.

Case studies

Case Studies 2 and 3 should be of great interest to linguists. Not because they are perfect but because they show the depth of analysis required of even relatively simple concepts.

Philosophical implications

Lakoff is not shy about placing his work in the context of disruption of the reigning philosophical paradigm of his (and to a significant extent our) day. Chapter 11 goes into more depth on how he understands the ‘objectivist paradigm’. It has been criticised for not representing actual philosophical positions (which he explicitly says he’s not doing), but I think it is representative of many actual philosophical and other treatments of language and cognition.

This is then elaborated in chapters 12–16 and of course in his subsequent book with Mark Johnson, Philosophy in the Flesh. I find the positive argument they’re making compelling, but it is let down by staying on the surface of the issues they’re criticising.

What to skip

Where Lakoff (and elsewhere Lakoff and Johnson) most open themselves to criticism is in their relatively shallow reading of their opponents. Most philosophers don’t engage with this work because they don’t find that it speaks their language, and when it does, it is easily dismissed as too light.

While I think that the broad critique this book presents of what it calls ‘objectivist approaches’ is correct, I don’t recommend that anyone take the details too seriously. Lakoff simultaneously gives it too little and too much attention. He argues against very small details but leaves too many gaps.

This means that those who should be engaging with the very core of the work’s contribution fixate on errors and gaps in his criticism and feel free to dismiss the key aspects of what he has to say (much to their detriment).

For example, his critique of situation semantics leaves too many gaps and left him open to successful rejoinders, even if he was probably right.

What is missing

While Lakoff engages with cognitive anthropology (and he and Johnson acknowledge their debts in the preface to Metaphors We Live By), he does not reflect the really interesting work in this area. Goffman (shockingly) gets no mention, nor does Victor Turner, whose work on liminality is a pretty important companion.

There’s also little acknowledgement of work on texts, such as that by Halliday and Hasan (although that work was arguably still waiting for its greatest impact in the mid-1980s with the appearance of corpora). Lakoff and most of the researchers in this area stay firmly at the level of the clause. But given that my own work mostly focuses on discourse and text-level phenomena, I would say that.

What to read next

Here are some suggestions for where to go next for elaborations of the key concepts or ideas with relevance to those outlined in the book.

  • Moral Politics by Lakoff launched his forays into political work, but I think it’s more important as an example of this way of thinking applied for a real purpose. He replaces Idealized Cognitive Models with Frames but shows many great examples of them at work. Even if it falls short as an exhaustive analysis of the issues, it is very important as a methodological demonstration of how frames work in real life. I think of it almost as a fourth case study to this book.
  • The Way We Think by Gilles Fauconnier and Mark Turner provides a model of how cognitive models work ‘online’ during the process of speaking. Although it has made a more direct impact in the field of construction grammar, its importance is still underappreciated outside it. I think of it as an essential companion to the core contribution of this book. Lakoff himself draws on Fauconnier’s earlier work on mental spaces in this book.
  • Work on construction grammar This book was one of the first places where the notion of ‘construction’ in the sense of ‘construction grammar’ was introduced. It has since developed into its own substantive field of study driven by others. I’d say the work of Adele Goldberg is still the best introduction, but for my money William Croft’s ‘Radical Construction Grammar’ is the most important. Taylor’s overview of the related ‘Cognitive Grammar’ is also not a bad next read.
  • Work on cognitive semantics There is much to read here. Talmy’s massive two volumes of ‘Toward a Cognitive Semantics’ are perhaps the most comprehensive, but most of the work here happens across various journals. I’m not aware of a single shorter introduction.
  • Philosophy and the Mirror of Nature by Richard Rorty is a book I frankly wish Lakoff had read. Rorty’s taking apart of philosophy’s epistemological imaginings is very much complementary to Lakoff’s critique of ‘objectivism’, but done while engaging deeply with the philosophical issues. While I basically go along with Lakoff’s and later Lakoff and Johnson’s core argument, I can see why it could be more easily dismissed than Rorty’s. Of course, Rorty’s work is also better known by reputation than deeply reflected in much of today’s philosophy. Lakoff and Johnson’s essential misunderstanding in Philosophy in the Flesh of Rorty’s contribution and its fundamental compatibility with their project is an example of why so many don’t take that aspect of their work seriously. (Although they are right that both Rorty and Davidson would have been better served by a less impoverished view of meaning and language.)

Anthropologists’ metaphorical shenanigans: Or how (not) to research metaphor


Over on the excellent ‘Genealogy of Religion’, Cris Campbell waved a friendly red rag in front of my eyes to make me incensed over the exaggerated claims (some) anthropologists make about metaphors. I had expressed some doubts in previous comments but felt that perhaps this particular one deserved its own post.

The book Cris refers to is a collection of essays, America in 1492: The World of the Indian Peoples Before the Arrival of Columbus (1991, ed. Alvin Josephy), which also contains an essay by Joel Sherzer called “A Richness of Voices”. I don’t have the book but I looked up a few quotes on metaphor from it.

The introduction summarizes the conclusion thus:

“Metaphors about the relations of people to animals and natural forces were essential to the adaptive strategies of people who lived by hunting.” (p. 26)

This is an example of what Sherzer has to say about metaphor:

“Another important feature of native vocabularies was the metaphor – the use of words or groups of words that related to one realm of meaning to another. To students they provide a window into American Indian philosophies. … The relationship between the root and the derived form was often metaphorical.” (p. 256)

The first part of both statements is true but the second part does not follow. That is just bad, bad scholarship. I’m not a big Popperian, but if you want to make claims about language, you have to postulate some hypotheses and try really, really, really hard to disprove them. Why? Because there are empirical aspects to these questions that can have empirical support. Instead, the hypotheses are implied and no attempt is made to see if they work. So this is what I suggest are Sherzer’s implicit hypotheses that should be made explicit and tested:

  1. American Indian languages use metaphors for essential parts of their understanding of the world. (Corollary: If we understand the metaphors, we can understand the worldview of the speakers of those languages.)
  2. American Indian languages’ use of metaphor was necessary to their speakers’ survival because of their hunter-gatherer lifestyles.
  3. American Indian languages use metaphor more than the SAE (Standard Average European) languages.

Re 1: This is demonstrably true. It is true of all languages so it is not surprising here. However, exactly how central this metaphorical reasoning is and how it works cognitively is an open question. I addressed some of this in my review of Verena Haser’s book.

As to the corollary, I’ve mentioned this time and time again: there is no straightforward link between metaphor and worldview. War on poverty, war on drugs and war on terror all draw on different aspects of war. As do the Salvation Army, the Peace Corps and the Marine Corps. You can’t say that the Salvation Army subscribes to the same level of violence as a ‘real’ army. The same goes for metaphors like ‘modern Crusades’ or the various notions of ‘Jihad’. Metaphor works exactly because it does not commit us to a particular course of action.

That’s not to say that the use of metaphor can never be revealing of underlying conceptualizations. For instance, calling something a rebellion vs. calling it a ‘civil war’ imposes a certain order on the configuration of participants and reveals the speaker’s assumptions. But calling someone ‘my rock’ does not reveal any cultural preoccupation with rocks. The latter (I propose) is much more common than the former.

Re 2: I think this is demonstrably false. From my (albeit incomplete) reading of the literature, most of the time metaphors just got in the way of hunting. Thinking of the ‘Bear’ as the father to whom you have to ritually apologise before killing him seems a bit excessive. Over-metaphorisation of plants and animals has also led to their over- or under-exploitation, e.g. the Nuer not eating birds and forgoing an important source of nutrition, or the Hawaiians hunting rare birds to extinction for their plumes. Sure, metaphors were essential to the construction of folk taxonomies, but that is equally true of Western ‘scientific’ taxonomies, which map onto notions of descent, progress and containment. (PS: I’ve been working on a post called ‘Taxonomies are metaphors’ where I elaborate on this.)

Re 3: This is just out-and-out nonsense. The examples given are things like the bark of the tree being called ‘skin’ and spatial prepositions like ‘on top of’ or ‘behind’ being derived from body parts. The author obviously did not bother to consult an English etymological dictionary, where he could have discovered that ‘top’ comes from ‘tuft’ as in ‘tuft of hair’ (or is at the very least connected). And of course the connection of ‘behind’ to a body part (albeit in the other direction) should be pretty obvious to anyone. Anyway, body-part metaphors are all over all languages in all sorts of similar but inconsistent ways: mountains have feet (but not heads), human groups have heads (but not feet), trees have trunks (but not arms), a leader may have someone as their right arm (but not their left foot). And ‘custard has skin’ in English (chew on that). In short, unless the author can show even a hint of a quantitative tendency, it’s clear that American Indian languages are just as metaphorical as any other languages.

Sherzer comes to this conclusion:

“Metaphorical language pervaded the verbal art of the Americas in 1492, in part because of the closeness Native American had always felt to the natural world around them and their social, cultural, aesthetic, and personal identification with it and in part because of their faith in the immediacy of a spirit world whose presence could be manifest in discourse.”

But that displays a fundamental misunderstanding of how metaphor works in language. Faith in immediacy has no link to the use of metaphors (or at the very least Sherzer did not demonstrate any link, because he confused lyricism with scholarship). Sure, metaphors based on the natural world might indicate ‘closeness to the natural world around’, but that’s just as much of a discovery as saying that people who live in an area with lots of oaks have a word that means ‘oak’. The opposite would be surprising. The problem is that if you analyzed English without preconceptions about the culture of its speakers, you would find just as much closeness to the natural world (e.g. a person can be ‘a force of nature’, have ‘eyes like a hawk’, be ‘dirty as a pig’, ‘wily as a fox’, ‘slow as a snail’, ‘beautiful as a flower’, ‘sturdy as a tree’, etc.).

While this seems deep, it’s actually meaningless.

“The metaphorical and symbolic bent of Mesoamerica was reflected in the grammars, vocabularies, and verbal art of the region.” (p. 272)

Mesoamerica had no ‘symbolic bent’. Humans have a symbolic bent, just like they have spleens, guts and little toes. So let’s stop being all gushy about it and study things that are worth a note.

PS: This just underscores my comments on an earlier post of Cris’ where I took this quote to task:

“Nahuatl was and is a language rich in metaphor, and the Mexica took delight in exploring veiled resemblances…”

This is complete and utter nonsense. Language is rich in metaphor and all cultures explore veiled resemblances. That’s just how language works. All I can surmise is that the author did not learn the language very well and therefore was translating some idioms literally. It happens. Or she’s just mindlessly spouting a bullshit trope people trot out when they need to support some mystical theory about a people.

And the conclusion!? “In a differently conceptualized world concepts are differently distributed. If we want to know the metaphors our subjects lived by, we need first to know how the language scanned actuality. Linguistic messages in foreign (or in familiar) tongues require not only decoding, but interpretation.” Translated from bullshit to normal speak: “When you translate things from a foreign language, you need to pay attention to context.” Nahuatl is no different to Spanish in this. In fact, the same applies to British and American English.

Finally, this metaphor mania is not unique to anthropologists. I’ve seen this in philosophy, education studies, etc. Metaphors are seductive… Can’t live without them…


Linguistics according to Fillmore


While people keep banging on about Chomsky as the be-all and end-all of linguistics (I’m looking at you, philosophers of language), there have been many linguists who have had a much more substantial impact on how we actually think about language in a way that matters. In my post on why Chomsky is not really a linguist at all, I listed a few.

Sadly, one of these linguists died yesterday: Charles J. Fillmore, a towering figure among linguists without ever writing a single book. In my mind, he changed the face of linguistics three times with just three articles (one of them co-authored). Obviously, he wrote many more, but compared to his massive impact, his output was relatively modest. His ideas have been with me all through my life as a linguist and, on reflection, they form a foundation of what I know language to be. Therefore, this is not so much an obituary (for which I’m hardly the most qualified person out there) as a manifesto for a linguistics of a truly human language.

The case for Fillmore

The first article, more of a slim monograph at 80-odd pages, was The Case for Case (which, for some reason, I first read in Russian translation). Published in 1968, it was one of the first efforts to find deeper functional connections in generative grammar (following on his earlier work with transformations). If you’ve studied Chomskean Government and Binding, this is where thematic roles essentially come from. I only started studying linguistics in 1991, by which time Case for Case was already considered a classic, particularly in Prague where function was so important. But even after all those years, it is still worth reading for any minimalist out there. Unlike so many in today’s divided world, Fillmore engaged with the whole universe of linguistics, citing Halliday, Tesnière, Jakobson, Whorf, Jespersen and others while giving an excellent overview of the treatment of case by different theories and theorists. But the engagement went even deeper: the whole notion of ‘case’ as one “base component of the grammar of every language” brought so much traditional grammar back into contact with a linguistics that was speeding away from all that came before at a rate of knots.

From today’s perspective, its emphasis on deep and surface structures, as well as its relatively impoverished semantics, may seem a bit dated, but it represents an engagement with language used to express real meaning. The thinking that went into deep cases transformed into what has become known as Frame Semantics (“I thought of each case frame as characterizing a small abstract ‘scene’ or ‘situation’, so that to understand the semantic structure of the verb it was necessary to understand the properties of such schematized scenes” [1982]), which is where things really get interesting.

Fillmore in the frame

When I think about frame semantics, I always go to his 1982 article Frame Semantics, published in the charmingly named conference proceedings ‘Linguistics in the Morning Calm’, but the idea had its first outing in 1976. George Lakoff used it as one of the key inspirations for his idealized cognitive models in Women, Fire, and Dangerous Things, which is where this site can trace its roots. As I have said before, I essentially think about metaphors as a special kind of frame.

In it, he says:

By the term ‘frame’ I have in mind any system of concepts related in such a way that to understand any one of them you have to understand the whole structure in which it fits; when one of the things in such a structure is introduced into a text, or into a conversation, all of the others are automatically made available. I intend the word ‘frame’ as used here to be a general cover term for the set of concepts variously known, in the literature on natural language understanding, as ‘schema’, ‘script’, ‘scenario’, ‘ideational scaffolding’, ‘cognitive model’, or ‘folk theory’.

It is a bit of a mouthful, but it captures in a paragraph the absolute fundamentals of the semantics of human language, as opposed to the projection of the rules of formal logic and truth conditions onto an impoverished version of language that all the generative-inspired approaches attempt. It also brings together many other concepts from different fields of scholarship. Last year I presented a paper on the power of the concept of frame where I found even more terms that have a close affinity to it, which only underscores the far-reaching consequences of Fillmore’s insight.

As I was looking for some more quotes from that article, I realized that I’d have to pretty much cut and paste in the whole of it. Almost every sentence there is pure gold. Rereading it now after many, many years, it’s becoming clear how many things from it I’ve internalized (and, frankly, reinvented some of the ideas I had forgotten were there).
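To make the idea a bit more concrete, here is a toy sketch in Python (my own illustrative notation, not Fillmore’s formalism) built around his classic commercial transaction example: any one of the verbs evokes the whole frame, it just foregrounds different roles.

```python
# Toy illustration of a frame: a bundle of roles that comes as a package.
# Frame and role names follow Fillmore's commercial transaction example,
# but the representation itself is only an illustrative sketch.

COMMERCIAL_TRANSACTION = {"roles": ["buyer", "seller", "goods", "money"]}

# Each verb evokes the whole frame while foregrounding different roles.
LEXICON = {
    "buy":    {"frame": "COMMERCIAL_TRANSACTION", "foregrounds": ["buyer", "goods"]},
    "sell":   {"frame": "COMMERCIAL_TRANSACTION", "foregrounds": ["seller", "goods"]},
    "pay":    {"frame": "COMMERCIAL_TRANSACTION", "foregrounds": ["buyer", "money"]},
    "charge": {"frame": "COMMERCIAL_TRANSACTION", "foregrounds": ["seller", "money"]},
}

def evoke(word: str) -> None:
    """Using any one word makes the whole structure it fits into available."""
    entry = LEXICON[word]
    print(f"'{word}' evokes {entry['frame']}:")
    print(f"  foregrounded roles: {entry['foregrounds']}")
    print(f"  background roles:   {COMMERCIAL_TRANSACTION['roles']}")

for verb in ("buy", "sell", "pay", "charge"):
    evoke(verb)
```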

Constructing Fillmore

At about the same time, and merging the two earlier insights, Fillmore started working on the principles that have come to be known as construction grammar. Although by then the ideas were some years old, I always think of his 1988 article with Paul Kay and Mary Catherine O’Connor as a proper construction grammar manifesto. In it they say:

The overarching claim is that the proper units of a grammar are more similar to the notion of construction in traditional and pedagogical grammars than to that of rule in most versions of generative grammar.

Constructions, according to Fillmore have these properties:

  1. They are not limited to the constituents of a single syntactic tree. Meaning, they span what have been considered the building blocks of language.
  2. They specify at the same time syntactic, lexical, semantic and pragmatic information.
  3. Lexical items can also be viewed as constructions (this is absolutely earth-shattering and I don’t think linguistics has come to grips with it yet).
  4. They are idiomatic. That is, their meaning is not built up from their constituent parts.
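Here is a minimal sketch (again my own toy notation, not Fillmore and Kay’s attribute-value formalism) of what it means for a single construction to bundle syntactic, lexical, semantic and pragmatic information at once. The ‘let alone’ construction from the 1988 article and an ordinary lexical item are both constructions in this sense; the example sentences and informal glosses are my own.

```python
# A construction as one object pairing form with meaning and conditions of use.
# The characterizations below are informal glosses for illustration only.

LET_ALONE = {
    "name": "let_alone",
    "form": "X let alone Y",  # lexically anchored, spans more than one phrase
    "syntax": "coordinates two focused constituents of like category",
    "semantics": "Y is presented as a stronger (less likely) case than X on some scale",
    "pragmatics": "the weaker claim about X is already at issue in the context",
    "example": "He won't eat fish, let alone raw squid.",
}

# A lexical item viewed as a (very small) construction: also a form-meaning pair.
CUSTARD = {
    "name": "custard",
    "form": "custard",
    "syntax": "mass noun",
    "semantics": "a sweet sauce or dessert made with milk and eggs",
    "pragmatics": None,
    "example": "The custard has a skin on it.",
}

for cxn in (LET_ALONE, CUSTARD):
    print(f"{cxn['name']}: form={cxn['form']!r} / meaning={cxn['semantics']!r}")
```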

Although Lakoff’s study of ‘there constructions’ in Women, Fire, and Dangerous Things came out a year earlier (and is still essential reading), I prefer Fillmore as an introduction to the subject (if only because I never had to translate it).

The beauty of construction grammar (just as the beauty of frame semantics) is in that it can bridge much of the modern thinking about language with grammatical insights and intuitions of generations of researchers from across many schools of thought. But I am genuinely inspired by its commitment to language as a whole, expressed in the 1999 article by Fillmore and Kay:

To adopt a constructional approach is to undertake a commitment in principle to account for the entirety of each language. This means that the relatively general patterns of the language, such as the one licensing the ordering of a finite auxiliary verb before its subject in English as illustrated in 1, and the more idiomatic patterns, such as those exemplified in 2, stand on an equal footing as data for which the grammar must provide an account.

(1) a. What have you done?  b. Never will I leave you. c. So will she. d. Long may you prosper! e. Had I known, . . . f. Am I tired! g. . . . as were the others h. Thus did the hen reward Beecher.

(2) a. by and large b. [to] have a field day c. [to] have to hand it to [someone]  d. (*A/*The) Fool that I was, . . . e. in x’s own right

Given such a commitment, the construction grammarian is required to develop an explicit system of representation, capable of encoding economically and without loss of generalization all the constructions (or patterns) of the language, from the most idiomatic to the most general.

Notice that they don’t just say ‘language’ but ‘each language’. Both of those articles give ample examples of how constructions work and what they do and I commend them to your linguistic enjoyment.

Ultimately, I do not subscribe to the exact version of construction grammar that Fillmore and Kay propose, agreeing with William Croft that it is still too beholden to the formalist tradition of the generative era, but there is something to learn from on every page of everything Fillmore wrote.

Once more with meaning: the FrameNet years

Both frame semantics and construction grammar shaped Fillmore’s work in lexicography with Sue Atkins and culminated in FrameNet, a machine-readable frame-semantic dictionary providing a model for a semantic module to a construction grammar. To make the story complete, we can even see FrameNet as a culmination of the research project begun in Case for Case, which was the development of a “valence dictionary” (as he summarized it in 1982). While FrameNet is much more than that and has very much abandoned the claim to universal deep structures, it can be seen as accomplishing the mission of a language with meaning that Fillmore set out on in the 1960s.
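FrameNet can also be explored programmatically. Here is a minimal sketch using NLTK’s FrameNet corpus reader, assuming nltk is installed and the framenet_v17 data has been downloaded; the frame name Commerce_buy is as found in recent FrameNet releases.

```python
# A minimal sketch of browsing FrameNet data with NLTK's corpus reader.
# Assumes: pip install nltk, plus the 'framenet_v17' data package.
import nltk
from nltk.corpus import framenet as fn

nltk.download("framenet_v17", quiet=True)

# List frames whose names match a pattern.
for frame in fn.frames(r"(?i)commerce"):
    print(frame.name)

# Inspect one frame: its definition, frame elements (roles) and lexical units.
f = fn.frame("Commerce_buy")
print(f.definition[:150], "...")
print("Frame elements:", sorted(f.FE.keys()))
print("Lexical units:", sorted(f.lexUnit.keys()))
```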

Remembering Fillmore

I only met Fillmore once, when he came to lecture at a summer school in Prague almost twenty years ago. I enjoyed his lectures but was really too star-struck to take advantage of the opportunity. But I saw enough of him to understand why he is remembered with deep affection and admiration by all of his colleagues and students, whose ranks form a veritable who’s who of linguists to pay attention to.

In my earlier post, I compared him in stature and importance to Roman Jakobson (even if Jakobson’s crazily voluminous output across four languages dwarfs Fillmore’s, and almost everyone else’s). Fillmore was more than a linguist’s linguist; he was a linguist who mattered (and matters) to anyone who wanted (and wants) to understand how language works beyond a few minimalist soundbites. Sadly, it is possible to meet graduates with linguistics degrees who have never heard of Jakobson or Fillmore, while it’s almost impossible to meet someone who has never heard of Chomsky, even among people who know nothing else about language. But I have no doubt that in the decades of language scholarship to come, it will be Fillmore and his ideas that form the foundation upon which the edifice of linguistics will rest. May he rest in peace.

Post Script

I am far from being an expert on Fillmore’s work and life. This post reflects my personal perspective and the lessons I’ve learned rather than a comprehensive or objective reference work. I may have been rather free with the narrative arc of his work. Please feel free to offer corrections and clarifications. Language Log reposted a more complete profile of his life.

References

  • Fillmore, C.J., 1968. The Case for Case. In E. Bach & R. Harms, eds. Universals in Linguistic Theory. New York: Holt, Rinehart and Winston, pp. 1–88. Available at: http://pdf.thepdfportal.com/PDFFiles/123480.pdf [Accessed February 15, 2014].
  • Fillmore, C.J., 1976. Frame Semantics and the nature of language. Annals of the New York Academy of Sciences, 280 (Origins and Evolution of Language and Speech), pp. 20–32.
  • Fillmore, C.J., 1982. Frame Semantics. In The Linguistic Society of Korea, ed. Linguistics in the Morning Calm: International Conference on Linguistics: Selected Papers. Seoul, Korea: Hanshin Pub. Co., pp. 111–139.
  • Fillmore, C.J., Kay, P. & O’Connor, M.C., 1988. Regularity and Idiomaticity in Grammatical Constructions: The Case of Let Alone. Language, 64(3), pp. 501–538.
  • Kay, P. & Fillmore, C.J., 1999. Grammatical constructions and linguistic generalizations: the What’s X doing Y? construction. Language, 75(1), pp. 1–33.

Storms in all Teacups: The Power and Inequality in the Battle for Science Universality


The great blog Genealogy of Religion posted this video with a somewhat approving commentary:

The video started off with panache and promised some entertainment; however, I found myself increasingly annoyed as it continued. The problem is that this is an exchange of cliches that pretends to be a fight of truth against ignorance. Sure, Storm doesn’t put forward a very coherent argument for her position, but neither does Minchin. His description of science vs. faith is laughable (being in awe at the size of the universe, my foot) and nowhere does he display any nuance nor, frankly, any evidence that he is doing anything other than parroting what he’s heard on some TV interview with Dawkins. I have much more sympathy with the Storms of this world than with these self-styled defenders of science whose only credentials are that they can remember a bit of high school physics or chemistry and have read an article by some neo-atheist in Wired. What’s worse, it’s would-be rationalists like him who do what passes for science reporting in major newspapers or on the BBC.

But most of all, I find it distasteful that he chose a young woman as his antagonist. If he wished to take on the ‘antiscience’ establishment, there are so many better figures to target for ridicule. Why not take on the pseudo-spiritualists in the mainstream media with their ecumenical conciliatory garbage? How about taking on tabloids like Nature or Science that publish unreliable preliminary probes as massive breakthroughs? How about universities that put out press releases distorting partial findings? Why not take on economists who count things that it makes no sense to count just to make things seem scientific? Or, if he really has nothing better to do, let him lay into some super-rich creationist pastor. But no, none of these captured his imagination; instead he chose to focus his keen intellect and deep erudition on a stereotype of a young woman who’s trying to figure out a way to be taken seriously in a world filled with pompous frauds like Minchin.

The blog post commenting on the video sparked a debate about the limits of knowledge. (Note: This is a modified version of my own comment.) But while there’s a debate to be had about the limits of knowledge (which is what this blog is about), this is not the occasion. There is no need to adjudicate which of these two is more ‘on to something’. They’re not touching on anything of epistemological interest; they’re just playing a game of social positioning in the vicinity of interesting issues. But in this game, people like Minchin have been given a lot more chips to play with than people like Storm. It’s his follies and prejudices, and not hers, that are given a fair hearing. So I’d rather spend a few vacuous moments in her company than endorse his mindless ranting.

And as for ridiculing people for stupidity or shallow thinking, I’m more than happy to take part. But I want to have a look at those with power and prestige, because they act just as silly and irrationally as the Storms of this world the moment they step out of their areas of expertise. I see this all the time in language, culture and history (areas I know enough about to judge the level of insight). Here’s the most recent example that caught my eye:

It comes from a side note in a post about evolutionary foundations of violence by a self-proclaimed scientist (the implied hero in Minchin’s rant):

 It is said that the Bedouin have nearly 100 different words for camels, distinguishing between those that are calm, energetic, aggressive, smooth-gaited, or rough, etc. Although we carefully identify a multitude of wars — the Hundred Years War, the Thirty Years War, the American Civil War, the Vietnam War, and so forth — we don’t have a plural form for peace.

Well, this paragon of reason could be forgiven for not knowing what sort of nonsense this ‘100 words for’ cliche is. The Language Log has spilled enough bits on why this and other snowclones are silly. But the second part of the argument is just stupid. And it is typical of a scientist blundering about the world as if the rules of evidence didn’t apply to him outside the lab and as if data not in a spreadsheet did not require a second thought. As if having a PhD in evolutionary theory meant everything else he says about humans must be taken seriously. But how can such a moronic statement be taken as anything but feeble twaddle to be laughed at and belittled? How much more cumulatively harmful are moments like these (and they are all over the place) than the socializing efforts of people like Storm from the video?

So, I should probably explain why this is so brainless. First, we don’t have a multitude of words for war (just like the Bedouin don’t have 100, or even a dozen, for a camel). We just have the one, and we have a lot of adjectives with which we can modify its meaning. And if we want to look for some that are at least equivalent to possible camel attributes, we won’t choose names of famous wars but rather things like civil war, total war, cold war, holy war, global war, naval war, nuclear war, etc. I’m sure West Point or even Wikipedia has much to say about a possible classification. And of course, all of this applies to peace in exactly the same way. There are ‘peaces’ with names like the Peace of Westphalia or the Arab-Israeli Peace, and just as many attributive pairs like international peace, lasting peace, regional peace, global peace, durable peace, stable peace, great peace, etc. I went to a corpus to get some examples, but that this must be the case was obvious, and a simple Google search would give enough examples to confirm a normal language speaker’s intuition. But this ‘scientist’ had a point to make and, because he’s spent twenty years doing research in the evolution of violence, he must surely be right about everything on the subject.
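For what it’s worth, the corpus check is easy to reproduce. Here is a rough sketch using NLTK’s tagged Brown corpus (my own choice of corpus for illustration; it assumes nltk is installed with the brown and universal_tagset data), counting the adjectives that immediately precede ‘war’ and ‘peace’:

```python
# Rough sketch: count adjectives immediately preceding 'war' and 'peace'
# in the Brown corpus, to show both nouns take plenty of attributive modifiers.
# Assumes: pip install nltk, plus the 'brown' and 'universal_tagset' data.
from collections import Counter

import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)
nltk.download("universal_tagset", quiet=True)

def adjective_modifiers(noun: str) -> Counter:
    counts = Counter()
    for sent in brown.tagged_sents(tagset="universal"):
        for (w1, t1), (w2, _) in zip(sent, sent[1:]):
            if w2.lower() == noun and t1 == "ADJ":
                counts[w1.lower()] += 1
    return counts

for noun in ("war", "peace"):
    print(noun, adjective_modifiers(noun).most_common(10))
```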


Now, I’m sure this guy is not an idiot. He’s obviously capable of analysis and presenting a coherent argument. But there’s an area that he chose to address about which he is about as qualified to make pronouncements as Storm and Minchin are about the philosophy of science. And what he said there is stupid, and he should be embarrassed for having said it. Should he be ridiculed and humiliated for it the way I did here? No. He made the sort of mistake everyone makes, from high school students to Nobel laureates. He thought he knew something and didn’t bother to examine his knowledge. Or he did try to examine it but didn’t have the right tools to do it. Fine. But he’s a scientist (and a man not subject to stereotypes about women), so we give him and too many like him a pass. But Storm, a woman who like so many of her generation uses star signs to talk about relationships and is uncomfortable with the grasping maw of classifying science chomping on the very essence of her being, she is fair game?

It’s this inequality that makes me angry. We afford one type of shallowness the veneer of respectability and rake another over the coals of ridicule and opprobrium. Not on this blog!


UPDATE: I was just listening to this interview with a philosopher and historian of science about why there was so much hate coming from scientists towards the Gaia hypothesis, and his summation, it seems to me, fits right in with what this post is about. He says: “When scientists feel insecure and threatened, they turn nasty.” And it doesn’t take a lot of study of the history and sociology of science to find ample examples of this. The ‘science wars’, the ‘linguistics wars’, the neo-Darwinist thought purism; the list just goes on. The worldview of scientism is totalising and has to deal with exactly the same issues as other totalising views, such as monotheistic religions with constitutive ontological views or socio-economic utopianisms (e.g. neo-liberalism or Marxism).

And one of those issues is how you afford respect to, or even just maintain conversation with, people who challenge your ideological totalitarianism – in other words, people who are willfully and dangerously “wrong”. You can take the Minchin approach of suffering in silence at parties and occasionally venting your frustration at innocent passersby, but that can lead to outbreaks of group hysteria, as we saw with the Sokal hoax or one of the many moral panic campaigns.

Or you can take the more difficult journey of giving up some of your claims on totality and engaging even with those most threatening to you as human beings, the way Feyerabend did or Gould sometimes tried to do. This does not mean patiently proselytizing in the style of evangelical missionaries but rather an ecumenical approach of meeting together without denying who you are. This will inevitably involve moments where irreconcilable differences lead to a stand on principles (cf. Is multiculturalism bad for women?), but even in those cases an effort at understanding can benefit both sides, as with the question of vaccination described in this interview. At all stages, there will be a temptation to “understand” the other person by reducing them to our own framework of humanity: psychologizing a religious person as an unsophisticate dealing with feelings of awe in the face of incomprehensible nature, or pitying the atheist for not being able to feel the love of God and reach salvation. There is no solution. No utopia of perfect harmony and understanding. No vision of lions and lambs living in peace. But acknowledging our differences and slowing down our outrage can perhaps make us into better versions of ourselves and help us stop wasting time trying to reclaim other people’s stereotypes.

Image: Storm in a teacup (BruceW. via Compfight)

UPDATE 2: I am aware of the paradox between the introduction and the conclusion of the previous update. Bonus points for spotting it. I actually hold a slightly more nuanced view than the first paragraph would imply but that is a topic for another blog post.

Sunsets, horizons and the language/mind/culture distinction


For some reason, many accomplished people, when they are done accomplishing what they’ve set out to accomplish, turn their minds to questions like:

  • What is primary, thought or language?
  • What is primary, culture or language?
  • What is primary, thought or culture?

I’d like to offer a small metaphor hack for solving, or rather dissolving, these questions. The problem is that all three concepts (culture, mind and language) are just useful heuristics for talking about aspects of our being. So when I see somebody speaking in a way I don’t understand, I can talk about their language. Or others behave in ways I don’t like, so I talk about their culture. Then, there’s stuff going on in my head that’s kind of like language, but not really, so I call that sort of stuff mind. But these words are just useful heuristics, not discrete realities. Old Czechs used the same word for language and nation. English often uses the word ‘see’ for ‘understand’. What does it mean? Not that much.

Let’s compare it with the idea of the setting sun. I see the Sun disappearing behind the horizon and I can make some useful generalizations about it: organize my directions (east/west), plant things so they grow better, orient my dwelling, etc. And my description of this phenomenon as ‘the sun is setting behind the horizon’ is perfectly adequate. But then I might start asking questions like ‘what does the Sun do when it’s behind the horizon?’ Does it turn itself off and travel under the earth to rise again in the East the next morning? Or does it die and a new one rise the next day? Those are all very bad questions, because I have accepted my local heuristic as describing a reality. It would be even worse if I tried to go and see the edge of the horizon. I’d be like the two fools who agreed to follow the railway tracks all the way to the point where they meet. They keep going until one of them turns around and says ‘dude, we already passed it’.

So to ask questions about how language influences thought and culture influences language is the same as trying to go see the horizon. Language, culture and mind are just ways of describing things for particular purposes and when we start using them outside those purposes, we get ourselves in a muddle.

Image: Great Lakes in Sunglint (NASA, International Space Station, 06/14/12)

RaAM 9 Abstract: Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction


Given how long I’ve been studying metaphor (at least since 1991, when I first encountered Lakoff and Johnson’s work, and full on since 2000), it is amazing that I have yet to attend a RaAM (Researching and Applying Metaphor) conference. I had an abstract accepted to one of the previous RaAMs but couldn’t go. This time, I’ve had an abstract accepted and wild horses won’t keep me away (even though it is expensive, since no one is sponsoring my going). The abstract that got accepted is about a small piece of research that I conceived back in 2004, wrote up in a blog post in 2006, was supposed to talk about at a conference in 2011 and will finally get to present this July at RaAM 9.

Unlike most academic endeavours, this one needs to come with a parental warning. The material described contains profane sexual and scatological imagery employed for the purposes of satire. But I think it makes a really important point that I don’t see people making as a matter of course in the metaphor studies literature. I argue that metaphors can be incredibly powerful and seductive, but that they are also routinely deconstructed and negotiated. They are not something that just happens to us. They are opportunistic and random just as much as they are systematic and fundamental to our cognition. Much of current metaphor studies is still fighting the battle against the view that metaphors are mere peripheral adornments on the literal. And, to be sure, the “just a metaphor” label is still to be seen in popular discourse today. But it has now been over 40 years since this fight was intellectually won. So we need to focus on the broader questions about the complexities of the role metaphor plays in social cognition. And my contribution to RaAM hopes to point in that direction.

 

Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction

In this contribution, I propose to investigate metaphoric cognition as an extended discursive and social phenomenon that is the cornerstone of our ability to understand and negotiate issues of public importance. Since Lakoff and Johnson’s groundbreaking study, research in linguistics, cognitive psychology and discourse studies has tended to view metaphor as a purely unconscious phenomenon that is outside a normal speaker’s ability to manipulate. However important this view of metaphor and cognition may be, it tells only part of the story. Equally important, and surprisingly frequent, is the ability of metaphor to enter into collective (meta)cognition through extended discourse in which acceptable cross-domain mappings are negotiated.
I will provide an example of a particular metaphorical framing and the metacognitive framework it engendered that made it possible for extended discourse to develop. This metaphor, a leitmotif in the ‘Team America’ film satire, mapped the physiological and phraseological properties of taboo body parts onto geopolitical issues of war in a way that made it possible for participants in the subsequent discourse to be simultaneously seduced by the power of the metaphor and empowered to engage in talk about cognition, text and context, as exemplified by statements such as: “It sounds quite weird out of context, but the paragraph about dicks, pussies and assholes was the craziest analogy I’ve ever heard, mainly because it actually made sense.” I will demonstrate how this example is typical rather than aberrant of metaphor in discourse and discuss the limits of a purely cognitive approach to metaphor.
Following Talmy, I will argue that significant elements of metaphoric cognition are available to speakers’ introspection and thus available for public negotiation. However, this does not prevent the sheer power of the metaphor from having an impact on both cognition and discourse. I will argue that as a result of the strength of this one metaphor, the balance of the discussion of this highly satirical film was shifted in support of military interventionism, as evidenced by the subsequent popular commentary. By mapping political and gender concepts onto the basic structural inevitability of human sexual anatomy, reinforced by idiomatic mappings between taboo words and moral concepts, the metaphor makes further negotiation virtually impossible within its own parameters. Thus an individual speaker may be simultaneously seduced and empowered by a particular metaphorical mapping.

21st Century Educational Voodoo


Jim Shimabukuro uses Rupert Murdoch’s quote “We have a 21st century economy with a 19th century education system” to pose the question of what 21st century education should look like (http://etcjournal.com/2008/11/03/174/): “what are the key elements for an effective 21st century model for schools and colleges?”

However, what he is essentially asking us to do is perform an act of voodoo. He’s encouraging us to start thinking about what would make our education system similar to our vision of what the 21st century economy looks like. Such exercises can come up with good ideas, but unfortunately this one is very likely to descend into predictability. People will write in about how important it is to prepare students to be flexible, to learn important skills to compete in the global markets, to use technology to leverage this or the other. There may be the odd original idea, but most respondents will stick with cliches. Because that’s what our magical discourse about education encourages most of all (this sounds snarky but I really mean it more descriptively than as an evaluation).

There are three problems with the whole exercise.

First, why should we listen to moguls and venture capitalists about education? They’re no more qualified to address this topic than any random individual who’s given it some thought, and they are more likely to have ulterior motives. To Murdoch we should say: you’ve messed up the print media environment, failed with your online efforts, stay away from our schools.

Second, we don’t have a 19th century education system. Sure, we still have teachers standing in front of students. We have classes and we have school years. We have what David Tyack and Larry Cuban have called the “grammar of schooling”. It hasn’t changed much on the surface. But neither has the grammar of English. Yet we can express things in English now that we couldn’t in the 1800s. We use English grammar with its ancient roots to express what we need in our time. Likewise, we use the same grammar of schooling to have the education system express our societal needs. It is imperfect, but it is in NO way holding us down. The evidence is either manufactured or misinterpreted. Sure, if we sat down and started designing an education system today from scratch, we’d probably do it differently, but the outcomes would probably be pretty much the same. Meaning, the state of the world isn’t due to the educational system but rather vice versa.

Third, we don’t have a 21st century economy. Of course, the current economy is in the 21st century, but it is much less than what we envision a 21st century economy to imply. It is global (as it was in 1848 when Marx and Engels were writing their manifesto). It is exploitative (of both human and natural resources). It is in the hands of the powerful and semicompetent few. Just because workers get fired by email from a continent away and stocks crash in a matter of minutes rather than hours, we can’t really talk about something fundamentally unique. Physical and symbolic property is still the key part of the economy. Physical property still takes roughly as long to shift about as it did two centuries ago (give or take a day or a month) and symbolic property is still traded in the same way (can I interest you in a tulip?). Sure, there are thousands of particular differences we could point to, but the essence of our existence is not that much changed. Except for things like indoor plumbing (thank God!), modern medicine and the speed of communication – but the education system of today has all of those pretty much in hand.

My conclusion. Don’t expect people to be relevant or right just because they are rich or successful. Question the route they took to where they are before you take their advice on the direction you should go. And, if you’re going to drag history into your analogies, study it very very carefully. Don’t rely on what your teachers told you, it was all lies!

There’s more to memory than the brain: Psychologists run clever experiments, make trivial claims, take gullible internet by storm


The online media are drawn to any “scientific” claims about the internet’s influence on our nature as humans like flies to a pile of excrement. Sadly, in this metaphor, only the flies are figurative. The latest heap of manure to instigate an annoying buzzing cloud of commentary from Wired to the BBC is an article by Sparrow et al. claiming to show that because there are search engines, we don’t have to remember as much as before. Most notably, if we know that some information can be easily retrieved, we remember where it can be obtained rather than what it is. As Wired reports:

“A study of 46 college students found lower rates of recall on newly-learned facts when students thought those facts were saved on a computer for later recovery.” http://www.wired.co.uk/news/archive/2011-07/15/search-engines-memory

Sparrow et al. designed a bunch of experiments that “prove” this claim. Thus, they holler, the internet changes how we remember. This was echoed by literally hundreds of headlines (Google claims over 600). Here’s a sample:

  • Google Effect: Changes to our Brains
  • Search engines like Google ‘changing the way human memory works’
  • Search engines change how memory works
  • Google Is Destroying Our Memories, Scientists Find
  • It pays to remember, search engines ruining our memory
  • Google rewiring the way we remember, study says
  • Has Google turned your memory to mush?
  • Internet search engines cause poor memory, scientists claim
  • Researchers: Search Engines Supplanting Our Memory
  • Google changing way brain remembers information

Many of these headlines are from “reputable” publications and they can be summarized by three words: Bullshit! Bullshit! Bullshit!

All they had to do was read this part of the abstract to understand that nothing like the stuff they blather about follows from the study:

“The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”

But they were not helped by Science, whose publication of these results is more of a self-serving stunt than a serious attempt to further knowledge. The title of the original, “Google Effects on Memory”, is all but designed to generate bat-shit crazy headlines. If the title were to be truthful, it would have to be “Google has no more effect on memory than a paper and pen or a friend.” Even the Science Magazine report on the study, entitled “Searching for the Google Effect on People’s Memory”, concludes it “doesn’t directly answer that question”. In fact, it says that the internet is filling the role of “transactive memory”, which describes the fact that we rely on people to remember things. Which means it has no impact on our brains at all. It just magnifies the transactive effects already in existence.

Any claim about a special effect of Google on any kind of memory can be debunked in two words: “shopping list”! All Sparrow et al. discovered is that the internet has changed us as much as a stub of a pencil and a grubby piece of paper. Meaning, not at all.

Some headlines cottoned onto this but they are few and far between:

  • Search Engine “Memory Loss” in Fact a Sign of Smart Behavior‎
  • Search Engines Ruin Our Memory, Make Us Smarter

Sparrow, the lead author of the study, when interviewed by Wired, said: “It’s very similar to how we use people in our lives. The internet is really just an interface with a lot of other people.”

In other words, what the internet has changed is the deployment of strategies we have always used for managing our memory. Sparrow et al. use an old term, “transactive memory”, to describe this, but that is needed only because cognitive psychology’s view of memory has been so limited. Memory is not just about storage and retrieval. Like all of our cognition, it is tied in with a whole host of strategies (sometimes lumped together under the heading of metacognition) that have a transactive and social dimension.

Let’s take the example of mobile phones. About 15 years ago I remembered about four phone numbers (home, work, mother, friend). Now I remember none. They’re all stored in my mobile phone. What’s happened? I changed my strategy of information storage and retrieval because of the technology available. Was this a radical change? No, because I needed a lot more numbers, so I carried a little booklet with the rest of them. So the mobile phone freed my memory of four items. Big deal! Arguably, these four items have a huge potential transactional impact: if my mobile phone is dead or lost, I cannot call the people most likely to be able to offer assistance. But how often does that happen? It hasn’t happened to me yet in an emergency. And in a non-emergency I have many backups. At any rate, in the past I was much more likely to be caught in an emergency where I couldn’t find a phone at all. So the change has been fairly minimal.

But what’s more interesting here is that I didn’t notice this change until I heard someone talk about it. This transactional change is a topic of conversation; it is not just something that happened, it is part of common knowledge (and common knowledge only becomes common because a lot of people talk about it to a lot of other people).

The same goes for the claims made by Sparrow et al. The strategies used to maintain access to factual knowledge have changed with the technology available. But they didn’t just change; people have been talking about the change. “Just Google it” is a part of daily conversation. In his podcasts, Leo Laporte has often talked about how his approach to remembering has changed with the advent of Google. One early strategy for remembering websites was the bookmark. People built significant collections of bookmarks, not unlike the Rolodexes of old. But about five or so years ago Google got a lot better at finding the right sites, so bookmarks went away. Personally, now that Chrome syncs bookmarks so seamlessly, I’ve started using them again. Wow, a change in technology facilitates a change in strategy. Sparrow et al. should do some research on this. Since I started using the Internet when it was still spelled with a capital “I”, I still remember the URLs of key websites: Google, Yahoo, Gmail, BBC, my own, etc. But there are people who don’t. I have personally watched the highly intelligent CEO of a company type “Google” into the Bing search box in Internet Explorer. And a few years ago, after a university changed its portal, I was soothing an angry professor who complained that the link to Google had been removed from the page that automatically came up on his computer. He had never learned how to get there any other way because he didn’t need to. Now he does. We acquire strategies to deal with information as we need them.

Before the availability of writing (and even after), there was a whole array of strategies for remembering things. These were part of the cultural conversation as much as the internet is today. Some of these strategies became part of religious ritual. Some are part of a trickster’s arsenal – Joshua Foer describes a few in Moonwalking with Einstein. Many belong to the “study skills” many people talk about.

All that Sparrow et al. demonstrated is that when some of these strategies are deployed, there is a small effect on recall. This is not a bad thing to know, but it is not in any way worth over 600 media stories. To evaluate this much-reduced claim we would have to carefully examine their research methodology and its underlying assumptions, which is not what this post is about. It is about the mistreatment of research results by media-hungry academics.

I don’t begrudge Sparrow et al. their 15 minutes of fame. I’m not surprised, dismayed or even disappointed at the link greed of the journalistic herd who fell over themselves to uncritically spread this research fluff. Also, many of the actual articles were quite balanced about the findings, but how much of that balance will survive a mendaciously bombastic headline is anybody’s guess. So all in all it’s business as usual in the popularization of “science” in the “media”.

Bohannon, J. (2011). Searching for the Google Effect on People’s Memory. Science, 333(6040), 277. DOI: 10.1126/science.333.6040.277

Sparrow, B., Liu, J., & Wegner, D. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science. DOI: 10.1126/science.1207745


The natural logistics of life: The Internet really changes almost nothing


This is a post that has been germinating for a long time. But it was most immediately inspired by Marshall Poe‘s article claiming that “The Internet Changes Nothing“. And as it turns out, I mostly agree.

OK, this may sound a bit paradoxical. Twelve years ago, when I submitted my first column for publication, I delivered the text to my editor on a diskette. Now I don’t even have an editor (or at least not for this kind of writing). I just click a button and my text is published. But! If my server logs are to be trusted, it will be read by tens or at best hundreds of people over its lifetime. That’s more than if I’d just written some notes to myself or published it in an academic journal, but much less than if I published it in a national daily with a readership of hundreds of thousands. Not all of them would read what I write, but more would than do on this blog.
So while the democratisation of publishing has worked for Kos, Huffington and many others, many more blogs languish in obscurity. I can say anything I want, but my voice matters little in the cacophony.

In terms of addressing an audience and having a voice, the internet has done little for most people. This is not because not enough people have enough to say but because there is only so much content the world can consume. There is a much longer tail trailing behind Clay Shirky’s long tail: the tail of five-post, zero-comment blogs and YouTube videos with 15 views. Even millions of typewriter-equipped monkeys with infinities of time can’t get to them all. Plus, it’s hard to predict what will be popular (although educated guesses can produce results in aggregate). Years ago I took a short clip with my stills camera of a blacksmith friend of mine making a candle holder. It has had 30,000 views on YouTube. Why, I don’t know. There’s nothing particularly exciting about it, but there must be some sort of a long tail longing after it. None of the videos I hoped would take off did. This is the experience of many, if not most. Most attempts at communities fail because the people starting them don’t realize how hard it is to nurture them to self-sustainability. I experienced this with my first site, Bohemica.com. It got off to a really good start, but since it was never my primary focus, the community kind of dissipated after a site redesign that was intended to foster it.

So just in terms of the complete democratization of expression, the internet has done less for most people than it may appear. But how about the speed of communication? I’m getting ready to do an interview with someone in the US, record it, transcribe it and translate it – all within a few days. The internet (or more accurately Skype) makes the calling cheap, the recording and transcription are made much quicker by tools I didn’t have access to even in the early 2000s when I was doing interviews, and of course I can get the finished product to my editor in minutes via email. But what hasn’t changed is the process. The interview, transcription and translation take pretty much the same amount of time. The work of agreeing with the editor on the parameters of the interview and arranging it with the interviewee takes pretty much as long as before. As does preparation for the interview. The only difference is the speed and ease of the transport of information from me to its target and of information to me. It’s faster to get to the research subject – but the actual research still takes about the same amount of time, limited by the speed of my reading and the speed of my mind.

A chain is only as strong as its weakest link. And as long as humans are a part of the interface in a communication chain, the communication will happen at a human speed. I remember sitting over a printout of an obscure 1848 article on education from JSTOR with an academic who started doing research in the 1970s, reminiscing how in the old days he would have had to get on the train to London to get a thing like this from the British Library, or at least arrange a protracted interlibrary loan. On reflection, this is not as radical a change as it may seem. Sure, the information took longer to get here. But people before the internet didn’t just sit around waiting for it. They had other stuff to read (there’s always more stuff to read than time) and writing to get on with in the meantime. I don’t remember anyone claiming that modern scholarship is better than the scholarship of the 1950s because we can get information faster. I’m as much in awe of some of the accomplishments of the scholars of the 1930s as of people doing research now. And just as disdainful of others from any period. When reading a piece of scholarly work, I never care about the logistics of the flow of information that was necessary for the work to be completed (unless, of course, it impinges on the methodology – and modern scholars are just as likely to take preposterous shortcuts as ancient ones). During the recent Darwin frenzy, we heard a lot about how he was the communication hub of his time. He was constantly sending and receiving letters. Today he’d have Twitter and a blog. Would he somehow achieve more? No, he’d still have to read all those research reports and piddle about with his worms. And it’s just as likely he’d miss that famous letter from Brno.

Of course, another fallacy we like to commit is assuming that communication in the past was simply communication today minus the internet (or the telephone, or name your invention). But that’s nonsense. I always like to remind people that “You’ve Got Mail”, in which Tom Hanks and Meg Ryan meet and fall in love online, is a remake of a 1940s film in which the protagonists send each other letters. But those letters often arrived the same day (particularly within the same city). There were many more messenger services, pneumatic tubes, and a reliable postal service. As the internet takes over the burden of information transmission, these are either disappearing or deteriorating, but that doesn’t mean that’s the state they were in when they were the chief means of information transmission. Before there were photocopiers and faxes, there were copyists and messengers (and both were pretty damn fast). Who even sends faxes now? We like to claim we get more done with the internet, but take just one step back and this claim loses much of its appeal. Sure, there are things we can do now that we couldn’t do before, like attend a virtual conference or a webinar. That’s true and it’s really great. But what would the us of the 1980s have done? No doubt something very similar, like buying video tapes of lectures or attending Open Universities. And the us of the 1960s? Correspondence courses and pirate radio stations. We would have had far less choice, but our human endeavor would have been roughly the same. The us of the 1930s, 1730s or 330s? That’s a more interesting question, but nobody’s claiming that the internet changed the us of those times. We mostly think of the internet as changing the human condition compared to the 1960s or 1980s. And there the technology changes have far outstripped the changes in human activity.

If it’s not true that the internet has enabled us to get things done in a qualitatively different manner on a personal level, it’s even less true that it has made a difference at the level of society. There are simply so many things involved, and they take so much time, because humans and human institutions are involved. Let’s take the “Velvet Revolution” of 1989, in which I was an eager if extremely marginal participant. On Friday, November 17, a bunch of protesters got roughed up; on November 27 a general strike was held; and on December 10 the president resigned. In Egypt, the demonstrations started on January 25, lots of stuff happened, and on February 11 the president resigned. The Egyptians have the Czechs beat in demonstration-to-resignation time by six days (17 vs 23). And theirs was the “Twitter” revolution. We didn’t even have mobile phones. Actually, we mostly didn’t have phones at all. Is that what all this new global infrastructure has gotten us? Six days off on the toppling of a dictator? Of course not. Twitter made no difference to what was happening in Egypt, at all, when compared to other revolutions. If anything, Al Jazeera played a bigger role. But on the ground, most people found out about things by being told by someone next to them. Just like we did. We even managed to bring the international media up to speed pretty quickly, which could be argued is the main thing Twitter has done in the “Arab Spring” (hey, another thing the Czechs did, and failed at).
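The day counts above are nothing more than date arithmetic; here is a quick sketch for anyone who wants to check them, using the dates already mentioned in this paragraph:

```python
from datetime import date

# Days from the first big demonstration to the president's resignation.
velvet = (date(1989, 12, 10) - date(1989, 11, 17)).days  # Prague 1989: 23 days
egypt = (date(2011, 2, 11) - date(2011, 1, 25)).days     # Cairo 2011: 17 days
print(velvet, egypt, velvet - egypt)                      # 23 17 6
```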

Malcolm Gladwell got a lot of criticism for pointing out the same thing. But he’s absolutely right:

“high risk” social activism requires deep roots and strong ties http://www.newyorker.com/online/blogs/newsdesk/2011/02/does-egypt-need-twitter.html

And while these ties can be established and maintained purely virtually, it takes a lot more than a few tweets to get people moving. Adam Weinstein adds to Gladwell’s example:

Anyone who lived through 1989 or the civil rights era or 1967 or 1956 knows that media technology is not a motive force for civil disobedience. Arguing otherwise is not just silly; it’s a distraction from the real human forces at play here.
http://motherjones.com/mojo/2011/02/malcolm-gladwell-tackles-egypt-twitter

Revolutions simply take their time. On paper, the Russian October Revolution of 1917 took just a day to topple the regime (as did so many others). But there were a bunch of unsuccessful revolutions prior to that and, of course, a bloody civil war lasting for years afterwards. To fully institutionalize its aims, the Russian revolution could be said to have taken decades and millions of dead. Even in ancient times, things sometimes moved very quickly (and always more messily than we can retell the story). The point about revolutions and wars is that they don’t move at the speed of information but at the speed of a fast-walking revolutionary or soldier. Ultimately, someone has to sit in the seat where the buck stops, and they can only get there so fast even with jets, helicopters and fast cars. Such are the natural logistics of human communal life.

This doesn’t mean that the speed or manner of communication has no implications where logistics are concerned. But their impact is surprisingly small and easily absorbed by the larger concerns. In The Victorian Internet, Tom Standage describes how warship manifests could no longer be published in The Times during the Crimean War because they could be telegraphed to the enemy faster than the ships could get there (whereas in the past, a spy’s message would be no faster than the actual ships). Also, betting and other financial establishments had to make adjustments so that the speed of information didn’t get in the way of making a profit. But if we compare the 1929 financial crisis with the one in 2008, we see that the speed of communication made little difference to the overall medium-term shape of the economy. Even though in 2008 we were getting up-to-the-second information about the falling banking houses, the key decisions about support or otherwise took about the same amount of time (days). Sure, some stock trading is now done in fractions of a second by computers because humans simply aren’t fast enough. But the economy still moves at about the same pace – the pace of lots and lots of humans shuffling about through their lives.

As I said at the start, although this post has been brewing in me for a while, it was most immediately inspired by that of Marshall Poe (of New Books in History), published about six months ago. What he said has become no less relevant with the passage of time.

Think for a moment about what you do on the Internet. Not what you could do, but what you actually do. You email people you know. In an effort to broaden your horizons, you could send email to strangers in, say, China, but you don’t. You read the news. You could read newspapers from distant lands so as to broaden your horizons, but you usually don’t. You watch videos. There are a lot of high-minded educational videos available, but you probably prefer the ones featuring, say, snoring cats. You buy things. Every store in the world has a website, so you could buy all manner of exotic goods. As a rule, however, you buy the things you have always bought from the people who have always sold them.

This is easy to forget. We call online shopping and food delivery a great achievement. But having shopping delivered was always an option in the past (and much more common than now, when delivery boys are more expensive). Amazon is amazing, but it is still just a glorified catalog.

But there are revolutionary inventions that nobody even notices. What about the invention of the space between words? None of the ancients bothered to put spaces between words or, in general, to read silently. It has been estimated that putting spaces between words not only allowed for silent reading (a highly suspicious activity until the 1700s) but also sped up reading by about 30%. Talk about a revolution! I’m a bit skeptical about the 30% figure, but still, nobody talks about it. We think of audio books as a post-Edison innovation, but in fact all reading was partly listening not too long ago. Another forgotten invention is the blackboard, which made large-volume dissemination of information much more feasible through a simple reconfiguration of space and attention between pupil and teacher.


David Weinberger recently wrote what was essentially a poem about hypertext (a buzzword I haven’t heard for a while):

The old institutions were more fragile than we let ourselves believe. They were fragile because they made the world small. A bigger truth burst them. The world is more like a messy, inconsistent, ever-changing web than like a curated set of careful writings. Truth burst the world made of atoms.

Yes, there is infinite space on the Web for lies. Nevertheless, the Web’s architecture is a better reflection of our human architecture. We embraced as if it were always true, and as if we had known it all along, because it is and we did.
http://www.hyperorg.com/blogger/2011/05/01/a-big-question

It is remarkable how right and wrong he can be at the same time. Yes, the web is more of a replication of the human architecture. It has some notable strengths (lack of geographic limitation, speed of delivery of information) and weaknesses (no internal methods for the exchange of tangible goods, relatively limited methods for synchronous face-to-face communication). I’d even go as far as calling the internet “computer-assisted humanity”. But that just means that nothing about human organization online is a radical transformation of humanity offline.

What on Earth makes Weinberger think that the existing institutions were fragile? If anything, they have proved extremely robust. I find The Cluetrain Manifesto extremely inspiring and in many ways moving. But I find The Communist Manifesto equally profound without wanting to live in a world governed by it. The Communist Manifesto got the description of the world as it is perfectly right: pretty much every other paragraph in it applies as much today as it did then. But the predictions offered in the remaining paragraphs can cause nothing but laughter today. The Cluetrain Manifesto gave the same kind of expression to the frustration with the Dilbert world of big corporations and asked for our humanity back. They were absolutely right.

Markets can be looked at as conversations, and the internet can facilitate certain kinds of conversation. But they were wrong in assuming that there is just one kind of conversation. There are all sorts of group, symbolic and ritualized conversations that make the world of humans go around. And they have never been limited to local markets. In practical terms, I can now complain about a company on a blog or in a tweet. And these can be viewed by others. But since there’s an Xsuckx.com website for pretty much every major brand, the incentive for companies to respond is relatively small. I have actually received some responses to complaints from companies on Twitter. But only once did it lead to the resolution of the problem. Twitter is still a domain of “the elite”, so it pays companies to appease people there. However, should it reach the level of ubiquitous obscurity that many web pages have, it will become even less appealing, given the lack of permanence of tweets.

The problem is that large companies with large numbers of customers can only scale if they keep their interaction with those customers at certain levels. It was always thus and will always remain so. Not because of intrinsic attitudes but because of the configurational limitations of time and human attention. Even the industrially oppressed call-center operator can only deal with about 10 customers an hour. So you have to work some 80/20 cost checks into customer support. Most of any company’s interaction with its customers will be one-to-many and not one-on-one. (And this, incidentally, holds for communications about the company by customers.)

There’s a genre of conversation in the business and IT communities that focuses on why X is successful: Ford of the 1920s, IBM of the 1960s, Apple of the 2000s. The constant in these conversations is the wilful effort of projecting the current conventional wisdom about business practices onto what companies do and used to do. This often requires significant reimagining of the present and the past. Leo Laporte and Paul Thurrott recently had a conversation (http://twit.tv/ww207) in which they were convinced that companies that interact and engage with their customers will be successful. But why then, one of them asks, is Microsoft, whose employees blog all the time, not more successful than Apple, which couldn’t be more tight-lipped about its processes and whose attitude to customers is very much take it or leave it? Maybe it’s the Apple Store, one of them comments. That must be it. That engages the crap out of Apple’s customers. But neither of them asked: what is the problem with traditional stores, then? What is the point of the internet? The problem is that, as with any metaphoric projection, the customer-engagement metaphor is only partial. It’s more a way for us to grapple with processes that are fairly stable at the macro institutional level (which is the one I’m addressing here), but basically chaotic at the level of individual companies or even industries.

So I agree with Marshall Poe about the amount of transformation going on:

As for transformative, the evidence is thin. The basic institutions of modern society in the developed world—representative democracy, regulated capitalism, the welfare net, cultural liberalism—have not changed much since the introduction of the Internet. The big picture now looks a lot like the big picture then.

Based on my points above, I would go even further and argue that the basic institutions have not changed at all. Sure, foreign ministries now give advisories online, taxes can be paid electronically, and there are new agencies that regulate online communication (ICANN) as well as old ones with new responsibilities. But as we read the daily news, can we perceive any new realities taking place? New political arrangements based on this new and wonderful thing called the internet? No. If you read a good spy thriller from the 80s and one taking place now, you can hardly tell the difference. They may have been using payphones instead of the always-on mobile smart devices we have now, but the events still unfold in pretty much the same way: people go from place to place and do things.

Writing, print, and electronic communications—the three major media that preceded the Internet—did not change the big picture very much. Rather, they were brought into being by major historical trends that were already well underway, they amplified things that were already going on.

Exactly! If you read about the adventures of Sinuhe, it doesn’t seem that different from something written by Karl May or Tom Clancy. Things were happening as they were, and whatever technology was available was used as well as possible. Remember that the telephone was originally envisioned as a way of attending the opera – people calling in to a performance instead of attending live.

As a result, many things that happened could not have happened in exactly the same way without the tools of the age being there. The 2001 portion of the war in Afghanistan certainly would have looked different without precision bombing. But now, in 2011, it seems to be playing out pretty much along the same lines experienced by the Brits and the Soviets. Meaning: it’s going badly.

The role of TV imagery in the ending of the Vietnam War is often remarked on. But that’s just coincidental. There have been plenty of unpopular wars that were ended because the population refused to support them and they were going nowhere. Long before the “free press”, the First Punic War was getting a bad rep at home. Sure, the government could have done a better job of lying to the press and its population, but that’s hard to do when you have a draft. It didn’t work for Ramses II when he got his ass handed to him at Kadesh, and it didn’t ultimately work for the Soviet misadventure in Afghanistan. The impact of the TV images can easily be overestimated. The My Lai Massacre happened in 1968, when the war was at about its mid-point. It still took two presidential elections and one resignation before it was really over. The imagery played a role, but if the government had wanted, it could have kept the war going.

Communications tools are not “media” in the sense we normally use the word. A stylus is not a scriptorium, movable type is not a publishing industry, and a wireless set is not a radio network. In order for media technologies to become full-fledged media, they need to respond to some big-picture demand.

It is so easy to confuse the technology with the message. On brief reflection, the McLuhan quote we all keep repeating like sheep is really stupid. The medium is the medium and the message is the message. Sometimes they are so firmly paired we can’t tell them apart; sometimes they have nothing in common. What is the medium of this message? HTML, the browser, your screen, a blog post, the internet, TCP/IP, Ethernet? They’re all involved in the transmission. We can choose whether we pay attention to some of them. If I’d posted somebody a parchment with this text on it, that would certainly add to the message or become a part of it. But it still wouldn’t BE the message! Lots of artists, like Apollinaire with his calligrams, actually tried to blend the message and the medium in all sorts of interesting ways. But it was hard work. Leo Laporte (whose podcasts I enjoy listening to greatly) spent a lot of time trying to displace “podcast” with “netcast” to avoid an association with the medium. He claimed that his shows are not ‘podcasts’ but ‘shows’, i.e. real content. Of course, he somehow missed the fact that we don’t listen to programs but to the radio, and don’t view drama but rather watch TV. The modes of transmission have always been associated with the message – including the word “show” – until they weren’t. We don’t mean anything special now when we say we ‘watch TV’.

Of course, the mode of transmission has changed how the “story” is told. Every new medium has first tried to emulate the one it was replacing but ultimately found its own way of expression. But this is no different from other changes in style. The impressionists were still using the same kinds of paints and canvases, and modernist writers the same kind of inks and books. Every message exists in a huge amount of context, and we can choose which of it we pay attention to at any one time. Sometimes the medium becomes a part of the context, sometimes it’s something else. Get over it!

There are some things Marshall Poe says that I don’t agree with. I don’t think we need to reduce everything to commerce (as he does – perhaps having imbibed too much Marxist historiography). But most importantly, I don’t agree when he says that the internet is mature in the same way that TV was mature in the 1970s. Technologies take different amounts of time to mature as widespread consumer utilities. It is always on the order of decades, sometimes centuries, but there is no set trajectory. TV took less time than cars, planes took longer than TV, cars took longer than the internet. (All depending on how we define mature – I’m mostly talking about wide consumer use, i.e. when only oddballs don’t use it and access is not disproportionately limited by socioeconomic status.) The problem with the internet is that there are still enough people who don’t use it and/or who can’t access it. In the 1970s, the majority had TVs or radios, which were pretty much equivalent as a means of access to information and entertainment. TV was everywhere, but as late as the 1980s the BBC produced radio versions of its popular TV shows (Dad’s Army, All Gas and Gaiters, etc.). The radio performance of Star Wars was a pretty big deal in the mid-80s.

There is no such alternative access to the Internet. Sure, there are TV shows that play YouTube clips and infomercials that let you buy things. But it’s not the experience of the internet – more like a report on what’s on the Internet.

Even people who did not have TVs in the 1970s (both globally and nationally) could readily understand everything about their operation (later jokes about programming VCRs aside). You pushed a button and all there was to TV was there. Nothing was hiding. Nothing was trying to ambush you. People had to get used to the idiom of TV, learn to trust some things and not others (like advertising). But the learning curve was flat.

The internet is more like cars. When you get in one, you need to learn lots of things, from the rules of manipulation to the rules of the road. Not just how to deal with the machinery but also how to deal with others using the same machinery. The early cars were a tinkerer’s device. You had to know a lot about cars to use them. And then at some point, you just got in and drove. At the moment, you still have to know a lot about the internet to use it: search engines, Facebook, the rules of Twitter, scams, viruses. That intimidates a lot of people. But less so now than 10 years ago. Navigating the internet needs to become as socially commonplace as navigating traffic in the street. It’s very close. But we’re not quite there yet on the mass level.

Nor do I believe that the business models on the Internet are as settled as they were with TV in the 1970s. Least of all the advertising model. Amazon’s, Google’s and Apple’s models are done – subject to normal developments. But online media are still struggling as are online services.

We will also see significant changes as access to the internet goes mobile and connection speeds increase. There are still some possible transformations hiding there – mostly around video delivery and hyper-local services. I’d give it another 10 years (20 globally). By then, the use of the internet will be a part of the everyday idiom in a way that it’s still not quite now (although it is more than in 2001). But I don’t think the progress will go unchecked. The prospect of flying cars ran into severe limitations of technology and humanity. After 2021, I would expect the internet to continue changing under the hood (just like cars have since the 1960s) but not much in the way of its human interface (just like cars since the 1960s).

There are still many things that need working out. The role of social media (like YouTube) and social networking (like Facebook): will they put a layer on top of the internet or just continue being a place on the internet? And what business models other than advertising and in-game purchases will emerge? Maybe none. But I suspect that the internet has about a decade of maturing left before it reaches the shape in which it will still be recognisable in 2111. Today, cars from the 1930s don’t quite look like cars, but those from the 1960s do. In this respect, I’d say the internet is somewhere in the 1940s or 50s – in usability, ubiquity, accessibility and its overall shape.

The most worrying thing about the future of the internet is a potential fight over the online commons. One possible development is that significant parts of the online space will become proprietary, with no rights of way. This is not just about net neutrality but a possible consequence of the lack of it. It is possible that in the future so many people will only access the online space to take advantage of proprietary services tied to their connection provider that they may not even notice when at first some and later most non-proprietary portions of the internet are no longer accessible. It feels almost unimaginable now, but I’m sure people in 16th-century East Anglia never thought their grazing commons would disappear (http://www.eh-resources.org/podcast/podcast2010.html). I’m not suggesting that this is a necessary development, only that it is a configurational possibility.

As I’m writing this, a tweet just popped up on my screen mentioning another shock in Almaty, a place where I spent a chunk of time and where a friend of mine is about to take up a two-year post. I switch over to Google and find no reports of destruction. If not for Twitter, I might not have even heard about it. I go on Twitter and see people joking about it in Russian. I do my own journalism for a few minutes, gathering sources. How could I still claim that the internet changes nothing? Well, I did say “almost”. Actually, for many individuals the internet changes everything. They (like me) get to do jobs they wouldn’t, find out things they couldn’t and talk to people they shouldn’t. But it doesn’t change (or even augment) our basic flesh-bound humanity. Sure, I know about something that happened somewhere I care about that I otherwise wouldn’t. But there’s nothing more I can do about it. I did my own news gathering about as fast as it would have taken to listen to a BBC report on it (I’ve never had a TV and now only listen to live radio in the mornings). I can see some scenarios where the speed would be beneficial, but when the speed is not possible we adjust our expectations. I first visited Kazakhstan in 1995, and although I had access to company email, my mother knew what was happening at the speed of a postcard. And just the year before, during my visit to Russia, I got to send a few telegrams. You work with what you have.

All the same, the internet has changed the direction my life has taken since about 1998. It allowed me to fulfil my childhood dream of sailing on the Norfolk Broads, and just yesterday it helped me learn a great new blues lick on the guitar. It gives me reading materials and a place to share my writing, and brings me closer to people I otherwise wouldn’t have heard of. It gives me podcasts like the amazing New Books in History or the China History Podcast! I love the internet! But when I think about my life before the internet, I don’t feel it was radically different. I can point to a lot of individual differences, but I don’t have a sense of a pre-internet me and a post-internet me. And equally, I don’t think there will be a pre-internet and a post-internet humanity. One of the markers of the industrial revolution is said to be its radical transformation of the shape of things: so much so that a person of 1750 would still have recognized the shape of the country of 1500, but a person of 1850 would no longer have seen it the same way. I wonder if this is a bit too simplistic. I think we need to bring more rigor to the investigation of human contextual identity and embeddedness in the environment. But that is outside the scope of this essay.

It is too tempting to use technologies as a metaphor for expressing our aspirations. We do (and have always done) this through poetry, polemic, and prose. Our depictions of what we imagine the internet society to be like appear in lengthy essays or chance remarks. They are carried even in tiny words like “now” when judiciously deployed. But sadly, exactly the same aspirations of freedom and universal sisterhood were attached to all the preceding communication technologies as well: print, the telegraph, TV. Our aspirations aren’t new. Our attachment to projecting these aspirations onto the world around us is likewise ancient. Even automated factory production has been hailed by poets as beautiful. And it is. We always live in a future just about to come, with regrets about a past that never was. But our prosaic present seems never to really change who we are. Humans, for better or worse.


When is subtle manipulation of data a flat out lie? Truth about Chinese prisons [UPDATE]


I’ve been on a China kick lately (reading and listening about its history and global position) and a crime and public policy kick (reading and listening to Mark Kleiman). I was struck when I heard Mark say in an interview that the US has more people in jail in absolute terms than China. So I went looking for some data. I found the most comprehensive source of information in the “World Prison Population List” published by the International Centre for Prison Studies at King’s College London. Their top bullet point is alarming:

“More than 9.25 million people are held in penal institutions throughout the world, mostly as pre-trial detainees (remand prisoners) or as sentenced prisoners. Almost half of these are in the United States (2.19m), China (1.55m plus pre-trial detainees and prisoners in ‘administrative detention’) or Russia (0.87m).”

But I was surprised by China. The US has 738 people in prison per 100,000 of population, Russia 611 and China 111. England and Wales, at 158, has more than China. In fact, more than half of the countries of the world have a higher rate than China. I ran some numbers in the spreadsheet below on what that means with respect to the total population of each country (throwing in the UK, India and Brazil for good measure):
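The original spreadsheet isn’t reproduced here, but the calculation it performs is simple enough to sketch: compare each country’s share of the world’s prisoners with its share of the world’s population. The prisoner totals and per-100,000 rates below are the ones cited in this post; the world population of roughly 6.5 billion and the England and Wales prisoner total of roughly 80,000 are my own assumptions.

```python
# Rough sketch of the spreadsheet calculation: each country's share of the
# world's prisoners vs its share of the world's population. Prisoner totals
# and rates are those cited from the World Prison Population List; the world
# population (~6.5bn) and the England & Wales total (~80,000) are assumptions.

WORLD_PRISONERS = 9_250_000
WORLD_POPULATION = 6_500_000_000  # assumed, circa mid-2000s

# country: (prisoners, prisoners per 100,000 of national population)
countries = {
    "US": (2_190_000, 738),
    "Russia": (870_000, 611),
    "China": (1_550_000, 111),
    "England & Wales": (80_000, 158),  # prisoner total is approximate
}

for name, (prisoners, rate) in countries.items():
    population = prisoners / rate * 100_000        # back out national population
    prison_share = prisoners / WORLD_PRISONERS     # share of the world's prisoners
    pop_share = population / WORLD_POPULATION      # share of the world's population
    print(f"{name}: {prison_share:.1%} of prisoners, {pop_share:.1%} of people, "
          f"ratio {prison_share / pop_share:.1f}")  # ratio > 1 = disproportionate
```

On these assumptions, the US comes out at roughly five times its population share, while China and England and Wales both sit near parity – the pattern described below.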

And the results could not be clearer. China is not in any way comparable to Russia and the US when it comes to prison population. In fact, the UK is a worse offender (pun intended) when it comes to owning a disproportionate chunk of the global prison population. It is just under parity. India is by far the most lenient when it comes to incarceration, with only 3.5% of the world’s prison population to 16% of the world’s population. The Centre provides no estimate of the pre-trial and administrative detainees in China. But even if it were another half a million people, it would still only bring China to parity. To be as disproportionately prison-happy as the US, China would have to arrest more than 2.5 times as many people as it has in jail right now.

But the question arises: why did the Centre for Prison Studies choose to include the US, Russia and China in the same list? My suggestion is prejudice combined with number magic. The authors were trying to come up with a way to say that half of the world’s prisoners are held in a small number of countries. And China is “known” for its human rights record, so it must be OK to list it there if it will bump up the numbers. But in effect, they managed to lie about China by saying something numerically true. They didn’t say anything flat-out incorrect, but they created an implicit category which clearly labels China as a bad country. This is a silly way to affirm Western supremacy where there is none.

There are lots of other things that could be estimated based on these numbers. I couldn’t find a clear estimate of how many people were sent to prison for things they didn’t do (we can’t just extrapolate from death row exonerations), but if we set it at about 0.5%, we get that there may be more unjustly imprisoned people in the US (roughly 11,000) than there are political prisoners in China (estimated at about 5,000) – or slightly fewer, if we assume the same rate of miscarriage of justice across the rest of China’s prison population. This is, of course, too much guesswork for drawing any firm conclusions, but it certainly puts the numbers in some perspective.
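For anyone who wants to play with the guessed rate, the arithmetic behind that comparison is trivial (the 0.5% figure is, as said, pure guesswork on my part):

```python
# Back-of-the-envelope version of the comparison above. The prisoner totals
# and the ~5,000 political-prisoner estimate are the ones cited in this post;
# the 0.5% wrongful-imprisonment rate is the post's own guess.
US_PRISONERS = 2_190_000
CHINA_PRISONERS = 1_550_000
CHINA_POLITICAL = 5_000
WRONGFUL_RATE = 0.005

us_wrongful = US_PRISONERS * WRONGFUL_RATE        # ~10,950
china_wrongful = CHINA_PRISONERS * WRONGFUL_RATE  # ~7,750
print(us_wrongful > CHINA_POLITICAL)                   # True: more than China's political prisoners
print(us_wrongful > CHINA_POLITICAL + china_wrongful)  # False: slightly fewer than China's combined total
```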

UPDATE: I have actually interviewed Mark Kleiman (it was a long time ago, but I only now remembered to update this post) and his estimate is that 3–4% of people in US prisons are there for something they did not do (often because of police misbehavior). Now, it is important to qualify this by saying that most of these people have done other things for which they deserve to go to prison but for which they were not caught, so the miscarriage of justice is more technical than moral. But it shows the massive holes in the US’s vaunted “rule of law”. It is there, no doubt, when it comes to settling middle-class property and other business disputes (and by all accounts this would be a very important thing to have in many countries in the Middle East and in China). But it is not evenly distributed. I think it would not be completely outrageous to say that, for many of its citizens, the US is in effect a police state. Just as it could be said that, for many of China’s citizens, China is not!
