Category Archives: Metaphor

Character Assassination through Metaphoric Pomposity: When one metaphor is not enough


George Lakoff is known for saying that “metaphors can kill” and he’s not wrong. But in that, metaphors are no different from any other language. The simple amoral imperative “Kill!” will do the job just as nicely. Nor are metaphors any better or worse at obfuscating than any other type of language. But they are very good at their primary purpose, which is making complex connections between domains.

Metaphors can create very powerful connections where none existed before. And we are just as often seduced by that power as inspired to new heights of creativity. We don’t really have a choice. Metaphoric thinking is in our DNA (itself a metaphor). But just like with DNA, context is important, and sometimes metaphors work for us and sometimes they work against us. The more powerful they are, the more cautious we need to be. When faced with powerful metaphors we should always look for alternatives and we should also explore the limits of the metaphors and the connections they make. We need to keep in mind that nothing IS anything else but everything is LIKE something else.

I was reminded of this recently when listening to an LSE lecture by the journalist Andrew Blum, who was promoting his book “Tubes: Behind the Scenes at the Internet”. The lecture was reasonably interesting, although he tried to make the subject seem more important than it perhaps was through judicious reliance on the imagery of covertness.

But I was particularly struck by the last example, in which he compared Facebook’s and Google’s data centers in Colorado. Facebook’s center was open, architecturally modern and a part of the local community. Facebook also shared the designs of the center with the global community and was happy to show Blum around. Google’s center was closed, ugly and opaque. Google viewed its design as part of its competitive advantage and, most importantly, didn’t let Blum past the parking lot.

From this, Blum drew far-reaching conclusions, which he amplified by merely implying them. If architecture is an indication of intent, he suggested, then we should question what Google’s ugly, hidden intent is as opposed to Facebook’s shining, open intent. When answering a question, he later cited prosecutors in New England and in Germany as compounding evidence – people who are also frustrated with Google’s secrecy – only reluctantly admitting that Google had invited him to speak at its Authors Speak program.

Now, Blum may have a point regarding the secrecy surrounding that Google data center: there’s probably no great competitive advantage in its design and no abiding security reason for not showing its insides to a journalist. But using this comparison to imply anything about the nature of Facebook or Google is just an example of typical journalistic dishonesty. Blum is not lying to us. He is lying to himself. I’m sure he convinced himself that since he was clever enough to come up with such a beautiful analogy, it must be true.

The problem is that pretty much anything can be seen through multiple analogies. And any one of those analogies can be stopped at any point or be stretched out far and wide. A good metaphor hacker will always seek out an alternative analogy and explore the limits of the domain mapping of the dominant one. In this case, not much work is needed to uncover what a pompous idiot Blum is being.

First, does this “facilities reflect attitudes” analogy extend to what we know about the two companies in other spheres? And here the answer is no. Google lets you liberate your data; Facebook does not. Google lets you opt out of many more things than Facebook. Google sponsors many open source projects; Facebook is more closed source (even though they do contribute heavily to some key projects). When Facebook acquires a company, it often just shuts it down, leaving customers high and dry; Google closes projects too, but it has repeatedly released the source code of those projects to the community. Now, is Google the perfect open company? Hardly. But Facebook, with its interest in keeping people in its silo, can hardly be called a shining beacon of openness either. It might be at best a draw (if we can even make a comparison) but I’d certainly give Google far more credit in the openness department. The analogy simply fails when exposed to current knowledge. I can only assume that Blum was so happy to have come up with it that he wilfully ignored the evidence.

But can we come up with other analogies? Yes. How about the fact that the worst dictatorships in history have come up with grand, idealistic architectural designs? Designs and structures that spoke of freedom, beautiful futures and the love of the people for their leaders. Given that we know all that, why would we ever trust a design to indicate anything about the body that commissioned it? Again, I can only assume that Blum was seduced by his own cleverness.

Any honest exploration of this metaphor would lead us to abandon it. It was not wrong to raise it; in the world of cognition, anything is fair. But having looked at both the limits of the metaphor and alternative domain mappings, it’s pretty obvious that it’s not doing us any good. It supports a biased political agenda.

The moral of the story is don’t trust one-metaphor journalists (and most journalists are one-metaphor drones). They might have some of the facts right but they’re almost certainly leaving out large amounts of relevant information in pursuit of their own figurative hedonism.

Disclaimer: I have to admit, I’m rather a fan of Google’s approach to many things and a user of many of their services. However, I have also been critical of Google on many occasions and have come to be wary of many of their practices. I don’t mind Facebook the company, but I hate that it is becoming the new AOL. Nevertheless, I use many of Facebook’s services. So there.

RaAM 9 Abstract: Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction


Given how long I’ve been studying metaphor (at least since 1991, when I first encountered Lakoff and Johnson’s work, and full on since 2000), it is amazing that I have yet to attend a RaAM (Researching and Applying Metaphor) conference. I had an abstract accepted to one of the previous RaAMs but couldn’t go. This time, I’ve had an abstract accepted and wild horses won’t keep me away (even though it is expensive, since no one is sponsoring my going). The abstract that got accepted is about a small piece of research that I conceived back in 2004, wrote up in a blog post in 2006, was supposed to talk about at a conference in 2011 and finally will get to present this July at RaAM 9.

Unlike most academic endeavours, this one needs to come with a parental warning. The materials described contain profane sexual and scatological imagery as employed for the purposes of satire. But I think it makes a really important point that I don’t see people making as a matter of course in the metaphor studies literature. I argue that metaphors can be incredibly powerful and seductive but that they are also routinely deconstructed and negotiated. They are not something that just happens to us. They are opportunistic and random just as much as they are systematic and fundamental to our cognition. Much of metaphor studies is still fighting the battle against the view that metaphors are mere peripheral adornments on the literal. And to be sure, the “just a metaphor” label is still to be seen in popular discourse today. But it has now been over 40 years since this fight was intellectually won. So we need to focus on the broader questions about the complexities of the role metaphor plays in social cognition. And my contribution to RaAM hopes to point in that direction.

 

Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction

In this contribution, I propose to investigate metaphoric cognition as an extended discursive and social phenomenon that is the cornerstone of our ability to understand and negotiate issues of public importance. Since Lakoff and Johnson’s groundbreaking study, research in linguistics, cognitive psychology, as well as discourse studies, has tended to view metaphor as a purely unconscious phenomenon that is outside of a normal speaker’s ability to manipulate. However important this view of metaphor and cognition may be, it tells only a part of the story. An equally important and surprisingly frequent phenomenon is the ability of metaphor to enter into collective (meta)cognition through extended discourse in which acceptable cross-domain mappings are negotiated.
I will provide an example of a particular metaphorical framing and the metacognitive framework it engendered that made it possible for extended discourse to develop. This metaphor, a leitmotif in the ‘Team America’ film satire, mapped the physiological and phraseological properties of taboo body parts onto geopolitical issues of war in such a way that made it possible for participants in the subsequent discourse to simultaneously be seduced by the power of the metaphor and empowered to engage in talk about cognition, text and context as exemplified by statements such as: “It sounds quite weird out of context, but the paragraph about dicks, pussies and assholes was the craziest analogy I’ve ever heard, mainly because it actually made sense.” I will demonstrate how this example is typical rather than aberrant of metaphor in discourse and discuss the limits of a purely cognitive approach to metaphor.
Following Talmy, I will argue that significant elements of metaphoric cognition are available to speakers’ introspection and thus available for public negotiation. However, this does not prevent the sheer power of the metaphor from having an impact on both cognition and discourse. I will argue that as a result of the strength of this one metaphor, the balance of the discussion of this highly satirical film was shifted in support of military interventionism, as evidenced by the subsequent popular commentary. By mapping political and gender concepts onto the basic structural inevitability of human sexual anatomy, reinforced by idiomatic mappings between taboo words and moral concepts, the metaphor makes further negotiation virtually impossible within its own parameters. Thus an individual speaker may be simultaneously seduced and empowered by a particular metaphorical mapping.

Moral Compass Metaphor Points to Surprising Places


I thought the moral compass metaphor had mostly left current political discourse, but it just cropped up – this time pointing from left to right – as David Plouffe accused Mitt Romney of not having one. As I keep repeating, George Lakoff once said “Metaphors can kill.” And the moral compass has certainly done its share of homicidal damage, justifying wars, interventions and unflinching black-and-white resolve in the face of gray circumstances. It is a killer metaphor!

But with a bit of hacking, it is not difficult to subvert it for good. Yes, I’m ready to declare, it is good to have a moral compass, provided you make it more like a “true compass”, to quote Plouffe. The problem is, as I learned during my sailing courses many years ago, that most people don’t understand how compasses actually work.

First, compasses don’t point to “the North”. They point to what is called the Magnetic North, which is quite a ways from the actual North. So if you want to go to the North Pole, you need to make a lot of adjustments to where you’re going. Sound familiar? Kind of like following your convictions. They often lead you to places that are different from where you’re saying you’re going.

Second, the Magnetic North keeps moving. Yes, the difference between where it is and the “actual” North changes from year to year. So you have to adjust your directions to the times you live in! Sound familiar? Being a devout Christian led people to different actions in the 1500s, 1700s and 1900s. Although we keep saying the “North” of faith is the same, the actual needle showing us where to go points in different directions.

Third, the compass is easily distracted by its immediate physical context. The distribution of metals on a boat, for instance, will throw it completely off course. So it needs to be calibrated differently for each individual installation. Sound familiar?

And it’s also worth noting that the south magnetic pole is not the exact polar opposite of the north magnetic pole!
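
For the technically minded, these corrections are just signed arithmetic applied in a fixed order: the compass reading plus the deviation (the boat’s own error) gives a magnetic heading, and adding the variation (the wandering pole’s error) gives a true heading. Here is a minimal sketch in Python; the headings and error values are made-up illustrations, not real chart data.

```python
# A minimal sketch of standard compass correction, with made-up numbers.
# Convention: easterly errors are positive degrees, westerly negative.

def true_heading(compass_reading: float, deviation: float, variation: float) -> float:
    """Convert a raw compass reading to a true heading.

    deviation: error caused by metal around this particular compass
               (each installation is calibrated separately)
    variation: local difference between Magnetic North and true North
               (depends on where, and even when, you are)
    """
    magnetic = compass_reading + deviation  # correct for the boat itself
    true = magnetic + variation             # correct for the wandering pole
    return true % 360                       # keep the result on the 0-360 dial

# A compass reading 90 degrees, on a boat with 2 degrees easterly deviation,
# in a place and year with 4 degrees westerly variation, really points at 88.
print(true_heading(90, deviation=2, variation=-4))  # -> 88
```

Note that none of these corrections can be read off the needle itself: you need published tables and a local calibration, which is rather the point of the hack that follows.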

So what can we learn from this newly hacked moral compass metaphor? Well, nothing terribly useful. Our real ethics and other commitments are always determined by the times we live in and the contexts we find ourselves in. And often we say we’re going one way when we’re actually heading another way. But we already knew that. Everybody knows that! Even people who say it’s not true (the anti-relativists) know that! They are not complete idiots after all, they just pretend to be to avoid making painful decisions!

As so often, we can tell two stories about the change of views by politicians or anybody else.

The first story is of the feckless, unprincipled opportunist who changes her views following the prevailing winds – supported by the image of the weather vane. This person is contrasted with the stalwart who sticks to her principles even as all around her are swayed by the moral fad of the day.

The second story is of the wise (often old) and sage person who can change her views even as all around her persist in their simplistic fundamentalism. Here we have the image of the tree that bends in the wind but does not break. This person is contrasted with the bigot or the zealot who cannot budge even an inch from old prejudices, even though they are obviously wrong.

So which story is true of Romney, Bush and Obama? We don’t know. In every instance, we have to fine-tune our image and very carefully watch out for our tendency to just tell the negative stories about people we don’t like. Whether one story is more convincing than the other depends, like the needle of a compass, on a variety of obvious and non-obvious contexts. The stories are here to guide us and help us make decisions. But we must strive to keep them all in mind at the same time. And this can be painful. They are a little bit like the Necker Cube, the Rubin Vase, the Duck/Rabbit or similar optical illusions. We know all the images are there, but while we’re perceiving one, it is easy to forget the others. So it is uncomfortable. And also not a little bit inconvenient.

Is this kind of metaphorical nuance something we can expect in a time of political competition? It can be. Despite their bad rep, politicians and the media surrounding them can be nuanced. But often they’re not. So instead of nuance, when somebody next trots out the moral compass, whether you like them or not, say: “Oh, you mean you’re a liar, then?” and tell them about the Magnetic North!

 

Post Script: Actually, Plouffe didn’t say Romney didn’t have a moral compass. He said that “you need to have a true compass, and you’ve got to be willing to make tough calls.” So maybe he was talking about a compass adjusted for surrounding metals, whose needle’s advice we follow only after taking into account as much of our current context as we can. A “true compass” like a true friend! I agree with most of the “old Romney” and none of the “new Romney”. And I loved the old Obama, created in the image of our unspoken liberal utopias, and am lukewarm on the actual Obama (as I actually knew I would be), who steers a course pointing to the North of reality rather than the one magnetically attracting our needles. So if it’s that kind of moral compass after all, we’re in good hands!

There’s more to memory than the brain: Psychologists run clever experiments, make trivial claims, take gullible internet by storm


The online media are drawn to any “scientific” claims about the internet’s influence on our nature as humans like flies to a pile of excrement. Sadly, in this metaphor, only the flies are figurative. The latest heap of manure to instigate an annoying buzzing cloud of commentary from Wired to the BBC is an article by Sparrow et al. claiming to show that because there are search engines, we don’t have to remember as much as before. Most notably, if we know that some information can be easily retrieved, we remember where it can be obtained instead of what it is. As Wired reports:

“A study of 46 college students found lower rates of recall on newly-learned facts when students thought those facts were saved on a computer for later recovery.” http://www.wired.co.uk/news/archive/2011-07/15/search-engines-memory

Sparrow et al. designed a bunch of experiments that “prove” this claim. Thus, they holler, the internet changes how we remember. This was echoed by literally hundreds of headlines (Google claims over 600). Here’s a sample:

  • Google Effect: Changes to our Brains
  • Search engines like Google ‘changing the way human memory works’
  • Search engines change how memory works
  • Google Is Destroying Our Memories, Scientists Find
  • It pays to remember, search engines ruining our memory
  • Google rewiring the way we remember, study says
  • Has Google turned your memory to mush?
  • Internet search engines cause poor memory, scientists claim
  • Researchers: Search Engines Supplanting Our Memory
  • Google changing way brain remembers information

Many of these headlines are from “reputable” publications and they can be summarized by three words: Bullshit! Bullshit! Bullshit!

All they had to do was read this part of the abstract to understand that nothing like the stuff they blather about follows from the study:

“The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”

But they were not helped by Science, whose publication of these results is more of a self-serving stunt than a serious attempt to further knowledge. The title of the original paper, “Google Effects on Memory”, is all but designed to generate bat-shit crazy headlines. If the title were to be truthful, it would have to be “Google has no more effect on memory than a paper and pen or a friend.” Even the Science Magazine report on the study, entitled “Searching for the Google Effect on People’s Memory”, concludes that it “doesn’t directly answer that question”. In fact, it says that the internet is filling the role of “transactive memory”, which describes the fact that we rely on people to remember things. Which means it has no impact on our brains at all. It just magnifies the transactive effects already in existence.

Any claim about a special effect of Google on any kind of memory can be debunked in two words: “shopping list”! All Sparrow et al. discovered is that the internet has changed us as much as a stub of a pencil and a grubby piece of paper. Meaning, not at all.

Some headlines cottoned onto this but they are few and far between:

  • Search Engine “Memory Loss” in Fact a Sign of Smart Behavior‎
  • Search Engines Ruin Our Memory, Make Us Smarter

Sparrow, the lead author of the study, said when interviewed by Wired: “It’s very similar to how we use people in our lives. The internet is really just an interface with a lot of other people.”

In other words, what the internet has changed is the deployment of strategies we have always used for managing our memory. Sparrow et al. use an old term, “transactive memory”, to describe this, but that’s needed only because cognitive psychology’s view of memory has been so limited. Memory is not just about storage and retrieval. Like all of our cognition, it is tied in with a whole host of strategies (sometimes lumped together under the heading of metacognition) that have a transactive and social dimension.

Let’s take the example of mobile phones. About 15 years ago I remembered about 4 phone numbers (home, work, mother, friend). Now, I remember none. They’re all stored in my mobile phone. What’s happened? I changed my strategy of information storage and retrieval because of the technology available. Was this a radical change? No, because I needed a lot more numbers than that, so I carried a little booklet where I kept the rest of them. So the mobile phone freed my memory of four items. Big deal! Arguably, these four items have a huge potential transactional impact. They mean that if my mobile phone is dead or lost, I cannot call the people most likely to be able to offer assistance. But how often does that happen? It hasn’t happened to me yet in an emergency. And in a non-emergency I have many backups. At any rate, in the past I was much more likely to be caught up in an emergency where I couldn’t find a phone at all. So the change has been fairly minimal.

But what’s more interesting here is that I didn’t realize this change until I heard someone talk about it. This transactional change is a topic of conversation; it is not just something that happened. It is part of common knowledge (and common knowledge only becomes common because a lot of people talk about it to a lot of other people).

The same goes for the claims made by Sparrow et al. The strategies used to maintain access to factual knowledge have changed with the technology available. But they didn’t just change; people have been talking about this change. “Just Google it” is a part of daily conversation. In his podcasts, Leo Laporte has often talked about how his approach to remembering has changed with the advent of Google. One early strategy for remembering websites was the bookmark. People made significant collections of bookmarks to sites, not unlike the Rolodexes of old. But about five or so years ago Google got a lot better at finding the right sites, so bookmarks went away. Personally, now that Chrome syncs bookmarks so seamlessly, I’ve started using them again. Wow, a change in technology facilitates a change in strategy. Sparrow et al. should do some research on this. Since I started using the internet when it was still spelled with a capital “I”, I still remember the URLs of key websites: Google, Yahoo, Gmail, BBC, my own, etc. But there are people who don’t. I’ve personally observed the highly intelligent CEO of a company type “Google” into the Bing search box in Internet Explorer. And a few years ago, after a university changed its portal, I was soothing an angry professor who complained that the link to Google had been removed from the page that automatically came up on his computer. He had never learned how to get there any other way because he didn’t need to. Now he does. We acquire strategies to deal with information as we need them.

Before the availability of writing (and even after), there were a whole lot of strategies available for remembering things. These were part of the cultural conversation as much as the internet is today. Some of these strategies became part of religious ritual. Some of them are a part of a trickster’s arsenal – Joshua Foer describes some in Moonwalking with Einstein. Many are part of the art of “study skills” many people talk about.

All that Sparrow et al. demonstrated is that when some of these strategies are deployed, it has a small effect on recall. This is not a bad thing to know, but it’s not in any way worth over 600 media stories. To evaluate this much reduced claim, we would have to carefully examine their research methodology and underlying assumptions, which is not what this post is about. It’s about the mistreatment of research results by media-hungry academics.

I don’t begrudge Sparrow et al. their 15 minutes of fame. I’m not surprised, dismayed or even disappointed at the link greed of the journalistic herd who fell over themselves to uncritically spread this research fluff. Also, many of the actual articles were quite balanced about the findings, but how much of that balance will survive the effect of a mendaciously bombastic headline is anybody’s guess. So all in all, it’s business as usual in the popularization of “science” in the “media”.

Bohannon, J. (2011). Searching for the Google Effect on People’s Memory. Science, 333(6040), 277. DOI: 10.1126/science.333.6040.277

Sparrow, B., Liu, J., & Wegner, D. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science. DOI: 10.1126/science.1207745


The death of a memory: Missing metaphors of remembering and forgetting?


I have forgotten a lot of things in my life. Names, faces, numbers, words, facts, events, quotes. Just like for anyone, forgetting is as much a part of my life as remembering. Memories short and long come and go. But only twice in my life have I seen a good memory die under suspicious circumstances.

Both of these were good, reliable, everyday memories, as much declarative as non-declarative. And both died unexpectedly, without warning and without reprieve. They were both memories of entry codes, but I retrieved each one differently. Both were highly contextualised, but each in a different way.

The first time was almost 20 years ago (in 1993) and it was the PIN for my first bank card (before they let you change them). I’d had it for almost two years by then, using it every few days for most of that period. I remembered it so well that even after I’d been out of the country for 6 months and not thought about it once, I walked up to an ATM on my return and, without hesitation, typed it in. And then, about 6 months later, I walked up to another ATM, started typing in the PIN and it just wasn’t there. It was completely gone. I had no memory of it. I knew about the memory, but the actual memory had completely disappeared. It wasn’t a temporary confusion; it was simply gone and I never remembered it again. This PIN I remembered as a number.

The second death occurred just a few days ago. This time, it was the entrance code to a building. But I only remembered it as a shape on the keypad (as I do most numbers now). In the intervening years, I’ve memorised a number of PINs and entrance codes. Most I’ve forgotten since; some I remember even now (like the PIN of a card that expired a year ago, which I’d only used once every few months for many years). Simply, the normal processes you’d expect of memory. But this one I’d been using for about a year, since they’d changed it from the previous one. About five months ago I came back from a month-long leave and I remembered it instantly. But three days ago, I walked up to the keypad and the memory was gone. I’d used the keypad at least once if not twice that day already. But that time I walked up to it and nothing. After a few tries I started wondering if I might be typing in the old code from before the change, so I flipped the pattern around (I had a vague memory of once using the old pattern to remember the new one) and it worked. But the working pattern felt completely foreign. Like one I’d never typed in before. I suddenly understood what it must feel like for someone to recognize their loved one but at the same time be sure that it’s not them. I was really discomfited by this impostor keypad pattern. For a few moments, it felt really uncomfortable – almost an out-of-body (or out-of-memory) experience.

The one thing that set the second forgetting apart from the first was that I was talking to someone as it happened (the first time, I was completely alone on a busy street – I still remember which one, by the way). It was an old colleague who was visiting the building and asked me if I knew the code. And seconds after I confidently declared I did, I didn’t. Or I remembered the wrong one.

So in the second case, we could conclude that the presence of someone who had been around when the previous code was being used triggered the former memory and overrode the latter one. But the experience of complete and sudden loss, I recall vividly, was the same. None of my other forgettings were so instant and inexplicable. And I once forgot the face of someone I’d just met as soon as he turned around (which was awkward since he was supposed to come back in a few minutes with his car keys – so I had to stand in the crowd looking expectantly at everyone until the guy returned and nodded to me).

What does this mean for our metaphors of memory based on the various research paradigms? None seem to apply. These were not repressed memories associated with traumatic events (although the forgetting itself was extremely mildly traumatic). These were not quite declarative memories, nor were they exactly non-declarative. They both required operations in working memory but were long-term. They were both triggered by context and had a muscle-memory component. But the first one I could remember as a number, whereas the second one only as a shape, and only on that specific keypad. Neither was subject to long-term decay. In fact, both proved resistant to decay, surviving long or longish periods of disuse. They both were (or felt) as solid as my own name. Until they were there no more. The closest introspective analogy seems to me to be Luria’s man who remembered too much, who once forgot a white object because he had placed it against a white background in his memory, which made it disappear.

The current research on memory seems to be converging on the idea that we reconstruct our memories. Our brains are not just stores with shelves from which memories can be plucked. Although memories are highly contextual, they are not discrete objects encoded in our brains like files on a hard drive. But for these two memories, the hard drive metaphor seems more appropriate. It’s as if a tiny part of my brain that held them was corrupted and they simply winked out of existence at the flip of a bit. Just like on a hard drive.

There’s a lot of research on memory loss, decay and reliability but I don’t know of any which could account for these two deaths. We have many models of memory which can be selectively applied to most memory related events but these two fall between the cracks.

All the research I could find is either on sudden specific-event-induced amnesia (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1961972/?page=1) or senescence (http://brain.oxfordjournals.org/content/89/3/539.extract). In both cases, there are clear causes of the memory loss, and the loss is much more total (complete events or an entire identity). I could find nothing about the sudden loss of a specific, reliable memory in a healthy individual (given that it has only happened twice, 18 years apart – I was 21 when it happened first – I assume this is not caused by any pathology in my brain), not precipitated by any traumatic (or other) event. Yet I suspect this happens all the time… So what gives?

Killer App is a bad metaphor for historical trends, good for pseudoteaching


Niall Ferguson wrote in The Guardian some time ago about how awful history education has become, with these “new-fangled” 40-year-old methods like focusing on “history skills” that lead to kids leaving school knowing “unconnected fragments of Western history: Henry VIII and Hitler, with a small dose of Martin Luther King, Jr.” but not who the reigning English monarch was at the time of the Armada. Instead, he wants history to be taught his way: deep trends leading to the understanding of why the “West” rules and why Ferguson is the cleverest of all the historians that ever lived. He even provided (and how cute is this) a lesson plan!

Now, personally, I’m convinced that the history of historical education teaches us mostly that historical education is irrelevant to the success of current policy. Not that we cannot learn from history. But it’s such a complex source domain for analogies that even very knowledgeable and reasonable people can and do learn the exact opposites from the same events. And even if they learn the “right” things it still doesn’t stop them from being convinced that they can do it better this time (kind of like people in love who think their marriage will be different). So Ferguson’s bellyaching is pretty much an empty exercise. But that doesn’t mean we cannot learn from it.

Ferguson, who is a serious historian of financial markets, didn’t just write a whiny column for the Guardian; he wrote a book called Civilization (I’m writing a review of it and a few others under the joint title “Western Historiographical Eschatology”, but here I’ll only focus on some aspects of it) and is working on a computer game and teaching materials. To show how seriously he takes his pedagogic mission, and possibly also how hip and with it he is, Ferguson decided not to call his historical trends trends but rather “killer apps”. I know! This is so laugh-out-loud funny I can’t express my mirth in mere words :))). And it gets even funnier as one reads his book. As a pedagogical instrument this has all the practical value of putting a spoiler on a Fiat. He uses the term about 10 times throughout the book (it’s not in the index!), including one or two mentions of “downloading” when he talks about the adoption of an innovation.

Unfortunately for Ferguson, he wrote his book before the terms “pseudocontext” and “pseudoteaching” made an appearance in the edublogosphere. And his “killer apps” and the lesson plan based on them are a perfect example of both. Ferguson wrote a perfectly serviceable and eminently readable history book (even though it’s a bit of a tendentious mishmash). But it is still a history book written by a historian. It’s not particularly stodgy or boring, but it’s no different from the myriad other currently popular history books that the “youth of today” don’t give a hoot about. He thinks (bless him) that using the language of today will have the youth flocking to his thesis like German princes to Luther. Because calling historical trends “killer apps” will bring everything into clear context and make all the convoluted syntax of even the most readable history book just disappear! This is just as misguided as thinking that talking about men digging holes at different speeds will make kids want to do math.

What makes it even more precious is that the “killer app” metaphor is wrong. For all his extensive research, Ferguson failed to look up “killer app” on Wikipedia or in a dictionary. There he would have found out that it doesn’t mean “a cool app” but rather an application that confirms the viability of an existing platform whose potential may have been questioned. There have only been a handful of killer apps. The one undisputed killer app was VisiCalc, which all of a sudden showed how an expensive computer could simplify the process of financial management through electronic spreadsheets and therefore pay for itself. Suddenly, personal computers made sense to the most important people of all, the money counters. And thus the personal computer revolution could begin. A killer app proved that a personal computer is useful. But the personal computer had already existed as a platform when VisiCalc appeared.

None of Ferguson’s “killer apps” – “competition, science, property rights, medicine, consumer society, work ethic” – are this sort of beast. They weren’t something “installed” in the “West” which then proved its viability. They were something that, according to Ferguson anyway, made the West what it was. In that, they are more equivalent to the integrated circuit than to VisiCalc. They are the “hardware” that makes up the “West” (as Ferguson sees it), not the software that runs on it. The only possible exception is “medicine”, or more accurately “modern Western medicine”, which could be the West’s one true “killer app”, showing the viability of its platform for something useful and worth emulating. Also, a killer app requires a conscious intervention, whereas all of Ferguson’s “apps” were things that happened on their own in myriad disparate processes – we can only see them as one thing now.

But this doesn’t really matter at all. Because Ferguson, like so many people who want to speak the language of “young people”, neglected to pay any attention whatsoever to how “young people” actually speak. The only people who actually use the term “killer app” are technology journalists, or occasionally other journalists who’ve read about it. I did a quick Google search for “killer app” and did not find a single non-news reference where somebody “young” discussed “killer apps” on a forum somewhere. That’s not to say it doesn’t happen, but it doesn’t happen enough to make Ferguson’s work any more accessible.

This overall confusion is indicative of Ferguson’s book as a whole, which is definitely less than the sum of its parts. It is full of individual insight and a fair amount of wit, but it flounders in its synthetic attempts. Not all his “killer apps” are of the same type; some follow from the others and some don’t appear to be anything other than Ferguson’s wishful thinking. They certainly didn’t happen on one “platform” – some seem the outcome rather than the cause of “Western” ascendancy. Ferguson’s just all too happy to believe his own press. At the beginning he talks about early hints around 1500 AD that the West might achieve ascendancy, but at the end he takes a half millennium of undisputed Western rule for granted. But in 1500, “the West” still had 250 years to go before the start of the industrial revolution, 400 years before modern medicine, 50 years before Protestantism took serious hold and at least another 100 before the Protestant work ethic kicked in (if there really is such a thing). It’s all over the place.

Of course, there’s not much innovative about any of these “apps”. It’s nothing a reader of the Wall Street Journal editorial page couldn’t come up with. Ferguson does a good job of providing interesting anecdotes to support his thesis but each of his chapters meanders around the topic at hand with a smattering of unsystematic evidence here and there. Sometimes the West is contrasted with China, sometimes the Ottomans, sometimes Africa! It is hard to see how his book can help anybody’s “chronological understanding” of history that he’s so keen on.

But most troublingly, it seems in places that he mostly wrote the book as a carrier for ultra-conservative views that would make his writing more suitable for The Daily Mail than the Manchester Pravda: “the biggest threat to Western civilization is posed not by other civilizations, but by our own pusillanimity” – unless of course it is the fact that “private property rights are repeatedly violated by governments that seem to have an insatiable appetite for taxing our incomes and our wealth and wasting a large portion of the proceeds”.


It’s almost as if the “civilized” historical discourse were just a veneer that peels off in places and reveals the real Ferguson, a comrade of Pat Buchanan, whose “The Death of the West” (a screed whose Czech translation I was once unfortunate enough to review) came from the same dissatisfaction with the lack of our confidence in the West. Buchanan also recommends teaching history – or more specifically, lies about history – to show us what a glorious bunch of chaps the leaders of the West were. Ferguson is too good a historian to ignore the inconsistencies in this message, and a careful reading of his book reveals enough subtlety not to want to reconstitute the British Empire (although the yearning is there). But the Buchananian reading is available, and in places it almost seems as if that’s the one Ferguson wants readers to go away with.

From metaphor to fact, Ferguson is an unreliable thinker flitting between insight, mental shortcut and unreflected cliche with ease. Which doesn’t mean that his book is not worth reading. Or that his self-serving pseudo-lesson plan is not worth teaching (with caution). But remember I can only recommend it because I subscribe to that awful “culture of relativism” that says that “any theory or opinion, no matter how outlandish, is just as good as whatever it was we used to believe in.”

Update 1: I should perhaps point out that I think Ferguson’s lesson plan is pretty good, as such things go. It gives students an activity that engages a number of cognitive and affective faculties rather than just relying on telling. Even if it is completely unrealistic in terms of the amount of time allocated and the objectives set. “Students will then learn how to construct a causal explanation for Western ascendancy” is an aspiration, not a learning objective. Also, it and the other objectives really rely on the “historical skills” he derides elsewhere.

The lesson plan comes apart at about point 5, where the really cringeworthy part kicks in. As in his book, Ferguson immediately assumes that his view is the only valid one – so instead of asking the students to compare two different perspectives on why the world looked like it did in 1913 as opposed to 1500 (or even compare maps at strategic moments), he simply asks them to come up with reasons why his “killer apps” are right (and to use evidence while they’re doing it!).

I also love his aside: “The groups need to be balanced so that each one has an A student to provide some kind of leadership.” Of course, there are shelves full of literature on group work – and pretty much all of it comes from the same sort of people who are likely to practice “new history” – Ferguson’s nemesis.

I don’t think using Ferguson’s book and materials would do any more damage than using any other history book. Not what I would recommend, but who cares. I recently spent some time at Waterstone’s browsing through modern history textbooks and I think they’re excellent. They provide far more background to events and present them in a much more coherent picture than Ferguson does. They perhaps don’t encourage the sort of broad synthesis that has been the undoing of so many historians over the centuries (including Ferguson), but they demonstrate working with evidence in a way he does not.

The reason most people leave school not knowing facts and chronologies is that they don’t care, not that they don’t have an opportunity to learn. And this level of ignorance has remained constant over decades. At the end of the day, history is just a bunch of stories, not that different from what you see on a soap opera or in a celebrity magazine, just not as relevant to a peer group. No amount of “killer applification” is going to change this. What remains at the end of historical education is a bunch of disconnected images, stories and conversation pieces (as many of them about the tedium of learning as about its content). But there’s nothing wrong with that. Let’s not underestimate the ability of disinterested people to become interested and start making the connections and filling in the gaps when they need to. That’s why all these “after-market” history books like Ferguson’s are so popular (even though for most people they are little more than tour guides to the exotic past).

Update 2: By a fortuitous coincidence, an announcement of the release of George L. Mosse’s lectures on European cultural history (http://history.wisc.edu/mosse/george_mosse/audio_lectures.htm) came across my news feeds. I think it is important to listen to these side by side with Ferguson’s seductively unifying approach to fully realize the cultural discontinuity in so many key aspects between us and the creators of the West. Mosse’s view of culture, as his Wikipedia entry reads, was of “a state or habit of mind which is apt to become a way of life”. The practice of history, after all, is a culture of its own, with its own habits of mind. In a way, Ferguson is asking us to adopt his habits of mind as our way of life. But history is much more interesting and relevant when it is, as Mosse’s colleague Harvey Goldberg put it on this recording, a quest after a “usable past” spurred by our sense of the “present crisis” or “present struggle”. So maybe my biggest beef with Ferguson is that I don’t share his justificationist struggle.


Language learning in literature as a source domain for generative metaphors about anything


In my thinking about things human, I often like to draw on the domain of second language learning as a source of analogies. The problem is that relatively few people in the English-speaking world have enough experience with language learning to actually map things onto it. In fact, in my experience, even people who have a lot of experience with language learning are not aware of all the things that were happening while they were learning. And of course awareness of research or language learning theories is not to be expected. This is not helped by the language teaching profession’s propaganda that language learning is “fun” and “rewarding” (whatever that is). In fact, my mantra of language learning (which I learned from my friend Bill Perry) is that “language learning is hard and takes time” – at least if you expect to achieve a level of competence above that of “impressing the natives” with your “please” and “thank you”. In that, language learning is like any other human endeavor, but because of its relatively bounded nature — when compared to, for instance, culture — it can be particularly illuminating.

But how can not just the fact of language learning but also its visceral experience be communicated to those who don’t have that kind of experience? I would suggest engrossing literature.

For my money, one of the most “realistic” depictions of language learning, with all its emotional and cognitive peaks and troughs, can be found in James Clavell’s “Shogun”. There we follow the Englishman Blackthorne as he goes from learning how to say “yes” to conversing in halting Japanese. Clavell makes the frustrating experience of not knowing what’s going on and not being able to express even one’s simplest needs real for the reader, who identifies with Blackthorne’s plight. He demonstrates how language and cultural learning go hand in hand and how easy it is to cause a real-life problem through a little linguistic misstep.

Shogun stands in stark contrast to most other literature, where knowledge of language and its acquisition is viewed as mostly a binary thing: you either know it or you don’t. One of the worst offenders here is Karl May (virtually unknown in the English-speaking world), whose main hero, Old Shatterhand/Kara Ben Nemsi, effortlessly acquires not only languages but dialects and local accents, which allow him to impersonate locals in May’s favorite plot twists. Language acquisition in May just happens. There’s never any struggle or miscommunication by the main protagonist. But similar linguistic effortlessness in the face of plot requirements is common in literature and film. Far more than magic or the existence of vampires, the thing that used to stretch my credulity the most in Buffy the Vampire Slayer was the ease with which linguistic facility was disposed of.

To be fair, even in Clavell’s book there are characters whose linguistic competence is largely binary. Samurai either speak Portuguese or Latin or they don’t – and if the plot demands, they can catch even whispered colloquial conversation. Blackthorne’s own knowledge of Dutch, Spanish, Portuguese and Latin is treated the same way, as if identical competence would be expected in all four (which would be completely unrealistic given his background and which resembles May’s Kara Ben Nemsi in many respects).

Nevertheless, when it comes to Japanese, even a superficially empathetic reader will feel they are learning Japanese along with the main character. Largely through Clavell’s clever use of limited translation.

This is all the more remarkable given that Clavell obviously did not speak Japanese and relied on informants. As the book “Learning from Shogun” pointed out, this led to many inaccuracies in the actual Japanese, and that book advises readers not to rely on the language of Shogun too much.

Clavell (in all his books – not just Shogun) is even more illuminating in his depiction of intercultural learning and communication – the novelist often getting closer to the human truth of the process than the specialist researcher. But that is a blog post for another time.

Another novel I remember as an accurate representation of language learning is John Grisham’s “The Broker”, in which the main character, Joel Backman, is landed in a foreign country by the CIA and expected to pick up Italian in 6 months. Unlike in Shogun, language and culture do not permeate the entire plot, but language learning is a part of about 40% of the book. “The Broker” underscores another dimension also present in Shogun, namely teaching, teachers and teaching methods.

Blackthorne in Shogun orders an entire village (literally on pain of death) to correct him every time he makes a mistake. And then he’s excited by a dictionary and a grammar book. Backman spends a lot of time with a teacher who makes him repeat every sentence multiple times until he knows it “perfectly”. These are today recognized as bad strategies. Insisting on perfection in language learning is often a recipe for forming mental blocks (Krashen’s cognitive and affective filters). But on the other hand, it is quite likely that in totally immersive situations like Blackthorne’s, or even partly immersive situations like Backman’s (who has English speakers around him to help), pretty much any approach to learning will lead to success.

Another common misconception reflected in both works is the demand language learning places on rote memory. Both Blackthorne and Backman are described as having exceptional memories to make their progress more plausible, but the sort of learning successes and travails described in the books would accurately reflect the experiences of anybody learning a foreign language, even someone without a remarkable memory. As both books show without explicit reference, it is their strategies in the face of incomprehension that help their learning, rather than straight memorization of words (although that is by no means unnecessary).

So what are the things that knowing about the experience of second language learning can help us elucidate? I think that any progress from incompetence to competence can be compared to learning a second language. Particularly when we can enhance the purely cognitive view of learning with an affective component. Strategies as well as simple brain changes are important in any learning, which is why none of the brain-based approaches have produced unadulterated success. In fact, linguists studying language as such would do well to pay attention to the process of second language learning to more fully realize the deep interdependence between language and our being.

But I suspect we can be more successful at learning anything (from history or maths to computers or double-entry bookkeeping) if we approach it as a foreign language and acknowledge the emotional difficulties alongside the cognitive ones.

Also, if we looked at expertise more as linguistic fluency than as a collection of knowledge and skills, we could devise a program of learning that would better take into account not only the humanity of the learner but also the humanity of the whole community of experts which he or she is joining.


You don’t have to be a xenophobe to think Britain being an island matters, but it helps!


I have a distinct feeling of writing about this somewhere but can’t find it, so here’s the rant redux.

The images on which our thinking and reasoning are based can sometimes exert a powerful force. There are many mechanisms we use to counter that force but sometimes it is very difficult. It seems particularly difficult for some people to shed the idea that the fact that Britain is an island has any bearing on its immigration, housing or grave digging policy as compared to a continental state!

I was reminded of this again when the sociologist Kate Woodthorpe mentioned on this edition of Thinking Aloud http://www.bbc.co.uk/programmes/b0112gzd that the shortage of grave plots is a particular problem in the UK because ‘after all we are an island’.

This same trope pops up all over the place when it comes to population and housing. See here and here and here.

I don’t know how to say this without sounding like I’m stating the obvious. But that’s because it is just that obvious. In the current geopolitical context, every country is pretty much an island from the point of view of its population, housing or grave digging. France can’t just borrow a bit of Germany to house its excess immigrants or to dig a few graves (although I have this image of an underground corpse tunnel…). This is often accompanied by calling Britain a “small island”, which it is not, but even if it were half the size of Belgium, the same principle would apply.

I think the power of this image stems from its underlying schema, which is that of a platform in the middle of water off the edges of which it is possible to fall. And the more people you add to the platform, the more likely they are to fall off. This is so powerful that even the few people I pointed it out to took a lot of convincing.

Of course, this schema is popular with groups wanting to promote a certain point of view on migration but I think its power works regardless of ideology.

The natural logistics of life: The Internet really changes almost nothing


This is a post that has been germinating for a long time. But it was most immediately inspired by Marshall Poe’s article claiming that “The Internet Changes Nothing”. And as it turns out, I mostly agree.

OK, this may sound a bit paradoxical. Twelve years ago, when I submitted my first column to be published, I delivered the text to my editor on a diskette. Now, I don’t even have an editor (or at least not for this kind of writing). I just click a button and my text is published. But! If my server logs are to be trusted, it will be read by tens or at best hundreds of people over its lifetime. That’s more than if I’d just written some notes to myself or published it in an academic journal, but much less than if I’d published it in a national daily with a readership of hundreds of thousands. Not all of them would read what I write, but more would than on this blog.
So while democratising the publishing industry has worked for Kos, Huffington and many others, still many more blogs languish in obscurity. I can say anything I want but my voice matters little in the cacophony.

In terms of addressing an audience and having a voice, the internet has done little for most people. This is not because not enough people have enough to say but because there’s only so much content the world can consume. There is a much longer tail trailing behind Clay Shirky’s long tail. It’s the tail of 5-post, 0-comment blogs and YouTube videos with 15 views. Even millions of typewriter-equipped monkeys with infinities of time couldn’t get to them all. Plus it’s hard to predict what will be popular (although educated guesses can produce results in aggregate). Years ago I took a short clip with my still camera of a blacksmith friend of mine making a candle holder. It’s had 30 thousand views on YouTube. Why, I don’t know. There’s nothing particularly exciting about it, but there must be some sort of long tail longing after it. None of the videos I hoped would take off did. This is the experience of many if not most. Most attempts at communities fail because the people starting them don’t realize how hard it is to nurture them to self-sustainability. I experienced this with my first site, Bohemica.com. It got off to a really good start, but since it was never my primary focus, the community kind of dissipated after a site redesign that was intended to foster it.

Just in terms of complete democratization of expression, then, the internet has done less for most than it may appear. But how about the speed of communication? I’m getting ready to do an interview with someone in the US, record it, transcribe it and translate it – all within a few days. The internet (or more accurately Skype) makes the calling cheap, and the recording and transcription are made much quicker by tools I didn’t have access to even in the early 2000s when I was doing interviews. And of course, I can get the published product to my editor in minutes via email. But what hasn’t changed is the process. The interview, transcription and translation take pretty much the same amount of time. The work of agreeing with the editor on the parameters of the interview and arranging it with the interviewee takes pretty much as long as before. As does preparation for the interview. The only difference is the speed and ease of the transport of information from me to its target and of me to the information. It’s faster to get to the research subject – but the actual research still takes about the same amount of time, limited by the speed of my reading and the speed of my mind.

A chain is only as strong as its weakest link. And as long as humans are a part of the interface in a communication chain, the communication will happen at a human speed. I remember sitting over a printout of an obscure 1848 article on education from JSTOR with an academic who started doing research in the 1970s, reminiscing how in the old days he’d have had to get on the train to London to get a thing like this in the British Library, or at least arrange a protracted interlibrary loan. On reflection, this is not as radical a change as it may seem. Sure, the information took longer to arrive. But people before the internet didn’t just sit around waiting for it. They had other stuff to read (there’s always more stuff to read than time) and writing to get on with in the meantime. I don’t remember anyone claiming that modern scholarship is any better than scholarship from the 1950s because we can get information faster. I’m as much in awe of some of the accomplishments of the scholars of the 1930s as of people doing research now. And just as disdainful of others from any period. When reading a piece of scholarly work, I never care about the logistics of the flow of information that was necessary for the work to be completed (unless, of course, it impinges on the methodology – where modern scholars are just as likely to take preposterous shortcuts as ancient ones). During the recent Darwin frenzy, we heard a lot about how he was the communication hub of his time. He was constantly sending and receiving letters. Today, he’d have Twitter and a blog. Would he somehow achieve more? No, he’d still have to read all those research reports and piddle about with his worms. And it’s just as likely he’d miss that famous letter from Brno.

Of course, another fallacy we like to commit is assuming that communication in the past was simply communication today minus the internet (or the telephone, or name your invention). But that’s nonsense. I always like to remind people that “You’ve Got Mail”, where Tom Hanks and Meg Ryan meet and fall in love online, is a remake of a 1940s film where the protagonists sent each other letters. But these often arrived the same day (particularly within the same city). There were many more messenger services, pneumatic tubes, and a reliable postal service. As the Internet takes over the burden of information transmission, these are either disappearing or deteriorating, but that doesn’t mean that’s the state they were in when they were the chief means of information transmission. Before there were photocopiers and faxes, there were copyists and messengers (and both were pretty damn fast). Who even sends faxes now? We like to claim we get more done with the internet, but take just one step back and this claim loses much of its appeal. Sure, there are things we can do now that we couldn’t do before, like attend a virtual conference or a webinar. That’s true and it’s really great. But what would the us of the 1980s have done? No doubt something very similar, like buying video tapes of lectures or attending Open Universities. And the us of the 1960s? Correspondence courses and pirate radio stations. We would have had far less choice but our human endeavor would have been roughly the same. The us of the 1930s, 1730s or 330s? That’s a more interesting question, but nobody’s claiming that the internet changed the us of those times. We mostly think of the Internet as changing the human condition as compared to the 1960s or 1980s. And there the technology changes have far outstripped the changes in human activity.

If it’s not true that the internet has enabled us to get things done in a qualitatively different manner on a personal level, it’s even less true that it has made a difference at the level of society. There are simply so many things involved and they take so much time because humans and human institutions are involved. Let’s take the “Velvet Revolution” of 1989, in which I was an eager if extremely marginal participant. On Friday, November 17 a bunch of protesters got roughed up, on November 27 a general strike was held, and on December 10 the president resigned. In Egypt, the demonstrations started on January 25, lots of stuff happened, and on February 11 the president resigned. The Egyptians have the Czechs beat in their demonstration-to-resignation time by six days (17 v 23). This was the “Twitter” revolution. We didn’t even have mobile phones. Actually, we mostly didn’t even have phones. Is that what all this new global infrastructure has gotten us? Six days off on the toppling of a dictator? Of course not. Twitter made no difference at all to what was happening in Egypt, when compared to other revolutions. If anything, Al Jazeera played a bigger role. But on the ground, most people found out about things by being told by someone next to them. Just like we did. We even managed to bring the international media up to speed pretty quickly, which could be argued is the main thing Twitter has done in the “Arab Spring” (hey, another thing the Czechs did and failed at).
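As a quick check on the day counts (a minimal sketch in Python; the dates are the ones given above):

    from datetime import date

    # Velvet Revolution: first protest to the president's resignation
    velvet = (date(1989, 12, 10) - date(1989, 11, 17)).days

    # Egypt 2011: first demonstration to the president's resignation
    egypt = (date(2011, 2, 11) - date(2011, 1, 25)).days

    print(velvet, egypt, velvet - egypt)  # 23 17 6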

Malcolm Gladwell got a lot of criticism for pointing out the same thing. But he’s absolutely right:

“high risk” social activism requires deep roots and strong ties
http://www.newyorker.com/online/blogs/newsdesk/2011/02/does-egypt-need-twitter.html

And while these ties can be established and maintained purely virtually, it takes a lot more than a few tweets to get people moving. Adam Weinstein adds to Gladwell’s example:

Anyone who lived through 1989 or the civil rights era or 1967 or 1956 knows that media technology is not a motive force for civil disobedience. Arguing otherwise is not just silly; it’s a distraction from the real human forces at play here.
http://motherjones.com/mojo/2011/02/malcolm-gladwell-tackles-egypt-twitter

Revolutions simply take their time. On paper, the Russian October Revolution of 1917 took just a day to topple the regime (as did so many others). But there were a bunch of unsuccessful revolutions prior to that, and of course a bloody civil war lasting for years afterwards. To fully institutionalize its aims, the Russian revolution could be said to have taken decades and millions of dead. Even in ancient times, things sometimes moved very quickly (and always more messily than we can retell the story). The point about revolutions and wars is that they don’t move at the speed of information but at the speed of a fast-walking revolutionary or soldier. Ultimately, someone has to sit in the seat where the buck stops, and they can only get there so fast even with jets, helicopters and fast cars. Such are the natural logistics of human communal life.

This doesn’t mean that the speed or manner of communication doesn’t have some implications where logistics are concerned. But their impact is surprisingly small and easily absorbed by the larger concerns. In The Victorian Internet, Tom Standage describes how warship manifests could no longer be published in The Times during the Crimean War because they could be telegraphed to the enemy faster than the ships could get there (whereas in the past, a spy’s message would be no faster than the actual ships). Also, betting and other financial establishments had to make adjustments so that the speed of information didn’t get in the way of making a profit. But if we compare the 1929 financial crisis with the one in 2008, we see that the speed of communication made little difference to the overall medium-term shape of the economy. Even though in 2008 we were getting up-to-the-second information about the falling banking houses, the key decisions about support or otherwise took about the same amount of time (days). Sure, some stock trading is now done in fractions of a second by computers because humans simply aren’t fast enough. But the economy still moves at about the same pace – the pace of lots and lots of humans shuffling about through their lives.

As I said at the start, although this post has been brewing in me for a while, it was most immediately inspired by the article by Marshall Poe (of New Books in History) published about six months ago. What he said has become no less relevant through the passage of time.

Think for a moment about what you do on the Internet. Not what you could do, but what you actually do. You email people you know. In an effort to broaden your horizons, you could send email to strangers in, say, China, but you don’t. You read the news. You could read newspapers from distant lands so as to broaden your horizons, but you usually don’t. You watch videos. There are a lot of high-minded educational videos available, but you probably prefer the ones featuring, say, snoring cats. You buy things. Every store in the world has a website, so you could buy all manner of exotic goods. As a rule, however, you buy the things you have always bought from the people who have always sold them.

This is easy to forget. We call online shopping and food delivery a great achievement. But having shopping delivered was always an option in the past (and much more common than it is now, when delivery boys are more expensive). Amazon is amazing but still just a glorified catalog.

But there are revolutionary inventions that nobody even notices. What about the invention of the space between words? None of the ancients bothered to put spaces between words, or, in general, to read silently. It has been estimated that putting spaces between words not only allowed for silent reading (a highly suspicious activity until the 1700s) but also sped up reading by about 30%. Talk about a revolution! I’m a bit skeptical about the 30% number, but still, nobody talks about it. We think of audio books as a post-Edison innovation but in fact all reading was partly listening not too long ago. Another forgotten invention is that of the blackboard, which made large-volume dissemination of information much more feasible through a simple reconfiguration of space and attention between pupil and teacher.


David Weinberger recently wrote what was essentially a poem about hypertext (a buzzword I haven’t heard for a while):

The old institutions were more fragile than we let ourselves believe. They were fragile because they made the world small. A bigger truth burst them. The world is more like a messy, inconsistent, ever-changing web than like a curated set of careful writings. Truth burst the world made of atoms.

Yes, there is infinite space on the Web for lies. Nevertheless, the Web’s architecture is a better reflection of our human architecture. We embraced it as if it were always true, and as if we had known it all along, because it is and we did.
http://www.hyperorg.com/blogger/2011/05/01/a-big-question

It is remarkable how right and wrong he can be at the same time. Yes, the web is more of a replication of the human architecture. It has some notable strengths (lack of geographic limitation, speed of delivery of information) and weaknesses (no internal methods for exchange of tangible goods, relatively limited methods for synchronous face-to-face communication). I’d even go as far as calling the Internet “computer-assisted humanity”. But that just means that nothing about human organization online is a radical transformation of humanity offline.

What on Earth makes Weinberger think that the “existing institutions were fragile”? If anything, they have proved extremely robust. I find The Cluetrain Manifesto extremely inspiring and in many ways moving. But I find “The Communist Manifesto” equally profound without wanting to live in a world governed by it. “The Communist Manifesto” got its description of the world as it was perfectly right. Pretty much every other paragraph in it applies just as much today as it did then. But the predictions offered in the other paragraphs can cause nothing but laughter today. “The Cluetrain Manifesto” gave the same kind of expression to the frustration with the Dilbert world of big corporations and asked for our humanity back. They were absolutely right.

Markets can be looked at as conversations and the internet can facilitate certain kinds of conversation. But they were wrong in assuming that there is just one kind of conversation. There are all sorts of group, symbolic and ritualized conversations that make the world of humans go around. And they have never been limited just to the local markets. In practical terms, I can now complain about a company on a blog or in a tweet. And these can be viewed by others. But since there’s an Xsuckx.com website for pretty much every major brand, the incentive for companies to be responsive to this is relatively small. I have actually received some responses to complaints from companies on Twitter. But only once did it lead to the resolution of the problem. Still, Twitter is the domain of “the elite”, so for now it pays companies to appease people there. However, should it reach the level of ubiquitous obscurity that many web pages have, it will become even less appealing due to the lack of permanence of Tweets.

The problem is that large companies with large numbers of customers can only scale if they keep their interaction with those customers at certain levels. It was always thus and will always remain so. Not because of intrinsic attitudes but because of configurational limitations of time and human attention. Even the industrially oppressed call-center operator can only deal with about 10 customers an hour. So you have to work some 80/20 cost checks into customer support. Most of any company’s interaction with its customers will be one-to-many and not one-on-one. (And this incidentally holds for communications about the company by customers.)
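To put rough numbers on that configurational limit, here is a back-of-the-envelope sketch; the customer base and contact rate are hypothetical figures of mine, and only the ten-customers-an-hour estimate comes from the paragraph above:

    # Toy model: why one-on-one customer support cannot scale.
    customers = 5_000_000        # hypothetical customer base
    contacts_per_year = 1        # assume one contact per customer per year
    calls_per_hour = 10          # what one operator can handle (estimate above)
    work_hours_per_year = 1_800  # a rough full-time working year

    calls = customers * contacts_per_year
    operators = calls / (calls_per_hour * work_hours_per_year)
    print(round(operators))      # ~278 full-time operators

Even at one short contact per customer per year, the head count (and the cost) grows linearly with the customer base, which is exactly why most of the communication has to be one-to-many.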

There’s a genre of conversations in the business and IT communities that focuses on “why is X successful”: Ford of the 1920s, IBM of the 1960s, Apple of the 2000s. The constant in these conversations is the wilful effort of projecting the current conventional wisdom about business practices onto what companies do and used to do. This often requires significant reimagining of the present and the past. Leo Laporte and Paul Thurrott recently had a conversation (http://twit.tv/ww207) in which they were convinced that companies that interact and engage with their customers will be successful. But why then, one of them asked, is Microsoft, whose employees blog all the time, not more successful than Apple, which couldn’t be more tight-lipped about its processes and whose attitude to customers is very much take it or leave it? Maybe it’s the Apple Store, one of them commented. That must be it. That engages the crap out of Apple’s customers. But neither of them asked: what is the problem with traditional stores, then? What is the point of the internet? The problem is that, as with any metaphoric projection, the customer-engagement metaphor is only partial. It’s more a way for us to grapple with processes that are fairly stable at the macro institutional level (which is the one I’m addressing here) but basically chaotic at the level of individual companies or even industries.

So I agree with Marshall Poe about the amount of transformation going on:

As for transformative, the evidence is thin. The basic institutions of modern society in the developed world—representative democracy, regulated capitalism, the welfare net, cultural liberalism—have not changed much since the introduction of the Internet. The big picture now looks a lot like the big picture then.

Based on my points above, I would even go as far as to argue that the basic institutions have not changed at all. Sure, foreign ministries now give advisories online, taxes can be paid electronically, and there are new agencies that regulate online communication (ICANN) as well as old ones with new responsibilities. But as we read the daily news, can we perceive any new realities taking place? New political arrangements based on this new and wonderful thing called the Internet? No. If you read a good spy thriller from the 80s and one taking place now, you can hardly tell the difference. They may have been using payphones instead of the always-on mobile smart devices we have now, but the events still unfold in pretty much the same way: people go from place to place and do things.

Writing, print, and electronic communications—the three major media that preceded the Internet—did not change the big picture very much. Rather, they were brought into being by major historical trends that were already well underway, they amplified things that were already going on.

Exactly! If you read about the adventures of Sinuhe, it doesn’t seem that different from something written by Karl May or Tom Clancy. Things were happening as they were, and whatever technology was available was used as well as possible. Remember that the telephone was originally envisioned as a way of attending the opera – people calling in to a performance instead of attending live.

As a result, many things that happened could not have happened exactly in the same way without the tools of the age being there. The 2001 portion of the war in Afghanistan certainly would have looked different without precision bombing. But now in 2011 it seems to be playing out pretty much along the same lines experienced by the Brits and the Soviets. Meaning: it’s going badly.

The role of TV imagery in the ending of the Vietnam war is often remarked on. But that’s just coincidental. There have been plenty of unpopular wars that were ended because the population refused to support them and they were going nowhere. Long before the “free press”, the First Punic War was getting a bad rep at home. Sure, the government could have done a better job of lying to the press and its population, but that’s hard to do when you have a draft. It didn’t work for Ramses II when he got his ass handed to him at Kadesh and it didn’t ultimately work for the Soviet misadventure in Afghanistan. The impact of the TV images can easily be overestimated. The My Lai Massacre happened in 1968, when the war was at about its mid-point. It still took two presidential elections and one resignation before it was really over. The images played a role, but if the government had wanted, it could have kept the war going.

Communications tools are not “media” in the sense we normally use the word. A stylus is not a scriptorium, movable type is not a publishing industry, and a wireless set is not a radio network. In order for media technologies to become full-fledged media, they need to respond to some big-picture demand.

It is so easy to confuse the technology with the message. On brief reflection, the McLuhan quote we all keep repeating like sheep is really stupid. The medium is the medium and the message is the message. Sometimes they are so firmly paired we can’t tell them apart, sometimes they have nothing in common. What is the medium of this message? HTML, the browser, your screen, a blog post, the Internet, TCP/IP, Ethernet? They’re all involved in the transmission. We can choose whether we pay attention to some of them. If I’d posted somebody a parchment with this on it, it would certainly add to the message or become a part of it. But it still wouldn’t BE the message! Lots of artists, like Apollinaire with his calligrams, actually tried to blend the message and the medium in all sorts of interesting ways. But it was hard work. Leo Laporte (whose podcasts I enjoy listening to greatly) spent a lot of time trying to displace “podcast” with “netcast” to avoid an association with the medium. He claimed that his shows are not “podcasts” but “shows”, i.e. real content. Of course, he somehow missed the fact that we don’t listen to programs but to the radio, and don’t view drama but rather watch TV. The modes of transmission have always been associated with the message – including the word “show” – until they weren’t. We don’t mean anything special now when we say we “watch TV”.

Of course, the mode of transmission has changed how the “story” is told. Every new medium has always first tried to emulate the one it was replacing but ultimately found its own way of expression. But this is no different from other changes in styles. The impressionists were still using the same kinds of paints and canvases, and modernist writers the same kind of inks and books. Every message exists in a huge amount of context and we can choose which of it we pay attention to at any one time. Sometimes the medium becomes a part of the context, sometimes it’s something else. Get over it!

There are some things Marshall Poe says that I don’t agree with. I don’t think we need to reduce everything to commerce (as he does – perhaps having imbibed too much Marxist historiography). But most importantly, I don’t agree when he says that the Internet is mature in the same way that TV was mature in the 1970s. Technologies take different amounts of time to mature as widespread consumer utilities. It is always on the order of decades and sometimes centuries, but there is no set trajectory. TV took less time than cars, planes took longer than TV, cars took longer than the Internet. (All depending on how we define mature – I’m mostly talking about wide consumer use, i.e. when only oddballs don’t use it and access is not disproportionately limited by socioeconomic status.) The problem with the Internet is that there are still enough people who don’t use it and/or who can’t access it. In the 1970s, the majority had TVs or radios, which were pretty much equivalent as a means of access to information and entertainment. TV was everywhere, but as late as the 1980s the BBC produced radio versions of its popular TV shows (Dad’s Army, All Gas and Gaiters, etc.). The radio performance of Star Wars was a pretty big deal in the mid-80s.

There is no such alternative access to the Internet. Sure, there are TV shows that play YouTube clips and infomercials that let you buy things. But it’s not the experience of the internet – more like a report on what’s on the Internet.

Even people who did not have TVs in the 1970s (both globally and nationally) could readily understand everything about their operation (later jokes about programming VCRs aside). You pushed a button and all there was to TV was there. Nothing was hiding. Nothing was trying to ambush you. People had to get used to the idiom of TV, learn to trust some things and not others (like advertising). But the learning curve was flat.

The internet is more like cars. When you get into one, you need to learn lots of things, from operating the machinery to the rules of the road. Not just how to deal with the machine but also how to deal with others using the same machinery. The early cars were a tinkerer’s device. You had to know a lot about cars to use them. And then at some point, you just got in and drove. At the moment, you still have to know a lot about the internet to use it: search engines, Facebook, the rules of Twitter, scams, viruses. That intimidates a lot of people. But less so now than 10 years ago. Navigating the Internet needs to become as socially commonplace as navigating traffic in the street. It’s very close. But we’re not quite there yet on the mass level.

Nor do I believe that the business models on the Internet are as settled as they were with TV in the 1970s. Least of all the advertising model. Amazon’s, Google’s and Apple’s models are done – subject to normal developments. But online media are still struggling, as are online services.

We will also see significant changes as access to the Internet goes mobile and connection speeds increase. There are still some possible transformations hiding there – mostly around video delivery and hyper-local services. I’d give it another 10 years (20 globally). By then the use of the internet will be a part of the everyday idiom in a way that it’s still not quite now (although it is more than in 2001). But I don’t think the progress will go unchecked. The prospect of flying cars ran into severe limitations of technology and humanity. After 2021, I would expect the internet to continue changing under the hood (just like cars have since the 1960s) but not much in the way of its human interface (just like cars since the 1960s).

There are still many things that need working out: the role of social media (like YouTube) and social networking (like Facebook). Will they put a layer on top of the internet or just continue being a place on the internet? And what business models other than advertising and in-game purchases will emerge? Maybe none. But I suspect that the Internet has about a decade of maturing left to get to where it will be recognisable in 2111. Today, cars from the 1930s don’t quite look like cars but those from the 1960s do. In this respect, I’d say the internet is somewhere in the 1940s or 50s – in usability, ubiquity, accessibility and its overall shape.

The most worrying thing about the future of the internet is a potential fight over the online commons. One possible development is that significant parts of the online space will become proprietary, with no rights of way. This is not just net neutrality but a possible consequence of the lack of it. It is possible that in the future so many people will only access the online space to take advantage of proprietary services tied to their connection provider that they may not even notice when at first some, and later on most, non-proprietary portions of the internet are no longer accessible. It feels almost unimaginable now, but I’m sure people in 16th-century East Anglia never thought their grazing commons would disappear (http://www.eh-resources.org/podcast/podcast2010.html). I’m not suggesting that this is a necessary development. Only that it is a configurational possibility.

As I’m writing this, a Tweet pops up on my screen mentioning another shock in Almaty, a place where I spent a chunk of time and where a friend of mine is about to take up a two-year post. I switch over to Google and find no reports of destruction. If not for Twitter, I might not have even heard about it. I go on Twitter and see people joking about it in Russian. I sort of do my own journalism for a few minutes, gathering sources. How could I still claim that the Internet changes nothing? Well, I did say “mostly”. Actually, for many individuals the Internet changes everything. They (like me) get to do jobs they wouldn’t, find out things they couldn’t and talk to people they shouldn’t. But it doesn’t change (or even augment) our basic flesh-bound humanity. Sure, I know about something that happened somewhere I care about that I otherwise wouldn’t. But there’s nothing more I can do about it. I did my own news gathering about as fast as it would have taken to listen to a BBC report on it (I’ve never had a TV and now only listen to live radio in the mornings). I can see some scenarios where the speed would be beneficial, but when the speed is not possible we adjust our expectations. I first visited Kazakhstan in 1995 and although I had access to company email, my mother knew what was happening at the speed of a postcard. And just the year before, during my visit to Russia, I got to send a few telegrams. You work with what you have.

All the same, the internet has changed the direction my life has taken since about 1998. It allowed me to fulfil my childhood dream of sailing on the Norfolk Broads; just yesterday it helped me learn a great new blues lick on the guitar. It gives me reading materials, a place to share my writing, and brings me closer to people I otherwise wouldn’t have heard of. It gives me podcasts like the amazing New Books in History or the China History Podcast! I love the internet! But when I think about my life before the internet, I don’t feel it was radically different. I can point at a lot of individual differences but I don’t have a sense of a pre-Internet me and a post-Internet me. And equally I don’t think there will be a pre-Internet and a post-Internet humanity. One of the markers of the industrial revolution is said to be its radical transformation of the shape of things. So much so that a person of 1750 would still have recognized the shape of the country of 1500, but a person in 1850 would no longer have seen them the same. I wonder if this is a bit too simplistic. I think we need to bring more rigor to the investigation of human contextual identity and embeddedness in the environment. But that is outside the scope of this essay.

It is too tempting to use technologies as a metaphor for expressing our aspirations. We do (and have always done) this through poetry, polemic, and prose. Our depictions of what we imagine the Internet society to be like appear in lengthy essays or chance remarks. They are carried even in tiny words like “now” when judiciously deployed. But sadly, exactly the same aspirations of freedom and universal sisterhood were attached to all the preceding communication technologies as well: print, the telegraph, TV. Our aspirations aren’t new. Our attachment to projecting these aspirations onto the world around us is likewise ancient. Even automated factory production has been hailed by poets as beautiful. And it is. We always live in the future just about to come, with regrets about a past that never was. But our prosaic present seems never to really change who we are. Humans, for better or worse.


When is subtle manipulation of data a flat out lie? Truth about Chinese prisons [UPDATE]


I’ve been on a China kick lately (reading and listening about its history and global position) and a crime and public policy kick (reading and listening to Mark Kleiman). I was struck when I heard Mark say in an interview that the US has more people in jail in absolute terms than China. So I went looking for some data. I found the most comprehensive source of info in the “World Prison Population List” published by the King’s College London International Centre for Prison Studies. Their top bullet point is alarming:

More than 9.25 million people are held in penal institutions throughout the world, mostly as pre-trial detainees (remand prisoners) or as sentenced prisoners. Almost half of these are in the United States (2.19m), China (1.55m plus pre-trial detainees and prisoners in ‘administrative detention’) or Russia (0.87m).

But I was surprised by China. The US has 738 people in prison per 100,000 of population, Russia 611 and China 111. England and Wales, with 158, has a higher rate than China. In fact, more than half of the countries of the world have a higher rate than China. I ran some numbers in a spreadsheet on what that means with respect to the total population of each country (throwing in the UK, India and Brazil for good measure):
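A minimal sketch of that spreadsheet arithmetic in Python – the prisoner counts are the List’s, while the national and world populations are my own rough mid-2000s approximations, so the computed rates land near, not exactly at, the List’s figures:

    world_prisoners = 9.25e6   # from the World Prison Population List
    world_population = 6.5e9   # approximate, mid-2000s

    # name: (prisoners, approximate national population)
    countries = {
        "US":              (2.19e6, 297e6),
        "Russia":          (0.87e6, 143e6),
        "China":           (1.55e6, 1310e6),
        # England & Wales count derived from the quoted rate of 158 per 100k
        "England & Wales": (0.084e6, 53e6),
    }

    for name, (prisoners, population) in countries.items():
        rate = prisoners / population * 100_000            # per 100,000
        share_of_prisoners = prisoners / world_prisoners
        share_of_population = population / world_population
        parity = share_of_prisoners / share_of_population  # 1.0 = parity
        print(f"{name}: {rate:.0f} per 100k, parity ratio {parity:.2f}")

On these numbers, the US holds about five times its population-parity share of the world’s prisoners and Russia about four times, while England and Wales sits just above parity and China just below it.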

And the results could not be clearer. China is not in any way comparable to Russia and the US when it comes to prison population. In fact, the UK is a worse offender (pun intended) than China when it comes to owning a disproportionate chunk of the global prison population; China itself is just under parity. India is by far the most lenient when it comes to incarceration, with only 3.5% of the world’s prison population to 16% of the world’s population. The Centre provides no estimate of the pre-trial and administrative detainees in China. But even if it were another half a million people, it would still only bring China to parity. To be as disproportionately prison-happy as the US, China would have to arrest more than 2.5 times as many people as it has in jail right now.

But the question arises: why did the Centre for Prison Studies choose to include the US, Russia and China on the same list? My suggestion is prejudice combined with number magic. The authors were trying to come up with a way to say that half of the world’s prisoners are held in a small number of countries. And China is “known” for its human rights record, so it must have seemed OK to list it there if it bumped up the numbers. But in effect, they managed to lie about China by saying something numerically true. They didn’t say anything flat-out incorrect, but they created an implicit category which clearly labels China as a bad country. This is a silly way to affirm Western supremacy where there is none.

There are lots of other things that could be estimated based on these numbers. I couldn’t find a clear estimate of how many people were sent to prison for things they didn’t do (we can’t just extrapolate from death-row exonerations), but if we set it at about 0.5%, we get that there may be more unjustly imprisoned people in the US than there are political prisoners in China (estimated at about 5,000) – or slightly fewer, if we count the same rate of miscarriage of justice across the rest of China’s prison population. This is, of course, too much guesswork for drawing any firm conclusions, but it certainly puts the numbers in some perspective.
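Spelled out (the 0.5% rate is the guess made above; the 5,000 political prisoners is the cited estimate):

    us_prisoners = 2.19e6
    china_prisoners = 1.55e6
    wrongful_rate = 0.005    # assumed 0.5% miscarriage-of-justice rate
    china_political = 5_000  # cited estimate of political prisoners in China

    us_wrongful = us_prisoners * wrongful_rate
    china_total = china_political + china_prisoners * wrongful_rate
    print(round(us_wrongful))  # ~10,950 – more than China's political prisoners
    print(round(china_total))  # ~12,750 – slightly more than the US figure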

UPDATE: I have actually interviewed Mark Kleiman (it was a long time ago but I only now remembered to update this post) and his estimate is that 3-4% of people in US prisons are there for something they did not do (often because of police misbehavior). Now, it is important to qualify this by saying that most of these people have done other things for which they deserved to go to prison but were not caught, so the miscarriage of justice is more technical than moral. But it shows the massive holes in the US’s vaunted “rule of law”. It is there, no doubt, when it comes to settling middle-class property and other business disputes (and by all accounts this would be a very important thing to have in many countries in the Middle East and in China). But it is not evenly distributed. I think it would not be completely outrageous to say that, for many of its citizens, the US is in effect a police state. Just as it could be said that, for many of China’s citizens, China is not!
