Category Archives: Metaphor

Moral Compass Metaphor Points to Surprising Places


I thought the moral compass metaphor had mostly left current political discourse, but it just cropped up – this time pointing from left to right – as David Plouffe accused Mitt Romney of not having one. As I keep repeating, George Lakoff once said “Metaphors can kill.” And Moral Compass has certainly done its share of homicidal damage, justifying wars, interventions and unflinching black-and-white resolve in the face of gray circumstances. It is a killer metaphor!

But with a bit of hacking it is not difficult to subvert it for good. Yes, I’m ready to declare, it is good to have a moral compass, provided you make it more like a “true compass”, to quote Plouffe. The problem is, as I learned during my sailing courses many years ago, that most people don’t understand how compasses actually work.

First, compasses don’t point to “the North”. They point to what is called the Magnetic North, which is quite a ways from the actual North. So if you want to get to the North Pole, you need to make a lot of adjustments to where you’re going. Sound familiar? Kind of like following your convictions. They often lead you to places that are different from where you say you’re going.

Second, the Magnetic North keeps moving. Yes, its position in relation to the “actual” North changes from year to year. So you have to adjust your directions to the times you live in! Sound familiar? Being a devout Christian led people to different actions in the 1500s, 1700s and 1900s. Although we keep saying the “North” of faith is the same, the actual needle showing us where to go points in different directions.

Third, the compass is easily distracted by its immediate physical context. The distribution of metals on a boat, for instance, will throw it completely off course. So it needs to be calibrated differently for each individual installation. Sound familiar?

And it’s also worth noting that the south magnetic pole is not the exact polar opposite of the north magnetic pole!
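For the technically minded, here is a minimal sketch in Python of what “adjusting” a compass reading actually involves (the numbers are made up; real deviation and variation depend on your particular boat, your location and the year):

```python
# A minimal sketch (illustrative numbers only) of correcting a compass reading:
# true course = compass reading + deviation + variation,
# where deviation comes from the boat's own metals and variation is the
# (slowly drifting) angle between Magnetic North and true North.

def true_heading(compass_heading, deviation, variation):
    """Return the true heading in degrees.
    Easterly corrections are positive, westerly ones negative."""
    return (compass_heading + deviation + variation) % 360

# A hypothetical boat and location: 3 degrees of easterly deviation,
# 7 degrees of easterly variation (a figure that changes from year to year).
print(true_heading(90, deviation=3, variation=7))  # prints 100
```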

So what can we learn from this newly hacked moral compass metaphor? Well, nothing terribly useful. Our real ethics and other commitments are always determined by the times we live in and the contexts we find ourselves in. And often we say we’re going one way when we’re actually heading another. But we already knew that. Everybody knows that! Even people who say it’s not true (the anti-relativists) know that! They are not complete idiots after all, they just pretend to be to avoid making painful decisions!

As so often, we can tell two stories about the change of views by politicians or anybody else.

The first story is of the feckless, unprincipled opportunist who changes her views following the prevailing winds – supported by the image of the weather vane. This person is contrasted with the stalwart who sticks to her principles even as all around her are swayed by the moral fad of the day.

The second story is of the wise (often old) and sage person who can change her views even as all around her persist in their simplistic fundamentalism. Here we have the image of the tree that bends in the wind but does not break. This person is contrasted with the bigot or the zealot who cannot budge even an inch from old prejudices even though they are obviously wrong.

So which story is true of Romney, Bush and Obama? We don’t know. In every instance, we have to fine-tune our image and very carefully watch out for our tendency to tell only the negative stories about people we don’t like. Whether one story is more convincing than the other depends, like the needle of a compass, on a variety of obvious and non-obvious contexts. The stories are here to guide us and help us make decisions. But we must strive to keep them all in mind at the same time. And this can be painful. They are a little bit like the Necker Cube, the Vase, the Duck/Rabbit or similar optical illusions. We know they’re all there but while we’re perceiving one, it is easy to forget the others are there. So it is uncomfortable. And also not a little bit inconvenient.

Is this kind of metaphorical nuance something we can expect in a time of political competition? It can be. Despite their bad rep, politicians and the media surrounding them can be nuanced. But often they’re not. So instead of nuance, when somebody next trots out the moral compass, whether you like them or not, say: “Oh, you mean you’re a liar, then?” and tell them about the Magnetic North!

 

Post Script: Actually, Plouffe didn’t say Romney didn’t have a moral compass. He said that “you need to have a true compass, and you’ve got to be willing to make tough calls.” So maybe he was talking about a compass adjusted for surrounding metals, whose needle’s advice we follow only after taking into account as much of our current context as we can. A “true compass” like a true friend! I agree with most of the “old Romney” and none of the “new Romney”. And I loved the old Obama created in the image of our unspoken liberal utopias, and I am lukewarm on the actual Obama (as I knew I would be), who steers a course pointing to the North of reality rather than the one magnetically attracting our needles. So if it’s that kind of moral compass after all, we’re in good hands!


There’s more to memory than the brain: Psychologists run clever experiments, make trivial claims, take gullible internet by storm


The online media are drawn to any “scientific” claims about the internet’s influence on our nature as humans like flies to a pile of excrement. Sadly, in this metaphor, only the flies are figurative. The latest heap of manure to instigate an annoying buzzing cloud of commentary from Wired to the BBC is an article by Sparrow et al. claiming to show that because there are search engines, we don’t have to remember as much as before. Most notably, if we know that some information can be easily retrieved, we remember where it can be obtained instead of what it is. As Wired reports:

“A study of 46 college students found lower rates of recall on newly-learned facts when students thought those facts were saved on a computer for later recovery.” http://www.wired.co.uk/news/archive/2011-07/15/search-engines-memory

Sparrow et al. designed a bunch of experiments that “prove” this claim. Thus, they holler, the internet changes how we remember. This was echoed by literally hundreds of headlines (Google claims over 600). Here’s a sample:

  • Google Effect: Changes to our Brains
  • Search engines like Google ‘changing the way human memory works’
  • Search engines change how memory works
  • Google Is Destroying Our Memories, Scientists Find
  • It pays to remember, search engines ruining our memory
  • Google rewiring the way we remember, study says
  • Has Google turned your memory to mush?
  • Internet search engines cause poor memory, scientists claim
  • Researchers: Search Engines Supplanting Our Memory
  • Google changing way brain remembers information

Many of these headlines are from “reputable” publications and they can be summarized by three words: Bullshit! Bullshit! Bullshit!

All they had to do was read this part of the abstract to understand that nothing like the stuff they blather about follows from the study:

“The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”

But they were not helped by Science, whose publication of these results is more of a self-serving stunt than a serious attempt to further knowledge. The title of the original, “Google Effects on Memory”, is all but designed to generate bat-shit crazy headlines. If the title were to be truthful, it would have to be “Google has no more effects on memory than a paper and pen or a friend.” Even the Science Magazine report on the study, entitled “Searching for the Google Effect on People’s Memory”, concludes it “doesn’t directly answer that question”. In fact, it says that the internet is filling the role of “transactive memory”, a term which describes the fact that we rely on people to remember things. Which means it has no impact on our brains at all. It just magnifies the transactive effects already in existence.

Any claim about a special effect of Google on any kind of memory can be debunked in two words: “shopping list”! All Sparrow et al. discovered is that the internet has changed us as much as a stub of a pencil and a grubby piece of paper. Meaning, not at all.

Some headlines cottoned onto this but they are few and far between:

  • Search Engine “Memory Loss” in Fact a Sign of Smart Behavior‎
  • Search Engines Ruin Our Memory, Make Us Smarter

Sparrow, the lead author of the study, when interviewed by Wired, said: “It’s very similar to how we use people in our lives. The internet is really just an interface with a lot of other people.”

In other words, what the internet has changed is the deployment of strategies we have always used for managing our memory. Sparrow et al. use an old term, “transactive memory”, to describe this but that’s needed only because cognitive psychology’s view of memory has been so limited. Memory is not just about storage and retrieval. Like all of our cognition, it is tied in with a whole host of strategies (sometimes lumped together under the heading of metacognition) that have a transactive and social dimension.

Let’s take the example of mobile phones. About 15 years ago I remembered about 4 phone numbers (home, work, mother, friend). Now, I remember none. They’re all stored in my mobile phone. What’s happened? I changed my strategy of information storage and retrieval because of the technology available. Was this a radical change? No, because I needed a lot more numbers, so I carried a little booklet where I kept the rest of them. So the mobile phone freed my memory of four items. Big deal! Arguably, these four items have a huge potential transactional impact. They mean that if my mobile phone is dead or lost, I cannot call the people most likely to be able to offer assistance. But how often does that happen? It hasn’t happened to me yet in an emergency. And in a non-emergency I have many backups. At any rate, in the past I was much more likely to be caught up in an emergency where I couldn’t find a phone at all. So the change has been fairly minimal.

But what’s more interesting here is that I didn’t realize this change until I heard someone talk about it. This transactional change is a topic of conversation, it is not just something that happened, it is part of common knowledge (and common knowledge only becomes common because a lot of people talk about it to a lot of other people).

The same goes for the claims made by Sparrow et al. The strategies used to maintain access to factual knowledge have changed with the technology available. But they didn’t just change, people have been talking about this change. “Just Google it” is a part of daily conversation. In his podcasts, Leo Laporte has often talked about how his approach to remembering has changed with the advent of Google. One early strategy for remembering websites was the bookmark. People made significant collections of bookmarks to sites, not unlike the Rolodexes of old. But about five or so years ago Google got a lot better at finding the right sites, so bookmarks went away. Personally, now that Chrome syncs bookmarks so seamlessly, I’ve started using them again. Wow, a change in technology facilitates a change in strategy. Sparrow et al. should do some research on this. Since I started using the Internet when it was still spelled with a capital “I”, I still remember the URLs of key websites: Google, Yahoo, Gmail, BBC, my own, etc. But there are people who don’t. I’ve personally observed a highly intelligent CEO of a company type “Google” into the Bing search box in Internet Explorer. And a few years ago, after a university changed its portal, I was soothing an angry professor who complained that the link to Google had been removed from the page that automatically came up on his computer. He had never learned how to get there any other way because he didn’t need to. Now he does. We acquire strategies to deal with information as we need them.

Before the availability of writing (and even after), there were a whole lot of strategies available for remembering things. These were part of the cultural conversation as much as the internet is today. Some of these strategies became part of religious ritual. Some of them are a part of a trickster’s arsenal – Joshua Foer describes some in Moonwalking with Einstein. Many are part of the art of “study skills” many people talk about.

All that Sparrow et al. demonstrated is that when some of these strategies are deployed, it has a small effect on recall. This is not a bad thing to know but it is in no way worth over 600 media stories. To evaluate this much reduced claim we would have to carefully examine their research methodology and the underlying assumptions, which is not what this post is about. It’s about the mistreatment of research results by media-hungry academics.

I don’t begrudge Sparrow et al. their 15 minutes of fame. I’m not surprised, dismayed or even disappointed at the link greed of the journalistic herd who fell over themselves to uncritically spread this research fluff. Also, many of the actual articles were quite balanced about the findings, but how much of that balance will survive the effect of a mendaciously bombastic headline is anybody’s guess. So all in all it’s business as usual in the popularization of “science” in the “media”.

Bohannon, J. (2011). Searching for the Google Effect on People’s Memory. Science, 333(6040), 277. DOI: 10.1126/science.333.6040.277

Sparrow, B., Liu, J., & Wegner, D. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science. DOI: 10.1126/science.1207745


The death of a memory: Missing metaphors of remembering and forgetting?


I have forgotten a lot of things in my life. Names, faces, numbers, words, facts, events, quotes. Just like for anyone, forgetting is as much a part of my life as remembering. Memories short and long come and go. But only twice in my life have I seen a good memory die under suspicious circumstances.

Both of these were good reliable everyday memories as much declarative as non-declarative. And both died unexpectedly without warning and without reprieve. They were both memories of entry codes but I retrieved both in different ways. Both were highly contextualised but each in a different way.

The first time was almost 20 years ago (in 1993) and it was the PIN for my first bank card (before they let you change them). I’d had it for almost two years by then using it every few days for most of that period. I remembered it so well that even after I’d been out of the country for 6 months and not even thinking about it once, I walked up to an ATM on my return and without hesitation, typed it in. And then, about 6 months later, I walked up to another ATM, started typing in the PIN and it just wasn’t there. It was completely gone. I had no memory of it. I knew about the memory but the actual memory completely disappeared. It wasn’t a temporary confusion, it was simply gone and I never remembered it again. This PIN I remembered as a number.

The second death occurred just a few days ago. This time, it was the entrance code to a building. But I only remembered it as a shape on the keypad (as I do for most numbers now). In the intervening years, I’ve memorised a number of PINs and entrance codes. Most I’ve forgotten since, some I remember even now (like the PIN of a card that expired a year ago but that I’d only used once every few months for many years). Simply, the normal processes you’d expect of memory. But this one I’d been using for about a year, since they’d changed it from the previous one. About five months ago I came back from a month-long leave and I remembered it instantly. But three days ago, I walked up to the keypad and the memory was gone. I’d used the keypad at least once if not twice that day already. But that time I walked up to the keypad and nothing. After a few tries I started wondering if I might be typing in the old code from before the change, so I flipped the pattern around (I had a vague memory of once using it to remember the new pattern) and it worked. But the working pattern felt completely foreign. Like one I’d never typed in before. I suddenly understood what it must feel like for someone to recognize their loved one but at the same time be sure that it’s not them. I was really discomfited by this impostor keypad pattern. For a few moments, it felt really uncomfortable – almost an out of body (or out of memory) experience.

The one thing that set the second forgetting apart from the first one was that I was talking to someone as it happened (the first time I was completely alone on a busy street – I still remember which one, by the way). It was an old colleague who visited the building and was asking me if I knew the code. And seconds after I confidently declared I did, I didn’t. Or I remembered the wrong one.

So in the second case, we could conclude that the presence of someone who had been around when the previous code was being used triggered the former memory and overrode the latter one. But the experience of complete and sudden loss, I recall vividly, was the same. None of my other forgettings were so instant and inexplicable. And I once forgot the face of someone I’d just met as soon as he turned around (which was awkward since he was supposed to come back in a few minutes with his car keys – so I had to stand in the crowd looking expectantly at everyone until the guy returned and nodded to me).

What does this mean for our metaphors of memory based on the various research paradigms? None seem to apply. These were not repressed memories associated with traumatic events (although the forgetting itself was extremely mildly traumatic). These were not quite declarative memories nor were they exactly non-declarative. They both required operations in working memory but were long-term. They were both triggered by context and had a muscle-memory component. But the first one I could remember as a number whereas the second one only as a shape, and only on that specific keypad. Yet neither was subject to long-term decay. In fact, both proved resistant to decay, surviving long or longish periods of disuse. They both were (or felt) as solid memories as my own name. Until they were there no more. The closest introspective analogy seems to me Luria’s man who remembered too much, who once forgot a white object because he had placed it against a white background in his memory, which made it disappear.

The current research on memory seems to be converging on the idea that we reconstruct our memories. Our brains are not just stores with shelves from which memories can be plucked. Although memories are highly contextual, they are not discrete objects encoded in our brain like files on a hard drive. But for these two memories, the hard drive metaphor seems more appropriate. It’s as if a tiny part of my brain that held those memories was corrupted and they simply winked out of existence at the flip of a bit. Just like a hard drive.

There’s a lot of research on memory loss, decay and reliability but I don’t know of any which could account for these two deaths. We have many models of memory which can be selectively applied to most memory related events but these two fall between the cracks.

All the research I could find is either on sudden specific-event-induced amnesia (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1961972/?page=1) or senescence (http://brain.oxfordjournals.org/content/89/3/539.extract). In both cases, there are clear causes of the memory loss, and the loss is much more total (complete events or entire identity). I could find nothing about the sudden loss of a specific reliable memory in a healthy individual (given that it only happened twice, 18 years apart – I was 21 when it happened first – I assume this is not caused by any pathology in my brain) not precipitated by any traumatic (or other) event. Yet, I suspect this happens all the time… So what gives?


Killer App is a bad metaphor for historical trends, good for pseudoteaching


Niall Ferguson wrote in The Guardian some time ago about how awful history education has become with these “new-fangled” 40-year-old methods like focusing on “history skills” that lead to kids leaving school knowing “unconnected fragments of Western history: Henry VIII and Hitler, with a small dose of Martin Luther King, Jr.” but not who was the reigning English monarch at the time of the Armada. Instead, he wants history to be taught his way: deep trends leading to the understanding of why the “West” rules and why Ferguson is the cleverest of all the historians that ever lived. He even provided (and how cute is this) a lesson plan!

Now, personally, I’m convinced that the history of historical education teaches us mostly that historical education is irrelevant to the success of current policy. Not that we cannot learn from history. But it’s such a complex source domain for analogies that even very knowledgeable and reasonable people can and do learn the exact opposites from the same events. And even if they learn the “right” things it still doesn’t stop them from being convinced that they can do it better this time (kind of like people in love who think their marriage will be different). So Ferguson’s bellyaching is pretty much an empty exercise. But that doesn’t mean we cannot learn from it.

Ferguson, who is a serious historian of financial markets, didn’t just write a whiney column for the Guardian, he wrote a book called Civilization (I’m writing a review of it and a few others under the joint title “Western Historiographical Eschatology” but here I’ll only focus on some aspects of it) and is working on a computer game and teaching materials. To show how seriously he takes his pedagogic mission and possibly also how hip and with it he is, Ferguson decided to not call his historical trends trends but rather “killer apps”. I know! This is so laugh out loud funny I can’t express my mirth in mere words:))). And it gets even funnier as one reads his book. As a pedagogical instrument this has all the practical value of putting a spoiler on a Fiat. He uses the term about 10 times (it’s not in the index!) throughout the book including one or two mentions of “downloading” when he talks about the adoption of an innovation.

Unfortunately for Ferguson, he wrote his book before the terms “pseudocontext” and “pseudoteaching” made an appearance in the edublogosphere. And his “killer apps” and the lesson plan based on them are a perfect example of both. Ferguson wrote a perfectly serviceable and an eminently readable historical book (even though it’s a bit of a tendentious mishmash). But it is still a historical book written by a historian. It’s not particularly stodgy or boring but it’s no different from the myriad other currently popular historical books that the “youth of today” don’t give a hoot about. He thinks (bless him) that using the language of today will have the youth flocking to his thesis like German princes to Luther. Because calling historical trends “killer apps” will bring everything into clear context and make all the convoluted syntax of even the most readable history book just disappear! This is just as misguided as thinking that talking about men digging holes at different speeds will make kids want to do math.

What makes it even more precious is that the “killer app” metaphor is wrong. For all his extensive research, Ferguson failed to look up “killer app” on Wikipedia or in a dictionary. There he would have found out that it doesn’t mean “a cool app” but rather an application that confirms the viability of an existing platform whose potential may have been questioned. There have only been a handful of killer apps. The one undisputed killer app was Visicalc which all of a sudden showed how an expensive computer could simplify the process of financial management through electronic spreadsheets and therefore pay for itself. All of a sudden, personal computers made sense to the most important people of all, the money counters. And thus the personal computer revolution could begin. A killer app proved that a personal computer is useful. But the personal computer had already existed as a platform when Visicalc appeared.

None of Ferguson’s “killer apps” of “competition, science, property rights, medicine, consumer society, work ethic” are this sort of beast. They weren’t something “installed” in the “West” which then proved its viability. They were something that, according to Ferguson anyway, made the West what it was. In that they are more equivalent to the integrated circuit than to Visicalc. They are the “hardware” that makes up the “West” (as Ferguson sees it), not the software that can run on it. The only possible exception is “medicine”, or more accurately “modern Western medicine”, which could be the West’s one true “killer app”, showing the viability of its platform for something useful and worth emulating. Also, “killer apps” required a conscious intervention, whereas all of Ferguson’s “apps” were things that happened on their own in myriad disparate processes – we can only see them as one thing now.

But this doesn’t really matter at all. Because Ferguson, like so many people who want to speak the language of “young people”, neglected to pay any attention whatsoever to how “young people” actually speak. The only people who actually use the term “killer app” are technology journalists or occasionally other journalists who’ve read about it. I did a quick Google search for “killer app” and did not find a single non-news reference where somebody “young” would discuss “killer apps” on a forum somewhere. That’s not to say it doesn’t happen but it doesn’t happen enough to make Ferguson’s work any more accessible.

This overall confusion is indicative of Ferguson’s book as a whole, which is definitely less than the sum of its parts. It is full of individual insight and a fair amount of wit but it flounders in its synthetic attempts. Not all his “killer apps” are of the same type, some follow from the others and some don’t appear to be anything more than Ferguson’s wishful thinking. They certainly didn’t happen on one “platform” – some seem the outcome rather than the cause of “Western” ascendancy. Ferguson’s just all too happy to believe his own press. At the beginning he talks about early hints around 1500 AD that the West might achieve ascendancy but at the end he takes a half millennium of undisputed Western rule for granted. But in 1500, “the West” still had 250 years to go before the start of the industrial revolution, 400 years before modern medicine, 50 years before Protestantism took serious hold and at least another 100 before the Protestant work ethic kicked in (if there really is such a thing). It’s all over the place.

Of course, there’s not much innovative about any of these “apps”. It’s nothing a reader of the Wall Street Journal editorial page couldn’t come up with. Ferguson does a good job of providing interesting anecdotes to support his thesis but each of his chapters meanders around the topic at hand with a smattering of unsystematic evidence here and there. Sometimes the West is contrasted with China, sometimes the Ottomans, sometimes Africa! It is hard to see how his book can help anybody’s “chronological understanding” of history that he’s so keen on.

But most troublingly, it seems in places that he mostly wrote the book as a carrier for ultra-conservative views that would make his writing more suitable for The Daily Mail than the Manchester Pravda: “the biggest threat to Western civilization is posed not by other civilizations, but by our own pusillanimity” unless of course it is the fact that “private property rights are repeatedly violated by governments that seem to have an insatiable appetite for taxing our incomes and our wealth and wasting a large portion of the proceeds”.


It’s almost as if the “civilized” historical discourse was just a veneer that peels off in places and reveals the real Ferguson, a comrade of Pat Buchanan whose “The Death of the West” (the Czech translation of which screed I was once unfortunate enough to review) came from the same dissatisfaction with the lack of our confidence in the West. Buchanan also recommends teaching history – or more specifically, lies about history – to show us what a glorious bunch of chaps the leaders of the West were. Ferguson is too good a historian to ignore the inconsistencies in this message and a careful reading of his book reveals enough subtlety not to want to reconstitute the British Empire (although the yearning is there). But the Buchananian reading is available and in places it almost seems as if that’s the one Ferguson wants readers to go away with.

From metaphor to fact, Ferguson is an unreliable thinker flitting between insight, mental shortcut and unreflected cliche with ease. Which doesn’t mean that his book is not worth reading. Or that his self-serving pseudo-lesson plan is not worth teaching (with caution). But remember I can only recommend it because I subscribe to that awful “culture of relativism” that says that “any theory or opinion, no matter how outlandish, is just as good as whatever it was we used to believe in.”

Update 1: I should perhaps point out that I think Ferguson’s lesson plan is pretty good, as such things go. It gives students an activity that engages a number of cognitive and affective faculties rather than just relying on telling. Even if it is completely unrealistic in terms of the amount of time allocated and the objectives set. “Students will then learn how to construct a causal explanation for Western ascendancy” is an aspiration, not a learning objective. Also, it and the other objectives really rely on the “historical skills” he derides elsewhere.

The lesson plan comes apart at about point 5, where the really cringeworthy part kicks in. Like in his book, Ferguson immediately assumes that his view is the only valid one – so instead of asking the students to compare two different perspectives on why the world looked like it did in 1913 as opposed to 1500 (or even compare maps at strategic moments), he simply asks them to come up with reasons why his “killer apps” are right (and use evidence while they’re doing it!).

I also love his aside: “The groups need to be balanced so that each one has an A student to provide some kind of leadership.” Of course, there are shelf-fuls of literature on group work – and pretty much all of it comes from the same sort of people who’re likely to practice “new history” – Ferguson’s nemesis.

I don’t think using Ferguson’s book and materials would do any more damage than using any other history book. Not what I would recommend, but who cares. I recently spent some time at Waterstone’s browsing through modern history textbooks and I think they’re excellent. They provide far more background to events and present them in a much more coherent picture than Ferguson. They perhaps don’t encourage the sort of broad synthesis that has been the undoing of so many historians over the centuries (including Ferguson) but they demonstrate working with evidence in a way he does not.

The reason most people leave school not knowing facts and chronologies is that they don’t care, not that they don’t have an opportunity to learn. And this level of ignorance has remained constant over decades. At the end of the day, history is just a bunch of stories not that different from what you see in a soap opera or a celebrity magazine, just not as relevant to a peer group. No amount of “killer applification” is going to change this. What remains at the end of historical education is a bunch of disconnected images, stories and conversation pieces (as many of them about the tedium of learning as about its content). But there’s nothing wrong with that. Let’s not underestimate the ability of uninterested people to become interested and start making the connections and filling in the gaps when they need to. That’s why all these “after-market” history books like Ferguson’s are so popular (even though for most people they are little more than tour guides to the exotic past).

Update 2: By a fortuitous coincidence, an announcement of the release of George L. Mosse‘s lectures on European cultural history: http://history.wisc.edu/mosse/george_mosse/audio_lectures.htm came across my news feeds. I think it is important to listen to these side by side with Ferguson’s seductively unifying approach to fully realize the cultural discontinuity in so many key aspects between us and the creators of the West. Mosse’s view of culture, as his Wikipedia entry reads, was of “a state or habit of mind which is apt to become a way of life”. The practice of history, after all, is a culture of its own, with its own habits of mind. In a way, Ferguson is asking us to adopt his habits of mind as our way of life. But history is much more interesting and relevant when it is, as Mosse’s colleague Harvey Goldberg put it on this recording, a quest after a “usable past” spurred by our sense of the “present crisis” or “present struggle”. So maybe my biggest beef with Ferguson is that I don’t share his justificationist struggle.


Language learning in literature as a source domain for generative metaphors about anything


In my thinking about things human, I often like to draw on the domain of second language learning as a source of analogies. The problem is that relatively few people in the English-speaking world have experience with language learning to such an extent that they can actually map things onto it. In fact, in my experience, even people who have a lot of experience with language learning are actually not aware of all the things that were happening while they were learning. And of course awareness of research or language learning theories is not to be expected. This is not helped by the language teaching profession’s propaganda that language learning is “fun” and “rewarding” (whatever that is). In fact my mantra of language learning (which I learned from my friend Bill Perry) is that “language learning is hard and takes time” – at least if you expect to achieve a level of competence above that of “impressing the natives” with your “please” and “thank you”. In that, language learning is like any other human endeavor but because of its relatively bounded nature — when compared to, for instance, culture — it can be particularly illuminating.

But how can not just the fact of language learning but also its visceral experience be communicated to those who don’t have that kind of experience? I would suggest engrossing literature.

For my money, one of the most “realistic” depictions of language learning with all its emotional and cognitive peaks and troughs can be found in James Clavell‘s “Shogun“. There we follow the Englishman Blackthorne as he goes from learning how to say “yes” to conversing in halting Japanese. Clavell makes the frustrating experience of not knowing what’s going on and not being able to express even one’s simplest needs real for the reader who identifies with Blackthorne’s plight. He demonstrates how language and cultural learning go hand in hand and how easy it is to cause a real life problem through a little linguistic misstep.

Shogun stands in stark contrast to most other literature where knowledge of language and its acquisition is viewed as mostly a binary thing: you either know it or you don’t. One of the worst offenders here is Karl May (virtually unknown in the English-speaking world) whose main hero Old Shatterhand/Kara Ben Nemsi effortlessly acquires not only languages but dialects and local accents which allow him to impersonate locals in May’s favorite plot twists. Language acquisition in May just happens. There’s never any struggle or miscommunication by the main protagonist. But similar linguistic effortlessness in the face of plot requirements is common in literature and film. Far more than magic or the existence of vampires, the thing that used to stretch my credulity the most in Buffy the Vampire Slayer was the ease with which linguistic facility was disposed of.

To be fair, even in Clavell’s book, there are characters whose linguistic competence is largely binary. Samurai either speak Portuguese or Latin or they don’t – and if the plot demands, they can catch even whispered colloquial conversation. Blackthorne’s own knowledge of Dutch, Spanish, Portuguese and Latin is treated in the same way, as if identical competence would be expected in all four (which would be completely unrealistic given his background and which resembles May’s Kara Ben Nemsi in many respects).

Nevertheless, when it comes to Japanese, even a superficially empathetic reader will feel they are learning Japanese along with the main character. Largely through Clavell’s clever use of limited translation.

This is all the more remarkable given that Clavell obviously did not speak Japanese and relied on informants. As the book “Learning from Shogun” pointed out, this led to many inaccuracies in the actual Japanese, which is why it advises readers not to rely on the language of Shogun too much.

Clavell (in all his books – not just Shogun) is even more illuminating in his depiction of intercultural learning and communication – the novelist often getting closer to the human truth of the process than the specialist researcher. But that is a blog post for another time.

Another novel I remember being an accurate representation of language learning is John Grisham‘s “The Broker”, in which the main character Joel Backman is landed in a foreign country by the CIA and is expected to pick up Italian in 6 months. Unlike in Shogun, language and culture do not permeate the entire plot, but language learning is a part of about 40% of the book. “The Broker” underscores another dimension which is also present in Shogun, namely teaching, teachers and teaching methods.

Blackthorne in Shogun orders an entire village (literally on pain of death) to correct him every time he makes a mistake. And then he’s excited by a dictionary and a grammar book. Backman spends a lot of time with a teacher who makes him repeat every sentence multiple times until he knows it “perfectly”. These are today recognized as bad strategies. Insisting on perfection in language learning is often a recipe for forming mental blocks (Krashen’s cognitive and affective filters). But on the other hand, it is quite likely that in totally immersive situations like Blackthorne’s or even partly immersive situations like Backman’s (who has English speakers around him to help), pretty much any approach to learning will lead to success.

Another common misconception reflected in both works is the demand language learning places on rote memory. Both Blackthorne and Backman are described as having exceptional memories to make their progress more plausible, but the sort of learning successes and travails described in the books would accurately reflect the experiences of anybody learning a foreign language even without such a memory. As both books show without explicit reference, it is their strategies in the face of incomprehension that help their learning rather than straight memorization of words (although that is by no means unnecessary).

So what are the things that knowing about the experience of second language learning can help us elucidate? I think that any progress from incompetence to competence can be compared to learning a second language. Particularly when we can enhance the purely cognitive view of learning with an affective component. Strategies as well as simple brain changes are important in any learning, which is why none of the brain-based approaches have produced unadulterated success. In fact, linguists studying language as such would do well to pay attention to the process of second language learning to more fully realize the deep interdependence between language and our being.

But I suspect we can be more successful at learning anything (from history or maths to computers or double-entry bookkeeping) if we approach it as a foreign language. Acknowledge the emotional difficulties alongside the cognitive ones.

Also, if we looked at expertise more as linguistic fluency than as a collection of knowledge and skills, we could devise a program of learning that would better take into account not only the humanity of the learner but also the humanity of the whole community of experts which he or she is joining.


You don’t have to be a xenophobe to think Britain being an island matters, but it helps!


I have a distinct feeling of having written about this somewhere before but can’t find it, so here’s the rant redux.

The images on which our thinking and reasoning are based can sometimes exert a powerful force. There are many mechanisms we use to counter that force but sometimes it is very difficult. It seems particularly difficult for some people to shed the idea that the fact that Britain is an island has any bearing on its immigration, housing or grave digging policy as compared to a continental state!

I was reminded of this again when the sociologist Kate Woodthorpe mentioned on this edition of Thinking Aloud http://www.bbc.co.uk/programmes/b0112gzd that the shortage of grave plots is a particular problem in the UK because ‘after all we are an island’.

This same trope pops up all over the place when it comes to population and housing. See here and here and here.

I don’t know how to say this without sounding like I’m stating the obvious. But that’s because it is just that obvious. In the current geopolitical context, every country is pretty much an island from the point of view of its population, housing or grave digging. France can’t just borrow a bit of Germany to house its excess immigrants or to dig a few graves (although I have this image of an underground corpse tunnel…). This is often accompanied by calling Britain a “small island”, which it is not, but even if it were half the size of Belgium, the same principle would apply.

I think the power of this image stems from its underlying schema, which is that of a platform in the middle of water off the edges of which it is possible to fall. And the more people you add to the platform, the more likely they are to fall off. This is so powerful that even the few people I pointed this out to took a lot of convincing.

Of course, this schema is popular with groups wanting to promote a certain point of view on migration but I think its power works regardless of ideology.


The natural logistics of life: The Internet really changes almost nothing


This is a post that has been germinating for a long time. But it was most immediately inspired by Marshall Poe‘s article claiming that “The Internet Changes Nothing“. And as it turns out, I mostly agree.

OK, this may sound a bit paradoxical. Twelve years ago, when I submitted my first column to be published, I delivered the text to my editor on a diskette. Now, I don’t even have an editor (or at least not for this kind of writing). I just click a button and my text is published. But! If my server logs are to be trusted, it will be read by tens or at best hundreds of people over its lifetime. That’s more than if I’d just written some notes to myself or published it in an academic journal but much less than if I published it in a national daily with a readership of hundreds of thousands. Not all of them would read what I write, but more than would ever read it on this blog.
So while democratising the publishing industry has worked for Kos, Huffington and many others, still many more blogs languish in obscurity. I can say anything I want but my voice matters little in the cacophony.

In terms of addressing an audience and having a voice, the internet has done little for most people. This is not because not enough people have enough to say but because there’s only so much content the world can consume. There is a much longer tail trailing behind Clay Shirky‘s long tail. It’s the tail of 5-post, 0-comment blogs and YouTube videos with 15 views. Even millions of typewriter-equipped monkeys with infinities of time can’t get to them all. Plus it’s hard to predict what will be popular (although educated guesses can produce results in aggregate). Years ago I took a short clip with my stills camera of a blacksmith friend of mine making a candle holder. It’s had 30 thousand views on YouTube. Why, I don’t know. There’s nothing particularly exciting about it but there must be some sort of a long tail longing after it. None of the videos I hoped would take off did. This is the experience of many if not most. Most attempts at communities fail because the people starting them don’t realize how hard it is to nurture them to self-sustainability. I experienced this with my first site, Bohemica.com. It got off to a really good start but since it was never my primary focus, the community kind of dissipated after a site redesign that was intended to foster it.

Just in terms of complete democratization of expression, the internet has done less for most than it may appear. But how about the speed of communication? I’m getting ready to do an interview with someone in the US, record it, transcribe it and translate it – all within a few days. The Internet (or more accurately Skype) makes the calling cheap, and the recording and transcription are made much quicker by tools I didn’t have access to even in the early 2000s when I was doing interviews. And of course, I can get the published product to my editor in minutes via email. But what hasn’t changed is the process. The interview, transcription and translation take pretty much the same amount of time. The work of agreeing with the editor on the parameters of the interview and arranging it with the interviewee takes pretty much as long as before. As does preparation for the interview. The only difference is the speed and ease of the transport of information from me to its target and from me to the information. It’s faster to get to the research subject – but the actual research still takes about the same amount of time, limited by the speed of my reading and the speed of my mind.

A chain is only as strong as its weakest link. And as long as humans are a part of the interface in a communication chain, the communication will happen at a human speed. I remember sitting over a printout of an obscure 1848 article on education from JSTOR with an academic who started doing research in the 1970s and reminiscing how in the old days, he’d have had to get on the train to London to get a thing like this in the British Library or at least arrange a protracted interlibrary loan. On reflection this is not as radical a change as it may seem. Sure, the information took longer to get here. But people before the internet didn’t just sit around waiting for it. They had other stuff to read (there’s always more stuff to read than time) and writing to get on with in the meantime. I don’t remember anyone claiming that modern scholarship is any better than scholarship from the 1950s because we can get information faster. I’m as much in awe of some of the accomplishments of the scholars of the 1930s as of people doing research now. And just as disdainful of others from any period. When reading a piece of scholarly work, I never care about the logistics of the flow of information that was necessary for the work to be completed (unless of course, it impinges on the methodology – where modern scholars are just as likely to take preposterous shortcuts as ancient ones). During the recent Darwin frenzy, we heard a lot about how he was the communication hub of his time. He was constantly sending and receiving letters. Today, he’d have Twitter and a blog. Would he somehow achieve more? No, he’d still have to read all those research reports and piddle about with his worms. And it’s just as likely he’d miss that famous letter from Brno.

Of course, another fallacy we like to commit is assuming that communication in the past was simply communication today minus the internet (or telephone, or name your invention). But that’s nonsense. I always like to remind people that “You’ve Got Mail”, in which Tom Hanks and Meg Ryan meet and fall in love online, is a remake of a 1940s film in which the protagonists sent each other letters. But these often arrived the same day (particularly in the same city). There were many more messenger services, pneumatic tubes, and a reliable postal service. As the Internet takes over the burden of information transmission, these are either disappearing or deteriorating, but that doesn’t mean that’s the state they were in when they were the chief means of information transmission. Before there were photocopiers and faxes, there were copyists and messengers (and both were pretty damn fast). Who even sends faxes now? We like to claim we get more done with the internet, but take just one step back and this claim loses much of its appeal. Sure there are things we can do now that we couldn’t do before, like attend a virtual conference or a webinar. That’s true and it’s really great. But what would the us of the 1980s have done? No doubt something very similar, like buying video tapes of lectures or attending Open Universities. And the us of the 1960s? Correspondence courses and pirate radio stations. We would have had far less choice but our human endeavor would have been roughly the same. The us of the 1930s, 1730s or 330s? That’s a more interesting question, but nobody’s claiming that the internet changed the us of those times. We mostly think of the Internet as changing the human condition as compared to the 1960s or 1980s. And there the technology changes have far outstripped the changes in human activity.

If it’s not true that the internet has enabled us to get things done in a qualitatively different manner on a personal level, it’s even less true that it has made a difference at the level of society. There are simply so many things involved and they take so much time because humans and human institutions are involved. Let’s take the “Velvet Revolution” of 1989, in which I was an eager if extremely marginal participant. On Friday, November 17 a bunch of protesters got roughed up, on November 27 a general strike was held and on December 10 the president resigned. In Egypt, the demonstrations started on January 25, lots of stuff happened, and on February 11 the president resigned. The Egyptians have the Czechs beat in their demonstration-to-resignation time by 6 days (17 v 23). This was the “Twitter” revolution. We didn’t even have mobile phones. Actually, we mostly didn’t even have phones. Is that what all this new global infrastructure has gotten us? Six days off the toppling of a dictator? Of course not. Twitter made no difference to what was happening in Egypt, at all, when compared to other revolutions. If anything, Al Jazeera played a bigger role. But on the ground, most people found out about things by being told by someone next to them. Just like we did. We even managed to get the international media up to speed pretty quickly, which could be argued is the main thing Twitter has done in the “Arab Spring” (hey, another thing the Czechs did and failed at).

Malcolm Gladwell got a lot of criticism for pointing out the same thing. But he’s absolutely right:

“high risk” social activism requires deep roots and strong ties http://www.newyorker.com/online/blogs/newsdesk/2011/02/does-egypt-need-twitter.html

And while these ties can be established and maintained purely virtually, it takes a lot more than a few tweets to get people moving. Adam Weinstein adds to Gladwell’s example:

Anyone who lived through 1989 or the civil rights era or 1967 or 1956 knows that media technology is not a motive force for civil disobedience. Arguing otherwise is not just silly; it’s a distraction from the real human forces at play here.
http://motherjones.com/mojo/2011/02/malcolm-gladwell-tackles-egypt-twitter

Revolutions simply take their time. On paper, the Russian October Revolution of 1917 took just a day to topple the regime (as did so many others). But there were a bunch of unsuccessful revolutions prior to that and of course a bloody civil war lasting for years following. To fully institutionalize its aims, the Russian revolution could be said to have taken decades and millions dead. Even in ancient times, sometimes things moved very quickly (and always more messily than we can retell the story). The point about revolutions and wars is that they don’t move at the speed of information but at the speed of a fast walking revolutionary or soldier. Ultimately, someone has to sit in the seat where the buck stops, and they can only get there so fast even with jets, helicopters and fast cars. Such are the natural logistics of human communal life.

This doesn’t mean that the speed or manner of communication doesn’t have some implications where logistics are concerned. But their impact is surprisingly small and easily absorbed by the larger concerns. In The Victorian Internet, Tom Standage describes how warship manifests could no longer be published in The Times during the Crimean War because they could be telegraphed to the enemy faster than the ships could get there (whereas in the past, a spy’s message would be no faster than the actual ships). Also, betting and other financial establishments had to make adjustments so as not to let the speed of information get in the way of making a profit. But if we compare the 1929 financial crisis with the one in 2008, we see that the speed of communication made little difference to the overall medium-term shape of the economy. Even though in 2008 we were getting up-to-the-second information about the falling banking houses, the key decisions about support or otherwise took about the same amount of time (days). Sure, some stock trading is now done in fractions of a second by computers because humans simply aren’t fast enough. But the economy still moves at about the same pace – the pace of lots and lots of humans shuffling about through their lives.

As I said at the start, although this post has been brewing in me for a while, it was most immediately inspired by that of Marshall Poe (of New Books in History) published about 6 months ago. What he said has become no less relevant with the passage of time.

Think for a moment about what you do on the Internet. Not what you could do, but what you actually do. You email people you know. In an effort to broaden your horizons, you could send email to strangers in, say, China, but you don’t. You read the news. You could read newspapers from distant lands so as to broaden your horizons, but you usually don’t. You watch videos. There are a lot of high-minded educational videos available, but you probably prefer the ones featuring, say, snoring cats. You buy things. Every store in the world has a website, so you could buy all manner of exotic goods. As a rule, however, you buy the things you have always bought from the people who have always sold them.

This is easy to forget. We call online shopping and food delivery a great achievement. But having shopping delivered was always an option in the past (and much more common than now when delivery boys are more expensive). Amazon is amazing but still just a glorified catalog.

But there are revolutionary inventions that nobody even notices. What about the invention of the space between words? None of the ancients bothered to put spaces between words or, in general, to read silently. It has been estimated that putting spaces between words not only allowed for silent reading (a highly suspicious activity until the 1700s) but also sped up reading by about 30%. Talk about a revolution! I’m a bit skeptical about the 30% number but still nobody talks about it. We think of audio books as a post-Edison innovation but in fact, all reading was partly listening not too long ago. Another forgotten invention is that of the blackboard, which made large-volume dissemination of information much more feasible through a simple reconfiguration of space and attention between pupil and teacher.

Visualization of the various routes through a ...

Image via Wikipedia

David Weinberger recently wrote what was essentially a poem about hypertext (a buzzword I haven’t heard for a while):

The old institutions were more fragile than we let ourselves believe. They were fragile because they made the world small. A bigger truth burst them. The world is more like a messy, inconsistent, ever-changing web than like a curated set of careful writings. Truth burst the world made of atoms.

Yes, there is infinite space on the Web for lies. Nevertheless, the Web’s architecture is a better reflection of our human architecture. We embraced as if it were always true, and as if we had known it all along, because it is and we did.
http://www.hyperorg.com/blogger/2011/05/01/a-big-question

It is remarkable how right and wrong he can be at the same time. Yes, the web is more of a replication of the human architecture. It has some notable strengths (lack of geographic limitation, speed of delivery of information) and weaknesses (no internal methods for exchange of tangible goods, relatively limited methods for synchronous face-to-face communication). I’d even go as far as calling the Internet “computer-assisted humanity”. But that just means that nothing about human organization online is a radical transformation of humanity offline.

What on Earth makes Weinberger think that the “existing institutions were fragile”? If anything they proved extremely robust. I find The Cluetrain Manifesto extremely inspiring and in many ways moving. But I find “The Communist Manifesto” equally profound without wanting to live in a world governed by it. “The Communist Manifesto” got the description of the world as it is perfectly right. Pretty much every other paragraph in it applies just as much today as it did then. But the predictions offered in the other paragraphs can really cause nothing but laughter today. “The Cluetrain Manifesto” gave the same kind of expression to the frustration with the Dilbert world of big corporations and asked for our humanity back. They were absolutely right.

Markets can be looked at as conversations and the internet can facilitate certain kinds of conversation. But they were wrong in assuming that there is just one kind of conversation. There are all sorts of group symbolic and ritualized conversations that make the world of humans go around. And they have never been limited just to the local markets. In practical terms, I can now complain about a company on a blog or in a tweet. And these can be viewed by others. But since there’s an Xsuckx.com website for pretty much every major brand, the incentive for companies to be responsive to this is relatively small. I have actually received some response to complaints from companies on Twitter. But only once did it lead to the resolution of the problem. Twitter is still a domain of “the elite”, so it pays companies to appease people there. However, should it reach the level of ubiquitous obscurity that many pages have, it will become even less appealing due to the lack of permanence of Tweets.

The problem is that large companies with large numbers of customers can only scale if they keep their interaction with those customers at certain levels. It was always thus and will always remain so. Not because of intrinsic attitudes but because of configurational limitations of time and human attention. Even the industrially oppressed call-center operator can only deal with about 10 customers an hour. So you have to build some 80/20 cost checks into customer support. Most of any company’s interaction with their customers will be one-to-many and not one-on-one. (And this incidentally holds for communications about the company by customers.)

There’s a genre of conversations in the business and IT communities that focuses on “why is X successful”. Ford of the 1920s, IBM of the 1960s, Apple of the 2000s. The constant in these conversations is the wilful effort of projecting the current conventional wisdom about business practices onto what companies do and used to do. This often requires significant reimagining of the present and the past. Leo Laporte and Paul Thurrott recently had a conversation (http://twit.tv/ww207) in which they were convinced that companies that interact and engage with their customers will be successful. But why then, one of them asks, is Microsoft, whose employees blog all the time, not more successful than Apple, which couldn’t be more tightlipped about its processes and whose attitude to customers is very much take it or leave it? Maybe it’s the Apple Store, one of them comments. That must be it. That engages the crap out of Apple’s customers. But neither of them asked: what is the problem with traditional stores, then? What is the point of the internet? The problem is that as with any metaphoric projection, the customer engagement metaphor is just partial. It’s more a way for us to grapple with processes that are fairly stable at the macro institutional level (which is the one I’m addressing here), but basically chaotic at the level of individual companies or even industries.

So I agree with Marshall Poe about the amount of transformation going on:

As for transformative, the evidence is thin. The basic institutions of modern society in the developed world—representative democracy, regulated capitalism, the welfare net, cultural liberalism—have not changed much since the introduction of the Internet. The big picture now looks a lot like the big picture then.

Based on my points above, I would even go as far as to argue that the basic institutions have not changed at all. Sure, foreign ministries now give advisories online, taxes can be paid electronically and there are new agencies that regulate online communication (ICANN) as well as old ones with new responsibilities. But as we read the daily news, can we perceive any new realities taking place? New political arrangements based on this new and wonderful thing called the Internet? No. If you read a good spy thriller from the 80s and one taking place now, you can hardly tell the difference. They may have been using payphones instead of the always on mobile smart devices we have now but the events still unfold in pretty much the same way: people go from place to place and do things.

Writing, print, and electronic communications—the three major media that preceded the Internet—did not change the big picture very much. Rather, they were brought into being by major historical trends that were already well underway, they amplified things that were already going on.

Exactly! If you read about the adventures of Sinuhe, it doesn’t seem that different from something written by Karl May or Tom Clancy. Things were happening as they were and whatever technology was available to be used, was used as well as possible. Remember that the telephone was originally envisioned to be a way of attending the opera – people calling in to a performance instead of attending live.

As a result, many things that happened could not have happened exactly in the same way without the tools of the age being there. The 2001 portion of the war in Afghanistan certainly would have looked different without precision bombing. But now in 2011 it seems to be playing out pretty much along the same lines experienced by the Brits and the Soviets. Meaning: it’s going badly.

The role of TV imagery in the ending of the Vietnam war is often remarked on. But that’s just coincidental. There have been plenty of unpopular wars that were ended because the population refused to support them and they were going nowhere. Long before the “free press”, the First Punic War was getting a bad rep at home. Sure, the government could have done a better job of lying to the press and its population but that’s hard to do when you have a draft. It didn’t work for Ramses II when he got his ass handed to him at Kadesh and it didn’t ultimately work for the Soviet misadventure in Afghanistan. The impact of the TV images can easily be overestimated. The My Lai Massacre happened in 1968, when the war was at about its mid-point. It still took 2 presidential elections and 1 resignation before it was really over. TV played a role but if the government had wanted, it could have kept the war going.

Communications tools are not “media” in the sense we normally use the word. A stylus is not a scriptorium, movable type is not a publishing industry, and a wireless set is not a radio network. In order for media technologies to become full-fledged media, they need to respond to some big-picture demand.

It is so easy to confuse the technology with the message. On brief reflection, the McLuhan quote we all keep repeating like sheep is really stupid. The medium is the medium and the message is the message. Sometimes they are so firmly paired we can’t tell them apart, sometimes they have nothing in common. What is the medium of this message? HTML, the browser, your screen, a blog post, the Internet, TCP/IP, Ethernet? They’re all involved in the transmission. We can choose whether we pay attention to some of them. If I’d posted somebody a parchment with this on it, it would certainly add to the message or become a part of it. But it still wouldn’t BE the message! Lots of artists, like Apollinaire with his calligrams, actually tried to blend the message and the medium in all sorts of interesting ways. But it was hard work. Leo Laporte (whose podcasts I enjoy listening to greatly) spent a lot of time trying to displace the word podcast with netcast to avoid an association with the medium. He claimed that his shows are not ‘podcasts’ but ‘shows’, i.e. real content. Of course, he somehow missed the fact that we don’t listen to programs but to the radio and don’t view drama but rather watch TV. The modes of transmission have always been associated with the message – including the word “show” – until they weren’t. We don’t mean anything special now when we say we ‘watch TV’.

Of course, the mode of transmission has changed how the “story” is told. Every new medium has always first tried to emulate the one it was replacing but ultimately found its own way of expression. But this is no different to other changes in styles. The impressionists were still using the same kinds of paints and canvasses, and modernist writers the same kind of inks and books. Every message exists in a huge amount of context and we can choose which of it we pay attention to at any one time. Sometimes the medium becomes a part of the context, sometimes it’s something else. Get over it!

There are some things Marshall Poe says I don’t agree with. I don’t think we need to reduce everything to commerce (as he does – perhaps having imbibed too much of Marxist historiography). But most importantly I don’t agree when he says that the Internet is mature in the same way that TV was mature in the 1970s. Technologies take different amounts of time to mature as widespread consumer utilities. It is always on the order of decades and sometimes centuries but there is no set trajectory. TV took less time than cars, planes took longer than TV, cars took longer than the Internet. (All depending on how we define mature – I’m mostly talking about wide consumer use – i.e. when only oddballs don’t use it and access is not disproportionately limited by socioeconomic status). The problem with the Internet is that there are still enough people who don’t use it and/or who can’t access it. In the 1970s, the majority had TVs or radios which were pretty much equivalent as a means of access to information and entertainment. TV was everywhere but as late as the 1980s, the BBC produced radio versions of its popular TV shows (Dad’s Army, All Gas and Gaiters, etc.) The radio performance of Star Wars was a pretty big deal in the mid-80s.

There is no such alternative access to the Internet. Sure, there are TV shows that play YouTube clips and infomercials that let you buy things. But it’s not the experience of the internet – more like a report on what’s on the Internet.

Even people who did not have TVs in the 1970s (both globally and nationally) could readily understand everything about their operation (later jokes about programming VCRs aside). You pushed a button and all there was to TV was there. Nothing was hiding. Nothing was trying to ambush you. People had to get used to the idiom of the TV, learn to trust some things and not others (like advertising). But the learning curve was flat.

The internet is more like cars. When you get in one, you need to learn lots of things from rules of manipulation to rules of the road. Not just how to deal with the machinery but also how to deal with others using the same machinery. The early cars were a tinkerer’s device. You had to know a lot about cars to use cars. And then at some point, you just got in and drove. At the moment, you still have to know a lot about the internet to use it. Search engines, Facebook, the rules of Twitter, scams, viruses. That intimidates a lot of people. But less so now than 10 years ago. Navigating the Internet needs to become as socially commonplace as navigating traffic in the street. It’s very close. But we’re not quite there yet on the mass level.

Nor do I believe that the business models on the Internet are as settled as they were with TV in the 1970s. Least of all the advertising model. Amazon’s, Google’s and Apple’s models are done – subject to normal developments. But online media are still struggling as are online services.

We will also see significant changes with access to the Internet going mobile as well as the increasing speed of access. There are still some possible transformations hiding there – mostly around video delivery and hyper-local services. I’d give it another 10 years (20 globally). By then the use of the internet will be a part of everyday idiom in a way that it’s still not quite now (although it is more than in 2001). But I don’t think the progress will go unchecked. The prospect of flying cars ran into severe limitations of technology and humanity. After 2021, I would expect the internet to continue changing under the hood (just like cars have since the 1960s) but not much in the way of its human interface (just like cars since the 1960s).

There are still many things that need working out. The role of social media (like YouTube) and social networking (like Facebook). Will they put a layer on top of the internet or just continue being a place on the internet? And what business models other than advertising and in-game purchases will emerge? Maybe none. But I suspect that the Internet has about a decade of maturing to go before it gets to where it will be recognisable in 2111. Today, cars from the 1930s don’t quite look like cars but those from the 1960s do. In this respect, I’d say the internet is somewhere in the 1940s or 50s. In usability, ubiquity, accessibility and its overall shape.

The most worrying thing about the future of the internet is a potential fight over the online commons. One possible development is that significant parts of the online space will become proprietary with no rights of way. This is not just net-neutrality but a possible consequence of the lack of it. It is possible that in the future so many people will only access the online space to take advantage of proprietary services tied to their connection provider that they may not even notice that at first some and later on most non-proprietary portions of the internet are no longer accessible. It feels almost unimaginable now but I’m sure people in 16th century East Anglia never thought their grazing commons would disappear (http://www.eh-resources.org/podcast/podcast2010.html). I’m not suggesting that this is a necessary development. Only that it is a configurational possibility.

As I’m writing this, a Tweet just popped up on my screen mentioning another shock in Almaty, a place where I spent a chunk of time and where a friend of mine is about to take up a two-year post. I switch over to Google and find no reports of destruction. If not for Twitter, I may not have even heard about it. I go on Twitter and see people joking about it in Russian. I sort of do my own journalism for a few minutes, gathering sources. How could I still claim that the Internet changes nothing? Well, I did say “almost”. Actually, for many individuals the Internet changes everything. They (like me) get to do jobs they wouldn’t, find out things they couldn’t and talk to people they shouldn’t. But it doesn’t change (or even augment) our basic flesh-bound humanity. Sure, I know about something that happened somewhere I care about that I otherwise wouldn’t. But there’s nothing more I can do about it. I did my own news gathering about as fast as it would have taken to listen to a BBC report on this (I’ve never had a TV and now only listen to live radio in the mornings.) I can see some scenarios where the speed would be beneficial but when the speed is not possible we adjust our expectations. I first visited Kazakhstan in 1995 and although I had access to company email, my mother knew about what was happening at the speed of a postcard. And just the year before, during my visit to Russia, I got to send a few telegrams. You work with what you have.

All the same, the internet has changed the direction my life has taken since about 1998. It allowed me to fulfil my childhood dream of sailing on the Norfolk Broads; just yesterday it helped me learn a great new blues lick on the guitar. It gives me reading materials, a place to share my writing, brings me closer to people I otherwise wouldn’t have heard of. It gives me podcasts like the amazing New Books in History or the China History Podcast! I love the internet! But when I think about my life before the internet, I don’t feel it was radically different. I can point at a lot of individual differences but I don’t have a sense of a pre-Internet me and post-Internet me. And equally I don’t think there will be a pre-Internet and post-Internet humanity. One of the markers of the industrial revolution is said to be its radical transformation of the shape of things. So much so that a person of 1750 would still recognize the shape of the country of 1500 but a person in 1850 would no longer recognize the country of 1750. I wonder if this is a bit too simplistic. I think we need to bring more rigor to the investigation of human contextual identity and embeddedness in the environment. But that is outside the scope of this essay.

It is too tempting to use technologies as a metaphor for expressing our aspirations. We do (and have always done) this through poetry, polemic, and prose. Our depictions of what we imagine the Internet society is like appear in lengthy essays or chance remarks. They are carried even in tiny words like “now” when judiciously deployed. But sadly exactly the same aspirations of freedom and universal sisterhood were attached to all the preceding communication technologies as well: print, the telegraph, or the TV. Our aspirations aren’t new. Our attachment to projecting these aspirations into the world around us is likewise ancient. Even automated factory production has been hailed by poets as beautiful. And it is. We always live in a future just about to come, with regrets about a past that never was. But our prosaic present seems never to really change who we are. Humans, for better or worse.


When is subtle manipulation of data a flat out lie? Truth about Chinese prisons [UPDATE]


I’ve been on a China kick lately (reading and listening about its history and global position) and a crime and public policy kick (reading and listening to Mark Kleiman). I was struck when I heard Mark say in an interview that the US has more people in jail in absolute terms than China. So I went about looking for some data. I found the most comprehensive source of info in the “World Prison Population List” published by the King’s College London International Centre for Prison Studies. Their top bullet point is alarming:

More than 9.25 million people are held in penal institutions throughout the world, mostly as pre-trial detainees (remand prisoners) or as sentenced prisoners. Almost half of these are in the United States (2.19m), China (1.55m plus pre-trial detainees and prisoners in ‘administrative detention’) or Russia (0.87m).

But I was surprised by China. The US has 738 people in prison per 100,000 of population, Russia 611 and China 111. England and Wales have more than China with 158. In fact, more than half of the countries of the world have more than China. In the spreadsheet below, I ran some numbers on what that means with respect to the total population of each country (throwing in the UK, India and Brazil for good measure):

And the results could not be clearer. China is not in any way comparable to Russia and the US when it comes to prison population. In fact, the UK is a worse offender (pun intended) when it comes to owning a disproportionate chunk of the global prison population. It is just under parity. India is by far the most lenient when it comes to incarceration, with only 3.5% of the world’s prison population to 16% of the world’s population. The Centre provides no estimate of the pre-trial and administrative detainees in China. But even if it was another half a million people, it would still only bring China to parity. To be as disproportionately prison-happy as the US, China would have to imprison more than 2.5 times as many people as it has in jail right now.
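The spreadsheet embed may not survive here, so below is a minimal sketch (in Python) of the kind of disproportionality calculation behind it. The prisoner counts are the World Prison Population List figures quoted above; the world and national population totals are rough mid-2000s assumptions, so treat the output as illustrative only.

```python
# A rough sketch of the disproportionality calculation described above.
# Prisoner counts are the World Prison Population List figures quoted in
# the text; the population figures are approximate mid-2000s numbers and
# are assumptions for illustration only.

WORLD_PRISONERS = 9_250_000
WORLD_POPULATION = 6_500_000_000  # assumed rough world population

countries = {
    # name: (prisoners, approximate population)
    "USA":    (2_190_000, 300_000_000),
    "China":  (1_550_000, 1_300_000_000),
    "Russia": (870_000, 143_000_000),
}

for name, (prisoners, population) in countries.items():
    prisoner_share = prisoners / WORLD_PRISONERS
    population_share = population / WORLD_POPULATION
    # A ratio above 1 means the country holds a larger share of the world's
    # prisoners than its share of the world's population.
    ratio = prisoner_share / population_share
    print(f"{name}: {prisoner_share:.1%} of world prisoners, "
          f"{population_share:.1%} of world population, ratio {ratio:.2f}")
```

Run this way, the US and Russia come out at several times parity while China stays below 1 – which is the whole point of the comparison.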

But the question arises: why did the Centre for Prison Studies choose to include the US, Russia and China on the same list? My suggestion is prejudice combined with number magic. The authors were trying to come up with a way of being able to say that half of the world’s prisoners are held in a small number of countries. And China is “known” for its human rights record, so it must be OK to list it there if it will bump up the numbers. But in effect, they managed to lie about China by saying something numerically true. The list doesn’t say anything flat out incorrect but it creates an implicit category which clearly labels China as a bad country. This is a silly way to affirm Western supremacy where there is none.

There are lots of other things that could be estimated based on these numbers. I couldn’t find a clear estimate of how many people were sent to prison for things they didn’t do (we can’t just extrapolate from death row exonerations) but if we set it at about 0.5%, we find that there may be more unjustly imprisoned people in the US than there are political prisoners in China (estimated at about 5,000) – or slightly fewer, if we apply the same rate of miscarriage of justice across the rest of China’s prison population. This is, of course, too much guesswork for drawing any firm conclusions but it certainly puts the numbers in some perspective.
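To make the guesswork explicit, here is the same back-of-envelope arithmetic written out; the 0.5% wrongful-imprisonment rate and the roughly 5,000 political prisoners are the guesses from the paragraph above, not data.

```python
# Back-of-envelope comparison from the paragraph above. The 0.5% rate and
# the 5,000 political prisoners figure are guesses, not measured data.

US_PRISONERS = 2_190_000
CHINA_PRISONERS = 1_550_000
WRONGFUL_RATE = 0.005        # assumed rate of wrongful imprisonment
CHINA_POLITICAL = 5_000      # rough estimate of political prisoners

us_wrongful = US_PRISONERS * WRONGFUL_RATE
china_wrongful_plus_political = CHINA_POLITICAL + CHINA_PRISONERS * WRONGFUL_RATE

print(f"US wrongly imprisoned (est.): {us_wrongful:,.0f}")                       # ~11,000
print(f"China political + wrongly imprisoned (est.): {china_wrongful_plus_political:,.0f}")  # ~12,750
```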

UPDATE: I have actually interviewed Mark Kleiman (it was a long time ago but I only now remembered to update here) and his estimate is that 3-4% of people in US prisons are there for something they did not do (often because of police misbehavior). Now it is important to qualify this by saying that most of these people have done other things for which they deserve to go to prison but were not caught, so the miscarriage of justice is more technical than moral. But it shows the massive holes in the US’s vaunted “rule of law”. It is there, no doubt, when it comes to settling middle-class property and other business disputes (and by all accounts this would be a very important thing to have in many countries in the Middle East and China). But it is not evenly distributed. I think it would not be completely outrageous to say that, for many of its citizens, the US is in effect a police state. Just like it could be said that for many of China’s citizens, China is not!


Poetry without metaphor? Sure but can it darn your socks?

Chinese poet Li Bai from the Tang dynasty, in ...

Image via Wikipedia

Over on Language Log, Victor Mair puts to rest the notion that all English expressions have to be tensed and thus prevent timeless poetry. He shares his translation of a poem by a 13th century Chinese poet thus:

Autumn Thoughts by Ma Zhiyuan

Withered wisteria, old tree, darkling crows –
Little bridge over flowing water by someone’s house –
Emaciated horse on an ancient road in the western wind –
Evening sun setting in the west –
Broken-hearted man on the horizon.

And indeed, he is right. The poem exudes timelessness (if a lack of something can be exuded). It is harder for some languages than others to avoid certain grammatical commitments (like gender or number), which makes translation more difficult, but there’s always a way.

What struck me about the poem was something different. It is so rich in imagery and yet so poor in figurative language. This is by no means unique but worth a note. In fact, there is no figurative language there at all if we discount such foundational figures as “sun setting in the West”, “broken-hearted” or “man on the horizon”. Had I not known this was a Chinese poem, I could have easily believed it was a description of a 17th century Dutch master’s painting or even something by Constable.

But of course, the conceptual work we’re doing while reading this poem is not that different from the work we would do if it were full of metaphor. I’m working on a post on how adjectives and predicates are really very similar to metaphors, and this is one example that illustrates the point. In order to appreciate this poem, we have to construct a series of fairly rich images and then we have to blend them with each other to place them in the same scene. We have to interpret “the broken-hearted man on the horizon”: is it just another image, the poet, or ourselves? In other words, we have to map from the images presented by the poem to the images available to us through our experience. Which, in short, is the same work we have to do when interpreting metaphors and similes. But the title is the clincher: “Autumn Thoughts” – what if the whole poem is a metaphor and the elements in it just figures signifying age, loneliness, the passage of time and so on and so forth? There are simply too many mappings to make. And there the escape from metaphor ends.


Religion, if it exists, is negotiation of underdetermined metaphoric cognition [UPDATED]


Preamble

Richard Buchta - Portrait of a Zande witchdoctor

Image via Wikipedia

I am an old atheist and a new agnostic. I don’t believe in God in the old-fashioned Russellian way – if I don’t believe in Krishna, Zeus, water sprites or the little teapot orbiting the Sun, I don’t believe in God and the associated supernatural phenomena (monotheism my foot!). However, I am agnostic about nearly everything else, and everything else, in the new atheist way, is pretty much science and reason. If history is any judge (and it is), most of what we believe to be scientific truths today is bunk. This makes me feel not at all superior to people of faith. Sure, I think what they believe is a stupid and irrational thing to believe, but I don’t think they are stupid or irrational people for believing it. The smartest people believe the most preposterous things – just look at Newton, Chomsky or Dawkins.

But one thing I’m pretty certain about is religion. Or rather, I’m pretty certain it does not exist. It is in many ways an invention of the Enlightenment and, just like equality and brotherhood, it only makes sense until you see the first person winding up the guillotine. Religion only makes sense if you want to set a certain set of beliefs and practices aside, most importantly to deprive their holders of power and legitimacy.

But is it a useful concept for deliberation about human universals? I think on balance it is not. Religion is a collection of stated beliefs, internal beliefs and public and private practices. In other words, religion is a way of life for a community of people. Or to be etymological about it, it is what binds the community together. The nature of the content of those beliefs is entirely irrelevant to the true human universal: a shared collection of beliefs and practices develops over a short amount of time inside any group of people. And when I say beliefs, I mean all explicit and implicit knowledge and applied cognition.

In this sense, modern secular humanism is just as much a religion as rabid evangelicalism.

On the mundane nature of sacred thought

So, why, the scientist asks, do all cultures develop knowledge systems that include belief in the supernatural? That’s because they don’t. For instance, as Geertz so beautifully described in his reinterpretation of the Azande, witchcraft isn’t supernatural. It is the natural explanation after everything else has failed. We keep forgetting that until Einstein, everybody believed in this (as Descartes pointed out) supernatural force called gravity that could somehow magically transmit motion across vast distances. And now (as Angel and Demetis point out) we believe in magical sheets that make gravity all nice and natural. Or maybe strings? Give me a break!

What about the distinction between the sacred and mundane, you ask? Well, that obviously exists, including the liminality between them. But sacred/mundane is not limited to anything supernatural and magical – just look at the US treatment of the flag or citizenship. In fact, even the most profoundly sacred and mystical has a significant mundane dimension necessitated by its logistics.

There are no universals of faith. But there are some strong tendencies among the world’s cultures: Ancestor worship, belief in superhuman and non-human (often invisible, sometimes disembodied) agents, sympathetic magic and ritual (which includes belief in empowered and/or sapient substances and objects). This is combined with preserving and placating personal and collective practices.

All of the above describes western atheists as much as the witchcraft believing Azande. We just define the natural differently. Our beliefs in the power of various pills and the public professions of faith in the reality of evolution or the transformative nature of the market fall under the above just as nicely as the rain dance. Sure I’d much rather go to a surgeon with an inflamed appendix than a witch doctor but I’d also much rather go to a renowned witch doctor than an unknown one if that was my only choice. Medicine is simply witchcraft with better peer review.

Leaving the merits of the modern world aside, the question remains: why do humans seem to converge on similar content in their beliefs? Helen de Cruz and the commenters on her post about the naturalness of religious belief (http://www.cognitionandculture.net/Helen-De-Cruz-s-blog/does-atheism-challenge-the-naturalness-of-religious-belief.html) give a great overview of the current debate on the topic.

They pretty much put to rest some of the evolutionary notions and the innateness of mind/body dualism. I particularly like the proposition Helen de Cruz made building on Pascal’s remark that some people “seem so made that [they] cannot believe”. “For those people,” continues de Cruz, “religious belief requires a constant cognitive effort.”

I think this is a profound statement. I see it as being in line with my thesis of frame negotiation. Some things require more cognitive effort for some people than other things for other people. It doesn’t have to be religion. We know reading requires more cognitive effort for different people in different ways (dyslexics being one group with a particular profile of cognitive difficulties). So does counting, painting, hunting, driving cars, cutting things with knives, taking computers apart, etc. These things are susceptible to training and practice to different degrees with different people.

So it makes perfect sense on the available evidence that different people require different levels of cognitive effort to maintain belief in what is axiomatic for others.

In the comments Mitch Hodge contributed a question to “researchers who propose that mind-body dualism undergirds representations of supernatural entities: What do you do with all of the anthropological evidence that humans represent most all supernatural entities as embodied? How do disembodied beings eat, wear clothes, physically interact with the living and each other?”

This is really important. Before you can talk about content of belief, you need to carefully examine all its aspects. And as I tried to argue above, starting with religion as a category already leads us down certain paths of argumentation that are less than telos-neutral.

But the answer to the “are humans natural mind-body dualists” does not have to be to choose one over the other. I suggest an alternative answer:

Humans are natural schematicists and schema negotiators

What does that mean? Last year, I gave a talk (in Czech) on “Schematicity and underdetermination as two forgotten processes in mind and language”. In it I argue that operating on schematic or in other ways underdetermined concepts is not only possible but built into the very fabric of cognition and language. It is extremely common for people to hold incomplete images (Lakoff’s pizza example was the one that set me on this path of thinking) of things in their mind. For instance, on slide 12 of the presentation below, I show different images that Czechs submitted to a competition run online by a national newspaper on “what does baby Jesus look like” (Note: In Czech, it is baby Jesus – or Ježíšek – who delivers the presents on Christmas Eve). The images ranged from an angelic adult and a real baby to an outline of the baby in the light to just a light.

[slideshare id=6059571&doc=schematicnostanedourcenost-101207060558-phpapp02]
This shows that people not only hold underdetermined images but that those images are determined to varying degrees (in my little private poll, I came across people who imagined Ježíšek as an old bearded man, and personally, I did not explicitly associate the diminutive ježíšek with the baby Jesus until I had to translate it into English). Discussions like those around the Trinity or the embodied nature of key deities are the result of conversations about what parts of a shared schema it is acceptable to fill out and how to fill them out.

It is basically metaphor (or as I call it, frame) negotiation. Early Christianity was full of these debates and it is not surprising that it wasn’t always the most cognitively parsimonious image that won out.

It is further important that humans have various explicit and implicit strategies to deal with infelicitous schematicity or schema clashes, one of which is to defer parts of their cognition to a collectively recognised authority. I spent years of my youth believing that although the Trinity made no sense to me, there were people to whom it did make sense and to whom, as guardians of sense, I would defer my own imperfect cognition. But any study of the fights over the nature of the Trinity is a perfect illustration of how people negotiate over their imagery. And as in any negotiation, it is not just the power of the argument but also the power of the arguer that determines the outcome.

Christianity is not special here in any regard but it does provide two millennia of documented negotiation of mappings between domains full of schemas and rich images. It starts with St Paul’s denial that circumcision is a necessary condition of being a Christian and goes on into the conceptual contortions surrounding the Trinity debates. Early Christian eschatology also had to constantly renegotiate its foundations as the world stubbornly refused to end, and was in that no different from modern eschatology – be it religion or science based. Reformation movements (from monasticism to Luther or Calvin) also exhibit this profound contrasting of imagery and exploration of mappings, rejecting some, accepting others, ignoring most.

All of these activities lead to paradoxes and thus to the spurring of heretical and reform movements. Waldensians or Lutherans or Hussites all arrived at their disagreement with the dogma through painstaking analysis of the imagery contained in the text. Arianism was in its time the “thinking man’s” Christianity, because it made a lot more sense than the Nicene consensus. No wonder it experienced a post-reformation resurgence. But the problems it exposed were equally serious and it was ultimately rejected for probably good reason.

How is it possible that the Nicene consensus held so long as the mainstream interpretation? Surely, Luther could not have been the first to notice the discrepancies between liturgy and scripture. Two reasons: the inventory of expression and the underdetermination of conceptual representations.

I will deal with the idea of inventory in a separate post. Briefly, it is based on the idea of cognitive grammar that language is not a system but rather a patterned inventory of symbolic units. This inventory is neither static nor does it have clear boundaries, but it functions to constrain what is available for both speech and imagination. Because of the nature of symbolic units and their relationships, the inventory (a usage-based beast) is what constrains our ability to say certain things although they are possible by pure grammatical or conceptual manipulation. By the same token, the inventory makes it possible to say things that make no demonstrable sense.

Frame (or metaphor) negotiation operates on the inventory but also has to battle against its constraints. The units in the inventory range in their schematicity and determination but they are all schematic and underdetermined to some degree. Most of the time this aids effortless conceptual integration. However, a significant proportion of the time, particularly for some speakers, the conceptual integration hits a snag. A part of a schematic concept usually left underdetermined is filled out, and this prevents easy integration, so an appropriate mapping needs to be negotiated.

For example, it is possible to say that Jesus is God and Jesus is the Son of God even in the same sentence, and as long as we don’t project the offspring mapping onto the identity mapping, we don’t have a problem. People do these things all the time. We say things like “taking a human life is the ultimate decision” and “collateral damage must be expected in war” and abhor people calling soldiers “murderers”. But the alternative to “justified war”, namely “war is murder”, is just as easy to sanction given the available imagery. So people have a choice.

But as soon as we flesh out the imagery of “X is son of Y” and “X is Y” we see that something is wrong. This in no way matches our experience of what is possible. Ex definitio, “X is son of Y” OR “X is Y”. Not AND. So we need to do other things to make the nature of “X is Y” compatible with “X is the son of Y”. And we can either do this by attributing a special nature to one or both of the statements. Or we can acknowledge the problem and defer knowledge of the nature to a higher authority. This is something we do all the time anyway.

Drawing from René Descartes' (1596-1650) in

Image via Wikipedia

So to bring the discussion to the nature of embodiment, there is no difficulty for a single person or a culture to maintain that some special being is disembodied and yet can perform many embodied functions (like eating). My favorite joke, told to me by a devout Catholic, begins: “The Holy Trinity are sitting around a table talking about where they’re going to go for their vacation…” Neither my friend nor I assumed that the Trinity is in any way an embodied entity, but it was nevertheless very easy for us to talk about its members as embodied beings. Another Catholic joke:

A sausage goes to Heaven. St Peter is busy so he sends Mary to answer the Pearly Gates. When she comes back he asks: “Who was it?” She responds: “I don’t know, but it sure looked like the Holy Ghost.”

Surely a more embodied joke is very difficult to imagine. But it just illustrates the availability of rich imagery to fill out schemas in a way that forces us to have two incompatible images in our heads at the same time. A square circle, of sorts.

There is nothing sophisticated about this. Any society is going to have members who are more likely to explore the possibilities of integration of items within its conceptual inventory. In some cases, it will get them ostracised. In most cases, it will just be filed away as an idiosyncratic vision that makes a lot of sense (but is not worth acting on). That’s why people don’t organize their lives around the dictums of stand-up comedians. What they say often “makes perfect sense” but this sense can be filed away into the liminal space of our brain where it does not interfere with what makes sense in the mundane or the sacred context of conceptual integration. And in a few special cases, this sort of behavior will start new movements and faiths.

These “special” individuals are probably present in quite a large number in any group. They’re the people who like puns or the ones who correct everyone’s grammar. But no matter how committed they are to exploring the imagery of a particular area (content of faith, moral philosophy, use of mobile phones or genetic engineering) they will never be able to rid it of its schematicity and indeterminacies. They will simply flesh out some schemas and strip off the flesh of others. As Kuhn said, a scientific revolution is notable not just for the new it brings but also for all the old it ignores. And not all of the new will be good and not all of the old will be bad.

Not that I’m all that interested in the origins of language, but my claim is that the negotiation of mappings between underdetermined schemas is at the very foundation of language and thought. And as such it must have been present from the very beginning of language – it may have even predated language. “Religious” thought and practice must have emerged very quickly: as soon as one established category came into contact with another category. The first statement of identity or similarity was probably quite shortly followed by “well, X is only Y, in as much as Z” (expressed in grunts, of course). And since bodies are so central to our thought, it is not surprising that imagery of our bodies doing special things, or of us not having a body and yet preserving our identity, crops up pretty much everywhere. Hypothesizing some sort of innate mind-body dualism is taking an awfully big hammer to a medium-sized nail. And looking for an evolutionary advantage in it is little more than the telling of campfire stories of heroic deeds.

Epilogue

To look for an evolutionary foundation of religious belief is little more sophisticated than arguing about the nature of virgin birth. If nothing else, the fervor of its proponents should be highly troubling. How important is it that we fill in all the gaps left over by neo-Darwinism? There is nothing special about believing in Ghosts or Witches. It is an epiphenomenon of our embodied and socialised thought. Sure, it’s probably worth studying the brains of mushroom-taking mystical groups. But not as a religious phenomenon. Just as something that people do. No more special than keeping a blog. Like this.

Post Script on Liminality [UPDATE a year or so later]

Cris Campbell on his Genealogy of Religion Blog convinced me with the aid of some useful references that we probably need to take the natural/supernatural distinction a bit more seriously than I did above. I still don’t agree it’s as central as is often claimed but I agree that it cannot be reduced to the sacred v. mundane as I tried above.  So instead I proposed the distinction between liminal and metaliminal in a comment on the blog. Here’s a slightly edited version (which may or may not become its own post):

I read with interest Hultkrantz’s suggestion for an empirical basis for the concept of the supernatural but I think there are still problems with this view. I don’t see the warrant for the leap from “all religions contain some concept of the supernatural” to “supernatural forms the basis of religion”. Humans need a way to talk about the experienced and the adduced and this will very ‘naturally’ take the form of “supernatural” (I’m aware of McKinnon’s dissatisfaction with calling this non-empirical).

On this account, science itself is belief in the supernatural – i.e. postulating invisible agents outside our direct experience. And in particular, speculative cognitive science and neuroscience have to make giant leaps of faith from their evidence to interpretation. What are the chances that much of what we consider to be givens today will in the future be regarded as much more sophisticated than phrenology? But even if we are more charitable to science and place its cognition outside the sphere of that of a conscientious sympathetic magician, the use of science in popular discourse is certainly no different from the use of supernatural beliefs. There’s nothing new here. Let’s just take the leap from the science of electricity to Frankenstein’s monster. Modern public treatments of genetics and neuroscience are essentially magical. I remember a conversation with an otherwise educated philosophy PhD student who was recoiling in horror from genetic modification of fruit (using fish genes to do something to oranges) as unnatural – or monstrous. Plus we have stories of special states of cognition (absent-minded professors, en-tranced scientists, the rigour of study) and ritual gnostic purification (referencing, peer review). The strict naturalist prescriptions of modern science and science education are really not that different from “thou shalt have no other gods before me.”

I am giving these examples partly as an antidote to the hidden normativity in the term ‘supernatural’ (I believe it is possible to mean it non-normatively but it’s not possible for it not to be understood that way by many) but also as an example of why this distinction is not one that relates to religion as opposed to general human existence.

However, I think Hultkrantz’s objection to a complete removal of the dichotomy by people like Durkheim and Hymes is a valid one, as is his claim of the impossibility of reducing it to the sacred/profane distinction. Instead, I’d like to propose a different label, and consequently framing, for it: meta-liminal. By “meta-liminal” I mean beyond the boundaries of daily experience and ethics (a subtle but to me important difference from non-empirical). The boundaries are revealed to us in liminal spaces and times (as outlined by Turner) and what is beyond them can be behaviours (Greek gods), beings (leprechauns), values (Platonic ideals) or modes of existence (land of the dead). But most importantly, we gain access to them through liminal rituals where we stand with one foot on this side of the boundary and with the other on the “other” side. Or rather, we temporarily blur and expand the boundaries and can be in both places at once. (Or possibly both.) This, however, I would claim is a discursively psychological construct and not a cognitively psychological construct. We can study the neural correlates of the various liminal rituals (some of which can be incredibly mundane – like wearing a pin) but searching for a single neural or evolutionary foundation would be pointless.

The quote from Nemeroff and Rozin defining “the supernatural” as that which “generally does not make sense in terms of the contemporary understanding of science” sums up the deficiency of the normative or crypto-normative use of “supernatural”. But even the strictly non-normative use suffers from it.

What I’m trying to say is that not only is religious cognition not a special kind of cognition (a point I share with MacKendrick), but neither is any other type of cognition (no matter how Popperian its supposed heuristics). The different states of transcendence associated with religious knowing (gnosis), ranging from a vague sense of fear, comfort or awe to a dance or mushroom induced trance, are not examples of a special type of cognition. They are universal psychosomatic phenomena that are frequently discursively constructed as having an association with the liminal and meta-liminal. But can we postulate an evolutionary inevitability that connects a new-age whackjob who proclaims that there is something “bigger than us” to a sophisticated theologian to Neil DeGrasse Tyson to a jobbing shaman or priest to a simple client of a religious service? Isn’t it better to talk of cultural opportunism that connects liminal emotional states to socially constructed liminal spaces? Long live the spandrel!

This is not a post-modernist view. I’d say it’s a profoundly empirical one. There are real things that can be said (provided we are aware of the limitations of the medium of speech). And I leave open the possibility that within science, there is a different kind of knowledge (that was, after all, my starting point; I was converted to my stance by empirical evidence, so I am willing to respond to more).
