Pseudo-education as a weapon: Beyond the ridiculous in linguistic prescriptivism

Teacher in primary school in northern Laos (Photo credit: Wikipedia)

Most of us are all too happy to repeat clichés about education to motivate ourselves and others to engage in this liminal ritual of mass socialization. One such phrase is “knowledge is power”. It is used to refer not just to education, of course, but to all sorts of intelligence gathering from business to politics. We tell many stories of how knowing something made the difference, from knowing how to make something work to knowing a secret only the hero or villain is privy to. But in education, in particular, it is not just knowing that matters to our tribe but also the display of knowing.

The more I look at education, the more I wonder how much of what is in the curriculum is about signaling rather than a true need for knowledge. Signaling has been used in the economics of education to indicate the complex value of a university degree but I think it goes much deeper. We make displays of knowledge through the curriculum to make the knowledge itself more valuable. Curriculum designers in all areas engage in complex dances to show how the content maps onto the real world. I have called this education voodoo, other people have spoken of cargo cult education, and yet others have talked about pseudo teaching. I wrote about pseudo teaching when I looked at Niall Ferguson‘s amusing, I think I called it cute, lesson plan of his own greatness. But pseudo teaching only describes the activities performed by teachers in the mistaken belief that they have real educational value. When pseudo teaching relies on pseudo content, I think we can talk more generally about “pseudo education”.

We were all pseudo-educated on a number of subjects: history, science, philosophy, etc. In history lessons, the most cherished “truths” of our past are distorted on a daily basis (see Lies My Teacher Told Me). From biology, we get to remember misinformation about the theory of evolution, starting with attributing the very concept of evolution to Darwin or reducing natural selection to the nonsense of survival of the fittest. We may remember the names of a few philosophers but it rarely takes us any further than knowing winks at a Monty Python sketch or the mouthing of unexamined platitudes like “the unexamined life is not worth living.”

That in itself is not a problem. Society, despite the omnipresent alarmist tropes, is coping quite well with pseudo-education. Perhaps, it even requires it to function because “it can’t handle the truth”. The problem is that we then judge people on how well they are able to replicate or respond to these pseudo-educated signals. Sometimes, these judgments are just a matter of petty prejudice but sometimes they could have an impact on somebody’s livelihood (and perhaps the former inevitably leads to the latter in aggregate).

Note: I have looked at some history and biology textbooks and they often offer a less distorted portrayal of their subject than what seems to be the outcome in public consciousness. Having the right curriculum and associated materials, then, doesn’t seem to be sufficient to avoid pseudo-education (if indeed avoiding it is desirable).

The one area where pseudo-education has received a lot of attention is language. Since time immemorial, our ways of speaking have served to identify us with one group or layer of society or another. And from its very beginning, education sought to play a role in slotting its charges into the linguistic groups with as high a prestige as possible (or rather, as appropriate). And even today, in the academic literature we see references to the educated speaker as an analytic category. This is not a bad thing. Education correlates with exposure to certain types of language and engagement with certain kinds of speech communities. It is not the only way to achieve linguistic competence in those areas but it is the main way for the majority. But becoming an “educated speaker” in this sense is mostly a by-product of education. A sufficient amount of the curriculum and classroom instruction is aimed in this direction to count for something but most students acquire the in-group ways of speaking without explicit instruction (disadvantaging those who would benefit from it). But probably a more salient output of language education is supposed knowledge about language (as opposed to knowledge of language).

Here students are expected not only to speak appropriately but also to know how this “appropriate language” works. And here is where most of what happens in schools can be called pseudo-education. Most teachers don’t really have any grasp of how language works (even those who took intro to linguistics classes). They are certainly not aware of the more complex issues around the social variability of language or its pragmatic dimension. But even in simple matters like grammar and usage, they are utterly clueless. This is often blamed on past deficiencies of the educational system where “grammar was not taught” to an entire generation. But judging by the behavior of previous generations who received ample instruction in grammar, that is not the problem. Their teachers were just as inept at teaching about language as teachers are today. They might have been better at labeling parts of speech and their tenses but that’s about it. It is possible that in the days of yore, people complaining about the use of the passive were actually better able to identify passive constructions in a text, but it didn’t make the complaint any less inaccurate (Orwell made a right fool of himself when it turned out that he used more passives than is the norm in English despite kvetching about their evil).

No matter what the content of the school curriculum or the method of instruction, “educated” people go about spouting nonsense when it comes to language. This nonsense seems to have its origins in half-remembered injunctions of their grade school teacher. And because the prime complainers are likely either to have been “good at language” or to have envied the teacher’s approbation of those who were described as being “good at language”, what we end up with in the typical language maven is a mishmash of linguistic prejudice and an unjustified feeling of smug superiority. Every little linguistic label that a person can remember is then trotted out as a badge of honor regardless of how good that person is at deploying it.

And those who spout the loudest get a reputation as “grammar experts” and everybody else who preemptively admits to being “not good at grammar” defers to them and lets themselves be bullied by them. The most recent case of such bullying was a screed by an otherwise intelligent person in a position of power who decided that he would no longer hire people with bad grammar.

This prompted me to issue a rant on Google Plus, repeated below:

The trouble with pseudo educated blowhards complaining about grammar, like +Kyle Wiens, is that they have no idea what grammar is. 90% of the things they complain about are spelling problems. The rest is a mishmash of half-remembered objections from their grade school teacher who got them from some other grammar bigot who doesn’t know their tense from their time.

I’ve got news for you, Kyle! People who spell they’re, there and their interchangeably know the grammar of their use. They just don’t differentiate their spelling. It’s called homophony, dude, and English is chock full of it. Look it up. If your desire rose as you smelled a rose, you encountered homophony. Homophony is a ubiquitous feature of all languages. And equally, all languages have some high-profile homophones that cause trouble for spelling Nazis but almost never for actual understanding. Why? Because when you speak, there is no spelling.

Kyle thinks that what he calls “good grammar” is indicative of attention to detail. Hard to say, since he, presumably always perfectly “grammatical”, failed to pay attention to the little detail of the difference between spelling and grammar. The other problem is that I’m sure Kyle and his ilk would be hard pressed to list more than a dozen or so of these “problems”. So his “attention to detail” should really be read as “attention to the few details of language use that annoy Kyle Wiens”. He claims to have noticed a correlation in his practice but forgive me if I don’t take his word for it. Once you have developed a prejudice, no matter how outlandish, it is dead easy to find plenty of evidence in its support (while not paying attention to any of the details that disconfirm it).

Sure, there’s something to the argument that spelling mistakes in a news item, a blog post or a business newsletter will have an impact on its credibility. But hardly enough to worry about. Not that many people will notice, and those who do will have plenty of other cues to make a better informed judgment. If a misplaced apostrophe is enough to sway them, then either they’re not convinced of the credibility of the source in the first place, or they’re not worth keeping as a customer. Journalists and bloggers engage in so many more significant pursuits that damage their credibility, like fatuous and unresearched claims about grammar, that the odd it’s/its slip-up can hardly make much more than (or is it then) a dent.

Note: I replaced ‘half-wit’ in the original with ‘blowhard’ because I don’t actually believe that Kyle Wiens is a half-wit. He may not even be a blowhard. But you can be a perfectly intelligent person, nice to kittens and beloved by co-workers, and be a blowhard when it comes to grammar. I also fixed a few typos, because I pay attention to detail.

My issue is not that I believe that linguistic purism and prescriptivism are in some way anomalous. In fact, I believe the exact opposite. I think, following a brilliant insight by my linguistics teacher, that we need to think of these phenomena as integral to our linguistic competence. I doubt that there is a linguistic community of any size above 3 that doesn’t enact some form of explicit linguistic normativity.

But when pseudo-knowledge about language is used as an instrument of power, I think it is right to call out the perpetrators and try to shame them. Sure, linguists laugh at them, but I think we all need to follow the example of the Language Log and expose all such examples to public ridicule. Countermand the power.

Post Script: I have been similarly critical of the field of Critical Discourse Analysis which, while based on an accurate insight about language and power, in my view goes on to abuse the power that stems from knowledge about language to clobber its opponents. My conclusion has been that if you want to study how people speak, study it for its own sake, and if you want to engage with the politics of what they say, do that on political terms, not on linguistic ones. That doesn’t mean that you shouldn’t point out when you feel somebody is using language in manipulative or misleading ways, but if you need the apparatus of a whole academic discipline to do it, you’re doing something wrong.

Character Assassination through Metaphoric Pomposity: When one metaphor is not enough

George Lakoff is known for saying that “metaphors can kill” and he’s not wrong. But in that, metaphors are no different from any other language. The simple amoral imperative “Kill!” will do the job just as nicely. Nor are metaphors any better or worse at obfuscating than any other type of language. But they are very good at their primary purpose which is making complex connections between domains.

Metaphors can create very powerful connections where none existed before. And we are just as often seduced by that power as inspired to new heights of creativity. We don’t really have a choice. Metaphoric thinking is in our DNA (itself a metaphor). But just like with DNA, context is important, and sometimes metaphors work for us and sometimes they work against us. The more powerful they are, the more cautious we need to be. When faced with powerful metaphors we should always look for alternatives and we should also explore the limits of the metaphors and the connections they make. We need to keep in mind that nothing IS anything else but everything is LIKE something else.

I was reminded of this recently when listening to an LSE lecture by the journalist Andrew Blum who was promoting his book “Tubes: Behind the Scenes at the Internet”. The lecture was reasonably interesting although he tried to make the subject seem more important than it perhaps was through judicious reliance on the imagery of covertness.

But I was particularly struck by the last example where he compared Facebook’s and Google’s data centers in Colorado. Facebook’s center was open and architecturally modern, a part of the local community. Facebook also shared the designs of the center with the global community and was happy to show Blum around. Google’s center was closed, ugly and opaque. Google viewed its design as part of its competitive advantage and, most importantly, didn’t let Blum past the parking lot.

From this Blum drew far-reaching conclusions, which he amplified by merely implying them. If architecture is an indication of intent, he implied, then we should question what Google’s ugly, hidden intent is as opposed to Facebook’s shining, open intent. Answering a question later, he cited prosecutors in New England and in Germany as compounding evidence, people who are also frustrated with Google’s secrecy, only reluctantly admitting that Google had invited him to speak at their Authors Speak program.

Now, Blum may have a point regarding the secrecy surrounding that Google data center: there’s probably no great competitive advantage in its design and no abiding security reason not to show its insides to a journalist. But using this comparison to imply anything about the nature of Facebook or Google is just an example of typical journalistic dishonesty. Blum is not lying to us. He is lying to himself. I’m sure he convinced himself that since he was so clever to come up with such a beautiful analogy, it must be true.

The problem is that pretty much anything can be seen through multiple analogies. And any one of those analogies can be stopped at any point or be stretched out far and wide. A good metaphor hacker will always seek out an alternative analogy and explore the limits of the domain mapping of the dominant one. In this case, not much work is needed to uncover what a pompous idiot Blum is being.

First, does this “facilities reflect attitudes” assumption extend to what we know about the two companies in other spheres? And here the answer is NO. Google lets you liberate your data, Facebook does not. Google lets you opt out of many more things than Facebook. Google sponsors many open source projects; Facebook is more closed source (even though they do contribute heavily to some key projects). When Facebook acquires a company, they often just shut it down, leaving customers high and dry; Google closes projects too, but it has repeatedly released the source code of these projects to the community. Now, is Google the perfect open company? Hardly. But Facebook, with its interest in keeping people in its silo, can never be held up as a shining beacon of openness. It might be at best a draw (if we can even make a comparison) but I’d certainly give Google far more credit in the openness department. But the analogy simply fails when exposed to current knowledge. I can only assume that Blum was so happy to have come up with it that he wilfully ignored the evidence.

But can we come up with other analogies? Yes. How about the fact that the worst dictatorships in history have come up with grand, idealistic architectural designs? Designs and structures that spoke of freedom, beautiful futures and the love of the people for their leaders. Given that we know all that, why would we ever trust a design to indicate anything about the body that commissioned it? Again, I can only assume that Blum was seduced by his own cleverness.

Any honest exploration of this metaphor would lead us to abandon it. It was not wrong to raise it; in the world of cognition, anything is fair. But having looked at both the limits of the metaphor and alternative domain mappings, it’s pretty obvious that it’s not doing us any good. It supports a biased political agenda.

The moral of the story is don’t trust one-metaphor journalists (and most journalists are one-metaphor drones). They might have some of the facts right but they’re almost certainly leaving out large amounts of relevant information in pursuit of their own figurative hedonism.

Disclaimer: I have to admit, I’m rather a fan of Google’s approach to many things and a user of many of their services. However, I have also been critical of Google on many occasions and have come to be wary of many of their practices. I don’t mind Facebook the company, but I hate that it is becoming the new AOL. Nevertheless, I use many of Facebook’s services. So there.

Who-knows-what-how stories: The scientific and religious knowledge paradox

I never meant to listen to this LSE debate on modern atheism because I’m bored of all the endless moralistic twaddle on both sides but it came on on my MP3 player and before I knew it, I was interested enough not to skip it. Not that it provided any Earth-shattering new insights but on balance it had more to contribute to the debate than a New Atheist diatribe might. And there were a few stories about how people think that were interesting.

The first speaker was the well-known English cleric Giles Fraser, who regaled the audience with his conversion story, starting as an atheist student of Wittgenstein and becoming a Christian who believes in a “Scripture-based” understanding of Christianity. The latter is not surprising given how pathetically obsessed Wittgensteinian scholars are with every twist and turn of their bipolar master’s texts.

But I thought Fraser’s description of how he understands his faith in contrast to his understanding of the dogma was instructive. He says: “Theology is faith seeking understanding. Understanding is not the basis on which one has faith but it is what one does to try to understand the faith one has.”

In a way, faith is a kind of axiomatic knowledge. It’s not something that can or need be usefully questioned, but it is something on which to base our further dialog. Obviously, this cannot be equated with religion but it can serve as a reminder of the kind of knowledge religion works off. And it is only in some contexts that this axiomatic knowledge needs to be made explicit or even just pointed to – this only happens when conceptual frames are brought into conflict and need to be negotiated.

Paradox of utility vs essence and faith vs understanding

This kind of knowledge is often contrasted with scientific knowledge. Knowledge that is held to be essentially superior due to its utility. But if we look at the supporting arguments, we are faced with a kind of paradox.

The paradox is that scientists claim that their knowledge is essentially different from religious (and other non-scientific) knowledge, but the warrant for this claim of special status stems from the method of the knowledge’s acquisition rather than its inherent nature. They cite falsificationist principles as the foundation of this essential difference and peer review as their practical embodiment (strangely making this one claim immune from the dictum of falsification – of which, I believe, there is ample supply).

But that is not a very consistent argument. The necessary consequences of the practice of peer review fly in the face of the falsificationist agenda. The system of publication and peer review that is in place (and that will always emerge) is guaranteed to minimize any fundamental falsification of the central principles. Meaning that falsification happens along Kuhnian rather than Popperian lines. Slowly, in bursts, and with great gnashing of teeth and destroying of careers.

Now, religious knowledge does not cite falsificationism as the central warrant of its knowledge. Faith is often given as the central principle underlying religious knowledge and engagement with diverse exegetic authorities as the enactment of this principle. (Although, crucially, this part of religion is a modern invention brought about by many frame negotiations. For the most part, when it comes to religion, faith and knowing are coterminous. Faith determines the right ways of knowing but it is only rarely negotiated.)

But in practice the way religious knowledge is created, acquired and maintained is not very different from scientific knowledge. Exegesis and peer review are very similar processes in that they both refer to past authorities as sources of arbitration for the expression of new ideas.

And while falsificationism is (perhaps with the exception of some Buddhist or Daoist schools) never the stated principle of religious hermeneutics, in principle it is hard to describe the numerous religious reforms, movements and even workaday conversions as anything but falsificationist enterprises. Let’s just look at the various waves of monastic movements, from Benedictines to Dominicans or Franciscans to Jesuits. They all struggled with reconciling the current situation with the evidence (of the interpretation of scripture) and based their changed practices on the result of this confrontation.

And what about religious reformers like Hus, Luther or Calvin? Wasn’t their intellectual enterprise in essence falsificationist? Or St Paul or Arianism? Or scholastic debates?

‘But religion never invites scrutiny, it never approaches problems with an open mind,’ crow the New Atheists. But neither does science at the basic level. Graduate students are told to question everything but they soon learn that this questioning is only good as long as it doesn’t disrupt the research paradigm. Their careers and livelihoods depend on not questioning much of anything. In practice, this is not very different from the personal reading of the Bible – you can have a personal relationship with God as long as it’s not too different from other people’s personal relationships.

The stories we know by

One of the most preposterous pieces of scientific propaganda is Dawkins’ story about an old professor who went to thank a young researcher for disproving his theory. I recently heard it trotted out again on an episode of Start The Week where it was used as proof positive of how science is special – this time as a way of establishing its superiority over political machinations. It may have happened but it’s about as rare as a priest being convinced by an argument about the non-existence of God. The natural and predominant reactions to research “disproving” a theory are to belittle it, deny its validity, ostracise its author, deny its relevance or simply ignore it (a favourite pastime of Chomskean linguists).

So in practice, there doesn’t seem to be that much of a difference between how scientific and religious knowledge work. They both follow the same cognitive and sociological principles. They both have their stories to tell their followers.

Conversion stories are very popular in all movements and they are always used on both sides of the same argument. There are stories about conversions from one side to the other in the abortion/anti-abortion controversy, environmental debates, diet wars, alternative medicine, and they are an accompanying feature of pretty much any scientific revolution – I’ve even seen the lack of prominent conversions cited as an argument (a bad one) against cognitive linguistics. So a scientist giving examples of the formerly devout seeing the light through a rational argument is just enacting a social script associated with the spreading of knowledge. It’s a common frame negotiation device, not evidence of anything about the nature of the knowledge to which the person was converted.

There are other types of stories about knowledge that scientists like to talk about as much as people of religion. There are stories of the long arduous path to knowledge and stories of mystical gnosis.

The path-to-knowledge stories are told when people talk about the training it takes to achieve a kind of knowledge. They are particularly popular about medical doctors (through medical dramas on TV) but they also exist about pretty much any profession, including priests and scientists. These stories always have two components, a liminal one (about the high jinks students got up to while avoiding the administration’s strictures and lovingly mocking the crazy, demanding teachers) and a sacral one (about the importance of the hard learning that the subject demands). These stories are, of course, based on the sorts of things that happen. But their telling follows a cultural and discursive script. (It would be interesting to do some comparative study here.)

The stories of mystical gnosis are also very interesting. They are told about experts, specialists who achieve knowledge that is too difficult for normal mortals to master. In these stories people are often described as losing themselves in the subject, setting themselves apart, or becoming obsessively focused. This is sometimes combined or alternated with descriptions of achieving sudden clarity.

People tell these stories about themselves as well as about others. When told about others, these stories can be quite schematic – the absent-minded professor, the geek/nerd, the monk in the library, the constantly practising musician (or even boxer). When told about oneself, the sudden-light stories are very common.

Again, these stories reflect a shared cultural framing of people’s experiences of knowledge in the social context. But they cannot be given as evidence of the special nature of one kind of knowledge over another. Just like stories about Gods cannot be taken as evidence of the superiority of some religions.

Utility and essence revisited

But, the argument goes, scientific knowledge is so useful. Just look at all the great things it brought to this world. Doesn’t that mean that its very “essence” is different from religious knowledge?

Here, I think, the other discussant in the podcast, the atheist philosopher, John Gray can provide a useful perspective: “The ‘essence’ is an apologetic invention that someone comes up with later to try and preserve the core of an idea [like Christianity or Marxism] from criticism.”

In other words, this argument is also a kind of storytelling. And we again find these utility and essence stories in many areas, following remarkably similar scripts. That does not mean that the stories are false or fabricated or that what they are told about is in some way less valid. All it means is that we should be skeptical about arguments that rely on them as evidence of exclusivity.

Ultimately, looking for the “essence” of any complex phenomenon is always a fool’s errand. Scientists are too fond of their “magic formula” stories where incredibly complex things are described by simple equations like E = mc² or zₙ₊₁ = zₙ² + c. But neither Einstein’s nor Mandelbrot’s little gems actually define the essence of their respective phenomena. They just offer a convenient way of capturing some form of knowledge about it. E = mc² will be found on T-shirts and the Mandelbrot set on the screen savers of people who know little about their relationship to the underlying ideas. They just know they’re important and want to express their allegiance to the movement. Kind of like the people who feel it necessary to proclaim that they “believe” in the theory of evolution. Of course, we could also take some gnostic stories about what it takes to “really” understand these equations – and they do require a quite complex mix of expertise (a lot more complex than the stories would let on).
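
For readers who haven’t met the second formula, here is a minimal sketch (my own illustration in Python, not anything from the debate) of the iteration it names: a complex number c belongs to the Mandelbrot set if repeatedly applying z → z² + c from z = 0 never escapes to infinity.

```python
def in_mandelbrot(c, max_iter=100):
    """Crude membership test: iterate z -> z**2 + c from z = 0
    and report whether the orbit appears to stay bounded."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2 the orbit is guaranteed to escape
            return False
    return True

print(in_mandelbrot(0))    # True: the orbit stays at 0
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
print(in_mandelbrot(-1))   # True: the orbit oscillates between -1 and 0
```

Knowing this little loop is, of course, exactly the kind of T-shirt knowledge the paragraph above describes: it captures the formula without capturing the “essence” of the fractal.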

But we still haven’t dealt with the question of utility. Scientific knowledge derives its current legitimacy from its connection to technology and religious knowledge makes claims on the realms of morality and human relationships. They clash because both also have views on each other’s domains (science on human relationships and religion on origins of the universe). Which is one of the reasons why I don’t think that the non-overlapping magisteria idea is very fruitful (as much as I prefer Gould over the neo-Darwinists).

Here I have to agree with Dawkins and the new atheists. There’s no reason why some prelate should have more of a say on morality or relationships than anyone else. Reading a holy book is a qualification for prescribing ritual, not for the arbitration of morality. But Dawkins should be made to taste his own medicine. There’s no reason why a scientist’s view on the origin of the universe should hold any sway over any theologian’s. The desire of the scientist to provide a cosmogony for the atheist crowd is a strange thing. It seeks to make questions stemming from a non-scientific search for broader meaning consistent with the scientific need for contiguous causal explanations. But the Big Bang or some sort of Primordial Soup don’t provide internal consistency to the scientific enterprise. They leave as many questions open as they give answers to.

It seems that the Augustinian and later scholastic solution was to posit a set of categories external to the ones which are accessible to us. Giles Fraser cites Thomas Aquinas’ example of counting everything in the world and not finding God. That would still be fine because God is not a thing to be counted, or better still, not an entity that fits within our concept of things and the related concept of counting. Or in other words, God created our world with the categories available to the cognizing humans, not in those categories. Of course, to me, that sounds like a very good argument for atheism but it is also why a search for some Grand Unified Theory leaves me cold.

Epistemology as politics

It is a problem that the better philosophers from Parmenides to Wittgenstein tried to express in one way or another. But their problems were in trying to draw practical conclusions. There is no reason why the two political factions shouldn’t have a good old fight over the two overlapping magisteria. Because the debate about utility is a political one, not an epistemological one. Just because I would rather go to a surgeon than a witch doctor doesn’t mean that the former has tapped into some superior form of cognition. We make utility judgements of a similar nature even within these two domains but we would not draw essentialist epistemological conclusions based on them. People choose their priests and they choose their doctors. Is a bad doctor better than a good healer? I would imagine that there are a whole range of ailments where some innocuous herb would do more good than a placebo with side effects.

But we can consider a less contentious example. Years ago I was involved with TEFL teacher training and hiring. We ran a one-month starter course for teachers of English as a foreign language. And when hiring teachers I would always much rather hire one of these people than somebody with an MA in TESOL. The teachers with the basic knowledge would often do a better job than those with superior knowledge. I would say that when these two groups talked about teaching, their understanding of it was very different. The MAs would have the research, evidence and theory. The one-month trainees would have lots of useful techniques but little understanding of how or why they worked. Yet, there seemed to be an inverse relationship between the “quality” of knowledge and the practical ability of the teacher (or at best no predictable relationship). So I would routinely recommend these “witch-teachers” over the “surgeon-teachers” to schools for hiring because I believed they were better for them.

There are many similar stories where utility and knowledge don’t match up. Again, that doesn’t mean that we should take homeopathy seriously, but it means that the foundations of our refusal to accept homeopathy cannot also be the foundations for placing scientific knowledge outside the regular epistemological constraints of all humanity.

Epistemology, as I have said elsewhere, is much better explained as ethics.

Thus endeth the blog post.

RaAM 9 Abstract: Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction

Given how long I’ve been studying metaphor (at least since 1991 when I first encountered Lakoff and Johnson’s work and full on since 2000) it is amazing that I have yet to attend a RaAM (Researching and Applying Metaphor) conference. I had an abstract accepted to one of the previous RaAMs but couldn’t go. This time, I’ve had an abstract accepted and wild horses won’t keep me away (even though it is expensive since no one is sponsoring my going). The abstract that got accepted is about a small piece of research that I conceived back in 2004, wrote up in a blog post in 2006, was supposed to talk about at a conference in 2011 and finally will get to present this July at RaAM 9.

Unlike most academic endeavours, this one needs to come with a parental warning. The material described contains profane sexual and scatological imagery as employed for the purposes of satire. But I think it makes a really important point that I don’t see people making as a matter of course in the metaphor studies literature. I argue that metaphors can be incredibly powerful and seductive but that they are also routinely deconstructed and negotiated. They are not something that just happens to us. They are opportunistic and random just as much as they are systematic and fundamental to our cognition. Much of current metaphor studies is still fighting the battle against the view that metaphors are mere peripheral adornments on the literal. And to be sure, the “just a metaphor” label is still to be seen in popular discourse today. But it has now been over 40 years since this fight was intellectually won. So we need to focus on the broader questions about the complexities of the role metaphor plays in social cognition. And my contribution to RaAM hopes to point in that direction.

 

Of Doves and Cocks: Collective Negotiation of a Metaphoric Seduction

In this contribution, I propose to investigate metaphoric cognition as an extended discursive and social phenomenon that is the cornerstone of our ability to understand and negotiate issues of public importance. Since Lakoff and Johnson’s groundbreaking study, research in linguistics, cognitive psychology, as well as discourse studies, has tended to view metaphor as a purely unconscious phenomenon that is outside of a normal speaker’s ability to manipulate. However important this view of metaphor and cognition may be, it tells only a part of the story. Equally important and surprisingly frequent is the ability of metaphor to enter into collective (meta)cognition through extended discourse in which acceptable cross-domain mappings are negotiated.
I will provide an example of a particular metaphorical framing and the metacognitive framework it engendered that made it possible for extended discourse to develop. This metaphor, a leitmotif in the ‘Team America’ film satire, mapped the physiological and phraseological properties of taboo body parts onto geopolitical issues of war in such a way that made it possible for participants in the subsequent discourse to simultaneously be seduced by the power of the metaphor and empowered to engage in talk about cognition, text and context as exemplified by statements such as: “It sounds quite weird out of context, but the paragraph about dicks, pussies and assholes was the craziest analogy I’ve ever heard, mainly because it actually made sense.” I will demonstrate how this example is typical rather than aberrant of metaphor in discourse and discuss the limits of a purely cognitive approach to metaphor.
Following Talmy, I will argue that significant elements of metaphoric cognition are available to speakers’ introspection and thus available for public negotiation. However, this does not prevent the sheer power of the metaphor from having an impact on both cognition and discourse. I will argue that as a result of the strength of this one metaphor, the balance of the discussion of this highly satirical film was shifted in support of military interventionism, as evidenced by the subsequent popular commentary. By mapping political and gender concepts onto the basic structural inevitability of human sexual anatomy, reinforced by idiomatic mappings between taboo words and moral concepts, the metaphor makes further negotiation virtually impossible within its own parameters. Thus an individual speaker may be simultaneously seduced and empowered by a particular metaphorical mapping.

21st Century Educational Voodoo

Jim Shimabukuro uses Rupert Murdoch’s quote “We have a 21st century economy with a 19th century education system” to pose the question of what 21st century education should look like (http://etcjournal.com/2008/11/03/174/): “what are the key elements for an effective 21st century model for schools and colleges?”

However, what he is essentially asking us to do is perform an act of voodoo. He’s encouraging us to start thinking about what would make our education similar to our vision of what the 21st century economy looks like. Such exercises can come up with good ideas but unfortunately this one is very likely to descend into predictability. People will write in about how important it is to prepare students to be flexible, learn important skills to compete in the global markets, use technology to leverage this or that. There may be the odd original idea but most respondents will stick with clichés. Because that’s what our magical discourse about education encourages most of all (this sounds snarky but I really mean it more descriptively than as an evaluation).

There are three problems with the whole exercise.

First, why should we listen to moguls and venture capitalists about education? They’re no more qualified to address this topic than any random individual who’s given it some thought, and they are more likely to have ulterior motives. To Murdoch we should say: you’ve messed up the print media environment, failed with your online efforts, stay away from our schools.

Second, we don’t have a 19th century education system. Sure, we still have teachers standing in front of students. We have classes and we have school years. We have what David Tyack and Larry Cuban have called the “grammar of schooling”. It hasn’t changed much on the surface. But neither has the grammar of English. Yet, we can express things in English now that we couldn’t in the 1800s. We use the English grammar with its ancient roots to express what we need in our time. Likewise, we use the same grammar of schooling to have the education system express our societal needs. It is imperfect but it is in NO way holding us down. The evidence is either manufactured or misinterpreted. Sure, if we sat down and started designing an education system today from scratch, we’d probably do it differently but the outcomes would probably be pretty much the same. Meaning, the state of the world isn’t due to the educational system but rather vice versa.

Third, we don’t have a 21st century economy. Of course, the current economy is in the 21st century but it is much less than what we envision a 21st century economy to imply. It is global (as it was in 1848 when Marx and Engels were writing their manifesto). It is exploitative (of both human and natural resources). It is in the hands of the powerful and semicompetent few. Just because workers get fired by email from a continent away and stocks crash in a matter of minutes rather than hours, we can’t really talk about something fundamentally unique. Physical and symbolic property is still the key part of the economy. Physical property still takes roughly as long to shift about as it did two centuries ago (give or take a day or a month) and symbolic property is still traded in the same way (can I interest you in a tulip?). Sure, there are thousands of particular differences we could point to but the essence of our existence is not that much changed. Except for things like indoor plumbing (thank God!), modern medicine and the speed of communication – but the education system of today has all of those pretty much in hand.

My conclusion. Don’t expect people to be relevant or right just because they are rich or successful. Question the route they took to where they are before you take their advice on the direction you should go. And, if you’re going to drag history into your analogies, study it very very carefully. Don’t rely on what your teachers told you, it was all lies!

Moral Compass Metaphor Points to Surprising Places

I thought the moral compass metaphor had mostly left current political discourse but it just cropped up – this time pointing from left to right – as David Plouffe accused Mitt Romney of not having one. As I keep repeating, George Lakoff once said “Metaphors can kill.” And the moral compass has certainly done its share of homicidal damage. Justifying wars, interventions and unflinching black-and-white resolve in the face of gray circumstances. It is a killer metaphor!

But with a bit of hacking it is not difficult to subvert it for good. Yes, I’m ready to declare, it is good to have a moral compass, provided you make it more like a “true compass”, to quote Plouffe. The problem is, as I learned during my sailing courses many years ago, that most people don’t understand how compasses actually work.

First, compasses don’t point to “the North”. They point to what is called the Magnetic North, which is quite a ways from the actual North. So if you want to go to the North Pole, you need to make a lot of adjustments to where you’re going. Sound familiar? Kind of like following your convictions. They often lead you to places that are different from where you say you’re going.

Second, the Magnetic North keeps moving. Yes, the difference between where it is and where the “actual” North is changes from year to year. So you have to adjust your directions to the times you live in! Sound familiar? Being a devout Christian led people to different actions in the 1500s, 1700s and 1900s. Although we keep saying the “North” of faith is the same, the actual needle showing us where to go points in different directions.

Third, the compass is easily distracted by its immediate physical context. The distribution of metals on a boat, for instance, will throw it completely off course. So it needs to be calibrated differently for each individual installation. Sound familiar?

And it’s also worth noting that the south magnetic pole is not the exact polar opposite of the north magnetic pole!
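
For the nautically curious, here is a minimal sketch (my own illustration, with made-up numbers) of the correction arithmetic a navigator applies before trusting the needle: the compass reading is adjusted both for variation (the drifting gap between magnetic and true north) and for deviation (the error introduced by the metals of that particular boat).

```python
def true_heading(compass_heading, variation, deviation):
    """Convert a compass heading (degrees) to a true heading.

    variation: magnetic vs. true north, easterly positive; it drifts from year to year.
    deviation: error from the vessel's own metals, easterly positive; unique to each installation.
    """
    return (compass_heading + variation + deviation) % 360

# Illustrative numbers only: a needle that reads 90 (due "East") with
# 4 degrees of easterly variation and 2 degrees of westerly deviation
# is actually pointing at 92 degrees true.
print(true_heading(90, 4, -2))  # 92
```

Only after all these context-dependent corrections does the compass deserve the adjective “true” – which is rather the point of the metaphor being hacked here.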

So what can we learn from this newly hacked moral compass metaphor? Well, nothing terribly useful. Our real ethics and other commitments are always determined by the times we live in and the contexts we find ourselves in. And often we say we’re going one way when we’re actually heading another way. But we already knew that. Everybody knows that! Even people who say it’s not true (the anti-relativists) know that! They are not complete idiots after all, they just pretend to be to avoid making painful decisions!

As so often, we can tell two stories about the change of views by politicians or anybody else.

The first story is of the feckless, unprincipled opportunist who changes her views following the prevailing winds – supported by the image of the weather vane. This person is contrasted with the stalwart who sticks to her principles even as all around her are swayed by the moral fad of the day.

The second story is of the wise (often old) and sage person who can change her views even as all around her persist in their simplistic fundamentalism. Here we have the image of the tree that bends in the wind but does not break. This person is contrasted with the bigot or the zealot who cannot budge even an inch from old prejudices even though they are obviously wrong.

So which story is true of Romney, Bush and Obama? We don’t know. In every instance, we have to fine-tune our image and very carefully watch out for our tendency to just tell the negative stories about people we don’t like. Whether one story is more convincing than the other depends, like the needle of a compass, on a variety of obvious and non-obvious contexts. The stories are here to guide us and help us make decisions. But we must strive to keep them all in mind at the same time. And this can be painful. They are a little bit like the Necker Cube, the Vase, the Duck/Rabbit or similar optical illusions. We know they’re both there but while we’re perceiving the one, it is easy to forget the others are there. So it is uncomfortable. And also not a little bit inconvenient.

Is this kind of metaphorical nuance something we can expect in a time of political competition? It can be. Despite their bad rep, politicians and the media surrounding them can be nuanced. But often they’re not. So instead of nuance, when somebody next trots out the moral compass, whether you like them or not, say: “Oh, you mean you’re a liar, then?” and tell them about the Magnetic North!

 

Post Script: Actually, Plouffe didn’t say Romney didn’t have a moral compass. He said that “you need to have a true compass, and you’ve got to be willing to make tough calls.” So maybe he was talking about a compass adjusted for surrounding metals, whose needle’s advice we follow only after taking into account as much of our current context as we can. A “true compass” like a true friend! I agree with most of the “old Romney” and none of the “new Romney”. And I loved the old Obama created in the image of our unspoken liberal utopias, and I am lukewarm on the actual Obama (as I actually knew I would be) who steers a course pointing to the North of reality rather than the one magnetically attracting our needles. So if it’s that kind of moral compass after all, we’re in good hands!

There’s more to memory than the brain: Psychologists run clever experiments, make trivial claims, take gullible internet by storm

GoodSearch home page (Image via Wikipedia)

The online media are drawn to any “scientific” claims about the internet’s influence on our nature as humans like flies to a pile of excrement. Sadly, in this metaphor, only the flies are figurative. The latest heap of manure to instigate an annoying buzzing cloud of commentary from Wired to the BBC is an article by Sparrow et al. claiming to show that because there are search engines, we don’t have to remember as much as before. Most notably, if we know that some information can be easily retrieved, we remember where it can be obtained instead of what it is. As Wired reports:

“A study of 46 college students found lower rates of recall on newly-learned facts when students thought those facts were saved on a computer for later recovery.” http://www.wired.co.uk/news/archive/2011-07/15/search-engines-memory

Sparrow et al. designed a bunch of experiments that “prove” this claim. Thus, they holler, the internet changes how we remember. This was echoed by literally hundreds of headlines (Google claims over 600). Here’s a sample:

  • Google Effect: Changes to our Brains
  • Search engines like Google ‘changing the way human memory works’
  • Search engines change how memory works
  • Google Is Destroying Our Memories, Scientists Find
  • It pays to remember, search engines ruining our memory
  • Google rewiring the way we remember, study says
  • Has Google turned your memory to mush?
  • Internet search engines cause poor memory, scientists claim
  • Researchers: Search Engines Supplanting Our Memory
  • Google changing way brain remembers information

Many of these headlines are from “reputable” publications and they can be summarized by three words: Bullshit! Bullshit! Bullshit!

All they had to do was read this part of the abstract to understand that nothing like the stuff they blather about follows from the study:

“The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.”

But they were not helped by Science, whose publication of these results is more of a self-serving stunt than a serious attempt to further knowledge. The title of the original, “Google Effects on Memory”, is all but designed to generate bat-shit crazy headlines. If the title were to be truthful, it would have to be “Google has no more effect on memory than a paper and pen or a friend.” Even the Science Magazine report on the study, entitled “Searching for the Google Effect on People’s Memory”, concludes it “doesn’t directly answer that question”. In fact, it says that the internet is filling the role of “transactive memory”, which describes the fact that we rely on people to remember things. Which means it has no impact on our brains at all. It just magnifies the transactive effects already in existence.

Any claim about a special effect of Google on any kind of memory can be debunked in two words: “shopping list”! All Sparrow et al. discovered is that the internet has changed us as much as the stub of a pencil and a grubby piece of paper. Meaning, not at all.

Some headlines cottoned onto this but they are few and far between:

  • Search Engine “Memory Loss” in Fact a Sign of Smart Behavior‎
  • Search Engines Ruin Our Memory, Make Us Smarter

Sparrow, the lead author of the study, when interviewed by Wired, said: “It’s very similar to how we use people in our lives. The internet is really just an interface with a lot of other people.”

In other words, what the internet has changed is the deployment of strategies we have always used for managing our memory. Sparrow et al. use an old term, “transactive memory”, to describe this but that’s needed only because cognitive psychology’s view of memory has been so limited. Memory is not just about storage and retrieval. Like all of our cognition, it is tied in with a whole host of strategies (sometimes lumped together under the heading of metacognition) that have a transactive and social dimension.

Let’s take the example of mobile phones. About 15 years ago I remembered about 4 phone numbers (home, work, mother, friend). Now, I remember none. They’re all stored in my mobile phone. What’s happened? I changed my strategy of information storage and retrieval because of the technology available. Was this a radical change? No, because I needed a lot more numbers, so I carried a little booklet where I kept the rest of the numbers. So the mobile phone freed my memory of four items. Big deal! Arguably, these four items have a huge potential transactional impact. They mean that if my mobile phone is dead or lost, I cannot call the people most likely to be able to offer assistance. But how often does that happen? It hasn’t happened to me yet in an emergency. And in a non-emergency I have many backups. At any rate, in the past I was much more likely to be caught up in an emergency where I couldn’t find a phone at all. So the change has been fairly minimal.

But what’s more interesting here is that I didn’t realize this change until I heard someone talk about it. This transactional change is a topic of conversation; it is not just something that happened, it is part of common knowledge (and common knowledge only becomes common because a lot of people talk about it to a lot of other people).

The same goes for the claims made by Sparrow et al. The strategies used to maintain access to factual knowledge have changed with the technology available. But they didn’t just change, people have been talking about this change. “Just Google it” is a part of daily conversation. In his podcasts, Leo Laporte has often talked about how his approach to remembering has changed with the advent of Google. One early strategy for remembering websites was the bookmark. People made significant collections of bookmarks to sites, not unlike the Rolodexes of old. But about five or so years ago Google got a lot better at finding the right sites, so bookmarks went away. Personally, now that Chrome syncs bookmarks so seamlessly, I’ve started using them again. Wow, a change in technology facilitates a change in strategy. Sparrow et al. should do some research on this. Since I started using the Internet when it was still spelled with a capital “I”, I still remember the URLs of key websites: Google, Yahoo, Gmail, BBC, my own, etc. But there are people who don’t. I’ve personally observed a highly intelligent CEO of a company type “Google” into the Bing search box in Internet Explorer. And a few years ago, after a university changed its portal, I was soothing an angry professor who complained that the link to Google had been removed from the page that automatically came up on his computer. He had never learned how to get there any other way because he didn’t need to. Now he does. We acquire strategies to deal with information as we need them.

Before the availability of writing (and even after), there were a whole lot of strategies available for remembering things. These were part of the cultural conversation as much as the internet is today. Some of these strategies became part of religious ritual. Some of them are a part of a trickster’s arsenal – Joshua Foer describes some in Moonwalking with Einstein. Many are part of the art of “study skills” many people talk about.

All that Sparrow et al. demonstrated is that when some of these strategies are deployed, it has a small effect on recall. This is not a bad thing to know but it’s not in any way worth over 600 media stories. To evaluate this much reduced claim we would have to carefully examine their research methodology and the underlying assumptions, which is not what this post is about. It’s about the mistreatment of research results by media-hungry academics.

I don’t begrudge Sparrow et al. their 15 minutes of fame. I’m not surprised, dismayed or even disappointed at the link greed of the journalistic herd who fell over themselves to uncritically spread this research fluff. Also, many of the actual articles were quite balanced about the findings, but how much of that balance will survive the effect of a mendaciously bombastic headline is anybody’s guess. So all in all it’s business as usual in the popularization of “science” in the “media”.

Bohannon, J. (2011). Searching for the Google Effect on People’s Memory. Science, 333(6040), 277. DOI: 10.1126/science.333.6040.277

Sparrow, B., Liu, J., & Wegner, D. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science. DOI: 10.1126/science.1207745


The death of a memory: Missing metaphors of remembering and forgetting?

Memories

I have forgotten a lot of things in my life. Names, faces, numbers, words, facts, events, quotes. Just like for anyone, forgetting is as much a part of my life as remembering. Memories short and long come and go. But only twice in my life have I seen a good memory die under suspicious circumstances.

Both of these were good, reliable, everyday memories, as much declarative as non-declarative. And both died unexpectedly, without warning and without reprieve. They were both memories of entry codes but I retrieved them in different ways. Both were highly contextualised, but each in a different way.

The first time was almost 20 years ago (in 1993) and it was the PIN for my first bank card (before they let you change them). I’d had it for almost two years by then using it every few days for most of that period. I remembered it so well that even after I’d been out of the country for 6 months and not even thinking about it once, I walked up to an ATM on my return and without hesitation, typed it in. And then, about 6 months later, I walked up to another ATM, started typing in the PIN and it just wasn’t there. It was completely gone. I had no memory of it. I knew about the memory but the actual memory completely disappeared. It wasn’t a temporary confusion, it was simply gone and I never remembered it again. This PIN I remembered as a number.

The second death occurred just a few days ago. This time it was the entrance code to a building. But I only remembered it as a shape on the keypad (as I do for most numbers now). In the intervening years, I’ve memorised a number of PINs and entrance codes. Most I’ve forgotten since; some I remember even now (like the PIN of a card that expired a year ago but that I’d used only once every few months for many years). Simply the normal processes you’d expect of memory. But this one I’d been using for about a year, since they’d changed it from the previous one. About five months ago I came back from a month-long leave and remembered it instantly. But three days ago, I walked up to the keypad and the memory was gone. I’d used the keypad at least once, if not twice, that day already. But that time I walked up to it and nothing. After a few tries I started wondering if I might be typing in the old code from before the change, so I flipped the pattern around (I had a vague memory of once using it to remember the new pattern) and it worked. But the working pattern felt completely foreign. Like one I’d never typed in before. I suddenly understood what it must feel like to recognize a loved one but at the same time be sure that it’s not them. I was really discomfited by this impostor keypad pattern. For a few moments, it felt really uncomfortable – almost an out-of-body (or out-of-memory) experience.

The one thing that set the second forgetting apart from the first one was that I was talking to someone as it happened (the first time I was completely alone on a busy street – I still remember which one, by the way). It was an old colleague who visited the building and was asking me if I knew the code. And seconds after I confidently declared I did, I didn’t. Or I remembered the wrong one.

So in the second case, we could conclude that the presence of someone who had been around when the previous code was in use triggered the former memory and overrode the latter one. But the experience of complete and sudden loss, I recall vividly, was the same. None of my other forgettings were so instant and inexplicable. And I once forgot the face of someone I’d just met as soon as he turned around (which was awkward since he was supposed to come back in a few minutes with his car keys – so I had to stand in the crowd looking expectantly at everyone until he returned and nodded to me).

What does this mean for our metaphors of memory based on the various research paradigms? None seem to apply. These were not repressed memories associated with traumatic events (although the forgetting itself was extremely mildly traumatic). They were not quite declarative memories, nor were they exactly non-declarative. They both required operations in working memory but were long-term. They were both triggered by context and had a muscle-memory component. But the first one I could remember as a number, whereas the second one only as a shape, and only on that specific keypad. Neither was subject to long-term decay. In fact, both proved resistant to decay, surviving long or longish periods of disuse. They both were (or felt) as solid as my own name. Until they were there no more. The closest introspective analogy seems to me to be Luria’s man who remembered too much, who once forgot a white object because he had placed it against a white background in his memory, which made it disappear.

The current research on memory seems to be converging on the idea that we reconstruct our memories. Our brains are not just stores with shelves from which memories can be plucked. Although memories are highly contextual, they are not discrete objects encoded in our brains like files on a hard drive. But for these two memories, the hard drive metaphor seems more appropriate. It’s as if a tiny part of my brain that held them was corrupted and they simply winked out of existence at the flip of a bit. Just like a hard drive.

There’s a lot of research on memory loss, decay and reliability but I don’t know of any which could account for these two deaths. We have many models of memory which can be selectively applied to most memory related events but these two fall between the cracks.

All the research I could find is either on sudden, specific-event-induced amnesia (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1961972/?page=1) or on senescence (http://brain.oxfordjournals.org/content/89/3/539.extract). In both cases, there are clear causes of the memory loss, and the loss is much more total (complete events or entire identity). I could find nothing about the sudden loss of a specific, reliable memory in a healthy individual, not precipitated by any traumatic (or other) event (given that it only happened twice, 18 years apart – I was 21 when it happened first – I assume it is not caused by any pathology in my brain). Yet I suspect this happens all the time… So what gives?


Killer App is a bad metaphor for historical trends, good for pseudoteaching


Niall Ferguson wrote in The Guardian some time ago about how awful history education has become, with these “new-fangled” 40-year-old methods like focusing on “history skills” that lead to kids leaving school knowing “unconnected fragments of Western history: Henry VIII and Hitler, with a small dose of Martin Luther King, Jr.” but not who the reigning English monarch was at the time of the Armada. Instead, he wants history to be taught his way: deep trends leading to the understanding of why the “West” rules and why Ferguson is the cleverest of all the historians that ever lived. He even provided (and how cute is this) a lesson plan!

Now, personally, I’m convinced that the history of historical education teaches us mostly that historical education is irrelevant to the success of current policy. Not that we cannot learn from history. But it’s such a complex source domain for analogies that even very knowledgeable and reasonable people can and do learn the exact opposites from the same events. And even if they learn the “right” things it still doesn’t stop them from being convinced that they can do it better this time (kind of like people in love who think their marriage will be different). So Ferguson’s bellyaching is pretty much an empty exercise. But that doesn’t mean we cannot learn from it.

Ferguson, who is a serious historian of financial markets, didn’t just write a whiny column for the Guardian; he wrote a book called Civilization (I’m writing a review of it and a few others under the joint title “Western Historiographical Eschatology”, but here I’ll only focus on some aspects of it) and is working on a computer game and teaching materials. To show how seriously he takes his pedagogic mission, and possibly also how hip and with it he is, Ferguson decided not to call his historical trends trends but rather “killer apps”. I know! This is so laugh-out-loud funny I can’t express my mirth in mere words :))). And it gets even funnier as one reads his book. As a pedagogical instrument, this has all the practical value of putting a spoiler on a Fiat. He uses the term about 10 times throughout the book (it’s not even in the index!), including one or two mentions of “downloading” when he talks about the adoption of an innovation.

Unfortunately for Ferguson, he wrote his book before the terms “pseudocontext” and “pseudoteaching” made an appearance in the edublogosphere. His “killer apps” and the lesson plan based on them are a perfect example of both. Ferguson wrote a perfectly serviceable and eminently readable historical book (even though it’s a bit of a tendentious mishmash). But it is still a historical book written by a historian. It’s not particularly stodgy or boring, but it’s no different from the myriad other currently popular history books that the “youth of today” don’t give a hoot about. He thinks (bless him) that using the language of today will have the youth flocking to his thesis like German princes to Luther. Because calling historical trends “killer apps” will bring everything into clear context and make all the convoluted syntax of even the most readable history book just disappear! This is just as misguided as thinking that talking about men digging holes at different speeds will make kids want to do math.

What makes it even more precious is that the “killer app” metaphor is wrong. For all his extensive research, Ferguson failed to look up “killer app” on Wikipedia or in a dictionary. There he would have found out that it doesn’t mean “a cool app” but rather an application that confirms the viability of an existing platform whose potential may have been questioned. There have only been a handful of killer apps. The one undisputed killer app was VisiCalc, which all of a sudden showed how an expensive computer could simplify financial management through electronic spreadsheets and therefore pay for itself. All of a sudden, personal computers made sense to the most important people of all, the money counters. And thus the personal computer revolution could begin. A killer app proved that the personal computer was useful. But the personal computer had already existed as a platform when VisiCalc appeared.

None of Ferguson’s “killer apps” – “competition, science, property rights, medicine, consumer society, work ethic” – is this sort of beast. They weren’t something “installed” in the “West” which then proved its viability. They were something that, according to Ferguson anyway, made the West what it was. In that they are more equivalent to the integrated circuit than to VisiCalc. They are the “hardware” that makes up the “West” (as Ferguson sees it), not the software that runs on it. The only possible exception is “medicine”, or more accurately “modern Western medicine”, which could be the West’s one true “killer app”, showing the viability of its platform for something useful and worth emulating. Also, a “killer app” requires a conscious intervention, whereas all of Ferguson’s “apps” were things that happened on their own in myriad disparate processes – we can only see them as one thing now.

But this doesn’t really matter at all. Because Ferguson, like so many people who want to speak the language of the “young people”, neglected to pay any attention whatsoever to how “young people” actually speak. The only people who actually use the term “killer app” are technology journalists, or occasionally other journalists who’ve read about it. I did a quick Google search for “killer app” and did not find a single non-news reference where somebody “young” discussed “killer apps” on a forum somewhere. That’s not to say it doesn’t happen, but it doesn’t happen enough to make Ferguson’s work any more accessible.

This overall confusion is indicative of Ferguson’s book as a whole, which is definitely less than the sum of its parts. It is full of individual insight and a fair amount of wit, but it flounders in its synthetic attempts. Not all his “killer apps” are of the same type, some follow from the others, and some don’t appear to be anything more than Ferguson’s wishful thinking. They certainly didn’t happen on one “platform” – some seem the outcome rather than the cause of “Western” ascendancy. Ferguson is just all too happy to believe his own press. At the beginning he talks about early hints around 1500 AD that the West might achieve ascendancy, but at the end he takes a half millennium of undisputed Western rule for granted. Yet in 1500, “the West” still had 250 years to go before the start of the industrial revolution, 400 years before modern medicine, 50 years before Protestantism took serious hold and at least another 100 before the Protestant work ethic kicked in (if there really is such a thing). It’s all over the place.

Of course, there’s not much that is innovative about any of these “apps”. It’s nothing a reader of the Wall Street Journal editorial page couldn’t come up with. Ferguson does a good job of providing interesting anecdotes to support his thesis, but each of his chapters meanders around the topic at hand with a smattering of unsystematic evidence here and there. Sometimes the West is contrasted with China, sometimes the Ottomans, sometimes Africa! It is hard to see how his book can help anybody towards the “chronological understanding” of history that he’s so keen on.

But most troublingly, it seems in places that he mostly wrote the book as a carrier for ultra-conservative views that would make his writing more suitable for The Daily Mail than the Manchester Pravda: “the biggest threat to Western civilization is posed not by other civilizations, but by our own pusillanimity” – unless of course it is the fact that “private property rights are repeatedly violated by governments that seem to have an insatiable appetite for taxing our incomes and our wealth and wasting a large portion of the proceeds”.

Panelist Economic Historian Niall Ferguson (Image via Wikipedia)

It’s almost as if the “civilized” historical discourse were just a veneer that peels off in places and reveals the real Ferguson, a comrade of Pat Buchanan, whose “The Death of the West” (the Czech translation of which screed I was once unfortunate enough to review) came from the same dissatisfaction with our lack of confidence in the West. Buchanan also recommends teaching history – or more specifically, lies about history – to show us what a glorious bunch of chaps the leaders of the West were. Ferguson is too good a historian to ignore the inconsistencies in this message, and a careful reading of his book reveals enough subtlety not to want to reconstitute the British Empire (although the yearning is there). But the Buchananian reading is available, and in places it almost seems as if that’s the one Ferguson wants readers to go away with.

From metaphor to fact, Ferguson is an unreliable thinker, flitting between insight, mental shortcut and unreflected cliché with ease. Which doesn’t mean that his book is not worth reading. Or that his self-serving pseudo-lesson plan is not worth teaching (with caution). But remember, I can only recommend it because I subscribe to that awful “culture of relativism” that says that “any theory or opinion, no matter how outlandish, is just as good as whatever it was we used to believe in.”

Update 1: I should perhaps point out that I think Ferguson’s lesson plan is pretty good, as such things go. It gives students an activity that engages a number of cognitive and affective faculties rather than just relying on telling. Even if it is completely unrealistic in terms of the amount of time allocated and the objectives set. “Students will then learn how to construct a causal explanation for Western ascendancy” is an aspiration, not a learning objective. Also, it and the other objectives rely heavily on the very “historical skills” he derides elsewhere.

The lesson plan comes apart at about point 5, where the really cringeworthy part kicks in. As in his book, Ferguson immediately assumes that his view is the only valid one – so instead of asking the students to compare two different perspectives on why the world looked the way it did in 1913 as opposed to 1500 (or even to compare maps at strategic moments), he simply asks them to come up with reasons why his “killer apps” are right (and to use evidence while they’re doing it!).

I also love his aside: “The groups need to be balanced so that each one has an A student to provide some kind of leadership.” Of course, there are shelves full of literature on group work – and pretty much all of it comes from the same sort of people who are likely to practice “new history” – Ferguson’s nemesis.

I don’t think using Ferguson’s book and materials would do any more damage than using any other history book. Not what I would recommend, but who cares. I recently spent some time at Waterstone’s browsing through modern history textbooks and I think they’re excellent. They provide far more background to events and present them in a much more coherent picture than Ferguson does. They perhaps don’t encourage the sort of broad synthesis that has been the undoing of so many historians over the centuries (including Ferguson), but they demonstrate working with evidence in a way he does not.

The reason most people leave school not knowing facts and chronologies is that they don’t care, not that they don’t have an opportunity to learn. And this level of ignorance has remained constant over decades. At the end of the day, history is just a bunch of stories, not that different from what you see in a soap opera or a celebrity magazine, just not as relevant to one’s peer group. No amount of “killer applification” is going to change this. What remains at the end of historical education is a bunch of disconnected images, stories and conversation pieces (as many of them about the tedium of learning as about its content). But there’s nothing wrong with that. Let’s not underestimate the ability of disinterested people to become interested and to start making the connections and filling in the gaps when they need to. That’s why all these “after-market” history books like Ferguson’s are so popular (even though for most people they are little more than tour guides to the exotic past).

Update 2: By a fortuitous coincidence, an announcement of the release of George L. Mosse‘s lectures on European cultural history (http://history.wisc.edu/mosse/george_mosse/audio_lectures.htm) came across my news feeds. I think it is important to listen to these side by side with Ferguson’s seductively unifying approach to fully realize the cultural discontinuity in so many key aspects between us and the creators of the West. Mosse’s view of culture, as his Wikipedia entry reads, was of “a state or habit of mind which is apt to become a way of life”. The practice of history, after all, is a culture of its own, with its own habits of mind. In a way, Ferguson is asking us to adopt his habits of mind as our way of life. But history is much more interesting and relevant when it is, as Mosse’s colleague Harvey Goldberg put it on this recording, a quest for a “usable past” spurred by our sense of the “present crisis” or “present struggle”. So maybe my biggest beef with Ferguson is that I don’t share his justificationist struggle.


Language learning in literature as a source domain for generative metaphors about anything

Portrait of Yoritomo (Image via Wikipedia)

In my thinking about things human, I often like to draw on the domain of second language learning as a source of analogies. The problem is that relatively few people in the English-speaking world have enough experience with language learning to actually map things onto it. In fact, in my experience, even people who have a lot of experience with language learning are often not aware of all the things that were happening while they were learning. And of course awareness of research or language learning theories is not to be expected. This is not helped by the language teaching profession’s propaganda that language learning is “fun” and “rewarding” (whatever that is). In fact, my mantra of language learning (which I learned from my friend Bill Perry) is that “language learning is hard and takes time” – at least if you expect to achieve a level of competence above that of “impressing the natives” with your “please” and “thank you”. In that, language learning is like any other human endeavor, but because of its relatively bounded nature – when compared to, for instance, culture – it can be particularly illuminating.

But how can not just the fact of language learning but also its visceral experience be communicated to those who don’t have that kind of experience? I would suggest engrossing literature.

For my money, one of the most “realistic” depictions of language learning with all its emotional and cognitive peaks and troughs can be found in James Clavell‘s “Shogun“. There we follow the Englishman Blackthorne as he goes from learning how to say “yes” to conversing in halting Japanese. Clavell makes the frustrating experience of not knowing what’s going on and not being able to express even one’s simplest needs real for the reader who identifies with Blackthorne’s plight. He demonstrates how language and cultural learning go hand in hand and how easy it is to cause a real life problem through a little linguistic misstep.

Shogun stands in stark contrast to most other literature, where knowledge of a language and its acquisition is treated as mostly a binary thing: you either know it or you don’t. One of the worst offenders here is Karl May (virtually unknown in the English-speaking world), whose main hero Old Shatterhand/Kara Ben Nemsi effortlessly acquires not only languages but dialects and local accents, which allow him to impersonate locals in May’s favorite plot twists. Language acquisition in May just happens. There’s never any struggle or miscommunication by the main protagonist. But similar linguistic effortlessness in the face of plot requirements is common in literature and film. Far more than magic or the existence of vampires, the thing that used to stretch my credulity the most in Buffy the Vampire Slayer was the ease with which linguistic facility was disposed of.

To be fair, even in Clavell’s book there are characters whose linguistic competence is largely binary. Samurai either speak Portuguese or Latin or they don’t – and if the plot demands it, they can catch even a whispered colloquial conversation. Blackthorne’s own knowledge of Dutch, Spanish, Portuguese and Latin is treated the same way, as if identical competence would be expected in all four (which would be completely unrealistic given his background and which resembles May’s Kara Ben Nemsi in many respects).

Nevertheless, when it comes to Japanese, even a superficially empathetic reader will feel they are learning Japanese along with the main character, largely through Clavell’s clever use of limited translation.

This is all the more remarkable given that Clavell obviously did not speak Japanese and relied on informants. As the book “Learning from Shogun” pointed out, this led to many inaccuracies in the actual Japanese, and that book advises readers not to rely too much on the language of Shogun.

Clavell (in all his books – not just Shogun) is even more illuminating in his depiction of intercultural learning and communication – the novelist often getting closer to the human truth of the process than the specialist researcher. But that is a blog post for another time.

Another novel I remember as an accurate representation of language learning is John Grisham‘s “The Broker”, in which the main character, Joel Backman, is landed in a foreign country by the CIA and expected to pick up Italian in 6 months. Unlike in Shogun, language and culture do not permeate the entire plot, but language learning figures in about 40% of the book. “The Broker” underscores another dimension, one also present in Shogun: teaching, teachers and teaching methods.

Blackthorne in Shogun orders an entire village (literally on pain of death) to correct him every time he makes a mistake. And then he’s excited by a dictionary and a grammar book. Backman spends a lot of time with a teacher who makes him repeat every sentence multiple times until he knows it “perfectly”. These are today recognized as bad strategies. Insisting on perfection in language learning is often a recipe for forming mental blocks (Krashen’s cognitive and affective filters). But on the other hand, it is quite likely that in totally immersive situations like Blackthorne’s, or even partly immersive situations like Backman’s (who has English speakers around him to help), pretty much any approach to learning will lead to success.

Another common misconception reflected in both works is the demand language learning places on rote memory. Both Blackthorne and Backman are described as having exceptional memories to make their progress more plausible, but the sort of learning successes and travails described in the books would accurately reflect the experiences of anybody learning a foreign language, even someone without such a memory. As both books show without explicit reference, it is their strategies in the face of incomprehension that help their learning, rather than straight memorization of words (although that is by no means unnecessary).

So what are the things that knowing about the experience of second language learning can help us elucidate? I think that any progress from incompetence to competence can be compared to learning a second language, particularly when we enhance the purely cognitive view of learning with an affective component. Strategies as well as simple brain changes are important in any learning, which is why none of the brain-based approaches have produced unadulterated success. In fact, linguists studying language as such would do well to pay attention to the process of second language learning to more fully realize the deep interdependence between language and our being.

But I suspect we can be more successful at learning anything (from history or maths to computers or double-entry bookkeeping) if we approach it as a foreign language and acknowledge the emotional difficulties alongside the cognitive ones.

Also, if we looked at expertise more as linguistic fluency than as a collection of knowledge and skills, we could devise a program of learning that would better take into account not only the humanity of the learner but also the humanity of the whole community of experts that he or she is joining.

