
21st Century Educational Voodoo


Jim Shimabukuro uses Rupert Murdoch’s quote “We have a 21st century economy with a 19th century education system” to ask what 21st century education should look like (http://etcjournal.com/2008/11/03/174/): “what are the key elements for an effective 21st century model for schools and colleges?”

However, what he is essentially asking us to do is perform an act of voodoo. He’s encouraging us to start thinking about what would make our education similar to our vision of what the 21st century economy looks like. Such exercises can come up with good ideas but unfortunately this one is very likely to descend into predictability. People will write about how important it is to prepare students to be flexible, learn important skills to compete in the global markets, use technology to leverage this or that. There may be the odd original idea but most respondents will stick with clichés. Because that’s what our magical discourse about education encourages most of all (this sounds snarky but I really mean it more descriptively than as an evaluation).

There are three problems with the whole exercise.

First, why should we listen to moguls and venture capitalists about education? They’re no more qualified to address this topic than any random individual who’s given it some thought, and they are more likely to have ulterior motives. To Murdoch we should say: you’ve messed up the print media environment, you failed with your online efforts, stay away from our schools.

Second, we don’t have a 19th century education system. Sure, we still have teachers standing in front of students. We have classes and we have school years. We have what David Tyack and Larry Cuban have called the “grammar of schooling”. It hasn’t changed much on the surface. But neither has the grammar of English. Yet, we can express things in English now that we couldn’t in the 1800s. We use the English grammar with its ancient roots to express what we need in our time. Likewise, we use the same grammar of schooling to have the education system express our societal needs. It is imperfect but it is in NO way holding us down. The evidence is either manufactured or misinterpreted. Sure, if we sat down and started designing an education system today from scratch, we’d probably do it differently but the outcomes would probably be pretty much the same. Meaning, the state of the world isn’t due to the educational system but rather vice versa.

Third, we don’t have a 21st century economy. Of course, the current economy is in the 21st century but it is much less than what we envision a 21st century economy to imply. It is global (as it was in 1848 when Marx and Engels were writing their manifesto). It is exploitative (of both human and natural resources). It is in the hands of the powerful and semicompetent few. Just because workers get fired by email from a continent away and stocks crash in a matter of minutes rather than hours, we can’t really talk about something fundamentally unique. Physical and symbolic property is still the key part of the economy. Physical property still takes roughly as long to shift about as it did two centuries ago (give or take a day or a month) and symbolic property is still traded in the same way (can I interest you in a tulip?). Sure, there are thousands of particular differences we could point to but the essence of our existence is not that much changed. Except for things like indoor plumbing (thank God!), modern medicine and the speed of communication – but the education system of today has all of those pretty much in hand.

My conclusion. Don’t expect people to be relevant or right just because they are rich or successful. Question the route they took to where they are before you take their advice on the direction you should go. And, if you’re going to drag history into your analogies, study it very, very carefully. Don’t rely on what your teachers told you, it was all lies!


Language learning in literature as a source domain for generative metaphors about anything


In my thinking about things human, I often like to draw on the domain of second language learning as a source of analogies. The problem is that relatively few people in the English speaking world have experience with language learning to such an extent that they can actually map things onto it. In fact, in my experience, even people who have a lot of experience with language learning are not aware of all the things that were happening while they were learning. And of course awareness of research or language learning theories is not to be expected. This is not helped by the language teaching profession’s propaganda that language learning is “fun” and “rewarding” (whatever that is). In fact my mantra of language learning (which I learned from my friend Bill Perry) is that “language learning is hard and takes time” – at least if you expect to achieve a level of competence above that of “impressing the natives” with your “please” and “thank you”. In that, language learning is like any other human endeavor but because of its relatively bounded nature – when compared to, for instance, culture – it can be particularly illuminating.

But how can not just the fact of language learning but also its visceral experience be communicated to those who don’t have that kind of experience? I would suggest engrossing literature.

For my money, one of the most “realistic” depictions of language learning with all its emotional and cognitive peaks and troughs can be found in James Clavell’s “Shogun”. There we follow the Englishman Blackthorne as he goes from learning how to say “yes” to conversing in halting Japanese. Clavell makes the frustrating experience of not knowing what’s going on and not being able to express even one’s simplest needs real for the reader, who identifies with Blackthorne’s plight. He demonstrates how language and cultural learning go hand in hand and how easy it is to cause a real life problem through a little linguistic misstep.

Shogun stands in stark contrast to most other literature where knowledge of language and its acquisition is viewed as a mostly binary thing: you either know it or you don’t. One of the worst offenders here is Karl May (virtually unknown in the English speaking world) whose main hero Old Shatterhand/Kara Ben Nemsi effortlessly acquires not only languages but dialects and local accents, which allow him to impersonate locals in May’s favorite plot twists. Language acquisition in May just happens. There’s never any struggle or miscommunication by the main protagonist. But similar linguistic effortlessness in the face of plot requirements is common in literature and film. Far more than magic or the existence of vampires, the thing that used to stretch my credulity the most in Buffy the Vampire Slayer was the ease with which linguistic facility was disposed of.

To be fair, even in Clavell’s book, there are characters whose linguistic competence is largely binary. Samurai either speak Portuguese or Latin or they don’t – and if the plot demands, they can catch even whispered colloquial conversation. Blackthorne’s own knowledge of Dutch, Spanish, Portuguese and Latin is treated as if identical competence would be expected in all four (which would be completely unrealistic given his background and which resembles May’s Kara Ben Nemsi in many respects).

Nevertheless, when it comes to Japanese, even a superficially empathetic reader will feel they are learning Japanese along with the main character, largely through Clavell’s clever use of limited translation.

This is all the more remarkable given that Clavell obviously did not speak Japanese and relied on informants. As the book “Learning from Shogun” pointed out, this led to many inaccuracies in the actual Japanese, and its authors advise readers not to rely on the language of Shogun too much.

Clavell (in all his books – not just Shogun) is even more illuminating in his depiction of intercultural learning and communication – the novelist often getting closer to the human truth of the process than the specialist researcher. But that is a blog post for another time.

Another novel I remember as an accurate representation of language learning is John Grisham’s “The Broker”, in which the main character, Joel Backman, is deposited in a foreign country by the CIA and expected to pick up Italian in six months. Unlike in Shogun, language and culture do not permeate the entire plot, but language learning is a part of about 40% of the book. “The Broker” underscores another dimension which is also present in Shogun, namely teaching, teachers and teaching methods.

Blackthorne in Shogun orders an entire village (literally on pain of death) to correct him every time he makes a mistake. And then he’s excited by a dictionary and a grammar book. Backman spends a lot of time with a teacher who makes him repeat every sentence multiple times until he knows it “perfectly”. These are today recognized as bad strategies. Insisting on perfection in language learning is often a recipe for forming mental blocks (Krashen’s cognitive and affective filters). But on the other hand, it is quite likely that in totally immersive situations like Blackthorne’s, or even partly immersive situations like Backman’s (who has English speakers around him to help), pretty much any approach to learning will lead to success.

Another common misconception reflected in both works is the demand language learning places on rote memory. Both Blackthorne and Backman are described as having exceptional memories to make their progress more plausible, but the sort of learning successes and travails described in the books would accurately reflect the experiences of anybody learning a foreign language even without such a memory. As both books show without explicit reference, it is their strategies in the face of incomprehension that help their learning rather than straight memorization of words (although that is by no means unnecessary).

So what are the things that knowing about the experience of second language learning can help us elucidate? I think that any progress from incompetence to competence can be compared to learning a second language, particularly when we can enhance the purely cognitive view of learning with an affective component. Strategies as well as simple brain changes are important in any learning, which is why none of the brain-based approaches have produced unadulterated success. In fact, linguists studying language as such would do well to pay attention to the process of second language learning to more fully realize the deep interdependence between language and our being.

But I suspect we can be more successful at learning anything (from history or maths to computers or double-entry bookkeeping) if we approach it as a foreign language and acknowledge the emotional difficulties alongside the cognitive ones.

Also, if we looked at expertise more as linguistic fluency than as a collection of knowledge and skills, we could devise a program of learning that would better take into account not only the humanity of the learner but also the humanity of the whole community of experts which he or she is joining.


The brain is a bad metaphor for language


Note: This was intended to be a brief note. Instead it developed into a monster post that took me two weeks of stolen moments to write. It’s very light on non-blog references but they exist. Nevertheless, it is still easy to find a number of oversimplifications, conflations, and other imperfections below. The general thrust of the argument, however, remains.

How Far Can You Trust a Neuroscientist?


A couple of days ago I watched a TED talk called The Linguistic Genius of Babies by Patricia Kuhl. I had been putting it off because I suspected I wouldn’t like it, but I was still disappointed at how hidebound it was. It conflated a number of really unconnected things and then tried to sway the audience to its point of view with pretty pictures of cute infants in brain scanners. But all it was was a hodgepodge of half-implied claims incredibly similar to some of the more outlandish claims made by behaviorists so many years ago. Kuhl concluded that brain research is the next frontier of understanding learning. But she did not give a single credible example of how this could be. She started with a rhetorical trick: she mentioned an at-risk language, with a picture of a mother holding an infant facing towards her. And then she said (with annoying condescension) that this mother and the other tribe members know something we do not:

What this mother — and the 800 people who speak Koro in the world — understand that, to preserve this language, they need to speak it to the babies.

This is garbage. Languages do not die because there’s nobody there to speak them to the babies (until the very end, of course) but because there’s nobody of socioeconomic or symbolic prestige for children and young adults to speak the language to. Languages don’t die because people can’t learn them; they die because they have no reason (other than nostalgia) to learn them or have a reason not to learn them. Given a strong enough reason, people would learn a dying language even if they started at sixteen. They just almost never are given that reason. Why Kuhl felt she did not need to consult the literature on language death, I don’t know.

Patricia Kuhl has spent the last 20 years studying pretty much one thing: acoustic discrimination in infants (http://ilabs.washington.edu/kuhl/research.html). Her research provided support for something that had already been known (or suspected), namely that young babies can discriminate between sounds that adults cannot (given similar stimuli such as the ones one might find in the foreign language classroom). She calls this the “linguistic genius of babies” and she’s wrong:

Babies and children are geniuses until they turn seven, and then there’s a systematic decline.

First, the decline (if there is such a thing) is mostly limited to acoustic processing and even then it’s not clear that the brain is the thing that causes it. Second, being able to discriminate (by moving their head) between sounds in both English and Mandarin at age 6 months is not a sign of genius. It’s a sign of the baby not being able to differentiate between language and sound. Or in other words, the babies are still pretty dumb. But it doesn’t mean they can’t learn a similar distinction at a later age – like four or seven or twelve. They do. They just probably do it in a different way than a 6-month-old would. Third, in the overall scheme of things, acoustic discrimination at the individual phoneme level (which is what Kuhl is studying) is only a small part of learning a language and it certainly does NOT stop at 7 months or even 7 years of age. Even children who start learning a second language at the age of 6 can achieve native-like phonemic competence. And even many adults do. They seem not to perform as well on certain fairly specialized acoustic tests but functionally, they can be as good as native speakers. And it’s furthermore not clear that accent deficiencies are due to the lack of some sort of brain plasticity. Fourth, language learning and knowledge is not a binary thing. Even people who only know one language know it to a certain degree. They can be lexically, semantically and syntactically quite challenged when exposed to a sub-code of their language they have little to no contact with. So I’m not at all sure what Kuhl was referring to. François Grosjean (an eminent researcher in the field) has been discussing all this on his Life as a Bilingual blog (and in books, etc.). To have any credibility, Kuhl must address this head on:

There is no upper age limit for acquiring a new language and then continuing one’s life with two or more languages. Nor is there any limit in the fluency that one can attain in the new language with the exception of pronunciation skills.

Instead she just falls back on old prejudices. She simply has absolutely nothing to support this:

We think by studying how the sounds are learned, we’ll have a model for the rest of language, and perhaps for critical periods that may exist in childhood for social, emotional and cognitive development.

A paragraph like this may get her some extra funding but I don’t see any other justification for it. Actually, I find it quite puzzling that a serious scholar would even propose anything like this today. We already know there is no critical period for social development. Well, we don’t really know what social development is, but there’s no critical brain period for what there is. We get socialized to new collective environments throughout our lives.

But there’s no reason to suppose that learning to interact in a new environment is anything like learning to discriminate between sounds. There are some areas of language linked to perception where that may partly be the case (such as discriminating shapes, movements, colors, etc.) but hardly things like morphology or syntax, where much more complexity is involved. But this argument cuts both ways. Let’s say a lot of language learning was like sound development. And we know most of it continues throughout life (syntax, morphology, lexicon) and it doesn’t even start at 6 months (unless you’re a crazy Chomskean who believes in some sort of magical parameter setting). So if sound development was like that, maybe it has nothing to do with the brain in the way Kuhl imagines – although she’s so vague that she could always claim that that’s what she’d had in mind. This is what Kuhl thinks of as additional information:

We’re seeing the baby brain. As the baby hears a word in her language the auditory areas light up, and then subsequently areas surrounding it that we think are related to coherence, getting the brain coordinated with its different areas, and causality, one brain area causing another to activate.

So what? We know that that’s what was going to happen. Some parts of the brain were going to light up as they always do. What does that mean? I don’t know. But I also know that Patricia Kuhl and her colleagues don’t know either (at least not in the way she pretends). We speak a language, we learn a language and at the same time we have a brain and things happen in the brain. There are neurons and areas that seem to be affected by impact (but not always and not always in exactly the same way). Of course, this is an undue simplification. Neuroscientists know a huge amount about the brain. Just not how it links to language in a way that would say much about the language that we don’t already know. Kuhl’s next implied claim is a good example of how partial knowledge in one area may not at all extend to knowledge in another area.

What you see here is the audio result — no learning whatsoever — and the video result — no learning whatsoever. It takes a human being for babies to take their statistics. The social brain is controlling when the babies are taking their statistics.

In other words, when the children were exposed to audio or video as opposed to a live person, no effect was shown. At 6 months of age! As is Kuhl’s wont, she only hints at the implications, but over at the Royal Society’s blog comments, Eric R. Kandel has spelled it out:

I’m very much taken with Patricia Kuhl’s finding in the acquisition of a second language by infants that the physical presence of a teacher makes enormous difference when compared to video presence. We all know from personal experience how important specific teachers have been. Is it absurd to think that we might also develop methodologies that would bring out people’s potential for interacting empathically with students so that we can have a way of selecting for teachers, particularly for certain subjects and certain types of student? Neuroscience: Implications for Education and Lifelong Learning.

But this could very well be absurd! First, Kuhl’s experiments were not about second language acquisition but about sensitivity to sounds in other languages. Second, there’s no evidence that the same thing Kuhl discovered for infants holds for adults or even three-year-olds. A six-month-old baby hasn’t learned yet that the pictures and sounds coming from the machine represent the real world. But most four-year-olds have. I don’t know of any research but there is plenty of anecdotal evidence. I have personally met several people highly competent in a second language who claimed they learned it by watching TV at a young age. A significant chunk of my own competence in English comes from listening to radio, audio books and watching TV drama. How much of our first language competence comes from reading books and watching TV? That’s not to say that personal interaction is not important – after all, we need to learn enough to understand what the 2D images on the screen represent. But how much do we need to learn? Neither Kuhl nor Kandel has the answer but both are ready (at least by implication) to shape policy regarding language learning. In the last few years, several reports have raised questions about overreach by neuroscience (both in methods and in assumptions about their validity) but even perfectly good neuroscience can be bad scholarship when it extends its claims far beyond what the evidence can support.

The Isomorphism Fallacy

This section of the post is partly based on a paper I presented at a Czech cognitive science conference about 3 years ago called Isomorphism as a heuristic and philosophical problem.

The fundamental problem underlying the overreach of basic neuroscience research is the fallacy of isomorphism. This fallacy presumes that the same structures we see in language, behavior and society must have structural counterparts in the brain. So there’s a bit of the brain that deals with nouns. Another bit that deals with being sorry. Possibly another one that deals with voting Republican (as Woody Allen proved in “Everyone Says I Love You”). But at the moment the evidence for this is extremely weak, at best. And there is no intrinsic need for a structural correspondence to exist. Sidney Lamb came up with a wonderful analogy that I’m still working my way through. He says (recalling an old ‘Aggie’ joke) that trying to figure out where the bits we know as language structure are in the brain is like trying to work out how to fit the roll that comes out of a tube of toothpaste back into the container. This is obviously a fool’s errand. There’s nothing in the toothpaste container that in any way resembles the colorful and tubular object we get when we squeeze the container. We get that shape through an interaction of the substance, the container, external force, and the shape of the opening. It seems to me entirely plausible that the link between language and the brain is much more like that between the paste, the container and their environment than like that between a bunch of objects and a box. The structures that come out are the result of things we don’t quite understand happening in the brain interacting with its environment. (I’m not saying that that’s how it is, just that it’s plausible.) The other thing that lends it credence is the fact that things like nouns or fluency are social constructs with fuzzy boundaries, not hard discrete objects, so actually localizing them in the brain would be a bit of a surprise. Not that it can’t be done, but the burden of evidence for making this a credible finding is substantial.

Now, I think that the same problem applies to looking for isomorphism the other way. Lamb himself tries to look at grammar by looking for connections resembling the behavior of activating neurons. I don’t see this going anywhere. George Lakoff (who influenced me more than any other linguist in the world) seems to think that a Neural Theory of Language is the next step in the development of linguistics. At one point he and many others thought that mirror neurons say something about language but now that seems to have been brought into question. But why do we need mirror neurons when we already know a lot about the imitative behaviors they’re supposed to facilitate? Perhaps as a treatment and diagnostic protocol for pathologies, but is this really more than story-telling? Jerome Feldman described NTL in his book “From Molecule to Metaphor” but his main contribution, it seems to me, lies in showing how complex language phenomena can be modelled with brain-like neural networks, not in saying anything new about these phenomena (see here for an even harsher treatment). The same goes for Embodied Construction Grammar. I entirely share ECG’s linguistic assumptions but the problem is that it tries to link its descriptive apparatus directly to the formalisms necessary for modeling. This proved to be a disaster for the generative project, which projected its formalisms into language with an imperfect fit and now spends most of its time refining those formalisms rather than studying language.

So far I don’t see any advantage in linking language to the brain in either the way Kuhl et al. or Feldman et al. try to do it (again with the possible exception of pathologies). In his recent paper on compositionality, Feldman describes research that shows that spatial areas are activated in conjunction with spatial terms and that sentence processing time increases as the sentence gets removed from “natural spatial orientation”. But brain imaging at best confirms what we already knew. And how useful is that confirmatory knowledge? I would argue not very. In fact there is a danger that we will start thinking of brain imaging as a necessary confirmation of linguistic theory. Feldman takes a step in this dangerous direction when he says that with the advent of new techniques of neuroscience we can finally study language “scientifically”. [Shudder.]

We know there’s a connection between language and the brain (more systematic than with language and the foot, for instance) but so far nobody’s shown convincingly that we can explain much about language by looking at the brain (or vice versa). Language is best studied as its own incredibly multifaceted beast and so is the brain. We need to know a lot more about language and about the brain before we can start projecting one into the other.

And at the moment, brain science is the junior partner here. We know a lot about language and can find out more without looking for explanations in the brain. It seems as foolish as trying to illuminate language by looking inside a computer (as Chomsky’s followers keep doing). The same question that I’m asking for language was asked about cognitive processes (a closely related thing) by William Uttal in The New Phrenology, who asks “whether psychological processes can be defined and isolated in a way that permits them to be associated with particular brain regions” and warns against a “neuroreductionist wild goose chase” – and how else can we characterize Kuhl’s performance – lest we fall “victim to what may be a ‘neo-phrenological’ fad”. Michael Shermer voiced a similar concern in Scientific American:

The brain is not random kludge, of course, so the search for neural networks associated with psychological concepts is a worthy one, as long as we do not succumb to the siren song of phrenology.

What does a “siren song of phrenology” sound like? I imagine it would sound pretty much like this quote by Kuhl:

We are embarking on a grand and golden age of knowledge about child’s brain development. We’re going to be able to see a child’s brain as they experience an emotion, as they learn to speak and read, as they solve a math problem, as they have an idea. And we’re going to be able to invent brain-based interventions for children who have difficulty learning.

I have no doubt that there are some learning difficulties for which a ‘brain-based intervention’ (whatever that is) may be effective. But they are such a relatively small part of the universe of learning difficulties that they hardly warrant a bombastic claim like the one above. I could find nothing in Kuhl’s narrow research that would support this assertion. Learning and language are complex psycho-social phenomena that are unlikely to have straightforward counterparts in brain activations such as can be seen by even the most advanced modern neuroimaging technology. There may well be some straightforward pathologies that can be identified and have some sort of treatment aimed at them. The problem is that brain pathologies are not necessarily opposites of a typically functioning brain (a fallacy that has long plagued interpretation of the evidence from aphasias) – it is, as brain plasticity would suggest, just as likely that at least some brain pathologies simply create new qualities rather than flipping an on/off switch on existing qualities. Plus there is the historical tendency of the self-styled hard sciences to horn in on areas where established disciplines have accumulated lots of knowledge, ignore that knowledge, declare a reductionist victory, fail, and not admit failure.

For the foreseeable future, the brain remains a really poor metaphor for language and other social constructs. We are perhaps predestined to find similarities in anything we look at, but researchers ought to have learned by now to be cautious about them. Today’s neuroscientists should be very careful that they don’t look as foolish to future generations as phrenologists and skull measurers look to us now.

In praise of non-reductionist neuroscience

Let me reiterate, I have nothing against brain research. The more of it, the better! But it needs to be much more honest about its achievements and limitations (as much as it can given the politics of research funding). Saying the sort of things Patricia Kuhl does with incredibly flimsy evidence and complete disregard for other disciplines is good for the funding but awful for actually obtaining good results. (Note: The brevity of the TED format is not an excuse in this case.)

A much more promising overview of applied neuroscience is a report by the Royal Society on education and the brain that is far more realistic about the state of neurocognitive research and whose authors admit at the outset: “There is enormous variation between individuals, and brain-behaviour relationships are complex.”

The report authors go on to enumerate the things they feel we can claim as knowledge about the brain:

  1. The brain’s plasticity
  2. The brain’s response to reward
  3. The brain’s self-regulatory processes
  4. Brain-external factors of cognitive development
  5. Individual differences in learning as connected to the brain and genome
  6. Neuroscience connection to adaptive learning technology

So this is a fairly modest list, made even more modest by the formulations of the actual knowledge. I could only find a handful of statements made to support the general claims that do not contain a hedge: “research suggests”, “may mean”, “appears to be”, “seems to be”, “probably”. This modesty in research interpretation does not always make its way into the report’s policy suggestions (mainly suggestions 1 and 2). Despite this, I think anybody who finds Patricia Kuhl’s claims interesting would do well to read this report and pay careful attention to the actual findings described there.

Another possible problem for those making wide-reaching conclusions is the relative newness of the research on which these recommendations are based. I had a brief look at the citations in the report and only about half are actually related to primary brain research. Of those, exactly half were published in 2009 (8) and 2010 (20) and only two in the 1990s. This is in contrast to language acquisition and multilingualism research, which can point to decades of consistently replicable findings and relatively stable and reliable methods. We need to be afraid, very afraid, of sexy new findings when they relate to what is perceived as the “nature” of humans. At this point, as a linguist looking at neuroscience (and the history of the promise of neuroscience), my attitude is skeptical. I want to see 10 years of independent replication and stable techniques before I will consider basing my descriptions of language and linguistic behavior on neuroimaging. There’s just too much of ‘now we can see stuff in the brain we couldn’t see before, so this new version of what we think the brain is doing is definitely what it’s doing’. Plus the assumption that exponential growth in the precision of brain mapping will result in the same growth in the identification of brain functions is far from a sure thing (cf. genome decoding). Exponential growth in computer speed only led to incremental increases in computer usability. And the next logical step in the once skyrocketing development of automobiles was not flying cars but pretty much the same, slightly better cars (even though they look completely different under the hood).

The sort of knowledge it takes to learn and do good neuroscience is staggering. The scientists who study the brain deserve all the personal accolades they get. But the actual knowledge they generate about issues relating to language and other social constructs is much less overwhelming. Even a tiny clinical advance, such as helping a relatively small number of people communicate who otherwise wouldn’t be able to express themselves, makes this worthwhile. But we must not confuse clinical advances with theoretical advances and must be very cautious when applying these to policy fields that are related more by similarity than by a direct causal connection.


The most ridiculous metaphor of education courtesy of an economics professor


Acclaimed academics have policy agendas just like anybody else. And often they let them interfere with a straightforward critical analysis of their output. The monumental capacity for blindness of highly intelligent people is sometimes staggering. Metaphors and analogies (the same thing for the purposes of metaphor hacking) make thinkers particularly prone to mis-projection blindness. Edward Glaeser, a Harvard economics prof, is just the latest in a long line of economists and blowhards who think they have the education system licked by comparing it to some free-market gimmick. They generally reveal that they know or care precious little about the full extent of the processes involved in the market, and Glaeser is a shining example of this. His analogy is so preposterous and needs so little thought to break down that I can’t believe he didn’t take a few minutes to do it himself. Well, actually, I can believe it. It’s pretty, neat and seductive. So who cares that it is pure nonsense? Here’s what he said:

Why Cities Rock | Freakonomics Radio: I want you to just imagine if for example, instead of having a New York restaurant scene that was dominated by private entrepreneurs who competed wildly with each other trying to come up with new, new things. The bad restaurants collapse; the good restaurants go on to cooking show fame. You have these powerful forces of competition and innovation working. Imagine instead if there was a food superintendent who operated a system of canteens where the menus were decided at the local level, and every New Yorker had to eat in these canteens. Well, the food would be awful. And that’s kind of what we decided to do with schooling. Instead of harnessing the urban ability to provide innovation, competition, new entry, we’ve put together a system where we’ve turned all that stuff off, and we’ve allowed only a huge advantage for a local, public monopoly. It’s very, very difficult to fix this. I think the most hopeful signs, and there’s been as you know a steady stream of economics papers on this, the most hopeful signs I think are coming from charter schools, which are particularly effective in urban areas. And it’s not so much that the average charter school is so much better than the average public school, but rather that in charter schools, because they can go bankrupt, because they can fail, the good ones will succeed, and the bad ones will drop out of the market. And certainly we’ve seen lots of great randomized studies that have shown the ability of charters to deliver great test score results.

As we know, metaphors (and their ilk) rely on projections from one domain to another. Generative metaphors (of which this is one) then try to base a solution on a new domain which results from blending the source and target domains.

So this is how Glaeser envisions the domain of New York restaurants: there is competition, which drives up the quality of the food (note that he didn’t mention driving down prices, lowering expense per unit, and other tricks used by ‘wildly competing entrepreneurs’). Restaurateurs and chefs must strive to provide better food than others because there is so much choice that people will flock to their competitors for better food.

This is how he wants to project it onto schooling: give people more choice (and the means to exercise that choice, thanks to a city’s short commutes) and this will result in competition; the competition will increase experimentation and as a result the quality of education will go up. He also mentions test scores at the end but these have little to do with education (but why should somebody at Harvard know that?).

Of course, he makes most of the argument through a reverse projection, where he asks us to imagine what the New York restaurants would look like if they were run like a centralized public school system. He envisions the result as something like Apple’s 1984 commercial: a sea of bland canteens with awful food. But this is just so much elitist blather. Glaeser should be ashamed of himself for not thinking this through.

First, what he describes is only true of the top tier of New York restaurants. The sort of places the upper-middle class go to because of a review on Yelp. The majority of places where New Yorkers (and people everywhere) eat their lunches and the occasional dinner are either real canteens, some local greasy spoon, or a chain that makes its consistent prices and dining experiences possible through resolute mediocrity. The Zagat guide is for special occasions, not daily nutrition.

Second, Glaeser never asks how this maps onto schooling or education in general. Probably because the answer would be that it doesn’t. Glaeser certainly refused to say anything useful about his analogy. He went far enough to promote his shallow ideology and stopped while the stopping was good. Let’s look at a few possible mappings and see how we fare.

So first we have the quality of the food. This would seem to map quite nicely onto the quality of education. But it doesn’t. Or at least not in the way Glaeser and his like would like. The quality of food that can impact competition is a surface property. Nor can we always trust people to judge the quality apart from the decor of the restaurant or its reputation – just like with wine, they are very likely to judge the quality based on a review or the recommendation of a trusted acquaintance. In Glaeser’s analogy, we’re not really talking about the quality of food but the quality of the dining experience. And if we project this onto the quality of a school, we’re only increasing the scope of the problem. No matter how limited and unreliable, we can at least judge the quality of the overall dining experience by our own reaction to our experience. But with schools, the experience is mediated through the child and the most important criterion of quality – viz. an educated human being at the end – is deferred until long after the decision on quality has been made. It’s like judging the quality of a restaurant we go to for an anniversary dinner by whether we will be healthy in 5 years. Of course, we can force such judgements by arbitrarily ranking schools based on a single number – like the disastrous UK league tables that haven’t improved the education of a single child but have made a lot of people extremely anxious.

The top restaurants (where the competition makes a difference) don’t look at food from the perspective of what matters for life, namely nutrition. It’s quite likely the most popular restaurants don’t serve anything particularly healthy or prepared with regard to its environmental impact. Quality is only important to them as one of many competitive advantages. They also use a number of tricks to make the dining experience better – cheat on ingredients, serve small portions on large plates, etc. They rely on ‘secret recipes’ – the last thing we want to see in education. And this is exactly the experience of schools that compete in the market. They fudge, cheat and flat out lie to protect their competitive advantage. They provide the minimum of education that they can get away with while looking good. Glaeser also conveniently forgets that there is a huge amount of centralized oversight of New York restaurants – much more, in some ways, than of charter schools. Quality is only one of the results of rampant competition, and oversight is necessary to protect consumers. This is much more important in schools than in restaurants (though it almost seems that restaurants have more of it than schools, proportionally to their importance).

But that is only one part of this important mismapping; the other is the process of competition itself. Many economists forget that market forces don’t work on their own. They work off the backs of the cheated and underserved. Bad restaurants don’t go out of business by some market magic. They go out of business because enough people ate there and were cheated, served bad food or almost poisoned. And this experience had to have been bad enough for them to talk about it and dissuade others from visiting. With restaurants, the individual cost is relatively minor (at least for those comfortably off). You have to contribute one or two bad meals or ruined evenings a year to keep the invisible hand doing its business among the chefs of New York. (This could be significant to someone who only goes out once every few months but it is still something you can get over.) Also, the cost of making a choice is virtually nil. It takes no effort to go to a different restaurant or to choose to eat at home. Except for the small risk of food poisoning, you’ve made no investment in your choice and the potential loss is minimal.

However, in the case of schooling, you’re making a long-term commitment (at least a semester or a year, but most likely at least four years). You can shrug off a bad meal, but what about a wasted half-decade of your child’s life? Or what if you enrolled your child in the educational equivalent of a Burger King serving nothing but giant Whoppers? Everything seems fine all along but the result is clogged intellectual arteries. Also, the costs of a school going out of business are exceedingly high (and here Glaeser is one of the honest few who admit that bankrupt schools are a desirable outcome of competition in education) – both financial and emotional. Let’s say a school goes out of business and a poor parent who has invested in books, a school uniform and transportation has to start all over again in a new school. Or how about the toll that making new friends, getting used to new teachers, etc. takes on a child? How many ruined childhoods is Glaeser willing to accept for the benefits of his ideology? As far as I know, the churn among New York restaurants is quite significant – could the education system sustain 10% or even 1% of schools going out of business every year?

And more importantly, what about great schools going out of business because of the financial mismanagement of capitalist wannabes? Not all market failures (maybe not even most) are due to low quality. Bad timing, ruthless competition, impatient investors and insufficient growth have killed many a great product. How many great schools would succumb to one of these? And won’t we see the same race to mediocrity once the ‘safe middle ground’ of survival is discovered? How many schools will take on the risk of innovation in the face of relentless market pressures? For a chef, one bad recipe is just a learning experience. For a school, one unsuccessful innovation can lead to absolute disaster.

But all that assumes we can even map the “quality of education” onto quality in any sphere of commercial activity whatsoever. From what business do you get a product or service over four or eight years that requires the daily performance of a task as complex and variable as caring for and educating a young person? Not your electricity provider, who provides a constant but non-variable service, nor your medical care provider, who offers a variable but only periodical service. Also, the requirements of the “consumers of education” keep changing over time. They may have wanted a rounded and fulfilling education for their child at the start but just want them to get to university at the end. You can measure quality by test scores or graduation rates but that still doesn’t guarantee success for roughly 10-20% of students even in the best of schools.

To conclude, fine food plays a role in the prosperity of restaurants, but so do convenience and habit. The quality of education is too complex to map successfully onto the quality of food (and possibly onto any single commercial product). And even if that were possible, the cost of making the market forces work is incomparably higher in education than in dining. Glaeser’s proposed model for reform is just as likely to produce pinnacles of excellence as ruthlessly competitive McDonald’s-like chains of garbage.

There’s nothing wrong with using metaphors to try to look for ways to improve education. But generally, these should be very local rather than global and should always have their limits carefully investigated. That means detailed understanding of both domains, meticulous mappings between them, and attention to the relationships between those mappings. Not all mappings need to be perfect and some need not be there at all (for instance, the computer virus is still a useful metaphor even though it doesn’t spread through the air), but this should be done consciously and with care. Steve Jones once said of evolution that metaphor is to it what bird excrement is to statues. The same often goes for education, but it doesn’t have to.

Finally, this analysis doesn’t necessarily imply that the current system is the best there can be or even that it is any good (although I think it’s pretty good). Just that reforming it based on this cockamamie metaphor could be extremely dangerous. New solutions must ultimately be judged on their own merit, but with many of these market metaphors, their merit is irretrievably tied to the power of the initial metaphor and not to any intrinsic solution.

UPDATE: It seems I may have been a bit too harsh on Glaeser. Observational Epidemiology posts this quote from his book (the one he was promoting on the Freakonomics podcast):

All of the world’s older cities have suffered the great scourges of urban life: disease, crime, congestion. And the fight against these ills has never been won by passively accepting things as they are or by mindlessly relying on the free market.

Ok, so he’s not just a mindless free-marketeer. So why on earth would he suggest the above as a suitable metaphor to base educational reform on?


Hacking a metaphor in five steps


Preliminaries


1. Before you start metaphor hacking you must first accept that you have no choice but to speak in some sort of figurative fashion. Almost nothing worth saying is entirely literal and there are many things whose “literalness” is rooted in metaphor. Look at “I sat in a chair the whole day.” It looks very literal at first glance but it depends on our understanding of a chair as a container (e.g. he was spilling out of his chair) and the day as an object (e.g. she was counting the days, cutting the day short, a long day, etc.)

2. You must also learn to recognize how metaphors are constructed through mappings from one domain to another. Sometimes these mappings are explicit, sometimes they are hidden; sometimes they are clear-cut one-to-one connections and sometimes they are fuzzy and cross levels of categorization. But they’re there. If you say “life is a journey”, you can also say “I’ve reached a fork in the road” or “I’ve hit a rough patch” because you map elements of the “road/journey domain” such as intersections, rocky surfaces, hills, etc. to elements of the “life domain” such as decisions and difficult time periods. This way of thinking about metaphor was popularized by Lakoff and Johnson in their 1980 book “Metaphors We Live By”, which is a great weekend read. However, do read the 2003 edition, which contains an important additional chapter.

Metaphor hacking

Once you’ve done the above, you can start hacking (or really do them at the same time).

1. Find an example of a metaphor being used in a way that limits your ability to achieve something or one that constrains your thinking or actions. For example, “education is a marketplace.”

2. Identify the domains involved in the metaphor. The source domain is the domain of knowledge or experience which is being used to structure our understanding of the target domain. This is frequently confused with concrete/abstract or known/unknown, but very often the source domain is just as abstract or as well (or little) known as the target domain. For example: the source domain of the marketplace and business is no more concrete or better known than the target domain of education. But it can still be used to structure our understanding of the domain of education.

3. Identify the most common mappings between the source and target domains. These generally have the form of “X is (like) Y” and carry with them the assumption that if X is like Y, it should have a similar relationship to Z or perform similar activities. The “is like” function relies on a fuzzy concept of identity, a sort of family resemblance. For example, in the “education is a marketplace” metaphor, some common mappings are “students are customers” and “schools are companies providing a service”. Don’t make any judgements at this stage. Simply list as many mappings as you can find (there is a sketch of what such a list might look like after these steps).

4. See which of the existing mappings are problematic in some way. Some mappings may lead us to others which we didn’t set out to create. This could be good or bad. For instance, if we think of students as the clients of schools, it’s a very short step to thinking of teachers as service staff and of performance pay. This may be good or bad. But it also leads to students saying “I’ve paid you money for my education, so I deserve to pass.” Which is a consequence very few would describe as good. You can also look for one-to-many mappings to see where the metaphor may get you into trouble. For example, if schools are businesses, who is their customer? Students, parents, government or society? What is the currency? Knowledge, career prospects, etc.? There’s nothing intrinsically wrong with one-to-many mappings but they can underscore a possible problem area in the way the metaphor is being used to generate new understandings.

5. Finally, find other possible mappings and try to imagine what the consequences would be. For this, you must strive to learn as much as possible about both of the domains involved and keep an open mind about the mappings. Anything goes. This can be done in a negative manner to bring into question the premise of the metaphor. For instance, Jeffrey Henig pointed out in his book on the market metaphor in education that one of the key prerequisites to the functioning of the market is the failure of business entities, but none of the market reformers in education have provided a sufficient equivalent of failure in their market model of schools. This should certainly give the market advocates pause. It doesn’t automatically mean that the marketplace metaphor cannot help us understand education in a useful way but it points to a possible limit to its utility. This process is similar to the rhetorical technique known as reductio ad absurdum but it has a different purpose. Also, the metaphor hacker will approach this process with an open mind and will rule nothing out as a priori absurd, but will also understand that all these mappings are just options, not necessary consequences.
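To make steps 2 through 4 a bit more tangible, here is a minimal sketch in Python (the language is incidental, and the listed elements are my own illustrative choices, not a definitive analysis of the metaphor). The point is simply that once the mappings are written down explicitly, the one-to-many cases jump out:

```python
# A toy sketch of steps 2-4, not a serious analysis tool: write the
# domains and their mappings down explicitly, then flag the one-to-many
# mappings that tend to mark trouble spots in the metaphor.
from collections import defaultdict

# Step 2: the two domains of the "education is a marketplace" metaphor.
source_domain, target_domain = "marketplace", "education"

# Step 3: common mappings, listed without judgement, each read as
# "source element X is (like) target element Y".
mappings = [
    ("company", "school"),
    ("customer", "student"),
    ("customer", "parent"),
    ("customer", "government"),
    ("service staff", "teacher"),
    ("product", "knowledge"),
    ("business failure", "school closure"),
]

# Step 4: collect the targets of each source element; more than one
# target hints that the metaphor is being stretched at that point.
targets = defaultdict(list)
for source_element, target_element in mappings:
    targets[source_element].append(target_element)

print(f"'{target_domain} is a {source_domain}':")
for source_element, target_elements in targets.items():
    if len(target_elements) > 1:
        print(f"  '{source_element}' maps one-to-many: {target_elements}")
```

Nothing here does any real semantic work, of course; the list is just a convenient way of making the “who exactly is the customer of a school?” question visible at a glance.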

But driving a metaphor forward is most often a positive experience. Donald A. Schön called this kind of metaphor use the “generative metaphor”. He gives a great example from engineering. When trying to design a new type of synthetic bristle for a paintbrush, a group of engineers was stuck because they were trying to figure out how to make the paint stick to the threads. This led to blobs of paint rather than nice smooth surfaces. Until one engineer said, “You know what, a paintbrush is really a pump.” And immediately the research shifted from the surface of the bristles to their flexibility, to create a pump-like environment between the bristles rather than trying to make the paint stick to them. Anywhere else the “paintbrush is a pump” metaphor would have seemed ridiculous but in this context it didn’t even need an explanation. The engineers just got on with their work.

This process never stops. You can always find alternative mappings or alternative domains to help you understand the world. You can even have more than one source domain, in a process called blending (or conceptual integration) that generates new domains. Fauconnier and Turner give the example of the computer virus, which blended the domain of software with the domain of medicine to generate a domain of computer viruses that has some properties of both and some emergent properties of its own. But this is for another time.
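Though a full treatment is indeed for another time, the same toy notation can at least hint at what a blend looks like. The feature lists below are my own rough illustration of the software/medicine example, not Fauconnier and Turner’s actual analysis:

```python
# A rough sketch of a blend: each input domain contributes some of its
# features, and the blend develops emergent features of its own.
software = {"is code", "runs on computers", "is copied between machines"}
medicine = {"infects a host", "spreads between hosts", "can be vaccinated against"}

# The blend selectively inherits from both inputs...
computer_virus = {"is code", "is copied between machines",
                  "infects a host", "spreads between hosts"}
# ...and adds emergent structure that belongs to neither input alone.
computer_virus |= {"is caught by scanning file signatures"}

print("inherited:", sorted(computer_virus & (software | medicine)))
print("emergent: ", sorted(computer_virus - (software | medicine)))
```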

Conclusion

All good hackers, engineers, journalists or even just members of a school or pub debate club have been hacking at metaphors ever since the phrase “is like” appeared in human language (and possibly even before). But this post urges a transition from hacking at metaphors to hacking metaphors in the best sense of the word. This requires some work at understanding how metaphors work and also getting rid of quite a few prejudices. We’re all used to dismissing others’ arguments as “just metaphors”, and “literalness” is seen as a virtue. Once we accept metaphors for what they are, we can start using them to improve how we think and what we do. Not through a wholesale transformation but through little tweaks and a bit of conceptual duct tape. And that’s what the hacker spirit is all about.

Readings

  1. George Lakoff and Mark Johnson, Metaphors we live by (Chicago: University of Chicago Press, 1980).
  2. Jeffrey R. Henig, Rethinking school choice: limits of the market metaphor (Princeton, N.J.: Princeton University Press, 1994).
  3. Donald Alan Schön, Displacement of concepts (London: Tavistock Publications, 1963).
  4. Donald A. Schön, “Generative metaphor: A perspective on problem-setting in social policy,” in Metaphor (Cambridge: Cambridge University Press, 1979), 254-283.
  5. Gilles Fauconnier and Mark Turner, The way we think: conceptual blending and the mind’s hidden complexities (New York: Basic Books, 2002).