Category Archives: Knowledge

Pseudo-education as a weapon: Beyond the ridiculous in linguistic prescriptivism

Teacher in primary school in northern Laos (Photo credit: Wikipedia)

Most of us are all too happy to repeat clichés about education to motivate ourselves and others to engage in this liminal ritual of mass socialization. One such phrase is “knowledge is power”. It is used to refer not just to education, of course, but to all sorts of intelligence gathering, from business to politics. We tell many stories of how knowing something made the difference, from knowing a way of making something work to knowing a secret only the hero or villain is privy to. But in education in particular, it is not just knowing that matters to our tribe but also the display of knowing.

The more I look at education, the more I wonder how much of what is in the curriculum is about signaling rather than a true need for knowledge. Signaling has been used in the economics of education to indicate the complex value of a university degree, but I think it goes much deeper. We make displays of knowledge through the curriculum to make the knowledge itself more valuable. Curriculum designers in all areas engage in complex dances to show how the content maps onto the real world. I have called this education voodoo, other people have spoken of cargo cult education, and yet others have talked about pseudo teaching. I wrote about pseudo teaching when I looked at Niall Ferguson‘s amusing, I think I called it cute, lesson plan of his own greatness. But pseudo teaching only describes the activities performed by teachers in the mistaken belief that they have real educational value. When pseudo teaching relies on pseudo content, I think we can talk more generally about “pseudo education”.

We were all pseudo-educated on a number of subjects: history, science, philosophy, etc. In history lessons, the most cherished “truths” of our past are distorted on a daily basis (see Lies My Teacher Told Me). From biology, we get to remember misinformation about the theory of evolution, starting with attributing the very concept of evolution to Darwin or reducing natural selection to the nonsense of survival of the fittest. We may remember the names of a few philosophers, but it rarely takes us any further than knowing winks at a Monty Python sketch or the mouthing of unexamined platitudes like “the unexamined life is not worth living.”

That in itself is not a problem. Society, despite the omnipresent alarmist tropes, is coping quite well with pseudo-education. Perhaps, it even requires it to function because “it can’t handle the truth”. The problem is that we then judge people on how well they are able to replicate or respond to these pseudo-educated signals. Sometimes, these judgments are just a matter of petty prejudice but sometimes they could have an impact on somebody’s livelihood (and perhaps the former inevitably leads to the latter in aggregate).

Note: I have looked at some history and biology textbooks and they often offer a less distorted portrayal of their subject than what seems to be the outcome in public consciousness. Having the right curriculum and associated materials, then, doesn’t seem to be sufficient to avoid pseudo-education (if indeed avoiding it is desirable).

The one area where pseudo-education has received a lot of attention is language. Since time immemorial, our ways of speaking have served to identify us with one group or layer of society or another. And from its very beginning, education sought to play a role in slotting its charges into linguistic groups with as high a prestige as possible (or rather, as appropriate). And even today, in the academic literature we see references to the educated speaker as an analytic category. This is not a bad thing. Education correlates with exposure to certain types of language and engagement with certain kinds of speech communities. It is not the only way to achieve linguistic competence in those areas but it is the main way for the majority. But becoming an “educated speaker” in this sense is mostly a by-product of education. A sufficient amount of the curriculum and classroom instruction is aimed in this direction to count for something, but most students acquire the in-group ways of speaking without explicit instruction (disadvantaging those who would benefit from it). But probably a more salient output of language education is supposed knowledge about language (as opposed to knowledge of language).

Here students are expected not only to speak appropriately but also to know how this “appropriate language” works. And here is where most of what happens in schools can be called pseudo-education. Most teachers don’t really have any grasp of how language works (even those who took intro to linguistics classes). They are certainly not aware of the more complex issues around the social variability of language or its pragmatic dimension. But even in simple matters like grammar and usage, they are utterly clueless. This is often blamed on past deficiencies of the educational system where “grammar was not taught” to an entire generation. But judging by the behavior of previous generations who received ample instruction in grammar, that is not the problem. Their teachers were just as inept at teaching about language as they are today. They might have been better at labeling parts of speech and their tenses but that’s about it. It is possible that in the days of yore, people complaining about the use of the passive were actually more able to identify passive constructions in a text, but it didn’t make that complaint any less inaccurate (Orwell made a right fool of himself when it turned out that he used more passives than is the norm in English despite kvetching about their evil).

No matter what the content of the school curriculum and the method of instruction, “educated” people go about spouting nonsense when it comes to language. This nonsense seems to have its origins in half-remembered injunctions of their grade school teacher. And because the prime complainers are likely either to have been “good at language” or to have envied the teacher’s approbation of those who were described as being “good at language”, what we end up with in the typical language maven is a mishmash of linguistic prejudice and an unjustified feeling of smug superiority. Every little linguistic label that a person can remember is then trotted out as a badge of honor regardless of how good that person is at deploying it.

And those who spout the loudest get a reputation of being the “grammar experts”, and everybody else who preemptively admits that they are “not good at grammar” defers to them and lets themselves be bullied by them. The most recent case of such bullying was a screed by an otherwise intelligent person in a position of power who decided that he would no longer hire people with bad grammar.

This prompted me to issue a rant on Google Plus, repeated below:

The trouble with pseudo educated blowhards complaining about grammar, like +Kyle Wien, is that they have no idea what grammar is. 90% of the things they complain about are spelling problems. The rest is a mishmash of half-remembered objections from their grade school teacher who got them from some other grammar bigot who doesn’t know their tense from their time.

I’ve got news for you Kyle! People who spell they’re, there and their interchangeably know the grammar of their use. They just don’t differentiate their spelling. It’s called homophony, dude, and English is chock full of it. Look it up. If your desire rose as you smelled a rose, you encountered homophony. Homophony is a ubiquitous feature of all languages. And equally all languages have some high profile homophones that cause trouble for spelling Nazis but almost never for actual understanding. Why? Because when you speak, there is no spelling.

Kyle thinks that what he calls “good grammar” is indicative of attention to detail. Hard to say since he, presumably always perfectly “grammatical”, failed to pay attention to the little detail of the difference between spelling and grammar. The other problem is that I’m sure Kyle and his ilk would be hard pressed to list more than a dozen or so of these “problems”. So his “attention to detail” should really be read as “attention to the few details of language use that annoy Kyle Wien”. He claims to have noticed a correlation in his practice but forgive me if I don’t take his word for it. Once you have developed a prejudice, no matter how outlandish, it is dead easy to find plenty of evidence in its support (not paying attention to any of the details that disconfirm it).

Sure, there’s something to the argument that spelling mistakes in a news item, a blog post or a business newsletter will have an impact on its credibility. But hardly enough to worry about. Not that many people will notice and those who do will have plenty of other cues to make a better-informed judgment. If a misplaced apostrophe is enough to sway them, then either they’re not convinced of the credibility of the source in the first place, or they’re not worth keeping as a customer. Journalists and bloggers engage in so many more significant pursuits that damage their credibility, like fatuous and unresearched claims about grammar, that the odd it’s/its slip-up can hardly make much more than (or is it then) a dent.

Note: I replaced ‘half-wit’ in the original with ‘blowhard’ because I don’t actually believe that Kyle Wien is a half-wit. He may not even be a blowhard. But, you can be a perfectly intelligent person, nice to kittens and beloved by co-workers, and be a blowhard when it comes to grammar. I also fixed a few typos, because I pay attention to detail.

My issue is not that I believe that linguistic purism and prescriptivism are in some way anomalous. In fact, I believe the exact opposite. I think, following a brilliant insight by my linguistics teacher, that we need to think of these phenomena as integral to our linguistic competence. I doubt that there is a linguistic community of any size above 3 that doesn’t enact some form of explicit linguistic normativity.

But when pseudo-knowledge about language is used as an instrument of power, I think it is right to call out the perpetrators and try to shame them. Sure, linguists laugh at them, but I think we all need to follow the example of the Language Log and expose all such examples to public ridicule. Countermand the power.

Post Script: I have been similarly critical of the field of Critical Discourse Analysis which, while based on an accurate insight about language and power, in my view goes on to abuse the power that stems from the knowledge about language to clobber its opponents. My conclusion has been that if you want to study how people speak, study it for its own sake, and if you want to engage with the politics of what they say, do that on political terms, not on linguistic ones. That doesn’t mean that you shouldn’t point out if you feel somebody is using language in manipulative or misleading ways, but if you need the apparatus of a whole academic discipline to do it, you’re doing something wrong.

Who-knows-what-how stories: The scientific and religious knowledge paradox


I never meant to listen to this LSE debate on modern atheism because I’m bored of all the endless moralistic twaddle on both sides but it came on on my MP3 player and before I knew it, I was interested enough not to skip it. Not that it provided any Earth-shattering new insights but on balance it had more to contribute to the debate than a New Atheist diatribe might. And there were a few stories about how people think that were interesting.

The first speaker was the well-known English cleric Giles Fraser, who regaled the audience with his conversion story, starting as an atheist student of Wittgenstein and becoming a Christian who believes in a “Scripture-based” understanding of Christianity. The latter is not surprising given how pathetically obsessed Wittgensteinian scholars are with every twist and turn of their bipolar master’s texts.

But I thought Fraser’s description of how he understands his faith in contrast to his understanding of the dogma was instructive. He says: “Theology is faith seeking understanding. Understanding is not the basis on which one has faith but it is what one does to try to understand the faith one has.”

In a way, faith is a kind of axiomatic knowledge. It’s not something that can or need be usefully questioned but it is something on which to base our further dialog. Obviously, this cannot be equated with religion but it can serve as a reminder of the kind of knowledge religion works off. And it is only in some contexts that this axiomatic knowledge needs to be made explicit or even just pointed to – this only happens when conceptual frames are brought into conflict and need to be negotiated.

Paradox of utility vs essence and faith vs understanding

This kind of knowledge is often contrasted with scientific knowledge – knowledge that is held to be essentially superior due to its utility. But if we look at the supporting arguments, we are faced with a kind of paradox.

The paradox is that scientists claim that their knowledge is essentially different from religious (and other non-scientific) knowledge but the warrant for the special status claim of this knowledge stems from the method of its acquisition rather than its inherent nature. They cite falsificationist principles as foundations of this essential difference and peer review as their practical embodiment (strangely making this one claim immune from the dictum of falsification – of which, I believe, there is ample supply).

But that is not a very consistent argument. The necessary consequences of the practice of peer review fly in the face of the falsificationist agenda. The system of publication and peer review that is in place (and that will always emerge) is guaranteed to minimize any fundamental falsificationism of the central principles. Meaning that falsification happens more along Kuhnian rather than Popperian lines. Slowly, in bursts and with great gnashing of teeth and destroying of careers.

Now, religious knowledge does not cite falsificationism as the central warrant of its knowledge. Faith is often given as the central principle underlying religious knowledge and engagement with diverse exegetic authorities as the enactment of this principle. (Although, crucially, this part of religion is a modern invention brought about by many frame negotiations. For the most part, when it comes to religion, faith and knowing are coterminous. Faith determines the right ways of knowing but it is only rarely negotiated.)

But in practice the way religious knowledge is created, acquired and maintained is not very different from scientific knowledge. Exegesis and peer review are very similar processes in that they both refer to past authorities as sources of arbitration for the expression of new ideas.

And while falsificationism is (perhaps with the exception of some Buddhist or Daoist schools) never the stated principle of religious hermeneutics, in principle, it is hard to describe the numerous religious reforms, movements and even workaday conversions as anything but falsificationist enterprises. Let’s just look at the various waves of monastic movements from Benedictines to Dominicans or Franciscans to Jesuits. They all struggled with reconciling the current situation with the evidence (of the interpretation of scripture) and based their changed practices on the result of this confrontation.

And what about religious reformers like Hus, Luther or Calvin? Wasn’t their intellectual enterprise in essence falsificationist? Or St Paul or Arianism? Or scholastic debates?

‘But religion never invites scrutiny, it never approaches problems with an open mind,’ crow the new atheists. But neither does science at the basic level. Graduate students are told to question everything but they soon learn that this questioning is only good as long as it doesn’t disrupt the research paradigm. Their careers and livelihoods depend on not questioning much of anything. In practice, this is not very different from the personal reading of the Bible – you can have a personal relationship with God as long as it’s not too different from other people’s personal relationships.

The stories we know by

One of the most preposterous pieces of scientific propaganda is Dawkins’ story about an old professor who went to thank a young researcher for disproving his theory. I recently heard it trotted out again on an episode of Start The Week where it was used as proof positive of how science is special – this time as a way of establishing its superiority over political machinations. It may have happened but it’s about as rare as a priest being convinced by an argument about the non-existence of God. The natural and predominant reactions to research “disproving” a theory are to belittle it, deny its validity, ostracise its author, deny its relevance or simply ignore it (a favourite pastime of Chomskyan linguists).

So in practice, there doesn’t seem to be that much of a difference between how scientific and religious knowledge work. They both follow the same cognitive and sociological principles. They both have their stories to tell their followers.

Conversion stories are very popular in all movements and they are always used on both sides of the same argument. There are stories about conversions from one side to the other in the abortion/anti-abortion controversy, environmental debates, diet wars, alternative medicine, and they are an accompanying feature of pretty much any scientific revolution – I’ve even seen the lack of prominent conversions cited as an argument (a bad one) against cognitive linguistics. So a scientist giving examples of the formerly devout seeing the light through a rational argument is just enacting a social script associated with the spreading of knowledge. It’s a common frame negotiation device, not evidence of anything about the nature of the knowledge to whose profession the person was converted.

There are other types of stories about knowledge that scientists like to talk about as much as people of religion. There are stories of the long arduous path to knowledge and stories of mystical gnosis.

The path-to-knowledge stories are told when people talk about the training it takes to achieve a kind of knowledge. They are particularly popular about medical doctors (through medical dramas on TV) but they also exist about pretty much any profession including priests and scientists. These stories always have two components, a liminal one (about the high jinks students get up to while avoiding the administration’s strictures and lovingly mocking the crazy demanding teachers) and a sacral one (about the importance of the hard learning that the subject demands). These stories are, of course, based on the sorts of things that happen. But their telling follows a cultural and discursive script. (It would be interesting to do some comparative study here.)

The stories of mystical gnosis are also very interesting. They are told about experts, specialists who achieve knowledge that is too difficult for normal mortals to master. In these stories people are often described as losing themselves in the subject, setting themselves apart, or becoming obsessively focused. This is sometimes combined or alternated with descriptions of achieving sudden clarity.

People tell these stories about themselves as well as about others. When told about others, these stories can be quite schematic – the absent-minded professor, the geek/nerd, the monk in the library, the constantly practising musician (or even boxer). When told about oneself, the sudden-light stories are very common.

Again, these stories reflect a shared cultural framing of people’s experiences of knowledge in the social context. But they cannot be given as evidence of the special nature of one kind of knowledge over another. Just like stories about Gods cannot be taken as evidence of the superiority of some religions.

Utility and essence revisited

But, the argument goes, scientific knowledge is so useful. Just look at all the great things it brought to this world. Doesn’t that mean that its very “essence” is different from religious knowledge?

Here, I think, the other discussant in the podcast, the atheist philosopher, John Gray can provide a useful perspective: “The ‘essence’ is an apologetic invention that someone comes up with later to try and preserve the core of an idea [like Christianity or Marxism] from criticism.”

In other words, this argument is also a kind of storytelling. And we again find these utility and essence stories in many areas following remarkably similar scripts. That does not mean that the stories are false or fabricated or that what they are told about is in some way less valid. All it means is that we should be skeptical about arguments that rely on them as evidence of exclusivity.

Ultimately, looking for the “essence” of any complex phenomenon is always a fool’s errand. Scientists are too fond of their “magic formula” stories where incredibly complex things are described by simple equations like E = mc² or zₙ₊₁ = zₙ² + c. But neither Einstein’s nor Mandelbrot’s little gems actually define the essence of their respective phenomena. They just offer a convenient way of capturing some form of knowledge about them. E = mc² will be found on T-shirts and the Mandelbrot set on the screen savers of people who know little about their relationship to the underlying ideas. They just know they’re important and want to express their allegiance to the movement. Kind of like the people who feel it necessary to proclaim that they “believe” in the theory of evolution. Of course, we could also take some gnostic stories about what it takes to “really” understand these equations – and they do require some quite complex mix of expertise (a lot more complex than the stories would let on).
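As an aside, the Mandelbrot “little gem” really is that compact. Here is a minimal sketch (in Python, my own choice of illustration, nothing from the original post) of the whole recurrence zₙ₊₁ = zₙ² + c; knowing these few lines is obviously not the same as understanding what the set is or why it matters, which is rather the point. The iteration limit and escape radius below are conventional choices, not something the formula itself dictates.

# A minimal sketch of the Mandelbrot recurrence z_(n+1) = z_n^2 + c.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to stay bounded under z -> z*z + c."""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c          # the entire "little gem"
        if abs(z) > 2:         # escaped: definitely not in the set
            return False
    return True                # did not escape within max_iter steps

print(in_mandelbrot(-1 + 0j))  # True: -1 cycles between 0 and -1
print(in_mandelbrot(1 + 0j))   # False: 1 escapes almost immediately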

But we still haven’t dealt with the question of utility. Scientific knowledge derives its current legitimacy from its connection to technology and religious knowledge makes claims on the realms of morality and human relationships. They clash because both also have views on each other’s domains (science on human relationships and religion on origins of the universe). Which is one of the reasons why I don’t think that the non-overlapping magisteria idea is very fruitful (as much as I prefer Gould over the neo-Darwinists).

Here I have to agree with Dawkins and the new atheists. There’s no reason why some prelate should have more of a say on morality or relationships than anyone else. Reading a holy book is a qualification for prescribing ritual, not for the arbitration of morality. But Dawkins should be made to taste his own medicine. There’s no reason why a scientist’s view on the origin of the universe should hold any sway over any theologian’s. The desire of the scientist to provide a cosmogony for the atheist crowd is a strange thing. It seeks to make questions stemming from a non-scientific search for broader meaning consistent with the scientific need for contiguous causal explanations. But the Big Bang or some sort of Primordial Soup don’t provide internal consistency to the scientific enterprise. They leave as many questions open as they give answers to.

It seems that the Augustinian and later scholastic solution – a set of categories external to the ones which are accessible to us – does a better job here. Giles Fraser cites Thomas Aquinas’s example of counting everything in the world and not finding God. That would still be fine because God is not a thing to be counted, or better still, not an entity that fits within our concept of things and the related concept of counting. Or in other words, God created our world with the categories available to the cognizing humans, not in those categories. Of course, to me, that sounds like a very good argument for atheism but it is also why a search for some Grand Unified Theory leaves me cold.

Epistemology as politics

It is a problem that the better philosophers from Parmenides to Wittgenstein tried to express in one way or another. But their problems came when they tried to draw practical conclusions. There is no reason why the two political factions shouldn’t have a good old fight over the two overlapping magisteria. Because the debate about utility is a political one, not an epistemological one. Just because I would rather go to a surgeon than a witch doctor doesn’t mean that the former has tapped into some superior form of cognition. We make utility judgements of a similar nature even within these two domains but we would not draw essentialist epistemological conclusions based on them. People choose their priests and they choose their doctors. Is a bad doctor better than a good healer? I would imagine that there are a whole range of ailments where some innocuous herb would do more good than a placebo with side effects.

But we can consider a less contentious example. Years ago I was involved with TEFL teacher training and hiring. We ran a one-month starter course for teachers of English as a foreign language. And when hiring teachers I would always much rather hire one of these people than somebody with an MA in TESOL. The teachers with the basic knowledge would often do a better job than those with superior knowledge. I would say that when these two groups talked about teaching, their understanding of it would be very different. The MAs would have the research, evidence and theory. The one-month trainees would have lots of useful techniques but little understanding of how or why they worked. Yet, there seemed to be an inverse relationship between the “quality” of knowledge and the practical ability of the teacher (or at best no predictable relationship). So I would routinely recommend these “witch-teachers” over the “surgeon-teachers” to schools for hiring because I believed they were better for them.

There are many similar stories where utility and knowledge don’t match up. Again, that doesn’t mean that we should take homeopathy seriously, but it means that the foundations of our refusal to accept homeopathy cannot also be the foundations of placing scientific knowledge outside the regular epistemological constraints of all humanity.

Epistemology, as I have said elsewhere, is much better explained as ethics.

Thus endeth the blog post.

The death of a memory: Missing metaphors of remembering and forgetting?


Memories

I have forgotten a lot of things in my life. Names, faces, numbers, words, facts, events, quotes. Just like for anyone, forgetting is as much a part of my life as remembering. Memories short and long come and go. But only twice in my life have I seen a good memory die under suspicious circumstances.

Both of these were good reliable everyday memories, as much declarative as non-declarative. And both died unexpectedly, without warning and without reprieve. They were both memories of entry codes but I retrieved each in a different way. Both were highly contextualised but each in a different way.

The first time was almost 20 years ago (in 1993) and it was the PIN for my first bank card (before they let you change them). I’d had it for almost two years by then using it every few days for most of that period. I remembered it so well that even after I’d been out of the country for 6 months and not even thinking about it once, I walked up to an ATM on my return and without hesitation, typed it in. And then, about 6 months later, I walked up to another ATM, started typing in the PIN and it just wasn’t there. It was completely gone. I had no memory of it. I knew about the memory but the actual memory completely disappeared. It wasn’t a temporary confusion, it was simply gone and I never remembered it again. This PIN I remembered as a number.

The second death occurred just a few days ago. This time, it was the entrance code to a building. But I only remembered it as a shape on the keypad (as I do for most numbers now). In the intervening years, I’ve memorised a number of PINs and entrance codes. Most I’ve forgotten since, some I remember even now (like the PIN of a card that expired a year ago but that I’d only used once every few months for many years). Simply, the normal processes you’d expect of memory. But this one, I’d been using for about a year since they’d changed it from the previous one. About five months ago I came back from a month-long leave and I remembered it instantly. But three days ago, I walked up to the keypad and the memory was gone. I’d used the keypad at least once if not twice that day already. But that time I walked up to the keypad and nothing. After a few tries I started wondering if I might be typing in the old code from before the change, so I flipped the pattern around (I had a vague memory of once using it to remember the new pattern) and it worked. But the working pattern felt completely foreign. Like one I’d never typed in before. I suddenly understood what it must feel like for someone to recognize their loved one but at the same time be sure that it’s not them. I was really discomfited by this impostor keypad pattern. For a few moments, it felt really uncomfortable – almost an out-of-body (or out-of-memory) experience.

The one thing that set the second forgetting apart from the first one was that I was talking to someone as it happened (the first time I was completely alone on a busy street – I still remember which one, by the way). It was an old colleague who visited the building and was asking me if I knew the code. And seconds after I confidently declared I did, I didn’t. Or I remembered the wrong one.

So in the second case, we could conclude that the presence of someone who had been around when the previous code was being used triggered the former memory and overrode the latter one. But the experience of complete and sudden loss, I recall vividly, was the same. None of my other forgettings were so instant and inexplicable. And I once forgot the face of someone I’d just met as soon as he turned around (which was awkward since he was supposed to come back in a few minutes with his car keys – so I had to stand in the crowd looking expectantly at everyone until the guy returned and nodded to me).

What does this mean for our metaphors of memory based on the various research paradigms? None seem to apply. These were not repressed memories associated with traumatic events (although the forgetting itself was extremely mildly traumatic). These were not quite declarative memories nor were they exactly non-declarative. They both required operations in working memory but were long-term. They were both triggered by context and had a muscle-memory component. But the first one I could remember as a number whereas the second one only as a shape and only on that specific keypad. But neither was subject to long-term decay. In fact, both proved resistant to decay, surviving long or longish periods of disuse. They both were (or felt) as solid memories as my own name. Until they were there no more. The closest introspective analogy to me seems to be Luria’s man who remembered too much, who once forgot a white object because he placed it against a white background in his memory which made it disappear.

The current research on memory seems to be converging on the idea that we reconstruct our memories. Our brains are not just some stores with shelves from which memories can be plucked. Although memories are highly contextual, they are not discrete objects encoded in our brain as files on a hard drive. But for these two memories, the hard drive metaphor seems more appropriate. It’s as if a tiny part of my brain that held those memories was corrupted and they simply winked out of existence at the flip of a bit. Just like a hard drive.

There’s a lot of research on memory loss, decay and reliability but I don’t know of any which could account for these two deaths. We have many models of memory which can be selectively applied to most memory related events but these two fall between the cracks.

All the research I could find is either on sudden specific-event-induced amnesia (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1961972/?page=1) or senescence (http://brain.oxfordjournals.org/content/89/3/539.extract). In both cases, there are clear causes of the memory loss and the loss is much more total (complete events or entire identity). I could find nothing about the sudden loss of a specific reliable memory in a healthy individual (given that it only happened twice 18 years apart – I was 21 when it happened first – I assume this is not caused by any pathology in my brain) not precipitated by any traumatic (or other) event. Yet, I suspect this happens all the time… So what gives?

Religion, if it exists, is negotiation of underdetermined metaphoric cognition [UPDATED]


Preamble

Richard Buchta – Portrait of a Zande witchdoctor (Image via Wikipedia)

I am an old atheist and a new agnostic. I don’t believe in God in the old-fashioned Russellian way – if I don’t believe in Krishna, Zeus, water sprites or the little teapot orbiting the Sun, I don’t believe in God and the associated supernatural phenomena (monotheism my foot!). However, I am agnostic about nearly everything else, and everything else in the new atheist way is pretty much science and reason. If history is any judge (and it is), most of what we believe to be scientific truths today is bunk. This makes me feel not at all superior to people of faith. Sure, I think what they believe is a stupid and irrational thing to believe, but I don’t think they are stupid or irrational people for believing it. The smartest people believe the most preposterous things – just look at Newton, Chomsky or Dawkins.

But one thing I’m pretty certain about is religion. Or rather, I’m pretty certain it does not exist. It is in many ways an invention of the Enlightenment and, just like equality and brotherhood, it only makes sense until you see the first person winding up the guillotine. Religion only makes sense if you want to set a certain set of beliefs and practices aside, most importantly to deprive their holders of power and legitimacy.

But is it a useful concept for deliberation about human universals? I think on balance it is not. Religion is a collection of stated beliefs, internal beliefs and public and private practices. In other words, religion is a way of life for a community of people. Or to be etymological about it, it is what binds the community together. The nature of the content of those beliefs is entirely irrelevant to the true human universal: a shared collection of beliefs and practices develops over a short amount of time inside any group of people. And when I say beliefs, I mean all explicit and implicit knowledge and applied cognition.

In this sense, modern secular humanism is just as much a religion as rabid evangelicalism.

On the mundane nature of sacred thought

So, why, the scientist asks, do all cultures develop knowledge systems that include belief in the supernatural? That’s because they don’t. For instance, as Geertz so beautifully described in his reinterpretation of the Azande, witchcraft isn’t supernatural. It is the natural explanation after everything else has failed. We keep forgetting that until Einstein, everybody believed in this (as Descartes pointed out) supernatural force called gravity that could somehow magically transmit motion across vast distances. And now (as Angel and Demetis point out) we believe in magical sheets that make gravity all nice and natural. Or maybe strings? Give me a break!

What about the distinction between the sacred and mundane, you ask? Well, that obviously exists, including the liminality between them. But sacred/mundane is not limited to anything supernatural and magical – just look at the US treatment of the flag or citizenship. In fact, even the most profoundly sacred and mystical has a significant mundane dimension necessitated by its logistics.

There are no universals of faith. But there are some strong tendencies among the world’s cultures: Ancestor worship, belief in superhuman and non-human (often invisible, sometimes disembodied) agents, sympathetic magic and ritual (which includes belief in empowered and/or sapient substances and objects). This is combined with preserving and placating personal and collective practices.

All of the above describes western atheists as much as the witchcraft believing Azande. We just define the natural differently. Our beliefs in the power of various pills and the public professions of faith in the reality of evolution or the transformative nature of the market fall under the above just as nicely as the rain dance. Sure I’d much rather go to a surgeon with an inflamed appendix than a witch doctor but I’d also much rather go to a renowned witch doctor than an unknown one if that was my only choice. Medicine is simply witchcraft with better peer review.

Leaving the merits of the modern world aside, the question remains why humans seem to converge on similar content of their beliefs. Helen De Cruz and the commenters on her post about the naturalness of religious belief (http://www.cognitionandculture.net/Helen-De-Cruz-s-blog/does-atheism-challenge-the-naturalness-of-religious-belief.html) give a great overview of the current debate on the topic.

They pretty much put to rest some of the evolutionary notions and the innateness of mind/body dualism. I particularly like the proposition Helen De Cruz made building on Pascal’s remark that some people “seem so made that [they] cannot believe”. “For those people,” continues De Cruz, “religious belief requires a constant cognitive effort.”

I think this is a profound statement. I see it as being in line with my thesis of frame negotiation. Some things require more cognitive effort for some people than other things for other people. It doesn’t have to be religion. We know reading requires more cognitive effort for different people in different ways (dyslexics being one group with a particular profile of cognitive difficulties). So does counting, painting, hunting, driving cars, cutting things with knives, taking computers apart, etc. These things are susceptible to training and practice to different degrees with different people.

So it makes perfect sense on the available evidence that different people require different levels of cognitive effort to maintain belief in what is axiomatic for others.

In the comments Mitch Hodge contributed a question to “researchers who propose that mind-body dualism undergirds representations of supernatural entities: What do you do with all of the anthropological evidence that humans represent most all supernatural entities as embodied? How do disembodied beings eat, wear clothes, physically interact with the living and each other?”

This is really important. Before you can talk about content of belief, you need to carefully examine all its aspects. And as I tried to argue above, starting with religion as a category already leads us down certain paths of argumentation that are less than telos-neutral.

But the answer to the “are humans natural mind-body dualists” does not have to be to choose one over the other. I suggest an alternative answer:

Humans are natural schematicists and schema negotiators

What does that mean? Last year, I gave a talk (in Czech) on the “Schematicity and undetermination as two forgotten processes in mind and language”. In it I argue that operating on schematic or in other ways underdetermined concepts is not only possible but it is built into the very fabric of cognition and language. It is extremely common for people to hold incomplete images (Lakoff’s pizza example was the one that set me on this path of thinking) of things in their mind. For instance, on slide 12 of the presentation below, I show different images that Czechs submitted into a competition run online by a national newspaper on “what does baby Jesus look like” (Note: In Czech, it is baby Jesus – or Ježíšek – who delivers the presents on Christmas Eve). The images ran from an angelic adult and a real baby to an outline of the baby in the light to just a light.

[slideshare id=6059571&doc=schematicnostanedourcenost-101207060558-phpapp02]
This shows that people not only hold underdetermined images but that those images are determined to varying degrees (in my little private poll, I came across people who imagined Ježíšek as an old bearded man, and personally, I did not explicitly associate the diminutive ježíšek with the baby Jesus until I had to translate it into English). Discussions like those around the Trinity or the embodied nature of key deities are the results of conversations about which parts of a shared schema it is acceptable to fill out and how to fill them out.

It is basically metaphor (or as I call it, frame) negotiation. Early Christianity was full of these debates and it is not surprising that it wasn’t always the most cognitively parsimonious image that won out.

It is further important that humans have various explicit and implicit strategies to deal with infelicitous schematicity or schema clashes, one of which is to defer parts of their cognition to a collectively recognised authority. I spent years of my youth believing that although the Trinity made no sense to me, there were people to whom it did make sense and to whom, as guardians of sense, I would defer my own imperfect cognition. But any study of the fights over the nature of the Trinity is a perfect illustration of how people negotiate over their imagery. And as in any negotiation, it is not just the power of the argument but also the power of the arguer that determines the outcome.

Christianity is not special here in any regard but it does provide two millennia of documented negotiation of mappings between domains full of schemas and rich images. It starts with St Paul’s denial that circumcision is a necessary condition of being a Christian and goes on into the conceptual contortions surrounding the Trinity debates. Early Christian eschatology also had to constantly renegotiate its foundations as the world stubbornly refused to end, and was in that no different from modern eschatology – be it religion or science based. Reformation movements (from monasticism to Luther or Calvin) also exhibit this profound contrasting of imagery and exploration of mappings, rejecting some, accepting others, ignoring most.

All of these activities lead to paradoxes and thus to the spurring of heretical and reform movements. Waldensians or Lutherans or Hussites all arrived at their disagreement with the dogma through painstaking analysis of the imagery contained in the text. Arianism was in its time the “thinking man’s” Christianity, because it made a lot more sense than the Nicene consensus. No wonder it experienced a post-reformation resurgence. But the problems it exposed were equally serious and it was ultimately rejected for probably good reason.

How is it possible that the Nicene consensus held so long as the mainstream interpretation? Surely, Luther could not have been the first to notice the discrepancies between liturgy and scripture. Two reasons: the inventory of expression and the underdetermination of conceptual representations.

I will deal with the idea of inventory in a separate post. Briefly, it is based on the idea of cognitive grammar that language is not a system but rather a patterned inventory of symbolic units. This inventory is neither static nor does it have clear boundaries, but it functions to constrain what is available for both speech and imagination. Because of the nature of symbolic units and their relationships, the inventory (a usage-based beast) is what constrains our ability to say certain things although they are possible by pure grammatical or conceptual manipulation. By the same token, the inventory makes it possible to say things that make no demonstrable sense.

Frame (or metaphor) negotiation operates on the inventory but also has to battle against its constraints. The units in the inventory range in their schematicity and determination but they are all schematic and underdetermined to some degree. Most of the time this aids effortless conceptual integration. However, a significant proportion of the time, particularly for some speakers, the conceptual integration hits a snag. A part of a schematic concept usually left underdetermined is filled out and it prevents easy integration and an appropriate mapping needs to be negotiated.

For example, it is possible to say that Jesus is God and Jesus is the Son of God even in the same sentence and as long as we don’t project the offspring mapping on the identity mapping, we don’t have a problem. People do these things all the time. We say things like “taking a human life is the ultimate decision” and “collateral damage must be expected in war” and abhor people calling soldiers “murderers”. But the alternative to “justified war” namely “war is murder” is just as easy to sanction given the available imagery. So people have a choice.

But as soon as we flesh out the imagery of “X is son of Y” and “X is Y”, we see that something is wrong. This in no way matches our experience of what is possible. Ex definitio, “X is son of Y” OR “X is Y”. Not AND. So we need to do other things to make the nature of “X is Y” compatible with “X is the son of Y”. And we can either do this by attributing a special nature to one or both of the statements. Or we can acknowledge the problem and defer knowledge of the nature to a higher authority. This is something we do all the time anyway.

Drawing from René Descartes (1596–1650) (Image via Wikipedia)

So to bring the discussion to the nature of embodiment, there is no difficulty for a single person or a culture to maintain that some special being is disembodied and yet can perform many embodied functions (like eating). My favorite joke, told to me by a devout Catholic, begins: “The Holy Trinity are sitting around a table talking about where they’re going to go for their vacation…” Neither my friend nor I assumed that the Trinity is in any way an embodied entity, but it was nevertheless very easy for us to talk about its members as embodied beings. Another Catholic joke:

A sausage goes to Heaven. St Peter is busy so he sends Mary to answer the Pearly Gates. When she comes back he asks: “Who was it?” She responds: “I don’t know, but it sure looked like the Holy Ghost.”

Surely a more embodied joke is very difficult to imagine. But it just illustrates the availability of rich imagery to fill out schemas in a way that forces us to have two incompatible images in our heads at the same time. A square circle, of sorts.

There is nothing sophisticated about this. Any society is going to have members who are more likely to explore the possibilities of integration of items within its conceptual inventory. In some cases, it will get them ostracised. In most cases, it will just be filed away as an idiosyncratic vision that makes a lot of sense (but is not worth acting on). That’s why people don’t put stand-up comedians in charge or organize their lives around their dictums. What they say often “makes perfect sense” but this sense can be filed away into the liminal space of our brain where it does not interfere with what makes sense in the mundane or the sacred context of conceptual integration. And in a few special cases, this sort of behavior will start new movements and faiths.

These “special” individuals are probably present in quite a large number in any group. They’re the people who like puns or the ones who correct everyone’s grammar. But no matter how committed they are to exploring the imagery of a particular area (content of faith, moral philosophy, use of mobile phones or genetic engineering) they will never be able to rid it of its schematicity and indeterminacies. They will simply flesh out some schemas and strip off the flesh of others. As Kuhn said, a scientific revolution is notable not just for the new it brings but also for all the old it ignores. And not all of the new will be good and not all of the old will be bad.

Not that I’m all that interested in the origins of language, but my claim is that the negotiation of the mappings between underdetermined schemas is at the very foundation of language and thought. And as such it must have been present from the very beginning of language – it may have even predated language. “Religious” thought and practice must have emerged very quickly; as soon as one established category came into contact with another category. The first statement of identity or similarity was probably quite shortly followed by “well, X is only Y, in as much as Z” (expressed in grunts, of course). And since bodies are so central to our thought, it is not surprising that imagery of our bodies doing special things, or of us not having a body and yet preserving our identity, crops up pretty much everywhere. Hypothesizing some sort of innate mind-body dualism is taking an awfully big hammer to a medium-sized nail. And looking for an evolutionary advantage in it is little more than the telling of campfire stories of heroic deeds.

Epilogue

To look for an evolutionary foundation of religious belief is little more sophisticated than arguing about the nature of virgin birth. If nothing else, the fervor of its proponents should be highly troubling. How important is it that we fill in all the gaps left over by neo-Darwinism? There is nothing special about believing in Ghosts or Witches. It is an epiphenomenon of our embodied and socialised thought. Sure, it’s probably worth studying the brains of mushroom-taking mystical groups. But not as a religious phenomenon. Just as something that people do. No more special than keeping a blog. Like this.

Post Script on Liminality [UPDATE a year or so later]

Cris Campbell on his Genealogy of Religion Blog convinced me with the aid of some useful references that we probably need to take the natural/supernatural distinction a bit more seriously than I did above. I still don’t agree it’s as central as is often claimed but I agree that it cannot be reduced to the sacred v. mundane as I tried above.  So instead I proposed the distinction between liminal and metaliminal in a comment on the blog. Here’s a slightly edited version (which may or may not become its own post):

I read with interest Hultkranz’s suggestion for an empirical basis for the concept of the supernatural but I think there are still problems with this view. I don’t see the warrant for the leap from “all religions contain some concept of the supernatural” to “supernatural forms the basis of religion”. Humans need a way to talk about the experienced and the adduced and this will very ‘naturally’ take the form of “supernatural” (I’m aware of McKinnon’s dissatisfaction with calling this non-empirical).

On this account, science itself is belief in the supernatural – i.e. postulating invisible agents outside our direct experience. And in particular speculative cognitive science and neuroscience have to make giant leaps of faith from their evidence to interpretation. What are the chances that much of what we consider to be givens today will in the future be regarded as much more sophisticated than phrenology? But even if we are more charitable to science and place its cognition outside the sphere of that of a conscientious sympathetic magician, the use of science in popular discourse is certainly no different from the use of supernatural beliefs. There’s nothing new, here. Let’s just take the leap from the science of electricity to Frankenstein’s monster. Modern public treatments of genetics and neuroscience are essentially magical. I remember a conversation with an otherwise educated philosophy PhD student who was recoiling in horror from genetic modification of fruit (using fish genes to do something to oranges) as unnatural – or monstrous. Plus we have stories of special states of cognition (absent-minded professors, en-tranced scientists, rigour of study) and ritual gnostic purification (referencing, peer review). The strict naturalist prescriptions of modern science and science education are really not that different from “thou shalt have no other gods before me.”

I am giving these examples partly as an antidote to the hidden normativity in the term ‘supernatural’ (I believe it is possible to mean it non-normatively but it’s not possible for it not to be understood that way by many) but also as an example of why this distinction is not one that relates to religion as opposed to general human existence.

However, I think Hultkranz’s objection to a complete removal of the dichotomy by people like Durkheim and Hymes is a valid one as is his claim of the impossibility of reducing it to the sacred/profane distinction. However, I’d like to propose a different label and consequently framing for it: meta-liminal. By “meta-liminal” I mean beyond the boundaries of daily experience and ethics (a subtle but to me an important difference from non-empirical). The boundaries are revealed to us in liminal spaces and times (as outlined by Turner) and what is beyond them can be behaviours (Greek gods), beings (leprechauns), values (Platonic ideals) or modes of existence (land of the dead). But most importantly, we gain access to them through liminal rituals where we stand with one foot on this side of the boundary and with another on the “other” side. Or rather, we temporarily blur and expand the boundaries and can be in both places at once. (Or possibly both.) This, however, I would claim is a discursively psychological construct and not a cognitively psychological construct. We can study the neural correlates of the various liminal rituals (some of which can be incredibly mundane – like wearing a pin) but searching for a single neural or evolutionary foundation would be pointless.

The quote from Nemeroff and Rozin defining “the supernatural” as that which “generally does not make sense in terms of the contemporary understanding of science” sums up the deficiency of the normative or crypto-normative use of “supernatural”. But even the strictly non-normative use suffers from it.

What I’m trying to say is that not only is religious cognition not a special kind of cognition (in common with MacKendrick), but neither is any other type of cognition (no matter how Popperian its supposed heuristics). The different states of transcendence associated with religious knowing (gnosis), ranging from a vague sense of fear, comfort or awe to a dance- or mushroom-induced trance, are not examples of a special type of cognition. They are universal psychosomatic phenomena that are frequently discursively constructed as having an association with the liminal and meta-liminal. But can we postulate an evolutionary inevitability that connects a new-age whackjob who proclaims that there is something “bigger than us” to a sophisticated theologian to Neil DeGrasse Tyson to a jobbing shaman or priest to a simple client of a religious service? Isn’t it better to talk of cultural opportunism that connects liminal emotional states to socially constructed liminal spaces? Long live the spandrel!

This is not a post-modernist view. I’d say it’s a profoundly empirical one. There are real things that can be said (provided we are aware of the limitations of the medium of speech). And I leave open the possibility that within science, there is a different kind of knowledge (that was, after all, my starting point, I was converted to my stance by empirical evidence so I am willing to respond to more).


Are we the masters of our morality? Yes!

Best Friends Forever #BFF #Friends #Winnetou

Morality and the freedom of the human spirit

We spend a lot of time worrying about the content to which we expose the young generation, both individually and collectively. However, I am increasingly coming to the conclusion that it makes absolutely no difference (at least as far as morality and lawfulness are concerned). Well, sure, we know things like that children of Christians are likely to be Christians as adults and that adults who are abusive are likely to have been brought up in abusive environments. But this is about as illuminating as saying that children growing up in German homes are likely to speak German as adults.

We are limited by our upbringing in as much as it imposes constraints on certain parameters of our behavior in language and culture. However, when the “what of our children’s ethics” moral outrage debates are held strictly within these parameters, the predictability of both the individual and collective impact of the content to which children (and adults) are exposed is, it seems to me, pretty minimal.

First, it is amazing that, given all the bullshit supposedly great thinkers have put forth about the state of the youth of their day, we haven’t learned that such statements are just never right. They weren’t right about pulp fiction or comic books, and they are not even right about violent video games, whose rise in the US coincided with a halving of the crime rate. There is simply no reliable or predictable connection between what people read or watch and what they do.

Sure, we can always point at some whacko who did something horrible because he read it in a book or saw it on TV, but there is no way to predict ahead of time who will be influenced by what, when and how. The Bible contains all sorts of violence and depravity (and not just in a way that says “don’t do it”) and yet we don’t see a lot more violence in devout Christians. But neither do we see less. In fact, if we look at the range of behaviors the Bible, or any other religious text for that matter, has inspired over the millennia, the only thing we can say about them is that they are typical of human beings. They happened in parallel with the text, not as a consequence of it.

By the same token, we can no more expect virtue to come out of exposure to virtuous content than we can expect depravity to come out of depraved content. A good example is Karl May, who popped up on a comment thread on the Language Log recently. Entire generations of Central European boys (and more recently girls) grew up reading May’s voluminous output detailing the exploits of the sagacious explorer Old Shatterhand, aka Kara ben Nemsi. Old Shatterhand, a pacifist with a gun and a fist (both used only as a last resort and in self-defense), embodies very much a New Testament kind of ethics. His focus is on love, equality and turning the other cheek. But also on health, vigour and the indomitable German spirit.

It is inconceivable that anyone reading these books by the dozen (as I did in my youth) could ever think less of another race or do anything bad to man or beast. Yet, as we know, Central Europe was anything but calm in the last century, which saw sales of May’s books in the tens of millions. I wonder how many death camp guards or Wehrmacht soldiers did not read Karl May as boys. And as one of the Language Log commenters points out, Hitler himself was a May fan and supposedly tried to write Mein Kampf in the same style as his favorite author. How is this possible? Should we ban May’s books lest such horrors happen again?

Of course not. People seem to have a remarkable ability to read around the bits that don’t concern their interests. We can background or foreground pretty much anything. It is possible to read Kipling’s Mowgli as a cute children’s story or as a justification for colonialism. When I first read it, I saw it as a manifesto of environmentalism and ecosystem preservation (I was about 8, so I did not perhaps formulate it that way). But it can just as easily be read as an apology for man’s mastery over nature.

Karl May wasn’t quite banned in my native Czechoslovakia, but his books weren’t always easy to come by due, I was told, to a strong Christian bias. I could never understand that until I reread Winnetou recently and discovered long philosophical expositions on sin and violence that were just out there. No coy hints, straight-up quotations from the Bible! When I was reading these books, I simply did not see any of it. Equally incomprehensibly, there are people who’ve read The Chronicles of Narnia and never noticed that Aslan was Jesus. For them it’s an adventure story and that’s pretty much it.

When I look at my own political morality, I can see clear foundations laid by May and my reading of Kipling, Defoe (in a Czech reworking without the racism) and others – including a watered-down New Testament. But I also see people around me who clearly grew up on the same literature and are rabid Old Testament tooth-for-toothers. Such is the freedom of the human spirit that it can overcome the influence of any content – good or bad. (Again, within the parameters of our linguistic and social environments.)

UPDATE: Here’s an interesting summary of how Karl May’s impact cut both ways (Hitler and Einstein) via the Wikipedia entry on May. Jeff Bowersox also has a lot of relevant things to say to explain this seeming paradox of children both appropriating ‘moral’ messages for their own play and being shaped by them through the prism of their socio-discursive embeddedness.

Epistemology as ethics: Decisions and judgments not methods and solutions for evidence-based practice


Show me the money! Or so the saying goes, implying that talk is cheap and facts are the only thing that matters. But there is another thing we are being asked to do with money, and that is to put it where our mouth is. So evidence is not quite enough. We also have to be willing to act on it and demonstrate a personal commitment to our facts.

As so often, folk wisdom has outlined two contradictory dictums that get applied depending on the parameters of a given situation. On one account, evidence is all that is needed; on the other, evidence is only sufficient when backed up by personal commitment.

I’ve recently come across two approaches to this issue that resonate with my own, namely the claim that evidence cannot be divorced from the personal commitment of its wielder. As such, it needs to be constantly interrogated from all perspectives, including those that come from the “scientific method”, though without relying solely on them.

This is crucial for anybody claiming to adhere to evidence-based practice. Because the evidence is not just out there. It is always in the hands of someone caught (as we all are) within a complex web of ethical commitments. Sure, we can point to a number of contexts where a lack of evidence was detrimental (vaccinations). But we can equally easily find examples of evidence being the wrong thing to have (UK school league tables, eugenics). The history of science demonstrates both incorrect conclusions being drawn from correct facts (ether) and correct conclusions being drawn from incorrect facts (including aircraft construction, as Ira Flatow showed). I am not aware of any research quantifying the proportion of these respective positions. Intuitively, one would feel inclined to conclude that mostly the correct facts have led to the correct conclusions and the incorrect ones to the wrong ones, but we have been burned before.

That’s why I particularly liked what Mark Kleiman had to say at the end of his lecture on Evidence-based practice in policing:

“The notion that we can substitute method for common sense, a very wide-spread notion, is wrong. We’re eventually going to have to make judgments. Evidence-based practice is a good slogan, but it’s not a method in the Cartesian sense. It does not guarantee that what we’re going to do is right.”

Of course, neither is the opposite true. I’m always a bit worried about appeals to “common sense”. I used to tell participants on cross-cultural training courses: there is nothing common about common sense. It always belongs to someone. And it does not come naturally to us. It is only common because someone told someone else about it. How often have we heard a myth debunked only to conclude that the debunking had really been common sense? That makes no sense. It was the myth that was common sense. The debunking will only become common sense after we tell everyone about it. How many people would figure out even the simplest things, such as looking right and then left before crossing the road, without someone telling them? We spend a lot of time (as a society and as individuals) maintaining the commonality of our senses. Clifford Geertz showed in “Common sense as a cultural system” the complex processes involved in common sense, and I would not want to entrust anything of importance to it. But I agree that judgments are essential. We always make judgments, whether we realise it or not.

This resonates wonderfully with what I consider to be the central thesis of “Science’s First Mistake”, a wonderful challenge to the assumptions of the universality of science by Ian Angell and Dionysios Demetis:

“…different ages have different perceptions of uncertainty; and so there are different approaches to theory construction and application, delivering different risk assessments and prompting different decisions. Note this book stresses decisions not solutions, because from its position there are no solutions, only contingent decisions. And each decision is itself a start of a new journey, not the end of an old one.

Indeed, there is no grander delusion than the production of a solution, with its linear insistence on cause and effect.”

All our solutions are really decisions. Decisions contingent on a myriad of factors related both to the data and to our personal situation. We couch these decisions in the guise of solutions to avoid personal responsibility. Not often in a conscious way, perhaps, but in a way that abstracts away from the situation. We do not always have to ask cui bono when presented with a solution, but we should always ask where it came from. What is the situation in which a solution was born? Not because we have a suspicious mind but because we have an inquisitive one.

But decisions are just what they are. On their own they are neither good nor bad. They are just inevitable. And here I’d like to present the conclusions of a paper on the epistemological basis of case study in educational policy making that I co-wrote a few years ago with John Elliott. It’s longer (by about a mile) than the two extracts above. But I highlighted in bold the bits that are relevant; the rest is there just to provide sufficient context to understand the key theses:

From the Summary and Conclusions of “Epistemology as ethics in research and policy: Under what terms might case studies yield useful knowledge to policy makers?” by John Elliott and Dominik Lukeš. In: Evidence-based Education Policy: What Evidence? What Basis? Whose Policy?, 2008.

Overall, we have informed our inquiry with three perspectives on case study. One rooted in ethnography and built around the metaphor of the understanding of cultures one is not familiar with (these are presented by the work of scholars like Becker, Willis, Lacey, Wallace, Ball, and many others). Another strand revolves around the tradition of responsive and democratic evaluation and portrayal represented by the work of scholars such as MacDonald, Simons, Stake, Parlett, and others. This tradition (itself not presenting a unified front) aims to break down the barriers between the researcher, the researched and the audience. It recognizes the situated nature of all actors in the process (cf. Kemmis 1980) and is particularly relevant to the concerns of the policy maker. Finally, we need to add Stenhouse’s approach of contemporary history into the mix. Stenhouse provides a perspective that allows the researcher to marry the responsibility to data with the responsibility to his or her environment in a way that escapes the stereotypes associated with the ‘pure forms’ of both quantitative and qualitative research. A comprehensive historian-like approach to data collection, retention and dissemination that allows multiple interpretations by multiple actors accounts both for the complexity of the data and the situation in the context of which it is collected and interpreted.

However, we can ask ourselves whether a policy-maker faced with examples of one of these traditions could tell these perspectives apart? Should it not perhaps be the lessons from the investigation of case study that matter rather than an ability to straightforwardly classify it as an example of one or the other? In many ways, in the policy context, it is the act of choice, such as choosing on which case study a policy should be based, that is of real importance.

In that case, instead of transcendental arbitration we can provide an alternative test: Does the case study change the prejudices of the reader? Does it provide a challenge? Perhaps our notion of comprehensiveness can include the question: Is the case study opening the mind of the reader to factors that they would have otherwise ignored? This reminds us of Gadamer’s “fusion of horizons” (“Understanding […] is always the fusion of […] horizons [present and past] which we imagine to exist by themselves.” Gadamer 1975, p. 273).

However, this could be seen as suggesting that case study automatically leads to a state of illumination. In fact, the interpretation of case study in this way requires a purposeful and active approach on the part of the reader. What, then, is the role of the philosopher? Should each researcher have a philosopher by his or her side? Or is it necessary, as John Elliott has long argued, to locate the philosopher in the practitioner? Should we expect teachers and/or policy-makers to go to the philosopher for advice or would a better solution be to strive for philosopher teachers and philosopher practitioners? Being a philosopher is of course determined not by speaking of Plato and Rousseau but by the constant challenge to personal and collective prejudices.

In that case, we can conclude that case study would have a great appeal to the politician and policy maker as a practical philosopher but that it would be a mistake to elevate it above other ways of doing practical philosophy. In this, following Gadamer, we advocate an antimethodological approach. The idea of the policy maker as philosopher and policy maker as researcher (i.e. underscoring the individual ethical agency of the policy maker) should be the proper focus of discussion of reliability and generalization. Since the policy maker is the one making the judgement, the type of research and study is then not as important as a primary focus.

And, in a way, the truthfulness of the case arises out of the practitioners’ use of the study. The judgment of warrant as well as the universalizing and revelatory nature of a particular study should become apparent to anybody familiar with the complexities of the environment. An abstract standard of quality reminiscent of statistical methods (number of interviewees, questions asked, sampling) is ultimately not a workable basis for decision making and action although that does not exclude the process of seeking a shared metaphorical perspective both on the process of data gathering and interpretation (cf. Kennedy 1979, Fox 1982). Gadamer’s words seem particularly relevant in this context: “[t]he understanding and interpretation of texts is not merely a concern of science, but is obviously part of the total human experience of the world.” (1975, p. xi)

We cannot discount the situation of the researcher any more than we can discount the situation of the researched. One constitutive element of the situation is the academic tribe: “[w]e are pursuing a ‘scholarly’ identity through our case studies rather than an intrinsic fascination with the phenomena under investigation” warns Fox (1982). To avoid any such accusations of impropriety, social science cultivates a ‘prejudice against prejudice’, a distancing from experience and valuing in order to achieve objectivity, whereas the condition of our understanding is that we have prejudices and any inquiry undertaken by ‘us’ needs to be approached in the spirit of a conversation with others; the conversation alerts participants to their prejudices. In a sense, the point of conversation is to reconstruct prejudices, which is an alternative view of understanding itself.

It should be stressed however, that this conversation does not automatically lead to a “neater picture” of the situation nor does it necessarily produce a “social good”. There is the danger of viewing ‘disciplined conversation’ as an elevated version of the folk theory on ideal policy: ‘if only everyone talked to one another, the world would be a nicer place’. Academic conversation (just like any democratic dialectic process) is often contentious and not quite the genteel affair it tries to present itself as. Equally, any given method of inquiry, including and perhaps headed by case study, can be both constitutive and disruptive of our prejudices.

Currently the culture of politeness aimed at avoiding others’ and one’s own discomfort at any cost contributes to the problem. Can one structure research that enables people to reflect about prejudices that they inevitably bring to the situation and reconstruct their biases to open up the possibility of action and not cause discomfort to themselves or others? We could say that concern with generalization and method is a consequence of academic discourse and culture and one of the ways in which questions of personal responsibility are argued away. Abstraction, the business of academia, is seen as antithetical to the process of particularization, the business of policy implementation. But given some of the questions raised in this paper, we should perhaps be asking whether abstraction and particularization are parallel processes to which we ascribe polar directionality only ex post facto. In this sense we can further Nussbaum’s distinction between generalization and universalization by rephrasing the dichotomy in the following terms: generalization is assumed to be internal to the data whereas universalization is a situated human cognitive and affective act.

A universalizable case study is of such quality that the philosopher policy maker can discern its relevance for the process of policy making (similarly to Stake’s naturalistic generalization). This is a different way of saying the same thing as Kemmis (1980, p. 136): “Case study cannot claim authority, it must demonstrate it.” The power of case study in this context can be illustrated by anecdotes from the field where practitioners had been convinced that a particular case study described their situation and berated colleagues or staff for revealing intimate details of their situation, whereas the case study had been based on research of an entirely unrelated entity. In cases like these, the universal nature of the case study is revealed to the practitioner. Its public aspects often engender action where idle rumination and discontent would be otherwise prevalent. Even when this kind of research tells people what they already “know”, it can inject accountability by rendering heretofore private knowledge public. The notion of case study as method with transcendental epistemology therefore cannot be rescued even by attempts like Bassey’s (1999) to offer ‘fuzzy generalizations’ since while cognition and categorization are fuzzy, action involves a commitment to boundaries. This focus on situated judgement over transcendental rationality in no way denies the need for rigour and instrumentalism. We agree with Stenhouse that there needs to be a space in which the quality of a particular case study can be assessed. But such judgments will be different when made by case study practitioners and when made by policy-makers or teachers. The epistemological philosopher will apply yet another set of criteria. All these agents would do well to familiarize themselves with the criteria applied by the others but they would be unwise to assume that they can ever fully transcend the situated parameters of their community of practice and make all boundaries (such as those described by Kushner 1993) disappear.

Philosophy often likes to position itself in the role of an independent arbiter but it must not forget that it too is an embedded practice with its own community rules. That does not mean there’s no space for it in this debate. If nothing else, it can provide a space (not unlike the liminal space of Turner’s rituals) in which the normal assumptions are suspended and transformation of the prejudice can occur. In this context, we should perhaps investigate the notion of therapeutic reading of philosophy put forth by the New Wittgensteinians (see Crary and Read 2000).

This makes the questions of ethics alluded to earlier even more prominent. We propose that given their situated nature, questions of generalization are questions of ethics rather than inquiries into some disembodied transcendent rationality. Participants in the complex interaction between practitioners of education, educational researchers and educational policy makers are constantly faced with ethical decisions asking themselves: how do I act in ways that are consonant with my values and goals? Questions of warrant are internal to them and their situation rather than being easily resolvable by external expert arbitration. This does not exclude instrumental expertise in research design and evaluation of results but the role of such expertise is limited to the particular. MacDonald and Walker (1977) point out the importance of apprenticeship in the training of case study practitioners and we should bear in mind that this experience cannot be distilled into general rules for research training as we find them laid out in a statistics textbook.

Herein can lie the contribution of philosophy: An inquiry into the warrant and generalization of case study should be an inquiry into the ethics surrounding the creation and use of research, not an attempt to provide an epistemologically transcendent account of the representativeness of sampled data.

 


Do science fiction writers dream of fascist dictatorships?

This is the cover to the January 1953 issue of...
Image via Wikipedia

Some years ago, in a book review, I made an off-the-cuff comment that thriller writers tend to be quite right-wing in their outlook whereas science fiction authors are much more progressive and leftist. This is obviously an undue generalisation (as most such comments tend to be) but it felt intuitively right. Even then I thought of Michael Crichton as the obvious counterexample, a thriller writer with distinctly liberal leanings, but I couldn’t think of a science fiction writer who would provide the counterexample. I put this down to my lack of comprehensive sci-fi reading and thought nothing more of it. Now, I’m not even sure that the general trend is there or, at least, that the implications are very straightforward.

Recently, I was listening to the excellent public policy lectures by Mark Kleiman and remembered that years ago I’d read some similar suggestions in Bio of a Space Tyrant by Piers Anthony. It wasn’t a book (or rather a book series) I was going to reread but I set to it with a researcher’s determination. And frankly, I was shocked.

What I found was not a vision of a better society (Anthony projects the global politics of his day, the early 80s, into the Solar system 600 years hence) but rather a grotesque depiction of what the elites of the day would consider ‘common-sense’ policies: free-market entrepreneurship combined with social justice, with a few twists. It was anti-corporatist and individualist on the surface but with a strong sense of collective duty (and pro-militarism) that was much more reminiscent of fascism than communism. It espoused strong, charismatic leadership with a sense of duty and, most of all, a belief in the necessity of change led by common sense. The needs of the collective justified the suppression of the individual in almost any way. But all of this was couched in good liberal politics (a free press, free enterprise, etc.).

It is not clear whether Anthony means this as a parody of a fascist utopia but there are no hints there that this is the case. The overwhelming sense I get from this book is one of frustration of the intellectual elite that nobody is listening to what they have to say and a perverted picture of what the world would be like if they only got to start over with their policies.

Speaking of perverted: through all of this is woven a bizarre and disturbing mix of patriarchy and progressive gender politics. On the one hand, Anthony is strongly against violence against women and treats women as strong and competent individuals. But on the other hand, his chief protagonist is the embodiment of a philanderer’s charter. All women love him but understand that he cannot love just one! The policies are there, but what is to prevent any man from feeling that he is the one exception? So despite the progressive coating one is left feeling slightly unclean.

Now, is Anthony the exception I was looking for years ago? I don’t think so. First, I think he would fall into the liberal-to-libertarian camp if asked. But second, I don’t think he’s any exception at all. I recently reread some SciFi classics and found everything from hints of, to full-blown monuments to, this rationalist yearning for control over society – the “if they only listened to us” syndrome, also known as the “TED syndrome”: we understand so much about how things work, so now we have the solution for how everything works. That’s why we should never seek to be ruled by philosopher kings (Plato, Hobbes, and any third-rate philosopher – more likely to be fascist than liberal). Classics like “The Mote in God’s Eye”, “Starship Troopers” (the movie was a parody but the book wasn’t) and “Foundation”, as well as less well-known works like “The Antares Trilogy” or “The Lost Fleet”, all unwittingly struggle with the dilemma of what happens when we know what to do but we also know that it can’t be achieved unless we have complete control. I found echoes of this even in cyberpunk like Snow Crash.

So am I seeing a trend that isn’t there? I’m not as widely read in SciFi as in other genres, so it’s possible I just happened on books that confirm my thesis (such as it is). Again, the exception I can think of immediately is Cory Doctorow, whose “For the Win” is as beautiful and sincere a depiction of the union movement as any song by Pete Seeger. And I’m sure there are many more. But are there enough to make my impression just that? (Of course, there’s SciFi where this doesn’t come up at all.)

But this tendency of the extremely intelligent and educated (and SciFi writers are on the whole just as well versed in anthropology as they are in science) to tell stories of how their images of the just society can be projected onto society as a whole is certainly a worrying presence in the genre. It seems to be largely absent from fantasy, which generally deals with journeys of individuals within existing worlds. And while these worlds may be dystopic, they generally are not changed, only explored. Fantasy has a strong thread of historical nostalgia – looking for a pure world of yore – which can be quite destructive when mis-projected onto our own world. But on the whole, I feel, it contains less public policy than the average science fiction novel.


The brain is a bad metaphor for language


Note: This was intended to be a brief note. Instead it developed into a monster post that took me two weeks of stolen moments to write. It’s very light on non-blog references but they exist. Nevertheless, it is still easy to find a number of oversimplifications, conflations, and other imperfections below. The general thrust of the argument, however, remains.

How Far Can You Trust a Neuroscientist?

Shiny and colored objects usually attract Infa...
Image via Wikipedia

A couple of days ago I watched a TED talk called The Linguistic Genius of Babies by Patricia Kuhl. I had been putting it off because I suspected I wouldn’t like it, but I was still disappointed at how hidebound it was. It conflated a number of really unconnected things and then tried to sway the audience to its point of view with pretty pictures of cute infants in brain scanners. All it was was a hodgepodge of half-implied claims, incredibly similar to some of the more outlandish claims made by behaviorists so many years ago. Kuhl concluded that brain research is the next frontier of understanding learning, but she did not give a single credible example of how this could be. She started with a rhetorical trick: she mentioned an at-risk language, with a picture of a mother holding an infant facing towards her, and then said (with annoying condescension) that this mother and the other tribe members know something we do not:

What this mother — and the 800 people who speak Koro in the world — understand that, to preserve this language, they need to speak it to the babies.

This is garbage. Languages do not die because there’s nobody there to speak them to the babies (until the very end, of course) but because there’s nobody of socioeconomic or symbolic prestige that children and young adults can speak the language to. Languages don’t die because people can’t learn them; they die because people have no reason (other than nostalgia) to learn them or have a reason not to learn them. Given a strong enough reason, they would learn a dying language even if they started at sixteen. They are just almost never given the reason. Why Kuhl felt she did not need to consult the literature on language death, I don’t know.

Patricia Kuhl has spent the last 20 years studying pretty much one thing: acoustic discrimination in infants (http://ilabs.washington.edu/kuhl/research.html). Her research provided support for something that had already been known (or suspected), namely that young babies can discriminate between sounds that adults cannot (given similar stimuli such as the ones one might find in the foreign language classroom). She calls this the “linguistic genius of babies” and she’s wrong:

Babies and children are geniuses until they turn seven, and then there’s a systematic decline.

First, the decline (if there is such a thing) is mostly limited to acoustic processing and even then it’s not clear that the brain is the thing that causes it. Second, being able to discriminate (by moving their head) between sounds in both English and Mandarin at age 6 months is not a sign of genius. It’s a sign of the baby not being able to differentiate between language and sound. Or in other words, the babies are still pretty dumb. But it doesn’t mean they can’t learn a similar distinction at a later age – like four or seven or twelve. They do. They just probably do it in a different way than a 6-month-old would. Third, in the overall scheme of things, acoustic discrimination at the individual phoneme level (which is what Kuhl is studying) is only a small part of learning a language and it certainly does NOT stop at 7 months or even 7 years of age. Even children who start learning a second language at the age of 6 achieve native-like phonemic competence. And even many adults do. They seem not to perform as well on certain fairly specialized acoustic tests but, functionally, they can be as good as native speakers. And it’s furthermore not clear that accent deficiencies are due to the lack of some sort of brain plasticity. Fourth, language learning and knowledge are not a binary thing. Even people who only know one language know it to a certain degree. They can be lexically, semantically and syntactically quite challenged when exposed to a sub-code of their language they have little to no contact with. So I’m not at all sure what Kuhl was referring to. François Grosjean (an eminent researcher in the field) has been discussing all this on his Life as a Bilingual blog (and in books, etc.). To have any credibility, Kuhl must address this head-on:

There is no upper age limit for acquiring a new language and then continuing one’s life with two or more languages. Nor is there any limit in the fluency that one can attain in the new language with the exception of pronunciation skills.

Instead she just falls back on old prejudices. She simply has absolutely nothing to support this:

We think by studying how the sounds are learned, we’ll have a model for the rest of language, and perhaps for critical periods that may exist in childhood for social, emotional and cognitive development.

A paragraph like this may get her some extra funding but I don’t see any other justification for it. Actually, I find it quite puzzling that a serious scholar would even propose anything like this today. We already know there is no critical period for social development. Well, we don’t really know what social development is, but there’s no critical brain period for what there is. We get socialized into new collective environments throughout our lives.

But there’s no reason to suppose that learning to interact in a new environment is anything like learning to discriminate between sounds. There are some areas of language linked to perception where that may partly be the case (such as discriminating shapes, movements, colors, etc.) but hardly things like morphology or syntax, where much more complexity is involved. But this argument cuts both ways. Let’s say a lot of language learning were like sound development. We know most of it continues throughout life (syntax, morphology, lexicon) and that it doesn’t even start at 6 months (unless you’re a crazy Chomskean who believes in some sort of magical parameter setting). So if sound development were like that, maybe it has nothing to do with the brain in the way Kuhl imagines – although she’s so vague that she could always claim that that’s what she’d had in mind. This is what Kuhl thinks of as additional information:

We’re seeing the baby brain. As the baby hears a word in her language the auditory areas light up, and then subsequently areas surrounding it that we think are related to coherence, getting the brain coordinated with its different areas, and causality, one brain area causing another to activate.

So what? We knew that was going to happen. Some parts of the brain were going to light up, as they always do. What does that mean? I don’t know. But I also know that Patricia Kuhl and her colleagues don’t know either (at least not in the way she pretends). We speak a language, we learn a language, and at the same time we have a brain and things happen in the brain. There are neurons and areas that seem to be affected by impact (but not always, and not always in exactly the same way). Of course, this is an undue simplification. Neuroscientists know a huge amount about the brain. Just not how it links to language in a way that would tell us much about language that we don’t already know. Kuhl’s next implied claim is a good example of how partial knowledge in one area may not at all extend to knowledge in another area.

What you see here is the audio result — no learning whatsoever — and the video result — no learning whatsoever. It takes a human being for babies to take their statistics. The social brain is controlling when the babies are taking their statistics.

In other words, when the children were exposed to audio or video as opposed to a live person, no effect was shown. At 6 months of age! As is Kuhl’s wont, she only hints at the implications, but over at the Royal Society’s blog comments, Eric R. Kandel has spelled it out:

I’m very much taken with Patricia Kuhl’s finding in the acquisition of a second language by infants that the physical presence of a teacher makes enormous difference when compared to video presence. We all know from personal experience how important specific teachers have been. Is it absurd to think that we might also develop methodologies that would bring out people’s potential for interacting empathically with students so that we can have a way of selecting for teachers, particularly for certain subjects and certain types of student? (From the comments on Neuroscience: Implications for Education and Lifelong Learning.)

But this could very well be absurd! First, Kuhl’s experiments were not about second language acquisition but about sensitivity to sounds in other languages. Second, there’s no evidence that the same thing Kuhl discovered for infants holds for adults or even three-year-olds. A six-month-old baby hasn’t yet learned that the pictures and sounds coming from the machine represent the real world. But most four-year-olds have. I don’t know of any research but there is plenty of anecdotal evidence. I have personally met several people highly competent in a second language who claimed they learned it by watching TV at a young age. A significant chunk of my own competence in English comes from listening to the radio, listening to audio books and watching TV drama. How much of our first language competence comes from reading books and watching TV? That’s not to say that personal interaction is not important – after all, we need to learn enough to understand what the 2D images on the screen represent. But how much do we need to learn? Neither Kuhl nor Kandel has the answer but both are ready (at least by implication) to shape policy regarding language learning. In the last few years, several reports have raised questions about some overreaching by neuroscience (both in its methods and in the assumptions about their validity) but even perfectly good neuroscience can be bad scholarship when it extends its claims far beyond what the evidence can support.

The Isomorphism Fallacy

This section of the post is partly based on a paper called “Isomorphism as a heuristic and philosophical problem” that I presented at a Czech cognitive science conference about three years ago.

The fundamental problem underlying the overreach of basic neuroscience research is the fallacy of isomorphism. This fallacy presumes that the structures we see in language, behavior and society must have structural counterparts in the brain. So there’s a bit of the brain that deals with nouns. Another bit that deals with being sorry. Possibly another one that deals with voting Republican (as Woody Allen proved in “Everyone Says I Love You”). But at the moment the evidence for this is extremely weak, at best. And there is no intrinsic need for a structural correspondence to exist. Sidney Lamb came up with a wonderful analogy that I’m still working my way through. He says (recalling an old ‘Aggie’ joke) that trying to figure out where the bits we know as language structure are in the brain is like trying to work out how to fit the roll that comes out of a tube of toothpaste back into the container. This is obviously a fool’s errand. There’s nothing in the toothpaste container that in any way resembles the colorful and tubular object we get when we squeeze the container. We get that through an interaction of the substance, the container, external force, and the shape of the opening. It seems to me entirely plausible that the link between language and the brain is much more like that between the paste, the container and their environment than like that between a bunch of objects and a box. The structures that come out are the result of things we don’t quite understand happening in the brain interacting with its environment. (I’m not saying that that’s how it is, just that it’s plausible.) The other thing that lends it credence is the fact that things like nouns or fluency are social constructs with fuzzy boundaries, not hard discrete objects, so actually localizing them in the brain would be a bit of a surprise. Not that it can’t be done, but the burden of evidence for making this a credible finding is substantial.

Now, I think that the same problem applies to looking for isomorphism in the other direction. Lamb himself tries to look at grammar by looking for connections resembling the behavior of activating neurons. I don’t see this going anywhere. George Lakoff (who influenced me more than any other linguist in the world) seems to think that a Neural Theory of Language is the next step in the development of linguistics. At one point he and many others thought that mirror neurons say something about language but now that seems to have been brought into question. But why do we need mirror neurons when we already know a lot about the imitative behaviors they’re supposed to facilitate? Perhaps as a treatment and diagnostic protocol for pathologies, but is this really more than story-telling? Jerome Feldman described NTL in his book “From Molecule to Metaphor” but his main contribution, it seems to me, lies in showing how complex language phenomena can be modelled with brain-like neural networks, not in saying anything new about these phenomena (see here for an even harsher treatment). The same goes for Embodied Construction Grammar. I entirely share ECG’s linguistic assumptions but the problem is that it tries to link its descriptive apparatus directly to the formalisms necessary for modeling. This proved to be a disaster for the generative project, which projected its formalisms into language with an imperfect fit and now spends most of its time refining those formalisms rather than studying language.

So far I don’t see any advantage in linking language to the brain in either the way Kuhl et al. or Feldman et al. try to do it (again with the possible exception of pathologies). In his recent paper on compositionality, Feldman describes research showing that spatial areas are activated in conjunction with spatial terms and that sentence processing time increases as the sentence gets further removed from “natural spatial orientation”. But brain imaging at best confirms what we already knew, and how useful is that confirmatory knowledge? I would argue not very. In fact there is a danger that we will start thinking of brain imaging as a necessary confirmation of linguistic theory. Feldman takes a step in this dangerous direction when he says that with the advent of new techniques of neuroscience we can finally study language “scientifically”. [Shudder.]

We know there’s a connection between language and the brain (a more systematic one than between language and the foot, for instance) but so far nobody’s shown convincingly that we can explain much about language by looking at the brain (or vice versa). Language is best studied as its own incredibly multifaceted beast, and so is the brain. We need to know a lot more about language and about the brain before we can start projecting one onto the other.

And at the moment, brain science is the junior partner here. We know a lot about language and can find out more without looking for explanations in the brain. It seems as foolish as trying to illuminate language by looking inside a computer (as Chomsky’s followers keep doing). The same question that I’m asking about language was asked about cognitive processes (a closely related thing) by William Uttal in The New Phrenology, who asks “whether psychological processes can be defined and isolated in a way that permits them to be associated with particular brain regions” and warns against a “neuroreductionist wild goose chase” – and how else can we characterize Kuhl’s performance – lest we fall “victim to what may be a ‘neo-phrenological’ fad”. Michael Shermer voiced a similar concern in the Scientific American:

The brain is not random kludge, of course, so the search for neural networks associated with psychological concepts is a worthy one, as long as we do not succumb to the siren song of phrenology.

What does a “siren song of phrenology” sound like? I imagine it would sound pretty much like this quote by Kuhl:

We are embarking on a grand and golden age of knowledge about child’s brain development. We’re going to be able to see a child’s brain as they experience an emotion, as they learn to speak and read, as they solve a math problem, as they have an idea. And we’re going to be able to invent brain-based interventions for children who have difficulty learning.

I have no doubt that there are some learning difficulties for which a ‘brain-based intervention’ (whatever that is) may be effective. But they are a relatively small part of the universe of learning difficulties, which hardly warrants a bombastic claim like the one above. I could find nothing in Kuhl’s narrow research that would support this assertion. Learning and language are complex psycho-social phenomena that are unlikely to have straightforward counterparts in brain activations such as can be seen by even the most advanced modern neuroimaging technology. There may well be some straightforward pathologies that can be identified and have some sort of treatment aimed at them. The problem is that brain pathologies are not necessarily opposites of a typically functioning brain (a fallacy that has long plagued the interpretation of evidence from aphasias) – it is, as brain plasticity would suggest, just as likely that at least some brain pathologies create new qualities rather than simply flipping an on/off switch on existing ones. Plus there is the historical tendency of the self-styled hard sciences to horn in on areas where established disciplines have accumulated lots of knowledge, ignore that knowledge, declare a reductionist victory, fail and not admit failure.

For the foreseeable future, the brain remains a really poor metaphor for language and other social constructs. We are perhaps predestined to find similarities in anything we look at, but researchers ought to have learned by now to be cautious about them. Today’s neuroscientists should be very careful that they don’t look as foolish to future generations as phrenologists and skull measurers look to us now.

In praise of non-reductionist neuroscience

Let me reiterate: I have nothing against brain research. The more of it, the better! But it needs to be much more honest about its achievements and limitations (as much as it can be, given the politics of research funding). Saying the sort of things Patricia Kuhl does with incredibly flimsy evidence and complete disregard for other disciplines is good for the funding but awful for actually obtaining good results. (Note: The brevity of the TED format is not an excuse in this case.)

A much more promising overview of applied neuroscience is a report by the Royal Society on education and the brain, which is far more realistic about the state of neurocognitive research; its authors admit at the outset: “There is enormous variation between individuals, and brain-behaviour relationships are complex.”

The report authors go on to enumerate the things they feel we can claim as knowledge about the brain:

  1. The brain’s plasticity
  2. The brain’s response to reward
  3. The brain’s self-regulatory processes
  4. Brain-external factors of cognitive development
  5. Individual differences in learning as connected to the brain and genome
  6. Neuroscience connection to adaptive learning technology

So this is a fairly modest list, made even more modest by the formulations of the actual knowledge. I could only find a handful of statements supporting the general claims that do not contain a hedge: “research suggests”, “may mean”, “appears to be”, “seems to be”, “probably”. This modesty in research interpretation does not always make its way into the report’s policy suggestions (mainly suggestions 1 and 2). Despite this, I think anybody who finds Patricia Kuhl’s claims interesting would do well to read this report and pay careful attention to the actual findings described there.

Another possible problem for those drawing wide-reaching conclusions is the relative newness of the research on which these recommendations are based. I had a brief look at the citations in the report and only about half are actually related to primary brain research. Of those, exactly half were published in 2009 (8) and 2010 (20) and only two in the 1990s. This is in contrast to language acquisition and multilingualism research, which can point to decades of consistently replicable findings and relatively stable and reliable methods. We need to be afraid, very afraid, of sexy new findings when they relate to what is perceived as the “nature” of humans. At this point, as a linguist looking at neuroscience (and the history of the promise of neuroscience), my attitude is skeptical. I want to see 10 years of independent replication and stable techniques before I will consider basing my descriptions of language and linguistic behavior on neuroimaging. There’s just too much of ‘now we can see stuff in the brain we couldn’t see before, so this new version of what we think the brain is doing is definitely what it’s doing’. Plus the assumption that exponential growth in the precision of brain mapping will result in the same growth in the identification of brain function is far from a sure thing (cf. genome decoding). Exponential growth in computer speed only led to incremental increases in computer usability. And the next logical step in the once skyrocketing development of automobiles was not flying cars but pretty much just the same slightly better cars (even though they look completely different under the hood).

The sort of knowledge it takes to learn and do good neuroscience is staggeringly awesome. The scientists who study the brain deserve all the personal accolades they get. But the actual knowledge they generate about issues relating to language and other social constructs is much less overwhelming. Even a tiny clinical advance, such as helping a relatively small number of people who otherwise wouldn’t be able to express themselves to communicate, makes it all worthwhile. But we must not confuse clinical advances with theoretical advances, and we must be very cautious when applying these to policy fields that are related more by similarity than by a direct causal connection.