Category Archives: Philosophy


Storms in all Teacups: The Power and Inequality in the Battle for Science Universality


The great blog Genealogy of Religion posted this video with a somewhat approving commentary:

The video started off with panache and promised some entertainment; however, I found myself increasingly annoyed as it continued. The problem is that this is an exchange of cliches that pretends to be a fight of truth against ignorance. Sure, Storm doesn’t put forward a very coherent argument for her position, but neither does Minchin. His description of science vs. faith is laughable (being in awe at the size of the universe, my foot) and nowhere does he display any nuance nor, frankly, any evidence that he is doing anything other than parroting what he’s heard on some TV interview with Dawkins. I have much more sympathy with the Storms of this world than with these self-styled defenders of science whose only credentials are that they can remember a bit of high school physics or chemistry and have read an article by some neo-atheist in Wired. What’s worse, it’s would-be rationalists like him who do what passes for science reporting in major newspapers or on the BBC.

But most of all, I find it distasteful that he chose a young woman as his antagonist. If he wished to take on the ‘antiscience’ establishment, there are so many much better figures to target for ridicule. Why not take on the pseudo-spiritualists in the mainstream media with their ecumenical conciliatory garbage? How about taking on tabloids like Nature or Science that publish unreliable preliminary probes as massive breakthroughs? How about universities that put out press releases distorting partial findings? Why not take on economists who count things that it makes no sense to count just to make things seem scientific? Or, if he really has nothing better to do, let him lay into some super-rich creationist pastor. But no, none of these captured his imagination; instead he chose to focus his keen intellect and deep erudition on a stereotype of a young woman who’s trying to figure out a way to be taken seriously in a world filled with pompous frauds like Minchin.

The blog post commenting on the video sparked a debate about the limits of knowledge (note: this is a modified version of my own comment). But while there’s a debate to be had about the limits of knowledge (which is what this blog is about), this is not the occasion. There is no need to adjudicate which of these two is more ‘on to something’. They’re not touching on anything of epistemological interest; they’re just playing a game of social positioning in the vicinity of interesting issues. But in this game, people like Minchin have been given a lot more chips to play with than people like Storm. It’s his follies and prejudices and not hers that are given a fair hearing. So I’d rather spend a few vacuous moments in her company than endorse his mindless ranting.

And as for ridiculing people for stupidity or shallow thinking, I’m more than happy to take part. But I want to have a look at those with power and prestige, because the moment they step out of their areas of expertise, they act just as silly and irrationally as the Storms of this world, and just as often. I see this all the time in language, culture and history (areas I know enough about to judge the level of insight). Here’s the most recent one that caught my eye:

It comes from a side note in a post about evolutionary foundations of violence by a self-proclaimed scientist (the implied hero in Minchin’s rant):

 It is said that the Bedouin have nearly 100 different words for camels, distinguishing between those that are calm, energetic, aggressive, smooth-gaited, or rough, etc. Although we carefully identify a multitude of wars — the Hundred Years War, the Thirty Years War, the American Civil War, the Vietnam War, and so forth — we don’t have a plural form for peace.

Well, this paragon of reason could be forgiven for not knowing what sort of nonsense this ‘100 words for’ cliche is. The Language Log has spilled enough bits on why this and other snowclones are silly. But the second part of the argument is just stupid. And it is typical of a scientist blundering about the world as if the rules of evidence didn’t apply to him outside the lab and as if data not in a spreadsheet did not require a second thought. As if having a PhD in evolutionary theory meant everything else he says about humans must be taken seriously. But how can such a moronic statement be taken as anything but feeble twaddle to be laughed at and belittled? How much more cumulatively harmful are moments like these (and they are all over the place) than the socializing efforts of people like Storm from the video?

So, I should probably explain why this is so brainless. First, we don’t have a multitude of words for war (just like the Bedouin don’t have 100, or even a dozen, for a camel). We just have the one, and we have a lot of adjectives with which we can modify its meaning. And if we want to look for some that are at least equivalent to possible camel attributes, we won’t choose names of famous wars but rather things like civil war, total war, cold war, holy war, global war, naval war, nuclear war, etc. I’m sure West Point or even Wikipedia has much to say about a possible classification. And of course, all of this applies to peace in exactly the same way. There are ‘peaces’ with names like the Peace of Westphalia, the Arab-Israeli Peace, etc., with just as many attributive pairs like international peace, lasting peace, regional peace, global peace, durable peace, stable peace, great peace, etc. I went to a corpus to get some examples, but that this must be the case was obvious, and a simple Google search would give enough examples to confirm a normal language speaker’s intuition. But this ‘scientist’ had a point to make, and because he’s spent twenty years doing research on the evolution of violence, he must surely be right about everything on the subject.
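To make that kind of corpus check concrete, here is a minimal sketch of what it might look like, assuming Python with NLTK and its tagged Brown corpus (not the corpus I actually used); it simply counts attributive adjectives occurring immediately before ‘war’ and ‘peace’:

```python
# Count attributive "adjective + war/peace" pairs in the Brown corpus.
# Assumes: pip install nltk, plus nltk.download('brown') and
# nltk.download('universal_tagset') on first run.
import nltk
from nltk.corpus import brown
from collections import Counter

tagged = brown.tagged_words(tagset='universal')
pairs = Counter()
for (w1, t1), (w2, t2) in nltk.bigrams(tagged):
    if t1 == 'ADJ' and w2.lower() in ('war', 'peace'):
        pairs[(w1.lower(), w2.lower())] += 1

# Print the most frequent adjective + noun pairs.
for (adj, noun), freq in pairs.most_common(20):
    print(f'{adj} {noun}: {freq}')
```

Any reasonably sized corpus (or even a Google search) will turn up pairs like ‘civil war’ and ‘lasting peace’ this way, which is the whole point: the attributive machinery works identically for both nouns.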


Now, I’m sure this guy is not an idiot. He’s obviously capable of analysis and presenting a coherent argument. But there’s an area he chose to address about which he is about as qualified to make pronouncements as Storm and Minchin are about the philosophy of science. And what he said there is stupid and he should be embarrassed for having said it. Should he be ridiculed and humiliated for it the way I did here? No. He made the sort of mistake everyone makes, from high school students to Nobel laureates. He thought he knew something and didn’t bother to examine his knowledge. Or he did try to examine it but didn’t have the right tools to do it. Fine. But he’s a scientist (and a man not subject to stereotypes about women) so we give him and too many like him a pass. But Storm, a woman who like so many of her generation uses star signs to talk about relationships and is uncomfortable with the grasping maw of classifying science chomping on the very essence of her being, she is fair game?

It’s this inequality that makes me angry. We afford one type of shallowness the veneer of respectability and rake another over the coals of ridicule and opprobrium. Not on this blog!


UPDATE: I was just listening to this interview with a philosopher and historian of science about why there was so much hate coming from scientists towards the Gaia hypothesis, and his summation, it seems to me, fits right in with what this post is about. He says: “When scientists feel insecure and threatened, they turn nasty.” And it doesn’t take a lot of study of the history and sociology of science to find ample examples of this. The ‘science wars’, the ‘linguistics wars’, the neo-Darwinist thought purism, the list just goes on. The world view of scientism is totalising and has to deal with exactly the same issues as other totalising views such as monotheistic religions with constitutive ontological views or socio-economic utopianisms (e.g. neo-liberalism or Marxism).

And one of those issues is how you afford respect to, or even just maintain conversation with, people who challenge your ideological totalitarianism – or in other words, people who are willfully and dangerously “wrong”. You can take the Minchin approach of suffering in silence at parties and occasionally venting your frustration at innocent passersby, but that can lead to outbreaks of group hysteria, as we saw with the Sokal hoax or one of the many moral panic campaigns.

Or you can take the more difficult journey of giving up some of your claims on totality and engaging with even those most threatening to you as human beings, the way Feyerabend did or Gould sometimes tried to do. This does not mean patiently proselytizing in the style of evangelical missionaries but more of an ecumenical approach of meeting together without denying who you are. This will inevitably involve moments where irreconcilable differences will lead to a stand on principles (cf. Is multi-culturalism bad for women?) but even in those cases an effort at understanding can benefit both sides, as with the question of vaccination described in this interview. At all stages, there will be a temptation to “understand” the other person by reducing them to our own framework of humanity: psychologizing a religious person as an unsophisticate dealing with feelings of awe in the face of incomprehensible nature, or pitying the atheist for not being able to feel the love of God and reach salvation. There is no solution. No utopia of perfect harmony and understanding. No vision of lions and lambs living in peace. But acknowledging our differences and slowing down our outrage can perhaps make us into better versions of ourselves and help us stop wasting time trying to reclaim other people’s stereotypes.

Storm in a teacup (BruceW. via Compfight, Creative Commons)

UPDATE 2: I am aware of the paradox between the introduction and the conclusion of the previous update. Bonus points for spotting it. I actually hold a slightly more nuanced view than the first paragraph would imply but that is a topic for another blog post.


Sunsets, horizons and the language/mind/culture distinction


For some reason, many accomplished people, when they are done accomplishing what they’ve set out to accomplish, turn their minds to questions like:

  • What is primary, thought or language?
  • What is primary, culture or language?
  • What is primary, thought or culture?

I’d like to offer a small metaphor hack for solving, or rather dissolving, these questions. The problem is that all three concepts (culture, mind and language) are just useful heuristics for talking about aspects of our being. So when I see somebody speaking in a way I don’t understand, I can talk about their language. Or others behave in ways I don’t like, so I talk about their culture. Then, there’s stuff going on in my head that’s kind of like language, but not really, so I call that sort of stuff mind. But these words are just useful heuristics, not discrete realities. Old Czechs used the same word for language and nation. English often uses the word ‘see’ for ‘understand’. What does it mean? Not that much.

Let’s compare it with the idea of the setting sun. I see the Sun disappearing behind the horizon and I can make some useful generalizations about it: organize my directions (East/West), plant crops so they grow better, orient my dwelling, etc. And my description of this phenomenon as ‘the sun is setting behind the horizon’ is perfectly adequate. But then I might start asking questions like ‘what does the Sun do when it’s behind the horizon?’ Does it turn itself off and travel under the earth to rise again in the East the next morning? Or does it die and a new one rises again the next day? Those are all very bad questions because I accepted my local heuristic as describing a reality. It would be even worse if I tried to go and see the edge of the horizon. I’d be like the two fools who agreed that they would follow the railway tracks all the way to the point where they meet. They keep going until one of them turns around and says ‘dude, we already passed it’.

So to ask questions about how language influences thought and culture influences language is the same as trying to go see the horizon. Language, culture and mind are just ways of describing things for particular purposes and when we start using them outside those purposes, we get ourselves in a muddle.

Great Lakes in Sunglint (NASA, International Space Station, 06/14/12) NASA’s Marshall Space Flight Center via Compfight


Life expectancy and the length and value of life: On a historical overimagination


About 10 years ago, I was looking through a book on population changes in the Czech lands. It consisted of pretty much just tables of data with little commentary. But I was shocked when I came across the life expectancy charts, not at how short people’s lives had been but at how long. The headline figure of life expectancy in the early 1800s was pretty much on par with expectations (I don’t have the book to hand but it was in the high 30s or low 40s). How sad, I thought. So many people died in their 40s before they could experience life in full.

But unlike most of the comparisons reporting life expectancy, this one went beyond the overall average. And it was the additional figures that shocked me. It turns out the extremely short life expectancy only applies right at birth. Once you made it to 10, you had a pretty good chance of making it into your late 50s, and at 30, your chances of getting your ‘threescore and ten’ were getting pretty good. The problem is that life expectancy rates at birth only really measure child mortality, not the typical lives of adults. You can see from this chart: http://www.infoplease.com/ipa/A0005140.html that in 1850, the US life expectancy at birth was a shocking 38 years. But that does not mean that there were a lot of 38-year-olds around dying. Because if you made it to 10, your life expectancy was 58 years, and at 30, it was 64 years.

Now, these are average numbers, so it is in principle possible that in any age cohort exactly half the people died at the start of it and exactly half at the end of it. But that was not the case after a certain age. Remember, a population where exactly half the people born die at or near birth (age 0) and exactly half live to be 60 will have an average life expectancy of 30. If you reduce child mortality to 10%, you will have an average life expectancy of 54. If you reduce it to 1%, the average life expectancy will be 59.4 years. Most people still die at sixty but very few die at 1. Massive gains in reducing child mortality will have made no difference to the end of life.
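To make the arithmetic explicit, here is a minimal sketch of that toy model (the age of 60 and the mortality rates are the illustrative figures used above, not data from the book):

```python
def life_expectancy_at_birth(child_mortality, adult_age_at_death=60):
    """Average age at death in a toy population where everyone either
    dies at birth (age 0) or lives to adult_age_at_death."""
    return child_mortality * 0 + (1 - child_mortality) * adult_age_at_death

for mortality in (0.5, 0.1, 0.01):
    expectancy = life_expectancy_at_birth(mortality)
    print(f'child mortality {mortality:.0%}: life expectancy at birth = {expectancy:.1f} years')
# 50% -> 30.0, 10% -> 54.0, 1% -> 59.4
```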

In reality, as the US charts show, life expectancy at birth doubled, but life expectancy at 10 went up by only about a third. That’s still a significant gain but it shows a much different profile of life span than the headline figure would have us believe. It was not unusual to live into the late 50s and early 60s. And there were still a large enough number of people who lived into their 70s and 80s. Now, there are exceptions: during famines, epidemics and wars, and for certain groups in society, the life span was significantly shorter (notice the life expectancy of whites vs. non-whites in the US). But for most populations throughout history, the most common age of death for any given person born was before the age of 10, not in their 30s.

I don’t understand why this is not commonly known. Even many historians (particularly the ones who write popular history) either don’t know this or are unwilling to disturb their narrative of how young people died in the past (in other words, they lie). I certainly was not taught this during my brief (long-ago) studies of ancient and medieval history.

What brought all this to mind is a most disturbing example of it in a just-published book called Civilization by the prominent public historian Niall Ferguson. In the preface he quotes a poem about death and suffering by John Donne and comments on it:

“Everyone should read these lines who wants to understand better the human condition in the days when life expectancy was less than half what it is today.”

To say I was aghast is an understatement. I nearly threw my Kindle against the wall. Here’s this public intellectual, a historian who goes about preaching how important it is to understand history, and yet he peddles this sort of nonsense. If he had said ‘days with high child mortality and a shorter typical life span’, I’d have no problem with it. But he didn’t, and he didn’t even hint that that’s what he meant.

He then goes on blathering about how awful it is that all these historical luminaries died so young. Spinoza at 44, Pascal at 39. Saying:

“Who knows what other great works these geniuses might have brought forth had they been granted the lifespans enjoyed by, for example the great humanists Erasmus (69) and Montaigne (59)?”

C’mon! Bringing forth great works! Really?!? Pathos much? He then goes on comparing Brahms (died old, disliked by Ferguson) and Schubert (died young, liked by Ferguson). So much for academic distance. Why on earth would Ferguson think that listing artists who died young means anything? Didn’t he ever hear of Jimi Hendrix or Kurt Cobain?

But more importantly, he doesn’t seem to notice his own counterexamples. Erasmus died almost a hundred years before Spinoza was born. What does that tell us about life expectancy and historical periods?

And since when has naming random people’s ages been considered evidence of anything? What about: Isaac Newton 84, Immanuel Kant 79, Galileo 77, John Locke 72, Voltaire 83, Louis Pasteur 72, Michael Faraday 75, Roger Bacon 80. Isn’t that evidence that people lived long before the advent of modern medicine?

Or what does any of that have to do with how much people may have contributed had they lived longer? I don’t think longevity can serve as a measure of intellectual or cultural productivity. Can we compare Plato (80) and Aristotle (60)? It seems to me that Aristotle produced a lot more, and more varied, work than Plato with 20 fewer years to do it in. Aquinas (49) was no less prolific than St Augustine (75). Is it really possible to judge that the impact of the inventive John L Austin (who died at 49 – in the 20th century!) is any less than that of the tireless and multitalented Russell, who lived pretty much forever (97)?

But there are still more counterexamples. Let’s look at the list of longest-reigning monarchs. The leader of that board is a 6th dynasty Pharaoh who arguably ascended to the throne as a child but still managed to live to a hundred (around 2200 BC!). And most other long-lived monarchs were born during times when life expectancy was about half of what it is now. Sure, they were privileged and they are relatively rare. And there were a lot of other rulers who went in their 50s and 60s. But not typically in their 40s! Maybe there is already a study out there that measures the longevity of kings in relation to their time, but I doubt a straightforward correlation can be found.

Finally, I can match Ferguson poem by poem. From the ancient:

Our days may come to seventy years,
or eighty, if our strength endures;
yet the best of them are but trouble and sorrow,
for they quickly pass, and we fly away.
(Psalm 90:10)

to the modern:


Sadness is all I’ve ever known.
Inside my retched body it’s grown.
It has eaten me away, to what the fuck I am today.
There’s nothing left for me to say.
There’s nothing there for me to hate.
There’s no feelings, and there’s no thoughts.
My body’s left to fucking rot.
Life sucks, life sucks, life sucks, life sucks.
http://www.plyrics.com/lyrics/nocash/lifesucks.html

Clearly all that medicine made less of an impact on our experience of life than Ferguson thinks.

Perhaps I shouldn’t get so exercised about a bit of rhetorical flourish in one of many books of historical cosmogony and eschatology. But I’m really more disappointed than angry. I was hoping this book might have some interesting ideas in it (although I approached it with a skeptical mind) but I’m not sure I can trust the author not to surrender the independence of his frame of mind and bend the facts to suit his pet notions.


Are we the masters of our morality? Yes!

Morality and the freedom of the human spirit

We spend a lot of time worrying about the content to which we expose the young generation, both individually and collectively. However, I am increasingly coming to the conclusion that it makes absolutely no difference (at least as far as morality and lawfulness are concerned). Well sure, we know things like that children of Christians are likely to be Christians as adults and that adults who are abusive are likely to have been brought up in abusive environments. But this is about as illuminating as saying that children growing up in German homes are likely to speak German as adults.

We are limited by our upbringing inasmuch as it imposes constraints on certain parameters of our behavior in language and culture. However, the “what of our children’s ethics” moral outrage debates are held strictly within these parameters. And here, it seems to me, the predictability of both the individual and collective impact of content to which children (and adults) are exposed is pretty minimal.

First, given all the bullshit supposedly great thinkers have put forth about the state of the youth of their day, one would hope we would have learned by now that such statements are just never right. They weren’t right about pulp fiction or comic books, and they are not even right about violent video games, whose rise in the US coincided with a halving of the crime rate. No, the kids were not getting it out of their system! There is simply no reliable or predictable connection between what people read or watch and what they do. Sure, we can always point at some whacko who did something horrible because he read it in a book or saw it on TV, but there is no way to predict who will be influenced by what, when and how. The Bible contains all sorts of violence and depravity (and not just in a way that says don’t do it) and yet we don’t see a lot more violence in devout Christians. But neither do we see less. In fact, if we look at the range of behaviors the Bible, or any other religious text for that matter, inspired over the millennia, the only thing we can say about them is that they are typical of human beings. They happened in parallel, not as a consequence of the text.

By the same token, we can no more expect virtue to come out of exposure to virtuous content than we can expect depravity to come out of depraved content. A good example is Karl May, who popped up on a comment thread on the Language Log recently. Entire generations of Central European boys (and, at least more recently, girls) grew up reading May’s voluminous output detailing the exploits of the sagacious explorer Old Shatterhand aka Kara ben Nemsi. Old Shatterhand, a pacifist with a gun and a fist – both used only as a last resort and in self-defense – embodies very much a New Testament kind of ethics, focusing on love, equality and turning the other cheek. But also on health, vigour and the indomitable German spirit.

It is inconceivable that anyone reading these books by the dozen (as I did in my youth) could ever think less of another race or do anything bad to man or beast. Yet, as we know, Central Europe was anything but calm in the last century, which saw sales of May’s books in the tens of millions. I wonder how many death camp guards or Wehrmacht soldiers did not read Karl May as boys. And as one of the Language Log commenters points out, Hitler himself was a May fan and supposedly tried to write Mein Kampf in the same style as his favorite author. How is this possible? Should we ban May’s books lest such horrors happen again?

Of course not. People seem to have a remarkable ability to read around the bits that don’t concern their interests. We can background or foreground pretty much anything. It is possible to read Kipling’s Mowgli as a cute children’s story or as a justification for colonialism. When I first read it, I saw it as a manifesto of environmentalism and ecosystem preservation (I was about 8 so I did not perhaps formulate it that way). But it can just as easily be read as an apology for man’s mastery over nature.

Karl May wasn’t quite banned in my native Czechoslovakia but his books weren’t always easy to come by due, I was told, to a strong Christian bias. I could never understand that until I reread Winnetou recently and discovered long philosophical expositions on sin and violence that were just out there. No coy hints, straight up quotations from the Bible! When I was reading these books, I simply did not see that. Equally incomprehensibly, there are people who’ve read The Chronicles of Narnia and did not notice that Aslan was Jesus. It’s an adventure story for them and that’s pretty much it.

When I look at my own political morality, I can see clear foundations laid by May and my reading of Kipling, Defoe and others – including a watered down New Testament. But I also see people around me who clearly grew up on the same literature and are rabid Old Testament tooth-for-toothers. Such is the freedom of the human spirit that it can overcome the influence of any content – good or bad. (Again within the parameters of our linguistic and social environments.)

UPDATE: Here’s an interesting summary of how Karl May’s impact cut both ways (Hitler and Einstein) via the Wikipedia entry on May. Jeff Bowersox also has a lot of relevant things to say to explain this seeming paradox of children both appropriating ‘moral’ messages for their own play and being shaped by them through the prism of their socio-discursive embeddedness.

Epistemology as ethics: Decisions and judgments not methods and solutions for evidence-based practice


Show me the money! Or so the saying goes. Implying that talk is cheap and facts are the only thing that matters. But there is another thing we are being asked to do with money and that is put it where our mouth is. So evidence is not quite enough. We have to also be willing to act on it and demonstrate a personal commitment to our facts.

As so often, folk wisdom has outlined two contradictory dictums that get applied based on the parameters of a given situation. On one account, evidence is all that is needed, whereas an alternative interpretation claims that evidence is only sufficient when backed up by personal commitment.

I’ve recently come across two approaches to the same issue that resonate with my own, namely claiming that evidence cannot be divorced from the personal commitment of its wielder. As such it needs to be constantly interrogated from all perspectives, including the ones that come from “scientific method” but not relying solely on them.

This is crucial for anybody claiming to adhere to evidence-based practice. Because the evidence is not just out there. It is always in the hands of someone caught (as we all are) within a complex web of ethical commitments. Sure, we can point to a number of contexts where lack of evidence was detrimental (vaccinations). But we can equally easily find examples of evidence being the wrong thing to have (UK school league tables, eugenics). The history of science demonstrates both incorrect conclusions being drawn from correct facts (ether) and correct conclusions being drawn from incorrect facts (including aircraft construction, as Ira Flatow showed). I am not aware of any research quantifying the proportion of these respective positions. Intuitively, one would feel inclined to conclude that mostly the correct facts have led to the correct conclusions and incorrect ones to the wrong ones, but we have been burned before.

That’s why I particularly liked what Mark Kleiman had to say at the end of his lecture on Evidence-based practice in policing:

“The notion that we can substitute method for common sense, a very wide-spread notion, is wrong. We’re eventually going to have to make judgments. Evidence-based practice is a good slogan, but it’s not a method in the Cartesian sense. It does not guarantee that what we’re going to do is right.”

Of course, neither is the opposite true. I’m always a bit worried about appeals to “common sense”. I used to tell participants on cross-cultural training courses: there is nothing common about common sense. It always belongs to someone. And it does not come naturally to us. It is only common because someone told someone else about it. How often have we heard a myth debunked only to conclude that the debunking had really been common sense? That makes no sense. It was the myth that was common sense. The debunking will only become common sense after we tell everyone about it. How many people would figure out even the simplest things, such as looking right and then left before crossing the road, without someone telling them? We spend a lot of time (as a society and as individuals) maintaining the commonality of our senses. Clifford Geertz showed in “Common sense as a cultural system” the complex processes involved in common sense, and I would not want to entrust anything of importance to it. But I agree that judgments are essential. We always make judgments whether we realise it or not.

This resonates wonderfully with what I consider to be the central thesis of “Science’s First Mistake”, a wonderful challenge to the assumptions of the universality of science by Ian Angell and Dionysios Demetis.

“…different ages have different perceptions of uncertainty; and so there are different approaches to theory construction and application, delivering different risk assessments and prompting different decisions. Note this book stresses decisions not solutions, because from its position there are no solutions, only contingent decisions. And each decision is itself a start of a new journey, not the end of an old one.

Indeed, there is no grander delusion than the production of a solution, with its linear insistence on cause and effect.”

All our solutions are really decisions. Decisions contingent on a myriad of factors related both to the data and to our personal situation. We couch these decisions in the guise of solutions to avoid personal responsibility. Not perhaps often in a conscious way but in a way that abstracts away from the situation. We do not always have to ask cui bono when presented with a solution but we should always ask where it came from. What is the situation in which a solution was born? Not because we have a suspicious mind but because we have an inquisitive one.

But decisions are just what they are. On their own, they are neither good nor bad. They are just inevitable. And here I’d like to present the conclusions of a paper on the epistemological basis of case study in educational policy making that I co-wrote a few years ago with John Elliott. It’s longer (by about a mile) than the two extracts above. But I highlighted in bold the bits that are relevant. The rest is there just to provide sufficient context to understand the key theses:

From the Summary and Conclusions of “Epistemology as ethics in research and policy: Under what terms might case studies yield useful knowledge to policy makers” by John Elliott and Dominik Lukeš. In: Evidence-based Education Policy: What Evidence? What Basis? Whose Policy?, 2008.

Overall, we have informed our inquiry with three perspectives on case study. One rooted in ethnography and built around the metaphor of the understanding of cultures one is not familiar with (these are presented by the work of scholars like Becker, Willis, Lacey, Wallace, Ball, and many others). Another strand revolves around the tradition of responsive and democratic evaluation and portrayal represented by the work of scholars such as MacDonald, Simons, Stake, Parlett, and others. This tradition (itself not presenting a unified front) aims to break down the barriers between the researcher, the researched and the audience. It recognizes the situated nature of all actors in the process (cf. Kemmis 1980) and is particularly relevant to the concerns of the policy maker. Finally, we need to add Stenhouse’s approach of contemporary history into the mix. Stenhouse provides a perspective that allows the researcher to marry the responsibility to data with the responsibility to his or her environment in a way that escapes the stereotypes associated with the ‘pure forms’ of both quantitative and qualitative research. A comprehensive historian-like approach to data collection, retention and dissemination that allows multiple interpretations by multiple actors accounts both for the complexity of the data and the situation in the context of which it is collected and interpreted.

However, we can ask ourselves whether a policy-maker faced with examples of one of these traditions could tell these perspectives apart? Should it not perhaps be the lessons from the investigation of case study that matter rather than an ability to straightforwardly classify it as an example of one or the other? In many ways, in the policy context, it is the act of choice, such as choosing on which case study a policy should be based, that is of real importance.

In that case, instead of transcendental arbitration we can provide an alternative test: Does the case study change the prejudices of the reader? Does it provide a challenge? Perhaps our notion of comprehensiveness can include the question: Is the case study opening the mind of the reader to factors that they would have otherwise ignored? This reminds us of Gadamer’s “fusion of horizons” (“Understanding [...] is always the fusion of […] horizons [present and past] which we imagine to exist by themselves.” Gadamer 1975, p. 273).

However, this could be seen as suggesting that case study automatically leads to a state of illumination. In fact, the interpretation of case study in this way requires a purposeful and active approach on the part of the reader. What, then, is the role of the philosopher? Should each researcher have a philosopher by his or her side? Or is it necessary, as John Elliott has long argued, to locate the philosopher in the practitioner? Should we expect teachers and/or policy-makers to go to the philosopher for advice or would a better solution be to strive for philosopher teachers and philosopher practitioners? Being a philosopher is of course determined not by speaking of Plato and Rousseau but by the constant challenge to personal and collective prejudices.

In that case, we can conclude that case study would have a great appeal to the politician and policy maker as a practical philosopher but that it would be a mistake to elevate it above other ways of doing practical philosophy. In this, following Gadamer, we advocate an antimethodological approach. The idea of the policy maker as philosopher and policy maker as researcher (i.e. underscoring the individual ethical agency of the policy maker) should be the proper focus of discussion of reliability and generalization. Since the policy maker is the one making the judgement, the type of research and study is then not as important as a primary focus.

And, in a way, the truthfulness of the case arises out of the practitioners’ use of the study. The judgment of warrant as well as the universalizing and revelatory nature of a particular study should become apparent to anybody familiar with the complexities of the environment. An abstract standard of quality reminiscent of statistical methods (number of interviewees, questions asked, sampling) is ultimately not a workable basis for decision making and action although that does not exclude the process of seeking a shared metaphorical perspective both on the process of data gathering and interpretation (cf. Kennedy 1979, Fox 1982). Gadamer’s words seem particularly relevant in this context: “[t]he understanding and interpretation of texts is not merely a concern of science, but is obviously part of the total human experience of the world.” (1975, p. xi)

We cannot discount the situation of the researcher any more than we can discount the situation of the researched. One constitutive element of the situation is the academic tribe: “[w]e are pursuing a ‘scholarly’ identity through our case studies rather than an intrinsic fascination with the phenomena under investigation” warns Fox (1982). To avoid any such accusations of impropriety, social science cultivates a ‘prejudice against prejudice’, a distancing from experience and valuing in order to achieve objectivity, whereas the condition of our understanding is that we have prejudices and any inquiry undertaken by ‘us’ needs to be approached in the spirit of a conversation with others; the conversation alerts participants to their prejudices. In a sense, the point of conversation is to reconstruct prejudices, which is an alternative view of understanding itself.

It should be stressed however, that this conversation does not automatically lead to a “neater picture” of the situation nor does it necessarily produce a “social good”. There is the danger of viewing ‘disciplined conversation’ as an elevated version of the folk theory on ideal policy: ‘if only everyone talked to one another, the world would be a nicer place’. Academic conversation (just like any democratic dialectic process) is often contentious and not quite the genteel affair it tries to present itself as. Equally, any given method of inquiry, including and perhaps headed by case study, can be both constitutive and disruptive of our prejudices.

Currently the culture of politeness aimed at avoiding others’ and one’s own discomfort at any cost contributes to the problem. Can one structure research that enables people to reflect about prejudices that they inevitably bring to the situation and reconstruct their biases to open up the possibility of action and not cause discomfort to themselves or others? We could say that concern with generalization and method is a consequence of academic discourse and culture and one of the ways in which questions of personal responsibility are argued away. Abstraction, the business of academia, is seen as antithetical to the process of particularization, the business of policy implementation. But given some of the questions raised in this paper, we should perhaps be asking whether abstraction and particularization are parallel processes to which we ascribe polar directionality only ex post facto. In this sense we can further Nussbaum’s distinction between generalization and universalization by rephrasing the dichotomy in the following terms: generalization is assumed to be internal to the data whereas universalization is a situated human cognitive and affective act.

A universalizable case study is of such quality that the philosopher policy maker can discern its relevance for the process of policy making (similarly to Stake’s naturalistic generalization). This is a different way of saying the same thing as Kemmis (1980, p. 136): “Case study cannot claim authority, it must demonstrate it.” The power of case study in this context can be illustrated by anecdotes from the field where practitioners had been convinced that a particular case study describes their situation and berating colleagues or staff for revealing intimate details of their situation, whereas the case study had been based on research of an entirely unrelated entity. In cases like these, the universal nature of the case study is revealed to the practitioner. Its public aspects often engender action where idle rumination and discontent would be otherwise prevalent. Even when this kind of research tells people what they already “know”, it can inject accountability by rendering heretofore private knowledge public. The notion of case study as method with transcendental epistemology therefore cannot be rescued even by attempts like Bassey’s (1999) to offer ‘fuzzy generalizations’ since while cognition and categorization is fuzzy, action involves a commitment to boundaries. This focus on situated judgement over transcendental rationality in no way denies the need for rigour and instrumentalism. We agree with Stenhouse that there needs to be a space in which the quality of a particular case study can be assessed. But such judgments will be different when made by case study practitioners and when made by policy-makers or teachers. The epistemological philosopher will apply yet another set of criteria. All these agents would do well to familiarize themselves with the criteria applied by the others but they would be unwise to assume that they can ever fully transcend the situated parameters of their community of practice and make all boundaries (such as those described by Kushner 1993) disappear.

Philosophy often likes to position itself in the role of an independent arbiter but it must not forget that it too is an embedded practice with its own community rules. That does not mean there’s no space for it in this debate. If nothing else, it can provide a space (not unlike the liminal space of Turner’s rituals) in which the normal assumptions are suspended and transformation of the prejudice can occur. In this context, we should perhaps investigate the notion of therapeutic reading of philosophy put forth by the New Wittgensteinians (see Crary and Read 2000).

This makes the questions of ethics alluded to earlier even more prominent. We propose that given their situated nature, questions of generalization are questions of ethics rather than inquiries into some disembodied transcendent rationality. Participants in the complex interaction between practitioners of education, educational researchers and educational policy makers are constantly faced with ethical decisions asking themselves: how do I act in ways that are consonant with my values and goals? Questions of warrant are internal to them and their situation rather than being easily resolvable by external expert arbitration. This does not exclude instrumental expertise in research design and evaluation of results but the role of such expertise is limited to the particular. MacDonald and Walker (1977) point out the importance of apprenticeship in the training of case study practitioners and we should bear in mind that this experience cannot be distilled into general rules for research training as we find them laid out in a statistics textbook.

Herein can lie the contribution of philosophy: An inquiry into the warrant and generalization of case study should be an inquiry into the ethics surrounding the creation and use of research, not an attempt to provide an epistemologically transcendent account of the representativeness of sampled data.

 


The brain is a bad metaphor for language


Note: This was intended to be a brief note. Instead it developed into a monster post that took me two weeks of stolen moments to write. It’s very light on non-blog references but they exist. Nevertheless, it is still easy to find a number of oversimplifications,  conflations, and other imperfections below. The general thrust of the argument however remains.

How Far Can You Trust a Neuroscientist?


A couple of days ago I watched a TED talk called The Linguistic Genius of Babies by Patricia Kuhl. I had been putting it off because I suspected I wouldn’t like it, but I was still disappointed at how hidebound it was. It conflated a number of really unconnected things and then tried to sway the audience to its point of view with pretty pictures of cute infants in brain scanners. But all it amounted to was a hodgepodge of half-implied claims that is incredibly similar to some of the more outlandish claims made by behaviorists so many years ago. Kuhl concluded that brain research is the next frontier of understanding learning. But she did not give a single credible example of how this could be. She started with a rhetorical trick. She mentioned an at-risk language with a picture of a mother holding an infant facing towards her. And then she said (with annoying condescension) that this mother and the other tribe members know something we do not:

What this mother — and the 800 people who speak Koro in the world — understand is that, to preserve this language, they need to speak it to the babies.

This is garbage. Languages do not die because there’s nobody there to speak them to the babies (until the very end, of course) but because there’s nobody of socioeconomic or symbolic prestige that children and young adults can speak the language to. Languages don’t die because people can’t learn them; they die because people have no reason (other than nostalgia) to learn them or have a reason not to learn them. Given a strong enough reason they would learn a dying language even if they started at sixteen. They just almost never are given the reason. Why Kuhl felt she did not need to consult the literature on language death, I don’t know.

Patricia Kuhl has spent the last 20 years studying pretty much one thing: acoustic discrimination in infants (http://ilabs.washington.edu/kuhl/research.html). Her research provided support for something that had been already known (or suspected), namely that young babies can discriminate between sounds that adults cannot (given similar stimuli such as the ones one might find in the foreign language classroom). She calls this the “linguistic genius of babies” and she’s wrong:

Babies and children are geniuses until they turn seven, and then there’s a systematic decline.

First, the decline (if there is such a thing) is mostly limited to acoustic processing and even then it’s not clear that the brain is the thing that causes it. Second, being able to discriminate (by moving their head) between sounds in both English and Mandarin at age 6 months is not a sign of genius. It’s a sign of the baby not being able to differentiate between language and sound. Or in other words, the babies are still pretty dumb. But it doesn’t mean they can’t learn a similar distinction at a later age – like four or seven or twelve. They do. They just probably do it in a different way than a 6-month old would. Third, in the overall scheme of things, acoustic discrimination at the individual phoneme level (which is what Kuhl is studying) is only a small part of learning a language and it certainly does NOT stop at 7 months or even 7 years of age. Even children who start learning a second language at the age of 6 achieve a native-like phonemic competence. And even many adults do. They seem not to perform as well on certain fairly specialized acoustic tests but functionally, they can be as good as native speakers. And it’s furthermore not clear that accent deficiencies are due to the lack of some sort of brain plasticity. Fourth, language learning and knowledge is not a binary thing. Even people who only know one language know it to a certain degree. They can be lexically, semantically and syntactically quite challenged when exposed to a sub-code of their language they have little to no contact with. So I’m not at all sure what Kuhl was referring to. François Grosjean (an eminent researcher in the field) has been discussing all this on his Life as Bilingual blog (and in books, etc.). To have any credibility, Kuhl must address this head on:

There is no upper age limit for acquiring a new language and then continuing one’s life with two or more languages. Nor is there any limit in the fluency that one can attain in the new language with the exception of pronunciation skills.

Instead she just falls back on old prejudices. She simply has absolutely nothing to support this:

We think by studying how the sounds are learned, we’ll have a model for the rest of language, and perhaps for critical periods that may exist in childhood for social, emotional and cognitive development.

A paragraph like this may get her some extra funding but I don’t see any other justification for it. Actually, I find it quite puzzling that a serious scholar would even propose anything like this today. We already know there is no critical period for social development. Well, we don’t really know what social development is, but there’s no critical brain period for what there is. We get socialized to new collective environments throughout our lives.

But there’s no reason to suppose that learning to interact in a new environment is anything like learning to discriminate between sounds. There are some areas of language linked to perception where that may partly be the case (such as discriminating shapes, movements, colors, etc.) but hardly things like morphology or syntax, where much more complexity is involved. But this argument cuts both ways. Let’s say a lot of language learning was like sound development. And we know most of it continues throughout life (syntax, morphology, lexicon) and it doesn’t even start at 6 months (unless you’re a crazy Chomskean who believes in some sort of magical parameter setting). So if sound development was like that, maybe it has nothing to do with the brain in the way Kuhl imagines – although she’s so vague that she could always claim that that’s what she’d had in mind. This is what Kuhl thinks of as additional information:

We’re seeing the baby brain. As the baby hears a word in her language the auditory areas light up, and then subsequently areas surrounding it that we think are related to coherence, getting the brain coordinated with its different areas, and causality, one brain area causing another to activate.

So what? We knew that’s what was going to happen. Some parts of the brain were going to light up as they always do. What does that mean? I don’t know. But I also know that Patricia Kuhl and her colleagues don’t know either (at least not in the way she pretends). We speak a language, we learn a language, and at the same time we have a brain and things happen in the brain. There are neurons and areas that seem to be affected by impact (but not always and not always in exactly the same way). Of course, this is an undue simplification. Neuroscientists know a huge amount about the brain. Just not how it links to language in a way that would say much about language that we don’t already know. Kuhl’s next implied claim is a good example of how partial knowledge in one area may not at all extend to knowledge in another area.

What you see here is the audio result — no learning whatsoever — and the video result — no learning whatsoever. It takes a human being for babies to take their statistics. The social brain is controlling when the babies are taking their statistics.

In other words, when the children were exposed to audio or video as opposed to a live person, no effect was shown. At 6 months of age! As is Kuhl’s wont, she only hints at the implications, but over at the Royal Society’s blog comments, Eric R. Kandel has spelled it out:

I’m very much taken with Patricia Kuhl’s finding in the acquisition of a second language by infants that the physical presence of a teacher makes enormous difference when compared to video presence. We all know from personal experience how important specific teachers have been. Is it absurd to think that we might also develop methodologies that would bring out people’s potential for interacting empathically with students so that we can have a way of selecting for teachers, particularly for certain subjects and certain types of student? Neuroscience: Implications for Education and Lifelong Learning.

But this could very well be absurd! First, Kuhl’s experiments were not about second language acquisition but sensitivity to sounds in other languages. Second, there’s no evidence that the same thing Kuhl discovered for infants holds for adults or even three-year olds. A six-month old baby hasn’t learned yet that the pictures and sounds coming from the machine represent the real world. But most four-year olds have. I don’t know of any research but there is plenty of anecdotal evidence. I have personally met several people highly competent in a second language who claimed they learned it by watching TV at a young age. A significant chunk of my own competence in English comes from listening to radio, audio books and watching TV drama. How much of our first language competence comes from reading books and watching TV? That’s not to say that personal interaction is not important – after all we need to learn enough to understand what the 2D images on the screen represent. But how much do we need to learn? Neither Kuhl nor Kandel have the answer but both are ready (at least by implication) to shape policy regarding language learning. In the last few years, several reports raised questions about some overreaching by neuroscience (both in methods and assumptions about their validity) but even perfectly good neuroscience can be bad scholarship in extending its claims far beyond what the evidence can support.

The Isomorphism Fallacy

This section of the post is partly based on a paper I presented at a Czech cognitive science conference about 3 years ago called Isomorphism as a heuristic and philosophical problem.

The fundamental problem underlying the overreach of basic neuroscience research is the fallacy of isomorphism. This fallacy presumes that the same structures we see in language, behavior and society must have structural counterparts in the brain. So there’s a bit of the brain that deals with nouns. Another bit that deals with being sorry. Possibly another one that deals with voting Republican (as Woody Allen proved in “Everyone Says I Love You“). But at the moment the evidence for this is extremely weak, at best. And there is no intrinsic need for a structural correspondence to exist. Sidney Lamb came up with a wonderful analogy that I’m still working my way through. He says (recalling an old ‘Aggie‘ joke) that trying to figure out where the bits we know as language structure are in the brain is like trying to work out how to fit the roll that comes out of a tube of toothpaste back into the container. This is obviously a fool’s errand. There’s nothing in the toothpaste container that in any way resembles the colorful and tubular object we get when we squeeze the paste out of the container. We get that through an interaction of the substance, the container, external force, and the shape of the opening. It seems to me entirely plausible that the link between language and the brain is much more like that between the paste, the container and their environment than like that between a bunch of objects and a box. The structures that come out are the result of things we don’t quite understand happening in the brain interacting with its environment. (I’m not saying that that’s how it is, just that it’s plausible.) The other thing that lends it credence is the fact that things like nouns or fluency are social constructs with fuzzy boundaries, not hard discrete objects, so actually localizing them in the brain would be a bit of a surprise. Not that it can’t be done, but the burden of evidence for making this a credible finding is substantial.

Now, I think that the same problem applies to looking for isomorphism the other way. Lamb himself tries to look at grammar by looking for connections resembling the behavior of activating neurons. I don’t see this going anywhere. George Lakoff (who influenced me more than any other linguist in the world) seems to think that a Neural Theory of Language is the next step in the development of linguistics. At one point he and many others thought that mirror neurons say something about language but now that seems to have been brought into question. But why do we need mirror neurons when we already know a lot about the imitative behaviors they’re supposed to facilitate? Perhaps as a treatment and diagnostic protocol for pathologies, but is this really more than story-telling? Jerome Feldman described NTL in his book “From Molecule to Metaphor” but his main contribution, it seems to me, lies in showing how complex language phenomena can be modelled with brain-like neural networks, not in saying anything new about these phenomena (see here for an even harsher treatment). The same goes for Embodied Construction Grammar. I entirely share ECG’s linguistic assumptions but the problem is that it tries to link its descriptive apparatus directly to the formalisms necessary for modeling. This proved to be a disaster for the generative project, which projected its formalisms into language with an imperfect fit and now spends most of its time refining those formalisms rather than studying language.

So far I don’t see any advantage in linking language to the brain in either the way Kuhl et al or Feldman et al try to do it (again with the possible exception of pathologies). In his recent paper on compositionality, Feldman describes research that shows that spacial areas are activated in conjunction with spatial terms and that sentence processing time increases as the sentence gets removed from “natural spatial orientation”. But brain imaging at best confirms what we already knew. But how useful is that confirmatory knowledge? I would argue that not very useful. In fact there is a danger that we will start thinking of brain imaging as a necessary confirmation of linguistic theory. Feldman takes a step in this dangerous direction when he says that with the advent of new techniques of neuroscience we can finally study language “scientifically”. [Shudder.]

We know there’s a connection between language and the brain (more systematic than with language and the foot, for instance) but so far nobody’s shown convincingly that we can explain much about language by looking at the brain (or vice versa). Language is best studied as its own incredibly multifaceted beast and so is the brain. We need to know a lot more about language and about the brain before we can start projecting one into the other.

And at the moment, brain science is the junior partner here. We know a lot about language and can find out more without looking for explanations in the brain. Looking for them there seems as foolish as trying to illuminate language by looking inside a computer (as Chomsky's followers keep doing). The same question that I'm asking about language was asked about cognitive processes (a closely related thing) by William Uttal in The New Phrenology, who asks "whether psychological processes can be defined and isolated in a way that permits them to be associated with particular brain regions" and warns against a "neuroreductionist wild goose chase" – and how else can we characterize Kuhl's performance – lest we fall "victim to what may be a 'neo-phrenological' fad". Michael Shermer voiced a similar concern in Scientific American:

The brain is not random kludge, of course, so the search for neural networks associated with psychological concepts is a worthy one, as long as we do not succumb to the siren song of phrenology.

What does a “siren song of phrenology” sound like? I imagine it would sound pretty much like this quote by Kuhl:

We are embarking on a grand and golden age of knowledge about child’s brain development. We’re going to be able to see a child’s brain as they experience an emotion, as they learn to speak and read, as they solve a math problem, as they have an idea. And we’re going to be able to invent brain-based interventions for children who have difficulty learning.

I have no doubt that there are some learning difficulties for which a 'brain-based intervention' (whatever that is) may be effective. But they are likely such a small part of the universe of learning difficulties that they hardly warrant a bombastic claim like the one above. I could find nothing in Kuhl's narrow research that would support this assertion. Learning and language are complex psycho-social phenomena that are unlikely to have straightforward counterparts in brain activations of the sort that can be seen by even the most advanced modern neuroimaging technology. There may well be some straightforward pathologies that can be identified and have some sort of treatment aimed at them. The problem is that brain pathologies are not necessarily opposites of a typically functioning brain (a fallacy that has long plagued interpretation of the evidence from aphasias) – it is, as brain plasticity would suggest, just as likely that at least some brain pathologies create new qualities rather than simply flipping an on/off switch on existing ones. Plus there is the historical tendency of the self-styled hard sciences to horn in on areas where established disciplines have accumulated lots of knowledge, ignore that knowledge, declare a reductionist victory, fail, and not admit failure.

For the foreseeable future, the brain remains a really poor metaphor for language and other social constructs. We are perhaps predisposed to finding similarities in anything we look at, but researchers ought to have learned by now to be cautious about them. Today's neuroscientists should be very careful that they don't look as foolish to future generations as phrenologists and skull measurers look to us now.

In praise of non-reductionist neuroscience

Let me reiterate, I have nothing against brain research. The more of it, the better! But it needs to be much more honest about its achievements and limitations (as much as it can given the politics of research funding). Saying the sort of things Patricia Kuhl does with incredibly flimsy evidence and complete disregard for other disciplines is good for the funding but awful for actually obtaining good results. (Note: The brevity of the TED format is not an excuse in this case.)

A much more promising overview of applied neuroscience is a report by the Royal Society on education and the brain that is far more realistic about the state of neurocognitive research, admitting at the outset: "There is enormous variation between individuals, and brain-behaviour relationships are complex."

The report authors go on to enumerate the things they feel we can claim as knowledge about the brain:

  1. The brain’s plasticity
  2. The brain’s response to reward
  3. The brain’s self-regulatory processes
  4. Brain-external factors of cognitive development
  5. Individual differences in learning as connected to the brain and genome
  6. Neuroscience connection to adaptive learning technology

So this is a fairly modest list, made even more modest by the formulations of the actual knowledge claims. I could only find a handful of statements made in support of the general claims that do not contain a hedge: "research suggests", "may mean", "appears to be", "seems to be", "probably". This modesty in research interpretation does not always make its way into the report's policy suggestions (mainly suggestions 1 and 2). Despite this, I think anybody who finds Patricia Kuhl's claims interesting would do well to read this report and pay careful attention to the actual findings described there.

Another possible problem for those making wide-reaching conclusions is the relative newness of the research on which these recommendations are based. I had a brief look at the citations in the report and only about half are actually related to primary brain research. Of those, exactly half were published in 2009 (8) and 2010 (20), and only two in the 1990s. This is in contrast to language acquisition and multilingualism research, which can point to decades of consistently replicable findings and relatively stable and reliable methods. We need to be afraid, very afraid, of sexy new findings when they relate to what is perceived as the "nature" of humans. At this point, as a linguist looking at neuroscience (and the history of the promise of neuroscience), my attitude is skeptical. I want to see ten years of independent replication and stable techniques before I will consider basing my descriptions of language and linguistic behavior on neuroimaging. There's just too much of 'now we can see stuff in the brain we couldn't see before, so this new version of what we think the brain is doing is definitely what it's doing'. Plus the assumption that exponential growth in the precision of brain mapping will result in the same growth in the identification of brain functions is far from a sure thing (cf. genome decoding). Exponential growth in computer speed only led to incremental increases in computer usability. And the next logical step in the once skyrocketing development of automobiles was not flying cars but pretty much the same, slightly better cars (even though they look completely different under the hood).

The sort of knowledge it takes to learn and do good neuroscience is staggeringly awesome, and the scientists who study the brain deserve all the personal accolades they get. But the actual knowledge they generate about issues relating to language and other social constructs is much less overwhelming. Even a tiny clinical advance, such as helping a relatively small number of people who otherwise wouldn't be able to express themselves to communicate, makes the work worthwhile. But we must not confuse clinical advances with theoretical advances, and we must be very cautious when applying these to policy fields that are related more by similarity than by a direct causal connection.


Metaphor is my co-pilot: How the literal and metaphorical rely on the same type of knowledge

"Thanks" to experimental philosophy, we have a bit more evidence confirming that what many people think about the special epistemological status of metaphor is bunk. We should also note that Gibbs' and Glucksberg's teams have been doing similar research, with the same results, since the late 1980s.

This is how Joshua Knobe on the Experimental Philosophy blog summarized a forthcoming paper by Mark Phelan (http://pantheon.yale.edu/~mp622/inadequacy.pdf):

In short, it looks like it really is pretty impossible to explain what a metaphor means. But that is not because of anything special about metaphors. It is merely a reflection of the fact that we can’t explain what any sentence means.  Experimental Philosophy: What Metaphors Mean

Phelan went and asked people to paraphrase metaphorical and non-metaphorical statements, only to find that the resulting paraphrases were judged equally inadequate for metaphors and literal statements. In fact, paraphrases of metaphorical statements like "Music is the universal language" or "Never give your heart away" were judged as more acceptable than paraphrases of their "literal" counterparts "French is the language of Quebec" and "Always count your change". The result shows something that any good translator knows intuitively – paraphrases are always hard.

So the conclusion (one to which I'm repeatedly drawn) is that there's nothing special about metaphors when it comes to meaning, understanding and associated activities like paraphrasing. The availability of paraphrase (and understanding in general) depends broadly on two factors: knowledge and usage. We have to know a lot about the world and about how language is used to navigate it. So while we might consider "there's a chair in the office", "a chair is in the office" or "how about that chair in the office" equally adequate descriptions of a particular configuration of objects in space, they are not equally interchangeable in use. And things get even trickier when we substitute "a cobra" or "an elephant" for "a chair" and then start playing around with definiteness. We know that chairs in offices are normal and desirable, cobras unlikely and undesirable, and elephants impossible and most likely metaphorical. Thinking that we can understand both "there's an elephant in the office" and "there's a chair in the office" simply as a combination of the words and the construction "there's X in Y" is a bad idea. And the same goes for metaphors. We need to know a lot about the world and about language to understand them.
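
To illustrate the point with a deliberately crude toy of my own (not anyone's actual model of comprehension, and with made-up typicality figures), here is the difference between composing the construction and actually interpreting it. The composed skeleton is identical for every filler; the reading we settle on comes from the knowledge we attach to the filler:

```python
# A toy sketch (my invention): composition gives the same skeleton for every
# filler of "there's X in the office"; world knowledge does the interpretive work.
# The typicality scores are invented for illustration.
TYPICALITY = {"a chair": 0.95, "a cobra": 0.05, "an elephant": 0.001}

def interpret(x):
    literal = f"there's {x} in the office"   # identical composition for every X
    typicality = TYPICALITY.get(x, 0.5)      # what we know about the world
    if typicality > 0.5:
        reading = "unremarkable literal description"
    elif typicality > 0.01:
        reading = "alarming but still literal report"
    else:
        reading = "probably metaphorical"
    return literal, reading

for x in TYPICALITY:
    print(interpret(x))
```

The if-branches are, of course, a caricature; the point is only that nothing in the construction itself tells us which branch to take.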

One of the pairs of sentences Phelan compared was "God is my co-pilot" and "Bill Thomson is my co-pilot". Intuitively, we'd say that the "literal" one would be easier to paraphrase, and we'd be right, but not by as wide a margin as we might expect: 47% of respondents chose "God is helping me get where I want to go" as a good paraphrase, while a mere 58% went with "I have a copilot named Bill Thomson". And that goes slightly against intuition. But not if we think about it a bit more carefully. All the questions we can ask about the meaning of these two sentences demonstrate a significant dependence on knowledge and usage. "In what way is God your co-pilot?" makes sense where "In what way is Bill Thomson your co-pilot?" doesn't. But we can certainly ask "What exactly does Bill do when he's your co-pilot?" or "What do co-pilots do anyway?". And armed with that knowledge and knowledge of the situation, we can challenge either statement: "no, God isn't really your co-pilot" or "no, Bill isn't really your co-pilot". Metaphoricity really had no impact – it was knowledge that mattered. Most people know relatively little about what co-pilots do, so we might even suspect that their understanding of "God is my co-pilot" is greater than their understanding of "Bill is my co-pilot".

This is because the two utterances are not even that different conceptually. They both depend on our ability to create mental mappings between two domains of understanding: the present situation and what we know about co-pilots. We might argue that in the "literal" case there are fewer, more determinate mappings, but that is only so if we have precise and extensive knowledge. If we hear the captain say "Bill is my co-pilot" and we know that co-pilots sit next to pilots and twiddle with instruments, we can conclude that the guy sitting next to the captain and switching toggles is Bill. If the person sitting next to us said "God is my co-pilot", we can draw conclusions from our knowledge of usage, e.g. "people who say this are also likely to talk to me about God". It seems a very simple mapping. This would get a lot more complex if the captain said "God is my co-pilot" and the person sitting next to us on the plane said "Bill is my co-pilot", but it would still be a case of reconciling our knowledge of the world and of language usage with the present situation through mappings. So the seeming simplicity of the literal is really just an illusion when it comes to statements of any consequence.
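
One last toy sketch (again my own crude illustration, not a claim about actual cognitive machinery; the frame elements are invented) of what this shared mapping machinery might look like: a small co-pilot frame whose roles get projected onto whoever fills the slot, whether the filler is Bill in the cockpit or God in the speaker's life.

```python
# A toy sketch (my invention): both utterances are understood by projecting
# what we know about co-pilots onto the situation at hand.
COPILOT_FRAME = {
    "role": "assists the pilot",
    "location": "sits next to the pilot",
    "activity": "handles the instruments",
}

def map_copilot_frame(pilot, filler, situation):
    """Project each frame element onto whoever fills the co-pilot slot."""
    return [
        f"{filler} {element.replace('the pilot', pilot)} ({situation})"
        for element in COPILOT_FRAME.values()
    ]

# The captain's utterance: fairly determinate correspondences
print(map_copilot_frame("the captain", "Bill", "in the cockpit"))
# The passenger's utterance: the same machinery, vaguer correspondences
print(map_copilot_frame("the speaker", "God", "in the speaker's life"))
```

The second call produces correspondences we have to construe much more loosely, but the procedure is the same in both cases, which is all the toy is meant to show.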

–7 Aug – Edited slightly for coherence and typos
