Category Archives: Knowledge

Image by moune.drah CC BY NC SA https://www.flickr.com/photos/70148128@N00/12893857533/

Anthropologists’ metaphorical shenanigans: Or how (not) to research metaphor

Over on the excellent ‘Genealogy of Religion’, Cris Campbell waved a friendly red rag in front of my eyes to make me incensed over exaggerated claims (some) anthropologists make about metaphors. I had expressed some doubts in previous comments but felt that this particular one deserved its own post.

The book Cris refers to is a collection of essays, America in 1492: The World of the Indian Peoples Before the Arrival of Columbus (1991, ed. Alvin Josephy), which also contains an essay by Joel Sherzer called “A Richness of Voices”. I don’t have the book but I looked up a few of its quotes on metaphor.

The introduction summarizes the conclusion thus:

“Metaphors about the relations of people to animals and natural forces were essential to the adaptive strategies of people who lived by hunting.” (p. 26)

This is an example of what Sherzer has to say about metaphor:

“Another important feature of native vocabularies was the metaphor – the use of words or groups of words that related to one realm of meaning to another. To students they provide a window into American Indian philosophies. … The relationship between the root and the derived form was often metaphorical.” (p. 256)

The first part of both statements is true but the second part does not follow. That is just bad, bad scholarship. I’m not a big Popperian but if you want to make claims about language you have to postulate some hypotheses and try really, really, really hard to disprove them. Why? Because these questions have empirical aspects that can be tested against evidence. Instead, the hypotheses are left implicit and no attempt is made to see if they hold up. So this is what I suggest are Sherzer’s implicit hypotheses that should be made explicit and tested:

  1. American Indian languages use metaphors for essential parts of their understanding of the world. (Corollary: If we understand the metaphors, we can understand the worldview of the speakers of those languages.)
  2. American Indian language use of metaphors was necessary to their survival because of their hunter-gatherer lifestyles.
  3. American Indian languages use metaphor more than the SAE (Standard Average European) languages.

Re 1: This is demonstrably true. It is true of all languages so it is not surprising here. However, exactly how central this metaphorical reasoning is and how it works cognitively is an open question. I addressed some of this in my review of Verena Haser’s book.

As to the corollary, I’ve mentioned this time and time again. There is no straightforward link between metaphor and worldview. War on poverty, war on drugs, war on terror all draw on different aspects of war. As do Salvation Army, Peace Corps and the Marine Corps. You can’t say that the Salvation Army subscribes to the same level of violence as a ‘real’ Army. The same goes for metaphors like ‘modern Crusades’ or the various notions of ‘Jihad’. Metaphor works exactly because it does not commit us to a particular course of action.

That’s not to say that the use of metaphor can never be revealing of underlying conceptualizations. For instance, calling something a rebellion vs. calling it a ‘civil war’ imposes a certain order on the configuration of participants and reveals the speaker’s assumptions. But calling someone ‘my rock’ does not reveal any cultural preoccupation with rocks. The latter (I propose) is much more common than the former.

Re 2: I think this is demonstrably false. From my (albeit incomplete) reading of the literature, most of the time metaphors just got in the way of hunting. Thinking of the ‘Bear’ as the father to whom you have to ritually apologise before killing him seems a bit excessive. Over-metaphorisation of plants and animals has also led to their over- or under-exploitation. E.g. the Nuer not eating birds and forgoing an important source of nutrition, or the Hawaiians hunting rare birds to extinction for their plumes. Sure, metaphors were essential to the construction of folk taxonomies but that is equally true of Western ‘scientific’ taxonomies, which map onto notions of descent, progress and containment. (PS: I’ve been working on a post called ‘Taxonomies are metaphors’ where I elaborate on this.)

Re 3: This is just out and out nonsense. The examples given are things like the bark of a tree being called ‘skin’ and spatial prepositions like ‘on top of’ or ‘behind’ being derived from body parts. The author obviously did not bother to consult an English etymological dictionary, where he could have discovered that ‘top’ comes from ‘tuft’ as in ‘tuft of hair’ (or is at the very least connected). And of course the connection of ‘behind’ to a body part (albeit in the other direction) should be pretty obvious to anyone. Anyway, body-part metaphors are all over all languages in all sorts of similar but inconsistent ways: mountains have feet (but not heads), human groups have heads (but not feet), trees have trunks (but not arms), a leader may have someone as their right arm (but not their left foot). And ‘custard has skin’ in English (chew on that). In short, unless the author can show even a hint of a quantitative tendency, it’s clear that American Indian languages are just as metaphorical as any other languages.

Sherzer comes to this conclusion:

“Metaphorical language pervaded the verbal art of the Americas in 1492, in part because of the closeness Native American had always felt to the natural world around them and their social, cultural, aesthetic, and personal identification with it and in part because of their faith in the immediacy of a spirit world whose presence could be manifest in discourse.”

But that displays a fundamental misunderstanding of how metaphor works in language. Faith in immediacy has no link to the use of metaphors (or at the very least Sherzer did not demonstrate any link because he confused lyricism with scholarship). Sure, metaphors based on the natural world might indicate ‘closeness to the natural world around’ but that’s just as much of a discovery as saying that people who live in an area with lots of oaks have a word that means ‘oak’. The opposite would be surprising. The problem is that if you analyzed English without preconceptions about the culture of its speakers you would find just as much closeness to the natural world (e.g. a person can be ‘a force of nature’, have ‘eyes like a hawk’, be ‘dirty as a pig’, ‘wily as a fox’, ‘slow as a snail’, ‘beautiful as a flower’, ‘sturdy as a tree’, etc.).

The following also seems deep, but it’s actually meaningless:

“The metaphorical and symbolic bent of Mesoamerica was reflected in the grammars, vocabularies, and verbal art of the region.” (p. 272)

Mesoamerica had no ‘symbolic bent’. Humans have a symbolic bent, just like they have spleens, guts and little toes. So let’s stop being all gushy about it and study things that are worth noting.

PS: This just underscores my comments on an earlier post of Cris’ where I took this quote to task:

“Nahuatl was and is a language rich in metaphor, and the Mexica took delight in exploring veiled resemblances…” This is complete and utter nonsense. Language is rich in metaphor and all cultures explore veiled resemblances. That’s just how language works. All I can surmise is that the author did not learn the language very well and therefore was translating some idioms literally. It happens. Or she’s just mindlessly spouting a bullshit trope people trot out when they need to support some mystical theory about a people.

And the conclusion!? “In a differently conceptualized world concepts are differently distributed. If we want to know the metaphors our subjects lived by, we need first to know how the language scanned actuality. Linguistic messages in foreign (or in familiar) tongues require not only decoding, but interpretation.” Translated from bullshit to normal speak: “When you translate things from a foreign language, you need to pay attention to context.” Nahuatl is no different to Spanish in this. In fact, the same applies to British and American English.

Finally, this metaphor mania is not unique to anthropologists. I’ve seen this in philosophy, education studies, etc. Metaphors are seductive… Can’t live without them…


What is not a metaphor: Modelling the world through language, thought, science, or action

The role of metaphor in science debate (Background)

Recently, the LSE podcast ran an interesting panel on the subject of “Metaphors and Science”. It featured three speakers talking about the interface between metaphor and various ‘scientific’ disciplines (economics, physics and surgery). Unlike on many such occasions, all three speakers were actually very knowledgeable and thoughtful on the subject.

In particular, I liked Felicity Mellor and Richard Bronk who adopted the same perspective that underlies this blog and which I most recently articulated in writing about obliging metaphors. Felicity Mellor put it especially eloquently when she said:

“Metaphor allows us to speak the truth by saying something that is wrong. That means it can be creatively liberating but it can also be surreptitiously coercive.”

This dual nature of coerciveness and liberation was echoed throughout the discussion by all three speakers. But they also shared a view of the ubiquity of metaphor, which is what this post is about.

What is not a metaphor? The question!

The moderator of the discussion was much more stereotypically ambivalent about such an expansive attitude toward metaphor and challenged the speakers with the question of ‘what is the opposite of metaphor’ or ‘what is not a metaphor’. He elicited suggestions from the audience, who came up with this list:

model, diagram, definition, truths, math, experience, facts, logic, the object, denotation

The interesting thing is that most of the items on this list are in fact metaphorical in nature. Most certainly models, diagrams and definitions (more on these in future posts). But mathematics and logic are also deeply metaphorical (both in their application but also internally; e.g. the whole logico mathematical concept of proof is profoundly metaphorical).

Things get a bit more problematic with things like truth, fact, denotation and the object. All of those seem to be pointing at something that is intuitively unmetaphorical. But it doesn’t take a lot of effort to see that ‘something metaphorical’ is going on there as well. When we assign a label (denotation), for instance ‘chair’ or ‘coast’ or ‘truth’, we automatically trigger an entire cognitive armoury for dealing with things that exist and have certain properties. But it is clear that ‘chair’, ‘coast’ and ‘metaphor’ are not the same kind of thing at all. Yet we can start treating them the same way because they are all labels. So we start asking for the location, shape or definition of metaphor, just because we assigned it a label, in the same way we can ask for those things about a chair or a coast. We want to take a measure of it, but this is much easier with a chair than with a coast (thus the famous fractal puzzle about the length of the coast of Britain). But chairs are not particularly easy to nail down (metaphorically, of course) either, as I discussed in my post on clichés and metaphors.

Brute facts of tiny ontology

So what is the thing that is not a metaphor? Both Bronk and Mellor suggested the “brute fact”, a position George Lakoff called basic realism and one I’ve recently come to think of as tiny ontology. The idea, as expressed by Mellor and Bronk in this discussion, is that there’s a real world out there which impinges upon our bodily existence but with which we can only interact through the lens of our cognition, which is profoundly metaphorical.

But ultimately, this does not give us a very useful answer. Either everything is a metaphor, so we might as well stop talking about it, or there is something that is not a metaphor. In which case, let’s have a look. Tiny ontology does not give us the solution because we can only access it through the filter of our cognition (which does not mean consciously or through some wilful interpretation). So the real question is, are there some aspects of our cognition that are not metaphorical?

Metaphor as model (or What is metaphor)

The solution lies in the revelation hinted at above that labels are in themselves metaphors. The act of labelling is metaphorical, or rather, it triggers the domain of objects. What do I mean by that? Well, first let’s have a look at how metaphor actually works. I find it interesting that nobody during the entire discussion tried to raise that question other than the usual ‘using something to talk about something else’. Here’s my potted summary of how metaphor works (see more details in the About section).

Metaphor is a process of projecting one conceptual domain onto another. All of our cognition involves this process of conceptual integration (or blending). This integration is fluid, fuzzy and partial. In language, this domain mapping is revealed through the process of deixis, attribution, predication, definition, comparison, etc. Sometimes it is made explicit by figurative language. Figurative language spans the scale of overt to covert. Figurative language has a conceptual, communicative and textual dimension (see my classification of metaphor use). In cognition, this process of conceptual integration is involved in identification, discrimination, manipulation. All of these can be more or less overtly analogical.

So all of this is just a long way of saying that metaphor is a metaphor for a complicated process which is largely unconscious but not uncommonly conscious. In fact, in my research, I no longer use the term ‘metaphor’ because it misleads more than it helps. There’s simply too much baggage from what is just the overt textual manifestation of metaphor – the sort of ‘common sense’ understanding of metaphor. However, this common-sense, ordinary understanding of ‘metaphor’ makes using the word a useful shortcut in communication with people who don’t have much of a background in this line of thought. But when we think about the issue more deeply, it becomes a hindrance because of all the different types of uses of metaphor I described here (a replay of the dual liberating and coercive nature of metaphor mentioned above – we don’t get to escape our cognition just because we’re talking about metaphors).

In my work, I use the term frame, which is just a label for a sort of conceptual model (originally suggested by Lakoff as Idealized Cognitive Model). But I think in this context the term ‘model’ is a bit more revealing about what is going on.

So we can say that every time we engage conceptually with our experience, we are engaging in an act of modelling (or framing). Even when I call something ‘true’, I am applying a certain model (frame) that will engage certain heuristics (e.g. asking for confirmation, evidence). Equally, if I say something like ‘education is business’, I am applying a certain model that will allow me to talk about things like achieving economies of scale or meeting consumer demand but will make it much harder to talk about ethics and personal growth. That doesn’t mean that I cannot apply more than one model, a combination of models or build new models from old ones. (Computer virus is a famous example, but natural law is another one. Again, more on this in later posts.)

Action as an example of modelling

During the discussion, an audience member asked whether we can experience the world directly (not mediated by metaphoric cognition). The answer is yes, but even this kind of experience involves modelling. When I walk along a path, I automatically turn to avoid objects – therefore I’m modelling their solid and interactive nature. Even when I’m lying still, free of all thought and just letting the warmth of the shining sun wash over me, I’m still applying a model of my position in the world in a particular way. That is, my body is not activating my ears to hear the sun’s rays, nor is it perceiving the bacteria going about their business in my stomach. A snake, a polar bear or a fish would all model that situation in a different way.

This may seem like an unnecessary extension of the notion of a model. (But it echoes the position of the third speaker, Roger Kneebone, who was talking about metaphor as part of the practice of surgery.) It is not particularly crucial to our understanding of metaphor, but I think it’s important to divert us from a certain kind of perceptual mysticism in which many people unhappy with the limitations of their cognitive models engage. The point is that not all of our existence is necessarily conceptual, but all of it models our interaction with the world and switches between different models as appropriate. E.g. my body applies different models of the world when I’m stepping down from a step onto solid ground or stepping into a pool of water.

The languages of metaphor: Or how a metaphor do

I am aware that this is all very dense and requires a lot more elaboration (well, that’s why I’m writing a blog, after all). But I’d like to conclude with a warning that the language used for talking about metaphor brings with it certain models of thinking about the world which can be very confusing if we don’t let go of them in time. Just the fact that we’re using words is a problem. When words are isolated (for instance, in a dictionary or at the end of the phrase ‘What is a…’) it only seems natural that they should have a definition. We have a word “metaphor” and it would seem that it needs to have some clear meaning. The kind of thing we’re used to seeing on the right-hand side of dictionaries. But insisting that a dictionary-like definition is what must be at the end of the journey is to misunderstand what we’ve seen along the way.

There are many contexts in which the construction “metaphor is…” is not only helpful but also necessary. For example, when clarifying one’s use: “In this blog, what I mean by metaphor is much broader than what traditional students of figurative language meant by it.” But in the context of trying to get at what’s going on in the space that we intuitively describe as metaphorical, we should almost be looking for something along the lines of “metaphor does” or “metaphor feels like”. Or perhaps refrain from the “metaphor + verb” construction altogether and just admit that we’re operating in a kind of metaphor-tasting soup. We can get at the meaning/definition by disciplined exploration and conversation.

In conclusion, metaphor is a very useful model when thinking about cognition, but it soon fails us, so we can replace it with more complex models, like that of a model. We are then left with the rather unsatisfactory notion of a metaphor of metaphor or a model of model. The sort of dissatisfaction that led Derrida and his like to the heights of obscurity. I think we can probably just about avoid deconstructionist obscurantism, but only if we adopt one of its most powerful tools, the fleeting sidelong glance (itself a metaphor/model). Just like the Necker cube, this life on the edge of metaphor is constantly shifting before our eyes. Never quite available to us perceptually all at once but readily apprehended by us in its totality. At once opaque and so, so crystal clear. Rejoice, all you parents of freshly screaming thoughts. It’s a metaphor!
Photo Credit: @Doug88888 via Compfight cc

Storms in all Teacups: The Power and Inequality in the Battle for Science Universality

The great blog Genealogy of Religion posted this video with a somewhat approving commentary:

The video started off with panache and promised some entertainment; however, I found myself increasingly annoyed as the video continued. The problem is that this is an exchange of cliches that pretends to be a fight of truth against ignorance. Sure, Storm doesn’t put forward a very coherent argument for her position, but neither does Minchin. His description of science vs. faith is laughable (being in awe at the size of the universe, my foot) and nowhere does he display any nuance nor, frankly, any evidence that he is doing anything other than parroting what he’s heard in some TV interview with Dawkins. I have much more sympathy with the Storms of this world than with these self-styled defenders of science whose only credentials are that they can remember a bit of high school physics or chemistry and have read an article by some neo-atheist in Wired. What’s worse, it’s would-be rationalists like him who do what passes for science reporting in major newspapers or on the BBC.

But most of all, I find it distasteful that he chose a young woman as his antagonist. If he wished to take on the ‘antiscience’ establishment, there are so many much better figures to target for ridicule. Why not take on the pseudo-spiritualists in the mainstream media with their ecumenical conciliatory garbage? How about taking on tabloids like Nature or Science that publish unreliable preliminary probes as massive breakthroughs? How about universities that put out press releases distorting partial findings? Why not take on economists who count things that it makes no sense to count just to make things seem scientific? Or, if he really has nothing better to do, let him lay into some super-rich creationist pastor. But no, none of these captured his imagination; instead he chose to focus his keen intellect and deep erudition on a stereotype of a young woman who’s trying to figure out a way to be taken seriously in a world filled with pompous frauds like Minchin.

The blog post commenting on the video sparked a debate about the limits of knowledge. (Note: This is a modified version of my own comment.) But while there’s a debate to be had about the limits of knowledge (what this blog is about), this is not the occasion. There is no need to adjudicate which of these two is more ‘on to something’. They’re not touching on anything of epistemological interest; they’re just playing a game of social positioning in the vicinity of interesting issues. But in this game, people like Minchin have been given a lot more chips to play with than people like Storm. It’s his follies and prejudices, and not hers, that are given a fair hearing. So I’d rather spend a few vacuous moments in her company than endorse his mindless ranting.

And as for ridiculing people for stupidity or shallow thinking, I’m more than happy to take part. But I want to have a look at those with power and prestige, because just as often as the Storms of this world, they act silly and irrationally the moment they step out of their areas of expertise. I see this all the time in language, culture and history (areas I know enough about to judge the level of insight). Here’s the most recent example that caught my eye:

It comes from a side note in a post about evolutionary foundations of violence by a self-proclaimed scientist (the implied hero in Minchin’s rant):

 It is said that the Bedouin have nearly 100 different words for camels, distinguishing between those that are calm, energetic, aggressive, smooth-gaited, or rough, etc. Although we carefully identify a multitude of wars — the Hundred Years War, the Thirty Years War, the American Civil War, the Vietnam War, and so forth — we don’t have a plural form for peace.

Well, this paragon of reason could be forgiven for not knowing what sort of nonsense this ’100 words for’ cliche is. The Language Log has spilled enough bits on why this and other snowclones are silly. But the second part of the argument is just stupid. And it is typical of a scientist blundering about the world as if the rules of evidence didn’t apply to him outside the lab and as if data not in a spreadsheet did not require a second thought. As if having a PhD in evolutionary theory meant everything else he says about humans must be taken seriously. But how can such a moronic statement be taken as anything but feeble twaddle to be laughed at and belittled? How much more cumulatively harmful are moments like these (and they are all over the place) than the socializing efforts of people like Storm from the video?

So, I should probably explain why this is so brainless. First, we don’t have a multitude of words for war (just like the Bedouin don’t have 100, or even a dozen, for a camel). We just have the one, and we have a lot of adjectives with which we can modify its meaning. And if we want to look for some that are at least equivalent to possible camel attributes, we won’t choose names of famous wars but rather things like civil war, total war, cold war, holy war, global war, naval war, nuclear war, etc. I’m sure West Point or even Wikipedia has much to say about a possible classification. And of course, all of this applies to peace in exactly the same way. There are ‘peaces’ with names like the Peace of Westphalia or an Arab-Israeli peace, and just as many attributive pairs like international peace, lasting peace, regional peace, global peace, durable peace, stable peace, great peace, etc. I went to a corpus to get some examples, but that this must be the case was obvious, and a simple Google search would give enough examples to confirm a normal language speaker’s intuition. But this ‘scientist’ had a point to make, and because he’s spent twenty years doing research on the evolution of violence, he must surely be right about everything on the subject.
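
For readers who want to try this sort of check themselves, here is a minimal sketch (not the corpus or the query referred to above) that counts the adjectives immediately preceding ‘war’ and ‘peace’ in the Brown corpus; it assumes the Python nltk package is installed and the Brown corpus data has been fetched with nltk.download('brown').

    # A minimal sketch, not the corpus query referred to above: count the
    # adjectives (Brown tagset 'JJ...') that immediately precede a given noun.
    from collections import Counter
    from nltk.corpus import brown

    def adjective_modifiers(noun, tagged_words):
        """Return a Counter of adjectives directly preceding `noun`."""
        counts = Counter()
        for (prev_word, prev_tag), (word, _) in zip(tagged_words, tagged_words[1:]):
            if word.lower() == noun and prev_tag.startswith("JJ"):
                counts[prev_word.lower()] += 1
        return counts

    tagged = list(brown.tagged_words())
    print(adjective_modifiers("war", tagged).most_common(10))
    print(adjective_modifiers("peace", tagged).most_common(10))

This only checks adjective-noun bigrams; a fuller check would also catch named wars and peaces and compound terms, but bigrams are enough to test the ordinary-speaker intuition mentioned above.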

Creative Commons License jbraine via Compfight

Now, I’m sure this guy is not an idiot. He’s obviously capable of analysis and of presenting a coherent argument. But there’s an area he chose to address about which he is about as qualified to make pronouncements as Storm and Minchin are about the philosophy of science. And what he said there is stupid, and he should be embarrassed for having said it. Should he be ridiculed and humiliated for it the way I did here? No. He made the sort of mistake everyone makes, from high school students to Nobel laureates. He thought he knew something and didn’t bother to examine his knowledge. Or he did try to examine it but didn’t have the right tools to do it. Fine. But he’s a scientist (and a man not subject to stereotypes about women) so we give him, and too many like him, a pass. But Storm, a woman who like so many of her generation uses star signs to talk about relationships and is uncomfortable with the grasping maw of classifying science chomping on the very essence of her being, she is fair game?

It’s this inequality that makes me angry. We afford one type of shallowness the veneer of respectability and rake another over the coals of ridicule and opprobrium. Not on this blog!

Creative Commons License Juliana Coutinho via Compfight

UPDATE: I was just listening to this interview with a philosopher and historian of science about why there was so much hate coming from scientists towards the Gaia hypothesis, and his summation, it seems to me, fits right in with what this post is about. He says: “When scientists feel insecure and threatened, they turn nasty.” And it doesn’t take a lot of study of the history and sociology of science to find ample examples of this. The ‘science wars’, the ‘linguistics wars’, the neo-Darwinist thought purism – the list just goes on. The worldview of scientism is totalising and has to deal with exactly the same issues as other totalising views such as monotheistic religions with constitutive ontological views or socio-economic utopianisms (e.g. neo-liberalism or Marxism).

And one of those issues is how you afford respect to, or even just maintain conversation with, people who challenge your ideological totalitarianism – or in other words, people who are willfully and dangerously “wrong”. You can take the Minchin approach of suffering in silence at parties and occasionally venting your frustration at innocent passersby, but that can lead to outbreaks of group hysteria as we saw with the Sokal hoax or one of the many moral panic campaigns.

Or you can take the more difficult journey of giving up some of your claims on totality and engaging with even those most threatening to you as human beings, the way Feyerabend did or Gould sometimes tried to do. This does not mean patiently proselytizing in the style of evangelical missionaries but rather an ecumenical approach of meeting together without denying who you are. This will inevitably involve moments where irreconcilable differences will lead to a stand on principles (cf. Is multiculturalism bad for women?) but even in those cases an effort at understanding can benefit both sides, as with the question of vaccination described in this interview. At all stages, there will be a temptation to “understand” the other person by reducing them to our own framework of humanity: psychologizing a religious person as an unsophisticate dealing with feelings of awe in the face of incomprehensible nature, or pitying the atheist for not being able to feel the love of God and reach salvation. There is no solution. No utopia of perfect harmony and understanding. No vision of lions and lambs living in peace. But acknowledging our differences and slowing down our outrage can perhaps make us into better versions of ourselves and help us stop wasting time trying to reclaim other people’s stereotypes.

Storm in a teacup. Creative Commons License: BruceW. via Compfight

UPDATE 2: I am aware of the paradox between the introduction and the conclusion of the previous update. Bonus points for spotting it. I actually hold a slightly more nuanced view than the first paragraph would imply but that is a topic for another blog post.

Do we need a gaming literacy: Literacy metaphor hack

I am a gaming semi-literate!

I was listening to the discussion of the latest BioShock game on the latest TWiT podcast when I realized that I am in fact game illiterate. I am hearing these stories and descriptions of experiences but I know I can’t access them directly without a major investment in knowledge and skill acquisition. So, this is what people with no or limited literacy must feel like in highly literacy-dependent environments. I really want to access the stories in the way they are told by the game. But I know I can’t. I will stumble, be discouraged, not have a very good time before I can have a good time. I will be a struggling gamer, in the same way that there are struggling readers.

Note: When I say game, I mean mostly a non-casual computer game such as BioShock, World of Warcraft or SimCity.

What would a game literacy entail?

What would I need to learn in order to access gaming? Literacy is composed of a multiplicity of knowledge areas and skills. I already have some of these but not enough. Roughly, I will need to get at the following:

  • Underlying cognitive skills (For reading: transforming the sight of letters into sounds or corresponding mental representations. For gaming: transforming desired effects on screen into actions on a controller)
  • Complex perceptual and productive fluency (Ability to employ the cognitive skills automatically in response to changing stimuli in multiple contexts).
  • Context-based or task-based strategies (Ability to direct the underlying skills towards solving particular problems in particular contexts. For reading: Skim text, look things up in the index, skip acknowledgements, discover the type of text, or adopt a reading speed appropriate to the type of text, etc. For gaming: Discover the type of game, gather appropriate achievements, or find hidden bonuses, etc.)
  • Metacognitive skills and strategies (Learn the terminology and concepts necessary for further learning and to achieve the appropriate aims using strategies.)
  • Socialization skills and strategies (Learn to use the skills and knowledge to make connections with other people and exploit those connections to acquire further skills and knowledge, as well as the social capital deriving from them.)

Is literacy a suitable metaphor for gaming? Matches and mismatches!

With any metaphor, it is worth exploring the mapping to see if there are sufficient similarities. In this case, I’ll look at the following areas for matches and mismatches:

  • Skill
  • Mode
  • Status
  • Socialization
  • Content
  • Purpose

Skill

Both reading/writing (I will continue to use reading for both unless I need to stress the difference) and gaming require skill that can become automatic and that takes time to acquire. People can be both “better” and “worse” at gaming and reading.

But reading is a more universal skill (although not as universal as most people think) whereas gaming skills are more genre based.

The skill at gaming can be more easily measured by game achievement. Measures of reading quality are a bit more tenuous because speed, fluency and accuracy are all contextual. However, even game achievement is somewhat relative, as in recommendations to play at normal or easy difficulty just to experience the game.

In this, gaming is more like reading than, for instance, listening to music or watching a film, which do not require any overt acquisition of skill. See Dara O’Briain’s funny bit on the differences between gaming and reading. Of course, when he says “you cannot be bad at watching a film”, we could quibble that much preparation is required for watching some films, but such training does not involve the development of underlying cognitive skills (assuming the same cultural and linguistic environment). Things are a bit more complex for some special kinds of listening to music. Nevertheless, people do talk about “media literacy”.

Mode

Reading is mostly a uni-modal experience. It is possible to read out loud or to read while listening, but ultimately reading is its own mode. Reading has an equivalent in writing which, though not a mirror-image skill, requires roughly the same underlying skills.

Gaming is a profoundly multimodal experience combining vision, sound, movement (and often reading, as well). There are even efforts to involve smell. Gaming does not have a clear expressive counterpart. The obvious expressive equivalent to writing would be game design but that clearly requires a different level of skill. However, gaming allows complex levels of self-expression within the process of game play which does not have an equivalent in reading but is not completely dissimilar to creative writing (like fanfiction).

Status

Reading is a neutral to high-status activity. The act itself is neutral but status can derive from content. Writing (expressive rather than utilitarian) is a high-status activity.

Gaming is a low-status to neutral activity. No loss of status derives from an inability to game or from not gaming, in the way that is true of reading. Some games have a less questionable status, and many games are played by people who derive high status from outside of gaming. There are emerging status-sanction systems around gaming but none have penetrated outside gaming, yet.

Socialization

Reading and writing are significant drivers of wider socialization. They are necessary to perform basic social functions and often represent gateways into important social contexts.

Gaming is only required to socialize in gaming groups. However, this socialization may become more highly desirable over time.

Content

Writing is used to encode a wide variety of content – from shopping lists to nuclear plant manuals to fiction.

Games, on the other hand, encode a much narrower range of content: primarily narrative and primarily fictional, although more non-narrative and non-fictional games may exist. There are also expository games but, so far, none that would afford easy storage of non-game information without using writing.

Purpose

Reading and writing are very general purpose activities.

Gaming, on the other hand, has a limited range of purposes: enjoyment, learning, socialization with friends, achieving status in a wider community. You won’t see a bus stop with a game instead of a timetable (although some timetables require puzzle-solving skills to decipher).

Why may game literacy be important?

As we saw, there are many differences between gaming and reading and writing. Nevertheless, they are similar enough that the metaphor of ‘game literacy’ makes sense provided we see its limitations.

Why is it important? There will be a growing generational and population divide between gamers and non-gamers. At the moment this is not very important in terms of opportunities and status but it could easily change within a generation.

Not being able to play a game may exclude people from social groups in the same way that not playing golf or not engaging in some other locally sanctioned pursuit does (e.g. World of Warcraft).

But most importantly, as new generations of game creators explore the expressive boundaries of games (new narratives, new ways of storytelling), not being able to play games may result in significant social exclusion. In the same way that a quick summary of what’s in a novel is inferior to reading the novel, films based on games will be pale imitations of playing the games.

I can easily imagine a future where the major narratives of the day will be expressed in games. In the same way that TV serials have supplanted novels as the primary medium of sharing crucial societal narratives, games can take over in the future. The inner life novel took about 150 years to mature and reigned supreme for about as long while drama and film functioned as its accompaniment. The TV serial is now solidifying its position and is about where the novel was in the 1850s. Gaming may take another couple of decades to get to a stage where it is ready as a format to take over. And maybe nothing like that will happen. But if I had a child, I’d certainly encourage them to play computer games as part of ensuring a more secure future.

Framing and constructions as a bridge between cognition and culture: Two Abstracts for Cognitive Futures

I just found out that both abstracts I submitted to the Cognitive Futures of the Humanities Conference were accepted. I was really only expecting one to get through but I’m looking forward to talking about the ideas in both.

The first talk has its foundations in a paper I wrote almost 5 years ago now about the nature of evidence for discourse. But the idea is pretty much central to all my thinking on the subject of culture and cognition. The challenge as I see it is to come up with a cognitively realistic but not cognitively reductionist account of culture. And the problem I see is that often the learning only goes one way: the people studying culture are supposed to be learning about the results of research on cognition.

Frames, scripts, scenarios, models, spaces and other animals: Bridging conceptual divides between the cognitive, social and computational

While the cognitive turn has a definite potential to revolutionize the humanities and social sciences, it will not be successful if it tries to reduce the fields investigated by the humanities to merely cognitive or by extension neural concepts. Its greatest potential is in showing continuities between the mind and its expression through social artefacts including social structures, art, conversation, etc. The social sciences and humanities have not waited on the sidelines and have developed a conceptual framework to apprehend the complex phenomena that underlie social interactions. This paper will argue that in order to have a meaningful impact, cognitive sciences, including linguistics, will have to find points of conceptual integration with the humanities rather than simply provide a new descriptive apparatus.

It is the contention of this paper that this can best be done through the concept of the frame. It seems that most disciplines dealing with the human mind have (more or less independently) developed a similar notion for dealing with the complexities of conceptualization, variously referred to as frame, script, cognitive model, or one of as many as 14 terms that can be found across the many disciplines that use it. This paper will present the different terms and identify commonalities and differences between them. On this basis, it will propose several practical ways in which the cognitive sciences can influence the humanities and also derive meaningful benefit from this relationship. I will draw on examples from historical policy analysis, literary criticism and educational discourse.

See the presentation on Slideshare.

The second paper is a bit more conceptually adventurous and tests the ideas put forth in the first one. I’m going to try to explore a metaphor for the merging of cultural studies with linguistic studies. This was done before with structuralism and ended more or less badly. For me, it ended when I read the Lynx by Lévi-Strauss and realized how empty it was of any real meaning. But I think structuralism ended badly in linguistics, as well. We can’t really understand how very basic things work in language unless we can involve culture. So even though I come at this from the side of linguistics, I’m coming at it from the perspective of a linguistics that has already been informed by the study of culture.

If Lévi-Strauss had met Langacker: Towards a constructional approach to the patterns of culture

Construction/cognitive grammar (Langacker, Lakoff, Croft, Verhagen, Goldberg) has broken the strict separation between the lexical and grammatical linguistic units that has defined linguistics for most of the last century. By treating all linguistic units as meaningful, albeit on a scale of schematicity, it has made it possible to treat linguistic knowledge as simply a part of human knowledge rather than as a separate module in the cognitive system. Central to this effort is the notion of language as an organised inventory of symbolic units that interact through the process of conceptual integration.

This paper will propose a new view of ‘culture’ as an inventory of construction-like patterns that have linguistic as well as interactional content. I will argue that using construction grammar as an analogy allows for the requisite richness and can avoid the pitfalls of structuralism. One of the most fundamental contributions of this approach is the understanding that cultural patterns, like constructions, are pairings of meaning and form and that they are organised in a hierarchically structured inventory. For instance, we cannot properly understand the various expressions of politeness without thinking of them as systematically linked units in an inventory available to members of a given culture, in the same way as syntactic and morphological relationships. As such, we can understand culture as learnable and transmittable in the same way that language is, but without reducing its variability and richness as structuralist anthropology once did.

In the same way that Jakobson’s work on structuralism across the spectrum of linguistic diversity inspired Lévi-Strauss and a whole generation of anthropological theorists, it is now time to bring the exciting advances made within cognitive/construction grammar enriched with blending theory back to the study of culture.

See the presentation on SlideShare.

Cliches, information and metaphors: Overcoming prejudice with metaphor hacking and getting it back again

Professor Abhijit Banerjee (Photo credit: kalyan3)

“We have to use cliches,” said professor Abhijit Banerjee at the start of his LSE lecture on Poor Economics. “The world is just too complicated.” He continued. “Which is why it is all the more important, we choose the right cliches.” [I'm paraphrasing here.]

This is an insight at the very heart of linguistics. Every language act we are a part of is an act of categorization. There are no simple unitary terms in language. When I say, “pull up a chair”, I’m in fact referring to a vast category of objects we refer to as chairs. These objects are not identified by any one set of features like four legs, certain height, certain ways of using it. There is no minimal set of features that will describe all chairs and just chairs and not other kinds of objects like tables or pillows. But chairs don’t stand on their own. They are related to other concepts or categories (and they are really one and the same). There are subcategories like stools and armchairs, containing categories like furniture or man-made objects and related categories like houses and shops selling objects. All of these categories are linked in our minds through a complex set of images, stories and definitions. But these don’t just live in our minds. They also appear in our conversations. So we say things like, “What kind of a chair would you like to buy?”, “That’s not real chair”, “What’s the point of a chair if you can’t sit in it?”, “Stools are not chairs.”, “It’s more of a couch than a chair.”, “Sofas are really just big plush chairs, when it comes down to it.”, “I’m using a box for a chair.”, “Don’t sit on a table, it’s not a chair.” Etc. Categories are not stable and uniform across all people, so we continue having conversations about them. There are experts on chairs, historians of chairs, chair craftsmen, people who buy chairs for a living, people who study the word ‘chair’, and people who casually use chairs. Some more than others. And their sets of stories and images and definitions related to chairs will be slightly different. And they will have had different types of conversations with different people about chairs. All of that goes into a simple word like “chair”. It’s really very simple as long as we accept the complexity for what it is. Philosophers of language have made a right mess of things because they tried to find simplicity where none exists. And what’s more where none is necessary.

But let’s get back to cliches. Cliches are types of categories. Or better still, cliches are categories with a particular type of social salience. Like categories, cliches are sets of images, stories and definitions compressed into seemingly simpler concepts that are labelled by some sort of an expression. Most prominently, it is a linguistic expression like a word or a phrase. But it could just as easily be a way of talking, a way of dressing, a way of being. What makes us likely to call something a cliche is a socially negotiated sense that the compression is somewhat unsatisfactory and that it is overused by people in lieu of an insight into the phenomenon being described. But the power of the cliche is in its ability to help us make sense of a complex or challenging phenomenon. The sense-making, though, serves our own cognitive and emotional peace. Just because we can make sense of something doesn’t mean we get the right end of the stick. And we know that, which is why we are wary of cliches. But challenging every cliche would be like challenging ourselves every time we looked at a chair. It can’t be done. Which is why we have social and linguistic coping mechanisms like “I know it’s such a cliche.” “It’s a cliche but in a way it’s true.” “Just because it’s a cliche, doesn’t mean, it isn’t true.” Just try Googling: “it’s a cliche *”

So we are at once locked into cliches and struggling to overcome them. Like “chair” the concept of a “cliche” as we use it is not simple. We use it to label words, phrases, people. We have stories about how to rebel against cliches. We have other labels for similar phenomena with different connotations such as “proverbs”, “sayings”, “prejudices”, “stereotypes”. We have whole disciplines studying these like cognitive psychology, social psychology, political science, anthropology, etc. And these give us a whole lot of cliches about cliches. But also a lot of knowledge about cliches.

The first one is exactly what this post started with. We have to use cliches. It’s who we are. But they are not inherently bad.

Next, we challenge cliches as much as we use them. (Well, probably not as much, but a lot.) This is something I’m trying to show through my research into frame negotiation. We look at concepts (the compressed and labelled nebulas of knowledge) and decompress them in different ways and repackage them and compress them into new concepts. (Sometimes this is called conceptual integration or blending.) But we don’t just do this in our minds. We do it in public and during conversations about these concepts.

We also know that unwillingness to challenge a cliche can have bad outcomes. Cliches about certain things (like people or types of people) are called stereotypes and particular types of stereotypes are called prejudices. And prejudices by the right people against the right kind of other people can lead to discrimination and death. Prejudice, stereotype, cliche. They are the same kind of thing presented to us from different angles and at different magnitudes.

So it is worth our while to harness the cliche negotiation that goes on all the time anyway and see if we can use it for something good. That’s not a certain outcome. The medieval inquisitions, anti-heresy campaigns, racism, slavery and genocides are all outcomes of negotiations of concepts. We mostly only know about their outcomes, but a closer look will always reveal dissent and a lot of soul searching. And at the heart of such soul searching is always a decompression and recompression of concepts (conceptual integration). But it does not work in a vacuum. Actual physical or economic power plays a role. Conformance to communal expectations. Personal likes or dislikes. All of these play a role.

George Lakoff (Photo credit: Wikipedia)

So what chance have we of getting the right outcome? Do we even know what is the right outcome?

Well, we have to pick the right cliches says Abhijit Banerjee. Or we have to frame concepts better says George Lakoff. “We have to shine the light of truth” says a cliche.

“If you give people content, they’re willing to move away from their prejudices. Prejudices are partly sustained by the fact that the political system does not deliver much content.” says Banerjee. Prejudices matter in high-stakes contexts. And they are the result of us not challenging the right cliches in the right ways at the right time.

It is pretty clear from research in social psychology, from Milgram on, that giving people information will challenge their cliches, but only as long as you also give them sanction to challenge the cliches. Information on its own does not always seem to be enough. Sometimes the contrary information even seems to reinforce the cliche (as we’re learning from newspaper corrections).

This is important. You can’t fool all of the people all of the time. Even if you can fool a lot of them a lot of the time. Information is a part of it. Social sanction of using that information in certain ways is another part of it. And this is not the province of the “elites”. People with the education and sufficient amount of idle time to worry about such things. There’s ample research to show that everybody is capable of this and engaged in these types of conceptual activities. More education seems to vaguely correlate with less prejudice but it’s not clear why. I also doubt that it does in a very straightforward and inevitable way (a post for another day). It’s more than likely that we’ve only studied the prejudices the educated people don’t like and therefore don’t have as much.

Banerjee draws the following conclusion from his work uncovering cliches in development economics:

“Often we’re putting too much weight on a bunch of cliches. And if we actually look into what’s going on, it’s often much more mundane things. Things where people just start with the wrong presumption, design the wrong programme, they come in with their own ideology, they keep things going because there’s inertia, they don’t actually look at the facts and design programmes in ignorance. Bad things happen not because somebody wants bad things to happen but because we don’t do our homework. We don’t think hard enough. We’re not open minded enough.”

It sounds very appealing. But it’s also as if he forgot the point he started out with. We need cliches. And we need to remember that out of every challenge to a cliche arises a new cliche. We cannot go around the world with our concepts all decompressed and flapping about. We’d literally go crazy. So every challenge to a cliche (just like every paradigm-shifting Kuhnian revolution) is only the beginning phase of the formation of another cliche, stereotype, prejudice or paradigm (a process well described in Orwell’s Animal Farm, which has in turn become a cliche of its own). It’s fun listening to Freakonomics radio to see how all the cliche busting has come to establish a new orthodoxy. The constant reminders that if you see things as an economist, you see things other people don’t see. Kind of a new witchcraft. That’s not to say that Freakonomics hasn’t provided insights to challenge established wisdoms (a term arising from another perspective on a cliche). It most certainly has. But it hasn’t replaced them with “a truth”, just another necessary compression of a conceptual and social complex. During the moments of decompression and recompression, we have opportunities for change, however brief. And sometimes it’s just a memory of those events that lets us change later. It took over 150 years for us to remember the French revolution and make of it what we now think of as democracy with a tradition stretching back to ancient Athens. Another cliche. The best of a bad lot of systems. A whopper of a cliche.

So we need to be careful. Information is essential when there is none. A lot of prejudice (like some racism) is born simply of not having enough information. But soon there’s plenty of information to go around. Too much, in fact, for any one individual to sort through. So we resort to complex cliches. And the cliches we choose have to do with our in-groups, chains of trust, etc. as much as they do with some sort of rational deliberation. So we’re back where we started.

Humanity is engaged in a neverending struggle of personal and public negotiation of concepts. We’re taking them apart and putting them back together. Parts of the process happen in fractions of a second in individual minds, parts of the process happen over days, weeks, months, years and decades in conversations, pronouncements, articles, books, polemics, laws, public debates and even at the polling booths. Sometimes it looks like nothing is happening and sometimes it looks like everything is happening at once. But it’s always there.

So what does this have to do with metaphors, and can a metaphor hacker do anything about it? Well, metaphors are part of the process. The same process that lets us make sense of metaphors lets us negotiate cliches. Cliches are like little analogies, and it takes a lot of cognition to understand them, take them apart and make them anew. I suspect most of that cognition (and it’s always discursive, social cognition) is very much the same as what we know relatively well from metaphor studies.

But can we do anything about it? Can we hack into these processes? Yes and no. People have always hacked collective processes by inserting images, stories and definitions into the public debate through advertising, following talking points or even things like push-polling. And people have manipulated individuals through social pressure, persuasion and lies. But none of it ever seems to have a lasting effect. There are simply too many conceptual purchase points to lean on in any cliche to ever achieve complete uniformity forever (even in the most totalitarian regimes). In an election, you may only need to sway the outcome by a few percent. If you have military or some other power, you only need to get willing compliance from a sufficient number of people to keep the rest in line through a mix of conceptual and non-conceptual means. Some such social contracts last for centuries, others for decades and some for barely years or months. In such cases, even knowing how these processes work is not much better than knowing how continental drift works. You can use it to your advantage but you can’t really change it. You can and should engage in the process and try to nudge the conversation in a certain way. But there is no conceptual template for success.

But as individuals, we can certainly do quite a bit to monitor our own cognition (in the broadest sense). But we need to choose our battles carefully. Use cliches but monitor what they are doing for us. And challenge the right ones at the right time. It requires a certain amount of disciplined attention and disciplined conversation.

This is not a pessimistic message, though. As I’ve said elsewhere, we can be masters of our own thoughts and feelings. And we have the power to change how we see the world and we can help others along with how they see the world. But it would be foolish to expect the world to be changed beyond all recognition just through the power of the mind. In one way or another, it will always look like our world. But we need to keep trying to make it look like the best possible version of our world. But this will not happen by following some pre-set epistemological route. Doing this is our human commitment. Our human duty. And perhaps our human inevitability. So, good luck to us.


Pseudo-education as a weapon: Beyond the ridiculous in linguistic prescriptivism

Teacher in primary school in northern Laos (Photo credit: Wikipedia)

Most of us are all too happy to repeat clichés about education to motivate ourselves and others to engage in this liminal ritual of mass socialization. One such phrase is “knowledge is power”. It is used to refer not just to education, of course, but to all sorts of intelligence gathering from business to politics. We tell many stories of how knowing something made the difference, from knowing a way of making something work to knowing a secret only the hero or villain is privy to. But in education, in particular, it is not just knowing that matters to our tribe but also the display of knowing.

The more I look at education, the more I wonder how much of what is in the curriculum is about signaling rather than true need of knowledge. Signaling has been used in economics of education to indicate the complex value of a university degree but I think it goes much deeper. We make displays of knowledge through the curriculum to make the knowledge itself more valuable. Curriculum designers in all areas engage in complex dances to show how the content maps onto the real world. I have called this education voodoo, other people have spoken of cargo cult education, and yet others have talked about pseudo teaching. I wrote about pseudo teaching when I looked at Niall Ferguson‘s amusing, I think I called it cute, lesson plan of his own greatness. But pseudo teaching only describes the activities performed by teachers in the mistaken belief that they have real educational value. When pseudo teaching relies on pseudo content, I think we can talk more generally about “pseudo education”.

We were all pseudo-educated on a number of subjects. History, science, philosophy, etc. In history lessons, the most cherished “truths” of our past are distorted on a daily basis (see Lies My Teacher Told Me). From biology, we get to remember misinformation about the theory of evolution starting from attributing the very concept of evolution to Darwin or reducing natural selection to the nonsense of survival of the fittest. We may remember the names of a few philosophers but it rarely takes us any further than knowing winks at a Monty Python sketch or the mouthing of unexamined platitudes like “the unexamined life is not worth living.”

That in itself is not a problem. Society, despite the omnipresent alarmist tropes, is coping quite well with pseudo-education. Perhaps, it even requires it to function because “it can’t handle the truth”. The problem is that we then judge people on how well they are able to replicate or respond to these pseudo-educated signals. Sometimes, these judgments are just a matter of petty prejudice but sometimes they could have an impact on somebody’s livelihood (and perhaps the former inevitably leads to the latter in aggregate).

Note: I have looked at some history and biology textbooks and they often offer a less distorted portrayal of their subject than what seems to be the outcome in public consciousness. Having the right curriculum and associated materials, then, doesn’t seem to be sufficient to avoid pseudo-education (if indeed avoiding it is desirable).

The one area where pseudo-education has received a lot of attention is language. Since time immemorial, our ways of speaking have served to identify us with one group or layer of society or another. And from its very beginning, education sought to play a role in slotting its charges into linguistic groups with as high a prestige as possible (or rather, as appropriate). And even today, in academic literature we see references to the educated speaker as an analytic category. This is not a bad thing. Education correlates with exposure to certain types of language and engagement with certain kinds of speech communities. It is not the only way to achieve linguistic competence in those areas but it is the main way for the majority. But becoming an “educated speaker” in this sense is mostly a by-product of education. A sufficient amount of the curriculum and classroom instruction is aimed in this direction to count for something, but most students acquire the in-group ways of speaking without explicit instruction (disadvantaging those who would benefit from it). But probably a more salient output of language education is supposed knowledge about language (as opposed to knowledge of language).

Here students are expected not only to speak appropriately but also to know how this “appropriate language” works. And here is where most of what happens in schools can be called pseudo-education. Most teachers don’t really have any grasp of how language works (even those who took intro to linguistics classes). They are certainly not aware of the more complex issues around the social variability of language or its pragmatic dimension. But even in simple matters like grammar and usage, they are utterly clueless. This is often blamed on past deficiencies of the educational system where “grammar was not taught” to an entire generation. But judging by the behavior of previous generations who received ample instruction in grammar, that is not the problem. Their teachers were just as inept at teaching about language as they are today. They might have been better at labeling parts of speech and their tenses but that’s about it. It is possible that in the days of yore, people complaining about the use of the passive were actually more able to identify passive constructions in the text but it didn’t make that complaint any less inaccurate (Orwell made a right fool of himself when it turned out that he uses more passives than is the norm in English despite kvetching about their evil).

No matter the content of the school curriculum and the method of instruction, “educated” people go about spouting nonsense when it comes to language. This nonsense seems to have its origins in half-remembered injunctions of their grade school teacher. And because the prime complainers are likely to either have been “good at language” or envied the teacher’s approbation of those who were described as being “good at language”, what we end up with in the typical language maven is a mishmash of linguistic prejudice and an unjustified feeling of smug superiority. Every little linguistic label that a person can remember is then trotted out as a badge of honor regardless of how good that person is at deploying it.

And those who spout the loudest get a reputation as the “grammar experts” and everybody else who preemptively admits that they are “not good at grammar” defers to them and lets themselves be bullied by them. The most recent case of such bullying was a screed by an otherwise intelligent person in a position of power who decided that he would no longer hire people with bad grammar.

This prompted me to issue a rant on Google Plus, repeated below:

The trouble with pseudo educated blowhards complaining about grammar, like +Kyle Wien, is that they have no idea what grammar is. 90% of the things they complain about are spelling problems. The rest is a mishmash of half-remembered objections from their grade school teacher who got them from some other grammar bigot who doesn’t know their tense from their time.

I’ve got news for you Kyle! People who spell they’re, there and their interchangeably know the grammar of their use. They just don’t differentiate their spelling. It’s called homophony, dude, and English is chock full of it. Look it up. If your desire rose as you smelled a rose, you encountered homophony. Homophony is a ubiquitous feature of all languages. And equally all languages have some high profile homophones that cause trouble for spelling Nazis but almost never for actual understanding. Why? Because when you speak, there is no spelling.

Kyle thinks that what he calls “good grammar” is indicative of attention to detail. Hard to say since he, presumably always perfectly “grammatical”, failed to pay attention to the little detail of the difference between spelling and grammar. The other problem is that I’m sure that Kyle and his ilk would be hard pressed to list more than a dozen or so of these “problems”. So his “attention to detail” should really be read as “attention to the few details of language use that annoy Kyle Wien”. He claims to have noticed a correlation in his practice but forgive me if I don’t take his word for it. Once you have developed a prejudice, no matter how outlandish, it is dead easy to find plenty of evidence in its support (not paying attention to any of the details that disconfirm it).

Sure, there’s something to the argument that spelling mistakes in a news item, a blog post or a business newsletter will have an impact on its credibility. But hardly enough to worry about. Not that many people will notice and those who do will have plenty of other cues to make a better informed judgment. If a misplaced apostrophe is enough to sway them, then either they’re not convinced of the credibility of the source in the first place, or they’re not worth keeping as a customer. Journalists and bloggers engage in so many more significant pursuits that damage their credibility, like fatuous and unresearched claims about grammar, that the odd it’s/its slip up can hardly make much more than (or is it then) a dent.

Note: I replaced ‘half-wit’ in the original with ‘blowhard’ because I don’t actually believe that Kyle Wien is a half-wit. He may not even be a blowhard. But, you can be a perfectly intelligent person, nice to kittens and beloved by co-workers, and be a blowhard when it comes to grammar. I also fixed a few typos, because I pay attention to detail.

My issue is not that I believe that linguistic purism and prescriptivism are in some way anomalous. In fact, I believe the exact opposite. I think, following a brilliant insight by my linguistics teacher, that we need to think of these phenomena as integral to our linguistic competence. I doubt that there is a linguistic community of any size above 3 that doesn’t enact some form of explicit linguistic normativity.

But when pseudo-knowledge about language is used as an instrument of power, I think it is right to call out the perpetrators and try to shame them. Sure, linguists laugh at them, but I think we all need to follow the example of the Language Log and expose all such examples to public ridicule. Countermand the power.

Post Script: I have been similarly critical of the field of Critical Discourse Analysis which, while based on an accurate insight about language and power, in my view goes on to abuse the power that stems from knowledge about language to clobber its opponents. My conclusion has been that if you want to study how people speak, study it for its own sake, and if you want to engage with the politics of what they say, do that on political terms not on linguistic ones. That doesn’t mean that you shouldn’t point out if you feel somebody is using language in manipulative or misleading ways, but if you need the apparatus of a whole academic discipline to do it, you’re doing something wrong.


Who-knows-what-how stories: The scientific and religious knowledge paradox


I never meant to listen to this LSE debate on modern atheism because I’m bored of all the endless moralistic twaddle on both sides but it came on on my MP3 player and before I knew it, I was interested enough not to skip it. Not that it provided any Earth-shattering new insights but on balance it had more to contribute to the debate than a New Atheist diatribe might. And there were a few stories about how people think that were interesting.

The first speaker was the well-known English cleric, Giles Fraser who regaled the audience with his conversion story starting as an atheist student of Wittgenstein and becoming a Christian who believes in a “Scripture-based” understanding of Christianity. The latter is not surprising given how pathetically obsessed Wittgensteinian scholars are with every twist and turn of their bipolar master’s texts.

But I thought Fraser’s description of how he understands his faith in contrast to his understanding of the dogma was instructive. He says: “Theology is faith seeking understanding. Understanding is not the basis on which one has faith but it is what one does to try to understand the faith one has.”

In a way, faith is a kind of axiomatic knowledge. It’s not something that can or need be usefully questioned but it is something on which to base our further dialog. Obviously, this cannot be equated with religion but it can serve as a reminder of the kind of knowledge religion works off. And it is only in some contexts that this axiomatic knowledge needs be made explicit or even just pointed to – this only happens when the conceptual frames are brought into conflict and need to be negotiated.

Paradox of utility vs essence and faith vs understanding

This kind of knowledge is often contrasted with scientific knowledge. Knowledge that is held to be essentially superior due to its utility. But if we look at the supporting arguments, we are faced with a kind of paradox.

The paradox is that scientists claim that their knowledge is essentially different from religious (and other non-scientific) knowledge but the warrant for the special status claim of this knowledge stems from the method of its acquisition rather than its inherent nature. They cite falsificationist principles as foundations of this essential difference and peer review as their practical embodiment (strangely making this one claim immune from the dictum of falsification – of which, I believe, there is ample supply).

But that is not a very consistent argument. The necessary consequences of the practice of peer review fly in the face of the falsificationist agenda. The system of publication and peer review that is in place (and that will always emerge) is guaranteed to minimize any fundamental falsificationism of the central principles. Meaning that falsification happens more along Kuhnian rather than Popperian lines. Slowly, in bursts and with great gnashing of teeth and destroying of careers.

Now, religious knowledge does not cite falsificationism as the central warrant of its knowledge. Faith is often given as the central principle underlying religious knowledge and engagement with diverse exegetic authorities as the enactment of this principle. (Although, crucially, this part of religion is a modern invention brought about by many frame negotiations. For the most part, when it comes to religion, faith and knowing are coterminous. Faith determines the right ways of knowing but it is only rarely negotiated.)

But in practice the way religious knowledge is created, acquired and maintained is not very different from scientific knowledge. Exegesis and peer review are very similar processes in that they both refer to past authorities as sources of arbitration for the expression of new ideas.

And while falsificationism is (perhaps with the exception of some Buddhist or Daoist schools) never the stated principle of religious hermeneutics, in principle, it is hard to describe the numerous religious reforms, movements and even workaday conversions as anything but falsificationist enterprises. Let’s just look at the various waves of monastic movements from Benedictines to Dominicans or Franciscans to Jesuits. They all struggled with reconciling the current situation with the evidence (of the interpretation of scripture) and based their changed practices on the result of this confrontation.

And what about religious reformers like Hus, Luther or Calvin? Wasn’t their intellectual enterprise in essence falsificationist? Or St Paul or Arianism? Or scholastic debates?

‘But religion never invites scrutiny, it never approaches problems with an open mind,’ crow the new atheists. But neither does science at the basic level. Graduate students are told to question everything but they soon learn that this questioning is only good as long as it doesn’t disrupt the research paradigm. Their careers and livelihoods depend on not questioning much of anything. In practice, this is not very different from the personal reading of the Bible – you can have a personal relationship with God as long as it’s not too different from other people’s personal relationships.

The stories we know by

One of the most preposterous pieces of scientific propaganda is Dawkins’ story about an old professor who went to thank a young researcher for disproving his theory. I recently heard it trotted out again on an episode of Start The Week where it was used as proof positive of how science is special – this time as a way of establishing its superiority over political machinations. It may have happened but it’s about as rare as a priest being convinced by an argument about the non-existence of God. The natural and predominant reactions to research “disproving” a theory are to belittle it, deny its validity, ostracise its author, deny its relevance or simply ignore it (a favourite pastime of Chomskean linguists).

So in practice, there doesn’t seem to be that much of a difference between how scientific and religious knowledge work. They both follow the same cognitive and sociological principles. They both have their stories to tell their followers.

Conversion stories are very popular in all movements and they are always used on both sides of the same argument. There are stories about conversions from one side to the other in the abortion/anti-abortion controversy, environmental debates, diet wars, alternative medicine, and they are an accompanying feature of pretty much any scientific revolution – I’ve even seen the lack of prominent conversions cited as an argument (a bad one) against cognitive linguistics. So a scientist giving examples of the formerly devout seeing the light through a rational argument is just enacting a social script associated with the spreading of knowledge. It’s a common frame negotiation device, not evidence of anything about the nature of the knowledge to whose profession the person was converted.

There are other types of stories about knowledge that scientists like to talk about as much as people of religion. There are stories of the long arduous path to knowledge and stories of mystical gnosis.

The path-to-knowledge stories are told when people talk about the training it takes to achieve a kind of knowledge. They are particularly popular about medical doctors (through medical dramas on TV) but they also exist about pretty much any profession including priests and scientists. These stories always have two components, a liminal one (about high jinks students got up to while avoiding the administration’s strictures and lovingly mocking the crazy demanding teachers) and a sacral one (about the importance of hard learning that the subject demands). These stories are, of course, based on the sorts of things that happen. But their telling follows a cultural and discursive script. (It would be interesting to do some comparative study here.)

The stories of mystical gnosis are also very interesting. They are told about experts, specialists who achieve knowledge that is too difficult for normal mortals to master. In these stories people are often described as losing themselves in the subject, setting themselves apart, or becoming obsessively focused. This is sometimes combined or alternated with descriptions of achieving sudden clarity.

People tell these stories about themselves as well as about others. When told about others, these stories can be quite schematic – the absent-minded professor, the geek/nerd, the monk in the library, the constantly practising musician (or even boxer). When told about oneself, the sudden light stories are very common.

Again, these stories reflect a shared cultural framing of people’s experiences of knowledge in the social context. But they cannot be given as evidence of the special nature of one kind of knowledge over another. Just like stories about Gods cannot be taken as evidence of the superiority of some religions.

Utility and essence revisited

But, the argument goes, scientific knowledge is so useful. Just look at all the great things it brought to this world. Doesn’t that mean that its very “essence” is different from religious knowledge?

Here, I think, the other discussant in the podcast, the atheist philosopher, John Gray can provide a useful perspective: “The ‘essence’ is an apologetic invention that someone comes up with later to try and preserve the core of an idea [like Christianity or Marxism] from criticism.”

In other words this argument is also a kind of story telling. And we again find these utility and essence stories in many areas following remarkably similar scripts. That does not mean that the stories are false or fabricated or that what they are told about is in some way less valid. All it means is that we should be skeptical about arguments that rely on them as evidence of exclusivity.

Ultimately, looking for the “essence” of any complex phenomenon is always a fool’s errand. Scientists are too fond of their “magic formula” stories where incredibly complex things are described by simple equations like E = mc² or zₙ₊₁ = zₙ² + c. But neither Einstein’s nor Mandelbrot’s little gems actually define the essence of their respective phenomena. They just offer a convenient way of capturing some form of knowledge about it. E = mc² will be found on T-shirts and the Mandelbrot set on screen savers of people who know little about their relationship to the underlying ideas. They just know they’re important and want to express their allegiance to the movement. Kind of like the people who feel it necessary to proclaim that they “believe” in the theory of evolution. Of course, we could also take some gnostic stories about what it takes to “really” understand these equations – and they do require some quite complex mix of expertise (a lot more complex than the stories would let on).
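
To make the contrast concrete, here is a minimal sketch (in Python, purely illustrative – nothing in the argument depends on it) of the Mandelbrot iteration zₙ₊₁ = zₙ² + c. The function name and the iteration cap are my own choices; being able to write these few lines down is obviously not the same as understanding the dynamics they generate, which is rather the point:

```python
# A minimal, illustrative sketch of the Mandelbrot iteration z_{n+1} = z_n^2 + c.
# Writing the formula down is trivial; the "essence" of the set is not in these lines.

def appears_in_mandelbrot_set(c: complex, max_iter: int = 100) -> bool:
    """Return True if c seems to stay bounded under repeated z -> z**2 + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2, the orbit is guaranteed to escape
            return False
    return True

print(appears_in_mandelbrot_set(0j))      # True: the origin stays bounded
print(appears_in_mandelbrot_set(1 + 0j))  # False: 0, 1, 2, 5, ... escapes quickly
```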

But we still haven’t dealt with the question of utility. Scientific knowledge derives its current legitimacy from its connection to technology and religious knowledge makes claims on the realms of morality and human relationships. They clash because both also have views on each other’s domains (science on human relationships and religion on origins of the universe). Which is one of the reasons why I don’t think that the non-overlapping magisteria idea is very fruitful (as much as I prefer Gould over the neo-Darwinists).

Here I have to agree with Dawkins and the new atheists. There’s no reason why some prelate should have more of a say on morality or relationships than anyone else. Reading a holy book is a qualification for prescribing ritual not for the arbitration of morality. But Dawkins should be made to taste his own medicine. There’s no reason why a scientist’s view on the origin of the universe should hold any sway over any theologian’s. The desire of the scientist to provide a cosmogony for the atheist crowd is a strange thing. It seeks to make questions stemming from a non-scientific search for broader meaning consistent with the scientific need for contiguous causal explanations. But the Big Bang or some sort of Primordial Soup don’t provide internal consistency to the scientific enterprise. They leave as many questions open as they give answers to.

It seems that the Augustinian and later scholastic solution was to posit a set of categories external to the ones which are accessible to us. Giles Fraser cites Thomas Aquinas’ example of counting everything in the world and not finding God. That would still be fine because God is not a thing to be counted, or better still, an entity that fits within our concept of things and the related concept of counting. Or in other words, God created our world with the categories available to the cognizing humans, not in those categories. Of course, to me, that sounds like a very good argument for atheism but it is also why a search for some Grand Unified Theory leaves me cold.

Epistemology as politics

It is a problem that the better philosophers from Parmenides to Wittgenstein tried to express in one way or another. But their problems were in trying to draw practical conclusions. There is no reason why the two political factions shouldn’t have a good old fight over the two overlapping magisteria. Because the debate about utility is a political one, not an epistemological one. Just because I would rather go to a surgeon than a witch doctor doesn’t mean that the former has tapped into some superior form of cognition. We make utility judgements of a similar nature even within these two domains but we would not draw essentialist epistemological conclusions based on them. People choose their priests and they choose their doctors. Is a bad doctor better than a good healer? I would imagine that there are a whole range of ailments where some innocuous herb would do more good than a placebo with side effects.

But we can consider a less contentious example. Years ago I was involved with TEFL teacher training and hiring. We ran a one-month starter course for teachers of English as a foreign language. And when hiring teachers I would always much rather hire one of these people than somebody with an MA in TESOL. The teachers with the basic knowledge would often do a better job than those with superior knowledge. I would say that when these two groups would talk about teaching, their understanding of it would be very different. The MAs would have the research, evidence and theory. The one-month trainees would have lots of useful techniques but little understanding of how or why they worked. Yet, there seemed to be an inverse relationship between the “quality” of knowledge and the practical ability of the teacher (or at best no predictable relationship). So I would routinely recommend these “witch-teachers” over the “surgeon-teachers” to schools for hiring because I believed they were better for them.

There are many similar stories where utility and knowledge don’t match up. Again, that doesn’t mean that we should take homeopathy seriously but it means that the foundations of our refusal to accept homeopathy cannot also be the foundations of placing scientific knowledge outside the regular epistemological constraints of all humanity.

Epistemology, as I have said elsewhere, is much better explained as ethics.

Thus endeth the blog post.


The death of a memory: Missing metaphors of remembering and forgetting?


Memories

I have forgotten a lot of things in my life. Names, faces, numbers, words, facts, events, quotes. Just like for anyone, forgetting is as much a part of my life as remembering. Memories short and long come and go. But only twice in my life have I seen a good memory die under suspicious circumstances.

Both of these were good reliable everyday memories as much declarative as non-declarative. And both died unexpectedly without warning and without reprieve. They were both memories of entry codes but I retrieved both in different ways. Both were highly contextualised but each in a different way.

The first time was almost 20 years ago (in 1993) and it was the PIN for my first bank card (before they let you change them). I’d had it for almost two years by then using it every few days for most of that period. I remembered it so well that even after I’d been out of the country for 6 months and not even thinking about it once, I walked up to an ATM on my return and without hesitation, typed it in. And then, about 6 months later, I walked up to another ATM, started typing in the PIN and it just wasn’t there. It was completely gone. I had no memory of it. I knew about the memory but the actual memory completely disappeared. It wasn’t a temporary confusion, it was simply gone and I never remembered it again. This PIN I remembered as a number.

The second death occurred just a few days ago. This time, it was the entrance code to a building. But I only remembered it as a shape on the keypad (as I do for most numbers now). In the intervening years, I’ve memorised a number of PINs and entrance codes. Most I’ve forgotten since, some I remember even now (like the PIN of a card that expired a year ago but I’d only used once every few months for many years). Simply, the normal processes you’d expect of memory. But this one, I’ve been using for about a year since they’d changed it from the previous one. About five months ago I came back from a month-long leave and I remembered it instantly. But three days ago, I walked up to the keypad and the memory was gone. I’d used the keypad at least once if not twice that day already. But that time I walked up to the keypad and nothing. After a few tries I started wondering if I might be typing in the old code from before the change so I flipped the pattern around (I had a vague memory of once using it to remember the new pattern) and it worked. But the working pattern felt completely foreign. Like one I’d never typed in before. I suddenly understood what it must feel like for someone to recognize their loved one but at the same time be sure that it’s not them. I was really discomfited by this impostor keypad pattern. For a few moments, it felt really uncomfortable – almost an out of body (or out of memory) experience.

The one thing that set the second forgetting apart from the first one was that I was talking to someone as it happened (the first time I was completely alone on a busy street – I still remember which one, by the way). It was an old colleague who visited the building and was asking me if I knew the code. And seconds after I confidently declared I did, I didn’t. Or I remembered the wrong one.

So in the second case, we could conclude that the presence of someone who had been around when the previous code was being used triggered the former memory and overrode the latter one. But the experience of complete and sudden loss, I recall vividly, was the same. None of my other forgettings were so instant and inexplicable. And I once forgot the face of someone I’d just met as soon as he turned around (which was awkward since he was supposed to come back in a few minutes with his car keys – so I had to stand in the crowd looking expectantly at everyone until the guy returned and nodded to me).

What does this mean for our metaphors of memory based on the various research paradigms? None seem to apply. These were not repressed memories associated with traumatic events (although the forgetting itself was extremely mildly traumatic). These were not quite declarative memories nor were they exactly non-declarative. They both required operations in working memory but were long-term. They were both triggered by context and had a muscle-memory component. But the first one I could remember as a number whereas the second one only as a shape and only on that specific keypad. But neither were subject to long-term decay. In fact, both proved resistant to decay, surviving long or longish periods of disuse. They both were (or felt) as solid memories as my own name. Until they were there no more. The closest introspective analogy seems to me to be Luria’s man who remembered too much, who once forgot a white object because he placed it against a white background in his memory, which made it disappear.

The current research on memory seems to be converging on the idea that we reconstruct our memories. Our brains are not just some stores with shelves from which memories can be plucked. Although memories are highly contextual, they are not discrete objects encoded in our brain as files on a hard drive. But for these two memories, the hard drive metaphor seems more appropriate. It’s as if a tiny part of my brain that held those memories was corrupted and they simply winked out of existence at the flip of a bit. Just like a hard drive.
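
Purely to illustrate that metaphor (this is my sketch, not a claim about how brains store anything, and the entry code in it is made up): flip a single bit in a stored number and the old value is simply gone, with nothing gradual about the loss.

```python
# Illustration of the hard-drive metaphor only: one flipped bit and the stored
# value is a different value outright - no gradual decay, no partial recall.

stored_code = 4417                    # a made-up entry code standing in for the memory
corrupted = stored_code ^ (1 << 10)   # flip a single bit (bit 10)

print(stored_code, corrupted)         # 4417 5441 - the original is simply not there any more
```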

There’s a lot of research on memory loss, decay and reliability but I don’t know of any which could account for these two deaths. We have many models of memory which can be selectively applied to most memory related events but these two fall between the cracks.

All the research I could find is either on sudden specific-event-induced amnesia (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1961972/?page=1) or senescence (http://brain.oxfordjournals.org/content/89/3/539.extract). In both cases, there are clear causes of the memory loss and the loss is much more total (complete events or entire identity). I could find nothing about the sudden loss of a specific reliable memory in a healthy individual (given that it only happened twice 18 years apart – I was 21 when it happened first – I assume this is not caused by any pathology in my brain) not precipitated by any traumatic (or other) event. Yet, I suspect this happens all the time… So what gives?


Religion, if it exists, is negotiation of underdetermined metaphoric cognition [UPDATED]


Preamble

Richard Buchta - Portrait of a Zande witchdoctor

Image via Wikipedia

I am an old atheist and a new agnostic. I don’t believe in God in the old-fashioned Russellian way – if I don’t believe in Krishna, Zeus, water sprites or the little teapot orbiting the Sun, I don’t believe in God and the associated supernatural phenomena (monotheism my foot!). However, I am agnostic about nearly everything else – and everything else, in the new atheist way, is pretty much science and reason. If history is any judge (and it is) most of what we believe to be scientific truths today is bunk. This makes me feel not superior at all to people of faith. Sure I think what they believe is a stupid and irrational thing to believe, but I don’t think they are stupid or irrational people to believe it. The smartest people believe the most preposterous things – just look at Newton, Chomsky or Dawkins.

But one thing I’m pretty certain about is religion. Or rather, I’m pretty certain it does not exist. It is in many ways an invention of the Enlightenment and just like equality and brotherhood it only makes sense until you see the first person winding up the guillotine. Religion only makes sense if you want to set a certain set of beliefs and practices aside, most importantly to deprive their holders of power and legitimacy.

But is it a useful concept for deliberation about human universals? I think on balance it is not. Religion is a collection of stated beliefs, internal beliefs and public and private practices. In other words, religion is a way of life for a community of people. Or to be etymological about it, it is what binds the community together. The nature of the content of those beliefs is entirely irrelevant to the true human universal: a shared collection of beliefs and practices develops over a short amount of time inside any group of people. And when I say beliefs, I mean all explicit and implicit knowledge and applied cognition.

In this sense, modern secular humanism is just as much a religion as rabid evangelicalism.

On the mundane nature of sacred thought

So, why, the scientist asks, do all cultures develop a knowledge system that includes belief in the supernatural? That’s because they don’t. For instance, as Geertz so beautifully described in his reinterpretation of the Azande, witchcraft isn’t supernatural. It is the natural explanation after everything else has failed. We keep forgetting that until Einstein, everybody believed in this (as Descartes pointed out) supernatural force called gravity that could somehow magically transmit motion across vast distances. And now (as Angel and Demetis point out) we believe in magical sheets that make gravity all nice and natural. Or maybe strings? Give me a break!

What about the distinction between the sacred and the mundane, you ask? Well, that obviously exists, including the liminality between them. But sacred/mundane is not limited to anything supernatural and magical – just look at the US treatment of the flag or citizenship. In fact, even the most profoundly sacred and mystical has a significant mundane dimension necessitated by its logistics.

There are no universals of faith. But there are some strong tendencies among the world’s cultures: Ancestor worship, belief in superhuman and non-human (often invisible, sometimes disembodied) agents, sympathetic magic and ritual (which includes belief in empowered and/or sapient substances and objects). This is combined with preserving and placating personal and collective practices.

All of the above describes western atheists as much as the witchcraft believing Azande. We just define the natural differently. Our beliefs in the power of various pills and the public professions of faith in the reality of evolution or the transformative nature of the market fall under the above just as nicely as the rain dance. Sure I’d much rather go to a surgeon with an inflamed appendix than a witch doctor but I’d also much rather go to a renowned witch doctor than an unknown one if that was my only choice. Medicine is simply witchcraft with better peer review.

Leaving the merits of the modern world aside. The question remains why do humans seem to converge on similar content of their beliefs? Helen de Cruz and the commenters on her post about the naturalness of religious belief: http://www.cognitionandculture.net/Helen-De-Cruz-s-blog/does-atheism-challenge-the-naturalness-of-religious-belief.html give a great overview of the current debate on the topic.

They pretty much put to rest some of the evolutionary notions and the innateness of mind/body dualism. I particularly like the proposition Helen de Cruz made building on Pascal’s remark that some people “seem so made that [they] cannot believe”. “For those people,” continues de Cruz, “religious belief requires a constant cognitive effort.”

I think this is a profound statement. I see it as being in line with my thesis of frame negotiation. Some things require more cognitive effort for some people than other things for other people. It doesn’t have to be religion. We know reading requires more cognitive effort for different people in different ways (dyslexics being one group with a particular profile of cognitive difficulties). So does counting, painting, hunting, driving cars, cutting things with knives, taking computers apart, etc. These things are susceptible to training and practice to different degrees with different people.

So it makes perfect sense on the available evidence that different people require different levels of cognitive effort to maintain belief in what is axiomatic for others.

In the comments Mitch Hodge contributed a question to “researchers who propose that mind-body dualism undergirds representations of supernatural entities: What do you do with all of the anthropological evidence that humans represent most all supernatural entities as embodied? How do disembodied beings eat, wear clothes, physically interact with the living and each other?”

This is really important. Before you can talk about content of belief, you need to carefully examine all its aspects. And as I tried to argue above, starting with religion as a category already leads us down certain paths of argumentation that are less than telos-neutral.

But the answer to the “are humans natural mind-body dualists” does not have to be to choose one over the other. I suggest an alternative answer:

Humans are natural schematicists and schema negotiators

What does that mean? Last year, I gave a talk (in Czech) on “Schematicity and undetermination as two forgotten processes in mind and language”. In it I argue that operating on schematic or in other ways underdetermined concepts is not only possible but it is built into the very fabric of cognition and language. It is extremely common for people to hold incomplete images (Lakoff’s pizza example was the one that set me on this path of thinking) of things in their mind. For instance, on slide 12 of the presentation below, I show different images that Czechs submitted to a competition run online by a national newspaper on “what does baby Jesus look like” (Note: In Czech, it is baby Jesus – or Ježíšek – who delivers the presents on Christmas Eve). The images ran from an angelic adult and a real baby to an outline of the baby in the light to just a light.

[slideshare id=6059571&doc=schematicnostanedourcenost-101207060558-phpapp02]
This shows that people not only hold underdetermined images but that those images are determined to varying degrees (in my little private poll, I came across people who imagined Ježíšek as an old bearded man and personally, I did not explicitly associate the diminutive ježíšek with the baby Jesus until I had to translate it into English). Discussions like those around the Trinity or the embodied nature of key deities are the results of conversations about which parts of a shared schema it is acceptable to fill out and how to fill them out.

It is basically metaphor (or as I call it frame) negotiation. Early Christianity was full of these debates and it is not surprising that it wasn’t always the most cognitively parsimonious image that won out.

It is further important that humans have various explicit and implicit strategies to deal with infelicitous schematicity or schema clashes, one of which is to defer parts of their cognition to a collectively recognised authority. I spent years of my youth believing that although the Trinity made no sense to me, there were people to whom it did make sense and to whom, as guardians of sense, I would defer my own imperfect cognition. But any study of the fights over the nature of the Trinity is a perfect illustration of how people negotiate over their imagery. And as in any negotiation it is not just the power of the argument but also the power of the arguer that determines the outcome.

Christianity is not special here in any regard but it does provide two millennia of documented negotiation of mappings between domains full of schemas and rich images. It starts with St Paul’s denial that circumcision is a necessary condition of being a Christian and goes on into the conceptual contortions surrounding the Trinity debates. Early Christian eschatology also had to constantly renegotiate its foundations as the world stubbornly refused to end and was in that no different from modern eschatology – be it religion or science based. Reformation movements (from monasticism to Luther or Calvin) also exhibit this profound contrasting of imagery and exploration of mappings, rejecting some, accepting others, ignoring most.

All of these activities lead to paradoxes, thus spurring heretical and reform movements. Waldensians or Lutherans or Hussites all arrived at their disagreement with the dogma through painstaking analysis of the imagery contained in the text. Arianism was in its time the “thinking man’s” Christianity, because it made a lot more sense than the Nicene consensus. No wonder it experienced a post-reformation resurgence. But the problems it exposed were equally serious and it was ultimately rejected for probably good reason.

How is it possible that the Nicene consensus held so long as the mainstream interpretation? Surely, Luther could not have been the first to notice the discrepancies between liturgy and scripture. Two reasons: inventory of expression and underdetermination of conceptual representation.

I will deal with the idea of inventory in a separate post. Briefly, it is based on the idea of cognitive grammar that language is not a system but rather a patterned inventory of symbolic units. This inventory is neither static nor does it have clear boundaries but it functions to constrain what is available for both speech and imagination. Because of the nature of symbolic units and their relationship, the inventory (a usage-based beast) is what constrains our ability to say certain things although they are possible by pure grammatical or conceptual manipulation. By the same token, the inventory makes it possible to say things that make no demonstrable sense.

Frame (or metaphor) negotiation operates on the inventory but also has to battle against its constraints. The units in the inventory range in their schematicity and determination but they are all schematic and underdetermined to some degree. Most of the time this aids effortless conceptual integration. However, a significant proportion of the time, particularly for some speakers, the conceptual integration hits a snag. A part of a schematic concept usually left underdetermined is filled out and it prevents easy integration and an appropriate mapping needs to be negotiated.

For example, it is possible to say that Jesus is God and Jesus is the Son of God even in the same sentence and as long as we don’t project the offspring mapping on the identity mapping, we don’t have a problem. People do these things all the time. We say things like “taking a human life is the ultimate decision” and “collateral damage must be expected in war” and abhor people calling soldiers “murderers”. But the alternative to “justified war” namely “war is murder” is just as easy to sanction given the available imagery. So people have a choice.

But as soon as we flesh out the imagery of “X is son of Y” and “X is Y” we see that something is wrong. This in no way matches our experience of what is possible. Ex definitio “X is son of Y” OR “X is Y”. Not AND. So we need to do other things to make the nature of “X is Y” compatible with “X is the son of Y”. And we can either do this by attributing a special nature to one or both of the statements. Or we can acknowledge the problem and defer knowledge of the nature to a higher authority. This is something we do all the time anyway.

Drawing from René Descartes (1596-1650)

Image via Wikipedia

So to bring the discussion to the nature of embodiment, there is no difficulty for a single person or a culture in maintaining that some special being is disembodied and yet can perform many embodied functions (like eating). My favorite joke told to me by a devout Catholic begins: “The Holy Trinity are sitting around a table talking about where they’re going to go for their vacation…” Neither my friend nor I assumed that the Trinity is in any way an embodied entity, but it was nevertheless very easy for us to talk about its members as embodied beings. Another Catholic joke:

A sausage goes to Heaven. St Peter is busy so he sends Mary to answer the Pearly Gates. When she comes back he asks: “Who was it?” She responds: “I don’t know, but it sure looked like the Holy Ghost.”

Surely a more embodied joke is very difficult to imagine. But it just illustrates the availability of rich imagery to fill out schemas in a way that forces us to have two incompatible images in our heads at the same time. A square circle, of sorts.

There is nothing sophisticated about this. Any society is going to have members who are more likely to explore the possibilities of integration of items within its conceptual inventory. In some cases, it will get them ostracised. In most cases, it will just be filed away as an idiosyncratic vision that makes a lot of sense (but is not worth acting on). That’s why people don’t organize their lives around the dictums of stand-up comedians in charge. What they say often “makes perfect sense” but this sense can be filed away into the liminal space of our brain where it does not interfere with what makes sense in the mundane or the sacred context of conceptual integration. And in a few special cases, this sort of behavior will start new movements and faiths.

These “special” individuals are probably present in quite a large number in any group. They’re the people who like puns or the ones who correct everyone’s grammar. But no matter how committed they are to exploring the imagery of a particular area (content of faith, moral philosophy, use of mobile phones or genetic engineering) they will never be able to rid it of its schematicity and indeterminacies. They will simply flesh out some schemas and strip off the flesh of others. As Kuhn said, a scientific revolution is notable not just for the new it brings but also for all the old it ignores. And not all of the new will be good and not all of the old will be bad.

Not that I’m all that interested in the origins of language but my claim is that the negotiation of the mappings between underdetermined schemas is at the very foundation of language and thought. And as such it must have been present from the very beginning of language – it may have even predated language. “Religious” thought and practice must have emerged very quickly, as soon as one established category came into contact with another category. The first statement of identity or similarity was probably quite shortly followed by “well, X is only Y, in as much as Z” (expressed in grunts, of course). And since bodies are so central to our thought, it is not surprising that imagery of our bodies doing special things or us not having a body and yet preserving our identity crops up pretty much everywhere. Hypothesizing some sort of innate mind-body dualism is taking an awfully big hammer to a medium-sized nail. And looking for an evolutionary advantage in it is little more than the telling of campfire stories of heroic deeds.

Epilogue

To look for an evolutionary foundation of religious belief is little more sophisticated than arguing about the nature of virgin birth. If nothing else, the fervor of its proponents should be highly troubling. How important is it that we fill in all the gaps left over by neo-Darwinism? There is nothing special about believing in Ghosts or Witches. It is an epiphenomenon of our embodied and socialised thought. Sure, it’s probably worth studying the brains of mushroom-taking mystical groups. But not as a religious phenomenon. Just as something that people do. No more special than keeping a blog. Like this.

Post Script on Liminality [UPDATE a year or so later]

Cris Campbell on his Genealogy of Religion Blog convinced me with the aid of some useful references that we probably need to take the natural/supernatural distinction a bit more seriously than I did above. I still don’t agree it’s as central as is often claimed but I agree that it cannot be reduced to the sacred v. mundane as I tried above.  So instead I proposed the distinction between liminal and metaliminal in a comment on the blog. Here’s a slightly edited version (which may or may not become its own post):

I read with interest Hultkrantz’s suggestion for an empirical basis for the concept of the supernatural but I think there are still problems with this view. I don’t see the warrant for the leap from “all religions contain some concept of the supernatural” to “supernatural forms the basis of religion”. Humans need a way to talk about the experienced and the adduced and this will very ‘naturally’ take the form of “supernatural” (I’m aware of McKinnon’s dissatisfaction with calling this non-empirical).

On this account, science itself is belief in the supernatural – i.e. postulating invisible agents outside our direct experience. And in particular speculative cognitive science and neuroscience have to make giant leaps of faith from their evidence to interpretation. What are the chances that much of what we consider to be givens today will in the future be regarded as little more sophisticated than phrenology? But even if we are more charitable to science and place its cognition outside the sphere of that of a conscientious sympathetic magician, the use of science in popular discourse is certainly no different from the use of supernatural beliefs. There’s nothing new here. Let’s just take the leap from the science of electricity to Frankenstein’s monster. Modern public treatments of genetics and neuroscience are essentially magical. I remember a conversation with an otherwise educated philosophy PhD student who was recoiling in horror from genetic modification of fruit (using fish genes to do something to oranges) as unnatural – or monstrous. Plus we have stories of special states of cognition (absent-minded professors, en-tranced scientists, rigour of study) and ritual gnostic purification (referencing, peer review). The strict naturalist prescriptions of modern science and science education are really not that different from “thou shalt have no other gods before me.”

I am giving these examples partly as an antidote to the hidden normativity in the term ‘supernatural’ (I believe it is possible to mean it non-normatively but it’s not possible for it not to be understood that way by many) but also as an example of why this distinction is not one that relates to religion as opposed to general human existence.

However, I think Hultkrantz’s objection to a complete removal of the dichotomy by people like Durkheim and Hymes is a valid one, as is his claim of the impossibility of reducing it to the sacred/profane distinction. But I’d like to propose a different label and consequently framing for it: meta-liminal. By “meta-liminal” I mean beyond the boundaries of daily experience and ethics (a subtle but to me an important difference from non-empirical). The boundaries are revealed to us in liminal spaces and times (as outlined by Turner) and what is beyond them can be behaviours (Greek gods), beings (leprechauns), values (Platonic ideals) or modes of existence (land of the dead). But most importantly, we gain access to them through liminal rituals where we stand with one foot on this side of the boundary and with another on the “other” side. Or rather, we temporarily blur and expand the boundaries and can be in both places at once. (Or possibly both.) This, however, I would claim is a discursively psychological construct and not a cognitively psychological construct. We can study the neural correlates of the various liminal rituals (some of which can be incredibly mundane – like wearing a pin) but searching for a single neural or evolutionary foundation would be pointless.

The quote from Nemeroff and Rozin defining “the supernatural” as that which “generally does not make sense in terms of the contemporary understanding of science” sums up the deficiency of the normative or crypto-normative use of “supernatural”. But even the strictly non-normative use suffers from it.

What I’m trying to say is that not only is religious cognition not a special kind of cognition (in common with MacKendrick), but neither is any other type of cognition (no matter how Popperian its supposed heuristics). The different states of transcendence associated with religious knowing (gnosis) ranging from a vague sense of fear, comfort or awe to a dance or mushroom induced trance are not examples of a special type of cognition. They are universal psychosomatic phenomena that are frequently discursively constructed as having an association with the liminal and meta-liminal. But can we postulate an evolutionary inevitability that connects a new-age whackjob who proclaims that there is something “bigger than us” to a sophisticated theologian to Neil DeGrasse Tyson to a jobbing shaman or priest to a simple client of a religious service? Isn’t it better to talk of cultural opportunism that connects liminal emotional states to socially constructed liminal spaces? Long live the spandrel!

This is not a post-modernist view. I’d say it’s a profoundly empirical one. There are real things that can be said (provided we are aware of the limitations of the medium of speech). And I leave open the possibility that within science, there is a different kind of knowledge (that was, after all, my starting point, I was converted to my stance by empirical evidence so I am willing to respond to more).
