Framing and constructions as a bridge between cognition and culture: Two Abstracts for Cognitive Futures

I just found out that both abstracts I submitted to the Cognitive Futures of the Humanities Conference were accepted. I was really only expecting one to get through, but I’m looking forward to talking about the ideas in both.

The first talk has its foundations in a paper I wrote almost 5 years ago now about the nature of evidence for discourse. But the idea is pretty much central to all my thinking on the subject of culture and cognition. The challenge as I see it is to come up with a cognitively realistic but not cognitively reductionist account of culture. And the problem I see is that the learning often only goes one way: the people studying culture are expected to learn from the results of research on cognition, but rarely the other way around.

Frames, scripts, scenarios, models, spaces and other animals: Bridging conceptual divides between the cognitive, social and computational

While the cognitive turn has a definite potential to revolutionize the humanities and social sciences, it will not be successful if it tries to reduce the fields investigated by the humanities to merely cognitive or, by extension, neural concepts. Its greatest potential is in showing continuities between the mind and its expression through social artefacts including social structures, art, conversation, etc. The social sciences and humanities have not waited on the sidelines; they have developed a conceptual framework for apprehending the complex phenomena that underlie social interactions. This paper will argue that in order to have a meaningful impact, the cognitive sciences, including linguistics, will have to find points of conceptual integration with the humanities rather than simply provide a new descriptive apparatus.

It is the contention of this paper that this can best be done through the concept of the frame. Most disciplines dealing with the human mind have (more or less independently) developed a similar notion for dealing with the complexities of conceptualization, variously referred to as frame, script, cognitive model, or one of as many as 14 terms found across the many disciplines that use it. This paper will present the different terms and identify commonalities and differences between them. On this basis, it will propose several practical ways in which the cognitive sciences can influence the humanities and also derive meaningful benefit from the relationship. I will draw on examples from historical policy analysis, literary criticism and educational discourse.

See the presentation on SlideShare.
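Since the argument turns on the family resemblance between frames, scripts, scenarios and cognitive models, here is a minimal sketch of the core they arguably share: a named bundle of roles with default fillers, arranged so that more specific frames inherit from more schematic ones. This is purely my own toy illustration in Python, not anything from the abstract or the talk; the Frame class and the restaurant example (a nod to Schank and Abelson’s famous restaurant script) are stand-ins for much richer structures.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Frame:
    """A schematic knowledge structure: named roles with default fillers."""
    name: str
    roles: Dict[str, str]             # role -> default filler
    parent: Optional["Frame"] = None  # a more schematic frame to inherit from

    def filler(self, role: str) -> Optional[str]:
        """Look a role up locally, falling back on the parent frame."""
        if role in self.roles:
            return self.roles[role]
        return self.parent.filler(role) if self.parent else None

# A specific scenario inherits and overrides the roles of a general frame.
commercial_event = Frame("commercial_event",
                         {"buyer": "person", "seller": "person", "goods": "thing"})
restaurant_visit = Frame("restaurant_visit",
                         {"goods": "meal", "seller": "waiter"},
                         parent=commercial_event)

print(restaurant_visit.filler("goods"))  # 'meal'   -- overridden locally
print(restaurant_visit.filler("buyer"))  # 'person' -- inherited from the parent
```

Whatever the terminological differences, it is this slot-filler-plus-inheritance skeleton that the dozen-plus terms seem to have in common.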

The second paper is a bit more conceptually adventurous and tests the ideas put forth in the first one. I’m going to try to explore a metaphor for the merging of cultural studies with linguistic studies. This was done before with structuralism, and it ended more or less badly. For me, it ended when I read The Story of Lynx by Lévi-Strauss and realized how empty it was of any real meaning. But I think structuralism ended badly in linguistics, as well. We can’t really understand how very basic things in language work unless we involve culture. So even though I come at this from the side of linguistics, it is a linguistics that has already been informed by the study of culture.

If Lévi-Strauss had met Langacker: Towards a constructional approach to the patterns of culture

Construction/cognitive grammar (Langacker, Lakoff, Croft, Verhagen, Goldberg) has broken the strict separation between lexical and grammatical linguistic units that defined linguistics for most of the last century. By treating all linguistic units as meaningful, albeit on a scale of schematicity, it has made it possible to treat linguistic knowledge as simply a part of human knowledge rather than as a separate module in the cognitive system. Central to this effort is the notion of language as an organised inventory of symbolic units that interact through the process of conceptual integration.

This paper will propose a new view of ‘culture’ as an inventory of construction-like patterns that have linguistic as well as interactional content. I will argue that using construction grammar as an analogy allows for the requisite richness and can avoid the pitfalls of structuralism. One of the most fundamental contributions of this approach is the understanding that cultural patterns, like constructions, are pairings of meaning and form and that they are organised in a hierarchically structured inventory. For instance, we cannot properly understand the various expressions of politeness without thinking of them as systematically linked units in an inventory available to members of a given culture, in the same way as syntactic and morphological relationships. As such, we can understand culture as learnable and transmittable in the same way that language is, but without reducing its variability and richness as structuralist anthropology once did.

In the same way that Jakobson’s work on structuralism across the spectrum of linguistic diversity inspired Lévi-Strauss and a whole generation of anthropological theorists, it is now time to bring the exciting advances made within cognitive/construction grammar enriched with blending theory back to the study of culture.

See the presentation on SlideShare.
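To make the analogy a bit more tangible, here is a minimal sketch of what a hierarchically structured inventory of form-meaning pairings might look like, using the politeness example from the abstract. This is entirely my own toy illustration in Python, not a formalization from the paper; the Construction class and the particular units are illustrative assumptions standing in for much richer conceptual content.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Construction:
    """A symbolic unit: a pairing of form and meaning in the inventory."""
    form: str      # a concrete string or a schematic pattern with open slots
    meaning: str   # a gloss standing in for richer conceptual content
    parent: Optional["Construction"] = None  # the more schematic unit it instantiates

# A fragment of an inventory of politeness patterns, from schematic to concrete.
politeness = Construction("POLITE(act)", "signal deference in interaction")
polite_request = Construction("could you X?", "request, softened by deference",
                              parent=politeness)
fixed_phrase = Construction("could you give me a hand?", "ask for help, politely",
                            parent=polite_request)
# The analogy's payoff: a non-verbal cultural pattern sits in the same inventory.
door_holding = Construction("hold the door for the next person",
                            "deference to strangers in shared space",
                            parent=politeness)

def lineage(c: Optional[Construction]) -> None:
    """Walk up the inventory from a concrete unit to its schematic ancestors."""
    while c is not None:
        print(f"{c.form!r}: {c.meaning}")
        c = c.parent

lineage(fixed_phrase)  # fixed phrase -> polite request -> politeness schema
```

The point of the sketch is only structural: the fixed phrase, the schematic request pattern and the non-verbal door-holding routine are systematically linked units in one learnable inventory rather than isolated facts.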

The brain is a bad metaphor for language

Note: This was intended to be a brief note. Instead it developed into a monster post that took me two weeks of stolen moments to write. It’s very light on non-blog references, but they exist. Nevertheless, it is still easy to find a number of oversimplifications, conflations, and other imperfections below. The general thrust of the argument, however, remains.

How Far Can You Trust a Neuroscientist?

A couple of days ago I watched a TED talk called The Linguistic Genius of Babies by Patricia Kuhl. I had been putting it off because I suspected I wouldn’t like it, but I was still disappointed at how hidebound it was. It conflated a number of really unconnected things and then tried to sway the audience to its point of view with pretty pictures of cute infants in brain scanners. In the end, it was just a hodgepodge of half-implied claims, strikingly similar to some of the more outlandish claims made by behaviorists so many years ago. Kuhl concluded that brain research is the next frontier of understanding learning, but she did not give a single credible example of how this could be. She started with a rhetorical trick: she mentioned an at-risk language alongside a picture of a mother holding an infant facing towards her, and then said (with annoying condescension) that this mother and the other tribe members know something we do not:

What this mother — and the 800 people who speak Koro in the world — understand is that, to preserve this language, they need to speak it to the babies.

This is garbage. Languages do not die because there’s nobody there to speak them to the babies (until the very end, of course) but because there’s nobody of socioeconomic or symbolic prestige for children and young adults to speak the language to. Languages don’t die because people can’t learn them; they die because people have no reason (other than nostalgia) to learn them, or have a reason not to learn them. Given a strong enough reason, people would learn a dying language even if they started at sixteen. They just almost never are given that reason. Why Kuhl felt she did not need to consult the literature on language death, I don’t know.

Patricia Kuhl has spent the last 20 years studying pretty much one thing: acoustic discrimination in infants (http://ilabs.washington.edu/kuhl/research.html). Her research provided support for something that had already been known (or suspected): namely, that young babies can discriminate between sounds that adults cannot (given similar stimuli, such as the ones one might find in the foreign language classroom). She calls this the “linguistic genius of babies”, and she’s wrong:

Babies and children are geniuses until they turn seven, and then there’s a systematic decline.

First, the decline (if there is such a thing) is mostly limited to acoustic processing, and even then it’s not clear that the brain is the thing that causes it.

Second, being able to discriminate (by moving their head) between sounds in both English and Mandarin at age 6 months is not a sign of genius. It’s a sign of the baby not yet being able to differentiate between language and sound. Or in other words, the babies are still pretty dumb. But it doesn’t mean they can’t learn a similar distinction at a later age – like four or seven or twelve. They do. They just probably do it in a different way than a 6-month-old would.

Third, in the overall scheme of things, acoustic discrimination at the individual phoneme level (which is what Kuhl is studying) is only a small part of learning a language, and it certainly does NOT stop at 7 months or even 7 years of age. Even children who start learning a second language at the age of 6 achieve native-like phonemic competence. And even many adults do. They seem not to perform as well on certain fairly specialized acoustic tests, but functionally they can be as good as native speakers. And it’s furthermore not clear that accent deficiencies are due to the lack of some sort of brain plasticity.

Fourth, language learning and knowledge is not a binary thing. Even people who only know one language know it to a certain degree. They can be lexically, semantically and syntactically quite challenged when exposed to a sub-code of their language they have little to no contact with. So I’m not at all sure what Kuhl was referring to. François Grosjean (an eminent researcher in the field) has been discussing all this on his Life as a Bilingual blog (and in books, etc.). To have any credibility, Kuhl must address this head-on:

There is no upper age limit for acquiring a new language and then continuing one’s life with two or more languages. Nor is there any limit in the fluency that one can attain in the new language with the exception of pronunciation skills.

Instead she just falls back on old prejudices. She simply has absolutely nothing to support this:

We think by studying how the sounds are learned, we’ll have a model for the rest of language, and perhaps for critical periods that may exist in childhood for social, emotional and cognitive development.

A paragraph like this may get her some extra funding, but I don’t see any other justification for it. Actually, I find it quite puzzling that a serious scholar would even propose anything like this today. We already know there is no critical period for social development. Well, we don’t really know what social development is, but there is no critical brain period for whatever it is. We get socialized into new collective environments throughout our lives.

But there’s no reason to suppose that learning to interact in a new environment is anything like learning to discriminate between sounds. There are some areas of language linked to perception where that may partly be the case (such as discriminating shapes, movements, colors, etc.), but hardly things like morphology or syntax, where much more complexity is involved. And the argument cuts both ways. Let’s say a lot of language learning was like sound development. We know most of it (syntax, morphology, lexicon) continues throughout life, and it doesn’t even start at 6 months (unless you’re a crazy Chomskean who believes in some sort of magical parameter setting). So if sound development were like the rest, maybe it has nothing to do with the brain in the way Kuhl imagines – although she’s so vague that she could always claim that that’s what she’d had in mind. This is what Kuhl thinks of as additional information:

We’re seeing the baby brain. As the baby hears a word in her language the auditory areas light up, and then subsequently areas surrounding it that we think are related to coherence, getting the brain coordinated with its different areas, and causality, one brain area causing another to activate.

So what? We knew that was going to happen. Some parts of the brain were going to light up, as they always do. What does that mean? I don’t know. But I also know that Patricia Kuhl and her colleagues don’t know either (at least not in the way she pretends). We speak a language, we learn a language, and at the same time we have a brain, and things happen in the brain. There are neurons and areas that seem to be affected by impact (but not always, and not always in exactly the same way). Of course, this is an undue simplification. Neuroscientists know a huge amount about the brain – just not how it links to language in a way that would tell us much about language that we don’t already know. Kuhl’s next implied claim is a good example of how partial knowledge in one area may not at all extend to knowledge in another.

What you see here is the audio result — no learning whatsoever — and the video result — no learning whatsoever. It takes a human being for babies to take their statistics. The social brain is controlling when the babies are taking their statistics.

In other words, when the children were exposed to audio or video as opposed to a live person, no effect was shown. At 6 months of age! As is Kuhl’s wont, she only hints at the implications, but over at the Royal Society’s blog comments, Eric R. Kandel has spelled it out:

I’m very much taken with Patricia Kuhl’s finding in the acquisition of a second language by infants that the physical presence of a teacher makes enormous difference when compared to video presence. We all know from personal experience how important specific teachers have been. Is it absurd to think that we might also develop methodologies that would bring out people’s potential for interacting empathically with students so that we can have a way of selecting for teachers, particularly for certain subjects and certain types of student? Neuroscience: Implications for Education and Lifelong Learning.

But this could very well be absurd! First, Kuhl’s experiments were not about second language acquisition but about sensitivity to sounds in other languages. Second, there’s no evidence that what Kuhl discovered for infants holds for adults or even three-year-olds. A six-month-old baby hasn’t yet learned that the pictures and sounds coming from the machine represent the real world; most four-year-olds have. I don’t know of any research, but there is plenty of anecdotal evidence. I have personally met several people highly competent in a second language who claimed they learned it by watching TV at a young age. A significant chunk of my own competence in English comes from listening to the radio, audio books, and watching TV drama. How much of our first language competence comes from reading books and watching TV? That’s not to say that personal interaction is not important – after all, we need to learn enough to understand what the 2D images on the screen represent. But how much do we need to learn? Neither Kuhl nor Kandel has the answer, but both are ready (at least by implication) to shape policy regarding language learning. In the last few years, several reports have raised questions about overreach by neuroscience (both in its methods and in assumptions about their validity), but even perfectly good neuroscience can be bad scholarship when it extends its claims far beyond what the evidence can support.

The Isomorphism Fallacy

This section of the post is partly based on a paper called Isomorphism as a heuristic and philosophical problem that I presented at a Czech cognitive science conference about 3 years ago.

The fundamental problem underlying the overreach of basic neuroscience research is the fallacy of isomorphism. This fallacy presumes that the structures we see in language, behavior and society must have structural counterparts in the brain: a bit of the brain that deals with nouns, another bit that deals with being sorry, possibly another one that deals with voting Republican (as Woody Allen proved in “Everyone Says I Love You“). But at the moment the evidence for this is extremely weak, at best. And there is no intrinsic need for a structural correspondence to exist.

Sidney Lamb came up with a wonderful analogy that I’m still working my way through. He says (recalling an old ‘Aggie‘ joke) that trying to figure out where the bits we know as language structure are in the brain is like trying to work out how to fit the roll that comes out of a tube of toothpaste back into the container. This is obviously a fool’s errand. There’s nothing in the toothpaste container that in any way resembles the colorful tubular object we get when we squeeze it. We get that through an interaction of the substance, the container, external force, and the shape of the opening. It seems to me entirely plausible that the link between language and the brain is much more like that between the paste, the container and their environment than like that between a bunch of objects and a box. The structures that come out are the result of things we don’t quite understand happening in the brain as it interacts with its environment. (I’m not saying that that’s how it is, just that it’s plausible.) The other thing that lends this credence is the fact that things like nouns or fluency are social constructs with fuzzy boundaries, not hard discrete objects, so actually localizing them in the brain would be a bit of a surprise. Not that it can’t be done, but the burden of evidence for making this a credible finding is substantial.

Now, I think the same problem applies to looking for isomorphism in the other direction. Lamb himself tries to look at grammar by looking for connections resembling the behavior of activating neurons. I don’t see this going anywhere. George Lakoff (who influenced me more than any other linguist in the world) seems to think that a Neural Theory of Language is the next step in the development of linguistics. At one point he and many others thought that mirror neurons say something about language, but now that seems to have been brought into question. And why do we need mirror neurons when we already know a lot about the imitative behaviors they’re supposed to facilitate? Perhaps as a treatment and diagnostic protocol for pathologies, but is this really more than story-telling? Jerome Feldman described NTL in his book “From Molecule to Metaphor”, but his main contribution seems to me to lie in showing how complex language phenomena can be modelled with brain-like neural networks, not in saying anything new about those phenomena (see here for an even harsher treatment). The same goes for Embodied Construction Grammar. I entirely share ECG’s linguistic assumptions, but the problem is that it tries to link its descriptive apparatus directly to the formalisms necessary for modeling. This proved to be a disaster for the generative project, which projected its formalisms into language with an imperfect fit and now spends most of its time refining those formalisms rather than studying language.

So far I don’t see any advantage in linking language to the brain in either the way Kuhl et al. or Feldman et al. try to do it (again with the possible exception of pathologies). In his recent paper on compositionality, Feldman describes research showing that spatial brain areas are activated in conjunction with spatial terms and that sentence processing time increases as the sentence gets further removed from “natural spatial orientation”. But brain imaging at best confirms what we already knew, and how useful is that confirmatory knowledge? I would argue: not very. In fact, there is a danger that we will start thinking of brain imaging as a necessary confirmation of linguistic theory. Feldman takes a step in this dangerous direction when he says that with the advent of the new techniques of neuroscience we can finally study language “scientifically”. [Shudder.]

We know there’s a connection between language and the brain (a more systematic one than between language and the foot, for instance), but so far nobody has shown convincingly that we can explain much about language by looking at the brain (or vice versa). Language is best studied as its own incredibly multifaceted beast, and so is the brain. We need to know a lot more about language and about the brain before we can start projecting one onto the other.

And at the moment, brain science is the junior partner here. We know a lot about language and can find out more without looking for explanations in the brain. Doing otherwise seems as foolish as trying to illuminate language by looking inside a computer (as Chomsky’s followers keep doing). The same question I’m asking about language was asked about cognitive processes (a closely related thing) by William Uttal in The New Phrenology, who asks “whether psychological processes can be defined and isolated in a way that permits them to be associated with particular brain regions” and warns against a “neuroreductionist wild goose chase” – and how else can we characterize Kuhl’s performance – lest we fall “victim to what may be a ‘neo-phrenological’ fad”. Michael Shermer voiced a similar concern in Scientific American:

The brain is not random kludge, of course, so the search for neural networks associated with psychological concepts is a worthy one, as long as we do not succumb to the siren song of phrenology.

What does a “siren song of phrenology” sound like? I imagine it would sound pretty much like this quote by Kuhl:

We are embarking on a grand and golden age of knowledge about child’s brain development. We’re going to be able to see a child’s brain as they experience an emotion, as they learn to speak and read, as they solve a math problem, as they have an idea. And we’re going to be able to invent brain-based interventions for children who have difficulty learning.

I have no doubt that there are some learning difficulties for which a ‘brain-based intervention’ (whatever that is) may be effective. But they are likely such a small part of the universe of learning difficulties that they hardly warrant a bombastic claim like the one above. I could find nothing in Kuhl’s narrow research that would support this assertion. Learning and language are complex psycho-social phenomena that are unlikely to have straightforward counterparts in brain activations of the kind that can be seen by even the most advanced modern neuroimaging technology. There may well be some straightforward pathologies that can be identified and have some sort of treatment aimed at them. The problem is that brain pathologies are not necessarily the opposites of a typically functioning brain (a fallacy that has long plagued the interpretation of evidence from aphasias) – it is, as brain plasticity would suggest, just as likely that at least some brain pathologies create new qualities rather than simply flipping an on/off switch on existing ones. Plus there is the historical tendency of the self-styled hard sciences to horn in on areas where established disciplines have accumulated lots of knowledge, ignore that knowledge, declare a reductionist victory, fail, and not admit failure.

For the foreseeable future, the brain remains a really poor metaphor for language and other social constructs. We are perhaps predisposed to find similarities in anything we look at, but researchers ought to have learned by now to be cautious about them. Today’s neuroscientists should be very careful that they don’t look as foolish to future generations as phrenologists and skull measurers look to us now.

In praise of non-reductionist neuroscience

Let me reiterate: I have nothing against brain research. The more of it, the better! But it needs to be much more honest about its achievements and limitations (as much as it can be, given the politics of research funding). Saying the sorts of things Patricia Kuhl does, with incredibly flimsy evidence and complete disregard for other disciplines, is good for funding but awful for actually obtaining good results. (Note: the brevity of the TED format is not an excuse in this case.)

A much more promising overview of applied neuroscience is a report by the Royal Society on education and the brain, which is much more realistic about the state of neurocognitive research; its authors admit at the outset: “There is enormous variation between individuals, and brain-behaviour relationships are complex.”

The report authors go on to enumerate the things they feel we can claim as knowledge about the brain:

  1. The brain’s plasticity
  2. The brain’s response to reward
  3. The brain’s self-regulatory processes
  4. Brain-external factors of cognitive development
  5. Individual differences in learning as connected to the brain and genome
  6. Neuroscience connection to adaptive learning technology

So this is a fairly modest list, made even more modest by the formulations of the actual findings. I could only find a handful of statements supporting the general claims that do not contain a hedge: “research suggests”, “may mean”, “appears to be”, “seems to be”, “probably”. This modesty in research interpretation does not always make its way into the report’s policy suggestions (mainly suggestions 1 and 2). Despite this, I think anybody who finds Patricia Kuhl’s claims interesting would do well to read this report and pay careful attention to the actual findings described there.

Another possible problem for those drawing wide-reaching conclusions is the relative newness of the research on which these recommendations are based. I had a brief look at the citations in the report and only about half are actually related to primary brain research. Of those, exactly half were published in 2009 (8) and 2010 (20), and only two in the 1990s. This is in contrast to language acquisition and multilingualism research, which can point to decades of consistently replicable findings and relatively stable and reliable methods. We need to be afraid, very afraid, of sexy new findings when they relate to what is perceived as the “nature” of humans. At this point, as a linguist looking at neuroscience (and the history of the promise of neuroscience), my attitude is skeptical. I want to see 10 years of independent replication and stable techniques before I will consider basing my descriptions of language and linguistic behavior on neuroimaging. There’s just too much of ‘now we can see stuff in the brain we couldn’t see before, so this new version of what we think the brain is doing is definitely what it’s doing’. Plus, the assumption that exponential growth in the precision of brain mapping will produce the same growth in the identification of brain functions is far from a sure thing (cf. genome decoding). Exponential growth in computer speed only led to incremental increases in computer usability. And the next logical step in the once skyrocketing development of automobiles was not flying cars but pretty much the same, slightly better cars (even though they look completely different under the hood).

The sort of knowledge it takes to learn and do good neuroscience is staggering. The scientists who study the brain deserve all the personal accolades they get. But the actual knowledge they generate about issues relating to language and other social constructs is much less overwhelming. Even a tiny clinical advance, such as helping a relatively small number of people communicate who otherwise couldn’t express themselves, makes the work worthwhile. But we must not confuse clinical advances with theoretical ones, and we must be very cautious when applying them to policy fields that are related more by similarity than by a direct causal connection.

The Tortoise and the Hare: Analogy for Academia in the Digital World?

Dan Cohen has decided to “crowdsource” (a fascinating blend, by the way) the title of his next book with the following instructions.

The title should be a couplet like “The X and the Y” where X can be “Highbrow Humanities” “Elite Academia” “The Ivory Tower” “Deep/High Thought” [insert your idea] and Y can be “Lowbrow Web” “Common Web” “Vernacular Technology/Web” “Public Web” [insert your idea]. so possible titles are “The Highbrow Humanities and the Lowbrow Web” or “The Ivory Tower and the Wild Web” etc.

via Dan Cohen’s Digital Humanities Blog » Blog Archive » Crowdsourcing the Title of My Next Book.

Before I offer my suggestion, let me pause and wonder: how do we know what the book is to be about? Well, we know exactly what it is to be about, because what he has in fact done is describe its contents in the form of two cross-domain mappings that are then mapped onto each other (a sort of double-barrel metaphor). And the title, it goes without saying (in a culture that agrees on what titles should be), should as eloquently and entertainingly as possible point to this complex mapping through yet more mappings (if this were a post on blending theory, I’d elaborate on this some more).

We (I mean us, the digitized or unanalog) can also roughly guess what Dan Cohen’s stance will be, and if he were writing it just for us, we’d much rather get it as a series of blog posts, or perhaps not at all. The paragraph quoted above is enough for us. We know what’s going on.

So, aware of the ease with which meaning was co-constructed, I would recommend a more circumspect and ambiguous title: The Tortoise and the Hare, with the subtitle Who’s Chasing Whom in Digital Scholarship, or possibly The Winners and Losers of Digital Academia. Why this title? Well, I believe in challenging preconceptions, starting with our own. The tale of the Tortoise and the Hare (as the excellent Wikipedia entry documents) offers no easy answer. Or rather, it offers too many easy answers for comfort. The first comes from the title and a vague awareness that this is a story about a speed contest between two animals who are stereotypes for the polar opposites of speed. So the first impression is “of course, the hare is the winner”, and since this is a book about the benefits of digital scholarship, the digital scholars must be the hare. Also, digital equals fast, so the book must be about how the hare of digital scholarship is leaving the tortoise of ivory-tower academia in the dust. And we could come up with a dozen stories illustrating how this is the case.

Then we pause and remember: ah, but didn’t the tortoise win the race because of the hasty overconfidence and carelessness of the hare? So perhaps the traditional academics, moving slowly but deliberately, are the favored ones after all? Can’t we all also think of too many errors made on blogs and crowdsourced encyclopedic entries, and easily make the case that the deliberate approach is superior to moving at breakneck speed? Aren’t hares known for their short and precarious life spans as well as their speed, while the tortoise is almost proverbial in its longevity?

But the moral of the story is even more complex and less determinate. If we continue further in our deliberations, we might get a few more hints of this. In particular, we must ask: what does this story tell us about speed and wisdom? And the answer must be: absolutely nothing. We knew coming into it that hares are faster than tortoises over any distance that can be traveled by both animals. We’re not exactly clear on why the tortoise challenged the hare; unless it had secret knowledge of the hare’s narcolepsy, it couldn’t possibly have known that the hare would take a nap or get distracted (depending on the version of the story) in the middle of the race. So equating the tortoise with wisdom would seem foolish. At best we can see the tortoise as an inveterate gambler whose one-in-a-million bet paid off. We would certainly be foolish (as Lord Dunsany, cited in the Wikipedia entry, noticed) to assume that the hare’s loss makes the tortoise more suitable for a job delivering a swift message over the same route the following day. So the only possible lesson could be that taking a nap in the middle of a race and not waking up in time can lead to losing the race. Conceivably, there could be something about the dangers of overconfidence. But again, didn’t we know this already through many much less ambiguous stories?

What does that mean for digital and traditional scholarship? Very tentatively, I would suggest that we cannot predict the result of a single race (i.e. any single academic enterprise) based purely on the known (or inferred) qualities of one approach. There are too many variables. But neither can we discount what we know about the capabilities of one approach in favor of another simply because it failed where we would have expected success. In a way, just as with the fable, we already know everything about the situation. For some things hares are better than tortoises, and vice versa. Most of the time our expectations are borne out, and sometimes they are not. Sometimes the differences are insignificant, sometimes they matter a lot. In short, life is pretty damn complicated, and hoping a simple contrast of two prejudice-laden images will help us understand it better is perhaps the silliest thing of all. But often it is also the thing without which understanding would be impossible. So perhaps the moral of this story, this blog, and of Dan Cohen’s book really should be: beware of easy understandings.
