
What would make linguistics a better science? Science as a metaphor

Background

This is a lightly edited version of a comment posted on Martin Haspelmath’s blog post “Against traditional grammar – and for normal science in linguistics”. In it he offers a critique of the current linguistic scene as being unclear as to its goals and in need of better definitions. He proposes ‘normal science’ as an alternative:

In many fields of science, comparative research is based on objective measurements, not on categories that are hoped to be universal natural kinds. In linguistics, we can work with objectively defined comparative concepts (Haspelmath 2010).

While I am in broad agreement with the critique, I’m not sure the solution is going to lead to a ‘better science’ of linguistics. (Also, I’m not sure that this is an accurate description of how science actually works.)

Problem with ‘normal science’ approach to linguistics

I would say that the problem with the ‘normal science’ approach is that it makes it seem natural to turn to structural description as the mode of ‘doing good linguistics’. But I think that this is misleading as to the nature of language. The challenge posed by the radical potential of constructionism (covering Fillmore, Croft, Goldberg) on the one hand and cognitive semantics (Lakoff, Talmy, Langacker) on the other makes a purely structural description (even one imbued with functionalism) less appealing as the foundation of a newly scientificised linguistics.

It’s curious that it is physics and chemistry, with their fully mathematised personas, that get mentioned in this context, and not biology or geography. In both of those, precise definitions are much more provisional and iterative. Even foundational terms such as species or gene are much more fluid and less well defined than it might seem (I recommend Keller’s ‘Century of the Gene’ for an account of how discrepancies in how the gene was defined across various labs were actually beneficial for the development of genetics).

That’s not to say that I strongly disagree with any of Haspelmath’s proposals, but they don’t particularly make me excited to do linguistics. I found Dixon’s ‘Basic Linguistic Theory’ an exhilarating read, not because I felt that Dixon’s programme would lead to more consistency but because it was a radically new proposal (despite his claims to the contrary) for the theoretical basis of a comparative linguistic research agenda. Which is also why I like Haspelmath’s body of work (exemplified in projects such as the World Atlas of Language Structures and Glottolog).

But I doubt that the road ahead lies in better definitions. I’m not opposed to them, just skeptical that they will lead to much. The road ahead lies in better data and better theory. I think that between corpus linguistics, frame semantics and construction grammar we can get both. I proposed the analogy of ‘dictionary and grammar being to language what standing on one foot is to running’. I think linguistics needs to embrace the dynamism of language as a human property rather than as a fixed effect (to borrow Clark’s phrase). Fillmore and Kay’s early writing on construction grammar was a first step, but things seem to have settled back into the bad old ways of static structural description.

Data and theory need each other in a dialectic fashion. You need data to create a theory, but you need some proto-theory to see the data in the first place. And then you need your theory to collect more data, and that data further shapes your theory, which in turn lets you see the data in different ways. The difference between biology and linguistics is that our proto-theories of the biological world correspond much better to the dynamic structures that can be theorized (modeled) on the basis of systematic data collection. Which is why folk taxonomies of the biological world are much closer to those of botany or zoology than folk taxonomies of language are to linguistic structures. (They are much more elaborate, to start with – at least at the level available to human perception.)

My proposal is to take seriously the human ability to reflect (hypostatically) on the way we speak (cf. Talmy’s defense of introspection), because this is at the start of any process that bootstraps a theory of language. We then need to be mindful of the way this awareness interacts with the subconscious automaticity with which the patterns of regularity we call structures seem to be used. In the same way that Fillmore and Kay asked any theory of grammar to account for the exceptions (or even take them as a starting point), I’d want to ask any theory of language to take bilingualism and code mixing as its starting point (inspired by Elaine Chaika) and to take seriously the variability in how the ability to use those structures automatically is acquired.

None of this precludes or denies the utility of the great work of linguists like Haspelmath. But it is what I think would lead to linguistics being a ‘better’ science (at least in the sense of Wissenschaft or ‘natural philosophy’, rather than in the sense implied by the physics envy which often characterizes these efforts).

Update: 

After I finished writing this, I was listening to an episode of the Unsupervised Thinking podcast in which the group discussed two papers critiquing some of the theoretical foundations of biology (“Can a biologist fix a radio?”) and neuroscience (“Could a Neuroscientist Understand a Microprocessor?”). The general thrust of the discussion was that better definitions would be important because they would allow better measurement and thus quantifiable models. But the discussion also veered towards the question of theory and pre-theoretical knowledge. To me it underscored the tension between data and theory.

My concern is only with the assumption that definitions are the solution. I’d say that a definition (unless it is purely disambiguating polysemy) is just a distillation or a snapshot of a slice in time in the never-ending push and pull between data and the model used to make sense of it, as well as to collect it (otherwise known as theory). This is not that different from the definition of a lexical item in a dictionary.

That is not to deny the heuristic usefulness of definitions. All of this reminded me of the critique of modern axiomatic mathematics (in particular set theory and number theory) exemplified by NJ Wildberger in his online courses on Math Foundations. Wildberger, too, is calling for more precise definitions in mathematics and less reliance on axioms.

Future directions:

I outlined some of the fundamental epistemological problems with definitions (as a species of a referential theory of meaning).

I’m working on a more extensive elaboration of some of the issues involved in comparing the epistemic heuristics used to model the physical and the social world, with the subtitle “The differential susceptibility of units to idealization in the social and physical realms”, which addresses some of the questions I outlined above.

In it, I want to suggest that the key difference between the social and the physical sciences comes down to how easy it is to usefully idealize units and sets in the physical and the social world. A key passage from it:

All of physics is based on idealization. You have the ideal gas, perfect motion, the perfect vacuum, etc. All of Newtonian physics is based on the mathematical description of a world where things like friction don’t exist. An ideal world, if you will. Platonic, almost. And it turned out that this type of idealization can take us extremely far, if we let engineers loose on it.

Because all the progress we attribute to science has really been made by engineers. People who take a ballistic curve and ask ‘how about we add a little crosswind’. The modern world of technology around us is all built on tolerances – encoded in books of tables describing how far we can take the idealized formulas of science into the non-ideal conditions of the ‘real’ world.

In the social world, the ideal individual and the ideal society are more difficult to treat as units of analysis than the perfect vacuum or the ideal gas. But even if we could define them, there’s far less we can do with them that would make them anywhere near as useful as the idealizations of physics and chemistry. That’s why ‘engineering a solution’ is a positive description when we talk about the physical world but a negative one when we talk about the social.
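
To make the ‘ballistic curve plus a little crosswind’ image concrete, here is a minimal sketch in Python (my own illustration, not part of the draft; the numbers, the linear drag model and the function names are assumptions chosen only to show the contrast) comparing the idealized vacuum formula with a numerically integrated trajectory that has to tolerate drag and wind:

```python
# Hypothetical illustration: idealized vs. "engineered" ballistics.
# All parameters and the linear drag model are assumptions for the sake of the example.
import math

G = 9.81  # gravitational acceleration (m/s^2)


def ideal_range(speed, angle_deg):
    """Textbook vacuum range: R = v^2 * sin(2*theta) / g - no friction, no wind."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / G


def engineered_range(speed, angle_deg, drag=0.02, crosswind=2.0, dt=0.001):
    """The same shot, numerically integrated with a crude linear drag term and a
    constant crosswind along the y axis. Returns (downrange distance, lateral drift)."""
    theta = math.radians(angle_deg)
    vx, vy, vz = speed * math.cos(theta), 0.0, speed * math.sin(theta)
    x = y = z = 0.0
    while True:
        # drag acts against velocity relative to the moving air
        vx -= drag * vx * dt
        vy -= drag * (vy - crosswind) * dt
        vz -= (G + drag * vz) * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        if z < 0.0:  # projectile has come back down
            return x, y


if __name__ == "__main__":
    print("idealized range:    %.0f m" % ideal_range(100, 45))
    dx, dy = engineered_range(100, 45)
    print("with drag and wind: %.0f m downrange, %.0f m of drift" % (dx, dy))
```

The idealized formula is one line of algebra; getting even this close to the ‘real’ world is already a matter of tacked-on corrections, step sizes and tolerances.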