

Three books of the year 2013 and some books of the century 1900-2013


I have been asked (as every year) to nominate three books of the year for Lidové Noviny (a Czech paper I contribute to occasionally). This is always a tough choice for me and some years I don’t even bother responding. This is because I don’t tend to read books ‘of the moment’ and range widely in my reading across time periods. But I think I have some good ones this time. Additionally, Lidové Noviny are celebrating 120 years as a major Czech newspaper, so they also asked me for a book of the century (from 1900 to now). It makes no sense to even try to pick ‘the one’, so I picked three categories that are of interest to me (language, society and fiction) and chose three books in each.

The Colorful Library of an Interaction Designer (Juhan Sonin) / 20100423.7D.05887.P1 / SML. Creative Commons licensed photo by See-ming Lee via Compfight.

Three books of 2013

Thanks to the New Books Network, I tend to be more clued in on the most recent publications, so two of my recommendations are based on interviews heard there.

A Cultural History of the Atlantic World, 1250-1820 by John K. Thornton is without question a must-read for anyone interested in, well, history. Even though he is not the first, Thornton shows most persuasively how the non-Europeans on both sides of the Atlantic (Africa and the Americas) were full-fledged political partners of the Europeans, who are traditionally seen simply as conquerors with their gunpowder, horses and steel weapons. Thornton shows how these were just a small part of the mix, having almost no impact in Africa and playing a relatively small role in the Americas. In both cases, Europeans succeeded through alliances with local political elites and for centuries simply had no access to vast swathes of both continents.

Raising Germans in the Age of Empire: Youth and Colonial Culture, 1871-1914 by Jeff Bowersox. This book perhaps covers an exceedingly specific topic (compared to the vast sweep of my first pick) but it struck a chord with me. It shows the complex interplay between education, propaganda and their place in the lives of youth and adults alike.

Writing on the Wall: Social Media – the First 2,000 Years by Tom Standage. Standage’s eye-opening book on the telegraph (The Victorian Internet) now has a companion dealing with social communication networks going back to the Romans. Essential corrective to all the gushing paradigm shifters. He doesn’t say there’s nothing new about the Internet but he does say that there’s nothing new about humans. Much lighter reading but still highly recommended.

Books of the Century

This really caught my fancy. I was asked for books that affected me, but I thought more about those that had an impact going beyond the review cycle of a typical book.

Language

Course in General Linguistics by Ferdinand de Saussure, published in 1916. The Course (or Le Cours), published posthumously by Saussure’s students from lecture notes, is the cornerstone of modern linguistics. I think many of the assumptions have been undermined in the past 30-40 years and we are ripe for a paradigm change. But if you talk to a modern linguist, you will still hear much of what Saussure was saying to his students in the early 1900s in Geneva. (Time to rethink the Geneva Convention in language?)

Syntactic Structures by Noam Chomsky, published in 1957. Unlike The Course, which is still worth reading by anyone who wants to learn about language, Syntactic Structures is now mostly irrelevant and pretty much incomprehensible to non-experts. However, it launched the Natural Language Processing revolution and its seeds are still growing (although not in the Chomskyan camp). Its impact may not survive the stochastic turn in NLP but the computational view of language is still with us, for good and for ill.

Metaphors We Live By by George Lakoff and Mark Johnson, published in 1980, while not completely original, kickstarted a metaphor revival of sorts. While, personally, I think Lakoff’s Women, Fire, and Dangerous Things is by far the most important book of the second half of the 20th century, Metaphors We Live By is a good start (please, read the 2003 edition and pay special attention to the Afterword).

Society

The Second Sex by Simone de Beauvoir, published in 1949, marked a turning point in discourse about women. Although the individual insights had been available prior to Beauvoir’s work, her synthesis was more than just a rehashing of gender issues.

Language and Woman’s Place by Robin Tolmach Lakoff, published in 1973, stands at the foundation of how we speak today about women and how we think about gender being reflected in language. I would now quibble with some of the linguistics but not with the main points. Despite the progress, it can still open the eyes of readers today.

The Savage Mind by Claude Lévi-Strauss, published in 1962, was one of the turning points in thinking about modernity, complexity and backwardness. Lévi-Strauss’s quip that philosophers like Sartre were more of a subject of study for him than valuable interlocutors is still with me when I sit in on a philosophy seminar. I read this book without any preparation and it had a profound impact on me that lasts to this day.

Fiction

None of the below are my personal favourites but all have had an impact on the zeitgeist that transcended just the moment.

1984 by George Orwell, published in 1949. Frankly, I can’t stand this book. All of its insight is skin deep and its dystopian vision (while not in all aspects without merit) lacks the depth often attributed to it. There are many sci-fi and fantasy writers who have given the issue of societal control and freedom much more subtle consideration. However, it’s certainly had a more profound impact on general discourse than possibly any other piece of fiction of the 20th century.

The Joke by Milan Kundera, published in 1967, is the only book by Kundera with literary merit (I otherwise find his writing quite flat). Unlike Orwell, Kundera has the capacity to show the personal and social dimensions of totalitarian states. In The Joke he shows both the randomness of dissent and the heterogeneity of totalitarian environments.

The Castle by Franz Kafka, published posthumously in 1926 (or just the collected works of Kafka), has provided a metaphor for alienation for the literati of the next hundred years. I read The Castle first, so for me, more than the others, it illustrates the sense of helplessness and alienation that a human being can experience when faced with the black box of anonymous bureaucracy. Again, I rate this for impact, rather than putting it on a ‘good read’ scale.

My personal favorites would be authors rather than individual works: Kurt Vonnegut, Robertson Davies, James Clavell would make the list for me. I also read reams of genre fiction and fan fiction that can easily stand up next to any of “the greats”. I have no shame and no guilty pleasures. I’ve read most of Terry Pratchett and regularly reread childhood favorites by the likes of Arthur Ransome or Karl May. I’ve quoted from Lee Child and Tom Clancy in academic papers and I’ve published an extensive review of a Buffy the Vampire Slayer fan fiction novel.

Finally, for me, the pinnacle of recent literary achievement is Buffy the Vampire Slayer. I’ve used this as an example of how TV shows have taken over from the Novel as the narrative format addressing the weighty issues of the day, and Buffy is one of the first examples. Veronica Mars is right up there as well, and there are countless others I’d recommend ‘reading’.


Cliches, information and metaphors: Overcoming prejudice with metaphor hacking and getting it back again

Professor Abhijit Banerjee (Photo credit: kalyan3)

“We have to use cliches,” said Professor Abhijit Banerjee at the start of his LSE lecture on Poor Economics. “The world is just too complicated,” he continued. “Which is why it is all the more important that we choose the right cliches.” [I’m paraphrasing here.]

This is an insight at the very heart of linguistics. Every language act we are a part of is an act of categorization. There are no simple unitary terms in language. When I say, “pull up a chair”, I’m in fact referring to a vast category of objects we refer to as chairs. These objects are not identified by any one set of features like four legs, a certain height, or certain ways of using them. There is no minimal set of features that will describe all chairs and just chairs, and not other kinds of objects like tables or pillows.

But chairs don’t stand on their own. They are related to other concepts or categories (and they are really one and the same). There are subcategories like stools and armchairs, containing categories like furniture or man-made objects, and related categories like houses and shops selling objects. All of these categories are linked in our minds through a complex set of images, stories and definitions. But these don’t just live in our minds. They also appear in our conversations. So we say things like, “What kind of a chair would you like to buy?”, “That’s not a real chair”, “What’s the point of a chair if you can’t sit in it?”, “Stools are not chairs.”, “It’s more of a couch than a chair.”, “Sofas are really just big plush chairs, when it comes down to it.”, “I’m using a box for a chair.”, “Don’t sit on a table, it’s not a chair.” Etc.

Categories are not stable and uniform across all people, so we continue having conversations about them. There are experts on chairs, historians of chairs, chair craftsmen, people who buy chairs for a living, people who study the word ‘chair’, and people who casually use chairs. Some more than others. And their sets of stories and images and definitions related to chairs will be slightly different. And they will have had different types of conversations with different people about chairs. All of that goes into a simple word like “chair”. It’s really very simple as long as we accept the complexity for what it is. Philosophers of language have made a right mess of things because they tried to find simplicity where none exists. And, what’s more, where none is necessary.

But let’s get back to cliches. Cliches are types of categories. Or better still, cliches are categories with a particular type of social salience. Like categories, cliches are sets of images, stories and definitions compressed into seemingly simpler concepts that are labelled by some sort of an expression. Most prominently, it is a linguistic expression like a word or a phrase. But it could just as easily be a way of talking, way of dressing, way of being. What makes us likely to call something a cliche is a socially negotiated sense of awareness that the compression is somewhat unsatisfactory and that it is overused by people in lieu of an insight into the phenomenon we are describing. But the power of the cliche is in its ability to help us make sense of a complex or challenging phenomenon. The sense-making, though, is for the benefit of our own cognitive and emotional peace. Just because we can make sense of something doesn’t mean we get the right end of the stick. And we know that, which is why we are wary of cliches. But challenging every cliche would be like challenging ourselves every time we looked at a chair. It can’t be done. Which is why we have social and linguistic coping mechanisms like “I know it’s such a cliche.” “It’s a cliche but in a way it’s true.” “Just because it’s a cliche doesn’t mean it isn’t true.” Just try Googling: “it’s a cliche *”

So we are at once locked into cliches and struggling to overcome them. Like “chair” the concept of a “cliche” as we use it is not simple. We use it to label words, phrases, people. We have stories about how to rebel against cliches. We have other labels for similar phenomena with different connotations such as “proverbs”, “sayings”, “prejudices”, “stereotypes”. We have whole disciplines studying these like cognitive psychology, social psychology, political science, anthropology, etc. And these give us a whole lot of cliches about cliches. But also a lot of knowledge about cliches.

The first one is exactly what this post started with. We have to use cliches. It’s who we are. But they are not inherently bad.

Next, we challenge cliches as much as we use them. (Well, probably not as much, but a lot.) This is something I’m trying to show through my research into frame negotiation. We look at concepts (the compressed and labelled nebulas of knowledge) and decompress them in different ways and repackage them and compress them into new concepts. (Sometimes this is called conceptual integration or blending.) But we don’t just do this in our minds. We do it in public and during conversations about these concepts.

We also know that unwillingness to challenge a cliche can have bad outcomes. Cliches about certain things (like people or types of people) are called stereotypes and particular types of stereotypes are called prejudices. And prejudices by the right people against the right kind of other people can lead to discrimination and death. Prejudice, stereotype, cliche. They are the same kind of thing presented to us from different angles and at different magnitudes.

So it is worth our while to harness the cliche negotiation that goes on all the time anyway and see if we can use it for something good. That’s not a certain outcome. The medieval inquisitions, anti-heresies, racism, slavery and genocides are all outcomes of negotiations of concepts. We mostly only know about their outcomes but a closer look will always reveal dissent and a lot of soul searching. And at the heart of such soul searching is always a decompression and recompression of concepts (conceptual integration). But it does not work in a vacuum. Actual physical or economic power plays a role. Conformance to communal expectations. Personal likes or dislikes. All of these play a role.

George Lakoff (Photo credit: Wikipedia)

So what chance have we of getting the right outcome? Do we even know what is the right outcome?

Well, we have to pick the right cliches says Abhijit Banerjee. Or we have to frame concepts better says George Lakoff. “We have to shine the light of truth” says a cliche.

“If you give people content, they’re willing to move away from their prejudices. Prejudices are partly sustained by the fact that the political system does not deliver much content,” says Banerjee. Prejudices matter in high-stakes contexts. And they are the result of us not challenging the right cliches in the right ways at the right time.

It is pretty clear from research in social psychology, from Milgram on, that giving people information will challenge their cliches but only as long as you also give them sanction to challenge those cliches. Information on its own does not seem to always be enough. Sometimes the contrary information even seems to reinforce the cliche (as we’re learning from newspaper corrections).

This is important. You can’t fool all of the people all of the time. Even if you can fool a lot of them a lot of the time. Information is a part of it. Social sanction for using that information in certain ways is another part of it. And this is not the province of the “elites”: people with the education and a sufficient amount of idle time to worry about such things. There’s ample research to show that everybody is capable of this and engaged in these types of conceptual activities. More education seems to vaguely correlate with less prejudice but it’s not clear why. I also doubt that it does in a very straightforward and inevitable way (a post for another day). It’s more than likely that we’ve only studied the prejudices the educated people don’t like and therefore don’t have as much of.

Banerjee draws the following conclusion from his work uncovering cliches in development economics:

“Often we’re putting too much weight on a bunch of cliches. And if we actually look into what’s going on, it’s often much more mundane things. Things where people just start with the wrong presumption, design the wrong programme, they come in with their own ideology, they keep things going because there’s inertia, they don’t actually look at the facts and design programmes in ignorance. Bad things happen not because somebody wants bad things to happen but because we don’t do our homework. We don’t think hard enough. We’re not open minded enough.”

It sounds very appealing. But it’s also as if he forgot the point he started out with. We need cliches. And we need to remember that out of every challenge to a cliche arises a new cliche. We cannot go around the world with our concepts all decompressed and flapping about. We’d literally go crazy. So every challenge to a cliche (just like every paradigm-shifting Kuhnian revolution) is only the beginning phase of the formation of another cliche, stereotype, prejudice or paradigm (a process well described in Orwell’s Animal Farm, which has in turn become a cliche of its own). It’s fun listening to Freakonomics radio to see how all the cliche busting has come to establish a new orthodoxy. The constant reminders that if you see things as an economist, you see things other people don’t see. Kind of a new witchcraft. That’s not to say that Freakonomics hasn’t provided insights to challenge established wisdoms (a term arising from another perspective on a cliche). It most certainly has. But it hasn’t replaced them with “a truth”, just another necessary compression of a conceptual and social complex.

During the moments of decompression and recompression, we have opportunities for change, however brief. And sometimes it’s just a memory of those events that lets us change later. It took over 150 years for us to remember the French revolution and make of it what we now think of as democracy with a tradition stretching back to ancient Athens. Another cliche. The best of a bad lot of systems. A whopper of a cliche.

So we need to be careful. Information is essential when there is none. A lot of prejudice (like some racism) is born simply of not having enough information. But soon there’s plenty of information to go around. Too much, in fact, for any one individual to sort through. So we resort to complex cliches. And the cliches we choose have to do with our in-groups, chains of trust, etc. as much as they do with some sort of rational deliberation. So we’re back where we started.

Humanity is engaged in a neverending struggle of personal and public negotiation of concepts. We’re taking them apart and putting them back together. Parts of the process happen in fractions of a second in individual minds, parts of the process happen over days, weeks, months, years and decades in conversations, pronouncements, articles, books, polemics, laws, public debates and even at the polling booths. Sometimes it looks like nothing is happening and sometimes it looks like everything is happening at once. But it’s always there.

So what does this have to do with metaphors and can a metaphor hacker do anything about it? Well, metaphors are part of the process. The same process that lets us make sense of metaphors lets us negotiate cliches. Cliches are like little analogies and it takes a lot of cognition to understand them, take them apart and make them anew. I suspect most of that cognition (and it’s always discursive, social cognition) is very much the same that we know relatively well from metaphor studies.

But can we do anything about it? Can we hack into these processes? Yes and no. People have always hacked collective processes by inserting images and stories and definitions into the public debate through advertising, following talking points or even things like push polling. And people have manipulated individuals through social pressure, persuasion and lies. But none of it ever seems to have a lasting effect. There are simply too many conceptual purchase points to lean on in any cliche to ever achieve complete uniformity forever (even in the most totalitarian regimes). In an election, you may only need to sway the outcome by a few percent. If you have military or some other power, you only need to get willing compliance from a sufficient number of people to keep the rest in line through a mix of conceptual and non-conceptual means. Some such social contracts last for centuries, others for decades and some for barely years or months. In such cases, even knowing how these processes work is not much better than knowing how continental drift works. You can use it to your advantage but you can’t really change it. You can and should engage in the process and try to nudge the conversation in a certain way. But there is no conceptual template for success.

But as individuals, we can certainly do quite a bit to monitor our own cognition (in the broadest sense). But we need to choose our battles carefully. Use cliches but monitor what they are doing for us. And challenge the right ones at the right time. It requires a certain amount of disciplined attention and disciplined conversation.

This is not a pessimistic message, though. As I’ve said elsewhere, we can be masters of our own thoughts and feelings. And we have the power to change how we see the world and we can help others along with how they see the world. But it would be foolish to expect the world to be changed beyond all recognition just through the power of the mind. In one way or another, it will always look like our world. But we need to keep trying to make it look like the best possible version of our world. And this will not happen by following some pre-set epistemological route. Doing this is our human commitment. Our human duty. And perhaps our human inevitability. So, good luck to us.


Killer App is a bad metaphor for historical trends, good for pseudoteaching

This map shows the countries in the world that... (Image via Wikipedia)

Niall Ferguson wrote in The Guardian some time ago about how awful history education has become with these “new-fangled” 40-year-old methods like focusing on “history skills”, which lead to kids leaving school knowing “unconnected fragments of Western history: Henry VIII and Hitler, with a small dose of Martin Luther King, Jr.” but not who was the reigning English monarch at the time of the Armada. Instead, he wants history to be taught his way: deep trends leading to the understanding of why the “West” rules and why Ferguson is the cleverest of all the historians that ever lived. He even provided (and how cute is this) a lesson plan!

Now, personally, I’m convinced that the history of historical education teaches us mostly that historical education is irrelevant to the success of current policy. Not that we cannot learn from history. But it’s such a complex source domain for analogies that even very knowledgeable and reasonable people can and do learn the exact opposites from the same events. And even if they learn the “right” things it still doesn’t stop them from being convinced that they can do it better this time (kind of like people in love who think their marriage will be different). So Ferguson’s bellyaching is pretty much an empty exercise. But that doesn’t mean we cannot learn from it.

Ferguson, who is a serious historian of financial markets, didn’t just write a whiney column for the Guardian, he wrote a book called Civilization (I’m writing a review of it and a few others under the joint title “Western Historiographical Eschatology” but here I’ll only focus on some aspects of it) and is working on a computer game and teaching materials. To show how seriously he takes his pedagogic mission and possibly also how hip and with it he is, Ferguson decided to not call his historical trends trends but rather “killer apps”. I know! This is so laugh out loud funny I can’t express my mirth in mere words:))). And it gets even funnier as one reads his book. As a pedagogical instrument this has all the practical value of putting a spoiler on a Fiat. He uses the term about 10 times (it’s not in the index!) throughout the book including one or two mentions of “downloading” when he talks about the adoption of an innovation.

Unfortunately for Ferguson, he wrote his book before the terms “pseudocontext” and “pseudoteaching” made an appearance in the edublogosphere. And his “killer apps” and the lesson plan based on them are a perfect example of both. Ferguson wrote a perfectly serviceable and eminently readable historical book (even though it’s a bit of a tendentious mishmash). But it is still a historical book written by a historian. It’s not particularly stodgy or boring but it’s no different from myriad other currently popular historical books that the “youth of today” don’t give a hoot about. He thinks (bless him) that using the language of today will have the youth flocking to his thesis like German princes to Luther. Because calling historical trends “killer apps” will bring everything into clear context and make all the convoluted syntax of even the most readable history book just disappear! This is just as misguided as thinking that talking about men digging holes at different speeds will make kids want to do math.

What makes it even more precious is that the “killer app” metaphor is wrong. For all his extensive research, Ferguson failed to look up “killer app” on Wikipedia or in a dictionary. There he would have found out that it doesn’t mean “a cool app” but rather an application that confirms the viability of an existing platform whose potential may have been questioned. There have only been a handful of killer apps. The one undisputed killer app was VisiCalc, which all of a sudden showed how an expensive computer could simplify the process of financial management through electronic spreadsheets and therefore pay for itself. All of a sudden, personal computers made sense to the most important people of all, the money counters. And thus the personal computer revolution could begin. A killer app proved that a personal computer is useful. But the personal computer had already existed as a platform when VisiCalc appeared.

None of Ferguson’s “killer apps” of “competition, science, property rights, medicine, consumer society, work ethic” are this sort of a beast. They weren’t something “installed” in the “West” which then proved its viability. They were something that, according to Ferguson anyway, made the West what it was. In that they are more equivalent to the integrated circuit than to VisiCalc. They are the “hardware” that makes up the “West” (as Ferguson sees it), not the software that can run on it. The only possible exception is “medicine”, or more accurately “modern Western medicine”, which could be the West’s one true “killer app”, showing the viability of its platform for something useful and worth emulating. Also, “killer apps” required a conscious intervention, whereas all of Ferguson’s “apps” were something that happened on its own in a myriad of disparate processes – we can only see them as one thing now.

But this doesn’t really matter at all. Because Ferguson, as so many people who want to speak the language of the “young people”, neglected to pay any attention whatsoever to how “young people” actually speak. The only people who actually use the term “killer app” are technology journalists or occasionally other journalists who’ve read about it. I did a quick Google search for “killer app” and did not find a single non-news reference where somebody “young” would discuss “killer apps” on a forum somewhere. That’s not to say it doesn’t happen but it doesn’t happen enough to make Ferguson’s work any more accessible.

This overall confusion is indicative of Ferguson’s book as a whole, which is definitely less than the sum of its parts. It is full of individual insight and a fair amount of wit but it flounders in its synthetic attempts. Not all his “killer apps” are of the same type; some follow from the others and some don’t appear to be anything more than Ferguson’s wishful thinking. They certainly didn’t happen on one “platform” – some seem the outcome rather than the cause of “Western” ascendancy. Ferguson’s just all too happy to believe his own press. At the beginning he talks about early hints around 1500 AD that the West might achieve ascendancy but at the end he takes a half millennium of undisputed Western rule for granted. But in 1500, “the West” still had 250 years to go before the start of the industrial revolution, 400 years before modern medicine, 50 years before Protestantism took serious hold and at least another 100 before the Protestant work ethic kicked in (if there really is such a thing). It’s all over the place.

Of course, there’s not much innovative about any of these “apps”. It’s nothing a reader of the Wall Street Journal editorial page couldn’t come up with. Ferguson does a good job of providing interesting anecdotes to support his thesis but each of his chapters meanders around the topic at hand with a smattering of unsystematic evidence here and there. Sometimes the West is contrasted with China, sometimes the Ottomans, sometimes Africa! It is hard to see how his book can help anybody’s “chronological understanding” of history that he’s so keen on.

But most troublingly, it seems in places that he mostly wrote the book as a carrier for ultra-conservative views that would make his writing more suitable for The Daily Mail than the Manchester Pravda: “the biggest threat to Western civilization is posed not by other civilizations, but by our own pusillanimity” – unless of course it is the fact that “private property rights are repeatedly violated by governments that seem to have an insatiable appetite for taxing our incomes and our wealth and wasting a large portion of the proceeds”.

Panelist Economic Historian Niall Ferguson at ... (Image via Wikipedia)

It’s almost as if the “civilized” historical discourse was just a veneer that peels off in places and reveals the real Ferguson, a comrade of Pat Buchanan whose “The Death of the West” (the Czech translation of which screed I was once unfortunate enough to review) came from the same dissatisfaction with the lack of our confidence in the West. Buchanan also recommends teaching history – or more specifically, lies about history – to show us what a glorious bunch of chaps the leaders of the West were. Ferguson is too good a historian to ignore the inconsistencies in this message and a careful reading of his book reveals enough subtlety not to want to reconstitute the British Empire (although the yearning is there). But the Buchananian reading is available and in places it almost seems as if that’s the one Ferguson wants readers to go away with.

From metaphor to fact, Ferguson is an unreliable thinker flitting between insight, mental shortcut and unreflected cliche with ease. Which doesn’t mean that his book is not worth reading. Or that his self-serving pseudo-lesson plan is not worth teaching (with caution). But remember I can only recommend it because I subscribe to that awful “culture of relativism” that says that “any theory or opinion, no matter how outlandish, is just as good as whatever it was we used to believe in.”

Update 1: I should perhaps point out that I think Ferguson’s lesson plan is pretty good, as such things go. It gives students an activity that engages a number of cognitive and affective faculties rather than just relying on telling. Even if it is completely unrealistic in terms of the amount of time allocated and the objectives set. “Students will then learn how to construct a causal explanation for Western ascendancy” is an aspiration, not a learning objective. Also, it and the other objectives really rely on the “historical skills” he derides elsewhere.

The lesson plan comes apart at about point 5, where the really cringeworthy part kicks in. Like in his book, Ferguson immediately assumes that his view is the only valid one – so instead of asking the students to compare two different perspectives on why the world looked like it did in 1913 as opposed to 1500 (or even compare maps at strategic moments), he simply asks them to come up with reasons why his “killer apps” are right (and to use evidence while they’re doing it!).

I also love his aside: “The groups need to be balanced so that each one has an A student to provide some kind of leadership.” Of course, there are shelves full of literature on group work – and pretty much all of it comes from the same sort of people who’re likely to practice “new history” – Ferguson’s nemesis.

I don’t think using Ferguson’s book and materials would do any more damage than using any other history book. Not what I would recommend, but who cares. I recently spent some time at Waterstone’s browsing through modern history textbooks and I think they’re excellent. They provide far more background to events and present them in a much more coherent picture than Ferguson. They perhaps don’t encourage the sort of broad synthesis that has been the undoing of so many historians over the centuries (including Ferguson) but they demonstrate working with evidence in a way he does not.

The reason most people leave school not knowing facts and chronologies is that they don’t care, not that they don’t have an opportunity to learn. And this level of ignorance has remained constant over decades. At the end of the day, history is just a bunch of stories not that different from what you see on a soap opera or in a celebrity magazine, just not as relevant to one’s peer group. No amount of “killer applification” is going to change this. What remains at the end of historical education is a bunch of disconnected images, stories and conversation pieces (as many of them about the tedium of learning as about its content). But there’s nothing wrong with that. Let’s not underestimate the ability of disinterested people to become interested and start making the connections and filling in the gaps when they need to. That’s why all these “after-market” history books like Ferguson’s are so popular (even though for most people they are little more than tour guides to the exotic past).

Update 2: By a fortuitous coincidence, an announcement of the release of George L. Mosse’s lectures on European cultural history: http://history.wisc.edu/mosse/george_mosse/audio_lectures.htm came across my news feeds. I think it is important to listen to these side by side with Ferguson’s seductively unifying approach to fully realize the cultural discontinuity in so many key aspects between us and the creators of the West. Mosse’s view of culture, as his Wikipedia entry reads, was as “a state or habit of mind which is apt to become a way of life”. The practice of history, after all, is a culture of its own, with its own habits of mind. In a way, Ferguson is asking us to adopt his habits of mind as our way of life. But history is much more interesting and relevant when it is, as Mosse’s colleague Harvey Goldberg put it on this recording, a quest after a “usable past” spurred by our sense of the “present crisis” or “present struggle”. So maybe my biggest beef with Ferguson is that I don’t share his justificationist struggle.


The natural logistics of life: The Internet really changes almost nothing

Cover of “You’ve Got Mail”

This is a post that has been germinating for a long time. But it was most immediately inspired by Marshall Poe‘s article claiming that “The Internet Changes Nothing“. And as it turns out, I mostly agree.

OK, this may sound a bit paradoxical. Twelve years ago, when I submitted my first column to be published, I delivered the text to my editor on a diskette. Now, I don’t even have an editor (or at least not for this kind of writing). I just click a button and my text is published. But! If my server logs are to be trusted, it will be read by tens or at best hundreds of people over its lifetime. That’s more than if I’d just written some notes to myself or published it in an academic journal, but much less than if I published it in a national daily with a readership of hundreds of thousands. Not all of them would read what I write, but more would than on this blog.
So while democratising the publishing industry has worked for Kos, Huffington and many others, many more blogs still languish in obscurity. I can say anything I want but my voice matters little in the cacophony.

In terms of addressing an audience and having a voice, the internet has done little for most people. This is not because not enough people have enough to say but because there’s only so much content the world can consume. There is a much longer tail trailing behind Clay Shirky’s long tail. It’s the tail of 5-post 0-comment blogs and YouTube videos with 15 views. Even millions of typewriter-equipped monkeys with infinities of time can’t get to them all. Plus it’s hard to predict what will be popular (although educated guesses can produce results in aggregate). Years ago I took a short clip with my stills camera of a blacksmith friend of mine making a candle-holder. It’s had 30 thousand views on YouTube. Why, I don’t know. There’s nothing particularly exciting about it but there must be some sort of a long tail longing after it. None of the videos I hoped would take off did. This is the experience of many if not most. Most attempts at communities fail because the people starting them don’t realize how hard it is to nurture them to self-sustainability. I experienced this with my first site, Bohemica.com. It got off to a really good start but since it was never my primary focus, the community kind of dissipated after a site redesign that was intended to foster it.

Just in terms of complete democratization of expression, the internet has done less for most than it may appear. But how about the speed of communication? I’m getting ready to do an interview with someone in the US, record it, transcribe it and translate it – all within a few days. The Internet (or more accurately Skype) makes the calling cheap, and the recording and transcription are made much quicker by tools I didn’t have access to even in the early 2000s when I was doing interviews. And of course, I can get the published product to my editor in minutes via email. But what hasn’t changed is the process. The interview, transcription and translation take pretty much the same amount of time. The work of agreeing with the editor on the parameters of the interview and arranging it with the interviewee takes pretty much as long as before. As does preparation for the interview. The only difference is the speed and ease of the transport of information from me to its target and me to the information. It’s faster to get to the research subject – but the actual research still takes about the same amount of time, limited by the speed of my reading and the speed of my mind.

A chain is only as strong as its weakest link. And as long as humans are a part of the interface in a communication chain, the communication will happen at a human speed. I remember sitting over a printout of an obscure 1848 article on education from JSTOR with an academic who started doing research in the 1970s and reminiscing how in the old days he’d have had to get on the train to London to see a thing like this in the British Library, or at least arrange a protracted interlibrary loan. On reflection this is not as radical a change as it may seem. Sure, the information took longer to get to you. But people before the internet didn’t just sit around waiting for it. They had other stuff to read (there’s always more stuff to read than time) and writing to get on with in the meantime. I don’t remember anyone claiming that modern scholarship is any better than scholarship from the 1950s because we can get information faster. I’m as much in awe of some of the accomplishments of the scholars of the 1930s as of people doing research now. And just as disdainful of others from any period. When reading a piece of scholarly work, I never care about the logistics of the flow of information that was necessary for the work to be completed (unless of course it impinges on the methodology – where modern scholars are just as likely to take preposterous shortcuts as ancient ones). During the recent Darwin frenzy, we heard a lot about how he was the communication hub of his time. He was constantly sending and receiving letters. Today, he’d have Twitter and a blog. Would he somehow achieve more? No, he’d still have to read all those research reports and piddle about with his worms. And it’s just as likely he’d miss that famous letter from Brno.

Of course, another fallacy we like to commit is assuming that communication in the past was simply communication today minus the internet (or telephone, or name your invention). But that’s nonsense. I always like to remind people that “You’ve Got Mail”, in which Tom Hanks and Meg Ryan meet and fall in love online, is a remake of a 1940s film in which the protagonists sent each other letters. But these often arrived the same day (particularly in the same city). There were many more messenger services, pneumatic tubes, and a reliable postal service. As the Internet takes over the burden of information transmission, these are either disappearing or deteriorating, but that doesn’t mean that’s the state they were in when they were the chief means of information transmission. Before there were photocopiers and faxes, there were copyists and messengers (and both were pretty damn fast). Who even sends faxes now? We like to claim we get more done with the internet, but take just one step back and this claim loses much of its appeal. Sure, there are things we can do now that we couldn’t do before, like attend a virtual conference or a webinar. That’s true and it’s really great. But what would the us of the 1980s have done? No doubt something very similar, like buying video tapes of lectures or attending Open Universities. And the us of the 1960s? Correspondence courses and pirate radio stations. We would have had far less choice but our human endeavor would have been roughly the same. The us of the 1930s, 1730s or 330s? That’s a more interesting question but nobody’s claiming that the internet changed the us of those times. We mostly think of the Internet as changing the human condition as compared to the 1960s or 1980s. And there the technology changes have far outstripped the changes in human activity.

If it’s not true that the internet has enabled us to get things done in a qualitatively different manner on a personal level, it’s even less true that it has made a difference at the level of society. There are simply so many things involved and they take so much time because humans and human institutions are involved. Let’s take the “Velvet Revolution” of 1989, in which I was an eager if extremely marginal participant. On Friday, November 17 a bunch of protesters got roughed up, on November 27 a general strike was held and on December 10 the president resigned. In Egypt, the demonstrations started on January 25, lots of stuff happened, and on February 11 the president resigned. The Egyptians have the Czechs beat in their demonstration-to-resignation time by six days (17 v. 23). This was the “Twitter” revolution. We didn’t even have mobile phones. Actually, we mostly didn’t even have phones. Is that what all this new global infrastructure has gotten us? Six days off on the toppling of a dictator? Of course not. Twitter made no difference to what was happening in Egypt, at all, when compared to other revolutions. If anything, Al Jazeera played a bigger role. But on the ground, most people found out about things by being told by someone next to them. Just like we did. We even managed to bring the international media up to speed pretty quickly, which could be argued is the main thing Twitter has done in the “Arab Spring” (hey, another thing the Czechs did and failed at).
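For the record, the day counts above are just simple date arithmetic. A quick sketch using the dates as given in the text:

```python
from datetime import date

# Prague: first protest 17 November 1989, president resigned 10 December 1989
velvet = (date(1989, 12, 10) - date(1989, 11, 17)).days

# Cairo: protests began 25 January 2011, president resigned 11 February 2011
egypt = (date(2011, 2, 11) - date(2011, 1, 25)).days

print(velvet, egypt, velvet - egypt)  # 23 17 6
```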

Malcolm Gladwell got a lot of criticism for pointing out the same thing. But he’s absolutely right:

“high risk” social activism requires deep roots and strong ties http://www.newyorker.com/online/blogs/newsdesk/2011/02/does-egypt-need-twitter.html

And while these ties can be established and maintained purely virtually, it takes a lot more than a few tweets to get people moving. Adam Weinstein adds to Gladwell’s example:

Anyone who lived through 1989 or the civil rights era or 1967 or 1956 knows that media technology is not a motive force for civil disobedience. Arguing otherwise is not just silly; it’s a distraction from the real human forces at play here.
http://motherjones.com/mojo/2011/02/malcolm-gladwell-tackles-egypt-twitter

Revolutions simply take their time. On paper, the Russian October Revolution of 1917 took just a day to topple the regime (as did so many others). But there were a bunch of unsuccessful revolutions prior to that and, of course, a bloody civil war that lasted for years afterwards. To fully institutionalize its aims, the Russian revolution could be said to have taken decades and millions dead. Even in ancient times, sometimes things moved very quickly (and always more messily than we can retell the story). The point about revolutions and wars is that they don’t move at the speed of information but at the speed of a fast-walking revolutionary or soldier. Ultimately, someone has to sit in the seat where the buck stops, and they can only get there so fast even with jets, helicopters and fast cars. Such are the natural logistics of human communal life.

This doesn’t mean that the speed or manner of communication doesn’t have some implications where logistics are concerned. But their impact is surprisingly small and easily absorbed by the larger concerns. In The Victorian Internet, Tom Standage describes how warship manifests could no longer be published in The Times during the Crimean War because they could be telegraphed to the enemy faster than the ships could get there (whereas in the past, a spy’s message would be no faster than the actual ships). Also, betting and other financial establishments had to make adjustments so as not to let the speed of information get in the way of making profit. But if we compare the 1929 financial crisis with the one in 2008, we see that the speed of communication made little difference to the overall medium-term shape of the economy. Even though in 2008 we were getting up-to-the-second information about the falling banking houses, the key decisions about support or otherwise took about the same amount of time (days). Sure, some stock trading is now done to the fraction of a second by computers because humans simply aren’t fast enough. But the economy still moves at about the same pace – the pace of lots and lots of humans shuffling about through their lives.

As I said at the start, although this post has been brewing in me for a while, it was most immediately inspired by that of Marshall Poe (of New Books in History) published about six months ago. What he said has become no less relevant with the passage of time.

Think for a moment about what you do on the Internet. Not what you could do, but what you actually do. You email people you know. In an effort to broaden your horizons, you could send email to strangers in, say, China, but you don’t. You read the news. You could read newspapers from distant lands so as to broaden your horizons, but you usually don’t. You watch videos. There are a lot of high-minded educational videos available, but you probably prefer the ones featuring, say, snoring cats. You buy things. Every store in the world has a website, so you could buy all manner of exotic goods. As a rule, however, you buy the things you have always bought from the people who have always sold them.

This is easy to forget. We call online shopping and food delivery a great achievement. But having shopping delivered was always an option in the past (and much more common than now when delivery boys are more expensive). Amazon is amazing but still just a glorified catalog.

But there are revolutionary inventions that nobody even notices. What about the invention of the space between words? None of the ancients bothered to put spaces between words or, in general, to read silently. It has been estimated that putting spaces between words not only allowed for silent reading (a highly suspicious activity until the 1700s) but also sped up reading by about 30%. Talk about a revolution! I’m a bit skeptical about the 30% number, but still, nobody talks about it. We think of audio books as a post-Edison innovation, but in fact all reading was partly listening not too long ago. Another forgotten invention is that of the blackboard, which made large-volume dissemination of information much more feasible through a simple reconfiguration of space and attention between pupil and teacher.


David Weinberger recently wrote what was essentially a poem about hypertext (a buzzword I haven’t heard for a while):

The old institutions were more fragile than we let ourselves believe. They were fragile because they made the world small. A bigger truth burst them. The world is more like a messy, inconsistent, ever-changing web than like a curated set of careful writings. Truth burst the world made of atoms.

Yes, there is infinite space on the Web for lies. Nevertheless, the Web’s architecture is a better reflection of our human architecture. We embraced it as if it were always true, and as if we had known it all along, because it is and we did.
http://www.hyperorg.com/blogger/2011/05/01/a-big-question

It is remarkable how right and wrong he can be at the same time. Yes, the web is more of a replication of the human architecture. It has some notable strengths (lack of geographic limitation, speed of delivery of information) and weaknesses (no internal methods for the exchange of tangible goods, relatively limited methods for synchronous face-to-face communication). I’d even go as far as calling the Internet “computer-assisted humanity”. But that just means that nothing about human organization online is a radical transformation of humanity offline.

What on Earth makes Weinberger think that the “existing institutions were fragile”? If anything, they proved extremely robust. I find The Cluetrain Manifesto extremely inspiring and in many ways moving. But I find “The Communist Manifesto” equally profound without wanting to live in a world governed by it. “The Communist Manifesto” got the description of the world as it is perfectly right. Pretty much every other paragraph in it applies just as much today as it did then. But the predictions offered in the other paragraphs can really cause nothing but laughter today. “The Cluetrain Manifesto” gave the same kind of expression to the frustration with the Dilbert world of big corporations and asked for our humanity back. They were absolutely right.

Markets can be looked at as conversations and the internet can facilitate certain kinds of conversation. But they were wrong in assuming that there is just one kind of conversation. There are all sorts of group symbolic and ritualized conversations that make the world of humans go around. And they have never been limited just to the local markets. In practical terms, I can now complain about a company on a blog or in a tweet. And these can be viewed by others. But since there’s an Xsuckx.com website for pretty much all major brands, the incentive for companies to be responsive to this is relatively small. I have actually received some response to complaints from companies on Twitter. But only once did it lead to the resolution of the problem. Then again, Twitter is still a domain of “the elite”, so it pays companies to appease people there. However, should it reach the level of ubiquitous obscurity that many pages have, it will become even less appealing due to the lack of permanence of Tweets.

The problem is that large companies with large numbers of customers can only scale if they keep their interaction with those customers at certain levels. It was always thus and will always remain so. Not because of intrinsic attitudes but because of configurational limitations of time and human attention. Even the industrially oppressed call-center operator can only deal with about 10 customers an hour. So you have to work some 80/20 cost checks into customer support. Most of any company’s interaction with their customers will be one-to-many and not one-on-one. (And this incidentally holds for communications about the company by customers.)

There’s a genre of conversations in the business and IT communities that focuses on ‘why is X successful’. Ford of the 1920s, IBM of the 1960s, Apple of the 2000s. The constant in these conversations is the wilful effort of projecting the current conventional wisdom about business practices onto what companies do and used to do. This often requires significant reimagining of the present and the past. Leo Laporte and Paul Thurrott recently had a conversation (http://twit.tv/ww207) in which they were convinced that companies that interact and engage with their customers will be successful. But why then, one of them asks, is Microsoft, whose employees blog all the time, not more successful than Apple, which couldn’t be more tight-lipped about its processes and whose attitude to customers is very much take it or leave it? Maybe it’s the Apple Store, one of them comments. That must be it. That engages the crap out of Apple’s customers. But neither of them asked: what was the problem with traditional stores, then? What is the point of the internet? The problem is that, as with any metaphoric projection, the customer engagement metaphor is just partial. It’s more a way for us to grapple with processes that are fairly stable at the macro institutional level (which is the one I’m addressing here), but basically chaotic at the level of individual companies or even industries.

So I agree with Marshall Poe about the amount of transformation going on:

As for transformative, the evidence is thin. The basic institutions of modern society in the developed world—representative democracy, regulated capitalism, the welfare net, cultural liberalism—have not changed much since the introduction of the Internet. The big picture now looks a lot like the big picture then.

Based on my points above, I would even go as far as to argue that the basic institutions have not changed at all. Sure, foreign ministries now give advisories online, taxes can be paid electronically and there are new agencies that regulate online communication (ICANN) as well as old ones with new responsibilities. But as we read the daily news, can we perceive any new realities taking place? New political arrangements based on this new and wonderful thing called the Internet? No. If you read a good spy thriller from the 80s and one taking place now, you can hardly tell the difference. They may have been using payphones instead of the always on mobile smart devices we have now but the events still unfold in pretty much the same way: people go from place to place and do things.

Writing, print, and electronic communications—the three major media that preceded the Internet—did not change the big picture very much. Rather, they were brought into being by major historical trends that were already well underway, they amplified things that were already going on.

Exactly! If you read about the adventures of Sinuhe, it doesn’t seem that different from something written by Karl May or Tom Clancy. Things were happening as they were and whatever technology was available to be used, was used as well as possible. Remember that the telephone was originally envisioned to be a way of attending the opera – people calling in to a performance instead of attending live.

As a result, many things that happened could not have happened exactly in the same way without the tools of the age being there. The 2001 portion of the war in Afghanistan certainly would have looked different without precision bombing. But now in 2011 it seems to be playing out pretty much along the same lines experienced by the Brits and the Soviets. Meaning: it’s going badly.

The role of TV imagery in the ending of the Vietnam war is often remarked on. But that’s just coincidental. There have been plenty of unpopular wars that were ended because the population refused to support them and they were going nowhere. Long before the “free press”, the First Punic War was getting a bad rep at home. Sure, the government could have done a better job of lying to the press and its population, but that’s hard to do when you have a draft. It didn’t work for Ramses II when he got his ass handed to him at Kadesh and didn’t ultimately work for the Soviet misadventure in Afghanistan. The impact of the TV images can easily be overestimated. The My Lai Massacre happened in 1968, when the war was at about its mid-point. It still took two presidential elections and one resignation before it was really over. It played a role, but if the government had wanted, it could have kept the war going.

Communications tools are not “media” in the sense we normally use the word. A stylus is not a scriptorium, movable type is not a publishing industry, and a wireless set is not a radio network. In order for media technologies to become full-fledged media, they need to respond to some big-picture demand.

It is so easy to confuse the technology with the message. On brief reflection, the McLuhan quote we all keep repeating like sheep is really stupid. The medium is the medium and the message is the message. Sometimes they are so firmly paired we can’t tell them apart, sometimes they have nothing in common. What is the medium of this message? HTML, the browser, your screen, a blog post, the Internet, TCP/IP, Ethernet? They’re all involved in the transmission. We can choose whether we pay attention to some of them. If I’d posted somebody a parchment with this on it, it would certainly add to the message or become a part of it. But it still wouldn’t BE the message! Lots of artists, like Apollinaire with his calligrams, actually tried to blend the message and the medium in all sorts of interesting ways. But it was hard work. Leo Laporte (whose podcasts I enjoy listening to greatly) spent a lot of time trying to displace ‘podcast’ with ‘netcast’ to avoid an association with the medium. He claimed that his shows are not ‘podcasts’ but ‘shows’, i.e. real content. Of course, he somehow missed the fact that we don’t listen to programs but to the radio and don’t view drama but rather watch TV. The modes of transmission have always been associated with the message – including the word “show” – until they weren’t. We don’t mean anything special now when we say we ‘watch TV’.

Of course, the mode of transmission has changed how the “story” is told. Every new medium has always first tried to emulate the one it was replacing but ultimately found its own way of expression. But this is no different to other changes in styles. The impressionists were still using the same kinds of paints and canvasses, and modernist writers the same kind of inks and books. Every message exists in a huge amount of context and we can choose which of it we pay attention to at any one time. Sometimes the medium becomes a part of the context, sometimes it’s something else. Get over it!

There are some things Marshall Poe says I don’t agree with. I don’t think we need to reduce everything to commerce (as he does – perhaps having imbibed too much of Marxist historiography). But most importantly I don’t agree when he says that the Internet is mature in the same way that TV was mature in the 1970s. Technologies take different amounts of time to mature as widespread consumer utilities. It is always on the order of decades and sometimes centuries but there is no set trajectory. TV took less time than cars, planes took longer than TV, cars took longer than the Internet. (All depending on how we define mature – I’m mostly talking about wide consumer use – i.e. when only oddballs don’t use it and access is not disproportionately limited by socioeconomic status). The problem with the Internet is that there are still enough people who don’t use it and/or who can’t access it. In the 1970s, the majority had TVs or radios which were pretty much equivalent as a means of access to information and entertainment. TV was everywhere but as late as the 1980s, the BBC produced radio versions of its popular TV shows (Dad’s Army, All Gas and Gaiters, etc.) The radio performance of Star Wars was a pretty big deal in the mid-80s.

There is no such alternative access to the Internet. Sure, there are TV shows that play YouTube clips and infomercials that let you buy things. But it’s not the experience of the internet – more like a report on what’s on the Internet.

Even people who did not have TVs in the 1970s (both globally and nationally) could readily understand everything about their operation (later jokes about programing VCRs aside). You pushed a button and all there was to TV was there. Nothing was hiding. Nothing was trying to ambush you. People had to get used to the idiom of the TV, learn to trust some things and not others (like advertising). But the learning curve was flat.

The internet is more like cars. When you get in one, you need to learn lots of things from rules of manipulation to rules of the road. Not just how to deal with the machinery but also how to deal with others using the same machinery. The early cars were a tinkerer’s device. You had to know a lot about cars to use cars. And then at some point, you just got in and drove. At the moment, you still have to know a lot about the internet to use it. Search engines, Facebook, the rules of Twitter, scams, viruses. That intimidates a lot of people. But less so now than 10 years ago. Navigating the Internet needs to become as socially common place as navigating traffic in the street. It’s very close. But we’re not quite there yet on the mass level.

Nor do I believe that the business models on the Internet are as settled as they were with TV in the 1970s. Least of all the advertising model. Amazon’s, Google’s and Apple’s models are done – subject to normal developments. But online media are still struggling as are online services.

We will also see significant changes with access to the Internet going mobile as well as the increasing speed of access. There are still some possible transformations hiding there – mostly around video delivery and hyper-local services. I’d give it another 10 years (20 globally). By then the use of the internet will be a part of the everyday idiom in a way that it still isn’t quite now (although it is more than in 2001). But I don’t think the progress will go unchecked. The prospect of flying cars ran into severe limitations of technology and humanity. After 2021, I would expect the internet to continue changing under the hood (just like cars have since the 1960s) but not much in the way of its human interface (just like cars since the 1960s).

There are still many things that need working out. The role of social media (like YouTube) and social networking (like Facebook). Will they put a layer on top of the internet or just continue being a place on the internet? And what business models other than advertising and in-game purchases will emerge? Maybe none. But I suspect that the Internet has about a decade of maturing to go before it reaches the shape in which it will be recognisable in 2111. Today, cars from the 1930s don’t quite look like cars but those from the 1960s do. In this respect, I’d say the internet is somewhere in the 1940s or 50s – in usability, ubiquity, accessibility and overall shape.

The most worrying thing about the future of the internet is a potential fight over the online commons. One possible development is that significant parts of the online space will become proprietary with no rights of way. This is not just net-neutrality but a possible consequence of the lack of it. It is possible that in the future so many people will only access the online space to take advantage of proprietary services tied to their connection provider that they may not even notice that at first some and later on most non-proprietary portions of the internet are no longer accessible. It feels almost unimaginable now but I’m sure people in 16th century East Anglia never thought their grazing commons would disappear (http://www.eh-resources.org/podcast/podcast2010.html). I’m not suggesting that this is a necessary development. Only that it is a configurational possibility.

As I’m writing this, a tweet just popped up on my screen mentioning another shock in Almaty, a place where I spent a chunk of time and where a friend of mine is about to take up a two-year post. I switch over to Google and find no reports of destruction. If not for Twitter, I may not have even heard about it. I go on Twitter and see people joking about it in Russian. I sort of do my own journalism for a few minutes, gathering sources. How could I still claim that the Internet changes nothing? Well, I did say “almost”. Actually, for many individuals the Internet changes everything. They (like me) get to do jobs they wouldn’t, find out things they couldn’t and talk to people they shouldn’t. But it doesn’t change (or even augment) our basic flesh-bound humanity. Sure, I know about something that happened somewhere I care about that I otherwise wouldn’t. But there’s nothing more I can do about it. I did my own news gathering about as fast as it would have taken to listen to a BBC report on it (I’ve never had a TV, and now I only listen to live radio in the mornings). I can see some scenarios where the speed would be beneficial, but when speed is not possible, we adjust our expectations. I first visited Kazakhstan in 1995, and although I had access to company email, my mother learned what was happening at the speed of a postcard. And just the year before, during my visit to Russia, I got to send a few telegrams. You work with what you have.

All the same, the internet has changed the direction my life has taken since about 1998. It allowed me to fulfil my childhood dream of sailing on the Norfolk Broads, and just yesterday it helped me learn a great new blues lick on the guitar. It gives me reading materials, a place to share my writing, and brings me closer to people I otherwise wouldn’t have heard of. It gives me podcasts like the amazing New Books in History or the China History Podcast! I love the internet! But when I think about my life before the internet, I don’t feel it was radically different. I can point to a lot of individual differences, but I don’t have a sense of a pre-Internet me and a post-Internet me. And equally, I don’t think there will be a pre-Internet and a post-Internet humanity. One of the markers of the industrial revolution is said to be its radical transformation of the shape of things: a person of 1750 would still have recognized the shape of the country as it was in 1500, but a person in 1850 would no longer have. I wonder if this is a bit too simplistic. I think we need to bring more rigor to the investigation of human contextual identity and embeddedness in the environment. But that is outside the scope of this essay.

It is too tempting to use technologies as a metaphor for expressing our aspirations. We do this (and have always done so) through poetry, polemic, and prose. Our depictions of what we imagine the Internet society to be like appear in lengthy essays or chance remarks. They are carried even in tiny words like “now” when judiciously deployed. But sadly, exactly the same aspirations of freedom and universal sisterhood were attached to all the preceding communication technologies as well: print, the telegraph, and TV. Our aspirations aren’t new. Our attachment to projecting these aspirations onto the world around us is likewise ancient. Even automated factory production has been hailed by poets as beautiful. And it is. We always live in a future just about to come, with regrets about a past that never was. But our prosaic present seems never to really change who we are. Humans, for better or worse.


Life expectancy and the length and value of life: On a historical overimagination


About 10 years ago, I was looking through a book on population changes in the Czech lands. It consisted of pretty much just tables of data with little commentary. But I was shocked when I came across the life expectancy charts: not at how short people’s lives had been, but at how long. The headline figure of life expectancy in the early 1800s was pretty much in line with expectations (I don’t have the book to hand, but it was in the high 30s or low 40s). How sad, I thought. So many people died in their 40s before they could experience life in full. But unlike most comparisons reporting life expectancy, this one went beyond the overall average, and it was the additional figures that shocked me. It turns out the extremely short life expectancy only applies right at birth. Once you made it to 10, you had a pretty good chance of making it into your late 50s, and at 30, your chances of getting your ‘threescore and ten’ were getting pretty good. The problem is that life expectancy at birth really measures child mortality, not the typical lives of adults.

You can see from this chart (http://www.infoplease.com/ipa/A0005140.html) that in 1850, US life expectancy at birth was a shocking 38 years. But that does not mean there were a lot of 38-year-olds around dying. If you made it to 10, your life expectancy was 58 years, and at 30, it was 64 years. Now, these are averages, so in principle, for any age cohort, exactly half the people could die at the start of it and exactly half at the end. But after a certain age, that was not the case. Remember, a population where exactly half the people born die at or near birth (age 0) and exactly half live to be 60 will have an average life expectancy of 30. If you reduce child mortality to 10%, the average life expectancy rises to 54. If you reduce it to 1%, it becomes 59.4 years. Most people still die at sixty, but very few die at 1. Massive gains from reducing child mortality make no difference to the end of life.
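The arithmetic above can be sketched in a few lines. This is a deliberately toy model, assuming everyone who survives infancy dies at exactly 60, just as in the worked example:

```python
def life_expectancy_at_birth(child_mortality, adult_age_at_death=60):
    """Average age at death for a birth cohort where a fraction
    die at age 0 and everyone else dies at adult_age_at_death."""
    return child_mortality * 0 + (1 - child_mortality) * adult_age_at_death

# Reproduce the figures from the text: 50%, 10% and 1% child mortality.
for rate in (0.50, 0.10, 0.01):
    e0 = life_expectancy_at_birth(rate)
    print(f"child mortality {rate:.0%}: life expectancy at birth {e0:.1f} years")
# child mortality 50%: life expectancy at birth 30.0 years
# child mortality 10%: life expectancy at birth 54.0 years
# child mortality 1%: life expectancy at birth 59.4 years
```

The point the model makes is that the headline number moves from 30 to 59.4 even though not a single adult lives a day longer.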

In reality, as the US charts show, life expectancy at birth doubled, but life expectancy at 10 went up by only about a third. That is still a significant gain, but it shows a much different profile of life span than the headline figure would have us believe. It was not unusual to live into the late 50s and early 60s, and there were still a large enough number of people who lived into their 70s and 80s. There are exceptions, of course: during famines, epidemics and wars, and for certain groups in society, the life span was significantly shorter (notice the life expectancy of whites vs. non-whites in the US). But for most populations throughout history, the most common age of death for any given person born was before the age of 10, not in their 30s.

I don’t understand why this is not commonly known. Even many historians (particularly the ones who write popular history) either don’t know this or are unwilling to disturb their narrative of how young people died in the past (in other words, they lie). I certainly was not taught this during my brief (long-ago) studies of ancient and medieval history.

What brought all this to mind is a most disturbing example in the just-published book Civilization by the prominent public historian Niall Ferguson. In the preface, he quotes a poem about death and suffering by John Donne and comments on it:

“Everyone should read these lines who wants to understand better the human condition in the days when life expectancy was less than half what it is today.”

To say I was aghast is an understatement. I nearly threw my Kindle against the wall. Here is this public intellectual, a historian who goes around preaching about how important it is to understand history, and yet he peddles this sort of nonsense. If he had said ‘days with high child mortality and a shorter typical life span’, I’d have no problem with it. But he didn’t, and he didn’t even hint that that’s what he meant.

He then goes on blathering about how awful it is that all these historical luminaries died so young (Spinoza at 44, Pascal at 39), saying:

“Who knows what other great works these geniuses might have brought forth had they been granted the lifespans enjoyed by, for example the great humanists Erasmus (69) and Montaigne (59)?”

Come on! Bringing forth great works! Really?!? Pathos much? He then goes on to compare Brahms (died old, disliked by Ferguson) and Schubert (died young, liked by Ferguson). So much for academic distance. Why on earth would Ferguson think that listing artists who died young means anything? Has he never heard of Jimi Hendrix or Kurt Cobain?

But more importantly, he doesn’t seem to notice his own counterexamples. Erasmus died almost a hundred years before Spinoza was born. What does that tell us about life expectancy and historical periods?

And since when has naming random people’s ages been considered evidence of anything? What about: Isaac Newton 84, Immanuel Kant 79, Galileo 77, John Locke 72, Voltaire 83, Louis Pasteur 72, Michael Faraday 75, Roger Bacon 80. Isn’t that evidence that people lived long before the advent of modern medicine?

Or what does any of that have to do with how much people might have contributed had they lived longer? I don’t think longevity can serve as a measure of intellectual or cultural productivity. Can we compare Plato (80) and Aristotle (60)? It seems to me that Aristotle produced a lot more, and more varied, work than Plato, with 20 fewer years to do it in. Aquinas (49) was no less prolific than St Augustine (75). Is it really possible to judge that the impact of the inventive John L Austin (who died at 49 – in the 20th century!) is any less than that of the tireless and multitalented Russell, who lived pretty much forever (97)?

But there are still more counterexamples. Look at the list of longest-reigning monarchs. The leader of that board is a 6th-dynasty pharaoh who arguably ascended to the throne as a child but still managed to live to a hundred (around 2200 BC!). And most other long-lived monarchs were born in times when life expectancy was about half of what it is now. Sure, they were privileged, and they are relatively rare. And there were a lot of other rulers who died in their 50s and 60s. But not typically in their 40s! Maybe there is already a study out there that measures the longevity of kings in relation to their time, but I doubt a straightforward correlation can be found.

Finally, I can match Ferguson poem by poem. From the ancient:

Our days may come to seventy years,
or eighty, if our strength endures;
yet the best of them are but trouble and sorrow,
for they quickly pass, and we fly away.
(Psalm 90:10)

to the modern:


Sadness is all I’ve ever known.
Inside my retched body it’s grown.
It has eaten me away, to what the fuck I am today.
There’s nothing left for me to say.
There’s nothing there for me to hate.
There’s no feelings, and there’s no thoughts.
My body’s left to fucking rot.
Life sucks, life sucks, life sucks, life sucks.
http://www.plyrics.com/lyrics/nocash/lifesucks.html

Clearly all that medicine made less of an impact on our experience of life than Ferguson thinks.

Perhaps I shouldn’t get so exercised about a bit of rhetorical flourish in one of many books of historical cosmogony and eschatology. But I’m really more disappointed than angry. I was hoping this book might have some interesting ideas in it (although I came to it with a skeptical mind), but I’m not sure I can trust the author not to surrender the independence of his frame of mind and bend the facts to suit his pet notions.
