Do Books have a Future? An Interview with Robert Darnton

On academia, Google Book Search and the future of electronic publishing
Professor Robert Darnton at Harvard University
The following is an extract from an interview I conducted with Professor Robert Darnton on 5 December 2011. Darnton is the Carl H. Pforzheimer University Professor at Harvard and Director of the Harvard University Library. In 1999, he served as President of the American Historical Association, and he remains a trustee of the New York Public Library. He is also the author of numerous academic works, including The Great Cat Massacre and Other Episodes in French Cultural History, and The Case for Books: Past, Present, and Future. We met to discuss the way technological developments are changing academia, the problems surrounding Google Book Search, and what the future of electronic publishing might bring:
The expression ‘Open Access’ is often used by software developers in relation to emergent technologies, and carries with it certain ideals of accessibility and cooperation. You have suggested that the idea can allow us to rethink the boundaries of the academic institution. How can Open Access change the way we think about technology in higher education?

Well, I’m not sure it has, actually. I hope it has, but Open Access is far from a fait accompli, I mean most access is closed. In fact, it’s astonishing how restrained and closed academic exchange is. There are severe laws, copyright laws, that prevent me from making available to my students all kinds of digital texts which they could use with profit. So, it’s not as if Open Access can declare victory, as if it has transformed academic life or the whole world of knowledge. I think that’s its ambition. Put simply, you could say its ambition is to democratise access to knowledge. I see a danger of the opposite happening: that access to knowledge could be restricted through commercialisation. And so, I see, for example, the spiralling cost of periodicals as a threat to knowledge, even though the publishers of the academic periodicals would say, on the contrary, we are communicating knowledge! Well, you’re not, if you’re making it so expensive that libraries have to cut back on their purchases of monographs and other periodicals. We are reaching a point at which the inflation of costs of periodical literature is a real danger to access to knowledge.

In the case of books, the obvious example is Google Book Search. I feel that Google Book Search, which was going to commercialise access to a database of books, was a real threat to the communication of knowledge, even though it looks like a great leap forward. And, therefore, we are trying to create what we call the Digital Public Library of America: an Open Access digital library that will be available to everyone, not just everyone in the United States, but everyone in the world. This is a long answer to your question, but what I’m trying to explain is that Open Access is at an early stage. It’s the beginning of something that I think will be a process of democratisation, but we’re very far from being there yet, even in developed countries in the West. But when you think of countries on the other side of the so-called ‘digital divide’, their access is very far from being open at all. So there’s a long way to go before the whole globe is united in some kind of digital network in which everyone has immediate access to our entire cultural heritage.

You’ve written that Google Book Search has started ‘a monopoly of a new kind’, that despite its goal to ‘organise the world’s information and make it universally accessible and useful’, it presents obstacles to the proliferation of knowledge. You’ve already touched upon this a little bit in your previous response, but could you expand on why you feel this is the case? How is Google a threat to the spread of knowledge or information?

First of all, I should say that I admire Google in many ways. If you’ve ever met any of the Google engineers, first of all, they’re very young, they’re even younger than you are, they’re full of energy and ideas, and there is a kind of ‘can do’ spirit about them—it’s exhilarating. And the atmosphere in the world of Google is quite something, it’s electric, and I admire this. I admire their sheer chutzpah, as we put it, their ability to take a problem and wrestle it to the ground and do something with it. This is all wonderful.

I would use the word ‘threat’ cautiously. I saw a threat in Google Book Search, which was a very precise plan that emerged from a lawsuit. So Google was sued by the authors and the publishers in the United States for alleged infringement of their copyrights. And in trying to negotiate a settlement to that lawsuit, Google transformed what was originally a search operation into something entirely different: a commercial library. So the entire database of digitised books, fifteen million books or so, would be made available, but at a price. And that price would be set by Google and the plaintiffs that had now become its partners. Well, that is incipiently a great danger to knowledge: it makes knowledge available only to those who can afford it. The reply to my argument is: ‘don’t be naive, nothing is free, it’s normal that you should have to pay for access to knowledge because all of this costs money.’ And, my reply to the reply is that knowledge is a public good, and public goods—of course—cost money, nothing is free, but they should be made available free, through whatever devices we can come up with. State action, or in the case of the Digital Public Library of America, a coalition of foundations who are providing the money and a coalition of research libraries who are providing the books. So there are solutions, but if you care about public goods, if you feel that everyone, ordinary citizens, ought to have equal access to knowledge, then it’s important to establish the rules of the game. I think we’re at a very interesting moment in the history of communication, in which those rules are being established. One of the interesting points is the way they’re being established. You might find it extraordinary, here in Europe, that the rules of the game in the US are being determined by lawsuits, by court action, not by the legislature. And that is in fact the case.
Of course, the legislature has a role to play as well, it votes copyright laws, in fact it’s voted eleven copyright laws in the last fifty years. But I think these copyright laws are becoming a hindrance to open communication, and I feel that they ought to be modified, but I have very little confidence in Congress’ ability to modify them for the public good, because there are so many lobbies that descend on Congress and that determine copyright. So, copyright is a very complex subject which has evolved, as you know, over a long period of time, and right now I think we are copyrighting ourselves into a corner. It’s a very grave problem, and how we can get out of this corner I don’t know. That’s part of what Open Access is all about.


When developing the Gutenberg-e project, you attempted to realise the potential of the e-book in a new kind of academic monograph. Could you say a little more about this potential, and what you mean by a pyramid structure?

The Gutenberg-e project was an attempt to create and legitimise a new kind of publishing. One that would be especially important for young scholars, for people who were trying to convert dissertations into monographs, and, at the same time, one that would open up new possibilities of scholarly communication. As you know, e-books can do wonderful things. They can have film, they can have sounds, they are multimedia by their very nature. They can also have documentation that extends indefinitely into the depths of cyberspace. So the potential is there for a new form of scholarly communication. It’s really thrilling. But practice is something else entirely.

I’m an historian, and when I created Gutenberg-e, I was President of the American Historical Association. The Association tried to use electronic media as a way to help younger people develop as scholars, develop careers, by making the most of the new technology. However, a lot of the older scholars said ‘Well, these e-books aren’t books at all. Books are things that appear with print on paper.’ And part of the struggle was, therefore, legitimation. I forget the exact number of e-books we published, I think it was 17, and it was quite a list. And I found myself writing letters to chairs of history departments saying ‘Look, these are books! They are real books. In fact, they’re better than most printed books.’ And they are. They’re terrific books. In that respect, I think Gutenberg-e was a success. I think we did help to break down this barrier: the notion that electronic communication isn’t ‘real’ communication.

Where it was less successful was as a business. We had a business plan, it was working quite well, and by the end (after seven years) we managed to cover costs. But only barely. Just as we were emerging from red ink into black ink, an economic crisis hit. And the publisher who was managing all of this, Columbia University Press, decided this was too risky an enterprise to continue. So that’s why Gutenberg-e was discontinued. It still exists, it’s available online and elsewhere, but I can’t herald it as an unambiguous success. It was a first attempt to do these things. But now there are lots of e-books, and lots of scholarly e-books. In fact I’ve published one myself, a hybrid book which I think is now rather typical. I’m still not answering your question about the pyramid structure, though, so maybe you could put that question to me again if you want?

Okay. Well, the pyramid structure of the e-book has been adopted by a number of commercial publishers. Faber and Faber have released an iPad edition of T. S. Eliot’s The Waste Land, which includes access to audio and visual performances, alongside documentary features. Penguin, meanwhile, have released an ‘amplified’ edition of Jack Kerouac’s novel On the Road, allowing readers to browse manuscript materials, access documentary material, and follow events in the narrative on an interactive map of America. Do you think devices like the Apple iPad or Amazon’s Kindle will change the way we read long-term? And if so, how?

The short answer would be ‘Yes’, but then you could say ‘How?’, as you just did, and I don’t have an answer to that one. But the examples you cite, which I actually haven’t looked at myself, sound terrific. I think it’s thrilling that the reader, or user, can experience these texts in multi-dimensions. You can take texts in through your ears as well as your eyes, and for me that’s a huge advance. It will help place someone like Kerouac into a context in a way that you can’t simply by asserting that he was travelling through this rather strange landscape. So, yes, I feel that this is a very significant advance.

But how will it change reading? I honestly don’t know, but I’m often told ‘Don’t be naive, there are losses today in the way people read, especially when they read online’. The cover-to-cover deep reading that was typical of my generation when we were students is now almost extinct, and instead you’ve got superficial reading: reading snippets and tweets and cutting texts up into tiny units that really prevent any appreciation of the whole sweep of a text. I have one half-answer to that, which isn’t adequate but I think deserves consideration. And that is, first of all, that this cover-to-cover deep reading shouldn’t be exaggerated as something that occurred in the past. We have learned a lot about the history of reading, which is one of the aspects of the history of books that we’re trying to develop, and one thing we have learned is that, for example, sixteenth-century humanists rarely read a book from cover to cover. They were reading what we today would call ‘snippets’, or even ‘tweets’, they were taking -

As in the Commonplace Books?

That’s right. They were taking short passages out, copying them into Commonplace Books, and using those passages for various purposes, often in rhetorical battles at court on behalf of their patrons, or whatever it was. But this was not reading in the way that we like to imagine it. Now, of course, deep reading also did take place. I’m not denying that for a minute. But I’m not sure that we can assume that it was typical.

Has deep reading become extinct today? Well, I assign my students books, often printed books, and when we have discussions of these books, I have the impression that they’ve mastered the basic arguments, and that they have learnt to read critically. Perhaps the big difference is this: when I teach courses on the history of books, I try to sensitise students to the physical aspects of books, and how those physical aspects convey meaning. It’s not just bibliographical erudition for its own sake, but rather, a question of how paratextual elements and so on shape the message that is being conveyed in the text, and the way the reader makes sense of that message. So, I do find that students who are, so-to-speak, ‘digital natives’ and are used to electronic communication are very excited about this new way of looking at old books. Their reactions are much more energising than those of students, say, twenty years ago who saw the world of print as the established world: one that had existed since Gutenberg and was never going to change.


You’ve talked about how libraries are under pressure to ‘advance on both fronts’: the analogue and the digital. Could you elaborate on some of the dangers of relying on electronic texts, including texts that are ‘born digital’?

Well, is it a problem that publishers focus so heavily on digital books? Frankly, I don’t think there’s enough focus. I can understand that publishers are perplexed and fearful about the digital future, because they have to cover costs and make a profit. It’s a serious industry. And they are also committed to higher things, such as the spread of knowledge and the creation of art. So I’m not in any sense minimising the problems faced by publishers.

But I think many publishers are very cautious about how to deal with this future that they can vaguely see but which is very blurred. Not that they’re opposed to digital books, but they don’t want to risk enormous losses. Every publisher is trying to develop a business plan of some kind or other. I shouldn’t speak as if I’m an authority on current publishing, so you should discount a little bit of what I have to say. But I think that one of the issues every publisher faces is what to do with the backlist: the so-called ‘long tail’ of books which they can monetise through digitisation.

You could say that today, to use the phrase ‘in print’ or ‘out of print’ is quite misleading. Because, potentially, every book is in print. You’ve got a digital version of it as part of your backlist, and so any consumer should be able to order any book. And we’re very close to that now. It’s true that not every publisher has digitised all of their backlist, but even then, the publisher could, at the order of a reader, digitise a book, scan it very cheaply and make it available through print-on-demand.

We have these new Espresso Book Machines [EBM]. You, the consumer, go into a bookshop and you find a computer, you order a text, the order is transmitted to a database, the text is transmitted instantly to a not-terribly-large machine (about half the size of a bed). The machine is encased in glass, so you can watch it all happen. The text is printed on paper, the paper is trimmed, a paperback cover—in colour—is attached to it. All of this within four minutes, and often at a very cheap price. That is to say, in the US, the price is in many cases eight dollars for a custom-made paperback. In less than four minutes! So, what’s happening is the new technology is reinforcing the old-fashioned printed codex. And believe me, the products of these machines are excellent. Not fancy, but several times I’ve seen the machine produce copies of my own books which look every bit as good as the original paperback. Not the hardcover version, but a very acceptable example. So that’s a way in which the backlist of a publisher is a source of enormous potential profit, thanks to the new technology. It’s not a kind of technology that is simply going to wipe out the codex, but it could reinforce the printed codex.

You mentioned this earlier in our discussion, but I wonder whether you could tell me more about the Digital Library of America project? And how does it compare with your idea of a Digital Republic of Learning?

The Digital Public Library of America is more than a gleam in my eye, or anyone else’s eye, it’s a reality that is now coming into existence. It began a year ago at a conference at Harvard, where we debated it as a general idea. And the general idea is to make available, free of charge, the cultural heritage of our great research libraries. So, in that respect it was like Google Book Search except that it would be non-commercial.

Was this simply a utopian dream? Well, we said no. First of all, because we can find the money. All the major foundations of the United States are enthusiastically supporting this idea. If they will chip in the money, we can fund it. And they will chip in the money. Secondly, comes the question: ‘Is this technologically feasible?’ Well, Google proved that it is technologically feasible. Maybe it wasn’t quite as good as it ought to have been, but it’s still remarkable. And we’ve now dealt extensively with computer scientists and they all say this is not even difficult, they can design the infrastructure for this new library.

It will be a distributed system, so you shouldn’t imagine some magnificent building sitting on top of a gigantic database. It will link databases scattered all over the United States in a way to make them perfectly compatible: the user won’t even know where the book, or the pamphlet or the manuscript is located. The user will just have instant access to the text. (I mean, there will be metadata explaining where the text is, but it will be very user-friendly.)

We will have this new kind of library up and running in April 2013. That’s seventeen months from now. Of course, it will be up and running in a preliminary way, because we have the problem of copyright. We are not going to infringe copyright, we will respect copyright. But we can make available two million public domain books, and all kinds of works from special collections. The research libraries in the US have fabulous special collections, as you do in your great Rare Book Library here at Cardiff University. And many of them have been digitised or partially digitised. This is a huge amount of material that will be part of the Digital Public Library of America. And then we will build it incrementally.

That is, we will move into the world of copyright, but we won’t break the law. But how will we do it? That is a difficult question. We have groups working on this, we have the best law school professors devising legal strategies to do it, et cetera. And we don’t have a clear answer to it. I could go into some of the details, but they would probably sound too esoteric for the people on your blog. But there are possibilities of making at least some of these copyrighted books available. These, on the whole, will be books that are ‘out of print’: not books currently on the market, but books still covered by copyright. These works we can make available, I think, but we have to do so by some kind of agreement with the authors and the publishers, and that kind of agreement has yet to be determined. It’s not easy, but it’s something I think we will do little by little over the course of the next decade. So, in ten years, we will have a library greater than the Library of Congress, which is the largest library in the world, available free of charge to everyone.
A full transcript is available at the Cardiff University blog, Cardiff Book History.