Thanks to a suggestion from David Riordan of the New York Public Library Labs, I got a quick introduction to Field Trip, a new augmented reality (AR) Android app that emerged out of Google last autumn. Field Trip comes from an internal startup at Google called Niantic Labs, headed by John Hanke, who created an early online mapping application called Keyhole. Keyhole was acquired by Google and became the foundation of Google Earth under Hanke’s leadership. I think Field Trip points toward a new generation of geolocal storytelling, enabling us to find stories and interact with narratives wherever we happen to be.
As we start a new year, it might appear that the hurdles facing public libraries have never been greater. With financially burdened communities; ebooks, movies, and music increasingly delivered through walled gardens by technology companies that have no resonance with free-to-all service; and rapidly evolving modes of publishing, it would appear that libraries are in a tight corner. That may all be true, but there are signs of rescue, signs of hope.
One of the best things coming is the growing awareness that public libraries need to solve their own problems. That is not an easy proposition; public libraries come in all shapes and sizes, from Boston and New York research libraries to small town libraries in the American west. However, the internet bridges both vast distances and town/gown differences, and we are starting to see a whole new community of libraries emerge. A portion of this effort is being negotiated through the Digital Public Library of America (DPLA), but the greater and more important aspect is being developed peer to peer.
A current example is the ReadersFirst initiative, a growing collaboration of libraries that has endorsed a straightforward set of propositions that seek to provide more seamless access to digital resources. ReadersFirst seeks simple but high-impact goals: make content like ebooks more portable between providers, and more available to patrons; simplify integration into library discovery systems to ease access by patrons; and make content available in any useful format, whether EPUB, Mobi, or a website. And in this effort, amazingly, they may succeed.
I just got back from the Charleston Conference – a lively mix of publishers and librarians discussing digital transitions in information access. It was the first time I attended, and I was struck by how many other friends in trade publishing were also there for the first time, ranging from Smashwords and Safari Books Online to the Frankfurt Book Fair. O’Reilly also organized an inaugural Tools of Change Charleston event with Mitchell Davis, the local entrepreneur behind BookSurge and BiblioLabs.
One thing that immediately struck me was how much the conversation about trade publishing seems increasingly to leak into discussions about other sectors of publishing, including Charleston’s focus on academic and A&I resources. Part of that was intentional on the organizers’ part, and part is that it’s hard to open a newspaper without reading about the titanic shift toward Big 6 trade consolidation. The combination of Random House and Penguin seems inevitable to everyone, and most pundits and prognosticators agree that more combinations are on the way. There also seems to be strong agreement that the merger’s primary purpose is to build a counterweight to the market power of Amazon, giving ever larger publishers more heft in negotiations and heading off ultimatums backed by Amazon’s perceived monopsony power.
One critique of this trend is that there may be little benefit to making publishing businesses ever larger through M&A, because internal coordination costs for larger firms grow faster than the benefits of output efficiencies. At Charleston, there was speculation that inevitably one would see a dissolution of the great houses, and a re-emergence of their imprints as stand-alone publishers. In an age of networked production and ebook distribution, the strong countervailing argument against consolidation is that there is no better time for Alfred A. Knopf and Pantheon to take themselves out of megalithic houses and re-assert editorial and business independence. I must admit, as a literature geek I find this scenario romantically appealing, and I would love to see these noble brands born anew and ascendant.
However, I think that the opportunity for those organizations to resurface is gone. That’s not due to any change in the brilliance of their staffs or their aspirations – it’s a result of wholesale changes in publishing. Once we start producing literature without traditional firms, even born-again, smaller and nimbler houses based on traditional publishing structures are not going to be successful. It will take an entirely different model of publishing to succeed – one that recognizes that the costs of literary production are plummeting; that distribution occurs on the network; and that entry points into storytelling are growing increasingly diverse. New publishers are as likely to be independent videographers or game companies as trade houses, and a growing industry meme focuses on how likely it will be for film producers to commission books, rather than see traditional publishers managing 360 deals. With tools like Mozilla’s Popcorn, transmedia production is reaching the hands of technically unsophisticated creators.
Making strategic choices about optimal organizational form based on a desire to achieve effective market position against the dominant retailers of the existing industry will not be successful. Newly emergent publishing models are going to develop on the periphery of the existing publishing industry, often wholly independent of it, with both large and micro actors emerging to produce a wide range of new forms of content. The consultant Mike Shatzkin has persuasively argued that everything but traditional text narratives in trade is merely an experiment, and that’s a logical analysis. However, it’s not in trade that those experiments are going to be successful.
During the Charleston Conference, I grabbed a quiet morning and toured Fort Sumter, site of the start of the U.S. Civil War. One of the things I learned was that the war bridged a great transition in artillery technology, with smoothbore field artillery giving way to vastly more deadly and accurate rifled cannon. It seems a similar transition is upon us within publishing. As armies in this war, Random House and Penguin have reached for a bigger musket to arm themselves in order to retain financial independence. Unfortunately, more innovative firms have started to adopt Kalashnikov AK-47s.
The Confederate Army abandoned Fort Sumter in February 1865, as Sherman swept his way through South Carolina.
The Internet Archive has announced that it is using BitTorrent to encourage more efficient downloading of content from its servers. Over 1 million archival items are now available as torrents, including Librivox audio books, movies from the Prelinger Archive collection, old radio broadcasts, and hundreds of thousands of digital books.
This is an innovative use of BitTorrent technology, demonstrating that large digital libraries can open selected materials to much broader access. As Brewster Kahle notes in his blog post:
BitTorrent is the now fastest way to download items from the Archive, because the BitTorrent client downloads simultaneously from two different Archive servers located in two different datacenters, and from other Archive users who have downloaded these Torrents already. The distributed nature of BitTorrent swarms and their ability to retrieve Torrents from local peers may be of particular value to patrons with slower access to the Archive, for example those outside the United States or inside institutions with slow connections.
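The Archive exposes these torrents at predictable URLs. As a minimal sketch – assuming the Archive’s observed public convention of serving an item’s torrent at `/download/<identifier>/<identifier>_archive.torrent`, which is an inference from its URL layout rather than a documented API guarantee – the torrent location can be derived from an item identifier alone:

```python
# Sketch: derive the torrent URL for an Internet Archive item.
# Assumes the Archive's observed convention of serving a torrent at
# /download/<identifier>/<identifier>_archive.torrent; this is an
# inference from public URLs, not a documented API contract.

def archive_torrent_url(identifier: str) -> str:
    """Return the expected torrent URL for an archive.org item identifier."""
    return (
        "https://archive.org/download/"
        f"{identifier}/{identifier}_archive.torrent"
    )

# A hypothetical identifier, for illustration only:
print(archive_torrent_url("some_librivox_item"))
```

Any standard BitTorrent client can then fetch the `.torrent` file and join the swarm, pulling pieces from the Archive’s servers and from other patrons simultaneously.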
John Gilmore, a co-founder of the Electronic Frontier Foundation, notes in the same blog post that BitTorrent can be reliably used for a wide range of content, including “large files that are permanently available from libraries like the Internet Archive.”
The use of technologies such as BitTorrent can facilitate not only greater access but also increased opportunities for preservation of digital content.
(N.B.: Although employed at the Internet Archive, I was not part of this project.)
Like a lot of people in my generation, I am the system administrator for my parents. That’s okay – I’ve been one in real life, so I don’t mind it very much, and I try to think of it as a learning opportunity. Over the last couple of weeks, I’ve had to update my father’s computer, a Mac Mini, and it was instructive in unexpected ways.
Most experienced systems admins will tell you they abide religiously by several inviolable rules, one of which is: upgrade applications when necessary; operating systems, rarely. The problem with an OS upgrade is that it often changes how people work with their computers, not just in one application, but across the board. Throw in a random number of incompatibilities and surprise forced upgrades in both peripherals and utility software, and you have a predictable nightmare. Yet sometimes pain is necessary, and I knew that upgrading my dad’s Mac Mini was going to mean a move to Mountain Lion. I read John Siracusa’s review and got ready.
I keep system privileges for myself, and it was after I created his “standard” user account that I found myself surprised. When you create a new user in 10.8 and open up the Finder, you get a very simple menu. In the left-hand panel, “All Files” will show you a flattened view of your account; there are the customary folders for “Music”, “Pictures”, and “Movies”, plus “Documents” and “Downloads”. Snarkily, I didn’t expect to find “Books”, and it wasn’t there.
The Chronicle of Higher Education has released its first ebook, appropriately enough an expanded version of its Rebooting the Academy series, which examines changes in the practice of research, teaching, and institutional management in the midst of technological change. Nearly simultaneously, on the occasion of John Siracusa’s exhaustive review of the new Apple operating system Mountain Lion, Condé Nast’s Ars Technica will soon make available a Kindle ebook for those wishing to absorb all 26,000 words in a digestible format. And, in September, the New York Review of Books will release the first title in its new ebook-only imprint, NYRB-Lit.
That digitally facile publishers such as the Chronicle and Condé Nast are able to quickly produce and sell ebooks is simultaneously exceptional, and increasingly mundane. Ten years ago, publishing an ebook from a lengthy periodical series would have taken months of preparation; today, as the tools for publishing on the internet enter the mainstream book trade, anyone who can run a blog can produce an ebook. That’s not necessarily terrific news if you are an established publisher; with each news release about self- and independently-published ebooks, the value proposition of large, integrated publishing firms seems less obvious. When Los Angeles media entrepreneurs like Barry Diller and Scott Rudin see the virtue of starting up their own high-brow literary publishing endeavors, midtown real estate in Manhattan starts looking particularly expensive.
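How mundane ebook production has become is easy to make concrete. The EPUB container is just a ZIP archive with a fixed layout, buildable with nothing but a standard library. The sketch below is illustrative, not a validator-clean EPUB (navigation documents and some required metadata are omitted for brevity), and all titles and identifiers in it are made up:

```python
# Minimal illustration of the EPUB container: a ZIP archive whose first
# entry is an uncompressed "mimetype", plus a container.xml pointing at
# the OPF package document. A sketch, not a spec-complete EPUB.
import zipfile

CONTAINER_XML = """<?xml version="1.0"?>
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

PACKAGE_OPF = """<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0"
         unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="bookid">urn:example:demo</dc:identifier>
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml"
          media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>"""

def make_epub(path_or_file, title, chapter_xhtml):
    """Write a bare-bones, single-chapter EPUB container."""
    with zipfile.ZipFile(path_or_file, "w", zipfile.ZIP_DEFLATED) as z:
        # Per the EPUB OCF spec, "mimetype" must be the first entry
        # and must be stored without compression.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", CONTAINER_XML)
        z.writestr("OEBPS/content.opf", PACKAGE_OPF.format(title=title))
        z.writestr("OEBPS/chapter1.xhtml", chapter_xhtml)
```

Feed it an XHTML chapter and most reading systems will open the result; production tools add navigation, styling, and validation on top of exactly this structure – which is the point: the hard part of publishing is no longer the file format.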
The presence of active tumult in a prominent economic sector makes it especially troubling when government agencies listen uncritically to entrenched publishing multinationals for advisement and consultation in areas of high-impact policy formulation. For example, there has been significant worldwide interest in negotiating a WIPO treaty that would require countries to allow published, in-copyright print works to be converted into an accessible format for the blind and others with reading disabilities, and permit accessible works to be shared around the world without permission from the copyright holder. However, the United States has wavered in its support for a binding treaty, and is instead seeking softer, non-binding recommendations or guidelines.
This U.S. reluctance to finalize treaty language echoes the concerns of the Association of American Publishers, as evidenced in a videotaped interview with the AAP’s vice president of policy, Alan Adler. Adler voices concerns that a binding international treaty will introduce a precedent that will make negotiations over copyright exceptions and limitations more likely for educational, library, and archival uses. Adler, and by extension the AAP, seem to forget that copyright is itself a set of specially codified grants that are carved out from public access for a limited duration, and that exceptions and limitations simply return to the public the access to creative works that is society’s baseline.
However, the rapid influx of Internet-based publishing tools, the blossoming of a rich diversity of new self- and independent-publisher services, and the arrival of new mixed-media entrants taking advantage of mobile content platforms from companies like Google and Apple force us to raise a more fundamental question: who can speak for publishing-related policy issues? Surely today we must listen not only to the large publishing combines, but also to new companies like Smashwords, Aerbook, Vook, Byliner, and The Atavist in order to understand the perspective of publishers. And equally, as publishing becomes an integral part of the firmament of the internet, the government must consult and evaluate the competing aims of Google, Amazon, Apple, and Microsoft.
The time in which the AAP can speak authoritatively for publishing is over. Formulating policy over intellectual property issues that heretofore was considered the domain of a few specific industry and interest groups is instead the domain of all internet users, including readers and authors, as well as a wide range of new publisher entrants. Ours is an economy undergoing network industrialization, and if the federal government wants industry consultation, it will need to listen to the wider array of people and firms who are engineering and empowering the future of expression, instead of a handful of companies fighting the U.S. Justice Department after colluding to maximize their interests at the expense of consumers.
Today two different thunderbolts struck in academic publishing, one from an old storm, and the other from a new one. The weather forecast continues to be troubled, but as they say, we need the rain.
The first story is the imminent closing this summer of the University of Missouri Press, after five decades of operation. MU Press is not the first university press to close, and it certainly won’t be the last. It was receiving a subsidy of $400,000 annually and was still unable to turn a profit from its operations; that is a lot of money, but not exceptional in the realm of university presses. Nor, sadly, is the lack of profitability, which is why we are likely to see more closures on the horizon.
I’ve been much consumed recently by questions of what the future holds for books and expression. In part that’s because I have to give a commencement address soon to an Information School, and I am struggling to articulate, at least to myself, some of the things that are nagging me. Further, yesterday I had coffee with Bob Stein, a consistently iterative digital pioneer; we spoke briefly about the nature of innovation, and he reflected on the comment Beethoven is said to have made to a critic of his string quartets (Op. 59, No. 1): “Oh, they are not for you, but for another age.”
What’s common in these threads is the reduction of separation between ourselves and the networked environments we are living in. This isn’t a cyborgian vision so much as an awareness that we are living in a sensed and sensing environment of ever greater pervasiveness. Like it or not, conscious or not, we are truly in a conversation with a man-made habitat that is machine-enhanced. Some of that new perception will find its way into artistic expression.
As I write, my week is not half over yet, and it’s been full of meetings with startups at various stages of market readiness. I’ve had conversations with folks at The Atavist in New York, at Aerbook here in San Francisco, and with some visiting founders from a new company in private beta, Valobox. Additionally, I had the honor of participating in a “Future of Publishing” event sponsored by Pearson at RocketSpace, in San Francisco’s South of Market, with my co-panelists Matt MacInnis of Inkling, Rob Grimshaw, Managing Director of FT.com, and Eileen Gittins, CEO of Blurb.
It did not take long in my conversations before a common thread became apparent. It gelled in the Pearson event when Matt MacInnis asked our audience, “How many here have used PageMaker? InDesign? Word?” His question traced a 30-year evolutionary path in software that is about to become obsolete – page-oriented authoring and design. Publishing’s new default is not a page of paper but a web page, which has dynamic sizes and shapes. Regardless of the kind of content new publishing startups are building services around, at heart they are oriented toward the web, not the page. Yet, paradoxically, the web itself is not quite ready. For now, startups are building software for tablets with sophisticated processors, built-in networking, tightly coupled user accounts, and a mix of local and remote storage for media: a combination to which current web standards can’t yet do full justice.
As a relative outsider to publishing, I am still often surprised by how difficult business transformation can be for some organizations. I am a member of the Project Muse Advisory Board, and I’ve just emerged from their board and publisher meetings. Project Muse is a journals publishing platform; it aggregates journals in digital form and sells content packages to university and college libraries, research centers, and similar organizations. Muse is also making a significant entry into the higher education ebook market by providing access to publishers’ lists. Our meeting was energetic, and focused at a conceptual level on the challenges of delivering new types of services while transitioning away from more traditional aspects of journal publishing.
What was striking for me was not the discussion I had anticipated – of content management systems that support a wide range of data queries, are more semantically aware, and can handle a wide range of interactive media; indeed, those are today’s coin of the realm. Rather, it was the more basic conundrum of being caught between different kinds of customers: publisher suppliers, who are also customers, in a sense; and institutions, who buy their product.
The core conundrum for Project Muse, as with all platform providers, is that they can easily come into conflict with the priorities of the university presses and scholarly societies that provide them with content. For example, one opportunity discussed widely today in academia is creating “push to publish” services that are much closer to the user, often utilizing approachable tools such as WordPress; these services would be at home in library publishing units. If an existing platform provider tried to deploy such a lightweight and configurable publishing system, it could siphon audience away from constituent publishers. In fact, most new services that leverage internet technology and network-scale data sharing and computation are also under consideration by university presses and scholarly societies.
The underlying issue is that the suite of possible new publishing services is within reach of multiple levels of the publishing field: university presses could take a run at putting broad net-scale services like PLoS ONE out of business just as easily as Muse or JSTOR, which operate at a higher level of aggregation. If a small press or society is willing to go through the significant tumult of re-inventing itself, it can reach the global community of scholars just as easily as Elsevier.
What that made me realize is that if you designed a publishing enterprise to support scholarly communication de novo, aggregating content from a range of sources but also developing direct publishing and reader/writer services, you could do it with very different constraints than Muse, JSTOR, and other platform providers have to grapple with. A new entrant, not unlike the Public Library of Science, could actually turn its back on existing publishing practice and design a direct-to-faculty or direct-to-discipline infrastructure that was wholly divorced from existing players.
That kind of disruption hasn’t happened much yet outside of science, technology, and medicine, but it is likely that it will. The exception will come only if existing platforms quickly figure out ways of innovating themselves into a new content environment while bringing their publishing contributors and constituents along with them, so that those partners benefit from the same new services the platforms are designing for a broader audience. There may even be some unique advantages in sustaining those relationships, if they can be successfully leveraged.
The coming change in how we publish the humanities and social sciences, and in fact, what we can publish, could be even more transformative than the re-invention of STM. Building a new digital humanities infrastructure will mean interacting with visual interpretations of historical sites, hearing ancient or less common modern languages in linguistic treatises, and grappling with philosophical quandaries in a gaming environment with virtual goods. Ultimately this may reshape how faculty think about doing their research, as well as how it is communicated.