Category Archives: Peter Brantley

Preserving Less of the World’s Literature

Peter Brantley -- July 30th, 2013

The rapid development of online publishing has been a boon for advancing access to literature and science. At the same time, it portends a dramatic weakening of the legislated ability of national libraries with preservation and access mandates to record and store national and world literatures. There are at least two principal axes to this concern: independently published literature, and the growing wealth of alternative direct-to-web publishing channels. Continue reading

On the NSA’s reading list

Peter Brantley -- July 17th, 2013

When I was about 15 years old, I purchased a copy of The Anarchist Cookbook by mail. I’m not exactly sure why I did that; it was being heavily advertised back then in the progressive magazines I read, and I probably thought it was the cool, geeky thing for a young boy to do. I remember trying to plow through it, but it was sloppily written and I didn’t think much of it; the author has since strongly disavowed it and wants it off the market.

Even in the late 1970s, I didn’t agree with the sentiments; my parents were smart not to intervene or counsel caution. It was a very liberal household for reading, and my father, a professor of literature, also kept a “teaching copy” of The Story of O in a place that was probably a little too accessible – or just accessible enough. My purchase of the Cookbook was primarily about personal semiotics: a (failed) attempt to signal to myself something about myself.

It was an innocent act. Now, I am not so sure that it would be. With the hands of the NSA clearly deep in international personal and organizational communications traffic, along with presumably any other national security agency around the world worth its salt, tracing that kind of information might well be worth their while. Why not? Storage is affordable at scale, and computation is pervasive.

The government now clearly understands that the most critical internet infrastructure – freely flowing information – doesn’t actually flow that freely, but is usually routed through the application silos maintained by a small number of companies that include Amazon, Apple, Facebook, Google, Microsoft, Twitter, and Yahoo. As citizens, it is our responsibility to consider the surveillance made possible when control over the data we generate as individuals is so concentrated. Regardless of how ferociously companies have fought to protect user information – e.g., EFF has applauded Yahoo, and Google has apparently given ground only reluctantly – these companies remain data honeypots for government intelligence queries.

Our reading takes place, overwhelmingly, in those silos, within Apple, Amazon, and maybe Microsoft. With the apparent slow withering of Barnes & Noble’s business, it becomes ever more likely that Microsoft’s strategic investment of $300MM in the Nook Media unit will become the vehicle to majority ownership. With a new reorganization under its belt, Microsoft is clearly interested in entertainment and mobile hardware platforms, and content is a critical component of hardware-based business offerings.

Microsoft has drawn significant attention in the NSA scandal for appearing eager to satisfy government inquiries, having been accused of providing low-level access to Outlook and Skype services, among others. Although Microsoft has strongly denied these allegations, it hedges carefully by noting, “Looking forward, as Internet-based voice and video communications increase, it is clear that governments will have an interest in using (or establishing) legal powers to secure access to this kind of content to investigate crimes or tackle terrorism.”

And so whatever protections progressive states like California have been able to secure for reader privacy, it is not at all clear that they would hold against FISA orders at the national level. With our books in the hands of a few very large internet application hosts, and even assuming the best efforts by those companies to protect their users, we cannot purchase ebooks, much less read them, with any sense of privacy or confidentiality.

It was a privilege to read The Anarchist Cookbook as a teenager and not worry about it; even to have a reasonably decent chance of not having anyone know about it. But no era is innocent, and the 1970s certainly weren’t. I would later take a university political science class on terrorism, studying the tracts and tactics of Black September and Baader-Meinhof; I would read with some fascination Marighella’s manual on urban guerrilla warfare; the Munich Massacre occurred in 1972, and hijackings continued apace through the decade.

We read in a political and social context. It is far too easy for the knowledge of which books we buy, and what we browse online, to become imbued with a profoundly political cast by organizations with great power, regardless of the intentions and free thoughts of the individual.

For now, the privilege of reading privately, digitally, has been lost. The Internet needs new architectures for distributing and securing data, using encryption that remains in the hands of the users, creating a very different kind of cloud-based architecture for data storage and computation. Until then, assume that someone else has a set of keys to your bookshelf.
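
A minimal sketch of that idea, assuming Python and the third-party cryptography package: the ebook is encrypted on the reader’s own device before it ever reaches a cloud host, so the host stores only ciphertext and the key never leaves the device. The file name and the notion of an “upload” here are purely illustrative.

```python
# Minimal sketch of user-held-key encryption: the cloud host never sees the
# key, only ciphertext. Assumes the third-party "cryptography" package
# (pip install cryptography); the file name is hypothetical.
from cryptography.fernet import Fernet

def encrypt_for_upload(path: str, key: bytes) -> bytes:
    """Encrypt a local ebook file; only the ciphertext leaves the device."""
    with open(path, "rb") as f:
        plaintext = f.read()
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt on the reader's own device with the locally held key."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()                  # stays on the user's device
    blob = encrypt_for_upload("my-book.epub", key)
    # ...store `blob` with any cloud host; it is unreadable without the key...
    restored = decrypt_after_download(blob, key)
```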

What’s next for libraries and digital content

Peter Brantley -- July 15th, 2013

The only thing that can be said for certain about the relationship of libraries with publishers and ebooks is that it is subject to change. To keep libraries and publishers up to date, ALA is hosting a virtual conference, open via registration, that includes a session on “New Directions for Libraries and Digital Content.” Robert Wolven, from Columbia University Libraries, will join Peter Brantley, from Hypothes.is, in a discussion of what to expect next for libraries.

Even as Apple lost a price-fixing court case involving the Big 6 publishers (now the Big 5, with the merger of Random House and Penguin), ALA continues its ongoing dialogue with publishers to attempt to increase access to both current and backlist titles. Spearheaded by the Digital Content and Libraries Working Group (DCWG), co-chaired by Robert Wolven, ALA has made surprising and steady progress in broadening the available ebook catalog, while continuing to press for changes in license terms; opportunities for content preservation; development of alternative business models; and privacy guarantees for ebook borrowers.

In the longer run, we are likely to confront far greater changes in what “books” are, and how they are procured and read. Both startups and dominant web companies like Apple, Google, and Amazon are utilizing software development techniques and ever more powerful browsers to advance towards a far different horizon for publishing content. Peter Brantley’s article for the American Libraries’ Digital Supplement, “The Unpackaged Book,” highlights some of these evolutions in how books and storytelling are presented on the web, and the opportunities and challenges that libraries face as a result.

Through a Glass, Brightly: Marrakesh

Peter Brantley -- June 26th, 2013

This week, Pew Internet Research released a study on young readers, their library habits, and reading preferences. Although ereading continues to grow, the eye-opening part of the survey for some has been the high percentage – 75 percent of those aged 16-29 – who have read a print book in the past year.

Count me among the surprised. I would have thought the percentage of print readers would have been lower. Print has an interesting stickiness – it’s still available nearly ubiquitously, in bookstores and general retailers alike, as well as through Amazon – and it has one other useful characteristic. It is a self-contained media package. Unlike walking around with an LP, CD, or DVD, which are useless in and of themselves, most of us can stick a book in our purse, bag, or jacket and not require anything else to go forth and ponder the world’s mysteries or plumb an imagination. (Except reasonably dry weather.)

Of course, there’s a fly in the ointment. Well, several, but I personally came across one the other day in a jarring fashion. Last month, I wound up purchasing two print books: one a mass market paperback, and one a hardback. Both purchases were driven by vexations with the clumsy, overly corporatized digital transformation of publishing. First is the continuing gnash-the-teeth frustration that my partner and I have figuring out a way to share our ebooks, which we are wont to do, since we have overlapping interests in literature. We will probably wind up sharing a single Amazon account, but it’s a stupid solution that grates. Second, the rights associated with the paperback have not been adjudicated for digital; and for the hardback, which is older and concerns itself with political diplomacy with a scholarly bent, a digital reprint must represent a questionable source of income for its publisher.

I couldn’t read either one of them. It was a purely physical, I’m-getting-older experience, but I found the contrast of type on paper in both the mass market paperback and the better-manufactured hardback rather execrable. The paper stock in both volumes was darker than I would have expected, and appeared rough. The printing itself was not very sharp, and the apparent fuzziness of the font’s strokes made the experience more tiresome. I reached for reading lamps to no avail. The fact of the matter is that I am now used to high-contrast, highly controllable screens. Putting aside all the endless stupidities of DRM and the inane restrictions on access that prevent my family from enjoying books together, reading digitally is simply superior – it is more customizable, and extends more control to the reader.

It is this basic, raw accessibility of digital reading that makes the recent WIPO Marrakesh agreement on the right to read for people with reading disabilities so damn important. The treaty won’t help me (yet, anyway), but for the first time, it will be possible for accessible editions to cross borders without needless restrictions, and for a reader to be able to access global platforms for the reading-impaired built from books from many different countries. Up until now, for example, it has been impossible for the Internet Archive to make available its DAISY-formatted and protected modern books to blind or dyslexic readers in Canada or other Commonwealth countries.

As James Love of Knowledge Ecology International, one of the foremost advocates for greater access by the disabled, comments in a statement:

The treaty will provide a dramatic and massive improvement in access to reading materials for persons in common languages, such as English, Spanish, French, Portuguese and Arabic, and it will provide the building blocks for global libraries to service blind persons. On the issues that mattered the most for blind persons, such as the ability to deliver documents across borders to individuals, and to break technical measures, the treaty was a resounding success.

Shame on the MPAA, GE, Disney, and Viacom, among other corporations, for resisting the treaty. Their refusal to gracefully acknowledge the greater access that digital technologies make available to the blind and dyslexic should be widely noted.

The Intelligent, Adaptable Book

Peter Brantley -- June 20th, 2013

In the midst of revolutions, ideas which initially seem inscrutable, or fantastic, suddenly become the building blocks of a new world. This week, Professor Robert Glushko, of UC Berkeley’s Information School and a leading pioneer in hypertext, dropped by the offices of Hypothes.is to discuss his new textbook, The Discipline of Organizing. The conversation ranged from notes, to annotations, to transclusion – bringing an early concept of Ted Nelson‘s back to the fore. Continue reading

Shelf-talkers: Kickstarting a new library journal

Peter Brantley -- June 18th, 2013

One of the things that I am most excited about is the creation of new forms of publishing on the web, using lightweight platforms featuring clean and simple writing and editing tools that allow communities to express themselves without the expensive, legacy production workflows left over from the print era. And yet one of the most collaborative and openly sharing communities in the world – librarians – has yet to take advantage of this opportunity. I think it is time for that to change.

In the last couple of years, something remarkable has happened – three publications have won Pulitzer Prizes for national reporting that exists only on the web: ProPublica, Huffington Post, and most recently, Inside Climate News. These organizations, and many of their brethren, have been founded by experienced journalists who have seized the opportunity of reaching people more quickly through leaner staffing and reduced operating costs. They often have been assisted through startup grants from philanthropic organizations, and are supported by a wide mix of web-centric advertising and subscription models.

New companies are helping catalyze these emerging models of journalism. Ev Williams’s highly regarded Medium, an elegant and lean publishing platform that encourages collaboration and community, has attracted wide attention. It has in turn recently acquired MATTER, a long-form journalism project founded by two experienced professionals, Bobbie Johnson and Jim Giles. Startups like Editorially and Draft are building production tools that enable communities of writers to collaboratively develop material as easily as authoring a blog. Publet is building a platform for simple, rich-media authoring to support periodical and journal publishing. The world of professional journalism is entering a web-native era, on the cusp of redefining how it does business.

Oddly, the library community has a dearth of competing publications to help inform it about the rapidly changing information landscape. Our primary vehicle, Library Journal, has many well-regarded contributors but is rooted in an older model of “push” journalism, and premised on print subscription revenue. Its cost structure is consequently higher than that of a digital-only publication, and requires significant underwriting from large corporate advertisers, which inevitably have their own editorial interests. ALA runs a flagship publication, American Libraries, but it is more topically focused; it doesn’t cover breaking news, critical reviews of the library marketplace in products and services, active discussion on information discovery and analysis, or discursive coverage of the spectrum of emerging technical standards and debates.

It’s time for librarians to develop our own journalism. The American Library Association’s basis in individual membership, rather than institutional affiliation, evidences the affinity for an in-community approach. A new library publication – call it Shelf Talkers – could be supported through librarian subscriptions, rather than vendor dollars, to assure complete editorial independence and lower the risk of special-interest influence. The PeerJ membership model is one option, although given the finite number of librarians, annual renewal would be required to establish a self-sustaining product. Launch support could come via a Kickstarter or Indiegogo campaign, and I suspect the concept of a new-generation online publication would find resonance at the Bill & Melinda Gates Foundation, which could potentially underwrite some of the recurring costs for the first couple of years. The DPLA is also intrinsically geared to provide assistance in kind, and its interests are well aligned.

Shelf Talkers – or whatever we wanted to call it – could run with an editor-in-chief, an operations manager, and a small cadre of staff reporters. Additional contributors from the library world – one of the most literate and expressive communities around – could fill out a publication which need not worry itself with “issues” or “volumes” or printed matter. Its reach would be global, as would its contribution base – an inherent advantage of a networked publication. Libraries span the world, and although funding and support models may differ, the critical problems and core opportunities show far less divergence. Our shared values make the power of librarians’ global voice greater than any corporation’s or state’s.

The world of librarianship has never been bigger, and our influence potentially never more profound. Let’s seize the tools at hand, and tell our story in our own way, leveraging our community’s independent spirit, and embracing the freedom to engage in a life of literacy and debate.

The best APIs are simple web standards

Peter Brantley -- June 13th, 2013

There’s been much recent attention paid to the addressability of book content on the web, with a “Publishing Hackathon” in New York and HarperCollins’ creation of an API-fueled hackathon, the “Programming Challenge,” both of which received a mix of criticism and praise; nonetheless, they are a good start. But in the rush to try to entice a more technically savvy element, I think publishers are missing a more elemental approach – borrowing simple and well-established web standards. Continue reading
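
To make the “simple web standards” point concrete, here is a hypothetical sketch in Python (using Flask): book content made addressable through nothing more exotic than stable URLs and the standard HTTP Accept header. The URL scheme and catalog data are invented for illustration and are not any publisher’s actual API.

```python
# Hypothetical sketch: each chapter gets a stable URL, and the standard HTTP
# Accept header (content negotiation) selects the representation.
# Assumes Flask (pip install flask); the ISBN and chapter text are invented.
from flask import Flask, Response, jsonify, request

app = Flask(__name__)

CHAPTERS = {
    ("9780000000001", 1): {"title": "Chapter One",
                           "html": "<p>It was a dark and stormy night...</p>"},
}

@app.route("/books/<isbn>/chapters/<int:num>")
def chapter(isbn, num):
    data = CHAPTERS.get((isbn, num))
    if data is None:
        return Response(status=404)
    accept = request.headers.get("Accept", "")
    if "application/json" in accept:        # machine-readable metadata
        return jsonify(isbn=isbn, chapter=num, title=data["title"])
    return Response(data["html"], mimetype="text/html")  # default: readable HTML

if __name__ == "__main__":
    app.run(port=5000)
```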

The New Ones: The Only Horizon Is Before Us

Peter Brantley -- April 29th, 2013

Recently, I had the opportunity to meet a young software developer who is a graduate student at UC Berkeley. He’s amazingly quick: a good coder, confident in his abilities, and a budding novelist. Both for school and his own needs, he helped to build an open source ebook reader, FuturePress, in JavaScript. In part, he and his friends felt the need for a lightweight reader, and as a novelist he also wants to play with versioning, reader collaboration, and all the other cool things you can do on the web. What struck me was not that they had written an ebook reader: others have done that. My more significant realization is about the world they know. Continue reading

Extending the Arc of Publishing: Preprints at PeerJ

Peter Brantley -- April 8th, 2013

This month the open access journal PeerJ launched a new preprint server where scholars can post papers prior to submission for formal peer review and publication. Preprints are common in many disciplines, but have been unusual in the biology and biomedical areas that PeerJ focuses on. The culture of biomedicine and its academic overlap with highly competitive and potentially lucrative biotechnology and biopharma firms have retarded pre-publication release of results.

Preprint servers are part of a growing trend. Over the last few years, the breadth of scholarly communication has begun to dramatically expand, supporting a life-cycle trajectory that extends from the publication of small pieces of the research process as “nanopublications,” to preprints, and then to publications of record, often with post-print versions. With the launch of its preprint server, PeerJ hopes to capitalize on the growing comfort with pre-publication review and commentary, increasingly accepted as a normal part of the publication lifecycle.

Last week I was able to do a Q&A with the founders of PeerJ, Pete Binfield and Jason Hoyt, to ask them more about their motivations.

PW: Why are you launching a pre-print server now?

PeerJ: Three reasons really:

Firstly, “Green Open Access” and the role of repositories are very important issues these days. The demand for Green OA is coming from both the top and the bottom, and if you look at it, the peer-reviewed portion of Green OA is covered by institutional repositories, but the un-peer-reviewed or draft versions of articles (i.e. the preprints) really have no major venues (at least not in the bio/medical sciences). So we view PeerJ PrePrints as one solution to that demand.

Secondly, academic journals themselves started out as non-peer-reviewed venues for the rapid communication of results. Peer review came about later on, evolving over centuries, to create something which has certainly introduced many positives for science. Still, preprints also have many benefits that we no longer get to enjoy, because peer review has come to dominate people’s attitudes towards what deserves to see the light of day. Now that more and more scientists are comfortable with the sharing attitude of the Internet (in part encouraged by the rise of Open Access), and as the costs of ‘preprinting’ are really quite low, it seemed like a good time to return to the roots of scholarly communication. Both peer review and preprints have important roles to play in the ecosystem.

And thirdly, we believe that we are finally seeing a desire from the Bio and Medical communities for a service like this, but with no viable venue to meet that need. Just in the last year or two, we have seen biologists start to use the arXiv preprint server more (even though it really isn’t set up for their areas); we have seen services like FigShare and F1000 Research launch; and we have heard from many academics that they are eager to submit to something like this.

PW: In biomed, particularly, there has been a marked reluctance to pre-publish findings or early stages of papers because of the highly competitive nature of the domain. Do you think that is changing, or do you think you will attract a certain audience?

PeerJ: We do think that this is changing. It is common, of course, for early adopters to prove the value of a new way of doing things before the rest of a field will follow, and we believe that there is now a sufficient ‘critical mass’ of engaged academics who will use this service, to the extent that the rest of their communities will see what they are doing and give it a try as well. In this respect, we believe that earlier ‘failed experiments’ in the preprint space may have been simply (and unfortunately) too far ahead of their time to gain wide enough adoption.

In addition, although the default state for a PeerJ PrePrint will be ‘fully open’, future developments of the site will allow authors to apply ‘access’ and ‘privacy’ controls to create what we call ‘private preprints’. Specifically, in future authors will be able to limit the audience for a specific preprint (e.g. to just collaborators) or only make part of the preprint visible (e.g. just the abstract). In this way, we hope to make people comfortable enough to share to an extent that they might previously have been uncomfortable with.

PW: Researchers are increasingly publishing smaller bits of their research workflow, e.g., as data or even specific queries or lab runs. In some ways, a pre-pub server could be seen as a very conservative component of academic publishing. What do you regard as the “MVP” (Minimum Viable Product) for a pre-print publication?

PeerJ: We’re focusing on what we support best: long-form writing that is still a necessary step (for the time being) on the road to a formally peer-reviewed publication. It’s a good fit for the lifecycle of a manuscript. Therefore, as a general rule, the MVP could be considered “something which represents an early draft of a final journal submission”. On the other hand, there are no restrictions on the exact format used for these preprints, so we are actually hoping to see the use cases evolve thanks to an intentional lack of rules about what is or isn’t allowed.

In terms of content, the only things we don’t allow as PrePrints for the moment are clinical trials or submissions which make therapeutic claims (as well as anything which doesn’t fit within our Aims and Scope, or doesn’t adhere to our Policies).

PW: How do preprints fit into the economic model that PeerJ is running?

PeerJ: First it should be noted that authors who ‘preprint’ with PeerJ have no obligation to submit that item for formal peer review – they can go to any journal that accepts submissions which have previously appeared as preprints. Our membership plans allow for one public preprint per year for Free Members, and paying users can have unlimited public preprints. Paid memberships also have different levels of private preprints, but that isn’t available just yet. This is a similar model to several repository-type services, such as GitHub, with a mix of public and private options. We expect private preprints to be attractive to those who want to test the waters of preprints, but restrict access to groups that they choose themselves.

PW: Are you requiring a CC-BY license on pre-publication contributions? If so, do you think it discourages submissions from researchers who are sensitive to potential commercial gains from their work?

PeerJ: Any ‘public’ preprint will be published under a CC-BY license. The PeerJ Journal is also under the same license, and so if this license dissuades researchers, then they would also have been dissuaded from submitting a Journal article for the same reason.

PW: How do you imagine a work flowing within the PeerJ environment from a prepub status into an official publication?

PeerJ: The submission form that PeerJ PrePrint authors use is basically the same submission form that PeerJ Journal authors use (although some questions that aren’t relevant to a preprint are omitted, and some fields are “Suggested” rather than “Required”). Because of this, it will be quite easy for an author to take their preprint submission and ‘convert’ it into a journal submission (they would simply have to supply a few extra bits of metadata and perhaps a fuller set of original files). Therefore, we expect a PeerJ PrePrint author to publish their preprint, get feedback, perhaps publish revisions, etc., before then deciding it is ‘ready’ to be submitted to a journal. If they choose PeerJ as their journal then it will be a simple matter to submit it for formal peer review and eventual publication (assuming it passes peer review). If the preprint version had already gathered any informal comments then clearly those could also be used in the formal evaluation by the Journal’s Editors and Reviewers.

PW: How do you imagine a pre-print server generating additional traffic or “buy-in” for PeerJ? Will a pre-print server be able to increase the overall conversation that happens at peerj.com?

PeerJ: Our first focus is to make sure that we’ve built a service that researchers enjoy and engage with. We look at metrics such as activity rates and engagement time as a barometer for whether what we are building is actually benefiting anyone. We are not worrying about traffic levels, but rather engagement levels. The traffic and new members will follow if we build something that researchers love.

PW: Thanks!

At ALA, ReadersFirst Moves Forward A Notch

Peter Brantley -- January 31st, 2013

ReadersFirst, the international coalition of libraries seeking to reassert control of user discovery and access for digital content, turned out on a rainy, cold afternoon at Seattle Public Library during ALA Midwinter to discuss its goals with the library vendor community. Members of the ReadersFirst (RF) steering committee reviewed the organization’s history and mission, and then elicited engagement from senior representatives of the companies selling services that often, at present, conflict with the goals of RF.

RF seeks a common, cross-content discovery layer in the library catalog so that users only experience the library’s own web services. RF’s goal is for content providers and platforms, such as OverDrive, to provide APIs that enable users to request and retrieve materials without additional vendor interaction. For example, ebooks could be retrieved “under the hood” from OverDrive without the user needing to re-authenticate or encounter systems beyond the library catalog. Currently, because libraries are forced to subscribe to services from multiple vendors, the user’s experience of digital media is fractured across multiple vendor accounts, and ebooks are accessed through different paths ranging from download to cloud-based access. As steering committee member Christina de Castell of the Vancouver Public Library said, “We don’t need the reader to know where the library bought the ebook from.”

Tom Galante of Queens Public Library reinforced the point: “The reader should be able to look at their library account and see what they have borrowed regardless of the vendor that supplied the ebook.” Continue reading
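
A hypothetical sketch of that “under the hood” pattern, in Python with the requests library: the patron authenticates once against the catalog, and the catalog borrows the title on their behalf using a credential the library holds. The vendor endpoint, parameters, and token below are invented for illustration and are not OverDrive’s actual API.

```python
# Hypothetical sketch of the ReadersFirst idea: the patron talks only to the
# library catalog; the catalog calls a vendor lending API server-side.
# The vendor endpoint, fields, and credential below are invented.
import requests

VENDOR_API = "https://vendor.example.com/api/v1"   # hypothetical vendor platform
VENDOR_TOKEN = "library-held-credential"           # held by the library, never the patron

def checkout_for_patron(patron_id: str, ebook_id: str) -> dict:
    """Borrow an ebook on the patron's behalf; no vendor login is shown to them."""
    resp = requests.post(
        f"{VENDOR_API}/checkouts",
        headers={"Authorization": f"Bearer {VENDOR_TOKEN}"},
        json={"patron": patron_id, "item": ebook_id},
        timeout=10,
    )
    resp.raise_for_status()
    loan = resp.json()
    # The catalog records the loan itself, so the patron's account page can list
    # every borrowed title regardless of which vendor supplied it.
    return {"title_id": ebook_id, "download_url": loan.get("fulfillment_url")}
```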