In the midst of revolutions, ideas which initially seem inscrutable, or fantastic, suddenly become the building blocks of a new world. This week, Professor Robert Glushko, of UC Berkeley’s Information School and a leading pioneer in hypertext, dropped by the offices of Hypothes.is to discuss his new textbook, The Discipline of Organizing. The conversation ranged from notes, to annotations, to transclusion – bringing an early concept of Ted Nelson’s back to the fore.
One of the things that I am most excited about is the creation of new forms of publishing on the web, using lightweight platforms featuring clean and simple writing and editing tools that allow communities to express themselves without the expensive, legacy production workflows left over from the print era. And yet one of the most collaborative and openly sharing communities in the world – librarians – has yet to take advantage of this opportunity. I think it is time for that to change.
In the last couple of years, something remarkable has happened – three publications have won Pulitzer Prizes for national reporting that exists only on the web: ProPublica, the Huffington Post, and most recently, InsideClimate News. These organizations, and many of their brethren, were founded by experienced journalists who have seized the opportunity to reach people more quickly through leaner staffing and reduced operating costs. Many were assisted by startup grants from philanthropic organizations, and are supported by a wide mix of web-centric advertising and subscription models.
New companies are helping catalyze these emerging models of journalism. Ev Williams’s highly regarded Medium, an elegant and lean publishing platform that encourages collaboration and community, has attracted wide attention. It has in turn recently acquired MATTER, a long-form journalism project founded by two experienced professionals, Bobbie Johnson and Jim Giles. Startups like Editorially and Draft are building production tools that enable communities of writers to develop material collaboratively as easily as authoring a blog. Publet is building a platform for simple, rich-media authoring to support periodical and journal publishing. The world of professional journalism is entering a web-native era, on the cusp of redefining how it does business.
Oddly, the library community has few competing publications to keep it informed about the rapidly changing information landscape. Our primary vehicle, Library Journal, has many well-regarded contributors but is rooted in an older model of “push” journalism and premised on print subscription revenue. Its cost structure is consequently higher than a digital-only publication’s, and requires significant underwriting by large corporate advertisers, which inevitably have their own editorial interests. ALA publishes a flagship magazine, American Libraries, but it is more topically focused; it doesn’t cover breaking news, critical reviews of the products and services in the library marketplace, active discussion of information discovery and analysis, or discursive coverage of the spectrum of emerging technical standards and debates.
It’s time for librarians to develop our own journalism. The basis of the American Library Association – individual membership rather than institutional affiliation – reflects the community’s affinity for an in-community approach. A new library publication – call it Shelf Talkers – could be supported through librarian subscriptions rather than vendor dollars, ensuring complete editorial independence and lowering the risk of special-interest influence. The PeerJ membership model is one option, although given the finite number of librarians, annual renewals would be required to establish a self-sustaining product. Launch support could come via a Kickstarter or Indiegogo campaign, and I suspect the concept of a new-generation online publication would find resonance at the Bill & Melinda Gates Foundation, which could potentially underwrite some of the recurring costs for the first couple of years. The DPLA is also intrinsically geared to provide in-kind assistance, and its interests are well aligned.
Shelf Talkers – or whatever we wanted to call it – could run with an editor-in-chief, an operations manager, and a small cadre of staff reporters. Additional contributors from the library world – one of the most literate and expressive communities around – could fill out a publication which need not worry itself with “issues” or “volumes” or printed matter. Its reach would be global, as would its contribution base – an inherent advantage of a networked publication. Libraries span the world, and although funding and support models may differ, the critical problems and core opportunities show far less divergence. Our shared values make the power of librarians’ global voice greater than any corporation’s or state’s.
The world of librarianship has never been bigger, and our influence potentially never more profound. Let’s seize the tools at hand, and tell our story in our own way, leveraging our community’s independent spirit and embracing the freedom to engage in a life of literacy and debate.
There’s been much recent attention paid to the addressability of book content on the web, with a “Publishing Hackathon” in New York and HarperCollins’ creation of an API-fueled hackathon, the “Programming Challenge”; both received a mix of criticism and praise, but nonetheless they are a good start. Yet in the rush to entice a more technically savvy element, I think publishers are missing a more elemental approach – borrowing simple and well-established web standards.
This month the open access journal PeerJ launched a new preprint server where scholars can post papers prior to submission for formal peer review and publication. Preprints are common in many disciplines, but have been unusual in the biology and biomedical areas that PeerJ focuses on. The culture of biomedicine, and its academic overlap with highly competitive and potentially lucrative biotechnology and biopharma firms, has slowed pre-publication release of results.
Pre-print servers are part of a growing trend. Over the last few years, the breadth of scholarly communication has begun to dramatically expand to support a life-cycle trajectory extending from the publication of small pieces of the research process in “nanopublications,” to the publication of pre-prints, and subsequently publications of record, often with post-print versions. With the launch of its preprint server, PeerJ hopes to capitalize on the growing comfort with pre-publication review and commentary that is increasingly accepted as a normal part of the publication lifecycle.
Last week I was able to do a Q&A with PeerJ’s founders, Pete Binfield and Jason Hoyt, to ask them more about their motivations.
PW: Why are you launching a pre-print server now?
PeerJ: Three reasons really:
Firstly, “Green Open Access” and the role of repositories are very important issues these days. The demand for Green OA is coming from both the top and bottom, and if you look at it, then the peer-reviewed portion of Green OA is covered by institutional repositories, but the ‘un peer-reviewed’ or draft versions of articles (i.e. the pre-prints) really have no major venues (at least not in the bio/medical sciences). So we view PeerJ PrePrints as one solution to that demand.
Secondly, academic journals themselves started out as non peer-reviewed venues for the rapid communication of results. Peer-review came about later on, evolving over centuries, to create something which has certainly introduced many positives for science. Still, ‘preprints’ also have many benefits that we no longer get to enjoy, because peer-review has come to dominate people’s attitudes towards what deserves to see the light of day. Now that more and more scientists are comfortable with the sharing attitude of the Internet (in part encouraged by the rise of Open Access), and as the costs of ‘preprinting’ are really quite low, it seemed like a good time to return to the roots of scholarly communication. Both peer-review and preprints have important roles to play in the ecosystem.
And thirdly, we believe that we are finally seeing a desire from the Bio and Medical communities for a service like this, but with no viable venue to meet that need. Just in the last year or two, we have seen biologists start to use the arXiv preprint server more (even though it really isn’t set up for their areas); we have seen services like FigShare and F1000 Research launch; and we have heard from many academics that they are eager to submit to something like this.
PW: In biomed, particularly, there has been a marked reluctance to pre-publish findings or early stages of papers because of the highly competitive nature of the domain. Do you think that is changing, or do you think you will attract a certain audience?
PeerJ: We do think that this is changing. It is common, of course, for early adopters to prove the value of a new way of doing things before the rest of a field will follow, and we believe that there is now a sufficient ‘critical mass’ of engaged academics who will use this service, to the extent that the rest of their communities will see what they are doing and give it a try as well. In this respect, we believe that earlier ‘failed experiments’ in the preprint space may have been simply (and unfortunately) too far ahead of their time to gain wide enough adoption.
In addition, although the default state for a PeerJ PrePrint will be ‘fully open’, future developments of the site will allow authors to apply ‘access’ and ‘privacy’ controls to create what we call ‘private preprints’. Specifically, in future authors will be able to limit the audience for a specific preprint (e.g. to just collaborators) or make only part of the preprint visible (e.g. just the abstract). In this way, we hope to make people comfortable enough to share to an extent that they might previously have been uncomfortable with.
PW: Researchers are increasingly publishing smaller bits of their research workflow, e.g., as data or even specific queries or lab runs. In some ways, a pre-pub server could be seen as a very conservative component of academic publishing. What do you regard as the “MVP” (Minimum Viable Product) for a pre-print publication?
PeerJ: We’re focusing on what we support best: long-form writing that is still a necessary step (for the time being) on the road to a formally peer-reviewed publication. It’s a good fit for the lifecycle of a manuscript. Therefore, as a general rule, the MVP could be considered “something which represents an early draft of a final journal submission”. On the other hand, there are no restrictions on the exact format used for these preprints, so we are actually hoping to see the use cases evolve, thanks to an intentional lack of rules about what is or isn’t allowed.
In terms of content, the only things we don’t allow as PrePrints for the moment are clinical trials and submissions which make therapeutic claims (as well as things which don’t fit within our Aims and Scope, or don’t adhere to our Policies).
PW: How does pre-print fit into the economic model that PeerJ is running?
PeerJ: First it should be noted that authors who ‘preprint’ with PeerJ have no obligation to submit that item for formal peer-review – they can go to any journal that accepts preprints that have not been peer reviewed. Our membership plans allow for one public preprint per year for Free Members, and paying users can have unlimited public preprints. Paid memberships also have different levels of private preprints, but that isn’t available just yet. This is a similar model to several repository type services, such as GitHub, with a mix of public and private options. We expect private preprints to be attractive to those who want to test the waters of preprints, but restrict access to groups that they choose themselves.
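The tiered model described above – one public preprint per year for Free Members, unlimited for paying members – can be sketched as a simple quota check. The class, method, and plan names below are invented purely for illustration; this is not PeerJ’s actual code or API.

```python
# Toy model of a tiered preprint quota: free members get one public
# preprint per year, paid members get unlimited ones.
PLAN_LIMITS = {"free": 1, "paid": None}  # None means no limit

class Member:
    def __init__(self, plan: str):
        self.plan = plan
        self.public_preprints_this_year = 0

    def can_post_public_preprint(self) -> bool:
        limit = PLAN_LIMITS[self.plan]
        return limit is None or self.public_preprints_this_year < limit

    def post_public_preprint(self) -> None:
        if not self.can_post_public_preprint():
            raise PermissionError("annual public preprint limit reached")
        self.public_preprints_this_year += 1

free = Member("free")
free.post_public_preprint()             # the first one succeeds
print(free.can_post_public_preprint())  # -> False: the free quota is used up
```

A “private preprints” tier would add a second counter with its own per-plan limit, but the gating logic would look the same.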
PW: Are you requiring a CC-By license on pre-pub contributions? If so, do you think it discourages submissions from researchers who are sensitive to potential commercial gains from their work?
PeerJ: Any ‘public’ preprint will be published under a CC-BY license. The PeerJ Journal is also under the same license, and so if this license dissuades researchers, then they would also have been dissuaded from submitting a Journal article for the same reason.
PW: How do you imagine a work flowing within the PeerJ environment from a prepub status into an official publication?
PeerJ: The submission form that PeerJ PrePrint authors use is basically the same submission form that PeerJ Journal authors use (although it omits some questions that aren’t relevant to a preprint, and some fields are “Suggested” rather than “Required”). Because of this, it will be quite easy for an author to take their preprint submission and ‘convert’ it into a journal submission (they would simply have to supply a few extra bits of metadata and perhaps a fuller set of original files). Therefore, we expect a PeerJ PrePrint author to publish their preprint, get feedback, perhaps publish revisions etc., before deciding it is ‘ready’ to be submitted to a Journal. If they choose PeerJ as their journal, then it will be a simple matter to submit it for formal peer review and eventual publication (assuming it passes peer review). If the preprint version has already gathered informal comments, then clearly those could also be used in the formal evaluation by the Journal’s Editors and Reviewers.
PW: How do you imagine a pre-print server generating additional traffic or “buy-in” for PeerJ? Will a pre-print server be able to increase the overall conversation that happens at peerj.com?
PeerJ: Our first focus is to make sure that we’ve built a service that researchers enjoy and engage with. We look at metrics such as activity rates and engagement time as a barometer for whether what we are building is actually benefiting anyone. We are not worrying about traffic levels, but rather engagement levels. The traffic and new members will follow if we build something that researchers love.
ReadersFirst, the international coalition of libraries seeking to reassert control of user discovery and access for digital content, turned out on a rainy, cold afternoon at Seattle Public Library during ALA Midwinter to discuss their goals with the library vendor community. Members of the ReadersFirst (RF) steering committee reviewed the organization’s history and mission, and then opened a discussion with senior representatives from the companies selling services that often, at present, conflict with the goals of RF.
RF seeks a common, cross-content discovery layer in the library catalog so that users experience only the library’s own web services. RF’s goal is for content providers and platforms, such as Overdrive, to provide APIs that enable users to request and retrieve materials without additional vendor interaction. For example, ebooks could be retrieved “under the hood” from Overdrive without the user needing to re-authenticate or encounter systems beyond the library catalog. Currently, because libraries are forced to subscribe to services from multiple vendors, the user’s experience of digital media is fractured across multiple vendor accounts, and ebooks are accessed through different paths ranging from download to cloud-based access. As steering committee member Christina de Castell of the Vancouver Public Library said, “We don’t need the reader to know where the library bought the ebook from.”
Tom Galante of Queens Public Library reinforced the point: “The reader should be able to look at their library account and see what they have borrowed regardless of the vendor that supplied the ebook.”
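The architecture RF is asking for can be sketched as a simple routing layer: the patron talks only to the catalog, and the catalog relays each request to whichever vendor supplied the title. Every class and method name below is invented for illustration; this is not a real Overdrive or library-vendor API.

```python
# Minimal sketch of the ReadersFirst goal: a single catalog front end that
# hides the vendor supplying each ebook.

class VendorPlatform:
    """Stands in for a content provider (e.g. Overdrive) with a loan API."""
    def __init__(self, name, titles):
        self.name = name
        self.titles = set(titles)

    def fulfill(self, patron_id, title):
        # The vendor authenticates the library, not the individual patron.
        if title not in self.titles:
            raise KeyError(title)
        return {"title": title, "vendor": self.name, "patron": patron_id}

class LibraryCatalog:
    """Single discovery layer: routes each request to the right vendor."""
    def __init__(self, vendors):
        self.vendors = vendors

    def borrow(self, patron_id, title):
        for vendor in self.vendors:
            if title in vendor.titles:
                loan = vendor.fulfill(patron_id, title)
                # The patron sees only the library's response; the
                # vendor name stays "under the hood".
                return {"title": loan["title"], "source": "library"}
        return None  # not held by any vendor

catalog = LibraryCatalog([
    VendorPlatform("VendorA", {"Moby-Dick"}),
    VendorPlatform("VendorB", {"Middlemarch"}),
])
print(catalog.borrow("patron-42", "Middlemarch"))
# -> {'title': 'Middlemarch', 'source': 'library'}
```

The point of the sketch is the shape of the interface: because the vendor field is stripped before the response reaches the patron, the patron’s account shows one unified list of loans, which is exactly what Galante describes.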
San Antonio, Texas has a history of supporting mavericks; in fact, the word originates with Samuel Maverick, a signer of the Texas Declaration of Independence whose grandson, Maury, also held true to the surname. Now that Bexar (pronounced “Bear”) County, San Antonio’s home, is committed to building a new public library, BiblioTech, that will be devoid of all printed books, County Judge Nelson W. Wolff stands to follow in hallowed South Texas footsteps. Judge Wolff is a progressive politician who was also the founder of Sun Harvest Farms, a natural-foods grocery chain. As an elected county official, the Judge presides, along with the County Commission, over many essential county-wide functions and services, including taxes and infrastructure investments.
“BiblioTech,” a play on the Spanish word for library, biblioteca, will open on the south side of the county as a test of the proposition that a mix of services centered on Internet access and e-books is a cost-effective strategy for providing information resources and library services to far-suburban and rural communities. Although the city has been providing library services to the county, it recently upped the tab from $3.7 million to $6.7 million – the highest city-county bill in the nation. As growth in unincorporated areas outside the city limits continues, the city has found it increasingly expensive to provide library services to areas without much in the way of their own tax base.
BiblioTech fits into Judge Wolff’s pattern of encouraging the long-term development of San Antonio. The vision anticipates a multi-location facility serving community information needs, with the first site as a model; it would be open into evening hours, available to registered county residents, and would provide access to an anticipated 10,000 ebook titles, supported by a pool of up to 100 e-readers. On January 15, 2013, the County approved the release of an RFP for an e-reader provider, an RFQ for an architect to remodel some existing underutilized county space, initial capital and operating budgets (for computers and ebooks, among other things), and the creation of an advisory board.
Taking advantage of a visit to San Antonio, I was able to sit down with the Judge to talk about this new initiative and its longer-term goals. In a wide-ranging interview that included senior staff, I was impressed with his awareness of the overall public library environment. Partly inspired by the local UTSA engineering library, which went bookless in 2010, and by Stanford’s engineering library, Wolff is alert to the dramatic shifts in digital access. I raised the most obvious objection from other library directors – that no digital library can be comprehensive today because of publishers’ reluctance to license their books – and he readily acknowledged that not all literature could be offered to county residents through an ebook platform. Yet he was hopeful that forward-looking demonstrations of community libraries such as BiblioTech would encourage publishers to enlarge their offerings, reaching readers who lack any bookstore.
Judge Wolff sees BiblioTech as a model not just for Bexar County, but for places far beyond it. With great enthusiasm, the County’s staff is rapidly gathering information about e-book vendors and licensing models; educating itself about national initiatives such as ReadersFirst; and contacting innovative libraries ranging from New York Public to Chattanooga. BiblioTech will have a strong children’s area, with dedicated technology support and a concentration on children’s e-literature. More broadly, as expected from a leader long engaged in state and local politics, Wolff is beginning to consider what mix of community information needs can be served through the facility; citizen education is considered an important element. And, perhaps because of its newness and innocence, it seems everyone has leapt to provide assistance. Even his praise of the layout, size, and staffing of Apple stores has brought offers of help from unexpected places.
The serious grappling with what future libraries will embrace extends well beyond how they will address books. The BiblioTech team is also considering digital access to music and movies. Although the Judge’s staff had little exposure to maker spaces and some of the other forms of technology engagement and education, they were eager to learn about the range of opportunities. Wolff has been instrumental in bringing large concerns into San Antonio, such as Toyota’s newest truck manufacturing facility – on the same side of the city as BiblioTech – and has formed strong ties to Rackspace, a native San Antonio startup and powerful cloud storage and computing provider. The opportunity to reshape libraries in San Antonio is significant, and with it there is an opportunity to inform what libraries look like across the globe.
Thanks to a suggestion from David Riordan of the New York Public Library Labs, I got a quick introduction to Field Trip, a new augmented reality (AR) Android app that emerged out of Google last autumn. Field Trip comes out of an internal startup at Google called Niantic Labs, headed by John Hanke, who created an early online mapping application called Keyhole. Keyhole was acquired by Google and became the foundation of Google Earth under Hanke’s leadership. I think Field Trip points toward a new generation of geolocal storytelling, enabling us to find stories and interact with narratives wherever we happen to be.
This last week, the Douglas County Library (DCL) system announced that it had acquired 10,000 ebook titles from the leading distributor of self- and independently published ebooks, Smashwords. At an average of $4.00 per title, this meant an expenditure of $40,000 to purchase, not merely license, a large number of ebooks for the readers of Douglas County, nearly doubling the number of titles that DCL owns to 21,000. The deal was concluded through the legal equivalent of a sketch on a cocktail napkin, not a 330-page contract with multiple addenda.
This purchase is an example of the Smashwords Library Direct program, which allows libraries and library consortia to purchase large numbers of self-published titles in a streamlined and automated fashion using whatever selection criteria they see fit; additional large library consortia, such as California’s Califa, are expected to follow DCL’s lead. Smashwords permits its authors and publishers to set their own library prices using a web-based pricing tool; the majority of its participating authors have opted for library prices at below-market levels, reflecting the premium value they place on library exposure and promotion.
The most promising aspect of the deal – and one that I hope will set a precedent – is that it was concluded through Smashwords’ acceptance of a simple document [pdf], “Statement of Common Understanding for Purchasing Electronic Content.” The keystone clause underpinning the Common Understanding’s resolutions is: “The Library affirms that it will comply with U.S. Copyright Law.” It subsequently specifies in clean and commonsense language what that means: i.e., purchase is not a transfer of copyright; the library will loan one copy for each ebook copy purchased; and it will not make derivative works such as films or translations. It affirms DCL’s right to make archival or preservation copies (Copyright Act Section 108(c)), and its ability to make accessible copies available to the reading-impaired (Section 121). The whole document runs less than two pages. There is also a handshake agreement that should an author or publisher publish material through Smashwords without the necessary rights, and the library owns that title, Smashwords will ask the library to remove the title from its collection, and the library will receive a refund for its purchase.
This is a model for a straightforward and civil agreement between publishers and libraries that rests solidly on current copyright, without the need for confining and restrictive licensing agreements that add complexity, increase user frustration, and diminish access without providing significant additional protection for rightsholders. I hope more publishers will be willing to take the Common Understanding, and its premise, as a template for building stronger and more trusting relationships.
As we start a new year, it might appear that the hurdles facing public libraries have never been greater. With financially burdened communities; ebooks, movies, and music increasingly delivered through walled gardens by technology companies that have no affinity for free-to-all service; and rapidly evolving modes of publishing, libraries would seem to be in a tight corner. That may all be true, but there are signs of rescue, signs of hope.
One of the most promising developments is the growing awareness that public libraries need to solve their own problems. That is not an easy proposition; public libraries come in all shapes and sizes, from the Boston and New York research libraries to small-town libraries in the American west. However, the internet bridges both vast distances and town/gown differences, and we are starting to see a whole new community of libraries emerge. A portion of this effort is being negotiated through the Digital Public Library of America (DPLA), but the greater and more important part is being developed peer to peer.
A current example is the ReadersFirst initiative, a growing collaboration of libraries that has endorsed a straightforward set of propositions seeking more seamless access to digital resources. ReadersFirst pursues simple but high-impact goals: make content like ebooks more portable between providers, and more available to patrons; simplify integration into library discovery systems to ease access by patrons; and make content available in any useful format, whether EPUB, Mobi, or a website. And in this effort, amazingly, they may succeed.