Category Archives: academic market

Shelf-talkers: Kickstarting a new library journal

Peter Brantley -- June 18th, 2013

One of the things that I am most excited about is the creation of new forms of publishing on the web, using lightweight platforms with clean and simple writing and editing tools that allow communities to express themselves without the expensive, legacy production workflows of the print era. And yet one of the most collaborative and openly sharing communities in the world – librarians – has yet to take advantage of this opportunity. I think it is time for that to change.

In the last couple of years, something remarkable has happened – three news organizations that exist only on the web have won Pulitzer Prizes for national reporting: ProPublica, Huffington Post, and most recently, InsideClimate News. These organizations, and many of their brethren, were founded by experienced journalists who have seized the opportunity of reaching people more quickly through leaner staffing and reduced operations costs. They often have been assisted by startup grants from philanthropic organizations, and are supported by a wide mix of web-centric advertising and subscription models.

New companies are helping catalyze these emerging models of journalism. Ev Williams’s highly regarded Medium, an elegant and lean publishing platform that encourages collaboration and community, has attracted wide attention. It has in turn recently acquired MATTER, a long-form journalism project founded by two experienced professionals, Bobbie Johnson and Jim Giles. Startups like Editorially and Draft are building editorial tools that let communities of writers collaboratively develop material as easily as authoring a blog. Publet is building a platform for simple, rich-media authoring to support periodical and journal publishing. The world of professional journalism is entering a web-native era, on the cusp of redefining how it does business.

Oddly, the library community has a dearth of publications to inform it about the rapidly changing information landscape. Our primary vehicle, Library Journal, has many well-regarded contributors but is rooted in an older model of “push” journalism and premised on print subscription revenue. Its cost structure is consequently higher than a digital-only publication’s, and requires significant underwriting by large corporate advertisers, which inevitably have their own editorial interests. ALA runs a flagship publication called “American Libraries,” but it is more topically focused; it doesn’t cover breaking news, critical reviews of the library marketplace in products and services, active discussion of information discovery and analysis, or discursive coverage of the spectrum of emerging technical standards and debates.

It’s time for librarians to develop our own journalism. The basis of the American Library Association – individual membership rather than institutional affiliation – evidences the community’s affinity for an in-community approach. A new library publication – call it Shelf Talkers – could be supported through librarian subscriptions, rather than vendor dollars, to assure complete editorial independence and reduce the risk of special-interest influence. The PeerJ membership model is one option, although given the finite number of librarians, annual renewals would be required to make the product self-sustaining. Launch support could come via a Kickstarter or Indiegogo campaign, and I suspect the concept of a new-generation online publication would find resonance at the Bill & Melinda Gates Foundation, which could potentially underwrite some of the recurring costs for the first couple of years. The DPLA is also intrinsically geared to provide in-kind assistance, and its interests are well-aligned.

Shelf Talkers – or whatever we wanted to call it – could run with an editor-in-chief, an operations manager, and a small cadre of staff reporters. Additional contributors from the library world – one of the most literate and expressive communities around – could fill out a publication which need not worry itself with “issues” or “volumes” or printed matter. Its reach would be global, as would its contribution base – an inherent advantage of a networked publication. Libraries span the world, and although funding and support models may differ, the critical problems and core opportunities show far less divergence. Our shared values make the power of librarians’ global voice greater than any corporation’s or state’s.

The world of librarianship has never been bigger, and our influence potentially never more profound. Let’s seize the tools at hand and tell our story in our own way, leveraging our community’s independent spirit and embracing the freedom to engage in a life of literacy and debate.

Extending the Arc of Publishing: Preprints at PeerJ

Peter Brantley -- April 8th, 2013

This month the open access journal PeerJ launched a new preprint server where scholars can post papers prior to submission for formal peer review and publication. Preprints are common in many disciplines, but have been unusual in the biology and biomedical fields that PeerJ focuses on. The culture of biomedicine, and its academic overlap with highly competitive and potentially lucrative biotechnology and biopharmaceutical firms, has slowed pre-publication release of results.

Pre-print servers are part of a growing trend. Over the last few years, the breadth of scholarly communication has begun to dramatically expand to support a life-cycle trajectory extending from the publication of small pieces of the research process in “nanopublications,” to the publication of pre-prints, and subsequently publications of record, often with post-print versions. With the launch of its preprint server, PeerJ hopes to capitalize on the growing comfort with pre-publication review and commentary that is increasingly accepted as a normal part of the publication lifecycle.

This past week I was able to do a Q&A with the founders of PeerJ, Pete Binfield and Jason Hoyt, to ask them more about their motivations.

PW: Why are you launching a pre-print server now?

PeerJ: Three reasons really:

Firstly, “Green Open Access” and the role of repositories are very important issues these days. The demand for Green OA is coming from both the top and bottom, and if you look at it, then the peer-reviewed portion of Green OA is covered by institutional repositories, but the ‘un peer-reviewed’ or draft versions of articles (i.e. the pre-prints) really have no major venues (at least not in the bio/medical sciences). So we view PeerJ PrePrints as one solution to that demand.

Secondly, academic journals themselves started out as non peer-reviewed venues for the rapid communication of results. Peer-review came about later on, evolving over centuries, to create something which has certainly introduced many positives for science. Still, ‘preprints’ also have many benefits that we no longer get to enjoy, because peer-review has come to dominate people’s attitudes towards what deserves to see the light of day. Now that more and more scientists are comfortable with the sharing attitude of the Internet (in part encouraged by the rise of Open Access), and as the costs of ‘preprinting’ are really quite low, it seemed like a good time to return to the roots of scholarly communication. Both peer-review and preprints have important roles to play in the ecosystem.

And thirdly, we believe that we are finally seeing a desire from the Bio and Medical communities for a service like this, but with no viable venue to meet that need. Just in the last year or two, we have seen biologists start to use the arXiv preprint server more (even though it really isn’t set up for their areas); we have seen services like FigShare and F1000 Research launch; and we have heard from many academics that they are eager to submit to something like this.

PW: In biomed, particularly, there has been a marked reluctance to pre-publish findings or early stages of papers because of the highly competitive nature of the domain. Do you think that is changing, or do you think you will attract a certain audience?

PeerJ: We do think that this is changing. It is common, of course, for early adopters to prove the value of a new way of doing things before the rest of a field will follow, and we believe that there is now a sufficient ‘critical mass’ of engaged academics who will use this service, to the extent that the rest of their communities will see what they are doing and give it a try as well. In this respect, we believe that earlier ‘failed experiments’ in the preprint space may have been simply (and unfortunately) too far ahead of their time to gain wide enough adoption.

In addition, although the default state for a PeerJ PrePrint will be ‘fully open’, future developments of the site will allow authors to apply ‘access’ and ‘privacy’ controls to create what we call ‘private preprints’. Specifically, in future authors will be able to limit the audience for a specific preprint (e.g. to just collaborators) or make only part of the preprint visible (e.g. just the abstract). In this way, we hope to make people comfortable enough to share to an extent that they might previously have been uncomfortable with.

PW: Researchers are increasingly publishing smaller bits of their research workflow, e.g., as data or even specific queries or lab runs. In some ways, a pre-pub server could be seen as a very conservative component of academic publishing. What do you regard as the “MVP” (Minimum Viable Product) for a pre-print publication?

PeerJ: We’re focusing on what we support best, long-form writing that is still a necessary step (for the time being) on the road to a formally peer-reviewed publication. It’s a good fit for the lifecycle of a manuscript. Therefore, as a general rule, the MVP could be considered as “something which represents an early draft of a final journal submission”. On the other hand, there are no restrictions on the exact format used for these preprints, so we are actually hoping to see the use cases evolve thanks to an intentional lack of rules about what is or isn’t allowed.

In terms of content, the only thing we don’t allow as a PrePrint for the moment are clinical trials or submissions which make therapeutic claims (as well as things which don’t fit within our Aims and Scope, or adhere to our Policies).

PW: How does pre-print fit into the economic model that PeerJ is running?

PeerJ: First it should be noted that authors who ‘preprint’ with PeerJ have no obligation to submit that item for formal peer-review – they can go to any journal that accepts preprints that have not been peer reviewed. Our membership plans allow for one public preprint per year for Free Members, and paying users can have unlimited public preprints. Paid memberships also have different levels of private preprints, but that isn’t available just yet. This is a similar model to several repository type services, such as GitHub, with a mix of public and private options. We expect private preprints to be attractive to those who want to test the waters of preprints, but restrict access to groups that they choose themselves.

PW: Are you requiring a CC-By license on pre-pub contributions? If so, do you think it discourages submissions from researchers who are sensitive to potential commercial gains from their work?

PeerJ: Any ‘public’ preprint will be published under a CC-BY license. The PeerJ Journal is also under the same license, and so if this license dissuades researchers, then they would also have been dissuaded from submitting a Journal article for the same reason.

PW: How do you imagine a work flowing within the PeerJ environment from a prepub status into an official publication?

PeerJ: The submission form that PeerJ PrePrint authors use is basically the same submission form that PeerJ Journal authors use (although some questions that aren’t relevant to a preprint are omitted, and some fields are “Suggested” rather than “Required”). Because of this, it will be quite easy for an author to take their preprint submission and ‘convert’ it into a journal submission (they would simply have to supply a few extra bits of metadata and perhaps a fuller set of original files). Therefore, we expect a PeerJ PrePrint author to publish their preprint, get feedback, perhaps publish revisions, etc., before deciding it is ‘ready’ to be submitted to a journal. If they choose PeerJ as their journal, then it will be a simple matter to submit it for formal peer review and eventual publication (assuming it passes peer review). If the preprint version had already gathered any informal comments, then clearly those could also be used in the formal evaluation by the Journal’s Editors and Reviewers.

PW: How do you imagine a pre-print server generating additional traffic or “buy-in” for PeerJ? Will a pre-print server be able to increase the overall conversation that happens at peerj.com?

PeerJ: Our first focus is to make sure that we’ve built a service that researchers enjoy and engage with. We look at metrics such as activity rates and engagement time as a barometer of whether what we are building is actually benefiting anyone. We are not worrying about traffic levels, but rather engagement levels. The traffic and new members will follow if we build something that researchers love.

PW: Thanks!

Working Around the Publishing Industry

Peter Brantley -- October 1st, 2012

Folio from the Talmud
Swissnex, a private/public effort of the Swiss government, has hosted some amazing salons with Science Online Bay Area on new forms of publishing. This past Thursday night, they brought together several cutting edge protagonists creating the next generation of scholarly communication in “Innovations in Academic Publishing and Peer Review.” A web of exciting new companies is rapidly emerging, each reinventing an aspect of how we share the results of science.

The speakers – Pete Binfield of PeerJ; Kristen Fisher Ratan of PLoS One; Sarah Greene of Cancer Commons; and Dan Whaley of Hypothes.is – debated forms of peer review, time to publication, membership models, and alternative metrics for evaluation. Yet amid the raucous experimentation in the future of academic publishing, the sense in San Francisco was of a world entirely different from the traditional linear paradigm that heretofore defined scientific communication. Pete Binfield observed that open access is on a path to become the dominant model of academic publishing in about five years. And, as Kristen Fisher Ratan noted, scientists are increasingly working around the existing publishing industry to share their work with others – a sharing that is both more open, and more interactive, than ever before.

Storm Clouds in Academic Publishing

Peter Brantley -- May 25th, 2012

Today two different thunderbolts struck in academic publishing, one from an old storm, and the other from a new one. The weather forecast continues to be troubled, but as they say, we need the rain.

The first story is the imminent closing this summer of the University of Missouri Press, after five decades of operation. MU Press is not the first university press to close, and it certainly won’t be the last. It received an annual subsidy of $400,000 and still could not turn a profit from its operations; that is a lot of money, but not exceptional in the realm of university presses. Nor, sadly, is the lack of profitability, which is why we are likely to see more closures on the horizon.


Innovating from betwixt and between

Peter Brantley -- April 29th, 2012

As a relative outsider to publishing, I am still often surprised by how difficult business transformation can be for some organizations. I am a member of the Project Muse Advisory Board, and I’ve just emerged from their board and publisher meetings. Project Muse is a journals publishing platform; it aggregates journals in digital form and sells content packages to university and college libraries, research centers, and similar organizations. Muse is also making a significant entry into the higher education ebook market by providing access to publishers’ lists. Our meeting was energetic, and focused at a conceptual level on the challenges of delivering new types of services while transitioning away from more traditional aspects of journal publishing.

What was striking for me was not my anticipated discussion of content management systems that support a wide range of data queries, are more semantically aware, and can handle a wide range of interactive media; indeed, these are today’s currency of the realm. Rather, it was the more basic conundrum of being caught between different kinds of customers: publisher suppliers, who are also customers, in a sense; and institutions, who buy their product.

The core conundrum for Project Muse, as with all platform providers, is that they can easily come into conflict with the priorities of the university presses and scholarly societies that provide them with content. For example, one opportunity discussed widely today in academia is creating “push to publish” services that are much closer to the user, often utilizing approachable tools such as WordPress; these services would be at home in library publishing units. If an existing platform provider tried to deploy such a lightweight and configurable publishing system, it could siphon audience away from constituent publishers. In fact, most new services that leverage internet technology and network-scale data sharing and computation end up being ones under consideration as well by university presses and scholarly societies.

The underlying issue is that the suite of possible new publishing services is within reach of multiple levels of the publishing field: university presses could make a go at putting broad net-scale services like PLoS One out of business just as easily as Muse or JSTOR, which operate at a higher level of aggregation. If a small press or society is willing to go through the significant tumult of re-inventing itself, it can reach the global community of scholars just as easily as Elsevier.

What that made me realize is that if you designed a publishing enterprise to support scholarly communication de novo, aggregating content from a range of sources but also developing direct publishing and reader/writer services, you could do it with very different constraints than Muse, JSTOR, and other platform providers have to grapple with. A new entrant, not unlike the Public Library of Science, could actually turn its back on existing publishing practice and design a direct-to-faculty or direct-to-discipline infrastructure that was wholly divorced from existing players.

That kind of disruption hasn’t happened much yet outside of science, technology, and medicine, but it is likely that it will – unless existing platforms quickly figure out ways of innovating themselves into a new content environment while bringing their publishing contributors and constituents along with them, so that those partners benefit from the same new services the platforms are designing for a broader audience. There may even be some unique advantages in sustaining those relationships, if they can be successfully leveraged.

The coming change in how we publish the humanities and social sciences, and in fact, what we can publish, could be even more transformative than the re-invention of STM. Building a new digital humanities infrastructure will mean interacting with visual interpretations of historical sites, hearing ancient or less common modern languages in linguistic treatises, and grappling with philosophical quandaries in a gaming environment with virtual goods. Ultimately this may reshape how faculty think about doing their research, as well as how it is communicated.

Libraries as Community Publishers: How to Turn the Tables

Peter Brantley -- March 16th, 2012

This week I gave a talk at the University of Michigan for the Library and Press on the changes in publishing, with an analysis of the consequences for libraries and other organizations. And unexpectedly, it got me thinking about how public libraries could turn the tables on publishers who obviously feel that their digital books are too precious to share with libraries.

Between several discussions about scholarly publishing with the Library’s Head of Publishing Services, Shana Kimball (@shanakimball), and a lively conversation with Eli Neiburger (@ulotrichous) of the Ann Arbor District Library, I came away thinking about how sharply the barriers to publishing have fallen. Scholars, for example, are increasingly talking about using outlets such as Kindle Singles or Byliner to publish essays and working papers that just a few years ago would have been short-run paperbound manuscripts released by obscure research institutes; now they can reach a global audience.

And that, in turn, made me finally comprehend one big thing. A university is like a community; it draws upon itself to produce insights for a larger world. And as a university library or press is working to find new ways to express the ideas of its community to the broader world, so too can a public library for the community that it serves.

The New York Public Library, the San Francisco Public Library, and every public library system in between now has the capability of starting their own publishing imprint. Imagine “NYPL Press” extended into a series of digital books, stemming out of the rich literary community of New York City. Libraries excel at selection and curation, and to have the stamp of approval of one’s local library as your press could be the most valued signet of the publishing market.

Public libraries are born of their communities. Increasingly with digital tools, libraries are places where people can come together and learn how to write their own stories. There is no reason why libraries can’t be the place where those stories are published. New York Public already has experience in publishing, with a history of putting out print publications, largely derived from its own collections. Extending that into the creation of a digital publishing arm would hardly be insurmountable.

Even for smaller library systems, it would be possible to dedicate staff to support the growth of a publishing series. Using tools like Pressbooks and other easy to use authoring environments, it’s possible for libraries to get community works into the hands of retailers quite easily. A library could offer both general purpose publishing tools and services, as well as establish a house imprint for those materials it felt were worthy of its imprimatur.

Of course, the best thing about libraries becoming presses is that it could bring in needed revenue to support and innovate across all of their services. Network-based publishing technologies provide us many opportunities, and one of them is for the institution most in touch with its community – and one of the most loved – to barn-raise publishing houses in every city and town across the nation. Libraries are establishing all sorts of citizen platforms; a press could be just one more.

This is just one option among many possibilities available to public libraries. I am not naive about the need for a library publishing imprint to have at least a basic supporting staff at a time when budgets are tight. But it is at least within arm’s reach, and it provides opportunities for librarians to grow and engage in new services that have a stronger future than those dealing with analog culture. Having one foot in the community and one in the network, libraries can help define a new cultural commons.

[N.B. Since writing, I've learned of "The Librarian's Guide to Micropublishing," by Walt Crawford, which should be helpful. However, the work is focused on library-supported patron publications in print, and less on the concept of a library digital imprint, which is the path I find more attractive.]

Back doors to transformation

Peter Brantley -- January 30th, 2012

This weekend I had the pleasure of participating in a Mellon Foundation-funded meeting discussing the future of open peer review for scholarly materials. Peer review is the process by which journal articles and books are evaluated by one’s colleagues in advance of publication in order to improve their quality, or in some cases, recommend their rejection. Peer review concerns itself with questions of originality, clarity, and overall contribution to the literature. The practice arose in conjunction with publishing, and as peer review evolves, we begin to see new – and potentially profound – impacts on scholarly presses.

In contrast to older models of peer review in which submissions are reviewed by one’s colleagues in a “single blind” fashion in which the author does not know the identity of the reviewers, open peer review takes place more or less openly on the web. This has a number of potential benefits, including timeliness; lessened risk of favoritism or backstabbing; and increased quality of comments, knowing they will be aired publicly. Open peer review is not an absolute; portions of the process might be initially closed and then opened up, or the reverse. Anonymity might be preserved at certain moments, but prohibited in others. The reviewing community might be global, or restricted to members of a specific community.

In our discussions, one of the things we kept stumbling over most was the relationship between open peer review, and open access. The distinction is significant: open peer review concerns itself with how the scholarly community evaluates itself online, more or less openly, whereas “open access” presumes that scholarly publications are openly available within at least the boundaries of academic institutions, and perhaps the broader public. But open peer review inherently means that the text under consideration is public to a greater extent than ever before, along with the comments that any number of reviewers might have of it. If this richer fabric is available online for anyone to see, what is then left to publish by a press? To put it another way, open peer review opens a back door to new forms of publishing.


The PW Morning Report: Wednesday, June 1, 2011

Calvin Reid -- June 1st, 2011

Today’s links! And please check out our new Facebook Page.

DC Comics Reboots the Universe! Well, maybe the DC Universe: DC is relaunching its classic titles with new numbers and simultaneous day and date digital/print release.

Amazon vs. NACS. The college bookstore association seeks to dismiss Amazon suit over ads for discounted textbooks.

The saga continues. Borders asks court for more time for turnaround plan.

Cave books. Her fans rejoice as Jean Auel returns with a new book set among prehistoric cave men and women.

Buy this F***ing Book! Go to the bookstore already!

The case of the purloined trailer! Sony may have had the trailer for the upcoming movie adaptation of Girl with the Dragon Tattoo removed from the web.

Let’s hope so. Does reading make us better people?

The PW Morning Report: Monday, April 11

Craig Morgan Teicher -- April 11th, 2011

Today’s links!

Old Book: A 600-year-old book was found in Utah! From KPLC.

Dean Koontz on TV: The bestselling author talked on CBS yesterday about his work and dogs.

Steve Jobs Bio: The authorized Steve Jobs bio will be written by Walter Isaacson, published by Simon & Schuster, and hitting shelves in 2012. From EW.

The Future of Google Books: According to the Philadelphia Inquirer.

Poetry for the Rest of Us: The new book by David Orr, subject of one of this year’s PW poetry profiles and poetry critic for the NYTBR, this time reviewed by the NYTBR.

Preserving Libraries: A blogger argues for the library as a sanctuary. From Groton Patch.

E-books As Study Tools: The Indiana Daily Student wonders whether they’re good study tools.

SXSW: Ogilvy & Mather Gives Back Cool Graphic Recordings

Calvin Reid -- March 13th, 2011

Jordan Berkowitz (l.) and artist Heather Willems in front of her drawing on "Health in Africa"

One of the first things I noted at certain SXSW panels was the presence of artists, set up with large boards and drawing tables, frantically drawing and sketching. Turns out they are part of Ogilvy Notes, an impressive visual project by the advertising agency Ogilvy & Mather that is creating large-scale “visual notes” or vivid graphic documentation of a selection of panels and keynotes, improvised and executed on the spot by a team of artists.

Directed by Jordan Berkowitz, Ogilvy & Mather executive director of Creative Technology & Innovation, Ogilvy Notes has brought together a team of artists who specialize in a process variously called “viz notes” or “graphic facilitation.” The artists set up at events, panels or business meetings and have the ability to sketch representations of the themes, topics and high points of the discussion on the spot, rendering a kind of graphic map of the event in a frenzy of typographic and representational design. Although the drawings have elements of comics, they really offer an overall field of clever, funny and pointed illustration that essentially recreates, visually, the event they feature.

“They’re a stream of consciousness creation,” said Berkowitz, who brought together a team of about six artists who specialize in this kind of on-the-spot graphic recreation. At a time when schools, businesses and the media have realized the importance of visual learning and visual storytelling, the project offers an inventive and memorable strategy for connecting and communicating topical issues with the public.

“It’s an amazing skill. How do you manage to spot and represent a point made in an ongoing discussion?” Berkowitz said. Ogilvy is using the artists as a way to “give back” to the SXSW community, Berkowitz said. He also emphasized that a team of editors went over the schedule to choose a broad range of panels—from “Public Transit Data and APIs” to “Black Women in Media”—“we didn’t want the content to be self-serving; there is breadth and depth in the subject matter,” said Berkowitz.

The project will document panels for three days over the weekend, producing a phenomenal 25-30 drawings a day! Once completed, the project will have about 85-90 drawings, and Ogilvy will turn them into original prints and make them available for free (you can pick them up today). In addition, the public can download free high-resolution versions of all the drawings at the Ogilvy Notes website.

“They’re constructed and created in the moment; people can find them through Twitter and Facebook,” Berkowitz said. Ogilvy has stacked the large drawings in a kind of “house of cards” sculptural installation on the top level of the Austin Convention Center. Berkowitz said the site is also encouraging artists interested in working in this manner to upload their sketchbooks and art, and they may get a chance to work on future graphic documentations.