‘Social Cost’

The Knowledge Futures Group’s Gabe Stein, in a sharp analysis of the peculiar market dynamics of scholarly publishing:

Because of this complexity, the academic publishing market imposes enormous social switching costs on its customers. For an individual researcher to change their behavior, they need to know that the university or lab which pays their salary, the funder which pays their research costs, and the library which pays for access to their field’s top journals won’t penalize them for it — or, in the best case, that they’ll be rewarded for it.

It’s true that there’s enormous inertia—resistance to experimentation—that props up the current publishing system. But Stein’s framing in terms of “social switching costs” hitched to a researcher’s three constituents (the university, funder, and library), while insightful, risks obscuring two facets of the problem. The first is that it’s really one of those constituents, the university, that imposes the switching cost, by way of the tenure, promotion, and hiring systems. The second missing piece is the role of researchers’ own beliefs: Many scholars have internalized the journal prestige economy. It’s not merely a complex opportunity structure or the friction of too many moving parts. The bigger issue is that (1) scholars want to get published in Nature, which (2) universities reward in formal and informal ways, often (3) drawing on other scholars who serve at every step of the tenure, promotion, and hiring processes. The problem of journal prestige lock-in, in other words, has a lot to do with belief and culture. We’ve met the enemy, and it’s us. One implication is that culture change, and efforts to overhaul tenure-and-promotion and hiring criteria, are where a lot of our attention should be.

Echoing some of the other feedback on Stein’s piece, I found the initial framing around for-profit startups and private research labs a bit jarring. In my view, profit-seeking and the university system are fundamentally misaligned. A more practical issue is incumbent acquisition. Many of the would-be disrupters that Stein mentions met their maker as new members of the Elsevier or Wiley families—one of a handful of desired “exits” for the VC firms that backed them. The main task (admittedly a long reach) is to restore the scholarly publishing infrastructure to the mission-aligned, nonprofit sector—which is one reason Stein’s Knowledge Futures Group is so indispensable. Stein agrees, I think, and the main thrust of the piece is that the next round of starry-eyed entrepreneurs will probably join their forebears in the startup graveyard. I look forward to the day when we find concepts like “total addressable market” (TAM) unseemly when applied to spheres, like education and medicine, oriented to the public good rather than private gain.

‘We need a clean break from commercial publishers’

Stanford’s Robert Kaplan, arguing in Times Higher Education for bringing academic publishing in-house [paywalled, alas], cites the author-excluding APC:

academics often can’t afford those high open access fees – especially faculty outside the sciences, the wealthier institutions and the developed world. This makes it more likely that journals will fill their pages with papers by authors who have money, as opposed to authors who have good ideas. Pay to play is simply the wrong model for academia.

Kaplan’s right that the current joint-custody arrangement—non-profit universities and commercial publishers—is odd, exploitative, and ripe for reversal. The article is understandably light on details, but one promising avenue—with some real momentum—is to link up the existing, nonprofit repository infrastructure with a post-then-review publication model. In Björn Brembs’ variation on the idea, publishing services like copy editing would be provided on a procurement basis.

A full divorce between academia and commercial publishers sounds utopian, for sure, but it’s worth imagining, in various schemes like Kaplan’s or Brembs’, if only to chip away at the sense of inevitability.

The Collections Silo

Iowa State’s Curtis Brundy, in an interview on the Opening the Future site:

Libraries have traditionally placed their scholarly communications and collections work in distinct organizational silos. This has meant, in many cases, that the values that inform a library’s work in scholarly communications do not actually inform the work done in collections. […] At Iowa State, we have just adopted a new collection and open strategies policy that centers our library’s values in our collection work. […] Aligning our scholarly communications work and values with collections helps a library to shift this spending from traditional collection procurement to open investing, which will help incentivize and support the transition to a more equitable scholarly publishing system.

This is indeed the rub: The scholarly comms librarian or office, in most U.S. libraries, is nowhere near the collections budget. The Iowa State policy is a promising first step, but what about a complete merger of the roles—so that scholarly communications and collections are served by the same, unified staff?

‘CC BY: A (Somewhat) Cautionary Licensing Tale’

I missed this Ubiquity Press post from last April:

Until 2021, Ubiquity Press published all of our books under the Creative Commons Attribution license (CC BY) […] However, at the end of 2020, we were made aware that another company had taken a very large number of open access book titles that had been published under the CC BY license, including those from Ubiquity Press, and was re-selling them under their own name.

I’ve always been mystified by the fetishism for CC BY, particularly weighed against the CC BY-NC alternative. Cynical commercial uptake of the kind that victimized Ubiquity is the predictable result. Reacting to the same news, Martin Eve wrote:

I have to admit, today, that I was wrong about the risk of others reprinting open-access monographs produced under a Creative Commons license. […] Perhaps the -NC license should be considered, after all. Again, I admit that I may have been overly naive/trusting/hopeful in previously spurning such a license. The world is such a disappointment.

‘The Peer-Review Crisis’

Colleen Flaherty, writing for Inside Higher Ed:

Roland Hatzenpichler, assistant professor of environmental microbiology at Montana State University, said it’s been about a year since he started refusing to review for for-profit journals for free. In response to one journal’s recent request that he review an article, for instance, Hatzenpichler thanked the editor for the invite but said that because the publication is owned by a major for-profit company with high profit margins, “my consulting fee of $200 per hour applies. Please let me know if these terms are acceptable and I will consider whether I can accept the invitation and/or suggest alternative reviewers. Please note that I will charge a one-time fee of $50 for the latter because I would be effectively doing the work you are being paid for free otherwise.” No journal has taken Hatzenpichler up on his offer thus far. He doesn’t necessarily expect any to do so. He won’t stop asking, however.

Nice idea, nice wording (“my consulting fee… applies”). Asking the oligopolists for payment is the easy case, however. Much thornier is whether all reviewers should be paid. My thinking on this question has evolved. I used to view volunteer reviewing in terms of the academic’s vocation, as a potent symbol of an anti-pecuniary calling. The first three words of the original Budapest declaration are “An old tradition”—the “willingness of scientists and scholars to publish the fruits of their research in scholarly journals without payment, for the sake of inquiry and knowledge.” Reviewing, likewise.

The main reason that conviction has eroded is the broken academic contract: the turn to adjuncts, the decimation of humanities disciplines, and the related turn to market values in higher ed worldwide. If universities don’t hold up their end, citing the scholarly vocation to review for free feels like part of the swindle.

‘Amazon joins race for quantum computer with new Caltech center’

Jeanne Whalen, writing for The Washington Post (paywalled) on Amazon’s play for quantum computing:

Amazon will base its quantum team at a new center on the campus of Caltech in Pasadena, Calif., which officially opens this week. Caltech described it as the first “corporate-partnership building” on the university’s campus, showing “Caltech’s interests in bringing fundamental science to the marketplace.”

According to The Post, the deal entails Amazon renting land from Caltech for the center, which will be run by a pair of on-leave Caltech professors now working for the tech giant. The article quotes one of the Caltech prof-turned-Amazon-employees:

[Brandao] said Caltech would also benefit because academics need the deep pockets of industry to scale up quantum machines. “It’s not cheap to do that, it’s not easy to do that,” he said. “It’s not something that people can do just at universities. So we need industry there.”

In many ways this open embrace of corporate research is an old story, dating back to MIT in the early 20th century or—in its recent wave—biotech entanglements in the 1980s. What’s new is how commonplace—how humdrum—the mutual bear hug has become.

‘Replacing Academic Journals’

Björn Brembs and nine co-authors, in a sweeping paper posted to Zenodo last fall:

There is now a very real threat of a single (or few) corporations effectively owning all scientific data, both research data and user data, on top of their share of the scholarly literature. […] The break will not be technological, as all the technology for the disruption already exists, but with regard to governance. In general, there is broad agreement on the goal for a modern scholarly digital infrastructure: it needs to replace traditional journals with a decentralized, resilient, evolvable network that is interconnected by open standards, that allow seamlessly moving from one provider to another, under the governance of the scholarly community, de-centering the journal article as the sole scientific output that “counts”.

It’s an important paper, with a sharp diagnosis of the oligopolist stranglehold—a “public good in private hands,” as the authors put it.

There’s lots to unpack in the Brembsian alternative proposed here. One cornerstone is the adoption of open standards that—as best I understand it—would enable university repositories and nonprofit, community-led platforms like Open Library of Humanities (OLH) to form a kind of global, interoperable library. A second cornerstone is a regulated market for services. In an open procurement process, publishers and other firms—nonprofit or otherwise—would submit bids for peer review services, for example, or for copy editing or even writing software. The idea is that a regulated marketplace will, through competition enabled by open standards, discipline the overall system’s cost.
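
As one concrete illustration of the kind of open standard already in place (my example, not necessarily the paper’s), most repositories expose the OAI-PMH harvesting protocol, which lets anyone pull records from any compliant repository in the same way. A minimal sketch, assuming the `requests` library:

```python
import requests
import xml.etree.ElementTree as ET

# Namespaces used by OAI-PMH responses and simple Dublin Core records.
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest_titles(base_url: str, max_records: int = 10) -> list[str]:
    """Harvest record titles from any OAI-PMH-compliant repository."""
    resp = requests.get(
        base_url,
        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
        timeout=60,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    titles = [el.text for el in root.findall(".//dc:title", NS)]
    return titles[:max_records]

if __name__ == "__main__":
    # arXiv's public OAI-PMH endpoint; any compliant repository works the same way.
    print(harvest_titles("http://export.arxiv.org/oai2"))
```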

It’s a fascinating proposal, one that—as the paper notes—could be implemented with existing technologies. The problem is the lever of change. The incumbent publishers’ entrenched position, Brembs et al explain, renders a first move by libraries or scholars impractical. That leaves funders, whose updated rules and review criteria could, the paper argues, tip the incentive structure in the direction of an open, journal-free alternative.

There’s a lot to chew on in what is an exciting, radical blueprint. One facet of the current system that the paper underplays, however, is the journal system’s prestige lock-in. It’s not just outdated procurement rules that stand in the way of the journal’s demise, but also entrenched features of the scholarly reward system. That system, to a large extent, exists in scholars’ heads, as reinforced and reproduced by the tenure and promotion process. Maybe funder rules can erode those norms, but work to reform the reward system should happen in tandem with funder-driven incentives—along the lines that Juan Alperin and colleagues are suggesting.

A second friendly annotation to the Brembs paper has to do with its implicit orientation to nomothetic disciplines. The paper designates the replication crisis as one of three driving problems in the current system. Likewise, in the paper’s imagined future, there’s a discussion of hypotheses generated by machine learning models, which, the authors write, “will help mitigate researchers’ all too human tendencies to cut corners, tell stories or cheat.” Hmmm. The contrast between machine and human is too tightly drawn, and the premise of AI value-freedom is hard to defend. More to the point, replication and hypothesis testing simply do not apply to idiographic fields, many of which reside in the humanities. There’s a hidden parochialism in the paper’s assumptions about what academic knowledge is.

Quibbles aside, the paper is a thrilling template for an alternative future.

A final thought: I hope that Brembs et al are in contact with the Confederation of Open Access Repositories (COAR), which just received a $4 million grant from Arcadia for its ongoing project to build a “standard, interoperable, and decentralised approach” to a global network of networks.

On the heels of the UNESCO and Budapest recommendations, with a boost from Brembs et al and COAR, maybe—just maybe—an APC-free, community-led scholarly publishing system is within reach.

‘Contributor Roles Taxonomy (CRediT) Formalized as NISO Standard’

Speaking of persistent identifiers, NISO anointed the CRediT taxonomy as an official standard in February. The core bundle of roles to credit has been around since 2014 (a rough machine-readable sketch follows the list):

  • Conceptualization
  • Data curation
  • Formal analysis
  • Funding acquisition
  • Investigation
  • Methodology
  • Project administration
  • Resources
  • Software
  • Supervision
  • Validation
  • Visualization
  • Writing – original draft
  • Writing – review & editing
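
In practice, a CRediT role is attached to an individual contributor in a paper’s metadata. A minimal, illustrative sketch in Python; the field names are mine (not the official NISO or JATS serialization), and the contributor names and ORCID iD are placeholders.

```python
from collections import Counter

# Illustrative contributor records with CRediT roles attached.
# Field names and values are placeholders, not the NISO serialization.
contributors = [
    {
        "name": "A. Researcher",
        "orcid": "https://orcid.org/0000-0000-0000-0000",  # placeholder iD
        "credit_roles": ["Conceptualization", "Methodology", "Writing – original draft"],
    },
    {
        "name": "B. Collaborator",
        "credit_roles": ["Data curation", "Formal analysis", "Writing – review & editing"],
    },
]

# Tally who did what: the contributor-level reporting the taxonomy is built to enable.
role_counts = Counter(role for c in contributors for role in c["credit_roles"])
print(role_counts.most_common())
```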

It’s a fascinating reflection of what academic knowledge is: modular, stepwise, nomothetic, hypothesis-driven, grant-funded, and team-based. It’s knowledge as refracted through the well-funded lab—science as a metonym for academic inquiry writ large. Post-war “big science” in taxonomic relief, as endorsed by 34 organizations (including many of the giant publishers). Here’s hoping an STS grad student is tidying up her dissertation on the process.

Neither “monograph” nor “humanities” appears in the lengthy standards doc, of course. The idiographic fields—with their tendency toward unfunded, lone-wolf inquiry—aren’t a good fit for NISO Z39.104-2022. Conceptualization first, writing second? “I don’t know what I think until I write it down.”

‘Why Publishers Should Care About Persistent Identifiers’

Phill Jones and Alice Meadows, in a Scholarly Kitchen post urging publishers to go all in with persistent identifiers like ORCID:

Over the last few years, there has been significant progress in developing recommendations, policies, and procedures for creating, promoting, and using persistent identifiers (PIDs). […] Publishers — and publishing system providers — were early and enthusiastic adopters of persistent identifiers. […] Nevertheless, we think it’s fair to say that many — probably most — publishers aren’t realizing the full potential of PIDs. DOIs are being registered for most publications (especially Crossref DOIs) and ORCID iDs are being collected, but their full value — for authors and publishers alike — isn’t currently being exploited.

“Exploited” is an interesting verb in this context.

The benefits of persistent identifiers—including new-ish entrants like the ROR organization ID—are obvious, and Jones and Meadows make the case in this and a follow-up post.

But there are reasons, too, to be cautious about a “PID-optimized world,” to borrow their phrase. One is that an ID-for-everything system promises to feed the quantified research-assessment culture—the one that’s ruined UK higher ed and, in its worldwide overspread, is complicit in the re-casting of the university as an economic engine.

The other, related issue is that all this interlinked data will make it easier for the big publishers to create prediction products, which they then sell back to universities and research-assessment offices at steep premiums. We are, in a PID-optimized world, shoveling coal into the furnace of surveillance publishing.
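
To make the interlinking concrete: Crossref’s public REST API already exposes much of this connective tissue, and a few lines of Python will pull the ORCID iDs, affiliations, and funder identifiers attached to any DOI. A minimal sketch, assuming the `requests` library; the DOI at the bottom is a placeholder.

```python
import requests

def linked_identifiers(doi: str) -> dict:
    """Pull the persistent identifiers attached to one DOI via Crossref's public API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    authors = [
        {
            "name": f"{a.get('given', '')} {a.get('family', '')}".strip(),
            "orcid": a.get("ORCID"),  # present only when deposited by the publisher
            "affiliations": [aff.get("name") for aff in a.get("affiliation", [])],
        }
        for a in work.get("author", [])
    ]
    return {
        "doi": work.get("DOI"),
        "title": (work.get("title") or [None])[0],
        "funder_ids": [f.get("DOI") for f in work.get("funder", [])],  # Funder Registry DOIs
        "authors": authors,
    }

if __name__ == "__main__":
    print(linked_identifiers("10.1234/placeholder-doi"))  # placeholder DOI for illustration
```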

Jones and Meadows, indeed, pitch the business case to publishers directly: “This information,” they write, “doesn’t just flow from publisher systems; it can flow into them too, enabling […] better (and easier) business insights.”

Yikes. And who knew there was a PIDapalooza?

‘Open access in low-income countries — open letter on equity’

From a new and important open letter signed by 17 Nobel laureates and more than 30 international organizations:

The exorbitant article processing charges (APC) that come hand-in-hand with the Author-Pays OA publishing model deepens the inequality between researchers from developed countries and those in lower income countries. […] In developing countries, APC associated with the publication of scientific articles could represent a large proportion of the annual grants (as much as 35-130%).

The letter, spearheaded by a group of young scientists, calls out Plan S for its friendly fire, and cites the momentum behind the UNESCO Recommendation. Among other suggestions, the letter calls for the creation of an ad hoc worldwide committee, a Global Initiative for Equitable OA Models.

This is one to watch—and join.

‘FTC warns it will go after ed tech companies misusing children’s data’

Tonya Riley, writing for Cyberscoop:

The Federal Trade Commission voted 5-0 on Thursday to issue a policy statement warning education tech companies against using data collected from children via education services for additional commercial purposes, including marketing and advertising.

A promising move … for children under 13. Teens and, well, college students remain, as Riley puts it, “susceptible to industry surveillance.”

‘Wiley and Universidad Nacional Autónoma de México (UNAM) Sign Open Access Agreement’

From the Wiley press release:

This agreement, which marks Wiley’s first in Latin America, positions UNAM as the leader and pioneer in the development of open access research, allowing its researchers to publish most if not all of its articles open access across Wiley’s portfolio of hybrid journals.

It’s a strangely spare announcement, with ambiguous language (“most if not all,” hybrid journals only?). Without more details, the “deal” looks like a PR stunt: Latin America—home to the nonprofit, fee-free alternative—joins the read-and-publish club.

‘Partnership with Max Planck Society marks Springer Nature’s largest open access book deal’

From Springer Nature’s press release announcing an OA book deal with Max Planck:

The initial three-year agreement, live as of 1st January 2022, will enable authors from all 86 Max Planck Institutes to receive a discount on the standard Book Publishing Charge (BPC) to publish their book OA. MPDL will contribute central funding toward the coverage of the discounted BPC, lowering the costs for authors even further. The discount and funding will be available across all of the publisher’s book imprints, under a CC BY licence, ensuring their work is freely accessible and discoverable to all communities across science, technology, medicine, the humanities and social sciences. They will be available to readers around the world via Springer Nature’s content platform SpringerLink. 

Deals like this one are an extension in spirit of the journal-based read-and-publish model—even if there’s no straightforward “read” component. Springer Nature charges $15,000 to publish an OA book, via a Book Publishing Charge (BPC). Max Planck researchers won’t see that charge, as the Max Planck Digital Library will cover what remains after the deal’s discount. The problem is that deals like this prop up the BPC system, which excludes the vast majority of the world’s scholars. Researchers lucky enough to work in wealthy European countries and a handful of rich North American universities, meanwhile, accrue all the OA benefits.

Book Analytics Dashboard

From the Curtin Open Knowledge Initiative (COKI) announcement:

The Book Analytics Dashboard Project (2022-2025) is focused on creating a sustainable OA Book focused analytics service. […] In addition to scaling workflows, infrastructure and customer support, the Demonstration Project is developing a long-term plan for housing, maintenance and funding of the analytics service as a sustainable community infrastructure.

OA book usage analytics is a mess right now—a labor-intensive harvesting of (in Lucy Barnes’ phrase) “apples, oranges, grapes, kiwi fruit and pears.”

Godspeed to the COKI people.

‘The Gig Economy Comes for Scholarly Work’

Kate Eichhorn, writing in the Chronicle of Higher Education [paywalled]:

In early March, Chegg launched a marketing campaign to convince educators to sell their teaching materials to the company’s recently launched Uversity platform. Chegg describes Uversity as a “collaborative learning library,” though many educators may disagree with that characterization. […] The message went on to break down Chegg’s assessed valuation of different types of curricular materials, with a caveat that the listed price reflects the company’s current 50-percent bonus — a limited-time offer. Apparently, if I rushed to take advantage of this promotion, I could cash in to the tune of $375 per practice exam (limit 4), $375 per study guide (limit 4), $120 for lecture notes (limit 15), $75 per practice quiz (limit 5), $120 per case study (limit 5), and $120 for lab notes (limit 10).

Chegg, the cheating-as-a-service (CAAS) platform, is preying on underpaid academics:

That there’s a market for Chegg’s predatory practices at all reflects the grim reality of higher education in the 2020s. As stories about the New Faculty Majority reveal, a critical mass of faculty members don’t make enough money to support themselves and their families. As a result, side gigging is already a norm in higher education.

‘What’s Your Tier? Introducing Library Partnership (LP) Certification for Journal Publishers’

Rachel Caldwell and Robin N. Sinn, writing in Commonplace in November, propose a journal-publisher scorecard. In their vision,

a publisher’s actions are quantified based on points earned for practices that align with the values of libraries and many institutions of higher education. The more points a publisher earns, the higher their overall score. LP certification is similar to the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) architectural certification. Where LEED certification assesses a building project’s practices in “credit categories” such as water efficiency or indoor environmental air quality, LP certification assesses a publisher’s practices in four categories: Access, Rights, Community, and Discoverability. The overall score a publisher earns places them in one of four tiers.

The idea is to help librarians make informed choices about where to invest limited OA dollars, through what is, in effect, a point-based rating system. Library-by-library vetting is utterly impractical, especially when multiple values (like licensing and fee-free OA) are at stake.

It’s an exciting idea, one that could streamline and justify libraries’ mission-aligned investing. Caldwell and Sinn include a pilot run-through of their scoring with five publishers. Elsevier, with 12 points, is easily bested by UC eScholarship and the Society for Neuroscience—40 and 44 points, respectively.
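
Mechanically, the scheme is straightforward: sum the points a publisher earns across the four categories and map the total to a tier. A toy sketch of that logic in Python, using the pilot totals above; the per-category splits and tier cutoffs here are my own invention, not Caldwell and Sinn’s.

```python
# Toy point-based certification scorer. Category splits and tier cutoffs are
# illustrative only; the real LP rubric defines its own values.
TIER_CUTOFFS = [(40, "Tier 1"), (30, "Tier 2"), (20, "Tier 3"), (0, "Tier 4")]

def lp_tier(category_points: dict) -> tuple:
    """Sum points across Access, Rights, Community, and Discoverability; assign a tier."""
    total = sum(category_points.values())
    for cutoff, tier in TIER_CUTOFFS:
        if total >= cutoff:
            return total, tier
    return total, "Uncertified"

# Pilot totals reported by Caldwell and Sinn, split into made-up category scores.
print(lp_tier({"Access": 4, "Rights": 3, "Community": 2, "Discoverability": 3}))    # Elsevier: 12
print(lp_tier({"Access": 14, "Rights": 12, "Community": 9, "Discoverability": 9}))  # SfN: 44
```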

Their proposed scheme (dubbed Library Partnership certification) could easily fold into mission-aligned funding exchanges like LYRASIS’s OACIP. Even a badging system would make sense.

More on this project soon.

Scopus + arXiv

From arXiv’s blog last week:

New among the membership benefits that all institutions receive, is access to a personalized digital dashboard, containing an overview of the articles their researchers have posted on the platform. […] To provide this information, arXiv is partnering with Scopus to optimize that publication data and increase institution’s visibility of their researcher contributions.

Yikes: arXiv in bed with Elsevier. As arXiv later clarified, Scopus is using arXiv’s open APIs, so there’s no private data capture. As Open Book Publishers’ Rupert Gatti wrote on Twitter:

The additional info is helpful – suggesting that Scopus is using the open api as anybody else can. But what does it mean that Scopus was ‘selected’ to provide submission data? Could another entity (eg @OpenAlex_org) provide an alternative for you? any Scopus exclusivity clause?
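
For what it’s worth, the open API Gatti mentions really is open to anyone: a few lines of Python against arXiv’s public query endpoint return the same kind of submission metadata, no Scopus required. A minimal sketch, assuming the `requests` library:

```python
import requests
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # namespace for arXiv's Atom responses

def recent_arxiv_entries(query: str = "cat:cs.DL", max_results: int = 5) -> list[dict]:
    """Query arXiv's public API, the same open interface available to anyone."""
    resp = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": query, "start": 0, "max_results": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return [
        {
            "id": entry.findtext(f"{ATOM}id"),
            "title": " ".join((entry.findtext(f"{ATOM}title") or "").split()),
            "authors": [a.findtext(f"{ATOM}name") for a in entry.findall(f"{ATOM}author")],
        }
        for entry in root.findall(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    # cs.DL is arXiv's Digital Libraries category.
    for e in recent_arxiv_entries():
        print(e["id"], "-", e["title"])
```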

‘Introducing the Open Book Collective’

The good people at COPIM have officially announced the Open Book Collective:

The collective will bring together OA publishers, OA publishing service providers, libraries, and other research institutions to create a new, mutually supportive ecosystem for the thriving of OA book publishing. At the heart of the work of the Open Book Collective (OBC) will be a new platform. This platform will make it far quicker and easier for libraries and others to financially support different OA publishers and service providers via membership offerings.

The in-progress platform, when it’s unveiled this summer, will be the best example of what I’ve called mission-aligned funding exchange—where funders provide direct support to OA publishers, bypassing the author-excluding APC/BPC. One especially exciting facet of the design is that the Collective will support publication bundles, including the ScholarLed publishers (of which mediastudies.press is a member).

Annual Reviews Goes All-In with Subscribe to Open

Annual Reviews, in a press release:

Today, the leading nonprofit publisher Annual Reviews announced that over the next 18 months they will make their entire portfolio of 51 academic journals freely available to everyone under a new model called Subscribe to Open. These highly cited journals cover topics across the sciences, including astronomy, environmental science, genomics, marine science, public health, and sociology

Annual Reviews, together with Berghahn, ran the first round of S2O pilots last year. They apparently went well.

‘When Consolidation Provides Benefits as well as Market Power’

I missed this odd bit of Panglossian praise for corporate consolidation, published in the Scholarly Kitchen last fall:

One very good reason to favor consolidation is when it provides scale and financial stability to a specialized or niche provider. Such a specialized provider may find it a struggle, if not impossible, to sustain itself over time without such scale. […] Users also benefit when a consolidation serves to streamline and integrate workflows. This is such an obvious benefit to end-users that it is often surprising how long it takes for some consolidations to deliver on workflow benefits. […] Paradoxically, consolidation may serve to increase competition, not weaken it. This happens when a particular market segment already has a dominant player, but consolidation among other, smaller organizations in the same segment can put pressure on the leading incumbent. In the research publishing area any consolidation that does not include Elsevier potentially makes it possible to compete more effectively with Elsevier.

Wowzer. Both Roger Schonfeld and Joseph Esposito are better than this, whatever caveats and qualifications they offer. They dismissively refer to “frequent, at times reflexive, customer opposition to acquisitions in these sectors, motivated by concerns about pricing.” That’s Newspeakish as a characterization: Yes, lots of folks who care about scholarly communication, librarians very much included, worry about monopoly pricing power. But we’re also worried about a host of other factors, including the monetization of scholars’ data, blocked access to reading (subscriptions) or authorship (APCs), and the fundamental misalignment of values between publicly traded for-profits and the pursuit of knowledge.

A benefit of consolidation is that … it will help another conglomerate bloat up to rival the hyper-consolidated Elsevier?? That’s a depressing surrender of agency and possibility. Et tu, Roger?