I missed this important Brookings report from September:
Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges vary the cost of attendance to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second help decide how to disburse scholarships to convince more of those prospective students to attend the college.
The report recommends that universities stop using the algorithms to assess likelihood to enroll or to mete out aid:
The stated goal of enrollment optimization algorithms is to incentivize enrollment at the precise maximum tuition (or minimum scholarship) an applicant is willing to pay to attend that college. Vendors unanimously market their enrollment management software in this way—saying they intend to allocate the “minimum amount of aid necessary to meet and exceed your [enrollment] goals.”
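The two-stage process the report describes — first predict an applicant's likelihood to enroll, then find the smallest scholarship that meets an enrollment target — can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual model: the logistic form, the `base_logit` and `aid_coef` parameters, and all dollar figures are invented for the example.

```python
import math

def enroll_probability(aid: float, base_logit: float, aid_coef: float = 0.0001) -> float:
    """Stage 1: a toy logistic model of an applicant's enrollment likelihood,
    which rises as the scholarship offer (aid, in dollars) grows."""
    return 1 / (1 + math.exp(-(base_logit + aid_coef * aid)))

def minimum_aid(base_logit: float, target: float = 0.5,
                step: float = 500, cap: float = 30000) -> float:
    """Stage 2: the smallest scholarship (searched in $step increments)
    that pushes predicted enrollment probability past the target."""
    aid = 0.0
    while aid <= cap:
        if enroll_probability(aid, base_logit) >= target:
            return aid
        aid += step
    return cap  # applicant unlikely to enroll even at the aid ceiling

# An applicant predicted at roughly 27% likelihood with no aid:
offer = minimum_aid(base_logit=-1.0, target=0.5)
```

The point of the sketch is the logic, not the numbers: the search stops at the cheapest offer that clears the threshold — the "minimum amount of aid necessary," in the vendors' own words.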
Twitter has rightly blown up over Taylor & Francis’s appalling “Accelerated Publication” grift:
If time is critical in your publishing strategy, our Accelerated Publication options are available on a select number of biomedical journals and can help you get your research into a high-quality, peer-reviewed journal, fast.
With Accelerated Publication, you can:
coordinate your publishing schedule with conferences, drug approvals, and drug launches
keep competitive advantage by getting your discoveries to market quickly
speed up the peer review process without sacrificing quality or rigor
$7,000 to publish in 3-5 weeks.
From a new Penn State study on online program management (OPM) deals:
OPM contracts are—to be blunt—one strategy for extracting profit from a mostly nonprofit educational model. And, also bluntly, they are a way of doing so without attracting the level of regulatory scrutiny that is connected to the actual provision of for-profit higher education. […] Several firms we analyzed have legacy connections to for-profit higher education, and most are attracting venture capital investments (and a few initiating initial public offerings), which suggests confidence in the profitability and returns on investment that these vehicles provide.
If regulation for outsourcing online program management moves too quickly and without caution, the outcome could decimate many colleges and universities by drying up their ability to outsource with for-profit companies—or, worse still, create large-scale issues for students when online programs cannot function internally within the institution offering the program.
But the point isn’t to preserve the OPM industry. The point instead is to keep higher-ed mission-aligned and nonprofit—to protect students and universities from windfall-extracting profiteers.
Rebecca Bryant, Charles Watkinson, and Rebecca Welzenbach, writing for the Scholarly Kitchen on a science-humanities gap in metadata retrieval:
The ability to harvest accurate and mostly complete metadata for researchers in STEM disciplines is quite good in all of these examples. However, metadata harvesting from any of these sources provides disappointing results for humanities and some social science scholars. When we look at a researcher’s profile, we may notice that a book (or more than one) is missing.
They’re writing about the brave new world of RIM. Short for “Research Information Management,” RIM software is mainly used by universities and other research entities to measure and predict scholars’ productivity.
Point taken: The humanities, and social science book disciplines too, have some metadata problems. But I’m not sure we want to oil the metric gears, given the way that quantified measures of “impact” have distorted scholars’ truth-seeking commitments for decades. Worse still is the fact that the major players in the RIM market are Elsevier (Pure) and Springer Nature parent Holtzbrinck (Elements). In both cases, the publishing oligopolists are harvesting academics’ behavior and reselling it as prediction products—on top of their windfall subscription-and-APC profits.
Yes, metadata resiliency is a good thing. But let’s not inadvertently build the road to surveillance publishing.
Brilliant paper by Alexander Grossmann and Björn Brembs, last revised in July:
The affordability problem of scholarly publishing, i.e., the supra-inflationary price increases with stagnating library budgets, has been a hot topic for more than three decades. In recent years, […] the average cost of an article has emerged as a useful measure with which to compare different business models. However, most authors refer to the prices charged by the publisher, not the actual cost to the publisher. One consequence of this mis-attribution is a potential overestimation of the actual costs of scholarly publishing due to the inclusion of the business models and pricing strategies of publishers into the calculation. To close this gap, here we provide a bottom-up calculation of the cost of efforts and services which are required to achieve a certain service level in order to publish an academic journal article.
Grossmann and Brembs’s estimate of cost per article? Something like $200 to $1000, depending on the journal set-up, with the cost for a “representative scholarly article” at about $400. That’s roughly a tenth of the commercial publishers’ typical APC.
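The bottom-up logic is simple arithmetic: itemize the services actually required to publish an article and sum their per-article costs. A back-of-the-envelope sketch — the line items and dollar figures below are invented placeholders, not the paper's own accounting — lands at the same order of magnitude:

```python
# Hypothetical per-article service costs, in USD. These are
# illustrative stand-ins, not figures from the paper itself.
service_costs = {
    "peer review management": 150,
    "copyediting": 100,
    "typesetting and XML": 80,
    "hosting and archiving": 40,
    "DOI and indexing": 30,
}

cost_per_article = sum(service_costs.values())      # 400

# Compare against a common commercial APC order of magnitude:
typical_commercial_apc = 4000
ratio = cost_per_article / typical_commercial_apc   # 0.1
```

Whatever the exact line items, the gap between production cost and list price is the paper's core finding: the difference is business model, not cost.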
From a March press release announcing the “EnableOA” publishing platform:
The Amnet and Coko Foundation partnership has paved the way for EnableOA: an open access publishing platform for the scholarly community. It is a comprehensive framework with a slew of modular and smart features that allow customization of the platform to best suit the publishing requirements of a user. The platform is designed to reduce publishing cost and time while supporting greater collaboration, research and development integrity, distribution, and transparency.
I hadn’t heard of Amnet before. It’s a buzzword-heavy, India-based contract publisher, specializing in “Smartshoring®”: “a smart balance between onshore and offshore resources enriched with strategic global partnerships across multiple geographies.” The company—which is part of a larger holding group—is clearly moving into scholarly publishing, with the EnableOA journal and book platforms.
The weird thing is that the platforms are built on two core pieces of Coko software: Kotahi (journals) and Editoria (books). Coko, of course, is short for Collaborative Knowledge Foundation, run by energetic coder-entrepreneur Adam Hyde.
Hyde and Coko are fixtures in the nonprofit scholarly publishing world. They have a major hand in the Open Publishing Awards and Open Publishing Fest, for example. And yet there’s been an apparent shift in focus at Coko. One way to capture this is by homepage tagline. Up through 2019:
Transforming Knowledge Creation and Sharing
The Coko Foundation is a non-profit organization transforming how knowledge is created, improved, and shared. Our goal – to build community-owned solutions in open science, open research, and open education.
Today:

We build, you publish.
We use open-source technology to build the best publishing tools in the business. Whether you’re looking to extend an existing platform or build something new, we can help you build better, in less time and for less money.
That’s a pretty big change. And so is for-profit Indian outsourcing.
Jeffrey R. Young, writing for EdSurge:
A new startup wants to shake up the textbook market by making it easier for professors to adopt courseware created at colleges and universities rather than by commercial textbook publishers. Its solution: Create a new marketplace where instructors can find them. A key premise of the Lexington, Mass.-based company, called Argos Education, is that the way textbooks are created and revised is due for a reset. Namely, it wants to help build an open-source system that lets professors piece together online course materials from a variety of sources, and also offer their own materials for sale to colleagues around the world.
The idea for a mix-and-match courseware marketplace is a great one, but holy hell—why was the initiative organized as a for-profit company? The underlying Sojourner platform, to be built from a planned Carnegie Mellon/Arizona State integration of two existing software systems, will be open source.
But Argos, the marketplace vendor, is a profit-seeker. As countless mission-aligned, mom-and-pop ventures in scholarly publishing have already shown, Argos will be—if, and as soon as, it is successful—a sitting duck for big-publisher acquisition. The EdSurge article inadvertently makes the point: The Arizona State team working on Sojourner had built a commercial platform, Smart Sparrow, which it sold to … Pearson in 2020.
From the Open Library of Humanities blog:
We are delighted to announce that the Open Library of Humanities is now open to expressions of interest from subscription journals in the humanities seeking to move to a gold open access (OA) publishing model without author-facing charges (‘diamond’ OA).
The journal-flipping posters are the best part.
I have a new working paper, “Surveillance Publishing,” up on SocArXiv:
In place of Google’s propensity to buy, Clarivate is selling bets on future scholarly productivity and impact, among other academic prediction products. This essay lingers on a prediction too: Clarivate’s business model is coming for scholarly publishing. Google is one peer, but the company’s real competitors are Elsevier, Springer Nature, Wiley, Taylor & Francis, and SAGE. Elsevier, in particular, has been moving into predictive analytics for years now. Of course the publishing giants have long profited off of academics and our university employers—by packaging scholars’ unpaid writing-and-editing labor only to sell it back to us as usuriously priced subscriptions or APCs. That’s a lucrative business that Elsevier and the others won’t give up. But they’re layering another business on top of their legacy publishing operations, in the Clarivate mold.
From the MIT press release:
[Direct to Open] moves professional and scholarly books from a solely market-based, purchase model, where individuals and libraries buy single eBooks, to a collaborative, library-supported open access model. Instead of purchasing a title once for a single collection, libraries now have the opportunity to fund them one time for the world through participant fees. To date, over 160 libraries and consortia from across the globe have committed to support the D2O initiative.
A prominent proof of concept.
Dorothea Salo, in a delightfully feisty OA chapter on preservation of digital humanists’ work and evidence:
Most institutions investing anything at all in the digital humanities have only one to a mere handful of digital humanists on the faculty. These paltry few face the Sisyphean task of successfully persuading their library, campus IT organization, and campus administrators to allocate significant money and staff toward digital preservation. Such an appeal typically only happens in the first place if digital humanists are already lucky enough to have access to basic computing and support, which is often not the case.
It gets worse:
Digital humanists find themselves countered, not to say opposed, in their efforts to secure support and funding by a much greater number of faculty humanists not identifying with the digital humanities, who think of libraries only as print-book purveyors and believe products of digital culture barely or not at all worth preserving, parallel to historic reactions to the advent in the West of printed codices (as opposed to scribed manuscripts), photography, film, television, and comics/graphic novels.
‘Creating what we seek to measure – How to understand the performative aspect of impact evaluation?’
Jorrit Smit and Laurens Hessels, writing for the LSE Impact blog on the “performativity of evaluation”:
… the way in which specific evaluation methods assume, and thereby produce, different understandings of research and impact. In a recent paper (open access) we explore this very question, building on previous constructivist studies that have proposed how evaluation significantly shapes the contours of the object it valuates.
It’s a smart point, that we humans change in reaction to ostensibly descriptive measures. But that fact might call into question the practice of assessment itself, especially in light of Goodhart’s law.
Jeffrey Young writing for EdSurge:
Facebook (er, I guess now Meta) announced that it would partner with Coursera and edX to help push Meta’s curriculum in augmented and virtual reality, which it calls the Spark AR Curriculum. A spokesperson for edX, which started as a nonprofit by Harvard and MIT but is in the process of being sold to for-profit 2U, said the group would share more information about the partnership and its broader shifts in the coming weeks.
So it comes to this: Harvard and MIT sell off nonprofit edX, which pivots to Facebook.
Sue Curry Jansen and I, in a double review of Kate Crawford’s Atlas of AI (Yale, 2021) and Frank Pasquale’s The New Laws of Robotics (Harvard, 2020):
What Crawford and Pasquale draw out is that AI is a way of seeing the world—a lay epistemology. When we see the world through the lens of AI, we see extraction-ready data. We see countable aggregates everywhere we look. We’re always peering ahead, predicting the future with machinist probabilism. It’s the view from Palo Alto that feels like a god’s eye view. From up there, the continents look patterned and classification-ready. Earth-bound disorder is flattened into clear signal. What AI sees, in Crawford’s phrase, is a “Linnaean order of machine-readable tables.” It is, in Pasquale’s view, an engineering mindset that prizes efficiency over human judgment.
The piece appears in The b2o Review.
From Scholastica’s 2020 “State of Journal Production and Access” report:
Transformative agreements were a lower rated option, with 54% of survey respondents selecting “low” or “no” potential and only 32% of respondents selecting “some” or “very” high potential.
OBC will act as a collector of revenues accrued for new membership packages from institutions – primarily academic libraries – with this revenue then passed on to OA book publishers and infrastructure providers. A portion of the revenue will be retained to cover the platform’s administrative overhead costs.
The Open Book Collective, when it’s launched in the spring, will help solve the APC/BPC author-exclusion problem, by directly linking funders to publishers (and infrastructure stewards). I see the OBC as the book counterpart to the journal-centric OACIP from LYRASIS. They are—OACIP in practice, OBC in active development—the two main examples of the mission-aligned funding exchange.
Of course people can publish elsewhere as well. I suspect in the short term Octopus will act a bit like a preprint server in that respect. Authors will hopefully want to publish quickly in Octopus to ensure they get the credit for their work as soon as possible, protecting their “priority” over it, but then write it up for a traditional journal at some point. In the longer term I think that second stage will modify, so that journals will not be taking primary research papers and organizing peer review, but instead be commissioning reviews, syntheses, editorials, and short “news” articles announcing findings — so authors will instead be pitching those once they have published in Octopus.
There’s lots to digest about Octopus, including its modular model. What’s most interesting about Freeman’s interview, though, is her vision for the future of journals. Plainly influenced by her background in media (and science journalism in particular), she sees journals moving toward public- or professional-facing summary pieces, with the soup-to-nuts details for fellow specialists hosted on Octopus or its future peers:
It struck me that journals are trying to fulfill two roles at once in the current system. On the one hand they are disseminating important findings to their readers, particularly to professionals who may be able to implement those findings in practice. On the other hand they have the somewhat thankless task of being the primary research record, where researchers publish all their work in all detail to a minority readership. These two roles pull in opposite directions: an article that gives full details, all the twists and turns and blind alleys of the real research process, does not make a nice, easily-digested narrative for those who mainly want to know the findings and implications; an article that supplies the ‘edited highlights’ does not give the full details that other researchers need to learn from. Journal articles are best suited to the dissemination role — the “news and views” and editorialized, narrative-driven pieces.
It’s an interesting vision for the division of publishing labor—readable digests, separated out from the weedier research record. Freeman just got £650,000 from Research England to build out her end: Octopus is slated to launch in spring 2022.
Today, LA Referencia, RedCLARA and the three African regional research and education networks – ASREN, WACREN and UbuntuNet Alliance – signed a Memorandum of Understanding (MoU) to formalize their relationship as the two continents seek to ramp up their open science activities. The aim of the collaboration is to advance open science policies, services and infrastructure that reflect the unique needs and conditions of each continent within a framework of international cooperation.
I’m sensing growing momentum around South-South cooperation to challenge the APC model that’s taken partial hold in Europe and North America. Last week’s OASPA conference featured a fiery talk by Reggie Raju, a librarian at the University of Cape Town.
Agata Morka, reporting last year on an OA-books funding workshop:
In the Nordic countries, there are official lists, with separate ratings for journals and for book publishers (Norwegian one (also used in Sweden), Danish and Finish [sic] one). They are used in various ways: some institutions might require their researchers to only publish with top rated publishers from these lists, some might use it as a point of reference in evaluating their institution’s publishing output, and some might steer their researchers towards them in the search for trustworthy publishers. The lists themselves, as well as practices surrounding the evaluation/inclusion processes remain controversial. According to the Finnish list, operating on a 0-3 scale (0=under evaluation, 1=basic, 2=leading, 3=top) there are 13 leading book publishers in the world, with usual suspects included: a bunch of Ivy League university presses and commercial giants like Palgrave Macmillan or Routledge.
Deflating, if unsurprising.
Considering science as a global public good, open science services should be viewed as essential research infrastructures, governed and owned by the community and funded collectively by governments, funders and institutions reflecting the diverse interests and needs of the research community and society. Member States are encouraged to promote non-commercial open science infrastructures …
The document also takes a strong stand against the APC, endorsing
diversity in scholarly communications with adherence to the principles of open, transparent and equitable access and supporting non-commercial publishing models and collaborative publishing models with no article processing charges or book processing charges.
If adopted in November, the UNESCO principles—even as recommendations—could be crucial in the fight against the closed-authorship policies of Plan S and the commercial publishers.