From the Center for Open Science's press release announcing its new journal-rating measure, the TOP Factor:

> TOP Factor is based primarily on the Transparency and Openness Promotion (TOP) Guidelines, a framework of eight standards that summarize behaviors that can improve transparency and reproducibility of research such as transparency of data, materials, code, and research design, preregistration, and replication. Journals can adopt policies for each of the eight standards that have increasing levels of stringency.

There's a lot to applaud here. The initiative is nonprofit, transparent itself, and an avowed alternative to the notorious Journal Impact Factor—and one that rates, rather than ranks.

Still, it's a striking illustration of one knowledge culture—one that aims for generalized knowledge, with data-driven hypothesis testing—claiming to represent scholarship as a whole. The masked parochialism is there in the first sentence. The TOP Factor "assesses journal policies for the degree to which they promote core scholarly norms of transparency and reproducibility." Transparency may indeed hold academy-wide, but reproducibility is not a "core scholarly norm"—it doesn't even make methodological or epistemological sense across huge swaths of the humanistic social sciences and humanities.

These fields and scholars don't have hypotheses to pre-register, nor in many cases "data" to deposit, not in the pointillist sense implied here. In idiographic fields the whole endeavor is devoted to the study of *particulars*, and to building out arguments *with* texts and "data"—not in advance. Yet by the TOP measure, hundreds of journals—whole quadrangles of the university, for that matter—will be cast as low-scoring, norm-violating laggards.

Some of this submerged disciplinary chauvinism comes out in the press release:

> “Disciplines are evolving in different ways toward improving rigor and transparency,” noted Brian Nosek, Executive Director of the Center for Open Science. “TOP Factor makes that diversity visible and comparable across research communities. For example, economics journals are at the leading edge of requiring transparency of data and code whereas psychology journals are among the most assertive for promoting preregistration.”

The journals, and their home disciplines, that aren't well-represented in the initial stable of ranked titles are, in effect, called out for their slumbering dogmatism:

> The initial journals are heavily represented by psychology, economics, education, and general science outlets. The initial emphasis is on fields and journals that have been particularly progressive in adopting policies for transparency and rigor.

But what would it mean for *The Journal of American History* to support "study pre-registration," "analysis pre-registration," or "code transparency"? The point of historical scholarship—most of it, most of the time—just isn't to produce testable, generalized findings, with data and code at the ready for peers to replicate.

To be fair, some of the TOP measures, like data and "research material" transparency, interpreted capaciously, could accommodate archival documents and other kinds of qualitative evidence. But all of the framing, and most of the measures themselves, are anchored in a nomothetic knowledge culture that, at the same time, disguises its parochialism. It's a symptom of the wider "open science" movement, as a label and as a bundle of practices. There's an epistemological hubris that takes one model of knowledge, suitable to certain questions and good for certain answers, as universal.

So let's bury the journal impact factor, for sure. But let's not slot in a replacement that cordons off, without saying so, half the house of knowledge.

(Disclosure: I am co-coordinator of MediArXiv, which is hosted by the Center for Open Science.)