Adam Hyde, John Chodacki, and Paul Shannon, writing at FORCE11’s Upstream, on seven key roles that “AI” could play in a scholarly publishing workflow:

  1. Extract: Identify and isolate specific entities or data points within the content.
  2. Validate: Verify the accuracy and reliability of the information.
  3. Generate: Produce new content or ideas, such as text or images.
  4. Analyse: Examine patterns, relationships, or trends within the information.
  5. Reformat: Modify and adjust information to fit specific formats or presentation styles.
  6. Discover: Search for and locate relevant information or connections.
  7. Translate: Convert information from one language or form to another.

It’s a useful breakdown, but I’m stunned—given the authors and outlet—that there’s no mention of commercial exploitation. Scholarly publishing is dominated, of course, by a for-profit oligopoly, one that mines scholars’ behavior to bundle into proprietary prediction products. As a flurry of recent announcements makes clear, Elsevier and co. are charging into the post-ChatGPT future—with the aim to expand their surveillance-publishing footprint at the expense of scholars and universities. In the real world, the seven Upstream verbs—to extract, to discover, to generate, and so on—will be turned on us.