Adam Hyde, John Chodacki, and Paul Shannon, writing on FORCE11’s Upstream, outline seven key roles that “AI” could play in a scholarly publishing workflow:
- Extract: Identify and isolate specific entities or data points within the content.
- Validate: Verify the accuracy and reliability of the information.
- Generate: Produce new content or ideas, such as text or images.
- Analyse: Examine patterns, relationships, or trends within the information.
- Reformat: Modify and adjust information to fit specific formats or presentation styles.
- Discover: Search for and locate relevant information or connections.
- Translate: Convert information from one language or form to another.
It’s a useful breakdown, but I’m stunned—given the authors and outlet—that there’s no mention of commercial exploitation. Scholarly publishing is dominated, of course, by a for-profit oligopoly, one that mines scholars’ behavior to bundle into proprietary prediction products. As a flurry of recent announcements makes clear, Elsevier and co. are charging into the post-ChatGPT future—with the aim to expand their surveillance-publishing footprint at the expense of scholars and universities. In the real world, the seven Upstream verbs—to extract, to discover, to generate, and so on—will be turned on us.