AI, Ethics, and Accountability in Academic Publishing

As AI becomes more embedded in our workflows, the academic community is facing complex questions: Who owns AI-generated content? Can a machine be an author? And how do we preserve scholarly integrity in a world of machine assistance?
These were the questions I was invited to address as a panelist for a recent discussion on AI in Academic and Scholarly Publishing. Below are some of the key ideas I shared—and continue to wrestle with as both an author and academic leader.
Ethical Implications of AI-Generated Research
AI tools can generate convincing, well-structured writing. But that writing may include bias, hallucinated facts, or even fabricated citations. These risks raise serious concerns about research reproducibility and the credibility of the academic record.
One of the most urgent questions is about authorship. Can AI be an author? According to the Committee on Publication Ethics (COPE) and reinforced by APA guidelines: no. AI cannot take responsibility, make intellectual contributions, or respond to peer review—core requirements of scholarly authorship. Researchers must retain full responsibility for any AI-generated content they use.
As Prof. Stefka Tzanova puts it, generative AI “can lead to bias amplification and hallucinations” (Sacco, Arms, & Norton, 2024), making it vital for institutions to implement policies that promote human accountability. Prof. Jeremy Norton echoes this concern, noting that AI’s “black box” nature makes it difficult to verify the origins or accuracy of its outputs (Sacco, Arms, & Norton, 2024).
Safe and Reliable AI Tools
Some AI tools, such as predictive models (IBM Watson, Azure AI) or writing assistants (Grammarly, QuillBot), are considered safe when used with clear intent and oversight. But they are not substitutes for scholarly thinking. They are collaborators, not creators.
To use AI responsibly, researchers need AI literacy: an understanding of what these tools can (and cannot) do. That means staying up to date with evolving ethical guidelines, peer-reviewed studies, and institutional policies. Both Tzanova and Norton stress the need for transparency and continual education to navigate this new terrain ethically.
The Takeaway
AI is powerful—but it is not neutral, nor is it infallible. As The Augmented Author, I believe we can harness these tools ethically and responsibly, but only if we lead with transparency, retain authorship, and stay informed.
Let’s keep the conversation going.
Reference:
Sacco, K., Arms, K., & Norton, A. (Eds.). (2024). Navigating AI in academic libraries: Implications for academic research. IGI Global. https://www.igi-global.com/book/navigating-academic-libraries/334856