Generative AI tools have recently attracted wide discussion, focused primarily on the risks, ethical issues, and responsible use of these technologies in academic writing. Relying exclusively on AI tools in the writing process remains unacceptable because of significant concerns about plagiarism, fabrication, and the reliability of AI-generated content. However, researchers who use AI tools as assistants can substantially enhance their academic writing. Specifically, these tools are beneficial for improving text readability, clarity, and word choice; extracting information from large texts; synthesizing details; and summarizing or paraphrasing lengthy sentences. This article briefly discusses the potential issues associated with using AI in writing and explores responsible practices for employing these tools to improve academic writing. Additionally, it summarizes the policies of various publishers regarding AI usage and provides recommendations from two institutions on the responsible use of these tools.
Keywords: Generative AI; AI in writing; AI ethics; Responsible AI; Generative AI in writing
The potential issues associated with using AI in academic writing can be summarized as follows [1]:
⦁ High Similarity Scores and Copyright Violation Concerns: AI-generated content may closely resemble existing research, raising copyright infringement and plagiarism issues.
⦁ Low Quality Due to Poor Citations and Inaccurate References: AI often produces incorrect or misleading references, undermining the quality and reliability of academic work.
⦁ Fabrication, Falsification, and Plagiarism: AI can fabricate or falsify content, leading to severe violations of research integrity.
⦁ Inappropriate Authorship Attribution: AI cannot be credited as an author; human authors must take full responsibility for the content.
⦁ Disruption of Subsequent Research: Excessive reliance on AI may lead to retractions, harming the knowledge ecosystem and diminishing societal trust.
While the irresponsible use of generative AI poses serious risks, responsible usage can significantly enhance academic writing efficiency in various ways [2]:
⦁ Idea Generation and Research Design: AI assists with brainstorming, identifying literature gaps, suggesting hypotheses, and supporting robust research methodology development.
⦁ Content Development and Structuring: AI tools can assist in text expansion, predictive text generation, and autocompletion. They also help structure documents, maintain logical flow, and integrate visual or multimedia elements.
⦁ Literature Review and Synthesis: AI can process large volumes of information, perform text extraction, and create summary tables or comparative analyses, streamlining the literature review (a brief sketch follows this list).
⦁ Data Management and Analysis: AI tools enhance data interpretation through detailed analysis and visualization and assist in dataset management.
⦁ Editing, Review, and Publishing Support: AI aids in proofreading, drafting abstracts, managing peer review responses, and improving the quality and efficiency of the publishing process.
⦁ Communication, Outreach, and Ethical Compliance: AI tailors content for diverse platforms, engages audiences, detects plagiarism, and assesses ethical risks in research.
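To make the literature-review point above concrete, the sketch below shows how a researcher might programmatically ask a large language model to summarize a passage. It is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are illustrative placeholders rather than endorsements, and, as the policies below stress, any output must still be verified by a human.

```python
# Minimal sketch: asking an LLM to summarize a passage for a literature review.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name below is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = """<paste the abstract or section to be summarized here>"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute whichever model you use
    messages=[
        {
            "role": "system",
            "content": "Summarize the passage in three sentences for a "
                       "literature review. Do not add facts that are not "
                       "in the passage.",
        },
        {"role": "user", "content": passage},
    ],
)

print(response.choices[0].message.content)
# The summary must still be checked against the source before it is cited.
```

The same pattern extends to text extraction or comparison tasks: only the instruction in the system message changes, while the author's obligation to verify the output against the source remains constant.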
In light of these risks and benefits, several publishers and institutions have established policies that clarify the limitations of generative AI tools and outline responsible usage practices. Below is a summary of various publishers’ policies on using these tools.
The responsibilities of authors when using generative AI tools, based on Taylor & Francis’ policy [3], include:
⦁ Originality, Validity, and Integrity: Authors are accountable for their submissions’ originality, validity, and integrity.
⦁ Responsible Use: Authors must use generative AI tools responsibly and adhere to Taylor & Francis’ editorial policies.
⦁ Content Accuracy: Authors must review AI-generated outputs to verify accuracy.
⦁ Disclosure: AI tools should be disclosed in the Methods or Acknowledgements section of journal articles or in the preface of books.
⦁ Consultation: Authors should consult the journal’s editor if AI use extends beyond language improvement.
⦁ Human Oversight: AI usage should always include human oversight and transparency.
⦁ Prohibited Uses: Authors should avoid using AI to replace essential author responsibilities, such as text or code generation without rigorous revision or synthetic data creation without robust methodology.
⦁ Image and Data Creation: AI should not be used to create or alter images, figures, or research data.
Springer’s policy [4] on AI usage emphasizes the following key points:
⦁ AI Authorship: Large Language Models (LLMs) like ChatGPT do not qualify as authors because they lack accountability. AI use should be documented in the Methods section, although AI-assisted copy editing for style does not require disclosure. Human responsibility for the final text remains essential.
⦁ Generative AI Images: Due to unresolved legal and ethical issues, Springer prohibits AI-generated images unless they are from contracted agencies, used in AI-related content, or based on verifiable scientific data. All such images must be labeled as AI-generated.
Elsevier’s policy [5] specifies that AI technologies should only enhance readability and language, not replace core authoring tasks such as generating scientific insights, drawing conclusions, or providing clinical recommendations. AI must be used with human oversight, and all content must be carefully reviewed and edited. Authors are required to disclose the use of generative AI in manuscripts, and a statement will appear in the published work. Generative AI cannot be credited as an author or co-author.
Potential Uses of AI Tools in Academic Writing According to Different Institutions
According to the University of North Carolina [6], the potential uses of AI tools in academic writing include:
⦁ Brainstorming/Exploring: AI helps spark ideas by prompting questions about key concepts.
⦁ Generating Outlines: AI tools can create outlines for writing projects, which can then be adapted to structure drafts.
⦁ Models of Writing Genres: AI can provide examples of various writing formats, aiding in understanding structural conventions.
⦁ Summarizing Long Texts: AI offers concise summaries of lengthy texts to guide further analysis.
⦁ Editing and Refining: AI tools help correct grammatical errors and enhance sentence-level clarity.
⦁ Translation: AI translates text or audio into different languages, though nuances in idiomatic language may require careful review.
⦁ Transactional Communication: AI can draft polite emails, meeting agendas, and project timelines, which can be customized for specific audiences.
However, AI tools also have pitfalls [6]:
⦁ False Information: AI may generate inaccurate or misleading information.
⦁ Fabricated Citations and Quotes: AI can produce citations and quotes that seem credible but are incorrect or fabricated.
⦁ Biases: AI can reflect biases present in its training data.
⦁ Academic Integrity Risks: Using AI-generated text without proper attribution can violate academic standards.
⦁ Privacy Concerns: Entering personal or sensitive information into AI tools can lead to data retention and potential sharing; a simple redaction sketch follows this list.
⦁ Misinterpretation of Ideas: AI may alter the meaning of original ideas when paraphrasing, necessitating thorough review.
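As a partial mitigation of the privacy pitfall above, obvious identifiers can be scrubbed from text before it is pasted into an external AI tool. The following is an illustrative, hypothetical sketch using simple regular expressions; pattern-based redaction is inherently incomplete (it misses personal names, for instance) and does not replace institutional data-handling policy.

```python
# Illustrative sketch: redact obvious identifiers (emails, phone numbers)
# before sending text to an external AI tool. Pattern-based redaction is
# necessarily incomplete and does not replace institutional privacy policy.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Contact Jane Doe at jane.doe@example.edu or +1 (555) 012-3456."
print(redact(sample))
# -> Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
# Note that the personal name passes through untouched, which is exactly
# why regex-based scrubbing alone cannot be relied on for sensitive data.
```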
The University of Waterloo’s [7] principles for responsible AI usage include:
⦁ Fair: Ensuring AI-generated content is unbiased and compliant with human rights and accessibility standards.
⦁ Accountable: Taking responsibility for AI-generated content to ensure it is accurate, legal, and ethical.
⦁ Secure: Using secure infrastructure to protect privacy and personal data.
⦁ Transparent: Identifying AI-generated content and documenting decisions for accountability.
⦁ Educated: Understanding AI’s strengths, limitations, and responsible usage practices.
⦁ Relevant: Ensuring AI use aligns with organizational goals and identifies suitable tools for each task.
The Government of Ontario’s principles [7] for ethical AI usage include:
⦁ Transparent and Explainable: Providing clear explanations for automated decisions.
⦁ Good and Fair: Respecting laws, human rights, and democratic values throughout the AI lifecycle.
⦁ Safe: Minimizing risks to individuals and society.
⦁ Accountable and Responsible: Monitoring AI’s impact and addressing adverse outcomes.
⦁ Human-Centric: Enhancing human experiences and promoting inclusivity.
⦁ Sensible and Appropriate: Balancing innovation with ethical standards.