1. Disclose your AI use. Detail where the AI-generated content appears.
2. Verify the accuracy, validity and appropriateness of AI-generated content. Large language models (LLMs) such as ChatGPT can produce incorrect or misleading information, especially when used outside the domain of their training data or when dealing with complex or ambiguous topics. Outdated training data can also leave a model with incorrect or incomplete knowledge of a topic.
3. Check sources and citations to ensure proper referencing. LLMs can fabricate plausible-looking but nonexistent references, so confirm that every cited source exists and actually supports the claim attributed to it.
4. Appropriately cite AI-generated content, for example by naming the tool, its version and the date of use.
5. Avoid plagiarism and copyright infringement. AI can inadvertently reproduce text from existing sources without due citation, infringing upon others’ intellectual property.
6. Be aware of bias. LLMs are trained on text that contains biases, and further bias can be introduced by the people who build and tune these tools. As a result, AI-generated text may reproduce biases such as racism or sexism, or may overlook the perspectives of historically marginalized populations. Relying on LLMs to generate text or images can inadvertently propagate these biases, so review all AI-generated content carefully to ensure it is inclusive, impartial and appeals to a broad readership.
7. Acknowledge limitations. If you include AI-generated content, acknowledge its limitations, including the potential for bias, inaccuracies and knowledge gaps.
8. Take responsibility. AI tools cannot be held accountable for the integrity of a work and should not be recognized as co-authors; responsibility for all content, AI-generated or not, rests with the human authors.
9. Check specific guidelines. Review the submission guidelines of your target journal or grant-making agency to ensure compliance with any AI-related policies.