Generative AI tools such as ChatGPT use large language models (LLMs) to produce novel text- or image-based responses to user prompts. These tools generate new content based on patterns learned from the data they were trained on.

There are several important factors to consider when using these tools, such as confidentiality, data privacy, compliance, copyright and research integrity.

This page aims to provide UMass Lowell researchers with basic guidance, information and resources on the use of AI in their research writing. Content will be updated as AI technologies evolve.

Writing Research Manuscripts

Many academic publishers have policies concerning the use of generative AI when writing and/or developing research manuscripts. Publisher-specific guidelines related to using, citing, disclosing and acknowledging AI tools should be closely reviewed and followed.

Grant Writing

Many funders have yet to publish policies on the use of AI in grant preparation. However, the National Science Foundation states that proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used to develop their proposal.

Peer Review

The consensus among publishers and grant-making agencies is that grant applications and submitted manuscripts should never be loaded into generative AI tools to produce review reports. Doing so may expose sensitive information or intellectual property to others, because submitted content may be retained and incorporated into the tool's training data.

Both the National Institutes of Health and the National Science Foundation prohibit reviewers from using generative AI tools to analyze and formulate peer-review critiques.

More Resources