As we navigate the rapidly evolving landscape of Artificial Intelligence (AI) and its potential impact on academic integrity, it's important that we approach this issue with thoughtfulness and care.

The university's Academic Integrity policy includes language about student misuse of AI as an example of academic dishonesty.

There are currently no tools that can consistently and accurately detect writing produced by AI. While AI detectors such as Originality.ai and Turnitin have gained popularity, it is important to recognize that these tools often produce false positives and false negatives. As a result, UMass Lowell is not endorsing or sponsoring any specific AI detector at this time.*

Instead, we encourage faculty to use a combination of strategies to identify potential instances of AI-generated content in student work. These strategies focus on identifying inconsistencies, anomalies and patterns that may indicate the use of AI, rather than relying solely on automated tools.

Below are some approaches that faculty can employ:


1. Stylistic inconsistencies: AI-generated text may have inconsistencies in writing style, tone or voice compared to the student's previous work. Faculty should look for sudden, unexplained changes in a student’s writing style.

2. Lack of personal insights or experiences: AI-generated content may lack personal anecdotes, experiences or unique perspectives that students might otherwise include in their work (depending on the assignment). An essay that feels generic could be a red flag.

3. Unusual phrasing or awkward sentences: While AI writing has improved, it can still produce awkward phrasing or unusual word choices. Faculty should look for sentences that don't quite sound natural or human-like.

4. Inconsistent or irrelevant references: AI models may generate references that are outdated, irrelevant or inconsistent with the content of the paper. Some references don’t exist at all and are complete “hallucinations” fabricated by the software. (A quick way to spot-check citations appears after this list.)

5. Lack of understanding during discussions: If a student struggles to discuss or explain the ideas presented in their work, it may indicate that they didn't write it themselves. Faculty should engage students in conversations about their work if they suspect any form of academic dishonesty. (The same approach applies to plagiarized work or work completed by someone other than the student.)

6. Absence of intermediate drafts or research notes: If a student submits a polished essay without evidence of the writing process, such as drafts, outlines or research notes, it may raise suspicions. Consider requiring drafts and notes as part of the assignment.

7. Unusual topic or prompt responses: If a student's work doesn't quite address the given topic or prompt, or if it includes information or terms that weren’t covered in class, it could be a sign of AI-generated content.
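
Regarding item 4 above, one quick, partial check for fabricated citations is to see whether a cited DOI actually resolves. The sketch below is illustrative only: it queries the public Crossref REST API, and the sample DOI is a hypothetical placeholder. Many legitimate sources (books, older articles, web pages, or DOIs registered outside Crossref) won’t appear, so a miss should prompt manual review, not be treated as proof of fabrication.

```python
import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    """Return True if the Crossref registry has a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # Crossref answers 404 for DOIs it has no record of.
        return False

# Hypothetical DOIs pulled from a student's bibliography (placeholder values).
suspect_dois = ["10.1000/example.doi"]
for doi in suspect_dois:
    status = "found" if doi_exists(doi) else "NOT FOUND - verify by hand"
    print(f"{doi}: {status}")
```

Even when a DOI resolves, it is worth confirming that the retrieved title and authors match what the student cited; hallucinated references sometimes attach real DOIs to invented papers.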

These strategies are not foolproof and should be used in combination with other methods, such as discussing academic integrity with students and creating assignments that require personal insights and critical thinking.

Ultimately, fostering a culture of academic integrity and teaching students the value of original work is key to preventing the misuse of AI in academic settings.

Find more on the benefits and challenges of AI in the classroom at the CELT Resource Page on AI.

* In deciding not to endorse any current AI detection tools, UMass Lowell joins a number of other institutions that have made similar decisions, including Syracuse University and Vanderbilt University, citing concerns about false positives, potential biases and possible privacy infringements.