10/18/2024
By You Zhou

The Richard A. Miner School of Computer & Information Sciences invites you to attend a doctoral dissertation defense by You Zhou on "Content Significance Distribution of Text Blocks in an Article and Detecting AI-Generated Texts in Cross-Domains."


Ph.D. Candidate: You Zhou
Date: Thursday, Oct. 31, 2024
Time: 10 a.m. Eastern Time

Location: This will be a virtual defense via Zoom. Meeting ID: 2904839748

Committee Members:

Jie Wang (advisor), Professor, Miner School of Computer and Information Sciences
Benyuan Liu (member), Professor, Miner School of Computer and Information Sciences
Li Feng (member), Instructional Design Manager, The TJX Companies


Abstract:

We explore how to capture the significance of a sub-text block in an article and how it may be used for text mining tasks.
A sub-text block is a sub-sequence of sentences in the article. We formulate the notion of content significance distribution (CSD) of sub-text blocks, referred to as CSD of the first kind and denoted by CSD-1. In particular, we leverage Hugging Face's SentenceTransformer to generate contextual sentence embeddings and use MoverScore over text embeddings to measure how similar a sub-text block is to the entire text. To overcome the exponential blowup in the number of sub-text blocks, we present an approximation algorithm and show that the approximate CSD-1 is almost identical to the exact CSD-1. Under this approximation, we show that the average and median CSD-1's for news, scholarly research, argument, and narrative articles share the same pattern. We also show that, under a certain linear transformation, the complement of the cumulative distribution function of the beta distribution with certain values of $\alpha$ and $\beta$ resembles a CSD-1 curve.

We then use CSD-1's to extract linguistic features and train an SVC classifier for assessing how well an article is organized, and show through experiments that this method achieves high accuracy on student essays. Moreover, we study the CSD of sentence locations, referred to as CSD of the second kind and denoted by CSD-2, and show that the average CSD-2's for different types of articles exhibit distinctive patterns, which either conform to common perceptions of article structure or rectify them with minor deviations.
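The sampling idea behind the approximation can be sketched as follows. The dissertation's method uses SentenceTransformer embeddings and MoverScore; the sketch below substitutes toy hash-seeded vectors and cosine similarity as hypothetical stand-ins, only to illustrate how sampling a fixed number of contiguous blocks per size avoids enumerating all sub-text blocks:

```python
import hashlib
import math
import random

random.seed(0)

def embed(sentence, dim=16):
    # Hypothetical stand-in for a SentenceTransformer embedding:
    # a pseudo-random vector seeded by a stable hash of the sentence.
    seed = int(hashlib.sha256(sentence.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_vector(vectors):
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def approx_csd1(sentences, samples_per_size=50):
    # Approximate CSD-1 by sampling contiguous sub-text blocks of each
    # size instead of scoring every possible sub-text block.
    embs = [embed(s) for s in sentences]
    whole = mean_vector(embs)
    n = len(sentences)
    csd = {}
    for k in range(1, n + 1):
        scores = []
        for _ in range(samples_per_size):
            start = random.randint(0, n - k)
            block = mean_vector(embs[start:start + k])
            scores.append(cosine(block, whole))
        csd[k] = sum(scores) / len(scores)
    return csd

sentences = [f"Sentence number {i} of the article." for i in range(12)]
csd = approx_csd1(sentences)
```

Larger blocks cover more of the article, so their average similarity to the whole text trends upward with block size, which is the shape a CSD-1 curve traces.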

Existing tools to detect text generated by a large language model (LLM) have met with some success, but their performance can drop when dealing with texts in new domains. To tackle this issue, we train a ranking classifier called RoBERTa-Ranker, a modified version of RoBERTa, as a baseline model using a dataset we constructed that includes a wide variety of texts written by humans and generated by various LLMs. We then present a method to fine-tune RoBERTa-Ranker that requires only a small amount of labeled data in a new domain. Experiments show that this fine-tuned domain-aware model outperforms the popular DetectGPT and GPTZero on both in-domain and cross-domain texts, where the AI-generated texts may come from a different domain or from an LLM not used to generate the training data. This approach makes it feasible and economical to build a single system to detect AI-generated texts across various domains.
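The two-stage idea — train a broad baseline, then adapt it with a handful of labeled examples from the new domain — can be sketched with a toy model. A logistic-regression classifier over hypothetical 2-D features stands in for RoBERTa-Ranker here, and the data is synthetic; the sketch shows only the training-then-adaptation pattern, not the dissertation's actual model:

```python
import math
import random

random.seed(1)

class TinyClassifier:
    """Toy stand-in for RoBERTa-Ranker: logistic regression on 2-D
    features (the real model is a modified RoBERTa)."""

    def __init__(self):
        self.w = [0.0, 0.0]
        self.b = 0.0

    def predict_proba(self, x):
        z = self.w[0] * x[0] + self.w[1] * x[1] + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, data, labels, lr=0.5, epochs=200):
        # Plain SGD on the logistic log-loss.
        for _ in range(epochs):
            for x, y in zip(data, labels):
                g = self.predict_proba(x) - y  # d(loss)/dz
                self.w[0] -= lr * g * x[0]
                self.w[1] -= lr * g * x[1]
                self.b -= lr * g

def make_domain(center_ai, center_human, n=40):
    # Hypothetical feature clouds: label 1 = AI-generated,
    # label 0 = human-written.
    data, labels = [], []
    for _ in range(n):
        data.append([random.gauss(center_ai[0], 0.3),
                     random.gauss(center_ai[1], 0.3)])
        labels.append(1)
        data.append([random.gauss(center_human[0], 0.3),
                     random.gauss(center_human[1], 0.3)])
        labels.append(0)
    return data, labels

def accuracy(model, data, labels):
    hits = sum((model.predict_proba(x) > 0.5) == bool(y)
               for x, y in zip(data, labels))
    return hits / len(labels)

# Stage 1: train the baseline on a broad, mixed-domain corpus.
base_X, base_y = make_domain(center_ai=(2, 2), center_human=(0, 0))
model = TinyClassifier()
model.fit(base_X, base_y)

# Stage 2: the new domain's feature distribution is shifted; adapt the
# baseline with only 10 labeled examples at a lower learning rate.
tune_X, tune_y = make_domain(center_ai=(0, 3), center_human=(2, 1), n=5)
model.fit(tune_X, tune_y, lr=0.1, epochs=100)

eval_X, eval_y = make_domain(center_ai=(0, 3), center_human=(2, 1))
```

The design point is that the second `fit` call reuses the baseline's weights rather than training from scratch, which is what makes a small labeled sample in the new domain sufficient.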

To further enhance the performance of our baseline model, we conduct extensive data generation and augmentation to ensure stability and robustness. We also train our model on a larger architecture (Mixtral 8x7B). For labeling, we move away from simple binary classification (0 or 1) and instead assign a label of 0.5 to articles generated partly by human authors and partly by AI. Incorporating this strategy into training improves the model's accuracy in distinguishing mixed-origin texts.
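The labeling scheme can be sketched in a few lines. The mapping from composition to target follows the abstract (0 for human, 1 for AI, 0.5 for mixed); the squared-error loss shown with it is only one plausible way to train against soft targets, not necessarily the loss used in the dissertation:

```python
def soft_label(human_fraction):
    """Map a document's composition to a training target:
    fully human -> 0.0, fully AI -> 1.0, mixed -> 0.5."""
    if human_fraction >= 1.0:
        return 0.0
    if human_fraction <= 0.0:
        return 1.0
    return 0.5

def mse_loss(pred, target):
    # With soft targets, squared error penalizes confident 0/1
    # predictions on mixed-origin texts instead of rewarding them.
    return (pred - target) ** 2

labels = [soft_label(f) for f in (1.0, 0.0, 0.4)]
```

A detector that confidently outputs 1.0 on a half-AI article incurs loss 0.25 under this scheme, whereas a hard binary label would have counted that prediction as perfectly correct.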