Philosophy, Criminal Justice Faculty Team Up on $4.2 Million in Defense Grants

Philosophy Chair Nicholas Evans, left, and Criminology Assoc. Prof. Neil Shortland have been awarded $4.2 million in defense grants for research on AI. (Photo by K. Webster)

09/13/2024
By Katharine Webster


A philosophy professor and a forensic psychologist walk into a university. Soon, they’re comparing research interests and grant sizes.

The result is no joke. Philosophy Chair Nicholas Evans and Criminology Assoc. Prof. Neil Shortland, director of UML’s Center for Terrorism and Security Studies, have collaborated for several years. And they were just awarded a pair of Minerva Research Initiative grants totaling $4.2 million from the U.S. Army Research Office for research on the ethics and psychology of decision-making involving artificial intelligence.

Together, they will study the future uses of AI in warfare, its psychological effects on humans, and how people from different countries and backgrounds use social science predictions about the future of AI to make defense and other policies.

“Everything the Department of Defense is worried about when it comes to AI is really about the future,” Evans says. “Some people think that AI is going to make war less common, reduce the number of targets and reduce the number of civilian casualties, while some people think the opposite.

“In every domain, people are split about the future of AI, with some predicting utopia and others predicting apocalypse.”

Shortland researches terrorism and the psychology of decision-making. (Courtesy photo)

Psychological and Social Effects of Using AI in Warfare

Shortland’s grant, for $1.5 million over three years, will look at the effect on service members of turning an increasing number of life-and-death decisions over to AI. For example, which wounded soldiers should a medic treat first? Or, should a drone operator fire a rocket at a truck full of people?

“These are psychologically important decisions,” Shortland says. “What our grant does is ask, ‘How will a human handle giving away that decision if the decision turns out to be wrong? And how will the population feel if these decisions are no longer made by humans?’”

The Department of Defense views offloading the responsibility for such decisions to AI as a potential way to reduce trauma and PTSD for service members. 

But it may have the opposite effect, Shortland says, especially since AI may exponentially increase the number of decisions overseen by a given person. For example, a single soldier on the ground can only shoot at a certain number of people, but a drone operator can kill multiple people, several times a day.

“What if AI is making those decisions for 10 drones overseen by one person?” Shortland says. “If we take the humans out of the decision-making loop, are we doing more unknown harm to the humans involved?”

The implications go well beyond national defense, Shortland says. For example, he says, “Doctors take it very hard when they make a bad decision.”

“You can imagine that, in 10 years, a doctor will be making not seven prescription decisions over lunch, but 70 or 700” with assistance from AI, he says. “Maybe if someone has strep, we’re OK with it, but for people who have lupus, or long COVID or something like that, maybe they really need a doctor with a human face interacting with them. Where is that line?” 

Evans, as co-principal investigator, will lend his expertise in ethics, population-specific variables and large databases. The other co-PIs are Electrical and Computer Engineering Asst. Prof. Paul Robinette, who researches human-robot trust, and two business faculty members who study the use of AI in business, Prof. Scott Latham and Assoc. Prof. Beth Humberd.

Evans researches ethics and new technologies. (Photo by Alyssa Parker)

Utopia or Apocalypse?

Evans’ grant, for $2.7 million over three years, examines the social science research that generates predictions about AI’s future effects, and how influential decision-makers use those predictions to make policy.

In addition to Shortland, who brings deep expertise in decision-making research, the grant includes faculty from the Science Policy Research Unit at the University of Sussex in England, along with a postdoctoral researcher hired from the University of Washington and students at UML.

“If you have a range of predictions from ‘War is going to get much better’ to ‘War is going to get much worse,’ you need to figure out who is making the predictions, where and why,” Evans says.

Senior philosophy major Sam Angelli-Nichols will assist Evans and the postdoctoral researcher in creating a searchable database of approximately 50,000 existing social science research articles, spanning different academic disciplines and countries, about the future of AI.

After first training and then deploying a large language model, they will tag each article with the author’s name, nationality, gender, source of funding, field of expertise and methodology. Then, they will pose questions such as, “Are economists more optimistic or pessimistic than sociologists about the future of AI?” and “Did the advent of ChatGPT increase optimism about AI?”
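To make that workflow concrete, here is a minimal sketch of how such LLM-based tagging and querying might look. It is illustrative only, not the project’s actual pipeline: it assumes an OpenAI-style chat API, and the 1-to-5 “optimism_score” field is an invented stand-in for however the team ultimately quantifies predictions.

    import json
    import pandas as pd
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical schema; the fields mirror those named in the article,
    # plus an invented 1-5 "optimism" score so predictions can be compared.
    FIELDS = ["author_name", "nationality", "gender", "funding_source",
              "field_of_expertise", "methodology", "optimism_score"]

    def tag_article(abstract: str) -> dict:
        """Ask the model to extract the metadata fields from one abstract."""
        prompt = (
            "Extract the following fields from this social science abstract "
            f"and return a JSON object with exactly these keys: {FIELDS}. "
            "Score optimism about AI's future effects from 1 (apocalypse) "
            "to 5 (utopia).\n\n" + abstract
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    # Toy stand-ins for the roughly 50,000 real abstracts.
    abstracts = [
        "An economist argues AI-driven automation will raise global welfare...",
        "A sociologist warns that autonomous weapons will erode accountability...",
    ]

    tags = pd.DataFrame([tag_article(a) for a in abstracts])

    # Once tagged, a question like "are economists more optimistic than
    # sociologists about the future of AI?" becomes a simple aggregation.
    print(tags.groupby("field_of_expertise")["optimism_score"].mean())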

“We’re trying to understand the people making the predictions, because the people making the predictions are tremendously important to the predictions that get made,” Evans says.

Then, using the Overton database of policy documents, they will identify influential decision-makers, interview them and present them with alternate scenarios to better understand which predictions they rely on when making policy – and “if the predictions they thought were important actually made it into the policy,” he says.

Ultimately, Evans, Shortland and an advisory board drawn from various social science disciplines hope to determine whether there is enough commonality to chart best practices for future social science research into the effects of AI.

“We’re hoping to push the social sciences ahead in new and exciting ways,” Evans says.