12/04/2024
By Danielle Fretwell
Candidate Name: Zahra Rezaei Khavas
Degree: Doctoral
Defense Date: Wednesday, December 11, 2024
Time: 10 a.m. - 12 p.m.
Location: Southwick Hall, Room 240
Committee:
Advisor: Paul Robinette, Ph.D., Assistant Professor, Electrical & Computer Engineering, University of Massachusetts Lowell
Committee Members:
Katherine Tsui, Ph.D., Manager in the Large Behavior Models division, Toyota Research Institute
Reza Azadeh, Ph.D., Associate Professor, Computer Science, University of Massachusetts Lowell
Jean-Francois Millithaler, Ph.D., Assistant Professor, Electrical & Computer Engineering, University of Massachusetts Lowell
Justin W. Hart, Ph.D., Assistant Professor, Computer Science, University of Texas at Austin
Brief Abstract:
Trust in robotic collaborators is one of the leading factors influencing performance in human-robot interaction (HRI). Trust in HRI needs to be properly calibrated rather than maximized; thus, the factors that affect trust, and the effects of different robot behaviors on human trust, need to be investigated.
Problem/Gap 1: Factors affecting trust in human-drone interaction
In the initial phase of my studies, I performed human-drone interaction (HDI) studies in an online setting. A test bed was developed to assess how various factors affect human trust, including drone-related, task-related, and environment-related elements. I also examined the impact of different drone failures on human trust, comparing results from real-world and simulated videos. The findings showed that drone-related features, particularly performance, had the greatest influence on trust: the more severe the drone failure, the greater the trust loss. Simulated videos also led to similar results when accurately designed.
Problem/Gap 2: Effects of robots violating different trust aspects on humans
Researchers have widely acknowledged the multidimensional nature of trust in HRI, leading to trust scales that reflect various dimensions. One such trust scale incorporates both a performance aspect and a moral aspect.
In the second phase of my research, I focused on four main goals:
1. Designing a game that distinguishes between robots’ performance and moral trust violations.
2. Investigating whether individuals perceive robots as agents with intentions that could possess morality. Our results revealed that some people consider the possibility of robots possessing morality only after witnessing them violate moral trust.
3. Assessing the effects of performance and moral trust violations by robots on humans. Our results showed that moral trust violations by robots cause a higher trust loss in humans. Additionally, people tend to retaliate against robots that violate moral trust, even if such retaliation does not benefit them and may incur a cost.
4. Assessing how teammate identity moderates the effects of violations of the two trust aspects. Our results indicated that violations of either performance or moral trust by robots cause a higher trust loss in humans than similar violations by human teammates, with moral trust violations producing the larger difference.
Future work: We aim to develop a trust model using physiological measures, including electroencephalography (EEG), electrocardiography (ECG), electrodermal activity (EDA), respiration (Resp), and eye tracking, to classify trust in HRI into three main classes: trust, performance distrust, and moral distrust.
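To illustrate the shape of such a classifier, here is a minimal sketch in Python with scikit-learn. The feature matrix, labels, and model choice are hypothetical placeholders, not the dissertation's actual method; how features would be extracted from the EEG, ECG, EDA, respiration, and eye-tracking signals is not specified in the abstract.

# Minimal sketch of a three-class trust classifier over physiological
# features. All data below is randomly generated placeholder content;
# real inputs would be per-trial summary features (e.g., EEG band power,
# heart-rate variability, EDA peak counts, respiration rate, fixation
# duration) with trust labels from the HRI experiments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

CLASSES = ["trust", "performance_distrust", "moral_distrust"]

# Hypothetical dataset: 300 interaction windows, 20 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))          # placeholder feature matrix
y = rng.integers(0, len(CLASSES), 300)  # placeholder class labels

# Standardize features, then fit a random-forest classifier;
# any multiclass model could stand in here.
model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))

# Estimate accuracy with 5-fold cross-validation.
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

On random placeholder data the accuracy hovers near chance (about 0.33 for three classes); the point of the sketch is only the pipeline structure, with the real signal features and labels left as an assumption.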