The short answer for Aaron Shepherd, an assistant teaching professor in the Department of Philosophy, is yes.
“AI is already contributing to subtle ways of infringing upon people’s freedom,” says Shepherd, who teaches a Digital Ethics course that examines ethical issues emerging in AI and other technologies.
AI needs to be trained on immense amounts of data, which can be extracted from people without their awareness. For instance, when a website uses the fraud detection service reCAPTCHA to tell humans and bots apart, the person who completes the test by deciphering text or matching images is actually helping to build machine learning datasets, according to Google, which owns the service.
“You don’t think that you’re providing data for AI, but that’s exactly what you’re doing,” Shepherd says.
Companies have also outsourced work to improve AI speech and image recognition. A Time magazine investigation revealed that OpenAI, a San Francisco-based AI research and deployment company, used Kenyan laborers earning less than $2 per hour to label massive quantities of harmful text passages and images in order to make its ChatGPT chatbot less toxic. The work proved damaging and exploitative for the laborers.
“We don’t see any of that on our end,” Shepherd says. “We just see the finished product.”
Jenifer Whitten-Woodring, dean of the Honors College and a political science associate professor with expertise in human rights, sees another problem with data used to train AI.
“Generative AI is only as good as its training data, and the training data are often biased,” she says.
AI generates biased results when it learns from historical data that reflects cultural prejudices and other biases. When a business uses AI screening tools for recruiting and hiring, for example, the AI may discriminate against candidates based on details such as their names, reproducing the biases it absorbed from its training data.
The use of AI for surveillance also raises concerns. Predictive policing, which aims to prevent crimes before they happen, relies on invasive surveillance systems trained on biased data. This has led to the targeting of people in low-income and minority neighborhoods. Some police departments also rely on facial recognition systems, which can make incorrect matches and lead to wrongful incarcerations.
Shepherd and Whitten-Woodring agree that more needs to be done to protect human rights from AI. They argue that AI platforms must be more transparent about how they extract data and about that data's shortcomings. And people need to be educated about the dangers of AI, they add.
"We need AI governance,” Whitten-Woodring says. “Amnesty International, Human Rights Watch and other organizations have called for legally binding regulation of AI at the national, regional and international levels, but this type of cooperation seldom happens.”—BC