39 Other Ethical Considerations
Introduction
So far, we have covered the basics of how AI works and whether popular tools are appropriate for academic research. However, other ethical factors may influence your decision about whether to use AI, in the classroom or outside it. This chapter introduces these considerations so you can make an informed choice.
Copyright and “Stealing”
Training Materials
AI training depends on machine learning – the ability of a model to teach itself about patterns found in its training materials. Successful machine learning depends on a truly vast amount of data: more extensive and diverse data typically results in a better-tuned model.
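As a rough illustration of "learning patterns from training materials," here is a minimal, hypothetical sketch – not how any real chatbot works (modern models use neural networks at enormous scale), but the same basic principle of extracting statistics from training text. It counts which word tends to follow which in a sample text, then "predicts" the most common successor. The function names and sample text are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # "Training": count, for each word, which words follow it and how often.
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    # "Generation": return the word most frequently seen after `word`
    # in the training data, or None if the word was never seen.
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else None

# A tiny "training corpus" – real models train on billions of words.
corpus = "the cat sat on the mat and the cat slept on the rug"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" is the most common word after "the"
```

Notice that the model can only reproduce patterns present in its training material – which is why the quantity, diversity, and provenance of training data matter so much.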
However, the exact sources that popular tools like ChatGPT, Claude, Gemini, and others use for training are not fully disclosed. While some of these datasets consist of materials that are free for reuse or old enough to no longer be subject to copyright, a significant portion of training data is likely copyrighted material that has been copied and used to train AI models without the creators’ permission. This material includes not just publicly available text on the Internet (which remains the intellectual property of its creators, regardless of whether it has been published online), but also the text of pirated books. The same concern applies to artists whose works have been used to train AI for image generation.
In solidarity with the creators (in various media) whose work has been used to train highly profitable AI models without compensation, many people have concluded that any AI use is ethically wrong.
Output
Under current United States law, purely AI-generated work cannot be copyrighted. If you create text, images, or other content with generative AI, you do not hold rights to that work.
Liability and Citation
An essential foundation of fields like academia and journalism is the ability to cite sources. When authors share data or make claims, they support those statements by identifying where or from whom they gathered that information. This practice encourages authors to rely on credible sources, resulting in better information sharing; it also allows readers to evaluate those sources themselves and track them down for their own research.
Because AI tools take others’ words and ideas and remix them into new output (which frequently contains errors), there is no trail of responsibility back to the original source of information. Readers cannot fact-check an information source because there isn’t one. There is no author or organization whose reputation we can research; there is no research methodology we can critique.
If an AI tool produces false information – possibly with harmful consequences for a user who doesn’t know that hallucinations are even possible – who is responsible? Is it the company that created the tool? Is it a specific person at the company? Can they wash their hands of liability and say, “well, it’s just a risk of the model”? This uncertainty has both ethical and legal ramifications that are still unfolding.
Note: There are ways to cite various AI tools or otherwise acknowledge that you used AI in a project. For specific assignments, check with your professor. One example strategy is the AI Acknowledgement tool created by Helena Marvin, UMSL Libraries, licensed CC0.
Power
Training and running AI requires an enormous amount of power and other resources (e.g., water for cooling). This resource consumption has significant environmental ramifications and can drive up utility bills, especially for people who live near AI data centers. Powering and cooling data centers was a concern even before the recent escalation in AI use; now, demand is increasing significantly. See the video below for an example of recent conversations on this topic here in St. Louis:
Privacy
Content You Share With AI
AI companies are not always forthcoming about when and how they harvest, retain, and use personal data submitted by their users. Some tool policies restrict what companies may do with user input (i.e., what you type into the prompt bar), but they may track your activity in other ways. Once your data is collected, it is even less clear what these companies do with it – storing it indefinitely, using your conversations for review or training, or even sharing your information with other companies (possibly for profit).
Content You Share Online
AI development relies on data scraping – automatically extracting massive amounts of content from the Internet – to train models. You may never know whether information you have put online has been used to train AI.
Your Other Activity
It is becoming more common for AI tools to request broad access to features on your device(s) in order to function, including calendars, photos, contacts, email, and more. Even if this data is stored locally on your phone or laptop, granting a tool access allows it to essentially capture a snapshot of your most personal information. In this way, you may never know what data it has gathered from you or what is done with that information.
Labor
Impact on Future Employment
AI can make some jobs more efficient, but its growing capabilities may also lead to unemployment. The Future of Jobs Global Report 2025 indicated that 41% of companies are planning workforce reductions as artificial intelligence tools and models continue to expand, leading many workers to fear being replaced by AI. The sectors currently most at risk are those that lend themselves to automation, such as customer service, banking, insurance, and even transportation. Other roles that could be impacted include factory and warehouse workers, research analysts, and computer programmers.
Yet the same report also identified AI as one of the fastest-growing core skills sought by employers. Workers will likely be expected to have a familiarity with and working understanding of AI moving forward, which can box out those who have had minimal experience with this technology. Skill divides may become even more exacerbated, and information literacies (technology literacy, AI literacy) will be more valuable than ever. While the future is very much uncertain, it is clear that AI implementation and reliance will shape the labor market over the next several years.
Exploitation
Despite appearances, AI still requires human labor to perform effectively, especially when it comes to training and improvement. This industry – referred to as “crowdwork,” “data labor,” or “ghost work” – has boomed as more AI models require training and testing. People are needed to manually review, tag/label, annotate, and moderate data, which often results in exposure to disturbing and psychologically damaging content. In other cases, their roles involve inputting provocative prompts and assessing the bias and/or offensiveness of the model’s responses. Doing these tasks for hours a day can take an immense toll on one’s mental health, and unfortunately, most of these workers are contract hires who receive minimal compensation. They lack access to health benefits, often cannot earn enough to cover their needs, and can be fired indiscriminately. These workers are considered “invisible”: their labor is largely unacknowledged by the companies they work for while also going unseen by consumers. When companies market AI tools and features as if they simply work like magic, it is easy to overlook the very real human cost.
Related to exploitation, see also the section above on Copyright regarding training models on content without compensating creators.
Best Practices
- Consider each use of AI and whether a non-generative tool could accomplish the task instead.
- Be conscientious about your use of AI tools.
- Consider if your AI task is worth potentially giving up access to your personal information.
- Thoroughly review the privacy or data use policy of any AI tool you’re considering using.
- Avoid putting any personal information (yours or anyone else’s) into a chatbot or other AI tool.
- Do not upload any proprietary information or work (yours or anyone else’s) into an AI tool, especially without permission.
- Acknowledge your use of AI.
Key Takeaways
- There are many ethical factors to consider when choosing whether or not to use AI for a given task. These include potential harms to other people, to yourself, and to the environment. Some prominent topics are:
- Copyright violations and lack of compensation
- Liability and academic citation standards
- Power use and environmental impacts
- Privacy
- Job market impacts and labor exploitation
Some content on this page is adapted from:
- Missouri Library Association. (2025). “Missouri Librarian AI Resource Summit.” [link forthcoming]