The Emmanuel Community and Generative AI
Generative AI refers to a class of artificial intelligence systems designed to generate text, images, music, and other forms of content that are similar to, or indistinguishable from, content created by humans. These systems learn patterns and structures from large datasets and then generate new examples that mimic those patterns.
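As a concrete, hedged illustration of what "learning patterns and generating new content" looks like in practice, the short Python sketch below uses the open-source Hugging Face transformers library (assumed to be installed, along with a backend such as PyTorch) to have a small pretrained model continue a prompt. It is an example of the general technique, not a description of any particular commercial system.

```python
# Illustrative sketch only: generate text with a small pretrained model.
# Assumes the "transformers" library (and a backend such as PyTorch) is installed.
from transformers import pipeline

# Load a small, publicly available text-generation model (GPT-2).
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt; it predicts likely next words
# based on patterns learned from its training data.
result = generator("Generative AI refers to", max_new_tokens=30)
print(result[0]["generated_text"])
```

The output reads as plausible English because the model reproduces statistical patterns from its training text, which is exactly the behavior the ethical issues discussed below are concerned with.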
AI has tremendous potential to produce great benefits in many areas of life, including business, science, and education. It also has the potential to produce significant harms. The Emmanuel Community is responsible for establishing policies that best promote its benefits while minimizing its harms, especially in academic areas.
The Emmanuel Community holds a variety of opinions about using AI in higher education. Many are concerned about its negative impact on teaching and learning, especially where academic integrity is concerned. Others see AI as a potentially useful pedagogical tool, not unlike other previously disruptive technologies that initially spawned controversy, such as the internet, mobile devices, and social media. The purpose of this website is to serve as a resource for members of the Emmanuel Community to examine these harms and benefits, to guide us all in establishing institutional and individual policies for AI's use, and to serve as a central clearinghouse for discussions of how generative AI might best be used at Emmanuel.
AI Ethics
Early attempts at constructing an ethics of AI have focused less on discussing AI's potential benefits and more on identifying its potential harms, suggesting how to avoid them, and formulating all of this into a set of rules for designing and regulating AI systems. The general moral principles on which these rules rest are mostly consequentialist (the right thing to do is whatever promotes the greatest good and/or least evil for all those affected by the action) and, to a lesser degree, the respect-for-persons principle (everyone has the same intrinsic value and thus should be treated equally). Lists of such rules vary somewhat, but most include the following as essential to any ethically acceptable version of AI.
AI models require large datasets to function effectively. For companies using dedicated AI systems, it is important that this data be kept private from competitors and hackers to protect their business interests. This data often includes personal customer information, such as phone numbers, addresses, and Social Security numbers. Other types of personal information can be found in Large Language Models (LLMs), information that allows such systems to infer a person's location, consumer preferences, medical history, political preferences, age, and gender. In addition, anything that an individual submits to an AI system may become part of that system and available to other users. Because this information can be used for harmful purposes, AI systems should be designed and used in ways that respect the privacy of businesses and individuals.
One of the ethical issues of AI use concerns equal access to its benefits. There is a risk that certain groups may be excluded from benefiting from AI technologies due to factors such as socioeconomic status, geography, or digital literacy, widening already existing disparities. Those without access to reliable internet connections, quality education, or financial resources may be further marginalized, reinforcing existing power imbalances.
The main issue of transparency in AI ethics is the lack of a clear understanding of how AI systems make decisions. While the algorithms used in AI models produce answers to prompts, just how they arrive at those responses is unknown to users. In addition, the quality of the data used is often unknown. This lack of transparency makes it difficult to identify the biases and errors embedded in the data and the algorithm, to trust the outcomes as reliable, and to assign accountability for the decisions AI systems make. To avoid these problems, AI systems need to clearly identify the data used and the reasoning behind their responses, making the AI decision-making process accessible, reliable, and accountable.
The safety issue in AI ethics concerns the potential for AI systems to cause harm to individuals, society, or the environment, especially as these systems become more advanced and capable of working autonomously. Without human oversight, they can behave unpredictably and produce harmful unintended consequences. In applications such as autonomous vehicles or autonomous military weapons systems, for example, errors or malfunctions can have severe consequences, including loss of life or significant economic damage. AI can also be used to spread disinformation and support criminal behavior. Some even worry that advanced AI systems, ones that learn on their own and set their own goals, could eventually come to control the world. Ethical AI systems must find ways to prevent harmful outcomes such as these.
Large AI systems require huge data centers that consume tremendous amounts of energy to run and, especially, to train them. This has the potential to significantly increase carbon emissions and accelerate climate change. By one reliable estimate, data centers worldwide will use roughly as much electricity in 2026 as Japan consumes today. In addition, AI systems use a great deal of fresh water to cool the computers running them, often competing with water needed for human and agricultural purposes. Major AI investors are already planning to build nuclear power plants to meet part of this demand. The hope is that in the future AI systems will discover ways to conserve energy in a variety of areas, thus mitigating their carbon footprint. In the meantime, sustainable energy use remains a significant problem.
One of the strongest requirements of an ethical AI system is that it be used fairly, without discrimination. The discrimination found in AI systems is usually not overt but appears in the form of biases embedded in the algorithms and data the system uses. Those who construct algorithms sometimes have their unconscious biases reflected in their creations and in the data on which those systems are trained. This is especially true for some LLMs, which are trained on much of the information found on the Internet, material that contains a great deal of bias. These biases show up, for example, in systems that judge mortgage eligibility, approve health care treatments, guide criminal sentencing, and screen job applicants, among many other areas. Such biases reflect and perpetuate stereotypes within a society, reinforcing social inequality. Addressing them requires careful attention to the data used to train AI systems, to the design of the algorithms themselves, and to the broader societal context in which these systems are used.
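To make the mechanism concrete, here is a minimal, hypothetical Python sketch. The lending records, group names, and decision rule are invented for illustration and do not describe any real system, but they show how a model that simply learns historical approval rates reproduces whatever bias is baked into those records.

```python
# Minimal, hypothetical sketch of how bias in training data carries over
# into an AI system's decisions. Data and decision rule are invented.

# Historical lending decisions that were themselves biased by neighborhood.
historical_data = (
    [("neighborhood_a", "approved")] * 80 + [("neighborhood_a", "denied")] * 20 +
    [("neighborhood_b", "approved")] * 30 + [("neighborhood_b", "denied")] * 70
)

# "Training": learn the approval rate historically observed for each group.
approval_rate = {}
for group in ("neighborhood_a", "neighborhood_b"):
    outcomes = [outcome for g, outcome in historical_data if g == group]
    approval_rate[group] = outcomes.count("approved") / len(outcomes)

# "Prediction": approve when the learned rate exceeds 50%. Two otherwise
# identical applicants are treated differently purely because of the
# biased pattern the model learned from past decisions.
for group, rate in approval_rate.items():
    decision = "approve" if rate > 0.5 else "deny"
    print(f"{group}: learned approval rate {rate:.0%} -> {decision}")
```

Because this "model" does nothing more than echo historical patterns, any unfairness in those patterns is carried forward automatically, which is why attention to the training data matters as much as attention to the algorithm itself.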
The reliability of LLMs is an extremely serious issue in AI ethics. Responses to prompts by AI systems such as ChatGPT are often inaccurate. Such systems are designed to predict what text usually comes next, and sometimes those predictions are wrong. This can be due to inadequate datasets, mistakes in algorithms, the complexity of the problem, unclear prompts, or misinterpreted social context. Under these conditions the system simply makes something up. While some AI systems are more accurate than humans, others have an unacceptably high rate of error. Some estimates place the inaccuracy of LLMs between 5 and 20 percent, depending on the deployment. Wildly implausible responses, caused by the algorithm seeing patterns in the data that are not there, are often called “hallucinations.” This level of inaccuracy is clearly a problem in many areas, such as health care diagnosis, facial recognition, and scientific and academic research. For AI systems to be ethically acceptable, they clearly need to be more accurate than is currently the case.
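The following toy Python sketch, built on an invented two-sentence "corpus," illustrates why a system that only predicts the next word can produce fluent but false statements. It is a drastic simplification for illustration, not how production LLMs are implemented.

```python
import random
from collections import defaultdict

# Toy next-word predictor trained on an invented two-sentence corpus.
# It models only which word tends to follow which, not whether a
# statement is true, which is the root of "hallucinations."
corpus = "paris is the capital of france . rome is the capital of italy .".split()

followers = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word].append(nxt)

# Generate a sentence by repeatedly sampling a plausible next word.
sentence = ["paris"]
while sentence[-1] != ".":
    sentence.append(random.choice(followers[sentence[-1]]))

# Roughly half the time this prints "paris is the capital of italy ." :
# fluent and statistically plausible under the learned patterns, but false.
print(" ".join(sentence))
```

Larger models trained on more data make such errors far less frequent, but the underlying objective is still to produce plausible continuations rather than verified facts.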
The issue of AI responsibility concerns both those who construct AI systems and those who use them. If something goes wrong with an AI system, if it makes a poor prediction or a biased judgment, for example, it is not the system's fault; the system has no agency. Rather, the responsibility lies with those who design and train it. The more pressing issue concerns those who misuse AI systems. The term “accountability” best captures this dimension of personal responsibility: people with moral agency ought to be held accountable for their misuse of AI systems. Such misuse may include the spread of disinformation, especially through “deep fakes,” which use AI techniques to create audio and visual content that appears authentic but is not. Sometimes this false information is used to commit cybercrimes, such as fraudulent payment requests, identity theft, or ransomware attacks. In education, one of the main concerns is plagiarism, students presenting material produced by AI as if it were their own creation. An adequate AI ethics holds those responsible for such misuses of AI morally accountable for their harmful actions.
Professor Tom Wall: “The real threat is to say that ‘we are machines’ means that we are identical kinds of beings with the computers and the algorithms that control the world of AI.”