Artificial Intelligence (AI) in Education

AI and Ethics in Education

The ethical and societal implications of AI, especially in education, are numerous and complex. Below are some issues to consider:

Accessibility and Equity: On the one hand, AI can help make education more accessible and personalized, enabling students to learn at their own pace and providing teachers with tools to identify areas where students are struggling. It could also create opportunities for students in remote areas or those who cannot attend school due to health issues or disabilities. On the other hand, not all students and schools have equal access to the technology and infrastructure needed for AI-based education. This digital divide can exacerbate educational inequalities.

Data Privacy and Security: AI systems in education often rely on collecting and analyzing large amounts of data about students. This raises questions about how this data is stored, who has access to it, and how it is used. There are risks of breaches of privacy and potential misuse of data.

Bias and Fairness: Like all AI systems, educational AI can be subject to bias, depending on how it's trained and what data it's trained on. For instance, an AI tutoring system could potentially favor certain types of students over others, based on the data it was trained with. This could perpetuate existing biases and inequalities.

Teacher-Student Relationship: While AI can automate some tasks, it cannot replace the human interaction and emotional support provided by teachers. There are concerns that over-reliance on AI could erode the teacher-student relationship and the social skills students develop in the classroom.

Skill Development: As AI and automation become increasingly integrated into the workforce, there is a need to ensure education systems are adequately preparing students with the skills they will need for the future. This includes not just technical skills for working with AI, but also soft skills like critical thinking and creativity that AI is currently not able to replicate.

Transparency and Understanding: It's important for students, parents, and educators to understand how AI tools make decisions. Unfortunately, many AI systems are "black boxes" where the decision-making process is opaque. This can make it hard to trust the system or to challenge its decisions.

Regulation and Policy: Given all these issues, there is a clear need for regulation and policies to guide the use of AI in education. These regulations need to address issues like data privacy, transparency, and equity. However, policy-making in this area is complex and needs to strike a balance between protecting students and enabling innovation.

All these issues underline the need for an interdisciplinary approach to AI in education, incorporating not just technological expertise, but also input from educators, psychologists, sociologists, ethicists, and legal experts.

ChatGPT and Free Labor

Asking students to use ChatGPT provides free labor to OpenAI. ChatGPT is in its infancy and has been released as a free research preview (OpenAI, 2022). It will continue to become a more capable form of artificial intelligence with the help of users who provide feedback on the responses it generates.

Consider: Do you really want to ask your students to help train an AI tool as part of their education? 

A blog post from Autumm Caines (2022), an Instructional Designer at the University of Michigan – Dearborn, outlines a few tips to mitigate this free labor, including:

  • Not asking students to create ChatGPT accounts and instead doing instructor demos;
  • Encouraging students to use burner email accounts (to reduce personal data collection) if they choose to use the tool;
  • Using one shared class login.

Caines also raises the possibility that students who rely on ChatGPT may be working themselves out of future jobs; however, we could not find research to support this claim.

This information is from "ChatGPT & Education" by Torrey Trust, Ph.D., and is licensed under CC BY NC 4.0.

Assignments and Privacy Policies

Before assigning students to work on projects involving AI chatbots, make sure to review the privacy policy of the tool(s) you've selected. Also consider what benefit you may be providing the developer by requiring your students to conduct free labor to improve the tool's algorithm.

OpenAI (the company that designed ChatGPT) collects a substantial amount of data from ChatGPT users.

  • The privacy policy states that this data can be shared with third-party vendors, law enforcement, affiliates, and other users.

  • Do NOT provide a student’s full name and associated class grade to ChatGPT to write emails; doing so is a potential FERPA violation (in the United States), as it shares a student’s educational record with OpenAI without the student’s permission. See more about FERPA at UNM.

  • This tool should not be used by children under 13 (data collection from children under 13 violates the Children’s Online Privacy Protection Rule - COPPA).

  • While you can request to have your ChatGPT account deleted, the prompts that you input into ChatGPT cannot be removed. If you, or your students, ask ChatGPT about sensitive or controversial topics, that data will remain with OpenAI.

TIP: Before asking your students to use ChatGPT (if you plan to do so), please read over the privacy policy with them and allow them to opt out if they do not feel comfortable having their data collected and shared as outlined in the policy.

This information is from "ChatGPT & Education" by Torrey Trust, Ph.D., and is licensed under CC BY NC 4.0.

Recommended Readings