As artificial intelligence algorithms continue to advance, views on the ethical use of such programs are undergoing their own evolution. In the educational context, most critiques center on the student experience. Ethical use by faculty, however, receives far less discussion.
Universities must ask themselves this: How can one use AI healthily without becoming fully reliant on it? As algorithms adapt, assigning seemingly menial tasks to large language models (LLMs) is becoming ever more convenient.
Why spend hours poring over dozens of essays when AI can do it in seconds? Naturally, the academic profession demands the time-consuming review of tests, papers and even daily assessments. Perhaps AI's use can free up time that professors can dedicate to ensuring each and every student succeeds in the classroom.
As with any technology, errors are inevitable. Research reveals that AI evaluation and human evaluation often differ in harshness. Look at GPT-4o's assessment of essays: on average, AI-evaluated scores were 0.9 points below human-evaluated scores, and they matched only 30% of the time.
This might be attributable in part to AI's inability to distinguish good writing from bad. In fact, human graders handed out more A's and F's, whereas AI handed out more C's. The impact? Quality writing gets less recognition, and sub-par performance gets lenient critiques.
If AI evaluation is not supplemented by professor analysis, more students might falsely believe that they have met professors' expectations. The numerical difference between the professor's and the AI's grading might be small, but it makes a difference. Instructors who rely exclusively on AI produce students less equipped to earn their degrees.
Despite AI's humanless nature, it is no less susceptible to bias against minority groups. AI algorithms emulate human-influenced datasets, leading to computer assessments that reflect human prejudices. In a world where attacks on basic diversity programs and institutional barriers to minority student success prevent a level playing field, faculty must commit to ensuring AI tools evaluate meritocratically.
Dr. Matthew Murray from the Department of Sociology and Anthropology weighed in on faculty use.
“I don’t have anything against it directly, unless the AI simply imitates what it thinks you want, which is self-defeating and dangerous,” he said while elaborating on its ability to provide new perspectives.
Murray commented on Blackboard’s recently available AI tools, mentioning that “the new version of Blackboard has what seems to me to be very good AI tools for course development,” including but not limited to AI-generated quizzes, assignments and rubrics based on course materials.
On said tools, Murray said, “I have been lightly trained in them, but I have not used them.”
If faculty do choose to employ AI tools alongside traditional pedagogy, the university needs to adapt accordingly. Departments should implement required AI-use training, minimizing the potential downsides of AI tools while maximizing their benefits.
As students face accountability for their AI usage, educators must ensure they are using it to enhance the learning experience. Computers might be quick, but teachers are the true building blocks of education.
Ella Snyder is a sophomore creative writing major from Oxford, Miss.



































