In its brief but abrasive existence so far, artificial intelligence has jumped off the screen, transcending its digital confines. ChatGPT, the first of countless such tools to reach the public, made its debut in late 2022, yet society remains unprepared and ill-equipped.
Aside from the occasional academic notice to “not plagiarize with AI” — a vague and toothless directive — artificial intelligence has gone largely unregulated in every sphere of society, and its reins are often held by inexperienced users.
From a surge of computer-generated anime art to an explosion of visitors to ChatGPT and other artificial intelligence tools, how we teach AI needs to be reevaluated so that students and industries alike can thrive.
One of the most dangerous side effects of artificial intelligence is its impact on creative industries. Artists from watercolor painters to jazz musicians and everyone in between have been put in a precarious spot by the sudden influx of digitally generated designs.
Creativity — a unique component of the human experience — is under threat from machines. While it takes an artist hours to paint a portrait, AI can conjure a picture into existence in seconds based on an analysis of existing art scraped from the web.
But why is this bad economically? In a free market, shouldn't products that can be manufactured more quickly be the first on the shelf, provided they are of good quality? The answer: absolutely not.
AI algorithms scan the internet for original art pieces and intricately duplicate those artists' styles and methods while skirting copyright protections. In a similar way, AI kills academic creativity through its internet scanning and near-plagiarism.

While “do not plagiarize with ChatGPT” seems to reverberate in every syllabus, that warning is only part of the story. Obviously, claiming a midterm paper written by a computer algorithm as your own is dishonest, but one reason many students flock to AI for immediate answers rather than descriptive suggestions is the lack of education regarding how to use it.
Now, instead of taking a special interest in a particular area of research as a means to write an essay, students compete to see who can make the machine's writing sound the most human.
The purpose behind many assignments is quickly being lost. AI is not entirely to blame for this, as students have been wired to care more about high marks than about actually learning.
AI can and should be used in the classroom to reduce time spent on menial tasks that do not benefit intellectual development or academic engagement. These include creating flashcard decks for final exams, reading every word of an assigned textbook or proofreading written work for punctuation errors until migraines threaten your sanity.
For these tasks, AI can reduce the time spent and allow more focus on areas that promote deeper learning. All you need to do is ask the robot if it can help.
Perhaps the most dangerous way AI has shaped our world is through its use in misinformation and propaganda. Since 2022, websites and social media accounts have seen a surge of “fake news” generated by artificial intelligence, often with devastating consequences.
In 2024, any actor — whether well-intentioned or malicious — can launch propaganda campaigns that erode basic human dignity.
Leading up to 2020, Eastern European “troll farms” used a litany of resources to convince Facebook-savvy Americans of political lies, reaching an audience of 140 million users. With the advent of AI, however, that arsenal of programmed weaponry has only grown.
At the end of the day, tools like ChatGPT, Jasper and Gemini carry more nuance than you have been told. Rather than taking the most recent technological boom at face value, students and industry leaders alike should consider how they are using AI — and whom they are hurting.
Kadin Collier is a freshman Arabic and international studies double major from Tokyo.