Doomsday predictions of the artificial intelligence apocalypse have circulated in the public discourse for years, but after the release of OpenAI’s ChatGPT in November 2022, popularity and expectations of AI have reached a fever pitch.
The technology crept further into the lives of millions, notably when 383 million people opened Snapchat to discover a new friend on their accounts: My AI.
Many people who had managed to escape the AI news cycle as the technology progressed over the years were spared no longer. One thing remained unclear, though: What exactly is AI?
“There are different definitions of AI, and many of them overlap. Since the term itself is artificial intelligence, we can describe AI in simple terms as a computer’s ability to mimic or simulate human intelligence, especially cognitive functions such as learning, reasoning and interacting,” Assistant Professor of Computer Science Thai Le said.
The conversation around AI in recent months has largely focused on generative AI, such as ChatGPT or DALL-E, but AI is used for more than just creating content. Some AI systems package boxes, while others spell-check drunk text messages or detect damage in oil pipelines.
AI works by collecting large amounts of training data, analyzing and detecting patterns in that data and then making a prediction based on that analysis. The AI then evaluates its own prediction, finding errors and tweaking its process to produce a more accurate result the next time.
For example, DALL-E, the generative image AI released in January 2021, is trained on pairs of images and corresponding text, such as a picture of a puppy with the word “dog” attached to it. After analyzing millions of such pairs and detecting trends, the AI produces its best guess when prompted to provide an image of a dog, improving each time.
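For readers who want a concrete picture of that learn-predict-correct cycle, the toy Python sketch below fits a straight line to a handful of number pairs by repeatedly predicting, measuring its error and nudging its parameters. It is only an illustration of the general idea, with invented example data and names; systems like ChatGPT and DALL-E run the same basic loop with enormously larger models and datasets.

```python
# Toy illustration of the learn-predict-correct cycle described above:
# fit a line y = w*x + b to example data by repeatedly predicting,
# measuring the error and nudging the parameters to shrink it.

# "Training data": inputs paired with the answers the model should learn.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0]  # the hidden pattern is y = 2*x

w, b = 0.0, 0.0          # the model starts out knowing nothing
learning_rate = 0.01     # how strongly to adjust after each round of errors

for step in range(2000):
    # 1. Predict: apply the current model to every training example.
    predictions = [w * x + b for x in xs]

    # 2. Evaluate: compare the predictions to the correct answers.
    errors = [p - y for p, y in zip(predictions, ys)]

    # 3. Adjust: tweak w and b in the direction that reduces the error
    #    (gradient descent on the mean squared error).
    n = len(xs)
    grad_w = sum(2 * e * x for e, x in zip(errors, xs)) / n
    grad_b = sum(2 * e for e in errors) / n
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")          # approaches w=2, b=0
print(f"prediction for x=6: {w * 6 + b:.2f}")   # close to 12
```

After a couple of thousand passes over the data, the sketch’s guess for a new input lands close to the true pattern, which is what it means for a model to “improve each time.”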
Unlike the human mind, AI is a digital process that can work without sleep, food or a pension program. Naturally, many people fear the advancement of AI technology and its implications for society and the economy.
“There are two popular, extreme camps of thinking on the future of AI. One, overoptimistic people tend to think that AI will solve everything. Two, overly pessimistic people tend to think that AI will destroy humanity and the world,” Le said. “Borrowing from one of my recent discussions with my colleagues, I want to be ‘hopeful’ about the future of AI. Positive AI efforts will be amplified, and negative AI efforts will be hindered and controlled.”
To bridge the gap between uncertainty and understanding, the AI Task Force at UM, led by Executive Director of Academic Innovation and Associate Professor of Writing and Rhetoric Robert Cummings, plans to offer an AI minor program to students at the university.
“The digital media and data studies interdisciplinary minor is offering a new emphasis in AI and data sciences,” Cummings said.
These courses will be available in the computer science and philosophy departments and will cover AI programming, fundamentals and ethics. Cummings expects the courses to be offered in fall 2024.
Despite AI’s considerable abilities, it does not have the capacity for emotions or subjectivity. Le thinks society should invest more in understanding AI’s limitations, such as navigating issues of bias and fairness.
Marc Watkins, Academic Innovation Fellow and lecturer, explained that recent reports have stoked fears that generative AI systems will automate people’s labor and take their jobs. However, Watkins noted that if the technology does not replace workers outright, AI could ease their workloads.
“I think we will see people embrace and react negatively at similar rates, making generative AI’s adoption chaotic and challenging,” Watkins said.