Have you ever wondered whether the person in a video you were watching was real? Deepfake technology is one of many new applications of artificial intelligence that manipulate thousands of social media users daily.
Andrea Hickerson has made it her object of study. As dean of the School of Journalism and New Media, professor of journalism and co-director of the new National Center for Narrative Intelligence, Hickerson has brought her research on deepfakes with the DeFake Project to UM.
“Deepfakes are audio-visuals generated or altered by artificial intelligence,” Hickerson said. “A ‘good’ deepfake is imperceptible to the human eye, meaning we can’t always trust what we see.”
Deepfake technology allows users to alter photos or videos of a person to make them say or do something that appears real but is not. As such, deepfakes can be vehicles for misinformation or outright defamation.
Recently, a video was released of Tom Hanks promoting a dental company. The video turned out to be completely artificial — Hanks reported that he had nothing to do with it.
“Deepfakes have the ability to severely impact a person’s reputation if their image is used in a deepfake without their consent,” Hickerson said.
In an MIT Technology Review article, Rochester Institute of Technology graduate student John Sohrawardi noted that the damaging power of deepfakes is often leveraged against women in the form of revenge porn.
“Students, and historically women, are susceptible to artificially generated (deepfaked) revenge pornography through deepfaked face-swap videos,” Sohrawardi said.
Many students are unaware that deepfakes are an issue. Reagan Phalines, a freshman journalism major, said she had seen deepfakes but had not been educated about them.
“I knew it was kind of a thing, but I didn’t know the term for it and have only seen it in a form of entertainment,” Phalines said.
Hickerson began her work on deepfakes about five years ago as a professor at the Rochester Institute of Technology. Her colleagues on the DeFake Project included computer science and cybersecurity professors at RIT. What distinguishes Hickerson's research from traditional research on deepfakes is that she and her colleagues develop their work specifically with journalists in mind.
“We believe journalists are essential to educating the public,” Hickerson said.
Her work focuses on integrating technologies like deepfakes into journalism coursework so that students develop expertise in computing and technology.
Michael Tonos, interim chair of the journalism department, said AI technology poses a learning curve for many journalists.
“As journalists, we are always taught to be skeptical of what we read, but rarely are we taught to be skeptical of what we see,” Tonos said.
Tonos and Hickerson believe learning about deepfakes is vital to students pursuing careers in journalism.
“It is critical that students going into public communication fields have enough technical and computing literacy so they can translate important information to the public,” Hickerson said.
As artificial intelligence technology develops, and deepfakes along with it, the risk of online manipulation will continue to increase.
“Verified information is critical for the public to make informed decisions. A deepfake could compromise that or it could manipulate a person’s attitudes or emotions,” Hickerson said.
When cautioning students about the validity of content shared on social media, Hickerson advised them to trust their intuition and do their own research to verify what they see.
“Think about who shared it and why, and if you think something feels manipulated or off, take a moment to look for independent verification of the video,” she said.
No tools currently exist that can consistently and accurately identify deepfakes, as any such tool would need constant updating as the technology progresses.
“Be skeptical of information from sources you don’t know much about and be open to the possibility that someone could be trying to manipulate you,” Hickerson said. “My door is always open to students interested in learning more about or doing research on deepfakes.”