How can journalism and artificial intelligence coexist? What are the practical uses of AI in a field like journalism? What does the legal side of all of this look like?
Lead counsel for the New York Times v. OpenAI copyright lawsuit, Ian Crosby, answered these questions and many more at the “Addressing the Impact of Social Media and Artificial Intelligence on Democracy” symposium hosted April 1-2 by the University of Mississippi’s brand-new Jordan Center for Journalism Advocacy and Innovation.

“Rights holders like the New York Times want to be fairly compensated for the value that’s being extracted from their works,” Crosby said. “They also want to have some say in how the products that are created using their works are deployed to make sure that they’re synergistic and that they do not cannibalize the core business that funds the ability to create these works in the first place.”
Crosby’s client, the New York Times, is seeking fair compensation for instances in which its journalists’ works are used to train AI models.
Crosby also said that, no matter what decision comes from New York Times v. OpenAI, there will be a new precedent for AI usage in journalism going forward — one which creates a symbiotic relationship between journalism and AI.
“If you’re cannibalizing the revenue that creates the original content, the incentives to create the original content are going to be diminished, and so you will see a further contraction in the media space,” Crosby said. “I think it benefits everybody that there be a fair, rational model by which some of the monetization goes through the creators so that we can have a healthy economy.”
The purpose of the symposium at UM, and the general focus of the Jordan Center itself, is to confront the growing threats to information quality by advancing journalistic standards and promoting media literacy.
Knowing this, Crosby said that AI does not have to be viewed as the enemy many people make it out to be; rather, when used and operated in the right way, AI can be a tool for journalists.
“Just to be clear, I don’t think that any of my clients feel that AI is bad (or) that AI shouldn’t exist,” Crosby said. “AI has amazing uses, and those uses should persist.”
One practical use Crosby suggested for AI in journalism is the “needle in the haystack problem.” Crosby said AI is useful for delving into large data sets, such as government documents, sifting through them to pick out key information far more quickly than a human could.
“(If) you get some giant dump of government documents, and you want to write a story about it the next day, and you have AI go through and pull out and say, ‘here are the key documents that hit on that,’ that’s just one fair use case, right?” Crosby said.
As someone who does not practice journalism but keeps a finger on the pulse of the field, Crosby shared his perspective on the future of AI in journalism.
“I think it’s important for people who are journalists, and in particular people who are influential journalists, to establish sets of professional standards that live up to the best ethics of the profession — and hopefully a set of norms (regarding AI) will coalesce around those,” Crosby said.
Crosby said that, like any profession, journalism is full of choices. In this case, Crosby said each journalist has to figure out their own personal AI ethics.
“I think every journalist has to make an ethical decision, has to make a professional decision about how they’re going to use AI, but I think that there are definitely important use cases inside journalism,” Crosby said.
Amid the noise and chatter that journalists and everyone else hear about AI replacing jobs, Crosby offered some clarity on AI’s limited capabilities.
“Let’s be realistic: People talk about (AI) training and learning and reasoning, but these are all anthropomorphisms,” Crosby said. “These are all attributing human qualities to something that doesn’t have those qualities. AIs are next token predictors — they just statistically predict what the next word is based on all the words that they’ve seen.”
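The statistical prediction Crosby describes can be made concrete with a toy sketch. The following is a minimal, hypothetical illustration (not how production AI models are built): a bigram model that, for each word it has “seen,” predicts the word that most often followed it. Real systems use vastly larger contexts and learned parameters, but the principle — choosing the statistically likely next token — is the same.

```python
from collections import Counter, defaultdict

# Toy "next token predictor": count which word follows which in a tiny
# sample text, then predict the most frequent follower.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" followed "the" twice, "mat" once → cat
```

The model has no understanding of cats or mats; it only tallies co-occurrence counts, which is the point of Crosby’s caution against anthropomorphizing such systems.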