In September 2023, civil rights lawyer and educator Meetali Jain launched the Tech Justice Law Project (TJLP), a nonprofit pushing back against the growing power of Big Tech through litigation, policy and education. Small but mighty, the group tackles everything from disinformation and AI harms to children’s digital rights.
“In every issue we care about — civil rights, human rights — there’s now an online dimension,” Jain said. “If we want to vindicate people’s rights, we need to be conversant in online technologies and how they’re altering the media ecosystem.”

Jain participated in a panel discussion titled “Where Artificial Intelligence and the Law Collide” at the Jordan Center for Journalism Advocacy and Innovation symposium on “Addressing the Impact of Social Media and Artificial Intelligence on Democracy” at the University of Mississippi on April 1.
Jain’s path to tech accountability started far from Silicon Valley. Early in her career, she represented detainees post-9/11, including those held at Guantánamo Bay.
“From that, I moved into other areas — workers’ rights, corporate accountability — and then into clinical teaching,” Jain said.
Jain taught at American University and Seton Hall Law School and spent time in South Africa after clerking for Justice Yvonne Mokgoro on the country's Constitutional Court. The human rights lawyer returned to the U.S. after the 2016 election, when she began working at Avaaz as campaign and legal director.
“That was when the word ‘disinformation’ entered my vocabulary,” Jain said.
While working with a global team of researchers, she helped identify and expose harmful online narratives in order to pressure platforms such as Facebook, Twitter and YouTube to take action. But she quickly saw the limits of progress.
“Regardless of administration, the might of the tech lobby and the oligarchs was so strong that no regulation would be easily passed,” Jain said. “So I left to work on tech policy at the state level to try to fill in the gap left by Congress.”
Those experiences eventually led Jain to launch TJLP, where she serves as founder and executive director.
“We’re very small, just two people, but we work in partnership with law students. Students are our backbone,” Jain said. “We both come from clinical teaching, and we really believe the stronger the student experience, the more likely they are to pursue public interest tech law and not get swept up by (larger) companies.”
TJLP’s work spans direct litigation, amicus briefs and policy advocacy.
“We’ve been engaging with lawmakers at the state level and in Congress,” Jain said. “We also maintain a litigation tracker to monitor meaningful tech-related cases in the U.S.”
The organization’s work gained even more urgency last summer, when Jain received a chilling email from a grieving mother.
“She said nobody believed her when she said she thought her son’s death was caused by technology,” Jain said. “And I had chills. I had been working in children’s digital rights and was very aware of the dangers of generative AI. What I didn’t know was who the plaintiff would be or what the specific facts would be until I spoke to her.”
The woman’s son had taken his own life after a months-long engagement with characters on an AI companion app.
“This mom’s objective is to raise awareness, especially among parents who know little about this technology,” Jain said. “And I can say, even as a mom of two and someone who does tech law full time, I had no idea about these apps or this industry.”
For Jain, the case is just one example of why regulation cannot wait.
“Different technologies and different populations require different guardrails. But we need consensus to do something, because right now it’s innovation at all costs,” Jain said. “Mark Zuckerberg became famous for his motto, ‘move fast and break things.’ I now feel that with AI, it’s ‘move fast and break people.’ We need to have that conversation.”
She also questioned the Trump administration’s push to accelerate AI innovation to compete with China.
“I think we can stay competitive and also constrain. We have to,” Jain said. “And I think that we’re going to have to see an outcry from ordinary people for this to happen.”
Jain expressed a deeper concern about who gets left behind in the process of technological development.
“The tech policy world can be very elitist,” Jain said. “I came in with a social justice background, and I was really struck by how little infrastructure exists for regular people harmed by tech. … You’ve got legal services for housing or immigration, but there’s nothing for tech harm survivors. What you don’t have in this field is people saying, ‘Have you been harmed by tech? Come see us. We’re going to help you.’”
In response, Jain and her nonprofit are exploring how to apply the legal aid model to tech harms. She also hopes to bring a human rights framework back into the conversation — something she has been committed to since her early days as part of the “Bringing Human Rights Home” network.
“The U.S. goes out pushing human rights norms globally,” Jain said. “But (the U.S.) hasn’t even ratified the Convention on the Rights of the Child. I’m still interested in pushing for human rights enforcement right here at home.”
With each case, Jain is redefining what tech accountability could look like, not just in policy rooms but also in people’s lives, especially among the vulnerable populations most affected by technology.
“Injustice doesn’t stop at the border of a courtroom or a country,” Jain said. “Neither should our fight against it.”