To the management of America’s nonprofit newsrooms:

We are writing to you as the nation’s first-ever caucus of nonprofit newsroom unions, representing media workers at journalism outlets across the country.

Once again, our industry finds itself at an inflection point, facing the uncertainty of artificial intelligence. Like any new technology, AI can provide a potent new tool to assist us in our work if it is used responsibly, thoughtfully and ethically.

However, we have already seen numerous failures in the implementation of AI at for-profit news outlets, where the economic incentive to use AI recklessly has at times degraded the otherwise high-caliber journalism our colleagues produce. From Axel Springer to Gannett, major for-profit publishers have unveiled flawed AI tools that hallucinate false news articles or synopses, slow down journalists’ workflows and otherwise threaten their editorial product.

Nonprofit newsrooms can and must do better to uphold the mandates of our mission-driven organizations and ensure readers’ trust in our editorial product. Importantly, we are not calling for a ban on AI. Our message is simple: management must work with us to determine how best to implement this emerging technology and protect journalistic integrity.

To that end, we unequivocally demand that you commit to the following guardrails around AI in current or future collective bargaining agreements and in the implementation of such contracts:

  • AI is rapidly evolving and most of its uses remain to be seen. Our unions will retain the right to bargain over its future implementation.
  • Management will not lay off media workers as a result of implementing AI.
  • Our members will not be disciplined if they decide AI is not the best tool for a job, and their usage of AI tools will not be surveilled.
  • Our outlets will not publish work that is entirely AI-generated, including, but not limited to, text, photos, videos, audio, translations or artwork. When we publish AI-assisted work, it will be clearly labeled as such, will be checked by a human editor and will follow a public ethics policy.
  • Management will not strike deals to use our work — or the work of freelancers or contributors — to train AI without both our consent and our compensation.

Many nonprofit newsrooms have already agreed to common-sense AI provisions, and we applaud them. Publications including Grist, the Associated Press, CalMatters/The Markup and the Chicago Reader have enshrined contract language with guardrails such as protections against layoffs resulting from AI and requirements that new AI tools be implemented only with the unit’s consent.

But some nonprofit newsrooms appear set on racing to the bottom.

Management at ProPublica, which stands at the forefront of investigative journalism, has repeatedly refused in negotiations with its workers to agree to even the most basic limits on publishing AI-generated content or laying off media workers as a result of AI. At The Marshall Project, the preeminent source for enterprise reporting on the criminal justice system, management has gone many months without responding to its workers’ contract proposal concerning AI. At VTDigger, which provides vital local news to Vermonters, management has offered insulting AI provisions that would allow it to lay off reporters as a result of AI and that include zero guardrails for unionized journalists on how AI affects their editorial product. And at The Texas Tribune, a distinguished voice covering the Lone Star State, management previously agreed to AI guardrails, including layoff protections, only to recently reject those provisions in favor of unchecked discretion to roll out AI.

Our newsrooms’ readers and small-dollar donors have already expressed their displeasure with the poor decisions leaders of the media industry are making with respect to AI. Our colleagues at for-profit newsrooms are providing cautionary tales. And now, we, the journalists and staff who produce the country’s nonprofit news, demand that you work with us to protect the integrity of our work and ensure AI is used responsibly. The future of journalism depends on it.

Respectfully,

The unionized journalists and media workers of CalMatters/The Markup, the Central Valley Journalism Collaborative, Chicago Sun-Times, Consumer Reports, EdSource, In These Times, Jacobin, High Country News, The Marshall Project, MinnPost, New York Focus, ProPublica, The Salt Lake Tribune, Spotlight PA, The Texas Tribune, VTDigger and contributors from the National Writers Union's Freelance Solidarity Project