Story by
TANYA BABBAR, AMANI CLARK-BEY, GIUSTINO RACCHINI, MEGAN TROTTER and HALEY MORELAND
TribLive
Sept. 11, 2024
This story was not written using artificial intelligence, but it could have been.
Although the technology might have sped up the writing process, the quality of the information — and the trust the reader has in it — likely would have suffered.
Use of AI in the news industry, so far, largely is restricted to research, to making stories more searchable online and to compiling data from the internet.
“No one has a good example of generative AI doing original reporting,” said Jeremy Gilbert, a Northwestern University professor and media strategist who has spent the last 15 years looking at the question of how AI can benefit journalism.
But that doesn’t mean journalists aren’t using AI. As the technology grows and advances, more uses for it are being found in the news industry.
News agencies already are experimenting with a variety of ways to use artificial intelligence to reach broader audiences and make newsrooms more efficient, said Alex Mahadevan, director of the Poynter Institute’s MediaWise program.
For example, the Washington Post has a chatbot called Climate Answers. Readers can pose questions, and the chatbot will comb through relevant Washington Post stories to provide AI-generated answers.
The New York Times and the Atlantic use AI voices to read long stories aloud, Mahadevan said, and USA Today generates bulleted summaries of its stories using the technology.
“What you’re seeing are news organizations leveraging it to make their content more palatable to a wider audience,” he said.
Other news outlets are using artificial intelligence to more quickly scan through data, classify images and video and optimize headlines for search engines, Mahadevan said.
Where the technology will go in the future, he said, is impossible to predict.
One possibility, Mahadevan said, is that artificial intelligence could be used to craft news packages for specific users. A user could specify their preferred medium and areas of interest to receive content catered to them, he said.
“I think the biggest promise that AI holds is personalizing all of our great reporting that we do in ways that will reach new audiences, that will make them care about what’s happening,” he said.
The creation of actual news stories with AI, however, remains extremely limited.
The Associated Press, for example, has been a leading and transparent proponent of the use of AI in news production, relying on the technology to tag content for search and to help predict where photographers can get the best photos of news events.
The AP also uses AI for limited story creation, such as writing corporate earnings reports and producing previews and recaps for most professional sports leagues.
AP box scores for sports such as baseball, football and basketball, for example, are produced via AI by tapping into stats put out by teams or their leagues on the internet. When a major corporation publishes its earnings report to a website, AI can cull that information and produce a summary of the report for AP readers in far less time than it would have taken a human reporter.
At TribLive, newsroom editors are in the early stages of exploring how AI tools can be used to streamline the news gathering process.
“We continue to explore the responsible use of AI tools to assist in the development of content,” said executive editor Luis Fabregas. “That said, we are committed to producing journalism of the highest standards, which means that we don’t publish stories or photos generated by AI. We do that because we value accuracy and originality.”
The Switch Sports, an online and broadcast sports event production company, helps provide live game action from all major professional leagues. Broadcasters licensed to cover the sports leagues can pick up play-by-play, analysis or even live video feeds from the company and stream or broadcast games without committing a full production crew to the event.
Anthony Desanti, executive producer at The Switch, said the technology helps him write copy for game announcers in real time and receive real-time statistical analysis. Other aspects of AI improve the quality of video and photographs.
“This is only the tip of the iceberg. AI is exploding,” Desanti said.
That explosion has created real interest among news organizations striving to stay relevant in today’s technology-driven climate.
“A lot of newsrooms are experimenting with artificial intelligence because they don’t want to be left behind, like many of them felt they were with the transition from print or broadcast to digital,” Gilbert said. “It’s important for newsrooms to experiment right now before the tastes of audiences have changed in ways that might make it hard to recover.”
As the Knight Chair in Digital Media Strategy at Northwestern Medill, Gilbert is running experiments through the school’s digital media Knight Lab to explore new ways of using AI in journalism.
In Hannah Covington’s opinion, news literacy is now more important than ever.
Covington, senior director of education at the News Literacy Project in Washington, D.C., believes that while AI may save news organizations time, a healthy amount of caution is necessary.
“News organizations, like everyone else, need to understand the limitations of AI,” Covington said.
In recent years, the public has seen several AI news mishaps and missteps. In November 2022, tech outlet CNET published AI-generated articles, leading to widespread public criticism and rebuke from its staff. Sports Illustrated, in November 2023, came under public fire for the same thing.
Most recently, a reporter for a newspaper in Wyoming resigned after it was discovered he was using an AI writing tool to produce news stories under his byline.
“Transparency is key,” Covington said. “I think these decisions, when a news organization is looking at AI, should be made with input from audiences.”
Large news organizations are formulating guidelines governing how AI can be used in their newsrooms, both now and in the future.
The Associated Press, for example, last year published multipoint guidelines that govern the use of AI by its employees.
Included are restrictions against using technologies like ChatGPT to produce publishable content. AP staff are allowed to experiment with ChatGPT as long as they don’t include the information in actual news reports.
Other guidelines require that any output from a generative AI program be treated as unvetted, with journalists independently verifying its accuracy; prohibit the use of AI to alter photos, videos and audio content; and call for independently verifying the source of photos transmitted to AP to ensure deepfakes are kept out of the organization’s files.
Business Insider, likewise, has AI guidelines that remind journalists they are responsible for the “accuracy, fairness, originality, and quality of every word in your stories.”
The guidelines specifically forbid journalists from using programs like ChatGPT to write stories. “… Your stories must be completely written by you,” the guidelines state.
The road to integrating AI in the news industry has not been without its hang-ups, even when the technology isn’t being used to produce stories.
Companies using AI to identify and gather news from online sources have landed in court, accused of copyright infringement.
In December, news giant The New York Times sued OpenAI and Microsoft, the companies behind the popular AI chatbot ChatGPT, for copyright infringement. The newspaper accused the tech companies of using millions of its news articles without authorization to train ChatGPT to produce its own news stories, making the chatbot a competitor of the newspaper, the Times reported.
OpenAI and Microsoft since have moved to dismiss portions of the suit. In April, eight newspapers owned by Alden Global Capital filed a similar lawsuit against the companies.
Like most other things associated with AI, however, the position of news organizations is changing rapidly.
In recent months, many large news organizations, including The Atlantic and Politico, have partnered with OpenAI to license their content. Vox reports that more than 70 news organizations have made deals with the AI company in the past few months, including German publishing giant Axel Springer, Dotdash Meredith, The Financial Times and The Associated Press, among others.
Just this month, OpenAI announced on its blog that it struck a deal with magazine publishing giant Condé Nast to license content from Vogue, The New Yorker, Condé Nast Traveler, GQ, Architectural Digest, Vanity Fair, Wired, Bon Appétit and others for ChatGPT and its newest rollout, SearchGPT.
In fact, many in the AI and news industries point to positive aspects of the technology that can protect both journalists and their readers from misinformation.
Very realistic but bogus images, known as “deepfakes,” have become a problem in the dissemination of news. While AI is to blame for the existence of deepfakes, it also can be used to detect and expose them.
At the University of Buffalo’s Center for Information Integrity, media forensics expert Siwei Lyu uses AI programs to uncover the deepfakes before they find their way into legitimate news reports.
By running a photo through a series of detection algorithms, Lyu and his staff can spot signs that a photo was created using AI. Common signs are anomalies in the image, like someone with six fingers.
Lyu even made the tools available to the public through his Deepfake-O-Meter, where anyone can create an account and run photos through the detection programs to see whether an image is genuine.
For decades, local newspapers across the U.S. have faced a decline. According to Northwestern’s 2023 State of Local News study, the U.S. had lost almost 2,900 newspapers since 2005. As demand for news and media consumption habits have shifted in recent decades, smaller publications have borne the brunt of a challenge facing newspapers everywhere. When it comes to staying ahead of the curve by implementing AI in their reporting, larger, better-funded newspapers are more likely to have the wherewithal to keep up with new technology.
“Some of these large language models and multimodal language models are proprietary ones that cost money. The open-source ones are improving, but they still lag behind the ones that are commercial payment, and that payment is not trivial,” said Dr. Adriana Kovashka, an associate professor of computer science at the University of Pittsburgh.
In 2021, the Knight Lab launched the Local AI in the News Initiative in partnership with The Associated Press to help local newsrooms manage — and join in on — the AI trend.
“When you can do experimentation instead of cost cutting, then maybe that helps grow your audience instead of shrinking,” Gilbert said.
AI technology is developing at accelerated speeds, leaving lawmakers, researchers, journalists and educators working hard to keep up.
In Covington’s field, rapid change means there isn’t one foolproof suggestion she can offer on how to protect oneself from misinformation. Still, a few simple steps can make a considerable difference: internet searches to vet information, checking multiple credible sources and considering the motivations of whoever posted the information are all healthy measures.
Despite understandable concerns over the threats misuse of AI poses, Covington encourages people to avoid what she refers to as a blowback effect with news consumption.
“It’s important to not let AI undermine your trust in all news and information,” Covington said. “It’s thinking, ‘If everything is fake, then I can’t trust anything.’ Don’t let AI stop you from believing anything. Just be skeptical and look for credible sources.”
Tanya Babbar, Amani Clark-Bey, Giustino Racchini, Megan Trotter and Haley Moreland are TribLive staff writers. Staff writer Julia Burdelski contributed.