Will A.I. Save or Kill Journalism?


I am a forty-five-year-old journalist who, for many years, didn’t read the news. In high school, I knew about events like the O. J. Simpson trial and the Oklahoma City bombing, but not much else. In college, I was friends with geeky economics majors who read The Economist, but I’m pretty sure I never actually turned on CNN or bought a paper at the newsstand. I read novels, and magazines like Wired and Spin. If I went online, it wasn’t to check the front page of the Times but to browse record reviews from College Music Journal. Somehow, during this time, I thought of myself as well informed. I had all sorts of views about the world. Based on what, I now wonder? Chuck Klosterman, in his cultural history “The Nineties,” describes that decade as the last one during which it was both possible and permissible to have absolutely no idea what was going on. So maybe the bar was low.

The 9/11 attacks, which happened during my senior year of college, were a turning point. Afterward, as a twentysomething, I subscribed to the Times and The Economist and, eventually, The New Yorker and The New York Review of Books. My increasing immersion in the news felt like a transition into adult consciousness. Still, it’s startling to recall how shallow, and how fundamentally optional, my engagement with the news was then. Today, I’m surrounded by the news at seemingly every moment; checking on current events has become almost a default activity, like snacking or daydreaming. I have to take active steps to push the news away. This doesn’t feel right—shouldn’t I want to be informed?—but it’s necessary if I want to be present in my life.

It also doesn’t feel right to complain that the news is bad. There are many crises in the world; many people are suffering in different ways. But studies of news reporting over time have found that it’s been growing steadily more negative for decades. It’s clearly not the case that everything has been getting worse, incrementally, for the past eighty years. Something is happening not in reality but in the news industry. And since our view of the world beyond our direct experience is so dramatically shaped by the news, its growing negativity is consequential. It renders us angry, desperate, panicked, and fractious.

The more closely you look at the profession of journalism, the stranger it seems. According to the Bureau of Labor Statistics, fewer than fifty thousand people were employed as journalists in 2023, which is less than the number of people who deliver for DoorDash in New York City—and this small group is charged with the impossible job of generating, on a daily basis, an authoritative and interesting account of a bewildering world. Journalists serve the public good by uncovering disturbing truths, and this work contributes to the improvement of society, but the more these disturbing truths are uncovered, the worse things seem. Readers bridle at the negativity of news stories, yet they click on scary or upsetting headlines in greater numbers—and so news organizations, even the ones that strive for accuracy and objectivity, have an incentive to alarm their own audiences. (Readers also complain about the politicization of news, but they click on headlines that seem to agree with their political views.) It’s no wonder that people trust journalists less and less. Gone are the days when cable was newfangled, and you could feel informed if you read the front page and watched a half-hour newscast while waiting for “The Tonight Show” to start. But there is a bright spot here, too: the news can change.

Certainly, change is coming. Artificial intelligence is already disrupting the ways we create, disseminate, and experience the news, on both the demand and the supply sides. A.I. summarizes news so that you can read less of it; it can also be used to produce news content. Today, for instance, Google decides when it will show you an “A.I. overview” that pulls information from news stories, along with links to the source material. On the science-and-tech podcast “Discovery Daily,” a stand-alone news product published by the A.I.-search firm Perplexity, A.I. voices read a computer-generated script.

It’s not so easy to parse the implications of these developments, in part because a lot of news already consists of summary. Many broadcasts and columns essentially catch you up on known facts and weave in analysis. Will A.I. news summaries be better? Ideally, such columns are more surprising, more particular, and more interesting than anything an A.I. can provide. Then there are interviews, scoops, and other kinds of highly specific reporting; a reporter might labor for months to unearth new information, only for A.I. to hoover it up and fold it into some bland summary. But if you’re interested in details, you probably won’t be happy with an overview, anyway. From this perspective, the simplest human-generated summaries—sports recaps, weather reports, push alerts, listicles, clickbait, and the like—are most at risk of being replaced by A.I. (Condé Nast, the owner of The New Yorker, has licensed its content to OpenAI, the maker of ChatGPT; it has also joined a lawsuit against Cohere, an A.I. company accused of using copyrighted materials in its products. Cohere denies any wrongdoing.)

And yet there’s a broader sense in which “the news,” as a whole, is vulnerable to summary. There’s inherently a lot of redundancy in reporting, because many outlets cover the same momentous happenings, and seek to do so from multiple angles. (Consider how many broadly similar stories about the Trump Administration’s tariffs have been published in different publications recently.) There’s value in that redundancy, as journalists compete with one another in their search for facts, and news junkies value the subtle differences among competing accounts of the same events. But vast quantities of parallel coverage also enable a reader to ask a service like Perplexity, “What’s happening in the news today?,” and get a pretty well-rounded and specific answer. She can explore subjects of interest, see things from many sides, and ask questions without ever visiting the website of a human-driven news organization.

The continued spread of summarization could make human writers—with their own personalities, experiences, contexts, and insights—more valuable, both as a contrast to and a part of the A.I. ecosystem. (Ask ChatGPT what a widely published writer might think about any given subject—even subjects they haven’t written about—and their writing can seem useful in a new way.) It could also be that, within newsrooms, A.I. will open up new possibilities. “I really believe that the biggest opportunity when it comes to A.I. for journalism, at least in the short term, is investigations and research,” Zach Seward, the editorial director of A.I. initiatives at the Times, told me. “A.I. is actually opening up a whole new category of reporting that we weren’t even able to contemplate taking on previously—I’m talking about investigations that involve tens of thousands of pages of unorganized documents, or hundreds of hours of video, or every federal court filing.” Because reporters would be in the driver’s seat, Seward went on, they could use it to further the “genuine reporting of new information” without compromising “the fundamental obligation of a news organization—to be a reliable source of truth.” (“Our principle is we never want to shift the burden of verification to the reader,” Seward said at a forum on A.I. and journalism this past fall.)

But there’s no getting around the money problem. Even if readers value human journalists and the results they produce, will they still value the news organizations—the behind-the-scenes editors, producers, artists, and businesspeople—on which A.I. depends? It’s quite possible that, as A.I. rises, individual voices will survive while organizations die. In that case, the news could be hollowed out. We could be left with A.I.-summarized wire reports, Substacks, and not much else.

News travels through social media, which is also being affected by A.I. It’s easy to see how text-centric platforms, such as X and Facebook, will be transformed by A.I.-generated posts; as generative video improves, the same will be true for video-based platforms, such as YouTube, TikTok, and Twitch. It may become genuinely difficult to tell the difference between real people and fake ones—which sounds bad. But here, too, the implications are uncertain. A.I.-based content could find an enthusiastic social-media audience.

To understand why, you have to stop and think about what A.I. makes possible. This is a technology that separates form from content. A large language model can soak up information in one form, grasp its meaning to a great extent, and then pour the same information into a different mold. In the past, only a human being could take ideas from an article, a book, or a lecture, and explain them to another human being, often through the analog process we call “conversation.” But this can now be automated. It’s as though information has been liquefied so that it can more easily flow. (Errors can creep in during this process, unfortunately.)

It’s tempting to say that the A.I. result is only re-presenting information that already exists. Still, the power of reformulation—of being able to tell an A.I., “Do it again, a little differently”—shouldn’t be underestimated. A single article or video could be re-created and shared in many formats and flavors, allowing readers (or their algorithms) to decide which ones suit them best. Today, if you want to fix something around the house, you can be pretty sure that someone, somewhere, has made a YouTube video about how to do it; the same principle might soon apply to the news. If you want to know how the new tariffs might affect you—as a Christian mother of three, say, with a sub-six-figure income living in Hackensack, New Jersey—A.I. may be able to offer you an appropriate article that you can share with friends in similar circumstances.
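To make the mechanics concrete: below is a minimal sketch, in Python, of how a developer might pour one piece of reporting into several different molds. It assumes OpenAI's Python client and an API key in the environment; the model name, the audience descriptions, and the prompts are illustrative assumptions, not a description of any product mentioned in this piece.

```python
# A minimal sketch of form-from-content reformulation, assuming the
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY set
# in the environment. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

article = """(the full text of a published news story would go here)"""

# One piece of reporting, poured into several different molds.
audiences = [
    "a reader who wants a three-sentence plain-language summary",
    "a parent in Hackensack, New Jersey, asking how tariffs affect household costs",
    "a listener who wants a sixty-second script for an audio briefing",
]

for audience in audiences:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model would do
        messages=[
            {
                "role": "system",
                "content": "Rewrite the article for the specified audience. "
                           "Preserve the facts; change only the form.",
            },
            {
                "role": "user",
                "content": f"Audience: {audience}\n\nArticle:\n{article}",
            },
        ],
    )
    print(f"--- {audience} ---")
    print(response.choices[0].message.content)
```

In this framing, “Do it again, a little differently” is nothing more than another pass through the loop with a tweaked prompt; errors can still creep in, which is why the facts-preserving instruction is a hope, not a guarantee.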

At the same time, however, the fluidity of A.I. could work against social platforms. Personalization might allow you to skip the process of searching, discovering, and sharing altogether; in the near future, if you want to listen to a podcast covering the news stories you care about most, an A.I. may be able to generate one. If you like a particular human-made podcast—“Radiolab,” say, or “Pod Save America”—an A.I. may be able to edit it for you, nipping and tucking until it fits into your twenty-four-minute commute.

Right now, the variable quality and uncertain accuracy of A.I. news protect sophisticated news organizations. “As the rest of the internet fills up with A.I.-generated slop, and it’s harder to tell the provenance of what you’re reading, then the value of being able to say, ‘This was reported and written by the reporters whose faces you see on the byline’ only goes up and up,” Seward said. As time passes and A.I. improves, however, different kinds of readers may find ways of embracing it. Those who enjoy social media may discover A.I. news content through it. (Some people are already doing this, on TikTok and elsewhere.) Those who don’t frequent social platforms may go directly to chatbots or other A.I. sources, or may settle on news products that are explicitly marketed as combining human journalists with A.I. Others may continue to prefer the old approach, in which discrete units of carefully vetted, thoroughly fact-checked journalism are produced by people and published individually.

Is it possible to imagine a future in which the script is flipped? As I wrote last week, many people who work in A.I. believe that the technology is improving far faster than is widely understood. If they’re right—if we cross the milestone of “artificial general intelligence,” or A.G.I., by 2030 or sooner—then we may come to associate A.I. “bylines” with balance, comprehensiveness, and a usefully nonhuman perspective. That might not mean the end of human reporters—but it would mean the advent of artificial ones.

One way to glimpse the possible future of news, right now, is to use A.I. tools for yourself. Earlier this year, on social media, I came across the Substack “Letters from an American,” by the historian Heather Cox Richardson, who publishes nearly every day on the ongoing Trump emergency. I find her pieces illuminating, but I often fall behind; I’ve discovered that ChatGPT, with the right encouragement, can give me a reasonably good summary of what she’s written about. Sometimes I stick with the summary, but often I read a post. Using A.I. to catch up can be great. Imagine asking the Times what happened in Ukraine while you were on vacation, or instructing The New Yorker to recap the first half of that long article you started last week.

For a while, I’ve been integrating A.I. into my news-reading process. I peruse the paper but keep my phone nearby, asking one of the A.I.s that I use (Claude, ChatGPT, Grok, Perplexity) questions as I go. “Tell me more about that prison in El Salvador,” I might say aloud. “What do firsthand accounts of life inside reveal?” Sometimes I’ve followed stories mainly through Perplexity, which is like a combination of ChatGPT and Google: you can search for information and then ask questions about it. “What’s going on with the Supreme Court?” I might ask. Then, beneath a bulleted list of developments, the A.I. will suggest follow-up questions. (“What are the implications of the Supreme Court’s decision on teacher-training grants?”) It’s possible to move seamlessly from a news update into a wide-ranging Q. & A. about whatever’s at stake. Articles are replaced by a conversation.

The news, for the most part, follows events forward in time. Each day—or every few hours—newly published stories track what’s happened. The problem with this approach is presentism. In reporting on the dismantling of the federal agency U.S.A.I.D., for instance, news organizations weren’t able to dedicate much space to discussing the agency’s history. But A.I. systems are biased toward the past—they are smart only because they’ve learned from what’s already been written—and they move easily among related ideas. Since I followed the U.S.A.I.D. story partly using A.I., it was easy for me to learn about the agency’s origins, and about the debates that have unfolded for decades about its purpose and value: Was it mainly a humanitarian organization, or an instrument of American soft power, or both? (A.I.s can be harder to politicize than you might think: even Grok, the system built by Elon Musk’s company xAI, partly with the intent of being non-woke, provided nuanced and evenhanded answers to my questions.) It was easy, therefore, to follow the story backward in time—even, in some sense, sideways, into subjects like global health and the mounting influence of China and India. I could’ve done this in what is now the usual fashion—Googling, tapping, scrolling. But working in a single text chat was more efficient, fun, and intellectually stimulating.


