AI and Journalism: A Balancing Act
Rini Elizabeth Mukkath
Artificial Intelligence (AI) is rapidly reshaping industries, and journalism is no exception. While AI promises to revolutionise news gathering, production, and dissemination, it also poses significant challenges to journalistic integrity and credibility.
Professor Charlie Beckett is the Director of JournalismAI and a professor in Media and Communications at the London School of Economics (LSE). He is the founding director of Polis, an LSE think-tank focusing on international journalism and society that runs public events, conferences, and a summer school, and publishes research. We spoke to him about the impact of AI on journalism and how it will continue to grow, making news cycles faster, news more accessible, and coverage potentially more customised to individual readers. However, as AI becomes deeply embedded in journalistic practice, maintaining a balance between technological efficiency and editorial integrity will be paramount. For journalism to benefit from AI without compromising on quality or ethics, it will need a careful blend of human judgement, ethical oversight, and a willingness to adapt to a changing media landscape.
Excerpts from an interview:
Can you give us a bit of background on what it means when you say “AI and journalism together”?
The first thing to say is there’s no such thing as artificial intelligence in the sense that it replicates human intelligence. Human intelligence is different. What we call AI is just very good at pretending to be intelligent. AI has been around for decades through machine learning—software that can do tasks humans once needed their own intelligence to perform. For example, we’ve had translation tools for quite a long time, though initially they were terrible. You wouldn’t use them professionally; perhaps you’d use them on holiday to order a beer, but that’s about it. Recently, however, these tools have become much, much better, and a few years ago, with generative AI tools like ChatGPT, AI took a massive leap forward.
Now, almost everyone has tried putting a prompt into something like ChatGPT or using DALL-E to generate a picture. It feels almost magical, and it’s improving all the time. Importantly, it’s still a range of tools, not just one thing, which makes it fascinating. It’s referred to as a “general tool” because it can perform so many functions. In a way, it’s like electricity or digital technology—it will be everywhere, and in a few years, we won’t even call it “AI.” We’ll simply say, “I’m going to write something, make a film, do research, or plan my holiday on my computer.” The AI component will be implicit.
For journalism, which is all about processing information, AI is proving to be an incredibly helpful tool. It allows us to convert real-world data—audio, video, imagery—into content like articles or podcasts. We shouldn’t worry about the idea of “robot reporters”; that isn’t going to happen. AI isn’t replacing journalists or doctors, but it’s assisting them significantly. AI may reduce the need for journalists to handle some repetitive tasks, which many would consider routine and even boring. As a former journalist, I can say that we’re delighted to have this “mechanical intern” to handle those aspects, freeing us for more creative work.
At the LSE, we have an Innovation Challenge funded by Google, which provided $4 million to support news organisations—especially smaller ones in the Global South—to innovate with AI. We asked these organisations for ideas, and hundreds of proposals came in, with AI being used to improve revenue, reduce subscriber churn, help journalists come up with creative story ideas, and fight misinformation. One project, for instance, used AI to counter bias in newsrooms by spotting potential gender or racial biases and encouraging corrective action.
There are, of course, problems with AI, but journalists are actively experimenting with and using it. When AI is used poorly, it’s usually because of human error. AI isn’t a “truth machine,” it’s not always up-to-date, and it doesn’t actually “know” anything—it just helps us organise and process information. So, it requires human oversight and regular checks.
AI won’t transform journalism overnight, but the entire information environment is shifting. Over the past 20–30 years, the way we consume news has changed drastically, moving away from scheduled broadcasts or printed newspapers. Now, news flows around us constantly, allowing us to dip in and out. This accessibility is generally beneficial, but it also raises concerns about the truth and accuracy of information and whether it may be offensive or disturbing.
Are there any tools you're seeing that contribute to addressing disinformation?
Yes, although I think it's unrealistic to expect a tool that can eliminate disinformation or hate speech entirely. The reality of the Internet is that we're constantly chasing this content—it gets out there and spreads quickly. The root issue lies with humans; technology merely provides the platform to amplify and speed it up. Misinformation and hate speech originate from individuals who create misleading content, and from those who read, click, and spread it without enough thought. Sometimes, they even enjoy it.
It's fundamentally a human issue, meaning it requires regulatory and social solutions. We can’t shut down the streets to stop people from spreading hate, and similarly, we can’t shut down the Internet. AI is helpful in identifying problematic content, allowing us to take action. It can pinpoint the sources, track the spread, and in doing so, offer a pathway to counter or possibly penalise it. However, it's a complex task, as addressing hate speech or misinformation too actively can amplify it, drawing more attention to it unintentionally.
No single AI tool will filter it all out, but there are numerous tools used in various ways. Ultimately, we need human judgement, especially as opinions on what constitutes hate speech vary widely. Often, it’s a grey area, making it crucial to approach labelling with care. Journalists, for example, decide whether to publish quotes that might contain hateful language, based on context. These are human decisions, and AI shouldn't make them.
In your project, when you talk to journalists, how have they been responding to AI? What challenges do they face, and how are they adapting?
I spend a lot of time talking to journalists, and there's a spectrum of responses to AI. Some are excited, seeing it as a way to manage their workload and handle information flow. Others are apprehensive, fearing AI could replace their jobs, or they've read about AI making significant errors and embedding biases. For instance, predictive AI in the justice system is known to have biases, often overlooking mitigating factors.
Overall, research suggests that journalists are cautiously optimistic about adopting AI tools. Many appreciate practical applications, like transcription tools, that streamline tasks. News organisations are slowly integrating AI, though it’s not a magic solution. Efficiency depends on many factors, including economic, political, and infrastructure challenges, especially in international settings. Adoption varies across regions and sectors. For example, local news in the UK, which has experienced budget cuts, is more enthusiastic, viewing AI as a potential lifeline. In contrast, certain well-resourced publications may not feel the need to adopt it urgently.
A broad range of news organisations—big, small, niche, general, online, app-based, subscription-driven—are exploring AI. The desire to use it is there, but cost and time investments are significant factors. AI requires training and regular updates, so it’s not a quick fix. Interestingly, some journalists are reluctant to engage, perhaps out of pride in their current methods, though we’ve learned that failing to adapt often means being left behind.
Given the prevalence of misinformation and deepfakes on social media, do you have ethical concerns about AI's use?
Yes, although I wouldn’t attribute all societal problems to misinformation alone. Many issues stem from real-world factors, independent of AI or social media. There's evidence suggesting misinformation has less impact than we often assume, but the overwhelming amount of information we encounter daily is a significant challenge. This overload can make people feel disconnected or even cause them to switch off their critical thinking.
Our adaptation to this new information ecosystem has been slow; humans weren’t naturally prepared for this constant influx of content. Social media, while impactful, isn’t solely to blame for issues like teenage unhappiness—there are other societal factors involved.
We should recognise that we have control over the information ecosystem, much like we aim to improve air and water quality. Journalists play a vital role in this, though the approach may need to change. Instead of just reporting shocking news, they might consider what information people genuinely need. A healthier “information diet” tailored to personal needs—short texts for some, longer audio pieces for others—could foster a more balanced, beneficial media landscape. AI has the potential to assist in this, personalising the experience to match individual preferences and behaviours.