Roundtable keynote speaker Charlie Beckett is the founding director of Polis, the U.K. think tank for research and debate around international journalism and society in the Department of Media and Communications at the London School of Economics and Political Science (LSE).
Beckett is also the author of SuperMedia: Saving Journalism So It Can Save The World (2008) and WikiLeaks: News In The Networked Era (2012). Prior to joining LSE, he was a program editor at ITN’s Channel 4 News and, before that, spent 10 years as a senior producer at BBC News. He began his career with local newspapers in his native South London before moving into television at London Weekend Television.
He specializes in changes in journalism around the world and their relationship to society and politics.
Beckett is leading the Polis JournalismAI project, which is supported by the Google News Initiative, and was the Lead Commissioner for the LSE Truth, Trust & Technology Commission.
JournalismAI seeks to make AI’s potential more accessible and to address inequalities in news media related to AI.
JournalismAI published a global report in 2019 surveying 71 news organizations in 32 different countries about their use and understanding of AI.
The report established that AI was already a significant player in the journalism industry but had also heightened editorial and ethical responsibilities for newsrooms.
A follow-up report was published in 2023 surveying 105 news organizations in 46 different countries.
The report found the biggest change in newsrooms’ use of AI was in news distribution: 80 per cent of surveyed newsrooms used AI to share content with their audiences in 2023, a 30 per cent leap from 2019.
In April, Beckett participated in two panels on journalism and AI at the International Journalism Festival in Italy.
He explained that LSE has worked with thousands of journalists on training courses and case studies to better understand AI and how to seize the opportunities it presents for newsrooms.
“I think one of the big dangers (of AI) is that you get too worried about (the risks) and you don’t seize the opportunities,” Beckett said.
Acknowledging the harmful uses of AI, including deepfakes, Beckett reminded the audience that “all disinformation is human-decided,” not a product of AI itself. He said human manipulation is “at least as worrying” as the technology, and he views AI as a tool, not the source, of such content.
“AI does not create deepfakes, humans do,” Beckett said. “All disinformation is human-decided. They decide to make fakes or propaganda.”
At the same time, he noted AI’s potential to increase efficiency and give newsrooms new tools to counter propaganda and disinformation.
“AI can potentially be a part of every aspect of news gathering, news production, and of course, critically, in news distribution,” Beckett said.
Beckett also stressed the importance of newsrooms collaborating with one another and outside the industry to better understand the technology’s risks and opportunities. He explained that newsrooms, which employ fewer people than ever, are smaller than many tech companies and need their help to fully comprehend and responsibly harness AI.
“One of the most interesting developments from AI has been the degree to which newsrooms are talking to each other,” Beckett said. “They’re talking to universities, they’re talking to startups, they’re trying to talk to the tech companies as well.
“We need to learn from each other.”