A panel discussion on journalism ethics in the age of AI covered a wide range of issues at a Carleton University-hosted industry roundtable in Toronto on Thursday.
Panel chair Aneurin Bosley was joined by former Radio-Canada AI specialist Florent Daudens, Torontoverse creator and AI advocate Chris Dinn and — via video link — Singapore-based journalist Gina Chua, executive editor of AI-innovating Semafor and formerly a senior editor with the Wall Street Journal Asia and Reuters news agency.
The panelists pushed back against unfounded fears and misconceptions about AI while discussing how thoughtful guidelines can govern the technology’s use and steer journalists away from unethical choices.
Daudens, who spearheaded AI learning initiatives at Radio-Canada, talked about the process of developing ethics guidelines at the French-language branch of the CBC before he left to work at Hugging Face, a machine learning and data science platform.
“It’s really important to understand what’s under the hood,” Daudens said of AI technology.
AI models are imperfect and can be biased, but rather than faulting the technology, all three panelists said it’s incumbent on journalists to understand how AI models are trained and fine-tuned.
“Training is all about volume,” Dinn said. “The quality of the data is way less important than the quantity.”
As a software developer, Dinn said he knows the ins and outs of AI technology, and like the other panelists, he argued that whether AI’s use in journalism raises ethical problems depends on transparency and the decisions journalists make.
An audience member asked about AI being trained on inaccurate data, which could reinforce disinformation, particularly about misrepresented and marginalized groups.
Chua responded by pointing to her work with the Trans Journalists Association and its conversations about building news organizations that can support underserved communities. The association uses AI to create accessible style guides and a misinformation classifier.
In her view, misinformation and histories of misrepresentation are for news organizations to remedy. The problem doesn’t lie with AI, Chua insisted.
“I think there are real opportunities for people to redefine news and news judgment,” Chua said.
Principles and judgment are the crux of the issue, the panelists said, and they don’t actually have much to do with AI: what matters are the guidelines and ethics of journalists and their news organizations.
“We should be really specific about what we’re asking (AI) to do and ensure it’s doing it really well,” Chua said.
But if the ultimate responsibility for ethical behaviour falls on journalists, then the burden of guidelines becomes heavier. Daudens said guidelines that amount to a long list of “don’ts” aren’t the most effective.
“The way we approach this in Radio-Canada, not to say that it was perfect, but we really rely on the editorial judgment of the teams,” he said.
Rather than treating guidelines as a strict law book, he said, newsrooms should use them to train journalists in ethical conduct.
Chua said a broader issue precedes this discussion of AI and ethical guidelines. The bigger concern is doing away with standards established by and for dominant interests that have historically ostracized and misrepresented many marginalized groups.
“I do think that the coverage (of AI) could be a lot better, but I do think the coverage of a lot of things could be better and lamenting it won’t do very much,” Chua said. “My belief is that people want to do the right thing.”
While AI use, whether simple translation or complex data analysis, does complicate current ideas about truth and transparency, Dinn said that’s exactly why guidelines must keep evolving: as the times change, so do the circumstances, and so do the terms.
“I worry people are more likely to strangle innovation with stringent ethics guidelines,” Dinn said, adding that this often happens when journalists don’t understand enough about AI.
He wants to keep the conversation about AI going and to keep pace with the public’s evolving expectations.
“We all have a pretty strong understanding of how cars work even though we’re not all mechanics,” Dinn said, extending Daudens’ automobile metaphor. “We’re going to have to have that same sort of relationship with this technology.”
The panelists appeared to reach a consensus: as long as humans remain at the core of accountability and responsibility for following guidelines and practicing ethical news judgment, the application of AI in newsrooms shouldn’t cause any catastrophes.
Daudens said newsrooms and journalists should have more conversations with the public to learn how broader society understands the news and where people think AI could fit into their relationship with journalism.
However, significant challenges loom. The roundtable’s discussion of AI and ethics focused on text-based systems such as ChatGPT and other large language models, but one audience member raised ethical concerns about deepfakes and about AI that could reveal anonymous identities by unblurring faces in investigative news reports, endangering vulnerable sources.
“We are obsessed with text and it’s a mistake,” Daudens said.
One possible way to tackle AI manipulation of video and audio would be deeper engagement with the technology industry, an idea that sets off alarms for many news organizations.
“News media organizations don’t think about themselves as tech companies and sometimes refuse to do it,” Daudens continued. “I think it’s a huge mistake.”
“There are so many technologists that want to work in media technology but nobody will take them seriously,” Dinn added. “People want to build this technology. You just need to find them.”