Artificial intelligence boomed last year, and the world of journalism has been reverberating ever since.

From its use in editing and translating to helping investigative journalists reveal millions stored in offshore accounts, AI has been a nifty tool for newsrooms. But generative AI is also behind deepfakes and the spread of disinformation. Just last week, U.S. political consultant Steven Kramer was fined and charged for sending fake calls mimicking President Joe Biden’s voice to thousands of voters. And last year, Sports Illustrated found itself at the centre of controversy after a third-party content creator the publication partnered with posted stories authored by non-existent journalists with fake names and bios generated by AI.

Like it or not, AI is very much a part of the current media landscape, blurring distinctions between traditional journalism and machine-assisted content creation and distribution — while also complicating current ethical standards. And according to University of Ottawa Internet law specialist Michael Geist, one of the country’s most prominent critics of Canada’s Online News Act, the rise of AI has made the controversial Bill C-18 propping up the news industry “outdated” on arrival.

The Council of Europe released a report in November 2023 exploring the need for guidelines on using AI in newsrooms. [Image adapted from Council of Europe report cover]

Some news organizations have adopted guidelines and policies on using AI in reporting, but such policies are still uncommon and there is no consistent industry standard. There’s also no avoiding the interplay between AI and journalism, so the May 30 industry roundtable on Journalism and AI is tackling it head-on.

‘These systems may actually work better and faster than human journalists at sifting through information to find patterns. But can you really ask a reader to trust you — or a policy-maker to make policy based on your findings — if you can’t explain them?’

— Gina Chua, executive editor, Semafor

Tune in at 1 p.m. for a panel discussion with leading thinkers on AI and journalism ethics. The panel, chaired by Carleton University journalism professor Aneurin Bosley — who teaches ethics to the next generation of Canadian reporters — includes Chris Dinn, founder of the digital news site Torontoverse, Florent Daudens, who led Radio-Canada into the era of AI and teaches at Université de Montréal, and Gina Chua, executive editor of the digital media outlet Semafor and previously executive editor of the Reuters news agency.

Before Dinn was a publisher, he was a software engineer. He earned an Emmy Award in technology and engineering for his work at mDialog integrating live video streams with advertisements. In the six years after Google acquired mDialog, Dinn focused on optimizing publisher advertising, ultimately launching torontoverse.com, an innovative news startup that uses mapping technologies for news delivery.

“I felt really passionate about local journalism,” Dinn told Nikita Roy on her podcast Newsroom Robots. “I felt like what it really needed was a better way to pay for it.”

Dinn is developing AI technologies as solutions to pressing issues facing local journalism. He’ll be sharing insights into the nitty-gritty details of AI, how it already features in the relationship between the public and the news, and what the future may hold.

Panelists Florent Daudens, Gina Chua and Chris Dinn and panel chair Aneurin Bosley will discuss the ethics of AI in journalism and the development of newsroom guidelines.

As the outgoing director of newsgathering and deployment at Radio-Canada — the French-language arm of the CBC — Daudens led the way for AI learning and integration in the newsroom, bringing digital transformation to Canada’s national public broadcaster after a similar leadership role at Le Devoir.

Daudens also joined Roy on Newsroom Robots, discussing the demands and mechanics of building an AI-literate workforce. 

“We want people to be able to embrace AI and not suffer from it,” Daudens said. “If we don’t know what we’re talking about, if we don’t know the possibilities, we’ll just miss the train and then realize we have to use it but we don’t know how.”

‘We want people to be able to embrace AI and not suffer from it. If we don’t know what we’re talking about, if we don’t know the possibilities, we’ll just miss the train and then realize we have to use it but we don’t know how.’

— Florent Daudens, AI specialist formerly with Radio-Canada, now teaching at Université de Montréal

Panelist Gina Chua is joining the roundtable virtually. She’s a Singaporean journalist and executive editor of global news startup Semafor, with a decorated career spanning decades in several newsrooms, including the Reuters news agency, the South China Morning Post and the Wall Street Journal Asia.

One of the most senior openly transgender journalists in the U.S., Chua will be sharing her insights on newsroom operations, logistics, tools and responsibilities. She recently wrote a piece on using AI to count hate crimes, exploring the potential of machine learning and AI’s powerful data-crunching capabilities to shed light on a dark corner of society.

“These systems may actually work better and faster than human journalists at sifting through information to find patterns,” Chua wrote. “But can you really ask a reader to trust you — or a policy-maker to make policy based on your findings — if you can’t explain them?”

Panel moderator Aneurin Bosley spent 14 years as a digital editor at the Toronto Star, working extensively with interactive technologies in storytelling. Before his work at the Star, Bosley was the editor of the Internet Business Journal and a tech columnist on CBC Radio in Ottawa. 

With several years of experience teaching data journalism and media ethics at Carleton’s J-School, Bosley will lead the discussion on the impact of AI on journalism ethics and the rules newsrooms are establishing around its use.
