What Balkan newsrooms can learn from the CBC example

CBC's AI guidelines require journalists to verify, compare, and correct AI outputs before use, a discipline that matters even more in regions where misinformation spreads easily.

Author: NarativAI 

As artificial intelligence becomes more common in global newsrooms, CBC News in Canada offers a clear and practical example of how a media organisation can introduce AI responsibly. Its recently published guidelines focus on transparency, ethics, and human oversight, issues that are equally relevant for newsrooms across the Balkans.

CBC’s approach begins with a straightforward rule: AI can support the work, but it cannot take over journalism. Reporters may use AI tools for tasks like transcription, organising notes, or background research, but the final editorial decisions remain firmly in human hands. Every fact must still be checked and every story approved by an editor.

CBC explains this distinction clearly in its guidelines: “We distinguish between generative AI, where content is produced primarily or entirely by an AI tool with minimal human intervention, and AI-assisted content, where the technology is used simply to enhance, accelerate, or support human journalism.”

Transparency is another key point. CBC commits to telling audiences when AI has played a part in the workflow. In media environments where trust is often low, being open about how content is produced can help maintain credibility rather than risk damaging it.

The guidelines also draw a clear line when it comes to sensitive reporting. For stories involving victims, minors, or vulnerable groups, CBC avoids AI-generated faces, voices, or synthetic media. It prefers traditional methods such as blurring or audio distortion, arguing that synthetic elements may confuse the audience or erode trust.

Verification remains central. AI can speed up processes, but it can also generate errors or misleading information. CBC instructs journalists to treat AI outputs like any unverified source: check them, compare them, and correct them before use. This kind of discipline is especially important in regions where misinformation spreads easily.

Perhaps the strongest lesson lies in the preparation itself. By creating clear rules early, CBC ensures that experiments with AI happen within a safe, ethical framework. Journalists understand the boundaries, editors know their responsibilities, and audiences know what to expect.

For Balkan newsrooms, CBC’s example shows that adopting AI does not mean lowering standards. With thoughtful guidelines, AI can be used in ways that support strong journalism, protect credibility, and maintain public trust: values that remain essential in every media market.

(This text was written and reviewed by the editor with support from artificial intelligence tools for language editing and stylistic refinement. More on how NarativAI uses AI — Link)