Author: NarativAI
Will artificial intelligence free journalists from routine work, or replace them altogether? This question framed the opening of the three-day training “Artificial Intelligence in Newsrooms: Trust, Transparency and Verification”, held in Štip from 26 to 28 February 2026 and organised by the Association of Journalists of Macedonia (AJM).
From the very first session, participants were confronted with two competing narratives that currently dominate global debates on artificial intelligence in journalism. On one side, an optimistic narrative: AI as a tool that can relieve journalists from repetitive tasks, support data analysis, and create space for deeper investigative reporting, potentially even helping to sustain local media. On the other, a dystopian narrative: AI replacing journalists, eroding trust in the media, and blurring the very notion of authorship.
Rather than choosing between fear and hype, the training positioned journalists exactly where they need to be today: between technological opportunity and ethical responsibility.
From fear to function: how AI can support journalists
The training was led by Aleksandar Manasiev (NarativAI) and designed as a highly practical, hands-on programme. Instead of abstract promises about automation, participants explored how generative AI systems actually work, what large language models can and cannot do, and why human editorial judgement remains essential.
A key message ran throughout all sessions: AI does not “understand” truth; it predicts patterns. This distinction, often overlooked in public debate, is critical for journalism and underlines why responsibility always remains with the journalist and the newsroom.
“The goal of this training was to open journalists’ perspectives,” said Manasiev.
“Artificial intelligence is not meant to replace journalists. It is meant to help them use technology more effectively, so they can create stronger, more accurate and more meaningful journalistic stories.”
Ethical dilemmas already facing newsrooms
Building on real newsroom scenarios, participants discussed concrete ethical challenges that media outlets are already encountering:
- Who is the author, the journalist or the algorithm?
- Should AI-assisted content always be clearly labelled?
- Can AI generate false or fabricated information, and how can journalists recognise it?
- How should a newsroom respond when synthetic images or videos circulate online?
These questions were not treated as theoretical exercises. Through case studies, verification workflows and group discussions, journalists explored practical responses that can be implemented immediately in editorial practice.
Trust, transparency and verification at the core
In line with the AJM’s focus, the training placed strong emphasis on trust, transparency and verification as the foundation of responsible AI use in journalism.
Participants worked with free and accessible tools for verifying images and videos, identifying manipulated or synthetic content, and designing internal verification plans. One principle recurred throughout the sessions: being accurate is more important than being first, especially in an environment saturated with AI-generated content.
The programme also addressed the need for transparency toward audiences, including when and how AI is used in journalistic production, and why clear explanations and labelling can help preserve public trust rather than undermine it.
AI as a newsroom assistant, not a decision-maker
Beyond verification, the training demonstrated how AI tools can support journalists in everyday work without replacing editorial responsibility. Sessions covered:
- research and analysis of large document sets,
- identifying patterns and anomalies in data,
- assisting with transcription and note-taking,
- supporting multiplatform storytelling and content adaptation.
Crucially, these tools were presented as assistive technologies, not automated decision-makers. Human oversight, editorial standards and ethical judgement were consistently defined as non-negotiable elements of newsroom work.
Investment in responsible, high-quality media
The President of the Association of Journalists of Macedonia (AJM), Mladen Chadikovski, stressed that journalists must keep pace with technological change while preserving ethical standards, noting that practical solutions and clear editorial procedures are essential to safeguarding professionalism in a rapidly evolving digital environment.
Katerina Jakimovska, Senior Project Officer at the Wilfried Martens Centre for European Studies, highlighted that supporting such initiatives is part of a broader European commitment to strengthening media resilience, emphasizing that trust is built through transparent processes and clear standards.
Daniel Braun, representing the Konrad-Adenauer-Stiftung (KAS), underlined that investing in journalists’ knowledge and practical skills is an investment in responsible, high-quality media and the protection of the public interest in the digital age.
The event was organised by the AJM, with the support of the Wilfried Martens Centre for European Studies and the Konrad-Adenauer-Stiftung, and was open to journalists, editors and media professionals interested in strengthening their AI-related skills and professional standards.
(This text was written and reviewed by the editor with support from artificial intelligence tools for language editing and stylistic refinement. More on how NarativAi uses AI — Link)