By Aleksandar Manasiev, NarativAI
A few years ago, I worked on a documentary where we filmed for four days straight. By the end, I had more than ten hours of raw video material that I needed to review. AI wasn’t “a thing” back then, but I desperately needed help. Nobody in the newsroom had time to assist, and I was already overwhelmed with other assignments.
So I did what any exhausted journalist would do: I Googled for a tool that could save me. I found something experimental, powered by early AI. I uploaded all my footage, and within minutes I had a complete transcription. It wasn’t perfect, but it allowed me to quickly find the parts I needed and review only those segments. That simple tool saved me days of work.
This experience taught me something important: AI can be a powerful assistant, if we use it responsibly. Today, AI tools are far more advanced and far more common. And with that progress comes a new responsibility for us journalists: how to use them without breaking professional ethics or damaging public trust.
Here are the key principles that every journalist should follow.
AI can assist, not replace reporting
AI can organise interviews, suggest angles, clean audio, or transcribe hours of footage, just like it helped me then. But it cannot replace real reporting. It cannot understand context, power dynamics, or what’s at stake.
Always verify the facts yourself
AI tools often make mistakes. ChatGPT itself displays a warning below every conversation: “ChatGPT can make mistakes. Check important info”. In my work I’ve found that AI can invent quotes, change dates, or mix up facts. If AI gives you information, treat it as a hint, not as truth. Every claim still requires human verification. Good journalism never lets AI be the final source.

Be transparent with your audience
If AI played a role in creating your text, audio, or visuals, just say it. I’m doing the same here. AI helped me refine parts of this article, because while my English is good, it’s not this polished. Transparency matters. A simple line is enough:
“This article was partially created with AI assistance and fully edited by the author.”
Protect your sources and sensitive data
Many AI tools store what you upload. That means you should never input: confidential documents, interviews with vulnerable people, private information, unpublished investigations…
If you wouldn’t email it to a stranger, don’t upload it to an AI tool.
Never mislead with AI images or audio
AI visuals can look convincing. So can AI-generated voices. Using them without warning is unethical. Always label synthetic media clearly:
“Illustration generated with artificial intelligence.”
This prevents confusion and protects your audience.
Be careful with sensitive stories
AI can make many mistakes in stories about elections, violence, courts, minorities, or corruption, and these mistakes can cause real harm. So use AI to support your work, not to shape the narrative or write the core of sensitive stories.
Understand the tools
Ethical use starts with knowing how the tool works. Here is what you need to know: What data was it trained on? Does it store your inputs? Can it generate synthetic images? Does it embed watermarks like SynthID? What are its limits? You don’t need to be an engineer, just aware and responsible.
Human judgment is essential
AI cannot understand ethics, context, or consequences. It cannot assess risk.
It cannot know when a story might harm someone. Only a journalist can make those decisions. That’s why I always treat AI as a tool, not as an editor. My first encounter with AI saved me days of work and helped me finish a documentary on time.
Today, AI can do much more. But as it grows stronger, our responsibility grows with it. At NarativAI, our message is simple:
Use AI, but don’t lose the values that make journalism trustworthy.
(This text was written and reviewed by the editor with support from artificial intelligence tools for language editing and stylistic refinement. More on how NarativAI uses AI: Link)