UNICEF warns: AI-generated content is undermining children’s trust in media

Unlike adults, children are less equipped to question the origin, intent or reliability of what they see online. When synthetic content looks professional, emotional or authoritative, it is often taken at face value, especially when it appears in environments that resemble news or educational platforms.

Author: NarativAi

Children today are growing up in an information space where images, videos and even news stories can be generated in seconds, and often without anyone clearly saying so. UNICEF is warning that this shift is already affecting how children understand information, credibility and truth.

In its Guidance on AI and Children 3.0, UNICEF notes that children are increasingly exposed to “AI-generated disinformation and harmful content that is highly realistic and persuasive,” a combination that creates particular risks for young audiences whose critical thinking skills are still developing.

AI is no longer just recommending content to children. It is producing it. From synthetic images and voices to automatically generated summaries and articles, AI systems are now shaping the information landscape itself. UNICEF warns that even content that is not illegal or overtly harmful can still cause damage when it is “often inaccurate” and floods digital spaces, gradually reducing the overall quality of information available to children.

This erosion does not happen all at once. It happens quietly, through repetition, volume and familiarity, until it becomes harder for children to tell what is verified journalism, what is automated content, and what is simply made up.

UNICEF also points out that children “use digital services and apps in unanticipated ways,” often without fully understanding how content is produced or why certain stories are shown to them.

Recommendation systems can amplify sensational or misleading material, while AI-generated content is optimized for engagement rather than accuracy. For children, that combination is especially powerful.

One of UNICEF’s clearest messages is about transparency. AI systems, the guidance states, should “clearly warn children and caregivers upfront that they are interacting with an AI, not a human.” This is not a technical detail but a matter of children’s rights: their right to understand the environment they are navigating.

The guidance also cautions against presenting AI systems in human terms. Describing AI tools as helpful “assistants”, “friends” or neutral narrators can mislead children and create false trust. UNICEF stresses the need to prevent anthropomorphizing AI systems and to help children form accurate mental models of how these technologies actually work.

While the document is primarily addressed to governments and businesses, its implications for journalism are hard to ignore. Media organizations are increasingly using AI tools to generate visuals, automate content, summarize stories or boost reach. Each of these choices shapes the information environment children grow up in, whether or not children are the intended audience. UNICEF calls for a “child-rights-by-design approach across the full AI lifecycle”.

In media terms, that means slowing down when necessary, labeling clearly, questioning automation, and recognizing that children are already part of the audience, even when content is not made for them. The warning from UNICEF is not about rejecting technology. It is about responsibility. AI, the guidance emphasizes, must “optimize opportunities, mitigate risks and eliminate harms for children”.

(This text was written and reviewed by the editor with support from artificial intelligence tools for language editing and stylistic refinement. More on how NarativAi uses AI — Link)