Misuse of artificial intelligence and a failure to verify information led to Ukrainian journalist Viola Burda being falsely named as a participant in a 2007 act of vandalism on Mount Hoverla, the journalist reported in a Facebook post.
Viola Burda said that a post by one Yevhenia Shcherbak about Alexander Dugin mentioned an act of vandalism on Mount Hoverla committed by Eurasian Youth Union activists in 2007. Shcherbak falsely implied that Viola Burda and journalist Natalka Kovalenko were involved in the incident.
“I have never been to Hoverla. The first and last time I travelled to Russia was 36 years ago. And it is strange that the SBU is not knocking on my door yet, suspecting me of ties to the ideologist of Russian fascism,” Burda wrote.
The journalist explained that she and Natalka Kovalenko worked at Radio Liberty in 2007 and released the news story “Highest Point. Tonight’s vandalism committed by Eurasian Youth Union on Mount Hoverla and in the Yaremche Nature Reserve” on 19 October. The two journalists are listed on the Radio Liberty website as the authors of the story.
“We are listed as the authors of the news report. AI says as much, but articulates it poorly. The [post’s] author interprets the AI-generated text – and Natalka and I become vandalism perpetrators and Dugin’s associates,” the journalist explained.

After an exchange in the comments to the post, Yevhenia Shcherbak deleted the paragraph mentioning Burda and Kovalenko, saying the information was “inadequately verified” and apologising. However, by the time the correction was made, the post already had over 2,000 likes and had been shared about 380 times, Burda noted.
The journalist added that search queries for “Hoverla vandalism” yield correct information about the event, including the involvement of Eurasian Youth Union activists, but errors arise when AI-generated replies are used uncritically.
“The author calls her post an analysis and, judging by her profile, actively uses AI. But if a [missing] full stop in the text interferes with your analysis and your only source of information is AI, then maybe you’d be better off doing something else,” Burda concluded.
In September 2025, IMI experts tracked 30 TikTok videos containing false information about Ukraine over the course of two days.