AI in newsrooms: media managers and journalists analyzed a model policy for the ethical and transparent use of artificial intelligence at an IJC workshop

On Friday, December 5, 18 media managers and journalists took part in a workshop organized by the Independent Journalism Center (IJC) and dedicated to exploring a model policy on the ethical and transparent use of artificial intelligence (AI) in journalism. The model document can be adapted to the needs of each newsroom or integrated into an existing editorial policy.

The workshop was led by media expert Alexey Terekhov, a media manager, trainer, and journalist with over 20 years of experience in international consulting. In recent years, he has focused on integrating artificial intelligence into editorial workflows, from prompt engineering and process automation to training newsrooms in the sustainable use of AI tools.

“We proposed setting out the rules for working with artificial intelligence in a separate policy because we need to maintain transparency with the public. This policy continues newsrooms’ commitment to openness and complements existing editorial standards without contradicting them,” Alexey Terekhov emphasized, underlining the importance of a dedicated AI policy.

Workshop participants acknowledged that with the “explosion” of new technologies, ethical dilemmas are becoming increasingly complex, making the need for clear policies ever more evident.

“Participating in this workshop was essential for us as an editorial team that wants to integrate new technologies without compromising journalistic quality. Understanding the risks, limitations, and responsibilities associated with AI helps us protect both our audience and the integrity of our work. For us, this knowledge is a necessary foundation for professional, ethical, and transparent journalism. Following this training, we will adopt a clear policy on the use of AI in our newsroom, built on these principles and standards and guided, not least, by our responsibility to combat misinformation,” said journalist Polina Cupcea, founder of the media project “Oameni și Kilometri” (People and Kilometers).

Other newsrooms share the same concern for transparency and accountability.

“In the era of media technology, it is essential to be honest with our audience and to respect the Code of Ethics and the media legislation in force. We cannot combat fakes generated by artificial intelligence if we ourselves use the same tools without being transparent. That is why we must always inform the public when a media product has been created with the help of AI. In this way, we build loyalty among media consumers and show them that we do not want to mislead them, while also giving them the ability to distinguish AI creations from those made by humans,” noted Sergiu Niculita, program director at TV8.

“The workshop gave us a clear perspective on how traditional television can integrate AI responsibly. We emphasized the need to introduce clear rules on the use of AI in editorial policy in order to protect journalistic credibility and standards. Studio-L will implement these principles to combine digital innovation with professional ethics,” said Renata Lupacescu, editor-in-chief of the Studio-L television station in Causeni.

In addition to discussions on the ethical and transparent use of AI, one of the sessions was dedicated to prompt engineering techniques. Participants practiced examples of how AI tools can be used effectively in journalism, from generating ideas and documentation to optimizing editorial processes.

This activity is part of the Sweden-supported project “Media Literacy Advancement and Support to Moldovan Media,” implemented by Internews, which aims to contribute to the growth of a diverse, independent, and financially viable media landscape in Moldova and to empower Moldovan youth to navigate their complex information environment.
