How can AI’s biases and knowledge errors affect journalists?

Artificial intelligence (AI) is driving an impressive change in journalism, and journalists are adopting this new technology with great enthusiasm and speed. However, this advanced technology also raises important ethical and truth-related issues. In this article, I will focus on the problems of bias that journalists may face when using artificial intelligence, and on concrete ways of overcoming these challenges.

First of all, let’s answer the following question for “beginners”: does artificial intelligence have biases? Artificial intelligence is not biased per se; it learns what you give it. If you train AI with biased data, it learns those biases. For example, if the data used to evaluate job applications is biased against a certain gender, a model trained on that data can learn the same bias.
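To make this concrete, here is a minimal sketch in Python. The hiring data, numbers and function names are invented for illustration; the point is simply that a model trained on biased past decisions will reproduce that bias for equally qualified candidates.

```python
from collections import defaultdict

# Hypothetical training set: (gender, years_of_experience, hired) records in
# which equally experienced women were historically hired less often.
training_data = [
    ("male", 5, 1), ("male", 5, 1), ("male", 3, 1), ("male", 2, 0),
    ("female", 5, 0), ("female", 5, 1), ("female", 3, 0), ("female", 2, 0),
]

# "Training": record the historical hire rate for each gender.
hire_counts = defaultdict(lambda: [0, 0])  # gender -> [hired, total]
for gender, _, hired in training_data:
    hire_counts[gender][0] += hired
    hire_counts[gender][1] += 1

def predict_hire(gender: str) -> float:
    """Return the learned probability of hiring, based only on past decisions."""
    hired, total = hire_counts[gender]
    return hired / total

# Two equally qualified candidates get different scores, because the "model"
# has simply memorised the bias present in its training data.
print("male candidate score:  ", predict_hire("male"))    # 0.75
print("female candidate score:", predict_hire("female"))  # 0.25
```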

Whether AI is biased or not depends on what you teach it. If you want fair and balanced results, you need to be careful about the data you use. Since journalists rarely train AI systems themselves and mostly use ready-made tools, they have to be all the more alert to the biases those tools may carry.

The first bias issue that journalists may face when using AI is bias in datasets. AI is often trained on large amounts of data, and these datasets may reflect human biases. For example, a dataset of news articles may contain data that is biased against a particular demographic group or social segment. This can lead to biased results in the AI’s news selection or prioritisation and cause imbalances in news coverage.

One of the areas where artificial intelligence algorithms can affect journalism is the selection of news sources and the evaluation of news. Artificial intelligence relies on certain criteria when automatically analysing news, and it is essential that these criteria are set correctly. If they are set incorrectly or in a biased manner, the news may drift away from objectivity and reflect a particular view or perspective. Journalists need to be careful when using artificial intelligence algorithms in this process and be in a position to ensure that the algorithms act in accordance with the principles of impartiality and diversity.
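As a hypothetical illustration, the short Python sketch below shows an automated ranking step in which per-source weights stand in for the “criteria” mentioned above; the sources, scores and weights are all invented. A less relevant story from a heavily weighted source ends up outranking more relevant stories from a lightly weighted one, purely because of how the criterion was configured.

```python
# Hypothetical stories with editorial relevance scores; the numbers are invented.
stories = [
    {"headline": "Budget cuts hit rural schools", "source": "local_paper", "relevance": 0.9},
    {"headline": "Tech giant launches new phone", "source": "wire_agency", "relevance": 0.7},
    {"headline": "Community protests new landfill", "source": "local_paper", "relevance": 0.8},
]

# The "criterion" set by whoever configures the system: weighting wire agencies
# far above local outlets systematically pushes local stories down the list.
source_weight = {"wire_agency": 1.0, "local_paper": 0.4}

def score(story: dict) -> float:
    """Combine editorial relevance with the configured source weight."""
    return story["relevance"] * source_weight[story["source"]]

for story in sorted(stories, key=score, reverse=True):
    print(f"{score(story):.2f}  {story['headline']}")
# The less relevant wire story (0.70) outranks both local stories
# (0.36 and 0.32) purely because of how the weighting criterion was set.
```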

Another challenge that journalists may face when working with AI is its limitations and errors. AI algorithms are prone to errors when making predictions and judgements based on data. These errors can manifest as incorrect conclusions or misunderstandings and can affect the accuracy of news stories. Journalists should take these limitations into account when using AI and should have robust processes for verifying and, where necessary, correcting the AI’s conclusions.

Finally, a challenge regarding the impact of AI on journalists is balancing automation with human and ethical oversight. The use of AI in news production and evaluation may transform the role of journalists, shifting more of their work towards scrutinising and correcting machine output.

Consequently, while using AI effectively, journalists should maintain their ethical and professional values and should not hesitate to review and question AI’s judgements and conclusions. Only then can AI function as an effective “facilitator”.

About the author

Sarphan Uzunoğlu

Dr. Sarphan Uzunoğlu, Executive Director of NewsLabTurkey, advises several civil society organisations and teaches courses on new media at İzmir University of Economics. Also an Ashoka Fellow, Uzunoğlu completed his PhD at Galatasaray University and previously worked as an Assistant Professor in the Multimedia Journalism Department at the Lebanese American University and as an Associate Professor in the Media and Documentation Department at the Arctic University of Norway.