Uses of Generative AI in the Newsroom: Mapping Journalists’ Perceptions of Perils and Possibilities

By: Hannes Cools & Nicholas Diakopoulos

Generative Artificial Intelligence (AI) tools like ChatGPT, DALL-E, and Stable Diffusion have inspired both utopian and dystopian portrayals of how professions will look in the future (Acemoglu and Restrepo Citation2022; Cools et al. Citation2022; Davenport and Kirby Citation2015; Diakopoulos et al. Citation2024). These tools generate text, audio, video, or other media and have the potential to disrupt specific work processes in communication and media industries such as marketing, gaming, entertainment, and journalism (Brynjolfsson, Li, and Raymond Citation2023).

Although the term “AI” and related technologies have been around for decadesFootnote1, the novel capabilities of generative AI tools pose new challenges to how and where information is produced and moves within a news ecosystem (Diakopoulos Citation2019). In 2023, the development, implementation, and utilization of AI in newsrooms was democratized by generative AI tools such as DALL-E and Midjourney and Large Language Models (LLMs) like ChatGPT and Bard. In a survey conducted by WAN-IFRA in May 2023, 49% of respondents reported that they had already used generative AI tools (WAN-IFRA Citation2023). Salesforce, which owns the popular workplace tool Slack, surveyed 14,000 workers in 14 countries and found that 28% of employees use generative AI in their workplaces, with more than half doing so without official endorsement from their employers (Salesforce Citation2023).

Generative AI can be defined as a technology that can create new content—including text, image, audio, video, or other media—based on its training data and in response to written prompts (Lorenz, Perset, and Berryhill Citation2023). In journalism, generative AI can be used to perform a variety of tasks that are situated in every stage of the reporting process, from the gathering to the production, verification, and distribution of news (Cools Citation2022). Some of these tasks include but are not limited to writing text or summarizing and translating lengthy reports (Beckett and Yaseen Citation2023).

As emerging technologies have always influenced journalism (Pavlik Citation2000), this study seeks to advance a deeper understanding of the use of generative AI in newsrooms in two ways. First, given the relative newness of generative AI, there has not yet been substantial qualitative work mapping its current uses. This study addresses this gap by identifying specific uses of generative AI tools through in-depth interviews with early adopters in traditional newsrooms in the Netherlands and Denmark who had previously worked with other computational tools (e.g., audience metric systems, Python packages for data analysis). Although the interviewees self-identify as early adopters, the uses that were prevalent in the results offer an initial assessment of how these generative AI tools are integrated into journalists’ daily workflows. Second, the possibilities and perils of generative AI in journalism are evaluated by our respondents from a journalism ethics perspective. Earlier studies, like the previously referenced 2023 survey from WAN-IFRA, have explored possibilities and perils but have not substantively linked current uses and possibilities to concerns from a journalism ethics perspective. In summary, this study enhances our understanding of generative AI’s impact on journalism by exploring its specific applications in newsrooms and evaluating the broader implications of its adoption, highlighting both its perils and its possibilities.

Literature Review

Computational Journalism and Generative AI

This study aims to examine how generative AI technology relates to a range of journalistic tasks that have been previously explored in computational journalism. The term “computational journalism” was coined in 2006 by Essa and Diakopoulos at Georgia Tech (Diakopoulos Citation2024) to capture the array of ways that AI, algorithms, and the automation of information intersect with journalistic goals (Cools Citation2022; Diakopoulos Citation2019). Over the years, the emergence of this field has been regarded as an indicator that journalism has been subject to a continuous process of (technological) change as algorithms bring new methodologies and perspectives to the news reporting process (Diakopoulos Citation2019; Napoli Citation2014). Importantly, the term computational journalism is not restricted to AI or generative AI alone. In this study, AI and generative AI are seen as specific enabling technologies within the broader computational journalism umbrella.

Previous work has elaborated on computational journalism across the journalistic value chain, from the (1) gathering, to the (2) production, to the (3) verification, and the (4) distribution of news. Scholars like Cools (Citation2022) and Thurman (Citation2018, Forthcoming 2019) have noted that the boundaries between the phases of the journalistic value chain are somewhat arbitrary but that they offer a valuable lens for analyzing the news reporting process vis-à-vis computing in journalism. The four phases are therefore described next to illustrate past and current applications of computation in relation to journalism.

(1) News gathering refers to algorithms that help gather information or detect trends, predicting what might be newsworthy based on a dataset (Beckett and Yaseen Citation2023). For example, The Washington Post’s “Lead Locator” analyzes how voter turnout differs demographically and geographically in elections. Based on outliers in the data, the algorithm generates a tip sheet that journalists can interactively explore (Diakopoulos, Dong, and Bronner Citation2020). (2) News production refers to algorithms that support journalists in producing news. For example, the British public broadcaster, BBC, uses a tool called “Juice” that translates articles and automatically summarizes them using Natural Language Processing (NLP) – a subcategory of AI in which computers gain the ability to process, reproduce, and summarize text and spoken words (Dörr Citation2016; Molumby Citation2020). (3) News verification refers to algorithms that help journalists fact-check (Cools Citation2022). In Belgium, “Factrank” – inspired by the earlier “ClaimBuster” – transcribes the statements of members of parliament and labels them according to whether they should be fact-checked (Berendt et al. Citation2021). A statement may be flagged, for example, because it contains a number. Full Fact, a fact-checking platform from the United Kingdom, uses similar tools to do live fact-checking (Full Fact Citation2019). (4) News distribution and moderation pertain to algorithms that computationally help manage and control the distribution of news content across various platforms. Metrics tools like “Chartbeat” and “smartocto” are used in newsrooms to better map what audiences are reading, what they are responding to, and what kind of news they are sharing. They provide analytics on loyalty, engagement, impact, and more to connect the dots between online publishers and their audiences (Lamot Citation2021). In addition, these tools can be used for optimization by conducting A/B tests that go beyond standard page views and reach (Hagar and Diakopoulos Citation2019).

Although present in some form as “deepfakes” since around 2017, generative AI was introduced to a larger audience as a potential general-purpose productivity tool when OpenAI’s ChatGPT launched in 2022. The widespread adoption of ChatGPT, which reached 1 million users within five days of its release in November 2022 and 100 million within the first two months, underscores the significance of the technology as a driving force behind its integration into diverse sectors, including journalism (Diakopoulos et al. Citation2024). In contrast to earlier forms of AI, generative AI can generate written text in various styles, engage in interactive, wide-ranging natural language conversations (e.g., as a chatbot), analyze content to generate data such as scores or classifications, and produce content such as headlines, summaries, illustrations, or even entire articles. A key differentiator of the technology is that it can be prompted to perform different tasks using natural language (e.g., “Write three headlines for the following article: <article text>”), allowing it to be used by a large diversity of people with basic written literacy.

In research, early explorations of LLMs to support journalistic tasks have included applications in creativity support (Diakopoulos et al. Citation2024) and angle generation in news discovery (Nishal et al. Citation2024), though in-depth research examining their use across the four phases of the journalistic value chain is scarce. In industry and practice, a growing number of short case studies of the technology have been mentioned in blogsFootnote2 or at conference venues, including SEO text writing, article summarization, weather alert production, quiz generation, data journalism, style and grammar checking, and transcription and translation. However, little research delves more deeply into this range of use cases, a gap the current study seeks to help fill through interviews with early adopters.

Journalism Ethics and the Development of Responsible Practices with Generative AI

In recent decades, technology has influenced journalism as a process and as a product (Pavlik Citation2021). Digital transformations have facilitated the emergence of diverse forms of journalism that challenge the epistemological and professional foundations of mainstream media journalism. Similarly, the rise of social media platforms, as well as the advent of formats such as (citizen) blogs and podcasts, has dramatically changed news outlets’ revenue models, forcing newsrooms to innovate (Cools et al. Citation2022). Collectively, these digital transformations are not only diversifying the sources and types of news available but are also prompting a re-evaluation of journalistic norms and practices. At the same time, news consumer behavior has drastically changed as people increasingly seek tailored information experiences that align with their individual preferences and interests (Beckett and Yaseen Citation2023). Scholars have shown that journalism has always been shaped by technologies and digitization, but recent developments in (generative) AI have demonstrated that algorithms are able to sift and analyze considerable amounts of data, unleashing additional value in news production (Pavlik Citation2000; Schapals and Porlezza Citation2020; Wu, Tandoc, and Salmon Citation2019).

Although (generative) AI technologies are likely more advanced than earlier waves of digitization in newsrooms, the way technology influences or changes journalism at its core has remained more or less the same (Cools Citation2022). As Pavlik (Citation2000) showed in his pioneering study on why journalism has long been shaped by technology, this influence boils down to (1) the way journalists work; (2) the specific nature of the content; and (3) the structure of the news organization and the relationships between news outlets, their workers, and their audiences (229). In 2023, data-driven technologies and tools are already being utilized by newsrooms across the entire news reporting process, from gathering and production to the distribution of news (Cools Citation2022).

However, as these technologies have been introduced and continue to evolve, the potential for tension with journalism ethics arises, particularly in how they shape the workflow, content, and relationships within news organizations, in at least three ways. First, the algorithms embedded in AI technologies may inadvertently introduce bias throughout the journalistic value chain. Second, generative AI tools in 2023 are already able to produce texts, images, and videos based on short prompts, which could challenge the accuracy, authenticity, and credibility of journalistic content (Jones, Luger, and Jones Citation2023). Third, the opaque nature of some AI algorithms can hinder transparency in an editorial process that emphasizes the importance of diverse perspectives, editorial independence, and human decision-making (Beckett and Yaseen Citation2023).

This study adopts journalism ethics as a conceptual framework for understanding journalistic practice, acknowledging the necessity to critically examine the ethical implications and considerations associated with the increasing integration of generative AI in journalistic practice. Concretely, journalism ethics refers to the “application of ethical norms that guide the social practice of journalism, in its many technological forms” (Ward Citation2019, 307). As these technological forms continue to shape and redefine the foundational aspects of journalism, an ethical lens becomes imperative for evaluating the impact on journalistic principles, integrity, and the broader societal implications of information dissemination (Paik Citation2023). In particular, issues of diversity, inclusion, and representation are critical, and should be embedded as key features in the journalism ethics framework. Journalism ethics must address how generative AI can either mitigate or exacerbate biases, ensuring that diverse voices are fairly represented in media coverage. Additionally, the potential for conflicts of interest and interdependence between AI developers and media organizations needs to be scrutinized, ensuring that technological advancements do not compromise journalistic independence or integrity. Moreover, the context of modern journalism necessitates a focus on cross-cultural ethics. As news stories and media consumption cross borders, ethical standards must be adaptable and sensitive to different cultural norms and values (Hanitzsch Citation2021).

McBride and Rosenstiel (Citation2013) have stated that journalism ethics goes all the way back to the introduction of the telegraph in newsrooms in the 1800s, as journalists needed to professionalize under the pressure of technological innovation. Due to the external influences of technological advancements and financial motivations, news organizations shifted from being an integral part of a “political apparatus” to establishing themselves as autonomous entities (Paik Citation2023). This transformation prompted news professionals to pursue a more formalized approach, advocating for clear editorial guidelines (Petre Citation2021).

By anchoring the analysis within the framework of journalism ethics, this study aims to provide a nuanced understanding of the ethical dimensions inherent in the intersection of generative AI and journalistic practices in the contemporary media landscape. The adoption of this framework is valuable in at least two ways:

First, it serves as a compass for preserving the integrity of journalistic practices, ensuring that the information disseminated is accurate, fair, and unbiased (Anderson Citation2013). In the context of generative AI, which has the capacity to (autonomously) produce content, the framework becomes instrumental in scrutinizing the accuracy and reliability of the generated information (Paik Citation2023). Deuze states that journalism ethics provides a normative guide for safeguarding the fundamental principles of journalistic integrity, which we use here to help evaluate the ethical implications of using generative AI tools with respect to those norms.

Second, journalism ethics could provide a foundation for questioning and evaluating issues of fairness and bias in news reporting, which becomes particularly pertinent when leveraging generative AI (Jones, Luger, and Jones Citation2023). The framework could help newsrooms in mapping and mitigating biases that may be inadvertently introduced by AI algorithms, and it could contribute to a model of distributed responsibility in newsrooms (Paik Citation2023). This model implies that responsibility takes place within a setting where interconnected agents, whether human, artificial, or a combination of both, engage in morally charged (positive or negative) actions that emerge from local interactions initially devoid of moral implications (Floridi Citation2023).

Since 2022, with the emergence of novel LLMs, an era of generative AI has begun, potentially automating, augmenting, and transforming specific phases across the journalistic value chain. This study seeks to advance our understanding of how various LLMs are used in newsrooms from a journalism ethics perspective, with a specific emphasis on bias. In this light, it is important to acknowledge that all journalists are biased in one way or another, as are the technologies they use. In other words, human bias and AI bias should be considered inevitable and inherent elements of the journalism ethics framework.

This study seeks to map how journalists are experimenting with generative AI in daily journalistic workflows. These uses of generative AI fueled by LLMs might, for example, contain biases and prejudices, which could harm what is at the core of journalism: finding truth and establishing trusted sources. Responsible development and implementation of generative AI are therefore required (Moran and Shaikh Citation2022), as generative AI might distance humans from immediate responsibility and shift them toward a more ad-hoc causative role in which they could still be held accountable (Paik Citation2023). In this light, this study asks the following research questions:

RQ1: For which specific work processes are generative AI tools used by journalists?

RQ2: What do journalists see as the most prominent perils and possibilities when using generative AI tools?

RQ3: What ethical conditions do journalists deem necessary for the responsible deployment of generative AI in their work?

 

Method

We address our research questions through semi-structured interviews with journalists from The Netherlands and Denmark. In the following subsections we describe the sampling approach and participant pool, and then the interview guide and analysis approach.

Selection of Countries

The Netherlands and Denmark were chosen for two main reasons. First, both countries have highly developed media industries characterized by a diverse range of media outlets. They have similar media environments that include traditional newspapers, digital platforms, and public broadcasters. In these environments, the presence of established newspapers ensures the continuation of in-depth, investigative journalism, while digital-only platforms offer immediacy and broader reach to various audience segments. Second, both countries are at the forefront of integrating technological advancements into journalism. This includes the adoption of digital media, which allows for more interactive and multimedia-rich news experiences, and data journalism, which uses data analysis to uncover and tell stories in new ways. Furthermore, innovative news distribution methods, such as the use of mobile apps, social media, and personalized news feeds, ensure that news outlets in the Netherlands and Denmark remain relevant in the digital age, engaging audiences effectively and adapting to changing consumption patterns. By selecting news outlets in these countries, this study can leverage their advanced media systems to gain insights into journalistic practices and into the extent to which journalists are (or are not) using generative AI in the newsroom.

Selection of News Outlets and Interviewees

To capture an understanding of the uses of generative AI at the respective media organizations, a convenience sampling strategy was employed (Etikan et al. Citation2016). The researchers had access to these organizations through earlier research projects in the Netherlands and Denmark. All are considered traditional news outlets, namely one public broadcaster (NOS) and two legacy news organizations (NRC and Berlingske Media). Although both countries have high levels of internet penetration and their citizens have “healthy” news consumption routines, distrust in media content has been on the rise over the last decade.

Additionally, through our access, the goal was to include a range of perspectives and insights from journalists at these traditional news outlets. The sampling process involved identifying one key individual at each news outlet to which we had access. These key contacts had direct involvement or experience with generative AI tools in their daily work. They were initially approached via email and asked to suggest other potential respondents.

The study deliberately selected journalists who had already experimented with generative AI tools. The final sample consists of 15 participants, 8 men and 7 women, from traditional news media: Berlingske Media (8, Respondents 1–8), NOS (4, Respondents 9–12), and NRC (3, Respondents 13–15). The average age of our respondents is 29.7, with the youngest being 24 and the oldest 56. Almost all members of the sample hold a master’s degree and have a background in journalism, computer science, or both; one holds a master’s degree in economics and another in history.

Interview Guide and Analysis

In this study, semi-structured interviews were conducted as a means of facilitating purposeful and focused conversations. These interviews offered the necessary flexibility to delve into the subjects’ insights, interests, and areas of expertise. The interviews were conducted in May and June 2023, both in person and via Zoom, with an average duration of 45 minutes. To ensure confidentiality, respondent names were anonymized and assigned identifiers linked to the respective news outlets (Respondents 1–8, Berlingske Media; Respondents 9–12, NOS; Respondents 13–15, NRC). The interview guide consisted of three sections: the first gathered general demographic information about the respondents, the second explored the uses of generative AI tools and the criteria for evaluating the accuracy of their outputs, and the third addressed ethical concerns related to the use of generative AI and the development of guidelines.

Each interview was recorded, transcribed, and subjected to qualitative thematic content analysis. This approach facilitates the identification of patterns and themes within the data. To address the research questions, the researchers created a coding scheme based on emergent patterns discovered while reading the interview transcripts (Strauss and Corbin Citation1990). Following the initial coding process in NVivo, a more comprehensive coding scheme emerged, allowing for the distinction of specific themes. Through the axial coding phase, the preliminary themes were further refined, building upon the initial open coding. An initial codebook was developed after the first round of coding of the transcripts. The results are structured in accordance with the prevalent patterns from the interviews in three overarching sections. First, the codes of “perils of generative AI” and “opportunities of generative AI” were treated as distinct. Second, the code “uses of generative AI” was divided into the four phases of news reporting, namely “gathering”, “production”, “verification”, and “distribution/moderation”. Third, the “responsible AI” conditions code was linked to “ethical considerations” and “(mis)conceptions of AI”, followed by the codes of “AI literacy”, “regulating AI”, “transparency”, and “human oversight requirement”.

Results

To contextualize the results, we must consider that the interviewees self-identify as early adopters in the sense that they were already actively experimenting with generative AI. The overall observation from the interviews is that the news outlets in the sample are still in an experimentation phase. During the interviews, respondents were asked to describe specific uses of generative AI in relation to the news reporting process. Overall, everyone in the sample had used ChatGPT, and two respondents mentioned that they had interacted with Bing’s generative AI tool, Bing Chat.

In this study and in the questionnaire, the news reporting process is divided into four inherently interconnected phases, namely (1) news gathering (collecting information), (2) news production (structuring the information), (3) news verification (checking the information), and (4) news distribution and moderation (disseminating the information). Overall, respondents see possibilities for AI tools throughout the entire reporting process, with specific potential applications in the gathering, production, and distribution phases. The news verification phase, however, is less prevalent, as interviewees state that journalists do not sufficiently trust AI to verify and fact-check information.

Uses of Generative AI

 

Table 1 summarizes the potential uses of generative AI described by participants, structured into news gathering, news production, news verification, and news distribution. The table provides an overall structure for describing the first part of the results.

Table 1. Uses of generative AI according to interviewees.

 

For the first phase, news gathering, interviewees mentioned potential uses in light of “template journalism” – traditional Natural Language Generation (NLG) – and shorter forms of trend and topic analysis. “Automated Content Aggregation” is mentioned, as generative AI tools could aggregate news articles, press releases, and other relevant content from online sources. Respondent 5 mentions that this aggregation not only streamlines the collection process but also provides journalists with more information in a shorter time span. “Topic and Trend Analysis” is also a potential use of generative AI tools: respondent 1 underscores that they are afraid of overlooking specific aspects of a story, and by gauging which topics are trending, journalists can prioritize their resources and focus on stories and analysis for their audiences. At the same time, respondent 10 contradicted respondent 1 by stating that generative AI tools only generate “captain-obvious trends and suggestions” which do not help the journalist derive new trends from data.

Another use of generative AI tools is “Anticipating News Events and Scenarios”: these tools could generate potential scenarios of news events, helping journalists imagine how those events might evolve in the future – for instance, developing several scenarios for how inflation could evolve in the next year. Respondents mention that such anticipation could also point to newsworthy events or information prior to publication that might be relevant for newsgathering. Respondent 3 works with ChatGPT to brainstorm specific outlines based on historical data when they need to write an in-depth analysis. Respondent 12 states that tasks like automated content aggregation, topic and trend analysis, and social media monitoring in the news gathering phase usually take up a lot of time, but they also mention that they “don’t see these tasks as what lies at the core of what journalism is about”.

For the second phase, news production, many uses are linked to producing news more efficiently in the form of the “Automation of Transcripts”. Respondent 2 mentions that they use another large language model to generate transcripts: “The quality of the transcripts is incredibly good, especially with interviews in English. I don’t have to listen back and type along.” They mention that transcription is a process that can easily be automated, as they do not consider it an essential part of journalism. “Auto Summarization” is also mentioned in the interviews: respondent 9 states that they use it to rewrite paragraphs that “they themselves find complicated”, adding that it does not have a direct impact on their job as a technology reporter. Respondent 6, an editor overseeing a group of journalists, states that they use it to generate emails with specific feedback on articles produced that day. For example, one respondent, a deputy editor-in-chief, describes using it to put bullet points (e.g., article X was not shared on social media platform Y, video X was not behind the paywall, podcast X was very popular) into a plain text format for the email that is sent to staff every evening.

“Automated Article Writing” encompasses a broad range of news production processes, like headline generation, generating pitches for editors, and drafting interview questions. “I have generated questions for interviews that I was tasked to do last-minute”, respondent 2 states, “but I have always checked them, and rewrote them.” Respondent 7 mentions that ChatGPT is a very good language machine: “You can’t rely on it for factual information, but it’s incredibly good with grammar and sentence structures.” They sometimes ask ChatGPT to rewrite a paragraph that does not flow right: “I prompt it to give me three alternatives for that paragraph.” Respondent 11 uses the subscription-based GPT-4 for “Data Analysis and Visualization” based on open-source datasets. They prompt GPT-4 for suggestions on how to visualize specific data, and they ask whether any data is missing in the spreadsheet. Additionally, the respondent mentions that they upload existing graphs and ask GPT-4 how a graph should be read. For example, they upload a screenshot of a graph on inflation to GPT-4 and receive an output describing different ways to read that specific graph.

Lastly, respondents point to “Multilingual Translation” and “Voice and Speech Synthesis” that can be automated by generative AI. Respondent 2 mentions that translation, voice, and speech synthesis help them make content more accessible: “Because if you can transcribe it, you can translate it,” they state. “I think we can make our content more accessible for people who have learning disabilities.” They mention that generative AI has the potential to be transformative for our society, as it might make available and synthesize “new knowledge” from foreign languages that we do not understand.

Some respondents are concerned about using generative AI tools too much. Respondent 5 states that “if a tool would generate parts of an article or facilitate data analysis in an accurate way, I would ‘unlearn’ that skill. I think that’s one of the reasons I don’t use generative AI too much.” Respondents 11 and 12 both state that they oppose the use of generative AI tools for specific processes in the news production phase. Respondent 12 uses tools like ChatGPT and Bard but is wary of a certain “laziness” when generating parts of articles automatically: “The disadvantage, especially in news production, could be a sort of laid-backness. If a system is doing parts of your job, while you are not taking on new tasks.”

The third phase, news verification, is largely characterized by avoidance of generative AI. When respondents were asked about the news verification phase, there was broad consensus that this phase is a core task of journalism, as it centers on “checking sources”. Respondent 13 describes fact-checking content as something that is inherently human, which entails that this process should not be delegated to generative AI tools. Respondent 5 agrees and states that verification will probably be one of the last strongholds when reflecting on the impact of generative AI on the news reporting process. Despite the limitations of generative AI within news verification, some respondents mention that it can help with “Real-time Fact-Checking”. Respondent 7 mentions that it can be used to rapidly cross-reference statements. They also mention that real-time fact-checking comes with a lot of time pressure; therefore, “using generative AI is like having a second pair of eyes”. Overall, respondent 13 summarizes that verification will remain human as it ties directly to the trustworthy reputation of a news outlet: “The responsibility of verifying information relies on the individual journalist. I do not believe that this process is scalable by generative AI.”

Lastly, for news distribution, respondents mention a wide range of generative AI uses, from news personalization to sharing and labeling content automatically. The first use of generative AI-tools is "Content Personalization", where these tools are seen as a means to experiment with formats. Respondent 11 mentions that it is valuable to have "different kinds of journalism delivered to different kinds of people at different times of the day", where generative AI-tools could contribute to generating the different formats (e.g., text, slideshow, podcast, video). Additionally, user behavior is analyzed in the form of "User Engagement Analysis". Respondent 4 describes that generative AI-tools like ChatGPT 4 can delve deep into user engagement metrics based on an uploaded dataset. They state that they experiment with data points like click-through rates and time spent on articles in order to generate novel insights from that data. Lastly, generative AI-tools are used for "Search Engine Optimization", a task that is widely disliked, Respondent 9 states. They add that it is uncontroversial for search engine optimization to be taken over by ChatGPT, as it does not directly affect the journalistic product.

Another use is "Social Media Posting". Respondent 2 mentions that their social media editors deploy generative AI to generate three or four teasers. Similarly, Respondent 15 states that they experiment with generative AI-tools like ChatGPT to generate a social media planning schedule. "Automated Content Distribution" is mentioned by respondents who use these tools to generate newsletters automatically. The adoption of generative AI-tools, Respondent 4 mentions, highlights the potential for more efficiency in the news distribution phase, which leaves more time for them to conduct data analysis. Respondent 11 says that generative AI will transform their news publishing methods, whereas Respondent 7 believes that the impact on news publishing will be more of a helping hand, as "these tools are not to be trusted without human oversight".

Across the journalistic value chain, some respondents express the need to use and experiment with generative AI-tools. As mentioned above, generative AI-tools are used most in the news production and news distribution phases, and much less in the news verification phase. In the next section, the main perils and possibilities of the use of generative AI-tools are described.

Perils and Possibilities of Generative AI-Tools

The findings reveal three perils and two possibilities when advancing our understanding of ethical considerations of AI at NRC, NOS and Berlingske Media. Each of them will be highlighted below.

1. Perils

One of the perils mentioned most frequently by respondents was the lack of news judgment. Respondent 8 mentions that the use of generative AI-tools can result in the oversimplification of complex issues. They experimented with summaries and realized that the LLM insufficiently highlighted what was newsworthy in stories. Respondent 1 points to a lack of relevance, which in turn could lead to an inadequate understanding of daily reporting.

Another frequently named peril was that these generative AI-tools produce hallucinations and biases.Footnote3 Hallucinations in journalism can lead to the misrepresentation of reality. Respondent 10 fears that they could mislead readers or viewers: "If we produce hallucinated content, we can quickly become an unreliable source of information." "These tools reinforce the discrimination that was already on the internet", Respondent 5 adds. They fear that audiences might misunderstand specific news topics if they were to rely more on generative AI-tools, and that already-existing stereotypes might be reinforced. Respondent 3 links these hallucinations and biases to the risk that the spread of inaccurate information can lead to reputational damage. Instead of fostering constructive debates based on facts, Respondent 8 says, discussions can become polarized and, because of these hallucinations, might be based on false premises. In light of these hallucinations and biases, respondents do not agree on how to evaluate outputs from generative AI-tools for accuracy. Respondents 9 and 14 state they simply use a search engine to verify specific outputs of generative AI-tools, whereas Respondent 4 says they only "prompt it with questions they already know the answer to".

A third prominent peril was the fear of losing control over their (journalistic) autonomy. Respondent 14 says that they are reluctant to train a large language model on a journalist's specific outputs, as the model might replicate their unique tone of voice. "A personal touch in an article adds depth; as a reader, it can result in a unique emotional connection", they state. Respondent 2 mentions that an over-reliance on AI can lead to homogenized content that lacks the individual creativity of writers. Similarly, there might be less of an incentive or opportunity for journalists to innovate and try new approaches. Respondent 4 illustrates that attributing too much power and capacity to generative AI-tools might result in diminished critical thinking. "There is a risk that they might not engage as deeply with the subject matter, which is vital for journalism", they say.

2. Possibilities

One of the main possibilities mentioned by respondents was that generative AI-tools can contribute to the efficiency and scalability of work processes. Respondent 11 highlighted this potential, noting that, with a view to securing journalism's future, they hope AI provides them with the means to emphasize what is original, creative, and distinct. Similarly, Respondent 9 believes that AI can reduce time spent on routine tasks, stating that "AI tools enable us to dedicate more time to what truly matters to us as a news or publishing company." Respondent 1 says that they see generative AI-tools "as their own personal secretary, and it can help to make improvements or suggestions at a lightning speed".

The advent of modular journalism and personalization is another possibility for the use of generative AI-tools mentioned by respondents. "We can now easily decide to translate our content", Respondent 2 states, "and share it on different platforms in order to reach a more diverse audience." Respondent 6 believes that AI can enhance their products for their users. They state: "If we can harness AI to deliver a truly relevant and personalized news experience for our audience, we could distinguish ourselves from our competitors." Respondent 14 says that modular journalism can provide the building blocks for different formats, which can then be assembled in a personalized manner to cater to individual user preferences. "Generative AI can help to produce these modular forms of journalism, generating formats that are digestible for different audiences", they mention.

Given the above-mentioned perils and possibilities associated with the use of generative AI-tools in journalism, it becomes essential to delve deeper into the parameters that govern their responsible development and implementation. While the potential benefits of AI integration in the journalistic process are undeniable, ranging from efficiency in news production to personalization in news distribution, the inherent risks cannot be overlooked. These include concerns about accuracy, potential biases, and the loss of the human touch in critical phases like news verification. In the next section, this study turns to some of the required conditions for the responsible use of generative AI-tools.

Conditions for Responsible Use of Generative AI-Tools

One of the main, and most frequently mentioned conditions for the responsible use of generative AI-tools, is the establishment of comprehensive and ethical guidelines. Just as editorial standards govern the practice of journalism, respondents state, a set of principles specifically designed for AI interventions can serve as the bedrock for its ethical and effective use. Such guidelines can delineate the boundaries of AI’s involvement, ensuring that its capabilities are utilized to augment human effort rather than replace the core values of journalistic integrity.

A noteworthy initiative observed in pioneering newsrooms is the establishment of so-called "AI Task Forces", which respondents see as another condition for responsible AI. These specialized teams, often composed of technologists and journalists, are tasked with evaluating the applications of generative AI-tools in journalistic processes. Through this evaluation, the task forces can preemptively identify potential pitfalls, from the inadvertent propagation of misinformation to subtler concerns like bias amplification. At the time of data collection, both NRC and NOS in the Netherlands and Berlingske Media in Denmark had such task forces (Respondents 3, 7, 12 and 15 were members of these task forces). However, respondents mention that the burden of understanding and responsibly navigating the AI landscape should not rest exclusively on these task forces.

More broadly, there is a pressing need to increase AI literacy across all echelons of the news ecosystem, respondents say. From editors and reporters to videographers and designers, a basic understanding of AI's mechanisms, capabilities, and limitations can empower individuals to make informed decisions. This knowledge equips them to interrogate AI-generated content, critically question its sources, and validate its authenticity, ensuring that the stories they craft uphold the highest journalistic standards. But guidelines and theoretical knowledge of AI might prove insufficient, respondents mention.

The dynamic nature of AI, with its ever-evolving algorithms and capabilities, necessitates a more hands-on approach. Herein lies the importance of experimentation, another condition for the responsible use of generative AI-tools. Encouraging a culture where journalists actively engage with AI-tools, testing and tinkering with their functionalities, can lead to a more organic understanding of the technology. Such experimentation helps to get journalists acquainted with both the transformative possibilities and the inherent limitations of these generative AI-tools.

In conclusion, the ethical and responsible use of these generative AI-tools within journalism hinges on a multi-pronged approach. Comprehensive ethical guidelines, specialized AI Task Forces, and broader AI literacy across the news ecosystem lay the foundation for ensuring that AI augments, rather than compromises, journalistic integrity and autonomy. Yet, as (generative) AI continues to advance, the industry must also foster a culture of hands-on experimentation, empowering journalists to actively engage with and deeply understand the potentials and pitfalls of generative AI-tools.

Discussion

This study evaluated the uses of generative AI-tools, their perils and possibilities, and the required conditions for their responsible implementation in Dutch and Danish traditional news organizations by interviewing a range of journalists. Here we synthesize our findings further to elaborate how we have addressed our research questions.

For the first research question (RQ1), we found that respondents have started to use generative AI-tools across the different phases of the news reporting process, from the initial phase of news gathering, where AI supports content aggregation and trend analysis, to the final stages of news distribution, where content personalization and search engine optimization are streamlined. Respondents underscored the efficiency and versatility brought about by these generative AI-tools, especially in areas that do not compromise the core of journalistic labor. Regarding what defines this core, respondents mention examples that include, but are not limited to, writing a thorough political analysis, receiving a scoop from a source, and reporting breaking news. The question remains where the threshold lies for what is close to journalism's core, and which uses in the journalistic process can be (partly) outsourced to generative AI-tools. Respondents disagreed on the generation of entire articles without a human in the loop, as the specific topic matters. The results show that journalists might be inclined to outsource sports and financial reporting to generative AI-tools, something that has been outsourced to AI before (Thurman Citation2018, Forthcoming 2019). Ethical concerns and considerations were discussed, as some respondents remain reluctant to adopt these tools, evidenced by their cautious, even resistant, approach towards utilizing generative AI-tools throughout the news reporting process. Respondents especially distrust these tools in the news verification phase. For this phase, journalistic integrity in the form of verifying sources and fact-checking was mentioned by respondents as an inherently human responsibility, also bearing in mind that these tools do not reliably produce accurate factual information.

For our second research question (RQ2), mapping the perils and possibilities, respondents highlighted more concerns than opportunities when reflecting on the use of generative AI-tools. Limitations include the tendency of generative AI-tools to oversimplify inherently complex matters, leading to a reduced depth of understanding. Another significant concern is that AI, despite its vast computational capabilities, can generate content that does not resonate with the specific context in which it is meant to be applied, rendering it less effective or even misleading. "AI Task Forces", like the ones NRC, NOS, and Berlingske have established, play a pivotal role in the discussion of, and experimentation with, generative AI-tools and in trying to balance perils and possibilities. They act as sentinels, identifying risks such as bias amplification or misinformation, which can, in turn, inform the responsible use of generative AI-tools. As the implications of this technological immersion are not entirely graspable, journalists will need to grapple with finding a balance between innovation and integrity, ensuring that the essence of journalism is not diluted by the push for advancement and efficiency. Establishing guidelines can enhance the understanding of AI in newsrooms. As described by de Haan et al. (Citation2022) and Becker, Simon, and Crum (Citation2023), AI's presence frequently goes unnoticed by journalists, and guidelines can function as a starting point for a more constructive discussion. Respondents mention that having these "AI Task Forces" in place is an important first step; however, they underscore that responsibility extends beyond these "teams". Comprehensive AI literacy is essential for all professionals in the news ecosystem, from editors to marketers and reporters, ensuring rigorous scrutiny of AI-generated content.
Furthermore, fostering a culture of hands-on experimentation with AI tools seems to be a productive approach for grounding theoretical knowledge with practical experience.

For building the conditions for the responsible use of AI tools (RQ3), respondents mention that the integration of generative AI tools in journalism has transformed their work processes through automation and augmentation. Central to responsible adoption is the establishment of solid ethical guidelines. These guidelines should serve as a compass, ensuring that AI enhances, rather than undermines, journalistic values. Expanding on the larger implications of this study, one can infer that the primary risks and opportunities presented by generative AI tools do not deviate significantly from those associated with other AI applications (Diakopoulos Citation2019). The crux lies in navigating these innovations with prudence, ensuring that the tools enhance, rather than overshadow, the principles of journalism that underlie responsible practice. The essence of responsible reporting and factual accuracy can be compromised if AI tools are used without a comprehensive understanding of their limitations and possibilities. This concern is echoed by de-Lima-Santos and Ceron (Citation2022), who emphasize that while AI offers vast potential for scalability and efficiency, it should not come at the expense of the core values and ethics that underpin the journalism profession. One risk is that news organizations could become more homogeneous, or similar among themselves, by using these generative AI-tools (Cools and Diakopoulos Citation2023; Napoli Citation2014).

From the results, we observe that there is a growing apprehension that, rather than mitigating online biases, AI might unintentionally reinforce them, especially if the data it is trained on contains those very biases. Additionally, respondents mention their fears of AI-induced hallucinations leading to misinformation, potential reputational damage, and a loss of unique journalistic voice. On the brighter side, the rise of modular journalism and the promise of more efficient work processes suggest that, when used responsibly, generative AI-tools can enhance and augment the journalistic landscape. A central area of activity for practitioners moving forward will be in balancing these sometimes-in-tension factors, either by establishing acceptable practices for use or by advancing tools and technologies that embody acceptable compromise.

Amidst the increasing use of generative AI-tools and their perils and possibilities, we may wonder whether journalism is yet again at an important crossroads. We already observe that generative AI will further accelerate the digital transformations that have challenged the foundations of mainstream media in the past. Generative AI-tools might constitute the newest digital shift, similar to the rise of the internet and the emergence of social media, which challenged news outlets' revenue models and prompted a re-evaluation of journalism. In conclusion, we see the implementation of AI as a double-edged sword. Generative AI could help mitigate existing bias and prejudice by enabling more personalized and context-aware news delivery, which could engage a broader audience and encourage more active civic participation. However, this optimistic view is tempered by concerns about the ethical implications of AI, including issues of transparency and accountability (Cools and Koliska Citation2024). Thus, the integration of AI in journalism presents a complex interplay of opportunities and challenges, which could inspire future research to evaluate its impact on the social, democratic, and cultural norms, roles, and practices of journalism.

Conclusion

In this study we addressed questions related to the extent of use of generative AI tools amongst early adopters at three media organizations in the Netherlands and Denmark, including the issues and benefits of their use and the various ethical tensions that may arise in the course of developing responsible use. Our findings provide an early map of tasks across the four main phases of news production. In addition, our findings outline various challenges, such as the loss of editorial judgment, the possibility of introducing bias and inaccuracy, and the fear of losing journalistic autonomy, as well as some benefits, namely the potential for efficiency gains and for creating new, more modular or personalized experiences. Our findings also establish a few of the strategies that practitioners are exploring for articulating the conditions for responsible practice, including the creation of explicit guidelines, the establishment of task forces and teams dedicated to, among other things, experimentation, and the need to invest in AI literacy initiatives. Taken together, these findings contribute a set of ideas that can help orient practitioners and future researchers towards areas where additional research or practical advancement may be warranted as generative AI is adopted in news production.

While this study provides valuable insights into the perceptions and applications of generative AI among journalists, it is also important to acknowledge its limitations. Firstly, our research methodology was based on interviews, which offer only a partial view of the broader journalistic landscape vis-à-vis generative AI-tools. The timing of the interviews, in May and June 2023, should also be considered when reading the results, as they provide only a snapshot of the use of generative AI-tools at the time of data collection. To garner a more comprehensive understanding across a wider sample of individuals in different organizational contexts, we recommend supplementing this qualitative approach with a survey on the uses and implications of generative AI-tools in journalism, also to include a more comparative perspective. Secondly, the geographical scope of our study was restricted to journalists in the Netherlands and Denmark. This means that our findings are not representative and are dependent on the context of the Dutch and Danish news outlets that were studied, with the caveat that the generally high level of resourcing available at these outlets is not uniform across the industry. Future research would benefit from incorporating views from news professionals across different media systems and various continents, ensuring a more holistic understanding of the subject matter. Thirdly, this study captures the perspective of early adopters, and as these AI-technologies are evolving rapidly, the findings offer a snapshot of the uses, perils, and possibilities of generative AI-tools. Future research could therefore build on the insights of these early adopters and consider a longitudinal approach.

Ethics Approval

This study was approved by the Ethics Review Commission of (university withheld).

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Diakopoulos et al., Citation2024).

3 In the context of Large Language Models (LLMs) like GPT-3.5, hallucinations refer to the generation of outputs that contain information or details that are not accurate, real, or supported by the input data. Hallucinations can occur when the model produces responses that sound plausible or coherent but are not grounded in factual information (Bender et al. Citation2021).

Cite this article: https://doi.org/10.1080/17512786.2024.2394558

Licensing
© 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.