By: Lynge Asbjørn Møller, Arjen van Dalen & Morten Skovsgaard
Recent years have seen the rapid proliferation of a new wave of newsroom technologies leveraging artificial intelligence for effective news automation (Broussard et al. 2019). Until recently, the landscape of journalistic artificial intelligence rarely touched upon the main task area of journalists, focusing primarily on tasks such as serving up algorithmic news recommendations and powering basic automated-writing systems. The emergence of generative artificial intelligence, however, has sparked a transformative shift. Artificial intelligence can now perform tasks that strike at the core of journalistic work, such as sophisticated writing, news presenting, and complex data analysis. At first glance, these advancements seem to challenge claims of human superiority over algorithmic systems in many aspects of news production, raising questions about the legitimacy of journalism’s professional claim and the future role of journalists. Much journalistic work is “mundane, repetitious, and formulaic” (Deuze and Beckett 2022, 5), taking on generic forms and following “preexisting templates to fit facts to story forms” (Carlson 2015, 425). With artificial intelligence becoming increasingly capable of mimicking these forms and templates, its introduction in newsrooms is likely to evoke concerns among journalists about job displacement. Digitisation has already undercut traditional news business models and upended work routines, putting pressure on journalists to reexamine their professional skills and the nature of their profession (Örnebring 2010). The rise of artificial intelligence perpetuates this reconfiguration of journalism in relation to technology.
Against this backdrop, we examine how journalists position themselves in response to the introduction of artificial intelligence in their work, building upon important previous research into how journalistic discourse towards news automation has reshaped the boundaries of journalism (e.g., Carlson 2015; Lindén 2018; van Dalen 2012). Most of these studies focus on the implementation of automated news writing through rule-based systems that apply predefined rules to structured data or templates to produce basic news content involving primarily factual information. Accordingly, previous literature has found that news automation was rarely perceived by journalists as a serious professional threat and was often presented as a means to free up resources for in-depth work by automating only the most mundane and repetitious journalistic tasks (e.g., Carlson 2015; Milosavljević and Vobič 2019; Schapals and Porlezza 2020; van Dalen 2012). While recent advancements within generative artificial intelligence expand the ways in which journalists can streamline aspects of their work, these new technologies may also prompt journalists to be more sceptical towards automation as it encroaches upon their core task area. Amidst growing media hype surrounding the text-generation abilities of artificial intelligence, there is therefore a need for updated insights into journalists’ perceptions of news automation.
However, current literature offers only limited insights into how regular journalists working on the ground respond to news automation. The routines, values, and attitudes of regular journalists significantly shape the integration of technology into journalistic practice (Boczkowski 2004), highlighting the importance of understanding their perceptions to predict the future trajectory of journalism in a time of major technological disruption. Previous studies have largely investigated journalistic discourse around automation and artificial intelligence through, for instance, mediated reactions (Carlson 2015; Moran and Shaikh 2022; van Dalen 2012) or interviews with higher-level editorial staff key to the implementation of the technologies (Bucher 2017; Lindén 2018; Milosavljević and Vobič 2019; Rydenfelt 2022; Wu, Tandoc, and Salmon 2019). These studies have been instrumental in capturing the early understandings of these technologies when regular journalists were less confronted with them in their daily lives. But given the media hype around generative artificial intelligence, journalists working on the ground are increasingly forming their own opinions about the role of artificial intelligence in journalism. Compared to managerial staff, the journalists whose work is being automated are less likely to be techno-optimistic about the economic prospects of artificial intelligence and more inclined to protect their professional boundaries when technology encroaches on their turf (Lewis 2012).
This exploratory study contributes to filling these gaps in the literature by investigating how regular journalists respond to and perceive artificial intelligence in journalism. Regular journalists are defined here as news workers who produce journalism within news organisations but do not hold managerial positions and are not directly involved in technology implementation decisions. We conducted 21 semi-structured interviews with such journalists in two Danish news organisations. The Scandinavian news market in general, and the Danish one in particular, is known for strong public interest traditions, high levels of professionalism, and a progressive approach to technology adoption (Syvertsen et al. 2014), and news organisations in Denmark are currently investing heavily in implementing artificial intelligence tools in daily news operations (Schrøder, Blach-Ørsten, and Eberholst 2024, 76). This makes it an interesting setting in which to investigate the impact of artificial intelligence on a robust journalistic profession. The informants were sampled to vary on professional parameters such as task area and length of career in order to cover different perspectives on the role of artificial intelligence in journalism. Specifically, the aim of this interview study is to investigate the potentials and concerns that these journalists attribute to artificial intelligence in journalism and the ways in which they attempt to distinguish themselves from artificial intelligence to highlight their unique professional skillset. By focusing on these aspects, we want to better understand not only the readiness of regular journalists to embrace or resist the adoption of artificial intelligence but also how they position themselves in relation to these technologies. This will shed light on the strategies journalists employ to navigate this evolving landscape and on how the rise of artificial intelligence reshapes journalism as a profession.
Automation and the Professional Boundaries of Journalism
This study builds upon a long line of important research that has contributed to our understanding of how journalists position themselves in response to news automation, particularly in the context of news production. Much of the previous literature focuses on basic automated-writing systems that automatically fill in prewritten templates with structured data to produce, for instance, sports match reports or financial news, termed in journalism studies “robot journalism” (Latar 2018), “automated journalism” (Carlson 2015), or “algorithmic journalism” (Dörr 2016). Many of these studies indicate that journalists largely dismiss automated journalism as serious competition by strategically presenting their own skillset as superior (Bucher 2017; Carlson 2015; Moran and Shaikh 2022; Rydenfelt 2022; Thäsler-Kordonouri and Barling 2023). In the face of news automation, journalists have been found to “rigorously defend their own work” (Schapals and Porlezza 2020, 23) by illuminating “the advantages of human beings over machines” (Rydenfelt 2022, 2609). Journalists, for instance, accentuate human qualities such as emotions and creativity (Moran and Shaikh 2022), the journalistic instinct rooted in their news judgement (Bucher 2017), the human ability to understand nuances (Thurman, Dörr, and Kunert 2017), and the value of human storytelling (Carlson 2015). These skills, journalists emphasise, “cannot be captured in algorithms” (van Dalen 2012, 653).
In journalism studies, the concept of boundary work has been invoked to describe such rhetorical struggles over the boundaries of journalistic practice. However, these strategic responses to external challenges are not exclusive to journalism. Studies of journalistic boundary work have largely adopted a framework from the sociology of professions, specifically Abbott’s (1988) influential analysis of professional jurisdiction within the system of professions (Anderson and Schudson 2019). Jurisdiction is an occupational group’s claim to control a certain area of work through a display of exclusive knowledge and expertise. When faced with external challenges, members of an occupational group reexamine their expertise to maintain control over problems falling under their jurisdiction or to appropriate new ones. Journalism’s exclusive claim to its own news jurisdiction has always been contested. Compared to traditional professions such as medicine or law, journalism struggles to monopolise access to the profession and fend off external challengers to its jurisdiction. Scholars have previously drawn on Abbott’s framework to analyse the factors that leave journalism vulnerable to inter-professional competition from other occupational groups such as public relations workers (Abbott 1988; Anderson and Lowrey 2007) or online bloggers (Lowrey 2006; Lowrey and Mackay 2008). Borrowing specific concepts from Abbott’s framework relating to professional resilience and vulnerabilities, we argue that journalism’s inability to monopolise has also left it susceptible to automation through advanced artificial intelligence.
In short, journalism’s professional vulnerabilities come down to the potential replicability of certain aspects of journalistic work by outsiders. This is due to a lack of abstraction in the knowledge base needed to solve the problems falling under the jurisdiction of journalism. Central to Abbott’s conceptualisation of professional problem-solving is the concept of inference: the application of abstract knowledge to client characteristics to determine a suitable solution. Scholars argue that journalistic knowledge is “real and expert but by no means abstract” (Anderson and Schudson 2019, 137), which makes the inference process for journalists “routine and not highly difficult for outsiders to learn” (Lowrey 2006, 492). Journalists have largely rationalised the production of news to meet organisational demands through the standardisation of news formats and the adoption of conventional storytelling structures (Schudson 2003). This sort of “routinisation” makes it difficult for a profession to claim exclusive control over its area of work, leaving it vulnerable to competition from rival occupations (Abbott 1988, 51). However, the same dynamics also leave tasks susceptible to replacement not by other occupations but by technology itself. Previous research on news automation has pointed to the ability of algorithmic systems for automated article writing to emulate generic news formats (Carlson 2015; Lindén 2017), contributing to an underlying fear among some journalists that they will lose their jobs to automation (Jones, Jones, and Luger 2022; Kim and Kim 2018; Moran and Shaikh 2022).
But since these first automation applications, the potential utility of automated journalism to generate content has undergone a transformative shift with the emergence of generative artificial intelligence. Artificial intelligence may therefore pose a greater professional threat to regular journalists than previously assumed, potentially exacerbating existing elements of “automation anxiety” in the journalistic discourse around automation (Lindén 2018). As van Dalen (2012, 654) argues, “journalists might be more defensive when this new development touches upon their core democratic tasks [or] when their own job is on the line and could be automated”. Compared to journalists with managerial responsibilities, regular journalists who perceive their roles as more easily replaceable by artificial intelligence may harbour intensified fears about job security. For instance, the work of news journalists relies more on routine inference and involves less abstract knowledge than the work of their investigative counterparts. This discrepancy is likely to amplify their anxieties regarding the impact of generative artificial intelligence on their work and prompt a heightened need for boundary work in an attempt to defend their professional domain.
These recent advancements may also perpetuate optimistic perspectives prevalent in the literature. Abbott (1988) argues that just as technology can destroy jurisdictions, it can also develop them. Several studies discuss the potential of automation and artificial intelligence “to improve journalism in ways currently beyond reach” (Jones, Jones, and Luger 2022, 1745), and journalists have been found to be enthusiastic about its capacity to relieve them of routine tasks and leave them with more time to pursue complex work (Carlson 2015; Milosavljević and Vobič 2019; Moran and Shaikh 2022; Schapals and Porlezza 2020; Wu, Tandoc, and Salmon 2019). The automation of such routinisable and vulnerable task areas allows professionals instead to tightly protect areas of work in which they add value through their specialised knowledge and skill, in turn reinforcing the jurisdictional claim of the profession (Abbott 1988). By embracing these technologies as tools for improvement rather than threats, journalists engage in strategic boundary work to maintain their position as indispensable contributors to the journalistic profession. This positive framing of automation serves to reinforce the boundaries of journalism by focusing on the sort of abstract work that is exclusive to journalists, such as interviewing sources, identifying newsworthy stories, engaging in creative storytelling, and conducting in-depth investigations. Embracing automation as an opportunity to devote more time and resources to such tasks reaffirms rather than dilutes the expertise and value of journalists as skilled professionals.
While these boundary struggles are a common theme in previous literature on news automation, the reactions of journalists are also influenced by factors extending beyond professional disputes. News automation and artificial intelligence have the potential to affect aspects of work that contribute to its meaningfulness, such as autonomy, skill variety, task identity, and task significance (Hackman and Oldham 1975). Compared to managerial staff with more varied and strategic responsibilities, work meaningfulness can be assumed to be especially relevant to the reactions of regular journalists, who derive a significant portion of their job satisfaction and sense of purpose from the meaningfulness of their daily tasks. Previously, technological advancements have affected the ways in which journalists give meaning to their work, for instance by reducing their perceived editorial autonomy (Deuze 2005) and diluting the significance of their traditional investigative and news-gathering skills in favour of technical skills (Örnebring 2010). In line with these developments, Olsen (2023) found that the shift towards automated journalism resulted in fewer editorial resources, limited opportunities for in-depth reporting and skill development, and reduced autonomy, leading to frustration and a sense of inadequacy among journalists.
This research contributes to existing literature by exploring how regular journalists react to artificial intelligence in a time characterised by increasing media hype around the abilities of generative artificial intelligence, thus advancing our understanding of how technology reshapes journalistic work practices and reconfigures journalism as a profession. Specifically, we ask the following research questions:
RQ1: What potentials do regular journalists attribute to artificial intelligence in relation to their own work and their profession in general?
RQ2: What threats and concerns do regular journalists attribute to artificial intelligence in relation to their own work and their profession in general?
RQ3: How do regular journalists distinguish themselves from artificial intelligence, and what core skills do they emphasise to do so?
Methods
This study draws on interviews with 21 journalists at two Danish news organisations, Jysk Fynske Medier (JFM) and TV2 Denmark (TV2). As part of the Nordic media system, the news industry in Denmark is characterised by high levels of digital innovation and an institutionalised professionalism rooted in strong public interest traditions (Brüggemann et al. 2014). Recent national developments include significant investments by Danish news organisations in generative artificial intelligence to enhance journalistic practices, coinciding with increased financial pressures on the news industry that have led to cost-cutting measures and layoffs (Schrøder, Blach-Ørsten, and Eberholst 2024, 76). This media environment creates fertile ground for tensions around the professional threat of artificial intelligence, which may influence newsroom attitudes towards the technology. The two organisations are among the largest in the country. TV2 is one of the primary television networks in Denmark, operating as a commercial organisation with certain public service obligations. Its news department offers both national and international news on its primary television channel, its news-only channel, and its website. JFM is one of the largest commercial media conglomerates in Denmark and runs a multitude of daily and weekly newspapers as well as radio stations. Despite their different operational structures, both organisations adhere to regulatory and self-regulatory standards rooted in the institutionalised professionalism characterising the Danish news industry. The perspectives of journalists from these organisations therefore provide insights into the broader Danish journalistic landscape, offering valuable lessons for understanding the impact of artificial intelligence on the journalistic profession that may resonate beyond the Nordic region.
The informants are all regular journalists working on the ground at the news organisations without managerial responsibilities or direct involvement in technology implementation decisions. They were sampled to represent a range of professional roles and experience levels to investigate the impact of these factors on their perspectives. However, logistical constraints also influenced the selection process as availability during the visits to the newsrooms played a role in sampling decisions.
Table 1 presents details about the informants, categorised into four distinct groups based on their professional roles: beat reporting (covering specific beats using various reporting methods), general assignment reporting (covering diverse beats with varied reporting methods), day-to-day news reporting (delivering immediate news updates), and investigative reporting (specialising in in-depth reporting and thorough investigations). Informed consent was collected before each interview, and the informants have been anonymised to ensure the highest possible level of confidentiality.
Table 1. List of interviews.
The interviews were carried out in person at the news organisations and had an average length of 30 minutes. They followed a semi-structured interview guide covering current work practices and skills, current and potential use of artificial intelligence, and potentials and concerns related to the use of artificial intelligence in journalism. The interview guide is available on OSF (see Data Availability Statement). Semi-structured research interviews are regarded as among the most efficient methods for qualitative data collection because the interviewer is able to focus on specific topics while still allowing informants to respond in their own terms (Kvale and Brinkmann 2008). The interviews were recorded and transcribed, after which they were analysed using Braun and Clarke’s (2006) process for thematic analysis. Coding was done in NVivo by the first author following Deterding and Waters’ (2021) approach to flexible QDA coding, which involves applying index codes to large text segments for data reduction, followed by analytic coding of specific text segments. To increase the reliability of the coding process, another author reviewed each coded transcript. The codes were then explored in NVivo to identify themes. General themes were identified by exploring the index and analytic codes and looking for alternative explanations and negative cases to ensure the robustness of each finding. To explore potential differences across groups, we ran specific queries relating to the informants’ perceptions and uses of artificial intelligence depending on metadata such as experience, organisation, position, and professional role.
Results
In this section, the themes revealed by the thematic analysis are presented and analysed. One major finding is the minimal variation in attitudes toward artificial intelligence across different news organisations, experience levels, and professional roles. Consequently, this section analyses the results collectively, highlighting only the notable differences where they are observed. The section is divided into three subsections that each deal with one research question, focusing first on the potentials attributed to artificial intelligence (RQ1), secondly on the threats and concerns expressed by the journalists (RQ2), and lastly on the ways in which the journalists distinguish themselves from artificial intelligence (RQ3).
Less Routine Tasks, More In-Depth Work
Most of the journalists display a limited understanding of artificial intelligence, often associating it with recent advancements in generative artificial intelligence such as ChatGPT. Definitions vary, from algorithms providing answers akin to a search engine to technologies mimicking human intelligence. While several journalists find artificial intelligence somewhat “scary” (int1; int2; int13; int16) because of how fast it is developing, they also express excitement about its potentials for journalism. One informant puts it this way:
It’s a very advanced technology that is somehow both a little scary, because what the heck is it, how fast is this going to move forward, can we follow along, and how will it end? But it is also very exciting. Very, very exciting. Because I also think that it will be a pretty good tool to work with journalistically. (int1, beat editor)
The majority of the journalists have positive things to say about the potential role of artificial intelligence in their own work. Some describe its role as a “help” (int2; int11; int20) or a “tool” (int3; int5; int7; int10; int11; int12; int15), indicating that artificial intelligence is primarily viewed as something that complements journalistic work rather than as a direct competitor. Several journalists emphasise that artificial intelligence can help “streamline” (int20), “optimise” (int14; int9), and “ease” the work process (int6; int12; int13; int17; int19; int20; int21) by undertaking the “routine tasks” (int5; int7; int9; int19) that are “boring” (int1; int12; int13; int15) and require “manual” (int6) and “hard” (int7; int13) labour and a lot of time (int2; int20). Still, the actual use of artificial intelligence among the journalists is quite limited. Whereas some mention other tools such as transcription services or basic automated-writing systems in relation to their use of artificial intelligence, most journalists associate it with generative artificial intelligence tools such as ChatGPT. While the journalists at TV2 are not yet allowed to use generative artificial intelligence for work, over half of the journalists at JFM are experimenting with these tools on the job.
All the journalists anticipate extensive integration of artificial intelligence into their work, envisioning its use in news gathering, news production, and more administrative tasks such as shift scheduling. More than half of the journalists imagine artificial intelligence aiding in news gathering by suggesting stories and angles, identifying sources, preparing interview questions, and transcribing interviews. Some journalists, especially those in beat reporting and investigative roles, also envision using artificial intelligence for investigative research tasks such as document handling, data processing, and information verification. In production, over two-thirds anticipate artificial intelligence supporting writing tasks, from providing language inspiration and generating headlines to drafting news texts and enhancing proofreading. According to the journalists, these tasks typically involve repetitive and data-based activities, such as reading numbers, putting things into systems, and identifying patterns.
In relation to their profession in general, the journalists hope that artificial intelligence will enable them to engage in more impactful work, envisioning themselves being “set free” from routine tasks to pursue other endeavours (int2; int7; int18; int19). Artificial intelligence is seen as a means to “get out” more (int1; int8; int18), “seek out” more stories (int1; int21), and “talk to more people” (int1; int8), thus improving the quality of journalism (int2; int7; int19; int20). In relation to investigative journalism, artificial intelligence is viewed as a help that allows journalists to “dive deeper” into stories (int13; int20) and “expose” injustices (int13; int18) through “the sort of investigative work that really makes a difference” (int7). Our analysis indicates that investigative journalists hold a more positive view of the potential of artificial intelligence in their work, with their statements coded as positive notably more often than those of other journalists. Investigative journalists are more “privileged” in this context, as one of them put it:
It would ease our work and enable us to work faster and perhaps publish more often and spend our efforts on what is often the most challenging part of the most difficult stories: Getting hold of the right people and getting them to tell us something. (int21, investigative reporter)
The above-mentioned applications of artificial intelligence can largely be said to augment rather than automate journalistic tasks. However, a third of the journalists also envision fully automating the production of certain types of news. Examples include generating short news articles from news wires, press releases, or foreign news, covering crime statistics or sports matches, and scripting or even voicing voice-overs for broadcast news shows. The journalists note that such automation is suitable for reporting that focuses on concrete, data-based information available online without the need for quotes from sources. According to them, these articles typically follow standardised formats and only require moderate language skills. The day-to-day news journalists account for nearly three quarters of the statements concerning the potential of artificial intelligence to fully automate news production, suggesting a greater sense of professional threat among this group towards the application of artificial intelligence.
Fewer Jobs and Less Meaning
There seems to be a consensus among the journalists that jobs will be lost to artificial intelligence because journalism as a profession is “threatened by artificial intelligence” (int8). But these concerns are not centred around artificial intelligence as a direct cause of job displacement; rather, they seem to be related to a general defeatism about the future of the media industry. The risk of losing one’s job is seen as “a basic condition in the industry anyway” (int6) because “nobody reads newspapers anymore” (int10) so “the money isn’t there” (int19). Whereas some journalists have made peace with the thought that they might no longer be needed in the future (int5; int6; int21), many are concerned that artificial intelligence will be implemented to cut production costs (int1; int2; int4; int10; int13; int15; int17). However, the journalists largely refer to job displacement as a concern for the profession in general rather than a specific concern related to their own job. It is formulated as “losing colleagues” (int7) or working in “smaller newsrooms” (int4) with “fewer journalists” (int2; int5; int6; int8). Further, several journalists express more concern about other specialisations within journalism being lost to automation than their own, with, for instance, broadcast journalists viewing writing journalists as more threatened by artificial intelligence (int14; int15; int17).
In relation to their own job, several journalists are more concerned that artificial intelligence could lead to a decline in work meaningfulness. Interestingly, these concerns are, much like many of the more positive statements, related to the potential of artificial intelligence replacing routine tasks. Some of these tasks also have value for the journalists, as one of them explains:
There is another concern for me, not so much for my own job security but to a greater extent about what my job is going to be in the future. Because I worry that some of what I really like about my job can be replaced. (int6, general assignment editor)
Journalists mention that they worry that artificial intelligence will take away enjoyable aspects of their job such as writing (int4; int11) or brainstorming (int1), or that these tasks will be substituted by boring tasks such as fact-checking (int5; int14). Other journalists are concerned about missing the excitement of publishing “fast news” (int6; int7). They view this kind of journalism as a break from the more demanding work of collecting information and interviewing, and they worry that without these breaks their job would become more “stressful” (int6) and “harder” (int6; int7). Journalists doing general assignment reporting especially seem concerned about artificial intelligence affecting the variation of their tasks. These journalists “thrive in a mix” of tasks (int7), usually working across various topics and assignments.
Further, many journalists voice concerns about losing their professional identity and pride in their skills. The prospect of artificial intelligence replicating skills that they have spent years perfecting is viewed as “annoying” (int21) and a cause for a loss of “meaning” (int4), leading to feelings of being “unimportant” (int7). Journalists question why artificial intelligence should perform tasks inherent in a “journalistic craft” deemed uniquely human (int5; int10). Other journalists are more open-minded towards the future of their profession, seeing their work meaningfulness as independent of technology. They stress that the impact of artificial intelligence is not going to be “as catastrophic as many people make it out to” (int19), and that journalists will “find something else to use our creativity on” (int15) and “find happiness in what is left behind” (int8). These statements about the impact of artificial intelligence on journalistic work highlight an ongoing debate over the necessity for journalists to adapt to a changing technological landscape and reassess their roles and identity. One journalist puts it this way:
An enormously large part of our self-understanding as journalists is to be news journalists. So, I think that we really have to adapt to a new world […] and I think that we’re really going to have to evaluate our own role. (int7, general assignment reporter)
In relation to the profession at large, the journalists worry that artificial intelligence will negatively impact the perceived trustworthiness of journalism. Some journalists are concerned about the reflection of “bias” (int14; int20) or “values” (int5; int8) in artificial intelligence. Others refer to the tendency of generative artificial intelligence tools to occasionally “fabricate” (int5; int19), “lie” (int6), and make “mistakes” (int1; int12; int13; int14; int15). Further, some of the journalists worry that these tools lack proper sourcing and critical evaluation (int4; int11; int15), leading to concerns about their propensity to perpetuate misinformation (int1; int9; int10). Hence, many journalists find artificial intelligence to be too “unreliable” (int16; int19), and they struggle with completely “trusting” it (int4; int6; int12; int13; int14; int16; int19). To ensure responsible implementation of artificial intelligence in journalism, the journalists therefore stress the need for rigorous fact-checking (int1; int4; int5; int6; int8; int9; int10; int12; int15; int16; int20) and human oversight (int2; int3; int5; int12; int13). Several journalists also advocate for transparency through clear labelling of the involvement of artificial intelligence in production (int10; int14; int18; int19).
Even though the journalists are quick to point out these issues with artificial intelligence, they largely seem to perceive its implementation in news organisations as organised top down. They emphasise that news organisations have a responsibility to implement artificial intelligence responsibly (int2; int16), and they see it as a “management task” (int1) to be attentive to “pitfalls” (int10) and “grey areas” (int14). Further, many journalists refer to internal guidelines on whether and how to use artificial intelligence, for instance complaining about a lack of such guidelines (int1; int9; int11) and urging that “some really smart people that call the shots need to sit down and agree on guidelines for what we want with it” (int2). Likewise, some journalists seem to be awaiting training in artificial intelligence organised by management (int1; int3; int4), while others urge journalism schools to focus on these technologies (int12; int14). Whereas journalists generally promote autonomy and ethics as important boundary markers of journalistic work (Deuze Citation2005), these statements indicate that the interviewed journalists perceive limited agency in the ethical implementation of artificial intelligence, both individually and within the profession at large.
Irreplaceable Human Qualities
As the journalists grapple with these potentials and challenges of artificial intelligence in the newsroom, they consistently emphasise their own professional superiority rooted in the inherently human character of much of their work. Many of these human qualities concretely relate to their own physical presence when working in the field. For example, many journalists emphasise that artificial intelligence cannot “report from the field” (int3; int15) because of the “unplanned” nature of field reporting (int13), and it cannot collect the sensory details necessary for the sort of narrative journalistic storytelling that can bring news stories to life (int1; int5; int12).
Many journalists also point out that artificial intelligence cannot undertake “human contact” (int19; int20): it cannot “call up sources” (int4; int15) or “meet with people” (int1; int4; int21), and it cannot “do interviews” (int8; int18). The local journalists at JFM especially emphasise cultivating connections to sources as essential for getting a feel for the community, something that they do not imagine artificial intelligence being capable of (int2; int6; int10). Journalists doing more investigative journalism emphasise artificial intelligence’s inability to convince people to participate in their stories as “cases” (int20), speak with sources “off the record” (int10), or “persuade” sources to share confidential information (int21). Therefore, as one journalist puts it, having “an understanding of interview techniques and human contact will become alpha and omega if the computer undertakes everything else” (int19).
Other aspects of this human approach to news relate to a uniquely human way of conducting oneself as a journalist. Many of the journalists emphasise the special feeling that they have for the good story and how to approach it, something that has been termed the “journalistic gut feeling” (Schultz Citation2007) and a “nose for news” (De Maeyer Citation2020). Artificial intelligence cannot “find” (int9; int18), “select” (int6), or “angle” (int10; int17) stories because these tasks require a certain “feeling” (int6; int10). Further, artificial intelligence does not know when “there is more to investigate” (int12) or “whether the story holds up […] because it only goes in a predetermined direction” (int18). Therefore, this ability to evaluate a situation or a story “based on gut feeling” (int17) will have “even larger value going forward” (int9) as artificial intelligence undertakes more tasks.
Similarly, journalists emphasise that artificial intelligence lacks the ability to critically approach news stories and subjects. It cannot sense if something does not add up or is out of place; rather, it “just writes it” (int10). For instance, an investigative journalist worries that artificial intelligence will not be able to spot if officials are hiding information in footnotes and appendices of official documents (int21). Journalists at JFM base similar critical perceptions on their experiences with the sort of automated systems that fill in prewritten templates to automatically generate articles about sports results, annual accounts of local businesses, or local property sales. The journalists do not view these systems as producing real journalism because they do not “capture the nuances” (int2) and fail to consider “everything underlying the numbers” (int5). While anyone would be able to extract numbers from a database, human journalists are “critical” (int14), they “think in terms of motives” (int21), and they do not “accept” answers at face value (int15; int16). As one journalist puts it, “the critical sense always lies in a trained journalist and not in a computer” (int16).
Further, many journalists highlight “ethical considerations” (int4; int9; int19) and “ethical decisions” (int7; int17) as an important boundary marker for journalistic practice, and they doubt that artificial intelligence will be able to “handle that without help” (int20). As one journalist ponders, you might be able to “teach [artificial intelligence] red and green”, but you cannot teach it “whether it is right or wrong to mention that the mother was crying during a trial” (int7). Artificial intelligence cannot “make that choice” (int9) because it does not “have an eye for it” (int6) and lacks “emotions and empathy” (int17). Furthermore, journalists argue that artificial intelligence poses new ethical dilemmas for journalists, including decisions about when to use and not use the technology (int9), how to identify errors in its output (int15), and how to verify artificially generated information (int21). Consequently, journalists will have to be “sharper in terms of ethics” (int8) because “the more we put in the hands of machines, the more ethical questions we face” (int20). These new knowledge areas in the ethical and responsible use of artificial intelligence therefore need to be taught in journalism schools (int16).
Several journalists also emphasise that news workers need an emotional understanding of the story and the sources. A human journalist has “empathy for other people” (int17), which is important in, for instance, stories about “people who have experienced something or are affected by illness” (int5). Artificial intelligence cannot replicate this emotional connection to people because it has not had “real experiences with real people in different situations” (int15) and therefore does not “empathise with what they are experiencing” (int8). Furthermore, artificial intelligence lacks the “creativity” (int13), “linguistic finesse” (int7), and “personal voice” (int15) that contribute to “the smoothness of the reading experience and the flow of an article” (int18). These aspects are part of what makes news worth consuming for audiences, according to one journalist:
What makes an article good is that you, as a reader, get to experience the kind of people it is about […] I don’t think that a robot in the same way as a skilled journalist can convey who these people are and what they have at stake. (int1, beat editor)
Without these uniquely human aspects of journalism, news content will “resemble each other” (int20) and become too “boring” (int18), “clinical” (int14), and “mechanical” (int20), according to the journalists. These statements indicate that journalists present artificial intelligence as incapable of replicating the human elements essential for producing impactful and distinctive journalism. While recognising the potential benefits these technologies may offer, journalists staunchly assert that artificial intelligence’s inherent lack of human qualities serves as a barrier against the complete replacement of journalists. In other words, the journalists predict that journalism as a profession will not erode anytime soon. As one journalist put it:
My most important point is definitely that artificial intelligence cannot completely undertake our work because it does not have the human qualities that we must have as journalists. (int17, day-to-day news editor)
Discussion
The findings of this study shed light on the evolving attitudes of journalists towards automation, revealing a nuanced interplay between optimism about the potentials of artificial intelligence and concerns about its impact on journalistic work and the profession at large (see summary in Table 2). The findings suggest that recent advancements in generative artificial intelligence have brought about a shift in the attitudes of journalists towards automation. Basic automated-writing systems seem to have been increasingly normalised into practice, and journalists have strategically positioned the tasks that these systems undertake as outside their jurisdiction, thus mitigating the perceived professional threat. But this new wave of artificial intelligence technologies is viewed differently by the journalists, striking closer to the core of their jurisdiction. In the light of these advancements, journalists recognise artificial intelligence as a threat to their profession, but they are more immediately concerned with how it impacts the meaningfulness of their work and their professional identity on an individual level. This study thus contributes to understanding the multifaceted impact of artificial intelligence on both individual practices and the broader profession, the implications of which we unfold here.
Table 2. How journalists view artificial intelligence (AI) in journalism.
Regarding the professional threat of artificial intelligence, our findings suggest that it is mainly news journalists who perceive the technology as a danger to their job security. Predominantly, artificial intelligence is viewed as an abstract challenge to journalism’s jurisdiction, threatening to reduce its workforce and weaken its position in the system of professions. Considering the comparably routinised work practices of news journalists, this observation underscores the notion that the closer artificial intelligence gets to encroaching on the core task areas of journalists, the more pronounced the immediate threat becomes. The differential impact of artificial intelligence on various professional roles highlights the importance of considering the diversity of perspectives within the profession to fully grasp how technology impacts journalism. In this regard, the concepts of abstract knowledge and routinised inference, borrowed from Abbott’s system of professions framework, provide a lens for understanding the larger threat of artificial intelligence on the level of the profession at large. However, these theoretical concepts are less effective in capturing the impact of artificial intelligence on the individual level, where meaning and identity are more immediate concerns.
As artificial intelligence increasingly encroaches upon traditional journalistic tasks, journalists are worried about its impact on the meaningfulness of their daily work and the preservation of their professional craft. Specifically, artificial intelligence is perceived to decrease task variation and take time away from aspects of their craft such as writing that are integral sources of meaning and identity for regular journalists working on the ground. This aligns with the conceptualisation of journalism as a craft profession where practitioners derive their professional identity from the mastery of their craftsmanship rather than abstract knowledge alone (Örnebring Citation2009). By focusing on the perspectives of regular journalists rather than those of higher-level editorial staff, this study then contributes to the academic literature on news automation by offering a deeper understanding of how artificial intelligence impacts not only the boundaries of journalism’s jurisdiction but also the professional identities of individual journalists. Unlike managers who may experience immediate benefits from cost-cutting measures facilitated by artificial intelligence, it seems that regular journalists identifying primarily with their craft skills rather than their larger function will struggle to adjust to a future in which these skills are likely to become less important for the profession.
However, journalists repeatedly defend other skills that allow them to add value compared to automated processes. These findings highlight the boundary struggles of journalists as they reexamine their expertise in the face of artificial intelligence. Amidst this reexamination of journalism, a human-centric view of the journalistic craft emerges, extending beyond quality newswriting to encompass nuanced judgment, empathy, and creativity believed to be beyond the replication capabilities of artificial intelligence. While discussions of the boundaries of journalism often overlook the significance of these largely human aspects of journalistic practice, they consistently emerge in journalistic discourse on news automation (Carlson Citation2015; Lindén Citation2017; van Dalen Citation2012). The contribution of this study lies in its focus on regular journalists, which enriches the existing literature by offering firsthand perspectives from those whose work is directly impacted by artificial intelligence. The technology appears to compel regular journalists to work harder at defining the ways in which they add value through their uniquely human qualities, thereby reshaping the traditional boundaries of journalism to more explicitly acknowledge the significance of the human element for journalism’s jurisdiction.
This study was conducted at a time when journalists and newsrooms were only starting to experiment with generative artificial intelligence. Given the relative lack of experience with the technology among the informants and the speed of technological development, journalists may find it hard to envision artificial intelligence’s concrete impact on journalism, and more research is needed as it becomes increasingly integrated. Still, the findings prompt fundamental questions about the evolving nature of journalism and its continued relevance in the face of technological disruption. One answer to these questions is to leverage artificial intelligence to automate routine tasks, allowing journalists to focus instead on tasks where they add the most value through their exclusively human expertise. Amidst the encroachment of artificial intelligence, an opportunity then emerges for journalism to reaffirm its human element and reclaim core values of empathy and creativity that distinguish human journalism from automated processes. Rather than succumbing to the homogenising forces of digitisation, artificial intelligence can prompt a renaissance of human-centric journalism, emphasising the importance of nuanced storytelling and emotional depth in an era of technological acceleration. Unlike traditional disturbances in the system of professions, artificial intelligence represents a distinct professional threat akin to competition from other occupational groups, compelling the profession to change and adapt. Ultimately, this can strengthen journalism’s jurisdictional claim and enhance news organisations’ competitive edge in the fiercely competitive new media landscape.
But these aspirations rely on how artificial intelligence is implemented in the profession. While the technology holds the potential to enhance efficiency and improve news production processes, its responsible implementation requires careful consideration of the ethics and values inherent in journalistic practice. Regular journalists seem increasingly concerned with the ethical and operational implications of artificial intelligence and how news organisations should leverage these technologies, opening a new area of expertise for professional journalists in addressing the specific challenges posed by artificial intelligence. However, even in a Danish context characterised by high levels of journalistic professionalism, they perceive limited agency in shaping the role of artificial intelligence in journalism. This suggests that its implementation will be largely driven by organisational and managerial decisions rather than bottom-up initiatives. Therefore, it seems crucial for journalists and their organisations to engage in ongoing dialogue about the ethical, operational, and professional implications of artificial intelligence. This study contributes insights into the differentiated perspectives on the profession-wide and individual impacts of artificial intelligence on journalism. Only by understanding these diverse perspectives on artificial intelligence within the profession can its potential risks be mitigated while harnessing its potential to reaffirm the human touch in journalism.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
Data Availability Statement
The interview guide used in the interviews is available on OSF at the following link: https://osf.io/3hdaw/?view_only=ae678d138b3e47f1a808f26eadb3f609.
References
- Abbott, A. 1988. The System of Professions: An Essay on the Division of Expert Labour. Chicago, IL: University of Chicago Press.
- Anderson, W., and W. Lowrey. 2007. “What Factors Influence Control Over Work in the Journalism/Public Relations Dynamic? An Application of Theory From the Sociology of Occupations.” Mass Communication and Society 10 (4): 385–402. https://doi.org/10.1080/15205430701577819.
- Anderson, C. W., and M. Schudson. 2019. “Objectivity, Professionalism, and Truth Seeking.” In The Handbook of Journalism Studies, edited by K. Wahl-Jorgensen and T. Hanitzsch, 136–150. New York, NY: Routledge.
- Boczkowski, P. J. 2004. “The Processes of Adopting Multimedia and Interactivity in Three Online Newsrooms.” Journal of Communication 54 (2): 197–213. https://doi.org/10.1111/j.1460-2466.2004.tb02624.x.
- Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. https://doi.org/10.1191/1478088706qp063oa.
- Broussard, M., N. Diakopoulos, A. L. Guzman, R. Abebe, M. Dupagne, and C.-H. Chuan. 2019. “Artificial Intelligence and Journalism.” Journalism & Mass Communication Quarterly 96 (3): 673–695. https://doi.org/10.1177/1077699019859901.
- Brüggemann, M., S. Engesser, F. Büchel, E. Humprecht, and L. Castro. 2014. “Hallin and Mancini Revisited: Four Empirical Types of Western Media Systems: Hallin and Mancini Revisited.” Journal of Communication 64 (6): 1037–1065. https://doi.org/10.1111/jcom.12127.
- Bucher, T. 2017. “‘Machines Don’t Have Instincts’: Articulating the Computational in Journalism.” New Media & Society 19 (6): 918–933. https://doi.org/10.1177/1461444815624182.
- Carlson, M. 2015. “The Robotic Reporter.” Digital Journalism 3 (3): 416–431. https://doi.org/10.1080/21670811.2014.976412.
- De Maeyer, J. 2020. “A Nose for News: From (News) Values to Valuation.” Sociologica 14:109–132. https://doi.org/10.6092/ISSN.1971-8853/11176.
- Deterding, N. M., and M. C. Waters. 2021. “Flexible Coding of In-depth Interviews: A Twenty-First-Century Approach.” Sociological Methods & Research 50 (2): 708–739. https://doi.org/10.1177/0049124118799377.
- Deuze, M. 2005. “What is Journalism? Professional Identity and Ideology of Journalists Reconsidered.” Journalism 6 (4): 442–464. https://doi.org/10.1177/1464884905056815.
- Deuze, M., and C. Beckett. 2022. “Imagination, Algorithms and News: Developing AI Literacy for Journalism.” Digital Journalism 10 (10): 1913–1918. https://doi.org/10.1080/21670811.2022.2119152.
- Dörr, K. N. 2016. “Mapping the Field of Algorithmic Journalism.” Digital Journalism 4 (6): 700–722. https://doi.org/10.1080/21670811.2015.1096748.
- Hackman, J. R., and G. R. Oldham. 1975. “Development of the Job Diagnostic Survey.” Journal of Applied Psychology 60 (2): 159–170. https://doi.org/10.1037/h0076546.
- Jones, B., R. Jones, and E. Luger. 2022. “AI ‘Everywhere and Nowhere’: Addressing the AI Intelligibility Problem in Public Service Journalism.” Digital Journalism 10 (10): 1731–1755. https://doi.org/10.1080/21670811.2022.2145328.
- Kim, D., and S. Kim. 2018. “Newspaper Journalists’ Attitudes Towards Robot Journalism.” Telematics and Informatics 35 (2): 340–357. https://doi.org/10.1016/j.tele.2017.12.009.
- Kvale, S., and S. Brinkmann. 2008. InterViews: Learning the Craft of Qualitative Research Interviewing. Thousand Oaks, CA: SAGE Publications.
- Latar, N. L. 2018. Robot Journalism: Can Human Journalism Survive? Singapore, SG: World Scientific.
- Lewis, S. C. 2012. “The Tension between Professional Control and Open Participation: Journalism and its Boundaries.” Information, Communication & Society 15 (6): 836–866. https://doi.org/10.1080/1369118X.2012.674150.
- Lindén, C.-G. 2017. “Algorithms for Journalism: The Future of News Work.” The Journal of Media Innovations 4 (1): 60–76. https://doi.org/10.5617/jmi.v4i1.2420.
- Lindén, C.-G. 2018. “Algorithms are a Reporter’s New Best Friend: News Automation and the Case for Augmented Journalism.” In The Routledge Handbook of Developments in Digital Journalism, edited by S. A. Eldridge and B. Franklin, 237–250. London: Routledge.
- Lowrey, W. 2006. “Mapping the Journalism–Blogging Relationship.” Journalism 7 (4): 477–500. https://doi.org/10.1177/1464884906068363.
- Lowrey, W., and J. B. Mackay. 2008. “Journalism and Blogging: A Test of a Model of Occupational Competition.” Journalism Practice 2 (1): 64–81. https://doi.org/10.1080/17512780701768527.
- Milosavljević, M., and I. Vobič. 2019. “Human Still in the Loop.” Digital Journalism 7 (8): 1098–1116. https://doi.org/10.1080/21670811.2019.1601576.
- Moran, R. E., and S. J. Shaikh. 2022. “Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism.” Digital Journalism 10 (10): 1756–1774. https://doi.org/10.1080/21670811.2022.2085129.
- Olsen, G. R. 2023. “Enthusiasm and Alienation: How Implementing Automated Journalism Affects the Work Meaningfulness of Three Newsroom Groups.” Journalism Practice, Advance online publication. https://doi.org/10.1080/17512786.2023.2190149.
- Örnebring, H. 2009. “Reassessing Journalism as a Profession.” In The Routledge Companion to News and Journalism, edited by S. Allan, 568–577. London: Routledge.
- Örnebring, H. 2010. “Technology and Journalism-as-Labour: Historical Perspectives.” Journalism 11 (1): 57–74. https://doi.org/10.1177/1464884909350644.
- Rydenfelt, H. 2022. “Transforming Media Agency? Approaches to Automation in Finnish Legacy Media.” New Media & Society 24 (12): 2598–2613. https://doi.org/10.1177/1461444821998705.
- Schapals, A. K., and C. Porlezza. 2020. “Assistance or Resistance? Evaluating the Intersection of Automated Journalism and Journalistic Role Conceptions.” Media and Communication 8 (3): 16–26. https://doi.org/10.17645/mac.v8i3.3054.
- Schrøder, K. C., M. Blach-Ørsten, and M. K. Eberholst. 2024. “Denmark.” In Digital News Report 2024, edited by N. Newman, R. Fletcher, C. T. Robertson, A. R. Arguedas, and R. K. Nielsen, 76–77. Oxford: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2024-06/RISJ_DNR_2024_Digital_v10%20lr.pdf.
- Schudson, M. 2003. The Sociology of News. New York, NY: W. W. Norton.
- Schultz, I. 2007. “The Journalistic Gut Feeling: Journalistic Doxa, News Habitus and Orthodox News Values.” Journalism Practice 1 (2): 190–207. https://doi.org/10.1080/17512780701275507.
- Syvertsen, T., G. Enli, O. J. Mjøs, and H. Moe. 2014. The Media Welfare State: Nordic Media in the Digital Era. Ann Arbor, MI: University of Michigan Press. https://doi.org/10.2307/j.ctv65swsg.
- Thäsler-Kordonouri, S., and K. Barling. 2023. “Automated Journalism in UK Local Newsrooms: Attitudes, Integration, Impact.” Journalism Practice, Advance online publication. https://doi.org/10.1080/17512786.2023.2184413.
- Thurman, N., K. Dörr, and J. Kunert. 2017. “When Reporters Get Hands-on with Robo-Writing: Professionals Consider Automated Journalism’s Capabilities and Consequences.” Digital Journalism 5 (10): 1240–1259. https://doi.org/10.1080/21670811.2017.1289819.
- van Dalen, A. 2012. “The Algorithms Behind the Headlines: How Machine-Written News Redefines the Core Skills of Human Journalists.” Journalism Practice 6 (5–6): 648–658. https://doi.org/10.1080/17512786.2012.667268.
- Wu, S., E. C. Tandoc, and C. T. Salmon. 2019. “Journalism Reconfigured: Assessing Human–Machine Relations and the Autonomous Power of Automation in News Production.” Journalism Studies 20 (10): 1440–1457. https://doi.org/10.1080/1461670X.2018.1521299.
Cite this article: https://doi.org/10.1080/1461670X.2024.2412212