The Associated Press has laid out its approach to using generative AI, joining a handful of major news organisations in doing so.
A major international news agency has set out its guidance on using artificial intelligence (AI) to produce journalism. The Associated Press said it will continue to experiment with AI, but won’t use it to create publishable content and images.
While the technology is being adopted by more industries as generative tools become widely available and capable, the news industry is asking itself tough questions about the topic.
Can the audience really trust news services that are using AI to generate content?
While some organisations are laying down restrictive rules for incorporating AI into their workflows, others are more openly embracing the technology.
A recent job advert from Newsquest Media Group is asking for an “AI-assisted reporter”, who “will be at the forefront of a new era in journalism, utilising AI technology to create national, local, and hyper-local content for our news brands, while also applying their traditional journalism skills”. The reporter will be working with AI to “help write news articles”, and will “integrate AI-generated content into newsrooms of different sizes”.
Job adverts like this showcase how divided the industry is on the topic of using AI to create content for news. There are now specific courses available to learn how to implement AI into newsrooms. Earlier this summer, Euronews Next spoke with Charlie Beckett, who leads the LSE’s JournalismAI project.
He described a “new world” for journalism where anything a journalist reports on is “now going to be influenced by AI”.
However newsrooms end up using AI, Beckett insisted that it is a “language machine…not a truth machine”, so the human factor is still a vital element in producing journalism.
Here’s a look at how different news organisations are handling the AI revolution.
AP has issued guidelines on using AI, coupling these with a chapter in its influential stylebook.
“Our goal is to give people a good way to understand how we can do a little experimentation but also be safe,” said Amanda Barrett, vice president of news standards and inclusion at AP.
The company said any material produced by AI should be carefully vetted – just like material from any other source – and that a photo, video, or audio segment generated by AI should not be used unless the segment is itself the subject of a story.
The AP said AI could, however, be used for more menial tasks, such as putting together digests of stories sent out in newsletters.
The agency has experimented with simpler forms of AI for a decade, using it to create short news stories from sports scores or corporate earnings reports. That's important experience, Barrett said, but "we still want to enter this new phase cautiously, making sure we protect our journalism and protect our credibility."
The news organisation wants its journalists to become familiar with the technology, as they will need to report stories about it for years to come, Barrett added.
For its part, AP’s rival news agency Reuters has said it is taking a “responsible approach” to AI that “safeguards accuracy and fosters trust”.
The Guardian, meanwhile, was one of the first major news organisations to lay out its approach to generative AI, with a joint statement from its chief executive and editor-in-chief.
Writing in June, they set out how the paper will and won't use AI tools. The Guardian says AI will only be used editorially when it "contributes to the creation and distribution of original journalism", with human oversight and a senior editor's permission.
The paper will also focus on using the technology to help journalists to “interrogate large data sets” or assist with corrections, suggestions, and reducing the workload from “time-consuming business processes”.
They add that another guiding principle will be choosing AI tools that have considered issues such as “permissioning, transparency and fair reward” regarding the material they were trained on.
This is a major point of controversy around popular chatbots such as ChatGPT, with its creator OpenAI accused of training its language models on copyrighted content.
While major news organisations might be treading carefully into the future with AI, the technology could provide an opportunity for smaller newsrooms constrained by resources and budgets.
News Corp Australia is reportedly using generative AI to produce some 3,000 articles a week, with small teams publishing local stories on topics such as the weather, fuel prices, and traffic conditions.
Meanwhile, a local newspaper in Nottinghamshire in the UK this month announced it was trialling the use of AI.
The paper’s senior editor Natalie Fahy wrote in a letter to readers that the Reach-owned regional daily will use AI to generate bullet-point summaries at the top of some of its longer articles.
These will be checked by an editor before being added to the article, she said, while there will also be a line at the bottom explaining that AI has been used.