ChatGPT as seen by Patrick Bourbeau



A text by media lawyer Patrick Bourbeau

With automated language models such as the now-famous ChatGPT becoming household names in only a matter of months, text-based media organizations and society in general have been forced to ask some difficult questions about those models’ use. Legal and ethical issues have come to the forefront, especially in terms of civil liability, intellectual property (IP) and transparency.

 

Media in crisis

It’s no secret that the dominant positions of Google and Facebook have had negative financial impacts on many traditional media organizations in the past decade.

Those media outlets may therefore find it even more tempting to turn to an automated language tool to produce some of their lower-value content, such as stock market updates or minor-league sports results. The value of that type of content is likely to decline.

Others may instead use those new tools simply to generate news summaries, highlights or automatic notifications.

 

Liability issue

Regardless of the approach they choose, text-based media organizations will have to be aware of their civil liability, which will continue to apply to content produced in these new ways. They’ll need to make sure that content produced this way, like all published content, isn’t defamatory and meets the applicable journalism standards.

The challenge is daunting because no consensus has yet been reached on the journalism standards that should apply when using automated language models; those standards are only beginning to be debated.

 

Human-assisted design

In the short or medium term, it will be essential for media operators to come up with clear policies on the use of tools such as ChatGPT. In particular, the policies should define the types of content suitable for automated language models as well as cases where human involvement is needed to ensure the quality and accuracy of the content produced.

It’s very likely that media content producers will decide human involvement is always necessary, even if it is minimal.

 

Variable value

At most, we can expect the use of these new tools to be limited to writing content with little added value. That will enable media organizations and journalists to devote more time and resources to on-the-ground newsgathering work.

ChatGPT is incapable of doing certain tasks such as conducting in-depth interviews, building relationships with trusted sources, witnessing events on-site as they unfold and, of course, always questioning the official statements and actions of the people running our institutions.

Those tasks inevitably require sustained human involvement, making the content produced incredibly valuable for both the media and our democratic institutions.

 

IP at risk

Content generated by automated language tools also raises a number of IP issues. Organizations publishing text-based content produced that way have to be aware of the risk of infringing third-party IP rights.

There’s currently no way to determine which sources were used to “feed” the artificial intelligence (AI) tool or to generate an answer to a given question. Content produced by ChatGPT appears to be “original” but may in fact reproduce, word for word and without permission, content produced by a third party. Clearly, publishing that content would violate the Copyright Act.

It’s important to note that, in their terms of use, the major automated language tools offer no guarantee whatsoever that their output respects third-party IP rights, and they assume no liability in that regard. Media organizations are therefore in the line of fire if a third party takes legal action against them for infringing that party’s IP rights.

 

Empty shell

ChatGPT and others like it raise a thorny ethical question: Can journalism exist without journalists and reporters? That’s not a question we’ll attempt to answer today because it would involve a far-reaching discussion of how to define journalism.

What we can agree on here, however, is that until very recently, journalism content could only be produced by a human. In Western media, apart from a few rare exceptions, that human journalist also took credit and responsibility for the content they wrote.

 

Transparency at the core

At the industry level, there already seems to be a consensus forming that media organizations should be transparent with readers when content has not been written entirely by a human.

Readers would therefore expect a clear indication when all or part of a story or piece of coverage has been generated by an automated language model. Some media organizations have started to disclose their terms of use for those tools, explaining how and when they use them to generate content.

This article has been only a brief overview of the major ethical and legal issues that media organizations will have to grapple with as they take advantage of AI in the coming years. The discussion has only just begun!

 

__

Patrick Bourbeau is Vice-President, Legal Affairs, for La Presse and President of the Canadian Media Lawyers Association. He focuses mainly on media law and defending freedom of expression.
