Ina Fried shared her expertise during the AI, AI, Ooohhhhh … panel discussion at the 2023 NLGJA: The Association of LGBTQ+ Journalists Convention in Philadelphia, Pennsylvania, on September 9, 2023. Watch the IG Live replay and access the transcript below.
By Daniela “Dani” Capistrano for TransLash Media, with transcript support by Brennen Beckwith
Eric Hegedus: ….in a related area of the potential perils of deepfake videos.
However, the fact of the matter is it’s here, and it’s not going to slow down anytime soon. Which of course brings us to the key question today: should we as journalists be fascinated or frightened by AI, or perhaps a little bit of both?
For today’s discussion I brought together three panelists who are highly informed about the logistical, practical, and ethical concerns regarding generative AI and beyond. Starting on my far right: we have Ina Fried who is the chief technology correspondent at Axios, and we’re proud to say 2016 Inductee into the Hall of Fame of the Association of LGBTQ+ Journalists [watch our post-panel interview with Ina Fried].
In the middle we have Dr. Erin K. Coyle, who I just met. I'm so happy that she's here today. She is an associate professor of journalism at Temple University, which is just right up the road on Broad Street in my former hometown of Philadelphia. Last but certainly not least, we have Arlyn Gajilan, who is the Digital News Director at Reuters. And speaking of awards, in 2019 she received the Leadership Award from The Association of LGBTQ+ Journalists. Very well deserved.
And I kind of want to kick it off with you… So, using ChatGPT to create a story from scratch: quite frankly, what would your primary concerns be for journalists regarding whatever jump-started the story?
Ina Fried: I mean, I think, um, it's great to ask the question, but in many ways it's the wrong question to ask. Sorry, Eric…
I think with ChatGPT and generative AI, there's this natural inclination that the first, best use must be to have the AI do something entire for us, like writing a whole article.
I'd be more… you know, I'm the parent of a 10-year-old. I'm much more concerned with him using it to write his homework; it's actually pretty good at that, which is causing a lot of pain for teachers. It's not good at doing that part of our job right now, and here's why:
Even if you feed in all the information, the one thing these AIs have almost none of today is judgment, an ability to assess credibility. So they can take a bunch of notes and spit things out, but they can't do that piece of our jobs.
Which doesn't mean we shouldn't test AI, see what it's capable of, embrace it, try it out. So right now I'm recording our session using Otter.ai, and even right now I could ask it to summarize what Eric has said so far, and it would do that. I could ask it to write a news article, and it wouldn't be the worst news article you've ever read. It would be not as good as you could do, but better than the worst colleague you've ever worked with.
But there are a lot of tools out there that I think we as journalists can use, and we'll get into some of those later. But it's not ready to write articles; it's not ready to do that, which I think for most of us is good news. It's certainly something that we're going to have to be aware of as a threat, and I think newsrooms are looking for what they can use.
We’ve seen a variety of news outlets try to use it to write stories with poor results. And I think the reason is it’s not ready to do that.
Eric: Sure, okay. Um, yeah, it's interesting to bring that up, because one of the more high-profile ones, which we were just discussing recently, not to single out a certain company, was when Gannett was using AI for sports coverage in the Columbus Dispatch, thank you, um, and it was spitting out the most ridiculous headlines you can possibly imagine, and they wound up shutting it down. They said, no, we're not going to do this any longer.
Um, I mean, what are the training wheels we need at this point? Because there's the point that, you know, if you use AI to do things, you're going to need copy editors, you're going to need even more copy editors, even though copy editor jobs are going down. What are we supposed to do at this point?
Ina: Yeah, I mean, I think that's the point: the AI isn't ready to be left on its own to do much of anything. It can help organize notes; it can do a bunch of things. But it's not very good at critical thinking. It's very good at generating impressive-sounding words, but that's not our job as journalists. Our job is to distill information, to explain what happened.
There are some limited uses, which I think is why some people went into this, and I brought up this example to Eric: the LA Times has actually been using, not this generation of AI, but, you know, a very primitive bot for about 10 years to post the first draft of earthquake stories. Because basically the first things you want to know about an earthquake are: when did it happen, what was the magnitude, and where was it centered?

Well, those are three very specific bits of information that can be presented, and you get what you want.
In a sports story, unless it's just giving you the score, which doesn't require AI, you want to know something about the game, and it's not good at that. And we've seen other outlets try to condense information, and, as you say, I think today if you use it for that purpose you're creating an hour and a half of copy-editing work for every hour of work. Because it will get something wrong and you won't know, and it'll do a good job of making it sound convincing.
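The earthquake-bot pattern Ina describes works because the story is pure slot-filling: three verified facts drop into a fixed template, and no editorial judgment is needed. A minimal sketch of that idea (illustrative only, not the LA Times' actual code; the function name and wording are invented):

```python
# Hypothetical sketch of a template-driven first draft: the three facts Ina
# mentions (time, magnitude, location) drop into fixed slots. No generative
# AI is involved, which is exactly why this narrow use case works.
def quake_first_draft(magnitude: float, place: str, time_utc: str) -> str:
    return (
        f"A magnitude {magnitude} earthquake struck near {place} "
        f"at {time_utc} UTC, according to preliminary data. "
        "This post was generated automatically and will be updated by editors."
    )

print(quake_first_draft(4.2, "Ridgecrest, California", "2023-09-09 14:03"))
```

The contrast with a sports recap is the point: here every slot is a verifiable number or name, so there is nothing for the system to get convincingly wrong.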
Eric: And I want to go to Arlyn next, because you're, I want to make sure I get this correctly, you oversee and edit for Reuters' website, social media platforms, and many AI efforts, and digital partnerships with Google, Facebook, and so forth. How is Reuters using AI? Because you had mentioned that it's been more than a decade that Reuters has been using some versions of AI. What has worked for you?
Arlyn: We have. I mean, we've been using AI, as you said, for the better part of a decade, if not longer, to do some of the things that you just talked about, which is the basics. Um, so for example, we use it to do market reports, stock market analyses, you know, the kind of basic things that don't require the kind of color that, let's say, a sports story might have in it. So we've been doing that for quite some time. Um, and it's been critical for us, not only for our clients, many of whom might be in this room, but also for our website.
Because we publish three to five thousand stories, just text stories, per day, not including images, not including the video clips or package video. So at that volume and scale, you couldn't do that with a purely human staff; you do need technology to do that.
We have been looking, we actually have been testing AI to curate our home page. Not necessarily the front page, but section fronts. So we've been using technology with an algorithm that looks at what's popular in Google Trends versus what people are reading on our site, and then surfaces the top stories on those landing pages. And by and large, I hate to say it, it actually gets better recirculation rates than some of our human editors.
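The curation approach Arlyn describes, blending external search interest with on-site readership, can be pictured as a simple weighted ranking. Everything below is hypothetical: the field names, the 0.4/0.6 weights, and the scores are invented for illustration and are not Reuters' actual algorithm:

```python
# Hypothetical sketch of trend-blended curation: rank stories by a weighted
# mix of external search interest (e.g., a Google Trends signal) and
# normalized on-site engagement. Weights and scales are invented.
def rank_stories(stories, trend_weight=0.4, onsite_weight=0.6):
    def score(s):
        return trend_weight * s["trend_score"] + onsite_weight * s["onsite_clicks_norm"]
    return sorted(stories, key=score, reverse=True)

stories = [
    {"slug": "markets-close-higher", "trend_score": 0.2, "onsite_clicks_norm": 0.9},
    {"slug": "ai-panel-recap",       "trend_score": 0.8, "onsite_clicks_norm": 0.3},
]
for s in rank_stories(stories):
    print(s["slug"])
```

A real system would also add recency decay and editorial overrides; the point is only that "what's trending" and "what our readers click" are separate signals that get blended into one ordering.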
So that technology is available; it's being used not just by us, but is actually a product that's being licensed by other news outlets out there. We are also experimenting with AI to help generate social media posts, and, before you freak out, it is still in the very early stages, where we're looking at a very defined set of content, video and text, namely our own, and only things that are produced by us. You know, it's not searching the internet for content; it's using our content sets to generate a quick video, a package video, that you can publish on social media. It's still in test mode, but it's proving a bit promising. So we're looking at all these different use cases for it, and there are, I think, probably use cases that will enable us to free up the time of writers and editors and copy editors in the future.
To do basic things like look through our archives and produce a fact box or historical timeline, or some of the basic things that a wire service like ours produces and provides all of you, that it could do much more efficiently than hiring a human to basically run the same search on their own. So in broad strokes, that's kind of where we're currently at. But we have been using it to good effect.
Eric: I mean, one of the kind of knee-jerk reactions I think some people have had, including in journalism, or maybe in law or a bunch of other fields, is that generative AI can somehow eliminate jobs. And reporters are possibly concerned that they could wind up losing their jobs because, whatever, ChatGPT writes a story and then the editor takes however much time to edit it; you know, who needs a writer at that point? However, you had mentioned something to me earlier that it might just mean a change in jobs for some people. What sort of direction do you see for that?
Arlyn: I mean, uh, yeah, we talked about this a while back. I'm old enough to have started my career on Atex systems, right? For those of you who've been around long enough, they were mainframe computers. You know, there were actual typesetting people whose job it was to paste things up and hang them on the wall for us to look at. There was a revolution in the newsroom; those jobs did disappear. And it was unfortunate, and there was a lot of turmoil in the industry.
But there were also things that I think technology freed us all up to do. I mean, you talk about this; again, I'm old enough to have started where I went to a clip morgue to do research, and I went to the library and pulled the folder from a wall to look at the clips from around the country. And I don't want to go back to that. It is so much more efficient to be able to search for news on the internet, and not to have to limit my use the way we used to with, what was it, ProQuest or whatever that platform was, where everyone said you can only use it for five minutes because it costs thousands of dollars.
Like, no one wants to be in that, and I think technology, the positive side of it, has the ability to free us up to make those phone calls, to travel to a source, to meet with someone and do the actual reporting work that we all got into the business to do. And that's my hope for it. But is it fair to say that there will be tumult and hand-wringing and that jobs will be lost? I don't think we can say that won't happen.
Ina: I'd even take it one step further: I do think it will disrupt the business of journalism, which is already hurting. Because one thing AI today is not very good at is writing stories, but it's very good at aggregating. It is very good at summarizing, and the people doing that are going to be in a very good position, at least until some of the legal issues are settled.

It's not what Arlyn does with Reuters copy; it's what someone who's not in the news business does with Reuters copy. And it's going to be very hard to prove that they summarized it. You know, today there are publishers suing the AI engines, saying you've consumed our content without rights, and they may ultimately prevail. But I do think AI is going to add further pressure on an already very pressured business model.
Arlyn: I was just gonna say, uh, yeah, the only people whose jobs will probably be safe in all of this are the intellectual property lawyers.
There is a lot, I mean, it is no secret that OpenAI has scanned all of our websites and trained ChatGPT on our intellectual property, which, I get it, is not entirely cool. Um, and you're right that the business of journalism is strained. But the flip side of this, too, is that there are new models now where you have to negotiate with companies that are reliant on massive amounts of data, like our archives have. And so we as news organizations have to recognize that the intellectual property we sit on is very valuable, and that's a conversation that needs to happen; not a conversation, it's a negotiation that needs to happen going forward, to make sure that we're properly compensated, that writers are not being, you know, plagiarized, and that that intellectual property is valued.
Eric: I'm gonna go to Erin next. As a matter of fact, um, Erin, you're a journalism ethics expert. In a general sense, what do you think are the key issues, you know, that we're facing, that we have to face, as far as generative AI and its use?
Erin: There are a lot of them, and a lot of layers. Number one, we can't ignore it. We need to start thinking about what this means. Preparing to teach this semester, I was asking: what do we do? Students have access to this software. Do we just use this new boilerplate university policy, put in place to be protective, that says you're not allowed to use this, period? Or do we say everyone's back to only doing things by hand, with pen and paper, in the classroom? Or do we really start talking about what it means to use this technology?
What are the potential risks? What are the things that we need to be learning so we can do business ethically? And I don't know that I'll get it 100% right, but I'm sure trying, and being transparent with students about how this is something we're all learning about. We need to think fundamentally about what it means to be ethical journalists.
You know, one of my mentors, who wrote my favorite journalism ethics textbook, says journalists really have to think about three fundamental virtues. We have to think about having integrity. We have to think about civility, and we have to think about credibility. And we as ethical journalists have to show we know what's right, what's wrong, and we know how to make the right decision.
We're all human beings, so there are times that we make mistakes. But if we are going to show we know what's right, we look very consciously to see whether things are right or not, and we correct our mistakes. When it comes to civility, we work on minimizing harm; we work on having a society where we can all coexist. When it comes to credibility, we have to work on being trustworthy.

For us to be trustworthy as journalists, we need to be transparent about what we're doing and how we're using AI. Some corporations have been transparent about their evolving AI policies, for what they see as potential strengths or potential weaknesses.
One of the greatest hopes for AI is that it will allow journalists to have more time to serve broader communities especially traditionally underserved communities that are harder to reach because they have less trust in journalism. So that could be something positive that comes out of this.
But we do know that human beings have bias. In some semesters, when there's no one in the room with a food allergy that would be a problem, I hand some M&Ms to my students. They don't eat them; they open them up, and I let them just start sorting, and I ask them questions where they're just going with their guts and their personal biases in how they're answering. Now, what do you see? What do you observe? And they often talk about colors. I say, well, okay, in that bag, what do you see the most of? They say, well, there's the most blue. I say, oh, does that mean that the makers of M&Ms are leftists? And they look at me like, what? And that's exactly how they're supposed to look at me.
Because I'm asking: what are the biases that you're naturally bringing to this? If there are more blue and orange M&Ms in that bag, does that mean the makers are University of Florida fans? It actually means that they've been packaged in one of two centers in the United States.
But these questions bring us to this idea that all of us have biases that we might not recognize, and AI has learned some of our biases from the information that has been fed into it. There are some outrageous examples online; I'll just say biased language has been used. If you're playing with DALL-E to see what the picture of a good journalist looks like, what potential biases are shown by what is produced by this technology?
Where we're at right now, we very much need humans to see what the technology is spitting out, to come in and ask: is this accurate at all? Is this biased? We as journalists have to recognize we're not the only ones who could be using this technology. There are press releases being written in this city using AI.
And we have other sources who will be using AI to generate information for us. So we need to think very carefully about how we check the accuracy of the information that we’re getting when we know machines and humans may have been working together to create this information.
Eric: And actually, I'm hoping you can share, you had told me a story; it doesn't necessarily have a great ending, but it also shows our human limitations when we're dealing with this kind of problem, and it involved a press release you had gotten. If you could just briefly talk about that.
Erin: When I worked as a journalist, one of my nightmare stories, one that never quite went away, came from writing a story that was inspired by a press release, with a corporation revealing that they were going to be raising rates soon for customers. And my first thought, seeing the numbers, was: really, can that be right? And when I talked to my editor, I was like, does this look right to you?
And the answer was, well, you know, I kind of remember getting something in the mail from that corporation that I hadn't gotten around to reading, so maybe so. So I went and called all the sources I could think of who were knowledgeable on this topic, including the person who I thought wrote the press release.
And a source confirmed this could be accurate, and I wrote the story, and was very embarrassed to receive a call saying, sorry, the information in that press release was inaccurate. So some of these are not new problems, and they're horrifying whenever they happen.
We just need to anticipate them and be prepared, and think about how we are going to be engaging in fact-checking, knowing that there are multiple sources out there using AI that might be producing these same errors. So if our fact-checking process is, I'm going to go online and search and see if I can find a few other mentions of this same information, and I know these sources, they look like trustworthy sources, we probably shouldn't stop there.
Ina: Not to make it worse, but I think this will get more challenging before it gets better. Right now the errors are more obvious and there are fewer of them. There are a bunch of challenges: job disruption, you know, the fear of the computer taking over the planet. Bias is a huge problem, and I don't want to minimize it. But AI-generated misinformation, my prediction, and it's not just a prediction, when I talk to the scientists, they're like, yeah, in terms of the challenge that's largest and closest, most proximate to us, AI-generated misinformation is going to be a huge one.
And the reason it's going to be such a challenge is that AI is really good at making things sound convincing, whether they're true or not. It's getting a little bit better; for the moment, some of the engines are starting to not just make up an answer when they don't know. It used to be, if you asked, you know, who won the Battle of Thermadelphia, it would give you an answer. I just made up the word Thermadelphia, but it would just generate text, and part of that is that it's really good at generating fiction. Um, and it is really good at that, and I think we're going to have to be very vigilant as reporters.
I mean, the good news is, I think the need for people to distill information, which is one of our main jobs as journalists, that need is going to go up. The challenge is keeping the business healthy enough that we're serving it. But I think the threat to society is very real, and I think what we're all getting at is that the implications of AI are much bigger than whether we use it to write our stories or not.
Arlyn: There's an example, actually, we were talking about this; it just happened this week. Don't search for it now, but there's a very convincing video that looks like a Reuters producer that's circulating on the socials. Um, and we have gone back to the social platforms and said, this is not a Reuters video, please take it down. And they think it's legit, and we are saying it is not legit. And we have been in, like, a three-to-five-day conversation saying, we are Reuters, and we know what we produce.
So, I mean, it is a very real kind of dilemma, complicated by the fact that you can be vigilant, but you are reliant on others to help you delete the misinformation, and they are not always playing ball. Partly because, in one platform's case, they don't have a lot of people left; in another platform's case, people were on vacation. Like, literally just human things that exist in the world have left this thing up for three to five days now.
It’s very frustrating and it’s going to get worse. I agree.
Eric: Actually, you know, one thing that you had mentioned to me, if you don't mind elaborating on it: you have a concern about the election side. Do you think that things will get worse as far as misinformation is concerned, with AI in place?
Ina: I think there are going to be a few things that we have to watch out for. Um, I certainly hope some of them don't come to pass, but I'm not very optimistic. One is, you mentioned at the beginning, deepfakes. We've talked about the idea of deepfake videos for a long time; the technology is here and real. The other is that this also creates plausible deniability. You know, we heard "fake news" as a phrase used as a weapon in the last election, but now anyone who doesn't like a video of themselves, who no longer likes what they said in a video, can possibly say, I didn't say that, it was fake. And it's very hard to tell. Still, we probably have the technological tools at this moment to tell, but not at scale. It takes experts to kind of point out, well, here we see this, here we see that.
It's going to get harder. There are some initiatives; Adobe is leading the Content Authenticity Initiative, which I think is a very important long-term thing, where news organizations and others that want to show that a video is legitimate will be able to verify the provenance of it, from the moment it's captured until you viewed it.
I think that's a super important initiative, but I do think both of those pieces of misinformation will be challenges. And I think we will also see just a lot of people, again, finding something on the internet that corroborates what they're trying to say. The internet is already filled with misinformation, but now these engines, they're trained on the internet, so not only do they have all our biases, they have tons of bad information. And now, I want to make it really scary for one more second, and then we can move back to a solution: there are estimates that within a few years' time, 90 to 99% of the content on the internet will have been generated by AI. And that means that if the AI systems don't license the content, they risk training themselves on their own generated misinformation.
So I do think it is going to test our human systems. One of the things we were talking about beforehand is that I really think this only accentuates what's been horribly, badly needed already, which is media literacy training. We should be teaching this in schools; the News Literacy Project is doing great work around this. But we're just gonna have to be better consumers.
Erin: It also emphasizes the need for us as journalists to be making our policies more clear, and to be promoting what we do, in a way that we're transparent about what our processes are, why we're trustworthy, and how we produce information. And it might feel like an uphill battle for a little while, but as we have more and more misinformation, we need to really work on gaining the public's trust: showing that we are devoted to providing accurate information, and showing them why we care about it and why we are trustworthy. So maybe I'm just hopeful that if we do a good enough job of convincing people we're devoted to this, and we're the ones you can trust, we may be able to retrain the public that we are the ones to turn to.
Eric: Speaking of trust, this might be a good time to show something. I don't know if any of you subscribe to the newsletter Ina does for Axios; it's absolutely fantastic, I highly recommend subscribing to it. Um, one of her recent videos, very timely, just came out.
Ina’s Video: I’m Ina Fried Chief Technology Correspondent at Axios. For my latest prompt review I’m taking a look at HeyGen a new video service that uses AI to help create automated videos.
Except I never said any of that; that was all AI at work. What I did was I sent HeyGen a two-minute video of me talking about anything; in my case I chose Lego. And they created an avatar that I can then use to say any words I type, like the intro I just did. In theory it could be used for corporate marketing, to offer very personalized messages, or for a CEO to talk to individual employees. Now, in HeyGen's case, they required consent: I had to send them a video of me saying, yes, I give you permission to use this video to create an avatar. However, I expect other companies won't be so ethical.
And really, the era of deepfakes is upon us. As you see, you can make anyone say anything with the power of AI.
Deepfake Ina: From Axios I’m Ina Fried, or am I?
Eric: That is the brilliance of Ina. Before I ask the next question, I wanted to mention something. I have a good friend who's here at this convention; I don't think he made it to the panel, though.
Neil Savage, he's a science writer, just recently, just a few days ago, wrote an article called "Exposing deepfake imagery." In it he wrote, quote: sometimes the manipulation can be easy to spot; the mouth movements don't quite sync up with the audio, or there's some weird stillness or color deviation in the face. But as the generation of such videos gets more sophisticated, it often takes a computer to find the flaws.
We ain't computers; we are human beings. I'm curious: most newsrooms, I imagine, you know, small newsrooms, or even independent researchers or journalists, potentially may not have the resources that they would like to have. Reuters, I assume, it's a global machine, really; I mean, they probably have more resources than most newsrooms. But, you know, what are journalists supposed to do? There are resources available, and I know that Ina knows of a few that are a little bit more accessible to everyday journalists. But this is a problem, and it's going to get worse.
Arlyn: It is a problem I think.
There are two things. One is, I think technology is at the heart of that problem, but it could also be at the heart of the solution. Um, I don't know of any companies yet, but I'm sure we'll learn of more.
I think eventually there will be companies that will be able to check for some kind of standard that's being developed, so that you can fact-check or verify the sourcing of video or still pictures. For us, we are kind of looking at that technology. For those of you who are Reuters clients, you know we have a platform called Reuters Connect, and it's a platform from which you can download all of our text, videos, and photos, but there are also partners on the platform, so other providers of different images, videos, etc.
From stock to UGC, or what have you, you can download it. If it's on our platform, we've verified it, and we're trying to create, um, a standard by which all of it can be verified. Now, I recognize not everyone in local newsrooms can afford Reuters, but I think the solution has to be, oddly enough, a technological solution, to counter the technological problems.
Ina: Yeah, there are a couple of things. But I think, in the interim, you know, one thing we're going to have to do, at least for the moment: if I say to you, someone said this, you're not going to just quote me as saying someone said this; you're going to verify it.
I think for the moment we're gonna have to start treating audio and video, if it isn't coming over a secured channel, with a grain of salt. We've already seen that; we already have some of the skills we need. You know, if there's an outlandish video… what we've seen thus far is not so much deepfakes, but somebody saying, oh, this is a photo of, you know, Russia launching bombs at Ukraine, and, you know, you do a reverse image search and you find out it's from the Iraq War.
Um, I think we're going to need that same sort of skepticism. I do think there are technological approaches to verify video, but a lot of them rely on getting to a point where there's enough video that's verified that, when something is not verified, we start from a place of skepticism.
Right now we kind of have to apply that to everything, because very little of it is verified. I do think we'll be able to have verified pipelines, like Reuters communicating directly to its subscribers; I think we will see that technology. But I think we're going to have to start not assuming that every piece of video and audio we see is indeed what it purports to be. Which, again, plays to our strengths as journalists; we just have to use them.
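One way to picture the "verified pipeline" idea Ina describes: a newsroom keeps fingerprints of media it actually published and treats anything outside that allowlist with skepticism by default. This is a toy sketch, not a deepfake detector, and the byte strings are stand-ins for real media files. Exact hashes also break under any re-encoding, which is one reason efforts like the Content Authenticity Initiative attach signed provenance metadata at the moment of capture instead:

```python
# Toy allowlist verification: fingerprint published footage and default to
# skepticism for anything whose fingerprint is unknown.
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical registry of clips this newsroom has actually published.
VERIFIED = {fingerprint(b"original-reuters-clip")}

def provenance_status(media_bytes: bytes) -> str:
    if fingerprint(media_bytes) in VERIFIED:
        return "verified"
    return "unverified: treat with skepticism"

print(provenance_status(b"original-reuters-clip"))  # verified
print(provenance_status(b"doctored-clip"))          # unverified: treat with skepticism
```

The design choice worth noting is the default: the absence of verification, not the presence of tampering evidence, is what triggers skepticism, which matches the posture the panelists recommend.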
Erin: Well, we probably need some clear guidelines for how we handle audio and video, especially when these are coming from any person's cell phone, and they're coming internationally, from sources that we can't quickly and easily check, with human beings we can connect with in a way that lets us tell: did you really produce this or not? Can you show me something that shows you produced this, and when you produced it?
Arlyn: It gets a little tricky, too, because we've all gotten used to using UGC over the last couple of years, you know, just pulling something off of Twitter and, you know, reaching out.
That's going to get really dicey very, very soon, if it's not already dicey. So, um, if you're not subscribed to, like, Storyful or one of those platforms that do that legwork for you, just, yeah, you know, apply the jaundiced eye to everything you're looking at.
Eric: Um, I was going to try to do a ChatGPT demonstration, but since just about everyone here knows what it is, unless there's a show of hands that you want to do something, we could make something up off the top of our heads. Because, again, I submitted this panel proposal back in January, and I just left it the way it was, and it has snowballed so much that… we can do questions also.
Audience Member: Yeah, I'd be curious to hear some of the ways you use ChatGPT to help you do journalism.
Eric: Yeah, I mean, one of the things that's been on my mind: one of the things I do at work is deal with breaking celebrity obits. Um, obviously we have a ton of pre-writes for notable people, but obviously we don't know that everyone's potentially going to pass away on a given day. Um, and I thought, you know, if we have a writer who's not that familiar with some Hollywood screen legend, we could have them potentially plug in the screen legend's name and get some information. But I feel like, you know, you're still not going to necessarily get the emotion and the sense of tribute if this is someone who is, like, extraordinarily wonderful.
I mean, I'll give you an example. I didn't use AI for this one, but we had a writer who was assigned to do a pre-obit on a particular celebrity, everyone would know him, I'm not going to say who, who's still alive and a true icon. And when this writer turned in the information for the obit, it wasn't good, and I could have sent it back. Instead, I started doing my own research, started asking more questions, and now, when this person passes, it's going to be co-bylined.
However, I really do wonder whether it would have been a good idea to tell the writer: before you even start, see what ChatGPT says, because I’m relatively convinced it might have brought more emotion, more to the story. Look, it was an extreme example, but I’m really convinced it might have helped to at least set up what should have been a beautiful tribute and instead fell flat.
Arlyn: I just want to say one thing, and I’m mindful of time. If a reporter writes a story and the majority of their sourcing is the internet, they would not last in my newsroom. That’s what ChatGPT is. Be aware that you cannot rely on that, at least not now (caveat there), but that is the source.
As journalists, it is our job to fact-check; we fact-check for a living, honestly.
And you cannot rely on an algorithm right now to produce stories that are meaningful, to do the kinds of things we got into journalism for, to hold power to account. That’s not what ChatGPT is good for. Anyway, that’s my soapbox.
Ina: Yeah, I’ll put it bluntly: you should never use ChatGPT to write anything. But that doesn’t mean the technology isn’t going to be useful. I record almost everything now, because Otter.ai (sorry, not ChatGPT) will not only transcribe it, which was already super useful, but let me actually ask questions of the data. The real breakthroughs won’t be generically using ChatGPT as either a reporting or a writing tool; they will come from using the technology that powers it against powerful, trusted data sets, and you’re already seeing that.
You’re already seeing that enterprises and businesses are not using ChatGPT nearly as much as they’re using the underlying technology while constraining it, saying: use everything you’ve learned about how you compile information, but only apply it to this data set. For example, they’re using it to do how-tos and customer service, and they’re using only data they have verified as the training data. That’s going to become a very powerful tool in the coming months and years.
Similarly, there are all sorts of things it’s going to be good at. You mentioned timelines; again, when it has verified information at its heart. But let’s be clear: ChatGPT’s training data stopped in 2021, it doesn’t cite sources, and it does make stuff up. Wikipedia is way better than ChatGPT, and I’d still get in trouble in Arlyn’s newsroom if I relied only on Wikipedia. And Wikipedia is several orders of magnitude more reliable.
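The constrained, verified-data approach Fried describes is essentially a retrieval step: pull passages only from documents the newsroom has vetted, then instruct the model to answer from those alone. A minimal sketch in Python, with all function names and sample data purely illustrative (a real system would use embedding-based retrieval and an actual language-model API call):

```python
# Sketch of "use the technology, but constrain it to a verified data set."
# Everything here is illustrative, not a real newsroom pipeline.

def retrieve(query, verified_docs, top_k=3):
    """Rank vetted documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        verified_docs,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query, verified_docs):
    """Constrain the model: answer only from the retrieved, vetted text."""
    context = "\n".join(retrieve(query, verified_docs))
    return (
        "Answer using ONLY the sources below. If they do not contain "
        f"the answer, say so.\n\nSources:\n{context}\n\nQuestion: {query}"
    )

verified = [
    "The city council approved the transit budget on May 2.",
    "The mayor vetoed the housing bill in April.",
]
print(build_prompt("When was the transit budget approved?", verified))
```

The point of the pattern is that the model never sees, or answers from, anything outside the vetted set.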
Erin: There is AI that can be really valuable for doing data analysis. We have to think really carefully, though, about the methods being used. I would trust AI to do statistical analysis more than I would trust myself to sit down with a pencil and a piece of paper to do it, and I don’t know that anybody would say it’s unethical for me to use some sort of software or AI to do a computation when I’m being open about it and I trust it to be more accurate than I am.
You don’t know about my math skills. That’s very different from saying, “Write this article for me,” and right now that’s where we’re at. We’re at a point where we can say, “Do these computations for us”; the more complete the data we give it, the more we can trust what comes out. But what’s really scary right now is that I can put in just a couple of words, or one question, and have hundreds of words come back to me, when the software itself carries a disclosure that they don’t guarantee it’s accurate.
So we have to backtrack and try to figure out where this came from and whether it’s credible, and we can’t be careless about those steps.
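Erin’s distinction, trusting a disclosed computation while distrusting generated prose, is concrete in code: a standard-library calculation is deterministic, so anyone can rerun and audit it end to end. A trivial sketch, with entirely made-up figures:

```python
# A disclosed, deterministic computation: every step can be rerun and
# audited, unlike opaque generated text. The numbers are made up.
import statistics

story_word_counts = [812, 640, 1150, 930, 700]

mean = statistics.mean(story_word_counts)
stdev = statistics.stdev(story_word_counts)
print(f"mean={mean}, sample stdev={stdev:.1f}")
```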
Eric: It’s about good journalism, when it comes right down to it.
Audience Member: Yeah, so the San Francisco Press Club a few months ago had a panel on AI and journalism, and there was a kerfuffle, to use the technical term, between Bloomberg’s Rachel Metz and VentureBeat’s Michael Nuñez. Michael volunteered that he sometimes used ChatGPT to write headlines or other bits of copy, and Rachel stopped him in his tracks and said, wait, do you disclose that to the readers? And then all of a sudden they started having dueling articles about it. I wondered: have you talked about this in the newsroom? Is ChatGPT or other AI like Clippy, as some might say, or is it something that needs to be disclosed and shared?
Arlyn: If it’s used, we should disclose it, full stop. I don’t think ChatGPT is something we would use to write a headline, but the underlying AI? Put it this way: I had AI that looked at Google Trends, looked at my catalog of headlines on a particular topic, and then wrote a really good SEO headline for me. Would I consider using that?
Hell yeah, I would use that, and I think that will be transformational for the industry going forward. It’s a service to the reader because it meets them where they’re searching, and that’s a good thing. But will I have to disclose it? I’ll have to come up with the wording for it, but absolutely, we should disclose it.
Ina: And I just want to whet your appetite for one minute, because I think we aren’t fully embracing some of the really cool possibilities down the road. Again, we need all the caution we’ve talked about, but I envision a day where I’m working on a story and I can tell my own personal AI assistant, “I’m writing about the legal issues related to AI,” and it spits back a bunch of sources. That day is not very far off. There are a lot of ways AI can really help and serve us. The AI is not in a place to write our stories or be a source for our stories, but if we only look for it to do one of those two things, we’re missing where it could actually be really useful.
Erin: I’m going to go back to the risks, sorry. A lot of people are using AI, and a lot of people are using these same programs. Eric and I talked about the risk that AI writes one headline for me and the same headline for another publication. Then, when that headline comes out and we don’t disclose that we had help from AI, who plagiarized?
That’s an example of why it’s especially important for us to be thinking about these possibilities, and again, we need to disclose how we’re using this, to be transparent. Because if I disclose that ChatGPT wrote this headline, and somebody says, “Well, you plagiarized from me, because ChatGPT wrote my headline,” then neither of us plagiarized from the other; ChatGPT plagiarized from itself.
Audience Member: My name’s Elizabeth Baily; I’m a reporter.
I just wanted to talk a little bit about the idea that you can use AI for, I think you were talking about, recirculation, right? Which is something we don’t really deal with as reporters. But I do know that the most popular story I ever wrote for The Advocate was “Adele Takes Her Son to Disneyland,” and I also wrote about transgender prisoners and how they are treated in prison, and I know that didn’t get as many clicks. So I worry about the possibility that AI is going to give people what they want but not what they need, and I was wondering if you could comment on that.
Arlyn: Sure. I think I’m the one who brought up that technology. In full transparency, the technology is called Sophi, and it was produced by The Globe and Mail in Toronto, which is a sister company of ours; we share a parent company. The company actually just got sold to a venture firm, but we still have a contract with them. It’s a real possibility, and it’s one of the things we identified early on. We beta tested it for about six months to see what would surface, and we calibrated and adjusted the AI, our filtering, and the data going into it, to make sure it wasn’t going to serve up the wrong things.
Now, having said that, we don’t typically write those kinds of quick, clicky stories at Reuters, but it was still serving up things that we didn’t feel were appropriate. So this is a long-winded way of saying the technology will get better with additional inputs.
But it requires that nursing, and it requires hand-holding and monitoring and, quite frankly, policing, in order for it to do what we need it to do to serve our readers appropriately.
Audience Member: I have two things. One, I’m not a reporter or a journalist. I did [muffled] AI; I’ve actually made all these various things. A few things you might want to be aware of, especially with that tool you mentioned: you can mess that thing up so easily. You put a few bad frames of your face in there, and you can mess up those training models like nobody’s business, and I’ve done it.
There was another thing, but I don’t remember it anymore.
Audience Member: Yeah, so I am unfortunately at a newsroom where our leaders are using AI irresponsibly, in ways that are degrading our credibility and our integrity. I’m wondering what some of the base-level strictures are that you might encourage CEOs or upper-level management to consider, even when, say, we are owned by a venture capital firm.
Audience Member: What did they do?
Audience Member: I can tell you afterwards.
Ina: I don’t even know which company they’re referring to, but I do know there are several companies that have said quite emphatically, “We are using AI to write stories.” They are doing so today at the expense of their credibility, because they’re using it to do things it’s not capable of doing, and it is eroding the credibility of other journalists.
So, what I would encourage companies to do: every company is looking at this technology and asking how they can use it, and we’re in a bad economy, so I do think it’s right for journalism institutions to look at the technology and at what it’s good for. But I would also really caution about what isn’t ready for prime time. Some companies are doing the one thing it is at least able to do, which is spit out information unedited. It’s just not there yet. If it gets to that point, we as journalists will have to figure out where our roles fit in. Again, I don’t think it’s ever going to be good at applying credibility.
They may have a credibility algorithm someday, but right now it treats every source you input as equally valid. So it is the right question to be asking as a newsroom leader: how might we use AI, how might we use this in the future? I’ll be honest, in our newsroom it’s the graphics department that’s most worried, because the images are already pretty good just in the direct-to-consumer tools, and you can then further train the AI on your own style of graphics. For companies that decide they just want an image to go with the story, that’s going to be a real issue. Now, what you hear from managers, and there may be some truth to it, is, “Oh, we’ll use this for some stories; it’ll free up our graphic artists to do the more complex work.” Historically, that hasn’t worked out very well for journalists, especially in an industry that struggles as much with profitability as ours.
Audience Member: Okay, so I remembered what you were talking about. I was reading an article in The New York Times that said Google is testing a product that uses AI to generate news stories, and I pulled it up. Apparently over the summer they pitched it to several outlets, including The New York Times, The Washington Post, and News Corp. The tool is called Genesis, and apparently, when given details of a current event, it can generate a decently written news article with a decent factual basis. I’m wondering if any of you know about this and what the news organizations’ responses were.
Arlyn: We were not named on that list. To my knowledge, they did not pitch us.
Eric: We’re on that list; my parent company is on that list. But to be honest with you, I’m not part of those discussions, thank goodness, and I don’t want to speak out of turn if someone is recording: I’m sure it’s all very rudimentary at this point. There definitely are discussions on some level, but at a higher pay grade than mine, unfortunately.
Arlyn: I just want to qualify what I said: to my knowledge we were not pitched by that company, but we are pitched frequently by companies with a variety of different AI-powered technologies, and we evaluate them on a case-by-case basis. If you go to reuters.com, at the end of every story there’s a link to the Trust Principles, and it’s a very interesting thing. Look at number five, which is unusual for a news outlet: it basically says we spare no effort or expense in looking at new ways of delivering news to our clients and our readers. So by virtue of these tenets the company has to live by, we are constantly looking at new technologies. I’ll leave it at that; you may know more about it than I do.
Ina: I don’t know whether my organization has been pitched on it or not. What I can say is that it doesn’t surprise me, knowing where the technology is today, that if you give it a set of facts and say, “Write a story,” it can do a pretty decent job. I’m sure you do this with first-semester journalism students: you give them a set of facts and say, “Write a news story.” That’s the beginning. It can do that; it can’t decide which of those facts is more credible. But I’m sure the technology is good enough today that, given a set of facts, it can write a news story. I bet Google has that technology, and I bet a bunch of other companies have it too.
Eric: I also want to point out, just for the heck of it: when I was preparing this session, I had this little idea in my head that I’d have each of the panelists request a bio on themselves from ChatGPT. I put mine in, and it was so bad I thought, oh hell no, I’m not doing this. It was superfluous, flowery, and entirely inaccurate, quite frankly, and my name is out there as a journalist. It was just dumb.
Ina: So Eric’s bio is superfluous, flowery, and inaccurate.
Eric: That’s your kind of language and content. We’ll take one more question.
Audience Member: Yeah, I’m wondering: we’re both in the news world, and our CEO, Robert Thomson, has been very open about negotiating with these companies to try to get paid for the intellectual property. What’s the risk that licensing that content then adds credibility to the results you get from the AI, when the AI is still not producing actually credible results?
Arlyn: It’s a really good question, and I don’t have the answer to it; if I did, I’d be wealthier. But it’s a good question, and when you spin out what this will look like 10 or 15 years from now, it’s almost a trade-off, because the only way the promise of the underlying AI technology can deliver is if it’s based on accurate data sets, which the internet as a whole is not.
So in order for it to work, for News Corp, for Reuters, for The New York Times, or whoever, it has to be based on credible data sets. Does that mean we get to a point where, say, that contract ends, and then how do you roll it back? That’s the question I think is underneath your question, and I don’t know the answer, because once that literal genie is out of the bottle, you can’t put it back in, as far as I know. How do you scour through that much data to pull it back? I don’t know; these could be lifelong contracts in some cases. But like I said before, the only people who are going to make a lot of money and succeed in this are the ones who own the intellectual property.
Erin: Right, so one thing related to that: I also teach journalism law, and right now a core part of copyright law is that it depends on human creativity. We don’t have a lot of case law in this area yet, but we have a long history of copyright law that is designed around humans doing the work. Several years ago there was a lot of attention given to the question of a monkey that took a photographer’s camera, and whether the owner of the camera owned the rights to the photograph, and the case law that came out of that said no, because a human being didn’t take the picture.
So it is not copyright protected. Whether we’re talking about animals or machines, there have to be human contributions for the work to earn protection.
Ina: Can I just throw in one more thing? A very related issue is that most of the current engines were trained on a collection of copyrighted books that were on the internet but not licensed. There’s a big question now about whether they have to pull that out. One of the risks is that if it’s pulled out of future training models, it might just benefit the incumbents, because there’s no way to pull it out of the existing ones.
And the other thing that’s really important: we have gotten to this unfortunate point where today’s generative AI can’t tell you how it made the decisions it made. We could have avoided that; we could have said AI has to be able to explain itself. But it’s much more computationally expensive, and “no one’s going to go back” is what the experts tell me when I talk to them.
Eric: We’re actually running a little bit over, but I’ve spent a couple of hours on the phone, sort of collectively, with our panelists, and I’m just so happy that you were able to be here. I appreciate all your expertise.
Did you find this resource helpful? Consider supporting TransLash today with a tax-deductible donation.