
The Media Copilot

Hosted by journalist Pete Pachal, The Media Copilot is a weekly conversation with smart people on how AI is changing media, journalism, and the news.
Upgrading Journalism With AI, One Newsroom at a Time, With Nota CEO Josh Brandau

Journalists are naturally skeptical people. They look critically at new things, especially when the incentives around them are complex, and that's certainly the case with AI. Given the early missteps of some sites publishing AI content and the existential threat the technology poses to distribution, it's only natural that a stigma around using AI has emerged among many reporters.

That stigma is something Josh Brandau is wearing down, one newsroom at a time. Josh is the co-founder of Nota, a content platform for augmenting newsrooms with AI tools. I spoke to Josh for The Media Copilot podcast about the company and how it's grown since its launch in the summer of 2022 — well before ChatGPT and generative AI exploded into the mainstream.

Josh and I discussed how Nota is helping newsrooms, especially small to midsize ones, by giving them easy ways to leverage AI to create content more efficiently across multiple formats. But we also talked about how transformative AI is going to be, both for how journalists do their work and for the industry as a whole. With everything happening with Google’s AI Search and ChatGPT’s new ability to really talk to you, that discussion is more urgent than ever.

If you enjoy this conversation, I’d encourage you to follow the show on Substack, Apple Podcasts, Spotify, or any other podcast app, really. Also, we’d appreciate it if you’d leave a rating or review — it really does help the show. And if you’re on YouTube, please like the video and subscribe to the channel.


The Media Copilot is a podcast and newsletter that explores how generative AI is changing media, journalism, and the news.

Subscribe to The Media Copilot newsletter.

Explore our courses on how to use AI tools, tailored for media, marketers, PR professionals, and other content creators.

Follow us on X.

Subscribe to the podcast on:

Music: Favorite by Alexander Nakarada, licensed under the Creative Commons Attribution 4.0 (CC BY) license

© AnyWho Media 2024

Fri, 17 May 2024 15:30:00 GMT
Hunting AI Content in the Wild, With Originality CEO Jon Gillham

In the latest episode of The Media Copilot podcast, I had the pleasure of talking with Jon Gillham, founder of Originality.ai, about the nuanced world of AI-generated content and its detection. Jon's company started from a simple need in his content marketing business: ensuring that content was authentically created by humans, not AI. As AI sophistication has grown, so has the necessity for robust detection tools.


The field of AI detection is more complicated than you might think. Jon points out that while not all AI content is spam, almost all spam is now AI-generated. That leads us to unpack Google’s dilemma: targeting AI-generated content in search results might result in a better experience for users, but Google's position as a major LLM developer inherently conflicts with that goal. Nonetheless, AI detection tools are essential for publishers trying to navigate the new digital landscape without compromising their search rankings or credibility.


We also talk about the importance of transparency and authorship as AI becomes more ingrained in digital content creation. Projecting forward, you can begin to see a “hybrid” future where AI aids content creation under stringent guidelines to ensure quality and authenticity, and that’s OK!


I’m really happy with how the conversation goes deep on the complexities and realities of having AI “out in the wild” in our information ecosystem, and how the interplay between AI technologies and content creators has evolved — and will continue to evolve.



Fri, 03 May 2024 15:30:00 GMT
Navigating the AI Revolution in Media, With Ricky Sutton

If past is prologue, the story of how AI changes media won't have a happy ending for those in the news business. Tech platforms profoundly altered the media landscape over the last 20 years, forever redefining how news is created, distributed, and monetized, and most media brands that tethered their strategy to tech platforms now find themselves diminished and, in some cases, demolished. Will history repeat itself with AI?


To guide me toward an answer, I turned to Ricky Sutton. Ricky is one of the most interesting personalities in media today, and we spoke on The Media Copilot podcast. He’s had a wildly diverse career, cutting his teeth as a reporter before moving on to very important roles at both media companies and tech companies. That experience has enabled him to spot trends long before they were obvious to the rest of us. Ricky's also founded Oovoo, a video platform for media companies that’s powered by — what else — AI.


Last year he stepped down from the day-to-day at Oovoo to focus on AI through his own Substack called Future Media, where he regularly shares his thoughts on how AI is changing how we consume information, and how those in the media can get ahead of those trends so we’re not always at the mercy of Big Tech.



Fri, 26 Apr 2024 15:30:00 GMT
News Aggregation in an AI World, With Alex Fink

This week on The Media Copilot podcast I’m thrilled to talk to Alex Fink. Alex is the founder and CEO of Otherweb, a news aggregator that uses AI to give readers a healthier news diet than your average social media feed. Instead of optimizing for outrage or clickbait, Otherweb favors “kale over cake” — an analogy we come back to a few times in the conversation.

Alex has had an interesting career. After working for a long time in computer vision, he decided the world had enough cameras and chose to focus on the decisions technology could help with rather than the tech itself. Otherweb isn’t his first rodeo — he’s been a founder twice before and is a very astute observer of the media business. He’s full of great observations about the arguably corrupted incentives of ad-based media, which guide Otherweb and the way it ranks and serves up content.

You can check out Otherweb here: https://otherweb.com/


Tue, 16 Apr 2024 19:30:00 GMT
The Realities of Applying AI in Newsrooms, with Joe Amditis

Yes, we know generative AI is bad for writing articles whole-cloth. But what IS it good for when you want to apply AI in a newsroom?

In this week's episode of The Media Copilot podcast, host Pete Pachal explores that question with Joe Amditis, Associate Director of Operations at the Center for Cooperative Media. As part of his role, Joe researches how journalists can apply generative AI at both the individual and organizational levels, and has written guides on publicly available tools, including how to create custom GPTs for reporting use cases.

Joe's advice to those new to using AI for journalism? Experiment. Journalists need to use AI to understand its capabilities and limitations, and should focus on low-stakes tasks initially. Once they're comfortable, some of the most useful applications he's found are:

  • Documentation and transcription

  • Brainstorming ideas on why people should care about a story

  • Filtering through data and documents to surface potential leads

  • Generating stock images and graphics to accompany articles

The key takeaway: use AI tools pragmatically to gain efficiency in workflows, but don't lose sight of the human element and relationships at the core of journalism. As long as journalists stay focused on creating quality, valuable content for their communities, they'll be able to find ways AI can help them move faster toward that goal.



Fri, 15 Mar 2024 15:45:22 GMT
How Journalists Can Make Peace With AI, With Anne-Marie Tomchak

What's it like to come face-to-face with your own deepfake? Anne-Marie Tomchak knows, and the encounter is captured vividly in her documentary Game Changer: AI and You, which aired recently on Ireland's public broadcaster.

It's a powerful moment, and one I would argue every journalist covering AI should experience: the unnerving feeling of seeing your own image and voice co-opted to say or do anything that the programmer desires. It's one thing to hear about a celebrity like Taylor Swift being deepfaked; it's quite another to have it done to you personally. And with the technology becoming so accessible, that becomes a greater possibility every day.

On this week's episode of The Media Copilot podcast, Anne-Marie shared the insights she gained by working on the documentary (her second on the subject of AI), zeroing in on AI's growing influence in journalism. We discussed how AI is reshaping the media landscape, from newsroom operations to content creation, and the ethical and legal conundrums emerging from AI-generated content. Having founded the BBC's social media investigative unit, Anne-Marie also talks about how this technological shift differs from the digital media shakeups of the past.

If you enjoy the podcast, please subscribe on your favorite platform, check out our channel on YouTube, and leave a review or a star rating. It really does help the show, and it'll ensure we keep bringing you great conversations like this one.


Fri, 01 Mar 2024 16:30:00 GMT
Why Journalists Make the Best Prompt Engineers, With David Caswell

Doing journalism with AI? What even is that?

Up until recently, the answer to that question was a small part of the profession, mostly restricted to big publications with deep pockets and a sophisticated data strategy (think: The Washington Post or the AP). But after ChatGPT said hello to the world a year and a half ago, "AI-powered news" was suddenly a blank canvas.

After the world saw the disastrous results of publishing generative AI content without a robust process surrounding the creation, vetting, and publishing of that content, the media world went back to the drawing board: What is this "magical" new technology good for, and what does a newsroom need to do to use it safely and ethically?

David Caswell spends most of his days thinking about exactly that. David has been working with machine learning and AI in media for well over a decade, leading product teams at the BBC, Tribune Publishing, and Yahoo. He's now a consultant and researcher focused on AI in newsrooms, and he wrote arguably the definitive guide on the subject last fall in his article "AI and News: What's Next?"

This week David joins The Media Copilot podcast to talk about everything that's happened since his article dropped, and how his thinking about AI's role in our media ecosystem has changed. We also explore what he hopes to see come out of The New York Times lawsuit against OpenAI, how reporters should be leveraging generative tools, and why journalists are naturally good prompt engineers.


Fri, 23 Feb 2024 16:00:00 GMT
How AI Is Creating Realistic Fictional Characters, With AImmersive

What happens when you teach your AI to churn out believable fictional characters? AImmersive co-founders Max Salamonowicz and Casey McBeath have built a tool for writers and creatives who want to create realistic video game and fiction characters.


The tool they've created isn't just a simple character generator. It's an advanced AI system designed to produce believable, complex characters with unique personalities, backstories, and traits.


Salamonowicz and McBeath spoke to John Biggs for The Media Copilot podcast. Their insights offer a glimpse into the merging of technology and creativity, and how generative AI is poised to redefine the landscape of narrative arts.




Fri, 09 Feb 2024 20:48:40 GMT
How 'Invisible QR Codes' Can Protect Copyright in the Age of AI, With Eric Wengrowski

Copyright is one of the biggest issues in AI. Eric Wengrowski, the CEO of Steg.AI explains how digital watermarking can help.


It's fair to say the subject of copyright comes up a lot when you're talking about AI. Whether you're talking about a large language model (LLM) like the ones that power ChatGPT, or diffusion models that serve text-to-image creators like Midjourney, these generative systems suck up massive amounts of training data from the open web.


This has concerned many content creators and publishers, including The New York Times, which brought its concerns to the courts in late December. While the world waits for the law to catch up to the AI industry, the question remains: can authors, photographers, videographers and anyone else in the business of creating content do anything to ensure they stay connected and in control of the things they create?


There might be. What all these issues are circling is the concept of content provenance: ensuring the identity of the copyright holder of any piece of content is embedded within the content itself. One way to do that is through digital watermarking — essentially creating an "invisible QR code" that travels with the document, image, or video, even if it's copied and stripped of metadata.
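Steg.AI's actual watermarking technique is proprietary and far more robust than anything sketched here. But the basic idea of hiding a payload invisibly inside pixel data can be illustrated with the classic least-significant-bit (LSB) scheme below — a toy that, unlike a real forensic watermark, would not survive re-encoding or screenshots. All names and values are illustrative.

```python
# Toy least-significant-bit (LSB) watermark: each payload bit overwrites the
# lowest bit of one pixel byte, changing its value by at most 1 (invisible to
# the eye). NOT Steg.AI's method — just an illustration of the concept.

def embed(pixels: bytes, payload: bytes) -> bytes:
    """Hide payload bits (MSB-first) in the low bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for payload"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear low bit, then set it to the payload bit
    return bytes(out)

def extract(pixels: bytes, n_bytes: int) -> bytes:
    """Read n_bytes back out of the low bits."""
    bits = [pixels[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

# A flat gray "image" of 256 pixel bytes carrying a 9-byte owner tag:
image = bytes([128] * 256)
marked = embed(image, b"(c) ACME!")
assert extract(marked, 9) == b"(c) ACME!"
```

Real provenance watermarks are designed to survive cropping, compression, and format conversion, which is exactly what this naive scheme cannot do — that robustness is the hard part companies like Steg.AI are selling.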


Steg.AI is a company that specializes in digital watermarking, and The Media Copilot spoke with its CEO, Eric Wengrowski, in our latest podcast. We fully explored the role of watermarking in a world where all kinds of web crawlers are constantly hoovering up data, why it's important to label synthetic content, and the incredibly important question: can you still detect the watermark of a piece of training data in model output?



Fri, 02 Feb 2024 17:53:11 GMT
Reviving the Dead With AI, With Siggi Arnason

"Just imagine the whole society just crumbling over AI."

People in Iceland don't have to imagine it. That quote from Siggi Arnason, CEO of OverTune, describes the fallout from a viral video that his company's AI-powered technology helped create. The video was a comedy sketch featuring a recreation of a popular deceased Icelandic comedian, Hermann Gunnarsson. After it aired, over 90% of the country ended up seeing it, and in response, the country's parliament is fast-tracking legislation around deepfakes and the use of AI.

Another effect of the controversy is that OverTune has gone viral. Arnason spoke to John Biggs on The Media Copilot podcast about the skit and the resulting firestorm. Arnason, a former musician and self-described "lover of cats," is unique in that he never wanted to be an AI influencer. But when his team built Iceland’s first deepfaked political comedy sketch, he knocked over a can of cod. Now his country is wrestling with the concepts of ownership, creativity, and the future of AI.


Join us in New York on February 1! We are planning our first meetup in Manhattan and we’d love to meet you! Sign up for our Meetup Group here and we’ll send you the details shortly.



Fri, 26 Jan 2024 18:04:29 GMT
How Media Can Thrive in the Age of AI, With Louise Story

When The New York Times filed its landmark lawsuit, accusing OpenAI of violating copyright by training its large language models (LLMs) on its journalism, some savvy observers had been expecting such a move for months.

One of those people is Louise Story. Louise is a former Times staffer, spending several years as an investigative journalist before getting involved in strategy and building new formats for the paper (such as live video). She also led content and product strategy for The Wall Street Journal — including its approach to AI — so few people have a better perspective on how newsrooms regard technology platforms. She now offers that perspective as an independent consultant, helping guide media companies on digital strategy and how they can adapt to an AI-mediated future.

I spoke to Louise for The Media Copilot podcast. We of course talk about the lawsuit and dissect the stakes for the players involved and the media. I was also excited to get her perspective on the infamous Sports Illustrated debacle and how incidents like it have added to the stigma of generative content. Of course, I couldn’t let her leave without getting her to share some practical advice on how newsrooms can take their first steps into the world of GenAI.



Fri, 12 Jan 2024 16:30:00 GMT
Sniffing Out AI Writers, With Lee Gaul

When ChatGPT showed how easy it was to write an "original" academic paper that could get a passing grade, the need for some kind of AI detector was suddenly starkly clear. The market quickly responded, and GPTZero, created by 23-year-old Edward Tian, was an overnight sensation last spring. College professors now routinely check papers for AI authorship.


In the media world, the need for such a tool was perhaps less urgent, since editors tend to have a tighter grip on how copy is produced, and few writers would risk their reputations trying to pass off synthetic articles as their own. That is, until the boondoggle with Sports Illustrated, where articles supplied by a third party appeared to have been written by AI (note: the company that supplied the articles claims they were human-written).


The incident got widespread attention, and it underscored the need for AI detection in media, especially when you publish content at scale, from multiple sources. Even if your in-house editorial team is strictly human-driven, freelancers and syndication partners may not have gotten the memo.


So do managing editors need to add "copy and paste article into AI detector" to the long list of editors' duties? They can, but another solution may be to build it into existing processes and tools, which is exactly why Copyleaks exists. The company began as a plagiarism detector and now markets itself as an AI detection company. It claims to detect synthetic text across models, in multiple languages, and in detail (i.e., showing which parts of a document are AI-generated, as opposed to a simple yes/no result).


Lee Gaul is the enterprise sales director at Copyleaks, and he's this week's guest on The Media Copilot podcast. Our conversation goes beyond simple AI detection and explores the big-picture issues driving the demand for the service as well as the increased need for human judgment when machines enter the picture.



Fri, 05 Jan 2024 16:44:35 GMT
Applying ChatGPT to Financial News, With Matt Martel

At a time when most newsrooms across the world are considering, studying and, in some cases, experimenting with generative AI, at least one publication has enthusiastically embraced the technology, building it into workflows and publishing "synthetic" content on the regular.


BusinessDesk in New Zealand uses ChatGPT and other AI models to augment what it’s serving up to subscribers, using the tech’s generative capabilities to both monitor news events and create content around them almost instantly. After launching AI-powered articles and summaries in the spring, BusinessDesk is going further, using it to summarize lengthy reports and assist in news gathering.


Matt Martel, general manager of BusinessDesk parent NZME, spoke to The Media Copilot about why the BusinessDesk newsroom jumped into the realm of generative AI so quickly, how it avoids the pitfalls of the tech without slowing things down, and the ways the company’s organizational structure made it so friendly to integrating GenAI into real-world workflows.


You can hear the full version of this PREMIUM episode of The Media Copilot by becoming a paid subscriber.



Fri, 22 Dec 2023 19:17:41 GMT
Running Your Own Newsroom's LLM, with Viktor Shpak

Newsrooms can only get so far with pasting prompts into ChatGPT. Once a media business wants to get more serious about generative AI, it should think seriously about running, fine-tuning, and perhaps even building its own large language model (LLM).


There are a number of approaches to this, and it's easy enough to download a commercial or open-source model to run on your private cloud, or even your MacBook. But what are the factors to consider when rolling your own AI operation, and how expensive can it get?
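On the "how expensive can it get" question, a useful first check is the memory needed just to hold a model's weights: parameter count times bytes per weight at a given precision. The sketch below does that arithmetic; the sizes chosen (7B/13B/70B, echoing the Llama 2 family) and the precision table are standard rules of thumb, and real usage adds KV-cache and activation overhead on top.

```python
# Back-of-envelope memory floor for self-hosting an LLM: weights only.
# fp16 = 2 bytes/weight; int8 and int4 quantization halve that repeatedly.
# Treat these as minimums — inference adds KV-cache and activation memory.

BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(n_params_billion: float, precision: str) -> float:
    """GiB needed to hold the weights of an n-billion-parameter model."""
    bytes_total = n_params_billion * 1e9 * BYTES_PER_WEIGHT[precision]
    return round(bytes_total / 2**30, 1)

for params in (7, 13, 70):  # e.g. the Llama 2 size family
    row = {p: weight_memory_gb(params, p) for p in BYTES_PER_WEIGHT}
    print(f"{params}B params -> {row}")
```

This is why a quantized 7B model fits on a laptop while a 70B model at full half-precision needs server-class GPUs — the factor-of-ten spread in parameters maps directly onto hardware cost.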


In this week's conversation, John Biggs talks with Viktor Shpak, lead developer for VisibleMagic, about what it takes to run your own LLM in the privacy of your own office. He also explores the future of AI-generated content and code, pointing out that the rising AI tide will — theoretically — lift all boats. We're grateful we had the chance to probe the mind of an extremely plugged-in developer.



Fri, 08 Dec 2023 17:20:29 GMT
AI Journalism Goes Global, With Charlie Beckett

In the year since ChatGPT arrived on the scene, journalism has grappled with the ethics of generative AI. From robot-written articles to the proliferation of “fake” images, the problems the media needs to think through have been bubbling in the background for a long time, but they've been exacerbated by the scale that generative AI makes possible.


One person who's spent a lot of time thinking about all the perils and promise that AI brings to journalism is Charlie Beckett. A professor in the Department of Media and Communications at the London School of Economics (LSE), Beckett is the founding director of Polis, the school’s international journalism think tank. He’s currently leading Polis’s Journalism and AI project, which hosts the JournalismAI Festival, starting on December 6.


The festival promises to unite dozens of journalists who are innovating and using generative AI in newsrooms all over the world. It'll take on topics like detecting bias in content, the role AI can play in covering elections, and how small and local newsrooms can leverage the tech to punch above their weight.


In talking to Beckett, I was struck by the tone of optimism that emerged in our conversation. Even though we tackled thorny topics like the ethics of generative images in war and the recent generative-content brouhaha involving Sports Illustrated, it's clear his focus is on how this manifestly transformative technology can help the truth that journalists seek shine through.


I hope you enjoy the discussion as much as I did. You can register for free for the JournalismAI Festival here.



Fri, 01 Dec 2023 19:12:18 GMT
Is This the End of OpenAI?

The past few days have turned the entire industry of generative AI upside-down. Before the weekend, OpenAI was sitting comfortably in pole position, riding high from a series of recent announcements designed to keep it there. Most of the world saw ChatGPT as the default starting place for anyone taking their first steps into AI, and the company’s models as setting the standard, with competitors fighting for scraps of mind share.


Now we're in a completely different world. Ever since its board fired CEO Sam Altman in a surprise move Friday afternoon, the situation at OpenAI — and the marketplace for generative AI tools — has been in flux. There have been so many developments since Friday that it's been difficult to keep up (here’s a good summary), but the current state of affairs is a standoff between OpenAI's employees and the board. The staff wants Altman reinstated and the board to resign, or they're all going to follow Altman to Microsoft (far and away OpenAI's biggest investor), which offered him a job as CEO of a new AI subsidiary. Microsoft has said it would indeed hire the defecting staffers.


On this week's Media Copilot podcast, John Biggs and I are joined by Peter Bittner from The Upgrade to discuss these possible scenarios for OpenAI and what they mean to customers… and competitors. Whatever happens, one thing has been made very clear: the field of generative AI will not be the same after this.



Tue, 21 Nov 2023 18:47:57 GMT
Staying One Step Ahead of ChatGPT, With Brennan Woodruff

In this week’s conversation, I talk to Brennan Woodruff of GoCharlie about how AI services based on content generation can contend with a ChatGPT-dominated world. Plus John Biggs and I break down the week’s news: YouTube’s guidelines for synthetic content, a new study rating the big models on hallucinations, and an inflection point on the thorny issue of fair use.


John Biggs and I offer a crash course on using AI for marketing and media. Learn more about the 3-hour session here.


When you’re running a startup, you’re already in a race. When you’re running an AI startup, you’re essentially in the New York City marathon. It’s already a slog, and you’re wall-to-wall with thousands of competitors of all stripes. Whether or not you succeed depends on the kind of race you’re running: Do you want to win the whole thing, beat your personal best, or be top in your category?


GoCharlie appears to be aiming for the third option. The AI startup is one of many that specializes in creating marketing copy, images, and other material, but it differentiates itself by applying its own large language model (LLM) trained specifically for that use case. That would seem to give the young company an advantage, but now that OpenAI has made it easy for anyone to create task-specific GPTs with assistants — and is creating a platform to sell them — can GoCharlie get past this “extinction-level event” for AI startups?


I spoke with co-founder Brennan Woodruff about GoCharlie, what it brings to the table to marketers and media people, and how AI entrepreneurs can stay in the race even when running alongside a ChatGPT that’s wearing rocket boots.


In this week’s AI news that’s most relevant to media…

What even is fair use anyway? Ed Newton-Rex, the VP of Audio at Stability AI — the creator of the Stable Diffusion image generator — very publicly resigned from the company, arguing strongly against the perspective, common among tech companies, that training AI models on copyrighted material constitutes fair use.


Let’s put “Hail Hydra” at the end of every deepfake: YouTube kinda-sorta took a stand on deepfakes, introducing new requirements for creators to label “synthetic” content made to look realistic, but allowing a parody/satire exception. It’s an important step, though it still leaves a lot up to YouTube’s human moderators. Also: anyone making bank off of songs made from cloned voices of various artists is on notice now that those artists can force synthetic songs to be taken down. Progress? Probably. But other platforms (a certain single-letter network comes to mind) will likely have different standards.


Wait, people use Notion? This week Notion launched Q&A, an AI-powered feature that can scan all the material you’ve put on the service to inform answers to specific queries — essentially letting you have a conversation with your work. This is the dream of Google Bard’s feature that connects with all your Gmail and Google Docs, but Notion’s thingie probably has a better chance of giving useful answers since it won’t have every grocery list you’ve made since 2006 in there.


The Hallucination Olympics: Rankings for which generative AI model hallucinates the most are out, and boy, Google’s Gemini upgrade can’t come fast enough — Google Palm, which powers Bard, was dead last. Perhaps not surprisingly, OpenAI’s models lead the pack, though some smart folks were able to get Llama 2 into the same category. Hallucinations will never go away entirely, but we’re optimistic that the robots will continue to get better at, you know, facts. Now if we could just do the same with bias…



Fri, 17 Nov 2023 15:43:06 GMT
Putting AI Where Reporters Actually Work, With Ryan Restivo


There are thousands of generative AI tools for content, and some actually work well. Generative tools can create SEO headlines, social copy, document analysis, and lots more for reporters and editors, all ready to enhance your productivity.

However, there are roadblocks to incorporating these tools in day-to-day work. Beyond the basic concerns about quality and hallucinations, often the workflow itself is the issue: Incorporating a new tool typically means another login, another browser window open, and a new app to get familiar with. Then, if you’re constantly copying and pasting from the tool to your CMS and back again, the gains in efficiency start to drop. In other words, GenAI has the best chance of being effective when it’s integrated into existing workflows.

That’s the magic of YESEO, a tool developed by Ryan Restivo in partnership with the Reynolds Journalism Institute (RJI) at the University of Missouri. While there are any number of tools that will serve up SEO headlines for news stories, YESEO was created specifically for Slack, the collaboration platform found in almost every newsroom.

I spoke to Ryan about developing YESEO — which he began before the general release of ChatGPT — how newsrooms can develop a pragmatic approach to generative AI tools, and what a reporter’s workflow looks like in a future world where GenAI tools are as common as spellcheckers.

The Media Copilot is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.


Fri, 10 Nov 2023 13:30:00 GMT
How Media Can Survive in the Generative AI Era, with Brian Morrissey

It's still early days for generative AI, but the change it will inevitably impose on the news media is massive. If that sounds scary to you, you should talk to someone. We'd recommend Brian Morrissey, author of The Rebooting newsletter and host of The Rebooting Show podcast, both of which get into the weeds of the media business. In this wide-ranging conversation, Brian and Pete Pachal attack the big questions around GenAI and media: What happens when AI becomes the dominant force in search and SEO traffic to news sites dries up? What does the publisher-audience relationship look like in an AI-mediated world? And how can media companies get ahead of the coming GenAI wave — and maybe even ride it?



Sun, 05 Nov 2023 11:00:00 GMT
Teaching AI to the Next Generation of Journalists, with Thomas Seymat

Perhaps the people best positioned to thrive in tomorrow’s media ecosystem are today’s journalism students. Learning how generative AI can assist in their work while also learning the fundamentals of the trade means the next generation of reporters will have machines assisting their work from the start.

Helping guide this vanguard of robot-enhanced journalists is Thomas Seymat. Thomas is the Editorial Projects and Development Manager for Euronews, and he also teaches at the Journalist Training Center in France, one of the oldest journalism schools in Europe. This fall he’s leading a class on the use of GenAI in reporting, coaching students on how ChatGPT, Midjourney, and other tools can make them stronger, more efficient reporters while also establishing where the guardrails are on their use.

Our conversation was illuminating — and somewhat reassuring — about the future of journalism and the next generation. Thomas revealed how his students are already using these tools, their thoughts on the ethics of AI, and some “road to Damascus” moments in their journey.

On the news brief, John Biggs and I discuss why labeling content as “AI assisted” is practically useless, whether or not Apple’s new AI-ready Macs mean anything, and how the White House executive order on AI might actually be pretty good. That’s up first.

Information on the AI class John and I are teaching is here. More to come on that soon.


This week's top AI stories for media:


The White House executive order on AI


Labeling won't solve AI's problems (Axios)


IAC warns regulators generative AI could wreck the web (Axios)


Apple mentions AI, finally (CNN)



Fri, 03 Nov 2023 14:50:03 GMT
How AI Can Customize Your News, with Jeremy Caplan

The Media Copilot is a weekly discussion about generative AI and how it's changing media, journalism, and the news. After a news briefing where journalists Pete Pachal and John Biggs discuss the most recent AI headlines relevant to the media, we present a conversation with a new person every week — innovators, media executives, and fascinating people with compelling perspectives on AI.

For this week's conversation, we welcome Jeremy Caplan, author of the newsletter Wonder Tools and Director of Teaching and Learning at the Craig Newmark Graduate School of Journalism at CUNY. We spoke to Jeremy about not just the tools journalists can use to begin using generative AI in their day-to-day, but also the mentality needed to get the most out of this unprecedented moment in media.

If you watch the video version of the podcast, you may notice Jeremy’s video was slightly out of sync with his audio. We tried to fix this in post and failed, but rest assured Jeremy talks just like a normal human when he has a better internet connection.

Here are the stories from the news briefing:


Nightshade "Poisons" AI models by altering metadata on images (Ars Technica)


Twelve Labs shows off AI that can "watch" and interpret videos (The Neuron)


An AI designed to clean up Wikipedia citations (Nature)


Anthropic is crowd-sourcing an "AI constitution" (Axios)


Tue, 31 Oct 2023 14:59:43 GMT
The Ethics of DALL-E, with Harry McCracken

The ethics of generative AI are more complicated than they might seem. Take generative images, like the ones created by DALL-E and Midjourney. Even when legal issues like the licensing of training imagery are addressed (as Adobe Firefly seems to), does that mean it's OK to create an image in the style of a well-known illustrator without their knowledge or approval? And does the prolific use of GenAI images cheapen the entire practice of photo illustration?

That was one of many topics I talked about this week with Harry McCracken, Global Technology Editor for Fast Company, and a key member of the publication's internal team exploring generative AI. Harry has been covering tech since the dawn of the internet and has had a front-row seat for every innovation in tech since then. Over the past several months he's plunged deep into AI, and the topic features regularly in his newsletter, Plugged In, such as this recent piece on the dawn of "self-aware" software.

Harry is also the first guest in what will become a regular feature on The Media Copilot: Friday Conversations, where I chat with journalists, media executives, and interesting people doing interesting things with generative AI and the news. This first conversation is free for everybody, but I plan to make these conversations exclusive to paid subscribers starting next week. If you don’t want to miss any, it might be a good idea to take advantage of that subscribe button below 👇

I hope you find the conversation as stimulating as I did. Look for more insights from fascinating people working at the intersection of media and GenAI in the coming weeks.



Fri, 27 Oct 2023 01:40:43 GMT