trust Archives – Digital Content Next

Speed vs. accuracy: Journalism’s ethical balancing act
https://digitalcontentnext.org/blog/2026/03/16/speed-vs-accuracy-journalisms-ethical-balancing-act/
Mon, 16 Mar 2026

The pressure to publish first has always existed in journalism. What has changed is the pace at which decisions are made.

In today’s digital-first newsrooms, journalists often report live, publish updates in real time, and interact directly with audiences as stories unfold. The result is tension between speed and accuracy. It is no longer just a professional challenge but, increasingly, an ethical one shaped by the systems and workflows that define real-time journalism.

Our latest research with student and early-career journalists, drawing on interviews and survey responses, highlights how strongly this concern is felt. Many young reporters say the expectation to publish quickly, correct later, and keep the feed moving can feel like pressure to take risks. When verification occurs after publication rather than before, accuracy becomes reactive instead of foundational.

For media executives, this shift raises an important question: how can news organizations deliver the speed audiences expect while protecting the credibility that sustains trust? Addressing that question requires more than reminding journalists to “be careful.” It requires rethinking the systems, workflows, and newsroom culture that shape real-time journalism.

The ethical pressure of real-time news

Live blogs, rolling coverage, push notifications, and social platforms mean that each new detail can reach audiences within seconds. This immediacy is powerful, enabling newsrooms to inform the public almost in real time. But once information is published, it spreads quickly across platforms and communities, often far beyond a newsroom’s control. Even when updates or corrections are issued later, there is no guarantee they will reach the same audiences. The original version can continue to circulate long after corrections have been made.

For younger journalists working inside these workflows, the ethical stakes feel high. They are often operating at the intersection of reporting, publishing, and audience interaction. In some cases, they are expected to monitor live feeds, write updates, verify information, and respond to audience questions simultaneously.

The intention behind these workflows is understandable. Audiences expect immediacy, competitors publish in real time, and the news cycle moves quickly. But when newsroom systems reward velocity above all else, they risk signaling that speed matters more than judgment.

That perception matters. Trust depends on the belief that news organizations prioritize accuracy even when it slows them down. If journalists feel pushed to publish unverified information, that trust becomes harder to sustain.

When technology accelerates publishing but not verification

Digital publishing tools have transformed how breaking news is reported. They allow reporters to update stories instantly, provide minute-by-minute coverage, and keep audiences informed as events unfold.

Used well, these tools strengthen journalism. They enable transparency, allow corrections to be made quickly, and give audiences a clearer view of what is known and what is still developing.

The problem arises when technology rewards speed without supporting the editorial decisions behind it. Real-time publishing environments can encourage constant updates, even when information is incomplete. If newsroom dashboards or performance metrics emphasize update frequency or time-to-publish above all else, journalists may feel pressure to move forward before verification is complete.

Media executives should consider whether their tools and metrics reinforce the right priorities. Do workflows allow time for verification? Do editors have clear visibility on updates before they go live? Are journalists encouraged to label uncertain information clearly rather than present it as confirmed?

Technology cannot replace editorial judgment, but it can either strengthen or weaken it.
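As an illustration of how tooling can reinforce (rather than replace) judgment, the sketch below encodes a simple publishing gate for live-blog updates: unverified items must carry an explicit "unconfirmed" label or are held back. This is a hypothetical workflow, not any newsroom's actual system; the `Update` and `publish` names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Update:
    """A single live-blog update (hypothetical model)."""
    text: str
    verified: bool = False
    labeled_unconfirmed: bool = False

def publish(update: Update) -> str:
    """Gate an update: verified items go out as-is; unverified items
    must be explicitly labeled as unconfirmed, or they are blocked."""
    if update.verified:
        return update.text
    if update.labeled_unconfirmed:
        return f"[Unconfirmed] {update.text}"
    raise ValueError("Blocked: verify the update or label it as unconfirmed")
```

The point of a gate like this is not to slow reporters down, but to make "label uncertain information clearly" the path of least resistance in the tool itself.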

Credibility built through transparency

Accuracy is not only about getting facts right the first time. It is also about how news organizations respond when information changes.

In live coverage, new details often emerge that challenge earlier assumptions. Responsible reporting means correcting inaccuracies quickly and clearly. It also means explaining those corrections so audiences understand what changed and why.

This transparency is essential for maintaining credibility. Audiences are often more forgiving of evolving information than of silence or defensiveness when mistakes occur.

The same principle applies to audience engagement. Today’s journalists frequently interact directly with readers through comment sections and social platforms. These conversations can build trust when handled well, but they can also spread confusion or misinformation if inaccurate claims are left unaddressed. When false information appears in comment threads or audience discussions, correcting it promptly helps prevent those claims from spreading further.

Newsrooms should be prepared for this reality. That preparation includes setting clear community guidelines, assigning responsibility for monitoring conversations, and ensuring journalists are supported when responding in fast-moving environments.

Responding quickly matters, but so does responding carefully.

Building systems that support ethical speed

The core challenge facing digital newsrooms is not whether to move quickly. Speed is part of modern journalism, and audiences expect it. The challenge is ensuring it does not weaken the editorial standards that define the profession.

Meeting that challenge starts with clear expectations. Verification is not optional, even under pressure. When information is uncertain, the responsible approach is to say so.

It also requires practical support. Editors, producers, and audience teams should work together so reporters are not juggling every responsibility alone during live coverage. When someone is responsible for monitoring comments or verifying incoming information, the reporter covering the story can focus on accurate updates.

Training also matters, particularly for younger journalists who are starting their careers in live, digital news environments rather than traditional reporting structures. They need guidance not only on how to publish quickly but also on when to pause.

Finally, newsroom leaders must reinforce that credibility remains the industry’s real competitive advantage. Speed may capture attention in the moment, but trust determines whether audiences return tomorrow.

Accuracy sustains trust

The modern newsroom operates in an environment defined by constant updates and immediate audience response. That reality is unlikely to change. What can change is how organizations balance the demands of speed with the responsibility of accuracy.

Journalism has always required difficult judgment calls. In digital reporting, those decisions simply happen faster and in public view. The goal is not to slow down the news cycle, but to ensure that the systems behind it protect the principles journalism depends on.

Speed may capture attention. Trust depends on whether the systems behind the newsroom protect accuracy when the pressure to publish is highest.

AI is rewriting search and credibility is king
https://digitalcontentnext.org/blog/2025/11/17/ai-is-rewriting-search-and-credibility-is-king/
Mon, 17 Nov 2025

For 25+ years, the open web ran on links. You typed a question into Google, got ten blue link results, and clicked on the one that resonated most with you. 

Not anymore. Generative AI is shifting user behavior, from querying and clicking to asking and consuming inside tools like ChatGPT, Gemini, and Perplexity. Google’s AI Overviews rewrite results into ready-made answers, turning what used to be a page of links into a single, synthesized suggestion. In its place, we’re entering a recommendation web—a world where every surviving link must have strong context and credibility.

In the wake of AI Overviews, search isn’t dying, but the link economy is.

At Raptive, we see this shift firsthand across 6,000+ publisher partners generating billions of monthly sessions. The data reveals a re-ordering of trust. The winners will be those who modernize for this “answer-first” landscape without abandoning the fundamentals.

Search is no longer a traffic channel; it’s a reputation test. 

Here’s what we’re learning about staying visible and resilient in this new era of search, and what digital media leaders should prioritize. 

1. Traffic patterns are fragmenting

Searches are growing, but click-through rates are declining. Similarweb data shows that zero-click searches have climbed from 56% to 69% year-over-year as Google’s AI Overviews increasingly answer questions directly on the page. Search behavior is being redistributed and discovery is flowing across new channels: AI assistants, social algorithms, recommendation engines, and Google Discover. 

[Chart: AI Overview growth and AI Overview keywords. Source: Ahrefs]

Executive takeaway: Continue investing in high-quality, differentiated content that strengthens brand reputation. Move away from lightweight informational content that can be commoditized by AI. In a world of algorithmic discovery, originality and authority are the only currencies that hold value.

2. Quality and authorship signals are non-negotiable

Google’s June 2025 Core Update reaffirmed that expertise and trust win. In our analysis, sites with clear bylines, full author names, and robust About pages outperformed those without. Those signals are key ranking factors tied to credibility.

Executive takeaway: Audit your trust signals. Every article should clearly identify who wrote it, when it was last updated, and why readers should trust it. Invest in author pages, structured data, and visible expertise across verticals. “Real names, real voices” is the new SEO.
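One concrete way to surface those trust signals to both humans and machines is schema.org `Article` markup (JSON-LD) with explicit authorship and update dates. The sketch below is a minimal illustration, built in Python for clarity; the function name and example values are hypothetical, and real implementations would emit this into a page's `<script type="application/ld+json">` tag.

```python
import json

def article_jsonld(headline, author_name, author_url,
                   date_published, date_modified):
    """Build minimal schema.org Article markup carrying the
    'real names, real voices' signals: a named Person as author,
    plus publication and last-updated dates."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {
            "@type": "Person",
            "name": author_name,
            "url": author_url,   # link to a robust author page
        },
        "datePublished": date_published,
        "dateModified": date_modified,
    }, indent=2)
```

Pairing this markup with visible bylines and About pages keeps the machine-readable and human-readable trust signals consistent.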

[Chart: content activity and pageview outcomes from AI search. Source: Raptive]

3. Consistent content activity drives resilience

In a study across independent creators, we found that sites publishing one new post and updating at least five existing ones monthly were far more likely to gain traffic after the June update. We saw the same correlation among larger publishers: steady, consistent activity signals both relevance and reliability.

Executive takeaway: Operationalize content cadence. Build processes for regular updates to evergreen content, and treat publishing frequency as a core SEO health metric, not just an editorial one.
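Treating cadence as a metric can be as simple as counting new posts and updates per month against the threshold the study associates with resilience (at least one new post and five updates monthly). The sketch below is illustrative only; the function names are hypothetical, and the threshold is a correlation from this analysis, not a guarantee.

```python
from collections import Counter
from datetime import date

def cadence_by_month(events):
    """Count events (new posts or content updates) per (year, month)."""
    return Counter((d.year, d.month) for d in events)

def meets_cadence(new_posts, updates, month):
    """True if a month saw at least 1 new post and at least 5 updates
    to existing content -- the activity level associated in the study
    with traffic resilience after the June 2025 core update."""
    return (cadence_by_month(new_posts)[month] >= 1
            and cadence_by_month(updates)[month] >= 5)
```

A dashboard built on a check like this turns publishing frequency into a tracked SEO health metric rather than an editorial afterthought.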

4. Engagement metrics are rising in importance

Across our network, URLs that gained traffic after Google’s June update had 3x more comments and 3x more engagement than those that declined. AI and Google’s algorithms alike are rewarding proof of reader value.

Executive takeaway: Design for engagement and invite readers to interact. Encourage user reviews, comments, and feedback loops. Treat engagement as a credibility metric.

5. Discoverability is shifting toward recommendations

As AI search becomes more personalized, Google Discover is growing as a key traffic source. Discover rewards relevance and freshness, often outperforming traditional search in volume and conversion.

Executive takeaway: Optimize for recommendation ecosystems. Publish consistently, pair content with strong visuals, and prioritize depth and originality. These factors correlate directly with Discover visibility.

[Chart: Google’s AI search and Discover impact on traffic. Source: Press Gazette]

Why “modern SEO” still wins

Despite the hype around “GEO” (Generative Engine Optimization), “AEO” (Answer Engine Optimization), and a growing alphabet soup of new acronyms, the fundamentals of optimization haven’t changed. Today’s AI search is just a bunch of classical searches in a trench coat. Modern SEO—writing for readers, demonstrating expertise, and maintaining technical excellence—is what allows your content to surface in both traditional and AI-driven results.

And while the conversation around AI traffic grows louder, it’s important to remember that AI surfaces account for just 0.02% of total traffic today. In fact, our research found that pages ranking in Google’s top three positions are twice as likely to appear in AI Overviews as those outside the top three. 

Good SEO is good GEO.

And good GEO begins with genuine expertise. 

What to prioritize next

For digital media executives guiding strategy in 2025 and beyond:

  • Diversify traffic sources: Balance your reliance on Google with growth in newsletters, Discover, and direct audiences.
  • Double down on quality and cadence: Content activity and freshness are measurable, defensible advantages.
  • Audit trust and transparency: Author identity, About pages, and schema markup now influence both human perception and algorithmic ranking.
  • Invest in engaged communities: Reader interaction and loyalty protect against volatility in algorithms and AI tools.
  • Stay pragmatic: Don’t chase new acronyms or “AI hacks.” Track changes, test cautiously, and keep your team focused on fundamentals.

The bottom line

The future belongs to those worth recommending. 

People still want what they’ve always wanted: answers they can trust, ideas that make sense, and trustworthy sources. That’s where publishers matter most—not as content factories, but as champions of quality and original content that real people can count on. 

At Raptive, our mission is to ensure that independent voices remain discoverable, trusted, and economically viable in an AI-mediated web. Because the end of links doesn’t have to mean the end of independence—it can mark the beginning of a new era of credibility.

That’s not just good SEO; it’s good business. And, more than that, it’s good humanity.

The trust gap in polling – and how to close it
https://digitalcontentnext.org/blog/2025/10/14/the-trust-gap-in-polling-and-how-to-close-it/
Tue, 14 Oct 2025

New research reveals a paradox when it comes to public perception of polling practices: while many Americans suspect poll results may be manipulated to serve specific agendas, or even fabricated altogether, there remains a strong appetite for trustworthy data. This tension represents a critical opportunity for media organizations to boost public confidence by prioritizing accountability and ethical research standards.

The report, by AI-native quantitative research platform Outward Intelligence, sheds light on the main factors shaping perception of poll reliability, including concerns around bias, objectivity, and transparency. The findings, based on a demographically balanced September 2025 online survey of 775 U.S. adults, offer media leaders valuable insights for improving both polling and communications practices.

Audiences still value polling

First, some good news. While reservations abound, the majority of those studied value high quality data, feel represented at least somewhat in polling results, and view polling data with at least some level of trust.

  • Despite widespread skepticism, 65% of participants express at least some level of confidence in poll accuracy.
  • 85% of respondents view high quality of polling data as either extremely or very important.
  • An overwhelming majority (86%) believe polls represent their views “a lot” or at least “somewhat.”

In addition, most participants (83%) believe it is very or extremely important for leaders of media organizations, government entities, and businesses to heed public opinion when making decisions. Organizations that conduct quality public opinion research, and visibly use the results to guide their practices, could therefore have an edge going forward.

Audiences lose faith

However, this research brings to light a variety of doubts and concerns. Nearly half of those surveyed say they often or very often question the validity of polling data. Over one third think polling has declined in quality over time. Certain issues seem to provoke even greater levels of cynicism:

  • Election stress. 70% of those surveyed feel that election-related polls are correct only occasionally, or not at all.
  • Artificial Intelligence apprehension. 83% voice concern over how AI might affect trust in polling, highlighting growing awareness around data integrity issues.
  • Bias suspicion. 87% believe that organizations “spin” data for their own purposes – either sometimes (53%) or very often (34%).
  • Underrepresentation concerns. Almost 90% of participants believe that polls leave out or underrepresent certain groups of people.

Over a third of respondents believe that polls are completely lacking in transparency. Almost a quarter of respondents don’t even believe that polls are conducted with “real people actually taking a survey.” Other notable concerns include inadequate sample sizes and unclear or poorly communicated methodologies.

[Chart: the primary reasons audiences have lost trust in polling]

Polling possibilities for media leaders

These findings offer media leaders the opportunity to shore up their polling methods and communications to foster trust. More transparency about how data is gathered, the representative nature of the survey pool, and the responsible use of AI in data collection and analysis are all areas in which organizations can increase their oversight and improve trust.

Based on this data, actionable steps media leaders can take to boost public confidence in polling practices may include the following:

  • Improve transparency in polling coverage. Disclose methodology such as sample size, margin of error, and how participants were selected. Acknowledge what polls can and cannot predict, especially around elections. Minimize framing poll results to fit narratives, as audiences are sensitive to perceived manipulation.
  • Educate audiences on polling fundamentals. Provide explanations that demystify polling processes. Use interactive formats such as infographics or Q&A sessions to show how data is gathered and interpreted.
  • Address AI concerns proactively. Be transparent about how AI tools are used in data analysis and results reporting. Highlight the human oversight integral to editorial decisions. Consider publishing AI ethics guidelines for polling and data use.
  • Champion methodological rigor. Partner with reputable research firms that adhere to high standards and ensure that polls include diverse and representative samples.
  • Foster interaction. Invite audience feedback on polling coverage through social media, newsletters, or live forums. Use skepticism as a springboard for dialogue, acknowledging doubts, and responding with clarity.
  • Position polling as a tool, not absolute truth. Frame polls as snapshots of sentiment, not definitive forecasts. Balance polling data with qualitative insights, such as interviews with individuals or community discussions.

Finally, demonstrating how public opinion is being taken into consideration when making decisions can instill more confidence in audiences going forward. Media executives who embrace these strategies can strengthen their credibility and trustworthiness. In a landscape where skepticism is high but demand for quality data remains strong, publishers who lead with transparency and integrity can deepen audience loyalty and differentiate themselves in the market.

Rethinking audience relationships in the media
https://digitalcontentnext.org/blog/2025/09/23/rethinking-audience-relationships-in-the-media/
Tue, 23 Sep 2025

The success of journalism – and the media business – depends on building and maintaining a strong relationship with the audience. But that relationship is changing. No longer defined by distance or one-way communication, audience relationships now unfold across platforms, within communities, and through direct interaction. These connections shape how journalism operates, how trust develops, and how news organizations maintain their role in public life. 

A new study, From Cultivating Fans to Coping With Troublemakers: A Typology of Journalists’ Audience Relationships, examines how journalists and news organizations engage with audiences in today’s media landscape. Drawing on interviews with 52 German journalists working across traditional media, digital-native outlets, and innovation units within legacy organizations, the research shows that audience relationships are central to contemporary journalism and highlights how organizations are already adapting to these realities. Participants reflect a wide range of roles and beats—from politics and science to lifestyle and local reporting—and work across formats including print, broadcast, social media, and newsletters. Together, their perspectives offer media companies a broad view of how journalists navigate audience relationships across platforms, newsroom structures, and editorial contexts. 

Audience evolves from the general public to subgroups and individuals 

Journalists no longer address a single, monolithic public. Media audiences consist of diverse subgroups and individuals—from TikTok followers and newsletter subscribers to marginalized communities and local fans. Segmenting audiences, tailoring content to different platforms, and fostering loyalty within communities are now part of newsroom routines. 

Metrics, comments, direct messages, and live events make audiences more tangible than ever before. Publishers must balance real-time feedback with editorial priorities, using data to measure reach while maintaining journalistic independence. Interaction becomes an everyday part of reporting, providing both accountability and a sense of connection. 

The 11 types of audience relationships 

The study identifies 11 distinct audience relationships: service, representative, conversational, appreciative, community-oriented, coaching, demanding, inspirational, defensive, competitive, and antagonistic. 

Media professionals must move fluidly between these categories depending on platform, story, or context. Reporters may provide essential information in a service role or give voice to overlooked communities in a representative capacity. They may also foster conversational exchanges on social media and draw inspiration from appreciative feedback. At the same time, those same journalists may encounter demanding or antagonistic interactions that require resilience and adaptability. 

Different types of news organizations approach these dynamics in ways that reflect their focus and style. Community-focused outlets prioritize representative and community-oriented ties, giving voice to underrepresented groups and fostering shared belonging.  

Digital-native publishers lean into conversational and appreciative connections, particularly in interactive formats. Traditional brands continue to rely on the service relationship, while adding coaching or inspirational elements to strengthen loyalty. Challenging interactions, including antagonistic and competitive dynamics, are now part of the everyday landscape, requiring newsrooms to balance engagement with critique. 

Continuity amid change in the media industry 

The typology highlights practices that journalists and media organizations already implement. It provides language to describe the variety of audience connections and shows that these connections are not uniform. These connections can range from energizing to draining, collaborative to confrontational. Understanding this spectrum helps explain how journalism adapts across beats, platforms, and formats. 

Audience relationships influence distribution strategies, editorial framing, newsroom culture, and the emotional experience of journalists themselves. Far from undermining professional norms, these relationships add new layers to them. Objectivity, independence, and public service remain central, now practiced alongside relational skills that emphasize empathy, resilience, and adaptability. 

For journalism, this is not a departure from tradition but an expansion. Media organizations no longer serve a passive audience. They operate in a landscape where interaction, segmentation, and emotional labor are embedded in everyday practice. By articulating these dynamics, the research illuminates how journalists manage diverse relationships. It also shows how organizations integrate these practices into their strategies. Finally, it highlights how journalism continues to evolve while sustaining its core mission of serving the public. 

AI increases misinformation – and the value of trusted news
https://digitalcontentnext.org/blog/2025/09/09/ai-increases-misinformation-and-the-value-of-trusted-news/
Tue, 09 Sep 2025

Artificial intelligence is transforming how information is created, consumed, and trusted. As AI makes misinformation easier to produce and harder to detect, researchers are beginning to uncover how these shifts affect news consumption, audience trust, and the role of established publishers.

A new study from the National Bureau of Economic Research examines how AI-generated misinformation affects audience behavior and publisher strategies. Working with the major German news outlet Süddeutsche Zeitung (SZ), the report, GenAI Misinformation, Trust, and News Consumption: Evidence from a Field Experiment, provides timely evidence on the challenges and opportunities facing trusted media brands. 

The results reveal an important paradox. While exposure to AI-driven misinformation raises concerns about trust in media overall, it also increases engagement and strengthens loyalty to trusted news brands. The findings point to both challenges and opportunities in an era where credibility is increasingly scarce. 

Experiment context 

This extensive study includes more than 17,000 individual surveys measuring SZ reader concern with misinformation, trust in various media outlets, and willingness to pay for subscriptions. In addition, approximately 6,000 participants consented to have their browsing and subscription data tracked. And a subset of respondents participated in a quiz featuring side-by-side images, some real and some generated by AI, to identify which images are authentic. 

Double-edged impact 

The findings underscore the dual nature of AI-driven misinformation. On one hand, it erodes trust in media content overall. On the other hand, it highlights the relative value of credible news organizations that can help audiences navigate a confusing information environment. 

  • Concerns about misinformation: Participants’ exposure to an AI image quiz increases their worries about false content. 
  • Trust declines: Trust in news falls across all outlets, including SZ itself, showing that awareness of AI-generated content reduces confidence in the broader news environment. 
  • Engagement increases: Daily visits to SZ’s digital content grow by 2.5% immediately after the research, with the effect lasting more than two weeks before tapering off. 
  • Subscriber retention improves: Five months after the experiment, participants in the treatment group are 1.1% more likely to remain subscribed, roughly a one-third reduction in attrition compared to the control group. 
  • Impact is greatest for vulnerable readers: Readers struggling the most to distinguish real images from AI images show the strongest engagement and retention — daily visits rising 4% in the first three to five days and a 3.5% increase in willingness to pay for a subscription. 

The study finds that transparency and education efforts help strengthen audience loyalty. Although trust scores decline overall, readers who struggled with the AI quiz do not lose confidence in SZ. Instead, they visit the site more often and show greater willingness to pay for its content. This suggests that when people recognize the challenges of navigating misinformation, they lean more heavily on reliable sources. 

The bigger picture 

As noted, the findings highlight how trusted journalism drives engagement and reduces churn, even amid growing skepticism. Credibility emerges as a powerful business asset, as audiences turn to dependable brands to make sense of an uncertain information environment. As AI-driven misinformation makes navigating the news increasingly difficult, credible publishers are well-positioned to guide audiences, delivering the clarity, reliability, and trust that readers seek. 

Live, interactive news: real-time trust and real-world impact
https://digitalcontentnext.org/blog/2025/08/04/live-interactive-news-real-time-trust-and-real-world-impact/
Mon, 04 Aug 2025

The 2025 Reuters Digital News Report provides a sharp snapshot of the current state of digital journalism: audiences are overwhelmed, trust is fragile, and the format of news delivery matters more than ever. These aren’t new challenges. However, the urgency has intensified, along with the opportunity for publishers ready to meet audiences where they are.

How we deliver news can play a crucial role in why audiences return. Live, interactive news formats are more than a content style. They are also a tool for rebuilding trust, deepening engagement, and strengthening the bottom line.

Trust is fragile, but fixable

This year’s report confirms an ongoing crisis of trust in news. Yet it also offers a glimmer of hope: 38% of people say they turn to trusted news outlets first, while only 14% go to social media. This reinforces what we’ve long believed: audiences want credible information, but they want it delivered in a way that fits the fast-paced, mobile-first world they live in.

Live blogs and real-time updates play a crucial role here. By showing how information is gathered, when it’s updated, and who is reporting it, live coverage inherently encourages transparency. It’s a format that invites accountability and provides a natural space for in-context fact-checking, source attribution, and even conflict-of-interest disclosures.

Süddeutsche Zeitung saw seven of its 10 most-read articles in 2023 come from live blogs. It uses the format not just to update but to explain, embedding transparency cues and structured fact-checks within its real-time coverage. FAZ achieved more than 8x longer retention on live blogs than on traditional articles: proof that real-time transparency helps retain trust and attention.

There’s also an untapped opportunity in meta-coverage—live blogs about the reporting process itself. Who broke the story? How was it verified? What questions are still open? During the 2024 U.S. election, Der Spiegel deployed a collaborative newsroom effort in which 33 journalists contributed to a single live blog. Readers could see not just the unfolding story but the multi-perspective editorial process in action. The approach blends speed, transparency, and team-driven insight in one coherent stream. This kind of behind-the-scenes work can help restore confidence in an age of skepticism.

Instant, micro-content

Another key finding from the Reuters report is the growing demand for shorter, more accessible formats, particularly among younger readers. At a time when many consumers feel overwhelmed by endless scrolling and algorithmic content streams, live blogs offer something different: a coherent, time-stamped narrative that delivers key facts quickly, yet with enough context to foster deeper understanding.

Unlike social media snippets, live blogs are built around editorial judgement. Unlike long-form articles, they’re agile and responsive. They give audiences real-time coverage of politics, sports, and community events on one coherent platform.

For example, during election nights, we’ve seen publications use live blogs not only to report results but also to explain shifting trends, share expert commentary, and link to explanatory articles—all within one feed. It’s the ideal format for audiences who want to stay informed without being overloaded. A powerful example comes from Stears in Nigeria, whose live blog garnered more than 10 times the traffic of its standard articles during the 2023 elections.

Interactive news as a differentiator

Today’s audiences don’t just want to consume the news; they want to engage with it. Interactive news is the answer. The Reuters report shows increasing interest in formats that allow for interaction and explanation, especially among younger and more skeptical readers.

Live blogs are ideal for interactive features like reader polls, Q&As with journalists and experts, and moderated comment threads, all embedded directly into the coverage. This turns passive readers into active participants and reinforces the human side of journalism.

This is part of a broader trend. For instance, Stuff in New Zealand regularly engages readers through polls and live Q&As. Its Met Gala coverage received over 1,000 reader responses, while its Taylor Swift ticketing coverage triggered more than 400 comments in real time. These aren’t just passive metrics; they reflect an audience eager to feel part of the conversation.

Sustainability and innovation

For publishers facing revenue pressure, these formats aren’t just good for engagement; they’re good for business. Customizable, brand-integrated live feeds open up new opportunities for native sponsorships, affiliate placements, and reader subscriptions. They also drive reader loyalty through habitual check-ins and notifications.

At regional German paper Westfälische Nachrichten, the paywalled soccer live blog achieved a 7.3% subscriber reach—a particularly strong result that demonstrates how high-value, recurring live formats can support subscription strategies. Whether it’s covering a local election or a global sporting event, live blogs are proving to be not just editorial assets but commercial ones.

A strategic roadmap for newsrooms

If there’s one clear takeaway from the 2025 Reuters report, it’s that format is strategy. As automation and AI transform the backend of journalism, publishers must also reconsider the front-end user experience.

Live blogs offer a versatile way for publishers to respond to today’s challenges. By prioritizing transparency and making editorial processes visible in real time, they help reinforce trust with audiences who increasingly want to understand where their news comes from. At the same time, features like multi-reporter collaboration, easy formatting, AI-powered tools, and partner integrations make live blogs more efficient for editorial teams, allowing them to focus on what matters most: delivering compelling, real-time storytelling. They also meet the growing demand for bite-sized, easy-to-navigate updates, providing a clear, chronological narrative that cuts through information overload.

Crucially, live blogs also create space for deeper engagement. Whether through interactive Q&As, embedded polls, or moderated comments, they transform readers from passive consumers into active participants. And from a business perspective, they unlock new value through repeat visits, increased dwell time, and formats that are ready for sponsorship or brand integration.

Trust isn’t just built on accuracy; it’s built on experience. Audiences want news they can believe and a format that respects their time, attention, and intelligence. With the right tools, publishers can deliver both. Live, interactive news won’t solve all of the industry’s challenges, but as this year’s Digital News Report makes clear, it’s a critical piece of the puzzle, and one that’s ready to scale.


]]>
Gen Z is skeptical and selective of news – but still engaged https://digitalcontentnext.org/blog/2025/07/08/gen-z-is-skeptical-and-selective-of-news-but-still-engaged/ Tue, 08 Jul 2025 11:27:00 +0000 https://digitalcontentnext.org/?p=45588 In a digital environment where information moves quickly and influencers often shape public opinion it can seem like Gen Z is turning away from traditional journalism. But young people continue...

The post Gen Z is skeptical and selective of news – but still engaged appeared first on Digital Content Next.

]]>
In a digital environment where information moves quickly and influencers often shape public opinion, it can seem like Gen Z is turning away from traditional journalism. But young people continue to seek credible, professional news, especially when stories are significant or hit close to home. At Owasso High School in Oklahoma, students did just that following the unexpected death of a classmate. Despite false or misleading posts circulating online, many actively sought accurate information and turned to reliable news sources that they felt they could trust.

News literacy advocate Hannah Covington highlights this behavior in an article detailing her conversations with teens about conspiracy theories. Sixteen-year-old Andie Murphy, for example, deleted Instagram over concerns about AI-driven data collection. Once a regular consumer of influencer content, she now checks multiple professional outlets before accepting information as accurate. “I just couldn’t trust what I was seeing anymore,” she said. Her shift reflects a broader change in how Gen Z engages with news.

Studies from 2025 reinforce this trend in news trust

Research from Raptive supports this noted shift in Gen Z’s relationship with news and information. Their study finds that 49% of Gen Z actively verify online information by checking trusted, credible sources, while 55% say they trust content from established experts over influencers or peer posts. Notably, 39% view social platforms as less credible compared to open-web sources. These findings reflect a generation that is not only skeptical, but also intentional in its pursuit of accurate information.

According to the Poynter Institute, while teens may not frequently use dedicated news apps, they actively seek out reliable sources like CNN and the Associated Press during moments of uncertainty. About 20% of surveyed adolescents say they encounter fake news daily. However, many report that they turn to trusted news outlets when crises hit.

Similarly, Common Sense Media found that teens are increasingly wary of digital content, especially AI-generated material. In its 2025 research, teens express deep skepticism toward manipulated images and videos, with one respondent noting, “I already doubt everything I read online.” This mistrust is driving more teens toward professional journalism for verification and reassurance.

Peer fact-checking reinforces news habits

Covington’s reporting also highlights how peer influence reinforces this fact-checking culture. In school libraries and hallways, students openly challenge one another over misinformation. These real-time corrections help shape a community that values accuracy and critical thinking. For Gen Z, vetting information is becoming a social skill.

While teens may not engage with mainstream media daily, they don’t dismiss it. Covington’s interviews confirm that students return to professional news brands when a story feels urgent or emotionally charged. “If it’s big enough, I’ll check real news sites,” one student explained. That behavior underscores an important truth: for Gen Z, trust in established news sources and journalism often reactivates in moments of crisis.

This pattern aligns with Common Sense Media’s findings, which note that teens are eager for tools that help them navigate digital uncertainty. While skepticism runs high, so does the demand for guidance. Likewise, Poynter’s research shows that even teens regularly exposed to misinformation seek clarity from reputable sources during confusing or high-stakes events.

Gen Z’s relationship with news is complex but far from disengaged. They are critical of what they see, cautious about interpreting it, and selective in who they trust. When news matters, especially during confusion, fear, or grief, they turn to professional journalism for clarity. Their behavior suggests a desire not just for content, but for credibility. In a noisy and uncertain information landscape, Gen Z continues to seek out trustworthy news.


]]>
From policy to practice: Responsible media AI implementation https://digitalcontentnext.org/blog/2025/06/30/from-policy-to-practice-responsible-media-ai-implementation/ Mon, 30 Jun 2025 11:27:00 +0000 https://digitalcontentnext.org/?p=45520 As artificial intelligence becomes more embedded in editorial and business processes, media companies face increased pressure to ensure AI is implemented responsibly. This requires companies to develop a plan for...

The post From policy to practice: Responsible media AI implementation appeared first on Digital Content Next.

]]>
As artificial intelligence becomes more embedded in editorial and business processes, media companies face increased pressure to ensure AI is implemented responsibly. This requires companies to develop a plan for AI use that covers several areas, including bias mitigation, risk management, legal compliance, and long-term governance.

In my last article, I shared real-world examples of how media companies are implementing ethical AI best practices for transparency and disclosures, bias and ongoing staff education. Here we go deeper into the steps media companies are taking to reduce risk, protect privacy and maintain editorial oversight while integrating AI tools into their processes. Together, these form the eight pillars of ethical AI.

Ethical guidelines and standards

Establishing clear policies that define how AI is used across editorial, marketing and operational teams is essential to increasing transparency and building trust with audiences. Already, some media leaders have not only created policies around AI usage but also shared them publicly, which offers useful examples to other organizations grappling with AI governance.

The New York Times outlines its AI policies as part of its ethical journalism handbook, which was developed for its editorial and opinion teams and is available to the public. The guidelines state the importance of human oversight and adhering to established standards for journalism and editing.

The Financial Times also made its AI governance publicly available by sharing its principles in articles that outline specific tools staff are integrating into their workflows. It also discusses its investment in skill development and how it has transformed into a company committed to AI fluency and innovation.

Media companies need to develop formal AI ethics guidelines that help guide staff and increase transparency with the public. However, it’s equally important to regularly evaluate these guidelines as technology evolves.

Rights and permissions

As part of their governance strategy, companies must also take steps to ensure that any content produced through AI does not infringe on intellectual property rights or violate content licensing agreements. This means securing applicable rights and permissions to use the information generated by the AI tools and creating internal processes to ensure that AI outputs do not use third-party content without permission.

The New York Times encourages staff to use AI to create content including quizzes, quote cards and FAQs. However, its guidelines state that copyrighted material should not be input into AI tools, which prevents potential misuse of third-party content in AI training.

The Guardian outlines its commitment to protecting content creators’ rights when selecting third-party AI tools by stating it will only use tools that have addressed permission, transparency and fair reward for content usage.

These practices can reduce risk and reinforce a publisher’s commitment to responsible content development.

Accountability and human oversight

Even sophisticated AI systems can produce biased, inaccurate or misleading output. To safeguard against this, media companies should take a “human-in-the-loop” approach and assign qualified individuals to oversee AI tools at every stage of use.

Bay City News, a San Francisco Bay Area nonprofit news organization, maintains audience transparency by publicly sharing how the team uses AI, including in-depth context about the process behind each project. When it created its award-winning election results hub using AI, human oversight, including fact-checking, was a vital part of the project’s success.

While the BBC prohibits the use of AI to directly create news content, it requires that AI use in other areas, such as research, be actively monitored and the outcomes assessed by an editor.

Wired also does not create content directly from AI, but the company states that if AI is used to suggest headlines or social media posts, an editor needs to approve the final choice for accuracy.

Privacy and data protection

As readers grow more concerned about how their personal data is collected and used, publishers must take steps to ensure that AI tools are deployed in ways that maintain legal compliance. AI governance must include the development of transparent data collection policies and adhering to privacy regulations such as GDPR and CCPA.

Graham Media Group, a Detroit-based media company, prioritizes reader privacy and security and shares its compliance with data privacy laws on its disclosure page and in its privacy policy. The company also uses an in-house AI tool to help employees streamline their workflows without relying on free AI tools or unsecured platforms.

BBC states in its responsible AI policy that if staff intend to include personal data in an AI tool, a data protection impact assessment must be completed prior to use.

Risk management and adaptation

Using AI introduces a range of potential risks, such as bias and fairness issues, that must be actively managed. Effective AI governance requires continuous monitoring and a proactive approach to identifying and addressing these risks.

BBC created its AI Risk Advisory Group that includes subject matter experts from legal, data protection, commercial and business affairs and editorial departments. The group provides detailed advice on potential risks of using AI in both the newsroom and across the company.

As AI technologies evolve, so must the ethical frameworks that support their use. By integrating ethical AI principles into daily operations, media organizations can protect their brands, maintain audience trust and demonstrate their value to advertisers and partners who seek reliable, trusted media environments.


]]>
What media leaders need to know about AI risk and regulation https://digitalcontentnext.org/blog/2025/06/10/what-media-leaders-need-to-know-about-ai-risk-and-regulation/ Tue, 10 Jun 2025 11:19:00 +0000 https://digitalcontentnext.org/?p=45396 Organizations and individuals around the world are becoming increasingly reliant upon AI. However, adoption of AI methods by organizations is outpacing risk mitigation, and consumers are increasingly apprehensive about its...

The post What media leaders need to know about AI risk and regulation appeared first on Digital Content Next.

]]>
Organizations and individuals around the world are becoming increasingly reliant upon AI. However, adoption of AI methods by organizations is outpacing risk mitigation, and consumers are increasingly apprehensive about its use. While geographic regions have different approaches to regulating AI use, research indicates that increased data security measures and regulation positively influence customer confidence.

User concerns grow globally

Recent research found low trust in AI combined with high support for increased regulation. Trust, Attitudes, and Use of Artificial Intelligence: A Global Study 2025, led by the University of Melbourne in collaboration with KPMG, surveyed over 48,000 people across 47 countries. The study found that most people report using AI regularly—and even more support AI regulation:

  • 66% of people use AI regularly, and 83% believe AI will bring significant benefits.
  • Despite this, only 46% of people globally report trust in AI systems.
  • 70% of respondents support national and international AI regulation.

As consumers around the world rely more on AI, they are also expressing increased trepidation over issues of trust and transparency. Tech engagement, while strong across markets, is especially robust in emerging markets such as India, Brazil, and China, according to Kantar Media’s Global Digital Media and Tech Trends Report, which analyzed data from 80,000 respondents in 37 countries. 86% of media users from India answered yes to “I try to keep up with developments in technology,” as did 76% of those in China and 73% of those in Brazil – compared to 62% of U.S. respondents. A large majority of media users in India (78%) agreed with the statement: “Artificial intelligence has had a significant impact on my daily life.” More than half of Brazilians agreed (52%), compared to less than half (46%) of U.S. respondents.

According to the Adobe 2025 Digital Trends Report, key issues around AI adoption include trust, balance, transparency, and data security. The balance between innovation and trust is an ongoing challenge: both privacy concerns and governance complexities remain significant hurdles. Of the 8,301 consumers surveyed by Adobe, close to half (45%) claim to prioritize visibility and control over their data, while a third (33%) say they demand clarity on how AI is used to generate recommendations. As organizations move beyond pilot programs to scale AI initiatives, they must focus on clear disclosure and ethical AI practices to maintain credibility.

Trust erosion and AI adoption

A recent study by Thales reveals that consumers’ trust in organizations to use their data responsibly in the age of AI is rapidly waning. However, increased regulation alleviated some of the distrust.

  • In 2024, 47% of consumers questioned whether companies used AI responsibly. By 2025, this concern rose to 57% – a marked leap in just one year.
  • The 2025 global trust index found global trust rates stagnating or declining, with no sector achieving more than 50% “high trust” ratings. However, industries (such as banking and healthcare) and geographies with the most regulations had higher trust rates.
  • Trust in news media hit a new low, with news organizations trusted by only 3% of consumers in 2025, a decline from 6% in 2024. Some of this drop was attributed to slackening oversight (particularly from social platforms).
  • In contrast, government services saw trust increase from 37% in 2024 to 42% in 2025, an improvement possibly driven by enhanced regulatory frameworks like the EU’s Digital Operational Resilience Act (DORA).

Few organizations have a policy around AI usage, which will affect consumer trust.

These studies indicate increasing awareness on the part of consumers about the risks inherent in AI technology. This underscores the need for media executives to demonstrate strong governance and proactive leadership.

AI data risk found to be almost universal

Consumer trepidation is far from unfounded. The 2025 State of Data Security Report by Varonis quantifies the data risk entailed by AI usage based on data obtained from 1,000 companies. Findings confirm that AI adoption is leaping ahead of risk mitigation. Among the findings: 99% of organizations have had sensitive data exposed to AI tools.

The report also indicates that 88% of the organizations evaluated had old but still-enabled user accounts, potential entry points for attackers. It further reveals that 90% of the organizations studied have exposed sensitive cloud data, and 98% have employees using unsanctioned apps, including shadow AI. The study underscores an urgent need for stronger data governance frameworks in the age of AI.

As previously reported by DCN, a plan that includes transparency, balance, and education can offset some AI concerns. However, the conundrum remains that while most users claim to want transparency around the use of AI, revealing such origins can undermine trust. In addition to the very real risks of data exposure, AI use by organizations risks turning off consumers who perceive a lack of human connection and oversight. Media Pulse points out that human creator content, even when flawed, feels authentic and drives stronger engagement. Thus, AI tools must always be orchestrated with human creators to maintain community trust.

Global governments differ on AI regulation

The impact of AI adoption and data security concerns will likely spur increased regulation in many locales, so companies will be wise to get ahead of future requirements. The EU Artificial Intelligence Act (AI Act) – the world’s first comprehensive AI regulation – officially went into force in August of 2024. Designed to ensure safe, ethical, and transparent AI development and deployment across the European Union, it requires AI content and deepfakes to be clearly labeled by 2026. Failure to disclose can lead to legal penalties.

Other countries are likely to follow suit as the impact of AI-related data risks become increasingly apparent. Brazil and Peru are currently working on AI governance frameworks based on the EU model. Canada has established the Artificial Intelligence and Data Act (AIDA), which focuses on transparency, accountability, and risk management for AI systems. China’s strict AI regulations include content moderation laws and licensing requirements for AI models. Meanwhile, India is developing a techno-legal approach to AI regulation. The United Kingdom leans towards a more pro-innovation approach, relying on existing regulators rather than creating new AI-specific laws. Australia boasts a comprehensive “AI assurance framework” at federal, state and territory levels.

According to Pew Research, the public is more concerned about not enough government regulation of AI than too much.

Meanwhile, AI governance in the U.S. has been fragmented so far, with state-level regulations and sector-specific guidelines. Until 2025, a more nationwide approach to shaping AI governance seemed likely, but recent executive orders have been aimed at repealing AI regulations. As of this writing, the House of Representatives has passed the Budget Reconciliation Bill, which includes a 10-year moratorium on state and local laws regulating the use of AI technologies. This repeal of AI regulations, however, may be the opposite of what the majority of the public wants: More than half of U.S. adults (58%) say they are concerned that government regulation won’t go far enough in managing AI risks. Only 21% fear it will go too far, according to a recent Pew Research poll.

Different attitudes towards AI risks mean that a policy acceptable in one region might not be elsewhere. Media companies operating internationally may have to tailor AI-driven strategies to align with local regulatory expectations. The Global AI Regulation Tracker offers an interactive, real-time comparison of how various countries are responding to the explosion of AI use with regulations, laws, and policies.

Given the research linking increased AI governance with customer confidence, being proactive about AI policy is wise from a customer service perspective. Whether or not their region requires it, media leaders will want to establish clear guidelines for AI use within their organizations to ensure practices that align with user expectations. Regulations aren’t just about compliance; they are about setting standards that align with public trust. Media leaders who responsibly integrate AI can gain a strategic advantage, with ethical AI use as a key differentiator among market competitors.


]]>
Ethical AI in action: strategies to build audience trust https://digitalcontentnext.org/blog/2025/05/27/ethical-ai-in-action-strategies-to-build-audience-trust/ Tue, 27 May 2025 17:37:25 +0000 https://digitalcontentnext.org/?p=45318 As artificial intelligence becomes more integrated into both newsroom workflows and business operations, media companies are under increasing pressure to be transparent about how these tools are being used. Recent...

The post Ethical AI in action: strategies to build audience trust appeared first on Digital Content Next.

]]>
As artificial intelligence becomes more integrated into both newsroom workflows and business operations, media companies are under increasing pressure to be transparent about how these tools are being used. Recent research shows that most audiences expect clear disclosure when AI plays a role in media production. To maintain trust and credibility, publishers must follow ethical best practices for implementing AI and communicate openly about its use across their organizations.

Here are examples of how several media companies are integrating ethical AI best practices into their operations to build trust and accountability with audiences and advertisers.

Transparency and disclosures

As readers become more aware—and skeptical—of AI’s role in content production, they expect transparency about where and how AI is involved. Developing a disclosure strategy helps audiences better understand the context around AI’s role in producing content.

The Associated Press publicly shares its AI standards and policies and discloses its intent to use AI to improve the quality of its work, while adhering to strict standards for accuracy. The policy also highlights the importance of human oversight in all aspects of content production.

USA Today adds disclosures when AI is used to write summaries or “Key Points” at the top of select articles. The disclosures state that the content was AI-generated and reviewed by a journalist before publishing. USA Today also includes a link to the publication’s ethical conduct policy with the disclosure.

Bias, balance, and fairness

AI models train on a wide variety of online content. If biases exist in those sources, they can carry through into AI-generated output. Media companies are creating ways to identify and minimize bias and making these processes public.

The Guardian published a statement explaining how it minimizes bias, including through a working group that discusses ways to prevent bias and develops company-wide policies. The statement also explains how it integrates human oversight into its AI processes.

National Public Radio (NPR) added a special section on AI to its ethics handbook to encourage journalists to identify and counter bias inherent in these tools. The section also emphasizes the importance of providing transparency about how AI is used and discussing questionable output with editors.

Training and education

As AI technologies and best practices evolve, regular training ensures teams stay informed and continue to uphold ethical standards.

Radio-Canada launched a comprehensive AI literacy program to provide a foundation for its staff’s implementation and understanding of AI. The program includes a foundational training session that gives an overview of AI concepts and ethical considerations, as well as follow-up workshops focused on practical applications. The company also developed a cross-functional group—including developers, analysts, and journalists—that meets biweekly to discuss AI developments and share ongoing projects. Since its inception, more than 100 staff members have participated in the training program.

The New York Times implemented a training program that outlines how AI should be used and includes editorial guidelines, use cases and sample prompts. The company also developed an internal team to explore how AI can be applied to newsroom processes.

AI ethics and the broader media ecosystem

Marketers and advertisers are also placing greater focus on ethical standards. The Institute for Advertising Ethics recently released its AI Ethics Toolkit, designed to help buyers navigate the responsible use of AI.

Media organizations can’t slow their pace of innovation. They need to experiment with and implement technologies while ensuring that they keep audience trust at the forefront. The growing focus on ethics across the broader ecosystem signals that publishers who adopt transparent, responsible AI practices will not only build audience trust but also strengthen their position with advertisers seeking accountable media partners.


]]>