Ad tech dominance defines market power and pricing
March 3, 2026 | https://digitalcontentnext.org/blog/2026/03/03/ad-tech-dominance-defines-market-power-and-pricing/

Digital advertising remains a primary source of revenue for media companies. Yet the system that allocates that revenue is controlled by a small number of intermediaries that design the auctions, govern data flows, and determine access to demand. The central debate over behavioral advertising is often framed as a question of performance. The more consequential question is structural: who controls the ad infrastructure that decides how value is distributed? 

Ad tech firms argue that behavioral tracking improves efficiency across the ecosystem. They maintain that it delivers more relevant ads, reduces wasted spending, and increases publisher revenue. The concern, however, is not simply whether tracking improves performance. It is whether, in a concentrated market, tracking reinforces the firms that control the infrastructure rather than delivering broad gains for advertisers, publishers, and consumers. 

New research puts that debate to the test. 

Economic Rationales for Regulating Behavioral Ads, by Pegah Moradi, Cristobal Cheyre, and Alessandro Acquisti, reviews economic evidence on behavioral advertising. The authors evaluate whether tracking delivers the efficiency gains intermediaries claim. They find that when a small number of firms control key parts of the system, behavioral advertising often strengthens those firms rather than delivering broad gains across advertisers, publishers, and consumers. 

A federal judge reached a similar conclusion about market structure in United States v. Google LLC. The court ruled that Google unlawfully maintains monopoly power in key segments of the ad tech market. It found that Google’s control over both the publisher ad server and the ad exchange enabled it to entrench its dominance across multiple layers of the stack, restrict alternatives, and distort competition. The case now moves into a remedies phase that will determine whether structural or behavioral changes are required. 

Together, the research and the ruling point to the same issue: control over infrastructure shapes outcomes in digital advertising. 

Intermediaries capture a large share of revenue 

The research examines how digital ad auctions allocate value as advertiser competition increases. As more advertisers bid to reach the same users, bidding pressure rises. The intermediaries operating those auctions capture a significant share of that incremental spending. Studies cited in the report show that dominant ad tech firms can take 30 percent or more of each advertising dollar that flows through the system. 
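To make that arithmetic concrete, here is a minimal sketch of how a fee “waterfall” compounds across the intermediary chain. The intermediaries and take rates below are hypothetical placeholders, not figures from the paper:

```python
# Illustrative ad tech fee waterfall. The take rates are hypothetical
# placeholders, not figures from the Moradi/Cheyre/Acquisti report.
FEES = [
    ("demand-side platform", 0.15),
    ("ad exchange", 0.10),
    ("supply-side platform", 0.05),
]

def publisher_share(ad_spend: float) -> float:
    """Return what reaches the publisher after each intermediary's cut."""
    remaining = ad_spend
    for name, rate in FEES:
        cut = remaining * rate
        remaining -= cut
        print(f"{name} takes ${cut:.3f}; ${remaining:.3f} remains")
    return remaining

net = publisher_share(1.00)
print(f"publisher receives ${net:.2f} of each $1.00 spent")
# -> roughly $0.73 here; the intermediaries keep about 27 cents,
#    in the neighborhood of the 30-percent figure cited above.
```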

The authors do not argue that advertising lacks value. They argue that who controls the trading systems strongly influences how that value is divided. 

In the Google case, the court examined how control over publisher ad servers and exchanges affects competition. By maintaining dominance across multiple layers of the ad tech stack, Google gained the ability to influence pricing, auction mechanics, and access to demand. The court concluded that this structure harms competition. The ruling supports the conclusion that control over ad tech infrastructure plays a central role in shaping market outcomes. 

Behavioral targeting and market adjustment 

The report explains how behavioral targeting allows firms to group users based on data and earn more from certain audiences. It then examines whether this practice expands total value in the market or mainly shifts revenue among advertisers, publishers, intermediaries, and consumers. The authors find limited evidence that tracking consistently produces substantial new gains across the ecosystem. 

This finding shapes the debate over privacy regulation. Critics argue that limiting tracking would damage innovation and eliminate free digital content. After reviewing evidence from GDPR and Apple’s App Tracking Transparency framework, the paper finds little support for predictions of market collapse. Digital advertising continues; firms adjust their strategies, and markets adapt. 

The report finds that when tracking declines, companies adapt. Competition shifts, but digital advertising and content remain in place. 

Ad infrastructure determines outcomes 

The debate over behavioral advertising comes down to two competing explanations. One holds that tracking improves ad performance and increases revenue across the ecosystem. The research challenges that claim. It shows that when a few firms control the data and auction systems, tracking often strengthens their market power rather than delivering broad gains. 

The court’s ruling in United States v. Google LLC reflects the same concern. Its findings about monopoly power and harmful tying focus on how control over key ad tech systems can distort competition. 

For premium publishers, this is not an abstract policy question. The rules of the system and who controls them shape outcomes. The federal ruling signals that the structure of digital advertising markets warrants continued scrutiny. As the remedies phase proceeds, changes could alter how value flows among advertisers, intermediaries, and publishers.  

Market structure determines who sets the terms of pricing, how bids clear, and whether investment in trusted content is rewarded through open competition. Sustainable digital markets require competition, transparency, and balanced bargaining power. Strong markets reward content creation and innovation rather than control over infrastructure and data extraction. The research and the courts have made one thing clear: digital advertising has reached an important inflection point. 

Goals scored: Canada proved news bargaining works
February 26, 2026 | https://digitalcontentnext.org/blog/2026/02/26/goals-scored-canada-proved-news-bargaining-works/

The Olympic hockey gold medals were both determined by bruising 2-1 overtime games. Both between the United States and Canada. Both captivated viewers around the world. And both came down to skill, discipline, and the rules of the game (love ’em or hate ’em). We can see that same principle in action in Canada’s policy arena: the Online News Act (C-18) is working. It’s delivering real resources into newsrooms. And that’s precisely why it is under attack. By Facebook. By Meta. By their proxies and lobbyists. By their friends in government.  

In Canada, the Act now delivers roughly $100 million annually from Google to support journalism. This isn’t theoretical. All of the arguments about why news bargaining would fail are out the window now that it’s working. The cash is flowing directly into newsrooms across the nation.  

Financial support that matters  

An independent, family-owned publisher in Alberta calls the roughly $28K per year her newsroom receives a “gamechanger.” A mid-sized outlet in Quebec reports hundreds of thousands annually. Both also lead associations representing hundreds of publishers in their respective provinces. All confirm the same: this support matters. It is stabilizing local journalism at a moment when democracy cannot afford further erosion. 

As an industry (notably in the tech and trade press), we are guilty of paying way too much attention to the political food fight while legislation is debated, then losing interest once implementation begins. We saw this play out when a similar law was passed earlier in Australia: the U.S. tech press and influential reporters made it a global story when Facebook blocked the news, neutralizing the efforts of parliaments, but then failed to follow up when funds began to be distributed from Google and Facebook. 

The rationale behind Canada’s Online News Act – and its inspiring big sister law, Australia’s News Media Bargaining Code – was straightforward. Policymakers recognized a structural imbalance in bargaining power between dominant digital gatekeepers, primarily Google and Facebook, and news publishers. Antitrust enforcement was moving too slowly to address the harm in real time. Journalism, however, could not wait for multi-year litigation to conclude. These laws were designed as targeted, interim interventions to push platforms to negotiate and get money flowing while broader competition cases played out. 

Balance benefits from flexibility  

Yes, in the United States, antitrust enforcement is finally catching up. Google has been found to have violated antitrust laws in four different district courts. However, only one of these decisions has made its way through the appellate process and will bear fruit this year. In Canada, the Competition Bureau is actively suing Google. Meta had an early win in its defense against the FTC, but that too is now under appeal. Most importantly, recent court decisions validate the imbalance these news bargaining codes were designed to address. The premise was never radical. It was a bridge to restore a bit of balance while competition law ran its course. 

What makes these laws smart – something DCN and I have emphasized repeatedly in public filings and testimony – is their simplicity and flexibility. They do not attempt to set a fixed price on journalism, attention, or (heaven forbid) clicks. They are not overly prescriptive about the value of content. Instead, they require platforms to negotiate in good faith with individual publishers. Sophisticated publishers are free to negotiate based on the strength of their brands, the distinctiveness of their journalism, and their broader strategic goals. Others can opt into models that distribute funds based on easily measured metrics such as the number of journalists employed, a structure Canada has broadly adopted (sketched below). This hybrid approach respects market dynamics while ensuring broad support for newsroom employment. 
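A headcount-based model reduces to simple proportional arithmetic. This sketch uses entirely hypothetical numbers — the fund size echoes the roughly $100 million figure cited above, but the journalist counts and per-journalist rate are invented for illustration, not the actual Canadian formula:

```python
# Sketch of a headcount-based distribution, the structure Canada has
# broadly adopted. All specific numbers here are hypothetical.
ANNUAL_FUND = 100_000_000        # ~$100M/year from Google, per the Act
ELIGIBLE_JOURNALISTS = 10_000    # assumed national total, for illustration

PER_JOURNALIST = ANNUAL_FUND / ELIGIBLE_JOURNALISTS  # $10,000 each here

def newsroom_payment(journalists_employed: int) -> float:
    """A newsroom's share scales with the journalists it employs."""
    return journalists_employed * PER_JOURNALIST

print(newsroom_payment(3))    # a small independent outlet -> 30000.0
print(newsroom_payment(40))   # a mid-sized regional publisher -> 400000.0
```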

Equally important, these frameworks keep the government out of the business of picking the winners and losers in the press. Funds flow through negotiated bilateral platform and publisher agreements or structured industry mechanisms, not through political discretion.  

That safeguard is not theoretical. In California, Governor Gavin Newsom curtailed funding for a similar measure after it passed into law. At the federal level, under a President Trump — who has repeatedly attacked news organizations and sought to use governmental power against perceived critics — the danger would be obvious. Allowing the executive branch to influence which publishers receive support would be a profound threat to press independence. The Canadian and Australian models wisely avoided that trap. 

Question the Meta narrative 

The one aspect of these new bargaining laws that did not unfold as intended was Meta’s response in both Australia and Canada. Rather than negotiating under a democratically enacted framework, Meta chose to block news entirely and leverage its platform to blame the government. That decision eliminated referral traffic for publishers who had relied on Facebook distribution. But despite Meta’s narrative, it was never evidence that the law failed. It is evidence that a platform, often labeled as hostile to journalism and hostile to democracy, chose retaliation over participation under rules developed through a deliberative parliamentary process. Are we surprised? 

And Meta now argues the law should be dismantled because it supposedly inhibits their AI licensing deals. That claim also collapses under scrutiny. In the U.S. – where there is no such Online News Act, nothing close to it – there has been no surge in voluntary licensing deals for journalism out of Menlo Park. The absence of regulation has not unlocked a growth market of dealmaking here. The deals simply are not happening.  

Instead, both Meta and Google have entangled content licensing within the tentacles of their sprawling platform businesses, blurring any clear precedent for paying directly for journalism. Why? Because direct licensing creates precedent. And precedent matters, particularly as these companies fight consequential copyright litigation over AI training. In fact, internal emails disclosed in discovery in Kadrey v. Meta, an early AI copyright case, showed employee concern that even licensing one copyrighted work could undermine the company’s sweeping fair use argument: the claim that anything publicly accessible on the web can be used to train AI models without permission or payment. Let that sink in. That expansive interpretation of fair use is, at best, aggressive. It has already faced skepticism in court and in the U.S. Copyright Office’s report.  

But the facts remain stubborn for Meta. The Online News Act is delivering real dollars to real newsrooms. It was designed as a measured mitigation to a proven imbalance in market power. It pushes dominant platforms to the negotiating table without dictating deal terms. It protects against government interference in distribution decisions. And it serves as a bridge while antitrust enforcement addresses the deeper structural problems. 

An act to defend democracy 

Journalism is essential infrastructure for democracy. When market distortions threaten it, policymakers have a responsibility to act. Australia acted. Canada acted. And the results show that carefully constructed news bargaining frameworks can work. 

The answer is not to dismantle what is working. It is to refine it, strengthen it against retaliation, and ensure that dominant platforms cannot use their gatekeeping power to avoid accountability – whether in news distribution, competition law, or AI. 

Democracy cannot wait for perfect solutions. It depends on practical ones. And Canada should be applauded for this one.  

Even if America puts the final puck in the net ; )  

DCN Perspective on EC Investigation into Google’s AI Overviews
December 9, 2025 | https://digitalcontentnext.org/blog/2025/12/09/dcn-perspective-on-ec-investigation-into-googles-ai-overviews/

Digital Content Next strongly supports the European Commission’s formal investigation into Google’s AI Overviews and AI Mode practices. Google’s use of premium publisher content to power AI-generated summaries atop search results is not innovation – it is substitution, siphoning value away from the open web and trusted news and entertainment.

As DCN Members participating in our proprietary DCN Benchmark Reports are aware, DCN’s Q3 Quarterly Revenue Report reveals clear and structural declines in advertising inventory. Across nearly all formats, it is impressions – not pricing – driving revenue softness. Desktop display and video impressions declined significantly year-over-year, pushing revenue down despite rising CPMs, which underscores the impact of lower impressions and the potential harm to publishers.
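The underlying arithmetic is worth spelling out: revenue equals impressions times CPM divided by 1,000, so a steep drop in impressions overwhelms a modest rise in price. The figures below are hypothetical, chosen only to show the dynamic, not DCN benchmark data:

```python
# revenue = impressions * CPM / 1000  (CPM is the price per 1,000 impressions)
# Hypothetical figures to show how revenue can fall while CPMs rise.
def revenue(impressions: int, cpm: float) -> float:
    return impressions / 1000 * cpm

last_year = revenue(100_000_000, 4.00)  # 100M impressions at a $4.00 CPM
this_year = revenue(80_000_000, 4.40)   # 20% fewer impressions, 10% higher CPM

print(last_year)  # 400000.0
print(this_year)  # 352000.0 -> revenue falls 12% even though CPMs rose
```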

These declines directly align with the sharp downturn in referral traffic from Google Search – identified in DCN’s July 2025 Snap Survey as a top-tier, systemic risk for publishers. That study received significant press coverage, having established a median 10% year-over-year drop in search referral traffic, with non-news brands seeing a 14% median decline, and clear evidence that AI Overviews and zero-click experiences are displacing clicks and visibility for publishers’ original content.

This EC action also comes amid escalating legal challenges in the United States. Following the court’s ruling in the DOJ’s case that Google maintained an illegal monopoly in search, Judge Mehta acknowledged that potential harms to publishers – particularly the lack of specific and freely given consent for AI training on their content – may influence the shape of future remedies. Recent private litigation from Penske Media further alleges that Google is illegally tying its AI products to its search monopoly, using publisher content to feed products that directly replace – rather than refer to – the source material.

These developments point to a single, urgent conclusion: Google’s AI Overviews is extending the company’s dominance at the expense of a free and plural media. Regulators must act swiftly and decisively to:

  • Halt the unauthorized use of publisher content in AI products;
  • Enforce transparency around how AI models ingest and display that content; and
  • Ensure publishers retain meaningful control and fair compensation in this new ecosystem.

Without corrective action, this dynamic will continue to erode the economic foundation of high-quality news and entertainment – not because readers value it less, but because dominant platforms choose to redirect that value elsewhere.

Chamber of Progress asks for blank check for big tech
November 6, 2025 | https://digitalcontentnext.org/blog/2025/11/06/chamber-of-progress-asks-for-blank-check-for-big-tech/

The Chamber of Progress has asked the Trump Administration to intervene to ensure that all AI training is considered “fair use.” You read that right: They’re asking the White House to declare that every use of copyrighted material to train artificial intelligence systems is lawful, no matter the circumstances.

It’s a radical proposal that would reward the largest technology platforms at the expense, if not demise, of publishers and creators. In the pursuit of unbridled profit, the Chamber is seeking to overturn more than two centuries of copyright law that has served our country well. U.S. copyright protections have long struck a balance between creators’ rights and technological progress, ensuring that those who invest in producing art, journalism, music, film, and literature can be fairly compensated while still allowing for reasonable uses that advance learning and innovation.

Declaring all AI training “fair use” would blow up that balance. It would amount to a government-granted blank check for Silicon Valley’s biggest players to strip-mine the creative economy. And while some of the Chamber’s backers may cheer that result, it’s important to note that not all technology companies share that view. Microsoft, for example, reportedly told publishers just last month, “You deserve to be paid on the quality of your IP.” (Note that Microsoft apparently chooses not to be associated with The Chamber of Progress.)

So, let’s take a moment to refresh our collective memory about what fair use actually is, as well as what it is not.

How fair use works

The concept of fair use is baked into U.S. copyright law. It provides limited exceptions for certain uses of copyrighted works without permission for purposes like criticism, commentary, news reporting, teaching, or scholarship. But whether a use is “fair” depends on a careful, case-by-case balancing test.

The law identifies four factors:

  1. The purpose and character of the use, including whether the use is commercial or nonprofit and whether it’s transformative, which means it adds new meaning, message, or purpose.
  2. The nature of the copyrighted work, which recognizes that creative works enjoy stronger protection than purely factual material.
  3. The amount and substantiality of the portion used, meaning both how much is taken and how significant that material is to the original work.
  4. The effect of the use on the potential market for the original, often called the most critical factor. It is key for publishers because it asks whether the new use substitutes for or diminishes the market value of the original work, and whether the new use would hinder emerging markets for the original work (i.e., licensing).

Each factor must be weighed. Fair use was designed to be flexible, not absolute, and it should be wielded like a surgical tool, not a sledgehammer.

What the courts are saying

Courts are still working through the major questions of how copyright law should apply. But, in the two most recent cases, judges ruled, for different reasons, that AI models were likely developed unlawfully. In Bartz v. Anthropic, Judge Alsup held that training AI systems using lawfully acquired books could be “spectacularly transformative,” comparing it to “training schoolchildren to write.” It’s worth noting that this case concerned books and not content like news articles where the potential for substitution is much greater. But even in that decision, he drew a bright line against using pirated or illegally obtained material, saying that would not qualify as fair use.

In the same district in Kadrey v. Meta, Judge Chhabria took a very different view barely 24 hours later. While he ultimately ruled for Meta, it was only because the plaintiffs couldn’t yet show actual market harm. Importantly, the court rejected Alsup’s “schoolchildren” analogy, calling it “inapt,” and acknowledged that generative AI poses a qualitatively different threat to human authorship, particularly because it can flood the market with AI-generated substitutes for real creative work. His decision suggested that proving tangible market harm is key to overcoming the fair use defense.

Together, these early cases show that the courts are highly skeptical of AI companies’ legal claims and that fair use in the AI era is anything but a settled question.

Stronger cases ahead

The cases now moving through the courts could reshape the entire landscape. The New York Times v. OpenAI is poised to be the most consequential yet. The Times alleges that OpenAI violated its terms of use, copied and reproduced its journalism without permission, and even regurgitated near-verbatim passages from Times’ stories in its outputs.  The judge largely denied a partial motion to dismiss in March 2025.

Similar suits from Disney, NBC Universal, Warner Bros. Discovery, and others allege that AI systems like Midjourney and Minimax have infringed on copyrighted characters and images, using them as raw material to generate new (and often derivative) outputs. These cases go beyond questions of data ingestion and look squarely at what the machines produce. When AI outputs contain or imitate protected creative expression, or produce outputs that can substitute for the original works, the argument that “training” is obviously a fair use becomes untenable.

That’s what makes these lawsuits so strong: they don’t rely on abstract theories about future market harm. They show the receipts by offering specific examples of copyrighted material appearing in AI-generated outputs or showing that outputs are otherwise substitutive, clear evidence that these tools are not merely “learning” but supplanting protected works.

Why the Chamber of Progress is panicking

Which brings us back to the Chamber of Progress and their remarkable plea for a government blank check. If the law were really on their side, they wouldn’t need the President to intervene. The truth is, they’re nervous. And they should be.

The Chamber represents the largest AI and tech firms in America, companies valued in the trillions of dollars, and those companies want to maintain margins and multiples no matter the cost to other long-standing and highly valuable segments of our economy. If courts continue to recognize that AI training and outputs can infringe on copyrighted works, Big Tech will have to negotiate more licenses and continue paying creators. And, by the way, more licensing agreements could actually prove helpful to AI systems by ensuring their products have reliable access to accurate, fact-checked content. However, no matter how the Chamber tries to spin it, that’s not “anti-innovation.” That’s accountability.

The Chamber’s proposed outcome would obliterate that accountability, retroactively blessing a decade of mass data scraping and granting legal immunity to the industry for whatever it does next. It’s an act of desperation masquerading as policy. It’s impunity masquerading as progress.

A final word

For two centuries, copyright law has powered one of the most dynamic creative economies in the world. It protects authors, journalists, musicians, filmmakers, and artists while still allowing room for innovation. The Chamber of Progress’s proposal would dismantle that legacy overnight, transforming fair use from a balanced doctrine into a blanket permission slip.

As these cases move forward, the courts are doing their job: weighing evidence, applying the law, and adapting old principles to new technology. That’s how progress is supposed to work in a democracy governed by the rule of law.

The Chamber may sense the writing on the wall. The creative industries are organized, the evidence is mounting, and the courts are increasingly skeptical of AI’s “just-learning” defense. That’s why they’re now seeking the Administration’s help to tilt the landscape in their favor.

Throughout history, new technology has tested the limits of copyright, from photocopiers to radio, television, and the internet. But the courts have a long track record of determining how emerging tools fit within existing law. Innovation and creativity thrive together only when both are respected. Protecting the rights of those who produce original work ensures that progress benefits everyone. And that’s more than fair enough.

Accountability is not censorship
September 18, 2025 | https://digitalcontentnext.org/blog/2025/09/18/accountability-is-not-censorship/

The recent killing of Charlie Kirk—regardless of one’s political alignment—has intensified national reflection on the state of our political discourse. Violence against anyone for their beliefs is an assault on democratic values. This moment has sparked rare bipartisan calls to reject incendiary rhetoric and recommit to civil engagement. 

Radicalization in politics is not new. But today, it is amplified, monetized, and normalized through the very platforms where our public discourse now lives. It feels more commonplace today than at any point in American history. That’s not necessarily because it happens more often, but because it’s more visible, more immediate, and more inescapable in an era of social media and live-streamed video and audio. We’re seeing and hearing things we might never have been exposed to in the past. 

Algorithmic amplification and indefensible immunity 

This is no accident. Social media algorithms are explicitly designed to maximize user engagement—not accuracy, civility, or truth. The most inflammatory content is rewarded with amplification, regardless of whether it’s true, defamatory, or dangerous. This creates a system where extremism is not just tolerated but incentivized. The result is an environment that’s not just toxic — it’s legally unaccountable. 

Section 230 of the Communications Decency Act was originally intended to protect online platforms from liability for user posts. But today, it provides near-total immunity to the largest tech companies—even when their own algorithms actively promote harmful, illegal, or even deadly content.  

For example, in Gonzalez v. Google, the family of a U.S. citizen killed in a Paris terrorist attack argued that YouTube’s algorithm actively recommended ISIS content. And yet, courts shielded Google from liability under Section 230. When a multibillion-dollar company can engineer its systems in a way that results in the promotion of extremist propaganda and then disclaim all responsibility, we must ask: What is the purpose of a liability shield that protects this behavior? 

Section 230 protections have shielded platforms from accountability even in tragic and preventable cases: 

  • In Doe v. MySpace, courts dismissed the claims of families whose children were sexually exploited, ruling that platforms aren’t responsible for foreseeable harm arising from user interactions. 
  • In Doe v. Snap Inc., parents whose children died from fentanyl-laced drugs bought via Snapchat were similarly blocked from pursuing legal remedies—even though Snapchat’s disappearing message design arguably enabled the illegal activity. 

These are not edge cases. They reveal a systemic failure: social media companies face no consequences for design choices that would be unacceptable for other types of companies. Section 230 has become a legal firewall for product decisions that would not pass muster in any other industry.  

Congress has an urgent responsibility to reform this law. At minimum, companies should be required to: 

  • Take reasonable steps to prevent foreseeable harm enabled by their platforms, 
  • Be transparent about how algorithms influence outcomes, 
  • Be held liable when platform features contribute to illegal activity. 

Tech companies argue that any effort to place guardrails on social media amounts to censorship. In reality, they are protecting their bottom line. Reform would threaten the low-cost, high-profit business model that relies on unfettered data extraction and behavioral manipulation. 

In every other industry, companies are held accountable for the products they design—especially when harm to children is involved.  

News organizations are held to account for what they publish—often in court. Whether it’s a private citizen or a powerful public figure, individuals have legal recourse when they believe they’ve been wronged by the press. Consider the high-profile case of Hulk Hogan, who sued Gawker Media for invasion of privacy and won $140 million in damages—a verdict that ultimately forced the company into bankruptcy. That case underscores a fundamental principle: when media companies cause harm, they can be held liable. 

In other industries, many major companies have been held liable for selling defective or unsafe products that led to the deaths of children, resulting in multimillion-dollar verdicts and settlements. IKEA settled for $46 million over dressers that tipped over; Fisher-Price paid a $13 million penalty and undisclosed settlement amounts and was forced to recall Rock ‘n Play sleepers tied to over 100 infant deaths; and Evenflo is currently facing multiple lawsuits and investigations for marketing “safe” booster seats even though it allegedly had internal data showing a high risk of injury or death.  

It is outrageous that parents who have lost a child to suicide because of social media algorithms don’t have the same opportunity for justice. 

Considering the corrosive impact of online extremism, we must expect more—from platforms, from policymakers, and from ourselves. The best way to restore a healthier political discourse, a safer digital environment—and a safer world—is to make social media companies legally responsible for the products they design and the harm those products cause. This is not a debate about censorship. It’s about accountability. If your business model profits from harvesting Americans’ most personal data, you must also bear responsibility when that model causes real-world harm. 

Time’s up for platform privilege
June 26, 2025 | https://digitalcontentnext.org/blog/2025/06/26/times-up-for-platform-privilege/

Just as a leopard doesn’t change its spots, Google and Meta haven’t changed their ways. Despite mounting legal threats and public backlash, both big tech platforms continue to behave as if rules don’t apply to them.

New evidence has emerged to underscore that Google’s original unofficial motto of “Don’t be evil” was never really their true North Star. Instead, it is a smokescreen for big tech’s naked ambitions. Meta’s early motto—“Move fast and break things”—may have been more honest, but the honesty makes it even more damning. As it turns out, the broken things weren’t just outdated norms or sluggish competitors. They were the foundations of fair competition, user privacy, democratic discourse, and now, copyright law. The damage isn’t merely collateral; it is strategic.

Big tech’s anticompetitive behavior enters its AI era

Now we’re seeing a similar pattern unfold with generative AI. In Kadrey v. Meta, evidence unsealed early this year suggests Meta execs, including Mark Zuckerberg, chose to pirate copyrighted content to train its LLaMA AI model. It was revealed that Meta initially explored licensing but opted instead to download pirated content via BitTorrent from LibGen under the reasoning that doing things the legal way would take too much time.

Worse, the company allegedly stripped copyright management info from the files to cover its tracks. Clearly, they’re following the motto of moving fast and breaking things. This time around, they seem intent on breaking copyright law. Given Meta’s long track record, I’m not sure what is most surprising: the planning of such a sophisticated heist or the ham-handed cover up. Either way, they graciously documented it all in email.

Meanwhile, over in Mountain View, Google has once again leveraged its search dominance to take traffic and revenue from publishers. In May, Google launched AI Mode, which scrapes and summarizes publishers’ original content to give users the answer without needing to click through, thereby stripping away the incentives for the publisher.

In a bit of stunning bravado, Google rolled out AI Mode just 48 hours before closing arguments in the remedies phase of the Google Search trial, where the evidence clearly shows that Google abused its market power in search to maintain its significant advantages in crawling, clicks, and query data, which are paramount to the AI era. Google claims publishers can opt out. However, they can only do so by removing themselves from search entirely – which is no choice at all when it involves a company with more than 95% of mobile queries (see the robots.txt sketch below). Google’s unauthorized use of copyrighted content to create a substitutive product has, to no one’s surprise, led to a massive downturn in traffic to publisher sites. Simultaneously, Google announced that Gemini will soon be on by default for consumers, collecting data about their activities. This is an oft-used strategy by Google: they tune the defaults for maximum data collection, knowing full well that consumers won’t know or take the time to shut them off.
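The opt-out bind is visible in robots.txt mechanics. As publicly documented at the time of writing, Google’s Google-Extended token only governs use of content for Gemini model training, while AI Overviews and AI Mode are served through ordinary Search crawling by Googlebot. A sketch (effects paraphrased in the comments; verify against Google’s current crawler documentation):

```
# robots.txt -- sketch of the opt-out bind described above.

User-agent: Google-Extended   # opts content out of Gemini training,
Disallow: /                   # but does not remove it from AI Overviews

User-agent: Googlebot         # the only lever over AI Overviews/AI Mode --
Disallow: /                   # and it removes the site from Search entirely
```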

The courts push back

However, despite big tech’s brazen and predictable pattern of brutish behavior, the legal system may be starting to catch up with the platforms’ anticompetitive tactics. Google has been found liable for violating antitrust law in both the search and ad tech markets. And at least in the search case, the Court has been very focused on ensuring AI is a competitive marketplace rather than the fruit of more Google abuses. In addition, we’re starting to get additional clarity on how copyright law applies in this new digital age of AI.

In Thomson Reuters v. Ross Intelligence, U.S. District Court Judge Stephanos Bibas ruled that Ross infringed copyright by using Westlaw’s headnotes to train an AI competitor, despite Ross’ claims of fair use. Initially, Ross reached out to Thomson Reuters to license the content but ultimately opted to acquire the Westlaw content from a third party, LegalEase (which sounds eerily similar to Kadrey v Meta).

Judge Bibas rejected all of Ross’ defenses, stating that innocent infringement, copyright misuse, merger defense, scenes à faire defense, and fair use did not apply. On fair use, Judge Bibas eloquently analyzed the four established factors: the use’s purpose and character; the copyrighted work’s nature; how much of the work was used and how substantial a part it was relative to the copyrighted work’s whole; and how Ross’s use affected the copyrighted work’s value or potential market.

On the fourth factor, Judge Bibas found that Ross “meant to compete with Westlaw by developing a market substitute.” He wrote that this factor is “undoubtedly the single most important element of fair use.” That seems like an important ruling in light of the way Google’s AI Mode trains on and serves as a substitute for publishers’ original content.

In April, U.S. District Court Judge Sidney Stein rejected OpenAI and Microsoft’s motion to dismiss, thereby allowing all of the copyright and trademark dilution claims from The New York Times’ suit to proceed. While the bar is admittedly lower for a motion to dismiss, Judge Stein noted “that plaintiffs have plausibly alleged the existence of third-party end-user infringement and that defendants knew or had reason to know of that infringement.”

Then, in May, the U.S. Copyright Office released a report on AI training and fair use. It concluded that using massive troves of copyrighted content to generate commercial AI outputs likely fails fair use, especially when done through illegal means. The report also notes that “effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights.” The Copyright Office rightly recognized that creative works are not mere “data” to be harvested, but expressions of human authorship protected by the Constitution and enshrined in U.S. copyright law.

From slogans to standards

So, what does this mean? For one, courts are rejecting the Silicon Valley myth that fair use lets AI companies take whatever they want. Licensing isn’t just viable, it’s required. Congress should pay attention.

Although there will inevitably be bumps along the road as fair use analysis is unique to each case, these rulings act as a compass to where things are headed. They send important signals to big tech companies with a history of anticompetitive behavior: don’t be evil or you may be held liable. The old playbook—take first, ask questions never—isn’t going to work in this new AI era. It’s time for a better North Star: accountability, transparency, and fair competition.

Senate testimony: A call for accountability in digital advertising
May 7, 2025 | https://digitalcontentnext.org/blog/2025/05/07/senate-testimony-a-call-for-accountability-in-digital-advertising/

As CEO of Digital Content Next (DCN), I testified on April 1, 2025, to the Senate Judiciary Subcommittee on Antitrust on behalf of our members—leaders in trusted journalism and premium entertainment. My testimony addressed the persistent anticompetitive behavior of dominant tech platforms, particularly in the digital advertising market. While our members come from diverse media backgrounds, they all rely on fair access to the open internet to create, distribute, and monetize original content. Yet today’s digital ad marketplace lacks basic rules, allowing tech giants to operate without accountability—fueling opaque practices, data arbitrage, and conflicts of interest that harm publishers and consumers alike.

Since my testimony, Judge Leonie Brinkema ruled that Google abused its monopoly in ad tech, with remedies now under consideration—including potentially breaking up parts of its ad business. Meta also faces an ongoing antitrust suit from the FTC. While these cases are encouraging, legislative action is essential to prevent further harm.

That’s why DCN strongly supports the bipartisan AMERICA Act, introduced by Senator Lee. The bill sets clear, common-sense rules to restore transparency, promote competition, and curb platform abuses. This will ensure a fairer digital marketplace for content creators and the public.

Key points from Jason Kint’s testimony:

Surveillance Advertising Dominance

Google and Meta built their advertising empires by embedding surveillance across the open web, exploiting content and user data without consent or fair compensation.

Market Distortion and Publisher Harm

The current ad tech system strips value from professionally created content, impairs subscription growth, and erodes trust by favoring opaque, engagement-driven algorithms.

AI Repeating the Pattern

Generative AI models are being trained on copyrighted works without permission, threatening to displace original content and replicate past harms.

Call for Policy Action

DCN supports the AMERICA Act’s measures to rein in conflicts of interest in ad tech, alongside broader calls for privacy legislation and copyright protections in the AI era.

Preserving Democratic Institutions

Without reform, unchecked platform power will continue to undermine the premium news and entertainment that consumers love.

Copyright and AI: a win win
March 20, 2025 | https://digitalcontentnext.org/blog/2025/03/20/copyright-and-ai-a-win-win/

In terms of public policy debates, Artificial Intelligence continues to be the belle of the ball, with nearly every major government courting the industry to locate its investments and jobs within their jurisdictions. Europe, China, Korea, and the U.S. (among others) have laid out competing tax and government spending plans to entice and encourage AI companies. Against this backdrop of AI frenzy, President Donald Trump, via the Office of Science and Technology Policy, has solicited input on the formation of an “AI Action Plan” in order to “define the priority policy actions needed to sustain and enhance America’s AI dominance.”

Unsurprisingly and unabashedly, tech companies advocate that the U.S. government allow their content-generating AI models to train on copyrighted material without consent or compensation. However, as DCN noted in our comments regarding the action plan, a key component to achieving the stated goal of enhancing America’s AI dominance – and the broader success of American businesses – is the robust protection and enforcement of U.S. intellectual property law including the Copyright Act.

The longstanding legal rights for copyright holders are derived from the U.S. Constitution (Article I, section 8, clause 8), which affords them the opportunity to monetize the results of their hard work and investment in a variety of ways and incentivizes them to reinvest in the creation of additional content and new, innovative delivery mechanisms to potential consumers. As a result of these longstanding rights, American content creators, including news organizations and other publishers, are able to contribute significantly to U.S. economic growth, including through employment, exports, an important trade surplus, and digital services and goods. 

According to a recent study, copyright-based industries accounted for 12.31% of the U.S. economy and 63.13% of the U.S. digital economy. From 2020 to 2023, these industries outpaced U.S. economic growth almost threefold. Copyright-based industries employ 56.6% of all workers in the digital sector. The annual compensation paid to core copyright workers is approximately 50% higher than the average U.S. annual wage. As for the global impact, the sales of select U.S. copyrighted products in overseas markets amounted to $272.6 billion, which exceeded the sales of other IP industries including pharmaceuticals, agriculture, and aerospace.

Unfortunately, the manner in which many AI developers have exploited original content without consent or compensation – to build and operationalize their commercial products – has unjustifiably violated the rights of copyright holders. It has upended the existing balance which has historically sustained and promoted innovation.

AI developers use copyright-protected content not only to “teach” their models to predict and mimic language skills, but also as a means to create compelling outputs, which have the compounding harm of substituting for the original works on which the models were trained. This activity unfairly competes with those who invested in the creation of the original material and undermines their ability to seek a fair economic return. In fact, U.S. Senior District Judge Beryl Howell noted earlier this week, in a copyright case where the defendant attempted to argue fair use, that the publisher’s content is “so valuable they put a copyright on it.” Exactly.

By “reaping that which they do not sow,” AI companies cause harm to creators, publishers, and the ecosystem as a whole. It is important that this form of destructive misappropriation be deterred, whether by copyright law or other appropriate means. In the U.S., there are 39 related lawsuits and counting. The outcome of these suits will provide much-needed clarity regarding the application of existing copyright law, including the fact-specific defense of fair use, to the use of copyrighted works to develop generative AI technology.

However, one U.S. District Court recently confirmed that licensing is required for the use of copyrighted content to train an AI system. In Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., the court, applying clear and recent precedent from the U.S. Supreme Court, held that the defendant’s unauthorized use of the plaintiff’s works to train the defendant’s AI system was direct infringement and did not constitute fair use. The court reaffirmed that the impact of the use on existing and potential markets is the single most important element of a fair use analysis, and that there was clearly a potential market to use the materials at issue in the case to train AI. 

Lest the VC crowd be dismayed, a licensing framework is emerging as many deals have been struck by publishers, record labels, motion picture industries, and others. OpenAI, Google, and Perplexity have all made efforts to pay for the right to use protected content to power their models and tools. This is a clear acknowledgment that this model is not only necessary, but eminently feasible.

While publishers’ rights are coming into clearer focus in the U.S., AI companies are beginning to feel a shared pain, as evidenced recently by DeepSeek’s R1 model. OpenAI accused the company of IP theft, claiming that DeepSeek may have used OpenAI’s IP and violated its terms of service to develop its AI model. 

“We know PRC (China) based companies – and others – are constantly trying to distill the models of leading US AI companies,” OpenAI said in a statement to Bloomberg. “As the leading builder of AI, we engage in countermeasures to protect our IP, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the US government to best protect the most capable models from efforts by adversaries and competitors to take US technology.”

A rising tide can lift all boats. Only by maintaining existing copyright protections can we ensure a robust, free market in which creators are incentivized to make high-quality works and AI companies are incentivized to license them. Importantly, in this robust market, AI companies would continue to have access to quality content, which is critical for training and outputs. The American values of IP protection have been a cornerstone of our country’s innovative spirit and competitive edge over foreign adversaries. Protecting IP is a matter of preserving the core principles that distinguish American businesses in the global market. Throughout U.S. history, copyright and innovation have gone hand in hand, and there is no reason to deviate from that successful combination as we build the next chapter.


Read DCN’s Comments on the AI Action Plan, which were filed with the Office of Science and Technology Policy on March 15, 2025

The big 5 for 2025: forces impacting media and tech
December 5, 2024 | https://digitalcontentnext.org/blog/2024/12/05/the-big-5-for-2025-forces-impacting-media-and-tech/

As we barrel into the new year and all that awaits, the media industry is at the nexus of technological disruption, regulatory upheaval, and changing consumer sentiment toward media and expectations of it. From the rise of artificial intelligence to intensifying antitrust enforcement and the shifting stance of dominant platforms, the stakes for publishers and content companies have never been higher.  

Here’s a look at five critical trends in the media landscape and what they may mean in the future. 

1. AI reshapes content creation and distribution 

Artificial intelligence is reshaping content creation and distribution with breakneck speed. AI-generated search results are increasingly the norm, while the fate of the underlying articles and video remains murky. Publishers are leveraging AI to scale production, personalize experiences, and streamline workflows. However, while this boom propels media forward, the underlying AI models remain contentious for their devaluation – or outright dismissal – of property rights, IP, and the fair value of content, not to mention the debate around the quality and accuracy of AI-generated search results and source attribution.  

In 2025, marquee copyright cases are slated for trial. Courts will tackle questions about how intellectual property laws apply to works created or transformed by AI rather than humans. At stake are the legality of using copyrighted material to train AI models and the extent to which those models can monetize their output while risking, if not entirely supplanting, the clear licensing opportunity for publishers. These rulings will set precedents and could rewrite the rules of the road for both AI developers and publishers. 

Joining the groundswell, Canadian media orgs jumped in last week by collectively suing OpenAI, alleging unauthorized use of their news reporting to train its models. Similar lawsuits are expected to continue globally as publishers push for enforcement against misappropriation and/or copyright violations of their work. 

Media companies must once again prepare for these shifts by walking and chewing gum at the same time. As ever, publishers must safeguard their media content while continuously experimenting. The challenge will be striking the balance between embracing AI’s potential and ensuring accountability with their strategic technical platform partners. 

2. The role of a free and plural press amid political threats 

In this era of heightened political tensions, the role of the press as a democratic watchdog is paramount. In the U.S., the new administration brings with it a wave of uncertainty.  Media leaders watch with a wary eye as leadership nominations roll in.   

Concerns about surveillance, legal pressures and expense, and erosion of journalistic protections here and around the world are intensifying (to say the least). Globally, authoritarian regimes are leaning on tech to suppress dissent and control narratives, challenging the resilience of independent media.  

For publishers, protecting and promoting a pluralistic media ecosystem is essential. This means investing in news reporting, supporting press freedom initiatives, and maintaining commitments to accuracy and integrity despite political pressures. As threats to press freedom grow, a robust fourth estate remains critical to the industry’s long-term viability as well as to democracy itself.  

3. Social platforms: shifting sands in distribution 

Social media’s dominance in content distribution is being reshaped by user migration. Elon Musk’s tumultuous leadership of X (formerly Twitter) has alienated advertisers and much of its user base, notably journalists. This has fueled the rapid rise of Bluesky, a decentralized alternative designed to resist the power of billionaires and governments. Remarkably, Bluesky is approaching or has surpassed Meta’s Threads in certain usage metrics. Bluesky’s embrace of open-web principles and support for journalism –  very different from the current suppression of links on X and Threads – has further endeared it to journalists and publishers. 

These platform shifts come amid the FTC v. Meta antitrust trial, scheduled for April 2025. Although the legal complaint focuses on the relevant market of social media built around the personal social graph (thereby excluding X, Bluesky, Threads, and LinkedIn), the dynamics of platform competition remain crucial for publishers to connect with new audiences where they want to be reached. It is also as yet unclear how the incoming Trump administration will respond to a potential ban of TikTok, which is set to hit a key milestone the day before his inauguration. 

For media companies, platform diversification has long been a requirement. Relying too heavily on any one distribution channel leaves brands vulnerable to algorithm changes, shifting user sentiment, and unpredictable policy shifts. Building owned-and-operated platforms, prioritizing direct relationships with audiences, and leveraging multiple distribution channels are essential strategies to ensure resilience in this fragmented ecosystem. 

4. Regulatory and court interventions reshape big tech 

2025 is shaping up to be a watershed year for antitrust regulation and enforcement. The U.S. Department of Justice (DOJ) has already won its search antitrust case, calling for the divestiture of Google’s Chrome browser and potentially its Android operating system. Meanwhile, the DOJ’s Virginia adtech case (expected to result in another major win) foreshadows broader changes to Google’s dominance in digital advertising. Next up: the Texas adtech trial in March, followed by the previously mentioned FTC antitrust case against Meta. 

Beyond the U.S., Canada’s competition regulator called for the breakup of Google’s adtech business last week, with the European Union likely to follow suit. These developments could significantly reshape the global ad market, offering publishers an opportunity to regain control over their data and revenue streams. 

However, this upheaval also introduces uncertainty. Navigating new partnerships, technologies, and regulatory frameworks will require adaptability and a willingness to lean into long-term strategy while bearing short-term headaches (read: costs). Building strong first-party data capabilities and exploring alternative adtech solutions will be crucial for growth in this evolving environment. 

5. Advertising reinvented: privacy, AI, and accountability 

Advertising is undergoing a transformation driven by consumer privacy concerns and regulation. The death of third-party cookies and the rise of privacy-focused technologies have elevated the importance of first-party data. This means that publishers’ direct relationships with audiences and the high-quality content they provide are more valuable than ever. 

Google’s antitrust challenges are also poised to reshape the future of advertising. The cases brought against the company globally allege manipulation of ad auctions and abuse of monopoly power to harm publishers and consumers alike. If successful, these actions could reinvigorate competition and enable publishers to negotiate better terms – terms that might have been available over the past decade absent Google’s conduct. The greatest fruit of Google’s abuses across search and adtech may well be YouTube, where Google has married its unparalleled access to search, location, web-wide browsing, and adtech data with the largest pool of streaming video inventory on earth. It will be interesting to see whether this attracts regulatory scrutiny in 2025. 

At the same time, AI is accelerating the evolution of advertising strategies. Predictive targeting, on-the-fly ad creative, and more sophisticated campaign-management tools are helping large platforms capture new dollars from offline retail media while maintaining stronger privacy protections. For publishers, a dual focus on consumer trust and innovative monetization will be critical if they want to peel off some of these dollars. 

Outlook: shaping the future of media 

In 2025, the media industry is defined by rapid change and high stakes. From AI-driven innovation and platform fragmentation to regulatory challenges and shifting consumer expectations, content companies face a complex and evolving landscape. Success will require a commitment to trust, adaptability, and creativity. 

As DCN has long advocated, publishers prepared for these shifts – whether through diversifying revenue streams, strengthening first-party data, or doubling down on audience relationships – will be well-positioned to survive if not thrive. In this new era of accountability and competition, it’s not just about outlasting disruption; it’s about shaping what comes next. 

The post The big 5 for 2025: forces impacting media and tech appeared first on Digital Content Next.

AI developers favor premium media content for training
https://digitalcontentnext.org/blog/2024/11/12/ai-developers-favor-premium-media-content-for-training/ (Tue, 12 Nov 2024)

As large language models (LLMs) evolve from experimental tools to valuable assets, transparency in their data sourcing is rapidly declining. Initially, datasets were openly shared, allowing the public to examine the content used for training. Today, however, LLM companies tightly guard their data sources, leading to new intellectual property (IP) conflicts. Many media companies are pursuing litigation to protect their content from unauthorized use in AI training. At the same time, courts, regulators, and policymakers are engaged in debates over content ownership and the responsibilities of LLM developers.

A new report by George Wukoson, Ziff Davis’ lead AI attorney, and Joey Fortuna, the company’s chief technology officer, sheds light on the nature of data sources used by major LLMs. Their research reveals that AI developers often favor high-quality content when selecting training data, especially content owned by premium media companies. Their findings support discussions around publishers’ IP rights, content licensing, and the ethical dimensions of AI development.

Dataset analysis and key findings

Wukoson and Fortuna’s research uses Domain Authority (DA), a metric developed by Moz for search engine optimization, to measure the prominence of domains in several LLM training datasets. They examine Common Crawl, C4, OpenWebText, and OpenWebText2, and analyze how curation levels affect the inclusion of content from high-DA, premium sources.
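
To make the approach concrete, here is a minimal Python sketch of the kind of analysis the report describes: extract domains from a sample of a dataset’s URLs, look up each domain’s DA score, and compute both the share of the sample in each DA decile and the share coming from a chosen set of premium publishers. This is not the authors’ actual pipeline; the file names, the CSV column names ("domain", "da"), and the assumption that DA scores are available as a pre-exported CSV are illustrative only.

import csv
from collections import Counter
from urllib.parse import urlparse

def load_da_scores(path):
    # Read a two-column CSV mapping each domain to its DA score.
    # The column headers "domain" and "da" are assumed, not standard.
    with open(path, newline="") as f:
        return {row["domain"]: float(row["da"]) for row in csv.DictReader(f)}

def da_bucket(score, width=10):
    # Map a 0-100 DA score to a decile label such as "90-100".
    low = min(int(score // width) * width, 100 - width)
    return f"{low}-{low + width}"

def bucket_shares(domains, da_scores):
    # Share of the sample falling in each DA decile;
    # domains without a known DA score are skipped.
    counts = Counter(da_bucket(da_scores[d]) for d in domains if d in da_scores)
    total = sum(counts.values())
    return {bucket: n / total for bucket, n in counts.items()} if total else {}

def premium_share(domains, premium_domains):
    # Share of the sample coming from a given set of publisher domains.
    return sum(1 for d in domains if d in premium_domains) / len(domains)

# Hypothetical usage: sample_urls.txt holds one URL per line, and
# domain_authority.csv holds "domain,da" rows exported beforehand.
# da_scores = load_da_scores("domain_authority.csv")
# domains = [urlparse(u.strip()).netloc for u in open("sample_urls.txt")]
# print(bucket_shares(domains, da_scores))
# print(premium_share(domains, {"nytimes.com", "wsj.com"}))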

Their findings show that as datasets become more curated, the share of content from high-quality publishers rises significantly. Key findings from the research:

1. Increasing inclusion of premium content

In less curated datasets like Common Crawl, content from major media companies makes up only about 0.44% of the total. In OpenWebText2, a highly curated dataset, that share jumps to 12.04%. This shift indicates that LLM developers selectively incorporate reputable sources to improve the quality and accuracy of model output.

2. Higher DA correlates with higher curation levels

Common Crawl, an uncurated dataset, has over 50% of its domains scoring below 10 on DA, indicating that it includes a significant amount of low-authority content. In contrast, 39.4% of the domains in OpenWebText2 have DA scores between 90 and 100, reflecting a preference for high-authority, reliable sources as datasets undergo curation.

3. Prominence of premium content

Leading publishers like The New York Times and News Corp consistently appear in the top DA range (90–100), reflecting their high authority in the dataset rankings. Their dominance in these datasets suggests that LLMs are more frequently training on established news and media sources, which gives these outlets a stronger influence in shaping model behavior and responses.

These trends show a pattern in which the curation of training datasets systematically filters out lower-quality sources in favor of reputable, high-DA domains. As a result, LLMs benefit from exposure to high-quality, well-sourced content, which may enhance their performance but also raises concerns about IP use and representation.

Prioritizing high-quality, high-DA content in LLM datasets has escalated legal disputes between media companies and AI firms. For example, The New York Times has filed a copyright infringement suit against major AI developers, arguing that these companies profit from high-quality content without appropriately compensating the original publishers.

As LLMs continue transforming industries, the value of high-quality, curated content becomes increasingly apparent. The authors’ analysis shows that curation prioritizes content from high-DA, reputable media companies, amplifying their role in shaping model outcomes. This trend will likely intensify as LLM companies refine their training methodologies, sparking further debate over intellectual property and AI firms’ financial obligations to content creators. The findings call for a broader dialogue around data licensing and compensation frameworks that reflect the mutual value exchanged between content creators and AI innovators.

The post AI developers favor premium media content for training appeared first on Digital Content Next.
