The Advertising Industry’s Deepening Role in Online Censorship
In the arsenal of the censorship-industrial complex, few weapons have been more effective than advertiser boycotts. Long before online censorship reached its peak in 2020 and 2021, advocates of online censorship had identified online advertisers as the most important source of pressure on social media companies to restrict free speech. When direct appeals to social media platforms fail, pro-censorship campaigners use the threat of advertiser boycotts to produce the desired result.
This playbook has been effective. Since 2016, the advertising industry has repeatedly engaged in boycotts against platforms in a bid to restrict online free speech to a narrow set of “brand safe” viewpoints. Three major incidents stand out, each one targeted at a major online platform:
- The 2017 YouTube ‘Adpocalypse’: coordinated media coverage of ads appearing next to extremist and “hateful” videos on YouTube led to an advertising boycott that cost the company millions. YouTube responded by implementing a draconian system of demonetizing anything remotely controversial, decimating the revenue streams of independent creators and establishing a content moderation regime that persists to this day.
- The 2020 Facebook “disinformation” boycott: driven by panicked media coverage about Facebook’s alleged failure to sufficiently censor then-president Donald Trump and his supporters, advertisers fled Facebook at the height of the 2020 presidential election campaign.
- The 2022-23 X boycott: calls for advertisers to financially throttle X, formerly Twitter, began before Elon Musk had even completed his acquisition. Upon his takeover, the advertising industry quickly complied. Within a year, X’s advertising revenue had been cut in half.
On top of this, two of the most potent forces of financial blacklisting emerged directly from the advertising industry:
- NewsGuard: a private company that financially throttles alternative media by selling blacklists of disfavored news sources to advertisers, NewsGuard is a creation of the advertising industry. Its 2018 seed round was led by Publicis Groupe, one of the “big six” ad agency holding companies. It counts two other Big Six companies, IPG and Omnicom, as clients.
- Sleeping Giants: founded by far-left advertising professionals, Sleeping Giants was a campaign to financially throttle alternative news sources by spreading smears about its targets to any brand that advertised with them. It succeeded in triggering the flight of advertisers from conservative news sites. Its alumni continue to use the same playbook against non-establishment media.
Finally, there is a litany of examples of major players in the advertising industry independently pushing censorship. “Big six” agencies and their subsidiaries regularly engage in pro-censorship activities. Examples include IPG’s principles for responsible media and content, which preclude advertising next to “hate speech,” WPP subsidiary GroupM’s decision to join the Conscious Advertising Network, which promotes similar principles, and Omnicom and IPG’s decision to offer their clients access to NewsGuard blacklists.
The Global Alliance for Responsible Media (GARM)
While advertiser boycotts are often preceded by media and NGO-led pressure campaigns, the advertising industry has also acted on its own initiative to promote internet censorship.
In 2019, the World Federation of Advertisers, a body that represents roughly 90 percent of global advertising spend (almost one trillion dollars), moved to institutionalize the advertising industry’s new role as the global internet police by establishing the Global Alliance for Responsible Media (GARM).
GARM’s mission is to establish a shared definition of “harmful content” across the entire ad industry, so that advertisers can collectively blacklist disfavored sources of content. It is, in effect, a declaration of permanent boycott against platforms that are too free speech-friendly.
Beyond establishing GARM, the WFA has also attempted to influence content on social media directly. In 2020, after major advertisers boycotted Facebook, Facebook, Twitter, and YouTube reached an agreement with the WFA to arrive at “common definitions” of harmful content, further agreeing to have some of their processes reviewed by external auditors.
GARM’s “brand safety floor” was the next evolution of the ad industry’s plan to standardize speech restrictions across social media. In its most recent version, GARM’s shared framework includes the most common pretexts for censoring political speech: misinformation, hate speech, content that “shocks, offends, and insults,” and even the improper discussion of sensitive social issues.
If GARM’s “brand safety floor” were adopted across social media platforms, it would ensure online speech is nowhere close to the ideal of the First Amendment.
According to internal communications obtained by the House Judiciary Committee, this is unlikely to be of any concern to GARM, whose co-founder is on record complaining about the “extreme global interpretation of the US constitution,” and the use of “‘principles for governance’ and applying them as literal law from 230 years ago (made by white men exclusively).”
GARM maintains its own standards for content, and makes no secret of the fact that it wants its “brand safety floor” to be adopted by any platform that wants to receive its members’ ad revenue:
- Platforms will adopt, operationalize and continue to enforce monetization policies with a clear mapping to GARM brand suitability framework
- Platforms will leverage their community standards and monetization policies to uphold the GARM brand safety floor
- Advertising technology providers will adopt and integrate GARM definitions into targeting and reporting services via clear mapping or overt integration
- Agencies will leverage the framework to guide how they invest with platforms at the agency-wide level and at the individual campaign level
- Marketers will use the definitions to set brand risk and suitability standards for corporate, brand and campaign levels.
At a recent hearing of the House Judiciary Committee, Christian Juhl, CEO of WPP subsidiary and GARM member GroupM, explained that the whole purpose of GARM is to create a one-size-fits-all definition of “harmful content” for platforms.
“Brand suitability is particular to each brand: what is unsuitable to one may be suitable to another. But all brands generally agree they do not want to appear next to illegal or harmful content. Many also seek to avoid ad placements near content that, while not illegal, does not align with their values.
With the increasing focus on brand suitability, brands wanted to better understand how publishers were identifying, prohibiting, and removing harmful content. What they found was that every platform took a different approach. Definitions of harmful content also varied. Without consistent standards, companies were concerned their ads would end up appearing in unsuitable environments. We believed that consistent standards were needed to help our clients connect with consumers, which is why we and other organizations came together to establish the Global Alliance for Responsible Media, or GARM. GARM developed standard definitions of content that brands might consider unsuitable so that advertisers and publishers could speak a common language about sensitive content.”
Later in the hearing, Juhl summed up GARM’s mission as “making order of something that had no order to begin with.”
Given that every major platform (with the exception of Substack) counts advertising as a major source of revenue, these shared standards for advertisers are, in effect, shared standards for all of social media.
As shown in the House Judiciary Committee’s report, platforms and publishers that deviate from GARM’s standards (or whose viewpoints GARM members simply don’t like) are met with swift retribution, even if ads from GARM brands don’t directly appear next to content that the cartel objects to.
Here are three major examples of GARM’s efforts to control online content, from the report:
- The Twitter/X boycott: after a collective boycott of X over Elon Musk’s relaxation of its content moderation regime, GARM members bragged about “taking on Elon Musk” and driving X “80 percent below revenue forecasts.”
- Blacklisting non-establishment media: GARM members “closely watched” disfavored news outlets to find a pretext for withdrawing ad dollars, with Breitbart News and The Daily Wire specifically named. GARM also collaborated with NewsGuard, a private company, and the Global Disinformation Index (GDI), a British nonprofit. The primary purpose of both these organizations is to build blacklists of disfavored news sources.
- Threatening Spotify: members of GARM’s steer team placed sustained pressure on Spotify over its support for Joe Rogan, then the world’s number-one podcast host, after he hosted guests skeptical of official COVID-19 policies. Members of GARM’s steer team also advised Coca-Cola, a major global brand, that Rogan and Spotify were a “major area of concern.”
Despite the fact that Rogan’s content was not juxtaposed with ads from any major GARM brand, representatives of the ad cartel still got involved, telling Spotify that it had a “disregard for spreading dangerous misinformation” and that the platform’s support for Rogan had complicated its “process of joining GARM.”
Beyond “Brand Safety”
At the same time, another email shows GARM co-founder Rob Rakowitz admitting that “brand safety is somewhat separate” from their concerns with Spotify because “brands aren’t being slotted into [The Joe Rogan Experience] by accident per se.”
This raises questions about the “brand safety” argument: the industry’s go-to, politically neutral pretext for boycotting disfavored platforms.
According to the “brand safety” argument, decisions to withdraw ads from social media platforms or news websites are driven by the fear of juxtaposing a client’s ad with controversial content that divides the public. This keeps the brand safe from any consumer backlash or negative press caused by that content — so goes the argument.
Yet, as Rakowitz admits in his disclosed email, even if we accept that brands could be damaged by running ads next to The Joe Rogan Experience (then the most popular podcast in the world), no such ads were being run, so there was no such risk. GARM pressured Spotify anyway.
This reflects the gung-ho attitude the advertising industry has taken toward pushing censorship on social media platforms over the last decade, including a desire to move beyond the “brand safety” principle.
In 2020, Interpublic Group (IPG), one of the world’s “big six” advertising holding companies and a GARM member, argued that brands should think about not just their own “safety” but the more expansive commitment of “responsibility.”
Brand responsibility, explained IPG in an announcement, shifts focus from protecting brands to protecting “the communities that a brand serves, weighing the societal impact of the content, the publishers and services, and the platforms being funded by advertising.”
IPG’s “media responsibility principles” justify financial throttling for a litany of pretexts that are typically used to curb political speech and blacklist non-establishment media. Under IPG’s principles, ad revenue can be withdrawn from any site or platform that “creates hostile conversation environments,” “spreads misinformation,” or “fuels hatred on the grounds of race, religion, nationality, migration status, sexuality, gender or gender identity, disability or any other group characteristic.”
By shifting the justification from “brand safety” to “brand responsibility,” IPG sidesteps the only limiting principle on advertiser blacklisting: the need to prove that disfavored content is somehow a risk to brands.
IPG’s own press release revealed the vagueness of its new principles: anything that has a negative “societal impact” or “contributes to harm” is deemed fair game for demonetization.
IPG’s “media responsibility” initiatives are housed in its ESG division, IPG ESG, which includes broad commitments to fighting “hate speech” and promoting diversity.
In a 2020 article, Christian Juhl, CEO of WPP subsidiary GroupM, the largest media buying company in the world and a member of GARM’s steer team, also wrote about the need to go beyond brand safety.
While praising the 2020 boycott of Facebook, Juhl argued that advertiser action driven by “brand safety” concerns hasn’t gone far enough.
So far, advertiser efforts to address the situation have proved about as effective as those “brand safety” protocols. The #StopHateforProfit boycott of Facebook this summer reportedly drained the platform of millions of dollars in revenue. But the effort fizzled once advertisers realized the boycott was doing more damage to their own bottom line than it was to Facebook’s.
As a solution, Juhl proposed the concept of “socially conscious media buying,” inspired by ESG funds, which takes the “ethical and moral consequences” of ad spending into account.
In addition to cost-per-impression, we need to be measuring cost-per-social contribution. We need to start factoring a media placement’s carbon footprint into our ad pricing. We need to support publishers that reach more diverse audiences, even if those publishers don’t yet provide the level of audience data we’ve grown accustomed to. And we need to do this with more than the usual 10% experimental budget.
This means the introduction of new tools that empower marketers to consider these sorts of ethical and moral consequences when buying media. Just as ESG investing and sustainable funds have become billion-dollar businesses on Wall Street, we believe that socially conscious media buying will find an enthusiastic audience among advertisers.
“Brand safety” is already a concept that requires guesswork, and can easily be skewed by ideological and political biases. “Ethical and moral consequences” is an even more subjective measure.
Rakowitz’s actions with regards to Spotify, IPG’s “media responsibility principles,” and Juhl’s “socially conscious media buying” all point to the same conclusion: the advertising industry wants to exploit its control of the purse-strings of ad revenue to influence content around the web.
And it considers that cause to be too important – indeed, to have too many “ethical and moral consequences” – to be restricted by the limiting principle of brand safety.