As device and internet usage continues to grow worldwide, ensuring a safe, sustainable, and responsible digital environment for all is becoming ever more critical. Harmful content and misinformation are on the rise, making it difficult for advertisers to control where their ads appear and what associations users form with their brands. This is the core problem that the GARM standards aim to address.
In 2019, the Global Alliance for Responsible Media (GARM), an initiative by the World Federation of Advertisers, started working towards creating a safer digital environment. It’s an industry effort that unites marketers, media agencies, media platforms, data providers, and industry associations in a shared push to both protect the potential of digital media and reduce the prevalence and monetization of harmful content online.
The GARM charter states that the collective is committed to three core goals:
- Taking actions which will better protect everyone (children in particular) online
- Working towards a media environment where hate speech, bullying, and disinformation are challenged
- Taking steps to ensure personal data is protected and used responsibly when given
To do this, they have focused their attention on three strategic areas:
- Establishing shared, universal safety standards for the advertising and media ecosystem
- Improving and creating common brand safety tools across the industry
- Driving mutual accountability and independent verification and oversight
This article will discuss the first strategic goal, which led to the creation of the GARM Safety Floor & Sustainability Framework.
What is GARM, and What are the Benefits of GARM Standards?
GARM is the Global Alliance for Responsible Media. They’re a collective of marketers, media agencies, media platforms, tech companies, and industry associations who created a standardized framework to help identify and address potentially harmful content online and on social media.
The GARM standards are the first step in the collective’s goal of safeguarding the positive potential of digital media. They do this by providing platforms, agencies, and marketers with a standardized framework that defines safe and harmful content online.
“Our position is that you cannot address the challenge of harmful online content if you are unable to describe it using consistent and understandable language,” explains GARM.
To accomplish this, GARM developed common definitions and categories for harmful content online. To date, GARM has identified eleven key categories that advertisers can use to better understand the types of content their ads may be appearing alongside.
The main benefits of the GARM brand suitability standards include:
- Ensuring that there’s a common way to categorize sensitive content
- Providing transparency into where sensitive content may be present, thereby enabling consumer safety, brand safety, and responsible marketing
- Establishing a method for platforms to target, exclude, and report on those categories in the interest of responsible speech, public interest, and advertiser choice
Advertisers can use these categories to cautiously target or entirely exclude certain content types, and to understand the contextual nuance that exists within each category.
These categories afford a deeper understanding of the type of content that exists on the internet, as well as the context in which it appears. In doing so, they give advertisers the tools to protect their brand safety and ensure they are not funding harmful content.
Above all, the goal of the GARM standards is to give the industry tools and insights that are needed to make informed and responsible decisions about media spend and targeting.
- For ad platforms, this means adopting these standards and enforcing monetization policies that map back to the GARM suitability framework and safety floor
- For agencies, this means leveraging the GARM framework to guide how they invest with platforms across their agency accounts
- For marketers, this means using the standards to set brand safety strategies and controls at the corporate, brand, and campaign level
Brand safety is one of the key goals behind the GARM standards. By giving marketers and agencies the tools to control their brand safety, they can also tackle the issue of inadvertent monetization of harmful content. This is a win for the industry, the user, and the individual advertisers.
Why does brand safety matter?
Brand safety matters because of both the tangible impact that harmful content can have when it’s associated with your company, and because of the macro effect that this type of content has on society.
From a societal perspective, being able to identify harmful content and ensure that brands do not inadvertently monetize it is critical to the health of digital media as a whole.
According to the GARM charter:
“The rapid growth of digital communications and commerce has connected the world in unprecedented ways. Many of these connections come from advertising-supported platforms which provide immense utility to the billions of people who use them. But as the size of the audiences, and the volume of advertising and commerce on these platforms has grown, in turn, bad actors have been attracted to the environments.”
The people who either directly or indirectly support and monetize this content, GARM goes on to say, become advocates—either willing or unwilling—for that harmful behavior. Any brand should actively take steps to ensure that they don’t become an unwitting supporter of harmful online content.
From a business perspective, brand safety has tangible and proven impacts on a company’s revenue and reputation. According to the 2022 TAG/BSI US Consumer Brand Safety Survey:
- 81% of respondents would stop purchasing a product they regularly buy if they discovered the brand’s ads had appeared next to racist content or hate speech
- 87% said the same about terrorist recruiting videos
- 88% said it’s very or somewhat important that advertisers ensure their ads do not appear near brand unsafe content
The problem is that not every company knows if their content appears next to content that their audience would find harmful. This is why brand safety is so important.
By implementing brand safety efforts, advertisers can reduce the chances of inadvertently placing an ad next to unsavory content, thereby protecting their reputation and ensuring they don’t fund harmful actors.
This, again, is where GARM’s standards come into play. To ensure brand safety, you first need a framework for identifying unsafe content. To that end, GARM created what they call the brand safety floor.
What does “brand safety floor” mean?
The brand safety floor is a section of GARM’s suitability framework that refers to content categories that are likely not appropriate for any advertising.
This category taxonomy includes:
- Adult and explicit sexual content
- Arms and ammunition
- Crime and harmful acts to individuals and society, human rights violations
- Death, injury, or military conflict
- Online piracy
- Hate speech and acts of aggression
- Obscenity and profanity, including language, gestures, and explicitly gory, graphic, or repulsive content intended to shock and disgust
- Illegal drugs/tobacco/cigarettes/vaping/alcohol
- Spam or harmful content
- Terrorism
- Debated sensitive social issues
The goal of the GARM brand safety floor is to identify and benchmark content categories that pose a risk to advertisers, ensuring they can actively take steps to limit their exposure to that content, or exclude it altogether.
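As a concrete sketch, the safety floor taxonomy above can be encoded as a simple enumeration that a platform or agency might use to tag content. This is a hypothetical encoding, not an official GARM identifier scheme; the member names and helper function are illustrative only.

```python
from enum import Enum


class SafetyFloorCategory(Enum):
    """Illustrative encoding of GARM's eleven brand safety floor categories."""
    ADULT_EXPLICIT = "Adult and explicit sexual content"
    ARMS_AMMUNITION = "Arms and ammunition"
    CRIME_HARMFUL_ACTS = "Crime and harmful acts, human rights violations"
    DEATH_INJURY_CONFLICT = "Death, injury, or military conflict"
    ONLINE_PIRACY = "Online piracy"
    HATE_SPEECH = "Hate speech and acts of aggression"
    OBSCENITY_PROFANITY = "Obscenity and profanity"
    ILLEGAL_DRUGS = "Illegal drugs/tobacco/cigarettes/vaping/alcohol"
    SPAM_HARMFUL = "Spam or harmful content"
    TERRORISM = "Terrorism"
    DEBATED_SOCIAL_ISSUES = "Debated sensitive social issues"


def blocks_monetization(content_labels: set[SafetyFloorCategory]) -> bool:
    """Safety floor content is not appropriate for any advertising,
    so a single floor label is enough to exclude a placement."""
    return bool(content_labels)
```

Because every category in this set sits below the floor, the check is deliberately all-or-nothing; the nuance of cautious targeting only enters with the suitability framework's risk tiers.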
The brand safety floor informs the wider GARM suitability framework, which breaks each category into low, medium, or high risk. For each of these grades, GARM provides a description of the type of content that might be included. This ensures that advertisers and platforms can understand the nuance within each category and react to it more intelligently.
For example, low-risk content may discuss the topics listed above through educational information or scientific narratives. High-risk content, on the other hand, might explicitly endorse them, show graphic images, or glamorize or exploit the subject matter.
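To illustrate how this low/medium/high grading might drive a placement decision, here is a minimal sketch. The `Risk` tiers and `placement_allowed` helper are invented for illustration; they are one plausible way to model the framework, not GARM's own implementation.

```python
from enum import IntEnum


class Risk(IntEnum):
    """Hypothetical risk tiers mirroring the suitability framework's grading."""
    LOW = 0     # e.g. educational or scientific treatment of a topic
    MEDIUM = 1
    HIGH = 2    # e.g. explicit endorsement, graphic or glamorizing content
    FLOOR = 3   # safety floor: never appropriate for advertising


def placement_allowed(content_risk: Risk, brand_tolerance: Risk) -> bool:
    """Allow a placement only if the content's assessed risk does not
    exceed the brand's configured tolerance; floor content is always
    blocked, regardless of tolerance."""
    if content_risk is Risk.FLOOR:
        return False
    return content_risk <= brand_tolerance
```

Under this model, a cautious brand that sets its tolerance to `Risk.LOW` would run only alongside educational treatments of a sensitive topic, while a brand tolerating `Risk.MEDIUM` would accept a wider slice of inventory.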
Together, GARM’s brand safety floor and suitability framework provide both a starting point and a nuanced analysis of each of these sensitive categories, helping brands make the right decision about their ad placements and adjacency.
How to implement brand safety and suitability
Implementing brand safety and suitability involves a combination of contextual planning and platform-based functionality.
First, advertisers will need to assess each of the categories listed above, along with any others that may not align with their target audience, to identify content types that might be harmful to the brand. This assessment should take GARM’s risk profiles into account so that potentially relevant and suitable content is not inadvertently excluded.
From this assessment, advertisers can decide which categories they want to include, and which to exclude. Depending on the scope of the ad program, this could include campaign-specific inclusions and exclusions, or the same for the entire ad program.
Categories and risk levels can either be permanent inclusions or exclusions, or they can be turned on and off as needed. For example, adding a content exclusion tied to a major news event or crisis can help a brand avoid the appearance of profiting from those events.
In some cases, “always on” exclusions might make sense to ensure that ads are blocked from unsafe environments in perpetuity.
However these categories are used, it’s important that advertisers have the right platform to enable both transparency into category performance, and the ability to actively include or exclude them.
Peer39, for example, offers brand suitability controls that let advertisers select the right risk levels for each category based on their brand’s needs. These categories and risk levels can then be used to control targeting on all campaigns, as needed, without the need to use personal data.
Suitability insights offered by Peer39 provide valuable data and metrics to help advertisers understand how those controls impact ad inventory and volume. This can be used to help inform campaign planning, and to make changes to campaigns as they progress.