Peer39 Interview Series: socialcontext's Chris Vargo

Written by Mario Diez | Apr 22, 2022

Chris Vargo is an associate professor of advertising and analytics at the University of Colorado Boulder. He teaches in the College of Communication, Media and Information and the Leeds School of Business. His courses focus on marketing, machine learning, news, and media analytics. He is a co-founder and CEO of socialcontext and a Partner in Peer39’s Contextual Data Marketplace. We spent some time with Chris recently to learn more about socialcontext and the work he’s bringing together for the digital media industry.

Hi Chris, thanks for the time today. Let’s start with where the inspiration for socialcontext came from.

As a research professor, one of the methods I use to study data is content analysis. In the social sciences, content analysis is used to detect the presence or absence of specific items. In mass communication research, we use content analysis to study media messages. For instance, I’ve used content analysis to figure out what types of brand messages get more engagement than others on Twitter. The computational extension of content analysis, in my opinion, is supervised machine learning. So while I was getting my PhD, I took a few information science classes and started to code. Eventually, I was able to build models that worked well. One of my first acceptable supervised machine learning algorithms was used to detect anti-vax messages on Twitter at scale. We used it to figure out where a lot of the chatter was coming from, and used Census data to try to figure out what it was about those areas that correlated with that talk.
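To make the workflow concrete, here is a minimal sketch of the kind of supervised text classification Chris is describing: humans hand-label example messages, and a model learns to apply those labels at scale. The tiny dataset, the TF-IDF-plus-logistic-regression pipeline, and the variable names are illustrative assumptions, not the actual system he built.

```python
# A minimal sketch of a supervised text classifier, assuming scikit-learn.
# The hand-labeled examples and the model choice are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human coders supply the labels: 1 = message of interest, 0 = everything else.
train_texts = [
    "Vaccines cause more harm than good, do your own research",
    "Got my flu shot today, quick and painless",
    "Big pharma is hiding the truth about vaccine injuries",
    "The clinic had plenty of booster appointments this week",
]
train_labels = [1, 0, 1, 0]

# Keeping humans in the loop means people create and audit the labeled
# examples rather than trusting an unsupervised grouping blindly.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# With a realistically sized training set, the same pipeline can score new
# messages at scale.
print(model.predict(["They don't want you to know what's in those shots"]))
```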

My colleague and co-founder at Colorado, Toby Hopp, went on to build more detailed algorithms with Google and their Jigsaw team. Toby studies all of the ways people can be awful (a.k.a. uncivil) to each other online. Google’s Perspective algorithms had a similar aim: they wanted to be able to screen toxic comments on their platforms and remove them before they hurt people and conversations. When they heard of our work, they reached out to see if we’d help them build algorithms to detect more nuanced behaviors. For instance, we had an algorithm that could detect when someone was telling another person to, for lack of a better phrase, shut up. We called it “exclusion of opponents.” We learned a lot about deep learning through that team. We even worked with Twitter to help them conceive of algorithms that could help keep their platform safe. We got pretty good at detecting very specific things in text.

As you mentioned, I also teach advertising. When I was preparing a lecture on contextual advertising, I started poking around a bit at what little was out there on how the “big guys” (don’t worry, not Peer39) were doing it. I was shocked. All of the major players were using technology that was, in my qualified opinion, grossly inappropriate: either technology from 20 years ago, or somewhat newer approaches that were “unsupervised,” meaning they had no humans in the loop to ensure that what these algorithms were doing was actually correct.

Then, I got my hands on some contextual labels from “the big guys.” My jaw dropped, and we wrote a little whitepaper about it. The accuracy, precision, and recall of these contextual algorithms were, in many cases, no better than if the algorithms were working randomly, just blindly assigning articles to categories. Even for blatantly simple problems, like whether a news article was “sports” or “not sports.” We tried to recreate their labels and couldn’t. It was almost as if the machines labeling contextual data were just totally off the rails. Again, not Peer39!
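For readers who want to sanity-check a provider themselves, a toy version of that comparison looks like the following: score the vendor’s labels against human-coded ground truth, then score a coin flip the same way. All of the arrays here are randomly generated placeholders, not data from the whitepaper.

```python
# Toy evaluation sketch, assuming NumPy and scikit-learn are available.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
human_labels = rng.integers(0, 2, size=1000)      # ground truth from human coders
provider_labels = rng.integers(0, 2, size=1000)   # stand-in for a vendor's output
coin_flip = rng.integers(0, 2, size=1000)         # explicit random baseline

for name, predicted in [("provider", provider_labels), ("coin flip", coin_flip)]:
    print(f"{name}: accuracy={accuracy_score(human_labels, predicted):.2f}, "
          f"precision={precision_score(human_labels, predicted):.2f}, "
          f"recall={recall_score(human_labels, predicted):.2f}")

# If the provider's numbers are indistinguishable from the coin flip's,
# its labels carry no information about the content.
```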

I realized that I needed to do something once I started to see the unintended consequences. 20% of what brand safety providers were labeling as “unsafe” or “unsuitable” was actually good news about sensitive, underrepresented populations: African Americans, the LGBTQIA+ community, the Latinx community. It also swept up important issues like COVID-19, climate change, and racial diversity.

Who was suffering? The journalists and news orgs that create the content, and the communities and issues that deserve to be supported. When an article is marked as unsafe or unsuitable, it’s under-monetized. But also, many advertisers want to reach audiences that care about these issues. The big guys don’t have the tools, or the follow-through to build them, to look out for these groups. It kept me up at night.

So I created socialcontext. It’s a positive contextual targeting tool that helps advertisers target the news that they care about, and reach audiences that care about it, too.

How should advertisers, buyers, and platforms think about socialcontext?

Socialcontext finds good news content, right down to the article level. Only good news, with very specific definitions. You pick the specific types of news that most align with your brand, and we ensure you bid only on that news. It takes the worry away. You know exactly what types of news you’re running on.

At the very least, it’s a good supplemental campaign. It allows you to ensure that X% of your advertising budget reaches audiences reading news that aligns with your brand. Dedicate even 2% of your entire campaign budget to a socialcontext campaign, and you can say with a clear conscience that you’re sponsoring good journalism in that area.

For instance, we have a client now that’s trying to raise awareness in the LGBTQ community around unfair blood donation policies for that community. Others are using socialcontext to ensure that they regularly engage with the African American community and sponsor uplifting news that pertains to that community.

This is all done in a “non-creepy” way. We’re not using probabilistic data to infer that someone is Black or gay. We’re simply identifying the news articles that these populations care about and allowing you to appear alongside them with one click of a button.

What are you hearing in the market that makes now the right time for this offering to be adopted?

I’ll spare you the “cookies are going away” spiel; you know that. That aside, it’s high time that advertisers stop using a chainsaw to cut “good news” out of the programmatic forest. I once talked to a major programmatic buyer in the e-retail space, one that has every reason to reach the LGBT community, because it’s a key demo of theirs. They’ve been relegated to narrow programmatic inventories, like cooking blogs and entertainment news, because it’s “safe.” At the same time, they wish they had more programmatic scale. They’re ignoring tens of thousands of good news articles each day because their contextual provider doesn’t have the nuance to get them in front of the right audiences. This is what socialcontext brings to the table.

There are better methods. Forget the chainsaw; you can use a scalpel. We’re the surgeons. We can help you unlock the programmatic scale that you’re looking for, while keeping you well within your brand’s definition of “safe” and “suitable.”

Are there any interesting learnings you’ve seen recently?

Yes! It turns out that our positive contextual targeting approach is actually great at driving engagement! There’s this assumption in advertising that invasive third-party data is the only way to find the right consumers to engage with your ads. Not true! One of our programmatic campaigns right now is seeing a CTR of 1.2%! Why? Because they’re reaching individuals who read news content that matches their brand’s core values, their social responsibility values, the things they say they care about. When you sponsor good news, good things happen.

One more thing: when you run a socialcontext programmatic campaign, you don’t need brand safety or suitability stacked on top as another data layer. Instead of trying to run a wide-open campaign with overbroad exclusion techniques, try targeting only good inventory.

We only suggest quality news, and we only suggest articles that match the exact category you pay for. This means that if the rest of the ad tech supply chain does what it’s supposed to, you’ll never show up on the far reaches of the Internet: the made-for-advertising sites, the fake news sites, the divisive partisan news ecosystem. You’ll only appear on established news publishers with long-standing commitments to the principles of quality journalism. I still recommend folks pay for verification, because we can’t see what happens on the other side of the exchange, but aside from that, you don’t need to pay for more data! Stacked data fees, the CPM goblins, reduce your reach and scale. Keeping those fees down is a strength of many of Peer39’s offerings, honestly. But what does that mean for you, the advertiser? Lower CPMs, a higher percentage of your CPM going to publishers, and less chance that one of your data providers is selling you snake oil that doesn’t actually do what it purports to.

As a professor, you are seeing and shaping the next generation of the digital media industry. Anything you’d like to share?

Haha, I’m not a media futurist, but my bold prediction is that the metaverse will flop. All jokes aside, we all know that ad tech gets a bad rap in digital marketing because of the snake oil. As it pertains to contextual advertising, you, the advertiser, can put a stop to this by demanding to see under the hood of these “algorithms,” which are often no more than a bag of keywords. If you’re spending money with a contextual provider, for instance, you should demand to see what content you’re running on, right down to the actual URLs. You deserve to see what is being blocked or excluded. You deserve explanations when the algorithms don’t do what they’re supposed to.

You deserve to be able to see the actual performance metrics that matter. Accuracy is easy to game. Demand to know what the positive-class precision and recall are. Demand to see how they classify a test set of URLs you send. Ask for permanent access to a sandbox where you can input a URL and see what the classification algorithm does in real time. We offer all of the above to our clients, and we’re proud to share the work we do.
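A made-up example of why that matters: when the category of interest is rare, a model that never finds a single relevant article can still report sky-high accuracy, which is exactly why positive-class precision and recall are the numbers to ask for. The counts below are hypothetical.

```python
# Hypothetical test set of 1,000 articles where only 5% truly belong to the
# category being paid for. A "classifier" that labels everything as
# not-in-category misses every relevant article yet still looks accurate.
true_positives = 0     # relevant articles it correctly found
false_negatives = 50   # relevant articles it missed (the 5%)
true_negatives = 950   # irrelevant articles correctly ignored
false_positives = 0    # irrelevant articles wrongly included

accuracy = (true_positives + true_negatives) / 1000
recall = true_positives / (true_positives + false_negatives)
print(accuracy, recall)  # 0.95 0.0 -- 95% "accurate", zero positive-class recall
```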

That being said, no algorithm will ever be perfect. Still, the future of contextual advertising must be more transparent than it is today, or we, as data providers, will never emerge from the dark cloud of criticism that has plagued safety and suitability solutions.

For anyone interested in learning more about socialcontext, you can follow Chris on LinkedIn (https://www.linkedin.com/in/chrisjvargo/) and Twitter (https://twitter.com/chrisjvargo), visit socialcontext.ai, and always have access to the offering in the Peer39 UI (log in here).