Saturday, January 3, 2015

How Programmatic Media Buying Is Saving The Banner Ad

Why is it so tempting to dismiss entire swaths of the media landscape as dying or dead, as Farhad Manjoo did in a piece for the New York Times earlier this month? TV is dead, print is dead, and, as Mr. Manjoo suggested, the banner is dead, proclaim technology and media reporters! The answer is that killing off a media channel is a step toward simplicity in a maddeningly complex landscape. Many industries are sustained by media, which means many people need to make sense of the media landscape in order to make critical business decisions. How do I reach my target audience efficiently, asks the marketer? How do I monetize my content, asks the publisher? As complexity proliferates, it can be very tempting to intellectually dismiss an entire channel, freeing up the brain to make sense of the latest newfangled obsession.
When such a new tactic arises out of the ether, like native advertising, it can be a shining beacon in the rough waters of the media ocean, compelling media professionals to pursue it with reckless abandon. This false euphoria is the other side of the "banners are dead" paradigm. The media industry alternates between these two extremes: death and life, cool and uncool, new and old, rarely stopping to ponder the more nuanced middle ground.
Mr. Manjoo's piece is guilty of this type of sweeping generalization. His article stops just short of blaming the "web's decline" on banners. Say what? Hours after publishing, Mr. Manjoo was promptly taken to task by the president of the IAB, Randall Rothenberg, for writing an article heavy on hyperbole and light on statistics. Sure, the banner isn't perfect, but it's hardly a monster swallowing the web.
The bigger story is that the way banners, and frankly all media, are being bought and sold is undergoing a radical transformation, one that is changing how consumers experience ads on the web, and thus the web itself. And it's only just begun... Yet Mr. Manjoo's piece made no mention of programmatic media buying, which would suggest he mailed this piece in, if snail mail weren't dead, of course.
Programmatic media buying uses data to target the placement of ads. Behind this simple statement is an incredible technological phenomenon: offline data, for instance loyalty card data reflecting what a consumer buys in a brick and mortar store, can now be used to target online media. A new industry within online advertising has been forged on these technological advancements: marketers pay data brokers to match offline data with online data; they license data management platforms (DMPs) to organize that data in a clear, visual way; and they use demand side platforms (DSPs) to automate the execution of media buys that make use of the data. In short, advertisers have embraced a new technology infrastructure and now routinely target audiences based on a web user's data profile.
This data-based approach reflects a radical departure from how media has previously been bought and sold. In the past, content was used as a proxy for the target audience. For instance, a dog food maker bought ad space on a cute puppy website on the assumption that visitors to that site are pet owners. This approach has a major flaw: many of the site's visitors love cute puppies but don't have a pet to feed at home. The advertiser is paying for ads that are unlikely to positively impact the business, and the consumer is seeing something irrelevant. I see what Mr. Manjoo was complaining about.
But what he missed is that the data-based approach of programmatic media buying changes that. Now, the fact that a consumer has a history of purchasing dog food (online or in-store; both data sets are viable) becomes the criterion for targeting ads. In the era of programmatic media buying, data has replaced content as the means for targeting.
Ad targeting is now incredibly precise, but it's also automated. Programmatic media buying allows advertisers to program their buy with a target audience, and then let the DSP find that audience. This "set it and forget it" approach frees up marketers' time to focus on building creative that is useful, entertaining or otherwise relevant. Automation will power nearly 62% of banner ads in 2014 -- that's a lot of free time for marketers to invest in improving the advertising experience.
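To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of decision a DSP automates on every impression. Every name, field and number below is invented for illustration; real platforms operate on standardized bid requests, at far larger scale, and with far richer data.

```python
# Hypothetical sketch of the targeting logic a DSP automates.
# The profile store and field names are invented, not any real platform's API.

# A data broker has matched offline purchases to anonymous cookie IDs.
offline_profiles = {
    "cookie_123": {"purchases": ["dog food", "dog treats"]},
    "cookie_456": {"purchases": ["office chair"]},
}

def should_bid(bid_request, target_interest, max_cpm):
    """Bid only when the user's data profile matches the campaign target."""
    profile = offline_profiles.get(bid_request["cookie_id"])
    if profile is None:
        return None  # unknown user: no bid
    if any(target_interest in item for item in profile["purchases"]):
        return {"cpm": max_cpm, "creative": "dog_food_banner"}
    return None

# The advertiser "programs" the buy once; the platform then evaluates
# millions of bid requests like this one automatically.
bid = should_bid({"cookie_id": "cookie_123"}, "dog food", max_cpm=4.00)
```

The "set it and forget it" quality comes from the fact that the advertiser only specifies the target and the price ceiling; the matching itself runs unattended.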


Accelerating this change is the fact that data-based targeting costs about half as much as content-based targeting. That's right: even though data-based targeting is more precise at finding a specific audience, it costs more to use content -- like the dog food site -- as a proxy for that audience. In the programmatic era, advertisers don't need to pay content premiums to target an audience; they can use less expensive data.
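The economics can be sketched with some back-of-the-envelope math. The CPMs and audience-match rates below are invented for illustration; only the structure of the comparison matters.

```python
# Illustrative math only -- these CPMs and match rates are made up to show
# why data targeting can undercut content targeting on price.

def cost_per_in_target_impression(cpm, in_target_rate):
    """Effective cost of reaching one actual member of the target audience."""
    return (cpm / 1000) / in_target_rate

# Content as proxy: premium puppy-site inventory, but only half the
# visitors actually own a dog.
content_buy = cost_per_in_target_impression(cpm=10.00, in_target_rate=0.5)

# Data targeting: cheaper run-of-network inventory, with nearly every
# impression matched against purchase data.
data_buy = cost_per_in_target_impression(cpm=4.00, in_target_rate=0.95)
```

Under these made-up numbers the content buy costs $0.02 per in-target impression versus roughly $0.004 for the data buy, which is the downward pressure on publisher rates described below.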
As you can imagine, this puts downward pressure on the rates publishers can charge advertisers, threatening their business model. In essence, the ad-buying marketplace posed an existential challenge to publishers: How can you justify a premium when data allows an advertiser to more effectively and efficiently reach its target audiences?
Traditionally, the answer was that there's branding value in being associated with content consumers know, love and trust. There's also value in delivering an ad in an environment that is contextually relevant to the advertiser's product, like dog food on a puppy site. But try pitching soft branding value to a marketer charged with hitting a quarterly sales goal. Is it worth paying double for? In reality, publishers had to find a new way to justify the premium, and the answer was the content itself. Advertisers now routinely "co-create" content with publishers. Instead of buying ads next to the content experience, ads become the content experience. Once publishers began co-creating content with advertisers, the idea of integrating that content into the site's editorial stream felt less taboo, and thus native advertising was born.
The rise of content marketing and native advertising is thus interconnected with the rise of programmatic media buying. That interconnected history is still present today. Banners, content and native are three tools that complement one another. Whether publisher or advertiser, knowing how these tools are interconnected, their strengths and their strategic role will determine how the web is experienced by consumers.

Monday, September 1, 2014

The Hidden Price of Personalization

The scandal over Facebook’s “Emotional Contagion” experiment, in which Facebook suppressed posts of positive or negative sentiment in order to gauge the effect on its users, shined the spotlight on Facebook’s algorithm, and that’s a good thing. The more we realize that algorithms built by corporations control the information we see, the more likely we are to remain vigilant consumers of that information. This is no easy task.

When it comes to using technology to maintain an objective perspective of the world, we can be our own worst enemies. Even though technology gives us access to an incredible array of opinions and news sources, we tend to engage with world views similar to our own. Rather than broadening our perspective, our ability to personalize our newsfeeds can limit our exposure to different politics and cultures. It’s easy to construct “echo chambers” for ourselves that reinforce our own biases.

So when it comes to using technology to stay informed, we don’t exactly have a good track record. That said, our awareness of the “echo chamber” phenomenon is pretty high – high enough that we have a term for it. Some people even make a concerted effort to undo the effect by following folks with opposing views, as excruciating as that can be. It's safe to say we are aware of our own bias when we gather news. 

But, what we haven’t been as vigilant about is understanding how the tools that we use to gather that information, namely, personalization algorithms, carry their own bias. This bias is not of our own making, but reflects the agenda of the corporation that creates the personalization algorithm. 

This is the revelation of the Emotional Contagion experiment. Simply put, Facebook was willing to manipulate the mood of nearly 1 million people in order to understand its product better. This is bias in the sense that Facebook will filter the information a user receives in order to advance its own agenda. 

I’m conflicted about this experiment. One part of me hopes that Facebook, under Zuckerberg, is genuinely interested in the social good it can create. Even though the experiment failed to properly request consent from its participants (a clear ethical breach), if the end goal is to send ripples of happiness across Facebook’s network of a billion people, that sounds like a worthwhile sociological pursuit. On the other hand, Facebook is a business operating in a capitalist society; it’s naïve to think it is motivated by anything other than profit. Would Facebook experiment with users’ emotions in order to learn what moods make people more receptive to ads? I’m not sure we know the answer to that question. Either way, the fact that Facebook is even capable of manipulating your mood without you knowing is a massive wake-up call. To state it clearly: we do not know how personalization algorithms are filtering information for us, and therefore algorithms can have an agenda we’re unaware of.

It's hard to detect algorithmic bias for a number of reasons, mostly because we tend to think of our newsfeed as our own. It’s almost impossible not to. Our feeds are overflowing with content from friends and family, and we’ve hand-picked the publishers, celebrities and brands we want to follow. But this is only true on a superficial level. Beneath the surface, engineers of personalization algorithms, like the newsfeed, manage a tricky paradox: the algorithm must feel as if it is in the service of its end users, while also being in the service of the corporation that created it.
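That paradox can be illustrated with a toy ranking function. Nothing here reflects Facebook's actual newsfeed code; the fields, scores and weight are invented to show how a hidden business term can reorder an otherwise "personal" feed.

```python
# A toy illustration of the two-masters paradox: the ranking score blends
# what the user likely wants with what the platform wants. All fields and
# weights are hypothetical.

def rank_feed(posts, business_weight=0.3):
    """Order posts by user relevance, quietly nudged by platform priorities."""
    def score(post):
        user_term = post["predicted_engagement"]   # serves the user
        platform_term = post["revenue_potential"]  # serves the corporation
        return (1 - business_weight) * user_term + business_weight * platform_term
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "friend_photo", "predicted_engagement": 0.9, "revenue_potential": 0.0},
    {"id": "sponsored_story", "predicted_engagement": 0.5, "revenue_potential": 1.0},
]
# With business_weight=0 the friend's photo ranks first; at 0.3 the
# sponsored story edges it out -- and the weight itself is invisible
# to the person scrolling the feed.
```

The feed still "feels" personal in both cases; only the unseen weight changed.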

This issue is broader than Facebook. Algorithms control the stories you see on Yahoo’s homepage feed, the results of a Google search, and the offers you see on online travel sites like Orbitz. Even Twitter, the bastion of algorithm-free newsfeeds, is reportedly changing course. There are countless other web services that use algorithms to personalize the user experience. All of these services are pre-baked with the agenda of their creators. Mostly, their creators tinker with the algorithm to maximize the volume and effect of advertising, to upsell, or to improve their product, but that does not mean algorithms can’t be used to advance social or political agendas.

For example, despite being widely viewed as a neutral platform, Facebook has a history of introducing its own agenda into the newsfeed. In 2010, it placed a message at the top of the newsfeed encouraging users to vote in the midterm elections. In 2012, Facebook urged users to register as organ donors. In both instances, Facebook’s tactics were highly visible and advanced two universally accepted social goods.

But imagine a situation where the “get out the vote” message only appeared to users Facebook had identified as Democrats. Or that negative news stories about Republicans were displayed in more newsfeeds than those about Democrats. With nearly 200 million Facebook users in the US, such tinkering could sway an election. I’m not suggesting Facebook did that, or that it would, but that it can. It’s not illegal, nor a violation of the user agreement. The primary check on such behavior is the risk of detection, which would discredit the platform’s perceived neutrality and presumably cause a massive drop in users. Fox News bias is delivered with an in-your-face transparency; MSNBC’s is the same. Algorithmic bias is more subtle.

It’s early innings in the world of personalization, but the stakes are about to get higher. Soon the world will be blinking with an interconnected network of “smart” screens that know who you are, with a high degree of certainty. This ubiquity of Internet-connected entities (the so-called Internet of Things) means we will soon interact with more and more “personalized” services. What if your TV, your work computer, your phone, your home computer, your tablet, your car’s dashboard, the screen at the shelf in the Walgreens, the one in the back of the taxi, and the one in your workplace elevator all acted in concert?

At first, all this personalization will benefit the end user. Imagine if the screen in the back of your taxi could communicate with your phone to learn your home address, if Starbucks could start to whip up your favorite drink as soon as you walked in the store, or if Walgreens could know that your “smart” fridge is out of milk. This personalization will be highly beneficial for end users, because attracting loyal end users is essential for these businesses to survive. Yet, as these services become indispensable and the novelty turns into expectation, we must remember that personalization always comes with a price.

Sunday, May 18, 2014

What the General Public Should Learn from Facebook’s "Reach-Gate" Scandal on Madison Avenue

For many years in online advertising circles, it was accepted as a truism that Facebook is “just a platform.” Unlike a publisher, a platform has no say in what content flows through its pipes. 

That idea was always flawed, as it overlooked Facebook’s EdgeRank algorithm, which displays the content it deems interesting to its users while burying that which it deems less so. As Josh Constine of TechCrunch put it at the time: Facebook controls the news feed like an editor-in-chief controls a newspaper’s front page. Facebook did have one key constraint: it would only display content that was published (or engaged with) by someone in a user’s network. The algorithm decided which piece of content to display, but it could only choose from a user’s self-selected network.*

Despite the existence of an algorithm designed to curate content, the idea that Facebook was a neutral platform persisted in ad land. Algorithms enjoy a certain inscrutable air on Madison Avenue. They are magical black boxes out of which billion-dollar businesses bloom. Years earlier, Google’s algorithm accomplished the God-like task of “organizing the world’s information”; if Facebook’s algorithm could do something equally impressive in “social”, it was hardly Madison Avenue’s place to question it.

Then of course, the "Reach-gate" scandal happened and the myth that Facebook was a neutral platform vanished along with organic reach. Facebook insists the change to the algorithm addressed user experience, and not revenue goals, but nevertheless, Madison Avenue learned a hard lesson: Facebook and its algorithm should be viewed with a healthy skepticism. (Overheard predicted the tweak to organic reach 2 years before.)

Now, it may be time for the general public to learn the same lesson. 

The Boston Globe is reporting that Facebook is testing a “related articles” feature that promotes content that it deems relevant for its users. Sound familiar? Except this time, Facebook seems willing to introduce what I’ll call “foreign content”, in other words, content that was never published or engaged with by anyone in your network. While this could certainly be testing ground for an Outbrain-like ad product, this feature has broader implications. 

That Facebook can control what content shows up in the newsfeed gives it power that transcends advertising and is rooted in social and political influence. Facebook is behaving like a publisher, yet it is still viewed as an impartial platform. We can give Zuckerberg the benefit of the doubt that his ambition is to improve user experience and not advance his own political or social agenda. Still, we must combat the myth of the neutral platform and recognize Facebook for what it is, or at least, what it could be. 

At last count, Pew reported that 30% of Americans get their news from Facebook. Now, it seems that “getting your news from Facebook” has a literal interpretation that Pew never intended.

*There is a notable exception to this rule: beginning in 2012, advertisers could pay to insert their own content into a user’s newsfeed without being part of the user’s network.

Monday, December 30, 2013

The Day the Social Web Became Self Aware

Viral hoaxes, mob justice and errors in breaking news have become hallmarks of the social media era. The most recent event, the stoning of Justine Sacco in the Internet’s town square, seems to have struck a nerve with members of the media. While no one disputes that Sacco’s tweet was horrific (and reckless given her profession), the speed, size and ruthlessness of the mob seems to have startled a number of prominent journalists. In response came a series of soul-searching posts that analyzed how the social web had become capable of such violence.

Nick Bilton of the Times called on social media influencers with large online followings to be thoughtful about what they share, rather than automatically fanning the flames of whatever is going viral.

Roxane Gay of Salon pointed out that by stripping Justine Sacco of her humanity, the social media mob that went after her embodied the very attitude it claimed to condemn.

Luke O’Neil of Esquire criticized the business model of web publishers, which incentivizes forsaking journalistic values in pursuit of increased traffic.

As O’Neil points out, web publishers are beholden to a “business plan driven by the belief that big traffic absolves all sins.” Translation: it’s hard for media companies to walk the line between being a news organization and a traffic herder. The temptation to publish the next piece of viral content can compromise traditional journalistic values, for example:

  • In the Sacco scenario, the mob is not a mob, it’s an asset capable of driving mass traffic
  • When news is breaking, looser standards for fact checking make business sense (Boston Marathon, Sandy Hook, Hurricane Sandy)
  • When a story has already gone viral, share first to siphon off a piece of the traffic, and ask questions later (rooftop break up, yearofelan, lesbian waitress, snow sphinx)
These examples attest to the fact that media companies will capitalize on sharable headlines, even at the expense of traditional journalistic values. The problem is that it makes business sense to do so. While it’s tempting to lay the blame at the doorstep of new Internet media companies, the truth is this is a systemic problem and until the system has different incentives, the trend will accelerate. Upworthy, BuzzFeed, Gawker and The Huffington Post are just the first movers showing everyone else how to play the game.

So where is this all going? The trajectory of the social web and how it impacts consumers has parallels to the rise and fall of Demand Media. In its earlier days, Demand provided a valuable service to consumers by assembling a content empire designed to answer the millions of queries that people type into Google every day. Yet upon closer examination, Demand’s business model was to win traffic from Google, not to provide an answer to people’s search queries. While those two goals seemed nearly synonymous with one another, the immense scale required for Demand to create a successful business magnified the subtle distinction. A few years later, Demand had become excellent at winning traffic from Google, but pretty bad at providing consumers useful information.

The social web seems to be going through similar growing pains. Where Demand once created content optimized for the search platform, today a new crop of companies is creating content optimized for sharing (the social platform). At first glance, the business of aggregating, repackaging, creating and disseminating sharable content seems to be good for consumers, simply by virtue of filling up newsfeeds with enticing headlines. Yet the scale required to build a business around sharable content has revealed that what is highly sharable is not always in the interest of consumers. In fact, what is sharable is not necessarily true or ethical, and sometimes can be downright dangerous.

The parallels continue: when Demand veered off course, Google tweaked its algorithm so that Demand’s content would no longer appear as prominently in search results. Just last month, Facebook announced a change to its EdgeRank algorithm that is expected to punish disreputable publishers. But that’s where the similarities end. Google’s control over search results is much more direct than Facebook’s control over what its one billion users share. Moreover, much of the web's sharing happens over Twitter and other social networks, not just Facebook.

That’s why it was so commendable to see prominent journalists pause and reflect in the aftermath of Sacco. The media’s self-regulation is the first step in what must be a collaborative effort to minimize the sharing of content that does more harm than good. In the era of social media, all journalists are ombudsmen charged with upholding the web’s credibility as a medium. As it turns out, Justine Sacco’s legacy is less the content of her tweet than the fact that her downfall marked the day the media became self-aware of its vital role on the social web.