Saturday, November 3, 2012

Electing a President in a Microtargeted World - Gretchen Gavett

The 2012 election cycle is remarkable for how much information the campaigns have about voters. Using microtargeting, a system of gathering and analyzing enormous amounts of data on behavior and opinions, both the Obama and Romney camps are drilling down on which voters matter, how best to reach them, and with what message. The goal is to figure out which persuadable voters to spend energy (and dollars) on, in the hope of moving the dial just slightly in their favor. It's a world where hundredths of a percentage point matter.

Ken Strasma, president of Strategic Telemetry, knows this world well. A former campaign manager, Strasma started his company in 2003 to find a better way to predict who the true persuadables are in a campaign, and to find smart ways to target messaging toward them. He's worked on over 1,000 political campaigns, including John Kerry and Barack Obama's presidential races, as well as New York City Mayor Michael Bloomberg's successful 2009 run. He's involved in Obama's current campaign, though not on a day-to-day level.

Strasma and I talked on the phone about how microtargeting has changed over the years, both for campaigns and businesses, and where he sees it moving in the future. This is an edited version of our conversation.

When you talk about scores in microtargeting, what does that mean?

Microtargeting is really all about predicting an attitude or behavior, and the end product is what we call a score, which is just a percent likelihood of someone exhibiting that attitude or behavior. So if we've done a survey to see who cares about global warming, then every voter in a state might get a score on their voter file saying how likely they are to care about global warming.

If we're launching a new type of bicycle and we want to know how likely it is that this person is a bicycle rider, we would apply that score, saying: here's the likelihood that this person is a bicycle rider in the market for a new bike. Scores are just likelihoods that we've predicted with these various analytics algorithms. ...
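
A minimal sketch of how such a score might be produced, assuming a generic logistic-regression setup; the file names and columns here are hypothetical, not Strategic Telemetry's actual pipeline:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Survey respondents: numeric features plus the surveyed attitude
# (1 = says they care about global warming, 0 = says they don't).
survey = pd.read_csv("survey_respondents.csv")      # hypothetical file
features = ["age", "income", "urban", "turnout_history"]  # hypothetical columns

model = LogisticRegression()
model.fit(survey[features], survey["cares_about_issue"])

# Apply the model to every voter in the state file; the "score" is
# just the predicted percent likelihood of holding the attitude.
voter_file = pd.read_csv("voter_file.csv")          # hypothetical file
voter_file["issue_score"] = model.predict_proba(voter_file[features])[:, 1] * 100
```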

How do you measure success when you work on political campaigns?

We like to think that good targeting can provide a few percentage points improvement for a campaign. It's not going to take, you know, a candidate who is polling at 20% and get them a little bit [closer to winning]. But someone who's a point or two behind, it can make the difference. ...

We measure success by both win/loss, and also by being able to demonstrate that we've improved the campaign's percentage of the vote and cut costs.

What are some of the key differences between Obama's 2008 campaign and what you're seeing in this one?

Definitely the scale. In 2004, microtargeting was a fairly new concept. In 2008, ... microtargeting was a key part of everything we did, from the Iowa caucuses to the general election. In this election, the campaign has a very large in-house data team dealing with all kinds of targeting analytics and also the online operations. I should, in fairness, acknowledge that the Romney campaign is also in this space and has a much more robust targeting and digital operation than John McCain did.

In your work, do you only focus on the metrics side, or do you decide what sort of messaging would go out in targeted ads?

We help people decide which message [to send out] of a set of messages. So while we wouldn't be crafting the ads or mailings ourselves, if a media consultant has five different television ads they're considering running, we might tell them, for a given show, which ad is most likely to resonate with the viewers of that show.

The same with the mailings. We can take a suite of issues and model which ones each voter is most likely to care about, so that the mailing a voter receives leads with the issue they care about the most. ...

As limited as time and money are in a campaign, we're up against an even more limited resource, the voter's attention span, because they're getting inundated with information. If you've got only about three seconds from the time they take a piece of mail out of their mailbox to when they throw it away, you want to make sure that the headline issue on that piece of mail is the one they care about the most. And microtargeting makes that possible.
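
A toy sketch of that headline-issue selection, assuming one modeled score per issue for each voter; all identifiers and numbers are invented:

```python
import pandas as pd

# One modeled score per issue for each voter (invented values).
voters = pd.DataFrame({
    "voter_id":        [101, 102, 103],
    "score_jobs":      [82, 40, 55],
    "score_mortgage":  [35, 77, 50],
    "score_education": [60, 30, 90],
})

issue_cols = ["score_jobs", "score_mortgage", "score_education"]
# Lead each mailing with the issue that voter is most likely to care about.
voters["headline_issue"] = (
    voters[issue_cols].idxmax(axis=1).str.replace("score_", "", regex=False)
)
print(voters[["voter_id", "headline_issue"]])
```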

What were some of Obama's key features that you would use to market him, to tell a story based on the data you had?

One of the things about 2008 was, early on, there was one key issue on most Democratic primary voters' minds: the war in Iraq, where he had a very different position from the Republicans, and even some of the Democratic primary field. And then by the general election, when the economic meltdown had started, the economy was really at the top of people's minds.

So those issues weren't ones that depended much on targeting, because that was basically what the campaigns were talking about anyway.

It was very interesting to be able to model the secondary issues on people's minds, even if we knew that the economy was number one, pretty much as it is today. There would be other issues that people would care about. In the context of the economy, you know, what's more important to you? Is it unemployment benefits? Is it creating jobs? Is it the mortgage crisis?

... Or is it some other issue? Is it the environment? Is it education? So one of the things we would do was ask people their top two issues, so that if one issue like Iraq or the economy was swamping all the others, we would still get a second important issue that the voter cared about.

You said here that you would just ask people. Is that where you get your data, or is it also from browser cookies and other sources of information?

A large combination. I like to say that we get our data three ways: We ask, we observe, and we test. So the asking is generally three sources: telephone surveys, door-to-door canvassing, and online surveys. The observing is where the cookies you mentioned come in: we might know what websites someone has visited, what search terms they've searched on, whether they've visited a campaign's website, and, if so, what activities they've undertaken there.

And then the testing is really important because, as much as you can ask someone if they're undecided or if an issue matters to them, oftentimes people don't really know what goes into their decision about either a candidate or a product. There's a combination of features and attributes that makes the candidate or the product appealing to them, and being able to just test different ways of presenting an ad and see how people respond is very valuable.

So we'll do a lot of A/B testing with online ads.

We'll also do randomized experiments where we take a test sample and a control sample and do a particular treatment to one group and see how the purchasing or voting behavior is different between the two. And you often get some interesting and not intuitive results that are quite different from what would be if you just asked the voter or the consumer their preferences.
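
A minimal sketch of analyzing such a test/control experiment, assuming invented counts and a standard two-proportion z-test rather than whatever analysis the firm actually runs:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: responses (votes, purchases) out of each sample.
treated_yes, treated_n = 312, 5000   # group that received the treatment
control_yes, control_n = 254, 5000   # group that did not

stat, p_value = proportions_ztest(
    count=[treated_yes, control_yes], nobs=[treated_n, control_n]
)
lift = treated_yes / treated_n - control_yes / control_n
print(f"lift = {lift:.2%}, p = {p_value:.4f}")
```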

It reminds me of a quote of yours from Sasha Issenberg's book The Victory Lab: "We knew who ... people were going to vote for before they decided."

Right. People were telling us they were undecided and we felt that it was likely that they were actually just not engaged yet. And by asking a number of issue questions, we were able to very accurately predict which candidate they would end up supporting once they were engaged.

Does this also apply when you're looking to market a product?

It definitely does, especially in the case of a new product entering the market. If we're able to describe different attributes of it, test them, and model how people respond, then we're able to make a prediction about who the likely consumers are once the product does launch.

What advice would you give businesses and organizations thinking about doing this kind of work?

The first step is always the same, and that's saving and collecting as much data as you possibly can. Oftentimes a company will collect data on who their customers are, who's bought a product, but throw away the tens of thousands of names of people they've mailed or called who haven't bought the product. Saving that information on both successful and unsuccessful prospects is equally important. Then [you] can model the likelihood of someone buying the product if they're solicited.

[Then you need to] standardize your data, if you can. You can't do any modeling if you don't have good data to start with.
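
A hypothetical sketch of that advice, assuming the prospect and customer lists live in flat files with the field names shown; both files and columns are invented for illustration:

```python
import pandas as pd

solicited = pd.read_csv("everyone_we_mailed_or_called.csv")  # hypothetical file
buyers = pd.read_csv("customers.csv")                        # hypothetical file

# Label 1 for prospects who bought and 0 for those who didn't; the
# zeros are what make a "likelihood of buying if solicited" model possible.
solicited["bought"] = solicited["prospect_id"].isin(buyers["prospect_id"]).astype(int)

# Basic standardization so the same person doesn't appear as two records.
for col in ["first_name", "last_name", "city"]:
    solicited[col] = solicited[col].str.strip().str.title()
solicited["zip"] = solicited["zip"].astype(str).str.zfill(5)
```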

What are some of the biggest surprises that you've found doing this work over the past decade? Was there a moment when you had an "aha" feeling?

One is that the kind of features someone has on their telephone matters. It's something we've puzzled over for a long time because, intuitively, it makes sense that someone who has caller ID, call forwarding, call waiting, those type of features, is better able to screen their calls. Maybe the skewed data we're seeing here is just the effect of those people being harder to reach, or only reachable when they want to be reached.

It is very similar to the problem with calling landlines as opposed to cell phones. And going door-to-door, you underrepresent people in security-locked apartments. With online surveys, you miss out on people who aren't online. Every type of survey misses some slightly different group of people. The key is to use as many different avenues as possible.

Another surprise that pops up quite often in politics is motorcycle ownership: it doesn't make someone more Democratic or more Republican. The interesting thing about motorcycle owners is that they are less easily categorized. If someone looks just like a Democrat on most of their attributes but rides a motorcycle, they are more likely to be a little bit libertarian, probably against gun control laws and helmet laws. And so that pushes against the natural Democratic tendencies suggested by their other data.

And the same holds for Republicans. You could have someone who looks very Republican based on their race, ethnicity, and education, but if you factor in that they're a motorcycle rider, they are much more likely to be pro-choice and against government censorship of the Internet, and so on.

So motorcycle riding is a good caution flag for not making assumptions based on the rest of what you know about a person.
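
A toy illustration of that caution flag, with entirely invented weights, showing how a single attribute can pull an otherwise confident prediction back toward uncertainty:

```python
def democratic_lean(profile: dict) -> float:
    """Crude linear score; positive leans Democratic. Weights invented."""
    score = 0.0
    score += 1.2 if profile["urban"] else -0.4
    score += 0.8 if profile["college_educated"] else -0.2
    # Motorcycle ownership adds nothing toward either party on its own,
    # but it dampens confidence in an otherwise strong partisan profile.
    if profile["rides_motorcycle"]:
        score *= 0.5
    return score

print(democratic_lean({"urban": True, "college_educated": True, "rides_motorcycle": False}))  # 2.0
print(democratic_lean({"urban": True, "college_educated": True, "rides_motorcycle": True}))   # 1.0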

When people complain about microtargeting as an invasion of privacy, how do you respond?

[I] point out that this information is generally volunteered. It comes from discount cards that you use in a supermarket, or registration cards that you fill out. So ... people are making the choice to give up that data.

What really shocks people is when they realize how much of it is out there. When people tell me that they're concerned about that, I advise them: Don't use a supermarket discount card if you don't have to. Don't fill out the survey questionnaires that you get with a product if you don't want to. It's very different in the United States from other countries. Almost every other country has much stricter privacy laws than the United States. So when we've been doing microtargeting in other countries, the available data is definitely less.

What's new and exciting in the microtargeting world?

Two of the more exciting things in what we're working on are real-time microtargeting and the online space.

For real time, it's gone from taking maybe a couple of weeks to now just a few days to get results back from a survey and crunch the numbers and build scores. ...

We're able to update scores in near real time; ... it takes about a third of a second for us to get a new piece of information in, have it ripple through the system, and update the predicted scores.
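
A minimal sketch of that near-real-time update loop, assuming an online learner such as scikit-learn's SGDClassifier standing in for Strategic Telemetry's actual system:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# log_loss gives logistic-style probabilities, so scores stay percentages.
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

def on_new_response(features: np.ndarray, label: int) -> None:
    """Fold one new piece of information into the model as it arrives."""
    model.partial_fit(features.reshape(1, -1), [label], classes=classes)

def score(voter_features: np.ndarray) -> float:
    """Return the updated percent-likelihood score for one voter."""
    return float(model.predict_proba(voter_features.reshape(1, -1))[0, 1] * 100)
```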

Does that make you able to predict better or just predict faster?

Predict faster. The underlying technology is still largely the same. We're still running the same types of numbers.

Now, I should say that the technology has improved in ways that make the predictions better as well, largely because there are some algorithms that used to take so long to run, especially the machine-learning, artificial intelligence algorithms ... and neural networks. It just wasn't practical to leave something running before; ... whereas now it might take half an hour to run an algorithm that, years ago, took months. ...

And how has the digital space changed what you do?

It's a really intriguing space for us, because we've spent so much time and effort trying to get information out of people, calling them, knocking on their doors, etc. And now suddenly, through Twitter and Facebook and other online social networking, people are volunteering this information in huge tidal waves we've never seen before.

Are they doing this purposefully? Are you asking for the information or are you just reading through it?

It's just reading what's out there, active social listening. ... We also do formal online surveys, [but] the real volume is what people are volunteering about themselves. So where it used to be that you had to ask someone, "Which candidate do you support? Which product do you like?" people are now telling us that in far greater volume than we could ever solicit directly.

The trick is that they're also telling us what they had for breakfast and that their cat looks cute in a sweater and other things. ... What's key is sifting through all of that to find the meaningful nuggets, and also sifting through the way people speak online.

Computers are notoriously bad at detecting sarcasm. So if someone tweets that they love Sarah Palin as much as they love a root canal, a basic natural language processing algorithm is going to classify that as a tweet that's positive towards Sarah Palin because it says "love Sarah Palin." So we've been working a lot on artificial intelligence text-processing routines that will allow us to actually capture the meaning of a tweet or a Facebook post, as opposed to just saying there are this many about Obama and this many about Romney.
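
A toy demonstration of why naive keyword sentiment gets that tweet wrong; the comparator list here is invented and stands in for real sarcasm detection:

```python
NEGATIVE_COMPARATORS = {"root canal", "tax audit", "flat tire"}  # invented list

def naive_sentiment(text: str) -> str:
    """Keyword matching: sees "love" and calls the tweet positive."""
    return "positive" if "love" in text.lower() else "neutral"

def sarcasm_aware(text: str) -> str:
    """Flag sarcastic 'as much as <something unpleasant>' comparisons."""
    t = text.lower()
    if "as much as" in t and any(c in t for c in NEGATIVE_COMPARATORS):
        return "negative"
    return naive_sentiment(t)

tweet = "I love Sarah Palin as much as I love a root canal"
print(naive_sentiment(tweet))   # positive -- the mistake described above
print(sarcasm_aware(tweet))     # negative
```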

Source: http://blogs.hbr.org/hbr/hbreditors/2012/11/electing_a_president_in_a_micr.html

