Wednesday, November 14, 2012

Audience research is bullshit

A perennial topic on this blog is research. In my interview last week with former NBC president Warren Littlefield, he discussed it and the flaws in the system as he saw them. Right after the election, though, I received an interesting Friday Question supporting the notion of research and seeking my response. (I'm also trying to answer a few more of these questions and catch up a little.) Here’s his Q and my A.

It’s from Hecky:

Given the success of sabermetrics and deft statistical analysis in both sports and now politics (Nate Silver correctly predicted everything but the senate race in North Dakota), how can you be so opposed to research testing in principle when it comes to entertainment? Certainly it's true that a lot of research is done poorly (e.g. bad methodology, unwarranted conclusions/inferences, sloppy handling of the data, etc.). The companies doing it for profit don't make their methods publicly available, so who knows if what they're doing is any good. But I don't think that justifies a wholesale rejection of the entire enterprise. Maybe we just haven't seen a Nate Silver of Nielsen yet.

All this stuff about "going with your gut" and just finding "great" material and having "vision" -- unquantifiable rules of thumb -- strikes me as complete hooey. It's the exact same sort of dogma that got so deliciously panned in "Moneyball" and in all the election post-mortems about FOX News predictions over the last two days. When done right, statistical research methods work, and it doesn't really matter what's being analyzed. It could be baseball, TV, the stock market, or politics. TV is about making money by generating ratings. And I don't see why we shouldn't expect proper research to aid in achieving that goal. It's just a matter of figuring out the right parameters by which to measure the performance of one's algorithm.

Thank you for your question, Hecky.  Let me first say this: in 1974 I worked in the NBC research department.  My educational background emphasized math.  I appreciate the value of statistics and have seen the process of audience testing firsthand from both sides -- as the network and as a producer.   Okay -- that's my disclaimer.  Here's my answer: 

How do you measure art, Hecky? How do you assign a numeric value to creative endeavors? Yes, you can predict who will win an election. It’s simple. People tell you they’ll vote for candidate A or candidate B and you put a check in the appropriate column. If you’ve asked the right people, if you’ve asked a large enough sample of people, and they’re truthful, then you can make a prediction with relative assurance (always taking into account a margin of error).
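For what it's worth, the polling math really is that tractable. Here's a minimal sketch of the standard worst-case margin-of-error formula for a yes/no poll; the sample size and the 1.96 z-score (roughly 95% confidence) are the usual textbook assumptions, not anything from the post.

```python
import math

def margin_of_error(sample_size, confidence_z=1.96):
    """Worst-case margin of error for a proportion poll.

    Uses p = 0.5 (the variance-maximizing split), so the
    standard error is sqrt(0.25 / n), scaled by the z-score.
    """
    return confidence_z * math.sqrt(0.25 / sample_size)

# A typical 1,000-person poll carries roughly a +/- 3.1 point margin:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1
```

Note that halving the margin requires quadrupling the sample, which is why polls converge on samples of about a thousand. No equivalent closed-form formula exists for "is this joke funny."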

When you’re analyzing baseball players there are intangibles, but their ultimate value can be determined by performance. How many hits in how many at bats? Strikes vs. balls? How many stolen bases and how many times caught stealing? They're all numbers -- numbers that don't lie.  MONEYBALL found statistics that were overlooked. They discovered undervalued players. And in MONEYBALL, these statistics were used merely as one form of input. Scouting and intangibles were still taken into account, just not to the same extent.  And the advantage the Oakland A's had was that no one else knew these statistics, which gave them a competitive edge.  Today every team knows those same formulas.  So you better have someone with an eye for talent to go along with the computer readouts.
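Those "overlooked statistics" were mostly just different arithmetic on the same box-score numbers -- the classic Moneyball example being on-base percentage versus batting average. A quick sketch, with a hypothetical player's stat line:

```python
# Hypothetical stat line -- the numbers are made up for illustration.
hits, at_bats = 150, 500
walks, hit_by_pitch, sac_flies = 90, 5, 5

# The traditional stat: hits per at bat.
batting_average = hits / at_bats

# The then-undervalued stat: how often the player reaches base at all.
on_base_pct = (hits + walks + hit_by_pitch) / (
    at_bats + walks + hit_by_pitch + sac_flies
)

print(round(batting_average, 3), round(on_base_pct, 3))  # → 0.3 0.408
```

A .300 hitter who walks 90 times looks ordinary by batting average and elite by on-base percentage -- exactly the gap the A's exploited. The point stands either way: both numbers are computed from unambiguous, countable events, which is precisely what a joke is not.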

But turning to entertainment --

When a joke doesn’t get a laugh, is it because the writer isn’t good, the actor didn’t deliver the line well, the audience doesn’t like that actor, the audience doesn’t like the situation, the audience doesn’t understand the joke, the audience is tired because it’s late at night, the air conditioning isn’t working, they’ve heard a similar joke, they didn’t hear the joke correctly, they’re biased against jokes on that topic, they were distracted by something else going on on the set, a camera blocked their view, they were preoccupied by problems at home, or any combination of the above? Plus, the audience you’re testing has little dials and is asked to twist them to the right or left depending on how much they liked said joke – what’s the standard? Two people may find the joke equally funny but one person gives it a +4 and the other gives it a +7.  Is one guy overly generous or is the other overly tough? 

So when a test audience is watching your show and that joke comes on the screen and a line on a graph determines how funny it supposedly is – how accurate do you think that is? And how helpful is that number in determining why the joke didn’t rate higher?

Okay, let’s say you ask each audience member why he didn’t laugh at the joke. Here’s the answer you’re going to get most of the time: it wasn’t funny. Yeah, we know it wasn’t funny. Why? You think they can tell you? I’ve watched focus groups where people didn’t like characters because of their shoes.

On the other hand, you poll a bunch of people on who they plan to vote for, and they can tell you. And if you ask why, they can generally give you an answer. They like his tax plan. They think the other guy isn’t a friend of Israel’s. They always vote along party lines. Their reasons aren’t subconscious. When you laugh at a joke, when you hear a new band, when you see a certain painting, how often can you accurately define and articulate what you like about it and to what extent? And then digitize it.

That’s what program research attempts to do. It takes your show and breaks it down into which characters the audience thought they liked, which jokes they thought they liked, and based on that – how popular the show might be.

There is one statistic I would love to see. It’s also the one statistic these audience research firms won’t show you. HOW MANY TIMES HAVE YOU BEEN WRONG?

Since the failure rate on television shows is over 90% and these were the shows that all tested well, my guess is that the number they’re keeping from us is also well over 90 percent. So Hecky, I disagree with your theory that testing works. It doesn’t always.

Now do the math. If something doesn’t work 90+% of the time why keep doing it?
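The math here is a base-rate argument, and it can be sketched in a few lines. The counts below are invented purely to match the 90%-plus failure rate cited above:

```python
# Illustrative numbers only -- chosen to match the post's 90%+ failure rate.
shows_greenlit = 100      # every one of these "tested well"
shows_that_succeed = 10   # the post cites a failure rate over 90%

# Of the shows the research endorsed, what fraction actually worked?
precision = shows_that_succeed / shows_greenlit
print(f"{precision:.0%} of well-testing shows succeed")  # → 10% of well-testing shows succeed
```

In statistical terms, a positive test result that is right only one time in ten has almost no predictive value -- which is the post's point, stated as precision rather than as a gut feeling.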

Nate Silver’s numbers worked. His information was accurate. Karl Rove’s was not. And neither was the research that said SEINFELD was a bomb and THE PLAYBOY CLUB was going to be a breakout hit.

So the answer here is not to put too much stock in audience research. It’s too flawed. As Mr. Littlefield said, any show with new ideas or a hard-to-categorize premise or execution tests poorly. But show Mother Teresa assisting orphans and it will test through the roof. What would you rather watch -- that or BREAKING BAD?   Guess which of those two shows the research company would recommend. 

And yet the networks make programming decisions based almost SOLELY on this flawed information. And that’s my big beef. So when a network president “goes with his gut” and discards research for what he believes is a good show, I say that’s just as valid or more valid an indicator of whether a show will succeed. And a whole lot cheaper.

I could see political strategists going to Romney and saying you need to appeal more to women and minorities. I can’t see advisors telling Picasso he needs more blue, or telling Shakespeare that 64.6% of playgoers don’t like Hamlet because he’s indecisive.
