Thursday, January 28, 2016

i think cenk sort of explained this well, but not really.

the argument for the time aggregate is that it "balances out error". mathematically, error doesn't just "balance out" like that. sampling error shrinks when you combine polls taken at the same time over the same population - if ten firms all did polling in the same region over the same weekend, averaging them really would reduce error, because you're effectively increasing the sample size. more people, less error. but it has to be apples and apples: same population, same moment.
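a rough sketch of the sample-size point, with made-up numbers (ten firms, 500 respondents each - these are assumptions for illustration, not real polls):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# one firm polls 500 people over a weekend
single = margin_of_error(0.5, 500)

# ten firms poll the same region the same weekend: pooling is
# effectively one poll of 5000 people
pooled = margin_of_error(0.5, 5000)

print(f"single poll: ±{single:.1%}")  # about ±4.4%
print(f"pooled:      ±{pooled:.1%}")  # about ±1.4%
```

the error shrinks with the square root of the sample size, which is why pooling ten simultaneous polls cuts the margin by a factor of about three - but only because every respondent was asked at the same time.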

taking a poll from two weekends ago and averaging it with a poll from last weekend doesn't balance anything out. opinion moves between the two polls, so the average mixes sampling error with stale data - apples and oranges - and drags the estimate away from where opinion actually is now. if you're actually concerned about predicting results, aggregates over disparate time periods should be ignored; a single snap poll from a trustworthy firm is a better estimate of current opinion.
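a toy simulation of the lag problem, under assumed numbers (support drifting from 40% to 50% over five weeks, 1000 respondents per weekly poll - invented for illustration):

```python
import random

random.seed(0)

# hypothetical true support, drifting upward over five weeks
true_support = [0.40, 0.42, 0.45, 0.48, 0.50]
n = 1000  # respondents in each weekly poll

def poll(p, n):
    """simulate one poll: share of n respondents answering 'yes'."""
    return sum(random.random() < p for _ in range(n)) / n

weekly = [poll(p, n) for p in true_support]
latest = weekly[-1]
average = sum(weekly) / len(weekly)

print(f"true value now: {true_support[-1]:.1%}")
print(f"latest poll:    {latest:.1%}")  # noisy, but centered on today's number
print(f"5-week average: {average:.1%}")  # pulled toward the stale early weeks
```

the averaged number is smoother, but it's smooth around a quantity that no longer exists - it tracks where opinion was, not where it is, which is exactly why the latest snapshot from a good firm beats the aggregate for prediction.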

so, why do they do this?

because it's a good measure of branding. it's the kind of thing you do when you want to measure market response to a cereal ad. and the people doing this largely look at the situation as comparable to measuring attitudes in response to advertising. in fact, a lot of the time that's what they're actually being paid to do - not predict outcomes.

so you shouldn't pay very much attention to the rcp averages. it's bad methodology. instead, key in on reliable firms using reliable methods (live-interviewer phone polling), and key in on their most recent snapshots.