Big Data in Market Research: Using Statistical Modeling to Predict the Future
- November 9, 2012
In the wake of an election widely reported as “too close to call,” a surprisingly accurate predictor has emerged: statistical modeling. Nate Silver’s impressively spot-on predictions of the 2008 and 2012 elections on his FiveThirtyEight blog are quickly passing into legend (and translating into better book sales: The Signal and the Noise is the #2 best-seller on Amazon). But what Silver does isn’t magic, nor is he alone. Other statisticians, like Sam Wang, a neuroscience professor at Princeton, and Simon Jackman, a political science professor at Stanford, also accurately predicted the results of the 2012 election using statistical modeling. Math geeks everywhere are celebrating their win. Nerds rule!
LRW’s cutting-edge Marketing Sciences team is often called upon by clients to create predictive models that can help them make better business decisions, and they’ve learned a thing or two about the practice (and the pitfalls) of working with data to predict the future. We’ll let them break out the signal from the noise for you:
1. When developing predictive models, think probabilistically.
Naturally, marketers would be much happier if they could predict with absolute certainty what would happen when they launched a new product, changed a price, or rolled out a new ad campaign. But no statistician worth their salt would tell you they can predict the future. The world is changing constantly, and there are far too many variables in play: there’s rarely (if ever) one right “answer.”
“Don’t take one (hopefully) best guess,” cautions Mick McWilliams, PhD, Senior Vice President of Marketing Sciences. “It’s better to model multiple possible outcomes and understand their relative probabilities of actually happening.”
For example, forecasters during Hurricane Sandy presented their results as probable paths within a “cone” of various possibilities (an example that should ring a bell for readers of Mick’s series on Agent-Based Modeling). Our team recently designed a model for a client attempting to tune up their promotional programs, and ran over 100 different simulations to identify the promotional mix that maximized profitability. Of course, probabilistic forecasting methods are more complex and time-consuming than traditional market research, but they can significantly increase your chances of success.
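To make the idea concrete, here is a minimal sketch of that kind of probabilistic simulation. The promotional mixes, demand lifts, and margins below are invented for illustration, not LRW’s or any client’s actual numbers; the point is that each mix yields a distribution of profits, not a single answer:

```python
import random

random.seed(42)

# Hypothetical promotional mixes (illustrative assumptions only):
# each has an uncertain demand lift (mean, std dev) and a unit margin.
mixes = {
    "deep_discount":  {"lift_mean": 0.30, "lift_sd": 0.12, "margin": 0.15},
    "light_discount": {"lift_mean": 0.12, "lift_sd": 0.04, "margin": 0.25},
    "ads_only":       {"lift_mean": 0.08, "lift_sd": 0.06, "margin": 0.30},
}

BASE_UNITS = 100_000  # assumed baseline units sold per period

def simulate_profit(mix, runs=10_000):
    """Draw many possible demand lifts; return the resulting profit distribution."""
    profits = []
    for _ in range(runs):
        lift = random.gauss(mix["lift_mean"], mix["lift_sd"])
        profits.append(BASE_UNITS * (1 + lift) * mix["margin"])
    return profits

for name, mix in mixes.items():
    profits = sorted(simulate_profit(mix))
    p10, p50, p90 = (profits[int(len(profits) * q)] for q in (0.10, 0.50, 0.90))
    print(f"{name}: median ${p50:,.0f}  (80% interval ${p10:,.0f}-${p90:,.0f})")
```

Rather than crowning one mix the “winner,” output like this lets a decision-maker weigh a high-median, high-variance option against a safer one: exactly the cone-of-possibilities thinking described above.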
2. Approach big data with a purpose.
Silver and his counterparts leveraged a massive amount of public and private opinion data to forecast the election, tapping into a slew of polls across the US as inputs for their models. We’ve noticed lots of conversation, excitement, and even fear about “big data” among our clients and colleagues. The excitement stems from the potential knowledge that might be gleaned; the fear from the notion that increasingly advanced statistical models will make human analysts less valuable.
What you should really be afraid of are the spurious correlations and patterns in big data that can lead market researchers (and their clients) down the wrong path. That’s what Silver calls “noise,” and it can be dangerously misleading without the right interpreter.
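That danger is easy to demonstrate. The toy screen below, using made-up pure-noise series, shows how scanning hundreds of unrelated variables against a short metric will almost always turn up an impressive-looking correlation by chance alone:

```python
import random

random.seed(0)

n_points = 20       # a short series, like 20 months of tracking data
n_candidates = 500  # many unrelated "big data" variables to screen

target = [random.gauss(0, 1) for _ in range(n_points)]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Screen hundreds of pure-noise variables against the target and keep the best:
best = max(
    (correlation(target, [random.gauss(0, 1) for _ in range(n_points)])
     for _ in range(n_candidates)),
    key=abs,
)
print(f"Strongest 'signal' found in pure noise: r = {best:.2f}")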
“Mega databases and fancy analytics software on their own do not a valid prediction make,” says Jason Brooks, Vice President, Marketing Sciences. “They do not alleviate the need for an informed analyst guided by relevant knowledge and armed with intelligent hypotheses to objectively test. You can lead Big Data to water, but you can’t make it drink.”
3. Stay objective (and manage your expectations).
We know from innovations recently launched by our Pragmatic Brain Science Institute™ (aka, PBSI™) that brains are wired to look for patterns as a means of simplifying our complex world. In an internet age oversaturated with available data, even research professionals often focus on the information they like and ignore the rest. Anyone can see patterns where none exist, filter them through their biases, and cherry-pick data to fit the story they want to tell.
Market researchers are not immune to this tendency, and must strive for objectivity in design, analysis, and (perhaps most importantly) the presentation of their findings. Granted, academic statisticians don’t have to worry about the consequences of telling their boss that a new product is likely to flop or that the new ad campaign is falling flat. But the professional researcher needs to stay equally objective; what makes the most sense from the business knowledge you have isn’t always what’s most likely to happen.
“Good forecasters must strike a balance between their prior knowledge and what the data is telling them,” says Steve Karlin, Senior Vice President, Marketing Sciences. “Good forecasting involves generating hypotheses about direct relationships that ought to exist, then exploring all available data for these relationships. Importantly, you must be willing to accept that your hypotheses are incorrect.”
4. Never stop improving your models.
Top statistical modelers use a Bayesian approach to model development, which incorporates not only data quality but hypothesis accuracy into the mix. When new data is available, they use it to update their models and predictions on the fly, never resting with a tried and true past method.
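A minimal sketch of what “updating on the fly” means in practice: the conjugate Beta-Binomial update below tracks belief about an ad’s response rate and folds in each new batch of data as it arrives. The prior and the batch figures are hypothetical, and this is a textbook illustration rather than any team’s actual procedure:

```python
# Beta-Binomial updating: belief about a response rate is a Beta(alpha, beta)
# distribution; each new batch of data simply shifts those two counts.

def update(alpha, beta, successes, trials):
    """Fold a new batch of data into the prior (conjugate update)."""
    return alpha + successes, beta + (trials - successes)

alpha, beta = 2.0, 38.0   # hypothetical prior: roughly a 5% response rate

# New tracking data arrives in batches; revise the estimate each time.
for successes, trials in [(12, 150), (9, 100), (20, 180)]:
    alpha, beta = update(alpha, beta, successes, trials)
    print(f"estimated rate: {alpha / (alpha + beta):.3f}")
```

Each pass through the loop is a full model revision: the prior belief and the accumulated evidence are combined into a new estimate, so the forecast never rests on a frozen past method.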
“Be willing to walk away from norms and models that are too old,” advises Hilary DeCamp, our Senior Vice President and Global Director of Marketing Science and Methods. “Purge old data points from benchmarks and models, and revise estimation procedures using only the more recent data. More history isn’t better history, unless it’s still relevant.”
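One simple way to act on that advice, sketched here with made-up tracking scores, is an exponentially weighted average: every period’s weight decays by a fixed factor, so recent data dominates and stale history fades instead of being counted forever:

```python
# Illustrative only: down-weight older observations so a benchmark
# reflects recent reality rather than all of history equally.

def recency_weighted_mean(values, decay=0.7):
    """values are ordered oldest -> newest; the k-th-oldest gets weight decay**k."""
    n = len(values)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

quarterly_scores = [62, 64, 63, 71, 75]  # hypothetical tracking scores
print(round(recency_weighted_mean(quarterly_scores), 1))  # prints 69.4
```

Note how the weighted mean (69.4) sits well above the simple mean (67.0): the recent upswing drives the benchmark, while the older, lower scores still contribute but no longer anchor it.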
Big data is only getting bigger, and statistical modeling isn’t going away. It might well be the most effective way to use big data to predict human behavior, something any business should consider endlessly valuable. The trick to using it is to go in with a strong hypothesis, manage your expectations, and weed out what doesn’t work with a constantly improving methodology. (And if you can’t do it yourself, you know where to find us.)