Proceed with Caution When Doing Market Size Analyses


The Dangers of Relying on Survey Data Alone

By Ken Donaven

NOTE: This is Part 3 of a three-part series on Market Size Analysis. Find Part 1 here and Part 2 here.

In a previous article, we explored the distinctions between “bottom-up” insights and “top-down” insights — in essence, surveying purchasers versus interviewing sellers, respectively. We have seen instances in which Market Size Analysis that relies only upon a survey of the market’s customers can yield faulty intelligence and, therefore, faulty guidance.

While “bottom-up” insights — surveying purchasers of products to extract data — are often gathered to perform market size studies, there is a danger in relying on this type of information-gathering alone. In the absence of “top-down” insights — interviewing manufacturers, distributors, and experts across the entire value chain — such an approach can return incomplete, misleading, or wholly incorrect data.

It’s not that we caution against including customer survey results in the overall mix of insights, per se. It’s that we urge against relying solely on customer surveys, which is a mistake we’ve seen some make over the years. What invariably results is that the common perils rear their ugly heads, compromising the data set:

  • Memories are fuzzy and unreliable.
  • Humans are fallible and will default to obscuring that fallibility to preserve their egos rather than admit to something unflattering.
  • Surveys alone often fail to account for nuances in geography, demography, brand halos, and respondent biases.
  • Perception is not always reality.

We’ll explore each of these in the following section.

Down the Memory Hole

One of the greatest limitations of bottom-up (“what did you purchase?”) surveys is that they are recall-based. We ask fallible humans to recall things they don’t always commit to memory. Not everyone can remember the details of the myriad purchases they make over the course of a year. Absent total recall, respondents tend to guess. Or they may make data up, simply to work their way through surveys. Or perhaps worse, they will remember things inaccurately, thereby confidently reporting purchase details that simply aren’t accurate.

There is also a phenomenon known as the “Brand Halo Effect” that can impact the factual memory of a survey respondent. This occurs when one or two large market players dominate a given category and have an outsized impact on respondents’ recall accuracy. In the absence of remembered details to the contrary, respondents may attribute a purchase to a large category-leading brand, even if that’s not the product they actually purchased. You especially see this phenomenon when a brand name becomes the generic representative of the category itself, as in the case of Kleenex or Band-Aid.

Another potential pitfall to avoid is “over-extrapolating” survey data to represent a broader slice of the market than is appropriate. I’ll give you an example. Let’s say you are conducting market size analysis for a brand with climate-dependent usage or seasonality. One potential mistake would be to survey “power users” in southern latitudes of the United States, then extrapolate that data nationally to determine market size. It stands to reason that southern latitudes would cater to year-round purchasers of the product or service, while northern climates would have more limited purchase incidence. If you were to extrapolate that regionally collected data to impute the national market size, the purchase incidence would be overstated, resulting in a larger market size than is actually the case.
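To make that arithmetic concrete, here is a minimal sketch in Python. Every figure in it (the household count, the regional shares, the incidence rates) is invented purely for illustration:

```python
# Hypothetical illustration of the over-extrapolation pitfall.
# Every figure below is invented for the sake of the example.

national_households = 130_000_000

# Survey conducted only among southern-latitude "power users"
southern_incidence = 0.40        # 40% bought in the past year (assumed)

# Naive extrapolation: apply the southern rate to the whole country
naive_buyers = national_households * southern_incidence

# Region-weighted estimate: northern climates purchase far less often
regions = {
    "south": {"share": 0.35, "incidence": 0.40},   # year-round usage
    "north": {"share": 0.65, "incidence": 0.10},   # seasonal usage only
}
weighted_buyers = sum(
    national_households * r["share"] * r["incidence"] for r in regions.values()
)

print(f"Naive estimate:    {naive_buyers:,.0f} buyers")     # 52,000,000
print(f"Weighted estimate: {weighted_buyers:,.0f} buyers")  # 26,650,000
```

With these invented inputs, the naive extrapolation roughly doubles the estimated buyer count compared with the region-weighted calculation.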

Ultimately, what you are looking to do is apply mathematical formulas and data science to survey results to establish at least a starting point against which to triangulate an accurate market size. Survey instruments do serve to collect a lot of data points at scale relatively economically. With results in such high quantities, you can apply the Law of Large Numbers to achieve reasonable confidence that so many data points, all saying similar things, tend not to lie.
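As a rough sketch of that intuition, here is how the margin of error around a surveyed purchase-incidence rate narrows as the respondent pool grows. The 33% incidence and the standard binomial error formula are assumptions chosen for illustration, not figures from any particular study:

```python
# Hypothetical illustration of why large samples earn "reasonable confidence":
# the margin of error around a surveyed proportion shrinks as sample size grows.
import math

assumed_incidence = 0.33   # purchase incidence assumed purely for illustration

for n in (100, 400, 1_600, 6_400):
    # Standard error of a sample proportion, with an approximate 95% margin of error
    se = math.sqrt(assumed_incidence * (1 - assumed_incidence) / n)
    moe = 1.96 * se
    print(f"n={n:>5}:  ±{moe * 100:.1f} percentage points at ~95% confidence")
```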

But, as we urged in the prior article in this series, trust but verify.

Best Practices for Designing Effective Bottom-Up Surveys

Here are some of the best practices we observe when triangulating top-down insights with bottom-up insights and published data.

Use bottom-up data as a starting/reference point. Bottom-up research will reveal a great number of data points, from which you can start and to which you can refer back as baseline metrics. Because of the scale of survey instruments, you will obtain data points such as purchase incidence, purchase frequency, and a baseline formula for establishing a market size range (to be further scrutinized later).
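As a simple illustration of what such a baseline formula can look like, here is a minimal sketch. The population, spend figure, and ±15% range are assumptions for illustration only and would still need to be triangulated against top-down insights and published data:

```python
# A minimal sketch of a baseline market-size formula built from bottom-up survey
# data points. Every figure here is an assumption for illustration and would still
# need to be triangulated against top-down interviews and published data.

target_population = 10_000_000   # addressable buyers in the segment (assumed)
purchase_incidence = 0.33        # share who bought in the period (from the survey)
purchase_frequency = 1.8         # average purchases per buyer per period (from the survey)
average_spend = 45.00            # average spend per purchase, in dollars (assumed)

point_estimate = (
    target_population * purchase_incidence * purchase_frequency * average_spend
)

# Express the result as a range rather than a single number, pending triangulation
low, high = point_estimate * 0.85, point_estimate * 1.15
print(f"Baseline market size: ${point_estimate:,.0f} (range ${low:,.0f} to ${high:,.0f})")
```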

Take care in designing the survey. Effective survey design helps to ensure that the appropriate respondents are included in the survey through robust screening criteria. As we’ve demonstrated elsewhere, “garbage in” will almost always result in “garbage out.”

Be sure to account for the zeros. Purchase incidence must be scrutinized and screened in order for the respondent pool to be an accurate representation of the market being studied. Purchase incidence is a data point that reports whether a respondent has, in fact, purchased the product in question in the relevant time period being analyzed. If a respondent reports not purchasing the product — a zero purchase incidence — that insight must be used to calculate the total market size. Without accounting for this zero (and all other zeros), the market size will be overstated and not reflective of all potential customers.
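A minimal sketch, using invented responses and an assumed customer population, shows how quietly dropping the zeros inflates the estimate:

```python
# Hypothetical illustration of why zero-purchase respondents must stay in the math.
# Each number is purchases reported in the period; the zeros are genuine "did not buy" answers.
responses = [0, 0, 2, 0, 1, 0, 0, 3, 0, 1]   # invented survey data

target_population = 1_000_000                # addressable customers (assumed)

# Correct: average over ALL respondents, zeros included
avg_with_zeros = sum(responses) / len(responses)           # 0.7 purchases per person

# Mistake: quietly dropping the zeros before averaging
buyers_only = [r for r in responses if r > 0]
avg_without_zeros = sum(buyers_only) / len(buyers_only)    # 1.75 purchases per buyer

print(f"With zeros:    {target_population * avg_with_zeros:,.0f} purchases")    # 700,000
print(f"Without zeros: {target_population * avg_without_zeros:,.0f} purchases") # 1,750,000 (overstated)
```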

Make sure the numbers match. Relatedly, it is important to cross-check established data points against the data that the survey returns — for example, as it relates to purchase incidence. If published or otherwise reliably acquired data suggests a given market typically experiences a 33% purchase incidence (for example, if a product has a three-year replacement cycle), be sure that the data you return from your survey reasonably matches that established baseline. If it doesn’t, it might suggest that your respondent pool is either over- or under-representing the market, statistically. If you survey 400 respondents and 200 report a purchase, but the established/accepted incidence rate is 33% for that market, the purchase incidence in your respondent pool is too high (50%), and you should survey more people. Conversely, if your survey reveals only 100 purchasers, you may need to find more purchasers to survey in order to bring that 25% purchase incidence closer to the established 33% baseline.
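Here is a minimal sketch of that sanity check, using the example numbers above; the five-point tolerance is an arbitrary assumption chosen for illustration:

```python
# A minimal sketch of the cross-check described above, using the example numbers
# from this section (400 respondents, a 33% established baseline). The five-point
# tolerance is an arbitrary assumption chosen for illustration.

def check_incidence(purchasers: int, respondents: int,
                    baseline: float, tolerance: float = 0.05) -> str:
    observed = purchasers / respondents
    if observed > baseline + tolerance:
        return f"{observed:.0%} observed vs. {baseline:.0%} baseline: purchasers over-represented"
    if observed < baseline - tolerance:
        return f"{observed:.0%} observed vs. {baseline:.0%} baseline: purchasers under-represented"
    return f"{observed:.0%} observed: reasonably in line with the {baseline:.0%} baseline"

print(check_incidence(200, 400, 0.33))   # 50% observed: over-represented
print(check_incidence(100, 400, 0.33))   # 25% observed: under-represented
print(check_incidence(132, 400, 0.33))   # 33% observed: in line with the baseline
```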

Using Triangulation to Square the Circle

In an ideal situation, you do not rely solely on survey data to attain the market size intelligence you’re looking for. Nor do you simply interview company leadership, competitors, and industry experts for their anecdotal, top-down analyses. In a perfect world, you use both methodologies, along with publicly available information and published reports, to give the triangulation stool three solid legs and firm standing in the final analysis.

This is why a full triangulation approach is the most accurate and complete methodology to understand market size. Without all three legs, the stool will wobble…and ultimately fall down.


Ken Donaven serves as Senior Director with Martec. He can be reached at [email protected].
