Archive for January, 2010
The first rule of all questionnaire design work is: “Do not make your respondents work harder than necessary.” This is good general advice for any questionnaire, and it is particularly true when you construct questions that use scales. If the scales are hard to use, you place an undue burden on respondents. If the scales are awkward, respondents will be frustrated. And if the scales are unbalanced, you will get ‘crappy’ data.
A few examples will probably clarify these points more effectively than a long description of the issues. Let’s start out with a classic example, which many a survey taker has had to endure. A one-sided or skewed scale is shown in the example below, which was taken from a satisfaction study. I doubt it fooled the respondents who took the time to read the questions carefully. Companies that use this tactic only fool themselves and look foolish to the survey taker.
A One-sided or Skewed Scale
Supplier Value is:
The next form of an unbalanced scale is the non-parallel scale. In this example, the scale starts with the concept of ‘negative’ on the left and migrates to the concept of ‘strong’ on the right side. More than likely, this error was an oversight on the part of the designer.
The Non-parallel Scale
New hiring policy has had:
|Negative Impact|No Impact|Little Impact|Modest Impact|Strong Impact|
However, it can cause problems for respondents and will be a problem downstream for the person analyzing the data. By the way, in case you did not notice, you see evidence of skewed scale construction in this example as well.
Finally, at least for this blog post, we have the third unbalanced scale issue. This one features an internal problem that will frustrate many respondents. At first glance, it looks ok, but it has what I will call the ‘Big Leap’ scale problem. The person writing the questionnaire created a big leap between the first and second scale points and the fourth and fifth scale points.
The ‘Big Leap’ Scale
In your opinion, what was the return on investment (ROI) for your company’s marketing campaigns during the last 12 months?
When a respondent is asked a question with this set of items, they are likely to be frustrated because, for some people, the answer they want to give is not on the scale; none of the options they were given reflects their experience accurately. In other words, they do not want to say the ROI was ‘significant,’ and yet it might be better or worse than ‘somewhat,’ leaving them with a dilemma.
Consider these points the next time you design a study or point them out to a colleague if he or she asks you to review a research instrument, but be gentle so they can hear the issue and correct it.
Note: the examples are not written in a best-practice style; a format was used that fit the page parameters of WordPress.
As a follow-up to our post on “When and Why to Choose Groups vs. One-on-One Interviews,” we thought it would be interesting (dare I say fun!) to set up a quick (unscientific, of course) poll to see how people use focus groups. Take the poll, see the results, and by all means comment on this subject.
In our zeal to provide clean datasets by removing questionable cases, we can commit another research “sin” – the introduction of researcher bias.
More than once, I have witnessed a researcher going through a dataset case by case, trying to determine whether a respondent is a gamer or simply unqualified. The researcher started out with some basic rules or criteria, but also made subjective decisions about respondents’ qualifications, arguing, “No one with this [attribute here] would frame answers the way this respondent did.”
Maybe the researcher was correct and the person was not a legitimate respondent, but this is a slippery slope, and we should not take the task of deleting cases lightly. Use criteria that are both valid and repeatable when you delete cases from a dataset.
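A transparent, rule-based filter is one way to make deletion criteria both valid and repeatable. The sketch below is a minimal illustration in Python; the field names (`duration_sec`, `answers`), the thresholds, and the sample data are hypothetical, not drawn from any actual study.

```python
# A minimal sketch of rule-based case removal. Every deletion is tied
# to an explicit, documented rule rather than a subjective judgment.

def is_suspect(case, min_duration=120):
    """Return the list of explicit rules this case violates (empty if none)."""
    reasons = []
    if case["duration_sec"] < min_duration:   # speeding through the survey
        reasons.append("too fast")
    if len(set(case["answers"])) == 1:        # straight-lining every item
        reasons.append("straight-line responses")
    return reasons

def clean(dataset):
    """Split a dataset into kept and dropped cases, recording why each was dropped."""
    kept, dropped = [], []
    for case in dataset:
        reasons = is_suspect(case)
        (dropped if reasons else kept).append((case, reasons))
    return kept, dropped

cases = [
    {"id": 1, "duration_sec": 480, "answers": [4, 2, 5, 3]},
    {"id": 2, "duration_sec": 45,  "answers": [3, 3, 3, 3]},
]
kept, dropped = clean(cases)
# Each dropped case carries the rule(s) that triggered its removal,
# so the decision can be audited and repeated on the next wave.
```

Because each removal is logged with its triggering rule, a colleague can rerun the same filter on the same data and get the same result.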
Please share your thoughts on this topic – leave a comment.
Learn more about market research best practices at http://www.atheath.com
Written by Roger A. Straus
From a distance, focus groups and “one-on-one” interviews may seem to be virtually interchangeable. Both are intensively moderated, focused qualitative methodologies, but they have different strengths and weaknesses. The focus group is a group depth interview; it runs on group dynamics and the group, not the constituent individuals, is front-and-center. The individual depth interview (IDI) focuses on a single individual at a time. Typically, one can get more done in a shorter span of time using focus groups, but you can get far more depth with IDIs – you have up to an hour with each subject, while each group member gets only about 10 minutes to speak on average. You can do a typical two-group-per-day design in about four hours, versus an entire day for six to twelve IDIs. This condenses the time demand on busy observers and researchers.
Still, while a well-moderated group better simulates real-world dynamics (playing on peer-to-peer interaction), IDIs are easier to do with sensitive topics or rare respondent types, and require less skill to conduct effectively. They are also better adapted to telephone interviewing.
It makes sense, then, to consider the differences between groups and IDIs in terms of how each fits your needs. The following chart, used as a checklist, will help you with the decision. Ideally, you should choose the approach that maximizes relevant “pluses” related to your needs.
|Objective/Consideration|Focus Groups|Individual Interviews|
|---|---|---|
|Discovery and exploration of new markets, concepts, etc.| | |
|Simulate real-world response/maximize realism|++|+|
|Get an overview|++|+|
|Explore consensus or lack of it|++|+|
|Concentrate observer/research time and effort|++|+|
|Understand commonalities within and differences between segments|++|+|
|Avoid “please the interviewer” (rapport or transference) effects|++|+|
|Understand differences within target segments|+|++|
|Gain detailed, in-depth individual understanding|+|++|
|Facilitate use of projective, other individual-based “depth” techniques|+|++|
|Explore very sensitive, embarrassing, controversial or “personal” topics|+|++|
|Avoid any potential for interpersonal bias|+|++|
|Study low-prevalence or hard-to-recruit respondent segments|+|++|
|Study many different respondent segments or types| | |
This chart is intentionally designed to suggest that the differences are relative – shades of gray, not black-and-white. Because these differences lie along a continuum, with pure focus groups and pure IDIs at the polar ends, there are variations that let you take advantage of both. You can get the benefits of IDIs while still getting much of the benefit of focus groups, for example, by using mini-groups (4-5 respondents) or triads. Dyads, which are best managed like IDIs, allow for some of the dynamics one gets with focus groups. Furthermore, you can combine the two: e.g., schedule two or even three groups (say, one at breakfast and two in the evening) with IDIs during the rest of the day. You can interview less prevalent and/or hard-to-recruit respondent types individually and others in groups. In this way, you can reap the benefits of both methods. In the end, the best methodological decision is the one that supports your business goals.
Download a Free eBook with details on using Focus Groups at: www.atheath.com/booksandseminars
Please share your thoughts on this “When and Why” issue! Add your comment below.
Debates on the best or most appropriate scale construction will continue well into the future. I do not intend to resolve the debate, but I can describe a few scales that have been used to good effect and make a few observations about scales that do not seem to work well.

First, let’s talk about a basic issue: “Does a scale need a mid-point?” The answer is typically yes, but not always. There are times, as in the case of dyadic trade-offs, when a six-point scale is appropriate. Another special case is a satisfaction survey in which you want to force a preference (this tactic assumes that a preference always exists, however slight, and that it is important to extract it from the respondent). In those cases, a four (4) or six (6) point scale might be desirable. Most statistical tests, however, assume that a mid-point is part of the measure. Moreover, if a satisfaction score or any other outcome or dependent variable is being created from a scale, it is best to use a 5 or 7 point scale to avoid violating assumptions in a multivariate analysis such as multiple regression.
By far the most frequently used scale is the 1-5 point scale. This easy-to-comprehend structure is a favorite among professional and non-professional survey instrument writers and has a distinct place within professional questionnaire construction. However, this scale suffers from two limitations. The first is its scope: with only five points, two at the extremes (1 and 5) and one mid-point, the scale is constrained by its own bounded parameters. Second, many respondents are reluctant to use extreme values, especially if the labels are extreme, such as when words like “never” or “always” are used. This can lead to a restricted set of scores, making it difficult to measure differences or changes over time.
One solution used extensively in the social sciences, but not as widely adopted by market researchers, is the seven-point Likert scale. A seven-point scale can greatly improve the differentiation of scores. It is anchored at the end-points with descriptors and leaves the remaining five points unlabeled (see Example Likert Scale). The advantages of a seven-point scale are important to consider when selecting scales for your questionnaire design.
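As a minimal sketch, a seven-point, end-anchored item and its score can be expressed in a few lines of Python; the item wording, the responses, and the scoring choice are illustrative assumptions, not from the post.

```python
# A 7-point Likert item labeled only at the end-points; the remaining
# five points are deliberately left unlabeled, as described above.
LIKERT_7 = {1: "Strongly disagree", 7: "Strongly agree"}

def mean_score(responses):
    """Average a set of 1-7 responses, ignoring out-of-range values."""
    valid = [r for r in responses if 1 <= r <= 7]
    return sum(valid) / len(valid)

# Hypothetical responses to one item. The scale keeps a usable
# mid-point (4) for downstream work such as regression analysis.
responses = [6, 7, 5, 6, 4, 7, 6]
score = mean_score(responses)
```

With seven points instead of five, respondents who avoid the anchored extremes still have five interior values to choose from, which helps preserve differentiation in the scores.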
As the writer and editor of a blog (though regardless of the specific writing venue I’m using), I am always looking for ideas and tips on how to improve. One source I have found is Paul Gillin, an author who writes and publishes both a blog and a newsletter.
In Dec. 2009, he posted five tips for bloggers, part of a series on blogging. If you write for a living or just for fun, I recommend you spend the time to hear what Paul has to say on the subject.
PS: Be on the lookout for our discussion of Focus Group Principles and Practice.
Web experts tell us that new website visitors decide whether to stay and look around or leave a site in about 3 to 5 seconds. If this is true, and we believe it is, you have to ask yourself, “Would I stay on my company’s home page or landing page for more than 3 seconds?”
It’s hard to be honest about this when it comes to your own website; the only way to get a fair answer is to see what people do when they arrive at your site for the first time. A bounce rate analysis is just what the doctor ordered – think of it not as a cure, but as an x-ray you read to see if there are any problems. However, you cannot take just one x-ray; when you explore bounce rates, you need different views, more like a CAT scan.
First, separate new visitors from returning visitors. Next, compare the bounce rates of the two groups. Then explore the new-visitor group by landing page to see which pages are especially problematic, and look at how visitors found those pages on your site.
Keep digging until you have a clear picture of what is driving your overall bounce rate – focus on the area that generates the highest bounce rate, fix it [often easier said than done] and move to the next problem. Systematically identifying and addressing bounce rate issues will make a material difference in the performance of your website.
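The drill-down above can be sketched in a few lines of plain Python over per-visit records; the field names and data are hypothetical assumptions, and in practice the numbers would come from your analytics tool.

```python
# A minimal sketch of the bounce-rate drill-down: split new from
# returning visitors, then break new visitors down by landing page.

from collections import defaultdict

# Hypothetical visit records; a "bounce" is a single-page visit.
visits = [
    {"new": True,  "landing": "/home",    "pages_viewed": 1},
    {"new": True,  "landing": "/home",    "pages_viewed": 4},
    {"new": True,  "landing": "/pricing", "pages_viewed": 1},
    {"new": False, "landing": "/home",    "pages_viewed": 3},
]

def bounce_rate(rows):
    """Share of visits that left after viewing a single page."""
    return sum(v["pages_viewed"] == 1 for v in rows) / len(rows)

# Steps 1-2: separate new from returning visitors and compare the groups.
new = [v for v in visits if v["new"]]
returning = [v for v in visits if not v["new"]]

# Step 3: break the new-visitor group down by landing page.
by_page = defaultdict(list)
for v in new:
    by_page[v["landing"]].append(v)
rates = {page: bounce_rate(rows) for page, rows in by_page.items()}
# `rates` points you at the landing pages that deserve attention first.
```

The per-page rates give you the ranked list of problem pages to fix one at a time, as described above.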
Please tell a friend about this blog. Thanks!
Written by Paul D. Berger
A profitable customer is a person, household, or company whose revenues over time exceed, by an acceptable amount, the cost to a company to attract, sell to, and serve that customer. This excess, excluding customer acquisition costs, is called the Customer Lifetime Value (CLV). A major reason for its usefulness is to determine the maximum acquisition cost at which a customer is a profitable one.
While everybody would agree that, of course, we wish to minimize acquisition costs, a CLV analysis may tell a company that it can increase its profits by spending, at the margin, more to acquire a customer than the profit made on the customer during the first transaction. This was very much the case for a cruise ship company; the data clearly indicated a sufficiently high degree of repeat business to warrant an acquisition cost that exceeded the margin from the first cruise. Relatively precise values for this maximum profitable acquisition cost (i.e., CLV) were determined for different cruise destinations. It is not that we know whether an individual customer will become a repeat customer. Rather, given a sufficiently large amount of data (cases), we can predict, with a reasonable degree of confidence, whether the repeat rate (or some other factor) falls within some tolerance. That tolerance allows a company to profit by spending money to acquire a customer without necessarily profiting from the specific initial transaction.
By analyzing past data, we can determine the CLV for customers in general or for different segments of customers. A segment can be virtually anything that is useful for the company. For a cruise ship company, it seemed natural to segment by destination of first cruise; for a financial services company, it was natural to segment by customer asset position or degree of activity (among other criteria). Segments are usually based on details of past purchase behavior but can be demographically or otherwise based. In the cruise ship case, a CLV analysis was also done for different age groups to aid in determining marketing and promotional strategy.
To find the CLV, many different quantities are tracked over time for each customer who starts at a certain time, or has certain demographics, or who buys a certain type of product (e.g., stereo equipment). Indeed, attention needs to be given to the unit of analysis (e.g., “customer” can mean a person, household, or company).
The quantities that are needed fall into two major categories. One category deals with probabilities, that is, the proportion of customers that repeat purchases, how often they do so, and in specific periods. The other category of information deals with revenues, product costs, and promotional costs (the latter often referred to as “retention costs”). These quantities tend to be available, although experience is often necessary to “dig them out,” as well as to identify changes in these quantities, such as increases in revenue due to price increases, or perhaps a pattern of ‘upward buying’ over time.
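Those two categories of inputs – repeat probabilities and margins net of retention costs – feed directly into a CLV calculation. The sketch below uses a simple geometric-retention model with illustrative figures; it is not one of the Berger and Nasr models, just a minimal example of the idea.

```python
# A minimal sketch of a retention-based CLV calculation. The constant
# margin, constant retention rate, and 20-period horizon are
# simplifying assumptions for illustration only.

def clv(margin_per_period, retention_rate, discount_rate, periods=20):
    """Expected discounted margin over `periods`, excluding acquisition cost.

    Assumes a constant per-period margin (revenue minus product and
    retention costs) and a constant probability of surviving to the
    next period (geometric retention).
    """
    value = 0.0
    survive = 1.0  # probability the customer is still active in period t
    for t in range(periods):
        value += margin_per_period * survive / (1 + discount_rate) ** t
        survive *= retention_rate
    return value

# A customer worth $100 per period with 60% retention and a 10% discount
# rate: the result is the maximum profitable acquisition cost.
max_acquisition_cost = clv(100, 0.60, 0.10)
```

With no repeat business (retention of zero), the result collapses to the first-period margin alone; as retention rises, the justifiable acquisition spend can exceed that first-transaction profit, which is exactly the cruise-line situation described above.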
For a more detailed discussion on the topic of CLV analysis, see Berger and Nasr, “Customer Lifetime Value: Marketing Models and Applications,” Journal of Interactive Marketing, Volume 12, Winter, 1998.
One of the most powerful approaches you can use to better understand your market is a sales cycle analysis. Gaining a clear view of the good and bad news about your market positioning relative to the competition isn’t always easy, and if there’s bad news, it’s difficult to hear. However, not knowing where you stand is at best dangerous and could prove fatal.
Sales cycle analysis is about understanding why you do or don’t get on the short list of your prospects (or stay on the short list of your customers) and what will facilitate or impede your progress toward becoming a preferred supplier – the position we obviously all want. If knowledge is power, then sales cycle knowledge is royalty.
There is certainly more than one approach to gaining insight into the sales cycle dynamics of customers and prospects, and we won’t try to explore all the options here. It is worth stating, however, that a commitment to understanding these dynamics is not a one-shot deal. If you and your company are serious about sales and the factors that will propel them, you are best served by tracking these metrics at least annually.
For better or worse, markets continue to evolve quickly. A very good way to stay close to the action is to track market activity systematically. Creating a baseline and measuring off that starting point is an essential component of the process.
Any sailor who races will tell you a good start is imperative.
You can structure sales cycle studies to help maximize your reach tactics. How to best reach your audience is a function of understanding how they search for information and more precisely how customers and prospects search for information at each stage of the buying process. In addition, the new reach equation is all about social networking and social media.
Studies on sales cycles also, almost by definition, provide competitive insights; it’s not enough to know whether you’re on the short list – you need to know who’s on it with you.
Combining information on brand and product positioning with a continually updated view of reach dynamics is a powerful tool in the hands of a savvy marketing professional. What are you waiting for? Get started!
In 2010 we’ll cover numerous market research topics. Here are a few of the postings (in no particular order) planned for early in the year:
Customer Lifetime Value – A major reason for its use is to determine maximum acquisition cost so that each customer is a profitable one.
Sales Cycle Analysis: A Powerful Tool – Sales cycle analysis is about understanding why you do or don’t get on the short list of your prospects and what happens next!
Have You Checked Your Bounce Rate Lately? – Bounce rate analysis is just what the doctor ordered – think of it not as a cure, but as an x-ray that you read to see if there are any problems.
Market Forecasts are on Everyone’s Mind – It appears that market forecasts are on everyone’s mind; a sample of more than N=250 purchasers of market research put ‘market forecasts’ at the top of their shopping list.
The Importance of Balanced Scales – Data collection is not an end in itself; it is the means to an end, and if you cannot interpret your data unambiguously, you have a problem.
Researcher Bias: Avoid this Slippery Slope – In our zeal to provide clean datasets by removing questionable cases, we can commit another research sin – the introduction of researcher bias.
And much more…