Archive for June, 2010
Are you planning to deploy an online survey, conduct telephone interviews, or recruit focus group participants? A key differentiator for your team is the ability to provide a sampling frame that fits the research objectives.
Explore the various panels of respondents available as well as the list options at your disposal. You may want to tap one or both for sample development. To make the process as effective and efficient as possible, provide clear parameters and as much information about screening and estimates of incidence rates as you can.
Thinking through the specifics of the sample parameters ahead of time will help you create a clear request for quotes (RFQ). The bids you get back will be based on your specifications, and completeness and clarity are the best tools you have to avoid surprises.
If you need help creating an RFQ, I highly recommend our eBook titled “Fast Track Your Web Panel RFQ”, an AtHeath Publication.
Put More Punch in Your Surveys and get the “Fast Track Your Web Panel RFQ” eBook FREE!
Purchase “Questionnaire Design for Business Research” it will help you create questionnaires using innovative best practices.
Find it at our dedicated website where you can read excerpts and get a special offer (two bonus eBooks valued at $59.90). http://questionnairedesign.tatepublishing.net/
Recently we asked the question “What Exactly is a Research Panel?” Now we answer two related questions.
What is a Panel Not?
A panel is not a simple list of emails or names. Panels typically achieve a 5%–20% response rate, much higher than the typical list. The reason panel response rates are higher is that panel providers actively manage their panelists and remove inactive respondents. Lists, on the other hand, are often old and overused, and typically yield response rates under 1%, sometimes under 0.5%.
Why Do I Need Panels?
Panels allow you to control your sample better than lists do. In the absence of high response rates, we are often left with quota sampling, and a panel provides the means to fill quotas.
In addition, when you buy a list, the cost is per name. Lists are typically sold with a minimum purchase of 5,000 names, with discounts for larger pulls. Depending on the list, you may be able to pre-select names based on profile information. However, the quality of these profiles is often questionable and rarely verified.
Panel companies charge per completed survey. The advantage is that you know what you will pay (assuming you’ve correctly estimated the incidence rate), and the profiling information is typically of higher quality and more current than a list’s. That said, there is a good deal of variability in the quality of panels and panelists, and the Latin phrase caveat emptor (“let the buyer beware”) is an apt caution.
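The arithmetic behind “you know what you will pay” is worth making explicit. The sketch below is illustrative only; the response rate, incidence rate, and cost per complete are hypothetical numbers, not benchmarks, and the function names are mine:

```python
# Illustrative sketch: estimating panel invitations and total cost for a study.
# All figures below (response rate, incidence rate, cost per complete) are
# hypothetical assumptions, not industry benchmarks.

def invites_needed(target_completes, response_rate, incidence_rate):
    """Panel invitations required to reach a target number of completes."""
    return int(round(target_completes / (response_rate * incidence_rate)))

def panel_cost(target_completes, cost_per_complete):
    """Panels charge per completed survey, so cost is completes times CPI."""
    return target_completes * cost_per_complete

# Example: 400 completes, 10% response rate, 25% incidence, $6 per complete
print(invites_needed(400, 0.10, 0.25))  # 16000 invitations
print(panel_cost(400, 6.00))            # 2400.0 (dollars)
```

Note how sensitive the invitation count is to the incidence estimate: halving the incidence rate doubles the invitations required, which is exactly why a mis-estimated incidence rate produces cost surprises.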
This is not because panel providers are disreputable; in fact, most are highly conscientious. The caution is a warning not to assume anything: be an educated buyer and user. Understand the strengths and limitations of the lists and panels you use and you will be in a much stronger position when clients ask tough questions.
Panels are sampling frames, designed as sources for sampling specific populations as part of market research studies and used for both business-to-consumer and business-to-business projects. Sample development is an essential part of any research project.
Panels are divided into two basic types: public panels and private panels. A public access panel is a database of people with profiles (some short, others reasonably extensive) and email addresses, which is actively managed and accessed to conduct online market research. As the name implies, anyone can access a public panel for conducting research; anyone, that is, who is willing to pay for the privilege.
Panelists have actively signed up, usually through a double opt-in process (see Note below for definition). In return for giving their opinion during online surveys, they receive rewards typically in the form of points or cash equivalents, which participants can convert to cash or other incentives. Other forms of incentives include (but are not limited to) airline miles, a chance to win a drawing, gift vouchers, and in some cases donations to charity.
Private panels are owned by a company or organization and used for their proprietary and explicit data collection requirements. These panels may be managed internally or the management may be outsourced to a third party. Private panel owners do not, in general, make their panelists available for research outside their company or organization. There are of course exceptions, but most exceptions are limited to a very close group of partners or alliances.
NOTE: Definition of Double Opt-in
First opt-in: the person opts in to participate [typically online, but could be in any form]
Second opt-in: the person is then sent an email asking them to confirm agreement to participate [hence the “double” opt-in]
The sample plan is typically straightforward. However, on occasion, when the sample is multinational or uses a complex stratification scheme, the research team will need to examine its options, paying close attention to how complexity translates into cost. It’s very important, once again, to set expectations appropriately.
Use this checklist to think through the sample plan process and structure your sampling plan to maximize quality and minimize cost.
Data Collection and Sampling
1. Match the Data Collection method to research objectives
- Phone interviews
- Web-based interview
- In-person, face-to-face
- Email w/ phone
- Mail [postal]
2. Sample source options for web-based studies
- Bartered lists – No control, low risk, opportunity costs
- Paid lists – No control, high risk, response rate management
- Panels – High control, CPI-based [cost-per-interview] pricing, guaranteed sample size
- Internal respondent pools – Hard to develop and maintain, good control, limited size
3. Sample source options for phone-based studies
- Internal lists – High control, low risk, uncertain biases, opportunity costs
- General business lists [D&B] – Good control, low risk, Response Rate management
- Panels – High control, CPI-based pricing, guaranteed sample size
- Respondent pools – Hard to develop and maintain, good control, limited size
4. Sampling frame criteria issues to consider
- Feasibility to supply the sample is essential
- Filling quotas is the next most important
- Time in field and meeting deadlines is third
5. Sample bias issues
- Assume that all sources have a bias
6. Mitigate bias through the use of:
- Multiple sources
- Response rate management
- Avoiding respondent fatigue
- Translation – localization
7. Stratification and quota setting
- Use a Random Stratification
- Set Quotas
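Item 7 above, quota setting, can be reduced to a simple screening rule: accept a respondent only if the quota cell for their segment still has room. The sketch below is a minimal illustration; the segment names and quota sizes are hypothetical:

```python
# Minimal sketch of quota-controlled sampling: accept a respondent only if
# the quota cell for their segment still has room. Segment names and quota
# targets below are hypothetical examples.

def fill_quotas(respondents, quotas):
    """respondents: list of (id, segment); quotas: dict of segment -> target n."""
    counts = {seg: 0 for seg in quotas}
    accepted = []
    for rid, seg in respondents:
        if seg in counts and counts[seg] < quotas[seg]:
            counts[seg] += 1
            accepted.append(rid)
        # otherwise screen out: quota cell full, or segment not being sampled
    return accepted, counts

quotas = {"small_firm": 2, "large_firm": 1}
stream = [(1, "small_firm"), (2, "large_firm"), (3, "large_firm"),
          (4, "small_firm"), (5, "small_firm")]
accepted, counts = fill_quotas(stream, quotas)
print(accepted)  # [1, 2, 4]
print(counts)    # {'small_firm': 2, 'large_firm': 1}
```

In practice the panel provider runs this screening for you, but seeing the logic makes clear why over-subscribed cells close early while hard-to-reach cells drive time in field.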
For more guidance on sampling see our eBook “Sampling Dilemmas and Solutions” www.AtHeath.com/MRRC
Written by Roger A. Straus
Focus group moderators generally employ one of three basic role strategies when moderating focus groups.
The first role a moderator can enact is that of the Naïf – such a moderator indicates (verbally and/or nonverbally) that he or she is there to learn from the group, and presents himself or herself accordingly. This role tends to be used primarily when dealing with technical subjects, in business-to-business or healthcare research, or when group members are older and/or hold conspicuously high-status positions. The naiveté is, or should be, only an act. The trick is to play a one-down position and seem intensely curious about what group members have to share, but personally unknowledgeable, while using your skills and knowledge to draw out participants and get them to explain “how it is” to you. This role is best adopted by moderators who are willing to act the part and who look the part (i.e., typically younger researchers).
The next role is that of the Pro – here, one straightforwardly does the job of asking questions, getting answers, and probing. One plays it as a professional, a process expert, but not a content expert – much like the role of the “moderator” in political forums, television-news talk programs, or a group facilitator in other settings. Those taking the facilitator role generally share as little as possible about themselves. If possible, they vanish into the group process. Novice moderators are well advised to start by adopting this persona, but many seasoned practitioners use it to good effect as well.
The third role is that of an Insider – an expert or at least a peer of the group members. To pull this off, you must either be an exceptionally good (and glib) actor or have intimate familiarity with both the subject matter and the everyday life or work world of the respondents. The trick is not to assert insider status (which is likely to prompt resistance from group members), but to enact it matter-of-factly. It is often wise to start by dropping insider terminology in a matter-of-fact manner, framing your probes in a technical way that clearly signals you know the field and understand “where they are coming from.” As the group proceeds and you build up that sense of being a peer, you can simply start using the language and terminology of an insider (as appropriate) and digging deep from there.
Each persona has its place; choose among them wisely.
Most of the time we tweet to be helpful, interesting, etcetera, and to be followed. Today I would like to suggest you should tweet with the express objective of being UnFollowed!
If you are asking, “Why would I do that?” I am sure you are not alone. Most of us focus our attention on creating as large a following as possible, and that is a valid objective, but you have to ask yourself, “To what end?”
First, let’s establish why you are tweeting. If you are out there having fun and have no business objectives, stop reading; this post probably doesn’t apply to you. However, if you are using Twitter as a micro-blogging platform and you hope to do business or some form of organized activity (e.g., charitable fund-raising) with your Twitter audience, then read on.
We all like to read entertaining tweets, news, or items that add value by alerting us to interesting things happening in our world. We follow and “list” people who tweet on topics of particular interest to us.
If you are tweeting with the intent of creating a following of people who care about your area of expertise, then you are by definition less interested in “casual” followers: those who are unlikely to give a hoot about your content or what you offer the marketplace.
Therefore, some portion of your tweets should focus, like a laser, on your primary topic with the intent of separating out relevant from irrelevant followers. If you are successful, you will see your follower numbers go down (temporarily).
However, the ratio of relevant to irrelevant followers will have improved, and this is a good thing. It’s especially important if you use Twitter as a micro-blogging tool and systematically integrate your blogging or other messaging activities with it. So Tweet On-Point!
Rewritten from Jarom Adair’s “How to break a search engine’s heart”
We often focus on things that search engines like. It’s equally important to know what will get you banned, blacklisted, and shunned by them. These are sometimes called “black hat” techniques: tricks used to coax higher rankings in ways that search engines don’t like. However, search engines have deciphered most of the tricks, so employ “black hat” techniques at your own risk.
If you’re unsure whether something could get you into trouble with search engines, there’s one rule of thumb that will serve you well: if it’s not human-friendly or it provides a poor user experience, search engines won’t like it. If something is good or helpful for a human visitor, a search engine will give you a better rank. On the other hand, if you do things that a human visitor wouldn’t find helpful, you’re likely to get into trouble. Some transgressions aren’t too serious; others are the equivalent of what bankruptcy does to your credit score. Here are a few of the tricks that will land you in jail:
Keyword spamming is probably the most common. Here is an example: “Real estate investing is great. Try real estate investing when real estate investing works. Real estate investing money is great during bad real estate investing economic times and good real estate investing economic times.”
Humans would have a hard time reading a sentence like that.
Keyword spamming worked much better in the past than it does now. A spider would see “real estate investing” mentioned repeatedly on a web page and say to itself “this page must be an important real estate investing page…look at how often those words are used!”
However, websites meant for human use don’t read like the sentence above. Search engines are now intelligent enough to distinguish a readable vs. an unreadable sentence. If your site is useful for humans, it will mention “real estate investing” regularly, but it will also talk about things that go along with real estate investing like financing, landlord issues, different investment strategies and so on.
Keyword spamming also includes placing keywords on a page where human visitors can’t see them. Techniques include:
- Making keywords the same color as the page background, or close to it
- Placing keywords in <input type="hidden"> tags
- Placing keywords in image <alt> tags
- Placing keywords between the <head> tags
- Placing keywords behind CSS absolutely positioned objects
- Making the text so tiny that people can’t read it or don’t notice it
Here are a few other black hat techniques that search engines are figuring out and penalizing people for using. I’m not going to tell you how to do these things, but you should be aware of what they are:
Page swapping: once a page achieves a high rank, swapping it out for a page of unrelated content.
Doorway pages: pages designed for good optimization, but human visitors are immediately forwarded to a different page.
Cloaking: It’s possible to create a web page that shows one page to spiders and a completely different page to human visitors. The spider page gets the rank and the human page displays whatever the person using cloaking wants.
Duplicate sites: This approach tries to get more search engine attention by making duplicate copies of the same site under different domain names.
Page hijacking: If you copy somebody’s web site and put it under a different domain name, you can fool search engines and visitors into thinking it’s the original site (example: copying Gap.com and creating MyGap.com and tricking people into visiting your site).
Scraper sites, spam blogs, and link farms: Create a bunch of fake sites with content on them (usually content copied from other sites) and then link each site back to your site to get more incoming links. Rumor has it that search engines will look to see where sites are hosted, and if they’re all hosted in the same place they’ll suspect something’s up.
A Few Final Thoughts
The examples above are but a few of the most common ways to get yourself into trouble with search engines. No doubt, intrepid marketers will come up with more.
If these tricks seem to compromise one’s integrity, it’s because they do. Techniques like these will often work for a short period of time, and you could dedicate your whole life to staying one step ahead of the search engines (some people do), but trying to do so (or hiring someone to do it for you) will only end up hurting legitimate businesses and ultimately your own business too!
Analysis plans are executed when data collection is completed, but should be formulated during the research design phase. Therefore, analysis planning should begin in parallel with the questionnaire design. We create questionnaire designs (or architectures) to address specific data collection objectives. The analysis plan should reflect these objectives and be related directly to research and business goals.
How does the process work? The research team (hopefully with the client’s involvement) is well advised to think “backward” from the intended report or analysis rather than “forward” from the research proposal. Thinking backward from the analysis requires knowing what hypotheses (a fancy word for beliefs or expectations) the client had going into the study: what are they attempting to discover?
The analysis plan and the questionnaire that collects the data for the analysis must be coordinated. If you ask questions that are not related to specific research objectives, you have wasted valuable resources. On the other hand, if you don’t cover the research objectives in the questionnaire, good luck addressing the client’s expectations; luck, however, will have had little to do with it.
Use a clear validation process to identify questions in the questionnaire that address the objectives outlined in the proposal or statement of work. In addition, be sure questions needed to “examine the data” [e.g., demographics and firmographics] are collected. These are the questions used for a priori segmentation analysis.
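One way to make that validation process concrete is a simple traceability check: map each question to the objectives it addresses, then look for uncovered objectives and unmapped questions. The sketch below is hypothetical; the question IDs and objective names are invented for illustration:

```python
# Hypothetical sketch of a questionnaire-to-objectives validation check:
# flag objectives with no supporting question, and questions tied to no
# objective (which should be justified, e.g. demographics/firmographics).

objective_map = {   # question id -> research objectives it addresses (assumed)
    "Q1": ["awareness"],
    "Q2": ["awareness", "purchase_intent"],
    "Q3": [],       # demographic item: used to examine the data, not an objective
}
objectives = {"awareness", "purchase_intent", "pricing"}

covered = {obj for objs in objective_map.values() for obj in objs}
uncovered = objectives - covered                              # no question covers these
orphans = [q for q, objs in objective_map.items() if not objs]  # questions with no objective

print(sorted(uncovered))  # ['pricing'] -> revise the questionnaire
print(orphans)            # ['Q3'] -> confirm it is needed for the analysis
```

Even a spreadsheet version of this mapping catches the two failure modes described above: wasted questions and unmet objectives.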
For a more in-depth discussion of analysis plans, see our eBook “Analysis Plans Made Easier” at http://www.AtHeath.com/MRRC
One of the most powerful approaches you can use to better understand the market is a sales cycle analysis. Gaining a strong appreciation for the good and bad news about your market positioning relative to your competition isn’t always easy, and if there’s bad news, it’s difficult to hear. However, not knowing where you stand is at best dangerous and could prove fatal.
Sales cycle analysis is about understanding why you do or don’t get on the short list of your prospects (or stay on the short list of your customers) and what will facilitate or impede your progress toward becoming a preferred supplier – the position we obviously all want to be in.
If knowledge is power, then sales cycle knowledge is a supercharged muscle car.
In product manufacturing, “form follows function”; similarly, in research design the approach must fit the objectives. This includes developing an experimental or quasi-experimental design, designing an effective questionnaire, determining sample requirements, choosing data collection methods, and selecting the analytic approach.
Each of the research design elements is crafted to optimize the results and to work in concert with one another.
While this may sound a little like “Mom and apple pie,” it is easy to lose sight of the importance of integrating all the elements of a project. The quality of the research is only as strong as the weakest link!