A concern that often arises among researchers who run studies with single-source, self-report, cross-sectional designs is that of common method bias (also known as common method variance). Specifically, the concern is that when the same method is used to measure multiple constructs, this may result in spurious method-specific variance that can bias observed relationships between the measured constructs (Schaller, Patil, & Malhotra, 2015). Before we dive into the types of bias that may result from using a single method, we will first give an overview of what we mean by ‘method’.
When we say method, we broadly refer to aspects of a test or task that can be a source of systematic measurement error. In a questionnaire this includes the wording of instructions and items, or the response format (e.g. Likert, Visual Analogue Scale, etc.). Many researchers, such as Podsakoff, MacKenzie, and Podsakoff (2012), also consider a study’s measurement context a potential method factor.
Method effects are particularly relevant for latent constructs, which are meant to capture the systematic variance shared among their measures. “If systematic method variance is not controlled, this variance will be lumped together with systematic trait variance in the construct” (Podsakoff et al., 2012, p. 542). This can lead to inaccurate estimates of a scale’s reliability and convergent validity (Williams, Hartman, & Cavazotte, 2010).
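A small simulation can make this concrete. The sketch below (illustrative only; the method loading of 0.8 is an arbitrary assumption, not a value from any of the cited papers) generates item scores from a single trait, then contaminates the same items with a shared method factor. Because Cronbach’s alpha treats all shared variance among items as reliability, the contaminated scale looks *more* reliable, even though the extra covariance is method variance rather than trait variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

n, k = 5000, 6
trait = rng.normal(size=(n, 1))    # true construct score
method = rng.normal(size=(n, 1))   # shared method factor (e.g. a response style)
noise = rng.normal(size=(n, k))    # item-specific random error

pure_items = trait + noise                   # trait + random error only
biased_items = trait + 0.8 * method + noise  # same items, plus shared method variance

alpha_pure = cronbach_alpha(pure_items)
alpha_biased = cronbach_alpha(biased_items)
print(alpha_pure, alpha_biased)  # alpha is inflated when method variance is present
```

The inflation happens because alpha cannot distinguish *why* items covary: trait variance and method variance are “lumped together”, exactly as the Podsakoff et al. (2012) quote above describes.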
Method factors can inflate, deflate, or have no effect on the observed relationship between two constructs (Siemsen, Roth, & Oliveira, 2010). Inflated or deflated estimates of the relationship between two constructs can affect hypothesis testing by increasing the chance of a Type I or Type II error, respectively.
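To see the inflation mechanism at work, consider the hedged sketch below (the loading of 0.6 is an arbitrary illustrative choice). Two constructs are generated to be truly uncorrelated, but both observed measures load on the same method factor, so their observed correlation is spuriously positive; with opposite-sign loadings the same mechanism would instead deflate a true relationship:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

trait_a = rng.normal(size=n)  # construct A: uncorrelated with B by design
trait_b = rng.normal(size=n)  # construct B
method = rng.normal(size=n)   # method factor shared by both measures (same survey, same format)

lam = 0.6  # assumed common method loading for both measures
obs_a = trait_a + lam * method + rng.normal(scale=0.5, size=n)
obs_b = trait_b + lam * method + rng.normal(scale=0.5, size=n)

true_r = np.corrcoef(trait_a, trait_b)[0, 1]   # near zero: no real relationship
observed_r = np.corrcoef(obs_a, obs_b)[0, 1]   # spuriously positive
print(true_r, observed_r)
```

Testing a hypothesis about A and B on `observed_r` here would risk a Type I error; had the true correlation been positive and the loadings opposite in sign, the risk would be a Type II error instead.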
One crucial mechanism through which common method bias arises is decreased motivation (or sometimes a lack of ability) for participants to respond accurately, and an increased tendency for participants to engage in [satisficing](/2015/07/29/minimising-noise-and-maximising-your-data-quality-the-case-of-satisficing). Listed below are a few procedural remedies Podsakoff et al. (2012) propose to reduce satisficing:
By adding a time delay, increasing the physical separation of items, and/or adding a cover story to deemphasize any association between the independent and dependent variables, you can reduce participants’ tendency to use previous answers to inform subsequent answers. A temporal delay achieves this by allowing recalled information to leave a participant’s short-term memory before they answer new questions. Proximal separation removes common retrieval cues, and a cover story (i.e. psychological separation) decreases the perceived relevance of previously recalled information to newly recalled information.
Consider switching up the response formats for different questionnaires. Here is one example that demonstrates how influential response formats can be: Kothandapani (1971) experimented with four different scale formats: Likert, Thurstone, Guttman, and Guilford. He found, quite remarkably, that the average correlation between his independent and dependent variables dropped by 60%, from r = .45 to r = .18, when he used different response formats rather than the same response format.
Ambiguous items increase participants’ reliance on their systematic response tendencies (e.g. extreme or midpoint response styles) as they are unable to rely on the content of the ambiguous item. Reduce ambiguity by keeping questions as simple and specific as possible. Do not shy away from defining terms that may be unfamiliar to your participants and be generous in providing examples when appropriate.
For additional tips (including statistical remedies), we recommend reading Podsakoff and colleagues’ (2012) full review article.
Kothandapani, V. (1971). Validation of feeling, belief, and intention to act as three components of attitude and their contribution to prediction of contraceptive behavior. Journal of Personality and Social Psychology, 19, 321-333.
Podsakoff, P.M., MacKenzie, S.B., & Podsakoff, N.P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539-569.
Schaller, T.K., Patil, A., & Malhotra, N.K. (2015). Alternative techniques for assessing common method variance: An analysis of the Theory of Planned Behavior research. Organizational Research Methods, 18, 177-206.
Siemsen, E., Roth, A., & Oliveira, P. (2010). Common method bias in regression models with linear, quadratic, and interaction effects. Organizational Research Methods, 13, 456-476.
Williams, L.J., Hartman, N., & Cavazotte, F. (2010). Method variance and marker variables: A review and comprehensive CFA marker technique. Organizational Research Methods, 13, 477-514.