Most questionnaires in psychological assessment collect responses using rating scales (e.g., strongly disagree to strongly agree). Responses given in the rating-scale format are susceptible to a number of response biases, including response styles and faking. The multidimensional forced-choice (MFC) format has been proposed as an alternative to rating scales (RS) that may be less susceptible to response biases. In the MFC format, two or more items measuring different traits are presented to the respondent simultaneously. The respondent's task is either to rank the items with respect to how well they describe him/her or to choose the items that describe him/her best and least.
The goal of this talk is to provide an overview of current research on the MFC format with a
particular focus on the normativity of trait estimates, validity, and faking. First, I will briefly address how MFC data can be analyzed and summarize results from a simulation study evaluating the normativity of trait estimates from the Thurstonian item response model. Second, I will describe empirical research comparing the validity of trait estimates from the MFC and RS formats when normative scoring is used for both. Third, I will present an empirical study comparing the MFC and RS formats with respect to intentional faking. Lastly, I will discuss the findings and evaluate the feasibility of the MFC format as an alternative to the RS format in terms of a cost-benefit trade-off.