Conference Agenda

Session Overview
Session: More than a decade of research into switching general population surveys from interviewer-based to self-completion modes I

Time: Tuesday, 09/July/2024, 9:30am - 11:00am

Session Chairs: Michèle Ernst Stähli, Michael Ochsner

Location: C401, Floor 4, Iscte's Building 2 / Edifício 2

Session Abstract

General population surveys are currently challenged by several societal developments, such as budget constraints and respondents' more active lifestyles, which lead to lower contact success rates and higher costs in interviewer-based survey designs. At the same time, internet penetration rates are increasing rapidly and steadily. General population surveys are therefore pushed to innovate their designs, and experiments on fielding general population surveys on the web have been running for more than a decade. Survey methodologists have studied mode effects between interviewer-based and web/paper self-completion designs for over a decade. For example, Switzerland fielded a comprehensive mixed-mode experiment using the European Social Survey (ESS) in 2012, a complex experiment on push-to-web designs using the European Values Study was fielded in six countries in 2017, and during the pandemic, in 2021, the ESS was fielded as a self-completion web/paper survey in several countries. Given the ESS's switch to self-completion in 2027, several experiments based on the ESS questionnaire have been fielded or are currently in the field.

This session welcomes contributions that show the effects of a mode switch on the results of general population surveys, with a special focus on changes over time. This includes two types of research questions: the effects of a mode switch on time-series data, and changes in the effects of a mode switch over time. The first type includes how to demonstrate a mode effect in a time series, how to correct for mode effects, how to visualize a time series with a mode change in between, and more. The second type includes changes over time in the under- or overrepresentation of specific population groups, changes in, or persistence of, mode effects on particular variables, and changes in the share of paper vs. web participation, mobile participation, etc. We welcome contributions based on ESS data, but also on any other general population survey that provides insights into the effects of switching from an interviewer-based to a self-completion survey relevant to the ESS mode switch foreseen in 2027 (e.g., surveys including items and concepts also used in the ESS, such as trust, attitudes towards democracy, immigration, family, or the welfare state).


Presentations

A Comparison of Non-Response Patterns over Time in Countries that Switched to Self-Completion at Round 10

Peter Lynn, Carla Xena

University of Essex, United Kingdom

"This paper explores the implications of changing data collection methods, specifically from face-to-face to self-completion, on non-response patterns and comparability. The study focuses on nine countries that implemented self-completion in Round 10, analyzing response trends in relation to Rounds 8 and 9. The variables used to illustrate these patterns vary across countries. To enable these comparisons, we use auxiliary variables present in the Round 10 sample design data file and in at least one of the Round 8 or 9 files. We use logistic regression to assess the propensity to respond by region, level of urbanization, and individual characteristics such as sex, gender, marital status and citizenship. Our findings suggest that the response patterns exhibited in the earlier, face-to-face, rounds of ESS are broadly replicated when those same countries employed the self-completion mode at R10."



Effect of incentives in Face-to-Face and Self-Completion surveys on response rates and sample composition: The case of Switzerland

Michèle Ernst Stähli, Michael Ochsner, Alexandre Pollien

FORS, Switzerland

For more than a decade, more and more face-to-face surveys have been switching to self-completion modes. In this presentation we take advantage of several experiments conducted at FORS in this field. In particular, we use data from the ESS 2012 parallel web and paper fields, the EVS 2017 parallel web-paper field, the mode switch of MOSAiCH-ISSP in 2018, and several incentive experiments implemented in these surveys to investigate the effects of different types of incentives on response rates and sample composition. In doing so, we focus on different types of incentives (conditional and unconditional, monetary and non-monetary, digital and physical) in combination with mode and questionnaire length. The use of register data as the sampling frame for all these surveys allows for reliable analyses of sample composition, at least with respect to socio-demographic and geographical characteristics. By showing the impact of incentives in different versions of a mode switch, we reflect upon the role incentives could play in the success of the ESS transition to the web-paper mode.



Experimental evidence for effects of a mode-switch from interviewer-based to self-completion: Effects on time-series and item-nonresponse

Oliver Lipps, Michael Ochsner, Marieke Voorpostel

FORS, Switzerland

Currently, many surveys are being switched from interviewer-based to self-completion modes. In this presentation, we present two examples of the effects of such a mode switch that have a time-series aspect. Both examples are characterized by a controlled experimental design. The first example uses data from a repeated cross-sectional study of a Swiss city that was switched from interviewer-based (telephone) to self-completion (push-to-web mixed mode). It shows how a mode switch affects a cross-sectional time series and how it still affects reporting eight years after the switch. The second example uses panel data from the Swiss Household Panel to investigate the effect of a mode switch on item nonresponse. The results suggest that item nonresponse remains very low after a mode switch, despite a slight increase. The increase is so small that an effect on substantive results remains unlikely.



Investigating the role of data collection mode in the nonresponse and measurement error nexus

Caroline Roberts

University of Lausanne, Switzerland

One of the most pernicious challenges in a mixed-mode survey setting is the problem of 'social desirability bias': measurement error resulting from the tendency of respondents to answer questions in socially normative ways, or in ways that present themselves in a more favourable light. It is well documented that this bias affects certain question topics more than others, and that it arises more often when interviewers administer survey questionnaires than when respondents complete questionnaires on their own, whether online or on paper. Where a survey uses a mix of interviewer- and self-administered modes, or where a repeated or longitudinal survey switches from interviewer- to self-administration, the difference in measurement accuracy can confound comparisons across modes. Finding ways to adjust for differential measurement error on affected measures is therefore an important priority for researchers working with mixed-mode data.

Part of the challenge, however, lies in the possibility of a nonresponse and measurement error nexus: a correlation between response propensity and the tendency to give error-prone answers to certain questions. The mode of data collection may influence both the decision to take part in a survey and the amount of social desirability bias observed in estimates. However, other factors may also be at play: sample members with more 'socially undesirable' true values on measures of interest may be less likely to participate, and more likely to give socially desirable answers once persuaded to take part. Other common cause variables besides mode or the true value (e.g., survey topic, survey attitudes, and data privacy concerns) may also play a role in both error sources. Understanding under what circumstances response propensity and measurement error are related, how statistics are affected, and how to disentangle the role played by mode from other factors is therefore key to finding effective ways to handle mode effects in mixed-mode data.

In this paper, we use data from a mode experiment conducted in Switzerland in 2012-2013, alongside Round 6 of the ESS, to investigate the role of mode in the nonresponse-measurement error nexus for a set of measures known to be affected by social desirability bias (measuring subjective wellbeing and attitudes to immigration). Besides mode, the design of the study (which includes auxiliary data for nonrespondents) allows for an investigation into alternative possible common causes of nonresponse and measurement error in the target variables. To try to disentangle these various error causes, the analysis addresses the following research questions:

RQ1: Do nonrespondents report more socially undesirable characteristics than respondents?
RQ2: To what extent does this vary in surveys conducted in different modes?
RQ3: How does mode of data collection affect response propensity and the tendency for respondents to report more socially undesirable characteristics when modes are mixed sequentially?
RQ4: To what extent do other common cause variables besides mode play a role in the nonresponse-measurement error nexus, and does adjusting for them help to reduce observed differences between modes?