Conference Agenda

Session Overview
Session: Questionnaire translation and language - basic elements of cross-cultural survey projects
Time: Monday, 08/July/2024, 1:30pm - 3:00pm
Session Chair: Brita Dorer
Location: C401, Floor 4, Iscte's Building 2 / Edifício 2

Session Abstract

In cross-cultural survey projects, national questionnaires are usually developed by translating one or more source questionnaires into all relevant target languages. For the comparability of the data gathered by such multilingual survey projects, it is of utmost importance that all translations are of the highest quality and that the translated questionnaires ideally “ask the same questions” as the source questionnaire(s).

The ESS was the first cross-cultural survey to rigorously implement, from its beginning, the translation scheme “TRAPD” (consisting of the steps Translation, Review, Adjudication, Pretesting and Documentation), which is based on an interdisciplinary team or committee approach. The ESS is currently preparing its 12th round, and translating its questionnaires has been a particular focus in every round so far. By setting high standards for its translations and often being at the forefront of new developments in questionnaire translation, the ESS translation scheme has inspired others within the community. This applies not only to the translation process as such – including, for instance, its approaches to translating into shared languages or to systematically assessing translation quality – but also to the development of its source questionnaire, which is formulated to minimise later translation problems as much as possible by involving translation experts in source questionnaire development and by carrying out “advance translations”. Experimenting with and implementing innovations, e.g. in the field of translation technology, is a core element of the ESS translation approach.

This session invites presentations on various aspects of questionnaire translation or survey language in a broader sense, whether linked to the ESS or not. Topics may include different approaches or methods for translating questionnaires and for assessing or measuring translation quality; technical and other innovations in the field; closer looks at existing ESS translations, e.g. into shared languages, comparing or assessing particular translations or expressions; translatability matters, also in relation to questionnaire design or pretesting; and other linguistic or language-related topics, such as minority languages, the choice of interview or questionnaire language, easy or plain language, gender aspects in language, or the influence of translation and/or language on survey results.


Presentations

History of Survey Translation Methods in the ESS

Alisú Schoua-Glusberg

Research Support Services Inc., United States of America

In the last quarter of the 20th century, survey research began to look into alternative ways to translate questionnaires and to assess questionnaire translations. Back translation appeared as a promising approach and became best practice for a couple of decades. Toward the end of the 1990s, dissatisfaction with this approach led survey researchers with a linguistics background to consider and experiment with alternative approaches. This presentation will cover the history of survey translation, problems with back translation, early team translation approaches (e.g. Brislin, Modified Committee Approach), and how they influenced the ESS. Beginning with Harkness' (2002) TRAPD model, the ESS established its own implementations of the model, which is currently considered best practice in the translation of data collection instruments. The presentation will also discuss recent literature reporting experiments that compare different methods and approaches.



The impact of machine translation on the Review discussion in a questionnaire translation project – an experiment for English-to-German translation

Brita Dorer1, Dorothée Behr1, Diana Zavala-Rojas2, Danielly Sorato2

1GESIS-Leibniz Institute for the Social Sciences; 2Universitat Pompeu Fabra

In cross-cultural surveys, the quality of questionnaire translations is highly important for the comparability and overall quality of the final survey data. Over the past years, Machine Translation (MT) has improved in quality and is now increasingly used in the translation business for different text types. For questionnaires, until a few years ago, the recommendation had been against the use of MT, as its quality was considered insufficient. To bring together the technical improvements of MT and the need to optimise questionnaire translations, we carried out highly standardised questionnaire translation experiments in the language pairs English-German and English-Russian. The TRAPD scheme (Translation, Review, Adjudication, Pretesting, Documentation) has become the gold standard for translating questionnaires in cross-cultural contexts. In our baseline scenario, both initial translations (the T in TRAPD) were drafted by human translators; in two treatment scenarios, one of the two translations resulted from Machine Translation and Post-Editing, i.e., a human corrected the MT output following specific guidance: one scenario involved so-called ‘light’ post-editing (understandable, but not necessarily grammatically correct), the other ‘full’ post-editing of the MT output (quality comparable to human translation). In this presentation, we focus on the Review step. For the three translations into German, we study whether the involvement of MT in the Translation step had an impact on the Review discussions: whether the post-editors’ role was similar to that of the human translators and whether the contribution of the post-edited version was comparable to that of the version translated by a human translator. We apply a mixed-methods approach: transcripts of the recorded Review discussions are coded and analysed quantitatively, while the qualitative data come from an ex-post questionnaire consisting mostly of open-ended questions about the participants’ experiences during the Review discussions and their attitudes towards MT. We will present the experiment’s set-up and results and draw first conclusions on the potential use of Machine Translation in the context of questionnaire translations. This topic is embedded in a larger, still ongoing research project.



Using Machine Translation and Postediting in the TRA(P)D Approach: Effects on the Quality of Translated Survey Texts

Diana Zavala-Rojas1, Dorothée Behr2, Brita Dorer3, Danielly Sorato4

1ESS ERIC-Universitat Pompeu Fabra; 2GESIS; 3GESIS; 4Universitat Pompeu Fabra

A highly controlled experimental setting using a sample of questions from the European Social Survey (ESS) and European Values Study (EVS) was used to test the effects of integrating machine translation and postediting into the Translation, Review, Adjudication, (Pretesting), and Documentation (TRAPD) approach in survey translation. Four experiments were conducted in total, two concerning the language pair English-German and two the language pair English-Russian. The overall results of this study are positive for integrating machine translation and postediting into the TRAPD process when translating survey questionnaires. The experiments show evidence that, for German and Russian and for a sample of ESS and EVS survey questions, the effect of integrating machine translation and postediting on the quality of the review outputs (with quality understood as output texts with the fewest errors possible) can hardly be distinguished from the quality obtained in the setting with human translations only.



 