
Examining Writing: Research and Practice in Assessing Second Language Writing

Published on Jan 1, 2007
Stuart Shaw (Estimated H-index: 2), Cyril J. Weir (Estimated H-index: 14)
Abstract
This publication highlights the need for test developers to provide clear explanations of the ability constructs which underpin tests offered in the public domain. Such an explanation is increasingly required if the validity of test score interpretation and use is to be supported both logically and with empirical evidence. The book demonstrates the application of a comprehensive test validation framework which adopts a socio-cognitive perspective. The framework embraces six core components which reflect the practical nature and quality of an actual testing event. It examines Cambridge ESOL writing tasks from the following perspectives: Test Taker, Cognitive Validity, Context Validity, Scoring Validity, Criterion-related Validity and Consequential Validity. The authors show how an understanding and analysis of the framework and its components in relation to specific writing tests can assist test developers to operationalise their tests more effectively, especially in relation to criterial distinctions across test levels.
  • References (0)
  • Citations (65)

Cited By (65)
Gwan-Hyeok Im, Dong-il Shin (CAU: Chung-Ang University), Liying Cheng (Estimated H-index: 21)
Purpose and background: The purpose of this paper is to critically review traditional and contemporary validation frameworks (content, criterion, and construct validation; evidence-gathering; the socio-cognitive model; test usefulness; and an argument-based approach), as well as empirical studies using an argument-based approach in high-stakes contexts, in order to discuss the applicability of an argument-based approach to validation. Chapelle and Voss (2014) reported that despi...
Published on Jun 1, 2019 in Journal of Second Language Writing (IF: 4.20)
Sonca Vo (Iowa State University)
Abstract: Second language writing research has often analyzed written discourse to provide evidence on learner language development; however, single word-based analyses have been found to be insufficient in capturing learner language development (Read & Nation, 2006). This study therefore utilized both single word-based and multi-word analyses. Specifically, it explored vocabulary distributions and lexical bundles to better understand the development of writing proficiency across three levels in ...
Published on Mar 8, 2019 in European Journal of Special Needs Education (IF: 1.04)
Judith Fairbairn (British Council), Richard Spiby (British Council)
Abstract: Language test developers have a responsibility to ensure that their tests are accessible to test takers of various backgrounds and characteristics, and also that they have the opportunity to perform to the best of their ability. This principle is widely recognised by educational and language testing associations in guidelines for the production and delivery of ethical tests. This paper reports on the process of formulating good policy and practice within the British Council, an organisati...
Published on Dec 1, 2018 in Studies in Educational Evaluation (IF: 1.68)
Abstract: In writing assessment, the inconsistency of teachers’ scoring is among the frequently reported concerns regarding the validity and reliability of assessment. The study aimed to find out to what extent participating in a community of assessment practice (CAP) can impact the discrepancies among raters’ scorings. Adopting a one-group pretest-posttest design, patterns in the teachers’ scoring judgments were explored based on both quantitative and qualitative data. The results indicate si...
Published on Jul 1, 2018 in Assessing Writing (IF: 1.84)
Anthony Becker (Estimated H-index: 2; CSU: Colorado State University)
Abstract: In second language (L2) writing, rating scales are often used to measure a variety of discourse and linguistic features. When developing scales, the scoring criteria need to provide a clear and credible basis for scoring judgments, as well as for differentiating levels of writing performance (Weigle, 2002). Oftentimes, the criteria used to evaluate the L2 writing of students at intensive English programs (IEPs) are adopted from textbooks or developed as an ad hoc solution, and their ...
Published on Apr 1, 2018 in Assessing Writing (IF: 1.84)
Sathena Hiu Chong Chan (Estimated H-index: 3; University of Bedfordshire), Stephen Bax (Estimated H-index: 5; University of Bedfordshire), Cyril J. Weir (Estimated H-index: 2; University of Bedfordshire)
Abstract: International language testing bodies are now moving rapidly towards using computers for many areas of English language assessment, despite the fact that research on comparability with paper-based assessment is still relatively limited in key areas. This study contributes to the debate by researching the comparability of a high-stakes EAP writing test (IELTS) in two delivery modes, paper-based (PB) and computer-based (CB). The study investigated 153 test takers’ performances and their c...
Published on Jan 2, 2018 in Language Assessment Quarterly (IF: 0.98)
Anthony Green (Estimated H-index: 11; University of Bedfordshire)
Abstract: The Common European Framework of Reference for Languages (CEFR) is widely used in setting language proficiency requirements, including for international students seeking access to university courses taught in English. When different language examinations have been related to the CEFR, the process is claimed to help score users, such as university admissions staff, to compare and evaluate these examinations as tools for selecting qualified applicants. This study analyses the linking claim...
Published on Dec 1, 2017 in Language Testing in Asia
Miguel Fernández (Estimated H-index: 1; CSU: Chicago State University), Athar Munir Siddiqui (CSU: Chicago State University)
Background: Marking of essays is mainly carried out by human raters, who bring in their own subjective and idiosyncratic evaluation criteria, which sometimes lead to discrepancy. This discrepancy may in turn raise issues of reliability and fairness. The current research attempts to explore the evaluation criteria of markers on a national-level high-stakes examination conducted at 12th grade by three examination boards in the south of Pakistan.
Published on Jun 1, 2017 in Journal of Second Language Writing (IF: 4.20)
Lia Plakans (Estimated H-index: 10; UI: University of Iowa), Atta Gebril (Estimated H-index: 9; American University in Cairo)
Published on Oct 1, 2016 in Assessing Writing (IF: 1.84)
Sarah Goodwin (Estimated H-index: 1; GSU: Georgia State University)
Abstract: Second language (L2) writing researchers have noted that various rater and scoring variables may affect ratings assigned by human raters (Cumming, 1990; Vaughan, 1991; Weigle, 1994, 1998, 2002; Cumming, Kantor, & Powers, 2001; Lumley, 2002; Barkaoui, 2010). Contrast effects (Daly & Dickson-Markman, 1982; Hales & Tokar, 1975; Hughes, Keeling, & Tuck, 1983), or how previous scores impact later ratings, may also color raters’ judgments of writing quality. However, little is known about how...