Computer-Automated Scoring of Written Responses

Volume II. Approaches and Development
Part 8. Technology and Assessment
Nathan T. Carr

California State University, Fullerton, USA

First published: 11 November 2013

Abstract

This chapter begins by briefly discussing the human scoring procedures that preceded, and still operate in parallel with, computer-automated scoring (CAS) of written responses. The current conceptualization of the topic is approached by tracing the development of CAS in two areas: extended response tasks such as essays, and limited production tasks such as short-answer questions. Limited production responses are further divided according to the scoring approach used. This classification is important not only because the two task types yield different kinds of expected responses, but also because of the different computational approaches normally used to score them: various forms of key word or phrase matching for limited production responses, and systems using more complex forms of natural language processing to score both limited production and extended response tasks. The chapter next moves on to a discussion of current research on CAS of written responses, maintaining its organization based on extended and limited production tasks, and concludes by exploring the directions in which research, development, and operational use are likely to proceed.
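To make the key word or phrase matching approach concrete, a minimal sketch of such a scorer is given below in Python. It is purely illustrative: the function name, item, and answer key are invented for this example, and it stands in for no particular operational system the chapter discusses.

import re

def score_response(response, answer_key):
    # Award one point per key-phrase set if any acceptable
    # alternative occurs in the response as a whole word or phrase.
    # (Illustrative only; operational CAS systems handle spelling
    # variants, synonyms, and partial credit in far richer ways.)
    text = response.lower()
    points = 0
    for alternatives in answer_key:
        if any(re.search(r"\b" + re.escape(alt.lower()) + r"\b", text)
               for alt in alternatives):
            points += 1
    return points

# Hypothetical two-point item: where and when was the Eiffel Tower built?
key = [{"Paris", "the French capital"}, {"1889"}]
print(score_response("It was built in Paris in 1889.", key))  # prints 2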
