Tests of English for Academic Purposes in University Admissions
Abstract
This chapter charts the history of, surveys current developments in, and discusses future trends for English for academic purposes (EAP) assessments used in admissions to post-secondary English-medium institutions. It provides a historical overview of the origin and growth of EAP assessments used for university admissions, focusing on the evolution of test philosophies and of test content across the various assessments. Current trends in EAP admissions testing are then discussed, with attention to test constructs, delivery methods, scoring methods and technologies, and score reporting and interpretation. The chapter reports on current research using the argument-based approach to test validation and presents exemplary studies addressing validity inferences, including domain definition, evaluation, generalization, explanation, extrapolation, and utilization. It concludes with a discussion of future trends and of critical development and research issues to be addressed.
References
- Barker, F. (2010). How can corpora be used in language testing? In A. O'Keeffe & M. McCarthy (Eds.), The Routledge handbook of corpus linguistics (pp. 633–45). Abingdon, England: Routledge.
- Biber, D. (2003). Variation among university spoken and written registers: A new multi-dimensional analysis. In C. Meyer & P. Leistyna (Eds.), Corpus analysis: Language structure and language use (pp. 47–70). Amsterdam, Netherlands: Rodopi.
- Biber, D. (2006). University language: A corpus-based study of spoken and written registers. Amsterdam, Netherlands: John Benjamins.
- Biber, D., Conrad, S., Reppen, R., Byrd, P., & Helt, M. (2002). Speaking and writing in the university: A multi-dimensional comparison. TESOL Quarterly, 36(1), 9–48.
- Breeze, R., & Miller, P. (2008). Predictive validity of the IELTS Listening test as an indicator of student coping ability in Spain. In L. Taylor (Ed.), IELTS research reports. Vol. 12 (pp. 201–34).
- Bridgeman, B., & Carlson, S. B. (1984). Survey of academic writing tasks. Written Communication, 1(2), 247–80.
- Bridgeman, B., Powers, D., Stone, E., & Mollaun, P. (2012). TOEFL iBT speaking test scores as indicators of oral communicative language proficiency. Language Testing, 29(1), 91–108.
- Bridges, G. (2010). Demonstrating cognitive validity of IELTS academic writing task 1. Cambridge ESOL: Research Notes, 42, 24–33.
- Bridges, G., & Shaw, S. D. (2004). IELTS writing: Revising assessment criteria and scales (Phase 4). Cambridge ESOL: Research Notes, 18, 8–12. Retrieved January 14, 2013 from http://www.cambridgeesol.org/rs_notes/rs_nts18.pdf
- Brown, A., Iwashita, N., & McNamara, T. (2005). An examination of rater orientations and test taker performance on English-for-academic-purposes speaking tasks (TOEFL® Monograph No. MS-29). Princeton, NJ: ETS.
- Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1–47.
- Chan, S. H. C. (2011). Demonstrating cognitive validity and face validity of PTE Academic writing items Summarize Written Text and Write Essay (PTE Academic Research Note).
- Chapelle, C. A. (2008). The TOEFL validity argument. In C. A. Chapelle, M. K. Enright, & J. M. Jamieson (Eds.), Building a validity argument for the Test of English as a Foreign Language (pp. 319–52). New York, NY: Routledge.
- Chapelle, C. A., Enright, M. K., & Jamieson, J. M. (Eds.). (2008). Building a validity argument for the Test of English as a Foreign Language. New York, NY: Routledge.
- Charge, N., & Taylor, L. (1997). Recent developments in IELTS. ELT Journal, 51(4), 374–80.
- Cho, Y., & Bridgeman, B. (2012). Relationship of TOEFL iBT™ scores to academic performance: Some evidence from American universities. Language Testing, 29(3), 421–42.
- Clapham, C. (1996). The development of IELTS: A study of the effect of background knowledge on reading comprehension. Cambridge, England: Cambridge University Press.
- Cohen, A., & Upton, T. (2006). Strategies in responding to the new TOEFL reading tasks (TOEFL Monograph No. MS-33). Princeton, NJ: ETS.
- Coleman, J. A. (2006). English-medium teaching in European higher education. Language Teaching, 39, 1–14.
- Cook, V. (1999). Going beyond the native speaker in language teaching. TESOL Quarterly, 33(2), 185–209.
- Council of Europe. (2001). Common European Framework of Reference for Languages: Learning, teaching, assessment. Cambridge, England: Cambridge University Press.
- Cronbach, L. J. (1971). Test validation. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 443–507). Washington, DC: ACE.
- Cronbach, L. J. (1989). Construct validation after thirty years. In R. L. Linn (Ed.), Intelligence: Measurement, theory, and public policy (pp. 147–71). Urbana, IL: University of Illinois Press.
- Cumming, A., Kantor, R., & Powers, D. (2001). Scoring TOEFL essays and TOEFL 2000 prototype writing tasks: An investigation into raters’ decision making and development of a preliminary analytic framework (ETS RR-01-04; TOEFL-MS-22). Princeton, NJ: ETS.
- Cumming, A., Kantor, R., & Powers, D. E. (2002). Decision making while rating ESL/EFL writing tasks: A descriptive framework. Modern Language Journal, 86, 67–96.
- Davies, A., Hamp-Lyons, L., & Kemp, C. (2003). Whose norms? International proficiency tests in English. World Englishes, 22(4), 571–84.
- de Jong, J. (2009, June). Unwarranted claims about CEF alignment of some international English language tests. Paper presented at the 6th annual conference of the European Association for Language Testing and Assessment (EALTA), Turku, Finland.
- de Jong, J. H. A. L., & Zheng, Y. (2011). Applying EALTA guidelines: A practical case study on Pearson Test of English Academic. Retrieved January 14, 2013 from http://www.pearsonpte.com/research/Documents/EALTAguidelinesPTEAcademic.pdf
- Ducasse, A. M., & Brown, A. (2009). The role of interactive communication in IELTS speaking and its relationship to candidates’ preparedness for study or training contexts. In L. Taylor (Ed.), IELTS research reports. Vol. 12 (pp. 125–50).
- Educational Testing Service. (2004). English language competency descriptors. Princeton, NJ: Author.
- Educational Testing Service. (2005). Standard setting materials for the Internet-based TOEFL test [Compact disk]. Princeton, NJ: Author.
- Educational Testing Service. (2011). Reliability and comparability of TOEFL iBT scores. TOEFL iBT® Research Insight Series, 3. Princeton, NJ: Author.
- Eignor, D., Taylor, C., Kirsch, I., & Jamieson, J. (1998). Development of a scale for assessing the level of computer familiarity of TOEFL examinees (TOEFL Research Report 60). Princeton, NJ: ETS.
- Elder, C., & Davies, A. (2006). Assessing English as a lingua franca. Annual Review of Applied Linguistics, 25, 282–301.
- Feast, V. (2002). The impact of IELTS scores on performance at university. International Education Journal, 3(4), 70–85.
- Golder, K., Reeder, K., & Fleming, S. (2009). Determination of appropriate IELTS band score for admission into a program at a Canadian post-secondary polytechnic institution. In J. Osborne (Ed.), IELTS research reports. Vol. 10 (pp. 1–25).
- Gomez, G. P., Noah, A., Schedl, M., Wright, C., & Yolkut, A. (2007). Proficiency descriptors based on a scale-anchoring study of the new TOEFL iBT reading test. Language Testing, 24(3), 417–44.
- Hale, G. A., Taylor, C., Bridgeman, B., Carson, J., Kroll, B., & Kantor, R. (1996). A study of writing tasks assigned in academic degree programs (TOEFL Report No. 54; ETS RR-95-44). Princeton, NJ: ETS.
- Jenkins, J. (2006). The spread of EIL: A testing time for testers. ELT Journal, 60(1), 42–50.
- Kachru, B. (1996). The paradigms of marginality. World Englishes, 15, 241–55.
- Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Westport, CT: ACE/Praeger.
- Kerstjens, M., & Nery, C. (2000). Predictive validity in the IELTS test. In R. Tulloh (Ed.), IELTS research reports. Vol. 3 (pp. 85–108).
- Kirsch, I., Jamieson, J., Taylor, C., & Eignor, D. (1998). Computer familiarity among TOEFL examinees (TOEFL Research Report 59). Princeton, NJ: ETS.
- Lee, Y.-W., & Kantor, R. (2005). Dependability of new ESL writing test scores: Evaluating prototype tasks and alternative rating schemes (ETS RR-05-14). Princeton, NJ: ETS.
- Lim, G., Geranpayeh, A., Khalifa, H., & Buckendahl, C. (2012). Standard setting to an international framework: Implications for theory and practice. International Journal of Testing, 13(1), 32–49.
- Lloyd-Jones, G., Neame, C., & Medaney, S. (2007). A multiple case study of the relationship between the indicators of students’ English language competence on entry and students’ academic progress at an international postgraduate university. In L. Taylor (Ed.), IELTS research reports. Vol. 11 (pp. 1–54).
- Mauranen, A., Hynninen, N., & Ranta, E. (2010). English as an academic lingua franca: The ELFA project. English for Specific Purposes, 29, 183–90.
- Maycock, L., & Green, T. (2005). The effects on performance of computer familiarity and attitudes towards CB IELTS. Cambridge ESOL: Research Notes, 20.
- Milanovic, M., & Saville, N. (1996). Considering the impact of Cambridge EFL examinations (Internal Report). Cambridge, England: Cambridge ESOL.
- Munby, J. (1978). Communicative syllabus design: A sociolinguistic model for defining the content of purpose-specific language programmes. Cambridge, England: Cambridge University Press.
- Pearlman, M. (2008). Finalizing the test blueprint. In C. A. Chapelle, M. K. Enright, & J. M. Jamieson (Eds.), Building a validity argument for the Test of English as a Foreign Language (pp. 227–58). New York, NY: Routledge.
- Pearson Education. (2010a). Aligning PTE Academic test scores to the Common European Framework of Reference for Languages (Pearson Research Note). Retrieved June 6, 2012 from http://pearsonpte.com/research/Documents/Aligning_PTEA_Scores_CEF.pdf
- Pearson Education. (2010b). Research summary: The Pearson International Corpus of Academic English (PICAE). Retrieved June 6, 2012 from http://www.pearsonpte.com/research/Documents/RS_PICAE_2010.pdf
- Pearson Education. (2010c). The official guide to Pearson Test of English Academic. Upper Saddle River, NJ: Author.
- Pearson Education. (2011). Validity and reliability in PTE Academic (Pearson Research Summary). Retrieved June 2, 2012 from http://pearsonpte.com/research/Documents/Validity_and_Reliability_in_PTEA_4Aug10_v2.pdf
- Quirk, R. (1985). The English language in a global context. In R. Quirk & H. G. Widdowson (Eds.), English in the world: Teaching and learning the language and literatures (pp. 1–6). Cambridge, England: Cambridge University Press.
- Quirk, R. (1990). Language varieties and standard language. English Today, 21, 3–10.
- Rosenfeld, M., Leung, S., & Oltman, P. K. (2001). The reading, writing, speaking, and listening tasks important for academic success at the undergraduate and graduate levels (ETS RM-01-03; TOEFL-MS-21). Princeton, NJ: ETS.
- Sawaki, Y., & Nissan, S. (2009). Criterion-related validity of the TOEFL® iBT listening section (ETS RR-09-02; TOEFL iBT Report No. iBT-08). Princeton, NJ: ETS.
- Sawaki, Y., Stricker, L. J., & Oranje, A. H. (2009). Factor structure of the TOEFL Internet-based test. Language Testing, 26(1), 5–30.
- Seidlhofer, B. (2004). Research perspectives on teaching English as a lingua franca. Annual Review of Applied Linguistics, 24, 209–39.
- Shaw, S. D. (2004). IELTS writing: Revising assessment criteria and scales (Phase 3). Cambridge ESOL: Research Notes, 16, 3–7.
- Singh, M., & Sawyer, W. (2011). Learning to play the “classroom tennis” well: IELTS and international students in teacher education. In L. Taylor (Ed.), IELTS research reports. Vol. 11 (pp. 1–54).
- Swain, M., Huang, L., Barkaoui, K., Brooks, L., & Lapkin, S. (2009). The speaking section of the TOEFL iBT™ (SSTiBT): Test-takers’ reported strategic behaviors (TOEFL iBT™ Report No. iBT-10). Princeton, NJ: ETS.
- Tannenbaum, R. J., & Wylie, E. C. (2008). Linking English-language test scores onto the Common European Framework of Reference: An application of standard-setting methodology (TOEFL iBT™ Report No. iBT-06). Princeton, NJ: ETS.
- Taylor, C., Jamieson, J., Eignor, D., & Kirsch, I. (1998). The relationship between computer familiarity and performance on computer-based TOEFL® test tasks (TOEFL® Research Rep. No. RR-61). Princeton, NJ: ETS.
- Taylor, L. (2006). The changing landscape of English: Implications for language assessment. ELT Journal, 60(1), 51–60.
- Taylor, L., & Jones, N. (2001). Revising the IELTS speaking test. Cambridge ESOL: Research Notes, 4, 9–11.
- University of Cambridge ESOL Examinations. (2006). IELTS test performance data 2004. Research Notes, 23, 13–15. Retrieved June 6, 2012 from http://www.cambridgeesol.org/rs_notes/rs_nts23.pdf
- University of Cambridge ESOL Examinations. (2011). Analysis of test data. Retrieved June 6, 2012 from http://www.ielts.org/researchers/analysis_of_test_data.aspx
- University of Cambridge ESOL Examinations. (2012). History of IELTS. Retrieved June 6, 2012, from http://www.ielts.org/researchers/history_of_ielts.aspx
- Vienna-Oxford International Corpus of English. (2011). What is VOICE? Retrieved June 6, 2012, from http://www.univie.ac.at/voice/page/what_is_voice
- Wall, D., & Horák, T. (2006). The impact of changes in the TOEFL® examination on teaching and learning in Central and Eastern Europe. Phase 1: The baseline study (TOEFL® Monograph No. MS-34). Princeton, NJ: ETS.
- Wall, D., & Horák, T. (2008). The impact of changes in the TOEFL® examination on teaching and learning in Central and Eastern Europe. Phase 2: Coping with change (TOEFL iBT™ Report No. iBT-05). Princeton, NJ: ETS.
- Wall, D., & Horák, T. (2011). The impact of changes in the TOEFL® examination on teaching and learning in Central and Eastern Europe. Phase 3: The role of the coursebook, and Phase 4: Describing change (TOEFL iBT™ Report No. iBT-17). Princeton, NJ: ETS.
- Weigle, S. C. (2011). Validation of automated scores of TOEFL iBT® tasks against nontest indicators of writing ability (TOEFL iBT Report No. iBT-15). Princeton, NJ: ETS.
- Weir, C., Hawkey, R., Green, A., Unaldi, A., & Devi, S. (2009). The relationship between the academic reading construct as measured by IELTS and the reading experiences of students in their first year of study at a British university. In L. Taylor (Ed.), IELTS research reports. Vol. 9 (pp. 97–156).
- Xi, X. (2007). Validating TOEFL® iBT Speaking and setting score requirements for ITA screening. Language Assessment Quarterly, 4(4), 318–51.
- Xi, X., & Mollaun, P. (2011). Using raters from India to score a large-scale speaking test. Language Learning, 61(4), 1222–55.
- Zhang, Y. (2008). Repeater analyses for TOEFL iBT (ETS Research Report RM-08-05). Princeton, NJ: ETS.
Suggested Readings
- Alderson, J. C., Krahnke, K. J., & Stansfield, C. W. (Eds.). (1987). Reviews of English language proficiency tests. Washington, DC: TESOL.
- Chalhoub-Deville, M., & Turner, C. (2000). What to look for in ESL admission tests: Cambridge certificate exams, IELTS, and TOEFL. System, 28, 523–39.
- Cumming, A. (2007). New directions in testing English language proficiency for university entrance. In J. Cummins & C. Davison (Eds.), International handbook of English language teaching (pp. 473–85). New York, NY: Springer.
- Fulcher, G. (1999). Assessment in English for academic purposes: Putting content validity in its place. Applied Linguistics, 20(2), 221–36.
- Stoynoff, S., & Chapelle, C. A. (Eds.). (2005). ESOL tests and testing. Alexandria, VA: TESOL.