Volume 3, Issue 2 pp. 302-316
RESEARCH ARTICLE
Open Access

Reimagining higher education: The impact of learner outcome metrics in Ireland and beyond

Gerry Dunne

Marino Institute of Education, Dublin, Ireland

Correspondence: Gerry Dunne. Email: [email protected]
First published: 18 October 2024

Abstract

A college is an institution that exists to provide instruction. Subtly but profoundly, we are shifting to a new paradigm: a college is an institution that exists to produce learning (Barr & Tagg, 1995, p. 13). This paper traces the evolution of learner outcomes, from their genesis as a diagnostic tool for accurately measuring learning gains and facilitating institutional self-improvement, to their present distorted incarnation in propagating “governance by numbers”. The paper proceeds in a number of steps. It begins with a brief contextualization of the topography of higher education. It then explores the OECD's influence in quantifying outcomes and aligning them with performance metrics. From there, it looks to EU deliberations in the shape of the Bologna Process and its enduring iterations, which benchmark the success of higher education institutions (HEIs) against the achievement of learner outcomes. To conclude, taking the example of Irish funding and strategy, the Hunt Report (National Strategy for Higher Education to 2030), I provide an explanatory account detailing how this “learning outcomes race” and its association with “success metrics” within the higher education landscape is mistaken and harmful.

1 PARADIGM SHIFTS IN HIGHER EDUCATION

Over the past 20 years or so, higher education institutions (HEIs) have experienced ever-increasing pressures to provide accountability data on the quality and efficacy of their teaching and learning (Rizvi et al., 2022). Throughout the world, measuring “learning outcomes” is considered by many stakeholders to be a reliable metric for determining what is referred to as the “value-added” of colleges and universities (Douglass et al., 2012). Demands for comparable data on student learning outcomes stem from several stakeholders, including prospective students, employers in search of qualified graduates, taxpayers concerned about the efficient use of public funding, and policymakers deciding on accreditation and resource allocation (Nusche, 2008). This new emphasis on measuring learning outcomes has usurped the historical model by which HEIs were traditionally evaluated, more specifically, “resources used, classes taught, articles published and degrees awarded,” and precipitated a more student-centered approach based on calculating “what a learner knows or can do as a result of their learning” (Otter, 1992, p. 1). Proponents of this outcomes-based approach, namely the OECD and EU, in tandem with neo-liberal managerialist agendas, are now focusing on gauging “what students actually achieve, as opposed to what the institution intends to teach” (Eisner, 1979, p. 103). Underpinning this thinking is the claim that learner outcomes constitute the new gold standard for evaluating the performance of HEIs, and that they are an effective tool for benchmarking the shared OECD/EU goals of transparency, accountability, and quality assurance (Cuenin, 1988).

2 THE COMMERCIALIZATION OF EDUCATION: FLAWED METRICS AND MARKET FORCES

In the wake of such deliberations, many higher education institutions across Europe have been hurled into the midst of an identity crisis (Barnett, 1997). Pressures over accountability, vis-à-vis teaching, learning, and research, coupled with the all too evident public fixation with league tables, have led many academics to bemoan the watchdog role of the OECD and the Times Higher Education Supplement (Walsh & Loxley, 2014). At the heart of the matter lie the issues of (a) performance, (b) productivity, (c) quality, (d) efficiency and (e) economic return (Walsh & Loxley, 2014). Universities tend to be predominantly funded via public coffers. Where there is public expenditure, there is accountability. Where there is money, there is commercialization. Where there is commercialization, there are price tags. Where there are price tags, there is profit and loss. Somewhere amidst this metricized process, education, educators (in the form of human capital), and the process of educating have been issued price tags. Profit margins and marketplace metrics now seem to dictate the nature of the task befalling academics—they must somehow increase their value in order to prove their worth.

This view has been met with exasperation amongst the general academic community (Walsh, 2019). For the most part, academics categorically refuse to reduce their work to a value-based commodity within the marketplace (Slaughter & Rhoades, 2004). They question the twisted logic whereby “money is not pursued to allow research, research is pursued to attract or acquire money” (Saunders, 2007, p. 12). In the same vein, academics strongly object to being subjected to a myriad of performance-based metrics. For these reasons they rally against the commercially driven model of Taylorism, named after Frederick Winslow Taylor (1856–1915), whereby work is regulated for greater productivity through constant measurement. Their views echo those espoused some 18 years ago by the Trade Union Advisory Committee/Education International to the OECD Meeting of Ministers of Education on 22 June 2006, wherein they assert, “the quality of higher education is neither a measurable product nor an outcome subject to any simple performance-based definition—quality has to do with the conditions and activities of teaching and free enquiry” (OECD, 2006, p. 3). Stridently defending this principle, academics of this persuasion attest that “the Irish university is a public good, not a profit-making institution… [where]… the main aim of teaching is the dissemination of knowledge and the fostering of creativity.” Therefore, it should never be about increasing “human capital,” “profit-margins” or “commercialization” (Defend the Irish University, 2013).

Outside this noble paradigm, Presidents of HEIs, stakeholders, policymakers, and politicians actively emphasize a value-added philosophy of education. They maintain that productivity is paramount to success and that success is the cornerstone of HEIs' existence. Should productivity decline, success will dwindle and the future of said institutions will be in jeopardy. It is therefore crucial that performance, productivity, and efficiency be measured in order to ensure that HEIs continue to improve and flourish. Proponents of this commercial model point out that “public authorities provide the bulk—80% or more—of expenditure on educational institutions in half of all OECD countries; in only four countries—Australia, Japan, Korea and the United States—do public authorities pay less than half” (OECD, 2006, p. 7). This type of outlay, they argue, is clearly unsustainable. In keeping with this view, “governments are obliged to justify the allocation of public resources and the effectiveness with which they are used” (OECD, 2006, p. 8).

For better or worse, universities occupy a global market. To survive the vagaries of the marketplace, many HEIs have succumbed to a business model to “sell their wares.” Some commentators view such measures, especially when revered as the guiding tenets of success, as anathema to the historical values of the university. They argue that what we are left with is those who know the price of everything, but the value of nothing. Ronnie Munck (2013) of the Defend the University Campaign likewise speaks of “a creeping business agenda subverting the ethos of the university…and a reduction of the student search for knowledge and enlightenment to a set of narrow job-related skills” (Defend the Irish University, 2013). This wave of “managerial philistinism, whereby politicians and public servants with little understanding of the principles and values of university education insist HEIs should be run in a more business-like fashion, focused on efficiency and economic return,” signifies a regressive step in their eyes. Marketplace proclivities, especially at the expense of historical educational values such as free enquiry, lead people to understand education not as a process, as per the traditional norms, but as an enterprise—something that only becomes worthwhile when it is rendered quantifiable, initially within a reductionist model such as learner outcomes (an idea we shall return to later) and, perhaps more importantly, within a fiscal setting.

As Mary Gallagher (2012) bewails in her book, Academic Armageddon: An Irish Requiem for Higher Education, this teleologically focused educational philosophy has now infected our third-level students. Now, as she puts it,

You go in order to get your grades and then go on to the next stage, rather than going to college to explore different ways of thinking, and the meaning of life. There was a sense of openness, which I think has been lost. Everything has an exchange value now, or is a line on your CV, and that starts very early (p. 21).

Conversely, there are those who believe some measure of accountability must be enacted in the higher education sector. Recent publications in the US, such as Declining by Degrees and Our Underachieving Colleges, have questioned the quality of student learning, especially in essential areas such as critical thinking and analytical reasoning (Bok, 2008). As with all institutions funded from the public purse, there are ways in which efficiency and productivity can be improved. Leading this charge are government officials, international organizations such as the OECD and the EU, and concerned taxpayers. They insist on value for money and consistently espouse a bottom-line approach to education. As Lenning (1977) reflects,

In this age of accountability, administrators and others have been especially concerned about educational outcomes and their measurement. [As a result], they are being called on to provide factual evidence that they and their programmes are providing the benefits that were intended, and that these outcomes are being produced in a cost-effective manner (p. 9).

This zeitgeist has ushered universities and their respective managerial bodies into distilling education, knowledge, and learning into “outcomes”; in other words, a teleological commodity wherein the effects of learning can be clearly measured and tested. HEIs are now expected to furnish demonstrable proof detailing what students know or can do as a result of their learning. Currently, learner outcomes are considered a sine qua non: a prized master key capable of unlocking the doors of quality assurance, transparency, and value for money (OECD, 2012). This phenomenon has heralded new modes of governance within HEIs, most of which, fixated on performance and accountability metrics, have embraced learner outcomes as a universal currency—an overarching auditing instrument that transcends boundaries, languages, and qualifications. On this basis, international interest groups such as the OECD and successive EU higher education commissions since Bologna 1999 have come to identify learner outcomes as an effective tool for benchmarking transparency, accountability, and quality assurance. To understand how this situation has arisen, it is important to review some key policy ideas and decisions. These can be traced to three sources that are connected but which also demonstrate differences in political emphasis: the first is the broad-scale vision for education of the European Union, the second is the more utilitarian and functional view of the OECD, and the third is the Irish national policy on higher education, which is informed by both.

Irrespective of one's allegiances, concerns of this nature have “precipitated state legislatures into increasing the pressure on colleges and universities to become more accountable for student learning” (Klein et al., 2005, p. 251). At present, learner outcomes are one of the leading tools used to gauge the teaching and learning performance of HEIs (OECD, 2012). Recent examples of this approach include the OECD's Programme for International Student Assessment (PISA) and its AHELO study (Assessment of Higher Education Learning Outcomes, 2012), which chart pupils' performance at the second and third level, respectively, and US President Barack Obama's exhortation for a more outcomes-based assessment of higher education (Times Higher Education, 16 November 2013). The identification of outcomes-based education as an optimal metric convinced the U.S. Department of Education to begin actively pushing a plan to develop outcomes-based ratings for U.S. colleges and universities by 2015. A further goal was to tie consequences—in the form of federal funding—to outcomes by 2018, an outcome that did not materialize. Under the plan, should a certain HEI fall short on certain outcome measures, its funding would be shaved accordingly. Various forms of performance funding have already been instituted in the UK (Clark, 2009), in some US states (Banta, 2007), and in Australia (Anderson et al., 2000). Understandably, this performance-based funding approach is a highly controversial and contentious topic in higher education today (see OECD, 2012).

Other notable exemplars of this outcomes-based approach include the Provão and ENADE in Brazil, which have been administered since 1996 and 2004, respectively. These instruments provide external data across more than 26 subject areas. In the US, the Collegiate Learning Assessment has been used by over 400 institutions in order “to collect objective data on learning outcomes” (Coates, 2009, p. 32). Australia likewise uses outcomes-based assessment in the form of its Graduate Skills Assessment (2000) to gauge critical thinking and problem solving amongst its graduates. In Mexico, the Examen Nacional de Ingreso al Posgrado (1997) looks at verbal and analytical reasoning and candidates' ability to infer, analyze, and synthesize information. Recent research from the European arena, more specifically, empirical research conducted in the vocational sector by Cedefop (the European Centre for the Development of Vocational Training) in 2011, compiles a comparative analysis of learning outcomes in nine European countries. The picture emerging from the literature is that learner outcomes are now regarded as a reliable instrument for obtaining comparable information on what students actually learn across HEIs (OECD, 2006).

Despite the current obsession with quantifying performance within the topography of higher education, educationalists such as Ozga and Lingard (2007) view an outcomes-based approach to education as nothing more than a thinly veiled auditing system, oftentimes manipulated for comparative purposes rather than used as a means to enhance teaching and learning. Looking to Europe, we see that the Bologna Process (1999), with its insistence on an “adoption of a common framework of readable and comparable degrees” together with establishing a transparent system of “quality assurance”, has led to a shift from the traditional model of evaluation, which typically focused on “resources used, classes taught, and articles published”, to “what a learner knows or can do as a result of learning” (Otter, 1992, p. 1). This performance-based utilitarian educational philosophy, stemming from international organizations (IOs) such as the OECD and successive EU Commissions, in tandem with national policy documents, namely the National Strategy for Higher Education to 2030 Report, not only conceptualizes but drives forward what Douglass et al. (2012) have termed the “learner outcomes race” (p. 318). In line with this theme, this paper traces how learner outcomes, via the three-pronged European higher education objectives of (a) quality assurance, (b) harmonization, and (c) transparency, have grown to underpin this entire process.

Before we move to investigate how the OECD and EU have endorsed the merits of learner outcomes, we must first explain how the traditional input/output paradigm differs from an outcomes-based approach. Deborah Nusche of the OECD (2008, p. 7) explains the distinction thus: “inputs are the financial, human and material resources used, such as funding and endowments, faculty and administration, buildings and equipment”, whilst outputs are “anything that an institution or system produces.” HEI outputs can therefore be measured in terms of articles published, classes taught, educational material distributed, and degrees awarded. Typically, HEIs were evaluated using a combination of input/output metrics. This has now changed, with proponents, in this case the OECD, detailing how they view “inputs, activities and outputs [as having] little intrinsic value in terms of student learning” (Nusche, 2008, p. 7). They are, instead, “only the intermediate steps, that may or may not lead, to outcomes or benefits” (p. 8).

According to Allan (1996), outcomes describe what the student actually achieves, as opposed to what the institution intends to teach. This is the crux of the matter. As one would expect, educational philosophy within EU policy and OECD test data have recently aligned to explore this performance-based approach to education. Research from Barr and Tagg (1995), albeit focused on the American model, traces the evolution from an emphasis on delivering lectures and providing students with the means to learn toward a “learning paradigm” whereby the focus is no longer on the means but on the end—more specifically, supporting the learning process of students. As they put it,

We are beginning to recognize that our dominant paradigm mistakes a means for an end. It takes the means or method—called “instruction” or “teaching”—and makes it the college's end or purpose. […] We now see that our mission is not instruction but rather that of producing learning with every student by whatever means work best (Barr & Tagg, 1995, p. 13).

This new initiative—a “learning incubator” if you will—shifts the focus from “instruction,” “teaching” and “research” to a new model centered on measurable learner outcomes. As one critic maintains, though, “something has been lost in the shift from the language of education to the language of learning”, since education “is not just about the transmission of knowledge, skills, and values, but is concerned with the individuality, subjectivity, or personhood of the student, with their ‘coming into the world’ as unique singular beings” (Rizvi et al., 2022, p. 3). Now, as part of this new model, the student is the focus of the learning process. With this, learner outcomes are identified as an apposite means of isolating precisely what a student knows or can do as a result of their learning. Theoretically, this copper-fastens transparency, accountability and quality assurance. In addition, it facilitates harmonization between EU universities and makes synchronization a more realizable goal. The key architects driving this agenda forward are the OECD and EU (Rizvi et al., 2022).

3 GLOBAL PUSHES FOR ACCOUNTABILITY: OECD AND EU INFLUENCES

Since its inception in 1961, borne out of the Organisation for European Economic Co-operation (OEEC), the OECD has been described as “a think tank, a geographical entity, an organizational structure, a policy-making forum, a network of policy-makers, researchers and consultants, and a sphere of influence” (Henry et al., 2001, p. 1). In more legalistic terms, the OECD is an intergovernmental organization of some 30 of the world's most developed countries, which produce two thirds of the world's goods and services and which are committed to the principles of a market economy and a pluralistic society. George Papadopoulos, former deputy director responsible for education, reports in his seminal work Education 1960–1990: The OECD Perspective how OECD involvement in education is principally designed to facilitate ongoing and meaningful discussions, both within and across countries, in the hope that such exchanges will lead to general conclusions which serve as guidelines for policy development.

This “catalytic role”, Papadopoulos proposes, involves three steps. It begins with the identification of major policy issues emerging on the education horizon. A framework is then developed around these issues, and a dialectical process involving experts ensues. Finally, the organization carries out a solid and objective analysis of the problem, both quantitative and qualitative, whereupon the collective experience of member countries in terms of concepts and actual policy and practice comes to bear.

Owing to the OECD's seasoned reputation for educational expertise, particularly in statistical analysis, its findings and subsequent policy recommendations enjoy a certain lofty status amongst policymakers, with the result that few see the need to question its data or to deconstruct the authoritative character of the knowledge contained therein (Porter & Webb, 2004). It has been said that, due to this mindset, the OECD's greatest impact thus far has been in devising governance by comparison. One example of this approach is the implementation of learner outcomes and the AHELO project. It is this we examine next.

In 2006, at the OECD Education Ministers' Conference, Marietta Giannakou, the then chair, noted in her conclusions that there was a pressing need to provide “a clearer focus on what students learn”. To accomplish this goal, the OECD stressed it was vital for universities to first “develop better evidence of learning outcomes.” In keeping with this view, “a major program of reform is needed, giving more emphasis to outcomes in particular.” Given the OECD's experience with the PISA study, it was felt the organization would have the necessary expertise to develop new measures of learning outcomes in higher education. As both the OECD and Education Ministers wanted to enhance the quality of higher education, this shared objective signaled the arrival of the AHELO program.

Further research conducted in 2008, more specifically the “OECD Thematic Review of Tertiary Education,” likewise recommended increasing the focus on student outcomes to maximize quality in HEIs. It was around this time that the OECD started preparing a roadmap for the AHELO study. The AHELO executive summary notes, “in a complex, ever changing and growing higher education context, where a variety of rankings are often being used as a yardstick of academic excellence, there is a clear need for a way to effectively measure the actual outcomes of teaching and learning” (AHELO Executive Summary, p. 2). The same report concludes that, despite quality assurance systems and the remarkable development of performance indicators worldwide, data on the learning outcomes of higher education remain scarce. Mindful of this empirical shortfall, the OECD's Assessment of Higher Education Learning Outcomes (AHELO) thus aims to provide a tool for HEIs to assess what their students know and can do upon graduation on an international scale. It was felt that the implementation of this tool could play a key role in supporting HEIs in their efforts to improve the learning outcomes of their students (p. 2). The implementation of AHELO involved a total of 249 HEIs in 17 countries and regions, and the instruments were administered to close to 4900 faculty and some 23,000 students.

As previously noted, the proposal to explore the development of an international AHELO emerged during the Athens Meeting of OECD Education Ministers in 2006. It was becoming clear at this stage that new performance metrics for higher education were required in order to enhance transparency, accountability, productivity, and efficiency. EU and OECD efforts were steadily aligning to initiate a paradigm shift for higher education, which epitomized the following developments within HEIs:
  • (i) Governance within HEIs was slowly edging away from collegial approaches incorporating communities of scholars, toward models embracing increased transparency and accountability. Outcomes-based assessment was therefore identified as a measure that would facilitate both of these objectives.
  • (ii) Shifts in educational philosophy prompted a move from evaluating HEIs' performance based on outputs (articles published, for example) to what a learner knows or can do as a result of their studies. This shift is exemplified in the Bologna Declaration of 1999, which pledged to write all higher education programs in terms of learning outcomes by 2010.
  • (iii) A resurgent emphasis on student-centered learning and, with it, research on the nexus of teaching–learning processes. This shift from an “instruction paradigm” toward a “learning paradigm” positions outcome assessments as an important tool for the evaluation of instructional effectiveness.

It is worth pointing out that there are two faces to learner outcomes: (i) progressive, used to enhance teaching and learning; and (ii) a yardstick used to gauge HEIs' teaching and learning performance. Turning to EU policy (2000–2013), alongside the National Strategy for Higher Education to 2030, I now show how learner outcomes, insofar as they fulfill the objectives of transparency, accountability and quality assurance, straddle both agendas to a greater or lesser extent.

Turning now to EU higher education policy, the Bologna Declaration of 19 June 1999, henceforth referred to as the Bologna Process, cordially invites “universities and other institutions of higher education…to be actors, rather than objects,” in this upcoming “essential process of change.” To this end it advocates a clear common purpose: “to create a European space for higher education in order to enhance the employability and mobility of citizens and to increase the international competitiveness of European higher education.” A set of specified objectives are tabled in light of this goal. First on the list is the “adoption of a common framework of readable and comparable degrees.” This is followed by formulating and delivering a “European Credit Transfer System (ECTS)-compatible credit system” and fashioning “a European dimension in quality assurance, with comparable criteria and methods.”

Although the initial focus was largely on student mobility and on delineating undergraduate and postgraduate qualifications, it soon became evident that to realize these goals, courses must first be defined in terms of learner outcomes. Comparability and compatibility are effectively redundant terms, according to the document Bologna Beyond 2010, unless one can first specify precisely what students “know, understand and can do” upon completion of their studies. Similarly, in order to establish an ECTS system, one must first be able to objectively measure and quantify European-wide qualifications. The success of this endeavor lies in the efficacy of the assessment rubric being implemented across the board. This point was raised three years earlier in the London Communiqué of 2007, wherein it states that “a significant outcome of the Bologna Process is the development of more student-centered, outcome-based-learning”. Bologna Beyond 2010 hence takes stock of the process thus far (1999–2010) and offers the following unequivocal assessment: “the success of the Bologna Process depends on the comprehensive implementation of a learning outcomes approach in higher education”.

Explicit learner outcomes must therefore be assessed, and assessment procedures tailored in accordance with the recommendations outlined in the document Qualifications Frameworks in the European Higher Education Area (EHEA), in order to measure the achievement of intended learning outcomes and other program objectives. Based on this logic, a national qualifications framework mirroring the overarching European framework is left with the task of devising a common methodology based on learning outcomes (i.e., knowledge, skills, and competencies descriptors). Several groups, such as the OECD through its AHELO project, have since taken up this challenge and tried to pinpoint exactly what learners know, understand, and can do upon completion of their studies. As the document Standards and Guidelines for Quality Assurance in the European Higher Education Area (2005) points out, the purpose of standards and guidelines is to “make external quality assurance more transparent and simpler to understand for everyone involved”. Consequently, institutions should have “formal mechanisms for the approval, periodic review and monitoring of their awards.” In line with this reasoning, if Europe is to achieve its aspiration to be the most dynamic and knowledge-based economy in the world (Lisbon Strategy), then “higher education will need to demonstrate that it takes the quality of its programmes and awards seriously and is willing to put into place the means of assuring and demonstrating that quality”. The confidence of students and other stakeholders in higher education is more likely to be “established and maintained through effective quality assurance activities which ensure the programmes are well-developed, regularly monitored and periodically reviewed, thereby securing their continuing relevance and currency”. The quality assurance of programs and awards is expected to include “development and publication of explicit learning outcomes” and the “participation of students in quality assurance activities” (n.p.).

The Berlin Communiqué of 2003 proposes a framework of “comparable and compatible qualifications…. which should seek to describe qualifications in terms of workload, level, learning outcomes, competences and profile”. The Bergen Communiqué of 19–20 May 2005 reiterates this stance and pledges to implement references and guidelines to guarantee quality, as suggested in the report of ENQA (the European Association for Quality Assurance in Higher Education), and to introduce national qualifications frameworks. One of the prime objectives of the Leuven/Louvain-la-Neuve Communiqué of 28–29 April 2009 was to “develop student-centered learning outcomes and teaching missions” alongside “multidimensional transparency tools” to generate “comparable data” regarding quality assurance, as championed by the Bologna Process. Evidence of a sustained interest in learner outcomes and quality assurance is also reflected in the Bucharest Communiqué of April 2012, wherein quality is identified as one of its three targets. Quality assurance, and the assessment of same, have thus become the common thread running through the various EU documents on higher education.

4 IRISH HIGHER EDUCATION IN CONTEXT: THE HUNT REPORT

Published in January 2011, the National Strategy for Higher Education to 2030, known colloquially as the Hunt Report after its chair, Dr. Colin Hunt, appeared three and a half years into the most severe economic crisis affecting the state since the 1950s and barely two months after the Irish government accepted a bailout from the International Monetary Fund (Walsh & Loxley, 2014). Within this context, the report identifies higher education as the “cure and restorative” for the imploding Irish economy and explicitly declares education to be “the key to economic recovery in the short-term and to long-term prosperity” (Higher Education Strategy Group, 2011, p. 29). These claims are made on the basis of international policy influences, namely the Lisbon Strategy of 2000 and the OECD review of higher education in 2004, both of which explicitly acclaim how the “purpose of education is to serve the economy”.

The Hunt Report is principally a synthesis of the findings of various EU education commissions and OECD reports on higher education. This is evidenced by the fact that most of its conclusions are underpinned by EU and OECD deliberations since Bologna 1999. In accordance with this principle, the report aims to provide a roadmap for Irish HEIs to continue feeding the knowledge economy. Chaired by economist Dr. Colin Hunt, the report is framed in the context of two previous reports, Powering the Smart Economy (SFI, 2009) and Tomorrow's Skills: Toward a National Skills Strategy (Forfás, 2009). The Smart Economy report pledges to “position Ireland as a location of choice in the International Education market,” by “restructuring the higher education system…to enhance system wide performance” (Higher Education Strategy Group, 2011, p. 6). The Tomorrow's Skills report fundamentally calculates education in terms of skills and, in so doing, effectively reduces the raison d'être of HEIs to that of a commercial transmitter of skills and competencies. Against this backdrop, the Hunt Report incorporates a strong view that “higher education is central to future economic development in Ireland” (p. 3). Its tone, rhetoric and stance are accordingly punctuated with a strong economic fervor.

Like the various EU deliberations discussed earlier, the Hunt Report focuses on the importance of higher education adapting to meet the needs of a changing society. Part of this entails a discussion on how best to proceed in planning for future demand. It explores the role of teaching and learning in HEIs as well as higher education's crucial role in forging links between research and enterprise. The notion of engagement with wider society is considered alongside the internationalization of higher education. Part 3 of the report explores the issue of HEIs' governance, including how to establish a sustainable and equitable funding model going forward.

5 THE NEO-LIBERAL AGENDA IN HIGHER EDUCATION

It will have become evident to the reader that much of the discourse on higher education, its processes, and its outcomes is borrowed from neo-liberal managerialism, a source that has gained significant influence at both policy and institutional levels. Understandably, the omnipresent concerns of accountability, transparency, productivity, and efficiency within HEIs take up much ink in the Hunt Report. These considerations are examined in light of the projection that "the capacity of higher education will almost double over the next 20 years" (Higher Education Strategy Group, 2011, p. 3). This startling projection is corroborated in a subsequent report by the Economic and Social Research Institute (A Study of Future Demand for Higher Education in Ireland, released in 2012), which estimates that potential undergraduate HE entrants would "grow from 41,000 in 2010/2011 to 44,000 in 2019/20 (7%) and to just over 51,000 by 2029/2030." As matters stand, Ireland has 39 publicly funded institutions catering for a population of just 4.5 million: seven universities and 14 institutes of technology, as well as colleges of education and a few niche providers such as the Royal Irish Academy of Music. Put simply, there are now approximately 1330 courses on offer across 45 institutions, including private ones, and the number of courses on offer has trebled over the last 20 years. Given this plethora of courses and the 25% projected increase in third-level students by 2029, it rapidly becomes evident that such growth is unsustainable.

5.1 Efficiency and transparency

The Hunt Report is full of praise for the Irish higher education system and its comparatively efficient use of resources. It notes that Irish HEIs "produce graduates for lower-than-average costs" (p. 40), a statistic based on ECOFIN research examining the efficiency and effectiveness of higher education spending. Similarly, an independent study ranks Ireland highest of all countries surveyed in international recruiters' views of graduate employability, and second highest of 28 countries in the international peer review of graduate quality (Times Higher Education Supplement).

A comparative analysis of tertiary-sector work practices is then presented to generate data about academic workloads across the EU. The report observes that there is a "lack of transparency regarding staff workloads" (p. 41), and various international metrics for quantifying workloads are scrutinized: Finland operates a system specifying the number of hours an academic must be on campus, whilst others, notably Italy, specify how many hours an academic is required to teach. One of the key findings of the report states that "transparency and content of academic contracts needs to be addressed to ensure that productivity is optimized" (p. 41). To remedy this problem, the report argues, Ireland must become "more efficient in how we deploy resources" by realizing "greater economies of scale" and by "rationalizing programmes" (p. 41), offering them in fewer institutions.

5.2 Quality assurance

With regard to quality assurance, the report stresses that a recent European Commission assessment is resoundingly positive: Ireland scores highly on the three indicators used by the EU (Higher Education Strategy Group, 2011), namely (i) stage of development of external quality assurance; (ii) level of student participation; and (iii) level of international participation. Concerns are, however, expressed around "perceived grade inflation," "clarity of learning outcomes," improved "assessment practices," and "better teaching," including provision for access to a wider range of learning resources (p. 42).

5.3 Transparency and accountability

Building on the EU recommendation to write courses in terms of learner outcomes and the OECD's clarion call for an assessment of higher education learner outcomes, the Hunt Report (2011) likewise discusses the merits of learner outcomes as a way of maximizing accountability and ensuring transparency. Whilst acknowledging that "measuring learning outcomes is more difficult than measuring inputs or processes," it concludes that this progression entails "an enhanced capacity for measurement and evaluation" (p. 56). Furthermore, the process of shifting the focus from inputs to learning outcomes, or a learning paradigm as previously postulated by Barr and Tagg (1995), "requires changes in the way academics conceive of and organize their teaching and assessment" (Higher Education Strategy Group, 2011, p. 56). The point is made that the undergraduate curriculum needs to place more emphasis on "generic skills," especially those required for the workplace and active citizenship. Creativity and entrepreneurship need to be nurtured and actively encouraged, in addition to "reflective learning," "applied knowledge," and "heuristics." The report briefly alludes to the AHELO project and its objective of measuring "key generic skills" "such as analytic reasoning, critical thinking, the ability to synthesize, and the practical application of theory" (p. 56).

The Framework Implementation and Impact Study carried out by the National Qualifications Authority of Ireland on the National Framework of Qualifications indicates that international evidence points to "the need to align…learning outcomes with assessment practice" (Higher Education Strategy Group, 2011, p. 57). These recommendations follow a 2010 EU Commission document specifying that HEIs should "ensure alignment and balance between learning outcomes, pedagogy and assessment, and institutions should have the capacity to deliver an outcomes-based approach, in terms of environmental conditions and staff preparedness and expertise" (p. 57). Adam (2006) argues that in the 21st century, learning outcomes are "best viewed as a fundamental building block of the Bologna educational reforms" (p. 3), because they are a "practical device" and represent a "methodological approach" to actively improving the competitiveness, transparency, and mobility of European education (pp. 3–6).

On learner outcomes and quality assurance, the Hunt Report draws upon the European Parliament's 2006 proposal that HEIs should embrace quality assurance internally: a collegial environment embracing a quality culture should make it easier to monitor and enhance quality. But the report also concedes that "external independent quality assurance agencies can carry out this task more effectively" (Higher Education Strategy Group, 2011, p. 85). Irish higher education institutions are nonetheless widely acknowledged to score highly in terms of quality assurance, insofar as they have coherent quality assurance policies and Bologna-aligned external assessment procedures in place. Irish quality assurance systems work in accordance with the European Standards and Guidelines for Quality Assurance; they have a high level of student participation in the governance of quality assurance, and almost all external reviews of institutions and programs involve international participation. Among the report's recommendations are the establishment of a published national student survey and a requirement that each HEI implement a comprehensive anonymous student feedback system.

The key policy concerns at EU, OECD, and national level, which, as we have argued, have been strongly shaped by the neoliberal agenda of recent years, have all been brought to bear on the notion of student learning outcomes at the core of this investigation. It is now apposite to consider precisely how learner outcomes have been shaped by these forces.

6 THE DEBATE OVER LEARNER OUTCOMES: USES AND ABUSES

Learning outcomes are written statements specifying what a learner is expected to know, understand, and/or be able to demonstrate at the end of a period of learning (Adam, 2004). They are explicit statements about the outcomes of education—in other words—the results of learning. Simply put, they are "what a learner knows or can do as a result of learning" (Otter, 1992, p. 1). Outcomes describe what the student actually achieves, as opposed to what the institution intends to teach (Allan, 1996). Therefore, learning outcomes are "principally concerned with the achievements of the learner rather than the intention of the teacher as expressed in the module or course" (Adam, 2004, p. 6). Essentially, learning outcomes are divided into different categories. Firstly, there are subject-specific outcomes that relate to the subject discipline and the knowledge, skills, and competences therein. Secondly, there are generic outcomes (sometimes called transferable or transversal skills) that apply across disciplines, for example, written and oral communication, problem-solving, information technology, and team-working skills (Adam, 2004).

Although learning outcomes have recently been vaulted into prominence, no doubt owing to the importance attributed to them by EU and OECD agendas, their precise origins are decidedly difficult to pinpoint. Most educationalists agree that they grew out of the behaviorist movement in psychology and therefore have much in common with behavioral objectives, and indeed with their descendants, domain-referenced objectives. Adam (2004), in his article "An Introduction to Learning Outcomes," offers a similar appraisal. He loosely traces their origins to the work of Ivan Pavlov (1849–1936) and subsequently to the American "behaviourist school" of psychology developed by J. B. Watson (1878–1958) and B. F. Skinner (1904–1990).

Behaviorism is a school of thought within psychology whose theoretical goal is the prediction and control of behavior. As its pioneer, Watson considered the introspective methods of structuralism to be unscientific. He thus tried to explain human functioning without recourse to mediating variables such as "thought," "feeling," or "introspection," chiefly because they are not directly observable or verifiable. On this basis, Watson concluded that behavior should be the primary focus of psychology. Borrowing heavily from the doctrine of operationalism associated with the logical positivists, the behaviorists held that virtually all behavior can be explained as the product of learning and that all learning consists of conditioning.

Perhaps the most famous figure associated with behaviorism, although he was a physiologist rather than a psychologist, was Ivan Petrovich Pavlov (1849–1936), who was awarded the Nobel Prize in Physiology or Medicine in 1904. He is best remembered for his "conditioning" experiments with dogs and automatic learning. Put simply, he would ring a bell and then administer food; after a period of such "conditioning," the mere sound of the bell would make the dog salivate. Building on this research, Watson and Skinner pioneered the behaviorist approach, which explained human behavior in terms of responses to external stimuli. As a consequence, behaviorism is sometimes referred to as S–R (stimulus–response) psychology.

Behavioral objectives were the mantra of teaching and training in the US during the 1960s, and like learning outcomes, their function was to identify in the clearest possible terms how students might demonstrate their acquisition of knowledge, understanding, attitudes, and skills (Melton, 1996). Statements of objectives were expected to identify, in clear measurable terms, the standards of performance required and the conditions under which such performances were to be observed. As one would expect, many educationalists, seeking to simplify their methods, enrich their teaching, and maximize the learning of their charges, particularly in relation to transparency and accountability, seized on the precepts of behaviorism by assuming that learning could be tailored to fulfill the same criteria. Behaviorism, as championed within the educational sphere, thus began to emphasize the clear identification and measurement of learning and the need to produce observable, verifiable, and measurable outcomes.

Initially, learning outcomes were synonymous with outcomes-based education, a model of educational structuring that involves the clear and explicit identification, statement, and assessment of student learning (Adam, 2004; Allan, 1996; Spady, 1988). Typically, this approach was reserved for second-level education, where curricula subscribed to an outcomes-based system constructed around explicit and detailed student outcome statements describing what a person is expected to know, understand, and/or be able to demonstrate at the end of a period of learning (Adam, 2004). HEIs first tested this model in the USA, Australia, New Zealand, and the United Kingdom. Later, the learning outcomes approach was refined by educational practitioners in Australia, New Zealand, South Africa, Denmark, Sweden, Ireland, and other parts of Europe (Adam, 2004). Within Europe, learning outcomes were identified as a key objective in the Bologna Declaration of 1999, a document signed by 29 ministers of education. As mentioned earlier, further commitments were made pursuant to this declaration to establish a European Higher Education Area (EHEA) by 2010, together with a strong assurance that all higher education programs would be written in terms of learner outcomes by that date. Interestingly, the EHEA adopted the very broad and generic "Dublin Descriptors" as the cycle descriptors for its overarching Bologna qualifications framework; these roughly follow Bloom's taxonomy of learning: knowledge, comprehension, application, analysis, synthesis, and evaluation. To date, 47 countries have endorsed the Bologna Declaration.

Within the sphere of higher education, learning outcomes are the keystone of the Bologna educational reforms. As a practical device, they signify a methodological approach whose objective is to improve the transparency, competitiveness, efficiency, and mobility of European education. Transparency is assured by virtue of the fact that learning outcomes can be demonstrated, tested, and verified. With regard to competitiveness, learner outcomes purport to give an accurate picture of how effectively HEIs achieve their goals; more specifically, can students now do what the HEIs' learner outcomes specify or not? In theory, this allows students to make more informed decisions about the university they choose for their respective courses. Efficiency, too, is more readily gauged via the learning outcomes approach, since students now possess the tools to judge whether the learning experience was optimal.

7 LEARNER OUTCOMES: THE PROBLEM OF SPECIFICATION

Although on the face of things learner outcomes seem a relatively straightforward construct, there is still considerable conceptual confusion regarding the definition of educational intention (Allan, 1996). The actual mechanics of generating statements specifying what a student will learn as a result of engaging with a particular course have morphed into a "minefield of terminological confusion"; some commentators go as far as to say that aims, goals, tasks, objectives, and learning outcomes are often "used freely and indiscriminately." The ambiguity around learner outcomes has reopened the debate regarding the concepts of intention and specificity, and moreover the terminology used to describe them. Deliberations of this nature highlight the lack of clarity around the way learner outcomes are being used in higher education today. Although the widely accepted definition of learner outcomes, referring to what a student knows, can do, or understands as a result of his or her learning, is seemingly quite clear, conceptual problems arise when educationalists start scrutinizing the specifics of individual learner outcomes in terms of measurable behaviors. What does it mean, for example, to specify that students will be able to understand the literary themes underpinning Joyce's Ulysses? Taking up this point, Hussey and Smith (2003) claim the concept of learner outcomes has become so entangled with notions of specificity, transparency, and measurability as to become largely irrelevant to pedagogical practices. Program developers "wrestle with demonstrable, action-oriented verbs, deftly attempting to avoid too often repeating the same formula: by the end of this module, students will be able to demonstrate the ability to apply the concept of alienation to their own experience" (Hussey & Smith, 2003, pp. 357–368).
Other critics of this reductionist approach point out that academic study cannot be confined to a skills-and-competences approach that can be summed up in a box-ticking rubric. Further objections are leveled at the idea that learner outcomes should be the same for everyone, or that notional learning time (the number of hours a specified learning outcome should take) is a realistic construct. This leaves policymakers and theorists with a problem to solve: how can they avoid the specification problem without sacrificing educational integrity for the illusory allure of accountability?

8 CONCLUSION

To conclude: the increasing emphasis on learner outcomes in higher education institutions signifies a fundamental shift in how educational "success" and "quality" are measured. While these metrics aim to enhance transparency, accountability, and quality assurance, they also generate significant challenges. A myopic focus on quantifiable outcomes can undermine the intrinsic values of higher education, such as fostering critical thinking, creativity, free inquiry, and intellectual exploration. Academics argue that education should resist any and all attempts to reduce its processes and outcomes (potential or real) to a commodity measured by market-driven metrics. What this article demonstrates is that the ideological move toward outcomes-based assessment, influenced heavily by OECD and EU policies, reflects a broader, and mistaken, neoliberal agenda that prioritizes economic efficiency over rounded educational experiences. As the Hunt Report and associated policy documents illustrate, this approach risks overshadowing the traditional goals of higher education, among them inward-looking dispositions and skills such as personal growth, self-knowledge, and principles of responsible and effective inquiry and truth-seeking, together with more outward-looking competencies such as social justice capacities, compassion, empathy, and phronetic (wise) judgment. Moving forward, then, it is crucial to balance the demand for measurable outcomes with the need to preserve the core educational values that nurture free inquiry and intellectual diversity. Only by addressing these concerns can higher education institutions live up to their societal and educational duties to nurture and challenge their students effectively, ensuring that they remain not only accountable but also vibrant centers of learning and innovation.

AUTHOR CONTRIBUTIONS

Gerry Dunne: Conceptualization; formal analysis.

CONFLICT OF INTEREST STATEMENT

The author declares no conflict of interest.

ETHICS STATEMENT

No data to declare.

Biography

  • Gerry Dunne is a lecturer in philosophy and history of education at Marino Institute of Education, Trinity College Dublin. He researches mainly in the areas of ethics and epistemology in education.

DATA AVAILABILITY STATEMENT

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
