This introductory paper is structured in seven sections, of which this is the first. The remaining sections provide an overview of the themes addressed by the special issue and introduce the papers featured within. The paper, aimed at experienced researchers in the field, provides a comprehensive overview of frontier efficiency measurement techniques and their application in the education context to date. A unique feature of this review compared to previous ones (for example Worthington; Johnes; Emrouznejad et al) is that it bridges the gap between the parametric (generally in the form of regression or SFA) education economics literature and the non-parametric (typically in the form of DEA) efficiency literature.
This is indeed a useful contribution, and it draws out hitherto unremarked connections between themes in the two strands of literature. This paper provides an excellent resource to researchers in the field as it covers studies based on various levels of analysis (individual students, institutions and nations), identifies the datasets and measures of inputs and outputs which have been used in past papers, and details the possible non-discretionary or environmental variables which are relevant in education studies.
Discussion of methodological concerns revolves around endogeneity and its sources, in particular omitted variable bias, measurement error, selection bias and simultaneous causality issues.
This leads to a discussion and comparison of each of these problems in the parametric and non-parametric contexts. The non-parametric efficiency literature is criticised for largely ignoring the possible detrimental effects of endogeneity on efficiency estimates while devoting too much energy to minor methodological details. A particular contribution of the review concerns the links made between parametric and non-parametric approaches in four cases.
First, matching analysis is compared to conditional efficiency. Second, quantile regression is related to partial frontiers. Third, difference-in-difference analysis is compared to meta-frontier analysis. Fourth, it is noted that value added studies are more prevalent in the economics of education literature than in the efficiency literature, where they are relatively rare. Mutual benefits, it is argued, could be gained in each of these four areas if researchers in one field learnt from those in the other.
There are several reasons why authors may avoid cross-country efficiency analyses. First, comparable data at national level can be difficult to obtain. Second, an assumption underlying frontier estimation is that the units of assessment face the same production conditions and technology. This assumption is difficult to maintain in a cross-country framework, especially where the sample of countries is particularly diverse. The heterogeneity of country technologies and education policies may therefore hinder the comparability of the results; at the same time, however, cross-country analysis is the only way to compare and benchmark educational policies across countries.
Efficiency of primary schools is assessed through an order-m non-parametric approach using a single output (average results in the PIRLS Reading test) and inputs relating to the prior achievement of students and to school resources such as teachers, computers and instructional hours. The importance of the environment in which schools operate is stressed in this paper and taken into account in a second-stage analysis, where country and school contextual factors are considered to account for the heterogeneity of countries and schools.
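To give a flavour of the order-m idea, the sketch below implements its simplest univariate, output-oriented form by Monte Carlo: rather than benchmarking a school against the full frontier, each replication draws m peers at random from those using no more input, so a single outlying school cannot dominate the comparison. This is only an illustrative sketch; the function name, the tiny dataset and the choices m=20 and 500 replications are ours, not the paper's.

```python
import random

def order_m_output_eff(x0, y0, data, m=20, reps=500, rng=None):
    """Monte Carlo order-m output efficiency (single input, single output).

    Each replication draws m peers (with replacement) from the units
    using no more input than x0 and records their best output.  The
    score is the expected best-peer output divided by y0: values above
    1 indicate scope to expand output, while values below 1 are
    possible (the unit beats a random sample of m peers), which is how
    order-m mitigates sensitivity to outliers.
    """
    rng = rng or random.Random(0)
    peers = [y for x, y in data if x <= x0]
    if not peers:
        return 1.0
    total = 0.0
    for _ in range(reps):
        draw = [rng.choice(peers) for _ in range(m)]
        total += max(draw) / y0
    return total / reps

# Hypothetical (input, output) observations, e.g. hours vs test score
data = [(30, 480), (32, 510), (35, 530), (40, 525), (45, 560)]
score = order_m_output_eff(40, 525, data, m=20)
```

As m grows, the score converges to the full-frontier (FDH) measure; small m keeps the comparison robust to extreme observations.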
The way money or funding is allocated, however, is a means by which governments can improve equity between schools facing different environments (typically a harsher environment is one where the percentage of economically and culturally disadvantaged students is higher). These papers represent a timely contribution to the literature given the current interest in the allocation of funding in Europe in response to the economic crisis (see European Commission for the various funding mechanisms of public sector schools).
In England, for example, the Government has recently produced a consultation document on the funding of schools (Department for Education). A major part of the proposal is a move away from block funds allocated to schools on the basis of historical costs and towards a funding mechanism which removes inequities by allocating a lump sum to schools and incorporating a national mechanism for dealing with the extra costs faced by low-population areas with small schools.
In Portugal, a new formula for the financing of higher education institutions was put forward in July, but public primary and secondary schools are still financed based on approved budgets.
The case of the Netherlands is analysed in this special issue by Haelermans and Ruggiero, where it is shown that schools in harsher environments do indeed receive extra funds; however, the excess funding does not compensate for the excess costs of achieving acceptable standards (the authors derive the cost required for schools to achieve a certain standard of performance deemed acceptable). In Weber et al's paper, the authors also tackle financing issues, this time in the US (school districts in Texas), linking these with equity issues.
The main results show that policies that reduce inefficiency tend to enhance equity as well.
The paper also suggests that weighted student funding may be a way to reduce inequalities, but cautions that for inefficient schools an enhanced budget may not resolve their inefficiencies and inequalities. That is, there are winners (schools that would see their budgets increase under weighted student funding) and losers (schools that would see their budgets shrink), but extra funds will eventually only benefit efficient schools, which are more able to use the extra resources efficiently.
This paper therefore links three important issues in education: funding, efficiency and equity (see also Woessmann for links between efficiency and equity of schools in the EU).
As noted earlier, education (including higher education) contributes to economic growth; higher education also receives public funding in many countries, and so it is important to understand productivity growth in universities. The study is based on universities in Norway over a period of years.
With only a small number of exceptions, previous studies of higher education productivity growth (Flegg et al; Carrington et al; Johnes; Worthington and Lee; Kempkes and Pohl; Margaritis and Smart) rely on point estimates of productivity change.
This study, however, applies a bootstrap procedure (Simar and Wilson) for the Malmquist productivity index (MPI) which takes into account sampling variation. It differs from Parteka and Wolszczak-Derlacz, which also applies bootstrap methods in the MPI context, in that it (i) derives and examines the components of the MPI and (ii) visually inspects productivity change in the context of labour input changes.
The production relationship is defined with 2 inputs and 4 outputs. The initial analysis of the components of the MPI (catch-up and frontier shift) suggests that the two measures move in parallel initially, after which frontier shift grows markedly while the catch-up measure gradually deteriorates. Productivity change distributions for each university over time are examined in three time blocks and reveal a general picture that the group of institutions with significant productivity decrease is shrinking while the group with productivity increase is expanding.
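The catch-up/frontier-shift decomposition mentioned above can be written down compactly once the distance-function values are known. The sketch below is the standard textbook decomposition with invented distance values, not output from the paper's bootstrap; conventions for distance functions vary, so treat it as one common parameterisation in which MPI above 1 signals productivity growth.

```python
from math import sqrt

def malmquist(d1_t1, d1_t2, d2_t1, d2_t2):
    """Decompose the Malmquist productivity index.

    d{s}_t{t} is the efficiency (distance) of the period-t observation
    measured against the period-s frontier.  Returns (catch-up,
    frontier shift, MPI), with MPI = catch-up * frontier shift.
    """
    catch_up = d2_t2 / d1_t1                            # efficiency change
    frontier = sqrt((d1_t2 / d2_t2) * (d1_t1 / d2_t1))  # technical change
    return catch_up, frontier, catch_up * frontier

# Hypothetical distance values for one university across two periods
ec, tc, mpi = malmquist(d1_t1=0.80, d1_t2=0.95, d2_t1=0.70, d2_t2=0.85)
```

Here both components exceed 1: the unit has moved closer to its contemporaneous frontier (catch-up) while the frontier itself has shifted outwards (frontier shift), mirroring the pattern described for the later periods of the Norwegian sample.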
The authors note that it would be interesting to extend the study to examine the relationship between size and productivity growth and, in particular, the question of whether merging institutions might increase productivity; the effect of merging on both efficiency and productivity is largely unresearched (Johnes). While there are some mergers in this dataset, the small number precludes a more detailed study at present, but this might become possible as the database grows.
Much of the extant literature on efficiency and frontier estimation in higher education focuses on the university or the department as the unit of assessment (exceptions include Dolton et al; Johnes; and Barra and Zotti, whose empirical analyses are at the student level, and Colbert et al, who examine efficiency in the context of MBA programmes). Two papers in this special issue are distinct in that one (Thanassoulis et al) uses student feedback to assess the performance of individual tutors, while the other (Sneyers and De Witte) uses student satisfaction in a model with both graduation and dropout rates to examine efficiency at programme level.
The paper by Thanassoulis et al deals with the assessment of the teaching efficiency of academic staff. The method it proposes combines the Analytic Hierarchy Process (AHP) and DEA in order to arrive at an overall assessment of a tutor reflecting their performance in teaching. Since a teacher normally also carries out research, the method further allows the teacher to be assessed given their performance in research.
A crucial feature is that the teaching dimension reflects the value judgements made by the students at the receiving end of the teaching. This is a key point of departure of this study from previous studies in this area. The basic premise is that students, depending perhaps on gender, career aspirations and type of course, may place different weights on different facets of teaching.
The different weights are then used in the computation of a mean aggregate score on teaching per tutor, which is operationalized by AHP (Saaty). The aggregate grade or grades on teaching, along with measures of the research output of the tutor, are then used as outputs in a DEA model, set against the salary and teaching experience of the teacher. The authors illustrate their approach using real data (modified for confidentiality) on these variables for teachers at a Greek university.
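The AHP step of such a scheme can be sketched in a few lines: pairwise comparisons among teaching attributes are placed in a reciprocal matrix whose principal eigenvector gives the attribute weights, which then aggregate the students' ratings into a single teaching score per tutor. The attribute names, comparison values and ratings below are invented for illustration; this is not the paper's actual instrument.

```python
import numpy as np

# Hypothetical pairwise comparisons among three teaching attributes
# (clarity, feedback, organisation): A[i, j] says how many times more
# important attribute i is than attribute j in a student's judgement.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def ahp_weights(A, iters=100):
    """Principal-eigenvector AHP weights via power iteration,
    normalised to sum to one (Saaty's eigenvector method)."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

w = ahp_weights(A)
ratings = np.array([4.2, 3.8, 4.5])  # a tutor's mean rating per attribute
score = float(w @ ratings)           # weighted aggregate teaching score
```

In a full AHP-DEA assessment, a score like this would enter the DEA model as one output alongside research measures, against inputs such as salary and teaching experience.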
The DEA model is solved to estimate the scope for improving the performance of the teacher depending on the relative emphasis given to teaching versus research. It is noteworthy that similar results are obtained for the estimated scope to improve teaching whether emphasis is placed solely on teaching or equally on teaching and research. This suggests teaching and research are largely separable, and poor teaching performance is not generally compensated for by good performance in research.
Information of this type can be useful to a teacher in terms of setting aspiration levels for improvement in teaching, depending on whether the tutor is to focus on teaching alone or on teaching and research. The paper by Sneyers and De Witte, in this special issue, addresses the use of first-year student dropout rates, programme quality ratings and graduation rates as indicators of university performance for the distribution of funding.
Is it possible, for example, to perform well along all three dimensions simultaneously? Given that dropout rates at the end of the first year at university could actually be a means of selecting the best and most motivated students to go forward, it is important to examine graduation rates and quality rating given the first-year student dropout rate.
Specifically, the paper compares programmes on graduation rates and quality ratings conditional on first-year dropout rates and examines the programme and institutional characteristics which underpin the performance. The paper is original in two ways.
First, the level of analysis is the programme rather than, for example, the institution or department. Second, the paper applies a non-parametric conditional efficiency method with continuous environmental variables (Cazals et al; Daraio and Simar) and extends this to also include discrete environmental variables (De Witte and Kortelainen). The significance of the effects of environmental variables on performance at programme level can be derived using this approach.
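The intuition behind conditional efficiency can be sketched very simply: a unit is benchmarked only against peers operating in a comparable environment. The code below uses a crude uniform-kernel window over an environmental variable as a stand-in for the kernel smoothing of Daraio and Simar; the data, the single environmental variable (a dropout rate) and the bandwidth are all invented for illustration.

```python
def fdh_output_eff(unit, peers):
    """Output-oriented FDH score for `unit` = (inputs, outputs): the
    best radial output expansion achievable by an observed peer that
    uses no more of every input than the unit does."""
    x0, y0 = unit
    best = 1.0
    for x, y in peers:
        if all(xi <= x0i for xi, x0i in zip(x, x0)):
            best = max(best, min(yi / y0i for yi, y0i in zip(y, y0)))
    return best

def conditional_fdh(unit, z0, data, h):
    """Conditional efficiency: benchmark only against peers whose
    environmental variable z lies within bandwidth h of z0 (a uniform
    window standing in for proper kernel weighting)."""
    peers = [(x, y) for x, y, z in data if abs(z - z0) <= h]
    return fdh_output_eff(unit, peers)

# Hypothetical programmes: (inputs, outputs, first-year dropout rate z)
data = [((10,), (60,), 0.10), ((10,), (75,), 0.12),
        ((9,), (80,), 0.30), ((11,), (70,), 0.28)]
unit, z0 = ((10,), (60,)), 0.10
cond = conditional_fdh(unit, z0, data, h=0.05)   # low-dropout peers only
uncond = conditional_fdh(unit, z0, data, h=1.0)  # all peers
```

The conditional score is no worse than the unconditional one here because high-dropout programmes, which happen to dominate the frontier, are excluded from the comparison set; comparing the two scores is precisely how the effect of the environment on performance is read off.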
The study employs a rich dataset for universities in the Netherlands. The authors find considerable variation in the extent to which the first-year dropout rate, and the selectivity which it implies, translates into a positive effect on graduation rates and programme quality ratings. Some programmes are found to be inefficient in terms of their graduation rates and quality ratings given the incidence of first-year dropout, and could learn from the practices characterising the efficient programmes. There is clear evidence of programme characteristics which influence graduation rates and quality ratings.
These results therefore have clear policy implications, including, for example, that policies formulated at programme level would have higher impact than those formulated at institution level. The paper by Mayston addresses the issue of incorrectly assuming convexity of the production possibility set (PPS) in DEA, as can happen in assessments in the education context. The question is of course not new, and many authors have questioned the assumption of convexity in DEA in general. For example, Farrell notes that indivisibilities in production or economies of specialisation could lead to a non-convex PPS.
Free Disposal Hull (FDH) technologies, introduced by Deprins et al, can be deployed to measure efficiency, set targets for performance, and so on. One such application assesses container ports on efficiency, where inputs take the form of indivisible capital items such as numbers of berths, gantry cranes and straddle carriers.
They conclude that the FDH method in some cases fails to set demanding targets and can make units appear efficient simply for lack of comparators. Its advantage is that when units are not efficient, the benchmarks exist in real life, so they can be used as role models for less efficient units to emulate.
DEA under the assumption of a convex PPS, on the other hand, is more discriminating in terms of efficiency and so better for setting more challenging performance targets. This, however, can be at the expense of using virtual rather than real units as role-model benchmarks for inefficient units. The Mayston paper argues that, in the specific context of DEA assessments of comparative efficiency in education, convexity may not hold because outputs have a quality dimension in a way that differs from output quality in other contexts.
In addition, lack of convexity can arise both because physical capital assets such as lecture theatres and libraries are indivisible and because intangible assets in the form of knowledge specialisation by academics can also lead to indivisibilities in efficient research output.
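The FDH-versus-convex-DEA contrast discussed above can be made concrete with a tiny single-input, single-output example (the data are invented): a unit with no observed dominator is scored efficient by FDH, while convex (VRS) DEA benchmarks it against a virtual mix of two real units and finds substantial scope for output expansion.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([1.0, 2.0, 3.0, 1.5])   # single input per unit
Y = np.array([1.0, 4.0, 5.0, 1.0])   # single output per unit

def fdh_score(k):
    """Output-oriented FDH: best observed output among units using no
    more input than unit k, relative to unit k's own output."""
    mask = X <= X[k]
    return float(Y[mask].max() / Y[k])

def dea_vrs_score(k):
    """Output-oriented VRS DEA: maximise phi such that a convex
    combination of observed units produces at least phi*y_k using at
    most x_k.  Decision variables are [phi, lambda_1..lambda_n]."""
    n = len(X)
    c = np.zeros(n + 1); c[0] = -1.0              # minimise -phi
    A_ub = np.vstack([
        np.concatenate(([Y[k]], -Y)),             # phi*y_k - l.Y <= 0
        np.concatenate(([0.0], X)),               # l.X <= x_k
    ])
    b_ub = np.array([0.0, X[k]])
    A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)  # sum l = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

# Unit 3 (x=1.5, y=1) has no observed dominator, so FDH scores it
# efficient; convex DEA benchmarks it against a 50/50 virtual mix of
# units 0 and 1 (x=1.5, y=2.5) and finds scope to expand output 2.5x.
fdh, dea = fdh_score(3), dea_vrs_score(3)
```

The virtual benchmark (half of unit 0 plus half of unit 1) does not exist in reality, which is exactly the trade-off the text describes: sharper discrimination and tougher targets, at the cost of role models that no real unit embodies.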