Evaluation Methods for Non-Experimental Data
Richard Blundell
University College London and Institute for Fiscal Studies
Monica Costa Dias
University College London and Institute for Fiscal Studies
The authors thank Costas Meghir, Barbara Sianesi, John Van Reenen, the editor and a referee for helpful comments. This review was originally prepared for the Department for Education and Employment and is also part of the programme of research of the Economic and Social Research Council (ESRC) Centre for the Microeconomic Analysis of Fiscal Policy at the Institute for Fiscal Studies. Co-funding from the Leverhulme Trust is gratefully acknowledged. The second author acknowledges financial support from Sub-Programa Ciência e Tecnologia do Segundo Quadro Comunitário de Apoio, grant number PRAXIS XXI/BD/11413/97. The usual disclaimer applies.
Abstract
This paper presents a review of non-experimental methods for the evaluation of social programmes. We consider matching and selection methods and analyse each for cross-section, repeated cross-section and longitudinal data. The methods are assessed drawing on evidence from labour market programmes in the UK and in the US.
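As a concrete illustration of the matching approach discussed in the review, the sketch below estimates an average treatment effect on the treated by nearest-neighbour matching on an estimated propensity score, in the spirit of Rosenbaum and Rubin (1983). It is a minimal, self-contained example on simulated data: the data-generating process, parameter values and variable names are all hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical data-generating process: participation depends on an observed
# covariate x, the outcome depends on x and treatment, true effect = 1.0.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                              # observed covariate
p_true = 1 / (1 + np.exp(-0.5 * x))                 # true participation probability
d = (rng.uniform(size=n) < p_true).astype(float)    # treatment indicator
y = 1.0 * d + 2.0 * x + rng.normal(size=n)          # observed outcome

# Step 1: estimate the propensity score P(D=1|X) by logistic regression,
# fitted here with plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(500):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (d - p_hat) / n
p_hat = 1 / (1 + np.exp(-X @ beta))

# Step 2: match each treated unit to the control with the closest score
# (nearest neighbour with replacement).
treated = np.flatnonzero(d == 1)
controls = np.flatnonzero(d == 0)
matches = controls[np.abs(p_hat[treated, None] - p_hat[None, controls]).argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT) is the mean
# treated-minus-matched-control outcome difference.
att = np.mean(y[treated] - y[matches])
print(round(att, 2))
```

With selection on the single observable x, the matched comparison removes the bias that a raw treated–control mean difference would suffer, and the estimate lands close to the true effect of 1.0. The selection (control-function) estimators also reviewed in the paper would instead model the participation decision jointly with the outcome equation.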
REFERENCES
- Ashenfelter, O. (1978), ‘Estimating the effect of training programs on earnings’, Review of Economics and Statistics, vol. 60, pp. 47–57.
- Ashenfelter, O. and Card, D. (1985), ‘Using the longitudinal structure of earnings to estimate the effect of training programs’, Review of Economics and Statistics, vol. 67, pp. 648–60.
- Bassi, L. (1983), ‘The effect of CETA on the post-program earnings of participants’, Journal of Human Resources, vol. 18, pp. 539–56.
- Bassi, L. (1984), ‘Estimating the effects of training programs with nonrandom selection’, Review of Economics and Statistics, vol. 66, pp. 36–43.
- Bell, B., Blundell, R. and Van Reenen, J. (1999), ‘Getting the unemployed back to work: an evaluation of the New Deal proposals’, International Tax and Public Finance, vol. 6, pp. 339–60.
- Blundell, R., Costa Dias, M., Meghir, C. and Van Reenen, J. (2000), ‘Evaluating the employment impact of mandatory job-search assistance: the UK New Deal Gateway’, unpublished manuscript, Institute for Fiscal Studies.
- Blundell, R., Dearden, L. and Meghir, C. (1996), The Determinants and Effects of Work-Related Training in Britain, London: Institute for Fiscal Studies.
- Blundell, R., Dearden, L. and Meghir, C. (1998), ‘Estimating labour supply responses using tax policy reforms’, Econometrica, vol. 66, pp. 827–61.
- Blundell, R. and MaCurdy, T. (1999), ‘Labor supply: a review of alternative approaches’, in O. Ashenfelter and D. Card (eds), Handbook of Labor Economics, Elsevier North-Holland.
- Burtless, G. (1985), ‘Are targeted wage subsidies harmful? Evidence from a wage voucher experiment’, Industrial and Labor Relations Review, vol. 39, pp. 105–14.
- Card, D. and Robins, P. K. (1998), ‘Do financial incentives encourage welfare recipients to work?’, Research in Labor Economics, vol. 17, pp. 1–56.
- Cochran, W. and Rubin, D. (1973), ‘Controlling bias in observational studies’, Sankhyā, vol. 35, pp. 417–46.
- Devine, T. and Heckman, J. (1996), ‘Consequences of eligibility rules for a social program: a study of the Job Training Partnership Act (JTPA)’, in S. Polachek (ed.), Research in Labor Economics, vol. 15, pp. 111–70, Greenwich, CT: JAI Press.
- Eissa, N. and Liebman, J. (1996), ‘Labor supply response to the Earned Income Tax Credit’, Quarterly Journal of Economics, vol. 111, pp. 605–37.
- Fan, J. (1992), ‘Design adaptive nonparametric regression’, Journal of the American Statistical Association, vol. 87, pp. 998–1004.
- Fisher, R. (1951), The Design of Experiments, sixth edition, London: Oliver and Boyd.
- Hahn, J. (1998), ‘On the role of the propensity score in efficient semiparametric estimation of average treatment effects’, Econometrica, vol. 66, pp. 315–31.
- Hausman, J. A. and Wise, D. A. (1985), Social Experimentation, Chicago: University of Chicago Press for National Bureau of Economic Research.
- Heckman, J. (1979), ‘Sample selection bias as a specification error’, Econometrica, vol. 47, pp. 153–61.
- Heckman, J. (1990), ‘Varieties of selection bias’, American Economic Review, vol. 80, pp. 313–18.
- Heckman, J. (1992), ‘Randomization and social policy evaluation’, in C. Manski and I. Garfinkel (eds), Evaluating Welfare and Training Programs, Cambridge, MA: Harvard University Press.
- Heckman, J. (1996), ‘Randomization as an instrumental variable estimator’, Review of Economics and Statistics, vol. 78, pp. 336–41.
- Heckman, J. (1997), ‘Instrumental variables: a study of the implicit assumptions underlying one widely used estimator for program evaluations’, Journal of Human Resources, forthcoming.
- Heckman, J. and Hotz, V. J. (1989), ‘Choosing among alternative nonexperimental methods for estimating the impact of social programs’, Journal of the American Statistical Association, vol. 84, pp. 862–74.
- Heckman, J., Ichimura, H. and Todd, P. (1997), ‘Matching as an econometric evaluation estimator’, Review of Economic Studies, vol. 64, pp. 605–54.
- Heckman, J. and Robb, R. (1985), ‘Alternative methods for evaluating the impact of interventions’, in J. Heckman and B. Singer (eds), Longitudinal Analysis of Labor Market Data, Cambridge: Cambridge University Press.
- Heckman, J. and Robb, R. (1986), ‘Alternative methods for solving the problem of selection bias in evaluating the impact of treatments on outcomes’, in H. Wainer (ed.), Drawing Inferences from Self-Selected Samples, Berlin: Springer-Verlag.
- Heckman, J., Ichimura, H. and Smith, J. (1994), ‘Ashenfelter's dip and the determinants of program participation’, University of Chicago, mimeo.
- Heckman, J., Smith, J. and Clements, N. (1997), ‘Making the most out of program evaluations and social experiments: accounting for heterogeneity in program impacts’, Review of Economic Studies, vol. 64, pp. 487–535.
- Kemple, J., Doolittle, F. and Wallace, J. (1993), The National JTPA Study: Site Characteristics and Participation Patterns, New York: Manpower Demonstration Research Corporation.
- LaLonde, R. (1986), ‘Evaluating the econometric evaluations of training programs with experimental data’, American Economic Review, vol. 76, pp. 604–20.
- Orr, L., Bloom, H., Bell, S., Lin, W., Cave, G. and Doolittle, F. (1994), The National JTPA Study: Impacts, Benefits and Costs of Title II-A, report to the US Department of Labor, 132, Bethesda, MD: Abt Associates.
- Rosenbaum, P. and Rubin, D. B. (1983), ‘The central role of the propensity score in observational studies for causal effects’, Biometrika, vol. 70, pp. 41–55.
- Rosenbaum, P. and Rubin, D. B. (1984), ‘Reducing bias in observational studies using subclassification on the propensity score’, Journal of the American Statistical Association, vol. 79, pp. 516–24.
- Rosenbaum, P. and Rubin, D. B. (1985), ‘Constructing a control group using multivariate matched sampling methods that incorporate the propensity score’, American Statistician, vol. 39, pp. 33–38.
- Rubin, D. B. (1978), ‘Bayesian inference for causal effects: the role of randomization’, Annals of Statistics, vol. 6, pp. 34–58.
- Rubin, D. B. (1979), ‘Using multivariate matched sampling and regression adjustment to control bias in observational studies’, Journal of the American Statistical Association, vol. 74, pp. 318–29.