Educational Vouchers: A Review of the Research

 

by
Alex Molnar

 

Center for Education Research, Analysis, and Innovation
School of Education
University of Wisconsin-Milwaukee
PO Box 413
Milwaukee WI 53201
414-229-2716

 

October 1999

 

 

CERAI-99-21

 

Educational Vouchers: A Review of the Research 
October 1999
CERAI-99-21

Alex Molnar
Professor, Department of Curriculum and Instruction, University of Wisconsin-Milwaukee 

This document combines excerpts from two reports: "Smaller Classes -- Not Vouchers -- Increase Student Achievement" (Harrisburg, Pa.: Keystone Research Center, March 1998); and "Smaller Classes and Educational Vouchers: A Research Update" (Harrisburg, Pa.: Keystone Research Center, June 1999). Both documents are available on the website of the Center for Education Research, Analysis, and Innovation at http://www.uwm.edu/Dept/CERAI

 

Table of Contents - Excerpt 1
Historical Background
Educational Choice Enters the Mainstream
The Battle Over Vouchers Today
The Milwaukee Parental Choice Voucher Program
The Debate Over the Achievement Effect of the Milwaukee Voucher Program
Box 3: Public vs. Private Schools
Why Different Researchers Reach Different Conclusions
The Witte Evaluations
Box 4: Sorting through the Conflicting Voucher Results
The Greene, Peterson, and Du Evaluation
Box 5: When are Significant Results Not So Significant?
The Rouse Evaluation
Milwaukee’s Private Voucher Program -- PAVE
Box 6: A Case Example of the Relative Cost and Performance of Public and Private Schools

The Cleveland Scholarship and Tutoring Program (CSTP)
Vouchers, Values, and Educational Equity
Box 7: Does Money Matter? School Spending and School Outcomes
References

 

Table of Contents - Excerpt 2
The Argument Over Vouchers
The Milwaukee Parental Choice Voucher Program
The Achievement Effects of the Milwaukee Voucher Program

The Cleveland Scholarship and Tutoring Program (CSTP)
Official Evaluation Results for CSTP
Private Voucher Programs
Private School Vouchers (cont'd)
Vouchers and Educational Equity
References

Box 4: Sorting through the Conflicting Voucher Results

To help readers avoid getting lost in the technical summary of the Milwaukee voucher research, the list below distills what the research tells us.

Disagreement exists about whether the voucher program generates better outcomes than the Milwaukee Public School (MPS) system. Two of the three research teams find no positive outcomes in reading. Two of the three teams find positive outcomes in math.

The evaluations all deal with small samples. Many students drop out of the experiment, possibly on a non-random basis. These data deficiencies should be kept in mind when interpreting the results.47

The parents of voucher applicants have more education and higher expectations than parents of most Milwaukee Public School students. Wherever they attend school, the children of such parents may improve over time compared to other students.

Students in a group of public schools with small classes outperform Choice students (according to the only analysis that looks at different groups within the MPS system).

Lacking the necessary data, the evaluations cannot look at the educational process inside the Choice schools. They cannot explain what lies behind any differences in performance between Choice and MPS schools or among the Choice schools.

Over 80 percent of Milwaukee voucher students attended just three schools with established reputations. At best, the experiment tells us something about how these particular private schools compare with Milwaukee public schools, as a group. It indicates nothing about the impact of larger-scale voucher programs.

Taking account of these differences requires including in the analysis only students for whom there are complete data, which exacerbates the problem of sample size. Witte’s overall conclusion: there is no academic advantage for students attending Choice schools. He finds a small, non-significant advantage for Milwaukee Public Schools in reading.48

 

The Greene, Peterson, and Du Evaluation

Greene, Peterson, and Du (GPD) argue that, when Witte compares Choice and MPS students, his controls for family and individual characteristics are inadequate.49 Therefore, GPD choose a method different from Witte's.50 They compare Choice students to students who applied to but did not get into Choice schools. The Milwaukee voucher law required that each participating school randomly select its successful voucher applicants. GPD therefore consider a comparison of successful and unsuccessful applicants to be akin to a natural experiment comparing two otherwise identical groups. In their view, differences that may exist between students do not have to be controlled for because random assignment assures that differences will be evenly distributed across the groups being compared. 

Several factors mar their natural experiment, however:

First, no one has examined whether Choice schools actually selected randomly. (In response to this point, GPD show the prior test scores and family characteristics of the two groups to be similar "in essential respects."51 )

Second, siblings of children already enrolled in Choice schools were guaranteed places without going through the lottery.

Third, since lotteries took place at the school level, each school’s group of Choice students has its own control group of rejected applicants.

The available data, however, do not indicate the particular Choice school to which unsuccessful applicants sought admission. To model the lottery process, GPD therefore assume that Hispanic students applied to the predominantly Hispanic school and that African-Americans applied to one of the two other schools with large numbers of voucher recipients. This technique leads GPD to leave white students out of the analysis. 

Aside from questions about the randomness of the original selection process and the difficulties of modeling it, a number of other problems result from GPD’s reliance on unsuccessful Choice applicants as a comparison group. First, only a relatively small number of applicants failed to get into the voucher program each year (see Table 1). Moreover, many of these applicants dropped out of the Milwaukee Public Schools by the third or fourth year of the program, aggravating GPD’s sample size problems. The largest number of Choice students analyzed by GPD in the third year is 310, with only 86 in the control group. By the fourth year, the largest number of Choice students analyzed by GPD is 110, with only 26 in the control group. This makes the estimated effects unusually sensitive to a few very high or low scores. 
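The sensitivity of small samples can be illustrated with a brief sketch (the figures below are invented for illustration and are not the actual Milwaukee data; only the group size of 26 comes from the text). With so few control students, changing a handful of extreme scores shifts the group mean by several percentile points:

```python
import random

random.seed(0)

# Hypothetical percentile-rank scores for a control group of 26 students
# (illustrative values only -- not the actual Milwaukee data).
control = [random.gauss(40, 20) for _ in range(26)]
control = [min(max(s, 1), 99) for s in control]  # clamp to valid percentile range

baseline_mean = sum(control) / len(control)

# Replace the three lowest scores with near-zero values, as might happen
# if a few students effectively did not attempt the test.
altered = sorted(control)
altered[:3] = [1, 1, 1]
altered_mean = sum(altered) / len(altered)

print(f"mean before: {baseline_mean:.1f}")
print(f"mean after : {altered_mean:.1f}")
print(f"shift from 3 of 26 scores: {baseline_mean - altered_mean:.1f} percentile points")
```

In a group of several hundred students, the same three scores would barely move the average; in a group of 26, they can change the estimated effect substantially.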

As Witte and Rouse note, moreover, unsuccessful Choice applicants who returned to the Milwaukee Public Schools are not only a smaller group over time, they may also be progressively less representative. In part because of the availability of a privately funded voucher program (see the discussion of PAVE below), many unsuccessful applicants found the resources to leave MPS. Those remaining in MPS may constitute an atypical, low-performance sub-group, particularly in years three and four. Consistent with this possibility, after four years, the family income of unsuccessful Choice applicants remaining in the MPS system is over $6,500 below that of unsuccessful applicants who leave MPS. The parental education of those still in MPS also falls slightly below that of the group who left.52

While unsuccessful applicants may be a low-performance group, the opposite may be true of those left in Choice schools in later years. (This problem plagues Witte’s analysis as well as GPD’s.) GPD themselves report evidence that voucher students who remain in the program are an unrepresentative, high-performance group (see the last part of Box 5). University of California-Berkeley Professor Bruce Fuller suggests that drawing a conclusion from looking at students left in Choice schools would be like determining the effects of smoking by only tracking smokers who didn’t die.53

Comparing Choice students to unsuccessful Choice applicants, GPD report that, after three or four years in the Choice program, students begin to show higher levels of performance. In math, GPD report 5 and 11 percentile rank differences in the third and fourth years.54 Reading scores of Choice students exceed those of unsuccessful applicants by 2 to 5 percentile ranks. GPD say that the delay before math and reading scores improve may result from the time it takes students to accustom themselves to a new school and its academic program.

When GPD take account of students’ individual characteristics on which they have data, their results achieve conventional levels of statistical significance once and approach significance six other times. GPD maintain, as noted earlier, that random selection of voucher recipients from each school’s applicant pool means that there is no need to control for individual characteristics. While random assignment does mean that individual characteristics should not make much difference, it does not justify excluding them. GPD counter that the lack of statistical significance of their results (once they include background characteristics) results not from any reduction in the positive impact of Choice schools, but rather from a reduction in the sample size because the data do not contain complete information on individual characteristics for all students. 
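The interplay between sample size and statistical significance that GPD invoke can be sketched as follows (the effect size and standard deviation below are hypothetical; only the group sizes are taken from the text). The same estimated effect can clear the conventional significance threshold in a larger sample and miss it in a smaller one, because the standard error grows as the sample shrinks:

```python
import math

# Illustrative numbers (not GPD's actual estimates).
effect = 5.0  # hypothetical percentile-point advantage for Choice students
sd = 20.0     # hypothetical standard deviation of scores

def t_stat(effect, sd, n_treat, n_control):
    """Two-sample t statistic, assuming equal variances in both groups."""
    se = sd * math.sqrt(1 / n_treat + 1 / n_control)
    return effect / se

# Year-3 group sizes (310 Choice, 86 control) vs. year-4 sizes (110, 26).
t_full = t_stat(effect, sd, 310, 86)
t_reduced = t_stat(effect, sd, 110, 26)

print(f"t with larger sample : {t_full:.2f}")     # above 1.96: significant at 5%
print(f"t with reduced sample: {t_reduced:.2f}")  # below 1.96: not significant
```

This is the shape of GPD's argument: dropping students with incomplete background data can turn a significant result into a non-significant one even if the estimated effect itself does not shrink.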

In 1997, following GPD's analysis, Witte himself looked at the performance of unsuccessful Choice applicants.55 In reading, he finds, Choice students perform no differently than unsuccessful applicants. In math, like GPD, Witte finds that Choice students do better than unsuccessful applicants, especially in the third and fourth years in the program. Witte, however, discounts the value of these results because 52 percent of unsuccessful applicants did not return to MPS, so no test scores are available for them. He argues that the remaining unsuccessful applicants do not constitute a random sample of unsuccessful applicants. Witte also questions his math results because his total sample for this comparison includes only 85 students who had been in the Choice program four years, and only 27 unsuccessful applicants. Moreover, the achievement difference can be accounted for by the scores of only five unsuccessful applicants who did not appear to answer any of the test questions. When Witte eliminates the scores of the lowest-scoring group of students (five unsuccessful applicants and two Choice students), he finds that the math effect is no longer statistically significant. Moreover, the unsuccessful applicants did even more poorly against a random group of MPS students than against Choice students.

Based on their results, Greene, Peterson, and Du speculate that vouchers, if generalized and extrapolated to all white and minority students in the United States, would eliminate most of the achievement gap between white and minority students in reading and erase it altogether in math. It is not clear on what grounds GPD base this speculation because they exclude all white students from their analyses. 

Greene, Peterson, and Du’s overall conclusion: participation in the Milwaukee Parental Choice Program confers academic achievement advantages in reading and in math that are cumulative and that first appear after three years in the program. 
