2007 - Think Tank Review Project



Review of Two Reports:

Report: Are Private High Schools Better Academically Than Public High Schools?

Date: October 10, 2007
Author: Harold Wenglinsky
Think Tank: Center on Education Policy
Report: Monopoly Versus Markets: The Empirical Evidence on Private Schools and School Choice

Date: October 17, 2007
Author: Greg Forster
Think Tank: Milton & Rose D. Friedman Foundation
Think Tank Review:
Date: December 12, 2007
Reviewer: Jaekyung Lee
Institution: State University of New York at Buffalo
Public versus private school achievement gaps in general and the effects of school choice on academic outcomes in particular remain controversial issues. I review two recent reports of empirical studies on this topic: one from the Milton & Rose D. Friedman Foundation (MFF) and the other from the Center on Education Policy (CEP). MFF presents its empirical analysis in the context of the larger policy question about the effect of school choice, whereas CEP simply attempts to answer a research question, with policy implications, about a possible public-private school achievement gap. Both studies contribute new evidence to the existing literature through secondary analyses of national high school student datasets: the Educational Longitudinal Study (ELS) and the National Education Longitudinal Study (NELS). The two reports in tandem provide contrasting views and results regarding private school effects. MFF argues that private schooling is more successful at improving student test scores; CEP argues that public and private schools have relatively equal success. This review provides an independent cross-examination of the two data sources and shows that the public-private high school gaps in math achievement gain scores were almost null (in the NELS) or too small to be practically significant (in the ELS). The findings and conclusions that seem divergent at first glance may therefore be largely due to the reports' differing interpretations rather than to real differences in the results. Both studies could have given more useful guidelines for policy and practice if they had examined reasons for observed gaps (or the lack thereof) between public and private schools.
Review of: "Shortchanging Disadvantaged Students: An Analysis of Intra-district Spending Patterns in Ohio"
Date: September 20, 2007
Authors: Matthew Carr, Nathan Gray, and Marc Holley
Think Tank: The Buckeye Institute
Think Tank Review:
Date: October 17, 2007
Reviewer:
Institution: University of Kansas
The Buckeye Institute report, Shortchanging Disadvantaged Students: An Analysis of Intra-district Spending Patterns in Ohio, argues that high-poverty Ohio school districts can no longer place blame on the State of Ohio for failing to provide financial resources equitably. Rather, the authors argue that inequitable resource allocation across schools within high-poverty districts is the primary cause of remaining poverty-related disparities in student outcomes in Ohio. However, analyses presented by the authors fail to validate that the State of Ohio has allocated financial resources across districts with any greater degree of equity than high-poverty districts have allocated resources across schools. The authors show that many of the 72 high-poverty districts they identify in the state do not systematically allocate more funding to higher-poverty schools. This finding is undermined, however, by numerous well-understood but overlooked factors, including cost differences and poverty-reporting differences by school grade level, as well as very basic issues of sample size. Finally, while the authors contend that poverty-related achievement gaps are a function of within-district resource allocation disparities, they provide no evidence that the achievement gap they measure exists within rather than between districts.
Review of: "End It, Don’t Mend It: What to Do with No Child Left Behind"
Date: September 5, 2007
Authors: Neal McCluskey and Andrew J. Coulson
Think Tank: Cato Institute
Think Tank Review:
Date: October 8, 2007
Reviewer:
Institution: University of California at Berkeley
This new report from the Cato Institute begins with a solid analysis of No Child Left Behind's difficult-to-discern effects on student achievement, concluding that the law has narrowed the curriculum while failing to boost test scores. The report also includes a useful, though one-sided, review of current debates on Capitol Hill, focusing on proposals that the authors believe offer little more than tinkering with the current law. This prompts the question of why major players have yet to back out of this short-term policy quagmire and ask what an effective federal role would look like. Despite this provocative thinking, the authors ultimately fall back on the Cato creed: shrink the central state and expand market choice in every sector of human activity. The report suffers from two key weaknesses. First, the authors ignore historical evidence showing that state-led accountability efforts, extending through the late 1990s, were associated with significant gains in achievement and narrower racial gaps. Rather than asking how Washington might learn from the states' apparent success, the authors infer from NCLB's limitations that any federal education policy will fail. Second, the authors' failure to subject market-based approaches to the same critical analysis applied to NCLB leads them to endorse a very narrow range of policy alternatives.
Review of: "The ABC's of School Choice"
Date: September 2007
Author: The Milton and Rose Friedman Foundation
Think Tank: The Milton and Rose Friedman Foundation
Think Tank Review:
Date: September 24, 2007
Reviewer:
Institution: University of Illinois
A new annual report from the Milton and Rose Friedman Foundation is designed as a resource to provide ammunition for persuading people of the merits of school choice. While there may indeed be a number of reasons to argue for school choice, this handbook shoots blanks. The report provides updated information on thirteen states and the District of Columbia with policies that approximate the Friedman Foundation's voucher-based version of school "choice." While the descriptive compendium of information is mostly accurate and somewhat useful, the report begins and ends with "Frequently Asked Questions," where the Foundation seeks to interpret the research on school choice issues for the lay reader. As might be expected from a voucher advocacy organization such as this, the report relies on a highly selective sub-sample of studies. The research referred to in the report tends toward non-peer-reviewed studies of questionable quality from other advocacy organizations, while the report ignores evidence in these and other higher-quality studies that questions the Foundation's unequivocal support for vouchers. Evidence — particularly on the issue of achievement — is consistently abused in this report, both by misrepresenting individual studies (including those by voucher advocates) and misrepresenting the general body of research on choice. In short, for those hoping to learn more about the issue, this one-sided report does a poor job of even representing only one side of the debate.
Review of: "Portfolios: A Backward Step in School Accountability"
Date: September 2007
Author: Robert Holland
Think Tank: Lexington Institute
Think Tank Review:
Date: September 19, 2007
Reviewer:
Institution: University of Vermont
This self-described "research study" and accompanying press release are intended to influence the debate over the direction of the reauthorization of NCLB, offering a defense of the current test-based accountability system against the inclusion of "multiple measures." The report presents a review of the research on portfolios in large-scale school accountability systems, concludes that portfolio assessment is severely flawed, and then characterizes portfolios as a proxy for all non-test-based measures of student performance. The report has several glaring weaknesses, however. The literature review cherry-picks two studies, both conducted 13 years ago, and on the basis of those studies concludes that portfolios are not reliable and are too expensive for large-scale accountability systems. Yet other large-scale studies of portfolios – some of which are discussed in one of the two studies that the report itself relies on – come to different conclusions but are not examined or even mentioned. An even bigger problem with this new report (which is repeated in the press release), however, is the author’s decision to present portfolios as somehow representative of all non-test-based measures of student performance – which they clearly are not. This results in a document that is of little value for research or policy development.
Review of: "Michigan Higher Education: Facts and Fiction"
Date: June 20, 2007
Authors: Dr. Richard Vedder and Matthew Denhart
Think Tank: Mackinac Center for Public Policy
Think Tank Review:
Date: July 5, 2007
Reviewer:
Institution: University of California, Los Angeles
The findings and conclusions of a new policy brief from the Mackinac Center for Public Policy are poorly grounded and misleading. The report, entitled "Michigan Higher Education: Facts and Fiction," does raise a number of important issues concerning the financing of public universities, but the study is firmly rooted in a strong, ideologically based conceptual framework. Rather than explore how universities have been affected by or responded to state cutbacks and how this resulting behavior affects state economic growth, the report seeks to confirm the authors’ belief that there should be less government involvement in the funding of public universities. The authors narrowly focus on benefits from higher education that accrue to individual students, despite considerable empirical research from other scholars showing societal benefits. The report’s attempt to model the relationship between a state’s spending on public higher education and its economic growth suffers from these and other flaws. In short, the authors grossly overstate their findings, and policy-makers should view with great caution the conclusions drawn and the policy recommendation to reduce state funding for public universities.
Review of: "Answering the Question That Matters Most: Has Student Achievement Increased Since No Child Left Behind?"
Date: June 5, 2007
Author: Center on Education Policy
Think Tank: Center on Education Policy
Think Tank Review:
Date: June 26, 2007
Reviewer:
Institution: University of California at Santa Barbara
A new report released by the Center on Education Policy, "Answering the Question That Matters Most: Has Student Achievement Increased Since No Child Left Behind?" has received a great deal of attention in the press and is likely to be cited often in the upcoming debate on the reauthorization of the No Child Left Behind Act (NCLB). Using states as the unit of analysis, the report concludes that since the implementation of NCLB in 2002, on average, test scores have increased, the achievement gap has narrowed, and achievement gains post-NCLB have increased faster than before NCLB. Despite the report's attempt and intent to carefully analyze the complex issue of test score improvement before and after the implementation of NCLB in 2002, there are some important weaknesses in the analysis that may have resulted in a much more optimistic picture of the impact of the legislation than the data warrant. The report acknowledges several important methodological weaknesses, but other such weaknesses are never mentioned. Among these additional problems are issues of scope, measurement, and selection—all of which ultimately call into question the robustness of the findings, rendering the report’s conclusions far from definitive.
Review of: "School Choice by the Numbers: The Fiscal Effect of School Choice Programs 1990 – 2006"
Date: May 9, 2007
Author: Susan Aud
Think Tank: Milton & Rose D. Friedman Foundation
Think Tank Review:
Date: May 24, 2007
Reviewer:
Institution: University of Kansas
This review considers the recently released study by Susan Aud of the Milton & Rose D. Friedman Foundation, concerning the fiscal effects of school voucher policies. Aud calculates the simple difference between, on the one hand, state and local government spending on students attending traditional public schools, and, on the other, the government spending on children opting for vouchers to private schools. Aud finds a cumulative savings of $444 million over a 15-year period nationwide. Aud’s analysis does confirm an obvious point: if state and local governments subsidize vouchers at a lower rate than public schooling, then, all other things being equal, state and local expenditures will decrease. Aud argues in particular that vouchers offer a win-win scenario for local school districts, suggesting that districts losing students to vouchers may increase spending per pupil on those left behind while, at the same time, decreasing spending overall. This review concludes that Aud’s assumption of increased per-pupil spending by school districts might be true, but the assumption of decreased total budget likely is not. Further, even if state and local governments were, in fact, able to reduce instructional expenses by $444 million over 15 years, this would be merely a drop in the bucket – a savings of less than 1/100th of one percent of annual public school spending, or about 60 cents per child per year.
Review of Two Reports:
Report: State Takeover, School Restructuring, Private Management, and Student Achievement in Philadelphia

Date: February 1, 2007
Authors: Brian Gill, Ron Zimmer, Jolley Christman and Suzanne Blanc
Think Tank: RAND Corporation and Research For Action (RFA)
Report: School Reform in Philadelphia: A Comparison of Student Achievement at Privately-Managed Schools with Student Achievement in Other District Schools

Date: April 10, 2007
Author: Paul Peterson
Think Tank: Program on Education Policy and Governance (PEPG) at Harvard University
Think Tank Review:
Date: May 7, 2007
Reviewer: Derek Briggs
Institution: University of Colorado, Boulder
In 2002 the city of Philadelphia began a policy of restructuring its lowest-achieving elementary and middle schools. Eighty-six schools were included. Restructuring can take on a wide variety of forms, but in Philadelphia the most prominent approaches shifted school management to either the district or one of several private providers. In 2007, after four years of this policy, two research reports were issued, one by RAND in collaboration with Research For Action (RAND-RFA) and one by the Program on Education Policy and Governance (PEPG). Both reports examined whether any positive effects on the math and reading achievement of students could be attributed to privately managed schools, district-managed schools, or neither. According to the RAND-RFA report, private management has had no cumulative effect on math or reading achievement, while district management has had a positive effect on math achievement but no effect on reading. According to the PEPG report, private management has had a positive effect on the percentage of students reaching "Basic" levels of performance in math and reading, while district management has generally had no effect. The different findings from the two reports can largely be explained by the fact that PEPG did not have the same access to data as did RAND-RFA. PEPG also analyzed data using a different methodological approach than did RAND-RFA, due in large part to the data limitations. This review identifies and describes methodological weaknesses in the report from RAND-RFA as well as in the PEPG report. On balance, while the RAND-RFA study appears to better capture the overall effects of Philadelphia's reform than does the PEPG study, it does not differentiate effects between the elementary and middle school grades. Further analysis and research are needed before drawing any definitive conclusions.
Review of: "How Much Are Public School Teachers Paid?"
Date: January 31, 2007
Authors: Jay P. Greene and Marcus A. Winters
Think Tank: Manhattan Institute
Think Tank Review:
Date: February 19, 2007
Reviewers: Sean P. Corcoran and Lawrence Mishel
Institution: New York University and the Economic Policy Institute
The Manhattan Institute report, How Much Are Public School Teachers Paid?, uses hourly earnings from the 2005 National Compensation Survey (NCS) to contend that teachers are better paid than most white-collar professionals, including many in occupations commonly understood to be quite lucrative. The report relies on hourly earnings data in an attempt to provide an apples-to-apples comparison of pay for a standard unit of work. Unfortunately, this approach is fundamentally flawed because the NCS calculation of weeks and hours worked is very different for teachers and other professionals. In fact, the Bureau of Labor Statistics — which publishes the NCS — has explicitly warned its users not to use hourly rates of pay in this exact same context. It is unclear why the authors of this report have apparently chosen to ignore that warning, but what remains is a measure of compensation that is of very little use in informing policy discussions of teacher pay.
Review of: "Whole language high jinks: How to tell when 'scientifically-based reading instruction' isn't"
Date: January 29, 2007
Author: Louisa Moats
Think Tank: Thomas B. Fordham Institute
Think Tank Review:
Date: February 14, 2007
Reviewer: Richard Allington
Institution: University of Tennessee - Knoxville
In Whole language high jinks: How to tell when 'scientifically-based reading instruction' isn't, Louisa Moats contends that she provides "the necessary tools to distinguish those [programs] that truly are scientifically based... from those that merely pay lip service to science" (p. 10). This review finds that Moats exaggerates the findings of the National Reading Panel (NRP), especially the effects of systematic phonics on reading achievement. She also ignores research completed since the NRP report was issued seven years ago. Perhaps most disturbingly, she primarily touts commercial curriculum products distributed by her employer – products that have far fewer published studies of effectiveness than the products and methods she disparages.

These flaws pervade the report's subsequent discussion of what "scientifically based reading instruction" should look like. In the end, the Fordham report works more effectively as promotional material for products and services offered by Moats's employer, SoprisWest, than as a reliable guide to effective reading instruction.
Review of: "Report Card on American Education"
Date: November 2006
Author: Andrew T. LeFevre
Think Tank: American Legislative Exchange Council
Think Tank Review:
Date: January 8, 2007
Reviewer: Gene V Glass
Institution: Arizona State University
The "Report Card on American Education," published by the American Legislative Exchange Council, uses poor and misleading methods to draw some very controversial findings. The report presents readily available statistics to generate hundreds of tables and figures concerning each state's education "inputs," "outputs," and demographics. Interspersed among these tables are a mere dozen pages of analysis intended to support the conclusion, in the words of ALEC Executive Director Lori Roman, that per-pupil spending increases, pupil-to-teacher ratio reductions and raises for teachers "...are not going to make the difference in raising American student achievement to international standards. Empowering parents will" (p. 1). But ineptness and naiveté in measurement and data analysis have thwarted any attempt to legitimately derive such conclusions.