Randy L. Hoover, Ph.D.
The Ohio School Report Card (OSRC), 2000, correlates with the Presage Factor (r = 0.78) nearly as strongly as the 1997 OPT district performance does (r = 0.80). Practically speaking, the two are virtually the same. This means that the OSRC carries with it the same advantagement-disadvantagement bias.
What these findings tell us is that to a very significant degree (conservatively, about 60% of the variance, given r = 0.78), the OSRC reports the socio-economic living conditions of the district, not the academic growth of the pupils or the effectiveness of the educators in the district. The OSRC is open to the old computer adage of "garbage in, garbage out": the fundamental unit of assessment that drives the OSRC ratings is the percentage of the district's pupils passing the OPT, and if that unit of assessment is flawed, so are the cumulative results reported in the OSRC. Again, the flaw of the OSRC, as with the OPT itself, is a lack of validity in the statistical/mathematical sense of tests and measurements.
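The "variance explained" reading above follows from the standard coefficient-of-determination interpretation of a correlation: squaring r gives the fraction of variance in one variable accounted for by the other. As a minimal sketch (the function name is mine, not from the study), squaring the reported correlations reproduces the figures quoted here; note that r = 0.78 squares to roughly 0.61, which is consistent with the study's conservative figure of about 60%:

```python
def variance_explained(r: float) -> float:
    """Coefficient of determination: fraction of variance explained
    by a Pearson correlation r is simply r squared."""
    return r * r

# OSRC vs. Presage Factor: r = 0.78 -> about 0.61 (~61% of variance)
print(variance_explained(0.78))

# 1997 OPT district performance vs. Presage Factor: r = 0.80 -> 0.64
print(variance_explained(0.80))
```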
By the time OSRC ratings reach the public, they are impersonal representations of Ohio's school children, framed invisibly by those children's very real lives on the spectrum of advantagement-disadvantagement. Yet the OSRC is used to reward or punish the very people who must deal with the day-to-day reality of those children's lives: educators are held accountable for that over which they have virtually no control or decision latitude whatsoever.
Given that the OSRC is aimed directly at assessing a district's educators in general and its teachers in particular, the findings of this study point to the following advisories:
- Teachers and educators in districts rated low on the OSRC may in fact be performing extremely well, as this study shows to be the case with Youngstown City Schools, noted in the previous section on actual district performance. In other words, there may be no validity whatsoever to the OSRC ratings that place our most disadvantaged districts in the "academic emergency" category.
- Teachers and educators in districts rated high on the OSRC may in fact be performing nowhere near their potential. The results of this study show this to be the case with many OSRC top-ranked districts: the caveat about underestimating low-ranked districts applies equally, in reverse, to the highest-ranked ones.
- Ratings anywhere between the extremes, in the "academic watch" through "effective" categories, likewise cannot be taken as valid with any certainty at all. These mid-range districts represent the vast majority of Ohio's schools. Some are performing as the OSRC ratings claim; however, others are performing far below what might be expected, and still others far above.
There is a reality implicit in this study that cuts directly to the assumptions behind interpreting the OSRC as a measure of district educator effectiveness. That reality is a judgment I make with reflective confidence, based upon these findings and upon my professional experience as a classroom teacher and a university teacher educator. My judgment is that if we were ever to switch the staff of a district rated high by the OSRC with that of a district rated low, in five years' time there would likely be no change whatsoever in the OPT results of either school district. Indeed, if any change were observed, it would most likely be a slight drop in the scores of the low-rated district, as the highly rated educators confronted problems and issues in the lives of the district's children that they had never before encountered in their professional experience.