Troy LaRaviere's blog

CHICAGO SCHOOL OFFICIALS ALTER CHARTER SCHOOL TEST DATA

UPDATE: The day before this article was posted (December 9, 2014), CPS officials assured us they would give us the data files explaining the changes in charter school test scores.  As of today, they have not lived up to this assurance.  The files remain closed to the public.

by Troy LaRaviere

Background

The members of the Administrator’s Alliance for Proven Policy and Legislation in Education (AAPPLE) have once again—unfortunately—found it necessary to attempt to hold Chicago Public Schools officials accountable. Ironically, this time it is the CPS Office of Accountability that needs to answer to students, teachers, parents and school leaders across Chicago.

This office is responsible for developing and executing CPS’s five-level “SQRP” system, which is used to rate schools. However, an AAPPLE comparative analysis indicates the Office of Accountability altered the school-level growth data used to assign those ratings. This basic finding was confirmed by an independent analysis conducted by the Chicago Sun-Times.

Investigation

In August, AAPPLE analyzed NWEA MAP student growth results and discovered students in public schools were learning far more than their peers in charter schools. Our findings were published in a Chicago Sun-Times op-ed. In addition, the Sun-Times published its own independent analysis, which affirmed our findings. Our analysis was based on a file containing the school-level results of the Northwest Evaluation Association (NWEA) Measures of Academic Progress (MAP) assessment, released by the CPS Office of Accountability in early August. That original file is no longer available on the Office of Accountability website. At some point between the publication of our findings and the release of school ratings, CPS removed the original file containing school growth data and replaced it with a different version. There is no indication or acknowledgement on the site that the data in the file have been changed.

Fortunately, we saved the original version.

An analysis of both versions indicates massive changes were made to the student growth data for charter schools at some point during the last few months as CPS delayed the release of school quality ratings.

Findings

We found the changes made certain schools appear to have greater academic growth by lowering their average pretest scores while leaving their posttest scores as they were. For other schools, the changes raised the pretest scores and again left the posttest scores as they were, giving the impression of less student growth.

The changes were made to the data for nearly every charter school while affecting fewer than 20 public schools. Charter school scores were changed by more than 50 percentile points in some cases, while most of the public school changes were 2 points or less.

Winners and Losers

Two charter schools had their student growth percentiles raised by 50 points or more. In general, the CICS chain of charter schools was made to appear much higher performing—in terms of student growth—than the original, unchanged results indicated. The chain received two score increases of 30 or more percentile points, six of 20 points or more, and 10 of 10 points or more. On the other hand, schools like Shabazz Charter sustained a 20-point loss and a 10-point loss as a result of the changes. Click here for an Excel spreadsheet detailing these changes. It is interesting that many of the schools whose growth was changed upward are in gentrifying areas of Chicago.

Action Taken by AAPPLE

We requested an immediate meeting with the CPS Accountability Office and got an immediate reply. The request was made on Sunday evening, and the meeting was scheduled for Tuesday afternoon. We then informed the leadership of the Chicago Principals and Administrators Association (CPAA), the parent group Raise Your Hand, and the Chicago Teachers Union (CTU). These organizations represent principals, teachers, and parents, and we received valuable input and feedback from each group. The CPAA and CTU expressed interest in attending the meeting with CPS and each sent representatives.

CPS Response

CPS representatives—led by John Barker—acknowledged the scores had indeed been changed and told us there were two broad reasons for the changes.

Justification #1: Differing pre-test dates

The first was that students took the tests at different times, so the growth numbers were not entirely comparable. Public school students take the test each spring, and student growth is measured by how much scores increased from spring 2013 to spring 2014. To better understand this, it helps to think of the 2013 MAP assessment as a pre-test and the 2014 MAP assessment as the post-test given to see how much a student learned over the course of a year.

However, many charter students did not take the pre-test in spring 2013 and instead took it in fall 2013, so their student growth was measured by how much scores increased from fall 2013 to spring 2014.  CPS stated it needed to change the scores to reflect this.

AAPPLE’s Response to Justification #1: Differing pre-test dates

Whether a test was administered in spring or fall does not provide an advantage or disadvantage, since NWEA already takes the testing season into account when it provides student growth projections to CPS for both spring and fall administrations. Making any adjustment does not make sense.

If there were an advantage, it would be charter schools that had the technical testing advantage. Students who took the pre-test in spring rather than the following fall typically score higher on the pre-test because they took it before the learning loss that occurs over the summer months. When measuring growth, however, students who score higher on the pre-test are at a competitive disadvantage. For example, suppose “John,” a public school student, took the pre-test in spring 2013 and scored a 203. If John had been a charter school student, he would have taken the pre-test in fall 2013, after the summer learning loss, and would have gotten a lower pre-test score, such as a 200. With a post-test score of 210, and all else being equal, John’s public school growth would have been 7 points, while his charter growth would have been 10 points, simply because of the timing of the pre-test in relation to the summer learning loss. Again, that is with all else being equal.
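To make the arithmetic concrete, here is a minimal sketch using the hypothetical scores from the example above; the numbers are illustrative and are not actual student data.

```python
# Hypothetical scores from the "John" example above, not real data.
spring_2013_pretest = 203   # public school pre-test, taken before summer
fall_2013_pretest = 200     # charter pre-test, taken after summer learning loss
spring_2014_posttest = 210  # the post-test is the same in both scenarios

public_growth = spring_2014_posttest - spring_2013_pretest   # 7 points
charter_growth = spring_2014_posttest - fall_2013_pretest    # 10 points

print(f"Spring-to-spring growth (public schedule): {public_growth}")
print(f"Fall-to-spring growth (charter schedule):  {charter_growth}")
# With all else equal, the later (and therefore lower) pre-test inflates
# apparent growth, which is why an adjustment for testing dates should push
# charter growth scores down, not up.
```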

All things between public schools and charter schools, however, are not equal. Despite charter schools having such a massively deceptive technical advantage, the MAP results demonstrated conclusively that charter school students learned far less than students in public schools, especially in reading. It is frightening to think of how much worse charter school student growth scores would have been had they not started with such a competitive advantage.

All of this, however, is moot, since NWEA already provides CPS with aligned projections that take into account when students take the assessment. Nevertheless, CPS decided to change the scores anyway, on the claim that they needed to reflect the different testing dates. I told CPS officials that if that were the case, then charter school results should have gone down. Indeed, about half of the changes were negative, but all charter growth scores should have gone down when adjusted for different pre-test schedules. Why did some of them go up? That was when CPS officials articulated a far more puzzling justification for changing scores.

Justification #2: Students took different versions of the same test

The MAP assessment comes in several different formats, among them a version aligned to the Illinois State Standards and another aligned to the Common Core State Standards. All public schools took the Common Core-aligned MAP assessment. Apparently, CPS allowed charter schools to choose the version aligned with the Illinois standards. CPS officials then told us that these schools are likely the ones whose growth scores were adjusted upward.

AAPPLE’s response to justification #2: Different versions of the same test

CPS’s changes had significant effects on school growth percentiles as well as on the percentage of students who made national average growth. Perhaps the most egregious example is CICS Irving Park. Before the scores were altered, the percentage of students making national average reading growth at CICS Irving Park was 40; after the changes it was 63. Before the changes, CICS Irving Park had a national growth percentile of 34; after the changes it was 86. What statistical model is CPS using to estimate that the school’s results would have been so vastly different had it simply taken a different version of the same basic test?

Defending this action puts CPS in an interesting no-win situation. First, CPS would need to acknowledge that changes like the ones made to CICS Irving Park’s scores are incredibly implausible, if not impossible. If CPS insists that the tests were so vastly different that they would change the growth results so drastically, then CPS may need to consider invalidating the ratings of any schools whose scores were changed as a result of taking different tests, since applying rating metrics across different tests is extremely poor practice.

If CPS officials stand by their assertion that CICS’s results would have been so vastly different had the school just taken a different version of the same test, then they have acknowledged that a school system can get the results it wants based on the test it selects. This calls into question the validity of the testing process itself. More importantly, it raises unavoidable doubts about the wisdom of using such wildly inconsistent assessments to make high-stakes decisions about schools: decisions such as their “levels” in a system of school choice, their probationary status, or, in many cases, their very existence.

Conclusion and Implications

Summary of Above Responses

First, the scores should not have been altered at all. The original scores should have remained, with an asterisk explaining the different testing dates and differing test versions. Since the scores were changed, however, logic suggests that all charter school scores should have shown even less growth, given the deceptive technical advantages they had in both testing dates and test versions. This is not the case, and it raises serious questions about both the methodology and the motivations behind the changes in charter school scores. In addition, the massive variance in scores, based on CPS’s hypothesis about what schools would have scored had they simply taken a different version of the same test, raises serious concerns about the use of these tests for high-stakes decisions such as school ratings, probation, turnaround, and closure.

CPS Creates a Non-Level Playing Field

I have no doubt charter schools chose to ignore the CPS practice of taking the pre-test in the spring in order to give themselves a technical advantage over public schools when it comes to measuring growth. I also have no doubt some chose to take the Illinois standards-based assessment because they assumed it would be easier and their students would appear to score higher than public school students as a result. Despite having both of these deceptive technical advantages, the results demonstrated the monumental failure of our city’s irrational experiment with charter schools. However, to the point of this report, the question must be: “Why did CPS allow them to have these deceptive technical advantages in the first place?”

Continuing Lack of Transparency Increases Likelihood of Tampering with Scores

We asked CPS for a technical document that explains and illustrates precisely how adjustments were made for students whose scores were altered as a result of different testing dates, as well as for those whose scores were altered based on taking different versions of the test. CPS also needs to explain why it chose to alter scores for a factor (test dates) that NWEA already takes into account when it determines student growth expectations and grade-level growth percentiles.

Furthermore, the question needs to be asked: “Why did CPS go through the trouble of actually changing the pretest scores, making it appear that the new growth scores were based on real scores rather than on a statistical model of what growth would have been?”

These score changes happened without any public disclosure. Despite having dozens of opportunities to tell the public about their actions, CPS remained silent. For several months, CPS delayed the release of school ratings. During this time, several principals, union representatives, and media representatives continually pressed CPS officials about what was causing the delay. At no time did those officials state or acknowledge what they were doing with the scores. I suppose “We’re changing the charter school scores” would not have made for a good media sound bite in their minds. It would, however, have been an accurate one.

Moving Forward

In a system based on “choice,” parents and other stakeholders must be provided with accurate indicators of school quality. CPS’s school rating system is intended to serve as this primary indicator. It cannot serve this purpose if there are clouds of suspicion about tampering with the data used to determine these ratings.

To be clear, this rating system and school choice have done nothing to improve the overall quality of education in our city. In fact, the achievement gap between black and white students has widened as a direct result of the poor quality of the charter school options that have infiltrated our district. Again, the rating system and “choice” have failed our parents and our students. Some of us are principals of schools that received the highest “1+” rating. Yet we believe the rating system should be abolished and CPS should take the resources it puts into maintaining this system and put them into efforts to improve all of its schools.

However, as long as the rating system exists, the data used to inform it must be accurate, and the people who prepare it must operate with transparency and integrity. The changes in the charter school data—with no public disclosure or explanation whatsoever—put both into question.

*************************

DATA FILES

Original file

The original file is no longer posted on CPS’s site, but can be viewed at:

https://drive.google.com/file/d/0BytSj0QyFz1ea2dta1dVU3ZvbE0/view?usp=sharing

Altered file

The altered file, now posted on the Office of Accountability’s site, can be downloaded at http://cps.edu/Performance/Documents/DataFiles/NWEAreportSchool_2014.xls

Their website is http://cps.edu/SchoolData/Pages/SchoolData.aspx (see “Assessment Reports”).

An AAPPLE version of the altered file that contains additional columns that show the original results side-by-side with the altered results can be viewed at the following link.

https://drive.google.com/file/d/0BytSj0QyFz1eU1BjMW83MkY5T3c/view?usp=sharing

File for quick and easy at-a-glance analysis

Click the link below for an AAPPLE document that contains only the schools with altered information. It includes additional columns showing the original results and the altered results side by side, and it calculates the difference between the original scores and the altered scores. Lastly, it contains small tables alongside the data for certain charter chains, indicating how each chain was affected by the changes. A minimal sketch of how such a side-by-side comparison can be reproduced appears after the link below.

https://drive.google.com/file/d/0BytSj0QyFz1eb1l6LVlZNkRMRmM/view?usp=sharing
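For readers who want to reproduce the comparison themselves, the following is a minimal sketch, not AAPPLE's actual workflow, of how the original and altered files could be joined and differenced once both spreadsheets are downloaded locally. The file names and column headers ("School ID", "Reading Growth Percentile") are assumptions for illustration and should be adjusted to match the headers in the actual files.

```python
# Rough sketch of comparing the original and altered NWEA school-level files.
# File names and column headers below are assumptions, not the real ones.
import pandas as pd

original = pd.read_excel("NWEAreportSchool_2014_original.xls")
altered = pd.read_excel("NWEAreportSchool_2014.xls")

# Join the two versions on a school identifier so each school's original
# and altered results sit side by side.
merged = original.merge(
    altered,
    on="School ID",
    suffixes=("_original", "_altered"),
)

# How far did each school's growth percentile move between versions?
merged["growth_change"] = (
    merged["Reading Growth Percentile_altered"]
    - merged["Reading Growth Percentile_original"]
)

# Keep only schools whose numbers actually changed, largest swings first.
changed = merged[merged["growth_change"] != 0].sort_values(
    "growth_change", ascending=False
)
print(changed[["School ID", "growth_change"]].head(20))
```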