Emma C. v. Thurmond

United States District Court, N.D. California

July 5, 2019

EMMA C., et al., Plaintiffs,
v.
TONY THURMOND, et al., Defendants.

          ORDER RE STATE'S COMPLIANCE AT PHASE 2

          VINCE CHHABRIA, UNITED STATES DISTRICT JUDGE

         After inheriting this consent decree from the previously assigned judge, the Court set up a process for examining whether the State of California does a legally adequate job of monitoring school districts for compliance with the Individuals with Disabilities Education Act (IDEA). The process has four phases, the first of which is complete. That phase addressed the state's data collection activities, and the Court held that the state will - after correcting a couple of isolated but significant defects - be deemed in compliance with its federal statutory obligations in this area.

         Thus began the second phase, which examines how the state analyzes the data it collects to determine which districts are in need of intervention, and to determine what type of intervention is called for. The matters addressed in this phase are more central to the state's responsibility to monitor school districts to ensure that disabled children are receiving an appropriate education as required by the IDEA.

         Unfortunately, California's system for flagging school districts for intervention is riddled with serious defects. To give just three examples:

• The state attaches the classification of “meets requirements” to hundreds of small school districts despite conducting virtually no analysis of those districts.
• The state takes minimal steps to flag problems with the delivery of services to preschool-aged children with disabilities, even while acknowledging the importance of early intervention to IDEA compliance.
• The state's system for identifying the most troubled districts appears irrational, resulting in the selection of districts for intensive intervention that are less in need of it than the ones passed over.

         It appears, at least so far, that current leaders in the Special Education Division of the California Department of Education don't deserve much of the blame for this. They inherited many of the problems, candidly acknowledge most of them, and are committed to improving the state's monitoring activities. They have been forthcoming and cooperative, even while their lawyers from the Attorney General's Office have sometimes been nonresponsive or obstructionist. Accordingly, for now, there is no reason to seriously consider holding the state in contempt for failure to comply with the consent decree. Nonetheless, the defects in the current system are so serious, and so numerous, that they significantly interfere with California's ability to monitor how school districts are serving disabled children. The state cannot move on to Phase 3 until it addresses - or shows that it's well on its way to addressing - these problems.

         I. BACKGROUND

         A.

         The IDEA requires states that accept federal assistance to provide a free and appropriate education to all children with disabilities. 20 U.S.C. § 1412(a)(1)(A). As a practical matter, this responsibility falls largely on individual school districts. In recognition of this, the statute requires states, in turn, to conduct effective oversight of the school districts. Id. § 1416(a)(1)(C). The consent decree in this case requires the state to demonstrate that it has an adequate oversight system in place.[1]

         This Court's May 18 order established a four-phase monitoring process for assessing the legal adequacy of the state's oversight system. Dkt. No. 2387 at 1-2. Phase 1 involved scrutiny of the data the state collects from school districts. Specifically, the inquiry was whether the state collects enough data, and the right types of data, to enable it to effectively monitor districts. The Court concluded that although isolated legal deficiencies in the state's data collection system must be addressed, the state was largely compliant in this area. The Court thus concluded that the state could move on to Phase 2 and could cure the isolated data collection deficiencies in the subsequent phases.

         Phase 2 - the current phase - examines how the state analyzes the data it collects. To satisfy its monitoring obligations under the IDEA, the state must do an adequate job of flagging the districts that are failing, or struggling, to provide an appropriate education to students with disabilities. And it must do an adequate job of deciding what type of intervention is necessary in a given district. Phase 3 will examine how the state, after it has decided which districts to select for further monitoring, actually executes that monitoring. Phase 4 will examine the state's written policies and directions governing school district compliance with the IDEA.

         Phase 2 essentially poses two questions. The first is whether the state translates the data it collects into metrics capable of identifying school districts that may be falling short on their obligation to provide an appropriate education to students with disabilities. The second is whether the state's methods for sorting districts for inclusion in (and exclusion from) its various monitoring activities are adequate.

         In addition, the IDEA's implementing regulations require states to issue annual compliance determinations for school districts. 34 C.F.R. § 300.600(a)(2). The state's process for selecting districts for monitoring also determines whether the districts are classified by the state as complying with federal law. School districts that are selected for monitoring are classified as “needs assistance,” “needs intervention,” or, where problems persist, “needs substantial intervention.” States must prohibit these school districts from reducing their “maintenance of effort,” meaning their allocation of non-federal funds for special education services. 34 C.F.R. § 300.608(a). School districts that the state decides are not in need of any monitoring are typically classified as “meets requirements.” Phase 2 therefore inherently includes an inquiry into whether the state's sorting methods adequately ensure that districts labeled as “meets requirements” do not suffer serious deficiencies in serving disabled children.

         Phase 2 took roughly the same format as Phase 1. The Court received written submissions from the parties, a report from the court monitor outlining his conclusions, and an amicus brief from the Morgan Hill Concerned Parents Association. The written submissions were followed by two days of evidentiary hearings. Kristen Wright, the Director of the Special Education Division for the California Department of Education, Shiyloh Duncan-Becerril, the Division's Education Administrator, and Alison Greenwood, the Division's Quality Assurance Administrator, testified during Phase 1 and returned to testify for Phase 2. After the hearings, the Court ordered the monitor to conduct supplemental data analyses and ordered the state to provide all data necessary for those analyses. The monitor and his data consultant, Dr. Susan Wagner, presented their conclusions at a third evidentiary hearing, and state policymakers offered further testimony in response.[2] The monitor then conducted a final set of data analyses.

         B.

         As explained in previous orders, the purpose of this oversight is to ensure that the state complies with federal law. The IDEA and its implementing regulations offer only general guidance for what states must do to satisfy their monitoring and enforcement obligations. The state's monitoring activities must focus on “improving educational results and functional outcomes for all children with disabilities,” and ensuring that states meet the IDEA's requirements, with a special focus on “priority areas” enumerated in the statute. 20 U.S.C. § 1416(a)(2), (a)(3). The state must use “quantifiable indicators and such qualitative indicators as are needed to adequately measure performance in the priority areas.” 34 C.F.R. § 300.600(c). When a district appears from the data to fall short on its obligations, the state must respond with an appropriate enforcement action to correct the noncompliance “as soon as possible.” 34 C.F.R. § 300.600(e). The IDEA, however, does not require states to adopt any particular approach for monitoring. By specifying the ends, but leaving the means to the states, the IDEA strikes a balance between federal authority and the states' historic discretion in the design and control of their education systems.

         For purposes of this consent decree, however, the standard for compliance is relatively straightforward. The state will not be found to be out of compliance simply because the plaintiffs or the court monitor have identified isolated deficiencies or ways in which the monitoring system could become more effective. But if the state's chosen procedures are so deficient that they significantly hinder its ability to monitor school districts, the state will not be found compliant merely because the statute does not expressly forbid those choices.

         This standard has important implications for this phase. Any evaluation of the state's data analysis activities must pay close attention to how that data is actually used. Because certain problems may occur in tandem - for example, districts that frequently suspend students with disabilities may also have poor performance on statewide assessments - different metrics may yield similar information. Therefore, an imprecise metric may not compromise the overall monitoring system. Further, the importance of precision may turn on the importance of the metric. The state need not use the most granular data possible if the data is less important in the context of the overall system, or if the state has set targets for that data that sufficiently ensure that any poor performance will lead to monitoring.

         These realities underscore the importance of reviewing the state's data analysis activities as a whole. No monitoring system is perfect. Identifying a theoretical concern with an individual metric is not sufficient to find the state out of compliance with federal law. It must be shown that the concern, either alone or in combination with other issues, significantly interferes with the state's ability to evaluate school districts and to identify the ones in need of intervention.

         C.

         As discussed in detail in the Phase 1 order, during the school year, school districts submit large swaths of data about all students, with and without disabilities, to databases maintained by the state. See Dkt. No. 2428 at 3. This process constitutes the “first tier” of the state's monitoring system.

         The state then translates those data into metrics that align with key directives of the IDEA. For example, one directive is that schools educate students with disabilities in the “least restrictive environment.” 20 U.S.C. § 1412(a)(5). In general, this means that students with disabilities must be taught in general education classrooms, alongside their nondisabled peers, as often as reasonably possible. Schools may only remove students with disabilities from general education classrooms when the nature or severity of the child's disability requires it. To evaluate district performance in this area, the state uses three different metrics. For school-age children, the state calculates the percentages of disabled students who are taught (i) in general education classrooms for greater than 80% of the day; (ii) in general education classrooms for less than 40% of the day; and (iii) in separate placements. For preschool-aged children, the state looks to the percentages of disabled students in (i) general early-childhood programs; and (ii) separate placements. Dkt. No. 2455-1 at 11-12, 14. The state compares school districts' performance on each metric to a target that policymakers set in consultation with various groups, including local administrators, parent groups, and advocacy groups. Dkt. No. 2455-1 at 4.
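The placement metrics described above reduce to simple percentages compared against targets. The following sketch is purely illustrative: the field names, categories, and target values are hypothetical, and the state's actual computation is not spelled out in the record.

```python
# Illustrative sketch of the least-restrictive-environment (LRE) metrics
# described above. Placement categories mirror the order's description;
# all field names and target values below are hypothetical.

def lre_metrics(students):
    """students: list of dicts for a district's school-age students with
    disabilities, each with a 'placement' key. Returns the percentage of
    students in each placement category."""
    total = len(students)
    def pct(category):
        return 100.0 * sum(1 for s in students if s["placement"] == category) / total
    return {
        "gen_ed_80_plus": pct("gen_ed_80_plus"),    # gen. ed. >80% of the day
        "gen_ed_under_40": pct("gen_ed_under_40"),  # gen. ed. <40% of the day
        "separate": pct("separate"),                # separate placements
    }

# Hypothetical targets for illustration only; the state sets real targets
# in consultation with administrators, parent groups, and advocacy groups.
TARGETS = {
    "gen_ed_80_plus": (">=", 55.0),   # higher is better
    "gen_ed_under_40": ("<=", 22.0),  # lower is better
    "separate": ("<=", 4.0),          # lower is better
}

def meets_target(metric, value):
    op, target = TARGETS[metric]
    return value >= target if op == ">=" else value <= target
```

A district's result on each metric is then compared to its target, and a miss can feed into the monitoring selections described below.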

         In other areas, the state uses a system called the “Dashboard.” The Dashboard is a visual depiction of district performance on a grid. The vertical axis of the grid represents the school district's “status,” or the district's current performance. The horizontal axis on the grid represents the degree to which the school district improved or regressed from the previous year. Combining information from both axes results in a color: red, orange, yellow, green, or blue. An example from the state's written submissions is included here:

         (Image Omitted)

Dkt. No. 2455-1 at 24.

         This is the Dashboard that the state uses to evaluate disabled students' performance on statewide assessments. The state uses a metric called “distance from standard.” See Dkt. No. 2455-1 at 22-23. For each student, the state calculates the distance between the student's score and the score needed to establish that the student met the academic standards relevant to that assessment. The state then calculates the average “distance from standard” for each school district and reduces the score if the district failed to ensure that enough disabled students actually took the assessment. The circled squares show that districts with a distance from standard of 10 to 44.5 points that improved their performance by 3 to 14 points from the prior year will receive a “green.” Districts that receive a “red” or “orange” are treated as missing the target for this metric. In addition to performance on statewide assessments, the state currently uses the Dashboard to evaluate districts' suspension practices.
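The mechanics described above - averaging per-student distances from standard, then binning a district's “status” and year-over-year “change” into a color - can be sketched as follows. Only the circled “green” cell (status of 10 to 44.5 combined with an improvement of 3 to 14 points) comes from the record; every other bin boundary below is a hypothetical stand-in for the omitted Dashboard grid.

```python
# Illustrative sketch of the "distance from standard" metric and the
# Dashboard's (status, change) -> color lookup. Only the green cell's
# boundaries come from the order; the other cut points are hypothetical.

def distance_from_standard(scores):
    """scores: list of (student_score, score_needed_to_meet_standard)
    pairs. Returns the district's average distance from standard."""
    return sum(score - needed for score, needed in scores) / len(scores)

def dashboard_color(status, change):
    """status: district's current average distance from standard.
    change: movement from the prior year (positive = improvement)."""
    if 10 <= status <= 44.5 and 3 <= change <= 14:
        return "green"        # the cell confirmed by the record
    if status < -70 and change < -15:
        return "red"          # hypothetical worst cell
    if status < -70 or change < -15:
        return "orange"       # hypothetical
    return "yellow"           # hypothetical default for remaining cells
```

A district landing in a red or orange cell on this grid would be treated as missing the target and flagged for performance indicator review.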

         D.

         After the data are translated into a set of metrics and compared to targets, the next step is to determine whether a district should be chosen for further monitoring, and if so, what type of monitoring activity to conduct. The degree to which the state effectively executes its monitoring activities will be explored in depth during Phase 3. But evaluating how the state decides which monitoring activities to conduct, and for which districts, requires at least some understanding of the substance of those monitoring activities and how they differ from one another.

         In general, a district that performs poorly on any individual metric will be selected for a type of targeted monitoring. A district that performs poorly across many different metrics - or, in certain instances, on the same metrics for several consecutive years - may be selected for more intensive monitoring. In all, the state currently performs five types of monitoring relevant to this case - three targeted monitoring activities that relate to a specific area of poor performance by a district, and two intensive monitoring activities that involve greater intervention.[3]

         The state has dubbed the three targeted monitoring activities “performance indicator review,” “data identified noncompliance review,” and “disproportionality review.” A district is selected for performance indicator review if it fails to meet a target for a particular metric that the state believes is closely tied to student outcomes. Dkt. No. 2506 at 6. For example, a district that receives a red or orange on the Dashboard for poor performance on statewide assessments by disabled students, or for excessive suspension of disabled students, will be selected for performance indicator review in the pertinent area. A district selected for review must perform a “root cause” analysis into the reasons for its inadequate performance in that area and must submit an improvement plan to the state. Dkt. No. 2501 at 7-9. The state does not formally supervise the district's implementation of the plan but looks to see whether the targets are met for the following year. If the district remains in performance indicator review for multiple years, it must perform a “record review,” meaning it must select ten individual student records and review them for any patterns related to the noncompliance. Dkt. No. 2506 at 161-63.

         The second targeted monitoring activity, which has the unfortunate name of data identified noncompliance review, addresses poor performance on metrics that are related to timeliness. For example, before a child may receive special education services, the district must conduct an initial evaluation to determine whether the child has a disability. The IDEA requires districts to conduct these evaluations within 60 days after receiving parental consent. 20 U.S.C. § 1414(a)(1)(C)(i)(I). The state flags districts for monitoring whenever they miss this or any other statutory deadline for any student. The district must correct all identified noncompliance (for example, by holding any past-due initial evaluations). The district must also submit a root cause analysis that explains why the district has missed the deadline (or deadlines) and must submit data to the state demonstrating that it has corrected the problem. If the district continues to miss deadlines after that, the state may order it to take further corrective actions, and may even withhold funding from the district. Dkt. No. 2506 at 24.

         The third form of targeted monitoring, disproportionality review, relates to the IDEA's requirement that states prevent discrimination against children on the basis of race or ethnicity. 20 U.S.C. § 1418(d). To use one concrete example, school districts must take care to avoid suspending Hispanic disabled children at a disproportionately higher rate than all disabled children who are not Hispanic. See 34 C.F.R. § 300.646(a)(3). The state is given the discretion to determine how marked the discrepancy must be before districts become “significantly disproportionate.” 34 C.F.R. § 300.647(a)(7). Once a district is selected, the state evaluates its policies and procedures and a sample of student records for compliance with federal law (for example, to ensure that all IEP meetings were conducted with a general education teacher present). Dkt. No. 2455-1 at 54; Dkt. No. 2506 at 25-26. The district must correct all noncompliance identified by the state.
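The disproportionality comparison described above is, under the implementing regulations, typically expressed as a risk ratio: the rate at which an outcome (here, suspension) occurs for one racial or ethnic group of children with disabilities, divided by the rate for all other children with disabilities. A minimal sketch follows; the threshold value is hypothetical, since each state sets its own under 34 C.F.R. § 300.647.

```python
# Illustrative sketch of the risk-ratio comparison underlying
# disproportionality review. The state chooses the threshold at which a
# district becomes "significantly disproportionate"; the value below is
# a hypothetical placeholder, not California's actual threshold.

def risk_ratio(group_suspended, group_total, others_suspended, others_total):
    """Ratio of one group's suspension risk (e.g., Hispanic children with
    disabilities) to the risk for all other children with disabilities."""
    group_risk = group_suspended / group_total
    others_risk = others_suspended / others_total
    return group_risk / others_risk

RISK_RATIO_THRESHOLD = 3.0  # hypothetical; set by each state

def significantly_disproportionate(rr):
    return rr > RISK_RATIO_THRESHOLD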

         The state's two intensive monitoring activities are comprehensive review and significant disproportionality review. Comprehensive review is reserved for districts experiencing the most serious performance issues. The decision to select districts for comprehensive review is based on a total score derived from performance on many different metrics. Poor performance on any individual metric will, in addition to potentially leading to selection for targeted review, cause the district to lose points from its total score. Dkt. No. 2469 at 57. After all districts are scored, the state sets a cut score and selects all districts falling below that score for comprehensive review. In contrast to performance indicator review and data identified noncompliance review - which largely involve self-analysis by the district - the state puts its own boots on the ground during comprehensive review. The state, rather than the district, develops the monitoring plan after analyzing the district's compliance history. It will interview parents and administrators and conduct staff trainings as needed. A district selected for comprehensive review may remain in monitoring for several years. Dkt. No. 2506 at 30-34.
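The comprehensive-review selection described above amounts to a scoring-and-cutoff procedure: each missed target deducts points from a district's total, and districts falling below a cut score are selected. The sketch below is illustrative only; the point values, starting score, and cut score are hypothetical, as the record does not specify them.

```python
# Illustrative sketch of comprehensive-review selection: misses on
# individual metrics deduct points from a district's total score, and all
# districts below a state-set cut score are selected. The starting score,
# per-miss penalty, and cut score here are hypothetical.

def total_score(metric_results, start=100, penalty=10):
    """metric_results: dict mapping metric name -> True if the district
    met the target for that metric."""
    misses = sum(1 for met in metric_results.values() if not met)
    return start - penalty * misses

def select_for_comprehensive_review(district_scores, cut_score):
    """district_scores: dict mapping district name -> total score.
    Returns the districts scoring strictly below the cut score."""
    return sorted(d for d, s in district_scores.items() if s < cut_score)
```

One consequence of this design, explored later in the order, is that the selection depends entirely on where the cut score falls relative to the distribution of total scores.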

         The state's other intensive monitoring activity, significant disproportionality review, applies to districts that California has determined to be “significantly disproportionate” in any area for three consecutive years. During this monitoring activity, the state takes an even harder look at the district's policies and practices. In addition to the activities performed during targeted disproportionality review, districts in intensive monitoring must develop an improvement plan that the state dubs an “early intervening services plan,” which must be approved by the California Department of Education. The state also makes various forms of technical assistance available to districts, both through the state directly and through contractors, to assist each district in developing its plan and getting back into compliance. Districts must set aside 15% of their IDEA-based funds to finance the implementation of the plan and must report on their progress to the state. Dkt. No. 2506 at 28-30.

         II. OBVIOUS DEFECTS IN THE STATE'S DATA ANALYSIS SYSTEM

         The state's process of determining which school districts need further monitoring, as well as its process for deciding how intensive that monitoring should be, is riddled with defects. Some of these defects are small and easily fixable, but several are so fundamental, or so obviously contrary to the IDEA (or both), that the state cannot get out from under this consent decree without fixing them. Furthermore, in contrast to Phase 1 - where the state largely established compliance, such that it made sense to move on to Phase 2 of the monitoring process concurrent with the state's efforts to fix a couple of outstanding legal defects in its data collection program - the flaws in the state's system for identifying districts in need of intervention are so severe that they preclude a transition to the next phase. Before moving on to Phase 3, the state will be required to demonstrate that it has meaningfully addressed these problems (or at least that it's well into the process of addressing them).

         A.

         As a preliminary matter, there are several areas where the state essentially conducts no assessment of school district performance at all, even though the IDEA requires it. The policymakers testified at the hearings that they are currently developing data analysis procedures for almost all of these areas. But there is nothing in place now.

         Small ...

