In 1992, superintendents from a small number of school districts in the New York metropolitan region began meeting periodically to consider issues related to the organizational performance of their districts.
The districts represented in this informal group serve student populations that typically show strong achievement results on state and national norm- and criterion-referenced tests. With more than 90 percent of their graduates college-bound, they meet and even comfortably exceed the traditional benchmarks of excellence for public schools.
Given their quality and their location, these districts attract and hold faculty who bring to their work substantial experience and strong academic backgrounds. School facilities offer students and faculty the opportunity to learn and teach in an efficient and well-maintained environment. Although confronted in recent years by large cuts in state aid, the communities served by these districts, for the most part, have remained supportive of school budgets. By almost any commonly cited measure, these school districts are among the most successful in the region and, perhaps, in the nation.
Nevertheless, these superintendents shared the belief that the common standards of district excellence did not present a sufficiently rigorous challenge to students and faculty in their districts. Moreover, they concluded that the methods regional accrediting agencies used in the early 1990s to examine instructional practices were not designed to encourage close examination of performance assessments in high-achieving districts. As a result, accreditation reports were providing these districts with little information about how to grow and sustain improvement in student performance.
In the spring of 1994 the group decided to form a consortium committed to bringing more rigorous and systematic attention to student performance standards within the member districts. Further study sparked interest in adapting benchmarking techniques from the private sector into an assessment model suitable for educational institutions. The members agreed that it would be useful to employ the services of a corporate consultant. Proposals were solicited from several firms engaged in benchmarking activities, and a contract was awarded to the IBM K-12 Consulting Group to assist in this effort.
The move to establish state and national standards as a step toward improving public school student performance has been in full swing across the country for some years. New standardized tests have been developed and existing tests revised and refined. Beginning with the passage of the No Child Left Behind Act, the federal government weighed in with a substantial set of testing requirements to measure annual improvement of students, schools, districts and states.
To complement these tests, some states and districts are experimenting with other forms of assessment that stress student performance, including portfolios, exhibitions, and demonstrations that challenge students’ abilities to analyze and synthesize information, draw conclusions, and communicate their ideas effectively.
As a result of these developments, districts have an abundance of student performance data. At the same time, they find themselves confronted with demands from the state and from their communities to improve student performance on these highly public measures. The question they must confront is: “How can student performance data be most effectively used to improve teaching and learning?”
In response, Tri-State Consortium members have identified a set of common questions:
- What areas of student performance are assessed and how are assessment data used?
- What are appropriate standards of excellence, and how can they be used and shared among Consortium districts?
- How can the assessment process be improved?
- How are student performance data collected, stored, analyzed and disseminated to faculty and staff?
- How are student performance data used to evaluate the effectiveness of curriculum and instruction?
- How are staff development, curriculum revision, and other support programs and processes informed by student performance results?
- How are assessment results communicated to, and feedback solicited from, the broader school community?
Consortium member districts have made a commitment to pursue these questions as “critical friends.” Members have come together to develop, implement, and refine an assessment model that can help them identify how a district’s resources, processes, and planning can be aligned effectively for high-quality student performance. The model encompasses staff development, supervision and evaluation, instructional practice, community relations, and annual and long-term planning tools. Underlying the model is the belief that high-performing districts can and must improve continuously.
In the first year of the IBM engagement, Consortium members worked closely with the consultants to develop the new assessment model, derived from corporate sources but redesigned for use in school districts. A draft of the model was completed in May 1995. At that time, the number of participating districts increased from the original twelve to twenty, drawn from suburban districts in New York, Connecticut, and New Jersey.
That fall, a Consortium conference drew together 170 teachers, administrators, parents, and board members to review the model, discuss existing assessment practices, and explore ways in which the new organization could serve as a vehicle for examining local standards. In 1995-96 pilot visits were conducted in three member districts; the following year an additional five pilot visits took place. Drawing on the experience of these first visits, teachers and administrators from member districts took part in a series of three Best Practices Conferences designed as forums for sharing what had been learned through the visit process. By 2010, membership in the Consortium had increased to 46 districts.
To date, the Consortium has conducted more than 100 district visits. It currently conducts, on average, thirteen district visits each year. To facilitate this work, to enhance the organization’s capacity to train those involved in the work—both as visiting team members and as members of the districts’ receiving teams—and to expand opportunities for communication and sharing among members, the Consortium recruited and appointed an Executive Director in 1999 and a Director of Training in 2000.
Just as the model encourages member districts to become “learning organizations,” the Consortium itself is committed to operating as a learning organization. In 2003-04, the Consortium commissioned an evaluation study of the impact of the organization on its member districts. The study looked closely at six actively participating member districts, asking the question, “What difference in teaching and learning has Consortium participation made in your district?” Based on the conclusions and recommendations in the evaluation study, the Consortium has developed a five-year strategic plan for the years 2010-2015. Refining the model and the visit process, developing new ways to share what we are learning, and using technology to become a “virtual” as well as an actual organization are top priorities.