Education Dean's Testimony To House Committee On Education And The Workforce
Testimony to the U.S. House Committee on Education and the Workforce
The Policy Influence of Education Research and R&D Centers
Submitted by Susan H. Fuhrman
Dean, Graduate School of Education, University of Pennsylvania, and Chair, Consortium for Policy Research in Education (CPRE)
Mr. Chairman and members of the Committee, thank you very much for the opportunity to testify this morning. I have been asked to focus on two issues: the role of research in education policy as viewed from the university level, and the role of R&D centers in building a cumulative base of knowledge and disseminating it to the field.
The Role of Research in Education Policy
Research and experience have shown that the direct application of research to policy is rare. In the heat of political decision-making, many other factors come into play. The authoritative allocation of values has more to do with aggregating interests around particular positions than with making decisions based on evidence. Research is often used to justify political positions already taken, rather than to set a direction for policy. Decision-making happens in a haphazard fashion, and research may or may not become a key ingredient depending on very particular circumstances (Kingdon, 1984). Whether research is shaken or stirred into the mix has little to do with its nature and much more to do with what else is going on in the stew. Furthermore, researchers and policymakers are said to speak different languages, operate on different time frames, and care about different things. Researchers are always qualifying their findings and disagreeing with one another; policymakers are always pressing for more definitiveness and consensus (Kaestle, 1993; Lagemann, 1997).
Abandoning the notion that research is likely to have an instrumental influence on policy, many scholars have come to the conclusion that research has a more indirect, conceptual influence. Weiss and others believe that social science research has its greatest effect on the discourse of policymaking rather than on specific policies. Research filters into the discussion through a gradual process of "enlightenment" (Weiss, 1980; Weiss and Bucuvalas, 1980). It frames debates by introducing new ideas or problem definitions, helps to question or support existing assumptions, and creates frameworks for thinking about policy issues. Its influence is so gradual and unpredictable that the original source may be long forgotten. The very fact that an idea emanated from research, as opposed to experience or interest group argument, may be entirely lost. While this suggests that the influence of research is probably seriously underestimated, since many ideas that originate in research are never attributed to it, there is no question that even when we appreciate the enlightening value of research, evidence appears relatively low on the list of factors that influence policy.
However, there are ways to promote the use of research by policymakers. Examining studies that have garnered significant policymaker interest, like the Perry Preschool study and the Tennessee STAR class-size experiment along with associated studies, I have come to the conclusion that several aspects of such studies facilitate their influence. In my opinion, four aspects of policy studies promote their use.
First, studies that are influential incorporate a research design well suited to the question they are intended to answer. Some questions concern the effectiveness of a particular policy approach. If the question is "does this option work," then either an experimental or quasi-experimental design with very careful controls would be appropriate. Other types of designs might show relationships between treatments and achievement but cannot answer the causal question definitively. But "does it work" is not the only question policymakers ask about policy options. They also want to know the manner in which policies exert an influence, not just whether they do, and how various design options play out in practice. To study how policies play out in practice, that is, how they are implemented, research must include significant qualitative components. Only by visiting classrooms and schools where policies are being translated into practice can researchers understand the many issues that shape the translation. Policymakers also want to know more about the dimensions of problems, such as whether different population groups or types of schools experience issues differently. In other words, there are many things they want to know that do not require an experimental design. The important point is that research suited to the question is more likely to be considered rigorous than research that is stretched to answer questions it cannot. Another aspect of design is scale. Studies with very small sample sizes cannot support more generalized statements. They can raise issues for further investigation and suggest possible relationships, but larger-scale studies will be necessary to confirm their findings.
A second characteristic of influential research is that it is longitudinal in nature. In my opinion, policymakers want to know how results hold up over time and whether effects are sustained. Short-term studies do not provide such answers; they may be a first step, but once a finding of interest emerges, it should be followed over time to determine its staying power.
Third, research that is useful is accompanied by successive studies that confirm and lend weight to its findings. In many fields, replication is accepted practice. Studies are undertaken specifically to repeat previous studies and to confirm or deny their findings. In education, however, perhaps because research has been significantly under-funded, we have put a premium on new, unique studies. Little credit is given to researchers who replicate previous work, although such replication is essential not just to confirm and lend credence to the original work, but also to see how the policy under investigation works in various contexts. Policymakers particularly want to understand this last point ("will it work in my state?") and value studies that are replicated in varied settings.
Fourth, policymakers value synthesis, efforts to connect the most recent study with past work and to clarify the aggregate weight of the findings. Single studies very often contradict one another. The contradictions may be artifacts of design or setting; they may or may not shed light on the underlying research question. Only through systematic summaries that sort through the contradictions and determine where the weight of the evidence lies can we provide policymakers with definitive statements that can guide their decisions.
In my opinion, influential research is rigorous, large-scale, longitudinal, validated through replication, and packaged in a way that puts findings in the larger context of related research. Substantial funding is necessary to sponsor such research. The President's Committee of Advisors on Science and Technology (PCAST) recommends that funding for education research, development, and dissemination be increased over five years to $1.5 billion, a multi-fold increase over current funding.
At the university level, we are attempting to promote the type of research I have just described. For example, at the University of Pennsylvania, we have founded the Campbell Collaboration (http://campbell.gse.upenn.edu/), a multi-national association named after the methodologist Donald Campbell and modeled on the Cochrane Collaboration, one of the foremost promoters of evidence-based practice in medicine. The Campbell Collaboration produces systematic reviews of "what works" in education and social policy. Policymakers will have electronic access to up-to-date syntheses of relevant research. Significantly, by employing rigorous standards for including studies in such syntheses, the Collaboration will encourage and promote research quality.
The Role of R&D Centers in Building Cumulative Knowledge
As founder and director of the Consortium for Policy Research in Education (CPRE), an R&D center supported by the Office of Educational Research and Improvement (OERI) since 1985, I have had the opportunity to participate in a number of large-scale studies that have enhanced the knowledge base over time. Centers are essential for building programmatic research efforts: studies that build on one another over time to accumulate knowledge progressively. Successful centers build knowledge that is cumulative, provide a sort of institutional memory about that knowledge, and make it available to other researchers, practitioners, and policymakers.
One example of successive studies that build upon each other to provide cumulative knowledge concerns standards-based reform and the development of measures intended to determine what content is actually being offered in classrooms. In the 1980s, during our first grant period, we were charged with determining the effects of state policies to raise high school graduation requirements. States were mandating additional units of key subjects, and policymakers were curious about whether students were actually taking the added units. We examined transcripts in four states and found that students were indeed taking more units of math, science, English, and history, but that it was not obvious from the course titles what content they were actually receiving. For example, we came across a course called "Informal Geometry," or geometry without proofs, leading us to wonder whether the content was significantly watered down. It turned out that courses such as this were more prevalent in states with high school graduation tests that focused on low-level competencies. In such states, many additional mathematics units focused on arithmetic, as did the tests. This finding led us to examine how the two policies, requirements for additional graduation units (which envisioned students taking algebra and geometry as their additional units of math) and competency examinations (which led to the additional units being focused on the same low-level skills as the tests), were in conflict with one another. Understanding and documenting such instances of policy conflict, we began to write about a more coherent approach to state instructional guidance: the idea of standards that would set expectations for student learning, and the alignment of other policy levers (student assessment, teacher education, and professional development) to such standards. The early work on standards-based reform was enormously influential, leading directly to the NSF Systemic Initiatives and to policy reforms in a number of states (Smith and O'Day, 1991; CPRE, 1991).
We continued to wonder about what actual content was behind course titles, and in the early 1990s we undertook the Reform Up Close study, a six-state, 72-classroom examination of enacted curriculum content. We studied the courses that expanded the most as a result of new high school graduation requirements and used questionnaires, daily instructional logs, and observation to determine what was actually being taught. We found, reassuringly, that the "algebra" offered to students taking it largely as a result of the new requirements, students who previously would have been in lower-track math, was similar to the algebra offered to students with better previous preparation. This study showed that it was possible to monitor instructional content on a large scale to determine if policymaker expectations were really being met (Porter et al., 1993).
If algebra is algebra, pretty much the same from place to place, can students with poor preparation take it and then move on into other college preparatory classes? Growing out of the Reform Up Close study was a set of questions about whether moving students into gateway courses such as algebra could have long-term benefits. We then undertook a study involving 4,800 students in New York and California who were engaged in transition math classes designed to move them from general mathematics into higher levels of the subject. The Upgrading High School Mathematics Study (Gamoran et al., 1996) showed that students in the transition classes did in fact move into college preparatory math sequences at higher levels than comparison groups, and their achievement was higher as well. This study confirmed important components of the theory underlying standards reforms: students can learn to higher expectations, and, since the students who learned the most had teachers with extensive professional development in the math curriculum they were teaching, aligned professional development pays off. The study enabled us to further develop the measures of classroom content that we had been working on and to relate the content of instruction directly to student learning.
We are now working on a very large-scale, longitudinal study of elementary school instruction, the Study of Instructional Improvement, that asks the next set of logical questions: what is the actual content of elementary instruction and how is it aligned to standards, how much and what kinds of professional development are most effective in changing instruction and influencing student learning, and what components of instruction are most related to student learning? The study focuses on the design, enactment, and effects of four comprehensive reform interventions and is aimed at building a theory of instructional intervention. Using a sample of 120 schools over six years, the study entails, and depends on, the development of suitable measures of intervention design, enactment processes, and instruction. For this study, we have made significant advances in the development of daily instructional logs and developed new, unprecedented measures of teachers' knowledge of content for teaching, or pedagogical content knowledge.
Disseminating Research to the Field
We are not waiting for the results of the Study of Instructional Improvement to begin disseminating the work. We are piloting our pedagogical content knowledge questions in California with teachers attending summer professional development institutes. The evaluator of the institutes will benefit from new measures to use in pre- and post-testing, and we will benefit from having several thousand additional teachers involved in measurement refinement.
Disseminating work in progress is one way to maintain close contact with the field. To us, continuing, close contact with the audience is the key to successful dissemination. CPRE has accomplished this by making interaction with clients a central element of our mission. In our view, our constituents, policymakers and practitioners who are engaged in reform, want research that responds to their concerns, that reaches them conveniently, and that is timely. We try to consult a variety of stakeholders to develop research projects that will meet the needs of constituents, seeking advice about important questions from our Executive Board and our Affiliated Organizations (20 national policymaker and practitioner organizations), as well as in the course of research and dissemination.
We use multiple channels for publishing and conveying our findings, including meetings and newsletters of Affiliated Organizations. Our work is produced in a variety of formats: Policy Briefs (circulation 11,500); Policy Bulletins (short summaries sent to approximately 600 people including members of the state associations of school administrators, state school board associations, and media representatives); Research Reports (sent to Affiliates and advisors; advertised widely); Journal Articles; Books (sent to key advisors and advertised in CPRE briefs and by publishers); Occasional Papers (allied work not directly supported by CPRE and distributed like Research Reports); and papers presented at conferences.
Face-to-face dissemination includes our own sponsored meetings such as policy forums and Congressional briefings; special strands and regular presentations at Affiliated Organizations meetings, such as meetings sponsored by the Education Commission of the States, the National Conference of State Legislatures, and the American Association of School Administrators; presentations at policymaker, researcher and practitioner meetings throughout the country and abroad; work with the press; and technical assistance to states and localities. Electronic dissemination features our website (http://www.gse.upenn.edu/cpre/) which is linked to a special CPRE website on Teacher Compensation, School Finance, and Program Adequacy (http://www.wcer.wisc.edu/cpre/) as well as to OERI and to our affiliates; 16,470 copies of reports were downloaded from the CPRE site between Dec 2000 and May 2001.
Conclusions
Quality research and dissemination are closely linked. Links to constituencies are strengthened when researchers disseminate findings that derive from high quality studies, and developing such links can in turn enhance the quality of the research. Research quality is furthered when the questions are closely tied to the needs and concerns of clients; those needs are frequently uncovered in the course of dissemination.
I hope the thoughts I have presented on high-quality research that influences policy, on building cumulative knowledge, and on disseminating research are helpful. I will be glad to answer additional questions. Thank you.
References
Consortium for Policy Research in Education (1991). "Putting the Pieces Together: Systemic School Reform." CPRE Policy Briefs (No. RB-06). New Brunswick, NJ: Author.
Gamoran, A., Porter, A.C., Smithson, J., & White, P.A. (1996). "Upgrading High School Math Instruction: Improving Learning Opportunities for Low-achieving, Low-income Youth." Paper prepared for the Consortium for Policy Research in Education, University of Wisconsin-Madison.
Kaestle, C. (1993). "The Awful Reputation of Education Research." Educational Researcher, 22(1), 23-31.
Kingdon, J.W. (1984). Agendas, Alternatives, and Public Policies. Boston: Little, Brown.
Lagemann, E.C. (1997). "Contested Terrain: A History of Education Research in the United States, 1890-1990." Educational Researcher, 26(9), 5-17.
Porter, A.C., Kirst, M.W., Osthoff, E.J., Smithson, J.L., & Schneider, S.A. (1993). Reform Up Close: A Classroom Analysis. Madison, WI: Wisconsin Center for Education Research, Consortium for Policy Research in Education.
Smith, M.S., & O'Day, J. (1991). "Systemic School Reform." In S. Fuhrman & B. Malen (Eds.), The Politics of Curriculum and Testing (pp. 233-267). Bristol, PA: Falmer Press.
Weiss, C.H. (1980). "Knowledge Creep and Decision Accretion." Knowledge: Creation, Diffusion, Utilization, 1(3), 381-404.
Weiss, C.H., & Bucuvalas, M.J. (1980). Social Science Research and Decision-Making. New York: Columbia University Press.