This post is about a type of research that’s applicable to all of us – improvement research. Teachers, admins, informal educators, designers and policy makers: keep reading because we can ALL use improvement research to assess changes in practices and get better at what we do.
Even though I’m a researcher, “improvement research” is still new to me. I used to think almost any research could count as improvement research. Don’t we all hope our research helps to improve something – student learning, teacher practices, or the design of learning tools? After attending the Carnegie Foundation Summit on Improvement in Education earlier this month, it’s all become much clearer. Both improvement research and traditional academic research are incredibly important for understanding, supporting, and improving education and learning, but they are not the same.
What’s improvement research?
Improvement research is done to produce and assess changes in practice. Unlike traditional academic research, the goal isn’t to produce and test theories about the relationships between conceptual variables. Instead, it’s about testing hypotheses about how our processes affect outcomes we care about. For example, a teacher might hypothesize that using iPads to play a math game (the process) will help her students understand how to add fractions (the outcome). Improvement research could be done to iteratively refine the use of iPads and the math game to support the learning of fractions. The teacher could share findings with colleagues to improve practices across the school district.
Improvement research is designed to help make changes in practice, which means it’s very practical! It doesn’t have to involve excessive amounts of quantitative data (woohoo!), since you only need “just enough” data from small samples. Data collection can fit into existing routines, such as asking students to write down the answer to the same question before leaving class each day, in order to assess a specific learning goal. Anyone interested in improving education and learning can make small and frequent iterations and use simple measures like this to assess change during practice.
What’s unique and accessible about improvement research is that it doesn’t require special training or large amounts of time or data, so anyone – teachers, administrators, game developers, non-profit organizations – can use it to assess changes in practice.
Several years ago I was a literacy coach in Chicago Public Schools. Part of my job was to build literacy support tools that were embedded in the science curriculum, and to support educators through observations and coaching. We started using these strategies in Environmental Science classes, but eventually scaled to Chemistry, Physics, and Biology courses. I needed to know how the literacy tools were used to support student comprehension of content, and how the design and use of the tools could be made more effective. Based on evidence, the teachers and I made changes on a weekly, if not daily, basis to try to improve the work we were doing.
How did I know what changes to make? How did I know if the changes were effective in improving practices? I focused on asking one question, and I listened. I asked teachers to reflect on the conversations students had during the activities using the literacy tools. I also listened to student conversations and documented the types of questions they were asking. This information was critical for adapting the literacy strategies and how they were used.
We need precise aims and measures to improve our work.
Improvement research involves a continuous cycle of planning, doing, studying, and acting (PDSA).
Modified from Grunow, A. (2014). Measurement for improvement. Improvement Science Basics Workshop. Carnegie Foundation Summit on Education Improvement. March 10, 2014. Carnegie Foundation for the Advancement of Teaching.
At the start of each PDSA cycle, three questions should be addressed:
1. What are we trying to accomplish? Ideally, in improvement research we need a precise aim we’re trying to reach. This includes how much change, by when, and for what/whom. For the literacy strategy example, an aim might have been: By the end of the semester, at least 6 higher-order thinking questions are asked during each reading activity that involves the use of the annotation literacy support tool in the Environmental Science 101 course at Obama High School.
2. How will we know that a change is an improvement? We also need precise measures, which can be quantitative or qualitative, for assessing change. One measure of change in the science classrooms could have been the number of higher-order thinking questions asked during a class period, tallied up on the chalkboard by the teacher or an observer and tracked over the semester. Just answering one question each day can make it clear whether changes are resulting in improvements (or not). One thing that this example reiterated for me is that measuring improvement and changes in processes is a lot easier when the data collection strategies are embedded in work we’re already doing.
3. What change(s) can we make that will result in an improvement? Changes can be made at various stages and to different parts of a system, but in this example they were primarily related to design (e.g. the literacy strategies) or implementation (e.g. how the strategies were used in class).
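Tracking a measure like the daily question tally against a precise aim takes very little tooling. As a rough sketch (the tallies and the helper name `summarize` here are hypothetical, not data from the classrooms described above), a few lines of Python could summarize one PDSA cycle during the “study” step:

```python
from statistics import mean

# Hypothetical tallies of higher-order questions asked during each
# reading activity, recorded once per class day over one PDSA cycle.
daily_tallies = [2, 3, 3, 5, 4, 6, 5, 7]

AIM = 6  # aim: at least 6 higher-order questions per activity


def summarize(tallies, aim):
    """Small progress summary for the 'study' step of a PDSA cycle."""
    return {
        "mean": round(mean(tallies), 1),
        "days_meeting_aim": sum(1 for t in tallies if t >= aim),
        "trend": tallies[-1] - tallies[0],  # crude first-to-last change
    }


print(summarize(daily_tallies, AIM))
```

Even a summary this simple makes the “act” decision concrete: if the trend is flat and few days meet the aim, the change probably isn’t an improvement yet.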
Improvement research is something all of us can do. Both it and traditional research play an important part in improving education and learning, but you don’t have to be a trained researcher or an expert on learning theory to do improvement research – you just need something you want to improve. That means we all play a critical role in research, not just the researchers. To learn more about improvement research and strategies for scaling best practices, check out some of the useful resources from the Carnegie Foundation.
A version of this post was originally published on WorkingExamples.org.