As education programs go, it was profoundly ambitious. Indeed, the title said it all: “No Child Left Behind.” Concerned that too many public school children were being “left behind,” Congress in 2001 mandated that states test students annually and rate schools on the basis of the overall results.
Was it successful? That’s still being debated, but it’s safe to say that NCLB, which was replaced by the Every Student Succeeds Act in 2015, had both positive and negative effects. On the plus side, it shined a much-needed spotlight on underserved schools and the critical need to improve them. On the minus side, it created an incentive for schools to “teach to the test.”
Concerned that they and their schools could be stigmatized, some teachers took weeks away from regular instruction to drill children on what would be on the exams. That wasn’t what lawmakers intended, but it shows what can happen when an education evaluation tries to answer questions (“Are the students learning?” “Is the school good?”) that are far too broad.
“I think one of the biggest pitfalls is when you’re asked to evaluate a program, but people already know what they want the results to be, so it makes it very hard to be objective,” said Mona Levine, Ed.D., a faculty member for Georgetown University’s Certificate in Education Program Evaluation. “A second major problem—and this is one of my pet peeves—is when evaluations are too broad, and the evaluators are trying to evaluate too many things, and you don’t get depth.”
The Age of Accountability
In Georgetown’s three-course certificate program, students learn how to avoid these and other pitfalls, such as implicit bias and “survey fatigue,” that can skew their analyses. And, most important, they learn the theory, methods, and applications associated with effective evaluation.
The program is designed both for those who perform evaluations themselves and, equally important, for those who interpret and act on research conducted by a third party.
“In the world today, we’re more focused on assessment and accountability in general,” Levine said. “But accountability should not be the only reason to do an evaluation. The most important purpose, in my book, is for internal reviews and improving what you’re doing.”
One prime example of this process among public schools is the nationally recognized Long Beach (Calif.) Unified School District, whose staff continually review and revise successful instructional programs to meet current student needs. Talk with its teachers and administrators about curricula for any length of time and you’re sure to hear the word “iteration” more than once.
A Critical Skill
This focus on program development and evaluation is fast becoming the norm in education circles. Indeed, most educators cannot even submit grant proposals without including a program evaluation plan, Levine said.
And yet, educators aren’t the only ones who can benefit from this kind of knowledge. The material also applies to those involved in training programs offered by governments, corporations, associations, and nonprofits.
Enrollment in Georgetown’s online certificate program has soared during the pandemic. In addition to high school and college educators, the program has drawn lawyers, college administrators, staff from the International Monetary Fund and World Bank, and job-changers in other fields who want to show prospective employers that they understand and can interpret education program evaluations.
The program comprises three courses, each approached from both theoretical and practical perspectives: program planning; analysis and evaluation; and research methodology and program evaluation design. In the final course, students create their own evaluation plans.
As in many of Georgetown’s certificate programs, students learn from their instructors and from peers who bring expertise from a variety of professions. For example, one recent cohort included a student who works with deaf and blind individuals and had special insight into how the requirements of the Americans with Disabilities Act could affect the design of program evaluations.
“That’s what’s really exciting,” Levine said. “Because the students learn from each other, and I learn from them, because they bring such different experiences.”