Lifting the Burden of Proof

On January 31, when Time Warner youth media grantees convened in one of the multimedia giant’s glass-walled conference rooms, practitioners pondered one of the most persistent and challenging requests from funders—to provide proof that their programs work.
The conference was moderated by the Education Development Center (EDC), which has been researching how youth media programs evaluate their impact. Youth media practitioners in attendance shared frustrations about funders’ demands for evaluations (few would expect football coaches to provide such proof, one attendee pointed out), as well as practical tips for carrying them out.
Anthony Streit of EDC advised organizations to clearly identify their programs’ intended outcomes before designing evaluations. And just as young reporters can elicit the kind of candid quotes from peers that often elude adult journalists, young people can also conduct insightful peer evaluations.
Directors who had previously worked with EDC to develop program evaluations concurred that the process was “complicated” and “time-consuming,” but ultimately worthwhile. “It’s really interesting when you start looking through this lens, and what it’s telling us about impact,” said one.
Another talked about forming reality-TV-inspired “Truth Booths,” where teens in her video workshop could privately tell the camera what they really thought about the program.
Finding effective systems of evaluation, preferably ones that appeal to funders, is still a trial-and-error process for most of the organizations at the conference. But the broader youth work field has grappled with the same challenge, and practitioners can look to programs, conferences, and publications outside youth media for further ideas about demonstrating program effectiveness. Youth Today, for instance, runs a regular column on program evaluation; the February issue alone features two articles on the subject.
One of these articles covers a recent conference hosted by the Partnership for After-School Education that sounds surprisingly similar to the Time Warner meeting. Participants there debated how to reconcile the need for accountability with the often murky, difficult-to-measure goals of youth work, like helping young people make better decisions. “Develop short- and long-term indicators to show that young people are on the right path,” Jane Quinn said, summarizing that conference’s overriding theme. “But resist the urge to abandon the values that undergird our work.”
An inspiring article from an earlier Youth Today issue profiles the Phoenix Academy of Los Angeles, a residential substance-abuse treatment program for teens, which partners with research teams for program evaluations. The collaborations, according to the article, “have yielded mutually beneficial gold-standard evaluations, boosted Phoenix Academy’s quality improvement plan and garnered good publicity—delivering a lot of evaluation bang for relatively few of the academy’s bucks.”
When I edited a magazine written by teens in foster care, I fantasized about this type of venture. I wanted to enlist a researcher to conduct a longitudinal study comparing a group of writers who participated in the magazine’s summer writing workshop with a group who applied but were randomly not admitted. I imagined it would be kind of like the “Seven Up!” documentaries, which revisited a group of British schoolchildren every seven years. In my study, researchers would check in with the participants every year or so, looking for patterns between the two groups.
While this idea may be a bit far-fetched, the Phoenix Academy collaboration suggests that partnering with researchers, who might even fund program studies, is feasible. And I’ve since learned that at least one youth media group has already accomplished this: Dr. Catherine Sanderson studied the audience of Sex, Etc., concluding in the Journal of Adolescent Research that young people who read the newsletter showed marked increases in responsible sexual attitudes.
It may be a long time before many programs have the means to conduct such rigorous research. For now, as we continue experimenting to find the right balance of surveys, studies, and youth-led evaluations, it’s worth borrowing ideas and inspiration from outside the field as well.