7 Cheerleaders, Naysayers, Large and Small Evaluators: Fostering Support and Inclusion
Pages 37-40

The Chapter Skim interface presents what has been algorithmically identified as the most significant single chunk of text from each page of the chapter.


From page 37...
... In addition, Baron said he believes that a technical document would be preferable to a consensus document, which might not contain the needed clarity or specificity. He gave the example of the Common Evidence Guidelines, a joint publication of the Institute of Education Sciences and the National Science Foundation (NSF).
From page 38...
... The specific goal would be to build the number of interventions shown in high-quality randomized experiments, replicated across different studies or sites, that produce sizable impacts on important life outcomes. He gave examples of health research and social policy studies that follow this paradigm, and he asserted that this approach helps nonscientific stakeholders know and accept the value of evaluation studies.
From page 39...
... cautioned the workshop participants that any document that might be created needs to go beyond simply covering impact evaluations, social programs, and experiments in more established programs: it also needs to be applicable to the variety of agencies trying to build evaluation offices. In response to a query from Maynard about the smaller agencies that often may not have a voice in these conversations, Nightingale explained that they are represented in cross-agency evaluation groups that OMB convenes and are actively involved in discussions about funding, strategy, design, and other concepts around evaluation.
From page 40...
... identified three groups that might be opposed to a principles document: those from program agencies who may challenge the notion of rigor and subscribe more to the "I tried it and it works" philosophy; practitioners who may see an investment in evaluation as detracting from direct services; and smaller agencies whose programs or target populations may be underrepresented in a push toward randomized controlled trials. Baron replied that a response to these arguments would be to focus on evaluating components of a program rather than the entire program -- e.g., looking at preschool interventions as opposed to the entire Head Start program.

