3 The Standard Bearers of Federal Evaluation
Pages 15-20

Each excerpt below is the single most significant passage, identified algorithmically, from the corresponding page of the chapter.

From page 15...
... Goldstein reminded the workshop participants that evidence is just one component of decision making, and evaluation is but one form of evidence, alongside others such as descriptive research studies, performance measures, financial and cost data, survey statistics, and program administrative data. While the ACF policy focuses primarily on evaluation, many of its principles also apply to the development and use of other types of evidence.
From page 16...
... Rigor requires appropriate resources, a workforce with suitable training and experience, a competitive acquisition process, and, alongside impact studies, robust implementation components that enable evaluators to identify why a program did or did not work, or which elements were associated with greater impacts. Relevance means setting evaluation priorities that weigh many factors, including legislative requirements, the originating agency's interests, and those of other stakeholders: state and local grantees, tribes, advocates, and researchers.
From page 17...
... The difference for IES, Neild explained, is that it also incorporates formal peer review of its evaluation reports to promote rigor and scientific integrity.

2. See https://clear.dol.gov [May 2017].
From page 18...
... She said that external peer review helps to mitigate these risks and increase public trust in the agency's findings. Neild said she also believes that peer review incentivizes high-quality work by staff because they know that publication is not a given: it has to be earned by producing work that meets rigorous and objective standards.
From page 19...
... To manage the cost and scope of evaluations, Molyneaux explained, the policy is structured to promote developing the evaluation design in tandem with the program design. The operations staff who work with MCC's foreign counterparts to create, implement, and maintain projects serve in country teams alongside MCC's evaluation staff but report administratively to a separate department.
From page 20...
... Some of MCC's other early problems stemmed from a lack of integration between evaluation planning and program design: in one instance, a farmer training program was executed and evaluated even though procurement issues had delayed completion of the irrigation system the farmers were trained to use until several years after the training and the evaluation were finished. Molyneaux said that MCC is forthright with its evaluation results, even when they are disappointing, and that the ensuing open dialogue has helped the agency improve evaluation and program design.

