10 agencies meet this criterion
Subcriteria
1.1. The agency has public documentation of a chief evaluation officer or, if a non-CFO Act agency, of another position with similar authority to a chief evaluation officer.
1.2. The agency has an evaluation governance structure.
Learn More
DOL’s chief evaluation officer oversees the agency’s Chief Evaluation Office (CEO) and coordinates department-wide evaluations, including interpreting research and evaluation findings and identifying implications for programmatic and policy decisions. The agency has created a governance structure to accomplish key objectives, such as those required by the Evidence Act. Its evidence officials closely coordinate through both regular and ad hoc meetings.
DOL continues to leverage current governance structures. For example, the chief evaluation officer plays a role in forming the annual budget requests of DOL’s agencies, recommending the inclusion of evidence in grant competitions, and providing technical assistance to department leadership to ensure that evidence informs policy design. Also, the chief evaluation officer traditionally participates in quarterly performance meetings with DOL leadership and the Performance Management Center (PMC). The chief evaluation officer reviews agency operating plans, works with agencies and the PMC to coordinate performance targets and measures, and reviews evaluation findings. Quarterly meetings are held with agency leadership and staff as part of the learning agenda process, and additional meetings are held as needed to develop strategies for addressing new priorities or legislative requirements.
As AmeriCorps’ designated evaluation officer, the director of the Office of Research and Evaluation (ORE) coordinates evaluation policy and the use of findings. The agency has a Research and Evaluation Council that meets monthly to assess the agency’s learning agenda and evaluation plan. Members of the Council include the director of ORE, the chief information officer/chief data officer, the chief of staff, the chief of program operations, and the chief operating officer.
The Deputy Assistant Secretary for Planning, Research and Evaluation in the Office for Planning, Research and Evaluation (OPRE) serves as the agency’s chief evaluation officer. This person oversees ACF’s research and evaluation efforts, including dissemination of findings, among other responsibilities. OPRE works with ACF program offices to develop annual research plans, integrating the development of program-specific learning agendas into this process. In addition, the office holds regular and ad hoc meetings with ACF program offices to discuss research and evaluation findings. Since September 2019, the Deputy Assistant Secretary for Planning, Research and Evaluation has served as the primary ACF representative to the U.S. Department of Health and Human Services’ Leadership Council and Evidence and Evaluation Council. The cross-agency councils meet regularly to discuss agency-specific needs and experiences and to collaboratively develop guidance for department-wide action.
In alignment with the ACL Office of Performance and Evaluation (OPE) Strategic Vision, the OPE director and staff coordinate the support, improvement and evaluation of agency programs through implementation of the ACL performance strategy, learning agenda and evaluation plan, and through the National Institute on Disability, Independent Living, and Rehabilitation Research. This structure requires consultation with ACL leadership, management staff and program managers.
SAMHSA has two leaders coordinating evaluation policy and findings: the director of the Center for Behavioral Health Statistics and Quality as the evaluation lead and the director of the Office of Evaluation (OE) as the evaluation officer. The OE is central to SAMHSA’s evaluation governance, overseeing program evaluations in partnership with program centers or offices. OE also works collaboratively with the National Mental Health and Substance Use Policy Laboratory to support SAMHSA’s evaluations by overseeing the identification of performance indicators to monitor SAMHSA programs. In addition, OE developed and maintains an evaluation repository and supports Evidence Act initiatives, including the development of a learning agenda and an agency Capacity Assessment.
Recent notable achievements include an Evaluation of SAMHSA Programs and Policies, as well as Evaluation Plans for FY 2023 and FY 2024. Additionally, SAMHSA established the SAMHSA Evidence and Evaluation Board (SEEB), complete with a charter and confirmed voting members, and updated the Evaluation of SAMHSA Programs and Policies guidance document.
SEEB’s purpose is twofold. It serves as SAMHSA’s principal evaluation and evidence forum for managing its evaluation portfolio and its evaluation and evidence data. It is a strategic asset to support the agency in meeting its mission and agency priorities, including implementation of the Evidence Act. SEEB coordinates activities of the evaluation officer, chief data officer, statistical officer and performance improvement officers (in all centers and offices) and provides a structured environment to pursue alignment with the framework offered by the Evidence Act. SEEB also serves as the mechanism to both generate and disseminate knowledge and best practices, while providing a forum for the agency to reach consensus on issues pertaining to evaluation and evidence.
ED’s Commissioner for the National Center for Education Evaluation and Regional Assistance (NCEE) provides leadership as the agency’s evaluation officer. The Institute of Education Sciences, NCEE’s parent organization, is primarily responsible for education research, evaluation and statistics. ED also has an evaluation governance structure led by the Evidence Leadership Group (ELG), which supports program staff who run evidence-based grant competitions and monitor evidence-based grant projects. The ELG advises ED leadership and staff on how evidence can be used to improve agency programs and provides support to staff in the use of evidence. It is co-chaired by the evaluation officer and the director of grants policy in the Office of Planning, Evaluation and Policy Development. Members meet monthly to ensure ongoing coordination of Evidence Act work.
The department has an evaluation officer who facilitates the development and use of quality data, evidence and rigorous evaluations in decision making. Each bureau also has an evaluation lead.
The evaluation officer and evaluation leads meet regularly regarding Evidence Act deliverables. For example, the group meets to report on the strategic plan and to develop and submit an annual evaluation plan to the White House Office of Management and Budget. In fall 2023, the chief evaluation officer began hosting monthly evaluation brown bag discussions to advance evidence-based policy practices.
DOT has restructured to strengthen performance measurement, program evaluation, and enterprise risk management functions, which are essential to its Infrastructure Investment and Jobs Act mission. This reorganization, aligned with DOT Order 1101.12C, establishes a new Office of Performance, Evaluation, and Enterprise Risk (PEER) within the Office of the Assistant Secretary for Budget and Programs / chief financial officer. A key change is consolidating analysts for these functions into one unit under the director of PEER, who serves as the department’s performance improvement, evaluation, and risk officer, centralizing leadership previously spread across financial and budget roles. This cross-functional oversight ensures that program evidence, including performance measurement, performance analysis, foundational fact-finding, program evaluation, and program project management information, is available to executives when making resource allocation decisions. PEER oversees and coordinates the development of required annual and four-year evidence-building plans and reports on the progress of Evidence Act implementation to the White House Office of Management and Budget and Congress. This includes DOT’s Learning Agenda, Capacity Assessment, Evaluation Framework and Annual Evaluation Plan.
DOT’s Evaluation Community of Practice and the Research Development & Technology (RD&T) Planning Team further support data and research initiatives. The Evaluation Community, with representatives from across DOT, meets bi-monthly to share insights, while the RD&T Planning Team, led by the Office of the Assistant Secretary for Research and Technology, coordinates research across DOT’s operating administrations, enhancing efficiency and preventing duplication of effort. Together, these efforts underscore DOT’s commitment to advancing transportation innovation, optimizing program effectiveness, and delivering cost-effective infrastructure solutions.
MCC prioritizes evaluation through its leadership structure. The MCC Evaluation Management Committee (EMC) oversees decision making, integration and quality control of evaluations and programs. The EMC ensures that evaluations are effectively aligned with program design and implemented in a manner that increases their utility to MCC, in-country and external stakeholders. The committee includes the evaluation officer, chief data officer, and Monitoring and Evaluation representatives, among others. Each evaluation involves 11 to 16 EMC meetings, from reviewing the scope of work to publishing the final evaluation.
USAID has a chief evaluation officer who sits on the agency’s Data Board and meets with the Operations Council and Privacy Council on an ad hoc basis. The chief data officer (CDO), chief evaluation officer, statistical official and others have standing weekly/monthly meetings. The CDO’s team and leadership from the Office of Learning, Evaluation and Research, which manages agency requirements related to performance monitoring, evaluation and organizational learning, also have a standing meeting. The CDO’s team maintains an internal dashboard that is shared with the evaluation officer and statistical official to help track progress against milestones on an ongoing basis.
One important way of encouraging evidence use in the federal government is to show leading examples of where this practice already is happening and how it contributes to positive impacts both for federal agencies and for the people they serve. To do so, agencies must have strong evaluation leadership, with an evaluation officer capable of advising on the design and implementation of evaluations, interpretation of results, and integration of findings into action—both inside and outside of their agencies. USAID demonstrated how an evaluation office can exercise this kind of leadership when it launched its new “Evidence to Action Briefs” in June 2023. (Read more on p. 28 of The Power of Evidence to Drive America’s Progress.)