Wandersman Center

Accountability Science©

12/12/2025

This blog post is an initial draft, developed by Abraham Wandersman and Corinne Graffunder, describing accountability science.
Definition of Accountability Science
 
We propose the following as a definition of accountability science:
[Image: proposed definition of accountability science]
_____________________________________________________________________________________________________

Criteria to Evaluate the Accountability of Past Initiatives

 
These criteria focus on what happened, why it happened, and what was learned.

1. Clarity of goals and expectations:
  • Were goals, success metrics, and roles clearly defined at the outset?
  • Did stakeholders share a common understanding of what “success” meant?
2. Evidence base and rationale:
  • Was the initiative grounded in evidence or a clear theory of change?
  • Did the chosen approach logically connect to expected outcomes?
3. Implementation quality and fidelity:
  • Were planned activities carried out as intended?
  • Where deviations occurred, were they documented and explained?
4. Fit and feasibility in the real-world context:
  • Did the initiative fit the organizational, cultural, and environmental context?
  • How well were resource constraints managed?
5. Capacity to implement:
  • Did staff, leadership, infrastructure, and systems support effective implementation?
  • Were gaps in capacity identified and addressed?
6. Outcomes and impact:
  • To what extent were short-, mid-, and long-term outcomes achieved?
  • Were unintended outcomes (positive or negative) identified?
7. Efficiency and use of resources:
  • How well were time, energy, and money used relative to results?
  • Were there more efficient alternatives that emerged through experience?
8. Transparency and data use:
  • Was monitoring data collected, shared, and used appropriately?
  • Were decisions documented in a way that allows scrutiny and learning?
9. Corrective action and continuous quality improvement (CQI):
  • When problems arose, were timely adjustments made?
  • Was there a mechanism for learning and improvement?
10. Sustainability and institutionalization:
  • Were successful practices maintained, scaled, or replicated?
  • Did the initiative leave lasting capacity or systems improvements?
______________________________________________________________________________________________________

Criteria to Evaluate the Accountability of Future Initiatives


These criteria focus on planning quality, strategic alignment, and likelihood of success before implementation begins.

1. Defined needs and clear goals:
  • Are needs well-documented and supported by data?
  • Are goals specific, measurable, and aligned with organizational priorities?
2. Evidence-based strategy selection:
  • Is the proposed approach grounded in research, best practices, or prior success?
  • Is there a clear rationale connecting the chosen strategy to desired outcomes?
3. Fit with context and stakeholders:
  • Is the strategy culturally, politically, and operationally appropriate?
  • Do stakeholders support the plan?
4. Capacity readiness:
  • Does the organization have (or plan to build) the skills, staffing, systems, leadership, and partnerships needed?
  • Are resource constraints recognized and addressed?
5. Strategic use of resources:
  • Is the plan realistic, given time, energy, and funding?
  • Does it prioritize high-leverage actions?
6. Implementation planning:
  • Are roles, timelines, milestones, and responsibilities clearly defined?
  • Are there contingencies to manage risks?
7. Monitoring and evaluation plan:
  • Is there a clear plan to track progress and outcomes?
  • Are data collection methods feasible and meaningful?
8. Accountability structures:
  • Are decision-making processes explicit?
  • Are there mechanisms for transparency and tracking commitments?
9. CQI and adaptation mechanisms:
  • Is there a built-in process for reviewing data and iterating during implementation?
  • Does the plan define how adjustments will be made?
10. Sustainability planning:
  • Does the plan consider long-term viability beyond initial funding?
  • Are there strategies for building durable capacity or systems?
______________________________________________________________________________________________________

Citations for Criteria to Evaluate the Accountability of Past Initiatives

 
These criteria map most closely onto program evaluation, implementation evaluation, CQI, and accountability frameworks:

  • Carroll, C., et al. (2007). A conceptual framework for implementation fidelity.
  • CDC (1999). Evaluation framework – standards for accuracy, propriety, and utility.
  • CDC (1999). Framework for program evaluation in public health.
  • Chambers, D. A., et al. (2013). The dynamic sustainability framework.
  • Damschroder, L. J., et al. (2009). The consolidated framework for implementation research (CFIR).
  • Deming, W. E. (1986). Out of the crisis (CQI, PDSA cycles).
  • Durlak, J. A., & DuPre, E. (2008). Implementation matters.
  • Fixsen, D. L., et al. (2005). Implementation research: A synthesis of the literature.
  • Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory.
  • GAO (2015). Standards for internal control in the federal government (Green book).
  • Langley, G. J., et al. (2009). The improvement guide.
  • Patton, M. Q. (2011). Developmental evaluation.
  • Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach.
  • Scheirer, M. A. (2013). Linking sustainability to program planning and evaluation.
  • Wandersman, A., et al. (2000). Getting To Outcomes.
  • Wholey, J. S. (1983). Evaluation and effective public management.
______________________________________________________________________________________________________

Citations for Criteria to Evaluate the Accountability of Future Initiatives

 
These criteria draw from planning, evidence-based decision-making, implementation planning, and organizational readiness literature—much of which underpins GTO:

  • Behn, R. D. (2001). Rethinking democratic accountability.
  • Bryson, J. M. (2018). Strategic planning for public and nonprofit organizations.
  • CDC (1999). Evaluation framework.
  • Chambers et al. (2013). Dynamic sustainability framework.
  • Damschroder et al. (2009). CFIR domains of inner/outer setting and intervention–recipient fit.
  • Deming (1986). PDSA.
  • Fixsen et al. (2005). Implementation stages and drivers.
  • Funnell & Rogers (2011). Theory of change for planning.
  • GAO (2015). Internal controls and accountability systems.
  • GAO (2015). Resource stewardship and risk assessment.
  • Mintzberg, H. (1994). The rise and fall of strategic planning.
  • Patton (2011). Evaluation for adaptive management.
  • Rossi, Lipsey, & Freeman (2004). Evaluation planning principles.
  • Sackett, D. L., et al. (1996). Evidence-based medicine.
  • Scheirer (2013). Sustainability integrated into planning and design.
  • Wandersman et al. (2000). Getting To Outcomes.
  • Weiner, B. J. (2009). Organizational readiness for change.
 _____________________________________________________________________________________________________

Definitions Tailored by Sector


  1. For funders (philanthropy, foundations, and institutional investors): Accountability science is an interdisciplinary field that provides funders with systematic, evidence-informed approaches to assess the impact and lessons of past investments and to strategically design and support future ones—helping funders fulfill their responsibility to steward limited resources wisely, operate transparently, and advance meaningful, measurable results.
  2. For government agencies: Accountability science is an interdisciplinary field that equips government agencies with systematic, evidence-informed methods to assess the outcomes and drivers of past and current programs and policies and to strategically design and implement future initiatives—supporting agencies in their responsibility to use public resources efficiently and effectively to improve well-being.
  3. For nonprofit organizations: Accountability science is an interdisciplinary field that helps nonprofit organizations use systematic, evidence-informed approaches to learn from past work and to strategically design and improve future initiatives—strengthening their responsibility to use limited resources wisely and to demonstrate meaningful, mission-aligned progress for the individuals and communities they serve.
  4. For corporate settings (business, industry, financial services, and technology): Accountability science is an interdisciplinary field that applies systematic, evidence-informed approaches to evaluate past performance and to strategically plan and improve future initiatives—enabling organizations to fulfill their responsibility for transparent decision-making, efficient resource use, sound risk management, and delivering better results for customers, employees, and shareholders.
_____________________________________________________________________________________________________

Accountability Science Model: Responsibility → Evidence → Action → Results
 