Are you looking for an evaluation model to apply to an educational program? One strong option is Daniel Stufflebeam’s CIPP evaluation model (Fitzpatrick, Sanders & Worthen, 2011; Mertens & Wilson, 2012; Stufflebeam, 2003; Zhang, Zeller, Griffith, Metcalf, Williams, Shea & Misulis, 2011). In this decision-oriented approach, program evaluation is defined as the “systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming” (Patton, 1997, p. 23). The CIPP evaluation model (see Figure 1) is a framework for guiding evaluations of programs, projects, personnel, products, institutions, and evaluation systems (Stufflebeam, 2003).


Figure 1: Components of Stufflebeam’s (2003) CIPP Model.

Designed to assist administrators in making informed decisions, CIPP is a popular evaluation approach in educational settings (Fitzpatrick et al., 2011; Zhang et al., 2011). Developed in the late 1960s, this approach seeks to improve and achieve accountability in educational programming through a “learning-by-doing” approach (Zhang et al., 2011). Its core concepts are context, input, process, and product evaluation, with the intention not to prove, but to improve, the program itself (Stufflebeam, 2003). An evaluation following the CIPP model may include a context, input, process, or product evaluation, or a combination of these elements (Stufflebeam, 2003).

The context evaluation stage of the CIPP model creates the big picture of where both the program and the evaluation fit (Mertens & Wilson, 2012). This stage assists in decision-making related to planning, and enables the evaluator to identify the needs, assets, and resources of a community in order to provide programming that will be beneficial (Fitzpatrick et al., 2011; Mertens & Wilson, 2012). Context evaluation also identifies the political climate that could influence the success of the program (Mertens & Wilson, 2012). To achieve this, the evaluator compiles and assesses background information and interviews program leaders and stakeholders. Key stakeholders in the evaluation are identified, program goals are assessed, and data on the program environment are collected. Data collection can take multiple formats, including both formative and summative measures such as environmental analysis of existing documents, program profiling, case study interviews, and stakeholder interviews (Mertens & Wilson, 2012). Throughout this process, continual dialogue with the client to provide updates is integral.

To complement context evaluation, input evaluation can be completed. In this stage, information is collected regarding the mission, goals, and plan of the program. Its purpose is to assess the program’s strategy, merit, and work plan against the research literature, the responsiveness of the program to client needs, and alternative strategies offered in similar programs (Mertens & Wilson, 2012). The intent of this stage is to choose an appropriate strategy to implement in order to resolve the program problem (Fitzpatrick et al., 2011).

In addition to context and input evaluation, reviewing program quality is a key element of CIPP. Process evaluation investigates the quality of the program’s implementation. In this stage, program activities are monitored, documented, and assessed by the evaluator (Fitzpatrick et al., 2011; Mertens & Wilson, 2012). The primary objectives of this stage are to provide feedback on the extent to which planned activities are carried out, to guide staff on how to modify and improve the program plan, and to assess the degree to which participants can carry out their roles (Stufflebeam, 2003).

The final component of CIPP, product evaluation, assesses the positive and negative effects the program had on its target audience (Mertens & Wilson, 2012), examining both intended and unintended outcomes (Stufflebeam, 2003). Both short-term and long-term outcomes are judged. During this stage, the judgments of stakeholders and relevant experts are analyzed, considering outcomes that affect the group, subgroups, and individuals. Applying a combination of methodological techniques ensures that all outcomes are noted and assists in verifying evaluation findings (Mertens & Wilson, 2012; Stufflebeam, 2003).

This summary is the work of myself and Christine Miller.

References:

Fitzpatrick, J., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). New York: Allyn & Bacon. ISBN: 978-0-205-57935-8

Mertens, D. & Wilson, A. (2012). Program evaluation theory and practice: A comprehensive guide. New York: Guilford Press. EISBN: 9781462503254

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). London: Sage Publications.

Stufflebeam, D. (2003). The CIPP model of evaluation. In T. Kellaghan, D. Stufflebeam & L. Wingate (Eds.), Springer international handbooks of education: International handbook of educational evaluation. Retrieved from http://www.credoreference.com.ezproxy.lib.ucalgary.ca/entry/spredev/the_cipp_model_for_evaluation

Zhang, G., Zeller, N., Griffith, R., Metcalf, D., Williams, J., Shea, C., & Misulis, K. (2011). Using the context, input, process, and product evaluation model (CIPP) as a comprehensive framework to guide the planning, implementation, and assessment of service-learning programs. Journal of Higher Education Outreach and Engagement, 15(4), 57–83.


