In this hypothetical situation, two years ago, Oklahoma University (OU) and the Veterans Administration (VA) teaching hospitals implemented the Dedicated Education Unit (DEU) model as part of a collaborative partnership. The goal of the program was to ensure that graduates have the nursing skills and competencies to meet the health care needs of an increasingly diverse population in the local and surrounding area. When the program was implemented, evaluation criteria were not established, nor were there clear objectives for establishing DEUs. Problems soon emerged over the level of agreement between nursing instructors and mentors in student clinical evaluations in the DEUs. This section of the hypothetical situation reports on stakeholders, the logic model, appreciative inquiry questions, and method and analysis.
Developing an evaluation that is relevant, effective, and current requires input from key players to ensure the final product is congruent with the overall program goals and outcomes. The dean of the nursing department wanted to compare the level of agreement between nursing instructors and mentors in student clinical evaluations in the DEUs, and the dean at OU is the primary recipient of this evaluation report. Because of the complexity involved in evaluating nursing students' clinical competency, a partnership with the dean of nursing, the VA chief nurse, faculty, mentors, and nursing students was essential to developing an evaluation plan. Keirns (2009) highlighted that stakeholders differ markedly in their priorities and approaches to evaluation, depending on the impact of the problem and on specific organizational strengths and skills. For example, hospital administrators and partnering schools want to know whether a program is cost-effective, whereas existing programs can tap into the evidence base to improve their programs, and emerging programs may seek best practices to guide their evaluation. These different mindsets can create conflict; as such, it behooves evaluators to involve stakeholders early in the planning and decision phase so that requesting resources, such as funding, is easier to justify.
To meet the dean of nursing and the VA chief nurse's desire for a collaborative, participatory, and learning-oriented evaluation, 15 people were personally invited to a one-day workshop to help focus the evaluation and develop the evaluation plan. These individuals were a diverse group of Hispanic, Native American, African American, and Caucasian BSN students, staff nurses, and faculty members. They were selected because diversity contributes to understanding the worldviews of culturally different participants and stakeholders in the evaluation.
Purpose and Logic Model
The purpose of the evaluation is to compare the level of agreement between nursing instructors and mentors in student clinical evaluations within the DEU model of clinical education. The DEUs are visualized as a village working together, contributing talents to raise student nurses (Moscato, Miller, Logsdon, Weinberg, & Chorpenning, 2007). The logic model is preferred for investigating whether the DEU program caused demonstrable effects and for comparing the level of agreement between nursing instructors and mentors in student clinical evaluations in DEUs. The reason for this preference is that the logic model gives a visual snapshot of a program or project, describing the logical linkage among program resources, activities, outputs, audiences, and short-, intermediate-, and long-term outcomes related to a specific problem or situation, the planned activities, and the changes and results one hopes to achieve (McCawley, n.d.). A key insight of the logic model is the importance of measuring outcomes or results; without such measurement, time and money can easily be wasted. Ongoing assessments, reviews, and corrections can produce better program design and a system to strategically monitor, manage, and report program outcomes throughout the development and implementation process (Silverman, Mai, Boulet, & O'Leary, 2009).
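Although the evaluation plan does not prescribe a particular statistic, the level of agreement between paired instructor and mentor ratings could be quantified with an inter-rater agreement coefficient such as Cohen's kappa, which corrects the raw percentage of agreement for agreement expected by chance. The sketch below is illustrative only; the rating categories and student data are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same students.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of students on whom the two raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass (P) / satisfactory (S) / fail (F) ratings for ten students
instructor = ["P", "P", "S", "F", "P", "S", "S", "P", "F", "P"]
mentor     = ["P", "S", "S", "F", "P", "S", "P", "P", "F", "P"]
print(round(cohens_kappa(instructor, mentor), 3))  # prints 0.677
```

A kappa near 1.0 would indicate strong instructor-mentor agreement, while values near 0 would confirm the dissonance the evaluation is designed to investigate; in practice, ratings would come from the clinical evaluation tool rather than the invented lists above.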
Inputs and Outputs, Including Activities/Participation
Inputs include the resources, contributions, and funding required for the program. They will be received from five sources: staff, faculty, funding, facilities, and IT support.
Three types of outputs will result from the evaluation:
- Educational workshops: Presented to mentors, faculty, and undergraduate nursing students. The focus will be to train these stakeholders to use the new online clinical evaluation tool.
- Professional development in-service: Presented to mentors and faculty. The focus will be to identify key elements on clinical competency performance measures that are agreed upon by mentors and faculty.
- Research experiences workshop: Presented to mentors and students from faculty members of the nursing school. The focus is to create a culture of evidence-based practice to enhance effective communication. A portion of this workshop will be dedicated to a brainstorming session which will seek to develop a formalized process of sharing, giving feedback, and creating a safe environment in which transparency is the norm.
The planned activities (the educational workshops, professional development in-service, and research experiences workshop), together with the key stakeholders (undergraduate nursing students, mentors, and faculty), will produce a clearer understanding of the disagreement between nursing instructors and mentors in student clinical evaluations within the DEUs. Moreover, these activities will cost the facility a minimal amount of money, while their impact will save the facility thousands of dollars in training new graduates and transitioning nurses during orientation to the organization. Further, staff nurses will develop into the role of clinical teachers and express confidence in evaluating and monitoring nursing students. In light of the evidence, it is believed that involving stakeholders increases their understanding of the evaluation and their commitment to using its results.
Program Outcomes, Including Short- and Long-Term Goals
Two areas will be impacted by the evaluation:
- The scores from the clinical evaluation tool will be used as data by faculty when preparing end-of-term performance grades.
- Exposure to consistent evaluations of competencies and the identification of appropriate measurement tools for evaluating clinical performance will ensure grading is fair and consistent.
Established short- and long-term evaluations help provide evidence that standards are being met (Iwasiw, Goldenberg, & Andrusyszyn, 2009). Formative evaluations, also called short-term evaluations, take place during the program or learning activity. Summative evaluations, also called long-term evaluations, are conducted at the end of the activity and provide information about learner achievement. This evaluation emphasizes the extent to which objectives and outcomes were met for the purposes of accountability, resource allocation, and assignment of grades. Short- and long-term evaluations must be nurtured continually to ensure that programs accomplish their stated objectives and benefit stakeholders.
References

Garside, J. R., & Nhemachena, J. Z. (2013). A concept analysis of competence and its transition in nursing. Nurse Education Today, 33(5), 541-545. doi:10.1016/j.nedt.2011.12.007
Iwasiw, C. L., Goldenberg, D., & Andrusyszyn, M. (2009). Curriculum development in nursing education (2nd ed.). Sudbury, MA: Jones and Bartlett Publishers.
Keirns, C. C. (2009). Asthma mitigation strategies: Professional, charitable, and community coalitions. American Journal of Preventive Medicine, 37(6), 244-250. doi:10.1016/j.amepre.2009.08.004
McCawley, P. F. (n.d.). The logic model for program planning and evaluation. Retrieved from: http://www.uiweb.uidaho.edu/extension/LogicModel.pdf
Moscato, S. R., Miller, J., Logsdon, K., Weinberg, S., & Chorpenning, L. (2007). Dedicated education unit: An innovative clinical partner education model. Nursing Outlook, 55(1), 31-37. doi:10.1016/j.outlook.2006.11.001
Preskill, H., & Catsambas, T. (2006). Reframing evaluation through appreciative inquiry. Thousand Oaks, CA: SAGE Publications.