The Oklahoma University and the Veterans Administration teaching hospitals implemented the Dedicated Education Unit (DEU) model as part of a collaborative partnership. The goal of implementing the DEU was to ensure that graduates have the nursing skills and competencies to meet the health care needs of an increasingly diverse population in the local and surrounding area. Problems emerged over the level of agreement between nursing instructors and mentors in student clinical evaluations in the DEUs. It became apparent that when the program was implemented, no evaluation criteria had been established between instructors and mentors for how students’ clinical competencies would be evaluated. Because of this realization, it was necessary to examine what approach is needed for instructors and mentors to be cohesive in evaluating nursing students’ clinical competencies.

Evaluation Data Collection Methods and Analysis

Qualitative data will be obtained through focus group interviews with students, mentors, and faculty, asking open-ended questions and using appreciative inquiry as a data collection technique. The focus group interviews will be recorded and transcribed verbatim, and the texts will be reviewed immediately after each session. Quantitative data will consist of an online survey emailed to mentors, instructors, and students who were involved in clinical practice or who provided evaluations of students’ clinical competencies within the DEU. The online survey will consist of a 5-point Likert scale with several open-ended questions framed around appreciative inquiry. Data analysis will follow an inductive process of thematic analysis in which data will be read, reread, coded, and categorized into themes. This allows for an in-depth, direct examination of the data and consideration of the different meanings of participants’ experiences, from which emerging patterns and themes can be generated (Naude et al. 2014). The quantitative data from the surveys will be exported from Survey Monkey into an SPSS data file. Descriptive statistics will be computed for the survey, and summary reports will be given to the primary stakeholder (Preskill and Boyle 2008).
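The descriptive-statistics step for the 5-point Likert items could be sketched as follows. This is a minimal illustration only: the item wordings and response values are hypothetical, not actual survey data, and the real analysis would be run in SPSS as described above.

```python
from statistics import mean, median, mode, stdev

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# for two illustrative survey items; not data from the actual evaluation.
responses = {
    "Evaluation criteria were communicated clearly": [4, 5, 3, 4, 2, 5, 4],
    "Mentors were involved in choosing evaluation strategies": [2, 3, 1, 2, 3, 2, 4],
}

# Compute the usual descriptive statistics reported for Likert items.
for item, scores in responses.items():
    print(f"{item}:")
    print(f"  n={len(scores)}, mean={mean(scores):.2f}, "
          f"median={median(scores)}, mode={mode(scores)}, sd={stdev(scores):.2f}")
```

A summary table of these per-item statistics is the kind of output that would feed the reports given to the primary stakeholder.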

The evaluation was conducted over a 4-week period. Professional standards were applied to ensure that the highest standard of evaluation practice was maintained throughout, following the integrated guide of the program evaluation standards (Yarbrough et al. 2011), including the accuracy standards, and the trustworthiness principles of credibility, transferability, dependability, confidentiality, and confirmability (Cohen and Crabtree 2006). These standards were applied from the introductory meetings through the presentation of recommendations to all stakeholders with an interest in the evaluation and its findings.

Key Findings

  • Instructors reported that they did not use the results of student assessments to improve their evaluation of student clinical performance.
  • Mentors reported that they did not use the results of student assessments to improve their evaluation of student clinical performance.
  • Instructors reported that they involved mentors in the decision-making process for which evaluation strategies would be used to determine student achievement of clinical outcomes.
  • Mentors reported that they did not involve instructors in their decisions about which evaluation strategies they would use to determine student achievement of clinical outcomes.
  • Instructors reported that they did not communicate to mentors how students’ clinical competencies would be evaluated.
  • Mentors reported that they communicated to instructors their evaluations of students’ clinical performance of skills during their shifts.

Conclusion

Upon review of the evaluation findings, instructors and mentors came to a clear understanding of why there had been a discrepancy in evaluating students’ clinical competencies within the DEUs. The instructors and mentors agreed that whatever method is chosen to assess and evaluate students’ clinical competencies should be discussed and agreed upon. They also agreed that evaluation criteria need to be understandable, explicit, and transparent to everyone directly involved with students, including the students themselves, during clinical practice. Finally, instructors and mentors need to agree on specific actions to be taken as a consequence of the results of assessments or evaluations.

Recommendations

  • Whatever method is chosen to assess and evaluate students’ clinical competencies should be discussed and agreed upon by clinical instructors and mentors to ensure that the results of evaluating students’ clinical competencies are consistent.
  • Evaluation criteria need to be understandable, explicit, and transparent to instructors, mentors, and students. Students need to be able to tell what is expected of them in each form of assessment or evaluation they encounter.
  • Instructors and mentors need to agree upon specific actions to be taken as a consequence of the results of assessments or evaluations.

References

Cohen, D. and B. Crabtree (2006). “Qualitative research guidelines project.” from http://www.qualres.org/HomeLinc-3684.html.

Naude, L., et al. (2014). “‘Learning to like learning’: An appreciative inquiry into emotions in education.” Social Psychology of Education 17(1): 211-228.

Preskill, H. and S. Boyle (2008). “Building evaluation capacity research study: Executive summary.” from http://www.lpfch.org/programs/preteens/ECB_researchstudy.pdf.

Yarbrough, D. B., et al. (2011). The program evaluation standards: A guide for evaluators and evaluation users. Thousand Oaks, CA, SAGE.
