Program Evaluation in Adult Learning Environments

A Little About this Project

Although this case study focused on my role and the product my company was selling, the key components of evaluation remain the same. As you read through this article, consider how you currently evaluate success in your role and in your company. How does it compare to what is discussed below? How robust is the evaluation process at your company? Is there any room for improvement? Ready? Let’s start!

Evaluation – Defining Anticipated Outcomes

Evaluation is possibly the most critical aspect of running a program. Because not all problems can be adequately addressed while a program is running, it is imperative that an evaluation process be built into the creation of any program, regardless of its length. This allows you to gather participant data that goes beyond individual success and instead captures other, equally important components of what makes a program successful. Soliciting this information during or after a course allows program stakeholders to revisit their initial assumptions about what the course or program should accomplish and what it takes to reach those goals. By committing to the creation and implementation of an evaluation plan, you are committing to the continued success of your program.

Philosophy of Evaluation

Whereas assessments focus on the role of instructors in ensuring student success, evaluation should be thought of as gauging program health. In-class assessments let us know whether participants are grasping content and, if not, what we can do to adjust our approach. Other aspects, such as whether the program meets participant expectations, are not captured through project rubrics or other classroom assessment techniques. This is where evaluation comes in.

Once a program run is complete, it’s important to review the program in its entirety to determine whether it has served its purpose, whether there are areas of improvement that can be addressed prior to the start of the next run, and whether the program, in its current iteration, provides value to stakeholders. To do this, evaluations need to be:

  • Objective: You, the instructional designer, facilitator or other stakeholder, have invested a substantial amount of time in developing this program. It may be hard to hear that it’s not working the way it’s supposed to. It’s important that fear not keep you from asking those hard, direct and neutral (non-leading) questions.
  • Actionable: Regardless of how it is collected, evaluation data must be actionable. At times, we may fall into the trap of asking participants completely subjective questions that have no clear action items associated with them. Conversely, asking objective questions that allow only a yes or no may not give you any useful data about suggested improvements.
  • Targeted: Know what you’re evaluating ahead of time and why. While you may be tempted to ask participants how they felt about the location of the school, unless you have the ability to change that, it may be a waste of time – yours and the participant’s – to include it. Questions related to program goals, instructional quality, course content and overall satisfaction all fall under the category of things your stakeholders will want to know. A rule of thumb is to determine key health metrics at the outset of the program and ask questions that speak to those items.

Which aspects of the program are being evaluated and how?

As discussed, what you intend to evaluate should be established prior to the start of the program. This is especially crucial if the program is a pilot, as it is likely that there will be areas requiring immediate refinement. For this course, four core areas were identified. To determine whether the course was effective, we will evaluate whether participants:

  • Are learning what we expect them to learn
  • Believe the content is delivered in a way that is accessible and easy to understand
  • Can make clear connections between the content they are learning and its relevance to their everyday lives, and
  • Would recommend this program, including the setup, content and instructors, in the future

Feedback will be collected via exit tickets (surveys) administered by the instructor at the end of each session. These surveys will consist of four questions. The exit tickets are meant to provide section-by-section snapshots evaluating the immediate impact of the course. At the completion of the course, a more comprehensive end-of-course survey will be administered.

Sample Evaluation #1 – Exit Tickets

Purpose: Evaluate the quality of content and instruction at the end of each session
Deliverable: Survey (hosted in Google Docs) – the same form is used throughout the course, differentiated by the submission date.
Questions:
  1. In one sentence, summarize the most important concept you learned today (Open text)
  2. The material covered during this session is immediately applicable to my job (Likert scale)
  3. The material covered during this session was presented in a clear and approachable manner (Likert scale)
  4. Is there anything else you’d like to add? (Open text)

Sample Evaluation #2 – End of Course Survey

Purpose: Evaluate the quality of content and instruction of the overall program
Deliverable: Survey (hosted in Google Docs)
Questions:

Overall Experience

  1. Would you recommend this course to a friend or colleague? (NPS – see the scoring note after this survey)
  2. What is the most important reason you gave us that score?

Content

  1. The material covered throughout this course met my expectations (Likert Scale)
  2. The material covered throughout this course was relevant to my job (Likert Scale)
  3. The material covered throughout this course is immediately applicable (Likert Scale)
  4. I understood what the requirements were for all of my assignments (Likert Scale)
  5. Is there anything you’d like us to know about the content? (Open text)

Delivery

  1. The instructor presented in a way that made the material easy to understand (Likert Scale)
  2. The instructor provided meaningful feedback (Likert Scale)
  3. I had the tools and resources I needed to participate in class and complete assignments (Likert Scale)
  4. Is there anything you’d like us to know about the delivery? (Open text)
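
A brief note on the NPS item above: the Net Promoter Score is conventionally calculated from a 0–10 recommendation question, where respondents answering 9–10 count as promoters and 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. Below is a minimal sketch of that calculation in Python – the function name and the sample scores are illustrative only, not part of the actual survey tooling.

    def net_promoter_score(scores):
        # Promoters score 9-10, detractors 0-6; passives (7-8)
        # count toward the total but toward neither group.
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    # Example with hypothetical responses from one cohort:
    print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0

Scores range from -100 to 100, and the trend across cohorts is generally more informative than any single value.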


What happens to the data?

During the Course

During the course, data collected from the exit tickets will be used to evaluate whether the content is relevant and immediately applicable for participants. It will also be used to evaluate delivery techniques and to allow program facilitators to provide supplemental materials in areas where the content is lacking. In the event that the issue lies with instruction, alternative methods can be explored, including but not limited to the introduction of blended or self-directed learning tools, additional AV equipment or even a change of venue. The purpose of the end-of-session and end-of-course evaluations is to cover as many modifiable external components as possible and to establish a plan of action to be implemented during the current cohort or for the next.
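
To make that during-course loop concrete, here is a minimal sketch of how per-session exit-ticket scores might be reviewed between sessions, assuming the Likert responses are exported as (session, question, score) rows – the sample rows and the 4.0 flag threshold are assumptions for illustration, not values from the actual program.

    from collections import defaultdict

    # Hypothetical exit-ticket export: (session, Likert question, score on a 1-5 scale)
    responses = [
        ("Session 1", "applicable to my job", 5),
        ("Session 1", "clear and approachable", 4),
        ("Session 2", "applicable to my job", 3),
        ("Session 2", "clear and approachable", 2),
    ]

    scores_by_session = defaultdict(list)
    for session, _question, score in responses:
        scores_by_session[session].append(score)

    # Flag any session whose average score falls below an agreed threshold
    THRESHOLD = 4.0
    for session, scores in sorted(scores_by_session.items()):
        average = sum(scores) / len(scores)
        status = "needs review" if average < THRESHOLD else "on track"
        print(f"{session}: average {average:.1f} ({status})")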

After the Course

All data will be aggregated in an Excel file and used to create a dashboard that identifies trends, weaknesses and strengths of the program. The exit ticket and end-of-course survey responses will be coded and categorized into Content, Delivery, Tools and Environment, and then into stages based on the length of time each change will take to implement. Quantitative and qualitative data will be used to drive decisions around revising content and/or delivery methods and around whether any additional financial investments will need to be made for tools or program development. Finally, the data collected will inform what kind of training, if any, would benefit the instructor, based on feedback around delivery. A timeline will be established for all improvements, and the changes will be communicated to stakeholders as well as to future and former participants.
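
As a rough sketch of the coding step described above, each piece of feedback could be tagged with one of the four categories and an implementation stage, then tallied to show where feedback clusters – the record layout and the three stages below are assumptions for illustration.

    from collections import Counter

    # Hypothetical coded feedback: (category, implementation stage).
    # Categories come from the plan above; the three stages are assumed.
    coded_feedback = [
        ("Content", "short-term"),
        ("Delivery", "short-term"),
        ("Content", "medium-term"),
        ("Tools", "long-term"),
        ("Environment", "medium-term"),
        ("Content", "short-term"),
    ]

    # Tally feedback by category and by stage to feed the dashboard views
    by_category = Counter(category for category, _stage in coded_feedback)
    by_stage = Counter(stage for _category, stage in coded_feedback)
    print(by_category.most_common())  # [('Content', 3), ('Delivery', 1), ...]
    print(by_stage.most_common())     # [('short-term', 3), ('medium-term', 2), ...]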

