In my role as an Evaluation Coordinator, I’m constantly thinking of the best ways to evaluate training programs. At the forefront of my mind, evaluation is not just another box to check in the training cycle. As such, it isn’t something I think about at the end of the design. Instead, I like to consider it right after I know what the learning or performance objectives are. I’ve been learning about scenario-based e-learning this semester. Consequently, pondering ways to evaluate SBeL has me comparing it with how other types of learning are evaluated.
Outcomes can differ greatly from one training course to another. For example, drunk driving prevention training for truck drivers could reduce accident rates by 75%, while instructor-led training on data entry could reduce errors by 90%.
These results may not create the same kind of impact, but without evaluation it is difficult to determine the training’s effectiveness.
You Need a Plan
As with all training, stakeholders and training managers want to know what is working, so course designers need to gather data to answer this question.
The answer to the question lies in creating an evaluation plan or strategy.
When people think of evaluation, they often think about measuring the learner’s results in the e-learning module or course. While this is helpful, it doesn’t tell stakeholders how well the learning will transfer to workplace situations or if it will result in dramatic outcomes like a 75% reduction in accidents.
Not that all training needs to resolve life and death type outcomes, but stakeholders need to see results that justify the training’s value.
A good evaluation plan, however, must consider both learners and stakeholders.
Embracing Learners in Your Evaluation Plan
Learners need to know if they’ve met the benchmark or achieved the passing grade required for the training. It is just as important that the instruction provides a great learning experience for the learners. To embrace learners in your plan:
Evaluate Learning Effectiveness
Evaluating learning effectiveness can tell course designers if the design meets the learning objectives of the course. To evaluate learning effectiveness:
Design Pre- and Post-Assessments
When possible, provide a pre-test for the learners. This demonstrates consideration of the learner’s time. If a learner does well on a pre-test, they might not need to go through the entire course. Pre-assessments can also provide data when compared against the results of post-assessments.
When designing tests, ensure that they are relevant to the material and provide ways for learners to demonstrate mastery of the learning objectives. Post-assessments could include the challenges a learner must complete to determine competency.
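One simple way to use pre- and post-assessment data is to compare average scores before and after the course. The sketch below is a minimal, hypothetical illustration; the function name and sample scores are my own invention, not part of any standard toolkit:

```python
# Minimal sketch: comparing pre- and post-assessment scores to
# estimate how much learners gained from a course.

def average_gain(pre_scores, post_scores):
    """Return the average point gain from pre-test to post-test."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical percent scores for five learners.
pre = [40, 55, 60, 35, 50]
post = [80, 85, 90, 70, 75]

print(average_gain(pre, post))  # prints 32.0 (percentage points)
```

A large average gain suggests the instruction is working; a small one may point to content or design issues worth investigating before the course is fully rolled out.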
Include Knowledge Checks
Some SBeL courses only provide assessments in the form of challenges at the end of the course. Instead, provide regular knowledge checks at different stages. Providing knowledge checks keeps learners abreast of their performance before the course ends, which can increase engagement. This is especially helpful if the course is saturated with information relevant to the challenges.
Knowledge checks can take the form of multiple-choice questions, drag and drop questions, drop-down menu questions, and hot spot questions.
Evaluate Learner Motivation
Course designers can learn a lot about a course from the learners.
Learners’ feedback provides insight into their satisfaction levels. It also provides information for course designers to improve the course with updates in the future. Learner motivation can be evaluated using surveys.
Surveys
Smile sheets and Likert scales won’t necessarily improve a training’s effectiveness. Still, they allow learners to influence the direction of the learning process as participants rather than passive recipients. Furthermore, when learners evaluate their training, they see that their input matters. This can increase their engagement with future learning.
Other indicators of learner motivation include completion rates and, when courses are optional, enrollment numbers. A learner’s satisfaction is essential, but it is only one aspect of the evaluation process.
Evaluate Learning Efficiency
Evaluating learning efficiency might seem like a waste of time, but it demonstrates course designers’ consideration for learners’ time and effort.
If a course is too long, learners might disengage and not complete the training with fidelity. Too short, and they might not achieve the learning outcomes.
If learners are not completing their courses, designers should want to know why.
Designers can compare average completion times with those of other training sessions. They can also analyze whether these times align with the time needed to achieve competency.
Learning efficiency is also an area that will interest stakeholders as they like to consider how much downtime learners need to complete training.
Determining learning efficiency should begin during the course’s pilot testing and continue through the implementation phase.
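As a rough illustration of this kind of analysis, a designer might compare average completion times against a target duration. The sketch below uses made-up numbers and an arbitrary tolerance band; it is not a standard metric, just one way to flag a course whose length may need review:

```python
# Hypothetical sketch: flag a course whose average completion time
# falls outside a tolerance band around the target duration.

def completion_summary(times_minutes, target_minutes, tolerance=0.25):
    """Return (average time, status) for a set of completion times."""
    avg = sum(times_minutes) / len(times_minutes)
    low = target_minutes * (1 - tolerance)
    high = target_minutes * (1 + tolerance)
    status = "within range" if low <= avg <= high else "review course length"
    return avg, status

times = [50, 65, 70, 55, 60]  # made-up learner completion times
print(completion_summary(times, target_minutes=60))  # prints (60.0, 'within range')
```

If the average falls well above the target, learners may be disengaging or struggling; well below, and the course may be too thin to achieve its outcomes.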
Embracing Stakeholders in Your Evaluation Plan
At the end of the training, stakeholders want to know one major thing.
Has the training resolved the performance problem? In other words, has the training resulted in learners doing what they couldn’t do before the training?
If the answer to this question isn’t a resounding yes, no matter the results of the tests and surveys or how beautifully designed the scenario-based e-learning course is, stakeholders will deem it a waste of time and resources.
Therefore, course designers need a strategy in place for evaluating the success of the training through the lens of stakeholders.
To evaluate a scenario-based learning course from the perspective of stakeholders:
Evaluate workplace actions
Course designers rarely have control over how evaluations are carried out on the job. It is, however, crucial to consider the organization’s performance metrics for measuring job competency.
Organizations evaluate training effectiveness by how well their employees can apply the knowledge and skills learned to workplace situations and challenges.
In the example of the truck drivers, a 75% reduction in accident rates could tell stakeholders a training was effective.
Likewise, an increase in sales or improved customer satisfaction ratings could prove how valuable an SBeL course was for the organization’s sales reps and customer care representatives.
The way learners’ performance is measured in the workplace or real world should inform the design of SBeL courses, thus improving their effectiveness.
Evaluate the Return on Investment (ROI)
Evaluating ROI isn’t usually within the purview of course designers.
Still, IDs who want continued buy-in from stakeholders, as well as potential future opportunities, should create a follow-up plan. Even if a training successfully delivers the desired workplace outcomes, check to see if the organization realized a return on its investment.
Ideally, monetary gains should surpass the cost of producing and delivering training. Ruth Clark, in Scenario-Based Learning: Evidence-Based Guidelines for Online Workforce Learning, suggests a way to calculate ROI.
ROI = (net program benefits ÷ program costs) × 100
It is easier to perform ROI evaluations when the training focuses on skills linked to an organization’s bottom line or to metrics against which evaluation results can be compared. For instance, a car shop can calculate the increase in profits to see if there is an ROI after its employees take an SBeL course. It isn’t as clear-cut for some other training outcomes. Regardless, IDs should be prepared with ideas if an organization wants to measure its ROI.
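To make Clark’s formula concrete, here is a worked sketch using entirely hypothetical dollar figures:

```python
# Worked example of ROI = net program benefits / program costs * 100,
# using hypothetical figures.

def roi_percent(net_benefits, program_costs):
    """Return ROI as a percentage."""
    return net_benefits / program_costs * 100

benefits = 50_000   # hypothetical monetary benefits attributed to training
costs = 20_000      # hypothetical cost of producing and delivering it
net = benefits - costs  # net program benefits: 30,000

print(roi_percent(net, costs))  # prints 150.0 -> a 150% return
```

A positive result means monetary gains surpassed the program’s cost; a negative one means the training cost more than it returned.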
Evaluation plans aren’t set in stone. They will likely evolve throughout the design and implementation of the course, so be flexible. Know that any plan is better than no plan. Evaluation plans provide designers, stakeholders, and learners the best return on the time and money spent. They are a vital step in reaching desired training outcomes.
To read my blog post on how to influence outcomes with performance objectives follow this link.
https://tiptoplearner.com/how-to-influence-outcomes-with-performance-objectives/