The Path to Effective Assessments in Corporate Training
At heart, learning is action and reflection, and instruction is designed action and guided reflection. As instructional designers, we should be creating learning experiences that present a series of problems, supplemented with resources to guide performance and feedback to refine understanding. Such problem-solving, when evaluated, is inherently assessment (whether or not it is scaffolded with feedback, though why would you ever miss an opportunity for learning?). What matters is the nature of the problems and the specifics of the feedback.
Do You Provide Practice as Part of Your Assessments?
With existing tools, it is easy to develop knowledge questions. Such questions support accurately defining terms and manipulating abstractions. Yet knowledge questions can lead to what cognitive science calls ‘inert knowledge’: you learn it, you can even pass a test on it, but when you go out into the real world where it is relevant, it doesn’t even get activated! What’s needed is knowledge application.
In more detail, what learners need are contexts where the problems they face resemble the ones they’ll face after learning. Concepts are the tools they should apply to those contextualized problems to develop solutions. Examples can show how concepts are applied to contexts to solve problems to help model the desired behavior (and thinking!).
To put it another way, what learners need is to be put in a position to make decisions like the ones they’ll have to make after learning. Building assessments that explicitly provide a situation and require a decision is likely what builds the necessary skills and equips your learners.
Some knowledge does need to be known ‘cold’, that is, automated, such as when reactions or terminology must be memorized, and we can assess that as well. But what will most benefit organizations is the ability to make better decisions and solve important problems, not the ability to recite rote facts. So when memorization is genuinely required, it should be assessed in service to the ability to perform (and ideally assessed that way), not in lieu of it.
And such situations can be developed in existing tools, such as multiple-choice or matching tasks, but it takes a deeper understanding. While branching scenarios and simulations are ideal, and may be justified, even ordinary assessment items written as ‘mini-scenarios’ can provide meaningful practice. And that’s going to be better for your outcomes.
An important detail is the alternatives to the right answer. Too often, we see alternatives that are so silly or so obviously wrong that they present no real challenge to the learner. This is a waste of learner time and stakeholder money; no learning comes from such a response. Instead, the alternatives to the right answer should be reliable ways learners misunderstand. Misconceptions are likely in any situation complex enough to require instruction in the first place. Mistakes don’t tend to be random; instead, they come from systematic misapplications of existing models. And these should be addressed.
Consequently, alternatives to the right answer should come from either (or both) of a) the cognitive ways learners can misinterpret the material, and b) the typical problems seen even after instruction. You want a chance to catch these mistakes before they count! Thus, wrong answers should be challenging. You may need to adjust the challenge to the ability of the learners, but over time the complexity should grow to mimic the complexity learners will face after the learning. The final assessment should be the ability to perform the task, period.
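As a rough illustration of the idea, here is a minimal sketch (in Python, purely hypothetical; the scenario text, field names, and structure are invented for illustration, not drawn from any particular authoring tool) of a mini-scenario item in which every wrong answer is tied to a specific, plausible misconception rather than being filler:

```python
# Hypothetical sketch: a mini-scenario item whose distractors each map to a
# named misconception, so every wrong answer catches a real misunderstanding.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alternative:
    choice_id: str
    text: str                        # what the learner sees
    misconception: Optional[str]     # the misunderstanding this choice catches; None for the right answer

item = {
    "scenario": "A customer reports an intermittent fault that only appears under heavy load.",
    "prompt": "What do you do first?",
    "alternatives": [
        Alternative("reproduce_fault", "Reproduce the fault under the reported load", None),
        Alternative("replace_component", "Replace the suspect component immediately",
                    "treats a plausible suspect as the confirmed cause"),
        Alternative("escalate_vendor", "Escalate to the vendor right away",
                    "assumes diagnosis is outside the learner's role"),
    ],
}
```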
How Do You Provide Effective Feedback on Assessments?
That’s necessary, but not sufficient. To serve as formative assessment, that is, assessment that helps the learner understand what they know and how to improve, the feedback must also be nuanced (and again, all assessment should provide opportunities to improve). Just saying ‘right’ or ‘wrong’ isn’t enough to help learners improve efficiently. And failing to provide a specific response to each alternative is also sub-optimal.
When learners get it right, it is good to strengthen the connection between their response and the concept that should be guiding their performance. Reemphasize the link between the relevant model and how it plays out in this context. You want a response that effectively says (with more specifics): “you were right; the model does say to do what you did”.
What you say when they get it wrong is at least as important, if not more so. Just as we created specific alternatives to the right answer, we want the feedback to identify the specific mistake and address it. Even “wrong, the right answer is…” isn’t enough. You want to say, roughly, “this is wrong; the model you want is …, and your response reflects a common way to misconstrue the problem, …”
Any tool that doesn’t let you provide specific feedback for each wrong answer is fundamentally flawed. Fortunately, that’s increasingly rare, but when you’re purchasing or using a tool, check the options to ensure you’ve thought through, and can develop, rich responses. While you could cram all the misconceptions into the same feedback for every wrong answer, you’d be making your learners work unnecessarily and so decreasing the learning impact.
There’s an additional constraint. If you are indeed (as you should be) putting the learner in a context and asking them to make decisions and solve problems, let the consequences of those decisions (good and bad) be made clear before you invoke the external voice. Learners use models to explain what has happened and predict what will happen, and the consequences help reinforce that, as well as closing the story (the emotional experience). Properly conveyed, those consequences may inherently convey how the misconception is inappropriate, but external clarification may help as well.
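Continuing the hypothetical item above, one way to picture this (again a sketch under invented names, not a prescribed format) is feedback stored per alternative, with the consequence delivered first and the external voice then naming the model and the specific misconstrual:

```python
# Hedged sketch: per-alternative feedback, consequence first, then the model,
# then the specific misconception (only for wrong answers). Field names are
# illustrative, not a standard.
feedback = {
    "replace_component": {
        "consequence": "The fault reappears a week later, and the customer is now angrier.",
        "model": "The diagnostic model says to confirm the fault under the reported conditions before intervening.",
        "misconception": "You treated a plausible suspect as the confirmed cause.",
    },
    "reproduce_fault": {  # the correct choice also gets feedback, reinforcing the model
        "consequence": "You observe the fault directly and can now isolate it.",
        "model": "Right: confirm the behavior before acting, just as the diagnostic model suggests.",
        "misconception": None,
    },
}

def render_feedback(choice_id: str) -> str:
    entry = feedback[choice_id]
    parts = [entry["consequence"], entry["model"]]
    if entry["misconception"]:
        parts.append(entry["misconception"])
    return " ".join(parts)  # kept minimal: no extraneous content
```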
There are the usual additional details: minimalism and timeliness. The feedback should, while accomplishing the goals identified, otherwise be as minimal as possible; extraneous content only interferes with processing the message. Similarly, in general, the feedback should be immediate. For apt learners in a complex environment there are arguments for delaying the feedback, but the way to bet is to provide it right away.
Should We Score eLearning Assessments?
One of the issues then becomes: how do you score an assessment? Yet the notion of scoring misses the bigger picture. Our concern should be the ability to do, which implies that relative scores aren’t sufficient.
Many times, we arbitrarily decide that a pass rate of, say, 80% on a knowledge test is sufficient. I suspect there’s little real thought behind whether 8 out of 10 is enough to demonstrate true understanding; I think it’s just a familiar approach.
A better approach to determining the necessary level of performance is thinking through what demonstrates adequate competency. Of course, this requires having determined an appropriate competency to begin with, but that’s also best practice: we should be determining what the appropriate performance is, and then determining what counts as a successful demonstration of that ability.
Robert Mager stipulated that the components of a good objective include the performance, the context for the performance, and the criteria for a successful demonstration of that ability. The criterion could be ‘4 out of 5 successful transactions’ (i.e. 80%), but it might be more (or less). And your assessment output (read: score) should be ‘passed’ or ‘did not pass’. Of course, in both cases, reinforcing the learning would be appropriate.
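A minimal sketch of what that looks like in practice, assuming a simple objective record and a pass/did-not-pass check (the 4-of-5 threshold is just the example above, and the field names are invented for illustration):

```python
# Hypothetical sketch of a criterion-referenced check in the spirit of Mager's
# components: performance, context, and criterion. Output is pass / did not
# pass, not a relative score.
from dataclasses import dataclass

@dataclass
class Objective:
    performance: str   # what the learner must be able to do
    context: str       # the conditions under which they must do it
    required: int      # successful demonstrations required...
    out_of: int        # ...out of this many attempts

objective = Objective(
    performance="resolve a reported fault",
    context="given an intermittent customer complaint and standard tooling",
    required=4,
    out_of=5,
)

def judge(successes: int, obj: Objective) -> str:
    # Criterion-referenced: compare against the stated criterion, nothing else.
    return "passed" if successes >= obj.required else "did not pass"

print(judge(4, objective))  # "passed"
```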
We could determine that some misconceptions are less heinous than others and provide partial scores, but the real issue is developing the ability to do, and that requires determining when learners can do; little else really matters.
Putting it Together
And yet, just creating practice with feedback still isn’t sufficient. The right practice, practice that is aligned with the need and the associated objectives, is critical. This starts with a gap analysis of which aspects of performance are not measuring up to need, and then a root-cause analysis to identify the type of intervention necessary. From there, if a skill gap has been identified, you then create learning objectives that address the skill gap and immediately design your assessments to determine whether those skills have been acquired. (And then you design your course to ensure that those skills will be acquired!)
A second element is sufficient practice (i.e. problems as formative assessment). Enough assessment is required to develop the skills to the necessary level, enough feedback to remove persistent misconceptions, and sufficient spacing to ensure that the skill shift will persist.
There is a role for summative evaluation, to determine that the learner has achieved the required level of performance. Our specification of objectives should include a statement of what competent performance is (i.e. our outcomes should be criterion-referenced), and then we should not consider the learning experience successful and complete until the necessary performance is achieved. The gateway beyond the class is the ability to perform, and that’s a summative judgment. However, even that performance can reinforce the learning, so it can also be formative.
Good assessment – good practice and feedback – is critical to learning. It is much more than a rote-knowledge test, but it is learnable, and doable.