FEMA IS-130.A: How to be an Exercise Evaluator Course Summary

Course Overview

This course provides a foundation for exercise evaluation concepts as identified in the Homeland Security Exercise and Evaluation Program (HSEEP).

Objectives: After completing this course, you will be able to:

  • Define the roles and responsibilities of an exercise evaluator.
  • Discover the tools necessary to support the exercise evaluator for a successful exercise evaluation.
  • Identify the necessary tasks in conducting an exercise evaluation.
  • Recognize methods of analyzing exercise data.

Welcome to IS 130.a, How To Be An Exercise Evaluator

In 2002, the Department of Homeland Security (DHS) developed the Homeland Security Exercise and Evaluation Program (HSEEP), which provides a common approach to exercise program management, design and development, conduct, evaluation, and improvement planning. HSEEP aligns local preparedness efforts with the National Preparedness Goal and the National Preparedness System. As a key component of national preparedness, exercises provide officials and stakeholders from across the whole community with the opportunity to shape planning, assess and validate capabilities, and address areas for improvement.

IS-0130.a How to Be an Exercise Evaluator provides learners with the basics of serving as an exercise evaluator. The course also builds a foundation for exercise evaluation concepts as identified in the HSEEP. The course was designed for anyone who has accepted the responsibility of being an evaluator.

Lesson Synopses

This course contains six lessons.

The remainder of Lesson 1, Introduction to Exercise Evaluation, provides an overview of the exercise evaluation process and the improvement planning process as described in the Homeland Security Exercise and Evaluation Program (HSEEP).

  • Lesson 2, Roles and Responsibilities of the Exercise Evaluator, explains the responsibilities of a lead evaluator, the criteria used to select evaluators, and the components of an evaluator briefing.
  • Lesson 3, Tools of the Evaluator, discusses the purpose and content of: an Exercise Evaluation Guide (EEG), the Controller and Evaluator Handbook (C/E Handbook), and the Master Scenario Events List (MSEL).
  • Lesson 4, Exercise Observation and Data Collection, discusses how to collect data during an evaluation, the challenges of evaluation, and the purpose of the player hot wash.
  • Lesson 5, Evaluation Data Analysis, describes the components of the post-exercise Controller/Evaluator (C/E) debriefing and how to develop effective recommendations for improvement.
  • Lesson 6, Exercise Wrap-up Activities, discusses the purpose and content of both the After-Action Report/Improvement Plan (AAR/IP) and the After-Action Meeting (AAM).

Lesson 1 Overview

This lesson provides an understanding of the exercise cycle, with an emphasis on evaluation and the qualifications needed to be an effective evaluator.

Objectives: At the end of this lesson, you will be able to:

  • Distinguish between discussion-based and operations-based exercise evaluations.
  • Explain the purpose of exercise evaluation.

What is Evaluation?

Evaluation is the act of observing and recording exercise activity or conduct, assessing behaviors or activities against exercise objectives, and noting strengths, weaknesses, deficiencies, or other observations.

Evaluation methods can differ between discussion-based exercises and operations-based exercises. For example:

  • Discussion-based exercises: Evaluators or note-takers observe and record participant discussion.
  • Operations-based exercises: Evaluators observe and record participant actions.

The data recorded by evaluators forms the analytical basis for determining if an exercise meets its objectives.

Evaluation should not be a single event in the exercise cycle; instead, it should be carefully integrated into overall exercise design. The output of exercise evaluation is information used to improve performance. For this reason, exercise evaluation is part of an on-going process of improvements to preparedness.

Evaluator Characteristics

The capabilities and objectives tested in an exercise play a critical role in recruiting evaluators. Evaluators should have experience and, preferably, subject matter expertise in their assigned functional area. They should have functional knowledge, including familiarity with the plans, policies, procedures, and agreements between local agencies and jurisdictions.

In addition to subject matter expertise, evaluators must be able to provide objective evaluations. Members of a participating agency may feel pressured to favor outcomes for their agency. For this reason, it is sometimes best to recruit evaluators from non-participating agencies, either within or outside the jurisdiction.

The Exercise Evaluation and Improvement Planning Process

The Exercise Evaluation and Improvement Planning Process has eight steps. In your role as an exercise evaluator, you will participate in activities that fall within Step 2 (Observe the exercise and collect data) and Step 3 (Analyze data).
Graphic showing the Evaluation and Improvement Planning Process

The first four steps of the process, Exercise Evaluation, address evaluation planning, observation, and analysis.

The last four steps of the process, Improvement Planning, focus on using information gained from exercises to implement improvements to a jurisdiction’s capabilities.

Lesson 1 Summary

In this lesson, you learned about the exercise cycle, evaluation and improvement planning, and the qualifications needed to be an effective evaluator.

Objectives: Having completed this lesson, you are able to:

  • Distinguish between discussion-based and operations-based exercise evaluations.
  • Explain the purpose of exercise evaluation.
The next lesson presents information on Roles and Responsibilities of the Exercise Evaluator.

Lesson 2 Overview

This lesson provides an understanding of the responsibilities of evaluation team members, including the time commitments required.

Objectives: At the end of this lesson, you will be able to:

  • Identify the criteria used to select evaluators.
  • Describe the components of an evaluator briefing.

As an Evaluator, You Are a Member of a Team…

The evaluation team is led by a lead evaluator. Appointed by the exercise planning team leader, the lead evaluator oversees all facets of the evaluation. The lead evaluator is charged with developing evaluation requirements, data collection methods, and corresponding tools and documentation to be used during the exercise. In addition, the lead evaluator selects, assigns, and trains his/her evaluation team members. Your lead evaluator is your manager for the duration of the exercise, evaluation, and improvement planning process.

The goal of the evaluation process is to obtain objective evaluations. An evaluator should be familiar with the mission areas, core capabilities, plans, policies, and procedures that will be examined during the exercise. Subject matter expertise and experience in the assigned functional area for evaluation are beneficial in maintaining objectivity.

A pre-exercise evaluator briefing is held for all evaluation team members to ensure a shared understanding of what key data to collect and how that data will contribute to the evaluation of the exercise.

The evaluator briefing provides an opportunity to resolve unanswered questions, clarify roles and responsibilities, and distribute any last-minute changes or updates.

It is important to note that depending on the complexity of the exercise, the evaluator briefing may be combined with exercise controllers for a Controller/Evaluator (C/E) briefing, and could require briefings at more than one site, depending on the organization and exercise layout.

Evaluator Responsibility

As an evaluation team member who participated in an exercise, keep in mind that you may be contacted to answer questions and provide additional information concerning your evaluation results.

Lesson 2 Summary

In this lesson, you learned about the responsibilities of the evaluation team.

Objectives: Having completed this lesson, you are able to:

  • Identify the criteria used to select evaluators.
  • Describe the components of an evaluator briefing.
The next lesson presents information on the Tools of the Evaluator.

Lesson 3 Overview

This lesson provides an understanding of the documentation tools used by the evaluation team to observe, collect data, and evaluate player/participant reactions to prompts.

Objectives: At the end of this lesson, you will be able to:

  • Identify the purpose and content of the Controller and Evaluator Handbook (C/E Handbook).
  • Identify the purpose and content of an Exercise Evaluation Guide (EEG).
  • Identify the purpose and content of the Master Scenario Events List (MSEL).

Exercise Evaluation Tools

 
Discussion-based Exercises

Situation Manual (SitMan): The primary reference material provided to all participants. It is a textual background for the facilitated exercise and discussion. The SitMan structure includes an overview (scope, objectives, core capabilities, rules, and exercise agenda), the scenario, broken into modules, and discussion questions at the end of each module.

Operations-based Exercises

Controller/Evaluator (C/E) Handbook: The primary evaluation documentation for the exercise. It is used as an instructional manual for controllers and evaluators and is sometimes called the Evaluation Plan.


Master Scenario Events List (MSEL): A chronological listing, in spreadsheet format, of the events that drive exercise play. The MSEL is used during operations-based exercises or complex discussion-based exercises. MSEL events include contextual injects, expected action events (milestones), and contingency injects.

Situation Manual (SitMan)

A Situation Manual (SitMan) is the core documentation provided to all participants in a discussion-based exercise. It is the textual background for a facilitated exercise. It supports the scenario narrative and is the primary reference material.

Typically, the SitMan includes the following:

  • Exercise scope, objectives, and core capabilities
  • Exercise assumptions and artificialities
  • Instructions for exercise participants
  • Exercise structure
  • Exercise scenario background
  • Discussion questions and key issues
  • Schedule of events

Reference materials may include:

  • Material Safety Data Sheet (MSDS)
  • Relevant documentation, plans, SOPs, etc.
  • Jurisdiction-specific threat information
  • A list of reference terms

Exercise Evaluation Guide (EEG) Goals and Components

EEGs are designed to accomplish several goals:

  • Streamline data collection
  • Enable thorough assessments of the participant organizations’ capability targets
  • Support development of the After-Action Report (AAR)
  • Provide a consistent process for assessing preparedness through exercises
  • Help organizations map exercise results to exercise objectives, core capabilities, capability targets, and critical tasks for further analysis and assessment

Each EEG consists of the following components:

  • The EEG: asks evaluators to record the completion or non-completion of tasks and performance measures.
  • The EEG Analysis Sheets: the Observations Summary section asks evaluators to record the general flow of their observations, and the Evaluator Observations section prompts evaluators to list their major observations, including strengths and areas for improvement.


Example Exercise Evaluation Guide (EEG)

EEG example form template.

EEG Format

The EEG format presents the following evaluation requirements to exercise evaluators:

  • Core capabilities: The distinct critical elements necessary to achieve a specific mission area (e.g., prevention). To assess both capacity and gaps, each core capability includes capability targets.
  • Capability target(s): The performance thresholds for each core capability; they state the exact amount of capability that players aim to achieve. Capability targets are typically written as quantitative or qualitative statements.
  • Critical tasks: The distinct elements required to perform a core capability; they describe how the capability target will be met. Critical tasks generally include the activities, resources, and responsibilities required to fulfill capability targets. Capability targets and critical tasks are based on operational plans, policies, and procedures to be exercised and tested during the exercise.
  • Performance ratings: The summary description of performance against target levels. Performance ratings include both Target Ratings, describing how exercise participants performed relative to each capability target, and Core Capability Ratings, describing overall performance relative to the entire core capability.

For each EEG, evaluators provide a target rating, observation notes including an explanation of the target rating, and a final core capability rating. In order to efficiently complete these sections of the EEG, evaluators focus their observations on the capability targets and critical tasks listed in the EEG.
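To make the relationships among these elements concrete, the sketch below models an EEG's structure as plain Python data classes. The class and field names are illustrative assumptions for this summary, not part of any official HSEEP tool or format.

```python
from dataclasses import dataclass, field
from enum import Enum

class Rating(Enum):
    """HSEEP performance rating scale."""
    P = "Performed without Challenge"
    S = "Performed with Some Challenges"
    M = "Performed with Major Challenges"
    U = "Unable to be Performed"

@dataclass
class CapabilityTarget:
    """A performance threshold for a core capability (hypothetical field names)."""
    statement: str                       # e.g. "Notify all partner agencies within 4 hours"
    critical_tasks: list[str]            # distinct elements required to meet the target
    target_rating: Rating | None = None  # assigned by the evaluator after observation

@dataclass
class ExerciseEvaluationGuide:
    """One EEG covers one core capability and its targets."""
    core_capability: str                                   # e.g. "Operational Coordination"
    capability_targets: list[CapabilityTarget] = field(default_factory=list)
    observation_notes: str = ""                            # explanation supporting the ratings
    core_capability_rating: Rating | None = None           # overall rating across all targets
```

In this sketch, completing an EEG corresponds to filling in each target's rating, the supporting observation notes, and the final core capability rating.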

The Controller/Evaluator (C/E) Handbook

The Controller/Evaluator Handbook (C/E Handbook) is the primary evaluation documentation for the exercise. It is used as an instructional manual for controllers and evaluators and is sometimes called the Evaluation Plan. It typically contains the following information:

  • Exercise-Specific Details: Exercise scenario, schedule of events, and evaluation schedule
  • Evaluator Team Organization, Assignments, and Locations: A list of evaluator locations, shift assignments, a map of the exercise site(s), evaluation team organizational chart, and evaluation team contact information
  • Evaluator Instructions: Step-by-step instructions for evaluators for activities before, during, and following the exercise
  • Evaluation Tools: EEGs, the MSEL, or a list of venue-specific injects, electronic or manual evaluation logs or data collection forms, relevant plans and procedures, Participant Feedback Forms, and Hot Wash templates

The C/E Handbook may be a standalone document or a supplement to the Exercise Plan (ExPlan). It may also be broken into separate controller and evaluator versions.

Master Scenario Events List (MSEL)

In more complex exercises (operations-based or complex discussion-based exercises), a Master Scenario Events List (MSEL) provides a timeline and location for all expected exercise events and injects (actions that push the scenario forward). It is important that all evaluators have a copy of the MSEL: it keeps them on track, in their expected locations, observing the actions they are assigned.

Evaluators can refer to the MSEL to help determine the times at which specific evaluators should be at certain locations. For discussion-based exercises, the assignment of evaluators depends on the number of players, the organization of the players and discussion, and the exercise objectives.

Components of the MSEL

The MSEL should contain:

  • Chronological listing that supplements exercise scenario
  • Scenario time
  • Event synopses
  • All injects, including the controller responsible for the inject, as well as all evaluator special instructions
  • Intended player (agency or individual player)
  • Expected participant responses – when developing storyboards, add a note clarifying that evaluators compare these expected responses against the actual responses observed
  • Objectives and core capabilities
  • Notes section with special instructions


MSEL Example

A sample MSEL with the heading "E 900 All Hazards Functional Exercise, Day 1 – Tuesday." The table's eight columns, from left to right, are: Inject Number, Delivery Time, From, To (User), Means/Inject Method, Description, Expected Player Action, and Threads.
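Because the MSEL is maintained in spreadsheet format, evaluators sometimes work from an exported copy. The sketch below is a minimal illustration assuming the MSEL has been exported to a CSV file with column headings like those in the sample above; it filters the injects addressed to a particular player or agency. The file name and column labels are assumptions, not a prescribed HSEEP format.

```python
import csv
from pathlib import Path

# Expected column headings, modeled on the sample MSEL described above (assumed, not prescribed).
COLUMNS = ["Inject Number", "Delivery Time", "From", "To (User)",
           "Means/Inject Method", "Description", "Expected Player Action", "Threads"]

def injects_for(msel_csv: Path, recipient: str) -> list[dict[str, str]]:
    """Return the MSEL rows addressed to a specific player or agency."""
    with msel_csv.open(newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("To (User)") == recipient]

# Usage (assumes an exported msel.csv with the headings listed above):
# for row in injects_for(Path("msel.csv"), "Operations Section Chief"):
#     print(row["Delivery Time"], row["Description"], row["Expected Player Action"])
```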

Lesson 3 Summary

In this lesson, you learned about the Exercise Evaluation Guide (EEG), Controller/Evaluator (C/E) Handbook, and the Master Scenario Events List (MSEL).

Objectives: Having completed this lesson, you are able to:

  • Identify the purpose and content of an Exercise Evaluation Guide (EEG).
  • Identify the purpose and content of the Controller and Evaluator Handbook (C/E Handbook).
  • Identify the purpose and content of the Master Scenario Events List (MSEL).
The next lesson presents information on Exercise Observation and Data Collection.

Lesson 4 Overview

This lesson provides an overview of the observation and data collection and hot wash processes.

Objectives: At the end of this lesson, you will be able to:

  • Describe the exercise-related data that evaluators should collect.
  • Define the challenges of evaluation.
  • Identify the purpose and content of the player hot wash.
Graphic listing all lessons in course. First line states Lesson List. Underneath is a bulleted list that reads Introduction to Exercise Evaluation, Roles and Responsibilities of the Exercise Evaluator, Tools of the Evaluator, Exercise Observation and Data Collection, Evaluation Data Analysis, Exercise Wrap-Up Activities, and Course Summary. Introduction to Course Evaluation, Roles and Responsibilities of the Exercise Evaluator, and Tools of the Evaluator have check marks in front of them. Exercise Observation and Data Collection has an arrow in front of it indicating the beginning of this lesson.

Exercise Observations and Data Collection

Exercise evaluators should observe exercise activity in a non-attribution environment, in accordance with the evaluation training and EEGs. Evaluators will generally be able to collect information about the following topics related to execution of capabilities and tasks examined during the exercise:

  • Utilization of plans, policies, and procedures related to capabilities.
  • Implementation of legal authorities.
  • Understanding and assignment of roles and responsibilities of participating organizations and players.
  • Decision-making processes used.
  • Activation and implementation of processes and procedures.
  • How and what information is shared among participating agencies/organizations and the public.

Discussion-based exercises usually focus on issues involving plans, policies, and procedures; consequently, observations of these exercises may consist of an evaluator or note-taker recording data from participant discussions on EEGs. On the other hand, operations-based exercises focus on issues affecting the operational execution of capabilities and critical tasks.

During exercises, evaluators collect and record participant actions, which form the analytical basis for determining if critical tasks were successfully demonstrated and capability targets met.

Using EEGs in Exercise Observation

As you learned in Lesson 3, Exercise Evaluation Guides (EEGs) identify the activities, tasks, and performance measures that the evaluator should observe during the exercise. Evaluators should complete the EEG so that:

  • Events can be reconstructed at a later time (such as during summary sessions).
  • Evaluators can conduct root cause analyses of problems.

To ensure EEGs are fully complete, evaluators should:

  • Synchronize their timekeeping with other evaluators before the exercise.
  • Record the name and time of the exercise (as applicable).
  • Log times and location accurately.
  • Take notes on whether exercise simulations affect the observed task.

Complete EEGs are essential to the development of the After-Action Report/Improvement Plan (AAR/IP).
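As a simple illustration of what a complete observation record might capture, the sketch below defines a hypothetical log entry with the elements listed above (synchronized time, location, and whether a simulation affected the observed task). The structure and field names are assumptions for this summary, not an HSEEP-defined form.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ObservationEntry:
    """Hypothetical evaluator log entry; field names are illustrative only."""
    exercise_name: str
    location: str
    timestamp: datetime                # recorded against the synchronized exercise clock
    note: str
    simulation_affected: bool = False  # did an exercise simulation affect the observed task?

# Hypothetical example entry:
entry = ObservationEntry(
    exercise_name="Example Full-Scale Exercise",
    location="Emergency Operations Center",
    timestamp=datetime(2025, 5, 6, 9, 42),
    note="Outgoing Operations Section Chief briefed the incoming chief.",
)
print(entry)
```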

The EEG Observations Section

The EEG Observations Section allows exercise evaluators to record general exercise events, specific actions deserving special recognition, particular challenges or concerns, and areas needing improvement.

The information recorded in the EEGs is used to develop the AAR/IP.

The standard sources, such as EEGs, are not the only sources of information. Evaluators should make every attempt to gather as much information as possible. Other sources of evaluation information are:

  • Event logs.
  • Video or audio recordings.
  • Evaluator notes.
  • Photographs.

For operations-based exercises, evaluators should be given a format that suits the environment.

Types of Reporting

During an exercise, each evaluator performs three types of reporting:

  • Descriptive reporting is the direct observation and documentation of actions listed on evaluation forms. For example, consider a checklist item that asks whether the outgoing Operations Section Chief briefed his or her replacement. This item requires little subjective judgment on the part of the evaluator. For that reason, it prompts descriptive reporting. Descriptive reporting typically yields reliable data.
  • Inferential reporting requires an evaluator to arrive at a conclusion before recording information. For example, consider a checklist item that asks whether a capability is “adequate.” In judging whether the capability is “adequate,” the evaluator must first make an assumption about what “adequate” means. Since no two evaluators will make the exact same assumption, inferential reporting yields inconsistent data.
  • Evaluative reporting requires evaluators to assess performance on a scale of success. For example, consider an evaluation item that asks evaluators to rate the success of the Incident Commander’s communications strategy. This item requires the evaluator to make an evaluative judgment. Reliable evaluative data is difficult to collect.

For the most part, evaluators will perform descriptive reporting.

Post-exercise activities ask the evaluator to assess data in relationship to exercise objectives. These assessments require inferential and evaluative judgments.
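The distinction among the three reporting types can be summarized in a short sketch that pairs checklist items with the kind of reporting they prompt. The items are hypothetical, drawn from the examples above; the enum descriptions restate the data-reliability points made in this section.

```python
from enum import Enum

class ReportingType(Enum):
    DESCRIPTIVE = "direct observation of listed actions; typically yields reliable data"
    INFERENTIAL = "requires a conclusion before recording; yields inconsistent data"
    EVALUATIVE = "requires rating performance on a scale; reliable data is hard to collect"

# Hypothetical checklist items paired with the reporting type they prompt.
EXAMPLE_ITEMS = {
    "Did the outgoing Operations Section Chief brief the replacement?": ReportingType.DESCRIPTIVE,
    "Was the communications capability adequate?": ReportingType.INFERENTIAL,
    "Rate the success of the Incident Commander's communications strategy.": ReportingType.EVALUATIVE,
}

for item, kind in EXAMPLE_ITEMS.items():
    print(f"{kind.name:<11} {item}")
```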

Note Taking and Data Collection

Evaluators should retain their notes and records of the exercise to support the development of the AAR. As necessary, the lead evaluator may assign other evaluators to collect supplemental data during or immediately after the exercise. Such data is critical to fill in gaps identified during exercise evaluation.

For example, sources of supplemental evaluation data might include records produced by automated systems or communication networks, and written records, such as duty logs and message forms.

Additional data collection methods include participant feedback forms and Facilitator/Evaluator (F/E) or C/E briefings.

Hot washes also provide times when data can be collected and/or validated. During these sessions, which occur immediately after the exercise, players can voice both positives and negatives of the exercise. Hot washes are attended by the players, evaluators, and the controllers or facilitators.

Note Taking Skills

Observation notes include if and how quantitative or qualitative targets were met. For example, a capability target might state, “Within 4 hours of the incident…” Observation notes on that target should include the actual time required for exercise players to complete the critical task(s).

Additionally, observations should include:

  • Actual time required for exercise players to complete the critical task(s).
  • How target was or was not met.
  • Decisions made and information gathered to make those decisions. This includes following up with other evaluators who may have been on the other end of the conversations, so all "sides of the story" are represented.
  • Requests made and how requests were handled.
  • Resources utilized.
  • Plans, policies, procedures, or legislative authorities used or implemented.
  • Any other factors that contributed to the outcomes.

Based on their observations, evaluators assign a target rating for each capability target listed on the EEG. Evaluators then consider all target ratings for the core capability and assign an overall core capability rating. The rating scale includes four ratings:

  • Performed without Challenge (P).
  • Performed with Some Challenges (S).
  • Performed with Major Challenges (M).
  • Unable to be Performed (U).
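The sketch below shows one way this rollup from target ratings to an overall core capability rating could be represented. The "worst rating wins" starting point is an illustrative heuristic only; HSEEP leaves the final core capability rating to the evaluator's judgment across all target ratings.

```python
from enum import IntEnum

class Rating(IntEnum):
    """HSEEP rating scale, ordered here from best (1) to worst (4) performance."""
    P = 1  # Performed without Challenge
    S = 2  # Performed with Some Challenges
    M = 3  # Performed with Major Challenges
    U = 4  # Unable to be Performed

def proposed_core_capability_rating(target_ratings: list[Rating]) -> Rating:
    """Suggest a starting point for the overall rating: the most severe target rating.
    This heuristic is an assumption for illustration, not an HSEEP requirement."""
    return max(target_ratings)

# Example: two targets performed with some challenges, one with major challenges.
print(proposed_core_capability_rating([Rating.S, Rating.S, Rating.M]).name)  # prints "M"
```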

Common Pitfalls of Evaluation

Evaluations are only effective if evaluators perform systematic observation and generate unbiased records. To ensure unbiased records, evaluators should avoid seven pitfalls of exercise evaluation:

  1. Observer Drift occurs when evaluators lose interest or a common frame of reference during an exercise. It is usually the result of fatigue or lack of motivation. Observer drift can be minimized by feedback from the lead evaluator, beverages and snacks, breaks, and rotational shifts of exercise observation.
  2. Errors of Leniency occur when evaluators have a tendency to rate all actions positively. They can be minimized by pre-exercise training.
  3. Errors of Central Tendency occur when evaluators describe all activities as average in order to avoid making difficult decisions. They can be minimized by pre-exercise training.
  4. Halo Effect occurs when evaluators form a positive impression of a person or group early in the exercise and permit this impression to influence their observations. It can be minimized by pre-exercise training.
  5. Hypercritical Effect occurs when evaluators believe it is their job to find something wrong, regardless of the players’ performance. It can be minimized by pre-exercise training.
  6. Contamination occurs when evaluators know how an activity was performed in earlier exercises and permit this knowledge to affect their expectations. It can be minimized by pre-exercise training.
  7. Evaluator Bias refers to errors that are traceable to characteristics of the evaluator. Evaluator bias can be minimized by careful selection of evaluators, or by employing multiple evaluators to observe the same functions.

Exercise Hot Wash

After the exercise, one or more player hot washes are held, attended by the Exercise Planning Team, players, evaluators, and facilitators or controllers. The player hot wash is an opportunity for players to describe their immediate impressions of demonstrated capabilities and the exercise itself. For this reason, it affords a valuable opportunity for evaluators to fill in gaps in their notes.

Player hot washes allow time for players to address key topics, cross-disciplinary issues, or conflicting recommendations that were identified in earlier discussions. They are also an opportunity for players to comment on how well the exercise was planned and conducted.

Player hot washes should be held as soon as possible after the exercise is complete, while player observations are still fresh. They are most effective when led by an experienced facilitator who can keep the discussion constructive and focused.

During the hot wash, evaluators, controllers and/or facilitators should distribute Participant Feedback Forms for players to submit.

Hot Wash Goals for Evaluators

For evaluators, a hot wash is an opportunity to:

  • Collect thoughts and observations about what occurred and how the participants thought it went
  • Clarify points and collect information that may have been missed

Although evaluators may be assigned to record a particular group discussion, they should capture information on cross-cutting issues.

Exercise Hot Wash Questions

Some suggested questions are:

  • What happened?
  • What was supposed to happen?
  • Why is there a difference?
  • What is the effect of the difference?
  • What should be learned from this?
  • What improvements need to be implemented?
  • Were the organizational roles and responsibilities clearly identified?

Positive Aspects of Evaluation

Strengths are identified and can be carried forward.

Lesson 4 Summary

In this lesson, you learned about observation and data collection processes, including good note-taking skills and types of reporting. In addition, you learned about the purpose and content of the player hot wash.

Objectives: Having completed this lesson, you are able to:

  • Describe the exercise-related data that evaluators should collect.
  • Define the challenges of evaluation.
  • Identify the purpose and content of the player hot wash.
The next lesson presents information on Evaluation Data Analysis.

Lesson 5 Overview

This lesson provides an understanding of data analysis, issue identification, and root-cause analysis.

Objectives: At the end of this lesson, you will be able to:

  • Describe the components of the post-exercise Controller/Evaluator (C/E) debriefing.
  • Describe how to collect data to ensure a proper Root Cause Analysis.

Controller/Evaluator (C/E) Debriefing

Data analysis starts when evaluators and controllers meet in the C/E Debriefing after the exercise. This meeting includes controllers because they are frequently teamed with evaluators, and because they can provide insights and observations based on the Master Scenario Events List (MSEL). The C/E Debriefing allows evaluators to review results of the hot wash and participant feedback forms. It also enables evaluators to:

  • Compare notes with other evaluators and controllers.
    • This helps all evaluators fill in information gaps. It also enhances continuity. For example, an evaluator may have notes about a situation that involved follow-up in another situation; if that second situation relates to the assigned objectives of another evaluator, the two evaluators must compare notes. Comparing notes may also help evaluators resolve discrepancies within their own notes.
  • Refine evaluation documents.
  • Develop an overall capability summary.

Reviewing Exercise Objectives

Data analysis is the time when evaluators assess player performance against exercise objectives. For this reason, evaluators should start by re-reading exercise objectives. These objectives provide the foundation for all data analysis. If the exercise was complex, evaluators may only need to re-read the objectives related to their assignments.

When reviewing the exercise objectives, consider the following points:

  • What was the intent of the objective?
  • What would demonstrate the successful performance of the objective?
  • If the objective was not met, was it the result of poor exercise design or the decisions of players?

Data Analysis

The goal of data analysis is to evaluate the ability of exercise participants to perform core capabilities and to determine if exercise objectives were met.

During data analysis, the evaluation team first consolidates the data collected during the exercise and determines whether participants performed critical tasks and met capability targets. Evaluators consider participant performance against all targets to determine the overall ability to perform core capabilities. Additionally, the evaluation team takes notes on the course of exercise play, demonstrated strengths, and areas for improvement. This provides the evaluators with not only what happened, but why events happened.

After this initial data analysis, evaluators examine each critical task not completed as expected and each target not met, with the aim of identifying a root cause. A root cause is the source of or underlying reason behind an identified issue toward which the evaluator can direct an improvement. When conducting a root-cause analysis, the evaluator should attempt to trace the origin of each event back to earlier events and their respective causes.

Identifying Issues

In both discussion-based and operations-based exercises, evaluators identify issues by comparing exercise objectives to actual performance. Through this comparison, evaluators identify which capabilities (and their associated activities, performance measures, and tasks) were successfully demonstrated in the exercise. They also identify which capabilities need improvement.

  • During operations-based exercises, evaluators identify issues by answering the following questions:
    • What happened?
    • What did evaluators see?
    • What was supposed to happen based on plans and procedures?
    • Was there a difference? Why or why not?
    • What was the impact?
    • Were the consequences of the action (or inaction or decision) positive, negative, or neutral?
    • What are the strengths and areas of improvement to remedy deficiencies?
  • In discussion-based exercises, evaluators seek to identify the following issues:
    • In an incident, how would response personnel perform the activities and associated tasks?
    • What decisions would need to be made, and who would make them?
    • Are personnel trained to perform the activities and associated tasks?
    • Are other resources needed? If so, how will they be obtained?
    • Do plans, policies, and procedures support the performance of the activities and associated tasks?
    • Are players familiar with these documents?
    • Do personnel from multiple agencies or jurisdictions need to work together to perform the activities?
    • If so, are agreements or relationships in place to support this?
    • What are the strengths and areas of improvement to remedy deficiencies?

Determining Root Causes

After evaluators identify discrepancies between what happened and what was supposed to happen (the issues), they explore the source of these issues. This step is called root-cause analysis. When conducting root-cause analysis, evaluators ask why each event happened or did not happen.

When evaluators have identified the root cause of a problem, they can be sure that corrective actions will actually address the problem, and not just a symptom of it.

Root Cause Analysis (The “Why Staircase”)

Root-Cause is defined as the source of, or underlying reason behind, an identified issue toward which the evaluator can direct improvement. The Root Cause Analysis graphic identifies a series of questions that can help drill down to determine the cause(s) of the issue. First, a one sentence description of the event to be analyzed is recorded as the Problem Statement.

Consider asking the following questions to narrow down the issue of concern:

  • What happened? (What was observed?)
  • What capability targets were met? If they were not met, what were the contributing factors?
  • Did discussion or activities suggest that critical tasks were executed to meet capability targets? If not, what were the impacts or consequences?
  • Do current plans, policies, and procedures support critical tasks and capability targets?
  • Were players familiar with them?

There is room next to the questions to record the "why" results. Also include suggested corrective action(s). Finally, using a list format, identify the root causes found through this analysis.

Graphic used during the Root Cause Analysis process.
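A lightweight way to picture the worksheet described above is the sketch below, which captures a problem statement, the successive "why" answers, and the resulting root causes and corrective actions. The structure and the filled-in example are hypothetical illustrations, not content from an actual exercise.

```python
from dataclasses import dataclass, field

@dataclass
class RootCauseWorksheet:
    """Illustrative stand-in for the 'Why Staircase' worksheet (hypothetical fields)."""
    problem_statement: str                         # one-sentence description of the event
    whys: list[str] = field(default_factory=list)  # successive answers to "Why did this happen?"
    root_causes: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)

# Hypothetical example, for illustration only:
worksheet = RootCauseWorksheet(
    problem_statement="The EOC was not notified within the 4-hour capability target.",
    whys=[
        "The duty officer did not have the current notification roster.",
        "The roster is updated quarterly but is not redistributed to duty staff.",
    ],
    root_causes=["No procedure exists for redistributing the updated notification roster."],
    corrective_actions=["Add roster redistribution to the quarterly update procedure."],
)
print(worksheet.root_causes)
```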

Developing Recommendations for Improvement

After identifying issues and their root causes, evaluators develop recommendations for enhancing preparedness, which are compiled into a draft After-Action Report (AAR) and submitted to the exercise sponsor. Once distributed, elected and appointed officials, or designees, review and accept observations and recommendations and formalize the AAR. The recommendations in the AAR will be the basis for corrective actions identified in the After-Action Meeting (AAM).

Honesty is key when writing recommendations. If you have a criticism, record it. Exercises will only improve preparedness if they are followed by accurate and useful feedback.

Recommendations for improvement should:

  • Identify areas to sustain or improve
  • Address both short-term and long-term solutions
  • Be consistent with other recommendations
  • Identify references for implementation

To the extent possible, evaluators should detail how to implement improvements. They can even recommend who will implement them and provide suggested timeframes for completion.

Recommended Improvements

When developing recommendations for discussion-based exercises, evaluators should guide their discussion with the following questions:

  • What changes need to be made to plans to improve performance?
  • What changes need to be made to organizational structures to improve performance?
  • What changes need to be made to leadership and management processes to improve performance?
  • What training is needed to improve performance?
  • What changes to resources are needed to improve performance?
  • What practices should be shared with other communities?

When developing recommendations for operations-based exercises, evaluators should guide their discussion with the following questions:

  • What changes need to be made to plans or procedures to improve performance?
  • What changes need to be made to organizational structures to improve performance?
  • What changes need to be made to leadership and management processes to improve performance?
  • What training is needed to improve performance?
  • What changes to equipment are needed to improve performance?
  • What are lessons learned for approaching a similar problem in the future?

Lesson 5 Summary

In this lesson, you learned about data analysis, issue identification, root-cause analysis, and improvement recommendations.

Objectives: Having completed this lesson, you are able to:

  • Describe the components of the post-exercise Controller/Evaluator (C/E) debriefing.
  • Describe how to collect data to ensure a proper Root Cause Analysis.

Lesson 6 Overview

This lesson provides an understanding of the after-action reporting process, determining corrective actions, finalizing the After-Action Report (AAR), and improvement implementation.

Objectives: At the end of this lesson, you will be able to:

  • Identify the purpose and content of the After-Action Report (AAR)/Improvement Plan (IP).
  • Describe the After-Action Meeting (AAM).

Exercise Evaluation Guide (EEG) and the Controller/Evaluator (C/E) Debrief

The Exercise Evaluation Guide (EEG) is the basis of the exercise evaluation. The evaluator's notes and after-exercise questions are recorded in the observations section. These questions are the first step in the data analysis process. The EEGs are intended to guide an evaluator's observations so that the evaluator focuses on capabilities and tasks relevant to exercise objectives.

The Controller/Evaluator (C/E) debrief follows the exercise and is where data is compiled and discussed among the team of evaluators. As a result, areas that require follow-up are identified and documented. The information gathered at this debrief is used to build the AAR/IP, which is presented to the exercise sponsor.

What is an After-Action Report (AAR)?

All discussion-based and operations-based exercises result in the development of an AAR. The AAR is the document that summarizes key information related to evaluation. The Homeland Security Exercise and Evaluation Program (HSEEP) has defined a standard format for the development of an AAR. By using this format, jurisdictions ensure that the style and the level of detail in their AAR is consistent with other jurisdictions. Consistency across jurisdictions allows the nation-wide emergency preparedness community to gain a broad view of capabilities.

The length, format, and development timeframe of the AAR depend on the exercise type and scope. These parameters should be determined by the exercise planning team based on the expectations of elected and appointed officials as they develop the evaluation requirements in the design and development process. The main focus of the AAR is the analysis of core capabilities. Generally, AARs also include basic exercise information, such as the exercise name, type of exercise, dates, location, participating organizations, mission area(s), specific threat or hazard, a brief scenario description, and the name of the exercise sponsor and POC.

The AAR should include an overview of performance related to each exercise objective and associated core capabilities, while highlighting strengths and areas for improvement. Therefore, evaluators should review their evaluation notes and documentation to identify the strengths and areas for improvement relevant to the participating organizations’ ability to meet exercise objectives and demonstrate core capabilities.

 

Information in AAR


The suggested AAR format specifically includes:

  • Basic exercise information:
    • Exercise name
    • Exercise scope
    • Dates and location
    • Participating organizations
    • Mission area(s) and Core Capabilities
    • Specific threat or hazard
    • Brief scenario description
    • Exercise sponsor name and point of contact (POC)
  • Executive summary
  • Exercise goals and objectives
  • Analysis of capabilities demonstrated
  • Conclusion
  • Improvement Plan (IP) matrix
  • Additional appendices may include:
    • Lessons learned
    • A participant feedback summary
    • An exercise events summary table
    • Performance ratings
    • An acronym list

AAR Review

Upon completion, the evaluation team provides the draft AAR to the exercise sponsor, who distributes it to participating organizations. Elected and appointed officials, or their designees, review and confirm observations identified in the formal AAR, and determine which areas for improvement require further action. Areas for improvement that require action are those that will continue to seriously impede capability performance if left unresolved.

As part of the improvement planning process, elected and appointed officials identify corrective actions to bring areas for improvement to resolution and determine the organization with responsibility for those actions.

Corrective Actions

After evaluation concludes, organizations should perform an additional qualitative assessment of the analyzed data to identify potential corrective actions. Corrective actions are concrete, actionable steps that are intended to resolve capability shortcomings identified in exercises or real-world events.

In developing corrective actions, elected and appointed officials or their designees should first review and revise the draft AAR, as needed, prior to the AAM to confirm that the issues identified by evaluators are valid and require resolution. The reviewer then identifies which issues fall within their organization's authority and assumes responsibility for taking action on those issues. Finally, they determine an initial list of appropriate corrective actions to resolve identified issues. Each corrective action should identify what will be done to address the recommendation, who (person or agency) should be responsible, and a timeframe for implementation. A corrective action should contain enough detail to make it useful.

The organization’s reviewer should use the following questions to guide their discussion when developing corrective actions:

  • What changes need to be made to plans and procedures to improve performance?
  • What changes need to be made to organizational structures to improve performance?
  • What changes need to be made to management processes to improve performance?
  • What changes to equipment or resources are needed to improve performance?
  • What training is needed to improve performance?
  • What are the lessons learned for approaching similar problems in the future?

Benchmarking Corrective Actions

Corrective actions must include attainable benchmarks that allow the jurisdiction to measure progress toward implementation. Examples of benchmarks include the following:

  • The number of personnel trained in a task
  • The percentage of equipment that is up-to-date
  • The finalization of an interagency agreement within a given amount of time

These benchmarks should be defined against concrete deadlines so the jurisdiction can track gradual progress toward implementation.
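As an illustration of tracking a corrective action against a benchmark and a concrete deadline, the sketch below defines a hypothetical tracking record. The field names and the example values are assumptions for this summary, not an HSEEP-mandated schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    """Hypothetical record for tracking one corrective action to completion."""
    description: str
    responsible_party: str
    benchmark: str            # e.g. "number of personnel trained in the task"
    deadline: date
    percent_complete: int = 0

    def is_overdue(self, today: date) -> bool:
        """True if the deadline has passed and the action is not yet complete."""
        return today > self.deadline and self.percent_complete < 100

# Hypothetical example:
action = CorrectiveAction(
    description="Train EOC staff on the updated notification procedure",
    responsible_party="Jurisdiction training officer",
    benchmark="Number of personnel trained in the task",
    deadline=date(2026, 6, 30),
    percent_complete=40,
)
print(action.is_overdue(date(2026, 7, 15)))  # prints True
```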

After-Action Meeting (AAM)

Once the organization’s reviewer has confirmed the draft areas for improvement and identified initial corrective actions, a draft Improvement Plan (IP) is developed for review at an AAM. The purpose of the AAM is to review and refine the draft AAR. As part of the AAM, attendees develop an IP that articulates specific corrective actions by addressing issues identified in the AAR. The refined AAR and IP are then finalized as a combined AAR/IP.

Prior to the AAM, as appropriate, the exercise sponsor will distribute the revised AAR, which incorporates feedback on the strengths and areas for improvement, and the draft IP to participants. Distributing these documents for review prior to the meeting helps to ensure that all attendees are familiar with the content and are prepared to discuss exercise results, identified areas for improvement, and corrective actions.

To answer any questions or provide necessary details on the exercise itself, the organization’s elected and appointed officials, or their designees, should attend the AAM along with any other stakeholders, the Exercise Planning Team, and the Evaluation Team.

During the AAM, participants should seek to reach final consensus on strengths and areas for improvement, as well as revise and gain consensus on draft corrective actions. Additionally, as appropriate, AAM participants should develop concrete deadlines for implementation of corrective actions and identify specific corrective action owners/assignees. Participant organizations are responsible for developing implementation processes and timelines, and keeping their elected and appointed officials informed of the implementation status.

Tip
Characteristics of an ideal AAM
The meeting should be:

  • Scheduled for a full day, within several weeks of the end of the exercise
  • Held at a convenient location or at the site where the exercise took place

Participants should:

  • Validate observations and recommendations and provide insight into activities that may have been overlooked or misinterpreted by evaluators
  • Participate in a facilitated discussion of ways in which participating organizations can build upon the strengths identified in the jurisdiction

After-Action Report/Improvement Plan Finalization

Once all corrective actions have been consolidated in the final IP, the IP may be included as an appendix to the AAR. The AAR/IP is then considered final, and may be distributed to exercise planners, participants, and other preparedness stakeholders as appropriate.

Corrective Action Tracking and Implementation

Corrective actions captured in the AAR/IP should be tracked and continually reported on until completion. Organizations should assign points of contact responsible for tracking and reporting on their progress in implementing corrective actions. By tracking corrective actions to completion, preparedness stakeholders are able to demonstrate that exercises have yielded tangible improvements in preparedness. Stakeholders should also ensure there is a system in place to validate previous corrective actions that have been successfully implemented. These efforts should be considered part of a wider continuous improvement process that applies before, during, and after an exercise.

Using IPs to Support Continuous Improvement

Conducting exercises and documenting the strengths, areas for improvement, and associated corrective actions is an important part of the National Preparedness System, and contributes to the strengthening of preparedness across the Whole Community and achievement of the National Preparedness Goal. Over time, exercises should yield observable improvements in preparedness for future exercises and real-world events.

The identification of strengths, areas for improvement, and corrective actions that result from exercises helps organizations build capabilities as part of a larger continuous improvement process. The principles of continuous improvement are:

  • Consistent Approach. Organizations should employ a consistent approach for continuous improvement-related activities across applicable mission areas—prevention, protection, mitigation, response, and recovery. This consistent approach enables a shared understanding of key terminology, functions, processes, and tools. This approach also fosters continuous improvement-related interoperability and collaboration across an organization’s components.
  • Support National Preparedness. By conducting continuous improvement activities, organizations support the development and sustainment of core capabilities across the whole community. Continuous improvement activities also ensure that organizations are able to support assessments of national preparedness in a timely, actionable, and meaningful way.
  • Effective Issue Resolution and Information Sharing. Through improvement planning, organizations complete continuous improvement action items at the lowest level possible while facilitating the sharing of strengths and areas for improvement.
  • Application across Operational Phases. The functions, processes, and tools apply to all operational phases, including:
    • Near-real time collection and analysis during real-world events or exercises
    • Post-event/exercise analysis
    • Trend analysis across multiple events/exercises over time

Lesson 6 Summary

In this lesson, you learned about the after-action reporting process, determining corrective actions, finalizing the After-Action Report (AAR), and improvement implementation.

Objectives: Having completed this lesson, you are able to:

  • Identify the purpose and content of the After-Action Report (AAR)/Improvement Plan (IP).
  • Describe the After-Action Meeting (AAM).

Course Summary

The course objectives include:
  • Define the roles and responsibilities of an exercise evaluator.
  • Discover the tools necessary to support the exercise evaluator for successful exercise evaluation.
  • Identify the necessary tasks in conducting an exercise evaluation.
  • Recognize methods of analyzing exercise data.
This course discussed the following topics:

  • Introduction to exercise evaluation

This lesson distinguished between discussion-based and operations-based exercise evaluation, described how capabilities and objectives impact evaluation, explained the purpose of exercises, and defined the improvement planning process.

  • Explain an exercise

This topic discussed the different types of exercises, including discussion-based and operations-based, as well as evaluator qualifications, evaluator characteristics, and evaluator sources.

  • Explain why exercises are important

This topic discussed the importance of exercises in national preparedness.

  • Explain the exercise evaluation steps.

This topic described the exercise evaluation steps including planning, observing and collecting and analyzing data, developing the draft after-action report, identifying improvements, conducting the after-action meeting, finalizing the After Action Report (AAR), and tracking implementation.

  • Roles and Responsibilities of the Exercise Evaluator

This topic identified the skill sets needed to be a lead evaluator and the criteria used to select evaluators. It also identified who should serve as an evaluator, defined the challenges of evaluation, and described the evaluator briefing.

  • Exercise Observation and Data Collection

This topic discussed the tools available for data collection, how to use the EEG as a guide, and how to document exercise logs and use them as evidence of a successful or unsuccessful exercise.

  •  Evaluation data analysis

This topic described the components of the post-exercise controller/evaluator debriefing, and explained how to develop effective recommendations for improvement.

  • Exercise wrap-up activities

This final topic listed the components of an AAR/IP, explained how to write an analysis of a capability, described the purpose of the Improvement Plan and the improvement plan matrix, and explained how participating organizations generate corrective actions.