Saving Time With User-Centered Design

The Challenge

Background: Each year, the Cadet Wing (CW) assesses the more than 4,000 cadets at the US Air Force Academy. Two components of this assessment—the personal appearance inspection and the cadet interview—had been carried out on paper or with a survey tool that logged interview responses.

Goal: Reduce the number of hours required to collect and analyze the data by developing a user-friendly, digital tool that stores the data in one location.

My Role: Interaction designer, usability tester, and trainer.

The Iterative Design Process

User Research

The CW conducted the user research on the two main user groups: assessors and data analysts. From their research, I formed the following insights:

  • For the assessors:
    • They're time-crunched: they do not have much time to learn a new tool, and assessments need to be completed efficiently.
    • They exhibit a spectrum of comfort and abilities with technology: the product needs to be easy to use and intuitive.
    • They feel pressure to have their squadron receive high marks.
  • For the data analysts:
    • They wanted granularity in the data: they needed to be able to track comments, which interview questions were asked, and how many points were given for each category.
    • They were concerned about the integrity of the data.
    • They needed one-stop access to data: data for both assessments needs to be stored in one location.
    • They had more familiarity and comfort with technology than most assessors.

I added a third user to consider: the cadet. For cadets, the main need was the ability to see their scores, and whether they passed, after the assessment was over.

UX Methods

The main UX skills used in this project were:
  • Defining the Problem: We were initially approached with "We want to use the LMS for cadet assessment." Through careful questioning and listening, I was able to craft concrete goals for the project and identify its constraints.
  • Ideation and Journey Mapping: While ideating for possible solutions, I created journey maps for viable solutions, such as the one below.
    The user journey for either rubric
    These journey maps helped explain the pros and cons of the two viable solutions that came out of the ideation phase.
  • Prototyping: Because we had limited control over the user interface, I went straight to prototyping possible solutions in the LMS, focusing on the cleanest and simplest interactions.
  • Usability Testing and Iterating: Given the short lifespan of this project, I opted to adapt an agile framework. I quickly produced mockups of ideas and conducted usability testing at regularly scheduled meetings, then made revisions in light of the feedback from testing.

Interaction Design Decisions

One of the initial decisions was whether the LMS could be leveraged to achieve the goal. After defining the problem, it was clear that it could. We then had to make the following decisions during the design and development of the rubrics.

Choosing the Best Tool

Of the six assessment tools in the LMS, I chose the two best suited to achieving our goal: the tests tool and the rubric tool. I created mockups of each to showcase their functionality.

We settled on the rubric for the following reasons:

  • The workflow for the assessor was simplified but also more robust.
  • Data retrieval with a rubric would be cleaner than with a test.
  • It would be easy to track changes in and verify the data.

Grouping the Cadets

Rubrics have to be housed within a course in the LMS. So, we had to decide how many courses to create. The four main considerations for making this decision were:

  1. Page Load Times: fewer courses would put more cadets in a course, meaning a longer load time.
  2. Ease of Finding Cadet: more courses would act as a filter when trying to find a cadet to assess.
  3. Double Roles: some cadets would need both to be assessed and to assess other cadets.
  4. Data Collection: fewer courses would make data collection easier.

Ultimately, we chose one course per class year (four courses total) for the following reasons:
  • We needed at least two courses to accommodate the double roles.
  • We could easily create filters within a course to make it easy to find a cadet.
  • Courses would load quickly enough with a little over 1,000 cadets enrolled.
  • Data collection would not take too much longer than using one course.
  • Creating one course per class year is a natural way for users to divide up the student population.

Accessing the Rubrics

The appearance assessment had two different rubrics that could be used. We had to decide when an assessor would choose the appropriate rubric: after launching the rubric window or before. The two workflows would be:

  1. Launch the rubric window > choose the rubric > complete the rubric > save and submit the rubric.
    Choosing between the two rubrics
    The first and second rubric options

  2. Choose the rubric > launch the rubric window > complete the rubric > save and submit the rubric.
    The chosen rubric, with text box

After testing, we chose the second option because it was simpler for the assessors and reduced the chance of errors. The first option made some assessors mistakenly believe they were not done after filling in one rubric (note the "1 of 2" text); with the second option, there was no confusion about being done. The second option did require an extra column of data, but the reduction in data-entry errors was worth the extra column.

Assigning Points

The rubrics had to calculate scores correctly. For the interview, I leveraged the rubric's weighting function, assigning a 0% weight to the question-tracking rows and non-zero weights to the rest. This allowed us to track which questions were asked by selecting the appropriate question within the rubric, while reserving the points for the remaining rows.
The interview rubric, with the categories in each cell.
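The weighting trick above can be sketched as follows. This is an illustrative model, not Blackboard's internal calculation; the row names and point values are stand-ins.

```python
# Hypothetical sketch of the interview rubric's weighted scoring:
# question-tracking rows carry a 0% weight (they only record which
# question was asked), while the remaining rows carry the
# point-bearing weights.

def rubric_score(rows):
    """Each row is (weight_percent, points_awarded, points_possible)."""
    earned = sum(w / 100 * p for w, p, _ in rows)
    possible = sum(w / 100 * m for w, _, m in rows)
    return earned, possible

rows = [
    (0, 1, 1),    # question-tracking row: selected, but worth 0%
    (50, 8, 10),  # assumed point-bearing category
    (50, 9, 10),  # assumed point-bearing category
]
earned, possible = rubric_score(rows)  # 8.5 of 10.0
```

Because the question rows are weighted at 0%, selecting them records data without ever changing the score.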

For the uniform inspection rubric, I presented two ways the rubric could calculate points:

  1. Each category could be seen as an opportunity to gain points (i.e., cadets started with a low score and worked their way to a higher score);
    Point schema: earn points.

  2. Each category could be seen as an opportunity to lose points (i.e. they started with a high score and lost points for infractions).
    Point schema: lose points.

On behalf of the cadets, I advocated for the first option, but the leadership chose the second option.
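The two schemas can be sketched side by side. The categories and point values below are illustrative assumptions, not the actual rubric's; the sketch shows that when each deduction mirrors the points not earned, both schemas produce the same final score, so the choice was about framing rather than arithmetic.

```python
# Assumed category maxima for illustration only.
MAX_PER_CATEGORY = {"headgear": 10, "shoes": 10, "uniform": 20}

def earn_points(awarded):
    # Schema 1: start at zero and add what was earned per category.
    return sum(awarded.values())

def lose_points(awarded):
    # Schema 2: start at the perfect score and subtract infractions.
    perfect = sum(MAX_PER_CATEGORY.values())
    deductions = sum(MAX_PER_CATEGORY[c] - p for c, p in awarded.items())
    return perfect - deductions

awarded = {"headgear": 8, "shoes": 10, "uniform": 15}
earn_points(awarded) == lose_points(awarded)  # both yield 33
```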

Descriptions for Categories

Given the length of the rubric and the fact that the Levels of Achievement row could not be frozen, I wanted to give assessors an easy way to reference which column corresponded to which level of achievement. The options I presented were to show, in each cell, either the verbal description alone or the verbal description plus the level of achievement (using ALL CAPS to distinguish the two).
The top rubric doesn't have the categories in each cell, whereas the bottom does.
In testing, including both the description and the level of achievement increased the speed and accuracy of the assessments and reduced the assessor's cognitive load, so that's what we adopted for the final product.

Assigning a Grade

Our final decision was how assessment grades were displayed in the course. Keeping the cadet in mind, I suggested adding a feature that would display a pass or fail grade (along with appropriate color coding) in addition to the score received on each assessment. The leadership liked this feature, so we added it.
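The pass/fail feature can be sketched as a simple threshold check. The cutoff score and colors below are assumptions for illustration; the source does not state the actual passing threshold.

```python
# Assumed passing threshold and color coding, for illustration only.
PASS_THRESHOLD = 70  # hypothetical cutoff out of 100

def grade_display(score):
    """Return the label and color shown next to the assessment score."""
    if score >= PASS_THRESHOLD:
        return ("PASS", "green")
    return ("FAIL", "red")

grade_display(85)  # ("PASS", "green")
```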


The Results

  1. Saved over 915 hours of annual work collecting and analyzing assessment data.
  2. Overwhelmingly positive feedback, including a letter of commendation from the Vice Commandant of Cadets noting that "Dr. Padgett’s innovative placement of the Personal Appearance Inspection (PAI) into Blackboard is a game changer for us."
  3. Over 200 assessors who are happy to use the rubrics for Cadet Assessment.
  4. Full commitment from the CW to adopt the LMS for all training and assessment programs.