Background: Each year, the Cadet Wing (CW) assesses the more than 4,000 cadets at the US Air Force Academy. Two components of this process, the personal appearance inspection and the cadet interview, had been carried out on paper or with a survey tool used to log interview responses.
Goal: Reduce the hours required to collect and analyze the data by developing a user-friendly digital tool that stores all of the data in one location.
My Role: Interaction designer, usability tester, and trainer.
The CW conducted user research on the two main user groups: assessors and data analysts. From their research, I formed the following insights:
I added a third user to consider: the cadet. For cadets, the main need was the ability to see their scores, and whether they passed, after the assessment was over.
The initial request was simple: "We want to use the LMS for cadet assessment." Through careful questioning and listening, I was able to craft concrete goals for the project and identify its constraints.
One of the initial decisions was whether the LMS could be leveraged to achieve the goal. After defining the problem, it was clear that it could. We then had to make the following decisions during the design and development of the rubrics.
Of the six assessment tools in the LMS, I chose the two best suited to our goal: the tests tool and the rubric tool. I created mockups of each to showcase their functionality.
We settled on the rubric for the following reasons:
Rubrics have to be housed within a course in the LMS, so we had to decide how many courses to create. The four main considerations for this decision were:
The appearance assessment had two different rubrics that could be used. We had to decide when an assessor would choose the appropriate rubric: after launching the rubric window or before. The two workflows would be:
With the first option, the only cue that a second rubric remained was small "1 of 2" text, so assessors could be unsure whether they were done. With the second option, there was no confusion that you were done. It meant an extra column of data, but fewer errors inputting data were worth the extra column.
The rubrics had to calculate scores correctly. For the interview, I leveraged the rubric's weighting function, assigning a 0% weight to the question-tracking rows and non-zero weights to the rest. This allowed us to record which questions were asked, by selecting the appropriate question within the rubric, while reserving all of the points for the remaining rows.
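To illustrate the idea, here is a minimal sketch of weighted rubric scoring. It is not Blackboard's actual scoring engine, and the row names, weights, and point values are hypothetical:

```python
# Sketch of weighted rubric scoring: rows with 0% weight record a choice
# (which question was asked) without contributing to the score.

def rubric_score(rows, total_points):
    """Each row: (row weight as a fraction of 100%, fraction of that row earned)."""
    return total_points * sum(weight * earned for weight, earned in rows)

# Hypothetical interview rubric rows
rows = [
    (0.00, 1.0),   # tracking row: which interview question was asked (no points)
    (0.50, 0.8),   # scored row: response content, 80% of the row earned
    (0.50, 1.0),   # scored row: delivery and bearing, full credit
]

print(rubric_score(rows, total_points=100))  # -> 90.0
```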
For the uniform inspection rubric, I presented two ways the rubric could calculate points:
On behalf of the cadets, I advocated for the first option, but the leadership chose the second option.
Given the length of the rubric and the fact that the Levels of Achievement row could not be frozen, I wanted to give assessors an easy way to reference which column corresponded to which level of achievement. The options I presented were to show, in each cell, either the verbal description alone or the verbal description plus the level of achievement (in ALL CAPS to distinguish the two).
In testing, including both the description and the level of achievement increased the speed and accuracy of assessments and reduced the cognitive load on the assessor, so that is what we adopted for the final product.
Our final decision was how assessment grades would be displayed in the course. Keeping the cadet in mind, I suggested adding a feature that displays a pass or fail grade (with appropriate color coding) in addition to the score received on each assessment. The leadership liked this feature, so we added it.
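As a rough sketch of that display logic, assuming a hypothetical 70-point passing threshold and green/red color coding (the source does not specify the actual cutoff or colors, which would come from CW standards and the LMS display settings):

```python
# Sketch of the pass/fail display shown alongside the raw score.
PASS_THRESHOLD = 70  # assumed passing score out of 100

def display_grade(score):
    """Return the label and color a cadet would see next to their score."""
    if score >= PASS_THRESHOLD:
        return f"{score} - PASS", "green"
    return f"{score} - FAIL", "red"

print(display_grade(85))  # ('85 - PASS', 'green')
print(display_grade(60))  # ('60 - FAIL', 'red')
```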
"Dr. Padgett’s innovative placement of the Personal Appearance Inspection (PAI) into Blackboard is a game changer for us."