“For most people right now, they will be writing in a digital environment, and certainly for fourth graders, they’re growing into a world where our interactions are in a digital environment,” she said.

In fact, moving writing assessments to a digital platform has made them more relevant for young students today.

“Writing in a computer-based environment is part of the construct of what writing is–so looking at how young people are able to move into, with comfort, writing in a digital framework both tells us something about the move toward putting more assessment opportunities into the computer, but it also tells us something extremely significant about preparing young people to be writers at all–because to be a writer today is to write in digital environments,” Eidman-Aadahl said.

The study consisted of two parts:

  1. Small-scale usability testing to improve development of the assessment platform for fourth-grade students
  2. A pilot writing assessment administered to a sample of 13,000 students nationwide

The pilot examined students’ ability to organize and write typed responses to prompts, with students given either 20 or 30 minutes of writing time per prompt.

Students with 30 minutes of writing time were scored on a 6-point rubric, with 1 being the lowest and 6 being the highest.

Thirty-nine percent of students received a 1 or a 2, indicating little or marginal writing skill; 47 percent of students received a 3 or 4, indicating developing or adequate writing skills; and 14 percent of students received a 5 or 6, demonstrating competent or effective writing skills.

The average word count was 110 words per response. Thirty-one percent of students typed 1,001 keystrokes or more, and 71 percent of the highest-scoring student responses used 1,001 keystrokes or more, compared to 9 percent of the lowest-scoring responses.

Fifty-five percent of students used the backspace key fewer than 100 times, and 39 percent of students used the spell check tool at least four times.

When writing for different purposes:

  • 17 percent of student responses received a score of 5 or 6 when asked to “convey experience”
  • 15 percent of responses scored a 5 or 6 when asked to “explain”
  • 10 percent of student responses received a 5 or 6 when asked to “persuade”

The usability study helped inform the pilot assessment administered to students nationwide.

A number of lessons emerged from those usability tests, including:
1. Simplifying directions will help students understand the objective.

  • Students had trouble reading and following directions to adjust the computer screen. The test was altered to deliver one step at a time, accompanied by a recorded voice-over.
  • Many students (32 percent) didn’t spend enough time reading the general directions, so the directions were shortened and read aloud.

2. Tools should be easy to locate and well-labeled.

  • Students used icons more often than drop-down menus, so more icons were added to the toolbar to help students see the available editing options.
  • More than two-thirds of students (68 percent) had trouble finding the writing panel, so the buttons to locate the panel were enlarged.
  • Some students (25 percent) misinterpreted the original text-to-speech icon, so it was changed and a rollover label was added.
  • Nearly one-third of students (32 percent) thought a “reset” button erased all answers instead of the answer to their current question. That button was changed to a “clear answer” button.