By the time the third faculty member left my office, I was ready to crawl into a hole. Grades had been mailed the night before, and teachers who were checking grades on the computer began to report discrepancies between the grades they submitted and the grades that were now in the system.

My response was immediate panic. Had a student hacked our network? Had a faculty member allowed a student to see his password? Would I have to tell the entire faculty that all grades and comments needed to be re-submitted? How do you explain to all those parents that their son’s grades are not accurate, or that he didn’t really make the honor roll this term? What about seniors who had to send these grades to colleges? These and other questions kept me busy for most of the day and kept me awake most of that night.

Upon closer inspection, however, the problem was not as bad as it originally appeared. There were only a handful of teachers who reported discrepancies, and we were able to fix them without much difficulty. The security of the network had not been compromised, and while there were some minor technical problems during grade input, those problems did not impact the accuracy of the grades.

Rather, the discrepancies were the result of input errors that came from a combination of poor understanding of our electronic gradebook program and a user interface that is somewhat misleading.

For instance, the gradebook program we use allows teachers to customize their own grading system. Some teachers did not change the default grade cutoffs in all of their classes, and so students were assigned letter grades that didn’t correspond to the teacher’s numeric average.
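To see how this kind of error happens, here is a minimal sketch of the mismatch. The cutoff values and the lookup function are hypothetical, not taken from any particular gradebook product; the point is only that an unchanged default table can assign a letter that contradicts the teacher’s intended scale.

```python
# Hypothetical cutoff tables: the program's defaults vs. a teacher's
# stricter scale that was never entered for one of the classes.
DEFAULT_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
TEACHER_CUTOFFS = [(93, "A"), (85, "B"), (77, "C"), (70, "D"), (0, "F")]

def letter_grade(average, cutoffs):
    """Return the first letter whose cutoff the average meets or exceeds."""
    for cutoff, letter in cutoffs:
        if average >= cutoff:
            return letter
    return "F"

# A 91.5 average earns an A under the default table, but the teacher's
# own scale says it should be a B -- the discrepancy parents would see.
print(letter_grade(91.5, DEFAULT_CUTOFFS))  # A
print(letter_grade(91.5, TEACHER_CUTOFFS))  # B
```

Nothing in the software was broken; the program faithfully applied whichever table was in effect for each class.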

In other cases, due to the way averaged columns were displayed, faculty members had made manual changes to grades in the column for final grades, thinking they were changing semester grades.

As I began to realize the cause of the problem, my first reaction was one of relief—followed by frustration: “Whew! It’s not my fault. The problem isn’t in the program or the network. It’s in the users. We just need to get a new faculty. That’ll fix it! We need a faculty that can follow the directions that I type up and explain to them in workshop after workshop.” It felt a lot better to be able to shift the blame to someone else.

The more I thought about why I was frustrated, though, the less sense it made. If only one or two people had a problem, it would be easier to dismiss it as their own fault, but problems were reported by about 10 percent of the teachers. These are educated people. All of them have at least four years of college, and some have their Ph.D.s. They are entrusted with the daily shaping of young minds, and they all do a pretty good job of it.

Additionally, some of the teachers who had problems with their grades are among the most active computer users on the faculty. So if the problem wasn’t the system, and it wasn’t the faculty, then what was the real cause of the problem?

Logic led me to think that this was a training problem, but we’ve had numerous workshops on the use of this program, ranging from full-faculty workshops at grade-input time to individual training sessions throughout the term when faculty were having trouble. How many workshops can you really have?

Eventually, however, I realized that the problem stemmed from the quality and type of training, not the quantity of it.

In order to spark motivation, we have employed “just-in-time” training. For us, this means having a workshop on how to use the various features of the grade input program as teachers need to use them.

For example, we taught teachers how to set up rosters and seating charts before the first day of school. We taught them how to add assignments during the first week of school, and in the week that first-term grades were due, we taught them how to adjust and submit their grades to be printed on report cards.

The philosophy behind this was that faculty members’ need to get their jobs done would drive their motivation to learn the skills being taught in the workshop. The fact that they were working on “real” projects would better engage their attention and make them more active participants in the workshop.

This type of training is very effective when you are trying to teach discrete skills. However, something was missing from our training that, I believe, led to the difficulties we experienced this quarter.

The focus of our just-in-time training was the accomplishment of a specific goal by following a series of specific instructions. This way, faculty members mastered a specific and essential task without being overloaded with information and confused.

Although they were able to perform these various tasks, not all faculty members were able to see how these different tasks related to each other. For some teachers, the transfer of skills from one task to a new one was more difficult.

Our training also failed to provide users with a general overview of how the program works. While some teachers could perform certain tasks very well, they had difficulty solving unexpected problems or adjusting to situations beyond those to which they had been exposed.

In short, we were training our users, rather than educating them.

Training is a perfectly acceptable solution for users who perform basic and repetitive functions on their computers. These users may never need to respond to situations outside the scope of their training.

Many users, however, require more than training. They require education. They require practice in the type of thinking needed for complex computer tasks, like maintaining a gradebook, that are highly customizable and likely to present situations and problems unique to each user.

While the fundamental principles of our training program are sound, I plan to add more activities next year that require transferring skills across different areas of the program. It is important that users have a more general understanding of how the program works.

Additionally, I plan to add activities which present problems requiring the user to look up answers in the manual or help file. While I would like the main focus of the training to be on “real” projects, I plan to add some simulated problems and evaluate the learner’s response to these problems.

Telling isn’t the same as teaching, and training isn’t the same as educating. Those of us who design and run workshops need to take our cues from our teachers in the classroom and model workshops that reflect a slightly more constructivist approach to learning, rather than the memorization of how to perform certain tasks.

Even the most avid of computer users would benefit from a greater overall understanding of a particular program or system and how to respond to unique and unexpected problems. Not only will this type of training help to avoid problems like the ones we experienced this quarter, but it will lead to a more independent, productive, and creative end user.