In the laboratory benchmarking done in usability engineering, users have little or no control over what they do, or how and when they do it. This kind of setting is unnatural and quite different from the environments in which most people work. Awareness of these differences has encouraged researchers to explore techniques that provide data reflecting real usage more accurately. Contextual inquiry is one such form of elicitation, which can be used in evaluation.
In this approach users and researchers work together to identify and understand usability problems within the normal working environment of the user. Whiteside et al. (1988) identified the following key differences:
Work context: typically, word-processing benchmark tasks are six pages long, whereas Whiteside et al. report that most of the people in their studies were working with documents 30-50 pages long.
Time context: experiments generally have a prescribed time context, whereas in real work environments people tend to have some choice about the order in which they do tasks and how long they take.
Motivational context: in the experimental context the experimenter controls the situation, whereas in the work context there is usually scope for some negotiation.
Social context: in the work environment there is normally a social network of support that does not exist in the experimental context.
Unlike usability engineering, contextual inquiry has its roots in the ethnographic paradigm rather than in science or engineering. Usability issues of concern are identified by users, or by users and evaluators collaboratively, while users are working in their natural work environments on their own work. The term "contextual interview" has been used to describe the discussions that drive contextual inquiry: an interview between users and evaluators in which any aspect of concern is discussed and recorded on video for re-examination later by users and evaluators together. The kinds of things that are of particular interest to the evaluator (Holtzblatt and Beyer, 1993) are:
structure and language used in the work,
individual and group actions and intentions,
the culture affecting the work,
explicit and implicit aspects of the work.
As with any protocol data, analysis may be time-consuming, but Holtzblatt and Beyer (1993) recommend that evaluators try to:
get as close to the work as possible,
uncover work practice hidden in words,
create interpretations with customers,
let customers expand the scope of the discussion.
Unlike usability engineering, there are no metrics to feed back to the design team, although judiciously selected video clips of users struggling with some aspect of their work provide strong and meaningful messages to the design team. The interpretation of the data is also done with reference to the wider work context and the users' general aims. As Whiteside et al. (1988) say, "The important point about contextualism is that it implies that interpretation is primary (rather than data); knowledge lives in practical action (rather than being based on representation) and it is assumed that behavior is meaningful only in context (rather than that behavior can be studied scientifically)".
Contextual inquiry and usability engineering are complementary approaches in many respects as they inform design in different ways.
Contextual inquiry produces data from which we can infer valuable information about the context of work and how the system fits that context. It is a useful technique for obtaining requirements for a new product, particularly if the product is some form of groupware, such as a CSCW system.
Usability engineering is a good technique for refining and fine-tuning a design and for ensuring that internal user interface standards are upheld. The metrics that are produced enable the design team to compare one version of a system directly with another, which is particularly useful when upgrading products.