
Usage simulations

Usage simulations involve reviewing a system to identify usability problems. These reviews are usually carried out by experts, who simulate the behavior of less experienced users and try to anticipate the usability problems those users will encounter. For this reason, they may also be referred to as expert reviews or expert evaluations.


Ideally, these reviewers will be experts in HCI and will also have broad experience of different kinds of systems, so they should be able to spot usability problems concerned with inconsistency, poor task structure, confusing screen design, and so on. The main reason for having reviewers pretend to be less experienced users, rather than employing typical users in the first place, is efficiency and prescriptive feedback. In terms of efficiency, a small number of reviewers can usually identify a whole range of potential problems for users during a single session; real users would take much longer and require more facilities. Furthermore, experts are often forthcoming with prescriptive feedback about how the system can be improved and how usability problems can be put right. Little prompting is usually needed to get reviewers to suggest possible solutions to the problems identified, because they have experienced many systems, and the design team may benefit from this additional input. Generally, they provide detailed reports based on their use of the prototype or working system.


While reviews are relatively straightforward, it is necessary to consider the following:

  • To ensure an impartial opinion, the reviewers should not have been involved with previous versions of the system or prototype.

  • The reviewers should have suitable experience, both of the application and of HCI. Media and creative design expertise may also be needed for some systems. Finding a small panel of reviewers with the necessary expertise may be difficult, and compromises may have to be made.

  • The role of the reviewers needs to be clearly defined to ensure that they adopt the required perspective when using the system. While it is relatively easy to assess both the very limited understanding of novice users and the extensive knowledge of very experienced users, intermediate categories of user experience are much more difficult to define and role play.

  • The tasks undertaken, the system or prototype used, and any accessory materials, such as manuals or tutorials, have to be common to all experts and representative of those intended for the eventual users.

During data collection and analysis, the aim is to obtain a common set of factors from the individual reports that address the most important problems. This aim may be achieved in three ways:

  • Structured reporting: reviewers have to report observations in a set way. For example, they are asked to specify the nature of the problems they encounter, their source, their importance for the user, and any possible remedies (a simple sketch of such a report entry follows this list).

  • Unstructured reporting: reviewers report their observations freely, and common problem areas are then categorized.

  • Predefined categorization: reviewers are given a list of problem categories and they report the occurrence of problems in these categories.
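
To make the structured and predefined-categorization approaches more concrete, here is a minimal sketch of how a single reviewer observation could be recorded so that individual reports can later be merged into a common set of problem areas. The field names, severity scale, and category list are illustrative assumptions, not part of any standard tool or template.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    # Illustrative predefined problem categories (assumed, not standardized)
    INCONSISTENCY = "inconsistency"
    TASK_STRUCTURE = "poor task structure"
    SCREEN_DESIGN = "confusing screen design"
    OTHER = "other"


@dataclass
class Observation:
    """One structured entry in a reviewer's report."""
    problem: str        # nature of the problem encountered
    source: str         # where in the system it occurs
    importance: int     # importance for the user, e.g. 1 (minor) to 5 (severe)
    remedy: str         # possible remedy suggested by the reviewer
    category: Category  # predefined category assigned by the reviewer


def common_problem_areas(reports: list[list[Observation]]) -> Counter:
    """Merge individual reviewers' reports and count how often each category occurs."""
    counts = Counter()
    for report in reports:
        counts.update(obs.category for obs in report)
    return counts


# Example: two reviewers' reports merged into a common view
reviewer_a = [
    Observation("Labels differ between screens", "settings page", 3,
                "Reuse one label set", Category.INCONSISTENCY),
]
reviewer_b = [
    Observation("Checkout steps are out of order", "checkout flow", 4,
                "Reorder steps to match the task", Category.TASK_STRUCTURE),
    Observation("Date formats vary", "booking form", 2,
                "Pick one format", Category.INCONSISTENCY),
]

print(common_problem_areas([reviewer_a, reviewer_b]).most_common())
```

Recording observations in a common shape like this is simply one way to make it easier to collate reports from several reviewers and to see which problem categories recur most often.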

Reviews appear to be an attractive method in terms of their potential efficiency and prescriptive feedback. However, a few problems need to be noted: bias, locating suitable reviewers, role-playing, and predicting the behavior of real users.







