How to improve your UX designs with Task Analysis

TASK ANALYSIS AND MODELING

Task analysis for interface design uses either an elaborative or object-oriented approach but applies this approach to human activities.

Task analysis can be applied in two ways. As we have already noted, an interactive, computer-based system is often used to replace a manual or semi-manual activity.

To understand the tasks that must be performed to accomplish the goal of the activity, a human engineer must understand the tasks that humans currently perform (when using a manual approach) and then map these into a similar (but not necessarily identical) set of tasks that are implemented in the context of the user interface.

Alternatively, the human engineer can study an existing specification for a computer-based solution and derive a set of user tasks that will accommodate the user model, the design model, and the system perception.

Regardless of the overall approach to task analysis, a human engineer must first define and classify tasks. We have already noted that one approach is stepwise elaboration.

For example, assume that a small software company wants to build a computer-aided design system explicitly for interior designers.

By observing an interior designer at work, the engineer notices that interior design comprises a number of major activities: furniture layout, fabric and material selection, wall and window coverings selection, presentation (to the customer), costing, and shopping.

Each of these major tasks can be elaborated into subtasks.

For example, furniture layout can be refined into the following tasks:

  1. draw a floor plan based on room dimensions
  2. place windows and doors at appropriate locations
  3. use furniture templates to draw scaled furniture outlines on the floor plan
  4. move furniture outlines to get the best placement
  5. label all furniture outlines
  6. draw dimensions to show location
  7. draw a perspective view for the customer

A similar approach could be used for each of the other major tasks.

Subtasks 1–7 can each be refined further. Subtasks 1–6 will be performed by manipulating information and performing actions within the user interface. On the other hand, subtask 7 can be performed automatically in software and will result in little direct user interaction.
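To make the elaboration concrete, the resulting hierarchy can be recorded in a simple data structure for later reference. The sketch below (Python; the structure and the print_hierarchy helper are illustrative conventions, not part of any prescribed notation) captures the furniture-layout task and its seven subtasks.

```python
# A minimal sketch of a task hierarchy produced by stepwise elaboration.
# Task names follow the furniture-layout example above; the structure
# itself is only a design aid.

furniture_layout = {
    "furniture layout": [
        "draw a floor plan based on room dimensions",
        "place windows and doors at appropriate locations",
        "use furniture templates to draw scaled furniture outlines on the floor plan",
        "move furniture outlines to get the best placement",
        "label all furniture outlines",
        "draw dimensions to show location",
        "draw a perspective view for the customer",  # largely automated in software
    ]
}

def print_hierarchy(tasks: dict, indent: int = 0) -> None:
    """Print a task hierarchy as an indented, numbered outline."""
    for task, subtasks in tasks.items():
        print(" " * indent + task)
        for i, subtask in enumerate(subtasks, start=1):
            print(" " * (indent + 2) + f"{i}. {subtask}")

print_hierarchy(furniture_layout)
```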

The design model of the interface should accommodate each of these tasks in a way that is consistent with the user model (the profile of a “typical” interior designer) and system perception (what the interior designer expects from an automated system).

An alternative approach to task analysis takes an object-oriented point of view. The human engineer observes the physical objects that are used by the interior designer and the actions that are applied to each object. For example, the furniture template would be an object in this approach to task analysis.

The interior designer would select the appropriate template, move it to a position on the floor plan, trace the furniture outline and so forth.

The design model for the interface would not provide a literal implementation for each of these actions, but it would define user tasks that accomplish the end result (drawing furniture outlines on the floor plan).
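The object-oriented view can be recorded in a similar way, object by object. The sketch below (Python; the AnalysisObject class and the listed actions are hypothetical illustrations, not part of any CAD system) pairs each observed physical object with the actions applied to it.

```python
from dataclasses import dataclass, field

# Sketch of object-oriented task analysis: each physical object observed in
# the manual process is recorded together with the actions applied to it.
# Names (furniture template, floor plan) and actions are hypothetical.

@dataclass
class AnalysisObject:
    name: str
    actions: list = field(default_factory=list)

furniture_template = AnalysisObject(
    name="furniture template",
    actions=["select", "move to position on floor plan", "trace outline"],
)

floor_plan = AnalysisObject(
    name="floor plan",
    actions=["draw to scale", "place windows and doors", "annotate dimensions"],
)

for obj in (furniture_template, floor_plan):
    print(f"{obj.name}: {', '.join(obj.actions)}")
```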

INTERFACE DESIGN ACTIVITIES

Once task analysis has been completed and all tasks (or objects and actions) required by the end-user have been identified in detail, the interface design activity commences.

The first interface design steps can be accomplished using the following approach:

  • Establish the goals and intentions for each task.
  • Map each goal and intention to a sequence of specific actions.
  • Specify the action sequence of tasks and subtasks, also called a user scenario, as it will be executed at the interface level.
  • Indicate the state of the system; that is, what does the interface look like at the time that a user scenario is performed?
  • Define control mechanisms; that is, the objects and actions available to the user to alter the system state.
  • Show how control mechanisms affect the state of the system.
  • Indicate how the user interprets the state of the system from information provided through the interface.

Always following the golden rules discussed above, the interface designer must also consider how the interface will be implemented, the environment (e.g., display technology, operating system, development tools) that will be used, and other elements of the application that “sit behind” the interface.

Defining Interface Objects and Actions

An important step in interface design is the definition of interface objects and the actions that are applied to them. That is, a description of a user scenario is written.

Nouns (objects) and verbs (actions) are isolated to create a list of objects and actions.

Once the objects and actions have been defined and elaborated iteratively, they are categorized by type. Target, source, and application objects are identified.

A source object (e.g., a report icon) is dragged and dropped onto a target object (e.g., a printer icon).

The implication of this action is to create a hard-copy report. An application object represents application-specific data that is not directly manipulated as part of screen interaction. For example, a mailing list is used to store names for a mailing.

The list itself might be sorted, merged, or purged (menu-based actions) but it is not dragged and dropped via user interaction.
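To illustrate the categorization step, the sketch below (Python; the object names and the drop_actions mapping are hypothetical) tags each object as a source, target, or application object and resolves a drag-and-drop pairing to its implied action.

```python
# Sketch: categorizing interface objects and resolving a drag-and-drop pairing.
# Object names and the resulting actions are hypothetical illustrations.

object_types = {
    "report icon": "source",        # dragged by the user
    "printer icon": "target",       # receives a dropped source object
    "mailing list": "application",  # manipulated via menu actions, not drag/drop
}

# (source, target) -> implied action
drop_actions = {
    ("report icon", "printer icon"): "print hard-copy report",
}

def handle_drop(source: str, target: str) -> str:
    """Return the action implied by dropping `source` onto `target`."""
    if object_types.get(source) != "source" or object_types.get(target) != "target":
        return "no action: invalid drag-and-drop pairing"
    return drop_actions.get((source, target), "no action defined for this pairing")

print(handle_drop("report icon", "printer icon"))   # print hard-copy report
print(handle_drop("mailing list", "printer icon"))  # invalid pairing
```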

When the designer is satisfied that all important objects and actions have been defined (for one design iteration), screen layout is performed.

Like other interface design activities, screen layout is an interactive process in which graphical design and placement of icons, definition of descriptive screen text, specification and titling for windows, and definition of major and minor menu items are conducted.

If a real world metaphor is appropriate for the application, it is specified at this time and the layout is organized in a manner that complements the metaphor.

To provide a brief illustration of the design steps noted previously, we consider a user scenario for an advanced version of the SafeHome system. In the advanced version, SafeHome can be accessed via modem or through the Internet.

A PC application allows the homeowner to check the status of the house from a remote location, reset the SafeHome configuration, arm and disarm the system, and (using an extra-cost video option) monitor rooms within the house visually.

A preliminary user scenario for the interface follows:

Scenario: The homeowner wishes to gain access to the SafeHome system installed in his house. Using software operating on a remote PC (e.g., a notebook computer carried by the homeowner while at work or traveling), the homeowner determines the status of the alarm system, arms or disarms the system, reconfigures security zones, and views different rooms within the house via preinstalled video cameras.

To access SafeHome from a remote location, the homeowner provides an identifier and a password. These define levels of access (e.g., not all users may be able to reconfigure the system) and provide security. Once validated, the user (with full access privileges) checks the status of the system and changes status by arming or disarming SafeHome.

The user reconfigures the system by displaying a floor plan of the house, viewing each of the security sensors, displaying each currently configured zone, and modifying zones as required.

The user views the interior of the house via strategically placed video cameras. The user can pan and zoom each camera to provide different views of the interior.

Homeowner tasks:

  • accesses the SafeHome system
  • enters an ID and password to allow remote access
  • checks system status
  • arms or disarms SafeHome system
  • displays floor plan and sensor locations
  • displays zones on floor plan
  • changes zones on floor plan
  • displays video camera locations on floor plan
  • selects video camera for viewing
  • views video images (4 frames per second)
  • pans or zooms the video camera

Objects and actions are extracted from this list of homeowner tasks. The majority of objects noted are application objects.

However, video camera location (a source object) is dragged and dropped onto video camera (a target object) to create a video image (a window with video display).
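To suggest how this particular drag-and-drop might translate into interface behavior, the sketch below (Python; the VideoWindow class and the camera identifier "LR" are hypothetical placeholders) creates a video-display window when a camera-location object is dropped onto the camera object.

```python
from dataclasses import dataclass

# Sketch: dropping a camera-location (source object) onto the video camera
# icon (target object) creates a video image window. The class name, the
# camera identifier "LR", and the frame rate are hypothetical.

@dataclass
class VideoWindow:
    camera_id: str
    frames_per_second: int = 4

    def describe(self) -> str:
        return (f"Video window streaming from camera '{self.camera_id}' "
                f"at {self.frames_per_second} fps")

def drop_camera_location(camera_id: str) -> VideoWindow:
    """Handle the drop of a camera-location icon onto the camera icon."""
    return VideoWindow(camera_id=camera_id)

window = drop_camera_location("LR")   # living-room camera
print(window.describe())
```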

A preliminary sketch of the screen layout for video monitoring is created (Figure 2). To invoke the video image, a video camera location icon, C, located in the floor plan displayed in the monitoring window, is selected.

In this case a camera location in the living room, LR, is then dragged and dropped onto the video camera icon in the upper left-hand portion of the screen. The video image window appears, displaying streaming video from the camera located in the living room (LR).

The zoom and pan control slides are used to control the magnification and direction of the video image. To select a view from another camera, the user simply drags and drops a different camera location icon onto the camera icon in the upper left-hand corner of the screen.

The layout sketch shown would have to be supplemented with an expansion of each menu item within the menu bar, indicating what actions are available for the user.

Figure 2 – Example: SafeHome (Preliminary Screen Layout), video monitoring mode (state)

A complete set of sketches for each homeowner task noted in the user scenario would be created during the interface design.

Design Issues

As the design of a user interface evolves, four common design issues almost always surface: system response time, user help facilities, error information handling, and command labeling.

Unfortunately, many designers do not address these issues until relatively late in the design process (sometimes the first inkling of a problem doesn’t occur until an operational prototype is available).

Unnecessary iteration, project delays, and customer frustration often result. It is far better to establish each as a design issue to be considered at the beginning of software design, when changes are easy and costs are low.

System response time is the primary complaint for many interactive applications. In general, system response time is measured from the point at which the user performs some control action (e.g., hits the return key or clicks a mouse) until the software responds with desired output or action.

System response time has two important characteristics: length and variability. If the length of system response is too long, user frustration and stress are the inevitable result.

However, a very brief response time can also be detrimental if the user is being paced by the interface. A rapid response may force the user to rush and therefore make mistakes.

Variability refers to the deviation from average response time, and in many ways, it is the most important response time characteristic.

Low variability enables the user to establish an interaction rhythm, even if response time is relatively long. For example, a 1-second response to a command is preferable to a response that varies from 0.1 to 2.5 seconds.

In the latter case, the user is always off balance, always wondering whether something “different” has occurred behind the scenes.
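Both characteristics are easy to quantify once response times are logged. The sketch below (Python; the sample timings are invented for illustration) computes the mean and spread for a steady set of responses and an erratic one.

```python
from statistics import mean, stdev

# Sketch: quantifying the two response-time characteristics discussed above.
# The sample values are invented for illustration only.

steady = [1.0, 1.1, 0.9, 1.0, 1.0]     # consistent ~1-second responses
erratic = [0.1, 2.5, 0.3, 1.8, 0.5]    # highly variable responses

for label, samples in (("steady", steady), ("erratic", erratic)):
    print(f"{label}: mean = {mean(samples):.2f}s, "
          f"std dev = {stdev(samples):.2f}s")

# The two sets have similar averages, but the erratic set's large standard
# deviation is what keeps the user "off balance".
```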

Almost every user of an interactive, computer-based system requires help now and then. In some cases, a simple question addressed to a knowledgeable colleague can do the trick. In others, detailed research in a multivolume set of “user manuals” may be the only option. In many cases, however, modern software provides on-line help facilities that enable a user to get a question answered or resolve a problem without leaving the interface.

Two different types of help facilities are encountered: integrated and add-on. An integrated help facility is designed into the software from the beginning.

It is often context sensitive, enabling the user to select from those topics that are relevant to the actions currently being performed. Obviously, this reduces the time required for the user to obtain help and increases the “friendliness” of the interface.
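One way to picture an integrated, context-sensitive facility is as a lookup keyed by the user's current activity. The sketch below (Python; the context names and topic strings are hypothetical) returns only the topics relevant to the task being performed.

```python
# Sketch of a context-sensitive help lookup: the facility offers only topics
# relevant to what the user is doing. Context names and topics are hypothetical.

help_topics = {
    "furniture layout": [
        "Placing a furniture template",
        "Moving and rotating outlines",
        "Labeling outlines",
    ],
    "costing": [
        "Adding items to the cost sheet",
        "Applying a customer discount",
    ],
}

def contextual_help(current_context: str) -> list:
    """Return help topics relevant to the user's current activity."""
    return help_topics.get(current_context, ["General help index"])

print(contextual_help("furniture layout"))
print(contextual_help("shopping"))   # unknown context falls back to the index
```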

An add-on help facility is added to the software after the system has been built. In many ways, it is really an on-line user’s manual with limited query capability.

The user may have to search through a list of hundreds of topics to find appropriate guidance, often making many false starts and receiving much irrelevant information.

There is little doubt that the integrated help facility is preferable to the add-on approach. A number of design issues [RUB88] must be addressed when a help facility is considered:

  • Will help be available for all system functions and at all times during system interaction? Options include help for only a subset of all functions and actions or help for all functions.
  • How will the user request help? Options include a help menu, a special function key, or a HELP command.
  • How will help be represented? Options include a separate window, a reference to a printed document (less than ideal), or a one- or two-line suggestion produced in a fixed screen location.
  • How will the user return to normal interaction? Options include a return button displayed on the screen, a function key, or control sequence.
  • How will help information be structured? Options include a “flat” structure in which all information is accessed through a keyword, a layered hierarchy of information that provides increasing detail as the user proceeds into the structure, or the use of hypertext.

Error messages and warnings are “bad news” delivered to users of interactive systems when something has gone awry.

At their worst, error messages and warnings impart useless or misleading information and serve only to increase user frustration. There are few computer users who have not encountered an error of the form:

SEVERE SYSTEM FAILURE -- 14A

Somewhere, an explanation for error 14A must exist; otherwise, why would the designers have added the identification? Yet, the error message provides no real indication of what is wrong or where to look to get additional information. An error message presented in this manner does nothing to assuage user anxiety or to help correct the problem.

In general, every error message or warning produced by an interactive system should have the following characteristics:

  • The message should describe the problem in jargon that the user can understand.
  • The message should provide constructive advice for recovering from the error.
  • The message should indicate any negative consequences of the error (e.g., potentially corrupted data files) so that the user can check to ensure that they have not occurred (or correct them if they have).
  • The message should be accompanied by an audible or visual cue. That is, a beep might be generated to accompany the display of the message, or the message might flash momentarily or be displayed in a color that is easily recognizable as the “error color.”
  • The message should be “nonjudgmental.” That is, the wording should never place blame on the user.

Because no one really likes bad news, few users will like an error message no matter how well designed. But an effective error message philosophy can do much to improve the quality of an interactive system and will significantly reduce user frustration when problems do occur.
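These characteristics lend themselves to a small template. The sketch below (Python; the field names and the sample wording are hypothetical) assembles a message that names the problem in plain language, offers recovery advice, warns about consequences, provides a cue, and avoids blaming the user.

```python
from dataclasses import dataclass

# Sketch: a structured error message reflecting the characteristics above.
# Field names and the sample wording are hypothetical.

@dataclass
class ErrorMessage:
    problem: str          # plain-language description of what went wrong
    advice: str           # constructive recovery advice
    consequences: str     # negative consequences the user should check
    audible_cue: bool = True   # accompany display with a beep or visual cue

    def render(self) -> str:
        cue = "\a" if self.audible_cue else ""   # terminal bell as a stand-in cue
        return (f"{cue}{self.problem}\n"
                f"Suggested action: {self.advice}\n"
                f"Please check: {self.consequences}")

msg = ErrorMessage(
    problem="The document could not be saved to the selected folder.",
    advice="Choose a different folder or free up disk space, then save again.",
    consequences="Changes made since the last successful save are not yet on disk.",
)
print(msg.render())
```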

The typed command was once the most common mode of interaction between user and system software and was commonly used for applications of every type.

Today, the use of window-oriented, point and pick interfaces has reduced reliance on typed commands, but many power-users continue to prefer a command-oriented mode of interaction. A number of design issues arise when typed commands are provided as a mode of interaction:

  • Will every menu option have a corresponding command?
  • What form will commands take? Options include a control sequence (e.g., alt-P), function keys, or a typed word.
  • How difficult will it be to learn and remember the commands? What can be done if a command is forgotten?
  • Can commands be customized or abbreviated by the user?

As we noted earlier, conventions for command usage should be established across all applications. It is confusing and often error-prone for a user to type alt-D when a graphics object is to be duplicated in one application and alt-D when a graphics object is to be deleted in another. The potential for error is obvious.
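A simple registry can make such conflicts visible before they reach users. The sketch below (Python; the applications and key bindings are hypothetical) flags any keystroke that is bound to different actions in different applications.

```python
# Sketch: checking command conventions across applications so that the same
# keystroke never means different things. Applications and bindings are
# hypothetical.

bindings = {
    "drawing tool": {"alt-D": "duplicate object", "alt-P": "print"},
    "layout tool":  {"alt-D": "delete object",    "alt-P": "print"},
}

def find_conflicts(apps: dict) -> dict:
    """Return keystrokes bound to different actions in different applications."""
    seen = {}      # keystroke -> set of actions
    for commands in apps.values():
        for key, action in commands.items():
            seen.setdefault(key, set()).add(action)
    return {key: actions for key, actions in seen.items() if len(actions) > 1}

print(find_conflicts(bindings))
# e.g. {'alt-D': {'duplicate object', 'delete object'}}
```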

IMPLEMENTATION TOOLS

Once a design model is created, it is implemented as a prototype, examined by users (who fit the user model described earlier) and modified based on their comments.

To accommodate this iterative design approach, a broad class of interface design and prototyping tools has evolved.

Called user-interface toolkits or user-interface development systems (UIDS), these tools provide components or objects that facilitate creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment.

Using prepackaged software components to create a user interface, a UIDS provides built-in mechanisms [MYE89] for

  • managing input devices (such as a mouse or keyboard)
  • validating user input
  • handling errors and displaying error messages
  • providing feedback (e.g., automatic input echo)
  • providing help and prompts
  • handling windows and fields, scrolling within windows
  • establishing connections between application software and the interface
  • insulating the application from interface management functions
  • allowing the user to customize the interface

These functions can be implemented using either a language-based or graphical approach.
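To give a flavor of the language-based approach, the sketch below (Python; Window, Menu, and Button are hypothetical stand-ins for the components a real UIDS would supply, not any actual toolkit API) declares a monitoring window and its controls rather than drawing them by hand.

```python
from dataclasses import dataclass, field

# Sketch of the language-based approach: the interface is declared in code
# using prepackaged components. Window, Menu, and Button are hypothetical
# stand-ins for the components a real UIDS would supply.

@dataclass
class Button:
    label: str

@dataclass
class Menu:
    title: str
    items: list = field(default_factory=list)

@dataclass
class Window:
    title: str
    menus: list = field(default_factory=list)
    buttons: list = field(default_factory=list)

monitoring = Window(
    title="SafeHome - Video Monitoring",
    menus=[Menu("Monitor", ["Connect", "Disconnect", "Configure zones"])],
    buttons=[Button("Arm"), Button("Disarm"), Button("Pan"), Button("Zoom")],
)

print(monitoring.title)
for menu in monitoring.menus:
    print(f"  Menu '{menu.title}': {', '.join(menu.items)}")
```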

DESIGN EVALUATION

Once an operational user interface prototype has been created, it must be evaluated to determine whether it meets the needs of the user.

Evaluation can span a formality spectrum that ranges from an informal “test drive,” in which a user provides impromptu feedback, to a formally designed study that uses statistical methods for the evaluation of questionnaires completed by a population of end-users.

The user interface evaluation cycle takes the form shown in Figure 3. After the design model has been completed, a first-level prototype is created.

The prototype is evaluated by the user, who provides the designer with direct comments about the efficacy of the interface. In addition, if formal evaluation techniques are used (e.g., questionnaires, rating sheets), the designer may extract information from these data (e.g., 80 percent of all users did not like the mechanism for saving data files).

Figure 3 – User Interface Evaluation Cycle

Design modifications are made based on user input and the next level prototype is created. The evaluation cycle continues until no further modifications to the interface design are necessary.

The prototyping approach is effective, but is it possible to evaluate the quality of a user interface before a prototype is built?

If potential problems can be uncovered and corrected early, the number of loops through the evaluation cycle will be reduced and development time will shorten.

If a design model of the interface has been created, a number of evaluation criteria can be applied during early design reviews:

  1. The length and complexity of the written specification of the system and its interface provide an indication of the amount of learning required by users of the system.
  2. The number of user tasks specified and the average number of actions per task provide an indication of interaction time and the overall efficiency of the system.
  3. The number of actions, tasks, and system states indicated by the design model imply the memory load on users of the system.
  4. Interface style, help facilities, and error handling protocol provide a general indication of the complexity of the interface and the degree to which it will be accepted by the user.

Once the first prototype is built, the designer can collect a variety of qualitative and quantitative data that will assist in evaluating the interface.

To collect qualitative data, questionnaires can be distributed to users of the prototype.

Questions can be:

  • simple yes/no response,
  • numeric response,
  • scaled (subjective) response, or
  • percentage (subjective) response.

Examples are

  • Were the icons self-explanatory? If not, which icons were unclear?
  • Were the actions easy to remember and to invoke?
  • How many different actions did you use?
  • How easy was it to learn basic system operations (scale 1 to 5)?
  • Compared to other interfaces you’ve used, how would this rate—top 1%, top 10%, top 25%, top 50%, bottom 50%?

If quantitative data are desired, a form of time study analysis can be conducted.

Users are observed during interaction, and data—such as number of tasks correctly completed over a standard time period, frequency of actions, sequence of actions, time spent “looking” at the display, number and types of errors, error recovery time, time spent using help, and number of help references per standard time period—are collected and used as a guide for interface modification.
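These observations reduce to simple counts and averages. The sketch below (Python; the observation records are invented for illustration) aggregates a few of the measures listed above for a standard observation period.

```python
from statistics import mean

# Sketch: aggregating quantitative time-study data collected while observing
# users of the prototype. The observation records are invented.

observations = [
    {"tasks_completed": 7, "errors": 2, "error_recovery_s": [12.0, 30.0], "help_refs": 1},
    {"tasks_completed": 9, "errors": 1, "error_recovery_s": [8.0], "help_refs": 0},
    {"tasks_completed": 6, "errors": 3, "error_recovery_s": [20.0, 15.0, 40.0], "help_refs": 2},
]

tasks = [o["tasks_completed"] for o in observations]
errors = [o["errors"] for o in observations]
recovery = [t for o in observations for t in o["error_recovery_s"]]
help_refs = [o["help_refs"] for o in observations]

print(f"mean tasks completed per period: {mean(tasks):.1f}")
print(f"mean errors per period:          {mean(errors):.1f}")
print(f"mean error recovery time:        {mean(recovery):.1f} s")
print(f"mean help references per period: {mean(help_refs):.1f}")
```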
