Evaluating GIS Tools and Systems
We previously focused on a design process for GIS that involves needs assessment, concept development, prototyping, and implementation. Underlying each stage is evaluation, with the implication that evaluation occurs during and between every stage of design.
Evaluation is needed to ensure that the progress made between design stages is based, at least in part, on input from end-users or customers. Rather than testing the system with users only at the very end of the process, you test at each stage along the way, adding or removing features and capabilities according to user needs. Failing to evaluate "along the way" can result in wasted effort if a fully implemented system must be fundamentally revised based on user feedback gathered at the end of design and development.
What is Evaluation?
Evaluation is typically categorized into two broad areas: formative evaluation and summative evaluation. Formative evaluations focus on developing and refining designs. Summative evaluations compare an implemented system to an alternative system with the goal of measuring differences in performance or user satisfaction between the two systems. Quite often, formative evaluations happen in the early/middle stages of a design exercise and summative evaluations take place toward the end when a system has been implemented.
Common methods used in both types of evaluation include:
- Heuristic examinations - measure user responses to the system against a set of common system design criteria
- Surveys - use open-ended or closed-ended questions to identify system needs or areas for improvement
- Focus groups - involve group discussion of design options, user experiences, or other topics to inform or critique a design
- Interviews - use one-on-one questioning with users or customers to explore design options or gather feedback on tools
- Card-sorting - asks users to organize system tools/functions using paper cards to suggest an interface organization
- Expert evaluation - has system design and usability experts critique designs, prototypes, or final systems
- Field exercises - put the system through a "test run" using realistic data, scenarios, and tasks
- Cost-benefit analysis - uses metrics to weigh the costs of developing and using a system against the benefits associated with its products (see the sketch after this list)
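To make the cost-benefit idea concrete, here is a minimal sketch in Python of how you might compare the discounted costs of building and maintaining a GIS against the discounted benefits of its products. The yearly figures, discount rate, and five-year horizon are all made-up assumptions for illustration, not real project data.

```python
# Hypothetical cost-benefit sketch for a proposed GIS system.
# All figures below are illustrative placeholders, not real project data.

def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Assumed yearly costs: development in year 0, then maintenance and licensing.
costs = [250_000, 40_000, 40_000, 40_000, 40_000]

# Assumed yearly benefits: staff time saved, faster map and report production.
benefits = [0, 90_000, 120_000, 130_000, 130_000]

rate = 0.05  # assumed discount rate

net_flows = [b - c for b, c in zip(benefits, costs)]
project_npv = npv(net_flows, rate)
bcr = npv(benefits, rate) / npv(costs, rate)  # benefit-cost ratio

print(f"Net present value over 5 years: ${project_npv:,.0f}")
print(f"Benefit-cost ratio: {bcr:.2f}  (> 1.0 favors building the system)")
```

A benefit-cost ratio above 1.0, or a positive net present value, suggests the system's products are worth more than what it costs to build and run; the harder part in practice is putting defensible dollar values on the benefits.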
I have provided links to additional content explaining some of the evaluation methods that may not be familiar to you. Check them out!
Formal vs. Informal Evaluation
Not everyone can afford the time and money to conduct in-depth evaluations at every stage of the design process while developing a new GIS. Like everything else in GIS system design, trade-offs are involved, and it is important for you, the designer, to balance the need to confirm that your progress is meaningful against the need to keep moving toward the final system.
Evaluation efforts are often characterized as formal or informal, depending on the degree to which they use rigorous methods to ensure unbiased participant selection, sound methodology, and careful analysis of results. An informal evaluation might involve a few of your coworkers looking over a prototype design, while a formal evaluation could involve a dozen real end-users who complete a realistic exercise with the new GIS and then a post-activity interview and survey that gather structured and unstructured feedback.
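As a rough illustration of the "structured feedback" half of such a formal evaluation, the sketch below tallies mean agreement ratings for a few post-activity survey statements. The statements and the 1-5 ratings from a dozen hypothetical participants are invented for the example; the open-ended interview comments would still need qualitative review by hand.

```python
# Minimal sketch: summarizing structured survey feedback from a formal evaluation.
# Statement wording and ratings are hypothetical, not from a real study.
from statistics import mean, stdev

# Each participant rates post-exercise statements on a 1-5 agreement scale.
responses = {
    "The map tools supported my workflow":  [4, 5, 3, 4, 4, 5, 2, 4, 4, 3, 5, 4],
    "I could find the functions I needed":  [3, 4, 2, 3, 4, 3, 2, 3, 4, 3, 3, 2],
    "The exercise tasks felt realistic":    [5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 4, 5],
}

for statement, ratings in responses.items():
    print(f"{statement}: mean={mean(ratings):.2f}, "
          f"sd={stdev(ratings):.2f}, n={len(ratings)}")
```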
The point here is that an informal evaluation will often be enough to move your design and development goals forward, but there will come a time when you want to conduct a formal evaluation to measure your success.