Evaluation techniques for interactive systems
Software engineering is the study of designing, developing, and maintaining software. It intersects with HCI to make the interaction between people and devices more effective and engaging.
Evaluation is a process that critically examines a system. It involves gathering and analyzing data about a system’s activities, characteristics, and outcomes.
The evaluation itself can involve many different people, from designers and evaluators to real users of the system.
Evaluation has three main goals:
· Assess the extent and accessibility of the system’s functionality.
The system’s functionality is important in that it must accord with the user’s requirements. Evaluation at this level may also measure the user’s performance with the system, to assess how effectively the system supports the task.
· Assess users’ experience of the interaction.
It is also important to evaluate the user’s experience of the interaction and its impact on the user. This includes considering aspects such as how easy the system is to learn, its usability, and the user’s satisfaction with it. It may also cover the user’s enjoyment and emotional response, particularly for systems aimed at leisure or entertainment.
· Identify any specific problems with the system.
These can be elements of the design that, when used in their intended context, cause unexpected results or confusion among users. This goal relates to both the functionality and the usability of the design.
Evaluation through expert analysis
Ideally, evaluation of a system should take place before any implementation work begins. If the design itself can be evaluated, expensive mistakes can be avoided, because the design can be changed before any major resource commitments are made. A number of methods have been developed for evaluating interactive systems through expert analysis. These are flexible evaluation methods because they can be applied at any stage of the development process, from design specifications through storyboards and prototypes to full implementations.
There are several expert-based evaluation approaches.
- Cognitive Walkthrough
This is one of the most efficient and cost-effective ways of improving the usability of a system. Most users prefer to learn a product by doing things with it rather than by studying a manual or following a set of instructions. The cognitive walkthrough therefore checks that the design is easy for a newcomer to pick up and that becoming proficient with it takes little time.
To perform the walkthrough, the expert needs a specification or prototype of the system, a description of the tasks, and a written list of the actions required to complete each task with the proposed system.
- Heuristic Evaluation
In a heuristic evaluation, a small number of usability evaluators systematically inspect a user interface and make decisions based on a predetermined set of usability principles known as heuristics. Heuristics are general rules that describe common properties of usable user interfaces.
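As a concrete illustration, here is a minimal sketch of how the findings of a heuristic evaluation might be aggregated. The problems and ratings are invented, and the 0–4 severity scale is one common convention, not part of the method's definition.

```python
# Hypothetical tally of heuristic-evaluation findings: each evaluator rates
# each problem's severity from 0 (not a problem) to 4 (usability catastrophe).
from statistics import mean

# One entry per problem found; one rating per evaluator (invented data).
findings = {
    "No feedback after 'Save'": [3, 4, 3],
    "Inconsistent button labels": [2, 2, 1],
    "No undo for deletion": [4, 4, 3],
}

# Report average severity across evaluators, worst problems first.
for problem, ratings in sorted(findings.items(),
                               key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{mean(ratings):.2f}  {problem}")
```

Ranking by average severity helps the team decide which problems to fix first with its limited resources.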
- Model-based evaluation
Model-based evaluation uses a model of how a human would use a proposed system to derive predicted usability measures by calculation or simulation. These predictions can replace or supplement empirical measurements obtained through user testing. Model-based evaluation combines cognitive and design models in the evaluation process.
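One well-known model of this kind is the Keystroke-Level Model (KLM), which predicts expert task time by summing times for primitive operators. The sketch below uses approximate textbook operator times; exact values vary between sources and are an assumption here.

```python
# Sketch of a Keystroke-Level Model (KLM) prediction. Operator times are
# approximate textbook values (assumed), not definitive measurements.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators: str) -> float:
    """Sum operator times for a sequence such as 'MHPKK' (seconds)."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Predicted time to think, move a hand to the mouse, point at a field,
# and press two keys.
print(round(predict_time("MHPKK"), 2))
```

Such predictions can be compared across candidate designs before anything is built.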
Evaluation through user participation
User participation in evaluation tends to occur in the later stages of development, when there is at least a working prototype of the system in place. This may range from a simulation of the system’s interactive capabilities, without its underlying functionality, up to a fully implemented system.
Styles of evaluation
Laboratory studies:
Users are taken out of their ordinary work environment to participate in controlled tests, often in a specialist usability laboratory.
Field studies:
This type of evaluation takes the designer or evaluator out into the user’s work environment to observe the system in action.
Empirical methods: experimental evaluation
Experimental evaluation provides empirical evidence to support a particular claim or hypothesis. The evaluator chooses a hypothesis to test and controls the experimental conditions; any changes in the behavioural measures are then attributed to the different conditions.
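To make this concrete, the sketch below compares task completion times under two hypothetical interface conditions using Welch's t statistic, computed from scratch with the standard library. The data are invented; in practice the statistic would be compared against the t distribution to obtain a p-value.

```python
# Sketch of an experimental comparison of two interface conditions.
# Task times (seconds) are invented for illustration.
from statistics import mean, variance

menu_times = [12.1, 10.4, 11.8, 13.0, 12.5]    # condition A: menu interface
toolbar_times = [9.8, 10.1, 9.5, 10.9, 9.2]    # condition B: toolbar interface

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((variance(a) / na + variance(b) / nb) ** 0.5)

t = welch_t(menu_times, toolbar_times)
print(f"t = {t:.2f}")  # a large |t| suggests a real difference between conditions
```

The hypothesis here would be that the toolbar condition yields faster task times; the experiment's job is to show that the observed difference is unlikely to be chance.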
Observational Techniques
1. Think aloud
In this technique, users are asked to talk through what they are doing as they are being observed.
2. Cooperative evaluation
A variation on think aloud is known as cooperative evaluation. In this approach, users are encouraged to see themselves as collaborators in the evaluation rather than simply as experimental participants.
3. Automated Analysis
Analyzing protocols, whether video, audio, or system logs, is time-consuming and tedious by hand, and it is difficult when there is more than one stream of data to synchronize. Automated analysis tools support this task; one example is EVA (Experimental Video Annotator), a system that runs on a multimedia workstation with a direct link to a video recorder.
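As a minimal illustration of automated analysis of system logs, the sketch below extracts per-task durations from a timestamped event log. The log format is invented for illustration.

```python
# Sketch: compute how long each task took from a timestamped system log.
# The log format (time, event, task name) is an invented example.
from datetime import datetime

log = """\
10:02:01 task_start search
10:02:47 task_end search
10:03:05 task_start checkout
10:04:22 task_end checkout
"""

starts, durations = {}, {}
for line in log.splitlines():
    time_s, event, task = line.split()
    t = datetime.strptime(time_s, "%H:%M:%S")
    if event == "task_start":
        starts[task] = t
    else:
        durations[task] = (t - starts[task]).total_seconds()

print(durations)  # seconds spent on each task
```

Even this simple processing would be tedious and error-prone to do by hand across hours of logs, which is exactly why automated tools are valuable.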
4. Post-task walkthrough
Participants perform tasks in a recorded session and are later asked by a moderator to reflect on their actions. The disadvantage of this method is the loss of freshness.
5. Protocol Analysis
Protocol analysis covers the ways a user’s actions are recorded for later study, including paper-and-pencil notes, audio and video recording, computer logging, and user notebooks.
Query Techniques
1. Interviews
Interviewing users about their experience with an interactive system provides a direct way of gathering information.
2. Questionnaires
Users are given a fixed set of questions about what they prefer and what they think of the design. This makes it possible to reach a large group of people, but it is less flexible than an interview.
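One widely used fixed questionnaire is the System Usability Scale (SUS): ten alternating positive and negative statements, each answered on a 1–5 scale. Here is a sketch of its standard scoring; the example responses are invented.

```python
# Score one System Usability Scale (SUS) response (responses are 1-5).
def sus_score(responses):
    """responses: ten answers, 1 (strongly disagree) to 5 (strongly agree)."""
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items are positively worded (score = response - 1);
        # even-numbered items are negatively worded (score = 5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # rescale to 0-100

# Example response set (invented).
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))
```

The single 0–100 score makes it easy to compare designs or track a system across releases.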
Evaluation through monitoring physiological responses
Monitoring physiological responses lets us see more clearly what users do when they interact with computers, and how they feel about it.
Physiological Measurements
Measuring physiological responses such as heart rate, breathing, and skin secretions can be used to determine a user’s emotional response to an interface.
Hope you found this article helpful!
See you in another article.
Thank you!
Reference: Alan Dix, Janet Finlay, Gregory Abowd, and Russell Beale. 2003. Human–Computer Interaction (3rd Edition). New York: Prentice-Hall.