
Predicting User Performance and Errors: Automated Usability Evaluation Through Computational Introspection of Model-Based User Interfaces.

By:
Material type: Text
Series: T-Labs Series in Telecommunication Services
Publisher: Cham : Springer International Publishing AG, 2017
Copyright date: ©2018
Edition: 1st ed.
Description: 1 online resource (156 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783319603698
Subject(s):
Genre/Form:
Additional physical formats: Print version: Predicting User Performance and Errors
DDC classification:
  • 004
LOC classification:
  • QA76.9.U83
Online resources:
Contents:
Intro
Contents
Acronyms
List of Figures
List of Tables
1 Introduction
  1.1 Usability
  1.2 Multi-Target Applications
  1.3 Automated Usability Evaluation of Model-Based Applications
  1.4 Research Direction
  1.5 Conclusion
Part I Theoretical Background and Related Work
2 Interactive Behavior and Human Error
  2.1 Action Regulation and Human Error
    2.1.1 Human Error in General
    2.1.2 Procedural Error, Intrusions and Omissions
  2.2 Error Classification and Human Reliability
    2.2.1 Slips and Mistakes: The Work of Donald A. Norman
    2.2.2 Human Reliability Analysis
  2.3 Theoretical Explanations of Human Error
    2.3.1 Contention Scheduling and the Supervisory System
    2.3.2 Modeling Human Error with ACT-R
    2.3.3 Memory for Goals Model of Sequential Action
  2.4 Conclusion
3 Model-Based UI Development (MBUID)
  3.1 A Development Process for Multi-target Applications
  3.2 A Runtime Framework for Model-Based Applications: The Multi-access Service Platform and the Kitchen Assistant
  3.3 Conclusion
4 Automated Usability Evaluation (AUE)
  4.1 Theoretical Background: The Model-Human Processor
    4.1.1 Goals, Operators, Methods, and Selection Rules (GOMS)
    4.1.2 The Keystroke-Level Model (KLM)
  4.2 Theoretical Background: ACT-R
  4.3 Tools for Predicting Interactive Behavior
    4.3.1 CogTool and CogTool Explorer
    4.3.2 GOMS Language Evaluation and Analysis (GLEAN)
    4.3.3 Generic Model of Cognitively Plausible User Behavior (GUM)
    4.3.4 The MeMo Workbench
  4.4 Using UI Development Models for Automated Evaluation
    4.4.1 Inspecting the MBUID Task Model
    4.4.2 Using Task Models for Error Prediction
    4.4.3 Integrating MASP and MeMo
  4.5 Conclusion
Part II Empirical Results and Model Development
5 Introspection-Based Predictions of Human Performance
  5.1 Theoretical Background: Display-Based Difference-Reduction
  5.2 Statistical Primer: Goodness-of-Fit Measures
  5.3 Pretest (Experiment 0)
    5.3.1 Method
    5.3.2 Results
    5.3.3 Discussion
  5.4 Extended KLM Heuristics
    5.4.1 Units of Mental Processing
    5.4.2 System Response Times
    5.4.3 UI Monitoring
  5.5 MBUID Meta-Information and the Extended KLM Rules
  5.6 Empirical Validation (Experiment 1)
    5.6.1 Method
    5.6.2 Results
    5.6.3 Discussion
  5.7 Further Validation (Experiments 2-4)
  5.8 Discussion
  5.9 Conclusion
6 Explaining and Predicting Sequential Error in HCI with Cognitive User Models
  6.1 Theoretical Background: Goal Relevance as Predictor of Procedural Error
  6.2 Statistical Primer: Odds Ratios (OR)
  6.3 TCT Effect of Goal Relevance: Reanalysis of Experiment 1
    6.3.1 Method
    6.3.2 Results
    6.3.3 Discussion
  6.4 A Cognitive Model of Sequential Action and Goal Relevance
    6.4.1 Model Fit
    6.4.2 Sensitivity and Necessity Analysis
    6.4.3 Discussion
  6.5 Errors as a Function of Goal Relevance and Task Necessity (Experiment 2)
    6.5.1 Method
    6.5.2 Results
    6.5.3 Discussion
  6.6 Are Obligatory Tasks Remembered More Easily? An Extended Cognitive Model with Cue-Seeking
    6.6.1 Model Implementation
    6.6.2 How Does the Model Predict Errors?
    6.6.3 Model Fit
    6.6.4 Discussion
  6.7 Confirming the Cue-Seeking Strategy with Eye-Tracking (Experiment 3)
    6.7.1 Methods
    6.7.2 Results
    6.7.3 Results Discussion
    6.7.4 Cognitive Model
    6.7.5 Discussion
  6.8 Validation in a Different Context (Experiment 4)
    6.8.1 Method
    6.8.2 Results
    6.8.3 Results Discussion
    6.8.4 Cognitive Model
    6.8.5 Discussion
  6.9 Chapter Discussion
  6.10 Conclusion
7 The Competent User: How Prior Knowledge Shapes Performance and Errors
  7.1 The Effect of Concept Priming on Performance and Errors
    7.1.1 Method
    7.1.2 Results
    7.1.3 Results Discussion
    7.1.4 Cognitive Model
    7.1.5 Discussion
  7.2 Modeling Application Knowledge with LTMC
    7.2.1 LTMC
    7.2.2 Method
    7.2.3 Results
    7.2.4 Discussion
  7.3 Conclusion
Part III Application and Evaluation
8 A Deeply Integrated System for Introspection-Based Error Prediction
  8.1 Inferring Task Necessity and Goal Relevance From UI Meta-Information
  8.2 Integrated System
    8.2.1 Computation of Subgoal Activation
    8.2.2 Parameter Fitting Procedure
  8.3 Validation Study (Experiment 5)
    8.3.1 Method
    8.3.2 Results
    8.3.3 Results Discussion
  8.4 Model Fit
  8.5 Discussion
    8.5.1 Validity of the Cognitive User Model
    8.5.2 Comparison to Other Approaches
  8.6 Conclusion
9 The Unknown User: Does Optimizing for Errors and Time Lead to More Likable Systems?
  9.1 Device-Orientation and User Satisfaction (Experiment 6)
    9.1.1 Method
    9.1.2 Results
    9.1.3 Discussion
  9.2 Conclusion
10 General Discussion and Conclusion
  10.1 Overview of the Contributions
  10.2 General Discussion
    10.2.1 Validity of the User Models
    10.2.2 Applicability and Practical Relevance of the Predictions
    10.2.3 Costs and Benefits
  10.3 Conclusion
References
Index
No physical items for this record


Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
