
Comparing Inspections and User Testing for the Evaluation of Virtual Environments

Cédric Bach ¹ ² , Dominique L. Scapin ³
¹ IRIT-ICS - Interactive Critical Systems
² IRIT - Institut de recherche en informatique de Toulouse
³ AxIS - Usage-centered design, analysis and improvement of information systems, CRISAM - Inria Sophia Antipolis - Méditerranée, Inria Paris-Rocquencourt
Abstract: This article describes an experiment comparing three Usability Evaluation Methods: User Testing (UT), Document-based Inspection (DI), and Expert Inspection (EI) for evaluating Virtual Environments (VEs). Twenty-nine individuals (10 end-users and 19 junior usability experts) each participated for 1 hr in the evaluation of two VEs (a training VE and a 3D map). Quantitative results show that the effectiveness of UT and DI is significantly better than that of EI. For each method, the results show its problem coverage: DI- and UT-based diagnoses yield more problem diversity than EI. Across both virtual environments, the overlap of identified problems amounts to 22% between UT and DI, 20% between DI and EI, and 12% between EI and UT. The identification impact on the whole set of usability problems is 60% for DI, 57% for UT, and only 36% for EI. The reliability of UT and DI is also significantly better than that of EI. In addition, a qualitative analysis identified 35 classes describing the profile of usability problems found with each method. It shows that UT seems particularly efficient at diagnosing problems that require a particular state of interaction to be detectable, whereas DI supports the identification of directly observable problems, often related to learnability and basic usability. This study suggests that DI could be viewed as a "4-wheel-drive SUV evaluation type" (less powerful under certain conditions but able to go everywhere, with any driver), whereas UT could be viewed as a "Formula 1 car evaluation type" (more powerful but requiring an adequate road and a very skilled driver). Considering all metrics, EI is found not to be efficient enough for evaluating the usability of VEs.
Document type: Journal articles
Contributor: Nathalie Gaudechoux
Submitted on: Thursday, February 20, 2014 - 3:40:45 PM
Last modification on: Monday, July 4, 2022 - 9:42:42 AM



Cédric Bach, Dominique L. Scapin. Comparing Inspections and User Testing for the Evaluation of Virtual Environments. International Journal of Human-Computer Interaction, Taylor & Francis, 2010, 26 (8), pp.786-824. ⟨10.1080/10447318.2010.487195⟩. ⟨hal-00950036⟩


