USABILITY EVALUATION OF DIGITAL LIBRARIES: OFFLINE AND ONLINE ASSESSMENT COMPARED
ZBW - Leibniz Information Centre for Economics (GERMANY)
About this paper:
Appears in: INTED2015 Proceedings
Publication year: 2015
Pages: 2158-2166
ISBN: 978-84-606-5763-7
ISSN: 2340-1079
Conference name: 9th International Technology, Education and Development Conference
Dates: 2-4 March, 2015
Location: Madrid, Spain
Abstract:
Good usability is nowadays a “must-have” for every online product. A modern digital library (Library 2.0; see Maness, 2006), too, needs to ensure easy and effective handling of its website and its online services. For monitoring usability, standardized scales are a quick and relatively inexpensive option. This raises the question of whether the measurement should be made in an offline setting (paper and pencil, face-to-face) or in an online setting (via an internet survey). Offline and online assessment each have specific advantages and drawbacks. Accordingly, the two settings might yield different data even when the same questionnaire is used. It therefore has to be ensured that the offline measurement delivers results comparable to those of the equivalent online measurement. Besides the setting of the usability evaluation, another important decision is how detailed the evaluation should be, i.e., whether one short scale is sufficient or more detailed questions are needed.

In this contribution we present an empirical comparison of a rather long offline version and a relatively short online version of a usability questionnaire for the evaluation of a Library 2.0, namely the ZBW (http://www.zbw.eu/en/). Both versions address the usability evaluation of the homepage as well as of the ZBW's three core services: EconBiz for literature search, EconDesk for online help, and EconStor, a publishing portal.

As measurement instrument we used the System Usability Scale (SUS; Brooke, 1996), a short standardized usability scale, for the homepage as well as for the three services. The SUS was applied with identical wording and identical response options in the offline and the online version of the questionnaire, which enables a direct comparison of the quantitative evaluation results of the offline and the online measurements. Both versions also comprised additional questions on the ZBW (open questions, multiple choice, and ratings) as well as several control variables (e.g., age, gender, occupation, and prior experience with the ZBW). In addition, the long offline version included the detailed usability scale ISONORM (Prümper, 1999) for the evaluation of EconBiz and a playful scribbling task for heuristic insights. The offline questionnaire was filled out with paper and pencil in a face-to-face setting, whereas the online survey was announced via several online platforms and mailing lists. The offline measurement was thus much more resource-intensive.
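
For reference, the standard SUS scoring scheme (Brooke, 1996) converts the ten item ratings (1-5 Likert scale) into a 0-100 score: each odd-numbered (positively worded) item contributes its rating minus 1, each even-numbered (negatively worded) item contributes 5 minus its rating, and the raw sum is multiplied by 2.5. A minimal sketch in Python (function name and example data are illustrative, not taken from the paper):

    def sus_score(responses):
        # Standard SUS scoring (Brooke, 1996): ten ratings on a 1-5 scale.
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS expects ten ratings between 1 and 5")
        total = 0
        for i, r in enumerate(responses, start=1):
            # Odd items are positively worded, even items negatively worded.
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5  # rescale the 0-40 raw sum to 0-100

    # Example: a moderately positive response pattern yields a score of 80.0.
    print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))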

The results showed no significant differences between the offline and the online evaluation by the SUS. However, the online assessment had a rather high number of dropouts (participants who did not complete the questionnaire). Additionally, most online participants ignored the open questions, whereas the offline participants gave rather exhaustive comments.
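
The abstract does not specify the statistical test behind this comparison; a plausible minimal sketch, assuming an independent-samples (Welch's) t-test on per-respondent SUS scores and using invented sample data, might look as follows in Python:

    # Hypothetical comparison of offline vs. online SUS scores; neither the
    # data nor the choice of Welch's t-test are taken from the paper.
    from scipy import stats

    offline_sus = [72.5, 80.0, 67.5, 85.0, 77.5, 70.0]  # invented sample scores
    online_sus = [75.0, 82.5, 65.0, 80.0, 72.5, 77.5]

    t, p = stats.ttest_ind(offline_sus, online_sus, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05: no significant difference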

Overall, the findings suggest that for a pure usability control check a short online measurement is sufficient and yields results equivalent to those of a long offline questionnaire. However, if deeper insights are needed, a detailed offline evaluation is more appropriate. The theoretical and practical implications of the findings will be discussed.
Keywords:
Usability, evaluation, online survey, offline setting, assessment methods, Library 2.0, digital library.