Software Usability Measurement Inventory
What is it?
SUMI is recommended to any organisation which wishes to measure the perceived quality of end-user experience of a digital product. It provides a valid and reliable method for the comparison of competing products and different versions of the same product, as well as providing diagnostic information for future developments.
How Does it Work?
SUMI consists of a 50-item questionnaire devised in accordance with psychometric practice.

It requires a minimum of 10 users, and the questionnaire takes roughly 10 minutes to complete.

SUMI has been used effectively to assess new products during product evaluation, to make comparisons between products or versions of a product, and to set targets for future application development.
What Does it Measure?
Overall it measures:

  • Overall Reactions to the Software
  • Efficiency
  • Affect (Emotion)
  • Helpfulness
  • Controllability (Dependability)
  • Learnability (Perspicuity)

What are the Pros and Cons?
SUMI has been used specifically within development environments to:
  • set verifiable goals for user experience;
  • track achievement of targets during product development;
  • highlight the good and bad aspects of an interface.

Pros:
  • It is easy to use and involves few costs;
  • On average, a SUMI test can be carried out in approximately 4 days, including the time needed for limited context analysis and reporting;
  • The emphasis during testing is on finding defects, which often yields negative quality indications, but from an objective perspective;
  • The usability score is split into several aspects, making a more thorough and detailed evaluation possible using the various output data.

Cons:
  • A running version of the system needs to be available;
  • This implies that SUMI can only be carried out at a relatively late stage of the project;
  • A relatively large number of users (at least ten) with the same background needs to fill out the questionnaire, and quite often the implementation or test does not involve ten or more users belonging to the same user group;
  • The accuracy and level of detail of the findings are limited (this can be partly addressed by adding a small number of open questions to the SUMI questionnaire);
  • There is no free normed database of scores.
What Does SUMI Cover?
The SUMI scales are defined as follows:

Efficiency (Pragmatic)
This refers to the respondent feeling that the software is enabling users to do their task(s) in a quick, effective, and economical manner or, at the opposite extreme, that the software is getting in the way of performance.

Affect (Hedonic)
This is a psychological term for emotional feeling. In this context, it refers to whether the respondent feels mentally stimulated and at ease or, at the opposite extreme, stressed and frustrated as a result of interacting with the software.

Helpfulness (Hedonic)
This refers to the respondents' perceptions that the software communicates in a helpful way and assists in the resolution of operational problems. A low score on this scale indicates that the software is not communicating adequately with the respondent and making things difficult.

Control (Pragmatic)
This sub-scale refers to the respondent's feeling that the software responds in an expected and consistent way to inputs and commands, that it is not difficult to make the software work, and that they can get their work done with ease. A low score on this scale indicates that the software makes unwanted mental demands on the respondent and that the respondent prefers to stay with the parts of the software they have already mastered.

Learnability (Hedonic)
This sub-scale refers to the feeling that the respondent has that it is relatively straightforward to become familiar with the software; and that its tutorial interface, handbooks, etc. are readable and instructive. This also refers to re-learning how to use the software after a while away from it.
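To make the subscale structure concrete, the sketch below aggregates per-item answers into the five scale scores described above. It is only an illustration: the actual SUMI item texts, scoring key, and norming data are proprietary, so the item-to-scale mapping, the 0–2 answer coding, and the raw (un-normed) scores here are hypothetical.

```python
# Illustrative sketch of SUMI-style subscale aggregation.
# The real SUMI items, scoring key, and norm database are proprietary;
# the item-to-scale mapping and 0-2 coding below are hypothetical.

from statistics import mean

# Hypothetical layout: 50 items, 10 per subscale, answers already
# reverse-scored where needed so that 0 = disagree, 1 = undecided, 2 = agree.
SCALES = ["Efficiency", "Affect", "Helpfulness", "Control", "Learnability"]

def subscale_scores(responses):
    """responses: list of 50 item scores (0, 1, or 2), ordered so that
    items 0-9 map to Efficiency, 10-19 to Affect, and so on."""
    if len(responses) != 50:
        raise ValueError("SUMI has 50 items")
    return {
        scale: sum(responses[i * 10:(i + 1) * 10])
        for i, scale in enumerate(SCALES)
    }

def group_profile(all_responses):
    """Average each subscale over a user group (SUMI recommends >= 10 users)."""
    per_user = [subscale_scores(r) for r in all_responses]
    return {scale: mean(u[scale] for u in per_user) for scale in SCALES}

# Example: three identical respondents who agreed with every item,
# so every subscale averages its maximum raw score of 20.
users = [[2] * 50 for _ in range(3)]
print(group_profile(users))
```

In the real instrument these raw scores would then be standardised against SUMI's (commercial) norm database rather than reported directly, which is why the lack of a free normed database is listed as a drawback above.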
