
Developing an assistive technology usability questionnaire for people with neurological diseases

  • Maria Masbernat-Almenara,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing

    Affiliations Department of Nursing and Physiotherapy, University of Lleida, Lleida, Spain, Research Group of Health Care, IRB Lleida, Institute for Biomedical Research Dr. Pifarré Foundation, Lleida, Spain, Group on Society Studies, Health, Education and Culture, University of Lleida, Lleida, Spain

  • Francesc Rubi-Carnacea,

    Roles Formal analysis, Resources, Writing – original draft, Writing – review & editing

    Affiliations Department of Nursing and Physiotherapy, University of Lleida, Lleida, Spain, Research Group of Health Care, IRB Lleida, Institute for Biomedical Research Dr. Pifarré Foundation, Lleida, Spain, Group on Society Studies, Health, Education and Culture, University of Lleida, Lleida, Spain

  • Eloy Opisso,

    Roles Conceptualization, Formal analysis, Methodology, Supervision, Writing – review & editing

    Affiliations Institut Guttmann, Neurorehabilitation Institute, Badalona, Spain, Universitat Autònoma de Barcelona, Cerdanyola del Vallés, Barcelona, Spain, Fundació Institut d’investigació en Ciències de la Salut Germans Trias i Pujol, Badalona, Barcelona, Spain

  • Esther Duarte-Oller,

    Roles Methodology, Resources, Supervision, Writing – review & editing

    Affiliations Physical Medicine and Rehabilitation Department, Parc de Salut Mar (Hospital del Mar, Hospital de l’Esperança), Barcelona, Catalonia, Spain, Rehabilitation Research Group, Hospital del Mar Medical Research Institute (IMIM), Barcelona, Catalonia, Spain

  • Josep Medina-Casanovas,

    Roles Conceptualization, Resources, Supervision, Writing – review & editing

    Affiliations Institut Guttmann, Neurorehabilitation Institute, Badalona, Spain, Universitat Autònoma de Barcelona, Cerdanyola del Vallés, Barcelona, Spain, Fundació Institut d’investigació en Ciències de la Salut Germans Trias i Pujol, Badalona, Barcelona, Spain

  • Fran Valenzuela-Pascual

    Roles Formal analysis, Software, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Nursing and Physiotherapy, University of Lleida, Lleida, Spain, Research Group of Health Care, IRB Lleida, Institute for Biomedical Research Dr. Pifarré Foundation, Lleida, Spain, Group on Society Studies, Health, Education and Culture, University of Lleida, Lleida, Spain



Abstract

This study describes the development of a questionnaire for assessing the usability of assistive technologies by people with neurological diseases.


A Delphi study was conducted to identify relevant items for the questionnaire. After that, the content validity was addressed to identify the essential items. Once the questionnaire was designed following the results of the Delphi study and content validity, the reliability, validity, and the Rasch model of the questionnaire were examined.


Two rounds of the Delphi study were carried out. A total of 73 participants (42 experts and 31 users) participated in round 1, and 59 (27 experts and 32 users) in round 2. A total of 53 and 29 items were identified in rounds 1 and 2, respectively. In the content validity analysis, nine items exceeded the threshold of 0.58. Finally, ten items were included in the questionnaire. Fifty-one participants took part in the reliability and validity testing of the questionnaire. The internal consistency reliability of the questionnaire, analyzed with Cronbach’s Alpha, was α = 0.895. There was moderate to considerable test-retest concordance among the questionnaire items according to the Kappa coefficient, and a strong test-retest association according to Spearman’s coefficient, ρ = 0.818 (p < 0.001). The intraclass correlation coefficient was 0.869, with a 95% confidence interval of 0.781 to 0.923. There was a strong correlation between the total scores of the new questionnaire and another validated questionnaire, analyzed with Spearman’s coefficient, ρ = 0.756 (p < 0.001). The ten items demonstrated a satisfactory fit to the Rasch model.


The present study suggested that the new questionnaire is a reliable 10-item usability questionnaire that allows subjective and quick assessment of the usability of assistive technologies by people with neurological diseases.


Introduction

Neurological disorders are among the leading causes of disability (247–308 million) and death (8.8–9.4 million) worldwide [1]. In addition, the burden of neurological disorders on public health has increased substantially in the last 25 years because of population growth, aging, and increased survival rates from stroke and other neurological disorders [2]. These survivors require intensive rehabilitation to reduce the sequelae of their disorders, increase their quality of life, and improve their autonomy in activities of daily living. In most cases, some assistive technology is needed.

Recent advances in technology and rehabilitation have led to the development of new tools to assess people with disabilities and improve their functioning and autonomy in daily life [3]. It is known that good acceptance of assistive technology can improve the quality of life and social inclusion of these patients [4]. However, not all products achieve the goal for which they were designed, since they often do not consider the real needs of users [5]. Therefore, it is increasingly necessary to involve end-users from the beginning of the development of new products [6]. A product can only be considered successful if it is used, and currently more than 50% of users abandon their new products because they are not sufficiently usable [7, 8]. For this reason, usability is becoming more critical in engineering and rehabilitation. According to ISO 9241–11 [9], usability is the extent to which specified users can use a product to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. Accordingly, the design of a product should not rely only on multidisciplinary theoretical foundations; user experience should be considered throughout the design process to ensure quality, improve usability, and increase product acceptability [5, 10]. For that reason, it is essential to perform a usability test during product development to determine whether the product fits the user’s needs. Usability testing refers to evaluating a product or service by testing it with representative users [11].

According to the WHO, assistive technology is a general term covering the systems and services related to delivering assistive products and services [12]. Assistive products aim to maintain or improve an individual’s functioning and independence, promoting their well-being [12].

There is a lack of evidence-based procedures for assistive technology selection [13–15]. For example, professionals often prescribe assistive products without considering the user’s needs. However, this clinical outcome assessment is important in clinical practice and research because it strengthens the evidence base and provides valuable feedback to healthcare professionals and patients, empowering patients by incorporating their opinions and needs and improving their quality of life [16].

Many existing usability questionnaires were developed to evaluate software usability and web accessibility [17], not assistive technologies [18]. Some questionnaires [19–21] measure the psychosocial impact of assistive technologies on quality of life from the point of view of people with disabilities. These are interesting questionnaires; however, they do not evaluate the usability of assistive technologies per se. Other questionnaires were developed exclusively to assess wheelchairs [22], and only a few were explicitly developed for assistive technologies. For example, the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0) [23] contains technical items such as weight, product dimensions, service delivery, repairs, and device services. People with neurological diseases face accessibility limitations with existing usability questionnaires, for example, in the comprehension of the questions or the answer format. In addition, existing questionnaires are too long for them or do not address all the items essential for assessing usability and user satisfaction [18, 24]. It is known that longer questionnaires incur higher data-collection costs and reduce both the number of answers, because of the time required, and the quality of the information gathered, because of fatigue [25, 26].

The objective of this study was to develop a short questionnaire focused solely on the usability of assistive technology products and user satisfaction, one that is accessible and easy to understand for people with neurological diseases.

Materials and methods

Questionnaire design

Delphi study.

To construct an easy-to-use questionnaire, we performed a Delphi study [27], a prospective approach that seeks to derive a consensus from a group of experts based on the analysis of and reflection on a defined problem. A Google Forms questionnaire was sent via email to neurological healthcare professionals from the Institut Guttmann, Spain, and the Hospital de l’Esperança, Spain. In addition, all the patients from the Institut Guttmann who voluntarily agreed to participate were interviewed to answer the questionnaire.

The questionnaire contained six questions: three open questions related to assistive technologies and three multiple-choice questions about what makes a questionnaire easy and quick to answer. The three open questions were: (1) List some requirements or characteristics that are important to how assistive products should be designed or manufactured. (2) Would you change anything about the existing assistive products? (3) What kind of assistive products would you like to have (for users) / to have in your workplace (for experts)? The three multiple-choice questions about questionnaires concerned: (1) the time you would be willing to spend answering a questionnaire, (2) the questionnaire format you preferred, and (3) how many questions you would like to answer.

Content validity ratio.

Once the Delphi study finished and the items were obtained, they were evaluated using the content validity ratio (CVR) [28, 29]. Content validity addresses the degree to which the items of an instrument sufficiently represent the content domain and answers the question of the extent to which the items selected for an instrument constitute a comprehensive sample of the content [29]. The content validity ratio varies between -1 and 1; higher scores indicate greater agreement among panel members on the need for an item in an instrument. The formula is CVR = (Ne − N/2)/(N/2), where Ne is the number of panelists rating an item "essential" and N is the total number of panelists [29]. The critical value of the content validity ratio is determined by the Lawshe table [28].
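For illustration, the CVR formula above can be computed directly from the panel counts; this is a minimal sketch, and the panel numbers below are hypothetical, not the study's data:

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (Ne - N/2) / (N/2).

    Ranges from -1 (no panelist rates the item "essential")
    to 1 (every panelist rates it "essential").
    """
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical example: 27 of 34 panelists rate an item "essential".
cvr = content_validity_ratio(27, 34)
print(round(cvr, 3))  # → 0.588, above the 0.58 threshold used in the study
```

An item is retained when its CVR exceeds the critical value that the Lawshe table assigns to the panel size.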

Questionnaire design.

The questions were written based on the qualitative data obtained in the Delphi study and the items obtained in the CVR. Subsequently, ten users (people with neurological diseases) read the questionnaire to assess their understanding of the questions and whether their answers from the Delphi study could be represented with the proposed response values (numbers, traffic-light colors, and faces).

Questionnaire reliability and validity.

The reliability and validity of the new questionnaire were addressed.

Sample size. The recommended sample size for similar studies has been established as 50 subjects [30, 31]. The inclusion criteria were being an adult (≥18 years old) with a neurological disease and having used an assistive product for the past month. The exclusion criterion was moderate to severe cognitive impairment based on the Pfeiffer Short Portable Mental State Questionnaire (Pfeiffer SPMSQ) [32] translated into Spanish [33].

Assistive product analyzed. It is challenging to analyze the same product for everyone because neurological patients have personalized assistive technologies adapted to their disability. Therefore, all participants could choose which product to evaluate, with the only condition that they had used it for at least one month.

Data analysis of reliability.

The reliability and validity of the new questionnaire and all the data were analyzed using the SPSS Statistics 27 program.

The internal consistency reliability of the new questionnaire was analyzed using Cronbach’s Alpha [34]. The value of the coefficient was interpreted as follows: unacceptable (α < 0.5), poor (0.5 ≤ α < 0.6), questionable (0.6 ≤ α < 0.7), acceptable (0.7 ≤ α < 0.8), good (0.8 ≤ α < 0.9), and excellent (α ≥ 0.9) [34].
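As a sketch of how this statistic is computed (the study used SPSS; the item scores below are invented for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    items: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance (the ddof choice cancels in the ratio)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 0-5 Likert responses: 3 items x 4 respondents.
scores = [[5, 4, 3, 5], [4, 4, 2, 5], [5, 3, 3, 4]]
print(round(cronbach_alpha(scores), 3))  # → 0.879
```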

Test-retest. Reliability can be assessed with a test-retest comparison. This method evaluates the stability of the results at two different points in time using a stability coefficient. To perform the test-retest, the same questionnaire was administered on two occasions, separated by a certain period, to the same subjects and under the same conditions. Different authors [35, 36] recommend an interval of between two days and two weeks between the test and retest interviews. Accordingly, the interval between the two administrations was set to 15 days. To assess the concordance between the test and the retest, the data were analyzed with the weighted quadratic Kappa coefficient [37]. The value of the coefficient was interpreted as follows: poor (< 0.00), weak (0.00‒0.20), good (0.20‒0.40), moderate (0.41‒0.60), considerable (0.61‒0.80), and almost perfect (0.81‒1.00) [37].
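The weighted quadratic Kappa penalizes disagreements by the squared distance between the two ordinal ratings. A self-contained sketch (the test-retest answers below are invented, not the study's data):

```python
def quadratic_weighted_kappa(test, retest, n_cats=6):
    """Weighted Kappa with quadratic weights for two ordinal ratings
    (e.g. 0-5 Likert answers at test and retest)."""
    n = len(test)
    # Observed joint proportions.
    obs = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(test, retest):
        obs[a][b] += 1 / n
    # Marginal proportions (chance expectation is their product).
    p_test = [sum(obs[i]) for i in range(n_cats)]
    p_retest = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            d = (i - j) ** 2          # quadratic disagreement weight
            num += d * obs[i][j]
            den += d * p_test[i] * p_retest[j]
    return 1 - num / den

# Hypothetical test/retest answers to one item from ten respondents.
t1 = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
t2 = [5, 4, 3, 3, 5, 2, 4, 4, 3, 5]
print(round(quadratic_weighted_kappa(t1, t2), 3))  # → 0.839
```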

The intraclass correlation coefficient (ICC), model 1.1, was calculated with a 95% confidence interval (CI). The ICC indicates the degree to which the participants maintained their opinion between the test and the retest [38]. The values were interpreted as follows: poor reliability (< 0.5), moderate reliability (0.5‒0.75), good reliability (0.75‒0.9), and excellent reliability (> 0.90) [38].
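ICC(1,1) is derived from a one-way random-effects ANOVA on the repeated measurements. A minimal sketch (the test-retest totals below are invented for illustration):

```python
def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).

    ratings: list of per-subject measurement lists, e.g. the
    [test, retest] scores for each participant.
    """
    n = len(ratings)     # subjects
    k = len(ratings[0])  # measurements per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    # Between-subjects and within-subject mean squares (one-way ANOVA).
    msb = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    msw = sum((x - sum(r) / k) ** 2 for r in ratings for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical total questionnaire scores at test and retest for five people.
pairs = [[40, 42], [35, 34], [48, 47], [30, 33], [44, 44]]
print(round(icc_1_1(pairs), 3))  # → 0.966
```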

The normality of the data was tested using the Kolmogorov-Smirnov test, and repeatability was determined with Spearman’s correlation coefficient [39] between the test and the retest.

Data analysis of validity

Concurrent validity. To examine the relationship between the new questionnaire and a related questionnaire, the QUEST 2.0 [23] was chosen: it is one of the shortest such questionnaires and has been translated and validated in several languages. Only eight items of the QUEST 2.0 were administered to users in Spanish [40], because the other four items relate to services. As above, the data were tested with the Kolmogorov-Smirnov test and analyzed with Spearman’s correlation coefficient [39].

Measurement of other parameters. The time was measured while the participants answered the questionnaire to obtain the average time of all the participants.

Rasch model.

To further analyze the construct validity of the different items, a Rasch analysis [41] was carried out. This model gives an idea of the scale’s internal consistency by relating item difficulty to person ability [41]. Infit and outfit mean squares (MNSQ) of 1 indicate a perfect fit between the data and the model, values between 0.5 and 1.5 indicate an acceptable fit, and values greater than 2 indicate a severe misfit [41].
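For a dichotomous item, the infit and outfit MNSQ can be sketched as follows. This is a simplification: the study's 6-point items would require a polytomous Rasch model, and the abilities, difficulty, and responses below are invented for illustration:

```python
import math

def rasch_fit(theta, b, responses):
    """Infit and outfit mean squares for one dichotomous Rasch item.

    theta: person ability estimates; b: item difficulty;
    responses: observed 0/1 answers (same order as theta).
    """
    # Rasch success probability: P = exp(theta - b) / (1 + exp(theta - b)).
    expected = [1 / (1 + math.exp(-(t - b))) for t in theta]
    variance = [p * (1 - p) for p in expected]
    sq_resid = [(x - p) ** 2 for x, p in zip(responses, expected)]
    # Outfit: mean squared standardized residual (outlier-sensitive).
    outfit = sum(r / v for r, v in zip(sq_resid, variance)) / len(theta)
    # Infit: information-weighted fit.
    infit = sum(sq_resid) / sum(variance)
    return infit, outfit

# Hypothetical abilities, item difficulty 0.0, and observed answers.
infit, outfit = rasch_fit([-1.0, -0.5, 0.0, 0.5, 1.0], 0.0, [0, 0, 1, 1, 1])
print(round(infit, 2), round(outfit, 2))  # → 0.61 0.59
```

Both values fall inside the 0.5-1.5 acceptable range described above.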

The study was developed following the COSMIN guidelines [42].

Ethical approval

The Institut Guttmann Neurorehabilitation Hospital Ethics Committee approved this study. In addition, this research was conducted following the Declaration of Helsinki’s ethical principles. All participants participated voluntarily, and they signed an informed written consent form. Their personal data were archived following the Spanish Organic Law 3/2018, December 5, on the Protection of Personal Data and guarantee of digital rights.


Results

Delphi study

Two rounds of the Delphi study were needed to obtain all the items of the new questionnaire. A total of 73 participants (42 experts and 31 users) were involved in round 1 (Table 1), from which 53 different items and qualitative information about questionnaire preferences were derived.

Table 1. Demographic characteristics of participants of the two rounds in the Delphi study.

Round two involved 59 people (27 experts and 32 users) (Table 1). This round yielded 15 items and data about the scale. The items were: "effectiveness", "comfort", "adaptability", "easy to put on/off", "safe", "lightweight", "functioning", "ergonomic", "economical", "affordable", "easy to use", "feedback", "stimulating", "monitored", and "movement facilitator". Because "economical" and "affordable" have the same meaning, they were considered a single item. Additionally, in this round and following usability premises, the items "aesthetics" and "easy to remember how to use it" were added. Finally, 16 items were analyzed using the content validity ratio.

Content validity ratio (CVR)

Thirty-four experts from the Delphi study, from the Institut Guttmann and the Hospital de l’Esperança, were selected to evaluate the items obtained from round two. Seventy percent of the participants had more than ten years of expertise in neurorehabilitation, and all of them worked with assistive technologies (Table 2).

Table 2. Demographic characteristics of participants in content validity.

When evaluating the items, participants had to rate each one as essential, useful but not essential, or not essential. Table 3 shows the results. Nine of the sixteen items exceeded the threshold of 0.58. In addition, the experts agreed to accept the item "comfortable" because its value (0.53) was close to the threshold.

The experts considered "functional" and "movement facilitator" as one item due to the similarity of their meanings. The item "satisfaction" was added to the questionnaire to capture the end-user’s opinion of the product. Therefore, ten items were selected to create the questionnaire (Table 4).

Table 4. Questions and the items that are involved in each question.

Questionnaire design

Following the qualitative information from the Delphi study, the questionnaire was formulated in understandable language and kept as short as possible. When necessary, the users could fill in a blank space with the name of the product being evaluated. A 6-point Likert scale was chosen since it forces the respondent to decide positively or negatively on the item in question [43]. A numeric panel from 0 to 5 points was used, and each number was associated with a box in traffic-light colors to facilitate the choice. Furthermore, faces with expressions were added to facilitate answering, in line with the Delphi-study responses of the patients with neurological diseases.

The questions are stated in the first person to make answering easier and to capture users’ subjective usability experience of the product [44].

Once the questionnaire was finished, ten users (see demographic characteristics in Table 5) read and answered the questionnaire to know if all the requirements from the Delphi study were met. During this process, some wording modifications were made.

Table 5. Demographic characteristics of the users in the questionnaire design.

The new questionnaire was named "Assistive Technology Usability Questionnaire for people with Neurological diseases" (NATU Quest) and included ten questions. It should be administered at the end of a usability test. The final version of the questionnaire, questionnaire score, and interpretation are available in the supporting information file.

Reliability and validity

Sample description

A total of 51 people with neurological diseases, consecutively recruited from the Institut Guttmann Hospital, voluntarily agreed to participate in the study. These people were different from those in the Delphi study, and their demographic characteristics are summarized in Table 6. Fifty-three percent of the participants answered the questionnaire through an interview due to physical limitations, and the rest (47%) answered it by themselves. First, the participants answered the new questionnaire and the QUEST 2.0. On average, the participants completed the new questionnaire in 102.40 seconds in the first administration and 82.08 seconds in the second. The QUEST 2.0 was administered once, before the NATU Quest, and the participants needed an average of 74 seconds to complete it. Participants scored an assistive product they had used in the last three months. All the patients answered all the items.

Table 6. Demographic characteristics of the 51 participants in the reliability and validity of NATU quest.

Reliability results

The internal consistency reliability of the NATU Quest, analyzed using Cronbach’s Alpha [34], was α = 0.895. This result can be interpreted as good reliability.

Reliability through test-retest. A retest was performed 15 days after the questionnaires were first answered to assess the reliability of the NATU Quest. Table 7 shows the weighted quadratic Kappa coefficient and Spearman’s coefficient results for the NATU Quest. The results showed moderate to considerable concordance between the NATU Quest items at test and retest, with all Kappa coefficients above 0.50. The results also showed a strong association between test and retest in Spearman’s coefficient (ρ = 0.818, p < 0.0001). The ICC showed good reliability (ICC = 0.869; 95% CI 0.781 to 0.923).

Concurrent validity

The correlation between the total scores of the NATU Quest and the QUEST 2.0, analyzed with Spearman’s coefficient, was strong (ρ = 0.756, p < 0.0001).

Rasch model results

All ten items demonstrated a satisfactory fit to the Rasch model and can be considered productive for measurement (infit MNSQ between 0.64 and 1.43; outfit MNSQ between 0.52 and 1.49).


Discussion

There is a need for a short and easy questionnaire to properly assess the usability of assistive technologies in people with neurological diseases.

The items included in the questionnaire, its format, and the answer form were derived through two rounds of a Delphi study [27] based on the opinions of 69 experts (neurorehabilitation professionals, such as occupational therapists and physiotherapists) and 63 users (people with neurological diseases). We then narrowed the items down to 10 essential usability items using the content validity ratio. Some of the items appear, in different words, in other usability questionnaires. For example, "safe" is included in the PIADS [20], the QUEST 2.0 [23], and the Usability Scale for Assistive Technology for Wheeled Mobility (USAT-WM) [22], while "comfort" appears in the PIADS [20] and the QUEST 2.0 [23]. The item "easy to use" appears in the QUEST 2.0 [23] and the USAT-WM [22]. The items "adaptability", "ergonomic", and "satisfaction" are included in the PIADS [20], while "easy to put on/off" and "effectiveness" are included in the QUEST 2.0 [23]. The item "functioning" appears in the USAT-WM [22]. Finally, the item "easy to remember how to use it" does not appear in any questionnaire but is a usability attribute [15].

Once the questionnaire form was designed, 51 end-users with neurological diseases participated in the questionnaire validity and reliability. The results suggested that NATU Quest has good reliability and validity and fits in the Rasch model.

In contrast with other questionnaires, the NATU Quest was developed, considering the opinions of professionals and people with neurological diseases. Other relevant aspects of this study are the heterogeneity of the included sample, the wide range of neurological diseases, and the inclusion of different assistive technologies.

In this study, we developed a usability scale to analyze assistive technologies for people with neurological diseases; however, the study had some limitations. (1) There was selection bias, since most participants were from the same province; however, other regions report similar experiences [45]. (2) Although all the users who participated in the validation had a neurological disease, the authors chose the Pfeiffer SPMSQ to screen for cognitive problems because it is short and quick to answer. However, the Pfeiffer SPMSQ does not accurately assess all possible cognitive deficits and is not sensitive enough to detect low or mild cognitive deficits. (3) Different assistive technologies were analyzed because of the participants’ varied conditions. It would be very interesting to perform another validation with the same product for all users, which may be a good option for developing a new product. (4) For practical reasons, we required that users had used the product for at least one month, which is probably not enough time to test a product. (5) Finally, validity was only assessed against the QUEST 2.0, because we considered that adding other tests for comparison in the same study would have placed a burden on the users.

For future work, it would be interesting to compare the questionnaire against other usability questionnaires and to verify its external validity in other population groups, such as older people. The items should also be reviewed after a few years to determine whether they are still sensitive enough to assess rapidly evolving assistive technologies. Likewise, it would be interesting to translate the new questionnaire into other languages.


Conclusions

The present study suggests that the NATU Quest is a reliable 10-item usability questionnaire that allows a subjective and quick assessment of the usability of assistive technologies. The questionnaire aims to be accessible to people with neurological diseases and reflects the level of acceptance of and satisfaction with the product being used. In addition, the NATU Quest can also be useful for evaluating products in development through user-centered design, since patients can state their opinion about a product during its development, which will facilitate the development of products that better fit patients’ needs.


Acknowledgments

The authors are grateful to all the professionals and participants involved in this study for their contribution.


References

  1. Feigin VL, Nichols E, Alam T, Bannick MS, Beghi E, Blake N, et al. Global, regional, and national burden of neurological disorders, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2019;18: 459–480. pmid:30879893
  2. Feigin VL, Krishnamurthi RV, Theadom AM, Abajobir AA, Mishra SR, Ahmed MB, et al. Global, regional, and national burden of neurological disorders during 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015. Lancet Neurol. 2017;16: 877–897. pmid:28931491
  3. De Mello Monteiro CB, Dawes H, Mayo N, Collett J, Magalhaes FH. Assistive Technology Innovations in Neurological Conditions. Biomed Res Int. 2021;2021. pmid:33728341
  4. Baldassin V, Shimizu HE, Fachin-Martins E. Computer assistive technology and associations with quality of life for individuals with spinal cord injury: a systematic review. Qual Life Res. 2018;27: 597–607. pmid:29417427
  5. Elnady A, Ben Mortenson W, Menon C. Perceptions of existing wearable robotic devices for upper extremity and suggestions for their development: Findings from therapists and people with stroke. J Med Internet Res. 2018;20: e12. pmid:29764799
  6. Rowland JL, Malone LA, Fidopiastis CM, Padalabalanarayanan S, Thirumalai M, Rimmer JH. Perspectives on Active Video Gaming as a New Frontier in Accessible Physical Activity for Youth With Physical Disabilities. Phys Ther. 2016;96: 521–532. pmid:26316530
  7. Sugawara AT, Ramos VD, Alfieri FM, Battistella LR. Abandonment of assistive products: assessing abandonment levels and factors that impact on it. Disabil Rehabil Assist Technol. 2018;13: 716–723. pmid:29334475
  8. Federici S, Meloni F, Borsci S. The abandonment of assistive technology in Italy: A survey of National Health Service users. Eur J Phys Rehabil Med. 2016;52: 516–526. pmid:26784731
  9. ISO 9241–11:2018(en), Ergonomics of human-system interaction—Part 11: Usability: Definitions and concepts. [cited 18 Oct 2022]. Available:
  10. Almenara M, Cempini M, Gómez C, Cortese M, Martín C, Medina J, et al. Usability test of a hand exoskeleton for activities of daily living: an example of user-centered design. Disabil Rehabil Assist Technol. 2017;12: 84–96. pmid:26376019
  11. Nielsen J. Usability engineering. Academic Press; 1993.
  12. WHO. Assistive technology. 2018 [cited 26 Oct 2022]. Available:
  13. Friederich A, Bernd T, De Witte L. Methods for the selection of assistive technology in neurological rehabilitation practice. Scand J Occup Ther. 2010;17: 308–318. pmid:19968577
  14. Bernd T, Van Der Pijl D, De Witte LP. Existing models and instruments for the selection of assistive technology in rehabilitation practice. Scand J Occup Ther. 2009;16: 146–158. pmid:18846479
  15. Carneiro L, Rebelo F, Filgueiras E, Noriega P. Usability and User Experience of Technical Aids for People with Disabilities? A Preliminary Study with a Wheelchair. Procedia Manuf. 2015;3: 6068–6074.
  16. Grimm B, Blom A, Jahr H, Rosenbaum D. New Tools and Technologies for Clinical Outcome Assessment. J Orthop Transl. 2016;7: 70.
  17. Brooke J. SUS: A "Quick and Dirty" Usability Scale. 1996; 207–212.
  18. Koumpouros Y. A Systematic Review on Existing Measures for the Subjective Assessment of Rehabilitation and Assistive Robot Devices. Journal of Healthcare Engineering. Hindawi Limited; 2016. pmid:27196802
  19. Demers L, Monette M, Descent M, Jutai J, Wolfson C. The psychosocial impact of assistive devices scale (PIADS): Translation and preliminary psychometric evaluation of a Canadian-French version. Qual Life Res. 2002;11: 583–592. pmid:12206579
  20. Jutai J, Day H. Psychosocial Impact of Assistive Devices Scale (PIADS). Technol Disabil. 2002;14: 107–111.
  21. Scherer MJ, Cushman LA. Measuring subjective quality of life following spinal cord injury: A validation study of the assistive technology device predisposition assessment. Disabil Rehabil. 2001;23: 387–393. pmid:11394589
  22. Arthanat S, Nochajski SM, Lenker JA, Bauer SM, Wu YWB. Measuring usability of assistive technology from a multicontextual perspective: The case of power wheelchairs. Am J Occup Ther. 2009;63: 751–764. pmid:20092111
  23. Demers L, Weiss-Lambrou R, Ska B. The Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0): An overview and recent progress. Technol Disabil. 2002;14: 101–105.
  24. Almenara Masbernat M, Medina Casanovas J, Duarte Oller E, Selva O’Callaghan A, Universitat Autònoma de Barcelona, Departament de Medicina. Modelo teórico-práctico para la implementación del diseño centrado en el usuario en el desarrollo, la validación y la aceptación de los productos de apoyo para personas con enfermedades de origen neurológico. 2018.
  25. Lavrakas P. Encyclopedia of Survey Research Methods. Sage Publications, Inc.; 2012.
  26. Choi YM, Sprigle SH. Approaches for evaluating the usability of assistive technology product prototypes. Assist Technol. 2011;23: 36–41.
  27. Landeta J. Current validity of the Delphi method in social sciences. Technol Forecast Soc Change. 2006;73: 467–482.
  28. Tristán-López A. Modificación al modelo de Lawshe para el dictamen cuantitativo de la validez de contenido de un instrumento objetivo. Av en medición. 2008;6: 37–48.
  29. Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar A-R. Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication. J Caring Sci. 2015;4: 165–178. pmid:26161370
  30. Jackson JL, Chamberlin J, Kroenke K. Predictors of patient satisfaction. Soc Sci Med. 2001;52: 609–620. pmid:11206657
  31. Bowling A, Rowe G, Lambert N, Waddington M, Mahtani KR, Kenten C, et al. The measurement of patients’ expectations for health care: A review and psychometric testing of a measure of patients’ expectations. Health Technology Assessment. 2012. pp. 1–532. pmid:22747798
  32. Pfeiffer E. A Short Portable Mental Status Questionnaire for the Assessment of Organic Brain Deficit in Elderly Patients. J Am Geriatr Soc. 1975;23: 433–441. pmid:1159263
  33. Martínez de la Iglesia J, Dueñas Herrero R, Onís Vilches MC, Aguado Taberné C, Albert Colomer C, Luque Luque R. [Spanish language adaptation and validation of the Pfeiffer’s questionnaire (SPMSQ) to detect cognitive deterioration in people over 65 years of age]. Med Clin (Barc). 2001;117: 129–34. Available:
  34. George D, Mallery P. SPSS for Windows step by step: a simple guide and reference, 17.0 update. 4th ed. Boston: Allyn & Bacon; 2003.
  35. Streiner DL, Norman GR, Cairney J. Health Measurement Scales: A practical guide to their development and use. 5th ed. Chapter: Reliability. Oxford Medicine Online; 2019.
  36. Marx RG, Menezes A, Horovitz L, Jones EC, Warren RF. A comparison of two time intervals for test-retest reliability of health status instruments. J Clin Epidemiol. 2003;56: 730–735. pmid:12954464
  37. Fleiss JL, Cohen J. The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability. Educ Psychol Meas. 1973;33: 613–619.
  38. Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J Chiropr Med. 2016;15: 155. pmid:27330520
  39. Mukaka M. A guide to appropriate use of Correlation coefficient in medical research. Malawi Med J. 2012;24: 69. Available: /pmc/articles/PMC3576830/.
  40. Mora Barrera CA. Validación de la versión en español de la evaluación Quebec de usuarios con tecnología de asistencia (QUEST 2.0). Universidad Nacional de Colombia. 2010.
  41. Boone WJ. Rasch Analysis for Instrument Development: Why, When, and How? CBE Life Sci Educ. 2016;15. pmid:27856555
  42. Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, et al. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study. Qual Life Res. 2018;27: 1159–1170. pmid:29550964
  43. Abdul. Quality of Psychology Test Between Likert Scale 5 and 6 Points. J Soc Sci. 2010;6: 399–403.
  44. First-person surveys in User Research | by Nikki Anderson | UX Collective. [cited 27 Oct 2022]. Available:
  45. Jiménez-Arberas E, Ordóñez-Fernández FF. Discontinuation or abandonment of mobility assistive technology among people with neurological conditions. Rev Neurol. 2021;72: 426–432. pmid:34109998