Abstract
Objective
Health applications (apps) enable patients to make decisions regarding their health through digitally enabled care processes, expand access to healthcare services, and raise public awareness of health. However, many languages still lack a validated instrument for assessing how patients perceive such apps. This study aims to adapt and validate the User Version of the Mobile Application Rating Scale (uMARS) in Persian and to evaluate the overall quality of Persian meditation apps.
Methods
The population of this cross-sectional study comprised 86 healthcare workers at a health center in western Iran. The sample size was determined using Cochran’s formula. First, the uMARS was translated into Persian. Validity was then assessed using the Content Validity Ratio (CVR), the Content Validity Index (CVI), and face validity. Cronbach’s alpha and the Intraclass Correlation Coefficient (ICC) were used to assess reliability. Of the 124 meditation apps identified, Aramia met the inclusion criteria and was selected for evaluation of objective quality, subjective quality, and perceived impact.
Results
The majority of the participants were female (82.55%), and more than half held a bachelor’s degree (58.14%). The CVR, CVI, ICC, and Cronbach’s alpha values were 0.79, 0.90, 0.93, and 0.86, respectively. These findings revealed that the Persian version of the questionnaire has sufficient reliability and validity. Among the three subscales, perceived impact received the highest mean score (3.96 ± 0.37), and the total score was 3.74 ± 1.04.
Citation: Kohzadi Z, Rahmatizadeh S, Kohzadi Z, Valizadeh-Haghi S (2025) Persian adaptation and validation of the user version of The Mobile Application Rating Scale (uMARS). PLoS ONE 20(4): e0320349. https://doi.org/10.1371/journal.pone.0320349
Editor: Taher Babaee, Iran University of Medical Sciences, IRAN, ISLAMIC REPUBLIC OF
Received: February 8, 2024; Accepted: February 11, 2025; Published: April 2, 2025
Copyright: © 2025 Kohzadi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: “All relevant data are within the paper and its Supporting Information files.”
Funding: This research was supported by the Student Research Committee, Shahid Beheshti University of Medical Sciences, Tehran, Iran [Project NO. 43003344]. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Progress in mobile phone technology over the past two decades has allowed individuals to access health information anywhere due to widespread internet connectivity. Moreover, mobile phones can be employed by public health and healthcare organizations to share health information and provide consistent, unbiased support [1]. The demand for more accurate and tailored medicine, the unsustainable nature of current healthcare expenditure, and the rapid and continuous growth of wireless connectivity have made “mobile health” increasingly popular in recent years [2]. The term “mobile health” or “mHealth” refers to the use of portable electronic devices, such as smartphones and tablets, to support medical services and improve patient outcomes [3]. mHealth solutions are widely used in public health and healthcare services, where they are valued for their simplicity, broad reach, and widespread acceptance [4]. Indeed, interest in digital health technology is growing, and more than half of smartphone users have downloaded at least one health-related application (app) at some point [5]. Smartphone health apps help patients in many ways, including better self-management, increased engagement, convenience, financial savings, and improved provider-patient communication [6]. Through a variety of mechanisms, mobile health apps have the potential to improve patient outcomes significantly [7]. These apps can improve patient engagement, help patients with self-management, provide educational resources, enable remote monitoring, and promote treatment adherence [8,9]. Research indicates that mobile health apps can affect health outcomes by improving the quality of life, controlling clinical parameters, promoting medication adherence, managing symptoms, reducing complications, and contributing to overall well-being [10–12]. 
Users of health-related apps risk a range of potential problems, such as adverse health effects, inaccurate or misleading information, absence of accountability, inefficient healthcare, and privacy issues, if they are not evaluated and regulated properly [13,14].
However, the quality of mHealth apps remains variable, and it is concerning that many health-related apps lack substantial evidence to support their claims [15]. Evaluating the quality and content of these apps is essential to ensure that both users and experts can trust their credibility and effectiveness [16]. The Mobile Application Rating Scale (MARS) is a tool used by experts to evaluate the quality of mHealth apps, and the user version of the Mobile Application Rating Scale (uMARS) is a simplified version that end-users with no specialized training can use to evaluate mHealth app quality [17,18]. With the uMARS, researchers can determine which aspects of health app quality users rate positively or negatively and highlight areas requiring further development [19]. The scale has also been used to assess engagement, usability, functionality, and the perceived impact of health apps and of programs that use them [20–22]. Several studies have applied the uMARS to evaluate app quality: Bardus et al. evaluated mobile apps designed to help manage weight [23], LeBeau et al. assessed mobile apps used by occupational therapists [24], and Adam et al. evaluated mobile apps that calculate prostate cancer risk [25]. Translated and validated versions of the uMARS are available in several languages [26–29]; however, no Persian translation or cultural adaptation of the instrument exists. A Persian version of the uMARS would allow the Persian-speaking community to evaluate mobile apps. Therefore, this study aims to adapt and validate the uMARS in Persian and to evaluate the overall quality of Persian meditation apps.
Method
Study Design and Participants
This study was a cross-sectional survey conducted in January 2024. The population studied consisted of healthcare workers at a health center in the city of Ilam, in western Iran. The sample size, determined using Cochran’s formula with a 95% confidence level, was calculated to be 86 individuals. The inclusion criteria were having at least three years of work experience and owning an Android or iOS phone. Conversely, the exclusion criteria were having less than three years of work experience and not owning an Android or iOS phone.
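The sample-size calculation above can be sketched as follows. This is a minimal illustration of Cochran's formula with a finite-population correction; the margin of error (0.05), the assumed proportion (0.5), and the population size (110) are illustrative assumptions, not values reported in the study.

```python
import math

def cochran_n0(z: float, p: float, e: float) -> float:
    """Cochran's formula for an infinite population: n0 = z^2 * p * (1 - p) / e^2."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

def finite_correction(n0: float, population: int) -> int:
    """Adjust n0 for a finite population of size N: n = n0 / (1 + (n0 - 1) / N)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_n0(z=1.96, p=0.5, e=0.05)      # ~384 for an infinite population
n = finite_correction(n0, population=110)   # a hypothetical staff population of 110 yields 86
```

With a 95% confidence level (z = 1.96), the finite-population correction is what brings the required sample down to a figure of the order reported in the study.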
This research was approved by the Ethics Committee of Shahid Beheshti University of Medical Sciences (Ethics Code: IR.SBMU.RETECH.REC.1402.458). Verbal consent was obtained from the participants, who were first given the Participant Information Sheet to read. After the participants had considered the study in detail and discussed it with the researchers, they were invited to ask any questions they might have. Once all questions were answered and the researcher was confident that the participant fully understood the study requirements, the participant was asked whether they wished to consent. If the participant agreed to proceed, the researcher obtained a witness unrelated to the study. A note was added to the participant’s record stating the date oral consent was obtained, the study’s details, and the names of the researcher and the witness present during oral consent. Notably, participants were allowed to withdraw from the study at any time. Furthermore, no identifiable information was collected from the participants.
During the ethics review, the committee decided that a signed consent form was unnecessary. Hence, the consent form was read and explained to respondents during data collection instead of requiring written consent. Two research team members verbally explained the research objectives to all participants. After their questions were addressed, interested participants received the questionnaire for completion. Additionally, at the beginning of the questionnaire, the research objectives, methodology, and confidentiality measures were explained again, and participants were allowed to withdraw from the study at any time. No identifiable information was collected, and the confidentiality of the participants’ responses was ensured.
Instruments
The study used a two-section questionnaire. The first section is about the demographic characteristics of the participants (gender, age, education, and field of education), and the second section includes the uMARS.
The uMARS is a multidimensional tool for evaluating the overall quality of mobile health apps, developed and validated by Stoyanov et al. in 2016 [17]. It has three key dimensions: objective quality, subjective quality, and perceived impact. Objective quality relates to the technical aspects of the app, including usability, reliability, and functionality. Subjective quality is based on users’ thoughts and feelings, such as whether the app is enjoyable, easy to use, and meets their needs. Perceived impact measures how much the app positively affects the user, such as by improving knowledge, skills, or motivation. Twenty uMARS items evaluate the objective and subjective quality of apps, scored on a 5-point Likert scale from one (“poor”) to five (“excellent”); items 13–16 additionally offer a “Not applicable” option. The objective quality score is the mean of four dimension scores: engagement (items 1–5), functionality (items 6–9), aesthetics (items 10–12), and information (items 13–16). The subjective quality score is the mean of four subjective items (17–20). The last subscale of the uMARS consists of six items, rated on a 5-point Likert scale from one (strongly disagree) to five (strongly agree), that evaluate how the user perceives the app’s impact on their awareness, knowledge, attitudes, intention to change, help-seeking, and the probability of changing the target health behavior [17]. The Intraclass Correlation Coefficients (ICC) reported for the original uMARS (0.66 and 0.70 at two distinct time points) are at appropriate levels. The overall score for each section is calculated by dividing the sum of the item scores by the number of items in that section, and the total uMARS score by dividing the sum of the section scores by the number of sections [17].
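The scoring rules above can be sketched in a short example. The item ratings here are hypothetical, not data from the study; `None` marks a “Not applicable” response (allowed for items 13–16), which is excluded from the mean.

```python
from statistics import mean

def subscale_mean(ratings):
    """Mean of a subscale's items, ignoring 'Not applicable' (None) responses."""
    return mean(r for r in ratings if r is not None)

# Hypothetical ratings for one respondent
objective_subscales = {
    "engagement":    [4, 3, 4, 5, 3],   # items 1-5
    "functionality": [5, 4, 4, 4],      # items 6-9
    "aesthetics":    [3, 4, 4],         # items 10-12
    "information":   [4, None, 3, 4],   # items 13-16, one "Not applicable"
}

# Objective quality = mean of the four dimension means
objective_quality = mean(subscale_mean(v) for v in objective_subscales.values())
subjective_quality = subscale_mean([3, 4, 3, 4])       # items 17-20
perceived_impact = subscale_mean([4, 4, 5, 3, 4, 4])   # six perceived-impact items
```

Averaging dimension means (rather than pooling all items) keeps each dimension equally weighted even though they contain different numbers of items.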
In this study, a score below three is considered poor, three to four is considered moderate, and above four is considered good.
Cross-cultural adaptation and translation process
After obtaining permission from the original questionnaire owners, two Persian (Farsi) language researchers fluent in Persian and English with expertise in health and Information Technology (IT) terminology independently translated the questionnaire into Persian. To reach an agreement, the translators checked the translations for discrepancies and compiled the preliminary version in Persian. The entire Persian version was translated into English by a native English speaker who was fluent in Persian and blind to the original English terminology and phrases in the questionnaire. The English back-translated version has been checked against the original by a team of experts to ensure no conceptual or cross-cultural inconsistencies.
Statistical analysis
The study evaluated content and face validity through quantitative and qualitative methods. Qualitative face validity was determined by a panel of 12 experts in medical informatics and health information management. The purpose was to identify difficulty, inconsistency, or ambiguity in phrases and inadequacies in the meanings of words. The experts provided their opinions in the form of minor changes to the questionnaire. The item impact score was calculated to determine quantitative face validity. First, a five-point Likert scale was used for each tool item, from completely agree (five points) to completely disagree (one point). Then, the questionnaire was given to 15 healthcare workers to determine its validity. After the target group had completed the questionnaire, face validity was calculated using the item impact score formula: Impact Score = Frequency (%) × Importance, where frequency is the proportion of respondents rating the item four or five and importance is the item’s mean rating.
An acceptable impact score for this study is at least 1.5 [30].
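The impact-score check can be sketched as follows. The formula (proportion of respondents rating the item four or five, multiplied by the item's mean rating) is the standard item impact score from the face-validity literature and is assumed here; the ratings are hypothetical, not study data.

```python
def item_impact_score(ratings):
    """Item impact score = frequency (proportion rating >= 4) * importance (mean rating)."""
    frequency = sum(1 for r in ratings if r >= 4) / len(ratings)
    importance = sum(ratings) / len(ratings)
    return frequency * importance

# Example: 15 hypothetical raters on the 5-point scale
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3, 4, 5, 4, 4, 3]
retained = item_impact_score(ratings) >= 1.5  # item kept if the score is at least 1.5
```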
For the qualitative evaluation of content validity, experts were asked to provide their opinions after studying the instrument in detail. Notably, this qualitative evaluation considers grammar, appropriate wording, question meaning, and whether each question is in its proper place. After gathering expert opinions, the necessary changes were applied to the tool. The Content Validity Ratio (CVR) was used to quantitatively evaluate content validity and ensure that the most important and accurate content (i.e., necessary questions) was selected. Additionally, the Content Validity Index (CVI) was used to ensure that the tool’s questions were designed to measure the content as well as possible. To determine the CVR, the experts were asked to indicate whether each question was “necessary,” “not necessary, but useful,” or “not necessary.” The answers were then entered into the CVR formula and compared against Lawshe’s table; only scores above 0.56 were accepted [31]. The questionnaire was then given to the experts to calculate the CVI, and they were asked to rate each item on three criteria (relevance, simplicity, and clarity) using a four-point Likert scale (for relevance, for example: one, irrelevant; two, somewhat relevant; three, relevant; four, completely relevant). The CVI score was calculated by dividing the number of experts who chose options three or four by the total number of experts. Scores above 0.79 were considered acceptable [32,33].
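The two content-validity indices above can be sketched in a few lines. The expert ratings are hypothetical, using a panel of 12 to match the study's panel size; the CVR formula is Lawshe's, and the 0.56 cut-off is the value from Lawshe's table cited in the text.

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's Content Validity Ratio: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def item_cvi(relevance_ratings) -> float:
    """Item-level CVI: share of experts rating the item 3 or 4 on the 4-point scale."""
    return sum(1 for r in relevance_ratings if r >= 3) / len(relevance_ratings)

# 10 of 12 experts rate an item "necessary": CVR = (10 - 6) / 6 ~ 0.67,
# above the 0.56 cut-off for a 12-expert panel.
example_cvr = cvr(n_essential=10, n_experts=12)

# 11 of 12 experts rate the item 3 or 4: CVI ~ 0.92, above the 0.79 threshold.
example_cvi = item_cvi([4, 4, 3, 4, 3, 3, 4, 4, 2, 4, 3, 4])
```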
Cronbach’s alpha was used to determine the degree of internal consistency of the uMARS subscales and overall score. A Cronbach’s alpha value above 0.90 was interpreted as excellent, 0.80-0.89 as good, 0.70-0.79 as acceptable, 0.60-0.69 as questionable, 0.50-0.59 as poor, and < 0.50 as unacceptable [34].
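For reference, internal consistency can be computed with the standard Cronbach's alpha formula, alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). The 3-item, 5-respondent response matrix below is hypothetical, not study data.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per item, all of equal length (one entry per respondent)."""
    k = len(items)
    totals = [sum(per_respondent) for per_respondent in zip(*items)]
    item_variances = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / pvariance(totals))

responses = [
    [4, 5, 3, 4, 4],  # item 1, five respondents
    [4, 4, 3, 5, 4],  # item 2
    [5, 5, 2, 4, 3],  # item 3
]
alpha = cronbach_alpha(responses)  # ~0.79, "acceptable" on the scale above
```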
The uMARS subscales and total scores were tested for test-retest reliabilities. ICC values less than 0.50, 0.50-0.75, 0.75-0.90, and more than 0.90 are regarded as low, moderate, good, and excellent reliability, respectively [35]. After two weeks, a retest was completed by twenty-four randomly selected healthcare workers.
The questionnaire in paper format was distributed among the study participants. The data were analyzed using SPSS version 18.0 (SPSS Inc, Chicago, IL, USA).
App selection process
Following the translation into Persian, a thorough app search was conducted in January 2024 on the Google Play Store (Android), the App Store (iOS), and Cafe Bazaar, the best-known app store in Iran. Keywords such as “meditation” and “meditation Persian” were searched in these stores. A mobile app was selected based on the following inclusion criteria: availability in Persian, relevance to meditation, free availability, a user rating above four out of five stars, a minimum of twenty thousand installations, and an update within the past six months. The data were recorded in a Microsoft Excel spreadsheet for subsequent analysis using descriptive frequency statistics. During the selection process, emphasis was placed on apps that were well supported by their company, regularly updated within the past year to ensure ongoing improvement, widely installed (demonstrating popularity and user familiarity), and consistently rated highly by users (reflecting satisfaction and acceptance).
Ultimately, the Aramia App was selected. The Aramia Persian App specializes in meditation, yoga, mindfulness, and sub-categories such as relaxation, concentration, peaceful sleep, pain control, and stress management. It also includes breathing exercises and night-time relaxation stories. Additionally, the app offers over 15 meditation courses designed to achieve specific goals.
Results
Table 1 presents the participants’ demographic information. Most participants were female (82.55%), and a substantial proportion (58.14%) held a bachelor’s degree. Moreover, the most common field of education was public health (60.47%). Notably, Table 1 had no missing responses.
Validity and reliability
All questionnaire items had a face validity impact score of at least 1.5. Table 2 shows the test-retest reliability (ICC), CVI, and CVR for the uMARS, which were 0.93, 0.90, and 0.79, respectively. Given that the ICC, CVI, and CVR exceed the thresholds of 0.75, 0.79, and 0.56, respectively, the Persian version of the questionnaire has acceptable reliability and validity.
Table 3 shows the Cronbach’s alpha values for the subscales. Objective quality had the highest value (0.84), followed by the information quality subscale (0.74). The overall Cronbach’s alpha was 0.86, indicating good reliability, as it exceeded the acceptable threshold of 0.7.
App assessment
Table 4 shows the mean scores for the Aramia App. Among the three subscales (Objective quality, subjective quality, perceived impact), perceived impact received the highest mean score (3.96 ± 0.37), and subjective quality received the lowest mean score (3.49 ± 0.34). The overall uMARS score for Aramia was 3.74 ± 1.04.
Discussion
The uMARS is a valuable tool for researchers and app developers to identify highly valued and praised components and opportunities for further improvement. It also aids healthcare professionals in recommending quality mobile apps to their patients [17,26,36]. This study presents the first translation and adaptation of the uMARS scale into Persian.
The results indicate that the Persian version has acceptable validity and reliability, and all objective and subjective quality subscales showed adequate internal consistency and test-retest reliability. This finding is consistent with other comparable studies and the original English version [17,26–28]. The present study’s findings showed that the participants rated the Aramia app as moderate. The perceived impact subscale received the highest score, as the Aramia app increased awareness, knowledge, attitudes, and intention to change with respect to meditation.
The overall uMARS score for Aramia was 3.74, which is comparable to the findings of similar studies [37]. The Aramia app also received a poor score on the customization item. This mirrors the study by Lambrecht et al., in which the customization item likewise received the lowest score in the evaluation of a mobile app supporting patients with rheumatic diseases [37]. These findings show that mobile app developers need to consider customization to increase user satisfaction. Occupational stress and burnout affect every profession, but healthcare workers are most at risk [38,39]. Meditation practices facilitated by the Aramia app may improve their mental health and overall well-being, leading to better focus, attention, decision-making, and patient care. By practicing mindfulness, healthcare workers can enhance their ability to stay present and focused in demanding situations. Meditation apps thus offer healthcare workers a valuable resource for stress management, improved mental health, and overall well-being, and incorporating mindfulness practices into daily routines through these apps can benefit both their personal and professional lives.
The participants in this study were healthcare workers, and findings might differ with other participant types. Therefore, we recommend further research on a larger scale with diverse samples to validate the Persian version of the uMARS across various health app categories. Additionally, while the uMARS focuses on assessing the quality and functionality of mobile health apps, it does not precisely evaluate the credibility of the information these apps provide. Including an assessment of information credibility in future studies would be highly valuable.
Conclusion
The study confirmed that the Persian version of the uMARS (PuMARS) is as valid and reliable as the original and cross-culturally congruent with it. The findings indicate that the PuMARS holds significant promise for wider use. Both researchers and app developers can use this tool to gather feedback from end-users and pinpoint valuable elements and opportunities for improvement, thus ensuring higher quality and relevance of mHealth apps. By employing the PuMARS, developers can identify areas that require improvement and make well-informed choices to optimize their applications for better user interaction and heightened user satisfaction.
Acknowledgments
The authors would like to thank all participants and the authorities of the study setting, who provided permission to conduct the study.
References
- 1. Payne HE, Lister C, West JH, Bernhardt JM. Behavioral functionality of mobile apps in health interventions: a systematic review of the literature. JMIR Mhealth Uhealth. 2015;3(1):e20. pmid:25803705
- 2. Steinhubl SR, Muse ED, Topol EJ. Can mobile health technologies transform health care?. JAMA. 2013;310(22):2395–6.
- 3. Grundy Q. A review of the quality and impact of mobile health apps. Annu Rev Public Health. 2022;43:117–34. pmid:34910582
- 4. Berendes S, Gubijev A, McCarthy O, Palmer M, Wilson E, Free C. Sexual health interventions delivered to participants by mobile technology: a systematic review and meta-analysis of randomised controlled trials. Sex Transm Infect. 2021;97(3):190–200.
- 5. Robbins R, Krebs P, Jagannathan R, Jean-Louis G, Duncan D. Health app use among US mobile phone users: Analysis of trends by chronic disease status. JMIR Mhealth Uhealth. 2017;5(12):.
- 6. Ahn D. Benefits and risks of apps for patients. Curr Opin Endocrinol Diabetes Obes. 2022;29(1):17–22.
- 7. Free C, Phillips G, Watson L, Galli L, Felix L, Edwards P, et al. The effectiveness of mobile-health technologies to improve health care service delivery processes: a systematic review and meta-analysis. PLoS Med. 2013;10(1):e1001363. pmid:23458994
- 8. Morrison LG, Hargood C, Pejovic V, Geraghty AWA, Lloyd S, Goodman N, et al. The effect of timing and frequency of push notifications on usage of a smartphone-based stress management intervention: an exploratory trial. PLoS One. 2017;12(1):e0169162. pmid:28046034
- 9. Lee C, Ventola C. Mobile devices and apps for health care professionals: Uses and benefits. Pharmacy and Therapeutics. 2014;39(5):356.
- 10. Collado-Borrell R, Escudero-Vilaplana V, Narrillos-Moraza Á, Villanueva-Bueno C, Herranz-Alonso A, Sanjurjo-Sáez M. Patient-reported outcomes and mobile applications. A review of their impact on patients’ health outcomes. Farmacia Hospitalaria. 2022;46(3):173–81.
- 11. Song T, Yu P, Zhang Z. Design features and health outcomes of mHealth applications for patient self-management of asthma: a systematic review: mHealth apps for asthma self-management. Australasian Computer Science Week. 2022;153–60.
- 12. Kouroubali A, Koumakis L, Kondylakis H, Katehakis DG. An integrated approach towards developing quality mobile health apps for cancer. Advances in Healthcare Information Systems and Administration. 2019:46–71.
- 13. Parker L, Bero L, Gillies D, Raven M, Grundy Q. The “hot potato” of mental health app regulation: a critical case study of the Australian policy arena. Int J Health Policy Manag. 2019;8(3):168.
- 14. Woulfe F, Fadahunsi KP, Smith S, Chirambo GB, Larsson E, Henn P, et al. Identification and evaluation of methodologies to assess the quality of mobile health apps in high-, low-, and middle-income countries: rapid review. JMIR Mhealth Uhealth. 2021;9(10). Available from: /pmc/articles/PMC8548973/
- 15. Modave F, Bian J, Leavitt T, Bromwell J, Harris Iii C, Vincent H. Low quality of free coaching apps with respect to the american college of sports medicine guidelines: a review of current mobile apps. JMIR Mhealth Uhealth. 2015;3(3):e77. pmid:26209109
- 16. BinDhim NF, Hawkey A, Trevena L. A systematic review of quality assessment methods for smartphone health apps. Telemed J E Health. 2015;21(2):97–104. pmid:25469795
- 17. Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and validation of the user version of the mobile application rating scale (uMARS). JMIR Mhealth Uhealth. 2016;4(2):.
- 18. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth. 2015;3(1):e27. pmid:25760773
- 19. Ferguson M, Maidment D, Gomez R, Coulson N, Wharrad H. The feasibility of an m-health educational programme (m2Hear) to improve outcomes in first-time hearing aid users. Int J Audiol. 2021;60(sup1):S30-41.
- 20. Serlachius A, Schache K, Kieser A, Arroll B, Petrie K, Dalbeth N. Association between user engagement of a mobile health app for gout and improvements in self-care behaviors: randomized controlled trial. JMIR Mhealth Uhealth. 2019;7(8):e15021. pmid:31411147
- 21. O’Reilly M, Slevin P, Ward T, Caulfield B. A wearable sensor-based exercise biofeedback system: Mixed methods evaluation of Formulift. JMIR Mhealth Uhealth. 2018;6(1):e33.
- 22. Davidson S, Fletcher S, Wadley G, Reavley N, Gunn J, Wade DA. A mobile phone app to improve the mental health of taxi drivers: Single-arm feasibility trial. JMIR Mhealth Uhealth. 2020;8(1):e19812.
- 23. Bardus M, Ali A, Demachkieh F, Hamadeh G. Assessing the quality of mobile phone apps for weight management: User-centered study with employees from a Lebanese university. JMIR Mhealth Uhealth. 2019;7(1):e9836.
- 24. LeBeau K, Huey LG, Hart M. Assessing the quality of mobile apps used by occupational therapists: evaluation using the user version of the mobile application rating scale. JMIR Mhealth Uhealth. 2019;7(5):e13019. pmid:31066712
- 25. Adam A, Hellig J, Perera M, Bolton D, Lawrentschuk N. Prostate cancer risk calculator mobile applications (Apps): a systematic review and scoring using the validated user version of the Mobile Application Rating Scale (uMARS). World Journal of Urology. 2018;36(4):565–73.
- 26. Martin-Payo R, Carrasco-Santos S, Cuesta M, Stoyan S, Gonzalez-Mendez X, Del Mar Fernandez-Alvarez M. Spanish adaptation and validation of the User Version of the Mobile Application Rating Scale (uMARS). J Am Med Inform Assoc. 2021;28(12):2681–6.
- 27. Chasiotis G, Stoyanov S, Karatzas A, Gravas S. Greek validation of the user version of the Mobile Application Rating Scale (uMARS). J Int Med Res. 2023;51(3):1–8.
- 28. Morselli S, Sebastianelli A, Domnich A, Bucchi C, Spatafora P, Liaci A. Translation and validation of the Italian version of the user version of the Mobile Application Rating Scale (uMARS). J Prev Med Hyg. 2021;62(1):E243.
- 29. Shinohara Y, Yamamoto K, Ito M, Sakata M, Koizumi S, Hashisako M. Development and validation of the Japanese version of the uMARS (user version of the mobile app rating system). Int J Med Inform. 2022;165.
- 30. Thomas S, Hathaway D, Arheart K. Face validity. West J Nurs Res. 1992;14(1):109–12.
- 31. Lawshe CH. A quantitative approach to content validity. Personnel Psychology. 1975;28(4):563–75.
- 32. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67. pmid:17654487
- 33. Shi J, Mo X, Sun Z. [Content validity index in scale development]. Zhong Nan Da Xue Xue Bao Yi Xue Ban. 2012;37(2):152–5.
- 34. Domnich A, Arata L, Amicizia D, Signori A, Patrick B, Stoyanov S, et al. Development and validation of the Italian version of the Mobile Application Rating Scale and its generalisability to apps targeting primary prevention. BMC Med Inform Decis Mak. 2016;16:83. pmid:27387434
- 35. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155–63. pmid:27330520
- 36. Gralha S, Bittencourt O da S. Portuguese translation and validation of the user rating scale for mobile applications in the health area (uMARS). Res Soc Develop. 2023;12(6):e8912642056.
- 37. Lambrecht A, Vuillerme N, Raab C, Simon D, Messner E-M, Hagen M, et al. Quality of a supporting mobile app for rheumatic patients: patient-based assessment using the User Version of the Mobile Application Scale (uMARS). Front Med (Lausanne). 2021;8:715345. pmid:34368202
- 38. de la Cruz S, Cebrino J, Herruzo J, Vaquero-Abellán M. Multicenter study into burnout, perceived stress, job satisfaction, coping strategies, and general health among emergency department nursing staff. J Clin Med. 2020;9(4).
- 39. Mengist B, Amha H, Ayenew T, Gedfew M, Akalu T, Assemie M, et al. Occupational stress and burnout among health care workers in Ethiopia: A systematic review and meta-analysis. Arch Rehabil Res Clin Transl. 2021;3(2):1–11.