Usability Testing of Mobile Applications: A Methodological Framework
Abstract
1. Introduction
2. Related Work
- Mobile phone usability level. This level indicates what we ultimately want to evaluate. As an emergent concept, usability cannot be measured directly or precisely; instead, it can be indicated indirectly as the aggregate of the usability factors that underlie it.
- Usability indicator level. Five usability indicators are relevant to mobile phone usability: visual support of task goals, support of cognitive interaction, support of efficient interaction, functional support of user needs, and ergonomic support.
- Usability criteria level. This level identifies several usability factors that can be directly measured using different methods.
- Usability property level. This level represents the actual states or behaviors of particular interface features of a mobile phone, supplying actual usability values to the criteria level. (A data-structure sketch of the four levels follows this list.)
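To make the four-level structure easier to inspect, the following minimal sketch encodes it as nested Python dictionaries. The level names follow the list above; the sample indicator, criterion, and property entries are illustrative assumptions, not the model's full catalogue.

```python
# A minimal sketch of the four-level hierarchical usability model described
# above. The level names follow the list; the sample entries under each level
# are illustrative assumptions, not an exhaustive catalogue.

usability_model = {
    "usability": {                          # top level: the emergent concept
        "indicators": {                     # usability indicator level
            "support_of_efficient_interaction": {
                "criteria": {               # usability criteria level (measurable)
                    "task_completion_time": {
                        "properties": [     # usability property level
                            "menu_depth",   # interface feature states/behaviors
                            "button_size",
                        ],
                    },
                },
            },
        },
    },
}

def criteria_of(indicator: str) -> list[str]:
    """Return the measurable criteria registered under a given indicator."""
    node = usability_model["usability"]["indicators"].get(indicator, {})
    return list(node.get("criteria", {}))

print(criteria_of("support_of_efficient_interaction"))
```

Traversing such a structure top-down mirrors the model's logic: the emergent concept is decomposed into indicators, indicators into measurable criteria, and criteria into concrete interface properties.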
3. The Theory of Mobile Usability
3.1. Usability Conceptualization
- external, independent of any particular user and valid for all interested users (e.g., current weather, dangerous events, time);
- location-specific, referring to information about the user’s point of interest (e.g., traffic jams, road conditions, parking spaces, restaurant reviews);
- user-specific, related to the user’s attributes, beliefs, activities, and interests (e.g., gender, age, nationality, religion); a data-model sketch of these categories follows this list.
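As a small illustration, the sketch below groups the three categories into a single context record; the field names and example values are assumptions made for the example only.

```python
from dataclasses import dataclass, field

# A minimal sketch of the three context-information categories listed above.
# Field names and example values are illustrative assumptions.

@dataclass
class UsageContext:
    external: dict = field(default_factory=dict)       # valid for all users
    location: dict = field(default_factory=dict)       # user's point of interest
    user_specific: dict = field(default_factory=dict)  # user attributes/interests

ctx = UsageContext(
    external={"weather": "rain", "time": "18:40"},
    location={"traffic": "jam on A1", "parking": "2 free spots"},
    user_specific={"age": 34, "interests": ["cycling"]},
)
print(ctx.location["traffic"])
```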
3.2. Usability Attributes Conceptualization
3.3. Usability Attributes Operationalization
3.3.1. Observed Effectiveness
3.3.2. Observed Efficiency
3.3.3. Perceived Effectiveness
3.3.4. Perceived Efficiency
3.3.5. Perceived Satisfaction
4. Methodological Framework for Mobile Usability Testing
- “a technique used to evaluate a product by testing it on users” [55];
- “a technique for identifying difficulty that individuals may have using a product” [56];
- “a widely used technique to evaluate user performance and acceptance of products and systems” [57];
- “an essential skill for usability practitioners—professionals whose primary goal is to provide guidance to product developers for the purpose of improving the ease of use of their products” [58];
- “an evaluation method in which one or more representative users at a time perform tasks or describe their intentions under observation” [59];
- “a systematic way of observing actual users trying out a product and collecting information about the specific ways in which the product is easy or difficult for them” [62].
4.1. Research Settings for Usability Testing of Mobile Applications
4.1.1. Laboratory Studies
- a desk and chair, used by a user during application testing;
- document camera, a real-time image capture device, responsible for video recording;
- microphone, an external device that enables audio recording;
- video camera, an optical instrument that allows real-time observation of the user;
- a computer system unit (PC), optionally equipped with a keyboard, mouse, and external monitor, used to store session recordings; alternatively, a laptop computer may be used as an efficient substitute.
- monitor, used to view a user performing tasks;
- microphone, as a means of communication with a user;
- office equipment, necessary to ensure comfortable working conditions.
- laboratory-based testing by using a computer-based mobile device emulator;
- laboratory-based testing by using a mobile device (smartphone or tablet).
4.1.2. Field Studies
4.1.3. Laboratory vs. Field Studies
4.1.4. Self-User Usability Testing
4.2. Data Collection Techniques
- Questionnaire.
- Participant observation.
- Thinking aloud.
- Interview.
4.2.1. Questionnaire
4.2.2. Participant Observation
- Passive occurs when the researcher is present but does not interact with people. At this level of participation, those being observed may not even be aware that they are being observed. By acting as a pure observer, the researcher can obtain a great deal of undisturbed information [96].
- Moderate is when the observer is present at the scene of action. Although recognized as a researcher, the observer does not actively interact with those involved, though they may occasionally be asked to become involved. At this level of interaction, it is typical practice to use a structured observation framework. In some settings, moderate participation acts as a proxy until more active participation is possible.
- Active occurs when the researcher participates in almost everything that other people do with the aim of learning. In addition, a researcher proactively interacts with the participants (e.g., by talking to them and performing activities), thereby collecting all the necessary information.
- Complete is when the researcher is or becomes a member of the group that is being studied. To avoid disrupting normal activities, the role is usually hidden from the group [97].
- Moderated, requiring the moderator to be present either in person or on camera. In an in-person moderated usability test, a moderator asks a participant to complete a series of tasks while observing and taking notes, so both parties communicate in real time.
- Unmoderated, which does not require the presence of a moderator. Participants perform application testing at their own pace, usually guided by a set of prompts.
4.2.3. Thinking Aloud
- Relaxed (or interactive) thinking aloud: a test user is asked to verbalize his or her thoughts by providing a running commentary on self-performed actions and, in moderated tests, is encouraged to self-reflect on current thoughts and actions [112].
- Classic thinking aloud: a test user is limited to verbalizing information that is used or has been used to solve a particular task. In addition, the interaction between researcher and user should be restricted to a simple reminder to think aloud if the user falls silent [113].
4.2.4. Interview
- An unstructured or non-directive interview is an interview in which the questions are not predetermined, i.e., the interviewer asks open-ended questions and relies on the freely given answers of the participants. This approach therefore offers both parties (interviewer and interviewee) flexibility and freedom in planning, conducting, and organizing the interview content and questions [118].
- A semi-structured interview combines a predetermined set of open-ended questions, which also encourage discussion, with the opportunity for the interviewer to explore particular issues or responses further. The rigidity of its structure can be varied depending on the purpose of the study and the research questions. Its main advantage is that it enables reciprocity between the interviewer and the participant [119], allowing the interviewer to improvise follow-up questions based on the participant’s responses. The semi-structured format is the most commonly used interview technique in qualitative research [120].
- A structured interview is a systematic approach in which the interviewer poses the same pre-defined questions to all participants in the same order and rates the answers using a standardized scoring system. In research, structured interviews are usually quantitative in nature. They are easy to replicate because they use a fixed set of closed-ended questions that are easy to quantify and thus test for reliability [121]. However, this form of interview is not flexible: new questions cannot be asked off the cuff during the interview, as an interview schedule must be followed.
4.3. Usability Testing Process
- Data collection.
- Data analysis.
- Usability assessment.
4.3.1. Data Collection
- A session starts with an introduction. The researcher briefly presents the general assumptions, research objectives, research object and methods, session scenario (including tasks to be performed by the user), details of the thinking aloud protocol, components of the test environment, data collected and how the data will be used in the future, user rights, user confidentiality, and anonymity clauses.
- The user is then asked to sign a consent form, a document that protects the rights of the participants, provides details of the above information, and ultimately builds trust between the researcher and the participants.
- Next, the researcher briefly informs and instructs the participant about the content and purpose of the pre-testing questionnaire. This instrument is used to collect demographic data, information about the participant’s knowledge and skills in relation to the object of the study, and other relevant information. An example of a pre-testing questionnaire is available in [50].
- During the second briefing, the researcher presents the user with a pre-defined list of tasks to perform. All hardware equipment should be checked. Finally, if there are no obstacles, the user is kindly asked to think aloud while performing each task.
- The usability testing session is the core component of the data collection phase, as voice and video data are collected using hardware equipment (document camera and microphone [130]). In addition, the researcher is also involved, monitoring and observing the progress of the test session. From a practical perspective, written notes can often provide useful follow-up information.
- Next, the participant is asked for additional feedback in the form of open questions on any unspoken or unobserved issues raised during the interaction with the application; this could take the form of an interview. If necessary, a short break afterwards is an option to consider.
- Finally, the post-test questionnaire is administered to the user. The aim is to collect primary data on the user’s perceptions and experiences. An example of a post-test questionnaire can be found in [50].
- In the summary, the researcher concludes the testing session and discusses any remaining organizational issues. The overall flow is sketched as an ordered checklist below.
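Since the session flow above is essentially an ordered protocol, it can be encoded as a simple checklist. The step names below paraphrase the list items and are illustrative, not a prescribed instrument.

```python
# A minimal sketch of the session flow above as an ordered checklist.
# Step names paraphrase the list items; they are illustrative, not prescriptive.

SESSION_STEPS = [
    "introduction",
    "consent form",
    "pre-test questionnaire briefing",
    "pre-test questionnaire",
    "task briefing and equipment check",
    "usability testing session (think-aloud, recording)",
    "open-question feedback / interview",
    "post-test questionnaire",
    "summary and wrap-up",
]

def run_session(log=print):
    """Walk the protocol in order, logging each completed step."""
    for i, step in enumerate(SESSION_STEPS, start=1):
        log(f"[{i}/{len(SESSION_STEPS)}] {step}: done")

run_session()
```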
4.3.2. Data Analysis
- video content analysis, comprising annotation procedures in which the user’s actions and application responses are separated and marked on the timeline;
- identifying and documenting the application errors, defects, and malfunctions; and
- extracting all numerical values necessary to estimate particular attribute metrics (a sketch of this reduction step follows the list).
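The sketch below illustrates the reduction step: timeline annotations of the form (timestamp, actor, event) are counted into the numerical values that feed the attribute metrics. The annotation format and event names are assumptions for the example.

```python
# A minimal sketch of the analysis step: annotations placed on the session
# timeline are reduced to the numerical values needed by attribute metrics.
# The annotation tuples and event names are illustrative assumptions.

annotations = [
    (2.4,  "user", "tap:login_button"),
    (3.1,  "app",  "response:login_form"),
    (7.8,  "user", "tap:outside_app"),     # tap unrelated to app usage
    (9.0,  "app",  "error:timeout"),       # defect to be documented
    (12.5, "user", "tap:back_button"),
]

errors = [a for a in annotations if a[2].startswith("error:")]
taps_in_app = sum(1 for _, actor, e in annotations
                  if actor == "user" and e.startswith("tap:") and e != "tap:outside_app")
back_presses = sum(1 for _, _, e in annotations if e == "tap:back_button")

print(f"errors={len(errors)}, in-app taps={taps_in_app}, back presses={back_presses}")
```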
4.3.3. Usability Assessment
4.4. GAQM Framework
- Conceptual level (goal). The research goal is defined, including the usability dimension (observed or perceived) and the name of the mobile application (subject of the study); a goal could also refer to usability in general.
- Contextual level (attribute). The mobile usability attributes are specified.
- Operational level (question). At least one research question is formulated to operationalize each specific attribute.
- Quantitative level (metric). At least one directly observable metric is assigned for an observed attribute, while two or more are assigned for a perceived attribute. A minimal worked example of a GAQM decomposition follows this list.
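The following minimal sketch applies the four GAQM levels to a hypothetical application ("FitTrack" is an invented name). The metric codes EFFE1 and EFFE2 refer to the observed-effectiveness metrics tabulated later in the paper; the goal and question wording are illustrative assumptions.

```python
# A minimal sketch of a GAQM decomposition for a hypothetical mobile app.
# The goal wording and question are illustrative; the metric codes
# (EFFE1/EFFE2) follow the observed-effectiveness table later in the paper.

gaqm = {
    "goal": "Evaluate the observed usability of the FitTrack app",  # conceptual level
    "attributes": {                                                 # contextual level
        "observed effectiveness": {
            "questions": {                                          # operational level
                "Can users complete the workout-logging task?": {
                    "metrics": ["EFFE1", "EFFE2"],                  # quantitative level
                },
            },
        },
    },
}

# Rule of thumb encoded above: an observed attribute needs at least one
# directly observable metric; a perceived attribute would need two or more.
for attr, spec in gaqm["attributes"].items():
    for question, entry in spec["questions"].items():
        print(f"{attr} -> {question} -> metrics: {', '.join(entry['metrics'])}")
```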
5. Use Cases
5.1. Use Case #1
5.2. Use Case #2
5.3. Use Case #3
6. Discussion
7. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| ACM | Association for Computing Machinery |
| MDPI | Multidisciplinary Digital Publishing Institute |
| GAQM | Goal–Attribute–Question–Metric |
| GPS | Global Positioning System |
| IEEE | Institute of Electrical and Electronics Engineers |
| ISO | International Organization for Standardization |
| UI | User Interface |
References
- Statista. Forecast Number of Mobile Users Worldwide from 2020 to 2025. 2023. Available online: https://www.statista.com/statistics/218984/number-of-global-mobile-users-since-2010/ (accessed on 28 July 2023).
- Statista. Time Spent with Nonvoice Activities on Mobile Phones Every Day in the United States from 2019 to 2024. 2023. Available online: https://www.statista.com/statistics/1045353/mobile-device-daily-usage-time-in-the-us/ (accessed on 28 July 2023).
- Elite Content Marketer. Average Screen Time Statistics for 2023. 2023. Available online: https://elitecontentmarketer.com/screen-time-statistics/ (accessed on 28 July 2023).
- Statista. Revenue from Smartphone Sales in the United States from 2013 to 2027. 2023. Available online: https://www.statista.com/statistics/619821/smartphone-sales-revenue-in-the-us/ (accessed on 28 July 2023).
- Admiral Media. Why 99.5 percent of Consumer Apps Fail (And How To Keep Yours Alive). 2023. Available online: https://admiral.media/why-consumer-apps-fail-and-how-to-keep-yours-alive/ (accessed on 16 December 2023).
- Goyal, A. Top Reasons Why Mobile Apps Fail to Make a Mark in the Market. 2019. Available online: https://www.businessofapps.com/insights/top-reasons-why-mobile-apps-fail-to-make-a-mark-in-the-market/ (accessed on 2 February 2024).
- Swaid, S.I.; Suid, T.Z. Usability heuristics for M-commerce apps. In Advances in Usability, User Experience and Assistive Technology, Proceedings of the AHFE 2018 International Conferences on Usability & User Experience and Human Factors and Assistive Technology, Orlando, FL, USA, 21–25 July 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 79–88. [Google Scholar]
- Tode, C. More than Half of Consumers Dissatisfied with Mobile Retail Experiences. 2017. Available online: https://www.retaildive.com/ex/mobilecommercedaily/more-than-half-of-shoppers-are-dissatisfied-with-mobile-retail-experiences-adobe (accessed on 2 February 2024).
- Hedegaard, S.; Simonsen, J.G. Extracting usability and user experience information from online user reviews. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 2089–2098. [Google Scholar]
- Xu, G.; Gutiérrez, J.A. An exploratory study of killer applications and critical success factors in m-commerce. J. Electron. Commer. Organ. (JECO) 2006, 4, 63–79. [Google Scholar] [CrossRef]
- Baharuddin, R.; Singh, D.; Razali, R. Usability dimensions for mobile applications-a review. Res. J. Appl. Sci. Eng. Technol. 2013, 5, 2225–2231. [Google Scholar] [CrossRef]
- Sunardi; Desak, G.F.P.; Gintoro. List of most usability evaluation in mobile application: A systematic literature review. In Proceedings of the 2020 International Conference on Information Management and Technology (ICIMTech), Bandung, Indonesia, 13–14 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 283–287. [Google Scholar]
- Au, F.T.; Baker, S.; Warren, I.; Dobbie, G. Automated usability testing framework. In Proceedings of the Ninth Conference on Australasian User Interface, Darlinghurst, NSW, Australia, 1 January 2008; Volume 76, pp. 55–64. [Google Scholar]
- Joshua, S.R.; Abbas, W.; Lee, J.H.; Kim, S.K. Trust Components: An Analysis in The Development of Type 2 Diabetic Mellitus Mobile Application. Appl. Sci. 2023, 13, 1251. [Google Scholar] [CrossRef]
- Hohmann, L. Usability: Happier users mean greater profits. Inf. Syst. Manag. 2003, 20, 66–76. [Google Scholar] [CrossRef]
- Ahmad, W.F.W.; Sulaiman, S.; Johari, F.S. Usability Management System (USEMATE): A web-based automated system for managing usability testing systematically. In Proceedings of the 2010 International Conference on User Science and Engineering (i-USEr), Shah Alam, Malaysia, 13–15 December 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 110–115. [Google Scholar]
- Cliquet, G.; Gonzalez, C.; Huré, E.; Picot-Coupey, K. From Mobile Phone to Smartphone: What’s New About M-Shopping? In Ideas in Marketing: Finding the New and Polishing the Old, Proceedings of the 2013 Academy of Marketing Science (AMS) Annual Conference, Monterey Bay, CA, USA, 15–18 May 2013; Springer: Berlin/Heidelberg, Germany, 2015; pp. 199–202. [Google Scholar]
- Coursaris, C.K.; Kim, D.J. A meta-analytical review of empirical mobile usability studies. J. Usability Stud. 2011, 6, 117–171. [Google Scholar]
- Thales. The Evolution of the Smartphone. 2023. Available online: https://www.thalesgroup.com/en/worldwide-digital-identity-and-security/mobile/magazine/evolution-smartphone (accessed on 17 December 2023).
- Nacheva, R. Standardization issues of mobile usability. Int. J. Interact. Mob. Technol. 2020, 14, 149–157. [Google Scholar] [CrossRef]
- Whittemore, R.; Knafl, K. The integrative review: Updated methodology. J. Adv. Nurs. 2005, 52, 546–553. [Google Scholar] [CrossRef]
- Zhang, D.; Adipat, B. Challenges, methodologies, and issues in the usability testing of mobile applications. Int. J. Hum.-Comput. Interact. 2005, 18, 293–308. [Google Scholar] [CrossRef]
- Ji, Y.G.; Park, J.H.; Lee, C.; Yun, M.H. A usability checklist for the usability evaluation of mobile phone user interface. Int. J. Hum.-Comput. Interact. 2006, 20, 207–231. [Google Scholar] [CrossRef]
- Heo, J.; Ham, D.H.; Park, S.; Song, C.; Yoon, W.C. A framework for evaluating the usability of mobile phones based on multi-level, hierarchical model of usability factors. Interact. Comput. 2009, 21, 263–275. [Google Scholar] [CrossRef]
- Hussain, A.; Kutar, M. Usability metric framework for mobile phone application. In Proceedings of the PG Net’09: 10th Annual Conference on the Convergence of Telecommunications, Networking and Broadcasting, Liverpool, UK, 22–23 June 2009. [Google Scholar]
- Jeong, J.; Kim, N.; In, H.P. Detecting usability problems in mobile applications on the basis of dissimilarity in user behavior. Int. J. Hum.-Comput. Stud. 2020, 139, 102364. [Google Scholar] [CrossRef]
- Weichbroth, P. Usability of mobile applications: A systematic literature study. IEEE Access 2020, 8, 55563–55577. [Google Scholar] [CrossRef]
- ISO 9241-11:2018; Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts. International Organization for Standardization: Geneva, Switzerland, 2018.
- Owoc, M.; Weichbroth, P.; Żuralski, K. Towards better understanding of context-aware knowledge transformation. In Proceedings of the 2017 Federated Conference on Computer Science and Information Systems (FedCSIS), Prague, Czech Republic, 3–6 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1123–1126. [Google Scholar]
- Yuan, H.; Jin, T.; Ye, X. Establishment and Application of Crowd-Sensing-Based System for Bridge Structural Crack Detection. Appl. Sci. 2023, 13, 8281. [Google Scholar] [CrossRef]
- Rafique, U.; Khan, S.; Ahmed, M.M.; Kiani, S.H.; Abbas, S.M.; Saeed, S.I.; Alibakhshikenari, M.; Dalarsson, M. Uni-planar MIMO antenna for sub-6 GHz 5G mobile phone applications. Appl. Sci. 2022, 12, 3746. [Google Scholar] [CrossRef]
- Nakhimovsky, Y.; Miller, A.T.; Dimopoulos, T.; Siliski, M. Behind the scenes of google maps navigation: Enabling actionable user feedback at scale. In Proceedings of the CHI’10 Extended Abstracts on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 3763–3768. [Google Scholar]
- Musumba, G.W.; Nyongesa, H.O. Context awareness in mobile computing: A review. Int. J. Mach. Learn. Appl. 2013, 2, 5. [Google Scholar] [CrossRef]
- Luo, C.; Goncalves, J.; Velloso, E.; Kostakos, V. A survey of context simulation for testing mobile context-aware applications. ACM Comput. Surv. (CSUR) 2020, 53, 1–39. [Google Scholar] [CrossRef]
- Encyclopedia.com. Attribute. 2024. Available online: https://www.encyclopedia.com/science-and-technology/computers-and-electrical-engineering/computers-and-computing/attribute (accessed on 20 January 2024).
- Zaibon, S.B. User testing on game usability, mobility, playability, and learning content of mobile game-based learning. J. Teknol. 2015, 77, 131–139. [Google Scholar] [CrossRef]
- Gilbert, A.L.; Sangwan, S.; Ian, H.H.M. Beyond usability: The OoBE dynamics of mobile data services markets. Pers. Ubiquitous Comput. 2005, 9, 198–208. [Google Scholar] [CrossRef]
- Silvennoinen, J.; Vogel, M.; Kujala, S. Experiencing visual usability and aesthetics in two mobile application contexts. J. Usability Stud. 2014, 10, 46–62. [Google Scholar]
- Widyanti, A.; Ainizzamani, S.A.Q. Usability evaluation of online transportation’s user interface. In Proceedings of the 2017 International Conference on Information Technology Systems and Innovation (ICITSI), Bandung, Indonesia, 23–24 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 82–86. [Google Scholar]
- Harrison, R.; Flood, D.; Duce, D. Usability of mobile applications: Literature review and rationale for a new usability model. J. Interact. Sci. 2013, 1, 1–16. [Google Scholar] [CrossRef]
- John, B.E.; Marks, S.J. Tracking the effectiveness of usability evaluation methods. Behav. Inf. Technol. 1997, 16, 188–202. [Google Scholar] [CrossRef]
- Jeng, J. Usability assessment of academic digital libraries: Effectiveness, efficiency, satisfaction, and learnability. Libri 2005, 55, 96–121. [Google Scholar] [CrossRef]
- Kabir, M.A.; Salem, O.A.; Rehman, M.U. Discovering knowledge from mobile application users for usability improvement: A fuzzy association rule mining approach. In Proceedings of the 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 24–26 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 126–129. [Google Scholar]
- Baumeister, R.F. Encyclopedia of Social Psychology; Sage: London, UK, 2007; Volume 1. [Google Scholar]
- Choi, W.; Stvilia, B. Web credibility assessment: Conceptualization, operationalization, variability, and models. J. Assoc. Inf. Sci. Technol. 2015, 66, 2399–2414. [Google Scholar] [CrossRef]
- Carneiro, A. Maturity and metrics in health organizations information systems. In Handbook of Research on ICTs and Management Systems for Improving Efficiency in Healthcare and Social Care; IGI Global: Hershey, PA, USA, 2013; pp. 937–952. [Google Scholar]
- United States Agency for International Development. Glossary of Evaluation Terms. 2009. Available online: https://pdf.usaid.gov/pdf_docs/PNADO820.pdf (accessed on 20 January 2024).
- Bollen, K.A.; Diamantopoulos, A. In defense of causal-formative indicators: A minority report. Psychol. Methods 2017, 22, 581. [Google Scholar] [CrossRef] [PubMed]
- Edwards, J.R.; Bagozzi, R.P. On the nature and direction of relationships between constructs and measures. Psychol. Methods 2000, 5, 155. [Google Scholar] [CrossRef] [PubMed]
- Weichbroth, P. An empirical study on the impact of gender on mobile applications usability. IEEE Access 2022, 10, 119419–119436. [Google Scholar] [CrossRef]
- Cambridge Dictionary. Satisfaction. 2024. Available online: https://dictionary.cambridge.org/dictionary/english/satisfaction (accessed on 20 January 2024).
- Liébana-Cabanillas, F.; Molinillo, S.; Ruiz-Montañez, M. To use or not to use, that is the question: Analysis of the determining factors for using NFC mobile payment systems in public transportation. Technol. Forecast. Soc. Chang. 2019, 139, 266–276. [Google Scholar] [CrossRef]
- Hsiao, C.H.; Chang, J.J.; Tang, K.Y. Exploring the influential factors in continuance usage of mobile social Apps: Satisfaction, habit, and customer value perspectives. Telemat. Inform. 2016, 33, 342–355. [Google Scholar] [CrossRef]
- Wan, L.; Xie, S.; Shu, A. Toward an understanding of university students’ continued intention to use MOOCs: When UTAUT model meets TTF model. Sage Open 2020, 10, 2158244020941858. [Google Scholar] [CrossRef]
- Lodhi, A. Usability heuristics as an assessment parameter: For performing usability testing. In Proceedings of the 2010 2nd International Conference on Software Technology and Engineering, San Juan, PR, USA, 3–5 October 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 2, pp. V2-256–V2-259. [Google Scholar]
- Chisman, J.; Diller, K.; Walbridge, S. Usability testing: A case study. Coll. Res. Libr. 1999, 60, 552–569. [Google Scholar] [CrossRef]
- Wichansky, A.M. Usability testing in 2000 and beyond. Ergonomics 2000, 43, 998–1006. [Google Scholar] [CrossRef]
- Lewis, J.R. Usability testing. In Handbook of Human Factors and Ergonomics; John Wiley & Sons: Hoboken, NJ, USA, 2012; pp. 1267–1312. [Google Scholar]
- Riihiaho, S. Usability testing. In The Wiley Handbook of Human Computer Interaction; John Wiley & Sons: Hoboken, NJ, USA, 2018; Volume 1, pp. 255–275. [Google Scholar]
- Mason, P.; Plimmer, B. A critical comparison of usability testing methodologies. NACCQ 2005, 255–258. Available online: https://www.cs.auckland.ac.nz/~beryl/publications/NACCQ%202005%20Critical%20Comparison%20of%20Usability%20Testing%20Methodologies.pdf (accessed on 17 December 2023).
- Gaffney, G. Information & Design Designing for Humans. 1999. Available online: https://infodesign.com.au/assets/UsabilityTesting.pdf (accessed on 28 July 2023).
- Dumas, J.S.; Redish, J. A Practical Guide to Usability Testing; Intellect Books: Bristol, UK, 1999. [Google Scholar]
- Sauer, J.; Sonderegger, A.; Heyden, K.; Biller, J.; Klotz, J.; Uebelbacher, A. Extra-laboratorial usability tests: An empirical comparison of remote and classical field testing with lab testing. Appl. Ergon. 2019, 74, 85–96. [Google Scholar] [CrossRef] [PubMed]
- Betiol, A.H.; de Abreu Cybis, W. Usability testing of mobile devices: A comparison of three approaches. In Proceedings of the IFIP Conference on Human-Computer Interaction, Orlando, FL, USA, 9–14 July 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 470–481. [Google Scholar]
- Gawlik-Kobylińska, M.; Kabashkin, I.; Misnevs, B.; Maciejewski, P. Education Mobility as a Service: A Study of the Features of a Novel Mobility Platform. Appl. Sci. 2023, 13, 5245. [Google Scholar] [CrossRef]
- Dehlinger, J.; Dixon, J. Mobile application software engineering: Challenges and research directions. In Proceedings of the Workshop on Mobile Software Engineering, Honolulu, HI, USA, 22–24 May 2011; Volume 2, pp. 29–32. [Google Scholar]
- Schusteritsch, R.; Wei, C.Y.; LaRosa, M. Towards the perfect infrastructure for usability testing on mobile devices. In Proceedings of the CHI’07 extended abstracts on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; pp. 1839–1844. [Google Scholar]
- Genaidy, A.M.; Karwowski, W. The emerging field of health engineering. Theor. Issues Ergon. Sci. 2006, 7, 169–179. [Google Scholar] [CrossRef]
- Keith, M.; Shao, B.; Steinbart, P.J. The usability of passphrases for authentication: An empirical field study. Int. J. Hum.-Comput. Stud. 2007, 65, 17–28. [Google Scholar] [CrossRef]
- Aziz, H.A. Comparison between field research and controlled laboratory research. Arch. Clin. Biomed. Res. 2017, 1, 101–104. [Google Scholar] [CrossRef]
- Harbach, M.; Von Zezschwitz, E.; Fichtner, A.; De Luca, A.; Smith, M. It’s a hard lock life: A field study of smartphone (Un)Locking behavior and risk perception. In Proceedings of the 10th Symposium on Usable Privacy and Security (SOUPS 2014), Menlo Park, CA, USA, 9–11 July 2014; pp. 213–230. [Google Scholar]
- Pousttchi, K.; Thurnher, B. Understanding effects and determinants of mobile support tools: A usability-centered field study on IT service technicians. In Proceedings of the 2006 International Conference on Mobile Business, Copenhagen, Denmark, 26–27 June 2006; IEEE: Piscataway, NJ, USA, 2006; p. 10. [Google Scholar]
- Van Elzakker, C.P.; Delikostidis, I.; van Oosterom, P.J. Field-based usability evaluation methodology for mobile geo-applications. Cartogr. J. 2008, 45, 139–149. [Google Scholar] [CrossRef]
- Kjeldskov, J.; Graham, C.; Pedell, S.; Vetere, F.; Howard, S.; Balbo, S.; Davies, J. Evaluating the usability of a mobile guide: The influence of location, participants and resources. Behav. Inf. Technol. 2005, 24, 51–65. [Google Scholar] [CrossRef]
- Pensabe-Rodriguez, A.; Lopez-Dominguez, E.; Hernandez-Velazquez, Y.; Dominguez-Isidro, S.; De-la Calleja, J. Context-aware mobile learning system: Usability assessment based on a field study. Telemat. Inform. 2020, 48, 101346. [Google Scholar] [CrossRef]
- Rowley, D.E. Usability testing in the field: Bringing the laboratory to the user. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 24–28 April 1994; pp. 252–257. [Google Scholar]
- Kallio, T.; Kaikkonen, A.; Cankar, M.; Kallio, T.; Kankainen, A. Usability testing of mobile applications: A comparison between laboratory and field testing. J. Usability Stud. 2005, 1, 23–28. [Google Scholar]
- Nielsen, C.M.; Overgaard, M.; Pedersen, M.B.; Stage, J.; Stenild, S. It’s worth the hassle! the added value of evaluating the usability of mobile systems in the field. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, Oslo, Norway, 14–18 October 2006; pp. 272–280. [Google Scholar]
- Duh, H.B.L.; Tan, G.C.; Chen, V.H.h. Usability evaluation for mobile device: A comparison of laboratory and field tests. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, Amsterdam, The Netherlands, 2–5 September 2006; pp. 181–186. [Google Scholar]
- Nayebi, F.; Desharnais, J.M.; Abran, A. The state of the art of mobile application usability evaluation. In Proceedings of the 2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Montreal, QC, Canada, 29 April–2 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–4. [Google Scholar]
- Kaya, A.; Ozturk, R.; Altin Gumussoy, C. Usability measurement of mobile applications with system usability scale (SUS). In Proceedings of the Industrial Engineering in the Big Data Era: Selected Papers from the Global Joint Conference on Industrial Engineering and Its Application Areas, GJCIE 2018, Nevsehir, Turkey, 21–22 June 2018; Springer: Berlin/Heidelberg, Germany, 2019; pp. 389–400. [Google Scholar]
- Suzuki, S.; Bellotti, V.; Yee, N.; John, B.E.; Nakao, Y.; Asahi, T.; Fukuzumi, S. Variation in importance of time-on-task with familiarity with mobile phone models. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 2551–2554. [Google Scholar]
- Kugler, S.; Czwick, C.; Anderl, R. Development of a valuation method for IoT-platforms. In Proceedings of the Product Lifecycle Management in the Digital Twin Era: 16th IFIP WG 5.1 International Conference, PLM 2019, Moscow, Russia, 8–12 July 2019; Revised Selected Papers 16; Springer: Berlin/Heidelberg, Germany, 2019; pp. 293–301. [Google Scholar]
- Partala, T.; Saari, T. Understanding the most influential user experiences in successful and unsuccessful technology adoptions. Comput. Hum. Behav. 2015, 53, 381–395. [Google Scholar] [CrossRef]
- Sonderegger, A.; Schmutz, S.; Sauer, J. The influence of age in usability testing. Appl. Ergon. 2016, 52, 291–300. [Google Scholar] [CrossRef] [PubMed]
- Alturki, R.; Gay, V. Usability testing of fitness mobile application: Methodology and quantitative results. Comput. Sci. Inf. Technol. 2017, 7, 97–114. [Google Scholar]
- Ahmad, N.A.N.; Hussaini, M. A Usability Testing of a Higher Education Mobile Application Among Postgraduate and Undergraduate Students. Int. J. Interact. Mob. Technol. 2021, 15. [Google Scholar] [CrossRef]
- Cambridge Dictionary. Meaning of Questionnaire in English. 2023. Available online: https://dictionary.cambridge.org/dictionary/english-polish/questionnaire (accessed on 6 August 2023).
- Cambridge Dictionary. Meaning of Survey in English. 2023. Available online: https://dictionary.cambridge.org/dictionary/english-polish/survey (accessed on 6 August 2023).
- Nielsen Norman Group. Open-Ended vs. Closed-Ended Questions in User Research. 2016. Available online: https://www.nngroup.com/articles/open-ended-questions/ (accessed on 6 August 2023).
- Sauro, J.; Kindlund, E. A method to standardize usability metrics into a single score. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 401–409. [Google Scholar]
- Ponto, J. Understanding and evaluating survey research. J. Adv. Pract. Oncol. 2015, 6, 168. [Google Scholar]
- Preissle, J. Participant Observation. In The Corsini Encyclopedia of Psychology; University of Michigan: Ann Arbor, MI, USA, 2010; pp. 1–2. [Google Scholar]
- Ahmed, A.A.; Muhammad, R.A. Participant observation of a farmers-herders community in Anguwar Jaba Keffi Nigeria. Int. J. Sci. Res. Publ. 2021, 11, 84–87. [Google Scholar] [CrossRef]
- Musante, K.; DeWalt, B.R. Participant Observation as a Data Collection Method; Rowman Altamira: Lanham, MD, USA, 2010. [Google Scholar]
- Dorazio, P.; Stovall, J. Research in context: Ethnographic usability. J. Tech. Writ. Commun. 1997, 27, 57–67. [Google Scholar] [CrossRef]
- Kawulich, B.B. Participant observation as a data collection method. Forum Qual. Sozialforschung Forum Qual. Soc. Res. 2005, 6, 43. [Google Scholar] [CrossRef]
- Jorgensen, D.L. Participant Observation: A Methodology for Human Studies; Sage: London, UK, 1989; Volume 15. [Google Scholar]
- Tomlin, W.C. UX and Usability Testing Data. In UX Optimization: Combining Behavioral UX and Usability Testing Data to Optimize Websites; Apress: New York, NY, USA, 2018; pp. 97–127. [Google Scholar]
- Mohamad, U.H.; Abdul Hakim, I.N.; Ali-Akbari, M. Usability of a gamified antibiotic resistance awareness mobile application: A qualitative evaluation. IET Netw. 2022. [Google Scholar] [CrossRef]
- Khayyatkhoshnevis, P.; Tillberg, S.; Latimer, E.; Aubry, T.; Fisher, A.; Mago, V. Comparison of Moderated and Unmoderated Remote Usability Sessions for Web-Based Simulation Software: A Randomized Controlled Trial. In Proceedings of the International Conference on Human-Computer Interaction, Virtual, 26 June–1 July 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 232–251. [Google Scholar]
- Kamińska, D.; Zwoliński, G.; Laska-Leśniewicz, A. Usability Testing of Virtual Reality Applications—The Pilot Study. Sensors 2022, 22, 1342. [Google Scholar] [CrossRef] [PubMed]
- Fisher, E.A.; Wright, V.H. Improving online course design through usability testing. J. Online Learn. Teach. 2010, 6, 228–245. [Google Scholar]
- Svanæs, D.; Alsos, O.A.; Dahl, Y. Usability testing of mobile ICT for clinical settings: Methodological and practical challenges. Int. J. Med. Inform. 2010, 79, e24–e34. [Google Scholar] [CrossRef] [PubMed]
- Güss, C.D. What is going through your mind? Thinking aloud as a method in cross-cultural psychology. Front. Psychol. 2018, 9, 1292. [Google Scholar] [CrossRef] [PubMed]
- Trickett, S.; Trafton, J.G. A primer on verbal protocol analysis. In The PSI Handbook of Virtual Environments for Training and Education; Bloomsbury Academic: New York, NY, USA, 2009; Volume 1, pp. 332–346. [Google Scholar]
- Ericsson, K.A.; Simon, H.A. Protocol Analysis (Revised Edition); Overview of Methodology of Protocol Analysis; MIT Press: Cambridge, MA, USA, 1993. [Google Scholar]
- Bernardini, S. Think-aloud protocols in translation research: Achievements, limits, future prospects. Target. Int. J. Transl. Stud. 2001, 13, 241–263. [Google Scholar] [CrossRef]
- Jensen, J.J. Evaluating in a healthcare setting: A comparison between concurrent and retrospective verbalisation. In Proceedings of the International Conference on Human-Computer Interaction, Beijing, China, 22–27 July 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 508–516. [Google Scholar]
- Fan, M.; Lin, J.; Chung, C.; Truong, K.N. Concurrent think-aloud verbalizations and usability problems. ACM Trans. Comput.-Hum. Interact. (TOCHI) 2019, 26, 1–35. [Google Scholar] [CrossRef]
- Hertzum, M.; Borlund, P.; Kristoffersen, K.B. What do thinking-aloud participants say? A comparison of moderated and unmoderated usability sessions. Int. J. Hum.-Comput. Interact. 2015, 31, 557–570. [Google Scholar] [CrossRef]
- Hertzum, M.; Holmegaard, K.D. Thinking aloud in the presence of interruptions and time constraints. Int. J. Hum.-Comput. Interact. 2013, 29, 351–364. [Google Scholar] [CrossRef]
- Siegel, D.; Dray, S. A Usability Test Is Not an Interview. ACM Interact. 2016, 23, 82–84. [Google Scholar]
- Boren, T.; Ramey, J. Thinking aloud: Reconciling theory and practice. IEEE Trans. Prof. Commun. 2000, 43, 261–278. [Google Scholar] [CrossRef]
- Hertzum, M.; Hansen, K.D.; Andersen, H.H. Scrutinising usability evaluation: Does thinking aloud affect behaviour and mental workload? Behav. Inf. Technol. 2009, 28, 165–181. [Google Scholar] [CrossRef]
- Nasruddin, Z.A.; Markom, A.; Abdul Aziz, M. Evaluating construction defect mobile app using think aloud. In Proceedings of the User Science and Engineering: 5th International Conference, i-USEr 2018, Puchong, Malaysia, 28–30 August 2018; Proceedings 5; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11. [Google Scholar]
- Conway, J.M.; Peneno, G.M. Comparing structured interview question types: Construct validity and applicant reactions. J. Bus. Psychol. 1999, 13, 485–506. [Google Scholar] [CrossRef]
- Chauhan, R.S. Unstructured interviews: Are they really all that bad? Hum. Resour. Dev. Int. 2022, 25, 474–487. [Google Scholar] [CrossRef]
- Galletta, A. Mastering the Semi-Structured Interview and Beyond: From Research Design to Analysis and Publication; NYU Press: New York, NY, USA, 2013; Volume 18. [Google Scholar]
- Kallio, H.; Pietilä, A.M.; Johnson, M.; Kangasniemi, M. Systematic methodological review: Developing a framework for a qualitative semi-structured interview guide. J. Adv. Nurs. 2016, 72, 2954–2965. [Google Scholar] [CrossRef]
- Wilson, J.L.; Hareendran, A.; Hendry, A.; Potter, J.; Bone, I.; Muir, K.W. Reliability of the modified Rankin Scale across multiple raters: Benefits of a structured interview. Stroke 2005, 36, 777–781. [Google Scholar] [CrossRef]
- Ormeño, Y.I.; Panach, J.I.; Pastor, O. An Empirical Experiment of a Usability Requirements Elicitation Method to Design GUIs based on Interviews. Inf. Softw. Technol. 2023, 164, 107324. [Google Scholar] [CrossRef]
- Lindgaard, G.; Dudek, C. User satisfaction, aesthetics and usability: Beyond reductionism. In Usability: Gaining a Competitive Edge; Springer: Berlin/Heidelberg, Germany, 2002; pp. 231–246. [Google Scholar]
- Liang, L.; Tang, Y.; Tang, N. Determinants of groupware usability for community care collaboration. In Proceedings of the Frontiers of WWW Research and Development-APWeb 2006: 8th Asia-Pacific Web Conference, Harbin, China, 16–18 January 2006; Proceedings 8; Springer: Berlin/Heidelberg, Germany, 2006; pp. 511–520. [Google Scholar]
- Walji, M.F.; Kalenderian, E.; Piotrowski, M.; Tran, D.; Kookal, K.K.; Tokede, O.; White, J.M.; Vaderhobli, R.; Ramoni, R.; Stark, P.C.; et al. Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR. Int. J. Med. Inform. 2014, 83, 361–367. [Google Scholar] [CrossRef] [PubMed]
- Jiang, T.; Luo, G.; Wang, Z.; Yu, W. Research into influencing factors in user experiences of university mobile libraries based on mobile learning mode. Libr. Hi Tech 2022. ahead-of-print. [Google Scholar] [CrossRef]
- Stanton, N.A.; Hedge, A.; Brookhuis, K.; Salas, E.; Hendrick, H.W. Handbook of Human Factors and Ergonomics Methods; CRC Press: Boca Raton, FL, USA, 2004. [Google Scholar]
- Fontana, A.; Frey, J.H. The interview. In The Sage Handbook of Qualitative Research; Sage: London, UK, 2005; Volume 3, pp. 695–727. [Google Scholar]
- Britannica Dictionary. Britannica Dictionary Definition of PROCESS. 2023. Available online: https://www.britannica.com/dictionary/process (accessed on 1 August 2023).
- Budiu, R. Usability Testing for Mobile Is Easy. 2014. Available online: https://www.nngroup.com/articles/mobile-usability-testing/ (accessed on 9 January 2024).
- Salkind, N.J. Encyclopedia of Research Design; Sage: London, UK, 2010; Volume 1. [Google Scholar]
- Caldiera, V.R.B.G.; Rombach, H.D. The goal question metric approach. In Encyclopedia of Software Engineering; University of Michigan: Ann Arbor, MI, USA, 1994; pp. 528–532. [Google Scholar]
- Nick, M.; Althoff, K.D.; Tautz, C. Facilitating the practical evaluation of organizational memories using the goal-question-metric technique. In Proceedings of the Twelfth Workshop on Knowledge Acquisition, Modeling and Management, Dagstuhl Castle, Germany, 26–29 May 1999. [Google Scholar]
- Kaplan, A.; Maehr, M.L. The contributions and prospects of goal orientation theory. Educ. Psychol. Rev. 2007, 19, 141–184. [Google Scholar] [CrossRef]
- Ramli, R.Z.; Wan Husin, W.Z.; Elaklouk, A.M.; Sahari@ Ashaari, N. Augmented reality: A systematic review between usability and learning experience. Interact. Learn. Environ. 2023, 1–17. [Google Scholar] [CrossRef]
- Akmal Muhamat, N.; Hasan, R.; Saddki, N.; Mohd Arshad, M.R.; Ahmad, M. Development and usability testing of mobile application on diet and oral health. PLoS ONE 2021, 16, e0257035. [Google Scholar] [CrossRef]
- Zaini, H.; Ishak, N.H.; Johari, N.F.M.; Rashid, N.A.M.; Hamzah, H. Evaluation of a Child Immunization Schedule Application using the Software Usability Measurement Inventory (SUMI) Model. In Proceedings of the 2021 IEEE 11th International Conference on System Engineering and Technology (ICSET), Shah Alam, Malaysia, 6 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 281–285. [Google Scholar]
- Burghardt, D.; Wirth, K. Comparison of evaluation methods for field-based usability studies of mobile map applications. In Proceedings of the International Cartographic Conference, Paris, France, 3–8 July 2011. [Google Scholar]
- Taniar, D. Mobile Computing: Concepts, Methodologies, Tools, and Applications: Concepts, Methodologies, Tools, and Applications; IGI Global: Hershey, PA, USA, 2008; Volume 1. [Google Scholar]
- López-Gil, J.M.; Urretavizcaya, M.; Losada, B.; Fernández-Castro, I. Integrating field studies in agile development to evaluate usability on context dependant mobile applications. In Proceedings of the XV International Conference on Human Computer Interaction, Puerto de la Cruz, Spain, 10–12 September 2014; pp. 1–8. [Google Scholar]
- Von Zezschwitz, E.; Dunphy, P.; De Luca, A. Patterns in the wild: A field study of the usability of pattern and pin-based authentication on mobile devices. In Proceedings of the 15th international Conference on Human-Computer Interaction with Mobile Devices and Services, Munich, Germany, 27–30 September 2013; pp. 261–270. [Google Scholar]
- Knip, F.; Bikar, C.; Pfister, B.; Opitz, B.; Sztyler, T.; Jess, M.; Scherp, A. A field study on the usability of a nearby search app for finding and exploring places and events. In Proceedings of the 13th International Conference on Mobile and Ubiquitous Multimedia, Melbourne, VIC, Australia, 25–28 November 2014; pp. 123–132. [Google Scholar]
- Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
- Davis, W.S.; Yen, D.C. The Information System Consultant’s Handbook: Systems Analysis and Design; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
- Fife-Schaw, C. Questionnaire design. In Research Methods in Psychology; Sage: London, UK, 1995; pp. 174–193. [Google Scholar]
- Maramba, I.; Chatterjee, A.; Newman, C. Methods of usability testing in the development of eHealth applications: A scoping review. Int. J. Med. Inform. 2019, 126, 95–104. [Google Scholar] [CrossRef]
- Lim, K.C.; Selamat, A.; Alias, R.A.; Krejcar, O.; Fujita, H. Usability measures in mobile-based augmented reality learning applications: A systematic review. Appl. Sci. 2019, 9, 2718. [Google Scholar] [CrossRef]
- Panchea, A.M.; Todam Nguepnang, N.; Kairy, D.; Ferland, F. Usability Evaluation of the SmartWheeler through Qualitative and Quantitative Studies. Sensors 2022, 22, 5627. [Google Scholar] [CrossRef]
- Romero-Ternero, M.; García-Robles, R.; Cagigas-Muñiz, D.; Rivera-Romero, O.; Romero-Ternero, M. Participant Observation to Apply an Empirical Method of Codesign with Children. Adv. Hum.-Comput. Interact. 2022, 2022, 1–5. [Google Scholar] [CrossRef]
- Álvarez Robles, T.d.J.; Sánchez Orea, A.; Álvarez Rodríguez, F.J. UbicaME, mobile geolocation system for blind people: User experience (UX) evaluation. Univers. Access Inf. Soc. 2023, 22, 1163–1173. [Google Scholar] [CrossRef]
- McDonald, S.; Edwards, H.M.; Zhao, T. Exploring think-alouds in usability testing: An international survey. IEEE Trans. Prof. Commun. 2012, 55, 2–19. [Google Scholar] [CrossRef]
- Van Waes, L. Thinking aloud as a method for testing the usability of websites: The influence of task variation on the evaluation of hypertext. IEEE Trans. Prof. Commun. 2000, 43, 279–291. [Google Scholar] [CrossRef]
- Nielsen, J. Usability Engineering; Morgan Kaufmann: Burlington, MA, USA, 1994. [Google Scholar]
- Borys, M.; Milosz, M. Mobile application usability testing in quasi-real conditions. In Proceedings of the 2015 8th International Conference on Human System Interaction (HSI), Warsaw, Poland, 25–27 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 381–387. [Google Scholar]
- Garmer, K.; Ylvén, J.; Karlsson, I.M. User participation in requirements elicitation comparing focus group interviews and usability tests for eliciting usability requirements for medical equipment: A case study. Int. J. Ind. Ergon. 2004, 33, 85–98. [Google Scholar] [CrossRef]
- Roulston, K. Considering quality in qualitative interviewing. Qual. Res. 2010, 10, 199–228. [Google Scholar] [CrossRef]
- Hussain, A.; Mkpojiogu, E.O.; Ishak, N.; Mokhtar, N.; Ani, Z.C. An Interview Report on Users’ Perception about the Usability Performance of a Mobile E-Government Application. Int. J. Interact. Mob. Technol. 2019, 13, 169–178. [Google Scholar] [CrossRef]
- Sarkar, U.; Gourley, G.I.; Lyles, C.R.; Tieu, L.; Clarity, C.; Newmark, L.; Singh, K.; Bates, D.W. Usability of commercially available mobile applications for diverse patients. J. Gen. Intern. Med. 2016, 31, 1417–1426. [Google Scholar] [CrossRef]
| Attribute/Type | Observed | Perceived |
|---|---|---|
| Effectiveness | • | • |
| Efficiency | • | • |
| Satisfaction | Not applicable | • |
| Attribute | Definition |
|---|---|
| Effectiveness | the ability of a user to complete a task in a given context [39] |
| Efficiency | the ability of a user to complete a task with speed and accuracy [40] |
| Satisfaction | Not applicable |
| Attribute | Definition |
|---|---|
| Effectiveness | a user’s perceived level of workload in a given context [41] |
| Efficiency | a user’s perceived level of application performance (in terms of time) [42] |
| Satisfaction | a user’s perceived level of comfort and pleasure [43] |
| Code | Metric | Unit and Quantity |
|---|---|---|
| EFFE1 | rate of successful task completion | integer/amount |
| EFFE2 | total number of steps required to complete a task | integer/amount |
| EFFE3 | total number of taps related to app usage | integer/amount |
| EFFE4 | total number of taps unrelated to app usage | integer/amount |
| EFFE5 | total number of times that the back button was used | integer/amount |
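As a minimal sketch of how these metrics can be computed from per-task session records: the record format below is an assumption for the example, and EFFE1 is reported as the count of completed tasks out of the total, matching the integer unit given above.

```python
# A minimal sketch of computing the observed-effectiveness metrics above from
# per-task session records. The record format is an illustrative assumption.

tasks = [
    {"completed": True,  "steps": 6, "app_taps": 9, "other_taps": 1, "back_uses": 0},
    {"completed": False, "steps": 4, "app_taps": 7, "other_taps": 3, "back_uses": 2},
    {"completed": True,  "steps": 5, "app_taps": 8, "other_taps": 0, "back_uses": 1},
]

effe1 = sum(t["completed"] for t in tasks)          # successful task completions
effe2 = sum(t["steps"] for t in tasks)              # total steps across tasks
effe3 = sum(t["app_taps"] for t in tasks)           # taps related to app usage
effe4 = sum(t["other_taps"] for t in tasks)         # taps unrelated to app usage
effe5 = sum(t["back_uses"] for t in tasks)          # uses of the back button

print(f"EFFE1={effe1}/{len(tasks)} tasks, EFFE2={effe2}, "
      f"EFFE3={effe3}, EFFE4={effe4}, EFFE5={effe5}")
```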
| Code | Metric | Scale |
|---|---|---|
| EFFI2 | duration of the application starting * | 7-point Likert scale |
| EFFI3 | duration of the application closing * | 7-point Likert scale |
| EFFI4 | duration of content loading * | 7-point Likert scale |
| EFFI5 | duration of the application response to the performed actions * | 7-point Likert scale |
| EFFI6 | application performance continuity | 7-point Likert scale |
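As an illustration, the sketch below aggregates the 7-point Likert items above into a single per-user perceived-efficiency score using a simple mean; both the example responses and the choice of the mean as the aggregation are assumptions, not the paper's prescribed scoring rule.

```python
# A minimal sketch of aggregating the 7-point Likert items (EFFI2-EFFI6)
# above into a per-user perceived-efficiency score. The responses and the
# simple-mean aggregation are illustrative assumptions.

items = ["EFFI2", "EFFI3", "EFFI4", "EFFI5", "EFFI6"]
responses = {"EFFI2": 6, "EFFI3": 7, "EFFI4": 5, "EFFI5": 6, "EFFI6": 4}

assert all(1 <= responses[i] <= 7 for i in items), "ratings must lie on the 7-point scale"
perceived_efficiency = sum(responses[i] for i in items) / len(items)
print(f"perceived efficiency = {perceived_efficiency:.2f} / 7")
```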