Review

Methodological Quality of User-Centered Usability Evaluation of Digital Applications to Promote Citizens’ Engagement and Participation in Public Governance: A Systematic Literature Review

by
Rute Bastardo
1,
João Pavão
2 and
Nelson Pacheco Rocha
3,*
1
UNIDCOM, Science and Technology School, University of Trás-os-Montes and Alto Douro, Quinta de Prado, 5001-801 Vila Real, Portugal
2
INESC-TEC, Science and Technology School, University of Trás-os-Montes and Alto Douro, Quinta de Prado, 5001-801 Vila Real, Portugal
3
IEETA-Institute of Electronics and Informatics Engineering of Aveiro, Department of Medical Sciences, University of Aveiro, 3810-193 Aveiro, Portugal
*
Author to whom correspondence should be addressed.
Digital 2024, 4(3), 740-761; https://doi.org/10.3390/digital4030038
Submission received: 27 June 2024 / Revised: 21 August 2024 / Accepted: 2 September 2024 / Published: 5 September 2024

Abstract
This systematic literature review aimed to assess the methodological quality of user-centered usability evaluation of digital applications to promote citizens’ engagement and participation in public governance by (i) systematizing their purposes; (ii) analyzing the evaluation procedures, methods, and instruments that were used; (iii) determining their conformance with recommended usability evaluation good practices; and (iv) identifying the implications of the reported results for future developments. An electronic search was conducted on the Web of Science, Scopus, and IEEE Xplore databases, and after a screening procedure considering predefined eligibility criteria, 34 studies were reviewed. These studies performed user-centered usability evaluation of digital applications related to (i) participatory reporting of urban issues, (ii) environmental sustainability, (iii) civic participation, (iv) urban planning, (v) promotion of democratic values, (vi) electronic voting, and (vii) chatbots. In terms of the methodological quality of the included studies, the results suggest high heterogeneity in user-centered usability evaluation practices. Therefore, recommendations are needed to support user-centered usability evaluations of digital applications to promote citizens’ engagement and participation in public governance and to improve the planning and conduct of future research.

1. Introduction

As a consequence of the profound impact of information technologies (IT) on societal organization during the last decades, a growing body of knowledge and practice has evidenced the innovative potential of the digital transformation of public administration, not only in terms of internal procedural management but also in terms of external service provision, including its relationship with citizens [1,2,3,4,5,6]. Moreover, this digital transformation has become a key objective of political agendas and governmental strategies [3,7,8], a priority corroborated by the United Nations, which envisages the use of digital tools to support policy making and public service delivery for its sustainable development goals [3,8,9].
As in other areas of research with intense dynamism, different concepts have emerged and evolved over time to characterize digitally enhanced public services [8], such as digital government or e-government [10]. Digital government has been broadly defined as the process of implementing IT-enabled government innovations by transforming public organizational structures and service delivery [3,11]. This definition emphasizes the use of electronic means, particularly the internet, to deliver government information and processes to governmental and non-governmental entities, businesses, and citizens [8,10].
According to different studies (e.g., [1,12,13,14,15,16,17]), various maturity stages of digital government may coexist, reflecting different degrees of technical sophistication and interaction with citizens [18], including catalog (i.e., online existence of digital services), transaction (i.e., electronic transactions between the government and citizens), vertical integration (i.e., existence of connections between the local systems and higher-level systems), and horizontal integration (i.e., systems’ integration across different functions allowing citizens to access different public services) [12,18]. Considering these maturity stages, citizens’ engagement assumes a normative perspective; that is, since the adoption of IT presents several advantages, it is desirable that citizens be actively engaged [8]. However, when the focus is citizens’ political participation, their engagement is required not only for public service delivery but also for active participation in public governance activities [8,19,20], namely in terms of decision making, policy formulation, collaboration, and overall management of governmental and societal affairs. In this respect, digital governance or e-governance might be considered the application of electronic means, including innovative technologies such as artificial intelligence [21], to support both internal government operations and interactions between governmental and non-governmental entities, businesses, and citizens to improve information and service delivery, encourage citizens’ participation in decision-making processes [22], and contribute to accountability, transparency, and democratic values [8,19,23].
Citizens’ engagement is determined by multiple socio-organizational circumstances, such as citizens’ awareness and motivation to participate or mitigation of digital divide challenges (e.g., IT literacy, availability of accessible communication infrastructures or adequacy of the user-interfaces) [24,25,26,27]. Moreover, previous experiences and self-efficacy can influence the perception of citizens’ satisfaction and expectations towards using electronic public services and their engagement [24]. This means that user experience (i.e., users’ states resulting from their characteristics and prior experience as well as the context of use of a specific product or service [28]) and the related usability concept (i.e., the ability of a product or a service to help the user to achieve a specific goal in a given situation while enjoying its use [29,30]) are fundamental features of people-centered technological applications [31]. User experience and usability have been considered fundamental dimensions of digital government and governance quality models (e.g., [21,32,33,34,35,36,37,38,39]) and usability assessments of digitally enhanced public services have been performed worldwide, particularly in terms of institutional websites, portals, or online public services (e.g., [40,41,42,43,44,45,46,47]).
Considering secondary research studies related to the importance of the user experience and usability of digital government applications, it is possible to identify several reviews: (i) Desmal et al. [38] explored the impact of usability quality attributes such as the efficiency, satisfaction, memorability, error, and compatibility of mobile government services; (ii) Aldrees and Gračanin [18] identified factors (e.g., perceived usefulness or perceived ease of use) affecting user experience of digital government applications and provided recommendations to support the design and implementation of future applications; (iii) Desmal et al. [48] identified quality attributes (i.e., usability, interaction, consistency, information, accessibility, and privacy and security) that impact the users’ satisfaction with mobile digital government portals; (iv) Menezes et al. [49] systematized models, dimensions, instruments, and tools to evaluate public services from the perspective of users and identified the main dimensions regarding service evaluation (i.e., quality, success, and acceptance of information systems and user satisfaction and user experience); (v) Lyzara et al. [50] identified adequate methods to assess usability of digital government applications; (vi) Monzón et al. [51] identified models for measuring the level of balance between usability and safety; (vii) Alshamsi et al. [52] performed a mapping review to establish the trade-off between usability and security; (viii) Yerlikaya and Durdu [53] reviewed the usability research conducted on university websites over a decade (2006–2016) to identify the most frequently used usability evaluation methods; (ix) Cisneros et al. [54] established how accessibility evaluations of digital government web applications are performed; and (x) Zhang et al. [55] systematized how eye-tracking technology has been used in the usability evaluation of digital government applications.
However, the authors of this systematic review were not able to identify systematic literature reviews focused on the methodological quality of the user-centered usability evaluation of digital applications to promote citizens’ engagement and participation in public governance. To fill this research gap, this systematic literature review aimed to assess the methodological quality of the user-centered usability evaluation of these digital applications by (i) systematizing their purposes; (ii) analyzing the evaluation procedures, methods, and instruments that were used; (iii) determining their conformance with recommended usability evaluation good practices; and (iv) identifying the implications of the reported results for future developments. Therefore, this systematic review might contribute to (i) increasing awareness of the importance of the user-centered usability evaluation of digital applications to promote citizens’ engagement and participation in public governance, (ii) identifying good practices and methodological issues, and (iii) providing evidence to support the development of recommendations to improve the planning, conduct, and reporting of future user-centered usability evaluation studies.

2. Materials and Methods

This study was performed following guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [56].

2.1. Research Question

The systematic review aimed to answer the following research question: What is the methodological quality of the studies performing user-centered usability evaluation of digital applications to promote citizens’ engagement and participation in public governance? This research question was framed by the Population, Intervention, Comparison, Outcomes, and Context (PICOC) framework [57] (Table 1) and subdivided into the following research sub-questions: (i) What are the specific purposes of the digital applications described by the included studies? (ii) What usability evaluation procedures, methods, and instruments are being used? (iii) What is the level of conformance of the procedures, methods, and instruments being used with recommended usability evaluation good practices? And (iv) what are the implications of the usability assessment results on future development of digital applications to promote citizens’ engagement and participation in public governance?

2.2. Search Strategies

The following databases were considered to retrieve the articles to be included in the systematic review: (i) Scopus, (ii) Web of Science, and (iii) IEEE Xplore.
The search queries were based on the conjunction of the following expressions: (i) Usability OR UX OR “User Experience” OR “User-centered” OR evaluat* OR assess* OR measur* OR test* and (ii) Government OR governance OR democracy OR “public administration” OR crowdsourcing OR crowdsensing OR “citizens’ participation” OR “Citizens’ reporting” OR e-collaboration OR e-services OR “smart services” OR “intelligent services” OR “smart city” OR “intelligent city” OR “digital city” OR “sustainable city”. No limits were considered in terms of the studies’ publication date.
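For illustration, the conjunction of the two expression groups can be rendered as a single database query string. The sketch below is only illustrative: the TITLE-ABS-KEY field qualifier follows Scopus syntax, and the equivalent syntax differs for Web of Science and IEEE Xplore.

```python
# Illustrative construction of a Scopus-style query from the two
# expression groups used in this review's search strategy.
usability_terms = [
    "Usability", "UX", "\"User Experience\"", "\"User-centered\"",
    "evaluat*", "assess*", "measur*", "test*",
]
governance_terms = [
    "Government", "governance", "democracy", "\"public administration\"",
    "crowdsourcing", "crowdsensing", "\"citizens' participation\"",
    "\"citizens' reporting\"", "e-collaboration", "e-services",
    "\"smart services\"", "\"intelligent services\"", "\"smart city\"",
    "\"intelligent city\"", "\"digital city\"", "\"sustainable city\"",
]

def build_query(group_a, group_b, field="TITLE-ABS-KEY"):
    """Conjunction of two OR-groups, wrapped in a search-field qualifier."""
    a = " OR ".join(group_a)
    b = " OR ".join(group_b)
    return f"{field}(({a}) AND ({b}))"

query = build_query(usability_terms, governance_terms)
print(query)
```

Because no limits were set on publication dates, no date filter is appended to the query.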

2.3. Inclusion and Exclusion Criteria

The review included primary research studies published in English and focused on user-centered usability evaluation of applications to promote citizens’ engagement and participation in public governance. In turn, the following exclusion criteria were considered: (i) references without abstracts or authors’ names; (ii) articles not written in English; (iii) secondary studies such as literature reviews or surveys; (iv) studies evaluating the usability of applications to support interactions between the citizens and authorities but without considering their engagement and participation in public governance, such as institutional websites, portals, or online public services (e.g., tax payments or documents requests and submissions); or (v) studies evaluating the usability of applications not related to smart government (e.g., healthcare applications).

2.4. Screening Procedures

All retrieved references were imported to an Excel spreadsheet and checked for duplicates. Then, the titles and abstracts of all references were screened according to the predefined eligibility criteria. Finally, full texts of potentially relevant articles were retrieved and screened. In all these steps, the references were independently screened by two randomly chosen authors. If a consensus could not be reached between the two authors, the third author was consulted.
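The duplicate check described above can be sketched programmatically. The following is a hypothetical illustration (the authors used an Excel spreadsheet; the field names and matching rule here are assumptions): references sharing a DOI, or, when no DOI is available, a normalized title, are treated as duplicates.

```python
# Illustrative deduplication of retrieved references: match on DOI when
# available, otherwise on a whitespace/case-normalized title.
def normalize_title(title: str) -> str:
    return " ".join(title.lower().split())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records, including a title duplicate and a DOI duplicate.
records = [
    {"title": "Usability of a civic app", "doi": None},
    {"title": "  USABILITY of a Civic App ", "doi": None},
    {"title": "Another study", "doi": "10.1000/x2"},
    {"title": "Another study (reprint)", "doi": "10.1000/x2"},
]
print(len(deduplicate(records)))  # 2 unique records remain
```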

2.5. Synthesis and Reporting

Syntheses of the included studies were prepared to systematize (i) the number of studies published in conference proceedings and in scientific journals, (ii) the distribution by publication years, (iii) the total number of authors, (iv) the type of affiliated institutions of the authors, and (v) the countries where the experimental setups took place.
The different digital solutions considered by the included studies were classified according to their purposes. For that, each author identified a list of the terms and definitions used in the included articles to create a primary list of categories and refined it by further analyses. Then, the resulting categorizations were checked and discussed as a group, and the final list of categories was achieved by consensus. Additionally, an analysis was performed to identify the data security and privacy mechanisms being employed as well as the strategies to incentivize citizens’ participation.
Finally, the authors identified the participants’ characteristics and the testing environments and analyzed the usability evaluation procedures, methods, and instruments used in each study to determine test and inquiry methods and respective techniques such as observation, think aloud, scales, questionnaires, or interviews. These results were the basis of the methodological quality assessment of the studies included in this systematic review.

2.6. Methodological Quality

The methodological quality of the included studies was assessed using the Critical Assessment of Usability Studies Scale (CAUSS) [58], a scale developed to assess the methodological quality of studies evaluating the usability of electronic products and services. The CAUSS has 15 items, each scored with one or zero points [58]. For each of the included studies, the CAUSS items were independently assessed by the three authors, and disagreements were resolved by consensus.
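The scoring procedure (15 binary items, three independent raters, disagreements resolved by consensus) can be illustrated as follows. The item ratings below are invented placeholders, not data from the review.

```python
# Illustrative CAUSS tally: 15 binary items per study, with per-item
# consensus across three raters; non-unanimous items are flagged (None).
NUM_ITEMS = 15

def consensus(ratings_by_rater):
    """Keep unanimous item scores; flag disagreements for discussion."""
    result = []
    for item_scores in zip(*ratings_by_rater):
        result.append(item_scores[0] if len(set(item_scores)) == 1 else None)
    return result

def total_score(item_scores):
    """Provisional total over the items already agreed upon."""
    return sum(s for s in item_scores if s is not None)

# Three hypothetical raters scoring one study on 15 items.
rater_a = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1]
rater_b = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1]
rater_c = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1]  # disagrees on item 5

items = consensus([rater_a, rater_b, rater_c])
print(items.count(None))   # items needing a consensus discussion: 1
print(total_score(items))  # provisional score over agreed items: 8
```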

3. Results

3.1. Study Selection

The PRISMA flowchart of the systematic review is presented in Figure 1. The literature search was performed in January 2024, and 6270 references were retrieved: 3083 from Scopus, 1612 from Web of Science, and 1575 from IEEE Xplore.
Then, 1950 references were excluded because they were duplicated (n = 1749), were front matters (n = 195), or did not have abstracts or authors’ names (n = 6).
During the title and abstract screening, 4284 references were excluded according to the inclusion and exclusion criteria. The exclusion criteria include the following: (i) references reporting secondary studies such as systematic reviews (e.g., [59,60]); (ii) references presenting arguments about the importance of applications to support the citizens (e.g., [61,62,63]) or the importance of user-centered design (e.g., [64,65]); (iii) references reporting on applications to support government and governance processes without the involvement of the citizens, such as, for instance, applications to improve the planning processes of public servants (e.g., [66,67,68,69,70]); (iv) references reporting on usability evaluation of applications to support interactions between the citizens and authorities but without considering their engagement and participation in public governance, such as institutional websites and portals or online public services (e.g., [71,72,73,74,75,76,77,78,79,80,81]); (v) references that, despite being focused on the development of applications to promote citizens’ engagement and participation in public governance, did not report on usability evaluations (e.g., [82,83,84]); (vi) references reporting on new procedures or methods to support user-centered evaluation of applications to optimize government and governance processes (e.g., [85,86,87,88]); and (vii) references reporting the development (including or not usability evaluations) of applications of other domains than digital government and governance, such as, for instance, healthcare applications (e.g., [89,90,91,92,93,94,95,96,97]).
During the full-text analysis, two references were excluded: one [98] because it did not report the results of usability evaluation and the other [99] because it reported a simulation study without the involvement of real users. Therefore, this study reviewed 34 articles [100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133].
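The selection flow reported above is internally consistent, as a quick arithmetic check of the PRISMA counts shows:

```python
# Sanity check of the PRISMA flow counts reported in Section 3.1.
retrieved = 3083 + 1612 + 1575           # Scopus + Web of Science + IEEE Xplore
assert retrieved == 6270

excluded_pre_screening = 1749 + 195 + 6  # duplicates + front matters + missing abstracts/authors
assert excluded_pre_screening == 1950

screened = retrieved - excluded_pre_screening  # titles/abstracts screened
full_text = screened - 4284                    # full texts assessed
included = full_text - 2                       # two exclusions at full-text stage
print(screened, full_text, included)  # 4320 36 34
```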

3.2. Demographics of the Included Studies

Concerning publication types, fifteen studies were published in conference proceedings [102,103,106,108,109,110,111,113,115,119,122,123,126,127,131], and nineteen studies were published in scientific journals [100,101,104,105,107,112,114,116,117,118,120,121,124,125,128,129,130,132,133].
The included studies were published between 2012 (i.e., one study [100]) and 2023 (i.e., seven studies [127,128,129,130,131,132,133]). According to Figure 2, there was an increment of publications during the last years, and three-quarters of studies (i.e., [107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133]) were published in the last five years.
A total of 153 researchers authored the included studies, but none of the researchers was involved in more than one study. According to the authors’ affiliations, most authors belonged to academia, either universities or research institutes, and only ten of them were affiliated with governmental (i.e., seven authors) or business (i.e., three authors) entities. Still, according to the authors’ affiliations, IT was the core domain of most of them.
Looking specifically at the locations where the experimental setups took place (Table 2), twenty-five countries were involved across five multi-national studies and twenty-nine national studies. European countries represented the biggest contribution, with 26 experiments. Moreover, 18 experiments took place in Asia (6 of them in Indonesia) and 4 in South America (all in Brazil).

3.3. Purpose of the Reported Applications

As presented in Table 3, seven different purposes were identified according to the characteristics of applications evaluated by the included studies, namely (i) participatory reporting of urban issues, which was the most representative; (ii) environmental sustainability; (iii) civic participation; (iv) urban planning; (v) promotion of democratic values; (vi) electronic voting; and (vii) chatbots.

3.3.1. Participatory Reporting of Urban Issues

Participatory reporting of urban issues aims to provide city authorities with a better understanding of problems faced by the citizens [134].
Three studies [108,114,124] did not focus on specific issues but instead were related to general-purpose participatory reporting applications: (i) Falcão et al. [108] presented the Crowd4City system to gather voluntarily generated information from citizens to be explored in different contexts, such as locations with high crime rates, places where traffic jams occur frequently, pavement defects, or poor lighting; (ii) Matsuda et al. [124] proposed ParmoSense, which might be configured for specific data gathering purposes and is able to collect information provided by the citizens either explicitly (e.g., photos, comments, or answers to questionnaires) or implicitly (i.e., information provided by the sensors embedded in the citizens’ mobile devices); and (iii) Aditya et al. [114] described a digital survey application whose potential was exemplified by three use cases related to land, disaster, and water infrastructure issues of Indonesian slums.
In terms of participatory reporting applications focused on specific issues, their aims were quite diverse and included (i) city incidents (i.e., problems that citizens have experienced in their activities) in three studies [103,104,106]; (ii) disaster relief (i.e., integration of the crowdsourcing paradigm to support disaster management) in one study [101]; (iii) public transport (i.e., a collaborative information repository regarding public transportation) in one study [102]; (iv) ecotourism assets (i.e., the use of citizens’ mobile devices to map ecotourism assets) in one study [105]; (v) social services (i.e., capacities, client access, and effectiveness and responsiveness of local social services) in one study [116]; (vi) sanitary information (i.e., providing feedback, reporting malfunctions, and creating awareness of sanitary conditions) in one study [117]; and (vii) air pollution detection in one study [125].

3.3.2. Environmental Sustainability

Six studies [107,113,122,127,132,133] were focused on environmental sustainability issues: (i) Koroleva et al. [107] presented a collective awareness platform to engage citizens, water professionals, and politicians on local environmental sustainability challenges; (ii) Spraz and Han [133] evaluated the Digital Government Collaborative Platform to facilitate collaborations between citizens and governmental authorities to address environmental issues in Sri Lanka; (iii) Wernbacher et al. [122] described an application to identify unused waste heat sources; (iv) Dioquino et al. [113] developed a web-based application to support the retrieving of reusable materials; (v) Ananta et al. [127] proposed a crowdsourcing mobile application to support community participation in waste management; and (vi) Manik et al. [132] proposed a crowd-based early warning system to mitigate harmful algal blooms.

3.3.3. Civic Participation

In terms of civic participation, one study [130] proposed a crowdsourcing platform to support citizen searches during missing persons cases, and four studies [115,119,126,131] proposed crowdsourcing applications to support donation campaigns; two of them [119,126] were for general donations, one [115] was specifically focused on food insecurity, and another one supported donation of goods [131].

3.3.4. Urban Planning

When planning and designing urban environments, the citizens living and working in those environments are the most affected. Therefore, it is important to have citizens involved in the planning and designing processes. In this respect, five studies [110,111,120,121,123] presented applications to support the collaboration of citizens in urban planning: (i) Knect et al. [110] described an application that allows planners to share a subset of the design space formed by parametric design variants with citizens; (ii) Thoneick et al. [121] presented a digital participation system composed of a presentation of public planning data and a decision-support tool; (iii) Takenouchi et al. [120] presented a system for creating disaster prevention maps with information provided by the citizens; and (iv) Nguyen et al. [111] and (v) Görgü et al. [123] were focused on the preparation of disaster risk plans using crowdsourced data.

3.3.5. Promotion of Democratic Values

Four studies [109,112,118,129] were focused on the promotion of democratic values: (i) Hasim et al. [109] redesigned the Rembugan Jateng platform currently available in Indonesia to submit proposals to the government; (ii) Zabaleta et al. [112] proposed a collaborative platform aimed at fostering citizens’ involvement in the public administration, namely allowing their collaboration with civil servants in the definition and improvement of new administrative procedures and services; (iii) Janoskova et al. [118] described a comprehensive platform for most of the services offered by the cities for their citizens, including the possibility of submitting suggestions or directly contacting relevant representatives; and (iv) Junqueira et al. [129] evaluated Brazilian governmental platforms aiming to increment the participation of citizens in the legislative, budgetary, supervisory, and representation activities of the Senate.

3.3.6. Electronic Voting

Fuglerud and Røssvoll [100] presented the evaluation of several electronic voting prototypes in Norway, involving technical aspects as well as expert evaluation and user testing in the field.

3.3.7. Chatbots

Cortés-Cediel et al. [128] developed a chatbot to support the exploration of citizen-generated content provided by digital participation tools. The chatbot uses argument mining methods to extract and visualize argumentative information underlying the citizens’ proposals and debates to guide the users’ navigation and to promote the discussion process.

3.4. Security and Privacy Mechanisms and Strategies to Incentivize Citizen Participation

The proposed applications used different types of data sources, including the Global Positioning System (GPS) [104,105,106,107,108,111,114,116,117,120,122,123,124,125,127,130,132], cameras [123,124], environmental sensors [124,125], and inertial sensors [124]. This means that the applications present potential privacy risks, namely in terms of personal data, and secure data communication mechanisms are required. However, only two articles [100,122] addressed concerns with data security and privacy mechanisms.
Incentive mechanisms might be used to encourage the engagement of the citizens. In this respect, five studies [102,107,122,124,127] proposed incentive mechanisms either in the form of entertainment (i.e., gamification) [102,107,127] or monetary incentives [122,124]. Moreover, to avoid cheating and to enable transparency of the reward mechanism, Wernbacher et al. [122] used blockchain as a method for securing the gamification results.

3.5. Usability Evaluation Procedures, Methods, and Instruments

Most of the included studies proposed new applications and evaluated their usability. However, six studies [100,109,110,114,125,129] evaluated already existing applications: (i) Fuglerud and Røssvoll [100] evaluated several electronic voting prototypes from the E-vote 2011 project of the Norwegian Ministry of Local Government and Regional Affairs; (ii) Hasim et al. [109] evaluated the Rembugan Jateng, a digital participation system available in the Central Java Province of Indonesia; (iii) Knect et al. [110] evaluated an existing design space exploration tool; (iv) Aditya et al. [114] evaluated the Open Data Kit (ODK), a digital survey application that was used for a range of community development projects worldwide; (v) Ong et al. [125] evaluated AirVisual, a mobile application from the United Nations Environment Programme (UNEP) to provide air quality monitoring; and (vi) Junqueira et al. [129] evaluated e-Cidadania and e-Democracia, two platforms from the Brazilian Senate aiming to support the participation of citizens in legislative, budgetary, supervisory, and representation activities.
Details of the design of the experimental setups of the included studies are presented in Table 4, namely the usability assessment methods, techniques, and instruments that were used, as well as the number and average age of the participants.
According to Table 5, test methods were used in 17 studies, and inquiry methods were used in 30 studies. In turn, 13 studies applied a multimethod approach (i.e., combining both the test and inquiry methods). Three different test method techniques were reported (i.e., task performance evaluation, think aloud, and critical incidents), and task performance evaluation was the most reported technique (n = 15). The most reported inquiry techniques were scales (n = 18), questionnaires (n = 13), and interviews (n = 10).
Table 6 presents the types of usability inquiry instruments that were identified. In terms of validated scales, the System Usability Scale (SUS) was used in 15 studies [101,104,110,111,113,117,118,120,122,123,126,127,130,131,132], while the User Experience Questionnaire (UEQ) was used in two studies [109,133], and the Smartphone Usability QuestionnaiRE (SURE) was used in one study [103]. Thirteen studies [106,107,108,111,112,114,116,119,122,123,124,128,133] used questionnaires developed by the respective authors without reporting their psychometric characteristics (e.g., validity and reliability). Regarding model-based instruments, the models used were the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM): the first was used by Ong et al. [125], and the second by Manik et al. [132].
Concerning the environment where usability evaluation was conducted, most of the studies performed the usability evaluation in research facilities. However, ten studies [101,102,112,114,116,124,125,126,132,133] were conducted in the participants’ environment.
The number of participants varied from 5 [115] to 416 [125]. Five studies [102,103,115,120,127] included 10 or fewer participants, while seven studies [112,116,121,124,125,132,133] included more than 100 participants. In terms of the age of the participants, 19 studies did not include any information related to the age of the participants. In turn, in three studies [100,125,130], the participants were teenagers, adults, and older adults; in five studies [105,109,118,132,133], the participants were teenagers and adults; in six studies [103,104,108,111,117,128], the participants were adults; in two studies [112,123], the participants were adults and older adults; and, finally, in one study [129], the participants were older adults.

3.6. Methodological Quality Assessment

The fifteen items of the CAUSS [58], the scale used to evaluate the methodological quality of the included studies, cover several dimensions of user-centered usability evaluation, namely (i) usability assessment instruments (items 1 and 2), (ii) procedures (items 3, 4, 5, 6, and 15), (iii) participants (items 7, 8, and 12), (iv) study evaluators (items 9 and 10), and (v) context and tasks (items 11, 13, and 14).
Analyzing Figure 3, it is possible to verify that only three items of the CAUSS were scored positively for more than 90% of included studies (i.e., items 3, 4, and 13). In contrast, items 5, 6, 7, 9, 10, 11, and 14 were scored positively for less than 50% of the included studies. The score of the remaining five items (i.e., items 1, 2, 8, 12, and 15) varied from 53% to 68%.
The items related to the study evaluators were the ones with the lowest scores since, in general, the researchers reported neither the expertise of the study evaluators nor their independence from the development of the applications. In terms of usability assessment instruments, some studies used ad hoc questionnaires instead of valid and reliable assessment instruments. Moreover, in terms of the context of use and tasks, although the tasks performed by the participants were representative of the functionalities of the applications being evaluated, the evaluations were generally not conducted in a real or close-to-real context of use, nor did they permit continuous and prolonged use over time. In turn, in terms of procedures, the studies failed to employ triangulation of methods for the assessment of usability and to duly consider the participants’ characteristics. Finally, analyzing the items related to the participants, it was found that the studies failed to include both potential users and experts.

3.7. Implications of the Reviewed Usability Evaluation Studies on Future Applications

Fourteen studies [103,106,107,108,112,113,114,117,119,120,122,123,126,127], most of them published in conference proceedings, assessed the usability of the proposed applications but did not report implications for future developments. Moreover, in five studies [109,111,115,118,131], most of which were published in conference proceedings, the usability evaluations were performed within development cycles based on user-centered design, and the results confirm the adequacy of this type of approach.
In turn, twelve studies [100,102,104,105,110,124,125,128,129,130,132,133] identified factors that affect the perceived usability and, consequently, the acceptance of the applications, as presented in Table 7.
Finally, three studies [101,116,121] reported additional empirical knowledge: (i) Thoneick [121] suggested strategies to support interdisciplinary approaches using traditional and innovative urban planning practices; (ii) Yang et al. [101] highlighted that interoperability issues might have a relevant impact on the perceived usability of the digital applications; and (iii) Liu et al. [116] emphasized the dichotomy between citizens and authorities (i.e., citizens are more enthusiastic about the possibilities of improving the access to the public services, while the authorities are more reluctant to innovative solutions due to maintenance costs).

4. Discussion

A total of 34 studies were included in this review. This relatively small number of included studies, when compared to the number of studies focused on digital government and governance [8], does not reflect the importance being given to the development of digital applications to promote citizens' participation in public affairs but rather the limited attention currently paid to user-centered usability evaluation within the development of such applications. As usability is an essential factor for citizens' adherence to and acceptance of digital applications [22,24,31,32,33,34,35,36,37,38,39], it was hypothesized that user-centered usability evaluation would deserve more interest from researchers focused on the specific topic of this systematic review.
A possible reason for the reduced number of studies focused on the user-centered usability evaluation of the digital applications considered for this systematic review is that a significant percentage of the reported applications are still in early development stages (e.g., requirements elicitation, general overview of the proposed architectures, or performance evaluations of the proposed applications or some of their components) [134] and, therefore, cannot yet be subject to real-world evaluations by end-users. However, considering the distribution of the included studies by publication years, it is possible to conclude that there is a growing trend of interest in the usability evaluation of digital applications to promote citizens' engagement in public affairs.
In terms of geographical distribution, Europe represented the biggest contribution, which might be a consequence of the importance of European scientific productivity in terms of the development of sustainable smart cities [136,137].
Concerning the specific purposes of the applications (i.e., the first research sub-question), the included studies were categorized into seven different purposes: participatory reporting of urban issues, environmental sustainability, civic participation, urban planning, promotion of democratic values, electronic voting, and chatbots. The last two categories include only one study each. In turn, participatory reporting of urban issues was the most relevant category with 35% of the studies, and the remainder were distributed between environmental sustainability (18% of the studies), civic participation (15% of the studies), urban planning (15% of the studies), and promotion of democratic values (12% of the studies). These results corroborate the results of other reviews since participatory reporting of urban issues, environmental sustainability, and urban planning are important purposes among the scientific literature on smart cities, while the promotion of democratic values is a fundamental issue of the modernization of public administration [2,8,134].
In general, the studies failed to present evidence about how data privacy, integrity, and confidentiality are guaranteed, as well as how citizens' engagement is incentivized, since only two studies [100,122] addressed concerns with data security and privacy mechanisms, and five studies [102,107,124,126,127] proposed incentive mechanisms (e.g., gamification). This might result from the fact that the studies were focused on the usability evaluation of the proposed digital applications. However, privacy and security mechanisms might impact usability [52,53], and incentive mechanisms are important for the acceptance and continuous use of digital governance [138].
Considering the second research sub-question (i.e., what usability evaluation procedures, methods, and instruments are being used?), it is possible to conclude that both test and inquiry methods are being applied and that there is a high heterogeneity in terms of procedures and instruments. Concerning the level of conformance of the procedures, methods, and instruments with recommended usability evaluation good practices (i.e., the third research sub-question), the results of the application of CAUSS (Figure 3) suggest the existence of both good and bad practices across all five dimensions of this scale (i.e., usability assessment instruments, procedures, participants, study evaluators, and context and tasks).
In terms of good practices, three CAUSS items were scored positively by more than 90% of the studies: (i) item 3 (i.e., coherence between the procedures used to assess usability); (ii) item 4 (i.e., adequacy of the assessment procedures to the solutions’ development state); and (iii) item 13 (i.e., representativeness of the tasks used for the usability evaluation).
In turn, five items were scored positively by more than 50% and less than 70% of the studies: (i) item 1 (i.e., use of valid measurement instruments of usability); (ii) item 2 (i.e., use of reliable measurement instruments of usability); (iii) item 8 (i.e., representativeness of the participants); (iv) item 12 (i.e., number of participants); and (v) item 15 (i.e., adequacy of the analyses that were performed and variables that were assessed).
In this review, almost 50% of the studies did not use reliable and validated measurement instruments of usability. Moreover, almost 40% of the studies developed ad hoc questionnaires. In turn, considering the studies that used validated scales and questionnaires, the System Usability Scale (SUS) was the most used, which is in line with other reviews on user-centered usability evaluation [134,139].
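Since SUS was the most frequently used validated instrument, it may help to recall its standard scoring procedure: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum of the ten contributions is multiplied by 2.5 to yield a score between 0 and 100. A minimal sketch of this well-known procedure (the function name and example responses are illustrative, not taken from any of the reviewed studies):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten 5-point
    Likert responses (1 = strongly disagree ... 5 = strongly agree).

    Odd-numbered items are positively worded (contribution: response - 1);
    even-numbered items are negatively worded (contribution: 5 - response).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


# Illustrative respondent who rates the system favorably
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Note that the resulting value is not a percentage; interpreting it requires comparison against published SUS benchmarks.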
Moreover, a considerable number of studies failed to report on the quality criteria pre-identified by seven CAUSS items: (i) item 5 (i.e., adequacy of the procedures to the participants' characteristics); (ii) item 6 (i.e., employment of triangulation methods for the assessment of usability); (iii) item 7 (i.e., usability assessment with both potential users and experts); (iv) item 9 (i.e., training of the investigator responsible for the usability assessment); (v) item 10 (i.e., independence of the investigator responsible for the usability assessment in relation to the development process); (vi) item 11 (i.e., usability assessment conducted in the real context, or a close-to-real context, in which the evaluated solution is intended to be used); and (vii) item 14 (i.e., usability assessment based on continuous and prolonged use of the evaluated solution).
Despite the heterogeneity of the procedures, methods, and instruments used in the user-centered usability evaluations, most studies failed to show their adequacy concerning the characteristics of the participants involved, particularly in terms of age, given that more than 50% of the studies did not indicate the age of the participants.
Additionally, less than 40% of the studies used both test and inquiry methods, which means that most of the included studies did not perform triangulation of the methods to assess usability. Moreover, less than 30% of the studies conducted usability evaluations with both users and experts, which is a recommended practice to identify potential usability problems [140] and is in line with the results of other reviews (e.g., [50]).
Considering the training and independence of the investigator responsible for the usability assessment, only one study reported that the responsible investigator was a trained researcher, and six studies reported that the evaluators were not involved in the development process. These results should be analyzed carefully since they might not reflect a bad practice when performing usability assessment but rather an omission when reporting the usability assessment experience. However, this information is of great relevance to clarify that the inexperience of the researchers and potential conflicts of interest did not impact the results of the usability evaluation [140].
Most of the included studies were conducted in the laboratory context. Consequently, the results of the usability evaluations did not reflect the use of the proposed applications in real environments (i.e., the applications were evaluated in their real context by less than one-third of the studies) or the continuous and prolonged use of the applications (i.e., only 20% of the studies evaluated the applications’ usability considering their prolonged and continued use).
Considering the fourth research sub-question (i.e., what are the implications of the usability assessment results on future development of digital applications to promote citizens' engagement and participation in public governance?), there are diverse factors that should be considered during application development to increase usability and acceptance: applying universal design principles to promote the inclusion of people with disabilities or other disadvantaged groups, such as people with low literacy [105]; investing in visual and aesthetic quality, which was also identified by Desmal et al. [48]; minimizing the effort required to achieve the intended results, in accordance with the results reported by Aldrees and Gračanin [18]; applying motivational features (e.g., gamification) to promote the continuous and sustainable use of the proposed applications; and duly considering human values (e.g., transparency, fairness, or trust [135]) when designing the applications. Moreover, low usability might reinforce participants' distrust in both the applications and authorities and might negatively impact collaborative tasks due to the cognitive load.
Finally, concerning the research question that informed the present study (i.e., what is the methodological quality of the studies performing user-centered usability evaluation of digital applications to promote citizens' engagement and participation in public governance?), based on the analysis of the included studies, it is possible to conclude that the methodological quality of user-centered usability evaluation should be increased to facilitate the reproducibility and comparability of results across studies. Therefore, the methodological quality could be improved considering diverse dimensions. The study evaluators should have usability evaluation expertise, and the reporting should clarify whether they are internal or external to the application's development. Moreover, the study evaluators should select valid and reliable assessment instruments. In terms of procedures, a rationale should be provided for the combination of methods and techniques. Moreover, considering the context of use and tasks, the study promoters should develop a participant script with a detailed description of the tasks, facilities, and material needed and identify and justify the choice of lab test or field test or both. Finally, in terms of participants, a clear definition of inclusion and exclusion criteria (e.g., age, gender, educational level, and academic background) and a rationale for the sample size, the sampling methods, and the recruitment should be provided.
The collected evidence of this systematic review might be used, together with other information sources, to sustain the development of recommendations to support user-centered usability evaluations of digital governance applications, including methodological guidelines, standardized study designs, and reporting checklists, to help researchers when designing their experiments.

5. Limitations

When analyzing the results of this review, some limitations inherent to its scope and methodology must be considered. Given the vastness of the digital government and governance field, it was challenging to define the search strategies, potentially resulting in exclusion of relevant studies. Moreover, the review may exhibit bias towards published research, potentially excluding relevant but unpublished studies or gray literature.
Considering the inclusion and exclusion criteria, this study only reviewed articles focused on the user-centered usability evaluation of digital applications to promote citizens' participation in public affairs. Therefore, the included studies do not reflect all current research related to the use of digital applications to maximize citizens' participation and engagement in public governance, and, consequently, it is not possible to systematize citizens' participatory models or their impact in terms of outcomes. These also constitute limitations of this review since they do not allow an understanding of all the implications of the quality of the applications' usability.
Despite these limitations, the results of this review identified significant differences in usability assessment procedures, methods, and instruments as well as important methodological flaws, which raise concerns about potential bias of the studies and make it difficult to establish comparisons across the studies and to infer general conclusions. The identification of these drawbacks might contribute to increasing the awareness of the importance of usability evaluation good practices and, consequently, to improving the quality of future studies focused on the user-centered usability evaluation of digital applications to promote citizen engagement and participation in public governance.

6. Conclusions

The specific purposes of the digital applications developed by the included studies were distributed across participatory reporting of urban issues, environmental sustainability, civic participation, urban planning, promotion of democratic values, electronic voting, and chatbots. However, a large percentage of the evaluated applications are still in an early development phase and, consequently, at this stage, do not significantly contribute to the development of citizen participatory models with impact at the societal level.
The review results suggest that there is high heterogeneity both in terms of the usability evaluation procedures, methods, and instruments being used and their conformity with recommended usability evaluation good practices. In terms of implications for future developments, most studies focused on evaluating the usability of their applications or on showing the viability of user-centered development approaches rather than on generalizing implications for future developments. Even so, the results of a minority of the included studies pointed out that the application of universal design principles, visual and aesthetic quality, the existence of motivational features, and the effort and performance expectancies contribute to better usability and might increase citizens' trust in the applications and authorities. Moreover, several human values (e.g., transparency, safety, universal usability, feedback, authenticity, fairness, representativeness, accountability, legitimacy, informed consent, autonomy, awareness, human welfare, attitude, and trust [135]) should be incorporated into the development of digital applications to promote citizen engagement and participation in public governance.
Considering the methodological quality of the studies performing user-centered usability evaluation of digital applications to promote citizens' engagement and participation in public governance (i.e., the research objective that informed this review), the results suggest that researchers failed to consider and report relevant methodological aspects. Therefore, recommendations to support user-centered usability evaluations of digital governance applications should be established and disseminated to improve the methodological quality of future studies. Conducting rigorous experiments on user-centered usability is likely to improve the comparability of usability results across studies, facilitate further research on the impact of usability on other outcomes, and provide efficient digital solutions to maximize the societal impact (e.g., wellbeing, sustainability, transparency, efficiency, accountability, or promotion of democratic values such as representativeness) of citizens' engagement and participation in public governance. In this respect, as the main conclusion of this review, it should be highlighted that there is a need to increase the research community's awareness of the existing knowledge in terms of good practices of user-centered usability evaluation.
The assessment of the impact of digital applications to support the engagement and participation of the citizens in public governance goes far beyond usability evaluation and requires multidisciplinary teams with expertise beyond IT (e.g., political or social sciences). Therefore, future reviews are required to systematize the frameworks, metrics, procedures, and methods being used to assess the societal impact of these digital applications as well as the methodological quality of the assessment being performed and both the positive and negative outcomes being measured.

Author Contributions

Conceptualization, N.P.R.; writing—original draft preparation, N.P.R.; writing—review and editing, N.P.R., R.B. and J.P.; investigation, N.P.R., R.B. and J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moon, M.J. The evolution of e-government among municipalities: Rhetoric or reality? Public Adm. Rev. 2002, 62, 424–433.
  2. Webster, C.; Leleux, C. Smart governance: Opportunities for technologically-mediated citizen co-production. Inf. Polity 2018, 23, 95–110.
  3. Barcevičius, E.; Cibaitė, G.; Codagnone, C.; Gineikytė, V.; Klimavičiūtė, L.; Liva, G.; Matulevič, L.; Misuraca, G.; Vanini, I. Exploring Digital Government Transformation in the EU; Publications Office of the European Union: Luxembourg, 2019.
  4. Cornips, L.; Voorberg, W.; van Meerkerk, I.; Kramer, R. Co-production as a strategy for enhanced equal representation in public service delivery: The case of Rotterdam. Cities 2023, 141, 104480.
  5. Gerrard, J.; Savage, G.C. Policy translations of citizen participation and new public governance: The case of school governing bodies. Crit. Policy Stud. 2023, 17, 484–501.
  6. Cao, H.; Kang, C.I. A citizen participation model for co-creation of public value in a smart city. J. Urban Aff. 2024, 46, 905–924.
  7. Gil-García, J.R.; Pardo, T.A. E-government success factors: Mapping practical tools to theoretical foundations. Gov. Inf. Q. 2005, 22, 187–216.
  8. Ravšelj, D.; Umek, L.; Todorovski, L.; Aristovnik, A. A review of digital era governance research in the first two decades: A bibliometric study. Future Internet 2022, 14, 126.
  9. Resolution Adopted by the General Assembly on 26 November 2018 (A/73/L.20 and A/73/L.20/Add.1); United Nations: New York, NY, USA, 2018.
  10. Janowski, T. Digital government evolution: From transformation to contextualization. Gov. Inf. Q. 2015, 32, 221–236.
  11. Charalabidis, Y.; Sarantis, D.; Askounis, D. Knowledge-driven project management in government transformation. In Handbook of Research on ICT-Enabled Transformational Government: A Global Perspective; IGI Global: Dauphin, PA, USA, 2009.
  12. Layne, K.; Lee, J. Developing fully functional E-government: A four stage model. Gov. Inf. Q. 2001, 18, 122–136.
  13. Reddick, C.G. A two-stage model of e-government growth: Theories and empirical evidence for US cities. Gov. Inf. Q. 2004, 21, 51–64.
  14. Wendler, R. The maturity of maturity model research: A systematic mapping study. Inf. Softw. Technol. 2012, 54, 1317–1339.
  15. Fath-Allah, A.; Cheikhi, L.; Al-Qutaish, R.E.; Idri, A. E-government maturity models: A comparative study. Int. J. Softw. Eng. Appl. 2014, 5, 71.
  16. Joshi, P.R.; Islam, S. E-government maturity model for sustainable E-government services from the perspective of developing countries. Sustainability 2018, 10, 1882.
  17. Hochstetter, J.; Diaz, J.; Dieguez, M.; Espinosa, R.; Arango-López, J.; Cares, C. Assessing transparency in eGovernment electronic processes. IEEE Access 2021, 10, 3074–3087.
  18. Aldrees, A.; Gračanin, D. UX in E-government Services for Citizens: A Systematic Literature Review. J. User Exp. 2023, 18, 133–169.
  19. Bindu, N.; Sankar, C.P.; Kumar, K.S. From conventional governance to e-democracy: Tracing the evolution of e-governance research trends using network analysis tools. Gov. Inf. Q. 2019, 36, 385–399.
  20. Wittels, A.S. Participatory governance and responsiveness: Do motivational interventions increase engagement with citizen input? Political Res. Q. 2023, 76, 741–756.
  21. Vrabie, C. E-Government 3.0: An AI model to use for enhanced local democracies. Sustainability 2023, 15, 9572.
  22. Al-Nidawi, W.J.A.; Al-Wassiti, S.K.J.; Maan, M.A.; Othman, M. A review in E-government service quality measurement. Indones. J. Electr. Eng. Comput. Sci. 2018, 10, 1257–1265.
  23. Adam, I.; Fazekas, M. Are emerging technologies helping win the fight against corruption? A review of the state of evidence. Inf. Econ. Policy 2021, 57, 100950.
  24. Alruwaie, M.; El-Haddadeh, R.; Weerakkody, V. Citizens’ continuous use of eGovernment services: The role of self-efficacy, outcome expectations and satisfaction. Gov. Inf. Q. 2020, 37, 101485.
  25. Buntaine, M.T.; Nielson, D.L.; Skaggs, J.T. Escaping the disengagement dilemma: Two field experiments on motivating citizens to report on public services. Br. J. Political Sci. 2021, 51, 685–705.
  26. Bricout, J.; Baker, P.M.; Moon, N.W.; Sharma, B. Exploring the smart future of participation: Community, inclusivity, and people with disabilities. Int. J. E-Plan. Res. (IJEPR) 2021, 10, 94–108.
  27. Kim, N.Y.; Kim, H.J.; Kim, S.H. Are satisfied citizens willing to participate more? An analysis of citizens’ life satisfaction in the public service domain and public participation. Int. Rev. Public Adm. 2022, 27, 211–227.
  28. ISO 9241-210; Human-Centred Design for Interactive Systems. International Organization for Standardization: Geneva, Switzerland, 2019.
  29. Nielsen, J. Usability Engineering; Morgan Kaufmann: San Francisco, CA, USA, 1994.
  30. ISO 9241-11; Guidance on Usability. International Organization for Standardization: Geneva, Switzerland, 1999.
  31. Venkatesh, V.; Ramesh, V.; Massey, A.P. Understanding usability in mobile commerce. Commun. ACM 2003, 46, 53–56.
  32. Srivastava, P.; Mostafavi, A. Challenges and opportunities of crowdsourcing and participatory planning in developing infrastructure systems of smart cities. Infrastructures 2018, 3, 51.
  33. Hyvärinen, T.; Kaikkonen, A.; Hiltunen, M. Placing links in mobile banking application. In Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, Salzburg, Austria, 19–22 September 2005; pp. 63–68.
  34. Chanana, L.; Agrawal, R.; Punia, D.K. Service quality parameters for mobile government services in India. Glob. Bus. Rev. 2016, 17, 136–146.
  35. Al-Hubaishi, H.S.; Ahmad, S.Z.; Hussain, M. Exploring mobile government from the service quality perspective. J. Enterp. Inf. Manag. 2017, 30, 4–16.
  36. Saadi, M.R.; Ahmad, S.Z.; Hussain, M. Prioritization of citizens’ preferences for using mobile government services: The analytic hierarchy process (AHP) approach. Transform. Gov. People Process Policy 2017, 11, 476–503.
  37. Singh, H.; Grover, P.; Kar, A.K.; Ilavarasan, P.V. Review of performance assessment frameworks of e-government projects. Transform. Gov. People Process Policy 2020, 14, 31–64.
  38. Desmal, A.J.; Hamid, S.; Othman, M.K.; Zolait, A. Exploration of the usability quality attributes of mobile government services: A literature review. PeerJ Comput. Sci. 2022, 8, e1026.
  39. Sheoran, S.; Vij, S. A Review of E-Government Assessment Frameworks: E-Readiness, Adoption, Citizen Engagement and Quality. JeDEM-Ejournal Edemocracy Open Gov. 2022, 14, 197–213.
  40. Venkatesh, V.; Hoehle, H.; Aljafari, R. A usability study of the obamacare website: Evaluation and recommendations. Gov. Inf. Q. 2017, 34, 199–210.
  41. Ojo, A.; Mellouli, S. Deploying governance networks for societal challenges. Gov. Inf. Q. 2018, 35, S106–S112.
  42. Ababneh, R.; Alrefaie, L. Evaluating the quality of public administration institutes’ websites in the Arab world. In Global Knowledge, Memory and Communication; Emerald: Bingley, UK, 2022.
  43. Agrawal, G.; Kumar, D.; Singh, M. Assessing the usability, accessibility, and mobile readiness of e-government websites: A case study in India. Univers. Access Inf. Soc. 2022, 21, 737–748.
  44. Rivas-Delgado, O.; Libaque-Saenz, C.F. The impact of usability on e-government usage in the Peruvian context. Issues Inf. Syst. 2022, 23, 1–14.
  45. Liu, Q.; Kim, K. Research on the Usability Test of Interface Design in e-Government-Focused on Qingdao e-Government Website. Arch. Des. Res. 2023, 36, 59–72.
  46. Qonita, F.; Budiman, M.F.; Sari, V.M.; Limantara, N. Analysis of User Experience on The Government Application of Indonesian Higher Education Institutional Information Systems Using Usability Method. In Proceedings of the 4th International Conference on Innovative Trends in Information Technology (ICITIIT), Kerala, India, 11–12 February 2023; pp. 1–6.
  47. Sheoran, S.; Mohanasundaram, S.; Kasilingam, R.; Vij, S. Usability and Accessibility of Open Government Data Portals of Countries Worldwide: An Application of TOPSIS and Entropy Weight Method. Int. J. Electron. Gov. Res. (IJEGR) 2023, 19, 1–25.
  48. Desmal, A.J.; Hamid, S.; Othman, M.K.; Zolait, A. A user satisfaction model for mobile government services: A literature review. PeerJ Comput. Sci. 2022, 8, e1074.
  49. Menezes, V.G.D.; Pedrosa, G.V.; Silva, M.P.D.; Figueiredo, R.M.D. Evaluation of public services considering the expectations of users-A systematic literature review. Information 2022, 13, 162.
  50. Lyzara, R.; Purwandari, B.; Zulfikar, M.F.; Santoso, H.B.; Solichah, I. E-government usability evaluation: Insights from a systematic literature review. In Proceedings of the 2nd International Conference on Software Engineering and Information Management, Bali, Indonesia, 10–12 January 2019; pp. 249–253.
  51. Monzón, F.H.; Tupia, M.; Bruzza, M. Security versus usability in e-government: Insights from the literature. In Proceedings of the MICRADS 2020, Quito, Ecuador, 29–31 July 2020; pp. 29–42.
  52. Alshamsi, A.; Williams, N.; Andras, P. The Trade-off between Usability and Security in the Context of eGovernment: A Mapping Study. In Proceedings of the 30th International BCS Human Computer Interaction Conference, Fern Barrow, UK, 11–15 July 2016; pp. 1–13.
  53. Yerlikaya, Z.; Onay Durdu, P. Usability of university websites: A systematic review. In Proceedings of the 11th International Conference of Universal Access in Human-Computer Interaction, Vancouver, BC, Canada, 9–14 July 2017; pp. 277–287.
  54. Cisneros, D.; Huamán Monzón, F.; Paz, F. Accessibility evaluation of E-Government web applications: A systematic review. In Proceedings of the International Conference on Human-Computer Interaction, Virtual Event, 24–29 July 2021; pp. 210–223.
  55. Zhang, J.; Chang, D.; Zhang, Z. Review on the Application of Eye-tracking Technology in Usability Evaluation of E-government Apps. In Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, 13–16 December 2021; pp. 1646–1650.
  56. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341.
  57. Booth, A.; Sutton, A.; Clowes, M.; Martyn-St James, M. Systematic Approaches to a Successful Literature Review; Sage: Newcastle upon Tyne, UK, 2021.
  58. Silva, A.G.; Simões, P.; Santos, R.; Queirós, A.; Rocha, N.P.; Rodrigues, M. A scale to assess the methodological quality of studies assessing usability of electronic health products and services: Delphi study followed by validity and reliability testing. J. Med. Internet Res. 2019, 21, e14829.
  59. Madyatmadja, E.D.; Prabowo, H. Participation to public e-service development: A systematic literature review. J. Telecommun. Electron. Comput. Eng. (JTEC) 2016, 8, 139–143.
  60. Delgado, M.; Paz, F.; Tupia, M. Sistemas de lógica difusa para la evaluación de usabilidad de sitios web de gobierno electrónico: Una revisión sistemática [Fuzzy logic systems for the usability evaluation of e-government websites: A systematic review]. Rev. Ibérica De Sist. E Tecnol. De Informação 2021, E41, 141–154.
  61. Mahou-Lago, X.M.; Varela-Álvarez, E.J. Innovation and opportunities for citizen participation in Spanish smart cities. In Smarter as the New Urban Agenda; Springer: Cham, Switzerland, 2016.
  62. Li, E.; Chen, Q.; Zhang, X.; Zhang, C. Digital government development, local governments’ attention distribution and enterprise total factor productivity: Evidence from China. Sustainability 2023, 15, 2472.
  63. Castilla, R.; Pacheco, A.; Franco, J. Digital government: Mobile applications and their impact on access to public information. SoftwareX 2023, 22, 101382.
  64. De Róiste, M. Bringing in the users: The role for usability evaluation in eGovernment. Gov. Inf. Q. 2013, 30, 441–449.
  65. Sachs, M.; Schossböck, J. Acceptance of tools for electronic citizen participation. In Proceedings of the International Conference on Electronic Participation, San Benedetto Del Tronto, Italy, 2–4 September 2019; pp. 35–46.
  66. Haustein, E.; Lorson, P.C.; Oulasvirta, L.O.; Sinervo, L.M. Perceived usability of local government (LG) financial statements by local councillors: Comparative study of Finland and Germany. Int. J. Public Sect. Manag. 2021, 34, 441–458.
  67. Rey, W.P. Assessing MABIS Mobile App Based on People at the Center of Mobile Application Development (PACMAD) Usability Model: Empirical Investigation. In Proceedings of the 13th International Conference on Software Technology and Engineering (ICSTE), Osaka, Japan, 27–29 October 2023; pp. 37–43.
  68. Ouaadi, I.; Haddad, M.E. Fuzzy expert system and information systems auditing: An approach for risk assessment in audit pre-planning. Int. J. Bus. Contin. Risk Manag. 2023, 13, 204–228.
  69. Therias, A.; Rafiee, A. City digital twins for urban resilience. Int. J. Digit. Earth 2023, 16, 4164–4190.
  70. Dwivedi, V.; Iqbal, M.; Norta, A.; Matulevičius, R. Evaluation of a Legally Binding Smart-Contract Language for Blockchain Applications. J. Univers. Comput. Sci. 2023, 29, 691–717.
  71. Chang Lee, K.; Kirlidog, M.; Lee, S.; Gun Lim, G. User evaluations of tax filing web sites: A comparative study of South Korea and Turkey. Online Inf. Rev. 2008, 32, 842–859.
  72. Awan, M.A. Dubai e-government: An evaluation of G2B websites. J. Internet Commer. 2008, 6, 115–129.
  73. Choudrie, J.; Wisal, J.; Ghinea, G. Evaluating the usability of developing countries’ e-government sites: A user perspective. Electron. Gov. Int. J. 2009, 6, 265–281. [Google Scholar] [CrossRef]
  74. Yuan, L.; Zhongling, L. Experimental evaluation on government portal website’s usability to 11 government websites of Zhejiang province. In Proceedings of the 2nd International Conference on Information Science and Engineering, Kerbala, Iraq, 26–29 March 2010; pp. 2076–2078. [Google Scholar] [CrossRef]
  75. Israel, D.; Tiwari, R. Empirical study of factors influencing acceptance of e-government services in India. In Proceedings of the 5th International Conference on Theory and Practice of Electronic Governance, Tallinn, Estonia, 26–28 October 2011; pp. 141–146. [Google Scholar] [CrossRef]
  76. González Martínez, S.; Luna-Reyes, L.F.; Luna, D.E.; Gil-Garcia, J.R.; Sandoval-Almazán, R. Comparing usability of government web portals during governor change of terms. In Proceedings of the 12th Annual International Digital Government Research Conference: Digital Government Innovation in Challenging Times, College Park, MD, USA, 12–15 June 2011; pp. 327–328. [Google Scholar] [CrossRef]
  77. Bouzas-Lorenzo, R.; Mahou-Lago, X.M. An evaluation of citizen service web portals in Latin America. Acad. Rev. Latinoam. De Adm. 2015, 28, 99–114. [Google Scholar] [CrossRef]
  78. Baguma, R. Usability evaluation of the etax portal for Uganda. In Proceedings of the 11th International Conference on Theory and Practice of Electronic Governance, Galway, Ireland, 4–6 April 2018; pp. 449–458. [Google Scholar] [CrossRef]
  79. Madariaga, L.; Nussbaum, M.; Marañón, F.; Alarcón, C.; Naranjo, M.A. User experience of government documents: A framework for informing design decisions. Gov. Inf. Q. 2019, 36, 179–195. [Google Scholar] [CrossRef]
  80. Hussain, A.; Mkpojiogu, E.O.; Ishak, N.; Mokhtar, N.; Ani, Z.C. An Interview Report on Users’ Perception about the Usability Performance of a Mobile E-Government Application. Int. J. Interact. Mob. Technol. 2019, 13, 169–178. [Google Scholar] [CrossRef]
  81. Patsoulis, G.; Promikyridis, R.; Tambouris, E. Integration of chatbots with Knowledge Graphs in eGovernment: The case of Getting a Passport. In Proceedings of the 25th Pan-Hellenic Conference on Informatics, Volos, Greece, 26–28 November 2021; pp. 425–429. [Google Scholar] [CrossRef]
  82. Butt, M.A.; Li, S.; Javed, N. Towards Co-PPGIS-a collaborative public participatory GIS-based measure for transparency in housing schemes: A case of Lahore, Pakistan. Appl. Geomat. 2016, 8, 27–40. [Google Scholar] [CrossRef]
  83. Birghan, F.; Hettenhausen, R.; Meschede, C.; Siebenlist, T. Informing Citizens via Council Information Systems. In Proceedings of the 20th Annual International Conference on Digital Government Research, Dubai, United Arab Emirates, 18–20 June 2019; pp. 280–286. [Google Scholar] [CrossRef]
  84. Sofyan, Z. User requirements elicitation in web-based Participatory Geographic Information System interface design. In Proceedings of the 9th Annual International Conference on Sciences & Engineering, Banda Aceh, Indonesia, 18–20 September 2019; p. 012028. [Google Scholar] [CrossRef]
  85. Magoutas, B.; Schmidt, K.U.; Mentzas, G.; Stojanovic, L. An adaptive e-questionnaire for measuring user perceived portal quality. Int. J. Hum. Comput. Stud. 2010, 68, 729–745. [Google Scholar] [CrossRef]
  86. Byun, D.H.; Finnie, G. An AHP method for evaluating usability of electronic government portals. Electron. Gov. Int. J. 2011, 8, 343–362. [Google Scholar] [CrossRef]
  87. Faisal, M.; Al-Qouz, H.; Husain, F. A direct method for measuring user experience in E-government portals. In Proceedings of the 15th International Conference on Information Technology Based Higher Education and Training (ITHET), Istanbul, Turkey, 8–10 September 2016; pp. 1–4. [Google Scholar] [CrossRef]
  88. Schneider, H.; Frison, K.; Wagner, J.; Butz, A. CrowdUX: A case for using widespread and lightweight tools in the quest for UX. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems, Brisbane, Australia, 4–8 June 2016; pp. 415–426. [Google Scholar] [CrossRef]
  89. Burian, M.F.; Fauvety, P.; Aquino, N.; González, M.; Romero, D.; Cernuzzi, L.; Paniagua, J.; Chenú-Abente, R. Design of SmartMoving, an application for pedestrians with reduced mobility. In Proceedings of the XLVI Latin American Computing Conference (CLEI), Loja, Ecuador, 19–23 October 2020; pp. 367–376. [Google Scholar] [CrossRef]
  90. Callegari, L.S.; Nelson, K.M.; Arterburn, D.E.; Dehlendorf, C.; Magnusson, S.L.; Benson, S.K.; Schwarz, E.B.; Borrero, S. Development and pilot testing of a patient-centered web-based reproductive decision support tool for primary care. J. Gen. Intern. Med. 2021, 36, 2989–2999. [Google Scholar] [CrossRef]
  91. Remelhe, E.; Cerqueira, M.; Faria, P.M.; Paiva, S. Sustainable smart parking solution in a campus environment. EAI Endorsed Trans. Energy Web 2022, 9, e2. [Google Scholar] [CrossRef]
  92. Wang, G.; Qin, Z.; Wang, S.; Sun, H.; Dong, Z.; Zhang, D. Towards accessible shared autonomous electric mobility with dynamic deadlines. IEEE Trans. Mob. Comput. 2022, 23, 925–940. [Google Scholar] [CrossRef]
  93. Henderson, H.; Grace, K.; Gulbransen-Diaz, N.; Klaassens, B.; Leong, T.W.; Tomitsch, M. From parking meters to vending machines: A study of usability issues in self-service technologies. Int. J. Hum.-Comput. Interact. 2023, 40, 4365–4379. [Google Scholar] [CrossRef]
  94. Armand, T.P.T.; Mozumder, M.A.I.; Ali, S.; Amaechi, A.O.; Kim, H.C. Developing a Low-Cost IoT-Based Remote Cardiovascular Patient Monitoring System in Cameroon. Healthcare 2023, 11, 199. [Google Scholar] [CrossRef]
  95. Moriuchi, E.; Berbary, C.; Easton, C. Looking through the lenses of a patient: An empirical study on the factors affecting patients’ intention to use avatar-assisted therapy. J. Technol. Behav. Sci. 2023, 8, 100–112. [Google Scholar] [CrossRef]
  96. Medich, M.; Cannedy, S.L.; Hoffmann, L.C.; Chinchilla, M.Y.; Pila, J.M.; Chassman, S.A.; Calderon, R.A.; Young, A.S. Clinician and Patient Perspectives on the Use of Passive Mobile Monitoring and Self-Tracking for Patients with Serious Mental Illness: User-Centered Approach. JMIR Hum. Factors 2023, 10, e46909. [Google Scholar] [CrossRef]
  97. Jabbar, W.A.; Tiew, L.Y.; Shah, N.Y.A. Internet of things enabled parking management system using long range wide area network for smart city. Internet Things Cyber-Phys. Syst. 2024, 4, 82–98. [Google Scholar] [CrossRef]
  98. Bohman, S.; Hansson, H.; Mobini, P. Online participation in higher education decision-making. JeDEM-Ejournal Edemocracy Open Gov. 2014, 6, 267–285. [Google Scholar] [CrossRef]
  99. Bahadori, H.; Vahdat-Nejad, H.; Moradi, H. CrowdBIG: Crowd-based system for information gathering from the earthquake environment. Nat. Hazards 2022, 114, 3719–3741. [Google Scholar] [CrossRef]
  100. Fuglerud, K.S.; Røssvoll, T.H. An evaluation of web-based voting usability and accessibility. Univ. Access Inf. Soc. 2012, 11, 359–373. [Google Scholar] [CrossRef]
  101. Yang, D.; Zhang, D.; Frank, K.; Robertson, P.; Jennings, E.; Roddy, M.; Lichtenstern, M. Providing real-time assistance in disaster relief by leveraging crowdsourcing power. Pers. Ubiquitous Comput. 2014, 18, 2025–2034. [Google Scholar] [CrossRef]
  102. Brito, J.; Vieira, V.; Duran, A. Towards a framework for gamification design on crowdsourcing systems: The GAME approach. In Proceedings of the 12th International Conference on Information Technology-New Generations, Las Vegas, NV, USA, 13–15 April 2015; pp. 445–450. [Google Scholar] [CrossRef]
  103. Pistolato, A.C.; Brandão, W.C. ConnectCity: A collaborative e-government approach to report city incidents. In Proceedings of the 15th International Conference WWW/Internet 2016, Mannheim, Germany, 28–30 October 2016; pp. 233–237. [Google Scholar]
  104. Winckler, M.; Bernhaupt, R.; Bach, C. Identification of UX dimensions for incident reporting systems with mobile applications in urban contexts: A longitudinal study. Cogn. Technol. Work 2016, 18, 673–694. [Google Scholar] [CrossRef]
  105. Idris, N.H.; Osman, M.J.; Kanniah, K.D.; Idris, N.H.; Ishak, M.H.I. Engaging indigenous people as geo-crowdsourcing sensors for ecotourism mapping via mobile data collection: A case study of the Royal Belum State Park. Cartogr. Geogr. Inf. Sci. 2017, 44, 113–127. [Google Scholar] [CrossRef]
  106. Bousios, A.; Gavalas, D.; Lambrinos, L. CityCare: Crowdsourcing daily life issue reports in smart cities. In Proceedings of the 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 3–6 July 2017; pp. 266–271. [Google Scholar] [CrossRef]
  107. Koroleva, M.B.K.; Vitorino, K.D.D.; Novak, J. Developing a collective awareness platform for urban sustainability challenges: Case study of the POWER project. Eur. J. Sustain. Dev. 2019, 8, 214. [Google Scholar] [CrossRef]
  108. Falcão, A.G.R.; Wanderley, P.F.; da Silva Leite, T.H.; de Souza Baptista, C.; de Queiroz, J.E.R.; de Oliveira, M.G.; Rocha, J.H. Crowdsourcing urban issues in smart cities: A usability assessment of the Crowd4City system. In Proceedings of the 8th International Conference on Electronic Government and the Information Systems Perspective, Linz, Austria, 26–29 August 2019; pp. 147–159. [Google Scholar] [CrossRef]
  109. Hasim, W.; Wibirama, S.; Nugroho, H.A. Redesign of E-participation using user-centered design approach for improving user experience. In Proceedings of the 2019 International Conference on Information and Communications Technology (ICOIACT), Yogyakarta, Indonesia, 24–25 July 2019; pp. 857–861. [Google Scholar] [CrossRef]
  110. Knecht, K.; Stefanescu, D.A.; Koenig, R. Citizen Engagement through Design Space Exploration Integrating citizen knowledge and expert design in computational urban planning. In Proceedings of the 37th Education and Research in Computer Aided Architectural Design in Europe and XXIII Iberoamerican Society of Digital Graphics, Joint Conference, Porto, Portugal, 11–13 September 2019; pp. 785–794. [Google Scholar] [CrossRef]
  111. Nguyen, Q.N.; Frisiello, A.; Rossi, C. The Design of a Mobile Application for Crowdsourcing in Disaster Risk Reduction. In Proceedings of the 16th International Conference on Information Systems for Crisis Response and Management, València, Spain, 19–22 May 2019; pp. 607–618. [Google Scholar]
  112. Zabaleta, K.; Lopez-Novoa, U.; Pretel, I.; López-de-Ipiña, D.; Cartelli, V.; Di Modica, G.; Tomarchio, O. Designing a Human Computation Framework to Enhance Citizen-Government Interaction. J. Univers. Comput. Sci. 2019, 25, 122–153. [Google Scholar] [CrossRef]
  113. Dioquino, J.; Cac, A.R.; Tandingan, D.R. Development of Material Recovery Trading System: An Innovative Tool for Community Waste Management Support. In Proceedings of the International Conference on Information Technology and Digital Applications 2019, Yogyakarta, Indonesia, 15 November 2019; p. 012013. [Google Scholar] [CrossRef]
  114. Aditya, T.; Sugianto, A.; Sanjaya, A.; Susilo, A.; Zawani, H.; Widyawati, Y.S.; Amin, S. Channelling participation into useful representation: Combining digital survey app and collaborative mapping for national slum-upgrading programme. Appl. Geomat. 2020, 12, 133–148. [Google Scholar] [CrossRef]
  115. Asfarian, A.; Putra, R.P.; Panatagama, A.P.; Nurhadryani, Y.; Ramadhan, D.A. E-Initiative for Food Security: Design of Mobile Crowdfunding Platform to Reduce Food Insecurity in Indonesia. In Proceedings of the 8th International Conference on Information and Communication Technology (ICoICT), Yogyakarta, Indonesia, 24 June 2020; pp. 1–5. [Google Scholar] [CrossRef]
  116. Liu, H.K.; Hung, M.J.; Tse, L.H.; Saggau, D. Strengthening urban community governance through geographical information systems and participation: An evaluation of my Google Map and service coordination. Aust. J. Soc. Issues 2020, 55, 182–200. [Google Scholar] [CrossRef]
  117. Biswas, R.; Arya, K.; Fernandes, V.; Shah, T. Find A Loo: An app for sanitation governance. Inf. Commun. Soc. 2021, 24, 1586–1602. [Google Scholar] [CrossRef]
  118. Janoskova, P.; Stofkova, K.R.; Kovacikova, M.; Stofkova, J.; Kovacikova, K. The concept of a smart city communication in the form of an urban mobile application. Sustainability 2021, 13, 9703. [Google Scholar] [CrossRef]
  119. Jindal, A.; Chowdhury, A. Designing a donation portal to help underprivileged Indians. In Proceedings of the International Conference on Research into Design, Mumbai, India, 7–10 January 2021; pp. 399–411. [Google Scholar] [CrossRef]
  120. Takenouchi, K.; Choh, I. Development of a support system for creating disaster prevention maps focusing on road networks and hazardous elements. Vis. Comput. Ind. Biomed. Art 2021, 4, 22. [Google Scholar] [CrossRef]
  121. Thoneick, R. Integrating online and onsite participation in urban planning: Assessment of a digital participation system. Int. J. E-Plan. Res. (IJEPR) 2021, 10, 1–20. [Google Scholar] [CrossRef]
  122. Wernbacher, T.; Pfeiffer, A.; Gebetsroither-Geringer, E.; Goels, M.; Worster, J.; Meißner, E.; Graf, A.; Stollnberger, R.; Geyer, R.; Schmidt, R.R.; et al. HotCity-A gamified token system for reporting waste heat sources. In Proceedings of the International Conference on Intelligent Systems & Networks, Hanoi, Vietnam, 19 March 2021; pp. 15–27. [Google Scholar] [CrossRef]
  123. Görgü, L.; O’Grady, M.; Mangina, E.; O’Hare, G.M. Participatory risk management in the smart city. In Proceedings of the 2022 IEEE International Smart Cities Conference (ISC2), Paphos, Cyprus, 26–29 September 2022; pp. 1–6. [Google Scholar] [CrossRef]
  124. Matsuda, Y.; Kawanaka, S.; Suwa, H.; Arakawa, Y.; Yasumoto, K. ParmoSense: Scenario-based Participatory Mobile Urban Sensing Platform with User Motivation Engine. Sens. Mater. 2022, 34, 3063. [Google Scholar] [CrossRef]
  125. Ong, A.K.S.; Prasetyo, Y.T.; Kusonwattana, P.; Mariñas, K.A.; Yuduang, N.; Chuenyindee, T.; Robas, K.P.E.; Persada, S.F.; Nadlifatin, R. Determining factors affecting the perceived usability of air pollution detection mobile application “AirVisual” in Thailand: A structural equation model forest classifier approach. Heliyon 2022, 8, e12538. [Google Scholar] [CrossRef]
  126. Gonzales, A.L.R.; Ingalla, E.J.M.; Javier, N.A.F.; Serrano, E.A.; Rodriguez, R.L. CharitAble: A Software Application for Charity Donation. In Proceedings of the 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, Boracay, Philippines, 1–4 December 2022; pp. 1–5. [Google Scholar] [CrossRef]
  127. Ananta, M.T.; Rohidah, S.; Brata, K.C.; Abidin, Z. Mobile Crowdsourcing App Design: Managing Waste Through Waste Bank in Rural Area of Indonesia. In Proceedings of the 8th International Conference on Sustainable Information Engineering and Technology, Bali, Indonesia, 24–25 October 2023; pp. 664–672. [Google Scholar] [CrossRef]
  128. Cortés-Cediel, M.E.; Segura-Tinoco, A.; Cantador, I.; Bolívar, M.P.R. Trends and challenges of e-government chatbots: Advances in exploring open government data and citizen participation content. Gov. Inf. Q. 2023, 40, 101877. [Google Scholar] [CrossRef]
  129. Junqueira, L.; Freire, A.P.; Grützmann, A.; Zitkus, E. Challenges and barriers faced by older adults to access legislative e-participation platforms in Brazil. Electron. J. Inf. Syst. Dev. Ctries. 2023, 89, e12274. [Google Scholar] [CrossRef]
  130. Lam, J.; Kop, N.; Houtman, C. The impact of technological support on citizen searches for missing persons. Justice Eval. J. 2023, 6, 62–80. [Google Scholar] [CrossRef]
  131. Leoputra, C.D.; Satya, D.P.; Al-Ghazali, M.R. Application of User-Centered Design Approach in Developing Interaction Design of In-Kind Donation Feature on a Crowdfunding Platform. In Proceedings of the 2023 International Conference on Electrical Engineering and Informatics (ICEEI), Bandung, Indonesia, 10–11 October 2023; pp. 1–6. [Google Scholar] [CrossRef]
  132. Manik, L.P.; Albasri, H.; Puspasari, R.; Yaman, A.; Al Hakim, S.; Kushadiani, S.K.; Riyanto, S.; Setiawan, F.A.; Thesiana, L.; Jabbar, M.A.; et al. Usability and acceptance of crowd-based early warning of harmful algal blooms. PeerJ 2023, 11, e14923. [Google Scholar] [CrossRef]
  133. Sapraz, M.; Han, S. Users’ evaluation of a digital government collaborative platform (DGCP) in Sri Lanka. Transform. Gov. People Process Policy 2024, 18, 131–144. [Google Scholar] [CrossRef]
  134. Bastos, D.; Fernández-Caballero, A.; Pereira, A.; Rocha, N.P. Smart city applications to promote citizen participation in city management and governance: A systematic review. Informatics 2022, 9, 89. [Google Scholar] [CrossRef]
  135. Sapraz, M.; Han, S. Implicating human values for designing a digital government collaborative platform for environmental issues: A value sensitive design approach. Sustainability 2021, 13, 6240. [Google Scholar] [CrossRef]
  136. Bajdor, P.; Starostka-Patyk, M. Smart City: A bibliometric analysis of conceptual dimensions and areas. Energies 2021, 14, 4288. [Google Scholar] [CrossRef]
  137. Sharif, A.; Allam, Z.; Feizizadeh, B.; Ghamari, H. Three decades of research on smart cities: Mapping knowledge structure and trends. Sustainability 2021, 13, 7140. [Google Scholar] [CrossRef]
  138. Vento, I. Hands-off or hands-on governance for public innovation? A comparative case study in the EU cohesion policy implementation in Finland. Int. J. Public Adm. 2020, 43, 989–999. [Google Scholar] [CrossRef]
  139. Almeida, A.F.; Rocha, N.P.; Silva, A.G. Methodological quality of manuscripts reporting on the usability of mobile applications for pain assessment and management: A systematic review. Int. J. Environ. Res. Public Health 2020, 17, 785. [Google Scholar] [CrossRef]
  140. Silva, A.G.; Caravau, H.; Martins, A.; Almeida, A.M.; Silva, T.; Ribeiro, Ó.; Santinha, G.; Rocha, N.P. Procedures of User-Centered Usability Assessment for Digital Solutions: Scoping Review of Reviews Reporting on Digital Solutions Relevant for Older Adults. JMIR Hum. Factors 2021, 8, e22774. [Google Scholar] [CrossRef]
Figure 1. Systematic review flowchart.
Figure 2. Number of studies published by year.
Figure 3. Number of studies that met each item after consensus was reached between reviewers.
Table 1. PICO framework.
Population | Digital applications to promote citizens’ engagement and participation in public governance
Intervention | User-centered usability evaluation
Comparison | N/A
Outcome | Methodological quality
Context | Research papers selected from scientific databases
Table 2. Location of the experimental setups.
Region | Country | Number of Studies | National Studies | Multinational Studies
Europe | Spain | 4 | [128] | [107,111,112]
Europe | The United Kingdom | 4 | - | [107,111,112,123]
Europe | France | 3 | [104] | [101,123]
Europe | Germany | 2 | [121] | [101]
Europe | Greece | 2 | [106] | [101]
Europe | Italy | 2 | - | [111,112]
Europe | Norway | 2 | [100] | [101]
Europe | Austria | 1 | [122] | -
Europe | Croatia | 1 | - | [111]
Europe | Finland | 1 | - | [111]
Europe | Ireland | 1 | - | [101]
Europe | The Netherlands | 1 | [130] | -
Europe | Portugal | 1 | - | [123]
Europe | Slovakia | 1 | [118] | -
Asia | Indonesia | 6 | [109,114,115,127,131,132] | -
Asia | India | 2 | [117,119] | -
Asia | Japan | 2 | [120,124] | -
Asia | Philippines | 2 | [113,126] | -
Asia | Hong Kong | 1 | [116] | -
Asia | Israel | 1 | - | [107]
Asia | Malaysia | 1 | [105] | -
Asia | Singapore | 1 | [110] | -
Asia | Sri Lanka | 1 | [133] | -
Asia | Thailand | 1 | [125] | -
South America | Brazil | 4 | [102,103,108,129] | -
Table 3. Purposes of the applications that were evaluated by the included studies.
Purposes | Number of Studies | References
Participatory reporting of urban issues | 12 | [101,102,103,104,105,106,108,114,116,117,124,125]
Environmental sustainability | 6 | [107,113,122,127,132,133]
Civic participation | 5 | [115,119,126,130,131]
Urban planning | 5 | [110,111,120,121,123]
Promotion of democratic values | 4 | [109,112,118,129]
Electronic voting | 1 | [100]
Chatbots | 1 | [128]
Table 4. Usability evaluation design.
(Test methods: Task Performance, Think Aloud, Critical Incidents. Inquiry methods: Scales, Questionnaires, Interview, Focus Group, Acceptance Models. The last two columns characterize the participants.)
Study | Task Performance | Think Aloud | Critical Incidents | Scales | Questionnaires | Interview | Focus Group | Acceptance Models | Number | Age (Min–Max)
[100] | x | x | x | - | - | - | - | - | 24 | <20–89
[101] | x | - | - | SUS 1 | - | x | - | - | 16 | -
[102] | x | - | - | - | - | - | - | - | 10 | -
[103] | - | x | - | SURE 2 | - | - | - | - | 10 | 25–34
[104] | - | x | - | SUS | - | x | - | - | 20 | 21–57
[105] | x | - | - | - | - | - | - | - | 40 | 10–52
[106] | x | x | - | - | x | x | - | - | 20 | -
[107] | - | - | - | - | x | - | x | - | 30 | -
[108] | x | - | x | - | x | - | - | - | 30 | 18–35
[109] | - | - | - | UEQ 3 | - | - | - | - | 16 | 16–40
[110] | - | - | - | SUS | - | - | - | - | 32 | -
[111] | x | - | - | SUS | x | - | - | - | 51 | 25–>56
[112] | - | - | - | - | x | - | - | - | 215 | 18–>65
[113] | - | - | - | SUS | - | - | - | - | 55 | -
[114] | - | - | - | - | x | x | - | - | 29 | -
[115] | x | x | - | - | - | - | - | - | 5 | -
[116] | - | - | - | - | x | - | - | - | 120 | -
[117] | - | - | - | SUS | - | - | - | - | 33 | 22–50
[118] | x | x | - | SUS | - | - | - | - | 25 | 15–>60
[119] | - | - | - | - | x | - | - | - | 45 | -
[120] | - | - | - | SUS | - | x | - | - | 9 | -
[121] | x | - | - | - | - | x | - | - | 124 | -
[122] | - | - | - | SUS | x | - | - | - | 31 | -
[123] | - | - | - | SUS | x | - | - | - | 18 | 18–>65
[124] | - | - | - | - | x | - | - | - | 152 | -
[125] | - | - | - | - | - | - | - | UTAUT 4 | 416 | 15–>64
[126] | - | - | - | SUS | - | - | - | - | 30 | -
[127] | x | - | - | SUS | - | x | - | - | 10 | -
[128] | x | - | - | - | x | - | - | - | 12 | 18–54
[129] | x | - | x | - | - | x | - | - | 20 | 60–80
[130] | x | - | x | SUS | - | - | - | - | 33 | <25–>65
[131] | x | - | - | SUS | - | x | - | - | 40 | -
[132] | - | - | - | SUS | - | - | - | TAM 5 | 104 | 17–61
[133] | - | - | - | UEQ | x | x | - | - | 239 | 15–62
Notes: 1 System Usability Scale (SUS); 2 Smartphone Usability QuestionnaiRE (SURE); 3 User Experience Questionnaire (UEQ); 4 Unified Theory of Acceptance and Use of Technology (UTAUT); 5 Technology Acceptance Model (TAM).
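As a side note on the instruments above, the System Usability Scale (SUS), used by most of the included studies, is scored with a fixed, well-known rule: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that standard rule follows; the helper name `sus_score` is illustrative and not taken from any of the reviewed papers:

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten item responses (1-5).

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response). The sum is scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


# A maximally favorable response pattern yields the ceiling score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Because the result is a single 0–100 value rather than a percentage of "usability", studies reporting SUS typically interpret it against published benchmarks (a score around 68 is commonly treated as average).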
Table 5. Usability evaluation methods.
Methods | Studies
Exclusively test methods | [100,102,105,115]
Exclusively inquiry methods | [107,109,110,112,113,114,116,117,119,120,122,123,124,125,126,132,133]
Multimethod (test and inquiry methods) | [101,103,104,106,108,111,118,121,127,128,129,130,131]
Table 6. Usability inquiry instruments.
Instruments’ Nature | Studies
Validated scales | [101,103,104,109,110,111,113,117,118,120,122,123,126,127,130,131,132,133]
Ad hoc questionnaires | [106,107,108,111,112,114,116,119,122,123,124,128,133]
Technology acceptance models | [122,125]
Table 7. Factors with positive and negative impact on the perceived usability.
Impact | Factors | Studies
Positive | Application of universal design principles to minimize the exclusion of disadvantaged groups | [100,105]
Positive | Application of design methods to maximize the visual and aesthetic experience | [104]
Positive | Maximization of effort expectancy and performance expectancy | [125,132]
Positive | Incorporation of human values (e.g., the framework proposed by [135]) in the design of the user interaction | [104,110,128,133]
Positive | Introduction of gamification and other motivating features to promote the continuous and sustainable use of the proposed applications | [102,124]
Negative | Complicated features (e.g., complicated language or resources that are difficult to use), which negatively impact perceived usability and reinforce participants’ distrust in both digital applications and authorities | [129]
Negative | The cognitive load of the user interaction, a source of distraction that might negatively impact collaborative tasks | [130]