Article

On Increasing Service Organizations’ Agility: An Artifact-Based Framework to Elicit Improvement Initiatives

Department of Engineering Design and Robotics, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
*
Author to whom correspondence should be addressed.
Sustainability 2023, 15(13), 10189; https://doi.org/10.3390/su151310189
Submission received: 8 May 2023 / Revised: 16 June 2023 / Accepted: 20 June 2023 / Published: 27 June 2023
(This article belongs to the Special Issue Sustainable Production & Operations Management)

Abstract
The present research focuses on operational agility in service organizations, which are subject to variability through customers, service providers, suppliers, or unexpected events. As such, their management teams may face challenges in understanding their agility-related assets and success metrics, and furthermore in defining the scope of work for improvement initiatives. Previous research offers quite general insights into agility-related capabilities, practices, obstacles, or (agility-related) information quality evaluation. Yet, management teams need specific practices and techniques in order to improve operational agility capabilities, and thus increase their sustainable performance. We propose a conceptual framework and an artifact-centric algorithm that elicits and prioritizes improvement initiatives by (a) understanding agility-related assets by modelling operational business artifacts, (b) determining agility bottlenecks by identifying quality issues in operational artifacts, and (c) eliciting and prioritizing improvement initiatives to increase artifact quality. The framework application is discussed through a case study in a company operating in the rail freight industry, in which a set of initiatives to improve operational agility capabilities is obtained and prioritized. We conclude that the proposed algorithm is an applicable and relevant tool for management teams in service organizations, in their operational agility improvement endeavors.

1. Introduction

Agility, a necessary condition for sustainable performance, should be a key competence of organizations in dynamic markets today [1]. Organizations have to move faster than the market around them, both in terms of decision-making and capability development over time, to maintain a competitive advantage [2]. Agility should therefore not only be seen as a tactic, but rather as a strategic goal in itself.
Organizations that treat agility strategically invest in underlying assets that allow them to iterate and learn faster, as this de-bottlenecks everything they do [3]. As such, a strategy that includes agility should highlight these underlying assets and align agility-related success metrics so that everybody in the team can make that strategy come alive. Metrics are particularly important, as understanding their relative importance helps to understand what matters to leadership. However, while quick and innovative responses to changes in the business environment (i.e., being agile) impact performance, research on the resources and capabilities that enable organizations to be agile is still nascent [4].
The present research focuses on operational agility in service organizations, whose processes are usually complex, non-linear, and work with a high volume of information of variable quality [5,6]. Service systems are inherently subject to variability, whether through customers, service providers, suppliers, or unexpected events [7]. As such, their management teams may face challenges in understanding their agility-related assets and success metrics. They may furthermore face challenges in defining the scope of work for agility-related improvement initiatives.
Among the bottlenecks to (operational) agility reported by practitioners [8], two stand out, especially in the aforementioned context of service organizations: Ineffectively managed unstructured work and complex processes. Approximately 60% of today’s work is unstructured, consisting of ad-hoc, one-off tasks that require interpretation, sound judgment, and expertise. This is especially characteristic of service organizations, whose front-line employees are usually given the flexibility to adapt to “on-the-ground” situations. Moreover, the more interfaces a process involves, the more information is lost or interpreted differently.
According to a Forbes Insights and PMI report on agility [9], which surveyed more than 500 senior executives across the globe, over 90% of them regard agility as critical to business success. The same source reports that only 27% of the interviewed senior executives consider themselves highly agile. In financial terms, the global market for operations management services, which are directly correlated with continuous improvement, was estimated at around USD 70 billion in 2018 [10]. As such, effectively identifying and running improvement initiatives under the pressure of continuous change and uncertainty (an important facet of being agile) leads not only to better agility but also to significant cost savings.
For a manager, a quick web search might reveal dozens of articles on business agility and why organizations need to become more and more agile. Guidelines and good practices related to agility may also be found. However, to become agile, an organization has to be specific about its own capabilities for agility [11] and to run its own agility improvement initiatives. Businesses are different, and the nature of the challenges they face varies with their size, domain, and level of capabilities and resources. As such, we investigate the following research issue: What specific practices and techniques can managers implement in order to improve operational agility capabilities?
The goal of this research is to support service organizations in understanding agility-related assets and metrics and in defining agility-related improvement initiatives. Specifically, we address their management teams by proposing a conceptual framework and an artifact-centric algorithm that elicits and prioritizes improvement initiatives by (a) understanding agility-related assets through modelling operational business artifacts, (b) determining agility bottlenecks through identifying quality issues in operational artifacts, and (c) eliciting and prioritizing improvement initiatives to increase artifact quality. We also present a case study to demonstrate the effectiveness of the proposed algorithm.
The remainder of this article is organized as follows. Section 2 summarizes the literature and presents the theoretical framework of our research, Section 3 presents the algorithm and its design considerations, Section 4 presents a case study within a service-providing company, which operates in the rail freight industry, and Section 5 discusses the obtained results, the implications, and the generalizability and applicability of the algorithm.

2. Literature Review and Ground Ideas

The goal of this section is to present a theoretical framework that incorporates operational agility, business artifacts, and agility-related improvement initiatives, to ground an algorithm for eliciting and prioritizing such improvement initiatives.

2.1. Organizational and Operational Agility

2.1.1. Agility Facets

Today’s industries and markets continuously change by emerging, colliding, splitting, growing, and declining, hence organizations have to contend with perpetual uncertainties [12]. Therefore, regardless of their financial success, they face challenges in creating strategies to deal with such uncertainties. Agility refers to an organization’s ability to sense changes in the environment and to make relevant, high-quality decisions in response [2,4,13,14,15,16]. This means continuously adjusting strategies, supporting decision-making on challenging projects, effectively responding to ambiguity and uncertainty, and perceiving unanticipated changes as opportunities for improvement [4,17,18].
Agility should be a key competence of organizations in today’s dynamic markets [1,19,20,21,22]. It should not only be seen as a tactic, but rather as a strategic goal in itself. Managers thus need to understand what makes their companies agile, how agile they actually are, and how they can improve agility.
Agility is defined in [4] in terms of three dimensions: Customer responsiveness, operational flexibility, and strategic flexibility. Two dimensions of agility are described in [23]—speed and capabilities of an organization to use resources and respond to changes. According to [24], it consists of customer agility, partnership agility, and operational agility. In [25], two facets of agility are discussed: Operational adjustment agility and market capitalizing agility.
Agility should, as such, exist at any organizational level, from strategic/business/organizational level agility down to product or service, tools, and even (employee) personal agility. We focus in this article on the operational level of agility, i.e., the ability of a company to adapt its operations, technology, and information to perpetually changing business requirements [14,26], to allow managers to achieve the desired competitive advantages [24].

2.1.2. Service Organizations Particularities

In service organizations, process quality is usually managed via process KPIs, which rather reflect the performance level of activities, paying less attention to data aspects of business processes [27]. Additionally, the variability they face through either customers, service providers, suppliers, or unexpected events [7] results in less accurate business process models. As such, improving operational business processes may not impact agility.
In this light, management teams in service organizations may face challenges in understanding their agility-related assets and success metrics, and in defining the scopes of work for agility-related improvement initiatives.

2.1.3. Operational Agility Enablers

Agility is seen as a dynamic capability, differing from an operational routine in that it describes how resources can be reconfigured and integrated to respond to contextual changes [14]. To be agile, organizations need a set of enablers, which are capabilities that allow them to promptly respond to changing business environments [28].
Knowledge management, organization learning, leadership commitment, multidisciplinary teams, organizational culture, decentralized decision-making, or customer and stakeholder involvement are reported as generic agility enablers [29]. Information (including data architecture) is also reported as an enabler of agility [15]. Operational capabilities such as Enterprise Architecture can also enhance agility [30]. There is also a positive link between an organization’s information systems capabilities and its agility [31]. Moreover, operational agility requires building (at least) an information processing network and implementing organizational controls to enhance the right information processing capability [26].
To summarize, rather general answers can be found to the question of how to build agility in a particular organization. Managers need, however, to know what capabilities to develop, given different changes in their business environments, and what practices and techniques can be implemented in this respect [11,32].

2.2. Artifact-Centric Approach to Operational Agility

2.2.1. Business Artifacts as Operational Agility Capabilities

Traditional business process models based on activity flow seldom support the flexibility needed to deal with continuous change and uncertainty [27,33]. Managers need to communicate not only how they want things to be performed but also why, so data managed by processes become at least equally important as the activity flows. Employees can thus make smarter decisions in uncommon situations (change and uncertainty), as they better sense the context of their activities.
In this light, a different paradigm for managing processes should be adopted. One such approach focuses on business data records—called business artifacts—that correspond to business-relevant objects, which are updated by a set of services that implement business process tasks [27,33,34,35,36]. All organizations, regardless of their activity domain, rely on business records [33], which contain information on what they produce. Business artifacts enable recording this information so that they reflect the context of a business, while work tasks are defined by considering how artifacts are processed [27,36,37,38]. Business artifacts are concrete, identifiable, self-describing parts of information, combined into a persistent structure, that can be used to actually run a business [27,39]. Artifact-centric business process modelling has proven to be quite effective in practice [38]. It substantially enhances communication between business operations stakeholders as compared with the communication enabled by traditional activity-flow-based approaches [33].
Aligning information systems and businesses at the strategic implementation stage (i.e., operational level) is reported to enhance organizational agility [40]. More specifically, reengineering business behaviors as business services, similar to the service-oriented architectural pattern in software design, is also reported to lead to flexibility and agility in organizations, as long as business knowledge is adequately modelled and interfaces between services are clearly defined [41]. Consequently, the first ground idea of this article is that business artifacts at the operational level of the business, if adequately modelled, represent an operational agility capability of a business.
As discussed above among service organizations’ particularities, ineffectively managed unstructured work and complex processes stand out as significant agility bottlenecks. In addition, approximately 60% of today’s work is unstructured, in the form of ad-hoc unique tasks whose outcomes require further interpretation and judgement. The more interactions involved, the more information is lost or interpreted differently. For this reason, the second ground idea of this article is that, to support operational agility, the focus on tasks and processes should be shifted from “conformity to a plan” to “building the result so that it fits other tasks”, i.e., the interfaces between business processes/tasks should be re-engineered so that they consist of/point to specific business artifacts.

2.2.2. Operational Vocabularies

Business users usually identify business artifacts as the key elements of the process; artifacts can be seen as a kind of vocabulary of the process, similar to the notion of domain ontology [42]. However, business processes and data models (the business artifact models) can evolve independently of each other [38]. Business artifacts may be modelled ex ante, so that changes in these elements can be consistently measured across organizations [43], but this means that the same elements are equally central in any organization [42], which is rarely the case. As such, the third ground idea of this article is that business artifacts can be derived from the particular operational information system of an organization (ideally modelled via a domain ontology). Having such an operational information system is an enabler for operational IT application orchestration, which is reported to have a positive impact on agility [44].

2.2.3. Artifact Modelling

Artifact-centric business process modelling focuses on key business-relevant records that evolve by progressing through the process. Business artifacts reflect both data and process aspects and are used as building blocks in a process model [27]. A business artifact is described by an information model and a lifecycle model that reflects all the possible evolutions of the artifact (in terms of activities or tasks acting on the artifact instance) [35,38,45]. Artifacts can be conceptualized as objects that are created and used in practice to facilitate work activities [46]; they are heterogeneous and dynamic. Artifacts should be mutable, i.e., able to accommodate informational variation after their creation [46]. Artifacts can also be referenced, by storing the identifier of another artifact [37]. This leads us to the fourth ground idea of the article: to model the business artifacts in the form of classes and objects (instances of classes), as in object-oriented programming. Process activities would be represented by calls of artifact “methods”. The quality of information conveyed through the artifacts thus becomes increasingly important, as collective tasks have to be supported as the process progresses [46].
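As a minimal illustration of this fourth ground idea, a business artifact can be sketched as a class whose properties form the information model and whose methods represent process activities acting on an instance over its lifecycle. All names below (e.g., TransportOrder) are hypothetical and are not taken from the case study:

```python
from dataclasses import dataclass, field

# Hypothetical artifact from a rail freight context: a transport order.
# Properties hold the artifact's information model; methods represent
# process activities that act on an artifact instance over its lifecycle.
@dataclass
class TransportOrder:
    order_id: str
    customer: str
    status: str = "created"                              # lifecycle state
    wagons: list = field(default_factory=list)
    related_orders: list = field(default_factory=list)   # references to other artifacts

    def assign_wagon(self, wagon_id: str) -> None:
        """Process activity: allocate rolling stock to this order."""
        self.wagons.append(wagon_id)

    def reference(self, other: "TransportOrder") -> None:
        """Artifacts can reference other artifacts by storing their identifier."""
        self.related_orders.append(other.order_id)

    def dispatch(self) -> None:
        """Process activity: advance the artifact's lifecycle state."""
        if not self.wagons:
            raise ValueError("cannot dispatch without assigned wagons")
        self.status = "dispatched"

order = TransportOrder(order_id="TO-001", customer="Acme Steel")
order.assign_wagon("W-17")
order.dispatch()
print(order.status)  # dispatched
```

The artifact is mutable (its state and contents evolve after creation) and referenceable (it stores identifiers of other artifacts), matching the properties discussed above.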

2.3. Improving Agility via Improving Artifact Quality

2.3.1. Assessing and Improving Operational Agility

To achieve agility, organizations need to effectively identify and manage their capabilities, and recognize the need for developing them to create new value propositions [42]. They can thus also iterate and learn faster, as this de-bottlenecks everything they do [3]. As such, a strategy that includes agility should highlight these underlying assets and align agility-related success metrics so that everybody in the team can make that strategy come alive. Metrics are particularly important, as understanding their relative importance helps to learn what matters to leadership.
While responding quickly and with innovative actions to changes in the business environment (i.e., being agile) impacts performance, research on the resources and capabilities that enable organizations to be agile is reported to be still nascent [4]. Measuring organizational agility is also reported as a research gap in the literature [21]. We, therefore, propose a practical and effective approach to the problem of improving operational agility, namely by measuring and increasing the quality of business artifact instances to specific target values.

2.3.2. Artifact Quality

Artifacts may enclose and convey information of different types, quantities, and quality [46] and are characterized by formative, informative, and performative aspects, to discern how stakeholders comprehend, interpret, and act upon them [47]. Different people will focus differently on some or all of these facets of artifacts [39]; nevertheless, these differences can be reduced by the quality of information an artifact encloses [46]. Information quality is also important due to the interdependent growth of tasks executed on artifacts [26].
Defining and assessing artifact quality is reported to be subject to issues such as artifact specificity and manual assessment challenges [48]. No feasible approach to conduct artifact quality assessments based on general quality models has been reported either [49], primarily due to different meanings that can be given to quality models depending on the specific domain scenarios. Current approaches identify artifact-specific metrics and discuss their usage for assessing artifacts based on user-defined attributes such as correctness, modularity, or completeness [48]. For these reasons, the fifth ground idea of our approach is that, regarding artifact quality, even if a common set of metrics is used for all artifact types, the importance of metrics in connection with each artifact type varies and should be considered relative to the business requirements.
Defining artifact quality metrics deals with information quality since, as discussed above, artifacts consist of information units. Assessing artifact quality means assessing the fitness for use of the information units it contains. As we propose modelling artifacts as classes and class instances, metrics should be considered to cover quality aspects of both artifact properties (the actual information) and artifact methods (the tasks, i.e., providing and building that information). Although the scientific literature currently lacks consensus with regard to information quality characteristics [50], several effective multi-dimensional models have been reported. The metrics we use (see Table 1) expand on the metrics discussed in [27,51].
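The fifth ground idea, i.e., a common metric set whose importance varies by artifact type, can be sketched as a weighted quality score per artifact instance. The metric names and weights below are illustrative assumptions, not the contents of Table 1:

```python
# A common set of information quality metrics is scored per artifact
# instance, but each artifact type weighs the metrics according to its
# business requirements (fifth ground idea). All values are illustrative.
METRICS = ["completeness", "accuracy", "timeliness", "consistency"]

# Per-artifact-type weights; each row should sum to 1.
WEIGHTS = {
    "transport_order": {"completeness": 0.4, "accuracy": 0.3,
                        "timeliness": 0.2, "consistency": 0.1},
    "invoice":         {"completeness": 0.2, "accuracy": 0.5,
                        "timeliness": 0.1, "consistency": 0.2},
}

def artifact_quality(artifact_type: str, scores: dict) -> float:
    """Weighted quality of one artifact instance; metric scores are in [0, 1]."""
    weights = WEIGHTS[artifact_type]
    return sum(weights[m] * scores.get(m, 0.0) for m in METRICS)

q = artifact_quality("transport_order",
                     {"completeness": 0.9, "accuracy": 0.8,
                      "timeliness": 0.5, "consistency": 1.0})
print(round(q, 2))  # 0.8
```

A low weighted score for an artifact type immediately points at the metrics (and thus the tasks) most worth improving for that type.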

2.3.3. Eliciting Improvement Initiatives

Process improvements based on process metrics positively impact activities but do not necessarily improve business artifacts; moreover, gaps may also exist between current process activities and a targeted process quality profile [52]. Improving process models in service organizations is also demanding, as their front-line employees may deviate from existing process models. As such, improving process models may not necessarily improve the organization’s overall performance (and agility). Instead, improving business artifact quality means improving operational capabilities, thus fostering operational agility (the first ground idea of this article).
To improve performance, functional departments within an organization often run “operational activities” in the form of improvement initiatives [53]. They often run such initiatives as projects, to make use of project management techniques for defining the scope of work, success criteria, and metrics to monitor project status. However, a commonly encountered issue here is deciding which improvement initiative to start next, and why [54,55]. This requires addressing questions related to the accuracy, cost, availability, or volatility of information required for decision making [15]. At the operational level, this information is contained by business artifacts, so the sixth ground idea of this work is that improvement initiatives should be prioritized based on their contribution to the quality of information (i.e., the business artifacts) that grounds the decision-making process. As the quality of a specific artifact (or artifact family) is described by present quality values for each metric, improvement initiatives should be elicited to increase these quality values to some desired target values.
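A minimal sketch of this prioritization logic, under illustrative data (artifact names, metrics, and initiative impacts are hypothetical): each candidate initiative is credited with the quality gap it closes toward the target values, and initiatives are ranked by total credited gain:

```python
# Sixth ground idea: prioritize improvement initiatives by their
# contribution to closing the gap between present and target quality
# values of the artifacts they affect. All data below are hypothetical.
def priority(initiative: dict, present: dict, target: dict) -> float:
    """Total quality gap the initiative closes across affected artifact metrics."""
    gain = 0.0
    for (artifact, metric), improvement in initiative["impact"].items():
        gap = max(0.0, target[(artifact, metric)] - present[(artifact, metric)])
        gain += min(improvement, gap)  # credit improvement only up to the target
    return gain

present = {("order", "completeness"): 0.6, ("order", "timeliness"): 0.9}
target  = {("order", "completeness"): 0.9, ("order", "timeliness"): 0.95}

initiatives = [
    {"name": "mandatory order fields", "impact": {("order", "completeness"): 0.25}},
    {"name": "status auto-refresh",    "impact": {("order", "timeliness"): 0.2}},
]

ranked = sorted(initiatives,
                key=lambda i: priority(i, present, target),
                reverse=True)
print([i["name"] for i in ranked])  # ['mandatory order fields', 'status auto-refresh']
```

The numerical target values make the ranking explicit, which is what supports managerial consensus on which initiative to start next.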
In organizations, team members might have different opinions of what initiatives to start, with different justifications for their selections [55]. Approaching improvement initiatives as artifact quality improvement projects may render initially different opinions more uniform. Managerial consensus in prioritizing improvement initiatives, which is crucial to their success, can thus be obtained, as numerical information quality target values can be more easily agreed upon. The initiation phase of an improvement initiative typically implies pinpointing the weakness(es) or bottlenecks that have to be addressed. Low “scores” (present values) for artifact quality metrics clearly point out such weaknesses. Project success criteria can also be supported by artifact quality metrics and target values.

2.4. Research Model

Building on the six ground ideas supported by the above theoretical background literature, we propose our conceptual model as shown in Figure 1. First, the operational business goals determine the operational model, similar to an operational vocabulary of the processes (an operational ontology), whose instances (the business artifacts) are the actual objects (information units) that are created and used in work activities. Second, the operational business artifacts represent the operational agility capability of a business; consequently, improving their quality will impact the operational agility. Third, as artifact instances should help stakeholders comprehend, interpret, and act upon them, they should have an adequate quality level, requested by operational business goals. Finally, internal improvement endeavors should be prioritized by how they impact artifacts with lower quality levels, as this will eventually improve operational agility.
The research question of this article is as follows: What practices and techniques can managers implement in order to improve operational agility capabilities? Rather general answers can be found to the question of how to build agility in a particular organization, as revealed by our literature review. Managers need, however, to know what capabilities to develop or improve, and what practices and techniques can be implemented in this respect.
To solve this problem, based on the conceptual model above, we propose an algorithm to elicit and prioritize agility improvement initiatives, by understanding agility-related assets through modelling operational business artifacts, determining agility bottlenecks through identifying quality issues in operational artifacts, and eliciting and prioritizing improvement initiatives to increase artifact quality.
The algorithm aims to be much more than a simple heuristic. It should be an effective operational agility improvement tool, in the form of a decision-support system, addressed to managers in service organizations.
The contribution of our work to the OR literature is thus threefold (both theoretical and practical):
(1)
It proposes a conceptual framework for understanding and increasing operational agility in service organizations.
(2)
It proposes an algorithm to elicit and prioritize agility improvement initiatives by identifying quality bottlenecks in operational business artifacts.
(3)
It presents a case study to demonstrate the effectiveness of the proposed algorithm.
Although the ground ideas, viewed independently, are not all necessarily new, as far as we are aware, their integration into a conceptual model has not yet been performed. To the best of our knowledge, no similar approach can be found in the OR literature. Nevertheless, to ensure we did not miss any similar research, we specifically searched for literature addressing topics such as (operational) agility improvement initiatives, operational agility capabilities (assets), business/operational agility artifacts, business artifact modelling, and business artifact quality (improvement).
We looked for frameworks or algorithms for (operational) agility improvement that provide concrete directions for improvement, rather than mere heuristics. The main database used for the search was ScienceDirect, due to its relatively large coverage. To round out the result set, we used snowballing and also considered the provided “recommended articles”. The relevant findings are discussed below.
In [48], the authors discuss artifact quality definition and assessment in software and model-driven engineering, arguing that artifacts should be developed and maintained considering specific quality attributes. Although related to two of our ground ideas, their research does not address agility.
While investigating the interaction between sustainability, agility, and their influence on operational performance in the automotive industry, ref. [56] established 52 agility-related abilities, grouped into seven categories, and five operational performance metrics. They concluded that the higher the level of agility implementation practices, the greater the improvements in operational performance metrics. However, managers need to find ways to properly build or improve such abilities.
In [42], the authors propose a framework for recognizing common strategies, activities, and paths to business model reconfiguration developed through the activation of a set of micro-capabilities for strategic agility. These capabilities are grouped into three classes; for each class, a proposition is advanced for achieving business model agility. Managers still need ways to adapt these guidelines to their specific organizational context.
Focusing on the human dynamics of agility, ref. [57] provides three sets of interrelated actions that executives and key decision-makers can use to guide their organizations through turbulence: Identify one’s VUCA (volatility, uncertainty, complexity, and ambiguity), define obstacles to agility, and implement agility-enhancing practices. They also provide two tools: A VUCA audit and a checklist for enhancing agility. While to some extent akin to our approach (understand the context, identify agility bottlenecks, and improve on those bottlenecks), their agility improvement guidance relies rather on heuristics.
Addressing what drives agility in advanced economies, ref. [21] argues that organizations’ agility is a function of the country/industry level of digitization and investments in intangible assets. The main agility determinants, in the case of family and non-family enterprises, are identified. The results support managers in setting and promoting investment policies to increase agility. Investment prioritization is, however, based on high-level information, and not so much on the particular context of an organization.
In [41], the authors present principles for creating flexibility and agility when implementing new or revised policies into business processes. They rely on formulating business processes using business services, event-driven integration and orchestration, separation of processes, knowledge, and resources, and reuse by collaborative policy implementation. Their aim for increasing process agility is to better support the implementation of policies.
In [27], the authors propose building user interface flow models to help visualize artifact-centric processes and assist in the creation of user interfaces. To create the model, the framework considers the relationships among business processes, user interfaces, and user roles in an artifact-centric process model. The role of artifact-centric (operational) business process modelling in attaining natural modularity and componentization of business operations is also highlighted. Although their research aim is not related to our approach, the insights are interesting, as modularity and componentization of business operations may impact organizational agility.
A framework for modelling and analyzing information quality requirements for socio-technical systems is discussed in [47]. It takes into account both technical issues and social and organizational aspects of information quality. It implies analyzing information based on seven quality dimensions: Accessibility, accuracy, believability, trustworthiness, completeness, timeliness, and consistency. A business artifact-based IS can be a socio-technical system; however, the framework is not related to agility.
Few of these results provide a framework or an algorithm to identify and improve the particular agility-related capabilities of an organization. Most approaches are quite general, although they provide thorough insights on agility-related capabilities, practices, obstacles, or (agility-related) information quality evaluation. Yet, management teams need an effective and efficient tool to elicit agility improvement initiatives in the context of their unique organizational environment.

3. Materials and Methods

Starting from the literature review and the proposed research model, the present work aims to answer the research question of what practices and techniques managers can implement in order to improve operational agility capabilities. We, therefore, propose an algorithm to elicit and prioritize agility improvement initiatives. In this section, we discuss the algorithm design, its steps, and considerations regarding its evaluation.

3.1. Research Methodology

The algorithm we propose to elicit and prioritize agility-related improvement initiatives within an organization relies on operational business artifacts. It can thus be seen as a series of steps to be run on top of an informational system, which handles information of variable quality. For this reason, we followed a Design Science Research Methodology for information systems research [58], addressing: (a) “What is the problem?”, (b) “How should the problem be solved?”, (c) “Creating an artifact that solves the problem”, (d) “Demonstration of the use of the artifact”, and (e) “How well does the artifact work?”. We introduced the problem in the first section. We then reviewed the literature and presented the ground ideas and the research model for solving the problem (the second section). The “artifact” to solve the problem is the algorithm that we detail in this section. We will demonstrate its use via a longitudinal case study within a service-providing organization in the rail freight industry, in the next section. In the last section we discuss the algorithm’s relevance for managers and aspects regarding its applicability.
In designing the algorithm, we also followed several guidelines for Design Science in Information Systems Research presented in [59]. The first guideline refers to the DSR outcome, which should be a viable artifact, in the form of a construct, a model, a method, or an instantiation. We, therefore, elaborated a theoretical construct by reviewing the existing related literature (Figure 1 in Section 2.4). Based on this construct, we then proposed the algorithm for eliciting and prioritizing agility-related improvement initiatives; it is a design artifact in the form of an intellectual tool. The guideline mentions demonstrating the feasibility of the DSR artifact by its instantiation; we did this by running a case study in a service-providing organization (Section 4).
The second guideline refers to the importance and relevance of business problems solved by a design artifact to a constituent community comprising practitioners who plan, manage, design, implement, operate, and evaluate IS systems. Relevance to this community means addressing problems faced by the interaction of people, organizations, and information technology. This will be addressed in detail in Section 5.
The third guideline recommends evaluating the utility, quality, and efficacy of the design artifact, with respect to the business environment. We did this by using an observational approach, namely by running a case study within a service-providing organization in the rail freight industry. The case study will be presented in Section 4.
The fourth guideline is to provide clear contributions in the area of the design artifact, design construction knowledge (i.e., foundations), and/or design evaluation knowledge. In this respect, the contributions of our work are the conceptual framework, the algorithm itself, and the results of applying it (the case study).
The fifth guideline we followed addresses the research rigor in both the construction and the evaluation of the designed artifact. It states that rigor must be also assessed with respect to the applicability and generalizability of the design artifact. We, therefore, based our algorithm on a conceptual framework constructed after a thorough literature review. Its applicability and generalizability will be analyzed in Section 5.
We also based our algorithm on the contingency theory, aiming to identify and also prioritize improvement initiatives that allow the organization to develop its operational agility capabilities [60,61]. The algorithm furthermore relies on the formalism of the QFD method by correlating operational business requirements with the business artifacts, to verify their relevance. In weighting improvement initiatives, it relies on a systematic multi-criteria decision analysis method (TOPSIS [62]); weighting criteria reflect both efficiency and effectiveness aspects of improvement initiatives.

3.2. The Algorithm

The outcome of the algorithm is a set of weighted improvement project proposals that act on operational business artifacts and thus contribute to the organization’s operational agility (ground idea 1). The weight of each improvement initiative represents its perceived implementation efficiency and effectiveness. The improvement initiative set is determined in six steps.
Step 1. Define operational artifacts if they have not yet been defined. Use any existing established artifact design approach, as long as you start from the particular operational information system of the organization (ground idea 3). Model artifacts as software objects, using properties, methods, and events (ground idea 3). The actual artifacts will be instances of model artifacts (artifact classes). Describe the artifact classes. Use the following rules for defining artifacts: (1) An artifact is a concrete, identifiable, self-describing chunk of information that can be used to actually run the business [33], (2) each artifact is unique, (3) each artifact is an instance of a specific “artifact class”, and (4) an artifact may contain links to other artifacts, such as software object properties pointing to other objects. Design artifact class methods considering the autonomy of tasks, i.e., design each method in relative isolation: A method should generally only process the information it receives as input parameters.
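As an illustration of the modelling style in Step 1, the sketch below shows a hypothetical artifact class written as a software object with properties, methods, and links to other artifacts; all class, field, and method names are our own illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Hypothetical artifact class sketched as a software object (Step 1).
# All field and method names are illustrative assumptions.
@dataclass
class TrainStaffArtifact:
    staff_id: str                                       # each artifact is unique (rule 2)
    name: str = ""
    authorizations: list = field(default_factory=list)  # e.g., driving licence types
    journey: list = field(default_factory=list)         # travel/state-change log
    train_ids: list = field(default_factory=list)       # links to other artifacts (rule 4)

    # Autonomy of tasks: the method uses only its input parameters
    # and the artifact's own state.
    def is_authorized(self, licence_type: str) -> bool:
        return licence_type in self.authorizations

staff = TrainStaffArtifact("TS-001", "A. Pop", authorizations=["BR-185"])
```

Actual artifacts are then instances of such a class, and reports or process interfaces can be assembled by combining instances.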
Step 2. Rebuild the interfaces between processes/tasks so that they consist of “pointers” of business artifact properties and/or methods. Thus, the focus on tasks and processes is shifted from “conformity to a plan” to “building the result so that it fits other tasks” (ground idea 2).
Step 3. Set artifact quality metrics for each property (information unit) of an artifact. Set quality metrics also for each method (i.e., (process) step acting on an information unit). For each artifact class, assess the artifact quality metrics relevance to the business requirements (ground idea 5). Perform a correlation analysis, using a correlation matrix, between the artifact class metrics and the business requirements, to check the completeness and relevance of the metric set. Each matrix line should contain at least one strong correlation level (meaning that at least one quality aspect of the artifact addresses the business requirement on that line). Each matrix column should also contain at least one strong correlation level (meaning that the metric on the column is relevant to at least one business requirement). If any line or column in the matrix lacks at least one strong correlation, step 3 should be reconsidered.
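The row/column completeness check in Step 3 can be automated. The sketch below assumes the 9/3/1 correlation scale used later in the case study and derives relative metric weights as normalized weighted column sums; function names are illustrative.

```python
# Completeness check for the Step 3 correlation matrix: every business
# requirement (row) and every quality metric (column) needs at least one
# strong correlation. Scale assumed from the case study: 9/3/1.
STRONG = 9

def matrix_is_complete(matrix):
    """matrix[i][j]: correlation between requirement i and metric j."""
    rows_ok = all(any(cell == STRONG for cell in row) for row in matrix)
    cols_ok = all(any(row[j] == STRONG for row in matrix)
                  for j in range(len(matrix[0])))
    return rows_ok and cols_ok

def metric_weights(matrix, req_weights):
    """Relative metric weights as normalized weighted column sums."""
    totals = [sum(w * row[j] for w, row in zip(req_weights, matrix))
              for j in range(len(matrix[0]))]
    total = sum(totals)
    return [t / total for t in totals]
```

If `matrix_is_complete` returns false, step 3 should be reconsidered, as prescribed above.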
Step 4. Set performance targets for artifact quality. Identify the current performance level of artifacts by determining present values for metrics. Identify the current performance level of process operations by determining present values for method performance metrics. Subsequently, set target values, within a required time frame, for each metric. A rule of thumb is as follows: The higher the metric weight (computed in step 3), the higher the target value. However, the target values should be set in line with the organization’s strategy.
Step 5. Identify improvement initiatives to impact artifact quality so that the target values can be reached within the imposed time frame (ground idea 6). Calculate a “perceived impact” value by summing, for each impacted artifact quality metric, the expected improvement multiplied by the metric’s weight. Further describe each initiative by investment cost, man-hours, time span, technical difficulty, and organizational difficulty. Put forward improvement initiatives respecting Goodhart’s law (i.e., do not transform metrics into targets) [63].
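A minimal sketch of the “perceived impact” computation described above, assuming quality levels on a 1–9 scale and relative metric weights taken from the step 3 correlation matrix; the metric names and numbers are illustrative.

```python
# Sketch of the "perceived impact" value (Step 5): for each impacted quality
# metric, the expected improvement (target - present, on the 1-9 scale) is
# multiplied by the metric's relative weight, then summed.
def perceived_impact(impacted_metrics, weights):
    """impacted_metrics: {metric: (present, target)}; weights: {metric: weight}."""
    return sum((target - present) * weights[m]
               for m, (present, target) in impacted_metrics.items())

# Illustrative values
impact = perceived_impact({"accuracy": (6, 8), "information_age": (2, 8)},
                          {"accuracy": 0.084, "information_age": 0.05})
```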
Step 6. Prioritize the improvement initiatives based on the criteria from step 5 using an adequate multi-criteria decision analysis method (e.g., TOPSIS).
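For readers without access to an online tool, the standard TOPSIS procedure for this step can be sketched as follows (vector normalization, weighting, distances to the ideal and anti-ideal solutions); this is a generic textbook implementation, not the specific tool used in the case study.

```python
import math

# Generic TOPSIS sketch (Step 6): vector-normalize each criterion column,
# apply the criterion weights, then score each alternative by its relative
# closeness to the ideal solution. benefit[j] is True when a higher value is
# better (e.g., perceived impact) and False for cost-type criteria
# (e.g., investment cost).
def topsis(alternatives, weights, benefit):
    n = len(weights)
    norms = [math.sqrt(sum(a[j] ** 2 for a in alternatives)) for j in range(n)]
    v = [[weights[j] * a[j] / norms[j] for j in range(n)] for a in alternatives]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

The resulting closeness scores, between 0 and 1, induce the priority order of the improvement initiatives.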

4. Case Study in a Service Organization

The proposed algorithm was applied in an SME operating in the rail freight industry. The company aims to boost its operational agility capabilities by identifying and implementing several improvement initiatives that should impact its key business artifacts. As in many other service organizations, they face information quality issues related to accuracy, volatility, granularity, or completeness. To remain competitive, they need to improve their efficiency (time and effort) in managing information produced or consumed by their processes. They specifically report unsatisfactory process performance, and consequently unsatisfactory business results, due to the aforementioned quality issues that affect the information exchanged by processes. This section discusses the application of each algorithm step, highlighting the obtained results.

4.1. Context

The company provides services to customers on a project-oriented basis, i.e., each transport request is managed as an individual transport project. It has a specific life cycle: Receiving the request from a client, deciding if they are (technically and financially) able to accept it, sending the offer, signing a contract, allocating resources (people, trains), performing the transport, and analyzing project performance.
Three processes, for which the company has been reporting several agility-related issues, were chosen as a starting point for applying the algorithm: (p1) Customer requirement analysis, (p2) determining the feasibility of a new transport request, and (p3) allocating adequate human resource for the transport.
Process (p1), led by the customer relations department, translates the incoming transport requests into customer requirements and calls process (p2) to determine the feasibility of the potential transport. It then estimates the economic feasibility of the potential transport, constructs an economical and technical offer, and transmits it to the customer.
Process (p2), led by the technical department, estimates—at the time of issuing the commercial offer—if the company is capable of performing the requested transport service within the specified time frame. The availability of rolling stock and human resources is estimated, the latter by calling process (p3), and the effort of providing these resources is also estimated. The information is then provided to process (p1) via several documents.
Process (p3), also led by the technical department, handles the human resource required for a transport activity within a project. The train crew should be allocated based on their proximity to the route start point or end point, their technical capability to drive the train on that route, and their post-driving rest period.
The company reported several agility-related issues for these processes. For (p1), the most critical issue is the increased number of offer requests from potentially unreliable customers. Some requests (approximately 15%) come from competitors (using intermediary companies) for capability testing purposes, while other requests (approximately 20%) are used as “alternatives” in bureaucratic procurements. As the company cannot reliably determine the purpose of the request, much time and effort is wasted to produce accurate responses (offers) for all requests. Currently, approximately 25% of requests produce a transport project. Of these, 80% are one-time transport requests.
For (p2), the most important reported issue is estimating the correct future transport capability, because several concurrent transport requests may exist within a future time interval. Train delays, especially on long journeys, and last-minute rolling stock issues also add up to this issue.
For (p3), the main reported issue is the train crew availability, especially when overtime is requested. As each train driver is constrained to a specific driving time a day, traffic delays caused by network problems determine additional train personnel usage. This frequently alters the organization’s transport project processing capability.

4.2. Running the Algorithm

Step 1 consisted of defining business artifact models (“artifact classes”). To do so, the operational vocabulary of the organization was reviewed. Their specific approach was to associate each operation with the costs it implies (to reflect its “value”); as such, all operational costs related to transport projects were decomposed in a “cost tree” with a manageable granularity level, avoiding shared or indirect costs at the project level. Artifact classes were then determined based on each cost item “parent” in the tree. Figure 2 shows the cost tree and Figure 3 shows the defined artifact classes.
Five core operational artifact classes were set up: TProject (a commercial project), TTrain (an individual transport from point A to point B using one train—either loaded with freight, empty carriages, or single locomotives), TTrainStaff (describing one employee), TPathSegment (describing the railway track between two adjacent railway stations—a train path can be built up using an array of instances of TPathSegment), and TRollingStockUnit (describing a railway vehicle such as a carriage or locomotive). There is also a TProjectManager class (an array of transport projects) to aggregate all commercial projects. Any report or communication exchanged by employees should be capable of being modelled by combining instances of the above classes.
The TTrainStaff artifact class is further explained (we limit our description to this example for the sake of brevity). The properties of the TTrainStaff class consist of personal data (such as name, id, age, and home address), driving skills descriptors (confidential information allowing dispatchers to select adequate train crew for special transports), authorizations (an array containing the driving license types for the person and their validity period), health clearances (an array containing all the regular medical examinations for that person), journey (an array of travels or state changes for that person—see the description below), and the authorization plan (an array containing the calendar for driving license renewals).
The journey array is exemplified in Table 2 and logs travel or state changes related to travel for a person. The Id column represents the array index. The Time column marks the beginning of travel or the time of a state change.
Methods were also defined for the TTrainStaff artifact class, such as getPositionOn(a_time), which returns the probable location of the person at the time a_time, or getDrivingHours(a_time), which returns the total driving hours for that person within the current month. These methods represent tasks that are currently manually performed by a dispatcher of the company.
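As an illustration, the getDrivingHours(a_time) method could be sketched as below; the (state, start time, end time) log format is our assumption, since the paper describes the journey array only at a high level.

```python
from datetime import datetime

# Hypothetical sketch of getDrivingHours(): sums the driving intervals logged
# in the journey array for the month containing a_time. The (state, start,
# end) entry format is an illustrative assumption.
def get_driving_hours(journey, a_time):
    month = (a_time.year, a_time.month)
    return sum((end - start).total_seconds() / 3600
               for state, start, end in journey
               if state == "driving" and (start.year, start.month) == month)

# Illustrative journey log: 2 h of driving in January, 3 h in February
journey_log = [
    ("driving", datetime(2023, 1, 5, 8, 0), datetime(2023, 1, 5, 10, 0)),
    ("resting", datetime(2023, 1, 5, 10, 0), datetime(2023, 1, 5, 15, 0)),
    ("driving", datetime(2023, 2, 1, 8, 0), datetime(2023, 2, 1, 11, 0)),
]
jan_hours = get_driving_hours(journey_log, datetime(2023, 1, 20))
```

Automating such a method would replace the manual computation currently performed by a dispatcher.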
Step 2 consisted of rebuilding interfaces between processes/tasks so that they consist of “pointers” of business artifact properties and/or methods. Process p1 (customer requirement analysis) calls process p2 (determining the feasibility of a new transport request), and process p2 calls process p3 (allocating adequate human resources for the transport). The interface between p1 and p2 therefore becomes:
{ ProjectManager.accessProject(id).estimateCosts; }
which is a method to sum up costs associated with necessary project resources, as human resources and rolling stock should be already allocated (with some probability/availability). Likewise, the interface between p2 and p3 becomes:
{ [ProjectManager.accessProject(id, mycredentials).Train[1..i].setupTrain;] }
// for all trains in the Train[] array; setupTrain is a method that allocates the adequate crew for the train
The information to be exchanged via the interface between p1 and p2 originally consisted of a report generated in the technical department and addressed to the sales director, usually via email, and without a strict format. The interface between p2 and p3 consisted of a spreadsheet template and was internally managed by the technical department.
Step 3 consisted of setting up artifact quality metrics and reviewing and ranking operational business requirements. The artifact quality metrics used are those discussed in Section 2.3. For each artifact class, this step further implied checking the artifact quality metrics’ relevance to business requirements. As an example, the correlation matrix corresponding to TProject artifacts is presented in Figure 4. Its last column contains the relative weights for the business requirements. Strong correlations were represented by ‘9’, average correlations by ‘3’, and weak correlations by ‘1’. A relative weight was also computed for each quality metric, based on the weighted sum on each matrix column. Because there is at least one strong correlation on each matrix line and column, the quality metrics for this artifact type are considered to be relevant to the operational business requirements.
The values in the correlation matrix were determined in a face-to-face meeting between the authors, the sales director, and the head of the technical department.
Step 4 consisted of identifying the current performance level and setting the performance targets for artifact quality. We discuss the performance level for the TTrainStaff-type artifacts here. The present values and target values for each metric are summarized in Figure 5. A scale of 1–9 was used in each cell, with 1 denoting the lowest quality level and 9 the highest quality level, as all quality metrics were formulated to be monotonic. For instance, the current accuracy of “skills” is 6, meaning that skills are “somewhat” correctly, but not precisely, assessed. As accuracy is highly significant (8.4% relative weight), the management requested a target value of 8 (i.e., personnel skills should be precisely assessed). The current information age of “seriousness” is 2; the management requests improving it to 8 (it should always be up-to-date). “Seriousness” means here the extent to which the company can rely on that employee’s declared availability (as employees do not have a regular 8 h/day job). The throughput of the getDrivingHours() method is currently 4 (meaning there is a significant effort in providing an accurate value of the number of hours a train driver spent driving the train within the current month); the management wants to significantly decrease this effort (the target value is 9).
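The current/target pairs above can be combined with the metric weights to flag where improvement is most needed. In the sketch below, only the 8.4% accuracy weight comes from the case study; the other weights and the helper itself are illustrative assumptions.

```python
# Ranks quality metrics by weighted gap (target - current) * weight, on the
# 1-9 scale, to flag where improvement is most needed. Only the 8.4% accuracy
# weight comes from the case study; the other weights are illustrative.
def weighted_gaps(levels, weights):
    """levels: {metric: (current, target)}; returns metrics sorted by gap."""
    gaps = {m: (target - current) * weights[m]
            for m, (current, target) in levels.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

ranking = weighted_gaps(
    {"skills accuracy": (6, 8),
     "seriousness information age": (2, 8),
     "getDrivingHours throughput": (4, 9)},
    {"skills accuracy": 0.084,
     "seriousness information age": 0.06,
     "getDrivingHours throughput": 0.05})
```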
The current level of artifact quality was determined by the authors using informal, conversational interviews with six employees: Two from the technical department, three from among the train crews, and the general manager. The target values were decided by the general manager and the sales director.
Step 5 consisted of eliciting agility improvement initiatives, which will impact the artifact quality so that the target values set in Step 4 can be reached. A mixed team consisting of the authors and company representatives proposed a set of 22 such initiatives to tackle those metrics whose present values were low or very low.
For brevity, here we discuss the improvement initiatives proposed for improving the TTrainStaff-type artifact quality. Table 3 describes their goal, the artifact quality metrics to be improved within a specific time horizon (and their target values), and the initiative characteristics (investment cost, duration, effort, technical and organizational difficulty, and perceived impact). The perceived impact was calculated by summing, for each impacted artifact quality metric, the expected improvement multiplied by the metric’s weight. The values for the rest of the initiative characteristics were estimated by a team made up of the sales director, the technical manager, and the general manager.
The time horizon indicates the period until the initiative results impact the corresponding quality metrics. Difficulty is estimated on a 1–10 scale. All proposed initiatives respect Goodhart’s law, i.e., none of them makes targets out of metrics.
Step 6 consisted of prioritizing the improvement initiatives using a freely available online implementation of the TOPSIS method (https://decision-radar.com/Topsis.html (accessed on 8 April 2023)—see Figure 6).
The improvement initiative prioritization criteria weights used in TOPSIS were initially determined by the authors using the AHP method and then discussed and adjusted with the company representatives. The obtained normalized weights were as follows: Time horizon: 0.12; investment cost: 0.15; duration: 0.10; effort: 0.15; technical difficulty: 0.12; organizational difficulty: 0.08; and perceived impact: 0.28. As expected, the perceived impact was considered the most important criterion.
The TOPSIS analysis determined the following scores for the initiatives: 0.824 for (d13), 0.589 for (d14), 0.576 for (d9), 0.446 for (d20), and 0.313 for (d8). In the entire list of 22 initiatives, they ranked #2, #8, #9, #13, and #18. At the time of submitting this research, the first four initiatives have been implemented as internal development projects and reported by the company to be on track.

4.3. Implications for Practice

4.3.1. Remarks on Operational Artifacts Definition and Modelling

In practice, business artifacts may have a large variety of class design solutions, but this does not impact how the algorithm would be applied. It may be true that poorly designed artifacts may lead to inefficient and ineffective improvement initiatives, as artifact modelling “correctness” cannot be guaranteed. Furthermore, our research does not focus on checking such “correctness”. However, having a “fair enough” business artifact model, in terms of efficiency and effectiveness of processing information, should suffice. Any “general-purpose” artifact design approach already covered by the literature can be used in the first algorithm step (for instance [33]), as long as the artifacts are derived from the particular operational information system of an organization (ground idea 3), the rules of thumb proposed in Step 1 of the algorithm are applied, and the interfaces between processes/tasks are rebuilt so that they consist of “pointers” of business artifact properties and/or methods (ground idea 2). In doing so—starting from the operational information system and shifting focus from “conformity to a plan” to “building the result so that it fits other tasks”—we avoid the elicitation of (just) general performance-enhancing initiatives, and target operational agility.
Another issue that could be raised is whether improvement initiatives would trigger changes in the artifact model, which would make the algorithm inefficient. Improvement initiatives, in our approach, should impact specific artifact quality levels and not change the artifact itself. For instance, in the case of the getDrivingHours() method discussed in the case study, improving its availability or reliability level would not change the artifact model. In fact, the organization operations (which are artifact class methods) do not change; their effectiveness and/or efficiency should change. Although theoretically possible, we consider model changes caused by improvement initiatives to be rare. In such a case, that “improvement” would rather be a process re-engineering endeavor, such as implementing an ERP. Indeed, artifacts may be completely changed in this case, but such improvement projects are rather specific to strategic agility improvement approaches.

4.3.2. Remarks on Running the Algorithm

Planning the artifact quality in step 4 consisted of setting target quality values for each metric. These target values actually drove the general objective of the improvement initiatives proposed in step 5, as this is the eventual result of running the algorithm. The target values should be reached within the time horizon estimated for each initiative, which does not coincide with its duration. The scope of an initiative refers to what needs to be achieved at the end of its implementation period; these results are the ones that actually impact the artifact quality.
Priority initiatives should significantly impact artifact quality but should also be easily implemented. As such, in Step 5, we also made use of criteria specific to project management.
In Step 6, we used the TOPSIS method to calculate scores to prioritize improvement initiatives. As the criteria (set in step 5) are interrelated to some extent, with one partially influencing another, we did not opt for widespread methods such as AHP for project ranking [64]. We did use AHP, however, to initially rank the criteria, as it was considered adequate given the short criteria list. We then analyzed these initial rankings with the company representatives and adjusted the final weights to also fit their perspective; these final weights were then used in the TOPSIS method.

4.3.3. Remarks on Algorithm Results

The main result of running the algorithm is the 22 weighted improvement initiatives. If successfully implemented, they should directly contribute to the organization’s operational agility by determining superior artifact quality and thus enabling better decision-making. Another important result is the operational business artifacts and the target value list for the artifact quality metrics. This allows the organization to set and agree on clear operational performance goals.
The main benefit of applying the algorithm and implementing agility-related improvement initiatives is the increase in operational agility. Currently, the company has to reject up to five “one-time” transport requests each month due to shortages in rolling stock and train crew; with increased agile capabilities, i.e., superior resource management, such rejected transport requests should drop to a maximum of 2–3 per month. This is important, as, at this time, over 80% of commercial projects consist of “one-time” transports.
The company estimates an increase in capacity utilization for its rolling stock, not only by implementing the project (d8) rolling stock sharing but also by implementing project (d13) resource planning software platform. The latter will also make scheduled maintenance operations “visible” to anyone by building a complete “booking” calendar for each car and locomotive. In this way, rolling stock can be properly “reserved” for future transport projects. The company targets over 80% capacity utilization for cars (it currently averages under 70%) and over 85% for locomotives (currently under 80%). This will maximize revenue, especially for rented rolling stock, which represents, over a year, between 30 and 50% of the total rolling stock.
By implementing project (d9) train crew sharing, and by considering the current personnel shortage, the company estimates an increase in transport capacity by 4–6 transports each month. At the same time, for periods with fewer transport requests, some of the train crew may be detached from other partners, reducing wage costs by up to 15% (estimated by the company).
A side benefit of applying the algorithm emerged after deciding improvement initiatives in step 5 of the algorithm. Although divergent opinions were raised while analyzing the potential projects, the company representatives in the team, including the general manager, reached a surprisingly high consensus level, which is quite uncommon in the company. Consensus meant, in this case, the “agreement to support the 22 initiatives” rather than “agreeing that this initiative portfolio is the best”. We argue that the high consensus level was determined by describing initiative benefits as quality improvements of business artifact properties and methods, which are actually operational data perceived as highly important by the company representatives.

4.3.4. Additional Remarks

Although we consider the algorithm application to be straightforward, a few additional implications for practice are discussed here, namely technical issues and the effort to apply the algorithm.
Regarding technical issues, arguably the most technically challenging step is the first one, which involves operational business artifact definition. It requires superior business analysis skills, but consultancy services on business analysis are widely available if needed. The successful identification of business artifacts and the reconsideration of business process interfaces so that they point to specific business artifacts enables process modularity and componentization, which is also an agility enabler. The other algorithm steps should not raise technical issues, as the tools and formalism used in the algorithm should be familiar to project managers and quality personnel.
The algorithm application effort, in the case of our service-providing company operating in the rail freight industry, was approximately 2.5 person-months. A rerun of the algorithm is estimated at 0.5 person-months, without the first step (as artifacts are already defined).

5. Discussions

5.1. Key Findings

As rather general answers can be found to the problem of building agility in a particular organization [21,41,42,56,57], we aimed to answer the research question of what practices and techniques managers can implement in order to improve operational agility capabilities. Although previous research provides relevant insights into agility-related capabilities, practices, obstacles, or (agility-related) information quality evaluation [27,47,48], management teams still need effective tools to elicit agility improvement initiatives, in the context of their unique organizational environment.
Based on the support of existing research related to artifact-centric approaches to operational agility [33,40,41,42,44,46,47] and business artifact quality [26,42,46,48,49], we grounded our conceptual framework (see Figure 1). As per this framework, the operational business goals determine the operational model, similar to an operational vocabulary of the processes. Its instances, the business artifacts, are the actual objects (information units) that are created and used in work activities. The operational business artifacts represent the operational agility capability of a business, and improving their quality will impact the operational agility. Internal improvement initiatives should be proposed and prioritized by how they impact artifacts with lower quality levels, as this will eventually improve operational agility.
Our results add to the existing literature by (1) proposing a conceptual framework for understanding and increasing operational agility in service organizations, (2) proposing an algorithm to elicit and prioritize agility improvement initiatives by identifying quality bottlenecks in operational business artifacts, and (3) presenting a case study to demonstrate the effectiveness of the proposed algorithm.
The case study showed how the proposed algorithm was successfully used in a service organization to elicit 22 weighted improvement initiatives related to operational agility. Starting from the “cost tree” of their commercial projects, they modelled their operational artifacts in five core classes and then assessed the quality levels of their artifact instances. Focusing on three processes for which several agility-related issues existed, they proposed improvement initiatives to impact the quality of two specific artifacts. When prioritizing them, apart from the impact, they also considered the investment cost, duration, effort, and technical and organizational difficulty related to each initiative.
Via the case study, the algorithm provided a concrete answer to the research question of what practices and techniques managers can implement in order to improve operational agility capabilities. It also seeks to fill the gap in the existing literature regarding concrete means of increasing operational agility. It is also a response to the call in [65] for agility and flexibility scholars to focus more on the service sector.
In the remainder of this section, by following the guidelines for Design Science in Information Systems Research [59], we will further discuss (a) the relevance for practitioners, focusing on the algorithm, (b) applicability and generalizability, focusing on the case study, and (c) relevance for scholars, focusing on the conceptual framework.

5.2. Relevance for Practitioners

The intensity of awareness and research on agility, with all its facets [1,19,20,21], stresses its relevance. Organizations have to move faster than the market around them, both in terms of decision-making and capability development over time [2], and managers need to understand what makes their companies agile, how agile they actually are, and how they can improve agility. As such, this article’s research question may be even more relevant in the case of service organizations. The proposed algorithm implies defining operational artifacts, starting from the particular operational information system of the organization (step 1), thus supporting managers in understanding their agility-related assets (ground ideas 1–3). Managers can also use performance targets for artifact quality (step 4) as agility-related success metrics.
Relevance to a particular community also means addressing problems arising from the interaction of people and organizations [59]. Regarding agility improvement endeavors, team members might hold different opinions on which initiatives to start, with different justifications for their selections [55]. Approaching improvement initiatives as artifact quality improvement projects (step 5) may align these opinions: numerical information quality target values can be agreed upon relatively easily, so managerial consensus in prioritizing improvement initiatives, which is crucial to their success, can be obtained. In light of the first and third ground ideas of this article, potentially different views on agility capabilities may also become aligned. Disagreement on the operational agility capabilities of the organization may fade if these capabilities are associated with business artifacts at the operational level of the business, derived from the particular operational information system of the organization (ideally modelled via a domain ontology).

5.3. Applicability and Generalizability

Applicability means the usefulness of the algorithm for the particular task of improving operational agility capabilities. The algorithm should be suited to the particular business environment of the organization using it, so one applicability aspect is its integration within the technical infrastructure of the business environment. In its current form, the proposed algorithm is an intellectual tool that is suited to any technical infrastructure providing basic communication and information-sharing features.
Another applicability aspect is the practical worth of the algorithm results. The main output of the algorithm is a list of improvement initiatives, prioritized by their impact on operational business artifact quality. The calculated improvement initiative scores also reflect investment cost, man-hours, time span, technical difficulty, and organizational difficulty; as such, the management team can easily transform the top improvement initiatives into internal improvement projects for increasing agility. As each improvement initiative aims at improving one or more quality aspects of artifacts to reach a precise target value, the management team can easily monitor the success of the improvement projects.
Applying the algorithm itself within the organization, to determine improvement initiatives, can also be managed as an internal improvement project with a clearly defined scope and steps. A time horizon can be estimated after the first run (e.g., 3–4 weeks for the organization in our case study).
Functionality is another applicability aspect. The algorithm consists of several functions that reflect the ground ideas discussed in Section 2: reviewing the operational information system (the operational vocabulary) of the organization, defining or reassessing operational artifacts, modelling artifacts as software classes/objects, rebuilding interfaces between tasks, determining present values and setting target values for business artifact quality, and eliciting and prioritizing improvement initiatives to reach the set target values. All the functions were run within the case study with only minor issues, such as differing perceptions among team members of artifact quality levels.
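As an illustration of the “model artifacts as software classes/objects” function, a minimal Python sketch of an operational artifact class is given below. It is not the company’s implementation: the property and method names (Journey[], getDrivingHours()) are borrowed from the TTrainStaff artifact discussed in the case study, while the field types, the probability-weighting rule, and the 8 h default are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Journey:
    """One planned or completed journey of a staff member (cf. Table 2)."""
    start_point: str
    journey_kind: str
    probability: float  # chance the journey is needed within the project
    state: str          # e.g., "available", "driving", "inbound_travel"

@dataclass
class TTrainStaff:
    """Operational artifact class for train staff (names from the case study)."""
    name: str
    journeys: List[Journey] = field(default_factory=list)  # the Journey[] property

    def get_driving_hours(self, hours_per_journey: float = 8.0) -> float:
        # getDrivingHours(): expected driving effort, weighting each planned
        # "driving the train" journey by the probability it is actually needed
        # (the weighting rule and default hours are illustrative assumptions)
        return sum(j.probability * hours_per_journey
                   for j in self.journeys
                   if j.journey_kind == "driving the train")
```

The quality metrics of Table 1 (age, accuracy, throughput, etc.) would then be attached to each property and method of such a class in steps 4–5.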
Data and algorithm output accuracy are also applicability aspects. Regarding data accuracy, the modelled artifacts, which are themselves an output of the algorithm, were empirically validated by checking that any interface between projects, and virtually any report requested by management, can be represented by a structure consisting of pointers to artifact class instances (or to properties/methods of such instances). Furthermore, while any algorithm whose input data derive from individuals and teams rather than (verified) data streams is subject to the “garbage in, garbage out” rule, we used clear metric definitions and a simple mathematical formalism for analyzing data, as discussed in Section 3.2, in order to improve (or at least maintain) data accuracy.
Regarding output accuracy, the main output of the algorithm is the list of weighted improvement initiatives. To check that the initiative list is accurate, or “correct”, a “reference value” is needed. To obtain it, we set up informal conversational interviews with the general manager and two other key persons from the technical department, right after presenting them with the initiative list. None of them had been involved in completing steps 4–6 of the framework. The goal of the interviews was (a) to gauge the desirability of implementing the proposed improvement initiatives and (b) to check whether the computed weights were consistent with their perception of how the initiatives should be prioritized. The findings were as follows. Regarding issue (a), each initiative was considered particularly important for increasing the agility capabilities of the company. Regarding issue (b), the general manager ranked initiative (d13) as #2, (d14) as #7, (d9) as #8, (d20) as #15, and (d8) as #19. The two key persons from the technical department ranked (d13) as #2 and #1, (d14) as #9 and #7, (d9) as #8 and #8, (d20) as #15 and #13, and (d8) as #19 and #11. These ranks are close enough to the scores calculated by the algorithm; we therefore consider the output accuracy level to be satisfactory.
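The agreement check described above can also be quantified with a rank correlation. The sketch below (not part of the study) converts each rater’s positions for the five probed initiatives (d13, d14, d9, d20, d8) into within-subset ranks and computes Spearman’s rho; the “algorithm” positions are hypothetical placeholders, since the text reports only the interviewees’ ranks, while the second key person’s ranks are taken from the interviews.

```python
def to_subset_ranks(positions):
    # rank of each position among the probed subset (1 = best-placed);
    # assumes no ties, which holds for the data used here
    order = sorted(positions)
    return [order.index(p) + 1 for p in positions]

def spearman_rho(a, b):
    # Spearman's rank correlation on the within-subset ranks
    ra, rb = to_subset_ranks(a), to_subset_ranks(b)
    n = len(ra)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# initiatives in order: d13, d14, d9, d20, d8
algorithm = [1, 7, 9, 14, 18]      # hypothetical algorithm positions (illustrative)
key_person_2 = [1, 7, 8, 13, 11]   # ranks reported by the second key person
print(round(spearman_rho(algorithm, key_person_2), 2))  # 0.9
```

Values of rho close to 1 indicate that the interviewee’s ordering agrees with the computed weights.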
Regarding improvement initiative score accuracy, numeric values were assigned to the improvement initiative weights and to the quality levels of the artifact properties. However important the improvement initiatives might be for boosting agility, chances are that the organization lacks the financial and human resources to implement them simultaneously, hence the need to prioritize them. An impartial, objective, and reasonable way to prioritize them is to express their implementation attractiveness as numerical values. The same holds for planning the quality level of business artifact properties and methods by expressing their target values (the needed performance level) as numerical values.
Two improvement initiatives with similar scores may still differ in their perceived impact on artifact quality, on the one hand, and in investment cost and technical difficulty, on the other: one may yield better artifact quality but cost more and/or be more difficult to implement. Conversely, two projects that involve similar resources and effort but impact artifact quality differently will obtain different (and therefore unique) rankings when TOPSIS is applied (step 7).
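For readers unfamiliar with step 7, a minimal TOPSIS sketch [62] is shown below, fed with criteria values from Table 3 for three of the initiatives. The equal criteria weights are an illustrative assumption, not those used in the study, so the resulting order need not match the reported one.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (Hwang & Yoon)."""
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(len(weights))]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))] for row in matrix]
    # ideal/anti-ideal point per criterion (benefit: higher is better)
    ideal = [max(col) if b else min(col) for col, b in zip(zip(*v), benefit)]
    worst = [min(col) if b else max(col) for col, b in zip(zip(*v), benefit)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, worst)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# criteria: impact, cost (EUR), duration (months), effort, tech. diff., org. diff.
alternatives = {
    "d8":  [77.6,   5000,  9, 12, 5, 8],
    "d9":  [386.6,  3000, 11,  9, 5, 8],
    "d13": [430.5, 16000, 15,  8, 5, 9],
}
benefit = [True, False, False, False, False, False]  # only impact is a benefit
weights = [1 / 6] * 6                                # equal weights (assumption)
scores = topsis(list(alternatives.values()), weights, benefit)
ranking = sorted(zip(alternatives, scores), key=lambda kv: -kv[1])
print(ranking[0][0])  # top-ranked initiative under these assumptions
```

Under equal weights the low-cost, high-impact initiative (d9) dominates; with weights emphasizing impact, as perceived in the case study, the order would shift toward (d13).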
Quantitative metrics such as age, accuracy, or transmit time, used in step 5 to measure the quality levels of artifact properties and methods, do guarantee different numeric values for different quality levels of information. This does not necessarily hold for qualitative metrics such as relevance or format usefulness; our workaround was to reach consensus on the quality level by forming a mixed team to assess those particular quality aspects. In addition to the authors, the team consisted of one key person from the technical department and two persons from the train staff.
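As a simple illustration of how such a mixed-team assessment could be consolidated, the sketch below takes the median of individual ratings as the agreed quality level; both the ratings and the median rule are illustrative assumptions, not the team’s actual procedure (consensus was reached through discussion).

```python
from statistics import median

# one rating per team member for a qualitative metric (e.g., "relevance"
# of a property), on the same 1-9 scale used for the other metrics
relevance_ratings = [5, 6, 5, 7, 5, 6, 5]

# the median is robust to a single outlying opinion
consensus = median(relevance_ratings)
print(consensus)  # 5
```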
Generalizability, in our context, indicates how useful the conceptual framework and algorithm are for a broader group of people or situations. Although we tested the algorithm by running only one longitudinal case study within a service-providing organization in the rail freight industry, we argue that it can be applied in any service-providing organization.

5.4. Relevance for Researchers

An important goal of research in Information Systems is to acquire knowledge and understanding that enable the development and implementation of technology-based solutions to heretofore unsolved and important business problems [59]. As such, to researchers, we propose an overview of organizational and operational agility, an artifact-centric approach to operational agility, and insights into how to improve agility via improving business artifact quality. We derived six ground ideas from the literature review, and—based on them—we provided a conceptual framework for understanding and increasing operational agility in service organizations. Although the ground ideas, viewed independently, are not all necessarily new, as far as we are aware, their integration into a conceptual model has not yet been achieved.
Our work focused on service organizations, which may face challenges in understanding their agility-related assets and success metrics and in defining the scopes of work for agility-related improvement initiatives. By proposing the algorithm for eliciting and prioritizing agility-related improvement initiatives, we thus respond to the call of [65] for agility and flexibility scholars to focus more on the service sector, which contributes to over 70% of developed countries’ GDPs.

5.5. Limitations

The performance level of artifacts is identified in step 4 by determining present values of the quality metrics for each property and method of an artifact class. Artifacts are instances of artifact classes, just as objects are instances of classes in OOP, and in a real scenario a dynamic array of artifacts exists for each artifact class. For example, a new instance of TTrain is created each time a new transport is planned. We acknowledge that the performance level for a particular property or method may vary among artifacts of the same type; this was, however, not considered while developing the algorithm. The current quality levels assessed in step 4 actually reflect average values over recent artifact arrays. Future development of the algorithm will address this issue.
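The averaging just described can be sketched as follows; the per-instance scores are illustrative assumptions, not case-study data.

```python
# per-instance quality score (1-9 scale) for one property of one artifact
# class, e.g., the "age" of Journey[] across recent TTrainStaff instances
journey_age_scores = {
    "staff_01": 2,
    "staff_02": 4,
    "staff_03": 3,
}

# step 4 currently collapses the instance array into a single present value
present_value = sum(journey_age_scores.values()) / len(journey_age_scores)
print(present_value)  # 3.0

# the spread hidden by the average, which future work would expose
spread = max(journey_age_scores.values()) - min(journey_age_scores.values())
print(spread)  # 2
```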
As we proposed, operational business artifacts can be abstracted by building an ontology that models the organization’s operational “world”, and thus be better understood and “accepted” by the company personnel. However, the ontology-based approach may make business artifact definition difficult without support from a skilled business analyst.

6. Conclusions

Service organizations need to become more agile in the current dynamic market while dealing with a high volume of information of variable quality and complex processes. Their management teams may face challenges in understanding their agility-related assets and success metrics. They may furthermore face challenges in defining the scope of work for agility-related improvement initiatives.
Organizations differ in size, domain, capabilities, and resources, so they need to identify their own specific agility capabilities and run their own agility improvement initiatives. Since the literature offers only rather general answers to this issue, we aimed to answer the research question of what practices and techniques managers can implement in order to improve operational agility capabilities.
This research adds to the existing literature by proposing a conceptual framework for understanding and increasing operational agility in service organizations. Based on this, we proposed an artifact-centric algorithm that supports service organizations in eliciting and prioritizing agility-related improvement initiatives, by (a) understanding agility-related assets through modelling operational business artifacts, (b) determining agility bottlenecks through identifying quality issues in operational artifacts, and (c) eliciting and prioritizing improvement initiatives to increase artifact quality. We applied the algorithm within a service company operating in the rail freight industry and identified 22 weighted improvement initiatives; implementing them should positively impact all modelled business artifacts currently facing quality issues.
The operational business artifacts and the target value list for the artifact quality metrics allowed the company to set and agree on clear operational performance goals. The calculated improvement initiative scores, reflecting investment cost, man-hours, time span, technical difficulty, and organizational difficulty, allowed the management team to easily transform the top improvement initiatives into internal improvement projects for increasing agility. We thus consider the algorithm to be an applicable and relevant tool for management teams in service organizations.
The proposed artifact-centric approach to operational agility aims to contribute knowledge and understanding of heretofore unsolved and important business problems, which is a relevant research goal in Information Systems. By focusing on service organizations, we also respond to the call of [65] for agility and flexibility scholars to focus more on the service sector, which contributes over 70% of developed countries’ GDPs.
Future work will consist of implementing a software rail freight resource planner, based on the proposed framework.

Author Contributions

Conceptualization, formal analysis and methodology, M.F.; validation, M.D. and M.M.; investigation, B.M. and M.D.; resources, data curation, B.M.; writing—original draft preparation, M.F.; writing—review and editing, B.M.; visualization, M.M.; supervision, M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Not applicable to this article.

Informed Consent Statement

Not applicable to this article.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cunha, M.P.E.; Gomes, E.; Mellahi, K.; Miner, A.S.; Rego, A. Strategic agility through improvisational capabilities: Implications for a paradox-sensitive HRM. Hum. Resour. Manag. Rev. 2019, 30, 100695. [Google Scholar] [CrossRef]
  2. Clauss, T.; Kraus, S.; Kallinger, F.L.; Bican, P.M.; Brem, A.; Kailer, N. Organizational ambidexterity and competitive advantage: The role of strategic agility in the exploration-exploitation paradox. J. Innov. Knowl. 2020, 6, 203–213. [Google Scholar] [CrossRef]
  3. Oliver Wyman. Insights: Agility as a Strategy. Available online: https://www.oliverwyman.com/our-expertise/insights/2017/jun/agility-as-a-strategy.html (accessed on 8 April 2023).
  4. Ravichandran, T. Exploring the relationships between IT competence, innovation capacity and organizational agility. J. Strat. Inf. Syst. 2018, 27, 22–42. [Google Scholar] [CrossRef]
  5. Brad, S.; Draghici, A. Lean agile technology transfer approach. Int. J. Sustain. Econ. 2016, 8, 224. [Google Scholar] [CrossRef]
  6. Khayer, A.; Islam, M.T.; Bao, Y. Understanding the Effects of Alignments between the Depth and Breadth of Cloud Computing Assimilation on Firm Performance: The Role of Organizational Agility. Sustainability 2023, 15, 2412. [Google Scholar] [CrossRef]
  7. Weiss, E.N.; Goldberg, R. Robust services: People or processes? Bus. Horiz. 2019, 62, 521–527. [Google Scholar] [CrossRef]
  8. 5 Bottlenecks to Business Agility, and How to Avoid Them. 2018. Available online: https://www.cio.com/article/221647/5-bottlenecks-to-business-agility-and-how-to-avoid-them.html (accessed on 2 December 2021).
  9. PMI, F.I. Achieving Greater Agility: The Essential Influence of the C-Suite. 2017. Available online: https://www.pmi.org/learning/thought-leadership/series/achieving-greater-agility (accessed on 8 April 2023).
  10. Consultancy.uk. Operations Consulting. 2018. Available online: https://www.consultancy.uk/consulting-industry/operations-consulting (accessed on 15 September 2019).
  11. Zhang, D.Z. Towards theory building in agile manufacturing strategies—Case studies of an agility taxonomy. Int. J. Prod. Econ. 2011, 131, 303–312. [Google Scholar] [CrossRef]
  12. Vecchiato, R. Creating value through foresight: First mover advantages and strategic agility. Technol. Forecast. Soc. Chang. 2015, 101, 25–36. [Google Scholar] [CrossRef] [Green Version]
  13. Hemalatha, C.; Sankaranarayanasamy, K.; Durairaaj, N. Lean and agile manufacturing for work-in-process (WIP) control. Mater. Today Proc. 2021, 46, 10334–10338. [Google Scholar] [CrossRef]
  14. Ghasemaghaei, M.; Hassanein, K.; Turel, O. Increasing firm agility through the use of data analytics: The role of fit. Decis. Support Syst. 2017, 101, 95–105. [Google Scholar] [CrossRef]
  15. Tallon, P.P.; Queiroz, M.; Coltman, T.; Sharma, R. Information technology and the search for organizational agility: A systematic review with future research possibilities. J. Strateg. Inf. Syst. 2019, 28, 218–237. [Google Scholar] [CrossRef]
  16. Palazzo, M.; Ma, S.; Rehman, A.U.; Muthuswamy, S. Assessment of Factors Influencing Agility in Start-Ups Industry 4.0. Sustainability 2023, 15, 7564. [Google Scholar] [CrossRef]
  17. Badawy, A.M. Fast Strategy: How Strategic Agility Will Help You Stay Ahead of the Game. Wharton School Publishing, (2008). J. Eng. Technol. Manag. 2009, 328, 342–344. [Google Scholar] [CrossRef]
  18. Gaspar, M.L.; Popescu, S.G.; Dragomir, M.; Unguras, D. Defining Strategic Quality Directions based on Organisational Context Identification; Case Study in a Software Company. Procedia Soc. Behav. Sci. 2018, 238, 615–623. [Google Scholar] [CrossRef]
  19. Roberts, N.; Grover, V. Leveraging Information Technology Infrastructure to Facilitate a Firm’s Customer Agility and Competitive Activity: An Empirical Investigation. J. Manag. Inf. Syst. 2014, 28, 231–270. [Google Scholar] [CrossRef] [Green Version]
  20. Worley, C.G.; Lawler, E.E. Agility and Organization Design: A Diagnostic Framework. Organ. Dyn. 2010, 39, 194–204. [Google Scholar] [CrossRef]
  21. Škare, M.; Soriano, D.R. A dynamic panel study on digitalization and firm’s agility: What drives agility in advanced economies 2009–2018. Technol. Forecast. Soc. Chang. 2021, 163, 120418. [Google Scholar] [CrossRef]
  22. Sun, J.; Sarfraz, M.; Turi, J.A.; Ivascu, L. Organizational Agility and Sustainable Manufacturing Practices in the Context of Emerging Economy: A Mediated Moderation Model. Processes 2022, 10, 2567. [Google Scholar] [CrossRef]
  23. Li, X.; Chung, C.; Goldsby, T.J.; Holsapple, C.W. A unified model of supply chain agility: The work-design perspective. Int. J. Logist. Manag. 2008, 19, 408–435. [Google Scholar] [CrossRef]
  24. Sambamurthy, V.; Bharadwaj, A.; Grover, V. Shaping agility through digital options: Reconceptualizing the role of information technology in contemporary firms. MIS Q. Manag. Inf. Syst. 2003, 27, 237–264. [Google Scholar] [CrossRef] [Green Version]
  25. Lu, Y.; Ramamurthy, K. Understanding the link between information technology capability and organizational agility: An empirical examination. MIS Q. Manag. Inf. Syst. 2011, 35, 931–954. [Google Scholar] [CrossRef] [Green Version]
  26. Tan, F.T.C.; Tan, B.; Wang, W.; Sedera, D. IT-enabled operational agility: An interdependencies perspective. Inf. Manag. 2017, 54, 292–303. [Google Scholar] [CrossRef]
  27. Yongchareon, S.; Liu, C.; Zhao, X.; Yu, J.; Ngamakeur, K.; Xu, J. Deriving user interface flow models for artifact-centric business processes. Comput. Ind. 2018, 96, 66–85. [Google Scholar] [CrossRef]
  28. Bottani, E. Profile and enablers of agile companies: An empirical investigation. Int. J. Prod. Econ. 2010, 125, 251–261. [Google Scholar] [CrossRef]
  29. Conforto, E.C.; Salum, F.; Amaral, D.C.; Da Silva, S.L.; De Almeida, L.F.M. Can Agile Project Management be Adopted by Industries Other than Software Development? Proj. Manag. J. 2014, 45, 21–34. [Google Scholar] [CrossRef]
  30. Hazen, B.T.; Bradley, R.V.; Bell, J.E.; In, J.; Byrd, T.A. Enterprise architecture: A competence-based approach to achieving agility and firm performance. Int. J. Prod. Econ. 2017, 193, 566–577. [Google Scholar] [CrossRef]
  31. Felipe, C.M.; Roldán, J.L.; Leal-Rodríguez, A.L. An explanatory and predictive model for organizational agility. J. Bus. Res. 2016, 69, 4624–4631. [Google Scholar] [CrossRef]
  32. De Blume, P.G.; Dong, L. Strengthening Sustainability in Agile Education: Using Client-Sponsored Projects to Cultivate Agile Talents. Sustainability 2023, 15, 8598. [Google Scholar] [CrossRef]
  33. Nigam, A.; Caswell, N.S. Business artifacts: An approach to operational specification. IBM Syst. J. 2003, 42, 428–445. [Google Scholar] [CrossRef]
  34. Cohn, D.; Hull, R. Business artifacts: A data-centric approach to modeling business operations and processes. IEEE Data Eng. Bull. 2009, 32, 3–9. [Google Scholar] [CrossRef]
  35. Koutsos, A.; Vianu, V. Process-centric views of data-driven business artifacts. J. Comput. Syst. Sci. 2017, 86, 82–107. [Google Scholar] [CrossRef] [Green Version]
  36. Fulea, M.; Kis, M.; Blagu, D.; Mocan, B. Artifact-Based Approach to Improve Internal Process Quality Using Interaction Design Principles | Fulea | Acta Technica Napocensis—Series: Applied Mathematics, Mechanics, and Engineering. ACTA Tech. Napoc.-Ser. Appl. Math. Mech. Eng. 2021, 64, 697–706. Available online: https://atna-mam.utcluj.ro/index.php/Acta/article/view/1700/1376 (accessed on 6 June 2022).
  37. Kang, G.; Yang, L.; Zhang, L. Verification of behavioral soundness for artifact-centric business process model with synchronizations. Futur. Gener. Comput. Syst. 2019, 98, 503–511. [Google Scholar] [CrossRef]
  38. Oriol, X.; De Giacomo, G.; Estañol, M.; Teniente, E. Embedding reactive behavior into artifact-centric business process models. Futur. Gener. Comput. Syst. 2021, 117, 97–110. [Google Scholar] [CrossRef]
  39. Curry, M.; Marshall, B.; Kawalek, P. IT artifact bias: How exogenous predilections influence organizational information system paradigms. Int. J. Inf. Manag. 2014, 34, 427–436. [Google Scholar] [CrossRef]
  40. Zhou, J.; Bi, G.; Liu, H.; Fang, Y.; Hua, Z. Understanding employee competence, operational IS alignment, and organizational agility—An ambidexterity perspective. Inf. Manag. 2018, 55, 695–708. [Google Scholar] [CrossRef]
  41. Gong, Y.; Janssen, M. From policy implementation to business process management: Principles for creating flexibility and agility. Gov. Inf. Q. 2012, 29 (Suppl. S1), S61–S71. [Google Scholar] [CrossRef]
  42. Battistella, C.; De Toni, A.F.; De Zan, G.; Pessot, E. Cultivating business model agility through focused capabilities: A multiple case study. J. Bus. Res. 2017, 73, 65–82. [Google Scholar] [CrossRef]
  43. Siggelkow, N. Evolution toward fit. Adm. Sci. Q. 2002, 47, 125–159. [Google Scholar] [CrossRef] [Green Version]
  44. Queiroz, M.; Tallon, P.P.; Sharma, R.; Coltman, T. The role of IT application orchestration capability in improving agility and performance. J. Strateg. Inf. Syst. 2018, 27, 4–21. [Google Scholar] [CrossRef]
  45. Meroni, G.; Baresi, L.; Montali, M.; Plebani, P. Multi-party business process compliance monitoring through IoT-enabled artifacts. Inf. Syst. 2018, 73, 61–78. [Google Scholar] [CrossRef]
  46. Zaitsev, A.; Gal, U.; Tan, B. Coordination artifacts in Agile Software Development. Inf. Organ. 2020, 30, 100288. [Google Scholar] [CrossRef]
  47. Gharib, M.; Giorgini, P. Information quality requirements engineering with STS-IQ. Inf. Softw. Technol. 2019, 107, 83–100. [Google Scholar] [CrossRef]
  48. Basciani, F.; Di Rocco, J.; Di Ruscio, D.; Iovino, L.; Pierantonio, A. A tool-supported approach for assessing the quality of modeling artifacts. J. Comput. Lang. 2019, 51, 173–192. [Google Scholar] [CrossRef]
  49. Lochmann, K. Defining and Assessing Software Quality by Quality Models. Ph.D. Thesis, München Technical University, München, Germany, 2014. [Google Scholar]
  50. Laranjeiro, N.; Soydemir, S.N.; Bernardino, J. A Survey on Data Quality: Classifying Poor Data. In Proceedings of the 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing (PRDC), Zhangjiajie, China, 18–20 November 2015. [Google Scholar] [CrossRef]
  51. Heidari, F.; Loucopoulos, P. Quality evaluation framework (QEF): Modeling and evaluating quality of business processes. Int. J. Account. Inf. Syst. 2014, 15, 193–223. [Google Scholar] [CrossRef]
  52. Barafort, B.; Shrestha, A.; Cortina, S.; Renault, A. A software artefact to support standard-based process assessment: Evolution of the TIPA® framework in a design science research project. Comput. Stand. Interfaces 2018, 60, 37–47. [Google Scholar] [CrossRef]
  53. Andrews, T.D. Managing Improvement Initiatives as Projects. 2012. Available online: https://www.pmi.org/learning/library/managing-improvement-initiatives-projects-6019 (accessed on 26 January 2022).
  54. Rudnik, K.; Bocewicz, G.; Kucińska-Landwójtowicz, A.; Czabak-Górska, I.D. Ordered fuzzy WASPAS method for selection of improvement projects. Expert Syst. Appl. 2021, 169, 114471. [Google Scholar] [CrossRef]
  55. Aqlan, F.; Al-Fandi, L. Prioritizing process improvement initiatives in manufacturing environments. Int. J. Prod. Econ. 2018, 196, 261–268. [Google Scholar] [CrossRef]
  56. El-Khalil, R.; Mezher, M.A. The mediating impact of sustainability on the relationship between agility and operational performance. Oper. Res. Perspect. 2020, 7, 100171. [Google Scholar] [CrossRef]
  57. Baran, B.E.; Woznyj, H.M. Managing VUCA: The human dynamics of agility. Organ. Dyn. 2020, 50, 100787. [Google Scholar] [CrossRef]
  58. Peffers, K.; Tuunanen, T.; Rothenberger, M.A.; Chatterjee, S. A design science research methodology for information systems research. J. Manag. Inf. Syst. 2007, 24, 45–77. [Google Scholar] [CrossRef]
  59. Hevner, A.; Chatterjee, S. Design Research in Information Systems; Springer US: Boston, MA, USA, 2010; Volume 22. [Google Scholar]
  60. Martinsuo, M.; Geraldi, J. Management of project portfolios: Relationships of project portfolios with their contexts. Int. J. Proj. Manag. 2020, 38, 441–453. [Google Scholar] [CrossRef]
  61. McAdam, R.; Miller, K.; McSorley, C. Towards a contingency theory perspective of quality management in enabling strategic alignment. Int. J. Prod. Econ. 2019, 207, 195–209. [Google Scholar] [CrossRef] [Green Version]
  62. Hwang, C.-L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications a State-of-the-Art Survey; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  63. Goodhart, C.A.E. Problems of Monetary Management: The UK Experience. In Monetary Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  64. Munier, N. Comparison of Different Models. In A Strategy for Using Multicriteria Analysis in Decision-Making; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  65. Christofi, M.; Pereira, V.; Vrontis, D.; Tarba, S.; Thrassou, A. Agility and flexibility in international business research: A comprehensive review and future research directions. J. World Bus. 2021, 56, 101194. [Google Scholar] [CrossRef]
Figure 1. The conceptual model.
Figure 2. The cost tree.
Figure 3. The operational artifact classes.
Figure 4. TProject-type artifact quality metrics and operational business requirements correlation matrix.
Figure 5. The present values and target values for each quality metric (TTrainStaff-type artifacts).
Figure 6. Improvement initiative weighting decision matrix (screenshot from the online tool).
Table 1. Quality metrics for artifacts.
Metric | Description
accuracy | extent to which information is true or error-free with respect to some known or measured value
completeness | extent to which all parts of information are available and complete (with respect to its intended usage)
volatility | extent to which the information value deprecates over time
age | extent to which the information is actual
consistency | extent to which (all) multiple records of the same information are the same across space
granularity | extent to which the information level of detail is suited to its scope
relevance | extent to which the information addresses its customer’s needs
format usefulness | extent to which the information format is appropriate
throughput | effort (man-hours) to provide the information
response time | time needed to provide the information
transmit time | time for information to reach its intended destination
reliability | the probability that the information is correctly produced/provided (i.e., it is accurate, complete, and relevant) without failure under a given environment and during a specified period of time
failure frequency | number of failures occurring while producing/providing the information within a time unit
availability | the extent (time percent) to which the information is available to its intended users
Table 2. Journey array sample data.
Id | Time | StartPoint | Probability | JourneyKind | State | Remarks
...
100 | t0 | {Home} | 86% | {free-time} | unavailable | the person is off duty
101 | t1 | {Home} | 45% | availability request issued | available | the person is contacted and asked to drive a train at time t3; there is a 45% chance that he will respond positively (e.g., he has already travelled 10 times within the last two weeks)
102 | t2 | {Home} | 100% | {Home} | available |
103 | t3 | {Home} | 72% | inbound journey | inbound_travel | the person travels from home to the start point railway station; 72% is the probability that the journey will be needed within the commercial project
104 | t4 | depot at start point railway station | 72% | preparing locomotive | train_setup | the person performs tasks for setting up the locomotive and then drives it to the actual start point (where the carriages are located); 72% is the probability that the journey will be needed within the commercial project
105 | t5 | start point railway station | 72% | driving the train | driving | the person drives the train to an intermediary railway station (where he will be replaced by a colleague)
106 | t6 | intermediary railway station | 72% | outbound journey | outbound_travel | the person travels back home from the intermediary railway station (his job is currently done); 72% is the probability that the journey will be needed within the commercial project
...
Table 3. Improvement initiatives to improve TTrainStaff-type artifact quality.
Goal & DescriptionImpacted Metrics & Time HorizonCharacteristics
(d8)Rolling stock sharing (with other 4 partner companies)–engines and carriages not used by a company (for a time period) are made available to the other partners–this means building a shared rolling stock usage calendar and authorizing some drivers to drive partner locomotives (possibly of different type)AuthorizationPlan[]: accuracy (5➛➛9), throughput (4➛9)
(time horizon: 24 months)
Investment cost: €5000
Duration: 9 months
Effort (man-month): 12
Technical difficulty: 5 Organizational difficulty: 8
Perceived impact: 77.6
(d9)Train crew sharing (with other 4 partner companies)–teams from one company in a specific geographic region can be used also by partner companies which may not have employees in that region–this means building a shared train crew calendarJourney[]: completeness (2➛8), age (2➛9), consistency (4➛9), throughput (2➛9)
seriousness: accuracy (3➛6), age (2➛8)
getDrivingHours(): age (1➛8), reliability (5➛8), availability (5➛9)
(time horizon: 24 months)
Investment cost: €3000
Duration: 11 months
Effort (man-month): 9
Technical difficulty: 5 Organizational difficulty: 8
Perceived impact: 386.6
(d13)Design and implementation of a resource planning software platform based on the newly described artifacts, in collaboration with a university–this will also support projects (d8) and (d9)for all: consistency (➛9)
for all methods: throughput, response time, availability (➛9)
(time horizon: 18 months)
Investment cost: 16.000€
Duration: 15 months
Effort (man-month): 8
Technical difficulty: 5 Organizational difficulty: 9
Perceived impact: 430.5
(d16)Real-time monitoring (and recording) of locomotive parameters and context (e.g., instant speed, energy consumption, GPS position, meteorological conditions)–besides train instant geographical positioning, this will enable tracking the driving style of the train driver (which impacts on energy consumption)skills: accuracy (6➛9), completeness (2➛8), volatility (2➛8), age (1➛8), consistency (1➛9), relevance (5➛8), availability (2➛8)
(time horizon: 18 months)
Investment cost: 12.000€
Duration: 12 months
Effort (man-month): 6
Technical difficulty: 6 Organizational difficulty: 3
Perceived impact: 311.7
(d20)online project dashboard for customers (on the company website), allowing them to manage their commercial transport projects–request new transport, see real-time data about current transports etc.Journey[]: completeness (2➛8), age (2➛9), throughput (1➛9)
getPositionOn(): accuracy (3➛9), throughput (1➛9)
(time horizon: 24 months)
Investment cost: 6.000€
Duration: 6 months
Effort (man-month): 4 (only our side)
Technical difficulty: 5 Organizational difficulty: 8
Perceived impact: 297.6
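The characteristics in Table 3 lend themselves to a machine-readable form that a management team could sort and filter when prioritizing. The sketch below transcribes the five initiatives and ranks them by the perceived-impact scores reported above; the `Initiative` structure is illustrative and not the paper's tooling, and the perceived-impact values are taken as given rather than re-derived:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    ident: str
    investment_eur: int
    duration_months: int
    effort_mm: int          # effort in man-months
    tech_difficulty: int    # 1-10 scale, as in Table 3
    org_difficulty: int     # 1-10 scale, as in Table 3
    perceived_impact: float

# Values transcribed from Table 3.
initiatives = [
    Initiative("d8", 5000, 9, 12, 5, 8, 77.6),
    Initiative("d9", 3000, 11, 9, 5, 8, 386.6),
    Initiative("d13", 16000, 15, 8, 5, 9, 430.5),
    Initiative("d16", 12000, 12, 6, 6, 3, 311.7),
    Initiative("d20", 6000, 6, 4, 5, 8, 297.6),
]

def ranked_by_impact(items):
    """Order initiatives by perceived impact, highest first."""
    return sorted(items, key=lambda i: i.perceived_impact, reverse=True)

for i in ranked_by_impact(initiatives):
    print(i.ident, i.perceived_impact)
```

On these figures the ordering is d13, d9, d16, d20, d8; a team could equally rank by impact per euro invested or per man-month, depending on its constraints.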
Fulea, M.; Mocan, B.; Dragomir, M.; Murar, M. On Increasing Service Organizations’ Agility: An Artifact-Based Framework to Elicit Improvement Initiatives. Sustainability 2023, 15, 10189. https://doi.org/10.3390/su151310189
