Article

The Analyst’s Hierarchy of Needs: Grounded Design Principles for Tailored Intelligence Analysis Tools

1 The Pennsylvania State University, University Park, PA 16802, USA
2 North Carolina State University, Raleigh, NC 27695, USA
3 University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
4 Smith College, Northampton, MA 01063, USA
* Author to whom correspondence should be addressed.
Analytics 2024, 3(4), 406-424; https://doi.org/10.3390/analytics3040022
Submission received: 22 July 2024 / Revised: 9 September 2024 / Accepted: 15 October 2024 / Published: 29 October 2024
(This article belongs to the Special Issue Advances in Applied Data Science: Bridging Theory and Practice)

Abstract

Intelligence analysis involves gathering, analyzing, and interpreting vast amounts of information from diverse sources to generate accurate and timely insights. Tailored tools hold great promise in providing individualized support, enhancing efficiency, and facilitating the identification of crucial intelligence gaps and trends where traditional tools fail. The effectiveness of tailored tools depends on an analyst’s unique needs and motivations, as well as the broader context in which they operate. This paper describes a series of focused discovery exercises that revealed a distinct hierarchy of needs for intelligence analysts. This reflection on the balance between competing needs is of particular value in the context of intelligence analysis, where the compartmentalization required for security can make it difficult to ground design patterns in stakeholder values. We hope that this study will enable the development of more effective tools, supporting the well-being and performance of intelligence analysts as well as the organizations they serve.

1. Introduction

Intelligence analysis provides critical insights to governments, organizations, and security agencies and plays a pivotal role in modern decision-making processes. Analysts are tasked with collecting, analyzing, and interpreting vast amounts of information from various sources to generate accurate and timely insights [1,2]. The resultant tradecraft is as unique as artistic style or literary voice, and traditional one-size-fits-all tools often fall short of meeting the needs of individual analysts. Tailored tools, in contrast, promise to streamline the analysis workflow, facilitating the identification of critical intelligence gaps and trends by offering features such as advanced data visualization, predictive modeling, pattern recognition, recommendation engines, automated summarization, and information fusion capabilities that are responsive to an individual analyst’s information needs and preferences around data-driven communication.
Of course, these tools are only as good as our understanding of the needs that they must adapt to meet. This paper documents a series of exercises conducted during the 2023 Summer Conference on Applied Data Science with the goal of deepening our understanding of the complex, dynamic relationships between the needs and desires specific to intelligence analysis. As a complement to conventional design exercises centering stakeholders’ responses to specific features of a tool, we conducted a series of focused discovery groups to examine the underlying system of values that intelligence analysts prioritize and the tension between these often-competing needs.
These exercises revealed a clear hierarchical system of values that differs from those in other work environments, evoking a domain-specific Maslow’s Hierarchy as a framework for understanding the relative importance of various factors at play. Subsequent design activities using the resulting “Analyst Hierarchy of Needs” model as a lens afforded a novel perspective on the effect that fulfilling or neglecting needs at various levels has on overall satisfaction with proposed tool designs. Finally, our study investigates the role of organizational factors in the fulfillment of these needs, such as regulatory compliance, team dynamics, and constraints introduced by the operational environment. We conclude by identifying strategies and interventions grounded in this model that can enhance the well-being and performance of intelligence analysts, benefiting both the analysts themselves and the organizations they serve.

2. Background

2.1. The 2023 Summer Conference on Applied Data Science (SCADS)

The work described in this paper was conducted at the 2023 Summer Conference on Applied Data Science (SCADS), an event hosted annually by the Laboratory for Analytic Sciences (LAS). Over an intensive eight-week program, experts from various disciplines convene to collaboratively address a grand challenge in the realm of machine learning and artificial intelligence. Drawing inspiration from the intelligence community’s manually produced daily intelligence briefings for the President of the United States, the 2023 grand challenge was centered on creating tailored daily reports (TLDRs) for intelligence analysts and other knowledge workers. Over 50 researchers and practitioners from academia, government, and industry came together to explore the design space of systems capable of producing similar concise, auto-generated reports for a broader audience, using modern AI/ML capabilities to proactively furnish individuals or organizations with pertinent information.

2.2. Intelligence Community Analytic Culture

The term tradecraft has been described in Johnston’s 2005 book Analytic Culture as “a catchall for the often-idiosyncratic methods and techniques required to perform analysis” [3]. Due to the sensitive nature of this work, there is a purposeful mystery behind these processes, which includes the lack of formal documentation or measurement of methodology transfer from senior analyst to beginner analyst [3]. It is therefore impossible to pin down a formal, singular definition of the analytic process; nearly two decades after the publication of Analytic Culture, the Intelligence Community (IC) and its techniques still remain generally informal and complex.
This poses a profound problem for the academic and industry collaborators charged with the development of novel systems for supporting these processes—after all, it is challenging to design effectively for processes we cannot meaningfully observe. What we do understand is that intelligence analysts must follow the set of standards established in Intelligence Community Directive 203, issued by the Office of the Director of National Intelligence, which provides a useful starting point for understanding the system of values at work across the IC.
Another equally important constraint on analysts’ production is time. Often, workflows begin with a search of previously supported data, reinforcing a pre-existing mental model where “[d]emands for quick processing reduce opportunities to consider different possibilities” [4]. This constraint engenders a natural tendency toward confirmation bias [3]. Johnston expands on this tendency: “[t]rying to discern controversies and divergence in intelligence products is often difficult, because some of [these processes]… are specifically designed to produce a corporate consensus for an audience of high-level policymakers” [3].
As such, there is a great resistance to change within the intelligence analyst community, to a greater extent than is predicted by Rogers’ Diffusion of Innovations [5]. This could be attributed to the IC’s organizational culture, which exhibits the attributes of a high-reliability organization (HRO), with consequences such as “…the threat of a loss in status, funding, and access to policymakers, all of which would have a detrimental effect on the ability of the intelligence agency to perform its functions… In response to the organizational norm, the analyst is inclined to work the product line rather than change it” [3]. For further details on the history of the IC in the United States, see Appendix A.

3. Materials and Methods

3.1. Focused Discovery

In order to explore the system of needs and their interrelationship, we first needed a way to help surface them. The Focused Discovery methodology is an integral part of the UX design process, primarily centered on comprehending the problem domain and framing the issues that need attention [6]. It underscores the need to maintain a wide perspective and stay neutral regarding technology, aiming to set design initiatives on the correct path [7]. A successful discovery phase results in a deep understanding of user needs, their problems, and potential opportunities, while also establishing a shared vision among various stakeholders [7,8].
Focused Discovery is the first phase of the UK Design Council’s double-diamond model of UX Design and comprises two key stages: discovery and definition (see Figure 1). Discovery is necessary when there are unknown factors impeding progress or when team alignment is lacking [9]. Various triggers can initiate discoveries, including new market prospects, acquisitions, policy alterations, shifts in organizational strategies, or persistent organizational challenges.
Typical activities within the discovery phases are exploratory research, stakeholder interviews, assumption mapping, research question generation, affinity diagramming, service blueprinting, and problem framing workshops. Forming multidisciplinary teams with roles like researchers, facilitators, sponsors/owners, and technical experts is vital for conducting effective discovery. The output of a discovery phase includes a detailed comprehension of the problem space, well-defined objectives, and sometimes initial high-level solution concepts. Various artifacts may be produced, such as a refined problem statement, service blueprints, user journey maps, personas, and high-level design concepts or wireframes.

3.2. Participants

We recruited people with analyst or analyst-adjacent experience by reaching out directly to analysts at the LAS. A total of fourteen individuals responded to our request. Ten participants joined the first focused discovery group activity and four joined the second. Descriptions of work roles included linguistic analysts, intelligence analysts, and discovery analysts. Participants’ experience working as analysts ranged from two to ten years.

3.3. Study Conceptualization

At the inception of our program, the initial conceptualization of our study was centered on the mental models of intelligence analysts. However, as the week unfolded, it became apparent to the research team that the vast scope of such a project would be untenable, given the time constraints. Consequently, the team shifted its focus towards identifying the fundamental values associated with a tailored daily report (TLDR) system that would align with our analysts’ preferences. This pivot marked a significant advancement in our study design, establishing a focused discovery group (FDG) comprising our analysts.

3.4. Study Setup and Data Collection

Both FDG exercises were facilitated by two members of the project team. To promote consistency in facilitation, a template was created. The FDGs were semistructured and approximately seventy-five minutes long. Conducting our FDG exercises as semistructured sessions allowed us to reword and reorder questions in response to the natural conversational flow of each exercise, while also providing leeway to explore interesting digressions within the scope of the exercise.
Both FDG exercises were conducted only after obtaining consent from all group participants. Participants were asked to provide their input on their expectations of a TLDR system. Specifically, they were asked to describe desirable and undesirable attributes and characteristics of a TLDR system. Desirable features were recorded on green sticky notes and undesirable features were recorded on yellow sticky notes. After each sticky note was filled in and completed, facilitators randomly affixed each one to a whiteboard without any specific organization.
Upon completion of this step, participants were tasked with categorizing their desired and undesired features into related groups. Subsequently, the analysts were asked to individually assign a blue sticky note to a value or feature they deemed essential for the TLDR system, a purple sticky note to a potential but non-essential feature, and a red sticky note to a feature they deemed unacceptable within the system. Following this exercise, a debriefing session was conducted wherein the participants were asked to explain their rationale for assigning blue sticky notes to certain themes and their reasons for prioritizing one theme over another. This debriefing provided valuable insights into the analysts’ thought processes and their prioritization criteria for the TLDR system.
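To make this protocol concrete, the following minimal Python sketch shows one way the resulting sticky-note data could be represented and tallied ahead of the debriefing. This is purely illustrative: the study prescribes no software, and the vote weights and tallies below are our assumptions, not the participants’ actual votes.
```python
from collections import Counter
from dataclasses import dataclass, field

# Sticky-note colors from the protocol above:
# blue = essential, purple = potential but non-essential, red = unacceptable.
# The numeric weights are an assumption for illustration, not from the study.
VOTE_WEIGHTS = {"blue": 2, "purple": 1, "red": -2}

@dataclass
class Theme:
    name: str
    votes: Counter = field(default_factory=Counter)

    def score(self) -> int:
        # Weighted tally used to rank themes ahead of the debriefing session.
        return sum(VOTE_WEIGHTS[c] * n for c, n in self.votes.items())

# Hypothetical tallies for two groupings mentioned in Section 4.
themes = [Theme("Control", Counter(blue=1, purple=3)),
          Theme("Display", Counter(blue=4, red=1))]
for theme in sorted(themes, key=Theme.score, reverse=True):
    print(f"{theme.name}: {theme.score()}")
```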
In addition to the FDG exercise, three members of the project team and two additional members recruited from SCADS also served as note-takers for this exercise. Since the FDG consisted of groups of individuals, it was important to have multiple note-takers present who could document our participants’ conversations and involvement throughout the exercise. Additionally, one member of the project team participated as a note-taker in the pilot FDG exercise prior to the actual FDG exercises to allow the project team to gain familiarity with the note-taking process.
After each exercise, one facilitator and three note-takers from the project (consistent for both exercises) gathered for the first round of data gathering and transformation. We first collected the paper sheets participants had used to write down their initial thoughts and ideas about the system and shredded them, as stipulated in the consent form. Then, we examined the whiteboard with the organized sticky note clusters, adding relevant context; for example, if a sticky note pointed at two categories, we duplicated it and applied it to both categories, creating a better analysis setup without changing what participants expressed during the exercise. We then took a photo of the whiteboard as it was and physically collected each category with all sticky notes attached, before digitizing the mirrored result into a shared Miro board for further qualitative analysis.
With consent from the participants, we also video-recorded the exercise, capturing all relevant information. We also took several photos of the whiteboard to serve as snapshots of different stages, showing how the sticky notes were placed, clustered, and moved. All photos and videos, along with the notes from note-takers, were then uploaded to a secured Google Drive shared among SCADS scholars, and the data were deleted from the original devices.

3.5. Analysis Plan

3.5.1. Data Collection and Preparation

The analysis of FDG exercise data was conducted collaboratively by a team of five researchers. To ensure impartiality in observation and to capture diverse perspectives, two note-takers who were not previously familiar with the design of these FDG exercises were asked to document their independent observations in addition to those captured by the primary study team. This approach was aimed at mitigating latent confirmation bias on the part of the study designers and promoting a comprehensive exploration of the data. Following each FDG exercise, the note-takers individually reviewed and refined their digital notes to incorporate relevant or necessary contextual information. Additionally, the physical placement of the participants’ sticky notes and their contents were digitized using the research group’s shared online interactive platform, Miro board. The accumulated notes and Miro board served as the basis for subsequent qualitative analysis.

3.5.2. Focused Coding and Analytic Framework Development

The research team followed Charmaz’s focused coding methodology [10] to select codes that effectively captured the principal themes present in the data as well as the processes observed during the FDGs. Themes that identified active processes, interactions, and discussions leading to overarching participant groupings and defining generic intelligence analysts’ values in technological systems were of particular interest.
Each researcher conducted an individual assessment of the two exercises, generating in vivo codes. These codes were subsequently reviewed in a group setting, enabling a formal examination of the proposed themes. Among the identified codes, five were consistently evident across all researchers’ analyses, culminating in a theoretical definition of the observed data. To ensure impartiality in the construction of formal categories, the team also invited an experienced analyst to perform an independent analysis and associated coding.
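A minimal sketch of this cross-researcher consistency check, under the assumption that each researcher’s in vivo codes can be treated as a set; the code labels shown are hypothetical, not the study’s actual codes:
```python
# Retain only the codes that appear in every researcher's independent
# analysis, mirroring the group review described above. Labels are
# hypothetical placeholders.
researcher_codes = {
    "R1": {"trust but verify", "compliance", "relevance", "control"},
    "R2": {"trust but verify", "compliance", "relevance", "speed"},
    "R3": {"trust but verify", "compliance", "relevance", "tailoring"},
}
consistent = set.intersection(*researcher_codes.values())
print(sorted(consistent))  # ['compliance', 'relevance', 'trust but verify']
```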

3.5.3. Grounded Theory Process

Continuing to follow Charmaz’s principles of Grounded Theory, the research team then synthesized the latent properties of each category, specifying the conditions under which each category arose, was maintained, and underwent changes within the analysts’ processes. Consequences of these categories and their interrelationships were also thoroughly explored. This analytical stage revealed a precedent ordering among categories and helped to establish meaningful connections between them.
The team completed the initial iteration of the coding process, documenting emerging themes from this phase in Section 4. It is important to acknowledge that due to the abbreviated 8-week format of SCADS, these findings are preliminary in nature. At present, the research team has reached the memo-making stage of the Grounded Theory process. To enhance analytical rigor and uncover deeper insights, we anticipate further iterations of coding as part of our future work. Specifically, to refine the categories and resultant model, a future stage of theoretical sampling involving interviews with the population is under development. This step will allow for a more formal definition of category properties and the specification of conditions linking them to other categories.

4. Results

4.1. Individual Differences and Analyst Roles

Throughout the discovery exercises, each participant had a different ideal when explaining desirable TLDR functionalities. Some wanted the ability to listen to their tailored daily report or convert it to text; others wanted summaries of the overwhelming requests and requirements demanding their attention.
For example, the grouping of Control was seen as the least essential grouping in terms of participant voting. However, during the FDG exercises, other higher-voted groupings showed an underlying participant drive for control of the system, such as Display, Visualization, Collaborative, Control + Transparent + Sources, and UX Touchpoints + Spin-up Time. How an analyst learns, summarizes, and interprets data is dependent on their personal learning techniques. It is impossible to group all intelligence analysts within the same category without consideration of these minute differences that define analysts as individuals.
Towards the end of the first FDG, there was a discussion about the conflicting desires of the different work roles of intelligence analysts. Team structure also matters: there is a clear difference in priorities between a Mission Manager and the rest of their team.
“Approaching from mission management perspective… I have [a] very narrowed focus on what’s relevant. Based on what my team’s poured out. Less find new things, more summarize existing body of things…”
Mission managers trust the output their team has produced and spend their time refining intelligence production to make it more digestible for leadership higher up the chain of command. What a Mission Manager might want presented in their TLDR might be an amalgamation of their team’s individual work or TLDRs, as maintaining a constantly aware presence allows them to provide timely and accurate reports to organizational leadership.
Another conflict in this discussion concerned the idea of relevance. Participants described relevance as information situated in the analyst’s current context: which shop an analyst is in determines the contextual type of data they work with. While discovery analysts deal with near real-time information, constantly shifting their frame of reference to the immediate, other offices work on longer-term accounts and review previously published reports and information. Personal tasking and prioritization were considered secondary to how relevant the tasks were to leadership’s priorities. One desire that came forth was the ability for the TLDR to remain flexible to constantly changing priorities without losing the frame of reference for previous tasks.

4.2. Barrier Dissolution

The focused discovery group (FDG) exercises were thoughtfully scheduled across two distinct days, a strategic decision aimed at preventing an overload of information within a single session. However, during the course of these sessions, the research team observed an intriguing behavioral pattern that warranted further examination. Each session, despite sharing identical starting points, was initially marked by a palpable reticence among the participants. This reluctance was most evident during the early stages of the exercise when the analysts were tasked with identifying their preferred system features and values. This phase was often characterized by a period of quiet introspection, suggesting a cautious approach to the task at hand.
Interestingly, the introduction of the sticky note activity appeared to catalyze a change in the participants’ demeanor. As they began to place their thoughts on the board physically, a sense of ease seemed to permeate the group, fostering increased interaction. The research team interpreted this as a symbolic lifting of the initial barrier, indicative of the analysts’ growing comfort and engagement with the exercise. This pivotal transition, typically occurring within the first 25 min of both sessions, coincided with a shift in the analysts’ focus from individual ideation to collective interaction. This shift in group dynamics provided a wealth of insights into both the analysts’ behavior and the inherent nature of the exercise. The increased level of engagement facilitated the extraction of meaningful themes and observations, as the analysts’ active participation seemed to enrich the discourse. However, this heightened interaction also introduced a layer of complexity in interpreting the discussions, as they began to delve deeper into their individual and collective preferences and aversions regarding the TLDR system.
Despite the commonalities, there were discernible differences between the Friday and Tuesday sessions, particularly in the speed at which this initial barrier was lifted. The Friday session, characterized by a larger group of analysts, experienced a quicker transition to lively interaction compared to the smaller Tuesday session. This observation, subtle yet significant, suggests that group size may play a role in influencing the dynamics and outcomes of these exercises. Lifting the initial barrier marked a phase of enriched information extraction for the research team. The discussions among the analysts evolved to become more organic and dynamic, sparking debates over the prioritization of certain features or values. This stage of the exercise also revealed the interplay of personality dominance within the group, occasionally resulting in some voices being unintentionally overshadowed. However, the analysts’ perceived investment in the exercise ensured that quieter participants were not silenced for extended periods, thereby maintaining the diversity and richness of the discourse. This observation underscores the importance of fostering an inclusive environment to capture a comprehensive range of perspectives in such exercises.

4.3. Compliance Confusion

In the first FDG, a later discussion of the connection between tool integration and functionality brought forth a debate about compliance. This is an overloaded term within the government sphere, which brought another layer of confusion to the participants. Three strands emerged from this broader debate: tool functionality compliance, compliance with data and privacy laws, and tool performance.
“Oh, that’s a good point. Sometimes compliant systems do functional things that slow it down…”
“I think we need 16 more stickies that say compliance.”
“…whether or not aligns with your values, imposed upon the analysts….”
User-facing systems within the government are subject to Section 508 of the Rehabilitation Act and Section 255 of the Communications Act.
“…develop, procure, maintain, or use information and communications technology (ICT) that is accessible to people with disabilities and to give employees and members of the public with disabilities access to information comparable to the access available to others”
[11]
With these laws in place, the IA community has been greatly impacted by the new requirements imposed on its toolkits. Tools must achieve a perfect score on the Accessibility Scoresheet, an internal scoring system created with WCAG 2.1 standards in mind. If a tool does not meet this requirement, it must go through remediation to correct the functionality. Since many analyst tools are close to twenty years old, development teams have taken the initiative to overhaul them completely and move towards modernization. With new tools being developed at a rapid pace to meet these imposed deadlines, analysts have been left without input in the process, effectively disrupting their workflow piece by piece. This is a huge disruption to analyst tradecraft, which has remained much the same since the analytic culture change following the events of 11 September 2001 [12]. This impact is directly shown in the concerns analysts raised verbally and reflected in their groupings.
Data and privacy laws directly affect intelligence analysts as well, a concern surfacing almost immediately in participants’ spoken remarks and negative attributions within the FDG exercises:
“If I am committing incidents by using the tool…”
“Sometimes I won’t get relevant information if I don’t have relevant access…”
As discussed earlier, tools built for the government must follow a set of laws so that analysts can interact with data safely. Any user-facing tool within the government sphere needs to handle the user’s credentials properly in order to deliver data appropriate to the analyst’s clearance. It is interesting to note that analysts show concern that they might be recommended articles or information that they could not follow up on due to mismatched accesses; there is a perceived possibility for the tool to fail in this respect.
“I do believe functional connects to performance.”
“If it’s slow then analysts don’t want to use it…”
There is a distinct, widely known gap in the functionality, development, and UX considerations between the tools available to the IC and those available to the external community. This is due in part to practical limitations on the pace at which development requirements and federal law are able to evolve in response to technological change. These sometimes-outdated tools have been the backbone of analysts’ toolkits for close to two decades, and there is an obvious concern and recognition of their poor performance at tension with the broader, organizational resistance to change.
“If it’s slow I’m turning it off”
This phenomenon is best described as a tension between adopter groups on the diffusion-of-innovations curve: analysts who are early-stage adopters versus those in the late majority. Combining this with the HRO principles of preoccupation with failure, reluctance to simplify, deference to expertise, and sensitivity to operations provides a better understanding of the pressure analysts are under in delivering intelligence products.

5. An Analyst’s Hierarchy of Needs

As the themes emerged, it became clear that while each analyst might prioritize their needs differently, certain needs had more common ground than others. Adapting Maslow’s Hierarchy of Needs, we describe the analysts’ needs for a TLDR system using the same pyramid structure (see Figure 2). We believe this structure provides general guidelines for a design process, ensuring the most necessary needs are covered before moving on to more aspirational features. It is worth noting that, like Maslow’s Hierarchy, we do not consider this structure to be rigid: higher needs can sometimes take precedence over lower ones. However, we do suggest that designs that significantly disrupt this hierarchy could render the system unusable, as they would destabilize analysts’ essential workflow. In an effort to present the commentary that supported the development of our model, we provide examples for each theme in Table 1.
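To illustrate how the hierarchy can function as a design checklist rather than a rigid law, the sketch below flags designs that serve a higher level while a more foundational one remains unmet. The level names come from Sections 5.1–5.5; the satisfaction flags and the risk rule are our illustrative assumptions, not part of the model itself.
```python
# The five levels of the Analyst's Hierarchy of Needs (Sections 5.1-5.5),
# ordered from most foundational to most aspirational.
HIERARCHY = [
    "efficient and functional",
    "trustworthy and reliable",
    "context-aware and relevant",
    "tailored",
    "customizable",
]

def design_risks(satisfied):
    """Flag levels a design serves while more foundational levels are unmet.

    This operationalizes the guideline above: the hierarchy is not rigid,
    but skipping its foundations risks rendering a system unusable.
    """
    risks = []
    for i, level in enumerate(HIERARCHY):
        if level in satisfied:
            unmet = [low for low in HIERARCHY[:i] if low not in satisfied]
            if unmet:
                risks.append(f"'{level}' served while unmet: {unmet}")
    return risks

# Hypothetical profile echoing the Hive critique in Section 6.1.3.
print(design_risks({"customizable", "context-aware and relevant"}))
```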

5.1. Efficient and Functional

In our initial analysis, the dual themes of efficient and functional stood out as primary concerns among the participating analysts. At the forefront of discussions was the unanimous emphasis on compliance. Given that analysts operate within stringent legal frameworks, compliance is not just a preference but a necessity. Any system that fails to adhere to these legal stipulations is immediately rendered unsuitable.
Beyond compliance, there was a clear and consistent call for systems that champion efficiency. Feedback from participants revealed a shared dissatisfaction with current tools, often characterized as suboptimal and lacking in seamless integration with their existing workflows. Yet, this acceptance of current limitations did not diminish their hope for more efficient tools. During the FDG exercise, attributes such as performance, functionality, and length were frequently highlighted: tools should be efficient and enhance, rather than impede, analysts’ capabilities.
This feedback underscores the importance of a system’s adaptability and responsiveness in ensuring its adoption and continued use. Drawing from the data gathered during the FDG exercise, it is evident that analysts seek efficient and functional tools: the ideal system would seamlessly integrate into their workflow, offering immediate, intuitive enhancements while strictly adhering to legal and operational standards. Above all, such systems must adeptly support analysts’ daily operations while remaining compliant.

5.2. Trustworthy and Reliable

Throughout the FDG exercises, a recurring sentiment among the analysts was encapsulated in the phrase, “trust but verify”. This mantra underscores analysts’ imperative to understand and authenticate the origins of information, especially within the TLDR. Given that many analysts are responsible for crafting and disseminating reports to their superiors, ensuring the credibility and accuracy of the information they relay is paramount. Other participants expanded on their meaning, expressing the desire for guidance rather than imposition from AI tools.
The overarching sentiment was that the TLDR should not compromise trust, a foundational element in their work. From the researchers’ perspective, the theme that crystallized from this discourse was unequivocally trustworthy and reliable. The ability to validate a system’s output, backed by transparent sources and references, is of utmost importance. This transparency fosters trust between the human user and the machine and lays the groundwork for a reliable partnership. The emphasis on verifiability, juxtaposed with concerns about potential system limitations, was consistent across discussions. This highlights a broader challenge faced by many in the realm of contemporary analysis: navigating the credibility of sophisticated tools that, at times, may lack transparent functionalities.

5.3. Context-Aware and Relevant

In the course of our FDG exercises, the intertwined themes of context-awareness and relevance emerged as central concerns among the participating analysts. Analysts consistently emphasized the need for information and systems to be both context-aware and relevant.
Some participants expanded on the theme of contextualization, expressing a desire to see information in context and to have options that allow for deeper exploration. A significant point of contention arose around the TLDR’s role in determining relevance. The prevailing sentiment was that while the analyst is best equipped to discern what information is pertinent for a report, the system should not autonomously make that determination. While universally acknowledged as crucial, the concept of relevance was interpreted variably among participants, with some analysts expressing that relevance is context-dependent. That is, relevance might be contingent on pressing matters, with mission-critical information taking precedence.
Taken together, our research revealed a nuanced divergence in analysts’ perceptions and requirements concerning relevance. The question of what is relevant? emerged as a focal point. This question risked oversimplification when posed without clear parameters, and led us to categorize relevance into three distinct tiers (a minimal scoring sketch follows the list):
  • Mission-Centric Relevance: This level emphasizes the overarching mission, with the sentiment that every piece of information is relevant insofar as it bears on the mission. The challenge here is discerning the most crucial information amidst a sea of data.
  • Workflow-Centric Relevance: This tier focuses on the analyst’s day-to-day operations. Information is segmented into sessions, highlighting needs for continuity and updates, encapsulated by sentiments like “pick me up where I left off” and “inform me about what I missed.”
  • Individual vs. Team Relevance: This level addresses the balance between what is relevant to an individual analyst versus the broader team.
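The sketch below, referenced above, illustrates one way these tiers could be operationalized in a TLDR ranking component. The field names, tier weights, and scoring rule are illustrative assumptions; as the findings suggest, such weights would shift with mission priorities and analyst role.
```python
from dataclasses import dataclass

@dataclass
class RelevanceSignal:
    """Per-item relevance along the three tiers above, each in [0, 1]."""
    mission: float   # bearing on the overarching mission
    workflow: float  # continuity with the analyst's current session
    team: float      # usefulness to the broader team vs. the individual

def combined_relevance(signal, weights=(0.5, 0.3, 0.2)):
    # Weighted blend of the three tiers; the weights are assumed for
    # illustration and would in practice shift with priorities and role.
    w_mission, w_workflow, w_team = weights
    return (w_mission * signal.mission
            + w_workflow * signal.workflow
            + w_team * signal.team)

item = RelevanceSignal(mission=0.9, workflow=0.4, team=0.6)
print(f"combined relevance: {combined_relevance(item):.2f}")  # 0.69
```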
From the data collated, it is evident that analysts seek a system that is adept at discerning and navigating the multifaceted layers of relevance. The ideal system would not only be context-aware but would also seamlessly adapt to the shifting sands of relevance, whether determined by mission, workflow, or the balance between individual and team needs.

5.4. Tailored

The theme of tailoring emerged as a central topic of debate among the participating analysts, particularly in relation to the TLDR system’s potential adaptability for knowledge workers. There was a clear desire for the TLDR system to be adaptable, reflecting individual analysts’ unique needs and contexts.
However, the journey from a context-aware system to a truly tailored one represents a significant leap, requiring the system to account for individual differences, as evidenced by the theme of “tailored” emerging predominantly from individual feedback. Yet, this emphasis on individual tailoring was not unanimous. A notable segment of our analysts questioned whether the TLDR system should be tailored to the individual, positing that perhaps it should be tailored to the team or the team objectives instead. This sentiment underscores a broader debate: In an ideal scenario where a perfectly context-aware system exists, how do we calibrate it to balance the needs of the individual with those of the role, mission, and organization? How can a system cater to an individual analyst while considering their unique personality traits, cognitive status, and training experience?
This line of thinking suggests reimagining the TLDR, where the tailored component might vary based on specific roles rather than individual preferences. From the data collated, it is evident that the concept of tailoring, while universally acknowledged as valuable, is fraught with complexities. As gleaned from the discussions, the ideal system would need to strike a delicate balance: it should be adaptable enough to cater to individual nuances yet broad enough to align with team objectives and organizational goals. The challenge lies in navigating these dual imperatives, ensuring that the system remains both relevant and effective for its users.

5.5. Customizable

While both tailoring and customization revolve around personalizing the system, the discussions revealed distinct interpretations for each term. Tailoring was perceived as a broader concept, potentially encompassing both individual and team preferences. In contrast, “customization” was more intimately associated with individual agency, allowing users to modify the system according to their unique preferences.
This dichotomy between themes was exemplified by analysts’ strong preference for control over the TLDR’s interface and functionality. This desire for control extended to the ability to manipulate the system’s layout, move windows or widgets, and determine the presentation of information. The overarching theme was the aspiration for a system that could be molded to meet individual needs. This was further emphasized by the enthusiastic response to the hypothetical feature of a magic slider (see Table 1).
The discussions also highlighted a consensus that a finalized TLDR system should not be monolithic or rigid. Instead, it should offer flexibility, ensuring that analysts have agency in determining how information is presented to them. This perspective underscores the importance of a dynamic presentation of content that can be adjusted based on individual analyst preferences. From the data collated, it is evident that while analysts seek a system that can be both tailored and customizable, these terms are not interchangeable. The ideal TLDR system, as inferred from the discussions, would seamlessly blend both concepts, offering a platform that can be adapted to individual and team needs while allowing users the freedom to customize its interface and functionalities to their liking. The challenge lies in developing a system that strikes this delicate balance, ensuring it remains both adaptable and user-centric.

5.6. Grounding System Design in Analysts’ Needs

The analysis of the FDG exercises, structured within the Analyst’s Hierarchy of Needs, illustrates the complexity and interdependence of the various needs that intelligence analysts prioritize. While analysts may rank their needs differently, there is a clear consensus on the necessity of certain foundational aspects, such as compliance, efficiency, and trustworthiness. For a comprehensive overview, see Table 1. The hierarchy we have developed provides a useful framework for understanding these needs in relation to one another. However, its true value lies in its ability to ground the creation of a robust assessment tool. This tool will be essential for evaluating the effectiveness of existing systems and guiding the development of future systems tailored to meet individual and organizational needs.
Our findings underscore that the needs of analysts, while hierarchical in nature, are deeply interconnected, meaning that any disruption to one aspect—such as customization or trust—can ripple through the system, undermining overall effectiveness. As such, the creation of tailored systems that strike a balance between adaptability and functionality is crucial. The assessment framework derived from this hierarchy will allow for a more nuanced evaluation of systems, ensuring that they are designed to meet the evolving demands of intelligence work while remaining grounded in the practical realities of analysts’ day-to-day tasks. The development of this hierarchy is not merely theoretical but a practical tool for improving system design. It offers a pathway to create adaptive, efficient, and user-centric tools closely aligned with intelligence analysts’ unique and often shifting requirements. By anchoring future assessments in this framework, we can ensure that the systems built on these insights will effectively support the analysts’ needs, enhancing individual performance and organizational outcomes.

6. High-Fidelity Prototype Evaluation

In the spring of 2022, graduate students enrolled in a graphic design course at [a large southeastern (U.S.) university—omitted for review] were assigned a project to design high-fidelity prototypes for what a “tailored daily report” could look like for an intelligence analyst. Students were split into one of three design teams and were assigned a user persona of a type of analyst in a particular scenario within a specified use case to simulate a real-world situation. Each design team was asked to identify specific tasks the analyst would perform via the interface. Students used supplied research, secondary research, and user interviews with intelligence analysts to inform the design decisions and account for different individual characteristics of at least two users in the same type of job role performing the same tasks for the specified scenario. To account for emerging technologies, students were also asked to explore how their interface might use machine learning capabilities to personalize and improve the user experience of these tasks. Each design team received feedback from intelligence analysts on their high-fidelity design prototypes while in development, as well as the final version of their interface design, which were recorded as “critique” videos.
With the development of the hierarchical model and user feedback from intelligence analysts on design prototypes of the TLDR, our project team elected to evaluate two high-fidelity prototypes by comparing the “critique” videos to see whether the values elicited from intelligence analysts in the FDGs would be reflected in these early conceptions of a TLDR interface as well. We present each with a brief introduction of task context, persona involved, and related system design, followed by our reflection on the critiques through the lens of our hierarchical model.

6.1. Case Study 1: The Hive

6.1.1. Persona and Task

The first high-fidelity prototype reviewed was developed for Ron, a fictional language analyst specializing in the fictitious foreign language Kobian and a subject-matter expert on Kobian nuclear weapons development. He was asked to lend his Kobian language skills to assist the agency that focuses on the leadership decisions of the Kobian presidential administration, which is a subject he does not know much about. Specifically, Ron was tasked with triaging large amounts of foreign language communications that are primarily speech- and text-based data from sources that may be connected to the leadership decisions of the Kobian administration in order to find out as much information as he could about events related to Kobian diplomacy towards China following the visit.

6.1.2. System Design

Graduate students enrolled in MGXD Design Studio II designed The Hive (see Figure 3), a highly adaptive information module cluster to keep Ron on top of his task. Assuming full support of the machine learning model trained on Ron’s profile, the system prioritizes his tasks based on importance (to the left) and relevance (the deeper color). The system provides in-time contextual information about updates from the whole team as well as customer interest changes, and offers content recommendations based on Ron’s selected tags on the right panel.
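As a rough illustration of the described encoding (position for importance, color depth for relevance), the sketch below maps hypothetical tasks into such a layout. The task records, column ordering, and 0–255 shade scale are our assumptions, not the design team’s implementation.
```python
# Map tasks into the Hive's described encoding: more important tasks are
# placed further left, and deeper color marks higher relevance.
# All task data and the 0-255 shade scale are illustrative assumptions.
tasks = [
    {"title": "Triage Kobian intercepts", "importance": 0.9, "relevance": 0.8},
    {"title": "Team update digest", "importance": 0.4, "relevance": 0.6},
]

def hive_layout(tasks):
    ordered = sorted(tasks, key=lambda t: t["importance"], reverse=True)
    return [{"column": i,                           # column 0 = leftmost
             "shade": round(255 * t["relevance"]),  # deeper = more relevant
             "title": t["title"]}
            for i, t in enumerate(ordered)]

for cell in hive_layout(tasks):
    print(cell)
```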

6.1.3. Reflection

The Hive interface was generally well-received, with several comments addressing how the interface would be a welcomed improvement over the tools in current use. However, while their interface received a considerable amount of positive feedback from intelligence analysts, their feedback interpreted through the lens of the model hierarchy would suggest the design team prioritized higher-order needs (e.g., customizable) without much consideration of the more fundamental lower-order needs that should be met first. This is best illustrated by the design team’s decision to allow the AI to generate task summaries without addressing the technological constraints associated with using AI to generate narrative summaries. Notably absent from their interface were indicators delineating the sources of information or confidence metrics associated with AI-generated content. Our focus group discussions underscored the significance of the “trust but verify” ethos among analysts, and this interface prototype appears to misconstrue its essence. Such omissions not only compromise the trustworthiness criterion of our model but also impinge on its reliability facet. Intelligence analysts, continually adapting to a barrage of emerging technologies, will eschew tools that do not measure up to their established standards [2]. If a tool fails to engender trust, its reliability is inherently compromised, rendering it unsuitable for the very analysts it purports to serve.
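A minimal sketch of the kind of provenance the critique finds missing: an AI-generated summary record that carries its sources and a confidence value so an analyst can “trust but verify”. The record structure, identifiers, and confidence value are our assumptions for illustration, not part of the Hive prototype.
```python
from dataclasses import dataclass

@dataclass
class GeneratedSummary:
    """An AI-generated summary annotated to support 'trust but verify'."""
    text: str
    source_ids: list   # identifiers of the reports the summary draws on
    confidence: float  # model-reported confidence in [0, 1]

    def render(self) -> str:
        cites = ", ".join(self.source_ids) if self.source_ids else "NO SOURCES"
        return (f"{self.text}\n"
                f"  sources: {cites}\n"
                f"  confidence: {self.confidence:.0%}")

# Hypothetical summary; content, identifiers, and confidence are invented.
print(GeneratedSummary("Kobian delegation extended its stay in Beijing.",
                       ["RPT-0417", "RPT-0432"], 0.72).render())
```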

6.2. Case Study 2: Data Board and Sunburst Chart

6.2.1. Persona and Task

The Kronos Incident is a benchmark dataset provided in the 2014 VAST Challenge, which describes a fictional mass kidnapping scenario. The leaders of the fictional GAStech are celebrating their new-found fortune resulting from the initial public offering of their very successful company when several of their employees go missing. Meanwhile, a fictional organization known as the Protectors of Kronos (POK) is suspected in the disappearance. Nyah, a senior search and discovery analyst (SDA), serves on the “Surge Team” for this crisis. Data available for this task include current and historical local news, exchanged company emails, GIS tracking data for GAStech employees, and real-time feeds of microblogs and emergency calls.

6.2.2. System Design

With several information search tools at Nyah’s disposal, she would need a system that can help her run queries, organize reports, and bring updates as the situation evolves. Graduate students enrolled in MGXD Design Studio II designed a Situational Awareness Interface (SAI) for this scenario. It consists of a data board on the left presenting text summaries of the current situation, customer requests, and supervisor instructions; a timeline visualization along the bottom to maintain the event sequence; and an event recommender on the top right, with a sunburst chart presenting current and newly added events and illustrating possible detailed connections between entities (see Figure 4). The design also brought in the concept of using digital watches to bridge information flow between working sessions (e.g., when Nyah goes for a coffee, the watch receives and displays new information, allowing her to keep track of the evolving situation).

6.2.3. Reflection

The SAI design raised a wide range of discussions. Intelligence analysts generally liked the interface, especially the sunburst chart, which links information across different granularities. However, analysts raised concerns about the trustworthiness of recommendations from a machine learning pattern detector, stating that addressing the uncertainty of information at that level is critical before moving the concepts to real applications. Also, unexpectedly for the design team, the inclusion of digital watches in such a scenario raised more concerns than approvals: digital assets like smartwatches are not compliant in a highly sensitive IA environment. A common pattern across both controversies is that the design efforts focused heavily on higher-order needs, like contextual awareness, without necessarily consolidating lower-order needs, like trustworthiness and reliability, and even more fundamental principles like compliance in an IA community. To conclude, the evaluation underscores the necessity of our hierarchical model in development. Having a clear structure of needs from the community is a crucial step before any design effort, for it bridges the gap where top-down design meets bottom-up needs, or at the very least establishes common ground that makes iterative design progress much more efficient.

7. Discussion

While this study warrants further exploration and deeper analysis, our initial findings from the FDG exercises offer valuable insights. Our hierarchical model underscores the ever-evolving nature of intelligence analysts’ needs, which can pivot based on immediate tasks, overarching mission goals, or broader organizational shifts. This nuanced understanding underscores the importance of creating adaptive tools, subtly emphasizing system designers’ need to be agile and attuned to these dynamic requirements.
The model might present needs hierarchically, but the discussions from our exercises showed how interconnected these needs are. For instance, according to our analysts, a system’s efficiency and functionality are closely linked to its trustworthiness and reliability. Neglecting one tier of our model might impact the fulfillment of another. This interconnectedness not only underscores the necessity for a holistic approach when addressing these needs but also brings to light the inherent challenges in system design. Specifically, striking a balance between tailoring and customization becomes paramount, as it requires systems to cater to individual preferences while aligning with broader team or organizational goals. This balance is crucial for both individual productivity and team success.
The emphasis on context-awareness and relevance shows that analysts need systems that can adapt to their ever-changing work environment. Systems need to be agile, adjusting to the shifting landscapes of intelligence work. In this context, trust, seen as a higher-order need in traditional models, emerges as foundational in our hierarchy, underscoring its pivotal role in the dynamic realm of intelligence. This shift in perspective emphasizes the importance of trust as a cornerstone for other needs. Additionally, integrating ethical considerations into system design is paramount in the intelligence domain. Systems must not only be efficient and functional but also uphold principles such as fairness, accountability, and transparency. By embedding ethical guidelines into the development process, designers can ensure that tools not only meet analysts’ operational needs but also align with broader societal values and legal requirements. This ethical alignment reinforces trust, not just within the analytic community but also among stakeholders and the public, ultimately enhancing the credibility and legitimacy of intelligence operations.
Furthermore, the creation of the model hierarchy provides a theoretical basis for practical application. For example, future research could develop a theoretically based, psychometrically sound, user-first measurement instrument to formally evaluate and guide future refinement of analytic tools, enhancing synergies and addressing tensions between needs and values in future analytic tool evaluations. Such an approach could enable system designers to identify necessary adjustments to their tools and understand how different needs interact within the system. In this way, the model hierarchy becomes a compass, offering direction for balancing competing requirements, such as customization versus efficiency and functionality, and ensuring that these design considerations remain aligned with the evolving nature of intelligence work. Perhaps most importantly, it would further enable the evaluation of future iterations of TLDR prototypes in a standardized yet user-centered manner, refining theoretical shortcomings with additional input and rigorous testing to validate both the theory and the reliability and validity of any assessment.
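As an illustration of what such an instrument’s scoring might look like, the sketch below weights per-level ratings, with foundational levels weighted most heavily. The 1–5 scale, the weights, and the ratings are all hypothetical assumptions, pending the validation work described above.
```python
# Hypothetical weighted scoring against the Analyst's Hierarchy of Needs.
# Weights favor foundational levels; both the weights and the 1-5 rating
# scale are assumptions for illustration, not a validated instrument.
LEVEL_WEIGHTS = {
    "efficient and functional": 0.30,
    "trustworthy and reliable": 0.25,
    "context-aware and relevant": 0.20,
    "tailored": 0.15,
    "customizable": 0.10,
}

def assess(ratings):
    """Return a weighted score in [1, 5] for a prototype."""
    return sum(LEVEL_WEIGHTS[level] * ratings[level] for level in LEVEL_WEIGHTS)

# Ratings loosely echoing the Section 6 critiques (hypothetical).
hive_ratings = {"efficient and functional": 4, "trustworthy and reliable": 2,
                "context-aware and relevant": 4, "tailored": 4,
                "customizable": 5}
print(f"Overall: {assess(hive_ratings):.2f} / 5")  # Overall: 3.60 / 5
```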
By creating this model hierarchy, the analytic community is one step closer towards systems that are adaptable, responsive, and tailored to the analysts’ shifting demands. Ultimately, an assessment grounded in our model would enable the creation of more responsive and targeted tools that meet intelligence analysts’ complex and diverse needs, ensuring that these systems remain effective in real-world applications.

7.1. Theoretical Limitations

Although Maslow’s hierarchy is a useful way to organize needs, we acknowledge that it is not without criticism. For example, Maslow’s proposition that individuals must satisfy their lower-order needs first (e.g., physiological needs) before they can progress to next-level needs (e.g., safety needs) has since been disproven [13]. It is now understood that people can be motivated by higher-order needs regardless of the fulfillment of their lower-order needs.
Considering this, we propose two alternative frameworks for future research. First, in acknowledging the complexity and nuances of intelligence analyst workflows, we suggest a model that incorporates multiple outcomes without a hierarchical structure. This “Multiple-outcomes model” would explicitly recognize the importance of multiple and different outcomes, such as efficiency and trustworthiness. Second, instead of merely listing variables as outcomes in a theoretical model, we propose that future research consider integrating multiple goals, again without adhering to a hierarchy. This “Multiple-goals model” could address the tensions (or synergies) among needs or values that intelligence analysts hold regarding their tools. For example, a multiple-goals approach may present opportunities to understand and navigate tensions between design features tailored to team-wide objectives and those intended for individual customization or personalization.
Given the renewed complexity regarding need satisfaction [13], we welcome the added nuance in the interpretation of our needs hierarchy model. At minimum, we are confident our model will offer greater clarity to future SCADS participants by providing them with a general framework of intelligence analysts’ needs, helping them accomplish the SCADS grand challenge by aiding in the evaluation of a system that can generate tailored daily reports (TLDRs) for intelligence analysts. Additionally, the exceptions to satisfying lower-order needs first might indicate that, from an architect’s point of view, systems should preserve communication channels linking different levels of needs.

7.2. Practical Limitations

While all participants managed to cooperate in both focus group discussions, a perceived social barrier could have persisted. This could be attributed to the mixed experience levels of participants, where higher-ranking or more experienced participants interacted with their lower-ranking or less experienced counterparts, or it might stem from personality differences. A well-documented deference to expertise exists within the analyst community, as this is how the mysterious and highly specialized training of tradecraft is passed on [14].
Our findings reveal that intelligence analysts have a unique hierarchy of needs distinct from those in conventional work environments. However, it is important to note that these findings were derived from a limited sample of intelligence analysts, who may not fully represent the entire community. Future studies would benefit from a larger sample size to ensure theoretical saturation is met. Future iterations of the SCADS offer a unique opportunity to validate the model with a more representative and diverse sample of analysts. Alternatively, the IC might consider validating the model with intelligence analysts in-house.
The research team was acutely aware of the potential for bias during the evaluations of the FDG exercises, which are critical to maintaining the integrity of the study. Still, we acknowledge that our pre-existing perceptions may have inadvertently led to confirmation bias, stemming from our established views of analysts’ varying knowledge and experience levels. This issue could have been compounded by recall bias, which might have affected the analysts during the debriefing session. While these biases were not intentionally integrated into the exercise, recognizing their possible influence on the analysts’ responses is crucial.
In addition to these biases, it is essential to highlight two other factors that could have influenced the dynamics of the analyst group. First, the presence of dominant personalities, a common phenomenon in focus groups, could significantly alter group dynamics by suppressing less dominant voices. Second, the potential influence of the facilitators could have steered the course of discussions and, consequently, affected the outcomes.
Moreover, we must consider the cultural and contextual differences among our analysts. These differences could significantly impact group dynamics, affecting interactions and discussions within the group. By recognizing and outlining these limitations, we not only maintain the integrity and validity of our work but also provide valuable insights for future research in this field.

8. Conclusions

In the face of an ever-evolving landscape, the U.S. intelligence community (IC) grapples with myriad challenges. The surge in data, the advent of disruptive technologies, and the emergence of new global threats have the potential to upend traditional intelligence processes, tradecraft, and priorities. As highlighted by CSIS (2021), there is tangible resistance to change, a backdrop of escalating threats, and a growing realization that the IC no longer holds the exclusive mantle as the primary intelligence source for policymakers. These challenges underscore the pressing need for the IC to embrace emerging technologies. However, this is easier said than done, given prevalent skepticism, a historical under-investment in digital expertise, and a deeply ingrained cultural resistance to change.
Innovators, users, acquirers, and providers must be cohesively aligned to navigate this complex terrain; such alignment is the cornerstone of effective procurement and seamless integration of cutting-edge technologies. Tangible progress in this domain also requires stakeholder buy-in from the relevant governmental agencies, whose endorsement and active participation are pivotal in ensuring that innovations are not only technologically sound but also aligned with the broader strategic objectives of the IC.
Merging Nielsen’s heuristics with our hierarchical model will be pivotal in shaping the future direction of our system design evaluations. By doing so, we can create systems that are not only user-centric but also robust in addressing the multifaceted needs of intelligence analysts. This combined approach, bolstered by stakeholder buy-in, offers a roadmap for the IC to remain agile, adaptive, and responsive in a world of constant change.
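As one illustration of how such a merger might be operationalized in an evaluation rubric, the sketch below pairs each level of our hierarchy with a plausible subset of Nielsen’s ten usability heuristics. The heuristic names are Nielsen’s; the pairings themselves are our hypothetical reading, offered as a starting point rather than a validated mapping.

```python
# Hypothetical rubric pairing each hierarchy level (bottom to top)
# with a plausible subset of Nielsen's ten usability heuristics.
# The pairings are illustrative, not validated.
RUBRIC: dict[str, list[str]] = {
    "Efficient and Functional": [
        "Flexibility and efficiency of use",
        "Error prevention",
    ],
    "Trustworthy and Reliable": [
        "Visibility of system status",
        "Help users recognize, diagnose, and recover from errors",
    ],
    "Context-Aware and Relevant": [
        "Match between system and the real world",
    ],
    "Tailored": [
        "Recognition rather than recall",
    ],
    "Customizable": [
        "User control and freedom",
    ],
}

def checklist(level: str) -> list[str]:
    """Heuristics an evaluator would walk through at a given level."""
    return RUBRIC.get(level, [])

for level, heuristics in RUBRIC.items():
    print(f"{level}: {'; '.join(heuristics)}")
```

An evaluator could then work through the rubric from the bottom level upward, mirroring the necessary-to-aspirational spectrum of the hierarchy, before judging a candidate system against the higher, more aspirational levels.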

Author Contributions

Conceptualization, A.E.G., J.C.P., W.W. and R.J.C.; Methodology, A.E.G. and J.C.P.; Investigation, A.E.G., J.C.P. and W.W.; Writing—original draft, A.E.G., J.C.P. and R.J.C.; Supervision, R.J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Laboratory for Analytic Sciences during the 2023 Summer Conference on Applied Data Science at North Carolina State University.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

This material is based upon work performed, in whole or in part, in coordination with the Department of Defense (DoD). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DoD and/or any agency or entity of the United States Government.

Conflicts of Interest

One of the authors (R. Jordan Crouser) is guest editing this special issue and will recuse himself from any adjudication of this work. The funders had no role in the design of the study; in the analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. A Brief History of the Intelligence Community in the United States

The intelligence community (IC) in the United States traces its origins to World War I, operating on a small scale prior to the infamous Zimmermann Telegram. That event thrust the US onto the world stage and propelled the military into building cryptologic communities. Victory in the Second World War was attributed in significant part to the rapidly developed work of the IC. For example, the German Enigma, one of the most well-known cipher machines, was used to encrypt tens of thousands of tactical messages during World War II; through the tireless work of Allied cryptologists, many of those messages were deciphered. These accomplishments, forged under the pressure of wartime and the tragedy of Pearl Harbor, resulted in a technological breakthrough in the intelligence community.
Scrutiny of intelligence community activities intensified decades later. Driven initially by President Nixon in the early 1970s and deepened by a 1971 Office of Management and Budget study as well as work by the Pentagon’s Office of Net Assessment, a perception took hold that intelligence analysis was not keeping pace with technological collection, and these organizations expressed distinct dissatisfaction with the “sophistication” of intelligence production. Nixon compounded this dissatisfaction in November 1971 by demanding that effort be placed into upgrading analysis personnel and methods.
These factors, together with the international environment of the time and the growing volume of available information, spurred new methodologies and tools for conducting intelligence analysis. Automated data handling systems, the adoption of computers, and the formation of the Product Assessment Group (PAG) were all products of this intense external pressure [12]. In September 1972, the PAG’s successor, the Product Review Group (PRG), published a memo conceding the difficulty of instilling new analysis techniques and a spirit of innovation within the analyst community. Analysts expressed disbelief or distrust in the newly proposed methodologies, and the memo suggested that progress would require both the passage of time, allowing newer, more innovation-minded people to enter the workforce, and the selection of a specific intelligence problem that the proposed methods could tackle.
Despite this culture of resistance, organizational support for forward thinking remained, as noted in a 1975 final report on analytic training in the CIA: “While there is no magic formula for making a good analyst, we do consider that the analyst of the next 25 years will rely less than in the past on documentary and historical tools and more upon mathematical and computer-assisted analytic methods…”
This vision was not pursued with vigor, however, and the movement toward innovation slowed to a crawl over the next two decades. That reluctance changed with the events of 9/11 and the 2003 WMD crisis that precipitated the Iraq War. These massive events are widely regarded as intelligence failures and became incentives for change in the intelligence community. The 2004 Intelligence Reform and Terrorism Prevention Act (IRTPA) encouraged sound analytic methods and tradecraft, as well as alternate means of analysis for verification; these encouragements became the rigorous guidelines under which the IC currently operates, codified in the 2007 Intelligence Community Directive (ICD) 203, Analytic Standards [15].

References

  1. Ahrend, J.M.; Jirotka, M.; Jones, K. On the collaborative practices of cyber threat intelligence analysts to develop and utilize tacit Threat and Defence Knowledge. In Proceedings of the 2016 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), London, UK, 13–14 June 2016; pp. 1–10. [Google Scholar] [CrossRef]
  2. Marrin, S. Training and Educating U.S. Intelligence Analysts. Int. J. Intell. CounterIntell. 2009, 22, 131–146. [Google Scholar] [CrossRef]
  3. Johnston, R. Analytic Culture in the United States Intelligence Community: An Ethnographic Study; Number 14; Central Intelligence Agency: Langley, VA, USA, 2005.
  4. Levine, R. Principles of Intelligence Analysis. Stud. Intell. 2021, 65. [Google Scholar]
  5. Rogers, E.M. Diffusion of Innovations; Ohio State University: Columbus, OH, USA, 1962. [Google Scholar]
  6. Design Council. Design for Planet Festival 2023: Full Programme. 2023. Available online: https://www.designcouncil.org.uk/fileadmin/uploads/dc/Documents/Press_Releases/Design_for_Planet_Festival_2023_full_programme_release_-_6_Sept_2023.pdf (accessed on 14 September 2023).
  7. Wang, X.; Huang, Z.; Xu, T.; Li, Y.; Qin, X. Exploring the Future Design Approach to Ageing Based on the Double Diamond Model. Systems 2023, 11, 404. [Google Scholar] [CrossRef]
  8. Jarrett, C.; Baxter, Y.C.; Boch, J.; Carrasco, C.; Muñoz, D.C.; Dib, K.M.; Pessoa, L.; Saric, J.; Silveira, M.; Steinmann, P. Deconstructing design thinking as a tool for the implementation of a population health initiative. Health Res. Policy Syst. 2022, 20, 91. [Google Scholar] [CrossRef] [PubMed]
  9. Murty, P. Discovery Processes in Designing. Ph.D. Thesis, University of Sydney, Sydney, Australia, 2006. [Google Scholar]
  10. Charmaz, K.; Belgrave, L. Qualitative interviewing and grounded theory analysis. SAGE Handb. Interview Res. Complex. Craft 2012, 2, 347–365. [Google Scholar]
  11. Gubrium, J.F.; Holstein, J.A.; Marvasti, A.B.; McKinney, K.D. The SAGE Handbook of Interview Research: The Complexity of the Craft, 2nd ed.; SAGE, 2012. Available online: https://methods.sagepub.com/book/handbook-of-interview-research-2e/n25.xml (accessed on 11 July 2023).
  12. Marchio, J. Overcoming the inertia of ‘old ways of producing intelligence’—The IC’s development and use of new analytic methods in the 1970s. Intell. Natl. Secur. 2021, 36, 978–994. [Google Scholar] [CrossRef]
  13. Desmet, P.; Fokkinga, S. Beyond Maslow’s Pyramid: Introducing a Typology of Thirteen Fundamental Needs for Human-Centered Design. Multimodal Technol. Interact. 2020, 4, 38. [Google Scholar] [CrossRef]
  14. Center for the Study of Intelligence; Johnston, R. Analytic Culture in the U.S. Intelligence Community: An Ethnographic Study-Working as an Intelligence Analyst, Central Intelligence Agency (CIA) Intelligence Papers; Amazon Digital Services LLC—KDP Print: Seattle, WA, USA, 2017. [Google Scholar]
  15. Center for the Study of Intelligence. Analytic Culture in the U.S. Intelligence Community. Available online: https://irp.fas.org/cia/product/analytic.pdf (accessed on 18 July 2023).
Figure 1. Focused Discovery is the first phase of the UK Design Council’s double-diamond model. In this paper, we focus primarily on the divergent ‘Discover’ stage. Icons courtesy of Flaticon.com.
Figure 2. Visual representation of the comprehensive hierarchical model: distilling key themes, patterns, and insights from the focus discovery exercises conducted at the summer conference on applied data science (SCADS) 2023. An image of the hierarchy model presented as a pyramid, from bottom (1) to top (5), reads: 1. Efficient and Functional, 2. Trustworthy and Reliable, 3. Context-Aware and Relevant, 4. Tailored, 5. Customizable. Beside the pyramid, a double arrow on the edge presents the spectrum of need: towards the bottom are necessary features, and towards the top are aspirational features.
Figure 3. An overview of The Hive interface’s design: The main panel consists of multiple hexagon information modules to help linguistic analysts focus on their current task. Peripheral interfaces surrounding what the analyst is currently working on offer interactive summaries of critical documents, labels associated with the summaries, and on-demand details of the summaries if they wish to clarify information.
Figure 4. An overview of the Sunburst interface’s design: The main panel consists of sections relevant to situational awareness, including a textual summary of relevant information and a timeline of events for knowledge workers and people of interest, with an exploded view to assess patterns.
Table 1. Themes, Expressions, and Interpretations from Analyst Feedback.

| Theme | Thematic Expression | Interpretation |
| --- | --- | --- |
| Efficiency and Functionality | “I think we need 16 more stickies that say compliance.” | Compliance is fundamental for analysts working within strict legal frameworks, rendering non-compliant systems unusable. |
| | “We’re analysts; we’re accustomed to slow.” | There is a widespread desire among analysts for more efficient systems, which contrasts with their experiences of slow, poorly integrated tools. |
| Trustworthy and Reliable | “Accuracy and trust are intrinsically linked to relevance.” | Accuracy and trust are essential as analysts rely on their reports to influence decisions at higher levels. |
| | “I want it to guide me… AI should never do it for me… implying that, to some degree, we can verify?” | Analysts prefer AI tools that provide guidance without taking over, ensuring the user can verify the tool’s outputs. |
| Context-Aware and Relevant | “I want something relevant to my current position. It changes as I change.” | Analysts demand systems that dynamically adjust to their current roles and contexts. |
| | “Relevance is a weird word… some things relevant [to the] team aren’t relevant to me.” | Relevance is seen as fluid, varying by context and individual needs within a team. |
| | “Some elements pertinent to the team may not be relevant to me.” | Systems must adeptly navigate varying levels of relevance, balancing team and individual requirements. |
| Tailored | “If today I was focused on this group, then tomorrow, on this group, I’d want a filtering mechanism.” | Analysts value systems that can be precisely adjusted to reflect their focus and needs over time. |
| | “I don’t want it for myself but for the team.” | While some analysts seek individual customization, others emphasize alignment with broader team goals. |
| Customizable | “Control, yes I like control.” | Analysts strongly prefer systems that allow personal control over functionality and interface design. |
| | “If we have the magic slider? Would you like that? YES!” | Enthusiasm for features like a “magic slider” shows a significant preference for highly customizable systems. |
