Playful Probing: Towards Understanding the Interaction with Machine Learning in the Design of Maintenance Planning Tools
Abstract
1. Introduction
2. Background and Related Work
2.1. Aircraft Condition-Based Maintenance
2.2. Playful Probing
3. Initial Exploration of Work
3.1. Human–ML Interaction for CBM
3.2. Maintenance Planning
- Block: predefined routine maintenance, usually heavy and with due dates (e.g., “A-checks”).
- Cluster: usually a flexible, small group of tasks that can be routine or non-routine, such as reactive or preventive maintenance; it can have due dates, an RUL, both, or neither.
- Flight: aircraft movement between airports. No maintenance can be performed on the aircraft during this period.
- Hangar: place where maintenance is performed. It has several restrictions, such as time, materials, and labour. (A minimal data sketch of these concepts follows this list.)
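To keep these concepts concrete for the sections that follow, here is a minimal sketch of how they could be modelled as data structures. All names and fields are our illustrative assumptions, not the implementation behind the probe.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    """Predefined routine maintenance (e.g., an A-check) with a due date."""
    block_id: str
    due_date: str           # date by which the check must be performed
    duration_hours: float

@dataclass
class Cluster:
    """Flexible group of tasks; may carry a due date, an RUL, both, or neither."""
    cluster_id: str
    due_date: Optional[str] = None
    rul_hours: Optional[float] = None  # remaining useful life in flight hours

@dataclass
class Flight:
    """Aircraft movement between airports; no maintenance is possible in flight."""
    origin: str
    destination: str
    departure_hour: float
    arrival_hour: float

@dataclass
class Hangar:
    """Maintenance location constrained by time, materials, and labour."""
    open_hours: tuple  # e.g., (start_hour, end_hour) per day
    teams: int
```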
4. The Process of Playful Probes
4.1. Playful Probe Preparation
- Materials design: The materials presented in Figure 1 were based on the main maintenance concepts set out in Section 3:
- Row: one row represents an aircraft.
- Column: representation of time. One column represents one day.
- Flight: blue ribbons represent aircraft flights in the respective aircraft row.
- Registration time: the time limit to register the aircraft for a maintenance slot is 30 days.
- Open time: the time limit to open a new workscope (create new maintenance) is 21 days.
- Block: red rectangles represent predefined routine maintenance (with due dates from the maintenance planning document). When a block is moved before the registration limit, it must be registered as a new block after this limit.
- Cluster: group of tasks representing other types of A-checks (small maintenance) with a due date, RUL, both, or neither. If a cluster is moved, it must be moved to after the open-workscope limit unless it is joined to a block.
In this phase of the discovery process, we chose to create materials that tried to faithfully represent current maintenance concepts as a starting point. We decided to use only a small speculative detail of CBM maintenance in these materials: the RUL indicator (in flight hours) that was included in some clusters.
- Resolution path: To enable this task in a limited time, we created a structured resolution path. The beginning of the resolution was linear and could progress in only one way. Participants first faced the simplest concepts of flight planning and maintenance. Subsequently, the resolution led to a path where users would necessarily be faced with more complex issues, such as conflicting conditions and 90% confidence RULs. This probe was designed so that, during the rehearsal, the disposition of visual artefacts would confront participants with situations that could lead to debate and the generation of insights. The main ones were:
- Introducing block and cluster grouping: Is it possible to group all the maintenance into these two typologies? How do we deal with the deadlines of each type?
- Introducing estimates with 90% confidence: Does it make sense to have a large degree of uncertainty? How do we represent it to enable decisions? (A small numerical sketch of such an estimate follows this list.)
- Material digitalization: To prepare the virtual workshop, all artefacts were designed digitally but printed and rehearsed manually, as is common in paper prototyping exercises (Figure 1). After testing multiple approaches to instrumenting the playful probing with visual artefacts, we adjusted their size and complexity, and the exercise was migrated to the digital collaboration tool (Figure 2).
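As a minimal numerical sketch of what a “90% confidence RUL” can mean in these materials: assuming, purely for illustration, a normally distributed RUL estimate, the deadline shown to participants would be the flight hour the component outlives with 90% probability. The distribution choice and all numbers below are our assumptions.

```python
from scipy import stats

def rul_deadline(mean_rul_h: float, std_h: float, confidence: float = 0.90) -> float:
    """Flight hours after which failure probability exceeds 1 - confidence.

    Assumes a normal RUL distribution (an illustrative choice); the deadline
    is the lower quantile, i.e. the point the component outlives with the
    requested probability.
    """
    return stats.norm.ppf(1.0 - confidence, loc=mean_rul_h, scale=std_h)

# Example: a mean RUL of 65 h with a 6 h spread yields a ~57 h deadline
# at 90% confidence -- scheduling later than this accepts more risk.
print(round(rul_deadline(65.0, 6.0), 1))
```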
4.2. Playful Probe Workshop
4. Briefing: In an initial part of the playful probing workshop, an introduction was made explaining the basic maintenance elements of the game and demonstrating how to solve a simple problem (Figure 3). The canvas represents a fleet of only two aircraft, with flights and maintenance distributed over time, using a minimum block time of 4 h. To simplify the maintenance problem for a first iteration of the game design, only three types of artefacts were created with which the participants could interact (drag and drop), representing the maintenance work of an “A-check”. For simplicity, we assumed that only one hangar with a maintenance team was available, so it was not possible to perform multiple maintenance procedures at the same time (a sketch of this single-hangar rule follows this list). This part lasted around 10 min; the participants cleared up some doubts about the game but did not interact with the artefacts.
5. Running the playful probing participatory design workshop: In this part of the experimental session, artefacts were presented to participants with a non-trivial maintenance scheduling problem to be solved (Figure 2). The participants’ voices and the collaborative canvas were recorded while they presented their ideas and played with the representations to solve each maintenance problem. The facilitator acted as game master, answered participants’ questions about whether they could take certain actions, alerted them when they were ignoring some important condition, and tried to get them to explore the problem boundaries in a dialogue with the material representations. Exploration developed freely to solve each game problem, with no constraints regarding order, time, or the management of concurrency among open explorations; the facilitator favoured out-loud dialogue and the explicit manipulation of the representations as a form of dialogical imagination among participants. Given the habitual nature of play, we expected the emergence of self-directed and highly autonomous activities driven by participants’ playful trajectories actively exploring the boundaries of the gameplay scenario.
6. Debriefing debate: Shortly after the participants solved the planning problem, the focus group took place, in which a broader discussion space was opened to reflect on the current state of maintenance and on how CBM can be used in the future.
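Referring back to the briefing setup in step 4: with a single hangar and one team, no two maintenance slots may overlap. A minimal sketch of that feasibility rule, with hypothetical (start, end) hour tuples, could look as follows.

```python
from itertools import combinations

def hangar_conflicts(slots: list[tuple[float, float]]) -> list[tuple[int, int]]:
    """Return index pairs of maintenance slots that overlap in time.

    With a single hangar and team, any overlap makes the plan infeasible.
    Slots are (start_hour, end_hour) tuples on a common timeline.
    """
    return [(i, j) for (i, a), (j, b) in combinations(enumerate(slots), 2)
            if a[0] < b[1] and b[0] < a[1]]

# Two 4 h checks: the second starts before the first ends, so they conflict.
print(hangar_conflicts([(0.0, 4.0), (3.0, 7.0)]))  # -> [(0, 1)]
```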
4.3. After the Playful Probe Workshop
7. Semi-structured email interview: After reviewing the recordings, specific interview questions were sent to the participants with the intention of clarifying or deepening the reflections they expressed during the play and debriefing phases. The first group of questions focused on the participants’ experience and interpretation of the exercise. The leading questions were: “How did you perceive the experience from the moment the problem appeared with a yellow star to the reached solution? What did you find most challenging, and why?” The second group were speculative questions about using an ML agent to help with maintenance planning. The leading questions were: “How would you briefly narrate a planner using the ML planning/scheduling with this interface? At which moments should the ML be called in to provide a new solution or a partial solution to the planner?” The third group of questions focused on the visualization, interpretation, and control of RUL indicators. The leading questions were: “Did you experience difficulties visualizing/interpreting RUL indicators? Can you anticipate some improvement in the way we present information to give better control to the planner?” The last group of questions focused on the playful probing exercise itself. The leading questions were: “What did you think about the session technique used: should we make some changes? Were the materials limiting in any way that needs to be fixed? Did it help generate or make explicit some insights about the subject matter?” The email interviews were typically an extrinsic reflection on the experience, a post-reflection; they are discussed further in the discussion section.
8. Data collection and in-depth content analysis: The playful probing workshop generated audio and video recordings, dialogue text, and interview transcriptions. A video was made (with informed consent) of the conversation between the participants and their manipulation of game artefacts during the playful probe workshop. All video data (verbal content and actions with materials) were analysed to generate initial codes. Then, the recording was split into 30 s segments and coded into groups (a sketch of this segmentation step follows below). These groupings were based on the intrinsic self-analysis of the experience that emerged in the conversation generated as participants played the scenario. Data collection and analysis of the workshop are described in the next section.
5. Data Collection and Analysis
5.1. Content Coding
5.2. Conversation Analysis
6. Understandable Interactions
Design Insights
- Understandable maintenance representation: Regarding the experience of interpreting the game elements (flights, blocks, clusters of tasks, and plans), the participants generally found it clear, with P1 adding “clear and similar to tools already in use” and P2 saying “this view is actually quite nice to be able to quickly scan the situation”. As can be seen in Figure 5, the participants started by talking about the planning representation and then immediately started to move artefacts at minute 8. During the exercise, P1 verbalized the possibility of also visualizing other kinds of maintenance that do not require a hangar, which is important in a short-term maintenance paradigm such as CBM, concluding, “If it’s a really small problem you can do it during a turnaround”.
- Maintenance package management: With respect to future developments of maintenance artefacts, P1 thought about how to visualise the “benefit you get combining a cluster and a block. Let’s say, after this 30H (of maintenance), there is 1H of towing; that means if you combine them, you save an hour, so the box becomes a little bit smaller”. P2 was also concerned with this kind of plan optimization: “we should combine these two, because it’s a kind of waste bringing them back to the hangar twice in two days”. Both participants were interested in “opening” and splitting clusters into sub-clusters, especially if there was a task whose RUL restricted the entire cluster. This would be useful if there is an update in only one RUL among the possible dozens at any given time, and the best solution is to solve the problem related to this specific RUL and leave the rest of the cluster on the original schedule: “in fact, what we necessarily need to move is not all the work, but part of the work” (P2).
- Maintenance flexibility and control: At some point, P1 considered scheduling two hours of maintenance over the limit and wondered, “what are the consequences of not making the exact Due Date? What are the consequences of having the component fail before the preventive removal?” and “How critical is it if we don’t respect a RUL?”, suggesting that the planner should have the flexibility to schedule tasks at another time if the return is large enough. Participants agreed that we should start from the assumption that the planner knows things that cannot be coded in the model: “the planner might have more data or might have some preferences, some strategies in his head, that make him decide to deviate from the output of the ML algorithm” (P1). Thus, we should assume that s/he can make some changes based on human (tacit) knowledge and turn these into constraints to generate a new solution. This can be done by fixing a particular block or cluster or locking an empty space after some maintenance because s/he “knows there is a risk that they are working in an area where usually have other findings which they need to attempt too as well” (P2). (See the constraint sketch after this list.)
- Manual planning: Complementing the previous point, P2 said that maintainers need some room to schedule clusters because they do not know what kind of corrective tasks they will have in 30 days: “we don’t have the luxury always of having a RUL of more than 100 h (…). The problems pop up, let’s say, in common flights, so we need to act on that right now (…) to find some spot to fix the next couple of days”.
- Maintenance time restrictions: Participants confirmed that fixing blocks as part of A-checks is done respecting the time limits presented in this exercise: “until, like you said, the 20 days to 30 days” (P1). However, “there is also other work, such as modifications, and those you can foresee months upfront; let’s say you want to install wifi on the aircraft: this is not popping up in the short term, but you already know it months in advance” and it can be scheduled in some check.
- Flight and maintenance plan merger: Although planners do not currently visualize flights in maintenance planning, in part because flight planning is done in the short term, they recognized the importance of visualizing flights on the same canvas as maintenance. After participants played with flights, blocks, and clusters, they suggested improvements to make them more complete, such as including turnaround and towing time in the flight artefacts. Participants also suggested presenting the hours per flight. This information may also be important for cooperation with operational planners. A task with low probability and very high impact can trigger a discussion about whether it should be planned, and they should simply accept this schedule if they “have a spare aircraft on standby or have some buffer in the network”; otherwise, they will not take this risk, which may lead to cancellation.
- The role of automatic planning: Participants assumed that there would be some form of automatic planning that would reschedule the entire plan. However, they felt the need to re-plan only part of it. P3 felt the need for a button to “fix the rest” once s/he had made a few choices. P2 had the same question: “How will we be able to lock some parts not to be changed by AI plan recalculation?”. P2 recognized that it is difficult to manually optimize a solution, venting, “Wow, this is endless!”, and, concerning plan optimization, “we should combine these two, because it’s a kind of waste bringing them back to the hangar twice in two days”. Participants agreed that it might be a good idea for the ML agent to automatically group tasks into clusters and propose a solution to the planner. Then s/he must make an assessment and decide what to accept, taking into account that s/he will always be able to adjust the solution that the system has proposed. Both participants highlighted a few occasions when the ML agent could be called to present some solution. P2 said that the ML agent should be called “When a new RUL is introduced, either a change or a new cluster”. P1 also suggested that “an initial proposal to cope with a new ‘problem’ would be nice, indicating the differences the ML propose to make”. P1 also presented an idea similar to some chess applications to improve the interaction between the user and the ML agent: “If you select a block, perhaps see the options of what you can do with that block, before moving”. (See the trigger sketch after this list.)
- Discretionary balance between control and autonomy: Participants also expected the tool to be useful for generating a solution that not only respects the restrictions but also allows limiting the search space to a certain period of time or to some selected aircraft. However, it should show the planner the impact of this limitation: “For example, I gave an 8 h (slack) after maintenance just because sometimes there is an issue, but s/he sees in the planning that it has quite a lot of impact” (P2). P1 agreed that the planner should be able to get some kind of score, or even better, the cost of making changes, “because maybe there are some biases in behavior or maybe (the planner) is used to doing it a certain way”. Further, it must be feasible that this actually helps to achieve better solutions, “not just in time reduction, but also in optimality”.
- Maintenance RUL confidence level: During the run, participants found it easy and clear to understand what needed to be done. However, when confronted with the RUL confidence level, they did not find it easy to interpret, and they treated the RUL as a fixed due date. P2 said it “was quite tricky to estimate what risk you took when you interpreted the RUL”, while P1 said the representation of RUL required some mental effort to visualize: it “was a bit challenging to determine the due dates for the tasks, it required some mental effort”. P1 added during the exercise, “the difference between 95 and 99 in my head is not playing a role”. Despite the difficulty in seeing the impact of the confidence level during the exercise, they made an effort to understand it; e.g., P1 said, “I won’t risk it, because 90% is quite high”.
- Maintenance RUL visualization: Participants suggested automatically visualizing the RUL on the timeline, and P1 also suggested it would be good to “visualize operation impact” such as costs, availability, and the maintenance components, asking P2, “But it could actually depend on what these 65 h are based on, right? What kind of components are we talking about?”. During the exercise, regarding an RUL of 60 h with a confidence level of 90%, P2 suggested, “it would be nice if we could see (…) 65 ± 6 h; then you kind of have an idea of how close to the edge you are”, and when asked if a boxplot could fit, P1 answered, “Yeah, I’m thinking out loud now, but perhaps instead of a square box, it could be a kind of distribution”. At the end of the exercise, P1 took a co-constructive move and started using the collaboration tool to make some design proposals. S/he started to draw what this kind of distribution could look like, as shown in Figure 8: a visual analogy based on how arrival time is modelled, but in this case as a view of the risk. (See the distribution sketch after this list.) A participant added another curve, shown in Figure 9, and said that it was something s/he was not used to, just his/her own idea based on aircraft management with regard to a future CBM scenario. This should be something related to the impact cost: “so if you do this task now, it will cost you something because it will be based on the RUL (…) if you do it too early it’s got a cost because you are wasting the RUL, but if you do it too late it’s gonna cost you because it’s incurring a delay, cancellation, or high repair times. But there is no optimal here, and there is something that you can play with”, referring to the possibility of adjusting the best time to schedule some cluster and getting the respective impact of this move.
- CBM maintenance indicators: When asked whether they have the data needed to build such an impact curve, a participant added, “we know the delay cost, we know the cancellation cost approximately, we know pretty much what the escalated repair cost is, and we know approximately how much RUL costs; what is a bit more difficult is the cost of preventive repair”. P2 presented his/her vision: “we should have a kind of class of component or class of consequences, and depending on that class, it must not run the risk, or it can run the risk of exhausting the RUL”. P1 agreed: “the decision on whether to schedule something should not be just dependent on the description of the task but should also be dependent on the maintenance opportunities and the state of the fleet”, and should “take in consideration the probability that something might fail with a large or small impact”. (A sketch of such an impact-cost curve follows below.)
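Picking up the maintenance flexibility and control insight above: planner locks could be handed to an automatic planner as hard constraints. The sketch below is a minimal illustration under our own assumptions; the types, names, and the `replan` stub are hypothetical, not the authors’ planner.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PinTask:
    """Planner lock: this block/cluster must stay exactly where it was placed."""
    task_id: str
    start_hour: float

@dataclass(frozen=True)
class ReserveSlack:
    """Planner lock: keep an empty buffer after a task for likely extra findings."""
    task_id: str
    hours: float

def replan(schedule: list[str], locks: list) -> list[str]:
    """Illustrative stub: hand the optimizer only the tasks the planner left free.

    A real solver would also honour ReserveSlack buffers when packing the
    hangar; here we only filter out pinned tasks.
    """
    pinned = {lock.task_id for lock in locks if isinstance(lock, PinTask)}
    return [task for task in schedule if task not in pinned]

# The planner pins one block and reserves 8 h of slack after another task.
locks = [PinTask("block-A1", start_hour=96.0), ReserveSlack("cluster-C3", hours=8.0)]
print(replan(["block-A1", "cluster-C3", "cluster-C7"], locks))  # block-A1 stays put
```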
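Regarding the role of automatic planning: P2’s trigger (“When a new RUL is introduced, either a change or a new cluster”) and P1’s request to see the differences the ML proposes suggest event-driven invocation that returns a diff against the current plan. The event names and planner interface below are assumptions for illustration only.

```python
REPLAN_EVENTS = {"rul_updated", "cluster_created"}  # from P2's suggestion

def on_event(event_type: str, payload: dict, planner) -> None:
    """Call the ML agent only for the events participants identified.

    `planner` is a hypothetical interface: it produces a proposal, diffs it
    against the current plan, and presents the differences for the human
    planner to accept or adjust.
    """
    if event_type in REPLAN_EVENTS:
        proposal = planner.propose(payload)            # partial re-plan
        diff = planner.diff_against_current(proposal)  # the differences the ML proposes
        planner.present_to_user(diff)                  # accept / adjust / reject
```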
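For the maintenance RUL visualization insight: the “65 ± 6 h” remark and the distribution drawn in Figure 8 could be rendered roughly as below, assuming (our choice, purely for illustration) a normal RUL distribution; the shaded tail shows the risk accepted when scheduling at the 90%-confidence deadline.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

mean_h, std_h = 65.0, 6.0                       # "65 +/- 6 h" from the discussion
deadline = stats.norm.ppf(0.10, mean_h, std_h)  # 90%-confidence due time

hours = np.linspace(40, 90, 300)
density = stats.norm.pdf(hours, mean_h, std_h)

fig, ax = plt.subplots(figsize=(6, 2))
ax.plot(hours, density, color="tab:red")
ax.fill_between(hours, density, where=hours <= deadline,
                alpha=0.4, label="10% failure risk")  # risk accepted at the deadline
ax.axvline(deadline, linestyle="--")
ax.set_xlabel("flight hours until predicted failure")
ax.set_yticks([])
ax.legend()
plt.show()
```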
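Finally, the impact-cost idea in the last two insights can be made concrete: scheduling too early wastes RUL, scheduling too late raises the expected delay/cancellation/escalated-repair cost, and the sum gives the planner something to play with. All cost figures and the failure distribution below are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy import stats

def impact_cost(t_hours, mean_rul=65.0, std=6.0,
                wasted_rul_cost_per_h=50.0, failure_cost=20000.0):
    """Expected cost of scheduling the task at flight hour t (illustrative).

    Too early: unused life is thrown away at a cost per hour.
    Too late: probability of failing before t times the consequence cost
    (delay, cancellation, escalated repair).
    """
    wasted = np.maximum(mean_rul - t_hours, 0.0) * wasted_rul_cost_per_h
    p_fail_before_t = stats.norm.cdf(t_hours, mean_rul, std)
    return wasted + p_fail_before_t * failure_cost

ts = np.linspace(40, 80, 801)
print(f"cheapest slot ~ {ts[impact_cost(ts).argmin()]:.1f} flight hours")
```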
7. Conclusions
Author Contributions
Funding
Informed Consent Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
| --- | --- |
| CBM | Condition-Based Maintenance |
| ML | Machine Learning |
| AI | Artificial Intelligence |
| AMP | Aircraft Maintenance Planning |
| RUL | Remaining Useful Life |