An Overview of Verification and Validation Challenges for Inspection Robots
Abstract
1. Introduction: Sending Robots to Look at Things
1.1. Why?
- Chemicals and Radiation—both in space and on earth, e.g., nuclear fission/fusion.
- Explosions and Fire—exploring burning buildings, forest fires, rescue scenarios, etc.
- Liquids and Flows—underwater, in sewers, handling strong currents, external pipeline inspection, inspection inside pipelines, etc.
- Wind and Weather—inspecting structures such as buildings, turbines, or oil-rigs, in bad weather, or inspecting meteorological phenomena themselves.
- Pressure and Heat—e.g., from water (deep sea), from earth (deep excavations), from sunlight, as well as from fire.
- Lack of atmosphere—e.g., space exploration, deep sea environments.
- Dangerous flora or fauna—e.g., inspecting hornets’ nests (Inspecting Hornet Nests—https://www.bbc.co.uk/news/world-europe-jersey-41141330 (accessed on 28 April 2021)), or areas infected with dangerous microorganisms.
- Inspecting structures for leaks, for corrosion, or for general wear-and-tear;
- Security inspection within restricted areas;
- Monitoring human safety, e.g., an isolated person;
- Conservation tasks and wildlife monitoring (Spotting sharks—https://www.youtube.com/watch?v=z007p-8e_QA (accessed on 28 April 2021)).
1.2. What?
- a variety of construction materials—metal, soft materials, nano-technology, etc.;
- a variety of sensors for inspection—infrared, pressure, lidar, radioactivity, heat, non-destructive testing (NDT) components, etc.;
- a variety of propulsion mechanisms—wings, wheels, propellers, buoyancy, etc.;
- a variety of communication mechanisms—radio, WiFi, speech, etc.; and
- a variety of software components—for deep learning, explanation, planning, decision-making, navigation, etc.
1.3. How?
1.4. Where?
- land (underground)—e.g., tunnel inspections;
- land (surface)—e.g., looking for pot-holes on a road surface;
- land (extra-terrestrial)—e.g., looking for specific minerals on a planet;
- water (surface)—e.g., assessing wind turbine support structures;
- water (sub-surface)—e.g., assessing sea-bed pipelines;
- air—e.g., assessing high oil-rig structures;
- space—e.g., assessing satellite structures;
- in confined/constricted spaces—e.g., assessing internals of pipes;
- in adverse weather conditions—e.g., carrying out inspections of structures in high winds;
- … and so on …
- a range of different dangers that the human, and now the robot, might be exposed to,
- a range of different inspection tasks to be carried out, and
- a range of different requirements that we place on our robots.
1.5. Research Questions and Paper Outline
- RQ1: What are the verification and validation challenges for robotic remote inspection systems?
- RQ2: What existing verification and validation techniques can we use to tackle the challenges found in answering RQ1?
- RQ3: What are some of the remaining challenges not tackled by the existing techniques found in answering RQ2?
2. Verifying and Validating Responsibility: Who or What Is “In Charge”?
2.1. Direct Human Control
2.2. Remote Human Control
- physical testing;
- user validation—asking operators how they find use of the system;
- simulation-based testing (and training);
- formal verification, for example as part of corroborative verification [3].
2.3. Supervised, or Semi-Autonomous, Behaviour
- Traditionally, analytic stability/avoidance proofs for control algorithms based on control theory are used [4].
- More recently, simulation-based testing [5] has been used to develop a virtual prototype, or digital twin, of the unmanned vehicle.
- At later stages in the product development lifecycle, hardware-based techniques such as hardware-in-loop and real-world testing can be used [3]. The latter can be costly, however, and is generally left until the prototype has reached a mature point in its development.
- A central concern when using ground control stations with semi-autonomous systems, such as unmanned aircraft, is what happens if the communication link with the unmanned aircraft is disrupted. In the worst case, the unmanned aircraft could become an ‘uncontrolled missile’. Consequently, it is necessary to have some fallback systems on-board the aircraft to take over and maintain safe operations [8].
- Since safety parameters need to be assigned for the robot or vehicle, there will be a considerable amount of calibration needed to ensure the appropriate levels that retain both effectiveness and safety. This calibration might potentially have to be carried out separately for every new mission.
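The fallback and calibration concerns in the last two items can be made concrete with a minimal sketch. The vehicle interface, parameter names, and values below are assumptions made purely for illustration, not part of any cited system; a deployed fallback mechanism would follow guidance such as IEEE P7009 [8].

```python
import time
from dataclasses import dataclass

@dataclass
class SafetyParameters:
    """Mission-specific bounds; these must be re-calibrated for each new mission."""
    max_speed_mps: float       # maximum permitted speed
    geofence_radius_m: float   # maximum distance from the launch point
    link_timeout_s: float      # command-link silence tolerated before fallback

def supervise(vehicle, params: SafetyParameters) -> None:
    """One supervisory check; intended to be called at a fixed rate."""
    # Enforce the calibrated operational bounds.
    if vehicle.speed() > params.max_speed_mps:
        vehicle.limit_speed(params.max_speed_mps)
    if vehicle.distance_from_home() > params.geofence_radius_m:
        vehicle.return_to_home()
    # Fallback when the operator link is lost (the 'uncontrolled missile'
    # concern): after a period of silence, hand over to an on-board safe
    # behaviour such as returning home or landing.
    if time.monotonic() - vehicle.last_command_time() > params.link_timeout_s:
        vehicle.return_to_home()
```

Even such a simple loop exposes the calibration question raised above: the three numbers in `SafetyParameters` must be chosen, and justified, separately for each mission and platform.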
2.4. Autonomy
- If we have decisional bounds, or a set of prescribed decisions, then we can:
- formally verify that the system always works within these restrictions;
- test a range of scenarios to ensure that the system always works within these restrictions—this might be real physical testing or testing in simulation;
- use runtime verification to monitor whether the system follows these restrictions in practice (a minimal monitor sketch is given after this list).
- If we have fully autonomous decision-making, then we can formally verify this process to ensure that the system always tries to make the right decisions and takes the important aspects into account [9]. Note that we can never guarantee what the external effects of any decision will be.
- Once we delegate some (limited) autonomy to the system then new issues come into play. For example, are correct decisions always made, given the information available? Moreover, are these decisions made for the right reasons, i.e., the robot has ‘the best of intentions’? Related to this, what guarantees can we give? Can we formally verify this?
- Particularly concerning the aspects delegated, can the system effectively explain its decisions to humans? This, and the previous issue, rely on whether the decision-making is transparent, which is a basis both for explainability and verifiability.
- How effective and reliable are self-awareness and fault-recognition capabilities? For example, can a robot detect when one of its sensors is working badly? Similarly, given health management procedures, is the robot still able to function? In particular, in spite of failures, can the robot still achieve its aims?
- How is privacy dealt with, for example ensuring that personal data is not revealed to the wrong people? What guarantees (verification) can we provide for this aspect [10]?
- Finally, is the locus of responsibility clear? Who is in charge, the robot or the person, and who or what is to blame if things go wrong? Again, verification (and validation) is needed for these crucial issues.
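As a rough illustration of runtime monitoring of decisional bounds, the sketch below checks each decision in a trace against a prescribed set of permitted actions and a simple guard condition. The action names, guard, and trace are invented for illustration; practical monitors (see Section 4.8) are typically generated from formal specifications rather than written by hand.

```python
from typing import Callable, Iterable

# The decisions the autonomous system is permitted to take: an illustrative
# stand-in for a formally specified decisional bound.
PERMITTED = {"continue_inspection", "hold_position", "return_to_base"}

def monitor(decisions: Iterable[tuple[str, dict]],
            guard: Callable[[str, dict], bool]):
    """Yield a violation record for every decision outside its bounds."""
    for step, (action, state) in enumerate(decisions):
        if action not in PERMITTED:
            yield step, action, "action outside prescribed decision set"
        elif not guard(action, state):
            yield step, action, "guard condition violated for this action"

# Example guard: never continue the inspection once the battery is critical.
def guard(action: str, state: dict) -> bool:
    return not (action == "continue_inspection" and state.get("battery", 1.0) < 0.1)

trace = [("continue_inspection", {"battery": 0.60}),
         ("continue_inspection", {"battery": 0.05}),   # should be flagged
         ("return_to_base", {"battery": 0.05})]
for violation in monitor(trace, guard):
    print(violation)
```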
2.5. Summary
3. Component Reliability: What, When, and Why It Works?
- a single control or state estimation system,
- a neural network, or
- a symbolic AI system.
3.1. Controller or State Estimation System
- Simulation-based testing and user validation are also frequently used for this type of component.
- Analytic proofs regarding control are dependent on mathematical models of (probably unknown) environments—in the case of distant (and even hazardous) environments, we are not at all sure exactly how accurate these models are. Consequently, while we know that ‘all mathematical models are wrong’, we are not sure exactly how wrong!
- Formal methods can quickly become intractable without severe abstractions/approximations of the practical environment. Further, such abstraction techniques can lead to a significant number of false positives or false negatives.
- Simulation-based testing and user validation may not be able to identify all ‘corner’ cases.
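To make the model-dependence point above concrete, here is a small worked example. A discrete-time linear model x[k+1] = A x[k] is asymptotically stable iff the spectral radius of A is strictly below one, but the guarantee is only as good as A itself. The matrix values below are invented for illustration.

```python
import numpy as np

def is_stable(A: np.ndarray) -> bool:
    """x[k+1] = A @ x[k] is asymptotically stable iff every eigenvalue of A
    lies strictly inside the unit circle."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)

# Nominal model used for the analytic proof (illustrative values).
A_nominal = np.array([[0.9, 0.1],
                      [0.0, 0.8]])

# The same check under a small, unmodelled perturbation of the dynamics:
# the guarantee obtained for A_nominal says nothing about A_perturbed.
A_perturbed = A_nominal + np.array([[0.0, 0.00],
                                    [0.3, 0.25]])

print("nominal model stable:  ", is_stable(A_nominal))    # True
print("perturbed model stable:", is_stable(A_perturbed))  # False here
```

The point is not the arithmetic but the caveat: if the real (hazardous, distant) environment departs from the assumed model, the analytic result may simply not apply.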
3.2. Neural Networks and Sub-Symbolic AI
- Formal verification methods usually involve not only high computational complexity but also large state spaces, due to the size of neural networks.
- Software testing methods require extra effort to make sense of the testing results. For example, the relation between coverage rate and the satisfaction of a property needs to be clarified.
- Statistical methods can only deliver statistical guarantees and risk missing corner cases.
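To give a flavour of one class of formal technique in this space, the sketch below propagates input intervals through a tiny ReLU network (interval bound propagation), producing sound, if coarse, bounds on the output that can be checked against a safety threshold. The weights, input, perturbation budget, and threshold are invented; practical verification tools, such as those covered in the survey of deep neural network safety and trustworthiness [21], are far more sophisticated.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound bounds on W @ x + b when each x[i] lies in [lo[i], hi[i]]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A tiny two-layer ReLU network with invented weights.
W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])
W2, b2 = np.array([[0.7, -1.2]]), np.array([0.05])

# Nominal input and an L-infinity perturbation budget epsilon.
x, eps = np.array([0.4, 0.6]), 0.05
lo, hi = x - eps, x + eps

lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)

# If the whole output interval stays below the threshold, the property holds
# for *every* perturbation within epsilon (a sound, if conservative, proof).
print("output bounds:", lo, hi)
print("safe (output < 1.0)?", bool(hi[0] < 1.0))
```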
3.3. Symbolic AI and Logical Reasoning
- Model checking: the Model Checking Agent Programming Languages (MCAPL) project [58] includes the Gwendolen [59] programming language for programming BDI-based agents, and the Agent JavaPathFinder (AJPF) [60] program model checker for verifying Gwendolen agent programs; MCMAS [61] and MCK [46] are two symbolic model checkers for multi-agent systems based on ordered binary decision diagrams.
- Runtime verification: in [65], runtime verification is used to validate, at runtime, the abstract environment models of agent systems that were previously verified offline by a program model checker; DecAMon [66] is an algorithm for decentralised monitoring of communication in multi-agent systems via agent interaction protocols.
- Testing: in [67], it was shown that BDI agent programs are more difficult to test than procedural programs of equivalent size, and the authors conclude that their findings motivate the use of formal methods for the verification of agent systems.
- Symbolic AI methods can be inflexible and require significant knowledge engineering effort to produce, when compared to learning-based approaches, although this same effort enables traceability from implemented program features back to user requirements.
- Model-based verification techniques have significant state space explosion problems, which are exacerbated in the verification of agents because of the increased complexity required in their reasoning cycle.
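For readers less familiar with these tools, the core of safety model checking can be sketched very simply: enumerate every reachable state of a model and check that none violates the property. The toy transition system below is invented and far removed from agent-level model checkers such as AJPF [60] or MCMAS [61], but it illustrates both the basic idea and why the state space explodes as models grow.

```python
from collections import deque

# Toy model of an inspection robot: states and non-deterministic transitions.
TRANSITIONS = {
    ("idle", "charged"):       [("inspecting", "charged")],
    ("inspecting", "charged"): [("inspecting", "low"), ("idle", "charged")],
    ("inspecting", "low"):     [("returning", "low")],
    ("returning", "low"):      [("idle", "charged")],
}

def violates(state) -> bool:
    """Safety property: the robot never inspects with an empty battery."""
    return state == ("inspecting", "empty")

def check_safety(initial):
    """Breadth-first exploration of every reachable state."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates(state):
            return False, state          # counterexample state found
        for succ in TRANSITIONS.get(state, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True, None                    # property holds in every reachable state

print(check_safety(("idle", "charged")))   # (True, None)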
3.4. Summary
4. System Reliability: Putting It All Together
4.1. Architectures
- specification of the behaviour of an individual module;
- specification of the linkage between, and communications to, other modules;
- flexibility in exchanging modules (reconfigurability, etc.);
- security, reliability, and real-time efficiency of modular interactions;
- etc.
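One lightweight way to pin down interface specifications of this kind is to give each inter-module message a declared type and to check a contract at the module boundary, so that a module can be verified against the contract rather than against every possible peer. The message type, field names, and bounds below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RangeReading:
    """Message type on a hypothetical sensor -> navigation link."""
    distance_m: float
    timestamp_s: float

def check_reading(msg: RangeReading, max_range_m: float = 30.0) -> RangeReading:
    """Interface contract: readings are non-negative, in range, and timestamped.
    Downstream modules may assume this; the sensor module must guarantee it."""
    assert 0.0 <= msg.distance_m <= max_range_m, "reading outside sensor range"
    assert msg.timestamp_s >= 0.0, "invalid timestamp"
    return msg

# A navigation module can then be analysed against the contract alone,
# independently of which concrete sensor module is plugged in.
print(check_reading(RangeReading(distance_m=4.2, timestamp_s=12.0)))
```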
4.2. Physical Testing
4.3. Laboratory Testing
- Nuclear—we might set up radiation sources and obstacles, and expose our robot to both,
- Water—we might set up a laboratory tank with (erratic) water flows and test our robot within this, and
- Space—we might utilise a desert landscape as an approximation for a lunar or Martian surface.
- the design of experiments,
- the fidelity of the lab model of the real environment, and
- the ways in which additional failures/constraints can be represented, e.g., vision system failure.
- We must be clear about the details of this gap. What aspects do we simulate well; what elements are less realistic; which ones do we not even represent?
- We must also be precise about failures that might occur in the real environment and whether these can occur in the lab-based system. And, if not, are these to be modelled/simulated? For example, radiation might affect a particular sensor in a strange way, gradually degrading its capabilities. We can introduce software to simulate this aspect in a laboratory setting.
- Given the above explorations of the gap between real and laboratory environments, we can estimate which elements of the environment need to be covered. Experiments can then be designed that address many, though probably not all, of these issues. This aspect of experimental design and coverage is both complex and important.
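As noted in the list above, failures that cannot safely or practically be reproduced in the laboratory can often be injected in software. The wrapper below is a generic, invented sketch of this idea: it gradually degrades a sensor's readings with drift and noise, imitating, say, radiation damage to a sensor; real fault-injection frameworks are considerably more elaborate.

```python
import random

class DegradingSensor:
    """Wraps a real (or simulated) sensor and injects gradual degradation."""
    def __init__(self, read_fn, drift_per_call=0.01, noise_std=0.05):
        self.read_fn = read_fn              # underlying sensor read function
        self.drift_per_call = drift_per_call
        self.noise_std = noise_std
        self.calls = 0

    def read(self) -> float:
        self.calls += 1
        drift = self.drift_per_call * self.calls          # grows over time
        noise = random.gauss(0.0, self.noise_std)
        return self.read_fn() + drift + noise

# Lab test: the 'true' value is constant, but the reported value degrades.
sensor = DegradingSensor(read_fn=lambda: 10.0)
for _ in range(5):
    print(round(sensor.read(), 3))
```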
4.4. Simulation-Based Testing
- V&V techniques such as simulation-based testing are non-exhaustive, meaning that it is not possible to examine every permutation of a scenario. Therefore, a key issue is ensuring that the simulations that have been performed are likely to have captured erroneous behaviour. This can be done by examining cases in which the system is operated at the limits of its capabilities, for example at the limits of the ambient environmental conditions (wind speed, air density, temperature). These conditions can then be tested physically to validate the simulation results.
- No simulation, no matter how complex, is a perfect representation of reality. This means that all of the models used need to be validated using physical data to mitigate the differences. However, some degree of inaccuracy needs to be accepted.
- Many of the issues relevant here also appear in the section on laboratory-based testing. In particular, assessing and understanding the gap between the simulation and the real world is again crucial, as is the development of experiments [83].
- However, the computational framework that these simulations sit in allows us to verify in a more extensive way. While replication of individual experiments in labs, or in the real world, can be difficult, this replication is straightforward in software-based systems. Similarly, running many experiments can be time-consuming in labs and real test environments. But, with simulations, we can utilise high-performance computing (HPC), often large parallel compute servers, to run huge numbers of tests and experiments. And, while the coverage aspects and experimental design both remain important, the use of HPC allows us to employ brute-force techniques, for example assessing billions of randomly varied scenarios.
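The brute-force use of HPC described above amounts to: sample many random scenario parameters, run each through the simulator, and record any safety violations. The sketch below uses Python's process pool as a stand-in for a compute cluster and a trivial `simulate` function as a stand-in for a real simulator; all names, distributions, and thresholds are invented.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate(seed: int) -> bool:
    """Stand-in for one simulated inspection mission; returns True if the
    run was safe. A real simulator would be driven by the sampled scenario."""
    rng = random.Random(seed)
    wind_speed = rng.uniform(0.0, 25.0)     # randomly varied scenario inputs
    sensor_bias = rng.gauss(0.0, 0.2)
    # Toy safety condition: the (hypothetical) controller copes with wind
    # below 20 m/s provided the sensor bias stays small.
    return wind_speed < 20.0 or abs(sensor_bias) < 0.1

def run_campaign(n_runs: int = 10_000) -> float:
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, range(n_runs), chunksize=256))
    return 1.0 - (sum(results) / n_runs)    # observed violation rate

if __name__ == "__main__":
    print(f"violation rate over 10,000 runs: {run_campaign():.4f}")
```

Experimental design still matters: purely random sampling, as here, may need enormous numbers of runs to expose rare corner cases, which is precisely why coverage measures are important.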
4.5. Heterogeneous Verification
4.6. Corroborative V&V
4.7. Scalable Verification
4.8. Runtime Verification
4.9. Summary
5. Properties and Requirements
5.1. Safety
- In nuclear decommissioning, for example, a new hazard could involve the robot damaging vital infrastructure because of an incorrect decision or incorrect classification of sensor input, where a trained human is less likely to make these mistakes. Damaging this infrastructure could pose a hazard to other humans in the facility.
- Once we consider the example of a rover exploring a distant planet, then the risk is less to other humans and more to the rover itself. Damage to the rover could mean mission failure, with damaged hardware remaining on the planet potentially making future missions more difficult. Proposals for the next steps in space exploration involve collaborations between astronauts and robots (e.g., ESA’s ‘moon village’ vision (https://www.esa.int/About_Us/Ministerial_Council_2016/Moon_Village (accessed on 28 April 2021))), so an autonomous rover could conceivably be patrolling on another planet where humans are living. In this case, a collision between the rover and the habitat could have safety-critical implications.
- sensor failure (e.g., environment not correctly detected),
- software failure (e.g., learning algorithm recognises environment incorrectly), or
- hardware failure (e.g., slippage due to tyres not being replaced).
5.2. Security
- The need for fast message passing amongst autonomous robots in certain scenarios tends to result in a lack of message encryption, which leaves these systems vulnerable to attackers.
- Machine learning components can be particularly vulnerable to attackers. For example, in a deep learning vision system, an attacker may flash images at it, leading it to incorrectly identify an obstacle or another robot and potentially causing a collision. (See Section 3.2.)
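Where full encryption of inter-robot messages is considered too costly, lightweight message authentication is one possible mitigation: a shared-key HMAC tag lets a receiver reject spoofed or tampered messages. The key handling and message format below are invented for illustration; a deployed system would need proper key management and replay protection.

```python
import hmac, hashlib

SHARED_KEY = b"pre-shared-key-distributed-out-of-band"   # illustrative only

def sign(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag to an (unencrypted) inter-robot message."""
    return payload + hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(message: bytes) -> bytes | None:
    """Return the payload if the tag is valid, otherwise None."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

msg = sign(b"robot7:position:12.1,4.3")
print(verify(msg))                            # original payload accepted

tampered = bytes([msg[0] ^ 0xFF]) + msg[1:]   # flip bits in the payload
print(verify(tampered))                       # None: tampering detected
```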
5.3. Ethics and Transparency
- We can also use either formal verification or testing techniques to examine specific scenarios to check that the system makes the correct decisions in those specific scenarios [121].
- We can potentially use formal techniques to ensure that transparency mechanisms store and report accurate information (a minimal decision-logging sketch is given after this list). The appropriate use of architectures (Section 4), ensuring that key decisions are made by symbolic AI components and so are much more amenable to explanation, is crucial.
- All ethical theories have ‘pathological’ edge cases where the decisions are considered unethical by most people. Furthermore, when encoding the information to be used in ethical reasoning, it is important to consider carefully whose opinions about this information are sought and to avoid bias in the implemented ethics.
- Transparency mechanisms can prove opaque to users and, conversely, can also instill a false sense of confidence in the system [122].
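A concrete transparency mechanism of the kind referred to above can be as simple as an append-only decision log recording what was decided, when, on what inputs, and for what stated reason. The fields and file format below are invented for illustration; proposals such as the 'ethical black box' [118] are considerably richer, but the principle of recording accurate, auditable decision data is the same.

```python
import json, time

class DecisionLog:
    """Append-only record of decisions, stated reasons, and inputs, so that
    post-hoc explanation and audit have accurate data to draw on."""
    def __init__(self, path="decisions.log"):
        self.path = path

    def record(self, decision: str, reason: str, inputs: dict) -> None:
        entry = {"t": time.time(), "decision": decision,
                 "reason": reason, "inputs": inputs}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")     # one JSON object per line

log = DecisionLog()
log.record(decision="abort_inspection",
           reason="radiation level above calibrated threshold",
           inputs={"radiation_mSv_per_h": 2.7, "threshold": 2.0})
```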
5.4. Responsibility and Trust
- Socio-cognitive research lends validation techniques to exploring trust in HRI by considering how humans will react in given scenarios. For example, humans utilise a Theory of Mind (ToM) to understand the intentions and goals of others, with trust associated with a person’s ability to make mental state inferences [129]. By extension, an exploration of how humans develop a suitable mental model of robot capability can be used to approximate how much information a robot should be relaying to an operator to maintain their trust [130].
- While we know that a basis of trust is essential to the adoption of robotic systems, we do not yet know the answers to fundamental questions that would help ensure this, such as what the best way is to interact with robots in specific situations.
- Trust research requires interdisciplinary effort, taking ideas from fundamental human science over to engineering design to produce optimal HRI systems. Further, the different dimensions of trust (e.g., dispositional, situational, etc.) are currently under-researched within human-non-human agent interactions. Combined, this makes trustworthiness, as a property and requirement, a huge endeavour that will not be resolved by a single team.
5.5. Energy and Sustainability
5.6. Summary
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bonner, M.C.; Taylor, R.M.; Miller, C.A. Tasking Interface Manager: Affording Pilot Control of Adaptive Automation and Aiding. In Contemporary Ergonomics 2000; CRC Press: Boca Raton, FL, USA, 2004; pp. 70–74. [Google Scholar]
- Hobbs, A.; Lyall, B. Human Factors Guidelines for Remotely Piloted Aircraft System (RPAS) Remote Pilot Stations (RPS); Technical Report; NASA: Washington, DC, USA, 2016. Available online: https://ntrs.nasa.gov/citations/20190000211 (accessed on 28 April 2021).
- Webster, M.; Western, D.; Araiza-Illan, D.; Dixon, C.; Eder, K.; Fisher, M.; Pipe, A.G. A Corroborative Approach to Verification and Validation of Human–Robot Teams. Int. J. Robot. Res. 2020, 39, 73–79. [Google Scholar] [CrossRef]
- Simrock, S. Control Theory; CAS—CERN Accelerator School: Digital Signal Processing: Geneva, Switzerland, 2008; pp. 73–130. [Google Scholar] [CrossRef]
- Page, V.; Webster, M.; Fisher, M.; Jump, M. Towards a Methodology to Test UAVs in Hazardous Environments. In Proceedings of the 15th International Conference on Autonomic and Autonomous Systems (ICAS), Athens, Greece, 2–6 June 2019. [Google Scholar]
- Fisher, M. An Introduction to Practical Formal Methods Using Temporal Logic; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
- Garoche, P.L. Formal Verification of Control System Software; Princeton University Press: Princeton, NJ, USA, 2019. [Google Scholar]
- Institute of Electrical and Electronics Engineers. P7009—Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems. 2017. Available online: https://standards.ieee.org/project/7009.html (accessed on 28 April 2021).
- Bremner, P.; Dennis, L.A.; Fisher, M.; Winfield, A.F.T. On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots. Proc. IEEE 2019, 107, 541–561. [Google Scholar] [CrossRef]
- Arapinis, M.; Cheval, V.; Delaune, S. Verifying Privacy-Type Properties in a Modular Way. In Proceedings of the 25th IEEE Computer Security Foundations Symposium (CSF), Cambridge, MA, USA, 25–27 June 2012; pp. 95–109. [Google Scholar]
- Asadollah, S.A.; Inam, R.; Hansson, H. A Survey on Testing for Cyber Physical System. In Proceedings of the International Conference on Testing Software and Systems (ICTSS), Sharjah and Dubai, United Arab Emirates, 23–25 November 2015; Volume 9447, pp. 194–207. [Google Scholar]
- Zabczyk, J. Mathematical Control Theory: An Introduction; Birkhauser: Basel, Switzerland, 2008. [Google Scholar]
- Tabuada, P. Verification and Control of Hybrid Systems—A Symbolic Approach; Springer: New York City, NY, USA, 2009. [Google Scholar] [CrossRef]
- Gerber, C.; Preuße, S.; Hanisch, H. A Complete Framework for Controller Verification in Manufacturing. In Proceedings of the 15th IEEE Conference on Emerging Technologies Factory Automation (ETFA), Bilbao, Spain, 13–16 September 2010; pp. 1–9. [Google Scholar]
- Park, J.; Pajic, M.; Sokolsky, O.; Lee, I. LCV: A Verification Tool for Linear Controller Software. In Proceedings of the Tools and Algorithms for the Construction and Analysis of Systems, Prague, Czech Republic, 6–11 April 2019; Springer: New York City, NY, USA, 2019; pp. 213–225. [Google Scholar] [CrossRef] [Green Version]
- Sun, Y.; Zhou, Y.; Maskell, S.; Sharp, J.; Huang, X. Reliability Validation of Learning Enabled Vehicle Tracking. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9390–9396. [Google Scholar]
- Huang, W.; Zhou, Y.; Sun, Y.; Sharp, J.; Maskell, S.; Huang, X. Practical Verification of Neural Network Enabled State Estimation System for Robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
- Zhang, F.; Leitner, J.; Milford, M.; Upcroft, B.; Corke, P.I. Towards Vision-Based Deep Reinforcement Learning for Robotic Motion Control. arXiv 2015, arXiv:1511.03791. [Google Scholar]
- Jiang, Z.; Luo, S. Neural Logic Reinforcement Learning. In Proceedings of the International Conference on Machine Learning (ICML), Long Beach, CA, USA, 10–15 June 2019; pp. 3110–3119. [Google Scholar]
- Huang, W.; Zhao, X.; Huang, X. Embedding and Synthesis of Knowledge in Tree Ensemble Classifiers. arXiv 2020, arXiv:2010.08281. [Google Scholar]
- Huang, X.; Kroening, D.; Ruan, W.; Sharp, J.; Sun, Y.; Thamo, E.; Wu, M.; Yi, X. A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability. Comput. Sci. Rev. 2020, 37, 100270. [Google Scholar] [CrossRef]
- Huang, X.; Kwiatkowska, M.; Wang, S.; Wu, M. Safety Verification of Deep Neural Networks. In Proceedings of the Computer Aided Verification (CAV), Heidelberg, Germany, 22–28 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 3–29. [Google Scholar]
- Wicker, M.; Huang, X.; Kwiatkowska, M. Feature-guided Black-box Safety Testing of Deep Neural Networks. In Proceedings of the TACAS, Thessaloniki, Greece, 14–20 April 2018; pp. 408–426. [Google Scholar]
- Ruan, W.; Huang, X.; Kwiatkowska, M. Reachability Analysis of Deep Neural Networks with Provable Guarantees. In Proceedings of the IJCAI, Stockholm, Sweden, 13–19 July 2018; pp. 2651–2659. [Google Scholar]
- Wu, M.; Wicker, M.; Ruan, W.; Huang, X.; Kwiatkowska, M. A Game-based Approximate Verification of Deep Neural Networks with Provable Guarantees. Theor. Comput. Sci. 2020, 807, 298–329. [Google Scholar] [CrossRef]
- Ruan, W.; Wu, M.; Sun, Y.; Huang, X.; Kroening, D.; Kwiatkowska, M. Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance. In Proceedings of the IJCAI, Macao, China, 10–16 August 2019; pp. 5944–5952. [Google Scholar]
- Sun, Y.; Huang, X.; Kroening, D.; Sharp, J.; Hill, M.; Ashmore, R. Structural Test Coverage Criteria for Deep Neural Networks. ACM Trans. Embed. Comput. Syst. 2019, 18. [Google Scholar] [CrossRef]
- Sun, Y.; Wu, M.; Ruan, W.; Huang, X.; Kwiatkowska, M.; Kroening, D. Concolic Testing for Deep Neural Networks. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE), Montpellier, France, 3–7 September 2018; pp. 109–119. [Google Scholar]
- Huang, W.; Sun, Y.; Sharp, J.; Ruan, W.; Meng, J.; Huang, X. Coverage Guided Testing for Recurrent Neural Networks. arXiv 2019, arXiv:1911.01952. [Google Scholar]
- Webb, S.; Rainforth, T.; Teh, Y.W.; Kumar, M.P. Statistical Verification of Neural Networks. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Jin, G.; Yi, X.; Zhang, L.; Zhang, L.; Schewe, S.; Huang, X. How does Weight Correlation Affect the Generalisation Ability of Deep Neural Networks. In Proceedings of the NeurIPS, Vancouver, BC, Canada, 6–12 December 2020. [Google Scholar]
- Zhao, X.; Banks, A.; Sharp, J.; Robu, V.; Flynn, D.; Fisher, M.; Huang, X. A Safety Framework for Critical Systems Utilising Deep Neural Networks. In Proceedings of the 39th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), Lisbon, Portugal, 15–18 September 2020; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12234, pp. 244–259. [Google Scholar]
- Zhao, X.; Huang, W.; Banks, A.; Cox, V.; Flynn, D.; Schewe, S.; Huang, X. Assessing Reliability of Deep Learning Through Robustness Evaluation and Operational Testing. In Proceedings of the SAFECOMP, Lisbon, Portugal, 15–18 September 2020. [Google Scholar]
- Smolensky, P. Connectionist AI, Symbolic AI, and the brain. Artif. Intell. Rev. 1987, 1, 95–109. [Google Scholar] [CrossRef]
- Wooldridge, M.; Rao, A. (Eds.) Foundations of Rational Agency; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999. [Google Scholar]
- Wooldridge, M. An Introduction to Multiagent Systems; John Wiley & Sons: London, UK, 2002. [Google Scholar]
- Bratman, M.E. Intentions, Plans, and Practical Reason; Harvard University Press: Cambridge, MA, USA, 1987. [Google Scholar]
- Rao, A.S.; Georgeff, M. BDI Agents: From Theory to Practice. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS), San Francisco, CA, USA, 12–14 June 1995; pp. 312–319. [Google Scholar]
- Cardoso, R.C.; Dennis, L.A.; Fisher, M. Plan Library Reconfigurability in BDI Agents. In Proceedings of the Engineering Multi-Agent Systems, Montreal, QC, Canada, 13–14 May 2019; pp. 195–212. [Google Scholar]
- Cardoso, R.C.; Ferrando, A.; Dennis, L.A.; Fisher, M. An Interface for Programming Verifiable Autonomous Agents in ROS. In Multi-Agent Systems and Agreement Technologies; Springer: Berlin/Heidelberg, Germany, 2020; pp. 191–205. [Google Scholar]
- Stringer, P.; Cardoso, R.C.; Huang, X.; Dennis, L.A. Adaptable and Verifiable BDI Reasoning. arXiv 2020, arXiv:2007.11743. [Google Scholar] [CrossRef]
- Onyedinma, C.; Gavigan, P.; Esfandiari, B. Toward Campus Mail Delivery Using BDI. J. Sens. Actuator Netw. 2020, 9, 56. [Google Scholar] [CrossRef]
- Huang, X.; Kwiatkowska, M.; Olejnik, M. Reasoning about Cognitive Trust in Stochastic Multiagent Systems. ACM Trans. Comput. Log. 2019, 20, 21:1–21:64. [Google Scholar] [CrossRef]
- Manna, Z.; Pnueli, A. The Temporal Logic of Reactive and Concurrent Systems; Springer: Berlin, Germany, 1992. [Google Scholar]
- Parikh, R. Propositional Game Logic. In Proceedings of the 24th Annual Symposium on Foundations of Computer Science (FOCS), Tucson, AZ, USA, 7–9 November 1983; pp. 195–200. [Google Scholar]
- Huang, X.; van der Meyden, R. Symbolic Model Checking Epistemic Strategy Logic. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, Quebec City, QC, Canada, 27–31 July 2014; pp. 1426–1432. [Google Scholar]
- Huang, X. Bounded Model Checking of Strategy Ability with Perfect Recall. Artif. Intell. 2015, 222, 182–200. [Google Scholar] [CrossRef]
- Huang, X.; van der Meyden, R. An Epistemic Strategy Logic. ACM Trans. Comput. Log. 2018, 19, 26:1–26:45. [Google Scholar] [CrossRef] [Green Version]
- Huang, X.; Luo, C. A Logic of Probabilistic Knowledge and Strategy. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), St. Paul, MN, USA, 6–10 May 2013; pp. 845–852. [Google Scholar]
- Fagin, R.; Halpern, J.; Moses, Y.; Vardi, M. Reasoning About Knowledge; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
- Huang, X.; Ruan, J. ATL Strategic Reasoning Meets Correlated Equilibrium. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, 19–25 August 2017; pp. 1102–1108. [Google Scholar]
- Huang, X.; Kwiatkowska, M. Model Checking Probabilistic Knowledge: A PSPACE Case. In Proceedings of the 13th AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 2516–2522. [Google Scholar]
- Fu, C.; Turrini, A.; Huang, X.; Song, L.; Feng, Y.; Zhang, L. Model Checking Probabilistic Epistemic Logic for Probabilistic Multiagent Systems. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, 13–19 July 2018; pp. 4757–4763. [Google Scholar]
- Huang, X.; van der Meyden, R. Synthesizing Strategies for Epistemic Goals by Epistemic Model Checking: An Application to Pursuit Evasion Games. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012. [Google Scholar]
- Huang, X.; Maupin, P.; van der Meyden, R. Model Checking Knowledge in Pursuit Evasion Games. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), Catalonia, Spain, 16–22 July 2011; pp. 240–245. [Google Scholar]
- Bakar, N.A.; Selamat, A. Agent Systems Verification: Systematic Literature Review and Mapping. Appl. Intell. 2018, 48, 1251–1274. [Google Scholar] [CrossRef]
- Luckcuck, M.; Farrell, M.; Dennis, L.A.; Dixon, C.; Fisher, M. Formal Specification and Verification of Autonomous Robotic Systems: A Survey. ACM Comput. Surv. 2019, 52, 1–41. [Google Scholar] [CrossRef] [Green Version]
- Dennis, L.A. The MCAPL Framework including the Agent Infrastructure Layer and Agent Java Pathfinder. J. Open Source Softw. 2018, 3. [Google Scholar] [CrossRef]
- Dennis, L.A.; Farwer, B. Gwendolen: A BDI Language for Verifiable Agents. In Proceedings of the Workshop on Logic and the Simulation of Interaction and Reasoning, AISB, Aberdeen, UK, 3–4 April 2008. [Google Scholar]
- Dennis, L.A.; Fisher, M.; Webster, M.; Bordini, R.H. Model Checking Agent Programming Languages. Autom. Softw. Eng. 2012, 19, 5–63. [Google Scholar] [CrossRef] [Green Version]
- Lomuscio, A.; Raimondi, F. MCMAS: A Model Checker for Multi-agent Systems. In Proceedings of the 12th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), Vienna, Austria, 25 March–2 April 2006; Springer: Berlin, Germany, 2006; Volume 3920, pp. 450–454. [Google Scholar]
- Dixon, C.; Fisher, M.; Bolotov, A. Resolution in a Logic of Rational Agency. Artif. Intell. 2002, 139, 47–89. [Google Scholar] [CrossRef] [Green Version]
- Wooldridge, M.; Dixon, C.; Fisher, M. A Tableau-Based Proof Method for Temporal Logics of Knowledge and Belief. J. Appl. Non Class. Logics 1998, 8, 225–258. [Google Scholar] [CrossRef] [Green Version]
- Alechina, N.; Dastani, M.; Khan, F.; Logan, B.; Meyer, J.J. Using Theorem Proving to Verify Properties of Agent Programs. In Specification and Verification of Multi-Agent Systems; Springer: Berlin, Germany, 2010; pp. 1–33. [Google Scholar]
- Ferrando, A.; Dennis, L.A.; Ancona, D.; Fisher, M.; Mascardi, V. Verifying and Validating Autonomous Systems: Towards an Integrated Approach. In Proceedings of the 18th International Conference on Runtime Verification (RV), Limassol, Cyprus, 10–13 November 2018; Springer: Berlin, Germany, 2018; Volume 11237, pp. 263–281. [Google Scholar]
- Ferrando, A.; Ancona, D.; Mascardi, V. Decentralizing MAS Monitoring with DecAMon. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS), São Paulo, Brazil, 8–12 May 2017; pp. 239–248. [Google Scholar]
- Winikoff, M. BDI agent testability revisited. Auton. Agents Multiagent Syst. 2017, 31, 1094–1132. [Google Scholar] [CrossRef]
- Alami, R.; Chatila, R.; Fleury, S.; Ghallab, M.; Ingrand, F. An Architecture for Autonomy. Int. J. Robot. Res. 1998, 17, 315–337. [Google Scholar] [CrossRef]
- Quigley, M.; Conley, K.; Gerkey, B.P.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An Open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009. [Google Scholar]
- Elkady, A.; Sobh, T. Robotics Middleware: A Comprehensive Literature Survey and Attribute-Based Bibliography. J. Robot. 2012, 2012. [Google Scholar] [CrossRef] [Green Version]
- Garavel, H.; Lang, F.; Mounier, L. Compositional Verification in Action. In Formal Methods for Industrial Critical Systems; Springer: Berlin, Germany, 2018; pp. 189–210. [Google Scholar]
- Farrell, M.; Luckcuck, M.; Fisher, M. Robotics and Integrated Formal Methods: Necessity Meets Opportunity. In Proceedings of the 14th International Conference on Integrated Formal Methods (iFM), Maynooth, Ireland, 5–7 September 2018; pp. 161–171. [Google Scholar]
- Albus, J.S.; Barbera, A.J.; Nagel, R. Theory and Practice of Hierarchical Control. In Proceedings of the 23rd IEEE Computer Society International Conference, Washington, DC, USA, 15–17 September 1981. [Google Scholar]
- Buxbaum, H.J.; Maiwald, W. Utilization of a Hierarchical Control System in a Robot-Based Flexible Assembly Cell. IFAC Proc. Vol. 1995, 28, 399–404. [Google Scholar] [CrossRef]
- Brooks, R. A Robust Layered Control System for a Mobile Robot. J. Robot. Autom. 1986, 2, 14–23. [Google Scholar] [CrossRef] [Green Version]
- Cimatti, A.; Pistore, M.; Traverso, P. Automated Planning. In Handbook of Knowledge Representation; Elsevier: Amsterdam, The Netherlands, 2008; pp. 841–867. [Google Scholar]
- Ghallab, M.; Nau, D.S.; Traverso, P. Automated Planning and Acting; Cambridge Univ. Press: Cambridge, UK, 2016. [Google Scholar]
- Dennis, L.A.; Fisher, M.; Lincoln, N.K.; Lisitsa, A.; Veres, S.M. Practical Verification of Decision-Making in Agent-Based Autonomous Systems. Autom. Softw. Eng. 2016, 23, 305–359. [Google Scholar] [CrossRef] [Green Version]
- Gamma, E.; Helm, R.; Johnson, R.; Vlissides, J. (Eds.) Design Patterns; Addison-Wesley Publishing Co.: Boston, MA, USA, 1995. [Google Scholar]
- Menghi, C.; Tsigkanos, C.; Pelliccione, P.; Ghezzi, C.; Berger, T. Specification Patterns for Robotic Missions. IEEE Trans. Softw. Eng. 2019. [Google Scholar] [CrossRef] [Green Version]
- Xiao, A.; Bryden, K.M. Virtual Engineering: A Vision of the Next-Generation Product Realization Using Virtual Reality Technologies. In Proceedings of the ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Boston, MA, USA, 30 March–2 April 2004. [Google Scholar]
- Robert, C.; Guiochet, J.; Waeselynck, H. Testing a non-deterministic robot in simulation—How many repeated runs? In Proceedings of the Fourth IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 9–11 November 2020. 8p. [Google Scholar]
- Cavalcanti, A.; Sampaio, A.; Miyazawa, A.; Ribeiro, P.; Filho, M.C.; Didier, A.; Li, W.; Timmis, J. Verified Simulation for Robotics. Sci. Comput. Program. 2019, 174, 1–37. [Google Scholar] [CrossRef]
- Cardoso, R.C.; Farrell, M.; Luckcuck, M.; Ferrando, A.; Fisher, M. Heterogeneous Verification of an Autonomous Curiosity Rover. In Proceedings of the 12th International NASA Formal Methods Symposium (NFM), Moffett Field, CA, USA, 11–15 May 2020; Springer: Berlin, Germany, 2020; Volume 12229, pp. 353–360. [Google Scholar]
- Hoare, C.A.R. An Axiomatic Basis for Computer Programming. Commun. ACM 1969, 12, 576–580. [Google Scholar] [CrossRef]
- Jones, C.B. Tentative Steps Toward a Development Method for Interfering Programs. ACM Trans. Program. Lang. Syst. 1983, 5, 596–619. [Google Scholar] [CrossRef]
- Cardoso, R.C.; Dennis, L.A.; Farrell, M.; Fisher, M.; Luckcuck, M. Towards Compositional Verification for Modular Robotic Systems. In Proceedings of the 2nd Workshop on Formal Methods for Autonomous Systems (FMAS), Online, 7 December 2020; pp. 15–22. [Google Scholar] [CrossRef]
- Araiza-Illan, D.; Western, D.; Pipe, A.G.; Eder, K. Coverage-Driven Verification—An Approach to Verify Code for Robots that Directly Interact with Humans. In Proceedings of the 11th International Haifa Verification Conference (HVC), Haifa, Israel, 17–19 November 2015; pp. 69–84. [Google Scholar]
- Salem, M.; Lakatos, G.; Amirabdollahian, F.; Dautenhahn, K. Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 141–148. [Google Scholar]
- Dixon, C.; Winfield, A.; Fisher, M.; Zeng, C. Towards Temporal Verification of Swarm Robotic Systems. Robot. Auton. Syst. 2012, 60, 1429–1441. [Google Scholar] [CrossRef] [Green Version]
- Webster, M.; Breza, M.; Dixon, C.; Fisher, M.; McCann, J. Exploring the Effects of Environmental Conditions and Design Choices on IoT Systems Using Formal Methods. J. Comput. Sci. 2020, 45, 101183. [Google Scholar] [CrossRef]
- Kouvaros, P.; Lomuscio, A. Verifying Emergent Properties of Swarms. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina, 25–31 July 2015; pp. 1083–1089. [Google Scholar]
- Lomuscio, A.; Pirovano, E. Verifying Emergence of Bounded Time Properties in Probabilistic Swarm Systems. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, 13–19 July 2018; pp. 403–409. [Google Scholar]
- Graham, D.; Calder, M.; Miller, A. An Inductive Technique for Parameterised Model Checking of Degenerative Distributed Randomised Protocols. Electron. Notes Theor. Comput. Sci. 2009, 250, 87–103. [Google Scholar] [CrossRef] [Green Version]
- Donaldson, A.F.; Miller, A.; Parker, D. Language-Level Symmetry Reduction for Probabilistic Model Checking. In Proceedings of the 6th International Conference on the Quantitative Evaluation of Systems (QEST), Budapest, Hungary, 13–16 September 2009; pp. 289–298. [Google Scholar]
- Abdulla, P.A.; Jonsson, B.; Nilsson, M.; Saksena, M. A Survey of Regular Model Checking. In Proceedings of the International Conference on Concurrency Theory, London, UK, 31 August–3 September 2004; pp. 35–48. [Google Scholar]
- Delzanno, G. Automatic Verification of Parameterized Cache Coherence Protocols. In Proceedings of the International Conference on Computer Aided Verification, Chicago, IL, USA, 15–19 July 2000; pp. 53–68. [Google Scholar]
- Delzanno, G. Constraint-based verification of parameterized cache coherence protocols. Form. Methods Syst. Des. 2003, 23, 257–301. [Google Scholar] [CrossRef]
- Esparza, J.; Finkel, A.; Mayr, R. On the Verification of Broadcast Protocols. In Proceedings of the 14th Symposium on Logic in Computer Science (LICS), Trento, Italy, 2–5 July 1999; pp. 352–359. [Google Scholar]
- Konur, S.; Dixon, C.; Fisher, M. Analysing Robot Swarm Behaviour via Probabilistic Model Checking. Robot. Auton. Syst. 2012, 60, 199–213. [Google Scholar] [CrossRef] [Green Version]
- Gainer, P.; Linker, S.; Dixon, C.; Hustadt, U.; Fisher, M. Multi-Scale Verification of Distributed Synchronisation. Form. Methods Syst. Des. 2020. [Google Scholar] [CrossRef]
- Fisher, M.; Konev, B.; Lisitsa, A. Practical infinite-state verification with temporal reasoning. NATO Secur. Through Sci. Ser. D Inf. Commun. Secur. 2006, 1, 91. [Google Scholar]
- Dixon, C.; Fisher, M.; Konev, B.; Lisitsa, A. Practical First-order Temporal Reasoning. In Proceedings of the 15th International Symposium on Temporal Representation and Reasoning (TIME), Montreal, QC, Canada, 16–18 June 2008; pp. 156–163. [Google Scholar]
- Leucker, M.; Schallhart, C. A Brief Account of Runtime Verification. J. Log. Algebr. Methods Program. 2009, 78, 293–303. [Google Scholar] [CrossRef] [Green Version]
- Desai, A.; Dreossi, T.; Seshia, S.A. Combining Model Checking and Runtime Verification for Safe Robotics. In Proceedings of the International Conference on Runtime Verification, Seattle, WA, USA, 13–16 September 2017; Volume 10548, pp. 172–189. [Google Scholar]
- Ferrando, A.; Cardoso, R.C.; Fisher, M.; Ancona, D.; Franceschini, L.; Mascardi, V. ROSMonitoring: A Runtime Verification Framework for ROS. In Proceedings of the towards Autonomous Robotic Systems (TAROS), Nottingham, UK, 16 September 2020; pp. 387–399. [Google Scholar] [CrossRef]
- Luckcuck, M. Monitoring Robotic Systems using CSP: From Safety Designs to Safety Monitors. arXiv 2020, arXiv:2007.03522. [Google Scholar]
- Zhang, X.; Leucker, M.; Dong, W. Runtime Verification with Predictive Semantics. In Proceedings of the 4th International Symposium NASA Formal Methods (NFM), Moffett Field, CA, USA, 14–16 May 2012; Springer: Berlin, Germany, 2012; Volume 7226, pp. 418–432. [Google Scholar]
- Miyazawa, A.; Ribeiro, P.; Li, W.; Cavalcanti, A.; Timmis, J.; Woodcock, J. RoboChart: Modelling and verification of the functional behaviour of robotic applications. Softw. Syst. Model. 2019, 18, 3097–3149. [Google Scholar] [CrossRef] [Green Version]
- Fisher, M.; Mascardi, V.; Rozier, K.Y.; Schlingloff, B.; Winikoff, M.; Yorke-Smith, N. Towards a Framework for Certification of Reliable Autonomous Systems. Auton. Agents Multiagent Syst. 2021, 35, 8. [Google Scholar] [CrossRef]
- Farrell, M.; Bradbury, M.; Fisher, M.; Dennis, L.A.; Dixon, C.; Yuan, H.; Maple, C. Using Threat Analysis Techniques to Guide Formal Verification: A Case Study of Cooperative Awareness Messages. In Proceedings of the International Conference on Software Engineering and Formal Methods, Oslo, Norway, 16–20 September 2019; pp. 471–490. [Google Scholar]
- Book, G. Security Threats against Space Missions; CCSDS Secretariat: Washington, DC, USA, 2006. [Google Scholar]
- Farrell, M.; Bradbury, M.; Fisher, M.; Dixon, C. Workshop Report: Space Security Scoping; Technical Report; FAIR-SPACE Hub: Guildford, UK, 2019; Available online: https://www.fairspacehub.org (accessed on 28 April 2021).
- Maple, C.; Bradbury, M.; Yuan, H.; Farrell, M.; Dixon, C.; Fisher, M.; Atmaca, U.I. Security-Minded Verification of Space Systems. In Proceedings of the 2020 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2020; pp. 1–13. [Google Scholar]
- Nof, S. Handbook of Industrial Robotics; Number v. 1 in Electrical and Electronic Engineering; Wiley: New York City, NY, USA, 1999. [Google Scholar]
- British Standards Institution (BSI). BS 8611 Robots and Robotic Devices—Guide to the Ethical Design and Application. 2016. Available online: http://www.bsigroup.com (accessed on 28 April 2021).
- Institute of Electrical and Electronics Engineers. P7001—Transparency of Autonomous Systems; IEEE: New York City, NY, USA, 2016. [Google Scholar]
- Winfield, A.F.T.; Jirotka, M. The Case for an Ethical Black Box. In Proceedings of the 18th Conference on Towards Autonomous Robotic Systems (TAROS), Guildford, UK, 19–21 July 2017; Springer: Berlin, Germany, 2017; Volume 10454, pp. 262–273. [Google Scholar]
- Koeman, V.; Dennis, L.A.; Webster, M.; Fisher, M.; Hindriks, K. The “Why did you do that?” Button: Answering Why-questions for end users of Robotic Systems. In Proceedings of the EMAS, Montreal, QC, Canada, 13–14 May 2019. [Google Scholar]
- Dennis, L.A.; Oren, N. Explaining BDI agent behaviour through dialogue. In Proceedings of the 20th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), London, UK, 3–7 May 2021. [Google Scholar]
- Dennis, L.A.; Fisher, M.; Slavkovik, M.; Webster, M.P. Formal Verification of Ethical Choices in Autonomous Systems. Robot. Auton. Syst. 2016, 77, 1–14. [Google Scholar] [CrossRef] [Green Version]
- Kaur, H.; Nori, H.; Jenkins, S.; Caruana, R.; Wallach, H.M.; Vaughan, J.W. Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning. In Proceedings of the International Conference on Human Factors in Computing Systems (CHI), Honolulu, HI, USA, 25–30 April 2020; pp. 1–14. [Google Scholar]
- Falcone, R.; Castelfranchi, C. Socio-cognitive model of trust. In Human Computer Interaction: Concepts, Methodologies, Tools, and Applications; IGI Global: Hershey, PA, USA, 2009; pp. 2316–2323. [Google Scholar]
- Fisher, M.; List, C.; Slavkovik, M.; Weiss, A. Ethics and Trust: Principles, Verification and Validation (Dagstuhl Seminar 19171). Dagstuhl Rep. 2019, 9, 59–86. [Google Scholar] [CrossRef]
- Sebo, S.S.; Krishnamurthi, P.; Scassellati, B. “I Don’t Believe You”: Investigating the Effects of Robot Trust Violation and Repair. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 57–65. [Google Scholar]
- Chatila, R.; Dignum, V.; Fisher, M.; Giannotti, F.; Morik, K.; Russell, S.; Yeung, K. Trustworthy AI. In Reflections on Artificial Intelligence for Humanity; Springer: New York City, NY, USA, 2021; pp. 13–39. [Google Scholar]
- Ullman, D.; Malle, B.F. What Does it Mean to Trust a Robot?: Steps Toward a Multidimensional Measure of Trust. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; pp. 263–264. [Google Scholar]
- Tolmeijer, S.; Weiss, A.; Hanheide, M.; Lindner, F.; Powers, T.M.; Dixon, C.; Tielman, M.L. Taxonomy of Trust-Relevant Failures and Mitigation Strategies. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (CHI), Cambridge, UK, 23–26 March 2020; pp. 3–12. [Google Scholar]
- Vanderbilt, K.E.; Liu, D.; Heyman, G.D. The development of distrust. Child Dev. 2011, 82, 1372–1380. [Google Scholar] [CrossRef] [PubMed]
- Malle, B.F.; Fischer, K.; Young, J.E.; Moon, A.; Collins, E. Trust and the Discrepancy between Expectations and Actual Capabilities of Social Robots. In Human-Robot Interaction: Control, Analysis, and Design; Zhang, D., Wei, B., Eds.; Cambridge Scholars Publishing: New York, NY, USA, 2020. [Google Scholar]
- Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Commun. ACM 2020, 63, 54–63. [Google Scholar] [CrossRef]
- Andoni, M.; Tang, W.; Robu, V.; Flynn, D. Data analysis of battery storage systems. CIR—Open Access Proc. J. 2017, 2017, 96–99. [Google Scholar] [CrossRef] [Green Version]
- Boker, U.; Henzinger, T.A.; Radhakrishna, A. Battery Transition Systems. In Proceedings of the 41st ACM Symposium on Principles of Programming Languages, (POPL), San Diego, CA, USA, 22–24 January 2014; pp. 595–606. [Google Scholar]
- Wognsen, E.R.; Hansen, R.R.; Larsen, K.G. Battery-Aware Scheduling of Mixed Criticality Systems. In Leveraging Applications of Formal Methods, Verification and Validation. Specialized Techniques and Applications; Margaria, T., Steffen, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 208–222. [Google Scholar]
- Zhao, X.; Osborne, M.; Lantair, J.; Robu, V.; Flynn, D.; Huang, X.; Fisher, M.; Papacchini, F.; Ferrando, A. Towards Integrating Formal Verification of Autonomous Robots with Battery Prognostics and Health Management. In Software Engineering and Formal Methods; Springer: Berlin/Heidelberg, Germany, 2019; pp. 105–124. [Google Scholar]
- Bekhit, A.; Dehghani, A.; Richardson, R. Kinematic Analysis and Locomotion Strategy of active Pipe Inspection Robot Concept for Operation in Active Pipelines. Int. J. Mech. Eng. Mechatron. 2012, 1, 15–27. [Google Scholar] [CrossRef] [Green Version]