Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories
Abstract
1. Introduction
2. Background: Definitions and Concepts
2.1. Quality Infrastructure
2.2. Interlaboratory Comparison
2.3. Software Testing and Information Technology Standards
- lack of consensus between the standardization committee and the testing community;
- need for extensive and difficult-to-implement documentation required by the standard;
- risk of making the testing process rigid, or of reversing it, by adopting the standard; and
- impact on other areas of the life cycle that were not tested.
2.4. Software Analysis and Testing Accredited Laboratories
2.5. Software Metrics and Techniques Applied to Interlaboratory Comparisons
2.5.1. Code Coverage
2.5.2. Software Mutation
2.5.3. Theoretical Evaluation
3. Research Method
3.1. Research Questions
3.2. Overview
- Perform a Review: In this step, our goal is, at a minimum, to identify the most recent available evidence on proficiency testing methods and which of their elements could contribute to developing a proficiency testing method in the context of software conformity assessment.
- Define Hypotheses: Here, the purpose is to identify or develop the theoretical and practical elements that support stating hypotheses about the conditions under which proficiency testing rounds are feasible in the context of software conformity assessment.
- Plan a Round: Based on the elements found and the hypotheses defined in the previous step, a plan is designed and elaborated, taking into account all requirements for performing an interlaboratory comparison or proficiency testing round according to the current version of ISO/IEC 17043.
- Run a Round Trial: In this step, we expect to obtain the first indications of the round’s feasibility based on the plan. In general, students and collaborators not affiliated with an accredited laboratory are recruited as participants in this step. If practical problems are identified during the trial execution, they are analyzed and the corresponding adjustments are reported in the plan. Otherwise, the plan is considered ready and the round can be performed in the next step.
- Perform the Round: The round’s plan is presented to the laboratories, which are invited to participate in the round. Afterward, the round is performed and the participants’ data are collected.
- Analyze Data and Publish Results: In this step, we analyze the laboratories’ data, compute their performance and publish the results. In addition, we perform a post-mortem analysis aiming to synthesize lessons learned, identify new issues or questions, and evaluate which theoretical and practical elements could be removed or adapted and which new ones must be investigated and incorporated. New hypotheses are then stated and tested in the next iterations.
3.3. Literature Review
3.3.1. Context
3.3.2. Search Strategy
TITLE-ABS-KEY ( (“proficiency testing” OR “interlaboratory comparison”) ) AND ( LIMIT-TO (PUBYEAR, 2021) OR LIMIT-TO (PUBYEAR, 2020) OR LIMIT-TO (PUBYEAR, 2019) OR LIMIT-TO (PUBYEAR, 2018) OR LIMIT-TO (PUBYEAR, 2017) OR LIMIT-TO (PUBYEAR, 2016) ) AND ( LIMIT-TO (LANGUAGE, “English”))
3.3.3. Study Selection Procedure and Criteria
- Obtain the scientific publications returned by the search engine from peer-reviewed journals and conferences available on the web, as evidence on proficiency testing over the last five years.
- Use Scopus’ predefined search filters to select, for title and abstract reading, the studies labeled as Computer Science or Engineering, the subject areas most likely to contain ongoing or similar studies related to proficiency testing for sensor-, ICT- or software-based products’ conformity assessment.
- Select for full reading the papers that discuss ongoing or evaluated methods for performing proficiency testing for sensor-, ICT- or software-based products’ conformity assessment.
3.3.4. Performing the Searches
3.3.5. Data Extraction and Review Results
3.3.6. Threats to Validity of Review and Preliminary Results
4. First Round: Software Analysis Interlaboratory Comparison via Code Coverage
4.1. Rationale
4.2. Interlaboratory Comparison Execution
- Metric: As discussed above, we used code coverage to compare laboratories. We presented software together with a code coverage (arising from a real test case that was not disclosed to the labs) and challenged the labs to design test cases that achieve a code coverage as close as possible to the original one. To measure the distance between two coverages A and B, we used the Jaccard Index (a minimal sketch of this computation is given after this list).
- Testing Item: The testing item was announced prior to the release of the code coverage challenges, so that the laboratories had time to become familiar with the item. The chosen software was the open-source Alliance Peer-to-Peer communication software, Version 1.0.6 (build 1281) (http://alliancep2p.sourceforge.net/, accessed on 24 May 2021).
- Code Coverage Tool: The tool required to trace software execution and record code coverage was also announced prior to the release of the code coverage challenges. We used the EclEmma JaCoCo 3.1.2 plugin for the Eclipse Java platform [67].
- Delivery Mechanisms: A relevant—and new—property of the round is that, unlike “classic” proficiency testing that requires the physical transportation of a reference testing item/specimen, our round could benefit from Internet communication for transmission of the test item. Thus, we developed a virtual machine with the complete environment and tools needed to perform the software tests. To avoid problems due to the transmission of a large virtual machine, an encrypted packet was released one week before the beginning of the tests, so that, on the first day of the tests, the only thing needed was to release a decryption key on the website of the Proficiency Testing Round.
- Approval Criteria: To be approved in the challenge, a lab’s Jaccard similarity index should be larger than the mean Jaccard Index (among all participants) minus three times the standard deviation (a short illustrative sketch of this rule is given after the evaluation criteria list in Section 4.3).
- Data Integrity: To assure the integrity and authenticity of data, all communication between our organizing team and the laboratories was digitally signed with private keys corresponding to public keys that were securely exchanged before the beginning of the Proficiency Testing Round (in a registration stage). The chosen algorithms were SHA-256 with RSA-2048.
- Schedule: The proficiency testing was a five-month process that started on 16 June 2019 with the elaboration of the work plan and finished on 20 December 2019 with the release of the certificates of participation for the labs. The execution of the tests by the labs started at 10 a.m. (UTC-3) on 23 September 2019, and the deadline for the return of the test reports by the labs was 4 p.m. (UTC-3) on 27 September 2019.
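To make the metric concrete, the following is a minimal sketch of a Jaccard index computation over two sets of covered lines; the class name and set contents are hypothetical, and in the round the sets were derived from the JaCoCo reports described below rather than built by hand.

```java
import java.util.HashSet;
import java.util.Set;

public class CoverageSimilarity {

    // Jaccard index: |A ∩ B| / |A ∪ B| over sets of covered-line identifiers.
    static double jaccard(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) {
            return 1.0; // two empty coverages are identical by convention
        }
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        // Hypothetical covered lines ("file:lineNumber") from the reference test case
        // and from a participant's test suite.
        Set<String> reference = Set.of("Connection.java:42", "Connection.java:43", "Packet.java:10");
        Set<String> participant = Set.of("Connection.java:42", "Packet.java:10", "Packet.java:11");

        System.out.printf("Jaccard index = %.3f%n", jaccard(reference, participant));
    }
}
```

The tools and files made available to the participants for the round were the following: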
- JaCoCo 0.8.3. A Java library for code coverage. More details about the library and how it was used in this research are presented below.
- Reference reports. These are the JaCoCo reports, in XML format, generated by the execution of the tests prepared by our organizing team.
- Proficiency testing item (EP item). It consists of the source code and a functional version of Alliance P2P—Version 1.0.6 (build 1281).
- Auxiliary tools/files. The auxiliary tools and files made available to facilitate the analysis and the reports:
- Eclipse IDE 2019-03 (4.11.0). It is a platform with features and tools to streamline the software testing development process (it can be downloaded at: https://www.eclipse.org/downloads/packages/release/2019-03/r, accessed on 24 May 2021).
- EclEmma JaCoCo 3.1.2. Plugin based on the JaCoCo code coverage library for the Eclipse platform.
- Java-8-Openjdk-amd64. It is a development kit for the Java platform, the programming language used in this round.
- JUnit 5. It is a framework used to facilitate the creation of unit tests in Java.
- comparaRelatorios.py. It is a script developed in the Python programming language that automatically compares the lines covered, not covered and incorrectly covered in the appraised laboratory’s report against the reference report. It was made available to the participants of the round to facilitate the identification of divergent and convergent lines (an illustrative sketch of this kind of comparison is given after this list).
- zeresima.xml. It is a JaCoCo report, in XML format, consisting only of zeroed lines, simulating a unit test that does not cover any line. When used with the script comparaRelatorios.py, zeresima.xml is compared to a reference report, pointing out the diverging lines, that is, the lines that are not zeroed and that, obviously, were executed by the reference test case. Note that the VM for the Proficiency Testing Round can be downloaded from the linked Virtual Machine page, and all additional details of the round can be obtained from the Proficiency Testing Round page (in Portuguese).
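The actual comparaRelatorios.py script is not reproduced here. As an illustrative sketch only, the Java fragment below extracts the covered lines from two JaCoCo XML reports (assuming the standard JaCoCo report layout of package, sourcefile and line elements, where the ci attribute counts covered instructions) and prints the divergent lines.

```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Illustrative Java analogue of the comparison performed by comparaRelatorios.py:
// extract the covered lines of two JaCoCo XML reports and print the divergences.
public class CompareReports {

    // Returns identifiers "package/SourceFile.java:lineNr" for every line with
    // at least one covered instruction (attribute ci > 0).
    static Set<String> coveredLines(File jacocoXml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // JaCoCo reports declare a DTD; skip loading it so parsing works offline.
        factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        Document doc = factory.newDocumentBuilder().parse(jacocoXml);

        Set<String> covered = new HashSet<>();
        NodeList sourceFiles = doc.getElementsByTagName("sourcefile");
        for (int i = 0; i < sourceFiles.getLength(); i++) {
            Element sf = (Element) sourceFiles.item(i);
            String pkg = ((Element) sf.getParentNode()).getAttribute("name");
            NodeList lines = sf.getElementsByTagName("line");
            for (int j = 0; j < lines.getLength(); j++) {
                Element line = (Element) lines.item(j);
                if (Integer.parseInt(line.getAttribute("ci")) > 0) {
                    covered.add(pkg + "/" + sf.getAttribute("name") + ":" + line.getAttribute("nr"));
                }
            }
        }
        return covered;
    }

    public static void main(String[] args) throws Exception {
        Set<String> reference = coveredLines(new File(args[0])); // reference report
        Set<String> appraised = coveredLines(new File(args[1])); // participant report

        Set<String> missed = new HashSet<>(reference);
        missed.removeAll(appraised); // lines the participant failed to cover
        Set<String> extra = new HashSet<>(appraised);
        extra.removeAll(reference);  // lines covered only by the participant

        System.out.println("Divergent (missed): " + missed);
        System.out.println("Divergent (extra):  " + extra);
    }
}
```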
4.2.1. Code Coverage Library—JaCoCo
4.2.2. Unit Tests—JUnit
- The classes should belong to different packages of the Core subsystem of the item, avoiding test cases that need to cover graphical user interface (GUI) functionalities.
- The classes should allow tests of different levels of difficulty.
- The classes should contain decision structures so that test cases can cover both false and true conditions (a hypothetical example is given after this list).
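As an illustration of the kind of unit test expected (not one of the actual challenges, whose target classes belonged to the Alliance P2P Core subsystem), the hypothetical JUnit 5 test below exercises both the true and the false branch of a simple decision structure; the class PacketValidatorTest and the method isValidSize are invented for this sketch.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical example of a JUnit 5 unit test that covers both branches
// of a decision structure, as required by the class selection criteria.
class PacketValidatorTest {

    // Hypothetical production method used only for this illustration.
    static boolean isValidSize(int size) {
        return size > 0 && size <= 65535;
    }

    @Test
    void acceptsSizeWithinBounds() {
        assertTrue(isValidSize(1024));   // true branch
    }

    @Test
    void rejectsSizeOutOfBounds() {
        assertFalse(isValidSize(0));     // false branch: lower bound
        assertFalse(isValidSize(70000)); // false branch: upper bound
    }
}
```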
4.2.3. Software Alliance P2P
- Comm contains the classes responsible for the flow of network data.
- Node contains classes with information from actors (users who share files).
- File contains classes for managing shared files.
4.3. Evaluation Criteria
- indicates “satisfactory” performance and does not generate a signal.
- indicates “questionable” performance and generates a warning signal.
- indicates “unsatisfactory” performance and generates an action signal.
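For illustration, the approval rule stated in Section 4.2 (a laboratory is approved when its Jaccard index exceeds the participants’ mean minus three standard deviations) can be sketched as follows; the laboratory identifiers and index values are hypothetical, the population standard deviation is used for simplicity, and the warning/action signal thresholds defined by the equations above are not reproduced here.

```java
import java.util.Map;

public class ApprovalCheck {

    public static void main(String[] args) {
        // Hypothetical Jaccard indices of the participating laboratories.
        Map<String, Double> jaccardByLab = Map.of(
                "Lab-01", 1.00, "Lab-05", 0.92, "Lab-07", 0.75, "Lab-12", 1.00);

        double mean = jaccardByLab.values().stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        double variance = jaccardByLab.values().stream()
                .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0.0);
        // Approval threshold: mean minus three standard deviations.
        double threshold = mean - 3 * Math.sqrt(variance);

        jaccardByLab.forEach((lab, j) -> System.out.printf(
                "%s: J=%.2f -> %s%n", lab, j, j > threshold ? "approved" : "not approved"));
    }
}
```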
4.4. Participant Performance and Round’s Results
5. Second Round: Interlaboratory Comparison Using Software Mutation Metrics
5.1. Rationale
5.2. Interlaboratory Comparison Execution
- Fundamental tools and files:
  - Proficiency test item: source code and a functional version of the software product chosen for the round.
    - Alliance P2P—version 1.2.0 (build 1281).
- Auxiliary tool and files:
  - Eclipse IDE for Enterprise Java Developers—version 2019-03 (4.11.0).
    - Java-8-Openjdk-amd64;
    - JUnit 5; and
    - PIT (Pitest)—tool for mutation testing and code coverage.
Mutation Testing—PIT (Pitest)
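As a hand-written illustration of the idea behind the tool (PIT itself generates mutants automatically at the bytecode level and checks which tests detect them), the sketch below shows a mutant of the kind produced by PIT’s conditionals-boundary mutator and a JUnit 5 test that kills it; both methods are invented for this example.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

// Hand-written illustration of a mutant; PIT produces equivalent mutations
// automatically and reports which ones the test suite kills.
class MutationExampleTest {

    static boolean original(int size) {
        return size > 0;  // original decision
    }

    static boolean mutant(int size) {
        return size >= 0; // conditionals-boundary mutation: '>' replaced by '>='
    }

    @Test
    void boundaryValueKillsTheMutant() {
        // The input 0 distinguishes the two versions: the original returns false,
        // the mutant returns true, so this assertion would fail on the mutant,
        // i.e., the mutant is killed.
        assertFalse(original(0));
    }
}
```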
5.3. Evaluation Criteria
- indicates “exceptional” performance and does not generate a signal.
- indicates “very good” performance and does not generate a signal.
- indicates “good” performance and does not generate a signal.
- indicates “satisfactory” performance and does not generate a signal.
- indicates “acceptable” performance and generates a warning signal.
- indicates “unsatisfactory” performance and generates an action signal.
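For reference, the conventional mutation score reported by mutation testing tools is the fraction of generated mutants killed by the test suite; a minimal sketch with hypothetical figures follows (the round’s specific performance indicator R_i is defined by the equations of this section and is not reproduced here).

```java
public class MutationScore {

    // Conventional mutation score: killed mutants divided by total generated mutants.
    static double mutationScore(int killedMutants, int totalMutants) {
        return totalMutants == 0 ? 0.0 : (double) killedMutants / totalMutants;
    }

    public static void main(String[] args) {
        // Hypothetical figures in the style of a PIT summary.
        int generated = 200;
        int killed = 150;
        System.out.printf("Mutation score = %.1f%%%n", 100 * mutationScore(killed, generated));
    }
}
```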
5.4. Participants’ Performance and Round’s Results
6. Conclusions
6.1. Discussion and Final Considerations
6.2. Future Works
- Specific purpose software: In the executed rounds, a generic network application was used as the test item. The idea is that the analysis of such software does not require overly specific knowledge of any application area and is, therefore, more suitable for initial rounds of interlaboratory comparison. In future rounds, one can explore the analysis of software modules dedicated to specific applications within the scope of the participating laboratories. For example, laboratories accredited for testing smart energy meter software could be challenged to analyze electrical metrology software modules.
- Security testing: In several conformity assessment programs, it is important that the testing laboratory masters cybersecurity analysis techniques. One way to assess the competence of laboratories in cybersecurity is to carry out interlaboratory comparisons based on reference test cases that explore the exploitation of vulnerabilities and security flaws.
- Integration tests: All the challenges presented to the laboratories in the two rounds explored highly compartmentalized test scenarios, such as unit tests (class tests). An interesting question is whether it would be possible to develop a comparison model between laboratories based on system or integration tests, that is, tests in which the software application is executed as a whole or closer to its totality. This scenario can bring a new level of complexity and difficulty that allows a more rigorous assessment of the competence of the participating laboratories.
- Standard model: There is currently no standard model for measuring laboratories’ performance in evaluating software products. It would be interesting to analyze other technical areas, such as chemistry, biomedicine, medicine and physics, to check what is worth bringing to computer science and what can be applied in software companies, software development, software testing, etc.
6.3. Implications
6.3.1. For Practitioners
6.3.2. For Researchers
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Tholen, D. Metrology in service of society: The role of proficiency testing. Accredit. Qual. Assur. 2011, 16, 603–605. [Google Scholar] [CrossRef]
- ISO/IEC. ISO/IEC 17025:2017. In General Requirements for the Competence of Testing and Calibration Laboratories; International Organization for Standardization and International Electrotechnical Commission: Geneva, Switzerland, 2017. [Google Scholar]
- ISO/IEC. Principles and Practices in Product Regulation and Market Surveillance; International Organization for Standardization and International Electrotechnical Commission: Geneva, Switzerland, 2017. [Google Scholar]
- ISO/IEC. ISO/IEC 17043:2010. In Conformity Assessment—General Requirements for Proficiency Testing; International Organization for Standardization and International Electrotechnical Commission: Geneva, Switzerland, 2010. [Google Scholar]
- Tucker, J.S. Evaluation and Approval of Laboratories in Connecticut Importance of Voluntary Standards; National Bureau of Standards, Special Publication; National Bureau of Standards: Gaithersburg, MD, USA, 1979; pp. 63–66.
- Smith, W.J.; Castino, G. Laboratory Performance Evaluation and Accreditation—It’s All in the Implementation; National Bureau of Standards, Special Publication; National Bureau of Standards: Gaithersburg, MD, USA, 1979; pp. 79–84.
- Thomas, D. An approved laboratory program for photovoltaic reference cell development. Solar Cells 1982, 7, 131–134. [Google Scholar] [CrossRef]
- ISO/IEC/IEEE International Standard—Systems and Software Engineering—Software Life Cycle Processes—Part 2: Relation and Mapping between ISO/IEC/IEEE 12207:2017 and ISO/IEC 12207:2008; ISO/IEC/IEEE 12207-2:2020(E); ISO/IEC: Geneva, Switzerland; IEEE: Piscataway, NJ, USA, 2020; pp. 1–278. [CrossRef]
- ISO/IEC/IEEE International Standard—Systems and Software Engineering—System of Systems (SoS) Considerations in Life Cycle Stages of a System; ISO/IEC/IEEE 21839:2019(E); ISO/IEC: Geneva, Switzerland; IEEE: Piscataway, NJ, USA, 2019; pp. 1–40. [CrossRef]
- ISO/IEC 27000:2018 Information Technology—Security Techniques—Information Security Management Systems—Overview and Vocabulary; ISO/IEC 27000:2018; ISO/IEC: Geneva, Switzerland, 2018.
- Sudhakar, G.P.; Farooq, A.; Patnaik, S. Soft factors affecting the performance of software development teams. Team Perform. Manag. Int. J. 2011, 17, 187–205. [Google Scholar] [CrossRef]
- McConnell, S. Quantifying soft factors. IEEE Softw. 2000, 17, 9. [Google Scholar]
- Wagner, S.; Ruhe, M. A systematic review of productivity factors in software development. arXiv 2018, arXiv:1801.06475. [Google Scholar]
- Aghamohammadi, A.; Mirian-Hosseinabadi, S.H.; Jalali, S. Statement frequency coverage: A code coverage criterion for assessing test suite effectiveness. Inf. Softw. Technol. 2021, 129, 106426. [Google Scholar] [CrossRef]
- Papadakis, M.; Kintis, M.; Zhang, J.; Jia, Y.; Le Traon, Y.; Harman, M. Mutation testing advances: An analysis and survey. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2019; Volume 112, pp. 275–378. [Google Scholar]
- Li, Q.; Li, X.; Jiang, Z.; Mingcheng, E.; Ma, J. Industry Quality Infrastructure: A Review. In Proceedings of the 2019 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE), Zhangjiajie, China, 6–9 August 2019; pp. 180–185. [Google Scholar] [CrossRef]
- Yoo, H. A Case Study on the Establishment of a National Quality Infrastructure in Korea. In Proceedings of the 19th International Congress of Metrology (CIM2019), Paris, France, 24–26 September 2019; EDP Sciences: Les Ulis, France, 2019; p. 04002. [Google Scholar]
- Ruso, J.; Filipovic, J. How do Public Policy-makers Perceive National Quality Infrastructure? The Case of Serbia as an EU Pre-accession Country. Eur. Rev. 2020, 28, 276–293. [Google Scholar] [CrossRef]
- Machado, R.; Melo, W.; Bento, L.; Camara, S.; Da Hora, V.; Barras, T.; Chapetta, W. Proficiency Testing for Software Analysis and Cybersecurity Laboratories. In Proceedings of the 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, Roma, Italy, 3–5 June 2020; pp. 441–446. [Google Scholar] [CrossRef]
- Charki, A.; Pavese, F. Data comparisons and uncertainty: A roadmap for gaining in competence and improving the reliability of results. Int. J. Metrol. Qual. Eng. 2019, 10. [Google Scholar] [CrossRef] [Green Version]
- Arvizu-Torres, R.; Perez-Castorena, A.; Salas-Tellez, J.; Mitani-Nakanishi, Y. Biological and environmental reference materials in CENAM. Anal. Bioanal. Chem. 2001, 370, 156–159. [Google Scholar] [CrossRef] [PubMed]
- Bode, P.; De Nadai Fernandes, E.; Greenberg, R. Metrology for chemical measurements and the position of INAA. J. Radioanal. Nucl. Chem. 2000, 245, 109–114. [Google Scholar] [CrossRef]
- Yadav, S.; Gupta, V.; Prakash, O.; Bandyopadhyay, A. Proficiency testing through interlaboratory comparison in the pressure range up to 70 MPa using pressure dial gauge as an artifact. J. Sci. Ind. Res. 2005, 64, 722–740. [Google Scholar]
- NIST. FIPS 140-3. Security Requirements for Cryptographic Modules; NIST: Gaithersburg, MD, USA, 2019.
- Boccardo, D.R.; dos Santos, L.C.G.; da Costa Carmo, L.F.R.; Dezan, M.H.; Machado, R.C.S.; de Aguiar Portugal, S. Software evaluation of smart meters within a Legal Metrology perspective: A Brazilian case. In Proceedings of the 2010 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT Europe), Gothenburg, Sweden, 11–13 October 2010; pp. 1–7. [Google Scholar]
- Computer Security Division, Information Technology Laboratory. CMVP FIPS 140-3 Related References—Cryptographic Module Validation Program: CSRC. Available online: https://csrc.nist.gov/Projects/cryptographic-module-validation-program/fips-140-3-standards (accessed on 24 May 2021).
- NIST. National Voluntary Laboratory Accreditation Program; NIST: Gaithersburg, MD, USA, 2017.
- ANSSI. Licensing of Evaluation Facilities for the First Level Security Certification; Ed. 1.2 Reference: ANSSI-CSPN-AGR-P-01/1.2; ANSSI: Paris, France, 2015. [Google Scholar]
- IEEE. Standard for System, Software, and Hardware Verification and Validation; Std 1012-2016 (Revision of IEEE Std 1012-2012/ Incorporates IEEE Std 1012-2016/Cor1-2017); IEEE: Piscataway, NJ, USA, 2017; pp. 1–260. [Google Scholar] [CrossRef]
- Myers, G.J.; Badgett, T.; Thomas, T.M.; Sandler, C. The Art of Software Testing; Wiley Online Library: Hoboken, NJ, USA, 2012; Volume 3. [Google Scholar]
- Bertolino, A. Software Testing Research: Achievements, Challenges, Dreams. In Proceedings of the Future of Software Engineering (FOSE’07), Minneapolis, MN, USA, 23–25 May 2007; pp. 85–103. [Google Scholar] [CrossRef]
- ISO/IEC/IEEE International Standard—Software and Systems Engineering—Software Testing—Part 1: Concepts and Definitions; ISO/IEC/IEEE 29119-1:2013(E); ISO/IEC: Geneva, Switzerland; IEEE: Piscataway, NJ, USA, 2013; pp. 1–64. [CrossRef]
- Rierson, L. Developing Safety-Critical Software: A Practical Guide for Aviation Software and DO-178C Compliance; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
- Smith, D.J.; Simpson, K.G. The Safety Critical Systems Handbook: A Straightforward Guide to Functional Safety: IEC 61508 (2010 Edition), IEC 61511 (2015 Edition) and Related Guidance; Butterworth-Heinemann: Oxford, UK, 2020. [Google Scholar]
- ISO/IEC. ISO/IEC 15408:2009 Information Technology—Security Techniques—Evaluation Criteria for IT Security; International Organization for Standardization and International Electrotechnical Commission: Geneva, Switzerland, 2009. [Google Scholar]
- Antinyan, V.; Derehag, J.; Sandberg, A.; Staron, M. Mythical unit test coverage. IEEE Softw. 2018, 35, 73–79. [Google Scholar] [CrossRef]
- Gligoric, M.; Groce, A.; Zhang, C.; Sharma, R.; Alipour, M.A.; Marinov, D. Guidelines for coverage-based comparisons of non-adequate test suites. ACM Trans. Softw. Eng. Methodol. 2015, 24, 1–33. [Google Scholar] [CrossRef]
- Meneely, A.; Smith, B.; Williams, L. Validating software metrics: A spectrum of philosophies. ACM Trans. Softw. Eng. Methodol. 2013, 21, 1–28. [Google Scholar] [CrossRef]
- Weyuker, E. Evaluating software complexity measures. IEEE Trans. Softw. Eng. 1988, 14, 1357–1365. [Google Scholar] [CrossRef]
- Harrison, W. An entropy-based measure of software complexity. IEEE Trans. Softw. Eng. 1992, 18, 1025–1029. [Google Scholar] [CrossRef]
- Chidamber, S.; Kemerer, C. A metrics suite for object oriented design. IEEE Trans. Softw. Eng. 1994, 20, 476–493. [Google Scholar] [CrossRef] [Green Version]
- Lakshmi Narasimhan, V.; Hendradjaya, B. Some Theoretical Considerations for a Suite of Metrics for the Integration of Software Components. Inf. Sci. 2007, 177, 844–864. [Google Scholar] [CrossRef]
- Devanbu, P.; Karstu, S.; Melo, W.; Thomas, W. Analytical and empirical evaluation of software reuse metrics. In Proceedings of the IEEE 18th International Conference on Software Engineering, Berlin, Germany, 25–30 March 1996; pp. 189–199. [Google Scholar] [CrossRef] [Green Version]
- Zhang, H.; Li, Y.F.; Tan, H.B.K. Measuring design complexity of semantic web ontologies. J. Syst. Softw. 2010, 83, 803–814. [Google Scholar] [CrossRef]
- Pan, W.; Ming, H.; Chang, C.; Yang, Z.; Kim, D.K. ElementRank: Ranking Java Software Classes and Packages using a Multilayer Complex Network-Based Approach. IEEE Trans. Softw. Eng. 2019. [Google Scholar] [CrossRef]
- Weyuker, E.J. Axiomatizing software test data adequacy. IEEE Trans. Softw. Eng. 1986, SE-12, 1128–1138. [Google Scholar] [CrossRef]
- Khangura, S.; Konnyu, K.; Cushman, R.; Grimshaw, J.; Moher, D. Evidence summaries: The evolution of a rapid review approach. Syst. Rev. 2012, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Tricco, A.; Antony, J.; Zarin, W.; Strifler, L.; Ghassemi, M.; Ivory, J.; Perrier, L.; Hutton, B.; Moher, D.; Straus, S. A scoping review of rapid review methods. BMC Med. 2015, 13. [Google Scholar] [CrossRef] [Green Version]
- Jahangirian, M.; Eldabi, T.; Garg, L.; Jun, G.T.; Naseer, A.; Patel, B.; Stergioulas, L.; Young, T. A rapid review method for extremely large corpora of literature: Applications to the domains of modelling, simulation, and management. Int. J. Inf. Manag. 2011, 31, 234–243. [Google Scholar] [CrossRef] [Green Version]
- Cartaxo, B.; Pinto, G.; Fonseca, B.; Ribeiro, M.; Pinheiro, P.; Baldassarre, M.; Soares, S. Software Engineering Research Community Viewpoints on Rapid Reviews. In Proceedings of the 2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), Porto de Galinhas, Brazil, 19–20 September 2019. [Google Scholar] [CrossRef] [Green Version]
- Chapetta, W.; Travassos, G. Towards an evidence-based theoretical framework on factors influencing the software development productivity. Empir. Softw. Eng. 2020, 25, 3501–3543. [Google Scholar] [CrossRef]
- Kotyczka-Moranska, M.; Mastalerz, M.; Plis, A.; Sciazko, M. Inter-laboratory proficiency testing of the measurement of gypsum parameters with small numbers of participants. Accredit. Qual. Assur. 2020, 25, 373–381. [Google Scholar] [CrossRef]
- Zavadil, T. Inter-laboratory proficiency testing of NDT labs according to ISO 17043 as a tool for continuous improvement principle according to ISO 9001. In Proceedings of the Pressure Vessels and Piping Conference, Prague, Czech Republic, 15–20 July 2018; Volume 51593, p. V01BT01A006. [Google Scholar]
- de Medeiros Albano, F.; Ten Caten, C.S. Analysis of the relationships between proficiency testing, validation of methods and estimation of measurement uncertainty: A qualitative study with experts. Accredit. Qual. Assur. 2016, 21, 161–166. [Google Scholar] [CrossRef]
- Fant, K.; Pejcinovic, B.; Wong, P. Exploring proficiency testing of programming skills in lower-division computer science and electrical engineering courses. In Proceedings of the 2016 ASEE Annual Conference and Exposition, New Orleans, LA, USA, 26–29 June 2016. [Google Scholar]
- Poenaru, M.M.; Iacobescu, F.; Anghel, M.A. Pressure Calibration Quality Assessment through Interlaboratories Comparison. In Proceedings of the 22th IMEKO TC4 Symposium “Supporting World Development through Electrical and Electronic Measurements”, Iasi, Romania, 14–15 September 2017; p. 27. [Google Scholar]
- Miller, W.G.; Jones, G.R.; Horowitz, G.L.; Weykamp, C. Proficiency testing/external quality assessment: Current challenges and future directions. Clin. Chem. 2011, 57, 1670–1680. [Google Scholar] [CrossRef] [Green Version]
- Miller, W.G. The role of proficiency testing in achieving standardization and harmonization between laboratories. Clin. Biochem. 2009, 42, 232–235. [Google Scholar] [CrossRef]
- Raithatha, C.; Rogers, L. Panel Quality Management: Performance, Monitoring and Proficiency; Wiley Online Library: Hoboken, NJ, USA, 2017; pp. 113–164. [Google Scholar] [CrossRef]
- 2020 IEEE International Symposium on Electromagnetic Compatibility and Signal/Power Integrity (EMCSI 2020). Available online: https://ieeexplore.ieee.org/xpl/conhome/9184728/proceeding (accessed on 22 May 2021).
- Bair, M. Verification of gas flow traceability from 0.1 sccm to 1 sccm using a piston gauge. In Proceedings of the 6th CCM International Conference on Pressure and Vacuum Metrology—5th International Conference IMEKO TC16, Pereira, Colombia, 8–10 May 2017. [Google Scholar]
- Merlone, A.; Sanna, F.; Beges, G.; Bell, S.; Beltramino, G.; Bojkovski, J.; Brunet, M.; Del Campo, D.; Castrillo, A.; Chiodo, N.; et al. The MeteoMet2 project—Highlights and results. Meas. Sci. Technol. 2018, 29. [Google Scholar] [CrossRef] [Green Version]
- Muravyov, S.; Khudonogova, L.; Emelyanova, E. Interval data fusion with preference aggregation. Meas. J. Int. Meas. Confed. 2018, 116, 621–630. [Google Scholar] [CrossRef]
- Sauerwald, T.; Baur, T.; Leidinger, M.; Reimringer, W.; Spinelle, L.; Gerboles, M.; Kok, G.; Schütze, A. Highly sensitive benzene detection with metal oxide semiconductor gas sensors—An inter-laboratory comparison. J. Sens. Sens. Syst. 2018, 7, 235–243. [Google Scholar] [CrossRef] [Green Version]
- Parsai, A.; Demeyer, S. Comparing mutation coverage against branch coverage in an industrial setting. Int. J. Softw. Tools Technol. Transf. 2020, 22, 365–388. [Google Scholar] [CrossRef]
- Gay, G. Generating Effective Test Suites by Combining Coverage Criteria. In Search Based Software Engineering; Menzies, T., Petke, J., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 65–82. [Google Scholar]
- Hoffmann, M.R. Java Code Coverage Library. Available online: https://www.eclemma.org/ (accessed on 24 May 2021).
- Jaccard, P. The distribution of the flora in the alpine zone. 1. New Phytol. 1912, 11, 37–50. [Google Scholar] [CrossRef]
- INMETRO. Relatório Final da Primeira Rodada de Ensaio de Proficiência; Inmetro: Rio de Janeiro, Brasil, 2019. [CrossRef]
- INMETRO. Relatório Final da Primeira Rodada Pública de Proficiência em Avaliação da Conformidade de Produto de Software 2020; Inmetro: Rio de Janeiro, Brasil, 2020. [CrossRef]
Subject Area | Number of Studies (Non-Exclusive) |
---|---|
Medicine | 481 |
Biochemistry, Genetics and Molecular Biology | 282 |
Engineering | 264 |
Chemistry | 260 |
Physics and Astronomy | 225 |
Environmental Science | 163 |
Agricultural and Biological Sciences | 135 |
Chemical Engineering | 109 |
Earth and Planetary Sciences | 96 |
Immunology and Microbiology | 92 |
Health Professions | 82 |
Pharmacology, Toxicology and Pharmaceutics | 60 |
Social Sciences | 59 |
Materials Science | 55 |
Computer Science | 51 |
Energy | 36 |
Mathematics | 28 |
Multidisciplinary | 19 |
Participant Code | Jaccard Index |
---|---|
01 | 1.00 |
05 | 1.00 |
07 | 1.00 |
12 | 1.00 |
14 | 1.00 |
18 | 1.00 |
ID | Mutation Score | Killed Mutants | Bytecode (KB) | Ri |
---|---|---|---|---|
ID-2 | 2.11% | 62 | 49.60 | 0.00043 |
ID-3 | 0.00% | 0 | 9.60 | 0.00000 |
ID-4 | 63.36% | 1832 | 6.51 | 0.09513 |