Tracks
1) Engineering of Digital Twins for Cyber-Physical Systems
Organizers:
John Fitzgerald fitzgerald@newcastle.ac.uk
Pieter Gorm Larsen pgl@eng.au.dk
Tiziana Margaria University of Limerick tiziana.margaria@ul.ie
Jim Woodcock jim.woodcock@york.ac.uk
Ensuring the dependability of Cyber-Physical Systems (CPSs) poses challenges for model-based engineering, stemming from the semantic heterogeneity of the models of computational, physical and human processes, and from the range of stakeholders involved. We argue that delivering such dependability requires a marriage of multi-disciplinary models developed during design with models derived from real operational data. Assets developed during design thus become the basis of a learning digital twin, able to support decision making both in redesign and in responsive operation.
Starting from an open integrated toolchain leveraging formal models for CPS design, we consider the extension of this concept towards digital twins. Gartner lists digital twins among the ten strategically most important technologies for 2019.
A digital twin is a virtual replica of a real system (usually called the physical twin) that can be used to inform decision making in operation and design. Digital twins combine multi-models developed during design processes, with models derived by Machine Learning (ML) from data acquired from the physical twin. Such twins are thus described as perpetually learning, in the sense that operational data is used to tune multi-models, making them a more faithful reflection of the CPS, forming a sound basis for decisions about possible responses to dependability threats.
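As a toy illustration of how design-time models and operational data can be combined, the following Python sketch (all names and numbers are invented for illustration; real digital-twin toolchains are far richer) re-estimates a parameter of a simple first-principles model from data observed on the physical twin and then uses the calibrated model for prediction.

```python
# Minimal sketch of a "learning" digital twin: a design-time model whose
# parameter is re-estimated from operational data of the physical twin.

def design_model(temp, ambient, k, dt):
    """First-principles cooling model fixed at design time (Newton's law of cooling)."""
    return temp + k * (ambient - temp) * dt

def calibrate(observations, ambient, dt, k_grid):
    """Pick the cooling coefficient k that best explains the observed data."""
    def error(k):
        return sum((design_model(t0, ambient, k, dt) - t1) ** 2
                   for t0, t1 in zip(observations, observations[1:]))
    return min(k_grid, key=error)

# Temperatures streamed from the physical twin (made-up numbers).
observed_temps = [90.0, 82.1, 75.4, 69.8, 65.1, 61.2]
k_star = calibrate(observed_temps, ambient=20.0, dt=1.0,
                   k_grid=[i / 100 for i in range(1, 50)])

# The calibrated model now supports "what if" questions during operation.
prediction = design_model(observed_temps[-1], ambient=20.0, k=k_star, dt=1.0)
print(f"calibrated k = {k_star:.2f}, predicted next temperature = {prediction:.1f}")
```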
From a scientific perspective, the key research questions are:
- what foundations are needed for a well-founded digital twin,
- where the limits are for such digital twins,
- when it is worthwhile constructing them, and
- what value can be expected from them.
Dependable operation of CPSs requires both the ability to address the consequences of evolving system components, and the ability to explore and identify optimal changes that do not unduly compromise overall dependability. This combination of prediction and response alongside support for informed decision-making and redesign by humans requires both the data derived from operations and the models developed in design. Tackling the challenges of CPS design thus requires a marriage of both descriptive multi-models of the type that might be developed in a design process, and inductive models derived from data acquired during operation. This combination of models, cutting across formalisms as well as across design and operation, has the potential to form a learning digital twin for a CPS, enabling off-line and on-line decision-making.
The goal of this track is to discuss how one can enable the well-founded engineering of such learning digital twins for dependable CPSs.
2) Verification and Validation of Concurrent and Distributed Systems
Organizers:
Cristina Seceleanu cristina.seceleanu@mdh.se
Marieke Huisman m.huisman@utwente.nl
Concurrent and distributed systems are becoming omnipresent, both because concurrency and distribution are necessary to fulfil the performance requirements of modern software and because of their natural fit with the underlying application domain. However, concurrent and distributed systems add considerable complexity and allow many kinds of errors that cannot occur in sequential software. As Leslie Lamport, known for his seminal work in distributed systems, famously said: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable” (quoted in Caitie McCaffrey: The Verification of a Distributed System – A practitioner’s guide to increasing confidence in system correctness). Similarly, for concurrent systems, an error might occur in one execution and then disappear in the next execution of the system.
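The following minimal Python sketch (purely illustrative, not tied to any particular verification technique) shows the kind of schedule-dependent error meant here: two threads update a shared counter without synchronisation, so a lost update may appear in one run and vanish in the next.

```python
import threading

counter = 0

def work(n):
    global counter
    for _ in range(n):
        # Unsynchronised read-modify-write: both threads may read the same
        # old value, so one increment can be lost depending on the schedule.
        counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; whether a lost update actually occurs varies from run to run.
print("counter =", counter, "(expected 200000)")
```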
Given such a pessimistic outlook, how can one even start thinking about verification of concurrent and distributed systems? How can verification techniques for concurrent systems benefit from verification techniques for distributed systems, and vice versa?
This track focuses on providing answers to these questions, by inviting submissions that propose models, techniques, and tools for the rigorous analysis of any sort of concurrent and distributed systems. During the session, we plan to have not only presentations and discussions about the individual verification techniques, but also about how they can be combined.
3) Modularity and (De-)composition in Verification
Organizers:
Reiner Hähnle (TU Darmstadt),
Eduard Kamburjan (TU Darmstadt),
Dilian Gurov (KTH Stockholm)
Achieving modularity – the structured composition and decomposition of systems – is a key technique in engineering disciplines and a cornerstone of computer science. A multitude of approaches to ensure modularity have been proposed, from object-orientation, via structured concurrency models, to delta-oriented programming for variability.
Modularity gives the overall system a structure that can be exploited by automatic tools. Many modularity approaches are based on formal techniques that follow existing modularity patterns in verification, for example, contracts and invariants.
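As a small illustration of the contract pattern (sketched here with plain runtime assertions rather than the specification language of any particular verifier), a caller can be reasoned about against the callee's contract alone, independently of the callee's implementation:

```python
def sqrt_floor(n: int) -> int:
    """Contract: requires n >= 0; ensures r*r <= n < (r+1)*(r+1)."""
    assert n >= 0, "precondition"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition"
    return r

def is_perfect_square(n: int) -> bool:
    # Modular reasoning: this caller relies only on sqrt_floor's contract,
    # so sqrt_floor's body can change without re-verifying the caller.
    assert n >= 0  # establish the callee's precondition
    r = sqrt_floor(n)
    return r * r == n

print(is_perfect_square(49), is_perfect_square(50))  # True False
```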
When modularity is not strictly demanded by a system, formal verification techniques can be used to check or enforce additional modularity patterns, for example, choreographies or behavioral types.
Modularity in formal verification goes beyond exploiting modularity patterns: a suitable specification language must be found.
Finally, formal verification tools themselves can be modular and composed of subsystems and/or sub-logics for different aspects.
The track will bring together researchers from different areas of verification, in particular programming language theory, logic and formal methods, to discuss
(1) where current challenges in modularity for verification lie,
(2) whether modular verification techniques in different areas can benefit from each other, and
(3) how verification tools and techniques can be modularized.
4) Software Verification Tools
Organizers:
Markus Schordan (Lawrence Livermore National Laboratory, CA, USA)
Dirk Beyer (LMU Munich, Germany)
Irena Boyanova (NIST, USA)
In the last few years, tools for program analysis and verification have become considerably more mature, and several competitions comparatively evaluate the implemented analyses on a given set of benchmarks. The comparison of the analyses either focuses on the analysis results themselves (verification of specified properties) or on the impact on a client analysis.
This track is concerned with methods for evaluating and comparing analysis and verification tools and for determining their correctness; the validation of the results of analysis tools is therefore also in scope. Of particular interest are verification tools that can be applied to themselves.
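As a minimal sketch of such a comparative evaluation (the verdict format and scoring scheme below are invented for illustration and are not those of any particular competition), each tool's verdicts on a benchmark set can be scored against the expected results, rewarding correct answers and penalising wrong proofs and wrong alarms:

```python
# Hypothetical benchmark set: task name -> expected verdict (True = property holds).
expected = {"task_a": True, "task_b": False, "task_c": True}

# Hypothetical verdicts reported by two verifiers (None = no answer / timeout).
verdicts = {
    "verifier_1": {"task_a": True, "task_b": False, "task_c": None},
    "verifier_2": {"task_a": True, "task_b": True,  "task_c": True},
}

def score(tool_verdicts):
    """Toy scoring: +1 for a correct verdict, -2 for an incorrect one, 0 for no answer."""
    total = 0
    for task, truth in expected.items():
        verdict = tool_verdicts.get(task)
        if verdict is None:
            continue
        total += 1 if verdict == truth else -2
    return total

for tool, tool_verdicts in verdicts.items():
    print(tool, "score:", score(tool_verdicts))
```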
The topics of interest are:
- Reuse of verification results and combination of multiple verifiers using conditional model checking
- Analysis benchmarking and experimental evaluation of analysis accuracy
- Verification of parallel programs (e.g., data races in shared-memory parallel programs)
- Identification of undefined behavior in programs (e.g. C and C++)
- Validation of verification witnesses
- Elimination of spurious counterexamples and false positives
- Soundness of program analysis for incomplete code (external function calls, etc.)
- Interoperability of analysis tools
- Combination of techniques (e.g., model checking with theorem proving)
- Approaches for evaluating interactive verification tools
5) X-by-Construction: Correctness meets Probability
Organizers:
Maurice H. ter Beek
Loek Cleophas
Ina Schaefer
Bruce W. Watson
Correctness-by-Construction (C-by-C) approaches the development of software (systems) as a true form of Engineering, with a capital ‘E’. C-by-C advertises a stepwise refinement process from specification to code, ideally by C-by-C design tools that automatically generate error-free software (system) implementations from rigorous and unambiguous specifications of requirements. Afterwards, testing only serves to validate the C-by-C process rather than to find bugs. (Of course, bugs might still be present outside the boundaries of the verified system: in libraries, compilers, hardware, the C-by-C design tools themselves, etc.).
A lot of progress has been made in this domain, and it is time to look further than correctness and investigate a move from C-by-C to X-by-C, i.e., to also consider non-functional properties. X-by-C is thus concerned with a stepwise refinement process from specification to code that automatically generates software (system) implementations that by construction satisfy specific non-functional properties concerning security, dependability, reliability, resource/energy consumption, etc.
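To make the refinement idea concrete, the following sketch (a hand-written illustration, not the output of any C-by-C design tool) records the specification of integer division as a pre/postcondition pair together with the loop invariant that justifies the refinement into code; in a genuine C-by-C or X-by-C setting such annotations would be discharged by the design tool rather than checked at runtime.

```python
def div_mod(a: int, b: int):
    """Specification: requires a >= 0 and b > 0;
    ensures a == q * b + r and 0 <= r < b."""
    assert a >= 0 and b > 0                 # precondition
    q, r = 0, a                             # establishes the invariant: a == q*b + r and r >= 0
    while r >= b:
        assert a == q * b + r and r >= 0    # invariant maintained by every iteration
        q, r = q + 1, r - b
    assert a == q * b + r and 0 <= r < b    # postcondition
    return q, r

print(div_mod(17, 5))  # (3, 2)
```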
We aim to bring together researchers and practitioners who are interested in C-by-C and X-by-C. In line with the growing attention to fault tolerance (prompted by increasingly common failures in hardware and software) and the increasing use of machine learning techniques in modern software systems, two contexts in which guaranteed properties are hard to establish, we particularly want to emphasise X-by-C in the setting of probabilistic properties.
We therefore invite researchers and practitioners from (at least) the following communities to share their views on (moving from C-by-C to) X-by-C:
- People working on system-of-systems, who address modelling and verification (correctness, but also non-functional properties concerning security, reliability, resilience, energy consumption, performance and sustainability) of networks of interacting legacy and new software systems, and who are interested in applying X-by-C techniques in this domain in order to prove potentially probabilistic non-functional properties of systems-of-systems by construction (from their constituent systems satisfying these properties).
- People working on quantitative modelling and analysis, e.g., through probabilistic or real-time systems and probabilistic or statistical model checking, in particular in the specific setting of dynamic, adaptive or (runtime) reconfigurable systems with variability. These people work on lifting successful formal methods and verification tools from single systems to families of systems, i.e., modelling and analysis techniques that need to cope with the complexity of systems stemming from behaviour, variability, and randomness and that focus not only on correctness but also on non-functional properties concerning safety, security, performance or dependability. As such, they may be interested in applying X-by-C techniques in this domain to prove non-functional properties of families of systems by construction (from their individual family members satisfying these properties).
- People working on systems involving components that employ machine learning (ML) or other artificial intelligence (AI) approaches. In these settings, models and behaviour are typically dependent on what is learned from large data sets, and may change dynamically based on yet more data being processed. As a result, guaranteeing properties (whether correctness or nonfunctional ones) becomes hard, and probabilistic reasoning needs to be applied instead with respect to such properties for the components employing ML or AI approaches, and as a consequence, for systems involving such components as well.
- People working on generative software development, who are concerned with the automatic generation of software from specifications given in general formal languages or domain-specific languages, leading to families of related software (systems). Also in this setting, the emphasis so far has typically been on functional correctness, but the restricted scope of the specifications, especially for domain-specific languages, may offer a suitable ground for reasoning about non-functional properties, and for using X-by-C techniques to guarantee such properties.
- People working on systems security, privacy and algorithmic transparency and accountability, who care about more than correctness. The application of X-by-C techniques could provide certain guarantees from the outset when designing critical systems. It could also enforce transparency when developing algorithms for automated decision-making, in particular those based on data analytics, thus reducing algorithmic bias by avoiding opaque algorithms.
6) Rigorous Engineering of Collective Adaptive Systems
Organizers:
Rocco De Nicola (IMT Lucca, Italy),
Stefan Jähnichen (TU Berlin, Germany),
Martin Wirsing (LMU München, Germany)
Today’s software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often the nodes have their own individual properties and objectives; interactions with other nodes or with humans may lead to the emergence of unexpected phenomena. We call such systems collective adaptive systems. Examples of collective adaptive systems include robot swarms, smart cities, autonomous cars, voluntary peer-to-peer clouds as well as socio-technical systems and the internet of things.
It is crucial that unexpected and possibly dangerous situations be avoided within such systems. Hence, there is a strong need for methods, tools and techniques that guarantee that systems behave as expected and ensure safety, security, integrity, availability, correctness, reliability and resilience, addressing these issues not in isolation but as a whole at the system level.
The track on Rigorous Engineering of Collective Adaptive Systems offers a venue for presenting research work aiming at rigorously defined methods for engineering collective adaptive systems that face the challenges outlined above. Topics of interest include (but are not limited to):
- Methods and theories for designing, specifying, and analysing collective adaptive systems
- Techniques for programming and operating collective adaptive systems
- Methods and mechanisms for adaptation and dynamic self-expression
- Trustworthiness, validation and verification of collective adaptive systems
- Security and performance of collective adaptive systems
- Techniques to identify collective adaptive systems and their components
7) Automated Verification of Embedded Control Software
Organizers:
Dilian Gurov, KTH Royal Institute of Technology, Stockholm
Paula Herber, University of Münster
Ina Schaefer, TU Braunschweig
The development of embedded control software is driven by high demands with respect to both cost efficiency and quality requirements. The time to market is comparatively short, and the functionality volume is ever-increasing. For example, in 2005, the software in a modern car comprised about 70 MB. Today, we usually have more than 1 GB (i.e., more than 100 million lines of code) distributed over more than a hundred microprocessors. With the current trend towards autonomous driving and the internet of things, these numbers can be expected to increase further. At the same time, embedded control software components often have to meet the highest safety requirements. Besides functional requirements, non-functional requirements such as real-time behaviour, memory usage and power consumption are crucial. For example, the control part of automotive software is typically classified at the highest Automotive Safety Integrity Levels, that is, ASIL C or ASIL D, according to the ISO 26262 standard for functional safety in road vehicles. ISO 26262 strongly recommends that semi-formal and formal verification be used for such systems.
The major barrier to the adoption of semi-formal and formal verification in industrial practice is the limited degree of automation of formal methods. Formal specifications and models often need to be written manually, and scalable verification techniques typically require tedious user annotations and interactions. The challenges of automated verification for embedded control software are multiplied by the recent incorporation of statistical methods such as machine learning, by the uncertainties of self-adaptation in dynamic environments, and by increasing security issues in the wake of omnipresent wireless interfaces and the internet of things.
In this track, we investigate how the degree of automation can be increased for the verification of embedded control software systems. We plan to discuss not only the traditional safety goals of embedded control software, but also more recent challenges such as the use of machine learning in safety-critical components and security issues arising from, e.g., car-to-car communication and wireless interfaces.
8) Automating Software Re-Engineering
Organizers:
Serge Demeyer
Reiner Hähnle
Heiko Mantel
Formal approaches to software analysis and development tend to focus on greenfield scenarios or to look at some piece of given software as a static object. Arguably, dynamic evolution of software is a much more common and relevant issue already, and its importance keeps growing. Key drivers are
- the advent of massively parallel hardware and the desire to exploit it optimally through software,
- the need to diversify and customize, particularly, in IoT and Industry 4.0, and
- novel application scenarios for existing software, fueled by digitalization of everything.
Software refactoring, parallelization and adaptation become central activities in the value chain: automating them can realize huge gains. Formal approaches to software modeling and analysis are poised to make a substantial contribution, because they are fundamentally concerned with automation and correctness.
In this track we invite researchers with an active interest in the automation of software re-engineering: people working on formal foundations and on tools, as well as practitioners of re-engineering, for example from the HPC or Industry 4.0 domains.
9) 30 years of Statistical Model Checking!
Organizers:
Kim G. Larsen, Aalborg University, Denmark
Axel Legay, UCLOUVAIN, Belgium
Statistical Model Checking (SMC) has been proposed as an alternative that avoids an exhaustive exploration of the state space of the system under verification. The core idea of the approach is to run a number of simulations of the system and then use results from statistics to decide whether the system satisfies the property with respect to a given probability. The answer is correct up to some confidence. SMC is generally much faster than exhaustive formal verification techniques, and it is closer to current industrial practice. Moreover, the approach can be used to verify properties that cannot be expressed in the classical temporal logics used in formal verification, as well as to approximate undecidable model checking problems. Impressive results have been obtained in various areas such as energy-centric systems, the automotive domain, and systems biology.
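The core idea can be illustrated in a few lines of Python (a minimal sketch with a made-up model, not a substitute for dedicated SMC tools): simulate the system N times, estimate the probability that the property holds, and use a Chernoff-Hoeffding bound to relate N to the desired precision and confidence of the answer.

```python
import math
import random

def simulate_once():
    """Hypothetical stochastic system: a biased random walk; the property of
    interest is reaching state 10 within 100 steps."""
    state = 0
    for _ in range(100):
        state += 1 if random.random() < 0.55 else -1
        if state >= 10:
            return True
    return False

def smc_estimate(epsilon=0.01, delta=0.01):
    """Chernoff-Hoeffding bound: n >= ln(2/delta) / (2 * epsilon^2) runs yield an
    estimate within +/- epsilon of the true probability with confidence 1 - delta."""
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    hits = sum(simulate_once() for _ in range(n))
    return hits / n, n

p_hat, runs = smc_estimate()
print(f"estimated probability {p_hat:.3f} after {runs} simulation runs")
```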
This track is the 4th SMC track at ISoLA (previous editions: 2014, 2016, 2018). As was the case for the previous tracks, this track is open to all aspects of SMC, including (but not limited to) applying SMC beyond classical stochastic systems, speeding up simulation, rare events, etc. This year, our objective is to put a strong focus on real-life applications of SMC (robotics, connected cars, …) and on extending SMC’s application beyond classical bounded LTL.
10) From Verification to Explanation
Organizers:
Holger Hermanns and Christel Baier
Universität des Saarlandes and TU Dresden (Germany)
It is becoming the norm that software artefacts participate in actions and decisions that affect humans. This trend has been gathering momentum for decades, and is now amplified considerably by the remarkable abilities of machine-learnt methods.
However, our understanding of what causes a specific automated decision is lagging far behind. Worse, we lack the scientific basis to explain how several such applications interact in cascades of automated decisions. With the increasing impact of cyber-physical technology on our lives, the consequences of this gradual loss of understanding are becoming severe.
This track of ISoLA intends to explore how computer-aided verification techniques can be leveraged to master this challenge. We look for algorithmic and tool-supported approaches that aim at making the behaviour of software and cyber-physical systems understandable.
11) Formal methods for DIStributed COmputing in future RAILway systems (DisCoRail 2020)
Organizers:
Alessandro Fantechi (Università di Firenze, Italy)
Stefania Gnesi (ISTI-CNR, Italy)
Anne Haxthausen (Technical University of Denmark)
The aim of this track is to discuss (1) how distributed computing can change the domain of railway signaling and train control systems, and (2) how formal methods can help to address challenges arising from this change.
The increasingly wide deployment of ERTMS-ETCS systems on high-speed lines as well as on freight corridors already witnesses how high safety standards can be achieved by means of distributed control algorithms that span geographical areas and are able to safely control large physical systems. In ERTMS-ETCS the guarantee of global properties (such as safety) emerges from the conformance of the subsystems to well-established communication protocols and standards. Formal methods are already one of the technologies used within the railway industry, and the Shift2Rail programme insists on the synergy of formal methods and standardised interfaces, so that a seamless connection of independent (formally specified) equipment and devices through well-defined interfaces can guarantee global dependability properties.
However, most of the crucial decisions needed to guarantee safety are still taken at centralised places (such as the Radio Block Centre – RBC), and the topology of such systems can be considered as a two-layer network: the lower layer merely connects mobile systems to a centralised unit (the RBC), while the higher layer connects the RBCs with each other and with the traffic management systems through a fixed network.
Following an increasingly popular trend for Cyber-Physical Systems, a more dynamic network connection among mobile components can be envisaged, in which decisions are actually taken in a distributed way, with some form of distributed consensus algorithms. An example is the virtual coupling concept, in which the strict cross-control between coupled trains has to be negotiated locally, while the global behavior of the set of coupled trains has to follow the rules dictated by the ETCS control system. Another example is given by proposals of fully distributed interlocking systems, where the route reservation is a global concept to be negotiated between the nodes.
The pros and cons of distributing vital decisions are a matter of active research, especially since the increasing importance of communication requires uncertainty to be taken into account in a railway control system: is the same safety level achievable by distributed decisions as by centralised ones? How can formal methods guarantee safety in such a context? What about availability, interoperability and cybersecurity?
Following the success of the first DisCoRail meeting held in 2019, the track aims for a fruitful discussion on these topics among researchers and experts from industry and academia who have addressed these aspects in research and development projects.
12) Programming: What is Next?
Organizers:
Klaus Havelund (Jet Propulsion Laboratory, USA)
Bernhard Steffen (TU Dortmund, Germany)
High-level mainstream programming languages (high-level with respect to assembly languages and machine code) have evolved dramatically since the emergence of the FORTRAN language well over half a century ago, with hundreds of languages developed since then. At the moment in particular, we see many new languages appearing, most of which are oriented towards application programming, and a few of which are oriented towards systems and embedded close-to-the-metal programming. More experimental programming languages, focusing, e.g., on program correctness and supporting proof systems, are appearing as well.
In addition, we see developments in the area of Domain-Specific Languages (DSLs), including visual as well as textual languages that are easy to learn for experts in dedicated fields. Combined with approaches like aspect-oriented, generative and meta-programming, this may lead to a very different but powerful style of system development. Related to these developments, we can also observe developments in modeling languages meant to support abstraction, verification, and productivity; however, their practicality compared to traditional programming languages is not unquestioned.
At the more esoteric end of the spectrum, some claim that programming as we know it today will lose significance in the light of new paradigms such as AI and machine learning-based system development, which provides solutions to problems beyond the scope of classical programming (see AlphaGo), or programming by example. Quantum computing is another potentially disruptive alternative to the traditional way of solving problems.
The trends discussed above are largely of concern to software development experts. However, the increasingly strong dependence of everybody’s life on IT supports voices that demand programming become the third basic skill taught at school, next to reading and writing. Similar to reading and writing, which were reserved for the ‘elite’ for hundreds of years and only later became a widespread ‘commodity’, familiarity with computers is today a frequent job requirement. One may anticipate that (basic) programming skills will become a frequent job requirement as well. But what should a corresponding discipline of programming for everybody look like, and how would it relate to the still ongoing development of expert-level programming languages?
Programming seems to have a very bright future, and we, as the experts, have quite a responsibility, a responsibility we are not always aware of. The track “Programming: What is Next” aims at discussing corresponding trends and visions along questions such as:
- What are the trends in classical programming language development, both with respect to application programming and to systems/embedded programming?
- What are the trends in more experimental programming language development, where the focus is on research rather than adoption? This includes topics such as program verification, meta-programming and program synthesis.
- What role will domain-specific languages play and what is the right balance between textual and graphical languages?
- What is the connection between modeling and programming?
- Will system development continue to be dominated by programming, or will there be radical changes, in certain application areas or in general, driven for example by breakthroughs in AI and machine learning techniques and applications?
- Is teaching classical programming as a third basic skill sensible or even required?
- Can we imagine something like programming for everybody?
Doctoral Symposium
General Information
The ISoLA Doctoral Symposium is a forum for young researchers who are looking for valuable scientific feedback and networking opportunities. Master and PhD students are invited to present their research ideas and projects, discuss them with the scientific community, and establish collaborations in their field of research.
The Doctoral Symposium invites contributions on all topics in the scope of ISoLA, including (but not limited to):
* Deduction and model-checking
* System specification, construction and transformation
* Program analysis and verification
* Composition and refinement
* Testing and test-case generation
* Software system certification
* System maintenance and evolution
* Hybrid and safety-critical systems
* Model-based testing and automata learning
* Applications, case studies and experience reports of formal methods in practice
The Doctoral Symposium will be held as a combination of short presentations and posters. All participants will give a short presentation about their work in a special session during the conference. Additionally, there is the opportunity to display posters that can be presented and discussed during ISoLA’s generous coffee breaks.
Submission
There are two possible ways to submit work and participate in the Doctoral Symposium:
1. Submit a regular paper to ISoLA, selecting Doctoral Symposium as track during submission. The templates, requirements and deadlines of the main symposium apply in this case. If the paper is accepted, it will be published in the main ISoLA proceedings. The paper can have multiple authors, but the young researcher who wants to participate in the DocSym should be the first author.
2. Submit a research abstract that is structured and formatted according to [this template] and does not exceed four pages. The young researcher who wants to participate in the DocSym should be the sole author. Submissions of abstracts are accepted until September 15, 2020.
After the symposium, authors of accepted abstracts will be invited to submit a full paper to a post-conference volume that will be published in the open access journal [Electronic Communications of the EASST] at no charge. This paper can have multiple authors, but the young researcher who participated in the DocSym should be the first author.
All submissions must be made via the [ISoLA submission system].
Evaluation
All submitted papers and research abstracts will be reviewed by members of the ISoLA DocSym committee. They will use the following criteria:
* Relevance of the research to the scope of the symposium.
* Quality of the paper or research abstract.
* Stage of the research (taking into account if the candidate is at the beginning, middle, or end of their Master’s/PhD project)
Acceptance
The authors of the accepted papers will be asked to submit a camera-ready version of their contribution following the deadlines of the main ISoLA symposium. They must register for ISoLA 2020 and present their work on site.
Publication
The accepted papers will be included in the main ISoLA proceedings.
The authors of accepted research abstracts will be invited to submit to a post-conference volume that will be published in the open access journal [Electronic Communications of the EASST] at no charge.
Important Dates
For paper submission:
* The deadlines of the main ISoLA symposium apply.
For research abstract submission:
* Submissions are accepted until September 15, 2020.
* Notifications are sent to authors no later than September 30, 2020.
Contact
Please contact Anna-Lena Lamprecht (a.l.lamprecht@uu.nl) for questions concerning the ISoLA Doctoral Symposium.
Committee
* Sven Jörges, FH Dortmund, Germany
* Anna-Lena Lamprecht (chair), Utrecht University, Netherlands
* Anila Mjeda, Lero / University of Limerick, Ireland
* Stefan Naujokat, TU Dortmund, Germany
Rigorous Examination of Reactive Systems (RERS)
General Information
Validating the correctness of verification tools and comparing their capabilities both require a thorough evaluation. In its now 10th iteration, the Rigorous Examination of Reactive Systems (RERS) Challenge aims to realize such an evaluation of state-of-the-art verification and testing approaches. This track covers the RERS Challenge 2020.
Topics of interest include:
- Approaches to solving the verification tasks of RERS,
- Generation of verification benchmarks, and
- Transferability of corresponding results to real-world applications.
Submission
Submissions should be sent using the Equinocs system where this RERS track can be found alongside other ISoLA tracks.
For participants of the challenge, RERS 2020 offers three different possibilities to feature their approaches in peer-reviewed publications:
- Full paper (about 15 pages): For those who intend to present their research results based on the RERS challenge in a full paper, the ISoLA’20 track on Software Verification Tools accepts corresponding submissions. The deadline for those submissions is June 30th, 2020.
- Short paper (about 4 pages): If you intend to contribute a brief summary of your approach and corresponding results, please submit your short paper to this RERS track of ISoLA’20 by July 31st, 2020.
- Brief description: To allow us to cover your approach in our summary of RERS 2020, please provide us with a brief description (about 10 lines) and send it along with your initial submission by Aug. 10th, 2020. This description can be updated with your final submission by Aug. 20th, 2020.
Focus and Goals
To prevent overfitting of tools to specific benchmark verification tasks, the problems that participants of RERS have to solve are newly generated each year based on property-preserving transformations. RERS focuses on temporal logics, including reachability properties, and features scalable verification tasks; an example of such a property is sketched after the goals listed below. The main goals of RERS, as stated on http://www.rers-challenge.org, are:
- Encourage the combination of usually different research fields for better software verification results.
- Provide a comparison foundation based on differently tailored benchmarks that reveals the strengths and weaknesses of specific approaches.
- Initiate a discussion on better benchmark generation, reaching out across the usual community barriers, to provide benchmarks useful for testing and comparing a wide variety of tools.
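To give a flavour of the temporal properties RERS features (the following is an invented illustration, not taken from an actual RERS problem set): a typical LTL property over input and output symbols might require that every occurrence of an input A is eventually followed by an output Z, written G(A -> F Z), while a reachability property simply asks whether a designated error state can be reached at all.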