The seL4 verification journey: how have the challenges and opportunities evolved
ABSTRACT: The formal verification journey of the seL4 microkernel is nearing two decades, and still has a busy roadmap for the years ahead. It started as a research project tackling a highly challenging problem with the potential for significant impact. Today, a whole ecosystem of developers, researchers, adopters and supporters forms the seL4 community. With increasing uptake and adoption, seL4 is evolving, supporting more platforms, architectures, configurations, and features. This creates both opportunities and challenges: verification is what makes seL4 unique, and as the seL4 code evolves, so must its formal proofs. With more than a million lines of formal, machine-checked proofs, seL4 is the most highly assured OS kernel, with proofs of an increasing number of properties (functional correctness, binary correctness, security, i.e. integrity and confidentiality, and system initialisation) and for an increasing number of hardware architectures: Arm (32-bit), x86 (64-bit) and RISC-V (64-bit), with proofs now starting for Arm (64-bit). In this talk we will reflect on how the challenges and opportunities of seL4 verification have evolved along its long, and continuing, journey.
BIO: June Andronick is CEO and co-founder of Proofcraft, providing commercial support for software verification in general and the seL4 microkernel verification in particular. She is also CEO of the seL4 Foundation, and conjoint Professor at UNSW Sydney. Her technical expertise is in increasing the reliability of critical software systems, by mathematically proving that the code behaves as expected and satisfies security and safety requirements. She previously led the Trustworthy Systems group, world-leading in the area of verified operating systems software, known worldwide for the formal verification of the seL4 microkernel. She was recognised in 2011 by MIT’s Technology Review as one of the world’s top young innovators (TR35). She holds a PhD in Computer Science from the University of Paris-Sud, France.
Why do things go wrong (or right)? Applications of causal reasoning to verification
ABSTRACT: In this talk I will look at the connections between causality and learning on one side, and verification and synthesis on the other. I will introduce the relevant concepts and discuss how causality and learning can help to improve the quality of systems and reduce the amount of human effort in designing and verifying systems. I will (briefly) introduce the theory of actual causality as defined by Halpern and Pearl. This theory turns out to be extremely useful in various areas of computer science due to a good match between the results it produces and our intuition. I will illustrate the definitions with examples from formal verification. I will also argue that active learning can be viewed as a type of causal discovery. Tackling the problem of reducing the human effort from the other direction, I will discuss ways to improve the quality of specifications, focusing in particular on synthesis.
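To make the flavour of Halpern-Pearl actual causality concrete, here is a small sketch of my own (not taken from the talk) that brute-forces minimal "but-for" causes in a boolean structural model, omitting the witness-set machinery of the full HP definitions. The forest-fire model and the `actual_causes` helper are illustrative assumptions, not an established API.

```python
# Simplified toy sketch of actual-cause search over a boolean structural
# model (a but-for check over minimal sets of flipped assignments; the
# full Halpern-Pearl definitions additionally quantify over witness sets).
from itertools import combinations

# Classic forest-fire example: fire breaks out if lightning strikes
# OR a match is dropped. In the actual world, both happen.
exogenous = {"lightning": True, "match": True}

def evaluate(interventions):
    """Apply interventions, then let non-intervened equations fire."""
    vals = dict(exogenous, **interventions)
    if "fire" not in interventions:
        vals["fire"] = vals["lightning"] or vals["match"]
    return vals

def phi(vals):
    return vals["fire"]

def actual_causes(candidates):
    """All minimal sets of variables whose joint flip falsifies phi."""
    actual = evaluate({})
    causes = []
    for size in range(1, len(candidates) + 1):
        for subset in combinations(candidates, size):
            if any(set(c) <= set(subset) for c in causes):
                continue  # minimality: skip supersets of known causes
            flipped = {v: not actual[v] for v in subset}
            if not phi(evaluate(flipped)):
                causes.append(subset)
    return causes

print(actual_causes(["lightning", "match"]))
# -> [('lightning', 'match')]: neither event alone is a but-for cause,
#    but jointly they are, the classic overdetermination scenario.
```

The output illustrates the "match with intuition" point: naive but-for reasoning declares neither overdetermining event a cause, and the HP theory is precisely what refines this.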
BIO: Hana Chockler is a Principal Scientist at a startup company causaLens and a Reader (Associate Professor) in Formal Methods in the Department of Informatics at King’s College London (KCL). Prior to joining KCL in 2013, Hana worked at IBM Research in the formal verification and software engineering departments. Her research interests span a wide variety of topics, including formal verification and synthesis of hardware and software, fundamental concepts in causality and its applications to a variety of domains, learning, and explainable AI.
On applying Model Checking in Formal Verification
ABSTRACT: Use of hardware model checking in the EDA industry is widespread and now considered an essential part of verification. While there are many papers and books about SAT, SMT and symbolic model checking, very little is often written about how these methods can be applied. Choices made when modeling systems can have a large impact on applicability and scalability. There is generally no formal semantics defined for the hardware design languages, nor for the intermediate representations in common use. As unsatisfactory as it may be, industry conventions and the behaviour exhibited by real hardware have instead been the guides. In this tutorial we will give an overview of some of the steps needed to apply hardware model checking in an EDA tool. We will touch on synthesis, hierarchy flattening, gate lowering, driver resolution, issues with discrete/synchronous time models, feedback loops and environment constraints, input rating and initialisation/reset.
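As a minimal sketch of the kind of check an EDA flow ultimately runs (my own toy example, not from the tutorial; real tools work symbolically on netlists, not by explicit simulation), here is bounded model checking of a tiny synchronous design, showing why a defined reset/initialisation matters in the discrete-time model:

```python
# Toy bounded model check: unroll a synchronous design from its reset
# state and look for a violation of a safety property within a bound.
# The 2-bit counter and its "bad" state are invented for illustration.

def step(state):
    """One synchronous clock tick: a 2-bit counter that wraps at 3."""
    return (state + 1) % 4

def bmc(init, step, bad, bound):
    """Return the first depth at which `bad` holds, or None if the
    property holds on every state reachable within `bound` ticks."""
    state = init
    for k in range(bound + 1):
        if bad(state):
            return k          # counterexample depth
        state = step(state)
    return None

# Reset drives the counter to 0, so state 3 is first reached after
# three ticks; with an undefined reset, any depth could be reported.
depth = bmc(init=0, step=step, bad=lambda s: s == 3, bound=10)
print(depth)  # -> 3
```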
BIO: Håkan Hjort has more than 20 years of industrial experience earned during his tenure at Safelogic, Jasper Design Automation and Cadence Design Systems, doing everything from language frontends to SAT solvers. Håkan studied Computer Science and Engineering at Chalmers, Gothenburg, and is today a Senior Software Architect in the System&Verification Group at Cadence, working on the formal verification tool Jasper.
Verification of Distributed Protocols: Decidable Modeling and Invariant Inference
ABSTRACT: Verification of distributed protocols and systems, where both the number of nodes in the systems and the state-space of each node are unbounded, is a long-standing research goal. In recent years, efforts around the Ivy verification tool have pushed a strategy of modeling distributed protocols and systems in a new way that enables decidable deductive verification, i.e., given a candidate inductive invariant, it is possible to automatically check if it is inductive, and to produce a finite counterexample to induction in case it is not inductive. Complex protocols require quantifiers in both models and their invariants, including forall-exists quantifier alternations. Still, it is possible to obtain decidability by enforcing a stratification structure on quantifier alternations, often achieved using modular decomposition techniques, which are supported by Ivy. Stratified quantifiers lead not only to theoretical decidability, but to reliably good solver performance in practice, which is in contrast to the typical instability of SMT solvers over formulas with complex quantification.
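The core check described above can be illustrated on a finite instance. The sketch below is my own toy stand-in for what Ivy automates (Ivy itself works in first-order logic with a solver, not by enumeration): given a candidate invariant, check that initial states satisfy it and that transitions preserve it, and otherwise report a finite counterexample to induction. The token-ring protocol and candidate invariants are assumptions made up for illustration.

```python
# Toy inductiveness check: Init => Inv and Inv /\ Trans => Inv', over a
# small finite instance of a token-ring protocol with N nodes.
from itertools import product

N = 3  # finite instance size

def initial(state):
    # initially node 0 holds the single token
    return state == (True,) + (False,) * (N - 1)

def successors(state):
    # a node holding the token may pass it to its right neighbour
    for i in range(N):
        if state[i]:
            nxt = list(state)
            nxt[i] = False
            nxt[(i + 1) % N] = True
            yield tuple(nxt)

def check_inductive(inv):
    """Return None if inv is inductive, else a counterexample."""
    for state in product([False, True], repeat=N):
        if initial(state) and not inv(state):
            return ("init", state)
        if inv(state):
            for nxt in successors(state):
                if not inv(nxt):
                    return ("cti", state, nxt)  # counterexample to induction
    return None

at_most_one = lambda s: sum(s) <= 1   # inductive: tokens are conserved
node2_free  = lambda s: not s[2]      # holds initially, but not inductive

print(check_inductive(at_most_one))   # -> None
print(check_inductive(node2_free))    # -> a CTI: the token moves onto node 2
```

In Ivy's decidable fragment the analogue of this check is guaranteed to terminate with either a proof or such a finite counterexample, which is exactly what makes the thousands of queries issued during invariant inference reliable.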
Reliable automation of invariant checking and finite counterexamples open the path to automating invariant inference. An invariant inference algorithm can propose a candidate invariant, automatically check it, and get a finite counterexample that can be used to inform the next candidate. For a complex protocol, this check would typically be performed thousands of times before an invariant is found, so reliable automation of invariant checking is a critical enabler. Recently, several invariant inference algorithms have been developed that can find complex quantified invariants for challenging protocols, including Paxos and some of its most intricate variants.
In the tutorial I will provide an overview of Ivy’s principles and techniques for modeling distributed protocols in a decidable fragment of first-order logic. I will then survey several recently developed invariant inference algorithms for quantified invariants, and present one such algorithm in depth: Primal-Dual Houdini. Primal-Dual Houdini is based on a new mathematical duality, and is obtained by deriving the formal dual of the well-known Houdini algorithm. As a result, Primal-Dual Houdini possesses an interesting formal symmetry between the search for proofs and for counterexamples.
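For readers unfamiliar with the primal algorithm being dualised, here is a sketch of classic Houdini (Flanagan and Leino's conjunctive invariant inference) on a toy transition system. The system, the candidate predicates, and the explicit state enumeration are all illustrative assumptions; real implementations discharge each check with an SMT solver.

```python
# Classic Houdini: start from the conjunction of all candidate
# predicates and repeatedly drop any candidate falsified at an initial
# state or not preserved by a step from a state satisfying the current
# conjunction, until reaching a fixpoint. The result is the strongest
# inductive invariant expressible as a conjunction of the candidates.

STATES = range(10)                     # toy finite state space
def initial(x): return x == 0
def step(x):    return (x + 2) % 10    # deterministic toy transition

candidates = {
    "x < 10": lambda x: x < 10,
    "x even": lambda x: x % 2 == 0,
    "x < 5":  lambda x: x < 5,         # true initially, not preserved
}

def houdini(candidates):
    live = dict(candidates)
    changed = True
    while changed:
        changed = False
        conj = lambda x: all(p(x) for p in live.values())
        for name, pred in list(live.items()):
            bad_init = any(initial(x) and not pred(x) for x in STATES)
            bad_step = any(conj(x) and not pred(step(x)) for x in STATES)
            if bad_init or bad_step:
                del live[name]         # drop falsified candidate, retry
                changed = True
    return set(live)

print(houdini(candidates))  # -> {'x < 10', 'x even'}
```

Each deletion here is driven by a counterexample to induction, which is the "primal" search that Primal-Dual Houdini mirrors on the counterexample side.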
BIO: Oded Padon is a researcher at VMware Research. Prior to joining VMware Research, Oded was a postdoctoral scholar at Stanford University. He completed his PhD from Tel Aviv University. Oded’s research interests include programming languages, formal verification, distributed systems, and quantum computing. He is a recipient of the 2020 ETAPS Doctoral Dissertation Award and the 2017 Google PhD fellowship in programming languages.