Joe Leslie-Hurd's Bibliography

[1] Mark D. Aagaard, Robert B. Jones, and Carl-Johan H. Seger. Combining theorem proving and trajectory evaluation in an industrial environment. In Proceedings of the 1998 Conference on Design Automation (DAC-98), pages 538–541, Los Alamitos, CA, June 1998. ACM/IEEE. [ bib ]
[2] Martin Abadi and Joseph Y. Halpern. Decidability and expressiveness for first-order logics of probability. Information and Computation, 1994. [ bib ]
[3] Iago Abal, Alcino Cunha, Joe Hurd, and Jorge Sousa Pinto. Using term rewriting to solve bit-vector arithmetic problems (poster presentation). In SAT-2012 [573], pages 493–495. [ bib | http ]
[4] Wilhelm Ackermann. Solvable Cases of the Decision Problem. Studies in Logic and the Foundations of Mathematics. North-Holland, Amsterdam, 1954. [ bib ]
[5] Mark Adams. Introducing HOL Zero. In Fukuda et al. [554], pages 142–143. [ bib ]
Theorem provers are now playing an important role in two diverse fields: computer system verification and mathematics. In computer system verification, they are a key component in toolsets that rigorously establish the absence of errors in critical computer hardware and software, such as processor design and safety-critical software, where traditional testing techniques provide inadequate assurance. In mathematics, they are used to check the veracity of recent high-profile proofs, such as the Four Colour Theorem and the Kepler Conjecture, whose scale and complexity have pushed traditional peer review to its limits.
[6] Wolfgang Ahrendt, Bernhard Beckert, Reiner Hähnle, Wolfram Menzel, Wolfgang Reif, Gerhard Schellhorn, and Peter H. Schmitt. Integration of automated and interactive theorem proving. In W. Bibel and P. Schmitt, editors, Automated Deduction: A Basis for Applications, volume II, chapter 4, pages 97–116. Kluwer, 1998. [ bib | .ps.gz ]
In this chapter we present a project to integrate interactive and automated theorem proving. Its aim is to combine the advantages of the two paradigms. We focus on one particular application domain, which is deduction for the purpose of software verification. Some of the reported facts may not be valid in other domains. We report on the integration concepts and on the experimental results with a prototype implementation.
[7] B. Akbarpour and S. Tahar. An approach for the formal verification of DSP designs using theorem proving. IEEE Transactions on CAD of Integrated Circuits and Systems, 25(8):1441–1457, 2006. [ bib | .pdf ]
[8] Torben Amtoft, John Hatcliff, Edwin Rodríguez, Robby, Jonathan Hoag, and David Greve. Specification and checking of software contracts for conditional information flow. In FM-2008 [529]. [ bib | .html ]
[9] R. Anderson. Why information security is hard - an economic perspective. In ACSAC-2001 [503], pages 358–365. [ bib | .pdf ]
According to one common view, information security comes down to technical measures. Given better access control policy models, formal proofs of cryptographic protocols, approved firewalls, better ways of detecting intrusions and malicious code, and better tools for system evaluation and assurance, the problems can be solved. The author puts forward a contrary view: information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons.
[10] Ross Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley, 1st edition, January 2001. [ bib | .html ]
[11] Tresys Technology. Apol Help Files, 2008. Available from http://oss.tresys.com/projects/setools/wiki/helpFiles/iflow_help. [ bib ]
[12] A. Armando and S. Ranise. From integrated reasoning specialists to “plug-and-play” reasoning components. In Calmet and Plaza [505], pages 42–54. [ bib ]
[13] Alessandro Armando, Silvio Ranise, and Michael Rusinowitch. A rewriting approach to satisfiability procedures. Information and Computation, 183(2):140–164, June 2003. [ bib | .ps ]
[14] R. Arthan, P. Caseley, C. O'Halloran, and A. Smith. ClawZ: Control laws in Z. In ICFEM-2000 [551], pages 169–176. [ bib ]
[15] R. D. Arthan and R. B. Jones. Z in HOL in ProofPower. BCS FACS FACTS, 2005-1. [ bib ]
[16] Owen L. Astrachan and Mark E. Stickel. Caching and lemmaizing in model elimination theorem provers. In Kapur [511], pages 224–238. [ bib | .html ]
[17] O. L. Astrachan and Donald W. Loveland. The use of lemmas in the model elimination procedure. Journal of Automated Reasoning, 19(1):117–141, August 1997. [ bib | .html ]
[18] Franz Baader and Tobias Nipkow. Term Rewriting and All That. Cambridge University Press, 1998. [ bib ]
[19] Alan Baker. A Concise Introduction to the Theory of Numbers. Cambridge University Press, 1984. [ bib ]
[20] Siani Baker, Andrew Ireland, and Alan Smaill. On the use of the constructive omega-rule within automated deduction. In Voronkov [559], pages 214–225. [ bib | .html ]
The cut elimination theorem for predicate calculus states that every proof may be replaced by one which does not involve use of the cut rule. This theorem no longer holds when the system is extended with Peano's axioms to give a formalisation for arithmetic. The problem of generalisation results, since arbitrary formulae can be cut in. This makes theorem-proving very difficult - one solution is to embed arithmetic in a stronger system, where cut elimination holds. This paper describes a system based on the omega rule, and shows that certain statements are provable in this system which are not provable in Peano arithmetic without cut. Moreover, an important application is presented in the form of a new method of generalisation by means of "guiding proofs" in the stronger system, which sometimes succeeds in producing proofs in the original system when other methods fail. The implementation of such a system is also described.
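For readers unfamiliar with the rule, the (constructive) omega-rule can be stated as follows; this rendering is ours, not the paper's:

\[
\frac{\vdash \varphi(0) \qquad \vdash \varphi(1) \qquad \vdash \varphi(2) \qquad \cdots}{\vdash \forall x.\ \varphi(x)}\ (\omega)
\]

with the constructive proviso that the infinitely many premises are given uniformly, say by a recursive function mapping each numeral n to a proof of φ(n). Adding the rule to Peano arithmetic restores cut elimination, at the price of infinitary proofs.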
[21] Michael Baldamus, Klaus Schneider, Michael Wenz, and Roberto Ziller. Can American Checkers be solved by means of symbolic model checking? In Howard Bowman, editor, Formal Methods Elsewhere (a Satellite Workshop of FORTE-PSTV-2000 devoted to applications of formal methods to areas other than communication protocols and software engineering), volume 43 of Electronic Notes in Theoretical Computer Science. Elsevier, May 2001. [ bib | .pdf ]
Symbolic model checking has become a successful technique for verifying large finite state systems up to more than 10^20 states. The key idea of this method is that extremely large sets can often be efficiently represented with propositional formulas. Most tools implement these formulas by means of binary decision diagrams (BDDs), which have therefore become a key data structure in modern VLSI CAD systems.

Some board games like American checkers have a state space whose size is well within the range of state space sizes that have been tackled by symbolic model checking. Moreover, the question whether there is a winning strategy in these games can be reduced to two simple μ-calculus formulas. Hence, the entire problem to solve such games can be reduced to simple μ-calculus model checking problems.

In this paper, we show how to model American checkers as a finite state system by means of BDDs. We moreover specify the existence of winning strategies for both players by simple μ-calculus formulas. Further still, we report on our experimental results with our own model checking tool, and we describe some powerful improvements that we have found in trying to solve the game.
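The game-theoretical classification mentioned above has a standard μ-calculus shape; the following is an illustrative rendering (ignoring draws and the details of the checkers move relation), not the paper's exact formulas:

\[
\mathit{win}_1 \;=\; \mu Z.\; \mathit{won}_1 \;\vee\; \langle \mathit{move}_1 \rangle\, [\mathit{move}_2]\, Z
\]

Read: the positions from which player 1 has already won, or can make a move after which every reply by player 2 lands back in the set. The least fixed point is computed by iterating this step on BDD-represented position sets until it stabilizes.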

[22] T. Ball, E. Bounimova, B. Cook, V. Levin, J. Lichtenberg, C. McGarvey, B. Ondrusek, S. K. Rajamani, and A. Ustuner. Thorough static analysis of device drivers. In EUROSYS-2006 [527], pages 73–85. [ bib ]
[23] Clemens Ballarin. Locales and locale expressions in Isabelle/Isar. In Berardi et al. [604], pages 34–50. [ bib ]
[24] Grzegorz Bancerek. The fundamental properties of natural numbers. Journal of Formalized Mathematics, 1, 1989. [ bib ]
[25] Paul H. Bardell, W. H. McAnney, and J. Savir. Built In Test for VLSI: Pseudorandom Techniques. John Wiley, October 1987. [ bib | .html ]
This handbook provides ready access to all of the major concepts, techniques, problems, and solutions in the emerging field of pseudorandom pattern testing. Until now, the literature in this area has been widely scattered, and published work, written by professionals in several disciplines, has treated notation and mathematics in ways that vary from source to source. This book opens with a clear description of the shortcomings of conventional testing as applied to complex digital circuits, reviewing by comparison the principles of design for testability of more advanced digital technology. Offers in-depth discussions of test sequence generation and response data compression, including pseudorandom sequence generators; the mathematics of shift-register sequences and their potential for built-in testing. Also details random and memory testing and the problems of assessing the efficiency of such tests, and the limitations and practical concerns of built-in testing.
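As a concrete illustration of the shift-register sequences the book covers, here is a minimal OCaml sketch of a Fibonacci linear feedback shift register; the 16-bit width and tap positions (16, 14, 13, 11) are a standard textbook choice, not taken from this book:

```ocaml
(* One step of a 16-bit Fibonacci LFSR: the feedback bit is the parity
   (XOR) of the tapped bits; the register shifts right and the feedback
   bit re-enters at the top.  Taps 16,14,13,11 give a maximal-length
   sequence of period 2^16 - 1. *)
let lfsr_step (s : int) : int =
  let bit = (s lxor (s lsr 2) lxor (s lsr 3) lxor (s lsr 5)) land 1 in
  (s lsr 1) lor (bit lsl 15)

(* Generate n successive pseudorandom test patterns from a nonzero seed. *)
let patterns seed n =
  let rec go s k acc =
    if k = 0 then List.rev acc else go (lfsr_step s) (k - 1) (s :: acc)
  in
  go seed n []

let () = List.iter (Printf.printf "%04x\n") (patterns 0xACE1 5)
```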
[26] Bruno Barras, Samuel Boutin, Cristina Cornes, Judicael Courant, Jean-Christophe Filliatre, Eduardo Gimenez, Hugo Herbelin, Gérard Huet, César A. Muñoz, Chetan Murthy, Catherine Parent, Christine Paulin-Mohring, Amokrane Saibi, and Benjamin Werner. The Coq proof assistant reference manual: Version 6.1. Technical Report RT-0203, INRIA (Institut National de Recherche en Informatique et en Automatique), France, 1997. [ bib ]
[27] Bruno Barras. Programming and computing in HOL. In Aagaard and Harrison [588], pages 17–37. [ bib ]
[28] Andrej Bauer, Edmund M. Clarke, and Xudong Zhao. Analytica - an experiment in combining theorem proving and symbolic computation. Journal of Automated Reasoning, 21(3):295–325, 1998. [ bib | .ps ]
[29] Bernhard Beckert and Joachim Posegga. leanTAP: Lean tableau-based deduction. Journal of Automated Reasoning, 15(3):339–358, December 1995. [ bib | .ps.Z ]
[30] Michael J. Beeson. Foundations of Constructive Mathematics. Springer, 1985. [ bib ]
[31] Michael Beeson. Unification in Lambda-Calculi with if-then-else. In Kirchner and Kirchner [514], pages 103–118. [ bib | http ]
A new unification algorithm is introduced, which (unlike previous algorithms for unification in lambda-calculus) shares the pleasant properties of first-order unification. Proofs of these properties are given, in particular uniqueness of the answer and the most-general-unifier property. This unification algorithm can be used to generalize first-order proof-search algorithms to second-order logic, making possible, for example, a straightforward treatment of McCarthy's circumscription schema.
[32] Eric T. Bell. Debunking Science. University of Washington Book Store, 1930. [ bib ]
[33] C. Benzmüller. Equality and Extensionality in Higher-Order Theorem Proving. PhD thesis, Saarland University, April 1999. [ bib | .ps.gz ]
[34] Stefan Berghofer and Tobias Nipkow. Proof terms for simply typed higher order logic. In Aagaard and Harrison [588], pages 38–52. [ bib | .html ]
This paper presents proof terms for simply typed, intuitionistic higher order logic, a popular logical framework. Unification-based algorithms for the compression and reconstruction of proof terms are described and have been implemented in the theorem prover Isabelle. Experimental results confirm the effectiveness of the compression scheme.
[35] E. R. Berlekamp. Factoring polynomials over large finite fields. Math. Comput., 24, 1970. [ bib ]
This paper presents algorithms for root-finding and factorization, two problems in finite fields. The latter problem is reduced to the root-finding problem, for which a probabilistic algorithm is given. This paper is a precursor of Rabin (1980).
[36] Tim Berners-Lee, James Hendler, and Ora Lassila. The semantic web. Scientific American, May 2001. [ bib ]
[37] Al Bessey, Ken Block, Ben Chelf, Andy Chou, Bryan Fulton, Seth Hallem, Charles Henri-Gros, Asya Kamsky, Scott McPeak, and Dawson Engler. A few billion lines of code later: using static analysis to find bugs in the real world. Communications of the ACM, 53(2):66–75, February 2010. [ bib | http ]
[38] Marc Bezem, Dimitri Hendriks, and Hans de Nivelle. Automated proof construction in type theory using resolution. In McAllester [516], pages 148–163. [ bib | http ]
[39] Józef Bialas. The σ-additive measure theory. Journal of Formalized Mathematics, 1990. [ bib | .html ]
[40] Józef Bialas. Properties of Caratheodor's measure. Journal of Formalized Mathematics, 1992. [ bib | .html ]
[41] Ambrose Bierce. The Devil's Dictionary. Dover, Dover Thrift edition, 1993. [ bib ]
[42] Armin Biere, Alessandro Cimatti, Edmund M. Clarke, Ofer Strichman, and Yunshan Zhu. Bounded Model Checking, volume 58 of Advances in Computers, pages 117–148. Elsevier, August 2003. [ bib | .pdf ]
[43] P. Billingsley. Probability and Measure. John Wiley & Sons, Inc., New York, 2nd edition, 1986. [ bib ]
[44] Jesse Bingham and Joe Leslie-Hurd. Verifying relative error bounds using symbolic simulation. In Biere and Bloem [520], pages 277–292. [ bib | http ]
In this paper we consider the problem of formally verifying hardware that is specified to compute reciprocal, reciprocal square root, and power-of-two functions on floating point numbers to within a given relative error. Such specifications differ from the common case in which any given input is specified to have exactly one correct output. Our approach is based on symbolic simulation with binary decision diagrams, and involves two distinct steps. First, we prove a lemma that reduces the relative error specification to several inequalities that involve reasoning about natural numbers only. The most complex of these inequalities asserts that the product of several naturals is less-than/greater-than another natural. Second, we invoke one of several customized algorithms that decides the inequality, without performing the expensive symbolic multiplications directly. We demonstrate the effectiveness of our approach on a next-generation Intel processor design and report encouraging time and space metrics for these proofs.
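The flavour of the reduction to natural-number inequalities can be conveyed with a small calculation (our rendering, not the paper's lemma). If the implementation output and the reference value are written as ratios of positive naturals, out = m1/n1 and f(x) = m2/n2, then

\[
\left| \frac{m_1/n_1}{m_2/n_2} - 1 \right| \le 2^{-k}
\quad\Longleftrightarrow\quad
2^k \cdot \left| m_1 n_2 - m_2 n_1 \right| \;\le\; m_2 n_1,
\]

which is exactly a less-than comparison between products of naturals, the form the customized symbolic algorithms decide without performing the expensive symbolic multiplications directly.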
[45] Steven Bishop, Matthew Fairbairn, Michael Norrish, Peter Sewell, and Keith Wansbrough. The TCP specification: A quick introduction. Available from Peter Sewell's web site, March 2004. [ bib | .ps ]
[46] Ian Blake, Gadiel Seroussi, and Nigel Smart. Elliptic Curves in Cryptography, volume 265 of London Mathematical Society Lecture Note Series. Cambridge University Press, 1999. [ bib ]
[47] Sandrine Blazy and Xavier Leroy. Mechanized semantics for the Clight subset of the C language. Special Issue of the Journal of Automated Reasoning, 43(3):263–288, October 2009. [ bib | http ]
This article presents the formal semantics of a large subset of the C language called Clight. Clight includes pointer arithmetic, struct and union types, C loops and structured switch statements. Clight is the source language of the CompCert verified compiler. The formal semantics of Clight is a big-step operational semantics that observes both terminating and diverging executions and produces traces of input/output events. The formal semantics of Clight is mechanized using the Coq proof assistant. In addition to the semantics of Clight, this article describes its integration in the CompCert verified compiler and several ways by which the semantics was validated.
[48] E. Bleicher. Freezer and EGT-query service on Nalimov DTM EGTs. Available at the web page http://www.k4it.de, 2006. [ bib ]
[49] Sylvie Boldo and César Muñoz. A high-level formalization of floating-point numbers in PVS. Technical Report NASA/CR-2006-214298, National Aeronautics and Space Administration, Langley Research Center, Hampton VA 23681-2199, USA, October 2006. [ bib | .pdf ]
[50] George Boole. The Mathematical Analysis of Logic, Being an Essay Toward a Calculus of Deductive Reasoning. Macmillan, Cambridge, 1847. [ bib ]
[51] Maksym Bortin, Einar Broch Johnsen, and Christoph Lüth. Structured formal development in Isabelle. Nordic Journal of Computing, 13:1–20, 2006. [ bib | .html ]
General purpose theorem provers provide advanced facilities for proving properties about specifications, and may therefore be a valuable tool in formal program development. However, these provers generally lack many of the useful structuring mechanisms found in functional programming or specification languages. This paper presents a constructive approach to adding theory morphisms and parametrisation to theorem provers, while preserving the proof support and consistency of the prover. The approach is implemented in Isabelle and illustrated by examples of an algorithm design rule and of the modular development of computational effects for imperative language features based on monads.
[52] Richard J. Boulton. Lazy techniques for fully expansive theorem proving. Formal Methods in System Design, 3(1/2):25–47, August 1993. [ bib ]
[53] Richard J. Boulton. Combining decision procedures in the HOL system. In Schubert et al. [548], pages 75–89. [ bib ]
[54] Richard Boulton, Joe Hurd, and Konrad Slind. Computer assisted reasoning: A Festschrift for Michael J. C. Gordon. Special Issue of the Journal of Automated Reasoning, 43(3):237–242, October 2009. [ bib | http ]
[55] François Bourdoncle. Efficient chaotic iteration strategies with widenings. In Bjørner et al. [535], pages 128–141. [ bib | .html ]
Abstract interpretation is a formal method that enables the static and automatic determination of run-time properties of programs. This method uses a characterization of program invariants as least and greatest fixed points of continuous functions over complete lattices of program properties. In this paper, we study precise and efficient chaotic iteration strategies for computing such fixed points when lattices are of infinite height and speedup techniques, known as widening and narrowing, have to be used. These strategies are based on a weak topological ordering of the dependency graph of the system of semantic equations associated with the program and minimize the loss in precision due to the use of widening operators. We discuss complexity and implementation issues and give precise upper bounds on the complexity of the intraprocedural and interprocedural abstract interpretation of higher-order programs based on the structure of their control flow graph.
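To make the widening vocabulary concrete, here is a toy interval analysis in OCaml; the transfer function models `i := 0; while i < 100 do i := i+1` and is our own illustration, not from the paper (whose contribution is the choice of iteration order):

```ocaml
(* Integer intervals; None means an unbounded end. *)
type itv = { lo : int option; hi : int option }

(* Standard interval widening: a bound that grew between two iterates
   is pushed to infinity, forcing the ascending chain to stabilize. *)
let widen a b =
  { lo = (match a.lo, b.lo with
          | Some x, Some y when y < x -> None
          | _ -> a.lo);
    hi = (match a.hi, b.hi with
          | Some x, Some y when y > x -> None
          | _ -> a.hi) }

(* Iterate the transfer function f with widening until a post-fixpoint. *)
let rec lfp f x =
  let x' = widen x (f x) in
  if x' = x then x else lfp f x'

(* Transfer function for: i := 0; while i < 100 do i := i + 1 *)
let f v =
  { lo = Some 0;
    hi = (match v.hi with
          | Some h -> Some (min (h + 1) 100)
          | None -> Some 100) }

let () =
  let r = lfp f { lo = Some 0; hi = Some 0 } in
  (* Widening overshoots to [0, +oo); a narrowing pass would refine
     the bound back down to [0, 100]. *)
  assert (r = { lo = Some 0; hi = None })
```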
[56] M. S. Bourzutschky, J. A. Tamplin, and G. McC. Haworth. Chess endgames: 6-man data and strategy. Theoretical Computer Science, 349:140–157, 2005. [ bib ]
[57] Robert S. Boyer and J Strother Moore. Proof checking the RSA public key encryption algorithm. American Mathematical Monthly, 91(3):181–189, 1984. [ bib | .ps.Z ]
[58] Robert S. Boyer and J Strother Moore. Integrating decision procedures into heuristic theorem provers: A case study of linear arithmetic. Machine Intelligence, 11:83–124, 1988. [ bib ]
[59] Robert S. Boyer, N. G. de Bruijn, Gérard Huet, and Andrzej Trybulec. A mechanically proof-checked encyclopedia of mathematics: Should we build one? Can we? In Bundy [512], pages 237–251. [ bib ]
[60] Sally A. Browning and Joe Hurd. Cryptol: The language of cryptanalysis (invited talk abstract). In Bernstein and Gaj [576], pages 57–59. [ bib | http ]
[61] Randal E. Bryant. Symbolic Boolean manipulation with ordered binary-decision diagrams. ACM Computing Surveys, 24(3):293–318, September 1992. [ bib | .html ]
Ordered Binary-Decision Diagrams (OBDDs) represent Boolean functions as directed acyclic graphs. They form a canonical representation, making testing of functional properties such as satisfiability and equivalence straightforward. A number of operations on Boolean functions can be implemented as graph algorithms on OBDD data structures. Using OBDDs, a wide variety of problems can be solved through symbolic analysis. First, the possible variations in system parameters and operating conditions are encoded with Boolean variables. Then the system is evaluated for all variations by a sequence of OBDD operations. Researchers have thus solved a number of problems in digital-system design, finite-state system analysis, artificial intelligence, and mathematical logic. This paper describes the OBDD data structure and surveys a number of applications that have been solved by OBDD-based symbolic analysis.
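A skeletal version of the data structure and its central operation, in OCaml; real packages add hash-consing so equal subgraphs are shared, and a memo table so apply runs in time proportional to the product of the argument sizes (both omitted here):

```ocaml
(* OBDD nodes test one variable; variables are identified with their
   position in the fixed global ordering (smaller = nearer the root). *)
type bdd = Leaf of bool | Node of int * bdd * bdd   (* var, low, high *)

(* Reduction: a node whose branches are equal is redundant. *)
let mk v lo hi = if lo = hi then lo else Node (v, lo, hi)

(* Combine two ordered BDDs with a boolean operation, recursing on the
   smaller (earlier) variable so the result respects the ordering. *)
let rec apply op a b =
  match a, b with
  | Leaf x, Leaf y -> Leaf (op x y)
  | Node (v, l, h), (Leaf _ as u) -> mk v (apply op l u) (apply op h u)
  | (Leaf _ as u), Node (v, l, h) -> mk v (apply op u l) (apply op u h)
  | Node (va, la, ha), Node (vb, lb, hb) ->
      if va = vb then mk va (apply op la lb) (apply op ha hb)
      else if va < vb then mk va (apply op la b) (apply op ha b)
      else mk vb (apply op a lb) (apply op a hb)

(* Example: (x0 AND x1) built from single-variable BDDs. *)
let var v = Node (v, Leaf false, Leaf true)
let conj = apply ( && )
let _ = conj (var 0) (var 1)
```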
[62] Alan Bundy. Artificial mathematicians. Available from the author's web site, May 1996. [ bib | .ps.gz ]
[63] N. de Bruijn. The mathematical language AUTOMATH, its usage, and some of its extensions. In Symposium on Automatic Demonstration, volume 125 of Lecture Notes in Mathematics, pages 29–61. Springer, 1970. [ bib ]
[64] G. Buffon. Essai d'arithmétique morale. Supplément à l'Histoire Naturelle, 4, 1777. [ bib ]
[65] Alan Bundy. The Computer Modelling of Mathematical Reasoning. Academic Press, 1983. [ bib ]
[66] David Burke, Joe Hurd, John Launchbury, and Aaron Tomb. Trust relationship modeling for software assurance. In FAST-2010 [528]. [ bib | http ]
[67] Michael Burrows, Martín Abadi, and Roger Needham. A logic of authentication. ACM Transactions on Computer Systems, 8(1):18–36, February 1990. [ bib | .pdf ]
Authentication protocols are the basis of security in many distributed systems, and it is therefore essential to ensure that these protocols function correctly. Unfortunately, their design has been extremely error prone. Most of the protocols found in the literature contain redundancies or security flaws. A simple logic has allowed us to describe the beliefs of trustworthy parties involved in authentication protocols and the evolution of these beliefs as a consequence of communication. We have been able to explain a variety of authentication protocols formally, to discover subtleties and errors in them, and to suggest improvements. In this paper we present the logic and then give the results of our analysis of four published protocols, chosen either because of their practical importance or because they serve to illustrate our method.
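To give the flavour of the logic, its best-known inference rule (the message-meaning rule for shared keys, in the standard presentation) reads:

\[
\frac{P \ \text{believes}\ Q \stackrel{K}{\leftrightarrow} P, \qquad P \ \text{sees}\ \{X\}_K}{P \ \text{believes}\ Q\ \text{once said}\ X}
\]

that is, if P believes K is a good key shared with Q, and P receives X encrypted under K, then P believes Q once uttered X.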
[68] Rod Burstall and John Darlington. A transformation system for developing recursive programs. Journal of the Association for Computing Machinery, 24(1):44–67, January 1977. [ bib ]
[69] H. Busch. First-order automation for higher-order-logic theorem proving. In Melham and Camilleri [547]. [ bib ]
[70] Ricky W. Butler, George Hagen, Jeffrey M. Maddalon, César Muñoz, Anthony Narkawicz, and Gilles Dowek. How formal methods impels discovery: A short history of an air traffic management project. In Muñoz [563]. [ bib | .pdf ]
[71] O. Caprotti and M. Oostdijk. Formal and efficient primality proofs by use of computer algebra oracles. Journal of Symbolic Computation, 32(1–2):55–70, 2001. Special Issue on Computer Algebra and Mechanized Reasoning. [ bib ]
[72] Víctor A. Carreño and Paul S. Miner. Specification of the IEEE-854 floating-point standard in HOL and PVS. In HOL-1995a [549]. Supplemental proceedings. [ bib | .pdf ]
[73] Orieta Celiku and Annabelle McIver. Cost-based analysis of probabilistic programs mechanised in HOL. Nordic Journal of Computing, 11(2):102–128, 2004. [ bib | .pdf ]
[74] Orieta Celiku and Annabelle McIver. Compositional specification and analysis of cost-based properties in probabilistic programs. In Fitzgerald et al. [533], pages 107–122. [ bib | http ]
[75] Orieta Celiku. Mechanized Reasoning for Dually-Nondeterministic and Probabilistic Programs. PhD thesis, Åbo Akademi University, 2006. [ bib | http ]
[76] Chin-Liang Chang and Richard Char-Tung Lee. Symbolic Logic and Mechanical Theorem Proving. Academic Press, New York, 1973. [ bib ]
[77] Alonzo Church. A formulation of the simple theory of types. Journal of Symbolic Logic, 5:56–68, 1940. [ bib ]
[78] Koen Claessen and John Hughes. QuickCheck: a lightweight tool for random testing of Haskell programs. ACM SIGPLAN Notices, 35(9):268–279, September 2000. [ bib | http ]
[79] E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. The MIT Press, 1999. [ bib ]
[80] T. Coe. Inside the Pentium FDIV bug. Dr. Dobb's Journal, April 1996. [ bib ]
[81] Avra Cohn. The notion of proof in hardware verification. Journal of Automated Reasoning, 5(2):127–139, 1989. [ bib ]
[82] Bob Colwell and Bob Brennan. Intel's formal verification experience on the Willamette development. In Aagaard and Harrison [588], pages 106–107. [ bib ]
[83] Adriana B. Compagnoni. Higher-Order Subtyping with Intersection Types. PhD thesis, Catholic University of Nijmegen, January 1995. [ bib ]
[84] Stephen A. Cook. The complexity of theorem-proving procedures. In ACM, editor, Third annual ACM Symposium on Theory of Computing, pages 151–158, Shaker Heights, Ohio, USA, May 1971. ACM Press. [ bib ]
[85] Stephen A. Cook. The P versus NP problem. Manuscript prepared for the Clay Mathematics Institute for the Millennium Prize Problems, April 2000. [ bib | http ]
[86] Byron Cook, Andreas Podelski, and Andrey Rybalchenko. Terminator: Beyond safety. In Ball and Jones [519], pages 415–418. [ bib | .html ]
Previous symbolic software model checkers (i.e., program analysis tools based on predicate abstraction, pushdown model checking and iterative counterexample-guided abstraction refinement, etc.) are restricted to safety properties. Terminator is the first software model checker for termination. It is now being used to prove that device driver dispatch routines always return to their caller (or return counterexamples if they fail to terminate).
[87] Thomas H. Cormen, Charles Eric Leiserson, and Ronald L. Rivest. Introduction to Algorithms. MIT Press/McGraw-Hill, Cambridge, Massachusetts, 1990. [ bib ]
[88] Duncan Coutts, Isaac Potoczny-Jones, and Don Stewart. Haskell: Batteries included. In Gill [538], pages 125–126. [ bib | .html ]
The quality of a programming language itself is only one component in the ability of application writers to get the job done. Programming languages can succeed or fail based on the breadth and quality of their library collection. Over the last few years, the Haskell community has risen to the task of building the library infrastructure necessary for Haskell to succeed as a programming language suitable for writing real-world applications. This on-going work, the Cabal and Hackage effort, is built on the open source model of distributed development, and has resulted in a flowering of development in the language with more code produced and reused now than at any point in the community's history. It is easier to obtain and use Haskell code, in a wider range of environments, than ever before. This demonstration describes the infrastructure and process of Haskell development inside the Cabal/Hackage framework, including the build system, library dependency resolution, centralised publication, documentation and distribution, and how the code escapes outward into the wider software community. We survey the benefits and trade-offs in a distributed, collaborative development ecosystem and look at a proposed Haskell Platform that envisages a complete Haskell development environment, batteries included.
[89] D. Craigen, S. Kromodimoeljo, I. Meisels, B. Pase, and M. Saaltink. EVES: An overview. Technical Report CP-91-5402-43, ORA Corporation, 1991. [ bib ]
[90] Paul Curzon. The formal verification of the Fairisle ATM switching element: an overview. Technical Report 328, University of Cambridge Computer Laboratory, March 1994. [ bib ]
[91] David Cyrluk, Patrick Lincoln, and Natarajan Shankar. On Shostak's decision procedure for combinations of theories. In McRobbie and Slaney [513]. [ bib ]
[92] Ingo Dahn and Christoph Wernhard. First order proof problems extracted from an article in the MIZAR mathematical library. In Bonacina and Furbach [536]. [ bib | .html ]
[93] Ingo Dahn. Interpretation of a Mizar-like logic in first-order logic. In Caferra and Salzer [537], pages 116–126. [ bib ]
[94] Marc Daumas, David Lester, and César Muñoz. Verified real number calculations: A library for interval arithmetic. IEEE Transactions on Computers, 58(2):226–237, February 2009. [ bib ]
[95] Harold Davenport. The Higher Arithmetic. Cambridge University Press, sixth edition, 1992. [ bib ]
[96] Morris DeGroot. Probability and Statistics. Addison-Wesley, 2nd edition, 1989. [ bib ]
[97] K. de Leeuw, E. F. Moore, C. E. Shannon, and N. Shapiro. Computability by probabilistic machines. In C. E. Shannon and J. McCarthy, editors, Automata Studies, pages 183–212. Princeton University Press, Princeton, NJ, 1955. [ bib ]
[98] Richard A. De Millo, Richard J. Lipton, and Alan J. Perlis. Social processes and proofs of theorems and programs. Communications of the ACM, 22(5):271–280, 1979. [ bib | DOI ]
It is argued that formal verifications of programs, no matter how obtained, will not play the same key role in the development of computer science and software engineering as proofs do in mathematics. Furthermore the absence of continuity, the inevitability of change, and the complexity of specification of significantly many real programs make the formal verification process difficult to justify and manage. It is felt that ease of formal verification should not dominate program language design.
[99] Ben L. Di Vito. Hypatheon: A mathematical database for PVS users (extended abstract). In Wegner [562]. [ bib | .pdf ]
[100] Louise A. Dennis, Graham Collins, Michael Norrish, Richard Boulton, Konrad Slind, Graham Robinson, Mike Gordon, and Tom Melham. The PROSPER toolkit. In Graf and Schwartzbach [582], pages 78–92. [ bib | .pdf ]
[101] Jörg Denzinger and Dirk Fuchs. Knowledge-based cooperation between theorem provers by TECHS. SEKI-Report SR-97-11, University of Kaiserslautern, 1997. [ bib | .ps.gz ]
[102] Edsger W. Dijkstra. The humble programmer. Communications of the ACM, 15(10):859–866, October 1972. [ bib | http ]
[103] E. W. Dijkstra. A Discipline of Programming. Prentice Hall, 1976. [ bib ]
[104] Eelco Dolstra and Andres Löh. NixOS: A purely functional Linux distribution. In Hook and Thiemann [552], pages 367–378. [ bib | http ]
[105] Theo Drane, Samuel Coward, Mertcan Temel, and Joe Leslie-Hurd. On the systematic creation of faithfully rounded commutative truncated Booth multipliers. In ARITH-2024 [506], pages 108–115. [ bib | http ]
In many instances of fixed-point multiplication, a full precision result is not required. Instead it is sufficient to return a faithfully rounded result. Faithful rounding permits the machine representable number either immediately above or below the full precision result, if the latter is not exactly representable. Multipliers which take full advantage of this freedom can be implemented using less circuit area and consuming less power. The most common implementations internally truncate the partial product array. However, truncation applied to the most common of multiplier architectures, namely Booth architectures, results in non-commutative implementations. The industrial adoption of truncated multipliers is limited by the absence of formal verification of such implementations, since exhaustive simulation is typically infeasible. We present a commutative truncated Booth multiplier architecture and derive closed form necessary and sufficient conditions for faithful rounding. We also provide the bit-vectors giving rise to the worst-case error. We present a formal verification methodology based on ACL2 which scales up to 42 bit multipliers. We synthesize a range of commutative faithfully rounded multipliers and show that truncated Booth implementations are up to 31% smaller than externally truncated multipliers.
Keywords: sufficient conditions; systematics; commutation; computer architecture; digital arithmetic; formal verification
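Faithful rounding, the central specification here, has a standard formal statement (our transcription, not quoted from the paper): writing RD(x) and RU(x) for the nearest representable numbers at or below and at or above x,

\[
\mathrm{mul}(a,b) \;\in\; \{\, \mathrm{RD}(a \cdot b),\ \mathrm{RU}(a \cdot b) \,\},
\]

i.e. the returned value is one of the two representable neighbours of the exact product (and equals it when the product is representable), a strictly weaker requirement than round-to-nearest that the truncated architectures exploit.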
[106] Jianjun Duan, Joe Hurd, Guodong Li, Scott Owens, Konrad Slind, and Junxing Zhang. Functional correctness proofs of encryption algorithms. In Sutcliffe and Voronkov [560], pages 519–533. [ bib | http ]
[107] Stefan Edelkamp. Symbolic exploration in two-player games: Preliminary results. In The International Conference on AI Planning & Scheduling (AIPS), Workshop on Model Checking, pages 40–48, Toulouse, France, 2002. [ bib | .pdf ]
In this paper symbolic exploration with binary decision diagrams (BDDs) is applied to two-player games to improve main memory consumption for reachability analysis and game-theoretical classification, since BDDs provide a compact representation for large sets of game positions. A number of examples are evaluated: Tic-Tac-Toe, Nim, Hex, and Four Connect. In Chess we restrict the considerations to the creation of endgame databases. The results are preliminary, but the study puts forth the idea that BDDs are widely applicable in game playing and provides a universal tool for people interested in quickly solving practical problems.
[108] Euclid. Elements, 300 B.C. [ bib | .html ]
[109] Euclid. Elements. Dover, 1956. Translated by Sir Thomas L. Heath. [ bib ]
[110] William M. Farmer, Joshua D. Guttman, and F. Javier Thayer. IMPS: An interactive mathematical proof system. Journal of Automated Reasoning, 11:213–248, 1993. [ bib | .html ]
IMPS is an Interactive Mathematical Proof System intended as a general purpose tool for formulating and applying mathematics in a familiar fashion. The logic of IMPS is based on a version of simple type theory with partial functions and subtypes. Mathematical specification and inference are performed relative to axiomatic theories, which can be related to one another via inclusion and theory interpretation. IMPS provides relatively large primitive inference steps to facilitate human control of the deductive process and human comprehension of the resulting proofs. An initial theory library containing almost a thousand repeatable proofs covers significant portions of logic, algebra and analysis, and provides some support for modeling applications in computer science.
[111] William M. Farmer. Theory interpretation in simple type theory. In Heering et al. [544], pages 96–123. [ bib | .pdf ]
[112] William M. Farmer. The seven virtues of simple type theory. Journal of Applied Logic, 6:267–286, 2008. [ bib | .html ]
Simple type theory, also known as higher-order logic, is a natural extension of first-order logic which is simple, elegant, highly expressive, and practical. This paper surveys the virtues of simple type theory and attempts to show that simple type theory is an attractive alternative to first-order logic for practical-minded scientists, engineers, and mathematicians. It recommends that simple type theory be incorporated into introductory logic courses offered by mathematics departments and into the undergraduate curricula for computer science and software engineering students.
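For orientation, the core of the system being praised is tiny (standard presentation, not quoted from the paper): the types are

\[
\tau \;::=\; \iota \;\mid\; o \;\mid\; \tau \to \tau
\]

where ι is the type of individuals and o the type of propositions; the terms are those of the simply typed λ-calculus with equality, and in some formulations (e.g. Andrews' Q0) even the quantifiers are defined from equality and λ.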
[113] V. A. Feldman and D. Harel. A probabilistic dynamic logic. Journal of Computer and System Sciences, 28(2):193–215, 1984. [ bib ]
[114] Warren E. Ferguson, Jesse Bingham, Levent Erkök, John R. Harrison, and Joe Leslie-Hurd. Digit serial methods with applications to division and square root. IEEE Transactions on Computers, 67(3):449–456, March 2018. [ bib | DOI | http ]
We present a generic digit serial method (DSM) to compute the digits of a real number V. Bounds on these digits, and on the errors in the associated estimates of V formed from these digits, are derived. To illustrate our results, we derive bounds for a parameterized family of high-radix algorithms for division and square root. These bounds enable hardware designers to determine, for example, whether a given choice of DSM parameters allows rapid formation and rounding of approximations to V.
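In outline (our notation, not the paper's): a radix-r digit serial method emits digits d_1, d_2, ... of the target value V, and the k-digit estimate and its error obey

\[
V_k \;=\; d_0 + \sum_{i=1}^{k} d_i\, r^{-i}, \qquad |V - V_k| \;<\; C \cdot r^{-k},
\]

for a constant C fixed by the permitted digit range; bounds of this shape are what let a designer check that k iterations of a given high-radix divider or square-rooter suffice for correct rounding.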
[115] James H. Fetzer. Program verification: the very idea. Communications of the ACM, 31(9):1048–1063, 1988. [ bib | DOI ]
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
[116] FIDE. The FIDE Handbook, chapter E.I. The Laws of Chess. FIDE, 2004. Available for download from the FIDE website. [ bib | http ]
[117] C. Fidge and C. Shankland. But what if I don't want to wait forever? Formal Aspects of Computing, to appear in 2003. [ bib ]
[118] M. Fierz, M. Cash, and E. Gilbert. The 2002 world computer-checkers championship. ICGA Journal, 25(3):196–198, 2002. [ bib ]
[119] George S. Fishman. Monte Carlo: Concepts, Algorithms and Applications. Springer, 1996. [ bib ]
[120] J. D. Fleuriot. A Combination of Geometry Theorem Proving and Nonstandard Analysis, with Application to Newton's Principia. Distinguished Dissertations. Springer, 2001. [ bib ]
[121] Anthony Fox. Formal specification and verification of ARM6. In Basin and Wolff [593], pages 25–40. [ bib ]
[122] Anthony Fox. An algebraic framework for verifying the correctness of hardware with input and output: A formalization in HOL. In [518], pages 157–174. [ bib ]
[123] Adam Fuchs, Avik Chaudhuri, and Jeffrey Foster. SCanDroid: Automated security certification of Android applications. Available from the second author's website, 2009. [ bib | http ]
Android is a popular mobile-device platform developed by Google. Android's application model is designed to encourage applications to share their code and data with other applications. While such sharing can be tightly controlled with permissions, in general users cannot determine what applications will do with their data, and thereby cannot decide what permissions such applications should run with. In this paper we present SCanDroid, a tool for reasoning automatically about the security of Android applications. SCanDroid's analysis is modular to allow incremental checking of applications as they are installed on an Android device. It extracts security specifications from manifests that accompany such applications, and checks whether data flows through those applications are consistent with those specifications. To our knowledge, SCanDroid is the first program analysis tool for Android, and we expect it to be useful for automated security certification of Android applications.
[124] Dov M. Gabbay and Hans Jürgen Ohlbach. Quantifier elimination in second-order predicate logic. In Bernhard Nebel, Charles Rich, and William Swartout, editors, Principles of Knowledge Representation and Reasoning (KR92), pages 425–435. Morgan Kaufmann, 1992. [ bib | http ]
An algorithm is presented which eliminates second-order quantifiers over predicate variables in formulae of type ∃P1,...,Pn F where F is an arbitrary formula of first-order predicate logic. The resulting formula is equivalent to the original formula, if the algorithm terminates. The algorithm can for example be applied to do interpolation, to eliminate the second-order quantifiers in circumscription, to compute the correlations between structures and power structures, to compute semantic properties corresponding to Hilbert axioms in non classical logics and to compute model theoretic semantics for new logics.
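A minimal worked instance of such an elimination (our example, in the style of Ackermann's lemma):

\[
\exists P.\; \big( \forall x.\ P(x) \rightarrow Q(x) \big) \wedge P(a)
\quad\equiv\quad Q(a).
\]

Right to left, take P to be the singleton {a}; left to right, instantiate the implication at a. The algorithm mechanizes rewritings of this kind, and circumscription axioms are second-order formulas of exactly this shape.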
[125] Chris Gathercole and Peter Ross. Small populations over many generations can beat large populations over few generations in genetic programming. http://www.dai.ed.ac.uk/students/chrisg/, 1997. [ bib ]
[126] J. T. Gill. Computational complexity of probabilistic Turing machines. SIAM Journal on Computing, 6(4):675–695, December 1977. [ bib ]
[127] P. C. Gilmore. A proof method for quantification theory: Its justification and realization. IBM Journal of Research and Development, 4:28–35, 1960. [ bib ]
[128] Jean-Yves Girard. Proofs and types, volume 7 of Cambridge tracts in theoretical computer science. Cambridge University Press, 1989. [ bib ]
[129] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. Monatshefte für Mathematik und Physik, 38:173–198, 1931. [ bib ]
[130] K. Gödel. On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Oliver and Boyd, London, 1962. Translated by B. Meltzer. [ bib ]
[131] J. Goguen and G. Malcolm. Algebraic Semantics of Imperative Programs. MIT Press, Cambridge, Mass., 1st edition, 1996. [ bib ]
[132] E. Goldberg and Y. Novikov. BerkMin: A fast and robust SAT-solver. In Design, Automation, and Test in Europe (DATE '02), pages 142–149, March 2002. [ bib | .pdf ]
[133] Charles M. Goldie and Richard G. E. Pinch. Communication theory, volume 20 of LMS Student Texts. Cambridge University Press, 1991. [ bib ]
[134] Georges Gonthier. A computer-checked proof of the four colour theorem. Available for download at the author's website, 2004. [ bib | .pdf ]
This report gives an account of a successful formalization of the proof of the Four Colour Theorem, which was fully checked by the Coq v7.3.1 proof assistant. This proof is largely based on the mixed mathematics/computer proof of Robertson et al., but contains original contributions as well. This document is organized as follows: section 1 gives a historical introduction to the problem and positions our work in this setting; section 2 defines more precisely what was proved; section 3 explains the broad outline of the proof; section 4 explains how we exploited the features of the Coq assistant to conduct the proof, and gives a brief description of the tactic shell that we used to write our proof scripts; section 5 is a detailed account of the formal proof (for even more details the actual scripts can be consulted); section 6 is a chronological account of how the formal proof was developed; finally, we draw some general conclusions in section 7.
[135] M. Gordon, R. Milner, L. Morris, M. Newey, and C. Wadsworth. A Metalanguage for interactive proof in LCF. In POPL-1978 [567], pages 119–130. [ bib ]
[136] M. Gordon, R. Milner, and C. Wadsworth. Edinburgh LCF, volume 78 of Lecture Notes in Computer Science. Springer, 1979. [ bib ]
[137] Mike Gordon. Why Higher-Order Logic is a good formalism for specifying and verifying hardware. Technical Report 77, Computer Laboratory, The University of Cambridge, 1985. [ bib ]
[138] M. J. C. Gordon. Programming Language Theory and its Implementation. Prentice Hall, 1988. [ bib ]
[139] Michael J. C. Gordon. HOL: A proof generating system for higher-order logic. In Graham Birtwistle and P. A. Subrahmanyam, editors, VLSI Specification, Verification and Synthesis, pages 73–128. Kluwer Academic Publishers, Boston, 1988. [ bib ]
[140] M. J. C. Gordon. Mechanizing programming logics in higher order logic. In G. Birtwistle and P. A. Subrahmanyam, editors, Current Trends in Hardware Verification and Automated Theorem Proving, pages 387–439. Springer-Verlag, 1989. [ bib | .dvi.gz ]
[141] M. J. C. Gordon and T. F. Melham, editors. Introduction to HOL (A theorem-proving environment for higher order logic). Cambridge University Press, 1993. [ bib ]
[142] Mike Gordon. Merging HOL with set theory: preliminary experiments. Technical Report 353, University of Cambridge Computer Laboratory, November 1994. [ bib | http ]
Set theory is the standard foundation for mathematics, but the majority of general purpose mechanized proof assistants support versions of type theory (higher order logic). Examples include Alf, Automath, Coq, Ehdm, HOL, IMPS, Lambda, LEGO, Nuprl, PVS and Veritas. For many applications type theory works well and provides, for specification, the benefits of type-checking that are well-known in programming. However, there are areas where types get in the way or seem unmotivated. Furthermore, most people with a scientific or engineering background already know set theory, whereas type theory may appear inaccessible and so be an obstacle to the uptake of proof assistants based on it. This paper describes some experiments (using HOL) in combining set theory and type theory; the aim is to get the best of both worlds in a single system. Three approaches have been tried, all based on an axiomatically specified type V of ZF-like sets: (i) HOL is used without any additions besides V; (ii) an embedding of the HOL logic into V is provided; (iii) HOL axiomatic theories are automatically translated into set-theoretic definitional theories. These approaches are illustrated with two examples: the construction of lists and a simple lemma in group theory.
[143] M. J. C. Gordon. Notes on PVS from a HOL perspective. Available from the author's web page, 1996. [ bib | .html ]
During the first week of July 1995 I visited SRI Menlo Park to find out more about PVS (Prototype Verification System). The preceding week I attended LICS95, where I had several talks with Natarajan Shankar accompanied by demos of the system on his SparcBook laptop. This note consists of a somewhat rambling and selective report on some of the things I learned, together with a discussion of their implications for the evolution of the HOL system.
[144] Michael J. C. Gordon. Reachability programming in HOL98 using BDDs. In Aagaard and Harrison [588], pages 179–196. [ bib | http ]
Two methods of programming BDD-based symbolic algorithms in the Hol98 proof assistant are presented. The goal is to provide a platform for implementing intimate combinations of deduction and algorithmic verification, like model checking. The first programming method uses a small kernel of ML functions to convert between BDDs, terms and theorems. It is easy to use and is suitable for rapid prototyping experiments. The second method requires lower-level programming but can support more efficient calculations. It is based on an LCF-like use of an abstract type to encapsulate rules for manipulating judgements r ⊢ t → b meaning “logical term t is represented by BDD b with respect to variable order r”. The two methods are illustrated by showing how to perform the standard fixed-point calculation of the BDD of the set of reachable states of a finite state machine.
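The abstract-type discipline described here can be sketched in OCaml (hypothetical signatures; the actual implementation is ML code inside Hol98 driving a real BDD package):

```ocaml
(* LCF-style kernel for representation judgements  r |- t --> b.
   Because [rep] is abstract, a judgement can only be constructed by
   the primitive rules below, each mirroring one BDD-package operation;
   soundness therefore rests on these few lines (plus the package). *)
module type REP_KERNEL = sig
  type term                       (* logical terms *)
  type bdd                        (* handles into the BDD package *)
  type rep                        (* abstract: r |- t --> b *)

  val var  : string -> rep        (* r |- v --> (variable BDD for v) *)
  val conj : rep -> rep -> rep    (* from r |- t1 --> b1 and r |- t2 --> b2
                                     infer r |- t1 /\ t2 --> AND(b1,b2) *)
  val neg  : rep -> rep           (* r |- ~t --> NOT(b) *)
  val dest : rep -> term * bdd    (* read off the components *)
end
```

Derived operations, such as the reachability fixed-point computation, are then ordinary programs over rep and cannot manufacture an unsound judgement.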
[145] Michael J. C. Gordon. Proof, Language, and Interaction: Essays in Honour of Robin Milner, chapter 6. From LCF to HOL: A Short History. MIT Press, May 2000. [ bib ]
[146] Michael J. C. Gordon. Programming combinations of deduction and BDD-based symbolic calculation. LMS Journal of Computation and Mathematics, 5:56–76, August 2002. [ bib | http ]
A generalisation of Milner's `LCF approach' is described. This allows algorithms based on binary decision diagrams (BDDs) to be programmed as derived proof rules in a calculus of representation judgements. The derivation of representation judgements becomes an LCF-style proof by defining an abstract type for judgements analogous to the LCF type of theorems. The primitive inference rules for representation judgements correspond to the operations provided by an efficient BDD package coded in C (BuDDy). Proof can combine traditional inference with steps inferring representation judgements. The resulting system provides a platform to support a tight and principled integration of theorem proving and model checking. The methods are illustrated by using them to solve all instances of a generalised Missionaries and Cannibals problem.
[147] Michael J. C. Gordon. PuzzleTool: An example of programming computation and deduction. In Carreño et al. [592], pages 214–229. [ bib ]
[148] Mike Gordon, Joe Hurd, and Konrad Slind. Executing the formal semantics of the Accellera Property Specification Language by mechanised theorem proving. In Geist and Tronci [522], pages 200–215. [ bib | http ]
The Accellera Property Specification Language (PSL) is designed for the formal specification of hardware. The Reference Manual contains a formal semantics, which we previously encoded in a machine readable version of higher order logic. In this paper we describe how to `execute' the formal semantics using proof scripts coded in the HOL theorem prover's metalanguage ML. The goal is to see if it is feasible to implement useful tools that work directly from the official semantics by mechanised proof. Such tools will have a high assurance of conforming to the standard. We have implemented two experimental tools: an interpreter that evaluates whether a finite trace w, which may be generated by a simulator, satisfies a PSL formula f (i.e. w ⊨ f), and a compiler that converts PSL formulas to checkers in an intermediate format suitable for translation to HDL for inclusion in simulation test-benches. Although our tools use logical deduction and are thus slower than hand-crafted implementations, they may be speedy enough for some applications. They can also provide a reference for more efficient implementations.
[149] Mike Gordon, Juliano Iyoda, Scott Owens, and Konrad Slind. A proof-producing hardware compiler for a subset of higher order logic. In Hurd et al. [597], pages 59–75. [ bib | http ]
[150] M. J. C. Gordon. Specification and verification I, 2009. Course notes available from http://www.cl.cam.ac.uk/~mjcg/Teaching/SpecVer1/SpecVer1.html. [ bib ]
[151] M. J. C. Gordon. Specification and verification II, 2009. Course notes available from http://www.cl.cam.ac.uk/~mjcg/Teaching/SpecVer2/SpecVer2.html. [ bib ]
[152] Daniel Gorenstein, Richard Lyons, and Ronald Solomon. The Classification of the Finite Simple Groups. AMS, 1994. [ bib | http ]
[153] Paul Graunke. Verified safety and information flow of a block device. In SSV-2008 [580], pages 187–202. [ bib | http ]
This work reports on the author's experience designing, implementing, and formally verifying a low-level piece of system software. The timing model and the adaptation of an existing information flow policy to a monadic framework are reasonably novel. Interactive compilation through equational rewriting worked well in practice. Finally, the project uncovered some potential areas for improving interactive theorem provers.
[154] Penny Grubb and Armstrong A. Takang. Software Maintenance: Concepts and Practice. World Scientific Publishing Company, 2nd edition, July 2003. [ bib ]
[155] Elsa Gunter. Doing algebra in simple type theory. Technical Report MS-CIS-89-38, Logic & Computation 09, Department of Computer and Information Science, University of Pennsylvania, 1989. [ bib ]
[156] Joshua D. Guttman, Amy L. Herzog, John D. Ramsdell, and Clement W. Skorupka. Verifying information flow goals in Security-Enhanced Linux. Journal of Computer Security, 13(1):115–134, 2005. [ bib | .pdf ]
[157] Florian Haftmann. From higher-order logic to Haskell: There and back again. In Gallagher and Voigtländer [565], pages 155–158. [ bib | .pdf ]
We present two tools which together allow reasoning about (a substantial subset of) Haskell programs. One is the code generator of the proof assistant Isabelle, which turns specifications formulated in Isabelle's higher-order logic into executable Haskell source text; the other is Haskabelle, a tool to translate programs written in Haskell into Isabelle specifications. The translation from Isabelle to Haskell directly benefits from the rigorous correctness approach of a proof assistant: generated Haskell programs are always partially correct w.r.t. the specification from which they are generated.
[158] Petr Hájek. Metamathematics of Fuzzy Logic, volume 4 of Trends in Logic. Kluwer Academic Publishers, Dordrecht, August 1998. [ bib | http ]
This book presents a systematic treatment of deductive aspects and structures of fuzzy logic understood as many valued logic sui generis. Some important systems of real-valued propositional and predicate calculus are defined and investigated. The aim is to show that fuzzy logic as a logic of imprecise (vague) propositions does have well-developed formal foundations and that most things usually named `fuzzy inference' can be naturally understood as logical deduction.

There are two main groups of intended readers. First, logicians: they can see that fuzzy logic is indeed a branch of logic and may find several very interesting open problems. Second, equally important, researchers involved in fuzzy logic applications and soft computing. As a matter of fact, most of these are not professional logicians so that it can easily happen that an application, clever and successful as it may be, is presented in a way which is logically not entirely correct or may appear simple-minded. (Standard presentations of the logical aspects of fuzzy controllers are the most typical example.) This fact would not be very important if only the bon ton of logicians were harmed; but it is the opinion of the author (who is a mathematical logician) that a better understanding of the strictly logical basis of fuzzy logic (in the usual broad sense) is very useful for fuzzy logic appliers since if they know better what they are doing, they may hope to do it better. In addition, a better mutual understanding between (classical) logicians and researchers in fuzzy logic, promises to lead to deeper cooperation and new results.

Audience: mathematicians, logicians, computer scientists, specialists in artificial intelligence and knowledge engineering, developers of fuzzy logic.

[159] Thomas C. Hales. Introduction to the Flyspeck project. In Coquand et al. [523]. [ bib | http ]
This article gives an introduction to a long-term project called Flyspeck, whose purpose is to give a formal verification of the Kepler Conjecture. The Kepler Conjecture asserts that the density of a packing of equal radius balls in three dimensions cannot exceed π/√18. The original proof of the Kepler Conjecture, from 1998, relies extensively on computer calculations. Because the proof relies on relatively few external results, it is a natural choice for a formalization effort.
[160] Joseph Y. Halpern. An analysis of first-order logics of probability. Artificial Intelligence, 1990. [ bib | http ]
We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as “The probability that a randomly chosen bird flies is greater than .9.” The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief, such as “The probability that Tweety (a particular bird) flies is greater than .9.” We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about probability.
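The two semantics correspond to two syntactically different statements; in Halpern's notation the bird examples from the abstract read roughly:

\[
w_x\big(\mathit{Flies}(x) \mid \mathit{Bird}(x)\big) > 0.9
\qquad\text{versus}\qquad
\Pr\big(\mathit{Flies}(\mathit{Tweety})\big) > 0.9,
\]

the first measuring a randomly chosen domain element (a statistical assertion), the second measuring the set of possible worlds in which a fixed individual flies (a degree of belief).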
[161] Darrel Hankerson, Alfred Menezes, and Scott Vanstone. Guide to Elliptic Curve Cryptography. Springer, 2003. [ bib | http ]
[162] R. W. Hansell. Borel measurable mappings for nonseparable metric spaces. Transactions of the American Mathematical Society, 161:145–169, November 1971. [ bib | http ]
"One of the key papers in non-separable theory. [To show that f IN measurable E U implies countable range] One may use Section 2, Corollary 7, with the observation that [0,1] satisfies the assumptions and, in your situation, the collection f^(-1)(x), x∈X is even completely additive-Borel"
[163] G. H. Hardy. A Mathematician's Apology, reprinted with a foreword by C. P. Snow. Cambridge University Press, 1993. [ bib ]
[164] John Harrison. Metatheory and reflection in theorem proving: A survey and critique. Technical Report CRC-053, SRI Cambridge, Millers Yard, Cambridge, UK, 1995. [ bib | .html ]
One way to ensure correctness of the inference performed by computer theorem provers is to force all proofs to be done step by step in a simple, more or less traditional, deductive system. Using techniques pioneered in Edinburgh LCF, this can be made palatable. However, some believe such an approach will never be efficient enough for large, complex proofs. One alternative, commonly called reflection, is to analyze proofs using a second layer of logic, a metalogic, and so justify abbreviating or simplifying proofs, making the kinds of shortcuts humans often do or appealing to specialized decision algorithms. In this paper we contrast the fully-expansive LCF approach with the use of reflection. We put forward arguments to suggest that the inadequacy of the LCF approach has not been adequately demonstrated, and neither has the practical utility of reflection (notwithstanding its undoubted intellectual interest). The LCF system with which we are most concerned is the HOL proof assistant.

The plan of the paper is as follows. We examine ways of providing user extensibility for theorem provers, which naturally places the LCF and reflective approaches in opposition. A detailed introduction to LCF is provided, emphasizing ways in which it can be made efficient. Next, we present a short introduction to metatheory and its usefulness, and, starting from Gödel's proofs and Feferman's transfinite progressions of theories, look at logical `reflection principles'. We show how to introduce computational `reflection principles' which do not extend the power of the logic, but may make deductions in it more efficient, and speculate about their practical usefulness. Applications or proposed applications of computational reflection in theorem proving are surveyed, following which we draw some conclusions. In an appendix, we attempt to clarify a couple of other notions of `reflection' often encountered in the literature.

The paper questions the too-easy acceptance of reflection principles as a practical necessity. However, I hope it also serves as an adequate introduction to the concepts involved in reflection and a survey of relevant work. To this end, a rather extensive bibliography is provided.

[165] John Harrison. Binary decision diagrams as a HOL derived rule. The Computer Journal, 38:162–170, 1995. [ bib | .html ]
Binary Decision Diagrams (BDDs) are a representation for Boolean formulas which makes many operations, in particular tautology-checking, surprisingly efficient in important practical cases. In contrast to such custom decision procedures, the HOL theorem prover expands all proofs out to a sequence of extremely simple primitive inferences. In this paper we describe how the BDD algorithm may be adapted to comply with such strictures, helping us to understand the strengths and limitations of the HOL approach.
[166] John Harrison. Optimizing proof search in model elimination. In McRobbie and Slaney [513], pages 313–327. [ bib | .html ]
Many implementations of model elimination perform proof search by iteratively increasing a bound on the total size of the proof. We propose an optimized version of this search mode using a simple divide-and-conquer refinement. Optimized and unoptimized modes are compared, together with depth-bounded and best-first search, over the entire TPTP problem library. The optimized size-bounded mode seems to be the overall winner, but for each strategy there are problems on which it performs best. Some attempt is made to analyze why. We emphasize that our optimization, and other implementation techniques like caching, are rather general: they are not dependent on the details of model elimination, or even that the search is concerned with theorem proving. As such, we believe that this study is a useful complement to research on extending the model elimination calculus.
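As a point of reference, the basic size-bounded search mode that the paper optimizes can be sketched in a few lines of Haskell (a minimal illustration with a hypothetical proveWithin oracle, not Harrison's divide-and-conquer refinement):

    import Data.Maybe (listToMaybe, mapMaybe)

    -- Iterative deepening on total proof size: try bound 1, then bound 2,
    -- and so on.  'proveWithin n' stands for a hypothetical prover call
    -- that returns a proof only if one of total size at most n exists.
    searchBySize :: (Int -> Maybe proof) -> Int -> Maybe proof
    searchBySize proveWithin limit =
      listToMaybe (mapMaybe proveWithin [1 .. limit])

By laziness, the search stops at the first bound for which a proof is found; the divide-and-conquer refinement studied in the paper aims to avoid re-exploring the same small proof attempts at every iteration.
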
[167] John Harrison. Formalized mathematics. Technical Report 36, Turku Centre for Computer Science (TUCS), Lemminkäisenkatu 14 A, FIN-20520 Turku, Finland, 1996. [ bib | .html ]
It is generally accepted that in principle it is possible to formalize completely almost all of present-day mathematics. The practicability of actually doing so is widely doubted, as is the value of the result. But in the computer age we believe that such formalization is possible and desirable. In contrast to the QED Manifesto however, we do not offer polemics in support of such a project. We merely try to place the formalization of mathematics in its historical perspective, as well as looking at existing praxis and identifying what we regard as the most interesting issues, theoretical and practical.
[168] John Harrison. A Mizar mode for HOL. In von Wright et al. [583], pages 203–220. [ bib | .html ]
The HOL theorem prover is implemented in the LCF style, meaning that all inference is ultimately reduced to a collection of very simple (forward) primitive inference rules. By programming it is possible to build alternative means of proving theorems on top, while preserving security. Existing HOL proof styles are, however, very different from those used in textbooks. Here we describe the addition of another proof style, inspired by Mizar. We believe the resulting system combines some of the best features of both HOL and Mizar.
[169] John Harrison. Proof style. In Giménez and Paulin-Mohring [603], pages 154–172. [ bib | .html ]
We are concerned with how computer theorem provers should expect users to communicate proofs to them. There are many stylistic choices that still allow the machine to generate a completely formal proof object. The most obvious choice is the amount of guidance required from the user, or from the machine perspective, the degree of automation provided. But another important consideration, which we consider particularly significant, is the bias towards a `procedural' or `declarative' proof style. We will explore this choice in depth, and discuss the strengths and weaknesses of declarative and procedural styles for proofs in pure mathematics and for verification applications. We conclude with a brief summary of our own experiments in trying to combine both approaches.
[170] John Harrison. HOL Light: A tutorial introduction. In Srivas and Camilleri [530], pages 265–269. [ bib | .html ]
HOL Light is a new version of the HOL theorem prover. While retaining the reliability and programmability of earlier versions, it is more elegant, lightweight, powerful and automatic; it will be the basis for the Cambridge component of the HOL-2000 initiative to develop the next generation of HOL theorem provers. HOL Light is written in CAML Light, and so will run well even on small machines, e.g. PCs and Macintoshes with a few megabytes of RAM. This is in stark contrast to the resource-hungry systems which are the norm in this field, other versions of HOL included. Among the new features of this version are a powerful simplifier, effective first order automation, simple higher-order matching and very general support for inductive and recursive definitions.

Many theorem provers, model checkers and other hardware verification tools are tied to a particular set of facilities and a particular style of interaction. However HOL Light offers a wide range of proof tools and proof styles to suit the needs of various applications. For work in high-level mathematical theories, one can use a more `declarative' textbook-like style of proof (inspired by Trybulec's Mizar system). For larger but more routine and mechanical applications, HOL Light includes more `procedural' automated tools for simplifying complicated expressions, proving large tautologies etc. We believe this unusual range makes HOL Light a particularly promising vehicle for floating point verification, which involves a mix of abstract mathematics and concrete low-level reasoning. We will aim to give a brief tutorial introduction to the system, illustrating some of its features by way of such floating point verification examples. In this paper we explain the system and its possible applications in a little more detail.

[171] John Harrison. Floating point verification in HOL Light: the exponential function. Technical Report 428, University of Cambridge Computer Laboratory, 1997. [ bib | .html ]
[172] John Harrison. Theorem Proving with the Real Numbers (Distinguished dissertations). Springer, 1998. [ bib | .html ]
[173] John Harrison. The HOL Light Manual (1.0), May 1998. [ bib | http ]
[174] John Harrison. Formalizing Dijkstra. In Grundy and Newey [586], pages 171–188. [ bib | .html ]
[175] John Harrison. A HOL theory of Euclidean space. In Hurd and Melham [596], pages 114–129. [ bib | .html ]
[176] John Harrison. Floating-point verification using theorem proving. In Bernardo and Cimatti [575], pages 211–242. [ bib | .html ]
[177] John Harrison. Verifying nonlinear real formulas via sums of squares. In Schneider and Brandt [598], pages 102–118. [ bib | .html ]
[178] John Harrison. Formalizing basic complex analysis. In Matuszewski and Zalewska [602], pages 151–165. [ bib | .html ]
[179] John Harrison. Formalizing an analytic proof of the prime number theorem. Journal of Automated Reasoning (special issue), 43(3):243–261, October 2009. [ bib | http ]
We describe the computer formalization of a complex-analytic proof of the Prime Number Theorem (PNT), a classic result from number theory characterizing the asymptotic density of the primes. The formalization, conducted using the HOL Light theorem prover, proceeds from the most basic axioms for mathematics yet builds from that foundation to develop the necessary analytic machinery including Cauchy’s integral formula, so that we are able to formalize a direct, modern and elegant proof instead of the more involved ‘elementary’ Erdős-Selberg argument. As well as setting the work in context and describing the highlights of the formalization, we analyze the relationship between the formal proof and its informal counterpart and so attempt to derive some general lessons about the formalization of mathematics.
[180] Sergiu Hart, Micha Sharir, and Amir Pnueli. Termination of probabilistic concurrent programs. ACM Transactions on Programming Languages and Systems (TOPLAS), 5(3):356–380, July 1983. [ bib ]
[181] O. Hasan and S. Tahar. Formalization of the standard uniform random variable. Theoretical Computer Science, 2007. To appear. [ bib ]
[182] G. McC. Haworth. 6-man chess solved. ICGA Journal, 28(3):153, September 2005. [ bib ]
[183] G. McC. Haworth. Freezer analysis of KRP(a2)KbBP(a3). Private communication, 2006. [ bib ]
[184] Jifeng He, K. Seidel, and A. McIver. Probabilistic models for the guarded command language. Science of Computer Programming, 28(2–3):171–192, April 1997. [ bib ]
[185] E. A. Heinz. Endgame databases and efficient index schemes. ICCA Journal, 22(1):22–32, March 1999. [ bib | .ps.gz ]
Endgame databases have become an integral part of modern chess programs during the past few years. Since the early 1990s two different kinds of endgame databases have been publicly available, namely Edwards' so-called "tablebases" and Thompson's collection of 5-piece databases. Although Thompson's databases enjoy much wider international fame, most current chess programs use tablebases because they integrate far better with the rest of the search. For the benefit of all those enthusiasts who intend to incorporate endgame databases into their own chess programs, this article describes the index schemes of Edwards' tablebases and Thompson's databases in detail, explains their differences, and provides a comparative evaluation of both.

In addition we introduce new indexing methods that improve on the published state-of-the-art and shrink the respective index ranges even further (especially for large databases and endgames with Pawns). This reduces the overall space requirements of the resulting databases substantially. We also propose several solutions for the problem of potential en-passant captures in endgame databases with at least one Pawn per side. Shortly after the initial submission of our original text, Nalimov independently applied techniques similar to ours, and even more advanced index schemes, to his new tablebases which are now publicly available on the Internet, too. We briefly put his work into perspective as well.
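For orientation, the simplest possible index scheme of the kind these articles improve upon fits in a few lines (a naive sketch of our own, ignoring the symmetry and en-passant refinements the article is actually about):

    -- Treat the squares (numbered 0..63) of a 3-piece endgame such as
    -- KRK as digits base 64; real index schemes shrink this 64^3 range
    -- by exploiting board symmetries and excluding illegal positions.
    naiveIndex :: Int -> Int -> Int -> Int
    naiveIndex whiteKing whiteRook blackKing =
      (whiteKing * 64 + whiteRook) * 64 + blackKing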

[186] Jörg Heitkötter and David Beasley. The hitch-hiker's guide to evolutionary computation: A list of frequently asked questions (FAQ). USENET: comp.ai.genetic, 1998. [ bib | ftp ]
[187] Heber Herencia-Zapana, George Hagen, and Anthony Narkawicz. Formalizing probabilistic safety claims. In Bobaru et al. [564], pages 162–176. [ bib ]
[188] H. J. van den Herik, I. S. Herschberg, and N. Nakad. A six-men-endgame database: KRP(a2)KbBP(a3). ICCA Journal, 10(4):163–180, 1987. [ bib ]
[189] H. J. van den Herik, I. S. Herschberg, and N. Nakad. A reply to R. Sattler's remarks on the KRP(a2)-KbBP(a3) database. ICCA Journal, 11(2/3):88–91, 1988. [ bib ]
[190] Ted Herman. Probabilistic self-stabilization. Information Processing Letters, 35(2):63–67, June 1990. [ bib ]
[191] Thai Son Hoang, Z. Jin, K. Robinson, A. K. McIver, and C. C. Morgan. Probabilistic invariants for probabilistic machines. In Proceedings of the 3rd International Conference of B and Z Users, volume 2651 of Lecture Notes in Computer Science, pages 240–259. Springer, 2003. [ bib ]
[192] Tony Hoare. The verifying compiler. Sample submission for the Workshop on Grand Challenges for Computing Research, November 2002. [ bib | .pdf ]
[193] Peter V. Homeier and David F. Martin. A mechanically verified verification condition generator. The Computer Journal, 38(2):131–141, July 1995. [ bib | .pdf ]
[194] Peter V. Homeier. The HOL-Omega logic. In Berghofer et al. [601], pages 244–259. [ bib | .pdf ]
[195] David Hooper and Kenneth Whyld. The Oxford Companion to Chess. Oxford University Press, 2nd edition, September 1992. [ bib ]
[196] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 2nd edition, 2001. [ bib ]
[197] Gérard Huet and Derek Oppen. Equations and rewrite rules: A survey. Technical Report CSL-111, SRI International, January 1980. [ bib ]
[198] Brian Huffman. Formal verification of monad transformers. In Thiemann and Findler [553]. [ bib | .html ]
[199] J. Hughes. Why functional programming matters. The Computer Journal, 32(2):98–107, 1989. [ bib | .html ]
[200] Marieke Huisman. Reasoning about Java Programs in higher order logic with PVS and Isabelle. PhD thesis, University of Nijmegen, The Netherlands, February 2001. [ bib ]
[201] Joe Hurd. The Real Number Theories in hol98, November 1998. Part of the documentation for the hol98 theorem prover. [ bib ]
A description of the hol98 reals library, ported partly from the reals library in HOL90 and partly from the real library in hol-light.
[202] Joe Hurd. Integrating Gandalf and HOL. In Bertot et al. [587], pages 311–321. [ bib | http ]
Gandalf is a first-order resolution theorem-prover, optimized for speed and specializing in manipulations of large clauses. In this paper I describe GANDALF_TAC, a HOL tactic that proves goals by calling Gandalf and mirroring the resulting proofs in HOL. This call can occur over a network, and a Gandalf server may be set up servicing multiple HOL clients. In addition, the translation of the Gandalf proof into HOL fits in with the LCF model and guarantees logical consistency.
[203] Joe Hurd. Integrating Gandalf and HOL. Technical Report 461, University of Cambridge Computer Laboratory, May 1999. Second edition. [ bib | .html ]
[204] Joe Hurd. Congruence classes with logic variables. In Aagaard et al. [589], pages 87–101. [ bib ]
[205] Joe Hurd. Lightweight probability theory for verification. In Aagaard et al. [589], pages 103–113. [ bib ]
There are many algorithms that make use of probabilistic choice, but there is a lack of tools available to specify and verify their operation. The primary contribution of this paper is a lightweight modelling of such algorithms in higher-order logic, together with some key properties that enable verification. The theory is applied to a uniform random number generator and some basic properties are established. As a secondary contribution, all the theory developed has been mechanized in the hol98 theorem-prover.
[206] Joe Hurd. The Probability Theories in hol98, June 2000. Part of the documentation for the hol98 theorem prover. [ bib ]
A description of the hol98 probability library.
[207] Joe Hurd. Congruence classes with logic variables. Logic Journal of the IGPL, 9(1):59–75, January 2001. [ bib | http ]
We are improving equality reasoning in automatic theorem-provers, and congruence classes provide an efficient storage mechanism for terms, as well as the congruence closure decision procedure. We describe the technical steps involved in integrating logic variables with congruence classes, and present an algorithm that can be proved to find all matches between classes (modulo certain equalities). An application of this algorithm makes possible a percolation algorithm for undirected rewriting in minimal space; this is described and an implementation in hol98 is examined in some detail.
[208] Joe Hurd. Predicate subtyping with predicate sets. In Boulton and Jackson [590], pages 265–280. [ bib | http ]
We show how PVS-style predicate subtyping can be simulated in HOL using predicate sets, and explain how to perform subtype checking using this model. We illustrate some applications of this to specification and verification in HOL, and also demonstrate some limits of the approach. Finally we show preliminary results of a subtype checker that has been integrated into a contextual rewriter.
[209] Joe Hurd. Verification of the Miller-Rabin probabilistic primality test. In Boulton and Jackson [591], pages 223–238. [ bib ]
[210] Joe Hurd. Formal Verification of Probabilistic Algorithms. PhD thesis, University of Cambridge, 2002. [ bib | http ]
This thesis shows how probabilistic algorithms can be formally verified using a mechanical theorem prover.

We begin with an extensive foundational development of probability, creating a higher-order logic formalization of mathematical measure theory. This allows the definition of the probability space we use to model a random bit generator, which informally is a stream of coin-flips, or technically an infinite sequence of IID Bernoulli(1/2) random variables.

Probabilistic programs are modelled using the state-transformer monad familiar from functional programming, where the random bit generator is passed around in the computation. Functions remove random bits from the generator to perform their calculation, and then pass back the changed random bit generator with the result.
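
The state-transformer modelling can be rendered in a few lines of Haskell (an illustration only; the thesis formalizes the idea in higher-order logic, not Haskell):

    type Bits   = [Bool]              -- the infinite stream of coin-flips
    type Prob a = Bits -> (a, Bits)   -- consume some bits, pass on the rest

    -- Remove one random bit from the generator.
    coinFlip :: Prob Bool
    coinFlip (b : bs) = (b, bs)
    coinFlip []       = error "the generator is an infinite stream"

    -- Monadic plumbing: sequencing threads the changed generator through.
    unit :: a -> Prob a
    unit x s = (x, s)

    bind :: Prob a -> (a -> Prob b) -> Prob b
    bind f g s = let (x, s') = f s in g x s'

    -- Example: a uniformly distributed two-bit number.
    twoBits :: Prob Int
    twoBits = coinFlip `bind` \b1 ->
              coinFlip `bind` \b2 ->
              unit (2 * fromEnum b1 + fromEnum b2)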

Our probability space modelling the random bit generator allows us to give precise probabilistic specifications of such programs, and then verify them in the theorem prover.

We also develop technical support designed to expedite verification: probabilistic quantifiers; a compositional property subsuming measurability and independence; a probabilistic while loop together with a formal concept of termination with probability 1. We also introduce a technique for reducing properties of a probabilistic while loop to properties of programs that are guaranteed to terminate: these can then be established using induction and standard methods of program correctness.

We demonstrate the formal framework with some example probabilistic programs: sampling algorithms for four probability distributions; some optimal procedures for generating dice rolls from coin flips; the symmetric simple random walk. In addition, we verify the Miller-Rabin primality test, a well-known and commercially used probabilistic algorithm. Our fundamental perspective allows us to define a version with strong properties, which we can execute in the logic to prove compositeness of numbers.

[211] Joe Hurd. A formal approach to probabilistic termination. In Carreño et al. [592], pages 230–245. [ bib | http ]
We present a probabilistic version of the while loop, in the context of our mechanized framework for verifying probabilistic programs. The while loop preserves useful program properties of measurability and independence, provided a certain condition is met. This condition is naturally interpreted as “from every starting state, the while loop will terminate with probability 1”, and we compare it to other probabilistic termination conditions in the literature. For illustration, we verify in HOL two example probabilistic algorithms that necessarily rely on probabilistic termination: an algorithm to sample the Bernoulli(p) distribution using coin-flips; and the symmetric simple random walk.
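The Bernoulli(p) example relies on probabilistic termination in an essential way: the sampler compares coin-flips against the binary expansion of p and stops at the first disagreement, an event of probability 1 that is nonetheless not guaranteed. A sketch of ours, not the paper's HOL definition:

    -- Sample Bernoulli(p), 0 <= p <= 1, from a stream of fair coin-flips:
    -- walk down the binary expansion of p until a coin-flip disagrees with
    -- it; the first disagreement decides the answer.  The loop terminates
    -- with probability 1, but not on every possible coin sequence.
    bernoulli :: Rational -> [Bool] -> Bool
    bernoulli p (c : cs)
      | c /= bit  = bit               -- disagreement: p's bit is the answer
      | otherwise = bernoulli p' cs   -- agreement: recurse on the rest of p
      where
        bit = 2 * p >= 1              -- leading bit of p's binary expansion
        p'  = if bit then 2 * p - 1 else 2 * p
    bernoulli _ [] = error "the coin stream is infinite"
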
[212] Joe Hurd. HOL theorem prover case study: Verifying probabilistic programs. In Norman et al. [509], pages 83–92. [ bib ]
The focus of this paper is the question: “How suited is the HOL theorem prover to the verification of probabilistic programs?” To answer this, we give a brief introduction to our model of probabilistic programs in HOL, and then compare this approach to other formal tools that have been used to verify probabilistic programs: the Prism model checker, the Coq theorem prover, and the B method.
[213] Joe Hurd. Fast normalization in the HOL theorem prover. In Walsh [507]. An extended abstract. [ bib | http ]
[214] Joe Hurd. An LCF-style interface between HOL and first-order logic. In Voronkov [517], pages 134–138. [ bib | http ]
[215] Joe Hurd. Verification of the Miller-Rabin probabilistic primality test. Journal of Logic and Algebraic Programming, 50(1–2):3–21, May–August 2003. Special issue on Probabilistic Techniques for the Design and Analysis of Systems. [ bib | http ]
Using the HOL theorem prover, we apply our formalization of probability theory to specify and verify the Miller-Rabin probabilistic primality test. The version of the test commonly found in algorithm textbooks implicitly accepts probabilistic termination, but our own verified implementation satisfies the stronger property of guaranteed termination. Completing the proof of correctness requires a significant body of group theory and computational number theory to be formalized in the theorem prover. Once verified, the primality test can either be executed in the logic (using rewriting) and used to prove the compositeness of numbers, or manually extracted to Standard ML and used to find highly probable primes.
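For readers unfamiliar with the algorithm, the textbook witness test at its core looks as follows (a standard presentation, not the verified HOL implementation the paper describes):

    -- Modular exponentiation by repeated squaring.
    powMod :: Integer -> Integer -> Integer -> Integer
    powMod _ 0 m = 1 `mod` m
    powMod b e m
      | even e    = sq
      | otherwise = (b `mod` m) * sq `mod` m
      where
        h  = powMod b (e `div` 2) m
        sq = h * h `mod` m

    -- 'witness n a' is True when 1 < a < n proves the odd number n > 2
    -- composite: write n - 1 = 2^s * d with d odd, then n is composite
    -- unless a^d = 1 or some a^(2^i * d) = n - 1 (mod n) with i < s.
    witness :: Integer -> Integer -> Bool
    witness n a = x /= 1 && all (/= (n - 1)) (take (fromIntegral s) squares)
      where
        (s, d) = factorTwos (0, n - 1)
        factorTwos (k, m)
          | even m    = factorTwos (k + 1, m `div` 2)
          | otherwise = (k, m)
        x       = powMod a d n
        squares = iterate (\y -> y * y `mod` n) x

If n is composite, at least three quarters of the candidate values of a are witnesses, so repeating the test with independently chosen a drives the error probability below any desired bound.
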
[216] Joe Hurd. Using inequalities as term ordering constraints. Technical Report 567, University of Cambridge Computer Laboratory, June 2003. [ bib | http ]
In this paper we show how linear inequalities can be used to approximate Knuth-Bendix term ordering constraints, and how term operations such as substitution can be carried out on systems of inequalities. Using this representation allows an off-the-shelf linear arithmetic decision procedure to check the satisfiability of a set of ordering constraints. We present a formal description of a resolution calculus where systems of inequalities are used to constrain clauses, and implement this using the Omega test as a satisfiability checker. We give the results of an experiment over problems in the TPTP archive, comparing the practical performance of the resolution calculus with and without inherited inequality constraints.
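In outline (our reading, with simplified notation): if each variable x is assigned an unknown weight W_x, at least the minimum symbol weight, standing for whatever term x is eventually instantiated with, then the weight of a term becomes linear in the W_x,

    $$ w(s) \;=\; w_0(s) + \sum_{x} n_x(s)\, W_x , $$

where n_x(s) counts the occurrences of x in s and w_0(s) sums the fixed function-symbol weights, and the weight comparison underlying the constraint s > t relaxes to the linear inequality

    $$ \sum_{x} \big( n_x(s) - n_x(t) \big)\, W_x \;\ge\; w_0(t) - w_0(s), \qquad W_x \ge \mu . $$
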
[217] Joe Hurd. Formal verification of probabilistic algorithms. Technical Report 566, University of Cambridge Computer Laboratory, May 2003. [ bib | .html ]
[218] Joe Hurd. First-order proof tactics in higher-order logic theorem provers. In Archer et al. [581], pages 56–68. [ bib | http ]
In this paper we evaluate the effectiveness of first-order proof procedures when used as tactics for proving subgoals in a higher-order logic interactive theorem prover. We first motivate why such first-order proof tactics are useful, and then describe the core integrating technology: an `LCF-style' logical kernel for clausal first-order logic. This allows the choice of different logical mappings between higher-order logic and first-order logic to be used depending on the subgoal, and also enables several different first-order proof procedures to cooperate on constructing the proof. This work was carried out using the HOL4 theorem prover; we comment on the ease of transferring the technology to other higher-order logic theorem provers.
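The flavour of such a kernel can be conveyed by a toy propositional version (a sketch with hypothetical names; the kernel the paper describes is written in SML inside HOL4 and handles full clausal first-order logic):

    module Kernel (Lit (..), Thm, clause, axiom, resolve) where

    import Data.List (delete, nub, union)

    data Lit = Pos String | Neg String deriving (Eq, Show)

    -- 'Thm' is abstract outside this module: the only ways to build one
    -- are the exported rules below, so every Thm value is a proved clause.
    newtype Thm = Thm [Lit] deriving Show

    clause :: Thm -> [Lit]
    clause (Thm c) = c

    negateLit :: Lit -> Lit
    negateLit (Pos p) = Neg p
    negateLit (Neg p) = Pos p

    -- Rule 1: assume an input clause of the problem.
    axiom :: [Lit] -> Thm
    axiom = Thm . nub

    -- Rule 2: binary resolution of two proved clauses on the literal l.
    resolve :: Lit -> Thm -> Thm -> Thm
    resolve l (Thm c1) (Thm c2) =
      Thm (delete l c1 `union` delete (negateLit l) c2)
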
[219] Joe Hurd, Annabelle McIver, and Carroll Morgan. Probabilistic guarded commands mechanized in HOL. In Cerone and Pierro [571], pages 95–111. [ bib ]
[220] Joe Hurd. Compiling HOL4 to native code. In Slind [595]. [ bib | http ]
We present a framework for extracting and compiling proof tools and theories from a higher order logic theorem prover, so that the theorem prover can be used as a platform for supporting reasoning in other applications. The framework is demonstrated on a small application that uses HOL4 to find proofs of arbitrary first order logic formulas.
[221] Joe Hurd. Formal verification of chess endgame databases. In Hurd et al. [597], pages 85–100. [ bib | http ]
Chess endgame databases store the number of moves required to force checkmate for all winning positions: with such a database it is possible to play perfect chess. This paper describes a method to construct endgame databases that are formally verified to logically follow from the laws of chess. The method employs a theorem prover to model the laws of chess and ensure that the construction is correct, and also a BDD engine to compactly represent and calculate with large sets of chess positions. An implementation using the HOL4 theorem prover and the BuDDy BDD engine is able to solve all four piece pawnless endgames.
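The computation at the core of such a construction is the standard backward induction over sets of positions, sketched here in our own notation (the paper performs it with BDD operations inside HOL4):

    $$ L_0 = \{\text{loser to move, checkmated}\}, \qquad W_{n+1} = \mathrm{pre}_\exists(L_n), \qquad L_{n+1} = L_0 \cup \mathrm{pre}_\forall(W_{n+1}), $$

where pre∃(S) holds the positions with some legal move into S and pre∀(S) holds those whose every legal move lands in S; each set is represented as a BDD.
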
[222] Joe Hurd. Formalizing elliptic curve cryptography in higher order logic. Available from the author's website, October 2005. [ bib | http ]
Formalizing a mathematical theory using a theorem prover is a necessary first step to proving the correctness of programs that refer to that theory in their specification. This report demonstrates how the mathematical theory of elliptic curves and their application to cryptography can be formalized in higher order logic. This formal development is mechanized using the HOL4 theorem prover, resulting in a collection of formally verified functional programs (expressed as higher order logic functions) that correctly implement the primitive operations of elliptic curve cryptography.
[223] Joe Hurd, Annabelle McIver, and Carroll Morgan. Probabilistic guarded commands mechanized in HOL. Theoretical Computer Science, 346:96–112, November 2005. [ bib | http ]
The probabilistic guarded-command language pGCL contains both demonic and probabilistic nondeterminism, which makes it suitable for reasoning about distributed random algorithms. Proofs are based on weakest precondition semantics, using an underlying logic of real- (rather than Boolean-) valued functions.

We present a mechanization of the quantitative logic for pGCL using the HOL4 theorem prover, including a proof that all pGCL commands satisfy the new condition of sublinearity, the quantitative generalization of conjunctivity for standard GCL.

The mechanized theory also supports the creation of an automatic proof tool which takes as input an annotated pGCL program and its partial correctness specification, and derives from that a sufficient set of verification conditions. This is employed to verify the partial correctness of the probabilistic voting stage in Rabin's mutual-exclusion algorithm.

[224] Joe Hurd. First order proof for higher order theorem provers (abstract). In Benzmueller et al. [526], pages 1–3. [ bib ]
[225] Joe Hurd. System description: The Metis proof tactic. In Benzmueller et al. [526], pages 103–104. [ bib ]
[226] Joe Hurd, Mike Gordon, and Anthony Fox. Formalized elliptic curve cryptography. In HCSS-2006 [540]. [ bib | http ]
Formalizing a mathematical theory is a necessary first step to proving the correctness of programs that refer to that theory in their specification. This paper demonstrates how the mathematical theory of elliptic curves and their application to cryptography can be formalized in higher order logic. This formal development is mechanized using the HOL4 theorem prover, resulting in a collection of higher order logic functions that correctly implement the primitive operations of elliptic curve cryptography.
[227] Joe Hurd. Book review: Rippling: Meta-level guidance for mathematical reasoning by A. Bundy, D. Basin, D. Hutter and A. Ireland. Bulletin of Symbolic Logic, 12(3):498–499, 2006. [ bib | http ]
[228] Joe Hurd. Embedding Cryptol in higher order logic. Available from the author's website, March 2007. [ bib | http ]
This report surveys existing approaches to embedding Cryptol programs in higher order logic, and presents a new approach that aims to simplify as much as possible reasoning about the embedded programs.
[229] Joe Hurd. Proof pearl: The termination analysis of TERMINATOR. In Schneider and Brandt [598], pages 151–156. [ bib | http ]
TERMINATOR is a static analysis tool developed by Microsoft Research for proving termination of Windows device drivers written in C. This proof pearl describes a formalization in higher order logic of the program analysis employed by TERMINATOR, and verifies that if the analysis succeeds then program termination logically follows.
[230] Joe Hurd, Anthony Fox, Mike Gordon, and Konrad Slind. ARM verification (abstract). In HCSS-2007 [541]. [ bib | http ]
[231] Joe Hurd. OpenTheory: Package management for higher order logic theories. In Reis and Théry [566], pages 31–37. [ bib | http ]
Interactive theorem proving has grown from toy examples to major projects formalizing mathematics and verifying software, and there is now a critical need for theory engineering techniques to support these efforts. This paper introduces the OpenTheory project, which aims to provide an effective package management system for logical theories. The OpenTheory article format allows higher order logic theories to be exported from one theorem prover, compressed by a stand-alone tool, and imported into a different theorem prover. Articles naturally support theory interpretations, which is the mechanism by which theories can be cleanly transferred from one theorem prover context to another, and which also leads to more efficient developments of standard theories.
[232] Joe Hurd, Magnus Carlsson, Sigbjorn Finne, Brett Letner, Joel Stanley, and Peter White. Policy DSL: High-level specifications of information flows for security policies. In HCSS-2009 [542]. [ bib | http ]
SELinux security policies are powerful tools to implement properties such as process confinement and least privilege. They can also be used to support MLS policies on SELinux. However, the policies are very complex, and creating them is a difficult and error-prone process. Furthermore, it is not possible to state explicit constraints on an SELinux policy such as “information flowing to the network must be encrypted”.

We present two related Domain Specific Languages (DSL) to make it much easier to specify SELinux security policies. The first DSL is called Lobster. Lobster allows the user to state an information flow policy at a very high level of abstraction, and then refine the policy into lower and lower levels until it can be translated by the Lobster compiler into an SELinux policy. The second DSL is called Symbion. Symbion allows the user to state policy constraints such as “information flowing to the network must be encrypted”. Symbion and Lobster will then interface with tools that can check that the policy satisfies the constraints.

We also present the analysis tool Shrimp, with which we are able to analyze and find errors in the SELinux Reference Policy. Shrimp is also the basis for our ongoing effort in reverse translating the Reference Policy into Lobster.

Since Lobster and Symbion represent information flows and constraints on information flows, they are more broadly applicable than to just SELinux. We will point to some of the directions we wish to take Lobster and Symbion beyond SELinux.

[233] Joe Hurd and Guy Haworth. Data assurance in opaque computations. In Van den Herik and Spronck [501], pages 221–231. [ bib | http ]
The chess endgame is increasingly being seen through the lens of, and therefore effectively defined by, a data `model' of itself. It is vital that such models are clearly faithful to the reality they purport to represent. This paper examines that issue and systems engineering responses to it, using the chess endgame as the exemplar scenario. A structured survey has been carried out of the intrinsic challenges and complexity of creating endgame data by reviewing the past pattern of errors during work in progress, surfacing in publications and occurring after the data was generated. Specific measures are proposed to counter observed classes of error-risk, including a preliminary survey of techniques for using state-of-the-art verification tools to generate EGTs that are correct by construction. The approach may be applied generically beyond the game domain.
[234] Joe Hurd. Composable packages for higher order logic theories. In Aderhold et al. [605]. [ bib | http ]
Interactive theorem proving is tackling ever larger formalization and verification projects, and there is a critical need for theory engineering techniques to support these efforts. One such technique is effective package management, which has the potential to simplify the development of logical theories by precisely checking dependencies and promoting re-use. This paper introduces a domain-specific language for defining composable packages of higher order logic theories, which is designed to naturally handle the complex dependency structures that often arise in theory development. The package composition language functions as a module system for theories, and the paper presents a well-defined semantics for the supported operations. Preliminary tests of the package language and its toolset have been made by packaging the theories distributed with the HOL Light theorem prover. This experience is described, leading to some initial theory engineering discussion on the ideal properties of a reusable theory.
[235] Joe Hurd. Evaluation opportunities in mechanized theories (invited talk abstract). In McGuinness et al. [525]. [ bib | http ]
[236] Joe Hurd. OpenTheory Article Format, August 2010. Available for download at http://www.gilith.com/opentheory/article.html. [ bib | .html ]
[237] Joe Hurd. The OpenTheory standard theory library. In Bobaru et al. [564], pages 177–191. [ bib | http ]
Interactive theorem proving is tackling ever larger formalization and verification projects, and there is a critical need for theory engineering techniques to support these efforts. One such technique is cross-prover package management, which has the potential to simplify the development of logical theories and effectively share theories between different theorem prover implementations. The OpenTheory project has developed standards for packaging theories of the higher order logic implemented by the HOL family of theorem provers. What is currently missing is a standard theory library that can serve as a published contract of interoperability and contain proofs of basic properties that would otherwise appear in many theory packages. The core contribution of this paper is the presentation of a standard theory library for higher order logic represented as an OpenTheory package. We identify the core theory set of the HOL family of theorem provers, and describe the process of instrumenting the HOL Light theorem prover to extract a standardized version of its core theory development. We profile the axioms and theorems of our standard theory library and investigate the performance cost of separating the standard theory library into coherent hierarchical theory packages.
[238] Joe Hurd. FUSE: Inter-application security for Android (abstract). In Launchbury and Richards [543], pages 53–54. [ bib | http ]
[239] Michael Huth. The interval domain: A matchmaker for aCTL and aPCTL. In Rance Cleaveland, Michael Mislove, and Philip Mulry, editors, US - Brazil Joint Workshops on the Formal Foundations of Software Systems, volume 14 of Electronic Notes in Theoretical Computer Science. Elsevier, 2000. [ bib | .html ]
[240] Graham Hutton and Erik Meijer. Monadic parser combinators. Technical Report NOTTCS-TR-96-4, Department of Computer Science, University of Nottingham, 1996. [ bib | http ]
In functional programming, a popular approach to building recursive descent parsers is to model parsers as functions, and to define higher-order functions (or combinators) that implement grammar constructions such as sequencing, choice, and repetition. Such parsers form an instance of a monad, an algebraic structure from mathematics that has proved useful for addressing a number of computational problems. The purpose of this report is to provide a step-by-step tutorial on the monadic approach to building functional parsers, and to explain some of the benefits that result from exploiting monads. No prior knowledge of parser combinators or of monads is assumed. Indeed, this report can also be viewed as a first introduction to the use of monads in programming.
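The core of the approach fits in a short Haskell sketch (the standard presentation of a list-of-successes parser monad, not necessarily the report's exact definitions):

    -- The parser type: a function from input to a list of (result,
    -- remaining input) pairs; the empty list signals failure.
    newtype Parser a = Parser { runParser :: String -> [(a, String)] }

    -- Consume one character.
    item :: Parser Char
    item = Parser (\s -> case s of
                           []     -> []
                           c : cs -> [(c, cs)])

    instance Functor Parser where
      fmap f p = Parser (\s -> [ (f x, s') | (x, s') <- runParser p s ])

    instance Applicative Parser where
      pure x    = Parser (\s -> [(x, s)])
      pf <*> px = Parser (\s -> [ (f x, s'') | (f, s')  <- runParser pf s
                                             , (x, s'') <- runParser px s' ])

    instance Monad Parser where
      p >>= f = Parser (\s -> concat [ runParser (f x) s'
                                     | (x, s') <- runParser p s ])

    -- Choice: try both alternatives.
    orElse :: Parser a -> Parser a -> Parser a
    orElse p q = Parser (\s -> runParser p s ++ runParser q s)

    -- A character satisfying a predicate, and repetition: two of the
    -- grammar combinators mentioned in the abstract.
    sat :: (Char -> Bool) -> Parser Char
    sat ok = item >>= \c -> if ok c then pure c else Parser (const [])

    many0 :: Parser a -> Parser [a]
    many0 p = (p >>= \x -> fmap (x :) (many0 p)) `orElse` pure []

Sequencing is ordinary monadic bind, so larger grammars are written with do-notation; this is the sense in which parsers "form an instance of a monad".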
[241] Paul B. Jackson. Expressive typing and abstract theories in Nuprl and PVS (tutorial). In von Wright et al. [583]. [ bib | .ps.gz ]
This 90-minute tutorial covered two of the more novel aspects of the Nuprl and PVS theorem provers:

Expressive type systems. These make specifications more concise and accurate, but good automation is needed to solve proof obligations that arise during type-checking.

Support for abstract theories. These have uses in supporting mechanical proof developments, and in the domains of program specification refinement and mathematics. Key issues are whether types can be formed for classes of abstract theories and the need for automatic inference of theory interpretations.

The tutorial assumed some familiarity with HOL-like theorem provers, but did not assume familiarity with either PVS or Nuprl in particular. The issue of Nuprl's constructivity was orthogonal to the issues discussed and was therefore glossed over.

[242] Mateja Jamnik, Alan Bundy, and Ian Green. On automating diagrammatic proofs of arithmetic arguments. Journal of Logic, Language and Information, 8(3):297–321, 1999. Also available as Department of Artificial Intelligence Research Paper No. 910. [ bib | .ps ]
[243] Claus Skaanning Jensen, Augustine Kong, and Uffe Kjaerulff. Blocking Gibbs sampling in very large probabilistic expert systems. Technical Report R-93-2031, Aalborg University, October 1993. [ bib ]
[244] D. S. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, chapter 9, pages 67–161. Elsevier and The MIT Press (co-publishers), 1990. [ bib ]
"Johnson presents an extensive survey of computational complexity classes. Of particular interest here is his discussion of randomized, probabilistic, and stochastic complexity classes."
[245] Peter T. Johnstone. Notes on logic and set theory. Cambridge University Press, 1987. [ bib ]
[246] Claire Jones. Probabilistic Non-Determinism. PhD thesis, University of Edinburgh, 1990. [ bib | http ]
[247] Michael D. Jones. Restricted types for HOL. In Gunter [585]. [ bib | .html ]
[248] F. Kammüller and L. C. Paulson. A formal proof of Sylow's first theorem – an experiment in abstract algebra with Isabelle HOL. Journal of Automated Reasoning, 23(3-4):235–264, 1999. [ bib ]
[249] F. Kammüller. Modular reasoning in Isabelle. In McAllester [516]. [ bib ]
[250] David R. Karger. Global min-cuts in RNC, and other ramifications of a simple min-cut algorithm. In Proceedings of the 4th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '93), pages 21–30, Austin, TX, USA, January 1993. SIAM. [ bib ]
[251] R. M. Karp. The probabilistic analysis of some combinatorial search algorithms. In Traub [453], pages 1–20. [ bib ]
[252] Matt Kaufmann and J Strother Moore. An industrial strength theorem prover for a logic based on Common Lisp. IEEE Transactions on Software Engineering, 23(4):203–213, April 1997. [ bib | http ]
[253] Matt Kaufmann and J Strother Moore. A precise description of the ACL2 logic, 1998. [ bib | .ps.Z ]
[254] Matt Kaufmann, Panagiotis Manolios, and J Strother Moore. Computer-Aided Reasoning: An Approach. Kluwer Academic Publishers, June 2000. [ bib ]
[255] Matt Kaufmann, Panagiotis Manolios, and J Strother Moore, editors. Computer-Aided Reasoning: ACL2 Case Studies. Kluwer Academic Publishers, June 2000. [ bib ]
[256] F. P. Kelly. Notes on effective bandwidths. In F.P. Kelly, S. Zachary, and I.B. Ziedins, editors, Stochastic Networks: Theory and Applications, number 4 in Royal Statistical Society Lecture Notes Series, pages 141–168. Oxford University Press, 1996. [ bib ]
[257] H. J. Keisler. Probability quantifiers. In J. Barwise and S. Feferman, editors, Model-Theoretic Logics, pages 509–556. Springer, New York, 1985. [ bib ]
[258] D. J. King and R. D. Arthan. Development of practical verification tools. ICL Systems Journal, 11(1), May 1996. [ bib | .html ]
Current best practice for high-assurance security and safety-critical software often requires the use of machine-checked rigorous mathematical techniques. Unfortunately, while there have been some notable successes, the provision of software tools that adequately support such techniques is a hard research problem, albeit one that is slowly being solved. This paper describes some contributions to this area of research and development in ICL in recent years. This research builds both on ICL’s own experiences in building and using tools to support security applications and on work carried out by the Defence Research Agency into notations and methods for program verification.
[259] Steve King, Jonathan Hammond, Rod Chapman, and Andy Pryor. Is proof more cost-effective than testing? IEEE Transactions on Software Engineering, 26(8):675–686, August 2000. [ bib ]
This paper describes the use of formal development methods on an industrial safety-critical application. The Z notation was used for documenting the system specification and part of the design, and the SPARK subset of Ada was used for coding. However, perhaps the most distinctive nature of the project lies in the amount of proof that was carried out: proofs were carried out both at the Z level (approximately 150 proofs in 500 pages) and at the SPARK code level (approximately 9,000 verification conditions generated and discharged). The project was carried out under UK Interim Defence Standards 00-55 and 00-56, which require the use of formal methods on safety-critical applications. It is believed to be the first to be completed against the rigorous demands of the 1991 version of these standards. The paper includes comparisons of proof with the various types of testing employed, in terms of their efficiency at finding faults. The most striking result is that the Z proof appears to be substantially more efficient at finding faults than the most efficient testing phase. Given the importance of early fault detection, we believe this helps to show the significant benefit and practicality of large-scale proof on projects of this kind.
[260] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. seL4: Formal verification of an OS kernel. In Matthews and Anderson [578], pages 207–220. [ bib | http ]
Complete formal verification is the only way to guarantee that a system is free of programming errors.

We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, and hardware, and we used a unique design approach that fuses formal and operating systems techniques. To our knowledge, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. Functional correctness means here that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as the kernel will never crash, and it will never perform an unsafe operation. It also proves much more: we can predict precisely how the kernel will behave in every possible situation.

seL4, a third-generation microkernel of L4 provenance, comprises 8,700 lines of C code and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels.

[261] D. E. Knuth and P. B. Bendix. Simple word problems in universal algebra. In J. Leech, editor, Computational problems in abstract algebra, pages 263–297. Pergamon Press, Elmsford, N.Y., 1970. [ bib ]
[262] Donald E. Knuth and Andrew C. Yao. The complexity of nonuniform random number generation. In Traub [453]. [ bib ]
[263] Donald E. Knuth. The Art of Computer Programming: Seminumerical Algorithms. Addison-Wesley, 1997. Third edition. [ bib ]
[264] Neal Koblitz. A Course in Number Theory and Cryptography. Number 114 in Graduate Texts in Mathematics. Springer, 1987. [ bib ]
[265] Andrei N. Kolmogorov. Foundations of the Theory of Probability. Chelsea, New York, 1950. [ bib ]
[266] E. A. Komissarchik and A. L. Futer. Ob analize ferzevogo endshpilia pri pomoshchi EVM [On the analysis of the queen endgame with the aid of a computer]. Problemy Kybernet, 29:211–220, 1974. [ bib ]
[267] Konstantin Korovin and Andrei Voronkov. Knuth-Bendix constraint solving is NP-complete. In Orejas et al. [550], pages 979–992. [ bib ]
[268] Dexter Kozen. Semantics of probabilistic programs. In 20th Annual Symposium on Foundations of Computer Science, pages 101–114, Long Beach, CA, USA, October 1979. IEEE Computer Society Press. [ bib ]
[269] Dexter Kozen. A probabilistic PDL. In Proceedings of the 15th ACM Symposium on Theory of Computing, 1983. [ bib ]
[270] Dexter Kozen. Efficient code certification. Technical Report 98-1661, Cornell University, January 1998. [ bib ]
[271] Jesper Torp Kristensen. Generation and compression of endgame tables in chess with fast random access using OBDDs. Master's thesis, University of Aarhus, Department of Computer Science, February 2005. [ bib | http ]
[272] R. Kumar, T. Kropf, and K. Schneider. Integrating a first-order automatic prover in the HOL environment. In Archer et al. [545], pages 170–176. [ bib ]
[273] Ramana Kumar and Joe Hurd. Standalone tactics using OpenTheory. In Beringer and Felty [556], pages 405–411. [ bib | http ]
Proof tools in interactive theorem provers are usually developed within and tied to a specific system, which leads to a duplication of effort to make the functionality available in different systems. Many verification projects would benefit from access to proof tools developed in other systems. Using OpenTheory as a language for communicating between systems, we show how to turn a proof tool implemented for one system into a standalone tactic available to many systems via the internet. This enables, for example, LCF-style proof reconstruction efforts to be shared by users of different interactive theorem provers and removes the need for each user to install the external tool being integrated.
[274] Ramana Kumar. Challenges in using OpenTheory to transport Harrison's HOL model from HOL Light to HOL4. In Blanchette and Urban [570], pages 110–116. [ bib | http ]
OpenTheory is being used for the first time (in work to be described at ITP 2013) as a tool in a larger project, as opposed to in an example demonstrating OpenTheory's capability. The tool works, demonstrating its viability. But it does not work completely smoothly, because the use case is somewhat at odds with OpenTheory's primary design goals. In this extended abstract, we explore the tensions between the goals that OpenTheory-like systems might have, and question the relative importance of various kinds of use. My hope is that describing issues arising from work in progress will stimulate fruitful discussion relevant to the development of proof exchange systems.
[275] Eyal Kushilevitz and Michael O. Rabin. Randomized mutual exclusion algorithms revisited. In Maurice Herlihy, editor, Proceedings of the 11th Annual Symposium on Principles of Distributed Computing, pages 275–283, Vancouver, BC, Canada, August 1992. ACM Press. [ bib ]
[276] Marta Kwiatkowska, Gethin Norman, Roberto Segala, and Jeremy Sproston. Automatic verification of real-time systems with discrete probability distributions. In Katoen [508], pages 75–95. [ bib | .ps.gz ]
We consider the timed automata model of [3], which allows the analysis of real-time systems expressed in terms of quantitative timing constraints. Traditional approaches to real-time system description express the model purely in terms of nondeterminism; however, we may wish to express the likelihood of the system making certain transitions. In this paper, we present a model for real-time systems augmented with discrete probability distributions. Furthermore, using the algorithm of [5] with fairness, we develop a model checking method for such models against temporal logic properties which can refer both to timing properties and probabilities, such as, “with probability 0.6 or greater, the clock x remains below 5 until clock y exceeds 2”.
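In the probabilistic timed temporal logic the paper works with, the quoted example property would be written roughly as (our transcription):

    $$ \mathrm{P}_{\ge 0.6}\big[\, (x < 5) \;\mathcal{U}\; (y > 2) \,\big] $$
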
[277] M. Kwiatkowska, G. Norman, and D. Parker. Prism: Probabilistic symbolic model checker. In Proceedings of PAPM/PROBMIV 2001 Tools Session, September 2001. [ bib | .ps.gz ]
[278] B. A. LaMacchia and A. M. Odlyzko. Computation of discrete logarithms in prime fields. Designs, Codes, and Cryptography, 1(1):46–62, May 1991. [ bib | .html ]
The presumed difficulty of computing discrete logarithms in finite fields is the basis of several popular public key cryptosystems. The secure identification option of the Sun Network File System, for example, uses discrete logarithms in a field GF(p) with p a prime of 192 bits. This paper describes an implementation of a discrete logarithm algorithm which shows that primes of under 200 bits, such as that in the Sun system, are very insecure. Some enhancements to this system are suggested.
[279] Leslie Lamport. How to write a proof, February 1993. [ bib | http ]
A method of writing proofs is proposed that makes it much harder to prove things that are not true. The method, based on hierarchical structuring, is simple and practical.
[280] Leslie Lamport. LaTeX: A document preparation system. Addison-Wesley, 2nd edition, 1994. [ bib ]
[281] Leslie Lamport and Lawrence C. Paulson. Should your specification language be typed? ACM Transactions on Programming Languages and Systems, 21(3):502–526, May 1999. [ bib | http ]
Most specification languages have a type system. Type systems are hard to get right, and getting them wrong can lead to inconsistencies. Set theory can serve as the basis for a specification language without types. This possibility, which has been widely overlooked, offers many advantages. Untyped set theory is simple and is more flexible than any simple typed formalism. Polymorphism, overloading, and subtyping can make a type system more powerful, but at the cost of increased complexity, and such refinements can never attain the flexibility of having no types at all. Typed formalisms have advantages too, stemming from the power of mechanical type checking. While types serve little purpose in hand proofs, they do help with mechanized proofs. In the absence of verification, type checking can catch errors in specifications. It may be possible to have the best of both worlds by adding typing annotations to an untyped specification language. We consider only specification languages, not programming languages.
[282] Bruce M. Landman and Aaron Robertson. Ramsey Theory on the Integers. American Mathematical Society, February 2004. [ bib ]
[283] John Launchbury and Simon L. Peyton Jones. Lazy functional state threads. In SIGPLAN Symposium on Programming Language Design and Implementation (PLDI'94), Orlando, pages 24–35, June 1994. [ bib ]
[284] Jeff Lawson. Operational code authentication, 1999. [ bib | .html ]
[285] Meir M. Lehman. Programs, life cycles, and laws of software evolution. Proceedings of the IEEE, 68(9):1060–1076, 1980. [ bib | .pdf ]
[286] Douglas B. Lenat. CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33–38, 1995. [ bib | .html ]
[287] H. W. Lenstra. Factoring integers with elliptic curves. Ann. Math., 126:649–673, 1987. [ bib ]
[288] Xavier Leroy. Formal certification of a compiler back-end or: programming a compiler with a proof assistant. In Morrisett and Jones [569], pages 42–54. [ bib | .pdf ]
[289] Joe Leslie-Hurd and Guy Haworth. Computer theorem proving and HoTT. ICGA Journal, 36(2):100–103, June 2013. [ bib | http ]
Theorem-proving is a one-player game. The history of computer programs being the players goes back to 1956 and the `LT' Logic Theory Machine of Newell, Shaw and Simon. In game-playing terms, the `initial position' is the core set of axioms chosen for the particular logic and the `moves' are the rules of inference. Now, the Univalent Foundations Program at IAS Princeton and the resulting `HoTT' book on Homotopy Type Theory have demonstrated the success of a new kind of experimental mathematics using computer theorem proving.
[290] Joe Leslie-Hurd. Maintaining verified software. In Shan [539], pages 71–80. [ bib | http ]
Maintaining software in the face of evolving dependencies is a challenging problem, and in addition to good release practices there is a need for automatic dependency analysis tools to avoid errors creeping in. Verified software reveals more semantic information in the form of mechanized proofs of functional specifications, and this can be used for dependency analysis. In this paper we present a scheme for automatic dependency analysis of verified software, which for each program checks that the collection of installed libraries is sufficient to guarantee its functional correctness. We illustrate the scheme with a case study of Haskell packages verified in higher order logic. The dependency analysis reduces the burden of maintaining verified Haskell packages by automatically computing version ranges for the packages they depend on, such that any combination provides the functionality required for correct operation.
[291] Clarence Irving Lewis. A Survey of Symbolic Logic. University of California Press, Berkeley, 1918. Reprint of Chapters I–IV by Dover Publications, 1960, New York. [ bib ]
[292] J. R. Lewis and B. Martin. Cryptol: High assurance, retargetable crypto development and validation. In MILCOM-2003 [561], pages 820–825. [ bib | http ]
As cryptography becomes more vital to the infrastructure of computing systems, it becomes increasingly important to be able to rapidly and correctly produce new implementations of cryptographic algorithms. To address these challenges, we introduce a new, formal methods-based approach to the specification and implementation of cryptography; we present a number of scenarios of use, give an overview of the language, and present part of a specification of the Advanced Encryption Standard.
[293] Guodong Li and Konrad Slind. An embedding of Cryptol in HOL-4. Unpublished draft, 2005. [ bib ]
[294] Donald W. Loveland. Mechanical theorem proving by model elimination. Journal of the ACM, 15(2):236–251, April 1968. [ bib ]
[295] Christopher Lynch and Barbara Morawska. Goal-directed E-unification. In RTA-2001 [572], pages 231–245. [ bib | .ps ]
We give a general goal directed method for solving the E-unification problem. Our inference system is a generalization of the inference rules for Syntactic Theories, except that our inference system is proved complete for any equational theory. We also show how to easily modify our inference system into a more restricted inference system for Syntactic Theories, and show that our completeness techniques prove completeness there also.
[297] Donald MacKenzie. Mechanizing proof: computing, risk, and trust. MIT Press, 2001. [ bib ]
[298] David MacQueen. Modules for Standard ML. In Boyer et al. [557], pages 198–207. [ bib | .pdf ]
The functional programming language ML has been undergoing a thorough redesign during the past year, and the module facility described here has been proposed as part of the revised language, now called Standard ML. The design has three main goals: (1) to facilitate the structuring of large ML programs; (2) to support separate compilation and generic library units; and (3) to employ new ideas in the semantics of data types to extend the power of ML's polymorphic type system. It is based on concepts inherent in the structure of ML, primarily the notions of a declaration, its type signature, and the environment that it denotes.
[299] Jeffrey M. Maddalon, Ricky W. Butler, and César Muñoz. A mathematical basis for the safety analysis of conflict prevention algorithms. Technical Report NASA/TM-2009-215768, National Aeronautics and Space Administration, Langley Research Center, Hampton VA 23681-2199, USA, June 2009. [ bib | .pdf ]
[300] Harry G. Mairson. Deciding ML typability is complete for deterministic exponential time. In Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, pages 382–401. ACM SIGACT and SIGPLAN, ACM Press, 1990. [ bib ]
[301] Yu. I. Manin. A Course in Mathematical Logic. Springer, New York, 1977. [ bib ]
[302] John Matthews. fCryptol semantics. Available from the author on request, January 2005. [ bib ]
[303] Frank Mayer, Karl MacMillan, and David Caplan. SELinux by Example. Open Source Software Development Series. Prentice Hall, 2007. [ bib ]
[304] David McAllester. Grammar rewriting. In Kapur [511], pages 124–138. [ bib ]
[305] Bill McCarty. SELinux: NSA's Open Source Security Enhanced Linux. O'Reilly, 2005. [ bib ]
[306] Pamela McCorduck. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. A. K. Peters, March 2004. [ bib ]
[307] Gary McGraw. Software Security: Building Security In. Addison-Wesley Software Security Series. Addison-Wesley, February 2006. [ bib | http ]
[308] Annabelle McIver and Carroll Morgan. Probabilistic predicate transformers: Part 2. Technical Report PRG-TR-5-96, Programming Research Group, Oxford University Computer Laboratory, March 1996. [ bib | .html ]
Probabilistic predicate transformers generalise standard (ordinary) predicate transformers to incorporate a notion of probabilistic choice in imperative programs. The basic theory of that, for finite state spaces, is set out in [5], together with a statement of their `healthiness conditions'. Here the earlier results are extended to infinite state spaces, and several more specialised topics are explored: the characterisation of standard and deterministic programs; and the structure of the extended space generated when `angelic choice' is added to the system.
[309] Annabelle McIver, Carroll Morgan, and Elena Troubitsyna. The probabilistic steam boiler: a case study in probabilistic data refinement. In Proceedings of the International Refinement Workshop and Formal Methods Pacific, Canberra, 1998. [ bib | .html ]
Probabilistic choice and demonic nondeterminism have been combined in a model for sequential programs in which program refinement is defined by removing demonic nondeterminism. Here we study the more general topic of data refinement in the probabilistic setting, extending standard techniques to probabilistic programs. We use the method to obtain a quantitative assessment of safety of a (probabilistic) version of the steam boiler.
[310] A. K. McIver and C. Morgan. Demonic, angelic and unbounded probabilistic choices in sequential programs. Acta Informatica, 37(4/5):329–354, 2001. [ bib ]
[311] A. K. McIver and C. C. Morgan. Partial correctness for probabilistic demonic programs. Theoretical Computer Science, 266(1–2):513–541, 2001. [ bib ]
[312] Annabelle McIver and Carroll Morgan. Abstraction, Refinement and Proof for Probabilistic Systems. Springer, 2004. [ bib | http ]
[313] Thomas Frederick Melham. Formalizing Abstraction Mechanisms for Hardware Verification in Higher Order Logic. PhD thesis, University of Cambridge, August 1989. [ bib ]
[314] T. F. Melham. A mechanized theory of the π-calculus in HOL. Nordic Journal of Computing, 1(1):50–76, 1994. [ bib ]
[315] Jia Meng, Claire Quigley, and Lawrence C. Paulson. Automation for interactive proof: first prototype. Information and Computation, 204(10):1575–1596, 2006. [ bib ]
[316] Gary L. Miller. Riemann's hypothesis and tests for primality. In Conference Record of the Seventh Annual ACM Symposium on Theory of Computing, pages 234–239, Albuquerque, New Mexico, May 1975. [ bib ]
[317] Steven P. Miller, Michael W. Whalen, and Darren D. Cofer. Software model checking takes off. Communications of the ACM, 53(2):58–64, February 2010. [ bib | DOI | http ]
[318] R. Milner. A theory of type polymorphism in programming. Journal of Computer and System Sciences, 17:348–375, December 1978. [ bib ]
[319] Robin Milner. Communication and Concurrency. International Series in Computer Science. Prentice Hall, 1989. [ bib ]
[320] Robin Milner, Mads Tofte, Robert Harper, and David MacQueen. The Definition of Standard ML. MIT Press, Cambridge, MA, USA, 1997. [ bib ]
[321] J. C. Mogul, R. F. Rashid, and M. J. Accetta. The packet filter: An efficient mechanism for user-level network code. In ACM Symposium on Operating Systems Principles, November 1987. [ bib ]
[322] Abdel Mokkedem and Tim Leonard. Formal verification of the Alpha 21364 network protocol. In Aagaard and Harrison [588], pages 443–461. [ bib ]
[323] David Monniaux. Abstract interpretation of probabilistic semantics. In Seventh International Static Analysis Symposium (SAS'00), number 1824 in Lecture Notes in Computer Science. Springer, 2000. [ bib | .html ]
Following earlier models, we lift standard deterministic and nondeterministic semantics of imperative programs to probabilistic semantics. This semantics allows for random external inputs of known or unknown probability and random number generators. We then propose a method of analysis of programs according to this semantics, in the general framework of abstract interpretation. This method lifts an ordinary abstract lattice, for non-probabilistic programs, to one suitable for probabilistic programs. Our construction is highly generic. We discuss the influence of certain parameters on the precision of the analysis, basing ourselves on experimental results.
[324] David Monniaux. An abstract Monte-Carlo method for the analysis of probabilistic programs (extended abstract). In 28th Symposium on Principles of Programming Languages (POPL '01). Association for Computer Machinery, 2001. [ bib | .html ]
We introduce a new method, combination of random testing and abstract interpretation, for the analysis of programs featuring both probabilistic and non-probabilistic nondeterminism. After introducing ordinary testing, we show how to combine testing and abstract interpretation and give formulas linking the precision of the results to the number of iterations. We then discuss complexity and optimization issues and end with some experimental results.
[325] Carroll Morgan, Annabelle McIver, Karen Seidel, and J. W. Sanders. Probabilistic predicate transformers. Technical Report TR-4-95, Oxford University Computing Laboratory Programming Research Group, February 1995. [ bib | .html ]
Predicate transformers facilitate reasoning about imperative programs, including those exhibiting demonic non-deterministic choice. Probabilistic predicate transformers extend that facility to programs containing probabilistic choice, so that one can in principle determine not only whether a program is guaranteed to establish a certain result, but also its probability of doing so. We bring together independent work of Claire Jones and Jifeng He, showing how their constructions can be made to correspond; from that link between a predicate-based and a relation-based view of probabilistic execution we are able to propose "probabilistic healthiness conditions", generalising those of Dijkstra for ordinary predicate transformers. The associated calculus seems suitable for exploring further the rigorous derivation of imperative probabilistic programs.
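To make the idea concrete (a standard pGCL-style example in the spirit of this line of work, not a formula quoted from the report): applied to a probabilistic choice between two assignments, the transformer returns the probability of the postcondition rather than a truth value. Reading the postcondition [x = 1] as an expectation that is 1 where it holds and 0 elsewhere,

\[
wp\,(x := 1 \;{}_{p}\!\oplus\; x := 2)\,[x = 1]
\;=\; p \cdot wp\,(x := 1)\,[x = 1] \;+\; (1 - p) \cdot wp\,(x := 2)\,[x = 1]
\;=\; p ,
\]

so the program establishes x = 1 with probability exactly p.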
[326] Carroll Morgan. Proof rules for probabilistic loops. In He Jifeng, John Cooke, and Peter Wallis, editors, Proceedings of the BCS-FACS 7th Refinement Workshop, Workshops in Computing. Springer, 1996. [ bib | .html ]
[327] Carroll Morgan, Annabelle McIver, and Karen Seidel. Probabilistic predicate transformers. ACM Transactions on Programming Languages and Systems, 18(3):325–353, May 1996. [ bib ]
[328] Carroll Morgan and Annabelle McIver. pGCL: formal reasoning for random algorithms. South African Computer Journal, 22:14–27, 1999. [ bib | .ps.gz ]
[329] Max Moser, Christopher Lynch, and Joachim Steinbach. Model elimination with basic ordered paramodulation. Available from the authors' web sites, January 1996. [ bib | .ps.gz ]
We present a new approach for goal-directed theorem proving with equality which integrates Basic Ordered Paramodulation into a Model Elimination framework. In order to be able to use orderings and to restrict the applications of equations to non-variable positions, the goal-directed tableau construction is combined with bottom-up completion where only positive literals are overlapped. The resulting calculus thus keeps the best properties of completion while giving up only part of the goal-directedness.
[330] Max Moser, Ortrun Ibens, Reinhold Letz, Joachim Steinbach, Christoph Goller, Johannes Schumann, and Klaus Mayr. SETHEO and E-SETHEO – the CADE-13 systems. Journal of Automated Reasoning, 18:237–246, 1997. [ bib | .ps.gz ]
The model elimination theorem prover SETHEO (version V3.3) and its equational extension E-SETHEO are presented. SETHEO employs sophisticated mechanisms of subgoal selection, elaborate iterative deepening techniques, and local failure caching methods. Its equational counterpart E-SETHEO transforms formulae containing equality (using a variant of Brand's modification method) and processes the output with the standard SETHEO system. The paper gives an overview of the theoretical background, the system architecture, and the performance of both systems.
[331] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Cambridge University Press, Cambridge, England, June 1995. [ bib ]
[332] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Laskey K.B. and Prade H., editors, Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, San Francisco, 1999. Morgan Kaufmann. [ bib | .html ]
[333] E. V. Nalimov, G. McC. Haworth, and E. A. Heinz. Space-efficient indexing of chess endgame tables. ICGA Journal, 23(3):148–162, September 2000. [ bib | .ps.gz ]
Chess endgame tables should provide efficiently the value and depth of any required position during play. The indexing of an endgame's positions is crucial to meeting this objective. This paper updates Heinz' previous review of approaches to indexing and describes the latest approach by the first and third authors.

Heinz' and Nalimov's endgame tables (EGTs) encompass the en passant rule and have the most compact index schemes to date. Nalimov's EGTs, to the Distance-to-Mate (DTM) metric, require only 30.6 x 10^9 elements in total for all the 3-to-5-man endgames and are individually more compact than previous tables. His new index scheme has proved itself while generating the tables and in the 1999 World Computer Chess Championship where many of the top programs used the new suite of EGTs.

[334] Anthony J. Narkawicz and César Muñoz. Formal verification of air traffic conflict prevention bands algorithms. Technical Report NASA/TM-2010-216706, National Aeronautics and Space Administration, Langley Research Center, Hampton VA 23681-2199, USA, June 2010. [ bib | .pdf ]
[335] George C. Necula and Peter Lee. Proof-carrying code. Technical Report CMU-CS-96-165, Computer Science Department, Carnegie Mellon University, September 1996. [ bib ]
[336] R. P. Nederpelt, J. H. Geuvers, and R. C. de Vrijer, editors. Selected Papers on Automath, volume 133 of Studies in Logic and the Foundations of Mathematics. North-Holland, 1994. [ bib ]
[337] Andrzej Nedzusiak. σ-fields and probability. Journal of Formalized Mathematics, 1989. [ bib | .html ]
[338] Andrzej Nedzusiak. Probability. Journal of Formalized Mathematics, 1990. [ bib | .html ]
[339] Greg Nelson and Derek C. Oppen. Simplification by cooperating decision procedures. ACM Transactions on Programming Languages and Systems, 1(2):245–257, October 1979. [ bib ]
[340] Greg Nelson and Derek C. Oppen. Fast decision procedures based on congruence closure. Journal of the Association for Computing Machinery, 27(2):356–364, April 1980. [ bib ]
[341] M. Nesi. Value-passing CCS in HOL. In Joyce and Seger [546], pages 352–365. [ bib ]
[342] Monica Nesi. Formalising Process Calculi in Higher Order Logic. PhD thesis, University of Cambridge, 1986. [ bib | .html ]
[343] Robert Nieuwenhuis and Albert Rubio. Theorem proving with ordering and equality constrained clauses. Journal of Symbolic Computation, 19(4):321–351, April 1995. [ bib ]
[344] R. Nieuwenhuis and A. Rubio. Paramodulation-based theorem proving. In Robinson and Voronkov [402], chapter 7, pages 371–443. [ bib ]
[345] Nils J. Nilsson. Probabilistic logic. Artificial Intelligence, 28(1):71–87, 1986. [ bib ]
[346] Tobias Nipkow. Functional unification of higher-order patterns. In Proceedings, Eighth Annual IEEE Symposium on Logic in Computer Science, pages 64–74. IEEE Computer Society Press, 1993. [ bib ]
Higher-order patterns (HOPs) are a class of lambda-terms which behave almost like first-order terms w.r.t. unification: unification is decidable and unifiable terms have most general unifiers which are easy to compute. HOPs were first discovered by Dale Miller and subsequently developed and applied by Pfenning and Nipkow. This paper presents a stepwise development of a functional unification algorithm for HOPs. Both the usual representation of lambda-terms with alphabetic bound variables and de Bruijn's notation are treated. The appendix contains a complete listing of an implementation in Standard ML.
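A standard textbook illustration of the pattern restriction (not an example drawn from the paper): a free variable may be applied only to distinct bound variables, which is what makes most general unifiers exist and be easy to compute. For instance,

\[
\lambda x.\, \lambda y.\; F\,x\,y \;\stackrel{?}{=}\; \lambda x.\, \lambda y.\; c\,y\,x
\quad\text{has the most general unifier}\quad
F \mapsto \lambda u.\, \lambda v.\; c\,v\,u ,
\]

whereas \(\lambda x.\; F\,(G\,x) \stackrel{?}{=} \lambda x.\; c\) lies outside the pattern fragment, because F is applied to a non-variable argument.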
[347] Tobias Nipkow. Hoare logics in Isabelle/HOL. In H. Schwichtenberg and R. Steinbrüggen, editors, Proof and System-Reliability, pages 341–367. Kluwer, 2002. [ bib | .html ]
[348] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL—A Proof Assistant for Higher-Order Logic, volume 2283 of LNCS. Springer, 2002. [ bib | http ]
[349] Tobias Nipkow. Social choice theory in HOL. Special Issue of the Journal of Automated Reasoning, 43(3):289–304, October 2009. [ bib | http ]
This article presents formalizations in higher-order logic of two proofs of Arrow’s impossibility theorem due to Geanakoplos. The Gibbard-Satterthwaite theorem is derived as a corollary. Lacunae found in the literature are discussed.
[350] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL—A Proof Assistant for Higher-Order Logic, October 2011. Available for download at http://isabelle.in.tum.de/dist/Isabelle2011-1/doc/tutorial.pdf. [ bib | .pdf ]
[351] Pilar Nivela and Robert Nieuwenhuis. Saturation of first-order (constrained) clauses with the Saturate system. In Claude Kirchner, editor, Proceedings of the 5th International Conference on Rewriting Techniques and Applications (RTA-93), volume 690 of Lecture Notes in Computer Science, pages 436–440. Springer, June 1993. [ bib ]
[352] Michael Norrish. C formalised in HOL. PhD thesis, University of Cambridge Computer Laboratory, 1998. [ bib ]
[353] Michael Norrish and Konrad Slind. A thread of HOL development. The Computer Journal, 41(1):37–45, 2002. [ bib ]
The HOL system is a mechanized proof assistant for higher-order logic that has been under continuous development since the mid-1980s, by an ever-changing group of developers and external contributors. We give a brief overview of various implementations of the HOL logic before focusing on the evolution of certain important features available in a recent implementation. We also illustrate how the module system of Standard ML provided security and modularity in the construction of the HOL kernel, as well as serving in a separate capacity as a useful representation medium for persistent, hierarchical logical theories.
[354] Michael Norrish. Rewriting conversions implemented with continuations. Special Issue of the Journal of Automated Reasoning, 43(3):305–336, October 2009. [ bib | http ]
We give a continuation-based implementation of rewriting for systems in the LCF tradition. These systems must construct explicit proofs of equations when rewriting, and currently do so in a way that can be very space-inefficient. An explicit representation of continuations improves performance on large terms, and on long-running computations.
[355] L. Northrop, P. Feiler, R. P. Gabriel, J. Goodenough, R. Linger, T. Longstaff, R. Kazman, M. Klein, D. Schmidt, K. Sullivan, and K. Wallnau. Ultra-Large-Scale Systems: The Software Challenge of the Future. Software Engineering Institute, Carnegie Mellon, 2006. The 2006 report for a 12-month study of ultra-large-scale systems software, sponsored by the United States Department of Defense. [ bib | http ]
[356] John Nunn. Secrets of Rook Endings. Gambit Publications, December 1999. [ bib ]
[357] John Nunn. Secrets of Pawnless Endings. Gambit Publications, second expanded edition, 2002. [ bib ]
[358] Steven Obua and Sebastian Skalberg. Importing HOL into Isabelle/HOL. In Furbach and Shankar [555], pages 298–302. [ bib | http ]
We developed an importer from both HOL 4 and HOL-light into Isabelle/HOL. The importer works by replaying proofs within Isabelle/HOL that have been recorded in HOL 4 or HOL-light and is therefore completely safe. Concepts in the source HOL system, that is types and constants, can be mapped to concepts in Isabelle/HOL; this facilitates a true integration of imported theorems and theorems that are already available in Isabelle/HOL. The importer is part of the standard Isabelle distribution.
[359] Chris Okasaki. Purely Functional Data Structures. Cambridge University Press, Cambridge, UK, 1998. [ bib ]
Data structures and efficiency analysis for functional programming. Code in ML and Haskell. Many references.
[360] John O'Leary, Roope Kaivola, and Tom Melham. Relational STE and theorem proving for formal verification of industrial circuit designs. In Jobstmann and Ray [532], pages 97–104. [ bib | .pdf ]
[361] Machigar Ongtang, Stephen McLaughlin, William Enck, and Patrick McDaniel. Semantically rich application-centric security in Android. In ACSAC-2009 [504], pages 340–349. [ bib | .html ]
[362] Sam Owre and Natarajan Shankar. The formal semantics of PVS. Technical Report SRI-CSL-97-2, Computer Science Laboratory, SRI International, Menlo Park, CA, August 1997. [ bib | .html ]
[363] S. Owre, N. Shankar, J. M. Rushby, and D. W. J. Stringer-Calvert. PVS System Guide. Computer Science Laboratory, SRI International, Menlo Park, CA, September 1999. [ bib ]
[364] Sam Owre and N. Shankar. Theory interpretations in PVS. Technical Report SRI-CSL-01-01, SRI International, April 2001. [ bib | .pdf ]
[365] Michael Parks. Number-theoretic test generation for directed rounding. IEEE Transactions on Computers, 49(7):651–658, 2000. [ bib | .pdf ]
We present methods to generate systematically the hardest test cases for multiplication, division, and square root subject to directed rounding, essentially extending previous work on number-theoretic floating-point testing to rounding modes other than to-nearest. The algorithms focus upon the rounding boundaries of the modes truncate, to-minusinfinity, and to-infinity, and programs based on them require little beyond exact arithmetic in the working precision to create billions of edge cases. We will show that the amount of work required to calculate trial multiplicands pays off in the form of free extra tests due to an interconnection among the operations considered herein. Although these tests do not replace proofs of correctness, they can be used to gain a high degree of confidence that the accuracy requirements as mandated by IEEE Standard 754 have been satisfied.
[366] Christine Paulin, Philippe Audebaud, and Richard Lassaigne. Randomized algorithms in type theory. Slides from a talk delivered at Dagstuhl seminar 01341: Dependent Type Theory meets Practical Programming, August 2001. [ bib | .ps.gz ]
[367] Lawrence C. Paulson. A higher-order implementation of rewriting. Science of Computer Programming, 3:119–149, 1983. [ bib | .pdf ]
Many automatic theorem-provers rely on rewriting. Using theorems as rewrite rules helps to simplify the subgoals that arise during a proof. LCF is an interactive theorem-prover intended for reasoning about computation. Its implementation of rewriting is presented in detail. LCF provides a family of rewriting functions, and operators to combine them. A succession of functions is described, from pattern matching primitives to the rewriting tool that performs most inferences in LCF proofs. The design is highly modular. Each function performs a basic, specific task, such as recognizing a certain form of tautology. Each operator implements one method of building a rewriting function from simpler ones. These pieces can be put together in numerous ways, yielding a variety of rewriting strategies. The approach involves programming with higher-order functions. Rewriting functions are data values, produced by computation on other rewriting functions. The code is in daily use at Cambridge, demonstrating the practical use of functional programming.
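The flavour of this combinator style can be suggested with a toy model (our own sketch; LCF's real conversions map terms to equational theorems, and none of these names are LCF's):

```haskell
-- A toy model of LCF-style rewriting combinators: a "conversion" is an
-- ordinary value, and strategies are built by combining simpler ones.
type Conv a = a -> Maybe a   -- Nothing means the conversion does not apply

thenC :: Conv a -> Conv a -> Conv a     -- apply c1, then c2 to its result
thenC c1 c2 t = c1 t >>= c2

orelseC :: Conv a -> Conv a -> Conv a   -- try c1; fall back to c2 on failure
orelseC c1 c2 t = maybe (c2 t) Just (c1 t)

repeatC :: Conv a -> Conv a             -- apply c until it no longer applies
repeatC c t = maybe (Just t) (repeatC c) (c t)

main :: IO ()
main = print (repeatC halveBig 96)  -- Just 6
  where halveBig n = if n > 10 then Just (n `div` 2) else Nothing
```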
[368] Lawrence C. Paulson. Set theory for verification: I. From foundations to functions. Journal of Automated Reasoning, 11(3):353–389, 1993. [ bib | .pdf ]
A logic for specification and verification is derived from the axioms of Zermelo-Fraenkel set theory. The proofs are performed using the proof assistant Isabelle. Isabelle is generic, supporting several different logics. Isabelle has the flexibility to adapt to variants of set theory. Its higher-order syntax supports the definition of new binding operators. Unknowns in subgoals can be instantiated incrementally. The paper describes the derivation of rules for descriptions, relations and functions, and discusses interactive proofs of Cantor's Theorem, the Composition of Homomorphisms challenge [9], and Ramsey's Theorem [5]. A generic proof assistant can stand up against provers dedicated to particular logics.
[369] Lawrence C. Paulson. Isabelle: A generic theorem prover. Lecture Notes in Computer Science, 828:xvii + 321, 1994. [ bib ]
[370] Lawrence C. Paulson. Set theory for verification: II. Induction and recursion. Journal of Automated Reasoning, 15(2):167–215, 1995. [ bib | .pdf ]
A theory of recursive definitions has been mechanized in Isabelle's Zermelo-Fraenkel (ZF) set theory. The objective is to support the formalization of particular recursive definitions for use in verification, semantics proofs and other computational reasoning. Inductively defined sets are expressed as least fixedpoints, applying the Knaster-Tarski Theorem over a suitable set. Recursive functions are defined by well-founded recursion and its derivatives, such as transfinite recursion. Recursive data structures are expressed by applying the Knaster-Tarski Theorem to a set, such as Vω, that is closed under Cartesian product and disjoint sum. Worked examples include the transitive closure of a relation, lists, variable-branching trees and mutually recursive trees and forests. The Schröder-Bernstein Theorem and the soundness of propositional logic are proved in Isabelle sessions.
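The least-fixedpoint construction at the heart of this method is the usual Knaster-Tarski one (standard mathematics, not Isabelle notation): for a monotone h over subsets of a set D,

\[
\mathrm{lfp}(h) \;=\; \bigcap \,\{\, X \subseteq D \mid h(X) \subseteq X \,\} ,
\]

the intersection of all h-closed subsets, which is itself the least fixedpoint of h; an inductive definition is obtained by taking h to be the operator that applies the introduction rules once.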
[371] Lawrence C. Paulson. Proving properties of security protocols by induction. In CSFW-1997 [510], pages 70–83. [ bib | .ps.gz ]
[372] Lawrence C. Paulson. Mechanizing coinduction and corecursion in higher-order logic. Journal of Logic and Computation, 7(2):175–204, March 1997. [ bib | .ps.gz ]
[373] Lawrence C. Paulson. The Isabelle Reference Manual, October 1998. With contributions by Tobias Nipkow and Markus Wenzel. [ bib ]
[374] Lawrence C. Paulson. Inductive analysis of the Internet protocol TLS. TISSEC, 2(3):332–351, August 1999. [ bib ]
[375] L. C. Paulson. A generic tableau prover and its integration with Isabelle. Journal of Universal Computer Science, 5(3), March 1999. [ bib | http ]
[376] Lawrence C. Paulson. Mechanizing UNITY in Isabelle. ACM Transactions on Computational Logic, 2000. In press. [ bib ]
[377] Lawrence C. Paulson and Kong Woei Susanto. Source-level proof reconstruction for interactive theorem proving. In Schneider and Brandt [598]. [ bib ]
[378] F. Pfenning and C. Schürmann. System description: Twelf — A meta-logical framework for deductive systems. In Ganzinger [515]. [ bib ]
[379] Lee Pike, Mark Shields, and John Matthews. A verifying core for a cryptographic language compiler. In Manolios and Wilding [502], pages 1–10. [ bib | .pdf ]
A verifying compiler is one that emits both object code and a proof of correspondence between object and source code. We report the use of ACL2 in building a verifying compiler for Cryptol, a stream-based language for encryption algorithm specification that targets Rockwell Collins' AAMP7 microprocessor (and is designed to compile efficiently to hardware, too). This paper reports on our success in verifying the "core" transformations of the compiler – those transformations over the sub-language of Cryptol that begin after "higher-order" aspects of the language are compiled away, and finish just before hardware or software specific transformations are exercised. The core transformations are responsible for aggressive optimizations. We have written an ACL2 macro that automatically generates both the correspondence theorems and their proofs. The compiler also supplies measure functions that ACL2 uses to automatically prove termination of Cryptol programs, including programs with mutually-recursive cliques of streams. Our verifying compiler has proved the correctness of its core transformations for multiple algorithms, including TEA, RC6, and AES. Finally, we describe an ACL2 book of primitive operations for the general specification and verification of encryption algorithms.
[380] Robert M. Pirsig. Zen and the Art of Motorcycle Maintenance. Morrow, 10th edition, May 1974. [ bib | .html ]
[381] Gordon Plotkin, Colin Stirling, and Mads Tofte, editors. Proof, Language, and Interaction: Essays in Honour of Robin Milner. MIT Press, May 2000. [ bib ]
[382] Andreas Podelski and Andrey Rybalchenko. Transition invariants. In LICS-2004 [558], pages 32–41. [ bib ]
[383] Andreas Podelski and Andrey Rybalchenko. A complete method for the synthesis of linear ranking functions. In Steffen and Levi [606], pages 239–251. [ bib ]
[384] R. Pollack. On extensibility of proof checkers. Lecture Notes in Computer Science, 996:140–161, 1995. [ bib | .ps.gz ]
[385] George Pólya. How to Solve It. Princeton University Press, Princeton, 1948. [ bib ]
[386] Chr. Posthoff and I. S. Herschberg. Computer analysis of a queen endgame. ICCA Journal, 9(4):189–198, 1986. Translation of Komissarchik (1974). [ bib ]
[387] Riccardo Pucella and Jesse A. Tov. Haskell session types with (almost) no class. In Gill [538], pages 25–36. [ bib ]
We describe an implementation of session types in Haskell. Session types statically enforce that client-server communication proceeds according to protocols. They have been added to several concurrent calculi, but few implementations of session types are available.

Our embedding takes advantage of Haskell where appropriate, but we rely on no exotic features. Thus our approach translates with minimal modification to other polymorphic, typed languages such as ML and Java. Our implementation works with existing Haskell concurrency mechanisms, handles multiple communication channels and recursive session types, and infers protocols automatically.

While our implementation uses unsafe operations in Haskell, it does not violate Haskell's safety guarantees. We formalize this claim in a concurrent calculus with unsafe communication primitives over which we layer our implementation of session types, and we prove that the session types layer is safe. In particular, it enforces that channel-based communication follows consistent protocols.
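The underlying trick can be suggested in a few lines of Haskell (a deliberately naive sketch of the general idea, not the paper's library or API: here duality is maintained by hand, whereas the paper infers protocols and handles recursion and multiple channels):

```haskell
-- Protocol states as phantom types: each operation returns the channel
-- re-typed at the rest of the protocol, so out-of-order use is a type error.
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

data Send a r   -- send an 'a', then continue as 'r'
data Recv a r   -- receive an 'a', then continue as 'r'
data Eps        -- session finished

newtype Session p = Session (Chan String)  -- untyped transport underneath

send :: Show a => Session (Send a r) -> a -> IO (Session r)
send (Session c) x = writeChan c (show x) >> return (Session c)

recv :: Read a => Session (Recv a r) -> IO (a, Session r)
recv (Session c) = do { s <- readChan c; return (read s, Session c) }

client :: Session (Send Int (Recv Int Eps)) -> IO ()
client s0 = do { s1 <- send s0 (21 :: Int); (n, _) <- recv s1; print (n :: Int) }

server :: Session (Recv Int (Send Int Eps)) -> IO ()
server s0 = do { (n, s1) <- recv s0; _ <- send s1 (2 * n); return () }

main :: IO ()
main = do { c <- newChan; _ <- forkIO (server (Session c)); client (Session c) }
```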

[388] William Pugh. A practical algorithm for exact array dependence analysis. Communications of the ACM, 35(8):102–114, August 1992. [ bib ]
The Omega test is an integer programming algorithm that can determine whether a dependence exists between two array references, and if so, under what conditions. Conventional wisdom holds that integer programming techniques are far too expensive to be used for dependence analysis, except as a method of last resort for situations that cannot be decided by simpler methods. We present evidence that suggests this wisdom is wrong, and that the Omega test is competitive with approximate algorithms used in practice and suitable for use in production compilers. The Omega test is based on an extension of Fourier-Motzkin variable elimination to integer programming, and has worst-case exponential time complexity. However, we show that for many situations in which other (polynomial) methods are accurate, the Omega test has low order polynomial time complexity. The Omega test can be used to simplify integer programming problems, rather than just deciding them. This has many applications, including accurately and efficiently computing dependence direction and distance vectors.
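For reference, the Fourier-Motzkin step that the Omega test extends combines each lower bound on a variable with each upper bound (a generic illustration, not an example from the paper): for positive integer coefficients a and b,

\[
\beta \le b\,x \;\wedge\; a\,x \le \alpha
\quad\Longrightarrow\quad
a\,\beta \;\le\; a\,b\,x \;\le\; b\,\alpha ,
\]

so over the rationals x can be eliminated in favour of the constraint \(a\beta \le b\alpha\). Over the integers this "real shadow" is only necessary, not sufficient, and handling the gap exactly is the heart of the Omega test.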
[389] K. Quinn. Ever had problems rounding off figures? The stock exchange has. Wall Street Journal, 202(91), November 1983. [ bib | .pdf ]
[390] Florian Rabe. Representing Logics and Logic Translations. PhD thesis, Jacobs University Bremen, May 2008. [ bib | http ]
[391] M. O. Rabin. Probabilistic automata. Information and Control, 6:230–245, 1963. [ bib ]
"This is a seminal paper on the theory of probabilistic automata. Rabin defined the notion of a language being accepted by a probabilistic automaton relative to a cutpoint lambda. One of his key results was to show that there exist finite state probabilistic automata that define non-regular languages."
[392] M. O. Rabin. Probabilistic algorithms. In Traub [453], pages 21–39. [ bib ]
[393] Michael O. Rabin. N-process mutual exclusion with bounded waiting by 4log2(N)-valued shared variable. Journal of Computer and System Sciences, 25(1):66–75, August 1982. [ bib ]
[394] Norman Ramsey and Avi Pfeffer. Stochastic lambda calculus and monads of probability distributions. In POPL-2002 [568]. [ bib | .html ]
Probability distributions are useful for expressing the meanings of probabilistic languages, which support formal modeling of and reasoning about uncertainty. Probability distributions form a monad, and the monadic definition leads to a simple, natural semantics for a stochastic lambda calculus, as well as simple, clean implementations of common queries. But the monadic implementation of the expectation query can be much less efficient than current best practices in probabilistic modeling. We therefore present a language of measure terms, which can not only denote discrete probability distributions but can also support the best known modeling techniques. We give a translation of stochastic lambda calculus into measure terms. Whether one translates into the probability monad or into measure terms, the results of the translations denote the same probability distribution.
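A minimal version of the probability monad the abstract describes can be written in a few lines (our own sketch in Haskell, not Ramsey and Pfeffer's code):

```haskell
-- A discrete distribution is a list of (outcome, probability) pairs;
-- return is the point mass, and bind multiplies probabilities along paths.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1.0)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- The expectation query, for numeric outcomes.
expectation :: Dist Double -> Double
expectation (Dist xs) = sum [ x * p | (x, p) <- xs ]

die :: Dist Double
die = Dist [ (fromIntegral n, 1 / 6) | n <- [1 .. 6 :: Int] ]

main :: IO ()
main = print (expectation (do { a <- die; b <- die; return (a + b) }))
-- expected sum of two dice: 7.0 (up to floating-point rounding)
```

The abstract's efficiency point is visible here: this bind enumerates every path through the computation, which is exactly the cost that measure terms are introduced to avoid.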
[395] T. S. Ray. An evolutionary approach to synthetic biology, zen and the art of creating life. Artificial Life, 1(1), 1993. [ bib | ftp ]
[396] M. K. Reiter and A. D. Rubin. Anonymous web transactions with crowds. Communications of the ACM, 42(2):32–38, February 1999. [ bib ]
The authors describe how the Crowds system works – essentially, a group of users act as web forwarders for each other in a way that appears random to outsiders. They analyse the anonymity properties of the system and compare it with other systems. Crowds enables the retrieval of information over the web with only a small amount of private information leakage to other parties.
[397] Stefan Richter. Formalizing integration theory with an application to probabilistic algorithms. In Slind et al. [594], pages 271–286. [ bib ]
[398] J. A. Robinson. A machine-oriented logic based on the resolution principle. Journal of the ACM, 12(1):23–49, January 1965. [ bib ]
[399] J. A. Robinson. Automatic deduction with hyper-resolution. International Journal of Computer Mathematics, 1:227–234, 1965. [ bib ]
[400] J. A. Robinson. A note on mechanizing higher order logic. Machine Intelligence, 5:121–135, 1970. [ bib ]
[401] Abraham Robinson. Non-standard Analysis. Princeton University Press, 1996. [ bib ]
[402] A. Robinson and A. Voronkov, editors. Handbook of Automated Reasoning. Elsevier Science, 2001. [ bib ]
[403] A. J. Roycroft. *C*. EG, 7(119):771, 1996. [ bib ]
[404] A. J. Roycroft. The computer section. EG, 8(123):47–48, 1997. [ bib ]
[405] A. J. Roycroft. *C*. EG, 8(130 Supplement):428, 1998. [ bib ]
[406] A. J. Roycroft. AJR's snippets. EG, 8(131):476, 1999. [ bib ]
[407] Piotr Rudnicki. An overview of the Mizar project. Notes to a talk at the workshop on Types for Proofs and Programs, June 1992. [ bib | .html ]
[408] John Rushby. Formalism in safety cases. In Dale and Anderson [579], pages 3–17. [ bib | http ]
[409] Bertrand Russell. The Autobiography of Bertrand Russell. George Allen & Unwin, London, 1968. 3 volumes. [ bib ]
[410] David M. Russinoff. An experiment with the Boyer-Moore theorem prover: A proof of Wilson's theorem. Journal of Automated Reasoning, 1:121–139, 1985. [ bib ]
[411] Mark Saaltink. Domain checking Z specifications. In C. Michael Holloway and Kelly J. Hayhurst, editors, LFM' 97: Fourth NASA Langley Formal Methods Workshop, pages 185–192, Hampton, VA, September 1997. [ bib | http ]
[412] R. Sattler. Further to the KRP(a2)KbBP(a3) database. ICCA Journal, 11(2/3):82–87, 1988. [ bib ]
[413] J. Schaeffer. One Jump Ahead: Challenging Human Supremacy in Checkers. Springer, New York, 1997. [ bib ]
[414] J. Schaeffer, Y. Bjornsson, N. Burch, R. Lake, P. Lu, and S. Sutphen. Building the checkers 10-piece endgame databases. In Herik et al. [521]. [ bib ]
[415] Fred Schneider. Trust in Cyberspace. National Academy Press, 1999. [ bib ]
[416] Bruce Schneier. Applied Cryptography. Wiley, second edition, 1996. [ bib ]
[417] Johann Ph. Schumann. DELTA — A bottom-up processor for top-down theorem provers (system abstract). In Bundy [512]. [ bib | .html ]
[418] Johann Schumann. Tableaux-based theorem provers: Systems and implementations. Journal of Automated Reasoning, 13:409–421, 1994. [ bib | .html ]
The following list of tableaux-based theorem provers was assembled in the Spring and Summer 1993 as the result of a wide-spread enquiry via e-mail. It is intended to provide a short overview of the field and existing implementations. For each system, a short description is given. Additionally, useful information about the system is presented in tabular form. This includes the type of logic which can be handled by the system (input), the implementation language, hardware and operating systems requirements (implementation). Most of the systems are available as binaries or as sources with documentation and can be obtained via anonymous ftp or upon request. The descriptions and further information have been submitted by the individuals whose names are given as contact address. The provers are ordered alphabetically by their name (or the author's name).
[419] J. T. Schwartz. Fast probabilistic algorithms for verification of polynomial identities. Journal of the ACM, 27(4):701–717, October 1980. [ bib | .pdf ]
[420] Carl-Johan Seger. Introduction to formal hardware verification. Technical Report TR-92-13, Department of Computer Science, University of British Columbia, June 1992. [ bib | http ]
Formal hardware verification has recently attracted considerable interest. The need for “correct” designs in safety-critical applications, coupled with the major cost associated with products delivered late, are two of the main factors behind this. In addition, as the complexity of the designs increase, an ever smaller percentage of the possible behaviors of the designs will be simulated. Hence, the confidence in the designs obtained by simulation is rapidly diminishing. This paper provides an introduction to the topic by describing three of the main approaches to formal hardware verification: theorem-proving, model checking and symbolic simulation. We outline the underlying theory behind each approach, we illustrate the approaches by applying them to simple examples and we discuss their strengths and weaknesses. We conclude the paper by describing current on-going work on combining the approaches to achieve multi-level verification approaches.
[421] Karen Seidel, Carroll Morgan, and Annabelle McIver. An introduction to probabilistic predicate transformers. Technical Report TR-6-96, Programming Research Group, Oxford University Computing Laboratory, 1996. [ bib ]
[422] Karen Seidel, Carroll Morgan, and Annabelle McIver. Probabilistic imperative programming: a rigorous approach. In Groves and Reeves [534]. [ bib ]
[423] Natarajan Shankar and Sam Owre. Principles and pragmatics of subtyping in PVS. In D. Bert, C. Choppy, and P. D. Mosses, editors, Recent Trends in Algebraic Development Techniques, WADT '99, volume 1827 of Lecture Notes in Computer Science, pages 37–52, Toulouse, France, September 1999. Springer. [ bib | .html ]
[424] Mark Shields. A language for symmetric-key cryptographic algorithms and its efficient implementation. Available from the author's website, March 2006. [ bib | http ]
The development of cryptographic hardware for classified data is expensive and time consuming. We present a domain-specific language, microCryptol, and a corresponding compiler, mcc, to address these costs. microCryptol supports the capture of mathematically precise specifications of algorithms, while also allowing those specifications to be compiled to efficient imperative code able to execute on embedded microprocessors.
[425] Jamie Shield, Ian J. Hayes, and David A. Carrington. Using theory interpretation to mechanise the reals in a theorem prover. In Colin Fidge, editor, Electronic Notes in Theoretical Computer Science, volume 42. Elsevier, 2001. [ bib | .html ]
[426] J. H. Silverman. The Arithmetic of Elliptic Curves. Number 106 in Graduate Texts in Mathematics. Springer, 1986. [ bib ]
[427] John Slaney, Arnold Binas, and David Price. Guiding a theorem prover with soft constraints. In de Mántaras and Saitta [524], pages 221–225. [ bib | .pdf ]
Attempts to use finite models to guide the search for proofs by resolution and the like in first-order logic all suffer from the need to trade off the expense of generating and maintaining models against the improvement in quality of guidance as investment in the semantic aspect of reasoning is increased. Previous attempts to resolve this tradeoff have resulted either in poor selection of models, or in fragility as the search becomes over-sensitive to the order of clauses, or in extreme slowness. Here we present a fresh approach, whereby most of the clauses for which a model is sought are treated as soft constraints. The result is a partial model of all of the clauses rather than an exact model of only a subset of them. This allows our system to combine the speed of maintaining just a single model with the robustness previously requiring multiple models. We present experimental evidence of benefits over a range of first-order problem domains.
[428] Konrad Slind. HOL98 Draft User's Manual, Athabasca Release, Version 2, January 1999. Part of the documentation included with the hol98 theorem-prover. [ bib ]
[429] Konrad Slind. Reasoning about Terminating Functional Programs. PhD thesis, Technical University of Munich, 1999. [ bib ]
[430] Konrad Slind and Michael Norrish. The HOL System Tutorial, February 2001. Part of the documentation included with the HOL4 theorem-prover. [ bib | http ]
[431] Konrad Slind and Joe Hurd. Applications of polytypism in theorem proving. In Basin and Wolff [593], pages 103–119. [ bib | http ]
Polytypic functions have mainly been studied in the context of functional programming languages. In that setting, applications of polytypism include elegant treatments of polymorphic equality, prettyprinting, and the encoding and decoding of high-level datatypes to and from low-level binary formats. In this paper, we discuss how polytypism supports some aspects of theorem proving: automated termination proofs of recursive functions, incorporation of the results of metalanguage evaluation, and equivalence-preserving translation to a low-level format suitable for propositional methods. The approach is based on an interpretation of higher order logic types as terms, and easily deals with mutual and nested recursive types.
[432] R. Solovay and V. Strassen. A fast Monte-Carlo test for primality. SIAM Journal on Computing, 6(1):84–85, March 1977. [ bib ]
[433] Saurabh Srivastava, Michael Hicks, Jeffrey S. Foster, and Patrick Jenkins. Modular information hiding and type-safe linking for C. IEEE Transactions on Software Engineering, 34(3), May/June 2008. [ bib ]
[434] Daryl Stewart. Formal for everyone - Challenges in achievable multicore design and verification. In Cabodi and Singh [531], page 186. [ bib | .pdf ]
[435] L. B. Stiller. Parallel analysis of certain endgames. ICCA Journal, 12(2):55–64, 1989. [ bib ]
[436] Colin Stirling. Bisimulation, model checking and other games, June 1997. Notes for a Mathfit instructional meeting on games and computation, held in Edinburgh, Scotland. [ bib | .ps ]
[437] David Stirzaker. Elementary Probability. Cambridge University Press, 1994. [ bib ]
[438] T. Ströhlein. Untersuchungen über kombinatorische Spiele [Investigations of combinatorial games]. PhD thesis, Technical University of Munich, 1970. [ bib ]
[439] Bryan O'Sullivan, John Goerzen, and Don Stewart. Real World Haskell. O'Reilly Media, Inc., 1st edition, 2008. [ bib ]
[440] Christian B. Suttner and Geoff Sutcliffe. The TPTP problem library — v2.1.0. Technical Report JCU-CS-97/8, Department of Computer Science, James Cook University, December 1997. [ bib | .ps.gz ]
[441] Donald Syme. Declarative Theorem Proving for Operational Semantics. PhD thesis, University of Cambridge, 1998. [ bib ]
[442] Donald Syme. A Simplifier/Programmable Grinder for hol98, January 1998. Part of the documentation included with the hol98 theorem-prover. [ bib ]
[443] Tanel Tammet. A resolution theorem prover for intuitionistic logic. In McRobbie and Slaney [513]. [ bib ]
[444] Tanel Tammet. Gandalf version c-1.0c Reference Manual, October 1997. [ bib | http ]
[445] Tanel Tammet. Gandalf. Journal of Automated Reasoning, 18(2):199–204, April 1997. [ bib ]
[446] Tanel Tammet. Towards efficient subsumption. In Kirchner and Kirchner [514]. [ bib ]
[447] J. Tamplin. EGT-query service extending to 6-man pawnless endgame EGTs in DTC, DTM, DTZ and DTZ50 metrics. Available from the web page http://chess.jaet.org/endings/, 2006. [ bib ]
[448] Aaron Tay. A guide to endgames tablebase. Available from the web page http://www.aarontay.per.sg/Winboard/egtb.html, 2006. [ bib ]
[449] Laurent Théry. A quick overview of HOL and PVS, August 1999. Lecture Notes from the Types Summer School '99: Theory and Practice of Formal Proofs, held in Giens, France. [ bib | .html ]
[450] K. Thompson. Retrograde analysis of certain endgames. ICCA Journal, 9(3):131–139, 1986. [ bib ]
[451] Stephen Toulmin. The Uses of Argument. Cambridge University Press, 1958. [ bib ]
[452] David S. Touretzky. Common Lisp: A Gentle Introduction to Symbolic Computation. Benjamin Cummings, 1990. [ bib | .html ]
[453] J. F. Traub, editor. Algorithms and Complexity: New Directions and Recent Results. Academic Press, New York, 1976. [ bib ]
"This book provides a record of a conference, and contains many novel ideas for probabilistic algorithms."
[454] A. Trybulec and H. A. Blair. Computer aided reasoning. In Rohit Parikh, editor, Proceedings of the Conference on Logic of Programs, volume 193 of Lecture Notes in Computer Science, pages 406–412, Brooklyn, NY, June 1985. Springer. [ bib ]
[455] Andrzej Trybulec and Howard Blair. Computer assisted reasoning with MIZAR. In Aravind Joshi, editor, Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 26–28, Los Angeles, CA, August 1985. Morgan Kaufmann. [ bib ]
[456] Alan M. Turing. Checking a large routine. In Report of a Conference on High Speed Automatic Calculating Machines, pages 67–69, Cambridge, England, June 1949. University Mathematical Laboratory. [ bib ]
[457] D. A. Turner. A new implementation technique for applicative languages. Software – Practice and Experience, 9(1):31–49, January 1979. [ bib ]
Combinators, a "curried" version of lambda calculus that eliminates the need for symbol binding. Combinators can be reduced (evaluated) locally and in parallel, so they make an interesting model of parallel computation. Combinator hackers: this paper introduces some new combinators, besides S-K-I, that help keep the translation from blowing up in space.
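The three classic combinators are easily rendered as ordinary functions (a generic illustration, not code from the paper):

```haskell
-- s x y z = x z (y z) and k x y = x; the identity arises as i = s k k,
-- since s k k z = k z (k z) = z.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

k :: a -> b -> a
k x _ = x

i :: a -> a
i = s k k

main :: IO ()
main = print (i (42 :: Int))  -- 42
```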
[458] Thomas Tymoczko. The four-color problem and its philosophical significance. Journal of Philosophy, 76, 1979. [ bib ]
[459] Peter Van Roy and Seif Haridi. Concepts, Techniques, and Models of Computer Programming. MIT Press, March 2004. [ bib ]
[460] Moshe Y. Vardi. Why is modal logic so robustly decidable? Technical Report TR97-274, Rice University, April 1997. [ bib ]
In the last 20 years modal logic has been applied to numerous areas of computer science, including artificial intelligence, program verification, hardware verification, database theory, and distributed computing. There are two main computational problems associated with modal logic. The first problem is checking if a given formula is true in a given state of a given structure. This problem is known as the model-checking problem. The second problem is checking if a given formula is true in all states of all structures. This problem is known as the validity problem. Both problems are decidable. The model-checking problem can be solved in linear time, while the validity problem is PSPACE-complete. This is rather surprising when one considers the fact that modal logic, in spite of its apparent propositional syntax, is essentially a first-order logic, since the necessity and possibility modalities quantify over the set of possible worlds, and model checking and validity for first-order logic are computationally hard problems. Why, then, is modal logic so robustly decidable? To answer this question, we have to take a close look at modal logic as a fragment of first-order logic. A careful examination reveals that propositional modal logic can in fact be viewed as a fragment of 2-variable first-order logic. It turns out that this fragment is computationally much more tractable than full first-order logic, which provides some explanation for the tractability of modal logic. Upon a deeper examination, however, we discover that this explanation is not too satisfactory. The tractability of modal logic is quite robust and cannot be explained by the relationship to two-variable first-order logic. We argue that the robust decidability of modal logic can be explained by the so-called tree-model property, and we show how the tree-model property leads to automata-based decision procedures.
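The two-variable embedding mentioned in the abstract is the standard translation (a textbook formulation, not notation from the report), which needs only the variables x and y because they can be reused at alternating modal depths:

\[
ST_x(p) = P(x), \qquad
ST_x(\neg\varphi) = \neg\, ST_x(\varphi), \qquad
ST_x(\Diamond\varphi) = \exists y \,\bigl( R(x,y) \wedge ST_y(\varphi) \bigr),
\]

with \(ST_y\) defined symmetrically by exchanging the roles of x and y.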
[461] Norbert Völker. HOL2P - A system of classical higher order logic with second order polymorphism. In Schneider and Brandt [598], pages 334–351. [ bib | http ]
[462] John von Neumann. Various techniques for use in connection with random digits. In von Neumann's Collected Works, volume 5, pages 768–770. Pergamon, 1963. [ bib ]
[463] Tanja E. J. Vos. UNITY in Diversity: A Stratified Approach to the Verification of Distributed Algorithms. PhD thesis, Utrecht University, 2000. [ bib ]
[464] Philip Wadler. The essence of functional programming. In 19th Symposium on Principles of Programming Languages. ACM Press, January 1992. [ bib ]
[465] Philip Wadler. Comprehending monads. Mathematical Structures in Computer Science, 2:461–493, 1992. [ bib | .html ]
Category theorists invented monads in the 1960's to concisely express certain aspects of universal algebra. Functional programmers invented list comprehensions in the 1970's to concisely express certain programs involving lists. This paper shows how list comprehensions may be generalised to an arbitrary monad, and how the resulting programming feature can concisely express in a pure functional language some programs that manipulate state, handle exceptions, parse text, or invoke continuations. A new solution to the old problem of destructive array update is also presented. No knowledge of category theory is assumed.
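The generalisation is visible already in Haskell, whose do-notation descends from this line of work (our own illustration, not Wadler's code):

```haskell
-- A list comprehension is sugar for monadic bind in the list monad ...
pairs, pairsM :: [(Int, Int)]
pairs  = [ (x, y) | x <- [1, 2], y <- [10, 20] ]
pairsM = [1, 2] >>= \x -> [10, 20] >>= \y -> return (x, y)

-- ... and the same shape works in any monad, e.g. Maybe for exceptions.
addM :: Maybe Int
addM = Just 1 >>= \x -> Just 2 >>= \y -> return (x + y)  -- Just 3

main :: IO ()
main = do print (pairs == pairsM)  -- True
          print addM
```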
[466] Stan Wagon. The Banach-Tarski Paradox. Cambridge University Press, 1993. [ bib ]
[467] Keith Wansbrough, Michael Norrish, Peter Sewell, and Andrei Serjantov. Timing UDP: mechanized semantics for sockets, threads and failures. Draft, 2001. [ bib | http ]
[468] Jefferson Hane Weaver. The World of Physics, volume II. Simon and Schuster, New York, 1987. [ bib ]
[469] Morten Welinder. Very efficient conversions. In Schubert et al. [548], pages 340–352. [ bib | http ]
Using program transformation techniques from the field of partial evaluation an automatic tool for generating very efficient conversions from equality-stating theorems has been implemented. In the situation where a Hol user would normally employ the built-in function GEN_REWRITE_CONV, a function that directly produces a conversion of the desired functionality, this article demonstrates how producing the conversion in the form of a program text instead of as a closure can lead to significant speed-ups. The Hol system uses a set of 31 simplifying equations on a very large number of intermediate terms derived, e.g., during backwards proofs. For this set the conversion generated by the two-step method is about twice as fast as the method currently used. When installing the new conversion, tests show that the overall running times of Hol proofs are reduced by about 10%. Apart from the speed-up this is completely invisible to the user. With cooperation from the user further speed-up is possible.
[470] Markus Wenzel. Isar - A generic interpretative approach to readable formal proof documents. In Bertot et al. [587], pages 167–184. [ bib ]
[471] Stephen Westfold. Integrating Isabelle/HOL with Specware. In Schneider and Brandt [599]. [ bib ]
[472] Arthur White. Limerick. Mathematics Magazine, 64(2):91, April 1991. [ bib ]
[473] Alfred North Whitehead and Bertrand Russell. Principia Mathematica. Cambridge University Press, Cambridge, 1910. [ bib ]
[474] A. N. Whitehead. An Introduction to Mathematics. Williams and Norgate, London, 1911. [ bib ]
[475] Freek Wiedijk. Mizar light for HOL light. In Boulton and Jackson [590]. [ bib ]
[476] Freek Wiedijk. Mizar's soft type system. In Schneider and Brandt [598], pages 383–399. [ bib ]
[477] David Williams. Probability with Martingales. Cambridge University Press, 1991. [ bib ]
[478] Carl Roger Witty. The Ontic inference language. Master's thesis, MIT, June 1995. [ bib | .html ]
OIL, the Ontic Inference Language, is a simple bottom-up logic programming language with equality reasoning. Although intended for use in writing proof verification systems, OIL is an interesting general-purpose programming language. In some cases, very simple OIL programs can achieve an efficiency which requires a much more complicated algorithm in a traditional programming language. This thesis gives a formal semantics for OIL and some new results related to its efficient implementation. The main new result is a method of transforming bottom-up logic programs with equality to bottom-up logic programs without equality. An introduction to OIL and several examples are also included.
[479] Wai Wong. The HOL res_quan library, 1993. HOL88 documentation. [ bib | .html ]
[480] W. Wong. Recording and checking HOL proofs. In Schubert et al. [548], pages 353–368. [ bib | http ]
Formal proofs generated by mechanized theorem proving systems may consist of a large number of inferences. As these theorem proving systems are usually very complex, it is extremely difficult if not impossible to formally verify them. This calls for an independent means of ensuring the consistency of mechanically generated proofs. This paper describes a method of recording HOL proofs in terms of a sequence of applications of inference rules. The recorded proofs can then be checked by an independent proof checker. Also described in this paper is an efficient proof checker which is able to check a practical proof consisting of thousands of inference steps.
[481] Wai Wong. A proof checker for HOL. Technical Report 389, University of Cambridge Computer Laboratory, March 1996. [ bib ]
[482] Larry Wos, Ross Overbeek, Ewing Lusk, and Jim Boyle. Automated Reasoning: Introduction and Applications. McGraw-Hill, New York, 2nd edition, 1992. [ bib ]
[483] Larry Wos and Gail W. Pieper. A fascinating country in the world of computing: your guide to automated reasoning. World Scientific Publishing Co., Inc., River Edge, NJ, USA, 1999. [ bib | http ]
[484] R. Wu and D. F. Beal. Solving Chinese chess endgames by database construction. Information Sciences, 135(3–4):207–228, 2001. [ bib ]
[485] Hongwei Xi. Dependent Types in Practical Programming. PhD thesis, Carnegie Mellon University, September 1998. [ bib | .html ]
[486] Lotfi A. Zadeh. Fuzzy sets. Information and Control, 8:338–353, 1965. [ bib ]
[487] Vincent Zammit. On the Readability of Machine Checkable Formal Proofs. PhD thesis, University of Kent at Canterbury, March 1999. [ bib ]
[488] Lintao Zhang and Sharad Malik. The quest for efficient Boolean satisfiability solvers. In Voronkov [517], pages 295–313. [ bib | .pdf ]
[489] Junxing Zhang and Konrad Slind. Verification of Euclid's algorithm for finding multiplicative inverses. In Hurd et al. [597], pages 205–220. [ bib ]
[490] FIPS. Digital Signature Standard. Federal Information Processing Standards Publication 186. U.S. Department of Commerce/NIST National Technical Information Service, May 1994. [ bib | http ]
[491] Galois, Inc. Cryptol Programming Guide, October 2008. Available for download at http://corp.galois.com/storage/files/downloads/Cryptol_ProgrammingGuide.pdf. [ bib | .pdf ]
[492] IEEE. Standard for binary floating-point arithmetic. ANSI/IEEE Standard 754-1985. The Institute of Electrical and Electronic Engineers, Inc., 345 East 47th Street, New York, NY 10017, USA, 1985. [ bib | http ]
[493] IEEE. Standard for Verilog Hardware Description Language. IEEE Standard 1364-2001. The Institute of Electrical and Electronic Engineers, Inc., 345 East 47th Street, New York, NY 10017, USA, 2001. [ bib ]
The Verilog Hardware Description Language (HDL) is defined in this standard. Verilog HDL is a formal notation intended for use in all phases of the creation of electronic systems. Because it is both machine readable and human readable, it supports the development, verification, synthesis, and testing of hardware designs; the communication of hardware design data; and the maintenance, modification, and procurement of hardware. The primary audiences for this standard are the implementors of tools supporting the language and advanced users of the language.
[494] IEEE. Standard for Information Technology—Portable Operating System Interface (POSIX) Base Definitions. IEEE Standard 1003.1-2001. The Institute of Electrical and Electronic Engineers, Inc., 345 East 47th Street, New York, NY 10017, USA, 2001. [ bib ]
This standard defines a standard operating system interface and environment, including a command interpreter (or shell), and common utility programs to support applications portability at the source code level. It is the single common revision to IEEE Std 1003.1-1996, IEEE Std 1003.2-1992, and the Base Specifications of The Open Group Single UNIX Specification, Version 2.
[495] MoD. Requirements for the Procurement of Safety Critical Software in Defence Equipment. Interim Defence Standard 00-55. Ministry of Defence, Directorate of Standardization, Kentigern House, 65 Brown St, Glasgow G2 8EX, UK, April 1991. [ bib ]
[496] National Security Agency. Fact sheet NSA Suite B cryptography. Published on the web at http://www.nsa.gov/ia/industry/crypto_suite_b.cfm, 2005. [ bib | http ]
[497] National Security Agency. The case for elliptic curve cryptography. Available for download at http://www.nsa.gov/business/programs/elliptic_curve.shtml, January 2009. [ bib | http ]
[498] National Security Agency. NSA Suite B cryptography. Available for download at http://www.nsa.gov/ia/programs/suiteb_cryptography/, November 2010. [ bib | http ]
[499] National Security Agency. Mathematical routines for the NIST prime elliptic curves. Available for download at http://www.nsa.gov/ia/_files/nist-routines.pdf, April 2010. [ bib | .pdf ]
[500] Anonymous. The QED manifesto. In Bundy [512]. [ bib | http ]
[501] H. Jaap Van den Herik and Pieter Spronck, editors. Advances in Computer Games, 12th International Conference (ACG 2009), volume 6048 of Lecture Notes in Computer Science. Springer, May 2010. [ bib | DOI | http ]
[502] Panagiotis Manolios and Matthew Wilding, editors. Proceedings of the Sixth International Workshop on the ACL2 Theorem Prover and its Applications. HappyJack Books, August 2006. [ bib ]
[503] Proceedings of the 17th Annual Computer Security Applications Conference (ACSAC 2001), 2001. [ bib ]
[504] Proceedings of the 25th Annual Computer Security Applications Conference (ACSAC 2009), December 2009. [ bib ]
[505] Jacques Calmet and Jan Plaza, editors. Artificial Intelligence and Symbolic Computation (AISC '98), volume 1476 of Lecture Notes in Artificial Intelligence, Plattsburgh, New York, USA, September 1998. Springer. [ bib | http ]
[506] Proceedings of the 31st Symposium on Computer Arithmetic (ARITH 2024). IEEE Computer Society, June 2024. [ bib ]
[507] Toby Walsh, editor. Ninth Workshop on Automated Reasoning: Bridging the Gap between Theory and Practice. The Society for the Study of Artificial Intelligence and Simulation of Behaviour, April 2002. [ bib ]
[508] Joost-Pieter Katoen, editor. Formal Methods for Real-Time and Probabilistic Systems, volume 1601 of Lecture Notes in Computer Science, Bamberg, Germany, May 1999. Springer. [ bib | http ]
[509] Gethin Norman, Marta Kwiatkowska, and Dimitar Guelev. AVoCS 2002: Second workshop on automated verification of critical systems. Technical Report CSR-02-6, School of Computer Science, University of Birmingham, April 2002. [ bib ]
[510] 10th Computer Security Foundations Workshop. IEEE Computer Society Press, 1997. [ bib ]
[511] Deepak Kapur, editor. Proceedings of the 11th International Conference on Automated Deduction (CADE-11), volume 607 of Lecture Notes in Artificial Intelligence. Springer, June 1992. [ bib ]
[512] Alan Bundy, editor. 12th International Conference on Automated Deduction (CADE-12), volume 814 of Lecture Notes in Artificial Intelligence. Springer, June 1994. [ bib ]
[513] Michael A. McRobbie and John K. Slaney, editors. 13th International Conference on Automated Deduction (CADE-13), volume 1104 of Lecture Notes in Artificial Intelligence. Springer, July 1996. [ bib ]
[514] Claude Kirchner and Hélène Kirchner, editors. Proceedings of the 15th International Conference on Automated Deduction (CADE-15), volume 1421 of Lecture Notes in Artificial Intelligence. Springer, July 1998. [ bib ]
[515] H. Ganzinger, editor. Proceedings of the 16th International Conference on Automated Deduction (CADE-16), volume 1632 of Lecture Notes in Artificial Intelligence. Springer, July 1999. [ bib ]
[516] David A. McAllester, editor. Proceedings of the 17th International Conference on Automated Deduction (CADE-17), volume 1831 of Lecture Notes in Computer Science. Springer, June 2000. [ bib ]
[517] Andrei Voronkov, editor. Proceedings of the 18th International Conference on Automated Deduction (CADE-18), volume 2392 of Lecture Notes in Artificial Intelligence. Springer, July 2002. [ bib ]
[518] J. L. Fiadeiro et al., editors. CALCO 2005, volume 3629 of Lecture Notes in Computer Science. Springer, 2005. [ bib ]
[519] Thomas Ball and Robert B. Jones, editors. Proceedings of the 18th International Conference on Computer Aided Verification (CAV 2006), volume 4144 of Lecture Notes in Computer Science. Springer, August 2006. [ bib ]
[520] Armin Biere and Roderick Bloem, editors. Proceedings of the 26th International Conference on Computer Aided Verification (CAV 2014), volume 8559 of Lecture Notes in Computer Science. Springer, July 2014. [ bib ]
[521] H. J. van den Herik, H. Iida, and E. A. Heinz, editors. Proceedings of the 10th Advances in Computer Games Conference. Kluwer, November 2003. [ bib ]
[522] Daniel Geist and Enrico Tronci, editors. Correct Hardware Design and Verification Methods (CHARME 2003), volume 2860 of Lecture Notes in Computer Science. Springer, October 2003. [ bib ]
[523] Thierry Coquand, Henri Lombardi, and Marie-Françoise Roy, editors. Mathematics, Algorithms, Proofs, number 05021 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany, 2006. [ bib ]
[524] Ramon López de Mántaras and Lorenza Saitta, editors. 16th European Conference on Artificial Intelligence (ECAI 2004). IOS Press, August 2004. [ bib ]
[525] D. McGuinness, A. Stump, G. Sutcliffe, and C. Tinelli, editors. Proceedings of the Workshop on Evaluation Methods for Solvers and Quality Metrics for Solutions (EMS+QMS 2010), July 2010. [ bib ]
[526] Christoph Benzmüller, John Harrison, and Carsten Schürmann, editors. Empirically Successful Automated Reasoning in Higher-Order Logic. arXiv.org, December 2005. [ bib | http ]
[527] Proceedings of the EuroSys 2006 Conference, April 2006. [ bib ]
[528] Proceedings of the 7th International Workshop on Formal Aspects of Security & Trust (FAST 2010), September 2010. [ bib ]
[529] Proceedings of Formal Methods 2008, volume 5014 of Lecture Notes in Computer Science. Springer, May 2008. [ bib ]
[530] Mandayam Srivas and Albert Camilleri, editors. Proceedings of the First International Conference on Formal Methods in Computer-Aided Design (FMCAD '96), volume 1166 of Lecture Notes in Computer Science. Springer, 1996. [ bib ]
[531] Gianpiero Cabodi and Satnam Singh, editors. Formal Methods in Computer-Aided Design (FMCAD 2012). IEEE, October 2012. [ bib ]
[532] Barbara Jobstmann and Sandip Ray, editors. Formal Methods in Computer-Aided Design (FMCAD 2013). IEEE, October 2013. [ bib ]
[533] John Fitzgerald, Ian J. Hayes, and Andrzej Tarlecki, editors. Proceedings of Formal Methods Europe 2005, volume 3582 of Lecture Notes in Computer Science. Springer, July 2005. [ bib ]
[534] Lindsay Groves and Steve Reeves, editors. Formal Methods Pacific '97. Springer, July 1997. [ bib | .html ]
[535] Dines Bjørner, Manfred Broy, and Igor V. Pottosin, editors. Proceedings of the International Conference on Formal Methods in Programming and Their Applications, volume 735 of Lecture Notes in Computer Science. Springer, June 1993. [ bib ]
[536] Maria Paola Bonacina and Ulrich Furbach, editors. International Workshop on First-Order Theorem Proving (FTP '97), number 97-50 in RISC-Linz Report Series. Johannes Kepler Universität Linz, October 1997. [ bib | http ]
[537] Ricardo Caferra and Gernot Salzer, editors. International Workshop on First-Order Theorem Proving (FTP '98), Technical Report E1852-GS-981. Technische Universität Wien, November 1998. [ bib | http ]
[538] Andy Gill, editor. Haskell '08: Proceedings of the First ACM SIGPLAN Symposium on Haskell. ACM, September 2008. [ bib ]
[539] Chung-chieh Shan, editor. Haskell '13: Proceedings of the 2013 ACM SIGPLAN Symposium on Haskell. ACM, September 2013. [ bib ]
[540] High Confidence Software and Systems: HCSS 2006, April 2006. [ bib ]
[541] High Confidence Software and Systems: HCSS 2007, May 2007. [ bib ]
[542] High Confidence Software and Systems: HCSS 2009, May 2009. [ bib ]
[543] John Launchbury and Ray Richards, editors. Proceedings of the Twelfth Annual High Confidence Software and Systems Conference (HCSS 2012), May 2012. [ bib ]
[544] Jan Heering, Karl Meinke, Bernhard Möller, and Tobias Nipkow, editors. Higher-Order Algebra, Logic, and Term Rewriting, First International Workshop (HOA '93), volume 816 of Lecture Notes in Computer Science. Springer, 1994. [ bib ]
[545] Myla Archer, Jeffrey J. Joyce, Karl N. Levitt, and Phillip J. Windley, editors. Proceedings of the 1991 International Workshop on the HOL Theorem Proving System and its Applications (HOL '91), August 1991. IEEE Computer Society Press, 1992. [ bib ]
[546] J. J. Joyce and C.-J. H. Seger, editors. Higher Order Logic Theorem Proving and its Applications, 6th International Workshop, volume 780 of Lecture Notes in Computer Science. Springer, August 1993. [ bib ]
[547] Tom Melham and Juanito Camilleri, editors. Higher Order Logic Theorem Proving and Its Applications, 7th International Workshop, volume 859 of Lecture Notes in Computer Science. Springer, September 1994. [ bib ]
[548] E. T. Schubert, P. J. Windley, and J. Alves-Foss, editors. Proceedings of the 8th International Workshop on Higher Order Logic Theorem Proving and Its Applications (HOL '95), volume 971 of Lecture Notes in Computer Science. Springer, September 1995. [ bib ]
[549] Proceedings of the 8th International Workshop on Higher Order Logic Theorem Proving and Its Applications (HOL '95), September 1995. Supplemental proceedings. [ bib ]
[550] F. Orejas, P. G. Spirakis, and J. van Leeuwen, editors. Automata, Languages and Programming, volume 2076 of Lecture Notes in Computer Science. Springer, July 2001. [ bib | http ]
[551] Proceedings of the 3rd IEEE International Conference on Formal Engineering Methods (ICFEM 2000), September 2000. [ bib ]
[552] James Hook and Peter Thiemann, editors. Proceedings of the 13th ACM SIGPLAN International Conference on Functional Programming (ICFP 2008). ACM, September 2008. [ bib ]
[553] Peter Thiemann and Robby Bruce Findler, editors. Proceedings of the 17th ACM SIGPLAN International Conference on Functional Programming (ICFP 2012). ACM, September 2012. [ bib ]
[554] Komei Fukuda, Joris van der Hoeven, Michael Joswig, and Nobuki Takayama, editors. Proceedings of the Third International Congress on Mathematical Software (ICMS 2010), volume 6327 of Lecture Notes in Computer Science. Springer, September 2010. [ bib | http ]
[555] Ulrich Furbach and Natarajan Shankar, editors. Automated Reasoning, Third International Joint Conference (IJCAR 2006), volume 4130 of Lecture Notes in Computer Science. Springer, August 2006. [ bib ]
[556] Lennart Beringer and Amy Felty, editors. Third International Conference on Interactive Theorem Proving (ITP 2012), volume 7406 of Lecture Notes in Computer Science. Springer, August 2012. [ bib ]
[557] Robert S. Boyer, Edward S. Schneider, and Guy L. Steele, Jr., editors. LFP '84: Proceedings of the 1984 ACM Symposium on LISP and Functional Programming. ACM, August 1984. [ bib ]
[558] 19th IEEE Symposium on Logic in Computer Science (LICS 2004), July 2004. [ bib ]
[559] A. Voronkov, editor. Proceedings of the International Conference on Logic Programming and Automated Reasoning (LPAR '92), volume 624 of Lecture Notes in Artificial Intelligence. Springer, July 1992. [ bib ]
[560] Geoff Sutcliffe and Andrei Voronkov, editors. Logic for Programming, Artificial Intelligence, and Reasoning, 12th International Conference (LPAR 2005), volume 3835 of Lecture Notes in Artificial Intelligence. Springer, December 2005. [ bib ]
[561] Military Communications Conference 2003. IEEE, October 2003. [ bib | http ]
[562] Bernd Wegner. Proceedings of the second North American workshop on mathematical knowledge management (NA-MKM 2004), January 2004. [ bib ]
[563] César Muñoz. Proceedings of the second NASA formal methods symposium (NFM 2010). Technical Report NASA/CP-2010-216215, National Aeronautics and Space Administration, Langley Research Center, Hampton VA 23681-2199, USA, April 2010. [ bib ]
[564] Mihaela Bobaru, Klaus Havelund, Gerard J. Holzmann, and Rajeev Joshi, editors. Third International Symposium on NASA Formal Methods (NFM 2011), volume 6617 of Lecture Notes in Computer Science. Springer, April 2011. [ bib ]
[565] John P. Gallagher and Janis Voigtländer, editors. Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM 2010). ACM, January 2010. [ bib ]
[566] Gabriel Dos Reis and Laurent Théry, editors. PLMMS '09: Proceedings of the ACM SIGSAM 2009 International Workshop on Programming Languages for Mechanized Mathematics Systems. ACM, August 2009. [ bib ]
[567] POPL '78: Proceedings of the 5th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, New York, NY, USA, 1978. ACM. [ bib ]
[568] POPL 2002: The 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. ACM Press, January 2002. [ bib ]
[569] J. Gregory Morrisett and Simon L. Peyton Jones, editors. Proceedings of the 33rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2006). ACM, January 2006. [ bib ]
[570] Jasmin Christian Blanchette and Josef Urban, editors. Proceedings of the Third International Workshop on Proof Exchange for Theorem Proving (PxTP 2013), volume 14 of EPiC Series. EasyChair, June 2013. [ bib ]
[571] A. Cerone and A. Di Pierro, editors. Proceedings of the 2nd Workshop on Quantitative Aspects of Programming Languages (QAPL 2004), volume 112 of Electronic Notes in Theoretical Computer Science (ENTCS). Elsevier, January 2005. [ bib | http ]
[572] Rewriting Techniques and Applications (RTA 2001), volume 2051 of Lecture Notes in Computer Science. Springer, May 2001. [ bib | http ]
[573] Proceedings of the 15th International Conference on the Theory and Applications of Satisfiability Testing (SAT 2012), volume 7317 of Lecture Notes in Computer Science. Springer, June 2012. [ bib ]
[574] Summary of a Workshop on Software Certification and Dependability. The National Academies Press, 2004. [ bib | http ]
Certification of critical software systems (e.g., for safety and security) is important to help ensure their dependability. Today, certification relies as much on evaluation of the software development process as it does on the system's properties. While the latter are preferable, the complexity of these systems usually makes them extremely difficult to evaluate. To explore these and related issues, the National Coordination Office for Information Technology Research and Development asked the NRC to undertake a study to assess the current state of certification in dependable systems. The study is in two phases: the first to frame the problem and the second to assess it. This report presents a summary of a workshop held as part of the first phase, covering the participants' presentations and the subsequent discussion. It addresses, among other things, the strengths and limitations of process; new challenges and opportunities; experience to date; organizational context; and cost-effectiveness of software engineering techniques. A consensus report will be issued upon completion of the second phase.
[575] Marco Bernardo and Alessandro Cimatti, editors. Formal Methods for Hardware Verification, 6th International School on Formal Methods for the Design of Computer, Communication, and Software Systems (SFM 2006), volume 3965 of Lecture Notes in Computer Science. Springer, May 2006. [ bib ]
[576] Daniel J. Bernstein and Kris Gaj, editors. Proceedings of the 5th Special-Purpose Hardware for Attacking Cryptographic Systems Workshop (SHARCS 2012), March 2012. [ bib ]
[577] P. Naur and B. Randell, editors. Software Engineering. Scientific Affairs Division, NATO, October 1968. [ bib | http ]
[578] Jeanna Neefe Matthews and Thomas E. Anderson, editors. Proceedings of the 22nd ACM Symposium on Operating Systems Principles. ACM, October 2009. [ bib ]
[579] Chris Dale and Tom Anderson, editors. Making Systems Safer: Proceedings of the Eighteenth Safety-Critical Systems Symposium. Springer, February 2010. [ bib ]
[580] Proceedings of the 3rd International Workshop on Systems Software Verification (SSV 2008), volume 217 of Electronic Notes in Theoretical Computer Science. Elsevier, July 2008. [ bib ]
[581] Myla Archer, Ben Di Vito, and César Muñoz. Design and application of strategies/tactics in higher order logics (STRATA 2003). Technical Report NASA/CP-2003-212448, NASA, September 2003. [ bib | .pdf ]
[582] Susanne Graf and Michael Schwartzbach, editors. Tools and Algorithms for the Construction and Analysis of Systems: 6th International Conference, TACAS 2000, volume 1785 of Lecture Notes in Computer Science. Springer, March 2000. [ bib ]
[583] Joakim von Wright, Jim Grundy, and John Harrison, editors. Theorem Proving in Higher Order Logics, 9th International Conference, TPHOLs '96, volume 1125 of Lecture Notes in Computer Science. Springer, August 1996. [ bib ]
[584] Elsa L. Gunter and Amy Felty, editors. Theorem Proving in Higher Order Logics, 10th International Conference, TPHOLs '97, volume 1275 of Lecture Notes in Computer Science. Springer, August 1997. [ bib ]
[585] Elsa L. Gunter, editor. Supplemental Proceedings of the 10th International Conference on Theorem Proving in Higher Order Logics, TPHOLs '97, August 1997. [ bib ]
[586] Jim Grundy and Malcolm Newey, editors. Theorem Proving in Higher Order Logics, 11th International Conference, TPHOLs '98, volume 1497 of Lecture Notes in Computer Science. Springer, September 1998. [ bib ]
[587] Yves Bertot, Gilles Dowek, André Hirschowitz, Christine Paulin, and Laurent Théry, editors. Theorem Proving in Higher Order Logics, 12th International Conference, TPHOLs '99, volume 1690 of Lecture Notes in Computer Science. Springer, September 1999. [ bib | http ]
[588] Mark Aagaard and John Harrison, editors. Theorem Proving in Higher Order Logics, 13th International Conference: TPHOLs 2000, volume 1869 of Lecture Notes in Computer Science. Springer, August 2000. [ bib ]
[589] Mark Aagaard, John Harrison, and Tom Schubert. TPHOLs 2000: Supplemental proceedings. Technical Report CSE-00-009, Oregon Graduate Institute, August 2000. [ bib ]
[590] Richard J. Boulton and Paul B. Jackson, editors. 14th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2001, volume 2152 of Lecture Notes in Computer Science. Springer, September 2001. [ bib | http ]
[591] Richard J. Boulton and Paul B. Jackson. TPHOLs 2001: Supplemental proceedings. Technical Report EDI-INF-RR-0046, Division of Informatics, University of Edinburgh, September 2001. [ bib | .html ]
[592] Víctor A. Carreño, César A. Muñoz, and Sofiène Tahar, editors. 15th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2002, volume 2410 of Lecture Notes in Computer Science. Springer, August 2002. [ bib | http ]
[593] David Basin and Burkhart Wolff, editors. 16th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2003, volume 2758 of Lecture Notes in Computer Science. Springer, September 2003. [ bib | http ]
[594] Konrad Slind, Annette Bunker, and Ganesh Gopalakrishnan, editors. 17th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2004, volume 3223 of Lecture Notes in Computer Science. Springer, September 2004. [ bib ]
[595] Konrad Slind. TPHOLs 2004: Emerging trends. Technical Report UUCS-04, School of Computing, University of Utah, September 2004. [ bib | http ]
[596] Joe Hurd and Tom Melham, editors. 18th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2005, volume 3603 of Lecture Notes in Computer Science. Springer, August 2005. [ bib | http ]
[597] Joe Hurd, Edward Smith, and Ashish Darbari. Theorem proving in higher order logics: Emerging trends proceedings. Technical Report PRG-RR-05-02, Oxford University Computing Laboratory, August 2005. [ bib | http ]
[598] Klaus Schneider and Jens Brandt, editors. 20th International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2007, volume 4732 of Lecture Notes in Computer Science. Springer, September 2007. [ bib | http ]
[599] Klaus Schneider and Jens Brandt. Theorem proving in higher order logics: Emerging trends proceedings. Technical Report 364/07, Department of Computer Science, University of Kaiserslautern, August 2007. [ bib | .html ]
[600] Otmane Ait Mohamed, César Muñoz, and Sofiène Tahar, editors. 21st International Conference on Theorem Proving in Higher Order Logics: TPHOLs 2008, volume 5170 of Lecture Notes in Computer Science. Springer, August 2008. [ bib | http ]
[601] Stefan Berghofer, Tobias Nipkow, Christian Urban, and Makarius Wenzel, editors. Theorem Proving in Higher Order Logics, 22nd International Conference (TPHOLs 2009), volume 5674 of Lecture Notes in Computer Science. Springer, August 2009. [ bib ]
[602] R. Matuszewski and A. Zalewska, editors. From Insight to Proof: Festschrift in Honour of Andrzej Trybulec, volume 10(23) of Studies in Logic, Grammar and Rhetoric. University of Bialystok, 2007. [ bib | http ]
[603] Eduardo Giménez and Christine Paulin-Mohring, editors. Types for Proofs and Programs: International Workshop TYPES '96, volume 1512 of Lecture Notes in Computer Science. Springer, September 1996. [ bib ]
[604] Stefano Berardi, Mario Coppo, and Ferruccio Damiani, editors. Types for Proofs and Programs: Third International Workshop, TYPES 2003, volume 3085 of Lecture Notes in Computer Science. Springer, May 2004. [ bib ]
[605] M. Aderhold, S. Autexier, and H. Mantel, editors. Proceedings of the 6th International Verification Workshop (VERIFY 2010), July 2010. [ bib ]
[606] Bernhard Steffen and Giorgio Levi, editors. Proceedings of the 5th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2004), volume 2937 of Lecture Notes in Computer Science. Springer, January 2004. [ bib ]

This file was generated by bibtex2html 1.99.