(Warning: this is a long post on something which most people probably care very little about. On the other hand, there are 19 google hits for "logic changed my life".)
In general, how does one interpret the question, "What is the logic of X?", for some programming language X? A standard response is that the logical theorems are exactly the inhabited types, i.e., types τ such that there is a closed term t:τ. But what are the "terms"? The answer is obvious enough in simple formal systems where there is only one typing judgment, and so we can say things like, "The inhabited types of simply-typed lambda calculus are exactly the theorems of minimal implicational logic". Yet, real programming languages usually have many different typing judgments, since programs are composed out of many different syntactic objects. In ML, in particular, a program includes some let-bound values and expressions, some function definitions, and a result expression (in addition to datatype definitions, module definitions, etc., which are classified by different kinds of "types").
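For instance, even something as simple as the identity, regarded as a proof of α → α, can be written in more than one of these syntactic categories (a small illustration):

(* one proof of a -> a, in two syntactic guises *)
val id = fn x => x   (* a lambda value of type 'a -> 'a *)
fun id' x = x        (* a function definition, same type *)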
(In this post, "ML" is meant in a slightly fuzzy but essentially unambiguous way, as a call-by-value language with effects. To settle a concrete interpretation, I will be giving examples in Standard ML of NJ, which notably provides the control operator callcc. "Value" is meant in the standard operational sense, i.e., a value is a fully evaluated ML expression. We can similarly ask, "What is the logic of Haskell values?", but that is a different, trickier question, to which I'm still not sure of the answer.)
When answering the question, "What is the logic of ML?" (or "What is the logic of Haskell?"), the standard approach is to gloss over these different syntactic categories, and assume we are talking about expressions. This makes some sense, insofar as this is the largest syntactic category: all values and functions are also expressions. And expressions also have easy-to-understand principles of composition. For example, the rule of application—from two expressions e₁:σ→τ and e₂:σ, we can form the expression (e₁ e₂):τ—witnesses the logical principle of modus ponens. Whereas from two values v₁:σ→τ and v₂:σ, the application (v₁ v₂):τ is no longer a value.
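Concretely, here is a small instance of that composition (with int and string standing in for σ and τ):

(* application as modus ponens: from e1 : int -> string and e2 : int,
   the application e1 e2 is an expression of type string; it is not
   itself a value, but it evaluates to one *)
val e1 : int -> string = Int.toString
val e2 : int = 6 * 7
val e3 : string = e1 e2   (* evaluates to the value "42" *)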
Yet, I believe it is a mistake to define "the" logic of a programming language in this way, without realizing there is finer structure. That is why I phrased the question, "What is the logic of ML values?", rather than full-stop, "What is the logic of ML?" And so how does the logic of ML values differ from the logic of ML expressions?
Well, it is often claimed that ML (and likewise Haskell) defines an inconsistent logic. This is a statement about the logic of expressions. For example, non-terminating expressions such as

fun loop() = loop()

inhabit every type, as do exception-raising expressions, such as

fun fail() = raise (Fail "inconsistent!?")

(Note: the expressions here are loop() and fail(). Here and below, I assign names to expressions by declaring them as thunks.)
But none of these are values. Indeed, evaluation of the first expression will never yield a result,

- loop();
(* waiting a long time... *)

whereas the second will return immediately, but with an exception rather than a value:

- fail();
uncaught exception Fail [Fail: inconsistent!?]
  raised at: stdIn:2.3-3.5

(There are some additional complaints about the value restriction when evaluating these polymorphic expressions, but for our purposes we can ignore them here.) In fact, the logic of ML values is consistent, as we can see by defining the empty type:

datatype void = Void of void

There are no values of type void, since the only way to construct one is by already having one.
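Though void is empty, it still has an eliminator, ex falso quodlibet: a function from void into any type whatsoever, which is vacuously total precisely because there is nothing to call it with. Call it abort (we will reuse it below):

(* ex falso quodlibet: eliminate void into an arbitrary type; since no
   value of type void exists, this function can never actually be called *)
fun abort (Void v) : 'a = abort v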
Note, though, that all bets are off "under a lambda"—we can still build a value inhabiting any function type σ→τ, for example

fun fnfail() = fn _ => raise (Fail "inconsistent!?")
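Evaluating fnfail () is itself harmless: it immediately returns a lambda, which is a value, and the exception only fires when that lambda is applied. A quick sketch (instantiating at int -> int just to sidestep value-restriction warnings):

val f : int -> int = fnfail ()   (* binds a lambda value; nothing is raised *)
val x = f 17                     (* only now: uncaught exception Fail *)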
Let's try to be a bit more precise about what happens before we reach that point.

Another folk theorem about "the logic of ML" is that insofar as it is consistent (i.e., if you avoid things like unrestricted recursion and exceptions), with the addition of control operators it becomes classical. We can import the relevant SML/NJ library and see what they mean:
- open SMLofNJ.Cont;
opening SMLofNJ.Cont
  type 'a cont = 'a ?.cont
  val callcc : ('a cont -> 'a) -> 'a
  val throw : 'a cont -> 'a -> 'b
  val isolate : ('a -> unit) -> 'a cont
  (* ... *)

As first observed by Griffin, the type of callcc corresponds logically to the classical principle of Peirce's Law. We can make this a bit easier to see by defining another control operator letcc, which essentially does the same thing as callcc but with first-class ML functions, rather than values of the cont type:

- fun letcc f = callcc (fn k => f (fn x => throw k x));
val letcc = fn : (('a -> 'b) -> 'a) -> 'a
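Indeed, the type printed for letcc is literally Peirce's Law, ((α→β)→α)→α. As a quick illustration of the classical power this buys (a sketch, reusing the abort function defined above), we can derive double-negation elimination:

(* double-negation elimination: capture the current continuation k as a
   refutation of 'a and hand it to the double negation nna; nna can never
   actually return a value of the empty type void, so the abort is there
   only to satisfy the typechecker -- the way an 'a is ever produced is
   for nna to call k, which jumps back out of the letcc *)
fun dne (nna : ('a -> void) -> void) : 'a =
    letcc (fn k => abort (nna k))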
Using letcc, we can derive the law of excluded middle, ∀α.α ∨ ¬α. To represent this type, we first introduce a type constructor for disjunctions:

datatype ('a,'b) either = Left of 'a | Right of 'b

Now we can witness excluded middle as follows:
fun excluded_middle() : ('a,'a -> void) either =
letcc (fn k => Right (fn x => k (Left x)))
So what happens when we attempt to evaluate this expression? (To avoid complaints about the value restriction, we instantiate the type variable 'a at some arbitrary type a.)
- excluded_middle() : (a,a -> void) either;
val it = Right fn : (a,a -> void) either

And now, in contrast to what happened above with the evaluation of loop() and fail(), here we actually get a value!
So can we say that the logic of ML values really is classical? Well, we evidently can under the "bag of tautologies" view of logic. But that is not necessarily the most useful view.
If we look up at the above ML session, a shockingly anti-classical principle is already staring back down at us. After evaluating excluded_middle(), we got back this value:

val it = Right fn : (a,a -> void) either

That is, not only is this a proof of α ∨ ¬α, but in fact it contains a proof of the right disjunct, ¬α. This is an instance of the intuitionistic disjunction principle, that a proof of a disjunction yields a proof of one of the two disjuncts.
The punchline, I would say, is that the logic of ML values is not unsound, and not classical, but simply constructive. What is an ML value of type τ? Just an intuitionistic proof of τ—but in a non-empty context. This context includes the current continuation of type ¬τ, as well as all of the other effects that can be invoked underneath a lambda, i.e., in values of functional type.
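We can even watch that continuation at work. Give the dummy type a an inhabitant and then try to use the "refutation" handed back by excluded_middle: applying it jumps back to the original case analysis, which now receives a Left (a small sketch):

datatype a = A   (* now the dummy type has an inhabitant *)

(* applying refute throws Left A to the continuation captured inside
   excluded_middle, so the case analysis is re-run with Left A and the
   whole expression evaluates to "got an a after all" *)
val result =
    case excluded_middle () : (a,a -> void) either of
        Left _ => "got an a after all"
      | Right refute => (refute A; "unreachable")

This is exactly the sense in which the value Right fn is a constructive proof in a context containing ¬τ.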
Okay, so what have we brushed over here? Well first, the restriction that effects are only invoked underneath lambdas can actually be enforced by CPS translation. Functions σ→τ are interpreted as ¬(σ∧¬τ), where ¬(-) denotes minimal negation, i.e., ¬τ = τ→β, for some fixed parameter β, the return type. Effectful operations are encoded as things that return β.
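As a small illustration, this double-negation reading can be transcribed as SML type abbreviations, threading the answer type as an extra parameter (the names neg and arr are just for this sketch):

(* minimal negation relative to an answer type 'b:  neg a = a -> b *)
type ('a, 'b) neg = 'a -> 'b
(* the CPS reading of functions:  s -> t  becomes  neg (s * neg t) *)
type ('s, 't, 'b) arr = ('s * ('t, 'b) neg, 'b) neg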
Second, I said that a value of (a CPS-translated) type τ is an intuitionistic proof of τ (in a non-empty context of effects), but are all such intuitionistic proofs values? Well, no, because some proofs involve "detours". To be concrete, if M₁ and M₂ are proofs of σ and τ, then π₁(M₁, M₂) is a proof of σ—yet #1 ("hello", "world") is typically not considered to be a value (it evaluates to "hello"). (Some people like to call expressions like #1 ("hello", "world") "valuable".) Third, to say that there is a "logic" of ML values, we really do have to explain what its composition principles are, and not just which principles fail. I might revisit some of these points in another post.
Finally, the whole idea of a "logic of ML values" is very close to the idea of the realizability interpretation of ML. Some people believe that realizability is a distinct and very different alternative to Curry-Howard as a constructive interpretation of logic. My point with this post was mainly to suggest that Curry-Howard need not exclude realizability-like interpretations, once we realize (so to speak) that there is more than one syntactic category of proofs.