Wednesday, September 29, 2010

reflecting and reifying the state monad

The Haskell code below was inspired by Andrej Bauer's visit last week, as well as thinking more about this, this, and this since writing this.

{-# LANGUAGE RankNTypes, FlexibleInstances #-}

-- A "negative type" is a datatype of continuations, indexed by answer
-- type.  Given such a description, we can define values, "algebras",
-- and "handlers" as control expressions ("double negations") with
-- different answer type constraints
-- (cf. http://polaro.blogspot.com/2010/05/five-kinds-of-double-negation.html)

newtype Value neg = Val { runVal :: forall r. neg r -> r }
newtype Algebra neg r = Alg { runAlg :: neg r -> r }
newtype Handler neg r1 r2 = Han { runHandle :: neg r1 -> r2 }

infixr 0 `plugV`, `plugA`, `plugH`

plugV :: neg r -> Value neg -> r
k `plugV` v = runVal v k

plugA :: neg r -> Algebra neg r -> r
k `plugA` v = runAlg v k

plugH :: neg r -> Handler neg r r' -> r'
k `plugH` v = runHandle v k

-- We define the negative type Cell s

data Cell s r = Update s (Cell s r)
              | Lookup (s -> Cell s r)
              | Return r

-- Example: given an initial state, we can create a cell value...

new :: s -> Value (Cell s)
new s = Val newVal
    where
      newVal (Update s' k) = k `plugV` new s'
      newVal (Lookup ks) = ks s `plugV` new s
      newVal (Return a) = a

-- Example: perform some operations on an initial state...

r1 = (Lookup $ \s ->
      Update (s-1) $
      Lookup $ \s' ->
      Return (s'+1))
     `plugV`
     new 2
-- r1 = 2

-- Monadic reification and reflection

reify :: Cell s a -> s -> (s,a)
reify k s = k `plugH` Han reifyHandle
    where
      reifyHandle (Update s' k') = reify k' s'
      reifyHandle (Lookup ks) = reify (ks s) s
      reifyHandle (Return a) = (s,a)

reflect :: (s -> (s,a)) -> Cell s a
reflect m = Lookup $ \s ->
            let (s',a) = m s in
            Update s' $
            Return a

-- Example: Normalization by reification and reflection

prog1 = Update 0 $
        Lookup $ \s ->
        Return (s + 1)
prog2 = Lookup $ \s ->
        Update 2 $
        Update 0 $
        Return 1

-- a bit of code for printing cell continuations (prints only a small
-- segment of lookup tables)
instance Show r => Show (Cell Integer r) where
    show (Update s k) = "upd " ++ show s ++ "; " ++ show k
    show (Lookup ks) = show (map (\s -> (s, ks s)) [0..3])
    show (Return a) = "ret " ++ show a

-- *Main> prog1
-- upd 0; [(0,ret 1),(1,ret 2),(2,ret 3),(3,ret 4)]
-- *Main> prog2
-- [(0,upd 2; upd 0; ret 1),(1,upd 2; upd 0; ret 1),(2,upd 2; upd 0; ret 1),(3,upd 2; upd 0; ret 1)]
-- *Main> reflect(reify prog1)
-- [(0,upd 0; ret 1),(1,upd 0; ret 1),(2,upd 0; ret 1),(3,upd 0; ret 1)]
-- *Main> reflect(reify prog2)
-- [(0,upd 0; ret 1),(1,upd 0; ret 1),(2,upd 0; ret 1),(3,upd 0; ret 1)]
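
-- Not in the original post, but a sketch making the point explicit: Cell s
-- carries the structure of the free monad over the Update/Lookup operations,
-- and reify/reflect mediate between Cell s a and the usual state monad
-- s -> (s,a).  (The Applicative instance is only there to satisfy modern GHC.)

instance Functor (Cell s) where
    fmap f (Update s k) = Update s (fmap f k)
    fmap f (Lookup ks)  = Lookup (fmap f . ks)
    fmap f (Return a)   = Return (f a)

instance Applicative (Cell s) where
    pure = Return
    mf <*> mx = mf >>= \f -> fmap f mx

instance Monad (Cell s) where
    return = Return
    Update s k >>= f = Update s (k >>= f)
    Lookup ks  >>= f = Lookup (\s -> ks s >>= f)
    Return a   >>= f = f a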

Otherwise, things have been good. This blog has been quiet for a while, but I'm still learning a lot from amazing people. Though I can't resist quoting Henry Miller: "It is now the fall of my second year in Paris. I was sent here for a reason I have not yet been able to fathom."

Saturday, May 22, 2010

Five kinds of double-negation

A recent question on the TYPES mailing list reminded me that one reason negation is confusing is because it can mean so many different things. I and others originally thought the answer the person was looking for (the type-theoretic reading of negation) was "continuations", but it turned out his intuition was about a different form of negation not usually considered in constructive logic (but available in "refinement type systems", etc.), since it's tied to the "extrinsic" interpretation of types-as-properties. (For more on this deficit of constructive logic, see the introduction to Paul Taylor's Subspaces in Abstract Stone Duality.)

But even restricted to plain old type theory, there are many different interpretations of negation! Under the principle of "Why talk about something confusing when you can talk about something doubly confusing?", today I want to talk about different computational interpretations of double negations in logic.

  1. (A -> 0) -> 0. Here 0 is falsehood, and this is the standard interpretation of double-negation in intuitionistic logic. What does this mean computationally? Well, one way to think about this is to interpret types as sets. How many different functions f : X -> 0 are there from any set X into the empty set? None, unless X is itself empty, in which case there is exactly one. Applying this reasoning to our double-negation, if A is non-empty, then A -> 0 is empty, which in turn means that (A -> 0) -> 0 contains exactly one element; conversely, if A is empty, then A -> 0 is non-empty, which in turn means that (A -> 0) -> 0 is empty. What we see is that intuitionistic double-negation encodes the principle of proof-irrelevance, i.e., it forgets all the computational content of a type. (The catch here is that set-theoretic reasoning doesn't always prove type isomorphisms—this argument really only works for extensional type theory.)
  2. (A -> ⊥) -> ⊥. Here ⊥ represents "minimal" falsehood, i.e., simply a logical atom with no introduction or elimination rules, and in particular without the principle of explosion. Because we have replaced "empty" with "unknown", minimal double-negation does not have the same computational degeneracy as intuitionistic double-negation: there are at least as many different constructive proofs of (A -> ⊥) -> ⊥ as there are of A, and often more. For example, in general there is no proof in minimal logic of excluded middle A ∨ (A -> ⊥), but there is a proof of ((A ∨ (A -> ⊥)) -> ⊥) -> ⊥ (a term witnessing this appears in the sketch after this list). Indeed, minimal logic rather than intuitionistic logic really is the proper place to understand the computational content of the classical double-negation translations (something which I didn't fully appreciate until reading Jeremy Avigad's The computational content of classical arithmetic). As such, this form of double-negation is the first step towards understanding continuation-passing style.
  3. (A -> R) -> R, where R is any type of answers. We said that ⊥ above is a logical atom. In type-theoretic terms, it is a type variable. So to move to this form of double-negation, we simply instantiate the variable with a concrete type R. Now, logically we are beginning to stretch the meaning of "negation". In particular, nothing says that R is uninhabited—indeed if it is provably uninhabited, we are back to the computationally degenerate situation (1). But whether or not we accept the terminology, this form of double-negation is extremely important computationally, tied to the representation of control flow. A function of type A -> R is a continuation (transforming any value of type A into a result of type R), and so a term of type (A -> R) -> R is something which takes a continuation to a result—a computation with "control effects". The type (A -> R) -> R is so powerful that it comes with the following warning:
    Abuse of the Continuation monad can produce code that is impossible to understand and maintain.
    The "principle of double-confusion" I mentioned at the beginning then motivates the following pair of generalizations...
  4. (A -> R1) -> R2, where R1 and R2 are two (possibly distinct) types of answers. This form of double-negation comes up in the study of delimited control operators, which were originally motivated by the fact that the type (A -> R) -> R is not only too powerful but also not powerful enough. The type (A -> R1) -> R2 can be seen as a sort of Hoare triple {R1}A{R2}, which gives an intimation of its power.
  5. ∀α.(A -> α) -> α, a polymorphic type (and more generally, ∀α.(A -> Tα) -> Tα, where T is an arbitrary monad). It's easy to see that A ≡ ∀α.(A -> α) -> α is provable in second-order intuitionistic logic, but in fact this can also be interpreted as the Yoneda isomorphism in category theory, as sigfpe explained a few years ago. More generally, there is a Yoneda isomorphism between TA and ∀α.(A -> Tα) -> Tα for any monad T (and formally, a type isomorphism in System F + parametricity axioms). This isomorphism lies at the heart of Filinski's representation theorem that delimited control operators can be used to perform "monadic reflection".
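To make a few of these concrete, here is a small Haskell sketch (mine, not part of the original discussion; the names are invented for illustration): a term witnessing the minimal-logic double-negation of excluded middle from (2), the type of answer-type-changing continuations from (4), and the Yoneda-style maps from (5).

{-# LANGUAGE RankNTypes #-}

-- (2) excluded middle is not provable in minimal logic, but its double
-- negation is, for an abstract answer type bot:
dnExcludedMiddle :: (Either a (a -> bot) -> bot) -> bot
dnExcludedMiddle k = k (Right (\a -> k (Left a)))

-- (4) continuations whose answer type changes from r1 to r2:
newtype Cont2 r1 r2 a = Cont2 { runCont2 :: (a -> r1) -> r2 }

-- (5) the Yoneda maps between T a and ∀α.(a -> T α) -> T α:
toYoneda :: Monad t => t a -> (forall r. (a -> t r) -> t r)
toYoneda m k = m >>= k

fromYoneda :: Monad t => (forall r. (a -> t r) -> t r) -> t a
fromYoneda f = f return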
So that is a small cast of characters, biased towards the view of "negation as continuation" (which as I said, is not the only view). For more plots involving these different forms of double-negation (particularly (4) and (5)), you can read the paper I posted two weeks ago.

Sunday, May 2, 2010

Polarity and the logic of delimited continuations

Text: pdf
Code: twelf
Abstract:

Polarized logic is the logic of values and continuations, and their interaction through continuation-passing style. The main limitations of this logic are the limitations of CPS: that continuations cannot be composed, and that programs are fully sequentialized. Delimited control operators were invented in response to the limitations of classical continuation-passing. That suggests the question: what is the logic of delimited continuations?

We offer a simple account of delimited control, through a natural generalization of the classical notion of polarity. This amounts to breaking the perfect symmetry between positive and negative polarity in the following way: answer types are positive. Despite this asymmetry, we retain all of the classical polarized connectives, and can explain "intuitionistic polarity" (e.g., in systems like CBPV) as a restriction on the use of connectives, i.e., as a logical fragment. Our analysis complements and generalizes existing accounts of delimited control operators, while giving us a rich logical language through which to understand the interaction of control with monadic effects.

To appear at LICS 2010.

Writing this paper helped me sort out some things I'd been puzzled about for a while—I hope you also find it useful!

Saturday, April 24, 2010

"One of the first things to learn about category theory is that not everything in mathematics is a category"

Sorry, apologies, etc., for the long silence. I've been working on the final version of a paper that I plan to post here in a few days. This mini-post is just to highlight an exchange on MathOverflow that jibes with some things I've been thinking about lately. The quote in the title is from Mike Shulman in response to this question, "Why haven’t certain well-researched classes of mathematical object been framed by category theory?" I also like Reid Barton's response, in particular, "From the standpoint of higher category theory, categories (i.e., 1-categories) are just one level among many in a family of mathematical structures". This seems to me like an important lesson to keep in mind when applying category theory to programming languages and proof theory: to resist the desire to phrase everything as a category.

Tuesday, January 26, 2010

Twelf, my favorite general-purpose functional programming language

(Disclaimer: the title of this post is tongue-in-cheek—I do not really like to play favorites. But hopefully you will understand why I chose it: to mess with your head.)

Last week I was at POPL 2010 in Madrid. Most fun of course was catching up with friends and meeting new people, but there were also some excellent talks.

In this post I want to talk a bit about my favorite general-purpose functional programming language, Twelf. If that sentence gives you cognitive dissonance, please read on!

Really, what I am going to be describing is a very shallow embedding of a pure functional language within Twelf. For familiarity (though at a cost of expressiveness), I will limit our language to an ML-like, call-by-value fragment (with arbitrary recursive datatypes and recursive functions), rather than a fully polarized type theory. For fun, our language will also include Haskell-style typeclasses. Our two weapons will be traditional LF-style (dependently-typed) higher-order abstract syntax, and the defunctionalization trick I described a year ago. Don't worry, we won't be proving any theorems, only writing programs.

We begin by declaring the (meta-level) type of (object-level) types:

tp : type.                      % types
As we did informally in a previous post, we will distinguish three syntactic categories:
val : tp -> type.               % values
kon : tp -> tp -> type.         % "delimited" continuations
exp : tp -> type.               % expressions
(Note: there is nothing scary about delimited continuations—think of them as pure ML functions, or entailments, or morphisms in a category.) Expressions in our language are constructed out of values and continuations, as follows:
! : val A -> exp A.             %prefix 10 !.
$ : kon A B -> exp A -> exp B.  %infix right 11 $.
Because we are really restricting to types of positive polarity, values are built by applying constructors, while continuations are defined by pattern-matching on values. The latter principle is expressed (in defunctionalized form) by the apply function, a Twelf logic program:
apply : kon A B -> val A -> exp B -> type.
%mode apply +K +V -E.
Now, without knowing anything else about values and continuations, we can already give our language a generic (big-step) operational semantics:
eval : exp A -> val A -> type.
%mode eval +E -V.
- : eval (! V) V.
- : eval (F $ E) VR
        <- eval E V
        <- apply F V E'
        <- eval E' VR.
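For readers who don't speak Twelf, here is a rough Haskell transcription of the core language (a sketch of my own, not part of the development): it drops the object-level type indices and takes the apply relation as an explicit parameter, but it shows the same generic big-step strategy.
data Exp kon val = Ret val | App kon (Exp kon val)   -- "! V" and "K $ E"

eval :: (kon -> val -> Exp kon val) -> Exp kon val -> val
eval _     (Ret v)   = v
eval apply (App k e) = eval apply (apply k (eval apply e))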
And that's the core language! Next we can build up a nice library of datatypes and routines. We define some simple type constructors, together with their value constructors:
1 : tp.
<> : val 1.

0 : tp.

* : tp -> tp -> tp.             %infix right 12 *.
pair : val A -> val B -> val (A * B).

+ : tp -> tp -> tp.             %infix right 11 +.
inl : val A -> val (A + B).
inr : val B -> val (A + B).

=> : tp -> tp -> tp.            %infix right 10 =>.
fn : kon A B -> val (A => B).
We also introduce polymorphic and recursive types (note our reliance on HOAS and substitution):
∀ : (tp -> tp) -> tp.
Λ : ({x}val (A x)) -> val (∀ A).

rec : (tp -> tp) -> tp.
fold : val (A* (rec A*)) -> val (rec A*).
Finally we derive some familiar datatypes:
bool = 1 + 1.
true = inl <>.
false = inr <>.

maybe = [A] A + 1.
nothing : val (maybe A) = inr <>.
just : val A -> val (maybe A) = [V] inl V.

nat = rec [X] 1 + X.
z : val nat = fold (inl <>).
s : val nat -> val nat = [V] fold (inr V).

list = [A] rec [X] 1 + A * X.
nil : val (list A) = fold (inl <>).
cons : val A -> val (list A) -> val (list A) =
        [V] [Vs] fold (inr (pair V Vs)).
Just as we do in Haskell, we can define continuations (functions) by giving them a type declaration, and then a list of pattern-matching clauses. For example, here we define the boolean operations "and" and "or":
and : kon (bool * bool) bool.
- : apply and (pair true true) (! true).
- : apply and (pair true false) (! false).
- : apply and (pair false _) (! false).

or : kon (bool * bool) bool.
- : apply or (pair false false) (! false).
- : apply or (pair false true) (! true).
- : apply or (pair true _) (! true).
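(For comparison only, and not part of the Twelf development: the Haskell idiom being imitated here, a type signature followed by pattern-matching clauses.)
and' :: (Bool, Bool) -> Bool
and' (True,  True)  = True
and' (True,  False) = False
and' (False, _)     = False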
We can also define some continuation constructors in the same way. For example, we can define the beloved lambda:
λ : (val A -> exp B) -> kon A B.
- : apply (λ F) V (F V).
λ constructs a continuation from an expression with a free value variable, which is useful when we don't need to pattern-match on the argument. We also define some useful syntactic sugar:
let : exp A -> (val A -> exp B) -> exp B = [E] [F] λ F $ E.
In effect here we are using Twelf as a macro system for our embedded language. The following continuation constructors correspond to (sequent calculus) "left rules" for the various type constructors:
split : (val A -> val B -> exp C) -> kon (A * B) C.
- : apply (split E*) (pair V1 V2) (E* V1 V2).

case : kon A C -> kon B C -> kon (A + B) C.
- : apply (case K1 K2) (inl V) E
        <- apply K1 V E.
- : apply (case K1 K2) (inr V) E
        <- apply K2 V E.

funsplit : (kon A B -> exp C) -> kon (A => B) C.
- : apply (funsplit E*) (fn F) (E* F).

gensplit : (({x}val (A x)) -> exp C) -> kon (∀ A) C.
- : apply (gensplit E*) (Λ V) (E* V).

unfold : kon (A* (rec A*)) C -> kon (rec A*) C.
- : apply (unfold K) (fold V) E
        <- apply K V E.
Note we are not obligated to use these constructors to define continuations, i.e., they are not part of the language specification: they are just library routines. Here's an example of a recursive definition:
append : kon (list A * list A) (list A).
- : apply append (pair nil L2) (! L2).
- : apply append (pair (cons V L1) L2)
     (let (append $ ! pair L1 L2) [L'] ! cons V L').
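(Again just for comparison, and not part of the development: the direct-style Haskell function that these two clauses defunctionalize.)
append' :: ([a], [a]) -> [a]
append' ([],     l2) = l2
append' (v : l1, l2) = v : append' (l1, l2)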
Okay, hopefully by now you are convinced that in this style we can write more or less ordinary (effect-free) ML programs. Next we turn to typeclasses. Typeclasses are basically type-indexed (or type constructor-indexed) records, and so their encoding in Twelf follows the same pattern of defunctionalization (for the polarity-sensitive reader: because they are negative values). Here we introduce a few typeclasses familiar from Haskell:
EQ : tp -> type.
eq : EQ A -> kon (A * A) bool.

FUNCTOR : (tp -> tp) -> type.
fmap : FUNCTOR F -> kon A B -> kon (F A) (F B).

MONAD : (tp -> tp) -> type.
eta : MONAD T -> kon A (T A).
ext : MONAD T -> kon A (T B) -> kon (T A) (T B).

% Wadler's syntactic sugar...
return : MONAD T -> exp A -> exp (T A) = [M] [E] eta M $ E.
bind : MONAD T -> exp (T A) -> kon A (T B) -> exp (T B) = [M] [E] [K] ext M K $ E.
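To connect this with the familiar picture, here is a Haskell sketch (mine, not part of the post) of "typeclasses as type-indexed records", i.e., explicit dictionary passing; the names EqDict, MonadDict, eqInt and maybeMonad are invented for the illustration.
{-# LANGUAGE RankNTypes #-}

newtype EqDict a = EqDict { runEq :: (a, a) -> Bool }

data MonadDict t = MonadDict
    { etaD :: forall a. a -> t a
    , extD :: forall a b. (a -> t b) -> t a -> t b }

-- an "instance" is simply a value of the dictionary type:
eqInt :: EqDict Int
eqInt = EqDict (\(m, n) -> m == n)

maybeMonad :: MonadDict Maybe
maybeMonad = MonadDict { etaD = Just, extD = \k -> maybe Nothing k }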
We can instantiate EQ in various ways:
1eq : EQ 1.
- : apply (eq 1eq) (pair <> <>) (! true).

sumeq : EQ A -> EQ B -> EQ (A + B).
- : apply (eq (sumeq EqA EqB)) (pair (inl Va1) (inl Va2))
        (eq EqA $ ! pair Va1 Va2).
- : apply (eq (sumeq EqA EqB)) (pair (inl Va1) (inr Vb2))
        (! false).
- : apply (eq (sumeq EqA EqB)) (pair (inr Vb1) (inl Va2))
        (! false).
- : apply (eq (sumeq EqA EqB)) (pair (inr Vb1) (inr Vb2))
        (eq EqB $ ! pair Vb1 Vb2).

paireq : EQ A -> EQ B -> EQ (A * B).
- : apply (eq (paireq EqA EqB)) (pair (pair Va1 Vb1) (pair Va2 Vb2))
       (let (eq EqA $ ! pair Va1 Va2) [x]
          let (eq EqB $ ! pair Vb1 Vb2) [y]
          and $ ! pair x y).

nateq : EQ nat.
- : apply (eq nateq) (pair z z) (! true).
- : apply (eq nateq) (pair z (s _)) (! false).
- : apply (eq nateq) (pair (s _) z) (! false).
- : apply (eq nateq) (pair (s N1) (s N2)) (eq nateq $ ! pair N1 N2).
Likewise FUNCTOR:
idf : FUNCTOR ([A]A).
- : apply (fmap idf K) V E
     <- apply K V E.

prodf : FUNCTOR F -> FUNCTOR G -> FUNCTOR ([A] F A * G A).
- : apply (fmap (prodf F G) K) (pair V1 V2)
     (let (fmap F K $ ! V1) [x] let (fmap G K $ ! V2) [y] ! pair x y).

expf : FUNCTOR ([A]B => A).
- : apply (fmap expf K) (fn K') (! fn (λ [x] K $ K' $ ! x)).

listf : FUNCTOR ([A]list A).
- : apply (fmap listf K) nil (! nil).
- : apply (fmap listf K) (cons V VS) (let (K $ ! V) [x]
                                  let (fmap listf K $ ! VS) [xs]
                                  ! cons x xs).
And MONAD:
idm : MONAD ([a]a).
- : apply (eta idm) V (! V).
- : apply (ext idm K) V E
     <- apply K V E.

maybem : MONAD ([a]maybe a).
- : apply (eta maybem) V (! just V).
- : apply (ext maybem K) V (case K (λ [_] ! nothing) $ ! V).
And now an example where we run fmap on a pair of lists, using Twelf's logic engine to automatically infer the appropriate instance of FUNCTOR:
%query 1 *
  eval (fmap F (λ [x] ! s x) $
            ! pair (cons z (cons z nil)) (cons (s z) (cons (s z) nil))) R.
---------- Solution 1 ----------
R = pair (cons (s z) (cons (s z) nil)) (cons (s (s z)) (cons (s (s z)) nil));
F = prodf listf listf.
____________________________________________
And finally, a slightly more involved example:
cond : exp C -> exp C -> kon bool C =
    [E1] [E2] case (λ [_] E1) (λ [_] E2).

lookup : EQ A -> val A -> kon (list (A * B)) (maybe B).
- : apply (lookup Eq V) nil (! nothing).
- : apply (lookup Eq V) (cons (pair V' VE) Vs)
     (cond (! just VE) (lookup Eq V $ ! Vs) $ 
        (eq Eq $ ! pair V V')).

one = s z. two = s one. three = s two. four = s three.
%query 1 *
  eval (bind _ (lookup _ one $ ! cons (pair z three)
                  (cons (pair one four) (cons (pair two z) nil)))
          (λ [x] return _ (! s x))) R.
---------- Solution 1 ----------
R = just (s four).
____________________________________________
And that's the gist of it.

If you are still reading this, perhaps you will be inspired and use our favorite general-purpose functional programming language to write, say, a webserver?

Monday, December 21, 2009

The History of Categorical Logic

I'm currently making my way through Jean-Pierre Marquis and Gonzalo E. Reyes' 116-page-long The History of Categorical Logic, and it is fascinating. Someone needs to make it into a comic book. [HT: This Week's Finds]

Monday, November 16, 2009

What is the logic of ML values?

Before approaching the question, "What is the logic of ML values?", I first want to explain why there is more subtlety to the question than is often assumed.

(Warning: this is a long post on something which most people probably care very little about. On the other hand, there are 19 google hits for "logic changed my life".)

In general, how does one interpret the question, "What is the logic of X?", for some programming language X? A standard response is that the logical theorems are exactly the inhabited types, i.e., types τ such that there is a closed term t:τ. But what are the "terms"? The answer is obvious enough in simple formal systems where there is only one typing judgment, and so we can say things like, "The inhabited types of simply-typed lambda calculus are exactly the theorems of minimal implicational logic". Yet, real programming languages usually have many different typing judgments, since programs are composed out of many different syntactic objects. In ML, in particular, a program includes some let-bound values and expressions, some function definitions, and a result expression (in addition to datatype definitions, module definitions, etc., which are classified by different kinds of "types").

(In this post, "ML" is meant in a slightly fuzzy but essentially unambiguous way, as a call-by-value language with effects. To settle a concrete interpretation, I will be giving examples in Standard ML of NJ, which notably provides the control operator callcc. "Value" is meant in the standard operational sense, i.e., a value is a fully evaluated ML expression. We can similarly ask, "What is the logic of Haskell values?", but that is a different, trickier question, to which I'm still not sure of the answer.)

When answering the question, "What is the logic of ML?" (or "What is the logic of Haskell?"), the standard approach is to gloss over these different syntactic categories, and assume we are talking about expressions. This makes some sense, insofar as this is the largest syntactic category: all values and functions are also expressions. And expressions also have easy-to-understand principles of composition. For example, the rule of application—from two expressions e₁:σ→τ and e₂:σ, we can form the expression (e₁ e₂):τ—witnesses the logical principle of modus ponens. Whereas from two values v₁:σ→τ and v₂:σ, the application (v₁ v₂):τ is no longer a value.

Yet, I believe it is a mistake to define "the" logic of a programming language in this way, without realizing there is finer structure. That is why I phrased the question, "What is the logic of ML values?", rather than full-stop, "What is the logic of ML?" And so how does the logic of ML values differ from the logic of ML expressions?

Well, it is often claimed that ML (and likewise Haskell) defines an inconsistent logic. This is a statement about the logic of expressions. For example, non-terminating expressions such as

fun loop() = loop()
inhabit every type, as do exception-raising expressions, such as
fun fail() = raise (Fail "inconsistent!?")
(Note: the expressions here are loop() and fail(). Here and below, I assign names to expressions by declaring them as thunks.) But none of these are values. Indeed, evaluation of the first expression will never yield a result,
- loop();
(* waiting a long time... *)
whereas the second will return immediately, but with an exception rather than a value:
- fail();
uncaught exception Fail [Fail: inconsistent!?]
raised at: stdIn:2.3-3.5
(There are some additional complaints about the value restriction when evaluating these polymorphic expressions, but for our purposes we can ignore them here.) In fact, the logic of ML values is consistent, as we can see by defining the empty type:
datatype void = Void of void
There are no values of type void, since the only way to construct one is by already having one. Note, though, that all bets are off "under a lambda"—we can still build a value inhabiting any function type σ→τ, for example
fun fnfail() = fn _ => raise (Fail "inconsistent!?")
Let's try to be a bit more precise about what happens before we reach that point.

Another folk theorem about "the logic of ML" is that insofar as it is consistent (i.e., if you avoid things like unrestricted recursion and exceptions), with the addition of control operators it becomes classical. We can import the relevant SML/NJ library and see what they mean:

- open SMLofNJ.Cont;
opening SMLofNJ.Cont
type 'a cont = 'a ?.cont
val callcc : ('a cont -> 'a) -> 'a
val throw : 'a cont -> 'a -> 'b
val isolate : ('a -> unit) -> 'a cont
(* ... *)
As first observed by Griffin, the type of callcc corresponds logically to the classical principle of Peirce's Law. We can make this a bit easier to see by defining another control operator letcc, which essentially does the same thing as callcc but with first-class ML functions, rather than values of the cont type:
- fun letcc f = callcc (fn k => f (fn x => throw k x));
val letcc = fn : (('a -> 'b) -> 'a) -> 'a
Using letcc, we can derive the law of excluded middle, ∀α.α ∨ ¬α. To represent this type, we first introduce a type constructor for disjunctions:
datatype ('a,'b) either = Left of 'a | Right of 'b
Now we can witness excluded middle as follows:
fun excluded_middle() : ('a,'a -> void) either =
    letcc (fn k => Right (fn x => k (Left x)))

So what happens when we attempt to evaluate this expression? (To avoid complaints about the value restriction, we instantiate the type variable 'a at some arbitrary type a.)

- excluded_middle() : (a,a -> void) either;
val it = Right fn : (a,a -> void) either
And now, in contrast to what happened above with the evaluation of loop() and fail(), here we actually get a value!

So can we say that the logic of ML values really is classical? Well, we evidently can under the "bag of tautologies" view of logic. But that is not necessarily the most useful view.

If we look up at the above ML session, a shockingly anti-classical principle is already staring back down at us. After evaluating excluded_middle(), we got back this value:

val it = Right fn : (a,a -> void) either
That is, not only is this a proof of α ∨ ¬α, but in fact it contains a proof of the right side of the disjunct, ¬α. This is an instance of the intuitionistic disjunction principle, that a proof of a disjunction is a proof of one of the two disjuncts.

The punchline, I would say, is that the logic of ML values is not unsound, and not classical, but simply constructive. What is an ML value of type τ? Just an intuitionistic proof of τ—but in a non-empty context. This context includes the current continuation of type ¬τ, as well as all of the other effects that can be invoked underneath a lambda, i.e., in values of functional type.

Okay, so what have we brushed over here? Well first, the restriction that effects are only invoked underneath lambdas can actually be enforced by CPS translation. Functions σ→τ are interpreted as ¬(σ∧¬τ), where ¬(-) denotes minimal negation, i.e., ¬τ = τ→β, for some fixed parameter β, the return type. Effectful operations are encoded as things that return β. Second, I said that a value of (a CPS translated) type τ is an intuitionistic proof of τ (in a non-empty context of effects), but are all such intuitionistic proofs values? Well, no, because some proofs involve "detours". To be concrete, if M₁ and M₂ are proofs of σ and τ, then π₁(M₁, M₂) is a proof of σ—yet, #1 ("hello", "world") is typically not considered to be a value (it evaluates to "hello"). (Some people like to call expressions like #1 ("hello", "world") "valuable".) Third, to say that there is a "logic" of ML values, we really do have to explain what its composition principles are, and not just which principles fail. I might revisit some of these points in another post.
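
To make the first point concrete, here is the translation of function types written out in Haskell notation (a sketch of my own, with b standing for the fixed answer type β):

type Not b a = a -> b

-- the CPS reading of a source function type σ→τ as ¬(σ∧¬τ):
type Fun b s t = Not b (s, Not b t)

-- for example, the identity function at this interpreted type:
idFun :: Fun b s s
idFun (x, k) = k x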

Finally, the whole idea of a "logic of ML values" is very close to the idea of the realizability interpretation of ML. Some people believe that realizability is a distinct and very different alternative to Curry-Howard as a constructive interpretation of logic. My point with this post was mainly to suggest that Curry-Howard need not exclude realizability-like interpretations, once we realize (so to speak) that there is more than one syntactic category of proofs.