Sunday, November 18, 2007

lucidity vs ludicity, and translating one esoteric language into a slightly less esoteric one

One of the things I often wonder about is how to say things that are intelligible. Too frequently, I lapse into jargon—sometimes even on-the-spot-invented jargon—and only realize this after the damage has been done. An example: the previous blog post, which was probably incomprehensible to all but...well, probably to all. I promise I will try hard to avoid doing this in the future.

But on the topic of making things less inscrutable—and not on the topic of Girard!, though keep your eyes on the Girard Reading Group Wiki—another thing I've been thinking about is how to translate Martin-Löf's "judgment vs proposition" distinction (as spelled out in the Siena Lectures) into the language of category theory. At first, the judgment vs proposition distinction may seem pedantic: okay, one can distinguish between a well-formed proposition "A" and the assertion "A true" that it is true...but so what? Well, the distinction helps get dependent type theory off the ground, since well-formedness is defined mutually with truth (the dependent quantifier ∏x:A.B is a well-formed proposition if and only if A is well-formed and B is well-formed under the assumption x:A). More mundanely, a separate notion of judgment is obviously useful if we want to refute propositions as well as assert them, using the judgment "A false" in addition to "A true".
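To see the mutual dependence concretely, here is a toy well-formedness checker (a hypothetical mini-syntax for illustration, not Martin-Löf's actual rules), where checking that a proposition is well-formed calls back into a typing judgment:

```python
# Propositions: ("base",) | ("pi", x, A, B) | ("el", t), where t is a variable.
# wf and has_type are mutually entangled: Πx:A.B is well-formed only if
# B is well-formed under the additional assumption x:A.

def wf(ctx, A):
    if A[0] == "pi":                       # Πx:A.B
        _, x, dom, cod = A
        return wf(ctx, dom) and wf(ctx + [(x, dom)], cod)
    if A[0] == "el":                       # a proposition mentioning a term
        return has_type(ctx, A[1])
    return A[0] == "base"

def has_type(ctx, t):                      # t is just a variable name here
    return any(x == t for x, _ in ctx)

N = ("base",)
assert wf([], ("pi", "x", N, ("el", "x")))   # Πx:N. P(x) is well-formed
assert not wf([], ("el", "x"))               # x unbound: ill-formed
```

The point of the sketch is only that `wf` cannot be defined without `has_type`, and the context threaded through both is exactly the quantifier's assumption x:A.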

Well, this is actually not quite so obvious. Why not just assert ¬A? For a philosophical explanation of why not, Greg Restall's essay "Multiple Conclusions" is a nice read. From a proof/type-theoretic perspective, the reason not to identify "A false" with "¬A" is that we need the separate judgment in order to obtain "good compositionality" principles—this is an informal but very general law gleaned by Frank Pfenning and collaborators over years of refining the "judgmental method", to which the paper "A judgmental reconstruction of modal logic" probably serves as the most complete introduction. Unfortunately, though, my impression is that these principles are still not very widely accepted or understood.

I think part of the problem is the seemingly obscure philosophical origins of the judgmental method (although its application is practically motivated), and so what I would like is a translation guide into the language of category theory (an order of magnitude less obscure, though granted still sometimes considered esoteric). I think there must be such a translation, since the whole point of the judgmental method (at least in my mind) is obtaining compositionality principles.

Does anyone know if the judgment vs proposition distinction has been explored before in category theory? As a simple example of a system with two judgments that should be possible to explain categorically, I propose the bidirectional lambda calculus with implication and conjunction (described in Section 3.1 here).
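For readers who haven't seen a bidirectional system before, here is a minimal operational sketch (a hypothetical encoding, not the exact system from the paper): lambdas and pairs are checked against a given type, while variables, applications, and projections synthesize one, with type annotations mediating between the two modes.

```python
# Types: ("arrow", A, B) | ("and", A, B) | ("base", name)

def check(ctx, e, A):
    """Check term e against type A."""
    if e[0] == "lam":                      # λx.e checks against A → B
        assert A[0] == "arrow"
        _, x, body = e
        return check({**ctx, x: A[1]}, body, A[2])
    if e[0] == "pair":                     # (e1, e2) checks against A ∧ B
        assert A[0] == "and"
        return check(ctx, e[1], A[1]) and check(ctx, e[2], A[2])
    return synth(ctx, e) == A              # mode switch: synthesize and compare

def synth(ctx, r):
    """Synthesize a type for term r."""
    if r[0] == "var":
        return ctx[r[1]]
    if r[0] == "app":                      # r e: r synthesizes A → B, e checks at A
        A = synth(ctx, r[1])
        assert A[0] == "arrow" and check(ctx, r[2], A[1])
        return A[2]
    if r[0] in ("fst", "snd"):             # projections synthesize from A ∧ B
        A = synth(ctx, r[1])
        assert A[0] == "and"
        return A[1] if r[0] == "fst" else A[2]
    if r[0] == "ann":                      # annotation (e : A) mediates the modes
        assert check(ctx, r[1], r[2])
        return r[2]

P, Q = ("base", "P"), ("base", "Q")
swap = ("lam", "x", ("pair", ("snd", ("var", "x")), ("fst", ("var", "x"))))
assert check({}, swap, ("arrow", ("and", P, Q), ("and", Q, P)))
```

Note how the two judgments "e checks against A" and "r synthesizes A" are distinct but defined by mutual recursion; the categorical question is what structure this pair of judgments corresponds to.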

Tuesday, November 6, 2007

Location: PHI Bar

(Only the location matters.)

Today we had the second (or really 1.5th) meeting of the Girard Reading Group, a.k.a. the Locus Solum Drinking Group. Many more people than I expected! It seems that obscure proof theory is a sexy topic. We're trying to go through Locus Solum systematically, although it's hard because the writing style makes digression so tempting. Fortunately, it's a diverse enough group of people that we'd better stay disciplined, or else anarchy will ensue (as in the first meeting). Rob put up a wiki for us to pool whatever nuggets/maggots of wisdom we manage to extract. Stay tuned there for updates! Here I'll talk about a few of today's punchlines (these will probably go on the wiki soon).

I think it was a pretty productive day. We made it through the basics of designs-dessins, and started to talk about designs-desseins (what's the differance between them?). Basically, designs-dessins are focused sequent calculus proofs in affine, polarized linear logic, where formulas have been replaced by loci. I still don't have a good sense of the motivation for locativity, but the definition is very concrete: a locus ξ is just a finite sequence of natural numbers. Appending a number ("bias") i corresponds to taking an immediate subformula (ξ*i is an "immediate sublocus" of ξ). There are two kinds of sequents ("pitchforks"): ⊢ Λ (positive pitchfork) and ξ ⊢ Λ (negative pitchfork). Here Λ is a finite set of loci.
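The locative apparatus is concrete enough to write down directly (a hypothetical encoding for illustration):

```python
# A locus is a finite sequence of naturals, modelled here by a tuple;
# appending a bias i yields the immediate sublocus ξ*i.
def sub(xi, i):
    return xi + (i,)

xi = (0, 2)
assert sub(xi, 1) == (0, 2, 1)   # (0,2)*1 is an immediate sublocus of (0,2)

# Pitchforks: a positive pitchfork  ⊢ Λ  has nothing on the left,
# while a negative pitchfork  ξ ⊢ Λ  has exactly one locus there.
positive = (None, frozenset({(0,), (1,)}))   # ⊢ (0), (1)
negative = (xi, frozenset({(0,)}))           # (0,2) ⊢ (0)
```

Nothing deep here, of course; the point is just that loci carry their "address" in their very syntax, which is what locativity trades on.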

Now, Girard doesn't give much intuition for what these judgments mean, but my running interpretation is as follows: a positive pitchfork asserts that the loci in Λ can't all be false, i.e. that if all the loci in Λ are false we have a contradiction; a negative pitchfork asserts that if all the loci in Λ are false, then ξ is also false. (I'm not really sure what it means for a locus to be false, as opposed to a formula, but let's run with it...) Observe this is just an inversion of the usual interpretation of intuitionistic sequents Γ ⊢ A and Γ ⊢∙. However, the system is not really "dual intuitionistic", because the notions "false" and "contradiction" used here are intuitionistic, and really there is a hidden notion of "direct truth" buried within the rules for proving pitchforks.

Let's take Girard's example "pre-located" formula A = (P ⊕ Q) ⊗ R. There are two proper positive rules (right-rules) for A:

P ⊢ Γ    R ⊢ Δ           Q ⊢ Γ    R ⊢ Δ
——————————————           ——————————————
   ⊢ Γ, Δ, A                ⊢ Γ, Δ, A

Under our interpretation, to prove ⊢ Γ, Δ, A we must show that the formulas in Γ, Δ, A can't all be false. These rules attempt to do this by focusing on A, i.e. they try to show that A in particular is true (still under the assumption that Γ and Δ are false). To prove A, we must prove both P ⊕ Q and R. To prove a disjunction we must prove either of the disjuncts, while proving the negation of a formula is the same as refuting it. Hence to prove A (under falsehood assumptions Γ, Δ), we must either refute P and refute R (with an arbitrary splitting of the assumptions), or else refute Q and refute R. This is exactly how we read the premises of the two rules. On the other hand, see how in one step we are able to go all the way from "show Γ, Δ, A can't all be false" to "show A is true if Γ, Δ are false" to "show either (P is false if Γ is false and R is false if Δ is false) or (Q is false if Γ is false and R is false if Δ is false)"? This is the focalization property, and it's why we didn't explicitly have to mention "truth" when we glossed the two kinds of sequents. (Editorial remark: if the truth judgment is hidden in there, though, why not show it? I.e., use a third kind of pitchfork "⊢ Λ; ξ", where ξ is already in focus. This is what I do in my paper.)
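The focalization step can even be computed: decomposing a positive formula under focus yields exactly the possible bundles of premises. A toy function (hypothetical syntax, illustrating the idea rather than Girard's definitions):

```python
# Formulas: ("atom", name) | ("plus", P, Q) | ("tensor", P, Q)

def focus(A):
    """Return the possible premise-bundles for proving A under focus."""
    if A[0] == "atom":
        return [[A]]            # one premise: the atom moves left of the pitchfork
    if A[0] == "plus":          # P ⊕ Q: choose a disjunct
        return focus(A[1]) + focus(A[2])
    if A[0] == "tensor":        # P ⊗ R: prove both, combining each pair of bundles
        return [l + r for l in focus(A[1]) for r in focus(A[2])]

P, Q, R = ("atom", "P"), ("atom", "Q"), ("atom", "R")
A = ("tensor", ("plus", P, Q), R)
# Exactly the two rules above: refute {P, R}, or refute {Q, R}.
assert focus(A) == [[P, R], [Q, R]]
```

Each bundle corresponds to one proper positive rule, with one premise per atom in the bundle; the nondeterminism in the list is the choice between the two rules.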

Designs-dessins are not a perfect notation for proofs, however. The problem is that not all of the loci in a pitchfork will ever be placed in focus. These loci can be split up arbitrarily among the different premises of a derivation---so we can have two dessins which apply essentially the same rules, but just differ in how they throw away these unused loci. The situation here is more or less as it is for ordinary intuitionistic logic. For any sequent calculus proof of Γ ⊢ A, we can get a "new" proof by weakening the derivation to Γ, Δ ⊢ A. However, the sequence of rules applied in both proofs is the same. Natural deduction/simply-typed lambda calculus get rid of these artificial distinctions between proofs: we can give one lambda term corresponding to both sequent calculus proofs (it simply won't mention the extra hypotheses in Δ). So designs-desseins are something like a lambda calculus for focused proofs. Yes, we should expect them to look something like spine form and pattern-matching...but we might have to squint pretty hard...

Tuesday, July 31, 2007

Synthetic topology

I've started reading about Martín Escardó's "synthetic topology", which seems closely related to (but not quite the same as) the topological interpretation of ludics. The basic idea is that rather than topologizing a language by first specifying a partial order on data, then constructing open sets using that order, and finally showing that all definable functions are continuous—the approach in traditional domain theory—instead one starts at the end and simply declares continuous functions to be those definable in the language. An open set U of a datatype D is now defined by a continuous function χU : D → S from D into the Sierpinski space S. In classical topology, the Sierpinski space is simply the two point set {⊤, ⊥}, with open sets {{⊤, ⊥}, {⊤}, ∅}. In synthetic topology, S is a datatype with the single element ⊤; a computation of type S can either evaluate to ⊤, or else diverge, which is represented by ⊥. So the Sierpinski space essentially represents the observable results of computation, where we only distinguish between termination and non-termination. (In ludics, these two observable results are called daimon and faith.)

So again, an open set U of D is defined by U = χU⁻¹(⊤), where χU : D → S is definable in the underlying programming language. It follows immediately that U is closed under contextual equivalence, i.e. if d∈U and d ≈ d' then d'∈U. However, it is not necessarily the case that the collection of open sets is closed under (infinite or even finite) unions (as in classical topology). For this to be the case, given any two functions f,g : D → S, we have to be able to construct an h : D → S such that h(d) = ⊤ iff either f(d) = ⊤ or g(d) = ⊤. In other words, our language needs a sort of "parallel-or" construct.
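Here is a rough operational sketch of such a parallel-or, using threads to race two possibly-divergent computations (an illustration of the idea only, not Escardó's construction):

```python
import threading
import time

def parallel_or(f, g, x):
    """h(x) terminates (yielding ⊤, modelled as True) iff f(x) or g(x) terminates."""
    done = threading.Event()
    def race(fn):
        fn(x)            # may diverge; returning at all counts as producing ⊤
        done.set()
    for fn in (f, g):
        threading.Thread(target=race, args=(fn,), daemon=True).start()
    done.wait()          # blocks forever if both branches diverge
    return True

def top(x):              # the computation that immediately yields ⊤
    return True

def diverge(x):          # stand-in for a divergent computation of type S
    time.sleep(3600)

assert parallel_or(top, diverge, 0) is True
```

The sequentiality obstruction is visible here: without some such racing construct in the language, evaluating f(x) first commits us to diverging whenever f does, even if g(x) would have terminated.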

I haven't gotten much further yet, but this looks interesting. Also, it seems that Escardó isn't just interested in applications to programming semantics, but also in examining more traditional topological questions through synthetic topology.

A tutorial: "Synthetic topology of data types and classical spaces" by Martín Escardó.

A paper: "Operational domain theory and topology of sequential programming languages" by Martín Escardó and Weng Kin Ho.

Thursday, June 28, 2007

A topology of cut-elimination

One of the central ideas of Ludics is an orthogonality relationship between proofs and refutations, which says, essentially, that they can be "successfully" cut against each other. Recall that Gentzen's cut rule is a method of composing derivations:
Γ ⊢ Δ, A      Γ, A ⊢ Δ
——————————————————————
        Γ ⊢ Δ
The rule gives a derivation of Γ ⊢ Δ from derivations of weaker sequents, using A as an intermediary. Gentzen proved that the cut rule is unnecessary, i.e. that it is always possible to obtain a direct derivation of Γ ⊢ Δ without going through the middle-man. However it turned out that this process of cut-elimination was of extreme interest.

In "Locus Solum", Girard tells us to focus our attention on the following instance of cut:
⊢ A      A ⊢
————————————
     ⊢
In words: "From a proof of A and a refutation of A, derive contradiction." But isn't that a vacuous instance of cut? Logic is supposed to be consistent, so how could we ever have both a proof and a refutation of the same proposition? Well, Girard's solution is to add a single rule, the daimon, that can conclude anything from no premises:
————— (daimon)
Γ ⊢ Δ
Since the logic becomes inconsistent, it is now entirely possible to have both a proof and a refutation of the same proposition. But isn't that...crazy? Perhaps Girard inhaled a bit too much mustard before coming up with this stuff?

But actually, if we think of cut-elimination in terms of the Curry-Howard correspondence, the daimon makes a good deal of sense. A proof of A corresponds to a value of type A (in the ordinary programming languages sense), while a refutation of A corresponds to a continuation accepting type A. (Strictly speaking, in Ludics proofs don't correspond to values in the ordinary sense, but for now we can ignore the difference.) Now, what can a continuation K do with a value V:A? Well, V might itself contain suspended continuations, and K could call any of those with some values. But another possibility is that K simply terminates the computation. And that's when it calls the daimon. In other words, daimon is just French for "done". Thus when Girard defines a proof and a refutation to be orthogonal when their cut normalizes to the daimon, this is another way of saying that we can throw a value to a continuation and get a terminating result (notation: V⊥K).

So what does this have to do with topology? For any type A, let's collect all of its values into a set XA = {V | V:A}. Now, let K be any A-accepting continuation. We can use K to pick out a subset of XA, namely the set of values on which K terminates. That is, we build the set K = {V:A | V⊥K}. And then we define a topology ΩA on XA, generated by all of the basic open sets K.

Let's try to see how this works for the type of booleans, with set of values X2 = {t, f}. We can describe any boolean continuation as essentially fitting one of the following four recipes:

        [t|f ↦ done]

[t ↦ done]        [f ↦ done]

            []
The top recipe ([t|f ↦ done]) describes a continuation which, given either t or f, simply returns. The next recipe on the left ([t ↦ done]) describes a continuation which returns on t, but whose behavior is unspecified (and thus possibly non-terminating) on f. Similarly with [f ↦ done]. Finally, [] is the boolean continuation with totally unspecified behavior. It is easy to see the basic open set corresponding to each continuation:


     {t, f}

  {t}      {f}

      {}


And so Ω2 is simply the discrete topology. In general, though, the topology ΩA will be more complicated.
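This little computation can be checked mechanically (a hypothetical modelling: each continuation is identified with its set of terminating values, i.e. with its basic open set):

```python
X2 = frozenset({"t", "f"})

# The four boolean continuations, each identified with the set of
# values on which it terminates.
basic_opens = [frozenset({"t", "f"}),   # [t|f ↦ done]
               frozenset({"t"}),        # [t ↦ done]
               frozenset({"f"}),        # [f ↦ done]
               frozenset()]             # []

def generate(space, basics):
    """Close the basic opens under union and intersection (finite space)."""
    opens = set(basics) | {frozenset(), frozenset(space)}
    changed = True
    while changed:
        changed = False
        for u in list(opens):
            for v in list(opens):
                for w in (u | v, u & v):
                    if w not in opens:
                        opens.add(w)
                        changed = True
    return opens

omega2 = generate(X2, basic_opens)
# Every subset of X2 is open: the discrete topology.
assert omega2 == {frozenset(), frozenset({"t"}), frozenset({"f"}), X2}
```

(For a finite space, closing under binary unions and intersections is enough; infinitary unions add nothing new.)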

Fact: ΩA is T0.

Expanding the definition of ΩA and of the separation axiom, this means that for any two "syntactically distinct" values of type A (omitting the precise definition), there is some continuation which terminates on one but not the other. Or in programming language terminology, T0 says that syntactic inequality implies contextual inequivalence.

Fact: ΩA is not always T1.
Counter-example: Consider Ω¬2, the topology on boolean continuations treated as values. We treat a continuation as a value by wrapping it in con, so for example con([t ↦ done]) is a value of type ¬2. Now, we can again summarize the behavior of any ¬2-continuation, this time with five possibilities:

[con(k) ↦ done]

[con(k) ↦ k(t)] [con(k) ↦ k(f)]

[con(k) ↦ k(t)∥k(f)]

[]


The top continuation simply ignores its argument and returns. The next two call k with a single boolean, either t or f. The continuation on the third line calls k with both t and f. And finally, the behavior of the continuation [] is, again, unspecified. Now let us use the shorthand {t,f}, {t}, {f}, {}, to refer to the four previously-defined boolean continuations in the evident way. Then the above ¬2-continuations generate the following basic open sets:


   {{t,f}, {t}, {f}, {}}

{{t,f}, {t}}    {{t,f}, {f}}

        {{t,f}}

          {}



Then, for example, {t} and {t,f} cannot be separated, since the latter is in every open set containing the former. Another way of putting this: in any context where the value con([t ↦ done]) is acceptable, the value con([t|f ↦ done]) is also acceptable; the former supplies a continuation guaranteed to terminate only on t, the latter one that will terminate on any boolean. It might also be worth noting that the two values con([t ↦ done]) and con([f ↦ done]) are separable, but not by open neighborhoods.
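The failure of T1 can also be checked mechanically (again a hypothetical modelling, with each continuation identified with its set of terminating values; here the four boolean continuations themselves are the points of the space):

```python
# Points of ¬2: the four boolean continuations, named by their K-sets.
space = {"{t,f}", "{t}", "{f}", "{}"}

# Basic open sets generated by the five ¬2-continuations.
basic_opens = [frozenset(space),               # [con(k) ↦ done]
               frozenset({"{t,f}", "{t}"}),    # [con(k) ↦ k(t)]
               frozenset({"{t,f}", "{f}"}),    # [con(k) ↦ k(f)]
               frozenset({"{t,f}"}),           # [con(k) ↦ k(t)∥k(f)]
               frozenset()]                    # []

# Not T1: every basic open containing {t} also contains {t,f}
# (and unions/intersections preserve this).
assert all("{t,f}" in u for u in basic_opens if "{t}" in u)
# But T0 holds for {t} vs {f}: [con(k) ↦ k(t)] contains one and not the other.
assert any(("{t}" in u) != ("{f}" in u) for u in basic_opens)
```

The specialization order is visible in the lattice: {t,f} lies in every open set that {t} does, which is exactly the "more terminating continuations are acceptable in more contexts" reading above.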

Okay! Hopefully that's a reasonable sketch of the basics of the topological interpretation in Ludics. Stay tuned for more...

Sunday, June 10, 2007


The premise of this blog is first: wow, I have a blog! Second, I feel that at this point in my life I need to spend less time thinking up crackpot ideas, and more time writing up crackpot ideas.

Additional introduction: I am a grad student at Carnegie Mellon, with broad interests in programming languages and proof theory, logic, categories and philosophy of language. More specifically, my thesis work has started to take shape after a couple years spent trying to make sense of nonsense, and I might devote a few blog entries to this exegesis (hence the blog title). At the moment, many other people are pursuing similar ideas, some from very different backgrounds, and I might also spend some time trying to tie the threads. Otherwise, as I said, there will be lots of personal crackpot ideas, and alliteration.