Thursday, June 28, 2007

A topology of cut-elimination

One of the central ideas of Ludics is an orthogonality relationship between proofs and refutations, which says, essentially, that they can be "successfully" cut against each other. Recall that Gentzen's cut rule is a method of composing derivations:
Γ ⊢ Δ, A      Γ, A ⊢ Δ
──────────────────────
        Γ ⊢ Δ
The rule gives a derivation of Γ ⊢ Δ from derivations of weaker sequents, using A as an intermediary. Gentzen proved that the cut rule is unnecessary, i.e., that it is always possible to obtain a direct derivation of Γ ⊢ Δ without going through the middle-man. However, it turned out that this process of cut-elimination was itself of extreme interest.

In "Locus Solum", Girard tells us to focus our attention on the following instance of cut:
⊢ A      A ⊢
────────────
      ⊢
In words: "From a proof of A and a refutation of A, derive contradiction." But isn't that a vacuous instance of cut? Logic is supposed to be consistent, so how could we ever have both a proof and a refutation of the same proposition? Well, Girard's solution is to add a single rule, the daimon, that can conclude anything from no premises:
──────
Γ ⊢ Δ
Since the logic becomes inconsistent, it is now entirely possible to have both a proof and a refutation of the same proposition. But isn't that...crazy? Perhaps Girard inhaled a bit too much mustard before coming up with this stuff?

But actually, if we think of cut-elimination in terms of the Curry-Howard correspondence, the daimon makes a good deal of sense. A proof of A corresponds to a value of type A (in the ordinary programming languages sense), while a refutation of A corresponds to a continuation accepting type A. (Strictly speaking, in Ludics proofs don't correspond to values in the ordinary sense, but for now we can ignore the difference.) Now, what can a continuation K do with a value V:A? Well, V might itself contain suspended continuations, and K could call any of those with some values. But another possibility is that K simply terminates the computation. And that's when it calls the daimon. In other words, daimon is just French for "done". Thus when Girard defines a proof and a refutation to be orthogonal when their cut normalizes to the daimon, this is another way of saying that we can throw a value to a continuation and get a terminating result (notation: V⊥K).
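The value/continuation reading can be made concrete with a small sketch. This is a toy encoding (not Ludics proper): a continuation is a finite table mapping values to "done", with absent entries modeling unspecified (possibly non-terminating) behavior, and V⊥K holds exactly when the table yields "done". The names `make_cont` and `orthogonal` are inventions for illustration.

```python
def make_cont(table):
    """A continuation given by a finite table; unspecified inputs diverge (None)."""
    return lambda v: table.get(v)

def orthogonal(v, k):
    """V ⊥ K: cutting value V against continuation K normalizes to the daimon."""
    return k(v) == "done"

# The boolean continuation [t ↦ done]: terminates on t, unspecified on f.
k_t = make_cont({"t": "done"})
print(orthogonal("t", k_t))  # True: the cut ends with the daimon
print(orthogonal("f", k_t))  # False: behavior on f is unspecified
```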

So what does this have to do with topology? For any type A, let's collect all of its values into a set XA = {V | V:A}. Now, let K be any A-accepting continuation. We can use K to pick out a subset of XA, namely the set of values on which K terminates. That is, we build the set K= {V:A | V⊥K}. And then we define a topology ΩA on XA, generated by all of the basic open sets K.
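For a finite type, the basic open set carved out by a continuation can be computed directly. Continuing the toy encoding above (continuations as finite tables, termination meaning the table yields "done"; `k_set` is a name assumed here, not standard terminology):

```python
def k_set(cont, values):
    """The basic open set K = {V : A | V ⊥ K}:
    the subset of values on which the continuation cont terminates."""
    return frozenset(v for v in values if cont.get(v) == "done")

# For example, the continuation [t ↦ done] carves out {t} from the booleans.
print(k_set({"t": "done"}, {"t", "f"}))  # frozenset({'t'})
```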

Let's try to see how this works for the type of booleans, with set of values X2 = {t, f}. We can describe any boolean continuation as essentially fitting one of the following four recipes:

[t|f ↦ done]

[t ↦ done]   [f ↦ done]

[]
The top recipe ([t|f ↦ done]) describes a continuation which, given either t or f, simply returns. The next recipe on the left ([t ↦ done]) describes a continuation which returns on t, but whose behavior is unspecified (and thus possibly non-terminating) on f. Similarly with [f ↦ done]. Finally, [] is the boolean continuation with totally unspecified behavior. It is easy to see the basic open set corresponding to each continuation:


{t,f}

{t}   {f}

{}


And so Ω2 is simply the discrete topology. In general, though, the topology ΩA will be more complicated.
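This small computation can be checked mechanically. In the toy encoding (continuations as finite tables), we build the four basic open sets, close them under unions, and confirm that every subset of X2 is open:

```python
from itertools import combinations

X2 = frozenset({"t", "f"})
# The four boolean continuations as finite tables (absent entry = unspecified).
conts = [{"t": "done", "f": "done"},  # [t|f ↦ done]
         {"t": "done"},               # [t ↦ done]
         {"f": "done"},               # [f ↦ done]
         {}]                          # []
# Each continuation K carves out the basic open set {V | V ⊥ K}.
basic = [frozenset(v for v in X2 if c.get(v) == "done") for c in conts]
# Close the basic opens under unions to get the topology Ω2.
omega2 = {frozenset().union(*combo)
          for r in range(len(basic) + 1)
          for combo in combinations(basic, r)}
# Ω2 contains every subset of X2, i.e. it is the discrete topology.
powerset = {frozenset(s) for r in range(len(X2) + 1)
            for s in combinations(X2, r)}
print(omega2 == powerset)  # True
```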

Fact: ΩA is T0.

Expanding the definition of ΩA and of the separation axiom, this means that for any two "syntactically distinct" values of type A (omitting the precise definition), there is some continuation which terminates on one but not the other. Or in programming language terminology, T0 says that syntactic inequality implies contextual inequivalence.

Fact: ΩA is not always T1.
Counter-example: Consider Ω¬2, the topology on boolean continuations treated as values. We treat a continuation as a value by wrapping it in con, so for example con([t ↦ done]) is a value of type ¬2. Now, we can again summarize the behavior of any ¬2-continuation, this time with five possibilities:

[con(k) ↦ done]

[con(k) ↦ k(t)]   [con(k) ↦ k(f)]

[con(k) ↦ k(t)∥k(f)]

[]


The top continuation simply ignores its argument and returns. The next two call k with a single boolean, either t or f. The continuation on the third line calls k with both t and f. And finally, the behavior of the continuation [] is, again, unspecified. Now let us use the shorthand {t,f}, {t}, {f}, {}, to refer to the four previously-defined boolean continuations in the evident way. Then the above ¬2-continuations generate the following basic open sets:


{{t,f},{t},{f},{}}

{{t,f},{t}}   {{t,f},{f}}

{{t,f}}

{}



Then, for example, {t} and {t,f} cannot be separated, since the latter is in every open set containing the former. Another way of putting this: in any context in which the value con([t ↦ done]) is acceptable, the value con([t|f ↦ done]) is also acceptable; the former supplies a continuation guaranteed to terminate only on t, the latter one that will terminate on any boolean. It might also be worth noting that the two values con([t ↦ done]) and con([f ↦ done]) are separable, but not by disjoint open neighborhoods.
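The failure of T1 can also be verified mechanically in the toy encoding, identifying each ¬2-value con(k) with the set of booleans on which k terminates (the shorthand from above, a naming convention assumed here):

```python
from itertools import combinations

# Values of type ¬2: the four boolean continuations, named by the set
# of booleans on which they terminate.
X = [frozenset(s) for s in ({"t", "f"}, {"t"}, {"f"}, set())]

# The five ¬2-continuations, as termination predicates on a wrapped k.
neg2_conts = [
    lambda k: True,                   # [con(k) ↦ done]
    lambda k: "t" in k,               # [con(k) ↦ k(t)]
    lambda k: "f" in k,               # [con(k) ↦ k(f)]
    lambda k: "t" in k and "f" in k,  # [con(k) ↦ k(t)∥k(f)]
    lambda k: False,                  # []
]
basic = [frozenset(v for v in X if c(v)) for c in neg2_conts]
opens = {frozenset().union(*combo)
         for r in range(len(basic) + 1)
         for combo in combinations(basic, r)}

tf, t = frozenset({"t", "f"}), frozenset({"t"})
# T1 fails: every open set containing {t} also contains {t,f}.
print(all(tf in U for U in opens if t in U))  # True
```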

Okay! Hopefully that's a reasonable sketch of the basics of the topological interpretation in Ludics. Stay tuned for more...


Jason said...

Is this topological interpretation original, or already "buried in the mustard" somewhere, if you catch my meaning? It's quite interesting.

Noam said...

Yes, it is buried in the mustard -- though from an abstract, topological perspective, the ideas are almost certainly much older. I've been talking with Steve Awodey lately, and he suggested that this could be an instance of Stone duality.

jcreed said...

Aha, that sounds likely.

Rob said...

I don't think I understand enough topology, so let me see if I get this correct: T_0 effectively means that "given A and B, I can make a continuation that terminates on exactly one of A and B," whereas T_1 effectively means that "given A and B, I can make a continuation such that A terminates and B does not." Is that a correct understanding?