*Bi-Polarism Theory and Mathematics is Inconsistent*. And while you're in the holiday spirit, check out The Twelf Days of Christmas.

## Wednesday, December 24, 2008

## Thursday, December 4, 2008

## Tuesday, November 18, 2008

### lies and updates

A few other updates:

- This past weekend there was a small workshop at CMU, Bernaysfest, dedicated to the life and work of Paul Bernays, a collaborator of David Hilbert. I snuck in, and heard a bunch of interesting talks on the history of proof theory. (Bernd Buldt's talk, listed as TBA on the schedule, was on "Mathematical practice and platonism: A phenomenological perspective", and quite interesting once I understood what he was trying to do. He wants to give an explanation for why so many mathematicians are platonists, why mathematical objects feel "real" to them, by looking at mathematics as a social process.) The workshop also brought Bill Tait, who last week gave a standing-room-only talk about cut-elimination in predicative systems (Richard Zach blogs about it).
- A month ago I gave a fluffy talk on "walking the way of duality" at the Student Seminar Series. Here are the slides, but I'm keeping the video to myself, thank you very much.
- My refinement types paper was accepted to PLPV, as was Dan and Bob's paper, so we'll be going to Savannah.

## Thursday, July 24, 2008

### Refinement types and computational duality

One lesson learned painfully over the past twenty years is the perilous interaction of Curry-style refinement typing with evaluation order and side-effects. This led eventually to the value restriction on polymorphism in ML, as well as, more recently, to similar artifacts in prototype refinement type systems for ML with intersection and union types. For example, some of the traditional subtyping laws for unions and intersections are unsound in the presence of effects, while union-elimination requires an *evaluation context restriction* in addition to the value restriction on intersection-introduction.

This paper reexamines the interaction between refinement types and evaluation order from the perspective of computational duality. Building on recent work on the Curry-Howard interpretation of *focusing proofs* as pattern-matching programs written in infinitary, higher-order syntax, I give a simple explanation of intersection and union types in the presence of effects, reconstructing phenomena such as the value restriction and differing subtyping laws for call-by-value and call-by-name, not as ad hoc artifacts, but indeed as logical theorems. However, this abstract account using infinitary syntax has the drawback that refinement checking is undecidable. To address this, I show how to systematically construct a finitary syntax via defunctionalization, and then give an algorithmic refinement checking system. Parallel to the text, a formalization in the dependently-typed functional language Agda is described, both for the sake of clarifying these ideas, and also because it was an important guide in developing them. As one example, the Agda encoding split very naturally into an intrinsic ("Church") view of well-typed programs, and an extrinsic ("Curry") view of refinement typing for those programs.

Code available here.

## Tuesday, April 22, 2008

### Today is election day...

Second: over the weekend I met an old friend from college, Ben Schmidt, who now studies American history at Princeton. Later I was looking at his website, and saw that he has a very interesting paper about using a variation of Google's PageRank algorithm to rank PhD programs:

*Ranking Doctoral Programs by Placement: A New Method*

Benjamin M. Schmidt and Matthew M. Chingos

Forthcoming in PS: Political Science & Politics

Most existing rankings of graduate programs rely on some measure of faculty quality, whether it be reputation (as in the National Research Council and US News rankings), honors (prizes, membership in learned societies, etc.), or research quality and quantity (as in citation studies and publication counts). We propose a ranking that focuses instead on the success of a program’s graduates. By using stochastic matrices, it is possible to create a system in which programs essentially “vote” for each other by hiring graduates of other programs as professors in their own department. This system allows us to create an objective, results-oriented measure that is well suited to measure the quality of departments whose graduates aspire to academic positions. The rankings correlate well with reputational ranking systems, and include a per capita measure that recognizes the accomplishments of smaller but high quality programs.
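The "voting" mechanism in the abstract is essentially PageRank-style power iteration on a stochastic matrix, so it can be sketched in a few lines. Everything below is illustrative only: the three-program hiring matrix and the names `hire`, `step`, and `rank` are my own inventions, not taken from the paper.

```python
# Hypothetical hiring data: hire[i][j] = fraction of program i's faculty
# who earned their PhD at program j (each row sums to 1).
hire = [
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.3, 0.3, 0.4],
]

def step(v, m):
    """One power-iteration step: each program passes its current score
    to the programs that trained its faculty (v := v . m)."""
    n = len(m)
    return [sum(v[i] * m[i][j] for i in range(n)) for j in range(n)]

def rank(m, iters=200):
    """Iterate from a uniform distribution toward the stationary
    distribution; the result is each program's 'vote share'."""
    n = len(m)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = step(v, m)
    return v

scores = rank(hire)
```

Since each row of the matrix sums to 1, every step preserves the total score, so the result remains a probability distribution over programs.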

## Monday, April 14, 2008

### a paper

*Focusing on binding and computation*

Dan Licata, Noam Zeilberger, and Robert Harper

To appear at LICS 08

Variable binding is a prevalent feature of the syntax and proof theory of many logical systems. In this paper, we define a programming language that provides intrinsic support for both representing and computing with binding. This language is extracted as the Curry-Howard interpretation of a focused sequent calculus withtwokinds of implication, of opposite polarity. Therepresentational arrowextends systems of definitional reflection with a notion of scoped inference rules, which are used to represent binding. On the other hand, the usualcomputational arrowclassifies recursive functions defined by pattern-matching. Unlike many previous approaches, both kinds of implication are connectives in a single logic, which serves as a rich logical framework capable of representing inference rules that mix binding and computation.

Comments appreciated!

## Saturday, April 12, 2008

### just wait a second...

*proofs* in mathematics (e.g., "Trust me, I have a proof of the Riemann Hypothesis, just wait a second..."). Why the double standard? Do we shackle ourselves by using a fragment of mathematics that disallows (we hope) non-terminating proofs? Or have we been too hasty to accept non-terminating programs? The latter stance is embraced in total functional programming. But I think the former is also not super crazy. With a more interactive notion of proof, potential non-termination would not be so dangerous.

**Z:** "I have a proof of the Riemann Hypothesis, can you add it to the Book?"

**E:** "Of course, but first can you tell me how you handle this tricky case?"

*E presents tricky case to Z*

**Z:** "Sure, no problem, this case is easily handled by, um, some math, er, ..."

*Z stalls*

*E waits patiently, but keeps the Book firmly shut*

In some sense, terminating proofs are just a time-saving mechanism for referees. It would indeed be a shame if Z wasted all of E's time with this pseudoproof of RH. On the other hand, E could be doing other things at the same time, such as drinking coffee, or running SETI@Home.

## Thursday, March 13, 2008

### anglo-franco photos

In London, my sister's cat Arras:

A group of angry Serbian protesters in London:

Chango Spasiuk!

The students at the Ecole Polytechnique in Lozère draw some neat murals:

Paris has a successful bike-sharing program:

The French are *serious* about their fast food.

The town of Antony, where I stayed with Kaustuv and Vivek, had a delightful Sunday market.

Un canard:

Indian fast food, near the Gare du Nord:

My gracious host Kaustuv:

Jean-Yves, after his first lecture on the Geometry of Interaction V:

Neither the subject nor the cameraman was prepared for this photo, hence Paul-André came out a bit blurry. In real life, he is quite sharp!

## Sunday, February 24, 2008

### transatlantic updates

This week I'll be meeting with Paul-André Mellies, and giving talks at PPS and LIX.

## Wednesday, January 30, 2008

### Fun with polarity and unicode

> *Dependent pattern-matching and unicode: a killer combination.*
>
> *I will try to post more about this in the next few days.*

"Few" is subjective.

Before getting to the Agda code that Dan and I wrote, though, I wanted to talk a little bit about a neat, simple idea I learned recently. In this paper, I argued that one way to rationally reconstruct the linear logic notion of polarity was to turn to so-called proof-theoretic semantics, and in particular Michael Dummett's distinction between *verificationist* and *pragmatist* justifications of the logical rules. A verificationist meaning-theory supposes that connectives are defined by their introduction rules, and uses this definition to justify their elimination rules. A pragmatist meaning-theory supposes the opposite: connectives are defined by their elimination rules, and this justifies their introduction rules. Well, polarity lets us refine this picture: the *positive* connectives are defined by their introduction rules, the *negative* connectives by their elimination rules.

But another (more fun?) way to think about polarity is through *game*-theoretic semantics. Think of the truth of a proposition A as a game between Prover and Refuter: A is valid iff Prover has a winning strategy. So the meaning of a connective is essentially a set of rules for playing a game. A few examples:

- [A ⊕ B] : This is a game where Prover begins by selecting either A or B, after which play proceeds by the rules of that game.
- [A & B] : This is a game where Refuter begins by selecting either A or B, after which play proceeds by the rules of that game.
- [A⊥] : Play A, but with the roles of Prover and Refuter swapped.
- [0] : Prover begins but has no moves, so Refuter automatically wins.
- [⊤] : Refuter begins but has no moves, so Prover automatically wins.

The polarity of a connective/formula is just the answer to the important question: "Which player begins?" (By the way, this also gives another explanation for *focus* and *inversion*. Focus is just the fact that after making a move in the game A ⊕ B, Prover doesn't have to give up her turn to Refuter, e.g., A could itself be the game A₁ ⊕ A₂. Inversion is the fact that the second player really has no choice in the matter, e.g., in the game A ⊕ B, Refuter has to account for Prover playing either A or B.)
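The two-player reading above is easy to make concrete. Here is a toy sketch (my own encoding, not from any paper mentioned here) for the finite connectives: a game is a tree labeled by who moves next, and Prover's winning strategies are computed by the obvious recursion.

```python
# Games for the finite polarized connectives. The tag records which
# player moves next; an empty list of options means the mover is stuck.
ZERO = ("prover", [])            # 0: Prover to move, no moves -> Refuter wins
TOP = ("refuter", [])            # ⊤: Refuter to move, no moves -> Prover wins

def oplus(a, b):                 # A ⊕ B: Prover selects a subgame
    return ("prover", [a, b])

def wth(a, b):                   # A & B: Refuter selects a subgame
    return ("refuter", [a, b])

def dual(a):                     # A⊥: swap the roles of the two players
    return ("dual", [a])

def prover_wins(game):
    """Does Prover have a winning strategy in this (finite) game?"""
    tag, subs = game
    if tag == "prover":          # Prover moves: one winning option suffices
        return any(prover_wins(g) for g in subs)
    if tag == "refuter":         # Refuter moves: every option must be winning
        return all(prover_wins(g) for g in subs)
    return not prover_wins(subs[0])  # dual: Prover wins A⊥ iff she loses A
```

For instance, `prover_wins(oplus(ZERO, TOP))` holds because Prover selects ⊤, while `prover_wins(wth(ZERO, TOP))` fails because Refuter selects 0; and `dual(ZERO)` behaves like ⊤, as expected.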

The reason I like this (I think pretty old) idea is that not only is it a good icebreaker at parties, but it also suggests a natural generalization, which Samson Abramsky (a bit tongue-in-cheek) calls Socially Responsive, Environmentally Friendly Logic. Rather than treating only 2-player games between Prover and Refuter, we can encode n-player games using n distinct polarities. For example, Abramsky defines the connective A ⊕_{α} B as the game where α begins play, selecting between A and B. I think this is a really cool and promising generalization.

But anyways, today let's stick with the boring, bipolar world. I'm going to talk about how to give a focused encoding of polarized intuitionistic logic in Agda, which is simultaneously a deep embedding of a language with explicit evaluation order and pattern-matching. For those playing along, the full code is available here. Let's start with some boilerplate:

```agda
open import Data.List
open import Data.String renaming (_++_ to concat)

module Main where
```

Now we define the syntax of polarized formulas, as two mutually inductive datatypes.

```agda
module Types where

  Atom : Set
  Atom = String

  mutual
    data Pos : Set where
      X⁺  : Atom -> Pos
      ↓   : Neg -> Pos
      1⁺  : Pos
      _*_ : Pos -> Pos -> Pos
      0⁺  : Pos
      _+_ : Pos -> Pos -> Pos
      2⁺  : Pos
      ℕ   : Pos

    data Neg : Set where
      X-  : Atom -> Neg
      ↑   : Pos -> Neg
      ⊤   : Neg
      _→_ : Pos -> Neg -> Neg
      _&_ : Neg -> Neg -> Neg

  infixr 14 _→_
  infixr 15 _&_
  infixr 15 _+_
  infixr 16 _*_
```

Positive formulas include positive atoms and shifted negative formulas, as well as strict products and sums. (We would call the units 1 and 0, but unfortunately Agda doesn't let us overload numerical constants.) For fun, we also defined the type of booleans 2⁺, and the type of natural numbers ℕ. Negative formulas include negative atoms and shifted positive formulas, as well as functions and lazy products. Observe the essential use of Unicode!

Now we define hypotheses (positive atoms or arbitrary negative formulas) and conclusions (negative atoms or arbitrary positive formulas), as well as some simple machinery for inspecting contexts. Contexts are basically lists of hypotheses, but since inversion introduces multiple hypotheses at a time, it is more convenient to work with lists of *lists* of hypotheses. Hence we define LCtx (linear context) and UCtx (unrestricted context), and the membership relationships ∈ and ∈∈. Proofs of membership are isomorphic to De Bruijn indices.

```agda
module Contexts where

  open Types public

  data Hyp : Set where
    HP : Atom -> Hyp
    HN : Neg -> Hyp

  data Conc : Set where
    CP : Pos -> Conc
    CN : Atom -> Conc

  LCtx : Set
  LCtx = List Hyp

  UCtx : Set
  UCtx = List LCtx

  data _∈_ : Hyp -> LCtx -> Set where
    f0 : {h : Hyp} {Δ : LCtx} -> h ∈ (h :: Δ)
    fS : {h h' : Hyp} {Δ : LCtx} -> h' ∈ Δ -> h' ∈ (h :: Δ)

  data _∈∈_ : Hyp -> UCtx -> Set where
    s0 : {h : Hyp} {Δ : LCtx} {Γ : UCtx} -> h ∈ Δ -> h ∈∈ (Δ :: Γ)
    sS : {h : Hyp} {Δ : LCtx} {Γ : UCtx} -> h ∈∈ Γ -> h ∈∈ (Δ :: Γ)
```

Okay, now we can start the fun stuff. First we define patterns:

```agda
module Patterns where

  open Contexts public

  [_] : {A : Set} -> A -> List A
  [_] x = x :: []

  -- Pat⁺ Δ A⁺ = linear entailment "Δ ⇒ A⁺"
  data Pat⁺ : LCtx -> Pos -> Set where
    Px    : {x : Atom} -> Pat⁺ [ HP x ] (X⁺ x)
    Pu    : {A⁻ : Neg} -> Pat⁺ [ HN A⁻ ] (↓ A⁻)
    P<>   : Pat⁺ [] 1⁺
    Ppair : {Δ1 Δ2 : LCtx} {A B : Pos}
            -> Pat⁺ Δ1 A -> Pat⁺ Δ2 B -> Pat⁺ (Δ1 ++ Δ2) (A * B)
    Pinl  : {Δ : LCtx} {A B : Pos} -> Pat⁺ Δ A -> Pat⁺ Δ (A + B)
    Pinr  : {Δ : LCtx} {A B : Pos} -> Pat⁺ Δ B -> Pat⁺ Δ (A + B)
    Ptt   : Pat⁺ [] 2⁺
    Pff   : Pat⁺ [] 2⁺
    Pz    : Pat⁺ [] ℕ
    Ps    : {Δ : LCtx} -> Pat⁺ Δ ℕ -> Pat⁺ Δ ℕ

  -- Pat⁻ Δ A⁻ c = linear entailment "Δ ; A⁻ ⇒ c"
  data Pat⁻ : LCtx -> Neg -> Conc -> Set where
    Dx   : forall {x} -> Pat⁻ [] (X- x) (CN x)
    Dp   : forall {A⁺} -> Pat⁻ [] (↑ A⁺) (CP A⁺)
    Dapp : {Δ1 Δ2 : LCtx} {A⁺ : Pos} {B⁻ : Neg} {c : Conc}
           -> Pat⁺ Δ1 A⁺ -> Pat⁻ Δ2 B⁻ c -> Pat⁻ (Δ2 ++ Δ1) (A⁺ → B⁻) c
    Dfst : {Δ : LCtx} {A⁻ B⁻ : Neg} {c : Conc}
           -> Pat⁻ Δ A⁻ c -> Pat⁻ Δ (A⁻ & B⁻) c
    Dsnd : {Δ : LCtx} {A⁻ B⁻ : Neg} {c : Conc}
           -> Pat⁻ Δ B⁻ c -> Pat⁻ Δ (A⁻ & B⁻) c
```

Observe that the constructors for Pat⁺ are just *value* patterns. The constructors for Pat⁻ are somewhat less familiar: they correspond to patterns for elimination contexts, which are sometimes called *spines*.

Finally, we define the generic focused sequent calculus:

```agda
module Logic where

  open Patterns public

  mutual
    -- RFoc Γ A⁺ = right-focus "Γ ⊢ [A⁺]"
    data RFoc : UCtx -> Pos -> Set where
      Val : forall {Γ A⁺ Δ} -> Pat⁺ Δ A⁺ -> SatCtx Γ Δ -> RFoc Γ A⁺

    -- LInv Γ c₀ c = left-inversion "Γ ; c₀ ⊢ c"
    data LInv : UCtx -> Conc -> Conc -> Set where
      Id- : forall {Γ X-} -> LInv Γ (CN X-) (CN X-)
      Fn  : forall {Γ A⁺ c}
            -> ({Δ : LCtx} -> Pat⁺ Δ A⁺ -> UnFoc (Δ :: Γ) c)
            -> LInv Γ (CP A⁺) c

    -- RInv Γ h = right-inversion "Γ ⊢ h;"
    data RInv : UCtx -> Hyp -> Set where
      Susp : forall {Γ A⁻}
             -> ({Δ : LCtx} {c : Conc} -> Pat⁻ Δ A⁻ c -> UnFoc (Δ :: Γ) c)
             -> RInv Γ (HN A⁻)
      Id+  : forall {Γ X⁺} -> ((HP X⁺) ∈∈ Γ) -> RInv Γ (HP X⁺)

    -- LFoc Γ A⁻ c = left-focus "Γ ; [A⁻] ⊢ c"
    data LFoc : UCtx -> Neg -> Conc -> Set where
      Spine : forall {Δ A⁻ c0 c Γ}
              -> Pat⁻ Δ A⁻ c0 -> SatCtx Γ Δ -> LInv Γ c0 c
              -> LFoc Γ A⁻ c

    -- UnFoc Γ c = unfocused "Γ ⊢ c"
    data UnFoc : UCtx -> Conc -> Set where
      Ret : forall {Γ A⁺} -> RFoc Γ A⁺ -> UnFoc Γ (CP A⁺)
      App : forall {Γ A⁻ c} -> ((HN A⁻) ∈∈ Γ) -> LFoc Γ A⁻ c -> UnFoc Γ c

    -- SatCtx Γ Δ = conjoined inversion "Γ ⊢ Δ;"
    data SatCtx : UCtx -> LCtx -> Set where
      Sub : forall {Γ Δ}
            -> ({h : Hyp} -> (h ∈ Δ) -> (RInv Γ h))
            -> SatCtx Γ Δ
```

Some examples follow (translated from this paper). Here's where Agda's second most important feature (after Unicode support) gets to show off: dependent pattern-matching! We reuse Agda's coverage checker to ensure that our object-level functions are total.

```agda
module Examples where

  open Logic

  NilCtx : {Γ : UCtx} -> SatCtx Γ []
  NilCtx = Sub f
    where f : forall {h} -> h ∈ [] -> RInv _ h
          f ()

  ⌊_⌋ : {Δ : LCtx} -> Pat⁺ Δ 2⁺ -> Pat⁺ [] 2⁺
  ⌊_⌋ Ptt = Ptt
  ⌊_⌋ Pff = Pff

  tt : {Γ : UCtx} -> RFoc Γ 2⁺
  tt = Val Ptt NilCtx

  ff : {Γ : UCtx} -> RFoc Γ 2⁺
  ff = Val Pff NilCtx

  not : (Γ : UCtx) -> RInv Γ (HN (2⁺ → ↑ 2⁺))
  not Γ = Susp not*
    where
      not* : {Δ : LCtx} {c : Conc} -> Pat⁻ Δ (2⁺ → ↑ 2⁺) c -> UnFoc (Δ :: Γ) c
      not* (Dapp Ptt Dp) = Ret ff
      not* (Dapp Pff Dp) = Ret tt

  and : (Γ : UCtx) -> RInv Γ (HN (2⁺ → 2⁺ → ↑ 2⁺))
  and Γ = Susp and*
    where
      and* : {Δ : LCtx} {c : Conc} -> Pat⁻ Δ (2⁺ → 2⁺ → ↑ 2⁺) c -> UnFoc (Δ :: Γ) c
      and* (Dapp Ptt (Dapp Ptt Dp)) = Ret tt
      and* (Dapp Ptt (Dapp Pff Dp)) = Ret ff
      and* (Dapp Pff (Dapp Ptt Dp)) = Ret ff
      and* (Dapp Pff (Dapp Pff Dp)) = Ret ff

  App' : {Γ : UCtx} {A⁺ B⁺ : Pos} {c : Conc}
         -> (HN (A⁺ → ↑ B⁺) ∈∈ Γ) -> (RFoc Γ A⁺) -> (LInv Γ (CP B⁺) c)
         -> UnFoc Γ c
  App' u (Val p σ) F = App u (Spine (Dapp p Dp) σ F)

  table1 : (Γ : UCtx) -> RInv Γ (HN (↓ (2⁺ → ↑ 2⁺) → ↑ (2⁺ * 2⁺)))
  table1 Γ = Susp table1*
    where
      table1* : {Δ : LCtx} {c : Conc}
                -> Pat⁻ Δ (↓ (2⁺ → ↑ 2⁺) → ↑ (2⁺ * 2⁺)) c -> UnFoc (Δ :: Γ) c
      table1* (Dapp Pu Dp) =
        App' (s0 f0) tt (Fn (\b1 ->
          App' (sS (s0 f0)) ff (Fn (\b2 ->
            Ret (Val (Ppair ⌊ b1 ⌋ ⌊ b2 ⌋) NilCtx)))))
```

## Monday, January 14, 2008

### Back from POPL, forward to AGDA

An *oblivious* sorting algorithm can be verified correct simply by testing it on arrays of 0s and 1s. Voigtlaender proved an analogous result for parallel prefix sort: correctness can be verified simply by testing arrays of 0s, 1s, and 2s. The neat thing is that the result can be obtained almost as a "free theorem", i.e. by proving a few simple combinatorial properties and applying *parametricity*. This paper also seems to me in a similar spirit as Martín Escardó's seemingly impossible functional programs and Kevin Watkins' 98-line proof of Conway's Cosmological Theorem: very clever uses of Haskell to reduce a seemingly infinite search space to a finite one. These sorts of results make me feel optimistic about the future.
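The sorting-network case makes this finite-testing idea concrete. Below is a small sketch (in Python rather than Haskell, and of my own devising): `NET4` is the standard five-comparator sorting network on four wires, and the 0-1 principle says that checking it on all 2⁴ zero/one inputs certifies correctness on arbitrary inputs.

```python
from itertools import product

# The standard 5-comparator sorting network on 4 wires.
NET4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_net(net, xs):
    """Run an oblivious comparator network: each comparator (i, j)
    swaps the values on wires i and j when they are out of order."""
    xs = list(xs)
    for i, j in net:
        if xs[i] > xs[j]:
            xs[i], xs[j] = xs[j], xs[i]
    return xs

def sorts_all_bits(net, n):
    """Exhaustively check the network on all 2^n zero/one inputs; by
    the 0-1 principle this certifies it sorts any input of length n."""
    return all(
        apply_net(net, bits) == sorted(bits)
        for bits in product([0, 1], repeat=n)
    )
```

`sorts_all_bits(NET4, 4)` passes after only 16 tests, while a single comparator on three wires fails (e.g. on the input 0, 1, 0), so the finite check really is discriminating.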

Another thing that makes me feel optimistic: Agda2. Dan Licata and I had a coding session today to try to translate focusing-with-higher-order-rules into Agda, and it seems almost perfectly suited. Dependent pattern-matching *and* unicode: a killer combination. I will try to post more about this in the next few days.