> I will try to post more about this in the next few days.
"Few" is subjective.
Before getting to the Agda code that Dan and I wrote, though, I wanted to talk a little bit about a neat, simple idea I learned recently. In this paper, I argued that one way to rationally reconstruct the linear logic notion of polarity was to turn to so-called proof-theoretic semantics, and in particular Michael Dummett's distinction between verificationist and pragmatist justifications of the logical rules. A verificationist meaning-theory supposes that connectives are defined by their introduction rules, and uses this definition to justify their elimination rules. A pragmatist meaning-theory supposes the opposite: connectives are defined by their elimination rules, and this justifies their introduction rules. Well, polarity lets us refine this picture: the positive connectives are defined by their introduction rules, the negative connectives by their elimination rules.
But another (more fun?) way to think about polarity is through game-theoretic semantics. Think of a proposition A as defining a game between Prover and Refuter: A is valid iff Prover has a winning strategy. So the meaning of a connective is essentially a set of rules for playing a game. A few examples:
- [A ⊕ B] : This is a game where Prover begins by selecting either A or B, after which play proceeds by the rules of that game.
- [A & B] : This is a game where Refuter begins by selecting either A or B, after which play proceeds by the rules of that game.
- [A⊥] : Play A, but with the roles of Prover and Refuter swapped.
- [0] : Prover begins but has no moves, so Refuter automatically wins.
- [⊤] : Refuter begins but has no moves, so Prover automatically wins.
The polarity of a connective/formula is just the answer to the important question: "Which player begins?" (By the way, this also gives another explanation for focus and inversion. Focus is just the fact that after making a move in the game A ⊕ B, Prover doesn't have to give up her turn to Refuter, e.g., A could itself be the game A₁ ⊕ A₂. Inversion is the fact that the second player really has no choice in the matter, e.g., in the game A ⊕ B, Refuter has to account for Prover playing either A or B.)
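As a tiny worked example, consider the game 0 ⊕ ⊤. Prover moves first and must select either 0 or ⊤. Selecting 0 would put her in a game where she is to move but has no moves, so Refuter would win; selecting ⊤ puts Refuter in exactly that position, so Prover wins. Choosing ⊤ is therefore a winning strategy for Prover, and 0 ⊕ ⊤ is valid.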
The reason I like this (I think pretty old) idea is that, besides being a good icebreaker at parties, it suggests a natural generalization, which Samson Abramsky (a bit tongue-in-cheek) calls Socially Responsive, Environmentally Friendly Logic. Rather than treating only 2-player games between Prover and Refuter, we can encode n-player games using n distinct polarities. For example, Abramsky defines the connective A ⊕α B as the game where player α begins play, selecting between A and B. I think this is a really cool and promising generalization.
But anyway, today let's stick with the boring, bipolar world. I'm going to talk about how to give a focused encoding of polarized intuitionistic logic in Agda, which is simultaneously a deep embedding of a language with explicit evaluation order and pattern-matching. For those playing along, the full code is available here. Let's start with some boilerplate:
open import Data.List
open import Data.String renaming (_++_ to concat)
module Main where
Now we define the syntax of polarized formulas, as two mutually inductive datatypes.
module Types where
  Atom : Set
  Atom = String

  mutual
    data Pos : Set where
      X⁺ : Atom -> Pos
      ↓ : Neg -> Pos
      1⁺ : Pos
      _*_ : Pos -> Pos -> Pos
      0⁺ : Pos
      _+_ : Pos -> Pos -> Pos
      2⁺ : Pos
      ℕ : Pos

    data Neg : Set where
      X- : Atom -> Neg
      ↑ : Pos -> Neg
      ⊤ : Neg
      _→_ : Pos -> Neg -> Neg
      _&_ : Neg -> Neg -> Neg

  infixr 14 _→_
  infixr 15 _&_
  infixr 15 _+_
  infixr 16 _*_
Positive formulas include positive atoms and shifted negative formulas, as well as strict products and sums. (We would call the units 1 and 0, but unfortunately Agda doesn't let us overload numerical constants.) For fun, we also define the type 2⁺ of booleans and the type ℕ of natural numbers. Negative formulas include negative atoms and shifted positive formulas, as well as functions and lazy products. Observe the essential use of Unicode!
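To make the syntax concrete, here is a small example (just a sketch, assuming the Types module above is open; the names bool2pair and susp-bool2pair are made up for illustration): the negative type of functions from booleans to strict pairs of booleans, together with its suspension as a positive formula.

-- the negative type of functions from booleans to pairs of booleans
bool2pair : Neg
bool2pair = 2⁺ → ↑ (2⁺ * 2⁺)

-- the same type, suspended as a positive formula
susp-bool2pair : Pos
susp-bool2pair = ↓ bool2pair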
Now we define hypotheses (positive atoms or arbitrary negative formulas) and conclusions (negative atoms or arbitrary positive formulas), as well as some simple machinery for inspecting contexts. Contexts are basically lists of hypotheses, but since inversion introduces multiple hypotheses at a time, it is more convenient to work with lists of lists of hypotheses. Hence we define LCtx (linear contexts) and UCtx (unrestricted contexts), together with the membership relations ∈ and ∈∈. Proofs of membership are isomorphic to De Bruijn indices.
module Contexts where
  open Types public

  data Hyp : Set where
    HP : Atom -> Hyp
    HN : Neg -> Hyp

  data Conc : Set where
    CP : Pos -> Conc
    CN : Atom -> Conc

  LCtx : Set
  LCtx = List Hyp

  UCtx : Set
  UCtx = List LCtx

  data _∈_ : Hyp -> LCtx -> Set where
    f0 : {h : Hyp} {Δ : LCtx} -> h ∈ (h :: Δ)
    fS : {h h' : Hyp} {Δ : LCtx} -> h' ∈ Δ -> h' ∈ (h :: Δ)

  data _∈∈_ : Hyp -> UCtx -> Set where
    s0 : {h : Hyp} {Δ : LCtx} {Γ : UCtx} -> h ∈ Δ -> h ∈∈ (Δ :: Γ)
    sS : {h : Hyp} {Δ : LCtx} {Γ : UCtx} -> h ∈∈ Γ -> h ∈∈ (Δ :: Γ)
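For example, here is the membership proof picking out the second hypothesis of the frontmost block of an unrestricted context; reading s0 and fS/f0 as digits, it is just the De Bruijn index "block 0, position 1". (A small sketch, assuming the Contexts module is open; the name second-hyp is made up.)

-- pick out the second hypothesis of the frontmost block: "block 0, position 1"
second-hyp : {A⁻ : Neg} {Γ : UCtx} -> HN A⁻ ∈∈ ((HP "a" :: HN A⁻ :: []) :: Γ)
second-hyp = s0 (fS f0)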
Okay, now we can start the fun stuff. First we define patterns:
module Patterns where
  open Contexts public

  [_] : {A : Set} -> A -> List A
  [_] x = x :: []

  -- Pat⁺ Δ A⁺ = linear entailment "Δ ⇒ A⁺"
  data Pat⁺ : LCtx -> Pos -> Set where
    Px : {x : Atom} -> Pat⁺ [ HP x ] (X⁺ x)
    Pu : {A⁻ : Neg} -> Pat⁺ [ HN A⁻ ] (↓ A⁻)
    P<> : Pat⁺ [] 1⁺
    Ppair : {Δ1 : LCtx} {Δ2 : LCtx} {A : Pos} {B : Pos}
      -> Pat⁺ Δ1 A -> Pat⁺ Δ2 B -> Pat⁺ (Δ1 ++ Δ2) (A * B)
    Pinl : {Δ : LCtx} {A B : Pos} -> Pat⁺ Δ A -> Pat⁺ Δ (A + B)
    Pinr : {Δ : LCtx} {A B : Pos} -> Pat⁺ Δ B -> Pat⁺ Δ (A + B)
    Ptt : Pat⁺ [] 2⁺
    Pff : Pat⁺ [] 2⁺
    Pz : Pat⁺ [] ℕ
    Ps : {Δ : LCtx} -> Pat⁺ Δ ℕ -> Pat⁺ Δ ℕ

  -- Pat⁻ Δ A⁻ c = linear entailment "Δ ; A⁻ ⇒ c"
  data Pat⁻ : LCtx -> Neg -> Conc -> Set where
    Dx : forall {x} -> Pat⁻ [] (X- x) (CN x)
    Dp : forall {A⁺} -> Pat⁻ [] (↑ A⁺) (CP A⁺)
    Dapp : {Δ1 : LCtx} {Δ2 : LCtx} {A⁺ : Pos} {B⁻ : Neg} {c : Conc}
      -> Pat⁺ Δ1 A⁺ -> Pat⁻ Δ2 B⁻ c -> Pat⁻ (Δ2 ++ Δ1) (A⁺ → B⁻) c
    Dfst : {Δ : LCtx} {A⁻ : Neg} {B⁻ : Neg} {c : Conc}
      -> Pat⁻ Δ A⁻ c -> Pat⁻ Δ (A⁻ & B⁻) c
    Dsnd : {Δ : LCtx} {A⁻ : Neg} {B⁻ : Neg} {c : Conc}
      -> Pat⁻ Δ B⁻ c -> Pat⁻ Δ (A⁻ & B⁻) c
Observe that the constructors for Pat⁺ are just value patterns. The constructors for Pat⁻ are somewhat less familiar: they correspond to patterns for elimination contexts, which are sometimes called spines.
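For example (a small sketch, assuming the Patterns module is open; the names ppair and pfst are made up), here is a value pattern for a strict pair and a spine pattern for a lazy pair:

-- a value pattern for a strict pair: it binds one hypothesis per component,
-- a positive atom and a suspended negative formula
ppair : {A⁻ : Neg} -> Pat⁺ (HP "a" :: HN A⁻ :: []) (X⁺ "a" * ↓ A⁻)
ppair = Ppair Px Pu

-- a spine pattern for a lazy pair: project the first component and observe
-- the positive result (the ⊤ on the right admits no spine at all)
pfst : {A⁺ : Pos} -> Pat⁻ [] (↑ A⁺ & ⊤) (CP A⁺)
pfst = Dfst Dp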
Finally, we define the generic focused sequent calculus:
module Logic where
  open Patterns public

  mutual
    -- RFoc Γ A⁺ = right-focus "Γ ⊢ [A⁺]"
    data RFoc : UCtx -> Pos -> Set where
      Val : forall {Γ A⁺ Δ} -> Pat⁺ Δ A⁺ -> SatCtx Γ Δ -> RFoc Γ A⁺

    -- LInv Γ c₀ c = left-inversion "Γ ; c₀ ⊢ c"
    data LInv : UCtx -> Conc -> Conc -> Set where
      Id- : forall {Γ X-} -> LInv Γ (CN X-) (CN X-)
      Fn : forall {Γ A⁺ c}
        -> ({Δ : LCtx} -> Pat⁺ Δ A⁺ -> UnFoc (Δ :: Γ) c)
        -> LInv Γ (CP A⁺) c

    -- RInv Γ h = right-inversion "Γ ⊢ h;"
    data RInv : UCtx -> Hyp -> Set where
      Susp : forall {Γ A⁻}
        -> ({Δ : LCtx} {c : Conc} -> Pat⁻ Δ A⁻ c -> UnFoc (Δ :: Γ) c)
        -> RInv Γ (HN A⁻)
      Id+ : forall {Γ X⁺} -> ((HP X⁺) ∈∈ Γ) -> RInv Γ (HP X⁺)

    -- LFoc Γ A⁻ c = left-focus "Γ ; [A⁻] ⊢ c"
    data LFoc : UCtx -> Neg -> Conc -> Set where
      Spine : forall {Δ A⁻ c0 c Γ} -> Pat⁻ Δ A⁻ c0 -> SatCtx Γ Δ -> LInv Γ c0 c
        -> LFoc Γ A⁻ c

    -- UnFoc Γ c = unfocused "Γ ⊢ c"
    data UnFoc : UCtx -> Conc -> Set where
      Ret : forall {Γ A⁺} -> RFoc Γ A⁺ -> UnFoc Γ (CP A⁺)
      App : forall {Γ A⁻ c} -> ((HN A⁻) ∈∈ Γ) -> LFoc Γ A⁻ c -> UnFoc Γ c

    -- SatCtx Γ Δ = conjoined inversion "Γ ⊢ Δ;"
    data SatCtx : UCtx -> LCtx -> Set where
      Sub : forall {Γ Δ} ->
        ({h : Hyp} -> (h ∈ Δ) -> (RInv Γ h))
        -> SatCtx Γ Δ
Some examples follow (translated from this paper). Here's where Agda's second most important feature (after Unicode support) gets to show off: dependent pattern-matching! We reuse Agda's coverage checker to ensure that our object-level functions are total.
module Examples where
  open Logic

  NilCtx : {Γ : UCtx} -> SatCtx Γ []
  NilCtx = Sub f
    where f : forall {h} -> h ∈ [] -> RInv _ h
          f ()

  -- boolean patterns bind no hypotheses, so they can be coerced to closed patterns
  ⌊_⌋ : {Δ : LCtx} -> Pat⁺ Δ 2⁺ -> Pat⁺ [] 2⁺
  ⌊_⌋ Ptt = Ptt
  ⌊_⌋ Pff = Pff

  tt : {Γ : UCtx} -> RFoc Γ 2⁺
  tt = Val Ptt NilCtx

  ff : {Γ : UCtx} -> RFoc Γ 2⁺
  ff = Val Pff NilCtx

  not : (Γ : UCtx) -> RInv Γ (HN (2⁺ → ↑ 2⁺))
  not Γ = Susp not*
    where
      not* : {Δ : LCtx} {c : Conc} -> Pat⁻ Δ (2⁺ → ↑ 2⁺) c -> UnFoc (Δ :: Γ) c
      not* (Dapp Ptt Dp) = Ret ff
      not* (Dapp Pff Dp) = Ret tt

  and : (Γ : UCtx) -> RInv Γ (HN (2⁺ → 2⁺ → ↑ 2⁺))
  and Γ = Susp and*
    where
      and* : {Δ : LCtx} {c : Conc} -> Pat⁻ Δ (2⁺ → 2⁺ → ↑ 2⁺) c -> UnFoc (Δ :: Γ) c
      and* (Dapp Ptt (Dapp Ptt Dp)) = Ret tt
      and* (Dapp Ptt (Dapp Pff Dp)) = Ret ff
      and* (Dapp Pff (Dapp Ptt Dp)) = Ret ff
      and* (Dapp Pff (Dapp Pff Dp)) = Ret ff

  -- a derived "application" form: apply a function hypothesis to a focused value,
  -- then continue with the result
  App' : {Γ : UCtx} {A⁺ B⁺ : Pos} {c : Conc} ->
         (HN (A⁺ → ↑ B⁺) ∈∈ Γ) -> (RFoc Γ A⁺) -> (LInv Γ (CP B⁺) c) ->
         UnFoc Γ c
  App' u (Val p σ) F = App u (Spine (Dapp p Dp) σ F)

  -- table1 tabulates a suspended boolean function f, returning the pair (f tt, f ff)
  table1 : (Γ : UCtx) -> RInv Γ (HN (↓(2⁺ → ↑ 2⁺) → ↑ (2⁺ * 2⁺)))
  table1 Γ = Susp table1*
    where
      table1* : {Δ : LCtx} {c : Conc} -> Pat⁻ Δ (↓(2⁺ → ↑ 2⁺) → ↑ (2⁺ * 2⁺)) c -> UnFoc (Δ :: Γ) c
      table1* (Dapp Pu Dp) = App' (s0 f0) tt (Fn (\b1 ->
        App' (sS (s0 f0)) ff (Fn (\b2 ->
          Ret (Val (Ppair ⌊ b1 ⌋ ⌊ b2 ⌋) NilCtx)))))
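Reading table1 back as a program: it receives a suspended boolean function f (the Pu pattern binds it as a hypothesis), applies it first to tt and then to ff (note how the index shifts from s0 f0 to sS (s0 f0) once the first result's pattern pushes a new block onto the context), and returns the strict pair of results. The ⌊_⌋ coercion is there only because boolean patterns bind no hypotheses, so b1 and b2 can be coerced to closed patterns, paired with the empty SatCtx NilCtx.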