The Paralyzing Paradoxes of Professor Polaro<br>Primarily about proofs and programs<br>by Noam<p><hr><p>reflecting and reifying the state monad (2010-09-29)<p>The Haskell code below was inspired by <a href="http://math.andrej.com/2010/09/27/programming-with-effects-i-theory/">Andrej Bauer's visit last week</a>, as well as thinking more about <a href="http://www.google.com/search?q=filinski+representing+monads">this</a>, <a href="http://blog.sigfpe.com/2009/11/programming-with-impossible-functions.html">this</a>, and <a href="http://www.pps.jussieu.fr/%7Emellies/papers/segal-lics-2010-revised.pdf">this</a> since writing <a href="http://www.pps.jussieu.fr/%7Enoam/papers/polpol.pdf">this</a>. <hr><p><pre><br />{-# LANGUAGE RankNTypes, FlexibleInstances #-}<br /><br />-- A "negative type" is a datatype of continuations, indexed by answer<br />-- type. Given such a description, we can define values, "algebras",<br />-- and "handlers" as control expressions ("double negations") with<br />-- different answer type constraints<br />-- (cf. http://polaro.blogspot.com/2010/05/five-kinds-of-double-negation.html)<br /><br />newtype Value neg = Val { runVal :: forall r. 
neg r -> r }<br />newtype Algebra neg r = Alg { runAlg :: neg r -> r }<br />newtype Handler neg r1 r2 = Han { runHandle :: neg r1 -> r2 }<br /><br />infixr 0 `plugV`, `plugA`, `plugH`<br /><br />plugV :: neg r -> Value neg -> r<br />k `plugV` v = runVal v k<br /><br />plugA :: neg r -> Algebra neg r -> r<br />k `plugA` v = runAlg v k<br /><br />plugH :: neg r -> Handler neg r r' -> r'<br />k `plugH` v = runHandle v k<br /><br />-- We define the negative type Cell s<br /><br />data Cell s r = Update s (Cell s r)<br /> | Lookup (s -> Cell s r)<br /> | Return r<br /><br />-- Example: given an initial state, we can create a cell value...<br /><br />new :: s -> Value (Cell s)<br />new s = Val newVal<br /> where<br /> newVal (Update s' k) = k `plugV` new s'<br /> newVal (Lookup ks) = ks s `plugV` new s<br /> newVal (Return a) = a<br /><br />-- Example: perform some operations on an initial state...<br /><br />r1 = (Lookup $ \s -><br /> Update (s-1) $<br /> Lookup $ \s' -><br /> Return (s'+1))<br /> `plugV`<br /> new 2<br />-- r1 = 2<br /><br />-- Monadic reification and reflection<br /><br />reify :: Cell s a -> s -> (s,a)<br />reify k s = k `plugH` Han reifyHandle<br /> where<br /> reifyHandle (Update s' k') = reify k' s'<br /> reifyHandle (Lookup ks) = reify (ks s) s<br /> reifyHandle (Return a) = (s,a)<br /><br />reflect :: (s -> (s,a)) -> Cell s a<br />reflect m = Lookup $ \s -><br /> let (s',a) = m s in<br /> Update s' $<br /> Return a<br /><br />-- Example: Normalization by reification and reflection<br /><br />prog1 = Update 0 $<br /> Lookup $ \s -><br /> Return (s + 1)<br />prog2 = Lookup $ \s -><br /> Update 2 $<br /> Update 0 $<br /> Return 1<br /><br />-- a bit of code for printing cell continuations (prints only a small<br />-- segment of lookup tables)<br />instance Show r => Show (Cell Integer r) where<br /> show (Update s k) = "upd " ++ show s ++ "; " ++ show k<br /> show (Lookup ks) = show (map (\s -> (s, ks s)) [0..3])<br /> show (Return a) = "ret " ++ 
show a<br /><br />-- *Main> prog1<br />-- upd 0; [(0,ret 1),(1,ret 2),(2,ret 3),(3,ret 4)]<br />-- *Main> prog2<br />-- [(0,upd 2; upd 0; ret 1),(1,upd 2; upd 0; ret 1),(2,upd 2; upd 0; ret 1),(3,upd 2; upd 0; ret 1)]<br />-- *Main> reflect(reify prog1)<br />-- [(0,upd 0; ret 1),(1,upd 0; ret 1),(2,upd 0; ret 1),(3,upd 0; ret 1)]<br />-- *Main> reflect(reify prog2)<br />-- [(0,upd 0; ret 1),(1,upd 0; ret 1),(2,upd 0; ret 1),(3,upd 0; ret 1)]<br /></pre><hr><p>Otherwise, things have been good. This blog has been quiet for a while, but I'm still learning a lot from amazing people. Though I can't resist quoting Henry Miller: "It is now the fall of my second year in Paris. I was sent here for a reason I have not yet been able to fathom."<p><hr><p>Five kinds of double-negation (2010-05-22)<p>A <a href="http://lists.seas.upenn.edu/pipermail/types-list/2010/001481.html">recent question on the TYPES mailing list</a> reminded me that one reason negation is confusing is that it can mean so many different things. I and others originally thought the answer the person was looking for (the type-theoretic reading of negation) was "continuations", but it turned out his intuition was about a <a href="http://lists.seas.upenn.edu/pipermail/types-list/2010/001484.html">different form of negation</a> not usually considered in constructive logic (but available in "refinement type systems", etc.), since it's tied to the "extrinsic" interpretation of types-as-properties. (For more on this deficit of constructive logic, see the introduction to Paul Taylor's <a href="http://www.paultaylor.eu/ASD/subasd/subasd.pdf">Subspaces in Abstract Stone Duality</a>.) <p>But even restricted to plain old type theory, there are many different interpretations of negation! 
Under the principle of "Why talk about something confusing when you can talk about something doubly confusing?", today I want to talk about different computational interpretations of <i>double</i> negations in logic. <ol><li>(A -> 0) -> 0. Here 0 is falsehood, and this is the standard interpretation of double-negation in intuitionistic logic. What does this mean computationally? Well, one way to think about this is to interpret types as sets. How many different functions f : X -> 0 are there from any set X into the empty set? None, unless X is itself empty, in which case there is exactly one. Applying this reasoning to our double-negation, if A is non-empty, then A -> 0 is empty, which in turn means that (A -> 0) -> 0 contains exactly one element; conversely, if A is empty, then A -> 0 is non-empty, which in turn means that (A -> 0) -> 0 is empty. What we see is that intuitionistic double-negation encodes the principle of <i>proof-irrelevance</i>, i.e., it <i>forgets</i> all the computational content of a type. (The catch here is that set-theoretic reasoning doesn't always prove type isomorphisms—this argument really only works for extensional type theory.) </li><li>(A -> ⊥) -> ⊥. Here ⊥ represents <a href="http://en.wikipedia.org/wiki/Minimal_logic">"minimal" falsehood</a>, i.e., simply a logical atom with no introduction or elimination rules, and in particular without the principle of <a href="http://en.wikipedia.org/wiki/Principle_of_explosion">explosion</a>. Because we have replaced "empty" with "unknown", minimal double-negation does not have the same computational degeneracy as intuitionistic double-negation: there are at least as many different constructive proofs of (A -> ⊥) -> ⊥ as there are of A, and often more. For example, in general there is no proof in minimal logic of excluded middle A ∨ (A -> ⊥), but there is a proof of ((A ∨ (A -> ⊥)) -> ⊥) -> ⊥. 
Indeed, minimal logic rather than intuitionistic logic really is the proper place to understand the computational content of the classical double-negation translations (something which I didn't fully appreciate until reading Jeremy Avigad's <a href="http://arxiv.org/abs/0901.2551">The computational content of classical arithmetic</a>). As such, this form of double-negation is the first step towards understanding <i>continuation-passing style.</i></li><li>(A -> R) -> R, where R is any type of <i>answers</i>. We said that ⊥ above is a logical atom. In type-theoretic terms, it is a type variable. So to move to this form of double-negation, we simply <i>instantiate</i> the variable with a concrete type R. Now, logically we are beginning to stretch the meaning of "negation". In particular, nothing says that R is uninhabited—indeed if it is provably uninhabited, we are back to the computationally degenerate situation (1). But whether or not we accept the terminology, this form of double-negation is extremely important computationally, tied to the representation of control flow. A function of type A -> R is a continuation (transforming any value of type A into a result of type R), and so a term of type (A -> R) -> R is something which takes a continuation to a result—a computation with "control effects". The type (A -> R) -> R is so powerful that it comes with <a href="http://www.haskell.org/all_about_monads/html/contmonad.html">the following warning</a>: <center><img src="http://www.haskell.org/all_about_monads/html/warn.png">Abuse of the Continuation monad can produce code that is impossible to understand and maintain.</center>The "principle of double-confusion" I mentioned at the beginning then motivates the following <i>pair</i> of generalizations... </li><li>(A -> R1) -> R2, where R1 and R2 are two (possibly distinct) types of answers. 
This form of double-negation comes up in the study of <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.4822">delimited control operators</a>, which were originally motivated by the fact that the type (A -> R) -> R is not only too powerful but also not powerful enough. The type (A -> R1) -> R2 can be seen as a sort of <a href="http://en.wikipedia.org/wiki/Hoare_logic">Hoare triple</a> {R1}A{R2}, which gives an intimation of its power. </li><li>∀α.(A -> α) -> α, a polymorphic type (and more generally, ∀α.(A -> Tα) -> Tα, where T is an arbitrary <a href="http://en.wikipedia.org/wiki/Monad_%28category_theory%29">monad</a>). It's easy to see that A ≡ ∀α.(A -> α) -> α is provable in second-order intuitionistic logic, but in fact this can also be interpreted as the Yoneda <i>isomorphism</i> in category theory, as <a href="http://blog.sigfpe.com/2006/11/yoneda-lemma.html">sigfpe explained a few years ago</a>. More generally, there is a Yoneda isomorphism between TA and ∀α.(A -> Tα) -> Tα for any monad T (and formally, a type isomorphism in System F + parametricity axioms). This isomorphism lies at the heart of <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.8213">Filinski's representation theorem</a> that delimited control operators can be used to perform "monadic reflection". </li></ol>So that is a small cast of characters, biased towards the view of "negation as continuation" (which as I said, is not the only view). 
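To make a couple of these characters concrete, here is a small Haskell sketch (my own illustration, with made-up names, not code from any of the linked papers): the double-negation of excluded middle from (2), with the minimal ⊥ kept abstract as a type variable r, and the two halves of the Yoneda isomorphism from (5).

```haskell
{-# LANGUAGE RankNTypes #-}

-- (2): excluded middle is not provable in minimal logic, but its
-- double-negation is; "falsehood" stays an abstract answer type r
dnem :: (Either a (a -> r) -> r) -> r
dnem k = k (Right (\x -> k (Left x)))

-- (5): the two halves of the Yoneda isomorphism  A ≅ ∀α.(A -> α) -> α
toYoneda :: a -> (forall r. (a -> r) -> r)
toYoneda a = \k -> k a

fromYoneda :: (forall r. (a -> r) -> r) -> a
fromYoneda f = f id
```

For instance, `fromYoneda (toYoneda True)` gives back `True`; the round trip is the identity, which is what makes (5) an isomorphism rather than a mere implication.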
For more plots involving these different forms of double-negation (particularly (4) and (5)), you can read the <a href="http://polaro.blogspot.com/2010/05/polarity-and-logic-of-delimited.html">paper I posted</a> two weeks ago.<p><hr><p>Polarity and the logic of delimited continuations (2010-05-02)<p>Text: <a href="http://www.pps.jussieu.fr/~noam/papers/polpol.pdf">pdf</a><br>Code: <a href="http://www.pps.jussieu.fr/~noam/src/delimited.tar">twelf</a><br>Abstract: <p><blockquote>Polarized logic is the logic of values and continuations, and their interaction through continuation-passing style. The main limitations of this logic are the limitations of CPS: that continuations cannot be composed, and that programs are fully sequentialized. Delimited control operators were invented in response to the limitations of classical continuation-passing. That suggests the question: what is the logic of delimited continuations? <p>We offer a simple account of delimited control, through a natural generalization of the classical notion of polarity. This amounts to breaking the perfect symmetry between positive and negative polarity in the following way: answer types are positive. Despite this asymmetry, we retain all of the classical polarized connectives, and can explain "intuitionistic polarity" (e.g., in systems like CBPV) as a restriction on the use of connectives, i.e., as a logical fragment. Our analysis complements and generalizes existing accounts of delimited control operators, while giving us a rich logical language through which to understand the interaction of control with monadic effects. </blockquote>To appear at <a href="http://www2.informatik.hu-berlin.de/lics/lics10/">LICS 2010</a>. 
<p>Writing this paper helped me sort out some things I'd been puzzled about for a while—I hope you also find it useful!<p><hr><p>"One of the first things to learn about category theory is that not everything in mathematics is a category" (2010-04-24)<p>Sorry, apologies, etc., for the long silence. I've been working on the final version of <a href="http://www2.informatik.hu-berlin.de/lics/lics10/lics10-accepted.html">a paper</a> that I plan to post here in a few days. This mini-post is just to highlight an exchange on <a href="http://mathoverflow.net/">MathOverflow</a> that jibes with some things I've been thinking about lately. The quote in the title is from <a href="http://math.uchicago.edu/~shulman/">Mike Shulman</a> in response to <a href="http://mathoverflow.net/questions/22359/why-havent-certain-well-researched-classes-of-mathematical-object-been-framed-by">this question</a>, "Why haven’t certain well-researched classes of mathematical object been framed by category theory?" I also like <a href="http://mathoverflow.net/questions/22359/why-havent-certain-well-researched-classes-of-mathematical-object-been-framed-by/22380#22380">Reid Barton's response</a>, in particular, "From the standpoint of higher category theory, categories (i.e., 1-categories) are just one level among many in a family of mathematical structures". 
This seems to me like an important lesson to keep in mind when applying category theory to programming languages and proof theory: to resist the desire to phrase everything as a category.<p><hr><p>Twelf, my favorite general-purpose functional programming language (2010-01-26)<p><i>(Disclaimer: the title of this post is tongue-in-cheek—I do not really like to play favorites. But hopefully you will understand why I chose it: to mess with your head.)</i><p>Last week I was at <a href="http://www.cse.psu.edu/popl/10/index.html">POPL 2010</a> in Madrid. Most fun of course was catching up with friends and meeting new people, but there were also some excellent talks. A few highlights: <ul> <li><a href="http://www.cse.psu.edu/~swarat/pubs/index.html">Swarat Chaudhuri</a>'s talk about <a href="http://www.cse.psu.edu/~swarat/pubs/continuity.pdf">Continuity Analysis of Programs</a>: continuity, like type safety, provides a measure of the "robustness" of programs</li> <li><a href="http://www2.research.att.com/~trevor/">Trevor Jim</a>'s very fun talk on Yakker (<a href="http://www.cs.princeton.edu/~dpw/papers/ddgrammars-0709.pdf">Semantics and Algorithms for Data-dependent Grammars</a>)</li> <li><a href="http://www.diku.dk/hjemmesider/ansatte/andrzej/">Andrzej Filinski</a> on <a href="http://www.diku.dk/hjemmesider/ansatte/andrzej/papers/popl10-mia.pdf">Monads in Action</a>: exploring the relationship between monadic and operational semantics</li> <li><a href="http://www.ccs.neu.edu/home/matthias/">Matthias Felleisen</a>'s invited talk at TLDI, which opened with the following slide: <img width="600" src="http://www.pps.jussieu.fr/~noam/images/felleisen.jpg" alt="i see you're the keynote speaker at TYPES in language design and implementation. 
what the f...??"></ul> In this post I want to talk a bit about my favorite general-purpose functional programming language, <a href="http://twelf.plparty.org/wiki/Main_Page">Twelf</a>. If that sentence gives you cognitive dissonance, please read on! <p>Really, what I am going to be describing is a <i>very shallow embedding</i> of a pure functional language within Twelf. For familiarity (though at a cost of expressiveness), I will limit our language to an ML-like, call-by-value fragment (with arbitrary recursive datatypes and recursive functions), rather than a fully polarized type theory. For fun, our language will also include Haskell-style typeclasses. Our two weapons will be traditional LF-style (dependently-typed) higher-order abstract syntax, and the defunctionalization trick I described <a href="http://polaro.blogspot.com/2009/07/defunctionalizing-proofs-revised.html">a year ago</a>. Don't worry, we won't be proving any theorems, only writing programs. <p>We begin by declaring the (meta-level) type of (object-level) types: <pre><br />tp : type. % types<br /></pre>As we did informally <a href="http://polaro.blogspot.com/2009/11/what-is-logic-of-ml-values.html">in a previous post</a>, we will distinguish three syntactic categories: <pre><br />val : tp -> type. % values<br />kon : tp -> tp -> type. % "delimited" continuations<br />exp : tp -> type. % expressions<br /></pre>(Note: there is nothing scary about delimited continuations—think of them as pure ML functions, or entailments, or morphisms in a category.) Expressions in our language are constructed out of values and continuations, as follows: <pre><br />! : val A -> exp A. %prefix 10 !.<br />$ : kon A B -> exp A -> exp B. %infix right 11 $.<br /></pre>Because we are really restricting to types of <i>positive polarity</i>, values are built by applying constructors, while continuations are defined by pattern-matching on values. 
The latter principle is expressed (in defunctionalized form) by the <i>apply</i> function, a Twelf logic program: <pre><br />apply : kon A B -> val A -> exp B -> type.<br />%mode apply +K +V -E.<br /></pre>Now, without knowing anything else about values and continuations, we can already give our language a generic (big-step) operational semantics: <pre><br />eval : exp A -> val A -> type.<br />%mode eval +E -V.<br />- : eval (! V) V.<br />- : eval (F $ E) VR<br /> <- eval E V<br /> <- apply F V E'<br /> <- eval E' VR.<br /></pre>And that's the core language! Next we can build up a nice library of datatypes and library routines. We define some simple datatype constructors, by their value constructors: <pre><br />1 : tp.<br /><> : val 1.<br /><br />0 : tp.<br /><br />* : tp -> tp -> tp. %infix right 12 *.<br />pair : val A -> val B -> val (A * B).<br /><br />+ : tp -> tp -> tp. %infix right 11 +.<br />inl : val A -> val (A + B).<br />inr : val B -> val (A + B).<br /><br />=> : tp -> tp -> tp. 
%infix right 10 =>.<br />fn : kon A B -> val (A => B).<br /></pre>We also introduce polymorphic and recursive types (note our reliance on HOAS and substitution): <pre><br />∀ : (tp -> tp) -> tp.<br />Λ : ({x}val (A x)) -> val (∀ A).<br /><br />rec : (tp -> tp) -> tp.<br />fold : val (A* (rec A*)) -> val (rec A*).<br /></pre>Finally we derive some familiar datatypes: <pre><br />bool = 1 + 1.<br />true = inl <>.<br />false = inr <>.<br /><br />maybe = [A] A + 1.<br />nothing : val (maybe A) = inr <>.<br />just : val A -> val (maybe A) = [V] inl V.<br /><br />nat = rec [X] 1 + X.<br />z : val nat = fold (inl <>).<br />s : val nat -> val nat = [V] fold (inr V).<br /><br />list = [A] rec [X] 1 + A * X.<br />nil : val (list A) = fold (inl <>).<br />cons : val A -> val (list A) -> val (list A) =<br /> [V] [Vs] fold (inr (pair V Vs)).<br /></pre>Just as we do in Haskell, we can define continuations (functions) by giving them a type declaration, and then a list of pattern-matching clauses. For example, here we define the boolean operations "and" and "or": <pre><br />and : kon (bool * bool) bool.<br />- : apply and (pair true true) (! true).<br />- : apply and (pair true false) (! false).<br />- : apply and (pair false _) (! false).<br /><br />or : kon (bool * bool) bool.<br />- : apply or (pair false false) (! false).<br />- : apply or (pair false true) (! true).<br />- : apply or (pair true _) (! true).<br /></pre>We can also define some continuation constructors in the same way. For example, we can define the beloved lambda: <pre><br />λ : (val A -> exp B) -> kon A B.<br />- : apply (λ F) V (F V).<br /></pre>λ constructs a continuation from an expression with a free value variable, which is useful when we <i>don't</i> need to pattern-match on the argument. We also define some useful syntactic sugar: <pre><br />let : exp A -> (val A -> exp B) -> exp B = [E] [F] λ F $ E.<br /></pre>In effect here we are using Twelf as a <i>macro system</i> for our embedded language. 
The following continuation constructors correspond to (sequent calculus) "left rules" for the various type constructors: <pre><br />split : (val A -> val B -> exp C) -> kon (A * B) C.<br />- : apply (split E*) (pair V1 V2) (E* V1 V2).<br /><br />case : kon A C -> kon B C -> kon (A + B) C.<br />- : apply (case K1 K2) (inl V) E<br /> <- apply K1 V E.<br />- : apply (case K1 K2) (inr V) E<br /> <- apply K2 V E.<br /><br />funsplit : (kon A B -> exp C) -> kon (A => B) C.<br />- : apply (funsplit E*) (fn F) (E* F).<br /><br />gensplit : (({x}val (A x)) -> exp C) -> kon (∀ A) C.<br />- : apply (gensplit E*) (Λ V) (E* V).<br /><br />unfold : kon (A* (rec A*)) C -> kon (rec A*) C.<br />- : apply (unfold K) (fold V) E<br /> <- apply K V E.<br /></pre>Note we are not obligated to use these constructors to define continuations, i.e., they are not part of the language specification: they are just library routines. Here's an example of a recursive definition: <pre><br />append : kon (list A * list A) (list A).<br />- : apply append (pair nil L2) (! L2).<br />- : apply append (pair (cons V L1) L2)<br /> (let (append $ ! pair L1 L2) [L'] ! cons V L').<br /></pre>Okay, hopefully by now you are convinced that in this style we can write more or less ordinary (effect-free) ML programs. Next we turn to typeclasses. Typeclasses are basically type-indexed (or type constructor-indexed) records, and so their encoding in Twelf follows the same pattern of defunctionalization (or, for the polarity-sensitive: because they are <i>negative</i> values). 
Here we introduce a few typeclasses familiar from Haskell: <pre><br />EQ : tp -> type.<br />eq : EQ A -> kon (A * A) bool.<br /><br />FUNCTOR : (tp -> tp) -> type.<br />fmap : FUNCTOR F -> kon A B -> kon (F A) (F B).<br /><br />MONAD : (tp -> tp) -> type.<br />eta : MONAD T -> kon A (T A).<br />ext : MONAD T -> kon A (T B) -> kon (T A) (T B).<br /><br />% Wadler's syntactic sugar...<br />return : MONAD T -> exp A -> exp (T A) = [M] [E] eta M $ E.<br />bind : MONAD T -> exp (T A) -> kon A (T B) -> exp (T B) = [M] [E] [K] ext M K $ E.<br /></pre>We can instantiate EQ in various ways: <pre><br />1eq : EQ 1.<br />- : apply (eq 1eq) (pair <> <>) (! true).<br /><br />sumeq : EQ A -> EQ B -> EQ (A + B).<br />- : apply (eq (sumeq EqA EqB)) (pair (inl Va1) (inl Va2))<br /> (eq EqA $ ! pair Va1 Va2).<br />- : apply (eq (sumeq EqA EqB)) (pair (inl Va1) (inr Vb2))<br /> (! false).<br />- : apply (eq (sumeq EqA EqB)) (pair (inr Vb1) (inl Va2))<br /> (! false).<br />- : apply (eq (sumeq EqA EqB)) (pair (inr Vb1) (inr Vb2))<br /> (eq EqB $ ! pair Vb1 Vb2).<br /><br />paireq : EQ A -> EQ B -> EQ (A * B).<br />- : apply (eq (paireq EqA EqB)) (pair (pair Va1 Vb1) (pair Va2 Vb2))<br /> (let (eq EqA $ ! pair Va1 Va2) [x]<br /> let (eq EqB $ ! pair Vb1 Vb2) [y]<br /> and $ ! pair x y).<br /><br />nateq : EQ nat.<br />- : apply (eq nateq) (pair z z) (! true).<br />- : apply (eq nateq) (pair z (s _)) (! false).<br />- : apply (eq nateq) (pair (s _) z) (! false).<br />- : apply (eq nateq) (pair (s N1) (s N2)) (eq nateq $ ! pair N1 N2).<br /></pre>Likewise FUNCTOR: <pre><br />idf : FUNCTOR ([A]A).<br />- : apply (fmap idf K) V E<br /> <- apply K V E.<br /><br />prodf : FUNCTOR F -> FUNCTOR G -> FUNCTOR ([A] F A * G A).<br />- : apply (fmap (prodf F G) K) (pair V1 V2)<br /> (let (fmap F K $ ! V1) [x] let (fmap G K $ ! V2) [y] ! pair x y).<br /><br />expf : FUNCTOR ([A]B => A).<br />- : apply (fmap expf K) (fn K') (! fn (λ [x] K $ K' $ ! 
x)).<br /><br />listf : FUNCTOR ([A]list A).<br />- : apply (fmap listf K) nil (! nil).<br />- : apply (fmap listf K) (cons V VS) (let (K $ ! V) [x]<br /> let (fmap listf K $ ! VS) [xs]<br /> ! cons x xs).<br /></pre>And MONAD: <pre><br />idm : MONAD ([a]a).<br />- : apply (eta idm) V (! V).<br />- : apply (ext idm K) V E<br /> <- apply K V E.<br /><br />maybem : MONAD ([a]maybe a).<br />- : apply (eta maybem) V (! just V).<br />- : apply (ext maybem K) V (case K (λ [_] ! nothing) $ ! V).<br /></pre>And now an example where we run fmap on a pair of lists, using Twelf's logic engine to automatically infer the appropriate instance of FUNCTOR: <pre><br />%query 1 *<br /> eval (fmap F (λ [x] ! s x) $<br /> ! pair (cons z (cons z nil)) (cons (s z) (cons (s z) nil))) R.<br />---------- Solution 1 ----------<br />R = pair (cons (s z) (cons (s z) nil)) (cons (s (s z)) (cons (s (s z)) nil));<br />F = prodf listf listf.<br />____________________________________________<br /></pre>And finally, a slightly more involved example (note that cond takes the true-branch first, so the branch returning the entry comes before the recursive call): <pre><br />cond : exp C -> exp C -> kon bool C =<br /> [E1] [E2] case (λ [_] E1) (λ [_] E2).<br /><br />lookup : EQ A -> val A -> kon (list (A * B)) (maybe B).<br />- : apply (lookup Eq V) nil (! nothing).<br />- : apply (lookup Eq V) (cons (pair V' VE) Vs)<br /> (cond (! just VE) (lookup Eq V $ ! Vs) $ <br /> (eq Eq $ ! pair V V')).<br /><br />one = s z. two = s one. three = s two. four = s three.<br />%query 1 *<br /> eval (bind _ (lookup _ one $ ! cons (pair z three)<br /> (cons (pair one four) (cons (pair two z) nil)))<br /> (λ [x] return _ (! s x))) R.<br />---------- Solution 1 ----------<br />R = just (s four).<br />____________________________________________<br /></pre>And that's the gist of it. 
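For comparison, the skeleton of this embedding can also be transcribed into Haskell with GADTs (a sketch of my own, not part of the post; Twelf's logic-programming features, such as automatic instance inference, have no direct analogue here):

```haskell
{-# LANGUAGE GADTs #-}

-- values, continuations, and expressions, mirroring val/kon/exp above
data Val a where
  Unit :: Val ()
  Pair :: Val a -> Val b -> Val (a, b)
  Inl  :: Val a -> Val (Either a b)
  Inr  :: Val b -> Val (Either a b)
  Fn   :: Kon a b -> Val (a -> b)

data Kon a b where
  Lam  :: (Val a -> Exp b) -> Kon a b              -- the beloved lambda
  Case :: Kon a c -> Kon b c -> Kon (Either a b) c -- left rule for sums

data Exp a where
  Ret :: Val a -> Exp a            -- the "!" injection of values
  App :: Kon a b -> Exp a -> Exp b -- the "$" cut

-- the defunctionalized apply relation, now an ordinary function
apply :: Kon a b -> Val a -> Exp b
apply (Lam f)     v       = f v
apply (Case k1 _) (Inl v) = apply k1 v
apply (Case _ k2) (Inr v) = apply k2 v

-- the generic big-step evaluator
eval :: Exp a -> Val a
eval (Ret v)   = v
eval (App k e) = eval (apply k (eval e))
```

For example, `eval (App (Case k1 k2) (Ret (Inl v)))` dispatches to `k1`, just as the Twelf `case` clauses do.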
<p>If you are still reading this, perhaps you will be inspired and use our favorite general-purpose functional programming language to write, say, a webserver?<p><hr><p>The History of Categorical Logic (2009-12-21)<p>I'm currently making my way through Jean-Pierre Marquis and Gonzalo E. Reyes' 116-page-long <a href="http://www.webdepot.umontreal.ca/Usagers/marquisj/MonDepotPublic/HistofCatLog.pdf">The History of Categorical Logic</a>, and it is fascinating. Someone needs to <a href="http://polaro.blogspot.com/2009/09/review-of-logicomix.html">make it into a comic book</a>. [HT: <a href="http://math.ucr.edu/home/baez/week287.html">This Week's Finds</a>]<p><hr><p>What is the logic of ML values? (2009-11-16)<p>Before approaching the question, "What is the logic of ML values?", I first want to explain why there is more subtlety to the question than is often assumed. <p> (Warning: this is a long post on something which most people probably care very little about. On the other hand, there are 19 google hits for <a href="http://www.google.com/search?num=100&q=%22logic+changed+my+life%22">"logic changed my life"</a>.) <p> In general, how does one interpret the question, "What is the logic of X?", for some programming language X? A standard response is that the logical theorems are exactly the inhabited types, i.e., types τ such that there is a closed term t:τ. But what are the "terms"? The answer is obvious enough in simple formal systems where there is only one typing judgment, and so we can say things like, "The inhabited types of simply-typed lambda calculus are exactly the theorems of minimal implicational logic". 
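As a concrete reminder of that classic correspondence (my own aside, not from the post), the K and S combinators inhabit exactly the two axioms of minimal implicational logic, with application playing the role of modus ponens:

```haskell
-- K inhabits the axiom  a -> (b -> a)
k :: a -> b -> a
k x _ = x

-- S inhabits the axiom  (a -> b -> c) -> (a -> b) -> (a -> c)
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- together with application (modus ponens), these generate all theorems
-- of minimal implicational logic; e.g. the identity theorem a -> a:
skk :: a -> a
skk = s k k
```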
Yet, real programming languages usually have many different typing judgments, since programs are composed out of many different syntactic objects. In ML, in particular, a program includes some let-bound values and expressions, some function definitions, and a result expression (in addition to datatype definitions, module definitions, etc., which are classified by different kinds of "types"). <p> (In this post, "ML" is meant in a slightly fuzzy but essentially unambiguous way, as a call-by-value language with effects. To fix a concrete interpretation, I will be giving examples in <a href="http://www.smlnj.org/">Standard ML of NJ</a>, which notably provides the control operator callcc. "Value" is meant in the standard operational sense, i.e., a value is a fully evaluated ML expression. We can similarly ask, "What is the logic of Haskell values?", but that is a different, trickier question, whose answer I'm still not sure of.) <p> When answering the question, "What is the logic of ML?" (or "What is the logic of Haskell?"), the standard approach is to gloss over these different syntactic categories, and assume we are talking about expressions. This makes some sense, insofar as this is the largest syntactic category: all values and functions are also expressions. And expressions also have easy-to-understand principles of composition. For example, the rule of application—from two expressions e₁:σ→τ and e₂:σ, we can form the expression (e₁ e₂):τ—witnesses the logical principle of modus ponens. Whereas from two <i>values</i> v₁:σ→τ and v₂:σ, the application (v₁ v₂):τ is no longer a value. <p> Yet, I believe it is a mistake to define "the" logic of a programming language in this way, without realizing there is finer structure. That is why I phrased the question, "What is the logic of ML <i>values</i>?", rather than full-stop, "What is the logic of ML?" And so how does the logic of ML values differ from the logic of ML expressions? 
<p> Well, it is often claimed that ML (and likewise Haskell) defines an inconsistent logic. This is a statement about the logic of expressions. For example, non-terminating expressions such as <blockquote>fun loop() = loop() </blockquote>inhabit every type, as do exception-raising expressions, such as <blockquote>fun fail() = raise (Fail "inconsistent!?") </blockquote>(Note: the expressions here are <code>loop()</code> and <code>fail()</code>. Here and below, I assign names to expressions by declaring them as thunks.) But none of these are values. Indeed, evaluation of the first expression will never yield a result, <blockquote>- loop(); <br>(* waiting a long time... *) </blockquote>whereas the second will return immediately, but with an exception rather than a value: <blockquote>- fail(); <br>uncaught exception Fail [Fail: inconsistent!?] <br> raised at: stdIn:2.3-3.5 </blockquote>(There are some additional complaints about the <i>value restriction</i> when evaluating these polymorphic expressions, but for our purposes we can ignore them here.) In fact, the logic of ML values is consistent, as we can see by defining the empty type: <blockquote>datatype void = Void of void </blockquote>There are no values of type void, since the only way to construct one is by already having one. Note, though, that all bets are off "under a lambda"—we can still build a value inhabiting any function type σ→τ, for example <blockquote>fun fnfail() = fn _ => raise (Fail "inconsistent!?") </blockquote>Let's try to be a bit more precise about what happens before we reach that point. <p> Another folk theorem about "the logic of ML" is that insofar as it is consistent (i.e., if you avoid things like unrestricted recursion and exceptions), with the addition of control operators it becomes <i>classical</i>. 
We can import the relevant SML/NJ library and see what they mean: <blockquote>- open SMLofNJ.Cont; <br>opening SMLofNJ.Cont <br> type 'a cont = 'a ?.cont <br> val callcc : ('a cont -> 'a) -> 'a <br> val throw : 'a cont -> 'a -> 'b <br> val isolate : ('a -> unit) -> 'a cont <br>(* ... *) </blockquote>As first observed by Griffin, the type of callcc corresponds logically to the classical principle of Peirce's Law. We can make this a bit easier to see by defining another control operator letcc, which essentially does the same thing as callcc but with first-class ML functions, rather than values of the cont type: <blockquote>- fun letcc f = callcc (fn k => f (fn x => throw k x)); <br>val letcc = fn : (('a -> 'b) -> 'a) -> 'a </blockquote>Using letcc, we can derive the law of excluded middle, ∀α.α ∨ ¬α. To represent this type, we first introduce a type constructor for disjunctions: <blockquote>datatype ('a,'b) either = Left of 'a | Right of 'b </blockquote>Now we can witness excluded middle as follows: <blockquote>fun excluded_middle() : ('a,'a -> void) either = <br> letcc (fn k => Right (fn x => k (Left x))) </blockquote><p> So what happens when we attempt to evaluate this expression? (To avoid complaints about the value restriction, we instantiate the type variable 'a at some arbitrary type a.) <blockquote>- excluded_middle() : (a,a -> void) either; <br>val it = Right fn : (a,a -> void) either </blockquote>And now, in contrast to what happened above with the evaluation of loop() and fail(), here we actually get a value! <p> So can we say that the logic of ML values really <i>is</i> classical? Well, we evidently can under the "bag of tautologies" view of logic. But that is not necessarily the most useful view. <p> If we look up at the above ML session, a shockingly anti-classical principle is already staring back down at us. 
After evaluating <code>excluded_middle()</code>, we got back <i>this</i> value: <blockquote>val it = Right fn : (a,a -> void) either </blockquote>That is, not only is this a proof of α ∨ ¬α, but in fact it contains a proof of the right disjunct, ¬α. This is an instance of the <i>intuitionistic</i> <a href="http://en.wikipedia.org/wiki/Disjunction_property">disjunction principle</a>, that a proof of a disjunction is a proof of one of the two disjuncts. <p> The punchline, I would say, is that the logic of ML values is not unsound, and not classical, but simply constructive. What is an ML value of type τ? Just an intuitionistic proof of τ—but in a non-empty context. This context includes the current continuation of type ¬τ, as well as all of the other effects that can be invoked <i>underneath</i> a lambda, i.e., in values of functional type. <p> Okay, so what have we brushed over here? Well first, the restriction that effects are only invoked underneath lambdas can actually be enforced by CPS translation. Functions σ→τ are interpreted as ¬(σ∧¬τ), where ¬(-) denotes minimal negation, i.e., ¬τ = τ→β, for some fixed parameter β, the <i>return type</i>. Effectful operations are encoded as things that return β. Second, I said that a value of (a CPS translated) type τ is an intuitionistic proof of τ (in a non-empty context of effects), but are <i>all</i> such intuitionistic proofs values? Well, no, because some proofs involve "detours". To be concrete, if M₁ and M₂ are proofs of σ and τ, then π₁(M₁, M₂) is a proof of σ—yet, <code>#1 ("hello", "world")</code> is typically not considered to be a value (it evaluates to "hello"). (<a href="http://www.cs.cmu.edu/~rwh/courses/modules/papers/crary-etal99/paper.pdf">Some people</a> like to call expressions like <code>#1 ("hello", "world")</code> "valuable".) Third, to say that there is a "logic" of ML values, we really <i>do</i> have to explain what its composition principles are, and not just which principles fail. 
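The first of these points, the minimal-negation interpretation of function types, is easy to sketch in code. Here is a small Haskell version (Haskell rather than SML just to keep the sketch short; the names <code>Not</code>, <code>Fun</code>, <code>idFun</code>, <code>compFun</code>, and <code>runFun</code> are mine, purely for illustration):

```haskell
-- Minimal negation with a fixed answer type beta: ¬a = a -> beta.
type Not beta a = a -> beta

-- A source-level function σ→τ is interpreted as ¬(σ ∧ ¬τ):
-- it consumes an argument paired with a continuation for the result.
type Fun beta a b = Not beta (a, Not beta b)

-- The identity function under this interpretation: pass the argument
-- straight to the result continuation.
idFun :: Fun beta a a
idFun (x, k) = k x

-- Composition: run f, handing its result (via a continuation) to g.
compFun :: Fun beta a b -> Fun beta b c -> Fun beta a c
compFun f g (x, k) = f (x, \y -> g (y, k))

-- "Running" a CPS function means supplying an initial continuation.
runFun :: Fun beta a b -> a -> Not beta b -> beta
runFun f x k = f (x, k)
```

Effectful operations are then exactly the expressions that return the answer type β directly, which is the formal sense in which effects live "underneath a lambda".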
I might revisit some of these points in another post. <p> Finally, the whole idea of a "logic of ML values" is very close to the idea of the <a href="http://math.andrej.com/2009/05/29/mathematically-structured-but-not-necessarily-functional-programming/">realizability interpretation of ML</a>. Some people believe that realizability is a distinct and very different alternative to Curry-Howard as a constructive interpretation of logic. My point with this post was mainly to suggest that Curry-Howard need not exclude realizability-like interpretations, once we realize (so to speak) that there is more than one syntactic category of proofs.<br /><br />Noam<hr><b>Big-Step Normalisation</b> (2009-10-28)<br /><br />I realized that the idea from <a href="http://polaro.blogspot.com/2009/10/normalization-by-evaluation.html">the previous post</a>, of factoring normalization as a translation to and from a syntactic model of normal forms, is explored very nicely in a JFP article by Altenkirch and Chapman, indeed with a concrete environment semantics, albeit without a connection to focusing:<br /><br />"Big-Step Normalisation"<br />Thorsten Altenkirch and James Chapman<br /><a href="http://cs.ioc.ee/~james/PUBLICATION_files/tait2.pdf">pdf</a><br />Abstract:<br />Traditionally, decidability of conversion for typed λ-calculi is established by showing that small-step reduction is confluent and strongly normalising. Here we investigate an alternative approach employing a recursively defined normalisation function which we show to be terminating and which reflects and preserves conversion. We apply our approach to the simply-typed λ-calculus with explicit substitutions and βη-equality, a system which is not strongly normalising. We also show how the construction can be extended to System T with the usual β-rules for the recursion combinator. 
Our approach is practical, since it does verify an actual implementation of normalisation which, unlike normalisation by evaluation, is first order. An important feature of our approach is that we are using logical relations to establish equational soundness (identity of normal forms reflects the equational theory), instead of the usual syntactic reasoning using the Church-Rosser property of a term rewriting system.<br /><br />Noam<hr><b>Normalization-By-Evaluation, Normalization-By-Focusing</b> (2009-10-11)<br /><br />So I've moved to Paris. I finally arrived here last week (after a bit of a summer vacation), to begin a post-doc at Université Paris 7. In addition to my host <a href="http://www.pps.jussieu.fr/%7Emellies/">Paul-André Melliès</a>, there is a ridiculous number of people based here doing exciting work in programming and proof theory, and also quite a few visitors. So I'm excited.<br /><br />One of my goals with this post-doc is to better connect the general approach to logic and programming that I was trained in at CMU, which could be called a "syntactic" approach, to the more semantic approach prevalent in Europe. "Syntactic" is in scare quotes because it is sometimes used as a derogatory term, slightly gentler than the epithet "symbol-pushing", and I don't think it is really appropriate. Of course <i>I</i> wouldn't think so! 
But there's a burden of proof in explaining how this thing that looks like symbol-pushing to the outside world is actually just another way (and a useful way) of manipulating semantic content.<br /><br />So this post is in that vein, and was prompted by hearing <a href="http://www.lix.polytechnique.fr/%7Edanko/">Danko Ilik</a> (one of <a href="http://pauillac.inria.fr/%7Eherbelin/">Hugo Herbelin</a>'s students) give a talk two days ago about <a href="http://en.wikipedia.org/wiki/Normalisation_by_evaluation">Normalization-By-Evaluation</a> (NBE), and its apparently well-known (but I guess not well-known enough, since I hadn't seen it before) connection to <a href="http://en.wikipedia.org/wiki/Kripke_semantics">Kripke semantics</a> for intuitionistic logic. The connection is extremely easy to state:<br /><center><br />NBE = completeness ○ soundness<br /></center><br />That is, constructively speaking, NBE for the lambda calculus (a.k.a. a proof system for intuitionistic logic) corresponds to applying the soundness theorem (if something is provable it is true in every Kripke model) followed by the completeness theorem (if something is true in every Kripke model then it is provable). More precisely, the soundness theorem is applied to bring a lambda term into the <i>universal</i> model of contexts ordered by inclusion, and then the completeness theorem takes it back into the lambda calculus, now in normal form. Darin Morrison recently posted some <a href="http://www.cs.nott.ac.uk/%7Edwm/nbe/html/NBE.html">elegant code</a> to the Agda mailing list, demonstrating this explicitly.<br /><br />Okay, that's neat, now let's talk about something else: focusing. If you are reading this blog you have probably heard something about it. 
(If not, you can learn about it <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.1142">here</a> or <a href="http://iml.univ-mrs.fr/%7Egirard/0.pdf.gz">here</a> or <a href="http://perso.ens-lyon.fr/olivier.laurent/llfoc.ps.gz">here</a> or <a href="http://www.lix.polytechnique.fr/%7Ekaustuv/assets/papers/chaudhuri06thesis.pdf">here</a> or <a href="http://www.cs.cmu.edu/%7Edrl/pubs/lzh08focbind/lzh08focbind.pdf">here</a> or <a href="http://www.cs.cmu.edu/%7Enoam/thesis.pdf">here</a>.) Originally, focusing was conceived as a search strategy for linear logic sequent calculus, achieving efficiency by severely restricting when you could apply the different rules, based on the <i>polarity</i> of connectives (positive ⊗, ⊕, !, ∃, versus negative ⅋, &, ?, ∀). The surprising and important result was that this strategy is complete, i.e.:<br /><br />THEOREM (Completeness of focusing) [Andreoli]: If the sequent Γ ⊢ Δ is provable in linear logic, then it has a focusing proof.<br /><br />The converse, soundness, is of course also important, but was obvious in Andreoli's original formulation, where focusing was seen as a <i>search</i> strategy, and a focusing proof was just a special sort of sequent proof.<br /><br />Nonetheless, we have soundness and completeness theorems. Great! Can we compose them?<br /><br />Well, let's not go there quite yet...<br /><br />First, let's move away from linear logic, which may seem exotic, to something more mundane: classical propositional logic. Is there a notion of focusing proofs in classical logic? In fact, there are too many. Whereas the polarities of the connectives of linear logic are all somehow fixed by their inference rules, in classical logic there are many different, equally valid possibilities for inference rules, and as a result the propositional connectives are fundamentally bipolar. 
The flip side is that the classical connectives can be explicitly <i>polarized</i>, e.g., we can choose whether to treat conjunction ∧ positively as ⊗, or negatively as &, and similarly with disjunction, negation, and implication. And this act of disambiguation endows the classical connectives with constructive content.<br /><br />How so?<br /><br />Well, let's take another look at the soundness and completeness theorems for focusing. Since focusing proofs deal with explicitly polarized propositions, and ordinary (classical) proofs don't, we will relate the two notions via an operation |-| that "forgets" polarity, i.e., collapses ⊗ and & both to plain conjunction, ⅋ and ⊕ both to plain disjunction, etc. Then we can state the following:<br /><br />THEOREM (Classical soundness of focusing): If the sequent Γ ⊢ Δ has a focusing proof, then |Γ| ⊢ |Δ| has an ordinary classical proof.<br /><br />THEOREM (Classical completeness of focusing): If the sequent |Γ| ⊢ |Δ| has an ordinary classical proof, then Γ ⊢ Δ has a focusing proof.<br /><br />So <i>now</i> can we compose? <br /><br />Again speaking more constructively, suppose we take as our starting point some version of classical natural deduction, e.g., natural deduction adjoined with reductio ad absurdum, or with Peirce's law, a.k.a. <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.102.8007&rep=rep1&type=pdf">lambda calculus with control operators</a>. And suppose we have a proof (program) establishing (inhabiting) some proposition (type). Now, to apply the completeness theorem to obtain a focusing proof/program, crucially, we must choose some polarization of the connectives. What we obtain by applying the completeness theorem can and will depend on the particular polarization. But then, once we have a focusing derivation, we can again obtain a classical derivation in a deterministic way, by applying the soundness theorem.<br /><br />And what do we get as a result? 
Well, we can make two observations:<br /><br />1. The result is in normal form. I.e., it corresponds to a β-reduced, η-expanded lambda term (or a sequent calculus proof without cuts and with only atomic initial axioms).<br />2. The result is in continuation-passing style (CPS).<br /><br />And in fact, the choice we made when applying the completeness theorem—the way we polarized the connectives—is precisely the way we choose among the many different CPS translations, call-by-value vs call-by-name, etc.<br /><br />[Aside: it is well-known that different CPS transformations correspond to different double-negation translations, proof-theoretically. This may seem at odds with the above statement, since the result of applying completeness followed by soundness is still a proof of the original sequent, not a double-negated sequent. But we can make the connection to double-negation translation more transparent by factoring the soundness theorem by way of <i>minimal</i> logic, i.e., the fragment of intuitionistic logic where negation is defined by ¬A = A ⊃ #, for some distinguished atom #. The soundness theorem is then stated as follows:<br /><br />THEOREM (Minimal soundness of focusing): If the sequent Γ ⊢ Δ has a focusing proof, then Γ^t, Δ^f ⊢ # has an ordinary minimal proof.<br /><br />Here, -^t is the operation that interprets positive connectives "directly" (i.e., conjunction as conjunction, disjunction as disjunction, etc.) and interprets negative connectives as the negation of their De Morgan duals, while -^f interprets negative connectives as their duals, and positive connectives as their negations. Now, the result of applying the classical completeness theorem followed by minimal soundness, followed by the inclusion of minimal logic into classical logic, is not necessarily the original sequent, but a (classically) equivalent one. 
For more details, see Chapters 3 and 4 of <a href="http://www.pps.jussieu.fr/%7Enoam/thesis.pdf">my thesis</a>.]<br /><br />In short, by composing the soundness and completeness theorems for focusing, we obtain a normalization procedure for a proof system/programming language that moreover fixes an evaluation order based on the polarization. We could call this "Normalization-By-Focusing":<br /><center><br />NBF = soundness ○ completeness<br /></center><br />So how does this compare to the original equation, NBE = completeness ○ soundness? First, observe that the swapped positions of soundness and completeness are purely a matter of historical convention! Since focusing was originally conceived as a search strategy for sequent calculus, it made sense to ask that it be complete. But in any case, it is clear that the focusing completeness theorem is playing a role analogous to the Kripke soundness theorem, and the focusing soundness theorem a role analogous to the Kripke completeness theorem. In other words, in both cases we are going out of a formal system with some arbitrary collection of rules, into a very well-behaved system, and then back into a canonical fragment of the original system.<br /><br />That is, I think it is entirely accurate to say that focusing provides a <i>semantics</i> of proofs. And a <a href="http://en.wikipedia.org/wiki/Proof-theoretic_semantics">proof-theoretic semantics</a> at that.<br /><br />But how does it really relate to other semantic approaches? Again, that's what I'm here to find out. 
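In the meantime, the NBE recipe itself (evaluate into a semantic domain, then read back) can at least be made concrete in a few lines of Haskell. This is a generic textbook-style normalizer for untyped λ-terms in de Bruijn notation, producing β-normal forms; it is only meant to illustrate the evaluate/readback factorization, not the Kripke-model construction discussed above, and all the names are mine:

```haskell
-- Terms in de Bruijn notation.
data Tm = Var Int | Lam Tm | App Tm Tm deriving (Eq, Show)

-- Semantic values: functions become actual Haskell functions; neutral
-- (stuck) terms headed by a variable are kept syntactic.
data Sem = SLam (Sem -> Sem) | SNe Ne
data Ne  = NVar Int | NApp Ne Sem

-- "Soundness" direction: evaluate a term in an environment of values.
eval :: [Sem] -> Tm -> Sem
eval env (Var i)   = env !! i
eval env (Lam t)   = SLam (\v -> eval (v : env) t)
eval env (App t u) = apply (eval env t) (eval env u)

apply :: Sem -> Sem -> Sem
apply (SLam f) v = f v
apply (SNe n)  v = SNe (NApp n v)

-- "Completeness" direction: read a value back as a normal term. The Int
-- counts binders passed so far; NVar stores de Bruijn *levels*, converted
-- back to indices on readback.
reify :: Int -> Sem -> Tm
reify k (SLam f) = Lam (reify (k + 1) (f (SNe (NVar k))))
reify k (SNe n)  = reifyNe k n

reifyNe :: Int -> Ne -> Tm
reifyNe k (NVar i)   = Var (k - i - 1)
reifyNe k (NApp n v) = App (reifyNe k n) (reify k v)

-- Normalization = readback after evaluation.
norm :: Tm -> Tm
norm t = reify 0 (eval [] t)
```

For example, <code>norm</code> sends (λf.λx. f x)(λy. y) to λx. x. In the typed setting the readback phase would also η-expand at function type, matching observation 1 above.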
I suspect, though, that there's more than a superficial resemblance between the NBE and NBF equations, and there might be a deeper connection between focusing and the technique of <a href="http://www.andrew.cmu.edu/user/avigad/Papers/forcing.pdf">forcing in proof theory</a>.<br /><br />Noam<hr><b>More on LOGICOMIX, from Richard Zach</b> (2009-09-30)<br /><br />Richard Zach of <a href="http://www.ucalgary.ca/~rzach/logblog/index.html">LogBlog</a> also wrote a <a href="http://www.ucalgary.ca/~rzach/logblog/2009/09/logicomix-epic-search-for-truth.html">review of Logicomix</a> last week, and today followed up specifically addressing the novel's theme of <a href="http://www.ucalgary.ca/~rzach/logblog/2009/09/logic-and-madness.html">Logic and Madness</a>. This was something I left out of <a href="http://polaro.blogspot.com/2009/09/review-of-logicomix.html">my review</a>, but like Zach, I was turned off by the novel's insinuations of a positive correlation. Both posts by Zach point out several historical inaccuracies in Logicomix, where the novel overstates the "madness" of various logicians—important to keep in mind when reading Logicomix (which you should, nonetheless...).<br /><br />Noam<hr><b>Review of LOGICOMIX</b> (2009-09-15)<br /><br />I was in New York City yesterday visiting the French consulate, on a mission to obtain a scientific visa (my second attempt). The bus from Rochester got into Chinatown around 7:45am, so I made it to the consulate by 9am, got in line, handed in my documents, had a few pictures/fingerprints taken, and was told to come back at 3pm to pick up a visa. 
With a few hours to kill in the city, I set out on another mission: to find a copy of Logicomix!<br /><br /><i>LOGICOMIX: an Epic Search for Truth</i><br />Apostolos Doxiadis and Christos H. Papadimitriou<br />art by Alecos Papadatos and Annie di Donna<br /><a href="http://www.logicomix.com/">http://www.logicomix.com/</a><br /><br />First, a note on availability. This book was just released to the public in English, I believe a little over a week ago. The UK edition is currently <a href="http://www.amazon.com/gp/product/0747597200/">sold out</a>, and the US edition is only available for pre-order on Amazon. So the odds of my finding a copy were perhaps not so good. On the other hand, New York City has the <a href="http://en.wikipedia.org/wiki/Strand_Bookstore">Strand Bookstore</a>. With a bit of luck and perseverance, I managed to track down their only copy, which they received two [sic!] days ago (they are ordering more). I finished reading it on the train ride back to Rochester.<br /><br />Second, a disclaimer: I had ridiculously high expectations for this book. It has been at the top of my wishlist for years. Anticipation for Logicomix has been floating around the net since the first half of the decade, based on its basic premise: a comic book about logic and logicians. For example, you can find <a href="http://wadler.blogspot.com/2005/08/logicomix.html">Philip Wadler drooling about the idea</a> back in the summer of 2005. 
I don't remember when I first found out about Logicomix, but it was a while before that (looking back at old email archives, it seems to have been May 2004), sometime after I heard about Papadimitriou's first work of fiction, <a href="http://books.google.com/books?id=QJyX175VCj8C&lpg=PP1&dq=turing%20a%20novel%20about%20computation&pg=PP1#v=onepage&q=&f=false">Turing: a novel about computation</a> (which is a very fun book, by the way, basically a sci-fi thriller with some computer science lessons thrown in, and some pretty hot sex scenes).<br /><br />Given all this anticipation built on the idea of Logicomix, reading the actual Logicomix was bound to be somewhat of a let-down. In a word, I was hoping for something impossibly <i>EPIC</i>. Let me say, though, that I still think Logicomix is a revolutionary book, with a brilliant premise that is very well executed but for a few flaws. The premise is basically this: to convey the drama of the late 19th-/early 20th-century upheavals in mathematics and logic, focusing on the <i>people</i> involved, but also giving a sense of their <i>ideas</i>, and of the passion behind the debate. The driving plot force is the life story of Bertrand Russell, and his (at times fictitious) interactions with some of logic's other major and minor figures.<br /><br />Logicomix <i>is</i> epic in ambition. 
In some form or another, the book looks at topics including: Euclid's Elements, Cantor's set theory, Boolean algebra, Leibniz's calculus ratiocinator, the 1900 International Congress of Mathematicians and Hilbert's problems, the Epimenides paradox and Russell's paradox, Frege's Begriffsschrift and the Foundations of Arithmetic, Principia Mathematica, simple and ramified type theory, Wittgenstein's Tractatus and the picture theory of language, the Vienna Circle, logical positivism, and the murder of Moritz Schlick, Gödel's first incompleteness theorem, algorithms, atheism, World Wars I and II, pacifism, pedagogy, the connection between logic and madness, Aeschylus, street crime in Athens, and free love. And it tries to examine all this as a <i>graphic novel</i>, taking full advantage of the genre's literary/artistic conventions, such as flashbacks and self-reference. The latter fits in well with the subject matter (e.g., Papadimitriou's character asks in the book, "Suppose now you make a complete catalogue of all books that are *not* self-referential"...), but also, more significantly, gives rise to a parallel story relating the intellectual process of writing the book, showing how Logicomix grew through the authors' own debates and discoveries. Finally, a brief appendix at the end tries to fill in some of the historical/mathematical background for the story with a few traditional but well-written articles.<br /><br />Now, please stop.<br /><br />Go find a copy of Logicomix (don't worry, they're out there), and read it! You will enjoy it.<br /><br />I do think the book has flaws, though. I was hoping for Logicomix to do <i>more</i>, but I think in reality it tries to do too much, leaving too much unresolved. There is no doubt something intentional here on the part of the authors, in reference to their subject matter. 
A chunk of the story revolves around the arduous task of writing the Principia, and how Whitehead after ten years finally convinces Russell to publish their manuscript, even in its unfinished, very imperfect state. Through the self-referential segments of Logicomix, it is strongly implied that the authors had similar discussions of their own. But of course that's not an excuse! I would have liked to see some of the historical characters fleshed out more, and could have used less of the "behind the scenes" with the writers. The latter had the feeling of unedited transcripts—again, this was certainly intentional, but I felt it took away from the intensity of the main story. Similarly, I didn't like the overall framing device of having Bertrand Russell relate his life story at a public lecture on "The Role of Logic in Human Affairs": again, because it dilutes the action through another level of indirection, and also because it is a completely implausible lecture. (There are a few other moments where the authors take artistic license with history, which can be a bit grating if you are familiar with the history.) Finally, it seems that the authors didn't know how to end the novel; Logicomix fades away in a highly unsatisfying way.<br /><br />That said, I am very very happy that Logicomix is finally in print. I am waiting for the sequel.<br /><br />Noam<hr><b>Defunctionalizing proofs (revised + extended edition)</b> (2009-07-10)<br /><br />I wrote a paper that elaborates and improves upon the basic idea <a href="http://polaro.blogspot.com/2009/01/defunctionalizing-proofs-and-how-to.html">I talked sketchily about some months ago</a>, of applying defunctionalization towards a formal representation of "Ω-rules" in Twelf, with applications to pattern-matching and cut-elimination. 
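For readers who haven't seen Reynolds' transformation before, here is the idea in miniature (a generic Haskell illustration of my own, not the Twelf encoding from the paper): replace each first-class function that might be passed by a constructor of a first-order datatype recording its free variables, and interpret the constructors with a single apply function.

```haskell
-- Higher-order version: takes a function as an argument.
sumWith :: (Int -> Int) -> [Int] -> Int
sumWith f xs = sum (map f xs)

-- Defunctionalized version: each lambda we might pass is named by a
-- first-order constructor (recording its free variables, here the
-- constant c), and a single 'apply' interprets the constructors.
data Fn = AddConst Int | Double

apply :: Fn -> Int -> Int
apply (AddConst c) x = x + c
apply Double       x = 2 * x

sumWithD :: Fn -> [Int] -> Int
sumWithD f xs = sum (map (apply f) xs)
```

The paper applies the same transform in the other direction of abstraction: compiling the infinitary, pattern-based focusing rules down to first-order derivations.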
The paper was accepted to <a href="http://www.lix.polytechnique.fr/~lengrand/Events/PSTT09/">Proof Search in Type Theory</a>, next month in Montreal.<br /><br />Text: <a href="http://www.cs.cmu.edu/~noam/papers/defunct.pdf">pdf</a><br />Code: <a href="http://www.cs.cmu.edu/~noam/code/defocus.lf">twelf</a><br />Abstract:<br />In previous work, the author gave a higher-order analysis of focusing proofs (in the sense of Andreoli’s search strategy), with a role for infinitary rules very similar in structure to Buchholz’s Ω-rule. Among other benefits, this “pattern-based” description of focusing simplifies the cut-elimination procedure, allowing cuts to be eliminated in a connective-generic way. However, interpreted literally, it is problematic as a representation technique for proofs, because of the difficulty of inspecting and/or exhaustively searching over these infinite objects. In the spirit of infinitary proof theory, this paper explores a view of pattern-based focusing proofs as façons de parler, describing how to compile them down to first-order derivations through defunctionalization, Reynolds’ program transformation. Our main result is a representation of pattern-based focusing in the Twelf logical framework, whose core type theory is too weak to directly encode infinitary rules—although this weakness directly enables so-called “higher-order abstract syntax” encodings. 
By applying the systematic defunctionalization transform, not only do we retain the benefits of the higher-order focusing analysis, but we can also take advantage of HOAS within Twelf, ultimately arriving at a proof representation with surprisingly little bureaucracy.<br /><br /><hr><br />This is very much work in progress—comments much appreciated!<br /><br />Noam<hr><b>The Value of the Free Man</b> (2009-07-03)<br /><br />I found this political lecture by Albert Einstein (delivered before "The Friends of Europe" in London) in an old magazine (World Digest, April 1934), while <a href="http://rochester.craigslist.org/vol/1207095664.html">sorting through piles and piles of books</a>. As far as I can tell it does not exist online, so I transcribed it and am placing it here (and <a href="http://www.cs.cmu.edu/~noam/the-value-of-the-free-man.txt">here</a>):<br /><br /><hr><br /><center><b>The Value of the Free Man</b></center><br /><br />Modern life is worth while because the individual is guaranteed the possibility of free development, free communication, free religion, and, as far as the well-being of human society permits, free initiative of action.<br /><br />Nobody can deny that to-day this foundation of a worthy existence is in considerable danger. Forces are at work which are attempting to destroy the European inheritance of freedom, tolerance, and human dignity. 
The danger is characterised as Hitlerism, Militarism, and Communism which, while indicating different conditions, all lead to the subjugation and enslavement of the individual by the State, and bring tolerance and personal liberty to an end.<br /><br />When I speak of Europe, I do not mean the geographical conception of Europe, but a certain attitude of life and to society which has grown up in Europe and is characteristic of our civilisation.<br /><br />I mean the spirit which was born in ancient Greece and, more than a thousand years later, at the time of the Renaissance, spread from Italy: the spirit of personal liberty and regard for the individual. <br /><br />If to-day our knowledge of natural laws and the technical possibilities for the advancement of our lives have reached such perfection as never before in the known history of mankind, we owe this to the fact that only the individual can really create what is new and worthwhile.<br /><br />Without such freedom there would have been no Shakespeare, no Goethe, no Newton, no Faraday, no Pasteur, and no Lister.<br /><br />There would be no comfortable houses for the mass of the people, no railway, no wireless, no protection against epidemics, no cheap books, no culture and no enjoyment of art for all.<br /><br />There would be no machines to relieve the people from the arduous labour needed for the production of the necessities of life.<br /><br />It cannot be my task to act as judge of the conduct of a nation from the point of view of statesmen or as one who is immersed in the responsibilities and the details of the daily life of politics.<br /><br />Yet everyone who loves the subtler manifestations and the beauty of the human spirit, and who believes himself able to diagnose the deeper causes of the present situation, has the duty of raising his voice in warning.<br /><br />For if we do not make energetic resistance, we shall sink back into a way of life which is indistinguishable from that of the subjects 
of the old Asiatic despotisms.<br /><br />It will be worse than theirs, for the modern despots, while not disposing of more brains than their former colleagues, own the arm---and it is a long and strong arm---of a mighty technical apparatus for the exercise of physical power, and in their schools, radio, and press they possess an instrument of mental and moral influence which ordinary mortals cannot resist.<br /><br />It is of the essence of despotism not only that one individual with practically unlimited authority is at the head, but that society itself becomes the tool of the enslavement of the individual.<br /><br />I would therefore much rather call the enemy of the European spirit "Servitude to the State."<br /><br />This servitude exists when the State is not an instrument in the hands of the citizens, as it ought to be, but when the citizens are in practice unconditionally delivered over to the power of the State. This condition is expressed in the German language by speaking of the population as "human material" (Menschenmaterial).<br /><br />What is it that converts the free European into a slave of the State? My simple answer is: the necessity of military organisation on the part of each separate State. In other words, the cause is international juridical anarchy. I believe that the German has become the complete victim of the State because his country, by its geographical condition, is most exposed to attack, and was therefore most dependent on rigid military organisation.<br /><br />But military organisation means degradation of the individual to an obedient tool, void of will. 
It demands the training of youth into an automatic and uncritical obedience towards a superior.<br /><br />In short, it implies the renunciation of personality and human worth.<br /><br />The petty persons who to-day govern Germany owe their power mainly to the education of former generations in a spirit of submissiveness and servility, a state of mind which is diametrically opposed to the consciousness of personal responsibility of the true European man.<br /><br />If it be true that individuality as the basis of the European ideal is gravely menaced by the necessity for military organisation of the several separate States, there can be no doubt as to how the danger may, and must be, overcome.<br /><br />The separate State must be offered an effective guarantee for its security in relation to neighbouring States.<br /><br />Disarmament according to a fixed plan without guarantee of security by the entire community of States is impossible, if for no other reason than that to-day war is not a society-game where one can be sure that everything will go according to rule.<br /><br />On this main issue of how to win through to civilisation, therefore, I share entirely the French point of view.<br /><br />I am also convinced that a universal collective guarantee for the security of individual States is in itself by no means enough.<br /><br />The military organisations of the separate States ought to be internationalised, that is, transformed into an international police force subject to a super-national authority.<br /><br />If the reader thinks this Utopian, he may, for comparison, imagine a State in which there are no police and no courts. In their place a law exists which makes it compulsory for every citizen to help his fellow-citizen with whom a third citizen dares to interfere. Do you believe that the citizens of such a State would desist from carrying their own weapons about with them? 
I doubt it.<br /><br />The serious efforts of the Governments which have led to the Disarmament Conference show that there exists a general awareness of the danger in which we live. In the light of the foregoing, lack of success is not difficult to understand. There are problems which cannot be brought to solution by small steps and stages.<br /><br />Either the security of States is guaranteed internationally, in which case the separate State needs no military organisation at all, or this guarantee is not achieved, in which case the State must arm as effectively and completely as possible.<br /><br />Nationalism in my opinion is, in this respect, no more than an idealistic basis for the militarist and aggressive mental condition of a people.<br /><br />Hitlerism is a form of State thoroughly adapted to military purposes. This is so true that a "leader" can only hold himself in power by behaving aggressively, at least in appearance, in foreign relations, for he must always hold before the eyes of his people the real or imaginary purpose for which they are asked to sacrifice their freedom.<br /><br />Only through perils and upheavals can nations be brought to further developments. 
It is in times of economic distress such as are experienced everywhere to-day, that one sees very clearly the strength of the moral forces which reside in a people.<br /><br />Let us hope that a historian, delivering judgment in some future period when Europe is politically and economically united, will be able to say that in our days the liberty and honour of this Continent were saved by its Western nations.<br /><br />Let him be able to say they stood fast together against the temptations of hatred and oppression, and that the Western world defended successfully the liberty of the individual, without which life to a self-respecting man is not worth living.<br /><br />The leading statesmen are burdened with tremendous responsibilities to-day as they were twenty years ago.<br /><br />May they succeed through timely agreement in establishing a condition of unity and clarity of international obligations in Europe, so that to every State a war-like adventure must appear foredoomed as utterly hopeless.<br /><br />But we are concerned not merely with the technical problem of securing and maintaining peace, but also with the important task of education and enlightenment. If we want to resist the powers which threaten to suppress intellectual and individual freedom, we must keep clearly before us what is at stake, and what we owe to that freedom which our ancestors won for us after hard struggles.<br /><br />Thus, the work of statesmen can succeed only if it is supported by the serious and determined will of the peoples. 
And the essence of that will is the creative adventure of free citizens.<br /><br />I know very many people who are willing to approve in principle opinions such as have been expressed here, but who will fall back the next moment into the ruts of ordinary political action.<br /><br />But he who is seriously convinced that the future growth of our civilisation is bound up with the preservation and development of the human personality, must be prepared to make sacrifices for his convictions.<br /><br />The sacrifice I am thinking of is the partial abandonment of State sovereignty by the separate States, and the yielding of the principle of egoism in favour of international security.<br /><br />Here lies the hope of Europe and the Western world.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-23993111617641107662009-06-08T15:39:00.000-07:002009-06-08T20:37:23.459-07:00The Proofs of Life After Death: a 20th Century SymposiumBrowsing in the local history section of the <a href="http://rocwiki.org/Rundel_Memorial_Library_Building">Central Library</a> in Rochester, NY, I stumbled on this curious book from 1902:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_8_mM8k3BkRU/Si2UnLPWcnI/AAAAAAAAD24/rtKQAB2iwjw/s1600-h/life-after-death.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 254px; height: 400px;" src="http://2.bp.blogspot.com/_8_mM8k3BkRU/Si2UnLPWcnI/AAAAAAAAD24/rtKQAB2iwjw/s400/life-after-death.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5345091733511631474" /></a><br />Wonderfully, the full text is available online <a href="http://books.google.com/books?id=12MAAAAAMAAJ&dq=%22proofs+of+life+after+death%22&printsec=frontcover&source=bl&ots=th6qV-fzmM&sig=CsXEpFnaID36vI491GwO9RFQsAE&hl=en&ei=j4gtSs7YIozYM4-1gJ4G&sa=X&oi=book_result&ct=result&resnum=1#PPA7,M1">through google 
books</a>. The book is a collection of responses by various thoughtful people to the following request:<br /><blockquote><br />Dear Sir:<br /><br />The Author of this letter, inspired by the untimely decease of a dear friend, and in contemplation of the numerous philosophical and logical theories leading to a belief in the continued existence of the soul, or personal identity after death, begs of you the great favor of a letter, setting out as briefly, or at such length as may be convenient, what you consider to be the strongest reason, or argument, advanced by science or philosophy, or by common sense, in favor of an affirmative answer to this mighty question; or preferably, a statement of your own deductions thereon.<br /><br />It is our desire to obtain from thinkers and educators of the world, an expression—a twentieth century bulletin, on this subject.<br /><br />Our request will impress you doubtless as an unusual one, but none the less will you see the force of it, and its possibilities. Who can measure the impetus such a compilation may have upon the inquiring human mind?<br /><br />May I not have your co-operation in this matter?<br /><br />Thanking you now in advance for the courtesy of a reply, I am<br /><br />Fraternally yours,<br />ROBERT J. THOMPSON.<br /><br />Wellington Ave., Chicago, U. S. A.<br />October, 1901. <br /></blockquote><br />I had fun skimming through the first collection of responses, from "The Scientists". The variety of responses is interesting. Although there's the usual bloviation on the <a href="http://en.wikipedia.org/wiki/Cosmological_argument">cosmological</a> and <a href="http://en.wikipedia.org/wiki/Teleological_argument">teleological arguments</a>, as well as some self-citations to studies confirming the existence of telepathy and other paranormal phenomena, there's also some healthy scientific modesty. E.g., <a href="http://en.wikipedia.org/wiki/%C3%89mile_Duclaux">E. 
Duclaux</a> writes,<br /><blockquote><br />Excuse me for not being able to help you in your investigation. I have no scientific opinion regarding the questions you put. I mean, no opinion that rests on anything but personal beliefs. Besides, I think that everybody is in about the same position and that any reasons that may be brought forth in favor of one's opinion are only good for the person that brings them forth, and that they cannot impress the listener; they are therefore not scientific reasons.<br /></blockquote><br /><a href="http://en.wikipedia.org/wiki/Dmitri_Mendeleev">D. I. Mendélieff</a>'s response begins similarly, although he ends by arguing for the immortality of the soul by analogy with the laws of conservation of mass and energy (an analogy I was surprised to see repeated very often in the other responses!):<br /><blockquote><br />The question as to the continuance of the existence of the soul or personal identity after death, mentioned in your letter of August, 1901, I, as a natural philosopher, consider to be an hypothesis which cannot be proved by evidence of real facts. But as a man educated in a religious sense, I prefer to remain in the belief of the immortality of the soul. It is my opinion that the philosophical side of the question consists in the relation between the soul, the natural forces, and matter; and if it were possible to clear up to some extent this feature of the problem---the relation between force and matter---then also the relation between the soul and natural forces would be forwarded to a great extent.<br /><br />The unquestionable existence of reason, will and consciousness compels us to acknowledge the existence of a special world of relations of this kind, and any rational conclusion in relation to this special world cannot be accepted as proved quite in the same manner. 
Knowledge of physics and mechanics does not give anything in relation to chemistry or in relation to the existence of celestial bodies.<br /><br />We must simply confess that it is impossible to comprehend this question in a general way, but it would also be sheer nonsense to ignore the physical world; and as matter and natural forces must be acknowledged as eternal, it is also probable that the soul is eternal.<br /></blockquote><br />But my favorite response was by James R. Nichols, marveling at the modern-day wonders of <del>Web 2.0</del> the telephone:<br /><blockquote><br />Do we not every day converse with unseen friends long distances away; do we not recognize their familiar voices, in homes separated from us by rivers, woods, and mountains? These voices come out of the darkness, guided by a frail wire which science provides as a pathway.<br />[...]<br />If our friends in this life, dead to us (hidden as they are by the shroud of space), can be seen, and we can hear their voices, their shouts of laughter, the words of the hymns they sing, the cries of the little ones in the mother's arms, is it very absurd to anticipate a time when those dead to us by the dissolution of the body may, by some unknown telephony, send to us voices from a realm close at hand, but hidden from mortal vision?<br /></blockquote>Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com1tag:blogger.com,1999:blog-7486674684844560197.post-83482213280324358792009-05-08T10:11:00.000-07:002009-05-08T14:56:51.087-07:00The Logical Basis of Evaluation Order and Pattern-Matching<a href="http://www.cs.cmu.edu/~noam/thesis.pdf">The Logical Basis of Evaluation Order and Pattern-Matching</a><br /><blockquote><br />An old and celebrated analogy says that writing programs is like proving theorems. 
This analogy has been productive in both directions, but in particular has demonstrated remarkable utility in driving progress in programming languages, for example leading towards a better understanding of concepts such as abstract data types and polymorphism. One of the best known instances of the analogy actually rises to the level of an isomorphism: between Gentzen's natural deduction and Church's lambda calculus. However, as has been recognized for a while, lambda calculus fails to capture some of the important features of modern programming languages. Notably, it does not have an inherent notion of evaluation order, needed to make sense of programs with side effects. Instead, the historical descendants of lambda calculus (languages like Lisp, ML, Haskell, etc.) impose evaluation order in an ad hoc way.<br /><br />This thesis aims to give a fresh take on the proofs-as-programs analogy---one which better accounts for features of modern programming languages---by starting from a different logical foundation. Inspired by Andreoli's <i>focusing proofs</i> for linear logic, we explain how to axiomatize certain canonical forms of logical reasoning through a notion of <i>pattern.</i> Propositions come with an intrinsic <i>polarity,</i> based on whether they are defined by patterns of proof, or by patterns of refutation. Applying the analogy, we then obtain a programming language with built-in support for pattern-matching, in which evaluation order is explicitly reflected at the level of types---and hence can be controlled locally, rather than being an ad hoc, global policy decision. 
As we show, different forms of continuation-passing style (one of the historical tools for analyzing evaluation order) can be described in terms of different <i>polarizations.</i> This language provides an elegant, uniform account of both untyped and intrinsically-typed computation (incorporating ideas from infinitary proof theory), and additionally, can be provided an <i>extrinsic</i> type system to express and statically enforce more refined properties of programs. We conclude by using this framework to explore the theory of typing and subtyping for intersection and union types in the presence of effects, giving a simplified explanation of some of the unusual artifacts of existing systems.<br /></blockquote><br />Successfully defended April 17, 2009!Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com4tag:blogger.com,1999:blog-7486674684844560197.post-3730786270674062172009-02-04T21:59:00.000-08:002009-02-04T22:05:37.182-08:00On the meaning of logical completenessI would really like to have time to read <a href="http://www.kurims.kyoto-u.ac.jp/~terui/meaning.pdf">this paper</a> by Michele Basaldella and Kazushige Terui, as well as many of Terui's other highly interesting <a href="http://www.kurims.kyoto-u.ac.jp/~terui/pub.html">recent papers</a>.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-70589387174182333192009-01-26T19:55:00.000-08:002009-02-28T11:56:50.871-08:00Defunctionalizing Proofs (and how to define a pattern-matching, CPS language in Twelf)I drove back from Savannah and POPL/PLPV/TLDI yesterday. It was a long and exciting drive (in part because of the snow and frozen windshield washer fluid driving through West Virginia, god I hate that stretch of highway), and a long and exciting week at the conference. 
My favorite talk was an invited one by <a href="http://homepages.nyu.edu/%7Ecb125/">Chris Barker</a> on <a href="http://barker.linguistics.fas.nyu.edu/Stuff/barker-popl-abstract.pdf">Wild Control Operators</a>, in the sense of "continuations in the wild". Chris is a linguist who for several years (I think since around 2000) has been advocating the use of continuations to analyze apparently non-compositional phenomena in natural language. He collaborates with brilliant computer scientist <a href="http://www.cs.rutgers.edu/%7Eccshan/">Ken Shan</a>, who wrote his PhD thesis on "Linguistic Side Effects".<br /><br />In this post I want to introduce some <a href="http://www.cs.cmu.edu/~noam/code/focus.lf">Twelf code</a> I wrote after getting back to Pittsburgh. At the PLPV workshop I presented my paper on <a href="http://www.cs.cmu.edu/%7Enoam/papers/rtcd.pdf">Refinement Types and Computational Duality</a>. I didn't exactly time the talk well, and ended up going through about half my slides in ten minutes past the allotted time. (Luckily, I didn't run into Obama's talk, which was better prepared.) The paper itself is similar in that respect, trying to rush through too much material in too short a space. One of the ideas it describes probably too briefly is that of <i>defunctionalizing proofs</i>. Defunctionalization is a trick due to John Reynolds, from his paper on <a href="http://www.brics.dk/%7Ehosc/local/HOSC-11-4-pp363-397.pdf">Definitional Interpreters for Higher-Order Programming Languages</a>. It allows you to take a program with higher-order functions and convert it into one with only first-order functions. Essentially, it boils down to giving a unique tag to every function definition in your program, and then defining a separate "apply" function that tells you how to apply the function denoted by a tag to an argument. 
(A complication is that function bodies may reference escaping variables, so defunctionalization has to be preceded by <a href="http://en.wikipedia.org/wiki/Closure_conversion">closure conversion</a>.)<br /><br />Why would <i>proofs</i> contain functions in the first place? For the same reason programs do: sometimes they are a useful abstraction. An old example from proof theory comes in the work of Paul Lorenzen and Kurt Schütte, who in the 1950s began revisiting Gerhard Gentzen's work via infinitary methods (see <a href="http://math.stanford.edu/~feferman/papers/bernays.pdf">"Lieber Herr Bernays!, Lieber Herr Gödel!"</a> by Feferman for an interesting history). Schütte simplified Gentzen's proof of cut-elimination for Peano arithmetic by replacing the usual schema of induction by the so-called ω-rule, which says, "To prove ∀x:ℕ.A(x), prove A(0), A(1), A(2), ...". At first glance, the ω-rule seems silly: how can we ever apply a rule with infinitely many premises? But a different way of looking at the ω-rule is that it demands a single <i>function</i>, mapping natural numbers n to proofs of A(n). And this function can certainly have a finite representation, for example as a C program.<br /><br />In <a href="http://www.cs.cmu.edu/~noam/research/">some other papers</a>, I described how to view Andreoli's "focusing" proofs as higher-order in this sense, containing functions that map <i>proof patterns</i> (or refutation patterns) to other sorts of proof objects. For example, to refute a proposition of <i>positive polarity</i>, we build a function from proof patterns to proofs of contradiction. This has a simple Curry-Howard interpretation: we can define <i>call-by-value</i> continuations by functions from value patterns to expressions. 
In other words, we actually represent the <i>syntax</i> of a programming language using computation.<br /><br />Dependently-typed functional languages such as <a href="http://wiki.portal.chalmers.se/agda/">Agda</a> and <a href="http://coq.inria.fr/">Coq</a> turn out to be very convenient platforms for embedding such higher-order language definitions. In the dependently-typed setting of <a href="http://twelf.plparty.org/wiki/Main_Page">Twelf</a>, frustratingly, it seems impossible! This is surprising, at least in a superficial way, because Twelf is well-known for allowing language definition via "higher-order abstract syntax" (HOAS). But HOAS is a different idea from the above technique (which I've taken to calling AHOS). Essentially, the problem is that the function space of LF (the type-theoretic core of Twelf) is very weak: abstractly, a function A→B is just a B with a hole for an A, or in other words, A→B represents a <i>substitution function</i>, something into which you can plug in an A to obtain a B. This weakness is in fact crucial for doing HOAS. But it means we absolutely cannot use the LF function space to do AHOS. (Think, for example, what that would mean for the ω-rule: to prove ∀x:ℕ.A(x), we would have to give a proof of A(n) that <i>works the same way</i> for every n. But the predicate A might hold for different n for different reasons, so such a rule would be woefully incomplete.) On the other hand, that we can't define languages using AHOS in Twelf is frustrating, because HOAS is rather nice for other parts of a language definition.<br /><br />Fortunately, it turns out we can have our AHOS and eat HOAS too! And it all boils down to Reynolds' trick. 
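<br /><br />To make Reynolds' trick concrete, here is a small Haskell sketch (a toy example of my own, not taken from the Twelf code below): summing a binary tree in continuation-passing style, first with genuine higher-order continuations, then defunctionalized so that each lambda becomes a first-order tag interpreted by an <tt>apply</tt> function. Note how the fields of each tag are exactly the free variables captured by the corresponding lambda, which is where the closure-conversion step mentioned above shows up.

```haskell
-- A toy illustration of defunctionalization (hypothetical names,
-- unrelated to the linked Twelf development).

data Tree = Leaf Int | Node Tree Tree

-- Higher-order version: the continuation is an actual function.
sumK :: Tree -> (Int -> Int) -> Int
sumK (Leaf n)   k = k n
sumK (Node l r) k = sumK l (\m -> sumK r (\n -> k (m + n)))

-- Defunctionalized version: each lambda above gets a first-order
-- tag whose fields are the variables it captures (closure
-- conversion), and "apply" says how to resume each tag.
data Kont = Done                 -- the initial continuation
          | SumRight Tree Kont   -- \m -> sumK r (\n -> k (m + n))
          | AddLeft  Int  Kont   -- \n -> k (m + n)

sumD :: Tree -> Kont -> Int
sumD (Leaf n)   k = apply k n
sumD (Node l r) k = sumD l (SumRight r k)

apply :: Kont -> Int -> Int
apply Done           n = n
apply (SumRight r k) m = sumD r (AddLeft m k)
apply (AddLeft m k)  n = apply k (m + n)
```

Both versions compute the same sums (<tt>sumK t id == sumD t Done</tt> for any tree <tt>t</tt>), but the second contains only first-order functions, so it can be transcribed into a setting like LF whose function space is too weak to express the higher-order version directly.<br /><br />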
You can check out the code <a href="http://www.cs.cmu.edu/~noam/code/focus.lf">here</a>.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com1tag:blogger.com,1999:blog-7486674684844560197.post-14850297256931826432008-12-24T14:50:00.000-08:002008-12-24T14:54:42.067-08:00bipolarism theoryAnother <a href="http://www.inu.edu.jo/Bi-Polarism%20Theory.htm">interesting conference</a>, on <i>Bi-Polarism Theory and Mathematics is Inconsistent</i>. And while you're in the holiday spirit, check out <a href="http://chrisamaphone.livejournal.com/671156.html">The Twelf Days of Christmas</a>.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-83709973389237924282008-12-04T14:16:00.000-08:002008-12-04T14:18:06.959-08:00a conference dedicated to Per Martin-Löf on the occasion of his retirementThis promises to be <a href="http://www.math.uu.se/PFM/">a very exciting conference</a>.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-58396674126460804962008-11-18T20:45:00.000-08:002008-11-18T22:08:48.850-08:00lies and updatesSo in case you haven't realized, those <a href="http://polaro.blogspot.com/2008/07/refinement-types-and-computational.html">promises</a> to post more on this blog were just lies. Not gonna happen. But in the hope of generating a bit more content, and also to give this blog a more Web 2.0 feel, I've added a "google reader shared items" bar, which links to various posts by other people that I find interesting. Don't worry, there aren't plans for an AdSense bar in the near future.<br /><br />A few other updates:<br /><ul><br /><li>This past weekend there was a small workshop at CMU, <a href="http://www.hss.cmu.edu/philosophy/bernaysfest.php">Bernaysfest</a>, dedicated to the life and work of Paul Bernays, a collaborator of David Hilbert. 
I snuck in, and heard a bunch of interesting talks on the history of proof theory. (<a href="http://users.ipfw.edu/buldtb/">Bernd Buldt's</a> talk, listed as TBA on the schedule, was on "Mathematical practice and platonism: A phenomenological perspective", and quite interesting once I understood what he was trying to do. He wants to give an explanation for why so many mathematicians are platonists, why mathematical objects feel "real" to them, by looking at mathematics as a social process.) The workshop also brought <a href="http://home.uchicago.edu/~wwtx/">Bill Tait</a>, who last week gave a standing-room-only talk about cut-elimination in predicative systems (which <a href="http://www.ucalgary.ca/~rzach/logblog/2008/11/tait-cut-elimination-for-predicative.html">Richard Zach blogs about</a>).<br /><li>A month ago I gave a fluffy talk on "walking the way of duality" at the Student Seminar Series. Here are <a href="http://www.cs.cmu.edu/~noam/talks/theway.pdf">the slides</a>, but I'm keeping the video to myself, thank you very much.<br /><li>My <a href="http://www.cs.cmu.edu/~noam/refinements/">refinement types paper</a> was accepted to <a href="http://sneezy.cs.nott.ac.uk/darcs/plpv09/">PLPV</a>, as was <a href="http://www.cs.cmu.edu/~drl/pubs/lh08pdt/lh08pdt.pdf">Dan and Bob's paper</a>, so we'll be going to Savannah.<br /></ul>Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-48956056544653383652008-07-24T06:23:00.000-07:002008-07-24T03:24:03.890-07:00Refinement types and computational dualityApologies that this blog hasn't seen much action for a while. Hopefully that will get fixed soon, but then again the whole thesis thing might get in the way. 
This post is just to say that recently I've been having fun writing up a <a href="http://www.cs.cmu.edu/~noam/papers/rtcd.pdf">paper</a>:<br /><br /><blockquote><br /> One lesson learned painfully over the past twenty years is the perilous interaction of Curry-style refinement typing with evaluation order and side-effects. This led eventually to the value restriction on polymorphism in ML, as well as, more recently, to similar artifacts in prototype refinement type systems for ML with intersection and union types. For example, some of the traditional subtyping laws for unions and intersections are unsound in the presence of effects, while union-elimination requires an <i>evaluation context restriction</i> in addition to the value restriction on intersection-introduction.<br /><br /> This paper reexamines the interaction between refinement types and evaluation order from the perspective of computational duality. Building on recent work on the Curry-Howard interpretation of <i>focusing proofs</i> as pattern-matching programs written in infinitary, higher-order syntax, I give a simple explanation of intersection and union types in the presence of effects, reconstructing phenomena such as the value restriction and differing subtyping laws for call-by-value and call-by-name, not as ad hoc artifacts, but indeed as logical theorems. However, this abstract account using infinitary syntax has the drawback that refinement checking is undecidable---to address this, I show how to systematically construct a finitary syntax via defunctionalization, and then give an algorithmic refinement checking system. Parallel to the text, a formalization in the dependently-typed functional language Agda is described, both for the sake of clarifying these ideas, and also because it was an important guide in developing them. 
As one example, the Agda encoding split very naturally into an intrinsic ("Church") view of well-typed programs, and an extrinsic ("Curry") view of refinement typing for those programs.<br /></blockquote><br />Code available <a href="http://www.cs.cmu.edu/~noam/refinements/">here</a>.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-24401617025619011262008-04-22T08:41:00.000-07:002008-04-22T09:13:49.664-07:00Today is election day......so maybe I can post about a couple unusual things today. First: go vote! If you live in Pennsylvania, are eligible to vote, are registered with a political party, and haven't voted already. (Not sure what segment of my huge readership that applies to.) Yesterday I heard Barack Obama speak at the Petersen Events Center. The content of the speech wasn't all that exciting (since it's all been said before), but I was very impressed by the fact that he made the effort for a last-minute, late-night stop in Pittsburgh. You could see on his face just how tired he was, but also that he knew the stakes, and was determined to keep standing up. It somewhat reminded me of <a href="http://en.wikipedia.org/wiki/Kibadachi">kibadachi</a> practice.<br /><br />Second: over the weekend I met an old friend from college, Ben Schmidt, who now studies American history at Princeton. Later I was looking at his website, and saw that he has <a href="http://www.princeton.edu/~bschmidt/Papers/index.html">a very interesting paper</a> about using a variation of Google's PageRank algorithm to rank PhD programs:<br /><br /><i>Ranking Doctoral Programs by Placement: A New Method</i><br />Benjamin M. Schmidt and Matthew M. 
Chingos <br />Forthcoming in PS: Political Science & Politics <br /><blockquote><br />Most existing rankings of graduate programs rely on some measure of faculty quality, whether it be reputation (as in the National Research Council and US News rankings), honors (prizes, membership in learned societies, etc.), or research quality and quantity (as in citation studies and publication counts). We propose a ranking that focuses instead on the success of a program’s graduates. By using stochastic matrices, it is possible to create a system in which programs essentially “vote” for each other by hiring graduates of other programs as professors in their own department. This system allows us to create an objective, results-oriented measure that is well suited to measure the quality of departments whose graduates aspire to academic positions. The rankings correlate well with reputational ranking systems, and include a per capita measure that recognizes the accomplishments of smaller but high quality programs. <br /></blockquote>Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-14885622191447751402008-04-14T15:53:00.000-07:002008-04-14T15:59:55.189-07:00a paper<i><a href="http://www.cs.cmu.edu/~noam/papers/focbind.pdf">Focusing on binding and computation</a></i><br /><a href="http://www.cs.cmu.edu/~drl/">Dan Licata</a>, <a href="http://www.cs.cmu.edu/~noam/">Noam Zeilberger</a>, and <a href="http://www.cs.cmu.edu/~rwh/">Robert Harper</a><br />To appear at <a href="http://www2.informatik.hu-berlin.de/lics/lics08/">LICS 08</a><br /><blockquote><br />Variable binding is a prevalent feature of the syntax and proof theory of many logical systems. In this paper, we define a programming language that provides intrinsic support for both representing and computing with binding. 
This language is extracted as the Curry-Howard interpretation of a focused sequent calculus with <i>two</i> kinds of implication, of opposite polarity. The <i>representational arrow</i> extends systems of definitional reflection with a notion of scoped inference rules, which are used to represent binding. On the other hand, the usual <i>computational arrow</i> classifies recursive functions defined by pattern-matching. Unlike many previous approaches, both kinds of implication are connectives in a single logic, which serves as a rich logical framework capable of representing inference rules that mix binding and computation.<br /></blockquote><br />Comments appreciated!Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com1tag:blogger.com,1999:blog-7486674684844560197.post-82473437780955329362008-04-12T14:36:00.000-07:002008-04-12T16:17:44.940-07:00just wait a second...An article of faith is that the undecidability of the halting problem makes it impossible to restrict to terminating programs and still have a general-purpose programming language. On the other hand, another article of faith is that we'd be in big trouble if we allowed non-terminating <i>proofs</i> in mathematics (e.g., "Trust me, I have a proof of the Riemann Hypothesis, just wait a second..."). Why the double standard? Do we shackle ourselves by using a fragment of mathematics that disallows (we hope) non-terminating proofs? Or have we been too hasty to accept non-terminating programs? The latter stance is embraced in <a href="http://en.wikipedia.org/wiki/Total_functional_programming">total functional programming</a>. But I think the former is also not super crazy. With a more interactive notion of proof, potential non-termination would not be so dangerous.<br /><p><br /><b>Z:</b> "I have a proof of the Riemann Hypothesis, can you add it to the Book?"<br /><br><b>E:</b> "Of course, but first can you tell me how you handle this tricky case?" 
<br /><br><i>E presents tricky case to Z</i><br /><br><b>Z:</b> "Sure, no problem, this case is easily handled by, um, some math, er, ..."<br /><br><i>Z stalls</i><br /><br><i>E waits patiently, but keeps the Book firmly shut</i><br /><p><br />In some sense, terminating proofs are just a time-saving mechanism for referees. It would indeed be a shame if Z wasted all of E's time with this pseudoproof of RH. On the other hand, E could be doing other things at the same time, such as drinking coffee, or running SETI@Home.Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0tag:blogger.com,1999:blog-7486674684844560197.post-49450096668576323392008-03-13T06:27:00.000-07:002008-03-13T07:17:12.387-07:00anglo-franco photosIn Nottingham, a statue of <a href="http://en.wikipedia.org/wiki/Robin_Hood:_Prince_of_Thieves">Kevin Costner</a>:<br /><img src="http://lh3.google.com/noam.zeilberger/R9ko9UzcGOI/AAAAAAAABMo/yG1MtZmVqG8/IMG_6111.JPG?imgmax=512"><br /><br />In London, my sister's cat Arras:<br /><img src="http://lh4.google.com/noam.zeilberger/R9kpGkzcGPI/AAAAAAAABMw/-1TEAK9M2vU/IMG_6117.JPG?imgmax=512"><br /><br />A group of <a href="http://www.youtube.com/watch?v=WsOofIHbkco&feature=related">angry Serbian protesters</a> in London:<br /><img src="http://lh5.google.com/noam.zeilberger/R9kpP0zcGQI/AAAAAAAABM4/EMNcfmbI0Ck/IMG_6129.JPG?imgmax=512"><br /><br />Chango Spasiuk!<br /><img src="http://lh6.google.com/noam.zeilberger/R9kpeEzcGRI/AAAAAAAABNA/kTHh2zqTOg0/IMG_6136.JPG?imgmax=512"><br /><br />The students at the Ecole Polytechnique in Lozère draw some neat murals:<br /><img src="http://lh6.google.com/noam.zeilberger/R9kp5EzcGUI/AAAAAAAABNY/-m_0FFIqTlo/IMG_6143.JPG?imgmax=512"><br /><br />Paris has a <a href="http://www.washingtonpost.com/wp-dyn/content/article/2007/03/23/AR2007032301753.html">successful bike-sharing program</a>:<br /><img src="http://lh4.google.com/noam.zeilberger/R9kqAkzcGVI/AAAAAAAABNg/smU4ITWTk1E/IMG_6158.JPG?imgmax=512"><br /><br 
/>The French are <i>serious</i> about their fast food.<br /><img src="http://lh4.google.com/noam.zeilberger/R9kqSkzcGWI/AAAAAAAABNo/AWm5er-BpiI/IMG_6163.JPG?imgmax=512"><br /><br />The town of <a href="http://en.wikipedia.org/wiki/Antony%2C_Hauts-de-Seine">Antony</a>, where I stayed with Kaustuv and Vivek, had a delightful Sunday market.<br /><img src="http://lh3.google.com/noam.zeilberger/R9kq4UzcGYI/AAAAAAAABN4/LAuxq15qA7Q/IMG_6185.JPG?imgmax=512"><br /><br />Un canard:<br /><img src="http://lh6.google.com/noam.zeilberger/R9krEEzcGZI/AAAAAAAABOA/HZgqfo6nqUE/IMG_6205.JPG?imgmax=512"><br /><br />Indian fast food, near the Gare du Nord:<br /><img src="http://lh4.google.com/noam.zeilberger/R9krQkzcGaI/AAAAAAAABOI/VMZJ5geXURo/IMG_6213.JPG?imgmax=512"><br /><br />My gracious host Kaustuv:<br /><img src="http://lh3.google.com/noam.zeilberger/R9krWUzcGbI/AAAAAAAABOQ/UKfTxG_FtOI/IMG_6217.JPG?imgmax=512"><br /><br />Jean-Yves, after his first lecture on <a href="http://iml.univ-mrs.fr/~girard/GdI5.pdf">the Geometry of Interaction V</a>:<br /><img src="http://lh4.google.com/noam.zeilberger/R9krckzcGcI/AAAAAAAABOY/exhSEuLuVeA/IMG_6218.JPG?imgmax=512"><br /><br />Neither the subject nor the cameraman were prepared for this photo, hence<br />Paul-André came out a bit blurry. In real life, he is quite sharp!<br /><img src="http://lh3.google.com/noam.zeilberger/R9krhUzcGdI/AAAAAAAABOk/z87VVACUZjw/IMG_6219.JPG?imgmax=512">Noamhttp://www.blogger.com/profile/09175792123472366590noreply@blogger.com0