Hacker News
Formalising Gödel's incompleteness theorems, I (lawrencecpaulson.github.io)
83 points by furcyd on May 18, 2022 | hide | past | favorite | 51 comments


I started my PhD recently, and in the process I've had to learn Isabelle/HOL, and I gotta say that using it sort of feels like you're unlocking the keys to understanding the universe.

Like, I understand that it's fundamentally still code, and so there are potential bugs in the kernel, but the sheer versatility and level of automation you get within Isabelle/HOL make the development and understanding of proofs feel like outright magic sometimes. Proof is really hard, and having a powerful tool to mechanically check it has nearly limitless potential.


> using it sort of feels like you're unlocking the keys to understanding the universe.

Completely agree!

> having a powerful tool to mechanically check it has nearly limitless potential

As someone working in the industry and playing with theorem provers in my spare time, I've never been able to bridge the gap between industry and theory enough to be able to use these formal methods on projects that I'm on.


Sadly I too have not found an employer that is willing to let me use any kind of theorem-proving software in production. Back when I was at Jet doing F#, I almost got permission to use F-Star [1], but that project never got to that point.

My PhD is on proving properties of robotic systems, so maybe in 5-6 years I'll have fully bridged industry and theory, but I wouldn't hold my breath :)

[1] https://www.fstar-lang.org/


Theorem-proving gets some use in the Ethereum community. Smart contracts are an ideal application for it: the code is short and bugs can be disastrous.


> Sadly I too have not found an employer that is willing to let me use any kind of theorem-proving software in production. Back when I was at Jet doing F#, I almost got permission to use F-Star [1], but that project never got to that point.

As someone who doesn't code for a living, I am certain that this is a naïve question, but, if you want to use theorem-proving software for verification of existing code, does that require employer permission? Is the issue something like, e.g., you need the theorem-proving software and the code to be verified on the same machine, and you're not allowed to install the theorem-proving software on a work machine or to move the code to a non-work machine?


I meant that I couldn't get budget to do it. Obviously I could do a code verification in my free time, but my goal was to start the project with everything that could be written in FStar being written in FStar, with my coworkers onboard.

I had root access to my laptop so I could install anything I wanted, but it's hard to do anything if you don't have budget allocated to do so.

EDIT: I also should point out that I wanted to export the code from FStar to F# and use the exported and verified code.


I work on a project that is 20 million lines of code - while large, someone reading this is working on a project much larger. There is no way any human working alone can prove the whole thing. I have no idea how to even get started. Even if I did, until I can show that theorem proving is useful in the real world I'm stuck doing it as a side project: my boss will encourage it, but only if I spend no more than a couple of hours a week on it.

Now, in my company we are worried about bugs and reliability. If I can show via example that theorem proving catches bugs, they will train all developers on it and demand all code be proved. However, proving quality needs to be worth the effort: if it slightly increases quality but at the expense of making projects take 10 times as long, it is probably not worth it except for specific areas of concern.


I feel like it's much harder to shoehorn in proofs to existing codebases than it is to start with the proofs first.

The automation in Isabelle is quite good; good enough that I don't feel that writing pure new code with proofs takes me a prohibitively long time (about 2-3x writing the equivalent Haskell; sledgehammer ftw), but I can't imagine trying to retroactively prove any of my old projects, which are considerably smaller than 20 million lines.

One big perk for using Isabelle (or other proof systems), outside of the obvious correctness proofs, is being able to prove equivalence. If you can inductively show that two functions have the same inputs and outputs (say one was quick to write or easy to prove things about, the other is efficient), then you can tell the code exporter to replace any instances of the inefficient version with the efficient version. This means that your optimization is proven safe, and is done for free, which is worth its weight in gold when used correctly.
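The replace-the-slow-version-with-the-proven-equal-fast-version idea can be sketched in a few lines. Here it is in Lean 4 rather than Isabelle (the functions and names are my own illustration, not from the thread): a naive sum and a tail-recursive sum, proved equal by induction, after which a code generator may safely emit the fast one.

```lean
-- Hypothetical sketch: a naive and a tail-recursive list sum.
def sumSlow : List Nat → Nat
  | [] => 0
  | x :: xs => x + sumSlow xs

def sumFast (acc : Nat) : List Nat → Nat
  | [] => acc
  | x :: xs => sumFast (acc + x) xs

-- The equivalence lemma: once this is proved, substituting
-- `sumFast 0` for `sumSlow` is known to be safe.
theorem sumFast_eq (acc : Nat) (l : List Nat) :
    sumFast acc l = acc + sumSlow l := by
  induction l generalizing acc with
  | nil => simp [sumFast, sumSlow]
  | cons x xs ih => simp [sumFast, sumSlow, ih, Nat.add_assoc]
```

In Isabelle the analogous step is registering such a lemma as a code equation so the exporter uses the efficient form automatically.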

Of course, this really only applies to pure and mathey functions. It's substantially harder to prove properties about business logic.


> I feel like it's much harder to shoehorn in proofs to existing codebases than it is to start with the proofs first.

This is one of the big problems in CS. We have code that has been in production for 20 years and as such has very few bugs left. We know from experience that replacing that code with new proven-correct code results in something more buggy than the original (for a few years, while we fix the incorrect requirements). We are still replacing a lot of old code with proven-correct code, but only as the CPU it is tied to goes out of production (portable code is thus low on the to-replace list). If we could easily prove existing code correct, it would potentially save a lot of money, as we could keep old code around and add correct features.


> We have code that has been in production for 20 years and as such has very few bugs left.

That seems optimistic!


Off topic, but I'm an engineer whose math extends (shakily) into basic proofs (though I'm rusty). I tinkered quite a bit with Lean a couple of years ago and felt comfortable with it as a language, but I don't actually grok how to use it effectively for proving theorems (I'd love to get into formal verification).

Do you have any recommendations for getting started? Something like a "Proofs for Hackers"?


Software Foundations is always a great place to start! (https://softwarefoundations.cis.upenn.edu/) It uses Coq not Isabelle, but it's a great starting point and contains a ton of useful information. The book is one large literate program, so it's easy to follow along and do the exercises.


Software Foundations (mentioned in a sister comment) is great, and highly recommended. If you want something more Isabelle-focused, the Concrete Semantics book is pretty good [1].

Not exactly the same, but if you want something a bit more approachable than Isabelle or Coq or Agda, it might be worth looking into TLA+. That can introduce you to predicate logic and state machines and set theory if you're not already familiar with them, and TLA+ is a lot more programmer-centric and less pure-mathey than Isabelle. TLA+ offers model checking for "brute force" verification, and also offers a formal proof system called TLAPS. Lamport's talks on TLA+ are quite good [2], and I found that learning Isabelle was a lot easier once I had a lot of practice with TLA+.

[1] http://concrete-semantics.org/ [2] https://lamport.azurewebsites.net/video/videos.html


Amen!


Someone needs to tell the mathematicians they are allowed to use variable names longer than a single character now.


I can’t fathom anyone who has actually spent significant time performing mathematical manipulations with pencil and paper thinking that this is a good idea. Mathematicians have evolved notational conventions that work well for reading, writing, and calculation.


And I can't fathom why anyone would prefer to declare fundamental components of their proof using identifiers as terse as this:

   nominal_datatype fm =
       Mem tm tm    (infixr "IN" 150)
     | Eq tm tm     (infixr "EQ" 150)
     | Disj fm fm   (infixr "OR" 130)
     | Neg fm
     | Ex x::name f::fm binds x in f

These are not pencil-and-paper formula notation, they are declarations of named entities, and they don't need to be so terse.


These are obvious abbreviations in the context. If you're working on a proof you might not want lots of 'theorem' and 'formula' to clutter up your screen.

The programming world converged on full word variables, but it also converged on {} instead of BEGIN END etc.

Alex


They fscked up when they decided to omit the dot when multiplying. So now it's impossible to differentiate between "varname" and "var*name".


> They fscked up when they decided to omit the dot when multiplying. So now it's impossible to differentiate between "varname" and "var*name".

Well, since we don't use multi-letter variable names, no such distinction is necessary! But if, for some reason, we did do so, then these two cases would be typeset in LaTeX as \(varname\) and \(var\,name\) (or, rather, \(\mathit{var}\) and \(\mathit{var}\,\mathit{name}\), since TeX correctly interprets \(var\) as semantically `var` anyway).


> Well, since we don't use multi-letter variable names, no such distinction is necessary!

You remember the story about the Fox and the Grapes, right?


If you need multiple letter variable names, your function may be too long, or you are using a language that does not let you express yourself.



They aren't using ASCII. They just invent new glyphs.


That elitism has to stop; people should design hieroglyph-free programming languages optimized for mathematical expressions, and proofs should be written in them. The lack of IDE support in regular MathML or PDF math is astonishing.


>> Formalisation of the internal calculus, HF

HF seems to mean 'hereditarily finite'.

I wish someone would present a proof that 'inlines' the explanation of the terminology used and defines terms before using them. For something as fundamental as Gödel's theorems, this would be worth doing.


I feel like Gödel incompleteness is a cheap trick that misleads people into thinking there are statements in ZFC that can't be proven, when those unprovable statements are contrived, useless meta-statements.


I'm going to assume you mean that the true unprovable sentences that Gödel constructed are contrived. I don't think I can convince you otherwise, except to say that others thought like you and eventually Paris and Harrington came up with [1]. You can make even more "concrete" statements, see the work of Chaitin, but all this stems from Hilbert's very concrete 10th problem [2].

I must say, be careful with "reasoning". For example, once a philosophy PhD minimized mathematics eloquently by saying all mathematical results are "tautological", hence uninteresting. Hence, why bother studying it, and in your case, why bother thinking Gödel did anything special.

Why bother about anything at all... there's the rub.

[1] https://en.wikipedia.org/wiki/Paris–Harrington_theorem

[2] https://en.wikipedia.org/wiki/Hilbert's_tenth_problem


> Since Peano arithmetic cannot prove its own consistency by Gödel's second incompleteness theorem, this shows that Peano arithmetic cannot prove the strengthened finite Ramsey theorem.

This seems quite circular: the (claim mine) contrived statements enabling Gödel's second incompleteness theorem make it unable to prove a non-contrived statement?

Thanks for the reference though, I'll admit that is a topic I'm not familiar enough with to fully challenge it.

> I must say, be careful with "reasoning". For example, once a philosophy PhD minimized mathematics eloquently by saying all mathematical results are "tautological", hence uninteresting. Hence, why bother studying it, and in your case, why bother thinking Gödel did anything special.

Yeah, that's a classic. It's a good thought experiment, but it only proves that one must be careful not with the pruning that can enable high-level reasoning, but with fallacious or misleading reasonings. Yes, in theory mathematical proof chains are only tautologies derived from ZFC/higher-order logic, so mathematics doesn't say anything new. However, in practice, the task of unfolding reasoning chains and being able to refer to past lemmas as abstractions/objects enables the reader to optimize for cognition, and hence the more proofs advance, the more we can discover, retain, understand, and refer to useful tautological yet transitive knowledge.


This is not true. For example, the statement "there exists a bijection between the real numbers and the powerset of the integers" is not contrived or meta or useless, but is independent of ZFC.


independently of ZFC, can your statement be mathematically proven in a reasonable alternative axiomatic system?

Also, I am aware there are non-contrived statements that can't be proven in ZFC, such as https://en.wikipedia.org/wiki/List_of_statements_independent.... I'm only claiming that the incompleteness theorems only find contrived ones, and that for the non-contrived ones humanity has found, it is NOT that they are mathematically unprovable, it is that ZFC is not enough.


There are a variety of stronger axiomatic systems (typically ZFC + some other axioms). Some of them prove the continuum hypothesis (which is what I stated), others prove the negation. There's no way to decide which one is "right", though, just like there's no way to tell if the Axiom of Choice is "right".

Your second paragraph is confused. Given an axiom system like ZFC, there are (a) statements that can be proved true or false using it, (b) statements that can't be proved that are true in a particular model, (c) statements that can't be proved that are false in that model. It's set (b) that the incompleteness theorem tells us must exist. The theorem doesn't "find" statements, it proves that (b) exists by constructing a particular one, which is necessarily meta because it applies to every formal system.

However, we do not have access to a model which tells us the answers for things like the CH. So all you can do is decide on the axiom scheme you like, and then some things are provable. You can always add more axioms, like CH, but you can add their negation instead, if you want. So there's no sense in which there's really a right answer for CH but we haven't found it yet.


> independently of ZFC, can your statement be mathematically proven in a reasonable alternative axiomatic system?

There are plenty of people who find ZFC + ¬CH a reasonable axiomatic system.

EDIT: I misread samth's statement (https://news.ycombinator.com/item?id=31424581), which I think is provable in ZFC; I thought their statement was ¬CH.


My statement was slightly misphrased, it was intended to be just a statement of CH.


It seems it's not enough? https://mathoverflow.net/questions/273861/independence-over-... (although it's CH, not the negation of it). What additional axioms would be needed there?


I misread the initial statement as "there exist sets intermediate in size between the real numbers and the integers", which may have been what samth meant (https://news.ycombinator.com/item?id=31424581). That is ¬CH, so ZFC + ¬CH can obviously prove it. If you meant that ZFC + ¬CH is not enough to prove interesting mathematics, then it surely is, since ZFC itself is already enough to prove interesting mathematics. If you meant that ZFC + ¬CH (or ZFC + CH) is not enough to decide all "natural" statements … well, I agree with that, but it seemed that you were arguing against, not for, that position (https://news.ycombinator.com/item?id=31423948).


I am making the distinction between natural statements and contrived statements. I claim Gödel's construction only finds contrived statements. However, there exist non-contrived, natural statements that are unprovable in ZFC. My question is: what set of additional axioms is needed to have maximal coverage/provability of those independent statements, while introducing the fewest possible axioms? And yes, indeed, ZFC + ¬CH seems like a good first step.


There are a large number of barriers to formalizing your question, such as (i) the definition of natural, (ii) the definition of "maximal" when comparing countably infinite sets, and (iii) the fact that asking about the provability of X in vanilla ZFC is still arguably interesting in the system ZFC+X (see: reverse mathematics), so it's not clear what adding it as an axiom does towards the "largeness" of the set of provable natural statements.


About formalising (i), I guess it's about accepting everything that isn't a meta-token; I don't remember the Gödel incompleteness theorems sufficiently well to be precise about it. Anyway, the useful list would be the Wikipedia list of independent statements, since most of them are not contrived (although very abstract). So yes, my question can be specialized to how to achieve maximum coverage of that Wikipedia page. Otherwise, great comment, and it's been a while since I last heard the term reverse mathematics!


Well, Gödel set out to prove incompleteness. The first statement that comes to mind is “there is an unprovable sentence” but that is too general. What Gödel did was find out that “this statement is unprovable” is “definable” in Peano’s system (once it has been encoded), and that’s it.

Nothing especially contrived there. Less so for a professional logician.
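The "statement talking about its own encoding" move is the same self-reference trick that makes quines possible. As an analogy only (this is an illustration of the diagonal idea, not Gödel's actual construction), here is a minimal Python quine:

```python
# A quine: the printed output is exactly this program's own source.
# Self-reference is achieved by substituting a description of the
# program into itself, much as the diagonal lemma substitutes a
# formula's own code into that formula.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the printed output as a program prints the same text again, which is the fixed-point property the diagonal lemma delivers for formulas.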


Halting problem and difficulties in compilation and static analysis are not contrived.


If Gödel's theorem applies to all formal systems, then in formalizing Gödel's theorem, doesn't that make the theorem apply to itself?

So the calculus that formalises the theorem is either inconsistent or there are other true statements about it that can't be proven? Which is it? Can someone with a bigger brain elaborate on how the theorem applies to itself in a language for tiny brained people? No big words please.


I think the short version is: just because any formal system is guaranteed to be incomplete doesn't mean it can't prove useful things.

And even an incomplete formal system can prove Gödel's Incompleteness Theorem.


Godel’s incompleteness theorems are about formal systems.

A formal system can be thought of as an algorithm that takes a purported formal proof of some arithmetical statement (e.g. there is no largest twin prime) and returns true/false depending on whether the proof is correct. Call the algorithm a “proof checker” and say it “accepts” a statement if it returns true for some purported proof of the statement.

Godel’s first incompleteness theorem says that at least one of the following is true:

(1) There is some statement such that the checker accepts both the statement and the statement’s negation (e.g. the checker accepts both “2 + 2 = 4” and “2 + 2 ≠ 4”). This is called inconsistency.

(2) There is some true statement which the checker will not accept. This is called incompleteness.
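The "formal system as proof-checking algorithm" picture above can be made concrete with a toy system (the axiom set and names below are illustrative inventions, far too weak for the incompleteness theorem to bite): a proof is a list of formulas, each of which must be an axiom or follow from two earlier lines by modus ponens.

```python
# Toy "formal system" in the algorithmic sense: formulas are strings,
# a proof is a list of them, and the checker accepts a statement iff
# the proof is a valid derivation ending in that statement.
AXIOMS = {"p", "p -> q"}  # hypothetical axiom set

def follows_by_mp(a: str, b: str, c: str) -> bool:
    """Modus ponens: from a and a -> c, conclude c (b must be 'a -> c')."""
    return b == f"{a} -> {c}"

def check_proof(proof: list[str], statement: str) -> bool:
    """Return True iff every line is an axiom or a modus-ponens
    consequence of earlier lines, and the last line is `statement`."""
    derived: list[str] = []
    for line in proof:
        ok = line in AXIOMS or any(
            follows_by_mp(a, b, line) for a in derived for b in derived
        )
        if not ok:
            return False
        derived.append(line)
    return bool(derived) and derived[-1] == statement
```

For example, `check_proof(["p", "p -> q", "q"], "q")` accepts, since "q" follows from the two axioms by modus ponens. Gödel's theorem says any such checker rich enough to handle arithmetic falls into case (1) or case (2).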

Since Godel’s first incompleteness theorem is not a formal system, it does not apply to itself.


There are two more options:

(3) The system's axioms are not finitely enumerable (i.e. not finite, and cannot be generated by a generator of finite length; e.g. a system where every true statement is an axiom and there are infinitely many axioms)

(4) The system is not complex enough to express arithmetic (e.g. propositional calculus)


Good point. For (3), I should have said something about the axioms being computable.

I think I already excluded cases like (4) by stating that the checker operates on "arithmetical statement[s]".


> I think I already excluded cases like (4) by stating that the checker operates on "arithmetical statement[s]".

It's at least a bit vague, I think you need something that is at least as strong as Robinson arithmetic. In particular, I believe that if you throw out multiplication (or addition, and keep multiplication), you get a complete theory.


>Since Godel’s first incompleteness theorem is not a formal system, it does not apply to itself.

Except the author is formalizing the theorem, meaning the theorem is derived within a formal system. That means the system that was created to derive this theorem is itself either incomplete or inconsistent.

So Yes it does apply to itself.


I asked a similar question a while back:

https://news.ycombinator.com/item?id=27915824


> true statements about it that can't be proven?

It's missing the qualifier: can't be proven in that system. They can be examined from outside the system, or, say, with a different system.


The problem still continues. Even if you add the unprovable formula of System1 (let's say formula1) as an axiom to System2 (System1 + formula1), System2 will have its own unprovable formulas, even though formula1 would be trivially provable in System2. You could add the unprovable formulas of System2 as axioms to System3, but System3 will have its own unprovable formulas. Gödel showed that this is simply the nature of any formal system with complexity greater than or equal to arithmetic.



