
Classical Reasoning

If we take all the rules of propositional logic we have seen so far and exclude reductio ad absurdum, or proof by contradiction, we have what is known as intuitionistic logic. In intuitionistic logic, it is possible to view proofs in computational terms: a proof of A \wedge B is a proof of A paired with a proof of B, a proof of A \to B is a procedure which transforms evidence for A into evidence for B, and a proof of A \vee B is a proof of one or the other, tagged so that we know which is the case. The ex falso rule makes sense only because we expect that there is no proof of falsity; it is like the empty data type.

Proof by contradiction does not fit in well with this world view: from a proof of a contradiction from \neg A, we are supposed to magically produce a proof of A. We will see that with proof by contradiction, we can prove the following law, known as the law of the excluded middle: \forall A, A \vee \neg A. From a computational perspective, this says that for every A we can decide whether or not A is true.

Classical reasoning does, however, introduce a number of principles into logic that can be used to simplify reasoning. In this chapter, we will consider these principles, and see how they follow from the basic rules.

Proof by Contradiction

Remember that in natural deduction, proof by contradiction is expressed by the following pattern: we temporarily assume \neg A, derive \bot from it, and conclude A. The assumption \neg A is canceled at the final inference.

In Lean, the inference is named byContradiction, and since it is a classical rule, we have to use the command open Classical before it is available. Once we do so, the pattern of inference is expressed as follows:

section
open Classical
variable (A : Prop)

example : A :=
byContradiction
  (fun h : ¬ A ↦ show False from sorry)

end

One of the most important consequences of this rule is a classical principle that we mentioned above, namely, the law of the excluded middle, which asserts that the following holds for all A: A \vee \neg A. In Lean we denote this law by em. In mathematical arguments, one often splits a proof into two cases, assuming first A and then \neg A. Using the elimination rule for disjunction, this is equivalent to using A \vee \neg A, which is the excluded middle principle for this particular A.

Here is a proof of em, in natural deduction, using proof by contradiction:

Here is the same proof rendered in Lean:

section
open Classical
variable (A : Prop)

example : A ∨ ¬ A := by
  apply byContradiction
  intro (h1 : ¬ (A ∨ ¬ A))
  have h2 : ¬ A := by
    intro (h3 : A)
    have h4 : A ∨ ¬ A := Or.inl h3
    show False
    exact h1 h4
  have h5 : A ∨ ¬ A := Or.inr h2
  show False
  exact h1 h5

end

The principle is known as the law of the excluded middle because it says that a proposition A is either true or false; there is no middle ground. As a result, the theorem is named em in the Lean library. For any proposition A, em A denotes a proof of A ∨ ¬ A, and you are free to use it any time Classical is open:

section
open Classical

example (A : Prop) : A ∨ ¬ A :=
Or.elim (em A)
  (fun _ : A ↦ Or.inl ‹A›)
  (fun _ : ¬ A ↦ Or.inr ‹¬A›)
end

Or even more simply:

section
open Classical

example (A : Prop) : A ∨ ¬ A :=
em A

end
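In tactic mode, it is often convenient to case split on em A directly with the cases tactic. The following is a minimal sketch of that pattern; the statement and the hypothesis names are ours, chosen only for illustration:

section
open Classical
variable (A B : Prop)

-- a sketch: case split on `em A` using the `cases` tactic
example (h1 : A → B) (h2 : ¬ A → B) : B := by
  cases em A with
  | inl h3 => exact h1 h3
  | inr h3 => exact h2 h3

end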

In fact, we can go in the other direction, and use the law of the excluded middle to justify proof by contradiction. You are asked to do this in the exercises.

Proof by contradiction is also equivalent to the principle \neg \neg A \leftrightarrow A. The implication from right to left holds intuitionistically; the other implication is classical, and is known as double-negation elimination. Here is a proof in natural deduction:

And here is the corresponding proof in Lean:

section
open Classical

example (A : Prop) : ¬ ¬ A ↔ A :=
Iff.intro
  (fun h1 : ¬ ¬ A ↦
    show A from byContradiction
      (fun h2 : ¬ A ↦
        show False from h1 h2))
  (fun h1 : A ↦
    show ¬ ¬ A from fun h2 : ¬ A ↦ h2 h1)

end
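For comparison, here is a tactic-mode sketch of the classical direction, double-negation elimination, written in the same style as the tactic proofs above:

section
open Classical
variable (A : Prop)

-- double-negation elimination in tactic mode
example (h1 : ¬ ¬ A) : A := by
  apply byContradiction
  intro (h2 : ¬ A)
  show False
  exact h1 h2

end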

In the next section, we will derive a number of classical rules and equivalences. These are tricky to prove. In general, to use classical reasoning in natural deduction, we need to extend the general heuristic presented in :numref:`forward_and_backward_reasoning` as follows:

  1. First, work backward from the conclusion, using the introduction rules.
  2. When you have run out of things to do in the first step, use elimination rules to work forward.
  3. If all else fails, use a proof by contradiction.

Sometimes a proof by contradiction is necessary, but when it isn't, it can be less informative than a direct proof. Suppose, for example, we want to prove A \wedge B \wedge C \to D. In a direct proof, we assume A, B, and C, and work towards D. Along the way, we will derive other consequences of A, B, and C, and these may be useful in other contexts. If we use proof by contradiction, on the other hand, we assume A, B, C, and \neg D, and try to prove \bot. In that case, we are working in an inconsistent context; any auxiliary results we may obtain that way are subsumed by the fact that ultimately \bot is a consequence of the hypotheses.
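To make the contrast concrete, here is a small sketch; the statement is ours and is chosen only for illustration. The direct proof simply extracts the component of interest, while the indirect proof detours through the assumption \neg B:

section
open Classical
variable (A B : Prop)

-- direct: extract the right component of the conjunction
example (h : A ∧ B) : B :=
And.right h

-- indirect: assume ¬ B and derive a contradiction
example (h : A ∧ B) : B :=
byContradiction
  (fun h1 : ¬ B ↦ show False from h1 (And.right h))

end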

Some Classical Principles

We have already seen that A \vee \neg A and \neg \neg A \leftrightarrow A are two important theorems of classical propositional logic. In this section we will provide some more theorems, rules, and equivalences. Some will be proved here, but most will be left to you in the exercises. In ordinary mathematics, these are generally used without comment. It is nice to know, however, that they can all be justified using the basic rules of classical natural deduction.

If A \to B is any implication, the assertion \neg B \to \neg A is known as the contrapositive. Every implication implies its contrapositive, and the other direction is true classically:

Here is another example. Intuitively, asserting "if A then B" is equivalent to saying that it cannot be the case that A is true and B is false. Classical reasoning is needed to get us from the second statement to the first.

Here are the same proofs, rendered in Lean:

section
open Classical
variable (A B : Prop)

example (h : ¬ B → ¬ A) : A → B := by
  intro (h1 : A)
  show B
  apply byContradiction
  intro (h2 : ¬ B)
  have h3 : ¬ A := h h2
  show False
  exact h3 h1

example (h : ¬ (A ∧ ¬ B)) : A → B := by
  intro (h1 : A)
  show B
  apply byContradiction
  intro
  have : A ∧ ¬ B := And.intro ‹A› ‹¬ B›
  show False
  exact h this

end

Notice that in the second example, we used an anonymous intro and an anonymous have. The brackets ‹ and ›, entered as \f< and \f>, let us write ‹A› and ‹¬ B› to refer back to assumptions by their statements rather than by name. The identifier this refers back to the anonymous have.
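The other direction of the contrapositive equivalence, from A \to B to \neg B \to \neg A, does not require classical reasoning. Here is a short sketch in Lean; the hypothesis names are ours:

section
variable (A B : Prop)

-- intuitionistically, an implication entails its contrapositive
example (h : A → B) : ¬ B → ¬ A :=
fun h1 : ¬ B ↦
  fun h2 : A ↦
    show False from h1 (h h2)

end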

Knowing that we can prove the law of the excluded middle, it is convenient to use it in classical proofs. Here is an example, with a proof of (A \to B) \vee (B \to A):

Here is the corresponding proof in Lean:

section
open Classical

variable (A B : Prop)

example : (A → B) ∨ (B → A) :=
Or.elim (em B)
  (fun h : B ↦
    have : A → B :=
      fun _ : A ↦ show B from h
    show (A → B) ∨ (B → A) from Or.inl this)
  (fun h : ¬ B ↦
    have : B → A :=
      fun _ : B ↦ have : False := h ‹B›
      show A from False.elim this
    show (A → B) ∨ (B → A) from Or.inr this)

end
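For comparison, the same argument can be given in tactic mode by splitting on em B. This is only a sketch, in the style of the earlier tactic proofs:

section
open Classical
variable (A B : Prop)

-- the same case split on `em B`, in tactic mode
example : (A → B) ∨ (B → A) := by
  cases em B with
  | inl h => exact Or.inl (fun _ : A ↦ h)
  | inr h => exact Or.inr (fun hb : B ↦ False.elim (h hb))

end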

Using classical reasoning, implication can be rewritten in terms of disjunction and negation:

(A \to B) \leftrightarrow \neg A \vee B.

The forward direction requires classical reasoning.
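The backward direction holds intuitionistically. Here is a short sketch of it in Lean; the forward direction is left to the exercises below:

section
variable (A B : Prop)

-- the intuitionistic direction: from ¬ A ∨ B we obtain A → B
example (h : ¬ A ∨ B) : A → B :=
fun h1 : A ↦
  Or.elim h
    (fun h2 : ¬ A ↦ show B from False.elim (h2 h1))
    (fun h2 : B ↦ h2)

end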

The following equivalences are known as De Morgan's laws:

  • \neg (A \vee B) \leftrightarrow \neg A \wedge \neg B
  • \neg (A \wedge B) \leftrightarrow \neg A \vee \neg B

The forward direction of the second of these requires classical reasoning.
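The first law holds intuitionistically in both directions; the second is treated in the exercises. As an illustration, here is a sketch of the forward direction of the first law in Lean:

section
variable (A B : Prop)

-- the forward direction of the first De Morgan law, proved intuitionistically
example (h : ¬ (A ∨ B)) : ¬ A ∧ ¬ B :=
And.intro
  (fun h1 : A ↦ show False from h (Or.inl h1))
  (fun h1 : B ↦ show False from h (Or.inr h1))

end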

Using these identities, we can always push negations down to propositional variables. For example, \neg (A \wedge \neg (B \vee C)) is equivalent to \neg A \vee \neg \neg (B \vee C), and hence to \neg A \vee (B \vee C).

A formula built up from \wedge, \vee, and \neg in which negations only occur at variables is said to be in negation normal form.

In fact, using distributivity laws, one can go on to ensure that all the disjunctions are on the outside, so that the formula is a big "or" of "and"s of propositional variables and negated propositional variables. Such a formula is said to be in disjunctive normal form. Alternatively, all the "and"s can be brought to the outside. Such a formula is said to be in conjunctive normal form. An exercise below, however, shows that putting formulas in disjunctive or conjunctive normal form can make them much longer.
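For instance, a single application of distributivity rewrites (A \vee B) \wedge C as (A \wedge C) \vee (B \wedge C), which is in disjunctive normal form.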

The contradiction Tactic

Once we reach a contradiction in our proof, for example by having h1 : A and h2 : ¬ A among the hypotheses, we can apply the contradiction tactic. This will search for a contradiction among the hypotheses, and complete the proof if it succeeds in finding one. Revisiting our previous example:

section
open Classical
variable (A : Prop)

example : A ∨ ¬ A := by
  apply byContradiction
  intro (h1 : ¬ (A ∨ ¬ A))
  have h2 : ¬ A := by
    intro (h3 : A)
    have h4 : A ∨ ¬ A := Or.inl h3
    show False
    exact h1 h4
  have h5 : A ∨ ¬ A := Or.inr h2
  show False
  exact h1 h5

example : A ∨ ¬ A := by
  apply byContradiction
  intro (h1 : ¬ (A ∨ ¬ A))
  have h2 : ¬ A := by
    intro (h3 : A)
    have h4 : A ∨ ¬ A := Or.inl h3
    contradiction
  have h5 : A ∨ ¬ A := Or.inr h2
  contradiction

end

Since contradiction does not require the names of the hypotheses that form the contradiction, we can even remove all of the names.

section
open Classical
variable (A : Prop)

example : A ∨ ¬ A := by
  apply byContradiction
  intro
  have : ¬ A := by
    intro
    have : A ∨ ¬ A := Or.inl ‹A›
    contradiction
  have : A ∨ ¬ A := Or.inr this
  contradiction

end

Exercises

  1. Show how to derive the proof-by-contradiction rule from the law of the excluded middle, using the other rules of natural deduction. In other words, assume you have a proof of \bot from \neg A. Using A \vee \neg A as a hypothesis, but without using the rule RAA, show how you can go on to derive A.

  2. Give a natural deduction proof of \neg (A \wedge B) from \neg A \vee \neg B. (You do not need to use proof by contradiction.)

  3. Construct a natural deduction proof of \neg A \vee \neg B from \neg (A \wedge B). You can do it as follows:

    1. First, prove \neg B, and hence \neg A \vee \neg B, from \neg (A \wedge B) and A.
    2. Use this to construct a proof of \neg A, and hence \neg A \vee \neg B, from \neg (A \wedge B) and \neg (\neg A \vee \neg B).
    3. Use this to construct a proof of a contradiction from \neg (A \wedge B) and \neg (\neg A \vee \neg B).
    4. Using proof by contradiction, this gives you a proof of \neg A \vee \neg B from \neg (A \wedge B).
  4. Give a natural deduction proof of P from \neg P \to (Q \vee R), \neg Q, and \neg R.

  5. Give a natural deduction proof of \neg A \vee B from A \to B. You may use the law of the excluded middle.

  6. Give a natural deduction proof of A \to ((A \wedge B) \vee (A \wedge \neg B)). You may use the law of the excluded middle.

  7. Put (A \vee B) \wedge (C \vee D) \wedge (E \vee F) in disjunctive normal form, that is, write it as a big "or" of multiple "and" expressions.

  8. Prove ¬ (A ∧ B) → ¬ A ∨ ¬ B by replacing the sorry's below by proofs.

    import Mathlib.Tactic
    
    section
    
    open Classical
    variable {A B C : Prop}
    
    -- Prove ¬ (A ∧ B) → ¬ A ∨ ¬ B by replacing the sorry's below
    -- by proofs.
    
    lemma step1 (h₁ : ¬ (A ∧ B)) (h₂ : A) : ¬ A ∨ ¬ B :=
    have : ¬ B := sorry
    show ¬ A ∨ ¬ B from Or.inr this
    
    lemma step2 (h₁ : ¬ (A ∧ B)) (h₂ : ¬ (¬ A ∨ ¬ B)) : False :=
    have : ¬ A :=
      fun _ : A ↦
      have : ¬ A ∨ ¬ B := step1 h₁ ‹A›
      show False from h₂ this
    show False from sorry
    
    theorem step3 (h : ¬ (A ∧ B)) : ¬ A ∨ ¬ B :=
    byContradiction
      (fun h' : ¬ (¬ A ∨ ¬ B) ↦
        show False from step2 h h')
    
    end
  9. Complete these proofs in tactic mode by replacing the sorry's below with proofs.

    section
    open Classical
    variable {A B C : Prop}
    
    example (h : ¬ B → ¬ A) : A → B := by
      sorry
    
    example (h : A → B) : ¬ A ∨ B := by
      sorry
    
    end