Lecture Notes by Anthony Zhang.

CS 245

Logic and computation.

Instructor: Daniela Maftuleac
Email: daniela.maftuleac@uwaterloo.ca
Office Hours: Tuesday 2-3pm, Wednesday 12-1pm Math Tutorial Center
Website: https://www.student.cs.uwaterloo.ca/~cs245/

Assignments are due at 1pm at the dropboxes on MC 4th floor, due on Thursdays, posted on Tuesdays, returned in tutorials. Remark requests must be done within a week after the marks are posted.

Logic

Logic is the science of correct reasoning. It is concerned with determining whether things are true or false.

In symbolic logic, we use symbols to represent truth values and manipulate logical statements using certain logical rules.

This course deals with propositional logic, predicate logic, and program verification.

Prove that 3 \mid (4^n + 5) for all n \in \mb{N}:

Clearly, this is true for n = 0, since 3 \mid (1 + 5).
Assume for some k \ge 0 that 3 \mid (4^k + 5).
So \exists p \in \mb{Z}, 3p = 4^k + 5 and 4(3p) = 4(4^k + 5), so 12p - 15 = 3(4p - 5) = 4^{k + 1} + 5.
Let q = 4p - 5. Then q \in \mb{Z} and 3q = 4^{k + 1} + 5.
So 3 \mid 4^{k + 1} + 5, and by induction, 3 \mid 4^n + 5 for all n \in \mb{N}.
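As a quick numeric sanity check of this result (a sketch of my own, not part of the notes), we can test the divisibility claim for small n:

    # Verify 3 | (4^n + 5) for the first few natural numbers.
    for n in range(20):
        assert (4 ** n + 5) % 3 == 0
    print("3 divides 4^n + 5 for n = 0 to 19")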

The idea is that with strong induction, we can use any of the previous cases to prove the inductive conclusion.

7/5/14

Propositional Logic

Propositional logic is the language of propositions.

A proposition is a declarative statement that is either true or false. It may depend on the context, like how the proposition x \ge 5 depends on the contextual variable x.

We always assume that for any proposition with its context (the situation the proposition applies in), either the proposition is true, or it is false, and it cannot be both. If a statement is false, then its negation is true.

Propositions must be declarative. They must make sense when we ask, "is it true that PROPOSITION?". They must be assignable to either true or false.

Simple/atomic propositions are the building blocks of compound propositions. For example, "It is currently raining in Waterloo" and "It is currently 12 degrees Celsius in Waterloo" are simple propositions, and "It is either raining or 12 degrees in Waterloo" is a compound proposition.

Atomic propositions are the smallest possible statements that are still propositions. They cannot be divided into smaller propositions.

Expression Syntax

The propositional language is known as \mathcal{L}_P (Language of Propositions). This language contains atoms, which are lowercase Latin characters such as p, q, and r, optionally with subscripts. These all belong to the set \operatorname{Atom}(\mathcal{L}_P), so p \in \operatorname{Atom}(\mathcal{L}_P).

There are also punctuation symbols, the round parentheses "(" and ")", which change the precedence of subexpressions.

There are also connectives, which are \neg, \wedge, \vee, \rightarrow, \leftrightarrow.

An expression is a finite sequence of symbols. The length of an expression is the number of symbols it contains. Expressions may not necessarily be valid, like ( \vee \neg \vee (.

The empty expression (containing no symbols) is denoted \emptyset.

Expression Operations

Let U, V, W be expressions. Then U and V are equal (U = V) if and only if they contain the same sequence of symbols.

Then UV is the concatenation of U and V - the expression containing the entire sequence in U, followed by the entire sequence in V.

If U = W_1 V W_2, then V is a segment of U - it is a sequence of symbols that U also contains. Every expression contains the empty expression as a segment.

If V is a segment of U and V \ne U, then V is a proper segment of U.

If U = VW, then V is an initial segment and W is a terminal segment.

Expression Validity

The binary (two-parameter) connectives are \wedge, \vee, \rightarrow, \leftrightarrow. In the following, \odot will represent any arbitrary one of these binary connectives. The only connective that is not binary is the unary \neg connective.

The set of formulas is the set of all valid/well-formed expressions, and is denoted \formlp. It is defined inductively: every atom is in \formlp; if A \in \formlp, then (\neg A) \in \formlp; and if A, B \in \formlp, then (A \odot B) \in \formlp for each binary connective \odot.

Formulas are often represented using Roman capital letters.

Theorem 2.2.3 states the obvious consequences of the definition: all formulas of \mathcal{L}_P are atoms, or of the form \neg A, A \wedge B, A \vee B, A \rightarrow B, A \leftrightarrow B.

Let R(n) be a property (true if the property holds for n, false otherwise). Theorem 2.2.4 states that if \forall p \in \operatorname{Atom}(\mathcal{L}_P), R(p) and \forall A \in \formlp, R(A) \implies R(\neg A) and \forall A, B \in \formlp, R(A) \wedge R(B) \implies R(A \odot B), then \forall A \in \formlp, R(A).

In other words, if a property holds for all the forms of formulas, then it holds for all formulas.

The type of a formula is determined by its top-level operator. For example, \neg A is a negation, A \wedge B is a conjunction, A \vee B is a disjunction, A \rightarrow B is an implication, and A \leftrightarrow B is an equivalence. According to Theorem 2.2.3, all formulas must be one of these forms.

We want to be able to look at any expression and determine whether it is well formed. For example, \neg() is not a well formed formula (WFF) because it is not obtainable through the rules that define an element in the set of formulas.

Is (\neg a) \wedge (b \vee c) well formed?

We can first notice that this is of the form P \wedge Q, where P = \neg a and Q = b \vee c. By the rules, we know that the whole thing is well formed if and only if P and Q are both well formed.
Now we check if P is well formed. Clearly, it is of the form \neg R, where R = a, and a is an atom (which is, by definition, well formed), so by the rules, \neg a is well formed.
Now we check if Q is well formed. Clearly, it is of the form S \vee T, where S = b and T = c. By the rules, Q is well formed if and only if S and T are. Since they are atoms, they are both well formed by definition.
Since P and Q are well formed, P \wedge Q is well formed and so is (\neg a) \wedge (b \vee c).

12/5/14

Parse Trees

The other way of testing for well-formedness is by constructing a parse tree out of the symbols in the formula. The top-level operator or atom is the root node of the tree, and the operands are its children. This is repeated until all the symbols have been added to the tree. Parentheses simply change the arrangement of the nodes and are not included in the parse tree.

For example, the expression (((\neg p) \wedge q) \rightarrow (p \wedge (q \vee (\neg r)))) has the following parse tree:

     \rightarrow
       /   \
      /     \
  \wedge   \wedge
   /  \     /  \
\neg   q   p  \vee
  |           /  \
  p          q  \neg
                  |
                  r

Every parse tree is unique. The rules for a valid formula can be applied to a parse tree to test for well-formedness.

An algorithm for testing whether an expression U is a well formed formula is given below:

  1. If U is empty, produce False.
  2. If U \in \operatorname{Atom}(\mathcal{L}_P), then produce True.
  3. If the expression does not begin with a ( symbol, produce False.
  4. If the second symbol is \neg, then apply this algorithm again starting from the third symbol (skipping the end-of-expression check in step 7) to read a well formed formula V. If this fails, produce False. Then go to step 6.
  5. Otherwise (the second symbol is not \neg), apply this algorithm again starting from the second symbol (again skipping step 7) to read a well formed formula V. If this fails, produce False. If the symbol after V is not a binary connective, produce False. Apply the algorithm once more after the connective to read a well formed formula W, and if this fails, produce False.
  6. If the symbol after the last formula read is not a ) symbol, produce False.
  7. If the expression does not end after the ) symbol, produce False.
  8. Produce True.

This algorithm gets applied a finite number of times, proportional to the depth of the tree.
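Here is a minimal sketch of this recognizer in Python (my own, not from the course), assuming atoms are single lowercase letters and the connectives are written in ASCII: - for \neg and &, |, >, = for \wedge, \vee, \rightarrow, \leftrightarrow:

    BINARY = set("&|>=")  # the ASCII stand-ins for the binary connectives

    def parse(expr, i=0):
        """Try to read one formula starting at index i; return the index
        just past it, or None on failure."""
        if i >= len(expr):
            return None                 # step 1: empty input
        if expr[i].islower():
            return i + 1                # step 2: an atom is a formula
        if expr[i] != "(":
            return None                 # step 3: must begin with "("
        if i + 1 < len(expr) and expr[i + 1] == "-":
            j = parse(expr, i + 2)      # step 4: read the negated subformula
        else:
            j = parse(expr, i + 1)      # step 5: read the left operand...
            if j is None or j >= len(expr) or expr[j] not in BINARY:
                return None             # ...require a binary connective...
            j = parse(expr, j + 1)      # ...then read the right operand
        if j is None or j >= len(expr) or expr[j] != ")":
            return None                 # step 6: must close with ")"
        return j + 1

    def is_wff(expr):
        return parse(expr) == len(expr)  # step 7: nothing may follow

    assert is_wff("((-a)&(b|c))")        # the example checked above
    assert not is_wff("(-()")            # \neg() is not well formed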

The height of a parse tree is the length of the longest path from the root to a leaf.

Precedence

We now introduce a system of precedence. The logical connectives listed by priority, from highest to lowest, are \neg, \wedge, \vee, \rightarrow, and \leftrightarrow.

Precedence is the idea that some connectives get higher priority than others. For example, p \vee q \wedge r could potentially mean either ((p \vee q) \wedge r), or (p \vee (q \wedge r)). But since \wedge has a higher precedence than \vee, we define p \vee q \wedge r to mean (p \vee (q \wedge r)).

When there are multiple ambiguities, we start with the ones with highest precedence. For example, \neg p \wedge q \vee r is resolved to (\neg p) \wedge q \vee r, then (((\neg p) \wedge q) \vee r).

With this in place, we can now eliminate many of the superfluous parentheses to clean up our formulas.

Course of values induction is a type of strong induction where the inductive hypothesis used to prove the inductive conclusion is the conjunction of all the previous cases.

In other words, we do course of values induction by proving a property holds for M(1), and then by proving that M(1) \wedge \ldots \wedge M(k) \implies M(k + 1) for any k \ge 1. By the principle of strong induction, M(n) therefore holds for all n \ge 1.

Sometimes we want to use induction on the height of the parse tree. This is often useful for proving things about propositional formulas, due to the recursive nature of formulas.

If a formula C contains the segment (\neg A), where A is another formula, then A is the scope of the \neg within C. The scope is the area where the \neg applies.

For binary connectives, if C contains (A \odot B) where \odot is a binary connective, then A and B are the left and right scopes, respectively, of the binary connective within C.

A scope is unique to the operator. Two operators cannot have the exact same scope, though the scopes themselves can be equal.

14/5/14

Semantics

The semantics of a language describe how we should interpret valid formulas in that language. Semantics assigns meanings to formulas.

Let A and B be formulas, representing the propositions \mathcal{A} and \mathcal{B}, respectively. Then \neg A represents "Not \mathcal{A}", A \wedge B represents "\mathcal{A} and \mathcal{B}".

Semantics is formally given as functions that map each formula to a value in \set{0, 1} (false and true). This function can be represented using a truth table.

The value of a formula A given a truth valuation t is represented A^t, with A^t \in \set{0, 1}. If we used conventional function notation, we might write t(A) \in \set{0, 1}.

Truth valuations are functions that are used to assign truth values to propositional variables. The domain is the set of propositional variables in a formula, and the range is \set{0, 1}. If t is a truth valuation, then A^t \in \set{0, 1}, where A is a propositional variable.

t \models A is the same as A^t = 1, while t \not\models A is the same as A^t = 0.

We can completely represent the truth valuation for a formula by listing out all possible values of the domain and the resulting value in the range. For n variables, there are 2^n possible truth valuations.

We can define the truth valuation function t in terms of formulas A and B recursively:

  1. A \in \operatorname{Atom}(\mathcal{L}_P) \implies A^t \in \set{0, 1}.
  2. (\neg A)^t = \begin{cases} 1 &\text{if } A^t = 0 \\ 0 &\text{otherwise} \end{cases}.
  3. (A \wedge B)^t = \begin{cases} 1 &\text{if } A^t = B^t = 1 \\ 0 &\text{otherwise} \end{cases}.
  4. (A \vee B)^t = \begin{cases} 1 &\text{if } A^t = 1 \text{ or } B^t = 1 \\ 0 &\text{otherwise} \end{cases}.
  5. (A \rightarrow B)^t = \begin{cases} 1 &\text{if } A^t = 0 \text{ or } B^t = 1 \\ 0 &\text{otherwise} \end{cases}.
  6. (A \leftrightarrow B)^t = \begin{cases} 1 &\text{if } A^t = B^t \\ 0 &\text{otherwise} \end{cases}.
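These clauses translate directly into code. A small sketch (mine, not from the notes), representing formulas as nested tuples like ("and", A, B) and a truth valuation as a dictionary from atoms to 0 or 1:

    def value(A, t):
        """Compute A^t by structural recursion on the formula."""
        if isinstance(A, str):
            return t[A]                    # rule 1: an atom's value comes from t
        if A[0] == "not":
            return 1 - value(A[1], t)      # rule 2
        u, v = value(A[1], t), value(A[2], t)
        if A[0] == "and":
            return 1 if u == 1 and v == 1 else 0   # rule 3
        if A[0] == "or":
            return 1 if u == 1 or v == 1 else 0    # rule 4
        if A[0] == "implies":
            return 1 if u == 0 or v == 1 else 0    # rule 5
        return 1 if u == v else 0                  # rule 6: iff

    # ((p or q) and (not p))^t with p^t = 0, q^t = 1:
    print(value(("and", ("or", "p", "q"), ("not", "p")), {"p": 0, "q": 1}))  # 1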

Given a set of formulas \Sigma, \Sigma^t = \begin{cases} 1 &\text{if } \forall A \in \Sigma, A^t = 1 \\ 0 &\text{otherwise} \end{cases}. In other words, a set of formulas is true in a given truth valuation if and only if all of the formulas in the set are true.

\Sigma is satisfiable if and only if there exists a truth valuation t such that \Sigma^t = 1. If this t exists, then t satisfies \Sigma. Therefore, \emptyset is satisfiable by any truth valuation.

A truth valuation can also be called a model.

A tautology is a formula which is always true, so for any truth valuation t, A^t = 1.

A contradiction is a formula which is always false, so for any truth valuation t, A^t = 0.

A semantically consistent formula is one that is not a contradiction, so there exists a truth valuation t such that A^t = 1.

Satisfiability is a property of a set of formulas where there exists a truth valuation that makes every formula in the set true.

The negation of a tautology is always a contradiction, and only the negation of a tautology is a contradiction. Likewise, the negation of a contradiction is a tautology, and only the negation of a contradiction is a tautology.

Rather than writing out the entire truth table, we can simplify formulas using the following identities:

Show that A = ((p \wedge q \rightarrow r) \wedge (p \rightarrow q)) \rightarrow (p \rightarrow r) is a tautology:

Assume p^t = 0. Then (p \rightarrow r)^t = 1, so A^t = 1 no matter the value of the antecedent.
Assume p^t = 1. Then A^t = (((q \rightarrow r) \wedge q) \rightarrow r)^t.
Assume q^t = 0. Then A^t = 1. Assume q^t = 1. Then A^t = (r \rightarrow r)^t = 1.
So A is a tautology.
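A tautology can also be checked by brute force over all 2^n truth valuations. A small sketch (mine), with the formula written as a Python function:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    def is_tautology(f, n):
        """True if f (a function of n Boolean variables) holds under all
        2^n truth valuations."""
        return all(f(*vs) for vs in product([False, True], repeat=n))

    # A = ((p and q -> r) and (p -> q)) -> (p -> r), from the example above.
    A = lambda p, q, r: implies(implies(p and q, r) and implies(p, q),
                                implies(p, r))
    print(is_tautology(A, 3))  # True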

Let \mathcal{A} and \mathcal{A}_1, \ldots, \mathcal{A}_n be propositions.

Deductive logic studies whether \mathcal{A} can be deduced from \mathcal{A}_1, \ldots, \mathcal{A}_n.

Let \Sigma \subseteq \formlp, A \in \formlp.

A is a tautological consequence/semantic entailment of \Sigma (written \Sigma \models A) if and only if for all truth valuations t, \Sigma^t = 1 \implies A^t = 1. Also, \neg (\Sigma \models A) is written \Sigma \not\models A. We can also write out the elements of \Sigma directly, like P_1, \ldots, P_n \models A, which is the same as \set{P_1, \ldots, P_n} \models A.

In other words, if \Sigma \models A, then the truth of \Sigma implies the truth of A.

As a result, if \emptyset \models A, then A is a tautology. This is because \emptyset^t = 1 for any truth valuation t, so A^t = 1 as well.

As a result of our semantic definitions of our operations, conjunction and disjunction are commutative and associative. So A \wedge B \equiv B \wedge A, A \vee B \equiv B \vee A, (A \wedge B) \wedge C \equiv A \wedge (B \wedge C), (A \vee B) \vee C \equiv A \vee (B \vee C).

Also, A_1, \ldots, A_n \models A \iff \emptyset \models A_1 \wedge \ldots \wedge A_n \rightarrow A \iff \emptyset \models A_1 \rightarrow (A_2 \rightarrow (\ldots (A_n \rightarrow A) \ldots)).

21/5/14

Note that \Sigma \models A is not a formula, because it cannot be created using the formula rules. It is a meta-level assertion about a set of formulas and a formula, not a connective inside them.

It is true that \Sigma \not\models A if and only if there exists a truth valuation t such that \Sigma^t = 1 and A^t = 0.

If A \models B and B \models A, then A \equiv B. Also, if A \equiv B and C \equiv D, then \neg A \equiv \neg B and A \odot C \equiv B \odot D where \odot is a binary connective.

Also, we define \emptyset^t = 1 for any truth valuation t.

If B \equiv C, then A \equiv A' where A' is A with any number of occurrences of B replaced by C. This is known as replaceability.

The dual of A, a formula that uses only the connectives \neg, \wedge, \vee, is A', which is A with every \wedge replaced by \vee, every \vee replaced by \wedge, and every atom p replaced by \neg p. It is always true that A \equiv \neg A'. This is basically an application of De Morgan's laws to formulas.

We know that A \rightarrow B \equiv \neg A \vee B. Because of this, \rightarrow is definable in terms of \neg, \vee, or reducible to \neg, \vee.

\neg is a unary connective - it accepts one operand. \vee is a binary connective - it accepts two operands. It is also possible to have ternary connectives and even connectives of arity 4 and above (arity is the number of operands).

Let f be an n-ary connective - a connective accepting n operands. Then f A_1 \ldots A_n is a formula where A_1, \ldots, A_n are connected by the connective f.

An n-ary connective has 2^n possible input cases to consider. For k possible input cases, there are 2^k possible truth tables. So there are 2^{2^n} possible n-ary connectives.

A set of connectives is adequate if and only if any n-ary connective can be defined in terms of these connectives.

For example, \set{\wedge, \vee, \neg} is an adequate set because the truth table for any n-ary operator can be represented by a formula using just these connectives. This can be proven:

Let f be an n-ary connective. Clearly, there are 2^n possible truth valuations t_1, \ldots, t_{2^n} for A_1, \ldots, A_n. Let t be one of these truth valuations.
Let T(t) = T_1(t) \wedge \ldots \wedge T_n(t) where T_i(t) = \begin{cases} A_i &\text{if } A_i^t = 1 \\ \neg A_i &\text{if } A_i^t = 0 \end{cases}. For example, if A_1^t = 1, A_2^t = 0, A_3^t = 1, then T(t) = A_1 \wedge (\neg A_2) \wedge A_3.
Clearly, T(t_i)^{t_j} = 1 if and only if t_i = t_j. In other words, T(t) is a formula that is true in the truth valuation t and no others. Let V = T(t_{i_1}) \vee \ldots \vee T(t_{i_m}), where there is a T(t_i) term if and only if (f A_1 \ldots A_n)^{t_i} = 1.
Clearly, V is true exactly when f A_1 \ldots A_n is true, and false otherwise. Therefore, f A_1 \ldots A_n \equiv V.
So any f can be defined in terms of \set{\wedge, \vee, \neg}, which makes it adequate.
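The proof is constructive: it builds a disjunctive normal form from the truth table of f. A short sketch (mine) of that construction:

    from itertools import product

    def to_dnf(f, names):
        """Build a formula over {not, and, or} (as a string) equivalent to f."""
        terms = []
        for values in product([False, True], repeat=len(names)):
            if f(*values):  # keep a T(t) term exactly where f is true
                lits = [a if v else f"(-{a})" for a, v in zip(names, values)]
                terms.append("(" + " & ".join(lits) + ")")
        # if f is never true, any contradiction works
        return " | ".join(terms) if terms else "(p & (-p))"

    # Example: the ternary majority connective.
    print(to_dnf(lambda a, b, c: a + b + c >= 2, ["a", "b", "c"]))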

We usually prove sets are adequate by defining the connectives in a set that is already known to be adequate in terms of the connectives of this set. If a connective f is definable in terms of a set of connectives A, and every connective in that set is definable in terms of another set B, then f is definable in terms of B.

This is because we could define f first in terms of A, and then define that in terms of B.

For example, \set{\wedge, \neg} is adequate because \wedge and \neg are already in the set, and A \vee B \equiv \neg((\neg A) \wedge (\neg B)), so \vee is definable in terms of \set{\wedge, \neg}.

26/5/14

The Schröder connective is defined as A \downarrow B where A \downarrow B \equiv \neg (A \vee B) - the NOR operation. This connective is interesting because it can make an adequate set by itself - \set{\downarrow} is an adequate set.
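The standard constructions (not given in the notes) are \neg A \equiv A \downarrow A, A \vee B \equiv (A \downarrow B) \downarrow (A \downarrow B), and A \wedge B \equiv (A \downarrow A) \downarrow (B \downarrow B), which we can verify mechanically:

    from itertools import product

    nor = lambda a, b: not (a or b)

    for a, b in product([False, True], repeat=2):
        assert nor(a, a) == (not a)                      # \neg A = A NOR A
        assert nor(nor(a, b), nor(a, b)) == (a or b)     # A \vee B
        assert nor(nor(a, a), nor(b, b)) == (a and b)    # A \wedge B

Since \set{\wedge, \vee, \neg} is adequate, this makes \set{\downarrow} adequate by itself.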

\set{\neg, \rightarrow} is an adequate set.

Proof Calculus

We want a calculus for reasoning about propositional logic. This will allow us to simplify proofs of validity.

In this course we will be looking at the Hilbert and natural deduction systems for formal proof calculus.

Let \Sigma = \set{\alpha_1, \ldots, \alpha_n}. This is a set of premises.

\Sigma \models d is the same as \forall t, \Sigma^t = 1 \implies d^t = 1.

\Sigma \vdash \alpha means that \alpha is provable or deducible from \Sigma.

Hilbert System

The Hilbert System is a deductive system over the set of propositional logic formulas.

\Sigma \vdash_H \alpha means that \alpha is provable or deducible in the Hilbert system. \vdash_H A means \emptyset \vdash_H A.

\Sigma is a set of formulas known as premises, and from them we can deduce \alpha using axioms and inference rules.

Axioms are basically just tautologies. In this course we will only be using axioms over \set{\neg, \rightarrow} since this set is small and results in fewer axioms:

  1. \phi \rightarrow (\psi \rightarrow \phi) - basically, a true formula is implied by anything
  2. (\phi \rightarrow (\psi \rightarrow \xi)) \rightarrow ((\phi \rightarrow \psi) \rightarrow (\phi \rightarrow \xi)) - basically, \rightarrow is distributive
  3. (\neg \phi \rightarrow \neg \psi) \rightarrow (\psi \rightarrow \phi) - basically, the contrapositive of a formula implies the formula

Here, \phi, \psi, \xi are any formulas.

Inference rules are manipulations we can perform on formulas to obtain new formulas. In this system we have only Modus Ponens (MP): \set{\phi, \phi \rightarrow \psi} \vdash_H \psi.

A proof is a sequence of formulas built from axioms, premises, and conclusions. We start with the axioms and premises that we know are true, then we use inference rules over them and previously obtained formulas, listing out each new formula obtained, until we get the desired result.

For example, prove that \vdash_H A \rightarrow A:

  1. A \rightarrow ((A \rightarrow A) \rightarrow A) (Axiom 1 where \psi = A \rightarrow A)
  2. (A \rightarrow ((A \rightarrow A) \rightarrow A)) \rightarrow ((A \rightarrow (A \rightarrow A)) \rightarrow (A \rightarrow A)) (Axiom 2 where \psi = A \rightarrow A and \xi = A)
  3. (A \rightarrow (A \rightarrow A)) \rightarrow (A \rightarrow A) (Modus Ponens on 1 and 2)
  4. A \rightarrow (A \rightarrow A) (Axiom 1 where \psi = A)
  5. A \rightarrow A (Modus Ponens on 3 and 4)

Prove that \set{A \rightarrow B, B \rightarrow C} \vdash_H A \rightarrow C:

  1. A \rightarrow B (Premise 1)
  2. B \rightarrow C (Premise 2)
  3. ((B \rightarrow C) \rightarrow (A \rightarrow (B \rightarrow C))) (Axiom 1 where \phi = B \rightarrow C)
  4. A \rightarrow (B \rightarrow C) (Modus Ponens on 2 and 3)
  5. (A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C)) (Axiom 2)
  6. (A \rightarrow B) \rightarrow (A \rightarrow C) (Modus Ponens on 4 and 5)
  7. A \rightarrow C (Modus Ponens on 1 and 6)

The deduction theorem says that \Sigma \vdash_H A \rightarrow B if and only if \Sigma \cup \set{A} \vdash_H B. So \set{\alpha_1, \ldots, \alpha_n} \vdash_H \alpha is the same as \emptyset \vdash_H (\alpha_1 \rightarrow (\ldots (\alpha_n \rightarrow \alpha) \ldots)).

In other words, to prove an implication, we just have to assume the antecedent and use it to prove the consequent.

28/5/14

Prove that \neg \neg A \vdash_H A:

  1. \neg \neg A (Premise 1)
  2. \neg \neg A \rightarrow (\neg \neg \neg \neg A \rightarrow \neg \neg A) (Axiom 1 where \psi = \neg \neg \neg \neg A)
  3. \neg \neg \neg \neg A \rightarrow \neg \neg A (Modus Ponens on 1 and 2)
  4. (\neg (\neg \neg \neg A) \rightarrow \neg (\neg A)) \rightarrow (\neg A \rightarrow \neg \neg \neg A) (Axiom 3 where \phi = \neg \neg \neg A and \psi = \neg A)
  5. \neg A \rightarrow \neg \neg \neg A (Modus Ponens on 3 and 4)
  6. (\neg A \rightarrow \neg \neg \neg A) \rightarrow (\neg \neg A \rightarrow A) (Axiom 3 where \psi = \neg \neg A)
  7. \neg \neg A \rightarrow A (Modus Ponens on 5 and 6)
  8. A (Modus Ponens on 1 and 7)

\Sigma \models \alpha is a semantic construct. This is associated with soundness.

\Sigma \vdash \alpha is a syntactical construct. This is associated with completeness.

By soundness, a formula can be syntactically derived only if it is semantically entailed; by completeness, every semantically entailed formula has a syntactic derivation.

Here are some rules of thumb to more easily construct proofs.

Natural Deduction

In natural deduction, we infer a conclusion from the premises using various proof rules.

Natural deduction was born out of people noticing that the Hilbert system is terribly complicated even for the simplest of proofs.

Let \Sigma = \set{\alpha_1, \ldots, \alpha_n} be the set of all the premises.

For convenience, we define that \alpha_1, \ldots, \alpha_n is equivalent to \set{\alpha_1, \ldots, \alpha_n}.

We also define that \Sigma, \alpha is equivalent to \Sigma \cup \set{\alpha}, and given a set S, then \Sigma, S is equivalent to \Sigma \cup S.

\Sigma \vdash \alpha is not a formula, but is a proposition.

If the train arrives late and there are no taxis at the station, then John is late for his meeting.
John is not late for his meeting.
The train did arrive late.
Therefore, there were taxis at the station.

The above can be written as follows:

Let l represent the train being late. Let m represent being late to the meeting. Let t represent there being taxis.
Then l \wedge \neg t \rightarrow m, \neg m, l, so t.

The rules of deduction in ND are:

Reflexivity (\operatorname{Ref}): A \vdash A. Anything can be deduced from itself. (Built-in)
Addition of premises (+): \Sigma_1 \vdash \alpha implies \Sigma_1 \cup \Sigma_2 \vdash \alpha. Premises can be freely added to valid arguments. (Built-in)
Reflexive premises (\epsilon): \alpha \in \Sigma implies \Sigma \vdash \alpha. Any premise can be deduced from the set containing it. (Derived)
Negation-introduction (\neg+): \Sigma, \alpha \vdash \psi and \Sigma, \alpha \vdash \neg \psi implies \Sigma \vdash \neg \alpha. If \alpha results in a contradiction, then \neg \alpha must be true. (Built-in)
Negation-elimination (\neg-): \Sigma, \neg \alpha \vdash \psi and \Sigma, \neg \alpha \vdash \neg \psi implies \Sigma \vdash \alpha. If \neg \alpha results in a contradiction, then \alpha must be true. (Built-in)
Conjunction-introduction (\wedge+): \Sigma \vdash \alpha and \Sigma \vdash \psi implies \Sigma \vdash \alpha \wedge \psi. If two things individually follow from the premises, so does their conjunction. (Built-in)
Conjunction-elimination (\wedge-): \Sigma \vdash \alpha \wedge \psi implies \Sigma \vdash \alpha and \Sigma \vdash \psi. If a conjunction follows from the premises, so does each conjunct individually. (Built-in)
Implication-introduction (\rightarrow+): \Sigma, \alpha \vdash \psi implies \Sigma \vdash \alpha \rightarrow \psi. If \psi can be deduced from \alpha, then \alpha \rightarrow \psi. (Built-in)
Implication-elimination (\rightarrow-): \Sigma \vdash \alpha \rightarrow \psi and \Sigma \vdash \alpha implies \Sigma \vdash \psi. If an implication and its antecedent both hold, so does its consequent. (Built-in)
Disjunction-introduction (\vee+): \Sigma \vdash \alpha implies \Sigma \vdash \alpha \vee \psi and \Sigma \vdash \psi \vee \alpha. If something is deducible, so is its disjunction with anything else. (Built-in)
Disjunction-elimination (\vee-): \Sigma, \alpha_1 \vdash \psi and \Sigma, \alpha_2 \vdash \psi implies \Sigma, \alpha_1 \vee \alpha_2 \vdash \psi. If each disjunct individually implies something, then the disjunction implies it. (Built-in)
Double-implication-introduction (\leftrightarrow+): \Sigma, \alpha \vdash \psi and \Sigma, \psi \vdash \alpha implies \Sigma \vdash \alpha \leftrightarrow \psi. If each of two things implies the other, they are equivalent. (Built-in)
Double-implication-elimination 1 (\leftrightarrow-): \Sigma \vdash \alpha \leftrightarrow \psi and \Sigma \vdash \alpha implies \Sigma \vdash \psi. If two things are equivalent and the first is deducible, so is the second. (Built-in)
Double-implication-elimination 2 (\leftrightarrow-): \Sigma \vdash \alpha \leftrightarrow \psi and \Sigma \vdash \psi implies \Sigma \vdash \alpha. If two things are equivalent and the second is deducible, so is the first. (Built-in)
Transitivity of deducibility (\operatorname{Tr}): \Sigma_1 \vdash \Sigma_2 and \Sigma_2 \vdash \Sigma_3 implies \Sigma_1 \vdash \Sigma_3, where \Sigma \vdash \Sigma' means every formula in \Sigma' is deducible from \Sigma. If one set of formulas is deducible from another, everything deducible from it is too. (Derived)

;wip: add all the other rules here, such as de morgan

Prove the \epsilon rule from the basic rules \operatorname{Ref}, +:

Let A = \Sigma \setminus \set{\alpha}.
Clearly, \alpha \vdash \alpha by \operatorname{Ref} (step 1).
So \set{\alpha} \cup A \vdash \alpha by + with step 1 (step 2).
Since \set{\alpha} \cup A = \Sigma, \Sigma \vdash \alpha (step 3, conclusion).

Prove that \neg \neg p \vdash p:

  1. \neg \neg p, \neg p \vdash \neg \neg p (\epsilon).
  2. \neg \neg p, \neg p \vdash \neg p (\epsilon).
  3. \neg \neg p \vdash p (\neg-).

Note that implication-introduction and implication-elimination are very similar to the deduction theorem.

Prove that A \rightarrow B, B \rightarrow C \vdash A \rightarrow C:

  1. A \rightarrow B, B \rightarrow C, A \vdash A \rightarrow B (\epsilon)
  2. A \rightarrow B, B \rightarrow C, A \vdash A (\epsilon)
  3. A \rightarrow B, B \rightarrow C, A \vdash B (\rightarrow- with step 1 and 2)
  4. A \rightarrow B, B \rightarrow C, A \vdash B \rightarrow C (\epsilon)
  5. A \rightarrow B, B \rightarrow C, A \vdash C (\rightarrow- with step 3 and 4)
  6. A \rightarrow B, B \rightarrow C \vdash A \rightarrow C (\rightarrow+ with step 5)

2/6/14

If a system is sound and complete, then arguments can be proved if and only if they are valid.

We can write A, B \vdash C as \frac{A \qquad B}{C}. This is known as inference notation.

Formal Proofs

A formal proof for \Sigma_k \vdash \alpha_k is simply a sequence of applications of the natural deduction rules - \Sigma_1 \vdash \alpha_1, \ldots, \Sigma_k \vdash \alpha_k. If this is the case, then \alpha_k is formally deducible from \Sigma_k - it can be proven from those premises.

Note that \Sigma \vdash \alpha is very different from \Sigma \models \alpha - the former deals purely with syntax, while the latter deals with semantics - what the symbols actually mean.

Negation-introduction (\neg+) is also called contradiction or reductio ad absurdum. It is usually the rule we use when we want to prove by contradiction, and is very similar to negation-elimination (\neg-).

Prove that p \vdash \neg \neg p:

  1. p, \neg p \vdash p (\epsilon)
  2. p, \neg p \vdash \neg p (\epsilon)
  3. p \vdash \neg \neg p (reductio ad absurdum with step 1 and 2)

In formal proofs, we usually start off by stating premises (also called assumptions), and then adding rules and modifying results until we arrive at the conclusion of the proof. The conclusion of the proof is the thing we are trying to prove.

The proof format we have been using so far is in something known as line/sequent/turnstile notation. The \vdash symbol is a sequent or turnstile, with all premises before it and all conclusions after.

In box notation, we write an assumption and draw a box around it and everything deduced under it - the box acts like a "scope" in which the assumption holds. Inside the box we can write the resulting formulas directly without writing the \alpha_1, \ldots, \alpha_n \vdash before them. Scopes can nest, and this saves a lot of rewriting of premises.

Alternatively, we might indent the values instead of drawing a box around them. For example, the previous proof that A \rightarrow B, B \rightarrow C \vdash A \rightarrow C:

  1. A \rightarrow B, B \rightarrow C (assumption)
  2. The following is indented or drawn in a box:
    1. A \rightarrow B, B \rightarrow C, A (assumption)
    2. A \rightarrow B (\epsilon)
    3. A (\epsilon)
    4. B (\rightarrow- with step 2.2 and 2.3)
    5. B \rightarrow C (\epsilon)
    6. C (\rightarrow- with step 2.4 and 2.5)
  3. A \rightarrow C (\rightarrow+ with step 2.6)

When we begin a box or indentation, we insert an assumption. When we close a box, we remove the assumption. This ability to insert and remove assumptions keeps our proofs clean and organized.

Boxes or indentations can nest, and have lexical scope - when we have a box inside a box, we can use any statements above it, even if it is in a parent box. Outside of a box, we cannot use any of the resulting formulas directly - they are always attached to their assumptions. For example, in the above example we could not state C in step 3, but we could use A \rightarrow B, B \rightarrow C, A \vdash C in the \rightarrow+ rule.

A box is often closed when we arrive at a conclusion using a rule such as \neg-, \rightarrow+, or \vee-. Usages of \neg- and \rightarrow+ are usually found right after a box closes. \vee- is usually found after two boxes.

All boxes must be closed by the end of the argument. Otherwise, there are extra premises and the proof is not yet complete.

Elimination rules remove a connective from a formula, while introduction rules add one.

4/6/14

Soundness/Completeness

Soundness is a property of a deductive system where it is impossible to prove any invalid arguments.

Completeness is a property of a deductive system where it is always possible to prove valid arguments.

The soundness theorem states that if something is provable, then it is valid - \Sigma \vdash \alpha implies that \Sigma \models \alpha.

In practice, we often use soundness in the form of its contrapositive: to show \Sigma \not\vdash \alpha, it is enough to show \Sigma \not\models \alpha.

If A \vdash B and B \vdash C, then A \vdash C.

In other words, if \Sigma' = \set{A_1, \ldots, A_n} and \Sigma \vdash A_1, \ldots, \Sigma \vdash A_n and \Sigma' \vdash \alpha, then \Sigma \vdash \alpha.

Proof:

  1. A_1, \ldots, A_n \vdash \alpha (Premise)
  2. A_1, \ldots, A_{n - 1} \vdash A_n \rightarrow \alpha (\rightarrow+)
  3. \vdash A_1 \rightarrow (\ldots (A_n \rightarrow \alpha) \ldots) (repeating \rightarrow+ until no premises remain)
  4. \Sigma \vdash A_1 \rightarrow (\ldots (A_n \rightarrow \alpha) \ldots) (+ with 3)
  5. \Sigma \vdash A_1 (premise)
  6. \Sigma \vdash A_2 \rightarrow (\ldots (A_n \rightarrow \alpha) \ldots) (\rightarrow- with 4 and 5)
  7. \Sigma \vdash \alpha (repeating \rightarrow- with the remaining premises \Sigma \vdash A_2, \ldots, \Sigma \vdash A_n)

The proof rules of Natural Deduction are purely syntactic - they only shuffle symbols around without considering what they mean. Now we connect it with the actual semantics.

Soundness of natural deduction means that formal proofs demonstrate tautological consequences - \Sigma \vdash \alpha implies that \Sigma \models \alpha. In other words, every proof should be a valid argument.

Completeness of natural deduction means that tautological consequences are provable using natural deduction - \Sigma \models \alpha implies that \Sigma \vdash \alpha. In other words, every valid argument should have a proof.

Basically, anything provable by truth tables can also be proven using the ND rules.

Since ND is sound and complete, \Sigma \models \alpha if and only if \Sigma \vdash \alpha.

Also, \rightarrow does not have an associative property, so (A \rightarrow B) \rightarrow C is not the same as A \rightarrow (B \rightarrow C).

9/6/14

Natural Deduction

Soundness and completeness is rather intuitive. For example, it is obvious that if p and q are true, then p is also true, so p \wedge q \models p. In ND, this is represented using the \wedge- rule, which states p \wedge q \vdash p.

When we want to prove that two formulas are tautologically equivalent, we can either use a truth table, or write a formal proof.

A proof \Sigma \vdash \alpha is sound if its conclusion is a tautological consequence of its premises - if it implies \Sigma \models \alpha.

We can prove the soundness theorem for ND using strong induction over the length of the proof:

Let n be the length of the proof in steps.
Clearly, if n = 1, then the proof must have the form \Sigma, \phi \vdash \phi (\epsilon), since this is the only rule that can prove something in one step.
Clearly, \Sigma, \phi \models \phi, because any formula is a tautological consequence of a premise set that contains it.
Assume for some k \ge 1 that all proofs of length k or less are sound - the inductive hypothesis.
Then a proof of length k + 1 has the steps \alpha_1, \ldots, \alpha_k, \phi, with rules associated with each step.
Clearly, \alpha_1, \ldots, \alpha_k is sound, by the inductive hypothesis.
We want to prove that \alpha_1, \ldots, \alpha_k, \phi is also sound.
Clearly, \phi is the result of applying the ND rules. So each rule is a possible case and we need to check each rule to verify that the proof of length k + 1 is sound.
Assume the last rule is \wedge+. Then \phi is of the form \alpha_i \wedge \alpha_j where i \le k, j \le k.
Then there must be sound proofs of \alpha_i and \alpha_j, by the inductive hypothesis.
So \Sigma \models \alpha_i and \Sigma \models \alpha_j.
By a truth table, \Sigma \models \alpha_i \wedge \alpha_j, so the proof of \phi is sound.
We can use this technique to verify this for every ND rule.
Therefore, \Sigma \vdash \phi is always sound - it always implies \Sigma \models \phi.

Syntactical consistency is a property of sets of formulas \Sigma \subseteq \formlp where there does not exist \alpha \in \Sigma such that \Sigma \vdash \alpha and \Sigma \vdash \neg \alpha at the same time - that it is not possible to derive a contradiction from the premises.

This is not to be confused with semantic consistency, which applies only to formulas.

A set of formulas \Sigma \subseteq \formlp is maximally consistent if and only if \Sigma is consistent and for any \alpha \in \formlp \setminus \Sigma, \Sigma \cup \set{\alpha} is inconsistent. In other words, it is the largest possible set of consistent formulas.

There are multiple possible maximally consistent sets.

If \Sigma is maximally consistent:

We can prove the completeness theorem for ND by proving the contrapositive:

We want to prove \Sigma \not\vdash \phi \implies \Sigma \not\models \phi.
Assume \Sigma \not\vdash \phi.
Then \Sigma \cup \set{\neg \phi} is syntactically consistent - if it were not, \neg- would give \Sigma \vdash \phi, contradicting our assumption.
So there exists a truth valuation t such that \Sigma^t = 1 \wedge (\neg \phi)^t = 1, since every syntactically consistent set is satisfiable.
So \Sigma \not\models \phi.

The Lindenbaum Lemma says that any consistent set of formulas can be extended into a maximally consistent set.

11/6/14

Soundness theorem: \Sigma \vdash \alpha \implies \Sigma \models \alpha.

Completeness theorem: \Sigma \models \alpha \implies \Sigma \vdash \alpha, and its contrapositive \Sigma \not\vdash \alpha \implies \Sigma \not\models \alpha.

Soundness implies that if \Sigma is satisfiable, then \Sigma is syntactically consistent.

Completeness implies that if \Sigma is syntactically consistent, then \Sigma is satisfiable.

Logical connectives:

Tips and tricks for proofs:

Prove \neg A \vee \neg B \vdash \neg (A \wedge B):

  1. \neg A, \neg \neg (A \wedge B), \neg (A \wedge B) \vdash \neg \neg (A \wedge B) (\epsilon)
  2. \neg A, \neg \neg (A \wedge B), \neg (A \wedge B) \vdash \neg (A \wedge B) (\epsilon)
  3. \neg A, \neg \neg (A \wedge B) \vdash A \wedge B (\neg- with 1, 2)
  4. \neg A, \neg \neg (A \wedge B) \vdash A (\wedge- with 3)
  5. \neg A, \neg \neg (A \wedge B) \vdash \neg A (\epsilon)
  6. \neg A \vdash \neg (A \wedge B) (\neg- 4, 5)
  7. \neg B, \neg \neg (A \wedge B), \neg (A \wedge B) \vdash \neg \neg (A \wedge B) (\epsilon)
  8. \neg B, \neg \neg (A \wedge B), \neg (A \wedge B) \vdash \neg (A \wedge B) (\epsilon)
  9. \neg B, \neg \neg (A \wedge B) \vdash A \wedge B (\neg- with 7, 8)
  10. \neg B, \neg \neg (A \wedge B) \vdash B (\wedge- with 9)
  11. \neg B, \neg \neg (A \wedge B) \vdash \neg B (\epsilon)
  12. \neg B \vdash \neg (A \wedge B) (\neg- with 10, 11)
  13. \neg A \vee \neg B \vdash \neg (A \wedge B) (\vee- with 6, 12)

Prove \neg (A \wedge B) \vdash \neg A \vee \neg B:

  1. \neg (A \wedge B), \neg \neg A, \neg A \vdash \neg \neg A (\epsilon)
  2. \neg (A \wedge B), \neg \neg A, \neg A \vdash \neg A (\epsilon)
  3. \neg (A \wedge B), \neg \neg A \vdash A (\neg- with 1, 2)
  4. \neg (A \wedge B), \neg \neg B, \neg B \vdash \neg \neg B (\epsilon)
  5. \neg (A \wedge B), \neg \neg B, \neg B \vdash \neg B (\epsilon)
  6. \neg (A \wedge B), \neg \neg B \vdash B (\neg- with 4, 5) ;wip

Prove A \rightarrow B \vdash \neg B \rightarrow \neg A:

  1. A \rightarrow B, \neg B, \neg \neg A, \neg A \vdash \neg \neg A (\epsilon)
  2. A \rightarrow B, \neg B, \neg \neg A, \neg A \vdash \neg A (\epsilon)
  3. A \rightarrow B, \neg B, \neg \neg A \vdash A (\neg- with 1, 2)
  4. A \rightarrow B, \neg B, \neg \neg A \vdash A \rightarrow B (\epsilon)
  5. A \rightarrow B, \neg B, \neg \neg A \vdash B (\rightarrow- with 3, 4)
  6. A \rightarrow B, \neg B, \neg \neg A \vdash \neg B (\epsilon)
  7. A \rightarrow B, \neg B \vdash \neg A (\neg- with 5, 6)
  8. A \rightarrow B \vdash \neg B \rightarrow \neg A (\rightarrow+ with 7)

16/6/14

Predicate Logic

Predicate logic is also known as first order logic.

Predicate logic is like propositional logic, but extended with quantifiers, variables, predicates, functions, constants, and more.

An ordered pair is an object that contains two objects a and b, represented by \tup{a, b}, which, unlike sets, preserves the order of a and b (which one goes first). Two ordered pairs \tup{a, b} and \tup{c, d} are equal if and only if a = c and b = d.

This can be extended into the tuple object - an n-tuple is an object that stores objects a_1, \ldots, a_n, preserving order, and is represented \tup{a_1, \ldots, a_n}. Again, \tup{a_1, \ldots, a_n} = \tup{b_1, \ldots, b_n} if and only if a_i = b_i, 1 \le i \le n.

Relations

An n-ary relation over S is R = \set{\tup{x_1, \ldots, x_n} \middle| x_1, \ldots, x_n \in S, f(x_1, \ldots, x_n)} for some Boolean function f(x_1, \ldots, x_n).

For example, \le is a binary relation over \mb{N} where R = \set{\tup{a, b} \middle| a, b \in \mb{N}, a \le b}. It is the set of all pairs of values where the first value is less than or equal to the second.

A ternary relation over \mb{N} might be R = \set{\tup{x, y, z} \middle| x + y < z, x, y, z \in \mb{N}}.

It is always true that any n-ary relation R over S is a subset of S^n.

Equality is a special binary relation R = \set{\tup{x, y} \middle| x, y \in S, x = y} = \set{\tup{x, x} \middle| x \in S}.

The Boolean value \tup{x, y} \in R for a binary relation can also be written x R y.

R is reflexive over S if and only if \forall x \in S, x R x.

R is symmetric over S if and only if \forall x, y \in S, x R y \implies y R x.

R is transitive over S if and only if \forall x, y, z \in S, x R y \wedge y R z \implies x R z.

If R is all three, then R is an equivalence relation. Examples of equivalence relations include = and \equiv \pmod{36}.

The R-equivalence class of x is \overline x = \set{y \in S \middle| x R y}. It is the set of all elements of S that satisfy x R y and is therefore a subset of S.
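These definitions are easy to check mechanically on a finite set. A small sketch (mine), using congruence mod 3 on a slice of \mb{N}:

    S = range(12)
    R = {(x, y) for x in S for y in S if x % 3 == y % 3}

    assert all((x, x) in R for x in S)                                 # reflexive
    assert all((y, x) in R for (x, y) in R)                            # symmetric
    assert all((x, w) in R for (x, y) in R for (z, w) in R if y == z)  # transitive

    # The R-equivalence class of 1:
    print({y for y in S if (1, y) in R})  # {1, 4, 7, 10}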

Functions

A function/mapping f is a set of ordered pairs where \tup{x, y} \in f \wedge \tup{x, z} \in f \implies y = z. In other words, it is a set of pairs where no two pairs share the same first element.

The domain of f is \operatorname{dom}(f) = \set{x \middle| \tup{x, y} \in f}.

The range of f is \operatorname{ran}(f) = \set{y \middle| \tup{x, y} \in f}.

If \exists y, \tup{x, y} \in f, then f(x) = y. If \operatorname{dom}(f) = S and \operatorname{ran}(f) \subseteq T for some sets S and T, then f: S \to T (f is a function from S to T). Similarly, we can define n-ary functions like g: S_1 \times S_2 \times S_3 \to T.

If R is an n-ary relation and S' \subseteq S, then the restriction of R to S' is R \cap S'^n. Basically, it is a restriction of the possible input and output values that are allowed. If R is a function, this is written R \mid S': S' \to T - the restriction restricts the allowed x-values.

f is onto/surjective if \operatorname{ran}(f) = T - if the image is the same as the codomain. In other words, the output of the function is all the possible output values in T, or every possible output value has a corresponding input value.

Basically, every unique y \in T has a (not necessarily unique) x \in S such that f(x) = y.

f is one-to-one/injective if f(x) = f(y) \implies x = y - if every output value has a unique input value. In other words, distinct input values always give distinct output values.

Basically, if x_1, x_2 \in S and x_1 \ne x_2, then f(x_1) \ne f(x_2).

If f is both, then f is a bijection and is bijective - every output value has one and only one input value, and every input value has one and only one output value. A bijection maps every element of S to a unique element of T and vice versa.
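On finite sets, these properties can be checked directly. A small sketch (mine), with a function represented as a dictionary:

    S, T = {0, 1, 2, 3}, {0, 1}
    f = {x: x % 2 for x in S}          # f: S -> T, x maps to x mod 2

    surjective = set(f.values()) == T            # onto: ran(f) = T
    injective = len(set(f.values())) == len(f)   # distinct inputs, distinct outputs
    print(surjective, injective)  # True False: both 0 and 2 map to 0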

18/6/14

Two sets S, T are equipotent if and only if there exists a bijection from S to T. This is denoted S \sim T.

\abs{S} = \abs{T} if and only if S \sim T. A set S is countably infinite if \abs{S} = \abs{\mb{N}} - if we can construct a bijection between it and the set of all natural numbers. S is countable if it is finite or countably infinite - if \abs{S} \le \abs{\mb{N}}.

We can prove a set is countably infinite by showing that there is a bijection between it and \mb{N}, and we prove a set is countable by either proving it is finite or showing the bijection exists.

We can prove a set S is not countable by proving that the bijection cannot exist.

Prove that the set of integers is countable:

At first glance, it seems there are more integers than natural numbers. However, they are both infinite, and infinite sets work in funny ways.
Even though \mb{N} is a proper subset of \mb{Z}, it is actually still possible to define a bijective f: \mb{Z} \to \mb{N} (taking \mb{N} = \set{1, 2, 3, \ldots} here).
Let f(z) = \begin{cases} 1 - 2z &\text{if } z \le 0 \\ 2z &\text{if } z > 0 \end{cases}.
This is a bijection because we can find the inverse, f^{-1}(n) = \begin{cases} \frac{1 - n}{2} &\text{if } 2 \nmid n \\ \frac n 2 &\text{if } 2 \mid n \end{cases}.
So the set of integers is countable.
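A quick check of this bijection on a window of \mb{Z} (a sketch of my own, matching the inverse above):

    def f(z):
        return 1 - 2 * z if z <= 0 else 2 * z

    def f_inv(n):
        return (1 - n) // 2 if n % 2 == 1 else n // 2

    for z in range(-50, 51):
        assert f_inv(f(z)) == z       # f is invertible, hence a bijection
    print(sorted(f(z) for z in range(-3, 4)))  # [1, 2, 3, 4, 5, 6, 7]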

A countable set is one where we can enumerate or count the elements - we can assign a natural number to each element in the set.

Any subset of a countable set is countable. The union of countably many countable sets is also countable:

Let S_1 \cup \ldots \cup S_n \cup \ldots be a union of countably many countable sets.
Let S_i = \set{a_{i, 1}, \ldots, a_{i, m}, \ldots}.
Then S_1 \cup \ldots \cup S_n \cup \ldots = \set{a_{1, 1}, a_{1, 2}, a_{2, 1}, a_{1, 3}, a_{2, 2}, a_{3, 1}, a_{1, 4}, a_{2, 3}, a_{3, 2}, a_{4, 1}, \ldots} - ordered increasing by n + m and then by n.
Let f: S_1 \cup \ldots \cup S_n \cup \ldots \to \mb{N} be defined as f(a_{n, m}) = \frac 1 2 (n + m - 1)(n + m - 2) + n. This is a bijection (assuming the a_{n, m} are all distinct), so the union is countable.
This is called the diagonalization argument. It can also be used to prove that rational numbers are countable (by making n, m the numerator and denominator) and similar things.
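A small sketch (mine) of this enumeration, checking that the pairing function visits each pair \tup{n, m} exactly once in the diagonal order:

    def f(n, m):
        """Position of a_{n, m} in the ordering (increasing n + m, then n)."""
        return (n + m - 1) * (n + m - 2) // 2 + n

    pairs = [(n, m) for n in range(1, 6) for m in range(1, 6) if n + m <= 6]
    pairs.sort(key=lambda p: f(*p))
    print(pairs)  # (1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), ...
    assert [f(*p) for p in pairs] == list(range(1, len(pairs) + 1))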

The Cartesian product of a finite number of countable sets is also a countable set. Also, the set of all finite subsets of a particular countable set is countable.

Predicate/first order logic is an extension of propositional logic that adds constructs to help deal with individuals/objects in a set. While propositional logic makes us list out all the objects we are using, predicate logic gives additional tools to deal things like some objects, all objects, or more.

For example, "Every student is younger than some instructor" could be represented in propositional logic by writing (s_1 < i_1 \vee \ldots \vee s_1 < i_n) \wedge \ldots \wedge (s_m < i_1 \vee \ldots \vee s_m < i_n) where s_1, \ldots, s_m are the student ages and i_1, \ldots, i_n are the instructor ages.

Predicate logic makes it much easier to express this: \forall s \in S, \exists i \in I, s < i, where S is the set of all students and I is the set of all instructors.

Syntax

The language of predicate logic is called \mathcal{L}.

Formulas in predicate logic are made up of several components: variables, constant symbols, function symbols, predicate/relation symbols, the propositional connectives, the quantifiers \forall (for all) and \exists (there exists), and punctuation.

23/6/14

Write "Every child is younger than its mother" using predicate logic:

Let C(x) represent x being a child. Let M(x) represent the mother of x. Let Y(x, y) represent x being younger than y.
Then \forall x. C(x) \rightarrow Y(x, M(x)).

\forall x. \forall y. \operatorname{plus}(x, y) = \operatorname{plus}(y, x) - addition is commutative.

\forall x. \neg \operatorname{equal}(0, \operatorname{succ}(x)) - 0 is not the successor of any non-negative integer.

Well formed formulas in \mathcal{L} can be defined as follows: if t_1, \ldots, t_n are terms (variables, constants, and functions applied to terms) and P is an n-ary predicate symbol, then P(t_1, \ldots, t_n) is a well formed (atomic) formula; if A and B are well formed, so are \neg A and A \odot B for each binary connective \odot; and if A is well formed and x is a variable, then \forall x. A and \exists x. A are well formed.

This means that terms are not formulas, though atoms are.

We can also draw parse trees for formulas. Here, the leaves are terms and the other nodes are either connectives or predicates or quantifiers.

25/6/14

A free variable is a variable that has a value we can assign to it. A variable that is not free is bound/quantified, and occurs when we use quantifiers.

For example, \forall y, y = x has the free variable x and the bound variable y. x is free because we can give it whatever value we want, which we cannot do for y since it does not have one particular value.

In a parse tree, if a variable node x has a quantifier like \exists x above it in the parse tree, it is bound. Otherwise, it is free.

If A \in \operatorname{Form}(\mathcal{L}), FV(A) is the set of free variables in A. It is defined using the following rules: FV(P(t_1, \ldots, t_n)) is the set of variables occurring in the terms t_1, \ldots, t_n; FV(\neg A) = FV(A); FV(A \odot B) = FV(A) \cup FV(B); and FV(\forall x. A) = FV(\exists x. A) = FV(A) \setminus \set{x}.

For quantifier formulas like \forall x. A(x) and (\exists x. A(x)) \wedge P(x), A(x) is the scope of x in the formula.

A variable can occur in multiple places and represent different things. For example, (\exists x. A(x)) \wedge P(x) has a bound x on the left side of \wedge, and a free variable x on the right side. These two refer to different objects.

A predicate logic formula A is closed and is a sentence if and only if there are no free variables - FV(A) = \emptyset.

Given a formula A, A[x/t] is the same formula, but with all free occurrences of x substituted with the term t. Note that bound occurrences of x are not replaced.

If y is a variable in t and there are no free occurrences of x in the scope of an \exists y or \forall y in a formula A, then t is free for x in A. Basically, t is free for x in A if substituting t for any free occurrence of x does not cause any free variables to become bound.

For example, if t = y + 1 and A = \forall y. P(x), then A[x/t] would cause the y in the y + 1 we substituted in to become bound, so t is not free for x in A. If t is not free for x, then we can make it free for x by renaming the variable - t = y + 1 becomes t = z + 1.

Formally, a substitution A[x/t] can be defined recursively: x[x/t] = t; y[x/t] = y for a variable y \ne x; c[x/t] = c for a constant c; f(t_1, \ldots, t_n)[x/t] = f(t_1[x/t], \ldots, t_n[x/t]); (\neg A)[x/t] = \neg (A[x/t]); (A \odot B)[x/t] = A[x/t] \odot B[x/t]; (\forall x. A)[x/t] = \forall x. A and (\exists x. A)[x/t] = \exists x. A; and (\forall y. A)[x/t] = \forall y. (A[x/t]) and (\exists y. A)[x/t] = \exists y. (A[x/t]) for a variable y \ne x.

Interpretations and Assignments

In propositional logic, there were truth valuations that fixed atom values. In predicate logic, there are more elements to consider, such as quantifiers and functions.

Predicate logic formulas are used to express Boolean values. We can obtain these Boolean values using interpretations. Every formula is either true or false under any given set of conditions.

An interpretation/model of a formula A is a tuple of the form I = \tup{U, (\ldots)^I}, where U is the domain/universe of discourse and (\ldots)^I is a function that maps constants to the domain, functions to functions over the domain, and predicates to relations in the domain.

Basically, an interpretation is a universe of discourse combined with a mapping from elements of the formula to elements in the universe of discourse.

If c \in CS(A), then c^I \in U. If f \in FS(A), then f^I: U^n \to U where n is the arity of f. If P \in PS(A), then P^I \subseteq U^n where n is the arity of P.

Find an interpretation of \exists c. P(f(g(a), g(b)), g(c)):

Let I = \tup{\mb{N}, \begin{cases} 3 &\text{if } a \\ 4 &\text{if } b \\ x + y &\text{if } f(x, y) \\ x^2 &\text{if } g(x) \\ = &\text{if } P \end{cases}}.
So (\exists c. P(f(g(a), g(b)), g(c)))^I becomes \exists c. 3^2 + 4^2 = c^2.
The word form of the interpretation is: "Let \mb{N} be the universe of discourse. Let a be 3 and b be 4. Let f be addition and g be squaring, and P be equality.".

Interpretations are definitely more concrete than the original formulas, but if there are quantifiers, we still cannot directly state whether the formula is true or false under a given interpretation.

A valuation/assignment for an interpretation I of a formula A is a function \theta: VS(A) \to U, where U is the universe of discourse of the interpretation. Basically, it is a bunch of substitutions for each variable. Valuations work only on terms.

If t \in CS, then t^{I, \theta} = t^I. If x \in VS, then x^{I, \theta} = \theta(x). If f \in FS, then f(t_1, \ldots, t_n)^{I, \theta} = f^I(t_1^{I, \theta}, \ldots, t_n^{I, \theta}).

For example, using the previous interpretation I, (\exists c. P(f(g(a), g(b)), g(c)))^I is \exists c. 3^2 + 4^2 = c^2. So under the assignment \theta(c) = 5, (\exists c. P(f(g(a), g(b)), g(c)))^{I, \theta} is 3^2 + 4^2 = 5^2, which is true.
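This evaluation is easy to mirror in code. A concrete sketch of the example above, with the interpretation and assignment as plain Python objects (the names here are mine):

    # I: universe N, a -> 3, b -> 4, f -> addition, g -> squaring, P -> equality.
    I = {"a": 3, "b": 4,
         "f": lambda x, y: x + y,
         "g": lambda x: x * x,
         "P": lambda x, y: x == y}
    theta = {"c": 5}  # the assignment

    def term(t):
        """t^{I, theta}: a term is a constant, a variable, or (symbol, args...)."""
        if isinstance(t, str):
            return I[t] if t in I else theta[t]
        return I[t[0]](*map(term, t[1:]))

    # P(f(g(a), g(b)), g(c)) under I, theta: 3^2 + 4^2 = 5^2.
    print(I["P"](term(("f", ("g", "a"), ("g", "b"))), term(("g", "c"))))  # True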

Interpretations and valuations also work on terms. In this case, an interpretation and assignment result in an element of the universe of discourse for the interpretation.

If A(x) has no free occurrences of y, and A(y) has no free occurrences of x, then A(x) and A(y) basically mean the same thing. We can make them the same using a simple substitution A(x)[x/y] = A(y).

So for a closed formula, we can change the variables around as we wish: \exists x. P(x) is the same as \exists y. P(y).

2/7/14

In predicate logic, a model is an interpretation together with an assignment. A model is a tuple \tup{I, \theta}.

If M = (I, \theta), then I, \theta \models A is the same as M \models A. In the same way, A^{I, \theta} is the same as A^M.

The satisfaction relation \models is a relation between an interpretation, an assignment, and a predicate logic formula. If true, it means that the formula under the interpretation and assignment is true. This is similar to the \models operator over truth valuations and formulas in propositional logic.

Given an interpretation I, an assignment \theta, a model M = (I, \theta), and a first order logic formula A:

We can also define a sub-language of \mathcal{L} that has a subset of the elements of the full language. For example, \mathcal{A} = \tup{0, \operatorname{add1}, =} is a sub-language that only has three elements in it. We can use it to represent any equality of natural numbers using an interpretation like \tup{\mb{N}, \begin{cases} x + 1 &\text{if } \operatorname{add1}(x) \end{cases}}.

We could say that the universal quantifier is conceptually similar to conjunction in propositional logic, because \forall x. P(x) means P(x_1) \wedge P(x_2) \wedge \ldots. We could also say that the existential quantifier is conceptually similar to disjunction in propositional logic, because \exists x. P(x) means P(x_1) \vee P(x_2) \vee \ldots.

7/7/14

The relevance lemma states that given a formula A, interpretation I, and valuations \theta_1, \theta_2, if \theta_1(x) = \theta_2(x) for all x \in VS(A), then I, \theta_1 \models A \iff I, \theta_2 \models A.

\Sigma \subseteq \operatorname{Form}(\mathcal{L}) is satisfiable if and only if there exists an interpretation I and valuation \theta such that I, \theta \models A for all A \in \Sigma.

A \in \operatorname{Form}(\mathcal{L}) is valid if and only if for any interpretation I and valuation \theta, I, \theta \models A.

We prove something is satisfiable by giving I, \theta, and prove something is valid by deriving a contradiction from the existence of an I, \theta that does not satisfy it.

A is a logical consequence of \Sigma (written \Sigma \models A) if and only if for every I, \theta, I, \theta \models \Sigma (satisfying every element of \Sigma) implies I, \theta \models A.

So \emptyset \models A, also written as \models A, means that A is valid.

The replaceability theorem states that given formulas A, B, C, if B \equiv C, then A[B/C] \equiv A, where A[B/C] is A with occurrences of B replaced by C. In other words, if two formulas are equivalent, then replacing one with the other inside another formula does not change that formula's meaning.

The duality theorem states that \neg A \equiv A' where A' is A with \wedge replaced with \vee, \vee with \wedge, \exists with \forall, \forall for \exists, and all atoms P with \neg P.

Deductive Systems

Natural deduction in first order logic is similar to that in propositional logic, but with additional rules for quantifiers.

Universal-introduction (\forall+): \Sigma \vdash \alpha[x/u], where u is a constant not occurring in \Sigma, implies \Sigma \vdash \forall x. \alpha. If \alpha is true for an arbitrary u, then it is true for all x.
Universal-elimination/specification (\forall-): \Sigma \vdash \forall x. \alpha implies \Sigma \vdash \alpha[x/t] for any term t. If \alpha is true for all x, then it is true for any particular t as well.
Existential-introduction (\exists+): \Sigma \vdash \alpha[x/t] for some term t implies \Sigma \vdash \exists x. \alpha. If \alpha is true for a particular t, then it is true for some x.
Existential-elimination (\exists-): \Sigma, \alpha[x/u] \vdash \psi, where u is a constant not occurring in \Sigma, \alpha, or \psi, implies \Sigma, \exists x. \alpha \vdash \psi. If \psi follows from \alpha holding for an arbitrary witness, then \psi follows from \exists x. \alpha.

Show that \forall x. \forall y. P(x, y) \vdash \forall y. \forall x. P(x, y):

  1. \forall x. \forall y. P(x, y) \vdash \forall x. \forall y. P(x, y) (Ref)
  2. \forall x. \forall y. P(x, y) \vdash \forall y. P(u, y) (\forall- with 1, where u is a fresh constant)
  3. \forall x. \forall y. P(x, y) \vdash P(u, v) (\forall- with 2, where v is a fresh constant)
  4. \forall x. \forall y. P(x, y) \vdash \forall x. P(x, v) (\forall+ with 3, since u does not occur in the premise)
  5. \forall x. \forall y. P(x, y) \vdash \forall y. \forall x. P(x, y) (\forall+ with 4, since v does not occur in the premise)

9/7/14

Show that \exists x. P(x) \vdash \exists y. P(y):

  1. P(u) \vdash P(u) (Ref)
  2. P(u) \vdash \exists y. P(y) (\exists+ with 1, taking t = u)
  3. \exists x. P(x) \vdash \exists y. P(y) (\exists- with 2, since u does not occur in \exists y. P(y))
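
Both derivations can be replayed in Lean 4 as a sanity check - a sketch (our illustration, not the course's formal system), where \forall-elimination and \forall-introduction correspond to function application and abstraction, and the \exists rules to Exists.intro and Exists.elim:

-- ∀x. ∀y. P(x, y) ⊢ ∀y. ∀x. P(x, y): eliminate both quantifiers at u and v,
-- then reintroduce them in the opposite order.
example {α : Type} (P : α → α → Prop) (h : ∀ x, ∀ y, P x y) : ∀ y, ∀ x, P x y :=
  fun v u => h u v

-- ∃x. P(x) ⊢ ∃y. P(y): eliminate the existential to obtain a witness u,
-- then reintroduce it.
example {α : Type} (P : α → Prop) (h : ∃ x, P x) : ∃ y, P y :=
  Exists.elim h (fun u hu => Exists.intro u hu)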

14/7/14

Peano Arithmetic

The Peano axioms formally define the natural numbers. Peano arithmetic is a sublanguage of \mathcal{L}.

The equality relation \approx is a binary relation over a domain D, defined as \set{\tup{x, x} \middle| x \in D}.

We define two axioms for the equality relation:

  1. \vdash \forall x. x \approx x (reflexivity).
  2. \vdash \forall x. \forall y. x \approx y \rightarrow (\alpha[z/x] \rightarrow \alpha[z/y]) for any formula \alpha (substitution of equals for equals).

This implies that equality is symmetric (\forall x. \forall y. x \approx y \rightarrow y \approx x) and transitive (\forall w. \forall x. \forall y. x \approx y \rightarrow (y \approx w \rightarrow x \approx w)).

Proof of symmetry:

  1. \vdash \forall x. x \approx x (Axiom 1)
  2. \vdash a \approx a (\forall- with 1)
  3. \vdash \forall x. \forall y. x \approx y \rightarrow (x \approx a \rightarrow y \approx a) (Axiom 2 with z \approx a as \alpha)
  4. \vdash a \approx b \rightarrow (a \approx a \rightarrow b \approx a) (\forall- with 3)
  5. a \approx b \vdash a \approx a (+ with 2)
  6. a \approx b \vdash a \approx a \rightarrow b \approx a (\rightarrow- with 4 and (\in))
  7. a \approx b \vdash b \approx a (\rightarrow- with 6 and 5)
  8. \vdash a \approx b \rightarrow b \approx a (\rightarrow+ with 7)
  9. \vdash \forall x. \forall y. x \approx y \rightarrow y \approx x (\forall+ with 8, applied twice)

The symmetry rule is known as \operatorname{symmetric}.

Proof of transitivity:

  1. \vdash b \approx a \rightarrow (b \approx c \rightarrow a \approx c) (Axiom 2 with z \approx c as \alpha, then \forall-)
  2. a \approx b \vdash b \approx a \rightarrow (b \approx c \rightarrow a \approx c) (+ with 1)
  3. \vdash a \approx b \rightarrow b \approx a (\operatorname{symmetric})
  4. a \approx b \vdash b \approx a (\rightarrow- with 3 and (\in))
  5. a \approx b \vdash b \approx c \rightarrow a \approx c (\rightarrow- with 2 and 4)
  6. \vdash a \approx b \rightarrow (b \approx c \rightarrow a \approx c) (\rightarrow+ with 5)
  7. \vdash \forall w. \forall x. \forall y. x \approx y \rightarrow (y \approx w \rightarrow x \approx w) (\forall+ with 6, applied three times)
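
In Lean 4, the same two facts follow from reflexivity (rfl, the analogue of Axiom 1) and substitution (Eq.subst, the analogue of Axiom 2), with the motive argument playing the role of the formula \alpha - a sketch, not the course's formal system:

-- symmetry: Axiom 2 with α = (z ≈ a); substituting b for the first a in a ≈ a
example {α : Type} (a b : α) (h : a = b) : b = a :=
  Eq.subst (motive := fun z => z = a) h rfl

-- transitivity: Axiom 2 with α = (a ≈ z); substituting c for b in a ≈ b
example {α : Type} (a b c : α) (h1 : a = b) (h2 : b = c) : a = c :=
  Eq.subst (motive := fun z => a = z) h2 h1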

The natural numbers were created to allow counting and ordering objects, where 0 represents a lack of objects and each successive number represents one more object than the previous.

If S: \mb{N} \to \mb{N} is a unary function that, given a natural number, gives the next natural number, every natural number can be represented as a term. For example, 0 is represented with 0, and 3 is represented with S(S(S(0))).

The signature of Peano arithmetic is \sigma = \tup{0, S, +, \cdot, \approx, \not\approx}. These have the following axioms:

  1. \forall x. S(x) \not\approx 0 - no successor of a natural number is 0.
  2. \forall x. \forall y. S(x) \approx S(y) \rightarrow x \approx y - two numbers with equal successors are equal.
  3. \forall x. x + 0 \approx x - 0 is the additive identity
  4. \forall x. \forall y. x + S(y) \approx S(x + y) - adding a successor yields the successor of the sum
  5. \forall x. x \cdot 0 \approx 0 - multiplication by 0 yields 0
  6. \forall x. \forall y. x \cdot S(y) \approx x \cdot y + x - multiplying by a successor adds another copy of x
  7. \alpha[x/0] \rightarrow ((\forall x. \alpha \rightarrow \alpha[x/S(x)]) \rightarrow \forall x. \alpha) - simple induction

Axiom 7 states that if something is true for 0, and whenever it is true for a value it is also true for that value's successor, then it is true for all values.
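
As a sketch in Lean 4 (an illustration; the names PNat, add, mul are ours), the signature can be modelled as an inductive type, with axioms 3-6 as defining equations; axioms 1, 2, and 7 then hold automatically for inductive types:

inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

open PNat

-- axioms 3 and 4 as the defining equations of addition
def add : PNat → PNat → PNat
  | x, zero => x                 -- x + 0 ≈ x
  | x, succ y => succ (add x y)  -- x + S(y) ≈ S(x + y)

-- axioms 5 and 6 as the defining equations of multiplication
def mul : PNat → PNat → PNat
  | _, zero => zero              -- x · 0 ≈ 0
  | x, succ y => add (mul x y) x -- x · S(y) ≈ x · y + x

-- axiom 1: no successor equals zero
example (x : PNat) : succ x ≠ zero := fun h => PNat.noConfusion h

-- axiom 2: S is injective
example (x y : PNat) (h : succ x = succ y) : x = y := by
  injection h with hxy
  exact hxy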

Also, \vdash \forall y. 0 + y \approx y, and \forall y. x + y \approx y + x \vdash \forall y. S(x) + y \approx S(x + y) - two lemmas that are useful when proving commutativity below.

Also, addition is commutative (\forall x. \forall y. x + y \approx y + x) and associative (\forall x. \forall y. \forall z. (x + y) + z \approx x + (y + z)).

Prove that every non-zero number has a predecessor - \forall x. x \not\approx 0 \rightarrow \exists y. S(y) \approx x:

;wip

Prove that addition is commutative - \forall x. \forall y. x + y \approx y + x using induction:

  1. \vdash \forall y. 0 + y \approx y (derived from Axiom 3)
  2. \forall x. \forall y. x + y \approx y + x \vdash \forall x. \forall y. x + y \approx y + x \rightarrow S(x + y) \approx S(y + x) (Axiom 2 for equality)
  3. \vdash \forall x. \forall y. x + y \approx y + x (Axiom 7 with 1 and 2 where \alpha = x + y \approx y + x)

Program Verification

A program is correct if it satisfies its specifications. We can show correctness by inspection, testing, and formal verification.

Testing and inspection can show that there are bugs, but cannot show that there are no bugs. For that we need formal verification, which can prove that a program is correct for all possible inputs.

In the real world, formal verification is used for safety- and mission-critical systems such as vehicle brakes and nuclear power plants. It is also helpful for documentation purposes.

The formal verification workflow starts by converting an informal description of the program into an equivalent formula \phi in a symbolic logic. Then, the program is written to implement that formula, and finally we prove that the program meets the specification, fixing it until it does.

In this course we will use a core programming language that is a subset of C/C++ and Java with the most important features. For example, a program that calculates the factorial of x:

y = 1;
z = 0;
while (z != x) {
    z = z + 1;
    y = y * z;
}

This is an imperative program - it manipulates states (valuations of the variables at a given point in the program) using commands.

A Hoare triple is an assertion used in formal verification, of the form \tup{P C Q}, where P is the set of preconditions, C is the code or program, and Q is the set of postconditions. This is also written \tup{\phi C \psi}.

The Hoare triple \tup{P C Q} means that if the program's starting state satisfies P, then the final state of the program after it finishes satisfies Q.

\tup{P C Q} is a specification of C. For example, a specification for a program C that takes a positive number x and computes a number whose square is less than x might be written as \tup{\set{x > 0} C \set{y * y < x}}. Note that this does not specify a unique C.
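
For instance (a sketch; this particular C is our illustration, not from the notes), the one-line program below satisfies the specification above:

{x > 0}
y = 0;
{y * y < x}

After the assignment, y * y = 0, and 0 < x by the precondition, so the postcondition holds.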

\tup{P C Q} is satisfied under partial correctness if and only if, whenever C is executed from a starting state s satisfying P and C eventually terminates, the final state s' satisfies Q.

\tup{P C Q} being satisfied under partial correctness can be written as \models_{par} \tup{P C Q}. If we can prove that \tup{P C Q} is satisfied under partial correctness using partial correctness calculus, then \vdash_{par} \tup{P C Q}.

21/7/14

\tup{P C Q} is satisfied under total correctness if and only if when C is executed with any starting state s satisfying P, C eventually terminates and the final state s' satisfies Q.

\tup{P C Q} being satisfied under total correctness can be written as \models_{tot} \tup{P C Q}. If we can prove that \tup{P C Q} is satisfied under total correctness using total correctness calculus, then \vdash_{tot} \tup{P C Q}.

Total correctness is basically partial correctness plus the additional condition that the program must terminate.

\operatorname{States} is the set of all states. \operatorname{satisfies}(s, X) is the state s satisfying conditions X. \operatorname{terminates}(C, s) is the program C terminating when run with starting state s. s' = \operatorname{result}(C, s) is the final state s' after running the program C with starting state s, which is undefined if C does not terminate.

So \models_{par} \tup{P C Q} is equivalent to \forall s \in \operatorname{States}, \operatorname{satisfies}(s, P) \wedge \operatorname{terminates}(C, s) \rightarrow \operatorname{satisfies}(\operatorname{result}(C, s), Q).

So \models_{tot} \tup{P C Q} is equivalent to \forall s \in \operatorname{States}, \operatorname{satisfies}(s, P) \rightarrow (\operatorname{terminates}(C, s) \wedge \operatorname{satisfies}(\operatorname{result}(C, s), Q)). Equivalently, \models_{tot} \tup{P C Q} holds if and only if \models_{par} \tup{P C Q} and \forall s \in \operatorname{States}, \operatorname{satisfies}(s, P) \rightarrow \operatorname{terminates}(C, s).

We prove total correctness by first proving partial correctness, then proving termination. We prove partial correctness using sound inference rules, as in natural deduction. Termination is often easy to prove for particular programs, but there is no algorithm that can decide whether an arbitrary program terminates - proving termination is undecidable in general.

We also have logical variables that appear only in preconditions and postconditions, not the code itself. For example, we might have preconditions \set{x_0 \in \mb{N}, x = x_0, y \in \mb{N}} where x_0 is a logical variable.

Partial correctness is also called validity - a triple satisfied under partial correctness is said to be valid with respect to the partial correctness calculus.

We now introduce the inference rules of partial correctness.

Name | Rule | Example
Assignment | \vdash_{par} \tup{\set{\phi[x/E]} x = E; \set{\phi}} | \vdash_{par} \tup{\set{y + 1 = 7} x = y + 1; \set{x = 7}}
Precondition Strengthening (implied) | \set{\alpha} \rightarrow \set{\alpha'}, \tup{\set{\alpha'} C Q} \vdash_{par} \tup{\set{\alpha} C Q} | \set{y = 1} \rightarrow \set{y \ge 0}, \tup{\set{y \ge 0} C Q} \vdash_{par} \tup{\set{y = 1} C Q}
Postcondition Weakening (implied) | \tup{P C \set{\alpha}}, \set{\alpha} \rightarrow \set{\alpha'} \vdash_{par} \tup{P C \set{\alpha'}} | \tup{P C \set{y = 1}}, \set{y = 1} \rightarrow \set{y \ge 0} \vdash_{par} \tup{P C \set{y \ge 0}}
Composition | \tup{P C_1 Q}, \tup{Q C_2 R} \vdash_{par} \tup{P C_1;C_2 R} | \tup{\set{x = 1} y = x \set{y = 1}}, \tup{\set{y = 1} k = y * 5 \set{k = 5}} \vdash_{par} \tup{\set{x = 1} y = x; k = y * 5 \set{k = 5}}
Conditional 1 (if-then) | \tup{P \cup B C Q}, P \cup \overline B \rightarrow Q \vdash_{par} \tup{P (if (B) C) Q} | -
Conditional 2 (if-then-else) | \tup{P \cup B C_1 Q}, \tup{P \cup \overline B C_2 Q} \vdash_{par} \tup{P (if (B) C_1 else C_2) Q} | -
Partial While (possibly non-terminating) | \tup{I \cup B C I} \vdash_{par} \tup{I (while (B) C) I \cup \overline B} | -
Partial For 1 (possibly non-terminating) | \tup{I \cup \set{a \le v, v \le b} C I[v/v + 1]} \vdash_{par} \tup{I[v/a] \cup \set{a \le b} (for (v = a to b) C) I[v/b + 1]} | -
Partial For 2 (possibly non-terminating) | \vdash_{par} \tup{I \cup \set{a > b} (for (v = a to b) C) I} | -

Note that in the textbook the \alpha[x/y] (all occurrences of x replaced by y) rule is actually written \alpha[y/x]. In this course we will be using the \alpha[x/y] notation.

;wip: algebra vs calculus in terms of semantic vs syntactic

Annotated Programs

We perform program verification by first writing the program in annotated form. This is a form of the program where we insert assertions after each program statement. The composition is implicit - it is assumed that the postconditions of each statement are part of the preconditions of the next.

We then use inference rules to try to justify the assertions and prove that they are true at the points they appear in the program. Implied rules such as precondition strengthening need additional proof that the implication holds - this can be done after everything else.

Each assertion should be an instance of an inference rule, without simplification.

;wip: annotated program example

23/7/14

For annotated programs, we usually work backwards from the postconditions to prove things all the way to the beginning.

To verify programs, we start by annotating the code with assertions, then justifying those assertions with inference rules, working upward until we reach the preconditions. Then, we prove the implications, such as those used in the implied and if-then rules, using things like predicate logic, algebra, and basic arithmetic.

For conditionals, the annotated form appears as follows:

{P}
if (B) {
    {P \wedge B} (if-then)
    C
    {Q} (justify by some inference rule dependent on C)
}
{Q} (if-then) [requires a proof of P \wedge \neg B \rightarrow Q]

{P}
if (B) {
    {P \wedge B} (if-then-else)
    C_1
    {Q} (justify by some inference rule)
} else {
    {P \wedge \neg B} (if-then-else)
    C_2
    {Q} (justify by some inference rule)
}
{Q} (if-then-else)

An example of a conditional proof follows:

{true}
if (max < x) {
    {true \wedge max < x} (if-then)
    {x \ge x} (implied A)
    max = x;
    {max \ge x} (assignment)
}
{max \ge x} (if-then) [requires implied B: true \wedge \neg(max < x) \rightarrow max \ge x]

Implied A:
Clearly $x \ge x$ is a tautology and the implication holds.

Implied B:
1. $true \wedge \neg(max < x)$ (assumption)
2. $\neg(max < x)$ (predicate logic)
3. $max \ge x$ (algebra)

Also, the I in the Partial While rule is called a loop invariant, because it is a condition that does not change with respect to the loop - it is true both before and after each iteration of the loop.

As a result, the loop invariant is true before the loop starts and after it ends. The invariant does not depend on how the loop runs.

Loop invariants must also express relationships between variables used in the body of the loop.

In the core language, only while loops and recursion can result in non-termination - programs that do not use either loops or recursion are guaranteed to terminate.

28/7/14

We want to prove that our programs are totally correct - that they are partially correct, and that they terminate. We can now prove partial correctness using the deduction rules given earlier, but now we need to prove that our programs terminate.

Clearly, any program that does not use loops or recursion must terminate, since each of its statements executes at most once. When we have while loops or recursion, proving termination is more difficult.

We often prove that a while loop terminates by finding a variant - an integer expression that is non-negative at the start of every iteration, yet strictly decreases on each iteration. If such an expression exists, the loop must terminate after finitely many iterations, since otherwise the variant would eventually become negative.

For example, consider the following code:

{x \ge 0}
y = 1;
z = 0;
while (z != x) {
    z = z+1;
    y = y * z;
}
{y = x!}

A good candidate for the variant is x - z, since inside the loop, x - z is always non-negative, and on every iteration of the loop, x - z must always decrease since z increases by 1 and x is constant. If we prove these two things are true, then we can prove that the while loop terminates using contradiction - supposing the loop didn't terminate, it would violate the variant.
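
As a sketch of the partial correctness half, using the loop invariant I: y = z! (an invariant we choose here for illustration), the annotated program might look like:

{x \ge 0}
{1 = 0!} (implied, since 0! = 1)
y = 1;
{y = 0!} (assignment)
z = 0;
{y = z!} (assignment)
while (z != x) {
    {y = z! \wedge z \ne x} (partial-while)
    {y \cdot (z + 1) = (z + 1)!} (implied, since (z + 1)! = (z + 1) \cdot z!)
    z = z + 1;
    {y \cdot z = z!} (assignment)
    y = y * z;
    {y = z!} (assignment)
}
{y = z! \wedge \neg (z \ne x)} (partial-while)
{y = x!} (implied, since z = x)

Combining this with the variant x - z for termination gives total correctness.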

For loops in our core language appear as follows:

for v = a to b {
    C
}

The values of v, a, and b cannot be changed within the loop body C. The body is not executed at all if a > b, and is executed 1 + b - a times otherwise, with v incremented after each run of the body. That means that if a \le b, the above is equivalent to the following:

v = a;
C
v = v + 1;
C
...
v = b;
C

;wip: up to finishing for loops

30/7/14

Something is computable if it can be calculated using a systematic procedure - given enough resources, a program implementing that procedure could solve the problem. There are many types of problems that cannot be solved in this way.

For example, the halting problem is not computable - it is impossible to write a program that determines whether an arbitrary given program terminates.
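
A hedged sketch of the standard diagonal argument in the core language's C-like notation; the names halts and paradox, and the representation of programs as strings, are assumptions for illustration only:

// Hypothetical decider (assumed for contradiction; no such function can
// actually be written): returns 1 if program p halts on input i, else 0.
int halts(char *p, char *i);

// Diagonal program: does the opposite of what halts predicts about
// running program p on its own source code.
void paradox(char *p) {
    if (halts(p, p)) {
        while (1) {}  // halts says p halts on itself, so loop forever
    }
    // halts says p loops on itself, so halt immediately
}

// Feeding paradox its own source is contradictory: it halts if and only if
// halts says it does not halt. So the decider halts cannot exist.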

This work by Anthony Zhang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Copyright 2013-2017 Anthony Zhang.