Knowledge is software for your brain: The more you know, the more problems you can solve!
Taking Propositions as Types Seriously, a nice brief read by Russell O’Connor, to link to.
Foundations of Algebraic Specification and Formal Software Development Authors: Donald Sannella, Andrzej Tarlecki http://link.springer.com/book/10.1007%2F978-3-642-17336-3
(You can access this via the McMaster library using your MacID login: http://catalogue.mcmaster.ca/catalogue/Record/1950176 )
doc:temp
doc:Hussain
doc:Hello
doc:SemVar
doc:graph
doc:Expression
( One reads ‘:=’ as becomes and so the addition of an extra colon results in a ‘stutter’: One reads ‘∷=’ as be-becomes. The symbol ‘|’ is read or. )
Notice that a constant is really just an application with n = 0 arguments, and so the first line in the definition above could be omitted.
In a sense, an expression is like a sentence: the variables act as pronouns, the function applications act as verb clauses, and the arguments to an application are the participants in the action of the verb clause.
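As a concrete sketch, the grammar of expressions can be rendered as a Haskell datatype; the names below are illustrative, not from the text:

```haskell
-- An expression is either a variable or an operation name applied
-- to a list of argument expressions.
-- A constant, e.g. 3, is the application of a 0-argument operation: App "3" [].
data Expr = Var String | App String [Expr]
  deriving (Eq, Show)

-- gcd(m, gcd(n, p)) as an abstract syntax tree:
example :: Expr
example = App "gcd" [Var "m", App "gcd" [Var "n", Var "p"]]
```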
A variable of type τ is a name denoting a yet unknown value of type τ;
i.e., “it is a pronoun (nickname) referring to a person in the collection of people τ”.
E.g., let Expr τ refer to the expressions denoting values of type τ; then a meta-variable is simply a normal variable of type Expr τ. That is, when we write phrases like “Let E be an expression”, the name E is a variable of type Expr τ; i.e., a meta-variable.
Expressions, as defined above, are also known as abstract syntax trees (AST) or prefix notation. Then textual substitution is known as ‘grafting trees’ (a monadic bind).
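A minimal Haskell sketch of ‘grafting trees’: if we parameterise expressions by the type of their variable names, textual substitution is precisely the monadic bind. The names here are illustrative:

```haskell
-- Expressions parameterised by the type of variable names.
data Expr v = Var v | App String [Expr v]
  deriving (Eq, Show)

instance Functor Expr where
  fmap f (Var v)    = Var (f v)
  fmap f (App g es) = App g (map (fmap f) es)

instance Applicative Expr where
  pure    = Var
  f <*> e = f >>= \g -> fmap g e

-- Bind grafts a tree onto each variable leaf: textual substitution.
instance Monad Expr where
  Var v    >>= sub = sub v
  App g es >>= sub = App g (map (>>= sub) es)

-- Substitute n := m + 1 in gcd(m, n):
substituted :: Expr String
substituted = App "gcd" [Var "m", Var "n"] >>= \v ->
  if v == "n" then App "+" [Var "m", App "1" []] else Var v
```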
Their use can be clunky: many parentheses are required, and a syntactic distinction is implicitly forced between equivalent expressions; e.g., gcd(m,gcd(n,p)) and gcd(gcd(m,n),p) look different even though gcd is associative.
As such, one can declare precedence levels (a.k.a. binding power) to reduce parentheses; one can declare fixity (i.e., where arguments sit around operation names); and, finally, one can declare associativity (whether sequential instances of an operation should be read with implicit parentheses to the right or to the left) to reduce syntactic differences. The resulting expressions are now said to be in a concrete syntax; i.e., in a syntactic shape that is more concrete.
That is, the conventions on how a string should be parsed as a tree are known as precedence, fixity, and associativity rules.
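In Haskell, for instance, all three conventions are declared explicitly; a small illustrative sketch with made-up operator names:

```haskell
-- Two illustrative operators with declared associativity and precedence.
infixl 6 .+.   -- left-associative, binding power 6
infixl 7 .*.   -- left-associative, binding power 7 (binds tighter)

(.+.), (.*.) :: Int -> Int -> Int
(.+.) = (+)
(.*.) = (*)

-- Because .*. binds tighter, 2 .+. 3 .*. 4 parses as 2 .+. (3 .*. 4),
-- and left-association makes 10 .+. 3 .+. 4 parse as (10 .+. 3) .+. 4:
-- no parentheses needed in either case.
parsed :: Int
parsed = 2 .+. 3 .*. 4
```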
Similarly, not for operators but for relations: one treats relations conjunctionally to reduce the number of ‘and’ (∧) symbols; e.g., a ≤ b ≤ c abbreviates a ≤ b ∧ b ≤ c.
doc:Induction
doc:Textual_Substitution
doc:Inference_Rule
Just as there are meta-variables and meta-theorems, there is ‘meta-syntax’:
- The use of a fraction to delimit premises from conclusion is a form of ‘implication’.
- The use of a comma, or white space, to separate premises is a form of ‘conjunction’.
If our expressions actually have an implication and a conjunction operation, then the inference rule above can be presented as an axiom.
The inference rule says “if the premises hold, then so does the conclusion”.
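Schematically, replacing the fraction bar with implication and the commas between premises with conjunction turns a rule into an axiom:

```latex
\frac{P_1 \quad P_2 \quad \cdots \quad P_n}{C}
\qquad\rightsquigarrow\qquad
P_1 \land P_2 \land \cdots \land P_n \;\Rightarrow\; C
```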
Moreover, the rule is not a Boolean expression. Rules are thus more general, allowing us to construct systems of reasoning that have no concrete notion of ‘truth’; e.g., the arithmetic rule above says that from the things above the fraction bar, using the operation ‘+’, we can get the thing below the bar, but that thing (19) is not ‘true’ as we may think of conventional truth.
Finally, the rule asserts that the thing below the bar may be derived whenever the things above it have been.
A “theorem” is a syntactic concept: Can we play the game of moving symbols to get this? Not “is the meaning of this true”! ‘Semantic concepts’ rely on ‘states’: assignments of values to variables, so that we can ‘evaluate/simplify’ statements to decide whether they are true.
Syntax is like static analysis; semantics is like actually running the program (on some, or all possible inputs).
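A semantics can be sketched as an evaluator: given a state (an assignment of values to variables), an expression is evaluated to a value. A minimal Haskell illustration, assuming numeric expressions; the names are illustrative:

```haskell
-- Expressions: variables, or operations applied to arguments.
data Expr = Var String | App String [Expr]
  deriving Show

-- A state assigns a value to each variable.
type State = [(String, Int)]

-- Semantics: evaluate an expression in a given state.
eval :: State -> Expr -> Int
eval st (Var v)           = maybe (error ("unbound: " ++ v)) id (lookup v st)
eval st (App "+" [a, b])  = eval st a + eval st b
eval st (App "*" [a, b])  = eval st a * eval st b
eval _  (App c [])        = read c   -- constants are 0-ary applications
eval _  _                 = error "unknown operation"

-- In the state m := 12, n := 7, the expression m + n evaluates to 19.
result :: Int
result = eval [("m", 12), ("n", 7)] (App "+" [Var "m", Var "n"])
```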
One reads/writes a natural deduction proof (tree) from the very bottom: each line is an application of a rule of reasoning whose assumptions are above the line, and so the proof is read/written upward. The benefit of this approach is that the rules guide proof construction; i.e., it is goal-directed.
However, the downsides are numerous:
- So much horizontal space is needed even for simple proofs.
- One has to repeat common subexpressions; e.g., when using transitivity of equality.
- For comparison with other proof notations, such as Hilbert style,
see Equational Propositional Logic.
This is a more ‘linear’ proof format, also known as the equational style or calculational proof. It corresponds to the ‘high-school style’ of writing a sequence of equations, one per line, along with hints/explanations of how each line was reached from the previous one.
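A tiny illustration of the layout, where each bracketed hint names the fact justifying the step; the particular equations are made up for the example:

```
    (x + 0) · 2
=     ⟨ x + 0 = x ⟩
    x · 2
=     ⟨ 2·x = x + x ⟩
    x + x
```

Note that, unlike a natural deduction tree, the common subexpression on each line is written only once and the proof grows vertically rather than horizontally.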
Finally, an inference rule says that it is possible to start with the givens and, by moving symbols according to the rules, arrive at the conclusion.
doc:Logic
doc:Theorem
doc:Metatheorem
doc:Calculus (Propositional Calculus)
doc:Semantics
doc:Calculational_Proof
doc:Programming
doc:Specification
doc:Proving_is_Programming
doc:Algorithmic_Problem_Solving
doc:nat-trans
doc:cat
doc:Associative
doc:Identity
doc:Distributive
doc:Commutative
doc:Reflexive
doc:Transitive
doc:Symmetric
doc:Antisymmetric
doc:Asymmetric
doc:Preorder
doc:Equivalence
doc:Linear
doc:Semilinear
doc:Univalent
doc:Total
doc:Map
doc:Surjective
doc:Injective
doc:Bijective
doc:Iso
doc:Difunctional