Above is a validity checker for a variant of the implicational fragment of intuitionistic propositional logic. When you enter a statement into the box, it is highlighted green if the statement is a theorem and red if it is a nontheorem. The theory was designed to be a canonical representative of the PSPACE-complete complexity class, and the solver was optimized for simplicity. The validity checker is not intended to be an efficient SAT solver; rather, it resembles automated theorem provers in that it generates new clauses during the derivation and produces a proof instead of searching for counterexamples.

All characters, with the exception of whitespace and brackets, are interpreted as propositions. A sequence of characters is interpreted as a Horn clause, in the style of logic programming.

ABC ≡ A ← (B ∧ C) ≡ (A ← B) ← C

These Horn clauses can be nested with parentheses.

A(BC) ≡ A ← (B ← C)

While we allow parentheses enclosing the consequent of a Horn clause, these parentheses are ignored due to the following equivalence:

(A)BC ≡ ABC

As such, all statements have a single proposition as their consequent and a list of other statements as their antecedents. More formally, we may write the type of a statement as an Algebraic Data Type (ADT):

Statement := Nat × (List Statement)

Where atomic propositions are indexed with natural numbers. This ADT is perhaps to be expected of a canonical propositional logic, as it has a tree-like structure but does not have an arbitrarily chosen branching factor. Furthermore, there is no separate constructor for leaf nodes since all statements have exactly one consequent, including atomic propositions which are represented as a statement with an empty list of antecedents.
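To make the grammar and the ADT concrete, here is a minimal parser sketch in Python. It is an illustration under my own naming (`parse_items`, `to_statement`, and `parse` are not from the checker's source), and it represents a statement as a `(head, antecedents)` pair rather than a dedicated type:

```python
def parse_items(s, i):
    # Parse a whitespace-separated sequence of items up to ')' or end of string.
    items = []
    while i < len(s) and s[i] != ')':
        if s[i].isspace():
            i += 1
        elif s[i] == '(':
            inner, i = parse_items(s, i + 1)
            i += 1                        # consume the closing ')'
            items.append(to_statement(inner))
        else:
            items.append((s[i], []))      # any other character is an atomic proposition
            i += 1
    return items, i

def to_statement(items):
    # The first item supplies the consequent; the rest become antecedents.
    # Since (A <- B) <- C  ≡  A <- (B ∧ C), a compound first item is flattened.
    (head, ants), rest = items[0], items[1:]
    return (head, ants + rest)

def parse(s):
    return to_statement(parse_items(s, 0)[0])

print(parse("A(BC)"))   # ('A', [('B', [('C', [])])])
```

For example, `parse("ABC")` yields `('A', [('B', []), ('C', [])])`, and `parse("(AB)C")` yields the same structure as `parse("ABC")`, reflecting the equivalences above.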

There is only one inference rule in this language. In short, to produce the desired consequent, we must choose the antecedent that will derive it via Modus Ponens. Then we must provide a proof of each antecedent of this chosen antecedent. Formally, the ADT of a proof in this language is:

Proof := Nat × (List Proof) ≡ Statement

Where antecedents are indexed by natural numbers. How convenient! The proof procedure is quite elegant, especially when each statement is given a unique identifier at the start, as new statements produced by the inference rule are composed only of subtrees of the original statement.
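A proof checker for this rule fits in a few lines. The sketch below again represents statements as `(head, antecedents)` pairs and proofs as `(index, subproofs)` pairs; the convention that an index points into a context list that grows as antecedents are assumed is my reading of the rule, not necessarily the checker's actual numbering:

```python
def check(goal, proof, ctx=()):
    """Check that `proof` derives `goal` from the statements in `ctx`."""
    head, ants = goal
    ctx = list(ctx) + list(ants)       # assume the goal's antecedents
    idx, subproofs = proof
    c_head, c_ants = ctx[idx]          # the chosen antecedent
    return (c_head == head             # it must yield the desired consequent
            and len(subproofs) == len(c_ants)
            and all(check(a, p, ctx)   # prove each of its antecedents in turn
                    for a, p in zip(c_ants, subproofs)))

# A(AB)B is a theorem: choose A <- B (index 0), then derive B (index 1).
theorem = ('A', [('A', [('B', [])]), ('B', [])])
print(check(theorem, (0, [(1, [])])))   # True
```

Note how the checker only ever recurses into subtrees of the original statement, which is what makes assigning identifiers up front so convenient.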

Unlike many intuitionistic logics, this theory does not have negation, a falsum, or any other equivalent notion.
At first glance, this may appear to reduce the generality of the theory.
However, by means of a computational reduction, all constructs in propositional logic can be encoded.
To add falsum into the language, one may add double negation elimination for each atomic proposition (this incurs only a constant factor length increase).
In the following examples, F represents the falsum but is **not** distinct from any other proposition.
We demonstrate the reduction on Peirce's Law.

A(A(BA)) (A(F(FA))) (B(F(FB)))

So, while Peirce's law is a nontheorem, it may be proven by making explicit the assumption of double negation elimination (and even precisely on which variables). Removing double negation elimination from our theory is in some sense more general, as it allows for the simple encoding of alternative logics. One might, for instance, be inclined to ask, "What if there are multiple falsums, each with double negation elimination on distinct sets of variables? What if we instead used triple negation elimination?" As an example of a statement that would be more difficult to prove in classical logic, consider the proof that double negation elimination implies the principle of explosion.
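The encoding step itself is mechanical. A sketch, with the helper name `add_dne` being my own:

```python
def add_dne(statement, props, falsum="F"):
    """Append a double-negation-elimination antecedent for each proposition in
    `props`, with `falsum` standing in for F (an ordinary proposition here)."""
    return statement + "".join(f" ({p}({falsum}({falsum}{p})))" for p in props)

print(add_dne("A(A(BA))", "AB"))   # A(A(BA)) (A(F(FA))) (B(F(FB)))
```

Each proposition contributes one fixed-length clause, which is the constant factor length increase mentioned above.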

Another great advantage of removing falsum from the language is in the simplicity of the inference rules, proof procedure and semantics. By eliminating unnecessary language constructs, statements of the theory are written in terms of semantic atoms. The number of redundant statements and branches in the proof tree is also greatly reduced. To illustrate, consider that double negation elimination and the law of excluded middle are not only equivalent in this theory but are represented with the same string (under the assumption that A(FB) represents A∨B).
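To spell that out: writing ¬X as FX (that is, F ← X) and reading A(FB) as A ∨ B, the law of excluded middle for A becomes

A ∨ ¬A ≡ A(F(FA)) ≡ A ← (F ← (F ← A))

which is character-for-character the double negation elimination clause for A used in the reduction above.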

The idea that falsum should not belong in a canonical language is a reflection of my larger belief that truth is not Boolean-valued. While concepts like Boolean algebras and duality are useful, meaningful and even beautiful, they impose unnecessary structure when included as primitive objects. A canonical theory should not directly exhibit symmetries nor immediately beget arbitrary constants (like 2, in the case of Boolean-valued logic).

A model theorist may retort that truth-valuation is derived from a model and thus is inherently Boolean-valued. Each statement must be either verifiably true or false in the model. This neglects the fact that models are themselves axiomatized and do not exist outside of a theory, which is assumed to be classical. Thus, this defense of classical logic is circular. In sufficiently complex theories, there is no distinction between what is provable and what is verifiably true in the model. Conflating provability and truth, we arrive at a striking reinterpretation of Gödel's first incompleteness theorem: the law of excluded middle does not hold in general. The counterexample is, informally, "this statement is false," which of course has no Boolean truth-valuation.

The next step of this research is to develop a canonical recognizable-complete language. That is, to extend this theory to be as expressive as first-order logic (and indeed, higher-order logics). Martin Löb's 1976 paper suggests that this may be done via second-order quantification over atomic propositions (see also the second-order propositional calculus from type theory).

I am investigating alternative constructions that add neither quantifiers nor dependent types. One such idea is derived from provability logic: we add a predicate Prov to the language, where Prov(ϕ) holds if and only if ϕ is provable. As elucidated by Gödel's first incompleteness theorem (and the notion of the universal Turing machine), self-reference is a defining characteristic of recognizable-complete languages. For this reason, it seems sensible to add it as a primitive construct when building a recognizable-complete language. This could also allow the theory to prove metatheorems without the need for a metalogic. I believe that metatheorems are a key part of our ability to efficiently search the space of possible proofs of a theorem, by drawing connections between disparate concepts and eliminating from consideration branches that cannot lead to the desired proof. Once again, this serves the ultimate goal of building a theory that is conducive to automated theorem proving procedures and represents the natural deduction capabilities of the human mind.

Another extension I have considered is to allow statements to be referenced in their own antecedents. This cyclic dependency does not appear to result in a recognizable-complete language for any natural extension of the inference rule.

A canonical recognizable-complete language is, by nature, also a canonical programming language. This allows us to produce a natural ordering on programs by complexity, which can in turn be used to "more precisely" compute the Kolmogorov complexity of a given string. I see this as having applications in automated theorem proving, as humans tend to try simple approaches before attempting more complex ones. Having a precise notion of what makes a proof approach complex is core to building automated theorem provers that behave more like humans.