So for today I thought I’d write a short post reviewing the paper The Seven Virtues of Simple Type Theory (Farmer 2008). I’d read it before, perhaps a year ago, but decided to reread it this week. On a fundamental level, this paper is an advertisement for Higher-Order Logic/Simple Type Theory as a logical framework worth using. Now, to be clear, it’s not proposing that HOL is the one true logic or anything that grandiose; rather, it makes a case for HOL being taught as part of standard curricula by expounding its merits.

So what **are** those merits? Perhaps my only complaint about the writing of this paper is that the “seven virtues” aren’t laid out clearly at the beginning but are interleaved throughout. For ease of reference, then, they are:

- simple and uniform syntax
  - propositions are just expressions of type `*`
  - there are very few syntactic constructs needed
    - abstraction
    - application
    - equality
    - definite choice (i.e. an operator that chooses the unique element satisfying a proposition, if it exists)

- it has a fairly simple semantics, because the syntax is very sparse
  - the type of propositions is the two-point set of true and false
  - the base types are modeled by sets
  - function types are just the total functions between sets
  - the lambda terms are interpreted in the standard way
  - equality is interpreted as equality
  - definite choice is the only slightly odd one: it is interpreted as an error value if the proposition doesn’t evaluate to true for exactly one value

- HOL is extremely expressive
  - quantifiers and the like can all be built up from those base elements of syntax
  - anything that can be expressed in an n-th order logic can be embedded into HOL

- HOL is categorical (i.e. it uniquely “pins down” a single model) for theories such as Peano Arithmetic
- it has a simple but very powerful proof system
  - six axioms
  - one inference rule, which is effectively simplification via equality

- Henkin semantics!
  - a different notion of models, where the “function spaces” aren’t **all** the total functions
  - Henkin semantics provides many more models than the standard semantics
    - so many that we get completeness back for our proof system!
    - so many that, in this perspective, HOL becomes fancy syntax for first-order logic
- it has a number of neat extensions for data types, partial functions, subtypes, etc.

and that’s more or less a summary of the paper.
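The sparse syntax and semantics in the list above are easy to make concrete. Here is a minimal sketch, with all names and encodings my own rather than the paper’s: terms built from the few constructs, evaluated over finite domains, with function types tabulated as total functions and a sentinel error value standing in for a failed definite choice.

```python
# A toy evaluator for simple type theory over finite domains.
# Everything here is illustrative: terms are tagged tuples, environments
# are dicts, and ERR is the error value returned by a definite choice
# whose proposition doesn't hold for exactly one element.

ERR = "<error>"

def Var(x):              return ("var", x)
def Lam(x, dom, body):   return ("lam", x, dom, body)   # dom: finite iterable
def App(f, a):           return ("app", f, a)
def Eq(a, b):            return ("eq", a, b)
def Iota(x, dom, body):  return ("iota", x, dom, body)  # definite choice

def ev(t, env):
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "lam":
        _, x, dom, body = t
        # Function types are *total* functions: tabulate over the whole domain.
        return {v: ev(body, {**env, x: v}) for v in dom}
    if tag == "app":
        return ev(t[1], env)[ev(t[2], env)]
    if tag == "eq":
        # Equality is interpreted as equality.
        return ev(t[1], env) == ev(t[2], env)
    if tag == "iota":
        _, x, dom, body = t
        witnesses = [v for v in dom if ev(body, {**env, x: v}) is True]
        return witnesses[0] if len(witnesses) == 1 else ERR
```

With `BOOL = (True, False)` and a truth constant `T = Eq(Lam("p", BOOL, Var("p")), Lam("p", BOOL, Var("p")))`, the description `Iota("b", BOOL, Eq(Var("b"), T))` picks out `True`, while a description satisfied by both booleans evaluates to the error value, matching the treatment of definite choice sketched above.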

I really enjoyed reading this paper: it’s very well written, easy to read, and summarizes quite a few things that I hadn’t seen presented quite as cleanly anywhere else. For example, its reduction of all the usual quantifiers and logical connectives to just the lambda calculus plus the equality operator and definite choice was the clearest presentation of that construction I’ve seen. I thought the presentation in Lambek and Scott’s book (Lambek and Scott 1988) was good, but this paper makes it even more obvious with the simple progression at the beginning of section 4.
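For the record, the reduction is along these lines (my notation, following roughly the definitions used in the HOL family of provers rather than the paper’s exact presentation):

```latex
% Truth is any trivially valid equation:
\top \;:=\; (\lambda x.\, x) = (\lambda x.\, x)
% Universal quantification says the predicate is the constantly-true function:
\forall x.\, P\,x \;:=\; \big((\lambda x.\, P\,x) = (\lambda x.\, \top)\big)
% Conjunction, implication, falsity, negation, and existence then follow:
p \land q \;:=\; \big((\lambda f.\, f\,p\,q) = (\lambda f.\, f\,\top\,\top)\big)
p \Rightarrow q \;:=\; \big((p \land q) = p\big)
\bot \;:=\; \forall p.\, p
\neg p \;:=\; (p \Rightarrow \bot)
\exists x.\, P\,x \;:=\; \forall q.\, \big((\forall x.\, (P\,x \Rightarrow q)) \Rightarrow q\big)
```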

This paper was also the first time I’d heard of Henkin semantics, and while strictly speaking I “get it”, on a philosophic level the idea fundamentally disturbed me. I think I still have a nasty streak of Platonism that I haven’t managed to expel, because the idea that there could be two very different classes of models that I’m “proving theorems about” felt very strange. It was a reminder that notions such as “completeness” are, in a sense, only loosely connected to the formal system itself. The logic plus its proof theory is capable of proving theorems. These theorems are, via our intended models, connected to “real” mathematics, whatever that’s supposed to mean. On the other hand, there are biases built into those intentions, as Girard discusses in papers such as Normativity in Logic (Girard 2012). The more I think about this, and about results such as the Löwenheim–Skolem theorem in first-order logic, the more nervous I get about what the connection between our formal systems and mathematics even is.

Yes, I know, there are many practical matters where such concerns are unwarranted, but this is the part of me that I think still acts like a theologian even after all these years: constantly worried about what the ultimate truth of things is and doubting my certainty in anything.

# References

Farmer, William M. 2008. “The Seven Virtues of Simple Type Theory.” *Journal of Applied Logic* 6 (3): 267–86.

Girard, Jean-Yves. 2012. *Normativity in Logic*. Springer.

Lambek, Joachim, and Philip J. Scott. 1988. *Introduction to Higher-Order Categorical Logic*. Vol. 7. Cambridge University Press.

---

Nice review!

> This paper was also the first time I’d heard of Henkin semantics, and while strictly speaking I “get it”, on a philosophic level the idea fundamentally disturbed me.

My experience was the exact opposite. Non-Henkin semantics is essentially a hack: an artificial restriction on what sets are allowed to be models.

Analogously to Henkin semantics, one could call usual first-order model theory something like “Hilbert-semantics”. As you know, variables can range over an arbitrary set in Hilbert-semantics: there is a deductive system, and both Completeness and Löwenheim–Skolem hold.

Now, you could invent a new “first-order standard semantics”, which is the same as Hilbert-semantics, except that the variables are now required to range over the set of *all* natural numbers. Under this new semantics, complete deductive systems for first-order logic don’t exist, and the axioms can *magically* characterize the natural numbers up to isomorphism.

The first-order standard semantics would be regarded as the joke it is, while the second-order standard semantics is accepted as… well, standard!