Heyting Algebras

Izak, May 2023

Heyting Algebra

Now, Heyting algebras often carry a label that is unhelpful for those who are not yet students of topology: "Every topology provides a complete Heyting algebra in the form of its open set lattice."

NOTE: An "open" set is one in which each point is surrounded by other members of the set. To make this precise, one begins with "basic open sets" and builds the other open sets from them. In the presence of a metric, the basic open sets can be taken to be \(ϵ\)-neighborhoods (the points less than \(ϵ\) away from some given point, for \(ϵ > 0\)). A set \(S\) is open if, for every point \(P ∈ S\), there exists a basic open set \(U\) containing \(P\) such that \(U ⊆ S\). A set is closed if its complement is open (a set may be open, closed, both, or neither).

A subset lattice (also called a powerset lattice) is the prototypical example. Join corresponds to disjunction and union, meet to conjunction and intersection, and the order relation to inclusion. In a totally ordered Heyting algebra, join and meet become maximum and minimum, respectively.

Source: Wikipedia Power set

For a more concrete example of a Heyting algebra, look at the next Figure, where it is clear that the conjunction \(A \land B\) is \(\min(A, B)\) and the disjunction \(A \lor B\) is \(\max(A, B)\); less clear is what Wikipedia kindly spells out: "Every totally ordered set that has a least element \(0\) and a greatest element \(1\) is a Heyting algebra (if viewed as a lattice). In this case \(A \rightarrow B\) equals \(1\) (or \(\top\), for those generalized fans out there) when \(A \leq B\), and \(B\) otherwise."

Fig: Source: Wikipedia Heyting algebra
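As a sanity check on that quote, here is a minimal Python sketch (a toy model of my own, not from the source) of the totally ordered Heyting algebra on the chain \(\{0, \frac{1}{2}, 1\}\):

```python
from fractions import Fraction

# Totally ordered Heyting algebra on a chain with least element 0 and
# greatest element 1: meet is min, join is max, and the implication
# a -> b is 1 when a <= b, and b otherwise (per the quoted definition).

def meet(a, b):
    return min(a, b)

def join(a, b):
    return max(a, b)

def imp(a, b):
    return Fraction(1) if a <= b else b

# The defining (residuation) property of Heyting implication:
# c <= (a -> b) if and only if meet(c, a) <= b, checked exhaustively.
chain = [Fraction(0), Fraction(1, 2), Fraction(1)]
for a in chain:
    for b in chain:
        for c in chain:
            assert (c <= imp(a, b)) == (meet(c, a) <= b)
```

The exhaustive loop confirms that this implication really is the relative pseudo-complement for the chain order.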

With all that aside, it becomes somewhat clearer how anybody would evaluate truth via a Heyting algebra. But what of intuitionistic logic? Well, Wikipedia touches on that too!

Picture this: We've got the formula \(P \land Q \rightarrow P\), and by the definition of the relative pseudo-complement (fancy term alert!), its value is the largest element \(x\) such that \(P \land Q \land x \leq P\). And guess what? We hit the jackpot! The inequality holds for every \(x\), since \(P \land Q \leq P\) always, and you know what that means? The largest such \(x\) is \(1\), baby!

But wait, there's more! The rule of modus ponens swoops in like a superhero, allowing us to derive the formula \(Q\) from the formulas \(P\) and \(P \rightarrow Q\). Now, here's where things get interesting. In any Heyting algebra, if \(P\) has that value of \(1\), and \(P \rightarrow Q\) is strutting with a value of \(1\) too, it can only mean one thing: since \(P \land (P \rightarrow Q) \leq Q\), we have \(1 \land 1 \leq Q\), and it's crystal clear: \(Q\) has got to be \(1\) too!

So here's the deal: If a formula can be deduced from the laws of intuitionistic logic using our trusty modus ponens rule, it's a winner in all Heyting algebras. No matter how you assign those variable values, it'll always score a perfect \(1\). Talk about consistency!

We can whip up a Heyting algebra where Peirce's law doesn't always hit that \(1\) mark. Let's take a peek at a 3-element algebra \(\{0, \frac{1}{2}, 1\}\). If we play around and give \(P\) a value of \(\frac{1}{2}\) and \(Q\) a sad \(0\), the value of Peirce's law \(((P \rightarrow Q) \rightarrow P) \rightarrow P\) turns out to be \(\frac{1}{2}\). Peirce's law (a form of excluded middle stated via conditionals) just can't quite make the intuitionistic cut.
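That evaluation can be checked mechanically; here is a small Python sketch of my own, assuming only the chain implication quoted earlier (\(a \rightarrow b\) is \(1\) if \(a \leq b\), else \(b\)):

```python
from fractions import Fraction

# Evaluate Peirce's law ((P -> Q) -> P) -> P in the three-element
# Heyting algebra {0, 1/2, 1}, where a -> b is 1 if a <= b, else b.

def imp(a, b):
    return Fraction(1) if a <= b else b

def peirce(p, q):
    return imp(imp(imp(p, q), p), p)

# With P = 1/2 and Q = 0 the law evaluates to 1/2, not 1:
print(peirce(Fraction(1, 2), Fraction(0)))  # 1/2
```

On the two-element subalgebra \(\{0, 1\}\), by contrast, the same function always returns \(1\), matching the classical validity of Peirce's law.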

But fear not. There's a flip side to this story. If a formula always takes the value \(1\), it can be deduced from the laws of intuitionistic logic: the intuitionistically valid formulas are exactly those that always have the value \(1\). It's just like those classically valid formulas that score a perfect \(1\) in the two-element Boolean algebra, no matter what trues and falses you throw their way.

So, my friends, a Heyting algebra is like a wild and wonderful playground for logic. It's a grand generalization of the usual truth values system, and the biggest element in the algebra is like the ultimate "true." And guess what? Our usual two-valued logic system is just a special case of a Heyting algebra—the baby version, where we only have \(1\) (true) and \(0\) (false) hanging out.

Let’s take a turn from the big boys and churn out another spindle on the so-called Computational Trilogy.

Topologize This

The question now: how do elementary topoi relate to Heyting algebras? Here is my transposition of knowledge dependencies, mostly from Wikipedia. The following sparse, definitional style is one of many manners of illustrating how these concepts hang together. As a preliminary, some chapter-zero jargon: monics (left-cancellable morphisms; in concrete categories, injective homomorphisms, injective meaning distinct inputs give distinct outputs, \(f(x) = f(y) \Rightarrow x = y\), and homomorphisms satisfying \(f(ab) = f(a)f(b)\)) and isomorphisms. That being said, there are several layers of definitions to peel away, at least: global element, subobject classifier, and elementary topos.

With that out of the way: recall, a global element of an object \(A\) in a category is a morphism \(h:1 \rightarrow A\), where \(1\) is a terminal object of the category. (Recall: a terminal object is an object of the category that receives exactly one morphism from each object of the category.) Also recall, the subobject classifier relies on at least two other definitions: subobject and pullback. A subobject is analogous to a subset, but formally: given monics into some object \(A\), say \(u:S \rightarrow A\) and \(v:T \rightarrow A\), declare them equivalent (\(uRv\)) when there is an isomorphism \(f:S \rightarrow T\) with \(u = v \circ f\); the subobjects of \(A\) are the equivalence classes of such monics (sometimes restricted to those monics that can be expressed as the equalizer of two morphisms).

An equalizer is analogous to the set-theoretic notion: let \(X\) and \(Y\) be sets, and let \(f\) and \(g\) be functions, both from \(X\) to \(Y\). Then the equalizer of \(f\) and \(g\) is the set of elements \(x\) of \(X\) such that \(f(x)\) equals \(g(x)\) in \(Y\), denoted \(Eq(f,g) := \{x \in X \mid f(x) = g(x)\}\). This generalizes to any set of functions \(f,g \in F\), but degenerate cases appear (e.g., singletons and empty sets).

Categorically, the equalizer of morphisms \(f,g:X \rightarrow Y\) is the limit of the corresponding diagram. (Note: a diagram is analogous to an indexed family of sets, except morphisms are indexed too; i.e., a diagram of shape \(J\) in a category \(C\) is a functor \(F:J \rightarrow C\).) The limit is a universal cone. (Note: a cone to \(F\) is an object \(N\) of \(C\) together with a family \(p(X):N \rightarrow F(X)\) of morphisms indexed by the objects \(X\) of \(J\) such that every morphism \(f:X \rightarrow Y\) in \(J\) satisfies \(F(f) \circ p(X) = p(Y)\).) The limit is the cone, denoted \((L,t)\), to \(F:J \rightarrow C\) such that for every other cone \((N,p)\) to \(F:J \rightarrow C\) there exists a unique morphism \(u:N \rightarrow L\) such that \(t(X) \circ u = p(X)\) for all \(X\) in \(J\).

Source: Wikipedia Limit (category theory)
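The set-theoretic equalizer admits a one-line illustration; the functions below are hypothetical examples of my own:

```python
# Set-theoretic equalizer Eq(f, g) = {x in X | f(x) = g(x)} for two
# concrete functions on a finite piece of the integers.

def equalizer(X, f, g):
    return {x for x in X if f(x) == g(x)}

# x^2 agrees with x + 2 exactly at x = -1 and x = 2:
E = equalizer(range(-3, 4), lambda x: x * x, lambda x: x + 2)
print(sorted(E))  # [-1, 2]
```

Taking \(f = g\) recovers all of \(X\), and taking two nowhere-equal functions gives the empty set, the degenerate cases mentioned above.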

Now, just as "the equalizer is the limit of the diagram," and the "subobjects of some object are the equivalence classes of those monics that can be expressed as the equalizer of two morphisms": a subobject classifier for a category \(C\) with terminal object \(1\) is an object \(\Omega\) together with a monic \(true: 1 \rightarrow \Omega\) such that for each monic \(j:U \rightarrow X\) there is a unique morphism \(h(j):X \rightarrow \Omega\) making \(U\) the limit (the pullback) of the commutative diagram formed by \(h(j)\) and \(true\).

Source: Wikipedia Subobject classifier

At risk of redundancy, a pullback is the limit of a diagram with two morphisms \(f:X \rightarrow Z\) and \(g:Y \rightarrow Z\): an object \(P\) with two morphisms \(p_1:P \rightarrow X\) and \(p_2:P \rightarrow Y\) such that \(f \circ p_1 = g \circ p_2\), all denoted \((P, p_1, p_2)\), satisfying a universal property: for any other triple \((Q, q_1, q_2)\) with \(q_1:Q \rightarrow X\), \(q_2:Q \rightarrow Y\), and \(f \circ q_1 = g \circ q_2\), there exists a unique \(u:Q \rightarrow P\) such that \(p_1 \circ u = q_1\) and \(p_2 \circ u = q_2\). This makes the pullback unique up to isomorphism.

Source: Wikipedia Pullback (category theory)
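In \(Set\) the pullback is just the set of pairs on which the two maps agree; a toy Python illustration (the sets and maps are invented for the example):

```python
# Set-theoretic pullback of f: X -> Z and g: Y -> Z, i.e.,
# P = {(x, y) | f(x) = g(y)}, with p1, p2 the coordinate projections.

def pullback(X, Y, f, g):
    return {(x, y) for x in X for y in Y if f(x) == g(y)}

X = {1, 2, 3, 4}
Y = {"a", "bb", "ccc"}
f = lambda x: x % 3          # X -> Z, where Z = {0, 1, 2}
g = lambda s: len(s) % 3     # Y -> Z
P = pullback(X, Y, f, g)

# f . p1 == g . p2 holds on every element of P by construction:
assert all(f(x) == g(y) for (x, y) in P)
```

Any other candidate \((Q, q_1, q_2)\) with \(f \circ q_1 = g \circ q_2\) factors through \(P\) by \(u(q) = (q_1(q), q_2(q))\), which is where the universal property comes from.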

Finally, elementary topoi have many definitions. (To avoid begging the question, this definition allows one to derive the existence of the subobject classifier, rather than positing the subobject classifier to derive other properties.) A topos is a category with two properties: (1) every limit over a finite index category exists, and (2) every object has a power object. A power object of an object \(X\) is a pair \((P_X, \ni_X)\) with \(\ni_X \subseteq P_X \times X\). Now, for every object \(I\), a morphism \(r:I \rightarrow P_X\) induces a subobject \(\{(i, x) \mid x \in r(i)\} \subseteq I \times X\), defined as the pullback of \(\ni_X\) along \(r \times X : I \times X \rightarrow P_X \times X\).

So, an isomorphic correspondence between relations \(R \subseteq I \times X\) and morphisms \(r: I \rightarrow P_X\) is the universal property of a power object. From finite limits and power objects one can derive that:

  1. All colimits taken over finite index categories exist.
  2. The category has a subobject classifier.
  3. The category is Cartesian closed.

A category \(C\) is Cartesian closed if it:

  1. Has a terminal object.
  2. Any two objects \(X\) and \(Y\) in \(C\) have a product \(X \times Y\) in \(C\).
  3. Any two objects \(Z\) and \(Y\) in \(C\) have an exponential denoted \(Z^Y\) in \(C\).

Recall: an exponential is an object \(Z^Y\) equipped with an evaluation morphism \(ev: Z^Y \times Y \rightarrow Z\) such that for any object \(X\) and morphism \(g: X \times Y \rightarrow Z\) there is a unique morphism \(\lambda g: X \rightarrow Z^Y\) with \(ev \circ (\lambda g \times 1_Y) = g\). The assignment \(g \mapsto \lambda g\) is thus a bijection, often denoted \(\text{Hom}(X \times Y, Z) \cong \text{Hom}(X, Z^Y)\).

Source: Wikipedia Exponential object
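In \(Set\) (and in functional programming) this bijection is currying and uncurrying; a minimal sketch with a made-up example map:

```python
# In Set, the exponential Z^Y is the set of functions Y -> Z, and the
# bijection Hom(X x Y, Z) ~= Hom(X, Z^Y) is currying/uncurrying.

def curry(g):
    # g : X x Y -> Z   becomes   lambda_g : X -> (Y -> Z)
    return lambda x: (lambda y: g(x, y))

def uncurry(h):
    # the inverse direction, recovering a map X x Y -> Z
    return lambda x, y: h(x)(y)

g = lambda x, y: x + 2 * y
assert curry(g)(3)(4) == g(3, 4) == 11
assert uncurry(curry(g))(3, 4) == g(3, 4)
```

The two asserts check that currying commutes with evaluation and that `uncurry` inverts `curry` on this example, which is exactly the content of the Hom-set isomorphism.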

With all of this, we can see that the global elements of the subobject classifier denoted \(\Omega\) of an elementary topos form a Heyting algebra (of truth values of the intuitionistic higher-order logic induced by the topos). More generally, the set of subobjects of any object \(X\) in a topos forms a Heyting algebra.

The question: how to relate "(1) every limit over a finite index category exists, (2) every object has a power object" from my definition of elementary topos to the lattices and logical connectives described for Heyting algebras. Well, according to my GPT query (for Lean code, for explications of tactics, and for a natural-language explanation), it comes down to a list of verifiable algebraic properties. Here's the list:

  1. Limits of finite diagrams exist in the category \(C\).
  2. Pullbacks exist in the category \(C\).
  3. Products exist in the category \(C\).
  4. Equalizers exist in the category \(C\).
  5. The infimum of subobjects \(P\) and \(Q\) is less than or equal to \(P\).
  6. If a subobject \(R\) is less than or equal to both \(P\) and \(Q\), it is also less than or equal to their infimum.
  7. There is an equivalence between \(a \leq (c ⇨ b)\) and \(a ⊓ c \leq b\) for any subobjects \(a\), \(b\), \(c\).
  8. The infimum of subobjects \(P\) and \(Q\) is less than or equal to \(Q\).
  9. Subobject \(P\) is less than or equal to the supremum of \(P\) and \(Q\).
  10. Subobject \(Q\) is less than or equal to the supremum of \(P\) and \(Q\).
  11. If a subobject \(R\) is greater than or equal to both \(P\) and \(Q\), it is also greater than or equal to their supremum.
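The lattice-flavored items on this list can be verified exhaustively in a concrete model; here is a Python sketch of my own using the powerset of a three-element set (a Boolean, hence Heyting, algebra, so only an illustrative special case of a subobject lattice):

```python
from itertools import chain, combinations

# Check the listed properties (5-11) in a concrete Heyting algebra:
# the powerset of {0, 1, 2}, where meet is intersection, join is union,
# the order is inclusion, and implication a -> b is (complement of a) | b.

U = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

def imp(a, b):
    return (U - a) | b

for p in subsets:
    for q in subsets:
        assert p & q <= p and p & q <= q            # properties 5 and 8
        assert p <= p | q and q <= p | q            # properties 9 and 10
        for r in subsets:
            if r <= p and r <= q:
                assert r <= p & q                   # property 6
            if p <= r and q <= r:
                assert p | q <= r                   # property 11
            assert (r <= imp(p, q)) == ((r & p) <= q)   # property 7
```

Property 7 is the residuation law again: implication is right adjoint to meet, which is the algebraic heart of a Heyting algebra.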

The Booleanization Adjunction

(See nLab "Heyting algebra" for source)

There are several ways of passing back and forth between Boolean algebras and Heyting algebras, having to do with the double negation operator. Recall, double negation \(¬¬: L → L\) is a monad. Further, it preserves finite meets.
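Both claims (the monad laws and meet preservation) can be checked on a small open-set Heyting algebra; the five-open topology on \(\{1, 2, 3\}\) below is an example of my own, not from the source:

```python
# Double negation in the Heyting algebra of open sets of a small
# example topology on {1, 2, 3}. Here not-U is the pseudo-complement:
# the largest open set disjoint from U. We check x <= ¬¬x, that ¬¬ is
# idempotent (so it is a monad on the poset), and that ¬¬ preserves
# binary meets.

opens = [frozenset(s) for s in [(), (1,), (2,), (1, 2), (1, 2, 3)]]

def neg(u):
    # largest open set disjoint from u (unique in this example)
    return max((v for v in opens if not (u & v)), key=len)

def nn(u):
    return neg(neg(u))

for a in opens:
    assert a <= nn(a)                      # unit: x <= ¬¬x
    assert nn(nn(a)) == nn(a)              # idempotence
    for b in opens:
        assert nn(a & b) == nn(a) & nn(b)  # ¬¬ preserves binary meets
```

Note that \(¬¬\) does not preserve joins in general, which is exactly why its image is Boolean while \(L\) itself need not be.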

Now let \(L(\neg\neg)\) denote the poset of regular elements of \(L\), that is, those elements \(x\) such that \(\neg\neg x = x\). (When \(L\) is the topology of a space, an open set \(U\) is regular if and only if it is the interior of its closure, that is, if it is a regular element of the Heyting algebra of open sets described above.)

Theorem: The poset \(L(\neg\neg)\) is a Boolean algebra. Moreover, the assignment \(L \mapsto L(\neg\neg)\) is the object part of a functor \(F: \text{Heyt} \to \text{Bool}\), called Booleanization, which is left adjoint to the full and faithful inclusion \(i: \text{Bool} \rightarrow \text{Heyt}\).

The unit of the adjunction, applied to a Heyting algebra \(L\), is the map \(L \to L(\neg\neg)\) which maps each element \(x\) to its regularization \(\neg\neg x\).
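The regularization can be computed directly in the three-element chain from earlier; a small sketch of my own:

```python
from fractions import Fraction

# Booleanization of the three-element chain L = {0, 1/2, 1}:
# ¬x = (x -> 0), the regular elements satisfy ¬¬x = x, and the unit
# of the adjunction sends x to its regularization ¬¬x.

L = [Fraction(0), Fraction(1, 2), Fraction(1)]

def imp(a, b):
    return Fraction(1) if a <= b else b

def neg(x):
    return imp(x, Fraction(0))

def nn(x):
    return neg(neg(x))

regular = [x for x in L if nn(x) == x]
print(regular)              # only 0 and 1 are regular
print([nn(x) for x in L])   # 1/2 regularizes to 1
```

So \(L(\neg\neg)\) here is the two-element Boolean algebra, and the non-Boolean element \(\frac{1}{2}\) is collapsed onto \(1\) by the unit, as the theorem predicts.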

Proof. To avoid confusion from two different maps called \(\neg\neg\) that differ only in their codomain, write \(M: L \to L\) for the monad and \(U: L \to L(\neg\neg)\) for the left adjoint. Write \(\iota: L(\neg\neg) \to L\) for the right adjoint, so that \(M = \iota \circ U\).

We first show that \(L(\neg\neg)\) is a Heyting algebra and \(U\) is a Heyting-algebra map. Because \(U\) is surjective (and monotone), it suffices to show that it preserves the Heyting-algebra operators: finite meets, finite joins, and implication.

Because \(\iota\) is full, it reflects meets. Therefore, because \(\neg\neg: L \to L\) preserves finite meets, \(M = \iota \circ U\) preserves finite meets, so too does \(U\), and as a left adjoint, it preserves joins.

Finally, for \(a, b \in L\), we show that \(\neg\neg(a \to b)\), which (because \(\neg\neg: L \to L\) preserves finite meets) is the \(L\)-implication \(\neg\neg a \to \neg\neg b\), satisfies the universal property of implication also in \(L(\neg\neg)\). For any \(x \in L(\neg\neg)\), \(\iota(x) \leq \neg\neg a \to_{L} \neg\neg b\) just if \(\iota(x) \land_{L} \neg\neg a \leq \neg\neg b\), by the universal property in \(L\); but because \(\iota\) reflects meets, this is equivalent to \(x \land_{L(\neg\neg)} (\neg\neg a) \leq \neg\neg b\), completing the universal property.

Now because \(U\) preserves implication and (the empty join) \(0\), it preserves negation. Therefore, \(\neg\neg\) in \(L(\neg\neg)\) is the identity, so the latter is a Boolean algebra.

Therefore \(U = \neg\neg: L \to L(\neg\neg)\) is a Heyting algebra quotient, namely the coequalizer of the pair \(\mathrm{id}, \neg\neg: L \to L\). It follows that a Heyting algebra map \(L \to B\) to any Boolean algebra \(B\), i.e., any Heyting algebra where \(\mathrm{id}\) and \(\neg\neg\) coincide, factors uniquely through this coequalizer, and the induced map \(L(\neg\neg) \to B\) is a Boolean algebra map. In other words, \(\neg\neg: L \to L(\neg\neg)\) is the universal Heyting algebra map to a Boolean algebra, which establishes the adjunction.


Quantifiers as Adjoints

(See "Quantifiers as Adjoints...", Math Stack Exchange, for source)

What is a predicate \(P(x)\), with the variable \(x\) coming from the sort \(X\)? Viewing \(X\) as a set, we can interpret \(P(x)\) as a subset of \(X\). The definable subsets of \(X\) naturally form a partially ordered category, where an arrow \(P(x) \rightarrow Q(x)\) exists if and only if \(P \subseteq Q\). This category structure can also be seen as the partial order of implications between predicates. Let's denote this category as Pred(\(X\)).

Now, there is a functor \(Dy: \text{Pred}(X) \rightarrow \text{Pred}(X \times Y)\) that corresponds to adding a dummy variable \(y\) of type \(Y\). So \(Dy(P(x)) = \{(x, y) \in X \times Y \mid x \in P\}\). We can write this as \(P(x, \dot{y})\), where the dot denotes that \(y\) is a dummy variable. This is indeed a functor, since if \(P(x) \rightarrow Q(x)\), then \(P(x, \dot{y}) \rightarrow Q(x, \dot{y})\) (again, you can interpret this as containment or implication).

There are also functors \(\exists y: \text{Pred}(X \times Y) \rightarrow \text{Pred}(X)\) and \(\forall y: \text{Pred}(X \times Y) \rightarrow \text{Pred}(X)\) that behave as follows:

- \(\exists y\,P(x, y) = \{x \in X \mid \exists y \in Y,\ (x, y) \in P\}\).

- \(\forall y\,P(x, y) = \{x \in X \mid \forall y \in Y,\ (x, y) \in P\}\).

Now, we have the adjunctions \(\exists y \dashv Dy \dashv \forall y\). These adjunctions correspond to the following statements:

- \(\exists y\,P(x, y) \rightarrow Q(x) \iff P(x, y) \rightarrow Q(x, \dot{y})\).

- \(P(x) \rightarrow \forall y\,Q(x, y) \iff P(x, \dot{y}) \rightarrow Q(x, y)\).

Moving on to viewing predicates as maps \(X \rightarrow \{0,1\}\), we can define a corresponding partial order structure on maps \(X \rightarrow \{0,1\}\). Specifically, \(P(x) \rightarrow Q(x)\) if and only if for all \(x \in X\), \(P(x) \leq Q(x)\). In other words, the partial order is defined by pointwise comparison of functions.

The functor \(Dy\) now takes a map \(X \rightarrow \{0,1\}\) and composes it with the projection \(X \times Y \rightarrow X\), and its adjoints are given by:

- \((\exists y\,P(x, y))(x) = 1\) if \(P(x, y) = 1\) for some \(y \in Y\), and \(0\) otherwise; that is, it equals \(\sup_{y \in Y} P(x, y)\).

- \((\forall y\,P(x, y))(x) = 1\) if \(P(x, y) = 1\) for all \(y \in Y\), and \(0\) otherwise; that is, it equals \(\inf_{y \in Y} P(x, y)\).
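For small finite \(X\) and \(Y\), the adjunctions \(\exists y \dashv Dy \dashv \forall y\) can be verified exhaustively; a toy Python model of my own:

```python
from itertools import product

# Exhaustive check of the adjunctions  Ey ⊣ Dy ⊣ Ay  for {0,1}-valued
# predicates on small finite sets: Dy precomposes with the projection,
# Ey takes a max (sup) over Y, and Ay takes a min (inf) over Y.

X = [0, 1]
Y = ["a", "b"]
XY = [(x, y) for x in X for y in Y]

def preds(S):
    # all {0,1}-valued maps on S, represented as dicts
    return [dict(zip(S, bits)) for bits in product([0, 1], repeat=len(S))]

Dy = lambda Q: {(x, y): Q[x] for x in X for y in Y}          # dummy variable
Ey = lambda P: {x: max(P[(x, y)] for y in Y) for x in X}     # exists y
Ay = lambda P: {x: min(P[(x, y)] for y in Y) for x in X}     # forall y

def leq(P, Q):  # pointwise order on predicates
    return all(P[k] <= Q[k] for k in P)

for P in preds(XY):
    for Q in preds(X):
        assert leq(Ey(P), Q) == leq(P, Dy(Q))   # Ey is left adjoint to Dy
        assert leq(Dy(Q), P) == leq(Q, Ay(P))   # Dy is left adjoint to Ay
```

The same loop works verbatim with values in \([0,1]\) (using `max`/`min` over finite sets as stand-ins for sup and inf), matching the continuous-logic generalization described next.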

By replacing \(\{0,1\}\) with \([0,1]\), we can generalize this concept further. We take the same category structure on predicates (pointwise \(\leq\) in \([0,1]\)) and obtain the quantifiers in continuous model theory as adjoints to the "dummy variable functor," replacing max and min with sup and inf.

If the space of truth values is a compact Hausdorff space that is not ordered, it is not clear what the appropriate category structure should be, since the one where the quantifiers naturally appear stems directly from the order. However, it is easier to generalize this situation by replacing \(\{0,1\}\) and \([0,1]\) with any complete lattice, which is what happens in topos theory.


Topoi of Heyting Algebra Models

(See "Is there a topos of Heyting...", Math Stack Exchange, for source)

Let \(Heyt\) be the category of Heyting algebras, and fix a complete Heyting algebra \(H\). We first describe a functor \(F: Set^{op} \rightarrow Heyt\): simply the functor sending \(X\) to \(H^X\). In fact, \(F\) always outputs a complete Heyting algebra, since \(H\) is complete.

Let us note that for all \(f: A \rightarrow B\), the map \(Ff: FB \rightarrow FA\) preserves all meets and all joins. Therefore, \(Ff\) has a left adjoint \(\exists f\) and a right adjoint \(\forall f: FA \rightarrow FB\). Moreover, these adjoints satisfy the Beck-Chevalley condition. This gives us a "first-order hyperdoctrine with equality".

From here, we can construct the category of partial equivalence relations.

In a bit more generality, consider a category \(C\) with finite limits and a functor \(F: C^{op} \rightarrow Heyt\), where for all \(f: A \rightarrow B\), \(Ff\) has a left and a right adjoint which satisfy the Beck-Chevalley condition. Such a functor is known as a first-order hyperdoctrine with equality. We write \(f^{-1}\) instead of \(Ff\).

A partial equivalence relation consists of an object \(A \in C\), together with some predicate \(P \in F(A \times A)\), which satisfies the following conditions:

1. Consider the "swap map" \(swap = (p_2, p_1): A^2 \rightarrow A^2\). Then \(swap^{-1}(P) = P\). This is "symmetry". It corresponds to saying \(\forall x, y \in A\ (P(x, y) \iff P(y, x))\).

2. Consider the three maps \((p_1, p_2), (p_1, p_3), (p_2, p_3): A^3 \rightarrow A^2\). We have \((p_1, p_2)^{-1}(P) \land (p_2, p_3)^{-1}(P) \leq (p_1, p_3)^{-1}(P)\). This is "transitivity". It corresponds to saying \(\forall x, y, z \in A\ (P(x, y) \land P(y, z) \rightarrow P(x, z))\).

The arrows from \((A, P)\) to \((B, Q)\) will consist of \(f \in F(A \times B)\) which satisfies the following conditions:

1. Consider \(p_1: A \times B \rightarrow A\). Then \(\exists_{p_1} f = P\). This is "having the correct domain"; it corresponds to saying \(\forall x \in A\ (P(x, x) \rightarrow \exists y \in B\ (f(x, y)))\).

2. Consider \(p_2: A \times B \rightarrow B\). Then \(\exists_{p_2} f \leq Q\). This is "having the correct codomain". It corresponds to saying \(\forall y \in B\ (\exists x \in A\ (f(x, y)) \rightarrow Q(y, y))\).

3. Consider \((p_1, p_2), (p_1, p_3): A \times B^2 \rightarrow A \times B\) and \((p_2, p_3): A \times B^2 \rightarrow B^2\). Then \((p_1, p_2)^{-1}f \land (p_1, p_3)^{-1}f \leq (p_2, p_3)^{-1}(Q)\). This is known as "being single-valued" and corresponds to saying \(\forall x \in A \forall y, z \in B (f(x, y) \land f(x, z) \rightarrow Q(y, z))\).

Using these tools, we can fairly easily define composition. Given \(f: (A, P) \rightarrow (B, Q)\) and \(g: (B, Q) \rightarrow (C, R)\), we can define \(g \circ f: (A, P) \rightarrow (C, R)\) as follows. We have maps \((p_1, p_2): A \times B \times C \rightarrow A \times B\), \((p_1, p_3): A \times B \times C \rightarrow A \times C\), and \((p_2, p_3): A \times B \times C \rightarrow B \times C\). Then \(g \circ f = \exists_{(p_1, p_3)}((p_1, p_2)^{-1}(f) \land (p_2, p_3)^{-1}(g))\). This corresponds to defining \(g \circ f = \{(x, z) \in A \times C \mid \exists y \in B\ (f(x, y) \land g(y, z))\}\).
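The set-theoretic formula for \(g \circ f\) is ordinary relational composition; a toy Python illustration (the relations are invented for the example):

```python
# Relational composition g . f = {(x, z) | exists y: f(x, y) and g(y, z)}
# for finite relations represented as sets of pairs.

def compose(f, g):
    return {(x, z) for (x, y1) in f for (y2, z) in g if y1 == y2}

f = {(1, "a"), (2, "b")}                    # relation between A and B
g = {("a", "X"), ("b", "Y"), ("a", "Z")}    # relation between B and C
print(sorted(compose(f, g)))  # [(1, 'X'), (1, 'Z'), (2, 'Y')]
```

The categorical formula does the same thing: pull \(f\) and \(g\) back to \(A \times B \times C\), take the meet (the shared witness \(y\)), then existentially project away the middle coordinate.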

One can easily show that this gives us a category. In fact, it gives us a Heyting category: a category in which we can interpret first-order intuitionistic logic. This type of category has finite limits, an initial object, Heyting algebra structures on subobjects, and left and right adjoints to pullback on subobject lattices.

We now add one final condition on \(F\). The condition is:

For every object \(X\) of \(C\), there exists an object \(PX\) and an element \(in_X \in F(X \times PX)\) such that for all \(Y \in C\) and \(Q \in F(X \times Y)\), there exists some (not necessarily unique) \(f: Y \rightarrow PX\) with \((1_X \times f)^{-1}(in_X) = Q\).

This makes \(F\) into a "tripos". In the particular case \(FX = H^X\), we see that we can construct \(P(X) = H^X\) with \(in_X: X \times H^X \rightarrow H\) being the obvious evaluation map.

Using the tripos condition, we can prove that the resulting category of partial equivalence relations is, in fact, a topos. Finally, we have the following theorem:

Thm: The tripos-to-topos construction applied to the functor \(X \mapsto H^X\) produces a topos which is equivalent to \(Sh(H)\).

So the "correct" way of constructing \(H\)-valued types just gives us the topos of sheaves on the locale \(H\).

There are a few other interesting ways of getting triposes. The first is the "syntactic tripos". Let's say we're working with intuitionistic higher-order logic. Then let \(C\) be the category of types, where the objects are (products of) types, the arrows are function terms, and \(F(T)\) is the Heyting algebra of predicates on \(T\). The tripos-to-topos construction then produces a topos where the true statements in the internal logic exactly correspond to provable statements.

The second is, of course, starting with \(C\) being the topos and taking the subobject functor. The tripos-to-topos construction will simply produce a topos which is equivalent to \(C\); this shouldn't be terribly surprising.

The third is using a partial combinatory algebra. This is related to Kleene's realizability models for arithmetic and can be used to construct a topos where, for example, we can show in the internal logic that all functions \(N \rightarrow N\) are computable. It can also be used to model Brouwer's intuitionism.