## Functorial Semantics for Algebraic Theories

$\require{AMScd} \require{AMSmath} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\id}{\mathrm{id}} \newcommand{\str}[2]{#1–\mathbf{Str}(\mc #2)} \newcommand{\model}{\mathrm{Mod}} \newcommand{\clsf}{\mathrm{Cl}}$

These are my notes taken while approaching categorical logic from the angle of algebraic theories. They are based on Notes on Categorical Logic by Andrew Pitts, Steve Awodey’s (unfortunately unfinished, but very helpful) lecture notes on categorical logic, and the nLab article on internal logic.

#### Algebraic Theories

An algebraic signature is specified by a set of sorts, a set of operator symbols, and a set of variables for each sort. A context is a (possibly empty) list of distinct variables with their types. We will denote the empty context by $()$. A term-in-context $t(x)$ is a term $t$ over the signature in question together with a context $x$ such that all free variables of the term appear in the context. An equation-in-context $s = t \; (x)$ consists of two terms $s,t$ of the same sort with a context that is appropriate for both of them. Finally, an algebraic theory is given by an algebraic signature and a set of equations-in-context. The theorems of the theory are the derivable equations using reflexivity, transitivity, symmetry, substitution, and weakening (where weakening refers to the fact that we can add more variables to a context and substitution means that if $s = s' \;(x)$ and $t = t' \; (x, y)$ is known, we can conclude $t[s/y] = t'[s'/y] \;(x)$). We write $T \vdash s = t \; (x)$ if $s = t \; (x)$ is a theorem.

An example of an algebraic theory is the theory of monoids, which has a single sort $M$ and two operator symbols: the binary $\cdot$ (written infix) and the nullary $e$. Its equations are simply $e \cdot m = m \;(m)$, $m \cdot e = m \;(m)$, and $(m_1 \cdot m_2) \cdot m_3 = m_1 \cdot (m_2 \cdot m_3) \; (m_1, m_2, m_3)$.
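
To make this concrete, here is a quick sanity check (my own sketch, not part of the notes): interpreting the signature of monoids in $Set$ by lists under concatenation, and spot-checking the equations-in-context on a few sample elements.

```python
# Sketch: the theory of monoids, checked against a concrete structure --
# lists under concatenation with unit [].
def op(m1, m2):      # the binary operator ".", interpreted as concatenation
    return m1 + m2

e = []               # the nullary operator e, interpreted as the empty list

samples = [[1], [2, 3], [4, 5, 6]]
for m in samples:
    assert op(e, m) == m          # e . m = m  (m)
    assert op(m, e) == m          # m . e = m  (m)
for m1 in samples:
    for m2 in samples:
        for m3 in samples:
            # (m1 . m2) . m3 = m1 . (m2 . m3)  (m1, m2, m3)
            assert op(op(m1, m2), m3) == op(m1, op(m2, m3))
print("lists form a monoid on these samples")
```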

#### Algebras

Given an algebraic signature $\Sigma$ and a category $\mc C$ with finite products, a $\Sigma$-structure $M$ in $\mc C$ maps each sort $A$ of $\Sigma$ to an object $MA$ of $\mc C$ and each operator $f \colon A_1, \ldots, A_n \to B$ to an arrow $Mf \colon \prod_{i=1}^n MA_i \to MB$ (without any further requirements). To any context $x = x_1 : A_1, \ldots, x_n : A_n$ we can then associate an object $Mx = \prod_{i=1}^n MA_i$. We can use all of this to define the denotation of a term-in-context $t(x)$ of sort $A$ to be an arrow $[[t (x)]]_M \colon Mx \to MA$:
* if $t = x_i$ with $x_i : A$, then $[[t(x)]]_M$ is the $i$-th projection $\pi_i \colon Mx \to MA_i = MA$.
* if $t = f(t_1, \ldots, t_n)$, then $[[t(x)]]_M$ is $Mf \circ \langle [[t_1(x)]]_M, \ldots, [[t_n(x)]]_M \rangle$.

If $T$ is an algebraic theory on $\Sigma$ and $M$ is a $\Sigma$-structure, we say that $M$ satisfies an equation-in-context $s = t \; (x)$, written $M \models s = t \; (x)$, if $[[s(x)]]_M = [[t(x)]]_M$. If $M$ satisfies all equations of a theory, it is called a $T$-algebra in $\mc C$. The $T$-algebras in a category again form a category $\model(T, \mc C)$, where morphisms $h \colon M \to N$ are given by a family of $\mc C$-arrows $h_A$, one for each sort $A$ of $\Sigma$, such that for every function symbol $f \colon A_1,\ldots, A_n \to B$ the diagram

$$\begin{CD} \prod_{i=1}^n MA_i @>Mf>> MB \\ @V{\prod_{i=1}^n h_{A_i}}VV @VV{h_B}V \\ \prod_{i=1}^n NA_i @>>{Nf}> NB \end{CD}$$

commutes. All this generalizes the usual algebraic notions; e.g. a $T$-algebra in the category of sets for the theory of monoids is just a monoid, and the commutativity of the diagram just states that $h(x\cdot y) = h(x) \cdot h(y)$ in a monoid.

#### Soundness

We can now prove the soundness property, namely, that every $T$-algebra $M$ satisfies all theorems of $T$. To do this, it is enough to ensure that the rules of inference are compatible with our models. Reflexivity, symmetry, and transitivity just restate properties of equality for arrows. This still leaves weakening and substitution open (which brings us to the substitution lemma). For terms-in-context $t(x), s(y)$, let us write $t(s(y))$ for the term $t[s_1/x_1, \ldots, s_n/x_n](y)$. Then by induction on $t$, one can prove that $[[t(s(y))]]_M = [[t(x)]]_M \circ \langle [[s_1(y)]]_M, \ldots, [[s_n(y)]]_M \rangle$. Weakening is then just a special case of substitution.
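
For the record, the base case of this induction is immediate: if $t = x_i$, then

$$[[x_i(s(y))]]_M = [[s_i(y)]]_M = \pi_i \circ \langle [[s_1(y)]]_M, \ldots, [[s_n(y)]]_M \rangle = [[x_i(x)]]_M \circ \langle [[s_1(y)]]_M, \ldots, [[s_n(y)]]_M \rangle,$$

and the case $t = f(t_1, \ldots, t_k)$ follows from the induction hypothesis, since precomposition with a fixed arrow commutes with tupling.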

#### Completeness

Completeness can be proved by constructing a model that satisfies just the theorems provable in the theory. For this, let us define the syntactic category $\clsf(T)$ of $T$ where the objects are simply contexts and an arrow $(t_1(x), \ldots, t_m(x)) \colon x \to y$ from $x = (x_1,\ldots, x_n)$ to $y = (y_1, \ldots, y_m)$ is a list of terms-in-context, where $t_i$ has the sort of $y_i$. More precisely, arrows are equivalence classes of such lists, since we identify $(t_1(x), \ldots, t_m(x))$ and $(s_1(x), \ldots, s_m(x))$ if $s_i = t_i \; (x)$ is provable in $T$ for all $i$. The composition of arrows $(t_1(x), \ldots, t_m(x)) \colon x \to y$ and $(s_1(y), \ldots, s_k(y)) \colon y \to z$ is defined by substitution in each component: the $i$-th component of the composite is $s_i[t_1, \ldots, t_m/y_1, \ldots, y_m]$. Note that each variable in a context belongs to a certain sort, which I have suppressed here, and two contexts are only equal if they also agree on sorts.
This category $\clsf(T)$ has all finite products, given by concatenation of contexts (with variables renamed apart if necessary). We can now define a $\Sigma$-structure $U$ in $\clsf(T)$ by setting $UA = (x \colon A)$ for all sorts $A$ and $Uf = f(x_1, \ldots, x_n)$ for $f \colon A_1, \ldots, A_n \to B$. Then $U \in \model(T, \clsf(T))$, since, by definition, $[[s(x)]]_U = [[t(x)]]_U$ holds iff $T \vdash s = t \; (x)$ — so $U$ satisfies all axioms and exactly the theorems, which gives completeness.

#### Functorial Semantics

Now, amazingly, any $T$-algebra $M$ in $\mc C$ can be seen as an image of $U$ under a specific finite-product-preserving functor (FP-functor) $\clsf(T) \to \mc C$, and vice versa, each FP-functor $F \colon \clsf(T) \to \mc C$ induces a $T$-algebra. First note that any such functor $F$ maps $U$ to a $T$-algebra in $\mc C$ (this part is obvious; we write $FU$ for this $T$-algebra). Also, if $\eta \colon F \Rightarrow G$ is a natural transformation between two such functors, notice that it restricts to a $T$-algebra homomorphism between $FU$ and $GU$, since for all $f \colon A_1, \ldots, A_n \to B$ we have, by naturality and preservation of products, that $\eta_{UB} \circ FU f = GUf \circ \eta_{\prod UA_i}$. Thus evaluation at $U$ is a functor $\mathrm{eval}_U \colon \mc C^{\clsf(T)} \to \model(T, \mc C)$. When restricted to the full subcategory of FP-functors, this yields an equivalence: given any model $M \in \model(T, \mc C)$, we can construct an FP-functor $F_M \colon \clsf(T) \to \mc C$ by setting $F_M((x \colon A)) = MA$ for all sorts $A$ (preservation of finite products then fully determines the other objects) and $F_M(t (x)) = [[t (x)]]_M$. This is a functor, since $M$ is a model and two arrows in $\clsf(T)$ are equal iff they are equal in all models.

The category $\clsf(T)$ is then called the classifying category and $U$ the universal model, and by the preceding discussion it is characterized by the universal property that $T$-algebras in $\mc C$ are in one-to-one correspondence with FP-functors $\clsf(T) \to \mc C$. Sometimes $U$ is also referred to as logically generic for the calculus we sketched in the beginning, which here simply means that it satisfies exactly those equations that are provable. As a side note, the Yoneda embedding $y \colon \clsf(T) \to \widehat{\clsf(T)}$, $A \mapsto \mathrm{Hom}(-, A)$ gives rise to another logically generic model $y(U)$, since $y$ is an FP-functor that is also faithful.

Instead of starting off with a theory and building a category, we can also go the other way around. What the construction of $\clsf(T)$ essentially did was replace the representation of a theory via axioms by a category with a universal property that ensures that the category contains all consequences of the axioms, and just these. So given a category $\mc C$ with finite products, the associated signature $\Sigma_{\mc C}$ contains a sort for each object of the category and an operator symbol $f \colon A_1, \ldots, A_n \to B$ for each arrow $f \colon \prod_{i=1}^n A_i \to B$. Now let $M_{\mc C}$ be the model sending every sort and every operator symbol to itself (the terms over this signature make up the internal language of $\mc C$, and essentially allow you to treat $\mc C$ similarly to $Set$). Now consider the theory $T_{\mc C}$ with all equations satisfied by $M_{\mc C}$ as axioms. Intuitively, it is clear that this construction is right adjoint to the process of constructing the syntactic category (in a sense, we are simply forgetting that our theory is also a category), and that $\clsf(T_{\mc C}) \simeq \mc C$.

## Notes on Recognisable Languages over Monads

$\require{AMScd} \require{AMSmath} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\id}{\mathrm{id}} \newcommand{\rec}{\mathrm{Rec}}$

In this post, I take some notes about Bojańczyk’s paper Recognisable Languages over Monads. These notes are mainly for myself (I find it helpful to write stuff down), so I presuppose some category theory, some formal language theory, and a certain acquaintance with Stone duality. I may write about another approach to profinite monads as codensity monads soon. Or not. We’ll see.

In the following, let $(T \colon \mc C \to \mc C, \mu, \eta)$ be a monad on $\mc C$. A $T$-algebra is an object $A$ of $\mc C$ together with an arrow $e_A \colon TA \to A$ such that the unit and multiplication diagrams commute, i.e. $e_A \circ \eta_A = \id_A$ and $e_A \circ \mu_A = e_A \circ Te_A$. We will write $(A, e_A)$ for this $T$-algebra and occasionally also just $A$. The simplest example of a $T$-algebra is $(TA, \mu_A)$, the free $T$-algebra.
Given two $T$-algebras $(A, e_A)$ and $(B, e_B)$, a $T$-algebra morphism $h \colon (A, e_A) \to (B, e_B)$ is an arrow $h \colon A \to B$ such that the square expressing $h \circ e_A = e_B \circ Th$ commutes. This data defines a category, the Eilenberg–Moore category $\mc C^T$ of algebras of $T$ with their morphisms.

Our running example will be the monoid monad $M$ that arises as the composition $UF$ of the adjoint functors $F \colon Set \to Mon$ and $U \colon Mon \to Set$, where $U$ is the forgetful functor and $F$ takes a set $\Sigma$ to the free monoid $F\Sigma = \Sigma^*$ with the usual action on maps. In other words, $MA$ is the set of finite words on $A$. This monad has as its unit $\eta_\Sigma \colon \Sigma \to \Sigma^*$ the embedding of elements from $\Sigma$ as words of length one and as its multiplication the collapsing of finite words of finite words over $\Sigma$ to finite words over $\Sigma$. It is easily seen that the algebras for $M$ are simply monoids, and the $M$-algebra morphisms are monoid homomorphisms: The idea here is that the free monoid consists of all terms that make sense for a monoid and $e_A$ is the evaluation of these terms to elements of the actual monoid $A$. The laws now simply ensure that this evaluation works as you would expect.
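
To make the evaluation picture concrete, here is a small sketch (my own, using Python lists for finite words) checking the two algebra laws for the monoid $(\mathbb{Z}, +, 0)$:

```python
# Sketch of the monoid monad M on Set: MA is the set of finite words
# (here: lists) over A.
def unit(a):             # eta_A : A -> MA, a as a word of length one
    return [a]

def mult(ww):            # mu_A : MMA -> MA, flatten a word of words
    return [a for w in ww for a in w]

# An M-algebra structure on the monoid (Z, +, 0): evaluate a word by summing.
def e_A(word):
    total = 0
    for a in word:
        total += a
    return total

# The two algebra laws, spot-checked on samples:
assert e_A(unit(5)) == 5                                    # e_A . eta_A = id
ww = [[1, 2], [3], []]
assert e_A(mult(ww)) == e_A([e_A(w) for w in ww])           # e_A . mu_A = e_A . M e_A
print("(Z, +, 0) is an M-algebra on these samples")
```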

Let $(A, e_A)$ be a $T$-algebra and $X$ any object of $\mc C$. An arrow $f \colon A \to X$ (in $\mc C$) is recognized by a $T$-algebra morphism $h \colon (A, e_A) \to (B, e_B)$ if $f = g \circ h$ for some arrow $g \colon B \to X$ in $\mc C$. Such an arrow $f$ is recognized by a $T$-algebra $(B, e_B)$ if there is a $T$-algebra morphism $h \colon (A, e_A) \to (B, e_B)$ that recognizes it. Pictorially, $f$ factors as $A \xrightarrow{\;h\;} B \xrightarrow{\;g\;} X$.
Note how this generalizes the notion of recognition for monoids: A subset $K$ of a monoid $N$ is recognized by a monoid $N'$ if there exists $K' \subseteq N'$ and a monoid morphism $h \colon N \to N'$ such that $K = h^{-1}(K')$. In our definition above, this is the case of $X = \{0,1\}$ and $f$ as the characteristic function $\chi_K$ of $K$. Any $Set$-arrow $g \colon B \to X$ then defines a subset $g^{-1}(1)$ of $B$, and vice versa. So $\chi_K = g \circ h$ translates to $x \in K$ iff $h(x) \in g^{-1}(1)$, therefore $K = h^{-1}(g^{-1}(1))$.
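
As a worked instance (my example, not from the paper): the language $K$ of even-length words over $\Sigma = \{a, b\}$ is recognized by the two-element monoid $\mathbb{Z}/2$ via the length-parity morphism.

```python
# K = even-length words over {a, b}, recognized by Z/2 = ({0, 1}, + mod 2).
def h(word):                 # monoid morphism Sigma* -> Z/2, word -> length parity
    return len(word) % 2

def g(parity):               # g : B -> {0, 1}, the "accepting" predicate
    return 1 if parity == 0 else 0

def chi_K(word):             # characteristic function of K
    return 1 if len(word) % 2 == 0 else 0

for w in ["", "a", "ab", "aba", "abab"]:
    assert chi_K(w) == g(h(w))     # chi_K = g . h, so K = h^{-1}(g^{-1}(1))
print("K is recognized by Z/2")
```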

Also, this definition readily implies that all such arrows $f \colon A \to X$ are recognized by $\id_A \colon (A, e_A) \to (A, e_A)$ – which is no problem, since usually we are interested in which arrows can be recognized by ‘finite’ $T$-algebras. Unfortunately, there is no obvious way to define what finite means for an arbitrary monad. We will presuppose that a sensible choice of finite has been made in the context we are considering. The recognizable arrows are then those that are recognized by finite $T$-algebras.

I will first describe how Bojańczyk extends a monad $T$ on $Set$ to its profinite version $\overline T$.

In the following, let $\mc C = Set$ and let $(A, e_A)$ be a $T$-algebra. A generalized element of $A$ is a map $\tau$ associating to every surjective $T$-algebra morphism $h \colon (A, e_A) \to (B, e_B)$ onto a finite $T$-algebra an element $h^\tau \in B$ such that for every finite $T$-algebra $(C, e_C)$ and $T$-algebra morphism $g \colon (B, e_B) \to (C, e_C)$ we have $g(h^\tau) = (g \circ h)^\tau$. The set of all generalized elements of $A$ is written $\bar A$.

The comparison to the case of the free profinite monoid $\widehat{\Sigma^*}$ is illuminating: Seen as the completion of $\Sigma^*$ with the profinite metric, we see that elements of $\widehat{\Sigma^*}$ are either finite words or (equivalence classes of) non-convergent Cauchy-sequences. One of the main points about the profinite completion of $\Sigma^*$ is that any monoid homomorphism $h \colon \Sigma^* \to N$, where $N$ is finite, can be uniquely extended to a uniformly continuous monoid homomorphism $\hat h \colon \widehat{\Sigma^*} \to N$. To define $\hat h$, we set $\hat h(w) = h(w)$ whenever $w \in \Sigma^*$, and if $w = (w_n)_{n\in\mathbf{N}}$ is a Cauchy-sequence, we set $\hat h(w) = \lim_{n \to \infty} h(w_n)$, which must exist by the definition of the profinite metric. We can now turn this upside down and instead consider the elements of $\widehat{\Sigma^*}$ as acting on the morphisms, where each $w \in \widehat{\Sigma^*}$ acts as $h \mapsto \hat h(w)$. In the context of Bojańczyk’s work, $\hat h$ is written $\bar h$ and is defined as $\bar h(\tau) = h^\tau$.
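
A concrete way to see why the limit $\lim_{n \to \infty} h(w_n)$ exists for the standard Cauchy sequences: in a finite monoid, the sequence $x^{n!}$ is eventually constant (at the idempotent power $x^\omega$). A quick numerical check, using the multiplicative monoid of digits mod 10 as the finite monoid (my example):

```python
# In a finite monoid, x^(n!) is eventually constant, so the profinite limit
# x^omega has a well-defined image under every morphism into a finite monoid.
import math

def power_mod10(x, k):       # the finite monoid ({0, ..., 9}, * mod 10, 1)
    return pow(x, k, 10)

values = [power_mod10(2, math.factorial(n)) for n in range(1, 8)]
print(values)                # stabilizes at 6, the idempotent power of 2
assert values[-1] == values[-2] == 6
assert power_mod10(6, 2) == 6   # 6 is indeed idempotent: 6 * 6 = 36 = 6 mod 10
```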

The set $\bar A$ can be equipped with a topology generated by the opens of the form $\{\tau \in \bar A \mid h^\tau = b\}$ with $h \colon (A, e_A) \to (B, e_B)$, $b \in B$, which makes it homeomorphic to the Stone dual $\mc S(\rec(A))$ of the recognizable languages of $A$. Note that the topology of $\mc S(\rec(A))$ is generated by the opens $\{\mc F \in \mc S(\rec(A)) \mid L \in \mc F \}$ with $L \in \rec(A)$. This suggests that $\tau \mapsto \{ L \in \rec(A) \mid h^\tau \in S \text{ whenever } L = h^{-1}(S) \}$ is the homeomorphism in question.

Bojańczyk calls what I named generalized elements of $A$ instead $T$-algebra morphism types over $A$ and refers to $\bar A$ as the compactification of $A$. I find this nomenclature confusing, since type has a rather different meaning for me and there are several ways to compactify $A$ as a discrete space.

Now we can define the profinite monad $(\overline T \colon Set \to Set, \bar \mu, \bar \eta)$ for $T$. It acts on sets via $\overline T \Sigma = \overline{T \Sigma}$. The operations $\overline T f$ for $f \colon \Gamma \to \Sigma$, $\bar\eta_\Sigma$, and $\bar\mu_\Sigma$ are completely fixed by requiring that, for each finite $T$-algebra $(A, e_A)$ and each surjective $T$-algebra morphism $h \colon (T\Sigma, e_{T\Sigma}) \to (A, e_A)$, the following diagrams commute:

Since each element $\tau \in \overline T \Sigma$ is defined by its action $h^\tau$ on each $h \colon (T\Sigma, e_{T\Sigma}) \to (A, e_A)$ for each finite $T$-algebra $(A, e_A)$, the diagrams actually define the operations.

Again I think it is helpful to see what this means in the familiar case of the free profinite monoid. The embedding $\bar \eta_\Sigma$ is straightforward and turns a word $w \in \Sigma^*$ into the generalized element $h \mapsto h(w)$, or the principal ultrafilter of $w$, or the constant sequence $(w,w,w,w,\ldots)$, depending on your point of view.
The lifting of functions $f \colon \Gamma \to \Sigma$ is given by $\bar h(\overline Tf(x)) = \overline{h \circ Tf}(x)$, which makes use of the fact that $h \circ Tf \colon (T\Gamma, \mu_\Gamma) \to (A, e_A)$ is a monoid homomorphism into a finite monoid that can be extended continuously. In terms of profinite words this means that $\overline Tf((w_n)_{n\in\mathbf{N}}) = (Tf(w_n))_{n\in\mathbf{N}}$, since then $\bar h((Tf(w_n))_{n\in\mathbf{N}}) = \lim_{n\to\infty} h(Tf(w_n))$.
The case of the multiplication is less obvious. The right-most diagram states that $\bar h(\bar\mu_\Sigma(x)) = (\overline{e_A} \circ \overline T \bar h)(x)$. I am still trying to find meaning in this one.

## Variations on Voronoi

Lately I have returned to the awesome Processing library, which provides a very simple infrastructure for creating interactive experiments that produce nice images. I had not used it since version 2; the last time I worked with it was in 2009, I think. It was mostly a need to do something different from my daily routine that made me return to Processing, since I have very fond memories of toying around with it. Working with Processing made me again painfully aware of the differences between Java and C#, but overall it was a pleasant experience leading to pictures like this here:

The program that I have been working on is inspired by Voronoi diagrams. Voronoi diagrams take as input a plane (say, the unit square) and a set of distinguished positions in that plane. For each point of the plane, the output at that point is the closest of the distinguished positions to that point. This information can then be used to color the plane, yielding a picture of colored cells, see Wikipedia. There are several ways to compute a Voronoi diagram, the one I decided to use probably does not count as a sensible method: Our plane is a picture (which means that it is subdivided into pixels) and we compute the diagram iteratively by repeatedly adding the pixels surrounding each distinguished point to its cell. Each pixel is given the color of its cell (the cell color has been predetermined in some way using the associated distinguished point) and after sufficiently many simulation steps, we get a finished diagram, like this one here:

One thing that might happen in this simulation is that a pixel is halfway between two distinguished points, in which case that pixel is added to whichever cell is simulated first in the update step of the simulation. Alternatively, we can simply mark that pixel as conflicted and color it black. Note also that I never specified what exactly surrounding pixels means in this context: The most sensible notion that comes to mind is that, for any pixel, its neighbors are given by its left, right, top, and bottom neighbors. You could also add in the diagonals. Or do something entirely different, which will generate Voronoi-like diagrams for different metrics (check for yourself whether a measure of distance given by such a neighborhood actually induces a metric). This generates pictures such as this one:
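
The update loop described above can be sketched as a multi-source flood fill (a minimal Python sketch of mine, not the actual Processing code; conflicts are resolved first-come-first-served):

```python
# Iterative Voronoi on a pixel grid: grow each cell outward from its seed,
# one ring per simulation step, with a pluggable neighborhood.
from collections import deque

def voronoi(width, height, seeds, neighbors):
    owner = {p: i for i, p in enumerate(seeds)}   # pixel -> index of its cell
    frontier = deque(seeds)
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in neighbors:
            q = (x + dx, y + dy)
            if 0 <= q[0] < width and 0 <= q[1] < height and q not in owner:
                owner[q] = owner[(x, y)]          # first cell to reach q wins
                frontier.append(q)
    return owner

VON_NEUMANN = [(1, 0), (-1, 0), (0, 1), (0, -1)]            # left/right/top/bottom
MOORE = VON_NEUMANN + [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # plus diagonals

cells = voronoi(8, 8, [(0, 0), (7, 7)], VON_NEUMANN)
print(cells[(1, 1)], cells[(6, 6)])   # 0 1
```

Swapping `VON_NEUMANN` for `MOORE` changes the induced distance (Manhattan vs. Chebyshev), which is exactly the source of the Voronoi-like variations mentioned above.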

Of course, there are several factors that can be changed to create interesting pictures:

• the kind of neighborhood used to determine surrounding cells
• the initial distribution of distinguished points
• the initial colors of every cell
• how we color pixels belonging to a cell (just the solid color? take the distance into account? do some crazy bit-arithmetic based on the current cell size?)
• how we handle conflicts (always mark them as black borders? give the pixel to one of the conflict partners at random? etc.)

To explore the kinds of pictures that arise in this way, I wrote a program to specify these parameters and tweak them during the simulation, which leads to rather interesting pictures showing great diversity. Actually, I lied a bit: Before writing the program, I felt the need to write a simple custom GUI library for Processing, because it does not come with one and the ones I tried did some things that I didn’t like (this, of course, is not a good reason to write a GUI library – or any library at all – but it integrates neatly with the mindset of most programmers I know). Here are some of the pictures generated with the program, along with some detail shots at the pixel level (click the pictures to get a larger view). I am also currently experimenting with printing, you can buy prints here.

## Note to myself – Image Magick commands that you will use

Dear future me, here are some ImageMagick commands that you will end up using over and over again and have to look up repeatedly (unless I write them down now, in which case you will surely remember all of them):

• Convert from one image format to another (PNG in this case):  mogrify -format png <files>
• Resize pictures:  convert <input> -resize 25% <output>
• Add label to the bottom right:  composite label:<text> -gravity southeast <input> <output>
• Add a white border to the image:  convert -bordercolor White -border 10x10 <input> <output>
• Arrange images horizontally such that the spacing between images is just as large as the frame of the whole batch (20px, in this case):  cmd /c "montage preview-1.png preview-2.png preview-3.png -geometry 500X500+10+0 -tile x1 - | convert - -bordercolor White -border 10x20 preview.png" — note the use of CMD here, since Windows PowerShell screws up piping of binary data (“Hey, it’s a string, let’s make it Unicode!”)
• Cut out the center in a square format:  convert <input> -set option:distort:viewport "%[fx: w>h ? h : w]x%[fx: w>h ? h : w]+%[fx: w>h ? (w - h)/2 : 0]+%[fx: w>h ? 0 : (h - w)/2]" -filter point -distort SRT 0 +repage <output>
• Cut out the center in a 2/3 format:  convert <input> -set option:distort:viewport "%[fx: w>(3/2*h) ? 3/2*h : w]x%[fx: w>(3/2*h) ? h : 2/3*w]+%[fx: w>(3/2*h) ? (w - 3/2*h)/2 : 0]+%[fx: w>(3/2*h) ? 0 : (h - 2/3*w)/2]" -filter point -distort SRT 0 +repage <output>
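
The `%[fx: ...]` expressions in the square crop are just centered-crop arithmetic; transcribed into plain Python (my transcription, integer-rounded) for sanity checking:

```python
# Centered square crop: same arithmetic as the fx viewport expression above,
# returning the WxH+X+Y geometry components.
def center_square_crop(w, h):
    side = h if w > h else w                  # %[fx: w>h ? h : w], both axes
    x_off = (w - h) // 2 if w > h else 0      # %[fx: w>h ? (w-h)/2 : 0]
    y_off = 0 if w > h else (h - w) // 2      # %[fx: w>h ? 0 : (h-w)/2]
    return side, side, x_off, y_off

print(center_square_crop(1920, 1080))  # (1080, 1080, 420, 0)
```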