Functorial Semantics for Algebraic Theories

These are my notes taken while approaching categorical logic from the angle of algebraic theories. They are based on Notes on Categorical Logic by Andrew Pitts, Steve Awodey’s (unfortunately unfinished, but very helpful) lecture notes on categorical logic, and the nLab article on internal logic.

Algebraic Theories

An algebraic signature is specified by a set of sorts, a set of operator symbols, and a set of variables for each sort. A context is a (possibly empty) list of distinct variables with their sorts. We will denote the empty context by $()$. A term-in-context, written $t\;[\Gamma]$, is a term over the signature in question together with a context $\Gamma$ such that all free variables of the term appear in the context. An equation-in-context $t = t'\;[\Gamma]$ consists of two terms of the same sort together with a context that is appropriate for both of them. Finally, an algebraic theory $\mathbb{T}$ is given by an algebraic signature and a set of equations-in-context. The theorems of the theory are the equations derivable using reflexivity, transitivity, symmetry, substitution, and weakening (where weakening refers to the fact that we can add more variables to a context, and substitution means that if $t = t'\;[\Gamma, x]$ and $s = s'\;[\Gamma]$ are known, we can conclude $t[s/x] = t'[s'/x]\;[\Gamma]$). We write $\mathbb{T} \vdash t = t'\;[\Gamma]$ if $t = t'\;[\Gamma]$ is a theorem.

An example of an algebraic theory is the theory of monoids, which has a single sort $M$ and two operator symbols: the binary $\cdot$ (written infix) and the nullary $e$. Its equations are simply $(x \cdot y) \cdot z = x \cdot (y \cdot z)\;[x, y, z]$ and $e \cdot x = x = x \cdot e\;[x]$.


Given an algebraic signature $\Sigma$ and a category $\mathcal{C}$ with finite products, a $\Sigma$-structure $M$ in $\mathcal{C}$ maps each sort $\sigma$ of $\Sigma$ to an object $M(\sigma)$ of $\mathcal{C}$ and each operator $f \colon \sigma_1 \times \dots \times \sigma_n \to \sigma$ to an arrow $M(f) \colon M(\sigma_1) \times \dots \times M(\sigma_n) \to M(\sigma)$ (without any further requirements). To any context $\Gamma = [x_1 \colon \sigma_1, \dots, x_n \colon \sigma_n]$ we can then associate an object $M(\Gamma) = M(\sigma_1) \times \dots \times M(\sigma_n)$. We can use all of this to define the denotation of a term-in-context $t\;[\Gamma]$ of sort $\sigma$ to be an arrow $M(t\;[\Gamma]) \colon M(\Gamma) \to M(\sigma)$:
* if $t = x_i$ with $x_i \colon \sigma$ in $\Gamma$, then $M(t\;[\Gamma])$ is the projection $\pi_i \colon M(\Gamma) \to M(\sigma_i)$ with $\sigma_i = \sigma$.
* if $t = f(t_1, \dots, t_n)$, then $M(t\;[\Gamma])$ is $M(f) \circ \langle M(t_1\;[\Gamma]), \dots, M(t_n\;[\Gamma]) \rangle$.
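To make this recursion concrete, here is a small sketch in Python (my own toy encoding, not from any of the cited notes) of the denotation in the category of sets, where a context denotes a set of tuples:

```python
# Toy denotation of terms-in-context in Set (hypothetical encoding):
# a term is a variable name or a pair (f, [subterms]); a structure M maps
# operator symbols to Python functions; a context is a list of variable
# names; the denotation M(t [Gamma]) becomes a function on tuples.

def denote(term, ctx, M):
    if isinstance(term, str):
        # t = x_i: the denotation is the i-th projection
        i = ctx.index(term)
        return lambda env: env[i]
    f, args = term
    # t = f(t1, ..., tn): compose M(f) with <M(t1 [Gamma]), ..., M(tn [Gamma])>
    subs = [denote(t, ctx, M) for t in args]
    return lambda env: M[f](*(s(env) for s in subs))
```

For the theory of monoids interpreted in the integers under addition, `denote(("mul", ["x", ("mul", ["y", "z"])]), ["x", "y", "z"], {"mul": lambda a, b: a + b})` applied to the tuple `(1, 2, 3)` yields `6`.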

If $\mathbb{T}$ is an algebraic theory on a signature $\Sigma$ and $M$ is a $\Sigma$-structure, we say that $M$ satisfies an equation-in-context $t = t'\;[\Gamma]$, written $M \models t = t'\;[\Gamma]$, if $M(t\;[\Gamma]) = M(t'\;[\Gamma])$. If $M$ satisfies all equations of a theory $\mathbb{T}$, it is called a $\mathbb{T}$-algebra in $\mathcal{C}$. The $\mathbb{T}$-algebras in a category $\mathcal{C}$ again form a category $\mathbb{T}\mathrm{Alg}(\mathcal{C})$, where morphisms $M \to N$ are given by a family of $\mathcal{C}$-arrows $h_\sigma \colon M(\sigma) \to N(\sigma)$, one for each sort $\sigma$ of $\Sigma$, such that for every function symbol $f \colon \sigma_1 \times \dots \times \sigma_n \to \sigma$ the diagram expressing

$$h_\sigma \circ M(f) = N(f) \circ (h_{\sigma_1} \times \dots \times h_{\sigma_n})$$

commutes. All this generalizes the usual algebraic notions; e.g. a monoid in the category of sets is just a monoid, and the commutativity of the diagram just states that $h(x \cdot y) = h(x) \cdot h(y)$ in a monoid.


We can now prove the soundness property, namely, that every $\mathbb{T}$-algebra satisfies all theorems of $\mathbb{T}$. To do this, it is enough to ensure that the rules of inference are compatible with our models. Reflexivity, symmetry, and transitivity just restate properties of equality for arrows. This still leaves weakening and substitution open (which brings us to the substitution lemma). For terms-in-context $t\;[\Gamma, x \colon \sigma]$ and $s\;[\Gamma]$ with $s$ of sort $\sigma$, let us write $t[s/x]\;[\Gamma]$ for the term $t$ with $s$ substituted for $x$. Then by induction on $t$, one can prove that $M(t[s/x]\;[\Gamma]) = M(t\;[\Gamma, x \colon \sigma]) \circ \langle \mathrm{id}_{M(\Gamma)}, M(s\;[\Gamma]) \rangle$. Weakening is then just a special case of substitution.


Completeness can be proved by constructing a model that satisfies just the theorems provable in the theory. For this, let us define the syntactic category $\mathcal{C}(\mathbb{T})$ of $\mathbb{T}$, where the objects are simply contexts and an arrow $\Gamma \to \Delta = [y_1 \colon \tau_1, \dots, y_m \colon \tau_m]$ is a list $[t_1, \dots, t_m]$ of terms-in-context $t_i\;[\Gamma]$ with $t_i$ of sort $\tau_i$. More precisely, these are equivalence classes of such lists, since we identify two arrows $[t_1, \dots, t_m]$, $[t'_1, \dots, t'_m]$ if $t_i = t'_i\;[\Gamma]$ is provable in $\mathbb{T}$ for all $i$. Composition of arrows $[t_1, \dots, t_m] \colon \Gamma \to \Delta$, $[s_1, \dots, s_k] \colon \Delta \to \mathrm{E}$ is defined by substitution in each component, $[s_1[\vec{t}/\vec{y}], \dots, s_k[\vec{t}/\vec{y}]]$. Note that each variable in a context belongs to a certain sort, which I have suppressed here, and two contexts are only equal if they also agree on sorts.
This category has all finite products and they are given by concatenation of contexts. We can now define a $\Sigma$-structure $G$ in $\mathcal{C}(\mathbb{T})$ by setting $G(\sigma) = [x \colon \sigma]$ for all sorts $\sigma$ and $G(f) = [f(x_1, \dots, x_n)] \colon [x_1 \colon \sigma_1, \dots, x_n \colon \sigma_n] \to [x \colon \sigma]$ for $f \colon \sigma_1 \times \dots \times \sigma_n \to \sigma$. Then $G \models t = t'\;[\Gamma]$ iff $\mathbb{T} \vdash t = t'\;[\Gamma]$, since, by definition, $G(t\;[\Gamma])$ is just the equivalence class of $[t]$, and two such arrows are equal iff the corresponding equation is provable.

Functorial Semantics

Now, amazingly, any $\mathbb{T}$-algebra in $\mathcal{C}$ can be seen as an image of $G$ under a specific finite-product preserving functor (FP-functor) $F \colon \mathcal{C}(\mathbb{T}) \to \mathcal{C}$, and vice-versa, each FP-functor induces a $\mathbb{T}$-algebra. First note that any such functor $F$ maps $G$ to a $\mathbb{T}$-algebra in $\mathcal{C}$ (this part is obvious; we write $F(G)$ for this $\mathbb{T}$-algebra). Also, if $\phi \colon F \Rightarrow F'$ is a natural transformation between two such functors, notice that it restricts to a $\mathbb{T}$-algebra homomorphism between $F(G)$ and $F'(G)$, since for all operator symbols $f \colon \sigma_1 \times \dots \times \sigma_n \to \sigma$ we have, by naturality and preservation of products, that $\phi_{G(\sigma)} \circ F(G(f)) = F'(G(f)) \circ (\phi_{G(\sigma_1)} \times \dots \times \phi_{G(\sigma_n)})$. Thus evaluation at $G$ is a functor $[\mathcal{C}(\mathbb{T}), \mathcal{C}] \to \mathbb{T}\mathrm{Alg}(\mathcal{C})$. When restricted to the full subcategory of FP-functors, this yields an equivalence: Given any model $M$, we can construct an FP-functor $F_M$ by setting $F_M([x \colon \sigma]) = M(\sigma)$ for all sorts $\sigma$ (finite product preservation then fully determines the other objects) and $F_M([t_1, \dots, t_m]) = \langle M(t_1\;[\Gamma]), \dots, M(t_m\;[\Gamma]) \rangle$. This is a well-defined functor, since $M$ is a model and two arrows in $\mathcal{C}(\mathbb{T})$ are equal iff they are equal in all models.

The category $\mathcal{C}(\mathbb{T})$ is then called the classifying category and $G$ the universal model, and by the preceding discussion it is defined by the universal property that $\mathbb{T}$-algebras in $\mathcal{C}$ are in one-to-one correspondence with FP-functors $\mathcal{C}(\mathbb{T}) \to \mathcal{C}$. Sometimes $G$ is also referred to as logically generic for the calculus we sketched in the beginning, which here simply means that it satisfies exactly those equations that are provable. As a side note, the Yoneda embedding $y \colon \mathcal{C}(\mathbb{T}) \to [\mathcal{C}(\mathbb{T})^{\mathrm{op}}, \mathbf{Set}]$ gives rise to another logically generic model $y(G)$, since $y$ is an FP-functor that is also faithful.

Instead of starting off with a theory and building a category, we can also go the other way around. What the construction of $\mathcal{C}(\mathbb{T})$ essentially did was replace the representation of a theory via axioms by some category with a universal property that ensures that the category contains all consequences of the axioms, and just these. So given a category $\mathcal{C}$ with finite products, the associated signature contains a sort for each object of the category and an operator symbol for each arrow $A_1 \times \dots \times A_n \to A$. Now let the tautological structure be the model sending every sort and every operator symbol to itself (the terms over this signature make up the internal language of $\mathcal{C}$, and essentially allow you to treat $\mathcal{C}$ similar to $\mathbf{Set}$). Now consider the theory $\mathbb{T}(\mathcal{C})$ with all equations satisfied by this model as axioms. Intuitively, it is clear that this construction is right adjoint to the process of constructing the syntactic category (in a sense, we are simply forgetting that our theory is also a category), and that $\mathcal{C}(\mathbb{T}(\mathcal{C})) \simeq \mathcal{C}$.

Notes on Recognisable Languages over Monads

In this post, I take some notes about Bojańczyk’s paper Recognisable Languages over Monads. These notes are mainly for myself (I find it helpful to write stuff down), so I presuppose some category theory, some formal language theory, and a certain acquaintance with Stone duality. I may write about another approach to profinite monads as codensity monads soon. Or not. We’ll see.


Algebras for a Monad

In the following, let $T$ be a monad on a category $\mathcal{C}$, with unit $\eta$ and multiplication $\mu$. A $T$-algebra is an object $A$ of $\mathcal{C}$ together with an arrow $\alpha \colon TA \to A$ such that the diagrams expressing

$$\alpha \circ \eta_A = \mathrm{id}_A \qquad \text{and} \qquad \alpha \circ T\alpha = \alpha \circ \mu_A$$

commute. We will write $(A, \alpha)$ for this $T$-algebra and occasionally also just $A$. The simplest example of a $T$-algebra is $(TA, \mu_A)$, the free $T$-algebra on $A$.
Given two $T$-algebras $(A, \alpha)$ and $(B, \beta)$, a $T$-algebra morphism is an arrow $h \colon A \to B$ such that the square expressing

$$h \circ \alpha = \beta \circ Th$$

commutes. This data defines a category $\mathcal{C}^T$, the Eilenberg-Moore category, of algebras of $T$ with their morphisms.

Our running example will be the monoid monad $T$ that arises as the composition $U \circ F$ of the adjoint functors $F \colon \mathbf{Set} \to \mathbf{Mon}$ and $U \colon \mathbf{Mon} \to \mathbf{Set}$, where $U$ is the forgetful functor and $F$ takes a set to the free monoid over it, with the usual action on maps. In other words, $TX = X^*$ is the set of finite words over $X$. This monad has as its unit $\eta_X \colon X \to X^*$ the embedding of elements from $X$ as words of length one and as its multiplication $\mu_X \colon (X^*)^* \to X^*$ the collapsing of finite words of finite words over $X$ to finite words over $X$. It is easily seen that the algebras for $T$ are simply monoids, and the $T$-algebra morphisms are monoid homomorphisms: The idea here is that the free monoid $X^*$ consists of all terms that make sense for a monoid and $\alpha \colon X^* \to X$ is the evaluation of these terms to elements of the actual monoid. The laws now simply ensure that this evaluation works as you would expect.
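A quick way to see the monoid monad at work is to model it with Python lists (a sketch of my own; the names are made up):

```python
# The monoid monad on Set, sketched with Python lists: T X = words over X.

def unit(x):
    # eta_X: X -> X*, an element becomes a word of length one
    return [x]

def mult(wss):
    # mu_X: (X*)* -> X*, flatten a word of words into a word
    return [x for ws in wss for x in ws]

def monoid_algebra(op, e):
    # A monoid (op, e) on a carrier yields a T-algebra alpha: X* -> X
    # that evaluates a word to an element of the monoid.
    def alpha(word):
        acc = e
        for x in word:
            acc = op(acc, x)
        return acc
    return alpha
```

For the additive monoid of integers, `monoid_algebra(lambda a, b: a + b, 0)` sends the word `[1, 2, 3]` to `6`, and the algebra laws amount to `alpha(unit(x)) == x` and `alpha(mult(wss)) == alpha([alpha(ws) for ws in wss])`.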

Recognizable Languages over Monads

Let $(B, \beta)$ be a $T$-algebra and $C$ any object of $\mathcal{C}$. An arrow $L \colon TX \to C$ (in $\mathcal{C}$) is recognized by a $T$-algebra morphism $h \colon (TX, \mu_X) \to (B, \beta)$ if $L = g \circ h$ for some arrow $g \colon B \to C$ in $\mathcal{C}$. Such an arrow is recognized by a $T$-algebra $(B, \beta)$, if there is a $T$-algebra morphism into it that recognizes it. Pictographically:

Note how this generalizes the notion of recognition for monoids: A subset $L$ of a monoid $N$ is recognized by a monoid $M$ if there exist a subset $F \subseteq M$ and a monoid morphism $h \colon N \to M$ such that $L = h^{-1}(F)$. In our definition above, this is the case of $C = \{0, 1\}$ and $g$ as the characteristic function of $F$. Any $\mathbf{Set}$-arrow $N \to \{0, 1\}$ then defines a subset of $N$, and vice-versa. So $L = g \circ h$ translates to: $w \in L$ iff $h(w) \in F$, therefore $L = h^{-1}(F)$.
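As a hypothetical mini-example of mine (not from the paper), the language of even-length words over {a, b} is recognized by the two-element monoid Z/2:

```python
# Recognition of L = {even-length words over {a, b}} by the monoid Z/2.
# h is a monoid morphism X* -> Z/2, F the accepting subset, g = chi_F.

def h(word):
    # each letter is sent to 1; concatenation becomes addition mod 2
    return len(word) % 2

F = {0}  # accepting subset of Z/2

def g(m):
    # characteristic function of F, i.e. the arrow B -> {0, 1}
    return 1 if m in F else 0

def L(word):
    # L = g . h, the characteristic function of the recognized language
    return g(h(word))
```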

Also, this definition readily implies that all such arrows are recognized by $(TX, \mu_X)$ itself – which is no problem, since usually we are interested in what arrows can be recognized by ‘finite’ $T$-algebras. Unfortunately, there is no obvious way to define what finite means for an arbitrary monad. We will presuppose that a sensible choice of finite has been made in the context we are considering. The recognizable arrows of $TX$ are then the arrows $TX \to C$ that are recognized by finite $T$-algebras.

Profinite Monads, Bojańczyk’s construction

I will first describe how Bojańczyk extends a monad $T$ on $\mathbf{Set}$ to its profinite version $\overline{T}$.

In the following, let $X$ be a set. A generalized element of $TX$ is a map $x$ associating to every surjective $T$-algebra morphism $h \colon (TX, \mu_X) \to (B, \beta)$ into a finite $T$-algebra an element $x_h \in B$, such that for every finite $T$-algebra $(B', \beta')$ and surjective $T$-algebra morphism $g \colon (B, \beta) \to (B', \beta')$ we have

$$x_{g \circ h} = g(x_h).$$

The set of all generalized elements of $TX$ is written $\overline{T}X$.

The comparison to the case of the free profinite monoid $\widehat{X^*}$ is illuminating: Seen as the completion of $X^*$ with the profinite metric, we see that elements of $\widehat{X^*}$ are either finite words or (equivalence classes of) non-convergent Cauchy sequences. One of the main points about the profinite completion of $X^*$ is that any monoid homomorphism $h \colon X^* \to M$, where $M$ is finite, can be uniquely extended to a uniformly continuous monoid homomorphism $\hat{h} \colon \widehat{X^*} \to M$. To define $\hat{h}$, we set $\hat{h}(w) = h(w)$ whenever $w$ is a finite word, and if $u$ is (the class of) a Cauchy sequence $(w_n)_n$, we set $\hat{h}(u) = \lim_n h(w_n)$, which must exist by the definition of the profinite metric. We can now turn this upside down and instead consider the elements of $\widehat{X^*}$ as acting on the morphisms, where each $u$ acts as $h \mapsto \hat{h}(u)$. In the context of Bojańczyk’s work, $\hat{h}(u)$ is written $u_h$, so $u$ is the generalized element defined by $u_h = \hat{h}(u)$.

The set $\overline{T}X$ can be equipped with a topology generated by the opens of the form $\{x \in \overline{T}X : x_h = b\}$ with $h \colon TX \to B$ a surjective morphism onto a finite $T$-algebra and $b \in B$, which makes it homeomorphic to the Stone dual of the Boolean algebra of recognizable languages of $TX$. Note that the topology of the Stone dual is generated by the opens $\{\mathcal{U} : L \in \mathcal{U}\}$ with $L$ a recognizable language. This suggests that the map sending a generalized element to the set of recognizable languages ‘containing’ it is the homeomorphism in question.

Bojańczyk instead calls my generalized elements of $TX$ ‘$T$-algebra morphism types over $X$’ and refers to $\overline{T}X$ as the compactification of $TX$. I find this nomenclature confusing, since ‘type’ has a rather different meaning for me and there are several ways to compactify $TX$ as a discrete space.

Now we can define the profinite monad $\overline{T}$ for $T$. It acts on sets via $X \mapsto \overline{T}X$. The operations for the unit, the multiplication, and the action on maps are completely fixed by requiring that, for each finite $T$-algebra $(B, \beta)$ and each surjective $T$-algebra morphism $h \colon TX \to B$, the following diagrams commute:

Since each element of $\overline{T}X$ is defined by its action on each such $h$ for each finite $T$-algebra, the diagrams actually define the operations.

Again I think it is helpful to see what this means in the familiar case of the free profinite monoid. The embedding of $TX$ into $\overline{T}X$ is straightforward and turns a word $w$ into the generalized element $h \mapsto h(w)$, or the principal ultrafilter of $w$, or the constant sequence $(w, w, \dots)$, depending on your point of view.
The lifting $\overline{T}f$ of a function $f \colon X \to Y$ makes use of the fact that, for every morphism $h$ into a finite monoid, $h \circ Tf$ is again a monoid homomorphism into a finite monoid that can be extended continuously; roughly, $\overline{T}f(x)$ acts on $h$ as $x$ acts on $h \circ Tf$. In terms of profinite words this means that $\overline{T}f$ is the continuous extension of $Tf$ to $\widehat{X^*}$.
The case of the multiplication is less obvious. The right-most diagram characterizes it through the actions on finite quotients, but I am still trying to find meaning in this one.

Variations on Voronoi

Lately I have returned to the awesome Processing library, which provides a very simple infrastructure for creating interactive experiments that produce nice images. I had not used it since version 2; the last time was in 2009, I think. It was mostly a need to do something different from my daily routine that made me return to Processing, since I have very fond memories of toying around with it. Working with Processing made me again painfully aware of the differences between Java and C#, but overall it was a pleasant experience leading to pictures like this here:

A picture that is definitely not a Voronoi diagram.

The program that I have been working on is inspired by Voronoi diagrams. Voronoi diagrams take as input a plane (say, the unit square) and a set of distinguished positions in that plane. For each point of the plane, the output at that point is the closest of the distinguished positions to that point. This information can then be used to color the plane, yielding a picture of colored cells, see Wikipedia. There are several ways to compute a Voronoi diagram, the one I decided to use probably does not count as a sensible method: Our plane is a picture (which means that it is subdivided into pixels) and we compute the diagram iteratively by repeatedly adding the pixels surrounding each distinguished point to its cell. Each pixel is given the color of its cell (the cell color has been predetermined in some way using the associated distinguished point) and after sufficiently many simulation steps, we get a finished diagram, like this one here:

Looks like a Voronoi diagram with a Manhattan metric.

One thing that might happen in this simulation is that a pixel is halfway between two distinguished points, in which case that pixel is added to whichever cell is simulated first in the update step of the simulation. Alternatively, we can simply mark that pixel as conflicted and color it black. Note also that I never specified what exactly surrounding pixels means in this context: The most sensible notion that comes to mind is that for any pixel, its neighbors are given by its left, right, top, and bottom neighbors. You could also add in the diagonals. Or do something entirely different, which will generate Voronoi-like diagrams for different metrics (check for yourself whether a measure of distance given by such a neighborhood actually induces a metric). This generates pictures such as this one:

Voronoi-like diagram with a strange distance measure, simulation is still running
Voronoi-like diagram with a strange distance measure
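The iterative cell-growing can be sketched in a few lines of Python (my reconstruction of the idea, not the original Processing code; it uses the 4-neighborhood and marks conflicted pixels with -1):

```python
# Grow Voronoi-like cells pixel by pixel. seeds maps a cell id to the
# (x, y) position of its distinguished point; the result maps each
# reached pixel to the id of its cell, or to -1 for conflicted pixels.

def grow_cells(width, height, seeds, steps):
    owner = {pos: cid for cid, pos in seeds.items()}
    frontier = {cid: {pos} for cid, pos in seeds.items()}
    for _ in range(steps):
        claims = {}  # pixel -> ids of all cells claiming it this step
        for cid, front in frontier.items():
            for x, y in front:
                for n in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if 0 <= n[0] < width and 0 <= n[1] < height and n not in owner:
                        claims.setdefault(n, set()).add(cid)
        frontier = {cid: set() for cid in frontier}
        for pix, cids in claims.items():
            if len(cids) == 1:
                cid = next(iter(cids))
                owner[pix] = cid
                frontier[cid].add(pix)
            else:
                owner[pix] = -1  # halfway between two cells: conflict
    return owner
```

On a 5x1 strip with seeds at the two ends, the middle pixel is claimed by both cells in the same step and ends up conflicted.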

Of course, there are several factors that can be changed to create interesting pictures:

  • the kind of neighborhood used to determine surrounding cells
  • the initial distribution of distinguished points
  • the initial colors of every cell
  • how we color pixels belonging to a cell (just the solid color? take the distance into account? do some crazy bit-arithmetic based on the current cell size?)
  • how we handle conflicts (always mark them as black borders? give the pixel to one of the conflict partners at random? etc.)
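As an example of the ‘take the distance into account’ option, a coloring rule might look like this (a made-up sketch, not code from my program):

```python
# Hypothetical coloring rule: shade a cell pixel by its distance to the
# cell's distinguished point, fading to black at max_dist.

def shade(base_rgb, pixel, seed, max_dist):
    dx, dy = pixel[0] - seed[0], pixel[1] - seed[1]
    d = min((dx * dx + dy * dy) ** 0.5, max_dist)
    k = 1.0 - d / max_dist  # 1.0 at the seed, 0.0 at max_dist and beyond
    return tuple(int(c * k) for c in base_rgb)
```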

To explore the kinds of pictures that arise in this way, I wrote a program to specify these parameters and tweak them during the simulation, which leads to rather interesting pictures of great diversity. Actually, I lied a bit: Before writing the program, I felt the need to write a simple custom GUI library for Processing, because it does not come with one and the ones I tried did some things that I didn’t like (this, of course, is not a good reason to write a GUI library – or any library at all – but it integrates neatly with the mindset of most programmers I know). Here are some of the pictures generated with the program, along with some detail shots at the pixel level (click the pictures to get a larger view). I am also currently experimenting with printing; you can buy prints here.

Note to myself – ImageMagick commands that you will use

Dear future me, here are some ImageMagick commands that you will end up using over and over again and have to look up repeatedly (unless I write them down now, in which case you will surely remember all of them):

  • Convert from one image format to another (PNG in this case):  mogrify -format png <files>
  • Resize pictures:  convert <input> -resize 25% <output>
  • Add label to the bottom right:  composite label:<text> -gravity southeast <input> <output>
  • Add a white border to the image:  convert -bordercolor White -border 10x10 <input> <output> 
  • Arrange images horizontally such that the spacing between images is just as large as the frame of the whole batch (20px, in this case):  cmd /c "montage preview-1.png preview-2.png preview-3.png -geometry 500X500+10+0 -tile x1 - | convert - -bordercolor White -border 10x20 preview.png" — note the use of CMD here, since Windows PowerShell screws up piping of binary data (“Hey, it’s a string, let’s make it Unicode!”)
  • Cut out the center in a square format:  convert <input> -set option:distort:viewport "%[fx: w>h ? h : w]x%[fx: w>h ? h : w]+%[fx: w>h ? (w - h)/2 : 0]+%[fx: w>h ? 0 : (h - w)/2]" -filter point -distort SRT 0 +repage <output>
  • Cut out the center in a 2/3 format:  convert <input> -set option:distort:viewport "%[fx: w>(3/2*h) ? 3/2*h : w]x%[fx: w>(3/2*h) ? h : 2/3*w]+%[fx: w>(3/2*h) ? (w - 3/2*h)/2 : 0]+%[fx: w>(3/2*h) ? 0 : (h - 2/3*w)/2]" -filter point -distort SRT 0 +repage <output>