I have been striving to strike a balance between the need to accomplish real work with functional programming and the desire to grasp the mathematics behind it. I am not saying that those who understand the mathematics of functional programming well don’t get real work done. Obviously they do. It’s just that I think that I, and perhaps you, don’t need to know the mathematics in as much advanced detail as they do in order to get real work done, too.

In my conversations with “a certain individual” I have reached the point where I find the hair splitting of theory upon theories very unproductive. The overarching theme has become, “you can’t understand functional programming until you first understand the math behind it. Go learn the math first, and then return to functional programming.” To that I have one word: Bunk.

It’s not that I don’t respect the work of the legends of mathematicians and their dedication to getting things right. Of course I do. As for me, I am almost certain that embracing those theories at a deep level is not going to make me a better programmer using the functional approach. And I am estimating that if you don’t already know mathematics very well, you probably won’t need to. I will establish this, but I have to break down my thoughts. So, here goes.

## The Breach

Previously I was told that my points of reasoning about mathematics and functional programming are invalid. Here is the feedback that I received:

- The fact that I see a set as a collection of objects, especially if I consider the objects to be *held in* the set, as in contained, is wrong; I should abandon my reasoning about sets in general, because Set Theory has been proven wrong mathematically
- I cannot use sets, even the Category Theory’s Category of Sets, as a means to reason about functional programming, because actually only Type Theory can be accurately used to do that

As I said before, I completely respect that the lessons learned in mathematics over the past 115 years or so are extremely important to mathematics and hence, to mathematicians. Yet, in my previous discussions with my dad, I got some very practical and trustworthy advice. I expressed to my dad how my attempts to visualize the mathematical concepts were considered incorrect and even harmful to my ultimate ability to grasp functional programming. My father told me in no uncertain terms that I must visualize the mathematical concepts in any way that makes them understandable to me. Since that’s what I was already doing before he reaffirmed my direction, I will continue. I don’t think that doing so in any way disrespects the mathematics. It’s simple: you must start somewhere.

Try to explain the taste of a pomegranate to someone who has never tasted a pomegranate. You may think that the only way to describe that taste is to say, “it just tastes like a pomegranate and if you don’t eat a pomegranate you will never understand.” Although you would be technically correct, it’s not very helpful to someone who won’t be able to do that any time in the near future. To be helpful, you may instead decide to describe pomegranate taste in this way:

- Only the seeds taste good, not the fruit itself, which is very bitter
- The seeds have a tangy or even sour taste, and have a pulpiness about them
- If you know what a cherry tastes like, it might remind you of a cherry, but it’s not quite as sweet

You know what? Describing a pomegranate in that way won’t replace the person’s need to eventually taste a pomegranate to achieve full understanding, but it gives them some comparisons to go by. You are building on knowledge that they almost certainly already have. Although the person has never tasted a pomegranate, the absolute mystery of pomegranate taste has vanished.

## Being Wrong Isn’t So Wrong

I now address the points, one-by-one, concerning my “invalid reasonings about functional programming”:

My visualization of sets is accurate according to the definition of a set that I read in a Calculus book:

A set is a collection of objects—not necessarily material objects—described in such a way that we have no doubt as to whether a particular object does or does not belong to it. A set may be described by listing its elements.

I must say, that is exactly or very close to how I previously and currently visualize and reason about a set. And although I can reason about a set in terms of listing its elements (its contents), I have been told that I can’t do that because sets don’t contain objects. So I dug into this objection to my thinking a bit deeper. I asked my wife (who also knows math quite well) how she thinks of a set. She did not describe a mathematical set as having contents, but described it more in terms of rules, e.g. “a set of clothing that can be worn together, which includes a top, trousers or a skirt, shoes, and some complementing apparel.” I asked if she saw those pieces of clothing as being placed into a container or in a specific location in her closet. She said, “no.” This, too, is accurate according to the same Calculus book:

We can also describe a set by a rule, for example, all the even positive integers (this set contains an infinite number of objects). Another set defined by a rule is the set of all planets in our solar system.

Well, look there, “contains” in a Calculus book. And who says that sets are immutable? The set of all planets in our solar system changed in the past few years when Pluto was removed. Well, just kidding. The set without Pluto is actually a different set, specified by filtering any dwarf planets out of the older collection, ummm, set. The old and new sets may be named, by rule:

- “Planets in our solar system between 1930 and 2006” and this set contains nine objects
- “Planets in our solar system since 2006” and this set contains eight objects, and the newer rule that filters dwarf planets is a function passed as an argument to the filter function
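The rule-as-a-function idea can be sketched directly in code. Here is a minimal Python sketch; the planet names are real, but identifiers such as `planets_1930_2006` and `is_not_dwarf` are my own illustrative choices:

```python
# The 1930–2006 set, described by listing its elements.
planets_1930_2006 = {
    "Mercury", "Venus", "Earth", "Mars", "Jupiter",
    "Saturn", "Uranus", "Neptune", "Pluto",
}

dwarf_planets = {"Pluto"}

def is_not_dwarf(planet):
    """The newer rule: a predicate we can hand to filter()."""
    return planet not in dwarf_planets

# The post-2006 set, described by applying the rule to the old set.
planets_since_2006 = set(filter(is_not_dwarf, planets_1930_2006))

print(len(planets_1930_2006))   # 9
print(len(planets_since_2006))  # 8
```

Notice that the newer set is produced exactly as described above: the rule is a function passed as an argument to the filter function.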

Ok, back on point. I think that my wife could still clearly list the elements of each clothing set, but she doesn’t need to place them into a specific container in order to prove that the elements are in a given set.

That’s useful. Still, I think that you and I can now better reason about sets by actually *loosening* our definition a bit. Notice that I said “loosening” rather than “tightening,” because I don’t consider it wrong to think of a set as actually containing objects. Apparently neither do the mathematician authors of the Calculus book. In fact, one rule clause of a given set could be that the objects must be in a constrained location. It’s just an additional rule clause, not a wrong visualization. The planets in our solar system, whether eight or nine, have very definite rules that indicate the constraints on their location. Thus, we might choose to think about a set of objects being in a container, but that’s not necessarily the only way to do so. We can think of sets instead in terms of the rules that they fulfill.
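To make the “set defined by a rule, not a container” idea concrete, here is a small Python sketch using the Calculus book’s example of the even positive integers; representing a set as nothing but a membership predicate is my own illustrative choice, not a claim about how any library models sets:

```python
def even_positive_integers(n):
    """Membership test for the (infinite) set of even positive integers.
    There is no container here: the set exists only as its rule."""
    return isinstance(n, int) and n > 0 and n % 2 == 0

# We can decide membership without ever materializing the set,
# which is just as well, since this set is infinite.
print(even_positive_integers(4))   # True
print(even_positive_integers(7))   # False
print(even_positive_integers(-2))  # False
```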

Alright, I justified a particular visualization and ways to reason about sets. You may disagree. Others will certainly, as I do, find it useful, and better yet, we aren’t (very) wrong. Well, none of us are (very) wrong as long as sets are even relevant. Are sets, and Set Theory itself, still valid at all?

## There Are Two Kinds of Programmers

Thinking in the way “a certain individual” thinks, I would say the following reflects one world view:

There are two kinds of programmers: those who consider Type Theory the only way to understand functional programming, and those who are wrong.

This is the path that I have gone down due to this way of thinking:

- Set Theory was proven mathematically wrong more than 100 years ago, so stop thinking in sets
- Use Category Theory instead
- So I look into the Category of Sets in Category Theory and discover that the sets described there are basically the same thing as those described in Set Theory (for all practical purposes)
- I reason on sets as stated in Category Theory, which doesn’t change my viewpoint much at all
- I am now told that I can’t use Category Theory, because Category Theory doesn’t cover some uses of functional programming
- I think to myself, “could every other functional programmer on the planet be wrong in what they say about using Category Theory, or is ‘a certain individual’ wrong?”
- I ask for an example of why that is so, and it takes quite a lot of churn to get a clear answer, but one is produced

Here is one classical reason why Set Theory is considered to be wrong:

According to naive set theory, any definable collection is a set. Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves. This contradiction is [Bertrand] Russell’s paradox.

In simple terms this is roughly saying that if you have one set for every individual galaxy in the physical universe, and the universe is the set of all those galaxies, by definition of set, the universe set must contain itself. Since a set cannot logically contain itself, which at the universal level sounds to me like the ultimate stack overflow situation, then Set Theory is considered wrong.
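As a loose programming-level analogy (and only an analogy, not the mathematics itself), Python’s built-in set type refuses self-membership outright: a mutable set is unhashable, so a set can never be made a member of itself, and the “ultimate stack overflow” simply cannot be constructed:

```python
s = set()
try:
    s.add(s)  # attempt to make the set a member of itself
except TypeError as err:
    print("rejected:", err)  # rejected: unhashable type: 'set'
```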

That’s fine and all, but that’s not how I need to use sets. I am never looking at sets from the standpoint that I have a full universe of sets and I need to contain them all, and then contain the container. Nope, that’s not my problem space.

Another problem seems to be large cardinals. (No, not the birds, although very large cardinals as in birds could indeed be a big problem. For one, my family may no longer be able to afford the seed to feed them through the Arizona winters.) This concerns the axioms of infinity: large cardinal axioms describe infinities so large that standard Set Theory can neither prove nor refute their existence. Evidently, if you actually try to reach that far toward infinity, Set Theory runs into trouble.

Again, that’s fine and all, but think about the physical universe once more. There are not enough atoms in the physical universe to express the numbers to infinity, not even in any combination of atoms and all their protons and neutrons and all that stuff. I don’t actually need to count to infinity, ever. If I use one long, a 64-bit integer, to count in one direction, it will take a very, very long time to overflow that value into the negative. For example, if I increment that long integer 1 million times per second, it will take something like 300,000 years to cause its overflow.
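The back-of-the-envelope arithmetic above is easy to check; a quick Python sketch, assuming one increment per microsecond and a 365-day year:

```python
max_long = 2**63 - 1  # largest signed 64-bit integer value
increments_per_second = 1_000_000
seconds_per_year = 60 * 60 * 24 * 365

years_to_overflow = max_long / (increments_per_second * seconds_per_year)
print(round(years_to_overflow))  # about 292,000 years, close to the figure above
```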

So let’s be reasonable, solving universally huge mathematics problems isn’t our focus right now. Although theorists may need to think at that level, realistically we need to deploy much less buggy and much more correctly designed software, and functional programming can help with that. Further, google the interwebs for contemporary use of Set Theory. It is still widely used for practical problem solving. I think it’s probably ok for me and many other people to reason about functional programming by means of sets.

Finally we reach the point at which Category Theory is wrong, at least when I try to use the Category of Sets to reason about the mathematics of functional programming:

In the mathematical field of category theory, the category of sets, denoted as Set, is the category whose objects are sets. The arrows or morphisms between sets A and B are all triples (f, A, B) where f is a function from A to B.

Thus, we can write the following, which is probably now somewhat familiar to those who have been trying to grasp functional programming:

```
f: A -> B
```

When calling function f by passing the parameter of type A, this function/morphism will map the A to a result of type B. Cool, and just to make the point explicit, the type B may be the same type as type A. In other words, the actual result from calling f on an integer may be another integer. Thus, f isn’t required to return a completely different type.
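A quick sketch in Python’s type-hint notation makes both cases visible; `parse` and `increment` are my own illustrative names, not anything from the original:

```python
def parse(s: str) -> int:
    """f: A -> B, where A and B are different types (str -> int)."""
    return int(s)

def increment(i: int) -> int:
    """f: A -> B, where B happens to be the same type as A (int -> int)."""
    return i + 1

print(parse("41"))    # 41
print(increment(41))  # 42
```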

Now, and also to be explicit, this is not far from the classical definition of set in Set Theory, and that’s why it is very easy for me to drop Set Theory in favor of Category Theory. It is accurate to say that in the Category of Sets, the morphisms are ordinary functions. However, in Category Theory a morphism need not be limited to operating on objects that are in the Category of Sets, because there are other categories. The following is a simple example of a morphism using the C-like syntax of my made-up functional programming language. It’s a morphism that works on integers, not sets.

```
int increment(int i) { return i + 1; }
```

The increment function takes an integer as a parameter, and returns an integer that is the value of the parameter plus one. I suppose you could say that increment morphs 1 to 2, and 2 to 3, and so on. But that’s not the “problem” with reasoning about Set Theory, and Category Theory for that matter, when considering functional programming. As I understand it, the “problem” is introduced by the following implementation of the increment function:

```
int increment(int i) {
    print("increment: " + i + " to: " + (i + 1));
    return i + 1;
}
```

The “problem” is that you cannot *mathematically prove* that the above function/morphism works, unless you apply Type Theory. Now, it’s not like I have a problem with Type Theory, but this all smells to me like a moving target. Like me, you are probably thinking that the declaration for this morphism is:

```
increment: int -> int
```

As in, increment takes an int and returns an int. But mathematically, using Type Theory, this particular implementation of the increment function is actually formally defined as follows:

```
increment: int -> (printIO, int)
```

The returned result, or the morphing of the int parameter, is the tuple of printIO and int. In some languages the tuple result of the above is probably (unit, int) rather than (printIO, int), but I digress. With the “problem” established, I will make just a few observations as I see this:

- I am not working in mathematical theory, or even writing a functional language compiler, and thus I don’t need to mathematically prove that this code works
- I can test this code and observe its output and perceive that it works to the extent that I require it to solve some problem
- Ultimately to me and any client of the increment function, the outcome of this function is the return value of type int, not the tuple of printIO and int
- Any side-effect is in essence unknown to the client; only the return value is known
- Writing a functional solution as seen in the above code is in most cases bad design

I need to address the last point further. Unless the print expression is just debugging code, in which case we don’t need to reason about its long-term impact on the definition of the function, I would never place code like that in that kind of function. Sure, it’s just a hypothetical example. Even so, any such side-effecting outcomes should be placed in explicitly designed functions, not hidden inside unrelated behavior. Whether you are printing, writing to a file, playing a video, or sending an email, put those expressions in well-defined functions, and probably ones that only perform side effects. This will not only make the outcomes explicit, it will make the vast majority of your non-side-effecting functional code easier to reason about. What I mean by this is that the following will be true and valid in all but a few functions:

```
f: A -> B
```
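The separation I am advocating can be sketched in Python (rather than my made-up language): a pure core function, plus one small function whose only job is the side effect. The names are illustrative:

```python
def increment(i: int) -> int:
    """Pure: no side effects, trivially f: int -> int."""
    return i + 1

def log_increment(i: int) -> None:
    """Effect-only: exists solely to perform the print."""
    print(f"increment: {i} to: {increment(i)}")

# The pure function stays easy to reason about and to test...
assert increment(2) == 3
# ...while the side effect lives in one explicit, well-named place.
log_increment(2)  # prints: increment: 2 to: 3
```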

You may be wondering where the sets disappeared to in this discussion. Well, they didn’t disappear. For example, you might consider the increment of int to int to be a set of one integer being morphed to a set of another integer. Yet, to make the point about sets or collections of any kind, the following is a more explicit example:

```
set[int] increment(set[int] values) { return values.map { v -> return v + 1; } }
```

Which when expressed in terms of types and arrows/morphisms, is:

```
increment: set[int] -> set[int]
```
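For comparison, here is the same shape sketched in Python, since the snippet above is in my made-up language; a set of ints maps to a set of ints:

```python
def increment_all(values: set) -> set:
    """increment: set[int] -> set[int]"""
    return {v + 1 for v in values}

# sorted() just gives a stable order for display.
print(sorted(increment_all({1, 2, 3})))  # [1+1, 2+1, 3+1] -> [2, 3, 4]
```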

I don’t see the point of overcomplicating functional programming for the sake of bandwagoning on the side of mathematics. If someone tries to convince you that “you can’t get there from here, you have to go someplace else first,” don’t believe them. Respect the mathematics and understand it to the degree that enables you to get your programming work done with improved results. If learning deeper mathematical theory helps you improve even more, then do it. You will embark on that journey with the confidence gained from originally learning functional programming with less mathematical prowess.

In my experience, the *types* of programmers are better described as follows:

There are two types of programmers: those who are practical, reasonable, and pragmatic, and those that are rigid and dogmatic. Depending on your polarization toward either of these two types, one or the other is wrong.

In case you didn’t notice, that expresses the rules that define two sets 🙂

How you choose is your business. Personally I don’t have time for endless debates about matters that don’t amount to anything useful in the real world. I am guessing that there are many people like me who don’t want to stop everything, learn very advanced mathematics, and then finally try their hand at functional programming. While I will be true to the math to the extent that I possibly can be at this time, I’d simply like many others to make use of functional programming because it has a lot of good to offer. Based on my desire to help others accomplish that, I will continue writing from my practical perspective.