What is Functional Programming?

Functional Programming (FP) revolves around the idea that functions should be used in a mathematical sense: an expression where inputs are transformed directly into outputs, and the return value of a function depends only on the values passed in as parameters. That is to say, if we have a function:

f(x) = y
("f of x equals y")

then passing the same argument (x) into a function (f) will always return the same value (y). This property of functional programming is known as referential transparency. As a consequence, the functional paradigm disallows side effects that could affect the result.
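
To make this concrete, here's a small Python sketch (the function names are purely illustrative, not from any particular library):

# square() is referentially transparent: the same input always produces
# the same output, and nothing outside the function is touched.
def square(x):
    return x * x

# counter() is NOT referentially transparent: its result depends on
# hidden state outside of its parameters.
count = 0
def counter():
    global count
    count += 1
    return count

square(4)   # 16, every single time
counter()   # 1
counter()   # 2 -- the same call, a different result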

Functional programming has its roots in the declarative paradigm, throwing imperative control-flow statements out the window in favor of recursion, pattern matching, and conditional (ternary) expressions to handle iteration and decision-making. Functional programming also focuses on treating data as immutable; once a variable is set, its value cannot be changed.

Characteristics of Functional Programming

The functional paradigm refers both to languages that implement functional programming in a "pure" way and to styles of programming that can be viewed as "functional approaches".

Many modern languages offer the ability to write in a multi-paradigm style, meaning that even if you don't intend to learn a functional language, knowledge of these concepts can offer a new tool to use in situations where they might be the more effective choice!

Immutable Variables

Immutable variables in functional programming can be seen similarly to const and final variables in non-functional languages; once an immutable variable is assigned a value, we can't mutate (change) that value again. This also means that the state of the program is immutable.
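
As a rough illustration in Python, a frozen dataclass behaves much like an immutable variable (the Point class here is just a made-up example):

from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
# p.x = 5  # would raise FrozenInstanceError -- the value can't be changed

# To "change" the point, we construct a new one instead:
moved = Point(p.x + 1, p.y)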

Immutable variables have some advantages over mutable variables that can make them more preferable in the right situations:

Immutable Variables are Thread Safe

Because there is no worry about the values of an immutable variable or object ever changing, it's almost automatically considered thread safe; that is, the program doesn't have to worry about other threads modifying the data at the same time it's being read. This is a big plus when it comes to parallel computing, which is much more desirable now that chip manufacturers are using more and more cores.

Immutable Variables can be Copied by Reference

Consider that you have a structure with a lot of data like this:

struct BigData {
    double u;  //  8 bytes
    double v;  // 16 bytes
    double w;  // 24 bytes
    double x;  // 32 bytes
    double y;  // 40 bytes
    double z;  // 48 bytes total!
};

When passing mutable data to functions, many non-functional languages have to copy the data into a new object (unless it is explicitly passed by reference) to prevent state from being modified where it shouldn't be. With an immutable variable, you can instead just pass a reference that points to the data rather than copying all of it.

The data structure above is 48 bytes, meaning that copying all the values when passing it as a parameter moves several times more data than passing a single pointer-sized reference (typically 4 or 8 bytes)!

Immutable State?

If you're coming to functional programming from an imperative background, immutable variables can be one of the hardest concepts to grasp; "how can a program do anything useful without modifying state?" The reality is that functional programs do have state; it just isn't the kind of state you're used to using.

The way the functional paradigm manages the state of the program is by passing objects to functions and having those functions return new objects containing the updated state.
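
A minimal sketch of this "state in, new state out" style in Python (the game-score example is made up for illustration):

def add_score(state, points):
    # Build and return a brand-new dict instead of mutating the old one.
    return {**state, "score": state["score"] + points}

state1 = {"player": "alice", "score": 0}
state2 = add_score(state1, 10)

state1["score"]  # still 0 -- the original state is untouched
state2["score"]  # 10 -- the "updated" state is a new object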

Pure Functions

In declarative programming, we defined a side effect to be mutating state that exists outside of a function, or even performing I/O operations. In functional programming, a pure function is a function that doesn't contain any side effects and whose return value depends only on its arguments. This is somewhat of a natural consequence of having immutable state.
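
For example, here's a pure function alongside two impure ones in Python (all of the names are illustrative):

def area(width, height):        # pure: depends only on its inputs
    return width * height

total = 0
def add_to_total(x):            # impure: mutates state outside the function
    global total
    total += x
    return total

def noisy_area(width, height):  # impure: performs I/O as a side effect
    print(f"computing {width} x {height}")
    return width * height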

First-Class Functions

A cool feature of the functional paradigm is the ability to pass functions around as values and to return functions as results. When a language supports this, its functions are said to be first-class; functions that accept other functions as arguments or return functions are also called higher-order functions.

Let's take a look at a use case for this. A popular design pattern in object-oriented programming is the command pattern (https://www.devmaking.com/learn/design-pattern/command-pattern/). The gist of this pattern is treating objects as actions that can be passed to other parts of a program. We can use first-class functions in almost an identical way. Let's see how we'd do it in Python:

def add(x, y):
    return x + y

def subtract(x, y):
    return x - y

# first class: returns a function
def getCalcFunction(operator):
    # keeping it simple for the demo's sake
    if operator == '+':
        return add
    else:
        return subtract

# first class: accepts a function
def calculate(fn, x, y):
    return fn(x, y)

# demo:
action = getCalcFunction('+')
result = calculate(action, 5, 6)  # 5 + 6, result = 11

Anonymous Functions

Functional programming also introduces another feature known as anonymous functions. Different languages have different syntax for these, but an anonymous function is just that: anonymous! Anonymous functions are like regular functions, except they don't have a name.

"But wait, how do we call a function without a name?"

The purpose of anonymous functions is to reduce clutter in your application and make code more concise. In Python, anonymous functions take the form of lambdas that look like this:

lambda arguments : expression

Let's look at our getCalcFunction() example again, but using lambdas instead of named functions:

def getCalcFunction(operator):
    if operator == '+':
        return lambda x, y: x + y
    else:
        return lambda x, y: x - y

def calculate(fn, x, y):
    return fn(x, y)

# demo:
action = getCalcFunction('+')
result = calculate(action, 5, 6) # 5 + 6, result = 11

Whether using lambdas is a better design choice for your application is for you to decide! It should be noted that a drawback of lambdas in Python is that they can only contain a single expression. However, this isn't true of all anonymous functions.

In JavaScript, you'll often find yourself writing callback functions that do something when a particular operation has finished. This is especially common when making HTTP requests or reacting to a button being clicked. JavaScript also supports anonymous functions, which come in two popular flavors: arrow functions and anonymous function expressions.

JavaScript Arrow functions:

(arguments) => { statements; }

JavaScript Anonymous functions:

function(arguments) { statements; }

> It's also valid to have no arguments: () => { statements; }

Using these syntaxes, let's look at how they can each be used to add an event to an HTML button:

// Assumes a button element exists somewhere on the page:
var myButton = document.querySelector("button");

// Named:
function buttonClicked(event) {
    console.log("Named function::My button was clicked!");
}
myButton.addEventListener("click", buttonClicked);

// Anonymous *Arrow function*:
myButton.addEventListener("click", (event) => {
    console.log("Anonymous arrow function::My button was clicked!");
});

// Anonymous *function*:
myButton.addEventListener("click", function(event) {
    console.log("Anonymous function::My button was clicked!");
});

Recursion

Recursion is the ability for a function to call itself. Since functional programming is a declarative paradigm without imperative control-flow statements, recursion takes the place of the loops found in imperative languages. In fact, any code that can be written using a loop can also be written using recursion, and vice versa!
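
As a quick illustration in Python (a toy example, not from the article), here's the same list sum written with a loop and with recursion:

def sum_loop(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

def sum_recursive(numbers):
    if not numbers:        # base case: an empty list sums to 0
        return 0
    return numbers[0] + sum_recursive(numbers[1:])

sum_loop([1, 2, 3, 4])       # 10
sum_recursive([1, 2, 3, 4])  # 10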

Some problems are simplified using recursion as opposed to loop structures, and we can see this when searching a binary search tree (https://www.devmaking.com/learn/data-structures/binary-search-tree/). Take a look at this example in C#:

class Node {
    public int value;
    public Node left;
    public Node right;
}

class BST {
    Node root;

    // ...

    public Node Search(int value) {
        return SearchHelper(this.root, value);
    }

    private Node SearchHelper(Node n, int value) {
        if (n == null) return null;
        if (value > n.value) return SearchHelper(n.right, value);
        if (value < n.value) return SearchHelper(n.left, value);
        return n;
    }
}

Recursion has the same pitfalls of infinity as while loops do: if you aren't careful to make sure each call gets closer to a terminating case, you can end up recurring forever, or until you get a stack overflow error. Generally speaking, modern compilers for imperative languages may try to unwrap recursive calls into loops. This is due somewhat to the fact that recursive solutions can be more expensive than their looping counterparts, and recursion that runs too deep risks errors on finite machines. However, there's a version of recursion (hey, that rhymes) with which you don't need to worry about busting the stack: tail recursion.

Tail Recursion

Tail recursion is an optimization that some functional languages perform to reuse the current stack frame for the recursive call instead of pushing a new one. This is advantageous because the program never has to walk back up a deep stack of frames once the recursion finishes.

However, not all recursive functions can be made tail recursive; the only way for a recursive function to be "tail optimized" is if the recursive call is the very last operation in the function:

// This won't be tail recursive:
public int factorial(int n) {
    if (n == 1) return 1;
    return n * factorial(n - 1);
}

// This will be tail recursive:
public int factorial(int n, int runningTotal = 1) {
    if (n <= 1) return runningTotal;
    return factorial(n - 1, n * runningTotal);
}

The key is that the very last operation must be the recursive call itself and nothing more. If the function still has work to do after the recursive call returns (such as multiplying by n), the call stack can't discard the current frame, because the values it holds are needed to compute the final result. In the case of the non-tail-recursive factorial, the multiplication 5*4*3*2*1 only happens as the calls unwind, whereas the tail-optimized version carries the running total with it through the recursive chain.
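
Here's the same accumulator idea sketched in Python, purely to illustrate the shape of a tail call; note that CPython does not actually perform tail-call optimization, so this gains nothing there in practice:

def factorial(n, running_total=1):
    if n <= 1:
        return running_total
    # All the multiplication happens *before* the recursive call, so the
    # call itself is the last thing this frame needs to do.
    return factorial(n - 1, n * running_total)

factorial(5)  # 120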

Pattern Matching

Lastly, functional programming offers a neat feature known as pattern matching, which allows objects to be "matched" to certain cases based on their types and properties. In a way, it's similar to a switch statement, except with much more expressiveness.

Let's take a look at a program written in Scala that features pattern matching:

def matchType(x: Any) = x match {
    case y: Int => "Integer!"
    case s: String => "String!"
    case b: Boolean => "Boolean!"
    case _ => "Other type"
}

matchType("hello world!") // "String!"
matchType(true) // "Boolean!"
matchType(1.05) // "Other type"

Pattern matching can come in handy when checking for types of classes that inherit from a base class, or even when matching values against criteria you're searching for.
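
As a rough equivalent of the Scala example above, Python 3.10+ offers structural pattern matching with the match statement (this sketch simply mirrors the same behavior):

def match_type(x):
    match x:
        case bool():   # checked before int(), since bool is a subclass of int
            return "Boolean!"
        case int():
            return "Integer!"
        case str():
            return "String!"
        case _:
            return "Other type"

match_type("hello world!")  # "String!"
match_type(True)            # "Boolean!"
match_type(1.05)            # "Other type"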

Wrap Up

Functional programming offers many features that may at first seem alien to programmers accustomed to the imperative style; however, many find that once they begin working in the functional style themselves, they come to enjoy these features and find more and more uses for them.

Functional programming is sometimes upheld by "purists" as the only way people should be programming; however, I disagree with them in the same way I disagree with OOP "purists". Concepts like anonymous functions and immutable data can and do help improve code clarity. Just the same, there are certain situations where modeling data as objects and using control structures makes the entire program easier to reason about and maintain.

Some of the most widely used languages, such as JavaScript, C#, and Python, are effective because they're multi-paradigm languages that offer a wide variety of programming styles to meet the wide variety of problems out there!