A whirlwind tour of the available, the usable, and the undesirable.
"Monads are just monoids in the category of endofunctors."
Main intent: demystify FP style without jargon and demo some nice techniques. We'll call "FP" the kinds of techniques you use when functions are ordinary types and lists (arrays) are baked in.
We'll look at our motivations for using FP style, and also point out where you can have too much of this good thing.
Remember in what follows that Guido is not our friend and Pythonistas are particular about the Pythonic.
Functions are first-class types in Python. That's what makes everything that follows work. It's true of most modern languages these days, but those of us who learned to program in earlier times will have used many languages where you can't assign a function to a variable or pass a function to another function.
You can make anonymous one-statement functions with the lambda keyword.
# fn is a perfectly respectable function
fn = lambda x,y: x*y
type(fn)
builtins.function
fn(2,4)
8
Otherwise, you can define named functions with def that return things with return.
Note that Python functions defined with def are not necessarily functions in the FP sense. They don't have to return anything and they can have side-effects. In FP-speak, having side-effects makes a function impure. Ewwww. Impure functions are yukky. Also, by definition, a function has a result -- it maps a thing to another thing.
Pure functions are easier to understand at a glance, don't cause debugging mysteries, and lend themselves to unit tests.
def pure_fn(x, y):
    z = x + y
    return z * 2
def impure_fn(l):
    print(l)
    l[0] = l[0] - 1
    # and we don't even return anything, how rude
mylist = [2]
impure_fn(mylist)
print(mylist)
[2]
[1]
# Functions are values and so...
a = pure_fn
a(5,6)
22
Functions defined with lambda and functions defined with def are both just plain Python functions. They just happen to have been defined with different syntax.
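A quick way to convince yourself of that: both kinds have the same type, and about the only cosmetic difference is that def gives the function a useful __name__ for tracebacks.

```python
# Both kinds are plain function objects of the same type
fn_lambda = lambda x, y: x * y

def fn_def(x, y):
    return x * y

print(type(fn_lambda) is type(fn_def))  # True
# ...but def records a useful name, where lambda does not
print(fn_lambda.__name__)  # <lambda>
print(fn_def.__name__)     # fn_def
```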
There are builtins for some common functional idioms, in the form of map and filter. These both return lazy iterators, not lists, which can be a bit annoying. Python 2 returned plain lists.
l = [5,3,5,6,1,2,9,3]
double_l = map(lambda x: x * 2, l)
double_l
<builtins.map at 0x7f569457d6d8>
# We have to cast the result to a list with list()
list(double_l)
[10, 6, 10, 12, 2, 4, 18, 6]
l
[5, 3, 5, 6, 1, 2, 9, 3]
l = [5,3,5,6,1,2,9,3]
list(filter(lambda x: x % 2 == 0, l))
[6, 2]
import functools
help(functools.reduce)
Help on built-in function reduce in module _functools:

reduce(...)
    reduce(function, sequence[, initial]) -> value

    Apply a function of two arguments cumulatively to the items of a sequence,
    from left to right, so as to reduce the sequence to a single value.
    For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
    ((((1+2)+3)+4)+5).  If initial is present, it is placed before the items
    of the sequence in the calculation, and serves as a default when the
    sequence is empty.
Up until Python 3, reduce was a builtin, which you might know as "fold". Guido killed reduce in Python 3, saying that list comprehensions had eliminated the need for it and it was always a wart. People who disagree can use the reduce provided by the functools module.
Here's a minimal version to show what it does and how. Don't use this implementation: it recurses from the right (like Haskell's foldr, where functools.reduce folds from the left) and it will blow up in various ways without extra code. The functools version is written in C, probably for speed and to avoid blowing up the stack. We'll talk about that later.
def reduce_(fn, seq, seed=None):
    """
    fn is a function that takes two arguments and returns one result.
    The initial seed is optional.
    If there's one thing left in the sequence and no seed, it's the result.
    If the sequence is empty, the seed is the result.
    Otherwise, recursively eat up the sequence.
    """
    if len(seq) == 1 and seed is None:
        return seq[0]
    elif len(seq) == 0:
        return seed
    return fn(seq[0], reduce_(fn, seq[1:], seed))
def tish(s1, s2):
    return s1 + " TISH " + s2
# 2! 4! 6! 8! EVERYONE INTERCALATE!
reduce_(tish, ["BOOM", "BOOM", "BOOM", "BOOM"])
'BOOM TISH BOOM TISH BOOM TISH BOOM'
reduce_(lambda x, y: x+y, [1, 2, 3, 4, 5], 10)
25
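For what it's worth, many everyday reduce calls do read just as well as a builtin or a comprehension, which is roughly Guido's point:

```python
import functools

nums = [1, 2, 3, 4, 5]

# The reduce spelling...
total = functools.reduce(lambda x, y: x + y, nums, 10)
# ...versus the builtin that covers this common case directly
total2 = 10 + sum(nums)

print(total, total2)  # 25 25
```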
List comprehensions. I like to think of a comprehension as a way to do a thing with some items if they pass a test. This isn't exactly common FP style, but it has no side-effects per se, and it's borrowed from Haskell, so it kind of smells like FP.
fruits = ["apple", "banana", "avocado"]
# cf Haskell
# import Data.Char
# [map toUpper fruit | fruit <- ["apple", "banana", "avocado"], head fruit == 'a']
[fruit.upper() for fruit in fruits if fruit.startswith("a")]
['APPLE', 'AVOCADO']
Because functions are first class types, we can make functions that make functions.
def adder_factory(i):
    k = 30
    def adder(j):
        # i, j and k are "open" variables in adder's scope
        return i + j + k
    return adder
add5 = adder_factory(5)
adder_factory = None
# Mysteriously, i, j and k must still be around somehow. Those bindings were closed over.
add5(10)
45
Did I mention Python has closures? A closure is what happens when a function keeps references to things that were in its environment when it was created, even if the original scope they were from has disappeared.
Finally, there is some nice syntactic sugar for functions that make other functions, called decorators.
def print_args(fn):
    """
    We're going to make a new function called "wrapped".
    wrapped will get called whenever the function decorated with print_args is called.
    """
    def wrapped(*args):
        print("Called", fn.__name__, "with:", args)
        result = fn(*args)
        print("resulting in:", result)
        return result
    return wrapped
@print_args
def add_nums(i, j):
    return i + j
add_nums(4,5)
Called add_nums with: (4, 5)
resulting in: 9
9
This "@" syntax is just a nice way of tidying up a function that makes another function. It's pretty much the same as this:
def add_nums2(i, j):
    return i + j

add_nums2 = print_args(add_nums2)
add_nums2(4,5)
Called add_nums2 with: (4, 5)
resulting in: 9
9
Note that this is a naive decorator example -- in practice, you should do things like tidy up the name of the wrapped function to help with debugging. There are whole modules of helpers for this. Python people love using decorators but are often scared of writing them, for some reason.
Also, there's no need to do logging like this, but it seems to be the classic example for this kind of thing.
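For the record, the standard tidy-up helper is functools.wraps, which copies the decorated function's name and docstring onto the wrapper so tracebacks and help() stay sane:

```python
import functools

def print_args_politely(fn):
    @functools.wraps(fn)  # copies fn.__name__, __doc__ etc. onto wrapped
    def wrapped(*args):
        print("Called", fn.__name__, "with:", args)
        return fn(*args)
    return wrapped

@print_args_politely
def add_nums(i, j):
    return i + j

# Without wraps this would report "wrapped"
print(add_nums.__name__)  # add_nums
```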
Anyway, if you've ever used decorators, you were doing functional programming all along!
import itertools
import functools
import operator
list(itertools.accumulate([1, 2, 3, 4], operator.add))
#help(itertools.accumulate)
[1, 3, 6, 10]
itertools mostly contains functions for working with iterators and sequences -- the kind of thing you might find in the Haskell Prelude. It also includes accumulate(), which works like Haskell's scanl1.
functools has helpers to set up partial application, write generic functions more easily, and make wrapped functions behave better. Also includes reduce(), which you may recognise as fold.
operator provides functions that are the equivalent of Python's builtin operators. Unlike in, say, Haskell, Python's operators are not just functions with sugar, so you need something like this to work with them effectively in a functional style.
import functools
import operator
functools.reduce(operator.mul, [1,2,3,4])
24
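functools.partial, mentioned above, pre-fills arguments to make new functions, which often reads better than writing a lambda for the same job:

```python
import functools
import operator

# double is operator.mul with its first argument pre-filled
double = functools.partial(operator.mul, 2)

print(list(map(double, [1, 2, 3, 4])))  # [2, 4, 6, 8]
```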
The Python list class has sort and reverse methods that seem like the built-in functions sorted and reversed. However, they do not return a transformed copy: they modify the list in place and return None. Every time I've been away from Python for a while, this trips me up.
l = [3, 2, 5, 1]
sorted(l)
[1, 2, 3, 5]
l
[3, 2, 5, 1]
l.sort()
l
[1, 2, 3, 5]
l.sort()
print(list(reversed(l))) # reversed returns an iterator
[5, 3, 2, 1]
print(l)
[1, 2, 3, 5]
l.reverse()
print(l)
[5, 3, 2, 1]
map(function, iterable, ...)
Return an iterator that applies function to every item of iterable, yielding the results. If additional iterable arguments are passed, function must take that many arguments and is applied to the items from all iterables in parallel. With multiple iterables, the iterator stops when the shortest iterable is exhausted. For cases where the function inputs are already arranged into argument tuples, see itertools.starmap().
I'm sorry, this is just weird. Or at least unlike the way most FP languages deal with map and functions that take more than one argument.
list(map(lambda tup: tup[0] * tup[1], [(2, 3), (4, 4)]))
[6, 16]
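If your arguments live in separate sequences rather than packed into tuples, multi-iterable map is exactly what you want; and for the tuple case, itertools.starmap (which the docs point to) saves you the indexing lambda:

```python
import itertools

# One iterable per argument: map lines them up for you
print(list(map(lambda x, y: x * y, [2, 4], [3, 4])))  # [6, 16]

# Arguments already packed into tuples: starmap unpacks them
print(list(itertools.starmap(lambda x, y: x * y, [(2, 3), (4, 4)])))  # [6, 16]
```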
Late binding can be a pain.
# This example and subsequent para stolen from Brian Chikilian
def create_multipliers():
    return [lambda x: i * x for i in range(5)]

for multiplier in create_multipliers():
    print(multiplier(2))
8
8
8
8
8
This happens due to Python’s late binding behavior which says that the values of variables used in closures are looked up at the time the inner function is called. So in the above code, whenever any of the returned functions are called, the value of i is looked up in the surrounding scope at the time it is called (and by then, the loop has completed, so i has already been assigned its final value of 4).
The solution to this common Python problem is a bit of a hack.
def create_multipliers():
    return [lambda x, i=i: i * x for i in range(5)]  # note the i=i in the lambda parameters

for multiplier in create_multipliers():
    print(multiplier(2))
0
2
4
6
8
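If the default-argument trick offends you, functools.partial achieves the same early binding without touching the lambda's signature, since partial captures the current value of i at loop time:

```python
import functools
import operator

def create_multipliers():
    # partial binds the value of i immediately, not when the function is called
    return [functools.partial(operator.mul, i) for i in range(5)]

print([m(2) for m in create_multipliers()])  # [0, 2, 4, 6, 8]
```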
Composition is a very powerful technique for building up pipelines of transformations, expressing things concisely, and making sure nothing stomps on your intermediate variables.
When you compose functions, they need to have compatible types. Python doesn't hold your hand a whole lot here, which is why this is best used sparingly.
import functools
def _compose_helper(fn1, fn2):
    # Make a new function that takes the args,
    # calls fn2 with them
    # and then passes the result to fn1
    def _fn(*args):
        return fn1(fn2(*args))
    return _fn

def compose(*fns):
    # Glom all the functions in fns together
    # by repeatedly applying _compose_helper
    return functools.reduce(_compose_helper, fns)
# Side note on readability: if we were steeped in FPness, we might just write:
compose2 = lambda *fns: functools.reduce(lambda f1, f2: lambda *args: f1(f2(*args)), fns)
# Resist that urge if you want to have friends.
def centre_wide(s):
    return s.center(80)

def upper(s):
    return s.upper()

def word_count(s):
    return s + "(%d)" % len(s.split())

def no_pants(s):
    return s.replace("trousers", "****")

lines = ["In the beginning",
         "there was stuff like trousers",
         "but we hated wearing trousers because trousers suck"]

star_wars = compose2(upper,
                     word_count,
                     centre_wide,
                     no_pants,
                     )
print("\n".join([star_wars(s) for s in lines]))
                                IN THE BEGINNING                                (3)
                           THERE WAS STUFF LIKE ****                            (5)
                  BUT WE HATED WEARING **** BECAUSE **** SUCK                   (8)
From a talk I'm working on:
compose2 = lambda *fns: functools.reduce(lambda f1, f2: lambda *args: f1(f2(*args)), fns)
So ashamed.
— Mr Salteena (@saniac) June 29, 2015
@saniac The fuck.
— Rob Isaac (@rmi) June 29, 2015
However, if you want your co-workers to hate you, you can do what one guy did, and abuse Python's ability to override operators to allow writing code in an infix, pipeline style:
Pipe is a Python module enabling infix syntax in Python. For those asking "Why?" let's take an example:

Compare the readability of the classical prefix syntax:

sum(select(where(take_while(fib(), lambda x: x < 1000000), lambda x: x % 2), lambda x: x * x))

And the infix syntax:
fib() | take_while(lambda x: x < 1000000) \
      | where(lambda x: x % 2) \
      | select(lambda x: x * x) \
      | sum()
Isn't the infix syntax more readable?
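The trick behind that style is overriding __ror__, which Python calls on the right-hand operand of | when the left-hand one doesn't know what to do. Here's a minimal sketch of the idea (a toy, not the real Pipe module):

```python
class Pipe:
    """Wrap a function so `value | pipe` calls it -- a toy, not the real module."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, other):
        # `other | self` dispatches here when `other` has no matching __or__
        return self.fn(other)

    def __call__(self, *args):
        # Calling the pipe with extra args returns a new Pipe with them pre-bound
        return Pipe(lambda x: self.fn(x, *args))

where = Pipe(lambda seq, pred: [x for x in seq if pred(x)])
select = Pipe(lambda seq, fn: [fn(x) for x in seq])

result = range(10) | where(lambda x: x % 2) | select(lambda x: x * x)
print(result)  # [1, 9, 25, 49, 81]
```

Whether this counts as delightful or as a crime against your code reviewers is left as an exercise.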