Note: Click on "Kernel" > "Restart Kernel and Clear All Outputs" in JupyterLab before reading this notebook to reset its output. If you cannot run this file on your machine, you may want to open it in the cloud .
So far, we have only used what we refer to as core Python in this book. By this, we mean all the syntactical rules as specified in the language reference and a minimal set of about 50 built-in functions
. With this, we could already implement any algorithm or business logic we can think of!
However, after our first couple of programs, we would already start seeing recurring patterns in the code we write. In other words, we would constantly be "reinventing the wheel" in each new project.
Would it not be smarter to pull out the reusable components from our programs and put them into some project-independent library of generically useful functionalities? Then we would only need a way of including these utilities in our projects.
As all programmers across all languages face this very same issue, most programming languages come with a so-called standard library that provides utilities to accomplish everyday tasks without much code. Examples include making an HTTP request to some website, opening and reading popular file types (e.g., CSV or Excel files), working with a computer's file system, and many more.
Python also comes with a standard library that is organized into coherent modules and packages by topic: A module is just a plain text file with the file extension .py that contains Python code, while a package is a folder that groups several related modules.
The code in the standard library is contributed and maintained by many volunteers around the world. In contrast to so-called "third-party" packages (cf., the next section below), the Python core development team closely monitors and tests the code in the standard library. Consequently, we can be reasonably sure that anything provided by it works correctly independent of our computer's operating system and will most likely also be there in the next Python versions. Parts of the standard library that are computationally expensive are often rewritten in C and are, therefore, much faster than anything we could write in Python ourselves. So, whenever we can solve a problem with the help of the standard library, doing so is almost always the best approach.
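As a quick taste of what that looks like in practice, here is a minimal sketch of reading a CSV file with the csv module from the standard library. The file name data.csv is a made-up example and assumed to exist in the working directory.

import csv

# "data.csv" is a hypothetical example file; adapt the name to an actual file
file = open("data.csv", newline="")
rows = list(csv.reader(file))  # each row becomes a list of strings
file.close()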
The standard library has grown very big over the years, and we refer to the website PYMOTW (i.e., "Python Module of the Week") that features well-written introductory tutorials and how-to guides for most parts of the library. The same author also published a book that many Pythonistas keep on their shelf for reference. Knowing what is in the standard library
is quite valuable for solving real-world tasks quickly.
Throughout this book, we look at many modules and packages from the standard library in more depth, starting with the math
and random
modules in this chapter.
import math
This creates the variable math
that references a module object (i.e., type
module
) in memory.
math
<module 'math' from '/usr/lib64/python3.12/lib-dynload/math.cpython-312-x86_64-linux-gnu.so'>
id(math)
140177537558144
type(math)
module
module
objects serve as namespaces to organize the names inside a module. In this context, a namespace is nothing but a prefix that avoids collisions with the variables already defined at the location into which we import the module.
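For instance, here is a minimal sketch: defining our own variable named pi in the notebook does not overwrite the module's constant, because the latter is only reachable via the math. prefix.

pi = 3  # our own, rather crude, variable named pi

math.pi  # still 3.141592653589793, as the module's pi lives in the math namespace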
Let's see what we can do with the math
module.
The dir() built-in function may also be used with an argument passed in. Ignoring the dunder-style names,
math
offers quite a lot of names. As we cannot know at this point if a listed name refers to a function or an ordinary variable, we use the more generic term attribute to mean either one of them.
dir(math)
['__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'cbrt', 'ceil', 'comb', 'copysign', 'cos', 'cosh', 'degrees', 'dist', 'e', 'erf', 'erfc', 'exp', 'exp2', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'gcd', 'hypot', 'inf', 'isclose', 'isfinite', 'isinf', 'isnan', 'isqrt', 'lcm', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'log2', 'modf', 'nan', 'nextafter', 'perm', 'pi', 'pow', 'prod', 'radians', 'remainder', 'sin', 'sinh', 'sqrt', 'sumprod', 'tan', 'tanh', 'tau', 'trunc', 'ulp']
Common mathematical constants and functions are now available via the dot operator .
on the math
object. This operator is sometimes also called the attribute access operator, in line with the just-introduced term.
math.pi
3.141592653589793
math.e
2.718281828459045
math.sqrt
<function math.sqrt(x, /)>
help(math.sqrt)
Help on built-in function sqrt in module math: sqrt(x, /) Return the square root of x.
math.sqrt(2)
1.4142135623730951
Observe how the arguments passed to functions do not need to be just variables or simple literals. Instead, we may pass in any expression that evaluates to a new object of the type the function expects.
So just as a reminder from the expression vs. statement discussion in Chapter 1 : An expression is any syntactically correct combination of variables and literals with operators. And the call operator
()
is yet another operator. So both of the next two code cells are just expressions! They have no permanent side effects in memory. We may execute them as often as we want without changing the state of the program (i.e., this Jupyter notebook).
So, regarding the very next cell in particular: Although the expression 2 ** 2
creates a new object 4
in memory that is then immediately passed into the math.sqrt() function, once that function call returns, "all is lost" and the newly created
4
object is forgotten again, as well as the return value of math.sqrt() .
math.sqrt(2 ** 2)
2.0
Even the composition of several function calls only constitutes another expression.
math.sqrt(sum([99, 100, 101]) / 3)
10.0
If we only need one particular function from a module, we may also use the alternative from ... import ...
syntax.
This does not create a module object but only makes a variable in our current location reference an object defined inside a module directly.
from math import sqrt
sqrt(16)
4.0
Often, we need a random variable, for example, when we want to build a simulation. The random module in the standard library
often suffices for that.
import random
random
<module 'random' from '/usr/lib64/python3.12/random.py'>
Besides the usual dunder-style attributes, the built-in dir() function lists some attributes in an uppercase naming convention and many others starting with a single underscore
_
. To understand the former, we must wait until Chapter 11 , while the latter is explained further below.
dir(random)
['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', '_ONE', '_Sequence', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_accumulate', '_acos', '_bisect', '_ceil', '_cos', '_e', '_exp', '_fabs', '_floor', '_index', '_inst', '_isfinite', '_lgamma', '_log', '_log2', '_os', '_pi', '_random', '_repeat', '_sha512', '_sin', '_sqrt', '_test', '_test_generator', '_urandom', '_warn', 'betavariate', 'binomialvariate', 'choice', 'choices', 'expovariate', 'gammavariate', 'gauss', 'getrandbits', 'getstate', 'lognormvariate', 'normalvariate', 'paretovariate', 'randbytes', 'randint', 'random', 'randrange', 'sample', 'seed', 'setstate', 'shuffle', 'triangular', 'uniform', 'vonmisesvariate', 'weibullvariate']
The random.random() function generates a uniformly distributed
float
number between 0 (inclusive) and 1 (exclusive).
random.random
<function Random.random()>
help(random.random)
Help on built-in function random: random() method of random.Random instance random() -> x in the interval [0, 1).
random.random()
0.782609162553633
While we could build some conditional logic with an if
statement to map the number generated by random.random() to a finite set of elements manually, the random.choice()
function provides a lot more convenience for us. We call it with, for example, the
numbers
list defined below, and it draws one element from it with equal probability.
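To see why that is convenient, here is a minimal sketch of what such a manual mapping might look like, using a made-up two-outcome example:

x = random.random()

# map the uniform draw to one of two made-up outcomes "by hand"
if x < 0.5:
    outcome = "heads"
else:
    outcome = "tails"

For a list with twelve elements like numbers below, this quickly becomes tedious; random.choice() hides that bookkeeping for us.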
random.choice
<bound method Random.choice of <random.Random object at 0x561db35d9a60>>
help(random.choice)
Help on method choice in module random: choice(seq) method of random.Random instance Choose a random element from a non-empty sequence.
numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]
random.choice(numbers)
4
To reproduce the same random numbers in a simulation each time we run it, we set the random seed . It is good practice to do that at the beginning of a program or notebook. It becomes essential when we employ randomized machine learning algorithms, like the Random Forest
, and want to obtain reproducible results for publication in academic journals.
The random module provides the random.seed()
function to do that.
random.seed(42)
random.random()
0.6394267984578837
random.seed(42)
random.random()
0.6394267984578837
As the Python community is based around open source, many developers publish their code, for example, on the Python Package Index PyPI, from where anyone may download and install it for free using command-line tools like pip or conda. This way, we can customize our Python installation even further. Managing many such packages is quite a deep topic on its own, sometimes fearfully called dependency hell.
The difference between the standard library and such third-party packages is that in the first case, the code goes through a much more formalized review process and is officially endorsed by the Python core developers. Yet, many third-party projects also offer the highest quality standards and are also relied on by many businesses and researchers.
Throughout this book, we will look at many third-party libraries, mostly from Python's scientific stack, a tightly coupled set of third-party libraries for storing big data efficiently (e.g., numpy), "wrangling" (e.g., pandas) and visualizing them (e.g., matplotlib or seaborn), fitting classical statistical models (e.g., statsmodels), training machine learning models (e.g., sklearn), and much more.
Below, we briefly show how to install third-party libraries.
numpy is the de-facto standard in the Python world for handling array-like data. That is a fancy word for data that can be put into a matrix or vector format.
As numpy is not in the standard library , it must be manually installed, for example, with the pip tool. As mentioned in Chapter 0
, to execute terminal commands from within a Jupyter notebook, we start a code cell with an exclamation mark.
If you are running this notebook with an installation of the Anaconda Distribution, then numpy is probably already installed. Running the cell below confirms that.
!pip install numpy
Requirement already satisfied: numpy in /home/alexander/Repositories/intro-to-python/.venv/lib64/python3.12/site-packages (1.26.4)
numpy is conventionally imported with the shorter idiomatic name np
. The as
in the import statement changes the resulting variable name. It is a shortcut for the three lines import numpy
, np = numpy
, and del numpy
.
import numpy as np
np
is used in the same way as math
or random
above.
np
<module 'numpy' from '/home/alexander/Repositories/intro-to-python/.venv/lib64/python3.12/site-packages/numpy/__init__.py'>
Let's convert the above numbers
list into a vector-like object of type numpy.ndarray
.
vec = np.array(numbers)
vec
array([ 7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4])
type(vec)
numpy.ndarray
numpy somehow magically adds new behavior to Python's built-in arithmetic operators. For example, we may now scalar-multiply
vec
. Also, numpy's functions are implemented in highly optimized C code and, therefore, are fast, especially when dealing with bigger amounts of data.
2 * vec
array([14, 22, 16, 10, 6, 24, 4, 12, 18, 20, 2, 8])
This scalar multiplication would "fail" if we used a plain list
object like numbers
instead of a numpy.ndarray
object like vec
. The two types exhibit different behavior when used with the same operator, another example of operator overloading.
2 * numbers # surprise, surprise
[7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4, 7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]
numpy's numpy.ndarray
objects integrate nicely with Python's built-in functions (e.g., sum() ) or functions from the standard library
(e.g., random.choice()
).
sum(vec)
78
random.choice(vec)
7
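As an aside, numpy.ndarray objects are not limited to one dimension. A minimal sketch (with made-up numbers) shows the "matrix format" mentioned above; scalar multiplication again works element-wise:

mat = np.array([[1, 2], [3, 4]])  # a 2x2 matrix with made-up entries

2 * mat  # array([[2, 4], [6, 8]])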
For sure, we can create local modules and packages. In the Chapter 2 directory, there is a sample_module.py file that contains, among other things, a function equivalent to the final version of
average_evens()
. To be realistic, this sample module is structured in a modular manner with several functions building on each other. It is best to skim over it now before reading on.
To make code we put into a .py file available in our program, we import it as a module just as we did above with modules in the standard library or third-party packages.
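To make the idea concrete, here is a minimal sketch of what such a file could contain: a hypothetical my_helpers.py (not part of this repository) saved next to the notebook with a single function.

# Contents of the hypothetical my_helpers.py
def double(x):
    """Return twice the value of x."""
    return 2 * x

After saving that file, import my_helpers would make my_helpers.double(21) available in the notebook.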
The pwd utility tells us which directory Python is currently in. We refer to that as the working directory, and pwd reads "print working directory." JupyterLab automatically sets this to the directory in which the notebook is located.
!pwd
/home/alexander/Repositories/intro-to-python/02_functions
The name to be imported is the file's name except for the .py part. For this to work, the file's name must adhere to the same rules as hold for variable names in general.
What happens during an import is as follows. When Python sees the import sample_module
part, it first creates a new object of type module
in memory. This is effectively an empty namespace. Then, it executes the imported file's code from top to bottom. Whatever variables are still defined at the end of this are put into the module's namespace. Only if the file's code does not raise an error will Python make a variable in our current location (i.e., mod
here) reference the created module
object. Otherwise, it is discarded. In essence, it is as if we copied and pasted the file's code in place of the import statement. If we import an already imported module again, Python is smart enough to avoid repeating all this work and simply does nothing.
import sample_module as mod
mod
<module 'sample_module' from '/home/alexander/Repositories/intro-to-python/02_functions/sample_module.py'>
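One caveat of this caching behavior: if we edit sample_module.py while the notebook is running, re-executing the import statement above does not pick up the changes. A minimal sketch of how to force a re-execution with the standard library's importlib module:

import importlib

importlib.reload(mod)  # runs sample_module.py again and updates the module object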
Disregarding the dunder-style attributes, mod
defines the attributes _round_all
, _scaled_average
, average
, average_evens
, and average_odds
, which are exactly the ones we would expect from reading the sample_module.py file.
A convention when working with imported code is to disregard any attributes starting with a single underscore _
. These are considered private and constitute implementation details that the author of the imported code might change in a future version of the software. We must not rely on them in any way.
In contrast, the three remaining public attributes are the functions average()
, average_evens()
, and average_odds()
that we may use after the import.
dir(mod)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_round_all', '_scaled_average', 'average', 'average_evens', 'average_odds']
We use the imported mod.average_evens()
just like average_evens()
defined in the first part of this chapter. The advantage we get from modularization with .py files is that we can now easily reuse functions across different Jupyter notebooks without redefining them again and again. Also, we can "source out" code that distracts from the storyline told in a notebook.
mod.average_evens
<function sample_module.average_evens(numbers, *, scalar=1)>
help(mod.average_evens)
Help on function average_evens in module sample_module: average_evens(numbers, *, scalar=1) Calculate the average of all even numbers in a list. Args: numbers (list of int's/float's): numbers to be averaged; if non-whole numbers are provided, they are rounded scalar (float, optional): multiplies the average; defaults to 1 Returns: scaled_average (float)
mod.average_evens(numbers)
7.0
mod.average_evens(numbers, scalar=2)
14.0
Packages are a generalization of modules, and we look at one in detail in Chapter 11 .
As a further reading on modules and packages, we refer to the official tutorial .