pysgd

The pysgd package is structured as a general gradient descent algorithm that accepts data, an objective function, a gradient descent adaptation and hyperparameters as its arguments. Below is the file structure of the package:

pysgd/
|--__init__.py
|--adaptations/
|  |--__init__.py
|  |--adagrad.py
|  |--adam.py
|  |--constant.py
|--objectives/
|  |--__init__.py
|  |--linear.py
|  |--logistic.py
|  |--stab_tang.py
|--tests/

The intention of this package is to present reasonably efficient, working algorithms that are easy to understand.

The package is structured so that additional objective functions and gradient descent adaptations are easy to add: follow the basic form of the existing ones and place them in their respective folders.

Gradient Descent

Gradient descent is a method for minimizing an objective function. In machine learning applications the objective function to be minimized is the error, $J$ (or cost), of a predictive model. A predictive model consists of parameters, $\theta$, that are applied to inputs, $X$ (also called training samples, features, observations or independent variables), in order to estimate an output, $\hat{y}$ (also called a label or dependent variable). Gradient descent attempts to find the parameters that, when applied to a set of inputs, result in the lowest total error, i.e. the smallest difference between the actual outcomes and the ones predicted by the model. Below is the basic predictive formula.

$$H(X,\theta)=\hat{y}$$
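
A minimal sketch of such a hypothesis function, assuming a linear model of the kind implemented in `linear.py` (the module's actual implementation may differ):

```python
import numpy as np

# Hypothetical linear hypothesis: X is an m x (n + 1) array of inputs
# (with a leading column of ones) and theta is a vector of n + 1 parameters.
def h(X, theta):
    return X @ theta  # predicted outputs y_hat, one per row of X
```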

Below is an illustrative formula for determining the cost of a model.

$$J(\theta) = \sum_{i=1}^m \left| h(\theta, x_i) - y_i \right|$$

There are different formulas for computing cost depending on the application, but the formula above expresses the essence of predicting actual outcomes as closely as possible.
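
Continuing the linear sketch above, this illustrative absolute-error cost could be computed as follows (the package's objective modules use their own cost formulas):

```python
import numpy as np

# Hypothetical absolute-error cost: sum of |prediction - actual| over all m samples,
# using the linear hypothesis X @ theta from the sketch above.
def cost(theta, X, y):
    return np.sum(np.abs(X @ theta - y))
```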

In order to minimize $J$ with respect to $\theta$, the algorithm starts with an arbitrary value of $\theta$, determines the "direction" of steepest decrease in cost (the gradient), updates $\theta$ by a small step in that direction (scaled by the learning rate, $\alpha$) and then repeats until the cost $J$ has been minimized.

$$\theta_j := \theta_j - \alpha\nabla_{\theta_j}J(\theta)$$

or

$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)$$
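
The update rule translates directly into a loop. Below is a minimal sketch with a fixed learning rate and a hypothetical `grad` function; the package's actual implementation (shown later) uses generators and convergence testing instead:

```python
# Minimal, unadapted gradient descent: grad is any function returning the
# gradient of J with respect to theta.
def gradient_descent(theta, grad, alpha=0.01, iters=1000):
    for _ in range(iters):
        theta = theta - alpha * grad(theta)  # step against the gradient
    return theta
```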

API

The package has one main function, sgd, that returns a $j \times (n + 2)$ array, where $j$ is the number of iterations and $n$ is the number of features in the data. Each row holds $\theta_j$ in the first $n + 1$ columns and the cost $J_j$ in the last column. The arguments are listed below, followed by an example call.

| Argument | Definition |
| --- | --- |
| `theta0` | Starting value of $\theta$ ($\theta_0$) in the form of a $1 \times (n + 1)$ array. |
| `obj='stab_tang'` | Objective function to be minimized, in the form of a string with a value of `stab_tang`, `linear` or `logistic`. `stab_tang` is the Styblinski–Tang function, included for testing and illustrative purposes. |
| `adapt='constant'` | Gradient descent adaptation, in the form of a string with a value of `constant`, `adagrad` or `adam`. `constant` applies no adaptation; `adagrad` implements the [Adaptive Gradient Algorithm](http://stanford.edu/~jduchi/projects/DuchiHaSi10_colt.pdf); `adam` implements [Adaptive Moment Estimation](https://arxiv.org/pdf/1412.6980v8.pdf). |
| `data=np.array([])` | Data in the form of an $m \times (n + 2)$ array, with ones in the first column (if necessary) and the target in the last column, where $m$ is the number of training observations. |
| `size=50` | Batch size in the form of an integer between $1$ and $m$. Batches are generated contiguously over the data, which is shuffled between cycles. |
| `alpha=.01` | Learning rate $\alpha$ in the form of a floating point number. |
| `epsilon=10**-8` | Smoothing hyperparameter used by `adagrad` and `adam`. |
| `beta1=0.9` | Hyperparameter used by `adam` that controls the decay rate of the moving average of the gradient. |
| `beta2=0.999` | Hyperparameter used by `adam` that controls the decay rate of the moving average of the squared gradient. |
| `delta_min=10**-6` | Convergence threshold in the form of a floating point number; the algorithm stops once the change in $\theta$ between iterations falls below this value. |
| `iters=1000` | Maximum number of batches evaluated unless convergence is achieved in fewer iterations. |
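
For example, a call using the Styblinski–Tang objective with the Adam adaptation (the argument values here are illustrative):

```python
import numpy as np
import pysgd

theta_hist = pysgd.sgd(
    theta0=np.array([-0.2, -4.4]),  # starting parameters
    obj='stab_tang',                # objective function
    adapt='adam',                   # gradient descent adaptation
    alpha=0.01,                     # learning rate
    iters=1000)                     # maximum number of iterations

theta_final = theta_hist[-1, :-1]   # theta from the first n + 1 columns
cost_final = theta_hist[-1, -1]     # cost J from the last column
```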

Testing

Tests are in the tests folder, run with pytest, and provide 100% coverage.

In addition to sample data sets, we use the Styblinski–Tang function, which is non-convex yet has a straightforward gradient, making it well suited for testing. This allows us to compare the value of $\theta$ produced by each algorithm, and its associated $J$, with values we can calculate directly. By using a known function with two-dimensional inputs we can plot $J$ as a surface for a given range of $\theta$ values and then plot $J_\theta$ for each iteration of the algorithm to visualize its progression.

The Styblinski–Tang function with respect to $\theta$ is:

$$J(\theta) = \dfrac{\sum_{i=1}^n\left(\theta_i^4-16\theta_i^2+5\theta_i\right)}{n}$$

where $n$ is the number of dimensions in the data. For two dimensions, we can also express our cost function as:

$$J(\theta) = \dfrac{\theta_1^4-16\theta_1^2+5\theta_1+\theta_2^4-16\theta_2^2+5\theta_2}{2}$$

The global minimum of this function is $-78.33233$ at $\theta = (-2.903534, -2.903534)$.
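
This known minimum makes convergence easy to check directly. Below is a hedged sketch of the kind of pytest-style check used; the package's actual tests and tolerances may differ:

```python
import numpy as np
import pysgd

def test_stab_tang_converges_to_known_minimum():
    theta_hist = pysgd.sgd(theta0=np.array([-0.2, -4.4]))
    theta, cost = theta_hist[-1, :-1], theta_hist[-1, -1]
    assert np.allclose(theta, [-2.903534, -2.903534], atol=1e-2)
    assert abs(cost - (-78.33233)) < 1e-2
```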

For the two-dimensional case, the Styblinski–Tang gradient with respect to each $\theta_j$ is:

$$\frac{\partial}{\partial\theta_j}J(\theta) = 2\theta_j^3-16\theta_j+2.5$$
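
The `stab_tang` module itself is not reproduced in this notebook. Based on how `sgd` consumes an objective when no data is supplied (`cost_fun(theta)` returns $\theta$ with the cost appended, `grad_fun(theta)` returns the gradient), a minimal sketch might look like this; the package's actual `stab_tang.py` may differ:

```python
import numpy as np

# Hypothetical stab_tang-style objective for n dimensions.
def cost_fun(theta):
    j = np.sum(theta**4 - 16*theta**2 + 5*theta) / theta.shape[0]
    return np.append(theta, j)  # theta with the cost J appended

def grad_fun(theta):
    return (4*theta**3 - 32*theta + 5) / theta.shape[0]
```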

The color scale of the surface plots corresponds to the z-axis value, which represents the cost $J$ over the displayed range of $\theta$. The color scale of the points on the surface, which represent the cost $J_{\theta_j}$ at each iteration $j$ of the algorithm, corresponds to the iteration number.

pysgd

In [23]:
import pysgd
import inspect
from IPython.display import Markdown, display

# Define function to display a module's source as a markdown code block.
def disp_mod(mod):
    code, line_no = inspect.getsourcelines(mod)
    display(Markdown('``` python\n' + ''.join(code) + '```'))

disp_mod(pysgd)
import numpy as np
import importlib
from pysgd.objectives import Objective

# Define general gradient descent algorithm
def sgd(
    theta0,
    obj='stab_tang',
    adapt='constant',
    data=np.array([]),
    size=50,
    alpha=.01,
    epsilon=10**-8,
    beta1=0.9,
    beta2=0.999,
    delta_min=10**-6,
    iters=1000):

    # Initialize gradient adaptation parameters
    params = dict(
        alpha=alpha,
        epsilon=epsilon,
        beta1=beta1,
        beta2=beta2
    )

    # Initialize cost and gradient functions
    obj_fun = Objective(obj, data, size)

    # Initialize gradient adaptation.
    grad_adapt = importlib.import_module('pysgd.adaptations.' + adapt).adapt

    # Initialize theta and cost history for convergence testing and plot
    theta_hist = np.zeros((iters, theta0.shape[0]+1))
    theta_hist[0] = obj_fun.cost(theta0)

    # Initialize theta generator
    theta_gen = grad_adapt(params, obj_fun.grad)(theta0)

    # Initialize iteration variables
    delta = float("inf")
    i = 1

    # Run algorithm
    while delta > delta_min:
        # Get next theta
        theta = next(theta_gen)

        # Store cost for plotting, test for convergence
        try:
            theta_hist[i] = obj_fun.cost(theta)
        except IndexError:
            print('{} minimum change in theta not achieved in {} iterations.'
                  .format(delta_min, theta_hist.shape[0]))
            break
        delta = np.max(np.square(theta - theta_hist[i-1,:-1]))**0.5

        i += 1
    # Trim zeros and return
    theta_hist = theta_hist[:i]
    return theta_hist
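
The adaptation modules are not shown above. From the way `sgd` uses them, `adapt(params, grad)` must return a callable that, given $\theta_0$, yields successive values of $\theta$. A minimal sketch of a constant-alpha adaptation under that assumption (the package's actual `constant.py` may differ):

```python
# Hypothetical adaptation module: sgd() calls adapt(params, grad)(theta0) and
# then repeatedly calls next() on the result to obtain updated thetas.
def adapt(params, grad):
    def theta_gen(theta):
        while True:
            theta = theta - params['alpha'] * grad(theta)
            yield theta
    return theta_gen
```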

pysgd.objectives

The Objective class in pysgd/objectives/__init__.py is where the objective functions and data batching are handled.

In [24]:
disp_mod(pysgd.objectives)
import importlib
import numpy as np
from os.path import dirname, basename, isfile
import glob
modules = glob.glob(dirname(__file__)+"/*.py")
__all__ = [ basename(f)[:-3] for f in modules if isfile(f)]

class Objective(object):

    def __init__(self, obj, data, size):

        obj = importlib.import_module('pysgd.objectives.' + obj)

        def batches_gen(data=data, size=size):
            i = 0
            while True:
                index = slice(i*size, min((i+1)*size, data.shape[0]), 1)
                if data.shape[0] - i * size > 0:
                    yield (data[index,:-1], data[index,-1])
                    i += 1
                else:
                    np.random.shuffle(data)
                    i = 0

        self.batches = batches_gen()

        def grad_from_data(theta):
            return obj.grad_fun(theta, next(self.batches))

        def cost_from_data(theta):
            return obj.cost_fun(theta, data)

        if data.size > 1:
            self.grad = grad_from_data
            self.cost = cost_from_data
        else:
            self.grad = obj.grad_fun
            self.cost = obj.cost_fun
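
As a hedged usage sketch (the data values are made up, and the behavior of the `linear` objective beyond what the code above implies is an assumption):

```python
import numpy as np
from pysgd.objectives import Objective

# Six observations: a column of ones, one feature and the target in the last column.
data = np.c_[np.ones(6), np.arange(6.0), 3.0 * np.arange(6.0) + 1.0]

obj = Objective('linear', data, size=2)
theta = np.zeros(2)

g = obj.grad(theta)  # gradient computed on the next mini-batch of 2 rows
c = obj.cost(theta)  # cost computed over the full data set
```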

Constant Alpha

Run the gradient descent algorithm with the default arguments (Styblinski–Tang objective, constant learning rate).

In [25]:
import numpy as np

theta_hist = pysgd.sgd(theta0=np.array([-0.2, -4.4]))

Plot $J_j$ for each $\theta_j$.

In [26]:
import plotly.offline as py
py.init_notebook_mode()
import plotly.graph_objs as go

# Prepare plot
x = np.arange(-4.6, 4.6, 0.1)
y = np.arange(-4.6, 4.6, 0.1)
X, Y = np.meshgrid(x, y)
Z = 1/2.0 * (X**4 - 16*X**2 + 5*X + Y**4 - 16*Y**2 + 5*Y)

# Prepare surface contours
contour = dict(
    show = True,
    color = 'DodgerBlue', #'#0066FF',
    highlightcolor = 'DeepSkyBlue',
    highlightwidth = 1.5,
    width = 1
)

# Add surface to plot
surface = go.Surface(
    name = 'J surface',
    x = X,
    y = Y,
    z = Z,
    colorscale = 'Rainbow',
    showlegend = False,
    contours = dict(
        y = contour,
        x = contour,
        z = dict(
            show = False,
            color = contour['color'],
            highlightcolor = contour['highlightcolor'],
            highlightwidth = contour['highlightwidth'],
            width = contour['width']
        )
    )
)

# Add theta_hist to plot - set up as a function for future plots
def spec_theta_hist_trace(theta_hist):
    theta_hist_trace = go.Scatter3d(
        name = 'theta_hist',
        x = theta_hist[:,0],
        y = theta_hist[:,1],
        z = theta_hist[:,2],
        mode = 'markers',
        showlegend = False,
        marker = dict(
            color = np.arange(theta_hist.shape[0]),
            colorscale = 'Blackbody',
            showscale = False,
            size = "5"
        )
    )
    return theta_hist_trace

# Specify layout options
layout = go.Layout(
    title='Constant Alpha',
    autosize=False,
    width=700,
    height=700,
    scene=dict(
        xaxis=dict(
            title = 'theta1',
            ticks = "outside",
            dtick = 0.25,
            showticklabels = False
        ),
        yaxis=dict(
            title = 'theta2',
            ticks = "",
            dtick = 0.25,
            showticklabels = False
        ),
        zaxis=dict(
            title = 'J',
        ),
        camera=dict(
            up=dict(x=0, y=0, z=1),
            center=dict(x=0, y=0, z=0),
            eye=dict(x=0.25, y=1.25, z=1.15)
        )
    )
)

# Execute plot
fig = go.Figure(data=[surface, spec_theta_hist_trace(theta_hist)], layout=layout)
py.iplot(fig, filename='constant_alpha_gradient_descent')