Probabilistic Programming and Bayesian Methods for Hackers Chapter 4

Run in Google Colab View source on GitHub



Original content (this Jupyter notebook) created by Cam Davidson-Pilon (@Cmrn_DP)

Ported to Tensorflow Probability by Matthew McAteer (@MatthewMcAteer0), with help from the TFP team at Google ([email protected]).

Welcome to Bayesian Methods for Hackers. The full Github repository is available at github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers. The other chapters can be found on the project's homepage. We hope you enjoy the book, and we encourage any contributions!


Table of Contents

  • Dependencies & Prerequisites
  • The greatest theorem never told
    • The Law of Large Numbers
    • Intuition
    • How do we compute $Var(Z)$ though?
    • Expected values and probabilities
    • What does this all have to do with Bayesian statistics?
    • The Disorder of Small Numbers
    • Example: Aggregated geographic data
    • Example: Kaggle's U.S. Census Return Rate Challenge
    • Example: How to order Reddit submissions
      • Setting up the Praw Reddit API
      • Register your Application on Reddit
        • Reddit API Setup
      • Sorting!
      • But this is too slow for real-time!
    • Extension to Starred rating systems
    • Example: Counting Github stars
    • Conclusion
    • Appendix
      • Exercises
      • Kicker Careers Ranked by Make Percentage
      • Average Household Income by Programming Language
    • References

Dependencies & Prerequisites

Tensorflow Probability is part of the colab default runtime, so you don't need to install Tensorflow or Tensorflow Probability if you're running this in the colab.
If you're running this notebook in Jupyter on your own machine (and you have already installed Tensorflow), you can use the following
  • For the most recent nightly installation: pip3 install -q tfp-nightly
  • For the most recent stable TFP release: pip3 install -q --upgrade tensorflow-probability
  • For the most recent stable GPU-connected version of TFP: pip3 install -q --upgrade tensorflow-probability-gpu
  • For the most recent nightly GPU-connected version of TFP: pip3 install -q tfp-nightly-gpu
Again, if you are running this in a Colab, Tensorflow and TFP are already installed
In [0]:
#@title Imports and Global Variables  { display-mode: "form" }
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is  bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook. Try running the following code:

        import json
        s = json.load(open("../styles/bmh_matplotlibrc.json"))
        matplotlib.rcParams.update(s)
"""
!pip3 install -q praw
!pip3 install -q pandas_datareader
!pip3 install -q wget
from __future__ import absolute_import, division, print_function

#@markdown This sets the warning status (default is `ignore`, since this notebook runs correctly)
warning_status = "ignore" #@param ["ignore", "always", "module", "once", "default", "error"]
import warnings
warnings.filterwarnings(warning_status)
with warnings.catch_warnings():
    warnings.filterwarnings(warning_status, category=DeprecationWarning)
    warnings.filterwarnings(warning_status, category=UserWarning)

import numpy as np
import os
#@markdown This sets the styles of the plotting (default is styled like plots from [FiveThirtyeight.com](https://fivethirtyeight.com/))
matplotlib_style = 'fivethirtyeight' #@param ['fivethirtyeight', 'bmh', 'ggplot', 'seaborn', 'default', 'Solarize_Light2', 'classic', 'dark_background', 'seaborn-colorblind', 'seaborn-notebook']
import matplotlib.pyplot as plt; plt.style.use(matplotlib_style)
import matplotlib.axes as axes;
from matplotlib.patches import Ellipse
from mpl_toolkits.mplot3d import Axes3D
import pandas_datareader.data as web
%matplotlib inline
import seaborn as sns; sns.set_context('notebook')
from IPython.core.pylabtools import figsize
#@markdown This sets the resolution of the plot outputs (`retina` is the highest resolution)
notebook_screen_res = 'retina' #@param ['retina', 'png', 'jpeg', 'svg', 'pdf']
%config InlineBackend.figure_format = notebook_screen_res

import tensorflow as tf
tfe = tf.contrib.eager

# Eager Execution
#@markdown Check the box below if you want to use [Eager Execution](https://www.tensorflow.org/guide/eager)
#@markdown Eager execution provides An intuitive interface, Easier debugging, and a control flow comparable to Numpy. You can read more about it on the [Google AI Blog](https://ai.googleblog.com/2017/10/eager-execution-imperative-define-by.html)
use_tf_eager = False #@param {type:"boolean"}

# Use try/except so we can easily re-execute the whole notebook.
if use_tf_eager:
    try:
        tf.enable_eager_execution()
    except:
        pass

import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

  
def evaluate(tensors):
    """Evaluates Tensor or EagerTensor to Numpy `ndarray`s.
    Args:
    tensors: Object of `Tensor` or EagerTensor`s; can be `list`, `tuple`,
      `namedtuple` or combinations thereof.
   
    Returns:
      ndarrays: Object with same structure as `tensors` except with `Tensor` or
        `EagerTensor`s replaced by Numpy `ndarray`s.
    """
    if tf.executing_eagerly():
        return tf.contrib.framework.nest.pack_sequence_as(
            tensors,
            [t.numpy() if tf.contrib.framework.is_tensor(t) else t
             for t in tf.contrib.framework.nest.flatten(tensors)])
    return sess.run(tensors)

class _TFColor(object):
    """Enum of colors used in TF docs."""
    red = '#F15854'
    blue = '#5DA5DA'
    orange = '#FAA43A'
    green = '#60BD68'
    pink = '#F17CB0'
    brown = '#B2912F'
    purple = '#B276B2'
    yellow = '#DECF3F'
    gray = '#4D4D4D'
    def __getitem__(self, i):
        return [
            self.red,
            self.orange,
            self.green,
            self.blue,
            self.pink,
            self.brown,
            self.purple,
            self.yellow,
            self.gray,
        ][i % 9]
TFColor = _TFColor()

def session_options(enable_gpu_ram_resizing=True, enable_xla=True):
    """
    Allowing the notebook to make use of GPUs if they're available.
    
    XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear 
    algebra that optimizes TensorFlow computations.
    """
    config = tf.ConfigProto()
    config.log_device_placement = True
    if enable_gpu_ram_resizing:
        # `allow_growth=True` makes it possible to connect multiple colabs to your
        # GPU. Otherwise the colab malloc's all GPU ram.
        config.gpu_options.allow_growth = True
    if enable_xla:
        # Enable on XLA. https://www.tensorflow.org/performance/xla/.
        config.graph_options.optimizer_options.global_jit_level = (
            tf.OptimizerOptions.ON_1)
    return config


def reset_sess(config=None):
    """
    Convenience function to create the TF graph & session or reset them.
    """
    if config is None:
        config = session_options()
    global sess
    tf.reset_default_graph()
    try:
        sess.close()
    except:
        pass
    sess = tf.InteractiveSession(config=config)

reset_sess()

The greatest theorem never told

This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far.

The Law of Large Numbers

Let $Z_i$ be $N$ independent samples from some probability distribution. According to the Law of Large numbers, so long as the expected value $E[Z]$ is finite, the following holds,

$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$

In words:

The average of a sequence of random variables from the same distribution converges to the expected value of that distribution.

This may seem like a boring result, but it will be the most useful tool you use.

Intuition

If the above Law is somewhat surprising, it can be made more clear by examining a simple example.

Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average:

$$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$

By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values: $$ \begin{align} \frac{1}{N} \sum_{i=1}^N \;Z_i & =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\ & = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\ & = c_1 \times \text{ (approximate frequency of $c_1$) } \\ & \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\ & \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\ & = E[Z] \end{align} $$

Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost any distribution, minus some important cases we will encounter later.

Example


Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables.

We sample sample_size = 100000 Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to sample_size.

In [2]:
sample_size_ = 100000
expected_value_ = lambda_val_ = 4.5
N_samples = tf.range(start=1,
                      limit=sample_size_,
                      delta=100)

plt.figure(figsize(12.5, 4))
for k in range(3):
    samples = tfd.Poisson(rate=lambda_val_).sample(sample_shape=(sample_size_))
    [ samples_, N_samples_ ] = evaluate([ samples, N_samples ]) 

    partial_average_ = [ samples_[:i].mean() for i in N_samples_ ]        

    plt.plot( N_samples_, partial_average_, lw=1.5,label="average of  $n$ samples; seq. %d"%k)

plt.plot( N_samples_, expected_value_ * np.ones_like( partial_average_), 
    ls = "--", label = "true expected value", c = "k" )

plt.ylim( 4.35, 4.65) 
plt.title( "Convergence of the average of \n random variables to its \
expected value" )
plt.ylabel( "average of $n$ samples" )
plt.xlabel( "# of samples, $n$")
plt.legend();

Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how jagged and jumpy the average is initially, then smooths out). All three paths approach the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statistician have another name for flirting: convergence.

Another very relevant question we can ask is how quickly am I converging to the expected value? Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait — compute on average? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity:

$$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$

The above formulae is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same). As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:

$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$

By computing the above many, $N_y$, times (remember, it is random), and averaging them:

$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$

Finally, taking the square root:

$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$

In [3]:
N_Y = tf.constant(250)  # use this many to approximate D(N)
N_array = tf.range(1000., 50000., 2500) # use this many samples in the approx. to the variance.
D_N_results = tf.zeros(tf.shape(N_array)[0])
lambda_val = tf.constant(4.5) 
expected_value = tf.constant(4.5) #for X ~ Poi(lambda) , E[ X ] = lambda

[
    N_Y_, 
    N_array_, 
    D_N_results_, 
    expected_value_, 
    lambda_val_,
] = evaluate([ 
    N_Y, 
    N_array, 
    D_N_results, 
    expected_value,
    lambda_val,
])

def D_N(n):
    """
    This function approx. D_n, the average variance of using n samples.
    """
    Z = tfd.Poisson(rate=lambda_val_).sample(sample_shape=(int(n), int(N_Y_)))
    average_Z = tf.reduce_mean(Z, axis=0)
    average_Z_ = evaluate(average_Z)
    
    return np.sqrt(((average_Z_ - expected_value_)**2).mean())

for i,n in enumerate(N_array_):
    D_N_results_[i] =  D_N(n)

plt.figure(figsize(12.5, 3))
plt.xlabel( "$N$" )
plt.ylabel( "expected squared-distance \nfrom true value" )
plt.plot(N_array_, D_N_results_, lw = 3, 
            label="expected distance between\n\
expected value and \naverage of $N$ random variables.")
plt.plot( N_array_, np.sqrt(expected_value_)/np.sqrt(N_array_), lw = 2, ls = "--", 
        label = r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$" )
plt.legend()
plt.title( "How 'fast' is the sample average converging? " );

As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the rate of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but 20 000 more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.

It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variable distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is

$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$

This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty so what's the statistical point of adding extra precise digits? Though drawing samples can be so computationally cheap that having a larger $N$ is fine too.

How do we compute $Var(Z)$ though?

The variance is simply another expected value that can be approximated! Consider the following, once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance:

$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$

Expected values and probabilities

There is an even less explicit relationship between expected value and estimating probabilities. Define the indicator function

$$\mathbb{1}_A(x) = \begin{cases} 1 & x \in A \\\\ 0 & else \end{cases} $$ Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:

$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] = P(A) $$

Again, this is fairly obvious after a moments thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that a $Z \sim Exp(.5)$ is greater than 5, and we have many samples from a $Exp(.5)$ distribution.

$$ P( Z > 5 ) = \frac{1}{N}\sum_{i=1}^N \mathbb{1}_{z > 5 }(Z_i) $$

In [4]:
N = 10000

print("Probability Estimate: ", len(np.where(evaluate(tfd.Exponential(rate=0.5).sample(sample_shape=N)) > 5))/N )
Probability Estimate:  0.0001

What does this all have to do with Bayesian statistics?

Point estimates, to be introduced in the next chapter, in Bayesian inference are computed using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.

When is enough enough? When can you stop drawing samples from the posterior? That is the practitioners decision, and also dependent on the variance of the samples (recall from above a high variance means the average will converge slower).

We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us confidence in how unconfident we should be. The next section deals with this issue.

The Disorder of Small Numbers

The Law of Large Numbers is only valid as $N$ gets infinitely large: never truly attainable. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.

Example: Aggregated geographic data

Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each the geographic areas, we must be conscious of the Law of Large Numbers and how it can fail for areas with small populations.

We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, population number in each state are uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does not vary across county, and each individual, regardless of the county he or she is currently living in, has the same distribution of what their height may be:

$$ \text{height} \sim \text{Normal}(\text{mu}=150, \text{sd}=15 ) $$

We aggregate the individuals at the county level, so we only have data for the average in the county. What might our dataset look like?

In [5]:
plt.figure(figsize(12.5, 4))

std_height = 15.
mean_height = 150.
n_counties = 5000
smallest_population = 100
largest_population = 1500
pop_generator = np.random.randint
norm = np.random.normal

population_ = pop_generator(smallest_population, largest_population, n_counties)

# Our strategy to vectorize this problem will be to end-to-end concatenate the
# number of draws we need. Then we'll loop over the pieces.
d = tfp.distributions.Normal(loc=mean_height, scale= 1. / std_height)
x = d.sample(np.sum(population_))

average_across_county = []
seen = 0
for p in population_:
    average_across_county.append(tf.reduce_mean(x[seen:seen+p]))
    seen += p
average_across_county_full = tf.stack(average_across_county)

##located the counties with the apparently most extreme average heights.
[ 
    average_across_county_,
    i_min, 
    i_max 
] = evaluate([
    average_across_county_full,
    tf.argmin( average_across_county_full ), 
    tf.argmax( average_across_county_full )
])

#plot population size vs. recorded average
plt.scatter( population_, average_across_county_, alpha = 0.5, c=TFColor[6])
plt.scatter( [ population_[i_min], population_[i_max] ], 
           [average_across_county_[i_min], average_across_county_[i_max] ],
           s = 60, marker = "o", facecolors = "none",
           edgecolors = TFColor[0], linewidths = 1.5, 
            label="extreme heights")

plt.xlim( smallest_population, largest_population )
plt.title( "Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot( [smallest_population, largest_population], [mean_height, mean_height], color = "k", label = "true expected \
height", ls="--" )
plt.legend(scatterpoints = 1);

What do we observe? Without accounting for population sizes we run the risk of making an enormous inference error: if we ignored population size, we would say that the county with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do not necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.

We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme population heights should also be uniformly spread over 100 to 1500, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights.

In [6]:
print("Population sizes of 10 'shortest' counties: ")
print(population_[ np.argsort( average_across_county_ )[:10] ], '\n')
print("Population sizes of 10 'tallest' counties: ")
print(population_[ np.argsort( -average_across_county_ )[:10] ])
Population sizes of 10 'shortest' counties: 
[139 120 148 110 212 110 134 169 243 162] 

Population sizes of 10 'tallest' counties: 
[101 109 296 134 121 203 145 192 113 357]

Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.

Example: Kaggle's U.S. Census Return Rate Challenge

Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a group block, measured between 0 and 100, using census variables (median income, number of females in the block-group, number of trailer parks, average number of children etc.). Below we plot the census mail-back rate versus block group population:

In [7]:
reset_sess()

import wget
url = 'https://raw.githubusercontent.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/master/Chapter4_TheGreatestTheoremNeverTold/data/census_data.csv'
filename = wget.download(url)
filename
Out[7]:
'census_data.csv'
In [8]:
plt.figure(figsize(12.5, 6.5))
data_ = np.genfromtxt( "census_data.csv", skip_header=1, 
                        delimiter= ",")
plt.scatter( data_[:,1], data_[:,0], alpha = 0.5, c=TFColor[6])
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3 )
plt.ylim( -5, 105)

i_min = tf.argmin(  data_[:,0] )
i_max = tf.argmax(  data_[:,0] )

[ i_min_, i_max_ ] = evaluate([ i_min, i_max ])
 
plt.scatter( [ data_[i_min_,1], data_[i_max_, 1] ], 
             [ data_[i_min_,0], data_[i_max_,0] ],
             s = 60, marker = "o", facecolors = "none",
             edgecolors = TFColor[0], linewidths = 1.5, 
             label="most extreme points")

plt.legend(scatterpoints = 1);

The above is a classic phenomenon in statistics. I say classic referring to the "shape" of the scatter plot above. It follows a classic triangular form, that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).

I am perhaps overstressing the point and maybe I should have titled the book "You don't have big data problems!", but here again is an example of the trouble with small datasets, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers. Compare with applying the Law without hassle to big datasets (ex. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are stable, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points to a small dataset can create very different results.

For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript The Most Dangerous Equation.

Example: How to order Reddit submissions

You may have disagreed with the original statement that the Law of Large numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with such few reviewers that the average rating is not a good reflection of the true value of the product.

This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, return poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, and truly more quality videos or comments are hidden in later pages with falsely-substandard ratings of around 4.8. How can we correct this?

Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, called submissions, for people to comment on. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.

How would you determine which submissions are the best? There are a number of ways to achieve this:

  1. Popularity: A submission is considered good if it has many upvotes. A problem with this model is that a submission with hundreds of upvotes, but thousands of downvotes. While being very popular, the submission is likely more controversial than best.
  2. Difference: Using the difference of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submission. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the Top submissions to be the those made during high traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
  3. Time adjusted: Consider using Difference divided by the age of the submission. This creates a rate, something like difference per second, or per minute. An immediate counter-example is, if we use per second, a 1 second old submission with 1 upvote would be better than a 100 second old submission with 99 upvotes. One can avoid this by only considering at least t second old submission. But what is a good t value? Does this mean no submission younger than t is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
  4. Ratio: Rank submissions by the ratio of upvotes to total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions who score well can be considered Top just as likely as older submissions, provided they have many upvotes to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is more likely to be better.

I used the phrase more likely for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the later with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though not likely.

What we really want is an estimate of the true upvote ratio. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission a upvote, versus a downvote"). So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.

One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:

  1. Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold. Again, problems are encountered. There is a tradeoff between number of submissions available to use and a higher threshold with associated ratio precision.
  2. Biased data: Reddit is composed of different subpages, called subreddits. Two examples are r/aww, which posts pics of cute animals, and r/politics. It is very likely that the user behaviour towards submissions of these two subreddits are very different: visitors are likely friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.

In light of these, I think it is better to use a Uniform prior.

With our prior in place, we can find the posterior of the true upvote ratio. The Python script below will scrape the best posts from the showerthoughts community on Reddit. This is a text-only community so the title of each post is the post.

Setting up the Praw Reddit API

Use of the praw package for retrieving data from Reddit does require some private information on your Reddit account. As such, we are not releasing the secret keys and reddit account passwords that we originally used for the code cell below. Fortunately, we've provided detailed information on how to set up the next code cell with your custom information.

Register your Application on Reddit

  1. Log into your Reddit account.

  2. Click the down arrow to the right of your name, then click the Preferences button.

  1. Click the app tab.

  1. Click the create another app button at the bottom left of your screen.

  2. Populate your script with the required fields. Refer to the screen shot below:

  1. Hit the create app button once you have populated all fields. You should now have a script which resembles the following:

NOTE: Certain components of the reddit = praw.Reddit("BasyesianMethodsForHackers") code have been intentionally omitted. This is because praw requires a user ID for accessing Reddit. the praw function follows the following format:

reddit = praw.Reddit(client_id='PERSONAL_USE_SCRIPT_14_CHARS', \
                     client_secret='SECRET_KEY_27_CHARS ', \
                     user_agent='YOUR_APP_NAME', \
                     username='YOUR_REDDIT_USER_NAME', \
                     password='YOUR_REDDIT_LOGIN_PASSWORD')

For help with creating a Reddit instance, visit https://praw.readthedocs.io/en/latest/code_overview/reddit_instance.html.

For help on configuring PRAW, visit https://praw.readthedocs.io/en/latest/getting_started/configuration.html.

In [9]:
#@title Reddit API setup
import sys
import numpy as np
from IPython.core.display import Image
import praw

reset_sess()

enter_client_id = 'ZhGqHeR1zTM9fg'                  #@param {type:"string"}
enter_client_secret = 'keZdvIa1Ge257NKEm3v-eGEdv8M' #@param {type:"string"}
enter_user_agent = "bayesian_app"                   #@param {type:"string"}
enter_username = "ThisIsJustADemo"                  #@param {type:"string"}
enter_password = "EnterYourOwnInfoHere"             #@param {type:"string"}

subreddit_name = "showerthoughts"     #@param ["showerthoughts", "todayilearned", "worldnews", "science", "lifeprotips", "nottheonion"] {allow-input: true}

reddit = praw.Reddit(client_id=enter_client_id,
                     client_secret=enter_client_secret,
                     user_agent=enter_user_agent,
                     username=enter_username,
                     password=enter_password)
subreddit  = reddit.subreddit(subreddit_name)

# go by timespan - 'hour', 'day', 'week', 'month', 'year', 'all'
# might need to go longer than an hour to get entries...

timespan = 'day' #@param ['hour', 'day', 'week', 'month', 'year', 'all']

top_submissions = subreddit.top(timespan)

#adding a number to the inside of int() call will get the ith top post.
ith_top_post = 2   #@param {type:"number"}
n_sub = int(ith_top_post)

i = 0
while i < n_sub:
    top_submission = next(top_submissions)
    i += 1

top_post = top_submission.title

upvotes = []
downvotes = []
contents = []

for sub in top_submissions:
    try:
        ratio = sub.upvote_ratio
        ups = int(round((ratio*sub.score)/(2*ratio - 1))
                  if ratio != 0.5 else round(sub.score/2))
        upvotes.append(ups)
        downvotes.append(ups - sub.score)
        contents.append(sub.title)
    except Exception as e:
        continue

votes = np.array( [ upvotes, downvotes] ).T

print("Post contents: \n")
print(top_post)
Post contents: 

Videogames are so appealing to people that can't find a mission in life,because there is always a clear cut mission in videogames

Above is the top post as well as some other sample posts:

In [10]:
"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submissions_ = len(votes)
submissions = tfd.Uniform(low=float(0.), high=float(n_submissions_)).sample(sample_shape=(4))
submissions_ = evaluate(tf.to_int32(submissions))

print("Some Submissions (out of %d total) \n-----------"%n_submissions_)
for i in submissions_:
    print('"' + contents[i] + '"')
    print("upvotes/downvotes: ",votes[i,:], "\n")
Some Submissions (out of 98 total) 
-----------
"The whales born around the time of creation of Moby Dick may still be alive."
upvotes/downvotes:  [85  6] 

"The difference between creepy and flirty is attraction."
upvotes/downvotes:  [141  17] 

"Tom Holland is the last Spider-Man Stan Lee met"
upvotes/downvotes:  [678  67] 

"Whoever was using the vacuum when Thanos snapped his fingers literally cleaned up after themselves."
upvotes/downvotes:  [55 10] 

For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular submission's upvote/downvote pair.

In [0]:
def joint_log_prob(upvotes, N, test_upvote_ratio):
    tfd = tfp.distributions
  
    rv_upvote_ratio = tfd.Uniform(name="upvote_ratio", low=0., high=1.)
    rv_observations = tfd.Binomial(name="obs",
                                   total_count=float(N),
                                   probs=test_upvote_ratio)
  
    return (
        rv_upvote_ratio.log_prob(test_upvote_ratio)
        + tf.reduce_sum(rv_observations.log_prob(float(upvotes)))
    )

in some cases we might want to run someting like an HMC for multiple, or a variable number, of inputs. Loops are common examples of this. Here we define our function for setting up an HMC that can take in different numbers of upvotes and/or downvotes.

In [0]:
def posterior_upvote_ratio(upvotes, downvotes):
    reset_sess()

    N = float(upvotes) + float(downvotes)

    # Initialize the step_size. (It will be automatically adapted.)
    with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
        step_size = tf.get_variable(
          name='step_size',
          initializer=tf.constant(0.5, dtype=tf.float32),
          trainable=False,
          use_resource=True
        ) 


    # Set the chain's start state.
    initial_chain_state = [
        0.5 * tf.ones([], dtype=tf.float32, name="init_upvote_ratio")
    ]


    # Since HMC operates over unconstrained space, we need to transform the
    # samples so they live in real-space.
    unconstraining_bijectors = [
        tfp.bijectors.Sigmoid()          
    ]

    # Define a closure over our joint_log_prob.
    unnormalized_posterior_log_prob = lambda *args: joint_log_prob(upvotes, N, *args)


    # Defining the HMC
    hmc=tfp.mcmc.TransformedTransitionKernel(
        inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
            target_log_prob_fn=unnormalized_posterior_log_prob,
            num_leapfrog_steps=2,
            step_size=step_size,
            step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(),
            state_gradients_are_stopped=True),
        bijector=unconstraining_bijectors)


    # Sample from the chain.
    [
        posterior_upvote_ratio
    ], kernel_results = tfp.mcmc.sample_chain(
        num_results=20000,
        num_burnin_steps=5000,
        current_state=initial_chain_state,
        kernel=hmc)


    # Initialize any created variables.
    init_g = tf.global_variables_initializer()
    init_l = tf.local_variables_initializer()
    
    evaluate(init_g)
    evaluate(init_l)
    
    return evaluate([
        posterior_upvote_ratio,
        kernel_results,
    ])
In [13]:
plt.figure(figsize(11., 8))
posteriors = []
colours = ["#5DA5DA", "#F15854", "#B276B2", "#60BD68", "#F17CB0"]
for i in range(len(submissions_)):
    j = submissions_[i]
    posteriors.append( posterior_upvote_ratio(votes[j, 0], votes[j, 1])[0] )
    plt.hist( posteriors[i], bins = 10, normed = True, alpha = .9, 
            histtype="step",color = colours[i], lw = 3,
            label = '(%d up:%d down)\n%s...'%(votes[j, 0], votes[j,1], contents[j][:50]) )
    plt.hist( posteriors[i], bins = 10, normed = True, alpha = .2, 
            histtype="stepfilled",color = colours[i], lw = 3, )
    
plt.legend(loc="upper left")
plt.xlim( 0, 1)
plt.title("Posterior distributions of upvote ratios on different submissions");

Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty with what the true upvote ratio might be.

Sorting!

We have been ignoring the goal of this exercise: how do we sort the submissions from best to worst? Of course, we cannot sort distributions, we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. Choosing the mean is a bad choice though. This is because the mean does not take into account the uncertainty of distributions.

I suggest using the 95% least plausible value, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted:

In [14]:
N = posteriors[0].shape[0]
lower_limits = []

for i in range(len(submissions_)):
    j = submissions_[i]
    plt.hist( posteriors[i], bins = 20, normed = True, alpha = .9, 
            histtype="step",color = colours[i], lw = 3,
            label = '(%d up:%d down)\n%s...'%(votes[j, 0], votes[j,1], contents[j][:50]) )
    plt.hist( posteriors[i], bins = 20, normed = True, alpha = .2, 
            histtype="stepfilled",color = colours[i], lw = 3, )
    v = np.sort( posteriors[i] )[ int(0.05*N) ]
    plt.vlines( v, 0, 30 , color = colours[i], linestyles = "--",  linewidths=3  )
    lower_limits.append(v)
    plt.legend(loc="upper left")

plt.legend(loc="upper left")
plt.title("Posterior distributions of upvote ratios on different submissions");
order = np.argsort( -np.array( lower_limits ) )
print(order, lower_limits)
[2 0 1 3] [0.8725181, 0.84175247, 0.8906297, 0.7529012]

The best submissions, according to our procedure, are the submissions that are most-likely to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.

Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. When using the lower-bound of the 95% credible interval, we believe with high certainty that the 'true upvote ratio' is at the very least equal to this value (or greater), thereby ensuring that the best submissions are still on top. Under this ordering, we impose the following very natural properties:

  1. given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio).
  2. given two submissions with the same number of votes, we still assign the submission with more upvotes as better.

But this is too slow for real-time!

I agree, computing the posterior of every submission takes a long time, and by the time you have computed it, likely the data has changed. I delay the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very fast.

$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$

where $$ \begin{align} & a = 1 + u \\ & b = 1 + d \\ \end{align} $$ $u$ is the number of upvotes, and $d$ is the number of downvotes. The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.

In [15]:
def intervals(u, d):
    a = tf.add(1., u)
    b = tf.add(1., d)
    mu = tf.divide(x=a, y=tf.add(1., u))
    std_err = 1.65 * tf.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))
    
    return (mu, std_err)
  
print("Approximate lower bounds:")
posterior_mean, std_err  = evaluate(intervals(votes[:,0],votes[:,1]))
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted according to approximate lower bounds:")
print("\n")
[ order ] = evaluate([tf.nn.top_k(lb, k=lb.shape[0], sorted=True)])
ordered_contents = []
for i, N in enumerate(order.values[:40]):
    ordered_contents.append( contents[i] )
    print(votes[i,0], votes[i,1], contents[i])
    print("-------------")
Approximate lower bounds:
[0.9958004  0.9952644  0.99469125 0.9904279  0.98615533 0.98392266
 0.98387784 0.9858639  0.9887984  0.9826459  0.98709446 0.98860824
 0.98188657 0.98530483 0.9822309  0.9848264  0.97857845 0.98753065
 0.97575593 0.97290665 0.96474063 0.9737217  0.9652639  0.96327144
 0.97547925 0.96451837 0.96231556 0.96992385 0.9663818  0.9553402
 0.9663533  0.9667713  0.9648441  0.9632066  0.9573882  0.95636785
 0.96158546 0.9553612  0.95927936 0.94961226 0.95419115 0.94762176
 0.965108   0.9519916  0.9554301  0.950122   0.95891047 0.95510113
 0.9553612  0.9413886  0.94585073 0.9413886  0.955      0.9379405
 0.96590126 0.9338504  0.93719655 0.96340525 0.9314869  0.9412543
 0.94142634 0.9360013  0.9662785  0.95730925 0.94299465 0.9348654
 0.9308132  0.953262   0.9332762  0.93518186 0.92587835 0.9478044
 0.9443473  0.9303111  0.91968226 0.9222233  0.92677134 0.92474455
 0.93195814 0.9307668  0.92651534 0.91968226 0.91678697 0.92627853
 0.9416106  0.9121545  0.9235447  0.91529703 0.9261378  0.91965044
 0.9142571  0.91032416 0.9307668  0.91678697 0.9261378  0.9135513
 0.9079874  0.9147517 ]


Top 40 Sorted according to approximate lower bounds:


6985 368 In 25 years, it's going to be really weird if a car commercial has engine sounds
-------------
6450 412 Setting a morning alarm is like placing a bomb that will blow up your dreams.
-------------
2766 86 Pity sex is literally someone giving a fuck about your pathetic life.
-------------
1109 46 The first black couple to have an albino child probably freaked out.
-------------
967 84 It is more socially acceptable to lose weight because of illness than it is to gain weight because of illness.
-------------
642 48 When an animal is severely ill, we tend to put it out of it's misery as soon as possible. When a human is severely ill, we go through extreme lengths to do anything besides 'pulling the plug'.
-------------
640 48 Male provided chromosomes determine gender, so technically semen is gender fluid.
-------------
734 47 Talking would be a pain in the ass if our teeth were flaccid until we got hungry
-------------
627 19 you know that you have been productive at work when you do not need to charge your personal phone during the day
-------------
678 67 Tom Holland is the last Spider-Man Stan Lee met
-------------
611 25 Final exams are only final if you pass them.
-------------
616 19 If never leaving your parents' house/basement as an adult is seen in society as being a failure, the royal families are essentially the biggest losers of us all.
-------------
384 20 Having nudes when you were a teenager and holding onto them until adulthood is technically possessing child pornography
-------------
285 6 You never realize how much force is put into each step until you hit your foot on a piece of furniture.
-------------
277 9 Someone who’s blind from birth doesn’t even know they’re blind until someone tells them.
-------------
540 28 Being hard at work and being hard at work are two drastically different statements
-------------
274 14 r/oddlysatisfying can become r/mildyinfuriating whenever the video stops to buffer.
-------------
939 60 How famous do you have to be in order to be "assassinated" and not murdered
-------------
216 11 Getting laid and getting off are both amazing but getting laid off sucks
-------------
205 13 A snowman would see snowflakes as their own flesh falling from the sky.
-------------
238 42 Programming is just giving an autistic machine instructions and them taking everything literally.
-------------
190 10 Chances are that someone who is against vaccination due to concerns over autism is vaccinated and is not autistic.
-------------
225 34 The most unrealistic part of Star Trek is no longer space travel, it's humanity living on a healthy Earth.
-------------
190 26 In an apocalypse scenario, the homeless would thrive whilst the majority of us will struggle severely with the adjustment.
-------------
250 16 People with a humiliation fetish literally have a "guilty pleasure"
-------------
151 13 Hufflepuffs are just the Canadians of Hogwarts.
-------------
160 18 When you clap you give yourself a high five for something someone else has accomplished
-------------
133 6 Depression steals years from your life and the life from your years.
-------------
132 8 Maybe Aliens haven't come to visit us because when they look at us from 10,000 LY away, they don't see any civilizations.
-------------
130 18 Butts are just Leg Shoulders
-------------
125 7 Someday drones will be combined with skywriting, and the sky will be full of penises.
-------------
111 5 HIMYM is a Ted Talk
-------------
112 6 Whoever first discovered static electricity probably thought they were the chosen one or something.
-------------
125 9 A reverse E-Bay, where an item is first listed at a very high price, but slowly goes down with time until someone buys it.
-------------
110 10 There are people who were born after you that have already died. Their entire life's span began and ended within yours.
-------------
111 11 A female’s nipples are censored, but if a female decides to transform into a male, those same nipples are not censored.
-------------
119 9 We are all cursed with infinite potential and limited time; and that curse is the meaning of life.
-------------
100 9 It must be shitty to be a little dog. 6-7 years old, been around experienced life. Then all of a sudden a giant baby moves in and starts kicking your ass up and down the house.
-------------
95 6 The Grinch probably got his dog (Max) because someone didn't want him and threw him in the trash
-------------
107 16 Life is just avoiding death as long as possible.
-------------

We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.

In [16]:
r_order = order.indices[::-1][-40:]
ratio_range_ = evaluate(tf.range( len(r_order)-1,-1,-1 )) 
r_order_vals = order.values[::-1][-40:]
plt.errorbar( r_order_vals, 
                             np.arange( len(r_order) ), 
               xerr=std_err[r_order], capsize=0, fmt="o",
                color = TFColor[0])
plt.xlim( 0.3, 1)
plt.yticks( ratio_range_ , map( lambda x: x[:30].replace("\n",""), ordered_contents) );