A conventional hash table (or hash-table-backed set structure) consists of a series of buckets. To insert an item, we hash its key to pick a bucket and store the entry in that bucket. A minimal sketch of insert (in Python, assuming chained buckets like the exercise below uses; `ht_insert` is just an illustrative name) looks like this:
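```python
def ht_insert(buckets, key, value):
    # pick a bucket by hashing the key modulo the bucket count
    bucket = buckets[hash(key) % len(buckets)]
    for i, (k, _) in enumerate(bucket):
        if k == key:
            bucket[i] = (key, value)   # replace an existing entry
            return
    bucket.append((key, value))        # otherwise, append a new entry
```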
Hash table lookup proceeds similarly: hash the key to find its bucket, then scan that bucket for a matching entry (again, a sketch rather than a definitive implementation):
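```python
def ht_lookup(buckets, key):
    # the same hash computation selects the same bucket
    bucket = buckets[hash(key) % len(buckets)]
    for (k, v) in bucket:
        if k == key:
            return v
    raise KeyError(key)
```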
As a warm-up exercise, let's implement a precise array-backed hash set. We'll store a list in each hash bucket and handle collisions by appending to that list. Since we're storing a set, we'll just store keys (not keys and values). You'll implement the `add`, `intersection`, and `union` methods on the `HashSet` class.
Here are some hints to keep in mind:

- The `index_for` method tells you which bucket to put an item in.
- Given a `HashSet`, you can iterate over all of its items with a `for` loop, like `for it in hs:`.
- You can run `help(obj)` in a cell to get documentation for `obj`, no matter what `obj` is!
- If you get stuck, check the solution!
```python
class HashSet(object):
    def __init__(self, sz=256):
        # initialize elements to empty lists
        self.items = [[] for _ in range(sz)]
        self.size = sz

    def __len__(self):
        """ Returns the number of elements in this set """
        return sum(len(it) for it in self.items)

    def __iter__(self):
        import itertools
        return itertools.chain(*self.items)

    def index_for(self, item):
        """ Returns the index of the hash bucket for _item_ """
        return hash(item) % self.size

    def contains(self, item):
        """ Returns True if this set contains _item_ and False otherwise """
        for i in self.items[self.index_for(item)]:
            if i == item:
                return True
        return False

    def add(self, item):
        """ If _item_ is not already in the set, add it to the appropriate
            bucket. If _item_ is already in the set, do nothing. """
        # FIXME: implement this!
        pass

    def add_all(self, items):
        for item in items:
            self.add(item)

    def intersection(self, other):
        """ Returns a new set containing all the items that are members of
            both this set and _other_ """
        result = HashSet()
        # FIXME: implement this!
        return result

    def union(self, other):
        """ Returns a new set containing all the items that are members of
            either this set or _other_ """
        result = HashSet()
        # FIXME: implement this!
        return result
```
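If you want to check your work (or you're stuck), here is one possible implementation of the three missing methods -- a sketch of a solution, written as a subclass so the exercise above stays intact; `SolvedHashSet` is just an illustrative name, not necessarily the official solution:

```python
class SolvedHashSet(HashSet):
    def add(self, item):
        # only append if the item isn't already in its bucket
        if not self.contains(item):
            self.items[self.index_for(item)].append(item)

    def intersection(self, other):
        result = SolvedHashSet()
        for item in self:
            if other.contains(item):
                result.add(item)
        return result

    def union(self, other):
        result = SolvedHashSet()
        result.add_all(self)
        result.add_all(other)
        return result
```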
Once you've finished implementing a precise hash set structure, you can run some ad-hoc tests to make sure it behaves the way you'd expect.
```python
test1 = HashSet()
for item in ["a", "b", "c", "d", "e", "a", "b", "f"]:
    pre_insert = test1.contains(item)
    test1.add(item)
    post_insert = test1.contains(item)
    print(item, len(test1), sorted(test1), pre_insert, post_insert)
```
We expect the previous cell to print out:

```
a 1 ['a'] False True
b 2 ['a', 'b'] False True
c 3 ['a', 'b', 'c'] False True
d 4 ['a', 'b', 'c', 'd'] False True
e 5 ['a', 'b', 'c', 'd', 'e'] False True
a 5 ['a', 'b', 'c', 'd', 'e'] True True
b 5 ['a', 'b', 'c', 'd', 'e'] True True
f 6 ['a', 'b', 'c', 'd', 'e', 'f'] False True
```
Once we're confident that adding an element to a set works, we can also run some tests to ensure that intersection and union work the way we'd expect. For these tests, we'll check to make sure that our set behaves the same way as Python's built-in `set` type for given inputs.
```python
from itertools import combinations

failures = 0
for t in combinations(combinations(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'], 4), 2):
    left = HashSet()
    right = HashSet()
    left.add_all(t[0])
    right.add_all(t[1])
    lr = (repr(sorted(left)), repr(sorted(right)))
    if sorted(left.union(right)) != sorted(right.union(left)):
        failures += 1
        print("uh oh, union isn't commutative for %s and %s" % lr)
    if sorted(left.intersection(right)) != sorted(right.intersection(left)):
        failures += 1
        print("uh oh, intersection isn't commutative for %s and %s" % lr)
    if sorted(left.union(right)) != sorted(set(t[0]).union(set(t[1]))):
        failures += 1
        print("union wasn't what we expected for %s and %s" % lr)
    if sorted(left.intersection(right)) != sorted(set(t[0]).intersection(set(t[1]))):
        failures += 1
        print("intersection wasn't what we expected for %s and %s" % lr)

print("finished tests with %d failures" % failures)
```
Our last test checks that we handle hash collisions appropriately.
```python
hs = HashSet()
for i in range(1024):
    if len(hs) != i:
        print("len(hs) was %d; expected %d" % (len(hs), i))
    hs.add(i)
    if not hs.contains(i):
        print("hs didn't contain %d; expected it to" % i)
```
Precise sets must use space proportional to the number of elements in the set. In our implementation, we handle collisions by appending an entry to a list. This has a performance impact as the number of elements in the hash table continues to grow beyond the number of buckets, since we're no longer looking up an entry in an array keyed by a hash value (which takes constant time); we're now looking up an entry in a list (which takes time proportional to the number of elements in the list).
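For a rough, unscientific sense of this effect before we plot it, we can time a few lookups by hand (this assumes you've implemented `add` above):

```python
import timeit

hs = HashSet(256)
added = 0
for n in (1 << 8, 1 << 12, 1 << 16):
    hs.add_all(range(added, n))   # grow the set well past the bucket count
    added = n
    # time a lookup of the most recently added element, which sits at
    # the end of its (increasingly long) bucket
    t = timeit.timeit(lambda: hs.contains(n - 1), number=10000)
    print("n=%6d: %.2f microseconds per lookup" % (n, t * 100))
```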
To see this performance impact, let's plot the average time it takes to do one insert and a corresponding lookup as the number of elements grows.
```python
from datasketching import plot
plot.hash_experiment(HashSet(256), 5, 16)
```
Think of a Bloom filter as a hashed set structure that has no precise way to handle collisions. Instead, the Bloom filter ameliorates the impact of hash collisions by using multiple hash functions. The buckets in a Bloom filter are single bits: they do not record the identities of keys. When a value is inserted into the Bloom filter, each hash function selects a bucket to set to true (buckets that are already true are left unchanged). This means that if all of the buckets for a given key are true, the Bloom filter may contain it, but if any of the buckets for a given key are false, the Bloom filter definitely does not contain it.
Let's see an implementation. We'll start with a basic bit vector class so that we can efficiently store values.
```python
from datasketching.BitVector import BitVector
```
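The `BitVector` here supports indexing, `len`, and (later in the notebook) `merge_from`, `intersect_from`, and `count_set_bits`. If you don't have the `datasketching` package handy, a minimal stand-in with the same interface might look like this (an illustrative sketch, not the real class):

```python
class SimpleBitVector(object):
    def __init__(self, size):
        self.bits = [False] * size

    def __len__(self):
        return len(self.bits)

    def __getitem__(self, i):
        return self.bits[i]

    def __setitem__(self, i, v):
        self.bits[i] = bool(v)

    def merge_from(self, other):
        # in-place bitwise OR
        self.bits = [a or b for (a, b) in zip(self.bits, other.bits)]

    def intersect_from(self, other):
        # in-place bitwise AND
        self.bits = [a and b for (a, b) in zip(self.bits, other.bits)]

    def count_set_bits(self):
        return sum(self.bits)
```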
We can now implement the Bloom filter using the bit vector to store values.
```python
class Bloom(object):
    def __init__(self, size, hashes):
        """ Initializes a Bloom filter with the given size and a
            collection of hashes, which are functions taking arbitrary
            values and returning integers. hashes can be either a
            function taking a value and returning a list of results or
            a list of functions; in the latter case, this constructor
            will synthesize the former. """
        self.__buckets = BitVector(size)
        self.__size = len(self.__buckets)
        if hasattr(hashes, '__call__'):
            self.__hashes = hashes
        else:
            funs = hashes[:]
            def h(value):
                return [f(value) for f in funs]
            self.__hashes = h

    def size(self):
        return self.__size

    def insert(self, value):
        """ Inserts a value into this set """
        for h in self.__hashes(value):
            self.__buckets[h % self.__size] = True

    def lookup(self, value):
        """ Returns True if value may be in this set
            (i.e., may return false positives) """
        for h in self.__hashes(value):
            if not self.__buckets[h % self.__size]:
                return False
        return True
```
Now we'll need some different hash functions to use in our Bloom filter. We can simulate multiple hashes by using one of the hashes supplied in `hashlib` and simply masking out parts of the digest.
```python
from datasketching.hashing import hashes_for
```
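We won't rely on the details of `hashes_for` here, but the idea is simple: take one strong digest and slice it into several independent-ish hash values. Here's a sketch of how such a helper might work (`sketch_hashes_for` is illustrative; the real implementation in `datasketching.hashing` may differ):

```python
import hashlib

def sketch_hashes_for(count, hexdigits):
    """ Returns a function mapping a value to _count_ integers, each
        carved from a disjoint slice of a SHA-256 hex digest. """
    def h(value):
        digest = hashlib.sha256(str(value).encode("utf-8")).hexdigest()
        return [int(digest[i * hexdigits:(i + 1) * hexdigits], 16)
                for i in range(count)]
    return h
```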
Now let's construct a Bloom filter using our three hashes.
```python
# Make a Bloom filter with three hashes,
# each of which is 32 bits (8 hex digits)
bloom = Bloom(1024, hashes_for(3, 8))
bloom.insert("foobar")
bloom.lookup("foobar")  # True: "foobar" was inserted
bloom.lookup("absent")  # False (with high probability): "absent" was not
```
So far, so good!
We can tell that the Bloom filter uses constant time for inserts, no matter how many elements are in the set, by running the same experiment we ran against our chained hash set. To be directly comparable with the `HashSet` experiment, we'll use different hash functions (based on Python's `hash` builtin) -- in our other Bloom filter experiments, we're serializing input data and using a slower but better hash function.
```python
from datasketching import plot
from datasketching.hashing import fast_hashes_for

plot.hash_experiment(Bloom(256, fast_hashes_for()), 5, 18)
```
The tradeoff, of course, is that as the Bloom filter fills up, the false positive rate gets worse. Let's run an experiment to see how our false positive rate changes over time. We're going to construct a random stream of values and insert them into a Bloom filter -- but we're going to look each one up first. Since it is extremely improbable that we'll see the same random value twice in a short simulation (the period of the Mersenne Twister that Python uses is far too large for that), we can be fairly certain that any value for which `lookup` returns true before we've inserted it is a false positive. We'll record the false positive rate every 100 samples.
```python
def bloom_experiment(sample_count, size, hashes, seed=0x15300625):
    import random
    random.seed(seed)
    bloom = Bloom(size, hashes)
    result = []
    false_positives = 0
    for i in range(sample_count):
        bits = random.getrandbits(64)
        if bloom.lookup(bits):
            false_positives = false_positives + 1
        bloom.insert(bits)
        if i % 100 == 0:
            result.append((i + 1, false_positives / float(i + 1)))
    # record the final false positive rate as well
    result.append((i + 1, false_positives / float(i + 1)))
    return result
```
Let's set up plotting (using the Altair API for the Vega-Lite visualization grammar):
```python
import altair as alt
alt.renderers.enable('notebook')
```
Then we can run an experiment and plot the results:
```python
from pandas import DataFrame

results = bloom_experiment(1 << 18, 4096, hashes_for(3, 8))
df = DataFrame.from_records(results)
df.rename(columns={0: "cardinality", 1: "FPR"}, inplace=True)
alt.Chart(df).mark_line().encode(alt.X("cardinality", scale=alt.Scale(type="log", base=2)), y="FPR")
```
We can see how increasing the size of the filter changes our results:
```python
results = bloom_experiment(1 << 18, 16384, hashes_for(3, 8))
df = DataFrame.from_records(results)
df.rename(columns={0: "cardinality", 1: "FPR"}, inplace=True)
alt.Chart(df).mark_line().encode(alt.X("cardinality", scale=alt.Scale(type="log", base=2)), y="FPR")
```
We can analytically predict a false positive rate for a given Bloom filter. If $k$ is the number of hash functions, $m$ is the size of the Bloom filter in bits, and $n$ is the number of elements in the set, then the probability that any particular bit is still unset after $n$ insertions is $(1 - 1/m)^{kn} \approx e^{-kn/m}$; since a false positive requires all $k$ probed bits to be set, we can expect a false positive rate of $(1 - e^{-kn/m})^k$. Let's plot that function for our previous example:
```python
import math

results = []
hash_count = 3
filter_size = 16384
entries = 0
while entries < 1 << 18:
    fpr = math.pow(1 - math.exp(-(hash_count * (entries + 1)) / filter_size), hash_count)
    results.append((entries + 1, fpr))
    entries = entries + 100

df = DataFrame.from_records(results)
df.rename(columns={0: "cardinality", 1: "FPR"}, inplace=True)
alt.Chart(df).mark_line().encode(alt.X("cardinality", scale=alt.Scale(type="log", base=2)), y="FPR")
```
As we can see, our predicted false positive rate lines up very closely with the false positive rate we observed.
Since it is possible to incrementally update a Bloom filter by adding a single element, the Bloom filter is suitable for stream processing.
However, it is also possible to find the union of two Bloom filters if they have the same size and were constructed with the same hash functions, which means the Bloom filter is also suitable for parallel batch processing (i.e., approximating a very large set by combining the Bloom filters approximating its subsets). The union of the Bloom filters approximating sets $A$ and $B$ is simply the bucketwise OR of the two filters, and it is exactly the filter we'd get by building a Bloom filter directly from $A \cup B$.
It is also possible to find the intersection of two Bloom filters by taking their bucketwise AND. $\mathrm{Bloom}(A) \cap \mathrm{Bloom}(B)$ may be less precise than $\mathrm{Bloom}(A \cap B)$; the upper bound on the false positive rate for $\mathrm{Bloom}(A) \cap \mathrm{Bloom}(B)$ is the greater of the false positive rates of $\mathrm{Bloom}(A)$ and $\mathrm{Bloom}(B)$. Here's our Bloom filter again, extended with `merge_from`, `intersect`, and `union` methods:
```python
class Bloom(object):
    def __init__(self, size, hashes):
        """ Initializes a Bloom filter with the given size and a
            collection of hashes, which are functions taking arbitrary
            values and returning integers. hashes can be either a
            function taking a value and returning a list of results or
            a list of functions; in the latter case, this constructor
            will synthesize the former. """
        self.__buckets = BitVector(size)
        self.__size = len(self.__buckets)
        if hasattr(hashes, '__call__'):
            self.__hashes = hashes
        else:
            funs = hashes[:]
            def h(value):
                return [int(f(value)) for f in funs]
            self.__hashes = h

    def size(self):
        return self.__size

    def insert(self, value):
        """ Inserts a value into this set """
        for h in self.__hashes(value):
            self.__buckets[h % self.__size] = True

    def lookup(self, value):
        """ Returns True if value may be in this set
            (i.e., may return false positives) """
        for h in self.__hashes(value):
            if not self.__buckets[h % self.__size]:
                return False
        return True

    def merge_from(self, other):
        """ Merges other into this filter by taking the bitwise OR of
            this and other. Updates this filter in place. """
        self.__buckets.merge_from(other.__buckets)

    def intersect(self, other):
        """ Takes the approximate intersection of this and other,
            returning a new filter approximating the membership of the
            intersection of the sets approximated by self and other.
            The upper bound on the false positive rate of the resulting
            filter is the greater of the false positive rates of self
            and other (but the FPR may be worse than the FPR of a Bloom
            filter constructed only from the values in the intersection
            of the sets approximated by self and other). """
        b = Bloom(self.size(), self.__hashes)
        b.__buckets.merge_from(self.__buckets)
        b.__buckets.intersect_from(other.__buckets)
        return b

    def union(self, other):
        """ Generates a Bloom filter approximating the membership of
            the union of the sets approximated by self and other.
            Unlike intersect, this does not affect the precision of the
            filter (i.e., its precision will be identical to that of a
            Bloom filter built up from the union of the two sets). """
        b = Bloom(self.size(), self.__hashes)
        b.__buckets.merge_from(self.__buckets)
        b.__buckets.merge_from(other.__buckets)
        return b

    def dup(self):
        """ Returns a copy of this filter """
        b = Bloom(self.size(), self.__hashes)
        b.merge_from(self)
        return b
```
We can see these in action:
```python
b1 = Bloom(1024, hashes_for(3, 8))
b2 = Bloom(1024, hashes_for(3, 8))

b1.insert("foo")
b1.insert("bar")
b2.insert("foo")
b2.insert("blah")

b_intersect = b1.intersect(b2)
b_intersect.lookup("foo")   # True: "foo" is in both filters
b_intersect.lookup("blah")  # False (barring collisions): "blah" is only in b2

b_union = b1.union(b2)
b_union.lookup("blah"), b_union.lookup("bar")  # (True, True)
```
The partitioned Bloom filter simply divides the set of buckets into several partitions, one for each hash function, so that a bit in partition 0 can only be set by hash 0, a bit in partition 1 only by hash 1, and so on; with three partitions of size 4, for example, hash 0 addresses bits 0-3, hash 1 addresses bits 4-7, and hash 2 addresses bits 8-11. A major advantage of the partitioned Bloom filter is that it has a better false positive rate under intersection (see the reference to Jeffrey and Steffan below), which makes it better suited to identifying potential conflicts between very large sets.
Because we track the number of hash functions explicitly (it is the number of partitions), we can also easily adapt the cardinality estimation technique of Swamidass and Baldi.
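Their estimator, which the `approx_cardinality` method below implements, is

$$\hat{n} = -\frac{m}{k} \ln\left(1 - \frac{X}{m}\right)$$

where $m$ is the total number of bits in the filter, $k$ is the number of hash functions (here, the number of partitions), and $X$ is the number of set bits.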
```python
class PartitionedBloom(object):
    def __init__(self, size, hashes):
        """ Initializes a partitioned Bloom filter with the given
            per-partition size and a collection of hashes, which are
            functions taking arbitrary values and returning integers.
            The partition count is the number of hashes. hashes can be
            either a function taking a value and returning a list of
            results or a list of functions; in the latter case, this
            constructor will synthesize the former. """
        if hasattr(hashes, '__call__'):
            self.__hashes = hashes
            # inspect the list returned by the hash function to get a depth
            self.__depth = len(hashes(bytes()))
        else:
            funs = hashes[:]
            self.__depth = len(hashes)
            def h(value):
                return [int(f(value)) for f in funs]
            self.__hashes = h
        self.__buckets = BitVector(size * self.__depth)
        self.__size = size

    def size(self):
        return self.__size

    def partitions(self):
        return self.__depth

    def insert(self, value):
        """ Inserts a value into this set """
        for (p, row) in enumerate(self.__hashes(value)):
            # hash p can only set bits in partition p
            self.__buckets[(p * self.__size) + (row % self.__size)] = True

    def lookup(self, value):
        """ Returns True if value may be in this set
            (i.e., may return false positives) """
        for (p, row) in enumerate(self.__hashes(value)):
            if not self.__buckets[(p * self.__size) + (row % self.__size)]:
                return False
        return True

    def merge_from(self, other):
        """ Merges other into this filter by taking the bitwise OR of
            this and other. Updates this filter in place. """
        self.__buckets.merge_from(other.__buckets)

    def intersect(self, other):
        """ Takes the approximate intersection of this and other,
            returning a new filter approximating the membership of the
            intersection of the sets approximated by self and other.
            The upper bound on the false positive rate of the resulting
            filter is the greater of the false positive rates of self
            and other (but the FPR may be worse than the FPR of a filter
            constructed only from the values in the intersection of the
            sets approximated by self and other). """
        b = PartitionedBloom(self.size(), self.__hashes)
        b.__buckets.merge_from(self.__buckets)
        b.__buckets.intersect_from(other.__buckets)
        return b

    def union(self, other):
        """ Generates a filter approximating the membership of the
            union of the sets approximated by self and other. Unlike
            intersect, this does not affect the precision of the filter
            (i.e., its precision will be identical to that of a filter
            built up from the union of the two sets). """
        b = PartitionedBloom(self.size(), self.__hashes)
        b.__buckets.merge_from(self.__buckets)
        b.__buckets.merge_from(other.__buckets)
        return b

    def dup(self):
        """ Returns a copy of this filter """
        b = PartitionedBloom(self.size(), self.__hashes)
        b.merge_from(self)
        return b

    def approx_cardinality(self):
        """ Returns an estimate of the cardinality of the set modeled
            by this filter, using the technique of Swamidass and Baldi. """
        from math import log
        m, k = self.size() * self.partitions(), self.partitions()
        X = self.__buckets.count_set_bits()
        return -(m / k) * log(1 - (X / m))
```
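A quick sanity check of the cardinality estimate (the exact value will vary with the hash functions, but it should land near the true count):

```python
pb = PartitionedBloom(16384, hashes_for(8, 4))
for i in range(10000):
    pb.insert(i)

# the estimate won't be exact, but it should be close to 10000
print(pb.approx_cardinality())
```

Next, let's compare how the plain and partitioned filters behave under intersection. The experiment below builds two overlapping sets of random values, intersects the corresponding filters, and counts lookups that succeed for values outside the true intersection: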
```python
def pbloom_experiment(sample_count, size, hashes, mod1=3, mod2=7, seed=0x15300625):
    import random
    random.seed(seed)
    pb1 = PartitionedBloom(size, hashes)
    pb2 = PartitionedBloom(size, hashes)
    b1 = Bloom(pb1.size() * pb1.partitions(), hashes)
    b2 = Bloom(pb1.size() * pb1.partitions(), hashes)
    pb_fp, b_fp = 0, 0
    count = 0
    for i in range(sample_count):
        bits = random.getrandbits(64)
        if i % mod1 == 0:
            pb1.insert(bits)
            b1.insert(bits)
        if i % mod2 == 0:
            pb2.insert(bits)
            b2.insert(bits)
            if i % mod1 == 0:
                # this value is in both sets, so it is a true
                # member of the intersection
                count += 1
    pb = pb1.intersect(pb2)
    b = b1.intersect(b2)
    # replay the same stream of values and count lookups that succeed
    # for values outside the true intersection
    random.seed(seed)
    for i in range(sample_count):
        bits = random.getrandbits(64)
        if pb.lookup(bits) and ((i % mod1 != 0) or (i % mod2 != 0)):
            pb_fp += 1
        if b.lookup(bits) and ((i % mod1 != 0) or (i % mod2 != 0)):
            b_fp += 1
    return (count, b_fp, pb_fp)
```
```python
results = []
for pwr in range(10, 17):
    for count in [1 << pwr, (1 << pwr) + (1 << (pwr - 1))]:
        tp, bfp, pbfp = pbloom_experiment(count, 16384, hashes_for(8, 4))
        results.append(("Bloom", count, bfp / (float(tp) + bfp)))
        results.append(("partitioned Bloom", count, pbfp / (float(tp) + pbfp)))

df = DataFrame.from_records(results)
df.rename(columns={0: "kind", 1: "cardinality", 2: "FPR"}, inplace=True)

import altair as alt
alt.renderers.enable('notebook')

base = alt.Chart(df).encode(alt.X("cardinality", scale=alt.Scale(type="log", base=2)), y="FPR", color="kind")
base.mark_point() + base.mark_line()
```
Bloom filters are also useful in query processing: consider a distributed join like `SELECT * FROM A, B WHERE A.x = B.x`. By broadcasting Bloom filters of the sets of values for `x` in both `A` and `B`, it is possible to filter out many tuples that would never appear in the result of the join.
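Here's a sketch of that idea on one side of the join, where `table_a` and `table_b` are assumed to be iterables of dicts (they aren't defined in this notebook):

```python
# build a filter over the join keys of A...
bloom_a = Bloom(1 << 16, hashes_for(3, 8))
for row in table_a:            # table_a is an assumed iterable of dicts
    bloom_a.insert(row["x"])

# ...and use it to discard most of B's non-matching tuples early.
# candidates may still contain false positives, so the join must
# re-check A.x = B.x, but far fewer tuples need to be shipped around.
candidates = [row for row in table_b if bloom_a.lookup(row["x"])]
```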