This example illustrates how Dask-ML can be used to classify large textual datasets in parallel. It is adapted from this scikit-learn example.
The primary differences are that we fit the entire model, including text vectorization, as a single pipeline, and that we use dask collections (Bag, DataFrame, and Array) rather than generators to work with larger-than-memory datasets.
from dask.distributed import Client, progress
# Start a small local cluster: 2 workers, each with 2 threads and 2 GB of memory.
client = Client(n_workers=2, threads_per_worker=2, memory_limit='2GB')
client
Scikit-Learn provides a utility to fetch the newsgroups dataset.
import sklearn.datasets
bunch = sklearn.datasets.fetch_20newsgroups()
The data from scikit-learn isn't too large, so it's simply returned in memory. Each document is a string. The target we're predicting is an integer, which codes the topic of the post.
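For instance, you can peek at the first label directly on the Bunch:
# First label as an integer and as a human-readable category name.
bunch.target[0], bunch.target_names[bunch.target[0]]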
We'll load the documents and targets directly into a dask DataFrame.
In practice, on a larger than memory dataset, you would likely load the
documents from disk or cloud storage using dask.bag
or dask.delayed
.
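For example, a minimal sketch of that out-of-core variant, assuming a hypothetical directory of text files with one document per line:
import dask.bag as db
# Hypothetical path: each line of each matching file becomes one document in the bag.
documents = db.read_text('data/newsgroups/*.txt')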
import dask.dataframe as dd
import pandas as pd
df = dd.from_pandas(pd.DataFrame({"text": bunch.data, "target": bunch.target}),
npartitions=25)
df
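You can confirm the partitioning:
# 25 partitions, as requested in from_pandas.
df.npartitions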
Each row in the text column has a bit of metadata and the full text of a post.
print(df.head().loc[0, 'text'][:500])
Dask-ML's HashingVectorizer provides a similar API to scikit-learn's implementation. In fact, Dask-ML's implementation uses scikit-learn's, applying it to each partition of the input dask.dataframe.Series or dask.bag.Bag. The transformation, once we actually compute the result, happens in parallel and returns a dask Array.
import dask_ml.feature_extraction.text
vect = dask_ml.feature_extraction.text.HashingVectorizer()
X = vect.fit_transform(df['text'])
X
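For intuition, the scikit-learn vectorizer that gets applied to each partition also works on a plain in-memory list of documents:
import sklearn.feature_extraction.text
sk_vect = sklearn.feature_extraction.text.HashingVectorizer()
# One document hashed into a single sparse row with 2**20 columns (scikit-learn's default n_features).
sk_vect.transform(['a tiny example document'])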
The output array X has unknown chunk sizes because the input dask Series or Bag doesn't know its own length. Each block in X is a scipy.sparse matrix.
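You can see the unknown sizes directly:
# The row chunk sizes are nan because the partition lengths haven't been computed.
X.chunks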
X.blocks[0].compute()
This is a document-term matrix. Each row is the hashed representation of the original post.
We can combine the HashingVectorizer with Incremental and a classifier like scikit-learn's SGDClassifier to create a classification pipeline. We'll predict whether the topic was in the comp category.
bunch.target_names
import numpy as np
# Integer labels whose category name contains 'comp'; these form the positive class.
positive = np.arange(len(bunch.target_names))[['comp' in x for x in bunch.target_names]]
y = df['target'].isin(positive).astype(int)
y
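As a quick sanity check, we can look at the class balance; this triggers a small computation:
# Counts of the negative (0) and positive (1) examples, computed in parallel.
y.value_counts().compute()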
import sklearn.linear_model
import sklearn.pipeline
import dask_ml.wrappers
Because the input comes from a dask Series with unknown chunk sizes, we need to specify assume_equal_chunks=True. This tells Dask-ML that each partition in X matches a partition in y.
sgd = sklearn.linear_model.SGDClassifier(
tol=1e-3
)
clf = dask_ml.wrappers.Incremental(
sgd, scoring='accuracy', assume_equal_chunks=True
)
pipe = sklearn.pipeline.make_pipeline(vect, clf)
SGDClassifier.partial_fit needs to know the full set of classes up front. Because our sgd is wrapped inside an Incremental, we need to pass the classes through as the incremental__classes keyword argument to fit.
pipe.fit(df['text'], y,
incremental__classes=[0, 1]);
As usual, Incremental.predict lazily returns the predictions as a dask Array.
predictions = pipe.predict(df['text'])
predictions
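Since predictions also has unknown chunk sizes, we can materialize a single block, just as we inspected X earlier:
# Predicted labels for the documents in the first partition.
predictions.blocks[0].compute()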
We can compute the predictions and score in parallel with dask_ml.metrics.accuracy_score.
import dask_ml.metrics
dask_ml.metrics.accuracy_score(y, predictions)
This simple combination of a HashingVectorizer and SGDClassifier is pretty effective at this prediction task.