#!/usr/bin/env python
# coding: utf-8

# You can read an overview of this Numerical Linear Algebra course in [this blog post](http://www.fast.ai/2017/07/17/num-lin-alg/). The course was originally taught in the [University of San Francisco MS in Analytics](https://www.usfca.edu/arts-sciences/graduate-programs/analytics) graduate program. Course lecture videos are [available on YouTube](https://www.youtube.com/playlist?list=PLtmWHNX-gukIc92m1K0P6bIOnZb-mg0hY) (note that the notebook numbers and video numbers do not line up, since some notebooks took longer than one video to cover).
#
# You can ask questions about the course on [our fast.ai forums](http://forums.fast.ai/c/lin-alg).

# # 2. Topic Modeling with NMF and SVD

# Topic modeling is a great way to get started with matrix factorizations. We start with a **term-document matrix**:
#
# term-document matrix
#
# (source: [Introduction to Information Retrieval](http://player.slideplayer.com/15/4528582/#))
#
# We can decompose this into one tall thin matrix times one wide short matrix (possibly with a diagonal matrix in between).
#
# Notice that this representation does not take into account word order or sentence structure. It's an example of a **bag of words** approach.

# ### Motivation

# Consider the most extreme case - reconstructing the matrix using an outer product of two vectors. Clearly, in most cases we won't be able to reconstruct the matrix exactly. But if we had one vector with the relative frequency of each vocabulary word out of the total word count, and one with the average number of words per document, then that outer product would be as close as we can get.
#
# Now consider increasing those factors to two columns and two rows. The optimal decomposition would now be to cluster the documents into two groups, each of which has a distribution of words as different as possible from the other's, but as similar as possible amongst the documents in the cluster. We will call those two groups "topics". And we would cluster the words into two groups, based on those which most frequently appear in each of the topics.

# ### In today's class

# We'll take a dataset of documents in several different categories, and find topics (consisting of groups of words) for them. Knowing the actual categories helps us evaluate whether the topics we find make sense.
#
# We will try this with two different matrix factorizations: **Singular Value Decomposition (SVD)** and **Non-negative Matrix Factorization (NMF)**.

# In[4]:

import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn import decomposition
from scipy import linalg
import matplotlib.pyplot as plt

# In[5]:

get_ipython().run_line_magic('matplotlib', 'inline')
np.set_printoptions(suppress=True)

# ## Additional Resources

# - [Data source](http://scikit-learn.org/stable/datasets/twenty_newsgroups.html): Newsgroups are discussion groups on Usenet, which was popular in the 80s and 90s before the web really took off. This dataset includes 18,000 newsgroups posts with 20 topics.
# - [Chris Manning's book chapter](https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf) on matrix factorization and LSI
# - Scikit learn [truncated SVD LSI details](http://scikit-learn.org/stable/modules/decomposition.html#lsa)

# ### Other Tutorials

# - [Scikit-Learn: Out-of-core classification of text documents](http://scikit-learn.org/stable/auto_examples/applications/plot_out_of_core_classification.html): uses the [Reuters-21578](https://archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection) dataset (Reuters articles labeled with ~100 categories) and HashingVectorizer
# - [Text Analysis with Topic Models for the Humanities and Social Sciences](https://de.dariah.eu/tatom/index.html): uses a [British and French Literature dataset](https://de.dariah.eu/tatom/datasets.html) of Jane Austen, Charlotte Bronte, Victor Hugo, and more

# ## Set up data

# Scikit Learn comes with a number of built-in datasets, as well as loading utilities to load several standard external datasets. This is a [great resource](http://scikit-learn.org/stable/datasets/), and the datasets include Boston housing prices, face images, patches of forest, diabetes, breast cancer, and more. We will be using the newsgroups dataset.
#
# Newsgroups are discussion groups on Usenet, which was popular in the 80s and 90s before the web really took off. This dataset includes 18,000 newsgroups posts with 20 topics.

# In[6]:

categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove)

# In[8]:

newsgroups_train.filenames.shape, newsgroups_train.target.shape

# Let's look at some of the data. Can you guess which category these messages are in?

# In[55]:

print("\n".join(newsgroups_train.data[:3]))

# hint: the definition of *perijove* is the point in the orbit of a satellite of Jupiter nearest the planet's center

# In[249]:

np.array(newsgroups_train.target_names)[newsgroups_train.target[:3]]

# The target attribute is the integer index of the category.

# In[58]:

newsgroups_train.target[:10]

# In[11]:

num_topics, num_top_words = 6, 8

# Next, scikit learn has a method that will extract all the word counts for us.

# In[7]:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# In[342]:

vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data).todense()  # (documents, vocab)
vectors.shape  #, vectors.nnz / vectors.shape[0], row_means.shape

# In[343]:

print(len(newsgroups_train.data), vectors.shape)

# In[303]:

vocab = np.array(vectorizer.get_feature_names())

# In[304]:

vocab.shape

# In[18]:

vocab[7000:7020]

# ## Singular Value Decomposition (SVD)

# "SVD is not nearly as famous as it should be." - Gilbert Strang

# We would clearly expect that the words that appear most frequently in one topic would appear less frequently in the other - otherwise that word wouldn't make a good choice to separate out the two topics. Therefore, we expect the topics to be **orthogonal**.
#
# The SVD algorithm factorizes a matrix into one matrix with **orthogonal columns** and one with **orthogonal rows** (along with a diagonal matrix, which contains the **relative importance** of each factor).
#
# (source: [Facebook Research: Fast Randomized SVD](https://research.fb.com/fast-randomized-svd/))
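# Before applying this to our term-document matrix, here is a quick sanity check of those claims (a minimal sketch on a made-up random matrix, not part of the notebook's data pipeline): the factors multiply back to the input exactly, `U` has orthonormal columns, and `Vh` has orthonormal rows. The exercises below ask you to confirm the same things for the newsgroups `vectors`.

# In[ ]:

# minimal sketch on a small random matrix (not the newsgroups data):
# verify that U @ diag(s) @ Vh reconstructs the input and that U, Vh are orthonormal
A_small = np.random.normal(size=(6, 4))
U_small, s_small, Vh_small = linalg.svd(A_small, full_matrices=False)

print(np.allclose(A_small, U_small @ np.diag(s_small) @ Vh_small))  # exact decomposition
print(np.allclose(U_small.T @ U_small, np.eye(4)))                  # orthonormal columns
print(np.allclose(Vh_small @ Vh_small.T, np.eye(4)))                # orthonormal rows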
# SVD is an **exact decomposition**, since the matrices it creates are big enough to fully cover the original matrix. SVD is extremely widely used in linear algebra, and specifically in data science, including:
#
# - semantic analysis
# - collaborative filtering/recommendations ([winning entry for Netflix Prize](https://datajobs.com/data-science-repo/Recommender-Systems-%5BNetflix%5D.pdf))
# - calculating the Moore-Penrose pseudoinverse
# - data compression
# - principal component analysis (will be covered later in the course)

# In[344]:

get_ipython().run_line_magic('time', 'U, s, Vh = linalg.svd(vectors, full_matrices=False)')

# In[345]:

print(U.shape, s.shape, Vh.shape)

# Confirm this is a decomposition of the input.

# #### Answer

# In[346]:

#Exercise: confirm that U, s, Vh is a decomposition of the variable `vectors`

# Confirm that U, Vh are orthonormal.

# #### Answer

# In[246]:

#Exercise: Confirm that U, Vh are orthonormal

# #### Topics

# What can we say about the singular values s?

# In[96]:

plt.plot(s);

# In[97]:

plt.plot(s[:10])

# In[52]:

num_top_words = 8

def show_topics(a):
    top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]]
    topic_words = ([top_words(t) for t in a])
    return [' '.join(t) for t in topic_words]

# In[347]:

show_topics(Vh[:10])

# We get topics that match the kinds of clusters we would expect! This is despite the fact that this is an **unsupervised algorithm** - which is to say, we never actually told the algorithm how our documents are grouped.

# We will return to SVD in **much more detail** later. For now, the important takeaway is that we have a tool that allows us to exactly factor a matrix into orthogonal columns and orthogonal rows.

# ## Non-negative Matrix Factorization (NMF)

# #### Motivation

# PCA on faces
#
# (source: [NMF Tutorial](http://perso.telecom-paristech.fr/~essid/teach/NMF_tutorial_ICME-2014.pdf))
#
# A more interpretable approach:
#
# NMF on Faces
#
# (source: [NMF Tutorial](http://perso.telecom-paristech.fr/~essid/teach/NMF_tutorial_ICME-2014.pdf))

# #### Idea

# Rather than constraining our factors to be *orthogonal*, another idea would be to constrain them to be *non-negative*. NMF is a factorization of a non-negative data set $V$: $$ V = W H$$ into non-negative matrices $W,\; H$. Often non-negative factors will be **more easily interpretable** (and this is the reason behind NMF's popularity).
#
# NMF on faces
#
# (source: [NMF Tutorial](http://perso.telecom-paristech.fr/~essid/teach/NMF_tutorial_ICME-2014.pdf))
#
# Non-negative matrix factorization (NMF) is a non-exact factorization that factors into one skinny non-negative matrix and one short non-negative matrix. NMF is NP-hard and non-unique. There are a number of variations on it, created by adding different constraints.
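# To make the $V \approx WH$ structure concrete, here is a minimal sketch of one classic NMF variant (Lee & Seung's multiplicative updates) on a small made-up non-negative matrix. This is just an illustration of the idea; below we use sklearn's implementation and then write our own with SGD.

# In[ ]:

# minimal sketch: NMF by multiplicative updates on a small random non-negative matrix
# (illustrative only; the small eps guards against division by zero)
def nmf_mu(V, d, n_iter=200, eps=1e-9):
    m, n = V.shape
    W = np.abs(np.random.normal(size=(m, d)))
    H = np.abs(np.random.normal(size=(d, n)))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W non-negative
    return W, H

V_toy = np.abs(np.random.normal(size=(10, 8)))
W_toy, H_toy = nmf_mu(V_toy, d=3)
print(np.linalg.norm(V_toy - W_toy @ H_toy), W_toy.min(), H_toy.min())  # approximate, and non-negative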
# #### Applications of NMF

# - [Face Decompositions](http://scikit-learn.org/stable/auto_examples/decomposition/plot_faces_decomposition.html#sphx-glr-auto-examples-decomposition-plot-faces-decomposition-py)
# - [Collaborative Filtering, eg movie recommendations](http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/)
# - [Audio source separation](https://pdfs.semanticscholar.org/cc88/0b24791349df39c5d9b8c352911a0417df34.pdf)
# - [Chemistry](http://ieeexplore.ieee.org/document/1532909/)
# - [Bioinformatics](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-015-0485-4) and [Gene Expression](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2623306/)
# - Topic Modeling (our problem!)
#
# NMF on documents
#
# (source: [NMF Tutorial](http://perso.telecom-paristech.fr/~essid/teach/NMF_tutorial_ICME-2014.pdf))

# **More Reading**:
#
# - [The Why and How of Nonnegative Matrix Factorization](https://arxiv.org/pdf/1401.5226.pdf)

# ### NMF from sklearn

# First, we will use [scikit-learn's implementation of NMF](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html):

# In[13]:

m, n = vectors.shape
d = 5  # num topics

# In[363]:

clf = decomposition.NMF(n_components=d, random_state=1)

W1 = clf.fit_transform(vectors)
H1 = clf.components_

# In[296]:

show_topics(H1)

# ### TF-IDF

# [Term Frequency-Inverse Document Frequency](http://www.tfidf.com/) (TF-IDF) is a way to normalize term counts by taking into account how often they appear in a document, how long the document is, and how common/rare the term is.
#
# TF = (# occurrences of term t in document) / (# of words in document)
#
# IDF = log(# of documents / # of documents with term t in them)

# In[ ]:

vectorizer_tfidf = TfidfVectorizer(stop_words='english')
vectors_tfidf = vectorizer_tfidf.fit_transform(newsgroups_train.data)  # (documents, vocab)

# In[263]:

W1 = clf.fit_transform(vectors_tfidf)
H1 = clf.components_

# In[255]:

show_topics(H1)

# In[26]:

plt.plot(clf.components_[0])

# In[27]:

clf.reconstruction_err_

# ### NMF in summary

# Benefits: Fast and easy to use!
#
# Downsides: took years of research and expertise to create
#
# Notes:
# - For NMF, the matrix needs to be at least as tall as it is wide, or we get an error with fit_transform
# - Can use min_df in CountVectorizer to only look at words that appear in at least k of the split texts

# ### NMF from scratch in numpy, using SGD

# #### Gradient Descent

# The key idea of standard **gradient descent**:
#
# 1. Randomly choose some weights to start
# 2. Loop:
#    - Use weights to calculate a prediction
#    - Calculate the derivative of the loss
#    - Update the weights
# 3. Repeat step 2 lots of times. Eventually we end up with some decent weights.
#
# **Key**: We want to decrease our loss and the derivative tells us the direction of **steepest descent**.
#
# Note that *loss*, *error*, and *cost* are all terms used to describe the same thing.
#
# Let's take a look at the [Gradient Descent Intro notebook](gradient-descent-intro.ipynb) (originally from the [fast.ai deep learning course](https://github.com/fastai/courses)).
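# A minimal sketch of that loop on a toy one-parameter problem (fitting the slope $a$ in $y = ax$ by minimizing mean squared error; this is made up for illustration, not taken from the linked notebook):

# In[ ]:

# minimal sketch of the gradient descent loop above, on a toy 1-parameter problem
x_toy = np.linspace(0, 1, 50)
y_toy = 3.0 * x_toy                # "true" slope is 3
a = np.random.randn()              # 1. randomly choose a weight to start
lr_toy = 0.5
for _ in range(100):               # 2. loop lots of times
    y_pred = a * x_toy                               # use the weight to calculate a prediction
    grad = 2 * np.mean((y_pred - y_toy) * x_toy)     # derivative of the MSE loss w.r.t. a
    a -= lr_toy * grad                               # update the weight
print(a)                           # should be close to 3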
# #### Stochastic Gradient Descent (SGD)

# **Stochastic gradient descent** is an incredibly useful optimization method (it is also the heart of deep learning, where it is used for backpropagation).
#
# For *standard* gradient descent, we evaluate the loss using **all** of our data, which can be really slow. In *stochastic* gradient descent, we evaluate our loss function on just a sample of our data (sometimes called a *mini-batch*). We would get different loss values on different samples of the data, so this is *why it is stochastic*. It turns out that this is still an effective way to optimize, and it's much more efficient!
#
# We can see how this works in this [excel spreadsheet](graddesc.xlsm) (originally from the [fast.ai deep learning course](https://github.com/fastai/courses)).
#
# **Resources**:
# - [SGD Lecture from Andrew Ng's Coursera ML course](https://www.coursera.org/learn/machine-learning/lecture/DoRHJ/stochastic-gradient-descent)
# - fast.ai wiki page on SGD
# - [Gradient Descent For Machine Learning](http://machinelearningmastery.com/gradient-descent-for-machine-learning/) (Jason Brownlee - Machine Learning Mastery)
# - [An overview of gradient descent optimization algorithms](http://sebastianruder.com/optimizing-gradient-descent/)

# #### Applying SGD to NMF

# **Goal**: Decompose $V\;(m \times n)$ into $$V \approx WH$$ where $W\;(m \times d)$ and $H\;(d \times n)$, $W,\;H\;\geq\;0$, and we've minimized the Frobenius norm of $V-WH$.
#
# **Approach**: We will pick random positive $W$ & $H$, and then use SGD to optimize.

# **To use SGD, we need to know the gradient of the loss function.**
#
# **Sources**:
# - Optimality and gradients of NMF: http://users.wfu.edu/plemmons/papers/chu_ple.pdf
# - Projected gradients: https://www.csie.ntu.edu.tw/~cjlin/papers/pgradnmf.pdf

# In[272]:

lam = 1e3
lr = 1e-2
m, n = vectors_tfidf.shape

# In[252]:

W1 = clf.fit_transform(vectors)
H1 = clf.components_

# In[253]:

show_topics(H1)

# In[265]:

mu = 1e-6

def grads(M, W, H):
    R = W@H - M
    return R@H.T + penalty(W, mu)*lam, W.T@R + penalty(H, mu)*lam  # dW, dH

# In[266]:

def penalty(M, mu):
    # elementwise: zero where M >= mu, otherwise the (negative) amount by which M falls below mu
    return np.where(M >= mu, 0, np.minimum(M - mu, 0))

# In[267]:

def upd(M, W, H, lr):
    dW, dH = grads(M, W, H)
    W -= lr*dW
    H -= lr*dH

# In[268]:

def report(M, W, H):
    print(np.linalg.norm(M - W@H), W.min(), H.min(), (W < 0).sum(), (H < 0).sum())

# In[348]:

W = np.abs(np.random.normal(scale=0.01, size=(m, d)))
H = np.abs(np.random.normal(scale=0.01, size=(d, n)))

# In[349]:

report(vectors_tfidf, W, H)

# In[350]:

upd(vectors_tfidf, W, H, lr)

# In[351]:

report(vectors_tfidf, W, H)

# In[352]:

for i in range(50):
    upd(vectors_tfidf, W, H, lr)
    if i % 10 == 0:
        report(vectors_tfidf, W, H)

# In[281]:

show_topics(H)

# This is painfully slow to train! Lots of parameter fiddling and still slow to train (or explodes).

# ### PyTorch

# [PyTorch](http://pytorch.org/) is a Python framework for tensors and dynamic neural networks with GPU acceleration. Many of the core contributors work on Facebook's AI team. In many ways, it is similar to Numpy, only with the increased parallelization of using a GPU.
#
# From the [PyTorch documentation](http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html):
#
# pytorch
#
# **Further learning**: If you are curious to learn what *dynamic* neural networks are, you may want to watch [this talk](https://www.youtube.com/watch?v=Z15cBAuY7Sc) by Soumith Chintala, Facebook AI researcher and core PyTorch contributor.
#
# If you want to learn more PyTorch, you can try this [tutorial](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) or this [learning by examples](http://pytorch.org/tutorials/beginner/pytorch_with_examples.html).
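# A minimal sketch of the numpy-to-PyTorch correspondence used in the next cells (guarded with `torch.cuda.is_available()` so it also runs on a CPU-only machine):

# In[ ]:

# minimal sketch: PyTorch tensors behave much like numpy arrays; .cuda() moves them to the GPU
import torch

a_np = np.random.normal(size=(3, 4)).astype(np.float32)
a_t = torch.Tensor(a_np)
if torch.cuda.is_available():
    a_t = a_t.cuda()                 # the cells below assume a GPU and call .cuda() directly

prod_t = a_t.mm(a_t.t())             # matrix multiply, like a_np @ a_np.T
print(np.allclose(prod_t.cpu().numpy(), a_np @ a_np.T, atol=1e-4))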
# **Note about GPUs**: If you are not using a GPU, you will need to remove the `.cuda()` from the methods below. GPU usage is not required for this course, but I thought it would be of interest to some of you. To learn how to create an AWS instance with a GPU, you can watch the [fast.ai setup lesson](http://course.fast.ai/lessons/aws.html).

# In[282]:

import torch
import torch.cuda as tc
from torch.autograd import Variable

# In[283]:

def V(M):
    return Variable(M, requires_grad=True)

# In[284]:

v = vectors_tfidf.todense()

# In[285]:

t_vectors = torch.Tensor(v.astype(np.float32)).cuda()

# In[286]:

mu = 1e-5

# In[287]:

def grads_t(M, W, H):
    R = W.mm(H) - M
    return (R.mm(H.t()) + penalty_t(W, mu)*lam, W.t().mm(R) + penalty_t(H, mu)*lam)  # dW, dH

def penalty_t(M, mu):
    # mirrors the numpy penalty above: zero where M >= mu, otherwise (M - mu) clamped to be <= 0
    return (M < mu).type(tc.FloatTensor) * torch.clamp(M - mu, max=0.)

# (source: [Python Nimfa Documentation](http://nimfa.biolab.si/))

# #### Using PyTorch and SGD

# - Took us an hour to implement, didn't have to be NMF experts
# - Parameters were fiddly
# - Not as fast (we tried it in numpy first, and it was so slow we had to switch to PyTorch)

# ## Truncated SVD

# We saved a lot of time when we calculated NMF by only calculating the subset of columns we were interested in. Is there a way to get this benefit with SVD? Yes there is! It's called truncated SVD. We are just interested in the vectors corresponding to the **largest** singular values.
#
# (source: [Facebook Research: Fast Randomized SVD](https://research.fb.com/fast-randomized-svd/))

# #### Shortcomings of classical algorithms for decomposition:

# - Matrices are "stupendously big"
# - Data are often **missing or inaccurate**. Why spend extra computational resources when imprecision of input limits precision of the output?
# - **Data transfer** now plays a major role in the runtime of algorithms. Techniques that require fewer passes over the data may be substantially faster, even if they require more flops (flops = floating point operations).
# - It is important to take advantage of **GPUs**.
#
# (source: [Halko](https://arxiv.org/abs/0909.4061))

# #### Advantages of randomized algorithms:

# - inherently stable
# - performance guarantees do not depend on subtle spectral properties
# - the needed matrix-vector products can be done in parallel
#
# (source: [Halko](https://arxiv.org/abs/0909.4061))

# ### Randomized SVD

# Reminder: full SVD is **slow**. This is the calculation we did above using Scipy's Linalg SVD:

# In[384]:

vectors.shape

# In[344]:

get_ipython().run_line_magic('time', 'U, s, Vh = linalg.svd(vectors, full_matrices=False)')

# In[345]:

print(U.shape, s.shape, Vh.shape)

# Fortunately, there is a faster way:

# In[175]:

get_ipython().run_line_magic('time', 'u, s, v = decomposition.randomized_svd(vectors, 5)')

# The runtime complexity for SVD is $\mathcal{O}(\text{min}(m^2 n,\; m n^2))$.

# **Question**: How can we speed things up? (without new breakthroughs in SVD research)

# **Idea**: Let's use a smaller matrix (with smaller $n$)!
#
# Instead of calculating the SVD on our full matrix $A$, which is $m \times n$, let's use $B = A Q$, which is just $m \times r$ with $r \ll n$.
#
# We haven't found a better general SVD method, we are just using the method we have on a smaller matrix.
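# A minimal illustration of that idea on a small made-up low-rank matrix: restrict $A$ to a random low-dimensional subspace that captures (approximately) its range, run the ordinary SVD on the much smaller restricted matrix, and compare the leading singular values. The next section implements this properly.

# In[ ]:

# minimal illustration (made-up sizes): the SVD of the small projected matrix B = Q.T @ A
# recovers the leading singular values of A
m_demo, n_demo, k_demo = 200, 1000, 5
A_demo = np.random.normal(size=(m_demo, k_demo)) @ np.random.normal(size=(k_demo, n_demo))  # rank ~ 5

Q_demo, _ = np.linalg.qr(A_demo @ np.random.normal(size=(n_demo, k_demo + 10)))  # orthonormal basis for (approx) range of A
B_demo = Q_demo.T @ A_demo                                                       # small: (k+10) x n

print(linalg.svd(A_demo, compute_uv=False)[:5])  # singular values from the full SVD
print(linalg.svd(B_demo, compute_uv=False)[:5])  # singular values from the much smaller B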
# In[175]:

get_ipython().run_line_magic('time', 'u, s, v = decomposition.randomized_svd(vectors, 5)')

# In[177]:

u.shape, s.shape, v.shape

# In[178]:

show_topics(v)

# Here are some results from [Facebook Research](https://research.fb.com/fast-randomized-svd/):

# **Johnson-Lindenstrauss Lemma**: ([from wikipedia](https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma)) a small set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved.
#
# It is desirable to be able to reduce the dimensionality of data in a way that preserves relevant structure. The Johnson-Lindenstrauss lemma is a classic result of this type.

# ### Implementing our own Randomized SVD

# In[112]:

from scipy import linalg

# The method `randomized_range_finder` finds an orthonormal matrix whose range approximates the range of A (step 1 in the process described under "More Details" below). To do so, we use the LU and QR factorizations, both of which we will be covering in depth later.
#
# I am using the [scikit-learn.extmath.randomized_svd source code](https://github.com/scikit-learn/scikit-learn/blob/14031f65d144e3966113d3daec836e443c6d7a5b/sklearn/utils/extmath.py) as a guide.

# In[182]:

# computes an orthonormal matrix whose range approximates the range of A
# power_iteration_normalizer can be safe_sparse_dot (fast but unstable), LU (in between), or QR (slow but most accurate)
def randomized_range_finder(A, size, n_iter=5):
    Q = np.random.normal(size=(A.shape[1], size))

    for i in range(n_iter):
        Q, _ = linalg.lu(A @ Q, permute_l=True)
        Q, _ = linalg.lu(A.T @ Q, permute_l=True)

    Q, _ = linalg.qr(A @ Q, mode='economic')
    return Q

# And here's our randomized SVD method:

# In[236]:

def randomized_svd(M, n_components, n_oversamples=10, n_iter=4):

    n_random = n_components + n_oversamples

    Q = randomized_range_finder(M, n_random, n_iter)

    # project M to the (k + p) dimensional space using the basis vectors
    B = Q.T @ M

    # compute the SVD on the thin matrix: (k + p) wide
    Uhat, s, V = linalg.svd(B, full_matrices=False)
    del B
    U = Q @ Uhat

    return U[:, :n_components], s[:n_components], V[:n_components, :]

# In[237]:

u, s, v = randomized_svd(vectors, 5)

# In[238]:

get_ipython().run_line_magic('time', 'u, s, v = randomized_svd(vectors, 5)')

# In[239]:

u.shape, s.shape, v.shape

# In[247]:

show_topics(v)

# Write a loop to calculate the error of your decomposition as you vary the # of topics. Plot the result.

# #### Answer

# In[248]:

#Exercise: Write a loop to calculate the error of your decomposition as you vary the # of topics

# In[242]:

# uses `n`, `step`, and `error` from your answer; one possible sketch follows below
plt.plot(range(0, n*step, step), error)
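# One possible sketch of an answer to the exercise above (not the official solution): vary the number of retained topics and record the Frobenius norm of the reconstruction error, using the `randomized_svd` we just wrote. The names `step`, `n`, and `error` match what the plotting cell above expects.

# In[ ]:

# possible answer sketch: reconstruction error vs. number of topics
step = 20
n = 10
error = []
for k in range(0, n*step, step):
    U_k, s_k, V_k = randomized_svd(vectors, k)
    reconstructed = U_k @ np.diag(s_k) @ V_k
    error.append(np.linalg.norm(vectors - reconstructed))

plt.plot(range(0, n*step, step), error)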
# **Further Resources**:
# - [a whole course on randomized algorithms](http://www.cs.ubc.ca/~nickhar/W12/)

# ### More Details

# Here is a process to calculate a truncated SVD, described in [Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions](https://arxiv.org/pdf/0909.4061.pdf) and [summarized in this blog post](https://research.fb.com/fast-randomized-svd/):
#
# 1\. Compute an approximation to the range of $A$. That is, we want $Q$ with $r$ orthonormal columns such that $$A \approx QQ^TA$$
#
# 2\. Construct $B = Q^T A$, which is small ($r\times n$)
#
# 3\. Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$), $B = S\,\Sigma V^T$
#
# 4\. Since $$ A \approx Q Q^T A = Q (S\,\Sigma V^T)$$ if we set $U = QS$, then we have a low rank approximation $A \approx U \Sigma V^T$.

# #### So how do we find $Q$ (in step 1)?

# To estimate the range of $A$, we can just take a bunch of random vectors $w_i$ and evaluate the subspace formed by the $Aw_i$. We can form a matrix $W$ with the $w_i$ as its columns. Now, we take the QR decomposition of $AW = QR$; then the columns of $Q$ form an orthonormal basis for $AW$, which is the range of $A$.
#
# Since the product $AW$ has far more rows than columns, its columns are, approximately, orthonormal. This is simple probability - with lots of rows and few columns, it's unlikely that the columns are linearly dependent.

# #### The QR Decomposition

# We will be learning about the QR decomposition **in depth** later on. For now, you just need to know that $A = QR$, where $Q$ consists of orthonormal columns and $R$ is upper triangular. Trefethen says that the QR decomposition is the most important idea in numerical linear algebra! We will definitely be returning to it.

# #### How should we choose $r$?

# Suppose our matrix has 100 columns, and we want 5 columns in U and V. To be safe, we should project our matrix onto an orthogonal basis with a few more rows and columns than 5 (let's use 15). At the end, we will just grab the first 5 columns of U and V.
#
# So even though our projection was only approximate, by making it a bit bigger than we need, we can make up for the loss of accuracy (since we're only taking a subset later).

# In[175]:

get_ipython().run_line_magic('time', 'u, s, v = decomposition.randomized_svd(vectors, 5)')

# In[176]:

get_ipython().run_line_magic('time', 'u, s, v = decomposition.randomized_svd(vectors.todense(), 5)')

# ## End