In this tutorial, we will learn how to visualize different types of NLP-based embeddings via TensorBoard. TensorBoard is a data visualization framework for visualizing and inspecting TensorFlow runs and graphs. We will use a built-in TensorBoard visualizer called the Embedding Projector, which lets you interactively visualize and analyze high-dimensional data like embeddings.
import gensim
import pandas as pd
import smart_open
import random
# read data
dataframe = pd.read_csv('movie_plots.csv')
dataframe
(index) | MovieID | Titles | Plots | Genres |
---|---|---|---|---|
0 | 1 | Toy Story (1995) | A little boy named Andy loves to be in his roo... | animation |
1 | 2 | Jumanji (1995) | When two kids find and play a magical board ga... | fantasy |
2 | 3 | Grumpier Old Men (1995) | Things don't seem to change much in Wabasha Co... | comedy |
3 | 6 | Heat (1995) | Hunters and their prey--Neil and his professio... | action |
4 | 7 | Sabrina (1995) | An ugly duckling having undergone a remarkable... | romance |
5 | 9 | Sudden Death (1995) | Some terrorists kidnap the Vice President of t... | action |
6 | 10 | GoldenEye (1995) | James Bond teams up with the lone survivor of ... | action |
7 | 15 | Cutthroat Island (1995) | Morgan Adams and her slave, William Shaw, are ... | action |
8 | 17 | Sense and Sensibility (1995) | When Mr. Dashwood dies, he must leave the bulk... | romance |
9 | 18 | Four Rooms (1995) | This movie features the collaborative director... | comedy |
10 | 19 | Ace Ventura: When Nature Calls (1995) | Ace Ventura, emerging from self-imposed exile ... | comedy |
11 | 29 | City of Lost Children, The (Cité des enfants p... | Krank (Daniel Emilfork), who cannot dream, kid... | sci-fi |
12 | 32 | Twelve Monkeys (a.k.a. 12 Monkeys) (1995) | In a future world devastated by disease, a con... | sci-fi |
13 | 34 | Babe (1995) | Farmer Hoggett wins a runt piglet at a local f... | fantasy |
14 | 39 | Clueless (1995) | A rich high school student tries to boost a ne... | romance |
15 | 44 | Mortal Kombat (1995) | Based on the popular video game of the same na... | action |
16 | 48 | Pocahontas (1995) | Capt. John Smith leads a rag-tag band of Engli... | animation |
17 | 50 | Usual Suspects, The (1995) | Following a truck hijack in New York, five con... | comedy |
18 | 57 | Home for the Holidays (1995) | After losing her job, making out with her soon... | comedy |
19 | 69 | Friday (1995) | Two homies, Smokey and Craig, smoke a dope dea... | comedy |
20 | 70 | From Dusk Till Dawn (1996) | Two criminals and their hostages unknowingly s... | action |
21 | 76 | Screamers (1995) | (SIRIUS 6B, Year 2078) On a distant mining pla... | sci-fi |
22 | 82 | Antonia's Line (Antonia) (1995) | In an anonymous Dutch village, a sturdy, stron... | fantasy |
23 | 88 | Black Sheep (1996) | Comedy about the prospective Washington State ... | comedy |
24 | 95 | Broken Arrow (1996) | "Broken Arrow" is the term used to describe a ... | action |
25 | 104 | Happy Gilmore (1996) | A rejected hockey player puts his skills to th... | comedy |
26 | 105 | Bridges of Madison County, The (1995) | Photographer Robert Kincaid wanders into the l... | romance |
27 | 110 | Braveheart (1995) | When his secret bride is executed for assaulti... | action |
28 | 141 | Birdcage, The (1996) | Armand Goldman owns a popular drag nightclub i... | comedy |
29 | 145 | Bad Boys (1995) | Marcus Burnett is a hen-pecked family man. Mik... | action |
... | ... | ... | ... | ... |
1813 | 122902 | Fantastic Four (2015) | FANTASTIC FOUR, a contemporary re-imagining of... | sci-fi |
1814 | 127098 | Louis C.K.: Live at The Comedy Store (2015) | Comedian Louis C.K. performs live at the Comed... | comedy |
1815 | 127158 | Tig (2015) | An intimate, mixed media documentary that foll... | comedy |
1816 | 127202 | Me and Earl and the Dying Girl (2015) | Seventeen-year-old Greg has managed to become ... | comedy |
1817 | 129354 | Focus (2015) | In the midst of veteran con man Nicky's latest... | action |
1818 | 129428 | The Second Best Exotic Marigold Hotel (2015) | The Second Best Exotic Marigold Hotel is the e... | comedy |
1819 | 129937 | Run All Night (2015) | Professional Brooklyn hitman Jimmy Conlon is m... | action |
1820 | 130490 | Insurgent (2015) | One choice can transform you-or it can destroy... | sci-fi |
1821 | 130520 | Home (2015) | An alien on the run from his own people makes ... | animation |
1822 | 130634 | Furious 7 (2015) | Dominic and his crew thought they'd left the c... | action |
1823 | 131013 | Get Hard (2015) | Kevin Hart plays the role of Darnell--a family... | comedy |
1824 | 132046 | Tomorrowland (2015) | Bound by a shared destiny, a bright, optimisti... | sci-fi |
1825 | 132480 | The Age of Adaline (2015) | A young woman, born at the turn of the 20th ce... | romance |
1826 | 132488 | Lovesick (2014) | Lovesick is the comic tale of Charlie Darby (M... | fantasy |
1827 | 132796 | San Andreas (2015) | In San Andreas, California is experiencing a s... | action |
1828 | 132961 | Far from the Madding Crowd (2015) | In Victorian England, the independent and head... | romance |
1829 | 133195 | Hitman: Agent 47 (2015) | An assassin teams up with a woman to help her ... | action |
1830 | 133645 | Carol (2015) | In an adaptation of Patricia Highsmith's semin... | romance |
1831 | 134130 | The Martian (2015) | During a manned mission to Mars, Astronaut Mar... | sci-fi |
1832 | 134368 | Spy (2015) | A desk-bound CIA analyst volunteers to go unde... | comedy |
1833 | 134783 | Entourage (2015) | Movie star Vincent Chase, together with his bo... | comedy |
1834 | 134853 | Inside Out (2015) | After young Riley is uprooted from her Midwest... | comedy |
1835 | 135518 | Self/less (2015) | A dying real estate mogul transfers his consci... | sci-fi |
1836 | 135861 | Ted 2 (2015) | Months after John's divorce, Ted and Tami-Lynn... | comedy |
1837 | 135887 | Minions (2015) | Ever since the dawn of time, the Minions have ... | comedy |
1838 | 136016 | The Good Dinosaur (2015) | In a world where dinosaurs and humans live sid... | animation |
1839 | 139855 | Anomalisa (2015) | Michael Stone, an author that specializes in c... | animation |
1840 | 142997 | Hotel Transylvania 2 (2015) | The Drac pack is back for an all-new monster c... | animation |
1841 | 145935 | Peanuts Movie, The (2015) | Charlie Brown, Lucy, Snoopy, and the whole gan... | animation |
1842 | 149406 | Kung Fu Panda 3 (2016) | Continuing his "legendary adventures of awesom... | comedy |
1843 rows × 4 columns
In this part, we will learn how to visualize Doc2Vec embeddings, a.k.a. paragraph vectors, via TensorBoard. The input documents for training will be the movie synopses, on which the Doc2Vec model is trained.
The visualization will be a scatterplot, as seen in the image above, where each datapoint is labelled by the movie title and colored by its corresponding genre. You can also visit this Projector link, which is configured with my embeddings for the above-mentioned dataset.
Below, we define a function to read the training documents, pre-process each document using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.
def read_corpus(documents):
    for i, plot in enumerate(documents):
        yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(plot, max_len=30), [i])
train_corpus = list(read_corpus(dataframe.Plots))
Let's take a look at the training corpus.
train_corpus[:2]
[TaggedDocument(words=['little', 'boy', 'named', 'andy', 'loves', 'to', 'be', 'in', 'his', 'room', 'playing', 'with', 'his', 'toys', 'especially', 'his', 'doll', 'named', 'woody', 'but', 'what', 'do', 'the', 'toys', 'do', 'when', 'andy', 'is', 'not', 'with', 'them', 'they', 'come', 'to', 'life', 'woody', 'believes', 'that', 'he', 'has', 'life', 'as', 'toy', 'good', 'however', 'he', 'must', 'worry', 'about', 'andy', 'family', 'moving', 'and', 'what', 'woody', 'does', 'not', 'know', 'is', 'about', 'andy', 'birthday', 'party', 'woody', 'does', 'not', 'realize', 'that', 'andy', 'mother', 'gave', 'him', 'an', 'action', 'figure', 'known', 'as', 'buzz', 'lightyear', 'who', 'does', 'not', 'believe', 'that', 'he', 'is', 'toy', 'and', 'quickly', 'becomes', 'andy', 'new', 'favorite', 'toy', 'woody', 'who', 'is', 'now', 'consumed', 'with', 'jealousy', 'tries', 'to', 'get', 'rid', 'of', 'buzz', 'then', 'both', 'woody', 'and', 'buzz', 'are', 'now', 'lost', 'they', 'must', 'find', 'way', 'to', 'get', 'back', 'to', 'andy', 'before', 'he', 'moves', 'without', 'them', 'but', 'they', 'will', 'have', 'to', 'pass', 'through', 'ruthless', 'toy', 'killer', 'sid', 'phillips'], tags=[0]), TaggedDocument(words=['when', 'two', 'kids', 'find', 'and', 'play', 'magical', 'board', 'game', 'they', 'release', 'man', 'trapped', 'for', 'decades', 'in', 'it', 'and', 'host', 'of', 'dangers', 'that', 'can', 'only', 'be', 'stopped', 'by', 'finishing', 'the', 'game'], tags=[1])]
We'll instantiate a Doc2Vec model with a vector size of 50 dimensions and iterate over the training corpus 55 times. We set the minimum word count to 2 in order to discard words with very few occurrences. Model accuracy can be improved by increasing the number of iterations, but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes.
# note: in gensim >= 4.0 these parameters are vector_size= and epochs= instead of size= and iter=,
# and model.iter below becomes model.epochs
model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=55)
model.build_vocab(train_corpus)
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter)
5168238
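Before saving the vectors, it can help to sanity-check the trained model. The snippet below is a minimal sketch, assuming the pre-4.0 gensim API used throughout this notebook (document vectors live in model.docvecs; in gensim >= 4.0 the same attribute is model.dv): it re-infers a vector for the first plot and looks up the most similar doctags, and the first document itself should rank near the top.
# optional sanity check -- a rough sketch, not part of the original workflow
doc_id = 0
inferred = model.infer_vector(train_corpus[doc_id].words)
# most_similar returns (doctag, cosine similarity) pairs; doctags here are row indices into the dataframe
for tag, score in model.docvecs.most_similar([inferred], topn=3):
    print(dataframe.Titles[tag], round(score, 3))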
Now, we'll save the document embedding vectors per doctag.
model.save_word2vec_format('doc_tensor.w2v', doctag_vec=True, word_vec=False)
TensorBoard takes two input files: one containing the embedding vectors and the other containing relevant metadata. We'll use a gensim script to convert the embedding file saved in word2vec format above directly to the TSV format required by TensorBoard.
%run ../../gensim/scripts/word2vec2tensor.py -i doc_tensor.w2v -o movie_plot
2017-04-20 02:23:05,284 : MainThread : INFO : running ../../gensim/scripts/word2vec2tensor.py -i doc_tensor.w2v -o movie_plot
2017-04-20 02:23:05,286 : MainThread : INFO : loading projection weights from doc_tensor.w2v
2017-04-20 02:23:05,464 : MainThread : INFO : loaded (1843, 50) matrix from doc_tensor.w2v
2017-04-20 02:23:05,578 : MainThread : INFO : 2D tensor file saved to movie_plot_tensor.tsv
2017-04-20 02:23:05,579 : MainThread : INFO : Tensor metadata file saved to movie_plot_metadata.tsv
2017-04-20 02:23:05,581 : MainThread : INFO : finished running word2vec2tensor.py
The script above generates two files: movie_plot_tensor.tsv, which contains the embedding vectors, and movie_plot_metadata.tsv, which contains the doctags. But these doctags are simply the unique index values, and hence are not really useful for interpreting which document a point corresponds to while visualizing. So we will overwrite movie_plot_metadata.tsv with a custom metadata file with two columns: the first for the movie titles and the second for their corresponding genres.
with open('movie_plot_metadata.tsv','w') as w:
    w.write('Titles\tGenres\n')
    for i, j in zip(dataframe.Titles, dataframe.Genres):
        w.write("%s\t%s\n" % (i, j))
Now you can go to http://projector.tensorflow.org/ and upload the two files by clicking on Load data in the left panel.
For demo purposes I have uploaded the Doc2Vec embeddings generated from the model trained above here. You can access the Embedding projector configured with these uploaded embeddings at this link.
For visualization, the multi-dimensional embeddings that we get from the Doc2Vec model above need to be reduced to 2 or 3 dimensions, so that we end up with a new 2D or 3D embedding which tries to preserve information from the original multi-dimensional embedding. As these vectors are reduced to a much smaller dimension, the exact cosine/Euclidean distances between them are not preserved, only relative ones, and hence, as you'll see below, the nearest-similarity results may change.
TensorBoard has two popular dimensionality reduction methods for visualizing the embeddings and also provides a custom method based on text searches:
Principal Component Analysis: PCA aims at exploring the global structure in the data, and can end up losing the local similarities between neighbours. It maximizes the total variance in the lower-dimensional subspace and hence often preserves the larger pairwise distances better than the smaller ones. See an intuition behind it in this nicely explained answer on Stack Exchange.
t-SNE: The idea of t-SNE is to place the local neighbours close to each other while almost completely ignoring the global structure. It is useful for exploring local neighborhoods and finding local clusters, but the global trends are not represented accurately and the separation between different groups is often not preserved (see the t-SNE plots of our data below, which illustrate this; a minimal local sketch of both reductions follows this list).
Custom Projections: This is a custom method based on the text searches you define for different directions. It could be useful for finding meaningful directions in the vector space, for example female to male, or currency to country.
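Here is the minimal local sketch of the two reductions mentioned above. It assumes scikit-learn is installed (it is not used anywhere else in this tutorial) and only serves to build intuition; TensorBoard runs its own implementations with its own parameters.
# a rough local sketch of PCA and t-SNE on the Doc2Vec vectors, assuming scikit-learn is available
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# stack the per-doctag vectors learned above into a (1843, 50) matrix
doc_vecs = np.vstack([model.docvecs[i] for i in range(len(train_corpus))])

pca_2d = PCA(n_components=2).fit_transform(doc_vecs)    # preserves global variance
tsne_2d = TSNE(n_components=2, perplexity=8, learning_rate=10).fit_transform(doc_vecs)    # preserves local neighbourhoods
print(pca_2d.shape, tsne_2d.shape)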
You can refer to this doc for instructions on how to use and navigate through different panels available in TensorBoard.
The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three.
Data is visualized by animating through every iteration of the t-SNE algorithm. The t-SNE menu at the left lets you adjust the value of its two hyperparameters. The first is perplexity, which is basically a measure of information; it may be viewed as a knob that sets the number of effective nearest neighbors [2]. The second is the learning rate, which defines how quickly the algorithm learns on encountering new examples/data points.
The above plot was generated with perplexity 8, learning rate 10 and 500 iterations. The results can vary on successive runs, and you may not get exactly the same plot with the same hyperparameter settings, but some small clusters will start forming as above, with different orientations.
In this part, we will see how to visualize LDA in TensorBoard. We will be using the document-topic distribution as the embedding vector of a document: we treat the topics as dimensions, and the value in each dimension represents the proportion of that topic in the document.
We use the movie plots as the documents in our corpus and remove rare and common words based on their document frequency. Below we remove words that appear in fewer than 2 documents or in more than 30% of the documents.
import pandas as pd
import re
from gensim.parsing.preprocessing import remove_stopwords, strip_punctuation
from gensim.models import ldamodel
from gensim.corpora.dictionary import Dictionary
# read data
dataframe = pd.read_csv('movie_plots.csv')
# remove stopwords and punctuations
def preprocess(row):
    return strip_punctuation(remove_stopwords(row.lower()))
dataframe['Plots'] = dataframe['Plots'].apply(preprocess)
# Convert data to the input format required by LDA
texts = []
for line in dataframe.Plots:
    lowered = line.lower()
    # note: re.LOCALE cannot be combined with a str pattern in Python 3, so only re.UNICODE is used here
    words = re.findall(r'\w+', lowered, flags=re.UNICODE)
    texts.append(words)
# Create a dictionary representation of the documents.
dictionary = Dictionary(texts)
# Filter out words that occur in fewer than 2 documents, or in more than 30% of the documents.
dictionary.filter_extremes(no_below=2, no_above=0.3)
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(text) for text in texts]
# Set training parameters.
num_topics = 10
chunksize = 2000
passes = 50
iterations = 200
eval_every = None
# Train model
model = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, chunksize=chunksize, alpha='auto', eta='auto', iterations=iterations, num_topics=num_topics, passes=passes, eval_every=eval_every)
You can also refer to this notebook before training the LDA model; it contains tips and suggestions for pre-processing the text data and for training the LDA model to get good results.
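Before moving on, it can be useful to glance at the learned topics. A minimal check (the exact terms and weights will differ between runs):
# print a few top terms per topic; output varies from run to run
for topic_id, terms in model.print_topics(num_topics=10, num_words=6):
    print(topic_id, terms)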
Now we will use get_document_topics, which infers the topic distribution of a document. It basically returns a list of (topic_id, topic_probability) pairs for each document in the input corpus.
# Get document topics
all_topics = model.get_document_topics(corpus, minimum_probability=0)
all_topics[0]
[(0, 0.00029626785677659928), (1, 0.99734244187457377), (2, 0.00031813940693891458), (3, 0.00031573036467256674), (4, 0.00033277056023999966), (5, 0.00023981837072288835), (6, 0.00033113374640540293), (7, 0.00027953838669809549), (8, 0.0002706215262517565), (9, 0.00027353790672011199)]
The above output shows the topic distribution of the first document in the corpus as a list of (topic_id, topic_probability) pairs.
Now, using the topic distribution of a document as its vector embedding, we will plot all the documents in our corpus using TensorBoard.
TensorBoard takes two input files, one containing the embedding vectors and the other containing relevant metadata. As described above, we will use the topic distribution of documents as their embedding vectors. The metadata file will consist of movie titles with their genres.
# create file for tensors
with open('doc_lda_tensor.tsv','w') as w:
    for doc_topics in all_topics:
        for topics in doc_topics:
            w.write(str(topics[1]) + "\t")
        w.write("\n")
# create file for metadata
with open('doc_lda_metadata.tsv','w') as w:
    w.write('Titles\tGenres\n')
    for j, k in zip(dataframe.Titles, dataframe.Genres):
        w.write("%s\t%s\n" % (j, k))
Now you can go to http://projector.tensorflow.org/ and upload these two files by clicking on Load data in the left panel.
For demo purposes I have uploaded the LDA doc-topic embeddings generated from the model trained above here. You can also access the Embedding projector configured with these uploaded embeddings at this link.
The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three.
As we can see, a lot of points cluster at the corners of the simplex. This is primarily due to the sparsity of the vectors we are using: the documents at the corners belong predominantly to a single topic (hence a large weight in a single dimension, while the other dimensions have approximately zero weight). You can modify the metadata file as explained below to see the dimension weights along with the movie title.
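To confirm that this corner-clustering reflects genuinely peaked distributions rather than an artifact of the projection, a quick count (an illustrative check, not part of the original workflow):
# count documents whose most probable topic carries more than 90% of the probability mass
peaked = sum(1 for doc in all_topics if max(prob for _, prob in doc) > 0.9)
print("%d of %d documents are dominated by a single topic" % (peaked, len(all_topics)))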
Now, we will append the topics with highest probability (topic_id, topic_probability) to the document's title, in order to explore which topics the cluster corners or edges dominantly belong to. For this, we just need to overwrite the metadata file as below:
tensors = []
for doc_topics in all_topics:
    doc_tensor = []
    for topic in doc_topics:
        if round(topic[1], 3) > 0:
            doc_tensor.append((topic[0], float(round(topic[1], 3))))
    # sort topics according to highest probabilities
    doc_tensor = sorted(doc_tensor, key=lambda x: x[1], reverse=True)
    # store vectors to add in metadata file
    tensors.append(doc_tensor[:5])

# overwrite metadata file
i = 0
with open('doc_lda_metadata.tsv','w') as w:
    w.write('Titles\tGenres\n')
    for j, k in zip(dataframe.Titles, dataframe.Genres):
        w.write("%s\t%s\n" % (''.join((str(j), str(tensors[i]))), k))
        i += 1
Next, we upload the previous tensor file "doc_lda_tensor.tsv" and this new metadata file to http://projector.tensorflow.org/.
In t-SNE, the data is visualized by animating through every iteration of the t-SNE algorithm. The t-SNE menu at the left lets you adjust the value of its two hyperparameters. The first is perplexity, which is basically a measure of information; it may be viewed as a knob that sets the number of effective nearest neighbors [2]. The second is the learning rate, which defines how quickly the algorithm learns on encountering new examples/data points.
Now, as the topic distribution of a document is used as its embedding vector, t-SNE ends up forming clusters of documents belonging to the same topics. In order to understand and interpret the theme of those topics, we can use show_topic() to explore the terms that a topic consists of.
The above plot was generated with perplexity 11, learning rate 10 and 1100 iterations. The results can vary on successive runs, and you may not get exactly the same plot even with the same hyperparameter settings, but some small clusters will start forming as above, with different orientations.
I named some clusters above based on the genre of their movies, and also used show_topic() to see the relevant terms of the topic that was most prevalent in a cluster. Most of the clusters had documents belonging dominantly to a single topic. For example, the cluster with movies belonging primarily to topic 0 could be named Fantasy/Romance based on the terms displayed below for topic 0. You can play with the visualization yourself at this link and try to come up with a label for each cluster based on the movies it contains and their dominant topic. You can see the top 5 topics of every point by hovering over it.
Now, we can notice that there are more than 10 clusters in the above image, whereas we trained our model with num_topics=10. This is because a few clusters contain documents belonging to more than one topic, with approximately equal topic probabilities.
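You can spot such mixed documents directly; the snippet below is an illustrative sketch that lists documents whose two most probable topics carry nearly equal weight:
# documents whose top two topics have nearly equal probability (illustrative check)
mixed = []
for idx, doc in enumerate(all_topics):
    top2 = sorted(doc, key=lambda x: x[1], reverse=True)[:2]
    if top2[0][1] - top2[1][1] < 0.1:
        mixed.append((dataframe.Titles[idx], top2))
print(len(mixed), "documents are split almost evenly between two topics")
print(mixed[:3])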
model.show_topic(topicid=0, topn=15)
[('life', 0.0069577926389817156), ('world', 0.006240163206609986), ('man', 0.0058828040298109794), ('young', 0.0053747678629860532), ('family', 0.005083746467542196), ('love', 0.0048691281379952146), ('new', 0.004097644507005606), ('t', 0.0037446821043766597), ('time', 0.0037022423231064822), ('finds', 0.0036129806190553109), ('woman', 0.0031742920620375422), ('earth', 0.0031692677510459484), ('help', 0.0031061538189201504), ('it', 0.0028658594310878023), ('years', 0.00272218005397741)]
You can even use pyLDAvis to interpret topics more efficiently. It provides a deeper inspection of the terms highly associated with each individual topic. For this, it uses a measure called the relevance of a term to a topic, which lets users flexibly rank terms to best suit a meaningful topic interpretation. Its weight parameter, called λ, can be adjusted to display useful terms which can help in differentiating topics efficiently.
import pyLDAvis.gensim
viz = pyLDAvis.gensim.prepare(model, corpus, dictionary)
pyLDAvis.display(viz)
The weight parameter λ can be viewed as a knob to adjust the ranking of terms: with λ=1 terms are ranked purely by their probability within the topic, while with λ=0 they are normalized by their marginal probability across the corpus. Setting λ=1 can produce similar rankings of terms across a large number of topics, making them hard to differentiate, while setting λ=0 ranks terms solely by their exclusiveness to the current topic, which can surface terms so rare that the topics remain difficult to interpret. Sievert and Shirley (2014) suggested an optimal value of λ=0.6 based on a user study.
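For reference, that relevance measure can be computed by hand from the trained model. The sketch below is our own reimplementation for intuition only, not pyLDAvis internals; it assumes gensim's LdaModel.get_topics() for the topic-term matrix and estimates the marginal term probability from raw corpus counts:
import numpy as np
# relevance(w, t) = lambda * log p(w|t) + (1 - lambda) * log(p(w|t) / p(w))   (Sievert & Shirley 2014)
topic_term = model.get_topics()                 # shape (num_topics, num_terms), rows sum to 1
term_counts = np.zeros(topic_term.shape[1])
for bow in corpus:
    for term_id, count in bow:
        term_counts[term_id] += count
p_w = term_counts / term_counts.sum()           # marginal term probability across the corpus
lambda_ = 0.6                                   # the value suggested by Sievert & Shirley
relevance = lambda_ * np.log(topic_term) + (1 - lambda_) * np.log(topic_term / p_w)
top_ids = relevance[0].argsort()[::-1][:10]     # ten most relevant terms for topic 0
print([dictionary[i] for i in top_ids])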
We learned about visualizing document embeddings and LDA doc-topic distributions through TensorBoard's Embedding Projector. It is a useful tool for visualizing different types of data, for example word embeddings, document embeddings, or gene expressions and biological sequences. It just needs an input of 2D tensors, and then you can explore your data using the provided algorithms. You can also perform a nearest-neighbours search to find the data points most similar to your query point.