Demonstration of the u_mass topic coherence measure using the topic coherence pipeline

In [1]:
import numpy as np
import logging
import pyLDAvis.gensim
import json
import warnings
warnings.filterwarnings('ignore')  # ignore warnings raised here to keep the output clean

from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
from gensim.corpora.dictionary import Dictionary
from numpy import array

Set up logging

In [2]:
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")

Set up corpus

As stated in Table 2 of this paper, this corpus essentially contains two classes of documents: the first five are about human-computer interaction and the remaining four are about graphs. Let's see how our LDA models interpret them.

In [3]:
texts = [['human', 'interface', 'computer'],
         ['survey', 'user', 'computer', 'system', 'response', 'time'],
         ['eps', 'user', 'interface', 'system'],
         ['system', 'human', 'system', 'eps'],
         ['user', 'response', 'time'],
         ['trees'],
         ['graph', 'trees'],
         ['graph', 'minors', 'trees'],
         ['graph', 'minors', 'survey']]
In [4]:
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
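
As a quick sanity check (not part of the original notebook), we can inspect the bag-of-words representation that doc2bow produces; the integer ids are arbitrary labels assigned by the Dictionary.

print(dictionary.token2id)   # mapping from each word to its integer id
print(corpus[0])             # first document as a list of (word_id, count) pairs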

Set up two topic models

We'll set up two different LDA topic models: a good one and a bad one. To build the "good" topic model, we'll simply train it for more iterations than the bad one. The u_mass coherence should therefore, in theory, be better (higher) for the good model than for the bad one, since the good model should produce more "human-interpretable" topics.

In [5]:
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
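
To get a feel for what the two models actually learned before scoring them, we can print their topic-word distributions with show_topics. This is a minimal check; the exact words and weights will vary between runs because LDA training is stochastic.

for name, model in [('goodLdaModel', goodLdaModel), ('badLdaModel', badLdaModel)]:
    print(name)
    for topic in model.show_topics(num_topics=2, num_words=5):
        print(topic)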
In [6]:
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
In [7]:
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')

View the pipeline parameters for one coherence model

The following are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions used for segmentation, probability estimation, confirmation measure and aggregation, as shown in Figure 1 of this paper.

In [8]:
print(goodcm)
CoherenceModel(segmentation=<function s_one_pre at 0x7fcfdbafe050>, probability estimation=<function p_boolean_document at 0x7fcfdbafe320>, confirmation measure=<function log_conditional_probability at 0x7fcfdbafe488>, aggregation=<function arithmetic_mean at 0x7fcfdbafe410>)
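
To make the confirmation-measure step more concrete, here is a small hand-rolled sketch (not gensim's internal implementation) of a u_mass-style log conditional probability for a single ordered word pair: it counts, over the toy corpus, how many documents contain both words relative to how many contain the conditioning word, with a smoothing constant to avoid log(0). The u_mass_pair helper and the eps value are illustrative assumptions; the real pipeline segments the top topic words into many such pairs and aggregates the scores with an arithmetic mean.

import math

def u_mass_pair(w_i, w_j, docs, eps=1.0):
    # D(w_i, w_j): number of documents containing both words
    co_docs = sum(1 for doc in docs if w_i in doc and w_j in doc)
    # D(w_j): number of documents containing the conditioning word
    wj_docs = sum(1 for doc in docs if w_j in doc)
    return math.log((co_docs + eps) / wj_docs)

print(u_mass_pair('graph', 'trees', texts))   # words that co-occur -> closer to 0
print(u_mass_pair('human', 'trees', texts))   # words that never co-occur -> more negative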

Visualize topic models

In [9]:
pyLDAvis.enable_notebook()
In [10]:
pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
Out[10]: (interactive pyLDAvis visualization of the good model)
In [11]:
pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
Out[11]: (interactive pyLDAvis visualization of the bad model)
In [12]:
print(goodcm.get_coherence())
-13.8048438862
In [13]:
print(badcm.get_coherence())
-15.5467907012
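
As a cross-check, gensim's LdaModel also provides a top_topics method, which by default scores each individual topic with the same u_mass measure. This can be handy for seeing which particular topic drags a model's average down; the exact return format may vary across gensim versions, but each entry pairs a topic with its coherence score.

for topic, score in goodLdaModel.top_topics(corpus):
    print(score)   # per-topic u_mass coherence for the good model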

Conclusion

Hence, as we can see, the u_mass coherence for the good LDA model is higher (better) than that for the bad LDA model. This is simply because the good LDA model usually comes up with better, more human-interpretable topics. For the first topic, the goodLdaModel rightly puts emphasis on "graph", "trees" and "user", reflecting the second class of documents. For the second topic, it emphasizes words such as "system", "eps", "interface" and "human", which signify human-computer interaction. The badLdaModel, however, fails to distinguish between these two themes and comes up with topics that are both mostly graph-based and not clear to a human. The u_mass topic coherence captures this nicely by assigning a number to the interpretability of these topics, as we can see above. Hence this coherence measure can be used to compare different topic models based on their human-interpretability.