import numpy as np
import logging
import pyLDAvis.gensim
import json
import warnings
warnings.filterwarnings('ignore')  # To ignore all warnings that arise here to enhance clarity

from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
from gensim.corpora.dictionary import Dictionary
from numpy import array
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
As stated in table 2 from this paper, this corpus essentially has two classes of documents. The first five are about human-computer interaction and the other four are about graphs. Let's see how our LDA models interpret them.
texts = [['human', 'interface', 'computer'],
         ['survey', 'user', 'computer', 'system', 'response', 'time'],
         ['eps', 'user', 'interface', 'system'],
         ['system', 'human', 'system', 'eps'],
         ['user', 'response', 'time'],
         ['trees'],
         ['graph', 'trees'],
         ['graph', 'minors', 'trees'],
         ['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
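Each document in corpus is now a bag of words, i.e. a list of (token_id, count) pairs. A quick sanity check (the ids shown in the comment are illustrative; the actual ids depend on the order in which the dictionary assigned them):

print(corpus[3])  # ['system', 'human', 'system', 'eps'] -> 'system' gets a count of 2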
We'll now set up two different LDA topic models: a good one and a bad one. To build the "good" topic model, we'll simply train it for more iterations than the bad one. The u_mass coherence should therefore, in theory, be better for the good model than for the bad one, since the good model should produce more "human-interpretable" topics.
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
Following are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions used to calculate segmentation, probability estimation, confirmation measure and aggregation, as shown in figure 1 in this paper.
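The summary below comes from printing the coherence model object; in this version of gensim, its repr lists the four pipeline functions (the hexadecimal addresses will of course differ on every run):

print(goodcm)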
CoherenceModel(segmentation=<function s_one_pre at 0x7fcfdbafe050>, probability estimation=<function p_boolean_document at 0x7fcfdbafe320>, confirmation measure=<function log_conditional_probability at 0x7fcfdbafe488>, aggregation=<function arithmetic_mean at 0x7fcfdbafe410>)
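For reference (this gloss follows the notation of that paper and is not part of the original output): s_one_pre pairs each top word with each word preceding it in the topic's ranked word list, p_boolean_document estimates probabilities from boolean document co-occurrence counts, and log_conditional_probability scores each pair $(W', W^*)$ as

$$m_{lc}(W', W^*) = \log \frac{P(W', W^*) + \epsilon}{P(W^*)}$$

where a small $\epsilon$ guards against taking $\log 0$; the arithmetic mean of these scores over all pairs is the final u_mass value.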
pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
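Beyond the visualizations, we can put numbers on the comparison. A minimal check (the exact values vary between runs, since LdaModel training is stochastic):

print(goodcm.get_coherence())
print(badcm.get_coherence())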
Hence, as we can see, the u_mass coherence for the good LDA model is much higher (better) than that for the bad LDA model. This is simply because the good LDA model usually comes up with better topics that are more human-interpretable.
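To see which words each model emphasises, we can also print the topics directly (a quick check; the exact word distributions will differ between runs):

print(goodLdaModel.show_topics(num_topics=2, num_words=5))
print(badLdaModel.show_topics(num_topics=2, num_words=5))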
For the first topic, the goodLdaModel rightly puts emphasis on "graph", "trees" and "user", which correspond to the second class of documents.
For the second topic, it puts emphasis on words such as "system", "eps", "interface" and "human" which signify human-computer interaction.
The badLdaModel, however, fails to distinguish between these two topics and comes up with topics that are mostly graph-based but not clear to a human. The u_mass topic coherence captures this wonderfully by quantifying the interpretability of these topics, as we can see above. Hence this coherence measure can be used to compare different topic models based on their human-interpretability.
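As a closing illustration, here is how the same measure could drive automatic model selection; pick_most_coherent is a hypothetical helper, not part of gensim:

def pick_most_coherent(models, corpus, dictionary):
    # Hypothetical helper: return the model with the highest (least negative) u_mass coherence.
    def umass(model):
        return CoherenceModel(model=model, corpus=corpus, dictionary=dictionary,
                              coherence='u_mass').get_coherence()
    return max(models, key=umass)

best = pick_most_coherent([goodLdaModel, badLdaModel], corpus, dictionary)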