import math
import book_classification as bc
import pandas
import matplotlib.pyplot as plt
import shelve
myShelf = shelve.open("storage_new.db")
aBookCollection = myShelf['aBookCollection']
aDataFrame = aBookCollection.as_dataframe()
del myShelf
aDataFrame.iloc[:, [0, 1]].describe()
| | Title | Author |
|---|---|---|
| count | 597 | 597 |
| unique | 586 | 47 |
| top | A Christmas Carol | Nathaniel Hawthorne |
| freq | 5 | 94 |
Some authors have more books than others, something that might impact the classification.
aDataFrame.groupby('Author').size().sort_values().plot(kind='bar', figsize=(15, 6))
<matplotlib.axes.AxesSubplot at 0x7f3bba125f50>
This is the distribution of book counts over authors.
#aDataFrame.groupby('Author').size().plot(kind='kde', figsize=(6, 5))
aDataFrame.groupby('Author').size().hist()
<matplotlib.axes.AxesSubplot at 0x7f96a7988ad0>
When talking about a "vocabulary", an implicit tokenization scheme is also involved. In this case, we chose tokens consisting only of alphabetic symbols and longer than 2 characters. The BasicTokenizer uses NLTK.
tokenizer = bc.BasicTokenizer()
aBookAnalysis = bc.BookCollectionAnalysis(aBookCollection, tokenizer)
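As a rough sketch of the tokenization contract, the same rule (alphabetic tokens, longer than 2 characters) can be expressed with a plain regex. The real BasicTokenizer delegates to NLTK; the `tokenize` function below is a hypothetical stand-in, not the library's code:

```python
import re

def tokenize(text):
    """Keep lowercased alphabetic tokens longer than 2 characters
    (a regex stand-in for the NLTK-based BasicTokenizer)."""
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text) if len(t) > 2]

tokenize("It was the best of times, it was the worst of times.")
# → ['was', 'the', 'best', 'times', 'was', 'the', 'worst', 'times']
```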
The vocabulary size (unique words) for each book.
aBookAnalysis.vocabulary_size_by_book().set_index('Book').sort_values('Unique words').plot()
<matplotlib.axes.AxesSubplot at 0x7f3bac834450>
The vocabulary size (unique words) for each author.
dataframe = aBookAnalysis.vocabulary_size_by_author().set_index('Author').sort_values('Unique words')
dataframe.plot(kind='bar', figsize=(15, 6))
<matplotlib.axes.AxesSubplot at 0x7f3bbabcef90>
Here we'll explore vocabulary intersections among authors, counted in words.
These are totals: each observation is the number of words that appear in exactly that many authors' vocabularies.
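Assuming `shared_words_by_authors()` returns a mapping of this shape, the counts could be computed roughly as follows (a hypothetical reconstruction, not the library's implementation):

```python
from collections import Counter

def shared_word_counts(vocab_by_author):
    """Given {author: set of words}, return {n: number of words that
    appear in exactly n authors' vocabularies}."""
    appearances = Counter()
    for words in vocab_by_author.values():
        appearances.update(words)          # word -> number of authors using it
    return Counter(appearances.values())   # n authors -> number of such words

vocabs = {"A": {"whale", "sea"}, "B": {"sea", "ship"}, "C": {"sea"}}
shared_word_counts(vocabs)
# → Counter({1: 2, 3: 1})
```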
pandas.Series(aBookAnalysis.shared_words_by_authors()).apply(math.log10).plot(figsize=(6, 4))
<matplotlib.axes.AxesSubplot at 0x7f3bacd4c9d0>
pandas.Series(aBookAnalysis.shared_words_by_books()).apply(math.log10).plot(figsize=(8, 4))
<matplotlib.axes.AxesSubplot at 0x7f3b7fd00910>
These are cumulative totals: each observation is the number of words that appear in N authors or fewer.
pandas.Series(aBookAnalysis.shared_words_by_authors()).cumsum().apply(math.log).plot()
<matplotlib.axes.AxesSubplot at 0x7f9673e7f210>
pandas.Series(aBookAnalysis.shared_words_by_books()).cumsum().apply(math.log).plot()
<matplotlib.axes.AxesSubplot at 0x7f9673f0b290>
vocabularySizes = aBookAnalysis.vocabulary_size_by_book()['Unique words'] / len(aBookAnalysis.vocabulary().total())
vocabularySizes.hist(bins=100, figsize=(10, 5))
#vocabularySizes.plot(kind='kde')
<matplotlib.axes.AxesSubplot at 0x7f3b7fe7ae50>
print(vocabularySizes.mean())
0.0365571297371
Let's look at the differences between frequencies and entropies. Note that the logarithm was applied to the frequencies, so they are on the same scale as the entropies.
frequenciesExtractor = bc.FrequenciesExtractor(tokenizer)
entropiesExtractor = bc.EntropiesExtractor(tokenizer, bc.FixedGrouper(500))
frequencies = bc.CollectionHierarchialFeatures.from_book_collection(aBookCollection, frequenciesExtractor)
entropies = bc.CollectionHierarchialFeatures.from_book_collection(aBookCollection, entropiesExtractor)
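One plausible reading of `EntropiesExtractor` combined with `FixedGrouper(500)` (this is an assumption about the library, not its actual code) is: split each text into fixed-size groups of tokens, then take the entropy of each word's occurrence distribution across those groups. A word spread evenly over the text gets high entropy; a word concentrated in one passage gets low entropy:

```python
import math
from collections import Counter

def word_entropies(tokens, group_size=500):
    """Entropy of each word's occurrence distribution across
    fixed-size token groups (an assumed sketch of
    EntropiesExtractor + FixedGrouper)."""
    groups = [tokens[i:i + group_size] for i in range(0, len(tokens), group_size)]
    counts = {}  # word -> list of per-group occurrence counts
    for group in groups:
        for word, n in Counter(group).items():
            counts.setdefault(word, []).append(n)
    entropies = {}
    for word, ns in counts.items():
        total = sum(ns)
        probs = [n / total for n in ns]
        entropies[word] = -sum(p * math.log(p) for p in probs)
    return entropies

# A word split evenly across two groups has entropy log(2);
# a word confined to a single group has entropy 0.
word_entropies(["a"] * 4, group_size=2)
# → {'a': 0.6931471805599453}
```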
df_input = []
for word in aBookAnalysis._vocabulary.total().keys():
    df_input.append([math.log(frequencies.total()[word]), entropies.total()[word]])
df_input.sort()
entropies_vs_frequencies = pandas.DataFrame(df_input, columns=["Frequencies", "Entropies"])
entropies_vs_frequencies.plot(kind='kde', figsize=(8, 8), subplots=True, sharex=False)
array([<matplotlib.axes.AxesSubplot object at 0x7f3b7faf2250>, <matplotlib.axes.AxesSubplot object at 0x7f3bace2ddd0>], dtype=object)
If we plot both distributions individually, we can't see the difference (apart from the scales). But by sorting the pairs according to one of them (in this case frequencies), it's clear that the entropies aren't the same.
Moreover, the maximum grows in a similar fashion to the frequencies, but frequency alone can't explain the additional variation. So entropy seems to carry more information about the words than frequency does.
#entropies_vs_frequencies["Entropies"].plot(figsize=(12, 4))
fig = plt.figure(figsize=(12, 5))
l = len(entropies_vs_frequencies["Entropies"])
plt.axis([0, l, 0, 1])
plt.scatter(range(l), entropies_vs_frequencies["Entropies"], s=1, alpha=0.05, figure=fig)
<matplotlib.collections.PathCollection at 0x7f3bacdd4d90>
By zooming in on the tail (> 140000) we see the pattern continues.
plt.figure(figsize=(12, 5))
l = len(entropies_vs_frequencies["Entropies"])
plt.axis([140000, l, 0, 1])
plt.scatter(range(l), entropies_vs_frequencies["Entropies"], s=1, alpha=0.2)
<matplotlib.collections.PathCollection at 0x7f3b7fb046d0>
plt.figure(figsize=(12, 5))
l = len(entropies_vs_frequencies["Entropies"])
plt.axis([130000, 150000, 0, 1])
plt.scatter(range(l), entropies_vs_frequencies["Entropies"], s=1, alpha=0.2)
<matplotlib.collections.PathCollection at 0x7f3b7fe8dcd0>
# TODO: get a decent density plot of x=freq,y=entr with log color map
#figure(figsize(10, 10))
#scatter(entropies_vs_frequencies["Frequencies"], entropies_vs_frequencies["Entropies"])
#figure(figsize(5, 5))
The two measures seem to be almost equal much of the time, but informative words (those with higher frequencies) appear increasingly often.
The following distribution of differences in the entropy series (sorted by increasing frequency) shows the variation of entropy between words with essentially the same frequency.
entropies_vs_frequencies["Entropies"].diff().dropna().apply(abs).hist(log=True)
<matplotlib.axes.AxesSubplot at 0x7f9674438e90>
If we want to ignore rare words (those that appear in few books and authors), we can make the distribution look more Gaussian.
The analysis is in another file, so that both can be compared.
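One way to implement that filtering, sketched here with a hypothetical mapping from word to the number of books it appears in (the actual code lives in the other file, and the threshold below is an arbitrary choice):

```python
def filter_rare_words(word_document_counts, min_documents=3):
    """Drop words that appear in fewer than `min_documents` books
    (a sketch of the rare-word filtering described above)."""
    return {w: c for w, c in word_document_counts.items() if c >= min_documents}

counts = {"the": 597, "whale": 12, "daguerreotype": 1}
filter_rare_words(counts)
# → {'the': 597, 'whale': 12}
```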