from fastai.gen_doc.nbdoc import *
from fastai.text import *
from fastai import *
text.transform
contains the functions that deal behind the scenes with the two main tasks when preparing texts for modelling: tokenization and numericalization.
Tokenization splits the raw texts into tokens (which can be words, punctuation signs, etc.). The most basic way to do this would be to separate on spaces, but it's possible to be more subtle; for instance, contractions like "isn't" or "don't" should be split into ["is","n't"] or ["do","n't"]. By default fastai will use the powerful spacy tokenizer.
Numericalization is easier, as it just consists of assigning a unique id to each token and mapping each token to its respective id.
This step is actually divided into two phases: first, we apply a certain list of rules to the raw texts as preprocessing, then we use the tokenizer to split them into lists of tokens. Combining those rules, the tok_func and the lang to process the texts is the role of the Tokenizer class.
show_doc(Tokenizer, doc_string=False)
class Tokenizer [source]
Tokenizer(tok_func:Callable='SpacyTokenizer', lang:str='en', rules:ListRules=None, special_cases:StrList=None, n_cpus:int=None)
This class will process texts by applying the rules to them, then tokenizing them with tok_func(lang). special_cases is a list of tokens passed as special to the tokenizer and n_cpus is the number of cpus to use for multi-processing (by default, half the cpus available). We don't directly pass a tokenizer for multi-processing purposes: each process needs to initiate a tokenizer of its own. The rules and special_cases default to
default_rules = [fix_html, replace_rep, replace_wrep, deal_caps, spec_add_spaces, rm_useless_spaces]
and
default_spec_tok = [BOS, FLD, UNK, PAD]
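For instance, here is a minimal sketch of building a Tokenizer with customized arguments (the extra special token 'xxmytok' is purely hypothetical; the other values just make the defaults explicit):
tok_custom = Tokenizer(tok_func=SpacyTokenizer, lang='en',
                       special_cases=[BOS, FLD, UNK, PAD, 'xxmytok'],
                       n_cpus=2)  # limit tokenization to two worker processes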
show_doc(Tokenizer.process_text)
process_text [source]
process_text(t:str, tok:BaseTokenizer) → List[str]
Process one text t with tokenizer tok.
show_doc(Tokenizer.process_all)
For an example, we're going to grab some IMDB reviews.
path = untar_data(URLs.IMDB_SAMPLE)
path
PosixPath('/home/ubuntu/fastai/fastai/../data/imdb_sample')
df = pd.read_csv(path/'train.csv', header=None)
example_text = df.iloc[2][1]; example_text
'Every once in a long while a movie will come along that will be so awful that I feel compelled to warn people. If I labor all my days and I can save but one soul from watching this movie, how great will be my joy.<br /><br />Where to begin my discussion of pain. For starters, there was a musical montage every five minutes. There was no character development. Every character was a stereotype. We had swearing guy, fat guy who eats donuts, goofy foreign guy, etc. The script felt as if it were being written as the movie was being shot. The production value was so incredibly low that it felt like I was watching a junior high video presentation. Have the directors, producers, etc. ever even seen a movie before? Halestorm is getting worse and worse with every new entry. The concept for this movie sounded so funny. How could you go wrong with Gary Coleman and a handful of somewhat legitimate actors. But trust me when I say this, things went wrong, VERY WRONG.'
tokenizer = Tokenizer()
tok = SpacyTokenizer('en')
' '.join(tokenizer.process_text(example_text, tok))
'every once in a long while a movie will come along that will be so awful that i feel compelled to warn people . if i labor all my days and i can save but one soul from watching this movie , how great will be my joy . \n\n where to begin my discussion of pain . for starters , there was a musical montage every five minutes . there was no character development . every character was a stereotype . we had swearing guy , fat guy who eats donuts , goofy foreign guy , etc . the script felt as if it were being written as the movie was being shot . the production value was so incredibly low that it felt like i was watching a junior high video presentation . have the directors , producers , etc . ever even seen a movie before ? halestorm is getting worse and worse with every new entry . the concept for this movie sounded so funny . how could you go wrong with gary coleman and a handful of somewhat legitimate actors . but trust me when i say this , things went wrong , xxup very xxup wrong .'
As explained before, the tokenizer splits the text on words/punctuation signs, but in a smart manner. The rules (see below) have also modified the text a little bit. We can also tokenize a list of texts all at once:
df = pd.read_csv(path/'train.csv', header=None)
texts = df[1].values
tokenizer = Tokenizer()
tokens = tokenizer.process_all(texts)
' '.join(tokens[2])
'every once in a long while a movie will come along that will be so awful that i feel compelled to warn people . if i labor all my days and i can save but one soul from watching this movie , how great will be my joy . \n\n where to begin my discussion of pain . for starters , there was a musical montage every five minutes . there was no character development . every character was a stereotype . we had swearing guy , fat guy who eats donuts , goofy foreign guy , etc . the script felt as if it were being written as the movie was being shot . the production value was so incredibly low that it felt like i was watching a junior high video presentation . have the directors , producers , etc . ever even seen a movie before ? halestorm is getting worse and worse with every new entry . the concept for this movie sounded so funny . how could you go wrong with gary coleman and a handful of somewhat legitimate actors . but trust me when i say this , things went wrong , xxup very xxup wrong .'
The tok_func must return an instance of BaseTokenizer:
show_doc(BaseTokenizer)
show_doc(BaseTokenizer.tokenizer)
tokenizer [source]
tokenizer(t:str) → List[str]
Take a text t and return the list of its tokens.
show_doc(BaseTokenizer.add_special_cases)
add_special_cases [source]
add_special_cases(toks:StrList)
Record a list of special tokens toks.
The fastai library uses the spacy tokenizer as its default. The following class wraps it as a BaseTokenizer.
show_doc(SpacyTokenizer)
class SpacyTokenizer [source]
SpacyTokenizer(lang:str) :: BaseTokenizer
Wrapper around a spacy tokenizer to make it a BaseTokenizer.
If you want to use your own custom tokenizer, just subclass BaseTokenizer and override its tokenizer and add_special_cases functions, as in the sketch below.
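Here is a minimal sketch of such a subclass, assuming simple whitespace splitting is enough for your corpus (WhitespaceTokenizer is a hypothetical name, not part of the library):
class WhitespaceTokenizer(BaseTokenizer):
    "Hypothetical tokenizer that splits texts on whitespace only."
    def tokenizer(self, t:str) -> List[str]:
        return t.split()
    def add_special_cases(self, toks:Collection[str]):
        pass  # a plain whitespace split has no special cases to register

ws_tokenizer = Tokenizer(tok_func=WhitespaceTokenizer, lang='en')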
Rules are just functions that take a string and return the modified string. This allows you to customize the list of default_rules as you please (see the sketch after the list of rules below). Those default_rules are:
show_doc(deal_caps, doc_string=False)
deal_caps [source]
deal_caps(t:str) → str
In t, if a word is written in all caps, we put it in lower case and add a special token before it. This makes it easier for a model to learn the meaning of the sentence. The rest of the capitals are removed.
deal_caps("I'm suddenly SHOUTING FOR NO REASON!")
"i'm suddenly xxup shouting xxup for no xxup reason!"
show_doc(fix_html, doc_string=False)
fix_html [source]
fix_html(x:str) → str
This rule replaces a bunch of HTML characters or entities with plain-text ones. For instance <br /> is replaced by \n, &nbsp; by spaces, etc.
fix_html("Some HTML text<br />")
'Some HTML& text\n'
show_doc(replace_rep, doc_string=False)
replace_rep [source]
replace_rep(t:str) → str
Whenever a character is repeated more than three times in t, we replace the whole thing by 'TK_REP n char' where n is the number of occurrences and char the character.
replace_rep("I'm so excited!!!!!!!!")
"I'm so excited xxrep 8 ! "
show_doc(replace_wrep, doc_string=False)
replace_wrep
[source]
replace_wrep
(t
:str
) →str
Whenever a word is repeated more than four times in t
, we replace the whole thing by 'TK_WREP n w' where n is the number of occurences and w the word repeated.
replace_wrep("I've never ever ever ever ever ever ever ever done this.")
"I've never xxwrep 7 ever done this."
show_doc(rm_useless_spaces)
rm_useless_spaces("Inconsistent use of spaces.")
'Inconsistent use of spaces.'
show_doc(spec_add_spaces)
spec_add_spaces('I #like to #put #hashtags #everywhere!')
'I # like to # put # hashtags # everywhere!'
To convert our set of tokens to unique ids (and be able to have them go through embeddings), we use the following class:
show_doc(Vocab, doc_string=False)
class Vocab [source]
Vocab(path:PathOrStr)
Contains the correspondence between numbers and tokens, and numericalizes. path should point to the 'tmp' directory containing the token and id files.
show_doc(Vocab.create, doc_string=False)
create [source]
create(path:PathOrStr, tokens:Tokens, max_vocab:int, min_freq:int) → Vocab
Create a Vocab dictionary from a set of tokens in path. Only keeps at most max_vocab tokens, and only those that appear at least min_freq times; the rest are set to UNK.
show_doc(Vocab.numericalize)
show_doc(Vocab.textify)
vocab = Vocab.create(path, tokens, max_vocab=1000, min_freq=2)
vocab.numericalize(tokens[2])[:10]
[207, 321, 11, 6, 246, 144, 6, 22, 88, 240]
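Going the other way, textify maps ids back to tokens. A quick round-trip check on the vocab created above (the exact output depends on your data):
vocab.textify(vocab.numericalize(tokens[2])[:10])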
show_doc(SpacyTokenizer.tokenizer)
tokenizer
[source]
tokenizer
(t
:str
) →List
[str
]
show_doc(SpacyTokenizer.add_special_cases)
add_special_cases
[source]
add_special_cases
(toks
:StrList
)