import tensorflow as tf
import numpy as np
import os
import time
path_to_file = "shahnameh.txt"
First, take a look at the text:
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
Length of text: 2653849 characters
# Take a look at the first 250 characters in text
print(text[:250])
|به نام خداوند جان و خرد |کزین برتر اندیشه برنگذرد |خداوند نام و خداوند جای |خداوند روزی ده رهنمای |خداوند کیوان و گردان سپهر |فروزنده ماه و ناهید و مهر |ز نام و نشان و گمان برترست |نگارندهٔ بر شده پیکرست |به بینندگان آفریننده را |نبینی مرنجان دو بین
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
48 unique characters
Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
Now we have an integer representation for each character. Notice that each character is mapped to an index from 0 to len(vocab) - 1.
print('{')
for char, _ in zip(char2idx, range(20)):
    print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
{ '\n': 0, ' ' : 1, '(' : 2, ')' : 3, '|' : 4, '«' : 5, '»' : 6, '،' : 7, '؟' : 8, 'ء' : 9, 'آ' : 10, 'أ' : 11, 'ؤ' : 12, 'ئ' : 13, 'ا' : 14, 'ب' : 15, 'ت' : 16, 'ث' : 17, 'ج' : 18, 'ح' : 19, ... }
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
'|به نام خداون' ---- characters mapped to int ---- > [ 4 15 38 1 37 14 36 1 20 21 14 39 37]
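Since idx2char is the inverse of char2idx, decoding an integer sequence back into text is just an array lookup followed by a join. As a quick sanity check, decoding the first 13 integers recovers the original characters:
# Round-trip check: decode the first 13 integers back into characters
print(''.join(idx2char[text_as_int[:13]]))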
Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, the question becomes: given all the characters computed until this moment, what is the next character?
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
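As a plain-Python illustration of that shift (using the "Hello" example, with seq_length = 4):
# Plain-Python illustration of the input/target shift
chunk = "Hello"                                  # a chunk of seq_length + 1 characters
input_seq, target_seq = chunk[:-1], chunk[1:]
print(input_seq, target_seq)                     # Hell ello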
To do this, first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
# The maximum length sentence we want for a single input in characters
seq_length = 100
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
    print(idx2char[i.numpy()])
| ب ه ن
The batch method lets us easily convert these individual characters to sequences of the desired size.
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
    print(repr(''.join(idx2char[item.numpy()])))
    print("***"*5)
'|به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|خ' *************** 'داوند کیوان و گردان سپهر\n|فروزنده ماه و ناهید و مهر\n|ز نام و نشان و گمان برترست\n|نگارندهٔ بر شده پیکر' *************** 'ست\n|به بینندگان آفریننده را\n|نبینی مرنجان دو بیننده را\n|نیابد بدو نیز اندیشه راه\n|که او برتر از نام و' *************** ' از جایگاه\n|سخن هر چه زین گوهران بگذرد\n|نیابد بدو راه جان و خرد\n|خرد گر سخن برگزیند همی\n|همان را گزین' *************** 'د که بیند همی\n|ستودن نداند کس او را چو هست\n|میان بندگی را ببایدت بست\n|خرد را و جان را همی سنجد اوی\n|د' ***************
For each sequence, duplicate and shift it to form the input and target text by using the map method to apply a simple function to each batch:
def split_input_target(chunk):
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text
dataset = sequences.map(split_input_target)
Print the first example's input and target values:
for input_example, target_example in dataset.take(1):
    print('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
    print('Target data:', repr(''.join(idx2char[target_example.numpy()])))
Input data: '|به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|' Target data: 'به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|خ'
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for '|' and tries to predict the index for 'ب' as the next character. At the next timestep, it does the same thing, but the RNN considers the previous step's context in addition to the current input character.
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
    print("Step {:4d}".format(i))
    print("  input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
    print("  expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
Step    0
  input: 4 ('|')
  expected output: 15 ('ب')
Step    1
  input: 15 ('ب')
  expected output: 38 ('ه')
Step    2
  input: 38 ('ه')
  expected output: 1 (' ')
Step    3
  input: 1 (' ')
  expected output: 37 ('ن')
Step    4
  input: 37 ('ن')
  expected output: 14 ('ا')
We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
# Batch size
BATCH_SIZE = 64
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
dataset
<BatchDataset shapes: ((64, 100), (64, 100)), types: (tf.int32, tf.int32)>
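The cell above only batches the sequences. If you also want to shuffle them before batching, as mentioned, one option is to rebuild the pipeline with a shuffle step; the buffer size of 10000 below is only an illustrative choice:
# Optional: shuffle the example sequences before batching (BUFFER_SIZE is an illustrative choice)
BUFFER_SIZE = 10000
dataset = sequences.map(split_input_target)
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)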
Use tf.keras.Sequential to define the model. For this simple example three layers are used to define our model:
- tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map the number of each character to a vector with embedding_dim dimensions;
- tf.keras.layers.GRU: A type of RNN with size units=rnn_units (you can also use an LSTM layer here);
- tf.keras.layers.Dense: The output layer, with vocab_size outputs.
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 25
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.GRU(rnn_units,
                            return_sequences=True,
                            stateful=True,
                            recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model
model = build_model(
    vocab_size=len(vocab),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=BATCH_SIZE)
For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character.
Now run the model to see that it behaves as expected.
First check the shape of the output:
for input_example_batch, target_example_batch in dataset.take(1):
    example_batch_predictions = model.predict(input_example_batch)
    print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
(64, 100, 48) # (batch_size, sequence_length, vocab_size)
In the above example the sequence length of the input is 100, but the model can be run on inputs of any length:
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (64, None, 25) 1200 _________________________________________________________________ gru (GRU) (64, None, 1024) 3228672 _________________________________________________________________ dense (Dense) (64, None, 48) 49200 ================================================================= Total params: 3,279,072 Trainable params: 3,279,072 Non-trainable params: 0 _________________________________________________________________
To get actual predictions from the model, we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
Try it for the first example in the batch:
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
This gives us, at each timestep, a prediction of the next character index:
sampled_indices
array([38, 47, 10, 11, 21, 29, 37, 41, 13, 35, 41, 23, 12, 21, 41, 15, 47, 29, 3, 10, 42, 33, 12, 13, 18, 4, 35, 27, 41, 16, 8, 45, 15, 43, 8, 40, 28, 35, 29, 3, 5, 8, 17, 32, 10, 44, 25, 24, 45, 24, 26, 40, 19, 8, 6, 45, 25, 3, 31, 13, 28, 12, 6, 46, 22, 45, 12, 33, 42, 33, 0, 15, 32, 36, 45, 22, 33, 44, 43, 33, 36, 36, 9, 47, 4, 18, 0, 31, 33, 33, 0, 45, 37, 15, 44, 22, 30, 10, 29, 28], dtype=int64)
Decode these to see the text predicted by this untrained model:
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
Input: '|به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|' Next Char Predictions: 'ه\u200cآأدطنپئلپرؤدپب\u200cط)آچفؤئج|لصپت؟گبژ؟ٔضلط)«؟ثغآکسزگزشٔح؟»گس)عئضؤ»یذگؤفچف\nبغمگذفکژفممء\u200c|ج\nعفف\nگنبکذظآطض'
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions. Because our model returns logits, we need to set the from_logits flag.
def loss(labels, logits):
    return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
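Before training, it is worth evaluating this loss on the untrained predictions from above. Since the untrained model's logits are still roughly uniform over the 48 characters, the mean loss should come out close to ln(48) ≈ 3.87:
# Sanity check: mean loss of the untrained model should be near ln(vocab_size) = ln(48) ≈ 3.87
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("scalar_loss:", example_batch_loss.numpy().mean())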
Configure the training procedure using the tf.keras.Model.compile method. We'll use tf.keras.optimizers.Adam with default arguments and the loss function.
model.compile(optimizer='adam', loss=loss)
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
Epoch 1/10
410/410 [==============================] - 129s 314ms/step - loss: 2.4480
Epoch 2/10
410/410 [==============================] - 149s 363ms/step - loss: 1.8074
Epoch 3/10
410/410 [==============================] - 152s 371ms/step - loss: 1.5528
Epoch 4/10
410/410 [==============================] - 154s 375ms/step - loss: 1.4235
Epoch 5/10
410/410 [==============================] - 154s 377ms/step - loss: 1.3444
Epoch 6/10
410/410 [==============================] - 157s 382ms/step - loss: 1.2861
Epoch 7/10
410/410 [==============================] - 157s 383ms/step - loss: 1.2370
Epoch 8/10
410/410 [==============================] - 136s 331ms/step - loss: 1.1920
Epoch 9/10
410/410 [==============================] - 147s 359ms/step - loss: 1.1487
Epoch 10/10
410/410 [==============================] - 152s 370ms/step - loss: 1.1066
To keep this prediction step simple, use a batch size of 1.
Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.
To run the model with a different batch_size, we need to rebuild the model and restore the weights from the checkpoint.
tf.train.latest_checkpoint(checkpoint_dir)
'./training_checkpoints\\ckpt_10'
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (1, None, 25) 1200 _________________________________________________________________ gru_1 (GRU) (1, None, 1024) 3228672 _________________________________________________________________ dense_1 (Dense) (1, None, 48) 49200 ================================================================= Total params: 3,279,072 Trainable params: 3,279,072 Non-trainable params: 0 _________________________________________________________________
The following code block generates the text:
It starts by choosing a start string, initializing the RNN state, and setting the number of characters to generate.
Get the prediction distribution of the next character using the start string and the RNN state.
Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.
The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it builds up context from the previously predicted characters.
Looking at the generated text, you'll see the model knows where to place the '|' verse separators and line breaks, and it imitates the vocabulary of the Shahnameh. With the small number of training epochs, it has not yet learned to form coherent verses.
def generate_text(model, start_string):
    # Evaluation step (generating text using the learned model)

    # Number of characters to generate
    num_generate = 1000

    # Converting our start string to numbers (vectorizing)
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)

    # Empty string to store our results
    text_generated = []

    # Low temperature results in more predictable text, higher temperature in more surprising text
    temperature = 1.0

    # Here batch size == 1
    model.reset_states()
    for i in range(num_generate):
        predictions = model(input_eval)
        # remove the batch dimension
        predictions = tf.squeeze(predictions, 0)

        # using a categorical distribution to predict the character returned by the model
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()

        # We pass the predicted character as the next input to the model
        # along with the previous hidden state
        input_eval = tf.expand_dims([predicted_id], 0)

        text_generated.append(idx2char[predicted_id])

    return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"به نام خدا"))
به نام خدایست نستوه تو |به گیتی نماید نگارد ز بتاختی |وزان پس چنین تا برآرم بماه |صد آن تختها برکشمد از تو بر تن خویش یابی به خون |ز شادی شگفتی که بیکار گشت |دوماه |کجا آن همه ریز کردم همی |ز تخم بد و باژ و پر بوی مهر |شنیده تخت باژی چو کوه بزرگ |بدست سخن گوی برخاستند |به زندان بیاوردش از جنگ جفت |یکی دیگر آنگه که تن بگذرد |من آن تخت راخسر بر تنگ هنگام موسن شود |سربخت این را که پوشیدهام |سراسان کنم داد و دانندگان |گلاب و عنان برگرفتند راه |نماند به رستم که لشکر براند |چه افگند دینار و گرمان به دست |چو ارجات داری خرامید یاد |که نزد کزت بر تو بر خاک روی |شهنشاه بینندهٔ رخش بروخون |تو گفتی همی درکشید این سخن |سواری بر اب گوهرنگار |صزو تن به پا اندر آویختست |نه زین باره و گردیه را بدست |به خون خسره آیید گفتار من |نگردد به بازد اسیدش تخل به درد |سوی حلبهاد آن سه زر |سپاس از دبیرو ستم |همی دشمنندان او تخت را نو نمرد |هرآنکس که او دشمن ایمن ببین |بدو گفت بهرام چون بر روان |یبا پیرسر گفت زن پر ز خون |نگه کرده و از بلت خسرو شوردار |بدآنید تاوان به ایران تویی | |ار و دوبست و زه برکشد |فروشد نه بیما نیر خ
The easiest thing you can do to improve the results is to train it for longer (try EPOCHS=30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions; a sketch of a two-layer variant follows below.
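For example, here is a minimal sketch of a two-GRU-layer variant of build_model. The name build_model_2gru and the choice to simply stack a second identical GRU layer are illustrative; whether it actually improves the results is something you would need to verify by training:
def build_model_2gru(vocab_size, embedding_dim, rnn_units, batch_size):
    # Same as build_model above, but with a second stacked GRU layer
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.GRU(rnn_units, return_sequences=True,
                            stateful=True, recurrent_initializer='glorot_uniform'),
        tf.keras.layers.GRU(rnn_units, return_sequences=True,
                            stateful=True, recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])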