SRTTU


Text generation with character-level recurrent networks, trained on Ferdowsi's Shahnameh

The code is adapted, with modifications, from the TensorFlow documentation:

https://www.tensorflow.org/tutorials/sequences/text_generation

It is strongly recommended that you review the notebook

42-text-generation-with-lstm.ipynb

before this one.
In this notebook:

  • we build a character-level recurrent model;
  • an embedding layer is used for characters rather than for words;
  • the TensorFlow tf.data API is used for the input pipeline;
  • TensorFlow eager execution is enabled;
  • the model is trained on the Shahnameh dataset;
  • we use the GPU-specific tf.keras.layers.CuDNNGRU layer.

If you are running this in Google Colab, make sure to enable the GPU.

Note: Enable GPU acceleration to execute this notebook faster. In Colab: Runtime > Change runtime type > Hardware accelerator > GPU. If running locally, make sure TensorFlow version >= 1.11.

This tutorial includes runnable code implemented using tf.keras and eager execution.

While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:

  • The model is character-based. When training started, the model did not know how to spell a Persian word, or even that words were a unit of text.

  • As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.

Setup

Import TensorFlow and other libraries

In [1]:
import tensorflow as tf
tf.enable_eager_execution()

import numpy as np
import os
import time
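
To confirm the requirements in the note above, here is an optional quick check (tf.test.is_gpu_available() is the same call used later in this notebook when selecting the RNN layer):

# Optional sanity check: TensorFlow version and GPU visibility.
print("TensorFlow version:", tf.__version__)   # should be >= 1.11
print("GPU available:", tf.test.is_gpu_available())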

Download the Shahnameh dataset

Change the following line to run this code on your own data.

In [2]:
path_to_file = tf.keras.utils.get_file('shahnameh.txt', 'http://dataset.class.vision/NLP/shahnameh.txt')
Downloading data from http://dataset.class.vision/NLP/shahnameh.txt
4653056/4652876 [==============================] - 5s 1us/step

Read the data

First, look at the text.

In [3]:
text = open(path_to_file, encoding="utf8").read()
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
Length of text: 2653849 characters
In [4]:
# Take a look at the first 250 characters in text
print(text[:250])
|به نام خداوند جان و خرد
|کزین برتر اندیشه برنگذرد
|خداوند نام و خداوند جای
|خداوند روزی ده رهنمای
|خداوند کیوان و گردان سپهر
|فروزنده ماه و ناهید و مهر
|ز نام و نشان و گمان برترست
|نگارندهٔ بر شده پیکرست
|به بینندگان آفریننده را
|نبینی مرنجان دو بین
In [5]:
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
48 unique characters

Process the text

Vectorize the text

Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.

In [6]:
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)

text_as_int = np.array([char2idx[c] for c in text])

Now we have an integer representation for each character. Notice that each character has been mapped to an index from 0 to len(vocab) - 1.

In [7]:
print(char2idx['آ'])
print(char2idx['\n'])
print(char2idx[' '])
print(char2idx['ث'])
10
0
1
17
In [8]:
print('{')
for char,_ in zip(char2idx, range(20)):
    print('  {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print('  ...\n}')
{
  '\n':   0,
  ' ' :   1,
  '(' :   2,
  ')' :   3,
  '|' :   4,
  '«' :   5,
  '»' :   6,
  '،' :   7,
  '؟' :   8,
  'ء' :   9,
  'آ' :  10,
  'أ' :  11,
  'ؤ' :  12,
  'ئ' :  13,
  'ا' :  14,
  'ب' :  15,
  'ت' :  16,
  'ث' :  17,
  'ج' :  18,
  'ح' :  19,
  ...
}
In [9]:
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
'|به نام خداون' ---- characters mapped to int ---- > [ 4 15 38  1 37 14 36  1 20 21 14 39 37]

The prediction task

Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.

Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?

Create training examples and targets

Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.

For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.

So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
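
The same shift in plain Python, as a throwaway illustration of the "Hello" example (not part of the actual pipeline):

chunk = "Hello"            # a chunk of seq_length + 1 = 5 characters
input_text = chunk[:-1]    # "Hell"
target_text = chunk[1:]    # "ello"
print(input_text, "->", target_text)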

To do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.

In [10]:
# The maximum length sequence we want for a single input (in characters)
seq_length = 100
examples_per_epoch = len(text)//seq_length

# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)

for c in char_dataset.take(15):
    print(idx2char[c.numpy()])
|
ب
ه
 
ن
ا
م
 
خ
د
ا
و
ن
د
 

The batch method lets us easily convert these individual characters to sequences of the desired size.

In [12]:
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)

for item in sequences.take(5):
    print(repr(''.join(idx2char[item.numpy()])))
    print("\n")
'|به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|خ'


'داوند کیوان و گردان سپهر\n|فروزنده ماه و ناهید و مهر\n|ز نام و نشان و گمان برترست\n|نگارندهٔ بر شده پیکر'


'ست\n|به بینندگان آفریننده را\n|نبینی مرنجان دو بیننده را\n|نیابد بدو نیز اندیشه راه\n|که او برتر از نام و'


' از جایگاه\n|سخن هر چه زین گوهران بگذرد\n|نیابد بدو راه جان و خرد\n|خرد گر سخن برگزیند همی\n|همان را گزین'


'د که بیند همی\n|ستودن نداند کس او را چو هست\n|میان بندگی را ببایدت بست\n|خرد را و جان را همی سنجد اوی\n|د'


For each sequence, duplicate and shift it to form the input and target text by using the map method to apply a simple function to each batch:

In [13]:
def split_input_target(chunk):
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text

dataset = sequences.map(split_input_target)

Print the first example's input and target values:

In [14]:
for input_example, target_example in  dataset.take(1):
    print ('Input data: ')
    print(repr(''.join(idx2char[input_example.numpy()])))
    print ('Target data: ')
    print(repr(''.join(idx2char[target_example.numpy()])))
Input data: 
'|به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|'
Target data: 
'به نام خداوند جان و خرد\n|کزین برتر اندیشه برنگذرد\n|خداوند نام و خداوند جای\n|خداوند روزی ده رهنمای\n|خ'
input:
'|به نام خداوند جان و خرد
|کزین برتر اندیشه برنگذرد
|خداوند نام و خداوند جای
|خداوند روزی ده رهنمای
|'
output:
'به نام خداوند جان و خرد
|کزین برتر اندیشه برنگذرد
|خداوند نام و خداوند جای
|خداوند روزی ده رهنمای
|خ'

Each index of this vector is processed as a single time step. For the input at time step 0, the model receives the index of "|" and tries to predict the index of "ب" as the next character. At the next time step it does the same thing, but in addition to the current input the RNN also has the context from the previous step.
In [15]:
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
    print("Step {:4d}".format(i))
    print("  input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
    print("  expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
Step    0
  input: 4 ('|')
  expected output: 15 ('ب')
Step    1
  input: 15 ('ب')
  expected output: 38 ('ه')
Step    2
  input: 38 ('ه')
  expected output: 1 (' ')
Step    3
  input: 1 (' ')
  expected output: 37 ('ن')
Step    4
  input: 37 ('ن')
  expected output: 14 ('ا')

Create training batches

We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.

In [16]:
# Batch size 
BATCH_SIZE = 64
steps_per_epoch = examples_per_epoch//BATCH_SIZE

# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences, 
# so it doesn't attempt to shuffle the entire sequence in memory. Instead, 
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000

dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)

dataset
Out[16]:
<BatchDataset shapes: ((64, 100), (64, 100)), types: (tf.int32, tf.int32)>

Build The Model

Use tf.keras.Sequential to define the model. For this simple example, three layers are used:

  • tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map the numbers of each character to a vector with embedding_dim dimensions;
  • tf.keras.layers.GRU: A type of RNN with size units=rnn_units (you can also use an LSTM layer here; see the sketch after the GRU selection below.)
  • tf.keras.layers.Dense: The output layer, with vocab_size outputs.
In [17]:
# Length of the vocabulary in chars
vocab_size = len(vocab)

# The embedding dimension 
embedding_dim = 256

# Number of RNN units
rnn_units = 1024

Next define a function to build the model.

Use CuDNNGRU if running on GPU.

In [18]:
if tf.test.is_gpu_available():
    rnn = tf.keras.layers.CuDNNGRU
else:
    import functools
    rnn = functools.partial(
        tf.keras.layers.GRU, recurrent_activation='sigmoid')
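
If you want to try the LSTM alternative mentioned in the layer list above, the same pattern applies. This is only a sketch of the swap; the rest of this notebook keeps the GRU:

# Hypothetical LSTM variant: CuDNNLSTM on GPU, plain LSTM otherwise.
if tf.test.is_gpu_available():
    rnn = tf.keras.layers.CuDNNLSTM
else:
    import functools
    rnn = functools.partial(
        tf.keras.layers.LSTM, recurrent_activation='sigmoid')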
In [19]:
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        rnn(rnn_units,
            return_sequences=True,
            recurrent_initializer='glorot_uniform',
            stateful=True),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model
In [20]:
model = build_model(
  vocab_size = len(vocab), 
  embedding_dim=embedding_dim, 
  rnn_units=rnn_units, 
  batch_size=BATCH_SIZE)

For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:

A drawing of the data passing through the model

Try the model

Now run the model to see that it behaves as expected.

First check the shape of the output:

In [21]:
for input_example_batch, target_example_batch in dataset.take(1): 
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
(64, 100, 48) # (batch_size, sequence_length, vocab_size)

In the above example the sequence length of the input is 100, but the model can be run on inputs of any length:

In [22]:
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (64, None, 256)           12288     
_________________________________________________________________
cu_dnngru (CuDNNGRU)         (64, None, 1024)          3938304   
_________________________________________________________________
dense (Dense)                (64, None, 48)            49200     
=================================================================
Total params: 3,999,792
Trainable params: 3,999,792
Non-trainable params: 0
_________________________________________________________________
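
As a sanity check, the parameter counts above can be reproduced by hand. The sketch below assumes the CuDNN GRU keeps separate input and recurrent bias vectors, hence the factor of 2 on the bias term:

# Reproduce the parameter counts from model.summary() above.
embedding_params = vocab_size * embedding_dim                                # 48 * 256 = 12,288
gru_params = 3 * (rnn_units * (rnn_units + embedding_dim) + 2 * rnn_units)   # 3,938,304
dense_params = rnn_units * vocab_size + vocab_size                           # 49,200
print(embedding_params + gru_params + dense_params)                          # 3,999,792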

To get actual predictions from the model, we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary.

Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.

Try it for the first example in the batch:

In [23]:
sampled_indices = tf.multinomial(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()

This gives us, at each timestep, a prediction of the next character index:

In [24]:
sampled_indices
Out[24]:
array([ 7,  6, 18, 39, 33, 45, 22, 19, 18, 41, 40, 29, 33, 19, 26,  7, 30,
       10,  3, 21, 36,  5, 32, 34,  7, 46, 23, 33,  0, 36,  7,  8, 39, 12,
       30, 30, 32, 47, 35, 35, 35, 38, 31,  3, 39, 15, 25,  8, 38, 47,  1,
        8, 39, 28, 19, 28, 14,  7, 10, 40, 26, 18,  4, 35, 46, 10, 46,  9,
       41,  7, 24,  3, 22, 44, 45, 41, 33, 16, 16, 29, 30, 35, 36, 44, 13,
       31, 47, 10, 30, 18,  8,  0, 23, 33, 27, 47, 19, 40, 26, 10],
      dtype=int64)

Decode these to see the text predicted by this untrained model:

In [25]:
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
Input: 
 ' ناز و ز تو بپوشد سخن\n|تهمتن بران گشت همداستان\n|که فرخنده موبد زد این داستان\n|چنین گفت خرم دل رهنمای'

Next Char Predictions: 
 '،»جوفگذحجپٔطفحش،ظآ)دم«غق،یرف\nم،؟وؤظظغ\u200cلللهع)وبس؟ه\u200c ؟وضحضا،آٔشج|لیآیءپ،ز)ذکگپفتتطظلمکئع\u200cآظج؟\nرفص\u200cحٔشآ'
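
For contrast, a greedy decode of the same example would simply take the argmax at each position. This is shown only to illustrate the note above; it is exactly the strategy that tends to get the model stuck in repetitive loops:

# Greedy alternative (illustration only): always pick the highest-logit character.
greedy_indices = tf.argmax(example_batch_predictions[0], axis=-1).numpy()
print(repr("".join(idx2char[greedy_indices])))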

Train the model

At this point the problem can be treated as a standard classification problem: given the previous RNN state and the input at this time step, predict the class of the next character.

Attach an optimizer and a loss function

The standard tf.losses.sparse_softmax_cross_entropy loss function works in this case because it is applied across the last dimension of the predictions.

In [26]:
example_batch_loss  = tf.losses.sparse_softmax_cross_entropy(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") 
print("scalar_loss:      ", example_batch_loss.numpy())
Prediction shape:  (64, 100, 48)  # (batch_size, sequence_length, vocab_size)
scalar_loss:       3.8720946

Configure the training procedure using the tf.keras.Model.compile method. We'll use tf.train.AdamOptimizer with default arguments and tf.losses.sparse_softmax_cross_entropy as the loss function.

In [27]:
model.compile(
    optimizer = tf.train.AdamOptimizer(),
    loss = tf.losses.sparse_softmax_cross_entropy)

Configure checkpoints

Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:

In [28]:
# Directory where the checkpoints will be saved
checkpoint_dir = os.path.join(os.getcwd(), 'training_checkpoints')
checkpoint_dir
Out[28]:
'D:\\my cources\\deeplearning-part2\\notebook\\training_checkpoints'
In [29]:
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)

Execute the training

Train the model for 30 epochs (each epoch takes roughly 110 seconds on the GPU used here):

In [30]:
EPOCHS=30
In [31]:
history = model.fit(dataset.repeat(), epochs=EPOCHS, steps_per_epoch=steps_per_epoch, callbacks=[checkpoint_callback])
Epoch 1/30
414/414 [==============================] - 107s 259ms/step - loss: 2.1298
Epoch 2/30
414/414 [==============================] - 109s 264ms/step - loss: 1.4952
Epoch 3/30
414/414 [==============================] - 110s 266ms/step - loss: 1.3507
Epoch 4/30
414/414 [==============================] - 110s 266ms/step - loss: 1.2831
Epoch 5/30
414/414 [==============================] - 111s 267ms/step - loss: 1.2383
Epoch 6/30
414/414 [==============================] - 110s 267ms/step - loss: 1.2029
Epoch 7/30
414/414 [==============================] - 110s 267ms/step - loss: 1.1726
Epoch 8/30
414/414 [==============================] - 111s 267ms/step - loss: 1.1443
Epoch 9/30
414/414 [==============================] - 111s 269ms/step - loss: 1.1201
Epoch 10/30
414/414 [==============================] - 111s 267ms/step - loss: 1.0959
Epoch 11/30
414/414 [==============================] - 111s 268ms/step - loss: 1.0741
Epoch 12/30
414/414 [==============================] - 111s 269ms/step - loss: 1.0536
Epoch 13/30
414/414 [==============================] - 111s 267ms/step - loss: 1.0350
Epoch 14/30
414/414 [==============================] - 111s 268ms/step - loss: 1.0198
Epoch 15/30
414/414 [==============================] - 111s 267ms/step - loss: 1.0040
Epoch 16/30
414/414 [==============================] - 111s 267ms/step - loss: 0.9912
Epoch 17/30
414/414 [==============================] - 111s 267ms/step - loss: 0.9803
Epoch 18/30
414/414 [==============================] - 111s 268ms/step - loss: 0.9700
Epoch 19/30
414/414 [==============================] - 112s 272ms/step - loss: 0.9613
Epoch 20/30
414/414 [==============================] - 113s 272ms/step - loss: 0.9551
Epoch 21/30
414/414 [==============================] - 114s 275ms/step - loss: 0.9501
Epoch 22/30
414/414 [==============================] - 112s 270ms/step - loss: 0.9463
Epoch 23/30
414/414 [==============================] - 111s 269ms/step - loss: 0.9441
Epoch 24/30
414/414 [==============================] - 111s 269ms/step - loss: 0.9424
Epoch 25/30
414/414 [==============================] - 112s 270ms/step - loss: 0.9418
Epoch 26/30
414/414 [==============================] - 111s 269ms/step - loss: 0.9433
Epoch 27/30
414/414 [==============================] - 111s 269ms/step - loss: 0.9424
Epoch 28/30
414/414 [==============================] - 112s 270ms/step - loss: 0.9445
Epoch 29/30
414/414 [==============================] - 114s 276ms/step - loss: 0.9482
Epoch 30/30
414/414 [==============================] - 113s 272ms/step - loss: 0.9503

Generate text

Restore the latest checkpoint

To keep this prediction step simple, use a batch size of 1.

Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.

To run the model with a different batch_size, we need to rebuild the model and restore the weights from the checkpoint.

In [32]:
tf.train.latest_checkpoint(checkpoint_dir)
Out[32]:
'D:\\my cources\\deeplearning-part2\\notebook\\training_checkpoints\\ckpt_30'
In [33]:
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)

model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))

model.build(tf.TensorShape([1, None]))
In [34]:
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (1, None, 256)            12288     
_________________________________________________________________
cu_dnngru_1 (CuDNNGRU)       (1, None, 1024)           3938304   
_________________________________________________________________
dense_1 (Dense)              (1, None, 48)             49200     
=================================================================
Total params: 3,999,792
Trainable params: 3,999,792
Non-trainable params: 0
_________________________________________________________________

The prediction loop

The following code block generates the text:

  • It starts by choosing a start string, initializing the RNN state, and setting the number of characters to generate.

  • Get the prediction distribution of the next character using the start string and the RNN state.

  • Then, use a multinomial distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.

  • The RNN state returned by the model is fed back into the model so that it now has more context, instead of only a single character. After predicting the next character, the modified RNN state is again fed back into the model, which is how it accumulates context from the previously predicted characters.

To generate text the model's output is fed back to the input

In [35]:
def generate_text(model, start_string):
    # Evaluation step (generating text using the learned model)

    # Number of characters to generate
    num_generate = 1000


    # Converting our start string to numbers (vectorizing) 
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)

    # Empty string to store our results
    text_generated = []

    # Low temperatures result in more predictable text.
    # Higher temperatures result in more surprising text.
    # Experiment to find the best setting.
    temperature = 1.0

    # Here batch size == 1
    model.reset_states()
    for i in range(num_generate):
        predictions = model(input_eval)
        # remove the batch dimension
        predictions = tf.squeeze(predictions, 0)

        # using a multinomial distribution to sample the next character from the model's output
        predictions = predictions / temperature
        predicted_id = tf.multinomial(predictions, num_samples=1)[-1,0].numpy()

        # We pass the predicted character as the next input to the model
        # along with the previous hidden state
        input_eval = tf.expand_dims([predicted_id], 0)

        text_generated.append(idx2char[predicted_id])

    return (start_string + ''.join(text_generated))
In [57]:
print(generate_text(model, start_string="به نام خد"))
به نام خداوند باز
|همیشه جهان را کند کارزار
|یکی آتشی دید یزدان‌پرست
|چو بگذشت گردان به ایوان خویش
|همیدون پیاده سر و تنش گفت
|زمین شد پر از سر پر از خاک و ششت
|چه آمد به گنجور گستهم گفت
|که کاوس های تو پرخاش جوی
|زمانه برآید ترا یار کس
|رده
|بسیچید روز و جگر برگرفت
|به آواز گفتند یک شب به خنجر به رنج
|تنش لشکری سوخت افراسیاب
|که چون زخم کو را همه یاد کرد
|زمان و زمان کشته شد زادشم
|همی ز آهن آتش پر از بادگون
|فرستاده گفت ای جهاندیده مرد
|که دست نخستت ز اسفندیار
|بدان تا بر و بیشه و بی‌شمار
|همی خواب دارد هر آنک کس را ندیدیم کس
|نباید که او را کسی زین سپس
|نه پاداش با خوردنیها بخواند
|شگفتی و از کوه دیدارشان
|فروبارجویست مانم به چیست
|جز از پهلوان جهان کرد راست
|شهنشاه وهرکو پرا بگذرد
|نکوهر بود
|جز از نیکدل چاره گردند راه
|همه خاک بیند به پیکار او
|بدان نامداران جنگش دراز
|به کش نیازی دهد
|ز کیوان و باعواب قیدافه چون زرگرفت
|به حقنا رسید از فزود
|ولیکن یکی گفت شاه جگاه
|بفرجام بس زنده گر دوستی
|بغلتید و دل شاددل بود زور
|شب آمد بخیره ز ما بنده‌ایم
|هم استه بدین تیره خاک نژند
|بیاورد گرسیوز آن 
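
The temperature in generate_text is hard-coded to 1.0. To experiment with it, one option is to expose it as an argument. The variant below is only a sketch; the temperature and num_generate parameters are additions for illustration, not part of the original function:

def generate_text_with_temperature(model, start_string, temperature=0.5, num_generate=500):
    # Same loop as generate_text above, but temperature is now a parameter.
    input_eval = tf.expand_dims([char2idx[s] for s in start_string], 0)
    text_generated = []
    model.reset_states()
    for _ in range(num_generate):
        predictions = tf.squeeze(model(input_eval), 0)
        predicted_id = tf.multinomial(predictions / temperature, num_samples=1)[-1, 0].numpy()
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])
    return start_string + ''.join(text_generated)

# Lower temperature gives more conservative verses; higher gives more surprising ones.
# print(generate_text_with_temperature(model, start_string="به نام خد", temperature=0.5))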
Shahid Rajaee Teacher Training University (SRTTU)
Special Topics 2 - Advanced Deep Learning
Alireza Akhavan Pour
97-98
SRTTU.edu - Class.Vision - AkhavanPour.ir