Training RNN with simple sequence dataset

 

We learned in the previous post that an RNN is expected to have the ability to remember sequence information. Let's do an easy experiment to check this before trying an actual NLP application.

Simple sequence dataset

I prepared a simple script to generate a simple integer sequence, as follows.
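The original simple_sequence_dataset.py is not reproduced here, but a minimal sketch of such a generator could look like the following (the function name and the n_repeat argument are assumptions; only N_VOCABULARY is mentioned later in this post):

import numpy as np

N_VOCABULARY = 10  # value used later in this post; the numbers 1..9 appear in the sequence


def generate_simple_sequence(n_repeat=100):
    """Return [1, 2, 2, 3, 3, 3, ..., 9 repeated 9 times], repeated n_repeat times."""
    seq = []
    for _ in range(n_repeat):
        for i in range(1, N_VOCABULARY):
            seq.extend([i] * i)  # the number i appears i times
    return np.asarray(seq, dtype=np.int32)


if __name__ == '__main__':
    print(generate_simple_sequence())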

Its output is,

[1 2 2 3 3 3 4 4 4 4 5 5 5 5 5 6 6 6 6 6 6 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8 9 9 9 9 9 9 9 9 9 1 2 2 ..., 9 9 9]

So the number \(i\) is repeated \(i\) times. In order to generate the correct sequence, the RNN needs to “count” how many times the current number has already appeared.

For example, to output the correct sequence 9 9 9 … followed by 1, the RNN needs to check whether 9 has already appeared 9 times before outputting 1.

 

Training code for RNN

The training procedure for an RNN is a little more complicated than for an MLP or CNN: because of the recurrent loop, we need to handle backpropagation over sequential data properly.

To achieve this, we implement a custom iterator and updater.

※ The following implementation is based on the Chainer official example code.

Iterator – feed data sequentially

[Figure: ParallelSequentialIterator]

When training an RNN, we need to input the data sequentially, so we should not take a random permutation. We need to be careful when creating the minibatches so that each minibatch is fed in sequence.

You can implement a custom Iterator class to achieve this functionality. The parent class Iterator is implemented as follows: Iterator code.

So what we need to implement in the Iterator is:

  • __init__(self, ...) :
    Initialization code.
  • __next__(self) :
    This is the core part of the iterator. On each iteration, this function is automatically called to get the next minibatch.
  • epoch_detail(self) :
    This property is used by the trainer module to show the progress of training.
  • serialize(self) :
    Implement this if you want to support the resume functionality of the trainer.

We will implement ParallelSequentialIterator, which works as follows (please also see the figure above):

  1. It receives the dataset in __init__ and splits it equally into batch_size parts.
  2. On every iteration of the training loop, __next__() is called.
    The iterator prepares the current word (input data) and the next word (answer data).
    The RNN model is trained to predict the next word from the current word (and its recurrent unit, which encodes the past sequence information).
  3. Additionally, in order for the trainer extensions to work nicely, epoch_detail and serialize are implemented. (These are not mandatory for a minimum implementation.)

The final code looks like the following.
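Below is a sketch of ParallelSequentialIterator, modeled on the Chainer official (ptb) example; the actual code used in this post may differ in details.

import chainer


class ParallelSequentialIterator(chainer.dataset.Iterator):
    """Reads the dataset from batch_size equally spaced offsets in parallel,
    advancing each offset by one position per iteration."""

    def __init__(self, dataset, batch_size, repeat=True):
        self.dataset = dataset
        self.batch_size = batch_size
        self.epoch = 0
        self.is_new_epoch = False
        self.repeat = repeat
        # Split the dataset equally: each batch element starts at its own offset.
        self.offsets = [i * len(dataset) // batch_size for i in range(batch_size)]
        self.iteration = 0

    def __next__(self):
        length = len(self.dataset)
        if not self.repeat and self.iteration * self.batch_size >= length:
            raise StopIteration
        # Current word (input data) and next word (answer data) for each position.
        cur_words = self.get_words()
        self.iteration += 1
        next_words = self.get_words()
        epoch = self.iteration * self.batch_size // length
        self.is_new_epoch = self.epoch < epoch
        if self.is_new_epoch:
            self.epoch = epoch
        return list(zip(cur_words, next_words))

    @property
    def epoch_detail(self):
        # Floating-point progress used by the trainer extensions.
        return self.iteration * self.batch_size / len(self.dataset)

    def get_words(self):
        return [self.dataset[(offset + self.iteration) % len(self.dataset)]
                for offset in self.offsets]

    def serialize(self, serializer):
        # Needed to support the trainer's snapshot/resume functionality.
        self.iteration = serializer('iteration', self.iteration)
        self.epoch = serializer('epoch', self.epoch)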

 

Updater – Truncated back propagation through time (BPTT)

[Figure: Truncated BPTT]

Truncated backpropagation through time: for an RNN, each forward computation (arrow from bottom to top) depends on the previous recurrent unit, so we need to run the forward computation several times before proceeding to the backward computation.

Backpropagation through time: the training procedure for an RNN model is different from that of an MLP or CNN, because each forward computation of an RNN depends on the previous forward computation through the recurrent unit. Therefore we need to execute the forward computation several times before executing the backward computation, so that the recurrent weight \(W_{hh}\) can learn the sequential information. We set the value bprop_len (backpropagation length) in the Updater implementation below: the forward computation is executed this many times consecutively, followed by one backpropagation.

Truncate the computational graph: also, as you can see from the figure above, the RNN graph grows every time the forward computation is executed, and the computer cannot handle a graph that grows indefinitely. To deal with this issue, we cut (truncate) the graph after each backward computation. This can be achieved by calling the unchain_backward function in Chainer.

 

This optimization method can be implemented by creating a custom Updater class, BPTTUpdater, as a subclass of StandardUpdater.

It just overrides the function update_core, which is where the parameter update (optimization) process is written.

Source code: bptt_updater.py
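The core of the updater is sketched below, again following the Chainer official example; the actual bptt_updater.py may differ slightly.

import chainer
from chainer import training


class BPTTUpdater(training.StandardUpdater):

    def __init__(self, train_iter, optimizer, bprop_len, device):
        super(BPTTUpdater, self).__init__(train_iter, optimizer, device=device)
        self.bprop_len = bprop_len

    def update_core(self):
        loss = 0
        train_iter = self.get_iterator('main')
        optimizer = self.get_optimizer('main')

        # Run the forward computation bprop_len times consecutively,
        # accumulating the loss.
        for i in range(self.bprop_len):
            batch = train_iter.__next__()
            x, t = self.converter(batch, self.device)
            loss += optimizer.target(chainer.Variable(x), chainer.Variable(t))

        optimizer.target.cleargrads()  # clear old gradients
        loss.backward()                # backpropagate the accumulated loss
        loss.unchain_backward()        # truncate the computational graph
        optimizer.update()             # update the parameters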

As you can see, the forward computation is executed in the for loop bprop_len times consecutively to accumulate the loss, followed by one backward call that backpropagates this accumulated loss. After that, the parameters are updated by the optimizer using the update function.

Note that unchain_backward is called at the end of update_core every time, to truncate (cut) the computational graph.

 

Main training code

Once the iterator and the updater are prepared, the training code is almost the same as in the previous MLP-MNIST and CNN-CIFAR10/CIFAR100 trainings.
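As a rough sketch (the model class, dataset function, and hyperparameter values below are illustrative assumptions, not the exact code of this post), the main script wires the pieces together like this:

import chainer
import chainer.links as L
from chainer import training
from chainer.training import extensions

# Hypothetical names: SimpleRNN and generate_simple_sequence stand in for the
# model and dataset script used in this post.
train_data = generate_simple_sequence()
model = L.Classifier(SimpleRNN(n_vocab=10, n_units=100))

optimizer = chainer.optimizers.Adam()
optimizer.setup(model)

train_iter = ParallelSequentialIterator(train_data, batch_size=20)
updater = BPTTUpdater(train_iter, optimizer, bprop_len=20, device=-1)

trainer = training.Trainer(updater, (20, 'epoch'), out='result')
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(
    ['epoch', 'main/loss', 'main/accuracy', 'elapsed_time']))
trainer.run()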

 

Run the code

You can execute the code like this:

You can also train a different model using the -a option:

 

Below is the result in my environment with the RNN architecture.

 

I set N_VOCABULARY=10 in simple_sequence_dataset.py, and even the simple RNN achieved an accuracy close to 1. It seems this RNN model has the ability to remember the past 10 steps of the sequence.

 

 
