Training code for MyDataset



This tutorial corresponds to the 03_custom_dataset_mlp folder in the source code.

We prepared our own dataset, MyDataset, in the previous post. The training procedure for this dataset is now almost the same as for MNIST training.

The differences from the MNIST dataset are:

  • This is a regression task (estimating a final real value) instead of a classification task (estimating the probability of each category)
  • The training data and validation/test data are not yet split in our custom dataset


Model definition for Regression task training

Our task is to estimate the real value “t” given the real value “x”, which is categorized as a regression task.


Example: Linear regression. Created by Sewaku

We often use the mean squared error as the loss function, namely,

$$ L = \frac{1}{D}\sum_{i=1}^D (t_i - y_i)^2 $$

where \(i\) denotes the \(i\)-th data point, \(D\) is the number of data points, and \(y_i\) is the model's output for the input \(x_i\).
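As a quick numeric check of this formula (the values below are made up for illustration):

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # targets t_i
y = np.array([1.5, 1.5, 2.0], dtype=np.float32)  # model outputs y_i

# L = (1/D) * sum_i (t_i - y_i)^2
D = len(t)
L = np.sum((t - y) ** 2) / D
print(L)  # (0.25 + 0.25 + 1.0) / 3 = 0.5
```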


The implementation of the MLP can be written as follows.

In this case, the MyMLP model calculates y (the target to predict) in its forward computation, and the loss is calculated in the __call__ function of the model.


Data separation for validation/test

When you download a publicly available machine learning dataset, it is often already separated into training data and test data (and sometimes validation data) from the beginning.

However, our custom dataset is not separated yet. We can split an existing dataset easily with Chainer's utilities, which include the following functions:

  • chainer.datasets.split_dataset(dataset, split_at, order=None)
  • chainer.datasets.split_dataset_random(dataset, first_size, seed=None)
  • chainer.datasets.get_cross_validation_datasets(dataset, n_fold, order=None)
  • chainer.datasets.get_cross_validation_datasets_random(dataset, n_fold, seed=None)

Refer to SubDataset for details.

These are useful for separating training data from test data; example usage is as follows.

Here, we load our data as a dataset (which is a subclass of DatasetMixin) and split it into train and test using the chainer.datasets.split_dataset_random function. In the code above, I split the data randomly into 70% training data and 30% test data.

We can also specify the seed argument to fix the random permutation order, which is useful for reproducing an experiment or running prediction code on the same train/test split.

Training code

The total code looks like the following.


[hands on]

Execute the script to train the model. The trained model parameters will be saved to result/mymlp.model.

