MNIST inference code


We have already learned how to write training code in Chainer; the last task is to use this trained model for inference (prediction) on a test input MNIST image. The inference code usually has the following structure:

1. Prepare input data
2. Instantiate the trained model
3. Load the trained model
4. Feed the input data into the loaded model to get the inference result

You have already learned all the necessary pieces, so this is easy. See inference_mnist.py for the source code.

Prepare input data

For MNIST, this can be done in one line.
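For instance, with Chainer's built-in dataset loader (a minimal sketch; variable names are illustrative):

```python
import chainer

# Download (on first use) and load MNIST; each entry is an (image, label) pair.
train, test = chainer.datasets.get_mnist()
```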

Instantiate the trained model and load its parameters

Here, note that the model can be loaded only after instantiating it. This model must have […]
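A minimal sketch of this pattern (the MLP architecture and file path are illustrative; they must match whatever was used during training and saving):

```python
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import serializers

# Illustrative predictor; the architecture must match the trained model.
class MLP(chainer.Chain):
    def __init__(self, n_units=50, n_out=10):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_units)
            self.l2 = L.Linear(n_units, n_out)

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))

# Instantiate first, then load the saved parameters into the instance.
model = MLP()
serializers.load_npz('result/mlp.model', model)  # path is illustrative
```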

Continue reading →

Chainer family

Recently, several sub-libraries for Chainer have been released.

ChainerRL (RL: Reinforcement Learning)

A deep reinforcement learning library. Citing from http://chainer.org/general/2017/02/22/ChainerRL-Deep-Reinforcement-Learning-Library.html, recent state-of-the-art deep reinforcement learning algorithms are implemented, including:

- A3C (Asynchronous Advantage Actor-Critic)
- ACER (Actor-Critic with Experience Replay) (only the discrete-action version for now)
- Asynchronous N-step Q-learning
- DQN (including Double DQN, Persistent Advantage Learning (PAL), Double PAL, Dynamic Policy Programming (DPP))
- DDPG (Deep Deterministic Policy Gradients) (including SVG(0))
- PGT (Policy Gradient Theorem)

How to install
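Presumably via pip (a sketch; check the repository README for the authoritative command):

```sh
pip install chainerrl
```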

github repository: ChainerRL – Deep Reinforcement Learning Library

ChainerCV (CV: Computer Vision)

An image processing library for deep learning training. Common data-augmentation methods are implemented.

How to install
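Again presumably via pip (a sketch; see the repository for the authoritative command):

```sh
pip install chainercv
```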

github repository, document

ChainerMN (MN: Multi Node) […]

Continue reading →

Chainer version 2 – updated part

Chainer version 2 is planned to be released in April 2017. A pre-release version is already available; install it with this command.
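Presumably pip's pre-release flag (a sketch; see the reference linked below for the exact command):

```sh
pip install chainer --pre
```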

The biggest change is that CuPy (roughly, a GPU version of NumPy) becomes independent and is provided as a separate package.

Reference: Chainer v2 alpha from Seiya Tokui
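This means GPU users would install it separately (a sketch, assuming the package name cupy):

```sh
pip install cupy
```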

Continue reading →

Writing organized, reusable, clean training code using Trainer module


Training code abstraction with Trainer

Until now, I implemented the training code in a "primitive" way to explain what kind of operations are going on in deep learning training (※). However, the code can be written in a much cleaner way using the Trainer modules in Chainer.

※ The Trainer modules have been available since version 1.11, and some open source projects are implemented without Trainer. So knowing the training implementation without the Trainer module also helps in understanding those codebases.

Motivation for using Trainer

We can notice there are many "typical" operations widely used in machine learning, for example:

- Iterating minibatch training, with minibatches sampled randomly
- Separate train […]
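A minimal sketch of a Trainer-based training loop (model, hyperparameters, and output directory are illustrative, not the post's exact code):

```python
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions

# Illustrative predictor network.
class MLP(chainer.Chain):
    def __init__(self, n_units=50, n_out=10):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_units)
            self.l2 = L.Linear(n_units, n_out)

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))

train, test = chainer.datasets.get_mnist()
train_iter = chainer.iterators.SerialIterator(train, batch_size=100)

# Classifier wraps the predictor to compute the softmax cross entropy loss.
model = L.Classifier(MLP())
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)

# Updater + Trainer replace the hand-written minibatch loop.
updater = training.StandardUpdater(train_iter, optimizer)
trainer = training.Trainer(updater, (20, 'epoch'), out='result')
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(['epoch', 'main/loss', 'main/accuracy']))
trainer.run()
```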

Continue reading →

Design patterns for defining model

Machine learning consists of a training phase and a predict/inference phase, and what the model needs to calculate differs between them:

- Training phase: calculate the loss (between the output and the target)
- Predict/Inference phase: calculate the output

To manage this, I often see the following 2 patterns.

Predictor – Classifier framework

See train_mnist_2_predictor_classifier.py (train_mnist_1_minimum.py and train_mnist_4_trainer.py are also implemented in the Predictor – Classifier framework). Two Chain classes, "Predictor" and "Classifier", are used in this framework.

- Training phase: the Predictor's output is fed into the Classifier to calculate the loss.
- Predict/Inference phase: only the Predictor's output is used.

Predictor

The Predictor simply calculates the output based on the input, as in the sketch below.
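A minimal sketch of such a Predictor chain (layer sizes are illustrative):

```python
import chainer
import chainer.functions as F
import chainer.links as L

# Calculates the output y from the input x; no loss computation here.
class Predictor(chainer.Chain):
    def __init__(self, n_units=50, n_out=10):
        super(Predictor, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_units)
            self.l2 = L.Linear(n_units, n_out)

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))
```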

 

Classifier

The Classifier "wraps" the Predictor's output y to […]
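The typical shape of such a Classifier chain (in the spirit of chainer.links.Classifier; a sketch, not the post's exact code):

```python
import chainer
import chainer.functions as F

# Wraps a predictor: turns its output y into a loss against the target t.
class Classifier(chainer.Chain):
    def __init__(self, predictor):
        super(Classifier, self).__init__()
        with self.init_scope():
            self.predictor = predictor

    def __call__(self, x, t):
        y = self.predictor(x)
        loss = F.softmax_cross_entropy(y, t)
        chainer.report({'loss': loss}, self)
        return loss
```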

Continue reading →

Refactoring MNIST training

In the previous section, we learned the minimum implementation (train_mnist_1_minimum.py) of the training code for MNIST. Now, let's refactor the code. See train_mnist_2_predictor_classifier.py.

argparse

argparse is used to make the script configurable: the user can pass variables when executing the code. Code along the following lines is added to the training code.
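A sketch of the kind of argument definitions added (the option names match how they are used below; defaults are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description='Chainer MNIST example')
parser.add_argument('--batchsize', '-b', type=int, default=100,
                    help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
                    help='Number of sweeps over the dataset to train')
parser.add_argument('--gpu', '-g', type=int, default=-1,
                    help='GPU ID (negative value indicates CPU)')
args = parser.parse_args()
```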

Then, these variables are configurable when executing the code from the console, and they can be accessed as args.xxx (e.g. args.batchsize, args.epoch, etc.). For example, to set GPU device number 0, you can use the short option, the long option, or the long option with "="; all three forms work the same, as sketched below.
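A sketch, assuming the script name from above:

```sh
python train_mnist_2_predictor_classifier.py -g 0
python train_mnist_2_predictor_classifier.py --gpu 0
python train_mnist_2_predictor_classifier.py --gpu=0
```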

You can also see which options are available using the --help option, or simply -h.
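For example:

```sh
python train_mnist_2_predictor_classifier.py --help
```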

Reference: argparse documentation […]

Continue reading →