Fast Benchmarking of Accuracy vs. Training Time with Cyclic Learning Rates
Abstract
A multiplicative cyclic learning rate schedule allows the construction and evaluation of accuracy vs. training time tradeoff curves within a single training run, aiding the assessment of different training methods.
Benchmarking the tradeoff between neural network accuracy and training time is computationally expensive. Here we show how a multiplicative cyclic learning rate schedule can be used to construct a tradeoff curve in a single training run. We generate cyclic tradeoff curves for combinations of training methods such as Blurpool, Channels Last, Label Smoothing and MixUp, and highlight how these cyclic tradeoff curves can be used to evaluate the effects of algorithmic choices on network training efficiency.
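To make the idea concrete, below is a minimal sketch of a multiplicative cyclic learning rate schedule of the kind described in the abstract. The specific peak learning rate, decay factor, initial cycle length, and cycle growth factor are hypothetical illustration values, not the paper's hyperparameters; the key pattern is that the learning rate decays multiplicatively within each cycle and resets at the start of the next, with an evaluation at each cycle boundary giving one point on the accuracy vs. training time curve.

```python
# Minimal sketch of a multiplicative cyclic learning rate schedule.
# Illustrative only: peak_lr, decay, first_cycle, and growth are
# hypothetical values, not the paper's exact settings.

def cyclic_multiplicative_lr(step, peak_lr=0.5, decay=0.98,
                             first_cycle=500, growth=2):
    """Return the learning rate at a given training step.

    Training is split into cycles whose lengths grow geometrically
    (first_cycle, first_cycle * growth, ...). Within each cycle the
    learning rate starts at peak_lr and is multiplied by `decay`
    every step; at the start of the next cycle it resets to peak_lr.
    Evaluating the model at the end of each cycle yields one point
    on an accuracy vs. training time tradeoff curve.
    """
    cycle_len = first_cycle
    cycle_start = 0
    while step >= cycle_start + cycle_len:
        cycle_start += cycle_len
        cycle_len *= growth
    steps_into_cycle = step - cycle_start
    return peak_lr * (decay ** steps_into_cycle)


if __name__ == "__main__":
    # Show the reset-and-decay pattern around the first cycle boundaries.
    for s in [0, 250, 499, 500, 1000, 1499, 1500]:
        print(s, round(cyclic_multiplicative_lr(s), 4))
```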