API

cyclic_lr

class keras_one_cycle_clr.cyclic_lr.CLR(cyc, lr_range, momentum_range, phase_one_fraction=0.3, amplitude_fn=None, reset_on_train_begin=True, record_frq=10, verbose=False)

Bases: tensorflow.python.keras.callbacks.Callback

Based on https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/callbacks/cyclical_learning_rate.py.

Parameters
  • cyc – an integer, the number of cycles.

  • lr_range – a tuple of starting (usually minimum) lr value and maximum (peak) lr value.

  • momentum_range – a tuple of momentum values.

  • phase_one_fraction – the fraction of one cycle spent in phase I (increasing lr). Must be between 0 and 1.

  • amplitude_fn – a function to modify a cycle’s amplitude (lr_max - lr_min). For example, amplitude_fn = lambda x: np.power(0.5, x) will halve the amplitude every cycle.

  • reset_on_train_begin – True or False to reset counters when training begins.

  • record_frq – a positive integer, the frequency (in batches) at which training loss is recorded.

  • verbose – True or False to print progress.
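
A minimal usage sketch for the class above. The toy model, data, batch size, and epoch count are illustrative assumptions; only the CLR arguments come from the signature documented here:

    import numpy as np
    import tensorflow as tf
    from keras_one_cycle_clr.cyclic_lr import CLR

    # Toy data and model, used only to make the example self-contained.
    x_train = np.random.rand(1000, 10)
    y_train = np.random.rand(1000, 1)
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9), loss="mse")

    # Three cycles; lr climbs from 1e-3 to 1e-2 and falls back while momentum
    # moves inversely; the amplitude is halved on each successive cycle.
    clr = CLR(cyc=3,
              lr_range=(1e-3, 1e-2),
              momentum_range=(0.95, 0.85),
              amplitude_fn=lambda x: np.power(0.5, x))

    model.fit(x_train, y_train, batch_size=32, epochs=12, callbacks=[clr])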

property cycle_momentum
find_n_epoch(dataset, batch_size=None)

A method to find the number of epochs to train in the sweep.

Parameters
  • dataset – if the training data is an ndarray (used with model.fit), dataset is x_train; if the training data is a generator (used with model.fit_generator), dataset is the generator instance.

  • batch_size – Needed only if dataset is x_train.

Return epochs

the number of epochs needed to perform a learning rate sweep.
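
For example, continuing the sketch above, the epoch budget can be derived from the data rather than hard-coded (x_train, the batch size, and the clr instance are the assumptions from that sketch):

    # Derive the number of epochs needed to complete the schedule.
    n_epochs = clr.find_n_epoch(x_train, batch_size=32)
    model.fit(x_train, y_train, batch_size=32, epochs=n_epochs, callbacks=[clr])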

get_current_lr(n_iter=None)

A helper function to calculate the current learning rate from the current iteration number.

Return lr

the current learning rate.

get_current_momentum(n_iter=None)

A helper function to calculate the current momentum from the current iteration number.

Return momentum

the current momentum.

on_epoch_end(epoch, logs={})
on_train_batch_begin(batch, logs={})
on_train_batch_end(batch, logs={})
on_train_begin(logs={})
reset()
test_run(n_iter=None)

Visualize values of learning rate (and momentum) as a function of iteration (batch).

Parameters

n_iter – the number of iterations to simulate. If None, 1000 is used.
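
A quick way to sanity-check a schedule before committing to a run, assuming a configured clr instance as in the earlier sketch:

    # Dry-run the schedule over 1000 simulated iterations; no training happens.
    clr.test_run(n_iter=1000)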

lr_range_test

class keras_one_cycle_clr.lr_range_test.LrRangeTest(lr_range=(1e-05, 10), wd_list=[], steps=100, batches_per_step=5, threshold_multiplier=5, validation_data=None, validation_batch_size=16, batches_per_val=10, verbose=False)

Bases: tensorflow.python.keras.callbacks.Callback

A callback class for performing a learning rate range test to find a suitable learning rate.

Parameters
  • lr_range – a tuple of lower and upper bounds of learning rate.

  • wd_list – a list of weight decay values over which to perform a grid search.

  • steps – the number of learning rate steps in the range test.

  • batches_per_step – the number of batches over which to average the loss at each learning rate step.

  • threshold_multiplier – a multiplier of the lowest training loss encountered so far; the range test terminates early once the loss exceeds this threshold.

  • validation_data – either a tuple (x_test, y_test) or a generator. Useful for the weight decay (wd) grid search.

  • validation_batch_size – a batch size when evaluating validation loss.

  • batches_per_val – the number of batches used to average the validation loss.

  • verbose – True or False, whether to print progress details.
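
A sketch of a range test run, reusing the toy model and data from the CLR example above (those remain illustrative assumptions):

    from keras_one_cycle_clr.lr_range_test import LrRangeTest

    lrt = LrRangeTest(lr_range=(1e-5, 10),
                      steps=100,
                      batches_per_step=5,
                      verbose=False)

    # The test consumes steps * batches_per_step batches in total, so let
    # find_n_epoch (documented below) translate that into an epoch count.
    n_epochs = lrt.find_n_epoch(x_train, batch_size=32)
    model.fit(x_train, y_train, batch_size=32, epochs=n_epochs, callbacks=[lrt])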

find_n_epoch(dataset, batch_size=None)

A method to find the number of epochs to train in the sweep.

Parameters
  • dataset – if the training data is an ndarray (used with model.fit), dataset is x_train; if the training data is a generator (used with model.fit_generator), dataset is the generator instance.

  • batch_size – Needed only if dataset is x_train.

Return epochs

the number of epochs needed to perform a learning rate sweep.

on_train_batch_begin(batch, logs)
on_train_batch_end(batch, logs)
on_train_begin(logs={})
plot(set='train', x_scale='log', y_scale='linear', ma=True, window=5, **kwargs)

Plot the lr range test result.

Parameters
  • set – either “train” or “valid”. If “valid”, validation_data must not be None.

  • x_scale – scale for the x axis, either “log” or “linear”.

  • y_scale – scale for the y axis, either “log” or “linear”.

  • ma – True or False to use a moving window average.

  • window – an integer window size for the averaging.

  • kwargs – valid kwargs to the [pyplot.plot](https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.plot.html) function.
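
For instance, after the range test above, the smoothed training-loss curve can be inspected on a logarithmic learning rate axis:

    # Moving-window average (window=5) smooths the raw per-step losses.
    lrt.plot(set="train", x_scale="log", y_scale="linear", ma=True, window=5)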

one_cycle

class keras_one_cycle_clr.one_cycle.OneCycle(lr_range, momentum_range=None, phase_one_fraction=0.3, reset_on_train_begin=True, record_frq=10, verbose=False)

Bases: tensorflow.python.keras.callbacks.Callback

A callback class for one-cycle policy training.

Parameters
  • lr_range – a tuple of starting (usually minimum) lr value and maximum (peak) lr value.

  • momentum_range – a tuple of momentum values.

  • phase_one_fraction – the fraction of one cycle spent in phase I (increasing lr). Must be between 0 and 1.

  • reset_on_train_begin – True or False to reset counters when training begins.

  • record_frq – a positive integer, the frequency (in batches) at which training loss is recorded.

  • verbose – True or False to print progress.
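
A minimal sketch, again reusing the toy model and data from the CLR example; the epoch count is an arbitrary assumption standing in for a full training run:

    from keras_one_cycle_clr.one_cycle import OneCycle

    # Phase I (30% of iterations) warms lr up to the peak; the remainder
    # anneals it back down while momentum moves inversely.
    ocp = OneCycle(lr_range=(1e-3, 1e-2),
                   momentum_range=(0.95, 0.85),
                   phase_one_fraction=0.3)

    model.fit(x_train, y_train, batch_size=32, epochs=10, callbacks=[ocp])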

property cycle_momentum
get_current_lr(n_iter=None)

A helper function to calculate the current learning rate from the current iteration number.

Return lr

the current learning rate.

get_current_momentum(n_iter=None)

A helper function to calculate the current momentum from the current iteration number.

Return momentum

the current momentum.

on_epoch_end(epoch, logs={})
on_train_batch_begin(batch, logs={})
on_train_batch_end(batch, logs={})
on_train_begin(logs={})
test_run(n_iter=None)

Visualize values of learning rate (and momentum) as a function of iteration (batch).

Parameters

n_iter – the number of iterations to simulate. If None, 1000 is used.

utils

class keras_one_cycle_clr.utils.History(history=None)

Bases: object

A custom class that helps extract log data from keras.callbacks.History objects.

Parameters

history – a keras.callbacks.History object or None.

keras_one_cycle_clr.utils.concatenate_history(hlist, reindex_epoch=False)

A helper function to concatenate training history objects (keras.callbacks.History) into a single one, with the help of this module’s History class.

Parameters
  • hlist – a list of keras.callbacks.History objects to concatenate.

  • reindex_epoch – True or False, whether to reindex epoch counters into a single increasing sequence.

Return his

an instance of the History class that contains the concatenated epoch and training history information.
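
For example, to stitch together the histories of two consecutive fit calls (the toy model and data are the earlier assumptions; h1 and h2 are the History objects model.fit returns):

    from keras_one_cycle_clr.utils import concatenate_history, plot_from_history

    h1 = model.fit(x_train, y_train, batch_size=32, epochs=5)
    h2 = model.fit(x_train, y_train, batch_size=32, epochs=5)

    # Merge the two runs; reindex_epoch renumbers epochs 0..9 instead of 0..4 twice.
    his = concatenate_history([h1, h2], reindex_epoch=True)
    plot_from_history(his)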

keras_one_cycle_clr.utils.cuda_release_memory()

Force CUDA to release GPU memory by closing the device context.

Return cuda

numba’s cuda module.

keras_one_cycle_clr.utils.moving_window_avg(x, window=5)

Return a moving-window average.

Parameters
  • x – a numpy array.

  • window – an integer, the number of data points in the window.
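
For example:

    import numpy as np
    from keras_one_cycle_clr.utils import moving_window_avg

    noisy = np.random.rand(100)
    smooth = moving_window_avg(noisy, window=5)  # same smoothing as plot's ma option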

keras_one_cycle_clr.utils.plot_from_history(history)

Plot losses in training history.

Parameters

history – a keras.callbacks.History or (this module’s) History object.

keras_one_cycle_clr.utils.reset_keras(per_process_gpu_memory_fraction=1.0)

Reset Keras session and set GPU configuration as well as collect unused memory. This is adapted from [jaycangel’s post on fastai forum](https://forums.fast.ai/t/how-could-i-release-gpu-memory-of-keras/2023/18).

Calling this before any training clears the Keras session, so a Keras model must be redefined and compiled again afterwards. This is useful during a hyperparameter scan or K-fold cross-validation, when model training is invoked several times.

Parameters

per_process_gpu_memory_fraction – tensorflow’s config.gpu_options.per_process_gpu_memory_fraction
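
A sketch of how this might sit inside a cross-validation loop; the fold loop and the build_model factory are hypothetical:

    from keras_one_cycle_clr.utils import reset_keras

    for fold in range(5):
        # Clear the previous session so each fold starts from scratch; the
        # model must be rebuilt and recompiled after this call.
        reset_keras(per_process_gpu_memory_fraction=0.5)
        model = build_model()  # hypothetical model factory
        model.fit(x_train, y_train, batch_size=32, epochs=5)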

keras_one_cycle_clr.utils.save_history_to_csv(history, filepath)

Save a training history to a CSV file.

Parameters
  • history – a History callback instance from a Model instance (e.g., as returned by model.fit).

  • filepath – a string filepath.
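
For example, using the concatenated history from the concatenate_history sketch above:

    from keras_one_cycle_clr.utils import save_history_to_csv

    save_history_to_csv(his, "training_history.csv")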

keras_one_cycle_clr.utils.set_lr(optimizer, lr)

A helper to set the learning rate of a Keras optimizer.

Parameters
  • optimizer – Keras optimizer

  • lr – value of learning rate.

keras_one_cycle_clr.utils.set_momentum(optimizer, mom_val)

A helper to set the momentum of a Keras optimizer.

Parameters
  • optimizer – Keras optimizer

  • mom_val – value of momentum.
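
Both setters mutate a compiled model's optimizer in place; a small sketch, assuming the compiled model from the earlier examples:

    from keras_one_cycle_clr.utils import set_lr, set_momentum

    # Manually override the optimizer's hyperparameters between runs.
    set_lr(model.optimizer, 1e-3)
    set_momentum(model.optimizer, 0.9)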