Piggyback
Piggyback is a parameter-isolation-based continual learning method that prevents forgetting by learning a binary mask over the network parameters for each task. For details, see the original paper.
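To illustrate the core idea, here is a minimal sketch of the masking mechanism itself, not the BeGin API; the names `backbone_weight`, `real_mask`, and `threshold`, as well as the threshold value, are illustrative assumptions:

```python
import torch

# A frozen, pretrained weight is gated element-wise by a hard binary mask
# obtained by thresholding a learnable real-valued mask (illustrative only).
backbone_weight = torch.randn(4, 4)                       # pretrained weight, kept frozen
real_mask = torch.full((4, 4), 1e-2, requires_grad=True)  # learnable real-valued mask
threshold = 5e-3                                          # assumed threshold value

binary_mask = (real_mask > threshold).float()             # hard mask used in the forward pass
effective_weight = backbone_weight * binary_mask          # only unmasked weights contribute
```

Because the thresholding is non-differentiable, training updates the real-valued mask with a straight-through estimator, as sketched under afterInference below.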
Node-level Problems
- class NCTaskILPiggybackTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
- afterInference(results, model, optimizer, _curr_batch, training_states)[source]
The event function to execute some processes right after the inference step (for training). We recommend performing backpropagation in this event function. In our implementation, we compute the gradients for the real-valued masks and restore the masked parameters (see the sketch after this entry).
- Parameters:
results (dict) – the returned dictionary from the event function inference.
model (torch.nn.Module) – the current trained model.
optimizer (torch.optim.Optimizer) – the current optimizer function.
curr_batch (object) – the data (or minibatch) for the current iteration.
curr_training_states (dict) – the dictionary containing the current training states.
- Returns:
A dictionary containing the information from the results.
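As a rough illustration of this step, the following hypothetical sketch (not the BeGin implementation) backpropagates the loss, routes each weight gradient to its real-valued mask with a straight-through estimator, and restores the unmasked weights; `real_masks` and the `backups` dictionary returned by the pre-inference sketch under beforeInference below are assumptions:

```python
import torch

def backward_to_masks(loss, model, real_masks, backups):
    """Hypothetical post-inference step: backpropagate, route each weight
    gradient to its real-valued mask, and restore the unmasked parameters."""
    loss.backward()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in real_masks and param.grad is not None:
                # Straight-through estimator: treat the hard thresholding as
                # identity, so the mask gradient is the weight gradient scaled
                # by the original (frozen) weight values.
                real_masks[name].grad = param.grad * backups[name]
                param.data.copy_(backups[name])  # restore unmasked weights
                param.grad = None                # the backbone itself stays frozen
```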
- beforeInference(model, optimizer, _curr_batch, training_states)[source]
The event function to execute some processes right before inference (for training).
Before each inference step, Piggyback obtains binarized/ternarized masks from the real-valued masks by thresholding and applies them to the weights (see the sketch after this entry).
- Parameters:
model (torch.nn.Module) – the current trained model.
optimizer (torch.optim.Optimizer) – the current optimizer function.
curr_batch (object) – the data (or minibatch) for the current iteration.
curr_training_states (dict) – the dictionary containing the current training states.
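A minimal sketch of such a pre-inference step, assuming a hypothetical `real_masks` dictionary that maps parameter names to learnable real-valued masks (the BeGin internals may differ):

```python
import torch

def apply_masks(model, real_masks, threshold=5e-3):
    """Hypothetical pre-inference step: binarize each real-valued mask and
    multiply it into the matching parameter, keeping a backup so the
    unmasked weights can be restored after the backward pass."""
    backups = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in real_masks:
                backups[name] = param.data.clone()               # save unmasked weights
                binary = (real_masks[name] > threshold).float()  # hard binary mask
                param.data.mul_(binary)                          # masked weights for forward
    return backups
```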
- inference(model, _curr_batch, training_states)[source]
The event function to execute the inference step. For task-IL, we additionally need to consider the task information during inference (see the sketch after this entry).
- Parameters:
model (torch.nn.Module) – the current trained model.
curr_batch (object) – the data (or minibatch) for the current iteration.
curr_training_states (dict) – the dictionary containing the current training states.
- Returns:
A dictionary containing the inference results, such as prediction result and loss.
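A hypothetical sketch of task-IL inference, assuming the task identity is given as a list of class indices `task_classes` (the actual batch format in BeGin may differ):

```python
import torch

def task_il_inference(model, x, task_classes):
    """Hypothetical task-IL inference: compute logits, then keep only the
    logits of the classes that belong to the current task, since task-IL
    assumes the task identity is known at inference time."""
    logits = model(x)
    masked = torch.full_like(logits, float('-inf'))
    masked[:, task_classes] = logits[:, task_classes]  # restrict to the task's classes
    return masked
```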
- initTrainingStates(scenario, model, optimizer)[source]
The event function to initialize the dictionary for storing training states (i.e., intermediate results).
- Parameters:
scenario (begin.scenarios.common.BaseScenarioLoader) – the given ScenarioLoader to the trainer
model (torch.nn.Module) – the given model to the trainer
optimizer (torch.optim.Optimizer) – the optimizer generated from the given optimizer_fn
- Returns:
Initialized training state (dict).
- processAfterEachIteration(curr_model, curr_optimizer, curr_training_states, curr_iter_results)[source]
The event function to execute some processes at the end of every epoch. Whether to continue training is determined by the return value of this function: if the returned value is False, the trainer stops training the current model on the current task.
In this event function, our implementation synchronizes the learning rates of the main optimizer and the optimizer for the masks (see the sketch after this entry).
Note
This function is called at the end of every epoch, while the event function processAfterTraining is called only when the learning on the current task has ended.
- Parameters:
curr_model (torch.nn.Module) – the current trained model.
curr_optimizer (torch.optim.Optimizer) – the current optimizer function.
curr_training_states (dict) – the dictionary containing the current training states.
curr_iter_results (dict) – the dictionary containing the training/validation results of the current epoch.
- Returns:
A boolean value. If the returned value is False, the trainer stops training the current model in the current task.
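A minimal sketch of the learning-rate synchronization, assuming a separate hypothetical `mask_optimizer` for the real-valued masks:

```python
def sync_learning_rates(main_optimizer, mask_optimizer):
    """Hypothetical sketch: copy the (possibly scheduler-adjusted) learning
    rate of the main optimizer into the optimizer for the masks, so both
    decay in lockstep."""
    lr = main_optimizer.param_groups[0]['lr']
    for group in mask_optimizer.param_groups:
        group['lr'] = lr
```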
- processAfterTraining(task_id, curr_dataset, curr_model, curr_optimizer, curr_training_states)[source]
The event function to execute some processes after training the current task.
In this event function, our implementation stores the learned masks for each task (see the sketch after this entry).
- Parameters:
task_id (int) – the index of the current task.
curr_dataset (object) – The dataset for the current task.
curr_model (torch.nn.Module) – the current trained model.
curr_optimizer (torch.optim.Optimizer) – the current optimizer function.
curr_training_states (dict) – the dictionary containing the current training states.
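A hypothetical sketch of storing the masks, assuming a `mask_bank` dictionary keyed by task id and the same illustrative threshold as above:

```python
def store_task_masks(task_id, real_masks, mask_bank, threshold=5e-3):
    """Hypothetical sketch: freeze the mask of the finished task by
    binarizing the real-valued masks and storing the result under the
    task id."""
    mask_bank[task_id] = {
        name: (mask.detach() > threshold).float()
        for name, mask in real_masks.items()
    }
```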
- processBeforeTraining(task_id, curr_dataset, curr_model, curr_optimizer, curr_training_states)[source]
The event function to execute some processes before training. In this function, the masks for the network parameters are initialized (see the sketch after this entry).
- Parameters:
task_id (int) – the index of the current task
curr_dataset (object) – The dataset for the current task.
curr_model (torch.nn.Module) – the current trained model.
curr_optimizer (torch.optim.Optimizer) – the current optimizer function.
curr_training_states (dict) – the dictionary containing the current training states.
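A hypothetical sketch of the mask initialization, assuming one real-valued mask per weight matrix (masking only parameters with more than one dimension is an assumption, not necessarily BeGin's choice):

```python
import torch

def init_real_masks(model, init_value=1e-2):
    """Hypothetical sketch: create one learnable real-valued mask per weight
    matrix, initialized to a small positive constant so that the initial
    binary mask keeps every weight."""
    return {
        name: torch.full_like(param, init_value, requires_grad=True)
        for name, param in model.named_parameters()
        if param.dim() > 1  # assumption: mask weight matrices, not biases
    }
```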
- processEvalIteration(model, _curr_batch)[source]
The event function to handle every evaluation iteration.
Piggyback has to use a different mask for each task when evaluating performance (see the sketch after this entry).
- Parameters:
model (torch.nn.Module) – the current trained model.
curr_batch (object) – the data (or minibatch) for the current iteration.
task_id (int) – the id of a task
- Returns:
A dictionary containing the outcomes (stats) during the evaluation iteration.
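A hypothetical sketch of evaluating with a stored per-task mask, reusing the illustrative `mask_bank` from the processAfterTraining sketch above:

```python
import torch

def eval_with_task_mask(model, batch, task_id, mask_bank):
    """Hypothetical sketch: swap in the stored binary mask of the requested
    task, run the forward pass, then restore the original weights."""
    backups = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in mask_bank[task_id]:
                backups[name] = param.data.clone()
                param.data.mul_(mask_bank[task_id][name])  # apply the task's mask
        preds = model(batch)
        for name, param in model.named_parameters():
            if name in backups:
                param.data.copy_(backups[name])            # undo the masking
    return preds
```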