Bare Model
The Bare model follows the incremental learning schemes, but no CL technique is applied: the model is simply fine-tuned on each incoming task in sequence (see the sketch below).
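To make the "bare" behavior concrete, here is a minimal, hedged sketch of what these trainers reduce to: plain sequential fine-tuning with no replay, regularization, or parameter isolation. The tensors and the `tasks` list are illustrative stand-ins, not part of the library's API.

```python
# Illustrative sketch of the "bare" baseline: sequential fine-tuning, no CL technique.
# The per-task data here is synthetic; the real trainers receive data via `scenario`.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 10)                       # placeholder classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each task supplies its own (features, labels); earlier tasks are never revisited.
tasks = [(torch.randn(100, 16), torch.randint(0, 10, (100,))) for _ in range(3)]

for task_id, (x, y) in enumerate(tasks):
    for epoch in range(10):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)           # plain task loss; no replay or penalty
        loss.backward()
        optimizer.step()
```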
Node-level Problems
- class NCClassILBareMinibatchTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
This trainer has the same behavior as NCMinibatchTrainer.
- class NCClassILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
This trainer has the same behavior as NCTrainer.
- class NCDomainILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
This trainer has the same behavior as NCTrainer.
- class NCTaskILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
- inference(model, _curr_batch, training_states)[source]
The event function to execute the inference step. For task-IL, the task information must additionally be considered during inference, as illustrated in the sketch after this entry.
- Parameters:
model (torch.nn.Module) – the current trained model.
curr_batch (object) – the data (or minibatch) for the current iteration.
curr_training_states (dict) – the dictionary containing the current training states.
- Returns:
A dictionary containing the inference results, such as the prediction results and the loss.
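In task-IL, predictions should only compete among the classes of the task the sample belongs to. The following is a hedged sketch of that idea, not the library's actual implementation; the `task_mask` tensor (a boolean mask over the full class set) and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def task_il_inference(model, features, labels, task_mask):
    """Restrict scoring to the classes of the given task (illustrative only)."""
    logits = model(features)                           # (num_nodes, num_total_classes)
    masked = logits.masked_fill(~task_mask, -1e9)      # suppress other tasks' classes
    loss = F.cross_entropy(masked, labels)
    preds = masked.argmax(dim=-1)
    return {'preds': preds, 'loss': loss}
```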
Link-level Problems
- class LCTaskILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
- inference(model, _curr_batch, training_states)[source]
The event function to execute the inference step. For task-IL, the task information must additionally be considered during inference.
- Parameters:
model (torch.nn.Module) – the current trained model.
curr_batch (object) – the data (or minibatch) for the current iteration.
curr_training_states (dict) – the dictionary containing the current training states.
- Returns:
A dictionary containing the inference results, such as the prediction results and the loss.
- prepareLoader(_curr_dataset, curr_training_states)[source]
The event function to generate dataloaders from the given dataset for the current task. For task-IL, the task information must additionally be considered (see the sketch after this entry).
- Parameters:
curr_dataset (object) – the dataset for the current task.
curr_training_states (dict) – the dictionary containing the current training states.
- Returns:
A tuple of three dataloaders. The trainer uses the first, second, and third dataloaders for training, validation, and testing, respectively.
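A hedged sketch of the three-dataloader contract described above, assuming a link-level dataset already split into train/val/test index tensors and carrying a per-edge task id so that task information travels with every batch. All field names (`edges`, `labels`, `task_ids`, `splits`) are hypothetical, not the library's API.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def prepare_loaders(edges, labels, task_ids, splits, batch_size=128):
    """Return (train, val, test) loaders whose batches carry task information (illustrative only)."""
    loaders = []
    for split in ('train', 'val', 'test'):
        idx = splits[split]                                        # LongTensor of indices for this split
        ds = TensorDataset(edges[idx], labels[idx], task_ids[idx])  # task id rides along with each edge
        loaders.append(DataLoader(ds, batch_size=batch_size, shuffle=(split == 'train')))
    return tuple(loaders)
```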
- class LCTimeILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
This trainer has the same behavior as LCTrainer.
Graph-level Problems
- class GCDomainILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
This trainer has the same behavior as GCClassILBareTrainer.
- class GCTaskILBareTrainer(model, scenario, optimizer_fn, loss_fn, device, **kwargs)[source]
- inference(model, _curr_batch, training_states)[source]
The event function to execute the inference step. For task-IL, the task information must additionally be considered during inference; a task-IL evaluation sketch follows this entry.
- Parameters:
model (torch.nn.Module) – the current trained model.
curr_batch (object) – the data (or minibatch) for the current iteration.
curr_training_states (dict) – the dictionary containing the current training states.
- Returns:
A dictionary containing the inference results, such as the prediction results and the loss.
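Putting the pieces together, task-IL evaluation after bare sequential training scores each task with its own class mask. The sketch below follows the same illustrative assumptions as above (per-task features, labels, and a boolean class mask) and is not the library's evaluation code.

```python
import torch

@torch.no_grad()
def evaluate_task_il(model, tasks_eval):
    """tasks_eval: list of (features, labels, class_mask) tuples, one per task (illustrative only)."""
    accuracies = []
    for x, y, mask in tasks_eval:
        logits = model(x).masked_fill(~mask, -1e9)     # only this task's classes compete
        accuracies.append((logits.argmax(dim=-1) == y).float().mean().item())
    return accuracies                                   # per-task accuracy after all tasks are learned
```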