Fig 1. Conceptual illustrations of different CIL methods. (a) Conventional methods use all available data (which are imbalanced among classes) to train the model. (b) Recent methods follow this convention but add a fine-tuning step on a balanced subset of all classes. (c) Our proposed Adaptive Aggregation Networks (AANets) introduce a new architecture with a different data strategy: all available data are used to update the parameters of the plastic and stable blocks, while the balanced set of exemplars is used to meta-learn the aggregation weights for these blocks. The key idea is that the meta-learned weights balance the usage of the plastic and stable blocks, i.e., the trade-off between plasticity and stability.
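To make the two data streams in (c) concrete, here is a minimal PyTorch sketch of one training iteration. The names `model.block_params()`, `model.alpha_params()`, and the two loaders are hypothetical placeholders, and the simple alternating update stands in for the bilevel meta-optimization described in the paper:

```python
import torch

def train_one_epoch(model, all_data_loader, exemplar_loader, criterion):
    # Hypothetical accessors: block_params() returns the plastic/stable block
    # parameters; alpha_params() returns the aggregation weights.
    opt_blocks = torch.optim.SGD(model.block_params(), lr=0.1)
    opt_alphas = torch.optim.SGD(model.alpha_params(), lr=0.01)
    for (x_all, y_all), (x_bal, y_bal) in zip(all_data_loader, exemplar_loader):
        # Step 1: update the plastic and stable block parameters on all
        # available (class-imbalanced) data.
        opt_blocks.zero_grad()
        criterion(model(x_all), y_all).backward()
        opt_blocks.step()
        # Step 2: update the aggregation weights on the balanced exemplar
        # set (a one-step stand-in for the meta-learning update).
        opt_alphas.zero_grad()
        criterion(model(x_bal), y_bal).backward()
        opt_alphas.step()
```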
Fig 2. An example architecture of AANets with three levels of residual blocks. At each level, we compute the feature maps from a stable block (blue) and a plastic block (orange), aggregate the two sets of maps with meta-learned weights, and feed the resulting maps to the next level. The outputs of the final level are used to train the classifiers.
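A minimal sketch of one such level in PyTorch, assuming the two residual blocks are passed in as modules (the block definitions and the initialization of the weights here are illustrative, not the paper's exact implementation):

```python
import torch
import torch.nn as nn

class AggregationLevel(nn.Module):
    """One AANets level: a stable block and a plastic block whose feature
    maps are mixed by learnable aggregation weights."""
    def __init__(self, stable_block: nn.Module, plastic_block: nn.Module):
        super().__init__()
        self.stable = stable_block    # favors stability (old classes)
        self.plastic = plastic_block  # favors plasticity (new classes)
        # Aggregation weights; in the paper these are meta-learned on the
        # balanced exemplar set rather than trained with the block parameters.
        self.alpha = nn.Parameter(torch.tensor([0.5, 0.5]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Weighted sum of the two blocks' feature maps, which is then fed
        # to the next level (or to the classifiers at the final level).
        return self.alpha[0] * self.stable(x) + self.alpha[1] * self.plastic(x)
```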
Table 1. Average incremental accuracies (%) of four state-of-the-art methods w/ and w/o our AANets as a plug-in architecture. The upper block lists comparable results reported in related works.
Please cite our paper if it helps your work:
@inproceedings{Liu2021AANets,
  author    = {Liu, Yaoyao and Schiele, Bernt and Sun, Qianru},
  title     = {Adaptive Aggregation Networks for Class-Incremental Learning},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}