Learning Representations for Continual Learning Martha White Assistant Professor University of Alberta Joint work with Khurram Javed


Page 1: Learning Representations for Continual Learning

Learning Representations for Continual Learning

Martha White, Assistant Professor, University of Alberta

Joint work with Khurram Javed

Page 2: Learning Representations for Continual Learning

Goals for the talk

• Motivate the importance of learning an explicit representation for online updating

• Introduce the Online-Aware Meta-Learning objective

• Show how the representations from this objective

• improve prediction performance when learning online

• complement previous strategies for continual learning, like EWC

On arXiv: Meta-Learning Representations for Continual Learning (https://arxiv.org/abs/1905.12588)

Page 3: Learning Representations for Continual Learning

Problem Setting

• A Continual Learning Prediction (CLP) problem has an unending stream of samples

• Correlated sequence for inputs X

• Each Yt depends only on Xt: Yt is distributed according to P(Y | Xt)

[Figure 2: two panels of the parameter space of W. Left: solution manifolds p1, p2, p3 for an unconstrained representation, with the joint-training solution marked. Right: solution manifolds for an ideal representation for continual learning.]

Figure 2: Effect of the representation on continual learning, for a problem where targets are generated from three different distributions p1(Y|x), p2(Y|x) and p3(Y|x). The representation results in different solution manifolds for the three distributions; we depict two different possibilities here. We show the learning trajectory when training incrementally from data generated first by p1, then p2 and p3. On the left, the online updates interfere, jumping between distant points on the manifolds. On the right, the online updates either generalize appropriately (for parallel manifolds) or avoid interference because the manifolds are orthogonal.

regression and classification problems. We analyze the representations learned with our objective and find that they are highly sparse. We then show that this idea complements and improves on other continual learning strategies, like Meta Experience Replay [Riemer et al., 2019], which can learn more effectively from our learned representations.

2 Problem Formulation

A Continual Learning Prediction (CLP) problem consists of an unending stream of samples

(X1, Y1), (X2, Y2), . . . , (Xt, Yt), . . .

for inputs Xt and prediction targets Yt, from sets X and Y respectively.¹ The random vector Yt is sampled according to an unknown distribution p(Y | Xt). We assume the process X1, X2, ..., Xt, ... has a marginal distribution µ : X → [0, ∞), that reflects how often each input is observed. This assumption allows for a variety of correlated sequences. For example, Xt could be sampled from a distribution potentially dependent on past variables Xt−1 and Xt−2. The targets Yt, however, are dependent only on Xt, and not on past Xi.

The goal of the agent is to learn a function f : X → Y, to predict targets y from inputs x. This function can be parameterized, with parameter vectors θ and W, as

fθ,W(x) = gW(φθ(x)) for representation φθ : X → R^d   (1)

with gW : R^d → Y a task-specific prediction function learned on a shared representation φθ(x). For example, if the agent is making m predictions, then we could have W = [w1, ..., wm], with prediction vector gW(φθ(x)) = tanh(φθ(x) W) composed of predictions tanh(φθ(x) wi) for each prediction target i.
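To make Equation 1 concrete, here is a minimal sketch of this parameterization. The dimensions, the ReLU feature map standing in for φθ, and all variable names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10, 8, 3      # input dim, representation dim, number of predictions

theta = 0.1 * rng.normal(size=(n, d))   # representation parameters (stand-in for a deep net)
W = 0.1 * rng.normal(size=(d, m))       # W = [w_1, ..., w_m], one prediction vector per target

def phi(theta, x):
    """Representation phi_theta : X -> R^d (here, a single ReLU layer)."""
    return np.maximum(x @ theta, 0.0)

def f(theta, W, x):
    """Prediction vector g_W(phi_theta(x)) = tanh(phi_theta(x) W)."""
    return np.tanh(phi(theta, x) @ W)

x = rng.normal(size=n)
y_hat = f(theta, W, x)   # one tanh prediction per target i, each in (-1, 1)
```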

A variety of continual problems can be represented by this formulation. One example is an online regression problem, such as predicting the next spatial location for a robot given the current location. Current classification benchmarks in continual learning can also be represented by this CLP formalism. One class is selected, with iid samples of Xt only from data for that class. This sequence is correlated, because the process consistently reveals samples only from one class until switching to another class. The CLP formulation also allows for targets Yt that are dependent on a history of the most recent k observations. This can be obtained by defining each Xt to be the last k observations. The overlap between Xt and Xt−1 does not violate the assumptions on the correlated sequence of inputs. Finally,

¹This definition encompasses the continual learning problem where the tuples also include task descriptors Tt [Lopez-Paz and Ranzato, 2017]. Tt in the tuple (Xt, Tt, Yt) can simply be considered as part of the inputs.


Page 4: Learning Representations for Continual Learning

CLP is General; it encompasses…

• Formulations with task descriptors

• Continual learning classification benchmarks

• observe pairs (X,Y) for each class in order

• Online regression problems

• even RL by considering Yt to be a bootstrapped target

• Settings where Yt depends on the last k observations

• Xt equal to the last k observations, with overlap between Xt and Xt-1

(Xt, Tt, Yt)

Correlated X; Yt ~ P(Y | Xt)
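The class-ordered classification benchmark above can be sketched as a stream generator. The data, class means, and the name `clp_stream` are hypothetical, purely to illustrate the correlated-inputs assumption:

```python
import numpy as np

def clp_stream(data_by_class, samples_per_class, rng):
    """Yield a correlated (X, Y) stream: one class at a time, iid within a class."""
    for label, xs in data_by_class.items():
        for _ in range(samples_per_class):
            x = xs[rng.integers(len(xs))]   # iid sample from the current class only
            yield x, label                   # Y_t depends only on X_t

rng = np.random.default_rng(0)
data = {0: rng.normal(0.0, 1.0, size=(50, 4)),   # class 0 inputs
        1: rng.normal(3.0, 1.0, size=(50, 4))}   # class 1 inputs

stream = list(clp_stream(data, samples_per_class=5, rng=rng))
labels = [y for _, y in stream]
print(labels)   # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

The stream reveals all of class 0 before any of class 1: the inputs are correlated in time even though each label depends only on its own input.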

Page 5: Learning Representations for Continual Learning

Parameterized function

[Figure: architecture diagram. Inputs x1, x2, ..., xn feed into the Representation Learning Network (RLN), parameterized by θ, which is only updated with the meta-update. Its output, the learned representation r1, r2, ..., rd, feeds into the Prediction (Task) Learning Network (TLN), parameterized by W, which is updated with both the meta-update and the inner updates, and produces the output y. Each layer could be any differentiable layer, e.g., a conv layer + ReLU or an fc layer + ReLU.]

Page 6: Learning Representations for Continual Learning

CLP Objective

Objective is still an empirical risk minimization objective

the prediction problem in reinforcement learning (predicting the value of a policy from a state) can be represented by considering the inputs Xt to be states and the targets to be sampled returns or bootstrapped targets.

The learning objective for the CLP problem is to minimize error over all samples. Let ℓ : Y × Y → R be the loss for the entire prediction vector, with loss ℓ(ŷ, y) between prediction ŷ ∈ Y and target y. If we assume that inputs X are seen proportionally to some density µ : X → [0, ∞), then the CLP objective can be written as

CLP(θ, W) := E[ℓ(fθ,W(X), Y)] = ∫ ( ∫ ℓ(fθ,W(x), y) p(y|x) dy ) µ(x) dx.   (2)

The objective for CLP is a standard empirical risk minimization problem, but the learning problem differs from the standard iid setting. The agent sees a correlated stream of data, rather than getting to directly sample from p(x, y) = p(y|x)µ(x). This modification can cause significant issues when simply applying standard algorithms for the iid setting. Instead, we need to design algorithms that take this correlation into account.
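A toy sketch (our own construction, not from the paper) of why the correlated stream matters: two prediction problems share a feature, and online SGD on the correlated stream (all of problem 1, then all of problem 2) forgets problem 1, while interleaving the same samples, approximating iid sampling, fits both:

```python
import numpy as np

x1, y1 = np.array([1.0, 1.0, 0.0]), 2.0    # problem 1: shared feature + own feature
x2, y2 = np.array([1.0, 0.0, 1.0]), -3.0   # problem 2: shared feature + own feature

def sgd(w, stream, lr=0.5):
    for x, y in stream:
        w = w - lr * (x @ w - y) * x       # squared-error gradient step
    return w

def err(w, x, y):
    return float((x @ w - y) ** 2)

# Correlated stream: 50 samples of problem 1, then 50 of problem 2.
w_seq = sgd(np.zeros(3), [(x1, y1)] * 50 + [(x2, y2)] * 50)
# Interleaved stream: the same 100 samples, alternating.
w_iid = sgd(np.zeros(3), [(x1, y1), (x2, y2)] * 50)

print(err(w_seq, x1, y1))   # 4.0: problem 1 forgotten after training on problem 2
print(err(w_iid, x1, y1))   # ~0:  interleaving finds a solution that fits both
```

Fitting problem 2 moves the weight on the shared feature, so the sequential run's predictions for problem 1 drift; the interleaved run settles on a joint solution.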

3 An Objective for Learning Representations for Continual Learning

Our goal is to learn representations that are suitable for continual learning. For an illustration of what would constitute an effective representation for continual learning, suppose that we have three clusters of inputs, which have significantly different p(Y|x), corresponding to p1, p2 and p3. For a fixed 2-dimensional representation φθ : X → R², we can consider the manifold of solutions W ∈ R², given by a linear model, that provide equivalently accurate solutions for each pi. These three manifolds are depicted as three different coloured lines in the W ∈ R² parameter space in Figure 2. The goal is to find one parameter vector W that is effective for all three distributions. For two different representations, these manifolds and their intersections can look very different. The intuition is that online updates from a W are more effective when the manifolds are either parallel (allowing for positive generalization) or orthogonal (avoiding interference). It is unlikely that a representation producing such manifolds would emerge naturally. Instead, we will have to explicitly find it.

We frame the problem of learning a representation for continual learning as a meta-learning problem. We design the objective for a generic online update for W. Denote the online update for k steps as U(θ, W, {(Xt+i, Yt+i)}, i = 1, ..., k). For example, starting from parameters (θt, Wt), the update U(θt, Wt, {(Xt+i, Yt+i)}, i = 1, ..., k) could give (θt+k, Wt+k) after k steps of stochastic gradient descent, where the i-th step is taken with respect to (Xt+i, Yt+i). In this work, we consider an online update that only changes W, for a fixed θ, and only update the representation with the meta-objective. This is a crucial idea behind the success of our method, and is further discussed in Appendix B. Our goal is to optimize θ so that overall prediction accuracy is improved under these online updates.

We propose the following Online-aware Meta-Learning (OML) objective

OML(θ, W) := ∫ E[ CLP( U(θ, W, {(Xt+i, Yt+i)}, i = 1, ..., k) ) | Xt = x ] µ(x) dx.   (3)

This objective represents the expected loss after a k-step online update from any input x, sampled proportionally to µ. Note that we could consider a different distribution over the starting x than the µ used in the CLP objective. For example, we could consider a subset of x from which we do online updates, but want to optimize performance on the full set of visited x. We will use this generality when demonstrating the utility of this objective for pre-training representations for continual learning.

We can optimize this objective similarly to other gradient-based meta-learning objectives. Early work on learning-to-learn considered optimizing parameters through learning updates themselves, though typically considering approaches using genetic algorithms [Schmidhuber, 1987]. Improvements in automatic differentiation have made it more feasible to compute gradient-based meta-learning updates [Finn, 2018]. Some meta-learning algorithms have similarly considered optimizations through multiple steps of updating for the few-shot learning setting [Finn et al., 2017; Li et al., 2017]. The successes of these previous works in optimizing similar objectives motivate OML as a feasible objective for Meta-learning Representations for Continual Learning. We provide pseudo-code for optimizing the OML objective given in Equation 3 in Appendix A.
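A rough sketch of how such an objective can be optimized: k inner SGD steps that change only W with θ frozen, followed by a meta-step on θ evaluated after the update. This sketch uses a first-order approximation (in the spirit of first-order MAML) instead of differentiating through the inner loop, and random data in place of a CLP stream; it is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_rep = 4, 6
theta = 0.1 * rng.normal(size=(d_in, d_rep))   # RLN parameters (meta-learned only)
W = 0.1 * rng.normal(size=d_rep)               # TLN parameters (inner loop + meta)

def phi(theta, x):
    """Representation phi_theta(x)."""
    return np.tanh(x @ theta)

def loss_grads(theta, W, x, y):
    """Gradients of the squared error 0.5 * (phi_theta(x) . W - y)^2."""
    h = phi(theta, x)
    err = h @ W - y
    g_W = err * h                                    # d loss / d W
    g_theta = err * np.outer(x, W * (1.0 - h**2))    # d loss / d theta
    return g_W, g_theta

def inner_update(theta, W, samples, lr=0.1):
    """U(theta, W, {(X, Y)}): k online steps that change only W; theta is frozen."""
    for x, y in samples:
        g_W, _ = loss_grads(theta, W, x, y)
        W = W - lr * g_W
    return W

for _ in range(100):                       # meta-training iterations
    samples = [(rng.normal(size=d_in), float(rng.normal())) for _ in range(5)]
    W = inner_update(theta, W, samples)    # k-step online update of W
    x_eval, y_eval = samples[0]            # evaluate post-update performance
    _, g_theta = loss_grads(theta, W, x_eval, y_eval)
    theta = theta - 0.01 * g_theta         # first-order meta-step on theta
```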


µ : X → [0, ∞) is the marginal distribution over X

Page 7: Learning Representations for Continual Learning

Different training regime

• Do not get iid samples

• The sequence of samples is correlated

• This has a significant impact on training NNs online

• Updates are not spread across the space

• The NN begins to specialize to recent samples and forget what it learned before on other parts of the space

Xt ~ µ and Yt ~ P(Y | Xt)

Page 8: Learning Representations for Continual Learning

Strategies to Mitigate Interference

• Replay or generate samples for more updates

• e.g., replay in RL, model-based RL, knowledge distillation

• Modify the online update to retain knowledge

• e.g., Elastic Weight Consolidation

• Use sparse (or semi-distributed) representations

• e.g., early work by French in the 90s
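As a sketch of the Elastic Weight Consolidation idea mentioned above: movement away from the previous task's parameters is penalized in proportion to an importance weight (e.g., the diagonal Fisher information). The numbers and the name `ewc_penalty` are illustrative, not from any particular implementation:

```python
import numpy as np

def ewc_penalty(params, anchor, fisher, lam=1.0):
    """EWC regularizer: lam/2 * sum_i F_i * (theta_i - theta*_i)^2."""
    return 0.5 * lam * float(np.sum(fisher * (params - anchor) ** 2))

anchor = np.array([1.0, -2.0])   # parameters after finishing the previous task
fisher = np.array([4.0, 0.1])    # importance: the first parameter matters more
params = np.array([1.5, -1.0])   # current parameters while learning a new task

# Moving the important parameter costs far more than moving the unimportant one.
print(ewc_penalty(params, anchor, fisher))   # 0.5 * (4*0.25 + 0.1*1.0) = 0.55
```

This penalty is added to the new task's loss, so the online update retains knowledge by keeping important weights near their old values.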


Page 10: Learning Representations for Continual Learning

Importance of the Representation for Mitigating Interference

[Figure: the two panels from Figure 2 again: solution manifolds for p1, p2, p3 in the parameter space of W. Left: an unconstrained representation, with the joint-training solution marked. Right: an ideal representation for continual learning.]

Page 11: Learning Representations for Continual Learning

Separately considering the Representation

[Figure: architecture diagram. Inputs x1, x2, ..., xn feed into the Representation Learning Network (RLN), parameterized by θ, which is only updated with the meta-update. Its output, the learned representation r1, r2, ..., rd, feeds into the Prediction (Task) Learning Network (TLN), parameterized by W, which is updated with both the meta-update and the inner updates, and produces the output y. Each layer could be any differentiable layer, e.g., a conv layer + ReLU or an fc layer + ReLU.]

Claim: It is useful to delineate the representation for continual learning (and learn it differently)

Page 12: Learning Representations for Continual Learning

Hypothesized Advantages

• Learning this representation could be a slower, background process

• Slowly changing representation would suffer much less from interference

• The Prediction Learning Network can still learn quickly online

• Gain stability without losing reactivity

• Can consider alternative objectives and learning approaches for the Representation Learning Network

Page 13: Learning Representations for Continual Learning

Online-aware Meta-Learning Objective

OML(θ, W) := ∫ E[ CLP( U(θ, W, {(Xt+i, Yt+i)}, i = 1, ..., k) ) | Xt = x ] µ(x) dx   (3)

Recall:

where U(θ, W, {(Xt+i, Yt+i)}, i = 1, ..., k)

is the online update for k steps starting from (θ, W):

[θ, Wt+k] = U(θ, Wt, {(Xt+i, Yt+i)}, i = 1, ..., k)

Page 14: Learning Representations for Continual Learning

Online-aware Meta-Learning Objective

the prediction problem in reinforcement learning—predicting the value of a policy from a state—canbe represented by considering the inputs Xt to be states and the targets to be sampled returns orbootstrapped targets.

The learning objective for the CLP problem is to minimize error over all samples. Let ` : Y ⇥Y ! Rbe the loss for the entire prediction vector, with loss `(y, y) between prediction y 2 Y and target y.If we assume that inputs X are seen proportionally to some density µ : X ! [0, 1), then the CLPobjective can be written as

CLP(✓,W )def= E[`(f✓,W (X), Y )] =

Z Z`(f✓,W (x), y)p(y|x)dy

�µ(x)dx. (2)

The objective for CLP is a standard empirical risk minimization problem, but the learning problemdiffers from the standard iid setting. The agent sees a correlated stream of data, rather than gettingto directly sample from p(x, y) = p(y|x)µ(x). This modification can cause significant issues whensimply applying standard algorithms for the iid setting. Instead, we need to design algorithms thattake this correlation into account.

3 An Objective for Learning Representations for Continual Learning

Our goal is to learn representations that are suitable for continual learning. For an illustration ofwhat would constitute an effective representation for continual learning, suppose that we have threeclusters of inputs, which have significantly different p(Y |x), corresponding to p1, p2 and p3. Fora fixed 2-dimensional representation �✓ : X ! R2, we can consider the manifold of solutionsW 2 R2 given by a linear model that provide equivalently accurate solutions for each pi. These threemanifolds are depicted as three different coloured lines in the W 2 R2 parameter space in Figure 2.The goal is to find one parameter vector W that is effective for all three distributions. For two differentrepresentations, these manifolds, and their intersections can look very different. The intuition is thatonline updates from a W are more effective when the manifolds are either parallel—allowing forpositive generalization—or orthogonal—avoiding interference. It is unlikely that a representationproducing such manifolds would emerge naturally. Instead, we will have to explicitly find it.

We frame the problem of learning a representation for continual learning as a meta-learning prob-lem. We design the objective for a generic online update for W . Denote the online update for ksteps as U(✓,W, {(Xt+i, Yt+i)}k


the prediction problem in reinforcement learning (predicting the value of a policy from a state) can be represented by considering the inputs X_t to be states and the targets to be sampled returns or bootstrapped targets.

The learning objective for the CLP problem is to minimize error over all samples. Let ℓ : Y × Y → ℝ be the loss for the entire prediction vector, with loss ℓ(ŷ, y) between prediction ŷ ∈ Y and target y. If we assume that inputs X are seen proportionally to some density μ : X → [0, ∞), then the CLP objective can be written as

$$\mathrm{CLP}(\theta, W) \stackrel{\text{def}}{=} \mathbb{E}\big[\ell(f_{\theta,W}(X), Y)\big] = \int \Big[ \int \ell(f_{\theta,W}(x), y)\, p(y \mid x)\, dy \Big]\, \mu(x)\, dx. \tag{2}$$

The objective for CLP is a standard empirical risk minimization problem, but the learning problem differs from the standard iid setting. The agent sees a correlated stream of data, rather than getting to directly sample from p(x, y) = p(y|x)μ(x). This modification can cause significant issues when simply applying standard algorithms for the iid setting. Instead, we need to design algorithms that take this correlation into account.
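The correlated-stream structure of CLP is easy to picture with a toy generator. The sketch below is purely illustrative (the clusters and target functions are hypothetical, not from the paper): it builds a stream where all samples from one P(Y|x) arrive in a block, then contrasts it with the iid shuffle that standard empirical risk minimization assumes.

```python
import random

def make_clp_stream(tasks, samples_per_task, seed=0):
    """Build a correlated CLP stream: every sample from one target
    distribution p_i(Y | x) is seen before moving on to the next.
    Each task is a hypothetical (input_offset, target_fn) pair."""
    rng = random.Random(seed)
    stream = []
    for offset, target_fn in tasks:
        for _ in range(samples_per_task):
            x = offset + rng.random()      # inputs cluster within a task
            y = target_fn(x)               # Y depends only on the current X
            stream.append((x, y))
    return stream                          # correlated ordering, not iid

# Three input clusters with different P(Y | x), mirroring p1, p2, p3.
tasks = [(0.0, lambda x: 2 * x),
         (5.0, lambda x: -x),
         (10.0, lambda x: x + 1)]
stream = make_clp_stream(tasks, samples_per_task=4)

# A standard iid learner would instead see a shuffled version,
# which destroys exactly the correlation that CLP has to cope with:
iid_stream = list(stream)
random.Random(1).shuffle(iid_stream)
```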

3 An Objective for Learning Representations for Continual Learning

Our goal is to learn representations that are suitable for continual learning. For an illustration of what would constitute an effective representation for continual learning, suppose that we have three clusters of inputs, which have significantly different p(Y|x), corresponding to p1, p2 and p3. For a fixed 2-dimensional representation φ_θ : X → ℝ², we can consider the manifold of solutions W ∈ ℝ² given by a linear model that provide equivalently accurate solutions for each p_i. These three manifolds are depicted as three different coloured lines in the W ∈ ℝ² parameter space in Figure 2. The goal is to find one parameter vector W that is effective for all three distributions. For two different representations, these manifolds, and their intersections, can look very different. The intuition is that online updates from a W are more effective when the manifolds are either parallel (allowing for positive generalization) or orthogonal (avoiding interference). It is unlikely that a representation producing such manifolds would emerge naturally. Instead, we will have to explicitly find it.

We frame the problem of learning a representation for continual learning as a meta-learning problem. We design the objective for a generic online update for W. Denote the online update for k steps as U(θ, W, {(X_{t+i}, Y_{t+i})}_{i=1}^k). For example, starting from parameters (θ_t, W_t), the update U(θ_t, W_t, {(X_{t+i}, Y_{t+i})}_{i=1}^k) could give (θ_{t+k}, W_{t+k}) after k steps of stochastic gradient descent, where the i-th step is taken with respect to (X_{t+i}, Y_{t+i}). In this work, we consider an online update that only changes W, for a fixed θ, and only update the representation with the meta-objective. This is a crucial idea behind the success of our method, and is further discussed in Appendix B. Our goal is to optimize θ so that overall prediction accuracy is improved under these online updates.

We propose the following Online-aware Meta-Learning (OML) objective:

$$\mathrm{OML}(\theta, W) \stackrel{\text{def}}{=} \int \mathbb{E}\Big[\mathrm{CLP}\big(U(\theta, W, \{(X_{t+i}, Y_{t+i})\}_{i=1}^{k})\big) \;\Big|\; X_t = x\Big]\, \mu(x)\, dx. \tag{3}$$

This objective represents the expected loss after a k-step online update from any input x, sampled proportionally to μ. Note that we could consider a distribution over starting x different from the μ in the CLP objective. For example, we could consider a subset of x from which we do online updates, but want to optimize performance on the full set of visited x. We will use this generality when demonstrating the utility of this objective for pre-training representations for continual learning.

We can optimize this objective similarly to other gradient-based meta-learning objectives. Early work on learning-to-learn considered optimizing parameters through the learning updates themselves, though typically using genetic algorithms [Schmidhuber, 1987]. Improvements in automatic differentiation have made it more feasible to compute gradient-based meta-learning updates [Finn, 2018]. Some meta-learning algorithms have similarly considered optimizations through multiple steps of updating for the few-shot learning setting [Finn et al., 2017; Li et al., 2017]. The successes of these previous works in optimizing similar objectives motivate OML as a feasible objective for Meta-learning Representations for Continual Learning. We provide pseudo-code for optimizing the OML objective given in Equation 3 in Appendix A.
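To make the optimization concrete, here is a deliberately tiny sketch of an OML-style meta-step: a scalar "representation" parameter θ, a prediction weight W, an inner loop of k online SGD steps that changes only W, and an outer update to θ on the post-update loss. Everything here is illustrative (the linear φ, the targets y = 3x, and all constants are made up), and the finite-difference meta-gradient is a readability simplification; a real implementation would backpropagate through the inner updates, as in other gradient-based meta-learning methods.

```python
import random

def phi(theta, x):
    return theta * x                       # stand-in representation phi_theta

def loss(theta, W, data):                  # CLP-style average squared error
    return sum((W * phi(theta, x) - y) ** 2 for x, y in data) / len(data)

def inner_update(theta, W, seq, lr=0.01):
    """k online SGD steps on W only, in the given (correlated) order."""
    for x, y in seq:
        W -= lr * 2 * (W * phi(theta, x) - y) * phi(theta, x)
    return W

def oml_loss(theta, W, seq, eval_set):
    """Loss on all data after the k-step online update, as in Equation (3)."""
    return loss(theta, inner_update(theta, W, seq), eval_set)

def meta_step(theta, W, seq, eval_set, meta_lr=0.01, eps=1e-4):
    # Finite-difference estimate of d OML / d theta (illustrative only).
    g = (oml_loss(theta + eps, W, seq, eval_set)
         - oml_loss(theta - eps, W, seq, eval_set)) / (2 * eps)
    return theta - meta_lr * g             # theta moves only via the meta-update

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(20)]
data = [(x, 3.0 * x) for x in xs]          # hypothetical targets y = 3x
seq = data[:5]                             # online-update sequence

theta0, W0 = 0.5, 0.0
loss_before = oml_loss(theta0, W0, seq, data)
theta = theta0
for _ in range(200):                       # meta-training on theta
    theta = meta_step(theta, W0, seq, data)
loss_after = oml_loss(theta, W0, seq, data)
```

Note that W is reset to its initial value for every meta-step: the representation θ is shaped so that the *online updates to W* become effective, which is the point of the objective.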


Recall:

Key Point: U maintains correlated ordering that the online learner will see

where U(θ, W, {(X_{t+i}, Y_{t+i})}_{i=1}^k) is the online update for k steps starting from (θ, W):

[θ, W_{t+k}] = U(θ, W_t, {(X_{t+i}, Y_{t+i})}_{i=1}^k)

Page 15

How is this different from MAML? Similar idea but…

• The goal is to learn a representation for continual learning, rather than few-shot learning (fine-tuning the network)

• Our goal is to learn fast and minimize interference across all data

• Sample a sequence for the online update, and optimize error for all other data

• There are no explicit tasks, just correlated data

• Instead of updating the whole network online, only the prediction layers are updated online

• Can be seen as setting stepsizes for representation params to zero
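The last point can be made concrete: freezing the representation during the inner loop is equivalent to MAML-style inner updates with per-parameter stepsizes, where every representation parameter gets stepsize zero. A minimal sketch, with plain lists standing in for parameter tensors and a hypothetical parameter layout:

```python
def masked_inner_step(params, grads, stepsizes):
    """One inner SGD step with a per-parameter stepsize vector."""
    return [p - s * g for p, s, g in zip(params, stepsizes, grads)]

# Hypothetical layout: params = [theta_1, theta_2, W_1, W_2].
params    = [0.3, -0.7, 1.0, 2.0]
grads     = [0.5,  0.5, 0.5, 0.5]
stepsizes = [0.0,  0.0, 0.1, 0.1]   # zero stepsize freezes theta online
new_params = masked_inner_step(params, grads, stepsizes)
# theta entries are untouched; only the W entries move.
```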

Page 16

Separately considering the Representation

[Figure: the network is split into two stacked parts.]

• Representation Learning Network (RLN), with parameters θ: maps the input (x1, x2, ..., xn) to a learned representation (r1, r2, ..., rd). Each layer could be any differentiable layer, e.g. a conv layer + ReLU or an fc layer + ReLU. Only updated with the meta-update.

• Task Learning Network (TLN), also labelled the Prediction Learning Network, with parameters W: maps the learned representation to the output y. Updated with the meta-update and the inner updates.
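A minimal functional sketch of this split, with tiny dimensions and hand-picked weights (purely illustrative): θ parameterizes the RLN, W parameterizes the TLN, and the prediction is their composition.

```python
def relu(v):
    return [max(0.0, z) for z in v]

def rln(theta, x):
    """Representation Learning Network: one fc + ReLU layer here."""
    return relu([sum(w * xi for w, xi in zip(row, x)) for row in theta])

def tln(W, r):
    """Task Learning Network: linear prediction from the representation."""
    return sum(w * ri for w, ri in zip(W, r))

x = [1.0, -2.0]
theta = [[0.5, 0.1],    # only updated by the meta-update
         [-0.3, 0.4]]
W = [1.0, -1.0]         # updated by both the meta-update and inner updates
r = rln(theta, x)       # learned representation
y = tln(W, r)           # output prediction
```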

Page 17

Q1: Can we learn representations that are effective for online updating?

• Pre-train the representation on a pre-training set

• the batch of data can be used in any way, including iid training

• Test performance when learning online on new data, with a fixed pre-trained representation

• Two settings:

• One prediction, Y | x generated from 10 different functions

• An increasing number of predictions (increasing classes)

Page 18

Dataset 2: Split-Omniglot

• Omniglot has 1623 characters, each with 20 hand-written images

• Pre-training data: the first 963 classes

• The remaining classes are Evaluation data (for online learning): all of one class is seen before going to the next

• Inputs are the image (we do not use task IDs)

• Chosen because (a) a hard problem with many predictions, (b) a given split between pre-training and evaluation classes
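The split and the class-incremental ordering described above can be sketched with class indices alone (image loading omitted; the constants are the ones stated on this slide):

```python
# Class-index sketch of the Split-Omniglot protocol described above.
NUM_CLASSES = 1623
NUM_PRETRAIN = 963
IMAGES_PER_CLASS = 20

pretrain_classes = list(range(NUM_PRETRAIN))            # pre-training data
eval_classes = list(range(NUM_PRETRAIN, NUM_CLASSES))   # online learning data

def evaluation_stream(classes, images_per_class=IMAGES_PER_CLASS):
    """All images of one class are seen before moving to the next;
    the inputs are just the images (no task IDs)."""
    return [(c, i) for c in classes for i in range(images_per_class)]

stream = evaluation_stream(eval_classes[:3])   # first three evaluation classes
```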

Page 19

Algorithms

• 8 layers in the NN, with 6 for the RLN and 2 for the TLN

• Scratch: learns online from a random initialization, no pre-training

• Pretraining: iid training on pre-training set

• SR-NN: a neural network with sparse activations, trained using the Set-KL method

• MRCL: our algorithm, Meta-learned Reps for Continual Learning

• MRCL without RLN: define online update on whole network

Page 20

Continual Classification Results (MRCL is our method)

[Figure: "Omniglot Training Trajectory Performance". x-axis: number of classes learned (10 to 175); y-axis: accuracy (0.0 to 1.0); up is good. Curves: MRCL, Oracle*, SR-NN, Pretraining, Scratch, and MRCL without RLN.]

*Oracle: learned using IID training on the trajectory with multiple epochs

1. Separating out the RLN is very important
2. Just fixing the representation does not prevent interference
3. The sparse-activation NN (SR-NN) helps quite a bit, but not as much as MRCL

Page 21

Sparsity naturally emerges when using OML

[Figure: activation maps (scale 0.0 to 1.0) for MRCL, SR-NN, and Pre-training, showing three random instances, the average activation, and the dead neurons. The 2304-length representation vectors are reshaped to 32x72 for visualization.]

• MRCL has 4% activation and no dead neurons
• Pre-training has 40% activation and 3% dead neurons
• The best SR-NN has 15% activation with 1% dead neurons
• An SR-NN trained to be more sparse, with 4% activation, has 14% dead neurons
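The two statistics quoted on this slide, average activation and dead neurons, are straightforward to compute from a batch of representation vectors. A toy illustration (made-up 4-unit representations, not Omniglot data):

```python
def sparsity_stats(reps):
    """Return (average activation, dead-neuron fraction) for a batch of
    representation vectors. A unit is 'active' on an input if it is > 0,
    and 'dead' if it is zero for every input in the batch."""
    n_units = len(reps[0])
    active = sum(sum(1 for v in r if v > 0) for r in reps)
    avg_activation = active / (len(reps) * n_units)
    dead = sum(1 for j in range(n_units) if all(r[j] == 0 for r in reps))
    return avg_activation, dead / n_units

reps = [[0.0, 0.2, 0.0, 1.1],
        [0.0, 0.0, 0.0, 0.7],
        [0.0, 0.9, 0.0, 0.0]]
avg_act, dead_frac = sparsity_stats(reps)   # units 0 and 2 never fire
```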

Page 22

Q2: Do the learned representations complement other strategies for forgetting?

Table 1: MRCL combined with existing continual learning methods. All memory-based methods use a buffer of 200. Error margins represent one std over 10 runs. Performance of all methods is considerably improved when they learn from representations learned by MRCL; moreover, even online updates are competitive with rehearsal-based methods with MRCL. Finally, online updates on MRCL outperform all methods when they learn from other representations. Note that MER does better than approx IID in some cases because it does multiple rehearsal-based updates for every sample.

Split-Omniglot

                 One class per task, 50 tasks               Five classes per task, 20 tasks
Method           Standard       MRCL           Pretraining   Standard       MRCL           Pretraining
Online           04.64 ± 2.61   64.72 ± 2.57   21.16 ± 2.71  01.40 ± 0.43   55.32 ± 2.25   11.80 ± 1.92
Approx IID       53.95 ± 5.50   75.12 ± 3.24   54.29 ± 3.48  48.02 ± 5.67   67.03 ± 2.10   46.02 ± 2.83
ER-Reservoir     52.56 ± 2.12   68.16 ± 3.12   36.72 ± 3.06  24.32 ± 5.37   60.92 ± 2.41   37.44 ± 1.67
MER              54.88 ± 4.12   76.00 ± 2.07   62.76 ± 2.16  29.02 ± 4.01   62.05 ± 2.19   42.05 ± 3.71
EWC              05.08 ± 2.47   64.44 ± 3.13   18.72 ± 3.97  02.04 ± 0.35   56.03 ± 3.20   10.03 ± 1.53

enforce sparsity, has approximately 3% dead neurons. SR-NN with the same level of sparsity as MRCL has approximately 14% dead neurons, whereas one optimized for best accuracy has 0.7% dead neurons, but with a higher average overall activation of 15%. MRCL, on the other hand, managed to obtain a highly sparse representation without incurring dead neurons. These results provide further evidence that MRCL, and the objective it uses, promote learning an effective representation for online updating.

5 Improvements by Combining with Knowledge Retention Approaches

We have shown that MRCL learns effective representations for continual learning. In this section, we answer a different question: how does MRCL behave when it is combined with existing continual learning methods? We test the performance of EWC [Kirkpatrick et al., 2017], MER [Riemer et al., 2019] and ER-Reservoir [Chaudhry et al., 2019b], in their standard form (learning the whole network online) as well as with pre-trained fixed representations. We use pre-trained representations from MRCL and Pretraining, obtained in the same way as described in earlier sections. For the Standard online form of these algorithms, to avoid the unfair advantage of pre-training, we initialize the networks by learning iid on the pre-training set.

As baselines, we also report results for (a) fully online SGD updates that update one point at a time, in order on the trajectory, and (b) approximate IID training, where SGD updates are used on a random shuffling of the trajectory, removing the correlation.

We report the test set results for learning 50 tasks with one class per task, and learning 20 tasks with five classes per task, in Split-Omniglot in Table 1. For each of the methods, we do a 15/5 train/test split for each Omniglot class, test multiple values for all the hyperparameters, and report results for the best setting. The conclusions are surprisingly clear: (1) MRCL improves all the algorithms; (2) simply providing a fixed representation, as in Pretraining, does not provide nearly the same gains as MRCL; and (3) MRCL with a basic Online updating strategy is already competitive, outperforming all the continual learning methods without MRCL. There are a few additional outcomes of note. MRCL outperforms even approximate IID sampling, suggesting it is not only mitigating interference but actually making learning faster on new data. Finally, the difference between online and experience-replay-based algorithms for MRCL is not as pronounced as it is for other representations.

6 Conclusion and Discussion

In this paper, we proposed a meta-learning objective to learn representations that are robust to interference under online updates and promote future learning. We showed that using our representations, it is possible to learn from highly correlated data streams with significantly improved robustness to forgetting. We found sparsity emerges as a property of our learned representations, without explicitly training for sparsity. We finally showed that our method is complementary to the existing state of the art.


1. MRCL improves all the algorithms.
2. The results are not due to just fixing the representation.
3. MRCL with a basic Online updating strategy is already competitive.
4. MRCL improves even approximate IID sampling (suggesting it is not only mitigating interference but making learning faster on new data).

Split-Omniglot

Page 23

Key take-away

We should be explicitly learning representations that are well-suited for online updating

Page 24

Open Questions

• How can we learn these representations online?

• When can we expect to learn representations amenable to online updating?

• e.g., how related does pre-training data have to be to future data?

• What are the disadvantages of delineating a representation and training it differently?

• Can we learn representations tailored to more complex continual updates?

Page 25


Thank you!