Article

On Sequential Bayesian Inference for Continual Learning

1 Department of Engineering Science, University of Oxford, Oxford OX2 6ED, UK
2 SRI International, Arlington, VA 22209, USA
3 Department of Computer Science, University of Oxford, Oxford OX1 3QG, UK
* Author to whom correspondence should be addressed.
Entropy 2023, 25(6), 884; https://doi.org/10.3390/e25060884
Submission received: 1 May 2023 / Revised: 24 May 2023 / Accepted: 28 May 2023 / Published: 31 May 2023
(This article belongs to the Special Issue Information Theory for Data Science)

Abstract: Sequential Bayesian inference can be used for continual learning to prevent catastrophic forgetting of past tasks and provide an informative prior when learning new tasks. We revisit sequential Bayesian inference and assess whether using the previous task's posterior as a prior for a new task can prevent catastrophic forgetting in Bayesian neural networks. Our first contribution is to perform sequential Bayesian inference using Hamiltonian Monte Carlo. We propagate the posterior as a prior for new tasks by approximating the posterior via fitting a density estimator on Hamiltonian Monte Carlo samples. We find that this approach fails to prevent catastrophic forgetting, demonstrating the difficulty in performing sequential Bayesian inference in neural networks. From there, we study simple analytical examples of sequential Bayesian inference and CL and highlight the issue of model misspecification, which can lead to sub-optimal continual learning performance despite exact inference. Furthermore, we discuss how task data imbalances can cause forgetting. From these limitations, we argue that we need probabilistic models of the continual learning generative process rather than relying on sequential Bayesian inference over Bayesian neural network weights. Our final contribution is to propose a simple baseline called Prototypical Bayesian Continual Learning, which is competitive with the best-performing Bayesian continual learning methods on class-incremental continual learning computer vision benchmarks.

1. Introduction

The goal of continual learning (CL) is to find a predictor that learns to solve a sequence of new tasks without losing the ability to solve previously learned tasks. One key challenge of CL with neural networks (NNs) is that model parameters learned on previous tasks are "overwritten" during gradient-based learning of new tasks, leading to catastrophic forgetting of previously learned abilities [1,2]. One approach to CL hinges on recursive applications of Bayes' theorem, using the weight posterior of a Bayesian neural network (BNN) as the prior for a new task [3]. However, obtaining a full posterior over NN weights is computationally demanding, and we often need to resort to approximations, such as the Laplace method [4] or variational inference [5,6], to obtain the weight posterior.
When performing Bayesian CL in practice, sequential Bayesian inference is performed with an approximate BNN posterior, not the true posterior [7,8,9,10,11,12]. If we consider the performance of sequential Bayesian inference with a variational approximation to the BNN weight posterior, we barely observe an improvement over simply learning new tasks with stochastic gradient descent (SGD); we develop this statement further in Section 2.2. This raises the question: if we had access to the true BNN weight posterior, would that be enough to prevent forgetting via sequential Bayesian inference?
Our contributions in this paper are to revisit Bayesian CL. (1) Experimentally, we perform sequential Bayesian inference using the true BNN weight posterior. We do this using the gold standard of Bayesian inference methods, Hamiltonian Monte Carlo (HMC) [13]: we fit a density estimator to HMC samples and use this approximate posterior density as the prior for the next task within the HMC sampling process. Surprisingly, our HMC method for CL yields no noticeable benefit over an approximate inference method (VCL; Nguyen et al. [9]), despite using samples from the true posterior. (2) As a result, we consider a simple analytical example and highlight that exact inference with a misspecified model can still cause forgetting. (3) We show mathematically that, under certain assumptions, task data imbalances will cause forgetting in Bayesian NNs. (4) We propose a new probabilistic model for CL and show that by explicitly modeling the generative process of the data, we can achieve good performance, avoiding the need to rely on recursive Bayesian inference over NN weights to prevent forgetting. Our proposed model, Prototypical Bayesian Continual Learning (ProtoCL), is conceptually simple, scalable, and competitive with state-of-the-art Bayesian CL methods in the class-incremental learning setting.

2. Background

2.1. The Continual Learning Problem

Continual learning (CL) is a learning setting whereby a model must learn to make predictions over a set of tasks sequentially while maintaining performance across all previously learned tasks. In CL, the model is sequentially shown $T$ tasks, denoted $\mathcal{T}_t$ for $t = 1, \dots, T$. Each task $\mathcal{T}_t$ comprises a dataset $\mathcal{D}_t = \{(x_i, y_i)\}_{i=1}^{N_t}$, which the model needs to learn to make predictions with. More generally, tasks are denoted by distinct tuples comprising the conditional and marginal data distributions, $\{p_t(y \mid x), p_t(x)\}$. After task $\mathcal{T}_t$, the model loses access to the training dataset, but its performance is continually evaluated on all tasks $\mathcal{T}_i$ for $i \le t$. We decompose predictors as $g = h \circ f$ such that $\hat{y} = g(x)$, where $f: \mathcal{X} \to \mathcal{Z}$ is an embedding function and $h: \mathcal{Z} \to \mathcal{Y}$ is a head mapping embeddings to outputs. Some CL methods use a separate head per task, $\{h_i\}_{i=1}^T$; these methods are called multi-headed, while those that use a single head are called single-headed.

2.2. Bayesian Continual Learning

We consider a setting in which task data arrives sequentially at timesteps $t = 1, 2, \dots, T$. At the first timestep, $t = 1$, that is, for task $\mathcal{T}_1$, the model receives the first dataset $\mathcal{D}_1$ and learns the conditional distribution $p(y_i \mid x_i, \theta)$ for all $(x_i, y_i) \in \mathcal{D}_1$ ($i$ indexes a datapoint in $\mathcal{D}_1$). We denote the parameters $\theta$ as having a prior distribution $p(\theta)$ for $\mathcal{T}_1$. The posterior predictive distribution for a test point $x_1^* \sim \mathcal{D}_1$ is hence:
$$p(y_1^* \mid x_1^*, \mathcal{D}_1) = \int p(y_1^* \mid x_1^*, \theta)\, p(\theta \mid \mathcal{D}_1)\, d\theta. \tag{1}$$
We note that computing this posterior predictive distribution requires $p(\theta \mid \mathcal{D}_1)$. For $t = 2$, a CL model is required to fit $p(y_i \mid x_i, \theta)$ for $(x_i, y_i) \in \mathcal{D}_1 \cup \mathcal{D}_2$. The posterior predictive distribution for a new test point $x_2^* \sim \mathcal{D}_1 \cup \mathcal{D}_2$ is:
$$p(y_2^* \mid x_2^*, \mathcal{D}_1, \mathcal{D}_2) = \int p(y_2^* \mid x_2^*, \theta)\, p(\theta \mid \mathcal{D}_1, \mathcal{D}_2)\, d\theta. \tag{2}$$
The posterior must thus be updated to reflect this new conditional distribution. We can use repeated application of Bayes' rule to calculate the posterior distribution $p(\theta \mid \mathcal{D}_1, \dots, \mathcal{D}_T)$ as:
$$p(\theta \mid \mathcal{D}_1, \dots, \mathcal{D}_{T-1}, \mathcal{D}_T) = \frac{p(\mathcal{D}_T \mid \theta)\, p(\theta \mid \mathcal{D}_1, \dots, \mathcal{D}_{T-1})}{p(\mathcal{D}_T \mid \mathcal{D}_1, \dots, \mathcal{D}_{T-1})}. \tag{3}$$
In the CL setting, we lose access to previous training datasets; however, repeated application of Bayes' rule (Equation (3)) allows us to sequentially incorporate information from past tasks in the parameters $\theta$. At $t = 1$, we have access to $\mathcal{D}_1$ and the posterior over parameters is:
$$\log p(\theta \mid \mathcal{D}_1) = \log p(\mathcal{D}_1 \mid \theta) + \log p(\theta) - \log p(\mathcal{D}_1). \tag{4}$$
At $t = 2$, we require $p(\theta \mid \mathcal{D}_1, \mathcal{D}_2)$ to calculate the posterior predictive distribution in Equation (2). However, we have lost access to $\mathcal{D}_1$. According to Bayes' rule, the posterior may be written as:
$$\log p(\theta \mid \mathcal{D}_1, \mathcal{D}_2) = \log p(\mathcal{D}_2 \mid \theta) + \log p(\theta \mid \mathcal{D}_1) - \log p(\mathcal{D}_2 \mid \mathcal{D}_1), \tag{5}$$
where we used the conditional independence of $\mathcal{D}_2$ and $\mathcal{D}_1$ given $\theta$. We note that the likelihood $p(\mathcal{D}_2 \mid \theta)$ is only dependent upon the current task dataset, $\mathcal{D}_2$, and that the prior $p(\theta \mid \mathcal{D}_1)$ encodes parameter knowledge from the previous task. Hence, we can use the posterior evaluated at $t$ as a prior for learning a new task at $t+1$. From Equation (3), we require that our model with parameters $\theta$ is a sufficient statistic of $\mathcal{D}_1$, i.e., $p(\mathcal{D}_2 \mid \theta, \mathcal{D}_1) = p(\mathcal{D}_2 \mid \theta)$, making the likelihood conditionally independent of $\mathcal{D}_1$ given $\theta$. This observation motivates the use of high-capacity predictors, such as Bayesian neural networks, that are flexible enough to learn from $\mathcal{D}_1$.
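To make the recursion concrete, the following is a minimal Python sketch (illustrative, not the paper's code) of Equation (3) for a conjugate Beta-Bernoulli model, where each task posterior is available in closed form and becomes the prior for the next task. The Bayesian CL setting replaces these closed-form updates with (approximate) posteriors over BNN weights, which is exactly where the difficulty arises.

```python
import numpy as np

def sequential_update(alpha, beta, data):
    """Condition the current Beta(alpha, beta) prior on a new batch of
    binary observations; the posterior is again a Beta distribution."""
    return alpha + data.sum(), beta + len(data) - data.sum()

rng = np.random.default_rng(0)
alpha, beta = 1.0, 1.0                                        # prior p(theta) for T_1
datasets = [rng.binomial(1, 0.7, size=50) for _ in range(3)]  # tasks T_1..T_3

for t, data in enumerate(datasets, start=1):
    alpha, beta = sequential_update(alpha, beta, data)        # posterior -> next prior
    print(f"after task {t}: posterior mean = {alpha / (alpha + beta):.3f}")

# Because each update is exact, the final posterior equals the one obtained by
# conditioning on all task data jointly -- the property Equation (3) promises.
```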

Continual Learning Example: Split-MNIST

For the MNIST dataset [14], we know that if we were to train a BNN, we would achieve good performance by inferring the posterior $p(\theta \mid \mathcal{D})$ and integrating over it to obtain the posterior predictive for a test point, Equation (1). Therefore, if we split MNIST into five two-class classification tasks, we should be able to recursively recover the multi-task posterior $p(\theta \mid \mathcal{D}) = p(\theta \mid \mathcal{D}_1, \dots, \mathcal{D}_5)$ using Equation (3). This problem is called Split-MNIST [15], where the first task involves classifying the digits $\{0, 1\}$, the second task the digits $\{2, 3\}$, and so on.
We can define three different CL settings [16,17,18]. When the CL agent is allowed to make predictions given a task identifier $\tau$, the scenario is referred to as task-incremental. The identifier $\tau$ could be used to select a task-specific head (Section 2.1), for instance. This scenario is not compatible with the sequential Bayesian inference outlined in Equation (3), since Equation (3) involves no task identifier at prediction time. Domain-incremental learning is another scenario that does not have access to $\tau$ during evaluation and requires the CL agent to classify into the same output space for each task; for example, for Split-MNIST the output space is $\{0, 1\}$ for all tasks, so this amounts to classifying between even and odd digits. Domain-incremental learning is compatible with sequential Bayesian inference with a Bernoulli likelihood. The third scenario is class-incremental learning, which also does not have access to $\tau$ but requires the agent to classify each example into its corresponding class; for Split-MNIST, for example, the output space is $\{0, \dots, 9\}$ for every task. Class-incremental learning is compatible with sequential Bayesian inference with a categorical likelihood.

2.3. Variational Continual Learning

Variational CL (VCL; Nguyen et al. [9]) simplifies the Bayesian inference problem in Equation (3) into a sequence of approximate Bayesian updates on the distribution over random neural network weights $\theta$. To do so, VCL uses the variational posterior from previous tasks as a prior for new tasks. In this way, learning to solve the first task entails finding a variational distribution $q_1(\theta \mid \mathcal{D}_1)$ that maximizes a corresponding variational objective. For the subsequent task, the prior is chosen to be $q_1(\theta \mid \mathcal{D}_1)$, and the goal becomes to learn a variational distribution $q_2(\theta \mid \mathcal{D}_2)$ that maximizes a corresponding variational objective under this prior. Denoting the recursive posterior inferred from multiple datasets by $q_t(\theta \mid \mathcal{D}_{1:t})$, we can express the variational CL objective for the $t$-th task as:
$$\mathcal{L}(\theta, \mathcal{D}_t) = D_{\mathrm{KL}}\!\left[\, q_t(\theta) \,\|\, q_{t-1}(\theta \mid \mathcal{D}_{1:t-1}) \,\right] - \mathbb{E}_{q_t}\!\left[ \log p(\mathcal{D}_t \mid \theta) \right]. \tag{6}$$
When applying VCL to Split-MNIST (Figure 1), we can see that single-headed VCL barely performs better than SGD at remembering past tasks. Multi-headed VCL performs better, despite multiple heads not being required by sequential Bayesian inference (Equation (3)). Why, then, does single-head VCL not improve over SGD if we can recursively build up an approximate posterior using Equation (3)? We hypothesize that this is due to the variational approximation of the posterior, meaning that we are not strictly performing the Bayesian CL process described in Section 2.2. We test this hypothesis in the next section by propagating the true BNN posterior, to verify whether we can recursively obtain the true multi-task posterior, improve on single-head VCL, and prevent catastrophic forgetting.
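As a minimal illustration of Equation (6) (a sketch under simplifying assumptions, not the VCL reference implementation), the objective for a diagonal Gaussian variational posterior can be estimated as follows; `log_likelihood` is an assumed user-supplied function, and a real implementation would differentiate through the variational parameters.

```python
import numpy as np

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL[q || p] between diagonal Gaussians, summed over dimensions."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2 * sig_p**2) - 0.5)

def vcl_objective(mu_q, sig_q, mu_prev, sig_prev, data, log_likelihood,
                  n_samples=16, rng=np.random.default_rng(0)):
    """Equation (6): KL to the previous task's posterior minus a Monte Carlo
    estimate of the expected log-likelihood under the current posterior."""
    thetas = mu_q + sig_q * rng.standard_normal((n_samples, mu_q.size))
    exp_loglik = np.mean([log_likelihood(th, data) for th in thetas])
    return kl_diag_gaussians(mu_q, sig_q, mu_prev, sig_prev) - exp_loglik
```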

3. Bayesian Continual Learning with Hamiltonian Monte Carlo

To perform inference over BNN weights we use the HMC algorithm [13]. We then take the resulting samples and learn a density estimator that can be used as a prior for a new task. (We also considered Sequential Monte Carlo, but it is unable to scale to the dimensions required for the NNs we consider [19]; HMC, on the other hand, has recently been scaled to relatively small BNNs of the size considered in this paper [20] and to ResNet models, albeit at large computational cost [21].) HMC is considered the gold standard in approximate inference and is guaranteed to asymptotically produce samples from the true posterior. (In the NeurIPS 2021 Bayesian Deep Learning Competition (https://izmailovpavel.github.io/neurips_bdl_competition), the goal was to find an approximate inference method that is as "close" as possible to posterior samples from HMC.) We take posterior samples of $\theta$ from HMC and fit a density estimator over these samples to use as a prior for the next task. This allows us to use a multi-modal posterior distribution over $\theta$ rather than a diagonal Gaussian variational posterior as in VCL. More concretely, to propagate the posterior $p(\theta \mid \mathcal{D}_1)$ we fit a density estimator, denoted $\hat{p}(\theta \mid \mathcal{D}_1)$, to the HMC samples. For the next task $\mathcal{T}_2$, we can then use $\hat{p}(\theta \mid \mathcal{D}_1)$ as the prior for a new HMC sampling chain, and so on (see Figure 2). The density-estimator priors need to satisfy two key conditions for use within HMC sampling: first, they must be probability density functions; second, they must be differentiable with respect to the input samples.
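The following sketch shows how the density-estimation step can satisfy these two conditions, assuming scikit-learn's GaussianMixture; the function names are illustrative rather than taken from the paper's codebase. The returned log-density and its gradient are what a new HMC chain needs from the prior $\hat{p}(\theta \mid \mathcal{D}_1)$.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_prior(posterior_samples, max_components=10):
    """Fit GMMs to HMC samples and pick the number of components by
    held-out likelihood (as done for the experiments in this section)."""
    train, val = posterior_samples[::2], posterior_samples[1::2]
    fits = [GaussianMixture(k).fit(train) for k in range(1, max_components + 1)]
    return max(fits, key=lambda g: g.score(val))

def log_prior(gmm, theta):
    """Condition 1: a normalized log probability density."""
    return gmm.score_samples(theta[None, :])[0]

def grad_log_prior(gmm, theta):
    """Condition 2: gradient of log sum_k pi_k N(theta; mu_k, S_k),
    computed via the component responsibilities p(k | theta)."""
    resp = gmm.predict_proba(theta[None, :])[0]
    grads = [-np.linalg.solve(gmm.covariances_[k], theta - gmm.means_[k])
             for k in range(gmm.n_components)]
    return np.sum(resp[:, None] * np.stack(grads), axis=0)
```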
We use a toy dataset (Figure 3) with two classes and inputs $x \in \mathbb{R}^2$ [22]. Each task is a binary classification problem where the decision boundary extends from left to right with each new task. We train a two-layer BNN with a hidden state size of 10. We use a Gaussian Mixture Model (GMM) as the density estimator for approximating the posterior from HMC samples. We also tried normalizing flows, which should be more flexible [23]; however, these did not work robustly within HMC sampling (RealNVP was very sensitive to the choice of random seed; samples from the learned distribution did not give accurate predictions for the current task and led to numerical instabilities when used as a prior within HMC). To the best of our knowledge, we are the first to incorporate flexible priors into sampling methods such as HMC.
Training a BNN with HMC on the same multi-task dataset obtains a test accuracy of 1.0. The final multi-task posterior is therefore suitable for continual learning under Equation (3), and we should in principle be able to recursively arrive at it with our inference method with HMC. The results in Figure 3 demonstrate that using HMC with an approximate multi-modal posterior fails to prevent forgetting and is less effective than multi-head VCL. In fact, multi-head VCL clearly outperforms HMC, indicating that the source of knowledge retention is not the propagation of the posterior but the task-specific heads. For $\mathcal{T}_2$, we use $\hat{p}(\theta \mid \mathcal{D}_1)$ instead of $p(\theta \mid \mathcal{D}_1)$ as the prior, and this biases the HMC sampling for all subsequent tasks. Below, we detail the measures taken to ensure that our HMC chains have converged, so that we are sampling from the true posterior, and we assess the fidelity of the GMM density estimator with respect to the HMC samples. We also repeated these experiments on another toy dataset of five binary classification tasks, observing similar results (Appendix A).
For HMC, we ensure that we are sampling from the posterior by assessing chain convergence and effective sample sizes (Figure A5). The effective sample size measures the autocorrelation in the chain; the effective sample sizes for our BNN HMC chains are similar to those reported in the literature [20]. Moreover, we verify that the GMM approximate posterior is multi-modal, with a richer structure than the VCL posterior, and that GMM samples produce results equivalent to HMC samples on the current task (Figure A4). See Appendix B for details.
The 2-d benchmarks we consider in this section come from previous works and are domain-incremental CL problems. The domain-incremental setting is simpler [18] than the class-incremental setting and is thus a good starting point when attempting exact sequential Bayesian inference. Even so, we are unable to perform sequential Bayesian inference in BNNs, despite HMC being considered the gold standard of Bayesian deep learning. HMC plus density estimation with a GMM produces richer, more accurate, multi-modal posteriors, yet we are still unable to sequentially build up the multi-task posterior or obtain much better results than an isotropic Gaussian posterior such as single-head VCL. The weak point of the method is the density estimation: the GMM removes probability mass from regions of the BNN weight posterior that are important for the new task. This demonstrates just how difficult it is to model BNN weight posteriors. In the next section, we study a different analytical example of sequential Bayesian inference and examine how model misspecification and task data imbalances can cause forgetting in Bayesian CL.

4. Bayesian Continual Learning and Model Misspecification

We now consider a simple analytical example where we can perform the sequential Bayesian inference of Equation (3) in closed form using conjugacy. We consider a simple setting where data points arrive online, one after another.
Observations $y_1, y_2, \dots, y_t$ arrive online, and each observation is generated by a hidden variable $\theta_1, \theta_2, \dots, \theta_t \sim p$, where $p$ is a probability density function. At time $t$, we wish to infer the filtering distribution $p(\theta_t \mid y_1, y_2, \dots, y_t)$ [24] using sequential Bayesian inference, similarly to the Kalman filter [25]. The likelihood is $p(y_t \mid \theta_t) = \mathcal{N}(y_t; f(\cdot; \theta_t), \sigma^2)$, such that the mean is parameterized by a linear model $y_t = f(\cdot; \theta_t) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ and $f(\cdot; \theta_t) = \theta_t$. We consider a Gaussian prior over the mean parameters $\theta$ such that $p(\theta_0) = \mathcal{N}(\theta_0; 0, \sigma_0^2)$. Since the conjugate prior for the mean is also Gaussian, the prior and posterior are $\mathcal{N}(\theta_{t-1}; \hat{\theta}_{t-1}, \hat{\sigma}^2_{t-1})$ and $\mathcal{N}(\theta_t; \hat{\theta}_t, \hat{\sigma}^2_t)$. By using sequential Bayesian inference, we have closed-form update equations for our posterior parameters:
$$\hat{\theta}_t = \hat{\sigma}^2_t \left( \frac{y_t}{\sigma^2} + \frac{\hat{\theta}_{t-1}}{\hat{\sigma}^2_{t-1}} \right) = \hat{\sigma}^2_t \left( \frac{\sum_{i=1}^t y_i}{\sigma^2} + \frac{\hat{\theta}_0}{\hat{\sigma}^2_0} \right), \qquad \frac{1}{\hat{\sigma}^2_t} = \frac{1}{\sigma^2} + \frac{1}{\hat{\sigma}^2_{t-1}}. \tag{7}$$
From Equation (7), the posterior over the mean is Gaussian, and the posterior mean is a precision-weighted sum of the online observation and the online prior. The posterior mean is therefore a weighted sum of the data, and the final value of the posterior does not depend on the order of the data. We consider the situation where there is a task change (this non-stationarity is referred to as a changepoint in the time-series literature; in Figure 4A it occurs at $t = 110$). Concretely, for task 1 the data are generated according to $\mathcal{N}(1, \sigma^2)$, and we want the model to regress to this task. For task 2, the data are generated according to $\mathcal{N}(-1, \sigma^2)$, and we want our continual learning agent to regress well to this task too. As with all continual learning benchmarks, we require our model to retain performance on past tasks and to perform equally well on both tasks at the end of training at $t = 220$. From Figure 4A, we can see that the linear model regresses to the first dataset well, as data are seen online and the linear model is updated online. However, as data from the second task arrive, the linear model eventually tracks the global mean over both tasks (Equation (7)) rather than a mean for each task (Figure 4A). This is even more pronounced when there is a task dataset imbalance (Figure 4B).
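The following minimal sketch (ours, with assumed values $\sigma^2 = 0.25$, a unit-variance prior, and task means $\pm 1$) reproduces this behaviour using the closed-form updates of Equation (7): exact inference, yet the posterior mean drifts to the global mean after the changepoint.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.25                      # observation noise variance
theta_hat, sigma2_hat = 0.0, 1.0   # prior N(0, 1)

ys = np.concatenate([+1 + np.sqrt(sigma2) * rng.standard_normal(110),   # task 1
                     -1 + np.sqrt(sigma2) * rng.standard_normal(110)])  # task 2

for t, y in enumerate(ys, start=1):
    new_sigma2_hat = 1 / (1 / sigma2 + 1 / sigma2_hat)                  # Equation (7)
    theta_hat = new_sigma2_hat * (y / sigma2 + theta_hat / sigma2_hat)  # Equation (7)
    sigma2_hat = new_sigma2_hat
    if t in (110, 220):
        print(f"t = {t}: posterior mean = {theta_hat:.3f}")

# At t = 110 the mean is near +1; by t = 220 it has been pulled toward 0,
# the global mean over both tasks, so neither task is regressed to well.
```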
The model is clearly misspecified, since a linear model cannot regress to both of these tasks simultaneously. A more suitable model would be a mixture model, which is able to regress to both task datasets. Despite performing exact inference, a misspecified model can forget (Figure 4): performance on the first task is reduced while learning the second task, and this becomes even more pronounced with task dataset imbalances (Figure 4B). In the case of HMC, we verified beforehand that our Bayesian neural network had perfect performance on all tasks; in Figure 3, we had a well-specified model but struggled with exact sequential Bayesian inference (Equation (3)). When learning with linear models online, we perform exact inference but have a misspecified model. It is important to disentangle model misspecification from exact inference, and we highlight that model misspecification is a caveat that, as far as we are aware, has not been noted in the CL literature. Furthermore, we can only ensure that our models are well specified if we have access to data from all tasks a priori. Therefore, in the scenario of online continual learning [26,27,28], we cannot know whether our model will perform well on all past and future tasks without making assumptions about the task distributions.

5. Sequential Bayesian Inference and Imbalanced Task Data

Neural networks are flexible models with a broad hypothesis space and hence are suitably well-specified models for tackling continual learning problems [29]. However, as we saw in Section 3, we struggle to use posterior samples from HMC to perform sequential Bayesian inference.
We continue to use Bayesian filtering, now assuming a Bayesian NN whose posterior is Gaussian with full covariance. By modeling the entire covariance, we capture how each individual weight varies with respect to all others. We do this by interpreting online learning in Bayesian NNs as filtering [30]. Our treatment is similar to Aitchison [31], who derives an optimizer by leveraging Bayesian filtering. We consider inference in the graphical model depicted in Figure 5. The aim is to infer the optimal BNN weights $\theta^*_t$ at time $t$ given a single observation and the BNN weight prior; the previous BNN weights are used as a prior for inferring the posterior BNN parameters. We consider the online setting, where a single data point $(x_t, y_t)$ is observed at a time.
Instead of modeling the full covariance directly, we consider each parameter $\theta_i$ as a function of all the other parameters $\theta_{-it}$. We also assume that the values of the weights are close to those of the previous timestep [32]. To obtain the update equations for the BNN parameters given a new observation and prior, we make two simplifying assumptions, as follows.
Assumption A1.
For a Bayesian neural network with output $f(x_t; \theta)$ and likelihood $\mathcal{L}(x_t, y_t; \theta)$, the derivative evaluated at $\theta_t$ is $z_t = \partial \mathcal{L}(x_t, y_t; \theta)/\partial\theta \,|_{\theta=\theta_t}$ and the Hessian is $H$. We assume a quadratic loss for a data point $(x_t, y_t)$ of the form:
$$\mathcal{L}(x_t, y_t; \theta) = \mathcal{L}_t(\theta) = \tfrac{1}{2}\,\theta^\top H\, \theta + z_t^\top \theta, \tag{8}$$
the result of a second-order Taylor expansion. The Hessian is assumed to be constant with respect to $(x_t, y_t)$ (but not with respect to $\theta$).
To construct the dynamical equation for $\theta$, consider the gradient for the $i$-th weight while all other parameters are set to their current estimates; at the optimal value $\theta^*_{it}$:
$$\theta^*_{it} = -\frac{1}{H_{ii}}\, H_{i,-i}\, \theta_{-it}, \tag{9}$$
since $z_{it} = 0$ at a mode. The equation above shows us that the dynamics of the optimal weight $\theta^*_{it}$ depend on the current values of all the other parameters $\theta_{-it}$. The dynamics of $\theta_{-it}$ are a complex stochastic process dependent on many different variables, such as the dataset, model architecture, and learning rate schedule.
Assumption A2.
Since reasoning about the dynamics of $\theta_{-it}$ is intractable, we assume that at the next timestep the optimal weights are close to those of the previous timestep, following a discretized Ornstein–Uhlenbeck process for the weights $\theta_{it}$ with reversion speed $\vartheta \in \mathbb{R}^+$ and noise variance $\eta_i^2$:
$$p(\theta_{i,t+1} \mid \theta_{i,t}) = \mathcal{N}\!\left((1 - \vartheta)\,\theta_{it},\ \eta_i^2\right), \tag{10}$$
this implies that the dynamics for the optimal weight are defined by
$$p(\theta^*_{i,t+1} \mid \theta^*_{i,t}) = \mathcal{N}\!\left((1 - \vartheta)\,\theta^*_{it},\ \eta^2\right), \tag{11}$$
where $\eta^2 = \eta^2_{-i}\, H_{i,-i} H_{i,-i}^\top / H_{ii}^2$.
In simple terms, in Assumption 2 we assume a parsimonious model of the dynamics, in which the next value of $\theta_{i,t}$ is close to its previous value according to a Gaussian, similarly to Aitchison [31].
Lemma 1.
Under Assumptions 1 and 2, the dynamics and likelihood are Gaussian. Thus, we are able to infer the posterior distribution over the optimal weights using Bayesian updates; by linearizing the BNN, the update equations for the posterior mean and variance of the BNN for a new data point are:
$$\mu_{t,\mathrm{post}} = \sigma^2_{t,\mathrm{post}} \left( \frac{\mu_{t,\mathrm{prior}}}{\sigma^2_{t,\mathrm{prior}}(\eta^2)} + \frac{y_t}{\sigma^2}\, g(x_t) \right) \quad \text{and} \quad \frac{1}{\sigma^2_{t,\mathrm{post}}} = \frac{g(x_t)^2}{\sigma^2} + \frac{1}{\sigma^2_{t,\mathrm{prior}}(\eta^2)}, \tag{12}$$
where we drop the notation for the $i$-th parameter, the posterior is $\mathcal{N}(\theta^*_t; \mu_{t,\mathrm{post}}, \sigma^2_{t,\mathrm{post}})$, $g(x_t) = \partial f(x_t; \theta^*_{it})/\partial\theta^*_{it}$, and $\sigma^2_{t,\mathrm{prior}}$ is a function of $\eta^2$.
See Appendix E for the derivation of Lemma 1. From Equation (12), the posterior mean depends linearly on the prior term and a data-dependent term, and so behaves similarly to the example in Section 4. Under Assumptions 1 and 2, if there is a data imbalance between tasks, the data-dependent term in Equation (12) will dominate the prior term whenever the current task has more data.
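A small numerical sketch (ours; scalar weight, assumed values for the linearization $g(x_t)$, reversion speed $\vartheta$, and noise $\eta^2$) makes the claim concrete: iterating the Ornstein–Uhlenbeck prediction of Assumption 2 with the updates of Equation (12), a long second task overwhelms the prior carried over from a short first task.

```python
import numpy as np

def predict(mu_post, sig2_post, vartheta=0.05, eta2=0.01):
    """Prior at the next step from the OU dynamics (Assumption 2)."""
    return (1 - vartheta) * mu_post, eta2 + (1 - vartheta) ** 2 * sig2_post

def update(mu_prior, sig2_prior, y, g, sigma2=0.25):
    """Posterior at step t given one observation (Equation (12))."""
    sig2_post = 1 / (g**2 / sigma2 + 1 / sig2_prior)
    mu_post = sig2_post * (mu_prior / sig2_prior + (y / sigma2) * g)
    return mu_post, sig2_post

mu, sig2 = 0.0, 1.0
# task 1: 10 observations pulling the weight toward +1,
# task 2: 200 observations pulling it toward -1 (a task data imbalance)
for y_target, n in [(+1.0, 10), (-1.0, 200)]:
    for _ in range(n):
        mu, sig2 = update(*predict(mu, sig2), y=y_target, g=1.0)
    print(f"posterior mean after block: {mu:+.3f}")
```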
In Section 3, we showed that it is very difficult with current machine learning tools to perform sequential Bayesian inference for simple CL problems with small Bayesian NNs. By disentangling Bayesian inference from model misspecification, we showed that misspecified models can forget despite exact Bayesian inference. The only way to ensure that a model is well specified is to show that the multi-task posterior produces reasonable posterior predictive distributions $p(y \mid x, \mathcal{D}) = \int p(y \mid x, \mathcal{D}, \theta)\, p(\theta \mid \mathcal{D})\, d\theta$ for one's application. Additionally, in this section, we have shown that a task dataset size imbalance can cause forgetting under certain assumptions.

6. Related Work

There has been a recent resurgence in the field of CL [33] given the advent of deep learning. Methods that approximate sequential Bayesian inference (Equation (3)) have been seminal in CL's revival, initially using a diagonal Laplace approximation [3,7]. The diagonal Laplace approximation has been enhanced by modeling covariances between neural network weights in the same layer [8]. Instead of the Laplace approximation, one can use a variational approximation for sequential Bayesian inference, named VCL [9,34]. The variational Gaussian variance of each Bayesian NN parameter can be used to precondition per-weight learning rates and to create a per-task mask by pruning [10]. Richer priors have also been explored [11,35,36,37,38]; for example, one can learn a scaling of the Gaussian NN weight parameters for each task via a new variational adaptation parameter that strengthens the contribution of specific neurons [39]. The online Laplace approximation can be seen as a special case of VCL in which the KL-divergence term in Equation (6) is tempered and the temperature tends to 0 [12]. Gaussian processes have also been applied to CL problems, leveraging inducing points to retain previous task functions [40,41].
Bayesian methods that regularize weights have not matched the performance of experience-replay-based CL methods [42] in terms of accuracy on CL image classification benchmarks. Instead of regularizing high-dimensional weight spaces, regularizing task functions is a more direct approach to combating forgetting [43]. Bayesian NN weights can also be generated by a hypernetwork, in which case the hypernetwork needs only simple CL techniques to prevent forgetting [44]. In particular, one can leverage the duality between the Laplace approximation and Gaussian processes to develop a functional regularization approach to Bayesian CL [45], or use function-space variational inference [46,47].
In the next section, we propose a simple Bayesian continual learning baseline that models the data-generating CL process and performs exact sequential Bayesian inference in a low-dimensional embedding space. Previous work has explored modeling the data-generating process by inferring the joint distribution of inputs and targets $p(x, y)$ and learning a generative model to replay data to prevent forgetting [48], and by learning a generative model per class and evaluating the likelihood of the inputs given each class, $p(x \mid y)$ [49].

7. Prototypical Bayesian Continual Learning

We have shown that sequential Bayes over NN parameters is very difficult (Section 3) and is only appropriate when the multi-task posterior performs well on all tasks. We now show that a more fruitful approach is to model the full data-generating process of the CL problem, and we propose a simple and scalable approach for doing so. In particular, we represent classes by prototypes [50,51] to prevent catastrophic forgetting. We refer to this framework as Prototypical Bayesian Continual Learning, or ProtoCL for short. This approach can be viewed as a probabilistic variant of iCaRL [51], which represents each class by the mean of its embeddings and makes predictions by nearest neighbors. ProtoCL also bears similarities to the few-shot learning model Probabilistic Clustering for Online Classification [52], developed for few-shot image classification.
Model. ProtoCL models the generative CL process. We consider classes $j \in \{1, \dots, J\}$, generated from a categorical distribution with a Dirichlet prior:
$$y_{i,t} \sim \mathrm{Cat}(p_{1:J}), \qquad p_{1:J} \sim \mathrm{Dir}(\alpha_t). \tag{13}$$
Images are embedded into an embedding space by an encoder, $z = f(x; w)$ with parameters $w$. The per-class embeddings are Gaussian, and their mean has a prior which is also Gaussian:
$$z_{it} \mid y_{it} \sim \mathcal{N}(\bar{z}_{y_t}, \Sigma_\epsilon), \qquad \bar{z}_{y_t} \sim \mathcal{N}(\mu_{y_t}, \Lambda^{-1}_{y_t}). \tag{14}$$
See Figure 6 for an overview of the model. To alleviate forgetting in CL, ProtoCL uses a coreset of past task data so that past classes continue to be embedded as distinct prototypes. The posterior distribution over class probabilities $\{p_j\}_{j=1}^J$ and class embeddings $\{\bar{z}_{y_j}\}_{j=1}^J$ is denoted in shorthand as $p(\theta)$, with parameters $\eta_t = \{\alpha_t, \mu_{1:J,t}, \Lambda^{-1}_{1:J,t}\}$. ProtoCL models each class prototype but does not use task-specific NN parameters or modules such as the heads of multi-head VCL. Placing the probabilistic model over an embedding space allows us to use powerful embedding functions $f(\cdot; w)$ without parameterizing them probabilistically, making this approach more scalable than VCL, for instance.
Inference. Since the Dirichlet prior is conjugate with the categorical distribution, and likewise the Gaussian over prototypes with a Gaussian prior over the prototype mean, we can calculate posteriors in closed form and update the parameters $\eta_t$ as new data are observed, without gradient-based updates. We optimize the model by maximizing the posterior predictive distribution and use a softmax over class probabilities to make predictions. We perform gradient-based learning of the NN embedding function $f(\cdot; w)$ and update the parameters $\eta_t$ at each iteration of gradient descent as well; see Algorithm 1.
Algorithm 1 ProtoCL continual learning
1: Input: task datasets $\mathcal{T}_{1:T}$; initialize embedding function $f(\cdot; w)$ and coreset $\mathcal{M} = \emptyset$.
2: for $t = 1$ to $T$ do
3:    for each batch in $\mathcal{T}_t \cup \mathcal{M}$ do
4:       Optimize $f(\cdot; w)$ by maximizing the posterior predictive $p(z, y)$, Equation (18).
5:       Obtain the posterior over $\theta$ by updating $\eta$, Equations (15)–(17).
6:    end for
7:    Add a random subset of $\mathcal{T}_t$ to $\mathcal{M}$.
8: end for
Sequential updates. We can obtain our parameter updates for the Dirichlet posterior by Categorical-Dirichlet conjugacy:
$$\alpha_{t+1,j} = \alpha_{t,j} + \sum_{i=1}^{N_t} \mathbb{I}(y_{ti} = j), \tag{15}$$
where $N_t$ is the number of points seen during the update at timestep $t$. Moreover, due to Gaussian-Gaussian conjugacy, the posterior for the Gaussian prototypes is governed by:
$$\Lambda_{y_{t+1}} = \Lambda_{y_t} + N_y \Sigma^{-1}_\epsilon, \tag{16}$$
$$\Lambda_{y_{t+1}} \mu_{y_{t+1}} = N_y \Sigma^{-1}_\epsilon \bar{z}_{y_t} + \Lambda_{y_t} \mu_{y_t}, \qquad \forall\, y_t \in \mathcal{C}_t, \tag{17}$$
where $N_y$ is the number of samples of class $y$ and $\bar{z}_{y_t} = (1/N_y) \sum_{i=1}^{N_y} z_{y_i}$; see Appendix D.2 for the detailed derivation.
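A minimal sketch of these conjugate updates (ours, using diagonal covariances so that Equations (15)–(17) reduce to elementwise operations; shapes and names are illustrative):

```python
import numpy as np

def protocl_update(alpha, mu, lam, z_batch, y_batch, sigma_eps):
    """Closed-form updates of Equations (15)-(17) with diagonal precisions.
    alpha: (J,), mu: (J, D), lam: (J, D), sigma_eps: (D,) shared variance."""
    for j in range(len(alpha)):
        z_j = z_batch[y_batch == j]
        n_j = len(z_j)
        alpha[j] += n_j                                  # Equation (15)
        if n_j == 0:
            continue
        z_bar = z_j.mean(axis=0)                         # class sample mean
        lam_new = lam[j] + n_j / sigma_eps               # Equation (16)
        mu[j] = (n_j / sigma_eps * z_bar + lam[j] * mu[j]) / lam_new  # Equation (17)
        lam[j] = lam_new
    return alpha, mu, lam

# usage with embeddings z = f(x; w) from a hypothetical encoder:
rng = np.random.default_rng(0)
J, D = 10, 32
alpha, mu, lam = np.full(J, 0.7), np.zeros((J, D)), np.ones((J, D))
sigma_eps = np.full(D, 0.05)
z_batch, y_batch = rng.standard_normal((64, D)), rng.integers(0, J, 64)
alpha, mu, lam = protocl_update(alpha, mu, lam, z_batch, y_batch, sigma_eps)
```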
Objective. We optimize the posterior predictive distribution of the prototypes and classes:
$$p(z, y) = \int p(z, y \mid \theta_t; \eta_t)\, p(\theta_t; \eta_t)\, d\theta_t = p(y) \prod_{i=1}^{N_t} \mathcal{N}\!\left(z_{it} \mid y_{it};\ \mu_{y_t,t},\ \Sigma_\epsilon + \Lambda^{-1}_{y_t,t}\right), \tag{18}$$
where $p(y) = \alpha_y / \sum_{j=1}^J \alpha_j$; see Appendix D.3 for the detailed derivation. This objective can be optimized using gradient-based optimization to learn the prototype embedding function $z = f(x; w)$.
Predictions. To make a prediction for a test point $x^*$, the class with the maximum (log-)posterior predictive is chosen, where the posterior predictive is:
$$p(y^* = j \mid x^*, x_{1:t}, y_{1:t}) = p(y^* = j \mid z^*, \theta_t) = \frac{p(y^* = j, z^* \mid \theta_t)}{\sum_i p(y = i, z^* \mid \theta_t)}, \tag{19}$$
see Appendix D.4 for further details.
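Continuing the diagonal-covariance sketch above (illustrative, not the paper's code), Equation (19) amounts to scoring the test embedding under each class's marginal Gaussian, $\mathcal{N}(\mu_y, \Sigma_\epsilon + \Lambda^{-1}_y)$, weighted by the Dirichlet mean class probabilities:

```python
import numpy as np
from scipy.stats import multivariate_normal

def predict_class(z_star, alpha, mu, lam, sigma_eps):
    """Return argmax_j of Equation (19) for an embedded test point z_star."""
    log_py = np.log(alpha) - np.log(alpha.sum())      # p(y), Dirichlet mean
    log_pz = np.array([multivariate_normal.logpdf(    # N(mu_j, Sigma_eps + Lam_j^-1)
        z_star, mean=mu[j], cov=np.diag(sigma_eps + 1 / lam[j]))
        for j in range(len(alpha))])
    return int(np.argmax(log_py + log_pz))            # normalizer cancels in argmax
```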
Preventing forgetting. As we wish to retain the class prototypes, we make use of coresets: experience from previous tasks. At the end of learning a task $\mathcal{T}_t$, we retain a subset $\mathcal{M}_t \subset \mathcal{D}_t$ and augment each new task dataset with it, ensuring that the posterior parameters $\eta_t$ and prototypes retain previous task information.
Class-incremental learning. In this CL setting, the agent is not told which task it is being evaluated on via a task identifier $\tau$; therefore, we cannot use a task identifier to select a task-specific head for classifying a test point. Moreover, the agent must identify each class, $\{0, \dots, 9\}$ for Split-MNIST and Split-CIFAR10, and not just $\{0, 1\}$ as in domain-incremental learning. Class-incremental learning is a more general, realistic, and harder problem setting, and thus important to focus on, even though domain-incremental learning is also compatible with sequential Bayesian inference as described in Equation (3).
Implementation. For Split-MNIST and Split-FMNIST, the baselines and ProtoCL all use two-layer NNs with a hidden state size of 200. For Split-CIFAR10 and Split-CIFAR100, the baselines and ProtoCL use a four-layer convolutional neural network with two fully connected layers of size 512, similar to Pan et al. [22]. For ProtoCL and all baselines that rely on replay, we fix the size of the coreset to 200 points per task. For all ProtoCL models, we allow the prior Dirichlet parameters to be learned and set their initial value to 0.7, found by a random search over MNIST with ProtoCL. An important hyperparameter for ProtoCL is the embedding dimension of the Gaussian prototypes: for Split-MNIST and Split-FMNIST this was set to 128, while for the larger vision datasets it was set to 32, found by grid search.
Results. ProtoCL produces results on CL benchmarks on par with or better than S-FSVI [47], the state-of-the-art among Bayesian CL methods, while being far more efficient to train and not requiring expensive variational inference. ProtoCL flexibly scales to larger CL vision benchmarks, producing better results than S-FSVI. The code to reproduce all experiments can be found at https://github.com/skezle/bayes_cl_posterior. All our experiments are in the more realistic class-incremental learning setting, which is harder than the settings reported in most CL papers, so the results in Table 1 are lower for certain baselines than in their respective papers. We use 200 data points per task; see Figure A6 for a sensitivity analysis of ProtoCL's performance on Split-MNIST as a function of coreset size. In Table 2, we show how ProtoCL scales to larger and more challenging CL vision benchmarks, demonstrating competitive performance versus the baselines we consider while requiring a fraction of the computational cost in terms of training time, benchmarked on the same GPU.
The stated aim of ProtoCL is not to provide a novel state-of-the-art method for CL, but rather to propose a simple baseline that takes an alternative route to weight-space sequential Bayesian inference. We achieve strong results that mitigate forgetting by modeling the generative CL process and performing sequential Bayesian inference over a few parameters in the class-prototype embedding space. We argue that modeling the generative CL process is a fruitful direction for further research, rather than attempting sequential Bayesian inference over the weights of a BNN. ProtoCL scales to 10 tasks of Split-CIFAR100 which, to the best of our knowledge, is more tasks and classes than previous Bayesian continual learning methods have considered.

8. Discussion and Conclusions

In this paper, we revisited the use of sequential Bayesian inference for CL. In principle, we can use sequential Bayes to recursively build up the multi-task posterior (Equation (3)); however, previous methods have relied on approximate inference and seen little benefit over SGD. We tested the hypothesis that this poor performance is due to the approximate inference scheme by using HMC in two simple CL problems. HMC asymptotically samples from the true posterior, and we used a density estimator over HMC samples as a prior for the new task within the HMC sampling process. This density is multi-modal and accurate with respect to the current task, yet it is unable to improve over an approximate posterior. This demonstrates just how challenging it is to work with BNN weight posteriors; the source of error is the density-estimation step. We then looked at an analytical example of sequential Bayesian inference where we perform exact inference and yet, due to model misspecification, observe forgetting. The only way to ensure a well-specified model is to assess the multi-task performance over all tasks a priori, which may not be possible in online CL settings. We then analyzed Bayesian NNs and, under certain assumptions, showed that task data imbalances cause forgetting. Because of these results, we argue against performing weight-space sequential Bayesian inference and instead advocate modeling the generative CL process. We introduced a simple baseline called ProtoCL, which does not require complex variational optimization and achieves results competitive with the state-of-the-art in the realistic setting of class-incremental learning.
This conclusion should not be a surprise, since the latest Bayesian CL papers have all relied on multi-head architectures or inducing points/coresets to prevent forgetting, rather than on better weight-space inference schemes. Our observations are in line with recent theory [53], which states that optimal CL requires perfect memory. Although those results were shown for deterministic NNs, the same conclusions follow for BNNs with a single set of parameters. Future research directions include enabling coresets of task data to efficiently and accurately approximate the posterior of a BNN to remember previous tasks.

Author Contributions

S.K. led the research, including conceptualization, performing the experiments, and writing the paper. S.J.R. helped with conceptualization. A.C. helped with the development of the ideas and the implementation of HMC with a density estimator as a prior. T.G.J.R. ran the S-FSVI baselines for the class-incremental continual learning experiments. T.G.J.R., A.C. and S.J.R. helped to write the paper. Funding acquisition, S.Z. and S.J.R. All authors have read and agreed to the published version of the manuscript.

Funding

S.K. acknowledges funding from the Oxford-Man Institute of Quantitative Finance. T.G.J.R. acknowledges funding from the Rhodes Trust, Qualcomm, and the Engineering and Physical Sciences Research Council (EPSRC). This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-20-C-0002. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data are publicly available; code to reproduce all experiments can be found at https://github.com/skezle/bayes_cl_posterior.

Acknowledgments

We would like to thank Sebastian Farquhar, Laurence Aitchison, Jeremias Knoblauch, and Chris Holmes for discussions. We would also like to thank Philip Ball for his help with writing the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CL: Continual Learning
NN: Neural Network
BNN: Bayesian Neural Network
HMC: Hamiltonian Monte Carlo
VCL: Variational Continual Learning
SGD: Stochastic Gradient Descent
SH: Single Head
MH: Multi-Head
GMM: Gaussian Mixture Model
ProtoCL: Prototypical Bayesian Continual Learning

Appendix A. The Toy Gaussians Dataset

See Figure A1 for a visualization of the toy Gaussians dataset, which we use as a simple CL problem. This dataset is used to evaluate our method of propagating the true posterior by running HMC for posterior inference and then fitting a density estimator on the HMC samples to serve as a prior for the next task. We construct five 2-way classification tasks for CL; each 2-way task involves classifying adjacent circles and squares (Figure A1). With a two-layer network with 10 neurons, we obtain a test accuracy of 1.0 when learning all five tasks together in a multi-task fashion. Hence, according to Equation (3), a BNN of the same size should be able to learn all five binary classification tasks continually by sequentially building up the posterior.

Appendix B. HMC Implementation Details

We set the prior for $\mathcal{T}_1$ to $p_1(\theta) = \mathcal{N}(0, \tau^{-1} I)$ with $\tau = 10$. We burn in each HMC chain for 1000 steps, sample for 10,000 more steps, and run 20 different chains to obtain samples from our posterior, which we then pass to our density estimator. We use a step size of 0.001 and a trajectory length of $L = 20$; see Appendix C for further details of the density-estimation procedure. For the GMM, we select the number of components using a holdout set of HMC samples.
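For reference, a minimal sketch of a single HMC transition with the settings above (step size 0.001, $L = 20$ leapfrog steps); `log_post` and `grad_log_post` are assumed callables for the unnormalized log posterior, where on tasks after the first the prior contribution comes from the GMM of Appendix C.

```python
import numpy as np

def hmc_step(theta, log_post, grad_log_post, step_size=0.001, n_leapfrog=20,
             rng=np.random.default_rng(0)):
    p = rng.standard_normal(theta.shape)                  # resample momentum
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_post(theta_new)   # leapfrog half step
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new
        p_new += step_size * grad_log_post(theta_new)
    theta_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_post(theta_new)   # final half step
    # Metropolis accept/reject on the joint (position, momentum) energy
    log_accept = (log_post(theta_new) - 0.5 * p_new @ p_new
                  - log_post(theta) + 0.5 * p @ p)
    return theta_new if np.log(rng.uniform()) < log_accept else theta
```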

Appendix C. Density Estimation Diagnostics

We provide plots showing that the HMC chains have converged and are indeed sampling from the posterior in Figure A3 and Figure A5. We run 20 HMC sampling chains and randomly select one chain to plot for each of the 10 seeds. We run HMC over 10 seeds and aggregate the results (Figure 3 and Figure A1). The posteriors $p(\theta \mid \mathcal{D}_1)$ are approximated with a GMM and used as a prior for the second task, and so forth.
We provide empirical evidence that the density estimators have fit the HMC samples of the posterior in Figure A2 and Figure A4, where we show the number of components of the GMM density estimators used as priors for new tasks; all are multi-modal posteriors. We show that BNNs whose weights are sampled from our GMMs recover the accuracy of the converged HMC samples. The effective sample size (ESS) of the 20 chains measures how correlated the samples are (higher is better); the reported ESS values for our experiments are in line with previous work using HMC for BNN inference [20].
Figure A1. Continual learning binary classification accuracies from the toy Gaussian dataset similar to [44] using 10 random seeds. The pink solid line is a multi-task (MT) baseline test accuracy using SGD/HMC.
Figure A2. Diagnostics from using a GMM prior fit to samples of the posterior generated from HMC; all results are for 10 random seeds. Left: effective sample sizes (ESS) of the resulting HMC chains of the posterior, all greater than those reported in other works using HMC for BNNs [20]. Middle: the accuracy of the BNN when using samples from the GMM density estimator instead of the samples from HMC. Right: the optimal number of components of each GMM posterior, fitted with a holdout set of HMC samples by maximizing the likelihood.
Figure A3. Convergence plots from one randomly sampled HMC chain (of 20) for each task over 10 different runs (seeds) for 5 tasks from the toy Gaussian dataset similar to Henning et al. [44] (visualized in Figure A1). We use a GMM density estimator as the prior conditioned on the previous task data.
Figure A4. Diagnostics from using a GMM fit to HMC samples of the posterior; all results are for 10 random seeds on the toy dataset from Pan et al. [22] (visualized in Figure 3). Left: effective sample sizes (ESS) of the resulting HMC chains of the posterior, all greater than those reported in other works using HMC for BNNs [20]. Middle left: the current task accuracy from HMC sampling. Middle right: the accuracy of the BNN when using samples from the GMM density estimator instead of the converged HMC samples. Right: the optimal number of components of each GMM posterior, fitted with a holdout set of HMC samples by maximizing the likelihood.
Figure A5. Convergence plots from a randomly sampled HMC chain (of 20) for each task over 10 different seeds for 5 tasks from the toy dataset from [22] (see Figure 3 for a visualization of the data). We use a GMM density estimator as a prior.

Appendix D. Prototypical Bayesian Continual Learning

ProtoCL models the generative process of CL, where new tasks comprise new classes $j \in \{1, \dots, J\}$ out of a total of $J$, modeled using a categorical distribution with a Dirichlet prior:
$$y_{i,t} \sim \mathrm{Cat}(p_{1:J}), \qquad p_{1:J} \sim \mathrm{Dir}(\alpha_t). \tag{A1}$$
We learn a joint embedding space for our data with a NN, $z = f(x; w)$ with parameters $w$. The embedding space for each class is Gaussian, whose mean has a prior which is also Gaussian:
$$z_{it} \mid y_{it} \sim \mathcal{N}(\bar{z}_{y_t}, \Sigma_\epsilon), \qquad \bar{z}_{y_t} \sim \mathcal{N}(\mu_{y_t}, \Lambda^{-1}_{y_t}). \tag{A2}$$
By maintaining an embedding per class and using a memory of past data, we ensure that the embeddings do not drift. The posterior parameters are $\eta_t = \{\alpha_t, \mu_{1:J,t}, \Lambda^{-1}_{1:J,t}\}$.

Appendix D.1. Inference

As the Dirichlet prior is conjugate with the categorical distribution, and likewise the Gaussian likelihood with a Gaussian prior over the mean of the embedding, we can calculate posteriors in closed form and update our parameters as we see new data online, without gradient-based updates. We perform gradient-based learning of the NN embedding function $f(\cdot; w)$ with parameters $w$. We optimize the model by maximizing the log-posterior predictive of the data and use the softmax over class probabilities to make predictions. The posterior over class probabilities $\{p_j\}_{j=1}^J$ and class embeddings $\{\bar{z}_{y_j}\}_{j=1}^J$ is denoted as $p(\theta)$ for shorthand; its parameters $\eta_t = \{\alpha_t, \mu_{1:J,t}, \Lambda^{-1}_{1:J,t}\}$ are updated in closed form at each iteration of gradient descent.

Appendix D.2. Sequential Updates

We can obtain our posterior:
$$p(\theta_t \mid \mathcal{D}_t) \propto p(\mathcal{D}_t \mid \theta_t)\, p(\theta_t) \tag{A3}$$
$$= \prod_{i=1}^{N_t} p(z_{ti} \mid y_{ti}; \bar{z}_{y_t}, \Sigma_\epsilon)\, p(y_{ti} \mid p_{1:J})\, p(p_{1:J}; \alpha_t)\, p(\bar{z}_{y_t}; \mu_{y_t,t}, \Lambda^{-1}_{y_t,t}) \tag{A4}$$
$$= \mathcal{N}(\mu_{t+1}, \Sigma_{t+1})\, \mathrm{Dir}(\alpha_{t+1}), \tag{A5}$$
where $N_t$ is the number of data points seen during update $t$. Concentrating on the Categorical-Dirichlet conjugacy:
$$\mathrm{Dir}(\alpha_{t+1}) \propto p(p_{1:J}; \alpha_t) \prod_{i=1}^{N_t} p(y_{ti}; p_{1:J}) \tag{A6}$$
$$\propto \prod_{j=1}^{J} p_j^{\alpha_j - 1} \prod_{i=1}^{N_t} \prod_{j=1}^{J} p_j^{\mathbb{I}(y_{ti} = j)} \tag{A7}$$
$$= \prod_{j=1}^{J} p_j^{\alpha_j - 1 + \sum_{i=1}^{N_t} \mathbb{I}(y_{ti} = j)}. \tag{A8}$$
Thus:
$$\alpha_{t+1,j} = \alpha_{t,j} + \sum_{i=1}^{N_t} \mathbb{I}(y_{ti} = j). \tag{A9}$$
Moreover, due to Gaussian-Gaussian conjugacy, the posterior for the Gaussian prototype of the embedding for each class is:
$$\mathcal{N}(\mu_{t+1}, \Lambda_{t+1}) \propto \prod_{i=1}^{N_t} \mathcal{N}(z_{ti} \mid y_{ti}; \bar{z}_{y_t}, \Sigma_\epsilon)\, \mathcal{N}(\bar{z}_{y_t}; \mu_{y_t,t}, \Lambda^{-1}_{y_t}) \tag{A10}$$
$$= \prod_{y_t \in \{1, \dots, J\}} \mathcal{N}\!\left(\bar{z}_{y_t \mid y_t}; \bar{z}_{y_t}, \tfrac{1}{N_{y_t}} \Sigma_\epsilon\right) \mathcal{N}(\bar{z}_{y_t}; \mu_{y_t,t}, \Lambda^{-1}_{y_t}) \tag{A11}$$
$$= \prod_{y_t \in \{1, \dots, J\}} \mathcal{N}(\bar{z}_{y_t}; \mu_{t+1}, \Lambda^{-1}_{y_t+1}), \tag{A12}$$
where $N_{y_t}$ is the number of points of class $y_t$ from the set of all classes $\mathcal{C} = \{1, \dots, J\}$. The update equations for the mean and variance of the posterior are:
$$\Lambda_{y_{t+1}} = \Lambda_{y_t} + N_{y_t} \Sigma^{-1}_\epsilon, \qquad \forall\, y_t \in \mathcal{C}_t \tag{A13}$$
$$\Lambda_{y_{t+1}} \mu_{y_{t+1}} = N_{y_t} \Sigma^{-1}_\epsilon \bar{z}_{y_t} + \Lambda_{y_t} \mu_{y_t}, \qquad \forall\, y_t \in \mathcal{C}_t. \tag{A14}$$

Appendix D.3. ProtoCL Objective

The posterior predictive distribution we want to optimize is:
$$p(z, y) = \int p(z, y \mid \theta; \eta)\, p(\theta; \eta)\, d\theta, \tag{A15}$$
where $p(\theta)$ denotes the distributions over class probabilities $\{p_j\}_{j=1}^J$ and mean embeddings $\{\bar{z}_{y_j}\}_{j=1}^J$:
$$p(z, y) = \int \prod_{i=1}^{N_t} p(z_{it} \mid y_{it}; \bar{z}_{y_t}, \Sigma_\epsilon)\, p(y_{it} \mid p_{1:J})\, p(p_{1:J}; \alpha_t)\, p(\bar{z}_{y_t}; \mu_{y_t,t}, \Lambda^{-1}_{y_t,t})\, dp_{1:J}\, d\bar{z}_{y_t} \tag{A16}$$
$$= \int \prod_{i=1}^{N_t} p(z_{it} \mid y_{it}; \bar{z}_{y_t}, \Sigma_\epsilon)\, p(\bar{z}_{y_t}; \mu_{y_t,t}, \Lambda^{-1}_{y_t,t})\, d\bar{z}_{y_t} \underbrace{\int \prod_{i=1}^{N_t} p(y_{it} \mid p_{1:J})\, p(p_{1:J}; \alpha_t)\, dp_{1:J}}_{\prod_i p(y_i) \,=\, p(y)} \tag{A17}$$
$$= p(y) \int \prod_{i=1}^{N_t} Z_i^{-1}\, \mathcal{N}(\bar{z}_{y_{it}}; c, C)\, d\bar{z}_{y_t} \tag{A18}$$
$$= p(y) \prod_{i=1}^{N_t} \mathcal{N}\!\left(z_{it} \mid y_{it}; \mu_{y_t,t}, \Sigma_\epsilon + \Lambda^{-1}_{y_t,t}\right), \tag{A19}$$
where in Equation (A18) we use §8.1.8 in [54]. The term $p(y)$ is:
$$p(y) = \int p(y \mid p_{1:J})\, p(p_{1:J}; \alpha_t)\, dp_{1:J} \tag{A20}$$
$$= \int p_y\, \frac{\Gamma\!\left(\sum_{j=1}^J \alpha_j\right)}{\prod_{j=1}^J \Gamma(\alpha_j)} \prod_{j=1}^J p_j^{\alpha_j - 1}\, dp_{1:J} \tag{A21}$$
$$= \frac{\Gamma\!\left(\sum_{j=1}^J \alpha_j\right)}{\prod_{j=1}^J \Gamma(\alpha_j)} \int \prod_{j=1}^J p_j^{\mathbb{I}(y = j) + \alpha_j - 1}\, dp_{1:J} \tag{A22}$$
$$= \frac{\Gamma\!\left(\sum_{j=1}^J \alpha_j\right)}{\prod_{j=1}^J \Gamma(\alpha_j)} \cdot \frac{\prod_{j=1}^J \Gamma(\mathbb{I}(y = j) + \alpha_j)}{\Gamma\!\left(1 + \sum_{j=1}^J \alpha_j\right)} \tag{A23}$$
$$= \frac{\Gamma\!\left(\sum_{j=1}^J \alpha_j\right)}{\prod_{j=1}^J \Gamma(\alpha_j)} \cdot \frac{\prod_{j=1}^J \Gamma(\mathbb{I}(y = j) + \alpha_j)}{\sum_{j=1}^J \alpha_j\, \Gamma\!\left(\sum_{j=1}^J \alpha_j\right)} \tag{A24}$$
$$= \frac{\prod_{j=1, j \neq y}^J \Gamma(\alpha_j)}{\prod_{j=1}^J \Gamma(\alpha_j)} \cdot \frac{\Gamma(1 + \alpha_y)}{\sum_{j=1}^J \alpha_j} \tag{A25}$$
$$= \frac{\prod_{j=1, j \neq y}^J \Gamma(\alpha_j)}{\prod_{j=1}^J \Gamma(\alpha_j)} \cdot \frac{\alpha_y\, \Gamma(\alpha_y)}{\sum_{j=1}^J \alpha_j} \tag{A26}$$
$$= \frac{\alpha_y}{\sum_{j=1}^J \alpha_j}, \tag{A27}$$
where we use the identity $\Gamma(n + 1) = n\, \Gamma(n)$.
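This identity is easy to verify numerically; the following quick Monte Carlo check (ours) confirms Equation (A27):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.7, 2.0, 5.0])
p_samples = rng.dirichlet(alpha, size=200_000)  # p_{1:J} ~ Dir(alpha)
print(p_samples.mean(axis=0))                   # Monte Carlo estimate of E[p_y]
print(alpha / alpha.sum())                      # closed form: alpha_y / sum_j alpha_j
```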
Figure A6. Split-MNIST average test accuracy over five tasks for different memory sizes. On the x-axis, we show the size of the entire memory buffer shared by all five tasks. Accuracies are the mean and standard deviation over five runs with different random seeds.
Figure A7. The evolution of the Dirichlet parameters $\alpha_t$ for each class in Split-MNIST tasks for ProtoCL. All $\alpha_j$ are shown over 10 seeds with $\pm 1$ standard error. By the end of training, all classes are roughly equally likely, as we have trained on equal amounts of all classes.

Appendix D.4. Predictions

To make a prediction for a test point $x^*$:
$$p(y^* = j \mid x^*, x_{1:t}, y_{1:t}) = p(y^* = j \mid z^*, \theta_t) \tag{A28}$$
$$= \frac{p(z^* \mid y^* = j, \theta_t)\, p(y^* = j \mid \theta_t)}{\sum_i p(z^* \mid y^* = i, \theta_t)\, p(y^* = i \mid \theta_t)} \tag{A29}$$
$$= \frac{p(y^* = j, z^* \mid \theta_t)}{\sum_i p(y = i, z^* \mid \theta_t)}, \tag{A30}$$
where $\theta_t$ are sufficient statistics for $(x_{1:t}, y_{1:t})$.
Preventing forgetting. As we wish to retain the task-specific prototypes, at the end of learning a task $\mathcal{T}_t$ we take a small subset of the data as a memory to ensure that the posterior parameters and prototypes do not drift; see Algorithm 1.

Appendix D.5. Experimental Setup

The prototype variance $\Sigma_\epsilon$ is set to a diagonal matrix with the variance of each prototype dimension set to 0.05. The prototype prior precisions $\Lambda_{y_t}$ are also diagonal, initialized randomly and exponentiated to ensure a positive semi-definite covariance for the sequential updates. The parameters $\{\alpha_j\}_j$ are set to 0.78, found by random search over the validation set on MNIST. We also allow the $\alpha_j$ to be learned in the gradient update step, in addition to the sequential update step (lines 4 and 5 of Algorithm 1); see Figure A7 for the evolution of the $\alpha_j$ for all classes $j$ over the course of learning Split-MNIST.
For the Split-MNIST and Split-FMNIST benchmarks, we use an NN with two layers of size 200, trained for 50 epochs with the Adam optimizer. We perform a grid search over learning rates, dropout rates, and weight decay coefficients. The embedding dimension is set to 128. For the Split-CIFAR10 and Split-CIFAR100 benchmarks, we use the same network as Pan et al. [22], which consists of four convolutional layers and two linear layers. We train the networks for 80 epochs per task with the Adam optimizer and a learning rate of $1 \times 10^{-3}$. The embedding dimension is set to 32. All experiments are run on a single NVIDIA RTX 3090 GPU.

Appendix E. Sequential Bayesian Estimation as Bayesian Neural Network Optimization

We shall consider inference in the graphical model depicted in Figure A8. The aim is to infer the optimal BNN weights $\theta^*_t$ at time $t$ given observations and the previous BNN weights. We assume a Gaussian posterior over weights with full covariance; hence, we model interactions between all weights. We consider the online setting, where we see one data point $(x_t, y_t)$ at a time, and we make no assumption as to whether the data come from the same task or different tasks over the course of learning.
Figure A8. Graphical model under which we perform inference in Section 5. Grey nodes are observed and white nodes are latent variables.
We set up the problem of sequential Bayesian inference as a filtering problem, leveraging the work of Aitchison [31], which casts NN optimization as Bayesian sequential inference. We make the reasonable assumption that the distribution over weights is Gaussian with full covariance. Since reasoning about the full covariance matrix of a BNN is intractable, we instead consider the $i$-th parameter and reason about the dynamics of the optimal estimate $\theta^*_{it}$ as a function of all the other parameters $\theta_{-it}$; each weight is functionally dependent on all others. If we had access to the full covariance of the parameters, then we could reason about the unknown optimal weight $\theta^*_{it}$ given the values of all the other weights $\theta_{-it}$. However, since we do not have access to the full covariance, another approach is to reason about the dynamics of $\theta^*_{it}$ given the dynamics of $\theta_{-it}$, assume that the values of the weights are close to those of the previous timestep [32], and cast the problem as a dynamical system.
Consider a quadratic loss of the form:
$$\mathcal{L}(x_t, y_t; \theta) = \mathcal{L}_t(\theta) = \frac{1}{2}\,\theta^\top H\, \theta + z_t^\top \theta,$$
which we can arrive at by a simple Taylor expansion, where $H$ is the Hessian, assumed to be constant across data points but not across the parameters $\theta$. If the BNN output takes the form $f(x_t; \theta)$, then the derivative evaluated at $\theta_t$ is $z_t = \left.\partial \mathcal{L}(x_t, y_t; \theta)/\partial \theta\right|_{\theta = \theta_t}$. To construct the dynamical equations for our weights, consider the gradient for a single data point:
$$\frac{\partial \mathcal{L}_t(\theta)}{\partial \theta} = H \theta + z_t.$$
Considering the gradient for the $i$-th weight while all other parameters are set to their current estimates gives:
$$\frac{\partial \mathcal{L}(\theta_i, \theta_{-i})}{\partial \theta_i} = H_{ii}\, \theta_{it} + H_{i,-i}\, \theta_{-i,t} + z_{ti}.$$
Setting this gradient to zero recovers the optimal value for $\theta_{it}$, denoted $\theta^*_{it}$:
$$\theta^*_{it} = -\frac{1}{H_{ii}}\, H_{i,-i}\, \theta_{-i,t},$$
since $z_{ti} = 0$ at the modes. The equation above shows that the optimal weight $\theta^*_{it}$ depends on the current values of all the other parameters $\theta_{-i,t}$; that is, the dynamics of $\theta^*_{it}$ are governed by the dynamics of the weights $\theta_{-i,t}$. These dynamics are a complex stochastic process depending on many different variables. Since reasoning about them directly is intractable, we instead assume a discretized Ornstein–Uhlenbeck process for the weights $\theta_{-i,t}$, with reversion speed $\vartheta \in \mathbb{R}_+$ and noise variance $\eta^2_{-i}$:
$$p(\theta_{-i,t+1} \mid \theta_{-i,t}) = \mathcal{N}\big((1-\vartheta)\,\theta_{-i,t},\; \eta^2_{-i}\big).$$
This implies that the dynamics of the optimal weight are defined by
$$p(\theta^*_{i,t+1} \mid \theta^*_{i,t}) = \mathcal{N}\big((1-\vartheta)\,\theta^*_{it},\; \eta^2\big),$$
where $\eta^2 = \eta^2_{-i}\, H_{i,-i} H_{-i,i} / H_{ii}^2$. The same assumption is made in Aitchison [31]; it amounts to a parsimonious model of the dynamics. Together with our likelihood:
$$p(y_t \mid x_t; \theta^*_t) = \mathcal{N}\big(y_t;\; f(x_t; \theta^*_t),\; \sigma^2\big),$$
where $f(\cdot\,; \theta)$ is a neural network prediction with weights $\theta$, we can now define a linear dynamical system for the optimal weight $\theta^*_i$ by linearizing the Bayesian NN [32] and by using the transition dynamics in Equation (A36). Thus, we are able to infer the posterior distribution over the optimal weights using Kalman-filter-like updates [25]. As the dynamics and likelihood are Gaussian, the prior and posterior are also Gaussian; for ease of notation we drop the index $i$, writing $\theta^*_{it} = \theta^*_t$:
$$p(\theta^*_t \mid (x, y)_{t-1}, \ldots, (x, y)_1) = \mathcal{N}(\mu_{t,\mathrm{prior}},\; \sigma^2_{t,\mathrm{prior}})$$
$$p(\theta^*_t \mid (x, y)_t, \ldots, (x, y)_1) = \mathcal{N}(\mu_{t,\mathrm{post}},\; \sigma^2_{t,\mathrm{post}}).$$
By using the transition dynamics and the prior we can obtain closed-form updates:
$$p(\theta^*_t \mid (x, y)_{t-1}, \ldots, (x, y)_1) = \int p(\theta^*_t \mid \theta^*_{t-1})\, p(\theta^*_{t-1} \mid (x, y)_{t-1}, \ldots, (x, y)_1)\, d\theta^*_{t-1}$$
$$\mathcal{N}(\theta^*_t;\; \mu_{t,\mathrm{prior}},\; \sigma^2_{t,\mathrm{prior}}) = \int \mathcal{N}\big(\theta^*_t;\; (1-\vartheta)\,\theta^*_{t-1},\; \eta^2\big)\, \mathcal{N}\big(\theta^*_{t-1};\; \mu_{t-1,\mathrm{post}},\; \sigma^2_{t-1,\mathrm{post}}\big)\, d\theta^*_{t-1}.$$
Integrating out $\theta^*_{t-1}$, we obtain the prior parameters for the next timestep:
$$\mu_{t,\mathrm{prior}} = (1-\vartheta)\,\mu_{t-1,\mathrm{post}}$$
$$\sigma^2_{t,\mathrm{prior}} = \eta^2 + (1-\vartheta)^2\, \sigma^2_{t-1,\mathrm{post}}.$$
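The predict step implied by these two equations is a one-liner. The following is a minimal sketch under the stated Gaussian assumptions; `kalman_predict` is a hypothetical helper name, not code from our experiments.

```python
def kalman_predict(mu_post, var_post, vartheta, eta_sq):
    """Prior moments for theta*_t from the posterior at t-1 (Eqs. (A42)-(A43)).

    vartheta: OU reversion speed; eta_sq: transition noise variance eta^2.
    """
    mu_prior = (1.0 - vartheta) * mu_post
    var_prior = eta_sq + (1.0 - vartheta) ** 2 * var_post
    return mu_prior, var_prior
```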
The updates for the posterior parameters $\mu_{t,\mathrm{post}}$ and $\sigma^2_{t,\mathrm{post}}$ come from applying Bayes' theorem:
$$\log \mathcal{N}(\theta^*_t;\; \mu_{t,\mathrm{post}},\; \sigma^2_{t,\mathrm{post}}) \propto \log \mathcal{N}\big(y_t;\; f(x_t; \theta^*_t),\; \sigma^2\big) + \log \mathcal{N}(\theta^*_t;\; \mu_{t,\mathrm{prior}},\; \sigma^2_{t,\mathrm{prior}}).$$
By linearizing our Bayesian NN such that $f(x_t; \theta^*_t) \approx f(x_t; \theta_0) + \frac{\partial f(x_t; \theta^*_t)}{\partial \theta^*_t}(\theta^*_t - \theta_0)$ and substituting into Equation (A44), we obtain the update equation for the posterior mean of our BNN parameters:
$$-\frac{1}{2 \sigma^2_{t,\mathrm{post}}}\,(\theta^*_t - \mu_{t,\mathrm{post}})^2 = -\frac{1}{2 \sigma^2}\,\big(y - g(x_t)\,\theta^*_t\big)^2 - \frac{1}{2 \sigma^2_{t,\mathrm{prior}}}\,(\theta^*_t - \mu_{t,\mathrm{prior}})^2 + \mathrm{const}$$
$$\mu_{t,\mathrm{post}} = \sigma^2_{t,\mathrm{post}} \left( \frac{\mu_{t,\mathrm{prior}}}{\sigma^2_{t,\mathrm{prior}}} + \frac{y}{\sigma^2}\, g(x_t) \right),$$
where $g(x_t) = \partial f(x_t; \theta^*_t)/\partial \theta^*_t$, and the update equation for the variance of the Gaussian posterior is:
$$\frac{1}{\sigma^2_{t,\mathrm{post}}} = \frac{g(x_t)^2}{\sigma^2} + \frac{1}{\sigma^2_{t,\mathrm{prior}}}.$$
From our update equations, Equations (A46) and (A47), we see that the posterior mean depends linearly on the prior and on an additional data-dependent term. These equations mirror those of the filtering example in the section on model misspecification; therefore, under the assumptions above, a BNN should behave similarly. If there is a task data imbalance, the data term will dominate the prior term in Equation (A46), which can lead to forgetting of previous tasks.
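This imbalance effect is easy to reproduce with a minimal, self-contained sketch of the full predict–update recursion, assuming a fixed scalar linearization $g(x_t) = 1$; `kalman_update` and `run_filter` are hypothetical names and the numbers are purely illustrative.

```python
def kalman_update(y, g_x, mu_prior, var_prior, sigma_sq):
    """Posterior moments for theta*_t (Eqs. (A46)-(A47)); g_x = g(x_t)."""
    var_post = 1.0 / (g_x ** 2 / sigma_sq + 1.0 / var_prior)
    mu_post = var_post * (mu_prior / var_prior + (y / sigma_sq) * g_x)
    return mu_post, var_post

def run_filter(observations, mu=0.0, var=1.0, vartheta=0.01, eta_sq=0.01,
               g_x=1.0, sigma_sq=0.25):
    """Alternate the predict step (Eqs. (A42)-(A43)) and the update step."""
    for y in observations:
        mu = (1.0 - vartheta) * mu                   # predict: mean
        var = eta_sq + (1.0 - vartheta) ** 2 * var   # predict: variance
        mu, var = kalman_update(y, g_x, mu, var, sigma_sq)
    return mu, var

# Task imbalance: 20 task-1 observations (targets near +1) followed by
# 200 task-2 observations (targets near -1). The final mean sits near -1,
# i.e., the data term has overwhelmed the task-1 prior.
mu, var = run_filter([1.0] * 20)
mu, var = run_filter([-1.0] * 200, mu=mu, var=var)
print(mu, var)
```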

References

1. McCloskey, M.; Cohen, N.J. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation; Elsevier: Amsterdam, The Netherlands, 1989; Volume 24, pp. 109–165.
2. French, R.M. Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 1999, 3, 128–135.
3. Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A.A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. USA 2017, 114, 3521–3526.
4. MacKay, D.J. A practical Bayesian framework for backpropagation networks. Neural Comput. 1992, 4, 448–472.
5. Graves, A. Practical variational inference for neural networks. Adv. Neural Inf. Process. Syst. 2011, 24, 1–9.
6. Blundell, C.; Cornebise, J.; Kavukcuoglu, K.; Wierstra, D. Weight uncertainty in neural network. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 1613–1622.
7. Schwarz, J.; Czarnecki, W.; Luketina, J.; Grabska-Barwinska, A.; Teh, Y.W.; Pascanu, R.; Hadsell, R. Progress & compress: A scalable framework for continual learning. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 4528–4537.
8. Ritter, H.; Botev, A.; Barber, D. Online structured Laplace approximations for overcoming catastrophic forgetting. Adv. Neural Inf. Process. Syst. 2018, 31, 1–11.
9. Nguyen, C.V.; Li, Y.; Bui, T.D.; Turner, R.E. Variational Continual Learning. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
10. Ebrahimi, S.; Elhoseiny, M.; Darrell, T.; Rohrbach, M. Uncertainty-Guided Continual Learning in Bayesian Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 75–78.
11. Kessler, S.; Nguyen, V.; Zohren, S.; Roberts, S.J. Hierarchical Indian buffet neural networks for Bayesian continual learning. In Proceedings of the Uncertainty in Artificial Intelligence, PMLR, Online, 27–30 July 2021; pp. 749–759.
12. Loo, N.; Swaroop, S.; Turner, R.E. Generalized Variational Continual Learning. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
13. Neal, R.M. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo; Chapman and Hall: New York, NY, USA, 2011; pp. 113–162.
14. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
15. Zenke, F.; Poole, B.; Ganguli, S. Continual learning through synaptic intelligence. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 3987–3995.
16. Hsu, Y.C.; Liu, Y.C.; Ramasamy, A.; Kira, Z. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv 2018, arXiv:1810.12488.
17. Van de Ven, G.M.; Tolias, A.S. Three scenarios for continual learning. arXiv 2019, arXiv:1904.07734.
18. van de Ven, G.M.; Tuytelaars, T.; Tolias, A.S. Three types of incremental learning. Nat. Mach. Intell. 2022, 4, 1185–1197.
19. Chopin, N.; Papaspiliopoulos, O. An Introduction to Sequential Monte Carlo; Springer: Cham, Switzerland, 2020; Volume 4.
20. Cobb, A.D.; Jalaian, B. Scaling Hamiltonian Monte Carlo Inference for Bayesian Neural Networks with Symmetric Splitting. In Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, Online, 27–30 July 2021; pp. 675–685.
21. Izmailov, P.; Vikram, S.; Hoffman, M.D.; Wilson, A.G.G. What are Bayesian neural network posteriors really like? In Proceedings of the International Conference on Machine Learning, PMLR, Online, 18–24 July 2021; pp. 4629–4640.
22. Pan, P.; Swaroop, S.; Immer, A.; Eschenhagen, R.; Turner, R.; Khan, M.E.E. Continual deep learning by functional regularisation of memorable past. Adv. Neural Inf. Process. Syst. 2020, 33, 4453–4464.
23. Dinh, L.; Sohl-Dickstein, J.; Bengio, S. Density estimation using real NVP. arXiv 2016, arXiv:1605.08803.
24. Doucet, A.; De Freitas, N.; Gordon, N. An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice; Springer: Berlin/Heidelberg, Germany, 2001; pp. 3–14.
25. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
26. Aljundi, R.; Lin, M.; Goujaud, B.; Bengio, Y. Gradient based sample selection for online continual learning. Adv. Neural Inf. Process. Syst. 2019, 32, 1–10.
27. Aljundi, R.; Kelchtermans, K.; Tuytelaars, T. Task-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11254–11263.
28. De Lange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; Tuytelaars, T. A continual learning survey: Defying forgetting in classification tasks. arXiv 2019, arXiv:1909.08383.
29. Wilson, A.G.; Izmailov, P. Bayesian deep learning and a probabilistic perspective of generalization. Adv. Neural Inf. Process. Syst. 2020, 33, 4697–4708.
30. Ciftcioglu, Ö.; Türkcan, E. Adaptive Training of Feedforward Neural Networks by Kalman Filtering; Netherlands Energy Research Foundation ECN: Petten, The Netherlands, 1995.
31. Aitchison, L. Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods. Adv. Neural Inf. Process. Syst. 2020, 33, 18173–18182.
32. Jacot, A.; Gabriel, F.; Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. Adv. Neural Inf. Process. Syst. 2018, 31, 1–10.
33. Thrun, S.; Mitchell, T.M. Lifelong robot learning. Robot. Auton. Syst. 1995, 15, 25–46.
34. Zeno, C.; Golan, I.; Hoffer, E.; Soudry, D. Task agnostic continual learning using online variational Bayes. arXiv 2018, arXiv:1803.10123.
35. Ahn, H.; Cha, S.; Lee, D.; Moon, T. Uncertainty-based continual learning with adaptive regularization. Adv. Neural Inf. Process. Syst. 2019, 32, 1–11.
36. Farquhar, S.; Osborne, M.A.; Gal, Y. Radial Bayesian neural networks: Beyond discrete support in large-scale Bayesian deep learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Online, 26–28 August 2020; pp. 1352–1362.
37. Mehta, N.; Liang, K.; Verma, V.K.; Carin, L. Continual learning using a Bayesian nonparametric dictionary of weight factors. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Online, 13–15 April 2021; pp. 100–108.
38. Kumar, A.; Chatterjee, S.; Rai, P. Bayesian structural adaptation for continual learning. In Proceedings of the International Conference on Machine Learning, PMLR, Online, 18–24 July 2021; pp. 5850–5860.
39. Adel, T.; Zhao, H.; Turner, R.E. Continual learning with adaptive weights (CLAW). arXiv 2019, arXiv:1911.09514.
40. Titsias, M.K.; Schwarz, J.; Matthews, A.G.d.G.; Pascanu, R.; Teh, Y.W. Functional Regularisation for Continual Learning with Gaussian Processes. In Proceedings of the ICLR, Addis Ababa, Ethiopia, 26–30 April 2020.
41. Kapoor, S.; Karaletsos, T.; Bui, T.D. Variational auto-regressive Gaussian processes for continual learning. In Proceedings of the International Conference on Machine Learning, PMLR, Online, 18–24 July 2021; pp. 5290–5300.
42. Buzzega, P.; Boschini, M.; Porrello, A.; Abati, D.; Calderara, S. Dark experience for general continual learning: A strong, simple baseline. Adv. Neural Inf. Process. Syst. 2020, 33, 15920–15930.
43. Benjamin, A.; Rolnick, D.; Kording, K. Measuring and regularizing networks in function space. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
44. Henning, C.; Cervera, M.; D'Angelo, F.; Von Oswald, J.; Traber, R.; Ehret, B.; Kobayashi, S.; Grewe, B.F.; Sacramento, J. Posterior meta-replay for continual learning. Adv. Neural Inf. Process. Syst. 2021, 34, 14135–14149.
45. Swaroop, S.; Nguyen, C.V.; Bui, T.D.; Turner, R.E. Improving and understanding variational continual learning. arXiv 2019, arXiv:1905.02099.
46. Rudner, T.G.J.; Chen, Z.; Teh, Y.W.; Gal, Y. Tractable Function-Space Variational Inference in Bayesian Neural Networks. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 28 November–9 December 2022.
47. Rudner, T.G.J.; Smith, F.B.; Feng, Q.; Teh, Y.W.; Gal, Y. Continual Learning via Sequential Function-Space Variational Inference. In Proceedings of the 39th International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022.
48. Lavda, F.; Ramapuram, J.; Gregorova, M.; Kalousis, A. Continual classification learning using generative models. arXiv 2018, arXiv:1810.10612.
49. van de Ven, G.M.; Li, Z.; Tolias, A.S. Class-incremental learning with generative classifiers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3611–3620.
50. Snell, J.; Swersky, K.; Zemel, R. Prototypical networks for few-shot learning. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11.
51. Rebuffi, S.A.; Kolesnikov, A.; Sperl, G.; Lampert, C.H. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2001–2010.
52. Harrison, J.; Sharma, A.; Finn, C.; Pavone, M. Continuous meta-learning without tasks. Adv. Neural Inf. Process. Syst. 2020, 33, 17571–17581.
53. Knoblauch, J.; Husain, H.; Diethe, T. Optimal continual learning has perfect memory and is NP-hard. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 13–18 July 2020; pp. 5327–5337.
54. Petersen, K.B.; Pedersen, M.S. The Matrix Cookbook; Technical University of Denmark: Lyngby, Denmark, 2008; Volume 7, p. 510.
Figure 1. Accuracy on Split-MNIST for various CL methods with a two-layer BNN; all accuracies are averages and standard deviations over 10 runs with different random seeds. We compare an NN trained with SGD (single-headed) with VCL. We consider single-headed (SH) and multi-head (MH) VCL variants.
Figure 2. Illustration of the posterior propagation process; priors in blue are in the top row and posterior samples on the bottom row. This is a two-step process where we first perform HMC with an isotropic Gaussian prior for $\mathcal{T}_1$, then perform density estimation on the HMC samples from the posterior to obtain $\hat{p}_1(\theta \mid \mathcal{D}_1)$. This posterior can then be used as a prior for the new task $\mathcal{T}_2$ and so on.
Figure 3. On the left is the toy dataset of 5 distinct 2-way classification tasks that involve classifying circles and squares [22]. On the right, continual learning binary classification test accuracies over 10 seeds. The pink solid line is a multi-task (MT) baseline accuracy using SGD/HMC with the same model as for the CL experiments.
Figure 4. Posterior estimate of the filtering distribution, Equation (7), for a linear model where the data is divided into two tasks. We study two scenarios, each with two tasks separated by a changepoint: one with balanced and one with imbalanced task datasets. In scenario A, we perform 110 sequential inference updates to the linear model with data from task 1, then another 110 sequential updates with data from task 2. In scenario B, the task datasets are imbalanced: we perform 20 sequential Bayesian updates with data from task 1 and then 200 updates with data from task 2.
Figure 5. Graphical model for filtering. Grey nodes are observed and white nodes are latent variables.
Figure 6. Overview of ProtoCL.
Table 1. Mean accuracies across all tasks on CL vision benchmarks for class-incremental learning [17]. All results are averages and standard errors over 10 seeds. * Uses the predictive entropy to decide which head to use for class-incremental learning.

Method          | Coreset | Split-MNIST    | Split-FMNIST
VCL [9]         | ✗       | 33.01 ± 0.08   | 32.77 ± 1.25
  + coreset     | ✓       | 52.98 ± 18.56  | 61.12 ± 16.96
HIBNN * [11]    | ✗       | 85.50 ± 3.20   | 43.70 ± 20.21
FROMP [22]      | ✓       | 84.40 ± 0.00   | 68.54 ± 0.00
S-FSVI [47]     | ✓       | 92.94 ± 0.17   | 80.55 ± 0.41
ProtoCL (ours)  | ✓       | 93.73 ± 1.05   | 82.73 ± 1.70
Table 2. Mean accuracies across all tasks on CL vision benchmarks for class-incremental learning [17]. All results are averages and standard errors over 10 seeds. Training times were benchmarked on an NVIDIA RTX 3090 GPU.

Method          | Training Time (s) (↓) | Split CIFAR-10 Acc. (↑)
FROMP [22]      | 1425 ± 28             | 48.92 ± 10.86
S-FSVI [47]     | 44,434 ± 91           | 50.85 ± 3.87
ProtoCL (ours)  | 384 ± 6               | 55.81 ± 2.10

Method          | Training Time (s) (↓) | Split CIFAR-100 Acc. (↑)
S-FSVI [47]     | 37,355 ± 1135         | 20.04 ± 2.37
ProtoCL (ours)  | 1425 ± 28             | 23.96 ± 1.34