Article

Task-Adaptive Multi-Source Representations for Few-Shot Image Recognition †

Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Authors to whom correspondence should be addressed.
† This article is a revised and expanded version of a paper entitled “Learning and Adapting Diverse Representations for Cross-domain Few-shot Learning” by Ge Liu et al., presented at the IEEE International Conference on Data Mining Workshops, Shanghai, China, 2023.
Information 2024, 15(6), 293; https://doi.org/10.3390/info15060293
Submission received: 11 March 2024 / Revised: 11 May 2024 / Accepted: 16 May 2024 / Published: 21 May 2024
(This article belongs to the Special Issue Few-Shot Learning for Knowledge Engineering and Intellectual System)

Abstract:
Conventional few-shot learning (FSL) mainly focuses on knowledge transfer from a single source dataset to a recognition scenario that offers only a few training samples but still resembles the source domain. In this paper, we consider a more practical FSL setting where multiple semantically different datasets are available to address a wide range of FSL tasks, especially for recognition scenarios beyond natural images, such as remote sensing and medical imagery. This setting can be referred to as multi-source cross-domain FSL. To tackle the problem, we propose a two-stage learning scheme, termed learning and adapting multi-source representations (LAMR). In the first stage, we propose a multi-head network to obtain efficient multi-domain representations, where all source domains share the same backbone except for the last parallel projection layers used for domain specialization. We train the representations in a multi-task setting, where each in-domain classification task is handled by a cosine classifier. In the second stage, considering that instance discrimination and class discrimination are crucial for robust recognition, we propose two contrastive objectives for adapting the pre-trained representations to be task-specialized on the few-shot data. Careful ablation studies verify that LAMR significantly improves representation transferability, showing consistent performance boosts. We also extend LAMR to single-source FSL by introducing a dataset-splitting strategy that equally splits one source dataset into sub-domains. The empirical results show that LAMR achieves state-of-the-art performance on the BSCD-FSL benchmark and competitive performance on mini-ImageNet, highlighting its versatility and effectiveness for FSL on both natural images and specialized imagery.

1. Introduction

Recent years have witnessed significant progress in computer vision applications thanks to the development of deep learning [1,2] with large-scale annotated data [3]. However, when the deployment domain is specialized, training data may be limited or the labeling cost particularly high, as annotation must be done by an expert, for example, a doctor in the medical area. To relax the demanding data requirements of deep learning, the emerging topic of few-shot learning (FSL) [4] has received considerable attention and developed into a fundamental research problem over the past few years. With only a few annotated samples per class available, few-shot image recognition aims to efficiently build a classification model for recognizing new classes in an unseen domain.
Directly training a deep recognition model [2] from scratch with scarce data would intuitively lead to over-fitting collapse [5]. Hence, recent few-shot image recognition is typically addressed in an inductive transfer learning paradigm [6], which aims to improve learning with the limited few-shot data (typically denoted as a support set $S$) using the knowledge in a base set $\mathcal{D}_b$ containing abundant samples. Conventionally, the learning process is divided into two stages: (1) learning a transferable model from the base dataset $\mathcal{D}_b$, and (2) adapting the pre-trained model to the unseen target few-shot task with $S$.
Prevailing approaches [7,8,9,10] to learning in the first stage are typically based on meta-learning [11]. Meta-learning trains a meta-model by maximizing generalization accuracy across a variety of few-shot tasks drawn from the base set, with the goal of transferring meta-knowledge to improve generalization in the unseen domain. The meta-model has been shown to hold the promise of fast adaptation [8] and of avoiding over-fitting [7]. Although meta-learning provides an elegant solution to FSL, recent studies also indicate that sophisticated meta-learning algorithms may be unnecessary [12,13,14,15,16]. Instead, simple representation learning based on a supervised cross-entropy loss over the entire dataset can transfer well and achieve even better performance. These findings strongly suggest that few-shot transfer relies mainly on feature reuse [17] rather than on fast adaptation. Other techniques, including self-supervised learning [18] and knowledge distillation [14], have also effectively improved feature transferability. Besides directly leveraging the frozen representation for target FSL, some efforts have explored improvements based on task-specific adaptation [19], indicating that proper adaptation may still be necessary [20], especially for cross-domain FSL [21].
However, most existing FSL protocols and methods limit their source domain to a single dataset for pre-training, even though many datasets from semantically different domains are readily available. Besides, a recent benchmark called Meta-Dataset [22] suggests using multiple source datasets to deal with FSL, but its target datasets for evaluation are still just natural images. In practice, FSL scenarios are more likely to come from specialized recognition domains, such as remote sensing [23] and medical imagery [24,25,26]. In this paper, we aim to address this practical few-shot setting, referred to as multi-source cross-domain few-shot learning. To promote FSL with knowledge from multiple source domains, some methods [27,28,29] are devoted to learning universal representations but still lack effective adaptation. Unlike most prior methods that focus on either representation learning or adaptation on the few-shot data, we address the problem by exploring both aspects: how to effectively learn diverse generalizable features from multiple source domains, and how to use few-shot data for efficient adaptation (or deployment) across a wide range of cross-domain FSL scenarios. Therefore, we propose a novel two-stage learning scheme (as illustrated in Figure 1), namely learning and adapting multi-source representations (LAMR).
Concretely, in the first stage, we propose a parameter-efficient multi-head framework for training multi-source representations. Instead of learning a single domain-agnostic embedding, we aim to represent diverse features by constructing separate sub-spaces, each of which corresponds to a specific domain. This is achieved by optimizing multiple in-domain classification tasks on the multi-head representation spaces with a shared backbone. In this way, our model can preserve information with regard to each domain in a compact network. The representations can then be universal enough to further support generalization to vastly different FSL tasks.
The pre-trained representations are expected to generalize well to unseen tasks that are similar to the source domains. However, this remains a challenge when a large domain shift exists between the source and target data, where pre-trained features are less transferable and proper task-specific adaptation on the limited target data becomes necessary. Besides, we consider instance discrimination and class discrimination to be two crucial capabilities of a robust recognition model. To impose these two objectives, we accordingly propose two feature contrastive losses for improving model discrimination towards unseen classes on the few-shot training data. This enables effective task-specific adaptation, as the adapted features become more relevant to the target classes. Empirical results show that the adaptation can yield significant performance boosts, especially when the recognition scenario suffers extreme domain shifts, such as in the remote sensing and medical domains.
In summary, our contributions are as follows:
  • We develop a novel two-stage learning scheme, namely learning and adapting multi-source representations (LAMR), for addressing a wide range of cross-domain few-shot learning tasks, especially recognition scenarios beyond natural images, including remote sensing and medical imagery.
  • To achieve multi-source representations, we propose a parameter-efficient multi-head framework, which can further support simple but effective transfer to different downstream FSL tasks.
  • To achieve task-specific transfer, we propose a few-shot adaptation method for improving model discrimination towards unseen classes by imposing instance discrimination and class discrimination at the feature level.
  • LAMR can achieve state-of-the-art results on cross-domain FSL benchmarks in the multi-source setting.
Compared to the preliminary conference version [30], this work additionally presents the following new content:
  • We extend LAMR to single-source FSL by introducing dataset-splitting strategies that equally split one source dataset into sub-domains. The empirical results show that applying simple “random splitting” can improve conventional cosine-similarity-based classifiers in FSL with a fixed single-source data budget. LAMR also achieves superior performance on the (single-source) BSCD-FSL benchmark and competitive results on mini-ImageNet.
  • We conduct more careful ablation studies, which verify that the performance gains come from not only the good transferability of the proposed multi-source representations but also each component in the objectives of few-shot adaptation.
  • Discussions and comparisons of more related works, especially for few-shot learning with multi-source domains, are included.
  • More feature visualizations and analyses are included. Limitations and future directions are discussed.
The rest of this paper is organized as follows. In Section 2, we briefly review the related works. In Section 3, we formulate the task and present baseline methods. In Section 4, we elaborate on our proposed method. In Section 5 and Section 6, we describe the benchmark datasets, implementation details, experiment results, and ablation studies. In Section 7, we draw conclusions, discuss limitations and provide some promising future directions.

2. Related Works

2.1. Few-Shot Learning

Meta-learning [11] is a pioneering approach to addressing few-shot learning [7,8,9,10]. The corresponding training regime, namely episodic training, focuses on mimicking the target few-shot task style, i.e., the “N-way K-shot task”. Concretely, this approach trains a meta-model on various “N-way K-shot” tasks (or episodes) sampled from the source dataset, with the goal of acquiring meta-knowledge that can generalize well in the unseen domain. For instance, MAML [7] optimizes a model-agnostic meta-initialization that enables fast adaptation to a novel FSL task within only a few fine-tuning steps. Meta-LSTM [8] suggests using an LSTM module as the meta-learner to provide a task-specific update rule for the optimization. Prototypical Networks [9] and Matching Networks [10] seek to learn a good metric space capable of directly separating new, unknown classes. These methods are proven to avoid over-fitting and hold the promise of fast adaptation, as the meta-model is assumed to be an optimal initialization for different unseen few-shot tasks. Although meta-learning is an elegant solution to the problem of few-shot learning, recent research indicates that such complex algorithms may not be necessary. Instead, simple representation learning [12,13,14,15,16,21,31] based on a supervised cross-entropy loss over the entire dataset can transfer well and achieve competitive or even better performance. These findings highlight that few-shot transfer relies mainly on source-feature reuse [17] rather than fast adaptation. Furthermore, to make feature representations more generalizable and transferable, other techniques, including self-supervised learning [18], knowledge distillation [14,32], saliency-guided attention [33], and contrastive learning [34], have also been proven to effectively improve performance and enhance model discrimination on novel categories.
The methods discussed above assume only one source dataset is used for pre-training, but many datasets collected from semantically different domains are available in the machine learning community. To promote few-shot learning with knowledge from multiple domains, learning universal representations [21,27,28,29] and feature selection [21,27,28] have been explored in the literature. Concretely, the simplest way [21] to achieve such representations is to train a separate feature extractor for each available domain. SUR [27] and URT [28] obtain multiple representations in a parameter-efficient backbone [35] where domain-specific FiLM [36] layers are inserted after each batch normalization layer. For addressing a given few-shot task, Guo et al. [21] propose a greedy selection algorithm that iteratively searches for the best subset of features over all layers of all pre-trained models, and the selected features are concatenated for training a linear classifier. SUR [27] proposes a feature selection procedure that linearly combines the domain-specific representations with different weights. URT [28] further trains a universal representation transformer layer to weigh the features. Different from [21,27,28], which use multiple representations, URL [29] proposes to distill knowledge from the separate multi-domain networks into a single feature extractor.
Aside from directly using representations trained from one domain or multiple domains, some work has also looked into how to make effective few-shot task adaptations with limited data [19,20,37,38,39,40]. Concretely, TADAM [20] applies a task embedding network block, which takes the mean vector of few-shot features as input and produces element-wise scaling and shift vectors to adjust each batch normalization layer, thus making the feature extractor task-specific. FN [38] directly fine-tunes the scaling and shifting parameters of batch normalization on few-shot data to adapt the feature extractor. ConFeSS [39] proposes to learn a task-specific feature masking module that can produce refined features for fine-tuning a target classifier and the feature extractor. Associative alignment [19] first selects a set of task-relevant categories from source data and conducts feature alignment between the selected source data and target data for network adaptation. PDA [41] proposes a proxy-based domain adaptation scheme to optimize the pre-trained representation and a novel few-shot classifier simultaneously. Instead of adjusting the pre-trained network, some methods [37,40] choose to incrementally learn some parametric modules for adaptation to novel tasks and leave the pre-trained parameters frozen. For example, Implanting [37] adds and learns new convolutional filters within the existing CNN layers. TSA [40] attaches residual adapters to each module of a pre-trained model and optimizes them from scratch on the few-shot data. Unlike these methods that perform adaptation by leveraging auxiliary parametric modules [20,37,39,40] or additional data [19], our method provides a more effective adaptation scheme that directly optimizes the pre-trained representations with the limited target data.
Except for the widely investigated few-shot learning for regular image recognition, recent studies have also focused on other tasks, such as scene recognition [42], multi-label classification [43] and multi-modal learning [44]. In this paper, we aim at a practical FSL setting, namely multi-source cross-domain few-shot learning. Different from most existing methods that focus on either representation learning or adaptation on the few-shot data, we address the problem by focusing on both aspects: how to design a good multi-source representation network and how to adapt the representations to address cross-domain FSL in a wide range of scenarios.

2.2. Domain Adaptation

Domain adaptation (DA) typically aims at transferring knowledge from a data-rich source domain to an unlabeled target domain. Most existing DA approaches intend to learn invariant feature representations across two domains by distribution alignment [45,46] or adversarial learning [47]. Beyond single-source DA, our method is more relevant to multi-source DA [48,49], which also intends to leverage knowledge from multiple source domains. To learn domain-invariant feature representations, these methods typically align domains pairwise based on a domain-shared feature extractor, a learning framework similar to ours. Concretely, Xu et al. [49] leverage multiple domain discriminators to reduce domain shift by adversarial learning, while [48] matches moments of feature distributions across all pairs of source and target domains. However, the task addressed in this paper intrinsically differs from both single-source and multi-source DA, where the source and target domains share the same classes (or label space). In contrast, we tackle the problem of few-shot learning, where the classes in the source and target domains do not overlap.

2.3. Contrastive Learning

Our few-shot adaptation strategy is highly inspired by self-supervised contrastive learning, which imposes instance discrimination [50,51,52,53], and by supervised contrastive learning [54]. All these methods aim to learn a good universal representation from a large-scale dataset, thus boosting transferability on a variety of computer vision tasks. The basic idea is to contrast positive and negative pairs. For instance, NCE [50] proposes a non-parametric softmax classifier made up of instance features to achieve instance discrimination. MoCo [52] and SimCLR [53] construct different views of the same instance via a variety of data augmentations. SimCLR [53] learns the representation by minimizing the distance between the features of these views while maximizing the distance to the features of other instances. MoCo [52] minimizes a contrastive loss based on a dynamic feature dictionary and a momentum encoder. Supervised contrastive learning [54] minimizes the distance between features of same-category samples and maximizes the distance between features of different categories. Unlike these methods, which use contrastive learning for large-scale pre-training, we propose two contrastive objectives imposing instance discrimination and class discrimination on the few-shot data, adapting the pre-trained feature representations to be task-specific.

2.4. Multi-Task Learning

Multi-task learning aims to learn multiple related tasks simultaneously [35,55,56]. The main idea is to build a compact network that can represent all domains by sharing most model parameters, except for minimal parameters for task specialization. Unlike multi-task learning, which aims to achieve optimal performance across multiple source tasks, transfer learning focuses on addressing a specific target task with insufficient training data using knowledge from one or more source domains. In this paper, we pursue efficient multi-source representation learning in a multi-task setting in order to further support a broad range of downstream few-shot learning tasks.

3. Preliminary

3.1. Task Formulation

Few-shot image recognition aims to generalize basic knowledge to categorize novel classes in previously unseen domains. It can be defined from an inductive transfer learning perspective with two learning routines, i.e., the meta-training and meta-testing stages. In the conventional few-shot setting, there is only one source dataset for training, and the deployed recognition scenario is similar to the source domain; this is also regarded as in-domain few-shot learning. In contrast to the usual single-source setting, we study how to exploit richer information from multiple source datasets sampled from different domains to support broad FSL tasks, especially visual recognition tasks beyond natural images, such as remote sensing and medical images.
Formally, let us assume we have $B$ source datasets $\mathcal{D} = \{\mathcal{D}_b\}_{b=1}^{B}$ in the meta-training stage, where each $\mathcal{D}_b = \{(x, y)\} \subset \mathcal{X}_b \times \mathcal{Y}_b$ corresponds to a specific domain, and $(x, y)$ denotes an image sample with its associated class label. Based on deep neural networks, few-shot learning algorithms aim to extract general and transferable knowledge from the large-scale data $\mathcal{D}$. In the meta-testing stage, the pre-trained model is adapted to a novel few-shot learning task, which provides a small support set $S$ sampled from a target domain $\mathcal{D}_n = \{(x, y)\} \subset \mathcal{X}_n \times \mathcal{Y}_n$. The configuration in which the support set $S$ contains $N$ different classes with $K$ samples each ($S = \{S_i\}_{i=1}^{N}$, $|S_i| = K$) is referred to as the “N-way K-shot” recognition task. In particular, the source datasets $\mathcal{D}_b$ and the target dataset $\mathcal{D}_n$ share no common class. After the pre-trained model has been adapted on the support set, a query set $Q$ sampled from the unseen classes is used to evaluate generalization performance.

3.2. Transfer Learning Baseline

We revisit a conventional transfer learning baseline, where a feature extractor $F$ is first pre-trained on the source data and then frozen when adapting to the few-shot task. For the multi-source setting, we can simply merge the multiple source datasets into a joint dataset
$$\mathcal{D}_J = \mathcal{D}_1 \cup \mathcal{D}_2 \cup \cdots \cup \mathcal{D}_B \subset \mathcal{X}_J \times \mathcal{Y}_J.$$
Therefore, representation learning can be conducted as one joint classification task. Associated with a classification layer $C_{base}$, the feature extractor can be trained in an end-to-end manner to recognize all joint classes by minimizing the expected empirical risk,
$$F = \arg\min_{F, C_{base}} \mathbb{E}_{(x, y_J) \in \mathcal{D}_J}\, \mathcal{L}\big(C_{base} \circ F(x),\, y_J\big),$$
where $\mathcal{L}$ denotes a loss function (typically cross-entropy) that measures the agreement between the true class label and the corresponding prediction from the classifier. Note that $y_J$ denotes the class label in the joint label space $\mathcal{Y}_J$ rather than in the original source domain.

3.2.1. FT Baseline

Given a target few-shot task presented by $S$, a simple fine-tuning (FT) baseline freezes the pre-trained feature extractor and retrains a new classifier head $C_{novel}$ on the features of the support set, i.e.,
$$C_{novel} = \arg\min_{C_{novel}} \mathbb{E}_{(x, y) \in S}\, \mathcal{L}\big(C_{novel} \circ F(x),\, y\big).$$
The new recognition model composed of $\{F, C_{novel}\}$ can then be used for the target task. This baseline has recently been proven effective when the pre-trained features are transferable and can be reused in the target domain.
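As a concrete illustration, the following minimal PyTorch sketch implements the FT baseline under stated assumptions: `F` is a pre-trained feature extractor with output dimension `d`, and `support_x`/`support_y` are the support images and labels; all names and hyperparameters are illustrative rather than the exact implementation.

```python
import torch
import torch.nn as nn

def ft_baseline(F, support_x, support_y, n_way, d, epochs=100, lr=0.01):
    # Freeze the pre-trained extractor and cache the support features.
    F.eval()
    with torch.no_grad():
        feats = F(support_x)                 # (N*K, d) support features
    clf = nn.Linear(d, n_way)                # new classifier head C_novel
    opt = torch.optim.SGD(clf.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = ce(clf(feats), support_y)     # cross-entropy on the support set
        loss.backward()
        opt.step()
    return clf                               # {F, clf} forms the target model
```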

3.2.2. NNC Baseline

Another natural approach to directly performing unseen class categorization is to resort to the k-nearest neighbor (KNN) rule on the pre-trained representation. In particular, this method expects that the deeply learned features are discriminative and general enough to separate new classes, so that a query (test) sample can be well classified by its nearest neighbors. Besides, a more generally used non-parametric method for multi-shot FSL is the nearest neighbor classifier (NNC) baseline [12,27], where the weights of the target classifier can be regarded as class prototypes [9]. Each prototype is computed as the averaged feature of the corresponding support class,
$$p_k = \frac{1}{|S_k|} \sum_{(x,\, y=k) \in S_k} F(x).$$
For a query image $x_q \in Q$, the NNC assigns it the label of the closest support class under a similarity metric $\mathrm{sim}(\cdot, \cdot)$ on the representation space,
$$\hat{y}_q = \arg\max_{j \in \{1, \ldots, N\}} \mathrm{sim}\big(F(x_q),\, p_j\big).$$
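A minimal sketch of the NNC baseline follows, assuming the same kind of frozen feature extractor; the helper name and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F_  # aliased to avoid clashing with the extractor F

def nnc_predict(feat_extractor, support_x, support_y, query_x, n_way):
    with torch.no_grad():
        s_feats = feat_extractor(support_x)          # (N*K, d)
        q_feats = feat_extractor(query_x)            # (Q, d)
    # Class prototypes: mean feature of each support class.
    protos = torch.stack([s_feats[support_y == k].mean(0) for k in range(n_way)])
    # Cosine similarity between normalized query features and prototypes.
    sims = F_.normalize(q_feats, dim=1) @ F_.normalize(protos, dim=1).T
    return sims.argmax(dim=1)                        # predicted query labels
```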

4. Approach

In this section, we elaborate on our approach to addressing multi-source cross-domain few-shot learning, which includes two learning stages: (1) learning multi-source representations and (2) adapting them to the few-shot task. Particularly, in the adaptation procedure, the objective is to adapt the pre-trained representations to be task-specific and discriminative enough for identifying novel classes.

4.1. Multi-Source Representation Learning

Given the multiple source datasets $\{\mathcal{D}_b\}_{b=1}^{B}$, we first present our framework for training multi-source representations, aiming to effectively extract their diverse semantic information. Typically, a simple way to obtain multi-domain representations is to train a separate feature extractor for each source domain [57]. However, when the number of domains is large, adapting and deploying many models becomes impractical. Besides, the other baseline (presented in Section 3.2), training a single-task network on the merged source data, is parameter-efficient but suppresses feature diversity. Worse still, potential interference across different domains may impede regular training [58].
To reduce computational cost and model size, we achieve efficient domain-specific representations with a multi-head structure, where the multiple source datasets share a backbone network, under the assumption that low-level features generalize across different domains and tasks [5]. Concretely, the multi-head representations consist of multiple projection layers, each of which corresponds to a different domain and maps the shared features into the space of that domain. The learning framework is depicted in Figure 2. Besides the original $B$ domains, we also create a universal domain on all the merged source data $\mathcal{D}_J$ presented in Section 3.2, and thus define the number of feature representations as $D = B + 1$.
Inspired by a previous study [59], we instantiate each projection layer with a low-rank bilinear pooling (LBP) structure, since it has been proven to improve feature discrimination in single-source FSL. Assuming the feature maps output by the shared CNN backbone are $f_\phi(x) \in \mathbb{R}^{h \times w \times c}$, we add parallel LBP layers [60,61] at the end of the shared backbone, as shown in Figure 2. Denoting the $D$ domain-specific LBP layers as $\{P_{\theta_i}\}_{i=1}^{D}$, we obtain a set of feature representations $\{F_i(\cdot)\}_{i=1}^{D}$ for the $D$ domains,
$$F_i(x) = P_{\theta_i}\big(f_\phi(x)\big) = \sum_{l=1}^{hw} \big(P_{i,1}^{T} f_\phi(x)_l\big) \odot \big(P_{i,2}^{T} f_\phi(x)_l\big),$$
where $P_{i,1} \in \mathbb{R}^{c \times d}$ and $P_{i,2} \in \mathbb{R}^{c \times d}$ are the two projection matrices of $P_{\theta_i}$ for the $i$-th domain, and the subscript $l$ indexes the $hw$ spatial positions. As shown in Figure 3, the detailed architecture of the LBP in our implementation consists of two parallel $1 \times 1$ convolutions with $c$ channels, followed by a Hadamard product and a global average pooling operation. The feature dimension can be manually set to $d$.
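A minimal PyTorch sketch of one LBP projection head, following the description above (two parallel 1 × 1 convolutions, a Hadamard product, and global average pooling); averaging over spatial positions differs from the sum in the equation only by the constant factor $1/hw$. Module and argument names are illustrative.

```python
import torch.nn as nn

class LBPHead(nn.Module):
    """One domain-specific low-rank bilinear pooling projection layer."""
    def __init__(self, c=640, d=640):
        super().__init__()
        self.proj1 = nn.Conv2d(c, d, kernel_size=1)      # plays the role of P_{i,1}
        self.proj2 = nn.Conv2d(c, d, kernel_size=1)      # plays the role of P_{i,2}

    def forward(self, feat_map):                         # feat_map: (B, c, h, w)
        z = self.proj1(feat_map) * self.proj2(feat_map)  # Hadamard product
        return z.mean(dim=(2, 3))                        # global average pooling -> (B, d)
```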
We train the multi-source representations with regular supervised in-domain classification, performed on each representation head. Concretely, cosine classifiers [62,63] are used as the classification layers, denoted as $\mathcal{C} = \{C_i(\cdot\,; W_i)\}_{i=1}^{D}$, where $W_i = [w_1, \ldots, w_{N_i}]$ are the $d$-dimensional classification weight vectors for the $N_i$ classes in the $i$-th domain. The classifier $C_i(\cdot\,; W_i)$ produces the normalized classification score (probability) for the $j$-th class,
$$C_i^j\big(F_i(x); W_i\big) = \mathrm{softmax}_j\big[\gamma\, \mathrm{sim}\big(F_i(x), w_j\big)\big],$$
where the cosine similarity $\mathrm{sim}(x, y) = \frac{x^T y}{\|x\|_2 \|y\|_2}$ is defined as the dot product between the two $\ell_2$-normalized vectors, and $\gamma$ is the usual associated scalar. In summary, the pre-training procedure minimizes the multi-domain classification losses. For clarity, we re-denote an image example in the joint dataset as $(x, y_O, y_J, y_D) \in \mathcal{D}_J$, where $y_O$, $y_J$, and $y_D$ are its original-domain class label, joint-domain class label, and domain index label, respectively. The end-to-end training objective in the multi-task setting is as follows,
$$\mathcal{J}_{sp} = \sum_{b=1}^{B} \mathbb{1}[y_D = b]\; \mathcal{L}_{CE}\big(C_b \circ F_b(x),\, y_O\big),$$
$$\mathcal{J}_{joint} = \mathcal{L}_{CE}\big(C_J \circ F_J(x),\, y_J\big),$$
$$F = \arg\min_{F, \mathcal{C}}\; \mathbb{E}_{(x, y_O, y_J, y_D) \in \mathcal{D}_J}\big[\mathcal{J}_{sp} + \mathcal{J}_{joint}\big],$$
where $\mathcal{L}_{CE}$ is the cross-entropy function, and $\mathbb{1} \in \{0, 1\}$ is a domain indicator function that returns 1 if its argument is true and 0 otherwise.
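The multi-task objective can be sketched as below, assuming a shared backbone `f_phi`, a list `heads` of $D = B + 1$ LBP heads (index `B` being the universal head), and per-head cosine-classifier weight matrices `clfs`; the mean-reduced cross-entropy over each in-domain subset is one common implementation choice for the indicator-weighted sum, and all names are illustrative.

```python
import torch.nn.functional as F_

def cosine_logits(feats, weights, gamma=20.0):
    # Cosine classifier: scaled dot product of L2-normalized features and weights.
    return gamma * F_.normalize(feats, dim=1) @ F_.normalize(weights, dim=1).T

def multi_task_loss(f_phi, heads, clfs, x, y_orig, y_joint, y_dom, B):
    shared = f_phi(x)                                   # shared feature maps
    loss = 0.0
    for b in range(B):                                  # in-domain losses J_sp
        mask = (y_dom == b)
        if mask.any():
            logits = cosine_logits(heads[b](shared[mask]), clfs[b])
            loss = loss + F_.cross_entropy(logits, y_orig[mask])
    # Universal head classifies every sample in the joint label space (J_joint).
    joint_logits = cosine_logits(heads[B](shared), clfs[B])
    return loss + F_.cross_entropy(joint_logits, y_joint)
```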
During network training with mini-batch SGD, the back-propagated gradients accumulated from the multiple tasks on the shared parameters may be too large to ensure proper end-to-end optimization. To stabilize the training process, we adopt a simple gradient scaling mechanism. Specifically, when the losses are backpropagated to the shared features, the cumulative gradients from the multi-head branches are averaged. In this way, the magnitude of gradients for domain-shared parameters (the CNN backbone) and domain-specific parameters (the projection and classification heads) can be balanced for proper end-to-end training.
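One simple way to realize this averaging, as a sketch, is a backward hook on the shared feature tensor that divides the accumulated multi-head gradients by the number of heads before they reach the backbone:

```python
def scale_shared_gradients(shared_feats, num_heads):
    # The D head branches all consume `shared_feats`, so autograd sums their
    # gradients here; dividing by D averages them before the backbone update.
    shared_feats.register_hook(lambda grad: grad / num_heads)
    return shared_feats
```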
In summary, our framework can be trained end-to-end and be built upon any CNN backbone, which is parameter-efficient and simple to implement. The joint training regime ensures that the shared low-level features are general and that multi-head projections are fully responsible for domain specialization. Thus, the produced representations can be universal enough to support further generalization to vastly different few-shot recognition tasks.

4.2. Adapting Representations on Few-Shot Data

After obtaining the set of feature representations $\{F_i(\cdot)\}_{i=1}^{D}$, we further conduct model adaptation, aiming to generalize the pre-trained representations to the few-shot task, which only provides a small support set. To achieve this goal, we identify instance discrimination and class discrimination as two crucial factors for improving model generalization. Accordingly, we propose two contrastive learning objectives, which are applied on each domain-specific head. Different from the previous method [34], which uses contrastive learning on the source data to improve feature transferability in the first stage, our method conducts model adaptation by enhancing contrast across the few-shot data, thus directly making the pre-trained features more specific and discriminative for the target task. As the adaptation procedure is conducted on each representation head $F(\cdot\,; \theta_i)$ independently, we omit the domain index in the following notation for clarity. The adaptation procedure is depicted in Figure 4.

4.2.1. Parametric Instance Discrimination

Unlike most self-supervised contrastive losses, which use complex data augmentation to construct positive pairs from the same instance to achieve instance discrimination, we propose a parametric module, namely the instance parametric proxy (IPP). It is functionally similar to a memory bank storing instance features for instance classification [50,52], but differs in that our IPP is learnable and updated by gradient descent. For an N-way K-shot task that provides a support set $S = \{S_i\}_{i=1}^{N}$, $|S_i| = K$, we denote the weights of the IPP as $V = \{v_i\}_{i=1}^{NK}$, $v_i \in \mathbb{R}^d$, each of which corresponds to a support instance and is initialized by the original support features. For each iteration of model adaptation, let $i \in I \equiv \{1, \ldots, NK\}$ be the index of an arbitrary transformed sample of an original support image, and $A(i)$ be the negative index set of the sample $x_i$. Then, we perform contrastive learning by enforcing each instance $x_i$ to be close to its proxy $v_i$ and far from its negative samples indexed by $A(i)$ in the feature space. Our parametric instance discrimination (PID) loss, modified from info-NCE [51], is as follows:
$$\mathcal{L}_{PID}(S; \phi, \theta, V) = -\frac{1}{|I|} \sum_{i \in I} \log \hat{P}_{i,p},$$
$$\hat{P}_{i,p} = \frac{E\big(F(x_i), v_i\big)}{E\big(F(x_i), v_i\big) + \sum_{a \in A(i)} \Big[E\big(F(x_i), v_a\big) + E\big(F(x_i), F(x_a)\big)\Big]},$$
where $E(w_1, w_2) = \exp\big(\mathrm{sim}(w_1, w_2)/\tau\big)$ and $\tau$ is the usual temperature parameter. Unlike unsupervised contrastive learning [52,53], which makes an anchor instance discriminate against all other instances, the negative index set in Equation (12) is defined as $A(i) \equiv \{a \in I : y_a \neq y_i\}$, which means that negative pairs between same-class instances are filtered out by the category labels. As a result, $|A(i)| = (N-1)K$ for the N-way K-shot task. This supervised objective is thus more effective in reducing instance variations, as it avoids negative contrast between same-class features. During adaptation, this contrastive loss is jointly minimized with respect to the IPP and the parameters of the feature representation by SGD.
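A hedged sketch of the PID loss follows: `proxies` is the learnable IPP matrix (NK × d) in the same index order as the transformed support features `feats`, `labels` are the class labels, and `tau` is the temperature; names are illustrative.

```python
import torch
import torch.nn.functional as F_

def pid_loss(feats, proxies, labels, tau=0.05):
    f = F_.normalize(feats, dim=1)
    v = F_.normalize(proxies, dim=1)
    pos = torch.exp((f * v).sum(1) / tau)           # E(F(x_i), v_i)
    sim_fv = torch.exp(f @ v.T / tau)               # E(F(x_i), v_a) for all a
    sim_ff = torch.exp(f @ f.T / tau)               # E(F(x_i), F(x_a)) for all a
    # A(i): indices with a different class label, so same-class negatives are filtered.
    neg_mask = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    denom = pos + ((sim_fv + sim_ff) * neg_mask).sum(1)
    return -torch.log(pos / denom).mean()
```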

4.2.2. Class Feature Discrimination

While the instance discrimination loss can enhance instance invariance for improving generalization, it only loosely ensures intra-class compactness [54], a key capability for clustering same-class features. To make the representations more discriminative for the target task, we enforce intra-class feature invariance while also keeping between-class features separated. Given the arbitrary transformed support samples indexed by $i \in I \equiv \{1, \ldots, NK\}$, let $A(i)$ and $P(i)$ be the negative and positive index sets of sample $x_i$, respectively. Here, $P(i) \equiv \{p \in I : y_p = y_i\} \setminus \{i\}$ and $|P(i)| = K - 1$ for the K-shot task. Then our class feature discrimination (CFD) objective minimizes the following supervised contrastive loss,
$$\mathcal{L}_{CFD}(S; \phi, \theta) = -\frac{1}{|I|} \sum_{i \in I} \frac{1}{|P(i)|} \sum_{p \in P(i)} \log \hat{P}_{i,c},$$
$$\hat{P}_{i,c} = \frac{E\big(F(x_i), F(x_p)\big)}{E\big(F(x_i), F(x_p)\big) + \sum_{a \in A(i)} E\big(F(x_i), F(x_a)\big)}.$$
With the complementary supervision of Equations (11) and (13), not only can the inter-class feature differences be enlarged, but the intra-instance and intra-class feature variations can also be reduced.
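Analogously, a sketch of the CFD loss, mirroring the supervised contrastive form above (same illustrative conventions as the PID sketch):

```python
import torch
import torch.nn.functional as F_

def cfd_loss(feats, labels, tau=0.05):
    f = F_.normalize(feats, dim=1)
    sim = torch.exp(f @ f.T / tau)                       # E(F(x_i), F(x_j))
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye   # P(i)
    neg_mask = labels.unsqueeze(0) != labels.unsqueeze(1)            # A(i)
    neg_sum = (sim * neg_mask.float()).sum(1, keepdim=True)
    log_prob = torch.log(sim / (sim + neg_sum))          # log P_hat_{i,c} per pair
    # Average over positives P(i) per anchor, then over anchors.
    per_anchor = (log_prob * pos_mask.float()).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -per_anchor.mean()
```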

4.2.3. Prototypical Classification

The two contrastive objectives enhance model discrimination at the feature level, thus improving accuracy for the direct NNC baseline. However, previous findings [14,64] also indicate that training linear classifiers on a frozen feature extractor can outperform NNC, as they can learn better class boundaries by exploiting all support examples rather than only computing class centers. A natural approach is to build a linear classifier from scratch, as in the FT baseline presented in Section 3.2. Here, we instead propose to conduct classification implicitly by repurposing the IPP without building an additional parametric layer, which also helps avoid over-parameterization in the low-data regime. In each iteration, we first calculate the class prototypes by averaging the instance proxies belonging to the same class:
$$p_k = \frac{1}{K} \sum_{v_i \in V,\, y_i = k} v_i, \quad k = 1, \ldots, N.$$
For a support sample $x_i$, the posterior probability $\hat{P}_s$ of belonging to support class $k$ is as follows,
$$\hat{P}_s(y = k \mid x_i) = \frac{\exp\big(\mathrm{sim}(F(x_i), p_k)\big)}{\sum_j \exp\big(\mathrm{sim}(F(x_i), p_j)\big)}.$$
The prototypical classification loss, the cross-entropy between the predictions and the support labels, is as follows,
$$\mathcal{L}_{PC}(S; \phi, \theta, V) = -\sum_{(x_i, y_i) \in S} \log \hat{P}_s(y = y_i \mid x_i).$$
This regularization can encourage the model to learn more comprehensive features by enforcing accurate prediction with natural cross-entropy. It turns out to be particularly effective and leads to significant performance boosts, which can be attributed to improvements in the quality of the adapted features and more representative class prototypes induced by IPP.
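A sketch of the prototypical classification loss, where prototypes are means of the learnable proxies per class and the cosine-softmax uses the same scalar γ as in pre-training (an assumption consistent with the implementation details in Section 5.2); names are illustrative.

```python
import torch
import torch.nn.functional as F_

def pc_loss(feats, proxies, proxy_labels, labels, n_way, gamma=20.0):
    # Prototypes: mean of the instance proxies belonging to each class.
    protos = torch.stack([proxies[proxy_labels == k].mean(0) for k in range(n_way)])
    logits = gamma * F_.normalize(feats, dim=1) @ F_.normalize(protos, dim=1).T
    return F_.cross_entropy(logits, labels)     # cross-entropy on support predictions
```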

4.2.4. Implementation of Total Adaptation Loss

Finally, the few-shot adaptation is conducted by minimizing the three combined losses on the support set:
$$\mathcal{L}_{total}(S; \phi, \theta, V) = \mathcal{L}_{PID} + \lambda_1 \mathcal{L}_{CFD} + \lambda_2 \mathcal{L}_{PC},$$
where $\lambda_1$ and $\lambda_2$ are two regular trade-off parameters.
We consider two adaptation strategies: (1) LAMR: adapting the projection layers while leaving the backbone frozen. This adaptation is performed independently on each representation head once the shared features have been extracted, which allows rapid adaptation. (2) LAMR++: adapting both the projection layers and the shared backbone.
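Putting the pieces together, here is a sketch of one adaptation run reusing the loss sketches above; `proxies` is assumed to be a leaf tensor with `requires_grad=True` initialized from the support features, and the support-set data augmentation is omitted for brevity. The `adapt_backbone` flag switches between the LAMR and LAMR++ variants.

```python
import torch

def adapt(backbone, head, proxies, support_x, labels, n_way,
          lam1=1.0, lam2=1.0, epochs=100, lr=0.01, adapt_backbone=False):
    params = list(head.parameters()) + [proxies]
    if adapt_backbone:                       # LAMR++: also unfreeze the backbone
        params += list(backbone.parameters())
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(epochs):
        feats = head(backbone(support_x))    # re-extract features each step
        loss = (pid_loss(feats, proxies, labels)
                + lam1 * cfd_loss(feats, labels)
                + lam2 * pc_loss(feats, proxies, labels, labels, n_way))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head, proxies
```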

4.2.5. Query Prediction

With the proposed adaptation, we transfer the pre-trained representations into task-specific ones, denoted as $\{F(\cdot\,; \hat{\theta}_i)\}_{i=1}^{D}$. We can then build a nearest neighbor classifier (NNC) [9] from each adapted IPP. For the $i$-th domain, the induced prototypes computed by Equation (15) are denoted as $\{\hat{p}_j^i\}_{j=1}^{N}$. For a query image $x_q \in Q$, the similarity to class $j$ on the $i$-th representation is computed as $\mathrm{sim}\big(F(x_q; \hat{\theta}_i), \hat{p}_j^i\big)$. The final prediction aggregates the class similarities across all representation heads, as follows:
$$\hat{y}_q = \arg\max_{j \in \{1, \ldots, N\}} \sum_{i=1}^{D} \mathrm{sim}\big(F(x_q; \hat{\theta}_i),\, \hat{p}_j^i\big).$$
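A sketch of this aggregation step, assuming one adapted head and one set of adapted prototypes per domain; names are illustrative.

```python
import torch.nn.functional as F_

def predict_query(backbone, heads, protos_per_head, query_x):
    shared = backbone(query_x)                          # shared features, one pass
    scores = 0.0
    for head, protos in zip(heads, protos_per_head):    # one adapted head per domain
        q = F_.normalize(head(shared), dim=1)
        scores = scores + q @ F_.normalize(protos, dim=1).T   # cosine similarities
    return scores.argmax(dim=1)                         # aggregated prediction
```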

4.3. Extension to Single-Source FSL

We can make our multi-source framework applicable to single-source FSL by dividing the source dataset into sub-domains, each of which contains some unique classes. We propose the following two splitting methods (a code sketch of both follows the list).
  • Random splitting. The original classes are equally randomly split into sub-datasets.
  • Clustering splitting. A natural class-splitting choice would be K-means clustering on class prototypes computed over image features, with a representation pre-trained on the full class set. However, K-means may result in unbalanced partitions. Inspired by the previous method [65], we instead iteratively split each current dataset in half along the principal component computed over its class prototypes. After $N$ splitting iterations, the original dataset is divided into $2^N$ subsets, each of which can be regarded as a distinct domain composed of classes that are close to each other.
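The following numpy sketch illustrates both strategies under stated assumptions: `class_protos` is an (n_classes × d) array of class prototypes from a model pre-trained on the full class set, and all names are illustrative.

```python
import numpy as np

def random_split(class_ids, n_parts, seed=0):
    # Equally and randomly split the class ids into n_parts sub-domains.
    rng = np.random.RandomState(seed)
    return np.array_split(rng.permutation(class_ids), n_parts)

def pca_split(class_ids, class_protos, n_iters):
    # Iteratively split each subset in half along its first principal
    # component; after n_iters iterations there are 2**n_iters subsets.
    subsets = [np.asarray(class_ids)]
    for _ in range(n_iters):
        new_subsets = []
        for ids in subsets:
            centered = class_protos[ids] - class_protos[ids].mean(0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            order = np.argsort(centered @ vt[0])   # project onto the first PC
            half = len(ids) // 2
            new_subsets += [ids[order[:half]], ids[order[half:]]]
        subsets = new_subsets
    return subsets
```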
With the split sub-domains on a fixed training data budget, our framework can indeed encourage more diverse representations, which can be further adapted to the FSL task to produce an ensemble of NNC classifiers. Splitting typically worsens the average performance of the individual classifiers but makes the ensemble prediction more accurate. This modification turns out to be particularly effective when the number of partitions is appropriate, as illustrated in the experimental section. In the extreme case of only one class per domain, supervised classification provides no training signal for the representation. Therefore, we can expect an optimal number of partitions for a dataset, and we choose this hyperparameter based on validation-set performance.

5. Experiments and Results

5.1. Benchmark Datasets

5.1.1. Broader Study of Cross-Domain Few-Shot Learning (BSCD-FSL)

We mainly evaluate our approach on the recently proposed cross-domain benchmark BSCD-FSL [21], which provides few-shot evaluation protocols in both single- and multi-source settings. Figure 5 shows examples of source and target images. For the multi-source setting, the training datasets are mini-ImageNet [8], CUB [66], CIFAR100 [67], DTD [68], and Caltech256 [69]; all the source domains consist of colored natural images. For the single-source setting, only mini-ImageNet is used. The testing domains cover a spectrum of image types, namely the CropDiseases [70], EuroSAT [23], ISIC2018 [24,25], and ChestX [26] datasets. Concretely, these are images of plant diseases, remote sensing images, dermoscopy images of skin lesions, and chest X-ray images, each corresponding to a different level of similarity to natural images. Compared to previous benchmarks [13,22], this provides more diverse specialized recognition scenarios for evaluating cross-domain FSL.

5.1.2. Mini-Imagenet

For conventional (in-domain) few-shot learning, we evaluate on the most commonly used dataset, mini-ImageNet [8], which is derived from the ImageNet dataset [3] and consists of 60,000 color natural images of size 84 × 84 belonging to 100 classes, each with 600 examples. Mini-ImageNet was first proposed in [10]. We use the common follow-up setting [8] in which the dataset is divided into 64 base classes, 16 validation classes, and 20 novel classes. To make this dataset applicable to our framework, the splitting methods proposed in Section 4.3 are performed on the 64 base classes, and the optimal hyperparameters are selected on the validation set.

5.2. Implementation Details

5.2.1. Network Architecture

We use ResNet12 [20,64], a derivative of residual networks [2] designed particularly for few-shot learning, as the feature extraction backbone $f_\phi(\cdot)$ producing the shared features in all experiments. The detailed structure of ResNet12 is shown in Figure 6. ResNet12 has four residual blocks, each made up of three convolutional layers and one 2 × 2 max-pooling layer with stride 2. Each convolutional layer has a 3 × 3 kernel, followed by batch normalization and leaky ReLU with slope 0.1. The four blocks output feature maps with 64/160/320/640 channels, respectively. ResNet12 has approximately 12.4M parameters. For a 3 × 84 × 84 input image, the output feature maps have a size of 640 × 5 × 5. The low-rank bilinear pooling (LBP) layer uses two parallel 1 × 1 convolutional layers; we set its feature dimension $d$ to 640, equal to the number of output channels of the ResNet12 backbone. The LBP has approximately 0.8M parameters.

5.2.2. Training Details

Our codebase is developed on the few-shot learning framework with PyTorch from [21]. In the meta-training stage, we use the SGD optimizer with Nesterov momentum 0.9, and a weight decay of $1 \times 10^{-4}$ is applied to all model parameters. We train for 140 epochs in total in both single- and multi-source learning, with the learning rate initialized to 0.1 and dropped to 0.01 at the 100th epoch, similar to [12]. Conventional data augmentations, including random resize and crop, horizontal flip, and color jittering, are applied to the source training images. In the meta-testing stage, we conduct model adaptation on the few-shot data. For LAMR, only the domain-specific layers are fine-tuned; for LAMR++, both the pre-trained backbone and the domain-specific layers are fine-tuned. Concretely, we use an SGD optimizer and fine-tune for 100 epochs on the few-shot data (support set). The trade-off parameters $\lambda_1$ and $\lambda_2$ are simply set to 1. The metric scalar $\gamma$ (and the temperature parameter $\tau$) in the cosine similarity is set to 20 (and 0.05, respectively) in all equations.

5.2.3. Evaluation Protocol

For the BSCD-FSL benchmark, we evaluate 5-way few-shot performance with the shot varying in {5, 20, 50} over 600 tasks, following the previous evaluation protocol [21]. For the mini-ImageNet benchmark, we evaluate 5-way 1-shot and 5-shot generalization performance over 2000 tasks on the novel set, as in [18,41]. Each few-shot task contains 15 queries per class. We report the average accuracy with the corresponding 95% confidence interval over all tasks in all experiments. In particular, we apply consistent sampling to rigorously ensure fair comparison: the sampling of testing few-shot tasks follows a deterministic order generated by numpy with a fixed seed. This makes our ablation studies and comparisons more convincing.
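A minimal sketch of such consistent sampling: seeding a dedicated numpy generator makes the episode sequence deterministic, so every compared method is evaluated on identical tasks (the seed value and function names are illustrative).

```python
import numpy as np

def sample_tasks(labels, n_tasks=600, n_way=5, k_shot=5, n_query=15, seed=10):
    rng = np.random.RandomState(seed)            # fixed seed -> deterministic order
    classes_all = np.unique(labels)
    for _ in range(n_tasks):
        classes = rng.choice(classes_all, n_way, replace=False)
        support, query = [], []
        for c in classes:
            idx = rng.permutation(np.where(labels == c)[0])
            support.append(idx[:k_shot])
            query.append(idx[k_shot:k_shot + n_query])
        yield np.concatenate(support), np.concatenate(query)
```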

5.3. Results on Multi-Source FSL

We present the results of our method in the multi-source FSL setting, where five semantically different datasets can be used for pre-training.
The following multi-domain learning methods are compared:
  • Union-CC [62]: A baseline method that trains a single feature extractor on the union of all training data with the cosine classifier and tests it with the NNC classifier.
  • Ensemble: A baseline method that trains separate feature extractors on each dataset and tests with the average prediction of the NNC classifiers built on them.
  • All-EBDs [21]: A method that concatenates the feature vectors of all layers of all the separate feature extractors for training a linear classifier.
  • IMS-f [21]: A greedy selection method that iteratively searches for the best subset of features over all layers of all the separate feature extractors for a given few-shot task. The selected feature vectors are then concatenated for training a linear classifier.
  • FiLM-pf [35]: Multiple representations are trained on a parameter-efficient backbone in which parallel domain-specific FiLM [36] layers are inserted after each batch normalization layer. The multiple representations are tested with the average prediction of the NNC classifiers built on them.
  • SUR [27]: A feature selection method that performs a linear combination of the domain-specific representations on top of FiLM-pf.
  • URL [29]: A single feature extractor is distilled from the separate multi-domain networks and tested with the NNC classifier.
  • URL+Ad [29]: An adaptation method attaches a pre-classifier feature mapping (a linear layer) to the URL and optimizes it with the few-shot data.
  • TSA [40]: An adaptation method that attaches residual adapters to each convolution layer of a pre-trained model (here, URL [29]) together with a pre-classifier feature mapping, and optimizes them from scratch with the few-shot data.
Table 1 reports the detailed results on the four target datasets. Figure 7 summarizes the comparison across methods according to the average accuracy over all shot levels and datasets in the benchmark. The proposed LAMR sets a new state of the art in all experimental settings, as it consistently outperforms both the previous methods and the baseline methods, as shown in Table 1. Concretely, Ensemble, All-EBDs, and IMS-f use multi-domain features built on fully separated feature extractors, which is inefficient and impractical for deployment to target domains. In addition, FiLM-pf and SUR, built on a parameter-efficient backbone, are also computationally expensive, as they still require multiple forward passes to obtain the multi-domain features. In contrast, our multi-head network is both parameter-efficient and computation-efficient. Instead of using an ensemble of multiple feature representations, URL learns a single network by knowledge distillation from the ensemble of separate multi-domain networks. The distilled single network shows better generalization than Ensemble and Union-CC but still underperforms the other methods when its features are used directly. However, further adaptation of the URL significantly improves its results. Concretely, URL+Ad simply employs a linear layer on top of the URL for feature adaptation, which yields an average improvement of 2.7%. TSA conducts a deeper adaptation, providing an average improvement of 3.9% over the URL. Our proposed adaptation method, a combination of fine-tuning losses, is orthogonal to methods like TSA that adapt the network by incrementally learning new parametric modules. Finally, our LAMR, which only fine-tunes the projection layers, consistently performs better than TSA in all settings shown in Table 1. With the backbone further adapted, LAMR++ achieves consistent additional gains. Particularly on the ISIC dataset, LAMR++ improves over our shallower adaptation method (LAMR) by 3.7%, 3.2%, and 3.4% in the {5/20/50}-shot settings, respectively. This suggests that adapting both shallow and deep layers of a neural network is important for successfully addressing cross-domain few-shot learning.
Overall, our methods (LAMR & LAMR++) show clear superiority over the other methods. Concretely, LAMR++ achieves an average classification accuracy of 73.07% across all datasets and shot levels, outperforming TSA by 3.2%.

5.4. Results on Single-Source FSL

5.4.1. Validating Splitting Strategy for Single-Source FSL

We first investigate the effects of different splitting methods and split numbers for single-source FSL. As discussed in Section 4.3, there may exist a trade-off partition number that achieves the optimal generalization performance. The evaluation is conducted on the validation set of mini-ImageNet with 200 consistently sampled tasks. We evaluate the two proposed splitting methods, random splitting and clustering splitting, with the split number varying in {1, 2, 4, 8, 16}. For the random splitting method, we perform three trials with different random class splits and report the mean over the three runs. It is worth noting that the results across trials have low variance, as the variation between any two random trials is within 1%. The plots of 5-way 1-shot and 5-shot validation accuracy are shown in Figure 8. The split number of 1 (the root point) corresponds to the strong FSL baseline CC [62,63], trained on the original dataset without splitting.
First, it is interesting that the random splitting method achieves better performance than the clustering splitting method. A possible explanation is that randomly split sub-domains include more heterogeneous classes, which yields more discriminative representations and better average performance in the ensemble. Second, we observe that the best split number for both methods on mini-ImageNet is 2, which indicates that our multi-source framework can improve single-source FSL with a fixed data budget. Third, when the number of partitions becomes too large (greater than 4), accuracy decreases sharply, since each sub-domain then contains too few classes and too little data to enable meaningful representation learning.

5.4.2. Results on Mini-ImageNet

We further evaluate our LAMR trained on the “fake multi-domain” data partitioned by the optimal splitting strategy validated in Figure 8. We report results on the mini-ImageNet test set and compare with prior methods that focus on learning or adapting a good representation in Table 2. We make the following comparisons. (1) First, we compare our method with approaches [12,13,14,62,71] that rely directly on good pre-trained representation learning. They all perform few-shot classification on the frozen representation by building a target classifier with the NNC [12,62,71] or FT [13,14] baselines, but differ in how the feature extractor is learned. Concretely, CC [62] is our baseline model, whose deep representation is trained with a cosine classifier. Neg-Cosine [71] enhances the CC representation by introducing a negative margin into the softmax loss. The other methods [12,13,14] instead use plain linear classifiers minimizing the cross-entropy loss to obtain the representation, and Meta-Baseline [12] further improves the pre-trained feature extractor with a subsequent meta-training stage. All the methods mentioned above are competitive with meta-learning-based methods [9,64] and benefit from a good embedding. However, our LAMR shows significant superiority over the CC-based methods [62,71] as well as the other methods [12,13,14]. For example, LAMR outperforms Embed-Distill [14] by 1.9% and 1.8% in the 5-shot and 1-shot settings, respectively. Besides, our method is also orthogonal to those methods [9,12,13,62,71], as their learning algorithms could be used for pre-training our multi-head representation framework.
(2) Second, we further compare our LAMR with methods [19,20,27,37,72,73] that perform feature adaptation when deployed to target few-shot tasks. TADAM [20] employs a task embedding network (TEN) block that generates scaling and shift vectors for each batch normalization layer, adapting the network to be task-specific. However, learning an accurate auxiliary network can be challenging, especially when target data are limited and the domain shift is significant. Centroid-Align [19] first selects a set of task-relevant categories from the source data and conducts feature alignment between the selected source data and the target few-shot data for network adaptation. Free-Lunch [72] proposes to calibrate the distribution of the novel samples using the statistics of selected base classes considered task-relevant. H-OT [73] further develops a novel hierarchical optimal transport framework to achieve adaptive distribution calibration. Unlike methods [19,72,73] that perform adaptation by leveraging the base data [19] or their statistics [72,73], our method provides a more effective adaptation scheme that directly optimizes the pre-trained representations with the limited target data, without re-accessing the source data. Besides, our methods achieve performance on par with or better than theirs. For example, our LAMR++ outperforms H-OT by 0.27% and 0.97% in the 1-shot and 5-shot settings, respectively. Instead of adjusting the pre-trained parameters, Implant [37] adds and learns new convolutional filters upon the frozen CNN layers. Our LAMR++ is orthogonal to it as well and performs significantly better, by 3.4% and 4.1% in the 1-shot and 5-shot settings, respectively.
(3) Third, we compare LAMR with an ensemble-based method, Robust-20 [57], which trains an ensemble of 20 ResNets promoted by cooperation and diversity regularization. Our approach builds an ensemble of multiple representations more efficiently and also significantly outperforms it, with notable absolute accuracy improvements of 2.0% and 2.3% in the 1-shot and 5-shot settings, respectively.
It is observed that adapting the backbone (by LAMR++) only slightly helps few-shot transfer on this benchmark, whereas the improvement is more pronounced for cross-domain few-shot learning. This may indicate that deeper adaptation becomes more necessary as the domain distribution shift increases.

5.4.3. Results on BSCD-FSL

We further report the results of single-source cross-domain FSL on the four specialized domains of BSCD-FSL and compare them with prior approaches in Table 3. Concretely, Linear and Mean-centroid denote the FT and NNC baselines presented in Section 3.2, respectively. Ft-CC denotes fine-tuning a cosine classifier on the frozen feature extractor. The proposed approach surpasses all three transfer learning baselines by a large margin. For instance, LAMR performs better than Linear by 5.4%, 4.6%, and 3.1% in the {5/20/50}-shot settings, respectively. Similar to the observation in multi-source FSL, LAMR++ also yields notable and consistent improvements over LAMR. Particularly on the ISIC dataset, LAMR++ improves over LAMR by 5.5%, 5.8%, and 5.2% in the {5/20/50}-shot settings, respectively.
Besides, we also make comparisons with other state-of-the-art methods [32,38,39,74]. LDP-net [32] imposes local-global feature consistency of prototypical networks by knowledge distillation, which improves the cross-domain generalization of the learned features. However, due to a lack of feature adaptation, LDP-net is typically inferior to other adaptation methods [38,39,74] and ours, especially for 20/50-shot settings. For example, on the ISIC dataset, LAMR++ performs significantly better than LDP-net by 6.0%, 9.8%, and 9.9% in {5/20/50}-shot settings, respectively.
Other methods [38,39,74] conduct domain-specific feature adaptation to tackle the large domain shift. In particular, FN [38] adapts the feature extractor by fine-tuning the scaling and shifting parameters of its batch normalization layers on the few-shot data. We can observe that FN is inferior to the transfer learning baselines in several 5-shot cases. A possible interpretation is that, with such limited data and extreme domain shift, accurately optimizing the BN parameters may be particularly hard. ConFeSS [39] proposes to learn a task-specific feature masking module that produces refined features for further fine-tuning a target classifier and the feature extractor. NSAE [74] pre-trains and fine-tunes the network with an additional auto-encoder to improve model generalization, which implicitly augments the support data. Unlike ConFeSS [39] and NSAE [74], which leverage auxiliary modules for model adaptation, our approach is more efficient, directly optimizing the target model. Finally, our approach achieves the highest accuracy across all methods and experimental settings, except for the single case of 5-way 5-shot EuroSAT, where ConFeSS slightly outperforms our LAMR++ by 0.03%. In the 5-way 20-shot and 50-shot EuroSAT settings, however, our LAMR++ achieves significant gains of 2.6% and 3.1% over it, respectively.
Table 3. The results of single-source few-shot learning on the BSCD-FSL benchmark. The best result in each setting is marked in bold.
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| ProtoNet 1 [9] | 24.05 ± 1.01 | 28.21 ± 1.15 | 29.32 ± 1.12 | 39.57 ± 0.57 | 49.50 ± 0.55 | 51.99 ± 0.52 |
| Linear 1 [13] | 25.97 ± 0.41 | 31.32 ± 0.45 | 35.49 ± 0.45 | 48.11 ± 0.64 | 59.31 ± 0.48 | 66.48 ± 0.56 |
| Mean-centroid 1 [75] | 26.31 ± 0.42 | 30.41 ± 0.46 | 34.68 ± 0.46 | 47.16 ± 0.54 | 56.40 ± 0.53 | 61.57 ± 0.66 |
| Ft-CC 1 [62] | 26.95 ± 0.44 | 32.07 ± 0.55 | 34.76 ± 0.55 | 48.01 ± 0.49 | 58.13 ± 0.48 | 62.03 ± 0.52 |
| FN [38] | 25.78 ± 0.42 | 31.88 ± 0.46 | 34.81 ± 0.49 | 45.34 ± 0.60 | 58.92 ± 0.57 | 65.90 ± 0.58 |
| ConFeSS [39] | 27.09 ± 0.24 | 33.57 ± 0.31 | 39.02 ± 0.12 | 48.85 ± 0.29 | 60.10 ± 0.33 | 65.34 ± 0.45 |
| NSAE [74] | 27.10 ± 0.44 | 35.20 ± 0.48 | 38.95 ± 0.70 | 54.05 ± 0.63 | 66.17 ± 0.59 | 71.32 ± 0.61 |
| LDP-net 2 [32] | 27.30 ± 0.43 | 34.03 ± 0.49 | 37.58 ± 0.48 | 48.15 ± 0.60 | 58.47 ± 0.56 | 64.20 ± 0.55 |
| LAMR | 27.66 ± 0.44 | 33.82 ± 0.50 | 38.92 ± 0.50 | 48.66 ± 0.60 | 62.38 ± 0.60 | 68.92 ± 0.56 |
| LAMR++ | 28.86 ± 0.45 | 35.86 ± 0.50 | 41.36 ± 0.56 | 54.11 ± 0.62 | 68.22 ± 0.57 | 74.12 ± 0.54 |

| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| ProtoNet 1 [9] | 73.29 ± 0.71 | 82.27 ± 0.57 | 80.48 ± 0.57 | 79.72 ± 0.67 | 88.15 ± 0.51 | 90.81 ± 0.43 |
| Linear 1 [13] | 79.08 ± 0.61 | 87.64 ± 0.47 | 91.34 ± 0.37 | 89.25 ± 0.51 | 95.51 ± 0.31 | 97.68 ± 0.21 |
| Mean-centroid 1 [75] | 82.21 ± 0.49 | 87.62 ± 0.34 | 88.24 ± 0.29 | 87.61 ± 0.47 | 93.87 ± 0.68 | 94.77 ± 0.34 |
| Ft-CC 1 [62] | 81.37 ± 1.54 | 86.83 ± 0.43 | 88.83 ± 0.38 | 89.15 ± 0.51 | 93.96 ± 0.46 | 94.27 ± 0.41 |
| FN [38] | 80.03 ± 0.70 | 88.94 ± 0.46 | 92.34 ± 0.36 | 91.11 ± 0.49 | 96.62 ± 0.26 | 98.27 ± 0.17 |
| ConFeSS [39] | 84.65 ± 0.38 | 90.40 ± 0.24 | 92.66 ± 0.36 | 88.88 ± 0.51 | 95.34 ± 0.48 | 97.56 ± 0.43 |
| NSAE [74] | 83.96 ± 0.57 | 92.38 ± 0.33 | 95.42 ± 0.34 | 93.14 ± 0.47 | 98.30 ± 0.19 | 99.25 ± 0.14 |
| LDP-net 2 [32] | 81.50 ± 0.65 | 88.15 ± 0.48 | 90.75 ± 0.41 | 89.00 ± 0.51 | 95.49 ± 0.29 | 97.28 ± 0.20 |
| LAMR | 84.46 ± 0.55 | 92.21 ± 0.33 | 94.46 ± 0.27 | 94.15 ± 0.39 | 98.19 ± 0.17 | 99.16 ± 0.11 |
| LAMR++ | 84.62 ± 0.55 | 93.08 ± 0.31 | 95.75 ± 0.23 | 94.30 ± 0.38 | 98.39 ± 0.16 | 99.26 ± 0.11 |
1 Results from [21]. 2 Reproduced based on the officially released code [32] and their trained model.

6. Ablation Study and Analysis

We conduct the ablation studies on the BSCD-FSL benchmark, as its target domains cover a broad range of specialized recognition scenarios.

6.1. Effect of Multi-Domain Learning Framework

We first explore how our multi-source framework benefits feature transferability. We compare it with two baseline models: (1) Single-source: the feature representation is trained on one source dataset (i.e., mini-ImageNet). (2) Merged-multi-sources: the feature representation is trained with a single task that classifies the merged classes from all source datasets, as presented in Section 3. We validate the transferability of these representations directly with the NNC baseline and report the results in Table 4. We observe that the representation trained with Merged-multi-sources outperforms the Single-source representation in most cases, since merging multiple sources provides substantially more training data. Compared with Merged-multi-sources, our multi-source framework achieves much higher accuracy in most settings; only on ChestX does our framework slightly underperform, by less than 1%. We conjecture that the distribution of this target domain matches none of the training distributions, so robust knowledge transfer cannot be ensured. Nevertheless, the overall performance demonstrates the benefit of using our multi-source representations rather than a single representation learned on the combined source dataset. A sketch of the framework being compared is given below.
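For concreteness, the following minimal PyTorch sketch shows the structure of our multi-source framework as compared here: a shared backbone with one projection head and one cosine classifier per source domain. It is illustrative only; a plain linear layer stands in for the low-rank bilinear pooling head, and all names and dimensions are our own placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSourceNet(nn.Module):
    """Shared CNN backbone with one projection head per source domain;
    each head feeds a cosine classifier over that domain's classes."""
    def __init__(self, backbone, feat_dim, proj_dim, classes_per_domain):
        super().__init__()
        self.backbone = backbone  # assumed to return pooled (B, feat_dim) features
        # A plain linear layer stands in for the low-rank bilinear pooling head.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, proj_dim) for _ in classes_per_domain)
        self.class_weights = nn.ParameterList(
            nn.Parameter(torch.randn(n_cls, proj_dim) * 0.01)
            for n_cls in classes_per_domain)
        self.scale = 10.0  # temperature of the cosine logits

    def forward(self, x, domain):
        z = F.normalize(self.heads[domain](self.backbone(x)), dim=-1)
        w = F.normalize(self.class_weights[domain], dim=-1)
        return self.scale * z @ w.t()  # cosine-similarity logits

# Multi-task training step (illustrative): one cross-entropy term per domain batch
# loss = sum(F.cross_entropy(net(x_d, d), y_d) for d, (x_d, y_d) in enumerate(batches))
```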

6.2. Significance of Few-Shot Adaptation

In order to understand what enables good representation adaptation on few-shot data, we systematically study the effect of each component of our adaptation loss, i.e., PID, CFD, and PC, corresponding to Equations (11), (13), and (17). Table 5 shows the detailed results for all 24 settings, which vary over two source types, four target domains, and three shot levels. We make the following observations: (1) Applying any one of PID, CFD, or PC leads to consistent performance gains in all 24 experimental settings. (2) With the combined supervision of PID and CFD, the results are better than with only PID or CFD in 16 out of 24 settings. (3) Incorporating all three components leads to the best result in 17 out of 24 settings. Particularly on the ISIC dataset with 50 examples per class available, the overall performance improves by up to 10.0% and 15.8% in the multi- and single-source settings, respectively. This verifies that our adaptation strategy is advantageous for dealing with the domain shift problem in few-shot learning. (4) The overall gains grow as more data become available. For example, on the ChestX dataset, our adaptation method yields improvements of {1.4%, 4.0%, 6.6%} over the baseline in the {5/20/50}-shot settings, respectively. This also indicates that, under extreme domain bias such as in the medical domain, recognition requires more data to ensure good adaptation.
To better evaluate the effect of each isolated component and of different combinations of the three components on cross-domain few-shot transfer, we further compute the average accuracy across datasets and shot levels and rank the variants in Figure 9. The rank of the isolated gains is {PC > CFD > PID} for both multi-source and single-source FSL, and the full adaptation achieves the best mean accuracy in both settings. Overall, the adaptation provides an average improvement of 4.5% and 6.4% for multi-source FSL and single-source FSL, respectively. A sketch of how the three terms combine is given below.
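The following sketch illustrates one plausible way the three terms could be combined on the support set of one representation head. It is our own simplified rendering; the exact loss forms are those of Equations (11), (13), and (17) in the paper and may differ in detail:

```python
import torch
import torch.nn.functional as F

def adaptation_loss(feats, labels, proxies, prototypes, tau=0.1,
                    w_pid=1.0, w_cfd=1.0, w_pc=1.0):
    """Combine the three adaptation terms on one representation head.
    feats: (N, d) support features; proxies: (N, d) learnable per-instance
    proxies; prototypes: (n_way, d) class prototypes. Each class must
    appear at least twice for the CFD term to be defined."""
    feats = F.normalize(feats, dim=-1)
    proxies = F.normalize(proxies, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    n = len(feats)

    # PID: contrastively match every feature to its own instance proxy
    pid = F.cross_entropy(feats @ proxies.t() / tau,
                          torch.arange(n, device=feats.device))

    # CFD: supervised contrastive term separating features of different classes
    sim = (feats @ feats.t() / tau).masked_fill(
        torch.eye(n, dtype=torch.bool, device=feats.device), float('-inf'))
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = sim.masked_fill(~same, float('-inf')).logsumexp(dim=1)
    cfd = (sim.logsumexp(dim=1) - pos).mean()

    # PC: cross-entropy classification against the class prototypes
    pc = F.cross_entropy(feats @ prototypes.t() / tau, labels)
    return w_pid * pid + w_cfd * cfd + w_pc * pc
```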

6.3. Effect of Different Classifier Learning

We also compare some variants built on our pre-trained multi-source representations using different classification modules or fine-tuning regimes:
  • Fixed-MSR: Directly leveraging the frozen multi-source representations with the NNC baseline.
  • Ft-LC: Fine-tuning a linear classification layer on each frozen representation head.
  • Ft-CC: Fine-tuning a cosine classification layer on each frozen representation head.
  • Ft-MSR-LC: Fine-tuning both the multi-source representations (projection layers) and the subsequent linear classification layers.
  • Ft-MSR-CC: Fine-tuning both the multi-source representations (projection layers) and the subsequent cosine classification layers.
The results are reported in Table 6. We observe that: (1) Fine-tuning only a classifier on the frozen representations yields performance gains in all settings; moreover, fine-tuning cosine classifiers (Ft-CC) always outperforms the linear-classifier counterparts (Ft-LC), consistent with previous findings [13,21]. (2) Further adapting the representations together with the classifiers (Ft-MSR-LC and Ft-MSR-CC) leads to additional accuracy boosts. A sketch of the two regimes follows.
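Below is a minimal sketch of the two fine-tuning regimes contrasted above. The cosine classifier follows the standard formulation; the training loop and hyper-parameters are illustrative placeholders, not those used in our experiments:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Logits are scaled cosine similarities, so class scores depend only
    on feature direction, not magnitude."""
    def __init__(self, feat_dim, n_way, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_way, feat_dim) * 0.01)
        self.scale = scale

    def forward(self, x):
        x = F.normalize(x, dim=-1)
        w = F.normalize(self.weight, dim=-1)
        return self.scale * x @ w.t()

def finetune_head(head, classifier, support_x, support_y,
                  adapt_head=False, steps=100, lr=1e-2):
    """Ft-CC: train only the classifier on frozen head features;
    Ft-MSR-CC: additionally adapt the projection head (adapt_head=True)."""
    params = list(classifier.parameters())
    if adapt_head:
        params += list(head.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(steps):
        feats = head(support_x)
        if not adapt_head:
            feats = feats.detach()  # keep the representation frozen
        loss = F.cross_entropy(classifier(feats), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return classifier
```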

6.4. Qualitative Visualization

To further qualitatively understand how the adaptation yields few-shot performance gains, we visualize the feature embeddings of the query images and the class prototypes with t-SNE [76] in Figure 10, computed on a 5-way 5-shot task sampled from the CropDisease dataset. Figure 10a–f correspond to the multi-head representations, each shown before and after the adaptation. The benefits of our adaptation method can be attributed to two aspects: (1) After adaptation, the query features of the same class become more compact, and the class clusters are more separable from each other. This indicates that LAMR encourages intra-class compactness and inter-class divergence, resulting in more discriminative features for classification. (2) The class prototypes induced from the adapted instance proxies are also more representative, so they classify the query features well. These observations also explain the significant performance boosts presented in the ablation study.
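The visualization itself is straightforward to reproduce. A minimal scikit-learn sketch (the feature arrays below are random placeholders) embeds queries and prototypes jointly so that they share one 2-D space:

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder features for one representation head (75 queries, 5 prototypes)
rng = np.random.default_rng(0)
query_feats = rng.normal(size=(75, 64)).astype(np.float32)
prototypes = rng.normal(size=(5, 64)).astype(np.float32)

# Embed queries and prototypes jointly so they share one 2-D space
joint = np.concatenate([query_feats, prototypes], axis=0)
emb = TSNE(n_components=2, perplexity=15, init='pca',
           random_state=0).fit_transform(joint)
query_2d = emb[:len(query_feats)]   # scatter these, colored by class
proto_2d = emb[len(query_feats):]   # overlay as class-prototype markers
```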
We further analyze how the features change with adaptation by visualizing class activation maps (CAMs) [77] on a 5-shot task sampled from CropDisease, which can also be regarded as a fine-grained recognition task on grape diseases. Figure 11 compares the regions that the deep CNN attends to for discrimination before and after the adaptation. We make the following observations: (1) For the healthy grape leaf, the visual cues used for prediction do not change significantly with adaptation, indicating that the pre-trained features already generalize well to such common natural objects. Still, the adaptation makes the CNN features focus more on the skeleton and edges of the leaf and less on the background. (2) For the two grape leaf diseases, our adaptation method helps the CNN concentrate on the regions most relevant to each specific disease, whereas without adaptation the CNN features still attend to generic object cues such as the leaf edge. This change indicates that our method improves discrimination toward class-specific visual cues, and this ability to steer features toward task-relevant regions helps explain the superior performance of LAMR.
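For reference, a CAM is obtained by weighting the final convolutional feature maps with the classifier weights of the target class [77]. A minimal sketch (shapes and names are illustrative):

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weight, class_idx, out_size):
    """CAM: weight the last conv feature maps (C, H, W) by the classifier
    weights (n_cls, C) of the target class, then upsample to image size."""
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], feature_maps)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
    return F.interpolate(cam[None, None], size=out_size,
                         mode='bilinear', align_corners=False)[0, 0]

# Toy example: 64 channels of 6x6 maps, 5 classes, upsampled to 84x84
maps = torch.rand(64, 6, 6)
weights = torch.rand(5, 64)
heatmap = class_activation_map(maps, weights, class_idx=2, out_size=(84, 84))
```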

7. Conclusions and Future Work

In this paper, we investigate a more practical FSL setting, namely multi-source cross-domain few-shot learning. To tackle the problem, we propose a simple yet effective multi-source representation framework that learns prior knowledge from multiple datasets and generalizes to a wide range of unseen domains. Task-specific adaptation on few-shot data is then performed to enhance instance discrimination and class discrimination by minimizing two contrastive losses on the multi-domain representations. We empirically demonstrate the superiority of our LAMR over many previous methods and strong baselines, achieving state-of-the-art results for cross-domain FSL. We also extend LAMR to single-source FSL by introducing dataset-splitting strategies that equally split one source dataset into sub-domains; the empirical results show that applying simple "random splitting" improves conventional cosine-similarity-based classifiers in FSL under a fixed single-source data budget. Extensive ablation studies and analyses illustrate that each component of our method effectively facilitates few-shot transfer.

Our method also has some limitations, which suggest promising future directions. First, we conduct adaptation by either fine-tuning or freezing the full backbone; it would be promising for future work to seek more flexible adaptation methods that select a subset of layers or parameters to adjust, conditioned on the given task. Second, a prior study [65] has shown that the choice of the source training dataset has a large impact on downstream performance, and we acknowledge that not every dataset in the multi-source pool contributes equally to a given target task. Further improvements toward more scalable transfer could therefore consider the similarity between the source and target domains. Notably, these two limitations also apply to most other methods that focus on representation learning from source data or adaptation on few-shot data. Finally, to our knowledge, multi-source few-shot learning has not yet been explored for other fundamental computer vision applications, such as segmentation and detection; developing new benchmarks for those problems would foster future progress in this field.

Author Contributions

Conceptualization, G.L.; methodology, G.L.; software, G.L. and Z.Z.; validation, G.L. and Z.Z.; investigation, G.L. and Z.Z.; writing—original draft preparation, G.L.; writing—review and editing, G.L., Z.Z. and X.F.; supervision, X.F.; project administration, X.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in this study are publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  2. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  3. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  4. Fei-Fei, L.; Fergus, R.; Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 594–611. [Google Scholar] [CrossRef] [PubMed]
  5. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 8–13 December 2014; pp. 3320–3328. [Google Scholar]
  6. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  7. Finn, C.; Abbeel, P.; Levine, S. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1126–1135. [Google Scholar]
  8. Ravi, S.; Larochelle, H. Optimization as a model for few-shot learning. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  9. Snell, J.; Swersky, K.; Zemel, R. Prototypical Networks for Few-shot Learning. Adv. Neural Inf. Process. Syst. 2017, 30, 4077–4087. [Google Scholar]
  10. Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; Wierstra, D. Matching Networks for One Shot Learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3630–3638. [Google Scholar]
  11. Thrun, S. Lifelong learning algorithms. In Learning to Learn; Springer: Berlin/Heidelberg, Germany, 1998; pp. 181–209. [Google Scholar]
  12. Chen, Y.; Liu, Z.; Xu, H.; Darrell, T.; Wang, X. Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 9062–9071. [Google Scholar]
  13. Chen, W.Y.; Liu, Y.C.; Kira, Z.; Wang, Y.C.F.; Huang, J.B. A Closer Look at Few-shot Classification. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  14. Tian, Y.; Wang, Y.; Krishnan, D.; Tenenbaum, J.B.; Isola, P. Rethinking few-shot image classification: A good embedding is all you need? In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020. [Google Scholar]
  15. Wang, Y.; Chao, W.L.; Weinberger, K.Q.; van der Maaten, L. SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning. arXiv 2019, arXiv:1911.04623. [Google Scholar]
  16. Dhillon, G.S.; Chaudhari, P.; Ravichandran, A.; Soatto, S. A Baseline for Few-Shot Image Classification. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
  17. Raghu, A.; Raghu, M.; Bengio, S.; Vinyals, O. Rapid learning or feature reuse? towards understanding the effectiveness of maml. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
  18. Gidaris, S.; Bursuc, A.; Komodakis, N.; Perez, P.; Cord, M. Boosting Few-Shot Visual Learning with Self-Supervision. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  19. Afrasiyabi, A.; Lalonde, J.F.; Gagné, C. Associative Alignment for Few-shot Image Classification. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020. [Google Scholar]
  20. Oreshkin, B.; Rodríguez López, P.; Lacoste, A. TADAM: Task dependent adaptive metric for improved few-shot learning. Adv. Neural Inf. Process. Syst. 2018, 31, 721–731. [Google Scholar]
  21. Guo, Y.; Codella, N.C.; Karlinsky, L.; Codella, J.V.; Smith, J.R.; Saenko, K.; Rosing, T.; Feris, R. A broader study of cross-domain few-shot learning. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 124–141. [Google Scholar]
  22. Triantafillou, E.; Zhu, T.; Dumoulin, V.; Lamblin, P.; Evci, U.; Xu, K.; Goroshin, R.; Gelada, C.; Swersky, K.; Manzagol, P.A.; et al. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
  23. Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226. [Google Scholar] [CrossRef]
  24. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef] [PubMed]
  25. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic). arXiv 2019, arXiv:1902.03368. [Google Scholar]
  26. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106. [Google Scholar]
  27. Dvornik, N.; Schmid, C.; Mairal, J. Selecting relevant features from a multi-domain representation for few-shot classification. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 769–786. [Google Scholar]
  28. Liu, L.; Hamilton, W.L.; Long, G.; Jiang, J.; Larochelle, H. A Universal Representation Transformer Layer for Few-Shot Image Classification. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual Event, 3–7 May 2021. [Google Scholar]
  29. Li, W.H.; Liu, X.; Bilen, H. Universal representation learning from multiple domains for few-shot classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 9526–9535. [Google Scholar]
  30. Liu, G.; Zhang, Z.; Cai, F.; Liu, D.; Fang, X. Learning and Adapting Diverse Representations for Cross-domain Few-shot Learning. In Proceedings of the 2023 IEEE International Conference on Data Mining Workshops (ICDMW), Shanghai, China, 1–4 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 294–303. [Google Scholar]
  31. Bontonou, M.; Béthune, L.; Gripon, V. Predicting the generalization ability of a few-shot classifier. Information 2021, 12, 29. [Google Scholar] [CrossRef]
  32. Zhou, F.; Wang, P.; Zhang, L.; Wei, W.; Zhang, Y. Revisiting prototypical network for cross domain few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 20061–20070. [Google Scholar]
  33. Zhao, L.; Liu, G.; Guo, D.; Li, W.; Fang, X. Boosting Few-shot visual recognition via saliency-guided complementary attention. Neurocomputing 2022, 507, 412–427. [Google Scholar] [CrossRef]
  34. Liu, C.; Fu, Y.; Xu, C.; Yang, S.; Li, J.; Wang, C.; Zhang, L. Learning a few-shot embedding model with contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 8635–8643. [Google Scholar]
  35. Rebuffi, S.A.; Bilen, H.; Vedaldi, A. Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 8119–8127. [Google Scholar]
  36. Perez, E.; Strub, F.; De Vries, H.; Dumoulin, V.; Courville, A. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  37. Lifchitz, Y.; Avrithis, Y.; Picard, S.; Bursuc, A. Dense Classification and Implanting for Few-Shot Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  38. Yazdanpanah, M.; Rahman, A.A.; Chaudhary, M.; Desrosiers, C.; Havaei, M.; Belilovsky, E.; Kahou, S.E. Revisiting Learnable Affines for Batch Norm in Few-Shot Transfer Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 9109–9118. [Google Scholar]
  39. Das, D.; Yun, S.; Porikli, F. ConfeSS: A framework for single source cross-domain few-shot learning. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual Event, 25–29 April 2022. [Google Scholar]
  40. Li, W.H.; Liu, X.; Bilen, H. Cross-domain Few-shot Learning with Task-specific Adapters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 7161–7170. [Google Scholar]
  41. Liu, G.; Zhao, L.; Fang, X. PDA: Proxy-based domain adaptation for few-shot image recognition. Image Vis. Comput. 2021, 110, 104164. [Google Scholar] [CrossRef]
  42. Soudy, M.; Afify, Y.M.; Badr, N. GenericConv: A Generic Model for Image Scene Classification Using Few-Shot Learning. Information 2022, 13, 315. [Google Scholar] [CrossRef]
  43. Csányi, G.M.; Vági, R.; Megyeri, A.; Fülöp, A.; Nagy, D.; Vadász, J.P.; Üveges, I. Can Triplet Loss Be Used for Multi-Label Few-Shot Classification? A Case Study. Information 2023, 14, 520. [Google Scholar] [CrossRef]
  44. Cai, J.; Wu, L.; Wu, D.; Li, J.; Wu, X. Multi-Dimensional Information Alignment in Different Modalities for Generalized Zero-Shot and Few-Shot Learning. Information 2023, 14, 148. [Google Scholar] [CrossRef]
  45. Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep Domain Confusion: Maximizing for Domain Invariance. arXiv 2014, arXiv:1412.3474. [Google Scholar]
  46. Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 97–105. [Google Scholar]
  47. Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 7–9 July 2015; pp. 1180–1189. [Google Scholar]
  48. Peng, X.; Bai, Q.; Xia, X.; Huang, Z.; Saenko, K.; Wang, B. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1406–1415. [Google Scholar]
  49. Xu, R.; Chen, Z.; Zuo, W.; Yan, J.; Lin, L. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 3964–3973. [Google Scholar]
  50. Wu, Z.; Xiong, Y.; Yu, S.X.; Lin, D. Unsupervised Feature Learning via Non-Parametric Instance Discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  51. Oord, A.v.d.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748. [Google Scholar]
  52. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738. [Google Scholar]
  53. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning (ICML), Virtual Event, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
  54. Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 18661–18673. [Google Scholar]
  55. Bilen, H.; Vedaldi, A. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv 2017, arXiv:1701.07275. [Google Scholar]
  56. Guo, Y.; Li, Y.; Wang, L.; Rosing, T. Depthwise convolution is all you need for learning multiple visual domains. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 29–31 January 2019; Volume 33, pp. 8368–8375. [Google Scholar]
  57. Dvornik, N.; Schmid, C.; Mairal, J. Diversity with Cooperation: Ensemble Methods for Few-Shot Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  58. Chen, Z.; Badrinarayanan, V.; Lee, C.Y.; Rabinovich, A. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 794–803. [Google Scholar]
  59. Liu, G.; Zhao, L.; Li, W.; Guo, D.; Fang, X. Class-wise Metric Scaling for Improved Few-Shot Classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 586–595. [Google Scholar]
  60. Yu, C.; Zhao, X.; Zheng, Q.; Zhang, P.; You, X. Hierarchical Bilinear Pooling for Fine-Grained Visual Recognition. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
  61. Kim, J.; On, K.W.; Lim, W.; Kim, J.; Ha, J.; Zhang, B. Hadamard Product for Low-rank Bilinear Pooling. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017. [Google Scholar]
  62. Gidaris, S.; Komodakis, N. Dynamic Few-Shot Visual Learning without Forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  63. Qi, H.; Brown, M.; Lowe, D.G. Low-Shot Learning with Imprinted Weights. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  64. Lee, K.; Maji, S.; Ravichandran, A.; Soatto, S. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10657–10665. [Google Scholar]
  65. Sbai, O.; Couprie, C.; Aubry, M. Impact of base dataset design on few-shot image classification. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 597–613. [Google Scholar]
  66. Wah, C.; Branson, S.; Welinder, P.; Perona, P.; Belongie, S. The Caltech-UCSD Birds-200-2011 Dataset. In Technical Report CNS-TR-2011-001; California Institute of Technology: Pasadena, CA, USA, 2011. [Google Scholar]
  67. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  68. Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; Vedaldi, A. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3606–3613. [Google Scholar]
  69. Griffin, G.; Holub, A.; Perona, P. Caltech-256 Object Category Dataset; Technical Report; California Institute of Technology: Pasadena, CA, USA, 2007. [Google Scholar]
  70. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed]
  71. Liu, B.; Cao, Y.; Lin, Y.; Li, Q.; Zhang, Z.; Long, M.; Hu, H. Negative Margin Matters: Understanding Margin in Few-shot Classification. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020. [Google Scholar]
  72. Yang, S.; Liu, L.; Xu, M. Free Lunch for Few-shot Learning: Distribution Calibration. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021. [Google Scholar]
  73. Guo, D.; Tian, L.; Zhao, H.; Zhou, M.; Zha, H. Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport. Adv. Neural Inf. Process. Syst. 2022, 35, 6996–7010. [Google Scholar]
  74. Liang, H.; Zhang, Q.; Dai, P.; Lu, J. Boosting the generalization capability in cross-domain few-shot learning via noise-enhanced supervised autoencoder. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 9424–9434. [Google Scholar]
  75. Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2624–2637. [Google Scholar] [CrossRef] [PubMed]
  76. Maaten, L.V.D.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  77. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
Figure 1. Illustration of our approach. First, we pre-train efficient multi-domain feature representations on the abundant data from semantically different domains. Then, given a few-shot task (such as remote sensing scene recognition), we perform adaptation on the pre-trained multi-domain representations by optimizing domain-specific parameters with few-shot data (the support set).
Figure 2. Multi-source representation learning from multiple source datasets. The structure contains a shared CNN backbone f_φ and the multi-head projection layers {P_{θ_i}}_{i=1}^{D}, which produce a feature set {F_i(x)}_{i=1}^{D}. We train the representations by applying a cross-entropy loss to the multi-head classification tasks. The five input images correspond to the five source datasets of BSCD-FSL [21]. Best viewed in color.
Figure 3. Structure of the low-rank bilinear pooling layer. This is used as the projection layer for achieving domain specialization. GAP denotes global average pooling.
Figure 4. Adapting representations for recognizing previously unseen categories. The adaptation is performed on each representation head. Best viewed in color.
Figure 5. Image examples of datasets in the Broader Study of Cross-Domain Few-Shot Learning (BSCD-FSL) benchmark. The five training datasets are mini-ImageNet, CUB, CIFAR100, DTD, and Caltech256. The four testing domains are CropDiseases, EuroSAT, ISIC2018, and ChestX.
Figure 6. The detailed structure of ResNet12. It contains four residual blocks, each made up of three {3×3 convolution (Conv), batch normalization (BN), LeakyReLU (0.1)} units and one 2×2 max-pooling layer.
Figure 7. Overall performance comparison across different methods in the multi-source setting of the BSCD-FSL benchmark.
Figure 8. Validation accuracy for different splitting strategies and numbers of partitioned sub-domains on mini-ImageNet. (a) 5-way 1-shot validation performance; (b) 5-way 5-shot validation performance. The best performance is achieved with 2 splits in both the 1-shot and 5-shot settings, reflecting a trade-off between the number of split domains and the number of classes (or data) per domain under a fixed data budget on the mini-ImageNet benchmark.
Figure 9. Evaluating the effectiveness of each isolated component and different combinations of the three components (PID, CFD, and PC) on cross-domain few-shot transfer performance. (a) Mean accuracy across different datasets and shot levels for multi-source FSL. (b) Mean accuracy across different datasets and shot levels for single-source FSL. The full adaptation achieves the best mean accuracy in both multi-source and single-source settings.
Figure 10. The t-SNE visualization of the feature distribution on the multi-head representations before and after adaptation by our method. (a–f) show the six representation spaces learned on the BSCD-FSL benchmark. Best viewed in color with zoom-in.
Figure 11. Class activation maps (CAMs) showing the regions that the deep network focuses on before and after the adaptation, for image examples from a 5-shot task in CropDisease. Best viewed in color.
Table 1. The results of multi-source few-shot learning on the BSCD-FSL benchmark. Best results are marked in bold.
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| Union-CC [62] | 26.08 ± 0.41 | 31.14 ± 0.43 | 33.54 ± 0.45 | 43.35 ± 0.55 | 51.71 ± 0.58 | 54.34 ± 0.53 |
| Ensemble | 26.45 ± 0.44 | 30.81 ± 0.45 | 33.47 ± 0.46 | 44.49 ± 0.57 | 52.49 ± 0.56 | 55.06 ± 0.54 |
| All-EBDs [21] | 26.74 ± 0.42 | 32.77 ± 0.47 | 38.07 ± 0.50 | 46.86 ± 0.60 | 58.57 ± 0.59 | 66.04 ± 0.56 |
| IMS-f [21] | 25.50 ± 0.45 | 31.49 ± 0.47 | 36.40 ± 0.50 | 45.84 ± 0.62 | 61.50 ± 0.58 | 68.64 ± 0.53 |
| FiLM-pf [35] | 26.79 ± 0.45 | 30.91 ± 0.45 | 33.80 ± 0.47 | 47.06 ± 0.56 | 55.43 ± 0.56 | 57.73 ± 0.53 |
| SUR [27] | 26.81 ± 0.46 | 30.98 ± 0.45 | 33.85 ± 0.46 | 47.37 ± 0.56 | 55.59 ± 0.59 | 57.92 ± 0.53 |
| URL [29] | 26.49 ± 0.45 | 30.40 ± 0.44 | 33.75 ± 0.46 | 46.00 ± 0.58 | 53.87 ± 0.58 | 56.32 ± 0.54 |
| URL+Ad [29] | 26.68 ± 0.44 | 31.41 ± 0.44 | 36.41 ± 0.45 | 48.10 ± 0.60 | 58.84 ± 0.63 | 64.16 ± 0.58 |
| TSA [40] | 27.04 ± 0.43 | 33.31 ± 0.47 | 37.15 ± 0.48 | 49.40 ± 0.61 | 62.34 ± 0.60 | 67.73 ± 0.56 |
| LAMR | 27.37 ± 0.41 | 34.16 ± 0.48 | 39.21 ± 0.51 | 52.58 ± 0.61 | 65.33 ± 0.55 | 70.52 ± 0.53 |
| LAMR++ | 28.38 ± 0.45 | 36.77 ± 0.50 | 42.22 ± 0.54 | 56.26 ± 0.66 | 68.52 ± 0.55 | 73.89 ± 0.52 |

| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| Union-CC [62] | 81.01 ± 0.56 | 86.05 ± 0.48 | 87.30 ± 0.41 | 90.22 ± 0.54 | 93.97 ± 0.36 | 95.09 ± 0.32 |
| Ensemble | 84.03 ± 0.58 | 88.10 ± 0.49 | 88.44 ± 0.48 | 91.89 ± 0.51 | 95.04 ± 0.37 | 96.06 ± 0.30 |
| All-EBDs [21] | 81.29 ± 0.62 | 89.90 ± 0.41 | 92.76 ± 0.34 | 90.82 ± 0.48 | 96.64 ± 0.25 | 98.14 ± 0.18 |
| IMS-f [21] | 83.56 ± 0.59 | 91.22 ± 0.38 | 93.85 ± 0.30 | 90.66 ± 0.48 | 97.18 ± 0.24 | 98.43 ± 0.16 |
| FiLM-pf [35] | 83.93 ± 0.58 | 87.82 ± 0.51 | 87.94 ± 0.48 | 93.73 ± 0.46 | 96.18 ± 0.32 | 97.26 ± 0.25 |
| SUR [27] | 84.35 ± 0.59 | 88.32 ± 0.50 | 88.42 ± 0.49 | 93.72 ± 0.46 | 96.16 ± 0.33 | 97.26 ± 0.25 |
| URL [29] | 83.74 ± 0.58 | 88.52 ± 0.48 | 89.13 ± 0.45 | 92.13 ± 0.50 | 95.18 ± 0.36 | 96.21 ± 0.27 |
| URL+Ad [29] | 84.57 ± 0.55 | 91.66 ± 0.36 | 93.66 ± 0.31 | 93.12 ± 0.44 | 97.23 ± 0.24 | 98.51 ± 0.15 |
| TSA [40] | 85.10 ± 0.55 | 92.25 ± 0.34 | 94.24 ± 0.29 | 93.53 ± 0.44 | 97.58 ± 0.22 | 98.81 ± 0.13 |
| LAMR | 86.92 ± 0.47 | 93.65 ± 0.29 | 95.42 ± 0.23 | 94.61 ± 0.39 | 98.26 ± 0.18 | 99.12 ± 0.11 |
| LAMR++ | 87.38 ± 0.47 | 94.40 ± 0.26 | 96.31 ± 0.21 | 94.84 ± 0.39 | 98.57 ± 0.16 | 99.30 ± 0.10 |
Table 2. Comparison to previous methods on mini-ImageNet. Our LAMR is trained following the optimal splitting strategy selected on the validation set, allowing a fair comparison with other methods under the same training data budget. All methods use ResNets as the feature backbone. The best result in each setting is marked in bold.
| Type | Method | Backbone | 5-Way 1-Shot | 5-Way 5-Shot |
| --- | --- | --- | --- | --- |
| w/o Adapt | ProtoNet [9] by [64] | ResNet12 | 59.25 ± 0.64 | 75.60 ± 0.48 |
| | MetaOptNet [64] | ResNet12 | 62.64 ± 0.62 | 78.63 ± 0.46 |
| | CC [62] by [37] | ResNet12 | 58.61 ± 0.18 | 76.40 ± 0.13 |
| | baseline [13] | ResNet18 | 51.75 ± 0.80 | 74.27 ± 0.63 |
| | Neg-Cosine [71] | ResNet12 | 63.85 ± 0.81 | 81.57 ± 0.56 |
| | Embed-Distill [14] | ResNet12 | 64.82 ± 0.60 | 82.14 ± 0.43 |
| | Meta-Baseline [12] | ResNet12 | 63.17 ± 0.23 | 79.26 ± 0.17 |
| | Robust20 [57] | ResNet18 | 63.95 ± 0.42 | 81.59 ± 0.42 |
| w/ Adapt | TADAM [20] | ResNet12 | 58.50 ± 0.30 | 76.70 ± 0.30 |
| | Centroid-Align [19] | ResNet18 | 59.88 ± 0.67 | 80.35 ± 0.73 |
| | Implant [37] | ResNet12 | 62.53 ± 0.19 | 79.77 ± 0.19 |
| | DC+SUR [27] | ResNet12 | 63.13 ± 0.63 | 80.04 ± 0.41 |
| | Free-Lunch [72] | ResNet12 | 64.73 ± 0.44 | 81.15 ± 0.42 |
| | H-OT [73] | ResNet12 | 65.63 ± 0.32 | 82.87 ± 0.43 |
| | LAMR (ours) | ResNet12 | 65.73 ± 0.43 | 83.37 ± 0.29 |
| | LAMR++ (ours) | ResNet12 | 65.90 ± 0.43 | 83.84 ± 0.29 |
Table 4. Comparing transferability of feature representations trained by different models. The best result in each setting is marked in bold.
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| Single-source | 25.90 ± 0.41 | 30.16 ± 0.45 | 32.76 ± 0.45 | 43.84 ± 0.55 | 51.98 ± 0.57 | 54.34 ± 0.53 |
| Merged-multi-sources | 26.08 ± 0.41 | 31.14 ± 0.43 | 33.54 ± 0.45 | 43.35 ± 0.55 | 51.71 ± 0.58 | 54.34 ± 0.53 |
| Our Framework | 25.96 ± 0.44 | 30.21 ± 0.43 | 32.58 ± 0.43 | 48.61 ± 0.60 | 58.13 ± 0.59 | 60.54 ± 0.57 |

| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| Single-source | 78.64 ± 0.61 | 84.05 ± 0.54 | 85.03 ± 0.48 | 88.27 ± 0.59 | 92.57 ± 0.44 | 94.19 ± 0.34 |
| Merged-multi-sources | 81.01 ± 0.56 | 86.05 ± 0.48 | 87.30 ± 0.41 | 90.22 ± 0.54 | 93.97 ± 0.36 | 95.09 ± 0.32 |
| Our Framework | 84.76 ± 0.51 | 89.36 ± 0.41 | 89.88 ± 0.39 | 91.89 ± 0.50 | 95.22 ± 0.33 | 96.03 ± 0.27 |
Table 5. Ablation study on 5-way K-shot performance by validating three components of the adaptation objective, including PID (parametric instance discrimination), CFD (class feature discrimination), and PC (prototypical classification). The best result in each setting is marked in bold.
| K-Shot | PID | CFD | PC | ChestX (Multi) | ISIC (Multi) | EuroSAT (Multi) | CropDiseases (Multi) | ChestX (Single) | ISIC (Single) | EuroSAT (Single) | CropDiseases (Single) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | | | | 25.96 ± 0.44 | 48.61 ± 0.60 | 84.76 ± 0.51 | 91.89 ± 0.50 | 26.12 ± 0.42 | 42.96 ± 0.56 | 80.10 ± 0.62 | 89.38 ± 0.55 |
| 5 | ✓ | | | 26.89 ± 0.45 | 50.61 ± 0.61 | 85.06 ± 0.51 | 92.64 ± 0.44 | 26.96 ± 0.43 | 45.12 ± 0.56 | 81.33 ± 0.61 | 91.72 ± 0.48 |
| 5 | | ✓ | | 26.84 ± 0.41 | 51.92 ± 0.63 | 87.12 ± 0.47 | 94.51 ± 0.40 | 26.96 ± 0.44 | 48.22 ± 0.61 | 84.36 ± 0.55 | 93.94 ± 0.41 |
| 5 | | | ✓ | 27.06 ± 0.42 | 52.14 ± 0.61 | 86.42 ± 0.48 | 94.20 ± 0.41 | 27.49 ± 0.43 | 47.78 ± 0.59 | 83.50 ± 0.57 | 93.40 ± 0.43 |
| 5 | ✓ | | ✓ | 27.17 ± 0.42 | 52.12 ± 0.61 | 86.50 ± 0.48 | 94.19 ± 0.41 | 27.43 ± 0.42 | 47.90 ± 0.59 | 83.54 ± 0.56 | 93.48 ± 0.42 |
| 5 | | ✓ | ✓ | 27.39 ± 0.42 | 52.43 ± 0.61 | 86.91 ± 0.47 | 94.62 ± 0.39 | 27.56 ± 0.44 | 48.54 ± 0.59 | 84.42 ± 0.55 | 94.15 ± 0.39 |
| 5 | ✓ | ✓ | | 27.35 ± 0.46 | 52.53 ± 0.64 | 86.77 ± 0.47 | 94.48 ± 0.38 | 27.55 ± 0.44 | 48.59 ± 0.59 | 84.32 ± 0.55 | 94.05 ± 0.40 |
| 5 | ✓ | ✓ | ✓ | 27.37 ± 0.41 | 52.58 ± 0.61 | 86.92 ± 0.47 | 94.61 ± 0.39 | 27.66 ± 0.44 | 48.66 ± 0.60 | 84.46 ± 0.55 | 94.15 ± 0.39 |
| 20 | | | | 30.21 ± 0.43 | 58.13 ± 0.59 | 89.36 ± 0.41 | 95.22 ± 0.33 | 30.92 ± 0.43 | 50.41 ± 0.57 | 84.78 ± 0.53 | 93.73 ± 0.40 |
| 20 | ✓ | | | 31.60 ± 0.46 | 58.87 ± 0.58 | 89.87 ± 0.40 | 95.83 ± 0.27 | 32.22 ± 0.45 | 53.49 ± 0.56 | 86.45 ± 0.51 | 95.42 ± 0.33 |
| 20 | | ✓ | | 30.95 ± 0.43 | 63.90 ± 0.60 | 93.65 ± 0.29 | 98.24 ± 0.18 | 31.32 ± 0.44 | 61.09 ± 0.64 | 91.99 ± 0.34 | 98.01 ± 0.18 |
| 20 | | | ✓ | 33.45 ± 0.46 | 63.96 ± 0.57 | 92.97 ± 0.31 | 97.83 ± 0.20 | 33.20 ± 0.48 | 61.09 ± 0.59 | 91.49 ± 0.36 | 97.80 ± 0.19 |
| 20 | ✓ | | ✓ | 33.33 ± 0.48 | 64.01 ± 0.56 | 92.99 ± 0.31 | 97.84 ± 0.20 | 33.33 ± 0.48 | 61.16 ± 0.59 | 91.46 ± 0.36 | 97.78 ± 0.19 |
| 20 | | ✓ | ✓ | 33.64 ± 0.47 | 65.19 ± 0.57 | 93.60 ± 0.29 | 98.26 ± 0.18 | 33.64 ± 0.47 | 62.13 ± 0.61 | 92.18 ± 0.33 | 98.20 ± 0.17 |
| 20 | ✓ | ✓ | | 34.26 ± 0.49 | 64.69 ± 0.56 | 93.52 ± 0.28 | 98.15 ± 0.16 | 34.06 ± 0.49 | 62.29 ± 0.60 | 92.21 ± 0.34 | 98.07 ± 0.18 |
| 20 | ✓ | ✓ | ✓ | 34.16 ± 0.48 | 65.33 ± 0.55 | 93.65 ± 0.29 | 98.26 ± 0.18 | 33.82 ± 0.50 | 62.38 ± 0.60 | 92.21 ± 0.33 | 98.19 ± 0.17 |
| 50 | | | | 32.58 ± 0.43 | 60.54 ± 0.57 | 89.88 ± 0.39 | 96.03 ± 0.27 | 33.87 ± 0.46 | 53.15 ± 0.54 | 85.71 ± 0.50 | 95.06 ± 0.30 |
| 50 | ✓ | | | 34.69 ± 0.47 | 61.18 ± 0.54 | 90.63 ± 0.38 | 96.69 ± 0.23 | 36.01 ± 0.46 | 56.70 ± 0.55 | 87.80 ± 0.45 | 96.67 ± 0.24 |
| 50 | | ✓ | | 33.47 ± 0.43 | 68.51 ± 0.68 | 95.48 ± 0.23 | 99.19 ± 0.11 | 34.53 ± 0.44 | 63.74 ± 0.77 | 94.26 ± 0.28 | 99.11 ± 0.11 |
| 50 | | | ✓ | 36.59 ± 0.48 | 66.46 ± 0.56 | 94.37 ± 0.26 | 98.50 ± 0.16 | 38.50 ± 0.50 | 66.56 ± 0.56 | 93.86 ± 0.28 | 98.77 ± 0.13 |
| 50 | ✓ | | ✓ | 37.61 ± 0.49 | 67.05 ± 0.55 | 94.47 ± 0.26 | 98.56 ± 0.15 | 38.44 ± 0.51 | 67.20 ± 0.55 | 93.88 ± 0.28 | 98.84 ± 0.13 |
| 50 | | ✓ | ✓ | 34.91 ± 0.48 | 69.40 ± 0.58 | 95.45 ± 0.23 | 99.10 ± 0.12 | 37.24 ± 0.47 | 67.71 ± 0.60 | 94.44 ± 0.27 | 99.16 ± 0.11 |
| 50 | ✓ | ✓ | | 37.97 ± 0.52 | 69.08 ± 0.55 | 95.23 ± 0.23 | 99.03 ± 0.10 | 39.07 ± 0.48 | 68.43 ± 0.56 | 94.44 ± 0.26 | 99.09 ± 0.11 |
| 50 | ✓ | ✓ | ✓ | 39.21 ± 0.51 | 70.52 ± 0.53 | 95.42 ± 0.23 | 99.12 ± 0.11 | 38.92 ± 0.50 | 68.92 ± 0.56 | 94.46 ± 0.27 | 99.16 ± 0.11 |
Table 6. Quantitative analysis of different classifiers that are incorporated into our pre-trained multi-source representations during the meta-test stage.
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| Fixed-MSR | 25.96 ± 0.44 | 30.20 ± 0.46 | 32.58 ± 0.46 | 48.61 ± 0.62 | 58.13 ± 0.61 | 60.54 ± 0.57 |
| Ft-LC | 25.68 ± 0.44 | 30.53 ± 0.46 | 33.64 ± 0.48 | 51.32 ± 0.63 | 61.52 ± 0.57 | 64.18 ± 0.56 |
| Ft-CC | 26.75 ± 0.44 | 32.69 ± 0.48 | 37.19 ± 0.53 | 51.51 ± 0.64 | 63.59 ± 0.57 | 67.75 ± 0.55 |
| Ft-MSR-LC | 26.25 ± 0.45 | 31.10 ± 0.46 | 34.26 ± 0.48 | 51.81 ± 0.62 | 62.56 ± 0.57 | 65.07 ± 0.56 |
| Ft-MSR-CC | 27.04 ± 0.45 | 33.31 ± 0.49 | 38.26 ± 0.52 | 52.12 ± 0.63 | 64.52 ± 0.56 | 70.00 ± 0.53 |

| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
| --- | --- | --- | --- | --- | --- | --- |
| Fixed-MSR | 84.76 ± 0.51 | 89.36 ± 0.40 | 89.88 ± 0.39 | 91.89 ± 0.47 | 95.22 ± 0.29 | 96.03 ± 0.25 |
| Ft-LC | 85.28 ± 0.48 | 91.01 ± 0.35 | 91.85 ± 0.32 | 92.85 ± 0.42 | 96.78 ± 0.22 | 97.64 ± 0.16 |
| Ft-CC | 86.71 ± 0.48 | 93.08 ± 0.29 | 94.52 ± 0.25 | 94.11 ± 0.39 | 97.93 ± 0.17 | 98.83 ± 0.11 |
| Ft-MSR-LC | 85.82 ± 0.48 | 91.62 ± 0.33 | 92.48 ± 0.30 | 93.51 ± 0.40 | 97.21 ± 0.20 | 98.00 ± 0.15 |
| Ft-MSR-CC | 86.77 ± 0.48 | 93.48 ± 0.28 | 95.09 ± 0.23 | 94.45 ± 0.38 | 98.18 ± 0.16 | 99.09 ± 0.10 |