Task-Adaptive Multi-Source Representations for Few-Shot Image Recognition †
Abstract
1. Introduction
- We develop a novel two-stage learning scheme, namely learning and adapting representations (LAMR), for effectively addressing cross-domain few-shot learning (FSL) tasks, especially recognition scenarios beyond natural images such as remote sensing and medical imagery.
- To achieve multi-source representations, we propose a parameter-efficient multi-head framework, which can further support simple but effective transfer to different downstream FSL tasks.
- To achieve task-specific transfer, we propose a few-shot adaptation method for improving model discrimination towards unseen classes by imposing instance discrimination and class discrimination at the feature level.
- LAMR can achieve state-of-the-art results on cross-domain FSL benchmarks in the multi-source setting.
- We extend LAMR to single-source FSL by introducing dataset-splitting strategies that evenly split one source dataset into sub-domains. The empirical results show that applying simple “random splitting” can improve conventional cosine-similarity-based classifiers in FSL under a fixed single-source data budget. LAMR also achieves superior performance on the (single-source) BSCD-FSL benchmark and competitive results on mini-ImageNet.
- We conduct more careful ablation studies, which verify that the performance gains stem not only from the strong transferability of the proposed multi-source representations but also from each component of the few-shot adaptation objective.
- Discussions of and comparisons with additional related works, especially few-shot learning with multiple source domains, are included.
- More feature visualizations and analyses are included. Limitations and future directions are discussed.
2. Related Works
2.1. Few-Shot Learning
2.2. Domain Adaptation
2.3. Contrastive Learning
2.4. Multi-Task Learning
3. Preliminary
3.1. Task Formulation
3.2. Transfer Learning Baseline
3.2.1. FT Baseline
3.2.2. NNC Baseline
4. Approach
4.1. Multi-Source Representation Learning
4.2. Adapting Representations on Few-Shot Data
4.2.1. Parametric Instance Discrimination
4.2.2. Class Feature Discrimination
4.2.3. Prototypical Classification
4.2.4. Implementation of Total Adaptation Loss
4.2.5. Query Prediction
4.3. Extension to Single-Source FSL
- Random splitting. The original classes are randomly split into equally sized sub-datasets.
- Clustering splitting. One natural choice for splitting classes would be K-means clustering on class prototypes computed over image features, using a representation pre-trained on the full set of classes. However, K-means may produce unbalanced partitions. Inspired by the previous method [65], we instead iteratively split each current dataset in half along the principal component computed over its class prototypes. After N splitting iterations, the original dataset is divided into 2^N subsets, each of which can be regarded as a distinct domain composed of classes that are close to each other.
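The iterative principal-component split described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; `split_once`, `split_dataset`, and the prototype matrix are illustrative names.

```python
import numpy as np

def split_once(prototypes, class_ids):
    """Split one group of classes in half along the first principal
    component of their prototypes; the split is balanced by construction."""
    centered = prototypes - prototypes.mean(axis=0)
    # First principal direction via SVD of the centered prototypes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]
    order = np.argsort(scores)
    half = len(order) // 2
    return ([class_ids[i] for i in order[:half]],
            [class_ids[i] for i in order[half:]])

def split_dataset(prototypes, n_iters):
    """After n_iters rounds, the classes are partitioned into
    2**n_iters balanced pseudo-domains."""
    groups = [list(range(len(prototypes)))]
    for _ in range(n_iters):
        new_groups = []
        for g in groups:
            a, b = split_once(prototypes[g], g)
            new_groups.extend([a, b])
        groups = new_groups
    return groups
```

Unlike K-means, every split divides the current class set exactly in half, so the resulting pseudo-domains stay balanced.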
5. Experiments and Results
5.1. Benchmark Datasets
5.1.1. Broader Study of Cross-Domain Few-Shot Learning (BSCD-FSL)
5.1.2. Mini-ImageNet
5.2. Implementation Details
5.2.1. Network Architecture
5.2.2. Training Details
5.2.3. Evaluation Protocol
5.3. Results on Multi-Source FSL
- Union-CC [62]: A baseline that trains a single feature extractor on the union of all training data with a cosine classifier and tests it with the NNC classifier.
- Ensemble: A baseline that trains a separate feature extractor on each dataset and tests with the average prediction of the NNC classifiers built on them.
- All-EBDs [21]: A method that concatenates the feature vectors from all layers of all the separate feature extractors to train a linear classifier.
- IMS-f [21]: A greedy selection method that iteratively searches for the best subset of features across all layers of all the separate feature extractors for a given few-shot task; the selected feature vectors are then concatenated to train a linear classifier.
- SUR [27]: A feature selection method that learns a linear combination of the domain-specific representations produced by FiLM-pf.
- URL [29]: A single feature extractor distilled from the separate multi-domain networks and tested with the NNC classifier.
- URL+Ad [29]: An adaptation method that attaches a pre-classifier feature mapping (a linear layer) to URL and optimizes it on the few-shot data.
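As a point of reference for the baselines above, the nearest-centroid classification with cosine similarity used at test time can be sketched as follows. This is a minimal sketch with illustrative names (`nnc_predict`), not the exact evaluation code.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Scale each row to unit L2 norm so dot products become cosines."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def nnc_predict(support_feats, support_labels, query_feats):
    """Nearest-centroid classification: each class is represented by the
    L2-normalized mean of its support features, and each query is assigned
    to the centroid with the highest cosine similarity."""
    classes = np.unique(support_labels)
    centroids = np.stack([
        support_feats[support_labels == c].mean(axis=0) for c in classes])
    sims = l2_normalize(query_feats) @ l2_normalize(centroids).T
    return classes[np.argmax(sims, axis=1)]
```

An ensemble variant (as in the Ensemble baseline) would average the similarity matrices produced by each per-dataset feature extractor before taking the argmax.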
5.4. Results on Single-Source FSL
5.4.1. Validating Splitting Strategy for Single-Source FSL
5.4.2. Results on Mini-ImageNet
5.4.3. Results on BSCD-FSL
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
|---|---|---|---|---|---|---|
| ProtoNet ¹ [9] | 24.05 ± 1.01 | 28.21 ± 1.15 | 29.32 ± 1.12 | 39.57 ± 0.57 | 49.50 ± 0.55 | 51.99 ± 0.52 |
| Linear ¹ [13] | 25.97 ± 0.41 | 31.32 ± 0.45 | 35.49 ± 0.45 | 48.11 ± 0.64 | 59.31 ± 0.48 | 66.48 ± 0.56 |
| Mean-centroid ¹ [75] | 26.31 ± 0.42 | 30.41 ± 0.46 | 34.68 ± 0.46 | 47.16 ± 0.54 | 56.40 ± 0.53 | 61.57 ± 0.66 |
| Ft-CC ¹ [62] | 26.95 ± 0.44 | 32.07 ± 0.55 | 34.76 ± 0.55 | 48.01 ± 0.49 | 58.13 ± 0.48 | 62.03 ± 0.52 |
| FN [38] | 25.78 ± 0.42 | 31.88 ± 0.46 | 34.81 ± 0.49 | 45.34 ± 0.60 | 58.92 ± 0.57 | 65.90 ± 0.58 |
| ConFeSS [39] | 27.09 ± 0.24 | 33.57 ± 0.31 | 39.02 ± 0.12 | 48.85 ± 0.29 | 60.10 ± 0.33 | 65.34 ± 0.45 |
| NSAE [74] | 27.10 ± 0.44 | 35.20 ± 0.48 | 38.95 ± 0.70 | 54.05 ± 0.63 | 66.17 ± 0.59 | 71.32 ± 0.61 |
| LDP-net ² [32] | 27.30 ± 0.43 | 34.03 ± 0.49 | 37.58 ± 0.48 | 48.15 ± 0.60 | 58.47 ± 0.56 | 64.20 ± 0.55 |
| LAMR | 27.66 ± 0.44 | 33.82 ± 0.50 | 38.92 ± 0.50 | 48.66 ± 0.60 | 62.38 ± 0.60 | 68.92 ± 0.56 |
| LAMR++ | 28.86 ± 0.45 | 35.86 ± 0.50 | 41.36 ± 0.56 | 54.11 ± 0.62 | 68.22 ± 0.57 | 74.12 ± 0.54 |
| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
|---|---|---|---|---|---|---|
| ProtoNet ¹ [9] | 73.29 ± 0.71 | 82.27 ± 0.57 | 80.48 ± 0.57 | 79.72 ± 0.67 | 88.15 ± 0.51 | 90.81 ± 0.43 |
| Linear ¹ [13] | 79.08 ± 0.61 | 87.64 ± 0.47 | 91.34 ± 0.37 | 89.25 ± 0.51 | 95.51 ± 0.31 | 97.68 ± 0.21 |
| Mean-centroid ¹ [75] | 82.21 ± 0.49 | 87.62 ± 0.34 | 88.24 ± 0.29 | 87.61 ± 0.47 | 93.87 ± 0.68 | 94.77 ± 0.34 |
| Ft-CC ¹ [62] | 81.37 ± 1.54 | 86.83 ± 0.43 | 88.83 ± 0.38 | 89.15 ± 0.51 | 93.96 ± 0.46 | 94.27 ± 0.41 |
| FN [38] | 80.03 ± 0.70 | 88.94 ± 0.46 | 92.34 ± 0.36 | 91.11 ± 0.49 | 96.62 ± 0.26 | 98.27 ± 0.17 |
| ConFeSS [39] | 84.65 ± 0.38 | 90.40 ± 0.24 | 92.66 ± 0.36 | 88.88 ± 0.51 | 95.34 ± 0.48 | 97.56 ± 0.43 |
| NSAE [74] | 83.96 ± 0.57 | 92.38 ± 0.33 | 95.42 ± 0.34 | 93.14 ± 0.47 | 98.30 ± 0.19 | 99.25 ± 0.14 |
| LDP-net ² [32] | 81.50 ± 0.65 | 88.15 ± 0.48 | 90.75 ± 0.41 | 89.00 ± 0.51 | 95.49 ± 0.29 | 97.28 ± 0.20 |
| LAMR | 84.46 ± 0.55 | 92.21 ± 0.33 | 94.46 ± 0.27 | 94.15 ± 0.39 | 98.19 ± 0.17 | 99.16 ± 0.11 |
| LAMR++ | 84.62 ± 0.55 | 93.08 ± 0.31 | 95.75 ± 0.23 | 94.30 ± 0.38 | 98.39 ± 0.16 | 99.26 ± 0.11 |
6. Ablation Study and Analysis
6.1. Effect of Multi-Domain Learning Framework
6.2. Significance of Few-Shot Adaptation
6.3. Effect of Different Classifier Learning
- Fixed-MSR: Directly leveraging the frozen multi-source representations with the NNC baseline.
- Ft-LC: Fine-tuning a linear classification layer on top of each frozen representation head.
- Ft-CC: Fine-tuning a cosine classification layer on top of each frozen representation head.
- Ft-MSR-LC: Fine-tuning both the multi-source representations (projection layers) and the subsequent linear classification layers.
- Ft-MSR-CC: Fine-tuning both the multi-source representations (projection layers) and the subsequent cosine classification layers.
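The difference between the linear (LC) and cosine (CC) heads compared above can be sketched as follows; this is a minimal numpy illustration with assumed names (`linear_head`, `cosine_head`, `scale`), not the paper's training code.

```python
import numpy as np

def linear_head(features, weights, bias):
    """LC: unnormalized dot-product logits plus a bias term."""
    return features @ weights.T + bias

def cosine_head(features, weights, scale=10.0):
    """CC: logits are scaled cosine similarities, so only the directions
    of features and class weights matter, not their norms."""
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=-1, keepdims=True)
    return scale * (f @ w.T)
```

The norm-invariance of the cosine head is what makes it less sensitive to feature-magnitude shifts between domains, which is the usual motivation for preferring CC over LC in few-shot transfer.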
6.4. Visualization
7. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Fei-Fei, L.; Fergus, R.; Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 594–611.
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 8–13 December 2014; pp. 3320–3328.
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Finn, C.; Abbeel, P.; Levine, S. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1126–1135.
- Ravi, S.; Larochelle, H. Optimization as a model for few-shot learning. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016.
- Snell, J.; Swersky, K.; Zemel, R. Prototypical Networks for Few-shot Learning. Adv. Neural Inf. Process. Syst. 2017, 30, 4077–4087.
- Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; Wierstra, D. Matching Networks for One Shot Learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3630–3638.
- Thrun, S. Lifelong learning algorithms. In Learning to Learn; Springer: Berlin/Heidelberg, Germany, 1998; pp. 181–209.
- Chen, Y.; Liu, Z.; Xu, H.; Darrell, T.; Wang, X. Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 9062–9071.
- Chen, W.Y.; Liu, Y.C.; Kira, Z.; Wang, Y.C.F.; Huang, J.B. A Closer Look at Few-shot Classification. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019.
- Tian, Y.; Wang, Y.; Krishnan, D.; Tenenbaum, J.B.; Isola, P. Rethinking few-shot image classification: A good embedding is all you need? In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020.
- Wang, Y.; Chao, W.L.; Weinberger, K.Q.; van der Maaten, L. SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning. arXiv 2019, arXiv:1911.04623.
- Dhillon, G.S.; Chaudhari, P.; Ravichandran, A.; Soatto, S. A Baseline for Few-Shot Image Classification. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020.
- Raghu, A.; Raghu, M.; Bengio, S.; Vinyals, O. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020.
- Gidaris, S.; Bursuc, A.; Komodakis, N.; Perez, P.; Cord, M. Boosting Few-Shot Visual Learning with Self-Supervision. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Afrasiyabi, A.; Lalonde, J.F.; Gagné, C. Associative Alignment for Few-shot Image Classification. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020.
- Oreshkin, B.; Rodríguez López, P.; Lacoste, A. TADAM: Task dependent adaptive metric for improved few-shot learning. Adv. Neural Inf. Process. Syst. 2018, 31, 721–731.
- Guo, Y.; Codella, N.C.; Karlinsky, L.; Codella, J.V.; Smith, J.R.; Saenko, K.; Rosing, T.; Feris, R. A broader study of cross-domain few-shot learning. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 124–141.
- Triantafillou, E.; Zhu, T.; Dumoulin, V.; Lamblin, P.; Evci, U.; Xu, K.; Goroshin, R.; Gelada, C.; Swersky, K.; Manzagol, P.A.; et al. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 30 April 2020.
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
- Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368.
- Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106.
- Dvornik, N.; Schmid, C.; Mairal, J. Selecting relevant features from a multi-domain representation for few-shot classification. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 769–786.
- Liu, L.; Hamilton, W.L.; Long, G.; Jiang, J.; Larochelle, H. A Universal Representation Transformer Layer for Few-Shot Image Classification. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual Event, 3–7 May 2021.
- Li, W.H.; Liu, X.; Bilen, H. Universal representation learning from multiple domains for few-shot classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 9526–9535.
- Liu, G.; Zhang, Z.; Cai, F.; Liu, D.; Fang, X. Learning and Adapting Diverse Representations for Cross-domain Few-shot Learning. In Proceedings of the 2023 IEEE International Conference on Data Mining Workshops (ICDMW), Shanghai, China, 1–4 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 294–303.
- Bontonou, M.; Béthune, L.; Gripon, V. Predicting the generalization ability of a few-shot classifier. Information 2021, 12, 29.
- Zhou, F.; Wang, P.; Zhang, L.; Wei, W.; Zhang, Y. Revisiting prototypical network for cross domain few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 20061–20070.
- Zhao, L.; Liu, G.; Guo, D.; Li, W.; Fang, X. Boosting few-shot visual recognition via saliency-guided complementary attention. Neurocomputing 2022, 507, 412–427.
- Liu, C.; Fu, Y.; Xu, C.; Yang, S.; Li, J.; Wang, C.; Zhang, L. Learning a few-shot embedding model with contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 8635–8643.
- Rebuffi, S.A.; Bilen, H.; Vedaldi, A. Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 8119–8127.
- Perez, E.; Strub, F.; De Vries, H.; Dumoulin, V.; Courville, A. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32.
- Lifchitz, Y.; Avrithis, Y.; Picard, S.; Bursuc, A. Dense Classification and Implanting for Few-Shot Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Yazdanpanah, M.; Rahman, A.A.; Chaudhary, M.; Desrosiers, C.; Havaei, M.; Belilovsky, E.; Kahou, S.E. Revisiting Learnable Affines for Batch Norm in Few-Shot Transfer Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 9109–9118.
- Das, D.; Yun, S.; Porikli, F. ConFeSS: A framework for single source cross-domain few-shot learning. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual Event, 25–29 April 2022.
- Li, W.H.; Liu, X.; Bilen, H. Cross-domain Few-shot Learning with Task-specific Adapters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 7161–7170.
- Liu, G.; Zhao, L.; Fang, X. PDA: Proxy-based domain adaptation for few-shot image recognition. Image Vis. Comput. 2021, 110, 104164.
- Soudy, M.; Afify, Y.M.; Badr, N. GenericConv: A Generic Model for Image Scene Classification Using Few-Shot Learning. Information 2022, 13, 315.
- Csányi, G.M.; Vági, R.; Megyeri, A.; Fülöp, A.; Nagy, D.; Vadász, J.P.; Üveges, I. Can Triplet Loss Be Used for Multi-Label Few-Shot Classification? A Case Study. Information 2023, 14, 520.
- Cai, J.; Wu, L.; Wu, D.; Li, J.; Wu, X. Multi-Dimensional Information Alignment in Different Modalities for Generalized Zero-Shot and Few-Shot Learning. Information 2023, 14, 148.
- Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep Domain Confusion: Maximizing for Domain Invariance. arXiv 2014, arXiv:1412.3474.
- Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; pp. 97–105.
- Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning (ICML), Lille, France, 7–9 July 2015; pp. 1180–1189.
- Peng, X.; Bai, Q.; Xia, X.; Huang, Z.; Saenko, K.; Wang, B. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1406–1415.
- Xu, R.; Chen, Z.; Zuo, W.; Yan, J.; Lin, L. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 3964–3973.
- Wu, Z.; Xiong, Y.; Yu, S.X.; Lin, D. Unsupervised Feature Learning via Non-Parametric Instance Discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018.
- Oord, A.v.d.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning (ICML), Virtual Event, 13–18 July 2020; pp. 1597–1607.
- Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 18661–18673.
- Bilen, H.; Vedaldi, A. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv 2017, arXiv:1701.07275.
- Guo, Y.; Li, Y.; Wang, L.; Rosing, T. Depthwise convolution is all you need for learning multiple visual domains. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 29–31 January 2019; Volume 33, pp. 8368–8375.
- Dvornik, N.; Schmid, C.; Mairal, J. Diversity with Cooperation: Ensemble Methods for Few-Shot Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Chen, Z.; Badrinarayanan, V.; Lee, C.Y.; Rabinovich, A. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 794–803.
- Liu, G.; Zhao, L.; Li, W.; Guo, D.; Fang, X. Class-wise Metric Scaling for Improved Few-Shot Classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 586–595.
- Yu, C.; Zhao, X.; Zheng, Q.; Zhang, P.; You, X. Hierarchical Bilinear Pooling for Fine-Grained Visual Recognition. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Kim, J.; On, K.W.; Lim, W.; Kim, J.; Ha, J.; Zhang, B. Hadamard Product for Low-rank Bilinear Pooling. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
- Gidaris, S.; Komodakis, N. Dynamic Few-Shot Visual Learning without Forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
- Qi, H.; Brown, M.; Lowe, D.G. Low-Shot Learning with Imprinted Weights. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
- Lee, K.; Maji, S.; Ravichandran, A.; Soatto, S. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10657–10665.
- Sbai, O.; Couprie, C.; Aubry, M. Impact of base dataset design on few-shot image classification. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 597–613.
- Wah, C.; Branson, S.; Welinder, P.; Perona, P.; Belongie, S. The Caltech-UCSD Birds-200-2011 Dataset; Technical Report CNS-TR-2011-001; California Institute of Technology: Pasadena, CA, USA, 2011.
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009.
- Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; Vedaldi, A. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3606–3613.
- Griffin, G.; Holub, A.; Perona, P. Caltech-256 Object Category Dataset; Technical Report; California Institute of Technology: Pasadena, CA, USA, 2007.
- Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419.
- Liu, B.; Cao, Y.; Lin, Y.; Li, Q.; Zhang, Z.; Long, M.; Hu, H. Negative Margin Matters: Understanding Margin in Few-shot Classification. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020.
- Yang, S.; Liu, L.; Xu, M. Free Lunch for Few-shot Learning: Distribution Calibration. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021.
- Guo, D.; Tian, L.; Zhao, H.; Zhou, M.; Zha, H. Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport. Adv. Neural Inf. Process. Syst. 2022, 35, 6996–7010.
- Liang, H.; Zhang, Q.; Dai, P.; Lu, J. Boosting the generalization capability in cross-domain few-shot learning via noise-enhanced supervised autoencoder. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 9424–9434.
- Mensink, T.; Verbeek, J.; Perronnin, F.; Csurka, G. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2624–2637.
- van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929.
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
|---|---|---|---|---|---|---|
| Union-CC [62] | 26.08 ± 0.41 | 31.14 ± 0.43 | 33.54 ± 0.45 | 43.35 ± 0.55 | 51.71 ± 0.58 | 54.34 ± 0.53 |
| Ensemble | 26.45 ± 0.44 | 30.81 ± 0.45 | 33.47 ± 0.46 | 44.49 ± 0.57 | 52.49 ± 0.56 | 55.06 ± 0.54 |
| All-EBDs [21] | 26.74 ± 0.42 | 32.77 ± 0.47 | 38.07 ± 0.50 | 46.86 ± 0.60 | 58.57 ± 0.59 | 66.04 ± 0.56 |
| IMS-f [21] | 25.50 ± 0.45 | 31.49 ± 0.47 | 36.40 ± 0.50 | 45.84 ± 0.62 | 61.50 ± 0.58 | 68.64 ± 0.53 |
| FiLM-pf [35] | 26.79 ± 0.45 | 30.91 ± 0.45 | 33.80 ± 0.47 | 47.06 ± 0.56 | 55.43 ± 0.56 | 57.73 ± 0.53 |
| SUR [27] | 26.81 ± 0.46 | 30.98 ± 0.45 | 33.85 ± 0.46 | 47.37 ± 0.56 | 55.59 ± 0.59 | 57.92 ± 0.53 |
| URL [29] | 26.49 ± 0.45 | 30.40 ± 0.44 | 33.75 ± 0.46 | 46.00 ± 0.58 | 53.87 ± 0.58 | 56.32 ± 0.54 |
| URL+Ad [29] | 26.68 ± 0.44 | 31.41 ± 0.44 | 36.41 ± 0.45 | 48.10 ± 0.60 | 58.84 ± 0.63 | 64.16 ± 0.58 |
| TSA [40] | 27.04 ± 0.43 | 33.31 ± 0.47 | 37.15 ± 0.48 | 49.40 ± 0.61 | 62.34 ± 0.60 | 67.73 ± 0.56 |
| LAMR | 27.37 ± 0.41 | 34.16 ± 0.48 | 39.21 ± 0.51 | 52.58 ± 0.61 | 65.33 ± 0.55 | 70.52 ± 0.53 |
| LAMR++ | 28.38 ± 0.45 | 36.77 ± 0.50 | 42.22 ± 0.54 | 56.26 ± 0.66 | 68.52 ± 0.55 | 73.89 ± 0.52 |
| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
|---|---|---|---|---|---|---|
| Union-CC [62] | 81.01 ± 0.56 | 86.05 ± 0.48 | 87.30 ± 0.41 | 90.22 ± 0.54 | 93.97 ± 0.36 | 95.09 ± 0.32 |
| Ensemble | 84.03 ± 0.58 | 88.10 ± 0.49 | 88.44 ± 0.48 | 91.89 ± 0.51 | 95.04 ± 0.37 | 96.06 ± 0.30 |
| All-EBDs [21] | 81.29 ± 0.62 | 89.90 ± 0.41 | 92.76 ± 0.34 | 90.82 ± 0.48 | 96.64 ± 0.25 | 98.14 ± 0.18 |
| IMS-f [21] | 83.56 ± 0.59 | 91.22 ± 0.38 | 93.85 ± 0.30 | 90.66 ± 0.48 | 97.18 ± 0.24 | 98.43 ± 0.16 |
| FiLM-pf [35] | 83.93 ± 0.58 | 87.82 ± 0.51 | 87.94 ± 0.48 | 93.73 ± 0.46 | 96.18 ± 0.32 | 97.26 ± 0.25 |
| SUR [27] | 84.35 ± 0.59 | 88.32 ± 0.50 | 88.42 ± 0.49 | 93.72 ± 0.46 | 96.16 ± 0.33 | 97.26 ± 0.25 |
| URL [29] | 83.74 ± 0.58 | 88.52 ± 0.48 | 89.13 ± 0.45 | 92.13 ± 0.50 | 95.18 ± 0.36 | 96.21 ± 0.27 |
| URL+Ad [29] | 84.57 ± 0.55 | 91.66 ± 0.36 | 93.66 ± 0.31 | 93.12 ± 0.44 | 97.23 ± 0.24 | 98.51 ± 0.15 |
| TSA [40] | 85.10 ± 0.55 | 92.25 ± 0.34 | 94.24 ± 0.29 | 93.53 ± 0.44 | 97.58 ± 0.22 | 98.81 ± 0.13 |
| LAMR | 86.92 ± 0.47 | 93.65 ± 0.29 | 95.42 ± 0.23 | 94.61 ± 0.39 | 98.26 ± 0.18 | 99.12 ± 0.11 |
| LAMR++ | 87.38 ± 0.47 | 94.40 ± 0.26 | 96.31 ± 0.21 | 94.84 ± 0.39 | 98.57 ± 0.16 | 99.30 ± 0.10 |
| Type | Method | Backbone | 5-Way 1-Shot | 5-Way 5-Shot |
|---|---|---|---|---|
| w/o Adapt | ProtoNet [9] by [64] | ResNet12 | 59.25 ± 0.64 | 75.60 ± 0.48 |
| | MetaOptNet [64] | ResNet12 | 62.64 ± 0.62 | 78.63 ± 0.46 |
| | CC [62] by [37] | ResNet12 | 58.61 ± 0.18 | 76.40 ± 0.13 |
| | Baseline [13] | ResNet18 | 51.75 ± 0.80 | 74.27 ± 0.63 |
| | Neg-Cosine [71] | ResNet12 | 63.85 ± 0.81 | 81.57 ± 0.56 |
| | Embed-Distill [14] | ResNet12 | 64.82 ± 0.60 | 82.14 ± 0.43 |
| | Meta-Baseline [12] | ResNet12 | 63.17 ± 0.23 | 79.26 ± 0.17 |
| | Robust20 [57] | ResNet18 | 63.95 ± 0.42 | 81.59 ± 0.42 |
| w/ Adapt | TADAM [20] | ResNet12 | 58.50 ± 0.30 | 76.70 ± 0.30 |
| | Centroid-Align [19] | ResNet18 | 59.88 ± 0.67 | 80.35 ± 0.73 |
| | Implant [37] | ResNet12 | 62.53 ± 0.19 | 79.77 ± 0.19 |
| | DC+SUR [27] | ResNet12 | 63.13 ± 0.63 | 80.04 ± 0.41 |
| | Free-Lunch [72] | ResNet12 | 64.73 ± 0.44 | 81.15 ± 0.42 |
| | H-OT [73] | ResNet12 | 65.63 ± 0.32 | 82.87 ± 0.43 |
| | LAMR (ours) | ResNet12 | 65.73 ± 0.43 | 83.37 ± 0.29 |
| | LAMR++ (ours) | ResNet12 | 65.90 ± 0.43 | 83.84 ± 0.29 |
| Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot |
|---|---|---|---|---|---|---|
| Single-source | 25.90 ± 0.41 | 30.16 ± 0.45 | 32.76 ± 0.45 | 43.84 ± 0.55 | 51.98 ± 0.57 | 54.34 ± 0.53 |
| Merged-multi-sources | 26.08 ± 0.41 | 31.14 ± 0.43 | 33.54 ± 0.45 | 43.35 ± 0.55 | 51.71 ± 0.58 | 54.34 ± 0.53 |
| Our Framework | 25.96 ± 0.44 | 30.21 ± 0.43 | 32.58 ± 0.43 | 48.61 ± 0.60 | 58.13 ± 0.59 | 60.54 ± 0.57 |

| Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot |
|---|---|---|---|---|---|---|
| Single-source | 78.64 ± 0.61 | 84.05 ± 0.54 | 85.03 ± 0.48 | 88.27 ± 0.59 | 92.57 ± 0.44 | 94.19 ± 0.34 |
| Merged-multi-sources | 81.01 ± 0.56 | 86.05 ± 0.48 | 87.30 ± 0.41 | 90.22 ± 0.54 | 93.97 ± 0.36 | 95.09 ± 0.32 |
| Our Framework | 84.76 ± 0.51 | 89.36 ± 0.41 | 89.88 ± 0.39 | 91.89 ± 0.50 | 95.22 ± 0.33 | 96.03 ± 0.27 |
| K-Shot | PID | CFD | PC | ChestX (Multi) | ISIC (Multi) | EuroSAT (Multi) | CropDiseases (Multi) | ChestX (Single) | ISIC (Single) | EuroSAT (Single) | CropDiseases (Single) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | | | | 25.96 ± 0.44 | 48.61 ± 0.60 | 84.76 ± 0.51 | 91.89 ± 0.50 | 26.12 ± 0.42 | 42.96 ± 0.56 | 80.10 ± 0.62 | 89.38 ± 0.55 |
| | ✓ | | | 26.89 ± 0.45 | 50.61 ± 0.61 | 85.06 ± 0.51 | 92.64 ± 0.44 | 26.96 ± 0.43 | 45.12 ± 0.56 | 81.33 ± 0.61 | 91.72 ± 0.48 |
| | | ✓ | | 26.84 ± 0.41 | 51.92 ± 0.63 | 87.12 ± 0.47 | 94.51 ± 0.40 | 26.96 ± 0.44 | 48.22 ± 0.61 | 84.36 ± 0.55 | 93.94 ± 0.41 |
| | | | ✓ | 27.06 ± 0.42 | 52.14 ± 0.61 | 86.42 ± 0.48 | 94.20 ± 0.41 | 27.49 ± 0.43 | 47.78 ± 0.59 | 83.50 ± 0.57 | 93.40 ± 0.43 |
| | ✓ | | ✓ | 27.17 ± 0.42 | 52.12 ± 0.61 | 86.50 ± 0.48 | 94.19 ± 0.41 | 27.43 ± 0.42 | 47.90 ± 0.59 | 83.54 ± 0.56 | 93.48 ± 0.42 |
| | ✓ | ✓ | | 27.39 ± 0.42 | 52.43 ± 0.61 | 86.91 ± 0.47 | 94.62 ± 0.39 | 27.56 ± 0.44 | 48.54 ± 0.59 | 84.42 ± 0.55 | 94.15 ± 0.39 |
| | | ✓ | ✓ | 27.35 ± 0.46 | 52.53 ± 0.64 | 86.77 ± 0.47 | 94.48 ± 0.38 | 27.55 ± 0.44 | 48.59 ± 0.59 | 84.32 ± 0.55 | 94.05 ± 0.40 |
| | ✓ | ✓ | ✓ | 27.37 ± 0.41 | 52.58 ± 0.61 | 86.92 ± 0.47 | 94.61 ± 0.39 | 27.66 ± 0.44 | 48.66 ± 0.60 | 84.46 ± 0.55 | 94.15 ± 0.39 |
| 20 | | | | 30.21 ± 0.43 | 58.13 ± 0.59 | 89.36 ± 0.41 | 95.22 ± 0.33 | 30.92 ± 0.43 | 50.41 ± 0.57 | 84.78 ± 0.53 | 93.73 ± 0.40 |
| | ✓ | | | 31.60 ± 0.46 | 58.87 ± 0.58 | 89.87 ± 0.40 | 95.83 ± 0.27 | 32.22 ± 0.45 | 53.49 ± 0.56 | 86.45 ± 0.51 | 95.42 ± 0.33 |
| | | ✓ | | 30.95 ± 0.43 | 63.90 ± 0.60 | 93.65 ± 0.29 | 98.24 ± 0.18 | 31.32 ± 0.44 | 61.09 ± 0.64 | 91.99 ± 0.34 | 98.01 ± 0.18 |
| | | | ✓ | 33.45 ± 0.46 | 63.96 ± 0.57 | 92.97 ± 0.31 | 97.83 ± 0.20 | 33.20 ± 0.48 | 61.09 ± 0.59 | 91.49 ± 0.36 | 97.80 ± 0.19 |
| | ✓ | | ✓ | 33.33 ± 0.48 | 64.01 ± 0.56 | 92.99 ± 0.31 | 97.84 ± 0.20 | 33.33 ± 0.48 | 61.16 ± 0.59 | 91.46 ± 0.36 | 97.78 ± 0.19 |
| | ✓ | ✓ | | 33.64 ± 0.47 | 65.19 ± 0.57 | 93.60 ± 0.29 | 98.26 ± 0.18 | 33.64 ± 0.47 | 62.13 ± 0.61 | 92.18 ± 0.33 | 98.20 ± 0.17 |
| | | ✓ | ✓ | 34.26 ± 0.49 | 64.69 ± 0.56 | 93.52 ± 0.28 | 98.15 ± 0.16 | 34.06 ± 0.49 | 62.29 ± 0.60 | 92.21 ± 0.34 | 98.07 ± 0.18 |
| | ✓ | ✓ | ✓ | 34.16 ± 0.48 | 65.33 ± 0.55 | 93.65 ± 0.29 | 98.26 ± 0.18 | 33.82 ± 0.50 | 62.38 ± 0.60 | 92.21 ± 0.33 | 98.19 ± 0.17 |
| 50 | | | | 32.58 ± 0.43 | 60.54 ± 0.57 | 89.88 ± 0.39 | 96.03 ± 0.27 | 33.87 ± 0.46 | 53.15 ± 0.54 | 85.71 ± 0.50 | 95.06 ± 0.30 |
| | ✓ | | | 34.69 ± 0.47 | 61.18 ± 0.54 | 90.63 ± 0.38 | 96.69 ± 0.23 | 36.01 ± 0.46 | 56.70 ± 0.55 | 87.80 ± 0.45 | 96.67 ± 0.24 |
| | | ✓ | | 33.47 ± 0.43 | 68.51 ± 0.68 | 95.48 ± 0.23 | 99.19 ± 0.11 | 34.53 ± 0.44 | 63.74 ± 0.77 | 94.26 ± 0.28 | 99.11 ± 0.11 |
| | | | ✓ | 36.59 ± 0.48 | 66.46 ± 0.56 | 94.37 ± 0.26 | 98.50 ± 0.16 | 38.50 ± 0.50 | 66.56 ± 0.56 | 93.86 ± 0.28 | 98.77 ± 0.13 |
| | ✓ | | ✓ | 37.61 ± 0.49 | 67.05 ± 0.55 | 94.47 ± 0.26 | 98.56 ± 0.15 | 38.44 ± 0.51 | 67.20 ± 0.55 | 93.88 ± 0.28 | 98.84 ± 0.13 |
| | ✓ | ✓ | | 34.91 ± 0.48 | 69.40 ± 0.58 | 95.45 ± 0.23 | 99.10 ± 0.12 | 37.24 ± 0.47 | 67.71 ± 0.60 | 94.44 ± 0.27 | 99.16 ± 0.11 |
| | | ✓ | ✓ | 37.97 ± 0.52 | 69.08 ± 0.55 | 95.23 ± 0.23 | 99.03 ± 0.10 | 39.07 ± 0.48 | 68.43 ± 0.56 | 94.44 ± 0.26 | 99.09 ± 0.11 |
| | ✓ | ✓ | ✓ | 39.21 ± 0.51 | 70.52 ± 0.53 | 95.42 ± 0.23 | 99.12 ± 0.11 | 38.92 ± 0.50 | 68.92 ± 0.56 | 94.46 ± 0.27 | 99.16 ± 0.11 |
Methods | ChestX 5-Way 5-Shot | ChestX 5-Way 20-Shot | ChestX 5-Way 50-Shot | ISIC 5-Way 5-Shot | ISIC 5-Way 20-Shot | ISIC 5-Way 50-Shot
---|---|---|---|---|---|---
Fixed-MSR | 25.96 ± 0.44 | 30.20 ± 0.46 | 32.58 ± 0.46 | 48.61 ± 0.62 | 58.13 ± 0.61 | 60.54 ± 0.57
Ft-LC | 25.68 ± 0.44 | 30.53 ± 0.46 | 33.64 ± 0.48 | 51.32 ± 0.63 | 61.52 ± 0.57 | 64.18 ± 0.56
Ft-CC | 26.75 ± 0.44 | 32.69 ± 0.48 | 37.19 ± 0.53 | 51.51 ± 0.64 | 63.59 ± 0.57 | 67.75 ± 0.55
Ft-MSR-LC | 26.25 ± 0.45 | 31.10 ± 0.46 | 34.26 ± 0.48 | 51.81 ± 0.62 | 62.56 ± 0.57 | 65.07 ± 0.56
Ft-MSR-CC | 27.04 ± 0.45 | 33.31 ± 0.49 | 38.26 ± 0.52 | 52.12 ± 0.63 | 64.52 ± 0.56 | 70.00 ± 0.53
Methods | EuroSAT 5-Way 5-Shot | EuroSAT 5-Way 20-Shot | EuroSAT 5-Way 50-Shot | CropDiseases 5-Way 5-Shot | CropDiseases 5-Way 20-Shot | CropDiseases 5-Way 50-Shot
---|---|---|---|---|---|---
Fixed-MSR | 84.76 ± 0.51 | 89.36 ± 0.40 | 89.88 ± 0.39 | 91.89 ± 0.47 | 95.22 ± 0.29 | 96.03 ± 0.25
Ft-LC | 85.28 ± 0.48 | 91.01 ± 0.35 | 91.85 ± 0.32 | 92.85 ± 0.42 | 96.78 ± 0.22 | 97.64 ± 0.16
Ft-CC | 86.71 ± 0.48 | 93.08 ± 0.29 | 94.52 ± 0.25 | 94.11 ± 0.39 | 97.93 ± 0.17 | 98.83 ± 0.11
Ft-MSR-LC | 85.82 ± 0.48 | 91.62 ± 0.33 | 92.48 ± 0.30 | 93.51 ± 0.40 | 97.21 ± 0.20 | 98.00 ± 0.15
Ft-MSR-CC | 86.77 ± 0.48 | 93.48 ± 0.28 | 95.09 ± 0.23 | 94.45 ± 0.38 | 98.18 ± 0.16 | 99.09 ± 0.10
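All tables above report results as mean accuracy ± a margin over sampled few-shot test episodes. The exact evaluation protocol is not restated here, but cross-domain FSL benchmarks conventionally report the mean over several hundred episodes with a 95% confidence interval under a normal approximation; a minimal sketch of that computation (function name and episode counts are illustrative, not from the paper):

```python
import math

def mean_ci95(accs):
    """Mean episode accuracy and 95% confidence interval (normal approximation)."""
    n = len(accs)
    mean = sum(accs) / n
    # Unbiased sample variance across episodes.
    var = sum((a - mean) ** 2 for a in accs) / (n - 1)
    # 1.96 standard errors covers ~95% under the normal approximation.
    ci = 1.96 * math.sqrt(var / n)
    return mean, ci

# Hypothetical per-episode accuracies (%) from, e.g., 600 sampled 5-way tasks.
episode_accs = [84.0, 86.5, 83.2, 85.9, 87.1, 84.8]
mean, ci = mean_ci95(episode_accs)
print(f"{mean:.2f} \u00b1 {ci:.2f}")
```

With enough episodes the interval shrinks as 1/√n, which is why 50-shot rows show tighter margins than 5-shot rows in the tables above.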
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, G.; Zhang, Z.; Fang, X. Task-Adaptive Multi-Source Representations for Few-Shot Image Recognition. Information 2024, 15, 293. https://doi.org/10.3390/info15060293