Few-Shot Learning with Graph Neural Networks
ICLR 2018, cited over 600 times

Summary
- For few-shot learning, the model also exploits similarity information between samples (i.e., it does not stop at learning each sample's label independently).
- Each sample is viewed as a node of a graph, and an edge is the similarity kernel between two samples.
- The edges, i.e., the similarity kernel, are trainable (not pre-defined as, say, a simple inner product); see the sketch after this list.
- Inspired by message-passing algorithms, node features are updated at each time step by receiving messages from neighboring nodes.
- Applicable to semi-supervised learning and, beyond that, active learning.
- Achieves state-of-the-art performance on Omniglot and Mini-ImageNet with fewer parameters (as of 2017).
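A minimal PyTorch sketch of these two ideas, assuming an MLP edge kernel over absolute feature differences and a single message-passing layer; the hidden size and the leaky-ReLU nonlinearity are illustrative choices, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeKernel(nn.Module):
    """Trainable similarity kernel: an MLP over |x_i - x_j|,
    instead of a pre-defined inner product (hidden size is a guess)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):                               # x: (B, N, dim)
        diff = (x.unsqueeze(2) - x.unsqueeze(1)).abs()  # (B, N, N, dim)
        logits = self.mlp(diff).squeeze(-1)             # (B, N, N)
        return F.softmax(logits, dim=-1)                # row-normalized adjacency

class GNNLayer(nn.Module):
    """One message-passing step: each node aggregates its neighbors'
    features with the learned adjacency, then applies a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.kernel = EdgeKernel(in_dim)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        A = self.kernel(x)             # learned similarities as edges
        msg = torch.bmm(A, x)          # messages received from neighbors
        return F.leaky_relu(self.lin(msg))
```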
Keywords
- Few-shot learning
- Graph neural network
- Semi-supervised learning
- Active learning with Attention
1. Introduction
- Supervised end-to-end learning has been extremely successful in computer vision, speech, and machine translation tasks.
- However, there are some tasks (e.g., few-shot learning) where conventional methods cannot achieve high performance.
- New supervised learning setup
- Input-output setup: i.i.d. samples of collections of images and their associated label similarity (see the episode sketch below)
- cf. conventional setup: i.i.d. samples of images and their associated labels
- The authors' model can be extended to semi-supervised and active learning.
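To make the input-output setup concrete, a hedged sketch of how one training sample (an "episode") could be assembled: a labeled support collection plus one unlabeled query whose label is the target. The `sample_episode` helper and the `dataset` layout (a dict mapping class to images) are hypothetical, not the paper's code:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1):
    """One i.i.d. 'collection' sample: n_way classes with k_shot labeled
    images each, plus a single query image to classify."""
    classes = random.sample(list(dataset), n_way)
    target_class = random.choice(classes)
    support, query = [], None
    for c in classes:
        # Draw one extra image for the target class to serve as the query.
        imgs = random.sample(dataset[c], k_shot + (c == target_class))
        support += [(img, c) for img in imgs[:k_shot]]
        if c == target_class:
            query = imgs[k_shot]
    return support, query, target_class
```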
- Semi-supervised learning: learning from a mixture of labeled and unlabeled examples (https://blog.est.ai/2020/11/ssl/)
- Active learning: the learner has the option to request those missing labels that will be most helpful for the prediction task (ICML 2019 active learning tutorial)
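A rough illustration of how attention can drive the label request (this connects to the "Active learning with Attention" keyword above; the function and parameter names are illustrative, and the paper's exact scoring head may differ): score each unlabeled node, then reveal the label of the highest-scoring one.

```python
import torch

def request_label(node_feats, attn_head, unlabeled_mask):
    """node_feats: (N, dim) node embeddings; attn_head: a learned
    nn.Linear(dim, 1); unlabeled_mask: (N,) bool, True where unlabeled."""
    scores = attn_head(node_feats).squeeze(-1)                 # (N,)
    scores = scores.masked_fill(~unlabeled_mask, float("-inf"))
    attn = torch.softmax(scores, dim=-1)   # attention over unlabeled nodes
    return attn.argmax().item()            # node whose label to request
```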

Annotated by JH Gu
2. Closely related works and ideas