K-means++: the advantages of careful seeding, SODA, 2007.
The power of ensembles for active learning in image classification, CVPR, 2018.
MixMatch: A holistic approach to semi-supervised learning, 2019.
Unsupervised learning by predicting noise, ICML, 2017.
Deep clustering for unsupervised learning of visual features, 2018.
Semi-Supervised Learning, 2006.
Revisiting pre-training: An efficient training method for image classification, 2018.
Large-scale visual active learning with deep probabilistic ensembles, 2019.
Unsupervised visual representation learning by context prediction, ICCV, 2015.
Adversarial active learning for deep networks: a margin based approach, 2018.
Deep Bayesian active learning with image data, 2017.
Deep active learning over the long tail, 2017.
Unsupervised representation learning by predicting image rotations, ICLR, 2018. URL: https://hal.archives-ouvertes.fr/hal-01864755
Discriminative active learning, 2018.
Label propagation for deep semi-supervised learning, CVPR, 2019. URL: https://hal.archives-ouvertes.fr/hal-02370297
Biased importance sampling for deep neural network training, 2017.
Learning multiple layers of features from tiny images, 2009.
Temporal ensembling for semi-supervised learning, ICLR, 2017.
Gradient-based learning applied to document recognition, Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
Ascent: Active supervision for semi-supervised learning, IEEE Transactions on Knowledge and Data Engineering, 2019.
Graph-based active learning based on label propagation, International Conference on Modeling Decisions for Artificial Intelligence, pp. 179-190, 2008.
SGDR: Stochastic gradient descent with warm restarts, ICLR, 2017.
Adversarial sampling for active learning, 2018.
Employing EM in pool-based active learning for text classification, ICML, 1998.
Active + semi-supervised learning = robust multi-view learning, ICML, 2002.
Reading digits in natural images with unsupervised feature learning, NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Unsupervised learning of visual representations by solving jigsaw puzzles, ECCV, 2016.
Semi-supervised learning with scarce annotations, 2019.
Active learning for convolutional neural networks: A core-set approach, 2018.
Active learning literature survey, 2009.
Variational adversarial active learning, 2019.
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, NIPS, 2017.
Interpolation consistency training for semi-supervised learning, 2019.
Cost-effective active learning for deep image classification, IEEE Trans. CSVT, vol. 27, no. 12, pp. 2591-2600, 2017.
Unsupervised learning of visual representations using videos, ICCV, 2015.
DropSample: A new training method to enhance deep convolutional neural networks for large-scale unconstrained handwritten Chinese character recognition, 2015.
Learning with local and global consistency, NIPS, 2003.
Ranking on data manifolds, NIPS, 2003.
Exploiting unlabeled data in content-based image retrieval, ECML, 2004.
Learning from labeled and unlabeled data with label propagation, 2002.
Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions, ICML 2003 Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, 2003.
Table 5: Average accuracy and standard deviation for different label budgets b and cycles on MNIST and SVHN. Following Algorithm 1, we show the effect of unsupervised pre-training (PRE) and semi-supervised learning (SEMI) compared to the standard baseline.