Abstract: Intelligent pattern selection is an active learning strategy in which the classifier selects the most informative patterns during training. This paper investigates such a strategy, where the informativeness of a pattern is measured by the approximation error the classifier produces on it. The algorithm builds the training corpus starting from a small, randomly chosen initial dataset; new patterns are then added to the learning corpus according to their error sensitivity, i.e., the corpus is expanded with the most erroneous patterns. Our experimental results on the MNIST separated digit dataset show that only 3.26% of the training data suffice for training without decreasing the performance (98.36%) of the resulting neural classifier.
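The selection loop described above (seed with a small random subset, then repeatedly retrain and add the highest-error patterns from the pool) can be sketched as follows. This is a minimal illustration under assumptions not in the abstract: a synthetic two-class Gaussian pool stands in for MNIST, and a plain logistic-regression classifier stands in for the neural classifier; the round count and batch sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool standing in for MNIST: two 2-D Gaussian classes.
n_per_class = 500
X = np.vstack([rng.normal(-1.0, 1.0, (n_per_class, 2)),
               rng.normal(+1.0, 1.0, (n_per_class, 2))])
y = np.hstack([np.zeros(n_per_class), np.ones(n_per_class)])

def train_logistic(X, y, epochs=200, lr=0.5):
    """Batch gradient-descent logistic regression (stand-in classifier)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                       # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Seed the training corpus with a small random subset of the pool.
pool = np.arange(len(y))
selected = rng.choice(pool, size=10, replace=False)
remaining = np.setdiff1d(pool, selected)

# Expansion rounds: retrain, score the pool by approximation error,
# and move the most erroneous patterns into the training corpus.
for _ in range(5):
    w, b = train_logistic(X[selected], y[selected])
    err = np.abs(predict_proba(X[remaining], w, b) - y[remaining])
    worst = remaining[np.argsort(err)[-20:]]   # 20 highest-error patterns
    selected = np.concatenate([selected, worst])
    remaining = np.setdiff1d(pool, selected)

# Final training pass and evaluation on the whole pool.
w, b = train_logistic(X[selected], y[selected])
acc = ((predict_proba(X, w, b) > 0.5) == y).mean()
frac = len(selected) / len(pool)
print(f"trained on {frac:.1%} of the pool, accuracy {acc:.2%}")
```

With these toy settings the loop ends up training on about a tenth of the pool, mirroring the paper's claim that a small, error-selected fraction of the data suffices; the actual percentages reported in the abstract come from the full MNIST experiment, not from this sketch.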