Abstract: Computing distances between textual representations is at the heart of many Natural Language Processing tasks. The standard approaches, initially developed for Information Retrieval, are typically used; most often they rely on a bag-of-words (or bag-of-features) description with TF-IDF (or variant) weighting, a vectorial representation, and classical similarity functions such as cosine. In this paper, we are interested in one such task, namely the semantic clustering of entities extracted from a text. We argue that for this kind of task, better-suited representations and similarity measures can be used. In particular, we explore an alternative representation for entities called Bag-Of-Vectors (or Bag-of-Bags-of-Features). In this model, each entity is defined not as a single vector but as a set of vectors, where each vector is built from the contextual features of one occurrence of the entity. In order to use Bag-Of-Vectors for clustering, we introduce new versions of classical similarity functions such as the cosine, Jaccard, and scalar product. Experimentally, we show that the Bag-Of-Vectors representation consistently improves clustering results compared to classical bag-of-features representations.
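
The abstract does not fix a particular aggregation for comparing two sets of occurrence vectors, so the following is only a hedged sketch of the idea: each entity is a bag of context vectors, and a bag-level similarity is obtained by aggregating pairwise classical similarities. The averaging over all cross pairs used here is one plausible choice, not necessarily the definition adopted in the paper; the function names are illustrative.

```python
import math

def cosine(u, v):
    # Classical cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bov_cosine(bag_x, bag_y):
    # Bag-Of-Vectors similarity: aggregate pairwise cosines between
    # the occurrence vectors of the two entities. Averaging over all
    # cross pairs is an assumption made for this sketch.
    sims = [cosine(u, v) for u in bag_x for v in bag_y]
    return sum(sims) / len(sims) if sims else 0.0

# Two entities, each represented by the context vectors of its occurrences.
entity_a = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
entity_b = [[1.0, 0.0, 0.0]]
print(bov_cosine(entity_a, entity_b))
```

Other aggregations (e.g. max over pairs) fit the same scheme, and the Jaccard or scalar-product variants mentioned in the abstract would replace the inner pairwise function.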