On Evaluating Interestingness Measures for Closed Itemsets

Aleksey Buzmakov 1, 2 Sergei O. Kuznetsov 2 Amedeo Napoli 1
1 ORPAILLEUR - Knowledge Representation and Reasoning, Inria Nancy - Grand Est, LORIA
2 NLPKD - Department of Natural Language Processing & Knowledge Discovery
Abstract: There are many measures for selecting interesting itemsets, but which one is better? In this paper we introduce a methodology for evaluating interestingness measures. This methodology relies on supervised classification, and it allows us to avoid experts and artificial datasets in the evaluation process. We apply our methodology to evaluate promising measures for itemset selection, such as leverage and stability. We show that although there is no evident winner between them, stability has a slightly better performance.
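The two measures named in the abstract, leverage and stability, can be illustrated on a toy transaction database. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses the Piatetsky-Shapiro form of leverage for a two-item itemset and a brute-force reading of stability (the fraction of subsets of an itemset's extent whose intent equals the itemset); the toy data and function names are made up, and the paper's exact definitions may differ in detail.

```python
from itertools import combinations

# Toy transaction database (hypothetical data for illustration only).
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c"},
]

def support(itemset, db):
    """Relative support: fraction of transactions that contain the itemset."""
    return sum(1 for t in db if itemset <= t) / len(db)

def leverage2(x, y, db):
    """Piatetsky-Shapiro leverage of the 2-itemset {x, y}: observed
    co-occurrence frequency minus the frequency expected under independence."""
    return support({x, y}, db) - support({x}, db) * support({y}, db)

def stability(itemset, db):
    """Brute-force stability of a closed itemset: the fraction of subsets of
    its extent (supporting transactions) whose intent (set of common items)
    is exactly the itemset. Exponential in extent size; illustration only."""
    extent = [i for i, t in enumerate(db) if itemset <= t]
    hits = 0
    for r in range(len(extent) + 1):
        for sub in combinations(extent, r):
            if sub:
                intent = set.intersection(*(db[i] for i in sub))
            else:
                # Convention: the intent of the empty transaction set is all items.
                intent = set.union(*db)
            if intent == itemset:
                hits += 1
    return hits / 2 ** len(extent)

print(leverage2("a", "b", transactions))   # 3/5 - (4/5) * (4/5), slightly negative
print(stability({"a", "b"}, transactions))
```

On this toy data {a, b} is closed (the intersection of its three supporting transactions is {a, b}), and four of the eight subsets of its extent have intent {a, b}, so its stability is 0.5; its leverage is slightly negative, showing that the two measures can rank the same itemset differently.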
Document type: Conference papers

Cited literature: 17 references

https://hal.inria.fr/hal-01095927
Contributor: Aleksey Buzmakov
Submitted on: Tuesday, December 16, 2014


Citation

Aleksey Buzmakov, Sergei O. Kuznetsov, Amedeo Napoli. On Evaluating Interestingness Measures for Closed Itemsets. 7th European Starting AI Researcher Symposium (STAIRS 2014), 2014, Prague, Czech Republic. pp.71 - 80, ⟨10.3233/978-1-61499-421-3-71⟩. ⟨hal-01095927⟩
