How to Use Information Theory to Mitigate Unfair Rating Attacks

Abstract: In rating systems, users want to construct accurate opinions based on ratings. The achievable accuracy, however, is bounded by the amount of information transmitted (leaked) by the ratings. Rating systems are susceptible to unfair rating attacks, which may decrease the amount of leaked information by introducing noise. A robust trust system attempts to mitigate the effect of such attacks on the information leakage. Defenders cannot influence the ratings themselves, whether they come from honest advisors or from attackers, but there are other ways for defenders to keep the information leakage high: blocking or selecting the right advisors, observing transactions, and offering more choices. Blocking suspicious advisors can only decrease robustness; if only a limited number of ratings can be used, however, then less suspicious advisors are better, and in case of a tie, newer advisors are better. Observing transactions increases robustness, and offering more choices may increase robustness.
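The abstract's core claim, that attack noise bounds opinion accuracy by reducing the information leaked by ratings, can be illustrated with a toy model (an assumption for illustration, not the paper's exact formulation): quality is a fair binary variable, an honest advisor reports it truthfully, and an attacker rates uniformly at random. The resulting channel is a binary symmetric channel, and the leakage is its mutual information.

```python
from math import log2

def h2(q):
    """Binary entropy H(q) in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * log2(q) - (1 - q) * log2(1 - q)

def leakage(honest_fraction):
    """Mutual information I(quality; rating) in bits.

    Toy model (assumed for illustration): quality is uniform on {0, 1};
    with probability honest_fraction the rating equals the quality,
    otherwise an attacker rates uniformly at random.  This yields a
    binary symmetric channel with crossover (1 - honest_fraction) / 2,
    so I = 1 - H(crossover).
    """
    eps = (1 - honest_fraction) / 2
    return 1 - h2(eps)

# Leakage degrades monotonically as the attacker fraction grows:
for p in (1.0, 0.8, 0.5, 0.0):
    print(f"honest fraction {p:.1f}: leakage {leakage(p):.3f} bits")
```

With only honest advisors the full 1 bit leaks; with only random attackers the rating is independent of the quality and the leakage is 0 bits, matching the abstract's point that attacks degrade accuracy by introducing noise.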
Document type: Conference paper

Cited literature: 18 references

https://hal.inria.fr/hal-01438346
Contributor: Hal Ifip
Submitted on: Tuesday, January 17, 2017, 4:07:54 PM
Last modification on: Tuesday, January 17, 2017, 4:18:27 PM
Long-term archiving on: Tuesday, April 18, 2017, 3:22:55 PM

File

428098_1_En_2_Chapter.pdf
Files produced by the author(s)

Licence

Distributed under a Creative Commons Attribution 4.0 International License

Citation

Tim Muller, Dongxia Wang, Yang Liu, Jie Zhang. How to Use Information Theory to Mitigate Unfair Rating Attacks. 10th IFIP International Conference on Trust Management (TM), Jul 2016, Darmstadt, Germany. pp.17-32, ⟨10.1007/978-3-319-41354-9_2⟩. ⟨hal-01438346⟩

Metrics

Record views: 99
File downloads: 133