International Conference Papers

P. Mavridis, D. Gross-Amblard, and Z. Miklós, Skill-Aware Task Assignment in Crowdsourcing Applications, Proceedings of the 1st International Symposium on Web AlGorithms, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01171330

P. Mavridis, D. Gross-Amblard, and Z. Miklós, Using Hierarchical Skills for Optimized Task Assignment in Knowledge Intensive Crowdsourcing, Proceedings of the 25th International World Wide Web Conference (WWW), 2016.
URL : https://hal.archives-ouvertes.fr/hal-01306481

Journals (Under Review): Submitted on the fast track of ACM TWEB, pending feedback

P. Mavridis, D. Gross-Amblard, and Z. Miklós, Using Hierarchical Skills for Optimized Task Assignment in Knowledge Intensive Crowdsourcing, ACM Transactions on the Web (TWEB). Submission pending.
URL : https://hal.archives-ouvertes.fr/hal-01306481

P. Mavridis, G. Demartini, D. Gross-Amblard, and Z. Miklós, It's Up to You! Ranking Tasks for Relevance, Diversity and Urgency in Crowdsourcing Platforms.

CrowdFlower. www.crowdflower.com

Crowdpolicy. www.crowdpolicy.com

DefinedCrowd.

Foldit.

Foule Factory.

Projet Sauvages de ma rue.

D. Acemoglu, M. Mostagir, and A. Ozdaglar, Managing innovation in a crowd, Proceedings of the Sixteenth ACM Conference on Economics and Computation, p.15, 2015.

R. Agrawal, S. Gollapudi, A. Halverson, and S. Ieong, Diversifying search results, Proceedings of the Second ACM International Conference on Web Search and Data Mining, pp.5-14, 2009.

S. Ahmad, A. Battle, Z. Malkani, and S. Kamvar, The jabberwocky programming environment for structured social computing, Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp.53-64, 2011.

S. Amer-Yahia, E. Gaussier, V. Leroy, J. Pilourdault, R. M. Borromeo et al., Task composition in crowdsourcing, 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pp.194-203, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01407780

A. Anagnostopoulos, L. Becchetti, C. Castillo, A. Gionis, and S. Leonardi, Online team formation in social networks, Proceedings of the 21st International Conference on World Wide Web, pp.839-848, 2012.

D. W. Barowy, C. Curtsinger, E. D. Berger, and A. McGregor, AutoMan: A platform for integrating human-based and digital computation, Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications, pp.639-654, 2012.

M. Barsky, A. Thomo, Z. Toth, and C. Zuzarte, Online update of b-trees, Proceedings of the 19th ACM International Conference on Information and Knowledge Management, pp.149-158, 2010.

M. A. Bender, M. Farach-Colton, G. Pemmasani, S. Skiena, and P. Sumazin, Lowest common ancestors in trees and directed acyclic graphs, Journal of Algorithms, vol.57, pp.75-94, 2005.

A. Bozzon, M. Brambilla, S. Ceri, and A. Mauri, Reactive crowdsourcing, Proceedings of the 22nd International Conference on World Wide Web, pp.153-164, 2013.

A. Bozzon, M. Brambilla, S. Ceri, M. Silvestri, and G. Vesci, Choosing the right crowd: Expert finding in social networks, Proceedings of the 16th International Conference on Extending Database Technology, pp.637-648, 2013.

D. C. Brabham, Crowdsourcing, MIT Press, 2013.

K. Bradley, R. Rafter, and B. Smyth, Case-based user profiling for content personalization, Proceedings of the International Conference on Adaptive Hypermedia and Adaptive Web-based Systems, pp.62-72, 2000.

D. Butler, Crowdsourcing goes mainstream in typhoon response, Nature, 2013.

M. A. Campion, A. A. Fink, B. J. Ruggeberg, L. Carr, G. M. Phillips et al., Doing competencies well: Best practices in competency modeling, Personnel Psychology, vol.64, pp.225-262, 2011.

C. C. Cao, J. She, Y. Tong, and L. Chen, Whom to ask?: Jury selection for decision making tasks on micro-blog services, PVLDB, vol.5, pp.1495-1506, 2012.

B. Carterette, An analysis of NP-completeness in novelty and diversity ranking, Information Retrieval, vol.14, pp.89-106, 2011.

I. Catallo, E. Ciceri, P. Fraternali, D. Martinenghi, and M. Tagliasacchi, Top-k diversity queries over bounded regions, ACM Transactions on Database Systems, vol.38, issue.2, p.44, 2013.

S. Cooper, F. Khatib, A. Treuille, J. Barbero, J. Lee et al., Predicting protein structures with a multiplayer online game, Nature, vol.466, pp.756-760, 2010.

A. P. Dawid and A. M. Skene, Maximum likelihood estimation of observer error-rates using the EM algorithm, Journal of the Royal Statistical Society. Series C (Applied Statistics), vol.28, pp.20-28, 1979.

G. Demartini, D. E. Difallah, and P. Cudré-Mauroux, ZenCrowd: Leveraging probabilistic reasoning and crowdsourcing techniques for large-scale entity linking, Proceedings of the 21st World Wide Web Conference, pp.469-478, 2012.

M. C. Desmarais and R. S. Baker, A review of recent advances in learner and skill modeling in intelligent learning environments, User Modeling and User-Adapted Interaction, vol.22, pp.9-38, 2012.

D. Deutch, O. Greenshpan, B. Kostenko, and T. Milo, Declarative platform for data sourcing games, Proceedings of the 21st World Wide Web Conference 2012, pp.779-788, 2012.

D. E. Difallah, G. Demartini, and P. Cudré-Mauroux, Pick-a-Crowd: Tell me what you like, and I'll tell you what to do, WWW '13, International World Wide Web Conferences Steering Committee, pp.367-374, 2013.

D. E. Difallah, G. Demartini, and P. Cudré-Mauroux, Scheduling human intelligence tasks in multi-tenant crowd-powered systems, Proceedings of the 25th International Conference on World Wide Web, pp.855-865, 2016.

A. Doan, M. J. Franklin, D. Kossmann, and T. Kraska, Crowdsourcing applications and platforms: A data management perspective, PVLDB, vol.4, pp.1508-1509, 2011.

A. Doan, R. Ramakrishnan, and A. Y. Halevy, Crowdsourcing systems on the world-wide web, Communications of the ACM, vol.54, pp.86-96, 2011.

M. Drosou and E. Pitoura, Dynamic diversification of continuous data, Proceedings of the 15th International Conference on Extending Database Technology, pp.216-227, 2012.

S. Elliott, Ford Turns to the 'Crowd' for New Fiesta Ads, New York Times, 2013.

M. Enrich, M. Braunhofer, and F. Ricci, Cold-start management with cross-domain collaborative filtering and tags, Lecture Notes in Business Information Processing, vol.152, pp.101-112, 2013.

E. Estellés-Arolas and F. González-Ladrón-de-Guevara, Towards an integrated crowdsourcing definition, Journal of Information Science, vol.38, pp.189-200, 2012.

J. Fan, G. Li, B. C. Ooi, K. Tan, J. Feng et al., iCrowd: An adaptive crowdsourcing framework, Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pp.1015-1030, 2015.

S. Faradani, B. Hartmann, and P. G. Ipeirotis, What's the right price? Pricing tasks for finishing on time, Human Computation, vol.11, p.11, 2011.

A. Feng, M. J. Franklin, D. Kossmann, T. Kraska, S. Madden et al., CrowdDB: Query processing with the VLDB crowd, PVLDB, vol.4, pp.1387-1390, 2011.

M. J. Franklin, D. Kossmann, T. Kraska, S. Ramesh, R. Xin et al., CrowdDB: Answering queries with crowdsourcing, Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, pp.61-72, 2011.

P. Fraternali, D. Martinenghi, and M. Tagliasacchi, Top-k bounded diversification, Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, pp.421-432, 2012.

U. Gadiraju, R. Kawase, and S. Dietze, A taxonomy of microtasks on the web, Proceedings of the 25th ACM conference on Hypertext and social media, pp.218-223, 2014.

U. Gadiraju, R. Kawase, S. Dietze, and G. Demartini, Understanding malicious behavior in crowdsourcing platforms: The case of online surveys, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp.1631-1640, 2015.

Y. Gao and A. Parameswaran, Finish them!: Pricing algorithms for human computation, PVLDB, vol.7, pp.1965-1976, 2014.

M. Gladwell, Outliers: The Story of Success, 2008.

D. Haas, J. Ansel, L. Gu, and A. Marcus, Argonaut: Macrotask crowdsourcing for complex data processing, PVLDB, vol.8, pp.1642-1653, 2015.

P. Hitlin, Research in the Crowdsourcing Age, a Case Study, Pew Research Center, 2016.

Y. S. Horawalavithana and D. N. Ranasinghe, An efficient incremental indexing mechanism for extracting top-k representative queries over continuous data-streams, Proceedings of the 14th International Workshop on Adaptive and Reflective Middleware, vol.8, pp.1-8, 2015.

J. Howe, The Rise of Crowdsourcing. Wired, 2006.

J. Howe, Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business, vol.1, 2008.

D. R. Karger, S. Oh, and D. Shah, Budget-optimal task allocation for reliable crowdsourcing systems, Operations Research, vol.62, issue.1, pp.1-24, 2014.

F. Khatib, F. DiMaio, S. Cooper, M. Kazmierczyk, M. Gilski et al., Crystal structure of a monomeric retroviral protease solved by protein folding game players, Nature Structural and Molecular Biology, vol.18, pp.1175-1177, 2011.

A. Kittur, B. Smus, S. Khamkar, and R. E. Kraut, CrowdForge: Crowdsourcing complex work, Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp.43-52, 2011.

H. W. Kuhn, The Hungarian method for the assignment problem, Naval Research Logistics Quarterly, vol.2, pp.83-97, 1955.

A. Kulkarni, M. Can, and B. Hartmann, Collaboratively crowdsourcing workflows with Turkomatic, Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp.1003-1012, 2012.

E. Law and L. von Ahn, Human Computation, Synthesis Lectures on Artificial Intelligence and Machine Learning, 2011.

J. Lee and S. Hwang, Toward efficient multidimensional subspace skyline computation, VLDB Journal, vol.23, pp.129-145, 2014.

G. Little, L. B. Chilton, M. Goldman, and R. C. Miller, TurKit: Human computation algorithms on Mechanical Turk, Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, pp.57-66, 2010.

X. Liu, M. Lu, B. C. Ooi, Y. Shen, S. Wu et al., CDAS: A crowdsourcing data analytics system, PVLDB, vol.5, pp.1040-1051, 2012.

K. Maarry, W. Balke, H. Cho, S. Hwang, and Y. Baba, Skill ontology-based model for quality assurance in crowdsourcing, Database Systems for Advanced Applications, pp.376-387, 2014.

M. Magnani, I. Assent, and M. L. Mortensen, Taking the big picture: Representative skylines based on significance and diversity, VLDB Journal, vol.23, pp.795-815, 2014.

J. Markoff, In a Video Game, Tackling the Complexities of Protein Folding, New York Times, 2010.

P. Mavridis, D. Gross-Amblard, and Z. Miklós, Skill-aware task assignment in crowdsourcing applications, International Symposium on Web AlGorithms, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01171330

P. Mavridis, D. Gross-Amblard, and Z. Miklós, Using hierarchical skills for optimized task assignment in knowledge-intensive crowdsourcing, Proceedings of the 25th International Conference on World Wide Web, pp.843-853, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01306481

S. E. Middleton, N. R. Shadbolt, and D. C. De Roure, Ontological user profiling in recommender systems, ACM Transactions on Information Systems (TOIS), vol.22, pp.54-88, 2004.

P. Minder and A. Bernstein, CrowdLang: A programming language for the systematic exploration of human computation systems, Lecture Notes in Computer Science, vol.7710, pp.124-137, 2012.

L. Mo, R. Cheng, B. Kao, X. S. Yang, C. Ren et al., Optimizing plurality for human intelligence tasks, 22nd ACM International Conference on Information and Knowledge Management, CIKM'13, pp.1929-1938, 2013.

A. Morishima, N. Shinagawa, T. Mitsuishi, H. Aoki, and S. Fukusumi, CyLog/Crowd4U: A declarative platform for complex data-centric crowdsourcing, PVLDB, vol.5, pp.1918-1921, 2012.

J. Oosterman, A. Bozzon, G. Houben, A. Nottamkandath, C. Dijkshoorn et al., Crowd vs. experts: Nichesourcing for knowledge intensive tasks in cultural heritage, Proceedings of the 23rd International Conference on World Wide Web, pp.567-568, 2014.

J. Oosterman, A. Nottamkandath, C. Dijkshoorn, A. Bozzon, G. Houben et al., Crowdsourcing knowledge-intensive tasks in cultural heritage, Proceedings of the 2014 ACM Conference on Web Science, pp.267-268, 2014.

A. G. Parameswaran, H. Park, H. Garcia-Molina, N. Polyzotis, J. Widom et al., Declarative crowdsourcing, Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pp.1203-1212, 2012.

H. Park, R. Pang, A. G. Parameswaran, H. Garcia-Molina, N. Polyzotis et al., An overview of the Deco system: data model and query language; query processing and optimization, SIGMOD Record, vol.41, pp.22-27, 2012.

J. Pilourdault, S. Amer-Yahia, D. Lee, and S. B. Roy, Motivation-Aware Task Assignment in Crowdsourcing, Proceedings of the 20th International Conference on Extending Database Technology (EDBT), 2017.
URL : https://hal.archives-ouvertes.fr/hal-01498801

D. Poo, B. Chng, and J. Goh, A hybrid approach for user profiling, Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03), vol.4, pp.103-105, 2003.

H. Rahman, S. Thirumuruganathan, S. B. Roy, S. Amer-Yahia, and G. Das, Worker skill estimation in team-based tasks, PVLDB, vol.8, pp.1142-1153, 2015.
URL : https://hal.archives-ouvertes.fr/hal-02000589

P. Resnik, Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language, J. Artif. Intell. Res. (JAIR), pp.95-130, 1999.

S. B. Roy, I. Lykourentzou, S. Thirumuruganathan, S. Amer-Yahia, and G. Das, Task assignment optimization in knowledge-intensive crowdsourcing, VLDB Journal, vol.24, pp.467-491, 2015.
URL : https://hal.archives-ouvertes.fr/hal-02000596

R. L. Santos, C. Macdonald, and I. Ounis, Exploiting query reformulations for web search result diversification, Proceedings of the 19th International Conference on World Wide Web, pp.881-890, 2010.

E. Steel, Newcastle Brown Ale Calls for Other Brands to Join a Sly Super Bowl Ad Campaign, 2015.

D. Tamir, 50000 Worldwide Mechanical Turk Workers, 2014.

S. Tranquillini, F. Daniel, P. Kucherbaev, and F. Casati, Modeling, enacting, and integrating custom crowdsourcing processes, ACM Transactions on the Web, vol.9, issue.2, p.43, 2015.

G. Trebay, Keeping T-Shirts in the Moment, New York Times, 2005.

G. Valkanas, A. N. Papadopoulos, and D. Gunopulos, SkyDiver: A framework for skyline diversification, Proceedings of the 16th International Conference on Extending Database Technology, pp.406-417, 2013.

P. Victor, C. Cornelis, A. M. Teredesai, and M. De Cock, Whom should I trust?: The impact of key figures on cold start recommendations, Proceedings of the 2008 ACM Symposium on Applied Computing, pp.2014-2018, 2008.

J. Vuurens and A. de Vries, Obtaining High-Quality Relevance Judgments Using Crowdsourcing, IEEE Internet Computing, vol.16, pp.20-27, 2012.

D. Wang, T. Abdelzaher, L. Kaplan, and C. C. Aggarwal, Recursive fact-finding: A streaming approach to truth estimation in crowdsourcing applications, ICDCS, 2013.

X. Wang, Z. Dou, T. Sakai, and J. Wen, Evaluating search result diversity using intent hierarchies, Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.415-424, 2016.

P. Wicks, T. E. Vaughan, M. P. Massagli, and J. Heywood, Accelerated clinical discovery using self-reported patient data collected online and a patient-matching algorithm, Nature Biotechnology, vol.29, pp.411-414, 2011.

T. Wu, L. Chen, P. Hui, C. J. Zhang, and W. Li, Hear the whole story: Towards the diversity of opinion in crowdsourcing markets, PVLDB, vol.8, pp.485-496, 2015.

J. Zhang, J. Tang, and J. Li, Expert finding in a social network, DASFAA, vol.4443, pp.1066-1069, 2007.

W. Zhang and J. Wang, A collective Bayesian Poisson factorization model for cold-start local event recommendation, Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp.1455-1464, 2015.

Z. Zhao, J. Cheng, F. Wei, M. Zhou, W. Ng et al., SocialTransfer: Transferring social knowledge for cold-start crowdsourcing, Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pp.779-788, 2014.

L. Zheng and L. Chen, Mutual benefit aware task assignment in a bipartite labor market, IEEE 32nd International Conference on Data Engineering (ICDE), pp.73-84, 2016.

Y. Zheng, R. Cheng, S. Maniu, and L. Mo, On optimality of jury selection in crowdsourcing, Proceedings of the 18th International Conference on Extending Database Technology, pp.193-204, 2015.

Résumé

... gathering a wise and intelligent "crowd", governmental transparency and openness, etc. The challenges to be met in order to make the best use of the available talent and to optimize the quality of crowdsourcing results are numerous.