C. Robert and G. Casella, Monte Carlo statistical methods, 2013.

C. Cooper, T. Radzik, and Y. Siantos, Fast low-cost estimation of network properties using random walks, Proceedings of workshop on algorithms and models for the web-graph (WAW), 2013.

L. Massoulié, E. Le Merrer, A.-M. Kermarrec, and A. Ganesh, Peer counting and sampling in overlay networks: random walk methods, Proceedings of ACM annual symposium on principles of distributed computing (PODC), 2006.

K. Avrachenkov, B. Ribeiro, and J. K. Sreedharan, Inference in OSNs via lightweight partial crawls, ACM SIGMETRICS Perform Eval Rev, vol.44, issue.1, pp.165-177, 2016. URL: https://hal.archives-ouvertes.fr/hal-01403018

A. Nazi, Z. Zhou, S. Thirumuruganathan, N. Zhang, and G. Das, Walk, not wait: faster sampling over online social networks, Proceedings of the VLDB Endowment, vol.8, pp.678-689, 2015.

S. Goel and M. J. Salganik, Respondent-driven sampling as Markov chain Monte Carlo, Stat Med, vol.28, issue.17, pp.2202-2231, 2009.

M. J. Salganik and D. D. Heckathorn, Sampling and estimation in hidden populations using respondent-driven sampling, Sociol Methodol, vol.34, issue.1, pp.193-240, 2004.

E. Volz and D. D. Heckathorn, Probability based estimation theory for respondent driven sampling, J Off Stat, vol.24, issue.1, p.79, 2008.

M. Gjoka, M. Kurant, C. T. Butts, and A. Markopoulou, Walking in Facebook: a case study of unbiased sampling of OSNs, Proceedings of IEEE INFOCOM, pp.1-9, 2010.

A. Dasgupta, R. Kumar, and T. Sarlos, On estimating the average degree, Proceedings of WWW, pp.795-806, 2014.

B. Ribeiro and D. Towsley, Estimating and sampling graphs with multidimensional random walks, Proceedings of ACM SIGCOMM internet measurement conference (IMC), 2010.

D. Aldous and J. A. Fill, Reversible Markov chains and random walks on graphs, 2014.

P. Brémaud, Markov chains: Gibbs fields, Monte Carlo simulation, and queues, 1999.

E. Nummelin, MC's for MCMC'ists, Int Stat Rev, vol.70, issue.2, pp.215-255, 2002.

G. O. Roberts and J. S. Rosenthal, General state space Markov chains and MCMC algorithms, Probab Surv, vol.1, pp.20-71, 2004.

P. Billingsley, Probability and measure, 2008.

C. Lee, X. Xu, and D. Y. Eun, Beyond random walk and Metropolis-Hastings samplers: why you should not backtrack for unbiased graph sampling, Proceedings of ACM SIGMETRICS/PERFORMANCE joint international conference on measurement and modeling of computer systems, 2012.

S. M. Ross, Applied probability models with optimization applications, 1992.

J. Abounadi, D. Bertsekas, and V. S. Borkar, Learning algorithms for Markov decision processes with average cost, SIAM J Control Optim, vol.40, issue.3, pp.681-698, 2001.

V. S. Borkar, R. Makhijani, and R. Sundaresan, Asynchronous gossip for averaging and spectral ranking, IEEE J Sel Topics Signal Process, vol.8, issue.4, pp.703-719, 2014.

V. S. Borkar, Reinforcement learning: a bridge between numerical methods and Monte Carlo. In: Perspectives in Mathematical Sciences I: Probability and Statistics, pp.71-91, 2009.

J. G. Kemeny and J. L. Snell, Finite Markov Chains, 1983.

V. D. Blondel, J. Guillaume, R. Lambiotte, and E. Lefebvre, Fast unfolding of communities in large networks, J Stat Mech Theory Exp, issue.10, p.P10008, 2008. URL: https://hal.archives-ouvertes.fr/hal-01146070