M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov et al., Deep Learning with Differential Privacy, CCS, pp.308-318, 2016.

G. Ateniese, L. V. Mancini, A. Spognardi, A. Villani, D. Vitali et al., Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers, Int. J. Secur. Netw, vol.10, pp.137-150, 2015.

A. Bojchevski and S. Günnemann, Adversarial Attacks on Node Embeddings via Graph Poisoning, ICML, 2019.

H. Cai, V. W. Zheng, and K. C. Chang, A Comprehensive Survey of Graph Embedding: Problems, Techniques, and Applications, IEEE Transactions on Knowledge and Data Engineering, vol.30, pp.1616-1637, 2018.

N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song, The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks, USENIX Security, pp.267-284, 2019.

H. Chen, B. Perozzi, R. Al-Rfou, and S. Skiena, A Tutorial on Network Embeddings, 2018.

J. Du, S. Zhang, G. Wu, J. M. F. Moura et al., Topology Adaptive Graph Convolutional Networks, 2018.

V. Duddu and V. Rao, Quantifying (Hyper) Parameter Leakage in Machine Learning, 2019.

V. Duddu, D. Samanta, V. Rao, and V. E. Balas, Stealing Neural Networks via Timing Side Channels, 2018.

M. Fredrikson, S. Jha, and T. Ristenpart, Model Inversion Attacks That Exploit Confidence Information and Basic Countermeasures, CCS, pp.1322-1333, 2015.

K. Ganju, Q. Wang, W. Yang, C. A. Gunter, and N. Borisov, Property Inference Attacks on Fully Connected Neural Networks Using Permutation Invariant Representations, CCS, pp.619-633, 2018.

N. Z. Gong and B. Liu, You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors, USENIX Security, pp.979-995, 2016.

A. Grover and J. Leskovec, node2vec: Scalable Feature Learning for Networks, KDD, 2016.

W. Hamilton, Z. Ying, and J. Leskovec, Inductive Representation Learning on Large Graphs, NIPS, pp.1024-1034, 2017.

J. Hayes, L. Melis, G. Danezis, and E. De Cristofaro, LOGAN: Membership Inference Attacks Against Generative Models, PETS, vol.1, pp.133-152, 2019.

X. He, J. Jia, M. Backes, N. Z. Gong, and Y. Zhang, Stealing Links from Graph Neural Networks, 2020.

J. Jia and N. Gong, AttriGuard: A Practical Defense against Attribute Inference Attacks via Adversarial Machine Learning, USENIX Security, pp.513-529, 2018.

J. Jia, A. Salem, M. Backes, Y. Zhang, and N. Gong, MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples, CCS, pp.259-274, 2019.

J. Jia, B. Wang, L. Zhang, and N. Gong, AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields, WWW, pp.1561-1569, 2017.

T. N. Kipf and M. Welling, Semi-Supervised Classification with Graph Convolutional Networks, ICLR, 2017.

J. Klicpera, A. Bojchevski, and S. Günnemann, Combining Neural Networks with Personalized PageRank for Classification on Graphs, ICLR, 2019.

Q. Li, Z. Han, and X. Wu, Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning, AAAI, 2018.

L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, Exploiting Unintended Feature Leakage in Collaborative Learning, SP, pp.691-706, 2019.

T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, Distributed Representations of Words and Phrases and Their Compositionality, NIPS, pp.3111-3119, 2013.

M. Nasr, R. Shokri, and A. Houmansadr, Machine Learning with Membership Privacy Using Adversarial Regularization, CCS, pp.634-646, 2018.

M. Nasr, R. Shokri, and A. Houmansadr, Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning, SP, pp.739-753, 2019.

X. Pan, M. Zhang, S. Ji, and M. Yang, Privacy Risks of General-Purpose Language Models, SP, 2020.

B. Perozzi, R. Al-Rfou, and S. Skiena, DeepWalk: Online Learning of Social Representations, KDD, pp.701-710, 2014.

A. Salem, Y. Zhang, M. Humbert, P. Berrang, M. Fritz et al., ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, NDSS, 2019.

R. Shokri, M. Stronati, C. Song, and V. Shmatikov, Membership Inference Attacks Against Machine Learning Models, SP, pp.3-18, 2017.

C. Song and A. Raghunathan, Information Leakage in Embedding Models, 2020.

C. Song, T. Ristenpart, and V. Shmatikov, Machine Learning Models That Remember Too Much, CCS, pp.587-601, 2017.

C. Song and V. Shmatikov, Overlearning Reveals Sensitive Attributes, ICLR, 2020.

L. Song and P. Mittal, Systematic Evaluation of Privacy Risks of Machine Learning Models, 2020.

L. van der Maaten and G. Hinton, Visualizing Data using t-SNE, Journal of Machine Learning Research, pp.2579-2605, 2008.

P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò et al., Graph Attention Networks, ICLR, 2018.

B. Wang and N. Z. Gong, Stealing Hyperparameters in Machine Learning, SP, pp.36-52, 2018.

D. Xu, S. Yuan, X. Wu, and H. Phan, DPNE: Differentially Private Network Embedding, PAKDD, pp.235-246, 2018.

L. Vu and S. N. Tran, dpUGC: Learn Differentially Private Representation for User Generated Contents, CICLing, 2019.

J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu et al., Graph Neural Networks: A Review of Methods and Applications, 2018.

D. Zhu, Z. Zhang, P. Cui, and W. Zhu, Robust Graph Convolutional Networks Against Adversarial Attacks, KDD, pp.1399-1407, 2019.

D. Zügner, A. Akbarnejad, and S. Günnemann, Adversarial Attacks on Neural Networks for Graph Data, KDD, pp.2847-2856, 2018.

D. Zügner and S. Günnemann, Certifiable Robustness and Robust Training for Graph Convolutional Networks, KDD, pp.246-256, 2019.