The workflow is as follows:

- Input: an ONNX file created using an ML framework
- Convert the ONNX model to NIER (onnx parser)
- Convert NIER to an SMT-LIB string (smtifyer), written on disk
- Add the property to validate to the existing SMT-LIB file
- Output: an SMT-LIB file that can be solved to prove the property

The generated file is structured as follows:

;;;; Automatically generated part
;; Inputs declaration
(declare-fun |actual_input_0_0_0_8| () Real)
;; Weights declaration
(declare-fun |l_1.weight_31_4| () Real)
(assert (= |l_1.weight_31_4| ...))
;; Outputs declaration
(assert (= |actual_output_0_0_0_1| (+ |16_0_0_0_1| |l_3.bias_1|)))
;; Property to check
;; Formulate constraint on outputs
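The last two comment headers mark the hand-written part of the file. As a minimal sketch, assuming a local robustness query around a reference input and using the identifiers from the listing above, this part could read as follows; the numeric input interval and output threshold are illustrative values, not taken from the original:

;; Property to check: local robustness around a reference input
;; (the numeric bounds below are illustrative, not from the model)
(assert (>= |actual_input_0_0_0_8| 0.4))
(assert (<= |actual_input_0_0_0_8| 0.6))
;; Formulate constraint on outputs: assert the negation of the
;; desired output property, so that an unsat answer proves it
(assert (> |actual_output_0_0_0_1| 1.0))
(check-sat)

An SMT solver reading the complete file then answers unsat, proving the property under the input constraints, or sat together with a model that serves as a counterexample.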
