The pipeline is as follows:

Input: an ONNX file created using an ML framework.
1. Convert the ONNX model to NIER (onnx parser).
2. Convert NIER to an SMT-LIB string (smtifyer), written to disk.
3. Add the property to validate to the existing SMT-LIB file.
Output: an SMT-LIB file that can be solved to prove the property.

;;;; Automatically generated part
;; Inputs declaration
(declare-fun |actual_input_0_0_0_8| () Real)
;; Weights declaration
(declare-fun |l_1.weight_31_4| () Real)
(assert (= |l_1.weight_31_4| ...))
;; Outputs declaration
(assert (= |actual_output_0_0_0_1| (+ |16_0_0_0_1| |l_3.bias_1|)))
;; Property to check
;; Formulate constraint on outputs