
On-the-fly Black-Box Probably Approximately Correct Checking of Recurrent Neural Networks

Abstract: We propose a procedure for checking properties of recurrent neural networks used for language modeling and sequence classification. Our approach is a case of black-box checking based on learning a probably approximately correct, regular approximation of the intersection of the language of the black-box (the network) with the complement of the property to be checked, without explicitly building individual representations of either. When the algorithm returns an empty language, there is a proven upper bound on the probability that the network does not satisfy the requirement. When the returned language is nonempty, it is certain that the network does not satisfy the property. In this case, an explicit and interpretable characterization of the error is produced, together with input sequences on which the network truly violates the property. Moreover, our approach requires neither an external decision procedure for verification nor a specific property specification formalism.
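The PAC flavor of the guarantee can be illustrated with a minimal sampling sketch. This is a simplification, not the authors' actual algorithm (which learns a regular approximation of the intersection language on the fly via Angluin-style queries); all names here are hypothetical, and the "network" is a toy stand-in for an RNN acceptor. It shows the two outcomes described in the abstract: either a concrete violating sequence is returned, or, after a PAC-sized number of samples with no violation found, the probability mass of violating sequences is below epsilon with confidence 1 - delta.

```python
import math
import random

def pac_check(network_accepts, property_holds, sample_word,
              epsilon=0.05, delta=0.01):
    """Sampling-based PAC check (illustrative sketch).

    Draws m >= (1/epsilon) * ln(1/delta) sample words. If some word is
    accepted by the network and violates the property, it is returned as
    a true counterexample. If none is found, then with confidence
    1 - delta the probability of drawing a violating word is < epsilon.
    """
    m = math.ceil(math.log(1.0 / delta) / epsilon)
    for _ in range(m):
        w = sample_word()
        if network_accepts(w) and not property_holds(w):
            return w  # the network truly violates the property on w
    return None

# Toy instance over the alphabet {a, b}: the "network" accepts any word
# containing at least one 'a'; the property forbids the substring "aa".
random.seed(0)

def sample_word():
    return "".join(random.choice("ab")
                   for _ in range(random.randint(1, 8)))

net = lambda w: "a" in w        # hypothetical stand-in for the RNN
prop = lambda w: "aa" not in w  # property to be checked

cex = pac_check(net, prop, sample_word)
```

Here a counterexample such as a word containing "aa" is found almost immediately, since many sampled words violate the property; on a property the network actually satisfies, the loop would exhaust its m samples and return None, giving the probabilistic guarantee.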
Contributor: Hal Ifip
Submitted on: Thursday, November 4, 2021 - 3:58:02 PM
Last modification on: Friday, November 5, 2021 - 3:58:01 AM
Long-term archiving on: Saturday, February 5, 2022 - 7:09:48 PM


Restricted access
To satisfy the distribution rights of the publisher, the document is embargoed until 2023-01-01.



Distributed under a Creative Commons Attribution 4.0 International License



Franz Mayr, Ramiro Visca, Sergio Yovine. On-the-fly Black-Box Probably Approximately Correct Checking of Recurrent Neural Networks. 4th International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2020, Dublin, Ireland. pp.343-363, ⟨10.1007/978-3-030-57321-8_19⟩. ⟨hal-03414742⟩


