
Speedup of Network Training Process by Eliminating the Overshoots of Outputs

Abstract : Overshoots between the expected and actual outputs during network training slow down the training process and degrade training accuracy. In this paper, an improved training method that eliminates these overshoots is proposed on the basis of traditional network training algorithms, and a criterion for eliminating overshoots is given. Traditional methods take gradient descent as the sole training criterion and neglect the side effects caused by overshoots. Here, the overshoot definition (OD) is combined with gradient descent: whenever the overshoot criterion is triggered, local linearization and weighted-mean methods are used to adjust the network parameters. A numerical experiment based on the new training strategy is conducted to verify the proposed algorithm. The results show that the proposed algorithm eliminates overshoots effectively and greatly improves the training performance of the network.
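The abstract describes the mechanism only at a high level, so the following is a minimal sketch of one plausible reading, not the authors' actual algorithm: a single linear unit is trained by gradient descent, and whenever an update makes the output error change sign (the output overshoots the target), the old and new weights are blended by a weighted mean whose coefficient comes from a local linearization of the error. All names (train_step, the learning rate, the toy data) are illustrative assumptions.

import numpy as np

def train_step(w, x, y, lr):
    # One gradient-descent step on squared error for a linear unit.
    err_old = np.dot(w, x) - y        # error before the update
    w_new = w - lr * err_old * x      # gradient of 0.5 * err^2 is err * x
    err_new = np.dot(w_new, x) - y    # error after the update

    # Overshoot: the error changed sign, so the output jumped past the
    # target. The error is linear in the interpolation factor a, so
    #   (1 - a) * err_old + a * err_new = 0
    # gives the weighted mean of old and new weights whose output lands
    # exactly on the target (a local linearization in a).
    if err_old * err_new < 0:
        a = err_old / (err_old - err_new)
        w_new = (1 - a) * w + a * w_new
    return w_new

# Toy run: the learning rate is deliberately too large, so plain
# gradient descent would oscillate around the target; the overshoot
# correction instead converges in one step.
w = np.array([0.1, -0.2, 0.3])
x = np.array([0.5, -1.0, 2.0])
y = 1.5
for _ in range(5):
    w = train_step(w, x, y, lr=0.4)
print("final error:", np.dot(w, x) - y)

For a multi-layer network the error is only approximately linear in the blend factor, which is presumably why the paper pairs the weighted mean with an explicit local linearization step.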
Document type : Conference papers
Cited literature [14 references]

https://hal.inria.fr/hal-01821056
Contributor : Hal Ifip
Submitted on : Friday, June 22, 2018 - 11:45:21 AM
Last modification on : Friday, June 22, 2018 - 12:00:49 PM
Long-term archiving on : Tuesday, September 25, 2018 - 11:16:38 AM

File

467708_1_En_39_Chapter.pdf
Files produced by the author(s)

Licence


Distributed under a Creative Commons Attribution 4.0 International License

Citation

Di Zhou, Yuxin Zhao, Chang Liu, Yanlong Liu. Speedup of Network Training Process by Eliminating the Overshoots of Outputs. 14th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2018, Rhodes, Greece. pp.462-470, ⟨10.1007/978-3-319-92007-8_39⟩. ⟨hal-01821056⟩
