Regularization Issues in Neural Network Models of Dynamical Systems

Abstract: The latest era of neural networks started some ten years ago, and the literature has been characterized by many successful applications, but the underlying theory is often omitted. In this thesis, feed-forward neural networks are considered from a system identification point of view. Two nonlinear generalizations of the linear ARX and OE models are proposed and theoretically justified. Neural networks are often characterized by the fact that they use a fairly large number of parameters. We address the problem of how this can be done without the usual penalty in terms of a large variance error. We show that regularization is a key explanation, and that terminating a gradient search ("backpropagation") before the true criterion minimum is found is a way of achieving regularization. This theory also explains the concept of "overtraining" in neural networks.
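
For orientation, the two nonlinear generalizations of ARX and OE referred to are presumably the NARX and NOE model structures; the sketch below uses standard system identification notation (a feed-forward network g with parameter vector \theta, orders n_a and n_b) and is not quoted from the thesis itself:

```latex
% Sketch in standard system identification notation (assumed, not quoted
% from the thesis): g is the feed-forward network, \theta its parameters.
\begin{align*}
\text{NARX:}\quad \hat{y}(t\mid\theta) &= g\big(y(t-1),\dots,y(t-n_a),\,
    u(t-1),\dots,u(t-n_b);\,\theta\big) \\
\text{NOE:}\quad \hat{y}(t\mid\theta) &= g\big(\hat{y}(t-1\mid\theta),\dots,
    \hat{y}(t-n_a\mid\theta),\,u(t-1),\dots,u(t-n_b);\,\theta\big)
\end{align*}
```

As with their linear counterparts, the ARX-type structure regresses on past measured outputs, while the OE-type structure feeds back past model outputs, i.e. it is evaluated in simulation mode.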
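The claim that stopping the gradient search early acts as regularization can be illustrated with a minimal sketch: an overparameterized one-hidden-layer network is trained by plain gradient descent on a noisy toy regression task, and training is terminated when a held-out validation error stops improving. All names and the toy data below are illustrative assumptions, not taken from the thesis.

```python
# Early stopping as implicit regularization: terminate the gradient search
# ("backpropagation") before the training criterion is fully minimized,
# using a held-out validation set to decide when to stop.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: noisy samples of a smooth function.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

# One-hidden-layer feed-forward network with a fairly large number
# of parameters relative to the 150 training points.
n_hidden = 50
W1 = 0.5 * rng.standard_normal((1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, 1))
b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h @ W2 + b2, h

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

lr = 0.05
best_val, best_params, patience, wait = np.inf, None, 200, 0

for epoch in range(20000):
    # Forward and backward pass: plain gradient descent on the MSE criterion.
    pred, h = forward(x_tr)
    err = 2.0 * (pred - y_tr) / len(x_tr)   # dL/dpred
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = x_tr.T @ dh
    gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    # Early stopping: monitor validation error and keep the best weights.
    val = mse(forward(x_va)[0], y_va)
    if val < best_val:
        best_val, wait = val, 0
        best_params = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
    else:
        wait += 1
        if wait >= patience:  # validation error stopped improving;
            break             # stopping here acts as regularization

W1, b1, W2, b2 = best_params
print(f"stopped at epoch {epoch}, validation MSE {best_val:.4f}")
```

Running the search to the true criterion minimum instead would fit the noise in the training data ("overtraining"); truncating the search keeps the effective number of parameters small, which is the regularization effect the abstract refers to.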
