Numerical algorithms for high dimensional integration with application to machine learning and molecular dynamics

Abstract: This thesis contains results on high dimensional integration, with two papers, paper I and paper II, presenting applications in machine learning, and two papers, paper III and paper IV, presenting applications to molecular dynamics.

In paper I we present algorithms based on a Metropolis test for training shallow neural networks with trigonometric activation functions. Numerical experiments are performed on both synthetic and real data. The trigonometric activation function gives access to the Fourier transform and its inverse transform. The algorithms give equidistributed amplitudes.

In paper II we derive a smaller generalization error for deep residual neural networks compared to shallow ones. An algorithm that builds the residual neural network layer by layer, based on an algorithm from paper I, is presented both as a stand-alone algorithm and as a pre-step for a global optimizer such as stochastic gradient descent or Adam. Numerical tests are performed with promising results.

In paper III we make use of the semiclassical Weyl law to show that canonical quantum observables can be approximated by molecular dynamics with an error rate proportional to the electron-nuclei mass ratio. Numerical experiments are presented that confirm the expected theoretical result.

In paper IV we consider canonical ensembles of molecular systems. We propose four numerical algorithms for efficient computation of the canonical ensemble molecular dynamics observables. The four algorithms can each be efficient in different situations. For example, at low temperatures we can make use of the fact that the lowest electron energy levels contribute most to the observable. The work is an extension of the results in paper III.
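To illustrate the kind of method paper I refers to, the sketch below trains a shallow network with trigonometric activations (random Fourier features) by fitting the amplitudes with least squares and resampling the frequencies with a Metropolis-style accept/reject step that favors frequencies carrying large amplitude. All specifics here, the proposal width `delta`, the acceptance exponent `gamma`, the regularization `lam`, and the toy target, are illustrative assumptions, not the thesis's exact algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-d regression target (illustrative choice, not from the thesis)
x = np.linspace(-3.0, 3.0, 200)
y = np.exp(-x**2) * np.sin(2.0 * x)

K = 32        # number of trigonometric features / neurons
delta = 2.0   # random-walk proposal width (assumption)
gamma = 3.0   # acceptance exponent (assumption)
lam = 1e-3    # Tikhonov regularization of the amplitude fit (assumption)

def fit_amplitudes(omega):
    """Regularized least-squares amplitudes for features exp(i*omega*x)."""
    S = np.exp(1j * np.outer(x, omega))               # 200 x K design matrix
    A = S.conj().T @ S + lam * len(x) * np.eye(K)
    c = np.linalg.solve(A, S.conj().T @ y)
    resid = np.linalg.norm(S @ c - y) / np.sqrt(len(x))
    return c, resid

omega = rng.normal(size=K)          # initial frequencies
c, resid = fit_amplitudes(omega)

for _ in range(200):
    prop = omega + delta * rng.normal(size=K)         # propose new frequencies
    c_prop, _ = fit_amplitudes(prop)
    # Metropolis test per frequency: accept a proposed frequency with
    # probability min(1, (|c'|/|c|)**gamma), so amplitude mass spreads
    # toward an equidistributed profile over the kept frequencies
    accept = (np.abs(c_prop) / np.abs(c))**gamma > rng.uniform(size=K)
    omega = np.where(accept, prop, omega)
    c, resid = fit_amplitudes(omega)                  # refit mixed frequencies

print(f"final RMS residual: {resid:.4f}")
```

The per-frequency acceptance rule is the sketch's stand-in for the thesis's Metropolis test: frequencies whose fitted amplitudes are comparatively small tend to be replaced, which is one way to drive the amplitudes toward the equidistribution mentioned above.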