In it we (i) develop a computational framework for examining the practical capabilities of deep neural networks in numerical approximation tasks arising in scientific computing, (ii) conduct the first comprehensive empirical study of training fully-connected deep neural networks for standard function approximation tasks, and (iii) present a novel theoretical analysis showing that there exist provably good ways to train deep neural networks for smooth, high-dimensional function approximation that match current best-in-class schemes.
This is part of a larger program of work on understanding the performance of deep neural networks for scientific computing tasks. For more of our work in this direction, check out another recent paper of ours:
Simone Brugiapaglia, Nick Dexter, Sebastian Moraga and I have just uploaded a new paper on learning Hilbert-valued functions from limited data using deep neural networks. This problem arises in many areas of computational science and engineering, notably the solution of parametric PDEs for uncertainty quantification (UQ). In the paper, we first present a novel practical existence theorem showing that there is a DNN architecture and training procedure that is guaranteed to perform as well as current state-of-the-art methods in terms of sample complexity. We also quantify all errors in the process, including the measurement error and the physical space discretization error. We then present results from initial numerical investigations on parametric PDE problems. These results are promising, and show that even simpler DNNs and training procedures can achieve competitive, and sometimes better, results than current best-in-class schemes.
Stand by for more work in this direction in the near future! In the meantime, the paper can be found here:
In it, we show that current deep learning approaches to image reconstruction are unstable: small perturbations in the measurements can lead to a myriad of artefacts in the recovered images. This has potentially serious consequences for the safe and secure deployment of machine learning techniques in imaging applications.
My MSc student Qinghong (Jackie) Xu successfully defended her Master’s thesis. Congratulations!
Jackie's thesis is titled “Compressive Imaging with Total Variation Regularization and Application to Auto-calibration of Parallel Magnetic Resonance Imaging”. It contains a novel (and technical) theoretical analysis of TV regularization in compressed sensing, and a new method for auto-calibration in parallel MRI. Stand by for the paper later this year!