Simone Brugiapaglia, Nick Dexter, Sebastian Moraga and I have just uploaded a new paper on learning Hilbert-valued functions from limited data using deep neural networks. This problem arises in many applications in computational science and engineering, notably the solution of parametric PDEs for uncertainty quantification (UQ). In the paper, we first present a novel practical existence theorem showing that there is a DNN architecture and training procedure guaranteed to perform as well as current state-of-the-art methods in terms of sample complexity. We also quantify all errors in the process, including the measurement error and the physical-space discretization error. We then present results from initial numerical investigations on parametric PDE problems. These results are promising, and show that even simpler DNNs and training procedures can achieve competitive, and sometimes better, results than current best-in-class schemes.
Stand by for more work in this direction in the near future! In the meantime, the paper can be found here:
In it, we show that current deep learning approaches for image reconstruction are unstable: small perturbations in the measurements can lead to a myriad of artefacts in the recovered images. This has potentially serious consequences for the safe and secure deployment of machine learning techniques in imaging applications.
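The generic worst-case test behind such instability results can be sketched as follows: given a differentiable reconstruction map, search for a small measurement perturbation that maximally changes the output via projected gradient ascent. This is only a toy illustration: the stand-in two-layer map, its sizes, and the step size are all hypothetical, whereas the paper applies this kind of search to real trained image-recovery networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in differentiable "reconstruction map": a tiny random two-layer
# network (hypothetical; real experiments use trained recovery networks).
n_meas, n_img, width = 20, 30, 50
W1 = rng.standard_normal((width, n_meas))
W2 = rng.standard_normal((n_img, width))

def recon(y):
    return W2 @ np.tanh(W1 @ y)

def jac(y):
    # Jacobian of recon at y: W2 @ diag(sech^2(W1 y)) @ W1
    s = 1.0 / np.cosh(W1 @ y) ** 2
    return W2 @ (s[:, None] * W1)

# Worst-case perturbation search: projected gradient ascent on
#   q(e) = 0.5 * || recon(y + e) - recon(y) ||^2   subject to ||e|| <= eps.
y = rng.standard_normal(n_meas)
eps = 0.01 * np.linalg.norm(y)          # small perturbation budget
e = 1e-6 * rng.standard_normal(n_meas)
for _ in range(200):
    d = recon(y + e) - recon(y)
    g = jac(y + e).T @ d                # gradient of q at e
    e = e + 0.1 * g / (np.linalg.norm(g) + 1e-12)
    if np.linalg.norm(e) > eps:         # project back onto the ball
        e = eps * e / np.linalg.norm(e)

# Amplification of the worst-case direction vs a random one of equal size.
amp = np.linalg.norm(recon(y + e) - recon(y)) / np.linalg.norm(e)
rand = rng.standard_normal(n_meas)
rand = eps * rand / np.linalg.norm(rand)
amp_rand = np.linalg.norm(recon(y + rand) - recon(y)) / np.linalg.norm(rand)
print(f"worst-case amplification {amp:.2f} vs random {amp_rand:.2f}")
```

The point of the comparison is that a random perturbation of the same size typically causes little change, which is why such instabilities go unnoticed without an adversarial search.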
My MSc student Qinghong (Jackie) Xu successfully defended her Master’s thesis. Congratulations!
Jackie's thesis is titled “Compressive Imaging with Total Variation Regularization and Application to Auto-calibration of Parallel Magnetic Resonance Imaging”. It contains a novel (and technical) theoretical analysis of TV regularization in compressed sensing, and a new method for auto-calibration in parallel MRI. Stand by for the paper later this year!
When approximating a multivariate function defined on an irregular domain, a good choice of sampling points is critical. In this paper, my PhD student Juan and I develop new, practical sampling strategies whose sample complexity is near-optimal: specifically, it is linear (up to a log factor) in the dimension of the approximation space. This improves on previous approaches, whose sample complexity was at best quadratic. Here’s the paper:
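The general flavour of this kind of sampling can be illustrated with a toy numpy sketch: orthonormalise a polynomial basis on a fine grid over the irregular domain, then draw samples proportionally to the resulting (inverse) Christoffel function and solve a weighted least-squares problem with a budget that is linear in the space dimension up to a log factor. This is my own illustrative reconstruction of the general idea, not the paper's actual algorithm; the annular domain, degree, constants, and test function are all made up.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)

# Fine grid on an irregular domain (a hypothetical annulus inside [-1,1]^2).
K = 20000
X = rng.uniform(-1, 1, (K, 2))
r = np.hypot(X[:, 0], X[:, 1])
X = X[(r > 0.3) & (r < 0.95)]
K = len(X)

# Total-degree-d polynomial space in two variables.
d = 10
idx = [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]
n = len(idx)  # dimension of the approximation space

# Tensor-Legendre basis on the grid, orthonormalised with respect to the
# discrete uniform measure on the grid via a QR factorisation.
V1, V2 = legvander(X[:, 0], d), legvander(X[:, 1], d)
Q, _ = np.linalg.qr(np.column_stack([V1[:, i] * V2[:, j] for i, j in idx]))

# Sampling density proportional to the sum of squares of the orthonormal
# basis (the inverse Christoffel function of the space on this domain).
p = np.sum(Q**2, axis=1)
p /= p.sum()

# Near-optimal budget: linear in n up to a log factor.
m = int(np.ceil(2 * n * np.log(n)))
sel = rng.choice(K, size=m, p=p)

# Weighted least squares; the weights undo the sampling bias.
w = 1.0 / np.sqrt(p[sel])
f = lambda x: np.exp(x[:, 0]) * np.cos(2 * x[:, 1])  # smooth test function
c, *_ = np.linalg.lstsq(w[:, None] * Q[sel], w * f(X[sel]), rcond=None)

err = np.max(np.abs(Q @ c - f(X))) / np.max(np.abs(f(X)))
print(f"m = {m}, n = {n}, relative error = {err:.2e}")
```

Note that uniform random sampling on such a domain would generally need many more points for a stable fit; biasing the draws by the Christoffel function is what keeps the least-squares system well conditioned with only about n log n samples.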