Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data

Simone Brugiapaglia, Nick Dexter, Sebastian Moraga and I have just uploaded a new paper on learning Hilbert-valued functions from limited data using deep neural networks. This problem arises in many applications in computational science and engineering, notably the solution of parametric PDEs for uncertainty quantification (UQ). In the paper, we first present a novel practical existence theorem showing that there is a DNN architecture and training procedure guaranteed to perform as well as current state-of-the-art methods in terms of sample complexity. We also quantify all errors in the process, including the measurement error and the physical space discretization error. We then present results from initial numerical investigations on parametric PDE problems. These results are promising: they show that even simpler DNNs and training procedures can achieve competitive, and sometimes better, results than current best-in-class schemes.

Stand by for more work in this direction in the near future! In the meantime, the paper can be found here:

Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data

The instability phenomenon in deep learning for image reconstruction

Our paper On instabilities of deep learning in image reconstruction and the potential costs of AI was just published in PNAS:

In it, we show that current deep learning approaches for image reconstruction are unstable: namely, small perturbations in the measurements lead to a myriad of artefacts in the recovered images. This has potentially serious consequences for the safe and secure deployment of machine learning techniques in imaging applications.
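The kind of instability described above can be probed by searching for a worst-case perturbation of the measurements. Here is a minimal sketch of that idea, using a toy randomly-weighted network as a stand-in for a learned reconstruction map (everything here is hypothetical and purely illustrative; it is not the paper's experimental setup): projected gradient ascent finds a small perturbation that changes the output far more than a random perturbation of the same size does.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a learned reconstruction map: a small tanh network
# with random (untrained) weights. Purely illustrative.
n_meas, n_img, n_hid = 20, 30, 50
W1 = 0.1 * rng.normal(size=(n_hid, n_meas))
W2 = rng.normal(size=(n_img, n_hid))

def recon(y):
    return W2 @ np.tanh(W1 @ y)

def grad_objective(y0, e):
    # Gradient of 0.5 * ||recon(y0 + e) - recon(y0)||^2 with respect to e.
    h = W1 @ (y0 + e)
    r = W2 @ np.tanh(h) - recon(y0)
    return W1.T @ ((W2.T @ r) * (1 - np.tanh(h) ** 2))

# Projected gradient ascent: among perturbations of norm eps, find one
# that maximizes the change in the reconstruction.
y0 = rng.normal(size=n_meas)
eps = 1e-2
e = rng.normal(size=n_meas)
e *= eps / np.linalg.norm(e)
for _ in range(200):
    e += 0.1 * grad_objective(y0, e)
    e *= eps / np.linalg.norm(e)   # project back to the eps-sphere

worst = np.linalg.norm(recon(y0 + e) - recon(y0))
rand_e = rng.normal(size=n_meas)
rand_e *= eps / np.linalg.norm(rand_e)
typical = np.linalg.norm(recon(y0 + rand_e) - recon(y0))
print(f"worst-case response: {worst:.3e}, random perturbation: {typical:.3e}")
```

The gap between the worst-case and random responses is exactly what makes these instabilities easy to miss in standard testing: generic noise looks harmless, while a targeted perturbation of the same magnitude does not.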

Here is some press coverage: Cambridge University News, EurekAlert, The Register, Health Care Business, Radiology Business, Science Daily, Psychology Today, Government Computing, Diagnostic Imaging, News Medical, Press Release Point, Tech Xplore, Aunt Minnie, My Science, Digit, The Talking Machines

Welcome Sebastian

I am pleased to welcome Sebastian Moraga as a new PhD student in my group. Sebastian joins SFU from the University of Concepcion in Chile. He previously visited my group in Spring 2017.

Congratulations Jackie!

My MSc student Qinghong (Jackie) Xu successfully defended her Master’s thesis. Congratulations!

Jackie's thesis is titled “Compressive Imaging with Total Variation Regularization and Application to Auto-calibration of Parallel Magnetic Resonance Imaging”. It contains a novel (and technical) theoretical analysis of TV regularization in compressed sensing, and a new method for auto-calibration in parallel MRI. Stand by for the paper later this year!

Optimal sampling in general domains

When approximating a multivariate function defined on an irregular domain, a good choice of sampling points is critical. In this paper, my PhD student Juan and I develop new, practical sampling strategies for which the sample complexity is near-optimal: specifically, it is linear (up to a log factor) in the dimension of the approximation space. This improves on previous approaches, which were at best quadratic. Here’s the paper:

Optimal sampling strategies for multivariate function approximation on general domains
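The flavour of the sampling question can be seen in a small least-squares experiment. The sketch below is hypothetical and not the paper's method (it uses plain uniform rejection sampling and a monomial basis on a quarter disk, whereas the paper constructs weighted, near-optimally distributed samples); it simply illustrates fitting a total-degree polynomial on an irregular domain with a number of samples linear, up to a log factor, in the size of the basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical irregular domain: the quarter disk {x, y >= 0, x^2 + y^2 <= 1}.
def in_domain(pts):
    return (pts[:, 0] >= 0) & (pts[:, 1] >= 0) & (pts[:, 0]**2 + pts[:, 1]**2 <= 1)

def sample_domain(m):
    # Rejection-sample m uniform points from the quarter disk.
    out = np.empty((0, 2))
    while len(out) < m:
        cand = rng.uniform(0, 1, size=(2 * m, 2))
        out = np.vstack([out, cand[in_domain(cand)]])
    return out[:m]

def basis_matrix(pts, degree):
    # Total-degree monomial basis (illustration only; in practice one
    # would use an orthonormal basis and weighted sampling).
    cols = [pts[:, 0]**i * pts[:, 1]**j
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    return np.column_stack(cols)

degree = 10
N = (degree + 1) * (degree + 2) // 2        # size of the polynomial space
m = int(np.ceil(2 * N * np.log(N)))         # samples: linear in N up to a log

f = lambda p: np.exp(-p[:, 0]) * np.cos(3 * p[:, 1])
pts = sample_domain(m)
A = basis_matrix(pts, degree)
coeffs, *_ = np.linalg.lstsq(A, f(pts), rcond=None)

# Check the error at fresh test points.
test = sample_domain(2000)
err = np.max(np.abs(basis_matrix(test, degree) @ coeffs - f(test)))
print(f"N = {N}, m = {m}, max error = {err:.2e}")
```

With a smooth target function the fit is accurate even though the number of samples grows only log-linearly in the basis size; the paper's contribution is a sampling strategy that makes this scaling provably near-optimal on general domains.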