The gap between theory and practice in function approximation with deep neural networks

Nick Dexter and I have a paper that has been accepted in the SIAM Journal on Mathematics of Data Science:

The gap between theory and practice in function approximation with deep neural networks

In it we (i) develop a computational framework for examining the practical capabilities of deep neural networks for numerical approximation tasks in scientific computing, (ii) conduct the first comprehensive empirical study of training fully-connected deep neural networks for standard function approximation tasks, and (iii) present a novel theoretical analysis showing that there exist provably good ways to train deep neural networks for smooth, high-dimensional function approximation that match current best-in-class schemes.
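To give a concrete sense of the basic task studied here, below is a minimal NumPy sketch (not the paper's framework, and far simpler than the architectures we test) of training a small fully-connected network with one tanh hidden layer to approximate a smooth one-dimensional function by full-batch gradient descent. All names and hyperparameters are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a smooth one-dimensional function, a stand-in for the
# smooth high-dimensional targets considered in the paper
f = lambda x: np.exp(-x**2)

X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = f(X)

# One hidden layer of 32 tanh units
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # activations and network output

initial = np.mean((forward(X)[1] - y) ** 2)   # mean-squared error at start

lr = 0.1
for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)     # d(MSE)/d(pred)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1.0 - h**2)    # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2    # full-batch gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1

final = np.mean((forward(X)[1] - y) ** 2)
```

The interesting questions, of course, only appear at scale: in high dimensions, with deeper architectures and limited data, where the gap between what theory promises and what training delivers is the subject of the paper.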

This is part of a larger program of work on understanding the performance of deep neural networks for scientific computing tasks. For more of our work in this direction, check out another recent paper of ours:

Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data

Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data

Simone Brugiapaglia, Nick Dexter, Sebastian Moraga and I have just uploaded a new paper on learning Hilbert-valued functions from limited data using deep neural networks. This problem arises in many applications in computational science and engineering, notably the solution of parametric PDEs for uncertainty quantification (UQ). In the paper, we first present a novel practical existence theorem showing that there is a DNN architecture and training procedure that is guaranteed to perform as well as the current state-of-the-art methods in terms of sample complexity. We also quantify all errors in the process, including the measurement error and the physical space discretization error. We then present results from initial numerical investigations on parametric PDE problems. These results are promising, and show that even simple DNNs and training procedures can achieve competitive, and sometimes better, results than current best-in-class schemes.
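For readers unfamiliar with the setup, a Hilbert-valued function assigns to each parameter value a whole function (in practice, a vector after discretization), and the goal is to build a surrogate from few parameter samples. The sketch below is not our DNN method; it illustrates the classical polynomial least-squares baseline that such methods are measured against, on a made-up smooth parametric family. The target `g`, the sample count and the polynomial degree are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Hilbert-valued target: for each parameter t, g(t) is a
# vector in R^m, i.e. a discretized function on a spatial grid
m = 64
grid = np.linspace(0, 1, m)
g = lambda t: np.exp(-t * grid)       # smooth dependence on the parameter t

# Limited data: n random samples of the parameter in [-1, 1]
n = 20
T = rng.uniform(-1, 1, n)
Y = np.stack([g(t) for t in T])       # n x m matrix of vector-valued samples

# Least-squares fit in a Legendre polynomial basis of dimension k
k = 8
Phi = np.polynomial.legendre.legvander(T, k - 1)   # n x k design matrix
C, *_ = np.linalg.lstsq(Phi, Y, rcond=None)        # k x m coefficient matrix

# Evaluate the surrogate at an unseen parameter value
t_new = 0.3
approx = np.polynomial.legendre.legvander(np.array([t_new]), k - 1) @ C
err = np.linalg.norm(approx[0] - g(t_new)) / np.linalg.norm(g(t_new))
```

Because the dependence on the parameter is analytic here, a low-degree polynomial surrogate is already very accurate; the question the paper addresses is when and how DNN-based surrogates can match or beat this kind of sample complexity.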

Stand by for more work in this direction in the near future! In the meantime, the paper can be found here:

Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data

The instability phenomenon in deep learning for image reconstruction

Our paper On instabilities of deep learning in image reconstruction and the potential costs of AI was just published in PNAS:

https://www.pnas.org/content/early/2020/05/08/1907377117

In it, we show that current deep learning approaches for image reconstruction are unstable: namely, small perturbations in the measurements lead to a myriad of artefacts in the recovered images. This has potentially serious consequences for the safe and secure deployment of machine learning techniques in imaging applications.
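The instability notion at play can be illustrated without any neural network at all. The toy sketch below (my simplification for this post, not an experiment from the paper) uses a linear reconstruction map based on an ill-conditioned forward operator: a tiny, carefully chosen perturbation of the measurements is amplified enormously in the reconstruction. The paper shows that trained deep-learning reconstructions can exhibit analogous worst-case amplification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned forward operator A (a stand-in for a sampling operator)
U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
V, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s = np.logspace(0, -4, 50)            # singular values spanning 4 orders
A = U @ np.diag(s) @ V.T

x = rng.normal(size=50)               # ground-truth image (as a vector)
ymeas = A @ x                         # clean measurements

# Naive least-squares reconstruction map
recon = lambda y: np.linalg.pinv(A) @ y

# Worst-case measurement perturbation: the left singular vector
# associated with the smallest singular value of A
e = 1e-3 * U[:, -1]

# Ratio of reconstruction change to measurement change
amplification = (np.linalg.norm(recon(ymeas + e) - recon(ymeas))
                 / np.linalg.norm(e))
```

Here the amplification factor equals the reciprocal of the smallest singular value, so a perturbation invisible in the measurements dominates the reconstruction; for trained networks the analogous worst-case perturbations have to be found numerically, which is what the paper's tests do.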

Here is some press coverage: Cambridge University News, EurekAlert, The Register, Health Care Business, Radiology Business, Science Daily, Psychology Today, Government Computing, Diagnostic Imaging, News Medical, Press Release Point, Tech Xplore, Aunt Minnie, My Science, Digit, The Talking Machines

Welcome Sebastian

I am pleased to welcome Sebastian Moraga as a new PhD student in my group. Sebastian joins SFU from the University of Concepcion in Chile. He previously visited my group in Spring 2017.

Congratulations Jackie!

My MSc student Qinghong (Jackie) Xu successfully defended her Master’s thesis. Congratulations!

Jackie's thesis is titled “Compressive Imaging with Total Variation Regularization and Application to Auto-calibration of Parallel Magnetic Resonance Imaging”. It contains a novel (and technical) theoretical analysis of TV regularization in compressed sensing, and a new method for auto-calibration in parallel MRI. Stand by for the paper later this year!