The gap between theory and practice in function approximation with deep neural networks

Nick Dexter and I have a paper that has been accepted for publication in the SIAM Journal on Mathematics of Data Science:

The gap between theory and practice in function approximation with deep neural networks

In it, we (i) develop a computational framework for examining the practical capabilities of deep neural networks on numerical approximation tasks in scientific computing, (ii) conduct the first comprehensive empirical study of training fully-connected deep neural networks for standard function approximation tasks, and (iii) present a novel theoretical analysis showing that there exist provably good ways to train deep neural networks for smooth, high-dimensional function approximation that match current best-in-class schemes.
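
To give a flavour of the kind of experiment in (ii), here is a minimal sketch of training a fully-connected network to approximate a smooth, high-dimensional function from limited samples. Everything here is illustrative: the target function, network width and depth, sample count, and optimizer settings are stand-ins, not the paper's actual benchmarks or training setup.

```python
import torch
import torch.nn as nn

# Illustrative smooth, high-dimensional target: f(x) = exp(-||x||^2)
# on [-1, 1]^d. This is a stand-in, not one of the paper's test functions.
d = 8  # input dimension

def f(x):
    return torch.exp(-(x ** 2).sum(dim=1, keepdim=True))

# A fully-connected (dense) ReLU network; width and depth are illustrative.
model = nn.Sequential(
    nn.Linear(d, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 1),
)

# Training data: m random samples, mimicking the limited-data regime.
m = 500
X = 2 * torch.rand(m, d) - 1
y = f(X)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Estimate the relative L2 approximation error on fresh test points.
X_test = 2 * torch.rand(10000, d) - 1
with torch.no_grad():
    err = (model(X_test) - f(X_test)).norm() / f(X_test).norm()
print(f"relative L2 test error: {err.item():.3e}")
```

Varying the architecture, the number of samples m, and the training procedure in a setup like this, and comparing the resulting errors against established approximation schemes, is the basic shape of the empirical study.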

This paper is part of a larger program of work on understanding the performance of deep neural networks in scientific computing. For more in this direction, check out another recent paper of ours:

Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data