Research areas: Mathematics of data science, numerical analysis, machine learning, applied and computational harmonic analysis, approximation theory, deep learning, compressed sensing, sampling theory, spectral methods for PDEs
Summary: Broadly speaking, my group investigates the recovery of objects (images, signals, functions, etc.) from data. We pursue applications in diverse areas, including image and signal processing, Uncertainty Quantification (UQ) and the numerical solution of PDEs. One area of focus has been the development, analysis and implementation of sparse regularization techniques and their associated Compressed Sensing (CS) theory. Our recent interests include the development and analysis of machine learning techniques – in particular, deep neural networks and Deep Learning (DL) – for scientific computing applications.
Listed below are short summaries of my group’s areas of research. For links to the papers cited, please see my Publications page.
Recent features
For recent news features of my work, see:
- March 2021: SIAM News
- May 2020: Various
- December 2019: SFU Science News
- August 2019: Cambridge University Press blog
- September 2018: SFU Science News
- October 2017: SIAM News
For videos of some recent talks, see:
- Deep learning for scientific computing: two stories on the gap between theory & practice. Alan Turing Institute workshop on Interpretability, Safety, and Security in AI (held online), December 2021.
- Deep learning for scientific computing: (closing) the gap between theory and practice. The AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physics Sciences (held online), April 2021.
- The troublesome kernel: instabilities in deep learning for inverse problems. One World Mathematics of INformation, Data, and Signals (OW-MINDS) Seminar (held online), June 2020.
- Approximation of high-dimensional functions: from polynomials to neural networks. SIAM Conference on Mathematics of Data Science (held online), May 2020.
- On the (unreasonable) effectiveness of compressive imaging. Alan Turing Institute workshop on Approximation, sampling and compression in data science, London, UK, May 2019.
- Compressed sensing and high-dimensional approximation: progress and challenges. iTWIST 2018, Marseille, France, November 2018.
- Polynomial approximation of high-dimensional functions: from regular to irregular domains. Workshop on Surrogate models for UQ in complex systems, Isaac Newton Institute, Cambridge, UK, February 2018.
- From global to local: structure-exploiting sampling in compressive imaging. CAIMS 2017 Early Career Prize presentation, Dalhousie University, Halifax, Canada, July 2017.
- Compressed Sensing: Theory, applications and extensions. PIMS-Shell Lunchbox Lectures, Calgary, Canada, February 2016.
Active Areas of Research
Neural networks and deep learning for image reconstruction
Due to their unprecedented success in tasks such as image classification, neural networks and Deep Learning (DL) are now attracting significant attention for inverse problems in imaging. While they have shown some promise for such problems, in recent work we demonstrated that there is a serious issue: namely, such methods have a tendency to be highly unstable. This has potentially serious consequences for the safety, security and reliability of such algorithms, especially in key applications such as medical imaging. Broadly speaking, my collaborators and I are interested in the mathematics underlying such instabilities, and how it can be used to develop more robust algorithms.
In a recent paper [1], we demonstrated the instability phenomenon in DL for image reconstruction, and introduced a series of tests to examine the (in)stability of trained networks. All networks tested were shown to be either unstable or suffer from poor performance. More recently, we provided a series of theoretical insights into this phenomenon [2]. In particular, we traced the development of instabilities to a so-called lack of kernel awareness. This means that trained neural networks have a tendency to overperform, recovering images better than they should, given the data, which necessarily leads to instabilities. We also presented a series of results highlighting how delicate training can be, in particular when it comes to the performance-stability tradeoff. Nonetheless, in recent work [3] we have shown that stable, accurate and efficient deep neural networks can indeed be constructed, albeit not via training.
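To give a flavour of the kind of stability test involved, the following sketch searches for a small perturbation that maximally changes the output of a trained reconstruction network via projected gradient ascent. It is only a minimal illustration: the network `net`, the measurement operator `A` and the image `x`, as well as the perturbation budget and step sizes, are placeholders, not the models or the exact procedure tested in [1].

```python
# Minimal sketch (assumptions: `net`, `A` and `x` are user-supplied) of a
# worst-case perturbation test for a trained reconstruction network.
import torch

def worst_case_perturbation(net, A, x, eps=0.01, steps=100, lr=1e-2):
    """Seek a perturbation r with ||r|| <= eps*||x|| that maximally changes
    the reconstruction net(A(x + r))."""
    for p in net.parameters():                # freeze the trained network
        p.requires_grad_(False)
    r = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([r], lr=lr)
    budget = eps * x.norm()
    ref = net(A(x)).detach()                  # unperturbed reconstruction
    for _ in range(steps):
        opt.zero_grad()
        loss = -(net(A(x + r)) - ref).norm()  # ascend the change in output
        loss.backward()
        opt.step()
        with torch.no_grad():                 # project back onto the ball
            if r.norm() > budget:
                r *= budget / r.norm()
    return r.detach()
```

A reconstruction that changes dramatically under such a tiny perturbation of the measurements is precisely the instability phenomenon described above.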
Our work [1] has been featured in University of Cambridge research news, as well as various other news venues. For an overview, see Part V of my book Compressive Imaging: Structure, Sampling, Learning.
Relevant papers
- V. Antun, F. Renna, C. Poon, B. Adcock and A. C. Hansen, On instabilities of deep learning in image reconstruction and the potential costs of AI
- N. M. Gottschling, V. Antun, B. Adcock and A. C. Hansen, The troublesome kernel: why deep learning for inverse problems is typically unstable
- B. Adcock and M. Neyra-Nesterenko, Stable, accurate and efficient deep neural networks for inverse problems with analysis-sparse models
Deep learning for scientific computing
In the last five years, there has been a significant increase in the use of machine learning techniques for problems in computational science and engineering. Yet, such methods are typically poorly understood mathematically, and often do not meet the rigorous standards of scientific computing when it comes to accuracy, stability and efficiency. Broadly speaking, my group is interested in understanding these issues, and in turn, the development of robust and reliable machine learning techniques for scientific computing purposes.
In a recent paper [1], we focused on the application of DL to practical high-dimensional function approximation – a problem that arises in applications such as Uncertainty Quantification (UQ). We introduced a reliable computational framework for testing DL and used it to examine questions surrounding architecture design and sample complexity. Our main results show that DL’s practical performance is hard to predict, and can contrast quite starkly with neural network existence theory. We also established a theoretical result asserting the existence of a ‘good’ DL approach (architecture, sampling strategy and training procedure). This result suggests the potential for better performance with DL, with more work needed to develop better training techniques. Recently, we have extended this theoretical result to the more realistic setting of Hilbert-valued functions [2], and shown that in practice DL-based techniques can perform as well as or better than state-of-the-art polynomial-based methods for parametric PDE problems. In addition, we have also developed new adaptive sampling strategies to enhance the performance of DL for function approximation tasks in data-scarce settings [3].
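As a rough illustration of the kind of experiment involved (a sketch only; the architectures, training procedures and target functions studied in [1] differ), one can train a small fully-connected network on m samples of a d-dimensional function and measure its test error:

```python
# Toy experiment: approximate a d-dimensional function from m samples with a
# small fully-connected network. All choices below are illustrative.
import torch

d, m = 8, 500
f = lambda x: torch.exp(-x.pow(2).sum(dim=1, keepdim=True))   # test function

torch.manual_seed(0)
X, Xt = torch.rand(m, d) * 2 - 1, torch.rand(2000, d) * 2 - 1  # train/test points
Y, Yt = f(X), f(Xt)

net = torch.nn.Sequential(
    torch.nn.Linear(d, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):                         # full-batch training
    opt.zero_grad()
    loss = torch.mean((net(X) - Y) ** 2)
    loss.backward()
    opt.step()

with torch.no_grad():
    test_err = torch.sqrt(torch.mean((net(Xt) - Yt) ** 2)).item()  # RMS test error
```

How such errors behave as the architecture, the amount of data m and the dimension d vary is exactly what turns out to be hard to predict in practice.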
Relevant papers
- B. Adcock and N. Dexter, The gap between theory and practice in function approximation with deep neural networks
- B. Adcock, S. Brugiapaglia, N. Dexter and S. Moraga, Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited data
- B. Adcock, J. M. Cardenas and N. Dexter, CAS4DL: Christoffel Adaptive Sampling for function approximation via Deep Learning
High-dimensional approximation and applications in computational science
The challenge of approximating a high-dimensional function from data arises in many areas of computational science. It is a key task in Uncertainty Quantification (UQ) and the numerical solution of parametric PDEs, for example. This problem is rendered difficult by the well-known curse of dimensionality. Classical approaches, such as projection, interpolation and least-squares, are known to have significant drawbacks in such scenarios. By contrast, in the last five years, CS has begun to emerge as a promising way to ameliorate (or even circumvent) this issue.
My group’s work in this direction has been twofold. First, we have developed a set of weighted l1 regularization strategies for computing sparse polynomial approximations of smooth, high-dimensional functions. As shown in [1,2], these techniques provably overcome the curse of dimensionality (up to a logarithmic factor in the dimension) and are optimal, in the sense that they achieve the same asymptotic sample complexities as oracle least-squares estimators. Second, we have sought to address the inherent infinite-dimensionality of the approximation problem by proposing an infinite-dimensional CS framework [3]. We have also developed various extensions, including to derivative sampling [4], problems on irregular domains [5], optimized sampling strategies [6] and applications to UQ.
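As a small illustration of the first strategy (a one-dimensional sketch under assumed parameters, not the multivariate setup of the papers below), a sparse polynomial approximation can be computed from random samples via weighted l1 minimization, here using CVXPY:

```python
# Sketch: sparse Chebyshev approximation from random samples via (weighted)
# l1 minimization. The test function, weights and noise bound are assumptions.
import numpy as np
import cvxpy as cp

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n, m = 60, 30                                  # basis size, number of samples
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, m)                  # random sample points
y = f(x)

A = np.cos(np.outer(np.arccos(x), np.arange(n)))  # Chebyshev polynomials T_k(x_i)
w = np.ones(n)                                 # placeholder weights
eta = 1e-8                                     # assumed noise bound

c = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, c), 1)),
                  [cp.norm(A @ c - y, 2) <= eta])
prob.solve()
coeffs = c.value        # g(x) ~= sum_k coeffs[k] * T_k(x) approximates f
```

The choice of weights and sampling measure is where the theory enters; see [1,2] for the choices behind the sample complexity guarantees mentioned above.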
This work was recently featured by SFU Science. For an in-depth introduction to this topic, see my book Sparse Polynomial Approximation of High-Dimensional Functions.
Relevant papers
- B. Adcock, Infinite-dimensional compressed sensing and function interpolation.
- B. Adcock, S. Brugiapaglia and C. Webster, Compressed sensing approaches for polynomial approximation of high-dimensional functions.
- B. Adcock, Infinite-dimensional l1 minimization and function approximation from pointwise data.
- B. Adcock and Y. Sui, Compressive Hermite interpolation: sparse, high-dimensional approximation from gradient-augmented measurements.
- B. Adcock and D. Huybrechs, Approximating smooth, multivariate functions on irregular domains.
- B. Adcock and J. M. Cardenas, Near-optimal sampling strategies for multivariate function approximation on general domains.
- B. Adcock, J. M. Cardenas and N. Dexter. An adaptive sampling and domain learning strategy for multivariate function approximation on unknown domains.
Parameter robust sparse regularization and silent data corruptions
Popular sparse regularization strategies such as the LASSO or basis pursuit involve careful parameter tuning to achieve good performance. This is also reflected in the related compressed sensing theory, which often imposes unrealistic assumptions, such as a priori bounds on the noise level, in order to establish accuracy and stability guarantees. In this work, we aim to develop new theory and practical sparse regularization decoders for more realistic, noise-blind scenarios. This includes the case of sparse (i.e. silent) data corruptions – an increasingly important setting in modern computational infrastructures.
We first extended standard compressed sensing theory to explain the performance of basis pursuit decoders in the noise-blind setting [1]. Next, we introduced the Square-Root LASSO – a decoder previously used in statistical inference – as a sparse regularization technique [3]. Theoretically, we established that this decoder is noise-blind, and thus no expensive, empirical parameter tuning is needed. Finally, we have also extended this work to the problem of sparse data corruption [2,3]. This is an increasingly common issue in, for instance, UQ simulations, and one which can have a significant effect on downstream computations. Our work introduces new decoders for mitigating sparse data corruptions, allowing one to achieve accuracies similar to the uncorrupted case despite a large fraction (up to 10%-20%) of data being arbitrarily corrupted.
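For illustration, here is a minimal sketch of the Square-Root LASSO decoder on synthetic data (the matrix, sparsity level and regularization parameter below are illustrative assumptions, not the precise scalings from the theory):

```python
# Minimal sketch of the square-root LASSO decoder:
#     minimize  lam * ||z||_1 + ||A z - y||_2
# over z, with synthetic data. A, y, s and lam below are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, N, s = 50, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)    # random measurement matrix
z_true = np.zeros(N); z_true[:s] = 1.0          # s-sparse ground truth
y = A @ z_true + 1e-3 * rng.standard_normal(m)  # noisy data, noise level unknown

lam = 0.1   # illustrative; crucially, a good choice need not track the noise level
z = cp.Variable(N)
cp.Problem(cp.Minimize(lam * cp.norm(z, 1) + cp.norm(A @ z - y, 2))).solve()
z_hat = z.value
```

Contrast this with the LASSO or constrained basis pursuit decoders, whose parameters must track the (typically unknown) noise level to retain the usual guarantees.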
Relevant papers
- S. Brugiapaglia and B. Adcock, Robustness to unknown error in sparse regularization.
- B. Adcock, A. Bao, J. Jakeman and A. Narayan, Compressed sensing with sparse corruptions: Fault-tolerant sparse collocation approximations.
- B. Adcock, A. Bao and S. Brugiapaglia. Correcting for unknown errors in sparse high-dimensional function approximation.
Numerical approximation with redundancy
Orthogonal bases are ubiquitous in numerical approximation. However, there are many problems where finding a ‘good’ (e.g. rapidly-convergent or computationally-efficient) orthonormal basis is difficult or impossible. One such example is finding global high-order approximation systems on irregular domains; another is constructing approximation systems that incorporate a finite number of ‘feature functions’ (e.g. singularities or oscillations). Yet in many of these cases one can easily construct redundant systems of functions with the desired properties. Redundant systems (dictionaries) are standard tools in image and signal processing and sampling theory. But they are far less well understood in numerical analysis and approximation.
My group and collaborators are developing the theory, implementation and application of numerical methods based on redundant systems for a range of different problems. These include efficient approximation and PDE solvers on irregular domains in two and more dimensions, numerical methods for treating singularities, and beyond. Our work has typically assumed the underlying system is a so-called frame. The basic theory of numerical frame approximation was introduced in [1,2], based on earlier work on so-called Fourier extensions [3]. Applications to multivariate approximation on irregular domains are considered in [4]. Most recently, we have shown that a numerical method for approximating functions from equispaced samples using algebraic polynomial extensions has desirable stability and convergence properties. Namely, it is well-conditioned and the error decays exponentially fast down to a user-controlled tolerance. This behaviour is essentially optimal: a result of Platte, Trefethen and Kuijlaars shows that no stable method can converge exponentially fast to zero. Our result shows exponentially fast error decay is possible down to finite tolerances.
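As a rough sketch of the basic idea (illustrative parameters, not the exact algorithms of the papers below), a frame approximation can be computed by a regularized, truncated-SVD least-squares fit. The example below uses a Fourier extension: f is approximated on [-1,1] using the redundant Fourier system associated with a larger interval [-T,T].

```python
# Rough sketch of a numerical frame approximation: a Fourier extension of f
# from [-1, 1] to a larger interval [-T, T], computed by a truncated-SVD
# least-squares fit with a user-chosen tolerance. Parameters are illustrative.
import numpy as np

def fourier_extension(f, n=40, m=400, T=2.0, tol=1e-12):
    x = np.linspace(-1.0, 1.0, m)                    # equispaced samples on [-1, 1]
    k = np.arange(-n, n + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / T)      # redundant Fourier system on [-T, T]
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)   # discard tiny singular values
    c = Vh.conj().T @ (s_inv * (U.conj().T @ f(x)))
    return lambda t: np.exp(1j * np.pi * np.outer(t, k) / T) @ c

g = fourier_extension(lambda x: np.exp(x) * np.sin(5 * x))
t = np.linspace(-1, 1, 1000)
err = np.max(np.abs(g(t).real - np.exp(t) * np.sin(5 * t)))
# err decays rapidly with n, down to a level determined by the tolerance
```

The truncation tolerance plays the role of the ‘user-controlled tolerance’ referred to above: roughly speaking, it limits the size of the computed coefficients and sets the level down to which the error can decay.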
Relevant papers
- B. Adcock and D. Huybrechs, Frames and numerical approximation.
- B. Adcock and D. Huybrechs, Frames and numerical approximation II: generalized sampling.
- B. Adcock, D. Huybrechs and J. Martin-Vaquero, On the numerical stability of Fourier extensions.
- B. Adcock and D. Huybrechs, Approximating smooth, multivariate functions on irregular domains.
- B. Adcock and A. Shadrin, Fast and stable approximation of analytic functions from equispaced samples via polynomial frames.
Less Active and Past Areas of Research
Compressive imaging and local structure
Image reconstruction has been one of the key beneficiaries of sparse regularization and Compressed Sensing (CS). Standard CS is based on global principles: sparsity and incoherence. However, these principles do not adequately capture the structure inherent to most imaging problems. In such problems, it turns out that local structure plays a crucial role. Leveraging this additional structure – through the design of optimized sampling strategies and recovery algorithms – we obtain significant gains in reconstruction quality.
In a series of papers we introduced a new mathematical framework for CS based on broader and more realistic local principles. This theory not only explains why CS works so well in imaging applications, but it also opens the door for substantially improved approaches in a range of practical applications through the design of optimized sampling strategies. Applications include single-pixel/lensless imaging, MRI, NMR, radio interferometry, fluorescence microscopy and beyond. Practical implementations of these ideas have been carried out in [3], as well as by Siemens (Wang et al., 2014), a major manufacturer of MR scanners. Theoretically, we have shown that CS is nearly optimal for the approximation of piecewise smooth functions when combined with these sampling strategies [5]. This closes a key gap between the standard theory of CS and its performance in practical imaging settings.
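As a simple illustration of a structure-exploiting sampling strategy (a generic variable-density scheme with assumed parameters, not the optimized multilevel schemes developed in the papers below), one can sample 2D Fourier coefficients with a probability that decays with frequency:

```python
# Illustrative sketch of a variable-density Fourier sampling mask of the kind
# used in structure-exploiting compressive imaging. Parameters are assumptions.
import numpy as np

def variable_density_mask(n=256, frac=0.15, decay=2.0, seed=0):
    rng = np.random.default_rng(seed)
    kx, ky = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2)
    r = np.sqrt(kx**2 + ky**2) / (n / 2)          # normalized frequency radius
    p = 1.0 / (1.0 + r)**decay                    # sample low frequencies more
    p *= frac * n**2 / p.sum()                    # rescale to the sampling budget
    return rng.random((n, n)) < np.minimum(p, 1.0)

mask = variable_density_mask()   # True entries: Fourier coefficients to measure
```

Low frequencies, where image coefficients are far from sparse, are sampled densely; higher frequencies, where the coefficients become increasingly sparse, are sampled less often.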
This work has been featured on the cover of SIAM News, as well as by Cambridge University Press’ blog and by SFU Science. For an in-depth treatment, see my book Compressive Imaging: Structure, Sampling, Learning.
Relevant papers
- B. Adcock, A. C. Hansen, C. Poon and B. Roman, Breaking the coherence barrier: a new theory for compressed sensing.
- B. Adcock, A. C. Hansen, and B. Roman, The quest for optimal sampling: computationally efficient, structure-exploiting measurements for compressed sensing.
- B. Roman, B. Adcock and A. C. Hansen, On asymptotic structure in compressed sensing.
- C. Li and B. Adcock, Compressed sensing with local structure: uniform recovery guarantees for the sparsity in levels class.
- B. Adcock, S. Brugiapaglia and M. King-Roskamp, Do log factors matter? On optimal wavelet approximation and the foundations of compressed sensing.
Sparse regularization and parallel acquisition: medical imaging and beyond
Sparse regularization techniques have the potential to significantly enhance reconstruction quality and/or reduce acquisition time in many applications. Typically, one considers a scenario with a single sensor acquiring measurements of an object. However, many practical acquisition systems involve multiple sensors acting in parallel. This is the case, for instance, in parallel MRI, where multiple coils simultaneously acquire Fourier data of the image.
Our work in this area seeks to further enhance sparse regularization for parallel acquisition systems through the development of fast algorithms that exploit additional structure beyond sparsity. In [1] we introduced a new regularization which promotes joint sparsity between the measured coil images, and a fast algorithm for its implementation. This approach improves reconstruction accuracy over the current state-of-the-art techniques. Ongoing work involves the further development of this model and the development of new CS theory [3,4] to optimize its performance in applications.
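To illustrate the flavour of such a regularizer (an illustration only, not the exact model or algorithm of [1]), a joint-sparsity penalty couples each coefficient across the C coil images via a mixed l2,1 norm, whose proximal map is a row-wise soft thresholding:

```python
# Tiny sketch of a joint-sparsity (l_{2,1}) penalty that couples coefficients
# across C coil images. An illustration, not the exact regularizer of [1].
import numpy as np

def l21_norm(Z):
    """Z has shape (N, C): row i holds coefficient i across the C coils."""
    return np.sum(np.linalg.norm(Z, axis=1))

def prox_l21(Z, tau):
    """Proximal map of tau * l21_norm: row-wise soft thresholding."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-15), 0.0)
    return scale * Z
```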
Relevant papers
- I. Y. Chun, B. Adcock and T. Talavage, Efficient compressed sensing SENSE pMRI reconstruction with joint sparsity promotion.
- I. Y. Chun, B. Adcock and T. Talavage, Non-convex compressed sensing CT reconstruction based on tensor discrete Fourier slice theorem.
- I. Y. Chun and B. Adcock, Compressed sensing and parallel acquisition.
- I. Y. Chun and B. Adcock, Uniform recovery from subgaussian multi-sensor measurements.
Infinite-dimensional compressed sensing
Standard compressed sensing (CS) concerns vectors and matrices. However, many real-life signals and images are analog/continuous-time, and therefore better modelled as functions in function spaces acted on by linear operators. Naively applying discrete CS techniques to such continuous problems leads to poor reconstructions and issues such as the inverse crime. This raises the question of whether or not one can extend CS techniques and theory to infinite dimensions.
In [1] we introduced a framework for CS in Hilbert spaces, based on the ideas of generalized sampling (see below). Whilst finite-dimensional CS is a special case of this more general theory, applying this framework directly to the underlying continuous model allows one to avoid the aforementioned issues and thereby obtain higher-fidelity reconstructions. Practical validation of these ideas has been carried out in applications such as electron microscopy and Helium Atom Scattering [3].
Relevant papers
- B. Adcock and A. C. Hansen, Generalized sampling and infinite-dimensional compressed sensing.
- B. Adcock, A. C. Hansen, B. Roman and G. Teschke, Generalized sampling: stable reconstructions, inverse problems and compressed sensing over the continuum.
- B. Roman, B. Adcock and A. C. Hansen, On asymptotic structure in compressed sensing.
Generalized Sampling
The classical result in sampling theory, the Nyquist-Shannon Sampling Theorem, states that a bandlimited signal can be recovered from countably-many equispaced samples taken at or above a certain critical rate (the Nyquist rate). However, in practice one never has access to infinitely-many samples. This raises a different question: how well can one stably recover a given signal from finitely-many such samples?
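For reference, with the convention that the signal's Fourier transform is supported in [-W, W], the reconstruction formula underlying the theorem reads

$$ f(t) \;=\; \sum_{n \in \mathbb{Z}} f\!\left(\frac{n}{2W}\right) \mathrm{sinc}(2Wt - n), \qquad \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}, $$

and the question above concerns what happens when this sum must be truncated to finitely many terms.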
In [1] we introduced a new sampling framework to address this issue, known as generalized sampling (GS). An important concept in GS, introduced in [2], is the stable sampling rate (SSR), which specifies the relationship between the number of samples and the degrees of freedom in the reconstruction. Linearity of the SSR for wavelet spaces was shown in [4]. We also extended the original GS framework to the case of nonuniform samples [5,6], ill-posed problems [3] and sampling with derivatives [7]. Open problems include further generalizations to other function spaces and to other types of sampling and reconstruction systems.
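The following is a crude numerical sketch of the idea (the sizes, test function and quadrature are illustrative assumptions): recover the coefficients of f in one basis (here, Legendre polynomials) from finitely many samples in another (here, Fourier) via a least-squares fit; the SSR then dictates how many samples M are needed for a stable reconstruction of N coefficients.

```python
# Crude sketch of generalized sampling (GS): recover the first N Legendre
# coefficients of f from M Fourier samples by least squares. Sizes are
# illustrative; GS theory relates M and N via the stable sampling rate.
import numpy as np
from numpy.polynomial import legendre

N, M, Q = 20, 800, 2000                  # reconstruction size, samples, quad nodes
t, w = legendre.leggauss(Q)              # quadrature for the inner products
f = lambda x: np.exp(x) * np.cos(3 * x)  # test function (an assumption)

freqs = np.arange(-M // 2, M // 2)
E = np.exp(-1j * np.pi * np.outer(freqs, t))   # e^{-i m pi x} at the nodes
b = E @ (w * f(t))                             # the M Fourier samples of f

# Orthonormal Legendre polynomials evaluated at the quadrature nodes
P = np.stack([legendre.legval(t, np.eye(N)[n]) * np.sqrt(n + 0.5)
              for n in range(N)], axis=1)
U = E @ (w[:, None] * P)                       # cross-Gramian <phi_n, psi_m>

c, *_ = np.linalg.lstsq(U, b, rcond=None)      # the GS least-squares solve
err = np.max(np.abs((P @ c).real - f(t)))      # small: limited by N, not M
```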
Relevant papers
- B. Adcock and A. C. Hansen, A generalized sampling theorem for stable reconstructions in arbitrary bases.
- B. Adcock, A. C. Hansen and C. Poon, Beyond consistent reconstructions: optimality and sharp bounds for generalized sampling, and application to the uniform resampling problem.
- B. Adcock, A. C. Hansen, E. Herrholz and G. Teschke, Generalized sampling: extensions to frames and inverse and ill-posed problems.
- B. Adcock, A. C. Hansen and C. Poon, On optimal wavelet reconstructions from Fourier samples: linearity and universality of the stable sampling rate.
- B. Adcock, M. Gataric and A. C. Hansen, On stable reconstructions from nonuniform Fourier measurements.
- B. Adcock, M. Gataric and A. C. Hansen, Weighted frames of exponentials and stable recovery of multidimensional functions from nonuniform Fourier samples.
- B. Adcock, M. Gataric and A. C. Hansen, Density theorems for nonuniform sampling of bandlimited functions using derivatives or bunched measurements.
High-order function approximation using variable transforms
Variable transforms are useful in a variety of numerical computations to increase accuracy, efficiency and/or stability. Our work in this area aims to develop new variable transform techniques for a range of practical approximation problems.
First, we developed new parametrized mapping techniques for the efficient approximation of functions with endpoint singularities [2,3]. Second, we used certain parametrized conformal mappings to better approximate functions from scattered data [1]. Key aspects of this work are the analysis of stability and convergence of the various approximations uniformly in the mapping parameters, and the understanding of how such techniques yield high accuracy without necessarily converging in a classical sense.
Relevant papers
- B. Adcock and R. Platte, A mapped polynomial method for high-accuracy approximations on arbitrary grids.
- B. Adcock and M. Richardson, New exponential variable transform methods for functions with endpoint singularities.
- B. Adcock, M. Richardson and J. Martin-Vaquero, Resolution-optimal exponential and double-exponential transform methods for functions with endpoint singularities.
High-order function approximation with non-standard Fourier series
In a number of different applications – such as spectral methods for PDEs and scattered data approximation – one seeks to recover a smooth function to high accuracy from its values on a given set of points. Our work in this area involves the use of certain non-standard Fourier series for such reconstructions.
An attractive means to do this is to use a Fourier series on a larger domain, known as the Fourier extension technique. In [1,2,3] we provided analysis for the convergence and stability of Fourier extensions and in particular, established their benefits for approximating oscillatory functions. Another approach is to use expansions in various eigenfunctions [4,5,6], which are particularly well suited in higher dimensions.
Relevant papers
- B. Adcock, D. Huybrechs and J. Martin-Vaquero, On the numerical stability of Fourier extensions.
- B. Adcock and J. Ruan, Parameter selection and numerical approximation properties of Fourier extensions from fixed data.
- B. Adcock and D. Huybrechs, On the resolution power of Fourier extensions for oscillatory functions.
- B. Adcock, A. Iserles and S. P. Nørsett, From high oscillation to rapid approximation II: Expansions in Birkhoff series.
- B. Adcock, On the convergence of expansions in polyharmonic eigenfunctions.
- B. Adcock, Multivariate modified Fourier series and application to boundary value problems.
Resolution of the Gibbs phenomenon
The Gibbs phenomenon occurs when a piecewise smooth function is expanded as a Fourier series. Characteristic features are oscillations near the discontinuities and lack of uniform convergence of the expansion. Resolution of the Gibbs phenomenon refers to postprocessing the Fourier coefficients to obtain higher orders of convergence, and has applications to both image processing and the numerical solution of PDEs.
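A quick numerical illustration (a standard textbook example, not taken from the papers below): the truncated Fourier series of a square wave overshoots near the jump by an amount that does not vanish as the number of terms grows.

```python
# The Gibbs phenomenon for the square wave sign(x) on [-pi, pi]:
# its Fourier partial sums overshoot near the jump at x = 0.
import numpy as np

x = np.linspace(-np.pi, np.pi, 10001)
for N in (16, 64, 256):
    # Partial sum of the Fourier series of sign(x): (4/pi) sum sin((2k-1)x)/(2k-1)
    S = sum(4.0 / np.pi * np.sin((2 * k - 1) * x) / (2 * k - 1)
            for k in range(1, N + 1))
    print(N, S.max())   # maximum tends to about 1.179: an overshoot of ~9% of the jump
```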
In [1] we proved a stability barrier for this problem, and in [2,3] several approaches for attaining this barrier were proposed. The Gibbs phenomenon in related eigenfunction expansions was also studied in [4,5].
Relevant papers
- B. Adcock, A. C. Hansen and A. Shadrin, A stability barrier for reconstructions from Fourier samples.
- B. Adcock and A. C. Hansen, Stable reconstructions in Hilbert spaces and the resolution of the Gibbs phenomenon.
- B. Adcock and A. C. Hansen, Generalized sampling and the stable and accurate reconstruction of piecewise analytic functions from their Fourier coefficients.
- B. Adcock, Gibbs phenomenon and its removal for a class of orthogonal expansions.
- B. Adcock, Convergence acceleration of modified Fourier series in one or more dimensions.