Research

Broadly speaking, my work concerns the reconstruction of objects (images, signals, functions, etc.) from data. Applications range from image and signal processing to high-dimensional approximation and the numerical solution of PDEs. I am particularly interested in the development, application and analysis of compressed sensing techniques in these areas. I also have ongoing interests in sampling theory, nonuniform sampling, stability barriers, high-order methods in approximation, non-classical Fourier methods, and the resolution of the Gibbs phenomenon.

Listed below are some short summaries of my areas of research. For links to the papers cited, please see my Publications page.


A new framework for compressed sensing

Standard compressed sensing (CS) is based on sparsity, incoherence and uniform random subsampling. However, in many of the problems to which CS is applied, e.g. imaging, these principles are poorly suited.

In a recent series of papers we introduced a new mathematical framework for CS based on broader and more realistic principles. This theory not only explains why CS works in such applications, but also opens the door to substantially improved CS approaches in a range of other problems [3], including compressive imaging and fluorescence microscopy. Current work in this area includes the development and generalization of the theory of [1], the optimization of the sampling strategies proposed in [1,2,3], and the extension of this framework to other types of practical sensing architectures.
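
By way of illustration, here is a small numerical sketch, my own toy example rather than the algorithms of [1-3]: it recovers a sparse vector from subsampled Fourier measurements by l1 minimization, using a variable-density sampling pattern of the kind advocated in this line of work (here applied to a simple sparse-vector problem rather than an image). The parameters and the solver, a plain ISTA iteration, are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, s = 256, 80, 10               # signal length, measurements, sparsity

    x0 = np.zeros(n)                    # s-sparse ground truth
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

    # Variable-density pattern: frequency k drawn with probability ~ 1/(1+|k|)
    freq = np.abs(np.fft.fftfreq(n) * n)
    p = 1.0 / (1.0 + freq); p /= p.sum()
    rows = rng.choice(n, m, replace=False, p=p)

    F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # normalized DFT matrix
    A = F[rows, :]
    y = A @ x0

    # ISTA iteration for  min_x 0.5*||A x - y||^2 + lam*||x||_1
    lam, step = 1e-3, 1.0               # step ok: rows of a unitary F
    x = np.zeros(n, dtype=complex)
    for _ in range(3000):
        x = x - step * A.conj().T @ (A @ x - y)
        mag = np.abs(x)                 # complex soft thresholding
        x = x * (np.maximum(mag - lam * step, 0) / np.maximum(mag, 1e-30))

    print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))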

Relevant papers

  1. B. Adcock, A. C. Hansen, C. Poon and B. Roman, Breaking the coherence barrier: a new theory for compressed sensing.
  2. B. Adcock, A. C. Hansen, and B. Roman, The quest for optimal sampling: computationally efficient, structure-exploiting measurements for compressed sensing.
  3. B. Roman, B. Adcock and A. C. Hansen, On asymptotic structure in compressed sensing.


Infinite-dimensional compressed sensing

Standard compressed sensing (CS) concerns vectors and matrices. However, many real-life signals and images are analog/continuous-time, and therefore better modelled as functions in function spaces acted on by linear operators. Applying discrete CS techniques to such continuous problems can lead to poor reconstructions and issues such as the inverse crime. This raises the question of whether or not one can extend CS techniques and theory to infinite dimensions.

In [1] we introduced a framework for CS in Hilbert spaces, based on the ideas of generalized sampling (see below). Whilst finite-dimensional CS is a special case of this more general theory, applying this framework directly to the underlying continuous model allows one to avoid the aforementioned issues.
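
The discrete/continuous mismatch is easy to see numerically. In this small sketch (my own illustration, not taken from the papers), the k-th DFT coefficient of a sampled function is compared with its true k-th Fourier coefficient; the two differ at the percent level, a discrepancy that a reconstruction tested only against DFT-simulated data would never see.

    import numpy as np

    n, k = 256, 5
    x = np.arange(n) / n
    f = x                                     # f(x) = x on [0,1)

    # True k-th Fourier coefficient of f(x) = x:  c_k = i/(2*pi*k), k != 0
    c_true = 1j / (2 * np.pi * k)

    # Discrete surrogate: the k-th DFT coefficient of the sampled signal
    c_dft = np.fft.fft(f)[k] / n

    # The relative mismatch is roughly pi*k/n here, i.e. percent-level, not
    # machine precision: the discrete model is not the continuous one.
    print(abs(c_dft - c_true) / abs(c_true))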

Relevant papers

  1. B. Adcock and A. C. Hansen, Generalized sampling and infinite-dimensional compressed sensing.
  2. B. Adcock, A. C. Hansen, B. Roman and G. Teschke, Generalized sampling: stable reconstructions, inverse problems and compressed sensing over the continuum.


High-dimensional approximation via compressed sensing

Many applications call for the accurate approximation of smooth, high-dimensional functions. Due to phenomena such as the curse of dimensionality, this is often a challenging task. Compressed sensing offers a way to ameliorate (or even circumvent) this issue by computing sparse approximations in terms of tensor products of orthogonal polynomials.

My work in this direction is twofold. First, I have sought to address the inherent infinite-dimensionality of this problem by proposing an infinite-dimensional CS framework [1]. Second, I have derived a series of recovery guarantees for weighted l1 minimization which improve on existing results. These guarantees lead to sharp estimates in a number of important cases [2], and demonstrate how the curse of dimensionality can be circumvented. Ongoing work includes optimizing the sampling points, extending this framework to more complicated approximation problems, and pursuing applications such as the solution of high-dimensional parametric PDEs.
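
As a concrete, heavily simplified sketch of the general idea (not the method of [1,2]): approximate a smooth bivariate function from pointwise samples in a tensor Legendre basis by weighted l1 minimization, with weights given by the L-infinity norms of the basis functions, one standard choice. The solver below is a basic weighted ISTA iteration, used purely for illustration.

    import numpy as np
    from numpy.polynomial import legendre as leg

    rng = np.random.default_rng(1)
    d, m = 15, 150                        # max degree per variable; samples
    f = lambda x, y: np.exp(-x * y) / (2 + x + y)

    def phi(n, x):
        # Legendre polynomial of degree n, orthonormal w.r.t. dx/2 on [-1,1]
        c = np.zeros(n + 1); c[n] = 1.0
        return leg.legval(x, c) * np.sqrt(2 * n + 1)

    idx = [(i, j) for i in range(d + 1) for j in range(d + 1)]   # 256 terms
    w = np.array([np.sqrt((2 * i + 1) * (2 * j + 1)) for i, j in idx])

    def design(pts):
        return np.column_stack([phi(i, pts[:, 0]) * phi(j, pts[:, 1])
                                for i, j in idx])

    pts = rng.uniform(-1, 1, (m, 2))
    A, y = design(pts), f(pts[:, 0], pts[:, 1])

    # Weighted ISTA for  min_c 0.5*||A c - y||^2 + lam * sum_j w_j |c_j|
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c, lam = np.zeros(len(idx)), 1e-5
    for _ in range(20000):
        c = c - step * A.T @ (A @ c - y)
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step * w, 0)

    test = rng.uniform(-1, 1, (1000, 2))
    err = design(test) @ c - f(test[:, 0], test[:, 1])
    print("max test error:", np.abs(err).max())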

Relevant papers

  1. B. Adcock, Infinite-dimensional compressed sensing and function interpolation.
  2. B. Adcock, Infinite-dimensional l1 minimization and function approximation from pointwise data.


Sparse regularization in medical imaging

Sparse regularization techniques have the potential to significantly enhance reconstruction quality and/or reduce scan time in medical imaging. Our work in this area seeks to further enhance these techniques in applications such as parallel MRI through the development of fast algorithms that exploit additional structure beyond sparsity.

In [1] we introduced a new regularization term which promotes joint sparsity between the measured coil images, together with a fast algorithm for its implementation. This approach improves reconstruction accuracy over current state-of-the-art techniques. Ongoing work involves further developing this model and devising new CS theory to optimize its performance in applications.
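
The core of the joint-sparsity idea can be sketched in a few lines (a generic group-thresholding operator, not the algorithm of [1]): coefficients at the same position across all coils are shrunk together by their joint l2 norm, so the coil images are encouraged to share a common support.

    import numpy as np

    def group_soft_threshold(X, tau):
        # X has shape (coils, coefficients); each column is one coefficient
        # position across all coils, shrunk jointly by its l2 norm.
        norms = np.linalg.norm(X, axis=0, keepdims=True)
        scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-30), 0.0)
        return X * scale

    # Toy usage: 8 coils, 100 coefficients, a common 5-element support
    rng = np.random.default_rng(0)
    X = np.zeros((8, 100))
    support = rng.choice(100, 5, replace=False)
    X[:, support] = rng.standard_normal((8, 5))
    noisy = X + 0.05 * rng.standard_normal(X.shape)

    denoised = group_soft_threshold(noisy, tau=0.5)
    print("recovered support:", np.flatnonzero(np.linalg.norm(denoised, axis=0)))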

Relevant papers

  1. I. Y. Chun, B. Adcock and T. Talavage, Efficient compressed sensing SENSE pMRI reconstruction with joint sparsity promotion.
  2. I. Y. Chun, B. Adcock and T. Talavage, Non-convex compressed sensing CT reconstruction based on tensor discrete Fourier slice theorem.


Generalized sampling

The classical result in sampling theory, the Nyquist-Shannon Sampling Theorem, states that a bandlimited signal can be recovered from countably-many equispaced samples taken at or above a certain critical rate (the Nyquist rate). However, in practice one never has access to infinitely-many samples. This raises a different question: how well can one stably recover a given signal from finitely-many such samples?

In [1] we introduced a new sampling framework to address this issue, known as generalized sampling (GS). An important concept in GS, introduced in [2], is the stable sampling rate (SSR), which specifies the relationship between the number of samples and the degrees of freedom in the reconstruction. Linearity of the SSR for wavelet spaces was shown in [4]. Recently, we also extended the original GS framework to the case of nonuniform samples [5,6], ill-posed problems [3] and sampling with derivatives [7]. Open problems include further generalizations to other function spaces and to other types of sampling and reconstruction systems.
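
The basic mechanism can be illustrated in a few lines (a simplified sketch of the idea, not the full framework of [1]): given finitely many Fourier coefficients of a function on [-1,1], one computes its expansion in a different basis, here Legendre polynomials, by solving an overdetermined least-squares problem. Oversampling in accordance with the stable sampling rate is what keeps the computation stable; the parameters below are illustrative only.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n, m = 15, 200              # reconstruction dof; Fourier modes -m..m
    f = np.exp                  # smooth, nonperiodic test function on [-1,1]

    def phi(j, x):
        # Legendre polynomial of degree j, orthonormal on [-1,1]
        c = np.zeros(j + 1); c[j] = 1.0
        return leg.legval(x, c) * np.sqrt(j + 0.5)

    # High-order quadrature to evaluate inner products and Fourier samples
    xq, wq = leg.leggauss(1200)
    ks = np.arange(-m, m + 1)
    E = np.exp(-1j * np.pi * np.outer(ks, xq)) / np.sqrt(2)  # conj. Fourier basis

    fhat = E @ (wq * f(xq))                                  # Fourier data of f
    U = E @ (wq[:, None] * np.column_stack([phi(j, xq) for j in range(n)]))
    c, *_ = np.linalg.lstsq(U, fhat, rcond=None)             # GS: least squares

    xt = np.linspace(-1, 1, 500)
    rec = np.column_stack([phi(j, xt) for j in range(n)]) @ c
    print("max error:", np.abs(rec - f(xt)).max())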

Relevant papers

  1. B. Adcock and A. C. Hansen, A generalized sampling theorem for stable reconstructions in arbitrary bases.
  2. B. Adcock, A. C. Hansen and C. Poon, Beyond consistent reconstructions: optimality and sharp bounds for generalized sampling, and application to the uniform resampling problem.
  3. B. Adcock, A. C. Hansen, E. Herrholz and G. Teschke, Generalized sampling: extensions to frames and inverse and ill-posed problems.
  4. B. Adcock, A. C. Hansen and C. Poon, On optimal wavelet reconstructions from Fourier samples: linearity and universality of the stable sampling rate.
  5. B. Adcock, M. Gataric and A. C. Hansen, On stable reconstructions from nonuniform Fourier measurements.
  6. B. Adcock, M. Gataric and A. C. Hansen, Weighted frames of exponentials and stable recovery of multidimensional functions from nonuniform Fourier samples.
  7. B. Adcock, M. Gataric and A. C. Hansen, Density theorems for nonuniform sampling of bandlimited functions using derivatives or bunched measurements.


High-order function approximation using variable transforms

Variable transforms are useful in a variety of numerical computations to increase accuracy, efficiency and/or stability. Our ongoing work in this area aims to develop new variable transform techniques for a range of practical approximation problems.

First, we are developing new parametrized mapping techniques for the efficient approximation of functions with endpoint singularities [2,3]. Second, we are using certain parametrized conformal mappings to better approximate functions from scattered data [1]. Key aspects of this work are the analysis of stability and convergence of the various approximations uniformly in the mapping parameters, and the understanding of how such techniques yield high accuracy without necessarily converging in a classical sense.
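
For a flavour of the idea, here is a toy example with a map of my own choosing, not one of the transforms analyzed in [1-3]: f(x) = sqrt(1-x) has an endpoint singularity at x = 1, so direct polynomial interpolation converges slowly; interpolating the composition f(m(t)) instead, where m is a parametrized map that clusters points exponentially at the endpoints, is markedly more accurate.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = lambda x: np.sqrt(1 - x)                 # endpoint singularity at x = 1
    m_ = lambda t, c=4.0: np.tanh(c * t / np.sqrt(1 - t**2 + 1e-300))

    n = 60
    t = np.cos(np.pi * np.arange(n + 1) / n)     # Chebyshev points on [-1,1]

    direct = C.chebfit(t, f(t), n)               # interpolate f directly
    mapped = C.chebfit(t, f(m_(t)), n)           # interpolate f(m(t))

    tt = np.linspace(-0.999, 0.999, 2001)
    e_direct = np.abs(C.chebval(tt, direct) - f(tt)).max()
    # The mapped interpolant approximates f at the transformed points x = m(t)
    e_mapped = np.abs(C.chebval(tt, mapped) - f(m_(tt))).max()
    print("direct:", e_direct, "mapped:", e_mapped)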

Relevant papers

  1. B. Adcock and R. Platte, A mapped polynomial method for high-accuracy approximations on arbitrary grids.
  2. B. Adcock and M. Richardson, New exponential variable transform methods for functions with endpoint singularities.
  3. B. Adcock, M. Richardson and J. Martin-Vaquero, Resolution-optimal exponential and double-exponential transform methods for functions with endpoint singularities.


High-order function approximation with non-standard Fourier series

In a number of different applications, such as spectral methods for PDEs and scattered data approximation, one seeks to recover a smooth function to high accuracy from its values on a given set of points. Our work in this area involves the use of certain non-standard Fourier series for such reconstructions.

One attractive means to do this is to use a Fourier series on a larger domain, known as the Fourier extension technique. In [1,2,3] we analyzed the convergence and stability of Fourier extensions and, in particular, established their benefits for approximating oscillatory functions. Another approach is to use expansions in various eigenfunctions [4,5,6], which are particularly well suited to higher dimensions.
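
A bare-bones sketch of the Fourier extension idea (my simplification; the parameter choices are arbitrary): a smooth but nonperiodic f on [-1,1] is fitted by a Fourier series on the larger interval [-T,T], with coefficients computed by a truncated-SVD least-squares fit on an oversampled grid. The fit is ill-conditioned, but the regularized solve keeps it numerically stable.

    import numpy as np

    f = lambda x: np.exp(x) * np.cos(3 * x)     # smooth, nonperiodic on [-1,1]
    T, n, m = 2.0, 40, 160                      # extension; modes -n..n; samples

    x = np.linspace(-1, 1, m)                   # oversampled equispaced grid
    k = np.arange(-n, n + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / T)
    c, *_ = np.linalg.lstsq(A, f(x), rcond=1e-12)   # truncated-SVD least squares

    xt = np.linspace(-1, 1, 1000)
    err = np.exp(1j * np.pi * np.outer(xt, k) / T) @ c - f(xt)
    print("max error:", np.abs(err).max())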

Relevant papers

  1. B. Adcock, D. Huybrechs and J. Martin-Vaquero, On the numerical stability of Fourier extensions.
  2. B. Adcock and J. Ruan, Parameter selection and numerical approximation properties of Fourier extensions from fixed data.
  3. B. Adcock and D. Huybrechs, On the resolution power of Fourier extensions for oscillatory functions.
  4. B. Adcock, A. Iserles and S. P. Nørsett, From high oscillation to rapid approximation II: Expansions in Birkhoff series.
  5. B. Adcock, On the convergence of expansions in polyharmonic eigenfunctions.
  6. B. Adcock, Multivariate modified Fourier series and application to boundary value problems.


Resolution of the Gibbs phenomenon

The Gibbs phenomenon occurs when a piecewise smooth function is expanded as a Fourier series. Characteristic features are oscillations near the discontinuities and lack of uniform convergence of the expansion. Resolution of the Gibbs phenomenon refers to postprocessing the Fourier coefficients to obtain higher orders of convergence, and has applications to both image processing and the numerical solution of PDEs.

In [1] we proved a stability barrier for this problem, and in [2,3] several approaches for attaining this barrier were proposed. The Gibbs phenomenon in related eigenfunction expansions was also studied in [4,5].
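
To illustrate the phenomenon itself, here is a toy example (the papers above develop different, higher-order reconstruction techniques than the simple spectral filter shown here): the truncated Fourier series of a square wave overshoots by about 9% near the jump, and damping the coefficients with an exponential filter suppresses the ringing away from the discontinuity.

    import numpy as np

    n = 64
    k = np.arange(-n, n + 1)
    # Fourier coefficients of the square wave sign(x) on [-pi, pi]:
    # c_k = 2/(i*pi*k) for odd k, and 0 otherwise
    safe_k = np.where(k == 0, 1, k)
    c = np.where(k % 2 != 0, 2.0 / (1j * np.pi * safe_k), 0)

    x = np.linspace(-np.pi, np.pi, 2001)
    E = np.exp(1j * np.outer(x, k))
    raw = (E @ c).real                          # truncated series: Gibbs ringing

    sigma = np.exp(-16 * (np.abs(k) / n) ** 4)  # 4th-order exponential filter
    filtered = (E @ (sigma * c)).real

    print("overshoot, raw:     ", raw.max() - 1)   # ~0.09 (the Gibbs constant)
    print("overshoot, filtered:", filtered.max() - 1)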

Relevant papers

  1. B. Adcock, A. C. Hansen and A. Shadrin, A stability barrier for reconstructions from Fourier samples.
  2. B. Adcock and A. C. Hansen, Stable reconstructions in Hilbert spaces and the resolution of the Gibbs phenomenon.
  3. B. Adcock and A. C. Hansen, Generalized sampling and the stable and accurate reconstruction of piecewise analytic functions from their Fourier coefficients.
  4. B. Adcock, Gibbs phenomenon and its removal for a class of orthogonal expansions.
  5. B. Adcock, Convergence acceleration of modified Fourier series in one or more dimensions.