Multivariate polynomials are excellent means of approximating high-dimensional functions on tensor-product domains. But what about approximation on irregular domains, which arise frequently in applications? In our new paper, Daan Huybrechs and I tackle this question using tools from frame theory and approximation theory:
Approximating smooth, multivariate functions on irregular domains
We establish a series of results on approximation rates and sample complexity, deriving bounds that scale well with dimension d in a variety of cases.
Matt King-Roskamp – an undergraduate student in my group co-supervised with Simone Brugiapaglia – was awarded runner-up in the poster competition at the recent SIAM PNW Conference for his poster Optimal Sampling Strategies for Compressive Imaging. He beat out a competitive field of graduate and undergraduate students from across the Pacific Northwest region.
Alex Bastounis, Anders C. Hansen and I have an article in this month’s edition of SIAM News:
From Global to Local: Getting More from Compressed Sensing
In it we explain how recent progress in the theory of compressed sensing allows one to significantly enhance the performance of sparse recovery techniques in imaging applications.
My former undergraduate student Anyi (Casie) Bao was a winner of the SFU Department of Mathematics Undergraduate Research Prize. Her work involved the development and analysis of compressed sensing-based strategies for correcting for corrupted measurements in Uncertainty Quantification. A draft version of the resulting paper can be found here:
Compressed sensing with sparse corruptions: Fault-tolerant sparse collocation approximations
Matt King-Roskamp – an undergraduate student in my group co-supervised with Simone Brugiapaglia – was awarded runner-up in two poster competitions this summer.
His work, entitled Optimal Sampling Strategies for Compressive Imaging, presents new, theoretically optimal sampling techniques for imaging using compressed sensing.
Simone Brugiapaglia, Rick Archibald and I have been investigating the robustness of constrained l1-minimization regularization to unknown measurement errors. The vast majority of existing theory for such problems requires an a priori bound on the noise level, yet in many, if not most, real-world applications no such bound is available. Our work provides the first recovery guarantees for a large class of practical measurement matrices, thereby extending existing results for subgaussian random matrices. Beyond this, it sheds new light on sparse regularization in practice, addressing questions about the relationship between model fidelity, noise parameter estimation and reconstruction error.
Simone will be presenting this work at SPARS2017 and SampTA 2017. In the meantime our conference paper is available here:
Recovery guarantees for compressed sensing with unknown errors
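To give a flavour of the setting, here is a minimal sketch of sparse recovery via l1 regularization, using iterative soft-thresholding on an unconstrained (LASSO-type) counterpart of constrained l1 minimization. All dimensions, the solver and the choice of the parameter lam are illustrative assumptions, not taken from the paper; lam plays the role of the noise-level information that is typically unknown in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: recover a sparse vector from noisy Gaussian measurements
n, m, s = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam, iters=2000):
    """Iterative soft-thresholding for the LASSO problem
    min_x 0.5*||A x - y||_2^2 + lam*||x||_1, an unconstrained
    counterpart of constrained l1 minimization."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - y)) / L    # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# lam stands in for the (often unknown) noise level; here we simply
# pick a value rather than assume an a priori noise bound
x_rec = ista(A, y, lam=0.01)
print(np.linalg.norm(x_rec - x_true))      # reconstruction error
```

The question our paper addresses is what happens to such reconstructions when the parameter is mismatched to the true, unknown error.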
Simone Brugiapaglia, Clayton Webster and I have written a chapter that surveys recent trends in high-dimensional approximation using the theory and techniques of compressed sensing:
Polynomial approximation of high-dimensional functions via compressed sensing
In it we demonstrate how the structured sparsity of polynomial coefficients of high-dimensional functions can be exploited via weighted l1 minimization techniques. This yields approximation algorithms whose sample complexities are essentially independent of the dimension d. Hence the curse of dimensionality is mitigated to a significant extent.
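The flavour of the approach can be sketched in a few lines: expand a smooth function in a tensor polynomial basis, sample at random points, and recover the coefficients by weighted l1 minimization with weights that grow with the polynomial degree. Everything below (the dimension, degrees, sample size, weights and FISTA solver) is an illustrative assumption, not the precise setup of the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: a smooth 2-d function, a tensor Chebyshev basis
# and fewer random samples than basis functions
d, deg = 2, 12
idx = [(i, j) for i in range(deg) for j in range(deg)]   # 144 basis indices
m = 120                                                  # number of samples
pts = rng.uniform(-1, 1, size=(m, d))

def cheb(k, t):
    return np.cos(k * np.arccos(t))      # Chebyshev polynomial T_k(t)

# Measurement matrix: tensor Chebyshev basis evaluated at random points
A = np.array([[cheb(i, p[0]) * cheb(j, p[1]) for (i, j) in idx] for p in pts])
f = lambda u, v: np.exp(-u * v)          # smooth target function
b = np.array([f(*p) for p in pts])

# Weights growing with total degree penalize high-order coefficients
w = np.array([1.0 + i + j for (i, j) in idx])

def weighted_fista(A, b, w, lam=1e-4, iters=3000):
    """Accelerated soft-thresholding (FISTA) for weighted l1-regularized
    least squares: min_x 0.5*||A x - b||^2 + lam*||w*x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(iters):
        g = z - (A.T @ (A @ z - b)) / L
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

c = weighted_fista(A, b, w)
print(np.linalg.norm(A @ c - b))         # residual at the sample points
```

The key point is that the sample size needed for such recovery depends on the (weighted) sparsity of the coefficients rather than directly on the dimension d.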
I am part of the organizing committee for the 1st Biennial Meeting of the SIAM Pacific Northwest Section to be held from Oct 27 to 29, 2017 at Oregon State University. We look forward to seeing you in Corvallis in the Fall!
Frames in Hilbert spaces are ubiquitous in image and signal processing, coding theory and sampling theory. However, they are far less widely known in numerical analysis.
In a new paper with Daan Huybrechs, we take a look at frames from a numerical analyst’s perspective:
Frames and numerical approximation
First, we point out that frames can be useful tools in numerical analysis where orthonormal bases may be difficult or impossible to construct. Second, we investigate issues concerning stability and accuracy in frame approximations. Our main result is that frame approximations are stable and accurate, provided the function being approximated has a representation in the frame with small-norm coefficients.
One application of this work is meshfree approximation of functions on complex geometries using so-called Fourier extensions. Daan maintains a GitHub page with fast algorithms for computing such approximations.
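For a rough illustration of the idea (not Daan's fast algorithms), here is a naive Fourier extension computed by regularized least squares: a function on a subinterval is approximated by a Fourier basis that is periodic on a larger interval. The restricted basis is a frame, the least-squares system is severely ill-conditioned, and a truncated SVD nonetheless yields a stable, accurate fit. All sizes and thresholds are illustrative choices.

```python
import numpy as np

# Approximate f on [-0.5, 0.5] using a Fourier basis periodic on [-1, 1].
# The restricted exponentials form a frame, not a basis, so the
# least-squares matrix below is extremely ill-conditioned.
n = 40                                    # frequencies -n..n (81 functions)
m = 200                                   # oversampled grid on the subinterval
x = np.linspace(-0.5, 0.5, m)
k = np.arange(-n, n + 1)
A = np.exp(1j * np.pi * np.outer(x, k))   # frame elements at the sample points

f = lambda t: np.exp(t) * np.cos(3 * t)   # smooth, nonperiodic target
b = f(x)

# Regularized least squares via truncated SVD: discard tiny singular values
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-12 * s[0]
c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])

err = np.max(np.abs(A @ c - b))
print(f"condition number ~ {s[0] / s[-1]:.1e}, max error {err:.1e}")
```

Despite a condition number far beyond machine precision, the truncated-SVD approximation remains accurate, which is exactly the phenomenon the paper analyzes.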
Alexei Shadrin, Rodrigo Platte and I have just finished a paper on stability and instability in approximating analytic functions from nonequispaced points:
Optimal sampling rates for approximating analytic functions from pointwise samples
In it, we generalize Rodrigo’s previous work (with Arno Kuijlaars and Nick Trefethen) from equispaced nodes to arbitrary nonequispaced nodes. In particular, our result quantifies the tradeoff between convergence rates and ill-conditioning for nodes distributed according to modified Jacobi weight functions. We also determine a necessary and sufficient sampling rate for stable approximation with polynomial least-squares fitting.
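In the equispaced case, the tradeoff is easy to see numerically. The sketch below fits a degree-n polynomial to Runge's function by least squares: interpolation (m = n + 1 points) exhibits the Runge phenomenon, while quadratic oversampling (m on the order of n^2, the rate identified in the paper for equispaced points) is stable and accurate. The specific sizes are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def ls_fit_error(n, m):
    """Max error on a fine grid of a degree-n least-squares polynomial
    fit to Runge's function from m equispaced samples."""
    x = np.linspace(-1, 1, m)                 # equispaced sample points
    A = legendre.legvander(x, n)              # Legendre-Vandermonde matrix
    f = lambda t: 1.0 / (1.0 + 25.0 * t**2)   # Runge's function
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    xt = np.linspace(-1, 1, 1000)             # fine grid to measure the error
    return np.max(np.abs(legendre.legval(xt, c) - f(xt)))

n = 30
err_interp = ls_fit_error(n, n + 1)   # interpolation: Runge phenomenon
err_quad = ls_fit_error(n, n * n)     # m ~ n^2: stable, accurate fit
print(err_interp, err_quad)
```

Running this shows the interpolant's error blowing up near the endpoints while the oversampled fit converges, mirroring the necessary-and-sufficient sampling rate in the paper.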