Announcements

(01/19/21) Householder Dice: a new matrix-free algorithm for simulating dynamics on random matrices

January 19, 2021

In many problems in statistical learning, random matrix theory, and statistical physics, one needs to simulate dynamics on random matrix ensembles. A classical example is to use iterative methods to compute the extremal eigenvalues/eigenvectors of a (spiked) random matrix. Other examples include approximate message passing on dense random graphs, and gradient descent algorithms for solving learning and estimation problems with random design. In our recent paper, we show that all of these...

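As a point of reference for the classical example mentioned above, here is a minimal dense-matrix simulation: power iteration on a spiked Wigner matrix, written in NumPy, with the dimension, spike strength, and iteration count chosen purely for illustration. The full n-by-n matrix is formed explicitly here, which is exactly the memory and time cost that a matrix-free method such as Householder Dice is designed to avoid.

```python
# Minimal (dense) sketch: power iteration on a spiked Wigner matrix.
# The full n-by-n matrix is instantiated here, which is the O(n^2) memory/time
# cost that a matrix-free approach such as Householder Dice avoids.
import numpy as np

rng = np.random.default_rng(0)
n, theta, n_iter = 2000, 2.0, 200            # dimension, spike strength, iterations

# Spiked Wigner ensemble: A = theta * u u^T + W, with W a GOE-like noise matrix.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)
A = theta * np.outer(u, u) + W

# Power iteration: the kind of dynamics on a random matrix one wants to simulate.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(n_iter):
    x = A @ x
    x /= np.linalg.norm(x)

print("leading eigenvalue estimate:", x @ (A @ x))
print("overlap with the planted spike:", abs(x @ u))
```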

(01/06/21) New paper: Phase transitions in transfer learning with high-dimensional perceptrons

January 6, 2021
Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In our new...
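
As a purely illustrative numerical sketch of this setting, the snippet below builds two correlated perceptron teachers (source and target), learns the target task either from scratch or with a ridge penalty that biases the weights toward the source solution, and compares test errors for a similar and a dissimilar pair of tasks. The transfer scheme, sample sizes, and regularization strength are assumptions made for the sketch, not the model analyzed in the paper.

```python
# Hypothetical sketch of transfer between two correlated perceptron tasks.
# Transfer is modeled here as ridge regression biased toward the source weights;
# this is an illustrative scheme, not necessarily the one analyzed in the paper.
import numpy as np

rng = np.random.default_rng(1)
d, n_src, n_tgt, lam = 200, 2000, 100, 1.0

def teacher_pair(rho):
    """Two unit-norm teacher vectors with overlap rho (source vs. target task)."""
    w_s = rng.standard_normal(d); w_s /= np.linalg.norm(w_s)
    z = rng.standard_normal(d); z -= (z @ w_s) * w_s; z /= np.linalg.norm(z)
    return w_s, rho * w_s + np.sqrt(1 - rho ** 2) * z

def ridge(X, y, lam, anchor=None):
    """argmin ||X w - y||^2 + lam * ||w - anchor||^2 (anchor=None -> no transfer)."""
    a = np.zeros(X.shape[1]) if anchor is None else anchor
    return a + np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ (y - X @ a))

for rho in (0.9, 0.0):                        # similar vs. dissimilar tasks
    w_s, w_t = teacher_pair(rho)
    Xs = rng.standard_normal((n_src, d)); ys = np.sign(Xs @ w_s)
    Xt = rng.standard_normal((n_tgt, d)); yt = np.sign(Xt @ w_t)
    w_source = ridge(Xs, ys, lam)                     # learned on the source task
    w_scratch = ridge(Xt, yt, lam)                    # target data only, no transfer
    w_transfer = ridge(Xt, yt, lam, anchor=w_source)  # biased toward the source
    Xte = rng.standard_normal((20000, d)); yte = np.sign(Xte @ w_t)
    err = lambda w: np.mean(np.sign(Xte @ w) != yte)
    print(f"rho={rho}: scratch err={err(w_scratch):.3f}, transfer err={err(w_transfer):.3f}")
```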

(09/26/20) NeurIPS paper: High-dimensional perceptrons: Approaching Bayes error with convex optimization

September 26, 2020
In our paper to appear at this year's NeurIPS, we consider supervised classification of a synthetic dataset whose labels are generated by feeding random i.i.d. inputs through a one-layer neural network. We prove a formula for the generalization error achieved by l2-regularized classifiers that minimize a convex...
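
A minimal Monte Carlo version of this setup, under the assumption that the teacher is a sign perceptron on i.i.d. Gaussian inputs and the learner is l2-regularized logistic regression (a convex loss chosen here only for illustration); the empirical generalization error printed at the end is the finite-size counterpart of the quantity the formula characterizes.

```python
# Sketch of the synthetic data model: labels generated by a one-layer (sign) teacher
# on i.i.d. Gaussian inputs, fit with an l2-regularized convex classifier.
# The teacher nonlinearity and parameter values are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_train, n_test, lam = 300, 600, 50000, 0.1

w_star = rng.standard_normal(d) / np.sqrt(d)          # teacher weights
X = rng.standard_normal((n_train, d))                 # i.i.d. Gaussian inputs
y = np.sign(X @ w_star)                               # labels from the one-layer teacher

# l2-regularized logistic regression (a convex surrogate loss); C = 1/lambda.
clf = LogisticRegression(penalty="l2", C=1.0 / lam, fit_intercept=False, max_iter=5000)
clf.fit(X, y)

X_test = rng.standard_normal((n_test, d))
y_test = np.sign(X_test @ w_star)
print("empirical generalization error:", np.mean(clf.predict(X_test) != y_test))
```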

(09/17/20) New paper: Universality Laws for High-Dimensional Learning with Random Features

September 17, 2020
In our recent paper, we prove a universality theorem for learning with random features. Our result shows that, in terms of training and generalization errors, the random feature model with a nonlinear activation function is asymptotically equivalent to a surrogate Gaussian model with a matching covariance matrix. This settles a conjecture based on which...
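
To illustrate the statement numerically (an experiment, not the proof), the sketch below compares ridge regression on nonlinear random features sigma(F x) with ridge regression on the surrogate Gaussian features mu0 + mu1 * F x + mu_star * z whose first two moments match. The activation, the Gaussian-moment coefficients, and the parameter values are standard illustrative choices rather than anything taken from the paper.

```python
# Numerical illustration of Gaussian equivalence for random features:
# sigma(F x) vs. the surrogate mu0 + mu1 * F x + mu_star * z with matching covariance.
# mu0, mu1, mu_star are the usual Gaussian moments of the activation; all parameter
# choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, p, n, lam = 400, 600, 800, 1e-2
sigma = np.tanh

# Gaussian moments of the activation (Z ~ N(0,1)): mu0 = E[sigma(Z)],
# mu1 = E[Z sigma(Z)], mu_star^2 = E[sigma(Z)^2] - mu0^2 - mu1^2.
z = rng.standard_normal(10**6)
mu0, mu1 = sigma(z).mean(), (z * sigma(z)).mean()
mu_star = np.sqrt(max(sigma(z).var() - mu1**2, 0.0))

F = rng.standard_normal((p, d)) / np.sqrt(d)          # random feature weights
X = rng.standard_normal((n, d))                       # Gaussian data
w_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)         # a simple regression target

Phi_rf = sigma(X @ F.T)                               # nonlinear random features
Phi_gauss = mu0 + mu1 * (X @ F.T) + mu_star * rng.standard_normal((n, p))

def ridge_train_error(Phi, y, lam):
    a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return np.mean((Phi @ a - y) ** 2)

print("training error, random features :", ridge_train_error(Phi_rf, y, lam))
print("training error, Gaussian model  :", ridge_train_error(Phi_gauss, y, lam))
```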

(08/28/20) New paper: A Precise Performance Analysis of Learning with Random Features

August 28, 2020
In our recent paper, we study the problem of learning an unknown function using random feature models. Our main contribution is an exact asymptotic analysis of such learning problems with Gaussian data. Under mild regularity conditions for the feature matrix, we provide an exact characterization of the asymptotic training and generalization errors, valid in both the...
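
For comparison with such asymptotic formulas, the same training and generalization errors can be estimated by direct Monte Carlo. The sketch below does this for random-feature ridge regression with Gaussian data, sweeping the number of features across both the under- and over-parameterized regimes; all parameter values are illustrative.

```python
# Monte Carlo estimate of training/generalization error for random-feature ridge
# regression with Gaussian data, swept across the number of features p
# (under- and over-parameterized regimes). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test, lam, noise = 200, 400, 4000, 1e-3, 0.1
w_star = rng.standard_normal(d) / np.sqrt(d)

def sample(n_samples):
    X = rng.standard_normal((n_samples, d))
    return X, X @ w_star + noise * rng.standard_normal(n_samples)

X_train, y_train = sample(n)
X_test, y_test = sample(n_test)

for p in (100, 200, 400, 800, 1600):
    F = rng.standard_normal((p, d)) / np.sqrt(d)      # random feature weights
    feats = lambda X: np.tanh(X @ F.T)
    Phi = feats(X_train)
    a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y_train)
    train_err = np.mean((Phi @ a - y_train) ** 2)
    test_err = np.mean((feats(X_test) @ a - y_test) ** 2)
    print(f"p={p:5d}  train={train_err:.4f}  test={test_err:.4f}")
```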

(06/16/20) New paper: The limiting Poisson law of massive MIMO detection

June 16, 2020
Estimating a binary vector from noisy linear measurements is a prototypical problem for MIMO systems. A popular algorithm, called the box-relaxation decoder, estimates the target signal by solving a least squares problem with convex constraints. In our recent paper, we show that the performance of the algorithm, measured by the number of incorrectly decoded bits...
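
A minimal simulation of the box-relaxation decoder: the +1/-1 signal is estimated by least squares constrained to the box [-1, 1]^n (here via scipy.optimize.lsq_linear), rounded to signs, and the number of incorrectly decoded bits is counted; this bit-error count is the quantity whose limiting law the paper studies. The channel model, dimensions, and noise level below are illustrative choices.

```python
# Box-relaxation decoder for a binary signal observed through noisy linear
# measurements: solve min ||y - A x||^2 subject to x in [-1, 1]^n, then round
# to signs and count bit errors. Dimensions and noise level are illustrative.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, m, sigma = 128, 256, 0.3                   # signal dim, measurements, noise level

x_true = rng.choice([-1.0, 1.0], size=n)      # binary (BPSK) signal
A = rng.standard_normal((m, n)) / np.sqrt(n)  # random channel matrix
y = A @ x_true + sigma * rng.standard_normal(m)

# Convex relaxation: least squares with box constraints [-1, 1]^n.
res = lsq_linear(A, y, bounds=(-1.0, 1.0))
x_hat = np.sign(res.x)
x_hat[x_hat == 0] = 1.0                       # break ties (measure-zero event)

print("number of incorrectly decoded bits:", int(np.sum(x_hat != x_true)))
```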

(06/02/20) ICML paper: The role of regularization in classification of high-dimensional Gaussian mixture

June 2, 2020
In our new paper to appear at ICML 2020, we consider a high-dimensional mixture of two Gaussians in the noisy regime where even an oracle knowing the centers of the clusters misclassifies a small but finite fraction of the points. We provide a rigorous analysis of the generalization error of regularized convex classifiers,...
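
A small simulation of this setting: data drawn from a symmetric two-cluster Gaussian mixture at moderate signal-to-noise ratio, classified by an l2-regularized logistic classifier and compared with the oracle that knows the cluster center. The signal-to-noise ratio and regularization strength below are illustrative.

```python
# Two-component high-dimensional Gaussian mixture in the noisy regime, classified
# with an l2-regularized convex (logistic) classifier and compared to the oracle
# that knows the cluster mean. Parameter values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n, n_test, snr, lam = 300, 600, 50000, 1.5, 1.0

mu = snr * rng.standard_normal(d) / np.sqrt(d)        # cluster mean, ||mu|| ~ snr

def sample(n_samples):
    y = rng.choice([-1.0, 1.0], size=n_samples)       # balanced cluster labels
    X = y[:, None] * mu[None, :] + rng.standard_normal((n_samples, d))
    return X, y

X, y = sample(n)
X_test, y_test = sample(n_test)

clf = LogisticRegression(C=1.0 / lam, fit_intercept=False, max_iter=5000).fit(X, y)
err_reg = np.mean(clf.predict(X_test) != y_test)
err_oracle = np.mean(np.sign(X_test @ mu) != y_test)  # oracle knowing the center

print(f"regularized classifier error: {err_reg:.4f}")
print(f"oracle error (small but finite): {err_oracle:.4f}")
```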

(09/04/19) NeurIPS paper: A solvable high-dimensional model of GAN

September 4, 2019
In our paper to appear at this year's NeurIPS, we present a simple shallow GAN model fed by high-dimensional input data. The dynamics of the training process of the proposed model can be exactly analyzed in the high-dimensional limit. In particular, by using the tool of scaling limits of stochastic processes, we show that the macroscopic quantities measuring the quality of the training process converge to a...
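
In the spirit of the model described above, here is a toy sketch (not the exact model from the paper): a shallow, linear generator and discriminator trained by online alternating SGD on high-dimensional spiked data, with one macroscopic order parameter, the overlap between the generator direction and the true feature direction, printed along the way. The losses, noise levels, and step sizes are illustrative assumptions.

```python
# Toy sketch: a shallow (linear) GAN trained with online alternating SGD on
# high-dimensional spiked data, tracking a macroscopic order parameter (the
# overlap between the generator direction and the true feature direction).
# Model, losses, and step sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_steps = 500, 20000
lr = 0.2 / d                                   # online SGD step size ~ 1/d
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

u = rng.standard_normal(d); u /= np.linalg.norm(u)      # true feature direction
v = rng.standard_normal(d) / np.sqrt(d)                 # generator direction
w = rng.standard_normal(d) / np.sqrt(d)                 # linear discriminator

for step in range(1, n_steps + 1):
    # One fresh real sample and one fresh fake sample per online step.
    x_real = rng.standard_normal() * u + 0.3 * rng.standard_normal(d)
    z_lat = rng.standard_normal()
    x_fake = z_lat * v + 0.3 * rng.standard_normal(d)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    grad_w = (1.0 - sigmoid(w @ x_real)) * x_real - sigmoid(w @ x_fake) * x_fake
    w += lr * grad_w

    # Generator: gradient descent on the non-saturating loss -log D(fake).
    grad_v = -(1.0 - sigmoid(w @ x_fake)) * z_lat * w
    v -= lr * grad_v

    if step % 5000 == 0:
        overlap = abs(v @ u) / np.linalg.norm(v)        # macroscopic order parameter
        print(f"step {step:6d}: overlap(v, u) = {overlap:.3f}")
```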

(05/01/19) ICML paper: Approximate survey propagation for high-dimensional estimation

May 1, 2019
In Generalized Linear Estimation (GLE) problems, one seeks to estimate a signal that is observed through a linear transform followed by a component-wise, possibly nonlinear and noisy, channel. In the Bayesian optimal setting, Generalized Approximate Message Passing (GAMP) is known to achieve optimal performance for GLE. However, its performance can significantly degrade whenever there is a mismatch between the assumed and the true generative model, a situation frequently encountered in practice. In our...
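
To make the GLE setup concrete, the sketch below generates data from the simplest special case of the model (a linear transform followed by an additive Gaussian channel) and runs plain AMP with a soft-threshold denoiser for a sparse signal. This is not the GAMP or approximate survey propagation recursion studied in the paper; it only illustrates the kind of iterative estimator involved, and all parameter choices are illustrative.

```python
# GLE data model (linear Gaussian special case) and a minimal AMP recursion with a
# soft-threshold denoiser for a sparse signal. This is NOT the GAMP / approximate
# survey propagation recursion of the paper; it only makes the setup concrete.
import numpy as np

rng = np.random.default_rng(0)
n, m, sparsity, noise = 1000, 500, 0.1, 0.01

# Signal observed through a linear transform followed by a component-wise channel.
x_true = rng.standard_normal(n) * (rng.random(n) < sparsity)   # sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                   # ~unit-norm columns
y = A @ x_true + noise * rng.standard_normal(m)                # Gaussian channel here

soft = lambda t, tau: np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

x, z = np.zeros(n), y.copy()
delta = m / n
for _ in range(30):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)                 # simple threshold rule
    r = x + A.T @ z                                            # effective observation
    x_new = soft(r, tau)
    onsager = np.mean(np.abs(x_new) > 0) / delta               # <eta'> / delta
    z = y - A @ x_new + onsager * z                            # Onsager-corrected residual
    x = x_new

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```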