In many problems in statistical learning, random matrix theory, and statistical physics, one needs to simulate dynamics on random matrix ensembles. A classical example is to use iterative methods to compute the extremal eigenvalues/eigenvectors of a (spiked) random matrix. Other examples include approximate message passing on dense random graphs, and gradient descent algorithms for solving learning and estimation problems with random design. In our recent paper, we show that all of these...
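As a hedged illustration of the first example above (not code from the paper), the sketch below runs power iteration on a spiked GOE matrix to find its top eigenvalue/eigenvector. All choices here — the dimension, the rank-one spike model, and the spike strength — are illustrative:

```python
import numpy as np

# Illustrative spiked random matrix: M = W + lam * v v^T, with W a GOE matrix.
rng = np.random.default_rng(0)
n = 500
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                     # planted unit-norm spike direction

G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)             # GOE noise, bulk spectrum roughly [-2, 2]
lam = 3.0                                  # spike strength above the BBP threshold
M = W + lam * np.outer(v, v)

# Power iteration: repeated matrix-vector products, normalized at each step,
# converge to the leading eigenvector once the top eigenvalue separates
# from the bulk.
x = rng.standard_normal(n)
for _ in range(200):
    x = M @ x
    x /= np.linalg.norm(x)

top_eig = x @ M @ x                        # Rayleigh quotient estimate
overlap = abs(x @ v)                       # alignment with the planted spike
```

Above the transition, the estimated top eigenvalue exits the bulk (here landing near lam + 1/lam) and the iterate develops a macroscopic overlap with the planted direction — exactly the kind of iterative dynamics on a random matrix ensemble that the paper analyzes.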
(01/06/21) New paper: Phase transitions in transfer learning with high-dimensional perceptrons
Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In our new...
(09/17/20) New paper: Universality Laws for High-Dimensional Learning with Random Features
In our recent paper, we prove a universality theorem for learning with random features. Our result shows that, in terms of training and generalization errors, the random feature model with a nonlinear activation function is asymptotically equivalent to a surrogate Gaussian model with a matching covariance matrix. This settles a conjecture based on which...
(08/28/20) New paper: A Precise Performance Analysis of Learning with Random Features
In our recent paper, we study the problem of learning an unknown function using random feature models. Our main contribution is an exact asymptotic analysis of such learning problems with Gaussian data. Under mild regularity conditions on the feature matrix, we provide an exact characterization of the asymptotic training and generalization errors, valid in both the...
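For intuition, a minimal sketch of the random feature setting follows — not the paper's analysis, just the model it studies, with illustrative choices: Gaussian data, a hypothetical noisy linear target, ReLU features, and ridge regression on the features.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, n = 50, 200, 400                     # input dim, #features, #samples

# Hypothetical target: a noisy linear function of Gaussian data
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

# Random feature map: phi(x) = relu(W x / sqrt(d)) with a fixed Gaussian W
W = rng.standard_normal((N, d))
Phi = np.maximum(X @ W.T / np.sqrt(d), 0.0)

# Ridge regression on the feature representation
lam = 1e-2
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)
train_err = np.mean((Phi @ a - y) ** 2)

# Generalization error, measured on fresh Gaussian data
X_test = rng.standard_normal((2000, d))
Phi_test = np.maximum(X_test @ W.T / np.sqrt(d), 0.0)
test_err = np.mean((Phi_test @ a - X_test @ beta) ** 2)
```

The asymptotic regime analyzed in the paper takes d, N, and n to infinity at fixed ratios; the training and generalization errors computed above are the quantities the exact characterization describes.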
(06/16/20) New paper: The limiting Poisson law of massive MIMO detection
Estimating a binary vector from noisy linear measurements is a prototypical problem in MIMO systems. A popular algorithm, called the box-relaxation decoder, estimates the target signal by solving a least-squares problem with convex constraints. In our recent paper, we show that the performance of the algorithm, measured by the number of incorrectly decoded bits...
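A hedged sketch of the box-relaxation decoder (with illustrative dimensions and noise level, not the paper's settings): relax the binary constraint to the box [-1, 1]^n, solve the resulting convex least-squares problem — here by projected gradient descent — and decode each bit by its sign.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 128                             # signal dimension, #measurements
x_true = rng.choice([-1.0, 1.0], size=n)   # BPSK symbols to recover
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.1 * rng.standard_normal(m)

# Projected gradient descent on min ||y - A x||^2 over the box [-1, 1]^n
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = np.clip(x - grad / L, -1.0, 1.0)   # gradient step, project onto box

x_hat = np.sign(x)                         # decode each bit by its sign
bit_errors = int(np.sum(x_hat != x_true))
```

Projected gradient descent is just one way to solve the relaxation; `scipy.optimize.lsq_linear(A, y, bounds=(-1, 1))` solves the same box-constrained least-squares problem directly. The bit-error count computed at the end is the performance measure whose limiting Poisson law the paper establishes.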
(09/04/19) NeurIPS paper: A solvable high-dimensional model of GAN
In our paper to appear at this year's NeurIPS, we present a simple shallow GAN model fed by high-dimensional input data. The training dynamics of the proposed model can be analyzed exactly in the high-dimensional limit. In particular, using the tool of scaling limits of stochastic processes, we show that the macroscopic quantities measuring the quality of the training process converge to a...
(05/01/19) ICML paper: Approximate survey propagation for high-dimensional estimation
In Generalized Linear Estimation (GLE) problems, one seeks to estimate a signal that is observed through a linear transform followed by a component-wise, possibly nonlinear and noisy, channel. In the Bayes-optimal setting, Generalized Approximate Message Passing (GAMP) is known to achieve optimal performance for GLE. However, its performance can degrade significantly whenever there is a mismatch between the assumed and the true generative model, a situation frequently encountered in practice. In our...