In many problems in statistical learning, random matrix theory, and statistical physics, one needs to simulate dynamics on random matrix ensembles. A classical example is the use of iterative methods to compute the extremal eigenvalues and eigenvectors of a (spiked) random matrix. Other examples include approximate message passing on dense random graphs, and gradient descent algorithms for solving learning and estimation problems with random design. In our recent paper, we show that all of these...
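As a concrete instance of such dynamics, here is a minimal NumPy sketch (all parameter values are illustrative, not taken from the paper) of power iteration on a spiked GOE-like matrix. Above the BBP transition, the iterates acquire a nontrivial overlap with the planted spike.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr = 2000, 3.0                                # illustrative dimension and spike strength
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                            # planted spike direction

G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)                    # GOE-like noise, bulk spectrum in [-2, 2]
M = snr * np.outer(u, u) + W                      # spiked random matrix

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(300):                              # power iteration: the simplest such dynamics
    x = M @ x
    x /= np.linalg.norm(x)

# above the BBP threshold (snr > 1 in this normalization),
# the overlap is bounded away from zero as n grows
print("overlap with the spike:", abs(x @ u))
```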
(01/06/21) New paper: Phase transitions in transfer learning with high-dimensional perceptrons
Transfer learning seeks to improve the generalization performance of a target task by exploiting knowledge learned from a related source task. Central questions include what information should be transferred and when the transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, in which the transferred source information actually degrades the generalization performance on the target task. This happens when the two tasks are sufficiently dissimilar. In our new...
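The negative transfer effect can already be seen in a toy linear-regression analogue (not the perceptron model analyzed in the paper): shrink the target estimate toward the source weights and compare against plain ridge regression as the similarity between the two tasks varies. Everything below, including the data model and parameter values, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, tau = 100, 200, 0.5, 1.0            # illustrative: p > n (high-dimensional)

def teachers(angle):
    """Source/target teacher vectors with a prescribed angle between them."""
    a = rng.standard_normal(p); a /= np.linalg.norm(a)
    b = rng.standard_normal(p); b -= (b @ a) * a; b /= np.linalg.norm(b)
    return a, np.cos(angle) * a + np.sin(angle) * b

def errors(angle, trials=50):
    err_t, err_0 = 0.0, 0.0
    for _ in range(trials):
        w_s, w_t = teachers(angle)
        X = rng.standard_normal((n, p)) / np.sqrt(p)
        y = X @ w_t + sigma * rng.standard_normal(n)
        K = X.T @ X + tau * np.eye(p)
        w_transfer = np.linalg.solve(K, X.T @ y + tau * w_s)  # shrink toward source
        w_ridge = np.linalg.solve(K, X.T @ y)                 # shrink toward zero
        err_t += np.sum((w_transfer - w_t) ** 2) / trials
        err_0 += np.sum((w_ridge - w_t) ** 2) / trials
    return err_t, err_0

for angle in (0.2, np.pi / 2, np.pi - 0.2):      # similar, orthogonal, opposed tasks
    err_t, err_0 = errors(angle)
    print(f"angle={angle:.2f}  transfer={err_t:.3f}  no-transfer={err_0:.3f}")
```

When the teachers are nearly aligned, shrinking toward the source helps; when they are nearly opposed, the transferred information hurts, which is the negative transfer regime described above.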
(09/04/19) NeurIPS paper: A solvable high-dimensional model of GAN
In our paper to appear at this year's NeurIPS, we present a simple shallow GAN model fed by high-dimensional input data. The dynamics of the training process of the proposed model can be analyzed exactly in the high-dimensional limit. In particular, using scaling limits of stochastic processes, we show that the macroscopic quantities measuring the quality of the training process converge to a...
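A minimal sketch of the kind of training dynamics in question: alternating online SGD on a toy shallow GAN with a rank-one data model. The exact model, losses, and scalings in the paper differ, and every choice below (data model, learning rate, dimensions) is an illustrative assumption. The printed overlap is an example of a macroscopic quantity whose trajectory one tracks in the scaling limit.

```python
import numpy as np

rng = np.random.default_rng(2)
n, c, lr, steps = 500, 2.0, 0.5, 20000           # all values are illustrative
u = rng.standard_normal(n); u /= np.linalg.norm(u)  # hidden feature of the real data

v = rng.standard_normal(n) / np.sqrt(n)          # generator weight vector
w = rng.standard_normal(n) / np.sqrt(n)          # discriminator weight vector
sig = lambda t: 1.0 / (1.0 + np.exp(-t))

for step in range(steps):
    g_r, g_f = rng.standard_normal(2)            # latent scalars for real/fake samples
    x_real = c * g_r * u + rng.standard_normal(n)
    x_fake = c * g_f * v + rng.standard_normal(n)

    # discriminator: one SGD step ascending log D(real) + log(1 - D(fake))
    w += (lr / n) * ((1 - sig(w @ x_real)) * x_real - sig(w @ x_fake) * x_fake)

    # generator: one SGD step descending the non-saturating loss -log D(fake)
    v += (lr / n) * (1 - sig(w @ x_fake)) * c * g_f * w

    if step % 5000 == 0:                         # a macroscopic order parameter
        print(step, "overlap:", v @ u / np.linalg.norm(v))
```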
(05/01/19) ICML paper: Approximate survey propagation for high-dimensional estimation
In Generalized Linear Estimation (GLE) problems, one seeks to estimate a signal that is observed through a linear transform followed by a componentwise, possibly nonlinear and noisy, channel. In the Bayes-optimal setting, Generalized Approximate Message Passing (GAMP) is known to achieve optimal performance for GLE. However, its performance can degrade significantly whenever there is a mismatch between the assumed and the true generative models, a situation frequently encountered in practice. In our...
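For concreteness, here is the GLE generative model together with an AMP-style iteration for its simplest special case: a linear channel with Gaussian noise and a sparse signal. This is only a hedged stand-in; GAMP and the approximate survey propagation algorithm of the paper handle general componentwise channels and model mismatch. Sizes and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 400, 200, 20                            # illustrative sizes; delta = m/n
delta = m / n
A = rng.standard_normal((m, n)) / np.sqrt(m)      # i.i.d. matrix, near-unit-norm columns

x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.05 * rng.standard_normal(m)        # GLE with a linear (identity) channel

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)          # effective noise level from the residual
    x_new = soft(x + A.T @ z, 1.5 * tau)          # componentwise denoising step
    # Onsager correction term: what distinguishes AMP from plain iterative thresholding
    z = y - A @ x_new + (z / delta) * np.mean(np.abs(x_new) > 0)
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```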
(03/28/19) New paper: Asymptotics and optimal designs of SLOPE
In sparse linear regression, the SLOPE estimator generalizes the LASSO by assigning magnitude-dependent regularization weights to the different coordinates of the estimate. In our new paper, we present an asymptotically exact characterization of the performance of SLOPE in the high-dimensional regime. This characterization enables us to derive optimal regularization sequences that either minimize the MSE or maximize the power of variable selection at any given Type-I error level.
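For reference, SLOPE solves min_x (1/2)||y - Ax||^2 + sum_i lam_i |x|_(i), where lam_1 >= ... >= lam_p >= 0 is applied to the coordinates sorted by magnitude. Below is a minimal sketch of proximal gradient descent for this objective, with the sorted-L1 prox computed by pool-adjacent-violators (following the known stack-based construction); the regularization sequence passed in, e.g. a linearly decreasing one, is a hypothetical choice.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 penalty sum_i lam_i |x|_(i); lam must be nonincreasing."""
    sign, w = np.sign(v), np.abs(v)
    order = np.argsort(-w)                        # sort magnitudes in decreasing order
    d = w[order] - lam
    # nonincreasing isotonic regression of d via pool-adjacent-violators
    blocks = []                                   # each block: [start, end, block_sum]
    for i, di in enumerate(d):
        start, end, s = i, i, di
        # merge while the previous block's average <= the current block's average
        while blocks and blocks[-1][2] * (end - start + 1) <= s * (blocks[-1][1] - blocks[-1][0] + 1):
            ps, _, psum = blocks.pop()
            start, s = ps, psum + s
        blocks.append([start, end, s])
    x_sorted = np.zeros_like(w)
    for start, end, s in blocks:
        x_sorted[start:end + 1] = max(s / (end - start + 1), 0.0)  # clip at zero
    x = np.empty_like(w)
    x[order] = x_sorted                           # undo the sorting
    return sign * x

def slope(A, y, lam, n_iter=500):
    """Proximal gradient for 0.5*||y - Ax||^2 + sum_i lam_i |x|_(i)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2           # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_sorted_l1(x - t * A.T @ (A @ x - y), t * lam)
    return x
```

Setting all lam_i equal recovers the LASSO, which is the sense in which SLOPE generalizes it; the paper's optimal sequences replace such ad hoc choices.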