(05/21/18) In our recent paper, A Solvable High-Dimensional Model of Generative Adversarial Networks, we present a simple GAN model fed by high-dimensional input data. The training dynamics of the proposed model can be analyzed exactly in the high-dimensional limit. In particular, using scaling limits of stochastic processes, we show that the macroscopic quantities measuring the quality of the training process converge to a deterministic process, characterized as the unique solution of a finite-dimensional ordinary differential equation.
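The scaling-limit phenomenon can be illustrated on a much simpler toy problem than the GAN model in the paper (the sketch below is purely illustrative and is not the model we analyze): for online SGD on an n-dimensional quadratic with noisy gradients and step size eta/n, the macroscopic squared error q = ||w - w*||^2 / n, viewed at rescaled time t = k/n, concentrates around the solution q(t) = q(0) exp(-2*eta*t) of a limiting ODE as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000    # ambient dimension (illustrative choice)
eta = 0.5   # step-size constant; the per-iteration step is eta / n
T = 2.0     # macroscopic time horizon, t = k / n

# Toy problem: noisy gradient descent toward w_star on f(w) = ||w - w_star||^2 / 2
w_star = rng.standard_normal(n)
w = w_star + rng.standard_normal(n)   # initial macroscopic error q(0) ~ 1

ts, qs = [], []
for k in range(int(T * n) + 1):
    if k % (n // 10) == 0:            # record q at intervals of t = 0.1
        ts.append(k / n)
        qs.append(np.sum((w - w_star) ** 2) / n)
    noisy_grad = (w - w_star) + rng.standard_normal(n)
    w -= (eta / n) * noisy_grad

# Limiting ODE: dq/dt = -2*eta*q, so q(t) = q(0) * exp(-2*eta*t)
q_ode = [qs[0] * np.exp(-2 * eta * t) for t in ts]
max_dev = max(abs(a - b) for a, b in zip(qs, q_ode))
```

As n increases, the random fluctuations of q(t) around the ODE solution shrink at rate O(n^{-1/2}), which is the sense in which macroscopic training dynamics become exactly solvable in the limit.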
Together with Yuxin Chen (Princeton) and Yuejie Chi (CMU), I gave a tutorial at this year's ICASSP on recent advances in nonconvex statistical estimation. Topics included the optimization landscapes of nonconvex estimation, convergence analysis of gradient descent and stochastic gradient descent, spectral methods for initialization, and example applications to phase retrieval, low-rank matrix recovery, and blind deconvolution.
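A concrete, simplified instance of the pipeline covered in the tutorial is sketched below: spectral initialization followed by gradient descent on a real-valued phase retrieval problem with Gaussian measurements. The dimensions, step size, and iteration count are illustrative assumptions, not values from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 1000                  # signal dimension and number of measurements
x = rng.standard_normal(n)
x /= np.linalg.norm(x)           # ground-truth signal, normalized
A = rng.standard_normal((m, n))  # Gaussian sensing vectors (rows of A)
y = (A @ x) ** 2                 # phaseless (magnitude-squared) measurements

# Spectral initialization: top eigenvector of (1/m) * sum_i y_i a_i a_i^T,
# rescaled by the norm estimate sqrt(mean(y)).
Y = (A * y[:, None]).T @ A / m
_, eigvecs = np.linalg.eigh(Y)   # eigenvalues in ascending order
z = eigvecs[:, -1] * np.sqrt(np.mean(y))

# Gradient descent on the quartic loss f(z) = (1/4m) * sum_i ((a_i^T z)^2 - y_i)^2
mu = 0.1
for _ in range(500):
    Az = A @ z
    grad = A.T @ ((Az ** 2 - y) * Az) / m
    z -= mu * grad

# Recovery error, up to the unavoidable global sign ambiguity
dist = min(np.linalg.norm(z - x), np.linalg.norm(z + x))
```

With enough Gaussian measurements, the spectral initializer lands close to the signal (up to sign), and gradient descent then converges linearly despite the nonconvexity of the loss.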
Our recent work on a precise high-dimensional analysis of subspace learning algorithms with subsampled measurements will appear at the SPARS workshop as an oral presentation. A full paper with the technical details will follow soon.