Publications by Year: 2021

A. Maillard, F. Krzakala, Y. M. Lu, and L. Zdeborova, “Construction of optimal spectral methods in phase retrieval,” in Mathematical and Scientific Machine Learning, 2021. arXiv:2012.04524 [cs.IT]

Abstract:
We consider the phase retrieval problem, in which the observer wishes to recover an n-dimensional real or complex signal X⋆ from the (possibly noisy) observation of |ΦX⋆|, where Φ is a matrix of size m×n. We consider a high-dimensional setting where n, m → ∞ with m/n = O(1), and a large class of (possibly correlated) random matrices Φ and observation channels. Spectral methods are a powerful tool to obtain approximate estimates of the signal X⋆, which can then be used as initialization for a subsequent algorithm, at a low computational cost. In this paper, we extend and unify previous results and approaches on spectral methods for the phase retrieval problem. More precisely, we combine the linearization of message-passing algorithms and the analysis of the Bethe Hessian, a classical tool of statistical physics. Using this toolbox, we show how to derive optimal spectral methods for arbitrary channel noise and right-unitarily invariant matrix Φ, in an automated manner (i.e., with no optimization over any hyperparameter or preprocessing function).
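As a rough illustration of the spectral-initialization idea discussed in this abstract, the sketch below (NumPy) forms the classical weighted matrix D = Φᴴ diag(T(y)) Φ / m and returns its leading eigenvector as an estimate of X⋆. The function names and the placeholder preprocessing T(y) = y² − 1 are illustrative assumptions only; the paper's contribution is deriving the optimal preprocessing for a given channel and matrix ensemble, which is not reproduced here.

```python
import numpy as np

def spectral_initializer(Phi, y, preprocess=None):
    """Generic spectral estimate for phase retrieval (illustrative sketch only).

    Given measurements y ~ |Phi @ x_star| and the m x n sensing matrix Phi,
    build D = Phi^H diag(T(y)) Phi / m and return its leading eigenvector.
    T below is a simple placeholder; it is NOT the optimal preprocessing
    function constructed in the paper.
    """
    m, n = Phi.shape
    T = preprocess if preprocess is not None else (lambda v: v**2 - 1.0)  # placeholder choice
    D = (Phi.conj().T * T(y)) @ Phi / m          # columns of Phi^H scaled by T(y_j)
    eigvals, eigvecs = np.linalg.eigh((D + D.conj().T) / 2)  # leading eigenvector of Hermitian D
    return eigvecs[:, -1]

# Toy usage with an i.i.d. Gaussian sensing matrix and noiseless magnitudes
rng = np.random.default_rng(0)
n, m = 100, 800
x_star = rng.standard_normal(n); x_star /= np.linalg.norm(x_star)
Phi = rng.standard_normal((m, n))
y = np.abs(Phi @ x_star)
x0 = spectral_initializer(Phi, y)
print("overlap |<x0, x_star>| =", abs(x0 @ x_star))
```

With m/n well above the weak-recovery threshold, the printed overlap should be clearly nonzero, which is what makes such estimates useful as initializations for a subsequent refinement algorithm.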
O. Dhifallah and Y. M. Lu, “Phase Transitions in Transfer Learning for High-Dimensional Perceptrons,” Entropy, Special Issue "The Role of Signal Processing and Information Theory in Modern Machine Learning", vol. 23, no. 4, 2021. arXiv:2101.01918 [cs.LG]

Abstract:
Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In this paper, we present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks. Despite the simplicity of our model, it reproduces several key phenomena observed in practice. Specifically, our asymptotic analysis reveals a phase transition from negative transfer to positive transfer as the similarity of the two tasks moves past a well-defined threshold.
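The following toy simulation (NumPy) gives a qualitative feel for the setup described in this abstract: two perceptron teachers whose similarity is controlled by a correlation parameter rho, with source knowledge transferred through a biased ridge penalty. All names, sample sizes, and the transfer mechanism are illustrative assumptions; the paper's results come from an exact asymptotic analysis of a specific model, not from this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_src, n_tgt, n_test = 200, 2000, 100, 5000
lam, transfer_weight = 1e-2, 1.0  # ridge strength and pull toward the source estimate

def perceptron_data(w, n):
    X = rng.standard_normal((n, d)) / np.sqrt(d)
    return X, np.sign(X @ w)

def biased_ridge(X, y, anchor, lam, mu):
    # argmin_w ||Xw - y||^2 + lam ||w||^2 + mu ||w - anchor||^2
    A = X.T @ X + (lam + mu) * np.eye(d)
    b = X.T @ y + mu * anchor
    return np.linalg.solve(A, b)

for rho in [0.0, 0.3, 0.6, 0.9, 1.0]:
    # Target teacher and a source teacher at similarity rho
    w_tgt = rng.standard_normal(d); w_tgt /= np.linalg.norm(w_tgt)
    w_perp = rng.standard_normal(d)
    w_perp -= (w_perp @ w_tgt) * w_tgt; w_perp /= np.linalg.norm(w_perp)
    w_src = rho * w_tgt + np.sqrt(1 - rho**2) * w_perp

    Xs, ys = perceptron_data(w_src, n_src)
    w_hat_src = biased_ridge(Xs, ys, np.zeros(d), lam, 0.0)          # source-only estimate

    Xt, yt = perceptron_data(w_tgt, n_tgt)
    w_plain = biased_ridge(Xt, yt, np.zeros(d), lam, 0.0)            # target task, no transfer
    w_trans = biased_ridge(Xt, yt, w_hat_src, lam, transfer_weight)  # target task with transfer

    Xte, yte = perceptron_data(w_tgt, n_test)
    err = lambda w: np.mean(np.sign(Xte @ w) != yte)
    print(f"rho={rho:.1f}  no-transfer err={err(w_plain):.3f}  transfer err={err(w_trans):.3f}")
```

Scanning the printed errors as rho increases lets one see, qualitatively, the shift from transfer being unhelpful (or harmful) for dissimilar tasks to being beneficial once the tasks are sufficiently aligned, which is the transition the paper characterizes precisely.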