Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization

Citation:

B. Aubin, Y. M. Lu, F. Krzakala, and L. Zdeborová, “Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization,” in Conference on Neural Information Processing Systems (NeurIPS), 2020.

Date Presented:

2020

Abstract:

We consider a commonly studied supervised classification task on a synthetic dataset whose labels are generated by feeding random i.i.d. inputs to a one-layer neural network. We study the generalization performance of standard classifiers in the high-dimensional regime where α = n/d is kept finite in the limit of high dimension d and number of samples n. Our contribution is three-fold: First, we prove a formula for the generalization error achieved by ℓ2-regularized classifiers that minimize a convex loss; this formula was first obtained by the heuristic replica method of statistical physics. Second, focusing on commonly used loss functions and optimizing the ℓ2 regularization strength, we observe that while ridge regression performs poorly, logistic and hinge regression are surprisingly able to approach the Bayes-optimal generalization error extremely closely. As α→∞ they lead to Bayes-optimal rates, a fact that does not follow from predictions of margin-based generalization error bounds. Third, we design an optimal loss and regularizer that provably lead to the Bayes-optimal generalization error.
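As a minimal sketch of the setup described above (not the authors' code), the snippet below generates the synthetic teacher-student data with labels from a one-layer sign teacher on random i.i.d. Gaussian inputs, then compares ℓ2-regularized ridge, logistic, and hinge classifiers at a fixed sample ratio α = n/d. The dimension, sample ratio, regularization strength, and scikit-learn estimators are illustrative assumptions, not values or tools from the paper.

```python
# Illustrative sketch of the teacher-student classification setup (assumptions:
# sizes, lam, and sklearn estimators are chosen for demonstration only).
import numpy as np
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d, alpha = 200, 3.0                  # dimension d and sample ratio alpha = n / d
n = int(alpha * d)

w_star = rng.standard_normal(d)      # teacher weights


def sample(m):
    X = rng.standard_normal((m, d))              # random i.i.d. Gaussian inputs
    y = np.sign(X @ w_star / np.sqrt(d))         # one-layer (sign) teacher labels
    return X, y


X_train, y_train = sample(n)
X_test, y_test = sample(10 * n)                  # held-out set to estimate generalization error

lam = 1.0                                        # l2 regularization strength (illustrative)
models = {
    "ridge": RidgeClassifier(alpha=lam),
    "logistic": LogisticRegression(C=1.0 / lam, penalty="l2", max_iter=5000),
    "hinge": LinearSVC(C=1.0 / lam, loss="hinge", dual=True, max_iter=20000),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    err = np.mean(clf.predict(X_test) != y_test)  # empirical generalization error
    print(f"{name:8s} generalization error ≈ {err:.3f}")
```

Sweeping α and the regularization strength in such a simulation is one way to visualize, at finite d, the gap between ridge and the margin-based losses that the paper's asymptotic formula characterizes exactly.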

arXiv:2006.06560 [stat.ML]

Last updated on 10/20/2020