Published November 10, 2020 | Submitted
Report Open

The Performance Analysis of Generalized Margin Maximizer (GMM) on Separable Data

Abstract

Logistic models are commonly used for binary classification tasks. The success of such models has often been attributed to their connection to maximum-likelihood estimators. It has been shown that the gradient descent algorithm, when applied to the logistic loss, converges to the max-margin classifier (a.k.a. hard-margin SVM). The performance of the max-margin classifier has recently been analyzed. Inspired by these results, in this paper we present and study a more general setting, in which the underlying parameters of the logistic model possess certain structures (sparse, block-sparse, low-rank, etc.), and introduce a more general framework (referred to as the "Generalized Margin Maximizer", GMM). While classical max-margin classifiers minimize the 2-norm of the parameter vector subject to linearly separating the data, GMM minimizes an arbitrary convex function of the parameter vector. We provide a precise analysis of the performance of GMM via the solution of a system of nonlinear equations. We also provide a detailed study of three special cases: (1) ℓ₂-GMM, which is the max-margin classifier; (2) ℓ₁-GMM, which encourages sparsity; and (3) ℓ_∞-GMM, which is often used when the parameter vector has binary entries. Our theoretical results are validated by extensive simulations across a range of parameter values, problem instances, and model structures.
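The GMM optimization described in the abstract, minimize a convex function f(w) subject to y_i⟨x_i, w⟩ ≥ 1 for all i, can be sketched concretely for the ℓ₁ case, where it reduces to a linear program via the standard split w = u − v with u, v ≥ 0. This is a minimal illustrative sketch (not the authors' code); the function name `l1_gmm` and the toy data are assumptions for demonstration, and SciPy's `linprog` is used as the LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def l1_gmm(X, y):
    """Illustrative l1-GMM: min ||w||_1  s.t.  y_i <x_i, w> >= 1.

    Split w = u - v with u, v >= 0, so the objective sum(u) + sum(v)
    equals ||w||_1 and the problem becomes a linear program.
    """
    n, d = X.shape
    Z = y[:, None] * X                 # row i is y_i * x_i
    # Constraint y_i <x_i, u - v> >= 1  rewritten as  -Z u + Z v <= -1.
    A_ub = np.hstack([-Z, Z])
    b_ub = -np.ones(n)
    c = np.ones(2 * d)                 # objective: sum(u) + sum(v)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d))
    u, v = res.x[:d], res.x[d:]
    return u - v

# Toy separable data whose minimal-l1 separator is sparse:
X = np.array([[1.0, 0.0], [2.0, 0.1], [-1.0, 0.0], [-2.0, -0.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = l1_gmm(X, y)                       # all margins y_i <x_i, w> >= 1
```

Replacing the objective `c` and variable split with those of another convex f (e.g. the ℓ₂ norm, which needs a QP or SOCP solver instead) recovers the other special cases the paper analyzes.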

Additional Information

Copyright 2020 by the author(s).

Files

Submitted - 2010.15379.pdf (540.6 kB, md5:7cfa6ff7856ec71c9f822ad51a41db3a)

Additional details

Created: August 19, 2023
Modified: March 5, 2024