Published May 1, 2006 | public
Journal Article | Open Access

The p-norm generalization of the LMS algorithm for adaptive filtering

Abstract

Recently, much work has been done analyzing online machine learning algorithms in a worst-case setting, where no probabilistic assumptions are made about the data. This is analogous to the H^∞ setting used in adaptive linear filtering. Bregman divergences have become a standard tool for analyzing online machine learning algorithms. Using these divergences, we motivate a generalization of the least mean squared (LMS) algorithm. The loss bounds for these so-called p-norm algorithms involve norms other than the standard 2-norm. The bounds can be significantly better if a large proportion of the input variables are irrelevant, i.e., if the weight vector we are trying to learn is sparse. We also prove results for nonstationary targets. We only know how to apply kernel methods to the standard LMS algorithm (i.e., p=2). However, even in the general p-norm case, we can handle generalized linear models where the output of the system is a linear function combined with a nonlinear transfer function (e.g., the logistic sigmoid).
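As a rough illustration of the idea described above (a sketch, not the paper's exact formulation), a p-norm LMS update can be written as ordinary gradient descent carried out in a dual parameter space, with the link function ∇(½‖·‖_p²) mapping dual parameters to primal weights. The function names and the learning-rate choice below are hypothetical, introduced only for this example:

```python
import numpy as np

def grad_half_sq_norm(theta, p):
    """Gradient of 0.5 * ||theta||_p^2 -- the p-norm link function."""
    norm = np.linalg.norm(theta, ord=p)
    if norm == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (p - 1) / norm ** (p - 2)

def p_norm_lms(X, y, p=2.0, eta=0.05):
    """Sketch of a p-norm LMS pass over (X, y); returns the final weights."""
    theta = np.zeros(X.shape[1])           # dual parameters
    for x_t, y_t in zip(X, y):
        w = grad_half_sq_norm(theta, p)    # primal weights via the link
        y_hat = w @ x_t                    # linear prediction
        theta -= eta * (y_hat - y_t) * x_t # gradient step in the dual space
    return grad_half_sq_norm(theta, p)
```

For p=2 the link function is the identity, so this sketch reduces to the standard LMS update, consistent with the abstract's remark that p=2 recovers the classical algorithm.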

Additional Information

© Copyright 2006 IEEE. Reprinted with permission. Manuscript received December 1, 2004; revised June 26, 2005. [Posted online: 2006-04-18] This work was supported by the National Science Foundation under Grant CCR 9821087, the Australian Research Council, the Academy of Finland under Decision 210796, and the IST Programme of the European Community under PASCAL Network of Excellence IST-2002-506778. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Dominic K. C. Ho.

Files

KIVieeetsp06.pdf (486.4 kB)
md5:1ca673da82a1dd25c404fc9aa0ae659c

Additional details

Created: August 22, 2023
Modified: March 5, 2024