Published August 31, 2005 | Submitted
Report | Open Access

Perceptron learning with random coordinate descent

Li, Ling

Abstract

A perceptron is a linear threshold classifier that separates examples with a hyperplane. It is perhaps the simplest learning model that is used standalone. In this paper, we propose a family of random coordinate descent algorithms for perceptron learning on binary classification problems. Unlike most perceptron learning algorithms, which require smooth cost functions, our algorithms directly minimize the training error, and usually achieve the lowest training error compared with other algorithms. The algorithms are also computationally efficient. Such advantages make them favorable for both standalone use and ensemble learning, on problems that are not linearly separable. Experiments show that our algorithms work very well with AdaBoost, and achieve the lowest test errors for half of the datasets.
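The core idea, picking a random coordinate of the weight vector and moving it to a value that directly minimizes the 0/1 training error along that coordinate, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithms (those are in the linked C++ code); the function name `rcd_perceptron` and the candidate-value search are assumptions for the sketch.

```python
import numpy as np

def rcd_perceptron(X, y, epochs=50, rng=None):
    """Sketch of random coordinate descent for a perceptron.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    A bias column is appended to X. Each step picks a random
    coordinate i and sets w[i] to a value that minimizes the 0/1
    training error along that coordinate (no smooth surrogate loss).
    """
    rng = np.random.default_rng(rng)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    n, d = Xb.shape
    w = np.zeros(d)

    def train_error(w):
        # tiny offset breaks sign(0) ties consistently
        return np.mean(np.sign(Xb @ w + 1e-12) != y)

    for _ in range(epochs * d):
        i = rng.integers(d)
        xi = Xb[:, i]
        rest = Xb @ w - w[i] * xi          # activations without coordinate i
        # Along coordinate i, an example's prediction flips only at the
        # threshold where its activation crosses zero; it suffices to test
        # values just on either side of each threshold.
        nz = xi != 0
        thresholds = -rest[nz] / xi[nz]
        best_v, best_e = w[i], train_error(w)
        for t in thresholds:
            for v in (t - 1e-8, t + 1e-8):
                w_try = w.copy()
                w_try[i] = v
                e = train_error(w_try)
                if e < best_e:
                    best_v, best_e = v, e
        w[i] = best_v                       # exact 1-D minimizer of 0/1 error
    return w
```

Because each coordinate update minimizes the training error exactly in one dimension, the error is non-increasing, which is what makes the method usable as a weak learner inside AdaBoost even when the data is not linearly separable.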

Additional Information

For C++ code of the perceptron learning algorithms used in this paper, see http://www.work.caltech.edu/ling/lemga/ . For the artificial datasets used in this paper, see http://www.work.caltech.edu/ling/data/ .

Attached Files

Submitted - tr05perceptron.pdf

Files

tr05perceptron.pdf (204.8 kB), md5:9eb430d9f4f24638cc0f178a0e200abc

Additional details

Created: August 19, 2023
Modified: January 13, 2024