Published July 13, 2017 | Submitted + Accepted Version
Report | Open

Preference Identification

Abstract

An experimenter seeks to learn a subject's preference relation. The experimenter produces pairs of alternatives. For each pair, the subject is asked to choose. We argue that, in general, large but finite data do not give close approximations of the subject's preference, even when countably infinitely many data points are enough to infer the preference perfectly. We then provide sufficient conditions on the set of alternatives, preferences, and sequences of pairs so that the observation of finitely many choices allows the experimenter to learn the subject's preference with arbitrary precision. The sufficient conditions are strong, but they encompass many situations of interest. And while preferences can be approximated, we show that it is harder to identify utility functions. We illustrate our results with several examples, including expected utility and preferences in the Anscombe-Aumann model.
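The following is a minimal sketch, not taken from the paper, of the elicitation setup the abstract describes: the experimenter queries finitely many pairs, the subject chooses from each pair, and the revealed relation on the queried alternatives is recorded. For the sketch to be runnable, it assumes a finite sample of alternatives and a subject whose preference is represented by a hypothetical utility function u; the names u, choose, and revealed_preferred are illustrative only. The paper itself works with infinite sets of alternatives (for instance, acts in the Anscombe-Aumann model), where the question of how well such finite data approximate the preference is the substantive issue.

import itertools
import random

# Hypothetical subject: a preference over bundles in [0,1]^2 represented by a
# Cobb-Douglas-style utility (an assumption for illustration only).
def u(x):
    return (x[0] ** 0.5) * (x[1] ** 0.5)

def choose(a, b):
    """The subject's choice from the pair {a, b}: the (weakly) preferred alternative."""
    return a if u(a) >= u(b) else b

# The experimenter's finite experiment: sample n alternatives and query every pair.
random.seed(0)
alternatives = [(random.random(), random.random()) for _ in range(20)]
observed = {}  # (a, b) -> alternative chosen from the pair {a, b}
for a, b in itertools.combinations(alternatives, 2):
    observed[(a, b)] = choose(a, b)

# Revealed preference: a is revealed preferred to b if a was chosen from {a, b}.
def revealed_preferred(a, b):
    key = (a, b) if (a, b) in observed else (b, a)
    return observed[key] == a

# On the queried alternatives, the revealed relation agrees with the underlying
# preference; the paper asks when such finite data approximate the preference
# over the whole (infinite) set of alternatives.
for a, b in itertools.combinations(alternatives, 2):
    assert revealed_preferred(a, b) == (u(a) >= u(b))
print(f"Recovered the ranking of {len(alternatives)} alternatives "
      f"from {len(observed)} pairwise choices.")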

Additional Information

Echenique thanks the National Science Foundation for its support through the grants SES 1558757 and CNS 1518941. Lambert gratefully acknowledges the financial support and hospitality of Microsoft Research New York and the Cowles Foundation at Yale University.

Attached Files

Accepted Version - sswp1428.pdf

Submitted - 1807.11585.pdf

Files (806.2 kB)

1807.11585.pdf (395.1 kB, md5:9e7f20da4858b57f7b9ad42d7c0eae4b)
sswp1428.pdf (411.2 kB, md5:b001b5c4a94acc6e12c73b6301670e89)

Additional details

Created: August 19, 2023
Modified: January 13, 2024