Published October 6, 2014 | Book Section - Chapter | Open Access

Clinical Online Recommendation with Subgroup Rank Feedback

Abstract

Many real applications in experimental design need to make decisions online. Each decision leads to a stochastic reward with an initially unknown distribution, and new decisions are made based on the observations of previous rewards. To maximize the total reward, one needs to resolve the tradeoff between exploring different strategies and exploiting the currently optimal strategy. This kind of tradeoff problem can be formalized as a multi-armed bandit problem: we recommend strategies in series and generate new recommendations based on noisy rewards of previous strategies. When the reward for a strategy is difficult to quantify, classical bandit algorithms are no longer optimal. This paper studies the multi-armed bandit problem with feedback given as a stochastic rank list instead of quantified reward values. We propose an algorithm for this new problem and show its optimality. A real application of this algorithm in clinical treatment is helping paralyzed patients regain the ability to stand on their own feet.
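The explore/exploit tradeoff described above is classically handled by index policies such as UCB1, which add an exploration bonus to each arm's empirical mean. The sketch below illustrates that standard numeric-reward setting only; it is not the paper's subgroup-rank-feedback algorithm, whose details are not given in this abstract, and the `pull` interface and arm means are hypothetical.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1 bandit: balance exploring each arm with
    exploiting the empirically best one. `pull(arm)` is a
    hypothetical callback returning a stochastic reward."""
    counts = [0] * n_arms       # times each arm was played
    sums = [0.0] * n_arms       # cumulative reward per arm
    for t in range(horizon):
        if t < n_arms:
            arm = t             # play each arm once to initialize
        else:
            # pick the arm maximizing mean + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t + 1) / counts[a]))
        counts[arm] += 1
        sums[arm] += pull(arm)
    return counts

# Usage: three Bernoulli arms with unknown means; over time
# UCB1 concentrates its pulls on the best arm (mean 0.8).
random.seed(0)
means = [0.2, 0.5, 0.8]
counts = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
              n_arms=3, horizon=2000)
```

In the rank-feedback setting studied by the paper, the learner observes only a noisy ordering of strategies rather than the numeric rewards this sketch assumes, which is precisely why the classical index computation above no longer applies directly.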

Additional Information

Copyright is held by the owner/author(s). Publication rights licensed to ACM. This work was supported by the Helmsley Foundation, the Christopher and Dana Reeve Foundation, and the National Institutes of Health (NIH).

Attached Files

Published - p289-sui.pdf (895.2 kB)
md5:fba79c2c91ec719b0fd440b84577b8b3

Additional details

Created: August 20, 2023
Modified: October 17, 2023