- Approximation to Bayes Risk in Repeated Play
  Hannan's classic 1957 strategy for approximating the Bayes risk in repeated play of a game: the player best-responds to a randomly perturbed record of the opponent's past actions (a sketch of this perturbed best-response idea follows the list).
- Maximizing and Satisficing in Multi-armed Bandits with Graph Information
  The paper studies multi-armed bandits in which a similarity graph over the arms is available as side information, and addresses both maximizing and satisficing objectives (a toy graph-smooth instance is sketched after the list).
- Zero-Inflated Bandits
  A variant of the multi-armed bandit problem in which rewards follow a zero-inflated distribution, i.e. a point mass at zero mixed with a standard reward distribution (see the reward-model sketch after the list). The authors propose two algorithms, ...