Prediction with expert evaluators' advice

Abstract
We introduce a new protocol for prediction with expert advice in which each expert evaluates the learner’s and his own performance using a loss function that may change over time and may be different from the loss functions used by the other experts. The learner’s goal is to perform better or not much worse than each expert, as evaluated by that expert, for all experts simultaneously. If the loss functions used by the experts are all proper scoring rules and all mixable, we show that the defensive forecasting algorithm enjoys the same performance guarantee as that attainable by the Aggregating Algorithm in the standard setting and known to be optimal. This result is also applied to the case of “specialist” experts. In this case, the defensive forecasting algorithm reduces to a simple modification of the Aggregating Algorithm.
| Original language | English |
|---|---|
| Publisher | Springer |
| ISBN (Electronic) | 9783642044137 |
| ISBN (Print) | 9783642044137 |
| Publication status | Published - 1 Jan 2009 |
| Event | Algorithmic Learning Theory 2009 - Duration: 1 Jan 2009 → … |
Conference
| Conference | Algorithmic Learning Theory 2009 |
|---|---|
| Period | 1/01/09 → … |
| Other | Algorithmic Learning Theory 2009 |
Keywords
- prediction