Abstract
Haussler, Littlestone, and Warmuth described a general-purpose algorithm for learning according to the prediction model, and proved an upper bound on the probability that their algorithm makes a mistake in terms of the number of examples seen and the Vapnik-Chervonenkis (VC) dimension of the concept class being learned. We show that their bound is within a factor of 1 + o(1) of the best possible such bound for any algorithm.
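For context, the bound referred to in the abstract can be sketched as follows (a hedged reconstruction from the well-known Haussler-Littlestone-Warmuth result, not quoted from this paper):

```latex
% Sketch of the HLW bound (assumed form, not quoted from the paper):
% for a concept class of VC dimension d, after n labeled examples the
% one-inclusion graph algorithm's expected probability of predicting
% the next label incorrectly satisfies
\Pr\bigl[\text{mistake on the } (n+1)\text{st point}\bigr] \le \frac{d}{n+1}.
% The present paper shows this is optimal up to a factor of 1 + o(1).
```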
| Original language | English |
|---|---|
| Pages (from-to) | 1257-1261 |
| Number of pages | 5 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 47 |
| Issue number | 3 |
| DOIs | |
| State | Published - Mar 2001 |
| Externally published | Yes |
Keywords
- Computational learning
- One-inclusion graph algorithm
- Prediction model
- Sample complexity
- Vapnik-Chervonenkis (VC) dimension