The one-inclusion graph algorithm is near-optimal for the prediction model of learning

  • Y. Li*
  • P. M. Long
  • A. Srinivasan

*Corresponding author for this work

Research output: Contribution to journal › Letter › peer-review

Abstract

Haussler, Littlestone, and Warmuth described a general-purpose algorithm for learning according to the prediction model, and proved an upper bound on the probability that their algorithm makes a mistake in terms of the number of examples seen and the Vapnik-Chervonenkis (VC) dimension of the concept class being learned. We show that their bound is within a factor of 1 + o(1) of the best possible such bound for any algorithm.
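The prediction strategy referred to in the abstract can be illustrated on a small finite concept class. The sketch below is a hypothetical helper (`one_inclusion_predict` is not from the paper): it builds the one-inclusion graph whose vertices are the distinct projections of the class onto the training points plus the test point, with edges between projections differing in exactly one coordinate, and predicts the test label from an edge orientation. The greedy orientation used here is only an illustrative stand-in; Haussler, Littlestone, and Warmuth prove that an orientation with maximum out-degree at most the VC dimension d always exists, which gives their mistake-probability bound of d/(m+1) on m examples.

```python
from itertools import combinations

def one_inclusion_predict(concepts, labels, test):
    """One-inclusion graph prediction (sketch).

    concepts: iterable of 0/1 tuples -- the projections of the concept
              class onto the m training points plus the test point.
    labels:   dict {coordinate: observed 0/1 label} for every
              coordinate except `test`.
    test:     index of the coordinate whose label is to be predicted.
    """
    verts = sorted(set(concepts))

    # Projections consistent with the observed labels; they can differ
    # only at the test coordinate, so at most two vertices remain.
    consistent = [v for v in verts
                  if all(v[i] == y for i, y in labels.items())]
    vals = {v[test] for v in consistent}
    if len(vals) == 1:
        return vals.pop()          # label is forced by the class

    # Edges of the one-inclusion graph: vertex pairs at Hamming
    # distance one.
    edges = []
    for u, v in combinations(verts, 2):
        diff = [i for i in range(len(u)) if u[i] != v[i]]
        if len(diff) == 1:
            edges.append((u, v))

    # Greedy orientation keeping out-degrees small (an illustrative
    # stand-in for the out-degree <= VC-dimension orientation whose
    # existence Haussler, Littlestone, and Warmuth prove).
    outdeg = {v: 0 for v in verts}
    head = {}
    for u, v in edges:
        tail, hd = (u, v) if outdeg[u] <= outdeg[v] else (v, u)
        outdeg[tail] += 1
        head[frozenset((u, v))] = hd

    # The two consistent projections form an edge along the test
    # coordinate; predict the label of the vertex the edge points to,
    # so the true concept errs on at most out-degree many coordinates.
    u, v = consistent
    return head[frozenset((u, v))][test]

# Example: threshold functions on three points (VC dimension 1).
C = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
one_inclusion_predict(C, {0: 0, 1: 0}, 2)
```

With the threshold class above, seeing labels (0, 0) on the first two points forces label 0 on the third, and the graph is a path whose edges the orientation resolves in the ambiguous cases.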

Original language: English
Pages (from-to): 1257-1261
Number of pages: 5
Journal: IEEE Transactions on Information Theory
Volume: 47
Issue number: 3
DOIs
State: Published - Mar 2001
Externally published: Yes

Keywords

  • Computational learning
  • One-inclusion graph algorithm
  • Prediction model
  • Sample complexity
  • Vapnik-Chervonenkis (VC) dimension
