#126: ShareBoost: Boosting for multi-view learning with performance guarantees

J. Peng, C. Barbu, G. Seetharaman, F. Wei, X. Wu, and K. Palaniappan

Lecture Notes in Artificial Intelligence (ECML PKDD), Volume 6912, pp. 597--612, 2011

machine learning, detection, classification, features, texture, data mining



Algorithms that combine multi-view information are known to exponentially speed up classification and have been applied in many fields. However, they lack the ability to identify the most discriminative information sources (or data types) for making predictions. In this paper, we propose a boosting-based algorithm to address this problem. The proposed algorithm builds base classifiers independently from each data type (view), each of which provides a partial view of the object of interest. Unlike AdaBoost, where each view maintains its own re-sampling weights, our algorithm uses a single re-sampling distribution shared by all views at each boosting round. This distribution is determined by the view whose training error is smallest. The shared sampling mechanism confines noise to individual views, thereby reducing sensitivity to noise. Furthermore, to establish performance guarantees, we introduce a randomized version of the algorithm in which the winning view is chosen probabilistically. This version can be cast in a multi-armed bandit framework, which allows us to show that, with high probability, the algorithm seeks out the most discriminative views of the data for making predictions. We provide experimental results demonstrating its robustness to noise and its performance relative to competing techniques.
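To make the shared-sampling idea concrete, the following is a minimal sketch (not the authors' implementation) of a ShareBoost-style round: each view fits its own weak learner (a decision stump here, as an assumed base classifier), the view with the smallest weighted training error "wins" the round, and a single AdaBoost-style re-sampling distribution, shared across all views, is updated from the winner's predictions. All function and variable names are illustrative.

```python
import numpy as np

def train_stump(X, y, w):
    """Best weighted decision stump (single-feature threshold) on one view."""
    best = (None, None, 1, 1.0)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def shareboost(views, y, T=10):
    """Sketch of shared-sampling boosting: `views` is a list of feature
    matrices over the same samples; y holds labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # single distribution shared by all views
    ensemble = []
    for _ in range(T):
        # Train one weak learner per view; the minimal-error view wins.
        cands = [(v,) + train_stump(X, y, w) for v, X in enumerate(views)]
        v, j, thr, pol, err = min(cands, key=lambda c: c[4])
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (views[v][:, j] - thr) >= 0, 1, -1)
        # The winner drives the shared AdaBoost-style weight update.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((v, j, thr, pol, alpha))
    return ensemble

def predict(ensemble, views):
    score = sum(a * np.where(p * (views[v][:, j] - thr) >= 0, 1, -1)
                for v, j, thr, p, a in ensemble)
    return np.sign(score)
```

Because every view sees the same re-weighted sample, a noisy view can lose the round but cannot corrupt the distribution used by the others, which is the noise-containment property described above.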