Gradient Descent Style Leveraging of Decision Trees and Stumps for Misclassification Cost Performance
Cameron-Jones, Mike (2001) Gradient Descent Style Leveraging of Decision Trees and Stumps for Misclassification Cost Performance. In: AI 2001: Advances in Artificial Intelligence, 14th Australian Joint Conference on Artificial Intelligence, 10-14 Dec 2001, Adelaide, Australia.

Official URL: http://www.springerlink.com/content/8wa6n94gm9ufe6dv/

Abstract

This paper investigates the use of some gradient descent style leveraging approaches to classifier learning in the presence of misclassification costs: Schapire and Singer's AdaBoost.MH and AdaBoost.MR [16], Collins et al.'s multi-class logistic regression method [4], and some modifications that retain the gradient descent style approach. Decision trees and stumps are used as the underlying base classifiers, learned from modified versions of Quinlan's C4.5 [15]. Experiments are reported comparing the average-cost performance of the modified methods to that of the originals, and to the previously proposed "Cost Boosting" methods of Ting and Zheng [21] and Ting [18], which also use decision trees built on modified C4.5 code but have no interpretation in the gradient descent framework. While some of the modifications improve upon the originals in cost performance for both trees and stumps, the comparison with tree-based Cost Boosting suggests that, of the methods first experimented with here, one based on stumps shows the most promise.
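To make the setting concrete, below is a minimal sketch of boosting decision stumps with per-example misclassification costs folded into the example weights. It is not the paper's AdaBoost.MH/MR, the logistic regression method, or Ting and Zheng's Cost Boosting; the binary labels, the exhaustive stump learner, and the cost-weighted initial distribution are illustrative assumptions made for brevity.

```python
# Illustrative sketch only: binary AdaBoost-style leveraging of decision
# stumps, with per-example misclassification costs used to set the initial
# weight distribution. The paper's methods are multi-class and
# confidence-rated; this simplified binary variant is an assumption.
import numpy as np

def learn_stump(X, y, w):
    """Exhaustively pick the (feature, threshold, polarity) stump that
    minimises weighted error on labels y in {-1, +1}."""
    n, d = X.shape
    best = (np.inf, 0, 0.0, 1)          # (error, feature, threshold, polarity)
    for j in range(d):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= t, pol, -pol)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, pol)
    return best

def boost_stumps(X, y, cost, rounds=20):
    """Return a list of (alpha, stump) pairs. cost[i] scales example i's
    initial weight, so costly-to-misclassify examples dominate early."""
    w = cost / cost.sum()               # cost-sensitive initial distribution
    ensemble = []
    for _ in range(rounds):
        err, j, t, pol = learn_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= t, pol, -pol)
        w *= np.exp(-alpha * y * pred)  # standard exponential-loss update
        w /= w.sum()
        ensemble.append((alpha, (j, t, pol)))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the leveraged stumps."""
    score = np.zeros(len(X))
    for alpha, (j, t, pol) in ensemble:
        score += alpha * np.where(X[:, j] <= t, pol, -pol)
    return np.sign(score)
```

In a cost-sensitive application, cost[i] would typically be derived from the cost of misclassifying example i's true class, so that the ensemble's exponential-loss minimisation is biased toward expensive errors; the gradient descent style methods studied in the paper achieve this through their own, more principled weight and loss modifications.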