The support vector
machine is a supervised learning algorithm recommended for
classification and nonlinear function approximation. Support vector machines
require a considerable amount of memory and a time-consuming training process for large data
sets. The main reason for this is that the algorithm solves a
constrained quadratic programming problem. In this paper,
we proposed three approaches for identifying non-critical points in the
training set and removing them from the original training
set in order to speed up the training process of the support vector machine. For this
purpose, we used principal component analysis, Mahalanobis distance, and
Euclidean distance based measurements to eliminate non-critical
training instances. We compared the proposed methods with each other and with the
conventional support vector machine in
terms of computational time and classification accuracy. Our experimental results show that the
principal component analysis and Mahalanobis distance based methods
reduce computational time without degrading
classification accuracy, in contrast to the Euclidean distance based method and the
conventional support vector machine.
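To make the idea concrete, the following is a minimal sketch of Mahalanobis distance based training-set reduction before SVM training, using NumPy and scikit-learn. The selection rule shown here (keeping, for each class, the instances farthest from their own class mean, on the assumption that boundary points tend to lie far from the class centroid) and the `keep_ratio` parameter are illustrative assumptions, not necessarily the exact criterion proposed in the paper.

```python
# Sketch: remove "non-critical" instances by Mahalanobis distance, then train an SVM.
# Assumption: instances close to their class mean are treated as non-critical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def reduce_by_mahalanobis(X, y, keep_ratio=0.5):
    """Keep the keep_ratio fraction of each class farthest from its class mean."""
    keep_idx = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        Xc = X[idx]
        mean = Xc.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))  # pinv handles singular covariance
        diff = Xc - mean
        # Squared Mahalanobis distance of each instance to its class mean
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        n_keep = max(1, int(keep_ratio * len(idx)))
        keep_idx.extend(idx[np.argsort(d2)[-n_keep:]])  # retain the farthest instances
    return np.sort(np.array(keep_idx))

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
idx = reduce_by_mahalanobis(X, y, keep_ratio=0.5)
clf = SVC(kernel='rbf').fit(X[idx], y[idx])  # train on the reduced set only
print(clf.score(X, y))
```

Training on the reduced set shrinks the quadratic programming problem roughly in proportion to `keep_ratio`, which is the source of the reported speedup; the Euclidean distance variant would simply replace the Mahalanobis distance with the plain Euclidean norm.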
| Primary Language | English |
| --- | --- |
| Subjects | Engineering |
| Journal Section | Research Articles |
| Authors | |
| Publication Date | March 15, 2018 |
| Submission Date | January 3, 2018 |
| Acceptance Date | March 6, 2018 |
| Published in Issue | Year 2018 Volume: 4 Issue: 1 |