
    杨辉华, 王行愚. Sparse Approximation to a Key Algorithm of Learning Theory[J]. Journal of East China University of Science and Technology (Natural Science Edition), 2004, (6): 688-693.

    Sparse Approximation to a Key Algorithm of Learning Theory

    • Abstract: A key algorithm (KA) of learning theory, recently proposed by Poggio and Smale, can perform both nonlinear classification and regression and avoids solving a quadratic program, but it suffers from the fact that nearly all training samples become "support vectors". To impose sparsity on KA, a sparse KA algorithm (SKA) is proposed: by designing a specific optimization function, SKA effectively reduces the number of "support vectors" while retaining good generalization ability. SKA is applied to two real pattern recognition problems (UCI datasets) and compared with the support vector machine (SVM); the results demonstrate its effectiveness.
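    For context, the key algorithm referred to above is, in Poggio and Smale's formulation, regularized least squares in a reproducing kernel Hilbert space: the predictor f(x) = Σ_i c_i K(x_i, x) is obtained by solving the linear system (nγI + K)c = y, which is why no quadratic program is needed and why the coefficient vector is generally dense, i.e. almost every training sample acts as a "support vector". The sketch below illustrates only this baseline KA, not the paper's SKA, whose specific optimization function is not reproduced here; the Gaussian kernel, its width, and the regularization value are illustrative assumptions.

        import numpy as np

        def gaussian_kernel(A, B, sigma=1.0):
            # Pairwise Gaussian (RBF) kernel between rows of A and rows of B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def ka_fit(X, y, gamma=1e-3, sigma=1.0):
            # Key algorithm (regularized least squares in an RKHS):
            # solve (n*gamma*I + K) c = y -- a linear system, not a QP.
            n = X.shape[0]
            K = gaussian_kernel(X, X, sigma)
            return np.linalg.solve(n * gamma * np.eye(n) + K, y)

        def ka_predict(X_train, c, X_new, sigma=1.0):
            # f(x) = sum_i c_i K(x_i, x)
            return gaussian_kernel(X_new, X_train, sigma) @ c

        # Toy illustration: count how many coefficients are nonzero.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 2))
        y = np.sign(X[:, 0] + X[:, 1])          # simple two-class labels
        c = ka_fit(X, y)
        print("nonzero coefficients:", int(np.sum(np.abs(c) > 1e-8)), "of", len(c))

    On a toy dataset like this, typically nearly all 50 coefficients come out nonzero, which is exactly the lack of sparsity that the paper's SKA is designed to address.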

       

