Zhang Suofei, Wu Haiyang, Wu Zhenyang. Transfer and reusing of object view information[J]. Journal of Image and Graphics, 2013, 18(10): 1302-1306. DOI: 10.11834/jig.20131011.
Conventional vision-based object detection and recognition models mostly depend on the view information of target examples. However, such view information is usually available in only a few datasets. When view information is scarce, some generic object detection models try to learn the target by evaluating the view information with unsupervised learning methods. In this paper, a selective transfer learning method, TransferBoost, is improved and introduced to relieve the lack of object view information in training. The proposed TransferBoost, built on the GentleBoost framework, improves the performance of learning the target class by reusing prior knowledge from other object classes. Given a well-labeled training set as the source task, TransferBoost transfers knowledge at both the instance level and the task level by adjusting the weights of examples and tasks simultaneously. This combination of two-level transfer extracts useful information more effectively from a mixture of relevant and irrelevant source tasks. Our experimental results show that, compared with traditional machine learning methods, transfer learning needs far fewer training examples, which reduces the training cost of object detection and recognition models and extends the applicability of existing models.
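To illustrate the idea of joint instance-level and task-level weighting described above, the following is a minimal sketch of one plausible boosting scheme of this kind. It is not the authors' exact TransferBoost variant: the weak learner, the transferability measure, and all function names are illustrative assumptions.

```python
# Sketch of instance- and task-level transfer boosting (assumed, not the paper's exact algorithm).
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def weak_learner(X, y, w):
    """Regression stump fit to +/-1 labels with sample weights (GentleBoost-style)."""
    stump = DecisionTreeRegressor(max_depth=1)
    stump.fit(X, y, sample_weight=w)
    return stump


def boost_with_transfer(X_tgt, y_tgt, source_tasks, n_rounds=50):
    """Boost on a small target set while reusing labeled source tasks.

    source_tasks: list of (X_src, y_src) pairs; labels are in {-1, +1}.
    Instance weights and per-task weights are updated jointly each round.
    """
    w_tgt = np.ones(len(y_tgt)) / len(y_tgt)                # target instance weights
    w_src = [np.ones(len(y)) / len(y) for _, y in source_tasks]
    alpha = np.ones(len(source_tasks))                       # task-level weights
    ensemble = []

    for _ in range(n_rounds):
        # Pool target and source data, scaling each source task by its task weight.
        X_pool = np.vstack([X_tgt] + [X for X, _ in source_tasks])
        y_pool = np.concatenate([y_tgt] + [y for _, y in source_tasks])
        w_pool = np.concatenate([w_tgt] + [a * w for a, w in zip(alpha, w_src)])
        w_pool /= w_pool.sum()

        h = weak_learner(X_pool, y_pool, w_pool)
        ensemble.append(h)

        # Instance-level reweighting on the target set (GentleBoost-style).
        f_tgt = h.predict(X_tgt)
        w_tgt *= np.exp(-y_tgt * f_tgt)
        w_tgt /= w_tgt.sum()
        err_tgt = np.sum(w_tgt * (np.sign(f_tgt) != y_tgt))

        # Task-level reweighting: source tasks that the current weak learner
        # fits better than the target data keep a higher weight (crude proxy
        # for transferability; relevant tasks are favored over irrelevant ones).
        for k, (X_s, y_s) in enumerate(source_tasks):
            f_s = h.predict(X_s)
            err_s = np.mean(np.sign(f_s) != y_s)
            alpha[k] *= np.exp(err_tgt - err_s)
            w_src[k] *= np.exp(-y_s * f_s)
            w_src[k] /= w_src[k].sum()
        alpha /= alpha.sum()

    return ensemble
```

The key design choice mirrored here is that misleading (irrelevant) source tasks are down-weighted as a whole, while informative examples within the remaining source tasks are still emphasized individually, so the target learner benefits from both levels of selection.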