Local Visual Feature Detection and Description for Non-Rigid 3D Objects
Abstract
Feature extraction is an essential step in many image processing and computer vision tasks, such as object recognition, image retrieval, 3D reconstruction, and virtual reality. The design of the feature extraction method is probably the single most important factor in achieving high performance on these tasks, and different applications pose different challenges and requirements for the design of visual features. In this paper, we investigate the effectiveness of different combinations of promising local feature detectors and descriptors for non-rigid 3D objects. We enumerate different configurations of visual feature detectors and descriptors and evaluate each configuration by image matching accuracy. The results indicate that the scale-invariant feature transform (SIFT) detector and descriptor achieve the best overall performance in describing local features of non-rigid 3D objects.
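The evaluation protocol described above, scoring a detector/descriptor configuration by image matching accuracy, typically rests on nearest-neighbour descriptor matching with Lowe's ratio test. The following is a minimal sketch of that scoring step, not the paper's actual implementation: descriptors are assumed to be already-extracted NumPy arrays (one row per keypoint), and the toy data, function name, and 0.75 ratio threshold are illustrative assumptions.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches whose nearest distance is well below the
    second-nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: desc_b is a slightly noisy copy of desc_a plus one distractor
# row, so the correct match for descriptor i in image A is row i in image B.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(5, 8))
desc_b = np.vstack([desc_a + 0.01 * rng.normal(size=(5, 8)),
                    rng.normal(size=(1, 8))])

matches = ratio_test_match(desc_a, desc_b)
# Matching accuracy: fraction of descriptors recovered at the correct index.
accuracy = sum(i == j for i, j in matches) / len(desc_a)
print(matches, accuracy)
```

Averaging this accuracy over many image pairs of the same object, for each detector/descriptor configuration in turn, gives the kind of comparison score the abstract refers to.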