Retrieval of Images Using a Combination of Features: Color, Color Moments and Hu Moments
In today’s digital era, many image retrieval systems retrieve images using features extracted from the images themselves, such as color, shape, and texture; these are referred to as low-level features. In the proposed work, color in the HSV color space, color moments, and Hu moments are employed to retrieve similar images. Experiments were conducted on the Wang database to evaluate combinations of these features, measuring performance with precision, recall, accuracy, and F-score. The results are compared with one another and with existing works, and the proposed system is found to achieve higher retrieval performance than the existing works.
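The three feature types named in the abstract can be sketched in pure NumPy plus the standard-library colorsys module. This is an illustrative sketch only: the bin counts (8×4×4), the use of skewness as the third color moment, and the concatenated-descriptor layout are assumptions for demonstration, not the paper's exact configuration.

```python
import colorsys
import numpy as np

def hsv_histogram(rgb, bins=(8, 4, 4)):
    """Quantized HSV color histogram; rgb is H x W x 3 with values in [0, 1]."""
    # Per-pixel RGB -> HSV via stdlib colorsys (slow but dependency-free).
    hsv = np.apply_along_axis(lambda p: colorsys.rgb_to_hsv(*p), 2, rgb)
    hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=bins,
                             range=((0, 1), (0, 1), (0, 1)))
    return hist.ravel() / hist.sum()  # normalized descriptor

def color_moments(img):
    """First three moments (mean, std, skewness) per channel: 9 values."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[:, :, c].ravel()
        mean, std = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # cube root keeps sign
        feats.extend([mean, std, skew])
    return np.array(feats)

def hu_moments(gray):
    """The seven Hu invariant moments of a grayscale image."""
    h, w = gray.shape
    y, x = np.mgrid[:h, :w].astype(float)
    m00 = gray.sum()
    xc, yc = (x * gray).sum() / m00, (y * gray).sum() / m00
    # Central moments mu_pq, then scale-normalized moments eta_pq.
    mu = lambda p, q: (((x - xc) ** p) * ((y - yc) ** q) * gray).sum()
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def feature_vector(rgb):
    """Concatenated descriptor: 128 histogram bins + 9 color moments + 7 Hu moments."""
    gray = rgb.mean(axis=2)
    return np.concatenate([hsv_histogram(rgb), color_moments(rgb), hu_moments(gray)])
```

Retrieval then amounts to ranking database images by a distance (e.g. Euclidean) between their `feature_vector` and the query's; the Hu moments are rotation- and scale-invariant, which is why they complement the purely color-based components.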
Copyright (c) 2019 R Rajkumar, Dr. M V Sudhamani
This work is licensed under a Creative Commons Attribution 4.0 International License.