Semantic-Enriched Lecture Video Retrieval System Using Feature Mixture and Hybrid Classification
Advances in web technologies have tremendously increased the volume of lecture video content. Lecture video retrieval for e-learning is a challenging task, since the videos are unstructured and large. Because many lecture videos carry limited descriptive information, the retrieval system must be built with enhanced features to improve the efficiency of the retrieval process. In this paper, a semantically enriched lecture video retrieval system is proposed. Key frames are extracted from the video during pre-processing. The proposed model builds a feature-mixture database from the most relevant features: text, semantic words, and Local Gabor Pattern (LGP) vectors. Video retrieval from the feature-mixture database is performed by a hybrid K-Nearest Neighbour Naive Bayes (KNB) classifier, which combines the techniques of the Naive Bayes (NB) classifier and the K-Nearest Neighbour (K-NN) classifier. The efficiency of the proposed model is analyzed using performance metrics such as precision, recall, and F-measure. The simulation is carried out by issuing text queries and video queries against the video database. The simulation results show that the proposed model achieves precision and recall values of 1.0 and 0.7500 respectively, and an F-measure of 0.8571, outperforming the existing K-NN system.
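The abstract does not specify how the K-NN and NB stages are combined, so the following is a minimal illustrative sketch of one plausible hybridization, assuming the K-NN stage shortlists the k nearest feature vectors and a Gaussian NB stage then scores classes using only that shortlist. The function names (`knb_classify`, `f_measure`) and the smoothing constant are our own; the F-measure calculation matches the reported figures (precision 1.0, recall 0.7500 give 0.8571).

```python
import math

def knn_neighbors(train, query, k):
    # K-NN stage: Euclidean distance to every training vector; keep the k closest.
    ranked = sorted(train, key=lambda xy: math.dist(xy[0], query))
    return ranked[:k]

def knb_classify(train, query, k=5):
    """Hypothetical hybrid K-NN + Gaussian NB: fit per-class Gaussians
    on the k-neighbor shortlist and pick the highest-scoring class."""
    neighbors = knn_neighbors(train, query, k)
    by_class = {}
    for x, y in neighbors:
        by_class.setdefault(y, []).append(x)
    best, best_score = None, -math.inf
    for label, vecs in by_class.items():
        # Class prior estimated from the neighborhood, not the full set.
        score = math.log(len(vecs) / len(neighbors))
        for d in range(len(query)):
            vals = [v[d] for v in vecs]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-6  # smoothed
            score += -0.5 * math.log(2 * math.pi * var) \
                     - (query[d] - mu) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = label, score
    return best

def f_measure(precision, recall):
    # Harmonic mean of precision and recall, as used in the evaluation.
    return 2 * precision * recall / (precision + recall)
```

With precision 1.0 and recall 0.7500, `f_measure(1.0, 0.75)` evaluates to about 0.8571, consistent with the abstract.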