RANSAC Algorithm for Matching Inlier Correspondences in Video Stabilization
To stabilize a video sequence, we need a transformation that reduces the distortion between frames. Finding this transformation requires identifying feature points in consecutive frames. Correspondences between these feature points are obtained using the Sum of Squared Differences (SSD) as the matching cost, but this technique produces many point correspondences of limited accuracy. To address this problem, the Random Sample Consensus (RANSAC) algorithm is applied, as implemented in the Geometric Transform function in MATLAB. Using RANSAC, a robust estimate of the transformation between consecutive video frames can be derived.
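The SSD matching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`ssd`, `match_features`) and the greedy nearest-neighbour strategy are assumptions, and in practice the patches would be windows extracted around detected feature points.

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of Squared Differences between two equally sized image patches."""
    d = patch_a.astype(np.float64) - patch_b.astype(np.float64)
    return float(np.sum(d * d))

def match_features(patches_a, patches_b):
    """Greedy matching: pair each patch in frame A with the patch in
    frame B that has the lowest SSD cost.
    Returns a list of (index_a, index_b) correspondences."""
    matches = []
    for i, pa in enumerate(patches_a):
        costs = [ssd(pa, pb) for pb in patches_b]
        matches.append((i, int(np.argmin(costs))))
    return matches
```

As the text notes, such SSD matches are numerous but unreliable, which is why an outlier-rejection step such as RANSAC is needed afterwards.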
In this paper, the RANSAC algorithm is used to find reliable inlier correspondences, from which the affine transformation mapping the inliers between consecutive video frames is derived. This transformation aligns the image planes of the two frames. The RANSAC algorithm is repeated multiple times, and at each run the cost of the result is evaluated via the Sum of Absolute Differences (SAD) between the two image frames. SAD measures the distortion between two frames by evaluating the similarity between image blocks. On the basis of the SAD values, the affine transform is chosen so that the inliers from the first set of points match the inliers from the second set. Simulation results show that the inlier correspondences become almost exactly coincident, which gives more favorable results, and the centers of the images are generally well aligned. Thus, by utilizing the RANSAC algorithm, a robust estimate of the transformation is obtained.
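The core RANSAC loop described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's exact method: the sample size (3 correspondences, the minimum for an affine model), the iteration count, the inlier tolerance, and the function names are all choices made here for clarity, and the SAD function stands in for the block-based frame comparison the text describes.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of Absolute Differences between two equally sized image blocks."""
    return float(np.sum(np.abs(block_a.astype(np.float64) -
                               block_b.astype(np.float64))))

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # shape (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to an array of 2-D points."""
    return pts @ M[:, :2].T + M[:, 2]

def ransac_affine(src, dst, n_iter=500, tol=2.0, seed=0):
    """Repeatedly fit an affine model to a minimal random sample of
    3 correspondences and keep the model with the largest consensus
    (inlier) set; refit on all inliers for the final estimate."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(M, src) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

In a full stabilization pipeline, each candidate transform would additionally be scored by warping one frame toward the other and accumulating the SAD over image blocks, retaining the transform with the lowest total cost.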