Evaluation of Suitability of Voice Reading of Al-Qur'an Verses Based on Tajwid Using Mel Frequency Cepstral Coefficients (MFCC) and Normalization of Dominant Weight (NDW)
The recitation of the Qur'an is unique in that it is governed by special rules of reading and pronunciation known as the science of tajwid. When the Qur'an is recited, mistakes frequently occur because of limited knowledge of tajwid. A tool that checks the suitability of a recitation is therefore much needed by reciters whose understanding of tajwid is limited, and checking Qur'an recitation against these rules is the problem to be solved. Previous voice-identification studies have struggled with feature extraction, conformity (suitability) testing, and accuracy. This study addresses those issues in two stages. The first stage extracts the acoustic characteristics of Qur'an recitation; the second tests the conformity of the recitation and measures accuracy. In the first stage, feature extraction is performed using Mel Frequency Cepstral Coefficients (MFCC) and Normalization of Dominant Weight (NDW). The reference table of recitation characteristics is taken from one reciter of the Qur'an who is competent in the science of tajwid, while samples for testing are taken from 5-7 people. The second stage, conformity testing of the Qur'an recitation, proceeds from filtering through sequential multiplication with the reference table to the Conformity Uniformity Pattern (CUP). The conformity-test samples are taken from 11 surahs (chapters) of the Qur'an containing 8 recitation rules, for a total of 886 records. Tests are performed on the number of dominant frames, the number of cepstral coefficients, and the number of frames. The conformity test achieves an average accuracy of 91.37% on nine dominant frames.
Testing the number of cepstral coefficients yields an average of 96.65% at c-23, while testing the number of frames yields its best average, also 96.65%, at F-10.

Keywords: voice; reading; Al-Qur'an; MFCC; suitability; feature extraction; reference table
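The first-stage pipeline described in the abstract can be sketched in Python using NumPy. The MFCC steps (pre-emphasis, framing, Hamming window, FFT power spectrum, mel filterbank, log, DCT) follow the standard formulation; the `ndw` function is only a hypothetical illustration, assuming NDW scales each frame by its dominant (largest-magnitude) cepstral coefficient, since the abstract does not give the exact NDW formula. All frame sizes and coefficient counts below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_mels=26, n_ceps=13):
    """Standard MFCC extraction: one row of cepstral coefficients per frame."""
    # Pre-emphasis to boost high frequencies
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank, linearly spaced on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel-filterbank energies, then DCT-II to decorrelate
    feat = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return feat @ dct.T          # shape: (n_frames, n_ceps)

def ndw(ceps):
    """Hypothetical NDW: divide each frame by its dominant-magnitude
    coefficient, so every frame's values lie in [-1, 1]. This is an
    assumption about the method, not the paper's published formula."""
    dom = np.max(np.abs(ceps), axis=1, keepdims=True)
    return ceps / np.where(dom == 0, 1.0, dom)
```

For example, one second of audio at 16 kHz with these defaults produces 98 frames of 13 coefficients each; after `ndw`, each frame's dominant coefficient has magnitude 1, which makes frame-by-frame comparison against a reference table scale-independent.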