Visual Interface to Speech-Cue Representation Coding
There have been significant efforts to develop automated instrumentation systems for speech recognition (AISR) that provide two-way communication between deaf and vocal people. However, the performance achievable with the output of current real-time speech recognition systems is extremely poor relative to normal speech reception. An alternative application of AISR technology to aid the hearing impaired would derive cues from the acoustic speech signal that could be used to supplement speechreading. We propose a study of highly trained receivers of the speech signal, which indicates that nearly perfect reception of everyday connected speech material can be achieved at near-normal speaking rates, in order to understand the accuracy that might be reached with automatically generated cue symbols for visual representation. The system uses Hidden Markov Models (HMMs) for recognition of voiced data and a Euclidean-distance approach for sign language. The proposed task complements ongoing research on converting the finger movements of a vocally disabled person into a speech signal, a new communication paradigm called "Action-to-Speech".
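The Euclidean-distance approach mentioned above can be illustrated with a minimal nearest-template classifier: an observed gesture feature vector is matched to the stored sign template at the smallest Euclidean distance. This is a hedged sketch, not the authors' implementation; the feature vectors, sign labels, and template values below are hypothetical placeholders.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_sign(observation, templates):
    """Return the label of the stored template closest to the observation."""
    return min(templates, key=lambda label: euclidean_distance(observation, templates[label]))

# Hypothetical sign templates: each label maps to a mean feature vector
# (e.g. normalized finger-position measurements from an instrumented glove).
templates = {
    "hello":  [0.9, 0.1, 0.3],
    "thanks": [0.2, 0.8, 0.5],
}

print(classify_sign([0.85, 0.15, 0.25], templates))  # -> hello
```

In practice the feature vectors would come from the glove or visual front end, and the HMM stage would handle the temporal (voiced) data, while this distance measure resolves static hand configurations.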