Projectwale, Opp. DMCE, Airoli, Sector 2
projectwale@gmail.com

Conversion of Sign Language into Text

ABSTRACT:-

 

This article provides a performance analysis of various techniques that have been used to convert sign language into text, and introduces a new system to help people with speech and hearing impairments communicate. It covers both sign language recognition and speech-to-text conversion. The proposed algorithm is able to extract gestures from video sequences with a dynamic and minimally crowded background using skin color segmentation. It distinguishes between static and dynamic gestures and extracts the appropriate feature vector, which is classified using support vector machines. Speech recognition is based on a standard engine (Sphinx). Experimental results show satisfactory segmentation on different backgrounds and relatively high gesture and speech recognition accuracy.

Of the world population of over 7.6 billion people, about 47 million are mute or have a speech impediment. For expression, these people therefore rely on eye contact and non-verbal communication, which includes gestures. This document provides a simple technique for translating the hand gestures of people with speech disabilities into accurate voice and text messages. Hand gestures are captured as input by flex sensors and processed by a microcontroller, which matches them against a predefined stored database to deliver the desired message. This message is then displayed on the LCD screen, and the corresponding audio signal stored on the Secure Digital card is amplified and transmitted to the speaker to ensure the audio message has good sound quality.

 

SYSTEM:-

 

The user will position their hand in front of the camera, and the system will capture a video of their hand gestures.

The system will extract individual frames from the video and pass them to the preprocessing module.

The preprocessing module will resize, crop, and normalize the input images before passing them to the CNN module.

The CNN module will recognize the hand gestures in the input images and map them to the appropriate text labels.

The text output module will generate the corresponding text output from the recognized hand gestures.

The user interface module will present the text output to the user in an intuitive and user-friendly manner.
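The steps above can be sketched end to end in Python. Everything here is illustrative: the frame capture, the 64x64 input size, the three-letter label set, and the dummy model are assumptions standing in for the project's camera feed and trained CNN.

```python
import numpy as np

LABELS = ["A", "B", "C"]  # placeholder label set; the real system would cover the full alphabet

def preprocess(frame, size=(64, 64)):
    """Crop the frame to a centred square, resize it, and normalise pixels to [0, 1]."""
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = frame[top:top + side, left:left + side]
    # Nearest-neighbour resize via index selection (a real system would use cv2.resize).
    ys = np.arange(size[0]) * side // size[0]
    xs = np.arange(size[1]) * side // size[1]
    resized = crop[ys][:, xs]
    return resized.astype(np.float32) / 255.0

def recognise(frame, model):
    """Run one preprocessed frame through the model and return its text label."""
    x = preprocess(frame)[np.newaxis, ...]   # add batch dimension
    scores = model(x)                        # model returns per-class scores
    return LABELS[int(np.argmax(scores))]

# Dummy "model" that always votes for class 1 ("B"), to show the data flow only.
dummy_model = lambda x: np.array([[0.1, 0.8, 0.1]])
frame = np.zeros((120, 160, 3), dtype=np.uint8)   # stand-in for a captured camera frame
print(recognise(frame, dummy_model))               # prints "B"
```

In a real deployment the frame would come from the camera capture loop and `model` would be the trained CNN described in the modules below.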

 

PROPOSED SYSTEM:-

 

Our proposed system is a sign language recognition system using convolutional neural networks, which recognises various hand gestures by capturing video and converting it into frames. The hand pixels are then segmented, and the resulting image is compared against the trained model. Thus, our system is more robust in producing exact text labels for letters.
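The hand-pixel segmentation step could be sketched as below. This is a minimal illustration using the classic rule-based RGB skin thresholds; the project may well use a different colour model (e.g. HSV or YCrCb), so treat the thresholds as assumptions.

```python
import numpy as np

def skin_mask(frame):
    """Return a boolean mask that is True where a pixel looks skin-coloured.

    Uses widely cited RGB rules: R > 95, G > 40, B > 20, enough spread
    between channels, and R dominant over G and B.
    """
    r = frame[..., 0].astype(np.int32)
    g = frame[..., 1].astype(np.int32)
    b = frame[..., 2].astype(np.int32)
    spread = frame.max(axis=-1).astype(np.int32) - frame.min(axis=-1)
    return ((r > 95) & (g > 40) & (b > 20) &
            (spread > 15) & (np.abs(r - g) > 15) & (r > g) & (r > b))

# A 2x2 test image: one skin-like pixel, three clearly non-skin pixels.
img = np.array([[[200, 120, 90], [0, 0, 255]],
                [[30, 30, 30], [255, 255, 255]]], dtype=np.uint8)
print(skin_mask(img))  # only the top-left pixel is True
```

The mask would then be used to crop the hand region out of each frame before it is passed to the CNN.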

 

MODULES:-

 

  • Input Module: This module is responsible for capturing the hand gestures of the sign language user using a camera. The captured video will be processed by the system to extract individual frames of hand gestures, which will be passed to the CNN for recognition.
  • Preprocessing Module: This module is responsible for preprocessing the image frames before they are fed into the CNN. This includes tasks such as resizing, cropping, and normalization to ensure that the input images are of uniform size and quality.
  • CNN Module: This module is the core of the system and is responsible for recognizing the hand gestures and mapping them to the appropriate text labels. The CNN will be trained on a dataset of hand gesture images and their corresponding text labels.
  • Text Output Module: This module is responsible for generating the text output from the recognized hand gestures. Once the CNN has recognized a hand gesture, the corresponding text label will be passed to this module, which will generate the corresponding text output.

 

  • User Interface Module: This module is responsible for presenting the text output to the user in an intuitive and user-friendly manner. This can be done through a text display or through text-to-speech conversion.
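A possible shape for the CNN Module, written with Keras (which ships with the Anaconda/Python stack listed below), might look like this. The layer sizes, the 64x64 input, and the 26-letter label set are assumptions for illustration, not the project's actual architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26  # assumed label set, e.g. the letters A-Z

def build_model(input_shape=(64, 64, 3)):
    """Small convolutional classifier mapping a hand-gesture image to a letter."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # low-level edge/shape features
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # higher-level gesture features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per letter
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then call `model.fit` on the preprocessed gesture frames and integer letter labels, and the Text Output Module would map `argmax` of the softmax scores back to a character.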

 

 

 

APPLICATION:-

The application will use a camera to capture the hand gestures of a sign language user and convert them into text using a CNN. The application will be designed for ease of use and accessibility, allowing individuals with hearing or speech impairments to communicate more easily and effectively using sign language.

 

HARDWARE AND SOFTWARE REQUIREMENTS:-

 

HARDWARE:-

1. Processor: Intel i3 or above

2. RAM: 6 GB or more

3. Hard disk: 160 GB or more

 

SOFTWARE:-
  • Operating System: Windows 7/8/10
  • Python
  • Anaconda
  • Jupyter Notebook
  • Flask framework
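Since Flask is part of the stack, the recognizer could be exposed to the user interface as a small web endpoint. The `/predict` route, its JSON shape, and the placeholder recognizer below are hypothetical, not the project's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def recognise_gesture(frame_bytes):
    # Placeholder for the CNN pipeline: a real handler would decode the
    # image bytes, preprocess the frame, and run the trained model.
    return "A"

@app.route("/predict", methods=["POST"])
def predict():
    """Accept a raw image in the request body and return the predicted letter."""
    label = recognise_gesture(request.data)
    return jsonify({"label": label})

# To serve locally: app.run(debug=True)
```

A client (or the UI module) would POST a captured frame to `/predict` and display or speak the returned label.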
