Sign Language Translator for Dumb and Deaf
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2023 10th International Conference on Soft Computing & Machine Intelligence (ISCMI), 2024, 00, pp. 198-202
- Issue Date:
- 2024-03-14
Closed Access
Filename | Description | Size
---|---|---
1718425.pdf | Published version | 414.88 kB
This item is closed access and not available.
The detection of sign language for the deaf and dumb community is a challenging task due to the complex and variable nature of signing gestures. One problem that can arise in this task is overfitting, where the model memorizes the training data rather than generalising patterns from it, which can lead to underwhelming results on new, unseen data. A common way to increase model capacity is to add more hidden units to the neural network, but this can exacerbate overfitting. To address this problem, we propose combining L2 regularisation with convolutional neural network and long short-term memory (LSTM) models using ReLU activation functions. This approach enables the model to learn complex temporal patterns, while the final dense layers perform the classification; the output layer's probabilities are computed by the softmax activation function. L2 regularisation penalises larger weights in the model, promoting the discovery of simpler patterns in the data. Using this technique, we can increase the capacity of the model without overfitting, leading to better generalisation performance on new data. Our proposed method has the potential to increase the precision of sign language detection, allowing the deaf and dumb to better interact with the hearing world.
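The core idea in the abstract, an L2 penalty added to the classification loss so that large weights are discouraged, can be sketched in plain NumPy. This is a minimal illustration of the loss term only, not the authors' CNN–LSTM implementation; the function names and the penalty coefficient `lam` are assumptions for the sake of the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def l2_regularized_loss(logits, labels, weights, lam=0.01):
    """Cross-entropy loss plus an L2 penalty on the weight matrices.

    The extra term lam * sum(w**2) grows with the magnitude of the
    weights, so minimizing the total loss pushes the model toward
    smaller weights and simpler patterns, reducing overfitting.
    (lam is a hypothetical hyperparameter; the paper does not state one.)
    """
    probs = softmax(logits)
    n = logits.shape[0]
    cross_entropy = -np.log(probs[np.arange(n), labels]).mean()
    l2_penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return cross_entropy + l2_penalty
```

With `lam=0.0` the function reduces to ordinary softmax cross-entropy; increasing `lam` raises the loss in proportion to the squared weight magnitudes, which is exactly the pressure toward smaller weights the abstract describes.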