Myo: Main Page



Description

Here, Master's students develop software that uses a special armband (MYO) to realize gesture recognition for sign language.

Gesture recognition with the help of an armband sensor.

Goals

1. Use machine learning algorithms to identify hand gestures.

2. Map the identified gestures onto a set of predefined language constructs.

Project Team


Project Status

  • We narrowed the general problem of gesture recognition down to a controlled real-world task: recognizing digit gestures.
  • We started with digit recognition: the subject wears the band and performs a specific digit gesture. Our assumption is that, once this task is solved, the same approach can be extended to letters and other, more complex gestures.
  • We have conducted experiments and collected data from 16 people so far. This data is used to generate models and to analyze which feature set characterizes each gesture (a feature-extraction sketch follows this list).
  • For this we have utilized and analyzed four classical machine learning approaches (Hidden Markov Models, HMM; Support Vector Machines, SVM; Naive Bayes, NB; k-Nearest Neighbours, KNN) and one artificial neural network approach (Long Short-Term Memory, LSTM); sketches of both follow this list.
  • Created two sets of training instances:
    • One with 10 instances per class
    • One with 20 instances per class
  • Evaluated models using the following algorithms (an evaluation sketch follows this list):
    • HMM on raw data
    • HMM on windowed features
    • Naive Bayes
    • KNN (1 neighbour)
    • SVM (parameters tuned via grid search)
  • Analysed the accuracy, precision, and F-score of all models across all folds.
  • Analysed the features to decide which are insignificant and can be eliminated, using two visualization techniques (sketched after this list):
    • Parallel Coordinates
    • Andrews Curves
  • We have also projected the 3-D real-world changes of the armband's spatial coordinates during a gesture onto 2-D screen coordinates, which can then be plotted as images of the 2-D motion (a projection sketch follows this list).
  • These images can in turn be fed to a variety of neural networks, such as Convolutional Neural Networks (CNNs), for identification and classification tasks.
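
The windowed features mentioned above are not spelled out on this page. As a rough illustration, the following Python sketch computes two common sliding-window EMG features (mean absolute value and root mean square) over the eight EMG channels the Myo provides; the window and step sizes are assumptions, not the project's actual parameters.

  import numpy as np

  def windowed_features(emg, window=40, step=20):
      # Slide a fixed-size window over a (samples x channels) EMG recording
      # and compute per-channel features in each window.
      feats = []
      for start in range(0, len(emg) - window + 1, step):
          w = emg[start:start + window]
          mav = np.mean(np.abs(w), axis=0)        # mean absolute value
          rms = np.sqrt(np.mean(w ** 2, axis=0))  # root mean square
          feats.append(np.concatenate([mav, rms]))
      return np.array(feats)  # shape: (n_windows, 2 * channels)

  # Example: 2 s of fake 8-channel EMG at the Myo's 200 Hz sampling rate
  X = windowed_features(np.random.randn(400, 8))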
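
The per-fold evaluation of the non-sequential classifiers can be sketched with scikit-learn as follows. The classifier settings (Gaussian Naive Bayes, 1-nearest-neighbour, SVM tuned by grid search) follow the list above; the synthetic data and the concrete parameter grid are only illustrative.

  from sklearn.datasets import make_classification
  from sklearn.model_selection import GridSearchCV, StratifiedKFold
  from sklearn.naive_bayes import GaussianNB
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.svm import SVC
  from sklearn.metrics import accuracy_score, precision_score, f1_score

  # Stand-in for the real feature vectors and digit labels
  X, y = make_classification(n_samples=200, n_features=16, n_informative=10,
                             n_classes=10, random_state=0)

  models = {
      "Naive Bayes": GaussianNB(),
      "KNN (1 neighbour)": KNeighborsClassifier(n_neighbors=1),
      # SVM hyperparameters chosen via grid search, as in the list above
      "SVM": GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1]}),
  }

  for name, model in models.items():
      for train, test in StratifiedKFold(n_splits=5).split(X, y):
          model.fit(X[train], y[train])
          pred = model.predict(X[test])
          print(name,
                accuracy_score(y[test], pred),
                precision_score(y[test], pred, average="macro", zero_division=0),
                f1_score(y[test], pred, average="macro"))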
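
For the HMM variants, a common recipe is to train one HMM per digit class and label a new sequence with the class whose model assigns it the highest log-likelihood. The sketch below uses the hmmlearn package; the number of hidden states and the Gaussian emissions are assumptions, and whether it runs on raw data or windowed features only changes what the input sequences contain.

  import numpy as np
  from hmmlearn.hmm import GaussianHMM

  def train_hmms(sequences_by_class, n_states=5):
      # sequences_by_class maps a digit label to a list of (T x d) arrays
      hmms = {}
      for label, seqs in sequences_by_class.items():
          X = np.concatenate(seqs)          # hmmlearn wants stacked sequences
          lengths = [len(s) for s in seqs]  # ...plus their individual lengths
          hmms[label] = GaussianHMM(n_components=n_states, n_iter=20).fit(X, lengths)
      return hmms

  def classify(hmms, seq):
      # Highest log-likelihood wins
      return max(hmms, key=lambda label: hmms[label].score(seq))

  # Example with fake 4-dimensional feature sequences for two digits
  rng = np.random.default_rng(0)
  data = {d: [rng.normal(d, 1, (30, 4)) for _ in range(10)] for d in (0, 1)}
  print(classify(train_hmms(data), rng.normal(1, 1, (30, 4))))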
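
The LSTM, in contrast to the classical models, is trained directly on fixed-length sequences. A minimal Keras sketch, where the sequence length, layer width, and the ten digit classes are assumptions:

  from tensorflow import keras

  # Sequences of 8-channel samples, padded/truncated to a fixed length,
  # classified into 10 digit classes
  model = keras.Sequential([
      keras.layers.Input(shape=(400, 8)),  # (timesteps, channels)
      keras.layers.LSTM(64),
      keras.layers.Dense(10, activation="softmax"),
  ])
  model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
  # model.fit(X_train, y_train, epochs=30, validation_split=0.2)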
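
Both feature-inspection plots named above ship with pandas, so this analysis needs little custom code. A tiny sketch with hypothetical feature columns:

  import matplotlib.pyplot as plt
  import pandas as pd
  from pandas.plotting import andrews_curves, parallel_coordinates

  # One row per instance: feature columns plus the digit label
  df = pd.DataFrame({"mav_ch1": [0.2, 0.8, 0.3],
                     "rms_ch1": [0.3, 0.9, 0.2],
                     "digit":   [0, 1, 0]})

  parallel_coordinates(df, "digit")  # one polyline per instance
  plt.figure()
  andrews_curves(df, "digit")        # one Fourier-series curve per instance
  plt.show()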
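
The 3-D-to-2-D conversion is described only at a high level above. One simple realization (an assumption, not necessarily the project's actual mapping) is an orthographic projection of the armband trajectory onto the screen plane, rasterized into a small image that a CNN can consume:

  import numpy as np
  import matplotlib.pyplot as plt

  def trajectory_to_image(xyz, path):
      # Orthographic projection: drop the depth axis, keep x and y
      xy = xyz[:, :2]
      fig, ax = plt.subplots(figsize=(2, 2))
      ax.plot(xy[:, 0], xy[:, 1], "k-")
      ax.axis("off")
      fig.savefig(path, dpi=32)  # small input image for a CNN
      plt.close(fig)

  # Example: a roughly circular gesture trajectory
  t = np.linspace(0, 2 * np.pi, 100)
  trajectory_to_image(np.stack([np.cos(t), np.sin(t), t], axis=1), "digit0.png")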


  • We now aim to wrap up these results in an application that can capture a fixed set of gestures in real time, analyze them, and classify them (see the sketch below this item). Once this is done, we can give a live demo/presentation of our results so far and continue working towards more complex, higher-order gestures.
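
As a sketch of such an application under assumed interfaces (read_emg_sample standing in for the actual Myo SDK call and classify_window for one of the trained models above, both hypothetical), a real-time loop could keep a sliding buffer of the most recent samples and classify it continuously:

  from collections import deque
  import numpy as np

  def run(read_emg_sample, classify_window):
      # ~2 s of history at the Myo's 200 Hz EMG sampling rate
      buffer = deque(maxlen=400)
      while True:
          buffer.append(read_emg_sample())  # hypothetical Myo SDK call
          if len(buffer) == buffer.maxlen:
              digit = classify_window(np.array(buffer))
              print("recognized digit:", digit)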

Internal Documents

The further pages on this project linked here are readable only by logged-in SWLab participants.
