LSTM-Based EMG Controller for Serious Game Application

Academic Year 2019/2020 – EMG Project

Students: De Martini Alessandro, Koszoru Kristóf, Dal Mas Massimiliano.
Supervisor: Nicola Covre

Abstract

Using a bracelet equipped with EMG sensors, it is possible to create a clinical rehabilitation program targeting the arm muscles. With machine learning tools, it should be possible to evaluate how a person performs gestures in specific cases for medical assessment.

This paper presents an initial study in this field, in which a virtual helicopter model is controlled through an EMG bracelet connected to a neural network for gesture classification, focusing on the network training and on the connection between MATLAB and Unity.

Introduction

One field of rehabilitation focuses on tracking muscle signals in real time, in order to analyze muscle activation and design an appropriate rehabilitation program. The idea of this project is to combine an EMG bracelet, composed of eight sensors that capture the surface signals of muscle activation, with a game. In this way a patient can be invited to perform specific gestures to trigger specific events in the software.

As a first step of the research, the idea is to define a set of predetermined hand gestures and train a neural network model to classify them. In a second phase, a connection between MATLAB and Unity is established through the ZMQ interface, so that the classification output of the neural network controls a virtual helicopter model built in Unity.

Myo Armband and LSTM Neural Net

This project focused on controlling a helicopter model through hand and arm gestures. To capture the gestures a Myo bracelet was used. It is composed of eight sensors that capture the surface muscle signals, plus several accelerometers. Only the EMG signals were used, since the accelerometers are not reliable for this purpose.

To recognize the six gestures chosen for this project (relax, fist, open hand, left, right and down), a deep neural network was used. The implementation exploits seven layers of neurons, listed below and sketched in code after the list:

  • Input layer: receives as input the eight channels provided by the sensors.
  • LSTM layer: applies the Long Short-Term Memory model which, given a continuous flow of data, allows the information carried by each new input to flow through, be kept in memory, or be discarded, taking past inputs into account.
  • FullyConnected layer: increases the number of outputs, widening the representation.
  • Dropout layer: randomly drops neurons with probability 0.3 during training, to prevent overfitting.
  • FullyConnected layer: reduces the number of outputs to the number of classes.
  • Softmax layer: converts the outputs into a probability for each class.
  • Classification layer: given the previous output, returns the class with the highest probability.
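
A minimal sketch of this stack using MATLAB's Deep Learning Toolbox layer functions is shown below; the number of LSTM hidden units and the width of the first fully connected layer are not stated here and are assumptions:

```matlab
% Sketch of the seven-layer architecture described above (Deep
% Learning Toolbox). The hidden size (100) and the width of the
% first fully connected layer (200) are illustrative assumptions.
numChannels = 8;   % one input channel per EMG sensor
numClasses  = 6;   % relax, fist, open hand, left, right, down

layers = [
    sequenceInputLayer(numChannels)        % 8 raw EMG channels
    lstmLayer(100, 'OutputMode', 'last')   % keep/discard info over time
    fullyConnectedLayer(200)               % widen the representation
    dropoutLayer(0.3)                      % drop neurons with p = 0.3
    fullyConnectedLayer(numClasses)        % reduce to the 6 classes
    softmaxLayer                           % per-class probabilities
    classificationLayer];                  % pick the most likely class
```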

Dividing the signal into windows of ten samples per sensor, the training reached an accuracy of 98%.

MATLAB side

How can data from the Myo be collected and analyzed in practice? Gesture recognition relies on the analysis of the muscles' electrical signals; for this purpose, three different MATLAB scripts were used.

In order to develop a system that can analyze different gestures, it is necessary to access the raw data of each individual EMG sensor of the Myo armband. Once the bracelet is connected to the computer via Myo Connect, a link with MATLAB is established by installing the MyoMex [2] library, which gives access to the armband's current raw data.
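
As an illustration of the acquisition step, the sketch below follows our reading of the MyoMex interface (a MyoMex object exposing a myoData handle with an emg_log buffer); the exact property and method names should be verified against the installed version of the library:

```matlab
% Hedged sketch of raw EMG acquisition through MyoMex [2]; property
% and method names (myoData, emg_log, clearLogs) follow our reading
% of the library's README and may differ in other versions.
mm = MyoMex(1);      % connect to one Myo (Myo Connect must be running)
m  = mm.myoData;     % handle to the streaming data source
m.clearLogs();       % start from an empty buffer
pause(0.2);          % ~40 samples accumulate at ~200 Hz
emg = m.emg_log;     % N-by-8 matrix of raw EMG samples
mm.delete();         % release the device when done
```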

The first difficult part of this work was selecting the features for the Unity program, i.e., choosing gestures that produce clearly distinct sensor readings. For this purpose, several gestures were analyzed using a website provided by Myo [1]. In the end, six different actions were classified and used, as shown in the image below.


Moreover, working with the data it is possible to plot ten samples of raw EMG data for each of the eight sensors, using a different color per sensor. The acquisition frequency is about 200 Hz. These figures make it evident whether the actions are distinguishable: for example, the actions "right" and "left" show completely opposite muscle activations. The more the actions differ, the more effectively the neural network works.
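
A minimal sketch of such a plot, assuming emg is an N-by-8 matrix of raw samples acquired as above:

```matlab
% Plot ten raw EMG samples for each of the eight sensors; MATLAB
% assigns a different color to each column automatically.
figure;
plot(emg(1:10, :));
xlabel('Sample index (~5 ms apart at 200 Hz)');
ylabel('Raw EMG amplitude');
legend(compose('Sensor %d', 1:8));
title('Ten raw EMG samples per sensor');
```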

The data can also be displayed in two dimensions as clusters. These figures compare two different sensors and are very useful for understanding the correlations between sensors, or for checking whether the data were acquired correctly. In the figures below, on the left, there is the cluster of data acquired by two sensors with some correlation; on the right, there is a cluster without correlation, since the data were acquired incorrectly. The dot colors represent the different actions.
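
A sketch of one such cluster plot; labelIdx is a hypothetical vector assigning an action index to every sample:

```matlab
% Scatter the simultaneous readings of two sensors against each
% other, colored by the performed action (labelIdx is hypothetical).
figure;
scatter(emg(:, 1), emg(:, 2), 10, labelIdx, 'filled');
xlabel('Sensor 1'); ylabel('Sensor 2');
title('Two-sensor cluster of raw EMG samples, colored by action');
```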

The neural network was implemented using MATLAB's machine learning toolbox. The network requires a specific data organization: all the data were arranged in cell arrays and split into training and test sets. Each cell contains n vectors of 10 samples each, where n equals the total amount of acquired data divided by the number of samples per vector (10 in this case). In this way, action classification is not performed sample by sample (a difficult task, more sensitive to noise) but vector by vector, keeping a track of the action history. At 200 Hz, a 10-sample vector corresponds to 50 ms of acquisition. This structure reached a 98.4% accuracy on the test set; the confusion matrix is shown on the right.
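
A sketch of this data layout and of the training call follows; the variable names (emg, labels) and the training options are illustrative assumptions:

```matlab
% Split the acquisition into 10-sample windows: each cell holds one
% 8-by-10 matrix (8 sensors, 10 consecutive samples = 50 ms at 200 Hz).
winLen = 10;
nWin   = floor(size(emg, 1) / winLen);   % n = total samples / 10
XTrain = cell(nWin, 1);
for k = 1:nWin
    rows = (k - 1) * winLen + (1:winLen);
    XTrain{k} = emg(rows, :)';           % transpose to channels-by-time
end
YTrain = categorical(labels(1:nWin));    % one gesture label per window

% 'layers' is the stack sketched earlier; the options are assumptions.
opts = trainingOptions('adam', 'MaxEpochs', 50, 'Shuffle', 'every-epoch');
net  = trainNetwork(XTrain, YTrain, layers, opts);
```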

The last script performs the real-time action recognition. Once the ZMQ protocol [6] is started, the script enters an infinite loop in which it acquires and classifies each action. Each acquisition lasts 200 ms, which at 200 Hz guarantees a minimum of forty samples; the selected action is the mode of all the classifications in the window. Moreover, a sliding-window method preserves real-time behavior while keeping a classification of ten samples per sensor in each cell.
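
A sketch of this loop, under the same assumptions as above; sendAction is a hypothetical helper that publishes the label over ZMQ (see the next section):

```matlab
% Real-time recognition: every 200 ms the new samples are split into
% 10-sample windows, each window is classified, and the mode of the
% predictions is taken as the current action.
while true
    m.clearLogs();                       % drop previously seen samples
    pause(0.2);                          % ~40 new samples at ~200 Hz
    emg  = m.emg_log;
    nWin = floor(size(emg, 1) / 10);
    preds = cell(nWin, 1);
    for k = 1:nWin
        win      = emg((k - 1) * 10 + (1:10), :)';  % 8-by-10 window
        preds{k} = char(classify(net, win));
    end
    action = mode(categorical(preds));   % majority vote over windows
    sendAction(char(action));            % hypothetical ZMQ publish
end
```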

Game development with Unity

After the data collection and the neural network classification, the predicted action is sent to a virtual helicopter in a simulated environment using ZMQ (ZeroMQ) [6], an open-source asynchronous messaging library. The information flows in one direction only (from MATLAB to Unity), so the publisher-subscriber architecture provided by the library fulfills the needs.
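
The text does not specify which MATLAB ZMQ binding was used; the sketch below assumes the matlab-zmq package, whose zmq.core.* functions mirror the C API, with an illustrative port number:

```matlab
% Publisher side of the one-directional MATLAB -> Unity link.
ctx = zmq.core.ctx_new();
pub = zmq.core.socket(ctx, 'ZMQ_PUB');
zmq.core.bind(pub, 'tcp://*:5556');      % port 5556 is an assumption

% The sendAction helper used in the recognition loop above:
sendAction = @(action) zmq.core.send(pub, uint8(action));

% ... run the recognition loop, then clean up on exit:
zmq.core.close(pub);
zmq.core.ctx_term(ctx);
```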


For building the main scene, a publicly available free asset developed by SunCube [5] was used for the helicopter model. The aim of the user is to fly through red circles randomly placed in the 3D space and reach the highest score in the shortest possible time.


In case of connection failure, the model is able to autonomously guide itself back to the base station after 5 seconds of continuous inactivity of the control actions.

Conclusions

After a couple of months of research, a working prototype took shape, in which the deep learning classifier processes the signals of the bracelet in quasi real time and sends the predicted gestures through the ZMQ interface to the running helicopter simulation. The figure below shows the system in action: a "left" gesture performed by the user causes the corresponding directional change in the simulation.

References

[1] Myo Armband. MyoDiagnostics. http://diagnostics.myo.com.

[2] Mark-Toma. MyoMex. https://github.com/mark-toma/MyoMex, 2017.

[3] Donald A. Neumann. Kinesiology of the Musculoskeletal System: Foundations for Rehabilitation. Elsevier Health Sciences, 2013, p. 225.

[4] SunCube. Base-Helicopter-Controller. https://github.com/suncube/Base-Helicopter-Controller, 2016.

[5] SunCube. Base Helicopter Controller. Unity Asset Store. https://assetstore.unity.com/packages/tools/physics/base-helicopter-controller-40107, 2015.

[6] ZeroMQ library. https://zeromq.org.
